Campus Ambassador
- Executed strategic campus outreach campaigns to bridge the gap between academia and the corporate SaaS industry.
- Collaborated with cross-functional teams to drive brand awareness, successfully enrolling 70% of peers in company-led mentorship programs.
Cloud App Development and Maintenance Intern
- Developed a full-stack app using Vue.js, Node.js, Express.js, and Firebase, with real-time data sync across clients via Firebase.
- Integrated AI summarization (Gemini API), reducing note review time by 60% and enhancing usability.
Summer Research Intern
- Built a lightweight fine-tuning pipeline using LoRA and TinyLlama-1.1B for professional LinkedIn post generation, cutting computational requirements by 70% while maintaining 85% output correctness.
- Documented the approach as a replicable framework for building efficient, domain-specific text generators.
Backend Developer
- Engineered a Django-based backend architecture that accelerated API response time by 40%.
- Reduced database queries by 30% and deployed system on Vercel with 99.5% uptime.
Parameter-Efficient Fine-Tuning of Compact Language Models for Professional LinkedIn Post Generation
While generative AI has transformed content creation, professional platforms like LinkedIn demand a tone and context that general-purpose models often miss, or can only capture at a cost that is prohibitive at scale. In this research, we developed a lightweight pipeline using LoRA and TinyLlama-1.1B to solve this. By fine-tuning on a curated dataset of job-related content, we cut computational requirements by 70% while maintaining a high standard of output quality (85% correctness). The project offers a replicable framework for developers building efficient, domain-specific text generators.
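The low-rank update at the heart of LoRA can be sketched in plain NumPy. This is an illustrative toy, not the project's actual training code: the dimensions and initial values are made up, and a real run would use a library such as PEFT on the TinyLlama checkpoint.

```python
import numpy as np

def lora_forward(x, W, A, B, alpha=16, r=4):
    """Forward pass with a LoRA adapter: the frozen weight W is
    augmented by a low-rank update (alpha/r) * B @ A, so only
    A and B (r * (d_in + d_out) parameters) need training."""
    return x @ (W + (alpha / r) * (B @ A)).T

d_in, d_out, r = 8, 6, 2
rng = np.random.default_rng(0)
W = rng.normal(size=(d_out, d_in))      # frozen pretrained weight
A = rng.normal(size=(r, d_in)) * 0.01   # trainable down-projection
B = np.zeros((d_out, r))                # trainable up-projection, zero-initialised
x = rng.normal(size=(1, d_in))

# With B initialised to zero, the adapter starts as a no-op,
# so the adapted model initially matches the frozen base model:
assert np.allclose(lora_forward(x, W, A, B, r=r), x @ W.T)
```

The parameter saving is what drives the reported efficiency gain: here the adapter trains r * (d_in + d_out) = 28 values instead of the 48 in W, and the ratio improves dramatically at transformer scale.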
Efficient Reinforcement Learning for Autonomous Planetary Landing Tasks
Autonomous lunar landing is a notoriously difficult control challenge, requiring split-second decisions in uncertain terrain with strictly limited computational resources. Traditional control methods can struggle with these real-time demands, so this research investigates a more adaptive solution using Reinforcement Learning.

In this paper, I developed a Deep Q-Network (DQN) agent within the Gymnasium LunarLander-v3 environment to tackle these flight dynamics. Through extensive experimentation with neural architectures ranging from Tiny to Deep, I identified that a Wide network structure (128–128 layers) offered the optimal balance of performance and efficiency.

The Result: The final agent achieved a 93% landing success rate with an average reward of 262.89, outperforming the other configurations. This study shows that lightweight, well-tuned RL agents can handle complex control tasks effectively, providing a scalable blueprint for future applications in autonomous space robotics.
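The core of DQN training is the Bellman target that the online network regresses toward. A minimal sketch of that computation, with an illustrative toy batch (the numbers are made up, not taken from the actual experiments):

```python
import numpy as np

def dqn_targets(rewards, next_q, dones, gamma=0.99):
    """Bellman targets for a batch of transitions:
    y = r + gamma * max_a' Q_target(s', a'), zeroed at terminal states."""
    return rewards + gamma * (1.0 - dones) * next_q.max(axis=1)

# Toy batch: 3 transitions, 4 discrete actions (as in LunarLander).
rewards = np.array([1.0, -0.5, 100.0])
next_q = np.array([[0.2, 0.8, 0.1, 0.0],    # target-network Q-values at s'
                   [1.0, 0.0, 0.0, 0.0],
                   [0.0, 0.0, 0.0, 0.0]])
dones = np.array([0.0, 0.0, 1.0])           # third transition ends the episode

targets = dqn_targets(rewards, next_q, dones)
# Element-wise: [1.0 + 0.99*0.8, -0.5 + 0.99*1.0, 100.0] = [1.792, 0.49, 100.0]
```

In the full agent these targets supervise a mean-squared-error update of the online Q-network (the Wide 128–128 MLP), while the target network is refreshed periodically for stability.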
Mobile-Friendly Dog Breed Recognition Using ResNet-18 with Structured Pruning
Fine-grained image classification, like distinguishing a Siberian Husky from an Eskimo Dog, is notoriously difficult due to extreme visual similarity and limited data. In this study, I set out to determine the best approach for balancing high accuracy with the computational efficiency required for real-world apps.

I conducted a direct comparison between a custom CNN and ResNet-18 on the Stanford Dogs dataset. The pre-trained ResNet-18 proved significantly better at detecting subtle breed differences, achieving 80.82% accuracy compared to 66.46% for the custom architecture.

The Key Innovation: To make this feasible for mobile deployment, I optimized the ResNet model by pruning 30% of its channels. This reduced inference time to just 64 ms on a mobile CPU with less than a 1.5% drop in accuracy. The project demonstrates a practical, scalable path for building robust image classifiers that run on edge devices.
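The idea behind structured channel pruning can be sketched in NumPy: rank a convolution's output channels by weight norm and drop the weakest fraction. This is a simplified illustration, assuming L2-norm ranking; the actual project may have used a different criterion or a framework utility such as PyTorch's pruning module.

```python
import numpy as np

def prune_channels(weight, amount=0.3):
    """Structured pruning: rank the output channels of a conv weight
    (out_ch, in_ch, kH, kW) by L2 norm and drop the weakest `amount`."""
    out_ch = weight.shape[0]
    norms = np.sqrt((weight.reshape(out_ch, -1) ** 2).sum(axis=1))
    keep = max(1, int(round(out_ch * (1 - amount))))
    kept_idx = np.sort(np.argsort(norms)[-keep:])  # strongest channels, original order
    return weight[kept_idx], kept_idx

rng = np.random.default_rng(1)
w = rng.normal(size=(64, 3, 3, 3))   # a ResNet-18 first-conv-sized filter bank
pruned, kept = prune_channels(w, amount=0.3)
print(pruned.shape)   # (45, 3, 3, 3) — 30% of the 64 channels removed
```

Because whole channels are removed (rather than individual weights being zeroed), the pruned layer is genuinely smaller and faster on a mobile CPU, which is what enables the reported 64 ms inference time.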
Algorithmic Insights into the N-Queens Problem: A Comparative Study Across Scales
The N-Queens problem is a classic benchmark for constraint satisfaction, precisely because its difficulty explodes exponentially as the board size increases. In this study, I wanted to move beyond theory and stress-test four distinct algorithms (Depth-First Search, Hill Climbing, Simulated Annealing, and Genetic Algorithms) on board sizes up to 200×200.

The Findings: While exhaustive methods like DFS crumbled under resource demands and simple heuristics often got stuck in local optima, Simulated Annealing emerged as the clear winner. It delivered the most consistent performance with low memory usage, solving instances up to N=200. This research underscores that for large-scale optimization problems, choosing the right metaheuristic strategy is often more critical than raw computing power.
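A compact sketch of the winning approach, simulated annealing on N-Queens: perturb one queen at a time, always accept improvements, and accept worse moves with probability e^(-delta/T) under a geometric cooling schedule. The schedule parameters here are illustrative, not the ones benchmarked in the study.

```python
import math
import random

def conflicts(queens):
    """Number of attacking pairs; queens[c] = row of the queen in column c."""
    n = len(queens)
    return sum(1 for a in range(n) for b in range(a + 1, n)
               if queens[a] == queens[b] or abs(queens[a] - queens[b]) == b - a)

def solve_n_queens(n, t0=2.0, cooling=0.995, seed=0):
    rng = random.Random(seed)
    while True:                      # restart if a schedule cools without a solution
        queens = [rng.randrange(n) for _ in range(n)]
        cost, t = conflicts(queens), t0
        while t > 1e-3 and cost > 0:
            col, new_row = rng.randrange(n), rng.randrange(n)
            old_row = queens[col]
            queens[col] = new_row
            new_cost = conflicts(queens)
            # Always accept improvements; accept worse moves with prob e^(-delta/T).
            if new_cost <= cost or rng.random() < math.exp((cost - new_cost) / t):
                cost = new_cost
            else:
                queens[col] = old_row   # revert the move
            t *= cooling                # geometric cooling
        if cost == 0:
            return queens

solution = solve_n_queens(8)
print(conflicts(solution))   # → 0, a valid 8-queens placement
```

The occasional uphill move is exactly what lets annealing escape the local optima that trapped plain Hill Climbing, while its per-step memory footprint stays at a single board, consistent with the low memory usage observed up to N=200.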
Contact Form
Please contact me directly at zaidmohsin45@gmail.com or drop your info here.