Projects

AI-Enhanced E-commerce Assistant

Conversational AI

Built an AI-powered assistant to improve customer interactions in an e-commerce setting. The system handles customer queries, product questions, and contextual follow-ups using intelligent response generation. I designed a scalable data layer to keep the system reliable as usage grew, and worked on reducing response latency so conversations feel more natural. This project sharpened my skills in building customer-facing AI systems where usability and scale both matter.
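
One piece of the contextual follow-up handling can be sketched as trimming conversation history to a budget before generation. This is an illustrative, self-contained sketch, not the project's actual code; the message format and the word-count cost function are assumptions.

```python
def trim_history(messages, max_tokens=2000, count=lambda m: len(m["content"].split())):
    """Keep the most recent messages that fit the token budget, so follow-up
    questions still see relevant context without unbounded prompt growth.

    messages: list of {"role": ..., "content": ...} dicts, oldest first.
    count: stand-in cost function (word count here; a real tokenizer in practice).
    """
    kept, total = [], 0
    for msg in reversed(messages):       # walk newest-first
        cost = count(msg)
        if total + cost > max_tokens:
            break                        # older messages no longer fit
        kept.append(msg)
        total += cost
    return list(reversed(kept))          # restore chronological order
```

The newest-first walk means the most recent turns always survive, which is usually what keeps follow-ups coherent.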

Custom Named Entity Recognition Model

NLP · Model Fine-Tuning

Built and fine-tuned a domain-specific NER system to accurately extract specialized entities from unstructured text. I designed and curated a custom dataset for the task, fine-tuned a transformer-based model to handle domain terminology, and optimized training workflows to get strong performance on unseen data. This project demonstrates hands-on experience with dataset design, model fine-tuning, and NLP evaluation.
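
A representative step in preparing a custom NER dataset for transformer fine-tuning is aligning word-level BIO labels to subword tokens. The sketch below is illustrative and assumes a tokenizer that reports, for each subword, the index of its source word (as Hugging Face-style `word_ids` do); it is not the project's actual code.

```python
def align_labels(word_labels, word_ids):
    """Map word-level BIO labels onto subword tokens.

    word_ids: for each subword token, the index of the word it came from,
              or None for special tokens (e.g. [CLS]/[SEP]).
    The first subword of a word keeps its label; continuation subwords get
    the I- variant so entity spans stay contiguous; special tokens get -100
    so the loss function ignores them.
    """
    aligned = []
    prev_word = None
    for wid in word_ids:
        if wid is None:
            aligned.append(-100)                  # special token: ignored by loss
        elif wid != prev_word:
            aligned.append(word_labels[wid])      # first subword keeps the label
        else:
            label = word_labels[wid]
            aligned.append(label.replace("B-", "I-", 1))  # continuation: B-X -> I-X
        prev_word = wid
    return aligned
```

Getting this alignment right matters for evaluation: mislabeled continuation subwords silently fragment entity spans and depress span-level F1.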

Native-Language Voice Model

Speech AI · Generative Audio

Researched and fine-tuned a text-to-speech model to generate natural, expressive speech in a native language. I adapted a generative voice model to better capture pronunciation, intonation, and expressiveness specific to the target language, focusing on audio quality and conversational naturalness for voice-enabled AI systems. Evaluated outputs across clarity, expressiveness, and consistency metrics. This work supports my broader research in voice-first conversational AI.
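
The evaluation across clarity, expressiveness, and consistency can be sketched as aggregating per-axis listening-test scores, MOS-style. This is a hypothetical illustration; the 1-5 rating convention and function names are assumptions, not the actual evaluation harness.

```python
from statistics import mean

def summarize_ratings(ratings):
    """Aggregate listener judgments into a mean score per evaluation axis.

    ratings: list of {axis_name: score} dicts, one per listener judgment,
             e.g. {"clarity": 4, "expressiveness": 5}.
    Returns {axis_name: mean_score} rounded to two decimals.
    """
    axes = {}
    for judgment in ratings:
        for axis, score in judgment.items():
            axes.setdefault(axis, []).append(score)
    return {axis: round(mean(scores), 2) for axis, scores in axes.items()}
```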

End-to-End Sentiment Analysis System

Machine Learning · Production API

Designed and deployed a complete ML pipeline for large-scale sentiment classification. Built the full data-to-deployment workflow, trained and compared multiple models with systematic hyperparameter tuning, and developed a production-ready API supporting both real-time and batch inference. Added monitoring and health endpoints to keep things reliable in production. This project highlights experience in production ML — not just model training, but the whole system.
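
The serving layer's shape — real-time and batch entry points plus a health check — can be sketched as below. This is a minimal standalone illustration with a placeholder keyword model; class and method names are assumptions, not the deployed API.

```python
class SentimentService:
    """Wraps a classifier behind real-time and batch entry points plus a
    health check, mirroring the API surface described above."""

    POSITIVE = {"great", "love", "excellent"}
    NEGATIVE = {"bad", "hate", "terrible"}

    def predict(self, text: str) -> str:
        """Real-time path: classify a single text (toy keyword scoring here)."""
        words = set(text.lower().split())
        score = len(words & self.POSITIVE) - len(words & self.NEGATIVE)
        return "positive" if score > 0 else "negative" if score < 0 else "neutral"

    def predict_batch(self, texts: list[str]) -> list[str]:
        """Batch path: per-item calls here; a real model would batch these
        into one vectorized inference call for throughput."""
        return [self.predict(t) for t in texts]

    def health(self) -> dict:
        """Payload for a health endpoint polled by load balancers/monitoring."""
        return {"status": "ok", "model_loaded": True}
```

Keeping the batch path as a distinct entry point lets the server amortize model calls without changing the single-request contract.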

Experiments & Research

RAG System Optimization

Research

Working on making retrieval-augmented generation genuinely useful in practice. I've been comparing different RAG approaches — basic, hierarchical, hybrid — to see what works best for long documents, structured reports, and conversational use cases. I've also experimented with chunking strategies and embedding models to improve how well retrieved context grounds the final response. This research feeds directly into production RAG systems I build.
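
One of the chunking strategies compared in these experiments — fixed-size windows with overlap, so retrieved chunks keep context across boundaries — can be sketched as follows. The window and overlap sizes are arbitrary example values.

```python
def chunk_with_overlap(words, size=100, overlap=20):
    """Split a word list into overlapping windows of `size` words.

    Consecutive chunks share `overlap` words, so a sentence straddling a
    boundary still appears whole in at least one chunk.
    """
    if size <= overlap:
        raise ValueError("chunk size must exceed overlap")
    step = size - overlap
    chunks = []
    for start in range(0, len(words), step):
        chunks.append(words[start:start + size])
        if start + size >= len(words):
            break                        # last window already covers the tail
    return chunks
```

Hierarchical and hybrid variants change what gets embedded and retrieved, but this overlap knob alone measurably affects how well retrieved context grounds the answer.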

Multi-Agent Orchestration

Research

Exploring how multiple AI agents can work together on complex tasks instead of acting alone. I've been designing flows where agents collaborate, hand off subtasks, validate each other's work, and recover when things go wrong. Also experimenting with memory — both short and long-term — so agents can keep context across sessions. The goal is building agent systems that can handle real-world automation reliably.
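
The hand-off-and-validate pattern described above can be sketched as a small orchestration loop: one agent drafts, another validates, and the orchestrator retries with feedback on rejection. Agents are plain functions here for illustration; real agents would wrap LLM calls.

```python
def orchestrate(task, worker, validator, max_retries=2):
    """Run worker on the task, have validator check the result, and retry
    with the validator's feedback when the result is rejected.

    worker(task, feedback) -> result
    validator(result) -> (ok: bool, feedback: str)
    """
    feedback = None
    for _ in range(max_retries + 1):
        result = worker(task, feedback)      # hand-off: draft (with feedback, if any)
        ok, feedback = validator(result)     # hand-off: independent validation
        if ok:
            return result
    raise RuntimeError(f"no valid result after {max_retries + 1} attempts")
```

The recovery behavior falls out of the loop structure: failures produce feedback that shapes the next attempt rather than crashing the pipeline.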

LLM Fine-tuning & Evaluation

Research

Researching how to adapt large models for specialized tasks while keeping them generally capable. A big focus has been on LLM-based text-to-speech — where models predict audio as discrete tokens step by step, learning to generate speech that sounds natural and expressive. I'm studying how to bridge text and audio generation in a unified way, and building evaluation frameworks that measure naturalness, clarity, and how well the voice matches the intended meaning.
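
The unified text-and-audio idea can be illustrated with a toy shared id space: discrete audio (codec) tokens are offset past the text vocabulary so a single decoder predicts either modality step by step. Vocabulary sizes here are made-up examples, not the models under study.

```python
TEXT_VOCAB = 50_000      # assumed text vocabulary size
AUDIO_CODES = 1_024      # assumed codec codebook size

def to_unified(token_id, modality):
    """Map a modality-local id into the shared id space."""
    if modality == "text":
        assert 0 <= token_id < TEXT_VOCAB
        return token_id
    if modality == "audio":
        assert 0 <= token_id < AUDIO_CODES
        return TEXT_VOCAB + token_id     # audio ids sit after the text range
    raise ValueError(modality)

def from_unified(uid):
    """Invert the mapping: recover (modality, local id) from a shared id."""
    if uid < TEXT_VOCAB:
        return ("text", uid)
    return ("audio", uid - TEXT_VOCAB)
```

With one id space, "generate speech" reduces to ordinary next-token prediction, which is what makes text and audio generation trainable under a single objective.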