MS in CS @ UC Davis | AI/ML Engineer @ CHEST | Software Engineering | LLMs | Explainable AI
I recently finished my Master's in Computer Science at UC Davis. I'm a software engineer building AI applications for hardware: LLM-based tools, APIs, and data pipelines that analyze chip designs, detect vulnerabilities, and streamline hardware workflows.
Current Work
Center for Hardware and Embedded Systems Security and Trust:
- Set up computing infrastructure to run LLM inference on HPC clusters for hardware code analysis
- Fine-tuned LLMs to analyze chip designs and identify bugs and security vulnerabilities
- Co-authored two papers on the use of LLMs in chip design (ISQED 2025, MLCAD 2025)
Previously:
Artificial Intelligence Accountability Explainability Lab:
- Designed frameworks that make large language models more accurate and reliable
- Reduced hallucinations by 30% using retrieval-augmented generation (RAG) and structured reasoning approaches (chain-of-thought prompting)
- Built a full-stack web application with Python Flask, ReactJS, and MongoDB
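The RAG idea above can be sketched in a few lines: retrieve the snippet most relevant to a query, then ground the model's prompt in it so the answer comes from the retrieved context rather than from memory. This is only an illustrative toy, not the lab's pipeline; the bag-of-words "embedding" and the two-document corpus stand in for a real embedding model and knowledge base.

```python
# Minimal RAG sketch (illustrative, not the production pipeline).
import math
import re
from collections import Counter

def embed(text: str) -> Counter:
    # Toy bag-of-words vector; a real system would use a learned embedding model.
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse word-count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, corpus: list[str]) -> str:
    # Return the corpus document most similar to the query.
    return max(corpus, key=lambda doc: cosine(embed(query), embed(doc)))

def build_prompt(query: str, corpus: list[str]) -> str:
    # Grounding the prompt in retrieved context is what curbs hallucination:
    # the model is asked to answer from the snippet, not from memory.
    context = retrieve(query, corpus)
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

corpus = [
    "Flask is a lightweight Python web framework.",
    "MongoDB is a document-oriented NoSQL database.",
]
print(retrieve("What kind of database is MongoDB?", corpus))
# -> "MongoDB is a document-oriented NoSQL database."
```

In a real deployment the `embed` function would be an embedding model and the corpus would live in a vector store, but the retrieve-then-ground structure is the same.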
Elite Softwares:
- Built full-stack CRM application with React.js frontend and Flask backend with REST APIs
- Implemented automated testing with 70% code coverage using PyTest
- Designed responsive UI that increased user engagement time by 20%
- Optimized database queries, reducing data retrieval time by 35%
Strengths: backend architecture, API design, performance & reliability, applied LLM systems, hardware-software integration, ML for chip design, ML infrastructure
Interests: AI for adjacent domains, agentic AI, data/infra platforms, full-stack development
- Languages: Python, JavaScript, C++, HTML, CSS
- Databases & APIs: MySQL, PostgreSQL, MongoDB, RESTful API design, OAuth 2.0, JWT, Postman
- Frameworks & Libraries: React.js, Django, Flask, PyTorch, Pandas, TensorFlow, scikit-learn
- AI/ML & Vector DBs: Pinecone, FAISS, OpenAI API, RAG, CUDA 12.1, Vector Similarity Search, Transformers
- DevOps & Tools: GitHub, GitHub Actions, Docker, GCP, JIRA, CI/CD, Selenium
🎮 Gameboi
An open-source generative AI tool that creates complete 2D games from text prompts using GPT-4, DALL·E, and PyGame.
- It is designed to streamline the game development process by automating various stages, from generating game sprites and assets to writing PyGame code.
- It can also troubleshoot when the game fails to run and improve itself based on user feedback.
- Within a week of open-sourcing Gameboi, it had earned 7 stars and multiple contributions on GitHub.
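The self-troubleshooting behavior can be sketched as a generate-run-repair loop: execute the generated code, and on failure feed the traceback back to the model for a fix. This is a hedged sketch, not Gameboi's actual implementation; `fix_with_llm` is a stub standing in for a real GPT-4 call, and the "game" here is a one-line placeholder.

```python
# Sketch of a self-repair loop (illustrative; fix_with_llm stubs a GPT-4 call).
import traceback

def fix_with_llm(code: str, error: str) -> str:
    # Placeholder: a real implementation would prompt GPT-4 with the failing
    # code and its traceback. Here we just patch one known typo.
    return code.replace("pritn", "print")

def run_with_repair(code: str, max_attempts: int = 3) -> bool:
    for _ in range(max_attempts):
        try:
            exec(compile(code, "<game>", "exec"), {})
            return True  # generated code ran cleanly
        except Exception:
            # Hand the traceback back for another repair attempt.
            code = fix_with_llm(code, traceback.format_exc())
    return False

# Broken placeholder "game" code the loop repairs on its second attempt.
print(run_with_repair('pritn("game loop started")'))
```

The real tool wraps a full PyGame program in this loop; the point is only that error output becomes part of the next prompt, so the system converges on runnable code without manual debugging.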
A local LLM-based financial assistant built by fine-tuning LLaMA3-8B on historic stock market data.
- Developed as part of my Advanced Deep Learning course project: a financial analysis tool that runs locally to analyze and predict stock trends.
- LLMs can help you understand your financial portfolio in light of current news sentiment, but sensitive data can't be uploaded to applications like ChatGPT; running the model locally avoids that risk.
- Observed a 19% improvement in trend prediction and a 28% improvement in sentiment analysis over Mistral-7B, and improvements of 10% and 17%, respectively, over the base Llama3-8B model.
🗣️ HindiSetu
A DeepSeek-powered platform that helps novice and intermediate learners study Hindi, currently being used as an experimental learning tool @ Department of Middle East and South Asian Studies, UC Davis.
- Generates a transcript for a given YouTube video and relevant Q&A based on it, helping build students' reading and comprehension skills.
- Also implemented a dictionary and word-lookup feature to help expand students' vocabulary.
- Integrated OpenAI’s APIs and a FAISS vector database to create adaptive assessments, AI generated Q&A pairs, and personalized learning workflows.
Publications
- K. I. Gubbi, M. Halm, S. Kumar, A. Sudarshan, P. D. Kota, M. Tarighat, A. Sasan, and H. Homayoun. "Prompting for Power: Benchmarking Large Language Models for Low-Power RTL Design Generation." In 2025 ACM/IEEE 7th Symposium on Machine Learning for CAD (MLCAD), pages 1–7, 2025.
- K. I. Gubbi, M. Tarighat, A. Sudarshan, I. Kaur, P. D. Kota, A. Sasan, and H. Homayoun. "State of Hardware Fuzzing: Current Methods and the Potential of Machine Learning and Large Language Models." In 2025 26th International Symposium on Quality Electronic Design (ISQED), pages 1–7. IEEE, 2025.
