"Writes code so the robots behave; rides bikes so he remembers he's squishy; riffs on guitar so he knows he's neither AI nor Achilles."
I'm a Master's student in Computer Science with a passion for making AI systems more interpretable and trustworthy. I currently serve as President of the NEURAI Lab at Northeastern University, where we pursue interdisciplinary AI research.
- 🧠 Mechanistic Interpretability & XAI Safety - Reverse-engineering how AI systems actually work
- 📈 Quantitative Finance - Building retail quant strategies with reinforcement learning
- 🤝 Community Building - Hosting Bay Area interpretability workshops
NEURAI Lab: Northeastern University's Research in AI
Leading interdisciplinary AI research initiatives focused on:
- Explainable AI: Making artificial minds more transparent
- AI in Healthcare: Enhancing nurse training simulations
- AI for Accessibility: Improving audio descriptions for visually impaired users
- Autonomous Systems: Wildlife-safe drone technology
- Traffic Modeling: Particle-based vehicle representations
Recent Highlights:
- Won the 2025 Silicon Valley Rising Star Award
Much of my work is conducted through institutional collaborations and research initiatives:
- AI Safety Research - Mechanistic interpretability studies (primarily private/institutional repos)
- Quantitative Finance Models - RL-based trading strategies and risk management systems
- NEURAI Lab Projects - Collaborative research in explainable AI and healthcare applications
- Workshop Materials - Bay Area interpretability meetup resources and educational content
Public repositories are selectively curated; most active development occurs in research environments.
When I'm not debugging neural networks or building trading algorithms, you'll find me:
- 🎸 Riffing on guitar - Keeping the creative side alive
- 🏍️ Riding my Ducati - Embracing the thrill of the open road
- 🚴 Cycling - Staying grounded and remembering I'm human
- 🤝 Organizing workshops - Sharing knowledge with the AI interpretability community
I'm always excited to discuss:
- AI Safety & Interpretability
- Quantum Machine Learning applications
- Quantitative trading strategies
- Research collaborations
- Or just chat about guitars and motorcycles! 🎸🏍️
📍 San Francisco Bay Area | 🎓 Northeastern University
Building the future of interpretable AI, one algorithm at a time ✨

