Turning experimental models into production-ready systems - AWS SageMaker, MLflow, Kubeflow, and cloud-native ML infrastructure.
- 🔭 Current Focus: AWS SageMaker, MLflow, and Kubeflow
- 🌱 Learning: GPU orchestration on EKS and real-time Feature Stores.
- 📫 Reach me: LinkedIn
- Automated provisioning of an Amazon SageMaker environment using Terraform. Includes VPC networking, IAM roles for least-privilege access, and EKS cluster setup for distributed training.
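A minimal Terraform sketch of the pattern described above, using the AWS provider's SageMaker and IAM resources. Resource names, the role name, and the instance type are illustrative assumptions, not the project's actual configuration:

```hcl
# Sketch only: names and instance_type are placeholders, not the
# project's real values. VPC and EKS resources are omitted for brevity.

# Execution role SageMaker assumes; attach least-privilege policies to it.
resource "aws_iam_role" "sagemaker_execution" {
  name = "sagemaker-execution-role"
  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Action    = "sts:AssumeRole"
      Effect    = "Allow"
      Principal = { Service = "sagemaker.amazonaws.com" }
    }]
  })
}

# A small notebook instance bound to that role.
resource "aws_sagemaker_notebook_instance" "dev" {
  name          = "mlops-dev-notebook"
  role_arn      = aws_iam_role.sagemaker_execution.arn
  instance_type = "ml.t3.medium"
}
```

Keeping the role definition in the same module as the SageMaker resource makes the least-privilege boundary reviewable in one diff.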
- Built a low-latency Feature Store using AWS Glue and Redis. Migrated legacy Oracle relational data into a versioned format suitable for real-time ML inference, with dataset versions tracked in DVC.
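A hedged sketch of how versioned feature lookups against Redis might be keyed. The key layout and helper names are my assumptions, not the project's schema; `store` is any object with `get`/`set`, which a real `redis.Redis` client provides (the demo uses an in-memory stand-in):

```python
import json
from datetime import datetime, timezone

def feature_key(entity: str, entity_id: str, version: str) -> str:
    """Build a namespaced, versioned key, e.g. 'features:customer:42:v3'."""
    return f"features:{entity}:{entity_id}:{version}"

def write_features(store, entity, entity_id, version, features: dict):
    # Store features plus a write timestamp as a single JSON value.
    payload = {"features": features,
               "written_at": datetime.now(timezone.utc).isoformat()}
    store.set(feature_key(entity, entity_id, version), json.dumps(payload))

def read_features(store, entity, entity_id, version) -> dict:
    raw = store.get(feature_key(entity, entity_id, version))
    return json.loads(raw)["features"] if raw else {}

class _DictStore(dict):
    """In-memory stand-in for redis.Redis, for demonstration only."""
    def set(self, k, v): self[k] = v
    def get(self, k): return super().get(k)

store = _DictStore()
write_features(store, "customer", "42", "v3", {"ltv": 1280.5, "churn_risk": 0.07})
print(read_features(store, "customer", "42", "v3"))
```

Pinning the version into the key lets inference read a frozen feature set while a new version is being backfilled.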
- A full CI/CD/CT (Continuous Training) pipeline. Uses GitHub Actions to trigger model retraining in MLflow when new data arrives in S3, ensuring zero-downtime deployment via ArgoCD.
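A sketch of what the retraining trigger could look like as a workflow. One assumption worth flagging: S3 cannot invoke GitHub Actions directly, so this listens for a `repository_dispatch` event that something like a Lambda subscribed to the bucket's event notifications would send. Event type, job name, and the MLflow entry point are all illustrative:

```yaml
# Sketch only: event type and entry-point name are placeholders.
name: continuous-training
on:
  repository_dispatch:
    types: [new-training-data]

jobs:
  retrain:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.11"
      - name: Retrain model
        run: |
          pip install mlflow
          mlflow run . -e train --env-manager local
```

Deployment itself stays out of this workflow; ArgoCD can watch the model registry or manifest repo and roll the new version out declaratively.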
- Prometheus & Grafana stack designed to monitor Model Drift. Tracks prediction latency and accuracy decay, mirroring the "Database Health Checks" of a traditional DBA.
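As a concrete example of a drift signal such a stack could export, here is a plain-Python sketch of the Population Stability Index (PSI) over binned score distributions. The thresholds in the docstring are a common rule of thumb, not a universal standard, and the sample distributions are invented:

```python
import math

def population_stability_index(expected: list[float], actual: list[float],
                               eps: float = 1e-6) -> float:
    """PSI between two binned distributions (proportions summing to ~1).

    Rule of thumb (assumption, not a standard): < 0.1 stable,
    0.1-0.25 moderate drift, > 0.25 significant drift.
    """
    psi = 0.0
    for e, a in zip(expected, actual):
        e, a = max(e, eps), max(a, eps)  # clamp to avoid log(0)
        psi += (a - e) * math.log(a / e)
    return psi

baseline = [0.25, 0.25, 0.25, 0.25]  # score distribution at training time
today    = [0.10, 0.20, 0.30, 0.40]  # live traffic, same bins
print(round(population_stability_index(baseline, today), 4))
```

A scheduled job can compute this per feature or per score bucket and push it as a Prometheus gauge, so Grafana alerts fire on drift exactly like a DBA's health-check thresholds.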
Skills: ML Platforms & Ops · Infrastructure & Data · CI/CD & Automation
- Infrastructure as Code: Automating ML environments to eliminate "it works on my laptop" syndrome.
- Data Integrity: Applying DBA-level rigor to Feature Stores and Data Versioning (DVC).
- Observability: Moving beyond system health to monitor Model Drift and Performance Decay.
- From DBA to MLOps: Specialized in architecting high-throughput data bridges between Legacy RDBMS (Oracle/PeopleSoft) and modern ML Feature Stores.
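The high-throughput bridge mentioned above comes down to one pattern: streaming rows in fixed-size batches rather than loading whole tables. A hedged sketch against the DB-API 2.0 cursor interface (which python-oracledb implements); the stub cursor stands in for a live Oracle connection, and the batch size is illustrative:

```python
def stream_rows(cursor, batch_size: int = 10_000):
    """Yield rows batch by batch, keeping memory flat for large tables."""
    while True:
        batch = cursor.fetchmany(batch_size)
        if not batch:
            return
        yield from batch

class _StubCursor:
    """Minimal stand-in for a DB-API cursor, for demonstration only."""
    def __init__(self, rows):
        self._rows = list(rows)
    def fetchmany(self, n):
        batch, self._rows = self._rows[:n], self._rows[n:]
        return batch

cursor = _StubCursor((i, f"user_{i}") for i in range(25))
rows = list(stream_rows(cursor, batch_size=10))
print(len(rows))  # all 25 rows, fetched in batches of 10
```

With a real connection, tuning `cursor.arraysize` to match the batch size reduces network round-trips, which is where most of the throughput comes from.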
- 🗣 Commented on #1 in singhajeet79/sales_pipeline
- 🔒 Closed issue #1 in singhajeet79/sales_pipeline
- ❗ Opened issue #1 in singhajeet79/sales_pipeline
- 🗣 Commented on #1 in singhajeet79/inventory-data-pipeline
- 🔒 Closed issue #1 in singhajeet79/inventory-data-pipeline

