
Hi, I’m Anna

Senior AI Product Manager focused on evaluating, monitoring, and hardening machine learning systems in production.

I build tools for:

  • Medical AI model monitoring (real-world drift detection, calibration analysis, post-market evaluation)
  • Statistical evaluation frameworks for calibration, threshold selection, and robust performance analysis
  • LLM behavioral reliability (drift, safety boundaries, consistency), in progress
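As a flavor of the calibration work above, here is a toy sketch of expected calibration error (ECE) over equal-width probability bins. This is illustrative only; the function name and binning choices are mine, not taken from any of the projects listed here.

```python
# Illustrative sketch: expected calibration error (ECE) with equal-width bins.
# The name expected_calibration_error and the toy data are hypothetical.

def expected_calibration_error(probs, labels, n_bins=10):
    """Average |accuracy - confidence| gap per bin, weighted by bin size."""
    bins = [[] for _ in range(n_bins)]
    for p, y in zip(probs, labels):
        idx = min(int(p * n_bins), n_bins - 1)  # clamp p == 1.0 into last bin
        bins[idx].append((p, y))
    ece = 0.0
    n = len(probs)
    for bucket in bins:
        if not bucket:
            continue
        confidence = sum(p for p, _ in bucket) / len(bucket)  # mean predicted prob
        accuracy = sum(y for _, y in bucket) / len(bucket)    # empirical hit rate
        ece += (len(bucket) / n) * abs(accuracy - confidence)
    return ece

# A perfectly calibrated toy case: predictions of 0.8 that are right 8 of 10 times.
probs = [0.8] * 10
labels = [1] * 8 + [0] * 2
print(round(expected_calibration_error(probs, labels), 3))
```

In practice one would also care about bin counts and confidence intervals per bin, but the weighted-gap structure is the core of the metric.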

My work sits at the intersection of research and product, with an emphasis on making AI systems measurable, predictable, and operationally safe.


Featured Projects

  • Model Eval & Drift Lab
    Tools for detecting distribution shift, assessing calibration, and stress-testing deployed ML systems, used in high-stakes medical AI deployments.

  • GPT-Drift
    Lightweight behavioral fingerprinting to detect silent behavior changes in LLM APIs.
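To illustrate the kind of drift check these projects deal with, here is a minimal two-sample comparison using the Kolmogorov–Smirnov statistic: flag drift when the maximum distance between the empirical CDFs of a reference sample and a live sample exceeds a threshold. The function names and the threshold value are hypothetical, not taken from either repository.

```python
# Illustrative sketch: two-sample drift check via the Kolmogorov-Smirnov
# statistic. Names (ks_statistic, detect_drift) and threshold are hypothetical.

def ks_statistic(reference, live):
    """Maximum distance between the empirical CDFs of two samples."""
    ref = sorted(reference)
    cur = sorted(live)
    d = 0.0
    for x in sorted(set(ref + cur)):
        cdf_ref = sum(1 for v in ref if v <= x) / len(ref)
        cdf_cur = sum(1 for v in cur if v <= x) / len(cur)
        d = max(d, abs(cdf_ref - cdf_cur))
    return d

def detect_drift(reference, live, threshold=0.2):
    """Flag drift when the KS distance exceeds a chosen threshold."""
    return ks_statistic(reference, live) > threshold

baseline = [0.1, 0.2, 0.3, 0.4, 0.5]
shifted = [0.6, 0.7, 0.8, 0.9, 1.0]
print(detect_drift(baseline, baseline))  # same distribution -> False
print(detect_drift(baseline, shifted))   # clear shift -> True
```

A production system would use a proper significance test (e.g. `scipy.stats.ks_2samp`) and windowed monitoring rather than a fixed threshold, but the comparison-against-reference pattern is the same.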


📫 Connect
