AI Model Evaluation | RLHF | Prompt Engineering. Focused on adversarial testing, constraint satisfaction, and creating rigorous evaluation rubrics.
- in/yaswanthghanta
Joined Jan 27, 2026
Pinned
- llm-logical-integrity-benchmark (Public): Adversarial testing of LLMs on constraint satisfaction deadlocks