Surface LLM errors in CLI, use pi-ai credential detection, and implement model fallback#2
Conversation
Priyanshu-Priyam
commented
Mar 6, 2026
Three related fixes for silent model failures:

1. CLI now prints LLM-level errors (`stopReason: "error"`) instead of showing a blank line. Fires `on_error` hooks and audit logging for these failures.
2. Replace the hardcoded 6-provider API key map with pi-ai's `getEnvApiKey()`, which supports all providers including Bedrock, Vertex, Azure, etc.
3. Implement model fallback: when the preferred model fails, automatically retry with models from `agent.yaml`'s `model.fallback` list. Works in both CLI (REPL + single-shot) and SDK (`query()`) paths.

Made-with: Cursor
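The fallback behavior described in point 3 could be sketched roughly as below. This is a minimal illustration, not the PR's actual code: the names `ModelResult`, `ModelConfig`, and `runWithFallback`, and the exact shape of the `agent.yaml` model config, are all assumptions.

```typescript
// Hypothetical types; the real PR presumably uses pi-ai's own result shape.
type ModelResult = {
  stopReason: "stop" | "error";
  text?: string;
  errorMessage?: string;
};
type ModelInvoker = (model: string) => Promise<ModelResult>;

// Assumed shape of agent.yaml's model section: a preferred id plus
// an optional ordered fallback list.
interface ModelConfig {
  id: string;
  fallback?: string[];
}

// Try the preferred model first, then each fallback in order.
// On failure, the error is logged (instead of a blank line) before
// moving to the next candidate; if every model fails, the last
// error is surfaced to the caller.
async function runWithFallback(
  cfg: ModelConfig,
  invoke: ModelInvoker
): Promise<{ model: string; text: string }> {
  const candidates = [cfg.id, ...(cfg.fallback ?? [])];
  let lastError = "no models configured";
  for (const model of candidates) {
    const result = await invoke(model);
    if (result.stopReason !== "error") {
      return { model, text: result.text ?? "" };
    }
    // In the PR this is also where on_error hooks and audit
    // logging would fire.
    lastError = result.errorMessage ?? "unknown LLM error";
    console.error(`model ${model} failed: ${lastError}`);
  }
  throw new Error(`all models failed; last error: ${lastError}`);
}
```

Usage would look like `runWithFallback({ id: "primary", fallback: ["backup"] }, invoke)`: if `"primary"` returns `stopReason: "error"`, the loop retries with `"backup"` and returns its result.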
shreyas-lyzr
left a comment
This PR adds error surfacing and model fallback, which are good goals. However, the diff is against a very old base and will have massive merge conflicts with current main. The codebase has changed significantly since this was opened.
Recommend closing this and re-opening against current main if the features are still desired.
Apologies for the long silence on this; two months is too long. Updating you on the current state of main, because parts of this PR are already in.

The right path forward, if you have time, is to narrow this PR to just the two unmerged concerns. If a fresh PR is easier than a rebase, that's fine; link this one in the description and your authorship credit carries over.

Real apologies for the wait. The Bedrock error-surfacing piece you flagged was useful enough that it shaped how main handles errors now. That part is materially yours.