RLM v1 (draft — base API decision needed)#86
Draft
darinkishore wants to merge 44 commits into
Conversation
- Refactor Message from flat enum to Role + Vec<ContentBlock>
- Reasoning continuity preserved through rig round-trips
- From<RigMessage> is trivial (no data loss); RigChatMessage removed
- Predict API split: forward(input) + forward_continue(chat)
- ToolLoopMode::CallerManaged for caller-controlled tool loops
- Full conversation history in LMResponse.chat
- temp_env replaces unsafe set_var in all tests
- 14 new tests: round-trip, CallerManaged conversation, reasoning preservation
Two trybuild tests (render_invalid_jinja, render_non_literal) failed on CI because syn::Error::new(span, msg), with a span obtained from .span(), produces different underline widths on stable (CI) vs nightly (local). Switch to syn::Error::new_spanned(tokens, msg), which reliably spans from the first to the last token regardless of compiler version.
…, collapse call API

refactor: unify Predict history API and chat contracts
- collapse Predict to forward(input, history) with a call() wrapper
- preserve full provider content in CallerManaged LMResponse.output
- remove inaccurate lossless-conversion claim
- remove legacy Chat JSON parsing; enforce canonical grouped format
- update conversation and chat round-trip tests

refactor: harden LM transcript fidelity, unify Predicted return shape, collapse call API
… perception-based user messages, custom repr support
This reverts commit cd46a7b.
- Type dedup: persistent visited set; types expand once, then are referenced by name
- Nested method visibility: collect methods by type (schema-driven resolution), render on all class blocks, not just top-level vars
- Data enum de-wrapper: single-payload variants render as direct payload types instead of Entry_X { type, data } wrappers
- Doc comment normalization: multi-line docs collapse to a single line
- Union indentation fix: preserve nested indentation in continuation lines
- Docstring gating removed: methods without docs still appear in the schema
- Defensive synthetic-variant guard: prevent method contamination on BAML-generated variant classes
Schema output: 409 → 222 lines (46% reduction), 0 → 11 nested methods visible.
All changes are general infrastructure — any program using the RLM benefits.
Draft PR for the RLM v1 stack. Not ready to land yet — there's an unresolved base-API decision documented below.
Summary
nqzvplvs 94685374 "RLM schema rendering overhaul: type dedup, nested methods, clean unions". The branch is built on top of an orphaned multi-turn-predict refactor that never made it to main, and main has since taken a different shape. Specifically:
This branch (ynqyxtnl) vs main (3c850e60 + 5bb65ca5):
- forward(input, history: Option<Chat>) -> Predicted vs forward(input) -> Predicted
- forward with Some(history) vs forward_continue(chat) -> (Predicted, Chat)
- compose_chat + execute_chat (private) vs build_chat + call_and_parse (public)
- lm.call(chat, tools, ToolLoopMode::Auto) vs lm.call(chat, tools) (no ToolLoopMode)
- (no per-instance override) vs Predict.lm: Option<Arc<LM>> + .lm() builder

How it ended up like this
- zqytoppy, kqyktwmv) with the split API (forward(input) + forward_continue(chat)) + ToolLoopMode::CallerManaged.
- 1e9d503a "collapse call API" (= ynqyxtnl on this branch): collapses the split API back to unified forward(input, history), switches to ToolLoopMode::Auto, uses the Role enum.
- 3c850e60 containing only the first two commits. The "collapse call API" refactor was not included.
- The multi-turn-predict branch got the orphaned refactor as 659c3938 ynqyxtnl. RLM v1 was then built on top.
- 5bb65ca5 feat(predict): add per-instance LM override, on top of the older split API.

So the local stack uses the newer/collapsed API; main uses the older/split API plus an LM-override addition.
Two paths forward
(A) Keep main's split API. Port the RLM stack to use forward(input) + forward_continue(chat), and drop ToolLoopMode::Auto for the equivalent main-side construct. The ynqyxtnl "collapse call API" refactor is effectively discarded.

(B) Restore the unified API. Land ynqyxtnl (the orphaned refactor) into main first as its own PR, re-apply 5bb65ca5 (per-instance LM override) on top, then rebase RLM. Preserves the local API design.

Both involve a non-trivial API migration in the rebase; the "update with main" delta itself is only one small commit (5bb65ca5, 143 lines).

Test plan
- On main: cargo test across dspy-rs + RLM-derive + integration suites