@batteryphil I was curious whether the latent TTC loop is backbone-agnostic, so I ported Phase 14's inner-loop bypass to RWKV-7 and ran some ablations. Branch is here if you want to poke at it: https://github.com/ItsMick/mamba2backbonerecursion/tree/rwkv7-latent-loop
It works. O(1) stateful decode runs clean, and the kill-shot proof passes (L2 divergence of 549 between 6 vs 12 loops, with different top tokens). One thing that stood out: the WKV state is a [n_heads, d_head, d_head] KV matrix rather than an opaque vector, so you can fingerprint, compose, and inspect it directly. State cartridges round-trip at <1e-6 error and blend across sessions.
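For anyone wanting to play with the cartridge idea before cloning the branch, here's a minimal NumPy sketch of what save/load/blend on a WKV-shaped state looks like. Everything here is hypothetical: `save_cartridge`, `load_cartridge`, and `blend` are illustrative names (not the branch's actual API), and `n_heads`/`d_head` are placeholder sizes.

```python
import numpy as np

# Placeholder dimensions -- the real values come from the model config.
n_heads, d_head = 4, 8

def save_cartridge(state, path):
    """Serialize a WKV-style state [n_heads, d_head, d_head] to disk."""
    np.save(path, state.astype(np.float32))

def load_cartridge(path):
    """Load a previously saved state cartridge."""
    return np.load(path)

def blend(a, b, alpha=0.5):
    """Linear interpolation of two states (the 'blend across sessions' idea)."""
    return alpha * a + (1.0 - alpha) * b

state = np.random.randn(n_heads, d_head, d_head).astype(np.float32)
save_cartridge(state, "cartridge.npy")
restored = load_cartridge("cartridge.npy")

# Round-trip check: float32 -> disk -> float32 should be (near-)exact.
err = np.abs(state - restored).max()
assert err < 1e-6
```

Since the state is a dense per-head matrix rather than an opaque vector, the blend is just elementwise arithmetic; whether a linear mix of two sessions' states produces coherent behavior is exactly the kind of thing the branch's ablations would need to answer.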
No training loop yet, so it's inference-only, and the halting head is still random weights. But the loop runs and the state evolves. Curious whether this lines up with anything you've already tried on the RWKV side.
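The divergence check above is easy to reproduce in miniature. This is a toy stand-in, not the branch's code: `inner_loop` here is just a tanh map over a random matrix, where the real thing would be RWKV-7's WKV recurrence, but the test logic is the same — run the inner loop for two different counts from the same starting state and confirm the states actually diverge.

```python
import numpy as np

rng = np.random.default_rng(0)
n_heads, d_head = 4, 8

def inner_loop(state, n_loops, W):
    # Toy stand-in for the latent inner loop: repeatedly update the
    # per-head state matrices. The real update is the WKV recurrence.
    for _ in range(n_loops):
        state = np.tanh(state @ W)
    return state

W = rng.standard_normal((d_head, d_head)) * 0.5
s0 = rng.standard_normal((n_heads, d_head, d_head))

s6 = inner_loop(s0, 6, W)
s12 = inner_loop(s0, 12, W)

# Nonzero divergence => extra loop iterations genuinely change the state,
# which is the point of the kill-shot proof.
div = np.linalg.norm(s6 - s12)
```

If the extra six iterations were a no-op (e.g. the state had collapsed to a fixed point), `div` would be ~0 and the top tokens couldn't differ, so this is the cheap sanity check before looking at logits at all.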