moq-lite-03: Support dynamic latency #829
base: main
Conversation
No more tasks required.
Resolved merge conflicts from debugging PR:

- Removed tokio-console feature dependencies
- Replaced `web_async::spawn_named` with `web_async::spawn`
- Replaced `tokio::task::Builder` with `tokio::spawn`
- Adopted refactored deadlock detection code from main
- Updated to use published web-transport crates
- Added panic=abort profile settings from main

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
publisher priority takes over, and it's pretty useless anyway.
Walkthrough

The change converts string/json/bool track I/O in JavaScript to a Frame-based API, updates `broadcast.subscribe` to accept option objects (`name`/`priority`/`maxLatency`), and adds a `Frame` abstraction and group/frame read/write primitives. Many lite APIs (`Track`, `Group`, `Broadcast`, `Writer`) were reworked to use `Frame`/`Delivery`/`Time` and new subscribe/update fields (`ordered`, `maxLatency`). In Rust, catalog production/consumption, container framing, delivery, expiration, and subscriber state were introduced or moved into `moq_lite` modules; `Track`/`Group`/`Frame` model types were refactored to carry timestamps/deltas, and Draft03 version support was added.

🚥 Pre-merge checks: ✅ 2 passed, ❌ 1 failed (1 warning)
Actionable comments posted: 10
Note
Due to the large number of review comments, Critical and Major severity comments were prioritized as inline comments.
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (7)
js/hang/src/publish/preview.ts (1)
32-40: Add error handling for `Frame.fromJson` or document the type-safety rationale.

`Frame.fromJson(info)` calls `JSON.stringify()`, which can throw if the object contains circular references or problematic `toJSON` methods. While `Catalog.Preview` is currently defined with only primitive types (strings and booleans), there's no explicit error handling if serialization fails. Either:

- Wrap the call in a try-catch and handle serialization errors, or
- Document that the type structure prevents JSON serialization failures (e.g., via a comment noting that the `Catalog.Preview` schema guarantees serializable types)

js/lite/src/ietf/publisher.ts (3)
150-176: Timestamps are likely required for latency-driven behavior—sending frames without them will break ordering/buffering.

`new FrameMessage(frame.payload)` plus `// TODO support timestamps` (lines 173-175) is risky given the PR's "all timing behavior is driven by timestamps included with each frame". If `FrameMessage` supports an instant/timestamp delta now, this should be wired through before merge.
197-203: Unsubscribe not implemented: likely leaks track/stream resources for long-lived broadcasts.

With `handleUnsubscribe` as TODO (lines 201-203), a client can drop a subscription while `#runTrack` keeps waiting on `track.nextGroup()` indefinitely. Consider tracking `requestId -> Track` so you can `track.close()` and stop publishing promptly on unsubscribe/cancel.
83-112: Hard-coded `maxLatency` and missing delivery fields in SUBSCRIBE_OK response.

`maxLatency: 100 as Time.Milli` (line 101) is a placeholder marked with `// TODO delivery timeout`. The subscriber's `subscriberPriority` is used, but there's no mechanism to communicate the publisher's actual delivery constraints (priority, maxLatency, ordered) back via SUBSCRIBE_OK. The Track object maintains these as Signals (they can change dynamically), but the subscriber never learns about them, violating the subscribe/subscribe-ok contract needed for proper QoS negotiation.

rs/libmoq/src/api.rs (1)
444-460: Add documentation of the maximum allowed value for `max_latency_ms`.

The overflow checking and error mapping are correctly implemented. `moq_lite::Time::from_millis(max_latency_ms)?` properly validates the input and returns error code `-13` (`TimestampOverflow`) on overflow. However, the API documentation for both `moq_consume_video_ordered` (line 436) and `moq_consume_audio_ordered` (line 476) should document the maximum allowed value: 4,611,686,018,427,387,903 milliseconds (2^62-1, the QUIC VarInt maximum). This prevents callers from discovering the limit only through runtime errors.

rs/moq-lite/src/ietf/subscribe.rs (2)
102-117: `Subscribe::encode_msg` does not encode `delivery_timeout`.

The decode path reads `delivery_timeout` from parameters (lines 87-89), but encoding writes `0u8` for "no parameters" and never writes the field. This causes data loss on round-trip. Compare with `SubscribeOk::encode_msg` (lines 139-144) and `SubscribeUpdate::encode_msg` (lines 258-263), which correctly encode `delivery_timeout` via a `Parameters` block.

🐛 Proposed fix to encode delivery_timeout

```diff
 fn encode_msg<W: bytes::BufMut>(&self, w: &mut W, version: Version) {
     self.request_id.encode(w, version);
     encode_namespace(w, &self.track_namespace, version);
     self.track_name.encode(w, version);
     self.subscriber_priority.encode(w, version);
     GroupOrder::Descending.encode(w, version);
     true.encode(w, version); // forward

     assert!(
         !matches!(self.filter_type, FilterType::AbsoluteStart | FilterType::AbsoluteRange),
         "Absolute subscribe not supported"
     );
     self.filter_type.encode(w, version);

-    0u8.encode(w, version); // no parameters
+    let mut params = Parameters::default();
+    params.set_int(
+        SubscribeParameter::DeliveryTimeout.into(),
+        self.delivery_timeout.as_millis() as u64,
+    );
+    params.encode(w, version);
 }
```
310-330: Test should fail due to missing delivery_timeout encoding.

The `Subscribe::encode_msg` method writes zero parameters (line 116), so `delivery_timeout` is never serialized. During decoding (line 88), `params.get_int(SubscribeParameter::DeliveryTimeout.into()).unwrap_or(0)` returns `0`, causing `Time::from_millis(0)` to be decoded instead of the original `1000`. The assertion at line 329 would therefore fail.

Additionally, `encode_msg` hardcodes `GroupOrder::Descending` (line 107) instead of encoding `self.group_order`, which would also break the round-trip for other group orders—though this test doesn't catch it since it uses `Descending`.
🤖 Fix all issues with AI agents
In @js/lite/src/broadcast.ts:
- Around line 28-40: The requested() loop currently calls
this.#requested.peek().pop(), which mutates the internal array without notifying
subscribers; change it to use the same mutation API as elsewhere (e.g., call
this.#requested.mutate(arr => arr.pop()) or otherwise read-and-update via
this.#requested.set/mutate) so that observers are notified; ensure you still
return the popped Track when non-undefined and preserve the existing
closed-check and Signal.race logic.
In @rs/hang/src/import/avc3.rs:
- Around line 264-271: The Drop impl for Avc3 uses unwrap on
catalog.lock().video.remove(&track) which can panic; change to handle the None
case safely: take self.track as before, call track.close().ok(), then attempt to
remove with catalog.lock().video.remove(&track) using if let Some(config) = ...
{ tracing::info!(track = %track.info(), ?config, "ended track"); } else {
tracing::warn!(track = %track.info(), "ended track but no config found in
catalog"); } so Drop never panics and logs the missing-config case instead of
unwrapping.
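The non-panicking `Drop` pattern described above can be sketched in isolation. This is a minimal, self-contained illustration with stand-in types (`Catalog` and `Avc3Like` here are hypothetical simplifications of the crate's real types), not the crate's actual implementation:

```rust
use std::collections::HashMap;
use std::sync::{Arc, Mutex};

// Hypothetical stand-in for the shared catalog in rs/hang.
struct Catalog {
    video: HashMap<String, String>, // track name -> config
}

struct Avc3Like {
    track: Option<String>,
    catalog: Arc<Mutex<Catalog>>,
}

impl Drop for Avc3Like {
    fn drop(&mut self) {
        // Take the track first so Drop is idempotent.
        let Some(track) = self.track.take() else { return };
        // Handle the missing-config case instead of unwrapping, so Drop never panics.
        match self.catalog.lock().unwrap().video.remove(&track) {
            Some(config) => println!("ended track {track} with config {config}"),
            None => eprintln!("warning: ended track {track} but no config found in catalog"),
        }
    }
}

fn main() {
    let catalog = Arc::new(Mutex::new(Catalog { video: HashMap::new() }));
    // No config was ever registered; dropping must not panic.
    let importer = Avc3Like { track: Some("video0".into()), catalog };
    drop(importer); // logs a warning instead of panicking
}
```

The key design point is that `Drop` runs during unwinding, where a second panic aborts the process, so any fallible lookup inside it should degrade to a log line.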
In @rs/hang/src/lib.rs:
- Around line 17-29: The review warns that removing the old hang::model exports
is a breaking change; restore compatibility by adding a temporary deprecated
shim module named model that re-exports the previous public items (e.g., add
"pub mod model { #[deprecated(note = \"Re-exported for compatibility; will be
removed in a future major release\")] pub use crate::catalog::*; pub use
crate::container::*; pub use crate::error::*; }" or equivalent) so downstream
code using hang::model::* continues to compile, and either prepare a
semver-major release with release notes if you intend to remove it or keep the
shim for one release cycle; also audit the "pub use moq_lite;" re-export and
decide whether it should remain part of the public API (remove the pub re-export
or document/justify it) and update CHANGELOG/release plan accordingly.
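A hedged sketch of the compatibility shim the instruction describes. The module contents here are illustrative stand-ins (the real crate would re-export its actual `catalog`/`container`/`error` items); only the deprecation shape is the point:

```rust
// Hypothetical stand-in for one of the crate's real modules.
mod catalog {
    #[derive(Debug, Default)]
    pub struct Catalog;
}

// Temporary compatibility shim: old `hang::model::*` paths keep compiling,
// but any use of them warns until the next semver-major release removes it.
#[deprecated(note = "use the top-level modules instead; `model` will be removed in a future major release")]
pub mod model {
    pub use crate::catalog::Catalog;
}

fn main() {
    // Old downstream code still compiles (with a deprecation warning).
    #[allow(deprecated)]
    let _old_path: model::Catalog = model::Catalog;
    // New code uses the top-level path directly.
    let _new_path: catalog::Catalog = catalog::Catalog;
    println!("both paths compile");
}
```

Deprecating the module (rather than each re-export) keeps the shim small and makes every `hang::model::...` path warn at the use site.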
In @rs/moq-lite/src/error.rs:
- Line 3: Add a centralized, documented place for wire-error codes and reference
it from the existing error conversions: create a new constants module or enum
(e.g., WireErrorCode or constants like ERROR_CODE_22, ERROR_CODE_23, and
RESERVED_CODE_21) and move the numeric literals used in to_code() into those
names; update any conversions that currently return 22/23 (and skip 21) to use
the new constants and add a doc comment on the reserved/unused code 21
explaining why it is skipped; also update crate imports (e.g., where
TimeOverflow and coding are used) to reference the new module so the allocation
strategy for wire protocol codes is explicit and discoverable.
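A sketch of the centralization being asked for. The codes 21/22/23 come from the review text; the constant and enum names are illustrative, not the crate's real identifiers:

```rust
/// Wire-protocol error codes, centralized so the allocation strategy is
/// explicit and discoverable instead of scattered numeric literals.
pub mod wire_code {
    /// Wire code for a cancelled subscription/session.
    pub const CANCEL: u64 = 22;
    /// Wire code for timestamp overflow (`TimeOverflow`).
    pub const TIME_OVERFLOW: u64 = 23;
    /// Code 21 is intentionally skipped: reserved (document here why it
    /// must not be reallocated, e.g. a previously-used meaning).
    pub const RESERVED: u64 = 21;
}

#[derive(Debug, Clone, Copy)]
pub enum Error {
    Cancel,
    TimeOverflow,
}

impl Error {
    /// Map errors to wire codes via named constants instead of bare literals.
    pub fn to_code(&self) -> u64 {
        match self {
            Error::Cancel => wire_code::CANCEL,
            Error::TimeOverflow => wire_code::TIME_OVERFLOW,
        }
    }
}

fn main() {
    println!("cancel -> {}", Error::Cancel.to_code());
}
```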
In @rs/moq-lite/src/ietf/publish.rs:
- Around line 240-243: The assert! on self.filter_type in the encoding path
(checking FilterType::LargestObject | FilterType::NextGroup) must be replaced
with proper error handling: change the panic to returning a Result::Err from the
encoder method (or an appropriate PublishError) when filter_type is not one of
the expected variants, or enforce the invariant at construction and replace with
debug_assert! if you only want a development-time check; update the encoder
function signature to propagate the error if needed and use the unique symbols
self.filter_type, FilterType::LargestObject, FilterType::NextGroup, and the
encoding method name to locate and modify the code.
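The assert-to-Result conversion can be sketched with simplified stand-in types (the variants and byte values here are hypothetical; only the error-handling shape mirrors the instruction):

```rust
#[derive(Debug, PartialEq)]
enum FilterType {
    LargestObject,
    NextGroup,
    AbsoluteStart,
}

#[derive(Debug, PartialEq)]
enum EncodeError {
    UnsupportedFilter,
}

// Before: assert!(matches!(filter, LargestObject | NextGroup)) would panic
// at runtime on a bad value. After: the encoder returns Result so the caller
// can reject the message gracefully and propagate the error.
fn encode_filter(filter: &FilterType) -> Result<u8, EncodeError> {
    match filter {
        FilterType::LargestObject => Ok(0x1),
        FilterType::NextGroup => Ok(0x2),
        _ => Err(EncodeError::UnsupportedFilter),
    }
}

fn main() {
    assert_eq!(encode_filter(&FilterType::NextGroup), Ok(0x2));
    assert_eq!(
        encode_filter(&FilterType::AbsoluteStart),
        Err(EncodeError::UnsupportedFilter)
    );
    println!("unsupported filters are rejected, not panicked on");
}
```

The alternative mentioned above, `debug_assert!`, keeps the panic in debug builds only; returning `Result` is preferable when the value can originate from a peer.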
In @rs/moq-lite/src/ietf/publisher.rs:
- Around line 118-130: The code currently calls control.send(ietf::SubscribeOk {
... }) before run_track() confirms the track is readable, which can misreport
subscription state; change the flow so you first attempt to start/validate the
track via run_track(request_id) (or the async task that opens the track) and
only send control.send(ietf::SubscribeOk { request_id, track_alias:
request_id.0, delivery_timeout: delivery.max_latency, group_order: ... }) after
run_track succeeds, and if run_track returns an error send
control.send(ietf::SubscribeError { request_id, code, message }) (or equivalent
error response) so subscribers see a failure instead of a premature success.
In @rs/moq-lite/src/model/expires.rs:
- Around line 99-112: The expiration check in the async wait loop uses the
inverted condition; update both occurrences where you compare state.max_instant
and max_latency inside the self.state.wait_for predicate (and any matching
branches using the same logic) to use the canonical form state.max_instant >=
instant + max_latency instead of state.max_instant + max_latency >= instant so
the expiration semantics match the earlier check (instant + max_latency <=
self.max_instant).
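The two predicate forms can be compared directly. This is a standalone sketch with plain `u64` instants (the real code uses the crate's `Time`/`Instant` types inside a `wait_for` closure):

```rust
// Canonical expiration predicate: a frame at `instant` is expired once the
// newest observed instant (`max_instant`) is at least `max_latency` ahead
// of it, i.e. instant + max_latency <= max_instant.
fn is_expired(max_instant: u64, instant: u64, max_latency: u64) -> bool {
    max_instant >= instant + max_latency
}

// The inverted form flagged by the review compares in the wrong direction:
// it is true for almost any fresh frame, so frames expire far too eagerly.
fn is_expired_buggy(max_instant: u64, instant: u64, max_latency: u64) -> bool {
    max_instant + max_latency >= instant
}

fn main() {
    // A fresh frame (instant close to max_instant) must not be expired...
    assert!(!is_expired(1_000, 900, 500));
    // ...but the inverted form wrongly reports it as expired.
    assert!(is_expired_buggy(1_000, 900, 500));
    // A sufficiently old frame is expired under the canonical form.
    assert!(is_expired(1_000, 100, 500));
    println!("predicates behave as described");
}
```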
In @rs/moq-lite/src/session.rs:
- Around line 71-74: In accept() the server is reading its own parameters
instead of the client's, so change the extraction to read from
client.parameters: locate the block that sets request_id_max using
get_int(ietf::SetupParameter::MaxRequestId.into()) and replace server.parameters
with client.parameters (mirror the connect() pattern) so the server honors the
incoming MaxRequestId; keep the mapping to ietf::RequestId and unwrap_or_default
behavior intact.
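The read-the-wrong-side bug can be reduced to a small sketch. The `Setup` struct and parameter key are simplified stand-ins (the real key is `ietf::SetupParameter::MaxRequestId` and the real values go through `ietf::RequestId`):

```rust
use std::collections::HashMap;

// Illustrative parameter key; stand-in for ietf::SetupParameter::MaxRequestId.
const MAX_REQUEST_ID: u64 = 0x2;

struct Setup {
    parameters: HashMap<u64, u64>,
}

// In accept(), the server must honor the *client's* advertised MaxRequestId,
// mirroring the connect() pattern. Reading `server.parameters` here was the
// bug: the server would read back its own value.
fn accept_request_id_max(client: &Setup, _server: &Setup) -> u64 {
    client
        .parameters
        .get(&MAX_REQUEST_ID)
        .copied()
        .unwrap_or_default()
}

fn main() {
    let client = Setup { parameters: HashMap::from([(MAX_REQUEST_ID, 100)]) };
    let server = Setup { parameters: HashMap::from([(MAX_REQUEST_ID, 8)]) };
    // The server honors the client's advertised limit (100), not its own (8).
    assert_eq!(accept_request_id_max(&client, &server), 100);
    println!("server honors client's MaxRequestId");
}
```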
🟡 Minor comments (13)
rs/moq-clock/src/clock.rs-96-99 (1)

96-99: Errors from `read_frame()` are silently ignored.

The pattern `while let Ok(Some(object))` exits the loop on both `Ok(None)` (end of stream) and `Err(...)` (actual error), silently swallowing errors. This is inconsistent with lines 87-92, which properly propagate errors for the first frame read. If an error occurs mid-stream (e.g., connection failure, decoding error), it will be silently ignored rather than surfaced to the caller.

Suggested fix to log or propagate errors

```diff
-    while let Ok(Some(object)) = group.read_frame().await {
+    loop {
+        match group.read_frame().await {
+            Ok(Some(object)) => {
+                let str = String::from_utf8_lossy(&object);
+                println!("{base}{str}");
+            }
+            Ok(None) => break,
+            Err(err) => {
+                tracing::warn!("failed to read frame: {:?}", err);
+                break;
+            }
+        }
+    }
-        let str = String::from_utf8_lossy(&object);
-        println!("{base}{str}");
-    }
```

Alternatively, if silent termination on error is intentional for this toy example, consider adding a brief comment to clarify.
rs/moq-relay/src/web.rs-281-284 (1)

281-284: Error type distinction is lost—internal errors now return NOT_FOUND.

The previous implementation likely distinguished between:

- `Ok(None)` → `NOT_FOUND` (track doesn't exist)
- `Err(_)` → `INTERNAL_SERVER_ERROR` (actual failure)

Now both cases return `NOT_FOUND`, which could mask real internal errors (network issues, timeouts, etc.) and make debugging harder.

Suggested fix to preserve error distinction

```diff
 let group = match track.next_group().await {
     Ok(Some(group)) => group,
-    _ => return Err(StatusCode::NOT_FOUND.into()),
+    Ok(None) => return Err(StatusCode::NOT_FOUND.into()),
+    Err(err) => {
+        tracing::warn!(%err, "failed to fetch group");
+        return Err(StatusCode::INTERNAL_SERVER_ERROR.into());
+    }
 };
```

rs/moq-lite/src/model/origin.rs-386-389 (1)
386-389: Verify error handling in cleanup task: `broadcast.closed().await.ok()` silently discards `Error::Cancel`.

The `BroadcastConsumer::closed()` method returns `Result<(), Error>` where errors (specifically `Error::Cancel`) indicate the wait was cancelled or timed out. Using `.ok()` suppresses this signal, causing the cleanup to proceed unconditionally via `root.lock().remove()` regardless of whether the broadcast actually closed or the wait was interrupted.

For cleanup logic, consider whether proceeding with removal is correct when `closed()` returns `Err(Error::Cancel)`. If the wait was cancelled, it may be safer to skip or defer cleanup rather than remove the broadcast unconditionally. Alternatively, if cleanup must always occur, add a comment explaining why errors are acceptable to ignore in this cleanup context.

rs/moq-lite/src/model/broadcast.rs-172-172 (1)
172-172: Fix typo in comment. "settigns" should be "settings".

📝 Proposed fix

```diff
-    // The publisher SHOULD replace them with their own settigns on OK.
+    // The publisher SHOULD replace them with their own settings on OK.
```

rs/hang/src/import/aac.rs-98-119 (1)
98-119: Verify behavior when `initialize` is called multiple times.

If `initialize()` is called when `self.track` is already `Some`, the existing track will be silently replaced without cleanup. Consider returning early or returning an error if already initialized.

🛡️ Proposed defensive check

```diff
 pub fn initialize<T: Buf>(&mut self, buf: &mut T) -> anyhow::Result<()> {
+    anyhow::ensure!(self.track.is_none(), "already initialized");
+    anyhow::ensure!(buf.remaining() >= 2, "AudioSpecificConfig must be at least 2 bytes");
```

js/lite/src/ietf/subscriber.ts-229-236 (1)
229-236: Frame construction is missing the `delta` property.

Looking at the `Frame` class from `js/lite/src/Frame.ts`, the constructor expects `{ payload, delta }` where `delta` is a `Time.Milli`. The current construction only passes `payload`:

```ts
const frame = new Frame({ payload: msg.payload });
```

This will likely cause `delta` to be `undefined`, which may lead to issues when the frame is processed downstream or encoded.

Proposed fix to include a default delta

```diff
-    // TODO support timestamps
-    const frame = new Frame({ payload: msg.payload });
+    // TODO support timestamps from IETF protocol
+    const frame = new Frame({ payload: msg.payload, delta: Time.Milli.zero });
```

You'll also need to import `Time`:

```ts
import * as Time from "../time.ts";
```

rs/hang/src/catalog/audio/mod.rs-55-56 (1)
55-56: Minor: Doc comment typo. The doc comment says "Remove a audio track"; it should be "Remove an audio track".

Fix typo

```diff
-    /// Remove a audio track from the catalog.
+    /// Remove an audio track from the catalog.
```

rs/hang/examples/video.rs-133-140 (1)
133-140: Minor typo in comment. Line 138 has a typo: "delver" should be "deliver".

📝 Proposed fix

```diff
-    // You can even tell the CDN if it should try to delver in group order.
+    // You can even tell the CDN if it should try to deliver in group order.
```

rs/moq-lite/src/model/subscriber.rs-139-145 (1)
139-145: Debug struct name inconsistency.

Similar to `Subscriber`, the `Debug` implementation for `Subscribers` names the struct `"TrackMetaConsumer"` instead of `"Subscribers"`.

Proposed fix

```diff
 impl fmt::Debug for Subscribers {
     fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
-        f.debug_struct("TrackMetaConsumer")
+        f.debug_struct("Subscribers")
             .field("max", &self.max.1.borrow().deref())
             .finish()
     }
 }
```

rs/moq-lite/src/model/expires.rs-44-50 (1)
44-50: Debug struct name mismatch.

The `Debug` implementation names the struct `"State"` instead of `"ExpiresProducer"`, which could cause confusion during debugging.

Proposed fix

```diff
 impl fmt::Debug for ExpiresProducer {
     fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
-        f.debug_struct("State")
+        f.debug_struct("ExpiresProducer")
             .field("state", &self.state.borrow().deref())
             .finish()
     }
 }
```

rs/moq-lite/src/model/subscriber.rs-35-42 (1)
35-42: Debug struct name inconsistency.

The `Debug` implementation for `Subscriber` names the struct `"TrackMetaProducer"` instead of `"Subscriber"`, which appears to be a leftover from refactoring.

Proposed fix

```diff
 impl fmt::Debug for Subscriber {
     fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
-        f.debug_struct("TrackMetaProducer")
+        f.debug_struct("Subscriber")
             .field("state", &self.max.borrow().deref())
             .field("current", &self.current)
             .finish()
     }
 }
```

rs/moq-lite/src/model/subscriber.rs-44-56 (1)
44-56: Consider making `Subscriber::new()` `pub(crate)` or documenting its intended use.

`Subscriber::new()` creates an isolated watch channel that's never used internally—all internal subscriptions go through `Subscribers::subscribe()` (lines 170-181), which properly shares state. The public `Subscriber::new()` method appears to be unused dead code unless intended for external consumers. Either restrict its visibility or add documentation explaining the distinction.

rs/hang/src/import/opus.rs-106-113 (1)
106-113: Potential panic on `unwrap()` if track removal fails.

The `unwrap()` on line 111 will panic if the track isn't found in the catalog. This could happen if initialization failed partway through or in edge cases during cleanup.

🔧 Suggested fix

```diff
 impl Drop for Opus {
     fn drop(&mut self) {
         let Some(mut track) = self.track.take() else { return };
         track.close().ok();
-        let config = self.catalog.lock().audio.remove(&track).unwrap();
-        tracing::debug!(track = %track.info(), ?config, "ended track");
+        if let Some(config) = self.catalog.lock().audio.remove(&track) {
+            tracing::debug!(track = %track.info(), ?config, "ended track");
+        }
     }
 }
```
🧹 Nitpick comments (22)
rs/moq-relay/src/main.rs (1)

41-41: Errors from spawned tasks are now silently dropped.

The change from async blocks with `.expect()` to `.boxed()` means errors returned by `cluster.run()` and `web.run()` are no longer propagated or logged. The `JoinHandle` returned by `tokio::spawn` is discarded, so if these futures return an error, it will be silently lost.

Consider either:

- Keeping the async block wrapper to log/handle errors, or
- Using `tokio::spawn` with error logging similar to the connection loop pattern on lines 74-79.

Suggested approach to handle errors

```diff
-    tokio::spawn(cluster.clone().run().boxed());
+    tokio::spawn(async move {
+        if let Err(err) = cluster.clone().run().await {
+            tracing::error!(%err, "cluster task failed");
+        }
+    });

-    tokio::spawn(web.run().boxed());
+    tokio::spawn(async move {
+        if let Err(err) = web.run().await {
+            tracing::error!(%err, "web server failed");
+        }
+    });
```

Also applies to: 54-54
js/signals/src/index.ts (1)

157-165: LGTM! Clean implementation.

The method correctly subscribes to the next signal change and disposes the listener after resolving. The implementation follows the same pattern as the existing `Signal.race()` method.

Optional: Consider adding leak detection to `changed()`, similar to `subscribe()`. The `subscribe()` method includes DEV-mode leak detection (lines 123-125) that warns when subscriber count reaches powers of 10 starting at 100. The `changed()` method doesn't have this check. With the new `promise()` method, it's now easier to accumulate one-time listeners in the `#changed` set, especially if signals never fire.

Consider adding similar leak detection to the `changed()` method at line 130:

```diff
 changed(fn: (value: T) => void): Dispose {
     this.#changed.add(fn);
+    if (DEV && this.#changed.size >= 100 && Number.isInteger(Math.log10(this.#changed.size))) {
+        throw new Error("signal has too many changed listeners; may be leaking");
+    }
     return () => this.#changed.delete(fn);
 }
```

rs/moq-lite/src/coding/writer.rs (1)
1-1: LGTM! Removed unnecessary Debug constraint.

The `Debug` bound was not used in the `encode` method implementation, so removing it simplifies the API without any functional impact.

Also applies to: 21-21
rs/hang/src/import/mod.rs (1)
17-19: Address the TODO: Make default latency configurable.The constant provides a reasonable default (30 seconds), but the TODO indicates this should be configurable. Consider introducing a configuration mechanism or builder pattern to allow runtime customization.
Do you want me to open an issue to track making this configurable, or would you like me to suggest an implementation approach?
js/lite/src/connection/connect.ts (1)

5-10: Keep `Time.Milli` end-to-end (avoid widening to `number`).

Right now `delay?: Time.Milli` gets combined with `500`/`0` and effectively becomes `number`, which weakens the unit-safety you're introducing. Consider casting defaults to `Time.Milli` and updating `connectWebSocket` to accept `Time.Milli`. Also make `SUPPORTED` readonly/`as const` to avoid accidental mutation.

Proposed diff

```diff
-const SUPPORTED = [Lite.Version.DRAFT_03, Lite.Version.DRAFT_02, Lite.Version.DRAFT_01, Ietf.Version.DRAFT_14];
+const SUPPORTED = [
+    Lite.Version.DRAFT_03,
+    Lite.Version.DRAFT_02,
+    Lite.Version.DRAFT_01,
+    Ietf.Version.DRAFT_14,
+] as const;
+const DEFAULT_WS_DELAY = 500 as Time.Milli;

 // Give QUIC a 500ms head start to connect before trying WebSocket, unless WebSocket has won in the past.
 // NOTE that QUIC should be faster because it involves 1/2 fewer RTTs.
-const headstart = !webtransport || websocketWon.has(url.toString()) ? 0 : (props?.websocket?.delay ?? 500);
+const headstart: Time.Milli =
+    !webtransport || websocketWon.has(url.toString()) ? (0 as Time.Milli) : (props?.websocket?.delay ?? DEFAULT_WS_DELAY);

 // TODO accept arguments to control the port/path used.
-async function connectWebSocket(url: URL, delay: number, cancel: Promise<void>): Promise<WebTransport | undefined> {
+async function connectWebSocket(url: URL, delay: Time.Milli, cancel: Promise<void>): Promise<WebTransport | undefined> {
     const timer = new Promise<void>((resolve) => setTimeout(resolve, delay));
```

Also applies to: 19-22, 50-56, 167-175
js/lite/src/connection/reload.ts (1)

2-2: Backoff math: consider integer-clamping before casting to `Time.Milli`.

If `multiplier` is non-integer, `this.#delay * multiplier` can produce fractional milliseconds; the `as Time.Milli` cast will hide that. If you want integer ms, clamp/round explicitly.

Proposed diff

```diff
-    this.#delay = Math.min(this.#delay * this.delay.multiplier, this.delay.max) as Time.Milli;
+    const next = Math.min(this.#delay * this.delay.multiplier, this.delay.max);
+    this.#delay = Math.max(0, Math.round(next)) as Time.Milli;
```

Also applies to: 6-18, 52-53, 60-61, 105-108
rs/moq-lite/src/lite/frame.rs (1)

9-17: FrameHeader chunking approach looks solid; consider tightening `size` visibility and adding boundary tests.

The `take(size)` + "caller reads `size` bytes next" pattern is good for chunking, but `pub size: usize` is easy to misuse (over/under-read). If you don't need external mutation, consider keeping it private and exposing a `payload_size()` accessor. Also worth adding tests that ensure Draft03 consumes only the delta bytes (leaving payload) and Draft01/02 consume none (leaving full payload).

Also applies to: 21-42, 44-64
rs/moq-lite/src/model/broadcast.rs (1)

96-126: Consider refactoring `close()` and `abort()` to reduce duplication.

Both methods follow nearly identical patterns: use `send_if_modified`, check for existing closure state, update state, and return a result. This could be extracted into a helper method.

♻️ Possible refactor

```diff
+    fn set_closed(&mut self, new_state: Result<(), Error>) -> Result<(), Error> {
+        let mut result = Ok(());
+        self.closed.send_if_modified(|closed| {
+            if let Some(existing) = closed.clone() {
+                result = Err(existing.err().unwrap_or(Error::Cancel));
+                return false;
+            }
+            *closed = Some(new_state);
+            true
+        });
+        result
+    }
+
     pub fn close(&mut self) -> Result<(), Error> {
-        let mut result = Ok(());
-        self.closed.send_if_modified(|closed| {
-            if let Some(closed) = closed.clone() {
-                result = Err(closed.err().unwrap_or(Error::Cancel));
-                return false;
-            }
-            *closed = Some(Ok(()));
-            true
-        });
-        result
+        self.set_closed(Ok(()))
     }

     pub fn abort(&mut self, err: Error) -> Result<(), Error> {
-        let mut result = Ok(());
-        self.closed.send_if_modified(|closed| {
-            if let Some(Err(err)) = closed.clone() {
-                result = Err(err);
-                return false;
-            }
-            *closed = Some(Err(err));
-            true
-        });
-        result
+        self.set_closed(Err(err))
     }
```

js/lite/examples/publish.ts (1)
45-46: Consider clarifying the delay purpose.

The 10-second delay keeps the track open, but the rationale might not be immediately clear to users following this example. Consider adding a more detailed comment explaining why the delay is necessary (e.g., to demonstrate that tracks can remain open for additional data, or to prevent premature connection closure).

📝 Suggested comment enhancement

```diff
-    // Keep the track open for more data
+    // Keep the track open to demonstrate persistent track connections.
+    // In production, you would typically keep the track open and write frames as they become available.
     await new Promise((resolve) => setTimeout(resolve, 10000));
```

rs/libmoq/src/api.rs (1)
189-199: Avoid brittle tuple-field access in FFI glue (`.0.consume()`).

`state.publish.get(broadcast)?.0.consume()` (line 196) couples this code to the internal tuple layout; a named method (e.g. `.broadcast().consume()`) would be more refactor-safe.

js/hang/src/watch/video/source.ts (1)
194-200: Subscription call update looks correct; optional shorthand.

Threading `maxLatency: this.latency` into `broadcast.subscribe({ ... })` is a good alignment with the new API. Minor: `name: name` can be just `name`.

js/lite/src/lite/publisher.ts (1)
173-180: Consider error handling for SubscribeUpdate decode failures.

If `SubscribeUpdate.decode` throws (e.g., malformed message, connection closed mid-read), the error will propagate to the outer catch block, which is appropriate. However, the loop structure with `Promise.any` could be clearer about the expected flow:

- `serving` resolves when track publishing completes
- `stream.reader.done()` signals stream end

The current flow seems correct, but the interaction between decode failures and the `done` signal could benefit from a brief comment.

rs/moq-lite/src/ietf/publish.rs (1)
201-208: Consider a more descriptive error for `forward = false`.

Returning `DecodeError::Unsupported` when `forward` is `false` is reasonable, but a more descriptive error or log message could help debugging interoperability issues with peers that might send `forward = false`.

js/lite/src/frame.ts (1)
28-30: Consider documenting that `toJson()` throws on invalid JSON.

The method will throw a `SyntaxError` if the payload isn't valid JSON. This is reasonable behavior, but callers should be aware. A brief JSDoc comment would help.

rs/moq-lite/src/lite/subscribe.rs (1)
94-98: Minor: Field ordering inconsistency.

In `SubscribeOk::decode_msg`, the struct is constructed with fields in the order `priority`, `max_latency`, `ordered`, but the struct declaration (lines 61-65) has `priority`, `ordered`, `max_latency`. While Rust doesn't require matching order, consistency improves readability.

Proposed fix

```diff
 Ok(Self {
     priority,
-    max_latency,
     ordered,
+    max_latency,
 })
```

rs/hang/src/catalog/produce.rs (2)
28-34: Potential panic on poisoned mutex lock.

Using `.unwrap()` on `self.current.lock()` will panic if the mutex is poisoned (e.g., a previous holder panicked). Consider using `.expect()` with a descriptive message or handling the poison case gracefully.

Proposed fix

```diff
 pub fn lock(&mut self) -> CatalogGuard<'_> {
     CatalogGuard {
-        catalog: self.current.lock().unwrap(),
+        catalog: self.current.lock().expect("catalog mutex poisoned"),
         track: &mut self.track,
     }
 }
```

62-71: Consider logging errors in Drop instead of silently ignoring.

The `Drop` implementation silently ignores errors from `write_frame` and `close()` with `.ok()`. While Drop shouldn't panic, logging these errors would aid debugging failed catalog updates.

Proposed enhancement

```diff
 impl Drop for CatalogGuard<'_> {
     fn drop(&mut self) {
         if let Ok(mut group) = self.track.append_group() {
             // TODO decide if this should return an error, or be impossible to fail
             let frame = self.catalog.to_string().expect("invalid catalog");
-            group.write_frame(frame, moq_lite::Time::now()).ok();
-            group.close().ok();
+            if let Err(e) = group.write_frame(frame, moq_lite::Time::now()) {
+                tracing::warn!("failed to write catalog frame: {e}");
+            }
+            if let Err(e) = group.close() {
+                tracing::warn!("failed to close catalog group: {e}");
+            }
         }
     }
 }
```

js/lite/src/lite/subscriber.ts (1)
132-144: Consider debouncing or coalescing rapid updates.

The `sendUpdate` function is called independently for each of `priority`, `maxLatency`, and `ordered` changes. If multiple properties change simultaneously, this sends multiple updates. Consider coalescing these updates.

rs/moq-lite/src/lite/publisher.rs (1)
246-255: Clever but non-obvious pattern for polling `FuturesUnordered`.

This async block continuously polls the `FuturesUnordered` tasks without ever matching the `select!` arm (returns `false`, then `unreachable!()`). Consider adding a brief comment explaining this pattern for future maintainers.

📝 Suggested documentation

```diff
 // This is a hack to avoid waking up the select! loop each time a group completes.
 // We poll all of the groups until they're all complete, only matching `else` when all are complete.
 // We don't use tokio::spawn so we can wait until done, clean up on Drop, and support non-tokio.
+// The async block below continuously drains completed tasks but never yields `true`,
+// so the arm is never selected. When `tasks` becomes empty, `next()` returns `None`
+// and the loop exits, causing the outer `else` arm to match.
 true = async {
     // Constantly poll all of the groups until they're all complete.
     while tasks.next().await.is_some() {}
     // Never match
     false
 } => unreachable!("never match"),
```

rs/moq-lite/src/ietf/subscriber.rs (1)
296-301: Consider using `info!` instead of `warn!` for subscription cancellation.

Subscription cancellation is expected behavior when consumers disconnect. Using `warn!` level may cause noise in production logs.

📝 Suggested change

```diff
 track.unused().await;
-tracing::warn!(id = %request_id, broadcast = %self.origin.as_ref().unwrap().absolute(&broadcast), track = %track.name, "subscribe cancelled");
+tracing::info!(id = %request_id, broadcast = %self.origin.as_ref().unwrap().absolute(&broadcast), track = %track.name, "subscribe cancelled");
 track.abort(Error::Cancel)?;
```

js/lite/src/lite/subscribe.ts (1)
166-183: Inconsistent parameter order in private methods.

`SubscribeOk.#encode(version, w)` and `#decode(version, r)` use reversed parameter order compared to `SubscribeUpdate` and `Subscribe`, which use `(w, version)` and `(r, version)` respectively. This inconsistency could cause confusion during maintenance.

♻️ Suggested fix for consistency

```diff
-    async #encode(version: Version, w: Writer) {
+    async #encode(w: Writer, version: Version) {
         switch (version) { ... }
     }

-    static async #decode(version: Version, r: Reader): Promise<SubscribeOk> {
+    static async #decode(r: Reader, version: Version): Promise<SubscribeOk> {
         ...
     }

     async encode(w: Writer, version: Version): Promise<void> {
-        return Message.encode(w, this.#encode.bind(this, version));
+        return Message.encode(w, (w) => this.#encode(w, version));
     }

     static async decode(r: Reader, version: Version): Promise<SubscribeOk> {
-        return Message.decode(r, (r) => SubscribeOk.#decode(version, r));
+        return Message.decode(r, (r) => SubscribeOk.#decode(r, version));
     }
```

rs/moq-lite/src/model/track.rs (1)
351-361: Debug implementation relies on implicit Deref.

Line 354 uses `&self.name`, but `TrackConsumer` has an `info: Track` field, not a `name` field. This works due to the `Deref<Target=Track>` implementation (lines 447-453), but it's subtle and could be confusing.

♻️ Suggested fix for clarity

```diff
 impl fmt::Debug for TrackConsumer {
 	fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
 		f.debug_struct("TrackConsumer")
-			.field("name", &self.name)
+			.field("info", &self.info)
 			.field("state", &self.state.borrow().deref())
 			.field("subscriber", &self.subscriber)
 			.field("index", &self.index)
 			.field("delivery", &self.delivery)
 			.finish()
 	}
 }
```
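The implicit-Deref behavior described above can be reproduced in a few lines. This is a minimal sketch with simplified stand-in types (the real `Track`/`TrackConsumer` carry more fields): `Deref` makes `consumer.name` resolve through the inner `info` field via auto-deref.

```rust
use std::ops::Deref;

// Simplified stand-ins for the real types.
struct Track {
    name: String,
}

struct TrackConsumer {
    info: Track,
}

// Deref lets `consumer.name` resolve to `consumer.info.name`.
impl Deref for TrackConsumer {
    type Target = Track;

    fn deref(&self) -> &Track {
        &self.info
    }
}

fn main() {
    let consumer = TrackConsumer {
        info: Track { name: "video".to_string() },
    };
    // There is no `name` field on TrackConsumer, yet this compiles via auto-deref.
    println!("{}", consumer.name);
}
```

This is exactly why `&self.name` works in the `Debug` impl even though the struct only declares `info`; naming the field explicitly (`&self.info`) makes the indirection visible to readers.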
```diff
 async requested(): Promise<Track | undefined> {
 	for (;;) {
 		// We use pop instead of shift because it's slightly more efficient.
-		const track = this.state.requested.peek().pop();
+		const track = this.#requested.peek().pop();
 		if (track) return track;

-		const closed = this.state.closed.peek();
+		const closed = this.#closed.peek();
 		if (closed instanceof Error) throw closed;
 		if (closed) return undefined;

-		await Signal.race(this.state.requested, this.state.closed);
+		await Signal.race(this.#requested, this.#closed);
 	}
 }
```
Potential issue: pop() mutates array without notification.
Using this.#requested.peek().pop() mutates the underlying array directly without calling set() or mutate(), so subscribers to #requested won't be notified of the change. This could cause Signal.race() to miss updates if another caller is also waiting.
Consider using mutate() for consistency with line 58-60.
Proposed fix

```diff
 async requested(): Promise<Track | undefined> {
 	for (;;) {
 		// We use pop instead of shift because it's slightly more efficient.
-		const track = this.#requested.peek().pop();
+		let track: Track | undefined;
+		this.#requested.mutate((requested) => {
+			track = requested.pop();
+		});
 		if (track) return track;
 		const closed = this.#closed.peek();
```
const closed = this.#closed.peek();📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```ts
	async requested(): Promise<Track | undefined> {
		for (;;) {
			// We use pop instead of shift because it's slightly more efficient.
			let track: Track | undefined;
			this.#requested.mutate((requested) => {
				track = requested.pop();
			});
			if (track) return track;
			const closed = this.#closed.peek();
			if (closed instanceof Error) throw closed;
			if (closed) return undefined;
			await Signal.race(this.#requested, this.#closed);
		}
	}
```
🤖 Prompt for AI Agents
In @js/lite/src/broadcast.ts around lines 28 - 40, The requested() loop
currently calls this.#requested.peek().pop(), which mutates the internal array
without notifying subscribers; change it to use the same mutation API as
elsewhere (e.g., call this.#requested.mutate(arr => arr.pop()) or otherwise
read-and-update via this.#requested.set/mutate) so that observers are notified;
ensure you still return the popped Track when non-undefined and preserve the
existing closed-check and Signal.race logic.
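The hazard flagged here — mutating the value returned by `peek()` without going through `mutate()` — can be modeled in any signal abstraction that tracks a change counter. The sketch below is a hypothetical, single-threaded Rust stand-in for the JS `Signal` type (the real one is asynchronous); it shows that a `pop()` on the peeked value leaves the version untouched, so any waiter polling the version would never wake.

```rust
use std::cell::{Cell, RefCell};

// Tiny stand-in for the JS `Signal` type (illustrative API, not the real one).
struct Signal<T> {
    value: RefCell<T>,
    version: Cell<u64>,
}

impl<T> Signal<T> {
    fn new(value: T) -> Self {
        Self { value: RefCell::new(value), version: Cell::new(0) }
    }

    // Direct access to the value: mutations here are NOT observable.
    fn peek(&self) -> std::cell::RefMut<'_, T> {
        self.value.borrow_mut()
    }

    // Mutation through the signal: the version bump is what waiters observe.
    fn mutate(&self, f: impl FnOnce(&mut T)) {
        f(&mut self.value.borrow_mut());
        self.version.set(self.version.get() + 1);
    }

    fn version(&self) -> u64 {
        self.version.get()
    }
}

fn main() {
    let requested = Signal::new(vec![1, 2, 3]);

    // Mutating through `peek` leaves the version untouched: a waiter
    // polling the version would miss this change entirely.
    requested.peek().pop();
    assert_eq!(requested.version(), 0);

    // `mutate` performs the same pop but records a new version.
    let mut popped = None;
    requested.mutate(|v| popped = v.pop());
    assert_eq!(popped, Some(2));
    assert_eq!(requested.version(), 1);
}
```

This is the same reasoning behind the suggested fix: route the `pop()` through `mutate()` so other callers racing on the signal are notified.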
```diff
-mod catalog;
-mod container;
-mod error;

+pub mod catalog;
+pub mod feedback;
+pub mod import;
+pub mod model;

+// export the moq-lite version in use
+pub use moq_lite;

 pub use catalog::*;
-pub use container::*;
 pub use error::*;
```
Potential breaking change: removal of hang::model exports + new public re-exports need a release/compat plan.
If downstream users relied on hang::model::*, consider a temporary pub mod model { pub use crate::catalog::*; ... } shim (deprecated) or ensure this ships with an appropriate semver major bump + release notes. Also sanity-check whether pub use moq_lite; is an intentional part of the public API contract (it increases coupling).
🤖 Prompt for AI Agents
In @rs/hang/src/lib.rs around lines 17 - 29, The review warns that removing the
old hang::model exports is a breaking change; restore compatibility by adding a
temporary deprecated shim module named model that re-exports the previous public
items (e.g., add "pub mod model { #[deprecated(note = \"Re-exported for
compatibility; will be removed in a future major release\")] pub use
crate::catalog::*; pub use crate::container::*; pub use crate::error::*; }" or
equivalent) so downstream code using hang::model::* continues to compile, and
either prepare a semver-major release with release notes if you intend to remove
it or keep the shim for one release cycle; also audit the "pub use moq_lite;"
re-export and decide whether it should remain part of the public API (remove the
pub re-export or document/justify it) and update CHANGELOG/release plan
accordingly.
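The deprecated-shim idea suggested above is a standard Rust pattern. Here is a self-contained sketch under simplified assumptions (the module and type names are illustrative, not the crate's real API): old code using the `model::` path keeps compiling, while the deprecation attribute warns at every call site until the shim is removed in a major release.

```rust
// A sketch of the suggested compatibility shim: the old `model` path
// re-exports items from their new homes, with a deprecation note.
mod catalog {
    #[derive(Debug, PartialEq)]
    pub struct Catalog(pub &'static str);
}

#[deprecated(note = "Use the top-level modules instead; `model` will be removed in a future major release")]
pub mod model {
    pub use crate::catalog::*;
}

fn main() {
    // Old code keeps compiling, with a deprecation warning at the call site.
    #[allow(deprecated)]
    let old_path = model::Catalog("kept for compatibility");
    let new_path = catalog::Catalog("kept for compatibility");
    assert_eq!(old_path, new_path);
    println!("shim resolves to the same type");
}
```

Because the shim is a pure re-export, both paths name the same type, so downstream code mixing old and new imports still interoperates during the transition window.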
```diff
 use std::sync::Arc;

-use crate::coding;
+use crate::{coding, TimeOverflow};
```
🧩 Analysis chain (scripts executed against moq-dev/moq)

```shell
# First, confirm the structure of error.rs file
head -n 20 rs/moq-lite/src/error.rs
wc -l rs/moq-lite/src/error.rs

# Read the full error.rs file to verify the claims
cat -n rs/moq-lite/src/error.rs

# Search for to_code mappings and error code usage
rg -n "to_code" rs -S --context=3
rg -n "(22|23|21)" rs -S
rg -n "Expired|TimeOverflow" rs js -S

# Look for error code documentation or centralization
fd -e md -e txt -e rs | xargs rg -l "error.*code|wire.*code" | head -20

# Search for JS error implementations
fd -e js -e ts | xargs rg -l "error|code" | grep -i "js\|javascript" | head -10
rg -n "Expired|TimeOverflow|error.*code" js --type-list

# Look for protocol spec or documentation files
fd -e md -e txt -e spec | xargs rg -l "error.*code|wire.*protocol" 2>/dev/null | head -10

# Search more specifically in js directory structure
git ls-files js/ | head -20

# Check if there are error mappings in js implementations
rg -n "22|23|code.*=" js -S --context=2 2>/dev/null | head -50

# Check the JS error implementation
cat -n js/lite/src/util/error.ts

# Search for any error code constants or mappings in JS
rg -n "22\|23\|21\|Expired\|TimeOverflow" js -S --context=2

# Look for protocol spec or error documentation
fd -e md | xargs grep -l "error\|code" | head -10
cat README.md

# Check the protocol spec files for error code documentation
cat doc/concepts/protocol.md | head -100
rg -n "error.*code\|code.*mapping\|22\|23" doc/ -S --context=2

# Look for any constants or centralized error code documentation
rg -n "const\|error.*code\|EXPIRED\|TIME_OVERFLOW" rs -S | grep -i "const\|error"

# Look more carefully at JS lite error handling
find js/lite -name "*.ts" -o -name "*.js" | xargs rg -l "error\|Error" | head -10

# Check if there's any protocol-level error handling in JS
rg -n "code\|error" js/lite/src/lite/session.ts -S --context=3

# Search for any error code constants across the entire codebase
rg -n "^\s*(const|let|var)\s+.*=(.*22|.*23|.*21)" rs js -S
```
Document and centralize error code allocation strategy for wire protocol stability.
The error codes 22 and 23 are now sent over the wire via to_code(); code 21 is deliberately skipped. While these are currently defined only in the Rust implementation, this allocation should be documented (e.g., in protocol specs or a centralized constants file) to prevent accidental collisions and clarify the reservation strategy as the codebase evolves. Consider adding a doc comment explaining why code 21 is reserved/unused.
🤖 Prompt for AI Agents
In @rs/moq-lite/src/error.rs at line 3, Add a centralized, documented place for
wire-error codes and reference it from the existing error conversions: create a
new constants module or enum (e.g., WireErrorCode or constants like
ERROR_CODE_22, ERROR_CODE_23, and RESERVED_CODE_21) and move the numeric
literals used in to_code() into those names; update any conversions that
currently return 22/23 (and skip 21) to use the new constants and add a doc
comment on the reserved/unused code 21 explaining why it is skipped; also update
crate imports (e.g., where TimeOverflow and coding are used) to reference the
new module so the allocation strategy for wire protocol codes is explicit and
discoverable.
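The centralization suggested in the prompt above can be sketched concretely. This is an illustrative, self-contained example (the constant names and the `Error` enum are assumptions, not the crate's real definitions); only the numeric values — 22 and 23 in use, 21 deliberately skipped — come from the review:

```rust
// A sketch of centralizing wire error codes, with the reserved gap made explicit.
mod wire_code {
    /// Code 21 is deliberately skipped; documenting the gap prevents
    /// accidental reuse by a future error variant.
    pub const RESERVED_21: u64 = 21;
    pub const EXPIRED: u64 = 22;
    pub const TIME_OVERFLOW: u64 = 23;
}

#[derive(Debug)]
enum Error {
    Expired,
    TimeOverflow,
}

impl Error {
    // The single place where variants map to on-the-wire numbers.
    fn to_code(&self) -> u64 {
        match self {
            Error::Expired => wire_code::EXPIRED,
            Error::TimeOverflow => wire_code::TIME_OVERFLOW,
        }
    }
}

fn main() {
    assert_eq!(Error::Expired.to_code(), 22);
    assert_eq!(Error::TimeOverflow.to_code(), 23);
    assert_eq!(wire_code::RESERVED_21, 21);
    println!("wire codes centralized");
}
```

Keeping the numbers behind named constants also gives the JS implementation a single list to mirror, which is the main defense against cross-language drift in the wire protocol.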
```diff
 pub fn get_int(&self, kind: u64) -> Option<u64> {
     assert!(kind.is_multiple_of(2));
     self.ints.get(&kind).copied()
 }

-pub fn set_varint(&mut self, kind: ParameterVarInt, value: u64) {
-    self.vars.insert(kind, value);
+pub fn set_int(&mut self, kind: u64, value: u64) {
+    assert!(kind.is_multiple_of(2));
+    self.ints.insert(kind, value);
 }

-pub fn get_bytes(&self, kind: ParameterBytes) -> Option<&[u8]> {
+pub fn get_bytes(&self, kind: u64) -> Option<&[u8]> {
+    assert!(!kind.is_multiple_of(2));
     self.bytes.get(&kind).map(|v| v.as_slice())
 }

-pub fn set_bytes(&mut self, kind: ParameterBytes, value: Vec<u8>) {
+pub fn set_bytes(&mut self, kind: u64, value: Vec<u8>) {
+    assert!(!kind.is_multiple_of(2));
     self.bytes.insert(kind, value);
 }
```
Replace `assert!` with proper error handling in public API methods.

Using `assert!` in public API methods will cause panics when callers provide invalid parameter IDs. Library code should never panic on invalid user input.

The JavaScript version (js/lite/src/ietf/parameters.ts) properly throws errors with descriptive messages. The Rust version should either:

- Return `Result<T, Error>` from these methods, or
- Use debug assertions if these are internal invariants
🔧 Proposed fix using Result

```diff
-pub fn get_int(&self, kind: u64) -> Option<u64> {
-    assert!(kind.is_multiple_of(2));
-    self.ints.get(&kind).copied()
+pub fn get_int(&self, kind: u64) -> Result<Option<u64>, DecodeError> {
+    if !kind.is_multiple_of(2) {
+        return Err(DecodeError::InvalidParameter);
+    }
+    Ok(self.ints.get(&kind).copied())
 }

-pub fn set_int(&mut self, kind: u64, value: u64) {
-    assert!(kind.is_multiple_of(2));
-    self.ints.insert(kind, value);
+pub fn set_int(&mut self, kind: u64, value: u64) -> Result<(), DecodeError> {
+    if !kind.is_multiple_of(2) {
+        return Err(DecodeError::InvalidParameter);
+    }
+    self.ints.insert(kind, value);
+    Ok(())
 }

-pub fn get_bytes(&self, kind: u64) -> Option<&[u8]> {
-    assert!(!kind.is_multiple_of(2));
-    self.bytes.get(&kind).map(|v| v.as_slice())
+pub fn get_bytes(&self, kind: u64) -> Result<Option<&[u8]>, DecodeError> {
+    if kind.is_multiple_of(2) {
+        return Err(DecodeError::InvalidParameter);
+    }
+    Ok(self.bytes.get(&kind).map(|v| v.as_slice()))
 }

-pub fn set_bytes(&mut self, kind: u64, value: Vec<u8>) {
-    assert!(!kind.is_multiple_of(2));
-    self.bytes.insert(kind, value);
+pub fn set_bytes(&mut self, kind: u64, value: Vec<u8>) -> Result<(), DecodeError> {
+    if kind.is_multiple_of(2) {
+        return Err(DecodeError::InvalidParameter);
+    }
+    self.bytes.insert(kind, value);
+    Ok(())
 }
```

```rust
assert!(matches!(
	self.filter_type,
	FilterType::LargestObject | FilterType::NextGroup
));
```
Replace assert! with proper error handling to avoid panics.
Using assert! in the encoding path will cause a panic if an unexpected filter_type is passed. This could crash the application at runtime rather than returning an error.
🔧 Suggested approach
Consider returning an error or ensuring this invariant is enforced at construction time rather than panicking during encoding. If this is truly an invariant that should never be violated, document it clearly and consider using debug_assert! for development-only checks.
```diff
-assert!(matches!(
-    self.filter_type,
-    FilterType::LargestObject | FilterType::NextGroup
-));
+// Only LargestObject and NextGroup are supported for PublishOk
+debug_assert!(
+    matches!(self.filter_type, FilterType::LargestObject | FilterType::NextGroup),
+    "PublishOk only supports LargestObject or NextGroup filter types"
+);
```
🤖 Prompt for AI Agents
In @rs/moq-lite/src/ietf/publish.rs around lines 240 - 243, The assert! on
self.filter_type in the encoding path (checking FilterType::LargestObject |
FilterType::NextGroup) must be replaced with proper error handling: change the
panic to returning a Result::Err from the encoder method (or an appropriate
PublishError) when filter_type is not one of the expected variants, or enforce
the invariant at construction and replace with debug_assert! if you only want a
development-time check; update the encoder function signature to propagate the
error if needed and use the unique symbols self.filter_type,
FilterType::LargestObject, FilterType::NextGroup, and the encoding method name
to locate and modify the code.
```rust
// TODO: This is wrong, as we haven't actually gotten a response from the origin yet.
control
	.send(ietf::SubscribeOk {
		request_id,
		track_alias: request_id.0, // NOTE: using track alias as request id for now
		delivery_timeout: delivery.max_latency,
		group_order: if delivery.ordered {
			GroupOrder::Ascending
		} else {
			GroupOrder::Descending
		},
	})
	.ok();
```
🧩 Analysis chain (scripts executed against moq-dev/moq)

```shell
# First, check the file structure in the rs directory
git ls-files rs/moq-lite/src/ietf/ | head -20

# Read the publisher.rs file around the mentioned lines to verify the code and context
wc -l rs/moq-lite/src/ietf/publisher.rs

# Read the relevant section of the file with context
sed -n '100,150p' rs/moq-lite/src/ietf/publisher.rs | cat -n

# Read more context to see the full function
sed -n '60,140p' rs/moq-lite/src/ietf/publisher.rs | cat -n

# Look for the run_track function definition
grep -n "fn run_track" rs/moq-lite/src/ietf/publisher.rs

# Search for "origin" references in the file to understand the context
rg -n "origin" rs/moq-lite/src/ietf/publisher.rs

# Read the run_track function implementation
sed -n '165,250p' rs/moq-lite/src/ietf/publisher.rs | cat -n

# Check if there are any other comments or error handling related to this issue
rg -n "SubscribeError\|subscribe" rs/moq-lite/src/ietf/publisher.rs | head -20

# Look at the broader picture - check if there's error handling after SubscribeOk
sed -n '55,100p' rs/moq-lite/src/ietf/publisher.rs | cat -n
```
SubscribeOk sent before track data confirmation.
The TODO on line 118 correctly identifies that SubscribeOk is sent before run_track() has confirmed the track can be served. If run_track() encounters an error, the subscriber believes the subscription is active when it isn't, violating the protocol state machine.
Defer SubscribeOk until the track is confirmed readable, or implement error recovery to send SubscribeError if run_track() fails.
🤖 Prompt for AI Agents
In @rs/moq-lite/src/ietf/publisher.rs around lines 118 - 130, The code currently
calls control.send(ietf::SubscribeOk { ... }) before run_track() confirms the
track is readable, which can misreport subscription state; change the flow so
you first attempt to start/validate the track via run_track(request_id) (or the
async task that opens the track) and only send control.send(ietf::SubscribeOk {
request_id, track_alias: request_id.0, delivery_timeout: delivery.max_latency,
group_order: ... }) after run_track succeeds, and if run_track returns an error
send control.send(ietf::SubscribeError { request_id, code, message }) (or
equivalent error response) so subscribers see a failure instead of a premature
success.
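The validate-then-acknowledge ordering described above can be sketched in a few lines. This is illustrative only: `run_track`, `SubscribeOk`, and `SubscribeError` mirror the names in the review, but the types and logic here are simplified stand-ins, not the crate's real API.

```rust
#[derive(Debug, PartialEq)]
enum Reply {
    SubscribeOk { request_id: u64 },
    SubscribeError { request_id: u64, code: u64 },
}

// Stand-in for opening/validating the track before acknowledging.
fn run_track(exists: bool) -> Result<(), u64> {
    if exists { Ok(()) } else { Err(404) }
}

fn handle_subscribe(request_id: u64, exists: bool) -> Reply {
    // Validate first; only then report success to the subscriber.
    match run_track(exists) {
        Ok(()) => Reply::SubscribeOk { request_id },
        Err(code) => Reply::SubscribeError { request_id, code },
    }
}

fn main() {
    assert_eq!(handle_subscribe(1, true), Reply::SubscribeOk { request_id: 1 });
    assert_eq!(
        handle_subscribe(2, false),
        Reply::SubscribeError { request_id: 2, code: 404 }
    );
    println!("ok only after validation");
}
```

The point is that the subscriber's view of the protocol state machine only ever transitions to "active" when the publisher can actually serve the track; a failed validation produces an explicit error instead of a premature success.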
```rust
impl Drop for Subscriber {
	fn drop(&mut self) {
		self.max.send_if_modified(|state| {
			let old_max_latency = state.max_latency.peek().map(|max| max.1).copied().unwrap_or_default();
			let old_priority = state
				.priority
				.peek()
				.map(|priority| priority.1)
				.copied()
				.unwrap_or_default();
			let old_ordered = state
				.ordered
				.peek()
				.map(|ordered| ordered.1)
				.copied()
				.unwrap_or_default();

			state.priority.remove(&self.id).expect("id not found");
			state.max_latency.remove(&self.id).expect("id not found");
			state.ordered.remove(&self.id).expect("id not found");

			if let (Some((_, new_max_latency)), Some((_, new_priority)), Some((_, new_ordered))) =
				(state.max_latency.peek(), state.priority.peek(), state.ordered.peek())
			{
				return old_max_latency > *new_max_latency
					|| old_priority > *new_priority
					|| old_ordered != *new_ordered;
			}

			// Always send if there are no more consumers.
			true
		});
	}
}
```
Potential panic in Drop on missing ID.
Using .expect("id not found") in Drop will panic if the ID isn't in the queues. While this shouldn't happen in normal operation, panicking in Drop can cause issues during unwinding. Consider logging a warning instead.
Proposed fix

```diff
-state.priority.remove(&self.id).expect("id not found");
-state.max_latency.remove(&self.id).expect("id not found");
-state.ordered.remove(&self.id).expect("id not found");
+if state.priority.remove(&self.id).is_none() {
+    tracing::warn!(id = self.id, "subscriber id not found in priority queue during drop");
+    return false;
+}
+if state.max_latency.remove(&self.id).is_none() {
+    tracing::warn!(id = self.id, "subscriber id not found in max_latency queue during drop");
+    return false;
+}
+if state.ordered.remove(&self.id).is_none() {
+    tracing::warn!(id = self.id, "subscriber id not found in ordered queue during drop");
+    return false;
+}
```
📝 Committable suggestion

```rust
impl Drop for Subscriber {
	fn drop(&mut self) {
		self.max.send_if_modified(|state| {
			let old_max_latency = state.max_latency.peek().map(|max| max.1).copied().unwrap_or_default();
			let old_priority = state
				.priority
				.peek()
				.map(|priority| priority.1)
				.copied()
				.unwrap_or_default();
			let old_ordered = state
				.ordered
				.peek()
				.map(|ordered| ordered.1)
				.copied()
				.unwrap_or_default();

			if state.priority.remove(&self.id).is_none() {
				tracing::warn!(id = self.id, "subscriber id not found in priority queue during drop");
				return false;
			}
			if state.max_latency.remove(&self.id).is_none() {
				tracing::warn!(id = self.id, "subscriber id not found in max_latency queue during drop");
				return false;
			}
			if state.ordered.remove(&self.id).is_none() {
				tracing::warn!(id = self.id, "subscriber id not found in ordered queue during drop");
				return false;
			}

			if let (Some((_, new_max_latency)), Some((_, new_priority)), Some((_, new_ordered))) =
				(state.max_latency.peek(), state.priority.peek(), state.ordered.peek())
			{
				return old_max_latency > *new_max_latency
					|| old_priority > *new_priority
					|| old_ordered != *new_ordered;
			}

			// Always send if there are no more consumers.
			true
		});
	}
}
```
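The "never panic in Drop" pattern above can be demonstrated with a minimal, self-contained example. `Registry` and `Subscriber` here are simplified stand-ins for the shared subscriber state (the real code uses a watch channel and `tracing`); the point is that a missing entry is logged rather than unwrapped, so a `Drop` that runs during unwinding cannot turn into a double panic and abort.

```rust
use std::cell::RefCell;
use std::collections::HashMap;
use std::rc::Rc;

struct Subscriber {
    id: u64,
    registry: Rc<RefCell<HashMap<u64, u8>>>,
}

impl Drop for Subscriber {
    fn drop(&mut self) {
        // `is_none()` plus a log line replaces `.expect("id not found")`,
        // so a missing id never panics mid-Drop.
        if self.registry.borrow_mut().remove(&self.id).is_none() {
            eprintln!("subscriber {} not found during drop", self.id);
        }
    }
}

fn main() {
    let registry = Rc::new(RefCell::new(HashMap::from([(1u64, 0u8)])));

    // Normal case: the entry is removed.
    drop(Subscriber { id: 1, registry: registry.clone() });
    assert!(registry.borrow().is_empty());

    // Missing id: logs a warning instead of panicking mid-Drop.
    drop(Subscriber { id: 2, registry: registry.clone() });
    println!("drop completed without panicking");
}
```

Returning early with `false` in the real `send_if_modified` closure (as the suggestion does) additionally avoids notifying watchers based on inconsistent state.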
Actionable comments posted: 10
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (4)
rs/moq-lite/src/ietf/subscribe.rs (3)
106-108: Bug: `group_order` field is ignored during encoding. The encode method hardcodes
`GroupOrder::Descending` instead of using `self.group_order`. This means the struct's `group_order` field value is never actually sent on the wire. 🐛 Proposed fix
```diff
 self.subscriber_priority.encode(w, version);
-GroupOrder::Descending.encode(w, version);
+self.group_order.encode(w, version);
 true.encode(w, version); // forward
```
422-435: Misleading comment in test bytes. The comment
`// INVALID: expires = 5` is incorrect. Based on the `SubscribeOk` decode order (request_id, track_alias, expires...), byte `0x05` is actually `track_alias`, and `0x02` is `expires`. The test still works correctly (expires=2 triggers the error), but the comment is misleading. 📝 Proposed fix for comments
```diff
 #[rustfmt::skip]
 let invalid_bytes = vec![
     0x01, // subscribe_id
-    0x05, // INVALID: expires = 5
-    0x02, // group_order
+    0x05, // track_alias
+    0x02, // INVALID: expires = 2 (non-zero)
+    0x02, // group_order
     0x00, // content_exists
     0x00, // num_params
 ];
```

Note: The fix also adds a missing `group_order` byte that should precede `content_exists` based on the decode order.
group_orderbyte that should precedecontent_existsbased on the decode order.
403-420: Test bytes do not match Subscribe wire format; missing requiredforwardfield.The test is labeled with incorrect comments that misidentify fields. More critically:
- Subscribe has no
track_aliasfield (that's in SubscribeOk); the comment at byte 1 is wrong.- The required
forwardfield (bool) is missing betweengroup_orderandfilter_type. The decoder expects this field and returnsUnsupportedif false, but the test bytes skip it entirely.- The namespace encoding appears incorrect—the bytes would parse as namespace count=2 instead of 1.
The test passes only because it encounters an invalid filter_type value (0x99), masking the earlier structural issues. Consider updating the byte sequence to correctly encode a valid partial Subscribe message up to the invalid filter_type, matching the order: request_id, namespace, track_name, subscriber_priority, group_order, forward, filter_type.
rs/moq-lite/src/model/broadcast.rs (1)
397-456: The test asserts a known-bad behavior (“BUG … assertion will fail”)—this can’t ship as-is.
Either fix the underlying bug or mark this test#[ignore]and track it separately.Minimal containment option
#[tokio::test] + #[ignore = "Known issue: requested track cleanup/unused semantics not finalized yet"] async fn requested_unused() {
🤖 Fix all issues with AI agents
In @rs/hang/src/import/avc3.rs:
- Around line 264-271: The Drop impl for Avc3 currently unwraps the removed
catalog entry and can panic if the track is missing; change the cleanup to
handle absence gracefully by replacing the unwrap on
self.catalog.lock().video.remove(&track).unwrap() with a safe check (e.g., match
or if let) that logs a warning/error via tracing::warn or tracing::error when no
config is found and only calls tracing::info! when a config was actually
returned; keep the existing early return when self.track is None and ensure
track.close().ok() still runs before attempting the catalog removal.
In @rs/hang/src/import/opus.rs:
- Around line 106-113: The Drop implementation for Opus uses unwrap() when
removing the track config (self.catalog.lock().audio.remove(&track).unwrap()),
which can panic if the entry is missing; replace the unwrap with a safe pattern
such as matching the removal result (if let Some(config) =
self.catalog.lock().audio.remove(&track) { tracing::debug!(track =
%track.info(), ?config, "ended track"); } else { tracing::warn!(track =
%track.info(), "ended track but no catalog entry found"); }) so cleanup never
panics; update the drop method in impl Drop for Opus accordingly, mirroring the
safer approach used for avc3.rs.
In @rs/moq-lite/src/model/expires.rs:
- Around line 44-49: The Debug impl for ExpiresProducer is wrong: it uses
debug_struct("State") and only logs self.state, omitting the delivery field;
change the debug_struct name to "ExpiresProducer" (or the actual type name) and
add a .field("delivery", &self.delivery) alongside the existing .field("state",
&self.state.borrow().deref()) so both state and delivery are included in the
debug output for ExpiresProducer.
In @rs/moq-lite/src/model/frame.rs:
- Around line 17-27: The doc for Frame::instant incorrectly states
"milliseconds" while the field type is Time (which is nanosecond/unix-based);
update the comment or code to be consistent: either change the doc on the
instant field to state the actual Time semantics (nanoseconds / unix epoch /
track-relative as implemented) or convert/store a millisecond representation
(e.g., use a Millis type or call Time::to_millis when setting instant) so
downstream users aren't misled; locate the instant field in the Frame struct and
adjust the documentation or conversion logic accordingly.
- Around line 56-81: Replace the impls that silently truncate by removing impl
From<u64> (and optionally From<u32>) and adding impl TryFrom<u64> for Frame and
impl TryFrom<u32> for Frame that use usize::try_from(size) to convert the
incoming integer and return Result<Frame, std::num::TryFromIntError>; build the
Frame with Time::now() and the converted usize on Ok, and propagate the
conversion error on Err; keep the safe impl From<u16> as-is.
In @rs/moq-lite/src/model/track.rs:
- Around line 332-336: Docs state TrackConsumer can be cloned but the
#[derive(Clone)] on the TrackConsumer struct is commented out; either restore
the derive or update the docs. Inspect the TrackConsumer struct and its fields
(symbol: TrackConsumer) and if all fields are Clone, re-enable #[derive(Clone)]
so cloning is supported; if some fields are not Clone or cloning is no longer
intended, update the doc comment to remove or rephrase the "If the consumer is
cloned..." sentence and ensure the comments accurately reflect current
semantics.
- Around line 524-572: The loop in next_group treats the result of
expires.wait_expired(...) as if it always means expiration; instead check its
return value and handle cancellation by returning an Err rather than proceeding
as if the group expired. Locate next_group and the inner async branch that calls
expires.wait_expired(first.sequence, instant).await, inspect the enum/result it
returns (Cancel vs Expired), and if it indicates cancellation propagate an error
(Err(...)) out of next_group; only on an actual Expired outcome should you pop
and return the buffered group and update self.expected as currently done.
- Around line 373-406: In next_group, when iterating state.groups, a None entry
currently doesn’t advance self.index, which can cause wait_for to repeatedly
return and spin; fix by advancing self.index for None placeholders the same way
you do for Some: set self.index = state.offset + i + 1 before continuing so the
loop makes progress, keep the existing expired-group logic for Some branches,
and ensure this change is applied inside the for i in
self.index.saturating_sub(state.offset)..state.groups.len() loop so the index
always moves past processed/expired None entries.
- Around line 387-395: The call to
self.expires.is_expired(group.consumer.sequence, info.max_latency) passes a
Duration (info.max_latency) where expires::is_expired expects an absolute
timestamp; add a public accessor on GroupConsumer (e.g., max_instant() or
max_timestamp()) that returns the group's maximum instant and use that when
calling is_expired instead of info.max_latency, and update any tests/usages
accordingly; locate the is_expired usage in track.rs (the return block that
currently calls is_expired) and modify it to call
is_expired(group.consumer.sequence, group.consumer.max_instant()) after adding
the accessor in the GroupConsumer impl.
In @rs/moq-relay/src/web.rs:
- Around line 281-284: The match on track.next_group() currently lumps Ok(None)
and Err(...) into the wildcard arm and returns StatusCode::NOT_FOUND, hiding
internal errors; change the match to handle three cases explicitly:
Ok(Some(group)) => proceed, Ok(None) => return NOT_FOUND, and Err(e) => map the
error to an internal server error (e.g., return
StatusCode::INTERNAL_SERVER_ERROR or convert e into a proper error response) so
internal failures from track.next_group() are not mistaken for missing
resources; locate the call to track.next_group() and update the match arms
accordingly.
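The cancellation-vs-expiration distinction called out in the `next_group` comment above can be sketched in isolation. Note this is a hypothetical sketch: `WaitOutcome`, `Error::Cancel`, and `handle_wait` stand in for whatever `expires.wait_expired(...)` actually returns in the crate, not the real API.

```rust
// Hypothetical stand-ins for the real wait_expired() return type.
#[derive(Debug, PartialEq)]
enum WaitOutcome {
	Expired,   // the group's latency window genuinely elapsed
	Cancelled, // the wait was aborted (e.g. the subscription closed)
}

#[derive(Debug, PartialEq)]
enum Error {
	Cancel,
}

/// Only a genuine expiration should let the caller pop and return the
/// buffered group; a cancellation must propagate as an error instead of
/// being silently treated as an expiry.
fn handle_wait(outcome: WaitOutcome) -> Result<(), Error> {
	match outcome {
		WaitOutcome::Expired => Ok(()),
		WaitOutcome::Cancelled => Err(Error::Cancel),
	}
}

fn main() {
	assert_eq!(handle_wait(WaitOutcome::Expired), Ok(()));
	assert_eq!(handle_wait(WaitOutcome::Cancelled), Err(Error::Cancel));
}
```
The point of the shape is that the `match` is exhaustive: a new variant added to the wait result later would force the caller to decide explicitly, rather than falling through to the "expired" path.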
🧹 Nitpick comments (8)
rs/hang/src/import/mod.rs (1)
19-21: LGTM! The constant is appropriately placed in the `hang` layer for media-specific defaults. Using `from_secs_unchecked(30)` is safe here since 30 seconds is well within the representable range. The TODO is noted; consider tracking this as a follow-up issue if configurability is needed soon. Would you like me to open an issue to track making `DEFAULT_MAX_LATENCY` configurable?
rs/moq-lite/src/session.rs (1)
147-151: Consider extracting duplicate parameter decoding logic. The request ID extraction pattern (lines 147-151) is duplicated from the client-side (lines 76-79). While not critical, this could be a candidate for a helper function in a future refactor to reduce duplication.
♻️ Optional: Extract helper function
// Could be added as a helper function
fn extract_max_request_id(parameters: &ietf::Parameters) -> ietf::RequestId {
	parameters
		.get_int(ietf::SetupParameter::MaxRequestId.into())
		.map(ietf::RequestId)
		.unwrap_or_default()
}
rs/moq-lite/src/model/origin.rs (1)
396-399: Consider logging errors before discarding. While silently ignoring errors in cleanup tasks is reasonable (the broadcast should be removed regardless of how it closed), discarding errors with `.ok()` can make debugging harder if unexpected errors occur.
💡 Optional: Log errors at trace level for debugging
 web_async::spawn(async move {
-	broadcast.closed().await.ok();
+	if let Err(e) = broadcast.closed().await {
+		tracing::trace!(?e, "broadcast closed with error");
+	}
 	root.lock().remove(&full, broadcast, &rest);
 });
rs/moq-clock/src/main.rs (1)
94-101: LGTM! The `subscribe_track` call correctly uses the `broadcast` from the announce tuple (line 97), not the outer `broadcast` from line 56. The `config.track.clone()` is appropriate since this runs in a loop.
Minor note: the variable shadowing of `broadcast` (outer `Produce` struct vs. inner `BroadcastConsumer` from the announce) is valid but could be slightly clearer with different naming, e.g., `announced_broadcast`. This is purely a readability nit.
rs/moq-lite/src/ietf/subscribe.rs (1)
15-20: `MaxCacheDuration` variant is defined but unused. The `MaxCacheDuration = 0x4` variant is declared but not referenced anywhere in this file. If it's intended for future use, consider adding a brief comment or `#[allow(dead_code)]` annotation. Otherwise, remove it to avoid dead code.
rs/moq-lite/src/model/expires.rs (1)
93-112: Consider returning `Result<(), Error>` from `wait_expired` (the current signature forces "errors as control flow").
rs/moq-lite/src/model/group.rs (1)
236-287: Add a cheap `max_instant()` accessor for `TrackConsumer` to avoid blocking/guessing. `TrackConsumer` needs a non-async timestamp to decide "already expired" without calling `instant().await`.
Proposed addition (enables correct expiration checks in track.rs)
 impl GroupConsumer {
+	/// Returns the maximum instant seen so far for this group (if any).
+	pub fn max_instant(&self) -> Option<Time> {
+		self.state.borrow().max_instant
+	}
+
 	pub fn info(&self) -> &Group {
 		&self.info
 	}
rs/moq-lite/src/model/frame.rs (1)
126-139: `FrameState::close()` leaves the frame open on a wrong size (may be intended); consider whether this should poison the frame instead.
📜 Review details
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (1)
`Cargo.lock` is excluded by `!**/*.lock`
📒 Files selected for processing (29)
- js/lite/examples/publish.ts
- js/lite/examples/subscribe.ts
- rs/hang/Cargo.toml
- rs/hang/src/catalog/mod.rs
- rs/hang/src/catalog/root.rs
- rs/hang/src/catalog/video/mod.rs
- rs/hang/src/import/avc3.rs
- rs/hang/src/import/decoder.rs
- rs/hang/src/import/mod.rs
- rs/hang/src/import/opus.rs
- rs/hang/src/lib.rs
- rs/libmoq/src/api.rs
- rs/moq-clock/src/main.rs
- rs/moq-lite/Cargo.toml
- rs/moq-lite/src/coding/reader.rs
- rs/moq-lite/src/coding/writer.rs
- rs/moq-lite/src/error.rs
- rs/moq-lite/src/ietf/subscribe.rs
- rs/moq-lite/src/lite/mod.rs
- rs/moq-lite/src/model/broadcast.rs
- rs/moq-lite/src/model/expires.rs
- rs/moq-lite/src/model/frame.rs
- rs/moq-lite/src/model/group.rs
- rs/moq-lite/src/model/origin.rs
- rs/moq-lite/src/model/produce.rs
- rs/moq-lite/src/model/track.rs
- rs/moq-lite/src/session.rs
- rs/moq-relay/src/main.rs
- rs/moq-relay/src/web.rs
💤 Files with no reviewable changes (1)
- rs/hang/Cargo.toml
🚧 Files skipped from review as they are similar to previous changes (6)
- rs/hang/src/catalog/mod.rs
- rs/moq-lite/Cargo.toml
- rs/moq-lite/src/coding/writer.rs
- rs/moq-lite/src/model/produce.rs
- rs/hang/src/catalog/video/mod.rs
- rs/moq-relay/src/main.rs
🧰 Additional context used
📓 Path-based instructions (3)
rs/**/*.rs
📄 CodeRabbit inference engine (CLAUDE.md)
rs/**/*.rs: Use `cargo` for Rust-specific operations
Rust tests should be integrated within source files rather than in separate test files
Files:
- rs/moq-lite/src/ietf/subscribe.rs
- rs/moq-lite/src/error.rs
- rs/hang/src/import/avc3.rs
- rs/moq-lite/src/session.rs
- rs/moq-lite/src/coding/reader.rs
- rs/moq-relay/src/web.rs
- rs/hang/src/lib.rs
- rs/hang/src/import/decoder.rs
- rs/moq-lite/src/model/expires.rs
- rs/hang/src/import/mod.rs
- rs/moq-lite/src/model/origin.rs
- rs/hang/src/import/opus.rs
- rs/moq-lite/src/model/group.rs
- rs/moq-lite/src/model/broadcast.rs
- rs/moq-lite/src/model/track.rs
- rs/libmoq/src/api.rs
- rs/moq-clock/src/main.rs
- rs/moq-lite/src/model/frame.rs
- rs/hang/src/catalog/root.rs
- rs/moq-lite/src/lite/mod.rs
js/**/*.{ts,tsx,js,jsx,json}
📄 CodeRabbit inference engine (CLAUDE.md)
Use `bun` for all TypeScript/JavaScript package management and script execution (not npm, yarn, or pnpm)
Files:
- js/lite/examples/subscribe.ts
- js/lite/examples/publish.ts
rs/hang/**/*.rs
📄 CodeRabbit inference engine (CLAUDE.md)
Media-specific encoding/decoding should be implemented in the hang layer on top of moq-lite
Files:
rs/hang/src/import/avc3.rsrs/hang/src/lib.rsrs/hang/src/import/decoder.rsrs/hang/src/import/mod.rsrs/hang/src/import/opus.rsrs/hang/src/catalog/root.rs
🧠 Learnings (5)
📓 Common learnings
Learnt from: CR
Repo: moq-dev/moq PR: 0
File: CLAUDE.md:0-0
Timestamp: 2026-01-11T03:47:52.751Z
Learning: Applies to rs/moq/**/*.rs : The CDN/relay layer in the moq protocol should not know anything about media; generic rules on the wire define how to deliver content
Learnt from: CR
Repo: moq-dev/moq PR: 0
File: CLAUDE.md:0-0
Timestamp: 2026-01-11T03:47:52.751Z
Learning: Applies to rs/hang/**/*.rs : Media-specific encoding/decoding should be implemented in the hang layer on top of moq-lite
📚 Learning: 2025-12-20T13:22:14.684Z
Learnt from: Frando
Repo: moq-dev/moq PR: 794
File: justfile:150-156
Timestamp: 2025-12-20T13:22:14.684Z
Learning: In moq-relay, WebTransport connections automatically prepend the URL path (e.g., `/anon`) as a prefix to all broadcasts. However, raw QUIC and iroh connections have no HTTP layer, so there's no way to send a path. To match the expected broadcast path for the web UI, the prefix must be manually added to the broadcast name (e.g., `anon/{{name}}`) when publishing via iroh or raw QUIC.
Applied to files:
- js/lite/examples/subscribe.ts
- rs/moq-relay/src/web.rs
- rs/moq-clock/src/main.rs
📚 Learning: 2026-01-11T03:47:52.751Z
Learnt from: CR
Repo: moq-dev/moq PR: 0
File: CLAUDE.md:0-0
Timestamp: 2026-01-11T03:47:52.751Z
Learning: Applies to rs/hang/**/*.rs : Media-specific encoding/decoding should be implemented in the hang layer on top of moq-lite
Applied to files:
- rs/moq-lite/src/error.rs
- rs/moq-lite/src/session.rs
- rs/moq-lite/src/coding/reader.rs
- rs/hang/src/lib.rs
- rs/hang/src/import/decoder.rs
- rs/hang/src/import/mod.rs
- rs/hang/src/import/opus.rs
- rs/libmoq/src/api.rs
- rs/hang/src/catalog/root.rs
- rs/moq-lite/src/lite/mod.rs
📚 Learning: 2025-12-20T13:14:11.182Z
Learnt from: Frando
Repo: moq-dev/moq PR: 794
File: rs/moq-native/src/iroh/client.rs:40-42
Timestamp: 2025-12-20T13:14:11.182Z
Learning: In rs/moq-native/src/iroh/server.rs, the Server::close method uses `&mut self` instead of `&self` (unlike Client::close) because the Server struct contains `FuturesUnordered<BoxFuture<...>>` which is Send but not Sync, and using `&self` would require Self: Sync. This is documented in the code and is an acceptable trade-off.
Applied to files:
- rs/moq-lite/src/session.rs
- rs/moq-lite/src/coding/reader.rs
- rs/moq-lite/src/model/group.rs
📚 Learning: 2026-01-11T03:47:52.751Z
Learnt from: CR
Repo: moq-dev/moq PR: 0
File: CLAUDE.md:0-0
Timestamp: 2026-01-11T03:47:52.751Z
Learning: Applies to rs/moq/**/*.rs : The CDN/relay layer in the moq protocol should not know anything about media; generic rules on the wire define how to deliver content
Applied to files:
- rs/moq-relay/src/web.rs
- rs/libmoq/src/api.rs
🧬 Code graph analysis (16)
rs/moq-lite/src/ietf/subscribe.rs (4)
rs/moq-lite/src/ietf/namespace.rs (2)
  decode_namespace (19-33), encode_namespace (4-16)
rs/moq-lite/src/model/time.rs (2)
  decode (289-292), from_millis (62-64)
rs/moq-lite/src/ietf/parameters.rs (1)
  decode (14-42)
rs/moq-lite/src/ietf/request.rs (1)
  decode (30-33)
js/lite/examples/subscribe.ts (1)
js/hang/src/frame.ts (1)
  group (152-197)
rs/hang/src/import/avc3.rs (2)
rs/moq-lite/src/model/track.rs (5)
  new (50-54), new (216-228), new (516-522), info (230-232), info (364-366)
rs/hang/src/import/opus.rs (2)
  pts (96-103), drop (107-113)
rs/moq-lite/src/session.rs (2)
rs/moq-lite/src/lite/version.rs (2)
  coding (40-42), try_from (20-30)
js/lite/src/ietf/parameters.ts (1)
  Parameters (8-109)
rs/moq-lite/src/coding/reader.rs (1)
rs/moq-lite/src/coding/decode.rs (9)
  decode (9-9), decode (56-62), decode (66-71), decode (75-80), decode (85-90), decode (94-103), decode (107-116), decode (120-127), decode (132-135)
rs/hang/src/import/decoder.rs (5)
rs/hang/src/import/avc3.rs (1)
  new (31-40)
rs/hang/src/import/opus.rs (1)
  new (14-20)
rs/hang/src/import/fmp4.rs (1)
  new (53-63)
rs/hang/src/import/aac.rs (1)
  new (14-20)
rs/hang/src/import/hev1.rs (2)
  new (33-42), new (316-339)
rs/moq-lite/src/model/expires.rs (1)
rs/moq-lite/src/model/delivery.rs (4)
  fmt (63-67), new (40-44), consume (46-50), changed (75-78)
rs/hang/src/import/mod.rs (1)
rs/moq-lite/src/model/time.rs (1)
  from_secs_unchecked (54-59)
rs/moq-lite/src/model/origin.rs (3)
rs/moq-lite/src/model/broadcast.rs (3)
  produce (27-33), new (56-62), default (50-52)
rs/moq-lite/src/model/produce.rs (1)
  new (15-17)
rs/moq-lite/src/lite/publisher.rs (1)
  new (25-34)
rs/moq-lite/src/model/group.rs (3)
rs/moq-lite/src/model/frame.rs (15)
  fmt (96-102), fmt (161-166), fmt (253-259), from (48-53), from (57-62), from (66-71), from (75-80), from (223-225), new (32-37), new (106-112), new (170-175), info (177-179), info (263-265), deref (231-233), deref (352-354)
rs/moq-lite/src/model/expires.rs (2)
  fmt (45-49), fmt (86-90)
rs/moq-lite/src/model/time.rs (4)
  fmt (207-221), from (201-203), from (263-279), from (283-285)
rs/moq-lite/src/model/broadcast.rs (1)
rs/moq-lite/src/model/track.rs (5)
  new (50-54), new (216-228), new (516-522), delivery (235-237), delivery (437-439)
rs/moq-lite/src/model/track.rs (2)
rs/moq-lite/src/model/group.rs (7)
  fmt (93-98), fmt (227-233), new (102-108), info (110-112), info (237-239), deref (199-201), deref (310-312)
rs/moq-lite/src/model/origin.rs (4)
  new (17-19), new (74-79), new (124-130), default (339-343)
rs/libmoq/src/api.rs (1)
rs/moq-lite/src/model/time.rs (1)
  from_millis (62-64)
js/lite/examples/publish.ts (1)
js/hang/src/frame.ts (1)
  group (152-197)
rs/moq-clock/src/main.rs (1)
rs/moq-lite/src/model/broadcast.rs (3)
  produce (27-33), default (50-52), subscribe_track (163-196)
rs/moq-lite/src/model/frame.rs (1)
rs/moq-lite/src/model/time.rs (4)
  now (167-170), from (201-203), from (263-279), from (283-285)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: Check
🔇 Additional comments (39)
rs/moq-lite/src/lite/mod.rs (1)
8-8: LGTM! The new `frame` module is correctly integrated following the existing alphabetical ordering and visibility patterns. The `pub use frame::*;` re-export is consistent with other public API modules in this file. This aligns well with the PR's goal of adding frame-level timing metadata (Instant Delta) to support dynamic latency.
Also applies to: 23-23
js/lite/examples/subscribe.ts (2)
10-11: LGTM! The subscribe API is correctly updated to use object-based parameters (`{ name: "chat" }`), aligning with the new subscription pattern used throughout the codebase.
18-23: LGTM! Frame reading is correctly updated to use the new `readFrame()` API, and `frame.toString()` appropriately converts the Frame object to a displayable string. The loop structure properly handles the falsy return value when no more frames are available.
js/lite/examples/publish.ts (2)
15-26: LGTM! The request handling is appropriately simplified to work directly with the `track` object returned by `broadcast.requested()`. The track name check and error handling for unknown tracks are clear and correct.
35-37: LGTM! Frame publishing is correctly updated to use the new Frame-based API with `Moq.Frame.fromString()` and `group.writeFrame()`. This aligns with the Frame object-based framing pattern introduced throughout the codebase.
rs/moq-lite/src/session.rs (6)
20-25: LGTM! Draft03 added as the preferred version. The version ordering correctly places `Draft03` first (highest preference), followed by `Draft02`, `Draft01`, and finally `ietf::Draft14`. The array size annotation is correctly updated to 4.
49-50: LGTM! Parameter encoding updated to the integer-keyed approach. The switch from enum-based to integer-keyed parameter encoding via `SetupParameter::MaxRequestId.into()` and `SetupParameter::Implementation.into()` aligns with the new parameter handling pattern shown in the relevant code snippets (where JS uses `bigint` keys with even/odd parity rules for varints vs. bytes).
76-79: LGTM! Request ID extraction with a safe default. The pattern `.map(ietf::RequestId).unwrap_or_default()` correctly handles the case where `MaxRequestId` is not present in the parameters, providing a sensible default.
71-72: LGTM! Debug tracing improves observability. Adding `tracing::debug!` after successful handshake completion provides useful connection-level visibility without being overly verbose (trace is used for setup message details).
Also applies to: 92-93
127-128: Consistent with client-side parameter encoding. Server-side parameter encoding mirrors the client-side approach at lines 49-50, maintaining consistency.
143-144: Server-side debug tracing mirrors client behavior. Consistent with the client-side tracing added at lines 72 and 93.
Also applies to: 164-165
rs/moq-lite/src/model/origin.rs (2)
22-32: LGTM! The `Origin` struct and `produce()` method follow the established factory pattern used elsewhere in the codebase (matching `Broadcast::produce()`). The empty struct with a static bootstrap method is idiomatic for organizing related producer/consumer creation.
361-363: LGTM! Simple constructor delegating to `Default`; this provides an explicit API alongside the derived `Default` implementation and is consistent with the `Produce::new()` pattern shown in the relevant snippet.
rs/moq-relay/src/web.rs (1)
279-279: LGTM on the API migration. The shift to `subscribe_track(track, Delivery::default())` aligns with the new Delivery-based subscription model introduced in this PR. Using the default delivery parameters is appropriate for this simple fetch endpoint.
rs/moq-lite/src/coding/reader.rs (2)
24-27: LGTM - Removing the `Debug` bound is a good simplification. The `Decode<V>` trait doesn't require `Debug`, so this constraint was unnecessarily restrictive. This change enables decoding for types that implement `Decode<V>` but not `Debug`, which aligns with the broader API refinements in this PR.
150-168: LGTM - Robust handling of stream closure detection. The rewritten logic correctly:
- Short-circuits when buffered data exists (lines 151-153)
- Defensively ensures buffer capacity before attempting a read (lines 155-158); this handles `BytesMut::default()`, which starts with zero capacity
- Distinguishes between stream closure (`None`) and additional data arrival (`Some(_)`)
This maintains the invariant that `decode_maybe` relies on: when `ExpectedEnd` is returned, any available data is already in the buffer, ready for decoding.
rs/libmoq/src/api.rs (4)
196-197: LGTM! The tuple field access (`.0`) aligns with the internal restructuring described in the PR, where the publish state now wraps the consumer in a tuple structure.
267-267: LGTM! Removing the `.into()` conversion simplifies the code. The `broadcast` type now directly matches what `state.consume.start()` expects after the moq-lite API changes.
493-493: Consistent with video latency handling. Same pattern as the video consumer on line 454. Both audio and video ordered consumption now use `moq_lite::Time` for max latency, ensuring consistent timing behavior across track types.
454-454: `TimeOverflow` error conversion is properly implemented. The `#[from]` attribute on the `TimestampOverflow` variant in the Error enum (error.rs:63) automatically generates the `From<moq_lite::TimeOverflow>` implementation, enabling correct error propagation via the `?` operator on line 454.
rs/moq-clock/src/main.rs (1)
56-65: LGTM! The refactored track creation using `broadcast.producer.create_track()` with `Delivery::default()` correctly demonstrates the new moq-lite API. The broadcast producer is used to create the track, and the consumer is published to the origin as expected.
rs/moq-lite/src/error.rs (3)
83-88: New error variants look good. The `Expired` and `TimeOverflow` variants are well-structured:
- `Expired` is a simple unit variant with a clear error message for expiration scenarios.
- `TimeOverflow` uses `#[from]` for ergonomic conversion, enabling `?` operator usage when `TimeOverflow` errors occur.
This aligns with the PR's goal of adding timing/expiration infrastructure to the moq-lite layer.
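The ergonomic `?` conversion that `#[from]` enables can be illustrated without the derive macro. This is a dependency-free sketch: the hand-written `From` impl below is roughly what a thiserror-style `#[from]` attribute expands to, and the names only mirror the ones discussed above.

```rust
#[derive(Debug, PartialEq)]
struct TimeOverflow;

#[derive(Debug, PartialEq)]
enum Error {
	Expired,
	TimeOverflow(TimeOverflow),
}

// Roughly what `#[from]` on the TimeOverflow variant generates:
// it lets `?` convert the inner error into `Error` automatically.
impl From<TimeOverflow> for Error {
	fn from(e: TimeOverflow) -> Self {
		Error::TimeOverflow(e)
	}
}

fn checked(value: Result<u64, TimeOverflow>) -> Result<u64, Error> {
	// On the Err path, `?` calls Error::from(TimeOverflow).
	Ok(value?)
}

fn main() {
	assert_eq!(checked(Ok(5)), Ok(5));
	assert_eq!(checked(Err(TimeOverflow)), Err(Error::TimeOverflow(TimeOverflow)));
}
```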
3-3: The import is valid; `TimeOverflow` is properly exported from the crate root. `TimeOverflow` is defined in `model/time.rs`, re-exported from `model/mod.rs` via `pub use time::*;`, and made available at the crate root through lib.rs's `pub use model::*;`. The import statement `use crate::{coding, TimeOverflow};` works correctly.
Likely an incorrect or invalid review comment.
114-115: Add documentation explaining the intentionally skipped error codes. Error codes jump from 20 (`InvalidRole`) to 22 (`Expired`), and codes 7-8 are also skipped. While this appears intentional and likely aligns with the MoQ protocol specification (draft-lcurley-moq-lite), add a comment in the `to_code()` function explaining that these codes are reserved per the protocol or for future use. This clarifies the design for maintainers.
rs/hang/src/lib.rs (1)
21-31: LGTM! The module visibility refactoring follows a good Rust pattern: keeping modules private while re-exporting their public items. This provides a cleaner API surface and allows internal reorganization without breaking external consumers.
rs/hang/src/catalog/root.rs (2)
167-180: LGTM! Test properly updated to reflect the non-Option `Video` and `Audio` types. The serialization roundtrip test remains comprehensive.
86-91: Catalog delivery defaults are appropriate. The `default_delivery()` method sets `priority: u8::MAX` (highest priority) for the catalog track, which is correct: catalog metadata is critical and must be delivered first during congestion. The default values for the remaining fields (`max_latency: Time(0)` and `ordered: false`) are suitable for catalog semantics: catalogs are self-contained JSON updates that don't require ordered delivery and don't have strict latency constraints like live media streams. No changes needed.
rs/hang/src/import/decoder.rs (1)
77-87: LGTM! The `Decoder::new` signature expansion to include `catalog` is consistent across all format-specific decoders. This cleanly enables track registration directly within each decoder, aligning with the PR's goal of catalog-driven track lifecycle management.
rs/hang/src/import/avc3.rs (2)
75-93: LGTM on initialization flow. The catalog-driven track creation pattern is clean: lock catalog → create track entry → create producer with delivery parameters. The use of `super::DEFAULT_MAX_LATENCY` ensures consistency across decoders.
228-248: LGTM on group lifecycle management. The group handling correctly implements GOP semantics: IDR frames close the previous group and start a new one. This aligns with H.264 encoding structure, where each Group of Pictures begins with an IDR frame.
rs/hang/src/import/opus.rs (2)
79-90: LGTM on audio frame grouping. Creating a single group per Opus frame is the correct approach since audio frames are independently decodable. This enables fine-grained latency control and aligns with the PR's dynamic latency goals.
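The two grouping rules in these comments (a new group per IDR frame for video, one group per frame for audio) reduce to the same sketch. The `Frame`/`group_frames` names below are illustrative only, not the hang crate's API.

```rust
// A frame is a "keyframe" if it can be decoded without earlier frames:
// every IDR frame for H.264, every frame for Opus audio.
struct Frame {
	keyframe: bool,
	data: &'static str,
}

// Each keyframe closes the previous group and opens a new one, so every
// group starts with an independently decodable frame.
fn group_frames(frames: &[Frame]) -> Vec<Vec<&'static str>> {
	let mut groups: Vec<Vec<&'static str>> = Vec::new();
	for frame in frames {
		if frame.keyframe || groups.is_empty() {
			groups.push(Vec::new());
		}
		groups.last_mut().unwrap().push(frame.data);
	}
	groups
}

fn main() {
	let frames = [
		Frame { keyframe: true, data: "I1" },
		Frame { keyframe: false, data: "P1" },
		Frame { keyframe: false, data: "P2" },
		Frame { keyframe: true, data: "I2" },
		Frame { keyframe: false, data: "P3" },
	];
	assert_eq!(group_frames(&frames), vec![vec!["I1", "P1", "P2"], vec!["I2", "P3"]]);
}
```

For audio, where every frame is a keyframe, this degenerates to one group per frame, which is exactly the fine-grained latency behavior described above.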
44-67: LGTM on initialization pattern. The catalog-driven track creation is consistent with other decoders (`avc3`, `hev1`, etc.). The delivery parameters appropriately configure low-latency unordered delivery for audio.
rs/moq-lite/src/ietf/subscribe.rs (3)
87-99: LGTM! The decode logic correctly extracts `delivery_timeout` from parameters with a sensible default of 0 when absent. Error handling for `Time::from_millis` overflow is properly mapped to `DecodeError::InvalidValue`.
138-180: LGTM! The `SubscribeOk` encoding and decoding correctly handles `delivery_timeout` and `group_order`. The hardcoded `0u8` for expires is consistent with the decode validation that rejects non-zero values.
256-297: LGTM! The `SubscribeUpdate` encoding and decoding correctly handles `delivery_timeout` via parameters. The forward flag validation is consistent (encode true, reject false on decode).
rs/moq-lite/src/model/group.rs (2)
49-80: GroupState close/abort/append checks look consistent and avoid partial mutation on error.
137-151: Expiration is checked before publishing the frame to consumers (good ordering).
rs/moq-lite/src/model/broadcast.rs (1)
163-195: No action required. `Delivery` implements `Copy`, so passing it by value to multiple functions (lines 181 and 183) is safe and does not constitute double consumption. The code is correct as written.
Likely an incorrect or invalid review comment.
rs/moq-lite/src/model/expires.rs (1)
15-32: The boundary operator `<=` is correct; do not change it to `<`. The `<=` condition is intentional and appears consistently across all three expiration functions (`create_frame`, `wait_expired`, `is_expired`), indicating deliberate design: frames expire when their deadline is reached or exceeded. With `instant=100` and `max_latency=0`, the deadline is `100`; the frame correctly expires when `max_instant >= 100`. This is the proper semantics for a latency window: `max_latency=0` means zero tolerance for lateness, so the window closes at the deadline itself, not after.
Regarding overflow: `Time::add` already handles overflow explicitly with `checked_add().expect("time overflow")`, which is an intentional design choice to panic rather than using fallible arithmetic throughout.
Likely an incorrect or invalid review comment.
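The `<=` boundary behavior described above can be checked with a standalone sketch; plain `u64` instants stand in for the crate's `Time` type here, so this is illustrative rather than the real implementation.

```rust
// Mirrors the discussed semantics: a frame expires once the newest
// observed instant reaches or passes `instant + max_latency`.
fn is_expired(instant: u64, max_latency: u64, max_instant: u64) -> bool {
	instant + max_latency <= max_instant
}

fn main() {
	// max_latency = 0: the window closes exactly at the deadline.
	assert!(is_expired(100, 0, 100));
	assert!(!is_expired(100, 0, 99));
	// max_latency = 50: the deadline becomes 150.
	assert!(!is_expired(100, 50, 149));
	assert!(is_expired(100, 50, 150));
}
```

Changing `<=` to `<` would flip only the exact-deadline case (`max_instant == instant + max_latency`), which is precisely the zero-tolerance behavior the review defends.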
 impl Drop for Avc3 {
 	fn drop(&mut self) {
-		if let Some(track) = &self.track {
-			tracing::debug!(name = ?track.info.name, "ending track");
-			self.broadcast.catalog.lock().remove_video(&track.info.name);
-		}
+		let Some(mut track) = self.track.take() else { return };
+		track.close().ok();
+
+		let config = self.catalog.lock().video.remove(&track).unwrap();
+		tracing::info!(track = %track.info(), ?config, "ended track");
 	}
Consider handling missing track gracefully in Drop.
The unwrap() on Line 269 will panic if the track is not found in the catalog. While this shouldn't happen in normal operation, edge cases (e.g., partial initialization failure, or if the catalog was modified externally) could trigger a panic during cleanup.
Suggested safer alternative
impl Drop for Avc3 {
fn drop(&mut self) {
let Some(mut track) = self.track.take() else { return };
track.close().ok();
- let config = self.catalog.lock().video.remove(&track).unwrap();
- tracing::info!(track = %track.info(), ?config, "ended track");
+ match self.catalog.lock().video.remove(&track) {
+ Some(config) => tracing::info!(track = %track.info(), ?config, "ended track"),
+ None => tracing::warn!(track = %track.info(), "track not found in catalog during cleanup"),
+ }
}
}📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
impl Drop for Avc3 {
	fn drop(&mut self) {
		if let Some(track) = &self.track {
			tracing::debug!(name = ?track.info.name, "ending track");
			self.broadcast.catalog.lock().remove_video(&track.info.name);
		}
		let Some(mut track) = self.track.take() else { return };
		track.close().ok();
		let config = self.catalog.lock().video.remove(&track).unwrap();
		tracing::info!(track = %track.info(), ?config, "ended track");
	}

impl Drop for Avc3 {
	fn drop(&mut self) {
		let Some(mut track) = self.track.take() else { return };
		track.close().ok();
		match self.catalog.lock().video.remove(&track) {
			Some(config) => tracing::info!(track = %track.info(), ?config, "ended track"),
			None => tracing::warn!(track = %track.info(), "track not found in catalog during cleanup"),
		}
	}
}
🤖 Prompt for AI Agents
In @rs/hang/src/import/avc3.rs around lines 264 - 271, The Drop impl for Avc3
currently unwraps the removed catalog entry and can panic if the track is
missing; change the cleanup to handle absence gracefully by replacing the unwrap
on self.catalog.lock().video.remove(&track).unwrap() with a safe check (e.g.,
match or if let) that logs a warning/error via tracing::warn or tracing::error
when no config is found and only calls tracing::info! when a config was actually
returned; keep the existing early return when self.track is None and ensure
track.close().ok() still runs before attempting the catalog removal.
impl fmt::Debug for ExpiresProducer {
	fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
		f.debug_struct("State")
			.field("state", &self.state.borrow().deref())
			.finish()
	}
Debug impl names the wrong type (and omits delivery).
Proposed change
impl fmt::Debug for ExpiresProducer {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
- f.debug_struct("State")
+ f.debug_struct("ExpiresProducer")
.field("state", &self.state.borrow().deref())
+ .field("delivery", &self.delivery)
.finish()
}
}📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
impl fmt::Debug for ExpiresProducer {
	fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
		f.debug_struct("State")
			.field("state", &self.state.borrow().deref())
			.finish()
	}

impl fmt::Debug for ExpiresProducer {
	fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
		f.debug_struct("ExpiresProducer")
			.field("state", &self.state.borrow().deref())
			.field("delivery", &self.delivery)
			.finish()
	}
}
🤖 Prompt for AI Agents
In @rs/moq-lite/src/model/expires.rs around lines 44 - 49, The Debug impl for
ExpiresProducer is wrong: it uses debug_struct("State") and only logs
self.state, omitting the delivery field; change the debug_struct name to
"ExpiresProducer" (or the actual type name) and add a .field("delivery",
&self.delivery) alongside the existing .field("state",
&self.state.borrow().deref()) so both state and delivery are included in the
debug output for ExpiresProducer.
/// A timestamp (in milliseconds) when the frame was originally created, scoped to the *track*.
///
/// This *should* be set as early as possible in the pipeline and proxied through all relays.
/// There is no clock synchronization or "zero" value; everything is relative to the track.
///
/// This may be used by the application as a replacement for "presentation timestamp", even across tracks.
/// However, the lack of granularity and inability to go backwards limits its usefulness.
pub instant: Time,

/// The size of the frame in bytes.
pub size: usize,
Docs say “milliseconds” but Time formatting/conversion suggests nanosecond-based Unix time.
If Time is not ms, this will mislead downstream users.
🤖 Prompt for AI Agents
In @rs/moq-lite/src/model/frame.rs around lines 17 - 27, The doc for
Frame::instant incorrectly states "milliseconds" while the field type is Time
(which is nanosecond/unix-based); update the comment or code to be consistent:
either change the doc on the instant field to state the actual Time semantics
(nanoseconds / unix epoch / track-relative as implemented) or convert/store a
millisecond representation (e.g., use a Millis type or call Time::to_millis when
setting instant) so downstream users aren't misled; locate the instant field in
the Frame struct and adjust the documentation or conversion logic accordingly.
 impl From<u64> for Frame {
 	fn from(size: u64) -> Self {
-		Self { size }
+		Self {
+			instant: Time::now(),
+			size: size as usize,
+		}
 	}
 }

 impl From<u32> for Frame {
 	fn from(size: u32) -> Self {
-		Self { size: size as u64 }
+		Self {
+			instant: Time::now(),
+			size: size as usize,
+		}
 	}
 }

 impl From<u16> for Frame {
 	fn from(size: u16) -> Self {
-		Self { size: size as u64 }
+		Self {
+			instant: Time::now(),
+			size: size as usize,
+		}
 	}
 }
🧩 Analysis chain
🏁 Script executed:
# First, let's see the full context of the frame.rs file
cat -n rs/moq-lite/src/model/frame.rs | head -100
Repository: moq-dev/moq
Length of output: 3322
🏁 Script executed:
# Check where Frame is used and if From<u64> is called
rg "From.*Frame|Frame.*from" rs/moq-lite/src --type rust -A 2 -B 2
Repository: moq-dev/moq
Length of output: 1220
🏁 Script executed:
# Search for size-related handling and protocol constraints
rg "size.*u64|u64.*size" rs/moq-lite/src --type rust -A 2
Repository: moq-dev/moq
Length of output: 1832
🏁 Script executed:
# Search for actual usage of Frame::from or Into patterns with numeric literals
rg "Frame::from|\.into\(\)" rs/moq-lite/src --type rust -B 2 -A 2 | head -50
Repository: moq-dev/moq
Length of output: 2941
🏁 Script executed:
# Check if there are tests for Frame
fd -e rs . rs/moq-lite/src -x grep -l "mod test\|#\[test\]\|#\[cfg(test)\]" {} \;
Repository: moq-dev/moq
Length of output: 376
🏁 Script executed:
# Look for where Frame is actually constructed from u64
rg "Frame.*u64|u64.*Frame" rs/moq-lite/src --type rust -B 2 -A 2
Repository: moq-dev/moq
Length of output: 278
🏁 Script executed:
# Search for any imports or explicit usage of the From trait for Frame
rg "use.*Frame|Frame::" rs/moq-lite/src --type rust | grep -v "impl From" | head -20
Repository: moq-dev/moq
Length of output: 135
🏁 Script executed:
# Check if there are cargo.toml targets specified
cat rs/moq-lite/Cargo.toml | head -50Repository: moq-dev/moq
Length of output: 832
🏁 Script executed:
# Search for any test files that might test Frame creation
find rs/moq-lite -name "*.rs" -exec grep -l "Frame\|frame" {} \; | xargs -I {} grep -l "test\|#\[cfg" {} 2>/dev/nullRepository: moq-dev/moq
Length of output: 278
🏁 Script executed:
# Check if From<u64> for Frame is called anywhere via .into() pattern
rg "\.into\(\)" rs/moq-lite/src --type rust -B 3 | grep -A 3 -B 3 "Frame\|frame"Repository: moq-dev/moq
Length of output: 865
🏁 Script executed:
# Look at the setup.rs more carefully to understand how sizes are handled on wire
cat -n rs/moq-lite/src/setup.rs | grep -A 5 -B 5 "u64::decode.*usize\|usize.*u64::decode"Repository: moq-dev/moq
Length of output: 915
🏁 Script executed:
# Check if there are any 32-bit target specifications in workspace
find . -name "Cargo.toml" -o -name ".cargo/config*" | xargs grep -l "target" 2>/dev/null | head -5Repository: moq-dev/moq
Length of output: 105
🏁 Script executed:
# Look at how sizes flow through the actual publish/subscribe paths to understand real risk
rg "Frame::new|Frame\{" rs/moq-lite/src --type rust -B 2 -A 2 | head -40Repository: moq-dev/moq
Length of output: 37
🏁 Script executed:
# Check if there's any validation or bounds checking on sizes anywhere
rg "size.*MAX\|MAX.*size\|TryFrom.*size\|try_from.*size" rs/moq-lite/src --type rust -B 2 -A 2Repository: moq-dev/moq
Length of output: 37
🏁 Script executed:
# Check if there are any tests in frame.rs itself since the coding guidelines mention tests should be in source files
sed -n '/^#\[cfg(test)\]/,/^}/p' rs/moq-lite/src/model/frame.rsRepository: moq-dev/moq
Length of output: 37
🏁 Script executed:
# Check what targets this crate is built for
cat Cargo.toml | grep -A 20 "\[build\]\|\[\[test\]\]\|target"Repository: moq-dev/moq
Length of output: 37
🏁 Script executed:
# Verify if sizes from wire can actually exceed usize by looking at protocol constraints
rg "VarInt\|Varint\|max.*size\|MAX_SIZE" rs/moq-lite/src --type rust -B 2 -A 2 | head -30Repository: moq-dev/moq
Length of output: 37
🏁 Script executed:
# Check what the actual use case for Frame::from might be by looking at the Produce trait
rg "Produce\|Producer\|Consumer" rs/moq-lite/src/model/frame.rs -A 3 -B 1Repository: moq-dev/moq
Length of output: 37
🏁 Script executed:
# Look at error handling and see if there's any panicking or unwrap patterns with sizes
rg "unwrap\|expect\|panic" rs/moq-lite/src/model/frame.rsRepository: moq-dev/moq
Length of output: 37
🏁 Script executed:
# Check if the crate supports any 32-bit platforms by looking at the full workspace Cargo.toml
cat Cargo.tomlRepository: moq-dev/moq
Length of output: 809
🏁 Script executed:
# Check how sizes actually flow - is Frame created from wire sizes anywhere?
rg "Frame\|frame" rs/moq-lite/src/ietf/publisher.rs rs/moq-lite/src/ietf/subscriber.rs rs/moq-lite/src/lite/subscriber.rs --type rust -B 2 -A 2 | grep -A 5 -B 5 "size\|u64\|u32"Repository: moq-dev/moq
Length of output: 37
🏁 Script executed:
# Look at actual frame creation in subscribers to see if From is used
cat -n rs/moq-lite/src/lite/subscriber.rs | head -100Repository: moq-dev/moq
Length of output: 3328
🏁 Script executed:
# Look for how frames are created from incoming wire data - search for Frame creation patterns
cat -n rs/moq-lite/src/lite/subscriber.rs | sed -n '100,250p'Repository: moq-dev/moq
Length of output: 5890
🏁 Script executed:
# Search for where Group/Frame data is actually read from the wire
rg "recv_group\|Frame\|frame" rs/moq-lite/src/lite/subscriber.rs -A 5 -B 1Repository: moq-dev/moq
Length of output: 37
🏁 Script executed:
# Search for recv_group implementation
rg "recv_group" rs/moq-lite/src --type rust -A 20Repository: moq-dev/moq
Length of output: 5656
🏁 Script executed:
# Look for how Group and Frame data flows from wire
rg "Group\|ObjectHeader\|ObjectData" rs/moq-lite/src/lite --type rust -B 2 -A 2 | head -50Repository: moq-dev/moq
Length of output: 37
🏁 Script executed:
# Find the run_group function in lite subscriber
cat -n rs/moq-lite/src/lite/subscriber.rs | grep -A 30 "async fn run_group"Repository: moq-dev/moq
Length of output: 964
🏁 Script executed:
# Look at how frames are created from wire - check the Object/Frame reading code
rg "Object\|object_data\|frame_data" rs/moq-lite/src/lite --type rust -B 2 -A 2 | head -60Repository: moq-dev/moq
Length of output: 37
🏁 Script executed:
# Check the lite module to see FrameHeader definition
rg "struct FrameHeader\|pub size:" rs/moq-lite/src/lite --type rust -A 5 -B 2Repository: moq-dev/moq
Length of output: 37
🏁 Script executed:
# Check the lite::FrameHeader definition more carefully
fd -e rs . rs/moq-lite/src/lite -x grep -l "FrameHeader" {} \; | head -3Repository: moq-dev/moq
Length of output: 153
🏁 Script executed:
# Check the frame.rs in lite module
cat -n rs/moq-lite/src/lite/frame.rsRepository: moq-dev/moq
Length of output: 2415
🏁 Script executed:
# Now that we understand the flow, let's verify: is From<u64> for Frame ever actually called?
rg "\.into\(\)|From::" rs/moq-lite/src --type rust -B 3 | grep -A 3 -B 3 "u64\|Frame"Repository: moq-dev/moq
Length of output: 2522
🏁 Script executed:
# Check if there are any tests for Frame that might use these From impls
find rs/moq-lite -name "*.rs" -exec grep -l "#\[test\]\|#\[cfg(test)\]" {} \;Repository: moq-dev/moq
Length of output: 376
🏁 Script executed:
# Look at the complete context of how sizes flow through the system
rg "size.*decode|decode.*size" rs/moq-lite/src/lite --type rust -A 3 -B 1Repository: moq-dev/moq
Length of output: 763
🏁 Script executed:
# Confirm: check if From<u64> for Frame is actually invoked anywhere
rg "Frame::from|frame\.into\(\)" rs/moq-lite/src --type rust | grep -v "impl From"Repository: moq-dev/moq
Length of output: 112
🏁 Script executed:
# Double-check setup.rs context around those u64 casts
cat -n rs/moq-lite/src/setup.rs | sed -n '54,65p'Repository: moq-dev/moq
Length of output: 575
🏁 Script executed:
# Final verification: are these From impls actually exported/used in any public API?
rg "pub.*from|pub.*From" rs/moq-lite/src/model/frame.rsRepository: moq-dev/moq
Length of output: 37
🏁 Script executed:
# Check if there are any examples or tests in the test files that might use these From impls
rg "Frame" rs/moq-lite/src/model/track.rs rs/moq-lite/src/model/broadcast.rs rs/moq-lite/src/model/origin.rs --type rust -A 2 -B 2 | grep -A 3 -B 3 "from\|into"Repository: moq-dev/moq
Length of output: 37
Remove From<u64> and From<u32> implementations or replace with TryFrom.
These implementations silently truncate on 32-bit targets (where usize < u64). While frame sizes currently decode directly to usize in the protocol path, these From implementations pose a latent risk if called elsewhere or in future refactors. Use TryFrom<u64> to enforce explicit handling of potential overflow.
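As a sketch of that suggestion, a fallible conversion might look like the following. This uses a simplified `Frame` without the `instant` field (the real struct also carries a timestamp from `Time::now()`), so it illustrates only the `usize::try_from` handling:

```rust
use std::num::TryFromIntError;

// Simplified stand-in for moq-lite's Frame; the real struct also
// carries an `instant: Time` field.
#[derive(Debug, PartialEq)]
struct Frame {
    size: usize,
}

impl TryFrom<u64> for Frame {
    type Error = TryFromIntError;

    fn try_from(size: u64) -> Result<Self, Self::Error> {
        // usize::try_from returns Err instead of silently truncating
        // when `size` exceeds usize::MAX on a 32-bit target.
        Ok(Self {
            size: usize::try_from(size)?,
        })
    }
}

fn main() {
    let frame = Frame::try_from(1024u64).expect("fits in usize");
    assert_eq!(frame.size, 1024);
}
```

Callers then write `Frame::try_from(size)?` and decide at the call site whether an oversized frame is a protocol error.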
🤖 Prompt for AI Agents
In @rs/moq-lite/src/model/frame.rs around lines 56 - 81, Replace the impls that
silently truncate by removing impl From<u64> (and optionally From<u32>) and
adding impl TryFrom<u64> for Frame and impl TryFrom<u32> for Frame that use
usize::try_from(size) to convert the incoming integer and return Result<Frame,
std::num::TryFromIntError>; build the Frame with Time::now() and the converted
usize on Ok, and propagate the conversion error on Err; keep the safe impl
From<u16> as-is.
```diff
 /// A consumer for a track, used to read groups.
-#[derive(Clone)]
+///
+/// If the consumer is cloned, it will receive a copy of all unread groups.
+//#[derive(Clone)]
 pub struct TrackConsumer {
```
Docs say TrackConsumer can be cloned, but #[derive(Clone)] is commented out.
Either re-enable Clone (if intended) or update docs to match the new semantics.
🤖 Prompt for AI Agents
In @rs/moq-lite/src/model/track.rs around lines 332 - 336, Docs state
TrackConsumer can be cloned but the #[derive(Clone)] on the TrackConsumer struct
is commented out; either restore the derive or update the docs. Inspect the
TrackConsumer struct and its fields (symbol: TrackConsumer) and if all fields
are Clone, re-enable #[derive(Clone)] so cloning is supported; if some fields
are not Clone or cloning is no longer intended, update the doc comment to remove
or rephrase the "If the consumer is cloned..." sentence and ensure the comments
accurately reflect current semantics.
```rust
pub async fn next_group(&mut self) -> Result<Option<GroupConsumer>> {
	let mut expires = self.track.expires().clone();

	loop {
		tokio::select! {
			// Get the next group from the track.
			Some(group) = self.track.next_group().transpose() => {
				let group = group?;

				// If we're looking for this sequence number, return it.
				if group.sequence == self.expected {
					self.expected += 1;
					return Ok(Some(group));
				}

				// If it's old, skip it.
				if group.sequence < self.expected {
					continue;
				}

				// If it's new, insert it into the buffered queue based on the sequence number ascending.
				let index = self.pending.partition_point(|g| g.sequence < group.sequence);
				self.pending.insert(index, group);
			}
			Some(next) = async {
				loop {
					// Get the oldest group in the buffered queue.
					let first = self.pending.front()?;

					// Wait until it has a timestamp available. (TODO would be nice to make this required)
					let Ok(instant) = first.instant().await else {
						self.pending.pop_front();
						continue;
					};

					// Wait until the first frame of the group would have been expired.
					// This doesn't mean the entire group is expired, because that uses the max_timestamp.
					// But even if the group has one frame this will still unstuck the consumer.
					expires.wait_expired(first.sequence, instant).await;
					return self.pending.pop_front()
				}
			} => {
				// We found the next group in order, so update the expected sequence number.
				self.expected = next.sequence + 1;
				return Ok(Some(next));
			}
		}
	}
}
```
TrackConsumerOrdered ignores the result of wait_expired() (Cancel vs Expired).
On cancellation, the code should return an error instead of treating it like an expiration tick.
Proposed change
```diff
-					expires.wait_expired(first.sequence, instant).await;
+					match expires.wait_expired(first.sequence, instant).await {
+						Error::Expired => {}
+						err => return None, // or bubble `Err(err)` by changing this block's return type
+					}
 					return self.pending.pop_front()
```

🤖 Prompt for AI Agents
In @rs/moq-lite/src/model/track.rs around lines 524 - 572, The loop in
next_group treats the result of expires.wait_expired(...) as if it always means
expiration; instead check its return value and handle cancellation by returning
an Err rather than proceeding as if the group expired. Locate next_group and the
inner async branch that calls expires.wait_expired(first.sequence,
instant).await, inspect the enum/result it returns (Cancel vs Expired), and if
it indicates cancellation propagate an error (Err(...)) out of next_group; only
on an actual Expired outcome should you pop and return the buffered group and
update self.expected as currently done.
```diff
 let group = match track.next_group().await {
 	Ok(Some(group)) => group,
-	Ok(None) => return Err(StatusCode::NOT_FOUND.into()),
-	Err(_) => return Err(StatusCode::INTERNAL_SERVER_ERROR.into()),
+	_ => return Err(StatusCode::NOT_FOUND.into()),
 };
```
Error handling conflates "not found" with internal errors.

The wildcard `_` catches both `Ok(None)` (track/group genuinely not found) and `Err(...)` (internal failures like connection issues). Returning NOT_FOUND for actual errors masks server problems and violates HTTP semantics: clients can't distinguish between a missing resource and a transient failure.
Suggested fix: restore distinct error handling
```diff
 let group = match track.next_group().await {
 	Ok(Some(group)) => group,
-	_ => return Err(StatusCode::NOT_FOUND.into()),
+	Ok(None) => return Err(StatusCode::NOT_FOUND.into()),
+	Err(e) => {
+		tracing::warn!(%e, "failed to fetch group");
+		return Err(StatusCode::INTERNAL_SERVER_ERROR.into());
+	}
 };
```
};📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```rust
let group = match track.next_group().await {
	Ok(Some(group)) => group,
	Ok(None) => return Err(StatusCode::NOT_FOUND.into()),
	Err(e) => {
		tracing::warn!(%e, "failed to fetch group");
		return Err(StatusCode::INTERNAL_SERVER_ERROR.into());
	}
};
```
🤖 Prompt for AI Agents
In @rs/moq-relay/src/web.rs around lines 281 - 284, The match on
track.next_group() currently lumps Ok(None) and Err(...) into the wildcard arm
and returns StatusCode::NOT_FOUND, hiding internal errors; change the match to
handle three cases explicitly: Ok(Some(group)) => proceed, Ok(None) => return
NOT_FOUND, and Err(e) => map the error to an internal server error (e.g., return
StatusCode::INTERNAL_SERVER_ERROR or convert e into a proper error response) so
internal failures from track.next_group() are not mistaken for missing
resources; locate the call to track.next_group() and update the match arms
accordingly.
Actionable comments posted: 6
🤖 Fix all issues with AI agents
In @rs/hang/src/import/hev1.rs:
- Around line 279-286: In Hev1's Drop implementation, avoid panicking: replace
the direct unwrap on self.catalog.lock().video.remove(&track).unwrap() with
defensive handling that first attempts to acquire the Mutex lock (handle
PoisonError by mapping to the inner guard or logging and returning), then tries
to remove the track from the catalog video map using video.remove(&track) and
handles the Option safely (e.g., if None, log a warning with track.info() and
the catalog state instead of unwrapping). Ensure track.close().ok() still runs,
and ensure all errors are logged with tracing::error or tracing::warn so Drop
never panics (do not propagate or unwrap).
In @rs/hang/src/import/opus.rs:
- Around line 106-113: The Drop impl for Opus currently calls unwrap() on the
result of self.catalog.lock().audio.remove(&track) which can panic if the track
key is absent; change this to handle the Option defensively (e.g., use match or
if let) so that if remove returns None you log a warning/debug noting the
missing track and return, and only call tracing::debug!(track = %track.info(),
?config, "ended track") when remove yields Some(config); reference the Drop
impl, the local variable track, and the catalog.lock().audio.remove(&track) call
when making the change.
In @rs/moq-lite/src/ietf/publisher.rs:
- Around line 117-131: The SubscribeOk is sent prematurely from where you call
control.send(ietf::SubscribeOk { request_id, track_alias: request_id.0,
delivery_timeout: delivery.max_latency, group_order: if delivery.ordered {
GroupOrder::Ascending } else { GroupOrder::Descending }, }).ok(); after
computing let delivery = track.delivery().current(); — defer sending that
SubscribeOk until after you receive confirmation from the origin (the success
response for the subscription) instead of immediately; update the code around
the subscribe handling to await or listen for the origin confirmation event,
then call control.send(...) only on success, and on origin failure send an
appropriate error response back to the requester (or implement a retry/backoff)
and ensure you correlate using request_id/track_alias so rejected subscriptions
are cleaned up.
In @rs/moq-lite/src/ietf/subscriber.rs:
- Around line 296-299: The tracing calls in run_subscribe use
self.origin.as_ref().unwrap(), which can panic when origin is None; replace
these unwraps with a safe check like let origin =
self.origin.as_ref().ok_or(Error::InvalidRole)? at the start of run_subscribe
(or immediately before the tracing calls) and then use
origin.absolute(&broadcast) in the tracing macros; update both the "subscribe
started" and "subscribe cancelled" lines to use the validated origin and
propagate the Error::InvalidRole via ? for consistency with start_announce.
In @rs/moq-lite/src/model/state.rs:
- Around line 246-247: The `.await.expect("not closed properly")` call on the
`wait_for` result can panic if the watch sender is dropped; replace the `expect`
with proper error handling: propagate the `Err` from `wait_for` as a meaningful
error (e.g., convert to your module's Error variant or return a Result) instead
of panicking. Locate the call that uses `wait_for().await.expect("not closed
properly")` and change it to handle `Err` via `?`, `map_err(...)`, or a `match`,
returning an appropriate error variant (with context) from the enclosing
function.
- Around line 120-146: In Producer<T>::drop replace the Relaxed ordering on
self.active.fetch_sub(...) with Release ordering, and after detecting the last
reference (when fetch_sub returns 1) insert an Acquire fence
(atomic::fence(atomic::Ordering::Acquire)) before calling
self.state.send_if_modified(...); keep the fetch_add in Producer::clone as
Relaxed, and ensure the call site still sets state.closed =
Some(Err(Error::Dropped)) inside send_if_modified.
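The Release/Acquire pattern described in that last prompt can be sketched with a simplified reference counter. The `RefCount` type and its methods here are stand-ins, not the actual moq-lite `Producer`:

```rust
use std::sync::atomic::{fence, AtomicUsize, Ordering};

// Simplified stand-in for the Producer clone counter.
struct RefCount {
    active: AtomicUsize,
}

impl RefCount {
    fn new(count: usize) -> Self {
        Self { active: AtomicUsize::new(count) }
    }

    // Taking a reference needs no synchronization, so clone stays Relaxed.
    fn acquire(&self) {
        self.active.fetch_add(1, Ordering::Relaxed);
    }

    // Dropping uses Release so this clone's earlier writes happen-before
    // the final decrement; the last dropper then runs an Acquire fence
    // before touching shared state (e.g. marking it closed).
    fn release(&self) -> bool {
        if self.active.fetch_sub(1, Ordering::Release) == 1 {
            fence(Ordering::Acquire);
            return true; // last reference: safe to close the shared state
        }
        false
    }
}

fn main() {
    let rc = RefCount::new(1);
    rc.acquire();
    assert!(!rc.release()); // another clone is still alive
    assert!(rc.release()); // last drop: close the state
}
```

With Relaxed on the decrement, the thread that observes the final count could read stale shared state written by other clones; the Release/Acquire pairing rules that out.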
🧹 Nitpick comments (15)
rs/hang/examples/video.rs (1)
78-113: Catalog guard not explicitly dropped before publishing frames.

The `catalog` guard (from `catalog.lock()`) is still in scope when encoding frames. According to `CatalogGuard`'s `Drop` implementation (in produce.rs), the catalog is published on drop. Since `catalog` (the guard) shadows the producer and isn't dropped until scope end, the catalog may not be published until after the frames are written.

If the intent is to publish the catalog before streaming frames, consider explicitly dropping the guard.

Consider explicit drop for clarity

```diff
 let mut track = broadcast.create_track(track, delivery)?;

+// Drop the catalog guard to publish the catalog before streaming frames.
+drop(catalog);
+
 // Create a group of frames.
 // Each group must start with a keyframe.
 let mut group = track.append_group()?;
```

rs/hang/src/catalog/produce.rs (2)
28-34: Potential deadlock if `lock()` is called while already holding the guard.

The `lock()` method uses `unwrap()` on the mutex lock, which will panic if the mutex is poisoned. More importantly, calling `lock()` twice without dropping the first guard will deadlock since `Mutex` is not reentrant.

Consider documenting this constraint or using a `try_lock` with a meaningful error.

Add documentation about single-lock constraint

```diff
 /// Get mutable access to the catalog, publishing it after any changes.
+///
+/// # Panics
+/// Panics if the underlying mutex is poisoned. Only one guard can be held at a time;
+/// calling this while already holding a guard will deadlock.
 pub fn lock(&mut self) -> CatalogGuard<'_> {
```
62-71: Drop implementation silently ignores errors.

The `Drop` for `CatalogGuard` ignores errors from `append_group()`, `write_frame()`, and `close()`. While `Drop` cannot propagate errors, silently swallowing them can make debugging difficult.

Consider logging failures in Drop

```diff
 impl Drop for CatalogGuard<'_> {
 	fn drop(&mut self) {
-		if let Ok(mut group) = self.track.append_group() {
-			// TODO decide if this should return an error, or be impossible to fail
-			let frame = self.catalog.to_string().expect("invalid catalog");
-			group.write_frame(frame, moq_lite::Time::now()).ok();
-			group.close().ok();
+		match self.track.append_group() {
+			Ok(mut group) => {
+				// TODO decide if this should return an error, or be impossible to fail
+				let frame = self.catalog.to_string().expect("invalid catalog");
+				if let Err(e) = group.write_frame(frame, moq_lite::Time::now()) {
+					tracing::warn!("failed to write catalog frame: {e}");
+				}
+				if let Err(e) = group.close() {
+					tracing::warn!("failed to close catalog group: {e}");
+				}
+			}
+			Err(e) => {
+				tracing::warn!("failed to append catalog group: {e}");
+			}
 		}
 	}
 }
```

rs/hang/src/import/aac.rs (1)
160-167: Minor inconsistency: log level differs from other importers.

Line 166 uses `tracing::debug` while fmp4.rs and avc3.rs use `tracing::info` for the "ended track" message. Consider using consistent log levels across importers for easier debugging.

♻️ Suggested change for consistency

```diff
-	tracing::debug!(track = %track.info(), ?config, "ended track");
+	tracing::info!(track = %track.info(), ?config, "ended track");
```

rs/moq-lite/src/error.rs (1)
117-120: Error code 21 is skipped in the mapping sequence.

The error codes jump from `InvalidRole => 20` to `Expired => 22`, skipping code 21. If this is intentional (reserved for future use or protocol specification), consider adding a comment. Otherwise, you may want to use sequential codes to avoid confusion.

rs/moq-lite/src/ietf/publisher.rs (1)
96-102: Hardcoded delivery values may not honor subscriber preferences.

The `max_latency` is hardcoded to 100ms and `ordered` to false, ignoring any delivery preferences the subscriber might have specified. The TODO comments suggest this is known, but this could lead to suboptimal delivery behavior.

Consider deriving these values from the incoming `msg` if such fields are available in the `ietf::Subscribe` message, or document why fixed defaults are acceptable here.

rs/moq-lite/src/lite/publisher.rs (1)
246-256: Unconventional async polling pattern for group management.

This pattern avoids `tokio::spawn` to keep groups cancellable on `Drop` and support non-tokio runtimes. However, the comment "Constantly poll all of the groups until they're all complete" suggests this branch is active even when no groups are pending, which could cause unnecessary wakeups.

The `false` return from the async block ensures this branch never matches, but `FuturesUnordered::next()` will return `None` immediately when empty, causing the loop to exit and the `while` to complete. This should be fine, but consider adding a brief comment explaining this behavior for future maintainers.

rs/moq-lite/src/model/frame.rs (1)
56-63: Potential truncation when converting `u64` to `usize` on 32-bit platforms.

The `size as usize` cast will truncate values larger than `u32::MAX` on 32-bit platforms. While media frames are typically smaller than 4GB, this could cause silent data corruption in edge cases.

Consider using `try_into()` with error handling, or document this limitation.

🔧 Suggested fix using TryFrom

```diff
 impl From<u64> for Frame {
 	fn from(size: u64) -> Self {
 		Self {
 			instant: Time::now(),
-			size: size as usize,
+			size: size.try_into().expect("frame size exceeds platform limits"),
 		}
 	}
 }
```

rs/moq-lite/src/model/state.rs (1)
269-275: Minor inefficiency: Debug impl calls `borrow()` twice.

The Debug implementation borrows the inner state twice. Consider borrowing once and reusing.

🔧 Suggested fix

```diff
 impl<T: fmt::Debug> fmt::Debug for Consumer<T> {
 	fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
+		let state = self.inner.borrow();
 		f.debug_struct("Consumer")
-			.field("state", &self.inner.borrow().value)
-			.field("closed", &self.inner.borrow().closed)
+			.field("state", &state.value)
+			.field("closed", &state.closed)
 			.finish()
 	}
 }
```

rs/moq-lite/src/model/broadcast.rs (1)
162-192: Consider handling the `try_send` failure more explicitly.

When `try_send` fails (line 175), the track is aborted but the consumer is still returned. This consumer will be in an error state, which is handled correctly. However, the early return on line 177 means the track is not inserted into the state map, but the consumer is already created and returned.

This seems intentional (failed send = no producer will handle it = abort), but consider adding a brief comment explaining the error flow.

📝 Suggested comment for clarity

```diff
 if self.requested.try_send(track.clone()).is_err() {
+	// No producer listening; the track will be in an error state.
 	track.abort(Error::Cancel).ok();
 	return consumer;
 }
```

rs/libmoq/src/consume.rs (3)
101-112: Consider extracting the `.iter().nth(index)` pattern.

The same pattern is used in both `video_config` and `audio_config`. While not critical, extracting this lookup could reduce duplication.

189-196: Hardcoded delivery parameters limit configurability.

The TODO comment acknowledges this, but `ordered: false` and `priority: 1` are hardcoded. For the dynamic latency feature to be fully useful, these should eventually be exposed. The fixed priority values (video=1, audio=2) also prevent fine-grained control.

259-276: Silent error suppression may hide issues.

While the comment explains that errors are ignored because "the group can be aborted/expired at any time," silently discarding all decode errors could hide legitimate bugs during development. Consider logging at debug/trace level when an error occurs.

♻️ Suggested improvement

```diff
-// NOTE: We ignore errors for frames because the group can be aborted/expired at any time.
-while let Ok(Some(frame)) = hang::Container::decode(&mut group).await {
+// NOTE: We ignore errors for frames because the group can be aborted/expired at any time.
+loop {
+	match hang::Container::decode(&mut group).await {
+		Ok(Some(frame)) => {
+			// Important: Don't hold the mutex during this callback.
+			let id = State::lock().consume.frame.insert((frame, keyframe));
+			on_frame.call(Ok(id));
+			keyframe = false;
+		}
+		Ok(None) => break,
+		Err(e) => {
+			tracing::trace!("Frame decode stopped: {e}");
+			break;
+		}
+	}
+}
```

rs/moq-lite/src/model/track.rs (2)
121-143: O(n) duplicate check on every group creation.

The duplicate sequence check iterates through all groups. For tracks with many groups, this could become a bottleneck. If high group counts are expected, consider using a `HashSet<u64>` for O(1) lookups.
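A minimal sketch of that `HashSet` suggestion; the `GroupLog` type and `try_create` method are hypothetical, not the actual Track API:

```rust
use std::collections::HashSet;

// Hypothetical helper: remember which group sequences were already created.
struct GroupLog {
    seen: HashSet<u64>,
}

impl GroupLog {
    fn new() -> Self {
        Self { seen: HashSet::new() }
    }

    // O(1) duplicate check instead of scanning every existing group.
    // `insert` returns false if the sequence was already present.
    fn try_create(&mut self, sequence: u64) -> bool {
        self.seen.insert(sequence)
    }
}

fn main() {
    let mut log = GroupLog::new();
    assert!(log.try_create(7));
    assert!(!log.try_create(7)); // duplicate detected without a scan
    assert!(log.try_create(8));
}
```

The trade-off is extra memory that grows with the number of sequences, so old entries would need pruning alongside group expiration.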
457-515: Redundant `self.expected` assignment.

In the `Some(next) = async { ... }` branch:

- Line 504 sets `self.expected = first.sequence + 1`
- Line 510 sets `self.expected = next.sequence + 1` (where `next == first`)

The second assignment is redundant.

♻️ Suggested fix

```diff
 Some(next) = async {
 	loop {
 		// Get the oldest group in the buffered queue.
 		let first = self.pending.front()?;

 		// If the minimum sequence is not what we're looking for, wait until it would be expired.
 		if first.sequence != self.expected {
 			// Wait until the first frame of the group has been received.
 			let Ok(Some(frame)) = first.clone().next_frame().await else {
 				// The group has no frames, just skip it.
 				self.pending.pop_front();
 				continue;
 			};

 			// Wait until the first frame of the group would have been expired.
-			// This doesn't mean the entire group is expired, because that uses the max_timestamp.
-			// But even if the group has one frame this will still unstuck the consumer.
 			expires.wait_expired(first.sequence, frame.instant).await;
 		}

 		// Return the minimum group and skip over any gap.
 		let first = self.pending.pop_front().unwrap();
 		self.expected = first.sequence + 1;
 		return Some(first);
 	}
 } => {
-	// We found the next group in order, so update the expected sequence number.
-	self.expected = next.sequence + 1;
 	return Ok(Some(next));
 }
```
📜 Review details
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (26)
- justfile
- rs/hang-cli/src/publish.rs
- rs/hang/examples/video.rs
- rs/hang/src/catalog/produce.rs
- rs/hang/src/import/aac.rs
- rs/hang/src/import/avc3.rs
- rs/hang/src/import/fmp4.rs
- rs/hang/src/import/hev1.rs
- rs/hang/src/import/hls.rs
- rs/hang/src/import/opus.rs
- rs/libmoq/src/consume.rs
- rs/libmoq/src/publish.rs
- rs/moq-clock/src/main.rs
- rs/moq-lite/src/error.rs
- rs/moq-lite/src/ietf/publisher.rs
- rs/moq-lite/src/ietf/subscriber.rs
- rs/moq-lite/src/lite/publisher.rs
- rs/moq-lite/src/model/broadcast.rs
- rs/moq-lite/src/model/delivery.rs
- rs/moq-lite/src/model/frame.rs
- rs/moq-lite/src/model/group.rs
- rs/moq-lite/src/model/mod.rs
- rs/moq-lite/src/model/state.rs
- rs/moq-lite/src/model/track.rs
- rs/moq-native/examples/chat.rs
- rs/moq-relay/src/web.rs
✅ Files skipped from review due to trivial changes (1)
- justfile
🚧 Files skipped from review as they are similar to previous changes (2)
- rs/moq-lite/src/model/delivery.rs
- rs/hang-cli/src/publish.rs
🧰 Additional context used
📓 Path-based instructions (2)
rs/**/*.rs
📄 CodeRabbit inference engine (CLAUDE.md)
rs/**/*.rs: Use `cargo` for Rust-specific operations
Rust tests should be integrated within source files rather than in separate test files
Files:
rs/moq-relay/src/web.rs, rs/libmoq/src/publish.rs, rs/hang/src/import/fmp4.rs, rs/moq-lite/src/error.rs, rs/moq-clock/src/main.rs, rs/hang/src/import/hev1.rs, rs/hang/examples/video.rs, rs/moq-lite/src/model/frame.rs, rs/moq-lite/src/ietf/subscriber.rs, rs/moq-native/examples/chat.rs, rs/libmoq/src/consume.rs, rs/hang/src/import/opus.rs, rs/hang/src/catalog/produce.rs, rs/moq-lite/src/model/mod.rs, rs/hang/src/import/hls.rs, rs/hang/src/import/avc3.rs, rs/hang/src/import/aac.rs, rs/moq-lite/src/model/broadcast.rs, rs/moq-lite/src/model/state.rs, rs/moq-lite/src/lite/publisher.rs, rs/moq-lite/src/model/group.rs, rs/moq-lite/src/ietf/publisher.rs, rs/moq-lite/src/model/track.rs
rs/hang/**/*.rs
📄 CodeRabbit inference engine (CLAUDE.md)
Media-specific encoding/decoding should be implemented in the hang layer on top of moq-lite
Files:
rs/hang/src/import/fmp4.rs, rs/hang/src/import/hev1.rs, rs/hang/examples/video.rs, rs/hang/src/import/opus.rs, rs/hang/src/catalog/produce.rs, rs/hang/src/import/hls.rs, rs/hang/src/import/avc3.rs, rs/hang/src/import/aac.rs
🧠 Learnings (4)
📓 Common learnings
Learnt from: CR
Repo: moq-dev/moq PR: 0
File: CLAUDE.md:0-0
Timestamp: 2026-01-11T03:47:52.766Z
Learning: Applies to rs/hang/**/*.rs : Media-specific encoding/decoding should be implemented in the hang layer on top of moq-lite
Learnt from: CR
Repo: moq-dev/moq PR: 0
File: CLAUDE.md:0-0
Timestamp: 2026-01-11T03:47:52.766Z
Learning: Applies to rs/moq/**/*.rs : The CDN/relay layer in the moq protocol should not know anything about media; generic rules on the wire define how to deliver content
📚 Learning: 2026-01-11T03:47:52.766Z
Learnt from: CR
Repo: moq-dev/moq PR: 0
File: CLAUDE.md:0-0
Timestamp: 2026-01-11T03:47:52.766Z
Learning: Applies to rs/moq/**/*.rs : The CDN/relay layer in the moq protocol should not know anything about media; generic rules on the wire define how to deliver content
Applied to files:
rs/moq-relay/src/web.rs, rs/moq-native/examples/chat.rs
📚 Learning: 2026-01-11T03:47:52.766Z
Learnt from: CR
Repo: moq-dev/moq PR: 0
File: CLAUDE.md:0-0
Timestamp: 2026-01-11T03:47:52.766Z
Learning: Applies to rs/hang/**/*.rs : Media-specific encoding/decoding should be implemented in the hang layer on top of moq-lite
Applied to files:
rs/hang/src/import/fmp4.rs, rs/moq-lite/src/error.rs, rs/hang/src/import/hev1.rs, rs/hang/examples/video.rs, rs/moq-lite/src/model/frame.rs, rs/moq-native/examples/chat.rs, rs/libmoq/src/consume.rs, rs/hang/src/import/opus.rs, rs/hang/src/import/hls.rs, rs/hang/src/import/avc3.rs, rs/hang/src/import/aac.rs
📚 Learning: 2025-12-20T13:22:14.684Z
Learnt from: Frando
Repo: moq-dev/moq PR: 794
File: justfile:150-156
Timestamp: 2025-12-20T13:22:14.684Z
Learning: In moq-relay, WebTransport connections automatically prepend the URL path (e.g., `/anon`) as a prefix to all broadcasts. However, raw QUIC and iroh connections have no HTTP layer, so there's no way to send a path. To match the expected broadcast path for the web UI, the prefix must be manually added to the broadcast name (e.g., `anon/{{name}}`) when publishing via iroh or raw QUIC.
Applied to files:
- rs/moq-clock/src/main.rs
- rs/moq-native/examples/chat.rs
🧬 Code graph analysis (15)
rs/libmoq/src/publish.rs (3)
- rs/hang/src/import/aac.rs (1): `new` (14-20)
- rs/hang/src/import/fmp4.rs (1): `new` (53-63)
- rs/hang/src/import/decoder.rs (1): `new` (77-87)
rs/hang/src/import/fmp4.rs (4)
- rs/moq-lite/src/model/time.rs (2): `from_secs_unchecked` (54-59), `new` (34-36)
- rs/hang/src/import/aac.rs (1): `new` (14-20)
- rs/hang/src/import/avc3.rs (1): `new` (31-40)
- rs/hang/src/import/decoder.rs (1): `new` (77-87)
rs/moq-clock/src/main.rs (1)
- rs/moq-lite/src/model/broadcast.rs (2): `produce` (19-25), `subscribe_track` (154-193)
rs/moq-lite/src/model/frame.rs (1)
- rs/moq-lite/src/model/time.rs (1): `now` (167-170)
rs/moq-lite/src/ietf/subscriber.rs (3)
- rs/moq-lite/src/model/track.rs (12): `subscribe` (229-238), `subscribers` (198-200), `delivery` (193-195, 370-372), `info` (188-190, 312-314), `from` (61-65, 69-71, 75-79, 83-85, 89-93, 97-99)
- rs/moq-lite/src/model/frame.rs (7): `info` (124-126, 184-186), `from` (48-53, 57-62, 66-71, 75-80, 154-156)
- rs/moq-lite/src/model/time.rs (4): `now` (167-170), `from` (201-203, 263-279, 283-285)
rs/moq-native/examples/chat.rs (5)
- rs/moq-lite/src/model/group.rs (2): `new` (61-67), `consume` (114-122)
- rs/moq-lite/src/model/track.rs (3): `new` (49-53, 174-186, 449-455)
- rs/moq-lite/src/model/broadcast.rs (2): `select` (347-355), `consume` (98-104)
- rs/moq-lite/src/model/origin.rs (4): `select` (271-295), `consume` (180-183, 415-417, 508-510)
- rs/moq-lite/src/model/delivery.rs (1): `consume` (54-58)
rs/libmoq/src/consume.rs (3)
- rs/moq-lite/src/model/broadcast.rs (3): `new` (51-60, 131-133), `consume` (98-104)
- rs/moq-lite/src/model/delivery.rs (1): `consume` (54-58)
- rs/moq-lite/src/model/frame.rs (1): `consume` (144-150)
rs/hang/src/catalog/produce.rs (2)
- rs/hang/src/catalog/root.rs (3): `default_track` (82-84), `default_delivery` (86-91), `from_slice` (53-55)
- rs/moq-lite/src/model/time.rs (1): `now` (167-170)
rs/moq-lite/src/model/mod.rs (3)
- rs/moq-lite/src/model/track.rs (5): `delivery` (193-195, 370-372), `expires` (203-205, 365-367), `subscriber` (358-360)
- rs/moq-lite/src/model/broadcast.rs (1): `produce` (19-25)
- rs/moq-lite/src/model/origin.rs (1): `produce` (25-31)
rs/hang/src/import/hls.rs (4)
- rs/hang/src/import/aac.rs (1): `new` (14-20)
- rs/hang/src/import/avc3.rs (1): `new` (31-40)
- rs/hang/src/import/fmp4.rs (1): `new` (53-63)
- rs/moq-lite/src/model/broadcast.rs (3): `new` (51-60, 131-133), `default` (45-47)
rs/hang/src/import/avc3.rs (4)
- rs/moq-lite/src/model/group.rs (3): `new` (61-67), `info` (69-71, 161-163)
- rs/moq-lite/src/model/track.rs (7): `new` (49-53, 174-186, 449-455), `info` (188-190, 312-314), `delivery` (193-195, 370-372)
- rs/moq-lite/src/model/broadcast.rs (2): `new` (51-60), `drop` (137-141)
- rs/hang/src/import/hev1.rs (2): `pts` (269-276), `drop` (280-286)
rs/moq-lite/src/model/state.rs (5)
- rs/moq-lite/src/model/delivery.rs (4): `fmt` (40-44, 71-75), `new` (48-52), `consume` (54-58)
- rs/moq-lite/src/model/track.rs (13): `fmt` (43-45), `new` (49-53, 174-186, 449-455), `close` (220-222), `abort` (224-226), `closed` (349-351), `unused` (242-245), `is_clone` (248-250, 353-355), `deref` (266-268, 383-385, 521-523)
- rs/moq-lite/src/model/broadcast.rs (10): `consume` (98-104), `close` (106-108), `abort` (110-112), `closed` (195-197, 316-344), `unused` (117-119, 282-313), `is_clone` (121-123, 202-204), `default` (45-47)
- rs/libmoq/src/consume.rs (1): `close` (315-318)
- rs/libmoq/src/publish.rs (1): `close` (28-31)
rs/moq-lite/src/lite/publisher.rs (4)
- rs/moq-lite/src/model/track.rs (16): `new` (49-53, 174-186, 449-455), `from` (61-65, 69-71, 75-79, 83-85, 89-93, 97-99), `subscribe` (229-238), `delivery` (193-195, 370-372), `info` (188-190, 312-314), `ordered` (375-377), `subscriber` (358-360)
- rs/moq-lite/src/model/subscriber.rs (4): `subscribe` (170-181), `update` (62-88), `changed` (165-168), `default` (185-187)
- rs/moq-lite/src/model/origin.rs (2): `select` (271-295), `default` (339-343)
- rs/moq-lite/src/model/delivery.rs (2): `update` (60-62), `changed` (83-86)
rs/moq-lite/src/ietf/publisher.rs (4)
- rs/moq-lite/src/model/track.rs (3): `delivery` (193-195, 370-372), `subscribe` (229-238)
- rs/moq-lite/src/model/time.rs (1): `from_millis` (62-64)
- rs/moq-lite/src/lite/subscriber.rs (3): `run_track` (198-267), `stream` (105, 309)
- rs/moq-lite/src/model/subscriber.rs (2): `latest` (155-162), `subscribe` (170-181)
rs/moq-lite/src/model/track.rs (3)
- rs/moq-lite/src/model/delivery.rs (3): `fmt` (40-44, 71-75), `new` (48-52)
- rs/moq-lite/src/model/frame.rs (7): `new` (32-37, 93-98, 117-122), `info` (124-126, 184-186), `deref` (162-164, 246-248)
- rs/moq-lite/src/model/group.rs (5): `new` (61-67), `info` (69-71, 161-163), `deref` (133-135, 219-221)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: Check
🔇 Additional comments (70)
rs/moq-clock/src/main.rs (3)
56-56: LGTM! The broadcast is correctly initialized using the new `Broadcast::produce()` pattern. The `mut` is necessary for the `create_track` call in the Publish branch.
60-63: LGTM! Track creation correctly uses the new `create_track` API with the track name and default delivery settings. The track is then passed to the clock publisher as expected.
96-100: LGTM! The `broadcast` here correctly refers to the announced broadcast from the pattern match (shadowing the outer variable), which is the intended behavior. Track subscription uses the new API signature with delivery settings, consistent with the publish flow.
rs/hang/src/import/hev1.rs (4)
11-30: LGTM! Clean struct definition with moq_lite integration. The struct properly holds the catalog producer and group producer alongside the broadcast and track producers. The field documentation is clear about the purpose of each field.
32-42: LGTM! Constructor properly initializes new fields. The constructor signature correctly accepts both the broadcast and catalog producers, and initializes the group to `None` as expected for the lazy initialization pattern.
75-96: LGTM! Catalog-driven track initialization with proper delivery configuration. The initialization flow correctly:
- Locks the catalog
- Removes the old track if reinitializing
- Creates a new track via catalog
- Configures delivery with appropriate priority and latency
- Creates the track producer via broadcast
243-258: LGTM! Proper group lifecycle management for i-frame boundaries. The implementation correctly:
- Closes the existing group on i-frame detection
- Creates a new group via `track.append_group()`
- Encodes the container into the group
This aligns with the PR objective of frame-based delivery with group-per-keyframe semantics.
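The group-per-keyframe lifecycle discussed above can be sketched with a toy model. Note that `Track` and `Group` here are stand-ins rather than the real moq_lite API; the sketch only demonstrates the invariant that every group starts with a keyframe, and each new keyframe closes the previous group before appending a fresh one.

```rust
// Toy model of group-per-keyframe delivery (illustrative names, not moq_lite).
#[derive(Default)]
struct Group {
    frames: Vec<Vec<u8>>,
    closed: bool,
}

#[derive(Default)]
struct Track {
    groups: Vec<Group>,          // groups that have been closed
    current: Option<Group>,      // the group currently being appended to
}

impl Track {
    /// Push one encoded frame; `keyframe` marks an i-frame boundary.
    fn push(&mut self, frame: Vec<u8>, keyframe: bool) -> Result<(), &'static str> {
        if keyframe {
            // Close the existing group (if any) and start a fresh one.
            if let Some(mut group) = self.current.take() {
                group.closed = true;
                self.groups.push(group);
            }
            self.current = Some(Group::default());
        }
        // Streams must start with a keyframe, so a delta frame with no
        // open group is an error.
        let group = self.current.as_mut().ok_or("first frame must be a keyframe")?;
        group.frames.push(frame);
        Ok(())
    }
}

fn main() {
    let mut track = Track::default();
    assert!(track.push(vec![0], false).is_err()); // no group open yet
    track.push(vec![1], true).unwrap();  // keyframe opens group 1
    track.push(vec![2], false).unwrap(); // delta frame joins group 1
    track.push(vec![3], true).unwrap();  // keyframe closes group 1, opens group 2
    assert_eq!(track.groups.len(), 1);
    assert_eq!(track.groups[0].frames.len(), 2);
    println!("closed groups: {}", track.groups.len());
}
```

Because a consumer can decode any group from its first frame, a relay can drop or skip whole groups under congestion without breaking decodability.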
rs/hang/src/import/hls.rs (4)
71-92: LGTM! Struct properly updated with catalog field. The `Hls` struct now holds both `moq_lite::BroadcastProducer` and `hang::CatalogProducer`, enabling catalog-driven track management across all importers. The field documentation clearly explains their purpose.
116-140: LGTM! Constructor properly accepts and stores catalog. The signature change aligns with other importers (Fmp4, Avc3, Aac) seen in the relevant code snippets, maintaining consistency across the codebase.
412-425: LGTM! Importer creation correctly propagates catalog. Both `ensure_video_importer_for` and `ensure_audio_importer` correctly pass both `broadcast` and `catalog` clones to `Fmp4::new`, consistent with the `Fmp4::new(broadcast: moq_lite::BroadcastProducer, catalog: hang::CatalogProducer)` signature shown in the relevant code snippets.
552-563: LGTM! Test setup correctly constructs broadcast and catalog. The test properly:
- Creates a default `BroadcastProducer`
- Creates a `CatalogProducer` from the broadcast
- Passes both to `Hls::new`
This matches the expected usage pattern.
rs/hang/examples/video.rs (3)
26-34: LGTM! Simplified session and broadcast setup. The example now uses a cleaner flow:
- Connect with `origin.consumer`
- Create broadcast via `origin.producer.create_broadcast("")`
- Create catalog producer from broadcast
The context error message on Line 30 is helpful for debugging auth failures.
36-76: LGTM! Clear demonstration of catalog-driven track creation. The example effectively demonstrates:
- Locking the catalog for mutation
- Creating a track with video config via `catalog.video.create`
- Configuring delivery parameters (priority, max_latency, ordered)
- Creating the track producer via `broadcast.create_track`
The inline comments are helpful for understanding the workflow.
112-113: Good demonstration of group abort for error handling. The `group.abort(moq_lite::Error::Expired)` call effectively shows how to abandon a group when delivery should be terminated immediately.
rs/hang/src/catalog/produce.rs (2)
12-16: LGTM! Clean struct design for catalog production. The `CatalogProducer` appropriately uses `Arc<Mutex<Catalog>>` for shared mutable state, enabling the `Clone` derive while maintaining thread safety.
94-114: LGTM! Clean async consumer implementation. The `next()` method properly:
- Lazily subscribes to the catalog track on first call
- Uses `tokio::select!` with `biased` to prioritize group completion
- Returns `None` when the stream is exhausted
The pattern of moving `broadcast` to `None` after subscription prevents duplicate subscriptions.
rs/hang/src/import/opus.rs (3)
6-20: LGTM! Struct properly updated with moq_lite types. The constructor signature matches other importers (Hev1, Aac, Fmp4) as shown in the relevant code snippets, maintaining consistency across the codebase.
44-67: LGTM! Audio track initialization via catalog. The initialization correctly:
- Locks the catalog
- Creates an audio track with Opus codec config
- Configures delivery with priority 2 and max_latency
- Creates the track producer
The priority of 2 matches video tracks (hev1.rs Line 84), which may be intentional if audio/video have equal priority, or could be worth reconsidering if audio typically has higher priority.
Consider verifying if audio and video should have the same priority (2) or if audio typically warrants higher priority for better user experience during congestion.
84-88: LGTM! Correct pattern for independent audio frames. Each audio frame being a single group is appropriate since Opus frames are independently decodable. The close-after-encode pattern ensures proper group lifecycle.
rs/hang/src/import/fmp4.rs (5)
1-9: LGTM on imports and constant definition. The `MAX_GOP` constant using `from_secs_unchecked(10)` is appropriate since 10 seconds is well within the valid range for `Timestamp`. This aligns with the PR objective of using timestamps to drive timing behavior.
27-63: LGTM on struct and constructor updates. The migration to `moq_lite::BroadcastProducer` and addition of `hang::CatalogProducer` aligns with the PR's catalog-driven lifecycle pattern. The `tracks` map correctly stores the group producer alongside the track producer to manage group lifecycles per track.
113-159: LGTM on track initialization with catalog-based creation. The initialization properly:
- Uses catalog to create tracks with appropriate configs
- Sets up `Delivery` with priority (video=1, audio=2), `max_latency`, and `ordered: false`
- Initializes tracks with `None` group, correctly deferring group creation to the first keyframe
449-466: LGTM on group lifecycle management. The keyframe handling correctly:
- Closes the existing group before creating a new one
- Creates a new group via `track.append_group()`
- Properly errors if no group exists (ensuring streams start with keyframes)
The `Container` encoding pattern with group-based writes aligns with the PR's frame/group abstraction changes.
492-510: LGTM on Drop implementation. The cleanup correctly:
- Closes groups before tracks
- Uses `.ok()` to ignore errors during drop (appropriate for cleanup)
- Removes tracks from the appropriate catalog (video or audio)
- Logs the ended track with its configuration
rs/hang/src/import/avc3.rs (5)
8-40: LGTM on struct and constructor updates. The migration to `moq_lite` types and addition of `catalog` and `group` fields mirrors the pattern in `fmp4.rs`. The constructor signature matches the expected pattern from `decoder.rs`.
42-96: LGTM on initialization with catalog-based track creation. The initialization correctly:
- Handles reinitialization by removing the old track from catalog
- Uses `catalog.video.create` for track creation
- Sets up `Delivery` with consistent parameters (priority=2, max_latency, ordered=false)
218-248: LGTM on IDR-based group lifecycle management. The `maybe_start_frame` correctly manages groups around IDR frames:
- Closes existing group on IDR
- Creates new group via `track.append_group()`
- Properly errors if first frame isn't an I-frame
This aligns with the H.264 requirement that groups start with keyframes.
254-261: LGTM on timestamp handling. Defaulting to `Timestamp::now()` when no PTS hint is provided is a reasonable fallback for live streaming scenarios. This matches the pattern in `hev1.rs`.
264-271: Bug: Incorrect catalog removal for video track. Line 269 calls `catalog.audio.remove(&track)` but Avc3 produces a video track (H.264). This should use `catalog.video.remove` instead. Compare with the similar `hev1.rs` drop implementation (from provided snippets), which correctly uses `catalog.video.remove`.
🐛 Proposed fix
```diff
 impl Drop for Avc3 {
 	fn drop(&mut self) {
 		let Some(mut track) = self.track.take() else { return };
 		track.close().ok();
-		let config = self.catalog.lock().audio.remove(&track).unwrap();
+		let config = self.catalog.lock().video.remove(&track).unwrap();
 		tracing::info!(track = %track.info(), ?config, "ended track");
 	}
 }
```
Likely an incorrect or invalid review comment.
rs/hang/src/import/aac.rs (4)
6-20: LGTM on struct and constructor updates. The simpler struct without a persistent `group` field is appropriate for AAC since each audio frame is treated as an independent group (as noted in the decode method comments).
98-121: LGTM on catalog-based track initialization. The initialization correctly:
- Creates `AudioConfig` with the parsed AAC parameters
- Uses `catalog.audio.create` for track registration
- Sets up `Delivery` with priority=2 (lower priority than video)
- Creates the track via `broadcast.create_track`
123-144: LGTM on decode method with per-frame groups. The design choice to create a new group for each audio frame (lines 138-141) is appropriate since AAC frames are independent and don't require the keyframe-based grouping that video needs. The group is correctly closed immediately after encoding.
150-157: LGTM on timestamp handling. Consistent with the pattern in other importers, defaulting to `Timestamp::now()` when no PTS hint is provided.
rs/moq-lite/src/error.rs (1)
83-91: LGTM! New error variants for dynamic latency support. The addition of `Expired`, `TimeOverflow`, and `Dropped` variants aligns well with the new delivery/expiration semantics introduced in this PR.
rs/moq-lite/src/model/mod.rs (1)
1-24: LGTM! Module organization follows existing patterns. The new modules (`delivery`, `expires`, `state`, `subscriber`) are properly declared and selectively re-exported. Keeping `state` private (`use state::*` instead of `pub use`) is appropriate since it provides internal infrastructure for the public producer/consumer types.
rs/moq-lite/src/ietf/publisher.rs (1)
228-234: Priority obtained from subscriber may not reflect current network state. Using `track.subscriber().current().priority` is correct for obtaining the current subscriber-specified priority. The implementation looks sound for serving groups with the correct priority.
rs/moq-lite/src/ietf/subscriber.rs (3)
146-156: LGTM! Delivery update on SubscribeOk. The delivery information from the network is properly propagated to the track producer. The TODO about publisher priority is noted.
376-401: Frame creation with proper timestamp and error handling. The frame creation now correctly uses `Time::now()` for the instant timestamp, and error handling is properly propagated with `?`. The structure aligns with the new Frame abstraction introduced in this PR.
416-419: Good use of `checked_sub` for safe arithmetic. Using `checked_sub` prevents underflow and returns `Error::WrongSize` on failure, which is appropriate defensive coding.
rs/moq-lite/src/lite/publisher.rs (3)
27-27: LGTM! Simplified origin initialization. Using `OriginProducer::new().consume()` is cleaner than `Origin::produce().consumer`.
315-324: Good handling of timestamp delta calculation with fallback. The `checked_sub` approach properly handles the case where `frame.instant` might be less than `instant_max` (clock skew or out-of-order frames). The warning log is appropriate for debugging such scenarios.
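The fallback described above can be sketched in a few lines. The function name and signature here are illustrative, not the real moq_lite code: the wire carries the delta between a frame's instant and the running maximum, and `checked_sub` clamps to zero instead of underflowing when a frame is older than `instant_max`.

```rust
use std::time::Duration;

// Compute the delta written on the wire, clamping when the frame is older
// than the running maximum (clock skew or out-of-order arrival).
fn wire_delta(frame_instant: Duration, instant_max: Duration) -> Duration {
    frame_instant
        .checked_sub(instant_max)
        .unwrap_or(Duration::ZERO) // the real code also logs a warning here
}

fn main() {
    // Normal case: the frame is newer than anything seen so far.
    assert_eq!(
        wire_delta(Duration::from_millis(120), Duration::from_millis(100)),
        Duration::from_millis(20)
    );
    // Older frame: clamped rather than panicking on underflow.
    assert_eq!(
        wire_delta(Duration::from_millis(90), Duration::from_millis(100)),
        Duration::ZERO
    );
    println!("ok");
}
```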
220-228: SubscribeUpdate properly updates subscriber delivery preferences. The delivery update is correctly propagated to `track.subscriber().update(delivery)`. The TODO about updating priority of outstanding groups is a valid concern for future work.
rs/moq-lite/src/model/frame.rs (3)
8-28: Well-documented Frame struct with timestamp semantics. The documentation clearly explains the purpose of `instant` as a track-scoped timestamp for synchronization. This aligns with the PR's goal of dynamic latency support.
189-200: Potential logic issue in `read_chunk` when producer closes early. The `wait_for` predicate only checks if a chunk exists at the current index. If the producer closes without providing the expected chunk, `wait_for` will return when `closed.is_some()` (implicit in the state machine), but the subsequent `get(self.index)` may return `None`. However, looking at the `wait_for` implementation in `state.rs`, it returns when either the predicate is true OR the state is closed. If closed with an error, it returns `Err`. If the predicate never matched but we're closed successfully, this could return `Ok(None)`, which seems correct for "no more chunks".
251-436: Comprehensive test coverage following Rust conventions. Tests are integrated within the source file as per coding guidelines. Coverage includes:
- Construction and conversions
- Write/read single and multiple chunks
- Size validation (too large/too small)
- Multiple consumers
- Abort handling
- Produce helper
This is excellent test coverage.
rs/moq-lite/src/model/state.rs (2)
21-32: Clean Producer/Consumer state abstraction using tokio::watch. The design using `watch::Sender` for state broadcasting with an `active` counter for lifecycle management is a solid pattern for single-producer multi-consumer scenarios.
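The consumer-lifecycle half of that pattern can be sketched with only the standard library (the real `state.rs` pairs this counter with `tokio::watch` for broadcasting state changes). All names below are illustrative, not the moq_lite API: an `active` counter tracks live consumer handles so the producer can detect when it is unused.

```rust
use std::sync::atomic::{AtomicUsize, Ordering};
use std::sync::Arc;

// A consumer handle; cloning and dropping keep the shared counter accurate.
struct Consumer {
    active: Arc<AtomicUsize>,
}

impl Clone for Consumer {
    fn clone(&self) -> Self {
        self.active.fetch_add(1, Ordering::Relaxed);
        Self { active: self.active.clone() }
    }
}

impl Drop for Consumer {
    fn drop(&mut self) {
        self.active.fetch_sub(1, Ordering::Relaxed);
    }
}

struct Producer {
    active: Arc<AtomicUsize>,
}

impl Producer {
    fn new() -> Self {
        Self { active: Arc::new(AtomicUsize::new(0)) }
    }

    fn consume(&self) -> Consumer {
        self.active.fetch_add(1, Ordering::Relaxed);
        Consumer { active: self.active.clone() }
    }

    /// True when no consumer handles remain alive.
    fn unused(&self) -> bool {
        self.active.load(Ordering::Relaxed) == 0
    }
}

fn main() {
    let producer = Producer::new();
    assert!(producer.unused());
    let a = producer.consume();
    let b = a.clone();
    assert!(!producer.unused());
    drop(a);
    drop(b);
    assert!(producer.unused());
    println!("unused: {}", producer.unused());
}
```

As the review notes, a check-then-increment `upgrade()` on such a counter has an inherent small race; any producer built on it must tolerate handles closing at arbitrary times.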
163-179: Race condition in `upgrade()` is acknowledged and acceptable. The comment at lines 169-170 correctly notes the minor race between the closed check and `fetch_add`. Since `Producer` must handle other handles closing at random times anyway, this is acceptable.
rs/moq-native/examples/chat.rs (3)
14-21: LGTM! Clean migration to new OriginProducer API. The transition from the old origin API to the `OriginProducer::new()` and `consume()` pattern is correct and aligns with the new moq_lite architecture.
47-57: LGTM! Track creation with Delivery struct. The new `create_track` API with explicit `Delivery` configuration (priority, max_latency, ordered) correctly demonstrates the new time-aware framing model.
61-67: LGTM! Frame writing with explicit timestamps. The `write_frame` calls properly use `Time::from_secs()` with error propagation, and `group.close()` correctly returns `Result`.
44-49: LGTM! GroupState with max_instant tracking.The
append_framemethod correctly tracks the maximum instant across frames usingOption::max(), which handles the initialNonecase properly.
90-102: LGTM! Frame creation with expiration registration.The
create_frameandappend_framemethods properly register frames with the expiration tracker before adding to state. The error propagation correctly fails early if the frame would be expired.
185-209: Well-designed cancel-safe frame iteration with expiration awareness.The
next_frameimplementation correctly:
- Handles the cancel-safety edge case from
read_frame- Uses
biasedselect to prioritize available frames over expiration checks- Only waits for expiration when there's a known
max_instantThe comment on line 200 clarifies why we don't need to wait for a new maximum instant.
224-442: Excellent test coverage for the Group API.The tests comprehensively cover:
- Basic construction and conversion (
test_group_from_u64)- Single and multiple frame read/write flows
- Multi-chunk frame creation
- Frame consumer API via
next_frame- Multiple consumer fanout
- Abort propagation
- Timestamp ordering (both increasing and out-of-order cases)
- Manual
append_frameworkflowrs/moq-lite/src/model/broadcast.rs (3)
29-42: Note the naming: both fields areProducertypes.Both
producerandconsumerfields areProducer<HashMap<...>>types. Theproducertracks publisher-created tracks, whileconsumertracks subscriber-requested tracks. This naming could be confusing but appears intentional for the dual-map deduplication approach.
126-142: LGTM! Clean drop handler for pending requests.
BroadcastDropproperly drains and aborts any pending track requests when the broadcast producer is dropped, preventing resource leaks and notifying waiting consumers.
250-394: Comprehensive test coverage for broadcast lifecycle.Tests cover key scenarios: track insertion, unused detection, close propagation, select compatibility, request deduplication, and cleanup of unused requested tracks.
rs/libmoq/src/publish.rs (2)
17-21: LGTM! Broadcast-Catalog pairing in create(). The tuple storage `(BroadcastProducer, CatalogProducer)` correctly pairs the broadcast with its catalog. The `CatalogProducer::new(broadcast.clone())` properly links the catalog to the broadcast.
33-38: LGTM! Decoder initialization with catalog. The destructuring and passing of both `broadcast` and `catalog` to `Decoder::new` matches the expected signatures shown in the relevant code snippets (rs/hang/src/import/decoder.rs:76-86).
rs/moq-relay/src/web.rs (1)
278-286: LGTM! Simplified track subscription with default delivery. The migration to `subscribe_track(track, Delivery::default())` is appropriate for the relay layer, which should remain media-agnostic per the coding guidelines. Using default delivery settings delegates latency decisions to the subscriber. The simplified error handling (all failures → `NOT_FOUND`) is reasonable for an HTTP fetch endpoint where clients can't act differently on various error types.
rs/libmoq/src/consume.rs (4)
8-16: LGTM! The migration from `hang::BroadcastConsumer` to `moq_lite::BroadcastConsumer` is correctly reflected in the struct definition. The field types align with the updated moq_lite API surface.
35-36: LGTM! The frame storage change to a `(hang::Container, bool)` tuple correctly captures both the frame data and keyframe flag, supporting the new frame-based API.
63-96: LGTM! The catalog iteration using `hang::CatalogConsumer::new(broadcast.clone())` and the updated `.renditions.values()` pattern correctly adapts to the moq_lite-based catalog flow.
288-308: LGTM! The `frame_chunk` method correctly destructures the `(frame, keyframe)` tuple and properly dereferences `keyframe` when populating the output struct.
rs/moq-lite/src/model/track.rs (6)
32-58: LGTM! The `Track` struct provides a clean wrapper around `Arc<String>` with proper `Display`, `Clone`, and accessor implementations.
60-106: LGTM! Comprehensive `From` implementations cover common string types (`&str`, `String`, `&String`, `Cow<'_, str>`, `Arc<String>`), enabling ergonomic track creation.
163-186: LGTM! The `TrackProducer::new` initialization correctly wires up the delivery, expires, and subscribers components with proper dependency order (expires depends on delivery).
321-346: LGTM! The `next_group` implementation correctly handles the offset-based indexing to account for removed groups while iterating through available groups.
442-446: Unbounded pending queue could grow large.
`TrackConsumerOrdered` buffers out-of-order groups in `pending: VecDeque<GroupConsumer>` without a size limit. In pathological network conditions with severe reordering, this queue could grow unboundedly. Consider whether a maximum buffer size should be enforced based on `max_latency`.
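One way to bound such a reorder buffer is sketched below. This is hypothetical, not the moq_lite implementation: `pending` holds out-of-order group sequence numbers, and once it exceeds `max_pending` the consumer gives up waiting for the gap and releases the oldest buffered group (a real fix might key the bound on `max_latency` instead of a count).

```rust
use std::collections::VecDeque;

// Illustrative bounded reorder buffer for group sequence numbers.
struct OrderedConsumer {
    next: u64,                // next expected sequence number
    pending: VecDeque<u64>,   // buffered out-of-order arrivals, kept sorted
    max_pending: usize,       // cap on how many groups we buffer
}

impl OrderedConsumer {
    /// Accept an arriving group; return the groups now deliverable in order.
    fn push(&mut self, seq: u64) -> Vec<u64> {
        let mut ready = Vec::new();
        if seq == self.next {
            ready.push(seq);
            self.next += 1;
            // Drain any buffered groups that are now in order.
            while self.pending.front() == Some(&self.next) {
                ready.push(self.pending.pop_front().unwrap());
                self.next += 1;
            }
        } else if seq > self.next {
            // Buffer the out-of-order arrival, keeping the queue sorted.
            let pos = self.pending.partition_point(|&s| s < seq);
            self.pending.insert(pos, seq);
            if self.pending.len() > self.max_pending {
                // Cap exceeded: skip the gap and release the oldest group.
                let skipped_to = self.pending.pop_front().unwrap();
                ready.push(skipped_to);
                self.next = skipped_to + 1;
                while self.pending.front() == Some(&self.next) {
                    ready.push(self.pending.pop_front().unwrap());
                    self.next += 1;
                }
            }
        } // seq < next: a duplicate or already-skipped group, ignore it
        ready
    }
}

fn main() {
    let mut c = OrderedConsumer { next: 0, pending: VecDeque::new(), max_pending: 2 };
    assert_eq!(c.push(0), vec![0]);
    assert_eq!(c.push(2), Vec::<u64>::new()); // buffered, waiting for 1
    assert_eq!(c.push(3), Vec::<u64>::new()); // buffered, still waiting
    assert_eq!(c.push(5), vec![2, 3]);        // cap hit: give up on 1, drain 2 and 3
    println!("next = {}", c.next);
}
```

Skipping forward like this trades completeness for bounded memory and latency, which matches the PR's expiration-based semantics where stale groups may be dropped anyway.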
544-820: Comprehensive test coverage. The tests cover key scenarios: group creation, ordering, duplicates, multiple consumers, close/abort, delivery updates, and ordered consumer behavior including expiration-based skipping. As per coding guidelines, tests are correctly integrated within the source file.
```diff
 impl Drop for Hev1 {
 	fn drop(&mut self) {
-		if let Some(track) = &self.track {
-			tracing::debug!(name = ?track.info.name, "ending track");
-			self.broadcast.catalog.lock().remove_video(&track.info.name);
-		}
+		let Some(mut track) = self.track.take() else { return };
+		track.close().ok();
+
+		let config = self.catalog.lock().video.remove(&track).unwrap();
+		tracing::info!(track = %track.info(), ?config, "ended track");
 	}
```
Potential panic in Drop if catalog lock is poisoned or track not found.
The unwrap() on Line 284 will panic if the track was not found in the catalog's video list. While this might indicate a programming error, panicking in Drop can cause issues (e.g., double panic during stack unwinding).
Suggested defensive handling
impl Drop for Hev1 {
fn drop(&mut self) {
let Some(mut track) = self.track.take() else { return };
track.close().ok();
- let config = self.catalog.lock().video.remove(&track).unwrap();
- tracing::info!(track = %track.info(), ?config, "ended track");
+ match self.catalog.lock().video.remove(&track) {
+ Some(config) => tracing::info!(track = %track.info(), ?config, "ended track"),
+ None => tracing::warn!(track = %track.info(), "track not found in catalog during cleanup"),
+ }
}
}🤖 Prompt for AI Agents
In @rs/hang/src/import/hev1.rs around lines 279 - 286, In Hev1's Drop
implementation, avoid panicking: replace the direct unwrap on
self.catalog.lock().video.remove(&track).unwrap() with defensive handling that
first attempts to acquire the Mutex lock (handle PoisonError by mapping to the
inner guard or logging and returning), then tries to remove the track from the
catalog video map using video.remove(&track) and handles the Option safely
(e.g., if None, log a warning with track.info() and the catalog state instead of
unwrapping). Ensure track.close().ok() still runs, and ensure all errors are
logged with tracing::error or tracing::warn so Drop never panics (do not
propagate or unwrap).
```diff
 impl Drop for Opus {
 	fn drop(&mut self) {
-		if let Some(track) = self.track.take() {
-			tracing::debug!(name = ?track.info.name, "ending track");
-			self.broadcast.catalog.lock().remove_audio(&track.info.name);
-		}
+		let Some(mut track) = self.track.take() else { return };
+		track.close().ok();
+
+		let config = self.catalog.lock().audio.remove(&track).unwrap();
+		tracing::debug!(track = %track.info(), ?config, "ended track");
 	}
```
Same Drop issue as hev1.rs: unwrap() may panic if track not in catalog.
The unwrap() on Line 111 will panic if the track was not found in the catalog's audio list. Consider defensive handling similar to the suggestion for hev1.rs.
Suggested defensive handling
impl Drop for Opus {
fn drop(&mut self) {
let Some(mut track) = self.track.take() else { return };
track.close().ok();
- let config = self.catalog.lock().audio.remove(&track).unwrap();
- tracing::debug!(track = %track.info(), ?config, "ended track");
+ match self.catalog.lock().audio.remove(&track) {
+ Some(config) => tracing::debug!(track = %track.info(), ?config, "ended track"),
+ None => tracing::warn!(track = %track.info(), "track not found in catalog during cleanup"),
+ }
}
🤖 Prompt for AI Agents
In @rs/hang/src/import/opus.rs around lines 106 - 113, The Drop impl for Opus
currently calls unwrap() on the result of
self.catalog.lock().audio.remove(&track) which can panic if the track key is
absent; change this to handle the Option defensively (e.g., use match or if let)
so that if remove returns None you log a warning/debug noting the missing track
and return, and only call tracing::debug!(track = %track.info(), ?config, "ended
track") when remove yields Some(config); reference the Drop impl, the local
variable track, and the catalog.lock().audio.remove(&track) call when making the
change.
```rust
let delivery = track.delivery().current();

// TODO: This is wrong, as we haven't actually gotten a response from the origin yet.
control
	.send(ietf::SubscribeOk {
		request_id,
		track_alias: request_id.0, // NOTE: using track alias as request id for now
		delivery_timeout: delivery.max_latency,
		group_order: if delivery.ordered {
			GroupOrder::Ascending
		} else {
			GroupOrder::Descending
		},
	})
	.ok();
```
Sending SubscribeOk before origin confirmation may cause protocol issues.
The TODO at line 119 acknowledges this: the SubscribeOk is sent before the origin actually confirms the subscription. This could lead to the subscriber receiving frames for a subscription that the origin later rejects.
Consider deferring the SubscribeOk until after the origin confirms, or implementing a retry/error mechanism if the origin fails after the subscriber has been notified.
🤖 Prompt for AI Agents
In @rs/moq-lite/src/ietf/publisher.rs around lines 117 - 131, The SubscribeOk is
sent prematurely from where you call control.send(ietf::SubscribeOk {
request_id, track_alias: request_id.0, delivery_timeout: delivery.max_latency,
group_order: if delivery.ordered { GroupOrder::Ascending } else {
GroupOrder::Descending }, }).ok(); after computing let delivery =
track.delivery().current(); — defer sending that SubscribeOk until after you
receive confirmation from the origin (the success response for the subscription)
instead of immediately; update the code around the subscribe handling to await
or listen for the origin confirmation event, then call control.send(...) only on
success, and on origin failure send an appropriate error response back to the
requester (or implement a retry/backoff) and ensure you correlate using
request_id/track_alias so rejected subscriptions are cleaned up.
```diff
 tracing::info!(id = %request_id, broadcast = %self.origin.as_ref().unwrap().absolute(&broadcast), track = %track.name, "subscribe started");

 track.unused().await;
-tracing::info!(id = %request_id, broadcast = %self.origin.as_ref().unwrap().absolute(&broadcast), track = %track.info.name, "subscribe cancelled");
+tracing::warn!(id = %request_id, broadcast = %self.origin.as_ref().unwrap().absolute(&broadcast), track = %track.name, "subscribe cancelled");
```
Potential panic from unwrap() on self.origin.
Both lines use self.origin.as_ref().unwrap() which will panic if origin is None. While the start_announce method earlier checks for origin being None, this code path in run_subscribe doesn't have a guard.
Consider using ok_or(Error::InvalidRole)? for consistency with other methods in this file.
🔧 Suggested fix

```diff
-tracing::info!(id = %request_id, broadcast = %self.origin.as_ref().unwrap().absolute(&broadcast), track = %track.name, "subscribe started");
+let origin = self.origin.as_ref().ok_or(Error::InvalidRole)?;
+tracing::info!(id = %request_id, broadcast = %origin.absolute(&broadcast), track = %track.name, "subscribe started");
 track.unused().await;
-tracing::warn!(id = %request_id, broadcast = %self.origin.as_ref().unwrap().absolute(&broadcast), track = %track.name, "subscribe cancelled");
+tracing::warn!(id = %request_id, broadcast = %origin.absolute(&broadcast), track = %track.name, "subscribe cancelled");
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```diff
-tracing::info!(id = %request_id, broadcast = %self.origin.as_ref().unwrap().absolute(&broadcast), track = %track.name, "subscribe started");
-track.unused().await;
-tracing::warn!(id = %request_id, broadcast = %self.origin.as_ref().unwrap().absolute(&broadcast), track = %track.name, "subscribe cancelled");
+let origin = self.origin.as_ref().ok_or(Error::InvalidRole)?;
+tracing::info!(id = %request_id, broadcast = %origin.absolute(&broadcast), track = %track.name, "subscribe started");
+track.unused().await;
+tracing::warn!(id = %request_id, broadcast = %origin.absolute(&broadcast), track = %track.name, "subscribe cancelled");
```
🤖 Prompt for AI Agents
In @rs/moq-lite/src/ietf/subscriber.rs around lines 296 - 299, The tracing calls
in run_subscribe use self.origin.as_ref().unwrap(), which can panic when origin
is None; replace these unwraps with a safe check like let origin =
self.origin.as_ref().ok_or(Error::InvalidRole)? at the start of run_subscribe
(or immediately before the tracing calls) and then use
origin.absolute(&broadcast) in the tracing macros; update both the "subscribe
started" and "subscribe cancelled" lines to use the validated origin and
propagate the Error::InvalidRole via ? for consistency with start_announce.
```rust
impl<T> Clone for Producer<T> {
    fn clone(&self) -> Self {
        self.active.fetch_add(1, atomic::Ordering::Relaxed);
        Self {
            state: self.state.clone(),
            active: self.active.clone(),
        }
    }
}

impl<T> Drop for Producer<T> {
    fn drop(&mut self) {
        let active = self.active.fetch_sub(1, atomic::Ordering::Relaxed);
        if active != 1 {
            return;
        }

        self.state.send_if_modified(|state| {
            if state.closed.is_some() {
                return false;
            }

            state.closed = Some(Err(Error::Dropped));
            true
        });
    }
}
```
🧩 Analysis chain
🏁 Script executed:

```shell
# First, let's examine the full state.rs file to understand the context
wc -l rs/moq-lite/src/model/state.rs
```
🌐 Web query:
memory ordering for reference counting in Rust Arc
💡 Result:
Short answer:
- Incrementing the strong count (Arc::clone) uses a relaxed atomic increment (Ordering::Relaxed). [1][2]
- Decrementing (Arc::drop) uses a release decrement (Ordering::Release); if that decrement observes the last reference (returned value == 1), the drop does an Acquire fence (or equivalent Acquire load) before actually freeing the inner data to synchronize prior writes from other owners. This is what guarantees that writes done by any owner before dropping are visible to the destructor that frees the object. [3][1]
Why: Relaxed is enough for clone because the thread already has a reference to the Arc (no need to order other memory). Release on decrement + Acquire when destroying forms the necessary release–acquire synchronization to prevent use-after-free / lost writes across threads. [3][4]
Sources: Rust std/Arc docs and the Nomicon explanation of Arc drop. [1][3][2]
🏁 Script executed:

```shell
# Let's also look at the state structure and how send_if_modified works
head -150 rs/moq-lite/src/model/state.rs | tail -40
```
Strengthen memory ordering for the reference counter synchronization.
The fetch_sub in Drop uses Relaxed ordering, but this needs Release semantics to synchronize with other threads' decrements. Per Rust's Arc implementation, when the last reference is dropped (checked at line 133), use a Release decrement followed by an Acquire fence before accessing self.state to ensure visibility of writes from other dropping threads. Without this, modifications by threads dropping their references may not be visible when the final thread calls send_if_modified.
Keep Relaxed for the fetch_add in Clone (correct as-is).
🤖 Prompt for AI Agents
In @rs/moq-lite/src/model/state.rs around lines 120 - 146, In Producer<T>::drop
replace the Relaxed ordering on self.active.fetch_sub(...) with Release
ordering, and after detecting the last reference (when fetch_sub returns 1)
insert an Acquire fence (atomic::fence(atomic::Ordering::Acquire)) before
calling self.state.send_if_modified(...); keep the fetch_add in Producer::clone
as Relaxed, and ensure the call site still sets state.closed =
Some(Err(Error::Dropped)) inside send_if_modified.
```rust
.await
.expect("not closed properly");
```
expect() could panic if watch sender is dropped unexpectedly.
The wait_for returns Err when the sender is dropped without notification. The .expect("not closed properly") will panic in this case. While this indicates a programming error, consider returning an Error variant instead for more graceful handling.
🔧 Suggested fix

```diff
 let state = self
     .inner
     .wait_for(|state| {
         // We always want to check the function first, only returning closed if false.
         matched = f(&state.value);
         matched || state.closed.is_some()
     })
     .await
-    .expect("not closed properly");
+    .map_err(|_| Error::Dropped)?;
```

🤖 Prompt for AI Agents
In @rs/moq-lite/src/model/state.rs around lines 246 - 247, The
`.await.expect("not closed properly")` call on the `wait_for` result can panic
if the watch sender is dropped; replace the `expect` with proper error handling:
propagate the `Err` from `wait_for` as a meaningful error (e.g., convert to your
module's Error variant or return a Result) instead of panicking. Locate the call
that uses `wait_for().await.expect("not closed properly")` and change it to
handle `Err` via `?`, `map_err(...)`, or a `match`, returning an appropriate
error variant (with context) from the enclosing function.
A huge change, I know, but it got too difficult to untangle. Claude struggled.
moq-lite-03

- Added `Subscriber Max Latency` and `Subscriber Ordered` to `SUBSCRIBE` and `SUBSCRIBE_UPDATE`.
- Added `Publisher Priority`, `Publisher Max Latency`, and `Publisher Ordered` to `SUBSCRIBE_OK`.
- Added `Instant Delta` to `FRAME`.
- `SUBSCRIBE_OK` may be sent multiple times.

Basically, the publisher and all subscribers have a desired priority, max latency, and group order. The relay will request the maximum value of all subscribers combined. This involves issuing a `SUBSCRIBE_UPDATE` when the value changes.
Instead of a hard-coded 2 groups active, now there can be any number of groups based on max latency. This should clear up errors especially around audio. The publisher max latency determines how long a group should be stored in cache. All of this is based on a timestamp sent with each frame.
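The relay's "maximum of all subscribers" aggregation and the latency-based cache expiry can be sketched as follows. These types and rules are illustrative assumptions, not the PR's actual moq-lite API: in particular, treating `ordered` as a bool whose maximum is logical OR, and the exact expiry condition, are guesses at the combination rule.

```rust
use std::time::Duration;

// Hypothetical per-subscriber delivery preferences.
#[derive(Clone, Copy, Debug, PartialEq)]
struct Delivery {
    priority: u8,
    max_latency: Duration,
    ordered: bool,
}

// The relay forwards the maximum of each field across all subscribers,
// re-issuing SUBSCRIBE_UPDATE upstream whenever this aggregate changes.
fn aggregate(subs: &[Delivery]) -> Option<Delivery> {
    subs.iter().copied().reduce(|a, b| Delivery {
        priority: a.priority.max(b.priority),
        max_latency: a.max_latency.max(b.max_latency),
        ordered: a.ordered || b.ordered,
    })
}

// A cached group expires once its newest frame timestamp falls further
// behind "now" than the publisher's max latency (again, illustrative).
fn expired(newest_frame: Duration, now: Duration, max_latency: Duration) -> bool {
    now.saturating_sub(newest_frame) > max_latency
}

fn main() {
    let subs = [
        Delivery { priority: 1, max_latency: Duration::from_millis(100), ordered: false },
        Delivery { priority: 3, max_latency: Duration::from_millis(250), ordered: true },
    ];
    let agg = aggregate(&subs).unwrap();
    assert_eq!(agg.priority, 3);
    assert_eq!(agg.max_latency, Duration::from_millis(250));
    assert!(agg.ordered);

    // A group whose newest frame is 300ms old exceeds the 250ms budget.
    assert!(expired(Duration::from_millis(700), Duration::from_millis(1000), agg.max_latency));
    assert!(!expired(Duration::from_millis(900), Duration::from_millis(1000), agg.max_latency));
    println!("aggregate: {:?}", agg);
}
```

The point of the sketch is that the number of live groups is no longer a constant: it falls out of how many groups' frame timestamps still sit inside the aggregated max-latency window.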
I used this as an opportunity to revamp the `hang` API. It was super confusing that hang wrapped moq while using the same naming. More has been moved into the `moq-lite` layer, like the ability to download frames in order based on the max latency.

I'm super sorry about the inevitable merge conflicts. Ping me and I'll resolve it.