Conversation

@larseggert
Collaborator

Already a dependency in Gecko. Lets us reduce boilerplate quite a bit.

Copilot AI review requested due to automatic review settings December 19, 2025 16:57
Contributor

Copilot AI left a comment

Pull request overview

This PR reduces boilerplate code by adopting the derive_more crate (already a dependency in Gecko) to automatically derive common trait implementations like Display, Debug, Deref, DerefMut, From, Into, AsRef, and Sub across multiple crates.

Key changes:

  • Replaced manual trait implementations with derive_more macros (see the sketch after this list)
  • Added derive_more and strum (for enum Display) to workspace dependencies
  • Applied conversions across neqo-common, neqo-crypto, neqo-transport, neqo-http3, neqo-qpack, neqo-bin, and test-fixture crates
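
To make the pattern concrete, here is a minimal before/after sketch of the kind of conversion applied throughout. This is illustrative only, not code from the diff; the type names are invented, and it assumes derive_more with its "display" feature and the 1.x-style attribute syntax.

use std::fmt;

// Before: a hand-written Display implementation.
struct StreamIdBefore(u64);

impl fmt::Display for StreamIdBefore {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        write!(f, "StreamId {}", self.0)
    }
}

// After: the same output, derived from an attribute instead of an impl block.
#[derive(derive_more::Display)]
#[display("StreamId {_0}")]
struct StreamIdAfter(u64);

fn main() {
    assert_eq!(StreamIdBefore(3).to_string(), StreamIdAfter(3).to_string());
}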

Reviewed changes

Copilot reviewed 69 out of 70 changed files in this pull request and generated 6 comments.

Summary per file (file: description):

Cargo.toml: Added derive_more to workspace dependencies with required features
neqo-transport/src/addr_valid.rs: Changed FrameType::NewToken to cast to u64
neqo-transport/src/ackrate.rs: Changed FrameType::AckFrequency to cast to u64
neqo-transport/src/tracking.rs: Replaced manual Display with derive_more::Display for PacketRange and RecvdPackets
neqo-transport/src/stream_id.rs: Replaced manual Display, From with derive_more for StreamId
neqo-transport/src/stats.rs: Replaced manual Deref/DerefMut with derive_more for wrapper types
neqo-transport/src/stateless_reset.rs: Replaced manual AsRef with derive_more::AsRef for Token
neqo-transport/src/server.rs: Replaced manual Display with derive_more::Display for Server
neqo-transport/src/send_stream.rs: Replaced manual Display with derive_more::Display for SendStream
neqo-transport/src/recovery/mod.rs: Replaced manual Display with derive_more::Display for Loss
neqo-transport/src/quic_datagrams.rs: Replaced manual AsRef with derive_more::AsRef for QuicDatagram
neqo-transport/src/packet/mod.rs: Replaced manual Debug and Deref with derive_more for Public and Decrypted
neqo-transport/src/packet/metadata.rs: Replaced manual Display with derive_more::Display for MetaData
neqo-transport/src/pace.rs: Replaced manual Display/Debug with derive_more for Pacer
neqo-transport/src/frame.rs: Added #[expect(clippy::too_many_lines)] to test function
neqo-transport/src/fc.rs: Replaced manual Display and Deref/DerefMut with derive_more for flow control types
neqo-transport/src/ecn.rs: Replaced manual Deref/DerefMut with derive_more for Count and ValidationCount
neqo-transport/src/crypto.rs: Replaced manual Display with derive_more::Display for crypto types
neqo-transport/src/connection/mod.rs: Replaced manual Display/Debug with derive_more for Connection
neqo-transport/src/cid.rs: Replaced manual Display/Debug/Deref with derive_more for ConnectionId types
neqo-transport/src/cc/*.rs: Replaced manual Display with derive_more::Display for congestion control types
neqo-qpack/src/*.rs: Replaced manual Display with derive_more::Display across encoder/decoder types
neqo-http3/src/*.rs: Replaced manual Display/Deref with derive_more across HTTP/3 types
neqo-crypto/src/*.rs: Replaced manual Display/Debug/Deref with derive_more for crypto types
neqo-common/src/*.rs: Replaced manual Debug with derive_more::Debug for common types
test-fixture/src/sim/*.rs: Replaced manual Debug/Display with derive_more for simulation types
neqo-bin/src/server/*.rs: Replaced manual Display with derive_more::Display for server types

Comment on lines +40 to +44
#[debug("{}", self.$debug_method().map_or_else(|_| format!("Opaque {}", stringify!($scoped)), |b| format!("{} {}", stringify!($scoped), neqo_common::hex_with_len(b))))]
pub struct $scoped {
    ptr: *mut $target,
}

Copilot AI Dec 19, 2025

The debug format string is excessively long and complex. Consider extracting the closure logic into a helper method on the scoped pointer types to improve readability and maintainability of the macro.

Suggested change

Before:

#[debug("{}", self.$debug_method().map_or_else(|_| format!("Opaque {}", stringify!($scoped)), |b| format!("{} {}", stringify!($scoped), neqo_common::hex_with_len(b))))]
pub struct $scoped {
    ptr: *mut $target,
}

After:

#[debug("{}", self.debug_string())]
pub struct $scoped {
    ptr: *mut $target,
}

impl $scoped {
    #[allow(dead_code)]
    fn debug_string(&self) -> String {
        self.$debug_method().map_or_else(
            |_| format!("Opaque {}", stringify!($scoped)),
            |b| format!(
                "{} {}",
                stringify!($scoped),
                neqo_common::hex_with_len(b)
            ),
        )
    }
}

Member

This seems mostly good, though I would maybe use concat!() rather than format!() to place the type name; the second arm still needs format!(), but concat!() can bake the type name into the format literal at compile time.
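
A standalone sketch of that idea (illustrative only; the macro and the hex() helper below are invented stand-ins, with hex() playing the role of neqo_common::hex_with_len): because stringify!() and concat!() both expand to string literals, the type name can be folded into the format literal at compile time, and only the hex dump still goes through format!() at runtime.

// Toy macro, not the neqo scoped-pointer macro; the identifier is only ever stringified.
macro_rules! debug_string {
    ($scoped:ident, $value:expr) => {
        $value.map_or_else(
            // Error arm: everything is a literal, so no runtime format!() is needed.
            |_: ()| concat!("Opaque ", stringify!($scoped)).to_string(),
            // Ok arm: the literal prefix comes from concat!(); only the data is formatted at runtime.
            |b: &[u8]| format!(concat!(stringify!($scoped), " {}"), hex(b)),
        )
    };
}

// Stand-in for neqo_common::hex_with_len, just for this sketch.
fn hex(b: &[u8]) -> String {
    b.iter().map(|x| format!("{x:02x}")).collect()
}

fn main() {
    let ok: Result<&[u8], ()> = Ok(&[0xde, 0xad]);
    let err: Result<&[u8], ()> = Err(());
    println!("{}", debug_string!(Cert, ok));  // "Cert dead"
    println!("{}", debug_string!(Cert, err)); // "Opaque Cert"
}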

/// `Public` holds information from packets that is public only. This allows for
/// processing of packets prior to decryption.
#[derive(derive_more::Debug)]
#[debug("{:?}: {} {}", self.packet_type(), hex_with_len(&data[..*header_len]), hex_with_len(&data[*header_len..]))]
Copilot AI Dec 19, 2025

The debug format accesses struct fields directly (data, header_len) without qualification. Consider using self.data and self.header_len for clarity and consistency with self.packet_type().

Suggested change

Before:

#[debug("{:?}: {} {}", self.packet_type(), hex_with_len(&data[..*header_len]), hex_with_len(&data[*header_len..]))]

After:

#[debug(
    "{:?}: {} {}",
    self.packet_type(),
    hex_with_len(&self.data[..self.header_len]),
    hex_with_len(&self.data[self.header_len..])
)]

Comment on lines +7 to +10
#![expect(
    clippy::mutable_key_type,
    reason = "Triggered by derive_more::Display on Http3Server, which just prints a static string."
)]

Copilot AI Dec 19, 2025

The mutable_key_type lint is triggered by HashMap<ConnectionRef, HandlerRef> where ConnectionRef implements Hash. The issue is that ConnectionRef contains a Rc<RefCell<Connection>> which is mutable. Consider whether this is the intended design, as using mutable types as hash keys can lead to subtle bugs if the hash value changes after insertion.
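
As a standalone illustration of what the lint is about (invented types, not neqo code): a map key whose Hash depends on interior-mutable state can become unreachable if that state changes after insertion.

use std::{
    cell::RefCell,
    collections::HashMap,
    hash::{Hash, Hasher},
    rc::Rc,
};

// A key with interior mutability whose hash is derived from the mutable contents.
#[derive(PartialEq, Eq)]
struct Key(Rc<RefCell<u32>>);

impl Hash for Key {
    fn hash<H: Hasher>(&self, state: &mut H) {
        self.0.borrow().hash(state);
    }
}

fn main() {
    let k = Key(Rc::new(RefCell::new(1)));
    let mut map = HashMap::new();
    map.insert(Key(Rc::clone(&k.0)), "value");

    // Mutating through the shared Rc changes what the key hashes to, so the
    // lookup below will typically miss the entry -- the hazard clippy flags.
    *k.0.borrow_mut() = 2;
    println!("found after mutation: {}", map.get(&k).is_some());
}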

#![expect(clippy::unwrap_used, reason = "This is example code.")]
#![expect(
    clippy::mutable_key_type,
    reason = "Triggered by derive_more::Display on HttpServer, which just prints a static string."
)]

Copilot AI Dec 19, 2025

Same as the previous comment - the mutable_key_type lint is triggered by the underlying Http3Server's HashMap<ConnectionRef, HandlerRef>. The suppression reason references derive_more::Display, but the actual issue is the HashMap key type. Consider documenting the real reason or addressing the underlying design.

Suggested change

Before:
reason = "Triggered by derive_more::Display on HttpServer, which just prints a static string."

After:
reason = "Triggered by Http3Server's internal HashMap<ConnectionRef, HandlerRef>; accepted here for example server code."

@codecov

codecov bot commented Dec 19, 2025

Codecov Report

✅ All modified and coverable lines are covered by tests.
✅ Project coverage is 93.96%. Comparing base (adcd4bd) to head (323089c).

Additional details and impacted files
@@            Coverage Diff             @@
##             main    #3290      +/-   ##
==========================================
- Coverage   93.99%   93.96%   -0.04%     
==========================================
  Files         124      124              
  Lines       37597    37259     -338     
  Branches    37597    37259     -338     
==========================================
- Hits        35340    35011     -329     
+ Misses       1392     1385       -7     
+ Partials      865      863       -2     
Components Coverage Δ
neqo-common 98.52% <ø> (-0.02%) ⬇️
neqo-crypto 84.40% <ø> (-0.46%) ⬇️
neqo-http3 93.87% <ø> (+0.01%) ⬆️
neqo-qpack 94.63% <ø> (-0.05%) ⬇️
neqo-transport 94.79% <100.00%> (-0.04%) ⬇️
neqo-udp 82.84% <ø> (+0.41%) ⬆️
mtu 88.94% <ø> (ø)

@codspeed-hq

codspeed-hq bot commented Dec 19, 2025

CodSpeed Performance Report

Merging #3290 will degrade performance by 7.32%

Comparing larseggert:chore-use-derive-more (323089c) with main (e74f986) (see footnote 1)

Summary

⚡ 1 improvement
❌ 5 regressions
✅ 17 untouched

⚠️ Please fix the performance issues or acknowledge them on CodSpeed.

Benchmarks breakdown

Mode Benchmark BASE HEAD Efficiency
Simulation client 816.2 ms 769.4 ms +6.09%
Simulation wallclock-time 1 ms 1.1 ms -3.78%
Simulation coalesce_acked_from_zero 3+1 entries 3 µs 3.1 µs -3.71%
Simulation coalesce_acked_from_zero 1000+1 entries 2.6 µs 2.8 µs -7.32%
Simulation coalesce_acked_from_zero 10+1 entries 3 µs 3.1 µs -3.71%
Simulation coalesce_acked_from_zero 1+1 entries 2.9 µs 3 µs -3.92%

Footnotes

  1. No successful run was found on main (adcd4bd) during the generation of this report, so e74f986 was used instead as the comparison base. There might be some changes unrelated to this pull request in this report.

@github-actions
Contributor

Client/server transfer results

Performance differences relative to adcd4bd.

Transfer of 33554432 bytes over loopback, min. 100 runs. All unit-less numbers are in milliseconds.

Client vs. server (params) Mean ± σ Min Max MiB/s ± σ Δ main (ms) Δ main (%)
google-neqo-cubic 270.2 ± 4.4 262.9 285.0 118.4 ± 7.3 💚 -1.4 -0.5%
neqo-google-cubic 767.2 ± 4.1 759.0 780.5 41.7 ± 7.8 💚 -1.5 -0.2%
neqo-neqo-cubic-nopacing 95.7 ± 4.3 87.7 107.0 334.5 ± 7.4 💚 -1.8 -1.9%
neqo-quiche-cubic 193.2 ± 4.1 187.0 204.1 165.7 ± 7.8 💚 -1.5 -0.8%
neqo-s2n-cubic 220.1 ± 4.4 214.5 241.6 145.4 ± 7.3 💚 -1.5 -0.7%

Table above only shows statistically significant changes. See all results below.

All results

Transfer of 33554432 bytes over loopback, min. 100 runs. All unit-less numbers are in milliseconds.

Client vs. server (params) Mean ± σ Min Max MiB/s ± σ Δ main (ms) Δ main (%)
google-google-nopacing 458.9 ± 4.3 451.4 476.8 69.7 ± 7.4
google-neqo-cubic 270.2 ± 4.4 262.9 285.0 118.4 ± 7.3 💚 -1.4 -0.5%
msquic-msquic-nopacing 221.0 ± 99.1 146.0 626.0 144.8 ± 0.3
msquic-neqo-cubic 207.3 ± 49.5 151.4 424.0 154.4 ± 0.6 -3.7 -1.7%
neqo-google-cubic 767.2 ± 4.1 759.0 780.5 41.7 ± 7.8 💚 -1.5 -0.2%
neqo-msquic-cubic 160.2 ± 3.9 154.3 169.3 199.8 ± 8.2 -0.6 -0.4%
neqo-neqo-cubic 98.4 ± 4.4 89.2 105.6 325.0 ± 7.3 1.2 1.3%
neqo-neqo-cubic-nopacing 95.7 ± 4.3 87.7 107.0 334.5 ± 7.4 💚 -1.8 -1.9%
neqo-neqo-newreno 97.2 ± 4.5 87.8 106.5 329.4 ± 7.1 -0.1 -0.1%
neqo-neqo-newreno-nopacing 95.9 ± 4.3 89.7 104.5 333.7 ± 7.4 0.2 0.2%
neqo-quiche-cubic 193.2 ± 4.1 187.0 204.1 165.7 ± 7.8 💚 -1.5 -0.8%
neqo-s2n-cubic 220.1 ± 4.4 214.5 241.6 145.4 ± 7.3 💚 -1.5 -0.7%
quiche-neqo-cubic 152.7 ± 5.1 140.6 162.2 209.6 ± 6.3 -0.8 -0.5%
quiche-quiche-nopacing 141.1 ± 4.5 135.6 154.8 226.7 ± 7.1
s2n-neqo-cubic 175.0 ± 4.8 165.7 193.5 182.8 ± 6.7 1.4 0.8%
s2n-s2n-nopacing 251.7 ± 28.1 232.3 368.6 127.1 ± 1.1

Download data for profiler.firefox.com or download performance comparison data.

@github-actions
Contributor

🐰 Bencher Report

Branch: chore-use-derive-more
Testbed: On-prem

🚨 1 Alert

coalesce_acked_from_zero 1000+1 entries (Latency): 97.55 ns (+7.25%; baseline 90.96 ns); upper boundary 95.29 ns (limit 102.37%)

All benchmark results (Latency; result with Δ% vs. baseline; upper boundary with limit %):

1-conn/1-100mb-req/mtu-1504 (aka. Upload)/client: 203,870,000.00 ns (-1.98%; baseline 207,996,530.94 ns); upper boundary 217,131,182.04 ns (93.89%)
1-conn/1-100mb-resp/mtu-1504 (aka. Download)/client: 200,030,000.00 ns (-1.20%; baseline 202,454,283.39 ns); upper boundary 212,168,371.59 ns (94.28%)
1-conn/1-1b-resp/mtu-1504 (aka. HPS)/client: 38,729,000.00 ns (+11.51%; baseline 34,731,755.70 ns); upper boundary 46,427,997.39 ns (83.42%)
1-conn/10_000-parallel-1b-resp/mtu-1504 (aka. RPS)/client: 280,660,000.00 ns (-2.66%; baseline 288,323,029.32 ns); upper boundary 301,351,952.54 ns (93.13%)
1-streams/each-1000-bytes/simulated-time: 119,090,000.00 ns (+0.20%; baseline 118,848,794.79 ns); upper boundary 120,450,589.14 ns (98.87%)
1-streams/each-1000-bytes/wallclock-time: 584,160.00 ns (-0.87%; baseline 589,267.41 ns); upper boundary 609,160.94 ns (95.90%)
1000-streams/each-1-bytes/simulated-time: 2,330,400,000.00 ns (-72.91%; baseline 8,602,514,495.11 ns); upper boundary 23,369,679,444.11 ns (9.97%)
1000-streams/each-1-bytes/wallclock-time: 12,494,000.00 ns (-5.62%; baseline 13,237,711.73 ns); upper boundary 15,094,723.92 ns (82.77%)
1000-streams/each-1000-bytes/simulated-time: 16,441,000,000.00 ns (-6.85%; baseline 17,649,521,172.64 ns); upper boundary 20,699,471,289.58 ns (79.43%)
1000-streams/each-1000-bytes/wallclock-time: 51,146,000.00 ns (+1.28%; baseline 50,499,078.18 ns); upper boundary 55,383,137.95 ns (92.35%)
RxStreamOrderer::inbound_frame(): 108,450,000.00 ns (-1.10%; baseline 109,660,960.91 ns); upper boundary 111,405,366.94 ns (97.35%)
coalesce_acked_from_zero 1+1 entries: 89.58 ns (+0.43%; baseline 89.20 ns); upper boundary 90.55 ns (98.94%)
coalesce_acked_from_zero 10+1 entries: 105.68 ns (-0.32%; baseline 106.02 ns); upper boundary 107.08 ns (98.69%)
coalesce_acked_from_zero 1000+1 entries: 🚨 97.55 ns (+7.25%; baseline 90.96 ns); upper boundary 95.29 ns (102.37%)
coalesce_acked_from_zero 3+1 entries: 106.26 ns (-0.24%; baseline 106.51 ns); upper boundary 107.50 ns (98.85%)
decode 1048576 bytes, mask 3f: 1,417,200.00 ns (-17.90%; baseline 1,726,136.97 ns); upper boundary 2,511,631.95 ns (56.43%)
decode 1048576 bytes, mask 7f: 1,475,900.00 ns (-67.23%; baseline 4,503,182.25 ns); upper boundary 7,383,504.46 ns (19.99%)
decode 1048576 bytes, mask ff: 1,162,000.00 ns (-57.56%; baseline 2,738,169.71 ns); upper boundary 4,212,590.37 ns (27.58%)
decode 4096 bytes, mask 3f: 5,547.60 ns (-22.01%; baseline 7,113.27 ns); upper boundary 10,933.72 ns (50.74%)
decode 4096 bytes, mask 7f: 5,826.40 ns (-66.85%; baseline 17,578.42 ns); upper boundary 28,825.20 ns (20.21%)
decode 4096 bytes, mask ff: 4,516.50 ns (-56.29%; baseline 10,332.70 ns); upper boundary 15,791.40 ns (28.60%)
sent::Packets::take_ranges: 4,692.50 ns (+0.35%; baseline 4,676.04 ns); upper boundary 4,914.59 ns (95.48%)
transfer/pacing-false/same-seed/simulated-time/run: 23,941,000,000.00 ns (-4.65%; baseline 25,108,802,287.58 ns); upper boundary 26,449,915,806.24 ns (90.51%)
transfer/pacing-false/same-seed/wallclock-time/run: 23,024,000.00 ns (-8.30%; baseline 25,106,697.71 ns); upper boundary 27,227,811.86 ns (84.56%)
transfer/pacing-false/varying-seeds/simulated-time/run: 23,941,000,000.00 ns (-4.11%; baseline 24,966,222,222.22 ns); upper boundary 26,061,728,700.23 ns (91.86%)
transfer/pacing-false/varying-seeds/wallclock-time/run: 23,346,000.00 ns (-7.44%; baseline 25,221,261.44 ns); upper boundary 27,515,908.95 ns (84.85%)
transfer/pacing-true/same-seed/simulated-time/run: 23,676,000,000.00 ns (-5.75%; baseline 25,121,627,450.98 ns); upper boundary 26,753,679,460.16 ns (88.50%)
transfer/pacing-true/same-seed/wallclock-time/run: 23,793,000.00 ns (-9.09%; baseline 26,173,034.31 ns); upper boundary 29,089,208.87 ns (81.79%)
transfer/pacing-true/varying-seeds/simulated-time/run: 23,676,000,000.00 ns (-4.40%; baseline 24,765,699,346.41 ns); upper boundary 25,930,476,681.75 ns (91.31%)
transfer/pacing-true/varying-seeds/wallclock-time/run: 23,939,000.00 ns (-6.88%; baseline 25,706,392.16 ns); upper boundary 28,122,625.71 ns (85.12%)
🐰 View full continuous benchmarking report in Bencher

@github-actions
Contributor

Benchmark results

Significant performance differences relative to adcd4bd.

1000-streams/each-1000-bytes/wallclock-time: 💔 Performance has regressed by +2.8628%.
       time:   [51.025 ms 51.146 ms 51.274 ms]
       change: [+2.5226% +2.8628% +3.2109%] (p = 0.00 < 0.05)
       Performance has regressed.
Found 1 outliers among 100 measurements (1.00%)
1 (1.00%) high severe
All results
1-conn/1-100mb-resp/mtu-1504 (aka. Download)/client: Change within noise threshold.
       time:   [199.74 ms 200.03 ms 200.33 ms]
       thrpt:  [499.18 MiB/s 499.92 MiB/s 500.66 MiB/s]
change:
       time:   [-1.5033% -1.2263% -0.9828%] (p = 0.00 < 0.05)
       thrpt:  [+0.9926% +1.2415% +1.5262%]
       Change within noise threshold.
1-conn/10_000-parallel-1b-resp/mtu-1504 (aka. RPS)/client: No change in performance detected.
       time:   [279.02 ms 280.66 ms 282.33 ms]
       thrpt:  [35.420 Kelem/s 35.630 Kelem/s 35.840 Kelem/s]
change:
       time:   [-0.7491% +0.0446% +0.8536%] (p = 0.92 > 0.05)
       thrpt:  [-0.8463% -0.0446% +0.7548%]
       No change in performance detected.
Found 1 outliers among 100 measurements (1.00%)
1 (1.00%) high mild
1-conn/1-1b-resp/mtu-1504 (aka. HPS)/client: No change in performance detected.
       time:   [38.535 ms 38.729 ms 38.944 ms]
       thrpt:  [25.678   B/s 25.821   B/s 25.951   B/s]
change:
       time:   [-0.4305% +0.3268% +1.0557%] (p = 0.39 > 0.05)
       thrpt:  [-1.0447% -0.3257% +0.4324%]
       No change in performance detected.
Found 9 outliers among 100 measurements (9.00%)
1 (1.00%) high mild
8 (8.00%) high severe
1-conn/1-100mb-req/mtu-1504 (aka. Upload)/client: Change within noise threshold.
       time:   [203.48 ms 203.87 ms 204.40 ms]
       thrpt:  [489.25 MiB/s 490.50 MiB/s 491.45 MiB/s]
change:
       time:   [-0.5697% -0.3197% -0.0114%] (p = 0.02 < 0.05)
       thrpt:  [+0.0114% +0.3207% +0.5730%]
       Change within noise threshold.
Found 2 outliers among 100 measurements (2.00%)
1 (1.00%) low mild
1 (1.00%) high severe
decode 4096 bytes, mask ff: No change in performance detected.
       time:   [4.5086 µs 4.5165 µs 4.5250 µs]
       change: [-0.5384% -0.0714% +0.3567%] (p = 0.76 > 0.05)
       No change in performance detected.
Found 8 outliers among 100 measurements (8.00%)
5 (5.00%) high mild
3 (3.00%) high severe
decode 1048576 bytes, mask ff: No change in performance detected.
       time:   [1.1606 ms 1.1620 ms 1.1634 ms]
       change: [-1.0593% -0.0646% +0.8769%] (p = 0.90 > 0.05)
       No change in performance detected.
Found 10 outliers among 100 measurements (10.00%)
10 (10.00%) high severe
decode 4096 bytes, mask 7f: No change in performance detected.
       time:   [5.7983 µs 5.8264 µs 5.8730 µs]
       change: [-0.3504% -0.0140% +0.3661%] (p = 0.94 > 0.05)
       No change in performance detected.
Found 4 outliers among 100 measurements (4.00%)
2 (2.00%) high mild
2 (2.00%) high severe
decode 1048576 bytes, mask 7f: Change within noise threshold.
       time:   [1.4738 ms 1.4759 ms 1.4782 ms]
       change: [-0.9237% -0.7111% -0.4862%] (p = 0.00 < 0.05)
       Change within noise threshold.
decode 4096 bytes, mask 3f: No change in performance detected.
       time:   [5.5367 µs 5.5476 µs 5.5592 µs]
       change: [-0.7854% -0.2514% +0.1604%] (p = 0.35 > 0.05)
       No change in performance detected.
Found 24 outliers among 100 measurements (24.00%)
10 (10.00%) high mild
14 (14.00%) high severe
decode 1048576 bytes, mask 3f: No change in performance detected.
       time:   [1.4150 ms 1.4172 ms 1.4195 ms]
       change: [-0.3083% -0.0751% +0.1538%] (p = 0.53 > 0.05)
       No change in performance detected.
1-streams/each-1000-bytes/wallclock-time: Change within noise threshold.
       time:   [582.24 µs 584.16 µs 586.39 µs]
       change: [+0.2361% +0.6898% +1.1823%] (p = 0.00 < 0.05)
       Change within noise threshold.
Found 6 outliers among 100 measurements (6.00%)
6 (6.00%) high severe
1-streams/each-1000-bytes/simulated-time: No change in performance detected.
       time:   [118.86 ms 119.09 ms 119.32 ms]
       thrpt:  [8.1847 KiB/s 8.2003 KiB/s 8.2160 KiB/s]
change:
       time:   [-0.2229% +0.0413% +0.3031%] (p = 0.76 > 0.05)
       thrpt:  [-0.3022% -0.0413% +0.2234%]
       No change in performance detected.
1000-streams/each-1-bytes/wallclock-time: No change in performance detected.
       time:   [12.459 ms 12.494 ms 12.530 ms]
       change: [-0.2889% +0.1248% +0.5587%] (p = 0.55 > 0.05)
       No change in performance detected.
1000-streams/each-1-bytes/simulated-time: No change in performance detected.
       time:   [2.3274 s 2.3304 s 2.3333 s]
       thrpt:  [428.57   B/s 429.12   B/s 429.67   B/s]
change:
       time:   [-0.1982% -0.0070% +0.1865%] (p = 0.94 > 0.05)
       thrpt:  [-0.1862% +0.0070% +0.1986%]
       No change in performance detected.
Found 3 outliers among 100 measurements (3.00%)
2 (2.00%) low mild
1 (1.00%) high mild
1000-streams/each-1000-bytes/wallclock-time: 💔 Performance has regressed by +2.8628%.
       time:   [51.025 ms 51.146 ms 51.274 ms]
       change: [+2.5226% +2.8628% +3.2109%] (p = 0.00 < 0.05)
       Performance has regressed.
Found 1 outliers among 100 measurements (1.00%)
1 (1.00%) high severe
1000-streams/each-1000-bytes/simulated-time: No change in performance detected.
       time:   [16.182 s 16.441 s 16.701 s]
       thrpt:  [58.472 KiB/s 59.398 KiB/s 60.349 KiB/s]
change:
       time:   [-2.1350% +0.0983% +2.4409%] (p = 0.94 > 0.05)
       thrpt:  [-2.3827% -0.0982% +2.1816%]
       No change in performance detected.
coalesce_acked_from_zero 1+1 entries: No change in performance detected.
       time:   [89.311 ns 89.583 ns 89.858 ns]
       change: [-0.2598% +0.2506% +0.7758%] (p = 0.35 > 0.05)
       No change in performance detected.
Found 12 outliers among 100 measurements (12.00%)
8 (8.00%) high mild
4 (4.00%) high severe
coalesce_acked_from_zero 3+1 entries: No change in performance detected.
       time:   [105.90 ns 106.26 ns 106.65 ns]
       change: [-1.9128% -0.5981% +0.3427%] (p = 0.36 > 0.05)
       No change in performance detected.
Found 14 outliers among 100 measurements (14.00%)
2 (2.00%) high mild
12 (12.00%) high severe
coalesce_acked_from_zero 10+1 entries: No change in performance detected.
       time:   [105.28 ns 105.68 ns 106.15 ns]
       change: [-1.0261% -0.4197% +0.1267%] (p = 0.17 > 0.05)
       No change in performance detected.
Found 9 outliers among 100 measurements (9.00%)
2 (2.00%) low mild
2 (2.00%) high mild
5 (5.00%) high severe
coalesce_acked_from_zero 1000+1 entries: No change in performance detected.
       time:   [90.284 ns 97.551 ns 109.09 ns]
       change: [-0.3112% +5.7376% +15.352%] (p = 0.19 > 0.05)
       No change in performance detected.
Found 18 outliers among 100 measurements (18.00%)
8 (8.00%) high mild
10 (10.00%) high severe
RxStreamOrderer::inbound_frame(): Change within noise threshold.
       time:   [108.38 ms 108.45 ms 108.51 ms]
       change: [-0.2987% -0.2208% -0.1397%] (p = 0.00 < 0.05)
       Change within noise threshold.
Found 10 outliers among 100 measurements (10.00%)
8 (8.00%) low mild
1 (1.00%) high mild
1 (1.00%) high severe
sent::Packets::take_ranges: No change in performance detected.
       time:   [4.5446 µs 4.6925 µs 4.8403 µs]
       change: [-3.1495% +0.5545% +4.3316%] (p = 0.77 > 0.05)
       No change in performance detected.
Found 4 outliers among 100 measurements (4.00%)
4 (4.00%) high mild
transfer/pacing-false/varying-seeds/wallclock-time/run: Change within noise threshold.
       time:   [23.314 ms 23.346 ms 23.395 ms]
       change: [+0.9568% +1.2217% +1.4540%] (p = 0.00 < 0.05)
       Change within noise threshold.
Found 3 outliers among 100 measurements (3.00%)
2 (2.00%) high mild
1 (1.00%) high severe
transfer/pacing-false/varying-seeds/simulated-time/run: No change in performance detected.
       time:   [23.941 s 23.941 s 23.941 s]
       thrpt:  [171.09 KiB/s 171.09 KiB/s 171.09 KiB/s]
change:
       time:   [+0.0000% +0.0000% +0.0000] (p = NaN > 0.05)
       thrpt:  [+0.0000% +0.0000% +0.0000]
       No change in performance detected.
transfer/pacing-true/varying-seeds/wallclock-time/run: Change within noise threshold.
       time:   [23.904 ms 23.939 ms 23.990 ms]
       change: [+2.5444% +2.7924% +3.0483%] (p = 0.00 < 0.05)
       Change within noise threshold.
Found 4 outliers among 100 measurements (4.00%)
1 (1.00%) low severe
2 (2.00%) high mild
1 (1.00%) high severe
transfer/pacing-true/varying-seeds/simulated-time/run: No change in performance detected.
       time:   [23.676 s 23.676 s 23.676 s]
       thrpt:  [173.01 KiB/s 173.01 KiB/s 173.01 KiB/s]
change:
       time:   [+0.0000% +0.0000% +0.0000] (p = NaN > 0.05)
       thrpt:  [+0.0000% +0.0000% +0.0000]
       No change in performance detected.
transfer/pacing-false/same-seed/wallclock-time/run: Change within noise threshold.
       time:   [22.995 ms 23.024 ms 23.064 ms]
       change: [-1.1681% -0.9632% -0.7649%] (p = 0.00 < 0.05)
       Change within noise threshold.
Found 5 outliers among 100 measurements (5.00%)
2 (2.00%) high mild
3 (3.00%) high severe
transfer/pacing-false/same-seed/simulated-time/run: No change in performance detected.
       time:   [23.941 s 23.941 s 23.941 s]
       thrpt:  [171.09 KiB/s 171.09 KiB/s 171.09 KiB/s]
change:
       time:   [+0.0000% +0.0000% +0.0000] (p = NaN > 0.05)
       thrpt:  [+0.0000% +0.0000% +0.0000]
       No change in performance detected.
transfer/pacing-true/same-seed/wallclock-time/run: Change within noise threshold.
       time:   [23.770 ms 23.793 ms 23.819 ms]
       change: [+0.4272% +0.5854% +0.7378%] (p = 0.00 < 0.05)
       Change within noise threshold.
Found 1 outliers among 100 measurements (1.00%)
1 (1.00%) high severe
transfer/pacing-true/same-seed/simulated-time/run: No change in performance detected.
       time:   [23.676 s 23.676 s 23.676 s]
       thrpt:  [173.01 KiB/s 173.01 KiB/s 173.01 KiB/s]
change:
       time:   [+0.0000% +0.0000% +0.0000] (p = NaN > 0.05)
       thrpt:  [+0.0000% +0.0000% +0.0000]
       No change in performance detected.

Download data for profiler.firefox.com or download performance comparison data.

Member

@mxinden mxinden left a comment

Generally in favor. Thanks for the work!

Comment on lines +1554 to +1559
[[package]]
name = "unicode-xid"
version = "0.2.6"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "ebc1c04c71510c7f702b52b7c350734c9ff1295c464a03335b00bb84fc54f853"

Member

I don't think this is included in mozilla-central today. Is it necessary?
