The CdacXPlatDumpTest in pipeline runtime-diagnostics (def 309) has been failing in every build for the past week. The Helix payload bundles core dump files from 8 different platforms (linux-arm, linux-arm64, linux-x64, osx-arm64, osx-x64, windows-arm64, windows-x86, windows-x64) into a single zip stream. When the combined uncompressed size of all dumps exceeds 2 GB, the MemoryStream used to back the zip archive throws IOException: Stream was too long because MemoryStream capacity is limited to int.MaxValue (~2.1 GB).
This is a test infrastructure bug in how multi-platform dumps are packaged for Helix, not a product regression. The fix must restructure the test to avoid aggregating all platform dumps into a single stream.
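The stream-backed workaround (fix option 2 below) can be sketched as follows. This is a minimal illustration, not the actual packaging code: `PackDumps` is a hypothetical helper, and it assumes the payload is assembled with `System.IO.Compression`. The key point is that a `FileStream`-backed `ZipArchive` is bounded by disk space rather than by `MemoryStream`'s `int.MaxValue` capacity cap.

```csharp
using System.IO;
using System.IO.Compression;

static class DumpPackager
{
    // Hypothetical replacement for the MemoryStream-backed packaging step.
    // Writing the archive straight to a FileStream avoids the ~2.1 GB
    // (Int32.MaxValue) ceiling that MemoryStream.EnsureCapacity enforces.
    public static void PackDumps(string[] dumpPaths, string zipPath)
    {
        using var output = new FileStream(zipPath, FileMode.Create, FileAccess.Write);
        using var archive = new ZipArchive(output, ZipArchiveMode.Create);
        foreach (var dump in dumpPaths)
        {
            // Stream each dump into its entry; nothing is buffered in memory.
            var entry = archive.CreateEntry(Path.GetFileName(dump), CompressionLevel.Fastest);
            using var entryStream = entry.Open();
            using var source = File.OpenRead(dump);
            source.CopyTo(entryStream);
        }
    }
}
```

The same shape works for a chunked approach; the essential change is that the destination stream's length is a `long`, not an `int`.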
Impact on platforms
runtime-diagnostics (def 309) — all 8 platform legs — CdacXPlatDumpTest — IOException: Stream was too long — builds 1409927, 1408350, 1406820, 1406418, 1404739
Errors log
System.IO.IOException: Stream was too long.
at System.IO.MemoryStream.set_Capacity(Int32 value)
at System.IO.MemoryStream.EnsureCapacity(Int32 value)
The error occurs during Helix payload creation in cdac-dump-xplat-test-helix.proj when zipping multi-platform dumps into a single archive.
First build it occurred
First observed in scanned window: build 1404739 (earliest of 5 failing builds). All 5 scanned builds of pipeline 309 are affected. Build link: https://dev.azure.com/dnceng-public/public/_build/results?buildId=1404739. This is computed within the scanned window and may not be the true origin.
Recommended action
Owner: @dotnet/diagnostics (cDAC team)
File to investigate: src/native/managed/cdac/tests/DumpTests/cdac-dump-xplat-test-helix.proj — the test project that aggregates dump files from all platforms into a single Helix payload
Fix options (choose one):
Split the Helix payload into per-platform jobs so each job sends only its own platform's dump (preferred — also improves isolation)
Switch from MemoryStream to FileStream or a chunked approach for the zip archive
Compress/filter dump files before packaging to reduce total size below 2 GB
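The preferred option (per-platform work items) would look roughly like the fragment below in cdac-dump-xplat-test-helix.proj. The `HelixWorkItem`/`PayloadDirectory`/`Command` pattern follows the Helix SDK convention, but the property names (`$(DumpRoot)`, `$(TargetPlatform)`) and the command line are illustrative assumptions, not the project's actual values:

```xml
<!-- Sketch of fix option 1: one Helix work item per platform, so each
     payload zip contains only that platform's dumps and stays well
     under the 2 GB MemoryStream limit. Names here are illustrative. -->
<ItemGroup>
  <HelixWorkItem Include="CdacXPlatDumpTest-$(TargetPlatform)">
    <PayloadDirectory>$(DumpRoot)/$(TargetPlatform)</PayloadDirectory>
    <Command>dotnet exec CdacXPlatDumpTest.dll --platform $(TargetPlatform)</Command>
  </HelixWorkItem>
</ItemGroup>
```

Beyond staying under the size limit, per-platform work items let a single platform's dump failure surface independently instead of failing all 8 legs at once.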
All 5 scanned builds fail; this pipeline is 100% red in the scan window.
Note
🔒 Integrity filter blocked 6 items
The following items were blocked because they don't meet the GitHub integrity level.
#92420 search_issues: has lower integrity than agent requires. The agent cannot read data with integrity below "approved".
#123982 search_issues: has lower integrity than agent requires. The agent cannot read data with integrity below "approved".
#111752 search_issues: has lower integrity than agent requires. The agent cannot read data with integrity below "approved".
#125600 search_issues: has lower integrity than agent requires. The agent cannot read data with integrity below "approved".
#124206 search_issues: has lower integrity than agent requires. The agent cannot read data with integrity below "approved".
#126448 search_issues: has lower integrity than agent requires. The agent cannot read data with integrity below "approved".
To allow these resources, lower min-integrity in your GitHub frontmatter.