HDDS-14183. Attempted to decrement available space to a negative value #9655

Open

sarvekshayr wants to merge 7 commits into apache:master from sarvekshayr:HDDS-14183

Conversation

@sarvekshayr
Contributor

What changes were proposed in this pull request?

Saw this warning when datanode disk was nearly full:

2025-12-15 10:37:20,903 WARN [166c5ca8-343e-46ed-b619-84ac193e0069-ChunkReader-215]-org.apache.hadoop.hdds.fs.CachingSpaceUsageSource: Attempted to decrement available space to a negative value. Current: 0, Decrement: 1048576, Source: /ipdr_ozone31/hadoop-ozone/datanode/data

Prior to this message there were many failed writes, which suggests the counter needs to be incremented back when a write fails.

The fix adds rollback logic in KeyValueHandler.handleWriteChunk() that tracks when a chunk write succeeds and increments the usedSpace counter. If any subsequent operation fails, the exception handler calls volume.decrementUsedSpace() to restore the counter.
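As a rough illustration only (the real change lives in KeyValueHandler and HddsVolume; the classes below are simplified, hypothetical stand-ins, not Ozone code), the rollback pattern looks like this:

```java
// Simplified stand-in for HddsVolume's space accounting (hypothetical name).
class SketchVolume {
  private long usedSpace;

  void incrementUsedSpace(long bytes) { usedSpace += bytes; }
  void decrementUsedSpace(long bytes) { usedSpace -= bytes; }
  long getUsedSpace() { return usedSpace; }
}

class WriteChunkRollbackSketch {
  // Count the chunk's bytes only once the write succeeds, and roll the
  // counter back if a later step in the handler throws.
  static void handleWriteChunk(SketchVolume volume, long chunkLength,
                               Runnable writeChunk, Runnable laterStep) {
    boolean spaceCounted = false;
    try {
      writeChunk.run();
      volume.incrementUsedSpace(chunkLength);
      spaceCounted = true;
      laterStep.run();
    } catch (RuntimeException e) {
      if (spaceCounted) {
        volume.decrementUsedSpace(chunkLength); // restore the counter
      }
      throw e;
    }
  }
}
```

On the success path the counter ends up incremented exactly once; on the failure path it is restored, so repeated failed writes no longer drain the available-space figure.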

What is the link to the Apache JIRA?

HDDS-14183

How was this patch tested?

CI: https://github.com/sarvekshayr/ozone/actions/runs/21200210393

/**
* Commit space reserved for write to usedSpace when write operation succeeds.
*/
private void commitSpaceReservedForWrite(HddsVolume volume, boolean spaceReserved, long bytes) {


can commit happen when space is not reserved?

Contributor Author


Updated the logic to make the flow clearer by moving the spaceReserved check to the call site. This makes it explicit that the commit operation is invoked only when space has actually been reserved.

@devabhishekpal devabhishekpal added the bug Something isn't working label Jan 28, 2026
@github-actions

This PR has been marked as stale due to 21 days of inactivity. Please comment or remove the stale label to keep it open. Otherwise, it will be automatically closed in 7 days.

@github-actions github-actions bot added the stale label Feb 19, 2026
Contributor

@siddhantsangwan left a comment


It seems to me that the root cause is not bad exception handling, but rather that idempotency is not implemented correctly in the overwrite case.

In FilePerBlockStrategy#writeChunk, containerData.updateWriteStats(chunkLength, overwrite) is called at the very end. By that point any exception would already have been thrown, so updating write stats there is correct.

And updateWriteStats always calls incrWriteBytes(bytesWritten), which always calls volume incrementUsedSpace(bytes):

  public void updateWriteStats(long bytesWritten, boolean overwrite) {
    getStatistics().updateWrite(bytesWritten, overwrite);
    incrWriteBytes(bytesWritten); // <-- always called
  }

  private void incrWriteBytes(long bytes) {
    this.getVolume().incrementUsedSpace(bytes); // <-- always called
    long bytesUsedBeforeWrite = getBytesUsed() - bytes;
    long availableSpaceBeforeWrite = getMaxSize() - bytesUsedBeforeWrite;
    if (committedSpace && availableSpaceBeforeWrite > 0) {
      long decrement = Math.min(bytes, availableSpaceBeforeWrite);
      this.getVolume().incCommittedBytes(-decrement);
    }
  }

So used space is increased and available space is decreased even on overwrite. For overwrites that don't actually grow the file on disk this is wrong, and it is probably what drives available space below zero.

For example:

- chunkLength = 1,048,576 (1 MB)
- the block file already has length 1,048,576
- a retry writes the same chunk at offset = 0 (overwrite = true)

After the write the file length is still 1,048,576 (no growth), but used space is increased and available space is decreased anyway.

So this is what needs to be fixed. I also suggest trying to keep it simple as we are already using the term reserved space in other parts of the code, and that means something else. This area is pretty complex at this point!
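One way to sketch the accounting the reviewer describes (a hypothetical helper, not the actual fix): count only the bytes by which the block file actually grows.

```java
class OverwriteDeltaSketch {
  // Hypothetical helper: on-disk growth caused by writing chunkLength bytes
  // at the given offset into a file that currently has fileLengthBefore bytes.
  static long usedSpaceDelta(long fileLengthBefore, long offset, long chunkLength) {
    long fileLengthAfter = Math.max(fileLengthBefore, offset + chunkLength);
    return fileLengthAfter - fileLengthBefore; // zero for a pure overwrite retry
  }
}
```

With this, the retry in the example above yields a delta of 0, so neither used space nor available space moves on an idempotent re-write.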

Comment on lines 188 to 195
 if (overwrite) {
   long fileLengthAfterWrite = offset + chunkLength;
   if (fileLengthAfterWrite > fileLengthBeforeWrite) {
-    containerData.getStatistics().updateWrite(fileLengthAfterWrite - fileLengthBeforeWrite, false);
+    containerData.updateWriteStats(fileLengthAfterWrite - fileLengthBeforeWrite, false);
   }
 }

 containerData.updateWriteStats(chunkLength, overwrite);
Contributor


This still seems incorrect to me, as in the overwrite case it would end up calling updateWriteStats twice. Can you check?

Contributor Author


updateWriteStats handles both updateWrite and incrWriteBytes.

I replaced updateWrite with updateWriteStats so that during an overwrite, any growth in file length is propagated as a delta to both updateWrite and incrWriteBytes.

Contributor


Yes, but in the overwrite case it's getting called once at line 191 and again at line 195.
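To make the double-count concrete, here is a simplified model of the patched control flow (hypothetical names, not the actual code): in the overwrite-with-growth case the bytes are counted once for the growth delta and once more, unconditionally, for the full chunk.

```java
class DoubleCountSketch {
  // Models the patched flow: a delta-sized update inside the overwrite branch,
  // followed by the pre-existing unconditional update for the whole chunk.
  static long bytesCounted(long fileLengthBefore, long offset,
                           long chunkLength, boolean overwrite) {
    long counted = 0;
    if (overwrite) {
      long fileLengthAfter = offset + chunkLength;
      if (fileLengthAfter > fileLengthBefore) {
        counted += fileLengthAfter - fileLengthBefore; // first updateWriteStats call
      }
    }
    counted += chunkLength; // second, unconditional updateWriteStats call
    return counted;
  }
}
```

A growing overwrite (a 1 MB file with a 1 MB chunk written at offset 1 MB) would be counted as 2 MB even though only 1 MB of new data lands on disk.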
