Fix position_ids docstring in modeling_flash_attention_utils.py#44547
Open
mvanhorn wants to merge 2 commits into huggingface:main from
Conversation
The docstring for position_ids incorrectly described attention_mask
semantics ("1 means valid and 0 means not valid"). Updated to
accurately describe position indices.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
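For readers skimming the diff, here is a minimal, hypothetical illustration (not part of the PR) of the two semantics the old docstring conflated: `attention_mask` flags which tokens are valid, while `position_ids` holds the position index of each token, e.g. restarting at 0 for every packed sub-sequence.

```python
import torch

# attention_mask semantics (what the old docstring described):
# 1 means the token is valid, 0 means it is padding.
attention_mask = torch.tensor([[1, 1, 1, 1, 0, 0]])

# position_ids semantics (what the parameter actually holds):
# the position index of each token; with packed sequences the index
# typically restarts at 0 at every sequence boundary.
position_ids = torch.tensor([[0, 1, 2, 0, 1, 2]])
```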
vasqu (Contributor) reviewed on Mar 9, 2026 and left a comment:
Thanks, let's update the strings a bit to a more simple one
-    Boolean or int tensor of shape (batch_size, sequence_length), 1 means valid and 0 means not valid.
+    Indices of positions of each input sequence tokens in the position embeddings. Selected in the range
+    `[0, config.n_positions - 1]`. Shape: (batch_size, sequence_length).
Suggested change
Indices of positions of each input sequence tokens. Shape: (batch_size, sequence_length).
I'd rather follow transformers/src/transformers/utils/generic.py (lines 756 to 757 in a049c00). The generic docstring is a bit outdated imo: "position embeddings" sounds like the old absolute embeddings, and the range can be indefinite since RoPE can be extended, etc.
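To make the reviewer's point concrete, here is a small hedged sketch (assumed values, not from the PR or the library): with RoPE-style models the position index is not bounded by a fixed embedding table, so a hard `[0, config.n_positions - 1]` range in the docstring can be violated once the context is extended.

```python
import torch

# Assumed, illustrative numbers: a model pretrained with a 4096-token context,
# later run with an extended (e.g. RoPE-scaled) context of 8192 tokens.
original_max_positions = 4096
extended_seq_len = 8192

# position_ids simply keeps counting past the "original" range; nothing in a
# rotary formulation requires it to stay below original_max_positions.
position_ids = torch.arange(extended_seq_len).unsqueeze(0)
assert int(position_ids.max()) >= original_max_positions
```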
stevhliu (Member) approved these changes on Mar 9, 2026 and left a comment:
one minor comment, but lgtm!
Comment on lines +368 to +369
    Indices of positions of each input sequence tokens in the position embeddings. Selected in the range
    `[0, config.n_positions - 1]`. Shape: (batch_size, sequence_length).
Suggested change
-    Indices of positions of each input sequence tokens in the position embeddings. Selected in the range
-    `[0, config.n_positions - 1]`. Shape: (batch_size, sequence_length).
+    Tensor of shape `(batch_size, sequence_length)` containing position indices of each input sequence tokens in the position embeddings.
Drop "position embeddings" reference and range constraint to match the codebase pattern in generic.py. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Fixes #44373
Summary
Fixed the docstring for the `position_ids` parameter in `prepare_fa_kwargs_from_position_ids` and `_prepare_from_posids`, which incorrectly described attention mask semantics ("Boolean or int tensor... 1 means valid and 0 means not valid"); a rough sketch of how such helpers consume `position_ids` follows at the end of this description.
Testing
This contribution was developed with AI assistance (Claude Code).
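As background for the Summary above, here is a rough sketch of the kind of preparation helpers like these perform with `position_ids` for flash-attention varlen kernels, assuming packed sequences are marked by the position index resetting to 0. This is illustrative only and not the library's actual implementation.

```python
import torch

def cu_seqlens_from_position_ids(position_ids: torch.Tensor) -> torch.Tensor:
    """Derive cumulative sequence lengths from flattened position_ids,
    assuming each packed sequence starts at position 0 (sketch only)."""
    flat = position_ids.flatten()
    # Sequence boundaries are where the position index resets to 0.
    starts = torch.nonzero(flat == 0).flatten()
    total = torch.tensor([flat.numel()], device=flat.device)
    return torch.cat([starts, total]).to(torch.int32)

# Two packed sequences of lengths 3 and 2 in a single row:
pos = torch.tensor([[0, 1, 2, 0, 1]])
print(cu_seqlens_from_position_ids(pos))  # tensor([0, 3, 5], dtype=torch.int32)
```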