fix reduce(gcd, ...) returns SupportsIndex instead of int #2866 #2898

Open
asukaminato0721 wants to merge 2 commits into facebook:main from asukaminato0721:2866

Conversation

@asukaminato0721 (Contributor) commented Mar 25, 2026

Summary

Fixes #2866

Callable subtyping now checks return types before parameter types.

This lets a target type variable pick up the precise covariant lower bound from math.gcd's int return type before the contravariant parameter check confirms it is still compatible with SupportsIndex, instead of prematurely pinning the variable to SupportsIndex.
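A minimal sketch of the inference scenario (runtime behaviour shown here; the fix concerns the static type pyrefly infers for the annotated return):

```python
from functools import reduce
from math import gcd

# In typeshed, math.gcd is typed as (*integers: SupportsIndex) -> int,
# so reduce's type variable should solve to int (from the covariant
# return position), not to the contravariant SupportsIndex parameter.
def gcd_of(values: list[int]) -> int:
    # With the old parameter-first check, the reduce(...) result was
    # inferred as SupportsIndex, making this return an error.
    return reduce(gcd, values)

print(gcd_of([12, 18, 24]))  # 6
```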

Test Plan

Copilot AI review requested due to automatic review settings March 25, 2026 15:44
@meta-cla meta-cla bot added the cla signed label Mar 25, 2026

Copilot AI left a comment


Pull request overview

Fixes a type inference precision issue where reduce(gcd, ...) could be inferred as returning SupportsIndex instead of int (issue #2866), by adjusting callable subtyping constraint ordering and adding regression coverage.

Changes:

  • Change callable subtyping to check return types before parameter types to improve quantified type variable inference precision (e.g., math.gcd case).
  • Add regression test covering reduce(gcd, ...) returning an int in a realistic pattern.
  • Ensure tuple subscript inference is only applied when __getitem__ is the builtin tuple.__getitem__, adding a test for tuple subclasses that override __getitem__.
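The tuple-subclass case from the last bullet can be sketched as follows (the LabeledPair class is a hypothetical example, not from the PR's test suite):

```python
# A tuple subclass whose __getitem__ does not return the element
# types recorded in the tuple's static form, so tuple-specific
# subscript inference must not be applied to it.
class LabeledPair(tuple):
    def __getitem__(self, index):
        # Returns a formatted string regardless of the element type.
        return f"item={tuple.__getitem__(self, index)}"

p = LabeledPair((1, 2))
print(p[0])  # item=1, a str -- not the int the tuple form suggests
```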

Reviewed changes

Copilot reviewed 5 out of 5 changed files in this pull request and generated no comments.

Summary per file:

  • pyrefly/lib/solver/subset.rs — Adjusts callable subtyping check order (return before params) to improve quantified var solving precision.
  • pyrefly/lib/test/calls.rs — Adds regression test for the reduce(gcd, ...) inference scenario from #2866.
  • pyrefly/lib/alt/expr.rs — Gates tuple-specific subscript inference on whether the type uses builtin tuple.__getitem__.
  • pyrefly/lib/alt/class/tuple.rs — Adds helper to detect whether a tuple(-subclass) uses builtin __getitem__.
  • pyrefly/lib/test/tuple.rs — Adds regression test ensuring overridden __getitem__ on tuple subclasses is respected during subscripting.



@github-actions

Diff from mypy_primer, showing the effect of this PR on open source code:

anyio (https://github.com/agronholm/anyio)
- ERROR src/anyio/_core/_eventloop.py:77:34-38: Argument `(**tuple[*PosArgsT]) -> Awaitable[T_Retval]` is not assignable to parameter `func` with type `(**tuple[*@_]) -> Awaitable[@_]` in function `anyio.abc._eventloop.AsyncBackend.run` [bad-argument-type]
+ ERROR src/anyio/_core/_eventloop.py:77:34-38: Argument `(**tuple[*PosArgsT]) -> Awaitable[T_Retval]` is not assignable to parameter `func` with type `(**tuple[*@_]) -> Awaitable[T_Retval]` in function `anyio.abc._eventloop.AsyncBackend.run` [bad-argument-type]
- ERROR src/anyio/_core/_fileio.py:480:41-58: Argument `(self: Path, *, follow_symlinks: bool = True) -> bool` is not assignable to parameter `func` with type `(**tuple[*@_]) -> @_` in function `anyio.to_thread.run_sync` [bad-argument-type]
+ ERROR src/anyio/_core/_fileio.py:480:41-58: Argument `(self: Path, *, follow_symlinks: bool = True) -> bool` is not assignable to parameter `func` with type `(**tuple[*@_]) -> bool` in function `anyio.to_thread.run_sync` [bad-argument-type]
- ERROR src/anyio/_core/_fileio.py:522:41-57: Argument `(self: Path, *, follow_symlinks: bool = True) -> str` is not assignable to parameter `func` with type `(**tuple[*@_]) -> @_` in function `anyio.to_thread.run_sync` [bad-argument-type]
+ ERROR src/anyio/_core/_fileio.py:522:41-57: Argument `(self: Path, *, follow_symlinks: bool = True) -> str` is not assignable to parameter `func` with type `(**tuple[*@_]) -> str` in function `anyio.to_thread.run_sync` [bad-argument-type]
- ERROR src/anyio/_core/_fileio.py:551:41-58: Argument `(self: Path, *, follow_symlinks: bool = True) -> bool` is not assignable to parameter `func` with type `(**tuple[*@_]) -> @_` in function `anyio.to_thread.run_sync` [bad-argument-type]
+ ERROR src/anyio/_core/_fileio.py:551:41-58: Argument `(self: Path, *, follow_symlinks: bool = True) -> bool` is not assignable to parameter `func` with type `(**tuple[*@_]) -> bool` in function `anyio.to_thread.run_sync` [bad-argument-type]
- ERROR src/anyio/_core/_fileio.py:557:41-59: Argument `(self: Path, *, follow_symlinks: bool = True) -> bool` is not assignable to parameter `func` with type `(**tuple[*@_]) -> @_` in function `anyio.to_thread.run_sync` [bad-argument-type]
+ ERROR src/anyio/_core/_fileio.py:557:41-59: Argument `(self: Path, *, follow_symlinks: bool = True) -> bool` is not assignable to parameter `func` with type `(**tuple[*@_]) -> bool` in function `anyio.to_thread.run_sync` [bad-argument-type]
- ERROR src/anyio/_core/_fileio.py:637:41-57: Argument `(self: Path, *, follow_symlinks: bool = True) -> str` is not assignable to parameter `func` with type `(**tuple[*@_]) -> @_` in function `anyio.to_thread.run_sync` [bad-argument-type]
+ ERROR src/anyio/_core/_fileio.py:637:41-57: Argument `(self: Path, *, follow_symlinks: bool = True) -> str` is not assignable to parameter `func` with type `(**tuple[*@_]) -> str` in function `anyio.to_thread.run_sync` [bad-argument-type]
- ERROR src/anyio/from_thread.py:91:9-13: Argument `(**tuple[*PosArgsT]) -> Awaitable[T_Retval]` is not assignable to parameter `func` with type `(**tuple[*@_]) -> Awaitable[@_]` in function `anyio.abc._eventloop.AsyncBackend.run_async_from_thread` [bad-argument-type]
+ ERROR src/anyio/from_thread.py:91:9-13: Argument `(**tuple[*PosArgsT]) -> Awaitable[T_Retval]` is not assignable to parameter `func` with type `(**tuple[*@_]) -> Awaitable[T_Retval]` in function `anyio.abc._eventloop.AsyncBackend.run_async_from_thread` [bad-argument-type]
- ERROR src/anyio/from_thread.py:119:9-13: Argument `(**tuple[*PosArgsT]) -> T_Retval` is not assignable to parameter `func` with type `(**tuple[*@_]) -> @_` in function `anyio.abc._eventloop.AsyncBackend.run_sync_from_thread` [bad-argument-type]
+ ERROR src/anyio/from_thread.py:119:9-13: Argument `(**tuple[*PosArgsT]) -> T_Retval` is not assignable to parameter `func` with type `(**tuple[*@_]) -> T_Retval` in function `anyio.abc._eventloop.AsyncBackend.run_sync_from_thread` [bad-argument-type]
- ERROR src/anyio/from_thread.py:378:38-42: Argument `(**tuple[*PosArgsT]) -> Awaitable[T_Retval] | T_Retval` is not assignable to parameter `func` with type `(**tuple[*@_]) -> Awaitable[@_] | @_` in function `BlockingPortal._spawn_task_from_thread` [bad-argument-type]
+ ERROR src/anyio/from_thread.py:378:38-42: Argument `(**tuple[*PosArgsT]) -> Awaitable[T_Retval] | T_Retval` is not assignable to parameter `func` with type `(**tuple[*@_]) -> Awaitable[T_Retval] | T_Retval` in function `BlockingPortal._spawn_task_from_thread` [bad-argument-type]
- ERROR src/anyio/to_thread.py:64:9-13: Argument `(**tuple[*PosArgsT]) -> T_Retval` is not assignable to parameter `func` with type `(**tuple[*@_]) -> @_` in function `anyio.abc._eventloop.AsyncBackend.run_sync_in_worker_thread` [bad-argument-type]
+ ERROR src/anyio/to_thread.py:64:9-13: Argument `(**tuple[*PosArgsT]) -> T_Retval` is not assignable to parameter `func` with type `(**tuple[*@_]) -> T_Retval` in function `anyio.abc._eventloop.AsyncBackend.run_sync_in_worker_thread` [bad-argument-type]

Tanjun (https://github.com/FasterSpeeding/Tanjun)
- ERROR tanjun/checks.py:598:12-602:6: Returned type `((ExecutableCommand[Any]) -> ExecutableCommand[Any]) | ExecutableCommand[Any]` is not assignable to declared return type `((_CommandT) -> _CommandT) | _CommandT` [bad-return]
- ERROR tanjun/checks.py:660:12-664:6: Returned type `((ExecutableCommand[Any]) -> ExecutableCommand[Any]) | ExecutableCommand[Any]` is not assignable to declared return type `((_CommandT) -> _CommandT) | _CommandT` [bad-return]
- ERROR tanjun/checks.py:722:12-726:6: Returned type `((ExecutableCommand[Any]) -> ExecutableCommand[Any]) | ExecutableCommand[Any]` is not assignable to declared return type `((_CommandT) -> _CommandT) | _CommandT` [bad-return]
- ERROR tanjun/checks.py:784:12-788:6: Returned type `((ExecutableCommand[Any]) -> ExecutableCommand[Any]) | ExecutableCommand[Any]` is not assignable to declared return type `((_CommandT) -> _CommandT) | _CommandT` [bad-return]
- ERROR tanjun/checks.py:846:12-850:6: Returned type `((ExecutableCommand[Any]) -> ExecutableCommand[Any]) | ExecutableCommand[Any]` is not assignable to declared return type `((_CommandT) -> _CommandT) | _CommandT` [bad-return]
- ERROR tanjun/components.py:864:16-98: Returned type `((ExecutableCommand[Any]) -> ExecutableCommand[Any]) | ExecutableCommand[Any]` is not assignable to declared return type `((_CommandT) -> _CommandT) | _CommandT` [bad-return]
- ERROR tanjun/components.py:900:16-72: Returned type `((MenuCommand[Any, Any]) -> MenuCommand[Any, Any]) | MenuCommand[Any, Any]` is not assignable to declared return type `((_MenuCommandT) -> _MenuCommandT) | _MenuCommandT` [bad-return]
- ERROR tanjun/components.py:937:16-73: Returned type `((BaseSlashCommand) -> BaseSlashCommand) | BaseSlashCommand` is not assignable to declared return type `((_BaseSlashCommandT) -> _BaseSlashCommandT) | _BaseSlashCommandT` [bad-return]
- ERROR tanjun/components.py:983:16-75: Returned type `((MessageCommand[Any]) -> MessageCommand[Any]) | MessageCommand[Any]` is not assignable to declared return type `((_MessageCommandT) -> _MessageCommandT) | _MessageCommandT` [bad-return]

bokeh (https://github.com/bokeh/bokeh)
- ERROR src/bokeh/core/property/container.py:255:9-25: Class member `ColumnData.make_descriptors` overrides parent class `Dict` in an inconsistent manner [bad-param-name-override]
+ ERROR src/bokeh/core/property/container.py:255:9-25: Class member `ColumnData.make_descriptors` overrides parent class `Dict` in an inconsistent manner [bad-override]
- ERROR src/bokeh/core/property/dataspec.py:218:9-25: Class member `DataSpec.make_descriptors` overrides parent class `Either` in an inconsistent manner [bad-param-name-override]
+ ERROR src/bokeh/core/property/dataspec.py:218:9-25: Class member `DataSpec.make_descriptors` overrides parent class `Either` in an inconsistent manner [bad-override]
+ ERROR src/bokeh/layouts.py:575:34-68: Argument `list[LayoutDOM | grid.col | grid.row]` is not assignable to parameter `children` with type `list[grid.col | grid.row]` in function `col.__init__` [bad-argument-type]
+ ERROR src/bokeh/layouts.py:575:34-68: Argument `list[LayoutDOM | grid.col | grid.row]` is not assignable to parameter `children` with type `list[grid.col | grid.row]` in function `row.__init__` [bad-argument-type]
- ERROR src/bokeh/layouts.py:575:43-51: Argument `((children: list[LayoutDOM], level: int = 0) -> grid.col | grid.row) | ((item: LayoutDOM, top_level: bool = False) -> LayoutDOM | grid.col | grid.row)` is not assignable to parameter `func` with type `(list[LayoutDOM]) -> grid.col | grid.row` in function `map.__new__` [bad-argument-type]
+ ERROR src/bokeh/layouts.py:575:43-51: Argument `((children: list[LayoutDOM], level: int = 0) -> grid.col | grid.row) | ((item: LayoutDOM, top_level: bool = False) -> LayoutDOM | grid.col | grid.row)` is not assignable to parameter `func` with type `(list[LayoutDOM]) -> LayoutDOM | grid.col | grid.row` in function `map.__new__` [bad-argument-type]

asynq (https://github.com/quora/asynq)
- ERROR asynq/debug.py:147:9-24: Class member `AsynqStackTracebackFormatter.formatException` overrides parent class `Formatter` in an inconsistent manner [bad-param-name-override]
+ ERROR asynq/debug.py:147:9-24: Class member `AsynqStackTracebackFormatter.formatException` overrides parent class `Formatter` in an inconsistent manner [bad-override]

streamlit (https://github.com/streamlit/streamlit)
- ERROR lib/streamlit/runtime/fragment.py:443:12-48: Returned type `(((...) -> Any) -> (...) -> Any) | ((...) -> Any)` is not assignable to declared return type `((F) -> F) | F` [bad-return]

Expression (https://github.com/cognitedata/Expression)
- ERROR expression/collections/block.py:268:30-37: Argument `(**tuple[*_P]) -> _TResult` is not assignable to parameter `mapper` with type `(**tuple[*@_]) -> @_` in function `starmap` [bad-argument-type]
+ ERROR expression/collections/block.py:268:30-37: Argument `(**tuple[*_P]) -> _TResult` is not assignable to parameter `mapper` with type `(**tuple[*@_]) -> _TResult` in function `starmap` [bad-argument-type]
- ERROR expression/core/option.py:449:27-33: Argument `(**tuple[*_P]) -> _TResult` is not assignable to parameter `mapper` with type `(**tuple[*_P]) -> @_` in function `Option.starmap` [bad-argument-type]
+ ERROR expression/core/option.py:449:27-33: Argument `(**tuple[*_P]) -> _TResult` is not assignable to parameter `mapper` with type `(**tuple[*_P]) -> _TResult` in function `Option.starmap` [bad-argument-type]
- ERROR tests/test_array.py:47:36-50: Argument `(TypedArray[object]) -> TypedArray[str]` is not assignable to parameter `fn1` with type `(TypedArray[int]) -> @_` in function `expression.core.pipe.pipe` [bad-argument-type]
+ ERROR tests/test_array.py:47:36-50: Argument `(TypedArray[object]) -> TypedArray[str]` is not assignable to parameter `fn1` with type `(TypedArray[int]) -> TypedArray[str]` in function `expression.core.pipe.pipe` [bad-argument-type]

freqtrade (https://github.com/freqtrade/freqtrade)
+ ERROR freqtrade/data/metrics.py:333:12-40: Returned type `tuple[Series[float] | Series | float, Series[float] | Series | float]` is not assignable to declared return type `tuple[float, float]` [bad-return]
+ ERROR freqtrade/data/metrics.py:411:20-65: `/` is not supported between `Series[str]` and `float` [unsupported-operation]
+ ERROR freqtrade/data/metrics.py:434:12-24: Returned type `Literal[-100] | Series | Unknown` is not assignable to declared return type `float` [bad-return]
+ ERROR freqtrade/freqtradebot.py:844:26-48: `/` is not supported between `Series[str]` and `Series[str]` [unsupported-operation]
+ ERROR freqtrade/freqtradebot.py:844:26-48: `/` is not supported between `Series[str]` and `Series` [unsupported-operation]
+ ERROR freqtrade/freqtradebot.py:844:26-48: `/` is not supported between `Series` and `Series[str]` [unsupported-operation]
+ ERROR freqtrade/optimize/hyperopt_loss/hyperopt_loss_max_drawdown.py:44:20-33: Returned type `Series[str] | Series` is not assignable to declared return type `float` [bad-return]
+ ERROR freqtrade/optimize/hyperopt_loss/hyperopt_loss_max_drawdown.py:45:16-57: `/` is not supported between `Series[str]` and `float` [unsupported-operation]
+ ERROR freqtrade/optimize/hyperopt_loss/hyperopt_loss_max_drawdown.py:45:16-57: Returned type `Series | Unknown` is not assignable to declared return type `float` [bad-return]
+ ERROR freqtrade/optimize/hyperopt_loss/hyperopt_loss_max_drawdown_relative.py:40:24-37: Returned type `Series[str] | Series` is not assignable to declared return type `float` [bad-return]
+ ERROR freqtrade/optimize/hyperopt_loss/hyperopt_loss_max_drawdown_relative.py:43:20-33: Returned type `Series[str] | Series` is not assignable to declared return type `float` [bad-return]
+ ERROR freqtrade/optimize/hyperopt_loss/hyperopt_loss_multi_metric.py:67:25-69: `/` is not supported between `Series[str]` and `Series` [unsupported-operation]
+ ERROR freqtrade/optimize/hyperopt_loss/hyperopt_loss_multi_metric.py:94:32-96:10: `-` is not supported between `Series[str]` and `float` [unsupported-operation]
+ ERROR freqtrade/optimize/hyperopt_loss/hyperopt_loss_multi_metric.py:94:48-88: `*` is not supported between `Literal[0]` and `Series[str]` [unsupported-operation]
+ ERROR freqtrade/optimize/hyperopt_loss/hyperopt_loss_multi_metric.py:94:48-88: `*` is not supported between `float` and `Series[str]` [unsupported-operation]
+ ERROR freqtrade/optimize/hyperopt_loss/hyperopt_loss_onlyprofit.py:26:16-33: `*` is not supported between `Literal[-1]` and `Series[str]` [unsupported-operation]
+ ERROR freqtrade/optimize/hyperopt_loss/hyperopt_loss_onlyprofit.py:26:16-33: Returned type `Series | int` is not assignable to declared return type `float` [bad-return]
+ ERROR freqtrade/optimize/hyperopt_loss/hyperopt_loss_profit_drawdown.py:36:16-38:10: Returned type `Series | Unknown` is not assignable to declared return type `float` [bad-return]
+ ERROR freqtrade/optimize/hyperopt_loss/hyperopt_loss_profit_drawdown.py:37:13-92: `-` is not supported between `Series[str]` and `float` [unsupported-operation]
+ ERROR freqtrade/optimize/hyperopt_loss/hyperopt_loss_profit_drawdown.py:37:29-69: `*` is not supported between `Literal[0]` and `Series[str]` [unsupported-operation]
+ ERROR freqtrade/optimize/hyperopt_loss/hyperopt_loss_profit_drawdown.py:37:29-69: `*` is not supported between `float` and `Series[str]` [unsupported-operation]
+ ERROR freqtrade/optimize/hyperopt_loss/hyperopt_loss_short_trade_dur.py:49:34-68: `/` is not supported between `Series[str]` and `float` [unsupported-operation]
+ ERROR freqtrade/optimize/hyperopt_loss/hyperopt_loss_short_trade_dur.py:52:16-22: Returned type `Series | Unknown` is not assignable to declared return type `float` [bad-return]
+ ERROR freqtrade/optimize/optimize_reports/optimize_reports.py:87:20-65: `/` is not supported between `Series[str]` and `float` [unsupported-operation]
+ ERROR freqtrade/optimize/optimize_reports/optimize_reports.py:89:21-66: `+` is not supported between `float` and `Series[str]` [unsupported-operation]
+ ERROR freqtrade/optimize/optimize_reports/optimize_reports.py:93:21-56: `/` is not supported between `Series[str]` and `Series` [unsupported-operation]
+ ERROR freqtrade/optimize/optimize_reports/optimize_reports.py:128:65-78: Argument `Series | float` is not assignable to parameter `final_balance` with type `float` in function `freqtrade.data.metrics.calculate_cagr` [bad-argument-type]
+ ERROR freqtrade/optimize/optimize_reports/optimize_reports.py:273:21-56: `/` is not supported between `Series[str]` and `Series` [unsupported-operation]
+ ERROR freqtrade/optimize/optimize_reports/optimize_reports.py:560:21-56: `/` is not supported between `Series[str]` and `Series` [unsupported-operation]
+ ERROR freqtrade/optimize/optimize_reports/optimize_reports.py:582:25-68: `/` is not supported between `Series[str]` and `float` [unsupported-operation]
+ ERROR freqtrade/optimize/optimize_reports/optimize_reports.py:583:30-99: `/` is not supported between `Series[str]` and `float` [unsupported-operation]
+ ERROR freqtrade/rpc/rpc.py:1523:45-55: Argument `Series[str] | Series | int` is not assignable to parameter `x` with type `Buffer | SupportsIndex | SupportsInt | SupportsTrunc | str` in function `int.__new__` [bad-argument-type]
+ ERROR freqtrade/templates/sample_hyperopt_loss.py:54:34-68: `/` is not supported between `Series[str]` and `float` [unsupported-operation]
+ ERROR freqtrade/templates/sample_hyperopt_loss.py:57:16-22: Returned type `Series | Unknown` is not assignable to declared return type `float` [bad-return]

more-itertools (https://github.com/more-itertools/more-itertools)
- ERROR more_itertools/more.py:950:22-25: Argument `type[map]` is not assignable to parameter `func` with type `((Any) -> Unknown, Iterable[Any], Iterable[Any]) -> @_` in function `map.__new__` [bad-argument-type]
+ ERROR more_itertools/more.py:950:22-25: Argument `type[map]` is not assignable to parameter `func` with type `((Any) -> Unknown, Iterable[Any], Iterable[Any]) -> map[Unknown]` in function `map.__new__` [bad-argument-type]

setuptools (https://github.com/pypa/setuptools)
- ERROR setuptools/_vendor/more_itertools/more.py:917:22-25: Argument `type[map]` is not assignable to parameter `func` with type `((Any) -> Unknown, Iterable[Any], Iterable[Any]) -> @_` in function `map.__new__` [bad-argument-type]
+ ERROR setuptools/_vendor/more_itertools/more.py:917:22-25: Argument `type[map]` is not assignable to parameter `func` with type `((Any) -> Unknown, Iterable[Any], Iterable[Any]) -> map[Unknown]` in function `map.__new__` [bad-argument-type]

pwndbg (https://github.com/pwndbg/pwndbg)
- ERROR pwndbg/aglib/disasm/aarch64.py:485:9-25: Class member `AArch64DisassemblyAssistant._parse_immediate` overrides parent class `DisassemblyAssistant` in an inconsistent manner [bad-param-name-override]
+ ERROR pwndbg/aglib/disasm/aarch64.py:485:9-25: Class member `AArch64DisassemblyAssistant._parse_immediate` overrides parent class `DisassemblyAssistant` in an inconsistent manner [bad-override]
- ERROR pwndbg/commands/onegadget.py:27:1-63: Argument `(((show_unsat: bool = False, no_unknown: bool = False, verbose: bool = False) -> None) -> (show_unsat: bool = False, no_unknown: bool = False, verbose: bool = False) -> None) | ((show_unsat: bool = False, no_unknown: bool = False, verbose: bool = False) -> None)` is not assignable to parameter with type `((show_unsat: bool = False, no_unknown: bool = False, verbose: bool = False) -> None) -> (show_unsat: bool = False, no_unknown: bool = False, verbose: bool = False) -> None` [bad-argument-type]
+ ERROR pwndbg/commands/onegadget.py:27:1-63: Argument `(((show_unsat: bool = False, no_unknown: bool = False, verbose: bool = False) -> None) -> (show_unsat: bool = False, no_unknown: bool = False, verbose: bool = False) -> None) | ((show_unsat: bool = False, no_unknown: bool = False, verbose: bool = False) -> None)` is not assignable to parameter with type `((show_unsat: bool = False, no_unknown: bool = False, verbose: bool = False) -> None) -> ((show_unsat: bool = False, no_unknown: bool = False, verbose: bool = False) -> None) | None` [bad-argument-type]

django-stubs (https://github.com/typeddjango/django-stubs)
- ERROR django-stubs/db/models/fields/json.pyi:35:9-22: Class member `JSONField.get_transform` overrides parent class `Field` in an inconsistent manner [bad-param-name-override]
+ ERROR django-stubs/db/models/fields/json.pyi:35:9-22: Class member `JSONField.get_transform` overrides parent class `Field` in an inconsistent manner [bad-override]

schemathesis (https://github.com/schemathesis/schemathesis)
- ERROR src/schemathesis/auths.py:343:31-104: Argument `SelectiveAuthProvider[@_]` is not assignable to parameter `object` with type `AuthProvider[Unknown]` in function `list.append` [bad-argument-type]
+ ERROR src/schemathesis/auths.py:343:31-104: Argument `SelectiveAuthProvider[Unknown | None]` is not assignable to parameter `object` with type `AuthProvider[Unknown]` in function `list.append` [bad-argument-type]
- ERROR src/schemathesis/auths.py:343:62-80: Argument `RequestsAuth[@_]` is not assignable to parameter `provider` with type `AuthProvider[Unknown]` in function `SelectiveAuthProvider.__init__` [bad-argument-type]
+ ERROR src/schemathesis/auths.py:343:62-80: Argument `RequestsAuth[Unknown | None]` is not assignable to parameter `provider` with type `AuthProvider[Unknown]` in function `SelectiveAuthProvider.__init__` [bad-argument-type]
- ERROR src/schemathesis/auths.py:371:28-92: `CachingAuthProvider[@_]` is not assignable to variable `provider` with type `AuthProvider[Unknown]` [bad-assignment]
+ ERROR src/schemathesis/auths.py:371:28-92: `CachingAuthProvider[Unknown | None]` is not assignable to variable `provider` with type `AuthProvider[Unknown]` [bad-assignment]
- ERROR src/schemathesis/auths.py:373:28-375:18: `KeyedCachingAuthProvider[@_]` is not assignable to variable `provider` with type `AuthProvider[Unknown]` [bad-assignment]
+ ERROR src/schemathesis/auths.py:373:28-375:18: `KeyedCachingAuthProvider[Unknown | None]` is not assignable to variable `provider` with type `AuthProvider[Unknown]` [bad-assignment]
- ERROR src/schemathesis/auths.py:380:24-67: `SelectiveAuthProvider[@_]` is not assignable to variable `provider` with type `AuthProvider[Unknown]` [bad-assignment]
+ ERROR src/schemathesis/auths.py:380:24-67: `SelectiveAuthProvider[Unknown | None]` is not assignable to variable `provider` with type `AuthProvider[Unknown]` [bad-assignment]
- ERROR src/schemathesis/specs/openapi/adapter/parameters.py:1139:45-60: Argument `(value: dict[str, Any]) -> dict[str, Any]` is not assignable to parameter `pack` with type `(GeneratedValue) -> @_` in function `hypothesis.strategies._internal.strategies.SearchStrategy.map` [bad-argument-type]
+ ERROR src/schemathesis/specs/openapi/adapter/parameters.py:1139:45-60: Argument `(value: dict[str, Any]) -> dict[str, Any]` is not assignable to parameter `pack` with type `(GeneratedValue) -> dict[str, Any]` in function `hypothesis.strategies._internal.strategies.SearchStrategy.map` [bad-argument-type]
- ERROR src/schemathesis/specs/openapi/adapter/parameters.py:1152:45-74: Argument `(value: dict[str, Any]) -> dict[str, Any]` is not assignable to parameter `pack` with type `(GeneratedValue) -> @_` in function `hypothesis.strategies._internal.strategies.SearchStrategy.map` [bad-argument-type]
+ ERROR src/schemathesis/specs/openapi/adapter/parameters.py:1152:45-74: Argument `(value: dict[str, Any]) -> dict[str, Any]` is not assignable to parameter `pack` with type `(GeneratedValue) -> dict[str, Any]` in function `hypothesis.strategies._internal.strategies.SearchStrategy.map` [bad-argument-type]
- ERROR src/schemathesis/specs/openapi/adapter/security.py:340:16-349:10: Returned type `CachingAuthProvider[@_]` is not assignable to declared return type `AuthProvider[Unknown]` [bad-return]
+ ERROR src/schemathesis/specs/openapi/adapter/security.py:340:16-349:10: Returned type `CachingAuthProvider[Unknown | None]` is not assignable to declared return type `AuthProvider[Unknown]` [bad-return]

steam.py (https://github.com/Gobot1234/steam.py)
+ ERROR steam/ext/tf2/currency.py:91:18-45: Object of class `int` has no attribute `as_tuple` [missing-attribute]

pip (https://github.com/pypa/pip)
- ERROR src/pip/_vendor/rich/text.py:1285:16-27: Returned type `Literal[1] | SupportsIndex` is not assignable to declared return type `int` [bad-return]

tornado (https://github.com/tornadoweb/tornado)
- ERROR tornado/test/web_test.py:1651:17-28: Class member `MyStaticFileHandler.get_content` overrides parent class `StaticFileHandler` in an inconsistent manner [bad-param-name-override]
+ ERROR tornado/test/web_test.py:1651:17-28: Class member `MyStaticFileHandler.get_content` overrides parent class `StaticFileHandler` in an inconsistent manner [bad-override]

openlibrary (https://github.com/internetarchive/openlibrary)
- ERROR openlibrary/core/imports.py:144:24-34: Argument `type[ImportItem]` is not assignable to parameter `func` with type `(@_) -> @_` in function `map.__new__` [bad-argument-type]
+ ERROR openlibrary/core/imports.py:144:24-34: Argument `type[ImportItem]` is not assignable to parameter `func` with type `(@_) -> ImportItem` in function `map.__new__` [bad-argument-type]

egglog-python (https://github.com/egraphs-good/egglog-python)
- ERROR python/egglog/thunk.py:50:20-49: Argument `Unresolved[@_, *TS]` is not assignable to parameter `state` with type `Error | Resolved[@_] | Resolving | Unresolved[@_, *TS]` in function `Thunk.__init__` [bad-argument-type]
+ ERROR python/egglog/thunk.py:50:20-49: Argument `Unresolved[T, *TS]` is not assignable to parameter `state` with type `Error | Resolved[T] | Resolving | Unresolved[T, *TS]` in function `Thunk.__init__` [bad-argument-type]
- ERROR python/egglog/thunk.py:50:31-33: Argument `(**tuple[*TS]) -> T` is not assignable to parameter `fn` with type `(**tuple[*TS]) -> @_` in function `Unresolved.__init__` [bad-argument-type]
+ ERROR python/egglog/thunk.py:50:31-33: Argument `(**tuple[*TS]) -> T` is not assignable to parameter `fn` with type `(**tuple[*TS]) -> T` in function `Unresolved.__init__` [bad-argument-type]

antidote (https://github.com/Finistere/antidote)
- ERROR tests/core/test_inject.py:549:13-27: Argument `() -> object` is not assignable to parameter `__arg` with type `(Any, ParamSpec(@_)) -> @_` in function `antidote.core.Inject.method` [bad-argument-type]
+ ERROR tests/core/test_inject.py:549:13-27: Argument `() -> object` is not assignable to parameter `__arg` with type `(Any, ParamSpec(@_)) -> object` in function `antidote.core.Inject.method` [bad-argument-type]

cloud-init (https://github.com/canonical/cloud-init)
- ERROR cloudinit/sources/DataSourceEc2.py:426:9-30: Class member `DataSourceEc2.device_name_to_device` overrides parent class `DataSource` in an inconsistent manner [bad-param-name-override]
+ ERROR cloudinit/sources/DataSourceEc2.py:426:9-30: Class member `DataSourceEc2.device_name_to_device` overrides parent class `DataSource` in an inconsistent manner [bad-override]

aiohttp (https://github.com/aio-libs/aiohttp)
- ERROR aiohttp/cookiejar.py:410:37-80: No matching overload found for function `itertools.accumulate.__new__` called with arguments: (type[accumulate[_T]], list[str], Overload[
-   (self: LiteralString, *args: LiteralString, **kwargs: LiteralString) -> LiteralString
-   (self: LiteralString, *args: object, **kwargs: object) -> str
- ]) [no-matching-overload]

sockeye (https://github.com/awslabs/sockeye)
+ ERROR sockeye_contrib/plot_metrics.py:36:17-43:3: No matching overload found for function `typing.MutableMapping.update` called with arguments: (dict[str, Overload[[SupportsRichComparisonT: SupportsDunderGT[Any] | SupportsDunderLT[Any]](arg1: SupportsRichComparisonT, arg2: SupportsRichComparisonT, /, *_args: SupportsRichComparisonT, *, key: None = None) -> SupportsRichComparisonT, [_T](arg1: _T, arg2: _T, /, *_args: _T, *, key: (_T) -> SupportsRichComparison) -> _T, [SupportsRichComparisonT: SupportsDunderGT[Any] | SupportsDunderLT[Any]](iterable: Iterable[SupportsRichComparisonT], /, *, key: None = None) -> SupportsRichComparisonT, [_T](iterable: Iterable[_T], /, *, key: (_T) -> SupportsRichComparison) -> _T, [SupportsRichComparisonT: SupportsDunderGT[Any] | SupportsDunderLT[Any], _T](iterable: Iterable[SupportsRichComparisonT], /, *, key: None = None, default: _T) -> SupportsRichComparisonT | _T, [_T1, _T2](iterable: Iterable[_T1], /, *, key: (_T1) -> SupportsRichComparison, default: _T2) -> _T1 | _T2]]) [no-matching-overload]

pandas-stubs (https://github.com/pandas-dev/pandas-stubs)
+ ERROR tests/series/test_agg.py:26:22-44: assert_type(Unknown, float) failed [assert-type]
+ ERROR tests/series/test_agg.py:26:34-36: No matching overload found for function `pandas.core.series.Series.mean` called with arguments: () [no-matching-overload]
+ ERROR tests/series/test_agg.py:27:22-46: assert_type(Unknown, float) failed [assert-type]
+ ERROR tests/series/test_agg.py:27:36-38: No matching overload found for function `pandas.core.series.Series.median` called with arguments: () [no-matching-overload]
+ ERROR tests/series/test_agg.py:28:22-43: assert_type(Unknown, float) failed [assert-type]
+ ERROR tests/series/test_agg.py:28:33-35: No matching overload found for function `pandas.core.series.Series.std` called with arguments: () [no-matching-overload]
+ ERROR tests/series/test_agg.py:29:22-43: assert_type(Unknown, float) failed [assert-type]
+ ERROR tests/series/test_agg.py:29:33-35: No matching overload found for function `pandas.core.series.Series.var` called with arguments: () [no-matching-overload]
+ ERROR tests/series/test_agg.py:34:22-44: assert_type(Unknown, float) failed [assert-type]
+ ERROR tests/series/test_agg.py:34:34-36: No matching overload found for function `pandas.core.series.Series.mean` called with arguments: () [no-matching-overload]
+ ERROR tests/series/test_agg.py:35:22-46: assert_type(Unknown, float) failed [assert-type]
+ ERROR tests/series/test_agg.py:35:36-38: No matching overload found for function `pandas.core.series.Series.median` called with arguments: () [no-matching-overload]
+ ERROR tests/series/test_agg.py:36:22-43: assert_type(Unknown, float) failed [assert-type]
+ ERROR tests/series/test_agg.py:36:33-35: No matching overload found for function `pandas.core.series.Series.std` called with arguments: () [no-matching-overload]
+ ERROR tests/series/test_agg.py:37:22-43: assert_type(Unknown, float) failed [assert-type]
+ ERROR tests/series/test_agg.py:37:33-35: No matching overload found for function `pandas.core.series.Series.var` called with arguments: () [no-matching-overload]
+ ERROR tests/series/test_agg.py:42:22-44: assert_type(Unknown, float) failed [assert-type]
+ ERROR tests/series/test_agg.py:42:34-36: No matching overload found for function `pandas.core.series.Series.mean` called with arguments: () [no-matching-overload]
+ ERROR tests/series/test_agg.py:43:22-46: assert_type(Unknown, float) failed [assert-type]
+ ERROR tests/series/test_agg.py:43:36-38: No matching overload found for function `pandas.core.series.Series.median` called with arguments: () [no-matching-overload]
+ ERROR tests/series/test_agg.py:44:22-43: assert_type(Unknown, float) failed [assert-type]
+ ERROR tests/series/test_agg.py:44:33-35: No matching overload found for function `pandas.core.series.Series.std` called with arguments: () [no-matching-overload]
+ ERROR tests/series/test_agg.py:45:22-43: assert_type(Unknown, float) failed [assert-type]
+ ERROR tests/series/test_agg.py:45:33-35: No matching overload found for function `pandas.core.series.Series.var` called with arguments: () [no-matching-overload]
+ ERROR tests/series/test_agg.py:52:22-46: assert_type(Unknown, complex) failed [assert-type]
+ ERROR tests/series/test_agg.py:52:34-36: No matching overload found for function `pandas.core.series.Series.mean` called with arguments: () [no-matching-overload]
+ ERROR tests/series/test_agg.py:97:22-51: assert_type(Unknown, Timedelta) failed [assert-type]
+ ERROR tests/series/test_agg.py:97:34-36: No matching overload found for function `pandas.core.series.Series.mean` called with arguments: () [no-matching-overload]
+ ERROR tests/series/test_agg.py:98:22-53: assert_type(Unknown, Timedelta) failed [assert-type]
+ ERROR tests/series/test_agg.py:98:36-38: No matching overload found for function `pandas.core.series.Series.median` called with arguments: () [no-matching-overload]
+ ERROR tests/series/test_agg.py:99:22-50: assert_type(Unknown, Timedelta) failed [assert-type]
+ ERROR tests/series/test_agg.py:99:33-35: No matching overload found for function `pandas.core.series.Series.std` called with arguments: () [no-matching-overload]
+ ERROR tests/series/test_cumul.py:35:20-60: assert_type(Series[Series | complex], Series[complex]) failed [assert-type]

bandersnatch (https://github.com/pypa/bandersnatch)
- ERROR src/bandersnatch/mirror.py:814:70-81: Argument `(self: Path, *, follow_symlinks: bool = True) -> bool` is not assignable to parameter `func` with type `(**tuple[*@_]) -> @_` in function `asyncio.events.AbstractEventLoop.run_in_executor` [bad-argument-type]
+ ERROR src/bandersnatch/mirror.py:814:70-81: Argument `(self: Path, *, follow_symlinks: bool = True) -> bool` is not assignable to parameter `func` with type `(**tuple[*@_]) -> bool` in function `asyncio.events.AbstractEventLoop.run_in_executor` [bad-argument-type]

hydpy (https://github.com/hydpy-dev/hydpy)
+ ERROR hydpy/core/seriestools.py:326:31-36: Argument `Index` is not assignable to parameter `date` with type `Date | datetime | str` in function `hydpy.core.timetools.Date.__new__` [bad-argument-type]

pandas (https://github.com/pandas-dev/pandas)
- ERROR pandas/core/arrays/string_arrow.py:282:9-29: Class member `ArrowStringArray._convert_bool_result` overrides parent class `ArrowExtensionArray` in an inconsistent manner [bad-param-name-override]
+ ERROR pandas/core/arrays/string_arrow.py:282:9-29: Class member `ArrowStringArray._convert_bool_result` overrides parent class `ArrowExtensionArray` in an inconsistent manner [bad-override]
- ERROR pandas/core/computation/align.py:86:12-19: Returned type `(terms: Unknown) -> tuple[partial[Unknown] | type[NDFrame], dict[str, Index] | None] | tuple[Unknown, None] | Unknown` is not assignable to declared return type `(F) -> F` [bad-return]
+ ERROR pandas/core/computation/align.py:86:12-19: Returned type `(terms: Unknown) -> tuple[dtype | Unknown, None] | tuple[partial[Unknown] | type[NDFrame], dict[str, Index] | None] | Unknown` is not assignable to declared return type `(F) -> F` [bad-return]
- ERROR pandas/core/computation/common.py:46:36-60: No matching overload found for function `pandas.core.dtypes.cast.find_common_type` called with arguments: (list[_Buffer | _HasDType[dtype] | _HasNumPyDType[dtype] | _NestedSequence[bytes | complex | str] | _NestedSequence[_SupportsArray[dtype]] | _SupportsArray[dtype] | bytes | complex | dtype | list[Any] | str | _DTypeDict | tuple[Any, Any] | type[Any] | Unknown | None]) [no-matching-overload]
- ERROR pandas/core/computation/expr.py:172:9-173:41: Argument `Generator[object]` is not assignable to parameter `iterable` with type `Iterable[_Token]` in function `tokenize.untokenize` [bad-argument-type]
+ ERROR pandas/core/computation/expr.py:172:9-173:41: Argument `Generator[object | Unknown]` is not assignable to parameter `iterable` with type `Iterable[_Token]` in function `tokenize.untokenize` [bad-argument-type]
+ ERROR pandas/core/computation/ops.py:266:27-28: Argument `type[bool] | dtype | Unknown` is not assignable to parameter `cls` with type `type` in function `issubclass` [bad-argument-type]

pytest-robotframework (https://github.com/detachhead/pytest-robotframework)
- ERROR tests/type_tests.py:64:5-40: Argument `() -> None` is not assignable to parameter `fn` with type `() -> AbstractContextManager[@_]` in function `pytest_robotframework._WrappedContextManagerKeywordDecorator.__call__` [bad-argument-type]
+ ERROR tests/type_tests.py:64:5-40: Argument `() -> None` is not assignable to parameter `fn` with type `(ParamSpec(@_)) -> AbstractContextManager[@_]` in function `pytest_robotframework._WrappedContextManagerKeywordDecorator.__call__` [bad-argument-type]

mypy (https://github.com/python/mypy)
- ERROR mypy/typeshed/stdlib/builtins.pyi:227:5-17: Argument `(metacls: Self@type, name: str, bases: tuple[type[Any], ...], /, **kwds: Any) -> MutableMapping[str, object]` is not assignable to parameter `f` with type `(type[@_], ParamSpec(@_)) -> @_` in function `classmethod.__init__` [bad-argument-type]
+ ERROR mypy/typeshed/stdlib/builtins.pyi:227:5-17: Argument `(metacls: Self@type, name: str, bases: tuple[type[Any], ...], /, **kwds: Any) -> MutableMapping[str, object]` is not assignable to parameter `f` with type `(type[@_], ParamSpec(@_)) -> MutableMapping[str, object]` in function `classmethod.__init__` [bad-argument-type]
- ERROR mypy/typeshed/stdlib/builtins.pyi:277:9-21: Argument `(cls: Self@int, bytes: Buffer | Iterable[SupportsIndex] | SupportsBytes, byteorder: Literal['big', 'little'] = 'big', *, signed: bool = False) -> Self@int` is not assignable to parameter `f` with type `(type[@_], ParamSpec(@_)) -> @_` in function `classmethod.__init__` [bad-argument-type]
+ ERROR mypy/typeshed/stdlib/builtins.pyi:277:9-21: Argument `(cls: Self@int, bytes: Buffer | Iterable[SupportsIndex] | SupportsBytes, byteorder: Literal['big', 'little'] = 'big', *, signed: bool = False) -> Self@int` is not assignable to parameter `f` with type `(type[@_], ParamSpec(@_)) -> Self@int` in function `classmethod.__init__` [bad-argument-type]
- ERROR mypy/typeshed/stdlib/builtins.pyi:370:5-17: Argument `(cls: Self@float, string: str, /) -> Self@float` is not assignable to parameter `f` with type `(type[@_], ParamSpec(@_)) -> @_` in function `classmethod.__init__` [bad-argument-type]
+ ERROR mypy/typeshed/stdlib/builtins.pyi:370:5-17: Argument `(cls: Self@float, string: str, /) -> Self@float` is not assignable to parameter `f` with type `(type[@_], ParamSpec(@_)) -> Self@float` in function `classmethod.__init__` [bad-argument-type]
- ERROR mypy/typeshed/stdlib/builtins.pyi:640:9-21: Argument `(cls: Self@bytes, string: str, /) -> Self@bytes` is not assignable to parameter `f` with type `(type[@_], ParamSpec(@_)) -> @_` in function `classmethod.__init__` [bad-argument-type]
+ ERROR mypy/typeshed/stdlib/builtins.pyi:640:9-21: Argument `(cls: Self@bytes, string: str, /) -> Self@bytes` is not assignable to parameter `f` with type `(type[@_], ParamSpec(@_)) -> Self@bytes` in function `classmethod.__init__` [bad-argument-type]
- ERROR mypy/typeshed/stdlib/builtins.pyi:750:9-21: Argument `(cls: Self@bytearray, string: str, /) -> Self@bytearray` is not assignable to parameter `f` with type `(type[@_], ParamSpec(@_)) -> @_` in function `classmethod.__init__` [bad-argument-type]
+ ERROR mypy/typeshed/stdlib/builtins.pyi:750:9-21: Argument `(cls: Self@bytearray, string: str, /) -> Self@bytearray` is not assignable to parameter `f` with type `(type[@_], ParamSpec(@_)) -> Self@bytearray` in function `classmethod.__init__` [bad-argument-type]
- ERROR mypy/typeshed/stdlib/builtins.pyi:1117:5-17: Argument `(cls: Self@mypy.typeshed.stdlib.builtins.dict, iterable: Iterable[_T], value: None = None, /) -> builtins.dict[_T, Any | None]` is not assignable to parameter `f` with type `(type[@_], ParamSpec(@_)) -> @_` in function `classmethod.__init__` [bad-argument-type]
+ ERROR mypy/typeshed/stdlib/builtins.pyi:1117:5-17: Argument `(cls: Self@mypy.typeshed.stdlib.builtins.dict, iterable: Iterable[_T], value: None = None, /) -> builtins.dict[_T, Any | None]` is not assignable to parameter `f` with type `(type[@_], ParamSpec(@_)) -> builtins.dict[_T, Any | None]` in function `classmethod.__init__` [bad-argument-type]
- ERROR mypy/typeshed/stdlib/builtins.pyi:1119:9-17: `fromkeys` has type `classmethod[Unknown, Ellipsis, Unknown]` after decorator application, which is not callable [invalid-overload]
+ ERROR mypy/typeshed/stdlib/builtins.pyi:1119:9-17: `fromkeys` has type `classmethod[Unknown, Ellipsis, dict[_T, Any | None]]` after decorator application, which is not callable [invalid-overload]
- ERROR mypy/typeshed/stdlib/builtins.pyi:1120:5-17: Argument `(cls: Self@mypy.typeshed.stdlib.builtins.dict, iterable: Iterable[_T], value: _S, /) -> builtins.dict[_T, _S]` is not assignable to parameter `f` with type `(type[@_], ParamSpec(@_)) -> @_` in function `classmethod.__init__` [bad-argument-type]
+ ERROR mypy/typeshed/stdlib/builtins.pyi:1120:5-17: Argument `(cls: Self@mypy.typeshed.stdlib.builtins.dict, iterable: Iterable[_T], value: _S, /) -> builtins.dict[_T, _S]` is not assignable to parameter `f` with type `(type[@_], ParamSpec(@_)) -> builtins.dict[_T, _S]` in function `classmethod.__init__` [bad-argument-type]
- ERROR mypy/typeshed/stdlib/builtins.pyi:1122:9-17: `fromkeys` has type `classmethod[@_, @_, @_]` after decorator application, which is not callable [invalid-overload]
+ ERROR mypy/typeshed/stdlib/builtins.pyi:1122:9-17: `fromkeys` has type `classmethod[@_, @_, dict[_T, _S]]` after decorator application, which is not callable [invalid-overload]
+ ERROR mypy/typeshed/stdlib/collections/__init__.pyi:392:9-15: Class member `OrderedDict.__or__` overrides parent class `dict` in an inconsistent manner [bad-override]
+ ERROR mypy/typeshed/stdlib/collections/__init__.pyi:396:9-16: Class member `OrderedDict.__ror__` overrides parent class `dict` in an inconsistent manner [bad-override]
+ ERROR mypy/typeshed/stdlib/collections/__init__.pyi:440:9-15: Class member `defaultdict.__or__` overrides parent class `dict` in an inconsistent manner [bad-override]
+ ERROR mypy/typeshed/stdlib/collections/__init__.pyi:444:9-16: Class member `defaultdict.__ror__` overrides parent class `dict` in an inconsistent manner [bad-override]

meson (https://github.com/mesonbuild/meson)
- ERROR mesonbuild/ast/interpreter.py:338:9-31: Class member `AstInterpreter.evaluate_dictstatement` overrides parent class `InterpreterBase` in an inconsistent manner [bad-param-name-override]
+ ERROR mesonbuild/ast/interpreter.py:338:9-31: Class member `AstInterpreter.evaluate_dictstatement` overrides parent class `InterpreterBase` in an inconsistent manner [bad-override]
+ ERROR unittests/linuxliketests.py:727:62-77: Argument `[_StrOrBytesPathT: PathLike[bytes] | PathLike[str] | bytes | str](src: StrOrBytesPath, dst: _StrOrBytesPathT, *, follow_symlinks: bool = True) -> _StrOrBytesPathT` is not assignable to parameter `copy_function` with type `(str, str) -> object` in function `shutil.copytree` [bad-argument-type]

werkzeug (https://github.com/pallets/werkzeug)
- ERROR tests/test_utils.py:285:5-27: Argument `() -> Literal[42]` is not assignable to parameter `fget` with type `(Any) -> @_` in function `werkzeug.utils.cached_property.__init__` [bad-argument-type]
+ ERROR tests/test_utils.py:285:5-27: Argument `() -> Literal[42]` is not assignable to parameter `fget` with type `(Any) -> int` in function `werkzeug.utils.cached_property.__init__` [bad-argument-type]

rich (https://github.com/Textualize/rich)
- ERROR rich/text.py:1287:16-27: Returned type `Literal[1] | SupportsIndex` is not assignable to declared return type `int` [bad-return]

core (https://github.com/home-assistant/core)
- ERROR homeassistant/components/backup/backup.py:101:56-74: Argument `(self: Path, *, follow_symlinks: bool = True) -> bool` is not assignable to parameter `target` with type `(**tuple[*@_]) -> @_` in function `homeassistant.core.HomeAssistant.async_add_executor_job` [bad-argument-type]
+ ERROR homeassistant/components/backup/backup.py:101:56-74: Argument `(self: Path, *, follow_symlinks: bool = True) -> bool` is not assignable to parameter `target` with type `(**tuple[*@_]) -> bool` in function `homeassistant.core.HomeAssistant.async_add_executor_job` [bad-argument-type]
- ERROR homeassistant/components/image_upload/__init__.py:223:54-73: Argument `(self: Path, *, follow_symlinks: bool = True) -> bool` is not assignable to parameter `target` with type `(**tuple[*@_]) -> @_` in function `homeassistant.core.HomeAssistant.async_add_executor_job` [bad-argument-type]
+ ERROR homeassistant/components/image_upload/__init__.py:223:54-73: Argument `(self: Path, *, follow_symlinks: bool = True) -> bool` is not assignable to parameter `target` with type `(**tuple[*@_]) -> bool` in function `homeassistant.core.HomeAssistant.async_add_executor_job` [bad-argument-type]
- ERROR homeassistant/components/media_source/local_source.py:305:49-67: Argument `(self: Path, *, follow_symlinks: bool = True) -> bool` is not assignable to parameter `target` with type `(**tuple[*@_]) -> @_` in function `homeassistant.core.HomeAssistant.async_add_executor_job` [bad-argument-type]
+ ERROR homeassistant/components/media_source/local_source.py:305:49-67: Argument `(self: Path, *, follow_symlinks: bool = True) -> bool` is not assignable to parameter `target` with type `(**tuple[*@_]) -> bool` in function `homeassistant.core.HomeAssistant.async_add_executor_job` [bad-argument-type]
- ERROR homeassistant/components/panasonic_viera/__init__.py:251:62-66: Argument `(**tuple[*_Ts]) -> _R` is not assignable to parameter `target` with type `(**tuple[*@_]) -> @_` in function `homeassistant.core.HomeAssistant.async_add_executor_job` [bad-argument-type]
+ ERROR homeassistant/components/panasonic_viera/__init__.py:251:62-66: Argument `(**tuple[*_Ts]) -> _R` is not assignable to parameter `target` with type `(**tuple[*@_]) -> _R` in function `homeassistant.core.HomeAssistant.async_add_executor_job` [bad-argument-type]
- ERROR homeassistant/components/rfxtrx/entity.py:123:48-51: Argument `(**tuple[Unknown, *_Ts]) -> None` is not assignable to parameter `target` with type `(**tuple[*@_]) -> @_` in function `homeassistant.core.HomeAssistant.async_add_executor_job` [bad-argument-type]
+ ERROR homeassistant/components/rfxtrx/entity.py:123:48-51: Argument `(**tuple[Unknown, *_Ts]) -> None` is not assignable to parameter `target` with type `(**tuple[*@_]) -> None` in function `homeassistant.core.HomeAssistant.async_add_executor_job` [bad-argument-type]
- ERROR homeassistant/core.py:842:48-54: Argument `(**tuple[*_Ts]) -> _T` is not assignable to parameter `func` with type `(**tuple[*@_]) -> @_` in function `asyncio.events.AbstractEventLoop.run_in_executor` [bad-argument-type]
+ ERROR homeassistant/core.py:842:48-54: Argument `(**tuple[*_Ts]) -> _T` is not assignable to parameter `func` with type `(**tuple[*@_]) -> _T` in function `asyncio.events.AbstractEventLoop.run_in_executor` [bad-argument-type]
- ERROR homeassistant/core.py:859:64-70: Argument `(**tuple[*_Ts]) -> _T` is not assignable to parameter `func` with type `(**tuple[*@_]) -> @_` in function `asyncio.events.AbstractEventLoop.run_in_executor` [bad-argument-type]
+ ERROR homeassistant/core.py:859:64-70: Argument `(**tuple[*_Ts]) -> _T` is not assignable to parameter `func` with type `(**tuple[*@_]) -> _T` in function `asyncio.events.AbstractEventLoop.run_in_executor` [bad-argument-type]
- ERROR homeassistant/loader.py:1504:15-17: Argument `dict[@_, @_]` is not assignable to parameter `cache` with type `_ResolveDependenciesCacheProtocol` in function `_resolve_integrations_dependencies` [bad-argument-type]
+ ERROR homeassistant/loader.py:1504:15-17: Argument `dict[@_, Exception | set[str] | None]` is not assignable to parameter `cache` with type `_ResolveDependenciesCacheProtocol` in function `_resolve_integrations_dependencies` [bad-argument-type]

trio (https://github.com/python-trio/trio)
- ERROR src/trio/_tests/test_signals.py:73:38-57: Unpacked argument `tuple[() -> Coroutine[Unknown, Unknown, None]]` is not assignable to parameter `*args` with type `tuple[(**tuple[*Unknown]) -> Awaitable[@_], *Unknown]` in function `trio._threads.to_thread_run_sync` [bad-argument-type]
+ ERROR src/trio/_tests/test_signals.py:73:38-57: Unpacked argument `tuple[() -> Coroutine[Unknown, Unknown, None]]` is not assignable to parameter `*args` with type `tuple[(**tuple[*Unknown]) -> Awaitable[Unknown | None], *Unknown]` in function `trio._threads.to_thread_run_sync` [bad-argument-type]
- ERROR src/trio/_tests/test_threads.py:921:44-86: Unpacked argument `tuple[() -> Task]` is not assignable to parameter `*args` with type `tuple[(**tuple[*Unknown]) -> @_, *Unknown]` in function `trio._threads.to_thread_run_sync` [bad-argument-type]
+ ERROR src/trio/_tests/test_threads.py:921:44-86: Unpacked argument `tuple[() -> Task]` is not assignable to parameter `*args` with type `tuple[(**tuple[*Unknown]) -> Task, *Unknown]` in function `trio._threads.to_thread_run_sync` [bad-argument-type]
- ERROR src/trio/_tests/test_threads.py:922:44-81: Unpacked argument `tuple[() -> Coroutine[Unknown, Unknown, Task]]` is not assignable to parameter `*args` with type `tuple[(**tuple[*Unknown]) -> Awaitable[@_], *Unknown]` in function `trio._threads.to_thread_run_sync` [bad-argument-type]
+ ERROR src/trio/_tests/test_threads.py:922:44-81: Unpacked argument `tuple[() -> Coroutine[Unknown, Unknown, Task]]` is not assignable to parameter `*args` with type `tuple[(**tuple[*Unknown]) -> Awaitable[Task], *Unknown]` in function `trio._threads.to_thread_run_sync` [bad-argument-type]
- ERROR src/trio/_tests/test_threads.py:931:31-72: Unpacked argument `tuple[() -> int]` is not assignable to parameter `*args` with type `tuple[(**tuple[*Unknown]) -> @_, *Unknown]` in function `trio._threads.from_thread_run` [bad-argument-type]
- ERROR src/trio/_tests/test_threads.py:990:33-67: Unpacked argument `tuple[() -> Coroutine[Unknown, Unknown, None]]` is not assignable to parameter `*args` with type `tuple[(**tuple[*Unknown]) -> Awaitable[@_], *Unknown]` in function `trio._threads.to_thread_run_sync` [bad-argument-type]
+ ERROR src/trio/_tests/test_threads.py:990:33-67: Unpacked argument `tuple[() -> Coroutine[Unknown, Unknown, None]]` is not assignable to parameter `*args` with type `tuple[(**tuple[*Unknown]) -> Awaitable[Unknown | None], *Unknown]` in function `trio._threads.to_thread_run_sync` [bad-argument-type]
- ERROR src/trio/_tests/test_threads.py:1121:37-89: Unpacked argument `tuple[(seconds: float) -> Coroutine[Unknown, Unknown, None], Literal[0]]` is not assignable to parameter `*args` with type `tuple[(**tuple[*Unknown]) -> Awaitable[@_], *Unknown]` in function `trio._threads.to_thread_run_sync` [bad-argument-type]
+ ERROR src/trio/_tests/test_threads.py:1121:37-89: Unpacked argument `tuple[(seconds: float) -> Coroutine[Unknown, Unknown, None], Literal[0]]` is not assignable to parameter `*args` with type `tuple[(**tuple[*Unknown]) -> Awaitable[Unknown | None], *Unknown]` in function `trio._threads.to_thread_run_sync` [bad-argument-type]

xarray (https://github.com/pydata/xarray)
- ERROR xarray/core/groupby.py:544:28-37: Object of class `bytes` has no attribute `data`
- Object of class `complex` has no attribute `data`
- Object of class `str` has no attribute `data` [missing-attribute]
- ERROR xarray/tests/test_dataset.py:574:57-73: Argument `Iterator[Any]` is not assignable to parameter `indexers` with type `Mapping[Any, Any] | None` in function `xarray.core.dataarray.DataArray.reindex` [bad-argument-type]
+ ERROR xarray/tests/test_dataset.py:574:57-73: Argument `Iterator[Index]` is not assignable to parameter `indexers` with type `Mapping[Any, Any] | None` in function `xarray.core.dataarray.DataArray.reindex` [bad-argument-type]
- ERROR xarray/tests/test_dataset.py:574:57-73: Argument `Iterator[Any]` is not assignable to parameter `labels` with type `ExtensionArray | Index | SequenceNotStr[Any] | Series | ndarray | range | None` in function `pandas.core.frame.DataFrame.reindex` [bad-argument-type]
+ ERROR xarray/tests/test_dataset.py:574:57-73: Argument `Iterator[Index]` is not assignable to parameter `labels` with type `ExtensionArray | Index | SequenceNotStr[Any] | Series | ndarray | range | None` in function `pandas.core.frame.DataFrame.reindex` [bad-argument-type]
- ERROR xarray/tests/test_dataset.py:574:57-73: Argument `Iterator[Any]` is not assignable to parameter `index` with type `ExtensionArray | Index | SequenceNotStr[Any] | Series | ndarray | range | None` in function `pandas.core.series.Series.reindex` [bad-argument-type]
+ ERROR xarray/tests/test_dataset.py:574:57-73: Argument `Iterator[Index]` is not assignable to parameter `index` with type `ExtensionArray | Index | SequenceNotStr[Any] | Series | ndarray | range | None` in function `pandas.core.series.Series.reindex` [bad-argument-type]

scrapy (https://github.com/scrapy/scrapy)
- ERROR tests/test_spidermiddleware_referer.py:685:9-17: Class member `CustomPythonOrgPolicy.referrer` overrides parent class `ReferrerPolicy` in an inconsistent manner [bad-param-name-override]
+ ERROR tests/test_spidermiddleware_referer.py:685:9-17: Class member `CustomPythonOrgPolicy.referrer` overrides parent class `ReferrerPolicy` in an inconsistent manner [bad-override]

spark (https://github.com/apache/spark)
+ ERROR python/pyspark/pandas/tests/computation/test_stats.py:204:47-49: No matching overload found for function `pandas.core.series.Series.mean` called with arguments: () [no-matching-overload]
+ ERROR python/pyspark/pandas/tests/computation/test_stats.py:206:45-47: No matching overload found for function `pandas.core.series.Series.var` called with arguments: () [no-matching-overload]
+ ERROR python/pyspark/pandas/tests/computation/test_stats.py:207:51-59: No matching overload found for function `pandas.core.series.Series.var` called with arguments: (ddof=Literal[0]) [no-matching-overload]
+ ERROR python/pyspark/pandas/tests/computation/test_stats.py:208:51-59: No matching overload found for function `pandas.core.series.Series.var` called with arguments: (ddof=Literal[2]) [no-matching-overload]
+ ERROR python/pyspark/pandas/tests/computation/test_stats.py:209:45-47: No matching overload found for function `pandas.core.series.Series.std` called with arguments: () [no-matching-overload]
+ ERROR python/pyspark/pandas/tests/computation/test_stats.py:210:51-59: No matching overload found for function `pandas.core.series.Series.std` called with arguments: (ddof=Literal[0]) [no-matching-overload]
+ ERROR python/pyspark/pandas/tests/computation/test_stats.py:211:51-59: No matching overload found for function `pandas.core.series.Series.std` called with arguments: (ddof=Literal[2]) [no-matching-overload]
+ ERROR python/pyspark/pandas/tests/series/test_compute.py:155:63-67: No matching overload found for function `pandas.core.series.Series.diff` called with arguments: (Literal[-1]) [no-matching-overload]
- ERROR python/pyspark/pandas/tests/series/test_cumulative.py:31:41-43: Argument `Series[float | None]` is not assignable to parameter `self` with type `SupportsGetItem[Scalar, _SupportsAdd[float]]` in function `pandas.core.series.Series.sum` [bad-argument-type]
+ ERROR python/pyspark/pandas/tests/series/test_cumulative.py:31:41-43: Argument `Series[float | None]` is not assignable to parameter `self` with type `SupportsGetItem[Scalar, _SupportsAdd[Series[str] | Series | float]]` in function `pandas.core.series.Series.sum` [bad-argument-type]
- ERROR python/pyspark/pandas/tests/series/test_cumulative.py:44:41-43: Argument `Series[float | None]` is not assignable to parameter `self` with type `SupportsGetItem[Scalar, _SupportsAdd[float]]` in function `pandas.core.series.Series.sum` [bad-argument-type]
+ ERROR python/pyspark/pandas/tests/series/test_cumulative.py:44:41-43: Argument `Series[float | None]` is not assignable to parameter `self` with type `SupportsGetItem[Scalar, _SupportsAdd[Series[str] | Series | float]]` in function `pandas.core.series.Series.sum` [bad-argument-type]
- ERROR python/pyspark/pandas/tests/series/test_cumulative.py:57:41-43: Argument `Series[float | None]` is not assignable to parameter `self` with type `SupportsGetItem[Scalar, _SupportsAdd[float]]` in function `pandas.core.series.Series.sum` [bad-argument-type]
+ ERROR python/pyspark/pandas/tests/series/test_cumulative.py:57:41-43: Argument `Series[float | None]` is not assignable to parameter `self` with type `SupportsGetItem[Scalar, _SupportsAdd[Series[str] | Series | float]]` in function `pandas.core.series.Series.sum` [bad-argument-type]
+ ERROR python/pyspark/sql/tests/pandas/test_pandas_grouped_map.py:1028:17-42: `+=` is not supported between `int` and `Series[str]` [unsupported-operation]
+ ERROR python/pyspark/sql/tests/pandas/test_pandas_grouped_map.py:1046:17-42: `+=` is not supported between `int` and `Series[str]` [unsupported-operation]
+ ERROR python/pyspark/sql/tests/pandas/test_pandas_grouped_map.py:1354:17-49: `+=` is not supported between `int` and `Series[str]` [unsupported-operation]
+ ERROR python/pyspark/sql/tests/pandas/test_pandas_grouped_map.py:1455:17-42: `+=` is not supported between `int` and `Series[str]` [unsupported-operation]
+ ERROR python/pyspark/sql/tests/pandas/test_pandas_udf_grouped_agg.py:551:17-38: `+=` is not supported between `int` and `Series[str]` [unsupported-operation]
+ ERROR python/pyspark/sql/tests/pandas/test_pandas_udf_grouped_agg.py:552:20-25: Returned type `Series | int` is not assignable to declared return type `int` [bad-return]
+ ERROR python/pyspark/sql/tests/pandas/test_pandas_udf_grouped_agg.py:940:17-40: `+=` is not supported between `float` and `Series[str]` [unsupported-operation]
+ ERROR python/pyspark/sql/tests/pandas/test_pandas_udf_grouped_agg.py:942:20-53: Returned type `Series | float` is not assignable to declared return type `float` [bad-return]
+ ERROR python/pyspark/sql/tests/pandas/test_pandas_udf_grouped_agg.py:972:17-41: `+=` is not supported between `float` and `Series[str]` [unsupported-operation]
+ ERROR python/pyspark/sql/tests/pandas/test_pandas_udf_grouped_agg.py:973:20-64: Returned type `Series | float | Unknown` is not assignable to declared return type `float` [bad-return]
+ ERROR python/pyspark/sql/tests/pandas/test_pandas_udf_grouped_agg.py:997:17-38: `+=` is not supported between `float` and `Series[str]` [unsupported-operation]
+ ERROR python/pyspark/sql/tests/pandas/test_pandas_udf_grouped_agg.py:998:20-25: Returned type `Series | float` is not assignable to declared return type `float` [bad-return]
+ ERROR python/pyspark/sql/tests/pandas/test_pandas_udf_grouped_agg.py:1006:17-33: `+=` is not supported between `float` and `Series[str]` [unsupported-operation]
+ ERROR python/pyspark/sql/tests/pandas/test_pandas_udf_grouped_agg.py:1007:20-25: Returned type `Series | float` is not assignable to declared return type `float` [bad-return]
+ ERROR python/pyspark/sql/tests/pandas/test_pandas_udf_typehints.py:397:17-40: `+=` is not supported between `float` and `Series[str]` [unsupported-operation]
+ ERROR python/pyspark/sql/tests/pandas/test_pandas_udf_typehints.py:399:20-53: Returned type `Series | float` is not assignable to declared return type `float` [bad-return]
+ ERROR python/pyspark/sql/tests/pandas/test_pandas_udf_typehints.py:420:17-41: `+=` is not supported between `float` and `Series[str]` [unsupported-operation]
+ ERROR python/pyspark/sql/tests/pandas/test_pandas_udf_typehints.py:421:20-64: Returned type `Series | float | Unknown` is not assignable to declared return type `float` [bad-return]
+ ERROR python/pyspark/sql/tests/test_udf_profiler.py:551:17-35: `+=` is not supported between `float` and `Series[str]` [unsupported-operation]
+ ERROR python/pyspark/sql/tests/test_udf_profiler.py:553:20-53: Returned type `Series | float` is not assignable to declared return type `float` [bad-return]
- ERROR python/pyspark/sql/types.py:2760:25-2768:26: Argument `Generator[DataType]` is not assignable to parameter `iterable` with type `Iterable[StructType]` in function `functools.reduce` [bad-argument-type]

attrs (https://github.com/python-attrs/attrs)
- ERROR tests/test_converters.py:25:23-26: Argument `type[int]` is not assignable to parameter `converter` with type `(ConvertibleToInt, AttrsInstance, Attribute[Unknown]) -> @_` in function `attr.Converter.__init__` [bad-argument-type]
+ ERROR tests/test_converters.py:25:23-26: Argument `type[int]` is not assignable to parameter `converter` with type `(ConvertibleToInt, AttrsInstance, Attribute[Unknown]) -> int` in function `attr.Converter.__init__` [bad-argument-type]
- ERROR tests/test_converters.py:104:23-30: Argument `(_: Unknown, __: Unknown, ___: Unknown) -> float` is not assignable to parameter `converter` with type `(Unknown) -> @_` in function `attr.Converter.__init__` [bad-argument-type]
+ ERROR tests/test_converters.py:104:23-30: Argument `(_: Unknown, __: Unknown, ___: Unknown) -> float` is not assignable to parameter `converter` with type `(Unknown) -> float` in function `attr.Converter.__init__` [bad-argument-type]

prefect (https://github.com/PrefectHQ/prefect)
- ERROR src/prefect/flows.py:2201:20-2222:14: Returned type `Flow[[(ParamSpec(P)) -> R], Coroutine[Any, Any, Unknown]]` is not assignable to declared return type `(((ParamSpec(P)) -> R) -> Flow[P, R]) | Flow[P, R]` [bad-return]
+ ERROR src/prefect/flows.py:2201:20-2222:14: Returned type `Flow[P, Coroutine[Any, Any, Unknown]]` is not assignable to declared return type `(((ParamSpec(P)) -> R) -> Flow[P, R]) | Flow[P, R]` [bad-return]
- ERROR src/prefect/flows.py:2202:20-24: Argument `(ParamSpec(P)) -> R` is not assignable to parameter `fn` with type `(((ParamSpec(P)) -> R) -> Coroutine[Any, Any, Unknown]) | classmethod[Any, [(ParamSpec(P)) -> R], Coroutine[Any, Any, Unknown]] | staticmethod[[(ParamSpec(P)) -> R], Coroutine[Any, Any, Unknown]]` in function `Flow.__init__` [bad-argument-type]
+ ERROR src/prefect/flows.py:2202:20-24: Argument `(ParamSpec(P)) -> R` is not assignable to parameter `fn` with type `((ParamSpec(P)) -> Coroutine[Any, Any, Unknown]) | classmethod[Any, P, Coroutine[Any, Any, Unknown]] | staticmethod[P, Coroutine[Any, Any, Unknown]]` in function `Flow.__init__` [bad-argument-type]
- ERROR src/prefect/server/database/_migrations/env.py:190:35-52: Argument `(connection: AsyncEngine) -> None` is not assignable to parameter with type `(Connection, ParamSpec(@_)) -> @_` in function `sqlalchemy.ext.asyncio.engine.AsyncConnection.run_sync` [bad-argument-type]
+ ERROR src/prefect/server/database/_migrations/env.py:190:35-52: Argument `(connection: AsyncEngine) -> None` is not assignable to parameter with type `(Connection, ParamSpec(@_)) -> None` in function `sqlalchemy.ext.asyncio.engine.AsyncConnection.run_sync` [bad-argument-type]

scipy-stubs (https://github.com/scipy/scipy-stubs)
+ ERROR tests/sparse/test_construct.pyi:291:12-130: assert_type(coo_array[_Numeric, tuple[int, int]] | coo_matrix, coo_array[floating[_32Bit], tuple[int, int]] | coo_matrix[floating[_32Bit]]) failed [assert-type]

jax (https://github.com/google/jax)
+ ERROR jax/_src/pallas/mosaic/interpret/interpret_pallas_call.py:2032:53-67: Argument `DynamicGridDim | int | Unknown` is not assignable to parameter `y` with type `Array | builtins.bool | numpy.bool | complex | float | int | ndarray | number` in function `jax.numpy.BinaryUfunc.__call__` [bad-argument-type]

@github-actions

Primer Diff Classification

❌ 11 regression(s) | ✅ 6 improvement(s) | ➖ 20 neutral | ❓ 1 needs review | 38 project(s) total | +178, -90 errors

11 regression(s) across bokeh, asynq, steam.py, sockeye, pandas-stubs, hydpy, pytest-robotframework, mypy, meson, scipy-stubs, and jax; error kinds include bad-argument-type, bad-override reclassification (container.py, dataspec.py), and bad-argument-type in layouts.py. 6 improvement(s) across Tanjun, streamlit, freqtrade, pip, rich, and spark.

| Project | Verdict | Changes | Error Kinds | Root Cause |
|---|---|---|---|---|
| anyio | ➖ Neutral | +10, -10 | bad-argument-type | pyrefly/lib/solver/subset.rs |
| Tanjun | ✅ Improvement | -9 | bad-return | pyrefly/lib/solver/subset.rs |
| bokeh | ❌ Regression | +4, -3 | bad-override reclassification (container.py, dataspec.py) | pyrefly/lib/solver/subset.rs |
| asynq | ❌ Regression | +1, -1 | bad-param-name-override → bad-override reclassification | pyrefly/lib/solver/subset.rs |
| streamlit | ✅ Improvement | -1 | bad-return | pyrefly/lib/solver/subset.rs |
| Expression | ❓ Needs Review | +3, -3 | bad-argument-type | |
| freqtrade | ✅ Improvement | +30 | unsupported-operation on pandas Series arithmetic | pyrefly/lib/solver/subset.rs |
| more-itertools | ➖ Neutral | +1, -1 | bad-argument-type | |
| setuptools | ➖ Neutral | +1, -1 | bad-argument-type | |
| pwndbg | ➖ Neutral | +2, -2 | bad-argument-type, bad-override | pyrefly/lib/solver/subset.rs |
| django-stubs | ➖ Neutral | +1, -1 | bad-override, bad-param-name-override | pyrefly/lib/solver/subset.rs |
| schemathesis | ➖ Neutral | +8, -8 | bad-argument-type, bad-assignment | |
| steam.py | ❌ Regression | +1 | missing-attribute | pyrefly/lib/solver/subset.rs |
| pip | ✅ Improvement | -1 | bad-return | pyrefly/lib/solver/subset.rs |
| tornado | ➖ Neutral | +1, -1 | bad-override, bad-param-name-override | pyrefly/lib/solver/subset.rs |
| openlibrary | ➖ Neutral | +1, -1 | bad-argument-type | pyrefly/lib/solver/subset.rs |
| egglog-python | ➖ Neutral | +2, -2 | bad-argument-type | |
| antidote | ➖ Neutral | +1, -1 | bad-argument-type | pyrefly/lib/solver/subset.rs |
| cloud-init | ➖ Neutral | +1, -1 | bad-override, bad-param-name-override | pyrefly/lib/solver/subset.rs |
| sockeye | ❌ Regression | +1 | no-matching-overload | pyrefly/lib/solver/subset.rs |
| pandas-stubs | ❌ Regression | +33 | no-matching-overload on Series aggregation methods | pyrefly/lib/solver/subset.rs |
| bandersnatch | ➖ Neutral | +1, -1 | bad-argument-type | pyrefly/lib/solver/subset.rs |
| hydpy | ❌ Regression | +1 | bad-argument-type | pyrefly/lib/solver/subset.rs |
| pandas | ➖ Neutral | +4, -4 | `bad-argument-type (new, Generator[object Unknown])` | |
| pytest-robotframework | ❌ Regression | +1, -1 | bad-argument-type | pyrefly/lib/solver/subset.rs |
| mypy | ❌ Regression | +13, -9 | bad-argument-type on classmethod | pyrefly/lib/solver/subset.rs |
| meson | ❌ Regression | +2, -1 | bad-override reclassification | pyrefly/lib/solver/subset.rs |
| werkzeug | ➖ Neutral | +1, -1 | bad-argument-type | |
| rich | ✅ Improvement | -1 | bad-return | pyrefly/lib/solver/subset.rs |
| core | ➖ Neutral | +8, -8 | bad-argument-type | pyrefly/lib/solver/subset.rs |
| trio | ➖ Neutral | +5, -6 | bad-argument-type | |
| xarray | ➖ Neutral | +1, -1 | bad-argument-type | |
| scrapy | ➖ Neutral | +1, -1 | bad-override, bad-param-name-override | pyrefly/lib/solver/subset.rs |
| spark | ✅ Improvement | +31, -4 | no-matching-overload on pandas Series methods | pyrefly/lib/solver/subset.rs |
| attrs | ➖ Neutral | +2, -2 | bad-argument-type | |
| prefect | ➖ Neutral | +3, -3 | bad-argument-type, bad-return | pyrefly/lib/solver/subset.rs |
| scipy-stubs | ❌ Regression | +1 | assert-type | pyrefly/lib/solver/subset.rs |
| jax | ❌ Regression | +1 | bad-argument-type | pyrefly/lib/solver/subset.rs |
Detailed analysis

❌ Regression (11)

bokeh (+4, -3)

  • bad-override reclassification (container.py, dataspec.py): The bad-param-name-override errors were replaced by bad-override errors at the exact same locations. Both ColumnData.make_descriptors and DataSpec.make_descriptors override parent methods — the issue is still flagged, just under a different error kind. This is neutral to slightly improved (the more general error kind may be more accurate). Pyright confirms both.
  • bad-argument-type in layouts.py: The old error about a callable type mismatch in map.__new__ was replaced by 2 new errors about list[LayoutDOM | grid.col | grid.row] not being assignable to list[grid.col | grid.row]. Looking at the code: traverse (line 572) returns LayoutDOM | grid.col | grid.row because the else branch (line 577) returns item, which is LayoutDOM. When map(traverse, item.children) is wrapped in list() and passed to col(...) or row(...), the resulting list contains LayoutDOM elements that don't match the expected list[row | col]. This is a genuine type issue — pyright confirms it. The PR's callable subtyping change resolved the callable-level mismatch but correctly exposed the downstream type incompatibility.

Overall: This is a net improvement. The 2 bad-override errors replace 2 bad-param-name-override errors at the same locations — same real issues, just reclassified (neutral). The layouts.py changes show improved type checking: the old error flagged a callable type mismatch at the map() call level, while the new errors more precisely identify that list[LayoutDOM | grid.col | grid.row] is not assignable to list[grid.col | grid.row] — both are real type issues. The traverse function on line 572 can return LayoutDOM (the else branch on line 577), so the list comprehension via map produces items that include LayoutDOM, which genuinely doesn't match the col dataclass's children: list[row | col] type. Pyright confirms all 4 new errors. Net change: +4 errors, -3 errors, with the removed errors being replaced by more precise versions of the same or related issues.

Attribution: The change in pyrefly/lib/solver/subset.rs in the is_subset_eq/is_subset_params ordering (checking return types before parameters for callable subtyping) is the root cause. This changed how map(traverse, item.children) is type-checked: previously the callable parameter check failed first (producing the removed error about func parameter), now the return type is checked first, the callable passes, but the resulting list[LayoutDOM | grid.col | grid.row] doesn't match list[grid.col | grid.row] expected by col.__init__. The bad-override vs bad-param-name-override reclassification may also be a side effect of the changed checking order, or from the tuple.rs changes.
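As a rough illustration (class names borrowed from the analysis; bodies invented), the variance issue in layouts.py can be sketched as a map whose element type widens to the base class:

```python
# Toy model of the layouts.py pattern described above: `traverse` can return
# the base class, so list(map(traverse, ...)) has element type
# LayoutDOM | col | row, which is not assignable to list[col | row].
class LayoutDOM: ...
class col(LayoutDOM): ...
class row(LayoutDOM): ...

def traverse(item: LayoutDOM) -> LayoutDOM:
    if isinstance(item, (col, row)):
        return item  # a recognized layout node
    return item      # base-class element leaks into the mapped list

children = [col(), row(), LayoutDOM()]
mapped = list(map(traverse, children))
assert any(type(x) is LayoutDOM for x in mapped)
```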

asynq (+1, -1)

bad-param-name-override → bad-override reclassification: The error at line 147 changed from bad-param-name-override to bad-override. Both correctly identify that AsynqStackTracebackFormatter.formatException overrides Formatter.formatException inconsistently. The override has two issues: (1) parameter renamed from ei to exc_info, and (2) return type is str | None (since format_error returns None on line 118) vs parent's str. The new error code is more general and captures the full override incompatibility. Pyright confirms this error.

Overall: This is a reclassification of the same real error from bad-param-name-override to bad-override. The underlying issue is genuine: AsynqStackTracebackFormatter.formatException overrides logging.Formatter.formatException with an incompatible signature. The parameter is renamed from ei to exc_info, and additionally format_error can return None (line 118: return None) while the parent's formatException returns str. The new bad-override error is actually more comprehensive — it catches the return type inconsistency (str | None vs str) in addition to or instead of just the parameter name mismatch. Pyright also flags this as an override error. This is a net neutral-to-positive change: the same real bug is still caught, just under a different (arguably better) error code.

Attribution: The change in pyrefly/lib/solver/subset.rs in the is_subset_eq function reordered callable subtyping to check return types before parameters. This reordering likely changed how the override check evaluates the formatException method — the override inconsistency is now classified under the general bad-override category rather than the specific bad-param-name-override subcategory. The return type check happening first may have surfaced a return type inconsistency (parent returns str, child returns str | None from format_error) that subsumes the parameter name issue.
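A hedged sketch (method bodies invented) of the two override inconsistencies the analysis describes — a renamed parameter plus a widened return type:

```python
# Minimal stand-ins for logging.Formatter and the asynq subclass: the child
# renames `ei` to `exc_info` and can return None where the parent returns str,
# which is the combined incompatibility a bad-override check can flag.
from typing import Optional

class Formatter:
    def formatException(self, ei) -> str:
        return "traceback"

class AsynqStackTracebackFormatter(Formatter):
    # renamed parameter + return widened to str | None
    def formatException(self, exc_info) -> Optional[str]:
        return None

assert AsynqStackTracebackFormatter().formatException(None) is None
```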

steam.py (+1)

On line 87, rounded_fractional = round(fractional, 2) where fractional is a Decimal. The round() builtin has overloads: round(SupportsRound[_T], ndigits: SupportsIndex) -> _T and round(SupportsRound[Any]) -> int. Since Decimal implements __round__ returning Decimal, round(Decimal, 2) should return Decimal. However, the PR's change to callable subtyping order in subset.rs (checking returns before parameters) appears to have caused pyrefly to resolve round() to the wrong overload, inferring int instead of Decimal. This makes pyrefly incorrectly report that int has no as_tuple attribute. This is a false positive — the code is correct and works at runtime.
Attribution: The PR changes callable subtyping in pyrefly/lib/solver/subset.rs to check return types before parameters. This reordering affects how type variables are resolved during reduce(gcd, ...) calls. The round() builtin likely uses overloads or a similar callable subtyping mechanism. The change to check returns before parameters may have altered how round(Decimal, int) resolves — specifically, round() has overloads where round(SupportsRound[T], int) -> T and round(number) -> int. The reordering in subset.rs may cause pyrefly to now prefer the int-returning overload over the Decimal-returning one, incorrectly inferring rounded_fractional as int instead of Decimal.
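A runtime check of the pattern flagged as a false positive: round() on a Decimal dispatches to Decimal.__round__ and returns a Decimal with an as_tuple attribute, not an int:

```python
# round(Decimal, ndigits) returns Decimal at runtime, so .as_tuple() exists —
# the flagged missing-attribute error is spurious.
from decimal import Decimal

fractional = Decimal("3.14159")
rounded_fractional = round(fractional, 2)
assert isinstance(rounded_fractional, Decimal)
assert rounded_fractional.as_tuple().exponent == -2  # two fractional digits
```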

sockeye (+1)

Line 36-43 creates a dict literal {'bleu-val': max, 'bleu-test': max, 'chrf-val': max, 'learning-rate': min, 'perplexity-train': min, 'perplexity-val': min} and passes it to FIND_BEST.update(). FIND_BEST is defaultdict(lambda: max), which is a defaultdict[str, ...] where the values are the max builtin. The dict literal contains min and max builtins as values. Both min and max are overloaded functions in typeshed. The MutableMapping.update method should accept a dict[str, <callable>] where the callable type matches the mapping's value type. This code is completely valid Python and runs without error. Neither mypy nor pyright flag it. The error message shows pyrefly failing to find a matching overload for update when the dict values have the complex overloaded type of min/max. This is a false positive introduced by the callable subtyping reorder in the PR.
Attribution: The change in pyrefly/lib/solver/subset.rs reordered callable subtyping to check return types before parameters. This was intended to fix reduce(gcd, ...) returning SupportsIndex instead of int. However, the reordering appears to have broken the resolution of min/max as values stored in a dict. When min and max are used as dict values (not called), their overloaded callable type needs to be compatible with MutableMapping.update's parameter type. The new return-before-parameters ordering in is_subset_eq / is_subset_params in pyrefly/lib/solver/subset.rs apparently causes the overload resolution for update to fail when the dict values are overloaded builtins like min/max.
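A runtime check of the sockeye pattern: the overloaded builtins min and max stored as dict values and passed to defaultdict.update, which is valid Python:

```python
# min/max are overloaded callables; storing them as dict values and calling
# update() on a defaultdict of callables is well-typed and runs fine.
from collections import defaultdict

FIND_BEST = defaultdict(lambda: max)
FIND_BEST.update({"bleu-val": max, "learning-rate": min})
assert FIND_BEST["bleu-val"] is max
assert FIND_BEST["learning-rate"] is min
assert FIND_BEST["perplexity-train"] is max  # falls back to the default factory
```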

pandas-stubs (+33)

  • no-matching-overload on Series aggregation methods: 16 new no-matching-overload errors on series.mean(), series.median(), series.std(), series.var() calls. These are valid calls on pd.Series[bool], pd.Series[int], pd.Series[float], etc. with default arguments. Neither mypy nor pyright flags any of these. The callable subtype reordering in subset.rs broke overload matching for these complex pandas stubs signatures.
  • assert-type failures cascading from overload resolution: 17 new assert-type errors like assert_type(Unknown, float) failed. These cascade from the overload resolution failures — when pyrefly can't match an overload, the return type becomes Unknown, which then fails the assert_type check. These are not independent bugs but downstream effects of the broken overload resolution.

Overall: This is a regression. The pandas-stubs project is a type stub package extensively tested against mypy and pyright. All 33 new errors are pyrefly-only, appearing in test files that verify correct overload resolution for pd.Series aggregation methods. The errors fall into two categories: (1) no-matching-overload — pyrefly can no longer find a matching overload for series.mean(), series.median(), series.std(), series.var() when called with default arguments on pd.Series[bool], pd.Series[int], pd.Series[float], etc. (2) assert-type failures that cascade from the overload resolution failures. The root cause is the change in pyrefly/lib/solver/subset.rs that reorders callable subtype checking (returns before parameters). While this fixes the specific reduce(gcd, ...) case from issue #2866, it breaks overload resolution for complex pandas method signatures. The pandas-stubs overloads for these methods use type variables and unions that depend on parameter-first resolution order to correctly select the right overload.

Attribution: The change in pyrefly/lib/solver/subset.rs in the is_subset_eq/is_subset_params ordering for callable subtyping is the root cause. The PR reorders callable subtype checking to check return types before parameters. While this fixes the reduce(gcd, ...) case, it appears to break overload resolution for pandas Series aggregation methods (mean, median, std, var). The no-matching-overload errors (16) indicate pyrefly can no longer match the correct overload for these methods, and the assert-type errors (17) are cascading failures because the return type is wrong (or the call fails entirely). The tuple-related changes in tuple.rs and expr.rs (renaming class_overrides_tuple_getitem to tuple_uses_builtin_getitem) are unrelated to these pandas errors.

hydpy (+1)

This is a false positive. The variable date1 comes from iterating over reversed(dataframe_resampled.index) where the index is a pandas DatetimeIndex (created via pandas.date_range() on line 306, preserved through resampling on line 316). At runtime, iterating over a DatetimeIndex yields pandas.Timestamp objects, which are subclasses of datetime.datetime and thus compatible with Date.__new__'s date: Date | datetime | str parameter. Pyrefly is incorrectly inferring the iteration type as the base Index class rather than Timestamp, as indicated by the error message Argument 'Index' is not assignable. This likely occurs because the pandas type stubs don't properly type reversed() on a DatetimeIndex (or DatetimeIndex.__reversed__) to yield Timestamp objects, causing pyrefly to fall back to the base Index type. Neither mypy nor pyright is reported to flag this.
Attribution: The change in pyrefly/lib/solver/subset.rs reordered callable subtyping to check return types before parameters (self.is_subset_eq(&l.ret, &u.ret)?; self.is_subset_params(&l.params, &u.params)). This change in subtyping order likely affected how pandas type stubs (specifically DatetimeIndex iteration or reversed() on an index) are resolved, causing pyrefly to infer Index instead of Timestamp for the iteration variable. The tuple-related changes in tuple.rs and expr.rs are unlikely to be the cause since this code doesn't involve tuple subscripting.

pytest-robotframework (+1, -1)

Both the old and new errors correctly identify a type error on line 64 (the test explicitly expects this error via the # expected type error comment). However, the error message quality degraded: the old message showed () -> AbstractContextManager[@_] which correctly resolved the parameter list to (), while the new message shows (ParamSpec(@_)) -> AbstractContextManager[@_] with an unsolved ParamSpec. This is a regression in error message quality — the type inference is resolving fewer type variables before reporting the error. The old message made it clear that the issue was the return type (None vs AbstractContextManager), while the new message with the unsolved ParamSpec is confusing and could mislead users into thinking the parameter list is the problem. The error is still correct (it's a true positive), but the message is less helpful. Since both old and new are flagging the same real bug, this remains a true positive, but with a meaningfully worse error message that could hinder developer understanding.
Attribution: The change in pyrefly/lib/solver/subset.rs in the is_subset_params/is_subset_eq ordering (checking returns before parameters) changed how type variables are resolved during callable subtyping. This reordering causes the ParamSpec in _WrappedContextManagerKeywordDecorator.__call__ to not be properly solved before being displayed in the error message, resulting in ParamSpec(@_) appearing instead of the concrete () parameter list.

mypy (+13, -9)

  • bad-argument-type on classmethod: 7 new errors replacing 7 old errors with different @_ positions. Both old and new are inference failures when matching classmethod callables against Callable[Concatenate[type[_T], _P], _R_co]. The PR's return-before-params order change shifted where @_ appears but didn't fix the underlying issue. These remain false positives.
  • invalid-overload on dict.fromkeys: 2 new errors replacing 2 old errors. Previously classmethod[Unknown, Ellipsis, Unknown], now classmethod[Unknown, Ellipsis, dict[_T, Any | None]]. The return type resolves better but the result is still incorrectly flagged as not callable. False positive.
  • bad-override on OrderedDict.__or__: 4 entirely new false positive errors. OrderedDict.__or__ and __ror__ override dict.__or__ and __ror__ — this is standard typeshed practice. Neither mypy nor pyright flags these. The PR's changed inference order likely causes different type variable resolution that now triggers spurious override checks.

Overall: This is a net regression. The PR changed callable subtyping to check return types before parameters (pyrefly/lib/solver/subset.rs). While this fixes the specific reduce(gcd, ...) case, it introduces 4 entirely new false positive bad-override errors on OrderedDict.__or__ and shifts 7 bad-argument-type + 2 invalid-overload errors from one form of @_/Unknown inference failure to another. The net effect is +4 new errors (13 added - 9 removed). All 13 new errors are pyrefly-only (neither mypy nor pyright flags them). The @_ types in the error messages confirm these are type inference failures, not real bugs in typeshed. The bad-override errors on OrderedDict.__or__ overriding dict.__or__ are false positives — OrderedDict legitimately narrows the return type, and both mypy and pyright accept this.

Attribution: The change to is_subset_eq ordering in pyrefly/lib/solver/subset.rs (checking return types before parameters in callable subtyping) directly caused the shift in bad-argument-type and invalid-overload errors. By checking self.is_subset_eq(&l.ret, &u.ret) before self.is_subset_params(&l.params, &u.params), the return type now resolves to a concrete type first, but this changes how type variables are bound during the parameter check, causing different (but still incorrect) inference failures. The 4 new bad-override errors on OrderedDict are likely a cascade effect of the changed inference order affecting how dict.__or__ return types are compared.
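A runtime check of the override flagged as a false positive: OrderedDict defines __or__/__ror__ precisely to narrow dict's return type to OrderedDict (Python 3.9+):

```python
# OrderedDict.__or__ returns an OrderedDict, narrowing dict.__or__'s return —
# a legitimate covariant override, not a bad-override.
from collections import OrderedDict

a = OrderedDict(x=1)
b = OrderedDict(y=2)
merged = a | b
assert type(merged) is OrderedDict
assert merged == {"x": 1, "y": 2}
```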

meson (+2, -1)

  • bad-override reclassification: Same override issue at same location, error code changed from bad-param-name-override to bad-override. Neutral change.
  • false positive on shutil.copyfile as copy_function: shutil.copyfile is a standard, documented argument for shutil.copytree's copy_function parameter. The callable is compatible: (1) parameters are contravariant — copyfile accepts StrOrBytesPath which includes str; (2) when _StrOrBytesPathT binds to str, the return type str is covariant with object; (3) the extra keyword-only follow_symlinks has a default. This is a false positive likely caused by pyrefly's difficulty resolving generic/TypeVar-parameterized callable signatures against concrete callable types.

Overall: The bad-override replacing bad-param-name-override is neutral (same issue, different error code). The new bad-argument-type on shutil.copyfile passed to shutil.copytree is a false positive — shutil.copyfile is a well-known, documented Python pattern for shutil.copytree's copy_function parameter. When checking callable compatibility: (1) parameters are contravariant — shutil.copyfile accepts StrOrBytesPath which includes str, so it accepts the str arguments the caller will provide; (2) the return type, when _StrOrBytesPathT is bound to str, is str which is covariant with object; (3) the extra keyword-only follow_symlinks parameter has a default and doesn't prevent calling with just two positional args. The likely root cause is pyrefly's difficulty resolving generic/TypeVar-parameterized callable signatures when checking assignability against a concrete callable type.

Attribution: The change in pyrefly/lib/solver/subset.rs in is_subset_eq/is_subset_params reordered callable subtyping to check return types before parameters. This caused the new false positive on shutil.copyfile being passed to shutil.copytree — the reordering changes how type variables like _StrOrBytesPathT are resolved during the subset check, causing the callable to appear incompatible when it actually is compatible.
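A runtime check of the documented pattern pyrefly now flags — shutil.copyfile as the copy_function for shutil.copytree:

```python
# shutil.copytree accepts any (src, dst) copy callable; shutil.copyfile is
# the standard-library example for this parameter and works at runtime.
import os
import shutil
import tempfile

src = tempfile.mkdtemp()
with open(os.path.join(src, "a.txt"), "w") as f:
    f.write("data")

dst = os.path.join(tempfile.mkdtemp(), "tree")  # must not exist yet
shutil.copytree(src, dst, copy_function=shutil.copyfile)

with open(os.path.join(dst, "a.txt")) as f:
    assert f.read() == "data"
```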

scipy-stubs (+1)

This is a type stubs project (scipy-stubs) where the test asserts that sparse.block_diag([any_arr, any_mat]) should return coo_matrix[ScalarType] | coo_array[ScalarType, tuple[int, int]] (where ScalarType resolves to floating[_32Bit] in this context based on the types of any_arr and any_mat). Pyrefly now infers coo_array[_Numeric, tuple[int, int]] | coo_matrix instead — the scalar type parameter degraded from the specific scalar type to _Numeric (a broader type), and coo_matrix lost its type parameter entirely. This is a type inference regression where type variable resolution produces less precise types for scipy sparse matrix operations. The change in callable subtyping (reordering return-type vs parameter checks in subset.rs) was intended to fix reduce(gcd, ...) returning SupportsIndex instead of int, but it has a side effect here where overload resolution or type variable solving yields less specific results. The stubs are correct — the source code has no # type: ignore annotations on this line, indicating other type checkers accept it.
Attribution: The change in pyrefly/lib/solver/subset.rs reorders callable subtyping to check return types before parameters. This changes how type variables are resolved during callable subtyping, which can cascade into different type inference results for complex generic functions. The block_diag function likely involves overloaded or generic callable signatures where the new return-before-params order causes a type variable to resolve differently, producing coo_array[_Numeric, tuple[int, int]] | coo_matrix instead of the expected coo_array[ScalarType, tuple[int, int]] | coo_matrix[ScalarType]. The key difference is: (1) _Numeric instead of ScalarType for the scalar type parameter of coo_array, and (2) coo_matrix without a type parameter instead of coo_matrix[ScalarType]. This indicates the type variable resolution changed in a way that loses precision — _Numeric is likely a broader bound rather than the specific ScalarType that should be inferred, and coo_matrix lost its type parameter entirely.

jax (+1)

This is a false positive. The DynamicGridDim type leaks into num_iterations because pyrefly cannot narrow the element type of grid through the a is not pallas_core.dynamic_grid_dim identity check in the tuple comprehension (lines 1799-1803). At runtime, any dynamic_grid_dim sentinel in grid_mapping.grid is replaced by an actual Array from dynamic_grid_args_iter, but pyrefly infers the element type as int | DynamicGridDim | Array. When functools.reduce(jnp.multiply, grid) is called on line 1972, the result type includes DynamicGridDim (and possibly Unknown from incomplete inference of reduce's return type). This DynamicGridDim | int | Unknown type for num_iterations is then passed to jnp.minimum on line 2032, where DynamicGridDim is not assignable to the expected numeric parameter type. The code works correctly at runtime because the is not check ensures DynamicGridDim never appears in grid when the if grid: branch is taken.
Attribution: The change to is_subset_eq ordering in pyrefly/lib/solver/subset.rs (checking return types before parameters for callable subtyping) likely changed how functools.reduce's return type is inferred. By resolving the return type first, the type variable binding for reduce's output may now pick up DynamicGridDim from the input iterable type rather than resolving to a concrete numeric type, causing the DynamicGridDim | int | Unknown union to appear where it previously resolved more precisely.
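A toy sketch (sentinel and helper names invented) of the narrowing pattern the analysis describes: an `is not` identity check removes the sentinel at runtime, but a checker may keep it in the comprehension's inferred element type:

```python
# Stand-in for pallas_core.dynamic_grid_dim: the comprehension replaces every
# sentinel with a concrete value, so no sentinel survives at runtime even
# though the unrefined element type is int | <sentinel>.
_DYNAMIC = object()

def resolve_grid(grid, dynamic_args):
    it = iter(dynamic_args)
    return tuple(a if a is not _DYNAMIC else next(it) for a in grid)

grid = (2, _DYNAMIC, 3)
resolved = resolve_grid(grid, [5])
assert resolved == (2, 5, 3)
assert all(isinstance(a, int) for a in resolved)
```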

✅ Improvement (6)

Tanjun (-9)

The 9 removed bad-return errors were all false positives. In functions like with_dm_check, the return type is _CallbackReturnT[_CommandT] which is a type alias defined as _CommandT | collections.Callable[[_CommandT], _CommandT]. The helper _optional_kwargs has a return type annotation of _CommandT | collections.Callable[[_CommandT], _CommandT], which is exactly the same type. Both the caller and callee use the same _CommandT TypeVar (bound to tanjun.ExecutableCommand[typing.Any]), so the return type should be compatible.

Pyrefly was incorrectly reporting that ((ExecutableCommand[Any]) -> ExecutableCommand[Any]) | ExecutableCommand[Any] is not assignable to ((_CommandT) -> _CommandT) | _CommandT. The issue was in pyrefly's type variable solving during union subtype checking — when checking whether the concrete union type (with ExecutableCommand[Any] substituted) was assignable to the generic union type (with _CommandT), the solver was failing to properly unify the TypeVar across the union members. The PR fix in pyrefly/lib/solver/subset.rs addresses this by adjusting the order of subtype checking for callable types, which allows the TypeVar to be properly constrained. Removing these false positives is an improvement, as the code is correctly typed.

Attribution: The change to is_subset_eq ordering in pyrefly/lib/solver/subset.rs (lines ~1487-1494) swapped the order of callable subtyping checks: return types are now checked before parameters. This allows quantified type variables like _CommandT to pick up their covariant lower bound from the return type before the contravariant parameter check, preventing premature pinning to a wider type. This directly fixed the false positive bad-return errors where _CommandT was being resolved to ExecutableCommand[Any] instead of staying as the proper type variable.

streamlit (-1)

The removed error was a false positive. fragment() at line 443 returns _fragment(func, run_every=run_every). Both _fragment (line 136-141) and fragment (line 300-304) share the same return type Callable[[F], F] | F using the same TypeVar F = TypeVar('F', bound=Callable[..., Any]) defined at line 42. They also share compatible parameter signatures for func: F | None and run_every. When fragment calls _fragment with the same arguments, the TypeVar F should flow through correctly, and the return type of _fragment should be directly assignable to the declared return type of fragment. Pyrefly was incorrectly expanding the TypeVar F to its bound Callable[..., Any] when analyzing the return type of _fragment, producing the concrete type (((...) -> Any) -> (...) -> Any) | ((...) -> Any) instead of preserving it as Callable[[F], F] | F. This expansion caused a spurious type mismatch. The fix corrects the TypeVar solving behavior so that the TypeVar is properly preserved through the call, removing the false positive.
Attribution: The change in pyrefly/lib/solver/subset.rs in the callable subtyping logic (is_subset_eq for return types is now checked before is_subset_params for parameters) fixed this. The comment in the diff explains: 'Check returns before parameters so quantified vars in the target callable pick up their covariant lower bounds before contravariant parameter checks.' This reordering allows TypeVar F to be properly solved from the return type first, which then makes the parameter check succeed. Previously, the parameter check was done first, which could pin F to a looser bound (like Callable[..., Any] expanded), causing the subsequent return type check to fail.
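A toy reconstruction (bodies invented) of the decorator pattern described above: both functions share the same TypeVar F, so _fragment's return type should be directly assignable to fragment's declared return type:

```python
# Two wrappers sharing TypeVar F; the outer one simply forwards to the inner
# one, so F must flow through the call unchanged for the return to type-check.
from typing import Any, Callable, Optional, TypeVar, Union

F = TypeVar("F", bound=Callable[..., Any])

def _fragment(func: Optional[F] = None, *, run_every: Any = None) -> Union[Callable[[F], F], F]:
    if func is None:
        return lambda f: f  # used as @fragment(run_every=...)
    return func             # used as bare @fragment

def fragment(func: Optional[F] = None, *, run_every: Any = None) -> Union[Callable[[F], F], F]:
    return _fragment(func, run_every=run_every)

def greet() -> str:
    return "hi"

assert fragment(greet) is greet
```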

freqtrade (+30)

  • unsupported-operation on pandas Series arithmetic: 18 false positives. Pandas Series division/multiplication by float is valid. The error messages show Series[str], which is an incorrect type inference — these are numeric Series. The callable subtyping reorder in subset.rs caused pyrefly to resolve pandas operator overloads incorrectly, picking up string-typed Series instead of numeric.
  • bad-return with Series types leaking into float returns: 10 false positives. Functions like calculate_expectancy return float values that were computed via pandas operations. Pyrefly now incorrectly widens the return type to include Series variants due to the changed type variable resolution order in callable subtyping.
  • bad-argument-type with Series | float for float parameter: 2 false positives. Same root cause — pyrefly infers Series | float for values that are actually float, then flags them when passed to functions expecting float.

Overall: All 30 new errors are pyrefly-only false positives caused by the callable subtyping reorder in subset.rs. The pattern is clear:

  1. unsupported-operation (18 errors): These claim operations like Series[str] / float are unsupported. Looking at the code, e.g., trades['profit_abs'] / starting_balance at line 411 of metrics.py, trades['profit_abs'] returns a pandas Series (from DataFrame column access), and dividing by starting_balance: float is perfectly valid pandas arithmetic. The Series[str] type in the error message is wrong — the Series contains numeric data. This appears to be an inference regression where the callable subtyping reorder causes pyrefly to pick up the wrong type for the Series.

  2. bad-return (10 errors): These claim return types like tuple[Series[float] | Series | float, Series[float] | Series | float] don't match tuple[float, float]. Looking at calculate_expectancy (line 333), expectancy and expectancy_ratio are initialized as float (0.0 and 100.0), and within the if block they're reassigned via arithmetic that involves pandas Series operations. But the variables are always float at the return point — the pandas operations produce scalar results that get assigned back. The type widening to include Series types is an inference error.

  3. bad-argument-type (2 errors): These claim Series | float is passed where float is expected. Same root cause — pyrefly is now incorrectly widening types to include Series.

The reordering of return-type-before-parameters in callable subtyping changed how type variables are resolved during pandas operator overload resolution, causing incorrect type inference for pandas arithmetic operations.

Attribution: The change in pyrefly/lib/solver/subset.rs reversed the order of callable subtyping checks — now checking return types before parameters. This reordering was intended to fix reduce(gcd, ...) returning SupportsIndex instead of int, but it appears to have caused a regression in how pandas Series types are resolved. When pandas operations like trades['profit_abs'] / starting_balance are checked, the Series division operator's callable signature is now being resolved differently, causing pyrefly to infer Series[str] instead of a numeric Series type, which then cascades into unsupported-operation errors (e.g., / is not supported between Series[str] and float) and bad-return errors (returning Series | float where float is expected).

pip (-1)

This is a clear improvement. The removed error was a false positive caused by incorrect type variable inference during callable subtyping. math.gcd has signature (*integers: SupportsIndex) -> int in typeshed. When reduce resolves its TypeVar _T (from Callable[[_T, _T], _T]) via gcd, there is a conflict: the return type of gcd is int, but the parameter type is SupportsIndex. Previously, pyrefly was resolving _T from the parameter types first, binding _T = SupportsIndex, which produced the incorrect inferred return type SupportsIndex for the reduce call. Combined with the or 1 expression (which contributes Literal[1]) and the except branch (also Literal[1]), this yielded the union type Literal[1] | SupportsIndex, which is not assignable to the declared return type int. The PR fixes this by adjusting the inference order in callable subtyping so that the return type binding (_T = int) takes precedence, which correctly infers the reduce call as returning int. This is a more correct inference strategy given the covariant (return) and contravariant (parameter) positions in callable types.
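The pattern the removed pip error was reported on can be sketched with a small, hypothetical function combining `reduce(gcd, ...)` with the `or 1` fallback and the `except` branch described above:

```python
from functools import reduce
from math import gcd

def common_period(values: list[int]) -> int:
    # With return-first solving, reduce(gcd, values) is inferred as int,
    # so the union with the two Literal[1] branches stays assignable to
    # the declared return type int.
    try:
        return reduce(gcd, values) or 1
    except TypeError:  # reduce() of an empty iterable with no initial value
        return 1

assert common_period([12, 18, 24]) == 6
assert common_period([]) == 1      # except branch
assert common_period([0, 0]) == 1  # gcd(0, 0) == 0, so `or 1` applies
```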
Attribution: The fix is in pyrefly/lib/solver/subset.rs in the callable subtyping check. The change reorders is_subset_eq (return type check) before is_subset_params (parameter check). This ensures that when checking math.gcd <: Callable[[T, T], T], the return type int binds T to int first, rather than the parameter type SupportsIndex prematurely pinning T to SupportsIndex. The test case test_reduce_gcd_returns_int in pyrefly/lib/test/calls.rs directly tests this fix.

rich (-1)

This is a clear improvement. The removed error was a false positive caused by pyrefly's type variable solving order during callable subtyping. reduce(gcd, [list of ints]) should return int because math.gcd returns int. The old behavior incorrectly inferred SupportsIndex (a parameter type of gcd) as the result type, leading to a spurious bad-return error. The PR fix in subset.rs correctly prioritizes the return type during callable subtyping, matching the behavior of mypy and pyright.
Attribution: The key change is in pyrefly/lib/solver/subset.rs in the callable subtyping logic. The order was changed from checking parameters first (is_subset_params then is_subset_eq for return) to checking return type first (is_subset_eq for return then is_subset_params). This ensures that when matching math.gcd against Callable[[T, T], T], the type variable T gets bound to int (from the return type) before the parameter check, rather than getting bound to SupportsIndex (from the parameter types). The test case test_reduce_gcd_returns_int in pyrefly/lib/test/calls.rs was added to verify this exact scenario.

spark (+31, -4)

no-matching-overload on pandas Series methods: 8 new errors where pser.mean(), pser.var(), pser.var(ddof=0) etc. on boolean Series fail overload resolution. These are valid pandas operations (boolean Series supports these statistical methods). The changed callable subtyping order causes type variables in pandas-stubs overloads to resolve incorrectly. All pyrefly-only — false positives (regression).
unsupported-operation += between int and Series[str]: 12 new errors in UDF test files where += is flagged between int and Series[str]. The Series[str] type appearing in numeric aggregation contexts is a clear inference failure caused by the callable subtyping order change. All pyrefly-only — false positives (regression).
bad-return Series | int not assignable to int: 8 new errors where aggregation UDF return types are inferred as Series | int instead of int. The broader type variable resolution from the callable subtyping change causes the return type to include Series when it shouldn't. All pyrefly-only — false positives (regression).
bad-argument-type with broadened _SupportsAdd: 3 new errors in test_cumulative.py where Series.sum() self parameter type now includes _SupportsAdd[Series[str] | Series | float] instead of _SupportsAdd[float]. The type variable picks up too-broad bounds. These replace 2 removed errors that had the same root cause but narrower (also incorrect) bounds. Net: 1 more false positive, and the error messages are worse (regression).
removed bad-argument-type errors: 4 removed errors: 2 in test_cumulative.py (replaced by 3 new errors with worse inference), and likely 2 in types.py. The removed errors were themselves false positives (valid pandas operations), so their removal is positive, but the net effect is negative since more false positives were added than removed.

Overall: The analysis is factually accurate. Let me verify the key claims:

  1. no-matching-overload on Series.mean()/var(): The code at line 195 creates pser = pd.Series([True, False, True]) — a boolean Series. Lines 204, 206, 207 call pser.mean(), pser.var(), pser.var(ddof=0). These are valid pandas operations on boolean Series (bool is a subtype of int in Python, and pandas treats boolean Series as numeric for statistical operations). The errors are correctly identified as false positives.

  2. unsupported-operation += between int and Series[str]: The analysis correctly identifies that Series[str] appearing in numeric aggregation contexts is a type inference failure. In UDF test files, aggregation functions return scalar int values, not Series[str].

  3. bad-return Series | int not assignable to int: The analysis correctly identifies that aggregation UDF return types should be int, not Series | int. The union type inference is a regression.

  4. bad-argument-type with _SupportsAdd: Looking at test_cumulative.py lines 31 and 44, pser.cummin().sum() and pser.cummax().sum() are called on pd.Series([1.0, None, 0.0, 4.0, 9.0]). cummin()/cummax() return a Series[float | None], and .sum() on that is valid pandas code. The removed errors had _SupportsAdd[float] while new errors have _SupportsAdd[Series[str] | Series | float] — the analysis correctly identifies this as worse type inference with broader incorrect bounds. The count of 2 removed errors in test_cumulative.py replaced by 3 new errors is consistent with the error data (2 removed bad-argument-type in test_cumulative.py, 3 new bad-argument-type in test_cumulative.py).

  5. Net effect calculation: 31 added - 4 removed = 27 net new errors. The analysis correctly states this.

  6. All errors being pyrefly-only: The cross-check data confirms 0/31 new errors appear in mypy or pyright, supporting the false positive classification.

The analysis accurately describes the code behavior, type system facts, and the nature of the regressions.
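The type-system fact behind point 1 — that boolean data is legitimately numeric — is plain Python and needs no pandas to demonstrate:

```python
# bool is a subclass of int, so boolean values participate directly in
# numeric aggregation; this is the basis for pandas treating boolean
# Series as numeric for mean()/var() and similar statistics.
assert issubclass(bool, int)

data = [True, False, True]
assert sum(data) == 2
mean = sum(data) / len(data)
assert abs(mean - 2 / 3) < 1e-9
```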

Attribution: The change to is_subset_eq and is_subset_params ordering in pyrefly/lib/solver/subset.rs (the Subset::is_subset_eq method for callable types) reversed the order of return type vs parameter checking. This causes type variables to be resolved differently — the return type now constrains type variables first, which in some cases (like the reduce(gcd, ...) fix) gives better results, but in many other cases causes type variables to pick up broader/incorrect bounds. This is why pandas Series.mean(), Series.var(), Series.sum() etc. now fail — the overloaded methods in pandas-stubs resolve type variables differently, leading to no-matching-overload, unsupported-operation, and bad-return errors that didn't exist before.

➖ Neutral (20)

anyio (+10, -10)

This is a net-neutral change in terms of error count (10 added, 10 removed), and examining the content shows both old and new errors are false positives. The @_ in the error messages (e.g., (**tuple[*@_]) -> Awaitable[T_Retval]) indicates pyrefly cannot resolve the TypeVarTuple PosArgsT during callable subtyping. The code is correct — to_thread.run_sync and AsyncBackend.run are called with proper arguments. Neither mypy nor pyright flags any of these locations. The key difference between old and new errors is that the return type in the expected parameter type is now partially resolved (Awaitable[T_Retval] in new errors vs Awaitable[@_] in old errors), while the TypeVarTuple parameter inference remains broken in both cases. The PR's changes improved return type inference but didn't fix the TypeVarTuple parameter inference, leaving the same number of false positive errors with slightly different messages.
Attribution: The change in pyrefly/lib/solver/subset.rs in the is_subset_eq/is_subset_params block reorders callable subtyping to check return types before parameters. This causes the return type variable T_Retval to be resolved first (visible in the new error messages where Awaitable[T_Retval] appears instead of Awaitable[@_]), but the parameter-side PosArgsT TypeVarTuple still fails to resolve (shown as @_ in tuple[*@_]). The net effect is that the same inference failures persist but with slightly different error messages — the return type is now resolved but parameters still aren't.

more-itertools (+1, -1)

Same errors at same locations with same error kinds — message wording changed, no behavioral impact.

setuptools (+1, -1)

Same errors at same locations with same error kinds — message wording changed, no behavioral impact.

pwndbg (+2, -2)

This is a neutral change. Both errors existed before and after — they just changed error codes or slightly refined type descriptions. The bad-param-name-override → bad-override change is a reclassification of the same real issue (parameter named op instead of operand). The bad-argument-type changed its expected type description slightly but still reports the same fundamental decorator type mismatch. Net error count is unchanged (2 added, 2 removed, same locations).
Attribution: The change to is_subset_eq ordering in pyrefly/lib/solver/subset.rs (checking return types before parameters in callable subtyping) caused the slight change in inferred expected types for the decorator in onegadget.py. The tuple-related changes in pyrefly/lib/alt/class/tuple.rs and pyrefly/lib/alt/expr.rs (renaming class_overrides_tuple_getitem to tuple_uses_builtin_getitem with inverted logic) are unrelated to these errors.

django-stubs (+1, -1)

This is a neutral change in terms of error count — one error is removed (bad-param-name-override) and one error is added (bad-override) at the same location (line 35, get_transform). Both errors flag the same method override of JSONField.get_transform over Field.get_transform. The error code changed from the more specific bad-param-name-override to the more general bad-override, which suggests the checker's override validation logic now reports this inconsistency under a different (broader) category. The line already has a # type: ignore[override] comment, but this is a mypy-style suppression and does not necessarily suppress pyrefly error codes. The net effect on the codebase is zero new issues to address: the same override inconsistency at the same location is being reported, just under a different error code.
Attribution: The change in pyrefly/lib/solver/subset.rs in the is_subset_params/is_subset_eq block reorders callable subtyping to check return types before parameters. This changes which aspect of the override mismatch is detected first, causing the error code to shift from bad-param-name-override to the more general bad-override. The return type check now runs first, and since JSONField.get_transform returns type[Transform] | KeyTransformFactory (wider than the parent's return type), the general override check fires before the parameter name check.
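The general rule the checker applies here — an override's return type must be at least as narrow as the parent's — can be illustrated with a minimal, hypothetical class pair unrelated to django-stubs:

```python
class Base:
    def get(self) -> int:
        return 1

class Child(Base):
    # Widening the return type breaks substitutability: code typed
    # against Base expects an int but may receive something else.
    # A checker reports this as an override incompatibility.
    def get(self) -> object:
        return "not an int"

def use(b: Base) -> int:
    return b.get()  # the Base annotation promises int

# At runtime the promise is broken when a Child sneaks in:
result = use(Child())
assert result == "not an int"
assert not isinstance(result, int)
```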

schemathesis (+8, -8)

Same errors at same locations with same error kinds — message wording changed, no behavioral impact.

tornado (+1, -1)

This is a neutral change — the same real override inconsistency at line 1651 is still being flagged, just with a more general error code (bad-override instead of the more specific bad-param-name-override). The underlying issue is the same: MyStaticFileHandler.get_content overrides StaticFileHandler.get_content inconsistently. The method is decorated with @classmethod but uses self as the first parameter instead of cls, and the parameter path may differ from the parent's parameter name abspath. The old error code bad-param-name-override was a specific sub-category for parameter name mismatches, while the new bad-override is a more general override inconsistency error that subsumes it. The error detection capability is preserved; the checker has consolidated the specific error category into a broader one.
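A stripped-down reproduction of the override shape described above (the class and parameter names mirror the tornado report; the bodies are placeholders):

```python
from typing import Optional

class StaticFileHandler:
    @classmethod
    def get_content(cls, abspath: str, start: Optional[int] = None,
                    end: Optional[int] = None) -> bytes:
        return b""

class MyStaticFileHandler(StaticFileHandler):
    @classmethod
    def get_content(self, path: str, start: Optional[int] = None,
                    end: Optional[int] = None) -> bytes:
        # "self" here is actually the class (classmethod binding), so the
        # code runs fine; checkers flag the first-arg name and the
        # renamed positional parameter as an override inconsistency.
        return b"data"

assert MyStaticFileHandler.get_content("/x") == b"data"
assert MyStaticFileHandler().get_content("/x") == b"data"
```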
Attribution: The change in pyrefly/lib/solver/subset.rs reordered callable subtype checking to check return types before parameters. This reordering likely changed which specific override violation is detected first for this method, causing the error code to shift from bad-param-name-override (a specific sub-category) to the more general bad-override. The tuple-related changes in tuple.rs and expr.rs are unrelated to this specific error.

openlibrary (+1, -1)

This is a minor improvement — the new error is more informative than the old one, though both remain false positives. The code map(ImportItem, result) is valid Python. ImportItem inherits from web.storage (which inherits from dict) and does not define its own __init__, so it uses web.storage's constructor which accepts a mapping as its first positional argument. Each element from result would be passed as that mapping argument, making this valid at runtime.

Both the old error (with (@_) -> @_) and the new error (with (@_) -> ImportItem) are false positives caused by pyrefly's inability to properly match a class constructor against map's generic callable parameter. The @_ in the parameter position indicates an unresolved type variable — a type inference limitation, not a real bug.

However, the new error represents a genuine improvement: the return type is now correctly resolved to ImportItem (shown as (@_) -> ImportItem) whereas before both the parameter and return type were unresolved ((@_) -> @_). The PR's changes improved return type resolution in the generic inference, making the error message more informative even though the underlying parameter-matching false positive persists. Pyright also flags this (as noted in the error annotation [pyright: yes]), suggesting this is a known difficulty for type checkers when dealing with classes whose constructors have complex or dynamically-typed signatures inherited from dict-like base classes.
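Why the call is valid at runtime can be shown with a minimal stand-in for web.storage (Storage below is hypothetical; the real class adds attribute access, which is irrelevant to the constructor):

```python
class Storage(dict):
    """Minimal stand-in for web.storage: a dict subclass whose
    constructor accepts a mapping, inherited from dict."""

class ImportItem(Storage):
    pass  # defines no __init__ of its own, like the real class

# map() passes each row as the single positional (mapping) argument
# to the inherited dict constructor, which is valid at runtime.
rows = [{"id": 1}, {"id": 2}]
items = list(map(ImportItem, rows))
assert all(isinstance(i, ImportItem) for i in items)
assert items[0]["id"] == 1
```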

Attribution: The change in pyrefly/lib/solver/subset.rs in the is_subset_eq function reordered callable subtyping to check return types before parameters. This caused the type variable in map.__new__'s func parameter to resolve the return type first (now showing ImportItem instead of @_ for the return), but the parameter side still fails to resolve (still @_). The error message changed but the false positive persists.

egglog-python (+2, -2)

Same errors at same locations with same error kinds — message wording changed, no behavioral impact.

antidote (+1, -1)

This is a purely cosmetic change in the error message. Both the old and new versions report the same error (bad-argument-type) at the same location (line 549). The only difference is that the expected callable type's return type changed from @_ (unresolved type variable) to object (resolved). The new message is arguably slightly better because it shows a concrete type (object) instead of an unresolved type variable (@_), making the error message more informative. The error itself is correct in both cases — no_args is () -> object which doesn't match a method signature requiring a self parameter, and the test expects this to raise TypeError.
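The runtime behavior the antidote test relies on — a zero-argument function failing when used as a method — is easy to reproduce (Service and the attachment below are hypothetical, standing in for the library's wiring):

```python
class Service:
    pass

def no_args() -> object:
    return object()

# A function attached to a class becomes a method via the descriptor
# protocol: the bound call passes the instance as an implicit first
# argument, which no_args cannot accept, so Python raises TypeError —
# exactly what the test at line 549 expects.
Service.method = no_args
try:
    Service().method()
    raised = False
except TypeError:
    raised = True
assert raised
```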
Attribution: The change in pyrefly/lib/solver/subset.rs in the is_subset_eq function reordered callable subtyping to check return types before parameters. This caused the return type variable to be resolved differently — the target callable's return type variable @_ now gets bound to object (from the covariant return check) before the parameter check, resulting in the displayed expected type changing from (Any, ParamSpec(@_)) -> @_ to (Any, ParamSpec(@_)) -> object.

cloud-init (+1, -1)

This is a neutral change in error categorization. The same override inconsistency at DataSourceEc2.device_name_to_device (line 426) is being flagged both before and after the PR. The only difference is the error code changed from bad-param-name-override to bad-override. The underlying issue — that the child class method signature is inconsistent with the parent DataSource class — is still being caught. Pyright also flags this as an override issue, confirming it's a real problem. The net effect is zero: one error removed, one error added, same location, same semantic meaning.
Attribution: The change in pyrefly/lib/solver/subset.rs reordered callable subtyping to check return types before parameters. This likely changed the order in which the override inconsistency is detected — previously the parameter name mismatch was found first (yielding bad-param-name-override), but now the return type check runs first and a different aspect of the override inconsistency is detected first (yielding the more general bad-override). The tuple-related changes in tuple.rs and expr.rs are unrelated to this specific error.

bandersnatch (+1, -1)

This is essentially a neutral change — one false positive was replaced by a slightly different false positive. Both the old and new errors flag path.exists as incompatible with run_in_executor's func parameter, which is incorrect (the code works fine at runtime and neither mypy nor pyright flag it). The new error has a marginally better return type (bool instead of @_) but still contains @_ in the parameter type, indicating ongoing inference failure. The core issue — pyrefly incorrectly rejecting a bound method passed to run_in_executor — remains unchanged.
Attribution: The change in pyrefly/lib/solver/subset.rs in the is_subset_eq/is_subset_params ordering (checking return types before parameters for callable subtyping) caused the return type to resolve from @_ to bool. This is why the error message changed from -> @_ to -> bool. However, the parameter portion still contains @_ types ((**tuple[*@_])), indicating the inference failure persists for the parameter side. The tuple.rs changes are unrelated to this error.

pandas (+4, -4)

bad-argument-type (new, Generator[object | Unknown]): The old error already reported Generator[object] not assignable to Iterable[_Token] in expr.py — this was already a pre-existing false positive. The f parameter is created via _compose which uses reduce(_compose2, funcs) where _compose2 returns a lambda — the type checker cannot track the composed return type through this indirection. The PR changed the inferred type from Generator[object] to Generator[object | Unknown] (slightly worse) and added a new similar error in ops.py. Since the core false positive was pre-existing and only the spread to ops.py is new, this is a minor regression.
bad-override replacing bad-param-name-override: Same location, different error code. The override issue on _convert_bool_result was flagged before (as param name mismatch) and is still flagged (as generic override inconsistency). Neutral change in error classification.
bad-return (changed message): Same error at same location with reordered union members and slightly different type inference (tuple[Unknown, None] became tuple[dtype | Unknown, None]). The underlying issue (wrapper type doesn't match Callable[[F], F]) is unchanged. Neutral.
Removed no-matching-overload: The removed no-matching-overload on find_common_type with Unknown types in the argument was likely a false positive from incomplete inference. Its removal is an improvement.

Overall: This is a mixed bag that's roughly neutral. Let me analyze each category:

Changed bad-argument-type errors (1→2): The old error in expr.py:172 already reported Generator[object] not assignable to Iterable[_Token] — this was already a false positive since the type checker couldn't track the return type through the _compose/_compose2 lambda chain. The new error changes this to Generator[object | Unknown] and adds a similar error in ops.py. The f parameter is created via _compose which uses reduce(_compose2, funcs) where _compose2 returns a lambda — the type checker cannot track the composed return type through this indirection, which is why it infers object. The addition of Unknown and the spread to ops.py represents a slight degradation in inference, but the core issue (false positive due to inability to track composed callable types) was pre-existing. This is a minor regression.

New bad-override on ArrowStringArray._convert_bool_result: This replaces the previous bad-param-name-override. Both flag the same location and same method. The change from a specific bad-param-name-override to a generic bad-override indicates the subtyping order change now detects a different inconsistency first. This is neutral — same issue reported under a different error code.

Changed bad-return on align.py:86: The old error reported tuple[partial[Unknown] | type[NDFrame], dict[str, Index] | None] | tuple[Unknown, None] | Unknown while the new error reports tuple[dtype | Unknown, None] | tuple[partial[Unknown] | type[NDFrame], dict[str, Index] | None] | Unknown. The union members are reordered and the tuple[Unknown, None] became tuple[dtype | Unknown, None] (slightly more specific inference for the result_type_many return). Both flag the same real issue — the _filter_special_cases decorator returns a wrapper whose type doesn't match the declared Callable[[F], F] return. Neutral.

Removed no-matching-overload: The removal of the overload error in common.py with Unknown types in the argument list suggests improved type inference — likely a false positive that's now resolved. Improvement.

Overall: 1 improvement (removed false positive), 1 minor regression (additional false positive in ops.py and slightly worse inference with Unknown), and 2 neutral changes (error code/message changes). The net effect is roughly neutral.

Attribution: The change in pyrefly/lib/solver/subset.rs to is_subset_eq (checking return types before parameters in callable subtyping) is the root cause. By checking returns first, type variables pick up covariant bounds from return types before contravariant parameter checks. This changes how Unknown propagates through callable type inference, which explains:

  1. The Generator[object] → Generator[object | Unknown] change in expr.py (type inference resolves differently)
  2. The removal of no-matching-overload in common.py (better type variable resolution)
  3. The bad-param-name-override → bad-override change (different subtyping path triggers different error kind)
  4. The new bad-argument-type in ops.py (new false positive from changed inference)

The tuple-related changes in pyrefly/lib/alt/class/tuple.rs and pyrefly/lib/alt/expr.rs (renaming class_overrides_tuple_getitem to tuple_uses_builtin_getitem) don't appear to affect these pandas errors.

werkzeug (+1, -1)

Same errors at same locations with same error kinds — message wording changed, no behavioral impact.

core (+8, -8)

This is a neutral change. Both the old and new errors are false positives — they flag valid code where Path.exists / Path.is_file are passed to async_add_executor_job. The PR changed the order of callable subtype checking (returns before parameters), which altered the error message (the return type resolved from @_ to bool) but didn't fix or worsen the underlying inference failure with the variadic parameter type **tuple[*@_]. The @_ in the parameter type indicates an unresolved type variable — a type inference limitation. 8 false positives were removed and 8 equivalent false positives were added, with slightly different error messages. The net effect on correctness is zero.
Attribution: The change in pyrefly/lib/solver/subset.rs in the is_subset_eq function reordered callable subtyping to check return types before parameters. This caused the return type variable to resolve to bool (changing @_ to bool in the error message) but didn't fix the underlying parameter inference issue with async_add_executor_job's variadic type parameter *_Args. The tuple getitem changes in pyrefly/lib/alt/class/tuple.rs and pyrefly/lib/alt/expr.rs are unrelated to these callable errors.

trio (+5, -6)

Most errors at same locations with same error kinds — message wording changed with minor residual noise, no significant behavioral impact.

xarray (+1, -1)

Same errors at same locations with same error kinds — message wording changed, no behavioral impact.

scrapy (+1, -1)

This is a 1-for-1 swap of error codes at the exact same location. The old error bad-param-name-override was removed and replaced with bad-override — both flag the same real issue: CustomPythonOrgPolicy.referrer(self, response, request) overrides ReferrerPolicy.referrer in an inconsistent manner. The net effect is zero: one error removed, one error added, same location, same semantic meaning. This is a neutral change in error classification/wording.
Attribution: The change in pyrefly/lib/solver/subset.rs reordered callable subtyping to check return types before parameters. This likely changed the order in which override inconsistencies are detected, causing the error to be classified as a general bad-override instead of the more specific bad-param-name-override. The PR's core change is in the is_subset_eq ordering for callable types.

attrs (+2, -2)

Same errors at same locations with same error kinds — message wording changed, no behavioral impact.

prefect (+3, -3)

This is a neutral change. 3 errors were removed and 3 were added at the same locations with the same fundamental issue — pyrefly struggles with complex ParamSpec + overload + generic callable subtyping in prefect's Flow/FlowDecorator. The Unknown types in the error messages indicate inference limitations. The errors shifted in their type representations due to the return-before-parameters reordering in subset.rs, but the net effect is zero: same number of false positives, same files, same fundamental cause.
Attribution: The change to is_subset_eq ordering in pyrefly/lib/solver/subset.rs (checking return types before parameters in callable subtyping) changed how type variables are resolved during callable subtype checks. This altered the intermediate types pyrefly computes for ParamSpec-based callables, producing slightly different (but still incorrect) type representations in the error messages. The old errors showed (((ParamSpec(P)) -> R) -> Coroutine[...]) while new errors show ((ParamSpec(P)) -> Coroutine[...]) — different wrapping but same fundamental false positive.

❓ Needs Review (1)

Expression (+3, -3)

LLM classification failed: the Anthropic API returned HTTP 429 (workspace output-token rate limit exceeded), so no analysis was produced. Non-trivial change (3 added, 3 removed) — manual review recommended.

Suggested fixes

Summary: The reordering of callable subtype checking (return types before parameters) in subset.rs fixes reduce(gcd,...) but causes widespread regressions in overload resolution, type variable solving, and callable compatibility checks across 11 projects, with 80+ pyrefly-only false positives.

1. In is_subset_eq() for callable types in pyrefly/lib/solver/subset.rs, instead of unconditionally checking returns before parameters, only check returns first when the upper callable contains quantified type variables that appear in both return and parameter positions. This can be implemented as: check if u has quantified vars that appear in both u.ret and u.params; if so, check returns first (the new behavior for reduce/gcd); otherwise, check parameters first (the old behavior). Pseudo-code: if u has quantified vars in both ret and params { self.is_subset_eq(&l.ret, &u.ret)?; self.is_subset_params(&l.params, &u.params) } else { self.is_subset_params(&l.params, &u.params)?; self.is_subset_eq(&l.ret, &u.ret) }. This preserves the fix for reduce(gcd,...) while restoring the old behavior for overload resolution, override checking, and general callable compatibility.

Files: pyrefly/lib/solver/subset.rs
Confidence: high
Affected projects: pandas-stubs, spark, mypy, freqtrade, steam.py, sockeye, hydpy, meson, scipy-stubs, jax, pytest-robotframework
Fixes: no-matching-overload, assert-type, bad-override, bad-argument-type, unsupported-operation, bad-return, invalid-overload
The root cause of all regressions is the unconditional reordering of return-before-parameters in callable subtyping. The fix was specifically motivated by reduce(gcd, ...) where TypeVar T appears in both return (T) and params ([T, T]) positions, and the return type int should win over the looser SupportsIndex parameter bound. But for most other callable subtyping scenarios (overload matching, override checking, generic callable compatibility), the old parameters-first order worked correctly. By conditionally applying the new order only when quantified vars span both positions, we get the best of both worlds. This would eliminate ~80+ pyrefly-only false positives: 33 in pandas-stubs (no-matching-overload + assert-type on Series.mean/median/std/var), 31 in spark (no-matching-overload, unsupported-operation, bad-return, bad-argument-type on pandas operations), 13 in mypy (bad-override on OrderedDict.or, bad-argument-type on classmethod, invalid-overload on dict.fromkeys), 30 in freqtrade (unsupported-operation, bad-return, bad-argument-type on pandas arithmetic), 1 in steam.py (round() overload resolution), 1 in sockeye (min/max as dict values), 1 in hydpy (DatetimeIndex iteration), 1 in meson (shutil.copyfile compatibility), 1 in scipy-stubs (block_diag type precision), 1 in jax (DynamicGridDim leaking through reduce), and 1 in pytest-robotframework (unsolved ParamSpec in error message).



Classification by primer-classifier (9 heuristic, 29 LLM)
