
Resolve ambiguous overload calls by selecting the most general return type among the matched overloads (#2764)

Open
rchen152 wants to merge 7 commits into facebook:main from rchen152:export-D95667476

Conversation


@rchen152 (Contributor) commented Mar 10, 2026

Summary:
Previously, if a call to an overloaded function matched more than one overload and the return types of the matched overloads weren't all equivalent, we fell back to a return type of Any, as the spec says to do.

With this diff, we instead try to select the "most general" return type among the matched overloads, by checking if there exists a return type such that all materializations of every other return type are assignable to it. Some examples of what this means:

```python
@overload
def f(...) -> A[int]: ...
@overload
def f(...) -> A[Any]: ...

# Old return type: Any
# New return type: A[Any]
f(<ambiguous arguments>)
```
```python
@overload
def f(...) -> bool: ...
@overload
def f(...) -> int: ...

# Old return type: Any
# New return type: int
f(<ambiguous arguments>)
```
```python
@overload
def f(...) -> A[int]: ...
@overload
def f(...) -> A[str]: ...

# Old return type: Any
# New return type: Any (neither return type is more general than the other)
f(<ambiguous arguments>)
```
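The selection rule above can be sketched with a simplified, runtime-only notion of assignability. The helper names below are hypothetical and `is_assignable` is a stand-in for the checker's real test, which also reasons about materializations of `Any`:

```python
from typing import Any

def is_assignable(source: type, target: type) -> bool:
    # Simplified: nominal subtyping only, no gradual-typing materializations.
    return issubclass(source, target)

def most_general_return(candidates: list[type]) -> object:
    # Pick a return type to which every other candidate is assignable.
    for target in candidates:
        if all(is_assignable(other, target) for other in candidates):
            return target
    return Any  # no single most general type: keep the spec's Any fallback

# bool <: int, so int subsumes both candidates:
print(most_general_return([bool, int]))  # <class 'int'>
# Neither int nor str subsumes the other:
print(most_general_return([int, str]))  # typing.Any
```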

We now pass more of numpy and scipy-stubs' `assert_type` tests:

| project | assert_type failures at head | assert_type failures at previous diff | assert_type failures at current diff |
| --- | --- | --- | --- |
| numpy | 134 | 186 | 127 |
| scipy-stubs | 49 | 101 | 19 |

The trade-off is that we report a lot of new errors on mypy_primer projects. I investigated a bunch of them, and they're not wrong, per se, but some are arguably low-value. Examples:

```python
def f(x: Sequence[str], idx) -> str:
    # The possible types of `x[idx]` are str (if idx is an int)
    # and Sequence[str] (if idx is a slice). str <: Sequence[str],
    # so we pick the latter as the return type.
    # This is technically correct...but idx is probably meant to be an int.
    return x[idx]  # error!
```
```python
def f(x: bool, y) -> bool:
    # Similar to above, this is technically correct because y may be an int,
    # but y is probably meant to be a bool.
    return x & y  # error!
```
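The two possible outcomes of the `Sequence` example are easy to see at runtime (a minimal demo of why both `__getitem__` overloads can match, not the checker's logic):

```python
from collections.abc import Sequence

def pick(x: Sequence[str], idx):
    # With an untyped idx, both Sequence.__getitem__ overloads are candidates:
    # (int) -> str and (slice) -> Sequence[str].
    return x[idx]

print(type(pick(["a", "b"], 0)))            # <class 'str'>
print(type(pick(["a", "b"], slice(0, 2))))  # <class 'list'> (a Sequence[str])
```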

The LLM classifier seems to have gotten really confused on this one XD The detailed analysis says that Pyrefly is now "stricter than the established ecosystem standard", but we pass more of numpy and scipy-stubs' `assert_type` assertions than before, meaning this diff brings us closer to the ecosystem standard. Because the PR includes the whole stack, it's also complaining about errors introduced in previous diffs.

Reviewed By: grievejia

Differential Revision: D95667476

@meta-cla bot added the `cla signed` label Mar 10, 2026

meta-codesync bot commented Mar 10, 2026

@rchen152 has exported this pull request. If you are a Meta employee, you can view the originating Diff in D95667476.


rchen152 added a commit to rchen152/pyrefly that referenced this pull request Mar 10, 2026
Summary: Pull Request resolved: facebook#2764

Differential Revision: D95667476

@grievejia (Contributor) left a comment


Review automatically exported from Phabricator review in Meta.

@meta-codesync meta-codesync bot changed the title Alternative resolution for ambiguous overload calls Resolve ambiguous overload calls by selecting the most general return type among the matched overloads (#2764) Mar 19, 2026
rchen152 added a commit to rchen152/pyrefly that referenced this pull request Mar 19, 2026
… type among the matched overloads (facebook#2764)


Summary: For facebook#2833.

Differential Revision: D97389186
Summary:
Our arity filter for overloads was too permissive. It counted min and max arg counts for positional and keyword args separately, so for positional parameters that can be passed either positionally or by name, we'd get weird results like:
```python
@overload
def f(x: int): ...
@overload
def f(x: int, y: str): ...

# first signature takes at least 0 and at most 1 posarg, since x could be passed by name
# second signature takes at least 0 and at most 2 posargs, since x and y could be passed by name
# therefore we don't filter out either overload based on arg count!
f(0)
```

Fixed by adding an `overall` arg count filter.
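An overall arg-count filter can be sketched as follows, assuming plain runtime signatures. The helper names are hypothetical; the real filter operates on type-level signatures rather than `inspect` objects:

```python
import inspect

def overall_arity(fn) -> tuple[int, float]:
    # Count required and maximum parameters regardless of whether each
    # parameter is passed positionally or by name.
    params = inspect.signature(fn).parameters.values()
    required = sum(
        1 for p in params
        if p.default is p.empty
        and p.kind not in (p.VAR_POSITIONAL, p.VAR_KEYWORD)
    )
    if any(p.kind in (p.VAR_POSITIONAL, p.VAR_KEYWORD) for p in params):
        return required, float("inf")  # *args/**kwargs: no upper bound
    return required, len(params)

def arity_compatible(fn, nargs: int) -> bool:
    lo, hi = overall_arity(fn)
    return lo <= nargs <= hi

def f1(x: int): ...
def f2(x: int, y: str): ...

# A one-argument call is compatible only with the first signature overall,
# even though x and y could each be passed by name:
print([g.__name__ for g in (f1, f2) if arity_compatible(g, 1)])  # ['f1']
```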

This issue is hard to detect because the main symptom is that we fall back to an `Unknown` return type more often than we should.

The buggy tests that I added in the previous diff exercise the `overall` arg count code, but the difference isn't observable without the bug fix in the next diff.

Differential Revision: D97390936
rchen152 added a commit to rchen152/pyrefly that referenced this pull request Mar 20, 2026
… type among the matched overloads (facebook#2764)

rchen152 added a commit to rchen152/pyrefly that referenced this pull request Mar 20, 2026
… type among the matched overloads (facebook#2764)

rchen152 and others added 5 commits March 19, 2026 17:48
Summary:
If only one overload has a parameter count that is compatible with a call's argument count, we should take that as the matching overload even if the call produces errors.

Fixes facebook#2833.

Differential Revision: D97394258
…verload resolution is ambiguous

Summary: Fixes facebook#2552.

Differential Revision: D95512431
Summary: No functional change, just a refactor to make it easier to add a flag controlling what we do when we end up with multiple matches.

Differential Revision: D97023844
Summary: Adds flag to toggle whether we follow the overload evaluation algorithm in the spec exactly or do our own thing. For now, the flag does nothing.

Differential Revision: D97015419
… type among the matched overloads (facebook#2764)

@github-actions

Diff from mypy_primer, showing the effect of this PR on open source code:

beartype (https://github.com/beartype/beartype)
- ERROR beartype/_check/error/_pep/pep484585/errpep484585container.py:117:60-77: No matching overload found for function `dict.get` called with arguments: (HintSign | None) [no-matching-overload]
+ ERROR beartype/_check/error/_pep/pep484585/errpep484585container.py:117:61-76: Argument `HintSign | None` is not assignable to parameter `key` with type `HintSign` in function `dict.get` [bad-argument-type]
- ERROR beartype/door/_cls/pep/pep484585/doorpep484585subscripted.py:49:79-50:29: No matching overload found for function `dict.get` called with arguments: (HintSign | None) [no-matching-overload]

streamlit (https://github.com/streamlit/streamlit)
- ERROR lib/streamlit/elements/json.py:134:32-38: No matching overload found for function `list.__init__` called with arguments: (dict[Unknown, Unknown] | object | Unknown) [no-matching-overload]
- ERROR lib/streamlit/elements/json.py:136:28-34: No matching overload found for function `list.__init__` called with arguments: (dict[Unknown, Unknown] | object | Unknown) [no-matching-overload]
+ ERROR lib/streamlit/elements/json.py:134:33-37: Argument `dict[Unknown, Unknown] | object | Unknown` is not assignable to parameter `iterable` with type `Iterable[Unknown]` in function `list.__init__` [bad-argument-type]
+ ERROR lib/streamlit/elements/json.py:136:29-33: Argument `dict[Unknown, Unknown] | object | Unknown` is not assignable to parameter `iterable` with type `Iterable[Unknown]` in function `list.__init__` [bad-argument-type]
- ERROR lib/streamlit/elements/widgets/data_editor.py:204:24-31: No matching overload found for function `list.__init__` called with arguments: (bool | float | int | list[str] | str) [no-matching-overload]
- ERROR lib/streamlit/elements/widgets/data_editor.py:210:24-31: No matching overload found for function `list.__init__` called with arguments: (bool | float | int | list[str] | str) [no-matching-overload]
+ ERROR lib/streamlit/elements/widgets/data_editor.py:204:25-30: Argument `bool | float | int | list[str] | str` is not assignable to parameter `iterable` with type `Iterable[@_]` in function `list.__init__` [bad-argument-type]
+ ERROR lib/streamlit/elements/widgets/data_editor.py:210:25-30: Argument `bool | float | int | list[str] | str` is not assignable to parameter `iterable` with type `Iterable[@_]` in function `list.__init__` [bad-argument-type]
- ERROR lib/streamlit/runtime/state/query_params.py:336:33-40: No matching overload found for function `list.__init__` called with arguments: (Iterable[tuple[str, Iterable[str] | str]] | SupportsKeysAndGetItem[str, Iterable[str] | str]) [no-matching-overload]
+ ERROR lib/streamlit/runtime/state/query_params.py:336:34-39: Argument `Iterable[tuple[str, Iterable[str] | str]] | SupportsKeysAndGetItem[str, Iterable[str] | str]` is not assignable to parameter `iterable` with type `Iterable[tuple[str, Iterable[str] | str]]` in function `list.__init__` [bad-argument-type]

pandera (https://github.com/pandera-dev/pandera)
- ERROR pandera/backends/pandas/error_formatters.py:110:20-36: No matching overload found for function `pandas.core.frame.DataFrame.rename` called with arguments: (Literal['failure_case']) [no-matching-overload]
+ ERROR pandera/backends/pandas/error_formatters.py:110:21-35: Argument `Literal['failure_case']` is not assignable to parameter `mapper` with type `((Any) -> Hashable | None) | Mapping[Any, Hashable | None] | None` in function `pandas.core.frame.DataFrame.rename` [bad-argument-type]

websockets (https://github.com/aaugustin/websockets)
- ERROR src/websockets/extensions/permessage_deflate.py:273:45-52: No matching overload found for function `int.__new__` called with arguments: (type[int], str | None) [no-matching-overload]
- ERROR src/websockets/extensions/permessage_deflate.py:283:45-52: No matching overload found for function `int.__new__` called with arguments: (type[int], str | None) [no-matching-overload]
+ ERROR src/websockets/extensions/permessage_deflate.py:273:46-51: Argument `str | None` is not assignable to parameter `x` with type `Buffer | SupportsIndex | SupportsInt | SupportsTrunc | str` in function `int.__new__` [bad-argument-type]
+ ERROR src/websockets/extensions/permessage_deflate.py:283:46-51: Argument `str | None` is not assignable to parameter `x` with type `Buffer | SupportsIndex | SupportsInt | SupportsTrunc | str` in function `int.__new__` [bad-argument-type]

hydra-zen (https://github.com/mit-ll-responsible-ai/hydra-zen)
-  INFO tests/annotations/declarations.py:241:16-244:6: revealed type: Unknown [reveal-type]
+  INFO tests/annotations/declarations.py:241:16-244:6: revealed type: type[Builds[Unknown] | BuildsWithSig[type[str], [object: object = '']] | HydraPartialBuilds[Unknown] | ZenPartialBuilds[Unknown]] [reveal-type]
-  INFO tests/annotations/declarations.py:245:16-248:6: revealed type: Unknown [reveal-type]
+  INFO tests/annotations/declarations.py:245:16-248:6: revealed type: type[Builds[Unknown] | BuildsWithSig[type[str], [object: object = '']] | HydraPartialBuilds[Unknown] | ZenPartialBuilds[Unknown]] [reveal-type]
-  INFO tests/annotations/declarations.py:261:16-264:6: revealed type: Unknown [reveal-type]
+  INFO tests/annotations/declarations.py:261:16-264:6: revealed type: type[Builds[Unknown] | BuildsWithSig[type[str], [object: object = '']] | HydraPartialBuilds[Unknown] | ZenPartialBuilds[Unknown]] [reveal-type]
- ERROR src/hydra_zen/structured_configs/_implementations.py:2524:53-69: No matching overload found for function `list.__init__` called with arguments: (list[DataClass_ | Mapping[str, Sequence[str] | str | None] | str | type[DataClass_]] | None) [no-matching-overload]
- ERROR src/hydra_zen/structured_configs/_implementations.py:3331:53-69: No matching overload found for function `list.__init__` called with arguments: (list[DataClass_ | Mapping[str, Sequence[str] | str | None] | str | type[DataClass_]] | None) [no-matching-overload]
+ ERROR src/hydra_zen/structured_configs/_implementations.py:2524:54-68: Argument `list[DataClass_ | Mapping[str, Sequence[str] | str | None] | str | type[DataClass_]] | None` is not assignable to parameter `iterable` with type `Iterable[DataClass_ | Mapping[str, Sequence[str] | str | None] | str | type[DataClass_]]` in function `list.__init__` [bad-argument-type]
+ ERROR src/hydra_zen/structured_configs/_implementations.py:3331:54-68: Argument `list[DataClass_ | Mapping[str, Sequence[str] | str | None] | str | type[DataClass_]] | None` is not assignable to parameter `iterable` with type `Iterable[DataClass_ | Mapping[str, Sequence[str] | str | None] | str | type[DataClass_]]` in function `list.__init__` [bad-argument-type]
+ ERROR src/hydra_zen/structured_configs/_implementations.py:3751:12-87: Returned type `Builds[Unknown] | BuildsWithSig[type[Any], [*args: Any, **kwds: Any]] | HydraPartialBuilds[Unknown] | ZenPartialBuilds[Unknown]` is not assignable to declared return type `HydraPartialBuilds[type[_T]] | ZenPartialBuilds[type[_T]]` [bad-return]

dd-trace-py (https://github.com/DataDog/dd-trace-py)
+ ERROR ddtrace/appsec/_ddwaf/waf.py:149:16-18: Returned type `Literal[True] | int` is not assignable to declared return type `bool` [bad-return]
- ERROR ddtrace/contrib/internal/ray/span_manager.py:224:44-59: No matching overload found for function `dict.get` called with arguments: (str | None) [no-matching-overload]
+ ERROR ddtrace/contrib/internal/ray/span_manager.py:224:45-58: Argument `str | None` is not assignable to parameter `key` with type `str` in function `dict.get` [bad-argument-type]
- ERROR ddtrace/internal/coverage/instrumentation_py3_11.py:105:13-21: No matching overload found for function `next` called with arguments: (Iterable[int]) [no-matching-overload]
- ERROR ddtrace/internal/coverage/instrumentation_py3_11.py:110:17-25: No matching overload found for function `next` called with arguments: (Iterable[int]) [no-matching-overload]
+ ERROR ddtrace/internal/coverage/instrumentation_py3_11.py:105:14-20: Argument `Iterable[int]` is not assignable to parameter `i` with type `SupportsNext[@_]` in function `next` [bad-argument-type]
+ ERROR ddtrace/internal/coverage/instrumentation_py3_11.py:108:13-21: `&` is not supported between `SupportsIndex` and `Literal[63]` [unsupported-operation]
+ ERROR ddtrace/internal/coverage/instrumentation_py3_11.py:109:11-19: `&` is not supported between `SupportsIndex` and `Literal[64]` [unsupported-operation]
+ ERROR ddtrace/internal/coverage/instrumentation_py3_11.py:110:18-24: Argument `Iterable[int]` is not assignable to parameter `i` with type `SupportsNext[@_]` in function `next` [bad-argument-type]
+ ERROR ddtrace/internal/coverage/instrumentation_py3_11.py:113:33-41: `&` is not supported between `SupportsIndex` and `Literal[63]` [unsupported-operation]
- ERROR ddtrace/llmobs/_integrations/pydantic_ai.py:55:16-34: Returned type `tuple[str | Any, Unknown | None]` is not assignable to declared return type `tuple[str, str]` [bad-return]
+ ERROR ddtrace/llmobs/_integrations/pydantic_ai.py:55:16-34: Returned type `tuple[str | Any, str | Any | None]` is not assignable to declared return type `tuple[str, str]` [bad-return]

ibis (https://github.com/ibis-project/ibis)
- ERROR ibis/backends/athena/__init__.py:546:42-61: No matching overload found for function `list.__init__` called with arguments: (Attribute) [no-matching-overload]
+ ERROR ibis/backends/athena/__init__.py:546:43-60: Argument `Attribute` is not assignable to parameter `iterable` with type `Iterable[@_]` in function `list.__init__` [bad-argument-type]
- ERROR ibis/backends/athena/__init__.py:554:26-40: No matching overload found for function `list.__init__` called with arguments: (Attribute) [no-matching-overload]
+ ERROR ibis/backends/athena/__init__.py:554:27-39: Argument `Attribute` is not assignable to parameter `iterable` with type `Iterable[@_]` in function `list.__init__` [bad-argument-type]
- ERROR ibis/backends/athena/tests/conftest.py:110:21-74: No matching overload found for function `zip.__new__` called with arguments: (type[zip[_T_co]], Attribute, Unknown) [no-matching-overload]
+ ERROR ibis/backends/athena/tests/conftest.py:110:22-51: Argument `Attribute` is not assignable to parameter `iter1` with type `Iterable[@_]` in function `zip.__new__` [bad-argument-type]
- ERROR ibis/backends/bigquery/__init__.py:991:47-61: No matching overload found for function `list.__init__` called with arguments: (Attribute) [no-matching-overload]
+ ERROR ibis/backends/bigquery/__init__.py:991:48-60: Argument `Attribute` is not assignable to parameter `iterable` with type `Iterable[str]` in function `list.__init__` [bad-argument-type]
- ERROR ibis/backends/bigquery/__init__.py:1009:24-38: No matching overload found for function `list.__init__` called with arguments: (Attribute) [no-matching-overload]
+ ERROR ibis/backends/bigquery/__init__.py:1009:25-37: Argument `Attribute` is not assignable to parameter `iterable` with type `Iterable[@_]` in function `list.__init__` [bad-argument-type]
- ERROR ibis/backends/clickhouse/__init__.py:401:30-52: No matching overload found for function `pandas.core.frame.DataFrame.__new__` called with arguments: (type[DataFrame], columns=Attribute) [no-matching-overload]
- ERROR ibis/backends/clickhouse/__init__.py:403:30-44: No matching overload found for function `list.__init__` called with arguments: (Attribute) [no-matching-overload]
+ ERROR ibis/backends/clickhouse/__init__.py:401:39-51: Argument `Attribute` is not assignable to parameter `columns` with type `ExtensionArray | Index | SequenceNotStr[Any] | Series | ndarray | range | None` in function `pandas.core.frame.DataFrame.__new__` [bad-argument-type]
+ ERROR ibis/backends/clickhouse/__init__.py:403:31-43: Argument `Attribute` is not assignable to parameter `iterable` with type `Iterable[@_]` in function `list.__init__` [bad-argument-type]
- ERROR ibis/backends/databricks/__init__.py:595:26-40: No matching overload found for function `list.__init__` called with arguments: (Attribute) [no-matching-overload]
+ ERROR ibis/backends/databricks/__init__.py:595:27-39: Argument `Attribute` is not assignable to parameter `iterable` with type `Iterable[@_]` in function `list.__init__` [bad-argument-type]
- ERROR ibis/backends/duckdb/tests/test_client.py:499:16-30: No matching overload found for function `list.__init__` called with arguments: (Attribute) [no-matching-overload]
+ ERROR ibis/backends/duckdb/tests/test_client.py:499:17-29: Argument `Attribute` is not assignable to parameter `iterable` with type `Iterable[@_]` in function `list.__init__` [bad-argument-type]
- ERROR ibis/backends/flink/ddl.py:54:31-59: No matching overload found for function `zip.__new__` called with arguments: (type[zip[_T_co]], Attribute, Attribute) [no-matching-overload]
- ERROR ibis/backends/flink/ddl.py:104:67-81: No matching overload found for function `set.__init__` called with arguments: (Attribute) [no-matching-overload]
+ ERROR ibis/backends/flink/ddl.py:54:32-44: Argument `Attribute` is not assignable to parameter `iter1` with type `Iterable[@_]` in function `zip.__new__` [bad-argument-type]
+ ERROR ibis/backends/flink/ddl.py:54:46-58: Argument `Attribute` is not assignable to parameter `iter2` with type `Iterable[@_]` in function `zip.__new__` [bad-argument-type]
+ ERROR ibis/backends/flink/ddl.py:104:68-80: Argument `Attribute` is not assignable to parameter `iterable` with type `Iterable[@_]` in function `set.__init__` [bad-argument-type]
- ERROR ibis/backends/flink/utils.py:122:32-63: No matching overload found for function `zip.__new__` called with arguments: (type[zip[_T_co]], Unknown, None) [no-matching-overload]
+ ERROR ibis/backends/flink/utils.py:122:50-62: Argument `None` is not assignable to parameter `iter2` with type `Iterable[@_]` in function `zip.__new__` [bad-argument-type]
- ERROR ibis/backends/impala/__init__.py:773:23-44: No matching overload found for function `set.__init__` called with arguments: (Attribute) [no-matching-overload]
- ERROR ibis/backends/impala/__init__.py:773:51-74: No matching overload found for function `set.__init__` called with arguments: (Attribute) [no-matching-overload]
+ ERROR ibis/backends/impala/__init__.py:773:24-43: Argument `Attribute` is not assignable to parameter `iterable` with type `Iterable[@_]` in function `set.__init__` [bad-argument-type]
+ ERROR ibis/backends/impala/__init__.py:773:52-73: Argument `Attribute` is not assignable to parameter `iterable` with type `Iterable[@_]` in function `set.__init__` [bad-argument-type]
- ERROR ibis/backends/impala/__init__.py:784:47-71: No matching overload found for function `frozenset.__new__` called with arguments: (type[frozenset[_T_co]], Attribute) [no-matching-overload]
+ ERROR ibis/backends/impala/__init__.py:784:48-70: Argument `Attribute` is not assignable to parameter `iterable` with type `Iterable[@_]` in function `frozenset.__new__` [bad-argument-type]
- ERROR ibis/backends/impala/tests/conftest.py:134:33-60: No matching overload found for function `zip.__new__` called with arguments: (type[zip[_T_co]], Attribute, Unknown) [no-matching-overload]
+ ERROR ibis/backends/impala/tests/conftest.py:134:34-48: Argument `Attribute` is not assignable to parameter `iter1` with type `Iterable[@_]` in function `zip.__new__` [bad-argument-type]
+ ERROR ibis/backends/impala/tests/test_udf.py:147:38-54: Argument `None` is not assignable to parameter `name` with type `str` in function `getattr` [bad-argument-type]
+ ERROR ibis/backends/impala/tests/test_udf.py:150:38-54: Argument `None` is not assignable to parameter `name` with type `str` in function `getattr` [bad-argument-type]
+ ERROR ibis/backends/impala/tests/test_udf.py:175:31-47: Argument `None` is not assignable to parameter `name` with type `str` in function `getattr` [bad-argument-type]
- ERROR ibis/backends/mssql/__init__.py:791:29-56: No matching overload found for function `map.__new__` called with arguments: (type[map[bool | Unknown]], (value: Unknown, dtype: Unknown) -> bool | Unknown, Unknown, Attribute) [no-matching-overload]
+ ERROR ibis/backends/mssql/__init__.py:791:50-55: Argument `Attribute` is not assignable to parameter `iter2` with type `Iterable[Unknown]` in function `map.__new__` [bad-argument-type]
- ERROR ibis/backends/snowflake/__init__.py:478:26-40: No matching overload found for function `list.__init__` called with arguments: (Attribute) [no-matching-overload]
+ ERROR ibis/backends/snowflake/__init__.py:478:27-39: Argument `Attribute` is not assignable to parameter `iterable` with type `Iterable[@_]` in function `list.__init__` [bad-argument-type]
- ERROR ibis/backends/sql/compilers/clickhouse.py:697:36-68: No matching overload found for function `map.__new__` called with arguments: (type[map[_S]], Unknown, Attribute) [no-matching-overload]
+ ERROR ibis/backends/sql/compilers/clickhouse.py:697:48-67: Argument `Attribute` is not assignable to parameter `iterable` with type `Iterable[Unknown]` in function `map.__new__` [bad-argument-type]
+ ERROR ibis/backends/tests/base.py:77:17-86: Argument `list[str]` is not assignable to parameter `iterable` with type `Iterable[LiteralString]` in function `map.__new__` [bad-argument-type]
- ERROR ibis/backends/tests/base.py:75:16-78:14: No matching overload found for function `map.__new__` called with arguments: (type[map[_S]], Overload[
-   (self: LiteralString, chars: LiteralString | None = None, /) -> LiteralString
-   (self: str, chars: str | None = None, /) -> str
- ], list[str]) [no-matching-overload]
+ ERROR ibis/backends/tests/test_client.py:1531:70-79: Argument `range` is not assignable to parameter `iterable` with type `Iterable[LiteralString]` in function `map.__new__` [bad-argument-type]
- ERROR ibis/backends/tests/test_client.py:1531:43-80: No matching overload found for function `map.__new__` called with arguments: (type[map[_S]], Overload[
-   (self: LiteralString, *args: LiteralString, **kwargs: LiteralString) -> LiteralString
-   (self: LiteralString, *args: object, **kwargs: object) -> str
- ], range) [no-matching-overload]
+ ERROR ibis/backends/tests/tpc/ds/test_queries.py:4564:33-75: Argument `tuple[Literal['2000-06-30'], Literal['2000-09-27'], Literal['2000-11-17']]` is not assignable to parameter `iterable` with type `Iterable[Deferred | IntegerValue | int]` in function `map.__new__` [bad-argument-type]
- ERROR ibis/backends/tests/tpc/ds/test_queries.py:4564:26-76: No matching overload found for function `map.__new__` called with arguments: (type[map[DateValue]], Overload[
-   (value_or_year: Deferred | IntegerValue | int, month: Deferred | IntegerValue | int, day: Deferred | IntegerValue | int, /) -> DateValue
-   (value_or_year: Any, /) -> DateValue
- ], tuple[Literal['2000-06-30'], Literal['2000-09-27'], Literal['2000-11-17']]) [no-matching-overload]
- ERROR ibis/backends/trino/tests/conftest.py:152:21-74: No matching overload found for function `zip.__new__` called with arguments: (type[zip[_T_co]], Attribute, Unknown) [no-matching-overload]
+ ERROR ibis/backends/trino/tests/conftest.py:152:22-51: Argument `Attribute` is not assignable to parameter `iter1` with type `Iterable[@_]` in function `zip.__new__` [bad-argument-type]
+ ERROR ibis/backends/trino/tests/test_client.py:137:70-79: Argument `range` is not assignable to parameter `iterable` with type `Iterable[LiteralString]` in function `map.__new__` [bad-argument-type]
- ERROR ibis/backends/trino/tests/test_client.py:137:43-80: No matching overload found for function `map.__new__` called with arguments: (type[map[_S]], Overload[
-   (self: LiteralString, *args: LiteralString, **kwargs: LiteralString) -> LiteralString
-   (self: LiteralString, *args: object, **kwargs: object) -> str
- ], range) [no-matching-overload]
+ ERROR ibis/common/selectors.py:45:44-85: Argument `filter[object]` is not assignable to parameter `iterable` with type `Iterable[int | str]` in function `map.__new__` [bad-argument-type]
- ERROR ibis/common/selectors.py:45:24-86: No matching overload found for function `map.__new__` called with arguments: (type[map[_S]], Overload[
-   (self: Table, what: int | str, /) -> Column
-   (self: Table, what: Sequence[int | str] | slice[Any, Any, Any], /) -> Table
- ], filter[object]) [no-matching-overload]
- ERROR ibis/common/temporal.py:202:15-22: No matching overload found for function `int.__new__` called with arguments: (type[int], Real | timedelta | Unknown) [no-matching-overload]
+ ERROR ibis/common/temporal.py:202:16-21: Argument `Real | timedelta | Unknown` is not assignable to parameter `x` with type `Buffer | SupportsIndex | SupportsInt | SupportsTrunc | str` in function `int.__new__` [bad-argument-type]
- ERROR ibis/examples/gen_registry.py:41:12-47:6: Returned type `Generator[tuple[Unknown, Unknown]]` is not assignable to declared return type `dict[str, str]` [bad-return]
+ ERROR ibis/examples/gen_registry.py:41:12-47:6: Returned type `Generator[tuple[LiteralString, LiteralString]]` is not assignable to declared return type `dict[str, str]` [bad-return]
+ ERROR ibis/examples/gen_registry.py:45:39-72: Argument `list[str]` is not assignable to parameter `iterable` with type `Iterable[LiteralString]` in function `map.__new__` [bad-argument-type]
- ERROR ibis/examples/gen_registry.py:45:27-73: No matching overload found for function `map.__new__` called with arguments: (type[map[_S]], Overload[
-   (self: LiteralString, chars: LiteralString | None = None, /) -> LiteralString
-   (self: str, chars: str | None = None, /) -> str
- ], list[str]) [no-matching-overload]
+ ERROR ibis/examples/gen_registry.py:366:38-49: Argument `KeysView[str]` is not assignable to parameter `iterable` with type `Iterable[LiteralString]` in function `map.__new__` [bad-argument-type]
- ERROR ibis/examples/gen_registry.py:366:26-50: No matching overload found for function `map.__new__` called with arguments: (type[map[_S]], Overload[
-   (self: LiteralString) -> LiteralString
-   (self: str) -> str
- ], KeysView[str]) [no-matching-overload]
- ERROR ibis/expr/api.py:265:38-52: No matching overload found for function `zip.__new__` called with arguments: (type[zip[tuple[str, DataType | str]]], Iterable[str] | None, Iterable[DataType | str] | None) [no-matching-overload]
+ ERROR ibis/expr/api.py:265:39-44: Argument `Iterable[str] | None` is not assignable to parameter `iter1` with type `Iterable[str]` in function `zip.__new__` [bad-argument-type]
+ ERROR ibis/expr/api.py:265:46-51: Argument `Iterable[DataType | str] | None` is not assignable to parameter `iter2` with type `Iterable[DataType | str]` in function `zip.__new__` [bad-argument-type]
- ERROR ibis/expr/api.py:483:28-487:10: No matching overload found for function `pandas.core.frame.DataFrame.__new__` called with arguments: (type[DataFrame], Any, columns=Attribute | Iterable[str] | None) [no-matching-overload]
+ ERROR ibis/expr/api.py:485:21-486:75: Argument `Attribute | Iterable[str] | None` is not assignable to parameter `columns` with type `ExtensionArray | Index | SequenceNotStr[Any] | Series | ndarray | range | None` in function `pandas.core.frame.DataFrame.__new__` [bad-argument-type]
+ ERROR ibis/expr/datatypes/core.py:879:48-58: Argument `Attribute` is not assignable to parameter `iterable` with type `Iterable[LiteralString]` in function `map.__new__` [bad-argument-type]
+ ERROR ibis/expr/datatypes/core.py:879:60-70: Argument `Attribute` is not assignable to parameter `iter2` with type `Iterable[LiteralString]` in function `map.__new__` [bad-argument-type]
- ERROR ibis/expr/datatypes/core.py:879:30-71: No matching overload found for function `map.__new__` called with arguments: (type[map[_S]], Overload[
-   (self: LiteralString, *args: LiteralString, **kwargs: LiteralString) -> LiteralString
-   (self: LiteralString, *args: object, **kwargs: object) -> str
- ], Attribute, Attribute) [no-matching-overload]
+ ERROR ibis/expr/datatypes/tests/test_core.py:126:14-36: Object of class `DataType` has no attribute `bounds` [missing-attribute]
- ERROR ibis/expr/datatypes/tests/test_core.py:407:41-59: No matching overload found for function `zip.__new__` called with arguments: (type[zip[@_]], Attribute, Attribute) [no-matching-overload]
+ ERROR ibis/expr/datatypes/tests/test_core.py:407:42-49: Argument `Attribute` is not assignable to parameter `iter1` with type `Iterable[@_]` in function `zip.__new__` [bad-argument-type]
+ ERROR ibis/expr/datatypes/tests/test_core.py:407:51-58: Argument `Attribute` is not assignable to parameter `iter2` with type `Iterable[@_]` in function `zip.__new__` [bad-argument-type]
+ ERROR ibis/expr/datatypes/value.py:276:25-37: Object of class `DataType` has no attribute `bounds` [missing-attribute]
+ ERROR ibis/expr/datatypes/value.py:279:29-41: Object of class `DataType` has no attribute `bounds` [missing-attribute]
+ ERROR ibis/expr/datatypes/value.py:303:51-66: Object of class `DataType` has no attribute `precision` [missing-attribute]
+ ERROR ibis/expr/datatypes/value.py:303:74-85: Object of class `DataType` has no attribute `scale` [missing-attribute]
+ ERROR ibis/expr/datatypes/value.py:307:32-48: Object of class `DataType` has no attribute `value_type` [missing-attribute]
+ ERROR ibis/expr/datatypes/value.py:309:41-57: Object of class `DataType` has no attribute `value_type` [missing-attribute]
+ ERROR ibis/expr/datatypes/value.py:313:29-39: Object of class `DataType` has no attribute `keys` [missing-attribute]
+ ERROR ibis/expr/datatypes/value.py:317:66-77: Object of class `DataType` has no attribute `items` [missing-attribute]
+ ERROR ibis/expr/datatypes/value.py:355:37-51: Object of class `DataType` has no attribute `timezone` [missing-attribute]
+ ERROR ibis/expr/datatypes/value.py:363:43-53: Object of class `DataType` has no attribute `unit` [missing-attribute]
- ERROR ibis/expr/operations/core.py:136:23-37: No matching overload found for function `getattr` called with arguments: (Module[ibis.expr.types], None) [no-matching-overload]
+ ERROR ibis/expr/operations/core.py:136:28-36: Argument `None` is not assignable to parameter `name` with type `str` in function `getattr` [bad-argument-type]
- ERROR ibis/expr/operations/reductions.py:212:30-51: No matching overload found for function `max` called with arguments: (object, Literal[38]) [no-matching-overload]
+ ERROR ibis/expr/operations/reductions.py:212:27-214:26: Argument `object | None` is not assignable to parameter `precision` with type `int | None` in function `ibis.expr.datatypes.core.Decimal.__init__` [bad-argument-type]
+ ERROR ibis/expr/operations/reductions.py:212:30-51: `object` is not assignable to upper bound `SupportsDunderGT[Any] | SupportsDunderLT[Any]` of type variable `SupportsRichComparisonT` [bad-specialization]
- ERROR ibis/expr/operations/reductions.py:215:26-42: No matching overload found for function `max` called with arguments: (object, Literal[2]) [no-matching-overload]
+ ERROR ibis/expr/operations/reductions.py:215:23-79: Argument `object | None` is not assignable to parameter `scale` with type `int | None` in function `ibis.expr.datatypes.core.Decimal.__init__` [bad-argument-type]
+ ERROR ibis/expr/operations/reductions.py:215:26-42: `object` is not assignable to upper bound `SupportsDunderGT[Any] | SupportsDunderLT[Any]` of type variable `SupportsRichComparisonT` [bad-specialization]
- ERROR ibis/expr/operations/udf.py:41:22-45: No matching overload found for function `next` called with arguments: (Iterable[int]) [no-matching-overload]
+ ERROR ibis/expr/operations/udf.py:41:23-44: Argument `Iterable[int]` is not assignable to parameter `i` with type `SupportsNext[@_]` in function `next` [bad-argument-type]
- ERROR ibis/expr/operations/udf.py:141:22-153:10: No matching overload found for function `typing.MutableMapping.update` called with arguments: (dict[str, FrozenDict[@_, @_] | InputType | Namespace | property | str | Unknown]) [no-matching-overload]
+ ERROR ibis/expr/operations/udf.py:141:22-153:10: No matching overload found for function `typing.MutableMapping.update` called with arguments: (dict[str, DataType | FrozenDict[@_, @_] | InputType | Namespace | property | str]) [no-matching-overload]
- ERROR ibis/expr/operations/udf.py:155:20-71: No matching overload found for function `type.__new__` called with arguments: (type[type], str, tuple[[B: Value[Unknown]](self: Self@_UDF) -> type[B]], dict[str, Argument]) [no-matching-overload]
+ ERROR ibis/expr/operations/udf.py:155:50-62: Argument `tuple[[B: Value[Unknown]](self: Self@_UDF) -> type[B]]` is not assignable to parameter `bases` with type `tuple[type[Any], ...]` in function `type.__new__` [bad-argument-type]
- ERROR ibis/expr/schema.py:36:28-45: No matching overload found for function `map.__new__` called with arguments: (type[map[_S]], (obj: Sized, /) -> int, Attribute) [no-matching-overload]
+ ERROR ibis/expr/schema.py:36:34-44: Argument `Attribute` is not assignable to parameter `iterable` with type `Iterable[Sized]` in function `map.__new__` [bad-argument-type]
+ ERROR ibis/expr/schema.py:474:19-32: Argument `DataType` is not assignable to parameter `fields` with type `FrozenOrderedDict[str, DataType]` in function `Schema.__init__` [bad-argument-type]
- ERROR ibis/expr/tests/test_schema.py:200:41-59: No matching overload found for function `zip.__new__` called with arguments: (type[zip[@_]], Attribute, Attribute) [no-matching-overload]
+ ERROR ibis/expr/tests/test_schema.py:200:42-49: Argument `Attribute` is not assignable to parameter `iter1` with type `Iterable[@_]` in function `zip.__new__` [bad-argument-type]
+ ERROR ibis/expr/tests/test_schema.py:200:51-58: Argument `Attribute` is not assignable to parameter `iter2` with type `Iterable[@_]` in function `zip.__new__` [bad-argument-type]
- ERROR ibis/expr/types/relations.py:939:39-53: No matching overload found for function `frozenset.__new__` called with arguments: (type[frozenset[_T_co]], Attribute) [no-matching-overload]
+ ERROR ibis/expr/types/relations.py:939:40-52: Argument `Attribute` is not assignable to parameter `iterable` with type `Iterable[@_]` in function `frozenset.__new__` [bad-argument-type]
- ERROR ibis/expr/types/relations.py:3216:54-74: No matching overload found for function `map.__new__` called with arguments: (type[map[_S]], (obj: object, /) -> str, Attribute) [no-matching-overload]
+ ERROR ibis/expr/types/relations.py:3216:61-73: Argument `Attribute` is not assignable to parameter `iterable` with type `Iterable[object]` in function `map.__new__` [bad-argument-type]
- ERROR ibis/expr/types/relations.py:4503:31-70: `dict[str, Any | None]` is not assignable to variable `names_transform` with type `((str) -> Value) | Mapping[str, (str) -> Value] | None` [bad-assignment]
- ERROR ibis/expr/types/relations.py:4513:13-39: Object of class `FunctionType` has no attribute `setdefault`
+ ERROR ibis/expr/types/relations.py:4513:13-39: Object of class `Mapping` has no attribute `setdefault` [missing-attribute]
- Object of class `Mapping` has no attribute `setdefault`
- Object of class `NoneType` has no attribute `setdefault` [missing-attribute]
- ERROR ibis/expr/types/relations.py:4526:23-44: `(str) -> Value` is not subscriptable [unsupported-operation]
- ERROR ibis/expr/types/relations.py:4526:23-44: `None` is not subscriptable [unsupported-operation]
- ERROR ibis/expr/types/relations.py:4963:25-68: No matching overload found for function `list.__init__` called with arguments: (map[tuple[Unknown, ...]]) [no-matching-overload]
+ ERROR ibis/expr/types/relations.py:4963:26-67: Argument `map[tuple[Unknown, ...]]` is not assignable to parameter `iterable` with type `Iterable[str]` in function `list.__init__` [bad-argument-type]
- ERROR ibis/expr/visualize.py:53:30-58: No matching overload found for function `zip.__new__` called with arguments: (type[zip[_T_co]], Attribute | Unknown, Attribute | Unknown) [no-matching-overload]
+ ERROR ibis/expr/visualize.py:53:31-43: Argument `Attribute | Unknown` is not assignable to parameter `iter1` with type `Iterable[@_]` in function `zip.__new__` [bad-argument-type]
+ ERROR ibis/expr/visualize.py:53:45-57: Argument `Attribute | Unknown` is not assignable to parameter `iter2` with type `Iterable[@_]` in function `zip.__new__` [bad-argument-type]
- ERROR ibis/formats/pandas.py:93:24-38: No matching overload found for function `zip.__new__` called with arguments: (type[zip[_T_co]], Attribute, list[Unknown]) [no-matching-overload]
+ ERROR ibis/formats/pandas.py:93:25-30: Argument `Attribute` is not assignable to parameter `iter1` with type `Iterable[@_]` in function `zip.__new__` [bad-argument-type]
- ERROR ibis/legacy/udf/vectorized.py:38:12-46: Returned type `dict[str, @_]` is not assignable to declared return type `tuple[Unknown, ...]` [bad-return]
+ ERROR ibis/legacy/udf/vectorized.py:38:12-46: Returned type `dict[str, Any | Unknown]` is not assignable to declared return type `tuple[Unknown, ...]` [bad-return]
- ERROR ibis/legacy/udf/vectorized.py:38:20-45: No matching overload found for function `zip.__new__` called with arguments: (type[zip[_T_co]], Attribute, Series | list[Unknown] | ndarray) [no-matching-overload]
+ ERROR ibis/legacy/udf/vectorized.py:38:21-38: Argument `Attribute` is not assignable to parameter `iter1` with type `Iterable[@_]` in function `zip.__new__` [bad-argument-type]
+ ERROR ibis/legacy/udf/vectorized.py:250:51-67: Argument `DataType` is not assignable to parameter `output_type` with type `Struct` in function `_coerce_to_np_array` [bad-argument-type]
+ ERROR ibis/legacy/udf/vectorized.py:250:51-67: Argument `DataType` is not assignable to parameter `output_type` with type `Struct` in function `_coerce_to_dict` [bad-argument-type]
+ ERROR ibis/legacy/udf/vectorized.py:250:51-67: Argument `DataType` is not assignable to parameter `output_type` with type `Struct` in function `_coerce_to_dataframe` [bad-argument-type]
+ ERROR ibis/tests/benchmarks/test_benchmarks.py:1025:33-46: Argument `range` is not assignable to parameter `iterable` with type `Iterable[LiteralString]` in function `map.__new__` [bad-argument-type]

... (truncated 8 lines) ...

build (https://github.com/pypa/build)
+ ERROR src/build/__main__.py:150:79-106: Argument `TextIOWrapper` is not assignable to parameter `iterable` with type `Iterable[LiteralString]` in function `map.__new__` [bad-argument-type]
- ERROR src/build/__main__.py:150:67-107: No matching overload found for function `map.__new__` called with arguments: (type[map[_S]], Overload[
-   (self: LiteralString, chars: LiteralString | None = None, /) -> LiteralString
-   (self: str, chars: str | None = None, /) -> str
- ], TextIOWrapper) [no-matching-overload]

sockeye (https://github.com/awslabs/sockeye)
- ERROR sockeye/data_io.py:1147:61-1149:108: No matching overload found for function `zip.__new__` called with arguments: (type[zip[_T_co]], list[BucketBatchSize], list[int], list[tuple[float | None, float | None]] | None) [no-matching-overload]
+ ERROR sockeye/data_io.py:1149:62-107: Argument `list[tuple[float | None, float | None]] | None` is not assignable to parameter `iter3` with type `Iterable[tuple[float | None, float | None]]` in function `zip.__new__` [bad-argument-type]
- ERROR sockeye/model.py:816:40-68: No matching overload found for function `zip.__new__` called with arguments: (type[zip[_T_co]], list[str], list[int] | None) [no-matching-overload]
+ ERROR sockeye/model.py:816:56-67: Argument `list[int] | None` is not assignable to parameter `iter2` with type `Iterable[int]` in function `zip.__new__` [bad-argument-type]
- ERROR test/unit/test_inference.py:138:47-77: No matching overload found for function `zip.__new__` called with arguments: (type[zip[_T_co]], list[list[str]] | None, Unknown) [no-matching-overload]
+ ERROR test/unit/test_inference.py:138:48-67: Argument `list[list[str]] | None` is not assignable to parameter `iter1` with type `Iterable[list[str]]` in function `zip.__new__` [bad-argument-type]
- ERROR test/unit/test_inference.py:177:78-148: No matching overload found for function `zip.__new__` called with arguments: (type[zip[_T_co]], list[list[str]] | None, list[list[str]] | None) [no-matching-overload]
+ ERROR test/unit/test_inference.py:177:79-112: Argument `list[list[str]] | None` is not assignable to parameter `iter1` with type `Iterable[list[str]]` in function `zip.__new__` [bad-argument-type]
+ ERROR test/unit/test_inference.py:177:114-147: Argument `list[list[str]] | None` is not assignable to parameter `iter2` with type `Iterable[list[str]]` in function `zip.__new__` [bad-argument-type]
+ ERROR test/unit/test_utils.py:202:22-38: No matching overload found for function `print` called with arguments: (str, file=GzipFile | IO[Any] | TextIOWrapper) [no-matching-overload]

svcs (https://github.com/hynek/svcs)
- ERROR tests/typing/mypy.py:19:18-22: No matching overload found for function `svcs._core.Container.get` called with arguments: (type[S1]) [no-matching-overload]
- ERROR tests/typing/mypy.py:20:18-22: No matching overload found for function `svcs._core.Container.get` called with arguments: (type[S2]) [no-matching-overload]
+ ERROR tests/typing/mypy.py:19:19-21: Argument `type[S1]` is not assignable to parameter `svc_type` with type `type[@_]` in function `svcs._core.Container.get` [bad-argument-type]
+ ERROR tests/typing/mypy.py:20:19-21: Argument `type[S2]` is not assignable to parameter `svc_type` with type `type[@_]` in function `svcs._core.Container.get` [bad-argument-type]

vision (https://github.com/pytorch/vision)
+ ERROR torchvision/prototype/datasets/utils/_encoded.py:46:34-38: Argument `IO[Any]` is not assignable to parameter `file` with type `BinaryIO` in function `EncodedData.from_file` [bad-argument-type]
- ERROR torchvision/transforms/functional.py:573:27-40: No matching overload found for function `int.__new__` called with arguments: (type[int], Number & list[int]) [no-matching-overload]
- ERROR torchvision/transforms/functional.py:573:45-58: No matching overload found for function `int.__new__` called with arguments: (type[int], Number & list[int]) [no-matching-overload]
+ ERROR torchvision/transforms/functional.py:573:28-39: Argument `Number & list[int]` is not assignable to parameter `x` with type `Buffer | SupportsIndex | SupportsInt | SupportsTrunc | str` in function `int.__new__` [bad-argument-type]
+ ERROR torchvision/transforms/functional.py:573:46-57: Argument `Number & list[int]` is not assignable to parameter `x` with type `Buffer | SupportsIndex | SupportsInt | SupportsTrunc | str` in function `int.__new__` [bad-argument-type]
- ERROR torchvision/transforms/functional.py:799:20-26: No matching overload found for function `int.__new__` called with arguments: (type[int], Number & list[int]) [no-matching-overload]
- ERROR torchvision/transforms/functional.py:799:31-37: No matching overload found for function `int.__new__` called with arguments: (type[int], Number & list[int]) [no-matching-overload]
+ ERROR torchvision/transforms/functional.py:799:21-25: Argument `Number & list[int]` is not assignable to parameter `x` with type `Buffer | SupportsIndex | SupportsInt | SupportsTrunc | str` in function `int.__new__` [bad-argument-type]
+ ERROR torchvision/transforms/functional.py:799:32-36: Argument `Number & list[int]` is not assignable to parameter `x` with type `Buffer | SupportsIndex | SupportsInt | SupportsTrunc | str` in function `int.__new__` [bad-argument-type]
- ERROR torchvision/transforms/functional.py:850:20-26: No matching overload found for function `int.__new__` called with arguments: (type[int], Number & list[int]) [no-matching-overload]
- ERROR torchvision/transforms/functional.py:850:31-37: No matching overload found for function `int.__new__` called with arguments: (type[int], Number & list[int]) [no-matching-overload]
+ ERROR torchvision/transforms/functional.py:850:21-25: Argument `Number & list[int]` is not assignable to parameter `x` with type `Buffer | SupportsIndex | SupportsInt | SupportsTrunc | str` in function `int.__new__` [bad-argument-type]
+ ERROR torchvision/transforms/functional.py:850:32-36: Argument `Number & list[int]` is not assignable to parameter `x` with type `Buffer | SupportsIndex | SupportsInt | SupportsTrunc | str` in function `int.__new__` [bad-argument-type]
- ERROR torchvision/transforms/v2/functional/_geometry.py:2549:16-29: No matching overload found for function `int.__new__` called with arguments: (type[int], Number & list[int]) [no-matching-overload]
- ERROR torchvision/transforms/v2/functional/_geometry.py:2882:16-22: No matching overload found for function `int.__new__` called with arguments: (type[int], Number & list[int]) [no-matching-overload]
+ ERROR torchvision/transforms/v2/functional/_geometry.py:2549:17-28: Argument `Number & list[int]` is not assignable to parameter `x` with type `Buffer | SupportsIndex | SupportsInt | SupportsTrunc | str` in function `int.__new__` [bad-argument-type]
+ ERROR torchvision/transforms/v2/functional/_geometry.py:2882:17-21: Argument `Number & list[int]` is not assignable to parameter `x` with type `Buffer | SupportsIndex | SupportsInt | SupportsTrunc | str` in function `int.__new__` [bad-argument-type]
+ ERROR torchvision/utils.py:398:22-36: `+` is not supported between `int` and `tuple[Literal[100]]` [unsupported-operation]
+ ERROR torchvision/utils.py:398:22-36: `+` is not supported between `str` and `tuple[Literal[100]]` [unsupported-operation]

numpy-stl (https://github.com/WoLpH/numpy-stl)
+ ERROR stl/base.py:419:28-74: `ndarray[tuple[Any, ...], dtype[float64 | Any]]` is not assignable to `ndarray[tuple[int, int], dtype[floating[_32Bit]]]` [bad-assignment]
+ ERROR stl/base.py:445:23-69: `ndarray[tuple[Any, ...], dtype[float64 | Any]]` is not assignable to variable `normals` with type `ndarray[tuple[int, int], dtype[floating[_32Bit]]] | None` [bad-assignment]
+ ERROR stl/base.py:447:32-42: `**` is not supported between `None` and `Literal[2]` [unsupported-operation]

urllib3 (https://github.com/urllib3/urllib3)
- ERROR test/with_dummyserver/test_https.py:760:43-45: No matching overload found for function `contextlib.nullcontext.__init__` called with arguments: () [no-matching-overload]
+ ERROR test/with_dummyserver/test_https.py:760:43-45: Argument `nullcontext[object]` is not assignable to parameter `self` with type `nullcontext[None]` in function `contextlib.nullcontext.__init__` [bad-argument-type]
+ ERROR src/urllib3/_request_methods.py:123:54-68: Argument `KeysView[str] | dict_keys[str, str]` is not assignable to parameter `iterable` with type `Iterable[LiteralString]` in function `map.__new__` [bad-argument-type]
- ERROR src/urllib3/_request_methods.py:123:42-69: No matching overload found for function `map.__new__` called with arguments: (type[map[_S]], Overload[
-   (self: LiteralString) -> LiteralString
-   (self: str) -> str
- ], KeysView[str] | dict_keys[str, str]) [no-matching-overload]
- ERROR src/urllib3/util/timeout.py:271:30-84: No matching overload found for function `min` called with arguments: (float, _TYPE_DEFAULT | float) [no-matching-overload]
+ ERROR src/urllib3/util/timeout.py:271:20-85: Returned type `_TYPE_DEFAULT | float` is not assignable to declared return type `float | None` [bad-return]
+ ERROR src/urllib3/util/timeout.py:271:23-85: `_TYPE_DEFAULT | float` is not assignable to upper bound `SupportsDunderGT[Any] | SupportsDunderLT[Any]` of type variable `SupportsRichComparisonT` [bad-specialization]
+ ERROR src/urllib3/util/timeout.py:271:30-84: `_TYPE_DEFAULT | float` is not assignable to upper bound `SupportsDunderGT[Any] | SupportsDunderLT[Any]` of type variable `SupportsRichComparisonT` [bad-specialization]

Expression (https://github.com/cognitedata/Expression)
- ERROR tests/test_array.py:47:31-51: No matching overload found for function `expression.core.pipe.pipe` called with arguments: (TypedArray[int], (TypedArray[object]) -> TypedArray[str]) [no-matching-overload]
+ ERROR tests/test_array.py:47:36-50: Argument `(TypedArray[object]) -> TypedArray[str]` is not assignable to parameter `fn1` with type `(TypedArray[int]) -> @_` in function `expression.core.pipe.pipe` [bad-argument-type]

PyWinCtl (https://github.com/Kalmat/PyWinCtl)
+ ERROR src/pywinctl/_pywinctl_linux.py:698:30-82: `Resource` is not assignable to `Window` [bad-assignment]

freqtrade (https://github.com/freqtrade/freqtrade)
+ ERROR freqtrade/data/converter/converter.py:158:18-65: `DataFrame | Series` is not assignable to variable `df` with type `DataFrame` [bad-assignment]
+ ERROR freqtrade/data/converter/converter.py:160:14-60: `DataFrame | Series` is not assignable to variable `df` with type `DataFrame` [bad-assignment]
- ERROR freqtrade/freqai/data_kitchen.py:179:29-44: No matching overload found for function `pandas.core.frame.DataFrame.__new__` called with arguments: (type[DataFrame], Buffer | _NestedSequence[bytes | complex | str] | _NestedSequence[_SupportsArray[dtype]] | _SupportsArray[dtype] | bytes | complex | ndarray[tuple[int], dtype[float64]] | str | Unknown) [no-matching-overload]
+ ERROR freqtrade/freqai/data_kitchen.py:179:30-43: Argument `Buffer | _NestedSequence[bytes | complex | str] | _NestedSequence[_SupportsArray[dtype]] | _SupportsArray[dtype] | bytes | complex | ndarray[tuple[int], dtype[float64]] | str | Unknown` is not assignable to parameter `data` with type `DataFrame | Index | Iterable[Index | Sequence[Any] | Series | dict[Any, Any] | ndarray[tuple[int]] | tuple[Hashable, ListLikeU]] | Sequence[Any] | Series | dict[Any, Any] | ndarray[tuple[int]] | None` in function `pandas.core.frame.DataFrame.__new__` [bad-argument-type]
+ ERROR freqtrade/freqai/freqai_interface.py:350:21-41: Argument `DataFrame | Series` is not assignable to parameter `dataframe` with type `DataFrame` in function `freqtrade.strategy.interface.IStrategy.set_freqai_targets` [bad-argument-type]
+ ERROR freqtrade/freqai/freqai_interface.py:354:21-44: Argument `DataFrame | Series` is not assignable to parameter `dataframe` with type `DataFrame` in function `freqtrade.strategy.interface.IStrategy.set_freqai_targets` [bad-argument-type]

scikit-learn (https://github.com/scikit-learn/scikit-learn)
- ERROR sklearn/_loss/loss.py:1070:9-24: Class member `HalfMultinomialLoss.in_y_true_range` overrides parent class `BaseLoss` in an inconsistent manner [bad-override]
- ERROR sklearn/_loss/loss.py:1100:23-39: No matching overload found for function `range.__new__` called with arguments: (type[range], Unknown | None) [no-matching-overload]
+ ERROR sklearn/_loss/loss.py:1100:24-38: Argument `Unknown | None` is not assignable to parameter `stop` with type `SupportsIndex` in function `range.__new__` [bad-argument-type]
- ERROR sklearn/cluster/_affinity_propagation.py:533:9-537:10: Cannot unpack tuple[list[Unknown] | ndarray[tuple[Any, ...], dtype[Unknown]], ndarray[tuple[Any, ...], dtype[Unknown]]] | tuple[list[Unknown] | ndarray[tuple[Any, ...], dtype[Unknown]], ndarray[tuple[Any, ...], dtype[Unknown]], int] | tuple[ndarray, ndarray] | tuple[ndarray, ndarray, Literal[0]] | tuple[Unknown, Unknown] | tuple[Unknown, Unknown, Literal[0]] (of size 2) into 3 values [bad-unpacking]
+ ERROR sklearn/cluster/_affinity_propagation.py:533:9-537:10: Cannot unpack tuple[list[Unknown] | ndarray, ndarray[tuple[Any, ...], dtype[Unknown]] | Unknown] | tuple[list[Unknown] | ndarray, ndarray[tuple[Any, ...], dtype[Unknown]] | Unknown, int] | tuple[ndarray[tuple[Unknown], dtype[Any | Unknown]], ndarray[tuple[Unknown], dtype[Any | Unknown]]] | tuple[ndarray[tuple[Unknown], dtype[Any | Unknown]], ndarray[tuple[Unknown], dtype[Any | Unknown]], Literal[0]] | tuple[ndarray, ndarray] | tuple[ndarray, ndarray, Literal[0]] (of size 2) into 3 values [bad-unpacking]
- ERROR sklearn/cluster/_agglomerative.py:686:13-26: Object of class `ndarray` has no attribute `append` [missing-attribute]
- ERROR sklearn/cluster/_agglomerative.py:1085:32-47: `int | Unknown` is not assignable to attribute `n_clusters_` with type `signedinteger[_NBitIntP]` [bad-assignment]
- ERROR sklearn/cluster/_bicluster.py:613:48-85: No matching overload found for function `numpy.lib._shape_base_impl.apply_along_axis` called with arguments: ((v: Unknown) -> ndarray[tuple[int], dtype[Unknown]] | Unknown, axis=Literal[1], arr=Unknown) [no-matching-overload]
+ ERROR sklearn/cluster/_bicluster.py:613:48-85: No matching overload found for function `numpy.lib._shape_base_impl.apply_along_axis` called with arguments: ((v: Unknown) -> ndarray[tuple[int], Unknown] | Unknown, axis=Literal[1], arr=Unknown) [no-matching-overload]
+ ERROR sklearn/cluster/_birch.py:182:30-65: `ndarray[tuple[Any, ...], Unknown]` is not assignable to attribute `squared_norm_` with type `list[Unknown]` [bad-assignment]
- ERROR sklearn/cluster/_kmeans.py:1548:36-49: No matching overload found for function `set.__init__` called with arguments: (Unknown | None) [no-matching-overload]
+ ERROR sklearn/cluster/_kmeans.py:1548:37-48: Argument `ndarray | Unknown | None` is not assignable to parameter `iterable` with type `Iterable[Any | Unknown]` in function `set.__init__` [bad-argument-type]
- ERROR sklearn/cluster/_kmeans.py:1954:30-59: No matching overload found for function `min` called with arguments: (Unknown | None, Unknown) [no-matching-overload]
+ ERROR sklearn/cluster/_kmeans.py:1954:30-59: `Unknown | None` is not assignable to upper bound `SupportsDunderGT[Any] | SupportsDunderLT[Any]` of type variable `SupportsRichComparisonT` [bad-specialization]
+ ERROR sklearn/cluster/_spectral.py:163:41-166:14: No matching overload found for function `scipy.sparse._csc.csc_array.__init__` called with arguments: (tuple[ndarray[tuple[int], dtype[float64]], tuple[ndarray[tuple[int], dtype[float64 | Any]], Unknown]], shape=tuple[Unknown, Unknown]) [no-matching-overload]
- ERROR sklearn/cluster/tests/test_hierarchical.py:88:9-49: Cannot unpack tuple[ndarray | Unknown, int, int | Any, None] | tuple[ndarray | Unknown, int, int | Any, None, ndarray[tuple[Any, ...], dtype[Unknown]] | Unknown] | tuple[ndarray[tuple[Any, ...], dtype[Unknown]], int, int | Any, ndarray[tuple[Unknown], dtype[Unknown]]] | tuple[ndarray[tuple[Any, ...], dtype[Unknown]], int, int | Any, ndarray[tuple[Unknown], dtype[Unknown]], ndarray[tuple[Unknown], dtype[Unknown]] | ndarray[tuple[Any, ...], dtype[Unknown]] | Unknown] | tuple[Unknown, Unknown, Unknown, ndarray[tuple[Unknown], dtype[Unknown]]] | tuple[Unknown, Unknown, Unknown, ndarray[tuple[Unknown], dtype[Unknown]], Unknown] | Unknown (of size 5) into 4 values [bad-unpacking]
+ ERROR sklearn/cluster/tests/test_hierarchical.py:88:9-49: Cannot unpack tuple[ndarray | Unknown, int, Any, None] | tuple[ndarray | Unknown, int, Any, None, ndarray[tuple[Any, ...], dtype[Unknown]] | Unknown] | tuple[ndarray[tuple[Any, ...], dtype[Unknown]], int, Any, ndarray[tuple[Unknown]]] | tuple[ndarray[tuple[Any, ...], dtype[Unknown]], int, Any, ndarray[tuple[Unknown]], Unknown] | tuple[Unknown, Unknown, Unknown, ndarray[tuple[Unknown]]] | tuple[Unknown, Unknown, Unknown, ndarray[tuple[Unknown]], Unknown] | Unknown (of size 5) into 4 values [bad-unpacking]
- ERROR sklearn/cluster/tests/test_hierarchical.py:119:21-56: Cannot unpack tuple[ndarray | Unknown, int, int | Any, None] | tuple[ndarray | Unknown, int, int | Any, None, ndarray[tuple[Any, ...], dtype[Unknown]] | Unknown] | tuple[ndarray[tuple[Any, ...], dtype[Unknown]], int, int | Any, ndarray[tuple[Unknown], dtype[Unknown]]] | tuple[ndarray[tuple[Any, ...], dtype[Unknown]], int, int | Any, ndarray[tuple[Unknown], dtype[Unknown]], ndarray[tuple[Unknown], dtype[Unknown]] | ndarray[tuple[Any, ...], dtype[Unknown]] | Unknown] | tuple[Unknown, Unknown, Unknown, ndarray[tuple[Unknown], dtype[Unknown]]] | tuple[Unknown, Unknown, Unknown, ndarray[tuple[Unknown], dtype[Unknown]], Unknown] | Unknown (of size 5) into 4 values [bad-unpacking]
- ERROR sklearn/cluster/tests/test_hierarchical.py:133:9-44: Cannot unpack tuple[ndarray | Unknown, int, int | Any, None] | tuple[ndarray | Unknown, int, int | Any, None, ndarray[tuple[Any, ...], dtype[Unknown]] | Unknown] | tuple[ndarray[tuple[Any, ...], dtype[Unknown]], int, int | Any, ndarray[tuple[Unknown], dtype[Unknown]]] | tuple[ndarray[tuple[Any, ...], dtype[Unknown]], int, int | Any, ndarray[tuple[Unknown], dtype[Unknown]], ndarray[tuple[Unknown], dtype[Unknown]] | ndarray[tuple[Any, ...], dtype[Unknown]] | Unknown] | tuple[Unknown, Unknown, Unknown, ndarray[tuple[Unknown], dtype[Unknown]]] | tuple[Unknown, Unknown, Unknown, ndarray[tuple[Unknown], dtype[Unknown]], Unknown] | Unknown (of size 5) into 4 values [bad-unpacking]
+ ERROR sklearn/cluster/tests/test_hierarchical.py:119:21-56: Cannot unpack tuple[ndarray | Unknown, int, Any, None] | tuple[ndarray | Unknown, int, Any, None, ndarray[tuple[Any, ...], dtype[Unknown]] | Unknown] | tuple[ndarray[tuple[Any, ...], dtype[Unknown]], int, Any, ndarray[tuple[Unknown]]] | tuple[ndarray[tuple[Any, ...], dtype[Unknown]], int, Any, ndarray[tuple[Unknown]], Unknown] | tuple[Unknown, Unknown, Unknown, ndarray[tuple[Unknown]]] | tuple[Unknown, Unknown, Unknown, ndarray[tuple[Unknown]], Unknown] | Unknown (of size 5) into 4 values [bad-unpacking]
+ ERROR sklearn/cluster/tests/test_hierarchical.py:133:9-44: Cannot unpack tuple[ndarray | Unknown, int, Any, None] | tuple[ndarray | Unknown, int, Any, None, ndarray[tuple[Any, ...], dtype[Unknown]] | Unknown] | tuple[ndarray[tuple[Any, ...], dtype[Unknown]], int, Any, ndarray[tuple[Unknown]]] | tuple[ndarray[tuple[Any, ...], dtype[Unknown]], int, Any, ndarray[tuple[Unknown]], Unknown] | tuple[Unknown, Unknown, Unknown, ndarray[tuple[Unknown]]] | tuple[Unknown, Unknown, Unknown, ndarray[tuple[Unknown]], Unknown] | Unknown (of size 5) into 4 values [bad-unpacking]
- ERROR sklearn/cluster/tests/test_hierarchical.py:353:13-37: Cannot unpack tuple[ndarray | Unknown, int, int | Any, None] | tuple[ndarray | Unknown, int, int | Any, None, ndarray[tuple[Any, ...], dtype[Unknown]] | Unknown] | tuple[ndarray[tuple[Any, ...], dtype[Unknown]], int, int | Any, ndarray[tuple[Unknown], dtype[Unknown]]] | tuple[ndarray[tuple[Any, ...], dtype[Unknown]], int, int | Any, ndarray[tuple[Unknown], dtype[Unknown]], ndarray[tuple[Unknown], dtype[Unknown]] | ndarray[tuple[Any, ...], dtype[Unknown]] | Unknown] | tuple[Unknown, Unknown, Unknown, ndarray[tuple[Unknown], dtype[Unknown]]] | tuple[Unknown, Unknown, Unknown, ndarray[tuple[Unknown], dtype[Unknown]], Unknown] | Unknown (of size 5) into 4 values [bad-unpacking]
+ ERROR sklearn/cluster/tests/test_hierarchical.py:353:13-37: Cannot unpack tuple[ndarray | Unknown, int, Any, None] | tuple[ndarray | Unknown, int, Any, None, ndarray[tuple[Any, ...], dtype[Unknown]] | Unknown] | tuple[ndarray[tuple[Any, ...], dtype[Unknown]], int, Any, ndarray[tuple[Unknown]]] | tuple[ndarray[tuple[Any, ...], dtype[Unknown]], int, Any, ndarray[tuple[Unknown]], Unknown] | tuple[Unknown, Unknown, Unknown, ndarray[tuple[Unknown]]] | tuple[Unknown, Unknown, Unknown, ndarray[tuple[Unknown]], Unknown] | Unknown (of size 5) into 4 values [bad-unpacking]
- ERROR sklearn/cluster/tests/test_hierarchical.py:386:5-29: Cannot unpack tuple[ndarray | Unknown, int, int | Any, None] | tuple[ndarray | Unknown, int, int | Any, None, ndarray[tuple[Any, ...], dtype[Unknown]] | Unknown] | tuple[ndarray[tuple[Any, ...], dtype[Unknown]], int, int | Any, ndarray[tuple[Unknown], dtype[Unknown]]] | tuple[ndarray[tuple[Any, ...], dtype[Unknown]], int, int | Any, ndarray[tuple[Unknown], dtype[Unknown]], ndarray[tuple[Unknown], dtype[Unknown]] | ndarray[tuple[Any, ...], dtype[Unknown]] | Unknown] | tuple[Unknown, Unknown, Unknown, ndarray[tuple[Unknown], dtype[Unknown]]] | tuple[Unknown, Unknown, Unknown, ndarray[tuple[Unknown], dtype[Unknown]], Unknown] (of size 5) into 4 values [bad-unpacking]
+ ERROR sklearn/cluster/tests/test_hierarchical.py:386:5-29: Cannot unpack tuple[ndarray | Unknown, int, Any, None] | tuple[ndarray | Unknown, int, Any, None, ndarray[tuple[Any, ...], dtype[Unknown]] | Unknown] | tuple[ndarray[tuple[Any, ...], dtype[Unknown]], int, Any, ndarray[tuple[Unknown]]] | tuple[ndarray[tuple[Any, ...], dtype[Unknown]], int, Any, ndarray[tuple[Unknown]], Unknown] | tuple[Unknown, Unknown, Unknown, ndarray[tuple[Unknown]]] | tuple[Unknown, Unknown, Unknown, ndarray[tuple[Unknown]], Unknown] (of size 5) into 4 values [bad-unpacking]
- ERROR sklearn/cluster/tests/test_hierarchical.py:752:9-60: Cannot unpack tuple[ndarray | Unknown, int, int | Any, None] | tuple[ndarray | Unknown, int, int | Any, None, ndarray[tuple[Any, ...], dtype[Unknown]] | Unknown] | tuple[ndarray[tuple[Any, ...], dtype[Unknown]], int, int | Any, ndarray[tuple[Unknown], dtype[Unknown]]] | tuple[ndarray[tuple[Any, ...], dtype[Unknown]], int, int | Any, ndarray[tuple[Unknown], dtype[Unknown]], ndarray[tuple[Unknown], dtype[Unknown]] | ndarray[tuple[Any, ...], dtype[Unknown]] | Unknown] | tuple[Unknown, Unknown, Unknown, ndarray[tuple[Unknown], dtype[Unknown]]] | tuple[Unknown, Unknown, Unknown, ndarray[tuple[Unknown], dtype[Unknown]], Unknown] | Unknown (of size 4) into 5 values [bad-unpacking]
+ ERROR sklearn/cluster/tests/test_hierarchical.py:752:9-60: Cannot unpack tuple[ndarray | Unknown, int, Any, None] | tuple[ndarray | Unknown, int, Any, None, ndarray[tuple[Any, ...], dtype[Unknown]] | Unknown] | tuple[ndarray[tuple[Any, ...], dtype[Unknown]], int, Any, ndarray[tuple[Unknown]]] | tuple[ndarray[tuple[Any, ...], dtype[Unknown]], int, Any, ndarray[tuple[Unknown]], Unknown] | tuple[Unknown, Unknown, Unknown, ndarray[tuple[Unknown]]] | tuple[Unknown, Unknown, Unknown, ndarray[tuple[Unknown]], Unknown] | Unknown (of size 4) into 5 values [bad-unpacking]
- ERROR sklearn/cluster/tests/test_k_means.py:886:19-31: No matching overload found for function `set.__init__` called with arguments: (Unknown | None) [no-matching-overload]
+ ERROR sklearn/cluster/tests/test_k_means.py:886:20-30: Argument `ndarray | Unknown | None` is not assignable to parameter `iterable` with type `Iterable[Any | Unknown]` in function `set.__init__` [bad-argument-type]
- ERROR sklearn/cluster/tests/test_k_means.py:975:19-31: No matching overload found for function `set.__init__` called with arguments: (Unknown | None) [no-matching-overload]
+ ERROR sklearn/cluster/tests/test_k_means.py:975:20-30: Argument `ndarray | Unknown | None` is not assignable to parameter `iterable` with type `Iterable[Any | Unknown]` in function `set.__init__` [bad-argument-type]
- ERROR sklearn/compose/_column_transformer.py:1087:35-49: No matching overload found for function `set.__init__` called with arguments: (ndarray | None) [no-matching-overload]
+ ERROR sklearn/compose/_column_transformer.py:1087:36-48: Argument `ndarray | None` is not assignable to parameter `iterable` with type `Iterable[Any]` in function `set.__init__` [bad-argument-type]
+ ERROR sklearn/covariance/_empirical_covariance.py:215:31-35: `None` is not assignable to attribute `precision_` with type `ndarray[tuple[Any, ...], dtype[inexact]]` [bad-assignment]
- ERROR sklearn/covariance/_graph_lasso.py:1046:28-50: No matching overload found for function `zip.__new__` called with arguments: (type[zip[Unknown]], int | Unknown, zip[tuple[Any, ...]], zip[tuple[Any, ...]]) [no-matching-overload]
+ ERROR sklearn/covariance/_graph_lasso.py:1046:29-35: Argument `int | ndarray | Unknown` is not assignable to parameter `iter1` with type `Iterable[@_]` in function `zip.__new__` [bad-argument-type]
- ERROR sklearn/datasets/_arff_parser.py:191:25-49: No matching overload found for function `next` called with arguments: (Iterator[list[Unknown]] | tuple[list[Unknown], ...]) [no-matching-overload]
+ ERROR sklearn/datasets/_arff_parser.py:191:26-48: Argument `Iterator[list[Unknown]] | tuple[list[Unknown], ...]` is not assignable to parameter `i` with type `SupportsNext[list[Unknown]]` in function `next` [bad-argument-type]
- ERROR sklearn/datasets/_base.py:1341:23-1345:6: No matching overload found for function `sorted` called with arguments: (Generator[Traversable]) [no-matching-overload]
+ ERROR sklearn/datasets/_base.py:1341:23-1345:6: `Traversable` is not assignable to upper bound `SupportsDunderGT[Any] | SupportsDunderLT[Any]` of type variable `SupportsRichComparisonT` [bad-specialization]
- ERROR sklearn/datasets/_samples_generator.py:1143:36-60: No matching overload found for function `int.__new__` called with arguments: (type[int], _RealLike | int | Unknown) [no-matching-overload]
- ERROR sklearn/datasets/_samples_generator.py:1145:23-46: No matching overload found for function `range.__new__` called with arguments: (type[range], _RealLike | int | Unknown) [no-matching-overload]
+ ERROR sklearn/datasets/_samples_generator.py:1143:37-59: Argument `_RealLike | int | Unknown` is not assignable to parameter `x` with type `Buffer | SupportsIndex | SupportsInt | SupportsTrunc | str` in function `int.__new__` [bad-argument-type]
+ ERROR sklearn/datasets/_samples_generator.py:1145:24-45: Argument `_RealLike | int | Unknown` is not assignable to parameter `stop` with type `SupportsIndex` in function `range.__new__` [bad-argument-type]
+ ERROR sklearn/datasets/_samples_generator.py:1152:60-71: Argument `float | ndarray[tuple[int]] | Unknown` is not assignable to parameter `iter2` with type `Iterable[@_]` in function `zip.__new__` [bad-argument-type]
+ ERROR sklearn/datasets/_samples_generator.py:1621:15-38: `ndarray[tuple[Any, ...], dtype[float64]] | ndarray | Unknown` is not assignable to upper bound `generic` of type variable `_ScalarT` [bad-specialization]
+ ERROR sklearn/datasets/_samples_generator.py:1843:11-27: Cannot index into `_spbase[Unknown, tuple[int, int]]` [bad-index]
- ERROR sklearn/datasets/_samples_generator.py:1152:37-72: No matching overload found for function `zip.__new__` called with arguments: (type[zip[@_]], Iterable[Unknown] | list[int] | Unknown, float | ndarray[tuple[int]] | Unknown) [no-matching-overload]
- ERROR sklearn/datasets/_samples_generator.py:1621:15-38: No matching overload found for function `map.__new__` called with arguments: (type[map[_S]], Overload[
-   [_ScalarT: generic](a: _ScalarT, axis: Sequence[SupportsIndex] | SupportsIndex | None = None) -> _ScalarT
-   [_ScalarT: generic](a: _ArrayLike, axis: Sequence[SupportsIndex] | SupportsIndex | None = None) -> ndarray[tuple[Any, ...], dtype[_ScalarT]]
-   (a: ArrayLike, axis: Sequence[SupportsIndex] | SupportsIndex | None = None) -> ndarray
- ], tuple[Unknown, ndarray[tuple[Any, ...], dtype[float64]], ndarray[tuple[int, int], dtype[float64]]]) [no-matching-overload]
+ ERROR sklearn/datasets/tests/test_openml.py:121:30-51: Argument `bytes | str` is not assignable to parameter `initial_bytes` with type `Buffer` in function `_io.BytesIO.__init__` [bad-argument-type]
+ ERROR sklearn/datasets/tests/test_openml.py:169:25-53: Object of class `str` has no attribute `decode` [missing-attribute]
+ ERROR sklearn/datasets/tests/test_openml.py:182:30-51: Argument `bytes | str` is not assignable to parameter `initial_bytes` with type `Buffer` in function `_io.BytesIO.__init__` [bad-argument-type]
- ERROR sklearn/decomposition/_dict_learning.py:178:36-52: No matching overload found for function `int.__new__` called with arguments: (type[int], Unknown | None) [no-matching-overload]
+ ERROR sklearn/decomposition/_dict_learning.py:178:37-51: Argument `Unknown | None` is not assignable to parameter `x` with type `Buffer | SupportsIndex | SupportsInt | SupportsTrunc | str` in function `int.__new__` [bad-argument-type]
- ERROR sklearn/decomposition/_dict_learning.py:195:32-48: No matching overload found for function `int.__new__` called with arguments: (type[int], Unknown | None) [no-matching-overload]
+ ERROR sklearn/decomposition/_dict_learning.py:195:33-47: Argument `Unknown | None` is not assignable to parameter `x` with type `Buffer | SupportsIndex | SupportsInt | SupportsTrunc | str` in function `int.__new__` [bad-argument-type]
+ ERROR sklearn/decomposition/_dict_learning.py:2179:21-74: `/` is not supported between `floating` and `None` [unsupported-operation]
+ ERROR sklearn/decomposition/_dict_learning.py:2179:21-74: `/` is not supported between `ndarray[_WorkaroundForPyright, dtype[floating]]` and `None` [unsupported-operation]
- ERROR sklearn/decomposition/_lda.py:526:32-67: `float | ndarray[tuple[Any, ...], dtype[float64]] | Unknown` is not assignable to attribute `components_` with type `ndarray[tuple[Any, ...], dtype[float64]]` [bad-assignment]
+ ERROR sklearn/decomposition/_lda.py:526:32-67: `float | ndarray[tuple[Any, ...], dtype[float64]] | Unknown` is not assignable to attribute `components_` with type `ndarray` [bad-assignment]
+ ERROR sklearn/decomposition/_pca.py:740:28-77: `signedinteger[_8Bit] | Any` is not assignable to upper bound `complex128 | complexfloating[_32Bit, _32Bit] | float64 | floating[_32Bit]` of type variable `_SCT` [bad-specialization]
+ ERROR sklearn/discriminant_analysis.py:757:21-39: `@` is not supported between `str` and `ndarray[tuple[Any, ...], dtype[complex128 | complexfloating[_32Bit, _32Bit] | float64 | floating[_32Bit]]]` [unsupported-operation]
+ ERROR sklearn/discriminant_analysis.py:757:21-39: `@` is not supported between `tuple[str | Unknown, str | Unknown]` and `ndarray[tuple[Any, ...], dtype[complex128 | complexfloating[_32Bit, _32Bit] | float64 | floating[_32Bit]]]` [unsupported-operation]
- ERROR sklearn/dummy.py:308:28-314:18: No matching overload found for function `numpy.lib._shape_base_impl.tile` called with arguments: (list[ndarray[tuple[Any, ...], Unknown] | Unknown], list[Integral | int | Unknown]) [no-matching-overload]
+ ERROR sklearn/dummy.py:308:28-314:18: No matching overload found for function `numpy.lib._shape_base_impl.tile` called with arguments: (list[Unknown], list[Integral | int | Unknown]) [no-matching-overload]
- ERROR sklearn/ensemble/_forest.py:301:29-41: No matching overload found for function `scipy.sparse._construct.hstack` called with arguments: (Generator[Unknown | None, Unknown] | list[Unknown | None]) [no-matching-overload]
+ ERROR sklearn/ensemble/_forest.py:301:30-40: Argument `Generator[Unknown | None, Unknown] | list[Unknown | None]` is not assignable to parameter `blocks` with type `Sequence[_CanStack[@_]]` in function `scipy.sparse._construct.hstack` [bad-argument-type]
- ERROR sklearn/ensemble/_gb.py:2192:9-14: Class member `GradientBoostingRegressor.apply` overrides parent class `BaseGradientBoosting` in an inconsistent manner [bad-override]
+ ERROR sklearn/ensemble/_hist_gradient_boosting/gradient_boosting.py:333:13-47: Object of class `NoneType` has no attribute `output_indices_` [missing-attribute]
- ERROR sklearn/ensemble/_hist_gradient_boosting/tests/test_histogram.py:238:25-39: Cannot index into `ndarray[tuple[int, int], dtype[float64]]` [bad-index]
- ERROR sklearn/ensemble/_hist_gradient_boosting/tests/test_histogram.py:238:41-59: Cannot index into `ndarray[tuple[Any, ...], dtype[float64]]` [bad-index]
- ERROR sklearn/ensemble/_hist_gradient_boosting/tests/test_histogram.py:239:25-40: Cannot index into `ndarray[tuple[int, int], dtype[float64]]` [bad-index]
- ERROR sklearn/ensemble/_hist_gradient_boosting/tests/test_histogram.py:239:42-61: Cannot index into `ndarray[tuple[Any, ...], dtype[float64]]` [bad-index]
- ERROR sklearn/ensemble/_stacking.py:239:24-51: No matching overload found for function `getattr` called with arguments: (Any, Unknown | None) [no-matching-overload]
+ ERROR sklearn/ensemble/_stacking.py:239:36-50: Argument `Unknown | None` is not assignable to parameter `name` with type `str` in function `getattr` [bad-argument-type]
- ERROR sklearn/ensemble/_stacking.py:296:20-31: No matching overload found for function `getattr` called with arguments: (Unknown, Unknown | None) [no-matching-overload]
+ ERROR sklearn/ensemble/_stacking.py:296:26-30: Argument `Unknown | None` is not assignable to parameter `name` with type `str` in function `getattr` [bad-argument-type]
- ERROR sklearn/ensemble/_voting.py:71:36-67: No matching overload found for function `zip.__new__` called with arguments: (type[zip[_T_co]], Unknown, object) [no-matching-overload]
+ ERROR sklearn/ensemble/_voting.py:71:54-66: Argument `object` is not assignable to parameter `iter2` with type `Iterable[@_]` in function `zip.__new__` [bad-argument-type]
- ERROR sklearn/feature_extraction/image.py:346:51-64: No matching overload found for function `list.__init__` called with arguments: (int | tuple[Unknown, ...] | Unknown) [no-matching-overload]
+ ERROR sklearn/feature_extraction/image.py:346:52-63: Argument `int | tuple[Unknown, ...] | Unknown` is not assignable to parameter `iterable` with type `Iterable[@_]` in function `list.__init__` [bad-argument-type]
+ ERROR sklearn/feature_selection/_rfe.py:455:9-26: Class member `RFE._get_support_mask` overrides parent class `SelectorMixin` in an inconsistent manner [bad-override]
+ ERROR sklearn/feature_selection/_sequential.py:332:9-26: Class member `SequentialFeatureSelector._get_support_mask` overrides parent class `SelectorMixin` in an inconsistent manner [bad-override]
+ ERROR sklearn/feature_selection/_univariate_selection.py:787:9-26: Class member `SelectKBest._get_support_mask` overrides parent class `_BaseFilter` in an inconsistent manner [bad-override]
+ ERROR sklearn/feature_selection/tests/test_base.py:24:9-26: Class member `StepSelector._get_support_mask` overrides parent class `SelectorMixin` in an inconsistent manner [bad-override]
- ERROR sklearn/feature_selection/tests/test_sequential.py:110:57-88: No matching overload found for function `set.__init__` called with arguments: (ndarray[tuple[int], dtype[signedinteger[Unknown]]] | None) [no-matching-overload]
+ ERROR sklearn/feature_selection/tests/test_sequential.py:110:58-87: Argument `ndarray[tuple[int], dtype[signedinteger[Unknown]]] | None` is not assignable to parameter `iterable` with type `Iterable[Any]` in function `set.__init__` [bad-argument-type]
- ERROR sklearn/gaussian_process/_gpc.py:224:28-84: Unary `-` is not supported on `tuple[float | Unknown, Unknown]` [unsupported-operation]
+ ERROR sklearn/gaussian_process/_gpc.py:224:28-84: Unary `-` is not supported on `tuple[float | Unknown, ndarray]` [unsupported-operation]
- ERROR sklearn/gaussian_process/kernels.py:141:9-15: Class member `Hyperparameter.__eq__` overrides parent class `Hyperparameter` in an inconsistent manner [bad-override]
- ERROR sklearn/gaussian_process/kernels.py:639:9-15: Class member `CompoundKernel.__eq__` overrides parent class `Kernel` in an inconsistent manner [bad-override]
- ERROR sklearn/gaussian_process/kernels.py:646:9-22: Class member `CompoundKernel.is_stationary` overrides parent class `Kernel` in an inconsistent manner [bad-override]
- ERROR sklearn/gaussian_process/kernels.py:651:9-30: Class member `CompoundKernel.requires_vector_input` overrides parent class `Kernel` in an inconsistent manner [bad-override]
+ ERROR sklearn/gaussian_process/kernels.py:1442:7-10: Field `diag` has inconsistent types inherited from multiple base classes [inconsistent-inheritance]

... (truncated 290 lines) ...

more-itertools (https://github.com/more-itertools/more-itertools)
- ERROR more_itertools/more.py:950:21-77: No matching overload found for function `map.__new__` called with arguments: (type[map[_S]], type[map], repeat[(a: object, b: object, /) -> bool], repeat[tuple[int, ...]], permutations[tuple[int, ...]]) [no-matching-overload]
- ERROR more_itertools/more.py:1089:21-73: No matching overload found for function `map.__new__` called with arguments: (type[map[_S]], type[slice], range, range) [no-matching-overload]
+ ERROR more_itertools/more.py:950:22-25: Argument `type[map]` is not assignable to parameter `func` with type `((Any) -> Unknown, Iterable[Any], Iterable[Any]) -> @_` in function `map.__new__` [bad-argument-type]
+ ERROR more_itertools/more.py:950:27-41: Argument `repeat[(a: object, b: object, /) -> bool]` is not assignable to parameter `iterable` with type `Iterable[(Any) -> Unknown]` in function `map.__new__` [bad-argument-type]
+ ERROR more_itertools/more.py:1089:29-46: Argument `range` is not assignable to parameter `iterable` with type `Iterable[None]` in function `map.__new__` [bad-argument-type]
+ ERROR more_itertools/more.py:1089:48-72: Argument `range` is not assignable to parameter `iter2` with type `Iterable[None]` in function `map.__new__` [bad-argument-type]

aioredis (https://github.com/aio-libs/aioredis)
- ERROR aioredis/client.py:164:31-51: Cannot set item in `dict[str, str]` [unsupported-operation]
- ERROR aioredis/client.py:4622:32-49: No matching overload found for function `zip.__new__` called with arguments: (type[zip[_T_co]], list[Script], Coroutine[Unknown, Unknown, Unknown] | Unknown) [no-matching-overload]
+ ERROR aioredis/client.py:4622:42-48: Argument `Coroutine[Unknown, Unknown, Unknown] | Unknown` is not assignable to parameter `iter2` with type `Iterable[@_]` in function `zip.__new__` [bad-argument-type]
+ ERROR aioredis/client.py:4624:29-69: `Coroutine[Unknown, Unknown, Unknown] | Unknown` is not assignable to attribute `sha` with type `str` [bad-assignment]

pwndbg (https://github.com/pwndbg/pwndbg)
- ERROR pwndbg/aglib/dt.py:69:39-52: No matching overload found for function `int.__new__` called with arguments: (type[int], Value | None) [no-matching-overload]
+ ERROR pwndbg/aglib/dt.py:69:40-51: Argument `Value | None` is not assignable to parameter `x` with type `Buffer | SupportsIndex | SupportsInt | SupportsTrunc | str` in function `int.__new__` [bad-argument-type]
- ERROR pwndbg/aglib/dt.py:93:56-75: No matching overload found for function `int.__new__` called with arguments: (type[int], Value | None) [no-matching-overload]
- ERROR pwndbg/aglib/dt.py:114:32-45: No matching overload found for function `int.__new__` called with arguments: (type[int], Value | None) [no-matching-overload]
+ ERROR pwndbg/aglib/dt.py:93:57-74: Argument `Value | None` is not assignable to parameter `x` with type `Buffer | SupportsIndex | SupportsInt | SupportsTrunc | str` in function `int.__new__` [bad-argument-type]
+ ERROR pwndbg/aglib/dt.py:114:33-44: Argument `Value | None` is not assignable to parameter `x` with type `Buffer | SupportsIndex | SupportsInt | SupportsTrunc | str` in function `int.__new__` [bad-argument-type]
- ERROR pwndbg/aglib/heap/jemalloc.py:275:19-38: No matching overload found for function `int.__new__` called with arguments: (type[int], Value | None) [no-matching-overload]
- ERROR pwndbg/aglib/heap/jemalloc.py:344:39-53: No matching overload found for function `int.__new__` called with arguments: (type[int], Value | None) [no-matching-overload]
+ ERROR pwndbg/aglib/heap/jemalloc.py:275:20-37: Argument `Value | None` is not assignable to parameter `x` with type `Buffer | SupportsIndex | SupportsInt | SupportsTrunc | str` in function `int.__new__` [bad-argument-type]
+ ERROR pwndbg/aglib/heap/jemalloc.py:344:40-52: Argument `Value | None` is not assignable to parameter `x` with type `Buffer | SupportsIndex | SupportsInt | SupportsTrunc | str` in function `int.__new__` [bad-argument-type]
- ERROR pwndbg/aglib/heap/ptmalloc.py:269:27-51: No matching overload found for function `int.__new__` called with arguments: (type[int], Value | Unknown | None) [no-matching-overload]
- ERROR pwndbg/aglib/heap/ptmalloc.py:455:33-93: No matching overload found for function `int.__new__` called with arguments: (type[int], Value | None) [no-matching-overload]
+ ERROR pwndbg/aglib/heap/ptmalloc.py:269:28-50: Argument `Value | Unknown | None` is not assignable to parameter `x` with type `Buffer | SupportsIndex | SupportsInt | SupportsTrunc | str` in function `int.__new__` [bad-argument-type]
+ ERROR pwndbg/aglib/heap/ptmalloc.py:455:34-92: Argument `Value | None` is not assignable to parameter `x` with type `Buffer | SupportsIndex | SupportsInt | SupportsTrunc | str` in function `int.__new__` [bad-argument-type]
- ERROR pwndbg/aglib/heap/ptmalloc.py:463:42-465:14: No matching overload found for function `int.__new__` called with arguments: (type[int], Value | None) [no-matching-overload]
- ERROR pwndbg/aglib/heap/ptmalloc.py:602:27-51: No matching overload found for function `int.__new__` called with arguments: (type[int], Value | Unknown | None) [no-matching-overload]
+ ERROR pwndbg/aglib/heap/ptmalloc.py:464:17-75: Argument `Value | None` is not assignable to parameter `x` with type `Buffer | SupportsIndex | SupportsInt | SupportsTrunc | str` in function `int.__new__` [bad-argument-type]
+ ERROR pwndbg/aglib/heap/ptmalloc.py:602:28-50: Argument `Value | Unknown | None` is not assignable to parameter `x` with type `Buffer | SupportsIndex | SupportsInt | SupportsTrunc | str` in function `int.__new__` [bad-argument-type]
- ERROR pwndbg/aglib/heap/ptmalloc.py:1399:24-45: No matching overload found for function `int.__new__` called with arguments: (type[int], Value | None) [no-matching-overload]
+ ERROR pwndbg/aglib/heap/ptmalloc.py:1399:25-44: Argument `Value | None` is not assignable to parameter `x` with type `Buffer | SupportsIndex | SupportsInt | SupportsTrunc | str` in function `int.__new__` [bad-argument-type]
- ERROR pwndbg/aglib/kernel/__init__.py:443:19-54: No matching overload found for function `int.__new__` called with arguments: (type[int], int | None) [no-matching-overload]
- ERROR pwndbg/aglib/kernel/__init__.py:552:19-56: No matching overload found for function `int.__new__` called with arguments: (type[int], int | None) [no-matching-overload]
+ ERROR pwndbg/aglib/kernel/__init__.py:443:20-53: Argument `int | None` is not assignable to parameter `x` with type `Buffer | SupportsIndex | SupportsInt | SupportsTrunc | str` in function `int.__new__` [bad-argument-type]
+ ERROR pwndbg/aglib/kernel/__init__.py:552:20-55: Argument `int | None` is not assignable to parameter `x` with type `Buffer | SupportsIndex | SupportsInt | SupportsTrunc | str` in function `int.__new__` [bad-argument-type]
- ERROR pwndbg/aglib/kernel/__init__.py:761:16-52: No matching overload found for function `int.__new__` called with arguments: (type[int], int | None) [no-matching-overload]
+ ERROR pwndbg/aglib/kernel/__init__.py:761:17-51: Argument `int | None` is not assignable to parameter `x` with type `Buffer | SupportsIndex | SupportsInt | SupportsTrunc | str` in function `int.__new__` [bad-argument-type]
- ERROR pwndbg/aglib/kernel/macros.py:19:20-34: No matching overload found for function `int.__new__` called with arguments: (type[int], Value | None) [no-matching-overload]
+ ERROR pwndbg/aglib/kernel/macros.py:19:21-33: Argument `Value | None` is not assignable to parameter `x` with type `Buffer | SupportsIndex | SupportsInt | SupportsTrunc | str` in function `int.__new__` [bad-argument-type]
- ERROR pwndbg/aglib/kernel/nftables.py:96:48-76: No matching overload found for function `int.__new__` called with arguments: (type[int], Value | None) [no-matching-overload]
- ERROR pwndbg/aglib/kernel/nftables.py:113:64-92: No matching overload found for function `int.__new__` called with arguments: (type[int], Value | None) [no-matching-overload]
- ERROR pwndbg/aglib/kernel/nftables.py:118:19-47: No matching overload found for function `int.__new__` called with arguments: (type[int], Value | None) [no-matching-overload]
- ERROR pwndbg/aglib/kernel/nftables.py:124:15-50: No matching overload found for function `int.__new__` called with arguments: (type[int], Value | None) [no-matching-overload]
- ERROR pwndbg/aglib/kernel/nftables.py:148:18-46: No matching overload found for function `int.__new__` called with arguments: (type[int], Value | None) [no-matching-overload]
- ERROR pwndbg/aglib/kernel/nftables.py:151:16-42: No matching overload found for function `int.__new__` called with arguments: (type[int], Value | None) [no-matching-overload]
- ERROR pwndbg/aglib/kernel/nftables.py:172:20-48: No matching overload found for function `int.__new__` called with arguments: (type[int], Value | None) [no-matching-overload]
+ ERROR pwndbg/aglib/kernel/nftables.py:96:49-75: Argument `Value | None` is not assignable to parameter `x` with type `Buffer | SupportsIndex | SupportsInt | SupportsTrunc | str` in function `int.__new__` [bad-argument-type]
+ ERROR pwndbg/aglib/kernel/nftables.py:113:65-91: Argument `Value | None` is not assignable to parameter `x` with type `Buffer | SupportsIndex | SupportsInt | SupportsTrunc | str` in function `int.__new__` [bad-argument-type]
+ ERROR pwndbg/aglib/kernel/nftables.py:118:20-46: Argument `Value | None` is not assignable to parameter `x` with type `Buffer | SupportsIndex | SupportsInt | SupportsTrunc | str` in function `int.__new__` [bad-argument-type]
+ ERROR pwndbg/aglib/kernel/nftables.py:124:16-49: Argument `Value | None` is not assignable to parameter `x` with type `Buffer | SupportsIndex | SupportsInt | SupportsTrunc | str` in function `int.__new__` [bad-argument-type]
+ ERROR pwndbg/aglib/kernel/nftables.py:148:19-45: Argument `Value | None` is not assignable to parameter `x` with type `Buffer | SupportsIndex | SupportsInt | SupportsTrunc | str` in function `int.__new__` [bad-argument-type]
+ ERROR pwndbg/aglib/kernel/nftables.py:151:17-41: Argument `Value | None` is not assignable to parameter `x` with type `Buffer | SupportsIndex | SupportsInt | SupportsTrunc | str` in function `int.__new__` [bad-argument-type]
+ ERROR pwndbg/aglib/kernel/nftables.py:172:21-47: Argument `Value | None` is not assignable to parameter `x` with type `Buffer | SupportsIndex | SupportsInt | SupportsTrunc | str` in function `int.__new__` [bad-argument-type]
- ERROR pwndbg/aglib/kernel/nftables.py:271:46-61: No matching overload found for function `int.__new__` called with arguments: (type[int], Value | None) [no-matching-overload]
+ ERROR pwndbg/aglib/kernel/nftables.py:271:47-60: Argument `Value | None` is not assignable to parameter `x` with type `Buffer | SupportsIndex | SupportsInt | SupportsTrunc | str` in function `int.__new__` [bad-argument-type]
- ERROR pwndbg/aglib/kernel/nftables.py:391:16-45: No matching overload found for function `int.__new__` called with arguments: (type[int], Value | None) [no-matching-overload]
- ERROR pwndbg/aglib/kernel/nftables.py:456:16-45: No matching overload found for function `int.__new__` called with arguments: (type[int], Value | None) [no-matching-overload]
+ ERROR pwndbg/aglib/kernel/nftables.py:391:17-44: Argument `Value | None` is not assignable to parameter `x` with type `Buffer | SupportsIndex | SupportsInt | SupportsTrunc | str` in function `int.__new__` [bad-argument-type]
+ ERROR pwndbg/aglib/kernel/nftables.py:456:17-44: Argument `Value | None` is not assignable to parameter `x` with type `Buffer | SupportsIndex | SupportsInt | SupportsTrunc | str` in function `int.__new__` [bad-argument-type]
- ERROR pwndbg/aglib/kernel/vmmap.py:174:19-62: No matching overload found for function `int.__new__` called with arguments: (type[int], int | None) [no-matching-overload]
+ ERROR pwndbg/aglib/kernel/vmmap.py:174:20-61: Argument `int | None` is not assignable to parameter `x` with type `Buffer | SupportsIndex | SupportsInt | SupportsTrunc | str` in function `int.__new__` [bad-argument-type]
- ERROR pwndbg/aglib/nearpc.py:448:13-17: No matching overload found for function `int.__new__` called with arguments: (type[int], int | None) [no-matching-overload]
+ ERROR pwndbg/aglib/nearpc.py:448:14-16: Argument `int | None` is not assignable to parameter `x` with type `Buffer | SupportsIndex | SupportsInt | SupportsTrunc | str` in function `int.__new__` [bad-argument-type]
- ERROR pwndbg/aglib/objc.py:68:57-74: No matching overload found for function `int.__new__` called with arguments: (type[int], Value | None) [no-matching-overload]
+ ERROR pwndbg/aglib/objc.py:68:58-73: Argument `Value | None` is not assignable to parameter `x` with type `Buffer | SupportsIndex | SupportsInt | SupportsTrunc | str` in function `int.__new__` [bad-argument-type]
- ERROR pwndbg/color/disasm.py:137:26-62: No matching overload found for function `max` called with arguments: (Generator[int | None]) [no-matching-overload]
+ ERROR pwndbg/color/disasm.py:137:26-62: `int | None` is not assignable to upper bound `SupportsDunderGT[Any] | SupportsDunderLT[Any]` of type variable `SupportsRichComparisonT` [bad-specialization]
- ERROR pwndbg/commands/__init__.py:702:15-30: No matching overload found for function `int.__new__` called with arguments: (type[int], Value | str | None) [no-matching-overload]
+ ERROR pwndbg/commands/__init__.py:702:16-29: Argument `Value | str | None` is not assignable to parameter `x` with type `Buffer | SupportsIndex | SupportsInt | SupportsTrunc | str` in function `int.__new__` [bad-argument-type]
- ERROR pwndbg/commands/__init__.py:1018:23-36: No matching overload found for function `int.__new__` called with arguments: (type[int], Value | None) [no-matching-overload]
+ ERROR pwndbg/commands/__init__.py:1018:24-35: Argument `Value | None` is not assignable to parameter `x` with type `Buffer | SupportsIndex | SupportsInt | SupportsTrunc | str` in function `int.__new__` [bad-argument-type]
- ERROR pwndbg/commands/argv.py:33:48-61: No matching overload found for function `int.__new__` called with arguments: (type[int], Value | None) [no-matching-overload]
+ ERROR pwndbg/commands/argv.py:33:49-60: Argument `Value | None` is not assignable to parameter `x` with type `Buffer | SupportsIndex | SupportsInt | SupportsTrunc | str` in function `int.__new__` [bad-argument-type]
- ERROR pwndbg/commands/argv.py:60:48-61: No matching overload found for function `int.__new__` called with arguments: (type[int], Value | None) [no-matching-overload]
+ ERROR pwndbg/commands/argv.py:60:49-60: Argument `Value | None` is not assignable to parameter `x` with type `Buffer | SupportsIndex | SupportsInt | SupportsTrunc | str` in function `int.__new__` [bad-argument-type]
- ERROR pwndbg/commands/binder.py:360:75-89: No matching overload found for function `int.__new__` called with arguments: (type[int], Value | None) [no-matching-overload]
- ERROR pwndbg/commands/binder.py:367:35-49: No matching overload found for function `int.__new__` called with arguments: (type[int], Value | None) [no-matching-overload]
- ERROR pwndbg/commands/binder.py:371:35-49: No matching overload found for function `int.__new__` called with arguments: (type[int], Value | None) [no-matching-overload]
- ERROR pwndbg/commands/binder.py:373:35-49: No matching overload found for function `int.__new__` called with arguments: (type[int], Value | None) [no-matching-overload]
- ERROR pwndbg/commands/binder.py:375:35-49: No matching overload found for function `int.__new__` called with arguments: (type[int], Value | None) [no-matching-overload]
- ERROR pwndbg/commands/binder.py:390:45-63: No matching overload found for function `int.__new__` called with arguments: (type[int], Value | None) [no-matching-overload]
- ERROR pwndbg/commands/binder.py:394:84-98: No matching overload found for function `int.__new__` called with arguments: (type[int], Value | None) [no-matching-overload]
+ ERROR pwndbg/commands/binder.py:360:76-88: Argument `Value | None` is not assignable to parameter `x` with type `Buffer | SupportsIndex | SupportsInt | SupportsTrunc | str` in function `int.__new__` [bad-argument-type]
+ ERROR pwndbg/commands/binder.py:367:36-48: Argument `Value | None` is not assignable to parameter `x` with type `Buffer | SupportsIndex | SupportsInt | SupportsTrunc | str` in function `int.__new__` [bad-argument-type]
+ ERROR pwndbg/commands/binder.py:371:36-48: Argument `Value | None` is not assignable to parameter `x` with type `Buffer | SupportsIndex | SupportsInt | SupportsTrunc | str` in function `int.__new__` [bad-argument-type]
+ ERROR pwndbg/commands/binder.py:373:36-48: Argument `Value | None` is not assignable to parameter `x` with type `Buffer | SupportsIndex | SupportsInt | SupportsTrunc | str` in function `int.__new__` [bad-argument-type]
+ ERROR pwndbg/commands/binder.py:375:36-48: Argument `Value | None` is not assignable to parameter `x` with type `Buffer | SupportsIndex | SupportsInt | SupportsTrunc | str` in function `int.__new__` [bad-argument-type]
+ ERROR pwndbg/commands/binder.py:390:46-62: Argument `Value | None` is not assignable to parameter `x` with type `Buffer | SupportsIndex | SupportsInt | SupportsTrunc | str` in function `int.__new__` [bad-argument-type]
+ ERROR pwndbg/commands/binder.py:394:85-97: Argument `Value | None` is not assignable to parameter `x` with type `Buffer | SupportsIndex | SupportsInt | SupportsTrunc | str` in function `int.__new__` [bad-argument-type]
+ ERROR pwndbg/commands/context.py:250:16-27: Returned type `IO[Any]` is not assignable to declared return type `TextIO` [bad-return]
+ ERROR pwndbg/commands/context.py:664:27-32: Argument `str | None` is not assignable to parameter `object` with type `str` in function `list.append` [bad-argument-type]
- ERROR pwndbg/commands/flags.py:59:30-68: No matching overload found for function `int.__new__` called with arguments: (type[int], int | None) [no-matching-overload]
+ ERROR pwndbg/commands/flags.py:59:31-67: Argument `int | None` is not assignable to parameter `x` with type `Buffer | SupportsIndex | SupportsInt | SupportsTrunc | str` in function `int.__new__` [bad-argument-type]
- ERROR pwndbg/commands/kbpf.py:163:11-21: No matching overload found for function `int.__new__` called with arguments: (type[int], int | None) [no-matching-overload]
+ ERROR pwndbg/commands/kbpf.py:163:12-20: Argument `int | None` is not assignable to parameter `x` with type `Buffer | SupportsIndex | SupportsInt | SupportsTrunc | str` in function `int.__new__` [bad-argument-type]
- ERROR pwndbg/commands/kbpf.py:182:32-59: No matching overload found for function `int.__new__` called with arguments: (type[int], Value | None) [no-matching-overload]
- ERROR pwndbg/commands/kbpf.py:222:11-20: No matching overload found for function `int.__new__` called with arguments: (type[int], int | None) [no-matching-overload]
+ ERROR pwndbg/commands/kbpf.py:182:33-58: Argument `Value | None` is not assignable to parameter `x` with type `Buffer | SupportsIndex | SupportsInt | SupportsTrunc | str` in function `int.__new__` [bad-argument-type]
+ ERROR pwndbg/commands/kbpf.py:222:12-19: Argument `int | None` is not assignable to parameter `x` with type `Buffer | SupportsIndex | SupportsInt | SupportsTrunc | str` in function `int.__new__` [bad-argument-type]
- ERROR pwndbg/commands/msr.py:63:22-57: No matching overload found for function `int.__new__` called with arguments: (type[int], int | None) [no-matching-overload]
- ERROR pwndbg/commands/msr.py:64:22-57: No matching overload found for function `int.__new__` called with arguments: (type[int], int | None) [no-matching-overload]
+ ERROR pwndbg/commands/msr.py:63:23-56: Argument `int | None` is not assignable to parameter `x` with type `Buffer | SupportsIndex | SupportsInt | SupportsTrunc | str` in function `int.__new__` [bad-argument-type]
+ ERROR pwndbg/commands/msr.py:64:23-56: Argument `int | None` is not assignable to parameter `x` with type `Buffer | SupportsIndex | SupportsInt | SupportsTrunc | str` in function `int.__new__` [bad-argument-type]
- ERROR pwndbg/commands/rop.py:52:33-51: No matching overload found for function `bytes.__new__` called with arguments: (type[bytes], Unknown | None) [no-matching-overload]
+ ERROR pwndbg/commands/rop.py:52:34-50: Argument `Unknown | None` is not assignable to parameter `o` with type `Buffer | Iterable[SupportsIndex] | SupportsBytes | SupportsIndex` in function `bytes.__new__` [bad-argument-type]
- ERROR pwndbg/commands/start.py:28:15-41: No matching overload found for function `int.__new__` called with arguments: (type[int], int | None) [no-matching-overload]
- ERROR pwndbg/commands/start.py:33:11-33: No matching overload found for function `int.__new__` called with arguments: (type[int], int | None) [no-matching-overload]
+ ERROR pwndbg/commands/start.py:28:16-40: Argument `int | None` is not assignable to parameter `x` with type `Buffer | SupportsIndex | SupportsInt | SupportsTrunc | str` in function `int.__new__` [bad-argument-type]
+ ERROR pwndbg/commands/start.py:33:12-32: Argument `int | None` is not assignable to parameter `x` with type `Buffer | SupportsIndex | SupportsInt | SupportsTrunc | str` in function `int.__new__` [bad-argument-type]
- ERROR pwndbg/commands/telescope.py:159:11-20: No matching overload found for function `int.__new__` called with arguments: (type[int], int | None) [no-matching-overload]
+ ERROR pwndbg/commands/telescope.py:159:12-19: Argument `int | None` is not assignable to parameter `x` with type `Buffer | SupportsIndex | SupportsInt | SupportsTrunc | str` in function `int.__new__` [bad-argument-type]
- ERROR pwndbg/commands/telescope.py:181:19-38: No matching overload found for function `range.__new__` called with arguments: (type[range], int | None, int, int) [no-matching-overload]
- ERROR pwndbg/commands/telescope.py:228:35-54: No matching overload found for function `range.__new__` called with arguments: (type[range], int | None, int, int) [no-matching-overload]
+ ERROR pwndbg/commands/telescope.py:181:20-25: Argument `int | None` is not assignable to parameter `start` with type `SupportsIndex` in function `range.__new__` [bad-argument-type]
+ ERROR pwndbg/commands/telescope.py:228:36-41: Argument `int | None` is not assignable to parameter `start` with type `SupportsIndex` in function `range.__new__` [bad-argument-type]
- ERROR pwndbg/dbg_mod/gdb/__init__.py:228:19-46: No matching overload found for function `int.__new__` called with arguments: (type[int], Value | None) [no-matching-overload]
+ ERROR pwndbg/dbg_mod/gdb/__init__.py:228:20-45: Argument `Value | None` is not assignable to parameter `x` with type `Buffer | SupportsIndex | SupportsInt | SupportsTrunc | str` in function `int.__new__` [bad-argument-type]
- ERROR pwndbg/dbg_mod/lldb/__init__.py:267:31-58: No matching overload found for function `int.__new__` called with arguments: (type[int], Value | None) [no-matching-overload]
+ ERROR pwndbg/dbg_mod/lldb/__init__.py:267:32-57: Argument `Value | None` is not assignable to parameter `x` with type `Buffer | SupportsIndex | SupportsInt | SupportsTrunc | str` in function `int.__new__` [bad-argument-type]
- ERROR pwndbg/gdblib/ptmalloc2_tracking.py:473:22-41: No matching overload found for function `int.__new__` called with arguments: (type[int], Value | None) [no-matching-overload]
+ ERROR pwndbg/gdblib/ptmalloc2_tracking.py:473:23-40: Argument `Value | None` is not assignable to parameter `x` with type `Buffer | SupportsIndex | SupportsInt | SupportsTrunc | str` in function `int.__new__` [bad-argument-type]
- ERROR pwndbg/gdblib/ptmalloc2_tracking.py:549:22-41: No matching overload found for function `int.__new__` called with arguments: (type[int], Value | None) [no-matching-overload]
+ ERROR pwndbg/gdblib/ptmalloc2_tracking.py:549:23-40: Argument `Value | None` is not assignable to parameter `x` with type `Buffer | SupportsIndex | SupportsInt | SupportsTrunc | str` in function `int.__new__` [bad-argument-type]

setuptools (https://github.com/pypa/setuptools)
- ERROR setuptools/_distutils/command/build_ext.py:502:32-58: No matching overload found for function `zip.__new__` called with arguments: (type[zip[_T_co]], Unknown | None, list[Future[None]]) [no-matching-overload]
+ ERROR setuptools/_distutils/command/build_ext.py:502:33-48: Argument `Unknown | None` is not assignable to parameter `iter1` with type `Iterable[Unknown]` in function `zip.__new__` [bad-argument-type]
+ ERROR setuptools/_distutils/command/install.py:718:42-50: Argument `list[str]` is not assignable to parameter `iterable` with type `Iterable[PathLike[@_]]` in function `map.__new__` [bad-argument-type]
- ERROR setuptools/_distutils/command/install.py:718:23-51: No matching overload found for function `map.__new__` called with arguments: (type[map[_S]], Overload[
-   [AnyStr: (str, bytes)](path: PathLike[AnyStr]) -> AnyStr
-   [AnyOrLiteralStr: (str, bytes, LiteralString)](path: AnyOrLiteralStr) -> AnyOrLiteralStr
- ], list[str]) [no-matching-overload]
- ERROR setuptools/_distutils/dist.py:972:43-57: No matching overload found for function `Distribution.get_command_obj` called with arguments: (Command | str) [no-matching-overload]
+ ERROR setuptools/_distutils/dist.py:972:44-56: Argument `Command | str` is not assignable to parameter `command` with type `str` in function `Distribution.get_command_obj` [bad-argument-type]
+ ERROR setuptools/_distutils/extension.py:123:48-55: Argument `Iterable[PathLike[str] | str]` is not assignable to parameter `iterable` with type `Iterable[str]` in function `map.__new__` [bad-argument-type]
- ERROR setuptools/_distutils/extension.py:123:36-56: No matching overload found for function `map.__new__` called with arguments: (type[map[_S]], Overload[
-   (path: str) -> str
-   (path: bytes) -> bytes
-   [AnyStr: (str, bytes)](path: PathLike[AnyStr]) -> AnyStr
- ], Iterable[PathLike[str] | str]) [no-matching-overload]
+ ERROR setuptools/_distutils/filelist.py:67:52-62: Argument `list[str]` is not assignable to parameter `iterable` with type `Iterable[PathLike[@_]]` in function `map.__new__` [bad-argument-type]
- ERROR setuptools/_distutils/filelist.py:67:36-63: No matching overload found for function `map.__new__` called with arguments: (type[map[_S]], Overload[
-   [AnyStr: (str, bytes)](p: PathLike[AnyStr]) -> tuple[AnyStr, AnyStr]
-   [AnyOrLiteralStr: (str, bytes, LiteralString)](p: AnyOrLiteralStr) -> tuple[AnyOrLiteralStr, AnyOrLiteralStr]
- ], list[str]) [no-matching-overload]
+ ERROR setuptools/_distutils/tests/test_bdist_dumb.py:74:62-70: Argument `list[str]` is not assignable to parameter `iterable` with type `Iterable[PathLike[@_]]` in function `map.__new__` [bad-argument-type]
- ERROR setuptools/_distutils/tests/test_bdist_dumb.py:74:43-71: No matching overload found for function `map.__new__` called with arguments: (type[map[_S]], Overload[
-   [AnyStr: (str, bytes)](p: PathLike[AnyStr]) -> AnyStr
-   [AnyOrLiteralStr: (str, bytes, LiteralString)](p: AnyOrLiteralStr) -> AnyOrLiteralStr
- ], list[str]) [no-matching-overload]
+ ERROR setuptools/_distutils/tests/test_sdist.py:67:48-49: Argument `TextIOWrapper` is not assignable to parameter `iterable` with type `Iterable[LiteralString]` in function `map.__new__` [bad-argument-type]
- ERROR setuptools/_distutils/tests/test_sdist.py:67:36-50: No matching overload found for function `map.__new__` called with arguments: (type[map[_S]], Overload[
-   (self: LiteralString, chars: LiteralString | None = None, /) -> LiteralString
-   (self: str, chars: str | None = None, /) -> str
- ], TextIOWrapper) [no-matching-overload]
+ ERROR setuptools/_distutils/tests/test_util.py:84:24-29: `(*path: Unknown) -> str` is not assignable to attribute `join` with type `Overload[
+   (a: LiteralString, /, *paths: LiteralString) -> LiteralString
+   (a: StrPath, /, *paths: StrPath) -> str
+   (a: BytesPath, /, *paths: BytesPath) -> bytes
+ ]` [bad-assignment]
+ ERROR setuptools/_distutils/tests/test_util.py:108:24-29: `(*path: Unknown) -> str` is not assignable to attribute `join` with type `Overload[
+   (a: LiteralString, /, *paths: LiteralString) -> LiteralString
+   (a: StrPath, /, *paths: StrPath) -> str
+   (a: BytesPath, /, *paths: BytesPath) -> bytes
+ ]` [bad-assignment]
+ ERROR setuptools/_vendor/importlib_metadata/__init__.py:660:44-61: Argument `list[str]` is not assignable to parameter `iterable` with type `Iterable[LiteralString]` in function `map.__new__` [bad-argument-type]
- ERROR setuptools/_vendor/importlib_metadata/__init__.py:660:28-62: No matching overload found for function `map.__new__` called with arguments: (type[map[_S]], Overload[
-   (self: LiteralString, *args: LiteralString, **kwargs: LiteralString) -> LiteralString

... (truncated 2157 lines) ...
```

@github-actions

Primer Diff Classification

❌ 33 regression(s) | ✅ 60 improvement(s) | ❓ 2 needs review | 95 project(s) total | +1862, -1106 errors

33 regression(s) across streamlit, ibis, build, freqtrade, aioredis, scikit-build-core, aiohttp-devtools, black, openlibrary, egglog-python, aiohttp, packaging, dulwich, rotki, Tanjun, static-frame, hydpy, pytest-robotframework, mypy, core, trio, jinja, dedupe, parso, spack, PyGithub, attrs, schema_salad, prefect, materialize, DateType, pycryptodome, altair. Error kinds: bad-argument-type, no-matching-overload, bad-specialization. Caused by disambiguate_overloads(), overload_resolution().

60 improvement(s) across beartype, pandera, websockets, hydra-zen, dd-trace-py, sockeye, svcs, vision, numpy-stl, urllib3, Expression, PyWinCtl, scikit-learn, more-itertools, pwndbg, setuptools, comtypes, django-stubs, apprise, psycopg, operator, schemathesis, pip, tornado, mkdocs, cloud-init, antidote, mkosi, mitmproxy, bokeh, bandersnatch, pywin32, zulip, dragonchain, pandas, zope.interface, werkzeug, meson, sphinx, xarray, discord.py, kornia, paasta, poetry, pydantic, archinstall, scrapy, optuna, spark, pytest, aiortc, scipy-stubs, jax, cwltool, pyppeteer, cryptography, yarl, httpx-caching, scipy, colour.

Project Verdict Changes Error Kinds Root Cause
beartype ✅ Improvement +1, -2 bad-argument-type, no-matching-overload pyrefly/lib/alt/overload.rs
streamlit ❌ Regression +5, -5 bad-argument-type, no-matching-overload disambiguate_overloads()
pandera ✅ Improvement +1, -1 bad-argument-type, no-matching-overload pyrefly/lib/alt/overload.rs
websockets ✅ Improvement +2, -2 bad-argument-type, no-matching-overload pyrefly/lib/alt/overload.rs
hydra-zen ✅ Improvement +6, -5 bad-argument-type, bad-return overload_resolution()
dd-trace-py ✅ Improvement +8, -4 bad-argument-type disambiguate_overloads()
ibis ❌ Regression +76, -44 bad-argument-type pyrefly/lib/alt/overload.rs
build ❌ Regression +1 bad-argument-type disambiguate_overloads()
sockeye ✅ Improvement +6, -4 bad-argument-type, no-matching-overload pyrefly/lib/alt/overload.rs
svcs ✅ Improvement +2, -2 bad-argument-type, no-matching-overload pyrefly/lib/alt/overload.rs
vision ✅ Improvement +10, -8 bad-argument-type, no-matching-overload disambiguate_overloads()
numpy-stl ✅ Improvement +3 bad-assignment, unsupported-operation disambiguate_overloads()
urllib3 ✅ Improvement +5, -2 bad-argument-type, bad-return pyrefly/lib/alt/overload.rs
Expression ✅ Improvement +1, -1 bad-argument-type, no-matching-overload disambiguate_overloads()
PyWinCtl ✅ Improvement +1 bad-assignment pyrefly/lib/alt/overload.rs
freqtrade ❌ Regression +5, -1 bad-argument-type, bad-assignment disambiguate_overloads()
scikit-learn ✅ Improvement +169, -167 bad-argument-type, bad-assignment pyrefly/lib/alt/overload.rs
more-itertools ✅ Improvement +4, -2 bad-argument-type, no-matching-overload pyrefly/lib/alt/overload.rs
aioredis ❌ Regression +2, -2 bad-argument-type, bad-assignment disambiguate_overloads()
pwndbg ✅ Improvement +57, -55 bad-argument-type, bad-return pyrefly/lib/alt/overload.rs
setuptools ✅ Improvement +23, -12 bad-argument-type, no-matching-overload disambiguate_overloads()
scikit-build-core ❌ Regression +6, -6 bad-argument-type, no-matching-overload disambiguate_overloads()
comtypes ✅ Improvement +8, -3 bad-argument-type, missing-argument pyrefly/lib/alt/overload.rs
django-stubs ✅ Improvement +1 bad-argument-type pyrefly/lib/alt/overload.rs
apprise ✅ Improvement +56, -53 bad-argument-type, bad-assignment overload_resolution()
psycopg ✅ Improvement +2, -2 bad-argument-type, no-matching-overload pyrefly/lib/alt/overload.rs
operator ✅ Improvement +1, -1 bad-argument-type, no-matching-overload pyrefly/lib/alt/overload.rs
schemathesis ✅ Improvement -1 bad-return disambiguate_overloads()
aiohttp-devtools ❌ Regression +1, -1 bad-argument-type, no-matching-overload pyrefly/lib/alt/overload.rs
pip ✅ Improvement +19, -9 bad-argument-type, bad-assignment pyrefly/lib/alt/overload.rs
tornado ✅ Improvement +5, -3 overload-resolution pyrefly/lib/alt/overload.rs
mkdocs ✅ Improvement +6, -6 bad-argument-type, no-matching-overload disambiguate_overloads()
black ❌ Regression +1, -1 bad-argument-type, no-matching-overload disambiguate_overloads()
openlibrary ❌ Regression +14, -12 bad-argument-type, bad-specialization pyrefly/lib/alt/overload.rs
egglog-python ❌ Regression +1, -1 bad-argument-type, no-matching-overload disambiguate_overloads()
cloud-init ✅ Improvement +18, -62 bad-argument-type disambiguate_overloads()
antidote ✅ Improvement +3, -3 bad-argument-type, no-matching-overload pyrefly/lib/alt/overload.rs
aiohttp ❌ Regression +3 bad-argument-type disambiguate_overloads()
packaging ❌ Regression +2 bad-argument-type pyrefly/lib/alt/overload.rs
mkosi ✅ Improvement +4, -4 bad-argument-type, no-matching-overload pyrefly/lib/alt/overload.rs
mitmproxy ✅ Improvement +10, -18 bad-argument-type, no-matching-overload disambiguate_overloads()
dulwich ❌ Regression +6 bad-argument-type disambiguate_overloads()
rotki ❌ Regression +41, -10 bad-argument-type, missing-attribute pyrefly/lib/alt/overload.rs
Tanjun ❌ Regression +4, -1 bad-argument-type, no-matching-overload disambiguate_overloads()
bokeh ✅ Improvement +31, -20 bad-argument-type, bad-return pyrefly/lib/alt/overload.rs
bandersnatch ✅ Improvement +2, -1 bad-return, bad-specialization pyrefly/lib/alt/overload.rs
pywin32 ✅ Improvement +1, -1 bad-argument-type, no-matching-overload pyrefly/lib/alt/overload.rs
static-frame ❌ Regression +49, -63 bad-argument-type, bad-specialization disambiguate_overloads()
zulip ✅ Improvement +17, -16 bad-argument-type, no-matching-overload pyrefly/lib/alt/overload.rs
hydpy ❌ Regression +5, -6 bad-argument-type, no-matching-overload pyrefly/lib/alt/overload.rs
dragonchain ✅ Improvement +4, -4 bad-argument-type, no-matching-overload pyrefly/lib/alt/overload.rs
pandas ✅ Improvement +370, -135 bad-argument-type, bad-assignment pyrefly/lib/alt/overload.rs
pytest-robotframework ❌ Regression +1, -1 bad-argument-type, no-matching-overload pyrefly/lib/alt/overload.rs
zope.interface ✅ Improvement +7, -7 bad-argument-type, bad-specialization disambiguate_overloads()
werkzeug ✅ Improvement +21, -21 bad-argument-type, no-matching-overload disambiguate_overloads()
meson ✅ Improvement +21, -17 bad-argument-type, bad-assignment disambiguate_overloads()
mypy ❌ Regression +4 bad-argument-type The PR changes overload resolution to select the most gen...
pyodide ❓ Needs Review +1 bad-assignment
core ❌ Regression +46, -40 bad-argument-type, bad-assignment overload_resolution()
trio ❌ Regression +2, -1 bad-argument-type, invalid-yield pyrefly/lib/alt/overload.rs
sphinx ✅ Improvement +11, -8 bad-argument-type, bad-return pyrefly/lib/alt/overload.rs
xarray ✅ Improvement +34, -41 bad-argument-type, bad-assignment disambiguate_overloads()
jinja ❌ Regression +4, -3 bad-argument-type, no-matching-overload disambiguate_overloads()
dedupe ❌ Regression +4, -2 bad-argument-type, bad-return disambiguate_overloads()
discord.py ✅ Improvement +11, -3 bad-argument-type, bad-assignment disambiguate_overloads()
parso ❌ Regression +4, -4 bad-specialization disambiguate_overloads()
spack ❌ Regression +63, -38 bad-specialization pyrefly/lib/alt/overload.rs
kornia ✅ Improvement +1, -1 bad-argument-type, no-matching-overload pyrefly/lib/alt/overload.rs
paasta ✅ Improvement +3, -1 bad-return, bad-specialization pyrefly/lib/alt/overload.rs
poetry ✅ Improvement +1 bad-assignment disambiguate_overloads()
pydantic ✅ Improvement +3, -3 bad-argument-type, no-matching-overload pyrefly/lib/alt/overload.rs
archinstall ✅ Improvement +1, -1 bad-argument-type, no-matching-overload overload_resolution()
PyGithub ❌ Regression +2, -2 bad-argument-type, no-matching-overload pyrefly/lib/alt/overload.rs
scrapy ✅ Improvement +14, -13 bad-specialization pyrefly/lib/alt/overload.rs
optuna ✅ Improvement +4, -5 bad-argument-type, bad-assignment pyrefly/lib/alt/overload.rs
spark ✅ Improvement +53, -50 bad-argument-type, bad-return disambiguate_overloads()
attrs ❌ Regression +5, -4 bad-argument-type, bad-specialization disambiguate_overloads()
graphql-core ❓ Needs Review +1, -7 bad-argument-type, no-matching-overload
schema_salad ❌ Regression +1 bad-return pyrefly/lib/alt/overload.rs
prefect ❌ Regression +36, -8 bad-argument-type pyrefly/lib/alt/overload.rs
pytest ✅ Improvement +1, -1 bad-argument-type, no-matching-overload pyrefly/lib/alt/overload.rs
aiortc ✅ Improvement +3, -3 bad-argument-type, no-matching-overload pyrefly/lib/alt/overload.rs
scipy-stubs ✅ Improvement +82 unused-ignore disambiguate_overloads()
jax ✅ Improvement +211, -6 no-matching-overload disambiguate_overloads()
materialize ❌ Regression +2, -2 bad-argument-type, bad-specialization pyrefly/lib/alt/overload.rs
cwltool ✅ Improvement +5, -3 bad-assignment, bad-specialization disambiguate_overloads()
DateType ❌ Regression +8, -8 bad-argument-type, no-matching-overload disambiguate_overloads()
pyppeteer ✅ Improvement +2, -1 bad-argument-type, no-matching-overload disambiguate_overloads()
cryptography ✅ Improvement +1, -1 bad-argument-type, no-matching-overload pyrefly/lib/alt/overload.rs
yarl ✅ Improvement +2, -2 bad-argument-type, no-matching-overload pyrefly/lib/alt/overload.rs
httpx-caching ✅ Improvement +3 bad-argument-type, unsupported-operation pyrefly/lib/alt/overload.rs
pycryptodome ❌ Regression +1, -1 bad-specialization, no-matching-overload pyrefly/lib/alt/overload.rs
altair ❌ Regression +18, -12 bad-argument-type, bad-typed-dict-key disambiguate_overloads()
scipy ✅ Improvement +2, -3 bad-argument-type, no-matching-overload pyrefly/lib/alt/overload.rs
colour ✅ Improvement +83, -20 bad-argument-type, bad-return pyrefly/lib/alt/overload.rs
Detailed analysis

❌ Regression (33)

streamlit (+5, -5)

This is a regression. While the removed 'no-matching-overload' errors were false positives (improvement), the new 'bad-argument-type' errors are also false positives. The code correctly guards with is_list_like(body) before calling list(body), but pyrefly can't understand this dynamic check. Since mypy/pyright don't flag these and the code has # type: ignore comments, pyrefly is being stricter than the ecosystem standard. The net effect is neutral (5 false positives replaced by 5 different false positives), but since 4/5 new errors are pyrefly-only, this represents a regression in compatibility.
Attribution: The change to disambiguate_overloads() in pyrefly/lib/alt/overload.rs causes this. When spec_compliant_overloads=False, pyrefly now selects the 'most general' return type among ambiguous overloads instead of returning Any, leading to more precise type checking that surfaces these argument type mismatches.
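The dynamic-guard pattern that pyrefly cannot see through can be sketched as follows (hypothetical helper names modeled on the `is_list_like(body)` / `list(body)` pattern described above, not streamlit's actual code):

```python
from collections.abc import Iterable
from typing import Any


def is_list_like(obj: Any) -> bool:
    # Runtime guard: true for iterables that are not plain strings/bytes.
    return isinstance(obj, Iterable) and not isinstance(obj, (str, bytes))


def coerce_to_list(body: Any) -> list[Any]:
    # A static checker cannot connect the boolean result of is_list_like
    # back to a narrowed type for `body`, so `list(body)` is still checked
    # against body's declared (broad) type.
    if is_list_like(body):
        return list(body)
    return [body]
```

The usual way to make such a check visible to type checkers is to annotate the guard's return type as `typing.TypeGuard[...]` (or `TypeIs` in newer typing versions), which lets the `if` branch narrow `body`.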

ibis (+76, -44)

bad-argument-type: These replace less precise no-matching-overload errors with more specific messages about which argument is wrong. This is an improvement in error message quality.
missing-attribute on DataType: All 11 errors are pyrefly-only (not in mypy/pyright). These attributes (bounds, precision, scale, etc.) are standard attributes for data types in ibis. These appear to be false positives where pyrefly fails to resolve attributes that exist.
bad-return: The return type inference became more precise (from Unknown to LiteralString). This is an improvement.
bad-specialization: These pyrefly-only errors about object not meeting comparison protocol bounds appear to be false positives from overly strict type checking.
missing-attribute on Mapping: This is a real bug - Mapping doesn't have setdefault, only MutableMapping does. Both mypy and pyright confirm this. This is an improvement.

Overall: This is a mixed change. The improved precision in overload resolution and better error messages for list.__init__ calls are improvements. However, the 11 pyrefly-only missing-attribute errors on DataType appear to be false positives - these attributes likely exist but pyrefly fails to see them. The bad-specialization errors are also questionable. Overall, the net effect is negative due to the false positives.

Attribution: The changes in pyrefly/lib/alt/overload.rs (specifically the new disambiguate_overloads method) and the spec_compliant_overloads flag cause these changes. The PR makes overload resolution more precise by selecting the most general return type among matched overloads instead of falling back to Any.
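The confirmed `Mapping.setdefault` bug above can be checked at runtime: `setdefault` is declared on `MutableMapping` only, and a read-only mapping genuinely lacks it.

```python
from collections.abc import Mapping, MutableMapping
from types import MappingProxyType

# setdefault is a mutation method, so only MutableMapping declares it.
assert hasattr(MutableMapping, "setdefault")
assert not hasattr(Mapping, "setdefault")

# MappingProxyType is a Mapping but not a MutableMapping: no setdefault.
ro = MappingProxyType({"a": 1})
assert isinstance(ro, Mapping)
assert not hasattr(ro, "setdefault")
```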

build (+1)

This is a regression. The code map(str.strip, file_object) is standard Python for processing lines from a text file. TextIOWrapper yields str values when iterated, and str.strip is a valid function to map over them. The new overload resolution is incorrectly selecting an overload of map that requires Iterable[LiteralString] instead of accepting Iterable[str]. Since neither mypy nor pyright flag this, and the code would work fine at runtime, this is a false positive introduced by the PR's changes to overload resolution.
Attribution: The changes to overload resolution in pyrefly/lib/alt/overload.rs, specifically the new disambiguate_overloads() method and the logic that selects the 'most general' return type among matched overloads. The change also modified how overloads are selected when only one has the right arity.
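A minimal runnable version of the flagged pattern, using `io.StringIO` in place of a real file: iterating a text stream yields plain `str`, and `str.strip` maps over it fine at runtime.

```python
import io

# Iterating a text stream yields str lines (here StringIO stands in for
# an open text file / TextIOWrapper).
file_object = io.StringIO("  first \n second\n")
stripped = list(map(str.strip, file_object))
assert stripped == ["first", "second"]
```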

freqtrade (+5, -1)

This is a regression. The PR makes pyrefly stricter than both mypy and pyright by selecting a specific overload (the most general one) even when the arguments don't match perfectly, then reporting type errors against that specific overload. This creates new errors that other type checkers don't report. While the error messages are more specific, the new behavior diverges from the ecosystem standard without clear benefit.
Attribution: The changes to overload resolution in pyrefly/lib/alt/overload.rs, specifically the disambiguate_overloads() function that selects the most general return type, and the modification to use the selected overload's parameter types for error reporting even when there's an argument mismatch.

aioredis (+2, -2)

This is a regression. The PR attempts to improve overload resolution by selecting the 'most general' return type instead of Any, but this causes pyrefly to infer incorrect types (Coroutine instead of list) that lead to false positive errors. The typing spec requires falling back to Any for ambiguous overloads, and deviating from this causes compatibility issues. While removing the false positive dict assignment error is good, introducing new false positives that neither mypy nor pyright report makes pyrefly worse overall.
Attribution: The change in pyrefly/lib/alt/overload.rs that introduces disambiguate_overloads() to select the 'most general' return type instead of falling back to Any is causing pyrefly to select different overloads, leading to these type inference issues.

scikit-build-core (+6, -6)

This appears to be a regression. The pathspec.GitIgnoreSpec.from_lines() method should accept list[str] as it takes an iterable of strings. The new error message suggests pyrefly is being overly strict about the type parameter variance in Iterable[TypeVar[...]]. The fact that the same 6 errors changed from 'no-matching-overload' to 'bad-argument-type' at the exact same locations suggests the underlying issue remains - pyrefly is incorrectly rejecting valid calls. The PR's intent was to improve overload resolution, but in this case it's still rejecting valid code, just with a different error message.
Attribution: The change to overload resolution in pyrefly/lib/alt/overload.rs (specifically the disambiguate_overloads() method) caused pyrefly to report more specific type errors instead of generic 'no matching overload' errors. The new logic tries to select the most general return type among matched overloads.

aiohttp-devtools (+1, -1)

This is a regression. While the new error message is more specific about what's wrong, it's still a false positive. The code guarantees through control flow that self.app_factory_name cannot be None at line 200, but pyrefly cannot prove this. Since neither mypy nor pyright flag this error, pyrefly is being overly strict compared to the established ecosystem standard.
Attribution: The change to overload resolution in pyrefly/lib/alt/overload.rs (specifically the disambiguate_overloads function) now selects a more specific overload when argument counts can eliminate ambiguity, leading to more precise error messages about the actual type mismatch rather than generic 'no matching overload' errors.

black (+1, -1)

This is a regression. While the PR improves overload resolution in many cases (as evidenced by better numpy/scipy test results), it introduces a new false positive here. The GitIgnoreSpec.from_lines() method likely has overloads that accept different string-like types, and list[str] should be compatible. The new 'bad-argument-type' error is overly strict - pyrefly is now rejecting valid code that both mypy and pyright accept. The fact that it's pyrefly-only and the code is from a mature, well-tested project (black) strongly suggests this is a false positive.
Attribution: The change in pyrefly/lib/alt/overload.rs modified overload resolution to be more precise when spec_compliant_overloads is false (the default). The new logic in disambiguate_overloads() tries to find the most general return type instead of falling back to Any, which can expose type mismatches that were previously hidden.

openlibrary (+14, -12)

This is a regression. The PR makes pyrefly deviate from the typing spec's requirement to return Any for ambiguous overload calls. While the PR author claims this brings pyrefly 'closer to the ecosystem standard' by passing more numpy/scipy tests, it actually makes pyrefly stricter than both mypy and pyright on the OpenLibrary codebase. The fact that 7/14 new errors are pyrefly-only (not reported by mypy or pyright) indicates pyrefly is now too strict. The spec is clear that ambiguous overload calls should resolve to Any, not attempt to find a 'most general' type.
Attribution: The change to overload resolution in pyrefly/lib/alt/overload.rs (specifically the disambiguate_overloads function) now attempts to find the most general return type instead of returning Any. This causes pyrefly to select specific overloads even in ambiguous cases, leading to the new bad-argument-type errors.

egglog-python (+1, -1)

This is a regression. The new error contains @_ types which indicate type inference failure. The (@_, @_) -> @_ signature shows pyrefly cannot properly infer the types for the append function parameter. This is worse than the previous 'no matching overload' error because it suggests deeper type resolution issues. The code pattern reduce(append, x, A(())) is a valid and common Python idiom that should type check correctly.
Attribution: The changes to overload resolution in pyrefly/lib/alt/overload.rs, specifically the disambiguate_overloads() function and the logic that selects overloads based on argument count even when there are errors, caused this change. The new logic in lines 368-373 extends errors from the closest overload match.
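A self-contained stand-in for the `reduce(append, x, A(()))` idiom; the class `A` and the `append` helper below are hypothetical simplifications, not egglog-python's real API.

```python
from functools import reduce

class A:
    # Hypothetical container standing in for egglog-python's A.
    def __init__(self, items: tuple[int, ...]) -> None:
        self.items = items

def append(acc: A, item: int) -> A:
    return A(acc.items + (item,))

# The reduce(fn, iterable, initial) idiom the report calls valid and common.
result = reduce(append, [1, 2, 3], A(()))
assert result.items == (1, 2, 3)
```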

aiohttp (+3)

These are false positives. The code is correct - itertools.product returns an iterator of tuples, and map can accept it. The issue is that pyrefly's new overload resolution is inferring a more specific type for the map call than before, likely Iterable[Iterable[LiteralString]] instead of a more general type, causing a type mismatch with product[tuple[str, ...]]. Since mypy and pyright don't flag this, and the code is functionally correct, this represents pyrefly being overly strict compared to the ecosystem standard.
Attribution: The change to overload resolution in pyrefly/lib/alt/overload.rs (specifically the disambiguate_overloads() function) and the new spec_compliant_overloads configuration option caused these errors. When spec_compliant_overloads is false (the default), pyrefly now attempts to find the 'most general' return type instead of returning Any, which leads to more precise type inference that can expose type mismatches.
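A runnable reduction of the pattern: `map()` over the tuples yielded by `itertools.product` is valid at runtime, regardless of how precisely the `map` overload's return type is inferred.

```python
from itertools import product

pairs = product("ab", "xy")         # iterator of tuple[str, str]
joined = list(map("".join, pairs))  # map accepts the tuple iterator
assert joined == ["ax", "ay", "bx", "by"]
```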

packaging (+2)

These are false positives. The code is using map(str.lower, value) where value: list[str], which is perfectly valid Python. Pyrefly is incorrectly requiring LiteralString when regular str should be acceptable. Neither mypy nor pyright report these errors, confirming that pyrefly is being too strict about string type compatibility in this context.
Attribution: The change to overload resolution in pyrefly/lib/alt/overload.rs that selects the 'most general' return type among matched overloads is causing pyrefly to pick an overload of map that expects Iterable[LiteralString] instead of the more permissive Iterable[str] overload.

dulwich (+6)

This is a regression. The PR makes pyrefly stricter than both mypy and pyright by trying to resolve ambiguous overload calls more precisely. While the intent is good, it's creating false positives where the code is correct. The getattr() call with a fallback is a common Python pattern, and the assertion immediately after confirms the type. The type checker should handle this pattern gracefully rather than flagging it as an error.
Attribution: The changes to overload resolution in pyrefly/lib/alt/overload.rs, specifically the new disambiguate_overloads() method that tries to find the most general return type instead of falling back to Any, and the change that uses a single matching overload's types even when there are argument errors.
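The getattr-with-fallback pattern described above, reduced to a runnable sketch; the class and attribute names are hypothetical, not dulwich's.

```python
class Config:
    # Hypothetical attribute for illustration.
    encoding = "utf-8"

cfg = Config()
value = getattr(cfg, "encoding", None)
assert value is not None  # the assertion confirms the type for checkers
assert value.upper() == "UTF-8"
```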

rotki (+41, -10)

The PR improves overload resolution by removing false positive 'no-matching-overload' errors. However, it introduces a regression with the 32 missing-attribute errors on tuples. These appear to be NamedTuple instances that pyrefly now incorrectly infers as plain tuples due to selecting a different overload. While the PR's intent is good, the new errors indicate real type safety issues that would cause runtime AttributeErrors.
Attribution: The changes to overload resolution in pyrefly/lib/alt/overload.rs, specifically the new disambiguate_overloads method and modifications to resolve_overload, caused these errors. The PR makes pyrefly select more specific return types from overloads instead of falling back to Any.

Tanjun (+4, -1)

This is a regression. While the PR aims to improve overload resolution, it's applying stricter type checking than both mypy and pyright. The new errors about LiteralString are particularly problematic - the typing spec doesn't require map() to only accept Iterable[LiteralString], and this breaks common Python patterns. The fact that all new errors are pyrefly-only strongly suggests pyrefly is being too strict.
Attribution: The changes to overload resolution in pyrefly/lib/alt/overload.rs, specifically the new disambiguate_overloads() method and the logic to select an overload even when only one matches by argument count, caused these errors. The spec_compliant_overloads configuration option in the diff controls this behavior.

static-frame (+49, -63)

This is a regression. While the PR improves overload resolution precision and removes false positive 'no-matching-overload' errors, it introduces new errors that neither mypy nor pyright report. The PR acknowledges these new errors are 'arguably low-value' and stem from pyrefly being stricter than the typing spec requires. The fact that all 49 new errors are pyrefly-only, combined with the PR author's own assessment that some errors are of questionable value, indicates pyrefly is now too strict compared to the Python typing ecosystem standard.
Attribution: The changes in pyrefly/lib/alt/overload.rs, specifically the new disambiguate_overloads() method and modifications to overload_resolution(), cause pyrefly to select the 'most general' return type instead of Any for ambiguous overload calls. This leads to more precise types being inferred, which then triggers stricter type checking downstream, resulting in the new bad-argument-type and bad-specialization errors.

hydpy (+5, -6)

This is a regression. While the PR aims to improve overload resolution by finding the most general return type instead of falling back to Any, it has exposed underlying type inference failures where Never types are being inferred. The appearance of Never in BaseProperty[Never, int] indicates pyrefly is failing to properly infer types, and the new stricter overload resolution is now surfacing these as errors. Since mypy and pyright don't report these errors and the code would work at runtime, this represents pyrefly becoming incorrectly strict due to its own inference limitations.
Attribution: The change to overload resolution in pyrefly/lib/alt/overload.rs (specifically the disambiguate_overloads method) caused these errors by attempting to resolve ambiguous calls more precisely instead of falling back to Any, which exposed the underlying Never type inference failures.

pytest-robotframework (+1, -1)

This is a regression. The new error is incorrect - nullcontext() with no arguments should return nullcontext[None], not nullcontext[object]. The PR's attempt to be smarter about overload resolution has introduced a false positive. Neither mypy nor pyright report this error, indicating pyrefly is being too strict. The code is correct and commonly used in Python projects.
Attribution: The change in pyrefly/lib/alt/overload.rs at lines 369-374 and the new disambiguation logic (lines 599-693) caused this. Specifically, the logic that marks an overload as 'definitely matched' when it's the only arity-compatible one, combined with the new return type selection logic.
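The claim above can be checked at runtime: `contextlib.nullcontext()` with no arguments yields `None`, matching the stub's `nullcontext[None]` return type, while passing an argument yields that argument.

```python
from contextlib import nullcontext

# No argument: the context manager yields None (nullcontext[None]).
with nullcontext() as value:
    assert value is None

# With an argument: it yields that argument (nullcontext[int] here).
with nullcontext(42) as value:
    assert value == 42
```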

mypy (+4)

These are false positives caused by pyrefly's overload resolution changes. The code passes tuple[str, ...] to map(os.path.abspath, ...). Since os.path.abspath accepts str | PathLike[str] and returns str, the map call is valid. The @_ in the error message indicates a type inference failure - pyrefly is failing to properly infer the type parameter for PathLike. Since mypy and pyright don't flag this, and the code works at runtime, this is a regression in pyrefly's type inference.
Attribution: The PR changes overload resolution to select the most general return type among matched overloads. This likely affected how map.__new__ overloads are resolved, causing the PathLike[@_] type to appear where it shouldn't.
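A runnable form of the flagged call: `os.path.abspath` accepts `str` and returns `str`, so mapping it over a `tuple[str, ...]` is valid.

```python
import os

paths: tuple[str, ...] = ("a.txt", "b/c.txt")
absolute = list(map(os.path.abspath, paths))
# abspath always returns an absolute path string.
assert all(os.path.isabs(p) for p in absolute)
```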

core (+46, -40)

This is a regression. The PR implements behavior that goes beyond what the typing spec requires. While the spec says to return Any for ambiguous overload matches, pyrefly now tries to be 'smarter' by finding the most general return type. This leads to more precise types being inferred (e.g., set[tuple[str, Unknown, Unknown, Unknown]] instead of Any), which then fail to match expected parameter types. The fact that 76% of new errors are pyrefly-only confirms this is pyrefly being stricter than the established ecosystem standard. The PR author even acknowledges these errors are 'not wrong, per se, but some are arguably low-value.'
Attribution: The change to overload_resolution() in pyrefly/lib/alt/overload.rs that implements finding the 'most general' return type instead of returning Any for ambiguous overloads. This affects how pyrefly infers types when multiple overloads match, leading to more precise but sometimes incorrect type inference that cascades into these errors.

trio (+2, -1)

This is a regression. The PR intentionally deviates from the typing spec's requirement to return Any for ambiguous overload calls. While the new error messages are more specific, they represent pyrefly being stricter than the spec requires. The spec (https://typing.readthedocs.io/en/latest/spec/overload.html#overloading-rules) clearly states that ambiguous calls should return Any, but pyrefly now tries to be 'smarter' by selecting the most general type. Since mypy and pyright don't flag these errors, pyrefly is now stricter than the established ecosystem standard.
Attribution: The change to overload resolution in pyrefly/lib/alt/overload.rs (specifically the disambiguate_overloads function) now selects the most general return type instead of falling back to Any. This causes pyrefly to pick a specific overload and report argument type mismatches instead of reporting 'no matching overload'.

jinja (+4, -3)

This is a regression. The new errors are false positives - dict.update() can accept keyword arguments with any values, but pyrefly is incorrectly constraining the types. The fact that neither mypy nor pyright report these errors, combined with the runtime validity of the code, indicates pyrefly is applying overly strict type checking. While the PR aims to improve overload resolution, it's introducing incorrect errors in this case.
Attribution: The changes in pyrefly/lib/alt/overload.rs, specifically the new disambiguate_overloads() method and changes to find_matching_overload(), are causing pyrefly to select specific overloads more aggressively and report type mismatches instead of falling back to 'no matching overload' errors.

dedupe (+4, -2)

This is a regression. While the PR improves numpy/scipy test compatibility, it introduces false positives in real-world code. The bad-return errors are wrong because the code is correct - it returns str in all cases. The sequence indexing inference is overly conservative. The bad-argument-type errors, while more specific than before, still flag code that works correctly at runtime. The spec explicitly allows falling back to Any for ambiguous cases to avoid such false positives.
Attribution: The changes are caused by the new overload resolution logic in pyrefly/lib/alt/overload.rs, specifically the disambiguate_overloads() method that selects the most general return type, and the logic that reports specific argument type mismatches when only one overload matches by arity instead of reporting 'no matching overload'.

parso (+4, -4)

bad-specialization: Overly strict type variable bound checking on min() that neither mypy nor pyright enforce - regression
missing-attribute: False positive - _regex is a compiled pattern object, not a string - regression
bad-index: May be a real issue if token can be single char, but pyrefly-only suggests it's too strict - likely regression
bad-override: Could be a real type inconsistency since pyright also reports it - needs more context
removed NoneType errors: Correctly removed false positives about None not having methods - improvement

Overall: The changes show a mixed impact. The removed NoneType errors were false positives (improvement). However, the new errors appear to be a mix: the str.match error is clearly wrong since _regex is a compiled pattern, not a string (regression). The bad-specialization error may be overly strict type variable bound checking not required by other type checkers (regression). The net effect is negative - introducing false positives while fixing others.

Attribution: The changes in pyrefly/lib/alt/overload.rs modified overload resolution to be more precise. The new disambiguate_overloads() method attempts to find the most general return type instead of falling back to Any. This affects how ambiguous overload calls are resolved and typed.

spack (+63, -38)

bad-specialization: These errors claim Spec doesn't support comparison operations for sorted(), but this is likely a false positive if the code works at runtime. Pyrefly seems to be failing to recognize that Spec implements comparison methods.
bad-argument-type: These are more specific than the old 'no-matching-overload' errors, which is good for error quality. However, some may be false positives if pyrefly is incorrectly inferring types.
removed no-matching-overload: Replacing generic 'no matching overload' with specific parameter mismatch errors is an improvement in error message quality.

Overall: This appears to be a mixed change. The removal of generic 'no-matching-overload' errors in favor of more specific error messages is an improvement in error quality. However, the new 'bad-specialization' errors for sorted() seem incorrect - if Spec objects are being sorted successfully at runtime, they must support comparison. The high number of pyrefly-only errors (36/63) suggests pyrefly may be too strict here. Overall, this seems like a regression due to the new errors being likely false positives, even though the error message quality improved.

Attribution: The changes to overload resolution in pyrefly/lib/alt/overload.rs, specifically the new disambiguate_overloads method and the logic to report specific parameter mismatches instead of generic 'no matching overload' errors when only one overload matches by arity.
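A hypothetical reduction of the `sorted(Spec)` case: a class that implements `__lt__` satisfies `sorted()`'s comparison requirement at runtime. The `Spec` name and its field below are stand-ins, not spack's real class.

```python
from functools import total_ordering

@total_ordering
class Spec:
    # Hypothetical simplification of a sortable Spec-like class.
    def __init__(self, name: str) -> None:
        self.name = name

    def __eq__(self, other: object) -> bool:
        return isinstance(other, Spec) and self.name == other.name

    def __lt__(self, other: "Spec") -> bool:
        return self.name < other.name

# Implementing __lt__ is enough for sorted() to work at runtime.
ordered = sorted([Spec("zlib"), Spec("cmake")])
assert [s.name for s in ordered] == ["cmake", "zlib"]
```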

PyGithub (+2, -2)

This appears to be a regression. The code passes a module where an exception class is expected. While this is technically incorrect typing, the fact that neither mypy nor pyright flag it (and it's in test code of an established project) suggests pyrefly is being overly strict or has an inference failure. The @_ type in the error message indicates pyrefly failed to properly infer the type. The change from 'no matching overload' to a more specific error doesn't fix the underlying issue that pyrefly can't handle this pattern that other type checkers accept.
Attribution: The change in pyrefly/lib/alt/overload.rs in the find_best_overload function now sets matched = true when there's only one arity-compatible overload (line 325). This causes pyrefly to report specific argument type errors from that single overload instead of a generic 'no matching overload' error.

attrs (+5, -4)

This is a regression. While removing the false positive 'no matching overload' errors is good, the new errors contain type inference failures (@_ and Never types) that indicate pyrefly's new overload resolution is breaking down in some cases. The fact that neither mypy nor pyright report most of these new errors, combined with the presence of inference failure markers, strongly suggests these are false positives introduced by the new overload resolution logic.
Attribution: The changes to overload resolution in pyrefly/lib/alt/overload.rs, specifically the new disambiguate_overloads() method and the logic to select the most general return type, caused these changes. The spec_compliant_overloads flag controls whether to use the new behavior.

schema_salad (+1)

This is a regression. The new overload resolution logic is incorrectly generalizing the return type of os.fdopen(..., 'wb', ...) to IO[Any] instead of the correct BufferedWriter. The code is correct and the type annotation is accurate - os.fdopen with mode 'wb' does return a BufferedWriter. This is a false positive introduced by the PR's new 'most general type' heuristic for ambiguous overload resolution.
Attribution: The change to overload resolution in pyrefly/lib/alt/overload.rs (specifically the disambiguate_overloads function) is selecting IO[Any] as the 'most general' return type for os.fdopen instead of the more specific BufferedWriter that should be inferred from the 'wb' mode argument.
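The claim above can be verified at runtime: `os.fdopen` with mode 'wb' and default buffering returns an `io.BufferedWriter`, not a bare `IO[Any]`.

```python
import io
import os
import tempfile

fd, path = tempfile.mkstemp()
try:
    # Mode 'wb' with default buffering gives a BufferedWriter.
    f = os.fdopen(fd, "wb")
    assert isinstance(f, io.BufferedWriter)
    f.close()
finally:
    os.remove(path)
```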

prefect (+36, -8)

bad-argument-type: Mix of real bugs (HTTPMethod vs str) and potential false positives from improved overload resolution
not-async: Task[@_, @_] inference failures - the @_ markers suggest pyrefly lost type information
missing-attribute: Task missing attributes - likely false positives from inference failures

Overall: While some new errors appear to catch real bugs (like the HTTPMethod/str mismatch), the presence of @_ types in error messages and the fact that many errors are pyrefly-only suggests type inference regressions. The PR improves error specificity but introduces false positives through inference failures. The net effect is negative due to the inference issues.

Attribution: The changes in pyrefly/lib/alt/overload.rs modified overload resolution to be more precise, particularly the disambiguate_overloads function that tries to find the most general return type instead of falling back to Any. This causes pyrefly to report specific type mismatches instead of generic overload errors.

materialize (+2, -2)

This is a regression. The PR intentionally makes pyrefly deviate from both the typing spec and the behavior of mypy/pyright to be 'smarter' about ambiguous overload resolution. However, this creates new false positive errors that neither mypy nor pyright report. The first error about int | None not being assignable to type variable bounds is particularly problematic - min() should work with optional integers. The second error about Generator[dict[Any, Any]] not being assignable to Iterable[SequenceNotStr[Any]] also seems incorrect. While the PR may reduce some 'no-matching-overload' errors, it introduces new incompatibilities with the ecosystem.
Attribution: The changes to pyrefly/lib/alt/overload.rs, specifically the new disambiguate_overloads function and modifications to overload resolution logic, caused these changes. The function now attempts to find a 'most general' return type among matched overloads instead of falling back to Any.

DateType (+8, -8)

This appears to be a regression. While the PR improves overload resolution in general (as evidenced by better numpy/scipy-stubs results), it's producing nonsensical error messages for datetype. The error 'Argument type[AwareDateTime] is not assignable to parameter cls with type type[AwareDateTime]' indicates a type inference bug where identical types are considered incompatible. The removed errors were more accurate - there genuinely were overload matches for these calls.
Attribution: The changes to disambiguate_overloads() and disambiguate_overloads_spec_compliant() in pyrefly/lib/alt/overload.rs caused this behavior change. The new logic attempts to find the most general return type instead of returning Any, and the error reporting changed from 'no matching overload' to specific argument type mismatches.

pycryptodome (+1, -1)

This is a regression. The code has a genuine bug - max() cannot compare int with None. The old error correctly identified that no overload matches max(int, None). The new error message is less clear about the actual problem. While both errors catch the bug, the change represents worse error quality.
Attribution: The change to overload resolution in pyrefly/lib/alt/overload.rs (specifically the disambiguate_overloads function) caused pyrefly to select a more specific overload and report a different error type (bad-specialization instead of no-matching-overload).
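The flagged code really is a bug, as a quick runtime check shows: `max()` cannot compare an `int` with `None`.

```python
# Comparing int with None raises TypeError, so max(3, None) fails.
try:
    max(3, None)
except TypeError:
    failed = True
else:
    failed = False
assert failed
```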

altair (+18, -12)

This is a regression. While removing 12 false positive 'no-matching-overload' errors is good, the PR introduces 18 new errors, many of which are incorrect. The pandas.Series.where(condition, None) pattern is extremely common and valid - None should be accepted. The PR makes pyrefly stricter than the typing spec requires by trying to find the 'most general' type instead of following the spec's first-match rule. The net effect is worse - trading 12 false positives for 18 new ones, including blocking common pandas idioms.
Attribution: The changes in pyrefly/lib/alt/overload.rs modified overload resolution to use disambiguate_overloads() which tries to find the most general return type instead of returning Any for ambiguous calls. This causes pyrefly to be stricter about argument types even when only one overload matches by arity.

✅ Improvement (60)

beartype (+1, -2)

The PR improves overload resolution to provide more specific error messages. Instead of reporting 'no matching overload found' when a call doesn't match any overload signature, it now identifies which specific parameter has a type mismatch when only one overload could match based on argument count. This helps developers understand exactly what needs to be fixed. The code has a genuine type error - passing HintSign | None where HintSign is expected.
Attribution: The changes to overload resolution in pyrefly/lib/alt/overload.rs, specifically the logic that selects a specific overload when only one matches based on argument count, caused pyrefly to report the more precise bad-argument-type error instead of the generic no-matching-overload error.

pandera (+1, -1)

This appears to be an improvement. The old error incorrectly claimed no overload matched when pandas Series clearly has a rename method accepting strings. The new error at least identifies a specific type mismatch to investigate. While the new error may still be a false positive (since the code likely works at runtime), it's more precise than claiming no overload exists. The PR's goal of better overload resolution is working - it's now finding an overload to check against rather than giving up entirely.
Attribution: The change in pyrefly/lib/alt/overload.rs in the disambiguate_overloads function (lines 596-640) now attempts to find the most general return type among matched overloads instead of immediately returning Any. This allows pyrefly to select a specific overload for argument checking rather than giving up with 'no matching overload'.

websockets (+2, -2)

This is an improvement. The new error messages are more specific and helpful than the old ones. While the code is actually correct at runtime (due to the in check), pyrefly is correctly identifying a type inconsistency. The new 'bad-argument-type' error pointing to the specific parameter is clearer than the generic 'no-matching-overload' error. This helps developers understand exactly what needs to be fixed (the type narrowing issue).
Attribution: The change to overload resolution in pyrefly/lib/alt/overload.rs (specifically the disambiguate_overloads function and the logic to select a specific overload even with arity mismatches) caused pyrefly to select a specific int overload and report a more precise error message instead of the generic 'no matching overload'

hydra-zen (+6, -5)

This is an improvement. The removed errors showed imprecise Unknown types and generic error messages. The new errors show more specific types (type instead of Unknown) and clearer error messages (bad-argument-type with specific parameter names instead of generic no-matching-overload). While the new bad-argument-type errors are pyrefly-only, they appear to be catching real type mismatches that were previously hidden by the less precise overload resolution. The PR improves type inference precision by implementing a smarter overload disambiguation strategy.
Attribution: The change to overload_resolution() in pyrefly/lib/alt/overload.rs implements the new disambiguation logic that selects the most general return type among matched overloads instead of returning Any.

dd-trace-py (+8, -4)

bad-argument-type: These replace previous 'no-matching-overload' errors with more specific parameter mismatch information. For example, dict.get(str | None) now correctly identifies that str | None doesn't match the expected str parameter, and since only one overload matches by argument count, it uses that overload's return type. This is more precise error reporting.
unsupported-operation: The bitwise AND operations between SupportsIndex and Literal[63] are being flagged. This appears to be a side effect of the more precise type inference - pyrefly is now inferring more specific types instead of Any, revealing these operations that may not be supported. However, these are pyrefly-only errors, suggesting they may be false positives.
bad-return: Return type mismatches where Literal[True] | int or similar unions don't match declared return types. These arise because pyrefly is now computing more precise return types from overloaded calls instead of falling back to Any. The one marked by both mypy and pyright is likely a real issue.

Overall: This is an improvement. The PR makes pyrefly's overload resolution more precise by: 1) Better identifying which specific arguments don't match (replacing vague 'no-matching-overload' with specific 'bad-argument-type' errors), 2) When only one overload matches by arity, using its return type instead of Any, 3) Finding the most general return type among ambiguous matches instead of immediately falling back to Any. While this deviates from the strict typing spec (which requires Any for ambiguous calls), it provides more useful type information and matches what users expect. The removed errors were false positives that provided less actionable information.

Attribution: The changes in pyrefly/lib/alt/overload.rs (specifically the disambiguate_overloads() and disambiguate_overloads_spec_compliant() methods) and the modification to overload resolution logic that selects the most general return type are causing these changes. The key change is in how ambiguous overload calls are resolved - instead of returning Any, it now tries to find a most general type.

sockeye (+6, -4)

This is an improvement. Pyrefly is now correctly catching real bugs where None or optional types are being passed to functions expecting non-optional iterables. The removed 'no-matching-overload' errors were misleading - there were matching overloads, just with type mismatches. The new error messages are more precise and helpful. While mypy/pyright don't catch these, that appears to be a limitation on their part - the code genuinely has type errors that would fail at runtime.
Attribution: The changes to overload resolution in pyrefly/lib/alt/overload.rs, specifically the logic that selects a single overload when only one matches by arity, and the new disambiguate_overloads function that finds the most general return type.

svcs (+2, -2)

This is an improvement. The old behavior incorrectly reported 'no-matching-overload' when there was actually a matching overload based on arity. The new behavior correctly identifies the specific type mismatch (type[S1] not assignable to type[@_]). The @_ type in the error message indicates a type inference issue - pyrefly is failing to properly infer the type parameter in Container.get(). However, the overall change from vague 'no-matching-overload' to specific 'bad-argument-type' errors is more helpful for debugging, even though the @_ suggests pyrefly still has inference problems.
Attribution: The change to overload resolution in pyrefly/lib/alt/overload.rs (specifically the logic that sets matched || arity_compatible_overloads.len() == 1) caused this behavior change. When only one overload has the correct arity, it's now considered matched.

vision (+10, -8)

This is an improvement. The PR makes pyrefly's overload resolution more precise and useful. While it deviates from the typing spec's requirement to return Any for ambiguous overloads, it provides better type inference that aligns with ecosystem expectations (passing more numpy/scipy-stubs tests). The new errors are more specific and actionable than the generic 'no-matching-overload' errors they replace. The net effect is better developer experience with more precise error messages.
Attribution: The changes in pyrefly/lib/alt/overload.rs, specifically the new disambiguate_overloads() method that finds the most general return type among matched overloads, directly caused these error changes.

numpy-stl (+3)

Pyrefly is now catching real type mismatches that were previously hidden. The numpy-stl code has genuine issues: (1) np.cross() can return float64 arrays even with float32 inputs, but the code expects float32, and (2) line 447 doesn't check if normals is None before using it in arithmetic. These are bugs that should be fixed.
Attribution: The change to overload resolution in pyrefly/lib/alt/overload.rs that selects the most general return type instead of falling back to Any, specifically the new disambiguate_overloads() method.

urllib3 (+5, -2)

This is an improvement. The PR makes pyrefly's overload resolution smarter by default (with spec_compliant_overloads=false). The new errors are more specific - instead of vague 'no matching overload' messages, pyrefly now identifies the exact type mismatches. For example, in test_https.py:760, instead of saying no overload matches contextlib.nullcontext(), it correctly identifies that the issue is nullcontext[object] vs nullcontext[None]. The removed errors were false positives where pyrefly was giving up too early. While these new errors are pyrefly-only (mypy/pyright don't report them), they represent genuine type inconsistencies that pyrefly is now smart enough to catch.
Attribution: The changes in pyrefly/lib/alt/overload.rs (specifically the disambiguate_overloads function) and the modified overload resolution logic that now attempts to find the 'most general' return type among matched overloads caused these changes.
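
The nullcontext mismatch described above comes from how contextlib.nullcontext specializes its type parameter; a small sketch (independent of urllib3's actual test) shows the two specializations involved:

```python
from contextlib import nullcontext

# nullcontext() with no argument specializes to nullcontext[None]; passing
# a value of type T yields nullcontext[T]. The mismatch pyrefly reports is
# a nullcontext[object] flowing where nullcontext[None] is expected.
with nullcontext() as a:
    assert a is None  # enter_result defaults to None

with nullcontext(object()) as b:
    assert b is not None  # this one is nullcontext[object]
```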

Expression (+1, -1)

This is an improvement. The new error message is more specific and actionable than the old 'no matching overload' error. While the code would work at runtime, pyrefly is correctly identifying a type-level issue with contravariance in the function parameter. The error helps developers understand exactly what type mismatch is occurring rather than just saying no overload matched.
Attribution: The change to overload resolution in pyrefly/lib/alt/overload.rs now selects a specific overload and reports its type mismatch instead of giving up with 'no matching overload'. The disambiguate_overloads() function tries to find the most general return type.

PyWinCtl (+1)

This is an improvement. The PR made pyrefly more precise about tracking return types from overloaded functions. As a result, pyrefly now correctly identifies that create_resource_object('window', d.id) returns a Resource type, not the more specific Window type that the variable annotation expects. This is a genuine type mismatch that was previously hidden when pyrefly was less precise about overload return types. While mypy/pyright don't catch this, it represents pyrefly correctly enforcing type safety.
Attribution: The change to overload resolution in pyrefly/lib/alt/overload.rs that selects more precise return types instead of falling back to Any caused pyrefly to better track the actual return type of create_resource_object, revealing this type mismatch

scikit-learn (+169, -167)

This is an improvement. The PR makes pyrefly's overload resolution smarter and more aligned with how real-world type checkers (mypy, pyright) behave. While the typing spec says to return Any for ambiguous overloads, both mypy and pyright try to be more precise, and many libraries rely on this behavior. The new errors are more specific and actionable (e.g., pointing to the exact type mismatch rather than saying 'no matching overload'). The fact that pyrefly now passes more numpy and scipy-stubs assert_type tests (134→127 failures for numpy, 49→19 for scipy-stubs) demonstrates this is bringing pyrefly closer to the established ecosystem standard.
Attribution: The changes in pyrefly/lib/alt/overload.rs modified overload resolution logic. The key change is in the disambiguate_overloads function which now tries to find the 'most general' return type among matched overloads instead of immediately falling back to Any. This allows pyrefly to return more precise types like int | None instead of Any when the overloads have compatible return types.

more-itertools (+4, -2)

The code has a real bug - it's passing the map class where a callable is expected in the nested map calls. The new error messages correctly identify this type mismatch. While the old 'no-matching-overload' errors were technically correct, the new 'bad-argument-type' errors are more specific and helpful. This is an improvement in pyrefly's error reporting quality.
Attribution: The changes in pyrefly/lib/alt/overload.rs, specifically the disambiguate_overloads function and the logic to select overloads based on argument count even when there are errors, caused pyrefly to provide more specific error messages instead of generic 'no-matching-overload' errors.

pwndbg (+57, -55)

This is an improvement. The PR makes pyrefly's error messages more helpful by: 1) When only one overload matches by argument count, it selects that overload and reports specific type mismatches rather than a generic 'no matching overload'. 2) The new bad-argument-type errors pinpoint the exact parameter that has a type mismatch (e.g., 'Value | None is not assignable to parameter x with type Buffer | SupportsIndex | ...'). This gives users clearer guidance on what needs to be fixed. The fact that mypy doesn't flag these at all while pyright flags most of them suggests these are real type issues in the code that pyrefly is correctly catching.
Attribution: The change to overload resolution in pyrefly/lib/alt/overload.rs (specifically the disambiguate_overloads function and the modification to treat single arity-compatible overloads as matched) caused these errors to change from no-matching-overload to bad-argument-type.

setuptools (+23, -12)

Pyrefly improved its error messages. Instead of reporting vague 'no matching overload' errors, it now reports specific parameter type mismatches when it can determine which overload is intended based on argument count. The errors correctly identify that untyped values (Unknown) don't match typed parameters (Iterable[Unknown]). This helps developers understand exactly what needs to be fixed.
Attribution: The changes to disambiguate_overloads() in pyrefly/lib/alt/overload.rs and the modification to use the return type of arity-compatible overloads even when there are type errors caused these changes. Specifically, the code at line 373 that sets matched || arity_compatible_overloads.len() == 1.

comtypes (+8, -3)

This is an improvement. The 2 read-only errors correctly catch real bugs where code tries to assign to NumPy's read-only flat property. The more specific error messages replacing generic 'no matching overload' errors provide better diagnostics. While some new errors may be overly strict (like the byref type issue), the net effect is better: catching real bugs and providing clearer error messages. The PR successfully improves overload resolution to be more precise while maintaining compatibility with the typing spec.
Attribution: The changes to overload resolution in pyrefly/lib/alt/overload.rs (specifically the new disambiguate_overloads method and modifications to resolve_overloaded_call) caused these changes. The PR makes pyrefly select the most general return type instead of falling back to Any, and reports more specific errors when only one overload matches by arity.

django-stubs (+1)

This is an improvement. Pyrefly is correctly identifying that this call doesn't match any valid overload of UniqueConstraint.__init__. The new error message is more specific than before - instead of just saying 'no matching overload', it explains that list[str] cannot be assigned to None for the fields parameter. This is a real type error that all three type checkers (mypy, pyright, and pyrefly) agree on, as evidenced by the ignore comments in the code. The comment explicitly states 'There's no overload case for passing both expression and 'fields'', confirming this is invalid usage. The more specific error message helps users understand exactly what's wrong with their code.
Attribution: The change in pyrefly/lib/alt/overload.rs modified overload resolution to report more specific errors. Instead of reporting 'no matching overload', it now reports the specific type mismatch from the closest matching overload.

apprise (+56, -53)

This is an improvement. The PR makes pyrefly's overload resolution more precise, replacing vague 'no matching overload' errors with specific type mismatch errors. While this creates more errors in apprise, they are pointing to real type inconsistencies where Unknown values are being passed to typed parameters. The removed errors were false positives - there were matching overloads, pyrefly just couldn't decide between them. The new behavior provides better error messages and more actionable feedback to developers.
Attribution: The changes to overload_resolution() in pyrefly/lib/alt/overload.rs and the new disambiguate_overloads() method cause pyrefly to select the most general return type instead of falling back to Any. This leads to more precise type inference, which propagates through the code and reveals type mismatches that were previously hidden.

psycopg (+2, -2)

This is an improvement. The new error messages are more specific and actionable. Instead of a vague 'no matching overload', users now see exactly which argument has a type mismatch, which helps developers fix the issue more quickly. The fact that pyright also reports these as type errors confirms these are real issues being caught.
Attribution: The change in pyrefly/lib/alt/overload.rs modified overload resolution to select the best matching overload based on argument count when only one overload has the right arity, and to report specific argument type mismatches rather than generic 'no matching overload' errors.

operator (+1, -1)

This is an improvement. The code has a real bug - passing a bool to set() which expects an iterable. The new error message is more specific and helpful, correctly identifying the type mismatch rather than just saying 'no matching overload'. The fact that pyright also reports this error confirms it's a real issue.
Attribution: The change in pyrefly/lib/alt/overload.rs modified overload resolution to report specific argument type mismatches instead of generic 'no matching overload' errors when there's only one overload with the right arity. The disambiguate_overloads function now tries to find the most general return type instead of falling back to Any.
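
A minimal reproduction of the bug described above: set() expects an iterable, and a bool is not one, so the call fails both statically and at runtime:

```python
# set(True) is rejected statically and raises at runtime, since bool is
# not iterable. Only the one-argument overload set(iterable) matches by
# arity, which is why the new error names the iterable parameter instead
# of reporting that no overload matched.
try:
    set(True)  # type: ignore[arg-type]
except TypeError:
    pass  # 'bool' object is not iterable

assert set("ab") == {"a", "b"}  # the intended usage
```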

schemathesis (-1)

The removed error claimed urlunsplit would return Literal[b''] when it actually returns str. This was a false positive caused by poor overload resolution. The PR improves overload resolution to select the most general return type instead of falling back to Any, which correctly resolves the type of urlunsplit in this context. Since the error was wrong and its removal makes pyrefly more accurate, this is an improvement.
Attribution: The changes to overload resolution in pyrefly/lib/alt/overload.rs, specifically the new disambiguate_overloads() method that selects the most general return type among matched overloads, fixed the incorrect type inference for urlunsplit.
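
The urlunsplit behavior is easy to confirm: typeshed types it for both str and bytes components, and with str components the result is str, so the removed Literal[b''] inference was a misresolved overload rather than the actual return type:

```python
from urllib.parse import urlunsplit

# With str components, urlunsplit returns str; the bytes-flavored typing
# only applies to bytes components.
url = urlunsplit(("https", "example.com", "/path", "", ""))
assert url == "https://example.com/path"
assert isinstance(url, str)
```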

pip (+19, -9)

This is an improvement. The PR makes pyrefly's overload resolution smarter - when only one overload matches based on argument count, it uses that overload's signature for error reporting instead of giving up with a generic 'no matching overload' error. This provides more actionable error messages. For example, instead of saying 'no matching overload for getattr', it now says specifically that 'str | None' cannot be assigned to parameter 'name' of type 'str'. This helps developers understand exactly what needs to be fixed.
Attribution: The changes in pyrefly/lib/alt/overload.rs (specifically the disambiguate_overloads and disambiguate_overloads_spec_compliant methods) and the logic that selects an overload when only one matches based on arity are causing these changes. The key change is in line 324 where matched || arity_compatible_overloads.len() == 1 ensures that if only one overload has the right argument count, it's considered matched.

tornado (+5, -3)

overload-resolution: The change from generic 'no-matching-overload' errors to specific 'bad-argument-type' errors provides more actionable information to users. When only one overload matches by arity, pyrefly now reports the specific type mismatch rather than saying no overload matched at all. This helps users understand exactly what needs to be fixed.

Overall: improvement

Attribution: The change to overload resolution in pyrefly/lib/alt/overload.rs (specifically the disambiguate_overloads function and the logic that selects an overload even when there's only one arity-compatible option) caused these errors to change from no-matching-overload to bad-argument-type.

mkdocs (+6, -6)

This is an improvement. The PR enhances pyrefly's overload resolution to be more precise when spec_compliant_overloads is false (the default). Instead of reporting generic 'no matching overload' errors, pyrefly now identifies the specific parameter that has a type mismatch. For example, at line 509, instead of saying no overload matches OptionallyRequired.__init__(default, required=required), it now specifically reports that Unknown | None is not assignable to the required parameter which expects bool. This provides more actionable error messages while maintaining type safety. The fact that 4/6 errors are also reported by pyright suggests these are legitimate type issues that pyrefly is now reporting more clearly.
Attribution: The change to overload resolution in pyrefly/lib/alt/overload.rs, specifically the disambiguate_overloads() function and the logic that selects overloads based on argument count even when there are type mismatches. The PR also modifies error reporting to provide specific parameter mismatches rather than generic overload errors.

cloud-init (+18, -62)

bad-argument-type: These are improvements - pyrefly now correctly identifies type mismatches in arguments rather than giving up with 'No matching overload'. For example, it now reports that list[str] | list[Unknown] doesn't match Iterable[str] parameter.
missing-attribute: Mixed - the NoneType errors are improvements (catching potential None access), but the _TemporaryFileWrapper.unlink and bool.get errors may be regressions if these attributes exist through duck typing or runtime injection.
bad-override: These are improvements - pyrefly correctly identifies methods that override parent methods with incompatible signatures, which are real Liskov substitution violations.
unsupported-operation: These removed errors were false positives - pyrefly was incorrectly claiming operations weren't supported. Their removal is an improvement.

Overall: This is an improvement. The PR makes pyrefly's overload resolution more precise, removing 62 false positive errors while adding 18 new errors that catch real type issues. The new errors are mostly legitimate type problems (e.g., operations on potentially None values, incorrect override signatures). While pyrefly is deviating from the strict typing spec by not returning Any for ambiguous overloads, this deviation provides more useful type checking that catches real bugs. The net reduction of 44 errors with most removals being false positives indicates pyrefly got smarter.

Attribution: The changes in pyrefly/lib/alt/overload.rs, specifically the new disambiguate_overloads() method and modifications to overload_resolution(), cause pyrefly to select more specific return types from overloads instead of falling back to Any. This leads to more precise types propagating through the code, which can expose new type incompatibilities.

antidote (+3, -3)

This is an improvement. When only one overload matches based on argument count, pyrefly now reports the specific type mismatch (e.g., '() -> object' not assignable to expected parameter type) instead of the less helpful 'No matching overload found'. This gives developers more actionable error messages. While mypy/pyright don't report these errors, pyrefly's stricter checking here provides value by catching potential type issues earlier. The new error messages are more precise and helpful than the old ones.
Attribution: The change to overload resolution in pyrefly/lib/alt/overload.rs (specifically the logic that sets matched = true when there's only one arity-compatible overload) causes pyrefly to report specific parameter mismatches instead of generic overload failures.

mkosi (+4, -4)

This is an improvement in error reporting quality. When only one overload matches by argument count, pyrefly now reports the specific type mismatch (e.g., 'nullcontext[Popen[str] | None]' vs 'nullcontext[None]') rather than a vague 'No matching overload found'. This gives users more actionable information about what's wrong. The fact that mypy/pyright don't report these errors doesn't make them wrong - pyrefly is providing more detailed diagnostics.
Attribution: The change to matched || arity_compatible_overloads.len() == 1 in pyrefly/lib/alt/overload.rs causes pyrefly to select the single arity-compatible overload and report specific type errors rather than generic overload errors

mitmproxy (+10, -18)

Pyrefly improved its overload resolution to be smarter about ambiguous calls and provide better error messages. The 18 removed errors were false positives where pyrefly incorrectly rejected valid overloaded calls. The 10 new errors correctly identify type mismatches between IO[Any] and more specific types like TextIO, which are real (if minor) type safety issues. Net improvement of -8 errors with better error quality.
Attribution: The changes to overload resolution in pyrefly/lib/alt/overload.rs, particularly the new disambiguate_overloads() method and the logic to use specific overload errors when only one overload matches by arity, directly caused both the removal of false positive 'no-matching-overload' errors and the addition of more specific 'bad-argument-type' errors.

bokeh (+31, -20)

This is an improvement. The new errors correctly identify real type inconsistencies in the bokeh codebase where ClassVar[str] attributes are being passed to parameters expecting specific literal types. The fact that 19/24 new errors are also caught by pyright confirms these are genuine issues. The improved overload resolution provides more actionable error messages by pinpointing the exact parameter mismatches rather than generic 'no matching overload' errors.
Attribution: The change to overload resolution in pyrefly/lib/alt/overload.rs (specifically the disambiguate_overloads function) now selects the most general return type among matched overloads instead of returning Any, which leads to more precise error reporting

bandersnatch (+2, -1)

This is an improvement. The old error message was misleading - there was a matching overload for max(), but the real issue is that the code has a genuine bug: it's trying to find the maximum of a list that might contain None, which is not comparable with integers. The new errors correctly identify this type safety issue. While mypy/pyright don't catch this, pyrefly is correctly identifying a real bug that could cause runtime errors.
Attribution: The change to overload resolution in pyrefly/lib/alt/overload.rs (specifically the new disambiguate_overloads method) caused pyrefly to select a more precise return type for max() instead of falling back to Any, which exposed the actual type incompatibility.
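
The max()-over-Optional pattern can be sketched with a hypothetical stand-in (not bandersnatch's actual code): once the return type of max() is inferred precisely, the None element surfaces as a real comparison failure:

```python
from typing import Optional

# Hypothetical stand-in: a list whose elements may be None is not safely
# comparable, so max() can raise at runtime; the new errors surface this.
def newest(serials: list[Optional[int]]) -> Optional[int]:
    return max(serials)  # static error under the new resolution

try:
    newest([1, None, 3])
except TypeError:
    pass  # '>' not supported between instances of 'NoneType' and 'int'

assert newest([1, 3]) == 3  # fine once None is filtered out
```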

pywin32 (+1, -1)

This is an improvement. The code has a real bug - it's using Python 2's map(None, ...) syntax which doesn't work in Python 3. The old error message 'No matching overload found' was correct but vague. The new error 'Argument None is not assignable to parameter func' is more specific and helpful, correctly identifying that None cannot be used as the function argument to map(). Even though it's pyrefly-only, it's catching a genuine runtime error.
Attribution: The change to overload resolution in pyrefly/lib/alt/overload.rs (specifically the logic that selects an overload when only one matches based on argument count) caused pyrefly to report a more specific error message instead of the generic 'No matching overload found'
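
The Python 2 idiom in question can be demonstrated directly: map(None, a, b) used to zip its arguments, but in Python 3 map requires a callable, so None fails the func parameter exactly as the new error says:

```python
# map(None, ...) constructs fine but fails as soon as it is iterated,
# because None is not callable.
pairs = map(None, [1, 2], ["a", "b"])  # type: ignore[arg-type]
try:
    next(pairs)
except TypeError:
    pass  # 'NoneType' object is not callable

# The Python 3 replacement is zip():
assert list(zip([1, 2], ["a", "b"])) == [(1, "a"), (2, "b")]
```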

zulip (+17, -16)

This is an improvement. The new error messages are more specific and actionable than 'no matching overload'. Instead of just saying no overload matches, pyrefly now tells you exactly which parameter has the wrong type. However, the underlying issue appears to be a type inference problem - filter(bool, ...) should be inferred as filter[str] not filter[object] when the input is clearly a set comprehension of strings. Both mypy and pyright handle this correctly, so while the error message got better, pyrefly still has a type inference issue to fix.
Attribution: The changes in pyrefly/lib/alt/overload.rs, specifically the logic that selects a single overload when only one matches the argument count (lines 319-326), and the new disambiguation logic that attempts to find the most general return type among ambiguous overloads.
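
The filter(bool, ...) inference issue can be sketched with a standalone snippet (not zulip's actual code). Typeshed overloads filter on its first argument: filter(None, it) drops falsy values at runtime and narrows None out of the element type, while filter(func, it) keeps the element type from func's parameter. With bool as the function, the result here should be filter[str]:

```python
# A set comprehension of strings filtered with bool: the element type is
# str, so the result should be filter[str]; the zulip errors come from
# pyrefly inferring filter[object] instead.
names = {s.strip() for s in ["alice ", "", " bob"]}
non_empty = filter(bool, names)  # drops the empty string
assert sorted(non_empty) == ["alice", "bob"]
```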

dragonchain (+4, -4)

This is an improvement. The new behavior provides more precise error messages. Instead of saying 'no matching overload found', pyrefly now identifies the specific argument type mismatch. The code genuinely has a type error - dict.get() expects the key type to match the dict's key type (str), but the code passes Unknown | None. The PR improves error quality by pinpointing the exact type incompatibility. While the PR deviates from the typing spec's prescribed behavior for ambiguous overloads, it aligns with pyright's behavior and provides better developer experience.
Attribution: The change to overload resolution in pyrefly/lib/alt/overload.rs (specifically the disambiguate_overloads function) now selects the most general return type among matched overloads instead of returning Any. This allows pyrefly to report more specific type errors on the arguments rather than generic 'no matching overload' errors.

pandas (+370, -135)

This is an improvement. The PR explicitly states it brings pyrefly closer to numpy/scipy-stubs compatibility by resolving ambiguous overloads more precisely. While this deviates from the typing spec's requirement to return Any, it matches what mypy/pyright actually do in practice. The value of the 135 removed false positives (like claiming list.__init__ has no matching overload) outweighs the cost of the 370 new errors, which appear to be legitimate type issues that were previously hidden by Any types. The fact that all errors are pyrefly-only confirms this is pyrefly becoming more precise, not introducing bugs.
Attribution: The changes in pyrefly/lib/alt/overload.rs (specifically the new disambiguate_overloads method) and the overall overload resolution logic cause these errors. The PR changes how ambiguous overload calls are resolved - instead of returning Any, it tries to find the most general return type, which propagates more specific types through the codebase.

zope.interface (+7, -7)

This is an improvement. The PR makes pyrefly's error messages more specific and helpful. Instead of saying 'no matching overload found', it now tells you exactly which argument has the wrong type when it can determine a single matching overload based on argument count. The errors are still catching the same type issues, just with better diagnostics. The one new bad-specialization error also appears to be a legitimate type error that pyrefly is now able to detect with its improved overload resolution.
Attribution: The change to disambiguate_overloads() in pyrefly/lib/alt/overload.rs causes pyrefly to select a specific overload when only one matches by arity, leading to more specific error messages instead of generic 'no matching overload' errors.

werkzeug (+21, -21)

This is an improvement. The PR improves overload resolution to be more precise. Instead of giving up with 'no matching overload' when there's a type mismatch in one parameter, pyrefly now correctly identifies which specific parameter has the type mismatch. For example, d.get('foo', default=-1, type=int) has a valid overload signature that matches the argument count, but the first parameter type doesn't match. Previously pyrefly incorrectly reported 'no matching overload', now it correctly reports that the specific issue is with the type parameter. The fact that mypy and pyright also flag most of these errors confirms they are real type issues in the test code.
Attribution: The changes to overload resolution in pyrefly/lib/alt/overload.rs, specifically the disambiguate_overloads() method and the improved handling of single-arity-matching overloads, caused these errors to change from generic 'no-matching-overload' to specific 'bad-argument-type' errors.

meson (+21, -17)

Pyrefly is now more precisely tracking types through overloaded function calls, avoiding premature fallback to Any and providing more specific error messages. While this creates more errors, they appear to identify real type inconsistencies where Unknown types are mixed with known types. The removed 'No matching overload' errors are replaced with more specific type mismatch errors, improving error message quality.
Attribution: The changes to overload resolution in pyrefly/lib/alt/overload.rs, specifically the new disambiguate_overloads() method that attempts to find the most general return type among ambiguous overloads, and the modification to use the return type of the single arity-compatible overload even when there are argument type mismatches.

sphinx (+11, -8)

This is an improvement. While the PR implements non-spec-compliant behavior by default, it provides better error messages and more precise type inference that helps catch real bugs. The new bad-argument-type errors are more actionable than the generic 'no matching overload' errors they replace. The unsupported-operation errors reveal actual type incompatibilities that would fail at runtime. The PR aligns pyrefly with the de facto behavior of mypy and pyright, improving ecosystem compatibility.
Attribution: The changes in pyrefly/lib/alt/overload.rs (specifically the disambiguate_overloads function) and the new spec_compliant_overloads configuration option are responsible for these changes. The PR modifies overload resolution to select the most general return type instead of falling back to Any.

xarray (+34, -41)

This is an improvement. The PR makes pyrefly's overload resolution more precise and compatible with real-world usage. While it deviates from the strict typing spec, it aligns better with how mypy/pyright behave and what the ecosystem expects. The new errors are catching real type inconsistencies (e.g., passing int | Any where Mapping[Any, Any] | None is expected), and the removed errors were mostly false positives or have been replaced with better error messages. The fact that pyrefly now passes more numpy and scipy-stubs assert_type tests (127 vs 134 failures for numpy, 19 vs 49 for scipy-stubs) confirms this is moving in the right direction.
Attribution: The changes in pyrefly/lib/alt/overload.rs (specifically the new disambiguate_overloads() method) and the overall overload resolution logic cause these changes. The PR makes pyrefly select more specific return types from ambiguous overloads instead of falling back to Any, which can expose type mismatches that were previously hidden.

discord.py (+11, -3)

This is an improvement. The PR makes overload resolution more precise by selecting the most general return type instead of immediately falling back to Any. This reveals actual type mismatches that were previously hidden behind vague 'no matching overload' errors. The new errors correctly identify that str | None is not assignable to bool | int | None in the kwargs type. While this deviates from the typing spec's requirement to return Any for ambiguous calls, it provides more actionable error messages that help developers fix real type issues.
Attribution: The changes to disambiguate_overloads() in pyrefly/lib/alt/overload.rs cause pyrefly to select the most general return type among matched overloads instead of falling back to Any, which leads to more precise error reporting on the selected overload.

kornia (+1, -1)

This is an improvement. The new error message is more specific and actionable - it tells the user exactly what's wrong (int | None vs int) rather than just saying 'no matching overload'. Both errors correctly identify the same type incompatibility, but the new message is clearer. Since mypy and pyright also report this error, it confirms this is a real type issue that should be caught.
Attribution: The change in pyrefly/lib/alt/overload.rs (specifically the logic around line 371 that adds closest_overload.call_errors to the error list) causes pyrefly to report the specific argument type mismatch from the selected overload rather than a generic 'no matching overload' message.

paasta (+3, -1)

This is an improvement. The code has a real bug where min(None, instances) would crash at runtime if get_max_instances() returns None. The old error message was misleading (claiming no overload matched), while the new errors correctly identify the type incompatibility. Per the typing spec on overloading (https://typing.readthedocs.io/en/latest/spec/overload.html#overloading), when overload resolution is ambiguous, type checkers may use heuristics to select a return type, which is what pyrefly is now doing more intelligently.
Attribution: The change to overload resolution in pyrefly/lib/alt/overload.rs (specifically the disambiguate_overloads function) now selects the most general return type instead of falling back to Any, which allows pyrefly to detect the type incompatibility.

poetry (+1)

This is an improvement. The test code has a genuine type error - it declares path: Path but assigns a value of type Path | None to it. Previously, pyrefly's overload resolution returned Any which masked this bug. Now with more precise return type inference, pyrefly correctly catches this type mismatch. The fact that pyright also reports this error supports that it's a real issue.
Attribution: The changes to overload resolution in pyrefly/lib/alt/overload.rs, specifically the logic that treats single arity-compatible overloads as definite matches and the new disambiguate_overloads() function that selects the most general return type, caused pyrefly to infer a more precise return type for get_cached_archive_for_link() instead of Any, which exposed this pre-existing type error

pydantic (+3, -3)

This is an improvement. The PR intentionally improves upon the typing spec's behavior by providing more precise types when overloads are ambiguous. The new errors are more specific (pointing to exact parameters) rather than vague 'no matching overload' messages. While pyrefly is stricter than the spec requires, this helps users identify actual type mismatches more clearly. The removed errors were replaced with better, more actionable error messages.
Attribution: The changes in pyrefly/lib/alt/overload.rs (specifically the disambiguate_overloads function) and the new spec_compliant_overloads configuration option are responsible. When spec_compliant_overloads is false (the default), pyrefly now tries to find the most general return type among matched overloads instead of returning Any, which leads to more precise type inference and more specific error messages.

archinstall (+1, -1)

This is an improvement. The new error message is more precise and helpful. Previously, pyrefly reported a vague 'no matching overload'. Now it identifies the overload that matches by arity and reports the actual mismatch: str is not assignable to PathLike[Any] | bytes, i.e. the selected overload only accepts path-like or bytes arguments. The more specific error shows developers exactly which argument needs to change.
Attribution: The change in pyrefly/lib/alt/overload.rs in the overload_resolution() function now reports bad-argument-type errors from the closest overload match (line 461: errors.extend(closest_overload.call_errors)), whereas before it would report no-matching-overload. This makes the error more precise.

scrapy (+14, -13)

bad-specialization: False positive - pyrefly is incorrectly rejecting max/min calls on Any values, which should be allowed. Neither mypy nor pyright flag these.
bad-argument-type: Improvement - these catch real type mismatches like passing None to functions expecting iterables.
missing-attribute: Improvement - catches real type inconsistency where annotation says Iterable but code expects dict with .items().
no-matching-overload: Improvement - these were false positives that are now resolved by better overload matching.
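For the bad-specialization point above: at runtime, max/min on values typed Any are ordinary comparisons and work fine, while the None-as-iterable pattern behind the bad-argument-type findings genuinely fails. A minimal sketch:

```python
from typing import Any

a: Any = 3
b: Any = 7
biggest = max(a, b)  # comparison on Any-typed ints works at runtime

# By contrast, passing None where an iterable is expected fails:
try:
    items = list(None)  # type: ignore[arg-type]
except TypeError:
    items = []
```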

Overall: Mixed impact. The PR improves overload resolution by removing false positive 'no-matching-overload' errors and providing more specific error messages. However, it introduces 2 false positive bad-specialization errors that neither mypy nor pyright report, related to comparing Any values. The other new errors appear to catch real type issues. Net improvement due to better error specificity, but with some regressions.

Attribution: The changes in pyrefly/lib/alt/overload.rs (specifically the disambiguate_overloads function) and the logic to select overloads based on arity even when there are type errors caused these changes. The removed errors come from better overload matching, while new errors come from reporting specific type mismatches instead of generic 'no matching overload'.

optuna (+4, -5)

This is an improvement. The PR makes pyrefly's overload resolution smarter by selecting the most general return type among matched overloads instead of falling back to Any. This reduces false positive no-matching-overload errors (5 removed) while introducing some new stricter type checks (4 added). The new errors appear to be genuine type mismatches that were previously hidden by the Any fallback. For example, the bool | int not assignable to bool error is a real type incompatibility. While the new behavior deviates from the typing spec, it aligns better with mypy/pyright behavior and improves type safety overall.
Attribution: The changes in pyrefly/lib/alt/overload.rs (specifically the new disambiguate_overloads method and modifications to overload_resolution) directly cause these primer results by implementing the new 'most general return type' selection strategy for ambiguous overload calls.

spark (+53, -50)

This is an improvement. The PR makes pyrefly smarter at resolving overloaded function calls, replacing vague 'no-matching-overload' errors with specific parameter mismatch errors that are more actionable. The new bad-specialization error appears to catch a real type constraint violation. While technically deviating from the typing spec's requirement to return Any for ambiguous calls, this provides better practical type checking that helps users identify actual issues in their code.
Attribution: The changes in pyrefly/lib/alt/overload.rs, specifically the new disambiguate_overloads() method and modifications to resolve_overloads(), caused pyrefly to better resolve ambiguous overload calls and provide more specific error messages instead of falling back to Any.

pytest (+1, -1)

This is an improvement. Pyrefly now provides a more specific and actionable error message for a real type issue that could cause a runtime TypeError. The code genuinely has a bug - it doesn't handle the case where log_level could be None before passing it to int().
Attribution: The change in pyrefly/lib/alt/overload.rs modified the overload resolution algorithm to select the most general return type among matched overloads and provide more specific error messages when there's only one overload that matches by arity.

aiortc (+3, -3)

This is an improvement. The new bad-argument-type errors are more specific and helpful than the generic no-matching-overload errors. They pinpoint exactly which parameter has a type mismatch. The fact that pyright also flags these (3/3) while mypy doesn't (0/3) suggests pyrefly is adopting pyright's stricter and more helpful approach. The PR's changes to overload resolution allow pyrefly to select the appropriate overload based on argument count and then report specific type mismatches, rather than giving up with a generic error.
Attribution: The change in pyrefly/lib/alt/overload.rs modified overload resolution to be more precise when only one overload matches by arity, leading to better error messages instead of generic 'no matching overload' errors.

scipy-stubs (+82)

unused-ignore: Pyrefly no longer reports 'no-matching-overload' errors in these locations, so the ignore comments are unnecessary. This indicates pyrefly fixed false positives where it couldn't resolve overloads correctly before.
assert-type: Pyrefly now infers more precise types (e.g., ndarray[tuple[Any, ...], dtype[float64]] instead of Any) matching what mypy/pyright infer. These are co-reported by other checkers, confirming the new behavior is correct.
bad-argument-type: Correctly identifies that Literal[b'boltzmann'] (bytes) cannot be passed to a parameter expecting str | None. This is a real type error that pyrefly now catches with a clearer message.

Overall: This is an improvement. The PR brings pyrefly closer to mypy/pyright behavior, as evidenced by scipy-stubs' assert_type tests now passing (49→19 failures). The 71 unused-ignore errors show pyrefly no longer produces false positive 'no-matching-overload' errors that needed suppression. The 10 assert_type failures that are co-reported by mypy/pyright indicate pyrefly now infers the same (more precise) types as other checkers. The single bad-argument-type error correctly identifies that bytes cannot be passed where str is expected. Overall, pyrefly is producing fewer false positives and more accurate type inference.

Attribution: The changes to overload resolution in pyrefly/lib/alt/overload.rs (specifically disambiguate_overloads() selecting the most general return type) and the fix to use the correct overload when only one matches by arity caused these changes. The PR also modified error reporting to show specific argument mismatches instead of generic 'no matching overload' messages.

jax (+211, -6)

no-matching-overload: These errors changed to more specific bad-argument-type errors because pyrefly now selects an overload based on arity and reports the specific type mismatch instead of saying no overload matches
bad-argument-type: Real type mismatches where JAX passes values of union types (e.g., Array | tuple) to parameters expecting specific types
bad-return: Functions returning union types like Array | tuple[Array, ...] when declared to return just Array - these are real bugs
unsupported-operation: Operations like / on values that might be tuples - real bugs since tuples don't support division
missing-attribute on tuple: Calling array methods like .astype() on values that might be tuples - real bugs
bad-assignment: Type mismatches in assignments
bad-unpacking: Trying to unpack unions with different tuple sizes
bad-index: Indexing into tuple[Array, ...] which has unknown length
missing-attribute on AbstractValue/bool: Real missing attributes
bad-specialization: Type parameter constraint violation

Overall: The PR improves overload resolution to be more precise, selecting the most general return type instead of falling back to Any. This catches real type inconsistencies in JAX's code where values might be either Array or tuple[Array, ...] but are used as if they're always Array. The removed errors were false positives.

Attribution: The changes in pyrefly/lib/alt/overload.rs, specifically the new disambiguate_overloads() method that implements the 'most general return type' logic, and the modification to use the first overload's return type when only one overload matches by arity even if there are argument type errors.

cwltool (+5, -3)

This is an improvement. The PR intentionally makes pyrefly's overload resolution more precise to match ecosystem expectations. The removed errors were false positives (claiming operations weren't supported when they actually were). The new errors are more specific - instead of vague 'no matching overload' messages, they now point to the exact type mismatches. The PR brings pyrefly closer to mypy/pyright behavior and passes more of their test suites, which is a clear improvement in ecosystem compatibility.
Attribution: The changes in pyrefly/lib/alt/overload.rs (specifically the new disambiguate_overloads() method and modifications to overload_resolution()) caused these errors to change. The PR introduces logic to select the most general return type among matched overloads instead of falling back to Any.

pyppeteer (+2, -1)

The new error messages are more specific and actionable. Instead of a vague 'no matching overload', developers now see exactly which parameter has a type mismatch. Both errors catch real type issues where None values are passed to functions expecting non-optional types. The improved overload resolution also catches the downstream Frame | None issue that was previously missed.
Attribution: The changes to overload resolution in pyrefly/lib/alt/overload.rs, specifically the new logic in disambiguate_overloads() that selects the most general return type and the modification to use the single arity-compatible overload when only one matches, caused these error message improvements

cryptography (+1, -1)

This is an improvement. The old error message was misleading - claiming no overload matched when actually there was a matching overload by arity. The new error message correctly identifies the specific type mismatch (Generator[union of GeneralName types] vs Iterable[_IPAddressTypes]). This gives developers actionable information about what needs to be fixed rather than a vague 'no matching overload' message. The code would execute successfully at runtime, so the old error was overly broad.
Attribution: The change in pyrefly/lib/alt/overload.rs in the disambiguate_overloads method and the overall overload resolution logic improvements caused this change. Specifically, the code now provides more specific error messages and can select overloads even with type mismatches when only one overload matches by argument count.

yarl (+2, -2)

This is an improvement. The new error messages are more specific and helpful. Instead of saying 'no matching overload found', pyrefly now identifies the specific parameter type mismatch. The code genuinely has a type error - memoryview is not assignable to the expected query parameter types (Mapping[str, ...], Sequence[tuple[str, ...]], str, or None). The test expects this to fail with a TypeError at runtime (line 323). The error message change from generic to specific makes debugging easier.
Attribution: The change in pyrefly/lib/alt/overload.rs modified overload resolution to select a specific overload even when only one matches by arity (lines 372-375). This causes pyrefly to report a specific parameter type mismatch instead of a generic 'no matching overload' error.

httpx-caching (+3)

These are real bugs in the code. The parsing functions can return None, but the code doesn't check for this before using the results. Pyrefly is correctly identifying potential runtime errors that would occur when the date headers are missing or malformed. The improved overload resolution is revealing these latent bugs that were previously hidden by falling back to Any.
Attribution: The changes to overload resolution in pyrefly/lib/alt/overload.rs, specifically the new disambiguate_overloads method and the logic to select overloads based on argument count even when there are type errors, caused these new errors to appear. The change makes pyrefly more precise in tracking types through overloaded functions like dict.get and parsing functions.

scipy (+2, -3)

This is an improvement. The PR removes a false positive (scipy array assignment) and provides more specific error messages. While it deviates from the typing spec by default, it provides better practical results by attempting to resolve ambiguous overloads more precisely rather than immediately returning Any.
Attribution: The changes in pyrefly/lib/alt/overload.rs (specifically the new disambiguate_overloads method) and the modification to use the most specific overload when only one matches by arity caused these changes.

colour (+83, -20)

This is an improvement. The PR makes overload resolution more precise, removing false positive 'no-matching-overload' errors while exposing real type inconsistencies in the code. The new errors correctly identify places where functions claim to return specific numpy array types but actually return more complex unions. The removed errors were incorrect - numpy functions do support those calling patterns. While the new errors may seem noisy, they accurately reflect type mismatches that were previously hidden by Any types.
Attribution: The changes in pyrefly/lib/alt/overload.rs (specifically the new disambiguate_overloads method) cause pyrefly to select more specific return types from overloaded functions instead of falling back to Any. This creates more precise but also more complex union types that trigger the new type errors.

❓ Needs Review (2)

pyodide (+1)

LLM requested additional files that could not be resolved. Non-trivial change (1 added, 0 removed).

graphql-core (+1, -7)

LLM classification failed: Could not parse LLM response as classification JSON: Looking at this diff, I need to analyze the changes in pyrefly's overload resolution behavior.

Analysis

The PR implements a change to how pyrefly resolves ambiguous overload calls. Previously, when multiple overloads matched and had different return types, pyrefly would fall back to Any. Now it tries to find the "most general" return type among the matched overloads.

New Error Analysis

The new error is in tests/utils/gen_fuzz_strings.py:

yield from map("".join, product(allowed_chars, repeat=length))

The error states: Argument product[tuple[str, ...]] is not assignable to parameter iterable with type Iterable[Iterable[LiteralString]]

Looking at the code, product(allowed_chars, repeat=length) returns an iterator of tuples. The map function expects its second argument to be an Iterable[Iterable[LiteralString]] when the first argument is "".join. This appears to be a genuine type mismatch - product returns tuples of characters from allowed_chars (which is str), but the type checker is being more precise about what map expects.

Removed Errors Analysis

The 7 removed no-matching-overload errors in src/graphql/type/definition.py are being replaced with more specific error messages. Looking at the code:

  1. Line 427-440: GraphQLScalarTypeKwargs.__init__ - The PR now provides a more specific error instead of generic "no matching overload"
  2. Line 765-770: GraphQLObjectTypeKwargs.__init__ - Similar pattern

The PR changes show that instead of reporting "No matching overload", pyrefly now reports the specific type mismatch when there's only one overload that matches by argument count.

Verdict

{"spec_check": "The typing spec for overloads (https://typing.readthedocs.io/en/latest/spec/overload.html#overload) states that when multiple overloads match, the return type should be Any if the return types are not all equivalent. The PR implements a non-spec-compliant mode by default that tries to find the most general return type, but also adds a spec_compliant_overloads flag to follow the spec exactly.", "runtime_behavior": "The new error would not cause a runtime crash - product() returns tuples which are iterable, and "".join() can join them. The removed errors were also not preventing runtime crashes.", "mypy_pyright": "The new error is pyrefly-only (0/1 in mypy, 0/1 in pyright), suggesting pyrefly is being stricter than other type checkers. The removed errors were also pyrefly-specific.", "removal_assessment": "The removed errors were false positives - they were reporting 'No matching overload' when there was actually a matching overload based on argument count. Removing these and replacing them with more specific error messages is an improvement in error quality.", "pr_attribution": "The changes in pyrefly/lib/alt/overload.rs modified the overload resolution logic, specifically the disambiguate_overloads function and the logic for selecting overloads when only one matches by arity.", "reason": "This is an improvement. The PR improves error messages by replacing generic 'No matching overload' errors with specific type mismatches when only one overload matches by argument count. While the new error about product may be overly strict (pyrefly-only), the overall change provides better error messages. The removed errors were false positives that incorrectly claimed no overload matched when one actually did."}. Non-trivial change (1 added, 7 removed).

Suggested fixes

Summary: The PR's attempt to be 'smarter' than the typing spec by finding the most general return type for ambiguous overloads causes 33 regressions across 35 projects, with most being pyrefly-only false positives.

1. In disambiguate_overloads() in pyrefly/lib/alt/overload.rs, when the function parameter is a bound method like str.strip, relax the type checking to accept Iterable[str] instead of requiring Iterable[LiteralString]. Add a check: if the callable is a method reference and the parameter type is LiteralString, widen it to str

Files: pyrefly/lib/alt/overload.rs
Confidence: high
Affected projects: streamlit, build, packaging, aiohttp, black, Tanjun
Fixes: bad-argument-type
This fixes the map(str.strip, list[str]) pattern that's failing in 12+ projects (streamlit, build, packaging, aiohttp, etc.). These are all pyrefly-only errors where the code works fine at runtime. Expected outcome: eliminates ~20 bad-argument-type errors across streamlit, build, packaging, aiohttp, black, Tanjun, and others
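The pattern named in this fix is ordinary, working Python; the flagged errors are static-only, about LiteralString in the map/str.strip overloads:

```python
lines = ["  a  ", "\tb\n", "c"]
# str.strip used as an unbound method: map() calls it on each element.
# This runs fine at runtime, which is why the errors are false positives.
stripped = list(map(str.strip, lines))
```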

2. In resolve_overloaded_call() or find_best_overload() in pyrefly/lib/alt/overload.rs, when type inference produces @_ or Never types, fall back to the previous behavior of returning Any instead of propagating the failed inference. Add a guard: if return_type.contains_inference_failure() { return Any }

Files: pyrefly/lib/alt/overload.rs
Confidence: medium
Affected projects: hydpy, mypy, egglog-python, attrs, prefect
Fixes: bad-argument-type, not-async
Multiple projects show @_ types in error messages indicating inference failures. This would fix the inference issues in hydpy, mypy, egglog-python, and attrs. Expected outcome: eliminates 10+ errors with @_ types across 5 projects

3. In the code that handles attribute resolution (likely in a different file not shown in the diff), when resolving attributes on DataType objects in ibis, ensure that standard data type attributes like 'bounds', 'precision', 'scale' are properly resolved. This may require special-casing DataType attribute lookup

Files: unknown - attribute resolution code
Confidence: low
Affected projects: ibis
Fixes: missing-attribute
The 11 missing-attribute errors on DataType in ibis are all pyrefly-only, suggesting pyrefly fails to see attributes that exist. Without seeing the attribute resolution code, it's hard to be specific. Expected outcome: eliminates 11 missing-attribute errors in ibis

4. In disambiguate_overloads() in pyrefly/lib/alt/overload.rs, add a configuration check to only use the new 'most general' logic when spec_compliant_overloads is false AND the project explicitly opts in. Currently it's too aggressive by default. Change the condition to require both flags

Files: pyrefly/lib/alt/overload.rs
Confidence: high
Affected projects: all regression projects
Fixes: bad-argument-type, bad-specialization, missing-attribute
The new behavior causes too many false positives to be enabled by default. Making it opt-in would prevent the 33 regressions while still allowing projects that benefit from it to use it. Expected outcome: eliminates most pyrefly-only errors across all affected projects when spec_compliant_overloads is not explicitly set to false



Classification by primer-classifier (1 heuristic, 94 LLM)

migeed-z added a commit to migeed-z/pyrefly that referenced this pull request Mar 21, 2026
…nd cross-project consistency

Summary:
The primer classifier has been producing inconsistent results across runs — the same primer diff can be classified as 'improvement' in one run and 'regression' in another. This was observed on real PRs like facebook#2839 (altair TypeVar iterability) and facebook#2764 (overload resolution, 60+ projects).

Three changes to improve reliability:

1. **Self-critique pass (Pass 1.5)**: After Pass 1 produces reasoning, a new pass checks it for factual errors — e.g., claiming dicts are not iterable, incorrect inheritance claims, wrong TypeVar constraint analysis. This catches hallucinations before they reach the verdict pass. Tested on PR facebook#2839 where it correctly identified that both constraints of `_C` (list and TypedDict) are iterable.

2. **Majority voting on verdict (Pass 2)**: Instead of a single verdict call, makes 5 independent calls and takes the majority. This reduces non-determinism where the same reasoning could be classified either way. Vote distribution is logged for transparency.

3. **Cross-project consistency enforcement**: After classifying all projects independently, groups them by error kind and enforces majority verdict within each group. This prevents the classifier from saying 'overload resolution improved' for one project and 'overload resolution regressed' for another with the same pattern.

Also upgrades the default Anthropic model from claude-opus-4-20250514 to claude-opus-4-6 for better Pass 1 reasoning quality.

Differential Revision: D97571454
meta-codesync bot pushed a commit that referenced this pull request Mar 21, 2026
Summary:
Pull Request resolved: #2840

I noticed when looking at the classifier output for #2764 that the "verdict" formatting needed to be fixed.

Two fixes:
1. formatter.py: Add _format_reason() to render JSON reason dicts as
   labeled readable sections (e.g. "**Spec check:** ...", "**Reasoning:** ...")
2. llm_client.py: Ensure reason is always a string by serializing dict
   values, so downstream code handles it consistently.

Reviewed By: grievejia

Differential Revision: D97422229

fbshipit-source-id: 4aaaa4cf507ea273fdce2cfcf761801f0632d894
meta-codesync bot pushed a commit that referenced this pull request Mar 22, 2026
…nd cross-project consistency (#2841)

Summary:
Pull Request resolved: #2841

The primer classifier has been producing inconsistent results across runs — the same primer diff can be classified as 'improvement' in one run and 'regression' in another. This was observed on real PRs like #2839 (altair TypeVar iterability) and #2764 (overload resolution, 60+ projects).

Three changes to improve reliability:

1. **Self-critique pass (Pass 1.5)**: After Pass 1 produces reasoning, a new pass checks it for factual errors — e.g., claiming dicts are not iterable, incorrect inheritance claims, wrong TypeVar constraint analysis. This catches hallucinations before they reach the verdict pass. Tested on PR #2839 where it correctly identified that both constraints of `_C` (list and TypedDict) are iterable.

2. **Majority voting on verdict (Pass 2)**: Instead of a single verdict call, makes 5 independent calls and takes the majority. This reduces non-determinism where the same reasoning could be classified either way. Vote distribution is logged for transparency.

3. **Cross-project consistency enforcement**: After classifying all projects independently, groups them by error kind and enforces majority verdict within each group. This prevents the classifier from saying 'overload resolution improved' for one project and 'overload resolution regressed' for another with the same pattern.

Also upgrades the default Anthropic model from claude-opus-4-20250514 to claude-opus-4-6 for better Pass 1 reasoning quality. According to Gemini, this is a big upgrade :) so I am hoping to see improvement in the quality.

Reviewed By: yangdanny97

Differential Revision: D97571454

fbshipit-source-id: 356f4b150e0c4886c2743abc17699e004da997f1
