Fix incorrect fixture search in imports. Add a test that the inspection registers a problem only for PyTestFixture.
GitOrigin-RevId: de029bb401689f0218e6fce04e64e738a8051fae
The original problem with @contextlib.asynccontextmanager was due to a bug
in PyTypeChecker.substitute introduced in the TypeVarTuple support. Namely,
we started to substitute unmatched ParamSpec types with null, effectively
replacing them in a callable signature with a single parameter of type Any.
Then the special logic in PyCallExpressionHelper.mapArguments that treated
unmatched ParamSpecs as "catch-all" varargs stopped working, and we started
to highlight all extra arguments in the substituted callable invocations.
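For illustration, a minimal sketch (with hypothetical names) of the kind of call that started being flagged:
```
import contextlib
from typing import AsyncIterator

@contextlib.asynccontextmanager
async def open_session(host: str, timeout: int) -> AsyncIterator[str]:
    yield host

async def main() -> None:
    # With the unmatched ParamSpec of asynccontextmanager substituted with
    # null, the decorated callable degraded to a single parameter of type
    # Any, so "timeout" here was reported as an extra argument.
    async with open_session("db.local", timeout=5) as session:
        print(session)
```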
In other words, binding type parameters from decorator targets, e.g.
ParamSpecs or function return types, never actually worked, because we can't
resolve functions passed as decorator arguments in "de-sugared" expression
fragments in the codeAnalysis context, i.e. when we replace
```
@deco
def f(x: int, y: str): ...
```
with `deco(f)` and then try to infer its type in PyDecoratedFunctionTypeProvider.
We just didn't report the resulting mismatches, thanks to that special-casing of
unmatched ParamSpecs (other type parameters replaced by Any don't trigger such
warnings).
Ideally, we should start resolving references in arguments of function calls
in such virtual expression fragments in some stub-safe manner instead of relying
on this fallback logic. In the general case, however, complete stub-safe inference
for decorators is a hard problem, because arbitrary expressions in decorator
arguments can affect the resulting type, e.g.
```
def deco(result: T) -> Callable[[Callable[P, Any]], Callable[P, T]]: ...
@deco(arbitrary_call().foo + 42) # how to handle this without unstubbing?
def f(x: int, y: str): ...
```
GitOrigin-RevId: adeb625611a3ebb7d5db523df00388d619323545
This decorator is fully type hinted in Typeshed, so, with the changes introduced
for PY-60104, it's no longer necessary to special-case it anywhere.
PyDecoratedFunctionTypeProvider can infer the correct type after application
of this decorator to a generator function just as for any other typed decorator.
The original problem was caused by the fact that PyDecoratedFunctionTypeProvider
didn't process declarations having any decorator listed in the KnownDecorator enum,
as presumably all of them were too "magical" to analyze.
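A minimal sketch of the general mechanism, taking `contextlib.contextmanager` as an
example of a Typeshed-annotated decorator applied to a generator function:
```
import contextlib
from typing import Iterator

@contextlib.contextmanager
def session() -> Iterator[str]:
    yield "token"

# The decorated type is inferred from the Typeshed annotations alone:
# session() returns a context manager yielding str, so "s" is str.
with session() as s:
    print(s.upper())
```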
Co-authored-by: Daniil Kalinin <daniil.kalinin@jetbrains.com>
GitOrigin-RevId: 53b277803a1eb42784131d0dae5bb7ace173c017
Assume that such decorators, as well as the "well-known" decorators that we
special-case, don't change the signatures of decorated functions and classes.
This change effectively ends the long-standing policy of safe-listing a few
recognized "well-known" decorators and assuming everything else can change
a definition in any way. That approach no longer suits the current state of the
Python world, where most common side effects of decorators, such as adding new
parameters, can be expressed in type hints, as sketched below.
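For example, a sketch (with hypothetical names) of a decorator that changes the
visible signature, here by removing a leading parameter, and declares that fully
via ParamSpec and Concatenate:
```
from typing import Callable, Concatenate, ParamSpec, TypeVar

P = ParamSpec("P")
R = TypeVar("R")

class Request: ...

def inject_request(func: Callable[Concatenate[Request, P], R]) -> Callable[P, R]:
    # The annotations above declare that the leading Request parameter
    # disappears from the visible signature of the decorated function.
    def wrapper(*args: P.args, **kwargs: P.kwargs) -> R:
        return func(Request(), *args, **kwargs)
    return wrapper
```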
In 2021.1 we added PyDecoratedFunctionTypeProvider, which was able to infer
a decorator's return type from its body, as for any other function, and then
correctly apply this information to the decorated definition. This led to a number
of problems.
First of all, depending on whether TypeEvalContext allowed us to access the AST of
a decorator's body, we inferred different signatures for functions decorated with
an imported decorator in inspections and in user-initiated actions, such as
Parameter Info.
Secondly, we started inferring useless `(*args, **kwargs)` signatures in case of decorators
defined following the common pattern of returning a wrapper function accepting arbitrary
parameters and itself decorated with @functools.wraps (PY-48338). In some sense, our code
analysis was "too smart" in its type inference in this case.
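The pattern in question, sketched with hypothetical names:
```
import functools
from typing import Any, Callable

def log_calls(func: Callable[..., Any]) -> Callable[..., Any]:
    @functools.wraps(func)
    def wrapper(*args: Any, **kwargs: Any) -> Any:
        print(f"calling {func.__name__}")
        return func(*args, **kwargs)
    return wrapper

@log_calls
def add(x: int, y: int) -> int:
    return x + y

# Inferring the decorator's return type over its body yielded the useless
# (*args, **kwargs) signature for add() instead of (x: int, y: int).
```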
Lastly, we diluted the return types of functions decorated with unknown decorators,
even fully typed ones, by uniting these types with Any (so-called "weak" types).
This logic existed before PyDecoratedFunctionTypeProvider, but it became more
problematic now that we were able to propagate this artificial union through
generic decorators.
This change in behavior might lead to some false positives for untyped Python code
with non-pure decorators. However, given that other type checkers are also likely to hit these
problems, there is now a stronger incentive to add type hints for such problematic APIs.
In the worst case, we can special-case some heavily requested decorators as we did before.
GitOrigin-RevId: db11fb3573bda5da155cb921a30adc31d5c841e2
The original false positive was caused by the protocol NestedSequence in matplotlib
defined as:
```
class NestedSequence(Protocol[_T_co]):
    def __getitem__(self, key: int, /) -> _T_co | NestedSequence[_T_co]: ...
    def __len__(self, /) -> int: ...
```
which was used to annotate parameters of matplotlib.pyplot.plot and was considered
incompatible with the builtin list.
Skipping positional- and keyword-only parameter separators of the expected callable
is a workaround until we have a comprehensive mechanism for matching signatures.
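For instance, before the workaround a call like this one was flagged:
```
import matplotlib.pyplot as plt

# list.__getitem__ and list.__len__ were matched against NestedSequence's
# signatures, including their positional-only separators, so plain lists
# were reported as incompatible arguments.
plt.plot([1, 2, 3], [4, 5, 6])
```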
GitOrigin-RevId: 93d8bb4c6c4405d0e24b5f98152a461691f6197e
- Drop support for Python 3.5 & 3.6 in the compatibility inspection
- Fix and remove some outdated tests
- Remove XMLs for the long-unsupported Python 2.6 & 3.5
- Regenerate versions.xml
- Remove mentions of OS-specific modules
GitOrigin-RevId: 3265dd1de8a4f7a41119e10c95bb705ca5845efe
Inherit PyTypeAliasStatement from PyQualifiedNameOwner to re-use the type alias stack in PyTypingTypeProvider
Various tests for the changes above
Co-authored-by: Mikhail Golubev <mikhail.golubev@jetbrains.com>
GitOrigin-RevId: 242427c6f84c05ec48c94085f20675b8e30f8625
Previously, we incorrectly inferred generic list and set types parameterized with
the types of each element of the corresponding collection literal instead of a union
of those. For instance, for [1, 2] we inferred list[Literal[1], Literal[2]] instead
of list[Literal[1, 2]].
GitOrigin-RevId: 6f222daee871137a5de5589429f78341704c5544
The introduction of TypeVarTuples and the concept of unpacked tuple types made us
revise all the places where we match sequences of types in type inference.
For instance, when matching type parameters and type arguments for generic
specialization in:
* type hints, e.g. xs: MyGeneric[int, str] = MyGeneric()
* constructor invocations, e.g. xs = MyGeneric[int, str]()
* class declarations, e.g. class MyGeneric(Base[T1, T2, str]): ...
* type alias declarations, e.g. MyAlias: TypeAlias = MyGeneric[T, int]
as well as during type matching of all generic types, both ordinary non-variadic
ones and the pre-existing "built-in" generic variadics of the type system, namely
tuples and Callables.
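A sketch of those declaration kinds side by side, reusing the MyGeneric examples
above (TypeVarTuple requires Python 3.11 or typing_extensions):
```
from typing import Generic, TypeAlias, TypeVar, TypeVarTuple, Unpack

T1 = TypeVar("T1")
T2 = TypeVar("T2")
Ts = TypeVarTuple("Ts")

class Base(Generic[T1, T2, Unpack[Ts]]): ...

class MyGeneric(Base[T1, T2, str]): ...  # class declaration

xs: MyGeneric[int, str] = MyGeneric()    # type hint
ys = MyGeneric[int, str]()               # constructor invocation

MyAlias: TypeAlias = MyGeneric[T1, int]  # type alias declaration
```
In each line, a sequence of type arguments has to be matched against the
declaration's sequence of type parameters.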
Previously, this logic was spread across numerous places in PyTypeChecker and
PyTypingTypeProvider, each with its own subtle differences. The first attempt
at PEP 646 support put all the code for uniform matching of type parameters directly
in PyTypeChecker, significantly complicating its already arcane internals.
I've introduced a unified API for that called PyTypeParameterMapping.
It still retains some of the former quirks in the form of its Option flags,
controlling in particular how we handle some of the expected types being unmatched
(imagine expecting MyGeneric[T1, T2, *Ts] and receiving MyGeneric[int]),
but I'm planning to gradually eliminate this conditional logic.
The same class is now also responsible for matching parameter types of callables,
which has already allowed us to fix some known problems, such as ignoring their
arity (PY-16994). Still, I'm going to extract a separate API entity for that, since
matching callable signatures is a much more complicated task involving
compatibility of different kinds of parameters (positional-only, keyword-only,
defaults, varargs, etc.).
Another positive side effect of these changes is that substitution of type
parameters during type inference became more consistent, and we no longer lose
useful type information by replacing all unbound type parameters with Any. It's
particularly visible in type checker errors where we stopped dropping unbound type
parameters from messages about mismatched parameter-argument types.
Among other improvements in this changeset are proper scoping for
TypeVarTuples, consistent with other type parameters, and recognizing TypeVarTuples
and unpacked tuples in the types of *args parameters in function bodies, e.g.
`*args: *Ts` translates to the "args" parameter having the type `tuple[*Ts]`.
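A sketch of that translation, spelled with `Unpack` (equivalent to the `*Ts` syntax
on Python 3.11+):
```
from typing import TypeVarTuple, Unpack

Ts = TypeVarTuple("Ts")

def f(*args: Unpack[Ts]) -> tuple[Unpack[Ts]]:
    # Inside the body, "args" has the type tuple[*Ts].
    return args

pair = f(1, "a")  # inferred as tuple[int, str]
```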
The confusing PyNoMatchedType, used only for reporting missing arguments for *args
parameters annotated with unpacked tuples in the type checker inspection, e.g.
```
def f(*args: *tuple[int, str]): ...
f(42)  # a type checker error about a missing argument for str
```
was also removed from the type system in favor of a simpler approach: handling
such errors directly in the inspection. We might need such a general type in
the future, but it has to be well thought-through.
GitOrigin-RevId: 63db6202254205863657f014632d141d340fe147
Also, this commit changes the behavior of PyPIPackageUtil: now only one package per name is supported. Motivation:
- the one-to-many mapping complicates the logic in all its usages
- there are only 3 cases where it's applicable, and in all of them the second package is abandoned
GitOrigin-RevId: 80fb1e0d28369bdfeb64f6b928ed1b543165d1de
Report a warning if a fixture is used without being passed to the test function's
parameters or to the `@pytest.mark.usefixtures` decorator.
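A sketch of what is and isn't flagged (the fixture name is hypothetical):
```
import pytest

@pytest.fixture
def db():
    return object()

def test_query(db):  # requested via a parameter: no warning
    assert db is not None

@pytest.mark.usefixtures("db")
def test_setup():    # requested via the decorator: no warning
    pass

def test_broken():
    print(db)        # warning: the fixture is used without being requested
```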
Co-authored-by: Denis Mashutin <Denis.Mashutin@jetbrains.com>
Merge-request: IJ-MR-108713
Merged-by: Egor Eliseev <Egor.Eliseev@jetbrains.com>
GitOrigin-RevId: 28d0711b99ab7ae180f672306dd4ab8a81f1feec
Previously, unresolved class attributes were not reported for decorated classes because the decorator could potentially add dynamic attributes (see PY-7173). This commit enables the warning again, because a false-negative unresolved reference warning, i.e. a real error going unreported, is much more common and distracting than the case with dynamic attributes.
Merge-request: IJ-MR-105254
Merged-by: Daniil Kalinin <Daniil.Kalinin@jetbrains.com>
GitOrigin-RevId: 67d1ab3fe1d5a140836d49f8ef6a65cf01873456
Functions annotated with `NoReturn` and `Never` are now taken into account in the control flow graph building process, and code after calls to such functions is treated as unreachable.
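A minimal sketch:
```
from typing import NoReturn

def fail(message: str) -> NoReturn:
    raise RuntimeError(message)

def run(flag: bool) -> int:
    if not flag:
        fail("flag must be set")
        print("never executed")  # now reported as unreachable
    return 1
```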
Merge-request: IJ-MR-105973
Merged-by: Daniil Kalinin <Daniil.Kalinin@jetbrains.com>
GitOrigin-RevId: ef5840ae6e593498fc334dc9bd2daadccebf2b13
This behavior is similar to how mypy and pyright handle overloads relying on
their definition order, where more specific signatures are supposed to precede
more general ones. The subtle difference is that in the case of Any arguments,
pyright tries to find a common supertype of the return types of all matching
overloads, mypy just returns Any, and we return a union of their return types.
For the time being, it keeps things simple and matches how we treated ambiguous
signatures before.
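A sketch of the difference on Any arguments (the overloads are hypothetical):
```
from typing import Any, overload

@overload
def parse(value: int) -> int: ...
@overload
def parse(value: str) -> str: ...
def parse(value: Any) -> Any:
    return value

def handle(value: Any) -> None:
    result = parse(value)
    # pyright: a common supertype of int and str
    # mypy:    Any
    # here:    int | str, the union of the matching overloads' return types
```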
To make this work, I've had to revise how we handle name redefinitions in files.
Previously, we processed them in the reversed order to give priority to those
later in a file, which is natural for regular .py files. It doesn't make much
sense for .pyi stubs, though, as it's impossible to redefine a name there and
multiple definitions, e.g. overloads, are supposed to have equal precedence.
Now, we don't reverse the order of name definitions in .pyi stubs at all,
and in .py files we do that while preserving the original order of overloads.
A side effect of it is that now we always navigate to the first overload
of a name in a .pyi stub, as opposed to the last, but it only seems more
logical and convenient.
On the other hand, when we handle overloads in .py files, we explicitly
assign them a lower resolve rate to give precedence to implementations.
The only exception is when there are no implementations: we still want
to prefer the latest overload, so we put it into the results a second time
with a higher rate. That messed up the overload order important for type inference,
so I've introduced a dedicated RATE_LIFTED_PY_FILE_OVERLOAD rate for such
results to filter them out later in PyCallExpressionHelper. I've also added
a named RATE_PY_FILE_OVERLOAD rate for other overloads in .py files to make
them more easily distinguishable from other resolve results with a low rate.
GitOrigin-RevId: e921654e47fe1fc5da047950b70775e342996757
Previously, we inferred tuple[Any, ...] for such parameters and then replaced it
with a PyParamSpecType in PyTypeCheckerInspection specifically for its checks.
However, inside the inspection we didn't have enough context information to
properly detect the right type parameter scope.
Now, PyParamSpecType is created for vararg parameters annotated with "P.args" and
"P.kwargs" directly in PyTypingTypeProvider, uniformly with other possible usages
of ParamSpec and with accurate scope checks.
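A sketch of the usage that is now handled uniformly (hypothetical decorator):
```
from typing import Callable, ParamSpec, TypeVar

P = ParamSpec("P")
R = TypeVar("R")

def deco(func: Callable[P, R]) -> Callable[P, R]:
    def wrapper(*args: P.args, **kwargs: P.kwargs) -> R:
        # "args" and "kwargs" get PyParamSpecType straight from the
        # "P.args"/"P.kwargs" hints, with P's scope checked properly.
        return func(*args, **kwargs)
    return wrapper
```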
This also changes how we handle unrecognized parameter type hints. Previously,
we returned null for them, delegating to other type providers, which could lead
to subtle bugs. Now, when a type hint is present, but can't be identified,
we return Any, thus ignoring any further sources of type information.
This new behavior manifested in PyProtectedMemberInspectionTest.testMemberResolvedToStub
in particular, where we used to ignore an annotation containing a broken
forward reference to the containing class and proceed to a nearby .pyi stub.
That was most likely an honest mistake, so I've updated the test data.
GitOrigin-RevId: 02945e756c5f9a8097360c3bcf3f1c5f267c02e4
This changeset introduces a few important changes to our type inference.
First of all, TypeVar instances, represented as PyGenericType objects
in our type system, finally have associated scope owners
(see https://peps.python.org/pep-0484/#scoping-rules-for-type-variables),
which makes it safe to use type parameters with the same name in different
declarations. The absence of this feature not only caused various subtle bugs
in type checking, but also led to occasional SOEs on type substitution
(e.g. PY-54336).
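A sketch of why scope owners matter (hypothetical functions):
```
from typing import TypeVar

T = TypeVar("T")

def first(xs: list[T]) -> T:
    return xs[0]

def wrap(x: T) -> list[T]:
    return [x]

# The same TypeVar object is reused, but T in first() and T in wrap() are
# distinct type parameters scoped to their declarations, so inferring
# first(wrap(42)) doesn't conflate them or loop during substitution.
```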
To make it work consistently across the board, I also added scope owners
to type parameters extracted from docstrings in our legacy python-skeletons
format.
Secondly, both nominal subtyping and structural subtyping via protocols now
properly take into account type parameters "fixed" on inheritance. Previously,
this was done only for receivers in method calls and attribute reads.
It fixes PY-27707, PY-35026, PY-38897.
GitOrigin-RevId: ff4e61fb9b4aff079e67b2e5263f30552da15c63
Control flow now abrupts on `exit()` and `pytest.fail()` calls.
For `self.fail()` calls, control flow abrupts only if the containing class has the
case-insensitive word "test" in its name.
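A sketch of both rules:
```
import unittest
import pytest

def require(cond: bool) -> None:
    if not cond:
        pytest.fail("precondition violated")
        print("never executed")  # unreachable: flow abrupts at pytest.fail()

class ParserTest(unittest.TestCase):  # "test" occurs in the class name
    def check(self, ok: bool) -> None:
        if not ok:
            self.fail("bad state")
            print("never executed")  # unreachable for the same reason
```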
Merge-request: IJ-MR-96165
Merged-by: Daniil Kalinin <Daniil.Kalinin@jetbrains.com>
GitOrigin-RevId: ea173fdb72a10a373cd95f266ea7589e36545f30
While resolving an aliased method exported from a module, we might lose
the context that it was referenced via an instance, and hence that its first "self"
parameter is already bound and doesn't need to be passed explicitly.
The reason is PyResolveUtil#doResolveQualifiedNameInScope (called in
PyTargetExpressionImpl.multiResolveAssignedValue) performs resolve
over qualified names saved in PSI stubs and returns plain PsiElements
(end results) that don't retain such information about their qualifiers.
QualifiedResolveResult can't be used there either because we don't keep
PyExpressions in PSI stubs. What's more, when such a function was later referenced
via some module, we considered it definitely unbound, even though a module cannot
possibly have a method as its immediate attribute. I changed the logic so that
referencing a method through a module is no longer considered as affecting
its bound/unbound state.
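A sketch of the scenario, as a hypothetical two-file layout:
```
# mod.py
class Client:
    def send(self, data: str) -> None:
        print("sending", data)

client = Client()
send = client.send  # a bound method exported under a module-level alias

# use.py
import mod

# "self" is already bound in mod.send; referencing the method through the
# module no longer makes us treat it as unbound and expect an explicit
# "self" argument.
mod.send("payload")
```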
GitOrigin-RevId: 17a6c3e5d43c088d0663ba54651004c8370d5eca
Makes it possible to mark individual TypedDict keys as required or not required, as covered by [PEP-655](https://peps.python.org/pep-0655/)
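A sketch per PEP 655 (Required/NotRequired are importable from typing on Python 3.11+,
or from typing_extensions on earlier versions):
```
from typing import TypedDict, Required, NotRequired

class Movie(TypedDict):
    title: str
    year: NotRequired[int]  # this key may be omitted

class Config(TypedDict, total=False):
    name: Required[str]     # required even though total=False
    debug: bool
```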
GitOrigin-RevId: 6567fd1009430e37f32924eb29ab8b4a1a17f315
* Dedicated inspections for `ClassVar` variables in variable declarations, variable reassignments, function parameters, local and return variables
* Types of `ClassVar` variables are now resolved correctly
* Tests for `ClassVar` inspections
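A sketch of the kinds of usages the inspections cover (the code is hypothetical):
```
from typing import ClassVar

class Registry:
    entries: ClassVar[list[str]] = []  # valid: class-level declaration

    def add(self, name: str) -> None:
        self.entries = [name]  # flagged: reassigning a ClassVar via an instance

def register(name: ClassVar[str]) -> None:  # flagged: ClassVar is not allowed
    ...                                     # in parameter annotations
```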
GitOrigin-RevId: 0fd0ef0126ba2c2801ef82bcbeca4ea9b0c48c73
In case of syntactic ambiguity with previous versions of the grammar, such as
"with (expr)" or "with (expr1, expr2)", PyWithStatement is still parsed as
having its own parentheses, not a parenthesized expression or a tuple as
a single context expression. The latter case, even though syntactically legal,
is still reported by the compatibility inspection in Python <3.9.
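A sketch of both forms:
```
from contextlib import nullcontext

# Unambiguous new syntax (Python 3.9+ grammar): the "as" clauses make the
# parenthesized context-manager list explicit.
with (nullcontext(1) as a, nullcontext(2) as b):
    print(a, b)

# Ambiguous form: parsed as the with statement's own parentheses rather
# than a tuple, and flagged by the compatibility inspection for Python <3.9.
with (nullcontext(1), nullcontext(2)):
    pass
```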
These changes also include proper formatter and editing support (e.g. not
inserting backslashes on line breaks inside parentheses), as well as
Complete Current Statement, which now takes possible parentheses into account
while inserting a missing colon.
The changes in the formatter are somewhat ad-hoc, intended to minimize the effect
on other constructs. "With" statement is somewhat special in the sense that it's
the first compound statement (having a statement list) with its own list-like
part in parentheses.
Existing tests on with statement processing were expanded and uniformly named.
Co-authored-by: Semyon Proshev <semyon.proshev@jetbrains.com>
GitOrigin-RevId: 15c33e97f177e81b5ed23891063555df016feb05