Fix incorrect fixture search in imports. Add a test verifying that the inspection registers a problem only for PyTestFixture.
GitOrigin-RevId: de029bb401689f0218e6fce04e64e738a8051fae
Provide the correct EP for inspection tools even with inconsistent tool.getShortName() and shortName="" in plugin.xml.
That allows obtaining the correct tool.getLanguage() and avoids running irrelevant inspections.
E.g. the CheckDtdRef inspection doesn't run in Java-only tests anymore.
GitOrigin-RevId: 188e9d55686ca084611c5c89cb899874dd078010
The original problem with @contextlib.asynccontextmanager was due to a bug
in PyTypeChecker.substitute introduced in the TypeVarTuple support. Namely,
we started to substitute unmatched ParamSpec types with null, effectively
replacing them in a callable signature with a single parameter of type Any.
Then the special logic in PyCallExpressionHelper.mapArguments that treated
unmatched ParamSpecs as "catch-all" varargs stopped working, and we started
to highlight all extra arguments in the substituted callable invocations.
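A minimal sketch of the reported scenario (the `make_session` coroutine and its arguments are illustrative, not code from this change):
```
import contextlib
from typing import AsyncIterator

@contextlib.asynccontextmanager
async def make_session(host: str, port: int) -> AsyncIterator[str]:
    yield f"{host}:{port}"

# Before the fix, the unmatched ParamSpec of asynccontextmanager was substituted
# with null, collapsing the inferred signature to a single parameter of type Any,
# so "port" (and any further arguments) was falsely flagged as extra.
session_cm = make_session("localhost", 5432)
```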
In other words, binding type parameters from decorator targets, e.g.
ParamSpecs or function return types, never worked because we can't resolve
functions passed as decorator arguments in "de-sugared" expression fragments
in the codeAnalysis context, i.e. when we replace
```
@deco
def f(x: int, y: str): ...
```
with `deco(f)` and then try to infer its type in PyDecoratedFunctionTypeProvider,
but we didn't report it thanks to that special-casing of unmatched ParamSpecs
(other type parameters replaced by Any don't trigger such warnings).
Ideally, we should start resolving references in arguments of function calls
in such virtual expression fragments in some stub-safe manner instead of relying
on this fallback logic. In the general case, however, complete stub-safe inference
for decorators is a hard problem, because arbitrary expressions can affect the
resulting type of a decorated definition, e.g.
```
from typing import Any, Callable, ParamSpec, TypeVar
T = TypeVar("T")
P = ParamSpec("P")
def deco(result: T) -> Callable[[Callable[P, Any]], Callable[P, T]]: ...
@deco(arbitrary_call().foo + 42)  # how to handle this without unstubbing?
def f(x: int, y: str): ...
```
GitOrigin-RevId: adeb625611a3ebb7d5db523df00388d619323545
Rework the annotators engine to run annotators in parallel, each over all relevant PSI elements in its own order (this lets fast annotators complete sooner and remove their outdated highlighters earlier).
For that, for each annotator (in parallel; a sketch follows the list):
- create its own AnnotationHolder
- rearrange its PSI elements in "time to first diagnostic in previous run" order, to reduce latency.
- run annotator on these PSI elements sequentially
- as soon as the annotator produces an info (or fails to reproduce an info from the previous run), update the corresponding range highlighters
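A rough Python sketch of the new scheduling; `AnnotationHolder`, `annotate`, and `update_range_highlighters` below are simplified stand-ins for the platform's Java internals, not the real API:
```
from concurrent.futures import ThreadPoolExecutor
from typing import Any

class AnnotationHolder:
    """Simplified stand-in: collects infos produced by a single annotator."""
    def __init__(self) -> None:
        self.infos: list[Any] = []

def update_range_highlighters(holder: AnnotationHolder) -> None:
    """Apply fresh infos; remove highlighters whose infos were not reproduced."""

def run_annotators(annotators: list[Any], psi_elements: list[Any],
                   time_to_first_diagnostic: dict[Any, float]) -> None:
    def run_one(annotator: Any) -> None:
        holder = AnnotationHolder()  # each annotator gets its own holder
        # Elements that yielded a diagnostic fastest in the previous run go first,
        # reducing the latency of updating or removing outdated highlighters.
        ordered = sorted(psi_elements,
                         key=lambda e: time_to_first_diagnostic.get((annotator, e),
                                                                    float("inf")))
        for element in ordered:  # sequential within one annotator
            annotator.annotate(element, holder)
            update_range_highlighters(holder)

    with ThreadPoolExecutor() as pool:  # annotators themselves run in parallel
        list(pool.map(run_one, annotators))
```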
Please note that the old contract "Do not call annotators for parent PSI if some (maybe completely unrelated) annotator/highlight visitor produced an error for some PSI element" is gone.
Fix highlighting tests, the majority of which relied on annotator order or on the implicit contract above.
Fix a bunch of annotators that tried to double-visit some PSI elements to work around the contract above.
GitOrigin-RevId: 74f727fc6d3be3f500cdbb0f26e7d0daf1ffe7ff
Perform real shortcut processing, as in the production environment.
Because of that, it must not be called in a WA.
GitOrigin-RevId: faa0302c4cd7460f08792e6170ae027cbd415de4
Traversing through all children of a PyFunction looking for nested functions,
possibly declaring a nonlocal or global variable, and doing that for every
target expression or a reference to one, is both inefficient and seemingly
unnecessary. For a name, being local means that it cannot be accessed
outside the scope of the corresponding function. Updating this variable from
some inner helper function doesn't violate this property. In Python, one
has to explicitly mark a name from an enclosing scope as nonlocal or global to
be able to assign to it within a function. It seems enough (and less surprising)
to rely on this information to distinguish between local variables and everything
else.
All in all, if some local variable is accessed as a nonlocal name in an inner
function, it's now still highlighted as local in the function that defines
it (previously it wasn't), but not in the function that declares it as
nonlocal (same as before).
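For example (illustrative names):
```
def outer():
    count = 0  # now also highlighted as a local variable of outer (previously it wasn't)

    def bump():
        nonlocal count  # not highlighted as local here (same as before)
        count += 1

    bump()
    return count
```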
Additionally, we now uniformly highlight as local variables "for" and "with"
statement targets, targets of assignment expressions, names bound in patterns,
and variables in assignments with unpacking (previously it was done only for
trivial assignments).
GitOrigin-RevId: 04c07ae6814a6b531911b3d87a3a26191c934962
This decorator is fully type hinted in Typeshed, so, with the changes introduced
for PY-60104, it's no longer necessary to special-case it anywhere.
PyDecoratedFunctionTypeProvider can infer the correct type after application
of this decorator to a generator function just as for any other typed decorator.
The original problem was caused by the fact that PyDecoratedFunctionTypeProvider
didn't process declarations having any decorator listed in the KnownDecorator enum,
as presumably all of them were too "magical" to analyze.
Co-authored-by: Daniil Kalinin <daniil.kalinin@jetbrains.com>
GitOrigin-RevId: 53b277803a1eb42784131d0dae5bb7ace173c017
Removed a number of tests in Py3TypeTest duplicating those of PyDecoratedFunctionTypeProviderTest.
Removed the tests about PY-23067 in Py3ArgumentListInspectionTest and Py3CompletionTest because
this issue was actually not addressed in 05e8ed4df0c7faa24bd972e1b422f664d708b510, and the behavior
some of them assert is not what users wanted.
More consistent naming of tests in PyDecoratedFunctionTypeProviderTest and PyParameterInfoTest.
Also removed excess tests there that were too similar to others or checked scenarios not relevant
to the current approach to type inference for decorators, e.g. the presence of @functools.wraps and
alternatives inside decorators — we don't analyze their bodies anymore.
Add a few extra tests illustrating problems with the current approach:
- testNotAnnotatedDecoratorChangingFunctionSignatureIsIgnored
- testInStackOfDecoratorsChangingFunctionSignatureOnlyAnnotatedAreConsidered
- testInStackOfImportedDecoratorsChangingFunctionSignatureOnlyAnnotatedAreConsidered
- testNotAnnotatedDecoratorRetainsParametersOfOriginalFunctionEvenIfItChangesItsSignature
GitOrigin-RevId: 0bf5070fc523b88dcc9d3009786dd028bdfa0feb
Assume that such decorators as well as "well-known" decorators, which we special-case,
don't change signatures of decorated functions and classes.
This change effectively ends the long-standing policy of safe-listing a few recognized
"well-known" decorators and assuming that everything else can change a definition in any
way. That approach no longer fits the current state of the Python world, where most
of the common side effects of decorators, such as adding new parameters, can be expressed
in type hints.
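For instance, a decorator that injects a leading parameter (and thus removes it from the decorated function's public signature) can be fully described with typing.Concatenate; `Session`/`with_session` below are illustrative names, not code from this change:
```
from typing import Callable, Concatenate, ParamSpec, TypeVar

P = ParamSpec("P")
R = TypeVar("R")

class Session: ...

def with_session(f: Callable[Concatenate[Session, P], R]) -> Callable[P, R]:
    def wrapper(*args: P.args, **kwargs: P.kwargs) -> R:
        return f(Session(), *args, **kwargs)
    return wrapper

@with_session
def load_user(session: Session, user_id: int) -> str: ...

load_user(42)  # the session parameter is supplied by the decorator itself
```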
In 2021.1 we added PyDecoratedFunctionTypeProvider, which was able to infer the return type of
a decorator from its body, as for any other function, and then correctly apply this information
to a decorated definition. That led to a number of problems.
First of all, depending on whether TypeEvalContext allowed us to access the AST of a decorator's
body, we inferred different signatures for functions decorated with an imported decorator in
inspections and in user-initiated actions, such as Parameter Info.
Secondly, we started inferring useless `(*args, **kwargs)` signatures in case of decorators
defined following the common pattern of returning a wrapper function accepting arbitrary
parameters and itself decorated with @functools.wraps (PY-48338). In some sense, our code
analysis was "too smart" in its type inference in this case.
Lastly, we diluted the return types of functions decorated with unknown decorators, even
fully typed ones, by uniting these types with Any (so-called "weak" types). This logic
existed before PyDecoratedFunctionTypeProvider, but it became more problematic once
we were able to propagate this artificial union through generic decorators.
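An illustration of the previous "weak" result (names are illustrative):
```
def unknown_deco(f):  # an untyped decorator
    return f

@unknown_deco
def answer() -> int:
    return 42

# The return type of answer() used to be inferred as "int | Any" (a weak int);
# now the declared return type int is assumed to be preserved.
```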
This change in behavior might lead to some false positives for untyped Python code
with non-pure decorators. However, given that other type checkers are also likely to hit these
problems, there is now a stronger incentive to add type hints for such problematic APIs.
In the worst case, we can special-case some heavily requested decorators as we did before.
GitOrigin-RevId: db11fb3573bda5da155cb921a30adc31d5c841e2
The original false positive was caused by the protocol NestedSequence in matplotlib
defined as:
```
class NestedSequence(Protocol[_T_co]):
    def __getitem__(self, key: int, /) -> _T_co | NestedSequence[_T_co]: ...
    def __len__(self, /) -> int: ...
```
which was used to annotate parameters of matplotlib.pyplot.plot and was considered
incompatible with the builtin list.
Skipping positional- and keyword-only parameter separators of the expected callable
is a workaround until we have a comprehensive mechanism for matching signatures.
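A self-contained sketch of the scenario (the `plot` stub below stands in for matplotlib.pyplot.plot):
```
from __future__ import annotations
from typing import Protocol, TypeVar

_T_co = TypeVar("_T_co", covariant=True)

class NestedSequence(Protocol[_T_co]):
    def __getitem__(self, key: int, /) -> _T_co | NestedSequence[_T_co]: ...
    def __len__(self, /) -> int: ...

def plot(data: NestedSequence[float]) -> None: ...

plot([1.0, 2.0, 3.0])  # used to be a false positive: list considered incompatible
```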
GitOrigin-RevId: 93d8bb4c6c4405d0e24b5f98152a461691f6197e
- Drop support for Python 3.5 & 3.6 in the compatibility inspection
- Fix and remove some outdated tests
- Remove XMLs for the long-unsupported Python 2.6 & 3.5
- Regenerate versions.xml
- Remove mentions of OS-specific modules
GitOrigin-RevId: 3265dd1de8a4f7a41119e10c95bb705ca5845efe
It reports the following cases (examples below):
* Duplicated type parameter names in type parameter lists
* Wrong number of type var constraints (one or zero) defined with new-style PEP 695 syntax
* Use of constraints for ParamSpec and TypeVarTuple type parameter kinds
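Illustrative snippets the inspection flags (intentionally invalid Python 3.12 code):
```
class Box[T, T]: ...                      # duplicated type parameter name

def pick[T: (int,)](x: T) -> T: ...       # a single constraint is not allowed
def free[T: ()](x: T) -> T: ...           # ...neither is an empty constraint list

class Row[*Ts: (int, str)]: ...           # constraints on a TypeVarTuple
def call[**P: (int, str)]() -> None: ...  # constraints on a ParamSpec
```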
GitOrigin-RevId: e0e8e7eb4dcef0c1b56ea49a3527666e3c713d86
Do not resolve type parameters as class members
Tests for the changes above
Co-authored-by: Mikhail Golubev <mikhail.golubev@jetbrains.com>
GitOrigin-RevId: 96309ebedf26d04e375bfa3a5f8ae0bc9257d48f
Inherit PyTypeAliasStatement from PyQualifiedNameOwner to re-use type aliases stack in PyTypingTypeProvider
Various tests for the changes above
Co-authored-by: Mikhail Golubev <mikhail.golubev@jetbrains.com>
GitOrigin-RevId: 242427c6f84c05ec48c94085f20675b8e30f8625
Previously, we incorrectly inferred generic list and set types parameterized with
types of each element of the corresponding collection literal instead of a union
of those. For instance, for [1, 2] we inferred list[Literal[1], Literal[2]] instead
of list[Literal[1, 2]].
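In code:
```
xs = [1, 2]    # now inferred as list[Literal[1, 2]] instead of list[Literal[1], Literal[2]]
ys = {1, "a"}  # similarly for sets: roughly set[Literal[1, "a"]]
```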
GitOrigin-RevId: 6f222daee871137a5de5589429f78341704c5544
The introduction of TypeVarTuples and the concept of unpacked tuple types made us
revise all the places where we match sequences of types in type inference.
For instance, when matching type parameters and type arguments for generic
specialization in:
* type hints, e.g. xs: MyGeneric[int, str] = MyGeneric()
* constructor invocations, e.g. xs = MyGeneric[int, str]()
* class declarations, e.g. class MyGeneric(Base[T1, T2, str]): ...
* type alias declarations, e.g. MyAlias: TypeAlias = MyGeneric[T, int]
as well as during type matching of all generic types, both normal non-variadic and
existing "built-in" generic variadics in the type system, namely tuples and
Callables.
Previously, this logic was spread across numerous places in PyTypeChecker and
PyTypingTypeProvider, all with their own subtle differences. The first attempt
at PEP 646 support put all the code for uniform matching of type parameters directly
in PyTypeChecker, significantly complicating its already arcane internals.
I've introduced a unified API for that called PyTypeParameterMapping.
It still retains some of the former quirks in the form of its Option flags, controlling
in particular how we handle having some of the expected types unmatched
(imagine expecting MyGeneric[T1, T2, *Ts] and receiving MyGeneric[int]),
but I'm planning to gradually eliminate this conditional logic.
The same class is now also responsible for matching parameter types of callables,
which has already allowed us to fix some known problems, such as their arity being
ignored (PY-16994), but I'm going to extract a separate API entity for that, since
matching of callable signatures is a much more complicated task involving
compatibility of different types of parameters (positional-only, keyword-only,
defaults, varargs, etc.).
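A sketch of the arity problem from PY-16994 that this made fixable:
```
from typing import Callable

def apply(f: Callable[[int, str], None]) -> None:
    f(42, "text")

def g(x: int) -> None: ...

apply(g)  # arity mismatch: previously ignored, now reported by the type checker
```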
Another positive side effect of these changes is that substitution of type
parameters during type inference became more consistent, and we no longer lose
useful type information by replacing all unbound type parameters with Any. It's
particularly visible in type checker errors where we stopped dropping unbound type
parameters from messages about mismatched parameter-argument types.
Among other improvements in this changeset are proper scoping for
TypeVarTuples, consistent with other type parameters, and recognizing TypeVarTuples
and unpacked tuples in types of *args parameters in function bodies, e.g.
`*args: *Ts` translates to the "args" parameter having the type `tuple[*Ts]`.
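For example:
```
from typing import TypeVarTuple

Ts = TypeVarTuple("Ts")

def f(*args: *Ts) -> tuple[*Ts]:
    return args  # "args" is recognized as having the type tuple[*Ts]
```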
The confusing PyNoMatchedType, used only for reporting missing arguments for *args
parameters annotated with unpacked tuples in the type checker inspection, e.g.
```
def f(*args: *tuple[int, str]): ...
f(42)  # a type checker error about a missing argument for str
```
was also removed from the type system in favor of a simpler approach: handling
such errors directly in the inspection. We might need such a general type in
the future, but it has to be well thought through.
GitOrigin-RevId: 63db6202254205863657f014632d141d340fe147