In other words, in a statement like "await = 42" there are now both a warning about
the missing operand and a warning that an await expression cannot be used as
an assignment target. This behavior is consistent with other expressions where
additional parsing errors are not special-cased.
GitOrigin-RevId: 37a68eacc7ec042435c185f9ddd8bc2eea42b40f
Instead, parse them as usual and report them later in the dedicated AssignTargetAnnotator
and TypeAnnotationTargetAnnotator. This way, a PsiError element appearing in the PSI
tree of a type declaration statement doesn't violate the nullability contract of
PyAstTypeDeclarationStatement.getTarget.
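An illustrative (made-up) example of the kind of statement affected: a tuple is not a valid
annotation target, so previously a PsiError element ended up inside the type declaration
statement, while now the target is parsed normally and flagged by the annotators above.
```
(x, y): int
```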
GitOrigin-RevId: a3e90088cfac7938c398d4d293a72dbd127a2cd0
Fix rerunning failed tests for all doctests.
Make the 'Doctest via pytest' option available not only when a file/function/class name starts with `test_`.
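For illustration (hypothetical file and function names), a doctest that should now get the
'Doctest via pytest' run configuration even though nothing is named with a `test_` prefix:
```
# utils.py
def double(x):
    """
    >>> double(2)
    4
    """
    return x * 2
```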
Merge-request: IJ-MR-131237
Merged-by: Egor Eliseev <Egor.Eliseev@jetbrains.com>
GitOrigin-RevId: c1e1fda5f66d1e213f34a057175d8e8986a46647
Run Doctest via pytest configuration if `pytest` is installed and selected in settings
Merge-request: IJ-MR-130194
Merged-by: Egor Eliseev <Egor.Eliseev@jetbrains.com>
GitOrigin-RevId: 4c97411c3da69249e6b9ad886bcbd9a206db744b
The logic is similar to that for instance attributes. Top-level class
attributes and methods defined in the class body take precedence,
followed by class attributes defined with assignments in @classmethods,
unless such an assignment would resolve to itself, as in
cls.attr = cls.attr + 1
Finally, we scan through all other class methods, resolving the name
to the first definition inside one of them.
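A sketch of the first two precedence rules with made-up names:
```
class Config:
    verbose = False            # 1) a definition in the class body wins

    @classmethod
    def enable_debug(cls):
        cls.debug = True       # 2) otherwise, cls.attr assignments in
                               #    @classmethods are considered

    @classmethod
    def bump(cls):
        cls.version = cls.version + 1   # skipped: would resolve to itself
```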
So far, I intentionally didn't expose such attributes in findClassAttribute()
or getClassAttributes() because users of these methods assume that
this API considers only attributes defined immediately in the class body.
Adding extra definitions from class methods might break these usages.
I had to update the inspection about typing.Final, because it relied
on the fact that resolve() on assignment targets on class objects can
lead only to those top-level class attributes, where type hints are normally
located, but now it can lead to assignments to a qualified attribute inside
a containing class method.
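A hypothetical snippet of the kind that now resolves differently and that the typing.Final
inspection has to handle:
```
from typing import Final

class Settings:
    MODE: Final = "dev"        # type hints normally live on class-body targets

    @classmethod
    def init(cls):
        cls.LIMIT = 100        # resolve() on Settings.LIMIT may now lead here
```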
GitOrigin-RevId: 0ca5bdaa4efca127ac187e822a49df6795e1028a
Fix incorrect searching for fixtures in imports. Add a test checking that the inspection registers a problem only for PyTestFixture.
GitOrigin-RevId: de029bb401689f0218e6fce04e64e738a8051fae
Provide correct EP for inspection tools even with inconsistent tool.getShortName() and shortName="" in plugin.xml.
That allows obtaining the correct tool.getLanguage() and avoids running irrelevant inspections.
E.g., the CheckDtdRef inspection no longer runs in Java-only tests.
GitOrigin-RevId: 188e9d55686ca084611c5c89cb899874dd078010
The original problem with @contextlib.asynccontextmanager was due to a bug
in PyTypeChecker.substitute introduced in the TypeVarTuple support. Namely,
we started to substitute unmatched ParamSpec types with null, effectively
replacing them in a callable signature with a single parameter of type Any.
Then the special logic in PyCallExpressionHelper.mapArguments that treated
unmatched ParamSpecs as "catch-all" varargs stopped working, and we started
to highlight all extra arguments in the substituted callable invocations.
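A minimal reproduction sketch of the original problem (the actual reported snippet may differ):
```
import contextlib

@contextlib.asynccontextmanager
async def connect(host: str, port: int = 5432):
    yield (host, port)

async def main():
    # With the broken substitution, the decorated callable's ParamSpec was
    # replaced by a single parameter of type Any, so every argument after
    # the first one here was falsely highlighted as unexpected.
    async with connect("localhost", 5432):
        ...
```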
In other words, binding type parameters from decorator targets, e.g.
ParamSpecs or function return types, never worked because we can't resolve
functions passed as decorator arguments in "de-sugared" expression fragments
in the codeAnalysis context, i.e. when we replace
```
@deco
def f(x: int, y: str): ...
```
with `deco(f)` and then try to infer its type in PyDecoratedFunctionTypeProvider,
but we didn't report it thanks to that special-casing of unmatched ParamSpecs
(other type parameters replaced by Any don't trigger such warnings).
Ideally, we should start resolving references in arguments of function calls
in such virtual expression fragments in some stub-safe manner instead of relying
on this fallback logic. In the general case, however, complete stub-safe inference
for decorators is a hard problem because arbitrary expressions passed to them can affect
the resulting return types, e.g.
```
def deco(result: T) -> Callable[[Callable[P, Any]], Callable[P, T]]: ...
@deco(arbitrary_call().foo + 42) # how to handle this without unstubbing?
def f(x: int, y: str): ...
```
GitOrigin-RevId: adeb625611a3ebb7d5db523df00388d619323545
Rework the annotators engine so that annotators run in parallel, each visiting all relevant PSI elements in its own order (this lets fast annotators complete sooner and remove outdated highlighters faster).
For that, for each annotator (in parallel):
- create its own AnnotationHolder
- rearrange its PSI elements in "time to first diagnostic in previous run" order, to reduce latency.
- run annotator on these PSI elements sequentially
- as soon as the annotator produces an info, or fails to reproduce an info from the previous run, update the corresponding range highlighters
Please note that the contract "Do not call annotators for a parent PSI element if some (maybe completely unrelated) annotator/highlight visitor produced an error for some PSI element" no longer holds.
Fix highlighting tests, the majority of which relied on annotator order or on the implicit contract above.
Fix a bunch of annotators that tried to double-visit some PSI elements to work around the contract above.
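A rough, language-agnostic sketch of the scheduling idea in Python pseudocode; the real engine is part of the IntelliJ highlighting infrastructure, and every name below is illustrative, not the actual API:
```
from concurrent.futures import ThreadPoolExecutor

def run_annotators(annotators, psi_elements, first_diagnostic_time, publish):
    # first_diagnostic_time: (annotator, element) -> time to the first diagnostic
    # observed in the previous run; publish: callback updating range highlighters.
    def run_one(annotator):
        holder = []  # each annotator gets its own AnnotationHolder-like list
        # visit elements that produced diagnostics earliest last time first,
        # so their highlighters are refreshed (or removed) with minimal latency
        ordered = sorted(
            psi_elements,
            key=lambda e: first_diagnostic_time.get((annotator, e), float("inf")),
        )
        for element in ordered:
            for info in annotator(element):
                holder.append(info)
                publish(annotator, element, info)  # update highlighters eagerly
        return holder

    with ThreadPoolExecutor() as pool:             # annotators run in parallel
        return list(pool.map(run_one, annotators))
```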
GitOrigin-RevId: 74f727fc6d3be3f500cdbb0f26e7d0daf1ffe7ff
Perform real shortcut processing, as in the production environment.
Must not be called in a write action (WA) because of that.
GitOrigin-RevId: faa0302c4cd7460f08792e6170ae027cbd415de4
Traversing all children of a PyFunction looking for nested functions that might
declare a nonlocal or global variable, and doing that for every target expression
or reference to one, is both inefficient and hardly necessary. For a name, being
local means that it cannot be accessed outside the scope of the corresponding
function. Updating such a variable from some inner helper function doesn't violate
this property. In Python, one has to explicitly mark a name from an enclosing scope
as nonlocal or global to be able to assign to it within a function. It seems
sufficient (and less surprising) to rely on this information to distinguish local
variables from everything else.
All in all, if some local variable is accessed as a nonlocal name in an inner
function, it's now still highlighted as local in the function that defines
it (previously it wasn't), but it is not in the function declaring it as
nonlocal (same as before).
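For example (illustrative code):
```
def outer():
    counter = 0              # still highlighted as a local variable of outer()

    def bump():
        nonlocal counter     # inside bump(), counter is not treated as local
        counter += 1

    bump()
    return counter
```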
Additionally, we now uniformly highlight as local variables "for" and "with"
statement targets, targets of assignment expressions, names bound in patterns,
and variables in assignments with unpacking (previously it was done only for
trivial assignments).
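A made-up snippet covering the newly covered kinds of targets, all of which are now highlighted as local variables:
```
def examples(paths):
    for index, path in enumerate(paths):       # "for" statement targets
        with open(path) as handle:             # "with" statement target
            if (line := handle.readline()):    # assignment expression target
                first, *rest = line.split()    # assignment with unpacking
    match paths:
        case [head, *tail]:                    # names bound in patterns
            return head, tail
    return None
```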
GitOrigin-RevId: 04c07ae6814a6b531911b3d87a3a26191c934962
This decorator is fully type hinted in Typeshed, so, with the changes introduced
for PY-60104, it's no longer necessary to special-case it anywhere.
PyDecoratedFunctionTypeProvider can infer the correct type after application
of this decorator to a generator function just as for any other typed decorator.
The original problem was caused by the fact that PyDecoratedFunctionTypeProvider
didn't process declarations having any decorator listed in the KnownDecorator enum,
as presumably all of them were too "magical" to analyze.
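As an illustration only, assuming the decorator in question is contextlib.contextmanager (it is not named in this message, so this is merely a representative Typeshed-typed decorator applied to a generator function):
```
from contextlib import contextmanager
from typing import Iterator

@contextmanager
def scoped(path: str) -> Iterator[str]:
    yield path

# The Typeshed stubs describe the decorator precisely enough for the decorated
# function to be inferred as returning a context manager yielding str:
with scoped("/tmp/data") as p:
    print(p)
```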
Co-authored-by: Daniil Kalinin <daniil.kalinin@jetbrains.com>
GitOrigin-RevId: 53b277803a1eb42784131d0dae5bb7ace173c017
Removed a number of tests in Py3TypeTest duplicating those of PyDecoratedFunctionTypeProviderTest.
Removed the tests about PY-23067 in Py3ArgumentListInspectionTest and Py3CompletionTest because
this issue was actually not addressed in 05e8ed4df0c7faa24bd972e1b422f664d708b510, and the behavior
some of them assert is not what users wanted.
More consistent naming of tests in PyDecoratedFunctionTypeProviderTest and PyParameterInfoTest.
Also removed excess tests there that were too similar to others or checked scenarios not relevant to
the current approach to type inference for decorators, e.g. the presence of @functools.wraps and
its alternatives inside decorators (we don't even analyze their bodies anymore).
Add a few extra tests illustrating problems with the current approach:
- testNotAnnotatedDecoratorChangingFunctionSignatureIsIgnored
- testInStackOfDecoratorsChangingFunctionSignatureOnlyAnnotatedAreConsidered
- testInStackOfImportedDecoratorsChangingFunctionSignatureOnlyAnnotatedAreConsidered
- testNotAnnotatedDecoratorRetainsParametersOfOriginalFunctionEvenIfItChangesItsSignature
GitOrigin-RevId: 0bf5070fc523b88dcc9d3009786dd028bdfa0feb
Assume that such decorators, as well as the "well-known" decorators that we special-case,
don't change the signatures of decorated functions and classes.
This change effectively ends the long-standing policy of safe-listing a few recognized
"well-known" decorators and assuming that everything else can change a definition in any
way. That approach no longer fits the current state of the Python world, where most
of the common side effects of decorators, such as adding new parameters, can be expressed
in type hints.
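For instance, a decorator that adds a leading parameter can describe that side effect entirely in type hints; the decorator below is hypothetical (Python 3.10+ for Concatenate/ParamSpec):
```
from typing import Callable, Concatenate, ParamSpec, TypeVar

P = ParamSpec("P")
R = TypeVar("R")

def with_prefix(func: Callable[P, R]) -> Callable[Concatenate[str, P], R]:
    # The extra leading 'prefix' parameter is visible to type checkers.
    def wrapper(prefix: str, *args: P.args, **kwargs: P.kwargs) -> R:
        print(prefix)
        return func(*args, **kwargs)
    return wrapper

@with_prefix
def greet(name: str) -> str:
    return f"Hello, {name}"

greet("LOG: ", "Alice")  # type checkers see greet as (str, name: str) -> str
```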
In 2021.1 we added PyDecoratedFunctionTypeProvider, which was able to infer the return type of
a decorator from its body, as for any other function, and then apply this information
to a decorated definition. This led to a number of problems.
First of all, depending on whether TypeEvalContext allowed us to access AST of a decorator's
body, we inferred different signatures for functions decorated with an imported decorator in
inspections and in user-initiated actions, such as Parameter Info.
Secondly, we started inferring useless `(*args, **kwargs)` signatures for decorators
that follow the common pattern of returning a wrapper function that accepts arbitrary
parameters and is itself decorated with @functools.wraps (PY-48338). In some sense, our code
analysis was "too smart" in its type inference in this case.
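The pattern in question looks roughly like this (illustrative code):
```
import functools

def log_calls(func):
    @functools.wraps(func)
    def wrapper(*args, **kwargs):      # arbitrary parameters
        print(f"calling {func.__name__}")
        return func(*args, **kwargs)
    return wrapper

@log_calls
def greet(name: str) -> str:
    return f"Hello, {name}"

# Inferring the wrapper's type from the decorator's body used to give greet
# the useless signature (*args, **kwargs) instead of (name: str) -> str.
```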
Lastly, we diluted the return types of functions decorated with unknown decorators, even
fully typed ones, by uniting these types with Any (so-called "weak" types). This logic
existed before PyDecoratedFunctionTypeProvider, but it became more problematic now
that we were able to propagate this artificial union through generic decorators.
This change in behavior might lead to some false positives for untyped Python code
with non-pure decorators. However, given that other type checkers are also likely to hit these
problems, there is now a stronger incentive to add type hints for such problematic APIs.
In the worst case, we can special-case some heavily requested decorators as we did before.
GitOrigin-RevId: db11fb3573bda5da155cb921a30adc31d5c841e2