Add new features: info about underscores in the path, info about the library location,
info about the context of the original file (extension type, PSI parents, project size),
and info about already existing imports from the same library (in this file, in other
opened files, in other files in the same directory).
GitOrigin-RevId: ca8206d4d7db6bc79e8f1a78502bf33696a653e9
`dataclass_transform` support posed a number of challenges to the current
AST/PSI stubs separation in our architecture. For the standard "dataclasses"
module and the "attrs" package API, we could rely on well-known names
and defaults to recognize and preserve the information about decorator
arguments and field specifier arguments in PSI stubs. With `dataclass_transform`
however, effectively any decorator call or any base class can indicate the use of
a "magical" API that generates dataclass-like entities.
At the time PSI stubs are built, we can't be sure, because we can't leave
the boundaries of the current file to resolve these names and determine whether
these decorators and base classes are themselves decorated with `dataclass_transform`.
To support that, we instead rely on well-known keyword argument names documented
in the spec, e.g. "kw_only" or "frozen". In other words, whenever we encounter
any decorator call or a superclass list with such keyword arguments, we generate
a `PyDataclassStub` stub for the corresponding class.
The same applies to class attribute initializers: whenever the RHS of an attribute
assignment is a function call with keyword arguments such as "default" or "kw_only",
we generate a `PyDataclassFieldStub` for the corresponding target expression.
Both of these stub interfaces can now contain null values for the corresponding
properties if they were not specified directly in the class definition.
Finally, for the `dataclass_transform` decorator itself, a new custom decorator stub
was introduced, `PyDataclassTransformDecoratorStub`. It preserves the decorator's keyword
arguments, such as "keyword_only_default" or "frozen_default", which control
the default properties of the generated dataclasses.
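For illustration, here is a minimal, hedged sketch of the kind of code these heuristics
target (the "model" and "model_field" names are made up; `dataclass_transform` is importable
from `typing` since Python 3.11 and from `typing_extensions` before that):

    from typing import dataclass_transform

    def model_field(*, default=None, kw_only=False): ...

    @dataclass_transform(kw_only_default=True, field_specifiers=(model_field,))
    def model(*, frozen=False):
        def wrap(cls):
            return cls
        return wrap

    @model(frozen=True)  # the "frozen" keyword argument triggers a PyDataclassStub
    class Point:
        # the "default" keyword in the RHS call triggers a PyDataclassFieldStub
        x: int = model_field(default=0)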
Later, when we need the complete information about specific dataclass properties,
e.g. in `PyDataclassTypeProvider` to generate a constructor signature, or in
`PyDataclassInspection`, we try to "resolve" this incomplete information from stubs
into finalized `PyDataclassParameters` and `PyDataclassFieldParameters` that
contain non-null versions of the same fields. The main entry points for that
are `resolveDataclassParameters` and `resolveDataclassFieldParameters`.
These methods additionally handle the situations where decorators, superclass
lists, and field specifiers lack any keyword arguments, and thus no custom stubs
were automatically created for them.
All the existing usages of `PyDataclassStub` and `PyDataclassFieldStub`
were updated to operate on `PyDataclassParameters` and `PyDataclassFieldParameters`
instead.
Counterparts of the tests covering various inspection checks for standard dataclass
definitions were added for dataclasses created with `dataclass_transform`, even
though the spec is unclear on some aspects of the expected type checker semantics, e.g.
whether combining "eq=False" and "order=True" or specifying both "default" and
"default_factory" for a field should be reported.
I tried to follow common sense when enabling existing checks for such arbitrary
user-defined dataclass APIs.
GitOrigin-RevId: 4180a1e32b5e4025fc4e3ed49bb8d67af0d60e66
These changes make PyStatementList elements (function and class bodies, loop bodies, if-else branches, etc.) lazy-parseable, which means they can now be reparsed without reparsing the whole file when the changes inside them are accepted as safe.
The main reason behind these changes is to improve performance.
GitOrigin-RevId: 892acbe0c95fde6aec74b7595b0a58f902c426f5
Instead, parse them as usual and report them later in the dedicated AssignTargetAnnotator
and TypeAnnotationTargetAnnotator. This way, a PsiError element appearing in the PSI
tree of a type declaration statement doesn't violate the nullability contract of
PyAstTypeDeclarationStatement.getTarget.
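To illustrate what such annotators typically flag, here is a hypothetical example
(deliberately invalid code; it no longer produces a PsiError in the tree and is
reported by the annotators instead):

    (x + y): int  # the annotated expression is not a valid assignment target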
GitOrigin-RevId: a3e90088cfac7938c398d4d293a72dbd127a2cd0
Otherwise, incomplete stubs break our custom logic for resolving tensorflow
modules re-exported from its _api.v2 subpackage, such as tensorflow.audio,
tensorflow.image, tensorflow.random, etc.
We did the same for NumPy stubs in the past, re-enabling them once they
became mature enough.
GitOrigin-RevId: b1e46067406a592761f56b7d296a287e5282b079
Assume that such decorators, as well as the "well-known" decorators we special-case,
don't change the signatures of decorated functions and classes.
This change effectively ends the long-standing policy of safe-listing a few recognized
"well-known" decorators and assuming everything else can change a definition in any
way. That approach no longer fits the current state of the Python world, where most
of the common side effects of decorators, such as adding new parameters, can be expressed
in type hints.
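For example, a decorator that prepends a parameter can declare that effect explicitly
(a sketch using ParamSpec and Concatenate from PEP 612; the names are invented):

    from typing import Callable, Concatenate, ParamSpec, TypeVar

    P = ParamSpec("P")
    R = TypeVar("R")

    def with_tag(f: Callable[P, R]) -> Callable[Concatenate[str, P], R]:
        # The declared return type tells the type checker about the extra "tag" parameter.
        def wrapper(tag: str, *args: P.args, **kwargs: P.kwargs) -> R:
            print(tag)
            return f(*args, **kwargs)
        return wrapper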
In 2021.1 we added PyDecoratedFunctionTypeProvider, which was able to infer the return type
of a decorator from its body, as for any other function, and then apply this information
to a decorated definition. It led to a number of problems.
First of all, depending on whether TypeEvalContext allowed us to access the AST of a decorator's
body, we inferred different signatures for functions decorated with an imported decorator in
inspections and in user-initiated actions, such as Parameter Info.
Secondly, we started inferring useless `(*args, **kwargs)` signatures for decorators
defined following the common pattern of returning a wrapper function that accepts arbitrary
parameters and is itself decorated with @functools.wraps (PY-48338). In some sense, our code
analysis was "too smart" in its type inference in this case.
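The pattern in question looks roughly like this (an untyped sketch; inferring the
decorator's return type literally yields the wrapper's `(*args, **kwargs)` signature
instead of the decorated function's real one):

    import functools

    def log_calls(f):
        @functools.wraps(f)
        def wrapper(*args, **kwargs):
            print(f"calling {f.__name__}")
            return f(*args, **kwargs)
        return wrapper

    @log_calls
    def add(x: int, y: int) -> int:
        return x + y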
Lastly, we diluted the return types of functions decorated with unknown decorators, even
fully typed ones, by uniting these types with Any (so-called "weak" types). This logic
existed before PyDecoratedFunctionTypeProvider, but it became more problematic once
we were able to propagate this artificial union through generic decorators.
This change in behavior might lead to some false positives for untyped Python code
with non-pure decorators. However, given that other type checkers are also likely to hit these
problems, there is now a stronger incentive to add type hints for such problematic APIs.
In the worst case, we can special-case some heavily requested decorators as we did before.
GitOrigin-RevId: db11fb3573bda5da155cb921a30adc31d5c841e2
- Drop support for Python 3.5 & 3.6 in the compatibility inspection
- Fix and remove some outdated tests
- Remove XMLs for the long-unsupported Python 2.6 & 3.5
- Regenerate versions.xml
- Remove mentions of OS-specific modules
GitOrigin-RevId: 3265dd1de8a4f7a41119e10c95bb705ca5845efe
The inspection covers the following cases:
* Extending typing.Generic in new-style generic classes
* Extending parameterized typing.Protocol in new-style generic classes
* Using generic upper bounds and constraints with type parameters for ParamSpec and TypeVarTuple
* Mixing traditional and new-style type variables
* Using traditional type variables in new-style type aliases
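Illustrative snippets for some of these cases (assuming Python 3.12 PEP 695 syntax; the
names are arbitrary, and these are type checker errors regardless of runtime behavior):

    from typing import Generic, Protocol, TypeVar

    T = TypeVar("T")  # a traditional type variable

    class C1[U](Generic[U]): ...   # extending typing.Generic in a new-style generic class
    class C2[U](Protocol[U]): ...  # extending parameterized typing.Protocol
    class C3[U](dict[T, U]): ...   # mixing traditional and new-style type variables
    type Alias = list[T]           # a traditional type variable in a new-style type alias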
GitOrigin-RevId: 8812959f64d2d87e1b72f713405edb86936220b9
The introduction of TypeVarTuples and the concept of unpacked tuple types made us
revise all the places where we match sequences of types in type inference.
For instance, when matching type parameters and type arguments for generic
specialization in:
* type hints, e.g. xs: MyGeneric[int, str] = MyGeneric()
* constructor invocations, e.g. xs = MyGeneric[int, str]()
* class declarations, e.g. class MyGeneric(Base[T1, T2, str]): ...
* type alias declarations, e.g. MyAlias: TypeAlias = MyGeneric[T, int]
as well as during type matching of all generic types, both regular non-variadic ones
and the existing "built-in" generic variadics in the type system, namely tuples and
Callables.
Previously, this logic was spread across numerous places in PyTypeChecker and
PyTypingTypeProvider, all with their own subtle differences. The first attempt
at PEP 646 support put all the code for uniform matching of type parameters directly
in PyTypeChecker, significantly complicating its already arcane internals.
I've introduced a unified API for that called PyTypeParameterMapping.
It still retains some of the former quirks in the form of its Option flags, controlling
in particular how we handle leaving some of the expected types unmatched
(imagine expecting MyGeneric[T1, T2, *Ts] and receiving MyGeneric[int]),
but I'm planning to gradually eliminate this conditional logic.
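In code, the unmatched-types situation above looks like this (Python 3.11+; the class is
hypothetical, and the annotation is a type checker scenario rather than a runtime one):

    from __future__ import annotations

    from typing import Generic, TypeVar, TypeVarTuple

    T1 = TypeVar("T1")
    T2 = TypeVar("T2")
    Ts = TypeVarTuple("Ts")

    class MyGeneric(Generic[T1, T2, *Ts]): ...

    x: MyGeneric[int]  # only T1 gets a type argument; T2 and *Ts are left unmatched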
The same class is now also responsible for matching the parameter types of callables,
which already allowed us to fix some known problems, such as ignoring their
arity (PY-16994), but I'm going to extract a separate API entity for that, since
matching callable signatures is a much more complicated task involving
compatibility of different kinds of parameters (positional-only, keyword-only,
defaults, varargs, etc.).
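A sketch of the arity problem mentioned above (an assumed minimal reproduction,
not the original PY-16994 code):

    from typing import Callable

    def apply(f: Callable[[int, str], None]) -> None: ...

    def g(x: int) -> None: ...

    apply(g)  # arity mismatch: g accepts one argument where two are expected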
Another positive side effect of these changes is that substitution of type
parameters during type inference became more consistent, and we no longer lose
useful type information by replacing all unbound type parameters with Any. It's
particularly visible in type checker errors where we stopped dropping unbound type
parameters from messages about mismatched parameter-argument types.
Among other improvements in this changeset are proper scoping for
TypeVarTuples, consistent with other type parameters, and recognizing TypeVarTuples
and unpacked tuples in the types of *args parameters in function bodies, e.g.
`*args: *Ts` translates to the "args" parameter having the type `tuple[*Ts]`.
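For example (Python 3.11+ syntax):

    from typing import TypeVarTuple

    Ts = TypeVarTuple("Ts")

    def f(*args: *Ts) -> tuple[*Ts]:
        return args  # inside the body, "args" has the type tuple[*Ts]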
The confusing PyNoMatchedType, which was used only for reporting missing arguments for
*args parameters annotated with unpacked tuples in the type checker inspection, e.g.
def f(*args: *tuple[int, str]): ...
f(42)  # a type checker error about a missing argument for str
was also removed from the type system in favor of a simpler approach: handling
such errors directly in the inspection. We might need such a general type in
the future, but it has to be well thought through.
GitOrigin-RevId: 63db6202254205863657f014632d141d340fe147
Also, this commit changes the behavior of PyPIPackageUtil: now only one package per name is supported. Motivation:
- Supporting several packages per name complicates the logic in all usages of the mapping.
- There are only 3 cases where it's applicable, and in all of them the second package is abandoned.
GitOrigin-RevId: 80fb1e0d28369bdfeb64f6b928ed1b543165d1de