`dataclass_transform` support posed a number of challenges to the current
AST/PSI stubs separation in our architecture. For the standard "dataclasses"
module and the "attrs" package API, we could rely on well-known names
and defaults to recognize and preserve the information about decorator
arguments and field specifier arguments in PSI stubs. With `dataclass_transform`
however, effectively any decorator call or any base class in a superclass list can indicate
the use of a "magical" API that generates dataclass-like entities.
At the time of building PSI stubs we can't tell, because we can't leave
the boundaries of the current file to resolve these names and check whether the
decorators and base classes are themselves decorated with `dataclass_transform`.
To support that, we instead rely on well-known keyword argument names documented
in the spec, e.g. "kw_only" or "frozen". In other words, whenever we encounter
any decorator call or a superclass list with such keyword arguments, we generate
a `PyDataclassStub` stub for the corresponding class.
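For illustration, here is a minimal sketch of these two class-level patterns. The names
`create_model` and `ModelBase` are hypothetical stand-ins (at stub-building time we cannot
resolve them to check for `dataclass_transform`, which itself is available in `typing`
since Python 3.11):

    from typing import dataclass_transform

    @dataclass_transform()
    def create_model(*, frozen: bool = False, kw_only: bool = False):
        def wrap(cls):
            return cls  # a real implementation would synthesize __init__, __eq__, etc.
        return wrap

    @dataclass_transform()
    class ModelBase:
        def __init_subclass__(cls, **kwargs):  # accepts keyword arguments like frozen=...
            super().__init_subclass__()

    @create_model(frozen=True, kw_only=True)  # well-known keyword names -> PyDataclassStub
    class Point:
        x: int
        y: int

    class Config(ModelBase, frozen=True):  # same for keyword arguments in a superclass list
        host: str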
The same happens with class attribute initializers: whenever the RHS of an attribute
is a call with a keyword argument such as "default" or "kw_only", we generate
a `PyDataclassFieldStub` for the corresponding target expression.
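A similarly minimal, self-contained sketch of the field specifier heuristic (`field` here is
a hypothetical specifier; only the keyword argument names matter at stub-building time):

    def field(*, default=None, default_factory=None, kw_only=False):
        return default  # a real field specifier would return a marker/descriptor object

    class Server:
        host: str = field(default="localhost")         # a call with "default" -> PyDataclassFieldStub
        port: int = field(default=8080, kw_only=True)  # a call with "kw_only" -> PyDataclassFieldStub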
Both of these stub interfaces can now contain null values for the corresponding
properties if they were not specified directly in the class definition.
Finally, for the `dataclass_transform` decorator itself, a new custom decorator stub
was introduced -- `PyDataclassTransformDecoratorStub`. It preserves the decorator's keyword
arguments, such as "kw_only_default" or "frozen_default", which control
the default properties of generated dataclasses.
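A minimal sketch of such a declaration (`make_settings` is an illustrative name;
`frozen_default` requires Python 3.12+ or typing_extensions):

    from typing import dataclass_transform

    @dataclass_transform(kw_only_default=True, frozen_default=True)  # preserved in the new decorator stub
    def make_settings(cls):
        return cls  # a real implementation would generate a keyword-only __init__, __eq__, etc.

    @make_settings
    class AppSettings:  # the generated dataclass defaults to keyword-only and frozen fields
        debug: bool = False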
Later, when we need complete information about specific dataclass properties,
e.g. in `PyDataclassTypeProvider` to generate a constructor signature, or in
`PyDataclassInspection`, we try to "resolve" this incomplete information from stubs
into finalized `PyDataclassParameters` and `PyDataclassFieldParameters` that
contain non-null versions of the same fields. The main entry points for that
are `resolveDataclassParameters` and `resolveDataclassFieldParameters`.
These methods additionally handle the situations where decorators, superclass
lists and field specifiers lack any keyword arguments, and thus no custom stubs
were created for them automatically.
All the existing usages of `PyDataclassStub` and `PyDataclassFieldStub`
were updated to operate on `PyDataclassParameters` and `PyDataclassFieldParameters`
instead.
Counterparts of the existing inspection tests for standard dataclass definitions
were added for dataclasses created with `dataclass_transform`, even though the spec
is unclear on some aspects of the expected type checker semantics, e.g. whether
combining "eq=False" and "order=True" or specifying both "default" and
"default_factory" for a field should be reported.
I tried to follow common sense when enabling existing checks for such arbitrary
user-defined dataclass APIs.
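For reference, the standard "dataclasses" module rejects both of these combinations at runtime:

    from dataclasses import dataclass, field

    @dataclass(eq=False, order=True)  # ValueError: eq must be true if order is true
    class C:
        x: int = 0

    @dataclass
    class D:
        x: int = field(default=0, default_factory=int)  # ValueError: cannot specify both default and default_factory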
GitOrigin-RevId: 4180a1e32b5e4025fc4e3ed49bb8d67af0d60e66
- PEP-696 adds a new syntax for declaring the default types of Type Parameters in new-style (PEP 695) generic classes, functions and type alias statements. Support these grammar changes (a sketch of the syntax follows after this list).
- Store info about default types in stubs for Type Parameters
- Increment the stub version counter in PyFileElementType
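A minimal sketch of the new syntax (requires Python 3.13+; `Box`, `first` and `Pair` are illustrative names):

    class Box[T = int]:  # T defaults to int when not given explicitly
        value: T

    def first[T = object](items: list[T]) -> T:
        return items[0]

    type Pair[T = str] = tuple[T, T]

    b: Box = Box()  # a bare Box annotation is treated as Box[int]
    p: Pair         # a bare Pair means tuple[str, str]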
GitOrigin-RevId: b6b22e3eaa86ce06132885781e5775a89bf4b840
Since the `__get__` call is implicit when accessing the attribute, create a synthetic call that takes the type of the call site into account (access via an instance or via the class) and use its return type as the type of an attribute typed with the descriptor class.
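A minimal sketch of the kind of descriptor access this covers (the names are illustrative; the overloads distinguish access via the class from access via an instance):

    from typing import overload

    class Column:
        @overload
        def __get__(self, instance: None, owner: type) -> "Column": ...
        @overload
        def __get__(self, instance: object, owner: type) -> int: ...
        def __get__(self, instance, owner):
            return self if instance is None else 42

    class Table:
        size = Column()

    Table.size    # access via the class: synthetic call __get__(None, Table) -> Column
    Table().size  # access via an instance: synthetic call __get__(<Table instance>, Table) -> int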
GitOrigin-RevId: acc36ebd2d62acfe99a5202b2478356f7b7aea46
In other words, this API allows inferring the results of function calls based only on the type of the receiver (e.g., a class), if any, and the types of the arguments passed to the function. The aim of this API is to replace the existing approach where we infer the types of such "synthetic" calls by creating a new expression using `com.jetbrains.python.psi.PyUtil.createExpressionFromFragment` and then inferring the type of the created expression.
GitOrigin-RevId: 09bee7ba1757cb07910be245253fe4bd855f5076
Add a new setting Python Integrated Tools: Detect tests in Jupyter Notebooks.
Exclude Jupyter Notebook files from the scope for test detection by default.
Add tests
Merge-request: IJ-MR-134248
Merged-by: Egor Eliseev <Egor.Eliseev@jetbrains.com>
GitOrigin-RevId: 0bee082bde4fa608cb1907b8fbd64b97bb9755a0
Otherwise, if someone wants to move a definition or extract a superclass from there to
a brand-new package, e.g. from main.py to pkg/mod.py, "pkg" will be created as a namespace
package.
Restore the original behavior of PyExtractSuperclassTest.testMultifileNew: the origin
file was inside a regular project root without __init__.py alongside.
GitOrigin-RevId: 750414b18582740076c14bfcfd07fa38992b4428
Now they all just compare the resulting test project content with an explicit "after" directory
instead of checking individual files in the test code.
GitOrigin-RevId: cff00d0a6b8ea4547b719997716e95a3f7c62cc9
Namely, Move Module Members, Extract Superclass and Make Local Function Top-Level were
all affected by this.
Now we check if the refactoring origin is inside a namespace package to decide whether
__init__.py should be generated for target directories.
Co-authored-by: Kamalia <alishevakamalia@gmail.com>
Co-authored-by: Maksim.Levitskii <maksim.levitskii@jetbrains.com>
GitOrigin-RevId: b0b3420c5ec8d1f7d3000d8834211631690a0c42
Otherwise, we end up with dozens of unintentionally public names such as "s", "i", "k"
even in the standard library (e.g. `this.s` or `pickletools.i`).
Ideally, we should rely on .pyi stubs and the content of `__all__` to offer only the explicitly
exposed API, but not every module has either of those, and it's not clear how to match
.py files with the corresponding .pyi stubs fast enough for completion.
GitOrigin-RevId: 163c472654e60ae63ff893142b8ddb9accc56393
Previously, such names were visible only on so-called "extended" completion,
activated when the hotkey for the basic completion was hit twice. The main reason
was that collecting such variants from indexes was a slow process, and we
didn't want to harm the responsiveness of completion for basic names.
Now it becomes possible thanks to a number of performance optimizations:
* Instead of using three separate indexes for classes, functions and variables,
we use one -- PyExportedModuleAttributeIndex. By definition, it includes only top-level
"importable" names, so we additionally save time by not filtering out irrelevant
entries. Also, it doesn't contain private definitions starting with an underscore.
It might bother some users, but given that the previous completion was used
extremely rarely, and the new one is going to be visible everywhere, it seems
that pruning unlikely entries as much as possible is a fair tradeoff. In the future,
we might bring them back in the "extended" completion if there is demand.
Also, this index binds its keys to the project (`traceKeyHashToVirtualFileMapping`),
further eliminating useless index lookups.
* Thanks to the recent fixes in the platform (IJPL-265), it's now possible to
simultaneously iterate over all keys in an index and request values for a given key
without deadlocks, which is much faster than eagerly fetching all keys first.
* While scanning through all matching entries from indexes, we terminate
the lookup if the number of items exceeds the size of the lookup list.
We can further reduce this number by adjusting the "ide.completion.variant.limit"
registry value.
* Calculating expensive "canonical" import paths (e.g. "pkg.private.Name" is importable as
"pkg.Name") is offloaded to a background thread thanks to the `withExpensiveRenderer` API.
We still calculate these paths synchronously, though, for names whose raw qualified names
contain components starting with an underscore to decide whether these private names are
publicly re-exported and, hence, should be displayed.
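A minimal sketch (hypothetical module layout) of the re-export pattern referred to above:

    # pkg/_impl.py
    class Widget: ...

    # pkg/__init__.py
    from pkg._impl import Widget  # "pkg._impl.Widget" is publicly re-exported,
    __all__ = ["Widget"]          # so it is offered with the canonical import path "pkg.Widget"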
The rest of the work has been put into reducing the number of entries on the list, e.g.
* The prefix under caret is now matched from the beginning of a name, e.g. `Bar<caret>`
matches `BarBaz`, but not `FooBar`.
* We don't suggest imported names clashing with those already available in scope.
* Some kinds of definitions are not suggested in specific contexts, e.g.
functions and variables are not suggested inside patterns and type hints.
* Nothing is suggested at the top-level of a class body, where dangling
reference expressions or calls are not normally expected.
Additionally, we don't suggest names from .pyi stubs at the moment, because doing so
pollutes the suggestion list with entries coming from the stubs for
third-party packages in Typeshed. We should probably bring them back once
we are able to properly disable Typeshed entries for packages that are not installed.
Some legacy forms of completion are left in the extended mode. In particular,
qualified names of classes are offered inside string literals only in this mode.
Also, module and package names are suggested only in the extended mode, because
top-level packages and modules are already suggested for the basic completion
by PyModuleNameCompletionContributor.
A few tests in PyClassNameCompletionTest were updated or removed entirely because
* we no longer suggest private names
* we no longer suggest names from private modules not re-exported in a public module
* we no longer suggest names clashing with those already available in scope
* prefix matching policy was changed to start at the beginning of an identifier
The whole feature can be disabled with the option "Suggest importable classes,
functions and variables in basic completion" in settings.
GitOrigin-RevId: 0787d42ce337b73b01a60f0bb7aa434fee43e659