When type inference is invoked, it is not known whether the client is calling inference on a single resolved method or enumerating all possible candidates, so the results cannot be cached; the current implementation is pessimistic and prohibits all caching during inference. As a result, long method call chains whose qualifiers depend on non-trivial calculations can be extremely expensive to evaluate. Let's cache all qualifiers locally: this doesn't prevent recalculation globally, but it works around the performance problem within a single call.
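
A minimal sketch of the local-cache idea, with Expression, InferredType, and computeTypeUncached as hypothetical stand-ins for the real inference types:

    import java.util.HashMap;
    import java.util.Map;

    // Hypothetical placeholders for the real expression/type classes.
    interface Expression {}
    interface InferredType {}

    class CallChainInference {
        // Lives only for the duration of one inference call, so a qualifier
        // shared by a long chain (a.b().c().d()) is computed once per call
        // instead of once per link; global caches are left untouched.
        private final Map<Expression, InferredType> localQualifierCache = new HashMap<>();

        InferredType inferQualifierType(Expression qualifier) {
            InferredType cached = localQualifierCache.get(qualifier);
            if (cached != null) {
                return cached;
            }
            // May recurse into this method for the qualifier's own qualifier,
            // which is why get/put is used instead of computeIfAbsent.
            InferredType result = computeTypeUncached(qualifier);
            localQualifierCache.put(qualifier, result);
            return result;
        }

        private InferredType computeTypeUncached(Expression qualifier) {
            // Stand-in for the real, potentially expensive type inference.
            return new InferredType() {};
        }
    }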
GitOrigin-RevId: b9b42cbc50918259f5de3a81d5f3a38967c153f1