PEP 3107 introduced syntax for function annotations, but the semantics
were deliberately left undefined. There has now been enough 3rd party
usage for static type analysis that the community would benefit from
a standard vocabulary and baseline tools within the standard library.

This PEP introduces a provisional module to provide these standard
definitions and tools, along with some conventions for situations
where annotations are not available.

Note that this PEP still explicitly does NOT prevent other uses of
annotations, nor does it require (or forbid) any particular processing
of annotations, even when they conform to this specification. It
simply enables better coordination, as PEP 333 did for web frameworks.

For example, here is a simple function whose argument and return type
are declared in the annotations:

def greeting(name: str) -> str:
    return 'Hello ' + name

While these annotations are available at runtime through the usual
__annotations__ attribute, no type checking happens at runtime.
Instead, the proposal assumes the existence of a separate off-line
type checker which users can run over their source code voluntarily.
Essentially, such a type checker acts as a very powerful linter.
(While it would of course be possible for individual users to employ
a similar checker at run time for Design By Contract enforcement or
JIT optimization, those tools are not yet as mature.)

The proposal is strongly inspired by mypy [mypy]. For example, the
type "sequence of integers" can be written as Sequence[int]. The
square brackets mean that no new syntax needs to be added to the
language. The example here uses a custom type Sequence, imported
from a pure-Python module typing. The Sequence[int] notation
works at runtime by implementing __getitem__() in the metaclass
(but its significance is primarily to an offline type checker).
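
For instance (a brief sketch; the function name total is a placeholder):

from typing import Sequence

def total(values: Sequence[int]) -> int:
    return sum(values)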

The type system supports unions, generic types, and a special type
named Any which is consistent with (i.e. assignable to and from) all
types. This latter feature is taken from the idea of gradual typing.
Gradual typing and the full type system are explained in PEP 483.

Other approaches from which we have borrowed or to which ours can be
compared and contrasted are described in PEP 482.

PEP 3107 added support for arbitrary annotations on parts of a
function definition. Although no meaning was assigned to annotations
then, there has always been an implicit goal to use them for type
hinting [gvr-artima], which is listed as the first possible use case
in said PEP.

Of these goals, static analysis is the most important. This includes
support for off-line type checkers such as mypy, as well as providing
a standard notation that can be used by IDEs for code completion and
refactoring.

While the proposed typing module will contain some building blocks for
runtime type checking -- in particular the get_type_hints()
function -- third party packages would have to be developed to
implement specific runtime type checking functionality, for example
using decorators or metaclasses. Using type hints for performance
optimizations is left as an exercise for the reader.

It should also be emphasized that Python will remain a dynamically
typed language, and the authors have no desire to ever make type hints
mandatory, even by convention.

Any function without annotations should be treated as having the most
general type possible, or ignored, by any type checker. Functions
with the @no_type_check decorator should be treated as having
no annotations.

It is recommended but not required that checked functions have
annotations for all arguments and the return type. For a checked
function, the default annotation for arguments and for the return type
is Any. An exception is the first argument of instance and
class methods. If it is not annotated, then it is assumed to have the
type of the containing class for instance methods, and a type object
type corresponding to the containing class object for class methods.
For example, in class A the first argument of an instance method
has the implicit type A. In a class method, the precise type of
the first argument cannot be represented using the available type
notation.

(Note that the return type of __init__ ought to be annotated with
-> None. The reason for this is subtle. If __init__ assumed
a return annotation of -> None, would that mean that an
argument-less, un-annotated __init__ method should still be
type-checked? Rather than leaving this ambiguous or introducing an
exception to the exception, we simply say that __init__ ought to
have a return annotation; the default behavior is thus the same as for
other methods.)

A type checker is expected to check the body of a checked function for
consistency with the given annotations.  The annotations may also be used
to check correctness of calls appearing in other checked functions.

Type checkers are expected to attempt to infer as much information as
necessary. The minimum requirement is to handle the builtin
decorators @property, @staticmethod and @classmethod.

Type hints may be built-in classes (including those defined in
standard library or third-party extension modules), abstract base
classes, types available in the types module, and user-defined
classes (including those defined in the standard library or
third-party modules).

While annotations are normally the best format for type hints,
there are times when it is more appropriate to represent them
by a special comment, or in a separately distributed stub
file. (See below for examples.)

Annotations must be valid expressions that evaluate without raising
exceptions at the time the function is defined (but see below for
forward references).

Annotations should be kept simple or static analysis tools may not be
able to interpret the values. For example, dynamically computed types
are unlikely to be understood. (This is an
intentionally somewhat vague requirement, specific inclusions and
exclusions may be added to future versions of this PEP as warranted by
the discussion.)

In addition to the above, the following special constructs defined
below may be used: None, Any, Union, Tuple,
Callable, all ABCs and stand-ins for concrete classes exported
from typing (e.g. Sequence and Dict), type variables, and
type aliases.

All newly introduced names used to support features described in
following sections (such as Any and Union) are available in
the typing module.
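
The Callable type can also be written with a literal ellipsis in place
of the argument list, declaring only the callback's return type.  A
sketch (the name register is a placeholder):

from typing import Callable

def register(callback: Callable[..., int]) -> None:
    ...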

Note that there are no square brackets around the ellipsis. The
arguments of the callback are completely unconstrained in this case
(and keyword arguments are acceptable).

Since using callbacks with keyword arguments is not perceived as a
common use case, there is currently no support for specifying keyword
arguments with Callable. Similarly, there is no support for
specifying callback signatures with a variable number of arguments of a
specific type.

Because typing.Callable does double-duty as a replacement for
collections.abc.Callable, isinstance(x, typing.Callable) is
implemented by deferring to isinstance(x, collections.abc.Callable).
However, isinstance(x, typing.Callable[...]) is not supported.

Since type information about objects kept in containers cannot be
statically inferred in a generic way, abstract base classes have been
extended to support subscription to denote expected types for container
elements. Example:
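
A sketch of a generic function parameterized with a type variable
(TypeVar is described further below; the name first is a placeholder):

from typing import Sequence, TypeVar

T = TypeVar('T')      # Declare type variable

def first(items: Sequence[T]) -> T:   # Generic function
    return items[0]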

In this case the contract is that the returned value is consistent with
the elements held by the collection.

A TypeVar() expression must always directly be assigned to a
variable (it should not be used as part of a larger expression). The
argument to TypeVar() must be a string equal to the variable name
to which it is assigned. Type variables must not be redefined.

TypeVar supports constraining parametric types to a fixed set of
possible types. For example, we can define a type variable that ranges
over just str and bytes. By default, a type variable ranges
over all possible types. Example of constraining a type variable:
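
A sketch (this mirrors the AnyStr pattern referred to later; the
function name concat is a placeholder):

from typing import TypeVar

AnyStr = TypeVar('AnyStr', str, bytes)

def concat(x: AnyStr, y: AnyStr) -> AnyStr:
    return x + y

concat('foo', 'bar')    # OK, both arguments are str
concat(b'foo', b'bar')  # OK, both arguments are bytes
concat('foo', b'bar')   # Error, cannot mix str and bytes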

To create Node instances you call Node() just as for a regular
class. At runtime the type (class) of the instance will be Node.
But what type does it have to the type checker? The answer depends on
how much information is available in the call. If the constructor
(__init__ or __new__) uses T in its signature, and a
corresponding argument value is passed, the type of the corresponding
argument(s) is substituted. Otherwise, Any is assumed. Example:
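
A sketch (the body of Node is a guess at a minimal definition; the
variables p and q anticipate the next paragraph):

from typing import TypeVar, Generic

T = TypeVar('T')

class Node(Generic[T]):
    def __init__(self, label: T = None) -> None:
        self.label = label

x = Node('')        # Inferred type is Node[str]
y = Node(0)         # Inferred type is Node[int]
z = Node()          # Inferred type is Node[Any]

p = Node[int]()     # Explicitly parametrized: p is a Node[int]
q = Node[str]('')   # Explicitly parametrized: q is a Node[str]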

Note that the runtime type (class) of p and q is still just Node
-- Node[int] and Node[str] are distinguishable class objects, but
the runtime class of the objects created by instantiating them doesn't
record the distinction. This behavior is called "type erasure"; it is
common practice in languages with generics (e.g. Java, TypeScript).

Generic versions of abstract collections like Mapping or Sequence
and generic versions of built-in classes -- List, Dict, Set,
and FrozenSet -- cannot be instantiated. However, concrete user-defined
subclasses thereof and generic versions of concrete collections can be
instantiated:

data = DefaultDict[int, bytes]()

Note that one should not confuse static types and runtime classes.
The type is still erased in this case and the above expression is
just a shorthand for:

data = collections.defaultdict() # type: DefaultDict[int, bytes]

It is not recommended to use the subscripted class (e.g. Node[int])
directly in an expression -- using a type alias (e.g. IntNode = Node[int])
instead is preferred. (First, creating the subscripted class,
e.g. Node[int], has a runtime cost. Second, using a type alias
is more readable.)

Generic[T] is only valid as a base class -- it's not a proper type.
However, user-defined generic types such as LinkedList[T] from the
above example and built-in generic types and ABCs such as List[T]
and Iterable[T] are valid both as types and as base classes. For
example, we can define a subclass of Dict that specializes type
arguments:
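
A sketch (the class name MyDict is a placeholder):

from typing import TypeVar, Dict

T = TypeVar('T')

class MyDict(Dict[str, T]):   # The key type is fixed to str; values stay generic
    ...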

The metaclass used by Generic is a subclass of abc.ABCMeta.
A generic class can be an ABC by including abstract methods
or properties, and generic classes can also have ABCs as base
classes without a metaclass conflict.

A type variable may specify an upper bound using bound=<type>.
This means that an actual type substituted (explicitly or implicitly)
for the type variable must be a subtype of the boundary type. A
common example is the definition of a Comparable type that works well
enough to catch the most common errors:
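
A sketch of such a definition (simplified; the comments describe the
behavior a checker is expected to infer):

from abc import ABCMeta, abstractmethod
from typing import Any, TypeVar

class Comparable(metaclass=ABCMeta):
    @abstractmethod
    def __lt__(self, other: Any) -> bool: ...

CT = TypeVar('CT', bound=Comparable)

def min(x: CT, y: CT) -> CT:
    if x < y:
        return x
    else:
        return y

min(1, 2)      # ok, return type int
min('x', 'y')  # ok, return type str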

(Note that this is not ideal -- for example min('x', 1) is invalid
at runtime but a type checker would simply infer the return type
Comparable. Unfortunately, addressing this would require
introducing a much more powerful and also much more complicated
concept, F-bounded polymorphism. We may revisit this in the future.)

An upper bound cannot be combined with type constraints (as used in
AnyStr, see the example earlier); type constraints cause the
inferred type to be _exactly_ one of the constraint types, while an
upper bound just requires that the actual type is a subtype of the
boundary type.

Consider a class Employee with a subclass Manager. Now
suppose we have a function with an argument annotated with
List[Employee]. Should we be allowed to call this function with a
variable of type List[Manager] as its argument? Many people would
answer "yes, of course" without even considering the consequences.
But unless we know more about the function, a type checker should
reject such a call: the function might append an Employee instance
to the list, which would violate the variable's type in the caller.

It turns out such an argument acts contravariantly, whereas the
intuitive answer (which is correct in case the function doesn't mutate
its argument!) requires the argument to act covariantly. A longer
introduction to these concepts can be found on Wikipedia
[wiki-variance] and in PEP 483; here we just show how to control
a type checker's behavior.

By default generic types are considered invariant in all type variables,
which means that values for variables annotated with types like
List[Employee] must exactly match the type annotation -- no subclasses or
superclasses of the type parameter (in this example Employee) are
allowed.

To facilitate the declaration of container types where covariant or
contravariant type checking is acceptable, type variables accept keyword
arguments covariant=True or contravariant=True. At most one of these
may be passed. Generic types defined with such variables are considered
covariant or contravariant in the corresponding variable. By convention,
it is recommended to use names ending in _co for type variables
defined with covariant=True and names ending in _contra for those
defined with contravariant=True.
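
A sketch of a read-only container declared covariant in its element type
(the class name ImmutableList and the helper functions are placeholders):

from typing import TypeVar, Generic, Iterable, Iterator

T_co = TypeVar('T_co', covariant=True)

class ImmutableList(Generic[T_co]):
    def __init__(self, items: Iterable[T_co]) -> None:
        self._items = list(items)
    def __iter__(self) -> Iterator[T_co]:
        return iter(self._items)

class Employee: ...
class Manager(Employee): ...

def dump_employees(emps: ImmutableList[Employee]) -> None:
    for emp in emps:
        ...

mgrs = ImmutableList([Manager()])  # type: ImmutableList[Manager]
dump_employees(mgrs)  # OK, because ImmutableList is covariant in its element type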

The read-only collection classes in typing are all declared
covariant in their type variable (e.g. Mapping and Sequence). The
mutable collection classes (e.g. MutableMapping and
MutableSequence) are declared invariant. The one example of
a contravariant type is the Generator type, which is contravariant
in the send() argument type (see below).

Note: Covariance or contravariance is not a property of a type variable,
but a property of a generic class defined using this variable.
Variance is only applicable to generic types; generic functions
do not have this property. The latter should be defined using only
type variables without covariant or contravariant keyword arguments.
For example, the following is
fine:
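
A sketch (the function name cycle is a placeholder):

from typing import TypeVar, AbstractSet, Iterator

T = TypeVar('T')   # a plain type variable, no covariant/contravariant keyword

def cycle(s: AbstractSet[T]) -> Iterator[T]:
    while True:
        for elem in s:
            yield elem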

PEP 3141 defines Python's numeric tower, and the stdlib module
numbers implements the corresponding ABCs (Number,
Complex, Real, Rational and Integral). There are some
issues with these ABCs, but the built-in concrete numeric classes
complex, float and int are ubiquitous (especially the
latter two :-).

Rather than requiring that users write import numbers and then use
numbers.Float etc., this PEP proposes a straightforward shortcut
that is almost as effective: when an argument is annotated as having
type float, an argument of type int is acceptable; similarly,
for an argument annotated as having type complex, arguments of
type float or int are acceptable. This does not handle
classes implementing the corresponding ABCs or the
fractions.Fraction class, but we believe those use cases are
exceedingly rare.
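
For instance (the function name scale is a placeholder):

def scale(length: float, factor: float) -> float:
    return length * factor

scale(3, 2)   # Accepted: int arguments are fine where float is expected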

When a type hint contains names that have not been defined yet, that
definition may be expressed as a string literal, to be resolved later.

A situation where this occurs commonly is the definition of a
container class, where the class being defined occurs in the signature
of some of the methods. For example, the following code (the start of
a simple binary tree implementation) does not work:
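
A sketch of the problem, together with the string-literal workaround
described below:

# Fails: the annotation Tree is evaluated while the class is still
# being defined, raising NameError.
class Tree:
    def __init__(self, left: Tree, right: Tree):
        self.left = left
        self.right = right

# Works: the annotations are string literals, resolved later.
class Tree:
    def __init__(self, left: 'Tree', right: 'Tree'):
        self.left = left
        self.right = right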

The string literal should contain a valid Python expression (i.e.,
compile(lit, '', 'eval') should be a valid code object) and it
should evaluate without errors once the module has been fully loaded.
The local and global namespace in which it is evaluated should be the
same namespaces in which default arguments to the same function would
be evaluated.

Moreover, the expression should be parseable as a valid type hint, i.e.,
it is constrained by the rules from the section Acceptable type hints
above.

It is allowable to use string literals as part of a type hint, for
example:

class Tree:
    ...
    def leaves(self) -> List['Tree']:
        ...

A common use for forward references is when e.g. Django models are
needed in the signatures. Typically, each model is in a separate
file, and has methods that take arguments whose types involve other models.
Because of the way circular imports work in Python, it is often not
possible to import all the needed models directly:
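
A sketch of the failing layout (Model stands in for the framework's base
class):

# File models/a.py:
from models.b import B

class A(Model):
    def foo(self, b: B): ...

# File models/b.py:
from models.a import A

class B(Model):
    def bar(self, a: A): ...

# File main.py:
from models.a import A
from models.b import B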

Assuming main is imported first, this will fail with an ImportError at
the line from models.a import A in models/b.py, which is being
imported from models/a.py before a has defined class A. The solution
is to switch to module-only imports and reference the models by their
_module_._class_ name:
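
A sketch of the fix (again with Model as a stand-in base class; note that
the annotations also become string literals, i.e. forward references):

# File models/a.py:
from models import b

class A(Model):
    def foo(self, other: 'b.B'): ...

# File models/b.py:
from models import a

class B(Model):
    def bar(self, other: 'a.A'): ...

# File main.py:
from models.a import A
from models.b import B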

Since the subclasses of Enum cannot be further subclassed,
the type of variable x can be statically inferred in all branches
of the above example. The same approach is applicable if more than one
singleton object is needed: one can use an enumeration that has more than
one value:
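
A sketch (Reason and process are placeholder names):

from enum import Enum
from typing import Union

class Reason(Enum):
    timeout = 1
    error = 2

def process(response: Union[str, Reason] = '') -> str:
    if response is Reason.timeout:
        return 'TIMEOUT'
    elif response is Reason.error:
        return 'ERROR'
    else:
        # response can only be a str in this branch
        return response.upper()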

A special kind of type is Any. Every type is consistent with
Any. It can be considered a type that has all values and all methods.
Note that Any and builtin type object are completely different.

When the type of a value is object, the type checker will reject
almost all operations on it, and assigning it to a variable (or using
it as a return value) of a more specialized type is a type error. On
the other hand, when a value has type Any, the type checker will
allow all operations on it, and a value of type Any can be assigned
to a variable (or used as a return value) of a more constrained type.
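
A brief sketch of the difference (the function names and the magic()
attribute are placeholders):

from typing import Any

def hash_a(item: object) -> int:
    return item.magic()   # Rejected: object has no attribute 'magic'

def hash_b(item: Any) -> int:
    return item.magic()   # Accepted: any operation is allowed on Any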

A function parameter without an annotation is assumed to be annotated with
Any. If a generic type is used without specifying type parameters,
they are assumed to be Any:
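
A sketch (use_map is a placeholder name):

from typing import Mapping

def use_map(m: Mapping) -> None:   # 'Mapping' here means 'Mapping[Any, Any]'
    ...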

This rule also applies to Tuple, in annotation context it is equivalent
to Tuple[Any, ...] and, in turn, to tuple. As well, a bare
Callable in an annotation is equivalent to Callable[..., Any] and,
in turn, to collections.abc.Callable:
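
A sketch (the function names are placeholders):

from typing import Tuple, Callable

def check_args(args: Tuple) -> bool:       # Same as Tuple[Any, ...]
    ...

def run_later(cb: Callable) -> None:       # Same as Callable[..., Any]
    cb()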

Sometimes you want to talk about class objects, in particular class
objects that inherit from a given class. This can be spelled as
Type[C] where C is a class. To clarify: while C (when
used as an annotation) refers to instances of class C, Type[C]
refers to subclasses of C. (This is a similar distinction as
between object and type.)
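
A sketch of the kind of function the next paragraphs refer to (User,
BasicUser and ProUser are example classes):

from typing import Type, TypeVar

class User: ...
class BasicUser(User): ...
class ProUser(User): ...

U = TypeVar('U', bound=User)

def new_user(user_class: Type[U]) -> U:
    user = user_class()
    # (Here we could write the new user object to a database.)
    return user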

Now when we call new_user() with a specific subclass of User a
type checker will infer the correct type of the result:

joe = new_user(BasicUser) # Inferred type is BasicUser

The value corresponding to Type[C] must be an actual class object
that's a subtype of C, not a special form. IOW, in the above
example calling e.g. new_user(Union[BasicUser, ProUser]) is
rejected by the type checker (in addition to failing at runtime
because you can't instantiate a union).

Note that it is legal to use a union of classes as the parameter for
Type[], as in:
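
A sketch, building on the classes above (the function name is a
placeholder):

from typing import Type, Union

def new_non_team_user(user_class: Type[Union[BasicUser, ProUser]]):
    ...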

Type[Any] is also supported (see below for its meaning). However,
other special constructs like Tuple or Callable are not
allowed.

There are some concerns with this feature: for example when
new_user() calls user_class() this implies that all subclasses
of User must support this in their constructor signature. However
this is not unique to Type[]: class methods have similar concerns.
A type checker ought to flag violations of such assumptions, but by
default constructor calls that match the constructor signature in the
indicated base class (User in the example above) should be
allowed. A program containing a complex or extensible class hierarchy
might also handle this by using a factory class method. A future
revision of this PEP may introduce better ways of dealing with these
concerns.

When Type is parameterized it requires exactly one parameter.
Plain Type without brackets is equivalent to Type[Any] and
this in turn is equivalent to type (the root of Python's metaclass
hierarchy). This equivalence also motivates the name, Type, as
opposed to alternatives like Class or SubType, which were
proposed while this feature was under discussion; this is similar to
the relationship between e.g. List and list.

Regarding the behavior of Type[Any] (or Type or type),
accessing attributes of a variable with this type only provides
attributes and methods defined by type (for example,
__repr__() and __mro__). Such a variable can be called with
arbitrary arguments, and the return type is Any.

Type is covariant in its parameter, because Type[Derived] is a
subtype of Type[Base]:
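
A sketch, again building on the new_user() example above:

def new_pro_user(pro_user_class: Type[ProUser]) -> ProUser:
    user = new_user(pro_user_class)   # OK: Type[ProUser] is a subtype of Type[User]
    return user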

In most cases the first argument of class and instance methods
does not need to be annotated, and it is assumed to have the
type of the containing class for instance methods, and a type object
type corresponding to the containing class object for class methods.
In addition, the first argument in an instance method can be annotated
with a type variable. In this case the return type may use the same
type variable, thus making that method a generic function. For example:
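
A sketch (the class names Copyable and C are placeholders):

from typing import TypeVar

T = TypeVar('T', bound='Copyable')

class Copyable:
    def copy(self: T) -> T:
        new = self.__class__()
        new.__dict__.update(self.__dict__)
        return new

class C(Copyable): ...

c = C()
c2 = c.copy()   # The inferred type of c2 is C, not just Copyable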

Sometimes there's code that must be seen by a type checker (or other
static analysis tools) but should not be executed. For such
situations the typing module defines a constant,
TYPE_CHECKING, that is considered True during type checking
(or other static analysis) but False at runtime. Example:
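
A sketch (expensive_mod is the module named in the note below; SomeClass
and a_func are placeholder names):

import typing

if typing.TYPE_CHECKING:
    import expensive_mod

def a_func(arg: 'expensive_mod.SomeClass') -> None:
    a_var = arg  # type: expensive_mod.SomeClass
    ...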

(Note that the type annotation must be enclosed in quotes, making it a
"forward reference", to hide the expensive_mod reference from the
interpreter runtime. In the # type comment no quotes are needed.)

Some functions are designed to take their arguments only positionally,
and expect their callers never to use the argument's name to provide
that argument by keyword. All arguments with names beginning with
__ are assumed to be positional-only:
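
A sketch (quux is a placeholder name):

def quux(__x: int) -> None:
    ...

quux(3)       # OK
quux(__x=3)   # Flagged as an error by a type checker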

The typing.py module provides a generic version of ABC
collections.abc.Coroutine to specify awaitables that also support
send() and throw() methods. The variance and order of type variables
correspond to those of Generator, namely Coroutine[T_co, T_contra, V_co],
for example:
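
A sketch:

from typing import Coroutine, List

c = None  # type: Coroutine[List[str], str, int]

async def consume() -> None:
    # send() accepts str (the contravariant parameter), the coroutine
    # yields List[str], and awaiting it produces its return type, int.
    x = c.send('hi')  # type: List[str]
    y = await c       # type: int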

A number of existing or potential use cases for function annotations
exist, which are incompatible with type hinting. These may confuse
a static type checker. However, since type hinting annotations have no
runtime behavior (other than evaluation of the annotation expression and
storing annotations in the __annotations__ attribute of the function
object), this does not make the program incorrect -- it just may cause
a type checker to emit spurious warnings or errors.

To mark portions of the program that should not be covered by type
hinting, you can use one or more of the following:

a # type: ignore comment;

a @no_type_check decorator on a class or function;

a custom class or function decorator marked with
@no_type_check_decorator.

For more details see later sections.

In order for maximal compatibility with offline type checking it may
eventually be a good idea to change interfaces that rely on annotations
to switch to a different mechanism, for example a decorator. In Python
3.5 there is no pressure to do this, however. See also the longer
discussion under Rejected alternatives below.

No first-class syntax support for explicitly marking variables as being
of a specific type is added by this PEP. To help with type inference in
complex cases, a comment of the following format may be used:
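
A sketch (the variable names are placeholders):

from typing import List

counts = []              # type: List[int]
names, sizes = [], []    # type: List[str], List[int]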

In stubs it may be useful to declare the existence of a variable
without giving it an initial value. This can be done using a literal
ellipsis:

from typing import IO
stream = ... # type: IO[str]

In non-stub code, there is a similar special case:

from typing import IO
stream = None # type: IO[str]

Type checkers should not complain about this (despite the value
None not matching the given type), nor should they change the
inferred type to Optional[...] (despite the rule that does this
for annotated arguments with a default value of None). The
assumption here is that other code will ensure that the variable is
given a value of the proper type, and all uses can assume that the
variable has the given type.

The # type: ignore comment should be put on the line that the
error refers to:
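
For instance:

import http.client
errors = {
    'not_found': http.client.NOT_FOUND  # type: ignore
}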

A # type: ignore comment on a line by itself is equivalent to
adding an inline # type: ignore to each line until the end of
the current indented block.  At top indentation level this has the
effect of disabling type checking until the end of the file.

If type hinting proves useful in general, a syntax for typing variables
may be provided in a future Python version.

Occasionally the type checker may need a different kind of hint: the
programmer may know that an expression is of a more constrained type
than a type checker may be able to infer. For example:

from typing import List, cast

def find_first_str(a: List[object]) -> str:
    index = next(i for i, x in enumerate(a) if isinstance(x, str))
    # We only get here if there's at least one string in a
    return cast(str, a[index])

Some type checkers may not be able to infer that the type of
a[index] is str and only infer object or Any, but we
know that (if the code gets to that point) it must be a string. The
cast(t, x) call tells the type checker that we are confident that
the type of x is t. At runtime a cast always returns the
expression unchanged -- it does not check the type, and it does not
convert or coerce the value.

Casts differ from type comments (see the previous section). When using
a type comment, the type checker should still verify that the inferred
type is consistent with the stated type. When using a cast, the type
checker should blindly believe the programmer. Also, casts can be used
in expressions, while type comments only apply to assignments.

There are also situations where a programmer might want to avoid logical
errors by creating simple classes. For example:

class UserId(int):
    pass

def get_by_user_id(user_id: UserId):
    ...

However, this approach introduces a runtime overhead. To avoid this,
typing.py provides a helper function NewType that creates
simple unique types with almost zero runtime overhead. For a static type
checker Derived = NewType('Derived', Base) is roughly equivalent
to a definition:

class Derived(Base):
    def __init__(self, _x: Base) -> None:
        ...

While at runtime, NewType('Derived', Base) returns a dummy function
that simply returns its argument. Type checkers require explicit casts
from int where UserId is expected, while implicitly casting
from UserId where int is expected. Examples:
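
A sketch (name_by_id is a placeholder function):

from typing import NewType

UserId = NewType('UserId', int)

def name_by_id(user_id: UserId) -> str:
    ...

UserId('user')          # Fails type check (but still returns 'user' at runtime)
name_by_id(42)          # Fails type check
name_by_id(UserId(42))  # OK
num = UserId(5) + 1     # type: int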

NewType accepts exactly two arguments: a name for the new unique type,
and a base class. The latter should be a proper class, i.e.,
not a type construct like Union, etc. The function returned by NewType
accepts only one argument; this is equivalent to supporting only one
constructor accepting an instance of the base class (see above). Example:
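
A sketch (PacketId and TcpPacketId are placeholder names):

from typing import NewType

class PacketId:
    def __init__(self, major: int, minor: int) -> None:
        self._major = major
        self._minor = minor

TcpPacketId = NewType('TcpPacketId', PacketId)

packet = PacketId(100, 100)
tcp_packet = TcpPacketId(packet)   # OK
tcp_packet = TcpPacketId(127, 0)   # Fails in type checker and at runtime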

Stub files are files containing type hints that are only for use by
the type checker, not at runtime. There are several use cases for
stub files:

Extension modules

Third-party modules whose authors have not yet added type hints

Standard library modules for which type hints have not yet been
written

Modules that must be compatible with Python 2 and 3

Modules that use annotations for other purposes

Stub files have the same syntax as regular Python modules. There is one
feature of the typing module that is different in stub files:
the @overload decorator described below.

The type checker should only check function signatures in stub files;
it is recommended that function bodies in stub files just be a single
ellipsis (...).

The type checker should have a configurable search path for stub files.
If a stub file is found the type checker should not read the
corresponding "real" module.

While stub files are syntactically valid Python modules, they use the
.pyi extension to make it possible to maintain stub files in the
same directory as the corresponding real module. This also reinforces
the notion that no runtime behavior should be expected of stub files.

Additional notes on stub files:

Modules and variables imported into the stub are not considered
exported from the stub unless the import uses the import ... as
... form or the equivalent from ... import ... as ... form.

However, as an exception to the previous bullet, all objects
imported into a stub using from ... import * are considered
exported. (This makes it easier to re-export all objects from a
given module that may vary by Python version.)

Stub files may be incomplete. To make type checkers aware of this, the file
can contain the following code:

def __getattr__(name) -> Any: ...

Any identifier not defined in the stub is therefore assumed to be of type
Any.

The @overload decorator allows describing functions and methods
that support multiple different combinations of argument types. This
pattern is used frequently in builtin modules and types. For example,
the __getitem__() method of the bytes type can be described as
follows:
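
A sketch of such a stub (simplified):

from typing import overload

class bytes:
    ...
    @overload
    def __getitem__(self, i: int) -> int: ...
    @overload
    def __getitem__(self, s: slice) -> bytes: ...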

Uses of the @overload decorator as shown above are suitable for
stub files. In regular modules, a series of @overload-decorated
definitions must be followed by exactly one
non-@overload-decorated definition (for the same function/method).
The @overload-decorated definitions are for the benefit of the
type checker only, since they will be overwritten by the
non-@overload-decorated definition, while the latter is used at
runtime but should be ignored by a type checker. At runtime, calling
a @overload-decorated function directly will raise
NotImplementedError. Here's an example of a non-stub overload
that can't easily be expressed using a union or a type variable:
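
A sketch (utf8 is a placeholder name):

from typing import overload

@overload
def utf8(value: None) -> None: ...
@overload
def utf8(value: bytes) -> bytes: ...
@overload
def utf8(value: str) -> bytes: ...
def utf8(value):
    # Actual implementation goes here.
    ...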

NOTE: While it would be possible to provide a multiple dispatch
implementation using this syntax, its implementation would require
using sys._getframe(), which is frowned upon. Also, designing and
implementing an efficient multiple dispatch mechanism is hard, which
is why previous attempts were abandoned in favor of
functools.singledispatch(). (See PEP 443, especially its section
"Alternative approaches".) In the future we may come up with a
satisfactory multiple dispatch design, but we don't want such a design
to be constrained by the overloading syntax defined for type hints in
stub files. It is also possible that both features will develop
independent from each other (since overloading in the type checker
has different use cases and requirements than multiple dispatch
at runtime -- e.g. the latter is unlikely to support generic types).

A constrained TypeVar type can often be used instead of using the
@overload decorator. For example, the definitions of concat1
and concat2 in this stub file are equivalent:
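
A sketch of such a stub:

from typing import TypeVar, overload

AnyStr = TypeVar('AnyStr', str, bytes)

def concat1(x: AnyStr, y: AnyStr) -> AnyStr: ...

@overload
def concat2(x: str, y: str) -> str: ...
@overload
def concat2(x: bytes, y: bytes) -> bytes: ...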

Some functions, such as map or bytes.__getitem__ above, can't
be represented precisely using type variables. However, unlike
@overload, type variables can also be used outside stub files. We
recommend that @overload is only used in cases where a type
variable is not sufficient, due to its special stub-only status.

Another important difference between type variables such as AnyStr
and using @overload is that the former can also be used to define
constraints for generic class type parameters. For example, the type
parameter of the generic class typing.IO is constrained (only
IO[str], IO[bytes] and IO[Any] are valid):
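
A sketch of how such a constraint is expressed (greatly simplified from
the real definitions in typing):

from typing import Generic, TypeVar

AnyStr = TypeVar('AnyStr', str, bytes)

class IO(Generic[AnyStr]):
    ...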

The easiest form of stub file storage and distribution is to put them
alongside Python modules in the same directory. This makes them easy to
find by both programmers and the tools. However, since package
maintainers are free not to add type hinting to their packages,
third-party stubs installable by pip from PyPI are also supported.
In this case we have to consider three issues: naming, versioning,
installation path.

This PEP does not provide a recommendation on a naming scheme that
should be used for third-party stub file packages. Discoverability will
hopefully be based on package popularity, like with Django packages for
example.

Third-party stubs have to be versioned using the lowest version of the
source package that is compatible. Example: FooPackage has versions
1.0, 1.1, 1.2, 1.3, 2.0, 2.1, 2.2. There are API changes in versions
1.1, 2.0 and 2.2. The stub file package maintainer is free to release
stubs for all versions but at least 1.0, 1.1, 2.0 and 2.2 are needed
to enable the end user to type check all versions. This is because the
user knows that the closest lower or equal version of stubs is
compatible. In the provided example, for FooPackage 1.3 the user would
choose stubs version 1.1.

Note that if the user decides to use the "latest" available source
package, using the "latest" stub files should generally also work if
they're updated often.

Third-party stub packages can use any location for stub storage. Type
checkers should search for them using PYTHONPATH. A default fallback
directory that is always checked is shared/typehints/python3.5/ (or
3.6, etc.). Since there can only be one package installed for a given
Python version per environment, no additional versioning is performed
under that directory (just like bare directory installs by pip in
site-packages). Stub file package authors might use the following
snippet in setup.py:
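
A sketch (SRC_PATH, the project name and the version are placeholders):

import sys
import pathlib
from setuptools import setup

SRC_PATH = pathlib.Path('stubs')   # where the .pyi files live

setup(
    name='foopackage-stubs',
    version='1.0',
    data_files=[
        (
            'shared/typehints/python{}.{}'.format(*sys.version_info[:2]),
            [str(p) for p in SRC_PATH.glob('**/*.pyi')],
        ),
    ],
)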

There is a shared repository where useful stubs are being collected
[typeshed]. Note that stubs for a given package will not be included
here without the explicit consent of the package owner. Further
policies regarding the stubs collected here will be decided at a later
time, after discussion on python-dev, and reported in the typeshed
repo's README.

No syntax for listing explicitly raised exceptions is proposed.
Currently the only known use case for this feature is documentational,
in which case the recommendation is to put this information in a
docstring.

Note that special type constructs, such as Any, Union,
and type variables defined using TypeVar are only supported
in the type annotation context, and Generic may only be used
as a base class. All of these (except for unparameterized generics)
will raise TypeError if they appear in isinstance or issubclass.

Fundamental building blocks:

Any, used as def get(key: str) -> Any: ...

Union, used as Union[Type1, Type2, Type3]

Callable, used as Callable[[Arg1Type, Arg2Type], ReturnType]

Tuple, used by listing the element types, for example
Tuple[int, int, str].
The empty tuple can be typed as Tuple[()].
Arbitrary-length homogeneous tuples can be expressed
using one type and ellipsis, for example Tuple[int, ...].
(The ... here are part of the syntax, a literal ellipsis.)

Generator, used as Generator[yield_type, send_type,
return_type]. This represents the return value of generator
functions. It is a subtype of Iterable and it has additional
type variables for the type accepted by the send() method (it
is contravariant in this variable -- a generator that accepts sending it
Employee instances is valid in a context where a generator is required
that accepts sending it Manager instances) and the return type of the
generator.

Hashable (not generic, but present for completeness)

ItemsView

Iterable

Iterator

KeysView

Mapping

MappingView

MutableMapping

MutableSequence

MutableSet

Sequence

Set, renamed to AbstractSet. This name change was required
because Set in the typing module means set() with
generics.

Sized (not generic, but present for completeness)

ValuesView

A few one-off types are defined that test for single special methods
(similar to Hashable or Sized):

Reversible, to test for __reversed__

SupportsAbs, to test for __abs__

SupportsComplex, to test for __complex__

SupportsFloat, to test for __float__

SupportsInt, to test for __int__

SupportsRound, to test for __round__

SupportsBytes, to test for __bytes__

Convenience definitions:

Optional, defined by Optional[t] == Union[t, type(None)]

AnyStr, defined as TypeVar('AnyStr', str, bytes)

Text, a simple alias for str in Python 3, for unicode in Python 2

NamedTuple, used as
NamedTuple(type_name, [(field_name, field_type), ...])
and equivalent to
collections.namedtuple(type_name, [field_name, ...]).
This is useful to declare the types of the fields of a named tuple
type.

@no_type_check, a decorator to disable type checking per class or
function (see below)

@no_type_check_decorator, a decorator to create your own decorators
with the same meaning as @no_type_check (see below)

@overload, described earlier

get_type_hints(), a utility function to retrieve the type hints from a
function or method. Given a function or method object, it returns
a dict with the same format as __annotations__, but evaluating
forward references (which are given as string literals) as expressions
in the context of the original function or method definition.

TYPE_CHECKING, False at runtime but True to type checkers

Types available in the typing.io submodule:

IO (generic over AnyStr)

BinaryIO (a simple subtype of IO[bytes])

TextIO (a simple subtype of IO[str])

Types available in the typing.re submodule:

Match and Pattern, types of re.match() and re.compile()
results (generic over AnyStr)

Some tools may want to support type annotations in code that must be
compatible with Python 2.7. For this purpose this PEP has a suggested
(but not mandatory) extension where function annotations are placed in
a # type: comment. Such a comment must be placed immediately
following the function header (before the docstring). An example: the
following Python 3 code:
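
(A sketch, with a placeholder method:)

def embezzle(self, account: str, funds: int = 1000000, *fake_receipts: str) -> None:
    """Embezzle funds from account using fake receipts."""
    ...

could be written for Python 2 as:

def embezzle(self, account, funds=1000000, *fake_receipts):
    # type: (str, int, *str) -> None
    """Embezzle funds from account using fake receipts."""
    ...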

Sometimes you want to specify the return type for a function or method
without (yet) specifying the argument types. To support this
explicitly, the argument list may be replaced with an ellipsis.
Example:
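
A sketch (send_email is a placeholder name):

def send_email(address, sender, cc, bcc, subject, body):
    # type: (...) -> bool
    """Send an email message.  Return True if successful."""
    ...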

Sometimes you have a long list of parameters and specifying their
types in a single # type: comment would be awkward. To this end
you may list the arguments one per line and add a # type: comment
per line after an argument's associated comma, if any.
To specify the return type use the ellipsis syntax. Specifying the return
type is not mandatory and not every argument needs to be given a type.
A line with a # type: comment should contain exactly one argument.
The type comment for the last argument (if any) should precede the close
parenthesis. Example:
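
A sketch (again with placeholder names; the typing names would have to
be imported for a checker to resolve them):

from typing import List, Optional, Union

def send_email(address,     # type: Union[str, List[str]]
               sender,      # type: str
               cc,          # type: Optional[List[str]]
               bcc,         # type: Optional[List[str]]
               subject='',
               body=None,   # type: List[str]
               ):
    # type: (...) -> bool
    """Send an email message.  Return True if successful."""
    ...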

Tools that support this syntax should support it regardless of the
Python version being checked. This is necessary in order to support
code that straddles Python 2 and Python 3.

It is not allowed for an argument or return value to have both
a type annotation and a type comment.

When using the short form (e.g. # type: (str, int) -> None)
every argument must be accounted for, except the first argument of
instance and class methods (those are usually omitted, but it's
allowed to include them).

The return type is mandatory for the short form. If in Python 3 you
would omit some argument or the return type, the Python 2 notation
should use Any.

When using the short form, for *args and **kwds, put 1 or 2
stars in front of the corresponding type annotation. (As with
Python 3 annotations, the annotation here denotes the type of the
individual argument values, not of the tuple/dict that you receive
as the special argument value args or kwds.)

Like other type comments, any names used in the annotations must be
imported or defined by the module containing the annotation.

When using the short form, the entire annotation must be one line.

The short form may also occur on the same line as the close
parenthesis, e.g.:

def add(a, b):  # type: (int, int) -> int
    return a + b

Misplaced type comments will be flagged as errors by a type checker.
If necessary, such comments could be commented twice. For example:

Most people are familiar with the use of angular brackets
(e.g. List<int>) in languages like C++, Java, C# and Swift to
express the parametrization of generic types. The problem with these
is that they are really hard to parse, especially for a simple-minded
parser like Python's. In most languages the ambiguities are usually
dealt with by only allowing angular brackets in specific syntactic
positions, where general expressions aren't allowed. (And also by
using very powerful parsing techniques that can backtrack over an
arbitrary section of code.)

But in Python, we'd like type expressions to be (syntactically) the
same as other expressions, so that we can use e.g. variable assignment
to create type aliases. Consider this simple type expression:

List<int>

From the Python parser's perspective, the expression begins with the
same four tokens (NAME, LESS, NAME, GREATER) as a chained comparison:

a < b > c # I.e., (a < b) and (b > c)

We can even make up an example that could be parsed both ways:

a < b > [ c ]

Assuming we had angular brackets in the language, this could be
interpreted as either of the following two:
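
The two readings, roughly:

(a<b>)[c]      # Subscripting the parametrized type a<b>
a < b > [c]    # The chained comparison (a < b) and (b > [c])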

It would surely be possible to come up with a rule to disambiguate
such cases, but to most users the rules would feel arbitrary and
complex. It would also require us to dramatically change the CPython
parser (and every other parser for Python). It should be noted that
Python's current parser is intentionally "dumb" -- a simple grammar is
easier for users to reason about.

For all these reasons, square brackets (e.g. List[int]) are (and
have long been) the preferred syntax for generic type parameters.
They can be implemented by defining the __getitem__() method on
the metaclass, and no new syntax is required at all. This option
works in all recent versions of Python (starting with Python 2.2).
Python is not alone in this syntactic choice -- generic classes in
Scala also use square brackets.

One line of argument points out that PEP 3107 explicitly supports
the use of arbitrary expressions in function annotations. The new
proposal is then considered incompatible with the specification of PEP
3107.

Our response to this is that, first of all, the current proposal does
not introduce any direct incompatibilities, so programs using
annotations in Python 3.4 will still work correctly and without
prejudice in Python 3.5.

We do hope that type hints will eventually become the sole use for
annotations, but this will require additional discussion and a
deprecation period after the initial roll-out of the typing module
with Python 3.5. The current PEP will have provisional status (see
PEP 411) until Python 3.6 is released. The fastest conceivable scheme
would introduce silent deprecation of non-type-hint annotations in
3.6, full deprecation in 3.7, and declare type hints as the only
allowed use of annotations in Python 3.8. This should give authors of
packages that use annotations plenty of time to devise another
approach, even if type hints become an overnight success.

Another possible outcome would be that type hints will eventually
become the default meaning for annotations, but that there will always
remain an option to disable them. For this purpose the current
proposal defines a decorator @no_type_check which disables the
default interpretation of annotations as type hints in a given class
or function. It also defines a meta-decorator
@no_type_check_decorator which can be used to decorate a decorator
(!), causing annotations in any function or class decorated with the
latter to be ignored by the type checker.

There are also # type: ignore comments, and static checkers should
support configuration options to disable type checking in selected
packages.

Despite all these options, proposals have been circulated to allow
type hints and other forms of annotations to coexist for individual
arguments. One proposal suggests that if an annotation for a given
argument is a dictionary literal, each key represents a different form
of annotation, and the key 'type' would be used for type hints.
The problem with this idea and its variants is that the notation
becomes very "noisy" and hard to read. Also, in most cases where
existing libraries use annotations, there would be little need to
combine them with type hints. So the simpler approach of selectively
disabling type hints appears sufficient.

The current proposal is admittedly sub-optimal when type hints must
contain forward references. Python requires all names to be defined
by the time they are used. Apart from circular imports this is rarely
a problem: "use" here means "look up at runtime", and with most
"forward" references there is no problem in ensuring that a name is
defined before the function using it is called.

The problem with type hints is that annotations (per PEP 3107, and
similar to default values) are evaluated at the time a function is
defined, and thus any names used in an annotation must be already
defined when the function is being defined. A common scenario is a
class definition whose methods need to reference the class itself in
their annotations. (More generally, it can also occur with mutually
recursive classes.) This is natural for container types, for
example:

As written this will not work, because of the peculiarity in Python
that class names become defined once the entire body of the class has
been executed. Our solution, which isn't particularly elegant, but
gets the job done, is to allow using string literals in annotations.
Most of the time you won't have to use this though -- most uses of
type hints are expected to reference builtin types or types defined in
other modules.

A counterproposal would change the semantics of type hints so they
aren't evaluated at runtime at all (after all, type checking happens
off-line, so why would type hints need to be evaluated at runtime at
all). This of course would run afoul of backwards compatibility,
since the Python interpreter doesn't actually know whether a
particular annotation is meant to be a type hint or something else.

A compromise is possible where a __future__ import could enable
turning all annotations in a given module into string literals, as
follows:
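
A sketch of what that might look like (the __future__ feature named here
is purely hypothetical):

from __future__ import annotations

class ImSet:
    def add(self, a: ImSet) -> List[ImSet]: ...

assert ImSet.add.__annotations__ == {'a': 'ImSet', 'return': 'List[ImSet]'}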

A few creative souls have tried to invent solutions for this problem.
For example, it was proposed to use a double colon (::) for type
hints, solving two problems at once: disambiguating between type hints
and other annotations, and changing the semantics to preclude runtime
evaluation. There are several things wrong with this idea, however.

It's ugly. The single colon in Python has many uses, and all of
them look familiar because they resemble the use of the colon in
English text. This is a general rule of thumb by which Python
abides for most forms of punctuation; the exceptions are typically
well known from other programming languages. But this use of ::
is unheard of in English, and in other languages (e.g. C++) it is
used as a scoping operator, which is a very different beast. In
contrast, the single colon for type hints reads naturally -- and no
wonder, since it was carefully designed for this purpose (the idea
long predates PEP 3107 [gvr-artima]). It is also used in the same
fashion in other languages from Pascal to Swift.

What would you do for return type annotations?

It's actually a feature that type hints are evaluated at runtime.

Making type hints available at runtime allows runtime type
checkers to be built on top of type hints.

It catches mistakes even when the type checker is not run. Since
it is a separate program, users may choose not to run it (or even
install it), but might still want to use type hints as a concise
form of documentation. Broken type hints are no use even for
documentation.

Because it's new syntax, using the double colon for type hints would
limit them to code that works with Python 3.5 only. By using
existing syntax, the current proposal can easily work for older
versions of Python 3. (And in fact mypy supports Python 3.2 and
newer.)

If type hints become successful we may well decide to add new syntax
in the future to declare the type for variables, for example
var age: int = 42. If we were to use a double colon for
argument type hints, for consistency we'd have to use the same
convention for future syntax, perpetuating the ugliness.

A few other forms of alternative syntax have been proposed, e.g. the
introduction of a where keyword [roberge], and Cobra-inspired
requires clauses. But these all share a problem with the double
colon: they won't work for earlier versions of Python 3. The same
would apply to a new __future__ import.

A decorator, e.g. @typehints(name=str, returns=str). This could
work, but it's pretty verbose (an extra line, and the argument names
must be repeated), and a far cry in elegance from the PEP 3107
notation.

Stub files. We do want stub files, but they are primarily useful
for adding type hints to existing code that doesn't lend itself to
adding type hints, e.g. 3rd party packages, code that needs to
support both Python 2 and Python 3, and especially extension
modules. For most situations, having the annotations in line with
the function definitions makes them much more useful.

Docstrings. There is an existing convention for docstrings, based
on the Sphinx notation (:type arg1: description). This is
pretty verbose (an extra line per parameter), and not very elegant.
We could also make up something new, but the annotation syntax is
hard to beat (because it was designed for this very purpose).

It's also been proposed to simply wait another release. But what
problem would that solve? It would just be procrastination.