The pdb module is a simple but adequate console-mode debugger for
Python. It is part of the standard Python library, and is documented
in the Library Reference Manual. You can
also write your own debugger by using the code for pdb as an example.

The IDLE interactive development environment, which is part of the
standard Python distribution (normally available as Tools/scripts/idle),
includes a graphical debugger. There is documentation for the IDLE
debugger at http://www.python.org/idle/doc/idle2.html#Debugger

PyChecker is a static analysis tool that finds bugs in Python source
code and warns about code complexity and style. You can get PyChecker
from http://pychecker.sf.net.

Pylint is another tool
that checks if a module satisfies a coding standard, and also makes it
possible to write plug-ins to add a custom feature. In addition to
the bug checking that PyChecker performs, Pylint offers some
additional features such as checking line length, whether variable
names are well-formed according to your coding standard, whether
declared interfaces are fully implemented, and more.
http://www.logilab.org/projects/pylint/documentation provides a full
list of Pylint's features.

You don't need the ability to compile Python to C code if all you
want is a stand-alone program that users can download and run without
having to install the Python distribution first. There are a number
of tools that determine the set of modules required by a program and
bind these modules together with a Python binary to produce a single
executable.

One is to use the freeze tool, which is included in the Python
source tree as Tools/freeze. It converts Python byte code to C
arrays; with a C compiler you can embed all your modules into a new
program, which is then linked with the standard Python modules.

It works by scanning your source recursively for import statements (in
both forms) and looking for the modules in the standard Python path as
well as in the source directory (for built-in modules). It then turns
the bytecode for modules written in Python into C code (array
initializers that can be turned into code objects using the marshal
module) and creates a custom-made config file that only contains those
built-in modules which are actually used in the program. It then
compiles the generated C code and links it with the rest of the Python
interpreter to form a self-contained binary which acts exactly like
your script.

Obviously, freeze requires a C compiler. There are several other
utilities which don't. One is Gordon McMillan's Installer. Another
is Christian Tismer's SQFREEZE, which appends the byte code to a
specially-prepared Python interpreter that can find the byte code
in the executable. It's possible that a similar approach will
be added to Python 2.4, due out some time in 2004.

That's a tough one, in general. There are many tricks to speed up
Python code; consider rewriting parts in C as a last resort.

In some cases it's possible to automatically translate Python to C or
x86 assembly language, meaning that you don't have to modify your code
to gain increased speed.

Pyrex can
compile a slightly modified version of Python code into a C extension,
and can be used on many different platforms.

Psyco is a just-in-time compiler
that translates Python code into x86 assembly language. If you can
use it, Psyco can provide dramatic speedups for critical functions.

The rest of this answer will discuss various tricks for squeezing a
bit more speed out of Python code. Never apply any optimization
tricks unless you know you need them, after profiling has indicated
that a particular function is the heavily executed hot spot in the
code. Optimizations almost always make the code less clear, and you
shouldn't pay the costs of reduced clarity (increased development
time, greater likelihood of bugs) unless the resulting performance
benefit is worth it.

One thing to notice is that function and (especially) method calls are
rather expensive; if you have designed a purely OO interface with lots
of tiny functions that don't do much more than get or set an instance
variable or call another method, you might consider using a more
direct way such as directly accessing instance variables. Also see the
standard module "profile" (described in the Library Reference manual) which
makes it possible to find out where your program is spending most of
its time (if you have some patience -- the profiling itself can slow
your program down by an order of magnitude).

Remember that many standard optimization heuristics you
may know from other programming experience may well apply
to Python. For example it may be faster to send output to output
devices using larger writes rather than smaller ones in order to
reduce the overhead of kernel system calls. Thus CGI scripts
that write all output in "one shot" may be faster than
those that write lots of small pieces of output.

Also, be sure to use Python's core features where appropriate.
For example, slicing allows programs to chop up
lists and other sequence objects in a single tick of the interpreter's
mainloop using highly optimized C implementations. Thus to
get the same effect as:

L2 = []
for i in range(3):
    L2.append(L1[i])

it is much shorter and far faster to use

L2 = list(L1[:3]) # "list" is redundant if L1 is a list.

Note that the functionally-oriented builtins such as
map(), zip(), and friends can be a convenient
accelerator for loops that perform a single task. For example to pair the elements of two
lists together:
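A minimal sketch (the sample lists are made up; in recent Python versions zip() returns an iterator rather than a list, so list() is applied for safety):

```python
a = [1, 2, 3]
b = ['one', 'two', 'three']

# zip() pairs up corresponding elements at C speed,
# with no explicit Python-level loop.
pairs = list(zip(a, b))
```

pairs is now [(1, 'one'), (2, 'two'), (3, 'three')].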

Other examples include the join() and split() methods of string
objects. For example, if s1..s7 are large (10K+) strings then
"".join([s1,s2,s3,s4,s5,s6,s7]) may be far faster than the more
obvious s1+s2+s3+s4+s5+s6+s7, since the "summation" will compute
many subexpressions, whereas join() does all the copying in one
pass. For manipulating strings, use the replace() method on string
objects. Use regular expressions only when you're not dealing with
constant string patterns. Consider using the string formatting
operations string % tuple and string % dictionary.

Be sure to use the list.sort() builtin method to do sorting, and see
the sorting mini-HOWTO for examples of moderately advanced usage.
list.sort() beats other techniques for sorting in all but the most
extreme circumstances.

Another common trick is to "push loops into functions or methods."
For example suppose you have a program that runs slowly and you
use the profiler to determine that a Python function ff()
is being called lots of times. If you notice that ff():

def ff(x):
    ...do something with x computing result...
    return result

tends to be called in loops like:

list = map(ff, oldlist)

or:

for x in sequence:
    value = ff(x)
    ...do something with value...

then you can often eliminate function call overhead by rewriting
ff() to:
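For example, assuming ff() merely squares its argument (a made-up stand-in for the real computation), a sequence-oriented rewrite might look like:

```python
def ffseq(seq):
    # The loop now lives inside the function, so there is one Python
    # function call per sequence instead of one per element.
    result = []
    for x in seq:
        result.append(x * x)   # inline the former body of ff() here
    return result

values = ffseq([1, 2, 3])
```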

Default arguments can be used to determine values once, at
compile time instead of at run time. This can only be done for
functions or objects which will not be changed during program
execution, such as replacing
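For instance (a sketch using a degree-based sine helper as the assumed illustration), a function that recomputes a constant factor and re-resolves math.sin on every call can be replaced by one that evaluates them once, when the def statement executes:

```python
import math

def degree_sin(deg):
    # math.pi / 180.0 is recomputed, and math.sin re-resolved, per call
    return math.sin(deg * math.pi / 180.0)

def degree_sin2(deg, factor=math.pi / 180.0, sin=math.sin):
    # factor and sin were evaluated once, at function-definition time
    return sin(deg * factor)
```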

Any variable assigned in a function is local to that function,
unless it is specifically declared global. Since a value is bound
to x as the last statement of the function body, the compiler
assumes that x is local. Consequently the print x statement
attempts to print an uninitialized local variable and will
trigger a NameError.

The solution is to insert an explicit global declaration at the start
of the function:
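A minimal sketch (the variable and function names are made up):

```python
x = 10

def increment():
    global x     # declare that x refers to the module-level variable
    x = x + 1    # without the declaration, this assignment would make x local

increment()      # x is now 11
```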

In Python, variables that are only referenced inside a function are
implicitly global. If a variable is assigned a new value anywhere
within the function's body, it's assumed to be a local; to make such
an assignment rebind the global variable instead, you must explicitly
declare it as 'global'.

Though a bit surprising at first, a moment's consideration explains
this. On one hand, requiring global for assigned variables provides
a bar against unintended side-effects. On the other hand, if global
was required for all global references, you'd be using global all the
time. You'd have to declare as global every reference to a
builtin function or to a component of an imported module. This
clutter would defeat the usefulness of the global declaration for
identifying side-effects.

The canonical way to share information across modules within a single
program is to create a special module (often called config or cfg).
Just import the config module in all modules of your application; the
module then becomes available as a global name. Because there is only
one instance of each module, any changes made to the module object get
reflected everywhere. For example:

config.py:

x = 0 # Default value of the 'x' configuration setting

mod.py:

import config
config.x = 1

main.py:

import config
import mod
print config.x

Note that using a module is also the basis for implementing the
Singleton design pattern, for the same reason.

In general, don't use from modulename import *.
Doing so clutters the importer's namespace. Some people avoid this idiom
even with the few modules that were designed to be imported in this
manner. Modules designed in this manner include Tkinter
and threading.

Import modules at the top of a file. Doing so makes it clear what
other modules your code requires and avoids questions of whether the
module name is in scope. Using one import per line makes it easy to
add and delete module imports, but using multiple imports per line
uses less screen space.

Never use relative package imports. If you're writing code that's
in the package.sub.m1 module and want to import package.sub.m2,
do not just write import m2, even though it's legal.
Write from package.sub import m2 instead. Relative imports can lead to a
module being initialized twice, leading to confusing bugs.

It is sometimes necessary to move imports to a function or class to
avoid problems with circular imports. Gordon McMillan says:

Circular imports are fine where both modules use the "import <module>"
form of import. They fail when the 2nd module wants to grab a name
out of the first ("from module import name") and the import is at
the top level. That's because names in the 1st are not yet available,
because the first module is busy importing the 2nd.

In this case, if the second module is only used in one function, then the
import can easily be moved into that function. By the time the import
is called, the first module will have finished initializing, and the
second module can do its import.

It may also be necessary to move imports out of the top level of code
if some of the modules are platform-specific. In that case, it may
not even be possible to import all of the modules at the top of the
file. In this case, importing the correct modules in the
corresponding platform-specific code is a good option.

Only move imports into a local scope, such as inside a function
definition, if it's necessary to solve a problem such as avoiding a
circular import, or if you are trying to reduce the initialization
time of a module. This technique is especially helpful if many of the
imports are unnecessary depending on how the program executes. You may also
want to move imports into a function if the modules are only ever used
in that function. Note that loading a module the first time may be
expensive because of the one time initialization of the module, but
loading a module multiple times is virtually free, costing only a couple of
dictionary lookups. Even if the module name has gone out of scope,
the module is probably available in sys.modules.

If only instances of a specific class use a module, then it is
reasonable to import the module in the class's __init__ method and
then assign the module to an instance variable so that the module is
always available (via that instance variable) during the life of the
object. Note that to delay an import until the class is instantiated,
the import must be inside a method. Putting the import inside the
class but outside of any method still causes the import to occur when
the module is initialized.

Collect the arguments using the * and ** specifiers in the function's
parameter list; this gives you the positional arguments as a tuple
and the keyword arguments as a dictionary. You can
then pass these arguments when calling another function by using
* and **:
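A minimal sketch (f and g are hypothetical names):

```python
def g(*args, **kwargs):
    # args arrives as a tuple, kwargs as a dictionary
    return (args, kwargs)

def f(*args, **kwargs):
    # pass everything through unchanged using * and **
    return g(*args, **kwargs)

result = f(1, 2, x=3)
```

result is ((1, 2), {'x': 3}).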

Remember that arguments are passed by assignment in Python. Since
assignment just creates references to objects, there's no alias
between an argument name in the caller and callee, and so no
call-by-reference per se. You can achieve the desired effect in a
number of ways.

You have two choices: you can use nested scopes
or you can use callable objects. For example, suppose you wanted to
define linear(a,b) which returns a function f(x) that computes the
value a*x+b. Using nested scopes:
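A sketch (the name taxes is just an illustration):

```python
def linear(a, b):
    def result(x):
        # a and b are captured from the enclosing linear() call
        return a * x + b
    return result

taxes = linear(0.3, 2)   # taxes(x) now computes 0.3 * x + 2
```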

Generally speaking, it can't, because objects don't really have
names. Essentially, assignment always binds a name to a value; the
same is true of def and class statements, but in that case the
value is a callable. Consider the following code:
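A sketch along the lines the next paragraph discusses:

```python
class A:
    pass

B = A      # the one class object is now bound to two names

a = B()    # an instance created through the name B...
b = a      # ...and itself bound to two names
```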

Arguably the class has a name: even though it is bound to two names
and invoked through the name B the created instance is still reported
as an instance of class A. However, it is impossible to say whether
the instance's name is a or b, since both names are bound to the same
value.

Generally speaking it should not be necessary for your code to "know
the names" of particular values. Unless you are deliberately writing
introspective programs, this is usually an indication that a change of
approach might be beneficial.

In comp.lang.python, Fredrik Lundh once gave an excellent analogy in
answer to this question:

The same way as you get the name of that cat you found on your
porch: the cat (object) itself cannot tell you its name, and it
doesn't really care -- so the only way to find out what it's called
is to ask all your neighbours (namespaces) if it's their cat
(object)...

....and don't be surprised if you'll find that it's known by many
names, or no name at all!

In many cases you can mimic a?b:c with "a and b or c", but there's a flaw: if b is zero
(or empty, or None -- anything that tests false) then c will be selected
instead. In many cases you can prove by looking at the code that this
can't happen (e.g. because b is a constant or has a type that can never be false),
but in general this can be a problem.

Tim Peters (who wishes it was Steve Majewski) suggested the following
solution: (a and [b] or [c])[0]. Because [b] is a singleton list it
is never false, so the wrong path is never taken; then applying [0] to
the whole thing gets the b or c that you really wanted. Ugly, but it
gets you there in the rare cases where it is really inconvenient to
rewrite your code using 'if'.

The best course is usually to write a simple if...else statement.
Another solution is to implement the "?:" operator as a function:
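A minimal sketch:

```python
def q(a, b, c):
    # return b if a is true, otherwise c -- like C's a?b:c, except
    # that the caller has already evaluated both b and c
    if a:
        return b
    else:
        return c
```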

In most cases you'll pass b and c directly: q(a,b,c). To avoid
evaluating b or c when they shouldn't be, encapsulate them within a
lambda function, e.g.: q(a,lambda:b,lambda:c).

It has been asked why Python has no if-then-else expression.
There are several answers: many languages do
just fine without one; it can easily lead to less readable code;
no sufficiently "Pythonic" syntax has been discovered; a search
of the standard library found remarkably few places where using an
if-then-else expression would make the code more understandable.

In 2002, PEP 308 was
written proposing several possible syntaxes and the community was
asked to vote on the issue. The vote was inconclusive. Most people
liked one of the syntaxes, but also hated other syntaxes; many votes
implied that people preferred no ternary operator
rather than having a syntax they hated.

To specify an octal digit, precede the octal value with a zero. For
example, to set the variable "a" to the octal value "10" (8 in
decimal), type:

>>> a = 010
>>> a
8

Hexadecimal is just as easy. Simply precede the hexadecimal number with a
zero, and then a lower or uppercase "x". Hexadecimal digits can be specified
in lower or uppercase. For example, in the Python interpreter:
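A minimal sketch (the values are chosen arbitrarily):

```python
a = 0x12    # lowercase x; 0x12 is 18 in decimal
b = 0XAF    # uppercase X and uppercase digits work too
```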

It's primarily driven by the desire that i%j have the same sign as
j. If you want that, and also want:

i == (i/j)*j + (i%j)

then integer division has to return the floor. C also requires that identity
to hold, and then compilers that truncate i/j need to make i%j have
the same sign as i.

There are few real use cases for i%j when j is negative. When j
is positive, there are many, and in virtually all of them it's more useful
for i%j to be >=0. If the clock says 10 now, what did it say 200
hours ago? -190%12==2 is useful; -190%12==-10 is a bug
waiting to bite.

By default, these interpret the number as decimal, so that
int('0144')==144 and int('0x144') raises
ValueError. int(string,base) takes the base to convert from
as a second optional argument, so int('0x144',16)==324. If the
base is specified as 0, the number is interpreted using Python's
rules: a leading '0' indicates octal, and '0x' indicates a hex number.

Do not use the built-in function eval() if all you need is to
convert strings to numbers. eval() will be significantly slower
and it presents a security risk: someone could pass you a Python
expression that might have unwanted side effects. For example,
someone could pass __import__('os').system("rm -rf $HOME") which
would erase your home directory.

eval() also has the effect of interpreting numbers as Python
expressions, so that e.g. eval('09') gives a syntax error because Python
regards numbers starting with '0' as octal (base 8).

To convert, e.g., the number 144 to the string '144', use the built-in
function str(). If you want a hexadecimal or octal
representation, use the built-in functions hex() or oct().
For fancy formatting, use the % operator on strings, e.g. "%04d"%144
yields '0144' and "%.3f"%(1/3.0) yields '0.333'. See the library
reference manual for details.

The best is to use a dictionary that maps strings to functions. The
primary advantage of this technique is that the strings do not need
to match the names of the functions. This is also the primary
technique used to emulate a case construct:
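A minimal sketch (the command names and handlers are hypothetical):

```python
def open_handler():
    return "opening"

def close_handler():
    return "closing"

# The dictionary keys need not match the function names.
dispatch = {
    'open': open_handler,
    'shut': close_handler,
}

result = dispatch['shut']()   # looks up and calls close_handler
```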

Starting with Python 2.2, you can use S.rstrip("\r\n") to remove
all occurrences of any line terminator from the end of the string S
without removing other trailing whitespace. If the string S
represents more than one line, with several empty lines at the end,
the line terminators for all the blank lines will be removed:
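A minimal sketch; note that other trailing whitespace, such as the space before the terminators here, is preserved:

```python
s = "text with trailing space \r\n\r\n\r\n"
stripped = s.rstrip("\r\n")   # removes every trailing '\r' or '\n' character
```

stripped is now "text with trailing space ".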

For simple input parsing, the easiest approach is usually to split the
line into whitespace-delimited words using the split() method of
string objects and then convert decimal strings to numeric values using
int() or float(). split() supports an optional "sep"
parameter which is useful if the line uses something other than
whitespace as a separator.

For more complicated input parsing, regular expressions are
more powerful than C's sscanf() and better suited for the task.

This error indicates that your Python installation can handle
only 7-bit ASCII strings. There are a couple ways to fix or
work around the problem.

If your programs must handle data in arbitrary character set encodings,
the environment the application runs in will generally identify the
encoding of the data it is handing you. You need to convert the input
to Unicode data using that encoding. For example, a program that
handles email or web input will typically find character set encoding
information in Content-Type headers. This can then be used to
properly convert input data to Unicode. Assuming the string referred
to by value is encoded as UTF-8:

value = unicode(value, "utf-8")

will return a Unicode object. If the data is not correctly encoded as
UTF-8, the above call will raise a UnicodeError exception.

If you only want strings converted to Unicode which have non-ASCII
data, you can try converting them first assuming an ASCII encoding,
and then generate Unicode objects if that fails:

It's possible to set a default encoding in a file called sitecustomize.py
that's part of the Python library. However, this isn't recommended
because changing the Python-wide default encoding may cause third-party
extension modules to fail.

Note that on Windows, there is an encoding known as "mbcs", which uses
an encoding specific to your current locale. In many cases, and
particularly when working with COM, this may be an appropriate default
encoding to use.

The function tuple(seq) converts any sequence (actually, any
iterable) into a tuple with the same items in the same order.

For example, tuple([1,2,3]) yields (1,2,3) and tuple('abc')
yields ('a','b','c'). If the argument is
a tuple, it does not make a copy but returns the same object, so
it is cheap to call tuple() when you aren't sure that an object
is already a tuple.

The function list(seq) converts any sequence or iterable into a list with
the same items in the same order.
For example, list((1,2,3)) yields [1,2,3] and list('abc')
yields ['a','b','c']. If the argument is a list,
it makes a copy just like seq[:] would.

Python sequences are indexed with positive numbers and
negative numbers. For positive numbers, 0 is the first index,
1 is the second index, and so forth. For negative indices, -1
is the last index, -2 is the penultimate (next to last) index,
and so forth. Think of seq[-n] as the same as seq[len(seq)-n].

Using negative indices can be very convenient. For example S[:-1]
is all of the string except for its last character, which is useful
for removing the trailing newline from a string.

This has the disadvantage that while you are in the loop, the list
is temporarily reversed. If you don't like this, you can make a copy.
This appears expensive but is actually faster than other solutions:

rev = mylist[:]    # copy the list being iterated over ('mylist' here)
rev.reverse()
for x in rev:
    <do something with x>

If it's not a list, a more general but slower solution is:

for i in range(len(sequence)-1, -1, -1):
    x = sequence[i]
    <do something with x>

A more elegant solution is to define a class which acts as a sequence
and yields the elements in reverse order (solution due to Steve
Majewski):
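A sketch of such a class; it relies on the sequence protocol, so for-loops iterate over it via __getitem__:

```python
class Rev:
    """Present a wrapped sequence in reverse order."""
    def __init__(self, seq):
        self.forw = seq
    def __len__(self):
        return len(self.forw)
    def __getitem__(self, i):
        # index 0 maps to the last element, 1 to the second-to-last, ...
        return self.forw[-(i + 1)]
```

for x in Rev("spam") then yields 'm', 'a', 'p', 's' in turn.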

Lists are equivalent to C or Pascal arrays in their time complexity;
the primary difference is that a Python list can contain objects of
many different types.

The array module also provides methods for creating arrays of
fixed types with compact representations, but they are slower to index
than lists. Also note that the Numeric extensions and others define
array-like structures with various characteristics as well.

To get Lisp-style linked lists, you can emulate cons cells using tuples:

lisp_list = ("like", ("this", ("example", None) ) )

If mutability is desired, you could use lists instead of tuples. Here
the analogue of lisp car is lisp_list[0] and the analogue of cdr
is lisp_list[1]. Only do this if you're sure you really need to,
because it's usually a lot slower than using Python lists.

The reason is that replicating a list with * doesn't create copies;
it only creates references to the existing objects. The *3 creates a
list containing 3 references to the same list of length two. Changes
to one row will show in all rows, which is almost certainly not what
you want.

The suggested approach is to create a list of the desired length first
and then fill in each element with a newly created list:

A = [None] * 3
for i in range(3):
    A[i] = [None] * 2

This generates a list containing 3 different lists of length two.
You can also use a list comprehension:

w,h = 2,3
A = [ [None]*w for i in range(h) ]

Or, you can use an extension that provides a matrix datatype; Numeric
Python is the best known.

You can't. Dictionaries store their keys in an unpredictable order,
so the display order of a dictionary's elements will be similarly
unpredictable.

This can be frustrating if you want to save a printable version to a
file, make some changes and then compare it with some other printed
dictionary. In this case, use the pprint module to pretty-print
the dictionary; the items will be presented in order sorted by the key.

A more complicated solution is to subclass UserDict.UserDict
to create a SortedDict class that prints itself in a predictable order.
Here's one simpleminded implementation of such a class:
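A hedged sketch of such a class; the compatibility import covers Python versions where UserDict moved into the collections module, and the repr format mimics a plain dict:

```python
try:
    from UserDict import UserDict        # older Pythons
except ImportError:
    from collections import UserDict     # newer Pythons

class SortedDict(UserDict):
    """A dictionary that prints its items in key-sorted order."""
    def __repr__(self):
        items = sorted(self.data.items())
        return '{' + ', '.join(['%r: %r' % kv for kv in items]) + '}'
    __str__ = __repr__
```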

This will work for many common situations you might encounter, though
it's far from a perfect solution. The largest flaw is that if some
values in the dictionary are also dictionaries, their values won't be
presented in any particular order.

The technique, attributed to Randal Schwartz of the Perl community,
sorts the elements of a list by a metric which maps each element to
its "sort value". To sort a list of strings by their uppercase
values:
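A sketch of the decorate-sort-undecorate pattern (the sample data is made up):

```python
words = ['Banana', 'apple', 'Cherry']

# Decorate: pair each element with its sort key.
deco = [(w.upper(), w) for w in words]
# Sort on the decorated tuples (the key comes first).
deco.sort()
# Undecorate: extract the original elements, now in key order.
sorted_words = [w for (key, w) in deco]

# The undecorate step written as an explicit loop instead of the
# final list comprehension:
result = []
for (key, w) in deco:
    result.append(w)
```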

If you find this more legible, you might prefer to use this instead of
the final list comprehension. However, it is almost twice as slow for
long lists. Why? First, the append() operation has to reallocate
memory, and while it uses some tricks to avoid doing that each time,
it still has to do it occasionally, and that costs quite a bit.
Second, the expression "result.append" requires an extra attribute
lookup, and third, there's a speed reduction from having to make
all those function calls.

A class is the particular object type created by executing
a class statement. Class objects are used as templates to create
instance objects, which embody both the data
(attributes) and code (methods) specific to a datatype.

A class can be based on one or more other classes, called its base
class(es). It then inherits the attributes and methods of its base
classes. This allows an object model to be successively refined by
inheritance. You might have a generic Mailbox class that provides
basic accessor methods for a mailbox, and subclasses such as
MboxMailbox, MaildirMailbox, OutlookMailbox that handle
various specific mailbox formats.

Self is merely a conventional name for the first argument of a method.
A method defined as meth(self,a,b,c) should be called as
x.meth(a,b,c) for some instance x of the class in which the
definition occurs; the called method will think it is called as
meth(x,a,b,c).

Use the built-in function isinstance(obj, cls). You can check if an
object is an instance of any of a number of classes by providing a
tuple instead of a single class, e.g.
isinstance(obj, (class1, class2, ...)), and can also check whether
an object is one of Python's built-in types, e.g.
isinstance(obj, str) or isinstance(obj, (int, long, float, complex)).

Note that most programs do not use isinstance() on user-defined
classes very often. If you are developing the classes yourself, a
more proper object-oriented style is to define methods on the classes
that encapsulate a particular behaviour, instead of checking the
object's class and doing a different thing based on what class it is.
For example, if you have a function that does something:
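A hedged sketch contrasting the two styles (the class names and search() behaviour are hypothetical):

```python
class Mailbox:
    def search(self):
        return "searching a mailbox"

class Document:
    def search(self):
        return "searching a document"

# Rather than testing the argument's class with isinstance() and
# branching, call the method and let each class supply its behaviour:
def process(obj):
    return obj.search()
```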

Delegation is an object oriented technique (also called a design
pattern). Let's say you have an object x and want to change the
behaviour of just one of its methods. You can create a new class that
provides a new implementation of the method you're interested in changing
and delegates all other methods to the corresponding method of x.

Python programmers can easily implement delegation. For example, the
following class implements a class that behaves like a file but
converts all written data to uppercase:
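A sketch of such a class; the StringIO usage at the end is only a demonstration, and any file-like object would work:

```python
try:
    from StringIO import StringIO    # older Pythons
except ImportError:
    from io import StringIO          # newer Pythons

class UpperOut:
    def __init__(self, outfile):
        self.__outfile = outfile
    def write(self, s):
        # intercept write() and uppercase the data first
        self.__outfile.write(s.upper())
    def __getattr__(self, name):
        # delegate every other attribute lookup to the wrapped file
        return getattr(self.__outfile, name)

buf = StringIO()
out = UpperOut(buf)
out.write('hello world')   # buf now holds 'HELLO WORLD'
```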

Here the UpperOut class redefines the write() method to
convert the argument string to uppercase before calling the underlying
self.__outfile.write() method. All other methods are delegated to
the underlying self.__outfile object. The delegation is
accomplished via the __getattr__ method; consult the language
reference for
more information about controlling attribute access.

Note that for more general cases delegation can get trickier. When
attributes must be set as well as retrieved, the class must define a
__setattr__ method too, and it must do so carefully. The basic
implementation of __setattr__ is roughly equivalent to the following:
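A minimal sketch:

```python
class X:
    def __setattr__(self, name, value):
        # store directly into the instance dictionary; writing
        # self.name = value here would call __setattr__ recursively
        self.__dict__[name] = value
```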

If you're using classic classes: for a class definition such as
class Derived(Base): ... you can call method meth() defined in
Base (or one of Base's base classes) as
Base.meth(self, arguments...). Here, Base.meth is an unbound
method, so you need to provide the self argument.

You could define an alias for the base class, assign the real base
class to it before your class definition, and use the alias throughout
your class. Then all you have to change is the value assigned to the
alias. Incidentally, this trick is also handy if you want to decide
dynamically (e.g. depending on availability of resources) which base
class to use. Example:
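A sketch (Base and its message() method are made-up illustrations):

```python
class Base:
    def message(self):
        return "from Base"

BaseAlias = Base                 # assign the real base class to the alias

class Derived(BaseAlias):
    def message(self):
        # the base class is referred to only through the alias, so
        # switching base classes means changing one assignment
        return BaseAlias.message(self) + ", extended"
```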

c.count also refers to C.count for any c such that
isinstance(c,C) holds, unless overridden by c itself or by some
class on the base-class search path from c.__class__ back to C.

Caution: within a method of C, an assignment like self.count = 42
creates a new and unrelated instance variable named "count" in self's
own dict. Rebinding of a class-static data name must always specify
the class, whether inside a method or not:
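A minimal sketch:

```python
class C:
    count = 0                      # class-static data, shared by instances

    def bump(self):
        C.count = C.count + 1      # rebind through the class, not self

c1 = C()
c2 = C()
c1.bump()
c2.bump()                          # C.count is now 2, seen by both instances
```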

Variables with double leading underscore are "mangled" to provide a
simple but effective way to define class private variables. Any
identifier of the form __spam (at least two leading
underscores, at most one trailing underscore) is textually
replaced with _classname__spam, where classname is the
current class name with any leading underscores stripped.

This doesn't guarantee privacy: an outside user can still deliberately
access the "_classname__spam" attribute, and private values are visible
in the object's __dict__. Many Python programmers never bother to use
private variable names at all.

The del statement does not necessarily call __del__ -- it simply
decrements the object's reference count, and if this reaches zero
__del__ is called.

If your data structures contain circular links (e.g. a tree where each
child has a parent reference and each parent has a list of children)
the reference counts will never go back to zero. Once in a while
Python runs an algorithm to detect such cycles, but the garbage
collector might run some time after the last reference to your data
structure vanishes, so your __del__ method may be called at an
inconvenient and random time. This is inconvenient if you're trying to
reproduce a problem. Worse, the order in which object's __del__
methods are executed is arbitrary. You can run gc.collect() to
force a collection, but there are pathological cases where objects will
never be collected.

Despite the cycle collector, it's still a good idea to define an
explicit close() method on objects to be called whenever you're
done with them. The close() method can then remove attributes
that refer to subobjects. Don't call __del__ directly --
__del__ should call close() and close() should make sure
that it can be called more than once for the same object.

Another way to avoid cyclical references is to use the "weakref"
module, which allows you to point to objects without incrementing
their reference count. Tree data structures, for instance, should use
weak references for their parent and sibling references (if they need
them!).

If the object has ever been a local variable in a function that caught
an exception in an except clause, chances are that a reference to the
object still exists in that function's stack frame as contained in the
stack trace. Normally, calling sys.exc_clear() will take care of
this by clearing the last recorded exception.

Finally, if your __del__ method raises an exception, a warning message
is printed to sys.stderr.

Python does not keep track of all instances of a class (or of a
built-in type). You can program the class's constructor to keep track
of all instances by keeping a list of weak references to each
instance.

When a module is imported for the first time (or when the source is
more recent than the current compiled file) a .pyc file containing
the compiled code should be created in the same directory as the
.py file.

One reason that a .pyc file may not be created is permissions
problems with the directory. This can happen, for example, if you
develop as one user but run as another, such as if you are testing
with a web server. Creation of a .pyc file is automatic if you're
importing a module and Python has the ability (permissions, free
space, etc...) to write the compiled module back to the directory.

Running Python on a top level script is not considered an import and
no .pyc will be created. For example, if you have a top-level
module abc.py that imports another module xyz.py, when you run
abc, xyz.pyc will be created since xyz is imported, but no
abc.pyc file will be created since abc.py isn't being
imported.

If you need to create abc.pyc -- that is, to create a .pyc file for a
module that is not imported -- you can, using the py_compile and
compileall modules.

The py_compile module can manually compile any module. One way is
to use the compile() function in that module interactively:

>>> import py_compile
>>> py_compile.compile('abc.py')

This will write the .pyc to the same location as abc.py (or
you can override that with the optional parameter cfile).

You can also automatically compile all files in a directory or
directories using the compileall module.
You can do it from the shell prompt by running compileall.py
and providing the path of a directory containing Python files to compile:

A module can find out its own module name by looking at the predefined
global variable __name__. If this has the value '__main__', the
program is running as a script. Many modules that are usually used by
importing them also provide a command-line interface or a self-test,
and only execute this code after checking __name__:
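A minimal sketch (main() is a conventional, but not required, name):

```python
def main():
    # the module's script behaviour or self-test goes here
    return "running as a script"

if __name__ == '__main__':
    # runs only when the file is executed directly, not on import
    main()
```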

bar imports foo (which is a no-op since there already is a module named foo)

bar.foo_var = foo.foo_var

The last step fails, because Python isn't done with interpreting foo
yet and the global symbol dictionary for foo is still empty.

The same thing happens when you use import foo, and then try to
access foo.foo_var in global code.

There are (at least) three possible workarounds for this problem.

Guido van Rossum recommends avoiding all uses of
from <module> import ..., and placing all code inside functions.
Initializations of global variables and class variables should use
constants or built-in functions only. This means everything from an
imported module is referenced as <module>.<name>.

Jim Roskind suggests performing steps in the following order in each
module:

For reasons of efficiency as well as consistency, Python only reads
the module file on the first time a module is imported. If it didn't,
in a program consisting of many modules where each one imports the
same basic module, the basic module would be parsed and re-parsed many
times. To force rereading of a changed module, do this:

import modname
reload(modname)

Warning: this technique is not 100% fool-proof. In particular,
modules containing statements like

from modname import some_objects

will continue to work with the old version of the imported objects.
If the module contains class definitions, existing class instances
will not be updated to use the new class definition. This can
result in the following paradoxical behaviour: