Check the Library Reference to see if there's a relevant
standard library module. (Eventually you'll learn what's in the standard
library and will be able to skip this step.)

For third-party packages, search the Python Package Index or try Google or
another Web search engine. Searching for "Python" plus a keyword or two for
your topic of interest will usually find something helpful.

If you can't find a source file for a module it may be a built-in or
dynamically loaded module implemented in C, C++ or other compiled language.
In this case you may not have the source file or it may be something like
mathmodule.c, somewhere in a C source directory (not on the Python Path).

You need to do two things: the script file's mode must be executable and the
first line must begin with #! followed by the path of the Python
interpreter.

The first is done by executing chmod +x scriptfile or perhaps chmod 755 scriptfile.

The second can be done in a number of ways. The most straightforward way is to
write

#!/usr/local/bin/python

as the very first line of your file, using the pathname for where the Python
interpreter is installed on your platform.

If you would like the script to be independent of where the Python interpreter
lives, you can use the env program. Almost all Unix variants support
the following, assuming the Python interpreter is in a directory on the user's
PATH:

#!/usr/bin/env python

Don't do this for CGI scripts. The PATH variable for CGI scripts is
often very minimal, so you need to use the actual absolute pathname of the
interpreter.

Occasionally, a user's environment is so full that the /usr/bin/env
program fails; or there's no env program at all. In that case, you can try the
following hack (due to Alex Rezinsky):

#! /bin/sh
""":"
exec python $0 ${1+"$@"}
"""

The minor disadvantage is that this defines the script's __doc__ string.
However, you can fix that by adding

__doc__ = """...Whatever..."""

For Unix variants: The standard Python source distribution comes with a curses
module in the Modules subdirectory, though it's not compiled by default.
(Note that this is not available in the Windows distribution -- there is no
curses module for Windows.)

The curses module supports basic curses features as well as many additional
functions from ncurses and SYSV curses such as colour, alternative character set
support, pads, and mouse support. This means the module isn't compatible with
operating systems that only have BSD curses, but there don't seem to be any
currently maintained OSes that fall into this category.

Python comes with two testing frameworks. The doctest module finds
examples in the docstrings for a module and runs them, comparing the output with
the expected output given in the docstring.

The unittest module is a fancier testing framework modelled on Java and
Smalltalk testing frameworks.
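As a minimal sketch of the doctest idea (the square function and its examples are invented for illustration; DocTestFinder/DocTestRunner are used here so the snippet doesn't depend on being run as a script):

```python
import doctest

def square(n):
    """Return n squared.

    >>> square(3)
    9
    >>> square(-4)
    16
    """
    return n * n

# Collect the >>> examples embedded in square's docstring, run them,
# and compare the actual output with the expected output shown there.
finder = doctest.DocTestFinder()
runner = doctest.DocTestRunner()
for test in finder.find(square, 'square', globs={'square': square}):
    runner.run(test)
print(runner.failures, 'failures out of', runner.tries, 'examples')
```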

To make testing easier, you should use good modular design in your program.
Your program should have almost all functionality
encapsulated in either functions or class methods -- and this sometimes has the
surprising and delightful effect of making the program run faster (because local
variable accesses are faster than global accesses). Furthermore the program
should avoid depending on mutating global variables, since this makes testing
much more difficult to do.

The "global main logic" of your program may be as simple as

if __name__ == "__main__":
    main_logic()

at the bottom of the main module of your program.

Once your program is organized as a tractable collection of functions and class
behaviours you should write test functions that exercise the behaviours. A test
suite that automates a sequence of tests can be associated with each module.
This sounds like a lot of work, but since Python is so terse and flexible it's
surprisingly easy. You can make coding much more pleasant and fun by writing
your test functions in parallel with the "production code", since this makes it
easy to find bugs and even design flaws earlier.

"Support modules" that are not intended to be the main module of a program may
include a self-test of the module.

if __name__ == "__main__":
    self_test()

Even programs that interact with complex external interfaces may be tested when
the external interfaces are unavailable by using "fake" interfaces implemented
in Python.

The pydoc module can create HTML from the doc strings in your Python
source code. An alternative for creating API documentation purely from
docstrings is epydoc. Sphinx can also include docstring content.

But now (on many platforms) the threads don't run in parallel, but appear to run
sequentially, one at a time! The reason is that the OS thread scheduler doesn't
start a new thread until the previous thread is blocked.

Instead of trying to guess a good delay value for time.sleep(),
it's better to use some kind of semaphore mechanism. One idea is to use the
queue module to create a queue object, let each thread append a token to
the queue when it finishes, and let the main thread read as many tokens from the
queue as there are threads.
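A small sketch of that token scheme (the names and the trivial task body are invented for illustration):

```python
import threading, queue

NUM_THREADS = 3
done = queue.Queue()

def task(n):
    # ... the real work would happen here ...
    done.put(n)                 # append a token when this thread finishes

for i in range(NUM_THREADS):
    threading.Thread(target=task, args=(i,)).start()

# The main thread reads as many tokens as there are threads, so it
# cannot proceed until every worker has finished.
tokens = [done.get() for _ in range(NUM_THREADS)]
print('collected', len(tokens), 'tokens')
```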

Or, if you want fine control over the dispatching algorithm, you can write
your own logic manually. Use the queue module to create a queue
containing a list of jobs. The Queue class maintains a
list of objects and has a .put(obj) method that adds items to the queue and
a .get() method to return them. The class will take care of the locking
necessary to ensure that each job is handed out exactly once.

Here's a trivial example:

import threading, queue, time

# The worker thread gets jobs off the queue.  When the queue is empty, it
# assumes there will be no more work and exits.
# (Realistically workers will run until terminated.)
def worker():
    print('Running worker')
    time.sleep(0.1)
    while True:
        try:
            arg = q.get(block=False)
        except queue.Empty:
            print('Worker', threading.currentThread(), end=' ')
            print('queue empty')
            break
        else:
            print('Worker', threading.currentThread(), end=' ')
            print('running with argument', arg)
            time.sleep(0.5)

# Create queue
q = queue.Queue()

# Start a pool of 5 workers
for i in range(5):
    t = threading.Thread(target=worker, name='worker %i' % (i+1))
    t.start()

# Begin adding work to the queue
for i in range(50):
    q.put(i)

# Give threads time to run
print('Main thread sleeping')
time.sleep(5)

A global interpreter lock (GIL) is used internally to ensure that only one
thread runs in the Python VM at a time. In general, Python offers to switch
among threads only between bytecode instructions; how frequently it switches can
be set via sys.setswitchinterval(). Each bytecode instruction, and all of
the C implementation code reached from it, is therefore atomic from the
point of view of a Python program.

In theory, this means an exact accounting requires an exact understanding of the
PVM bytecode implementation. In practice, it means that operations on shared
variables of built-in data types (ints, lists, dicts, etc) that "look atomic"
really are.

For example, the following operations are all atomic (L, L1, L2 are lists, D,
D1, D2 are dicts, x, y are objects, i, j are ints):
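With those names bound to concrete values (the bindings below are illustrative, so the snippet is runnable):

```python
L = [1, 2]; L1 = [3]; L2 = [4]
D = {'a': 1}; D1 = {'b': 2}; D2 = {'c': 3}
x, y, i, j = 'x', 'y', 0, 0

# Each of these is a single bytecode-level operation, hence atomic:
x = L[i]               # read an element
x = L.pop()            # remove and return the last element
L1.extend(L2)          # bulk append
x, y = y, x            # swap
L.sort()
x = y
D[x] = y
D1.update(D2)          # bulk dictionary update
keys = list(D.keys())

# These, by contrast, are NOT atomic -- each is a read-modify-write
# spanning several bytecode instructions:
#     i = i + 1
#     L[i] = L[j]
#     D[x] = D[x] + 1
```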

Operations that replace other objects may invoke those other objects'
__del__() method when their reference count reaches zero, and that can
affect things. This is especially true for the mass updates to dictionaries and
lists. When in doubt, use a mutex!

The global interpreter lock (GIL) is often seen as a hindrance to Python's
deployment on high-end multiprocessor server machines, because a multi-threaded
Python program effectively only uses one CPU, due to the insistence that
(almost) all Python code can only run while the GIL is held.

Back in the days of Python 1.5, Greg Stein actually implemented a comprehensive
patch set (the "free threading" patches) that removed the GIL and replaced it
with fine-grained locking. Adam Olsen recently did a similar experiment
in his python-safethread
project. Unfortunately, both experiments exhibited a sharp drop in single-thread
performance (at least 30% slower), due to the amount of fine-grained locking
necessary to compensate for the removal of the GIL.

This doesn't mean that you can't make good use of Python on multi-CPU machines!
You just have to be creative with dividing the work up between multiple
processes rather than multiple threads. The
ProcessPoolExecutor class in the new
concurrent.futures module provides an easy way of doing so; the
multiprocessing module provides a lower-level API in case you want
more control over dispatching of tasks.

Judicious use of C extensions will also help; if you use a C extension to
perform a time-consuming task, the extension can release the GIL while the
thread of execution is in the C code and allow other threads to get some work
done. Some standard library modules such as zlib and hashlib
already do this.

It has been suggested that the GIL should be a per-interpreter-state lock rather
than truly global; interpreters then wouldn't be able to share objects.
Unfortunately, this isn't likely to happen either. It would be a tremendous
amount of work, because many object implementations currently have global state.
For example, small integers and short strings are cached; these caches would
have to be moved to the interpreter state. Other object types have their own
free list; these free lists would have to be moved to the interpreter state.
And so on.

And I doubt that it can even be done in finite time, because the same problem
exists for 3rd party extensions. It is likely that 3rd party extensions are
being written at a faster rate than you can convert them to store all their
global state in the interpreter state.

And finally, once you have multiple interpreters not sharing any state, what
have you gained over running each interpreter in a separate process?

Use os.remove(filename) or os.unlink(filename); for documentation, see
the os module. The two functions are identical; unlink() is simply
the name of the Unix system call for this function.

To remove a directory, use os.rmdir(); use os.mkdir() to create one.
os.makedirs(path) will create any intermediate directories in path that
don't exist. os.removedirs(path) will remove intermediate directories as
long as they're empty; if you want to delete an entire directory tree and its
contents, use shutil.rmtree().

To rename a file, use os.rename(old_path, new_path).
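A quick round trip through these calls, using a scratch directory so the snippet is safe to run anywhere:

```python
import os, shutil, tempfile

base = tempfile.mkdtemp()                  # scratch area for the demo
nested = os.path.join(base, 'a', 'b', 'c')
os.makedirs(nested)                        # creates a/, a/b/ and a/b/c/

path = os.path.join(nested, 'data.txt')
with open(path, 'w') as f:
    f.write('hello')

os.rename(path, path + '.bak')             # rename the file
os.remove(path + '.bak')                   # delete it; os.unlink is identical

shutil.rmtree(base)                        # remove the whole tree and contents
print(os.path.exists(base))                # False
```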

To truncate a file, open it using f = open(filename, "rb+"), and use
f.truncate(offset); offset defaults to the current seek position. There's
also os.ftruncate(fd, offset) for files opened with os.open(), where
fd is the file descriptor (a small integer).
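For instance (writing a throwaway file first so the snippet is self-contained):

```python
import os, tempfile

fd, path = tempfile.mkstemp()
os.close(fd)

with open(path, 'wb') as f:
    f.write(b'0123456789')

with open(path, 'rb+') as f:
    f.truncate(4)              # keep only the first four bytes

with open(path, 'rb') as f:
    remaining = f.read()
print(remaining)               # b'0123'

os.remove(path)
```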

To read or write complex binary data formats, it's best to use the struct
module. It allows you to take a string containing binary data (usually numbers)
and convert it to Python objects; and vice versa.

For example, the following code reads two 2-byte integers and one 4-byte integer
in big-endian format from a file:
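A sketch matching that description (a sample file is written first so the snippet is self-contained; real code would open an existing data file):

```python
import os, struct, tempfile

# Create eight bytes of sample data for the read below to parse.
fd, filename = tempfile.mkstemp()
os.write(fd, struct.pack('>hhl', 1, 2, 70000))
os.close(fd)

with open(filename, 'rb') as f:
    s = f.read(8)                       # 2 + 2 + 4 bytes
    x, y, z = struct.unpack('>hhl', s)

print(x, y, z)                          # 1 2 70000
os.remove(filename)
```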

The '>' in the format string forces big-endian data; the letter 'h' reads one
"short integer" (2 bytes), and 'l' reads one "long integer" (4 bytes) from the
string.

For data that is more regular (e.g. a homogeneous list of ints or floats),
you can also use the array module.

Note

To read and write binary data, it is mandatory to open the file in
binary mode (here, passing "rb" to open()). If you use
"r" instead (the default), the file will be open in text mode
and f.read() will return str objects rather than
bytes objects.

os.read() is a low-level function which takes a file descriptor, a small
integer representing the opened file. os.popen() creates a high-level
file object, the same type returned by the built-in open() function.
Thus, to read n bytes from a pipe p created with os.popen(), you need to
use p.read(n).

For most file objects you create in Python via the built-in open()
function, f.close() marks the Python file object as being closed from
Python's point of view, and also arranges to close the underlying C file
descriptor. This also happens automatically in f's destructor, when
f becomes garbage.

But stdin, stdout and stderr are treated specially by Python, because of the
special status also given to them by C. Running sys.stdout.close() marks
the Python-level file object as being closed, but does not close the
associated C file descriptor.

To close the underlying C file descriptor for one of these three, you should
first be sure that's what you really want to do (e.g., you may confuse
extension modules trying to do I/O). If it is, use os.close():
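A demonstration on a duplicate of stdout's descriptor, so running it leaves the real stdout intact; the same os.close() call works on the real descriptors 0, 1 and 2:

```python
import os, sys

# Duplicate stdout's descriptor and close the copy at the C level.
fd = os.dup(sys.stdout.fileno())
os.close(fd)

closed = False
try:
    os.write(fd, b'x')                 # the descriptor is gone now
except OSError:
    closed = True
print('descriptor closed:', closed)

# For the real streams the calls are simply:
#     os.close(0)    # stdin
#     os.close(1)    # stdout
#     os.close(2)    # stderr
```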

I would like to retrieve web pages that are the result of POSTing a form. Is
there existing code that would let me do this easily?

Yes. Here's a simple example that uses urllib.request:

#!/usr/local/bin/python

import urllib.request

# build the query string
qs = "First=Josephine&MI=Q&Last=Public"

# connect and send the server a path
req = urllib.request.urlopen('http://www.some-server.out-there'
                             '/cgi-bin/some-cgi-script',
                             data=qs.encode('ascii'))   # data must be bytes
with req:
    msg, hdrs = req.read(), req.info()

Note that in general for percent-encoded POST operations, query strings must be
quoted using urllib.parse.urlencode(). For example, to send
name=Guy Steele, Jr.:
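A minimal example of the quoting:

```python
import urllib.parse

# urlencode percent-encodes each key and value (spaces become '+',
# the comma becomes %2C) and joins the pairs with '&'.
qs = urllib.parse.urlencode({'name': 'Guy Steele, Jr.'})
print(qs)   # name=Guy+Steele%2C+Jr.
```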

A Unix-only alternative uses sendmail. The location of the sendmail program
varies between systems; sometimes it is /usr/lib/sendmail, sometimes
/usr/sbin/sendmail. The sendmail manual page will help you out. Here's
some sample code:
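A sketch along those lines. The addresses are placeholders, the sendmail path may need adjusting for your system, and send() is only defined here, not called, since it requires a working sendmail:

```python
import os

SENDMAIL = "/usr/sbin/sendmail"    # or /usr/lib/sendmail on some systems

def format_message(sender, recipient, subject, body):
    # RFC 822 layout: headers, one blank line, then the body.
    return ("From: %s\nTo: %s\nSubject: %s\n\n%s\n"
            % (sender, recipient, subject, body))

def send(sender, recipient, subject, body):
    # -t: take recipients from the headers; -i: don't end input at a lone "."
    p = os.popen("%s -t -i" % SENDMAIL, "w")
    p.write(format_message(sender, recipient, subject, body))
    sts = p.close()                # a non-None exit status means trouble
    if sts is not None:
        print("Sendmail exit status", sts)

print(format_message('me@example.com', 'you@example.com', 'test', 'Some text'))
```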

The select module is commonly used to help with asynchronous I/O on
sockets.

To prevent the TCP connect from blocking, you can set the socket to non-blocking
mode. Then when you do the connect(), you will either connect immediately
(unlikely) or get an exception that contains the error number as .errno.
errno.EINPROGRESS indicates that the connection is in progress, but hasn't
finished yet. Different OSes will return different values, so you're going to
have to check what's returned on your system.

You can use the connect_ex() method to avoid creating an exception. It will
just return the errno value. To poll, you can call connect_ex() again later
-- 0 or errno.EISCONN indicate that you're connected -- or you can pass this
socket to select to check if it's writable.
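A self-contained sketch of that polling pattern, using a loopback listener in place of a real remote server:

```python
import errno, select, socket

# Stand-in server on the loopback interface (ephemeral port).
server = socket.socket()
server.bind(('127.0.0.1', 0))
server.listen(1)
addr = server.getsockname()

s = socket.socket()
s.setblocking(False)               # make connect() non-blocking

err = s.connect_ex(addr)           # returns an errno instead of raising
if err not in (0, errno.EISCONN):
    # Typically EINPROGRESS here: wait until the socket is writable,
    # which signals that the connect has completed (or failed).
    select.select([], [s], [], 5.0)

# Poll with connect_ex() again: 0 or EISCONN means we're connected.
err = s.connect_ex(addr)
connected = err in (0, errno.EISCONN)
print('connected:', connected)

s.close()
server.close()
```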

Note

The asyncore module presents a framework-like approach to the problem
of writing non-blocking networking code.
The third-party Twisted library is
a popular and feature-rich alternative.

The pickle library module solves this in a very general way (though you
still can't store things like open files, sockets or windows), and the
shelve library module uses pickle and (g)dbm to create persistent
mappings containing arbitrary Python objects.
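For instance (the shelf path is a throwaway created just for the demo):

```python
import os, pickle, shelve, tempfile

data = {'spam': [1, 2, 3], 'eggs': ('a', 'b')}

# pickle: any picklable object to bytes and back.
blob = pickle.dumps(data)
restored = pickle.loads(blob)
print(restored == data)            # True

# shelve: a persistent, dictionary-like mapping backed by pickle + dbm.
path = os.path.join(tempfile.mkdtemp(), 'store')
with shelve.open(path) as db:
    db['key'] = data
with shelve.open(path) as db:      # reopen: the data survived
    print(db['key'] == data)       # True
```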