If you are starting out writing a script, don't take these guidelines too seriously; it's worth getting things working first and identifying a real need for optimization before you start looking for areas of your script to optimize.

If you're making a tool for others, keep in mind that they might use it with larger datasets than those you have been testing with, so just before releasing can be a good time to optimize.

Blender.Object.Get() is often used to get all objects for exporting. This is bad practice, since Object.Get() returns objects from every scene in Blender. This is almost never what the user wants, could result in overlapping data, and could also take a long time if the user has 2 or more large scenes consuming a lot of memory.

Instead use Blender.Scene.GetCurrent().getChildren(), which returns all objects from the current scene.

Another alternative is to use Blender.Object.GetSelected(), which returns selected objects on visible layers in the current scene.

Meshes are not just thin wrappers like most of the other types in Blender, so you want to avoid meshObject.getData() and meshObject.data as much as possible; it is best practice to fetch each mesh's data only once.

To get the name of the data an object references, do not use ob.getData().name simply to get the name. Instead use ob.getData(1), meaning ob.getData(name_only=1). This is nice because it works for all data types.

Note: recently (as of 15/10/05) the addition of the Mesh module (a thin wrapper around Blender's mesh data) means that you can do meshObject.getData(mesh=1) without the problem NMesh has; however, this is new and doesn't support all the NMesh functions.

As of Blender 2.37, nmesh.transform(ob.matrix) can be used to transform an NMesh by a 4x4 transformation matrix.
On my system it is over 100 times faster than transforming each vert's location and normal by a matrix within Python, though the overall speed gained by using this in my OBJ exporter was about 5%.

Make sure that if you are exporting vertex normals you pass an extra 1: mesh.transform(matrix, recalc_normals=1). This saves you from transforming each vertex normal yourself in Python.

You may ignore this point for small applications, but I have found that for larger scenes this can become a problem if not allowed for early on.

A list in Python (and therefore Blender/Python) is never deallocated; however, Python can re-use the memory while the script is running. Basically, making large lists in Python, and a lot of recursive variable assignment, can leak memory that can only be regained with a restart. If you can, avoid making large lists.

In Python there are some handy list functions that save you having to search through the list yourself.
Even though you're not looping over the list data, Python is, so you need to be aware of functions that will slow down your script by searching the whole list.

In Python we can add to and remove from a list. This is slower when the list is modified near its start, since all the data after the index of modification needs to be moved up or down one place.

The simplest way to add onto the end of a list is myList.append(listItem) or myList.extend(someList), and the fastest way to remove an item is myList.pop().

To use an index you can use myList.insert(index, listItem), and pop() also takes an index for removal, but these are slower the closer the index is to the start of the list.
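The difference can be sketched in plain Python (the data here is made up for illustration):

```python
# Appending/popping at the end of a list is cheap; inserting or
# popping at the front forces every later item to shift one place.
items = [1, 2, 3]
items.append(4)       # fast: items is now [1, 2, 3, 4]
last = items.pop()    # fast: removes and returns 4
items.insert(0, 0)    # slow for long lists: items is now [0, 1, 2, 3]
first = items.pop(0)  # slow for long lists: removes and returns 0
```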

Sometimes it's faster (but more memory hungry) to just rebuild the list.
Say we want to remove all triangle faces from a list.
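Rebuilding can be done with a list comprehension. A minimal sketch, where each face is assumed to be a tuple of vertex indices (hypothetical data, not Blender's face type):

```python
# Hypothetical face data: each face is a tuple of vertex indices.
faces = [(0, 1, 2), (0, 1, 2, 3), (4, 5, 6), (2, 3, 4, 5)]

# Build a new list containing only the non-triangles, instead of
# removing triangles one at a time from the original list.
faces = [f for f in faces if len(f) != 3]
```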

Use myList.pop(i) rather than myList.remove(listItem).
This requires you to have the index of the list item, but it is faster, since remove() needs to search the list.
Using pop() for removing list items favors a while loop instead of a for loop.

Here is an example of how to remove items in one loop, removing the last items first, which is faster (as explained above).
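A sketch of that loop, again using hypothetical face tuples rather than Blender's face type:

```python
# Hypothetical face data: each face is a tuple of vertex indices.
faces = [(0, 1, 2), (0, 1, 2, 3), (4, 5, 6), (2, 3, 4, 5)]

i = len(faces)
while i:
    i -= 1                  # walk backwards so pop() never shifts
    if len(faces[i]) == 3:  # the items we still have to visit
        faces.pop(i)
```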

When passing a list/dictionary to a function, it is better to have the function modify the list in place rather than return a new list. This means Python doesn't need to create a new list in memory.
Functions that modify a list in place are more efficient than functions that create new lists.

normalize(vec)        # faster: no re-assignment

...is faster than...

vec = normalize(vec)  # slower; only use this style for functions that are meant to make new, unlinked lists
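Concretely, an in-place normalize might look like this (a sketch; the function name and the vector-as-list layout are assumptions, not an existing API):

```python
import math

def normalize(vec):
    # Scale vec to unit length in place; vec is assumed to be a
    # mutable sequence of floats, e.g. a 3-item list.
    length = math.sqrt(sum(c * c for c in vec))
    for i in range(len(vec)):
        vec[i] /= length

v = [3.0, 0.0, 4.0]
normalize(v)  # modifies v in place; no new list is created
```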

Also note that passing a sliced list makes a copy of the list in Python memory, e.g. foobar(myList[4:-1]).
If myList was a large list of floats, the copy could use a lot of extra memory.
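That a slice really is an independent copy is easy to demonstrate:

```python
mylist = list(range(10))
part = mylist[4:-1]  # the slice is a brand-new list: a copy in memory
part[0] = 999        # changing the copy...
# ...leaves the original list untouched
```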

When you access a dictionary item, someDict['foo'], a lookup is performed. Python's lookups are very fast, but if you are accessing this data in a loop it is better to assign it to a local variable that can be referenced faster.
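For example (the dictionary contents here are made up):

```python
# Hypothetical data: vertex coordinates keyed by name.
some_dict = {'verts': [1.0, 2.0, 3.0]}

total = 0.0
verts = some_dict['verts']  # one lookup, cached in a local variable
for v in verts:             # instead of some_dict['verts'] every pass
    total += v
```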

Blender path names are not compatible with path names from generic Python modules:
native Python functions don't understand Blender's // as meaning the directory of the current blend file, or # as the current frame number.
A common example is where you are exporting a scene and want to export the image paths with it. Blender.sys.expandpath(img.filename)
will give you the absolute path, so you can pass the path to functions from other Python modules.

Since many file formats are ASCII, the way you parse/export strings can make a large difference in how fast your program runs.
When importing strings to make into Blender data, there are a few ways to parse each string.

Using startswith() is slightly faster (about 5%) than slicing the string and comparing.
myString.endswith('foobar') can be used for line endings too.
Also, if you're unsure whether the text is upper or lower case, use the lower() or upper() string functions. Eg. if line.upper().startswith('VERT ')
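Putting those together, a parsing loop might look like this (the 'VERT' line format is invented for illustration, not a real file format):

```python
# Hypothetical input lines in a made-up ASCII format.
lines = ['VERT 1.0 2.0 3.0', 'vert 4.0 5.0 6.0', '# comment']

verts = []
for line in lines:
    # upper() lets one startswith() test match both cases
    if line.upper().startswith('VERT '):
        x, y, z = line.split()[1:4]
        verts.append((float(x), float(y), float(z)))
```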

A Python module called Psyco exists that can dynamically compile some Python functions. Most of the speed gains affect algorithms written in Python, so importers and exporters have less to gain from it than scripts that deal with 3D math.

For many scripts, a no-brainer way of using Psyco is to do:

import psyco
psyco.full()

Psyco can also profile your code.

import psyco
psyco.profile()

For scripts to be distributed, it is polite to try loading Psyco and carry on without it if the import fails, to avoid raising an error for users who don't have it.
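A common pattern for that (the have_psyco flag is added here just for illustration):

```python
# Psyco is optional: fall back silently if it isn't installed.
try:
    import psyco
    psyco.full()
    have_psyco = True
except ImportError:
    have_psyco = False  # the script still runs, just without Psyco
```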

The try statement is useful to save time writing code to check a condition.
However, 'try' is about 10 times slower than 'if', so don't use 'try' in areas of your code that execute in a loop that runs many times (1000s or more).

There are cases where using 'try' is faster than checking whether the condition will raise an error, so it is worth experimenting.
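A typical place where both styles work is counting with a dictionary (hypothetical data; which style wins depends on how often the exception is actually raised):

```python
# Hypothetical data: counting occurrences with a dictionary.
words = ['tri', 'quad', 'tri', 'tri']
counts = {}
for w in words:
    # 'if' check: cheap, especially when the key is often missing
    if w in counts:
        counts[w] += 1
    else:
        counts[w] = 1

# The try/except version can win when the KeyError is rarely
# raised, i.e. the key is almost always already present:
#     try:
#         counts[w] += 1
#     except KeyError:
#         counts[w] = 1
```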

In some cases "is" can be used instead of "==" for comparison, and "is not" for "!=".
"==" checks whether the 2 variables have the same value, whereas "is" tests that they are the same Python object, sharing the same memory - both names are bound to the same instance. (A Python object, not a Blender Object.)
The advantage of using "is" is that it is faster, since it doesn't need to compare as much data.

Here is a benchmark test that compares a Python class and a Blender vector - one the same and one different.
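The original benchmark is not reproduced in this chunk; below is a minimal pure-Python sketch of the same idea, timing "is" against "==" on instances of a small stand-in class (the Blender vector half would need to run inside Blender, so it is omitted):

```python
import timeit

class Vec:
    # A minimal stand-in for a vector class; not Blender's vector.
    def __init__(self, x, y, z):
        self.x, self.y, self.z = x, y, z
    def __eq__(self, other):
        return (self.x == other.x and self.y == other.y
                and self.z == other.z)

a = Vec(1.0, 2.0, 3.0)
b = a                    # same object: 'a is b' is True
c = Vec(1.0, 2.0, 3.0)  # different object with the same value

t_is = timeit.timeit('a is b', globals={'a': a, 'b': b}, number=100000)
t_eq = timeit.timeit('a == c', globals={'a': a, 'c': c}, number=100000)
# 'is' only compares object identity, so t_is is normally far
# smaller than t_eq, which has to call __eq__ each time
```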