I ordered a ~$1200 2.8 lb. 14" Lenovo Thinkpad X1 Carbon (2014 2nd generation) today at 40% off for the Black Friday / Cyber Monday sale. I hope I like the keyboard. I splurged on an Intel Core i7-4600U, 8GB RAM, a 256GB SSD, and the WQHD 2560x1440 display. I have enjoyed 7 months with my ~$200 2.8 lb. 11" Acer C720 Chromebook. It's a pleasant contrast to my work-issued 5.9 lb. 17" Lenovo Thinkpad W510. Crouton worked well for running Linux on the Chromebook, but I wanted a dedicated Linux laptop and a better screen. I will bequeath the C720 to my wife, since she currently uses my 7+ year old Dell Inspiron E1405. I considered the 3.5 lb. 14" Lenovo Thinkpad T440s for its better battery life, keyboard, and flexibility, but the X1 Carbon was ~$200 cheaper for a similarly spec'd model due to a bigger discount, and I liked its thinner, lighter design and better display. Many reviews complained about the new keyboard layout and adaptive function row; I hope key remapping will reduce the pain enough. LWN.net editor Jonathan Corbet mentioned that he bought an X1 Carbon in his article High-DPI displays and Linux, so it can't be too bad, right?

Today I learned that the old style "%" string formatting and the newer string .format() method behave differently when interpolating unicode strings. I was surprised to find that one of these lines raises an error while the other does not:

'%s' % u'O\u2019Connor'

'{}'.format(u'O\u2019Connor')

The old style "%" formatting operation returns a unicode string if one of the values is a unicode string, even when the format string is a non-unicode string.

The new string .format() method called on a non-unicode string with a unicode string argument tries to encode the unicode string to a non-unicode string (bytestring), possibly raising a UnicodeEncodeError.
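
Here is a minimal demonstration of both behaviors (Python 2 semantics; under Python 3 both lines succeed, because every str is already unicode):

```python
# -*- coding: utf-8 -*-
import sys

name = u'O\u2019Connor'  # contains a non-ASCII right single quotation mark

# Old-style %: if any value is unicode, the result is promoted to unicode,
# so interpolating into a bytestring format succeeds.
old_style = '%s' % name

# New-style str.format(): under Python 2 the bytestring format string tries
# to encode the unicode argument to ASCII, raising UnicodeEncodeError.
try:
    new_style = '{}'.format(name)
    format_raised = False
except UnicodeEncodeError:
    new_style = None
    format_raised = True

print(old_style == name)   # True: the result is the unicode string itself
print(format_raised)       # True on Python 2, False on Python 3
```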

Loggers are organized in a hierarchical fashion. A logger named 'foo.bar' is a child
of a logger named 'foo'.

getLogger() returns a reference to a logger instance with the specified name if it is provided, or root if not. The names are period-separated hierarchical structures. Multiple calls to getLogger() with the same name will return a reference to the same logger object. Loggers that are further down in the hierarchical list are children of loggers higher up in the list. For example, given a logger with a name of foo, loggers with names of foo.bar, foo.bar.baz, and foo.bam are all descendants of foo.
- Loggers documentation

If the level is not set on a logger, the level of the parent is used.

Loggers have a concept of effective level. If a level is not explicitly set on a logger, the level of its parent is used instead as its effective level. If the parent has no explicit level set, its parent is examined, and so on - all ancestors are searched until an explicitly set level is found. The root logger always has an explicit level set (WARNING by default). When deciding whether to process an event, the effective level of the logger is used to determine whether the event is passed to the logger’s handlers.
- Loggers documentation
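
A quick sketch of effective levels using the stdlib logging module (logger names 'foo' and 'foo.bar' are arbitrary):

```python
import logging

foo = logging.getLogger('foo')
bar = logging.getLogger('foo.bar')

# Neither logger has an explicit level, so both fall back to the root
# logger's default level, WARNING.
print(foo.getEffectiveLevel() == logging.WARNING)  # True
print(bar.getEffectiveLevel() == logging.WARNING)  # True

# Setting a level on 'foo' changes the effective level of 'foo.bar' too,
# because 'foo' is now the nearest ancestor with an explicit level.
foo.setLevel(logging.DEBUG)
print(bar.getEffectiveLevel() == logging.DEBUG)    # True
```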

Similarly, if a handler is not defined for a logger, the handler of the parent is used.

Child loggers propagate messages up to the handlers associated with their ancestor loggers. Because of this, it is unnecessary to define and configure handlers for all the loggers an application uses. It is sufficient to configure handlers for a top-level logger and create child loggers as needed. (You can, however, turn off propagation by setting the propagate attribute of a logger to False.)
- Loggers documentation
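
A small example of handler propagation; the StringIO stream stands in for a real destination like a file or the console:

```python
import io
import logging

stream = io.StringIO()
top = logging.getLogger('myapp')
top.setLevel(logging.INFO)
top.addHandler(logging.StreamHandler(stream))

# 'myapp.worker' has no handlers of its own; its records propagate up
# and are emitted by the handler attached to 'myapp'.
worker = logging.getLogger('myapp.worker')
worker.info('hello from the child')

print(stream.getvalue())  # hello from the child
```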

However, filters, unlike levels and handlers, do not propagate.
If a filter is not defined for a logger, the filter of the parent is NOT used.

Note that filters attached to handlers are consulted before an event is emitted by the handler, whereas filters attached to loggers are consulted whenever an event is logged (using debug(), info(), etc.), before sending an event to handlers. This means that events which have been generated by descendant loggers will not be filtered by a logger’s filter setting, unless the filter has also been applied to those descendant loggers.
- Filter Objects documentation
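
A sketch of the difference; DropSecretsFilter is a made-up filter for illustration, and Logger.filter() is called directly here just to show what each logger would do with the same record:

```python
import logging

class DropSecretsFilter(logging.Filter):
    """Reject records whose message contains 'secret' (hypothetical)."""
    def filter(self, record):
        return 'secret' not in record.getMessage()

parent = logging.getLogger('app')
child = logging.getLogger('app.db')
parent.addFilter(DropSecretsFilter())

record = logging.makeLogRecord({'msg': 'the secret key'})

# The filter on 'app' rejects events logged directly on 'app'...
print(bool(parent.filter(record)))  # False
# ...but 'app.db' has no filters, so the same event passes there, and it
# would still reach 'app's handlers via propagation.
print(bool(child.filter(record)))   # True
```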

This example shows how to set up a Flask local development server
to use a different configuration based on the subdomain
of the request. The project I work on has several environments
(dev, qa, staging, etc). Each environment has different
database and API hostnames. I use this to switch
between database and API environments quickly while
using my local development server.

The SubdomainDispatcher is taken from the
Application Dispatching Flask documentation.
It is
WSGI middleware that
looks at the subdomain of the request and returns a different
application instance for each subdomain. It calls the
create_app function above and passes it
the appropriate configuration object for the subdomain.
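
The class itself isn't reproduced in this post; the version in the Flask "Application Dispatching" docs looks roughly like the sketch below. Here create_app is assumed to accept the subdomain and look up the matching configuration (dev, qa, staging, ...) itself:

```python
import threading


class SubdomainDispatcher(object):
    """WSGI middleware that lazily creates one app instance per subdomain
    (adapted from the Flask "Application Dispatching" documentation)."""

    def __init__(self, create_app, domain, debug=False):
        self.create_app = create_app
        self.domain = domain
        self.debug = debug
        self.lock = threading.Lock()
        self.instances = {}

    def get_application(self, host):
        host = host.split(':')[0]  # drop the port, if any
        assert host.endswith(self.domain), 'Configuration error'
        subdomain = host[:-len(self.domain)].rstrip('.')
        with self.lock:
            app = self.instances.get(subdomain)
            if app is None:
                # Hand the subdomain to the factory, which selects the
                # appropriate configuration for it.
                app = self.create_app(subdomain)
                self.instances[subdomain] = app
            return app

    def __call__(self, environ, start_response):
        app = self.get_application(environ['HTTP_HOST'])
        return app(environ, start_response)
```

Each application instance is cached, so repeated requests to the same subdomain reuse the same app.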

rundevserver is similar to
flask.Flask.run
but uses the SubdomainDispatcher middleware
before calling werkzeug.serving.run_simple.

def rundevserver(host=None, port=None, domain='', debug=True, **options):
    """
    Modified from `flask.Flask.run`.

    Runs the application on a local development server.

    :param host: the hostname to listen on. Set this to ``'0.0.0.0'`` to
        have the server available externally as well. Defaults to
        ``'127.0.0.1'``.
    :param port: the port of the webserver. Defaults to ``5000``.
    :param domain: used to determine the subdomain.
    :param debug: if given, enable or disable debug mode. See :attr:`debug`.
    :param options: the options to be forwarded to the underlying Werkzeug
        server. See :func:`werkzeug.serving.run_simple` for more
        information.
    """
    from werkzeug.serving import run_simple
    if host is None:
        host = '127.0.0.1'
    if port is None:
        port = 5000
    options.setdefault('use_reloader', debug)
    options.setdefault('use_debugger', debug)
    app = SubdomainDispatcher(create_app, domain, debug=debug)
    run_simple(host, port, app, **options)

This example shows how to use Python to generate a
Google Static Map URL for a map that contains markers within
some dimensions which are smaller than the map image dimensions. This
effectively allows for setting minimum X and Y margins around the markers
in a map. This is useful for a "fluid" web design where a maximum map
size is requested from Google and is then cut off at the edges for
small browser windows.

import math


def generate_map_url(min_map_width_px, max_map_width_px,
                     min_map_height_px, max_map_height_px,
                     marker_groups):
    """
    Return a Google Static Map URL for a map that contains markers within
    some dimensions which are smaller than the map image dimensions. This
    effectively allows for setting minimum X and Y margins around the
    markers in a map. This is useful for a "fluid" web design where a
    maximum map size is requested from Google and is then cut off at the
    edges for small browser windows.
    """
    # Determine the maximum zoom to contain markers at the minimum map size
    lat_list = [lat for markers in marker_groups
                for lat, lng in markers['lat_lng']]
    lng_list = [lng for markers in marker_groups
                for lat, lng in markers['lat_lng']]
    max_zoom = get_zoom_to_fit(
        min(lat_list), max(lat_list),
        min(lng_list), max(lng_list),
        min_map_width_px, min_map_height_px,
    )
    # Build the markers query string arguments
    markers_args = ''
    for markers in marker_groups:
        lat_lng = '|'.join(['{},{}'.format(lat, lng)
                            for lat, lng in markers['lat_lng']])
        markers_args += '&markers=color:{}|{}'.format(markers['color'], lat_lng)
    # Build and return the map URL
    return ''.join([
        'http://maps.googleapis.com/maps/api/staticmap',
        '?sensor=false&v=3&visual_refresh=true',
        '&size={}x{}&zoom={}'.format(
            max_map_width_px, max_map_height_px, max_zoom),
        markers_args,
    ])


def get_zoom_to_fit(min_lat, max_lat, min_lng, max_lng, width_px, height_px):
    """
    Return the maximum zoom that will fit the given min/max lat/lng
    coordinates in a map of the given dimensions. This is used to override
    the zoom set by Google's implicit positioning.

    Calculation translated from Javascript to Python from:
    http://stackoverflow.com/questions/6048975/google-maps-v3-how-to-calculate-the-zoom-level-for-a-given-bounds
    """
    GOOGLE_WORLD_WIDTH = 256
    GOOGLE_WORLD_HEIGHT = 256
    MAX_ZOOM = 17

    def lat2rad(lat):
        sinlat = math.sin(math.radians(lat))
        radx2 = math.log((1 + sinlat) / (1 - sinlat)) / 2.0
        return max(min(radx2, math.pi), -math.pi) / 2.0

    def zoom(map_px, world_px, fraction):
        # Use int() to round down to the nearest integer
        return int(math.log(float(map_px) / float(world_px) / fraction) /
                   math.log(2.0))

    # Determine the maximum zoom based on height and latitude
    if min_lat == max_lat:
        lat_zoom = MAX_ZOOM
    else:
        lat_fraction = (lat2rad(max_lat) - lat2rad(min_lat)) / math.pi
        lat_zoom = zoom(height_px, GOOGLE_WORLD_HEIGHT, lat_fraction)
    # Determine the maximum zoom based on width and longitude
    if min_lng == max_lng:
        lng_zoom = MAX_ZOOM
    else:
        lng_range = max_lng - min_lng
        if lng_range < 0:
            lng_range += 360.0
        lng_fraction = lng_range / 360.0
        lng_zoom = zoom(width_px, GOOGLE_WORLD_WIDTH, lng_fraction)
    return min(lat_zoom, lng_zoom, MAX_ZOOM)
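
As a sanity check, the inner zoom helper can be exercised in isolation. This standalone copy uses the same formula; for markers spanning half the world's longitudes in a 640px-wide map, it picks zoom level 2:

```python
import math

def zoom(map_px, world_px, fraction):
    # int() truncates toward zero, i.e. rounds down for positive values
    return int(math.log(float(map_px) / float(world_px) / fraction) /
               math.log(2.0))

# fraction = 0.5 (half the world's longitude range), 640px map,
# over Google's 256px base tile: int(log2(640 / 256 / 0.5)) == int(log2(5))
print(zoom(640, 256, 0.5))  # 2
```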

Do you have a lot of short, single-use, private functions in your
Python code? For example, below is some stubbed out authentication
code I've been working on. It checks if a user's password is correct
and updates the hash algorithm to use bcrypt.
The 4 private functions with the leading underscore are from 1 to 10
lines long and are only used by
the check_password function. These functions are part
of a larger module with about 20 functions. I don't like that these
4 functions add clutter to the module and are not grouped with the
function that uses them, check_password.

def _get_password_hash_from_db(email_address):
    """Get the user's password hash from the database. """

def _determine_password_hash_algorithm(password_hash):
    """Determine the hash algorithm. """

def _hash_password_old(password):
    """This is the OLD password hash algorithm. """

def _hash_existing_password_bcrypt(password, db_password_hash):
    """This is the NEW algorithm used for hashing existing passwords. """

def check_password(email_address, password):
    """Check if a user's supplied password is correct. """
    db_password_hash = _get_password_hash_from_db(email_address)
    hash_alg = _determine_password_hash_algorithm(db_password_hash)
    if hash_alg == 'BCRYPT':
        input_password_hash = _hash_existing_password_bcrypt(
            password, db_password_hash)
    else:
        input_password_hash = _hash_password_old(password)
    password_correct = (input_password_hash == db_password_hash)
    if password_correct and hash_alg != 'BCRYPT':
        call_change_password(email_address, password)
    return password_correct

def call_change_password(email_address, new_password):
    """Change the user's password. """

Sometimes, in cases like this, I move the 4 private functions to be
nested functions inside check_password.
I like how the functions are grouped together and that the module is
not littered with extraneous functions. However, the inner functions
are not easily testable and I don't see many people doing this.

def check_password(email_address, password):
    """Check if a user's supplied password is correct. """

    def get_password_hash_from_db(email_address):
        """Get the user's password hash from the database. """

    def determine_password_hash_algorithm(password_hash):
        """Determine the hash algorithm. """

    def hash_password_old(password):
        """This is the OLD password hash algorithm. """

    def hash_existing_password_bcrypt(password, db_password_hash):
        """This is the NEW algorithm used for hashing existing passwords. """

    db_password_hash = get_password_hash_from_db(email_address)
    hash_alg = determine_password_hash_algorithm(db_password_hash)
    if hash_alg == 'BCRYPT':
        input_password_hash = hash_existing_password_bcrypt(
            password, db_password_hash)
    else:
        input_password_hash = hash_password_old(password)
    password_correct = (input_password_hash == db_password_hash)
    if password_correct and hash_alg != 'BCRYPT':
        call_change_password(email_address, password)
    return password_correct

def call_change_password(email_address, new_password):
    """Change the user's password. """

Another option is to create a PasswordChecker class instead.
This seems the most powerful and now the private methods are
testable. However, this adds more overhead, and I hear Jack Diederich
telling me to Stop Writing Classes!

class _PasswordChecker(object):
    """Check if a user's supplied password is correct. """

    @staticmethod
    def _get_password_hash_from_db(email_address):
        """Get the user's password hash from the database. """

    @staticmethod
    def _determine_password_hash_algorithm(password_hash):
        """Determine the hash algorithm. """

    @staticmethod
    def _hash_password_old(password):
        """This is the OLD password hash algorithm. """

    @staticmethod
    def _hash_existing_password_bcrypt(password, db_password_hash):
        """This is the NEW algorithm used for hashing existing passwords. """

    def __call__(self, email_address, password):
        db_password_hash = self._get_password_hash_from_db(email_address)
        hash_alg = self._determine_password_hash_algorithm(db_password_hash)
        if hash_alg == 'BCRYPT':
            input_password_hash = self._hash_existing_password_bcrypt(
                password, db_password_hash)
        else:
            input_password_hash = self._hash_password_old(password)
        password_correct = (input_password_hash == db_password_hash)
        if password_correct and hash_alg != 'BCRYPT':
            call_change_password(email_address, password)
        return password_correct

check_password = _PasswordChecker()

def call_change_password(email_address, new_password):
    """Change the user's password. """

Maybe the solution is to break up the module into smaller modules
which act like the class above? However this might leave me with
some unevenly sized modules. How do you handle this?

The finally block is used to define clean-up actions. Why is the
finally block needed? Why can't the clean-up actions be put after
the try/except/else block? That works in some cases, but if there is
a return, break, or continue, or an unhandled exception inside the
try, except, or else clauses, code placed after the try statement will
never be executed. The finally block executes even in these conditions.
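
A small demonstration: both the try and except clauses below return, so a statement placed after the whole try statement would never run, yet the finally clause always does.

```python
cleanup_calls = []

def divide(a, b):
    """Return a / b, or None on division by zero."""
    try:
        return a / b
    except ZeroDivisionError:
        return None
    finally:
        # Runs on every path, even though try and except both return.
        cleanup_calls.append((a, b))

print(divide(4, 2))   # 2.0
print(divide(1, 0))   # None
print(cleanup_calls)  # [(4, 2), (1, 0)]
```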

I needed to gzip some data in memory that would eventually end up
saved to disk as a .gz file. I thought, "That's easy, just use Python's
built-in gzip module."

However, I needed to pass the data to pycurl as a file-like
object. I didn't want to write the data to disk and then read
it again just to pass to pycurl. I thought, "That's easy too, just use
Python's cStringIO module."

The solution did end up being simple, but figuring out the
solution was a lot harder than I thought. Below is my roundabout process of finding the simple solution.

I googled, then looked at the
source code for gzip.py.
I found that the compressed data was in the StringIO object. So
I performed my file operations on it instead of the GzipFile object.
Now I was able to write the data out to a file. However, the size of
the file was much too small.

I saw there was a flush() method in the source code, so I added a call
to flush(). This time I got a reasonable file size; however, when trying
to gunzip the file from the command line, I got the following error:

I knew that GzipFile worked properly when writing files directly as
opposed to reading from the StringIO object. It turns out the
difference was that there was code in the close() method of GzipFile
which wrote some extra required data. Now stuff was working.

def try7_got_it_working():
    fgz = cStringIO.StringIO()
    gzip_obj = gzip.GzipFile(filename=FILENAME, mode='wb', fileobj=fgz)
    gzip_obj.write(STUFF_TO_GZIP)
    gzip_obj.flush()
    # Do stuff that GzipFile.close() does
    gzip_obj.fileobj.write(gzip_obj.compress.flush())
    gzip.write32u(gzip_obj.fileobj, gzip_obj.crc)
    gzip.write32u(gzip_obj.fileobj, gzip_obj.size & 0xffffffffL)
    filesize = pycurl_simulator(fgz)
    print filesize

Here's the (not really) final version using a subclass of GzipFile that
adds a method to write the extra data at the end. It also overrides
close() so that the data isn't written twice in case you need to call
close(). Also, the separate flush() call is no longer needed.

def try8_not_really_final_version():

    class MemoryGzipFile(gzip.GzipFile):
        """
        A GzipFile subclass designed to be used with in memory file like
        objects, i.e. StringIO objects.
        """
        def write_crc_and_filesize(self):
            """
            Flush and write the CRC and filesize. Normally this is done
            in the close() method. However, for in memory file objects,
            doing this in close() is too late.
            """
            self.fileobj.write(self.compress.flush())
            gzip.write32u(self.fileobj, self.crc)
            # self.size may exceed 2GB, or even 4GB
            gzip.write32u(self.fileobj, self.size & 0xffffffffL)

        def close(self):
            if self.fileobj is None:
                return
            self.fileobj = None
            if self.myfileobj:
                self.myfileobj.close()
                self.myfileobj = None

    fgz = cStringIO.StringIO()
    gzip_obj = MemoryGzipFile(filename=FILENAME, mode='wb', fileobj=fgz)
    gzip_obj.write(STUFF_TO_GZIP)
    gzip_obj.write_crc_and_filesize()
    filesize = pycurl_simulator(fgz)
    print filesize
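
For what it's worth, in Python 3 no subclass is needed: GzipFile.close() writes the CRC/size trailer but does not close a fileobj you passed in, so you can close the GzipFile and keep reading from the BytesIO buffer. A minimal sketch:

```python
import gzip
import io

buf = io.BytesIO()
# Leaving the with-block closes the GzipFile, which writes the gzip
# trailer (CRC and size) to buf, but buf itself stays open because we
# passed it in via fileobj.
with gzip.GzipFile(filename='data.txt', mode='wb', fileobj=buf) as gz:
    gz.write(b'stuff to gzip')

compressed = buf.getvalue()
print(gzip.decompress(compressed) == b'stuff to gzip')  # True
```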