Tuesday, September 29, 2015

While new syntax is important, remember your audience in pitching the upgrade from Python 2.7. You may need to pander to people who aren't programmers or don't really know Python.

When selling the upgrade, it can help to focus on the objective measures.

Performance. When anyone asks why we should disturb our precious Python 2 ecosystem, point out the performance improvements. Begin with Python 3.2, 3.3, 3.4, and then 3.5 improvements. The union of these is an impressive list. Faster is better, right?

New Libraries. For some folks who don't know Python well, it helps to give them a concrete list of features you absolutely require. Seriously. Enumerate all the new libraries from Python 3.2, ..., 3.5. It's a big list. Some of them have been backported, so this list isn't a complete win. You may not really need all of them, but use them to bolster your case.

Other Cleanups. These are important for folks who use Python daily, but aren't too impressive to manager types who aren't deeply into the language details.

The fact that Python 3 handles class/type better than Python 2 isn't impressive to anyone who hasn't dealt with it.

The print statement issue will cause some managers to claim that the upgrade is "risky".
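The actual change is mechanical, which is worth demonstrating when the "risky" claim comes up. A minimal sketch:

```python
# Python 2 syntax:   print "hello", "world"
# In Python 3, print is an ordinary function.
print("hello", "world")

# The function form has options the old statement never had:
print("host", "time", sep=",")   # comma-separated
print("no newline", end="")      # suppress the newline
print()
```

A tool like 2to3 rewrites every print statement automatically, which further undercuts the risk argument.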

The division issue is a complete win. Weirdly, nay-sayers will claim (a) just use float() a lot, (b) just add +0.0 a lot, or (c) just add from __future__ import division a lot. How is this workaround better? No clue. Be prepared to make the case that the dumb workarounds are... well... dumb.
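The difference is easy to demonstrate:

```python
# Python 2:  7 / 2 == 3   (integer division, silently truncating)
# Python 3:  / is true division; // is explicit floor division.
assert 7 / 2 == 3.5
assert 7 // 2 == 3

# The Python 2 workarounds, for comparison:
assert float(7) / 2 == 3.5     # (a) sprinkle float() around
assert (7 + 0.0) / 2 == 3.5    # (b) sprinkle +0.0 around
# (c) put "from __future__ import division" in every module
```

In Python 3 the intent is visible in the operator itself; none of the workarounds is needed.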

You can lift up the type definition and http://mypy-lang.org. If you do, be prepared for snark from the Java/Scala crowd. These folks will (wrongly) claim that a partial type proof is useless, and static type checking is mandatory. This is a difficult discussion to have because the "type safety is important" crowd don't seem to recognize the awful gyrations they're forced into so they can write generic code that's type-agnostic. All Python code is type-agnostic; the type checking just confirms some design constraints. The presence of differing strategies -- type-specific code vs. generic type-agnostic code -- means that neither is right, and the argument is moot.
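A minimal sketch of what the type hints look like (the function is my example, not taken from the mypy documentation):

```python
from typing import List

def mean(values: List[float]) -> float:
    """The annotations change nothing at run time;
    a tool like mypy checks callers against them statically."""
    return sum(values) / len(values)

# The code stays type-agnostic -- it happily accepts ints, too.
# The hints merely confirm some design constraints.
print(mean([1, 2, 3, 4]))  # 2.5
```

The run-time behavior is identical with or without the annotations, which is precisely the point: a partial type proof on top of generic code, not a straitjacket.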

Don't focus on async/await. Yes, it's first on the Python web site, but it can be a tough sell.

Performance

The easy sell is this impressive list of optimizations.

3.2

Peephole optimizer improvements

Serializing and unserializing data using the pickle module is now several times faster.

The Timsort algorithm used in list.sort() and sorted() now runs faster and uses less memory when called with a key function.

JSON decoding performance is improved and memory consumption is reduced whenever the same string is repeated for multiple keys.

Recursive locks (created with the threading.RLock() API) now benefit from a C implementation which makes them as fast as regular locks, and between 10x and 15x faster than their previous pure Python implementation.

The fast-search algorithm in stringlib is now used by the split(), splitlines() and replace() methods on bytes, bytearray and str objects. Likewise, the algorithm is also used by rfind(), rindex(), rsplit() and rpartition().

Integer to string conversions now work two “digits” at a time, reducing the number of division and modulo operations.

Several other minor optimizations.

Set differencing now runs faster when one operand is much larger than the other

The array.repeat() method has a faster implementation

The BaseHTTPRequestHandler has more efficient buffering

The operator.attrgetter() function has been sped-up

ConfigParser loads multi-line arguments a bit faster

3.3

Some operations on Unicode strings have been optimized

UTF-8 is now 2x to 4x faster. UTF-16 encoding is now up to 10x faster.

3.4

The UTF-32 decoder is now 3x to 4x faster.

The cost of hash collisions for sets is now reduced.

The interpreter starts about 30% faster.

bz2.BZ2File is now as fast or faster than the Python2 version for most cases. lzma.LZMAFile has also been optimized.

random.getrandbits() is 20%-40% faster for small integers.

By taking advantage of the new storage format for strings, pickling of strings is now significantly faster.

A performance issue in io.FileIO.readall() has been solved.

html.escape() is now 10x faster.

3.5

The os.walk() function has been sped up by 3 to 5 times on POSIX systems, and by 7 to 20 times on Windows.

Construction of bytes(int) (filled by zero bytes) is faster and uses less memory for large objects.

Some operations on ipaddress IPv4Network and IPv6Network have been massively sped up.

The dozen new modules, however, might help overcome organizational inertia and make progress on ditching Python 2. I've been making heavy use of statistics. I need to make better use of pathlib in future projects.

Tuesday, September 22, 2015

The core visualization tool (pythontutor.com) can be helpful for many people. The shared environments seem like a cool idea, also, but I don't have any specific comments on the other tools.

While this looks very cool, I'm not a huge fan of this kind of step-by-step visualization. Although it uses very clear graphics and looks very clever, it has some limitations. Some aspects of "visualization" can be misleading: following an execution path for a specific initial condition can obscure events and conditions that aren't on the happy path. It's not clear how a group of statements establishes a more general condition.

I'm a fan of formal post-conditions. From these, we can postulate a statement, and work out the weakest precondition for the statement. As we work through this exercise, we create a formal proof and a program. It's very elegant. And it covers the general case, not specific examples.

Most importantly, this effort depends on having formal semantics for each statement. To write code, we need a concise definition of the general state change made by each statement in the language. We're looking at the general case for each statement rather than following a specific initial condition through a statement.

Sidebar.

In C, what does this do? a[i++] = ++i; There is no formal definition: the statement specifies three state changes, but the language doesn't define how they are ordered. No matter what initial values for a[] and i we provide, the result is still murky. A debugger only reveals the specific implementation being debugged.

Visualization may help some people understand the state change created by a statement. Some people do learn things by watching this kind of "debugger" mode. In particular, this may help because it has much better graphics than the built-in character-mode debugger.

This idea works best with programs that already make sense: programs that are well designed. Programs that make orderly progress from some initial state to the desired final state.

Programs written by learners may not be all that clean. Realistically, they may be inept. They may even reach the far end of the spectrum and be downright bad.

While this tool is graphically gorgeous, it's still a debugger. It wallows around in an internal world in which the formal semantics can get obscured. The general case can't easily be shown.

We have a forest and trees problem here. A debugger (or other statement-by-statement visualization tool) emphasizes each individual tree. The larger structures of glades, thickets, groves, stands, brakes, and coppices are lost to view.

The humble while statement (especially one with an internal if-break) can be extremely difficult to understand as a single statement. If we break down the statement-by-statement execution, the presence of two termination conditions (one on the while clause and one on the if clause) can be obscured because a visualization must follow a specific initial condition.
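A small example (mine, not the visualization tool's) makes the two exits concrete:

```python
def find(target, sequence):
    """Two termination conditions: the while clause runs off the
    end of the sequence, or the if-break locates the target."""
    i = 0
    while i != len(sequence):       # exit #1: exhausted
        if sequence[i] == target:
            break                   # exit #2: found
        i += 1
    # Post-condition: i == len(sequence) and target is absent,
    # or i < len(sequence) and sequence[i] == target.
    return i

print(find(3, [1, 2, 3, 4]))  # 2
print(find(9, [1, 2, 3]))     # 3 (not found)
```

A trace with target present never exercises exit #1; a trace with target absent never exercises exit #2. The post-condition covers both, which is exactly what a single traced execution can't show.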

With really well-written tutorials -- and some necessary metadata -- a super-visualizer might be able to highlight the non-happy-path logic that exists. This alternate path viewing could be helpful for showing how complex logic works (and doesn't work.)

With programs written by learners -- programs which are inept and won't have appropriate metadata -- a super-visualizer would need to reason very carefully about the code to determine what happy path and non-happy-path kinds of logic are present. It would have to locate and highlight

contradictory elif clauses,

gaps among elif clauses,

missing else clauses,

hellishly complex else clauses,

break conditions,

continue conditions, as well as

exception handling.

For truly bad programs, the super-visualizer may be stumped as to what is intended. Indeed, it may be impossible to determine how it can be displayed meaningfully to show alternatives and show how the specific code generalizes into a final outcome.

Given a truly bad program, what does it do? How can any visualizer aid the student by showing the problems?

To take this one step further, I think this kind of thing might also be hazardous to learning how the functional programming features of Python work. I think that exposing the underlying mechanics of a generator expression might be more confusing than simply treating it as a "lazy list."
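For example, a generator expression can be used as a "lazy list" without ever examining its mechanics:

```python
# Nothing is computed when the expression is created...
squares = (n * n for n in range(10))

# ...the values are produced lazily, as sum() consumes them.
total = sum(squares)
print(total)  # 285
```

Stepping through the interpreter's suspend-and-resume machinery here would likely obscure the simple "lazy list" mental model rather than reinforce it.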

It's very nice. But it isn't perfect. Something that supports reasoning about the general post-conditions established by a statement would be more useful than a step-by-step debugger with great graphics.

Tuesday, September 15, 2015

Here are some tricks to working with log file extracts. We're looking at some Enterprise Splunk extracts. We can fiddle around with Splunk, trying to explore the data. Or we can get a simple extract and fiddle around with the data in Python.

Running different experiments in Python seems to be more effective than trying to do this kind of exploratory fiddling in Splunk, primarily because there aren't any boundaries on what we can do with the data. We can create very sophisticated statistical models all in one place.

Theoretically, we can do a lot of exploration in Splunk. It has a variety of reporting and analytical features.

But...

Using Splunk presumes we know what we're looking for. In many cases, we don't know what we're looking for: we're exploring. We may have some indication that a few RESTful API transactions are slow, but little more than that. How do we proceed?

Step one is to get raw data in CSV format. Now what?

Reading Raw Data

We'll start by wrapping a csv.DictReader object with some additional functions.

Object-Oriented Purists will object to this strategy. "Why not just extend DictReader?" they ask. I don't have a great answer. I lean toward functional programming and the resulting orthogonality of components. With a purely OO approach, we have to use more complex-seeming mixins to achieve this.

Our general framework for processing logs is this.

import csv

with open("somefile.csv") as source:
    rdr = csv.DictReader(source)

This allows us to read the CSV-formatted Splunk extract. We can iterate through rows in the reader. Here's trick #1. It's not really very tricky, but I like it.

We can -- to a limited extent -- report raw data in a helpful format. If we want to dress up the output, we can change the format string. Maybe "{host:30s} {ReponseTime:8s} {source:s}" or something like that.
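A sketch of the idea, using hypothetical data and column names: each row from the DictReader is a dict, so it can feed str.format_map() directly.

```python
import csv, io

# A hypothetical extract; the column names are assumptions.
sample = io.StringIO(
    "host,ResponseTime,source\n"
    "web01,0.25,rest-api.log\n"
)
for row in csv.DictReader(sample):
    # Trick #1: pass the row dict straight to format_map().
    print("{host:30s} {ResponseTime:8s} {source:s}".format_map(row))
```

Changing the report is just editing one format string; no other code moves.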

Filtering

A common situation is that we've extracted too much, and only need to see a subset. We can change the Splunk filter, but we hate to overcommit before we've finished our exploration. It's far easier to filter in Python. Once we've learned what we need, we can finalize in Splunk.

We've injected a generator expression that will filter the source rows, allowing us to work with a meaningful subset.
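The injected filter isn't shown in this extract; a minimal sketch, with a hypothetical ResponseTime column and threshold:

```python
def keep_slow(reader, threshold=1.0):
    """A generator expression slipped between the reader and the
    rest of the pipeline; column name and threshold are assumptions."""
    return (row for row in reader
            if float(row["ResponseTime"]) > threshold)

rows = [
    {"service": "/api/a", "ResponseTime": "0.25"},
    {"service": "/api/b", "ResponseTime": "2.50"},
]
slow = list(keep_slow(iter(rows)))
print(slow)  # only the /api/b row survives
```

Because it's a generator, nothing is read or filtered until downstream code consumes the rows.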

Projection

In some cases, we'll have additional columns of source data that we don't really want to use. We'll eliminate this data by making a projection of each row.

In principle, Splunk never produces an empty column. However, RESTful API logs may lead to data sets with a huge number of unique column titles based on surrogate keys that are part of request URI's. These columns will have one row of data from the one request that used that surrogate key. For every other row, there's nothing useful in that column. Life is much simpler if we remove the empty columns from each row.

We can do this with a generator expression, also, but it gets a bit long. A generator function is somewhat easier to read.

def project(reader):
    for row in reader:
        yield {k: v for k, v in row.items() if v}

We've built a new row dictionary from a subset of the items in the original reader. We can use this to wrap the output of our filter.

While the ns_reader() function and ns_reader() lambda are used the same way, it's slightly harder to write a docstring and doctest unit test for a lambda. For this reason, a lambda should probably be avoided.

We can use map(lambda row: types.SimpleNamespace(**row), reader). Some folks prefer this over the generator expression.
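The ns_reader() definitions aren't shown in this extract; a sketch of the alternatives, assuming each one wraps row dictionaries into SimpleNamespace objects:

```python
import types

# As a generator function: easy to give a docstring and doctest.
def ns_reader(reader):
    """Wrap each row dict in a SimpleNamespace for row.attr access."""
    return (types.SimpleNamespace(**row) for row in reader)

# As a lambda: same behavior, but no good place for a docstring.
ns_reader_l = lambda reader: (
    types.SimpleNamespace(**row) for row in reader)

# With map(), as some folks prefer:
rows = [{"host": "web01", "source": "rest-api.log"}]
for row in map(lambda r: types.SimpleNamespace(**r), rows):
    print(row.host)  # attribute access instead of row["host"]
```

All three are equivalent; the choice is about readability and testability, not behavior.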

We could use a proper for statement with an internal yield statement, but there doesn't seem to be any benefit from making a big statement out of a small thing.

We have a lot of choices because Python offers so many functional programming features. We don't often see Python touted as a functional language. Yet, we have a variety of ways to handle a simple mapping.

Mappings: Conversions and Derived Data

We'll often have a list of data conversions that are pretty obvious. Plus, we'll have a growing list of derived data items. The derived items will be dynamic and are based on different hypotheses we're testing. Each time we have an experiment or question, we might change the derived data.

Each of these steps -- filtering, projection, conversion, and derivation -- is a stage in the "map" portion of a map-reduce pipeline. We could create a number of smaller functions and apply them with map(). However, because we're updating a stateful object, we can't use the general map() function. If we wanted to achieve a more pure functional programming style, we'd use an immutable namedtuple instead of a mutable SimpleNamespace.

As we explore, we'll adjust the body of this conversion function. Perhaps we'll start with some minimal set of conversions and derivations. We'll extend this with some "are these right?" kind of things. We'll take some out when we discover that they don't work.

Note the change in the body of our for statement. Our convert() function produces values we're sure of. We've added some additional variables inside the for loop that we're not 100% sure of. We'll see if they're helpful (or even correct) before updating the convert() function.
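The convert() function isn't shown in this extract; a minimal sketch, with hypothetical column names, of stable conversions inside the function and experimental derivations left in the for body:

```python
import types

def convert(reader):
    """Conversions we're sure of: update each SimpleNamespace
    in place. (The column names here are assumptions.)"""
    for row in reader:
        row.ResponseTime = float(row.ResponseTime)
        yield row

rows = [types.SimpleNamespace(service="/api/a", ResponseTime="0.25")]
for row in convert(iter(rows)):
    # Experimental derivation: it lives here until we're sure
    # it's helpful (and correct), then moves into convert().
    row.is_slow = row.ResponseTime > 1.0
```

Promoting a derivation into convert() once it has earned its keep keeps the exploratory code small.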

Reductions

When it comes to reductions, we can adopt a slightly different style of processing. We need to refactor our previous example, and turn it into a generator function.

Ideally, all of our programming looks like this. We use a generator function to produce data. The final display of the data is kept entirely separate. This allows us to refactor and change the processing much more freely.

Now we can do things like collect rows into Counter() objects, or perhaps compute some statistics. We might use a defaultdict(list) to group rows by service.

We've decided to create concrete list objects here. We can use itertools to group the response times by service. It looks like proper functional programming, but the implementation points up some limitations in the Pythonic form of functional programming. Either we have to sort the data (creating a list object) or we have to create lists as we group the data. In order to do several different statistics, it's often easier to group data by creating concrete lists.
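A sketch of the defaultdict(list) approach, with hypothetical column names:

```python
import statistics
from collections import defaultdict

def group_by_service(rows):
    """Group response times into concrete lists, keyed by service."""
    by_service = defaultdict(list)
    for row in rows:
        by_service[row["service"]].append(float(row["ResponseTime"]))
    return by_service

rows = [
    {"service": "/api/a", "ResponseTime": "0.25"},
    {"service": "/api/a", "ResponseTime": "0.75"},
    {"service": "/api/b", "ResponseTime": "2.50"},
]
groups = group_by_service(rows)
for svc in groups:
    print(svc, statistics.mean(groups[svc]))
```

With concrete lists in hand, computing mean, median, and variance per group needs no re-sorting or second pass over the source.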

Rather than simply printing a row object, we're now doing two things.

Create some local variables, like svc and m. We can easily add variance or other measures.

Use the vars() function with no arguments, which creates a dictionary out of the local variables.

This use of vars() with no arguments -- which behaves like locals() -- is a handy trick. It allows us to simply create any local variables we want and include them in the formatted output. We can hack in as many different kinds of statistical measures as we think might be relevant.
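The trick looks something like this; svc and m follow the names used above, while the format string and data are assumptions:

```python
import statistics

groups = {"/api/a": [0.25, 0.75], "/api/b": [2.50, 2.60]}

def summarize(groups):
    for svc in sorted(groups):
        m = statistics.mean(groups[svc])
        sd = statistics.stdev(groups[svc])
        # vars() with no arguments behaves like locals():
        # every local variable is available to the format string.
        print("{svc:20s} {m:8.3f} {sd:8.3f}".format_map(vars()))

summarize(groups)
```

Adding another measure is just another local variable and another field in the format string.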

Now that our essential processing loop is for row in converted_log("somefile.csv"), we can explore a lot of processing alternatives in a tiny, easy-to-modify script. We can explore a number of hypotheses to determine why some RESTful API transactions are slow and others are fast.

Tuesday, September 1, 2015

At first, I imagined the problem was going to be PyAudio. This package has a bunch of installers. But the installers don't recognize Python 3.4, so none of them work for me. The common fallback plan is to install from source, but I couldn't find the source. That looks like a problem.

In the past, I have played with pysynth because it has some examples of wave-table additive synth. That's very handy. The examples are hard to follow because a lot of the synth ideas are conflated into small functions.

Complication: The pysynth package is Python2. It lacks even the simple from __future__ import print_function to make it attempt Python3 compatibility.

The pysynth.play_wav module could be a handy wrapper around various audio playback technologies, including pyaudio. It has to be tweaked, however, to make it work with Python 3.4. I really need to clone the project, make the changes, and put in a pull request.

The pysynth.pysynth and pysynth.pysynth_beeper modules are helpful for seeing how wave tables work. How much rework to make these work with Python3.4? And how much reverse engineering to understand the math?

I've since found pyo, which is also Python 2. See the AjaxSoundStudio pages for details. This may be a better example of wave tables. But it's still Python2. More investigation to follow.


About Me

Steven F. Lott is a consultant, teacher, author and software developer with over 35 years of experience building software of every kind, from specialized control systems for military hardware to large data warehouses to web service API's.