Lap: One length of the course. Sometimes may also mean down and back (2 lengths) of the course.

Thanks a lot... This definition both exposes the absurdity, by defining lap
to mean precisely "a length," and muddies the water by adding that some
people use the word differently (in the useful way), so we really don't know
what we're talking about.

Can we do something about this? Can't the universe make just a little more
sense?

If you participate in mailing lists or IRC long enough, you will encounter a
type of person I call The Lone Confused Expert. These are people who know
a lot, but have gotten something wrong along the way. They have a
fundamental misconception somewhere that is weaving through their
conclusions.

Others will try to correct their wrong worldview, but because the Lone
Confused Expert is convinced of their own intelligence, they view these
conversations as further evidence that they know a great deal and that
everyone around them is wrong, and doesn't understand.

I'm fascinated by the Lone Confused Expert. I want to understand the one
wrong turn they took. One of the things I like about teaching is seeing
people's different views (some right, some wrong) on the topics we're
discussing. Understanding how others grasp a concept teaches me something
about the concept, and about the people.

But the LCE is just a tantalizing mystery, because we never get to uncover
their fundamental understandings. The discussions just turn into giant
foodfights over their incorrect conclusions.

As an example, recently in the #python IRC channel, someone learning Python
said (paraphrased),

Python calls old datatypes new names to make them sound like new
things. A dict is just a rebranded list.

I'd like to know what this person thought a dict was, and how they missed
its essential nature, which is nothing like what other people call lists.
Perhaps they were thinking of Lisp's association lists? That seems
unlikely because they were also very dismissive of languages other than
C/C++.

Typical of The Lone Confused Expert, the discussion balloons as more people
see the odd misconceptions being defended as a higher truth. The more
people flow in to try to correct The Expert, the more The Expert sticks to
their guns and mocks the sheeple who simply believe what they've been told
rather than attaining that rarer understanding.

At a certain level, these statements are simply wrong. But I think
somewhere deep in The Lone Confused Expert's mind, there's a kernel of
truth that's been misapplied, some principle that's been extended beyond
its utility, to produce these ideas. I want to understand that process.
I want to see where they stepped off the path.

There's just no way to get at it, because the LCE won't examine and discuss
their own beliefs. Challenges are viewed as attacks on their intelligence,
which they hold in higher esteem than their knowledge.

In idle moments, these statements come back to me, and I try to puzzle through
what the thought process could be. How can someone know what a punched card
is, but also think that characters on it cannot be tokenized?

I wonder if a face-to-face discussion would work better. People can be
surprisingly different in person than they are online. It's easy to feel
attacked if you have a dozen people talking to you at once. I've never had
the opportunity to meet one of these Lone Confused Experts in real life.
Maybe I don't want to?

A new alpha of Coverage.py 4.0 is available: coverage.py 4.0a6.
(Yes, there are many alphas: I'm changing a lot, and want to let it bake
well before locking things in.)

Also available is the latest version of the Django coverage plugin:
django_coverage_plugin 0.5.
This uses the new plugin support in Coverage.py 4.0 to implement coverage
measurement of Django templates.

Other changes since 4.0a5:

Python 3.5 is supported.

The old-style module-level function interface is no longer supported.
I'm not sure what this might break. Try your coverage-related
tools!

Of the popular Python static checkers, pylint
seems to be the most forceful: it raises alarms more aggressively than
the others. This can be annoying, but thankfully it also has detailed
controls over what it complains about.

It is also extensible: you can write plugins that add checkers for your
code. At edX, we've started doing this for problems we see that pylint
doesn't already check for.

edx-lint is our repo of pylint
extras, including plugins and a simple tool for keeping a central pylintrc
file and using it in a number of repos.

The documentation for pylint internals is not great. It exists,
but too quickly recommends reading the source to understand what's going
on. The good news is that all of the built-in pylint checkers use the
same mechanisms you will, so there are plenty of examples to follow.

A pylint checker is basically an abstract syntax tree (AST) walker, but over
a richer AST than Python provides natively. Writing a checker involves some
boilerplate that I don't entirely understand, but the meat of it is a simple
function that examines the AST.

One problem we've had in our code is getting engineers to understand the
idiosyncratic way that translation functions are used. When you use the
gettext functions in your code, you have to use a literal string as the
first argument. This is because the function will not only be called at
runtime, but is also analyzed statically by the string extraction tools.

So this is good:

welcome = gettext("Welcome, {}!").format(user_name)

but this won't work properly:

welcome = gettext("Welcome, {}!".format(user_name))

The difference is subtle, but crucial. And both will work with the English
string, so the bug can be hard to catch. So we wrote a pylint checker to
flag the bad case.

def visit_callfunc(self, node):
    if not isinstance(node.func, astroid.Name):
        # It isn't a simple name, can't deduce what function it is.
        return

    if node.func.name not in self.TRANSLATION_FUNCTIONS:
        # Not a function we care about.
        return

    if not self.linter.is_message_enabled(self.MESSAGE_ID):
        return

    first = node.args[0]
    if isinstance(first, astroid.Const):
        if isinstance(first.value, basestring):
            # The first argument is a constant string! All is well!
            return

    # Bad!
    self.add_message(self.MESSAGE_ID, args=node.func.name, node=node)

Because the method is named "visit_callfunc", it will be invoked for every
function call found in the code. The "node" variable is the AST node for
the function call. In the first line, we look at the expression for the
function being called. It could be a name, or it could be some other
expression. Most function calls will be a simple name, but if it isn't a
name, then we don't know enough to tell if this is one of the translation
functions, so we return without flagging a problem.

Next we look at the name of the function. If it isn't one of the dozen or
so functions that will translate the string, then we aren't interested in
this function call, so again, return without taking any action.

The next check is to see if this checker is even enabled. I think there's
a better way to do this, but I'm not sure.

Finally we can do the interesting check: we look at the first argument to
the function, which remember, is not a calculated value, but a node in the
abstract syntax tree representing the code that will calculate the
value.

The only acceptable value is a string constant. So we can check if the
first argument is a Const node. Then we can examine the actual literal
value, to see that it's a string. If it is, then everything is good, and
we can return without an alarm.

But if the first argument is not a string constant, then we can use
self.add_message to add a warning message to the pylint output. Elsewhere
in the file, we defined MESSAGE_ID to refer to the message:

"i18n function %s() must be called with a literal string"

Our add_message call uses that string, providing an argument for the string
formatter, so the message will have the actual function name in it, and
also provides the AST node, so that the message can indicate the file and
line where the problem happened.
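For context, this is roughly the shape such a declaration takes in a pylint checker class. The msgid "C7401" and the symbol here are invented for illustration, not edx-lint's actual values; pylint's convention is a dict mapping msgid to a (message template, symbol, help text) tuple:

```python
# Hypothetical message table for the checker; id and symbol are made up.
MESSAGE_ID = "translation-of-non-string"
msgs = {
    "C7401": (
        "i18n function %s() must be called with a literal string",
        MESSAGE_ID,
        "gettext-style functions need a literal string so the string "
        "extraction tools can find the text to translate.",
    ),
}
```

The `%s` in the template is what the `args=node.func.name` argument fills in.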

That's the whole checker. If you're interested, the edx-lint repo also
shows how to test checkers, which is done with sample .py files, and .txt
files with the pylint messages they should generate.

We have a few other checkers also: checks that setUp and tearDown call their
super() counterparts properly, and a check that range isn't called with
a needless first argument.

The checker I'd like to write is one that can tell you that this:

self.assertTrue(len(x) == 2)

should be re-written as:

self.assertEqual(len(x), 2)

and other similar improvements to test assertions.

Once you write a pylint checker, you start to get ideas for others that
might work well. I can see it becoming a kind of mania...

Working in a Python project, it's common to have a clean-up step that
deletes all the .pyc files, like this:

$ find . -name '*.pyc' -delete

This works great, but there's a slight chance of a problem: Git records
information about branches in files within the .git directory. These files
have the same name as the branch.

Try this:

$ git checkout -b cleanup-all-.pyc

This makes a branch called "cleanup-all-.pyc". After making a commit, I
will have files named .git/refs/heads/cleanup-all-.pyc and
.git/logs/refs/heads/cleanup-all-.pyc. Now if I run my find command, it will
delete those files inside the .git directory, and my branch will be
lost.

One way to fix it is to tell find not to delete the file if it's found in
the .git directory:

$ find . -name '*.pyc' -not -path './.git/*' -delete

A better way is:

$ find . -name '.git' -prune -o -name '*.pyc' -exec rm {} \;

The first command examines every file in .git, but won't delete the .pyc it
finds there. The second command will skip the entire .git directory, and
not waste time examining it.
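To convince yourself of the difference, the pruning behavior can be checked on a throwaway tree (the paths here are scratch names for the demonstration):

```shell
# Build a scratch tree with a branch file that ends in .pyc, plus a
# genuinely stray .pyc file.
rm -rf /tmp/prunedemo
mkdir -p /tmp/prunedemo/.git/refs/heads /tmp/prunedemo/pkg
touch /tmp/prunedemo/.git/refs/heads/fix-.pyc
touch /tmp/prunedemo/pkg/mod.pyc

cd /tmp/prunedemo
# The -prune form: skip .git entirely, delete .pyc files elsewhere.
find . -name '.git' -prune -o -name '*.pyc' -exec rm {} \;

# .git/refs/heads/fix-.pyc survives; pkg/mod.pyc is gone.
```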

UPDATE: I originally had -delete in that latter command, but find doesn't
like -prune and -delete together: -delete implies -depth, and -prune has no
effect once -depth is in play. It seems simplistic and unfortunate, but
there it is.

A recent pull request for coverage.py by Conrad Ho added a timestamp to the
HTML report pages. Of course, it included tests. They needed a little
cleaning up, because they dealt with the current time, and tests that deal
with time always get complicated.

In the new test, run_coverage creates the HTML report, then the test reads
the HTML file directly, computes the expected timestamp, and checks that the
expected timestamp is in the file.

Seems straightforward enough, but there's a problem. Deep inside
run_coverage is a call to datetime.now() to get the current time to create
the timestamp. Then in our test, we call datetime.now() again to create
the expected timestamp. The problem is that because we call now() twice,
they will return different times. Even formatting to hours and minutes as
we do, the timestamps could be different.

This test will very occasionally fail: it is a flaky test, which is a very
bad thing. Some of the existing tests in the test suite weren't changed
in this pull request, but they were flaky in the same way: they create two
different HTML reports, and assert that the two are the same. But
run_coverage() in each calls now() at different times, so
the timestamps can differ in them. Some might say that the chances are
really small, and a very occasional test failure is not worth the extra
complexity. True story: the first time these tests were run on Travis,
they failed because of different timestamps!

One way to solve time problems like this is to mock out datetime.now(), but
that can be complicated.
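Part of the complication is that datetime.datetime is a C-implemented type, so its now attribute can't simply be monkey-patched; you end up patching the name at each use site, or injecting the clock. A hedged sketch of the injection style, with an invented make_timestamp standing in for the real report code:

```python
import datetime
from unittest import mock

def make_timestamp(clock=datetime.datetime.now):
    """Format the current time; `clock` is injectable so tests control it."""
    # make_timestamp is a made-up stand-in for the code that stamps
    # "created at ..." into the HTML report.
    return clock().strftime("created at %Y-%m-%d %H:%M")

# In a test, substitute a fake clock returning a fixed datetime, so the
# expected string is fully deterministic.
fake_clock = mock.Mock(return_value=datetime.datetime(2015, 4, 1, 12, 0))
stamp = make_timestamp(clock=fake_clock)
```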
So I took different approaches.

The second set of tests was straightforward to make impervious to the time
changes. For those, I amended get_html_index_content
to strip out the timestamp:

def get_html_index_content(self):
    """Return the content of index.html, with timestamps scrubbed."""
    with open("htmlcov/index.html") as f:
        index = f.read()
    index = re.sub(
        r"created at \d{4}-\d{2}-\d{2} \d{2}:\d{2}",
        r"created at YYYY-MM-DD HH:MM",
        index,
    )
    return index

Now the text of index.html doesn't have the timestamp, so the value of now()
doesn't matter, and the tests aren't flaky. These are tests of other
aspects than the timestamp, so it's fine to just remove the timestamp.

But the first tests were about the timestamp itself, so we can't just scrub
it from the output. For those tests, I chose a different approach: extract
the timestamp from the HTML, and check that it is a very recent timestamp:

def assert_correct_timestamp(self, html):
    """Extract the timestamp from `html`, and assert it is recent."""
    timestamp_pat = r"created at (\d{4})-(\d{2})-(\d{2}) (\d{2}):(\d{2})"
    m = re.search(timestamp_pat, html)
    self.assertTrue(m, "Didn't find a timestamp!")
    timestamp = datetime.datetime(*map(int, m.groups()))

    age = datetime.datetime.now() - timestamp
    self.assertEqual(age.days, 0)
    # The timestamp only records the minute, so the delta could be from
    # 12:00 to 12:01:59, or two minutes.
    self.assertLessEqual(
        abs(age.seconds), 120,
        "Timestamp is wrong: {0}".format(timestamp),
    )

Here I have a new method, assert_correct_timestamp. It takes the content of
the HTML, extracts the timestamp with a regex, converts it into a datetime,
and then checks that the datetime is recent. This fixes the flaky test: it
will not fail due to shifting time windows.

But now the test method has a bunch of code for figuring out if the datetime
is recent. And it has a bug: I used abs(age.seconds) < 120, which will
pass if the datetime is in the near future as well as in the near past.

This test has two ideas in it: get the timestamp from the HTML code, and
check if it is recent. Better would be to
factor out that second part
into its own datetime assert method:

def assert_recent_datetime(self, dt, seconds=10, msg=None):
    """Assert that `dt` marks a time at most `seconds` seconds ago."""
    age = datetime.datetime.now() - dt
    self.assertEqual(age.days, 0, msg)
    self.assertGreaterEqual(age.seconds, 0, msg)
    self.assertLessEqual(age.seconds, seconds, msg)

This assert method is purely about datetimes and their recency. We've fixed
the bug with the near future, and we can test this assert method directly to
be sure we have the logic right. With it, assert_correct_timestamp becomes
simpler:

def assert_correct_timestamp(self, html):
    """Extract the timestamp from `html`, and assert it is recent."""
    timestamp_pat = r"created at (\d{4})-(\d{2})-(\d{2}) (\d{2}):(\d{2})"
    m = re.search(timestamp_pat, html)
    self.assertTrue(m, "Didn't find a timestamp!")
    timestamp = datetime.datetime(*map(int, m.groups()))
    # The timestamp only records the minute, so the delta could be from
    # 12:00:00 to 12:01:59, or two minutes.
    self.assert_recent_datetime(
        timestamp,
        seconds=120,
        msg="Timestamp is wrong: {0}".format(timestamp),
    )

I like giving talks. I spend a lot of time on my presentation slides, and
have a typically idiosyncratic toolchain for them. This is how I make
them. Note: I am not recommending that anyone else
make slides this way. If you like it, fine, but most people will prefer
more common tools.

I generally favor text-based tools over WYSIWYG, and slides are no exception.
For simple presentations, I will use Google Docs. But PyCon talks are not
simple. They usually involve technical details, or involved explanations,
and I want to have code helping me make them. I choose text tools for the
control they give me, not for convenience.

HTML-based presentations are popular, and they suit my need for text-based
tooling. Other options include Markdown- or ReST-based tools, but they
remove control rather than provide it, so I prefer straight-up HTML.

There are a number of HTML-based presentation tools, like
impress.js and
reveal.js. For reasons lost
in the mists of time, I long ago chose one that no one else seems to use:
Slippy. Maybe someday I
will switch, but Slippy does what I need.

To make a Slippy presentation, I create a .html file, open it in vim, and
start typing. Each slide is a <div class="slide">. To see and
present the slides, I just open that HTML file in a browser. If you want
to see an actual artifact, click the "actual presentation" link on any of
my recent talks, or take a look at the repo for one of them:

When I need more power than just my own typing, I want to use Python to
produce content. In Pragmatic Unicode, I
used it to produce tables of character translations, and to run the Python
code samples. In Names and Values, I used
it to write Cupid figures.

To run Python code that can create content in my HTML file, I use
Cog, a tool I wrote that can execute Python code
inline in a text file, and put the output back into the file. I originally
wrote it to solve a different problem, but it works great here. It lets
me stick with a workflow where I have one file that contains both the
program and result.
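A minimal sketch of what that looks like in an HTML slide (the div and its content here are invented for illustration): the Python between the Cog markers runs, and its output is spliced into the file between ]]] and [[[end]]], where it can be regenerated any time.

```html
<div class="slide">
  <h1>Powers of two</h1>
  <ul>
    <!-- [[[cog
    import cog
    for i in range(4):
        cog.outl("<li>2**{} = {}</li>".format(i, 2**i))
    ]]] -->
    <li>2**0 = 1</li>
    <li>2**1 = 2</li>
    <li>2**2 = 4</li>
    <li>2**3 = 8</li>
    <!-- [[[end]]] -->
  </ul>
</div>
```

Because both the generator code and its output live in the same file, the slide source stays self-contained.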

Sometimes, I don't need Cog. Loop Like a Native
is just static text, with no generated content, so Cog isn't used there.

For explaining code, it's very helpful to be able to highlight individual
lines in a snippet on the screen. I couldn't find a way to do this, so I
wrote lineselect.js, a jQuery
plugin to let me select individual lines. While presenting, I use a
presentation remote with volume control buttons, and remap those keys to
j and k so that I can manually move the line selection as I talk.

As I write the presentation, I like working out what I am going to say by
writing it out in English. This helps me find the right way to explain
things, but has another huge advantage: it means I have a written
presentation as well as a visual one. It frustrates me to hear about
someone's great presentation, and then to have two options of how to learn
from it: either watch a video, or look at slides with no words behind
them.

When I write the English, I put it into the .html file also, interleaved
with the slides, as <div class="text">. CSS lets me hide those divs
during the presentation, but I can work in my HTML file and see the slides
near the text.

For publication on my site, I have a Python program that parses the HTML and
extracts the text divs into a .px file for insertion into my typically
idiosyncratic site publication toolchain.

Producing that .px file also involves producing PNGs from the slides.
Slippy comes with a phantomjs program
to do this which works well. The px-producing program inserts those PNGs
into the page.

As I say, I'm not explaining this to convince you to make slides this way.
Most people will vastly prefer a more convenient set of tools. I like the
control this gives me, and I like writing the kind of tooling I need to
make them this way. To each her own.

My dad and stepmother were here for lunch yesterday. It happened to be their
45th wedding anniversary, so we made them a cake. Not anniversary themed,
but theater-themed, because that is a huge passion of theirs. They both
worked in the theater,
and continue to help run the Barnstormers Theater
in New Hampshire.

So we made a theater cake, with stage, house (where the audience sits),
proscenium, lights, curtains, and some kind of confused production going on:

The view from backstage:

You can't see the seats; they are Rolos with Hershey bar backs. Sorry for
the poor photo quality...

I am on the plane back to Boston from PyCon 2015 in Montreal. You've
probably read over and over again that PyCon is the best conference ever,
yadda-yadda. I haven't been to another conference in a long time, so I
don't have points of comparison. I can tell you that PyCon feels like a
huge family reunion.

I started on Thursday, and was not feeling part of things. I don't know why.
I thought perhaps 9 PyCons in a row is too many. I thought maybe I should
be spending my energies elsewhere.

But Friday, I started the day by helping with the keynotes, keeping time,
tracking down speakers, and so on. I felt involved. I was helping friends
with things they needed to do.

PyCon is almost entirely organized and run by volunteers. There is one
employee; all the rest is done by people just helping as a side project. I
think this gives the event a tone of something you do, rather than
something you attend or consume. Anyone can volunteer to make things
happen, and it can be a really good way to meet people.

There are 2500 people at PyCon, but we are all in the same group. There
isn't an entire cadre of paid staff on one side, and attendees on the other.
We're all making the conference happen in our own ways. It's an open-source
conference in the truest sense of the word.

Adam

My co-worker Adam Palay gave his talk early on Friday. I'd first seen Adam
speak in a lightning talk at Boston Python.
His girlfriend Anne was there to record him. They seemed supportive and
close. I really liked the talk he gave, and told him so. When the call
for talks opened for PyCon, he let me know he was submitting a proposal,
and I helped him where I could.

His talk was accepted, along with mine and two other speakers from edX. For
each talk, we had a rehearsal at work, and at a Boston Python rehearsal night.
Each time Adam rehearsed his talk, his girlfriend Anne and his brother Josh
were there. I was impressed by their support. It turned out Anne was going
to not only come to Montreal, but attend the conference with him.

Friday morning at PyCon, I went to Adam's talk. Sitting in the second row
was Anne. Next to her was Josh. Next to him was Adam's sister, and on
either side were his mother and father, all with conference badges! I
joked about "Team Palay", and that the five of them should have held up
cards spelling P-A-L-A-Y.

Clearly, this level of support from a family is unusual: taking the time,
buying airfare and hotel rooms, and paying the conference fees, just to see
Adam present his 30-minute talk at a technical conference.

I'm explaining all this about Adam's supportive family because when I am at
PyCon, I feel a bit like Adam must all the time. I am surrounded by
friends who feel like family. We are brought together by an odd esoteric
shared interest, but we come together each year, and interact online
throughout the year. We are together to talk about technical topics, but
it goes beyond that.

I know this must sound like a greeting card or something. Don't get me
wrong: like any family, there is friction. I don't like everyone in the
Python world. But so many people at PyCon know each other and have built
relationships over years, there are plenty of friendly faces all
around.

All those friendly faces give rise to an effect my devops guy Feanil has
dubbed "Ned latency": the extra time I have to figure in when planning to be at a
certain place at a certain time. When traveling over any significant
distance at PyCon, there will be people I want to stop and talk to.

This is called the "hallway track": the social or technical activity that
happens in the hallways all during the day, regardless of the track talks.
I've spoken to people at PyCon who've said, "I haven't seen any talks!"

Jenny

Last year during lunch, I happened to sit next to a woman I didn't know. We
introduced ourselves. Her name was Jenny. We chatted a bit, and then
headed off to our own activities. Over the next few days, I'd wave to
Jenny as we passed each other on the escalators, and so on.

I saw Jenny again this year and miraculously remembered her name, so I waved
and said, "Hi Jenny." This happened a few times. Later in the weekend,
Jenny came up to me and said, "I want to thank you, you really made me feel
welcome."

This made me really happy. I was saying hi to Jenny originally so that I
would know more people, but we'd made a tiny connection that helped her in
some way, and she felt strongly enough about it to tell me.
Ian describes
a similar dynamic from the bag-stuffing evening: just learning another
person's name gives you a connection to that person that can last a surprisingly long time.

There are people I greet at PyCon purely because I've been chatting with
them for five minutes once a year at every PyCon I've been to.

Speaking

One of the highlights of PyCon for me is giving talks. I've spoken at the
last 7 PyCons (the talks are on my text page). I put a
lot of work into the talks, and am proud that they have some lasting power
as things people recommend to other learners.
After a talk, people always ask, "how did it go?" My answer is usually,
"people seemed to like it," but the other half is, "on the inside,
horrible. I know all the things I wish I had done differently!"

On Sunday evening, Shauna Gordon-McKeon and Open Hatch
organized an intro to sprinting session for new contributors. I agreed to
be a mentor there, thinking it would be a classroom style lecture, with
mentors milling around helping people one-on-one. Turned out it was a
series of 15-minute lectures at a number of stations around the room, with
people shuttling between topics they wanted to hear about. I was the
speaker on unit testing.

I was able to start by saying, if you really want to know about this, see
my PyCon talk from last year, Getting Started Testing.
Then I launched into an impromptu 15-minute overview of unit testing.

During one of the breaks, on my way to the water fountain, I passed a woman
in the hallway watching the talk on her headphones. She said it was great,
then later on Twitter, we had a typical PyCon love-fest.

To be able to see someone learning from something you've created is very
gratifying and rewarding.

Sprinting

I attended one day of sprints. My main project there was
Open edX, but I also said I would be
sprinting on coverage.py,
which I had never done before. I'd always had the feeling that coverage.py
was esoteric and thorny, and it would be difficult to get contributors
going. I was pleasantly surprised that five people joined me to make some
headway against issues in the tracker.

But some of the interesting bugs are about branch coverage, which I had
become somewhat frustrated by. I warned people that the problems might
require a complete rewrite, but they were game to look into it.

Mickie Betz in particular was digging into an
issue involving loops in generators. I was interested to watch her
progress, and helped her with debugging techniques, but was not hopeful
that there was a practical fix. To my surprise, a day later, she
submitted a
pull request with a very simple solution. Mickie has restored my faith
in humanity. She persevered in the face of a discouraging maintainer (me),
and proved him wrong!

Another sprinter, Jon Chappell, picked up an issue that was easy but
annoying to fix. Annoying because it was asking coverage.py to accommodate
a stupid limitation in a different tool. It was not glamorous work, but I
really appreciated him taking the task so that I didn't have to do it.

Two other sprinters, Conrad Ho and Leonardo Pistone, have each submitted a
pull request, and Leonardo is also chasing down other issues. Lastly,
Frederick Wagner has expressed interest in adding a warning-suppressing
feature.

A very productive time, considering I was only at the sprints for about
four hours. PyCon is amazing.

Juggling

One thing I've never seen at PyCon is organized juggling. I considered
bringing beanbags with me this time, but thought they would be heavy to
carry around. Then Yelp was handing out bouncy balls at their booth, so I
got four of those, and used them all weekend. It was a good way to play
with people, especially once we did some pair juggling.
Next year, I'll bring some serious equipment, and have a real open space
(or two!). Who's in?

All in all

I don't know why I felt off the first day. PyCon is an amazing time, and
now I again can't imagine missing it. It connects you to people. One
afternoon, an attendee pulled me aside to show me a bug in coverage.py. I
looked in the issue tracker, and saw that it had been written up four years
ago by Christian Heimes, who was attending PyCon this year for the first
time, and who I met at the bar on my first night!

PyCon energizes me, and cements my relationship to the entire Python world.
Sometimes I wonder about a programming language as the basis for a group of
people, but why not? They share my sensibilities and interests. They like
what I do, and I like what they do. We move in similar circles. Do you
need better reasons for a group of 2500 people to be close friends?

I gave my talk yesterday at PyCon 2015:
Python Names and Values. PyCon has always been good at getting videos
online, but they just keep getting better: the video was online the same
day.

People ask me afterwards how the talk went. I got good reactions, but I also
know what I would like to have done differently. I think I spoke too fast,
and I think I should have had more practical advice about not mutating
values if you can avoid it.

My youngest son Ben turned 17 today. He is fascinated with mushrooms, so
we made him a mushroom cake. Actually a trio of cakes:

It looks a bit like cupcakes, but no cupcakes were harmed in the making of
this cake.

The main mushroom has a stem ("It's called a stipe, Dad") made of two 4.5-inch
cake rounds. The cap ("pileus, Dad") was baked in the bottom of a
stainless steel mixing bowl. The two stem pieces bulged more than we
expected, so we sliced them off and made caps for the medium mushrooms.
They are supported by stacked Ring-Dings for the stem.