A Budget Budget App

I have a very simple budget app I wrote a few years ago that I use to track spending. I recently wanted to add a feature (a short description alongside the amount spent), but once I started thinking about the actual task it didn't seem fun at all. I'd have to remember how the app worked, how the framework I was using worked, and how I deployed it. And then I'd have to write the feature. All told, it wasn't that much work, but no work sounded even better. And since there were features I no longer cared about anyway, I decided I would hack the functionality of the entire thing into a Google Form and go on with my day. I even got to use another old project to make it nicer. Here's how I did it:

I made a Google Form with two fields: amount and description. I set up a spreadsheet to store the information in. Google automatically adds a timestamp to each entry.

I published it and added it to my phone's homescreen so I can quickly add an expense.

I created a second page in the spreadsheet the form stores data in and wrote a little query to copy the data written into the first sheet into my second sheet. The sheet that the form writes into is called "expenses" and looks like:

The following query in A1 of the second sheet copies this data over:


=query(expenses!A1:C512,"SELECT A, B, C")

I do this rather than just using the page with the expenses directly, or copying them with simple references (like "=expenses!A1"), because every time data is input into the form it inserts a new row in the spreadsheet that captures the data. This breaks formula references. So the query above gives me a way to maintain references in my other formulae, because it copies an entire "area" over.

Then in columns D and E I extract the week number with:


=if(A2,WEEKNUM(A2,2),"")

And the year number with:


=if(A2,YEAR(A2),"")

And in two other cells I produce the current week and year using something similar to the above, but with the NOW() function:


=WEEKNUM(NOW(),2)

I store these in F2 and F4, and then produce my weekly total with one more query over the copied data.
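That weekly-total query might look something like this (a sketch: it assumes the amount lives in column B, with the week and year helpers in D and E as set up above):

```
=sum(query(A2:E, "select B where D = "&F2&" and E = "&F4))
```

The concatenation pulls the current week and year out of F2 and F4 so the total always tracks the current week.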

Company Clocks

It seems to me that there are two clocks that control every product company. One marks events that disrupt the work that its people do, like a bell tolling. Another measures how long it takes to get something done, like a stopwatch ticking.

The Bell

The Bell clock marks off moments in time when the company or a team changes in some way that disrupts things.
This might include:

A team loses or gains members

A project is canceled

A priority shifts

A competitor forces a position

A planning cycle ends and a new one begins

A piece of unmaintained code suddenly begins acting up

We can illustrate it like this.

Time flows left to right and the red lines demarcate disruptions.

The Stopwatch

The Stopwatch clock measures duration: how long it takes to get something done. It might be governed by:

Number of "stakeholders" that must be consulted

Number of people on a team

Amount of tech debt

Amount of product debt

Experience level of team

Conceptual clarity of the project

Compatibility of existing infrastructure with new needs

Release process

SLAs, contract with users

The length of time it takes for a new team to become effective

Primary communication media

We can illustrate it like this:

The blue blocks demarcate pieces of work beginning, existing and ending. The end of one of these is when some value has been added.

Things are never as regular as in the two illustrations above: some projects take longer, some teams have a harder time, and some changes affect only certain departments or teams.

When companies are young and small these times are both typically short. With regard to bell time, there are frequent shifts in priority as the team discovers what its users want and experiments with new ideas. But stopwatch times are short too: a deploy is just a keystroke away, there's no one to consult or get clearance from, and there are few users to annoy with new changes. There is little tech debt, which also means creating tech debt is very easy.

When companies are mature and large these times are both typically long. Assuming the business is solid and well-run, it may be that no one tries to change anything for fear of rocking the boat. By this point, if a competitor attacks them, it hurts but not a lot and they'll live because they know their domain deeply and have reserves to weather challenges. On the other hand if they do want to change something there are levels upon levels of people to consult before launch. They have to carefully thread this change through a dozen legacy systems. And a change is more likely to disrupt millions of people relying on old behavior which means they need to give users time to adapt.

Of course, few young and mature companies are without issue. But these are two modes I believe exist. And of course in some of these cases, abrupt changes and shifts still end projects. The point is that generally they give space for things to work.

The in-between case is interesting, and it's where I think a lot of medium-sized companies get into trouble. In this in-between case, changes happen nearly as often as (and maybe more often than) they do at small companies, but the company now has enough process and debt that responding to them happens slowly.

Viewed in the graphic above, all the durations that fall on a bell ringing are canceled projects, wasted work—inefficiency. I put the next project on a different line to show that there is some time gained. But what isn’t shown is that typically there was a string of other planned work—work that was going to follow the canceled project—that also vanishes.

In pathological cases it can feel impossible to get anything done: by the time you get a project underway, things have changed under you. And because the company isn't executing, it fails to meet its goals and decides to change again.

To recover from a position like this a company has to either lengthen the time between changes, or shorten the amount of time it takes to get things done.

A company will almost always try to shorten the time it takes to get things done, for two reasons:

Upper management has an outsized effect on bell time and refuses to relinquish it. In fact, reorganizations remain one of the only direct tools upper management has to influence things. They will always want the option to reorganize everything at any whim. And for the most part, that's a good thing (as long as it's used well).

Bell time is influenced by external pressures, namely competition and the market, so it’s harder to control in general.

There is a corollary to the first point above—upper management can dictate that stopwatch time change and push the responsibility down to the lowest parts of the company. In the worst cases management will simply demand that this time shrink with no further concessions. In the very worst cases they will do this but also force an artificial set of processes on top of it.

In the best cases, they will attempt to look at the variables that control these times and make hard decisions in order to be able to manipulate them. These are the companies that have a chance of succeeding beyond this point.

I think that most people who have worked at companies struggling with this will find the above familiar, maybe even obvious. But I also think sometimes obvious things are under-scrutinized because they are obvious. Every growing company should be thinking about these two clocks.

Some Trees

Lava

The air was biting. Lynn had not known Villi very long, and had serious doubts about staying in Iceland, as he wanted her to do. She had, in fact, pretty much decided to leave, and now, in the harrowing cold, she became certain that she wanted to go home. Plowing the snow with her feet to make large letters, she began to write I HATE YOU. She finished the "I" and was working on the "H" when she changed her mind. She felt ashamed. She knocked out the arm and one leg of the "H" and made an "L." Her completed message said I LOVE YOU. The sentiment distracted Villi on the mountain. His vigilance relaxed, and a glowing bomb, much larger than he was, landed beside him with a reverberant thud.

The "bomb" here is a rock ejected from a nearby volcano. A great miniature story embedded in John McPhee's essay "Cooling the Lava."

React and Google Maps: Binding events to an InfoWindow

Using React, Redux, and Google Maps together introduces a host of challenges. There are several packages available for joining them together, but for a recent project I found that none of them met my needs, although they ended up being good guides.

One of the issues is that InfoWindow supports an API that takes an HTML string to produce the inner content of the window. For static HTML this is fine, and the google-maps-react package simply takes any children supplied to its own InfoWindow class and renders them to a string using react-dom/server's renderToString method.

In my own InfoWindow component class, I dealt with this by relying on ReactDOM.render instead of ReactDOMServer.renderToString: I render only a wrapper div into Google's InfoWindow object and then use that div as the container for render.

Notes on Hosting a Static Site in a Google Cloud Provider Storage Bucket

Static sites are great, and hosting them in simple storage buckets like S3 or GCP Storage is cheap and easy. I've been drawn to GCP lately because the documentation and tools are often simpler than AWS's. However, GCP does have a few caveats for hosting static sites.

Let's look at the setup first. Once you've installed GCP's command line tool, gsutil, the quickest documentation for setting up a static site actually comes from the gsutil web --help command.


For example, suppose your company's Domain name is example.com. You could set up a website bucket as follows:
1. Create a bucket called www.example.com (see the "DOMAIN NAMED BUCKETS"
section of "gsutil help naming" for details about creating such buckets).
2. Create index.html and 404.html files and upload them to the bucket.
3. Configure the bucket to have website behavior using the command:
gsutil web set -m index.html -e 404.html gs://www.example.com
4. Add a DNS CNAME record for www.example.com pointing to c.storage.googleapis.com
(ask your DNS administrator for help with this).

Since you're creating a CNAME, you can't serve your site at http://example.com. There is no easy way to get around this. The best path forward is to use your registrar to forward example.com to www.example.com.

Single Page Apps will 404 on pages that don't exist

Step 3 in the directions above configures your bucket to use index.html as your MainPageSuffix. This means the user won't need index.html at the end of the URI.

If you're building a single page app, you probably want all urls to redirect to your index page so you can route on the client. To do this, you'd run the web command as:


gsutil web set -m index.html -e index.html gs://www.example.com

This works, but it unfortunately still returns a 404 response to the client. To the human on the other end, this doesn't matter, but to clients it may.

Drawing Machines

Sprezzatura

I re-made a little visualization I had on my old site that got lost somewhere along the way. Just a quick thing to get down an idea about how much is lost between the act of writing and what actually gets down on the page.

Quincunx Board

Using Flask jsonify with PyMongo Objects

Flask has a handy utility, jsonify, to return JSON-encoded responses from its web controllers. I like to use MongoDB for toy projects because it's easy to set up and get started with, but jsonify doesn't know what to do with PyMongo's ObjectId data type. Here's a drop-in replacement I've found useful on a few projects.


import json

from bson import json_util  # ships with PyMongo

def jsonifym(d):
    "jsonifier that works with mongo objects"
    return json.dumps(d, default=json_util.default)

def my_endpoint():
    return jsonifym(my_mongo_db_instance_object)

Getting Started with MTA Bustime Information

To get started, first you need to get an API Key from the MTA Website. Fill out the form and you'll get an email with your key.

The API has vehicle information and stop information. You can see a full list of parameters on the website, but the important ones you'll probably want to use are:

key: Your API key. You need to send this.

VehicleRef: This is the 4 digit number you see on the side of the bus. Why you'd care about an individual bus is beyond me. You're probably more interested in...

LineRef: An actual route ID. Formatted like MTA NYCT_B63.

Direction: Buses go in two directions, so this is either 1 or 0.

Line Monitoring: Shows all the buses on a line: their location, bearing, the direction they're progressing along the line, their status, and information for the next stop they will come to.

Stop Monitoring: Shows a single stop, including how far away the next bus is in distance and number of stops. For Stop Monitoring you have to pass along a MonitoringRef, which is the ID for a particular stop.

At this point, you're probably wondering where you can find all these wonderful ids that are required as parameters. Luckily the MTA has them available as text file downloads.


var url = "http://bustime.mta.info/api/siri/stop-monitoring.json";
var data = {
    key: 'GET_YOUR_OWN_KEY',  // see above for link
    LineRef: "MTA NYCT_B63",
    MonitoringRef: "MTA_305408"
};

// tells us how far away the bus is
$.ajax({
    url: url,
    data: data,
    dataType: 'jsonp',
    success: function (result) {
        alert(result.Siri.ServiceDelivery.StopMonitoringDelivery[0]
            .MonitoredStopVisit[0].MonitoredVehicleJourney
            .MonitoredCall.Extensions.Distances.PresentableDistance);
    }
});

Moving from Zepto to jQuery

I recently had the opportunity to port a large, fast-moving mobile web codebase from Zepto to jQuery. Thanks to Zepto's fairly complete API compatibility, this was fairly straightforward. For the most part, Zepto is a subset of jQuery, so you're going from an environment with less functionality to one with more. That said, there are a few differences. Here are some things worth looking for:

JSONP: Older versions of Zepto had you constructing your own callback URL. On the bright side, they used a completely different method, so it's easy to grep for. If you're dealing with a Zepto before 1.0, or even devs who are used to working with Zepto pre-1.0, ack for "ajaxJSONP" (the method is deprecated, but remains). jQuery doesn't have an ajaxJSONP method, so this is definitely something that could break.

data: The basic implementation of data in Zepto only stores strings. In jQuery, you can store complex objects. This is a place where jQuery is giving you more functionality, so if you're going from Zepto to jQuery you should be ok. If you're going in the opposite direction, you're going to have to do some ack'ing.

Promises / Deferreds: Zepto doesn't return promise objects from its ajax calls. If you've installed the popular Simply Deferred library into your Zepto install, you'll be just fine when you swap Zepto out for jQuery. I haven't tried this in the opposite direction. My hunch is that you'd be fine, but definitely watch for it.

Touch events: jQuery lacks the touch events that Zepto provides. We grabbed just the touch portion out of jQuery Mobile and threw it on top of jQuery, and the compatibility has been very good. We've found a few strange interactions, but they were usually with some overwrought code. If you're just using tap and swipe, you're probably going to be ok. jQuery Mobile's touch code also provides a ghost-click interceptor and is a little better about making it easy to override some of the threshold values it uses. Zepto has made some choices here that cause issues and don't seem to be getting fixed.

selector differences: One of the most subtle issues was with selectors themselves. Zepto delegates find() to querySelectorAll, which evaluates the ancestor parts of a descendant selector against the whole document, while jQuery scopes the selector to the element find() was called on. So a find() whose selector repeats the current element can match in Zepto and come up empty in jQuery.

You might be scratching your head and asking why on earth anyone would write a selector like that, but I did find a real-world example of this biting us. The engineer didn't want to add another class to the element they were looking for, but also didn't want to use a very general class name as a selector with no namespacing. So they added the current element to the find statement, saw that it worked in Zepto, and went on with their lives. While doing some manual testing, I realized that this caused an issue under jQuery.

I can't say this is a conclusive list, and you should definitely spend some time in Zepto's GitHub issues section, but this is a good place to start your sweep.

One last thing. You're going to shake your head, but if you're just doing an experiment, you're aware of all the above issues, and you're moving from the smaller, more constrained Zepto to jQuery, you can literally swap the jQuery script in where Zepto was loaded and expect pretty much everything to work.

URLs as Pointers

One: Every once in a while I'll be re-entering some little piece of information, for instance, my name, address and telephone number, and I'll think, this should just be stored somewhere for me so I can use it like a variable. It should be available anywhere, in any package I'm working in. In fact, in any language I'm working in. I should be able to update some resource with all my information, and the places that depend on it should just update. If I change my email address, it should update on my blog, on Twitter, everywhere.

Two: Git and Github have changed the way software folks learn and work. Now if you need some functionality, you just pull and start using it. Code is a resource, and it can be executed, but it requires you to pull it down and run it. What if code could be executed in place on Github?

This is just for fun, so don't get out your red pens yet:

Imagine a programming language that didn't store your data at memory addresses when it ran, but at www URLs, grabbing the source out of your source file and PUTting it at your domain on the web. And when it dereferenced a value, it would just run a GET on that URL. If you PUT some executable code at a URL, you would be able to POST to it with some data as args. And you could take that response and POST it somewhere else.

On top, we could build a syntax, and when I "run" a program, it PUTs and GETs all my functions and data and applies them with POSTs. I can create an entire web of functions and data and string them together with HTTP. You could use my functions and data. I could use yours.

Template or JSON decorator for Flask

Write your view function to return a context that will either be serialized as JSON or used as the template context.


from functools import wraps

from flask import jsonify, render_template, request

def template_or_json(template=None):
    """Return a dict from your view and this will either pass it to a
    template or render json. Use like: @template_or_json('template.html')"""
    def decorated(f):
        @wraps(f)
        def decorated_fn(*args, **kwargs):
            ctx = f(*args, **kwargs)
            if request.is_xhr or not template:
                return jsonify(ctx)
            else:
                return render_template(template, **ctx)
        return decorated_fn
    return decorated

Test Video Maker

Recently I've been working on a very simple JS video slideshow. The only thing technically interesting about it is the order in which it plays back videos. I have some rough unit tests, but to demo that it's working correctly to the client, sometimes it's nice to show it actually running the way they've asked you to make it.

Generally this involves me hunting around the internet for webm videos, and then trying to remember the order I've uploaded them in while we wait for the slideshow to move through its queue and insert new videos into the correct spot.

I don't know why it's taken me so long, but I finally knocked together this little command line utility so I can make webm videos with some text in them, making it easy to label them with ordinals.

I'll add support for other formats and sizes, but it's easy enough to grab the code and alter it to get this working for you.

How to Post a Photo to Facebook with Python

I don't know why you'd want to add content to facebook.com, but sometimes people ask you to do this for them. Adding photos can be a little tricky.

I was recently trying to do this in a Flask application using Flask OAuth. Unfortunately, if you pass a file along with the data to your remote method call, Facebook throws a dreaded "requires upload file" message. After doing a little research, it seems the problem has to do with the file data getting signed by the oauth2 library. All the other data should be signed; the file data needs to be added, unsigned, afterwards.

This means patching the libraries to do what you want. I started going down that road, and then I remembered a PHP example where the access token was simply passed along as a querystring parameter and the file was uploaded normally with an HTML form.

Turns out, this works wonderfully and you don't need to jump through all the oauth library hoops.

Here's some code using Flask OAuth to get the access token and Requests to post the data.

import requests
from flask import redirect, url_for
from flask_oauth import OAuth  # imports added for context; adjust to your setup

oauth = OAuth()
facebook = oauth.remote_app('facebook',
    base_url='https://graph.facebook.com/',
    request_token_url=None,
    access_token_url='/oauth/access_token',
    authorize_url='https://www.facebook.com/dialog/oauth',
    consumer_key="GET YOUR OWN KEY",
    consumer_secret="GET YOUR OWN SECRET",
    request_token_params={'scope': 'publish_stream, photo_upload'},
)

def get_image_path(filename):
    """Implement this to return an absolute path to the image.
    Maybe do something to avoid security issues."""
    pass

@app.route('/fb/auth/post/<string:filename>')
def post_to_fb_auth(filename):
    callback_url = 'http://localhost' + url_for('fb_authorized', filename=filename)
    return facebook.authorize(callback=callback_url)

@app.route('/fb/authorized/post/<string:filename>/')
@facebook.authorized_handler
def fb_authorized(resp, filename):
    access_token = resp['access_token']
    image = get_image_path(filename)
    files = {'source': open(image, 'rb')}
    url = 'https://graph.facebook.com/me/photos?access_token=%s' % access_token
    r = requests.post(url, files=files)
    return redirect(url_for("success"))

Just a few things to note: you have to add your callback URL to your Facebook app's configuration. This is the field called "Website with Facebook Login". In my example I'm just using localhost, but you can add a port if you're not on 80.

You'll also need to make sure you have the publish_stream and photo_upload permissions set up in your application.

Mouse Hit-Testing on a Canvas Using a Separate Buffer's Pixel Data.

While unsuitable for such a simple example, this is a demonstration of using a separate buffer for quick and dirty mouse selection in a canvas element.

Clickable items, here circles, are given an int id (random in this example, but these could increment from 0, etc.). This id is converted to a hex color which is used as a key in a dictionary pointing to the associated Clickable object.
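The id-to-color keying can be sketched like this (Python for brevity; the helper names are mine, not from the demo):

```python
def id_to_color(obj_id):
    # pack the integer id into a '#rrggbb' hex color string
    return '#{:06x}'.format(obj_id)

def color_to_id(color):
    return int(color.lstrip('#'), 16)

def pixel_to_key(r, g, b):
    # rebuild the dictionary key from an (r, g, b) pixel read off the hit buffer
    return id_to_color((r << 16) | (g << 8) | b)

# register a clickable under its id-derived color
clickables = {}
circle = {'x': 10, 'y': 20, 'radius': 5, 'id': 412}
clickables[id_to_color(circle['id'])] = circle
```

A mousedown handler then renders every clickable with its id color into the hidden buffer, reads the pixel under the cursor, and looks the object up with pixel_to_key.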

When the mouse is pressed, the clickables are rendered to a separate canvas with the same dimensions, but instead of using their ordinary display color as a fill color, we use the color derived from the clickable's id. The color is then read off the buffer and used to look up the object in the clickables dictionary.

At first glance this technique seems overly complicated, particularly for circles, where simply checking whether the mouse falls within the radius of each clickable object would be faster and easier. But it has a number of advantages: it works with arbitrarily shaped objects, it would work even if you were faking a 3d scene, and you don't have to do complicated math to translate your mouse coords if you're positioning your objects by transforming the canvas geometry with translation, scale and rotation calls. You can also easily change item depth by sorting the objects, so objects can occlude one another in whatever order you wish.

There are some caveats. Canvas doesn't let you turn off anti-aliasing, so you can get mis-attributed hits if you happen to click the edge of a shape, where the blended pixel color can match the id color of another object. It's very difficult to select more than one object at a time. You also need to make sure your canvas background color is not a viable id.

Installing Django on mod_wsgi, in a virtualenv behind Nginx on Ubuntu

These instructions will walk you through setting up Django in one of the preferred deployments: we'll use Nginx as a proxy that passes requests off to Django. One of the benefits here is that you can set up other sites on your server running on various technologies, and Nginx will just pass requests to the correct handler. This way you can have some static sites, maybe a CouchDB instance, a Rails app, a Django app, etc., all on the same server, which should be fine for low-traffic sites. Django will run on Apache under mod_wsgi and live inside a virtualenv, so you can install libraries or upgrade your Python without messing with your system Python.

I'm using a Linode instance with Ubuntu 10, but the instructions should be helpful for any Linux. One last thing, I'm not an ops guy. If you see something wrong, please let me know.

Before beginning, I'd just go read through the docs on deploying Django on mod_wsgi and running mod_wsgi in a virtualenv. Both are pretty good. If you have some experience configuring a server, they're all you need. If you don't, having them in your head will be handy as you work through the following, and they'll be crucial during troubleshooting. Either way, let's get started.

First, install virtualenv with easy_install. Create a directory in /usr/local/ called pythonenv and change into it, so you're in:


/usr/local/pythonenv

Then, create a clean virtualenv by running the command:


virtualenv --no-site-packages BASELINE

Create a user and under that create a directory sites/ and cd into it, so your pwd is something like this:


/home/someuser/sites/

In this folder, create a new virtualenv; this is the virtualenv that you'll install Django into.
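For example (the env name matches the activate command below; installing Django into it with pip is my assumption):

```
virtualenv --no-site-packages example.com.env
example.com.env/bin/pip install Django
```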

(Note, check your python version above). If you want, you can set up django-admin on your shell path. Complete details on this can be found on the Django
site.

At this point you should be able to activate your virtualenv and import django.


source example.com.env/bin/activate
python
>>> import django

Now, create or move your Django application code into the site directory you created a few steps earlier. You'll need to create a WSGI file if you don't have one already and add it somewhere in this folder. For help creating the WSGI file, follow the docs on djangoproject.com, but skip the Apache configuration part at the beginning. Just make the WSGI file and add it to a directory in your Django project. You'll also have to add the following to the top:

Change the directory above to match the site-packages directory in your new virtualenv.

I personally set up two versions of my Django settings under a common "configs" folder with separate subdirectories for wsgi and settings files which correspond to my development and production environments. This is a technique I think I got from Simon Willison, but you can keep it anywhere in your project. Most people create an "apache" directory and stick it there. Just make sure all the configurations coming up point to it.
Now follow the instructions to set up Apache2, but hold off on the virtual host configuration; before that, install mod_wsgi:
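On Ubuntu, that's the mod_wsgi Apache module package:

```
sudo apt-get install libapache2-mod-wsgi
```

And the virtual host configuration might look something like this (the paths, port, and log location are assumptions to adapt to your layout):

```
<VirtualHost *:8080>
    ServerName example.com

    WSGIScriptAlias / /home/someuser/sites/example.com/apache/django.wsgi

    <Directory /home/someuser/sites/example.com/apache>
        Order allow,deny
        Allow from all
    </Directory>

    ErrorLog /home/someuser/sites/example.com/logs/apache_error.log
</VirtualHost>
```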

The important thing in this configuration is that the Directory directive points
to the folder you put your Django wsgi file and the WSGIScriptAlias points
directly to it. While you're at it, you can place your error logs whereever you
want as well. The above might not be a great place for them. Make sure the
directories exist.

Ok, now enable this site by running a2ensite


sudo a2ensite example.com

This creates a symlink from your available sites to your enabled sites.

Also, edit your main apache2.conf file in /etc/apache2/ and at the bottom of the
file add:


WSGIPythonHome /usr/local/pythonenv/BASELINE

Now you've installed a virtualenv with Django and Apache with mod_wsgi, so we're getting close.
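Next comes the Nginx side. A minimal server block that proxies through to Apache might look something like this (the upstream port and server names are assumptions; refer to the Nginx documentation):

```
server {
    listen 80;
    server_name example.com www.example.com;

    location / {
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```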

Finally, you can add location configurations to the above to serve your static media. Something like this should work, but again, refer to the Nginx documentation.


location /media/ {
    root /home/someuser/example.com.env/site/static/;
}

Now, restart Apache and Nginx:


sudo /etc/init.d/apache2 restart
sudo /etc/init.d/nginx restart

Fire up a browser and go to your domain. Guess what? It didn't work. This is a complicated process
and you're bound to mess something up. The key here is that you need to check
your logs. Tail the nginx and apache error logs that you configured above and
fix each problem until you have this working.

If you see import errors, that means your Python path isn't correct. Make sure
you really understand it, fix it and don't forget to restart Apache when you're
testing fixes.

Emailing Content to Django with Webfaction's mail2script

Webfaction, my shared-hosting provider of choice, has a nice feature that allows you to pipe emails to a script. This is useful for all sorts of things, but I decided to try it out by building a way for a friend to email photos to the front page of his website, which is built with Django.

To start, create a python script (I called mine EmailPhotoUpload.py). You can place it anywhere really, but I placed mine right in the django app that had the models I would be working with. Make sure the script starts with


#!/usr/local/bin/python2.5

or whatever Python you're using with webfaction. Also, make sure it's executable:


chmod +x EmailPhotoUpload.py

When mail2script executes the script, it doesn't execute with knowledge of your Django project, so you'll need to set the DJANGO_SETTINGS_MODULE environment variable and place your project on your Python path if it's not already there. Something like this works:
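For example (the project path and settings module name here are placeholders for your own):

```python
import os
import sys

# put your Django project on the path and point Django at its settings
sys.path.append('/home/someuser/webapps/myproject')
os.environ['DJANGO_SETTINGS_MODULE'] = 'settings'
```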

Once you've done that, you can import the models you want to manipulate. I have a model called 'Photo' in the 'photo' app. So I import it like so:


from photo.models import Photo

The email is accessible at sys.stdin, which is a file-like object. Python's email module has a method that takes a file and makes a Message object, so this works out nicely:


msg = email.message_from_file(sys.stdin)

Now you can refer to the email module documentation to get your data. I started by writing a few helper functions to get the data I want out of the email.


import base64
import tempfile

def get_text(msg):
    "Looks for a part of the message with a text/plain mime type and returns it"
    text = ""
    if msg.is_multipart():
        for part in msg.get_payload():
            if part.get_content_type() == 'text/plain':
                text = part.get_payload()
    else:
        text = msg.get_payload()
    return text

def get_jpegs(msg):
    "Looks for any part of the message with an image/jpeg mime type and returns a list."
    jpegs = []
    if msg.is_multipart():
        for part in msg.get_payload():
            if part.get_content_type() == 'image/jpeg':
                data = part.get_payload()
                tempjpeg = tempfile.NamedTemporaryFile('w+b', -1)
                tempjpeg.write(base64.b64decode(data))
                jpegs.append(tempjpeg)
    return jpegs

A few things to note: I'm using Python's tempfile module to hold the jpegs until I can save them in my model. Django has its own NamedTemporaryFile class, but it's for Windows compatibility and since this is a linux environment, we can use the standard library's. All we need to do is take the payload from the message part, base64 decode it and write it into a file. If you expect large files, or many, you'll eat up memory here and you'll have to do something more sophisticated.

For any of the header information, just use the message's get method: pass in the name of the email header and then perform any cleanup on the result. For instance, I'm stripping the email address portion from the "From" header. Now just iterate through your list of jpegs and make a Photo object for each:
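A sketch of that loop, assuming a Photo model with an ImageField called image and a title field (the field names are my guesses; adjust to your model):

```
import os

from django.core.files import File

subject = msg.get('Subject', '')
for jpeg in get_jpegs(msg):
    jpeg.seek(0)  # rewind the temp file before handing it to Django
    photo = Photo(title=subject)
    photo.image.save(os.path.basename(jpeg.name), File(jpeg))
    photo.save()
    jpeg.close()
```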

Remember, jpegs is a list of temporary files; be sure to close them so the OS cleans them up.

Now log in to webfaction, create an email address and put the absolute path to this script in the target area.

Please note, this isn't very secure. Anyone with the address can post content, and if you're not cleaning it, they may be able to do worse--never trust user input.

I plan on releasing a cleaner, more reusable class-based version with some added features and exception handling, but for now I'll post the full text of what I describe above so you can see everything in context. Remember, you can test your script by downloading the raw email text and redirecting it to your script at the command line (for example, ./EmailPhotoUpload.py < raw_email.txt).