Python

conus.nitk.in is a simple visualization of the US, based on NASA's data. It scrapes the latest image from GOES-East, adds a timestamp, and displays it in an auto-refreshing format. The source script is called by cron every 5 minutes and lives here: https://alpha.nitk.in/liveusa.py. In fact, all of the source images and data are visible from alpha.nitk.in - the URLs below are just pretty aliases for the .html pages.

It's nothing groundbreaking, but it offers a near-live view (usually about 30 minutes off from real-time) of the US. I enjoy having the site up on a secondary screen just to watch the curvature of the earth spin by. (I'm planning to set up an old tablet in a picture frame on my desk to display the earth.)

For a more exciting take, have a look at http://animatedconus.nitk.in/ - that site shows the past 24 hours or so of GOES data, and you can see the sun rise and set.

Over the past few weeks, I've been developing a base station for Optimus'. (That's the IGVC robot's name.) To operate autonomously, Optimus' is outfitted with a slew of sensors. To keep tabs on Optimus' and his operation, the base station establishes a radio link with the robot: the robot constantly sends telemetry data out to the base station, and the base station periodically sends commands back to the robot.

Yesterday, I set up an account with Facebook. I'm not especially fond of the service, but my girlfriend's abroad, with metered internet, and wants to send me pictures. Facebook lets her upload once, and share many times. I'm also under pressure from other organizations to have a profile there. So I caved and set it up.

One of my fears about the service is spending too much time there, or checking too often to make sure I haven't missed any important messages. I know there are clients for the Macintosh that will pull Facebook notifications to the desktop, but I couldn't find a parallel for Linux. There are a few clients, but they're either processing-heavy or just broken.

After drawing basic shapes on the oscilloscope (circles, lines, and rectangles), I decided to try something more adventurous: bitmaps. Since the soundcard outputs no more than 192,000 samples per second and the oscilloscope doesn't retain data for much longer than 1/60th of a second, I knew that only small images would work.
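A back-of-the-envelope budget, using the figures above, shows just how small those images have to be:

```python
# Rough budget for one oscilloscope "frame", using the figures above.
SAMPLE_RATE = 192_000       # samples per second from the sound card
FRAMES_PER_SECOND = 60      # the scope retains a trace for about 1/60th of a second

samples_per_frame = SAMPLE_RATE // FRAMES_PER_SECOND
print(samples_per_frame)    # 3200 points to draw the entire image
```

A few thousand points per frame isn't much to draw a bitmap with.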

First, I needed to read images. The Python Imaging Library does an admirable job (and is free!). It works with most formats, and is capable of reading layered images (animation, for the layman). Most of my test images were GIF files, which posed a problem: GIFs are indexed. In an indexed image, each pixel has only a single value, rather than the usual three (red, green, and blue). The GIF includes a lookup table for translating that value into a color. Unfortunately, PIL doesn't include any way of running those numbers through the lookup table. Fortunately, it allows conversion to a normal RGB image.
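The lookup that conversion performs can be illustrated in plain Python (a made-up three-color palette, not PIL's internals):

```python
# Toy illustration of what an indexed image stores: one value per pixel, plus
# a palette mapping each value to an RGB triple. (A hypothetical palette, not
# anything PIL exposes.)
palette = {0: (0, 0, 0), 1: (255, 255, 255), 2: (255, 0, 0)}  # index -> RGB

indexed_pixels = [0, 1, 1, 2]                      # one number per pixel
rgb_pixels = [palette[p] for p in indexed_pixels]  # three numbers per pixel

print(rgb_pixels)
# [(0, 0, 0), (255, 255, 255), (255, 255, 255), (255, 0, 0)]
```

With PIL itself, the whole dance collapses to a single call: `Image.open("frame.gif").convert("RGB")`.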

I've seen people attaching microcontrollers to 2-channel oscilloscopes to draw pretty patterns. It looked cool, so I decided to try it myself. One problem: I don't know heads from tails when it comes to microcontrollers. I can, however, program. And my computer has a sound card, capable of producing two channels of 8-bit wrath.

The process was actually beautifully sequential. I'll spare you the details of how I found a library - suffice it to say that PyAudio allows sound to be injected into a stream and has reached maturity. I reverse-engineered the sound format using some of the sample recipes provided. PyAudio's streams store and read sound as a raw byte string. Initially, I assumed that the transform was based on chr(), where chr(0) corresponded to 0 volts and chr(255) reflected maximum volume. I was wrong.
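As a sketch of what such a byte string might look like - assuming a signed 8-bit format, which is one common convention and not necessarily the exact transform I found - here's one cycle of a sine wave packed for playback:

```python
import math
import struct

# One cycle of a sine wave as signed 8-bit samples -- the kind of raw byte
# string a PyAudio stream consumes. Signed 8-bit, where byte value 0 is the
# zero-volt centre, is an assumption here, not the reverse-engineered format.
N_SAMPLES = 64

samples = [int(127 * math.sin(2 * math.pi * i / N_SAMPLES))
           for i in range(N_SAMPLES)]
frame = struct.pack("%db" % N_SAMPLES, *samples)  # 64 signed bytes

print(len(frame))                           # 64
print(samples[0], samples[N_SAMPLES // 4])  # 0 127 (the centre, then the peak)
```

Under that signed-byte assumption, a frame like this would be handed to `stream.write()` on a stream opened with `format=pyaudio.paInt8`.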

Glyph is a small Python library that simplifies text-based GUIs and games. Its fundamental class, the Glyph (surprise!), consists of some number of ASCII images and a few other attributes (including position and layer). Glyphs relate to each other in a parent-child context. When a glyph is rendered, it renders each of its children, at the position they specify. The glyph inserts itself under any children with a positive layer number.

Each Glyph saves its image in a dictionary, where keys are (x, y) positions and values are the characters in those locales. When rendered, each Glyph's image is composited with all its children, then converted into a string for printing.

It doesn't sound so useful yet, does it? I'll present a few examples of its utility.

Simple games. The background acts as the root parent, and its children include the character and any interactive elements.

Status screens. Glyph simplifies building even a complicated status screen, full of both text and ASCII images. It's easy to place text within a frame, or organize data in rows and columns. If each piece of data is a Glyph, then they can be updated independently with the setImage function.
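Here's a minimal sketch of that compositing scheme - hypothetical code, not Glyph's actual API - with a tiny "game" as the usage example: a dotted background whose child, the player, sits on a positive layer:

```python
# Minimal sketch of the scheme described above: each glyph stores characters
# keyed by (x, y), and rendering merges children over (or under) the parent
# by layer. This is illustrative code, not Glyph's real interface.
class SketchGlyph:
    def __init__(self, image, pos=(0, 0), layer=1):
        self.image = image        # {(x, y): character}
        self.pos = pos            # offset relative to the parent
        self.layer = layer        # positive layers draw over the parent
        self.children = []

    def composite(self):
        """Merge this glyph and all its children into one {(x, y): char} dict."""
        merged = dict(self.image)
        for child in sorted(self.children, key=lambda c: c.layer):
            cx, cy = child.pos
            for (x, y), char in child.composite().items():
                spot = (x + cx, y + cy)
                # The parent sits under children with a positive layer number;
                # non-positive layers only show through empty cells.
                if child.layer > 0 or spot not in merged:
                    merged[spot] = char
        return merged

    def render(self, width, height):
        """Convert the composited dictionary into a printable string."""
        merged = self.composite()
        return "\n".join(
            "".join(merged.get((x, y), " ") for x in range(width))
            for y in range(height)
        )

# A tiny "game": the background is the root parent; the player is its child.
background = SketchGlyph({(x, y): "." for x in range(6) for y in range(3)})
player = SketchGlyph({(0, 0): "@"}, pos=(2, 1), layer=1)
background.children.append(player)
print(background.render(6, 3))
# ......
# ..@...
# ......
```

Moving the player is just a matter of changing `pos` and re-rendering, which is the appeal of keeping the image as a dictionary rather than a flat string.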

Construct a new word, using all but one of the letters of the original word.

Repeat until you have a one letter word.

I decided to replicate it using Python and a dictionary file, and here are the results. The program consists of a WordTree class and a small wrapper. Given a word to begin with, the class finds all permutations of the word, less one character. It then spellchecks each one, discarding the gibberish.

Next, it creates a child WordTree instance for each of its valid words. Through the magic of recursive processing, it creates a tree of slowly shrinking words.
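A condensed sketch of that recursion - not the original class, and with a tiny in-memory word list standing in for the dictionary file - might look like:

```python
from itertools import permutations

# A hypothetical, tiny word list in place of the dictionary file.
WORDS = {"plants", "plant", "pant", "pan", "an", "a"}

def word_tree(word, dictionary):
    """Map each valid shorter word to its own tree of still-shorter words."""
    children = set()
    for combo in permutations(word, len(word) - 1):  # every ordering, one letter short
        candidate = "".join(combo)
        if candidate in dictionary:                  # spellcheck; discard the gibberish
            children.add(candidate)
    return {child: word_tree(child, dictionary) for child in children}

tree = word_tree("plants", WORDS)
print(tree)  # {'plant': {'pant': {'pan': {'an': {'a': {}}}}}}
```

With a real dictionary file the branching is much wider, and the permutation count grows factorially with word length, so the progress notifications mentioned below earn their keep.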

WordTree has a few features that are just for show. It uses a custom __str__() function that prints a representation of the entire tree, using DOS box characters for prettiness. There are also two levels of notification for progress updates, and the dictionary is customizable.