Category Archives: software

With the addition of automatic property synthesis to Objective-C, the @synthesize statement is usually no longer necessary. However, Xcode's refactoring operation has not been updated to match. Renaming ‘foo’ to ‘bar’ in the code below works – the property gets renamed to bar, but it remains linked to an ivar called _foo. Remove the @synthesize statement, however, and refactoring breaks: the implicit ivar will – after refactoring – be called _bar, but the init body will still try to assign to _foo.
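The shape of the code in question is roughly this (a minimal reconstruction; the class and property names here are hypothetical):

```objc
@interface Widget : NSObject
@property (nonatomic, copy) NSString *foo;
@end

@implementation Widget
// With this line present, renaming the property leaves _foo alone;
// remove it, and refactoring renames the implicit ivar to _bar
// without updating the assignment in init below.
@synthesize foo = _foo;

- (id)init {
    if ((self = [super init])) {
        _foo = @"default";
    }
    return self;
}
@end
```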

I just released the first version of SearchTouch, hosted on GitHub at https://github.com/spoogle/SearchTouch. This is a search engine written in Objective-C which compiles and runs on Mac OS X or iOS. It is designed to allow searches to be carried out efficiently on a device.

The code builds an index for a document set, consisting of an inverted index entry for each word that appears in the set. The index can be stored on the device, and can be searched efficiently to produce a ranked list of every document that contains all of a given set of search terms.
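The core idea can be sketched in a few lines of Python (SearchTouch itself is Objective-C; the function names and the ranking-by-occurrence-count used here are illustrative, not the library's API):

```python
from collections import defaultdict

def build_index(docs):
    """Build an inverted index: word -> {doc_id: occurrence count}.

    docs is a mapping of doc_id -> text.
    """
    index = defaultdict(lambda: defaultdict(int))
    for doc_id, text in docs.items():
        for word in text.lower().split():
            index[word][doc_id] += 1
    return index

def search(index, terms):
    """Return doc_ids containing ALL terms, ranked by total occurrences."""
    postings = [index.get(t.lower(), {}) for t in terms]
    if not postings:
        return []
    # Intersect the posting lists: only documents containing every term.
    common = set(postings[0])
    for p in postings[1:]:
        common &= set(p)
    # Rank by the total number of term occurrences, highest first.
    return sorted(common, key=lambda d: -sum(p[d] for p in postings))
```

Intersecting posting lists keeps the search cost proportional to the matching documents rather than the whole collection, which is what makes on-device search practical.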

There is a strict separation between the search and indexing code and the data structures used to store indexes. This separation is mainly achieved by defining the Index class as a class cluster, although a protocol is also used.

The main storage backend uses Core Data. There is a second backend which lives purely in memory, using CFTrees.

I have been using Wings3d recently to construct a 3d model of something I would like to print on a 3d printer. It is very interesting.

I have nearly finished my model, and one of the things I realize now is that I spent a long time working on one part of the model early on, only to see now that it represents maybe 10% of the volume of the finished whole. But when I started, I was focused on this one piece, and put a lot of detail into it. Now that detail is hardly visible in the final model. Worse, the extra detail results in a very large number of polygons for that one piece, with a number of negative consequences. I should have done a rough sketch of the whole model at the beginning; then I would have understood where to focus my time.

It’s a bit like software development, accelerated. Some important themes in common are:

Employ rapid prototyping at the beginning to understand the problem and what the important elements of a solution should be (see previous paragraph).

Big components can require a lot of maintenance. If you put a lot of work into a part of the system, you can make it beautiful and reusable. But you can also make it bloated, full of unseen problems, and hard to fix or adapt to new uses. For example, part of my model is a hand. When I originally constructed the hand, I added a lot of extra faces to get a smooth surface. Later, I realized I had made it too thin. Then all those extra faces bit me: I had to take care to move every one of them to make the hand wider.

Maintain modularity and decoupling. I have had some problems with Wings3d incorrectly applying transformations to selections containing multiple bodies. To get around that, I merged the separate parts of my model into a single body fairly early in the construction process. That helps the transformation problem, but now it will be hard to reuse the separate parts of the model.

Wings3d is a surprisingly excellent 3d modeling package, written in Erlang. I have been playing with it for a few weeks, using a prebuilt .dmg. I now need to be able to build from source so I can tinker.

I hit some rocks following the build instructions on the wings3d site.

First problem: the build of ESDL which the ESDL website claims is the latest is esdl-0.94.0615. This is outdated and incompatible with current Wings3d source code – there seem to be big differences in the OpenGL libraries. Get the actual latest version from the ESDL project on SourceForge instead. I got 1.0.1 and that seems to work.

After building ESDL, ESDL_PATH needs to be set correctly to compile Wings3d. I achieved that with:

export ESDL_PATH=/usr/local/lib/erlang/lib/esdl-1.0.1

Another problem with ESDL – somehow, the library files had been installed with the wrong permissions: they were owned by root with mode -rwx------, and so could not be read by Erlang. I fixed the permissions.
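The fix is a one-line chmod against the installed ESDL directory (shown here on a stand-in directory; against the real install path from above you would need sudo, since the files are root-owned):

```shell
# Reproduce the bad state on a throwaway directory, then apply the
# same fix used on the real ESDL install: make everything readable,
# and executable only where an execute bit already exists (a+rX).
ESDL_DEMO=$(mktemp -d)
touch "$ESDL_DEMO/sdl_driver.so"
chmod 700 "$ESDL_DEMO/sdl_driver.so"   # root-only, as installed
chmod -R a+rX "$ESDL_DEMO"             # the actual fix
ls -l "$ESDL_DEMO/sdl_driver.so"       # now -rwxr-xr-x
rm -rf "$ESDL_DEMO"
```

On the real install this would be `sudo chmod -R a+rX /usr/local/lib/erlang/lib/esdl-1.0.1`.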

Final ESDL problem – when I had built Wings3d and started it from the command line, I got an error:
Driver failed: dlopen(/usr/local/lib/erlang/lib/esdl-1.0.1/priv/sdl_driver.so, 2): no suitable image found. Did find:
/usr/local/lib/erlang/lib/esdl-1.0.1/priv/sdl_driver.so: mach-o, but wrong architecture

I added ‘-arch i386’ to the LDFLAGS and CFLAGS in the Makefile in the c_src directory of the ESDL sources and got an i386 .so library out as needed.
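The same effect can often be had without editing the Makefile at all, by overriding the variables on the make command line – assuming the Makefile uses CFLAGS and LDFLAGS conventionally, which may not hold for every project. A stand-in Makefile shows the mechanism:

```shell
# Demonstrate overriding CFLAGS from the command line on a throwaway
# Makefile; in the real ESDL c_src directory this would be
# `make CFLAGS="-arch i386" LDFLAGS="-arch i386"`.
printf 'all:\n\t@echo "CFLAGS are: $(CFLAGS)"\n' > Makefile.demo
make -f Makefile.demo CFLAGS="-arch i386"   # prints: CFLAGS are: -arch i386
rm -f Makefile.demo
```

Afterwards, `file priv/sdl_driver.so` is a quick way to confirm the architecture of the built library.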

Another problem is that the Mac-specific Makefiles and the Xcode project needed to build a .dmg specify Mac OS X 10.4. I am running 10.6.6 and do not have the 10.4 frameworks installed on this machine, so to build I needed to target 10.6 or the latest versions of the frameworks. This was achieved by deleting -isysroot /Developer/SDKs/MacOSX10.4u.sdk where it appears in various Makefiles, and editing the Xcode project in macosx/Wings3d.xcodeproj/ to change the target from 10.4 to 10.6 or latest.
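The Makefile edit can be scripted with sed; demonstrated here on a stand-in file, since in-place editing flags differ between GNU and BSD sed (point the same expression at the real Makefiles in the Wings3d tree):

```shell
# Strip the hard-coded 10.4 SDK flag from a Makefile line.
printf 'CFLAGS = -O2 -isysroot /Developer/SDKs/MacOSX10.4u.sdk\n' > Makefile.demo
sed 's| -isysroot /Developer/SDKs/MacOSX10.4u.sdk||g' Makefile.demo
# prints: CFLAGS = -O2
rm -f Makefile.demo
```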

That was enough to get the build to work, and to yield a .dmg containing an application bundle to install.

My first piece of fiddling has been to allow a mouse press while holding down the CMD modifier to fake a right-button click when a single-button mouse is being used. I did this by changing the first clause of handle_event_0 in wings.erl to translate such clicks into right-button events before they are dispatched.

Many web companies provide APIs which allow third parties to use and build upon their services. This allows the company to focus resources, creativity, and attention on the main thrust of their business, while benefiting from the resources, creativity and attention of the companies which use their APIs. I would say that these days, providing an API is best practice.

A good example of this is Twitter, a fascinating company. Since its earliest days, Twitter allowed third parties to access and modify user data through its APIs. As a result, a thousand Twitter clients have bloomed, providing users with a rich user experience and various tools while Twitter concentrated on the critical problem of scaling its service to meet its incredibly rapid growth.

A good example in a different domain is Wholefoods. When I walk around Wholefoods, I often pass tables offering samples of different products. Some of these tables may be run by Wholefoods, but I believe that many are run by third-party vendors. Wholefoods provides the infrastructure (building, heating, payment) which the vendors use. This is essential for the vendors, and works well for Wholefoods, because the vendors use their own creativity and business dynamics to provide something which Wholefoods in theory could provide, but which it does not have the time or interest to provide.

My recent experiments with robotics and sensors have been really eye opening. Almost everything that computers do is limited by available means of interaction. For the most part, output to the user is constrained to a few million scintillating points of light, and user input to a grid of 100 square tiles and a fake rodent. These provide sufficient bandwidth only for written and visual communication directly between the computer and the user.

A notable recent trend has been the expansion of user input mechanisms, particularly in gaming, where the intent of a three dimensional, mobile, interacting user has to pass through a communications channel of minuscule capacity (e.g. a joystick pad + fire and jump buttons) to instruct an increasingly three dimensional, mobile, interacting game character. So, Nintendo and others have brought us the analog joystick, the vibrating joystick, force feedback, the Wii controller. Apple understood that a touch surface is not just a way to swap a mouse for a pen (different physical input mechanism – same bandwidth), but a way to increase bandwidth (multi-touch). Microsoft have done something similar with the Surface (as far as I can tell, a ton of people would buy one at a price ~ iPhone’s $400 – Microsoft’s problem seems to be manufacturing).

Voice input has not yet broken through, although Google’s iPhone app is quite compelling (except for an unfortunate problem with British accents). A limitation there is the compute power needed to do speech recognition, something which Google smartly got around by sending the data to their servers for processing.

Another important kind of input and output is provided by the computer’s network connection, which admits information into the computer faster than a keyboard, but provides slower output than a visual display unit. The network connection does not usually provide data which is immediately relevant to the user’s immediate situation: it does not provide data relating to the user at a small scale, and does not provide information which is actionable at that small scale. By “small scale”, I mean the scale of things we can ourselves see, touch, taste, move, like, love. This is important, because most of what we experience, think, and do is carried out at this small scale.

Your phone might let you talk and browse the web. Your PC might be able to play you a movie or control the lights in your house. Your city might have a computer which monitors the traffic and adjusts the traffic lights. Your country might have a computer which predicts the weather or models the spread of disease, or which computes stock prices. The larger the area from which the computer’s inputs are drawn, the more the computed outputs are relevant to people in the aggregate, and the less they are relevant to people as individuals.

There is huge scope, and, I think, a lot of benefit, in making computation much more local and therefore personal. A natural conclusion (but in no way a limit) is provided by endowing every significant object with computation, sensing, and networking. I cannot put my finger on a single killer benefit from doing this… but I think that even small benefits, when multiplied by every object you own or interact with, would become huge and game-changing. You could use a search engine to find your lost keys, have your car schedule its own service and drive itself to the garage while you were out, recall everything you had touched or seen in a day. Pills would know when they need to be taken, food would know when it was bad.

Yesterday I wrote about my problems with Mozy. This morning I had a response from their tech support, and a comment to my blog post, both of which helped to clarify the issues:

From tech support:

Reinstall the software to correct the corrupted configuration database problem – not the subtlest of solutions but I will give it a bash.

When Mozy can’t find the files, it doesn’t actually delete them from the backup; it just says it does. When it tries to back them up again, it should find them still on the Mozy server and will compare them to the copies on my hard drive. If they are the same, the copy on my hard drive will not be uploaded again. I understand the approach, but in that case they should not say they are deleting the files – they should say they are marking them as not found.

Jimmy pointed out in the comments that a lot of the problem is my limited upload speed, which is not Mozy’s problem, but concludes that their service is not ready for prime time yet. Two fair points.

I would like to have a disaster backup, so that if the whole house burns down, and all the computers are burnt to cinders, I still have my photos, my music collection, my PhD dissertation and years of email.

A week ago I decided to try Mozy. The price was sweet, and the service backs up your files over the network to the Mozy servers in Utah (at least, that is where their offices are).

Problem number 1: I have 350 GB of files I want to back up and the maximum I can squeeze through my DSL’s limited upstream connection is 2 GB/day. That’s 175 days – nearly half a year – just to do the initial backup, during which time my internet connection is very slow. Ah, those memories of my first modem, a 300 baud dial-up. BAD MEMORIES.

Problem number 2: Mozy has a configuration utility you use to tell it which files to back up, but whenever I try to change that configuration, the utility crashes, complaining about a corrupted configuration database.

Problem number 3: I started my Mac this morning with the external hard drive turned off. Mozy backup started in the background, couldn’t find the 250 GB or so of files I had asked it to back up from that drive, and so concluded I had deleted them, and deleted them all from the backup on its servers. A week of uploading wasted.

I have emailed Mozy’s tech support. No reply yet. I am thinking I need to change tack and ask someone a few hundred miles away to look after a hard drive with a complete backup of my system on it.

The Chrome comic pointed out that the browser has gone beyond the web and is now often used for running applications, and therefore that we should adapt the browser to go beyond the web metaphor. The same considerations apply to the OS, coming from the other direction. One example here is files. On the web, references to resources can include some extra state information as part of the URL, e.g. http://www.example.com/mypage?tab=12&encoding=3&secret=banana. You can copy, email, or bookmark this URL and use it to return at a later time to the resource in the desired state. You cannot pass extra state as part of a filename. The best you can do is have the application which opens the file store this working state in the file along with the data.
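The contrast can be made concrete: the state in a URL of that form is trivially machine-recoverable, which is exactly what a filename lacks. A small Python sketch:

```python
# Parse restorable UI state out of a URL's query string.
from urllib.parse import urlparse, parse_qs

url = "http://www.example.com/mypage?tab=12&encoding=3&secret=banana"
state = parse_qs(urlparse(url).query)
print(state)  # {'tab': ['12'], 'encoding': ['3'], 'secret': ['banana']}
```

Any program receiving the URL can reconstruct the tab, encoding, and so on; a program receiving a filename gets only a name, and must look inside the file for anything more.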