Monthly Archives: November 2006


Looks like Google have been busy. There have been major updates to Google Spreadsheets and Google Docs. You can access both at docs.google.com using your GMail account. In addition to creating documents and spreadsheets with Google's new online editors, you can also upload documents in MS Word, OpenOffice, RTF, HTML and plain text formats. Once you have created your document you can download it to your desktop in any of the above formats, and the same is true of Google Spreadsheets. You can also invite other users to share your documents online. Finally, you can now publish your online documents and spreadsheets directly to your blog!

SearchMash is Google's new web search interface, just without the Google branding, apparently in an attempt to ensure impartiality amongst test users. The new search interface boasts some cool features, integrating web, image, video, blog and Wikipedia searches into one dynamic page with modules that expand and collapse. I really do like it … it aggregates results from several sources onto one easy-to-use screen. Try it for yourselves.

Rob and I took a coffee break from refactoring some code yesterday and he asked me if I had watched Ramsay's Kitchen Nightmares the night before, which I had! We commented on how formulaic the show is: each episode Ramsay turns up at the door of an ailing restaurant and helps get it back on track and making money by empowering and motivating the cooks, using fresh ingredients, coming up with a simpler, less complicated menu, keeping management out of the kitchen and in the front of house where they belong, and above all putting the customer first!! Rob then said something along the lines of "You know, when you think about it, it's not too different to the problems that many software engineering firms face". That's when the penny dropped …

As a metaphor this should sound familiar to anyone working on a large (or even not so large) software project. Your coders are the equivalent of your kitchen brigade: the guys and girls who have to deliver, and without whom you can't serve anything to your customers. Your ingredients are APIs, frameworks and technologies. Your complicated menus are over-analysed, over-designed and over-complicated software architectures. And let's not forget your restaurant managers, who are the same as your project managers: they generally know sod all about writing code, promise your customers everything under the sun, generally work their brigade to death to deliver to unrealistic time frames, and have a penchant for blaming the coders when it all goes tits up!

Worst of all, the customer rarely gets what he or she actually ordered, because so many software companies still persist in following dated project management and development methodologies: trying to gather requirements up front, doing exhaustive analysis and design, then coding, and finally delivering a system twelve months later to a customer whose requirements have since changed.

Did I sound bitter there? It's probably because I realise that I've spent the better part of a decade working for the kinds of software companies where this kind of thing is considered the norm. Sadly, for many companies it still is the norm!

So let's apply the Ramsay formula to this. How do you turn around teams that are building software in the manner described above?

Firstly, you have to empower your brigade. In the past I've worked in places where the job of most developers is simply to "fill in the blanks" … in other words, to implement method stubs generated by a design tool that the architects use. This creates hierarchical divisions within teams and is generally extremely demotivating: like a cook who has no passion, has given up using his imagination and thinking creatively, and has been reduced to doing little more than reheating pre-cooked frozen hash.

In order to empower the team, they need to OWN the code collectively. It doesn't belong to one person; it belongs to everyone, and as a team they're passionate about it. You have to break people out of the traditional thinking of "I wrote this class, so people have to check with me if they want to change it!"

We need "fresh" ingredients. Yes, we should re-use software, but only when it's appropriate. How many times have you worked on software projects where the architecture isn't based on whether something is the right technology or tool for the job, but on other factors, like the company you work for having a cool licensing agreement with a vendor and wanting to use their technology because they don't have to fork out for something more appropriate? This square-peg-in-a-round-hole approach invariably leads to difficulties.

Have "simpler menus"! We need to get rid of complicated, over-architected up-front designs and over-analysed solutions, which are a product of old-style waterfall approaches. Teams need to be moving towards iterative, agile development methodologies. These processes engage the customers, who are able to provide feedback on the product at the end of each iteration, so this approach encourages customers to change their requirements if and when they want to. That means when you finally deliver … you're actually giving them what they want! And this squarely puts the customer first!

Keep management out of the kitchen. If you're the kind of organisation willing to invest in bringing a smart bunch of technical people together, then trust them to do their job. They don't need project managers standing over their shoulders asking for an hourly update on their progress. This kind of Orwellian micro-management is culturally ingrained in some large software companies and in my opinion it is very, very damaging. It fosters resentment, encourages bullying and totally demotivates developers because of the perpetual monkey on their backs!

I am so glad I'm not working for an organisation that suffers from these problems, and if you are, then maybe you should consider a change?

Todd Hollenshead from id Software was on hand at the launch of the new NVIDIA GeForce 8800 GPU. As part of the launch he showed off never-before-seen gameplay footage of this new game running on the new hardware … and it looks absolutely stunning! Watch it for yourselves here.

I was very impressed when I saw how far the OLPC project had come. This demonstration of the user interface on these Linux-based laptops is simple and intuitive, and a world apart from the bare Linux interfaces most of us are used to. You can also learn about the history and some of the issues faced by the project on Wikipedia, as well as view official information at the OLPC Homepage. The laptops will cost $100 and represent an opportunity to revolutionise how we teach the world's children, with an emphasis on developing nations where access to technology is limited for underprivileged children.

Some scientists at Tokyo's Keio University have developed a brolly with a digital camera, WiFi, and a projector built in. They're calling it "Pileus". It can be used to capture images as well as video while you're out and about ( in the rain 😉 ), which are then sent to Flickr and YouTube. Plus you can beam previously captured images and video down onto the ground using the built-in projector.

I've been trying to read up on developments in Artificial Intelligence, my primary motivation being a resurgent interest of my own in the field. I studied Artificial Intelligence at the University of Birmingham, and whilst my academic life was dominated by my interest in the subject, it's something I lost touch with during the course of my professional career, with the exception of a stint using artificial neural networks at Rolls Royce to extrapolate trends in turbine engines over normal and prolonged usage.

Anyway, I came across this panel discussion on Google Video. The panel discusses the question "What are the bottlenecks, and how soon to Artificial General Intelligence?" If you have the time, it's well worth watching; I have to confess I was engrossed. To summarise, the panel members stated that the bottlenecks or obstacles currently preventing projects pushing towards AGI include:

Lack of funding.

Nature of current programming languages, which are viewed as being cumbersome to work with.

Building an emergent system, rather than a system that can be incrementally tested.

Not enough people involved in research in this field.

Too much polarisation in terms of what researchers believe defines Artificial Intelligence, and the wildly different approaches adopted by researchers.

The inherent complexity of building a system capable of the level of generalisation required.

Too much research in the field focuses on building solutions to "toy" problems which aren't compelling enough to convince investors.

Our ignorance: how the hell do we build an intelligent machine? We don't even know what the goal is.

The lack of a common ontology and vocabulary to discuss the subject.

I won't bore you with the panel's wildly varying assessments of how long it will take: some believed within the next decade, whilst others believe it will happen towards the end of this century. One of the most interesting questions posed was "Have we reached the status of being a science?" The only panelist who answered stated "No, we've always been an Engineering discipline", and I think it's true to say that it's one divided into entrenched groups not willing or able to work with each other.

The AI research community is seemingly still split into advocates of Strong AI and advocates of Weak AI. There are those who believe the solution lies in mimicking the human brain: if you imitate the human brain closely enough you'll end up with a conscious, intelligent creature, since we ourselves are proof of this. On the other hand there are those who believe in a purely engineered solution, using software to study and accomplish specific problem solving or reasoning.

I'm concerned that over the last ten years the divisions within this discipline appear to have grown wider; however, I'm encouraged that one of the overriding and recurring points in this video is everyone's agreement that in order to move forward more collaboration is needed. I'll be following the initiatives mentioned in this talk closely at http://agiri.org/

I did find it amusing when someone commented during the discussion that AI researchers were perhaps too familiar with science fiction, and perhaps that was part of the problem! 🙂

MS Windows and Linux handle file locking differently. This article describes both approaches in detail. To summarise, though: when you open a file for reading under Windows, it prevents others from deleting it or writing to it; under Linux it does not.

I fell foul of this while trying to run some Java-based unit tests that worked perfectly well under Linux but started throwing errors on Windows.
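The difference is easy to reproduce. A minimal sketch, here in Python rather than Java, assuming a POSIX system (the function name is my own, purely illustrative):

```python
import os
import tempfile


def read_after_unlink() -> str:
    """Open a file, delete it while a handle is still open, then read it.

    On Linux the unlink succeeds: the directory entry disappears but the
    open handle keeps its data. On Windows, opening a file for reading
    takes a share lock by default, so os.remove() raises PermissionError.
    """
    fd, path = tempfile.mkstemp()
    with os.fdopen(fd, "w") as f:
        f.write("still readable")
    with open(path) as reader:
        os.remove(path)       # fine on Linux; PermissionError on Windows
        return reader.read()  # the open handle still sees the content


print(read_after_unlink())
```

Tests that create, delete and re-create temporary files behave exactly like this, which is why they can pass cleanly on Linux and fail on Windows.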

Next month sees the UK release of the complete second series of Ghost in the Shell: Stand Alone Complex 2nd Gig. I already own the complete first series as well as both movies. Over the last month I've tried to re-watch the first series, although I'm well aware that the second series isn't a continuation of the first or even of the two movies.

However, it's got me thinking about the pervasiveness of technology in society, and how the works of contemporary visionaries such as Masamune Shirow, the creator of the original Ghost in the Shell manga, and Kenji Kamiyama, who dramatised it into the anime series, as well as other sci-fi writers who have touched upon this genre, most notably William Gibson, Theodore Sturgeon, Isaac Asimov and Philip K. Dick, are all slowly being translated into reality. Shirow's work, however, focuses more tightly on the ethical and philosophical ramifications of the widespread merging of humanity and technology. The development of artificial intelligence and an omnipresent computer network set the stage for a re-evaluation of human identity and our sense of uniqueness … which has me hooked!

One of the overriding themes in Ghost in the Shell is how, in the future, the human race has been cyberized, with most humans choosing some degree of cyberization, whether that is as little as having a cyberbrain case installed or as much as trading their human body for a cyber body free from the weaknesses of our natural form. In Shirow's vision this cyberization means human beings are wired into a massive connected network, with information out there on the net accessed directly by the brain. When someone isn't connected to the net they are often referred to as being in autistic mode.

So where am I headed with all this?…

Well … with the growing popularity of existing services such as Second Life, one of several virtual worlds inspired by the science fiction novel Snow Crash by Neal Stephenson, it makes me wonder just what the appeal is of living in an artificial world.

Besides, Second Life isn't a game! It's a virtual world where some users have gone as far as setting up real monetary businesses and are earning a living from it; for others it's simply an escape from their First Life. In either case it's an interesting phenomenon. I'm not sure what it means for the human race, or even if it means anything at all; personally, my First Life is complicated enough without adding to it the pressures of a Second Life.

At Talis we've been doing a lot of work using VMware. I routinely run a Fedora Core 5 image for developing code in, which works wonderfully well on the Dell laptop I have, which has virtualisation support and lots of memory 😉 I also have Red Hat, KNOPPIX and SUSE VM images for trying out other things.

One of my colleagues, Rob Styles, introduced me to thoughtpolice.co.uk, who provide VM images of many popular distros. This lowers the barrier hugely, since I don't have to worry about installing a distro from scratch, which can be time-consuming. I use this site regularly and recommend that anyone thinking of setting up a virtual machine running a Linux distro check here before installing from scratch.