The Javamex companion blog. This blog includes both technical articles relating to the programming information that you'll find on the Javamex site, plus information covering the IT industry more generally.

Tuesday, September 27, 2011

There appear to be some interesting legal cases springing up around the fact that software developers have been taking "raw" Unique Device IDs (UDIDs) and using them to store data against a given user 'in the cloud'. UDIDs essentially resemble a long 'random number' rather than directly encoding any user-identifiable information, so you may be forgiven for wondering what the issue is. The problem comes when applications have then stored user-identifiable information (such as social network data) alongside the UDID and made this information available to other applications, thus leading to a 'leaking' of user-identifiable information on the basis of this shared UDID.

The solution is in principle simple: append the UDID to an application-specific code or 'salt' and then encode this combination using a secure hash scheme such as SHA-256. The application-specific salt doesn't need to be a secret. But doing this means that I cannot compare a hash for user X from one application with a hash for user X from another application and deduce that they are for the same user. Again, in principle.
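A minimal sketch of this scheme in Java (the salt and UDID values here are made-up examples, and I'm combining salt and UDID by simple concatenation — the essential point is just that each application hashes with its own salt):

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

public class UdidHash {
    // Hash the combination of an application-specific salt (which need not be
    // secret) and the raw UDID, using SHA-256.
    static String hashUdid(String udid, String appSalt) {
        try {
            MessageDigest md = MessageDigest.getInstance("SHA-256");
            byte[] digest = md.digest((appSalt + udid).getBytes(StandardCharsets.UTF_8));
            StringBuilder sb = new StringBuilder();
            for (byte b : digest) sb.append(String.format("%02x", b));
            return sb.toString();
        } catch (NoSuchAlgorithmException e) {
            throw new AssertionError(e); // SHA-256 is guaranteed on all JVMs
        }
    }

    public static void main(String[] args) {
        String udid = "2b6f0cc904d137be2e1730235f5664094b831186"; // made-up UDID
        // Two different applications hash the same UDID with different salts:
        String hashA = hashUdid(udid, "com.example.appA");
        String hashB = hashUdid(udid, "com.example.appB");
        System.out.println(hashA.equals(hashB)); // prints "false"
    }
}
```

Because the two applications use different salts, the two stored hashes for the same physical device differ, so (in principle) neither application can link its records to the other's.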

However, this scheme does rely on UDIDs containing sufficient entropy. Since the number of devices of a particular model sold is maybe in the tens or at most hundreds of millions (iOS devices appear to have sold something in the order of 200 million so far), if it is possible to make a good prediction of which range of the theoretically possible UDIDs has actually been allocated, then I can simply pre-compute the hashes of all the potential combinations of (allocated UDID, application salt) for two given applications and find all the matches.
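To illustrate, here is a toy sketch of the attack (the salts and the tiny 1,000-value "allocated UDID" space are invented for illustration; a real attacker would enumerate the realistically allocated range, which at a few hundred million values is well within brute-force reach for a fast hash like SHA-256):

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;

public class HashMatchAttack {
    static String sha256Hex(String s) {
        try {
            byte[] d = MessageDigest.getInstance("SHA-256")
                    .digest(s.getBytes(StandardCharsets.UTF_8));
            StringBuilder sb = new StringBuilder();
            for (byte b : d) sb.append(String.format("%02x", b));
            return sb.toString();
        } catch (Exception e) {
            throw new AssertionError(e);
        }
    }

    // Given a salted hash observed in one application's data, try every
    // candidate UDID in the predicted allocation range until one matches.
    static Integer linkHash(String observedHash, String salt, int maxUdid) {
        for (int udid = 0; udid < maxUdid; udid++) {
            if (sha256Hex(salt + udid).equals(observedHash)) return udid;
        }
        return null; // no candidate matched
    }

    public static void main(String[] args) {
        String saltB = "com.example.appB"; // salts are public by assumption
        // A hash leaked from application B's data for some unknown device:
        String observed = sha256Hex(saltB + 42);
        // Brute-force the (small) candidate space to recover the raw UDID,
        // which can then be re-hashed under application A's salt and matched:
        Integer recovered = linkHash(observed, saltB, 1000);
        System.out.println("matched UDID " + recovered); // prints "matched UDID 42"
    }
}
```

Once the raw UDID is recovered, hashing it under the other application's (public) salt links the two users' records, which is exactly what the salting was supposed to prevent.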

So why didn't people think of this in the first place? I wonder if one of the issues is that to a human, UDIDs do just look like 'random junk': it's just a string of random numbers, right, so why would you bother encrypting it? It's a good example of how when deciding when and how to employ encryption, we have to think not only about the data itself but also about protocols and practices governing how that data is used and stored.

Saturday, September 17, 2011

Matthew Baxter-Reynolds wrote an interesting piece for the Guardian yesterday giving some points of view essentially on what Windows 8 will mean for businesses and IT careers. In particular, he makes the point that writing for a Windows 8 device means a more natural progression from the "C# in Visual Studio" type development that is the bread and butter of most business applications. And he makes the point that this time around, Microsoft will be fostering a tighter coupling of software and hardware platforms to move closer to the model of the iPad, part of whose success relies on it being more of a "self-contained ecosystem".

From this point of view, I think Matthew is probably correct: if the thing you want to concentrate on is writing boring old "bread-and-butter" business applications, then a platform that builds on the existing "bread-and-butter" platform of C# in Visual Studio will be a more attractive proposition for businesses to get a footing on the tablet bandwagon. And even if it wasn't more attractive from a development perspective, "Microsoft Windows tablet" may just sound a bit more 'serious and businessy' on a tender bid.

But, I think Matthew could have included a few other important observations (which don't necessarily contradict his point of view and if anything support it-- but which are nonetheless worth mentioning):

the iOS "ecosystem" may still present an attractive market to developers in the sense that Apple have done the job of (a) isolating the 100+ million people with sufficient income to splash out on a fancy toy; (b) selling them that toy on the basis of it providing enjoyment: or in other words, persuading them that it is to their benefit to spend money on this new gadget (and associated apps); and (c) building a system for developers to market fairly effortlessly to that income-to-spare-for-toys-and-apps sector;

games (and, apparently, knocking out games that you can sell for a buck or two a download) remain the predominant iOS market; the iOS development framework allows you to write virtually all of your application in a bog-standard C/OpenGL paradigm which will allow the creation/porting of a huge number of games with a minimal learning curve;

while slightly quirky, in a sense Objective-C is just "another C syntax-based language" and if you stick around in programming long enough, you generally end up learning a new C syntax-based language every 10 years or so; indeed, "Java" as it looks today, and certainly how it will look if currently discussed language features make it into Java 8, is almost a whole new language compared to how Java was when it first materialised back in 1995 (anyone remember when Java didn't have inner classes, let alone generics or closures?);

given that many business applications can and indeed ought to be written as web apps (a point which Matthew himself makes), for as long as HTML5/Javascript remains a standard enough development paradigm, I wonder whether, even on the Microsoft tablet, "C# in Visual Studio" will actually become the paradigm of choice for tablet business applications anyway?

So, while I think Matthew is probably right that we could end up with an 80:20 split in one market versus a 20:80 split in the other, I don't know that that means that the "not-the-boring-bread-and-butter-database-application" market isn't viable.

Wednesday, September 14, 2011

It hardly seems 5 minutes since Windows 7 was the Next Big Thing. But apparently Windows 8 will soon be upon us, the principal motivation apparently to allow Microsoft to have another crack at jumping on the tablet bandwagon. It will be interesting to see how many people are actually burning to have Windows on their tablet, in whatever touchy-tabletified guise.

I'm quite happy not to have the following features on my iPad, for example:

the need to grind the entire system down to the speed of a ZX Spectrum thanks to the latest antivirus software

irritating pop-ups every five seconds requesting permission for applications to do their basic job

having to battle with a system that insists on saving documents in fundamentally stupid locations buried somewhere 5 million levels deep in the filing system

the need for every Application That Goes Ping to take up 7 squillion terabytes of disk space, delay startup by half a minute and require at least 2 resets before it will work

But of course, I am open minded and eagerly await the first demonstration of the wondrous crop of Windows 8 tablets which will not fail to ensue.

Monday, September 12, 2011

There's an interesting petition being made to the UK government to teach programming to children from primary school age rather than waiting until later in their school career. I'm not sure that the rationale specifically given by the petition's initiator-- that there would be "far less of a disparity between the sexes"-- is terribly convincing as it stands. It's not clear to me either that there's really some great barrier preventing female students from learning to program if they choose to, one that will be broken down specifically if 9-year-olds are taught programming. And in any case, it's not clear to me that even if it were the case, it would be the most compelling reason for teaching programming. However, whether that proposition proves true or false, I think there are other valid reasons for teaching programming at this early stage. And so I support the petition itself, even if not the exaggerated gender-gap mantra that seems to be its underlying motivation.

I think a more compelling rationale is to be found in the types of skills that programming develops and simply in the ubiquity of programmable devices. As has been expressed elsewhere, learning to be a good programmer seems to entail making certain problem-solving skills become intuitive. (As one fellow blogger has put it, learning to "algorithmate".) And in today's world where the average iPhone packs computing power that would have classed it as a mainframe not so many years ago, the potential for innovation through programming is practically beyond comprehension. I also suspect-- though I concede that I am also guilty of fluffy argument here and would be hard pushed to substantiate this view-- that the mindset that is made intuitive through learning to program gives people a more intuitive grasp of other concepts crucial to understanding our universe, be it genetics or the syntax of language. But in any case, the computer is now so engrained into our day-to-day devices that I would argue that "understanding our universe" really entails understanding how the microchip and computer work just as much as it does understanding the atom, the laws of thermodynamics etc.

Or put another way, there's really no reason nowadays to view programming as a uniquely specialist, "nerdy" subject.

However, I have some implementational concerns. I worry, if programming was brought earlier on to the curriculum, about how severely it would be dumbed down; educators need to understand that "programming" doesn't mean drawing inane patterns with "turtle graphics" or similar glorified spirograph imitations. I also have mild concerns-- though from a UK perspective, yes, they are mild-- that having a stronger computing component to the curriculum could widen existing digital divides if, for example, basic computer literacy was sidelined. (However, I do disagree with an apparent over-obsession with "teaching" skills such as browsing the Internet or word processing to those children for whom such skills are as intuitive as operating their iPod.)

I look forward to seeing what success this and other similar petitions in other countries may have. And if you are based in the UK, I urge you to sign this petition and encourage your friends and colleagues to do likewise.

Sunday, September 4, 2011

As we approach the predicted launch date of the iPhone 5, we unsurprisingly start to see more of the media speculation machine whirr into action. Key specifications that are pretty much on the cards are the dual-core A5 processor and an 8 megapixel camera. And the new version of iOS isn't a terribly big secret given that any iOS developer can download the beta version.

Now, perhaps I'm missing something, but it strikes me that if this is the big spec improvement to make a big fuss about, it seems pretty unexciting.

The dual-core processor will hopefully make, for example, video editing a little less klunky. And it may be put to good use in a few specialist applications such as to allow extra tracks or more complex virtual instruments in some of the excellent music making software that has appeared for iOS over the last couple of years. But for your bog-standard iPhone game, I suspect the extra capacity will go largely unused: the standard game programming paradigm simply doesn't use multithreading, or uses it only to a very minimal extent (e.g. sound playback is handled by background threads). While I'm not sure whether to agree with Don Knuth's take that multi-core processors are essentially a passing gimmick while we work out how to get faster single cores, it's true that what game programmers generally want is a single core that runs as fast as possible.

As for an 8 megapixel camera instead of a 5 megapixel one. Well, I can't help shrugging my shoulders a little. So maybe we'll get a 2x digital zoom that's just about worthwhile if the extra megapixels actually add resolution rather than just being extra noise. But unless Apple is about to announce that they've just designed a revolutionary new lens, a few extra megapixels don't sound terribly exciting on the face of it.

Saturday, September 3, 2011

Bruce Schneier has an interesting summary of the recent diplomatic cable disclosure, which came about due to (a) the encrypted file being (surreptitiously but) publicly available, and (b) the decryption key being publicly disclosed on a separate occasion. It's a good example of how what can often go wrong with cryptography is not the core technology or algorithm itself, but a failure in handling associated protocols (be it technical or social).