And so the iOS-ification of Mac OS X continues. Apple has just announced that all applications submitted to the Mac App Store will have to use sandboxing by March 2012. While this has obvious security advantages, the concerns are numerous - especially since Apple's current sandboxing implementation and its associated rules make a whole lot of applications impossible.

I'm not sure if you're aware of how the black hat industry works. Make no mistake, this is a multi-million-dollar industry. There are people out there who make a living out of it. There are people who do nothing all day but find these zero-day bugs. And when they find them, they sell them on the black market for hundreds or thousands of dollars. These aren't the kinds of bugs that come to light through patches; the black hat industry has moved beyond that. These are bugs that aren't known to their respective vendors and aren't patched in any of their products. This information is then bought by malware writers, who exploit it in their malicious code for keylogging, botnets, whatever. Not a hair on my head thinks black hats are incapable of writing Stuxnet-like functionality. Don't underestimate these guys, they're way smarter than you think.

I don't think black hat guys are stupid or incapable of pulling off top-quality exploits. For all I know, Stuxnet may just have been the American government hiring some black hats. But it is a fact that the more information about an exploit spreads, the more likely it is to reach the ears of developers, who will then be able to patch it.

So if a black hat has a high-profile, Stuxnet-like exploit at hand, won't he rather sell it for a hefty sum of money to high-profile malware authors, who will then use it to attack high-profile targets, than sell it for the regular price to some random script kiddie who will use it to write yet another fake antivirus that displays ads and attempts to steal credit card information?

True. On a Mac, .pkg/.mpkg packages do that. They are actually little more than a bundle of archive files plus some XML data describing their contents. The format supports scripting, resources, …

Indeed, these are relatively close to what can be found on Linux. Now, personally, what I'd like to see is something between DMGs and this variety of packages: a standard package format which does not require root access for standard installation procedures and has an extremely streamlined installation procedure for mundane and harmless software, but still has all the bells and whistles of a full installation procedure when it is needed.

It's an interesting train of thought, but I still think there would be a lot of human design decisions to be made for the different devices, and I don't know if the net gain of letting the computer do this would be greater than just redesigning the UI yourself, especially on iOS devices, where it's trivial to set up a UI.

Oh, sure, I'm not talking about making UI design disappear, just shifting the balance of what's easy and what's difficult in it a bit, in favor of making software work for a wider range of hardware and users.

Adopting a consistent terminology, designing good icons, writing good error messages, avoiding modals like the plague - many ingredients of good UI design as it exists today would remain. But making desktop software scale well when the main window is resized, or designing for blind people, would become easier. The price would be paid in terms of how easy it is to mentally picture what you are working on during design work, making good IDEs even more important.

It has to have the functionality to support the use cases for the device. Everything else is just clutter.

This is not as trivial as you make it sound, though. Sometimes, the same use cases can be supported with more or less functionality, and there is a trade-off between comfort and usability.

Take, as an example, dynamically resizable arrays in the world of software development. Technically, all a good C developer needs in order to do that is malloc(), free() and memcpy(). But this is a tedious and error-prone process, so if arrays are to be resized frequently (as with strings), stuff which abstracts the resizing process away, such as realloc(), becomes desirable.

But that was just a parenthesis.

Some UIs are basically displays of underlying functionality. These tend to be very tedious and time-consuming to work with. There are others which actually take the effort to make the translation between a simple user interaction and the underlying technology. A lot of thought can go into figuring out how these interactions should present themselves to the user, and in some cases, it takes an order of magnitude more effort than it takes to actually write the code behind it.

Well, we totally agree that UI design really is tedious and important stuff, and will remain so for any foreseeable future.

You're looking at it from a developer perspective, I'm looking at it from a user perspective. As a user I don't care if there's a windowing technology behind it or not. I don't see it, I don't use it, so it doesn't exist.

By this logic, a huge amount of computer technology does not exist, until the day it starts crashing or being exploited as a result of being treated as low-priority because users don't touch it directly.

More seriously, I see your point. Mine was just that if you took a current desktop operating system, set the taskbar to auto-hide, and used a window manager which runs every app in full screen and doesn't draw window decorations, you'd get something extremely close in behaviour to a mobile device, and all software which doesn't use multiple windows wouldn't need to change one bit. So full-screen windows are not such a big deal as far as UI design is concerned, in my opinion.

Desktop computers have windowing functionality (the classic Mac OS even had way too much of it). There are more differences than that. Some popups, like authorizations, are modal; some others, like notifications, are non-modal. The way they display these things is different as well. But these are just individual elements, and in the grand scheme of things, trivialities.

And mobile OSs have modal dialogs and notifications too. No, seriously, I don't see what the deal is with windows on mobile devices. AFAIK, the big differences, as far as UI design is concerned, are that there is very little screen real estate and that touchscreens require very big controls to be operated. But you talk about this later, so...

(...) Good tablet apps are laid out differently than good desktop apps. This is not a coincidence. Some of those differences are based on the different platform characteristics, as you mentioned. But other reasons have to do with the fact that the use cases for these apps differ greatly. I'm convinced that when you are designing UIs, you have to start from the user experience and define these use cases properly to be able to come to an application design that's truly empowering your users.

And this is precisely an area where I wanted to go. Is there such a difference in use cases between desktops and tablets? I can use a desktop as well as a tablet to browse the web, fetch mail, or play coffee-break games. And given some modifications to tablet hardware, such as an optional stylus, and the addition of more capable OSs, tablets could be used for a very wide range of desktop use cases.

Now, there is some stuff which will always be more convenient on a desktop than on a tablet, and vice versa, because of the fundamental differences in hardware design criteria. But in the end, a personal computer remains a very versatile machine, and those we have nowadays are particularly similar to each other. Except for manufacturers who want to sell lots of hardware, there is little point in artificially segregating "tablet-specific" use cases from "desktop-specific" use cases. That would be like deriding laptop owners who play games because they don't have "true" gaming hardware, which I hope you agree would be just wrong. Everyone should use whatever works best for them.

So if a black hat has a high-profile, Stuxnet-like exploit at hand, won't he rather sell it for a hefty sum of money to high-profile malware authors, who will then use it to attack high-profile targets, than sell it for the regular price to some random script kiddie who will use it to write yet another fake antivirus that displays ads and attempts to steal credit card information?

I don't think Android exploits are really that "high profile", and if there's money to be made, I don't think black hats really care about what profile it has. It's all about return on investment. The more identical systems there are in the market, the more interesting an exploit becomes, since your attack surface increases by a great margin.

To give you an example: suppose I'm a malware writer, and I write a worm that on a certain night every month, at 2 am, silently calls an overseas toll number, allowing me to collect $1 from the call. I wrote the app, but I need some clever exploits to insert it into the system. Suppose it's not one hack but a collection of pretty neat hacks, and after shopping around, it sets me back $25K to have them. Then I write the worm and release it, and it's able to infect a little over 100,000 smartphones. Since the call is sporadic and it only costs a couple of dollars, it's improbable that people will discover it right away - hardly anyone checks every call every month. So over the course of a year, I collect a cool $5.8 million. Say people check their bill once every few months and 1% discover it every month: that's still $3.8M. Say 4% check it every month: that's still $1.3M. Still quite a nice investment. Say the exploit costs me 5 times as much, or even 25 times as much - it's still a nice investment. I don't know if the numbers are realistic because I'm not a black hat; I just wanted to show what kind of potential a dominating smartphone platform has.

Indeed, these are relatively close to what can be found on Linux. Now, personally, what I'd like to see is something between DMGs and this variety of packages. A standard package format which does not require root access for standard installation procedures and has an extremely streamlined installation procedure for mundane and harmless software, but still has all the bells and whistles of a full installation procedure when it is needed.

I'm not quite sure what you mean by "between" the two. A .dmg is a virtual disk file describing the contents of a disk volume; a .pkg is an installation description for a bill of materials that is read by an application and executed. Both can be combined with each other. .pkg files are scriptable, extensible with code, and combinable into metapackages. You can make them as simple or as complicated as you like. You can specify in your .pkg whether the application requires authentication or not. If you're just installing into a user's home folder, you can do so without authentication.

This is not as trivial as you make it sound, though. Sometimes, the same use cases can be supported with more or less functionality, and there is a trade-off between comfort and usability.

The best user interaction designs are often the ones that do things in a novel way, with an ingenious simplicity. Take the iPod, for example. The click wheel is an inherently simple design, much simpler than having buttons, and yet it's still a lot faster to navigate around your device with the wheel than it is with buttons, even though buttons are the more complicated design.

Mine was just that if you took a current desktop operating system, set the taskbar to auto-hide, and used a window manager which runs every app in full screen and doesn't draw window decorations, you'd get something extremely close in behaviour to a mobile device, and all software which doesn't use multiple windows wouldn't need to change one bit. So full-screen windows are not such a big deal as far as UI design is concerned, in my opinion.

Is there such a difference in use cases between desktops and tablets? I can use a desktop as well as a tablet to browse the web, fetch mail, or play coffee-break games. And given some modifications to tablet hardware, such as an optional stylus, and the addition of more capable OSs, tablets could be used for a very wide range of desktop use cases.

I think they are. UIs aren't flat surfaces. Every good UI has depth. The important things are on the surface of the UI; the less important stuff is tucked away deeper. A good UI balances what needs to be where based on the frequency of the use case. If an action is performed often, you'd better make it obvious on the surface of the UI. If it's infrequent, it's better to tuck it away deeper, so it doesn't clutter up the important stuff. Although our post-PC devices do similar things, I think their use cases can differ greatly because of circumstantial characteristics. So I think that to tune them well towards their intended use, their UIs need to be different as well. I'll give you some examples:

Consider mail. When I'm using mail on my desktop, I want to have all the tools at my fingertips to be as productive as possible in my mail client. All the "power tools" - sorting, moving and labeling my email, advanced editing functionality, ... - need to be right where I want them. A smartphone does mail too, but that's just about where the similarities end. Email on a smartphone is more a way to keep up to date with your inbox and shoot the occasional short reply if things can't wait. No sane person is going to write lengthy emails on their smartphone or do mailbox maintenance; that stuff's just way too tedious on a tiny screen. Now, tablets are somewhere in the middle between smartphones and desktop computers, but I still don't think people will want to do a lot of mailbox management on a tablet, because it's still too tedious. A tablet is more something to take with you when you're on the move or on the couch, when you need more comfort while doing email and tend to do more than just skimming your inbox and typing a short reply. So the use case for mail on a tablet will be somewhere between smartphones and PCs. You'll want a couple of features more than on a smartphone, but fewer than on a desktop.

Another example is the GarageBand application. GarageBand is a sequencer which has shipped with Macs for a while now, and has a version for iPad and now iPhone. The mobile versions are essentially a visual multitrack recorder with extras thrown in. The desktop version is more of an editing, polishing and export tool. So you can record your jams on your iPhone or iPad, transfer the recordings to your desktop computer, clean them up and export them. This "software on a standardized platform replacing dedicated appliances" approach works really well to turn post-PC devices into truly versatile tools. Controlling virtual appliances with a mouse has always been awkward, but they make a lot more sense on a post-PC device. The biggest mistake one could make in terms of the tablet form factor is to look at it as a PC in a frame. We both know that technically, that's what it is. But it's this technical myopia that caused tablets to be a dud in the marketplace, and prevented anyone from coming up with a compelling solution until the iPad came along.

I don't think Android exploits are really that "high profile", and if there's money to be made, I don't think black hats really care about what profile it has. It's all about return on investment. The more identical systems there are in the market, the more interesting an exploit becomes, since your attack surface increases by a great margin. (...) I don't know if the numbers are realistic because I'm not a black hat; I just wanted to show what kind of potential a dominating smartphone platform has.

Hence there should be more phone OSs, better designed to make the task harder and reduce the return on investment!

Hah, if only ARM stuff were based on a standard and open platform like x86/PC... People have no problem with trying new OSs on phones and tablets, so that would be a great chance for alternative OSs...

I'm not quite sure what you mean by "between" the two. A .dmg is a virtual disk file describing the contents of a disk volume; a .pkg is an installation description for a bill of materials that is read by an application and executed. Both can be combined with each other. .pkg files are scriptable, extensible with code, and combinable into metapackages.

Well, on the Mac platform, DMGs are typically used for simple software which can be installed through a mere drag and drop into the Applications folder, whereas PKGs are there for the more sophisticated installation stuff (plugins, system components, file associations...).

I wish there was something that was as straightforward as or simpler than DMGs for everyday software, but still flexible enough to adapt itself to more sophisticated use cases.

You can make them as simple or as complicated as you like. You can specify in your .pkg whether the application requires authentication or not. If you're just installing into a user's home folder, you can do so without authentication.

Really? I guess not many developers are aware of this possibility, because I don't think I've ever met a PKG which didn't require typing in a root password, even in the default OS X user account setup where users can freely move stuff inside the "Applications" folder...
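For reference, Apple's Distribution definition format (read by Installer for product archives) does appear to expose this: the `domains` element controls which install domains are offered, and enabling only the current user's home folder should avoid the administrator prompt. A sketch from memory - the element and attribute names should be double-checked against Apple's Distribution definition reference, and `MyApp`/`com.example.myapp` are placeholder names:

```xml
<?xml version="1.0" encoding="utf-8"?>
<installer-gui-script minSpecVersion="1">
    <title>MyApp</title>
    <!-- Offer only a home-folder install; no admin authentication needed. -->
    <domains enable_currentUserHome="true" enable_localSystem="false"/>
    <choices-outline>
        <line choice="default"/>
    </choices-outline>
    <choice id="default">
        <pkg-ref id="com.example.myapp"/>
    </choice>
    <pkg-ref id="com.example.myapp">MyApp.pkg</pkg-ref>
</installer-gui-script>
```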

The best user interaction designs are often the ones that do things in a novel way, with an ingenious simplicity. Take the iPod, for example. The click wheel is an inherently simple design, much simpler than having buttons, and yet it's still a lot faster to navigate around your device with the wheel than it is with buttons, even though buttons are the more complicated design.

Heh. There you are talking about the dying breed of specialized devices which are carefully tailored towards a specific goal. I agree that very nice stuff can be made this way, as every coffee machine out there can attest, but as time passes these devices tend to be "swallowed" by more general-purpose devices which achieve more functionality at the same financial cost, even if usability is sacrificed a bit...

I think they are. UIs aren't flat surfaces. Every good UI has depth. The important things are on the surface of the UI; the less important stuff is tucked away deeper. A good UI balances what needs to be where based on the frequency of the use case. If an action is performed often, you'd better make it obvious on the surface of the UI. If it's infrequent, it's better to tuck it away deeper, so it doesn't clutter up the important stuff.

What if UI designers took some time to explain this balance to the machine, for example by numerically quantifying the "usefulness" of each control, so that controls can be tucked away deeper automagically as screen size is reduced and they begin to take up too much screen real estate with respect to the benefit they bring? This early build of the Office Ribbon shows how it could work in practice: http://www.sunflowerhead.com/msimages/RibbonScaling.wmv

A similar strategy could also be applied at a second level, to remove functionality from the UI altogether when even hiding it in the depths of nested menus makes the whole UI too complex...

Consider mail. When I'm using mail on my desktop, I want to have all the tools at my fingertips to be as productive as possible in my mail client. All the "power tools" - sorting, moving and labeling my email, advanced editing functionality, ... - need to be right where I want them. A smartphone does mail too, but that's just about where the similarities end. Email on a smartphone is more a way to keep up to date with your inbox and shoot the occasional short reply if things can't wait. No sane person is going to write lengthy emails on their smartphone or do mailbox maintenance; that stuff's just way too tedious on a tiny screen. Now, tablets are somewhere in the middle between smartphones and desktop computers, but I still don't think people will want to do a lot of mailbox management on a tablet, because it's still too tedious. A tablet is more something to take with you when you're on the move or on the couch, when you need more comfort while doing email and tend to do more than just skimming your inbox and typing a short reply. So the use case for mail on a tablet will be somewhere between smartphones and PCs. You'll want a couple of features more than on a smartphone, but fewer than on a desktop.

If I wanted to make this work on a tablet, here's the list of stuff I would start with:
-> Remove window decorations.
-> Replace the menu bar with a toolbar-activated menu that fills the whole screen.
-> Depending on screen real estate, replace the flat 3-part folder selection/mail selection/mail display layout with either a 2-part layout (folder selection -> mail selection drill-down on the left, mail display on the right) or a full-screen folder selection -> mail selection -> mail display drill-down.
-> Hide stuff which is not frequently used in the folder selection behind a "more..." option, and increase control sizes for touchscreen friendliness.
-> Reduce the number of sorting criteria in mail selection, and remove sorting altogether if the display is too small.
-> Move the toolbar to the bottom of the screen for better finger accessibility.
-> Hide the status bar when nothing is happening, and hide the tab bar when multiple tabs are not shown.

At this level, I think we'd already get a pretty nice tablet app, and I don't think anything I've mentioned so far couldn't be done by a machine, with a bit of help from the UI designer to decide which optimization should be applied first and when each one should kick in.

Another example is the GarageBand application. (...) This "software on a standardized platform replacing dedicated appliances" approach works really well to turn post-PC devices into truly versatile tools. Controlling virtual appliances with a mouse has always been awkward, but they make a lot more sense on a post-PC device.

But then how about seamlessly using tablets as extra displays for virtual appliances when editing a mix on the desktop box? Or doing some quick editing on the go before perfecting the mix at home? This way, you'd get the comfort of a tablet for virtual appliance control and portability, and the number-crunching power and heavy peripherals of a home studio desktop for the heavy-duty work.

The biggest mistake one could make in terms of the tablet form factor is to look at it as a PC in a frame. We both know that technically, that's what it is. But it's this technical myopia that caused tablets to be a dud in the marketplace, and prevented anyone from coming up with a compelling solution until the iPad came along.

Well, sorry for being stubborn, but again I wonder if people have tried hard enough. A tablet will never run software designed for mouse input well, even with a stylus, but software which was designed to be cross-platform to begin with... why not?