
AHuxley noticed the frightening little Ars story talking about a certain expectation that iOS and Mac OS X will merge, leading to a single DRM-locked OS on your MacBook and your iPad. Certainly Apple would love a piece of every app sold. Now, I'm sure that this has been discussed over there, but I wouldn't expect it any time soon.

If I were Apple I'd make a desktop iOS a user option like the current Parental Controls. Locking specific users into a walled garden of uncomplicated settings and apps sure would be nice for grandparent support.

It seems Apple is now the new Microsoft / IBM / Conservative / Christian. They build products that are popular, sell well, and work, but it seems some people have to find fault with that. So sad. I guess that is why the US seems hell-bent on ditching what made it great (hard work, great products, marriage between a man and a woman) and running to socialism that has a 100% proven track record of failure. Maybe Steve will pull a Galt and close Apple down and let the industry sell junk.

I'd bet that half of the people reading this Slashdot story are mostly concerned about one feature: the ability to use Xcode and distribute what you make without starting a company and paying $99 per year to Apple. If Mac OS X loses this, watch GNUstep (Free clone of Cocoa's predecessor) suddenly attract a boost in activity.

The article discusses how developers expect iOS and OS X to merge from an API perspective - cross-pollination between the two development efforts (mostly from iOS to OS X) will lead to a unified development environment. This is *not* the same as the DRM/App Store, which is just the distribution method chosen for the iPhone and iPad. There's nothing technical about this - it's a business choice to make this the sole channel, one that doesn't seem to make sense for desktop computing, and one that I doubt they'd pursue.

Whilst I expect an App Store on the Mac, I would be shocked if it were the only distribution method available. In truth, I suspect we'll see a situation similar to downloading apps via Safari now - the first run, you get a warning about possible unsafe code, you tell it you're fine with that, and then everything carries on as normal. The Mac still represents a vast chunk of their revenue - only marginally less than iPhone in terms of income, and probably more in terms of profit. They're not going to kill a fully functioning golden goose, though I do expect some experimentation with it.

This experimentation is long overdue. For most people, something much simpler than a full desktop would be ideal - my iPad passes my parental approval filter far more than their desktop computer, the complexity of which causes more trouble than benefit. Now, the iPad is *not* a suitable desktop replacement - using my parents as an example again, there's no really useful document processing, no ability to hook up their TomTom, no easy printing. However, I can certainly see some hybrid iMac/iPad (or Android setup, I don't care who makes it) being a *much* better proposition for them than buying another desktop of the current ilk - be it Windows, Mac or Linux.

I believe that apps on mobile phones are a transitory phenomenon. They are/were necessary to make content available on the relatively small screens and to implement touch input (as most websites at the time were built for mouse input). The functionalities of most apps these days can be implemented as websites (HTML, Ajax,...) and this will be the best solution to fix the compatibility problem (different apps for Android, iOS, Symbian, Windows Phone 7 (?), Bada (?),...) and to avoid vendor lock-in. Will we really need an app to access news content? Will the NYT really build and maintain apps for 4 or more different platforms? I believe what we need are properly coded websites that adapt to different screen sizes and input devices.

There will probably be a market for high-end applications on your phone (navigation?, media player?) but honestly, how many of those are on your phone?

Why not? The desktop/laptop market is stagnant, with flat to little growth; you can see this in the way they gloss over it in their quarterly reports. iOS is freaking rocking in money. So yeah, they may have a hammer and try it out. At the very least, a skin that makes Mac OS X look like iOS, with a translation layer.

Right now it is the 'gold rush' with these small apps. That will die down in a year or three, with a decent group making apps people really want instead of zillions of little crap apps. Right now it is a fad/novelty. That will change. There is real money to be made in the lock-in.

The 'simple to use' market is huge. We as engineers think it is awesome to be able to tweak everything. Most people do not use that junk. They just want those 5-10 apps they use all the time to just work. They have over the years learned all these crazy gyrations to use these things. For example, last year I could have given my gf something like an iPad and she would have been just fine. But now she is 'locked in' to the Windows platform because I bought her some games she really likes. She jumps through the Windows hoops so she can play her games. She doesn't give one whit about how to change the power on the 802.11n card, or how many FPS her video card can do; she just wants to play her adventure games.

Apple, Windows, hardware, and Linux fanboys miss the mark on this every time. People want to use applications. Users will jump through the hoops you create to use them (I double-smoosh this picture, the computer wants a startup and shutdown phase). I am not saying there isn't a place for more complex interfaces. But simple, easy-to-use ones are the best 99% of the time.

Apple wants to kill the Mac OS desktop. Thus far I've been called a troll, naive, and insane. Now I am vindicated, as developers have said the same thing.

Apple isn't going to kill the iMac and MacBook lines; they will simply replace the current NeXT-based OS with future versions of iOS, and naturally more complex systems are more prone to unexpected issues. Moving the hardware to ARM is trivial, as they've already got the HW expertise and the OS to do it. The only thing they need to do is get SW makers to fall in line; MS will, with their standard half-arsed attempt at Office:Mac, and so will Adobe with CS (Adobe don't have the balls to tell Steve to stuff it). Realistically they just need to add more keyboard and mouse support to the iPad.

Apple wants to do this for three reasons.

1. It just works(TM). Mac OS X can go wrong more than the iPhone. This is because, as fanboys point out, OS X is a lot more complex than iOS. Apple does not want users to have to deal with their own problems, so they seek to eliminate the chance of them happening. Apple's current strategy is to cut out features that don't work perfectly.
2. Homogeneity. Apple prides itself on the fact that everything works together, that choices are simple. Having two disparate OS lines is detrimental to the long term success of this goal.
3. Control. Fanboys may defend Apple's control for various reasons, mostly using cognitive dissonance (it's for your own good, and other such excuses), but you can't deny that Apple wants control. They want to stop the hackintosh, they want to prevent more clones, and they want to control what the end user experiences.

This won't happen overnight; not even the RDF turned up to eleven could pull that one off. It will happen over time, in baby steps, and be hailed by the fanboys.

Yeah, that's the "current Parental Controls". This would be an order of magnitude simpler. I know, because I set up my grandmother with a Mac and even Simple Finder was too much. Multitasking, settings, windows, etc. Ideally we'd be able to set up an iPad-like screen with big buttons that runs one application at a time with absolutely zero user configuration possible (email accounts and the like having been set up by the admin account).

Pardon my ignorance, but what is the point of a 192 kHz sampling rate? The maximum frequency you can push through that is 96 kHz, which is way above human hearing. In fact, the human hearing range is between 20 Hz and 20 kHz, so even a 44 kHz sampling rate should be more than enough. Or am I missing something important?

A lot of people don't really understand how to apply the Nyquist-Shannon sampling theorem [wikipedia.org] and so they look at the "jaggy" sampled waveform and think that it will sound horrible if it is output. It's true that if you output the samples directly then you are going to hear artifacts but if you apply the Whittaker-Shannon interpolation formula [wikipedia.org] then you get back the original waveforms and the output will sound nearly identical to the original.
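To see that claim in action, here's a minimal NumPy sketch (the 1 kHz tone, record length, and evaluation window are my own arbitrary choices, not from the comment) that samples a tone at 44.1 kHz and then evaluates the Whittaker-Shannon formula *between* the sample points:

```python
import numpy as np

def sinc_interpolate(samples, fs, t):
    """Whittaker-Shannon reconstruction: x(t) = sum_n x[n] * sinc(fs*t - n)."""
    n = np.arange(len(samples))
    # np.sinc is the normalized sinc, sin(pi*x)/(pi*x), which is what the formula uses
    return np.sinc(t[:, None] * fs - n[None, :]) @ samples

fs = 44100.0                          # CD sample rate
f = 1000.0                            # 1 kHz tone, far below the 22.05 kHz Nyquist limit
n = np.arange(2000)
samples = np.sin(2 * np.pi * f * n / fs)

# Evaluate halfway between sample points, away from the record edges
# (we can only sum a finite number of terms, and truncation hurts near the edges)
t = (np.arange(900, 1100) + 0.5) / fs
reconstructed = sinc_interpolate(samples, fs, t)
original = np.sin(2 * np.pi * f * t)
err = np.max(np.abs(reconstructed - original))
print(err)  # small - the "jaggy" sampled look is an artifact of plotting, not of the data
```

Despite the jagged appearance of the raw samples, the interpolated values land back on the smooth sine, which is the whole point of the theorem.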

Of course this is all best-case and since we live in the real world with imperfect low-pass filters and non-infinite past and future data we will still get artifacts if we sample at the minimum rate. That's (part of) the reason why we sample at 44.1 kHz instead of 40 kHz, to allow some overhead to account for these non-ideal factors. You absolutely do NOT need to sample at 192 kHz, if you do you are just wasting storage space on your digital media. I believe the default for a DAT is 48 kHz and that's pretty much the maximum you should ever use.

That is, unless you are doing recordings for bats and dogs to listen to...

I could perhaps understand this if you had one computer at home that you use for ultra-important tasks, but I really can't think of anyone with this limitation.

At the time I had this Performa running At Ease, we owned exactly two computers. One was a Mac, and one was a PC. And the only reason we had the PC was because we'd found a working one at a garage sale. This was years ago, before everybody and their dog had a personal computer. At the time, it was unheard-of to have two computers in a household.

Anyone whose life or livelihood is that dependent on a working computer at home has one dedicated to this ultra-important task and one (or more) for the kids and others to screw around with - it isn't like kids that are likely to screw up your system are really going to need the latest and greatest hardware.

I'm not necessarily talking about IT professionals. Plenty of folks have just a single computer in the household. If that one computer gets hosed, they're all out of luck. It won't be life-threatening... But there'll be no email, facebook, whatever. And most folks aren't able to do their own repairs, so it'll take a trip to the shop to get it fixed. Definitely an inconvenience.

I never understood this point of view. Why wouldn't you want the system wide open and available for your kids to tinker with?

I've already mentioned that we had the two systems - one of which was an old Tandy PC.

The kids were restricted to only using the Mac (and At Ease) after they killed the PC. My son saw all these white papers on the C: drive that couldn't be opened with anything, so he deleted them to make room. All those white papers ended in things like .DLL and .SYS. Had to reload the whole system from disk. Lots and lots of disks. Was not fun.

How are they going to learn anything if you keep them confined to this walled garden?

Most people aren't terribly concerned with their kids learning how to tinker with a computer. They have a computer that they use for Internet/email/facebook/whatever... And the kids may be allowed to use it... But they sure as hell don't want the kids taking the thing apart to see how it works.

What would it have been like for our generation if our Commodores or Apple IIs or whatnot didn't let us do anything but run those idiotic learning games that schools tried to force on us? I sure as hell wouldn't have developed an appreciation for or interest in computers while being confined to a few "permitted" applications with no access to the underpinnings of the system.

At school that is precisely what I had available. We were only allowed to run a few, specific programs. The computer lab was locked when not in use, and the disks were kept in another locked cabinet. You basically weren't allowed to have any fun.

At home, my mother had an Epson PC of some sort, running some flavor of DOS. I was not allowed to use it. That machine cost multiple thousands of dollars and was exclusively for her work. Nobody touched it but her.

When I decided I wanted to learn how computers worked myself, I saved up my money for the better part of a year and bought a used Mac SE/30 from a repair shop. I tinkered with that thing to my heart's content. Had all sorts of fun with Hypercard.

The point being - it is not unreasonable for the parent in the household not to want their children to destroy their computer. And if the kids are that curious, they can get their own computer to play with.

Odd. There isn't a single mention of DRM in the entire article. The summary is just an alarmist piece. It's only natural that features from one end up in the other, just as features from Windows end up in Mobile, and I would expect features from Mobile will end up in Windows if they are useful in a desktop environment.

iOS 4 received feature parity with OS X (some 23 features from OS X ported to iOS, in addition to IPv6 and DNS functionality). The article fails to mention any of this. It only talks about iOS 4's influence on the desktop while ignoring the return path.

Why would that happen? People who buy Macs can afford the $99/year; why would they buy a Mac and then subject themselves to a crappier dev library and documentation rather than just paying $100 for the real thing?

You gotta stop thinking that people who buy Apple devices are poor and trying to get everything they can for 'free'. It's not Linux. Mac people actually just pay for stuff.

No, it does not unless you are starting with frequencies ABOVE 20 kHz which would normally be inaudible to human ears.

If we are talking about sounds that a human can hear then you do not need the additional samples to shift the pitch down. Signals that are at 20 kHz and were captured at a sample rate of 40 kHz can be shifted down 2 octaves to 5 kHz without losing any quality since a 5 kHz signal would only need a sample rate of 10 kHz. It doesn't matter that your signal is effectively 10 kHz, that downshifted signal is 5 kHz and doesn't need a higher sample rate.

If, instead, you are pitch-shifting upward you don't want to go over 20 kHz anyways because no one could hear it! The maximum sample rate you need for anything that humans are going to listen to is 40 kHz. Yes, some overhead is nice because you will lose some resolution when manipulating the data but 192 kHz is absolutely ridiculous, even 96 kHz is overkill.
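The arithmetic behind those two paragraphs is easy to check; here's a tiny sketch (the helper name and the two-octave example are mine, the 20 kHz and 40 kHz figures come from the comment):

```python
def min_sample_rate(f_hz):
    # Nyquist: a signal whose highest component is f needs a rate of at least 2*f
    return 2 * f_hz

f_top = 20_000                     # edge of human hearing
fs = min_sample_rate(f_top)        # 40 kHz captures anything audible

# Shifting down two octaves halves the frequency twice
f_down = f_top / 2 ** 2            # 5 kHz
rate_needed = min_sample_rate(f_down)

print(fs)           # 40000 - enough for any audible signal
print(rate_needed)  # 10000 - the downshifted result needs far less, not more

# Shifting *up* past 20 kHz is pointless: nobody can hear the result,
# so 40 kHz (plus some engineering margin) remains the practical ceiling.
```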

If you really need these huge sampling rates then I'd take a good, hard look at your equipment, because something is wrong with it. I'm an instrumental/analytical chemist and I work with signal theory constantly in building and tuning instruments. We never have to resort to a sampling rate over 9 times the frequency of the signal we want to capture; 2 or 3 times gives extremely accurate results no matter how much post-processing we need to do.

Just to put this all in perspective, the highest note on a standard piano is C8 [wikipedia.org]. It has a frequency of 4,186 Hz (in 12-tone equal temperament), which means that you need a sample rate of 8,372 Hz to capture it. If you sample at 40 kHz you will not only get its fundamental (the first harmonic) but also the second, third, AND fourth harmonics - and nearly the fifth! The only reason we need to sample at a rate of 40 kHz in the first place is that transient sounds like cymbals, buzzes, hisses, and clicks often include higher harmonics in the 15 - 20 kHz range, and if you don't capture those then you lose some of the character of the music. By sampling at a rate over 40 kHz we accurately capture those signals and preserve the original recording in such a way that humans can fully enjoy it.
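Those numbers check out with a few lines of arithmetic (piano key numbering and the 12-TET formula are standard; counting the fundamental as the first harmonic, as the comment does):

```python
# C8 is piano key 88; A4 (440 Hz) is key 49. In 12-tone equal temperament
# each key up multiplies the frequency by 2**(1/12).
C8 = 440.0 * 2 ** ((88 - 49) / 12)         # ~4186 Hz, matches the comment

nyquist = 40_000 / 2                        # a 40 kHz rate captures up to 20 kHz
harmonics = [k * C8 for k in range(1, 6)]   # 1st (fundamental) through 5th
captured = [f for f in harmonics if f <= nyquist]

print(round(C8, 2))   # ~4186.01 Hz
print(len(captured))  # the first four fit; the 5th (~20.9 kHz) just misses 20 kHz
```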