Posted by timothy on Thursday August 29, 2013 @12:15PM
from the click-carefully dept.

redkemper writes with this news from BGR.com (based on a report at Hacker News), excerpting: "Android might be targeted by hackers and malware far more often than Apple's iOS platform, but that doesn't mean devices like the iPhone and iPad are immune to threats. A post on a Russian website draws attention to a fairly serious vulnerability that allows nefarious users to remotely crash apps on iOS 6, or even render them unusable. The vulnerability is seemingly due to a bug in Apple's CoreText font rendering framework, and OS X Mountain Lion is affected as well."

I thought Apple added address space randomization back in Leopard? What happened?

The problem that was reported leads to a crash. A crash is _safe_. An attacker can't gain any advantage by crashing your computer. They can merely annoy you.

Address Space Randomization cannot prevent crashes. Its purpose is to prevent crashes being turned into exploits. An attacker does two things: Find a way to make your software fail, then find a way to turn that failure into an advantage for the attacker. The second part is where Address Space Randomization comes in. The next step is Sandboxing, where even if the attacker finds a way past ASR and takes over your code, your code would be in a sandbox and can't do any harm outside.
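To make the ASR point concrete: with ASLR on, each fresh process sees its data at different addresses, so an attacker has no fixed address to jump to. A quick sketch (Python; it assumes a platform where ASLR is enabled, which Linux and OS X both are by default these days):

```python
# Sketch: observe ASLR by printing a buffer address from two fresh
# processes. With ASLR enabled, the two addresses will almost always differ.
import subprocess
import sys

SNIPPET = (
    "import ctypes;"
    "buf = ctypes.create_string_buffer(64);"
    "print(ctypes.addressof(buf))"
)

def buffer_address():
    """Spawn a fresh interpreter and report the address of a fresh buffer."""
    out = subprocess.run(
        [sys.executable, "-c", SNIPPET],
        capture_output=True, text=True, check=True,
    )
    return int(out.stdout.strip())

a, b = buffer_address(), buffer_address()
print(a, b)  # with ASLR on, these differ on virtually every run
```

The attacker's shellcode has to land *somewhere*, and if the layout is reshuffled every launch, a hardcoded jump address fails almost every time, turning a reliable exploit into a mere crash.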

But the GP was referring to jailbreaks - I thought those were exploits "used for good"?

If you have an exploit, you can use it for good or evil. On the other hand, if it is an exploit where the device owner has to do things actively (like downloading an app, connecting the device through a USB cable, running the app, clicking five buttons on the device), then there is no danger except the possibility of trojans, so Apple doesn't need to fix it. If it is an exploit that could be used to attack unsuspecting users, then it _must_ be fixed.

Yes, but address space randomization was supposed to make those exploits (mostly buffer overflows) obsolete, regardless of their intent. Clearly that didn't work if there are still jailbreaks and/or other exploits.

It merely makes it harder to craft a working exploit from bugs like buffer overflows, but not impossible.

As I understood it, without being able to predict any addresses you were not able to get your exploit to jump to a good address to attach shell code to. But I'm not an exploit writer - I was hoping somebody could explain why that isn't working as a deterrent, but it sounds like I'm asking on the wrong board today.

The Windows versions of iTunes and Safari include the MacOS font rendering code so that they look identical to the Mac versions. If the code is vulnerable it seems that those applications may also be vulnerable, although at least it's an app level problem and thus not as serious.

Confirmed Safari crash on 10.8. However, on iOS 7, it does not crash. It looks like this will be patched on mobile within the next couple of weeks. I can't test iOS 6, so I'll take others' word for it.

I suppose the "weird stuff" in there might be the block of U+03XX "combining diacritical" marks; so the string requires sticking a bunch of diacriticals over Arabic characters (which might invoke whatever fancy part of the code is broken). Someone with more time could play around with reducing this to a more "minimal" crash example.

However, is this particular combination of combining diacriticals a "valid" one in Arabic? The U+03XX diacriticals are "general use" and not Arabic-specific, so this might be an unusual (or even "invalid") combination in Arabic orthography (which I don't know much about). Note, I seem to be able to get the crash with just the two characters + diacriticals "\xcc\xb7\xcc\xb4\xcc\x90\xd8\xae \xcc\xb7\xcc\xb4\xcc\x90" (telling python to print that in Terminal.app under OS 10.8.2 kills Terminal.app...). Is this a co
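For the curious, those bytes decode as UTF-8 to three generic combining marks stacked on ARABIC LETTER KHAH, and all three marks do sit in the U+0300-U+036F "general use" combining block guessed at above. A quick Python check:

```python
# Decode the reported crash bytes and name each codepoint.
import unicodedata

crash = b"\xcc\xb7\xcc\xb4\xcc\x90\xd8\xae \xcc\xb7\xcc\xb4\xcc\x90".decode("utf-8")
for ch in crash:
    print(f"U+{ord(ch):04X} {unicodedata.name(ch)}")
# U+0337 COMBINING SHORT SOLIDUS OVERLAY
# U+0334 COMBINING TILDE OVERLAY
# U+0310 COMBINING CANDRABINDU
# U+062E ARABIC LETTER KHAH
# U+0020 SPACE
# ... and the three combining marks again
```

So the minimal trigger really is a pile of non-Arabic combining marks attached to an Arabic letter (plus a dangling pile attached to nothing after the space), which supports the theory that the bug lives in whatever fancy shaping path handles that combination.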

Right, I'm not trying to excuse Apple --- this is a bug, and shouldn't happen. However, it's maybe something a bit more subtle than "crashes on Arabic plain text," which would have been caught much more quickly. The AC above (assuming you're the same one?) was "disappointed" that this wasn't using "weird characters" --- but stacking a bunch of diacriticals not normally used with the script (the Arabic block apparently contains its own diacriticals) in an invalid way is getting pretty "weird." The breakdown

Actually, facebook doesn't. I had accidentally downloaded the magic terrorist mantra using curl in a konsole, from which I copy-pasted it into Facebook. However, konsole must have stripped its magic vibes...

After I retried it by visiting the URL with Firefox, and copying the contents from there to Facebook, I got "This message contains content that has been blocked by our security systems."

Actually, facebook takes it too, but it takes somewhat more smarts: just set up a web page, enter the magic terrorist string as the <title>, and post a link to the web page to Facebook. As Facebook fetches the title itself, it will not scan it for any "forbidden content", and presto!

I still don't know what this Arabic sentence means, but I tested it with Safari on a Mac, and indeed, it goes kaboom as soon as I visit my Facebook page!

Oh, and when pasting the contents of killwebkit.html into Twitter or other social media, be sure to visit the site with your web browser rather than downloading it via curl. Indeed, some terminal programs, such as konsole, tend to strip off its voodoo...

80%? They make it sound so close! It's actually 100:1 for Android:iOS: "Android was targeted by an astonishing 79 percent of all smartphone malware that year... iOS was targeted by 0.7 percent of malware attacks."

I do; but it's more like... Find something that looks really good, then look at all the permissions it wants; but it shouldn't need all those permissions!! Feel sad about it, and then don't install it unless drunk.

Right, because having users manage their own risk profile has worked out so well in the PC/Windows world...

Indeed. Letting someone else control your computer is much safer.

Android's big problem is that you have no way of saying 'no, I'm not giving this app that permission', and can only choose to install or not install the Fluffy Kitty Screen Saver that wants access to your filesystem, the Internet, and the ability to send SMS messages.

Holy cow, your fanboy hat must be cutting off the flow of blood to your brain. Explain again why an OS with 4x the market share garners 100x the exploits?

Attackers will *always* try to attack the biggest target. They are not equal-opportunity; they do not meet to work out quotas so that each OS gets attacked according to its market share.

Say you joined a shooting competition: You can shoot at two targets, equal size and equal distance, no objective difference at all. The only difference is that each time you hit target A, four people will give you $10 each, and each time you hit target B, only one person gives you $10. You have 10 rounds. How do you distribute your rounds between the two targets? Do you fire 8 shots at target A and 2 shots at target B because that would be the most fair thing to do, or do you fire all 10 shots at target A?
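For anyone who wants the analogy's arithmetic spelled out (a quick sketch; the dollar amounts are just the ones from the analogy above):

```python
# Expected payout per hit: target A pays 4 x $10, target B pays 1 x $10.
PAY_A, PAY_B = 4 * 10, 1 * 10
ROUNDS = 10

all_in_a = ROUNDS * PAY_A       # all 10 shots at A
split = 8 * PAY_A + 2 * PAY_B   # the "fair" 8:2 split
print(all_in_a, split)          # 400 vs 340: all-in on A always wins
```

Any shot diverted to B costs you $30, so a payoff-maximizing shooter never splits, which is the whole point: market-share proportions predict nothing about where rational attackers aim.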

Maybe, just maybe, there's more to it than market share.

There might be. When you see people start taking shots at B despite the higher reward of hitting target A, you can conclude that some factor causes them to *not* go for the higher reward. Somehow target A must have become harder to hit, the reward must be going down, or the shooters' skills allow them to hit target B more easily.

But all other things being equal, prudent attackers who are in it for the rewards will go for the higher market share, every time.

Attackers will *always* try to attack the biggest target. They are not equal-opportunity; they do not meet to work out quotas so that each OS gets attacked according to its market share.

Ok then, so if iOS has 13.2% market share, why does it get only 0.7% of the smartphone malware, while the remaining 20.3% of smartphone malware is targeted at the various players that make up just 6.8% of the market share?
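Putting those percentages side by side makes the point; if malware simply tracked market share, every ratio below would be about 1.0 (a quick sketch using only the figures quoted in this thread):

```python
# Malware share per point of market share, from the figures in the thread.
platforms = {
    # name: (malware share %, market share %)
    "Android": (79.0, 80.0),
    "iOS":     (0.7, 13.2),
    "others":  (20.3, 6.8),
}

for name, (malware, market) in platforms.items():
    print(f"{name}: {malware / market:.2f} units of malware per point of market share")
# Android comes out near 1.0, iOS near 0.05, and "others" near 3.0
```

So Android is attacked roughly in proportion to its share, the long tail of small platforms is attacked *three times* out of proportion, and iOS sits twenty-fold below its share, which is exactly the anomaly being argued about.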

The logic is fairly obvious: when there is such a large gap between the leader and everyone else, why would you, as a malware writer, target anything but the leader? It doesn't make sense to aim 20% of your exploits at iOS and 80% at Android just because those are the corresponding market shares. You target all of your exploits in such a way as to maximize the total number of people hit by each. From that perspective, the only logical choice is to target Android with all of them.

It's the same as Windows: you just target what gets you the largest return. Organised crime is a business, just like any other. However, there is still the walled garden thing; even if Apple went back up to a 50:50 market share with Android, Android would get targeted more, because every Android user can choose to install any application and give that app the permission to email their bank details to Russia.

With iOS they have to wait for a good ol' fashioned buffer overflow before they can grab anything, I guess. Unless you get that with iOS too? I don't know, I've never owned one.

But the 8:2 logic holds up; when the sample size is that large, I'm guessing that's exactly the reason why.

Ultimately it's all moot.

If Apple had 100% of the market share this is what would happen:

The crims would send everyone SMS/emails with links to pages that asked them for their passwords, and X percent of users would give it to them.

No amount of security or walled gardens gets around the fact that most of you are really, really thick.

You don't have to install Cute Kitty Wallpapers with internet, SMS and bank-details access. Because that's all this "malware" is; it's not big or clever, 50% are just from the wrong side of the bell curve.

Oh, and I use Linux. On the desktop. Well, I used to, because who the hell uses a desktop anymore anyway? Have you seen this cute screensaver I found!!!

I work at a Fortune 50 company. Yes, 50, not 500. Currently researching Big Data on the GPU using Linux + CUDA + nVidia's nSight + GTX Titan. I'm in OSX, Linux, Win8 in that order. The other devs use command line + vim + git. The OPS guys use OpenBSD on the servers.

There is a surprising number of people using Linux. Heck, most of the contractors we have are using VirtualBox + Linux (Ubuntu).

You're talking out of your ass, making assumptions. Unix, whether it be a Linux or BSD variation, is getting more and more popular.

Sir, my grandpa lived to the age of 94, and he smoked four packs a day. Does that mean if I smoke four packs a day, I have nothing to worry about health-wise? I suppose the cognitive error you've made is clearer now. You're giving personal experience too much weight. Please show me a survey saying that, today, Linux as a desktop platform is at least half as popular as Macintosh is. The short answer is, you won't find one. At least not one that's been done properly. Saying it's "getting more and more popular

Unix, whether it be a Linux or BSD variation, is getting more and more popular.

I'm not sure what the point of lumping all Unix derivatives together is in this context. This was about exploits and platforms targeted by malware; what exploits and malware run across all Unix derivatives?

Market share for iOS will probably drop, but have you seen the average iOS user's statistics versus Android and others? Have you seen how much money iOS users spend versus the rest? Which is more used by business? You may understand statistics, but you're missing the big picture here.

That may look good to you, but it isn't. If you had 100 pieces of malware, and each affected 1% of the possible users, then you would have 80 pieces of Android malware and 20 pieces of other malware, so an Android user would have an 80% chance of being affected, while other users would only have a 20% chance.

It may explain why there is so much malware, but it doesn't help you. (BTW, the iPhone was said to be attacked by 0.7% of all malware, which makes every iPhone user about 100 times safer. And

Exactly! Apple's never been a big enough target or had enough users to make anyone want to hack them. As for those Apple (l)oosers? Just think how boring their lives have been for all these decades, not getting the real experience of using computers but stuck just playing quietly with their toys. Stoopid loosers!

Secure? Maybe, maybe not. Having less malware does not mean something is more secure, after all. More safe? Definitely, since having less malware means there is simply less danger. A walled garden in the countryside is safer but less secure than an apartment with bars over all the windows in the middle of the city, after all, and safety is what matters more overall, rather than security.

Of course, that doesn't excuse a company failing to secure its products just because no one has attacked them yet, but by all indications, the "security through obscurity" argument doesn't hold much water in this case, given that iPhone users are consistently shown to be disproportionately profitable to target and that they continue to sell extremely well overall (even the report you linked cites the fact that this is an expected low as part of the regular product cycle for the line and that they expect the iPhone to recapture its lost market share with the launch of the new iPhone this quarter).

Long story short, Android appears to be less secure and less safe. Which is to be expected, given the fact that developers are able to do a lot more on Android than they can on iOS, so it's not without its upsides, by any means. But that added capability (and the fact that every carrier/manufacturer makes their own tweaks that can open up vulnerabilities) comes at a price, and in this case, it's security.

Of course, that doesn't excuse a company failing to secure its products just because no one has attacked them yet, but by all indications, the "security through obscurity" argument doesn't hold much water in this case, given that iPhone users are consistently shown to be disproportionately profitable to target and that they continue to sell extremely well overall (even the report you linked cites the fact that this is an expected low as part of the regular product cycle for the line and that they expect the iPhone to recapture its lost market share with the launch of the new iPhone this quarter).

Let's say that iPhone owners are worth $30,000 each, and Android users are worth only $10,000. If Android users are 4.7x more numerous than iPhone users... then Android users are the logical target, if you can only target one group or the other. Now, who really thinks iPhone users have a net worth three times that of Android users?
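Running the numbers in that hypothetical (a quick sketch; the dollar values and the 4.7x figure are the ones invented in the comment above, not real data):

```python
# Hypothetical per-user values from the comment above.
IOS_VALUE, ANDROID_VALUE = 30_000, 10_000
ANDROID_MULTIPLIER = 4.7  # Android users per iOS user

# Total value of each pool, per iOS user.
ios_total = 1 * IOS_VALUE
android_total = ANDROID_MULTIPLIER * ANDROID_VALUE
print(android_total / ios_total)  # ~1.57: the Android pool is worth more in aggregate
```

Even granting iOS users triple the per-head value, the Android pool is still worth more than half again as much in total, so an attacker who can only build one exploit chain still goes after Android.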

The rest of your argument is irrelevant. I don't have anything really to say to your security v. safety argument, because that's

Now, who really thinks iPhone users have a net worth three times that of Android users?

That's a great question, and exactly the right one to ask. As it turns out, an average iOS user is worth roughly 4-5x more than an average Android user [guim.co.uk], at least in terms of what they're willing to spend on apps. Admittedly that isn't the worth we're talking about in this context, but it's about the closest indication we can get of the relative worth of users on the different platforms, absent data on the street value of a compromised device of each variety.

As it turns out, an average iOS user is worth roughly 4-5x more than an average Android user, at least in terms of what they're willing to spend on apps. Admittedly that isn't the worth we're talking about in this context, but it's about the closest indication we can get of the relative worth of users on the different platforms, absent data on the street value of a compromised device of each variety.

Actually, this doesn't say anything except that iOS users are more willing to pay for apps -- this may be due to the smaller number of free alternatives and the fact that the iOS marketplace is controlled entirely by Apple, and you have to spend a not inconsiderable amount of money getting licensed and approved so you can submit apps to Apple. Android has no such restrictions. Since it costs less to develop for Android, it makes sense that the overall price would be lower -- after all, you aren't by the mere

Fine, I'll keep it simple since we apparently can't even agree that "wanting to diversify" is a valid business idea that would apply to malware developers too.

Your big claim is that iOS has less malware because of security through obscurity. That is, they have next to no market share, so no one writes malware for it since there's no profit there. Explain Symbian then. It had 19% of the malware share (i.e. 1/4 that of Android) while having a comparable installed user base to that of Android at the time that

Now, who really thinks iPhone users have a net worth three times that of Android users?

The most popular Android phone sold was the Samsung Galaxy S III. Which is around 60M units. Google says there are 900M Android devices out there. Which puts the SGS3 at under 10% marketshare. Given it was THE flagship phone to get, there's a good bet the vast majority of Android phones out there are the crappy free ones. Because people see the iPhone, they see the $200 price tag, then the salesperson shows them the colle

"Android was targeted by an astonishing 79 percent of all smartphone malware that year... iOS was targeted by 0.7 percent of malware attacks."

Oh wow! That must mean iOS is much more secure!

Well, if Android was targeted by 79% of smartphone malware and has 80% market share, where iOS has (according to your statistics) 13.2% of the market share with only 0.7% of the smartphone malware, then that would suggest that yes, it is. Of course the next thing to look at is whether it is just hackers going after the biggest target market share; well, the remaining smartphone malware (20.3%) is targeted at the remaining players that have just 6.8% of the smartphone market, so if you were to suggest it was just th

Well that would be logical wouldn't it, given that Android is a more widely used platform. Hackers often try to get the biggest 'bang for buck' and target the most popular platforms (see also number of Windows viruses vs. Mac OS ones).

Well that would be logical wouldn't it, given that Android is a more widely used platform. Hackers often try to get the biggest 'bang for buck' and target the most popular platforms (see also number of Windows viruses vs. Mac OS ones).

Are you claiming iOS was targeted far more than Android just 2 years ago?

I think Android is targeted more because it isn't inherently tied to the Play store, and not so much because of devices not being updated. The app signature verification works for 2.3 and up, which covers 96% of Google's Android devices. Getting malware on a phone or tablet still generally requires installing a malicious app, and it's far easier to be careless about that on Android.

Or, you know, perhaps there are things like actually testing to make sure the patch works across their product lines. Or evaluating the risk of the flaw and deciding to put the fix in the next update, versus just patching over and over again.

I remember back when Microsoft started its security initiative after XP was released. There were a lot of security updates, and often they would end up breaking more stuff than they fixed, because they didn't spend the time testing them.

Security in non-trivial code is hard. People insist on writing stuff in C and other "hard" languages, and this is the result. We probably should have switched to Ada a long time ago. "Oh noes, it is 10% slower" is no excuse; just buy the 2.2GHz machine instead of the 2GHz one.

Looks like someone can't understand the concept of an example for the sake of making a point. I'm sure most people would much rather have a device that doesn't crash and doesn't get hacked, than gain 10% in speed.

"It really worries me that the FreeType font library is now being made to accept untrusted content from the web.

The library probably wasn't written under the assumption that it would be fed much more than local fonts from trusted vendors who are already installing arbitrary executables on a computer, and it's already had a handful of vulnerabilities found in it shortly after it first saw use in Firefox.

It is a very large library that actually includes a virtual machine that has been rewritten from Pascal to single-threaded non-reentrant C to reentrant C... The code is extremely hairy and hard to review, especially for the VM."

That doesn't excuse the fact that it's totally unnecessary. They've created an entire virtual machine for the sole purpose of font rendering. Doesn't that strike you as just a little bit over the top? Text is just symbols arranged on the screen -- I'm certain better ways of doing this could be imagined that wouldn't require an exploitable VM with root permissions.

I don't care if it's turing complete or not, it's irrelevant. They've taken one of the most basic functions of a computer and managed to overly-co

Desktop publishing has used embedded, Turing-complete languages for decades -- TeX is Turing-complete, as is XSLT. It's the best and most compact way of specifying an abstract image for generic rasterizing displays of arbitrary resolution.

Desktop publishing has used embedded, Turing-complete languages for decades -- TeX is Turing-complete, as is XSLT. It's the best and most compact way of specifying an abstract image for generic rasterizing displays of arbitrary resolution.

No, it's not the best way; it has handed someone a root exploit. And it isn't the most compact way either -- because obviously it grew to such complexity that it became part of the kernel. These are design failures. If you cannot figure out a way to put pixels on the screen without getting yourself rooted, you're doing it wrong.

Well, CoreText isn't in kernel, it's a library, and the exploit just allows crashes in userspace, not actual "rooting." Your account of the computer science involved is pretty terrible, in particular your conflation of any runtime environment and a privileged execution environment, and your confusion on the meaning of information-theoretic compactness. Your blanket condemnation of 30 years of desktop printing technology, the implementers of PostScript, and Donald Knuth(!) is also somewhat concerning.

Sigh. Fonts are programs, and have been, for a long while now. Is that news to you? You must have never seen what it takes to actually render a font not to understand that. Be thankful those are not postscript fonts, because those would have been even harder to implement safely. The TTF hinter execution environment is much simpler.
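To illustrate what "fonts are programs" means in practice, here's a deliberately tiny toy stack machine in the spirit of a TrueType-style hinter. To be clear, the opcodes are invented for illustration; they are not real TrueType instructions:

```python
# A toy stack machine showing why a font hinter is "a program",
# and why a buggy interpreter is an attack surface. Opcodes are invented.
def run_hints(program, grid=64):
    """Interpret a list of (opcode, arg) pairs; return the final stack."""
    stack = []
    for op, arg in program:
        if op == "PUSH":
            stack.append(arg)
        elif op == "ADD":
            b, a = stack.pop(), stack.pop()   # blows up on a short stack!
            stack.append(a + b)
        elif op == "ROUND_TO_GRID":
            stack.append((stack.pop() // grid) * grid)
        else:
            raise ValueError(f"bad opcode {op!r}")
    return stack

# Nudge a coordinate of 100 units by 30, then snap it to the 64-unit grid.
print(run_hints([("PUSH", 100), ("PUSH", 30), ("ADD", None),
                 ("ROUND_TO_GRID", None)]))  # [128]
```

Feed it a malformed program, say run_hints([("ADD", None)]), and the interpreter dies with an IndexError. In Python that's a clean exception; in hairy, hand-tuned C, the same class of bug becomes memory corruption, which is exactly why these interpreters keep showing up in security advisories.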

They've created an entire virtual machine for the sole purpose of font rendering. Doesn't that strike you as just a little bit over the top? Text is just symbols arranged on the screen -- I'm certain better ways of doing this could be imagined that wouldn't require an exploitable VM with root permissions

Spoken like someone who has never actually written code to display text. Sure, with monospaced bitmap fonts, this is an easy problem. For modern text, you start off with a set of bezier paths representing each glyph. That's fairly easy to render, and you can just start drawing each one to the right of the previous one. That will give you blurry characters with ugly spacing, but it's a start.

So how do you fix the blurriness? Now you need some hinting telling the renderer when it should try to snap li

All that trouble, and an old fashioned screen font still looks better.

Sure, as long as you only ever have one screen DPI to deal with and only need to support a small number of font sizes and don't ever need to print. Of course things look better if you draw them for the exact output format that you're targeting.

That's not any kind of excuse. Turing-completeness in itself need not imply that code should be able to break out of its execution environment.

It is a very large library that actually includes a virtual machine that has been rewritten from Pascal to single-threaded non-reentrant C to reentrant C... The code is extremely hairy and hard to review

Yes, and that such a fundamental piece of code should be in such an awful state is pretty much exactly a summary of what's wrong with the software industry.

Any foundational platform library must be reviewable and provably correct. I don't care how "hard" this is to do, because the Internet does not care. The Internet is going to crash your code if it is crashable and

Um, this isn't fixed ASCII text. Dynamically scaling fonts like TrueType ones are almost images, especially with internationalization. You might think printing out "A" is easy, but you don't see that the device had to scale that A to a certain size and draw it differently depending on the dimensions prescribed by the font definitions (Serif, Sans Serif, Cursive, Italic, Bold, etc). Also, if you want to do any business in places like China, your font rendering engine better be able to handle the complexity. Not

Okay, am I the only one that thinks that if you can't design something that renders text onto a screen without it turning into the Ocean's Eleven of computer security, you're doing it wrong? Be honest now guys. I can understand this in something that needs to interpret complex animations of dancing toilet paper flying across my screen screaming "Buy meeeee, pleeeeeeease!" -- I don't approve, but I can see how someone could screw it up.

But text... really guys, I mean, really?

I really get where you're coming from... However, Unicode is a PITA to implement, what with multiple glyphs for compositions / decompositions and BIDI (text direction rules) -- which change depending on paragraph direction and state machine. That's just the character encoding! To actually render the fonts there's a tiny VM that decodes the glyphs and handles sub-pixel hinting, etc. A bitmap ASCII (CP437) font? Done. I can crank one out in an hour, tops... Unicode w/ TrueType or FreeType? Ugh. I mean, just getting the character property tables from the Unicode site downloaded and transformed from CSV to the format we need is a project in and of itself. Given the bugs I've encountered in every last 3rd-party library (even libPNG), I'm hesitant to use others' code unless I have to (I have a higher standard -- input fuzzing, code coverage and unit testing for everything), but bugs in today's text rendering systems aren't just expected, they're a given -- it's literally the first thing I attack, and almost every time it works against new code: embedded invalid surrogate pairs, and over-long forms. [wikipedia.org]
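Those two probes at the end are easy to try at home; Python's codecs are strict and reject both, which is exactly what a text-handling stack needs to do (a sketch, not tied to any particular rendering library):

```python
# Two classic malformed-input probes for text-handling code.

# 1. An over-long form: '/' (U+002F) illegally encoded in two bytes.
#    A strict UTF-8 decoder must reject it.
try:
    b"\xc0\xaf".decode("utf-8")
    raise AssertionError("decoder accepted an over-long form")
except UnicodeDecodeError as e:
    print("over-long form rejected:", e.reason)

# 2. A lone surrogate: U+D800 is reserved for UTF-16 pairs and is not
#    a valid scalar value on its own.
try:
    "\ud800".encode("utf-8")
    raise AssertionError("encoder accepted a lone surrogate")
except UnicodeEncodeError as e:
    print("lone surrogate rejected:", e.reason)
```

The historical exploits come from lenient decoders that quietly accept these forms, letting "/" sneak past a filter that only checks for the one-byte encoding.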

Ah, but everyone's doing it wrong but you? Well, let me tell ya something: If you set out to make the closest to the metal compilable language that's not ASM, it'll work just like C does (C is a product of the architecture more than anything). Same goes for making a minimal font rendering system that covers all the world's languages -- Try it, it'll end up almost exactly like TrueType & Unicode because they're products of their environment too.

Now, that's not to say I don't agree with you to some extent. I'd say humans need to ditch all the BS and start from scratch to create a language that's easy to OCR, with syntax and grammar that's extensible and unambiguous and thus interpretable by machines. Do that and "natural language processing" is a no-brainer (literally). We get away with as few as 16 glyphs for the Virgon (Galactic) language -- designed for ease of deciphering from examples using mathematics, incrementally graduating up to a small Von Neumann "VM" and then including "instructional" programs to then teach the rest.... So, yeah, you damn dirty apes did do it wrong, but if your sunk cost fallacy doesn't keep you doing it wrong, you'll be the first lifeforms in the Super Cluster to do it right before you've solved the Fermi Paradox.

Obviously someone who thinks Unicode is just an extended character set. Unfortunately, it isn't, and it's why characters are referred to as "codepoints" (because you may need multiple codepoints to actually produce a character).

First come the many ways of expressing a codepoint as a string - UTF-8, UTF-16, UTF-32 are just the most common variations (and there's also the whole big and little endian thing). And there's plenty of reasons why you'd want say, UTF-16 over
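The parent's point about codepoints vs. characters fits in one short sketch (Python; the é example is my own, not from the parent):

```python
# One user-visible character, two different codepoint sequences.
import unicodedata

composed = "\u00e9"     # é as a single codepoint
decomposed = "e\u0301"  # 'e' followed by COMBINING ACUTE ACCENT

print(len(composed), len(decomposed))                        # 1 2
print(composed == decomposed)                                # False, codepoint-for-codepoint
print(unicodedata.normalize("NFC", decomposed) == composed)  # True after normalization
```

So even "how many characters is this string" and "are these two strings equal" need Unicode normalization to answer correctly, before a single glyph has been rendered.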

Okay, am I the only one that thinks that if you can't design something that renders text onto a screen without it turning into the Ocean's Eleven of computer security, you're doing it wrong? Be honest now guys. [...]

But text... really guys, I mean, really?

Slashdot, one of the geekiest sites, with sysadmins who should know how to do it right... doesn't even *allow* Unicode, apparently due in part to some spoofing or other security risks.

Never mind "doing it wrong", Slashdot isn't even *trying*. There should be no excuse after all this time. It's just text, after all. Right?

Or perhaps modern text processing/rendering is much more complex and complicated than you think.

Google can roll out system patches via Play too. It does it now and again to deal with serious security issues, or provide new features. The patches can affect all versions of Android, at least as far back as 1.5.

Yes and yes. For example, they recently rolled out a system update for the app signature spoofing vulnerability, and every version of the OS got it, on every device with Google Play (i.e. 99% of them; only major forks like Amazon's Kindle OS were not covered).

From what I've seen the answer is actually no. They issued a patch to the OEMs [throwingdigitalsheep.com] which then are responsible for patching handsets. Also AFAIK the update issued via Google Play was to patch Google Play itself to scan for this issue, it didn't patch Android: