Posted by timothy on Sunday July 27, 2014 @11:55PM
from the little-of-this-little-of-that dept.

New submitter Brett W (3715683) writes: The security researchers that first published the 'Heartbleed' vulnerabilities in OpenSSL have spent the last few months auditing the top 50 most-downloaded Android apps for vulnerabilities, and have found issues with at least half of them. Many send user data to ad networks without consent, potentially without the publisher or even the app developer being aware of it. Quite a few also send private data across the network in plain text. The full study is due out later this week.

Code recycling is one thing, but not understanding what that code does when you put it into a production app or not following best practices is another. As Android gains popularity as a platform to develop for, we're going to lose quality as the new folks jumping onto the band wagon don't care how their apps work or look beyond the end goal. This mentality is already popping up with Android Wear developers who cram as much information as they can on the screen and claim that design guidelines are "just recommendations."

Design guidelines are just recommendations. Frequently bad ones. A developer should design the best UI he can, not follow what Google says regardless of whether it fits. And most developer guidelines, Google and Apple both, are crap.

The problem is that the whole app movement has brought in a whole slew of crappy developers whose idea of coding is to search Stack Overflow or GitHub for stuff to copy-paste. They don't read it, don't understand how to use it right, and expect it to magically work. Worse, half of the people writing that code fall into the same category, so it's the blind leading the blind. If you pick a library off of GitHub and assume it will work, you deserve what you get. Unfortunately, your users don't.

These people have been around for a while (they used to be "web developers" and programmed by copy-pasting big chunks of JavaScript). The problem is that on a phone they can do more damage. In a world where the number of quality programmers is fixed and far less than the demand for programmers, how do you fix it? Making it easier to program actually hurts; you end up with those crappy coders trying to do even more. Maybe it's time to raise the barriers to entry for a while.

They can get simple things done without understanding the whole system. They deliver something that sort of works. This makes them cheap labor.

Why do we need cheap labor? Because of competition and a race to the bottom driven by consumer buying decisions.

In a talk, Gabe Newell from Valve said that a free game got you 10x more users and 3x more profit (they, for example, get some money from people selling items inside the game). Not that Valve uses cheap labor; they actually do the exact opposite. But it just illustrates how important price is.

So free, like the above, is a profitable model; free and ad-supported might actually not be as profitable. I don't know how much money companies get for selling personal information. I assume it is more than the ads.

So how do you solve that?

I see a few possible ways:
- education
- create good open source libraries that prevent most of the bad things and that cheap developers want to use
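One way to make the "good libraries" idea concrete is an API that refuses to do the insecure thing by default, so the easy copy-paste path is also the safe one. A minimal sketch in Java (the class and method names here are hypothetical, not from any real library):

```java
import java.net.URI;

// Hypothetical helper: a network API that rejects plaintext URLs by default.
// A careless copy-paster gets the secure behavior for free.
public class SafeHttp {
    // Returns the parsed URI only if it uses HTTPS; otherwise throws,
    // forcing the caller to make an explicit, visible decision.
    public static URI requireHttps(String url) {
        URI uri = URI.create(url);
        if (!"https".equalsIgnoreCase(uri.getScheme())) {
            throw new IllegalArgumentException(
                "refusing plaintext URL: " + url + " (use https://)");
        }
        return uri;
    }
}
```

A developer who really needs plaintext has to bypass the helper deliberately, which at least makes the decision visible in code review.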

Now comes the kicker:

Do you think HTML5 apps without any permissions by default on phones would be a better model? :-) That would be a model similar to JavaScript code running in the browser on the desktop, where the user is asked to allow access to the camera when needed.

Actually, I do, but then again I actually do use a FirefoxOS phone to see what it is like.

A lot of the time the hardware is a bit underpowered, so it can be sold in countries that still have a large number of feature phones, or to people not willing or able to pay for more expensive hardware.

But it's still pretty impressive what they can get out of that cheaper hardware.

Code recycling is one thing, but not understanding what that code does when you put it into a production app or not following best practices is another. As Android gains popularity as a platform to develop for, we're going to lose quality as the new folks jumping onto the band wagon don't care how their apps work or look beyond the end goal. This mentality is already popping up with Android Wear developers who cram as much information as they can on the screen and claim that design guidelines are "just recommendations."

The exact same thing happens on every other platform, though perhaps to varying degrees. I refer to it as the Stack Overflow effect. One developer who doesn't know the right way to do something posts a question. Then, a developer who also doesn't know the right way to do it posts how he or she did it. Then ten thousand developers who don't know the right way to do it copy the code without understanding what it does or why it's the wrong way to do it. By the time somebody notices it, signs up for the site, builds up enough reputation points to point out the serious flaw in the code, and actually gets a correction, those developers have moved on, and the bad code is in shipping apps. Those developers, of course, think that they've found the answer, so there's no reason for them to ever revisit the page in question, thus ensuring that the flaw never gets fixed.

Case in point, there's a scary big number of posts from people telling developers how to turn off SSL chain validation so that they can use self-signed certs, and a scary small number of posts reminding developers that they'd better not even think about shipping it without removing that code, and bordering on zero posts explaining how to replace the SSL chain validation with a proper check so that their app will actually be moderately secure with that self-signed cert even if it does ship. The result is that those ten thousand developers end up (statistically) finding the wrong way far more often than the right way.
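The "proper check" alluded to above is usually certificate pinning: instead of disabling chain validation entirely, compare the server's certificate against a fingerprint baked into the app. A hedged sketch using only standard javax.net.ssl types (the class name and pin value are placeholders, not a production-ready implementation):

```java
import java.security.MessageDigest;
import java.security.cert.CertificateException;
import java.security.cert.X509Certificate;
import javax.net.ssl.X509TrustManager;

// Sketch of a pinning trust manager: rather than returning blindly from
// checkServerTrusted (the common copy-paste anti-pattern), it accepts the
// connection only if the leaf certificate's SHA-256 fingerprint matches
// a pin shipped with the app. Works for self-signed certs too.
public class PinningTrustManager implements X509TrustManager {
    private final String expectedSha256Hex; // pin for the expected cert

    public PinningTrustManager(String expectedSha256Hex) {
        this.expectedSha256Hex = expectedSha256Hex;
    }

    // Hex-encoded SHA-256 of the certificate's DER encoding.
    public static String fingerprint(byte[] der) {
        try {
            byte[] digest = MessageDigest.getInstance("SHA-256").digest(der);
            StringBuilder sb = new StringBuilder();
            for (byte b : digest) sb.append(String.format("%02x", b));
            return sb.toString();
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    @Override
    public void checkServerTrusted(X509Certificate[] chain, String authType)
            throws CertificateException {
        if (chain == null || chain.length == 0
                || !fingerprint(chain[0].getEncoded()).equals(expectedSha256Hex)) {
            throw new CertificateException("certificate does not match pin");
        }
    }

    @Override
    public void checkClientTrusted(X509Certificate[] chain, String authType)
            throws CertificateException {
        throw new CertificateException("client certs not supported");
    }

    @Override
    public X509Certificate[] getAcceptedIssuers() {
        return new X509Certificate[0];
    }
}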

Of course, it's not entirely fair to blame this problem solely on sites like Stack Overflow for limiting people's ability to comment on other people's answers unless they have a certain amount of reputation (a policy that is, IMO, dangerous as h***), and for treating everybody's upvotes and downvotes equally regardless of the reputation of the voter. A fair amount of blame has to be placed on the companies that create the technology itself. As I told one of my former coworkers, "The advantage of making it easier to write software is that more people write software. The disadvantage of making it easier to write software is that... more people write software." Ease of programming is a two-edged sword, and particularly when you're forced to run other people's software without any sort of actual code review, you'd like it to have been as hard as possible for the developer to write that software, to ensure that only people with a certain level of competence will even make the attempt—sort of a "You must be this tall to ride the ride" bar.

To put it another way, complying with or not complying with design guidelines are the least of app developers' problems. I'd be happy if all the developers just learned not to point the gun at other people's feet and pull the trigger without at least making sure it's not loaded, but for some reason, everybody seems to be hell-bent on removing the safeties that would confuse them in their attempts to do so. Some degree of opaqueness and some lack of documentation have historically been safety checks against complete idiots writing software. Yes, I'm wearing my UNIX curmudgeon hat when I say that, but you have to admit that the easier programming has become, the lower the average quality of code has seemed to be. I know correlation is not causation, but the only plausible alternative is that everyone is trying to make programming easier because the average developer is getting dumber and can't handle the hard stuff, which while p

Although you certainly have a point, the core problem is often that the documentation is poor. I find that if there is a proper writeup of the solution somewhere on the net, Stack Overflow will mention it (eventually). If there is no proper writeup, sometimes someone bright posts a solution that is right, and sometimes people stumble upon a voodoo solution that nobody understands properly, but sort-of works.

The Android APIs are susceptible to this problem, because they are often poorly documented, have glar

Amazingly, security libraries are often in this category. Is there a really good writeup ANYWHERE about SSL, certificates and signing practices? And IPSec with all its intricacies?

Funnily enough, on Stack Overflow! Not all of the security-related questions are overflowing with shitty misinformation. (SO might not be great, but it's better than the squillion shitty places for question answering that preceded it.)

Although you certainly have a point, the core problem is often that the documentation is poor.

A not uncommon problem being "solutions" which omit steps, or assume that everyone knows how to find what is, in practice, an obscure option. Sometimes they also have "boilerplate" which over-explains another part of the process.

Amazingly, security libraries are often in this category. Is there a really good writeup ANYWHERE about SSL, certificates and signing practices?

The problem is worse on Android than on many other platforms because there are very few native shared libraries exposed to developers, and there is no sensible mechanism for updating them all. If there's a vulnerability in a library that a load of developers use, then you need 100% of those developers to update the library and ship new versions of their apps to be secure. For most other systems, core libraries are part of a system update and so can be fixed centrally.

The problem is worse on Android than on many other platforms because there are very few native shared libraries exposed to developers, and there is no sensible mechanism for updating them all. If there's a vulnerability in a library that a load of developers use, then you need 100% of those developers to update the library and ship new versions of their apps to be secure. For most other systems, core libraries are part of a system update and so can be fixed centrally.

Case in point, there's a scary big number of posts from people telling developers how to turn off SSL chain validation so that they can use self-signed certs, and a scary small number of posts reminding developers that they'd better not even think about shipping it without removing that code, and bordering on zero posts explaining how to replace the SSL chain validation with a proper check so that their app will actually be moderately secure with that self-signed cert even if it does ship. The result is tha

A self-signed certificate is never more secure than a CA-signed cert. Period. The only benefit to self-signed certs is cost. Any other perceived benefits are merely side effects caused by forcing you to do extra security checks to make up for the lack of a CA—checks that you could do anyway, but probably won't.

For example, if you're paranoid about a CA issuing a cert for your organization to someone else, then you might add code in your app to do your own set of checks to decide whether a cert is v

I'll agree there, though it's not Java at fault necessarily, not unless you lump in a bunch of other languages like VB, C#, JS, etc.

The problem is of the library code you're using. Libraries should be small, well defined, easy to use, and documented.

The problem today is (especially with code written in Java, .NET, or JS) that it is knocked up to solve some problem, but the problem is not only not properly understood; the code that is provided doesn't solve it particularly well. It's not defined as a discr

Lots of times, you see something wrong, and you want to point it out, but by limiting commenting to people with rep, if you don't have rep on that particular board, you are prevented from correcting the error. That means that there's wrong information without any hint that it might be wrong. So the worst-case scenario there is pretty bad.

By contrast, if you remove those limits, the worst-case scenario is that people who don't know what they're doing might say that it is wrong, at which point you'll have t

Probably mostly speed. Understanding every tool you use means you must invest time to understand it. In the swift and agile world of app development, security is the first victim. Taking time to understand what you are doing seems to be outdated. The only thing users can do is not install apps that request rights they have no need for. Sadly, most users do not care.

Code recycling is one thing, but not understanding what that code does when you put it into a production app or not following best practices is another.

No developer completely understands everything that happens on a system; that's impossible. You do your best and you verify as well as you can that it's acting as you expect. Because where else do you stop? You can't verify every library that you use; otherwise, why bother using them? You might as well write your own. You can't verify the system itself, because it's far too big.

Not that I'm saying things couldn't be written better, but programming is not a "correct / incorrect" binary choice, any nontrivial

You don't have to understand everything, but you do need to at least understand the basics, like how networking works, how crypto works, etc., at a conceptual level. I feel like too many developers learn how to program by learning JavaScript and other scripting languages on their own, then jump into app programming thinking that it's only one step harder because you can sort of do it in Python/Ruby/other Obj-C bridged languages/other .NET languages, or because Swift looks like JavaScript, or whatever their

It doesn't matter if it is Windows, Mac, iOS, Android, or Linux, all software is full of bugs.

For that matter, all of everything constructed by human beings...is full of defects, or potential defects, or security vulnerabilities. Your house, for example. You have a lock on your front door, but it takes a thief just a few seconds to kick the door in. Or your car...a thief can break into it in seconds, even if you have electronic theft protection. I'd call those "security vulnerabilities."

It's the nature of all human creations, software or hardware, electronic or mechanical.

So what do we do? We improve security until it becomes "just secure enough" that we can live with the risks, and move on.

Or add features to a language that help the programmer prove that certain defects are not present. Bounds-checked arrays are a big one compared to plain C, but others exist. Rust, for example, has separate types for "pointer that can never be null" and "pointer allowed to be null", and it is a compile-time error to pass the latter to a function expecting the former outside of a construction that means essentially "if null then do X else do Y".
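Java's Optional gives a weaker, library-level version of the same idea: the "maybe absent" case gets its own type, and the "if null then X else Y" branch becomes an explicit method call. A rough analogue (the class and method names are illustrative):

```java
import java.util.Optional;

// Library-level analogue of "pointer allowed to be null" vs "pointer that
// can never be null": an Optional forces the caller to handle absence
// explicitly before a plain value comes out.
public class Nullability {
    // Lookup that may fail: returns Optional.empty() instead of null.
    public static Optional<String> lookup(String key) {
        return "greeting".equals(key) ? Optional.of("hello") : Optional.empty();
    }

    // The "if present then X else Y" branch is spelled out, not implied.
    public static String describe(String key) {
        return lookup(key).map(v -> "found: " + v).orElse("missing");
    }
}
```

Unlike Rust, nothing stops a Java caller passing a literal null around, so this is a convention rather than a compile-time guarantee.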

Until the saviour Rust browser engine descends from heaven to rule the earth, we have to live with the sign of the Gecko beast on our forehead. Before Servo delivers us from the zeroth day on the last day, a huge fox with fire will be cast into the hdd: and the third part of the hdd shall become blood; and the third part of the tabs which were on the gecko, and had life, died; and many processes will die of the memory, because the memory will have been made bitter.

I would recommend MISRA C, but it's impossible to make a conformance checker under a free software license because quoting the rules in error messages appears to require an incompatible copyright license. Source: presence of the word "prices" in the section "I am a tool vendor" in the official FAQ [misra-c.com].

But we don't do that. We never do that. As developers, we hide our heads in the sand until we absolutely can no longer ignore the problem, and then we say "Whoops! My bad!" As consumers, we assume that professionally published software should be reasonably free of bugs or exploitable code. Until people start being held accountable by law for their shitty software, the status quo will never change.

I was demonstrating to a shitty software developer the other day how all his input sanitizing routines were in the javascript front end to his web application and anyone bypassing the javascript could essentially have their way with the back-end database, and he told me "Oh you're making a back-end API call, no one will ever do that!" No one except the guy who's hacking your fucking system, jackass. People like that make me want to sign on as Linus' personal dick-puncher. Whenever someone writes some shitty software that pisses Linus off, I will find that person and I will PUNCH THEM IN THE DICK. Because I swear to god, that's what it's going to take. Congress is going to have to WRITE A LAW allowing me to HUNT PEOPLE DOWN and PUNCH THEM IN THE DICK over the SHITTY SOFTWARE they write. And when that day comes, with God as my witness, I will PITCH A TENT outside MICROSOFT HEADQUARTERS, and that will be the LAST TENT EVER PITCHED at MICROSOFT HEADQUARTERS!
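The underlying point, that client-side JavaScript checks are advisory at best, means the back end has to re-validate everything on every request. A minimal sketch of server-side whitelisting (the validation rule here is purely illustrative):

```java
import java.util.regex.Pattern;

// Server-side validation sketch: the back end re-checks input regardless
// of what the front-end JavaScript already did, because an attacker can
// call the API directly and skip the browser entirely.
public class ServerValidation {
    // Illustrative whitelist rule: usernames are 3-20 word characters.
    private static final Pattern USERNAME = Pattern.compile("^\\w{3,20}$");

    public static boolean isValidUsername(String input) {
        return input != null && USERNAME.matcher(input).matches();
    }
}
```

Whitelisting known-good shapes (rather than trying to blacklist known-bad ones) is generally the safer default, and it belongs on the server even when the front end duplicates it for usability.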

2. Some people just don't know this yet; they don't have a hacker mentality (which is what is needed to understand whole systems and how things can be used in ways they were never intended). A hacker mentality is not taught at educational institutions, so they still need to learn it. It usually isn't malice or laziness; it's not understanding what you are doing. All they have learned is how to get the task completed.

I was demonstrating to a shitty software developer the other day how all his input sanitizing routines were in the javascript front end to his web application and anyone bypassing the javascript could essentially have their way with the back-end database, and he told me "Oh you're making a back-end API call, no one will ever do that!" No one except the guy who's hacking your fucking system, jackass.

That actually happened in one of the online games I used to play. The game company decided to run a promotio

For that matter, all of everything constructed by human beings...is full of defects, or potential defects, or security vulnerabilities. Your house, for example. You have a lock on your front door, but it takes a thief just a few seconds to kick the door in. Or your car...a thief can break into it in seconds, even if you have electronic theft protection. I'd call those "security vulnerabilities."

So what do we do? We improve security until it becomes "just secure enough" that we can live with the risks, and move on.

Who cares about the security of an untrusted and untrustworthy app in the first place?

What difference does it make if it was written by the most competent team of programmers in the world if, while operating as designed, it still treats the end user with contempt?

You might not be terribly surprised to know that our genes (and the genomes of pretty much everything) are also full of bugs. We have a whole raft of deleterious genetic variants in our DNA that are just waiting for the perfect time to activate and say "hey, you know that life thing? I can make it worse." On top of that, we have a few viral genomes in our DNA (possibly some that are still active), and rely on bacteria and mitochondria to provide

Software on Internet-connected devices is a bit different from your examples though. No matter how insecure cars are, it would be really hard for me to steal a million cars in one night, let alone without being caught. Yet, it's common to see millions of computers/phones being hacked in a very short period of time. And the risk to the person responsible is much lower.

That's trivial. It's like saying, there are only two numbers, "zero" and "many". It simply isn't true that all languages and all platforms are full of bugs in any meaningful sense. Some platforms are more buggy than others. This is a function of how old the platform is, how serious the creators are about preventing bugs, etc. That's meaningful.

For example, the well-known OpenBSD aims to be much more secure than other OSes. The well-known Windows family treats security only as an afterthought.

Some people believe that either something is secure or it's not, just like a woman is pregnant or she's not, or a dish is vegan or it's not [wikipedia.org]. But to head off an imminent definition debate [c2.com], let me explain your core idea in terms they'll understand:

Virtually all off-the-shelf software is insecure. People take out errors and omissions liability insurance ("E&O") [wikipedia.org] to cover their behinds in case a vulnerability causes a noticeable problem. You may call software "more secure" if it has had its vulnerable sur

This is the sort of thing that you can expect when you put developers through a whirlwind coding course. They learn to use library after library without understanding the ramifications of their use. Need an ad network? Slap in a library. Need geolocation? Slap in a library. What you end up with are flashlight applications that want permission to read the low level system log. Then again, that's coding in the instant gratification world that we live and develop in today.

Just imagine a world where you had no libraries and had to manually code everything. What would that world look like? No developers? No consistency for end users? Do you think security would be better when developers are forced to write more code?

You'd have a vast library of libraries, something like CPAN or what you'd get in the C world: libraries written to perform some task and nothing more, then documented with care and the API published.

When anyone wants to do something, they take the library that appeals to them, add it to their program, and build up a program from these bits.

Now the problem today is that a) some only use libs that come with the OS or language framework, b) the libraries that are out there are shit, written quickly and for

Let's see this list of spyware. Will Google kick them out of the Android store? Will the FBI prosecute the developers for "exceeding authorized access" under the Computer Fraud and Abuse Act? If not, why not?

Let's see this list of spyware. Will Google kick them out of the Android store? Will the FBI prosecute the developers for "exceeding authorized access" under the Computer Fraud and Abuse Act? If not, why not?

Easy: the summary says they analyzed the top 50 downloaded apps, so your list of spyware will be those.

As for Google, well, Google owns the online advertising market, so those apps really are helping Google in the end...

And if obtaining root access trivially is important to an Android user, they will choose their device accordingly.

So how does one who has been given a hand-me-down device, such as my cousin, go about that? Sell the device on Craigslist and buy another?

Nexus devices don't require some exploit to be found to achieve root... it's a very straightforward process.

Root on a Nexus requires unlocking the bootloader, which in turn requires wiping the device. This means you lose all your data if you want to gain root at any time other than the day you buy a new device.

You can buy Linux devices set up as kiosks that lock the user out of root.

The difference is still that GNU/Linux PC owners are expected to have root. All major distros either ask for a root password or put the first created user into a "wheel"

I'm aware of that. Say I were to back up my first-generation Nexus 7 tablet through Android Debug Bridge (ADB). How would I verify the completeness of this backup?

GNU/Linux PC owners are expected to have root.

Not on a kiosk, video game console, a TiVo, or any other "appliance".

Which such appliance is a "GNU/Linux PC"? Video game consoles do not run Linux (except for those few remaining fat PlayStation 3 consoles that haven't been upgraded past system software 3.20). TiVo DVRs run Linux but not GNU/Linux. You keep bringing up "kiosks"; to which kiosks are you referring?

if you buy an appliance-type computing device (which I gave multiple examples of already,

I must have missed where you named a make or model of appliance using GNU/Linux, not a non-GNU userland on the Linux kernel. The point I was trying to get across in #47550429 was that GNU/Linux is less likely than other kinds of Linux-based operating system to come installed on an appliance locked down against its owner.

and smartphones are one)

As the GNU/Linux FAQ [gnu.org] explains: "There are complete systems that contain Linux and not GNU; Android is an example. [...] Wh

The entire article is harping on third-party ad network libraries stealing personal data and phoning tracking info home. As these are libraries, and developers are re-using open source libraries, it follows that "open source is no free lunch" and is stealing your data. What a majestic leap in logic!

They conflate open source libraries with various ad-network code stealing personal data, basically trying to portray open source code as being responsible for it. Never mind that the ad-network code is almost never open source.

Granted, OSS is certainly not bug-free, but the spyware has little to do with it.

Yeah. As long as the customers don't care about security, only about a shiny interface, and aren't willing to pay, focusing on the interface and not on the security of the app seems like a reasonable economic decision to me.

The choice seems to be between the flexibility of Android vs. the (arguably?) better security on iOS.

I'd like to be able to install Android apps without having to accept all of the permissions they require, but without rooting my phone that's impossible. As a result, there are many apps I just won't install (it took me ages to find a torch app that didn't need anything beyond access to the camera, for example).

On the other hand, I love widgets - quick access to information and actions from the desktop is re

"Some people might have been providing a vulnerability on purpose in order to do something nasty... Who are they working with? Do they have sideline jobs somewhere else? The developers might be getting their dollars from ad networks"

Is this what Slashdot has been reduced to, regurgitating anti-open-source FUD on behalf of what is most probably a false front for the

Utter crap. Codenomicon are very friendly to FLOSS and FLOSS developers. They're also great guys. They have been providing free test services to the Samba project for many years now, and have helped us fix many many bugs.

In case you hadn't noticed, the code they're reporting on here is closed source proprietary code...

Why on earth would you recycle code? That is rookie programming error 101. Every program you write needs to use a fresh and clean set of functions and structures, because how else can you get everything to fit together perfectly?

The article mentions sandbox tools that allow admins to test applications and see what the code and the libraries are really doing, but doesn't name any of them. Any /.ers know if there are FOSS or BSD tools of this sort? Or even cheap proprietary ones? I read the code for any library I use, and I try to add some assert()-like statements where the lib dev might have felt them unneeded, to be certain that nothing gets past memory boundaries. But everyone misses something now and then, and just look at the IOC

Not surprised that android apps are full of holes. The whole android concept was designed to treat people like commodities in a way never before possible. The whole Ecosystem is *engineered* to have holes.

Not surprised that iPhone apps are full of holes. The whole Apple concept was designed to treat people like commodities in a way never before possible. The whole Ecosystem is *engineered* to have holes.

How would an ecosystem be designed not to have these sorts of holes but also not to restrict what the owner of a device can use it for?

Just look at the Xprivacy extension for rooted android phones. Even iPhones let you disable app permissions. What has Google done about the issue? They reduced permissions into groups so users couldn't even know exactly what their apps have access to any more. Oh, and block apps from writing to most of the external SD card, but they can do whatever they want to the internal one. Guess Google doesn't like privacy or SD cards.

Oh, and block apps from writing to most of the external SD card, but they can do whatever they want to the internal one. Guess Google doesn't like privacy or SD cards.

That's just incorrect. For the internal memory, an app can't overwrite another app's private data, it can't even read it without special interfaces (assuming a non-rooted device). An external SD card on the other hand is deemed insecure by definition since it can easily be pulled out and placed into another device. So an external SD card was chosen as an easy way to store, share, and manage media files between different applications.

Internal memory and internal SD card are two separate things in Android. Internal SD card is simply a part of the internal NAND that the OS treats like a normal SD card. Many phones don't support external SD cards but have moderate amounts of storage, so they compromise.

Internal memory and internal SD card are two separate things in Android. Internal SD card is simply a part of the internal NAND that the OS treats like a normal SD card. Many phones don't support external SD cards but have moderate amounts of storage, so they compromise.

I'm not sure I follow.

Many phones don't support external SD cards, but officially their APIs still need to support external storage with internal SD memory anyway; otherwise they won't pass the Compatibility Test Suite [android.com].

Internal memory and internal SD card are two separate things in Android. Internal SD card is simply a part of the internal NAND that the OS treats like a normal SD card. Many phones don't support external SD cards but have moderate amounts of storage, so they compromise.

I'm not sure I follow.

Many phones don't support external SD cards, but officially their APIs still need to support external storage with internal SD memory anyway; otherwise they won't pass the Compatibility Test Suite [android.com].

The problem is that the internal SD card and external SD card are treated differently.

Android apps by default work off the internal SD card. It's actually a separate partition that's mounted at the same place as old phones used for external SD cards. You can't change the default to use an external card. You can't recover space from that internal partition.*

Here's the kicker. Now external SD cards are mounted somewhere else (/mnt/extSD). The thing is that many apps don't work with the external SD card, especially after the latest Android release. With Android KitKat, apps with the (misnamed) external storage permission can read and write anywhere on the internal card. The problem is that they can now read anywhere on the external card, but can only write to a directory on it, something like "/mnt/extSD/data/app.name". There are a few exceptions for system apps like the camera, but regular apps have to use this weird naming scheme.

It's actually a good security feature, but the fact they don't apply it to the internal SD card just seems to be Google deliberately moving people away from phones with an external SD card. Not cool.

*Without rooting, and knowing exactly what you're doing at least. No way a non expert is doing this.

"Android apps by default work off the internal SD card. It's actually a separate partition that's mounted at the same place as old phones used for external SD cards. You can't change the default to use an external card."

Depends on the phone. I have a cheap-ass Android phone with only 4GB of internal memory, but it let me choose (out of the box, no root-only tricks here) whether I want the internal memory or the physical microSD card mounted as /sdcard0 or /sdcard1. The phone switches them if you like (and that is very recommended with this little internal memory).

The internal "SD Card" is formatted with a Unix-style file system that provides access controls to keep apps from being able to access one another's data. External SD cards are formatted with FAT32, because that's what the whole world expects. Unfortunately, FAT has no concept of ownership or permissions, so the path-based restriction is necessary to ensure that apps can't muck with each other's data.
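To illustrate the difference, here's roughly what Unix-style per-app isolation looks like in plain Java. This is a sketch, not Android's actual implementation; note that on a FAT32 volume the setPosixFilePermissions call would throw UnsupportedOperationException, because FAT has no permission bits to set.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.attribute.PosixFilePermission;
import java.nio.file.attribute.PosixFilePermissions;
import java.util.Set;

public class InternalStorageDemo {
    // Create a directory readable and writable only by its owner --
    // roughly how Android isolates each app's private data directory
    // on the internal, Unix-formatted storage.
    public static Set<PosixFilePermission> ownerOnlyDir() throws IOException {
        Path dir = Files.createTempDirectory("appdata");
        // rwx for the owning uid, nothing for group or others.
        // On FAT32 this call fails: there are no permissions to set,
        // hence Android's fallback to path-based restrictions there.
        Files.setPosixFilePermissions(dir,
                PosixFilePermissions.fromString("rwx------"));
        return Files.getPosixFilePermissions(dir);
    }
}
```

Because each app runs as its own uid, the kernel itself enforces the boundary on the internal storage; no path naming convention needed.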

Mmmm, Android moved "unacceptable" into "not unusual" at the same time and said a lot more apps "require no special permissions", despite needing Device ID, GPS, and storage access. You know. For a torch app.

I understand your fear of falling into a definition trap. I define restrict as A. refusing to make an API for reasonable uses of hardware features, such as no way for an app to see which SSIDs are nearby or no way for web apps to draw 3D scenes or upload data types other than photos and videos, or B. requiring a recurring fee or an organizational background check to run software that you compiled on a machine that you own.

Why does anyone install an app on Android that didn't come from F-Droid?

I can think of two reasons. One is that someone might be using a hand-me-down Android device from the first year that AT&T sold Android phones, and these devices support only Google Play Store, not Unknown sources. But though I have a cousin whom this affects, I imagine few others are still on a Galaxy S 1 Captivate. A more common reason to use non-free Android apps is that free software has shown itself to be poor at producing compelling original video games. Free software works when there's a clear spec, which is true of libraries and productivity apps. But apart from maybe roguelikes, games are less specified up front unless it's a clone of an existing game, such as Aisleriot, Frozen Bubble, or StepMania. A non-free game's developer can afford to put more time into creating both the spec and the implementation.

I can think of a third. I had no idea F-Droid existed until reading these posts. Outside of rooting my phone and removing a bunch of garbage, I never really looked for more than a few apps, and I won't update those due to expanded permissions I find too intrusive.

That being said, now that I know, I will likely use it when I change phones again in about two weeks.

How many reasons would you like? F-Droid has about a thousand apps to the Play Store's 1.2 million. You have to install it through side channels. Relatively few in the mainstream have heard of it. None of the apps that people's friends or favorite websites are talking about are available on it. A quick peek at some of the new apps listed on the front page reveals these potential blockbusters:

* A guessing game: try to guess a number between 1 and 100 in under eight tries
* A ROT-13 encoder/decoder
* An ASCII/Hex/Octal/Binary converter
* Swimming distance calculator
* TI graphing calculator emulator (no ROMs included)

It surprises you that people aren't flocking to this in droves? Look, nothing against F-Droid. It's cool that people are doing this, but let's keep our expectations grounded in reality.

Before I had an Android phone, I thought that too. But when I bought one, I was quite impressed by Android's efficient and tight rights-separation system. Don't misunderstand me: I didn't "activate" the Play Store app, as I would have had to couple it with a Google account. If you could install the free apps without an account I'd have tried it, but that way Google lost a customer. The next thing that annoyed me was the Samsung bloat, and the possible lock-in in case I really started to

I doubt Apple has such a patent. Both of these were features of Symbian at least since EKA2 (over 10 years ago) and, I think, earlier. Apple may have a patent on some particular way of exposing this functionality to the UI, but that's about the most that they could have without it being shot down in court in 10 seconds (prior art that's in the form of a phone OS that millions of people owned is hard to refute).

You're correct, as far as I can tell. First to file changes only how conflicts are resolved when two patent applications pending at the same time claim the same invention. It does not remove the novelty requirement, which means that an inventor is still not entitled to a patent if someone else publishes the invention before the inventor applies for a patent.

So I have two guesses as to how the Windows CE thing came about. Either Fingerworks licensed the patent to Microsoft before Apple acquired Fingerwork