I disagree. So does the TCSEC ("Orange Book"), which argues that covert channels may be effectively impossible to eliminate but that they should still be reduced, then lays out guidelines on how far they should be reduced for secure computing platforms. Specifically:

> In any multilevel computer system there are a number of relatively low-bandwidth covert channels whose existence is deeply ingrained in the system design. Faced with the large potential cost of reducing the bandwidths of such covert channels, it is felt that those with maximum bandwidths of less than one (1) bit per second are acceptable in most application environments. [...] Therefore, a Trusted Computing Base should provide, wherever possible, the capability to audit the use of covert channel mechanisms with bandwidths that may exceed a rate of one (1) bit in ten (10) seconds.

I'll admit that this definition is at least a couple of decades out of date, but at the same time... the standard for the US intelligence agencies' secure computing platforms was to not even care about hypothetical covert channels slower than 0.1 bps. Faster channels only had to be auditable, not eliminated; I'm certain that less-secure applications were completely fine shipping with demonstrated covert channels in the bits-per-second range. And that's for systems that deal with things like the IDs of HUMINT assets or technical specifications for weapons systems; here on HN we're talking about browser fingerprints and location data.

All of this is to say: If I could limit covert channels from webpages or mobile phone apps to 10 bps I'd do it in a heartbeat. Perfect is the enemy of good enough.
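
For a sense of scale, here's a quick back-of-the-envelope calculation (my own illustration, not from the Orange Book; the payload sizes are assumptions) of what those bandwidths mean for the data we're actually worried about:

```typescript
// Back-of-the-envelope: how long exfiltration takes over a covert channel
// of a given bandwidth. The ~300-byte fingerprint size is an assumption.
function exfiltrationTimeSeconds(payloadBytes: number, bitsPerSecond: number): number {
  return (payloadBytes * 8) / bitsPerSecond;
}

// A ~300-byte browser fingerprint at the TCSEC audit threshold of 0.1 bps:
console.log(`${exfiltrationTimeSeconds(300, 0.1) / 3600} hours`); // ≈ 6.7 hours
// The same fingerprint if the channel were capped at 10 bps instead:
console.log(`${exfiltrationTimeSeconds(300, 10) / 60} minutes`);  // ≈ 4 minutes
```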

How many extensions actually need to send unique outbound data?
Prior to publishing in the store, the browser maker could look at the declared schema and ask whether it makes sense in the context of the extension.
If the submitted schema is not tight enough, it can be rejected until it is.
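
To make that concrete, here is a rough sketch of what a declared outbound-data schema could look like. The manifest-style declaration, field types, and entropy budget are all hypothetical; nothing like this exists in Chrome or Firefox today, it only illustrates what a store reviewer (or an automated check) could verify.

```typescript
// Hypothetical "outbound data schema" an extension would declare at submission time.
// No such mechanism exists in current extension stores; this is illustrative only.
interface OutboundField {
  name: string;
  type: "boolean" | "enum" | "counter"; // deliberately no free-form strings or blobs
  maxBits: number;                      // upper bound on the entropy the field can carry
}

interface OutboundEndpoint {
  url: string;                          // the only host the extension may send data to
  fields: OutboundField[];
}

const declaredSchema: OutboundEndpoint[] = [
  {
    url: "https://telemetry.example-extension.test/ping",
    fields: [
      { name: "crashed_last_session", type: "boolean", maxBits: 1 },
      { name: "ui_theme", type: "enum", maxBits: 2 },
    ],
  },
];

// A reviewer or automated check could reject any schema whose total declared
// entropy is large enough to smuggle out identifiers or fingerprints.
const totalBits = declaredSchema
  .flatMap(e => e.fields)
  .reduce((sum, f) => sum + f.maxBits, 0);
console.log(totalBits > 32 ? "reject: schema not tight enough" : "schema looks bounded");
```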

Last year several coworkers had installed a fake Postman Chrome extension that contained adware. We all reported it to Google, and on inspection we saw that others had left reviews to the same effect, but Google took over six months to remove it.

> Related question, What would you use a Postman Chrome Extension... for?

Postman started off as a Chrome Extension (ran in a chrome tab), and then became a Chrome App. The standalone apps for desktops came later. A lot of people use the chrome extension because it's convenient.

I believe the extension can be used in conjunction with the app to let the app use the cookies in your browser session, but to be honest, I've only seen others do it, and it was back in the day when Postman was just a "Chrome App" and not a detached application. Maybe that functionality exists in the new Postman app without the chrome extension.

I can't speak to Postman specifically, because I barely use it enough for it to make a difference. But in a more general sense, I tend to prefer <thing> in a browser over <thing> but in its own window. It means one fewer program filling my taskbar.

I very rarely have only a single browser window, and I'm not coding in a browser anyway, so that's not actually a huge factor in the decision. I agree with using windows to easily switch between tweak and test.

The Chrome Extension has the 'interceptor' feature, which listens to ALL network requests made in the browser on a particular page and pipes them to the Postman App. This was very handy for debugging my requests.

However, the standalone app doesn't have that feature (yet). So I will continue to use the Chrome Extension version until they have that feature available in the Standalone app.
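
For anyone curious how the interceptor side works, it's built on the standard WebExtension request APIs. A minimal sketch of a background script that observes every request looks roughly like this; it's the generic mechanism, not Postman Interceptor's actual code, and it assumes the manifest grants the `webRequest` permission and `<all_urls>` host access:

```typescript
// Minimal background-script sketch: observe every network request the browser
// makes and forward the metadata somewhere (a panel, a native app, etc.).
// Generic chrome.webRequest usage -- not Postman Interceptor's actual code.
chrome.webRequest.onBeforeRequest.addListener(
  (details) => {
    // details includes the URL, HTTP method, resource type, tab id, and
    // (when requested) the request body.
    console.log(details.method, details.url, details.type);
  },
  { urls: ["<all_urls>"] }, // listen to ALL requests on every page...
  ["requestBody"],          // ...and include the request body when available
);
```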

For some idiotic reason the native version of Postman does not support authentication against corp proxies. As a result, when using it at work, behind a corp proxy that requires authentication, the native postman doesn't work!

The only version that works is the Chrome App Postman, which simply uses the Chrome network stack, which obviously works behind the proxy.

Mozilla and Google should look at the top extensions and implement the popular ones as official extensions (for some it may be worth building them into the browser itself). Reader mode is now part of some browsers, so you no longer need an extension for it.

Mozilla could implement an ad-blocking extension and give the user the option to use a custom block list (so Mozilla is not accused of becoming a gatekeeper).

Or maybe not: the Firefox version of the Pocket extension is half-baked (you have to wait for the adding animation to disappear, otherwise the save gets cancelled; the previous version was "click and it's added in the background").

The Chrome version is more usable.

The great Firefox redesign at the beginning of the century was about slimming down the Mozilla browser and letting extensions extend it. Is this the pendulum swinging back?

I was suggesting official extensions that you could uninstall or disable; the reason I mentioned that some could be put directly in the browser is for cases where the same functionality can't be achieved by a pure extension, or would be much more efficient built into the browser.

I completely agree that you should be able to disable/uninstall Mozilla's extensions and replace them, if you want, with different ones (maybe you know of a better reader mode or a better ad-blocker extension).

In fact, such an extension may not even need to be installed separately; it could just be part of Mozilla's code base, so any update would be reviewed.

Through reading bug reports, I found out that the FF reviewers for the decentraleyes extension have a custom script to check that all copied scripts are actually identical to the CDN versions. I found that step in the review interesting and positive.
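
That kind of check is easy to automate. A small sketch of the idea in Node (my own illustration, not the reviewers' actual script) would hash the bundled file and the CDN copy and compare:

```typescript
// Sketch of an automated check that bundled library files are byte-identical
// to the copies served by the CDNs they claim to mirror. Illustrative only --
// not the actual script used by the addons.mozilla.org reviewers.
import { createHash } from "node:crypto";
import { readFile } from "node:fs/promises";

const sha256 = (data: Uint8Array) => createHash("sha256").update(data).digest("hex");

async function matchesCdn(localPath: string, cdnUrl: string): Promise<boolean> {
  const local = await readFile(localPath);
  const remote = new Uint8Array(await (await fetch(cdnUrl)).arrayBuffer());
  return sha256(local) === sha256(remote);
}

// Example: verify a bundled jQuery copy against the canonical CDN file.
// The local path is a made-up example of where an extension might bundle it.
matchesCdn(
  "resources/jquery/3.6.0/jquery.min.js",
  "https://ajax.googleapis.com/ajax/libs/jquery/3.6.0/jquery.min.js",
).then(ok => console.log(ok ? "identical to CDN" : "MISMATCH -- flag for review"));
```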

I don't want to offend the extension creators, and I want the option to uninstall an official extension and replace it with my own or a community one, but IMO there are reasons to trust Mozilla more than a stranger or a community. There have been cases where popular extensions were bought and then updated with malicious code; because of that, I make sure I open my bank or PayPal website in a private window with extensions blocked, but will a regular user know to do this?

No, not secure enough. Remember ActiveX? The security policy of ActiveX was: the browser asks the user whether he wants to install the ActiveX control, and if the user says yes, anything that happens afterwards is the user's responsibility.

What you're suggesting is not that much better. Do you expect your grandma to be able to review the permission list for the browser extension?

Browser extensions are the modern day ActiveX. Yes, lots of them are very useful. But you could say the same about ActiveX controls too.

Therein lies the problem. The entire industry has, ever since Windows 3.1 (!), done its best to condition users into a single and highly destructive mindset:

"Press OK to make the annoying window go away."

The only way around this, and I'm not saying this lightly, would be to make the pushers and vendors CRIMINALLY AND PERSONALLY liable for the damage they cause to end users. Once we see the third or fourth offender nailed through their genitals, head down, on the town hall wall, the message will start to get through.

A lot of it happens in countries other than country of origin... and extradition is difficult and often expensive. Though, I wouldn't mind seeing the people that write rogue extensions that harm people get doxed.

That's a great feature. Maybe not something people would want on a personal shopper extension, though, which is another type of extension that might have done the scraping. It's more convenient to just have a price alert fire when I look at an item on Amazon than having to push a button every time.

For Android, it is extremely easy to do that via an app called NoRoot Firewall. It registers itself as a local VPN and routes all traffic through it. When an app wants to connect to a host, it shows a notification; tapping it shows you the URL/IP and the app name, and you decide whether to accept the connection or not. It supports permanent blacklisting and whitelisting as well.

Since a browser like Opera can integrate a proprietary VPN without messing with OS network settings, doing the same on other browsers should be possible.

It actually doesn't, because you can make a rule, and the application will follow it from that point on with no more notifications. The rule can include wildcards for parts of the hostname or IP address, for the port, or for both.
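
For what it's worth, rules of that shape are simple to express. Here's a tiny sketch of wildcard host/port rules; the rule format and matcher are invented for illustration and are not NoRoot Firewall's actual syntax:

```typescript
// Hypothetical firewall-style rules with wildcards for host and port.
// Invented for illustration; not NoRoot Firewall's actual rule format.
interface Rule {
  hostPattern: string;          // e.g. "*.googleapis.com" or "203.0.113.*"
  port?: number;                // undefined means "any port"
  action: "allow" | "block";
}

const rules: Rule[] = [
  { hostPattern: "*.googleapis.com", port: 443, action: "allow" },
  { hostPattern: "ads.*", action: "block" },
];

const escapeRegExp = (s: string) => s.replace(/[.*+?^${}()|[\]\\]/g, "\\$&");

// Turn "*.googleapis.com" into a regex where "*" matches any run of characters.
function hostMatches(pattern: string, host: string): boolean {
  const re = new RegExp("^" + pattern.split("*").map(escapeRegExp).join(".*") + "$");
  return re.test(host);
}

function decide(host: string, port: number): "allow" | "block" | "ask" {
  const rule = rules.find(
    r => hostMatches(r.hostPattern, host) && (r.port === undefined || r.port === port),
  );
  return rule ? rule.action : "ask"; // no rule yet -> prompt the user, as the app does
}

console.log(decide("fonts.googleapis.com", 443)); // "allow"
console.log(decide("ads.doubleclick.test", 80));  // "block"
console.log(decide("example.com", 443));          // "ask"
```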

Rogue extensions are the Achilles' heel of browsers, yet the ramifications aren't understood by the average user, who happily installs all sorts of addons and extensions. Frankly, I'm surprised more sensitive information hasn't already been stolen or harvested over all these years. This is why I run my browser with no extensions, with the sole exception of uBlock Origin.

I wouldn't object to extensions becoming paid and verified - that is, an expert review team doing a code / security review for each update of an extension. Either paid for by the authors, or done for free by e.g. Google because they have plenty of money and they are directly impacted if their platform releases a malicious extension.

The downside is that this would still be possible to bypass if users are allowed to install “unverified” extensions, but removing that option would lead to the downsides of the App Store, namely Google having full control over what their browser supports. Since they are an advertising company, there are whole classes of popular extensions that directly hurt their business.

You should mostly just assume all the browser extensions have access to everything you look at. Most do. ;)

I'm similar to the parent comment but my sole extension is the EFF's Privacy Badger. Yes, I'm trusting the EFF with access to everything I view, but they are in turn, blocking tracking data from nearly everyone else.

I may soon drop Privacy Badger, though; Firefox's built-in tracking protection has inched closer and closer to that tier.

If I could purge some of my Facebook messages after a certain age, I think that would be great. When I downloaded my archive, I had circa-2006 messages with people who have since deactivated their accounts, and their names were just labeled "Facebook User."

This is a problem with almost any online 'space' - everything sticks around forever. You can go left from anyone's Facebook profile picture and see probably the first picture they ever uploaded to Facebook. Snapchat's USP was that it didn't keep stuff around (at least not publicly).

I think there's a happy middle ground somewhere where I can set an expiration time on anything I post to such a platform (e.g. Facebook/Twitter) so that it goes private after that time - e.g. a year. It wouldn't even harm the bottom line, since all the money is in new content, and I'd still have a private archive of photos if I ever wanted to download them again.

All this is moot for me since I don't use services like this at all, but I think there's an opportunity for a company to get this right.

Despite privacy issues, I still think that things sticking around "forever" on the Internet is a good default. Link rot is already a huge problem when you're trying to reference something you read in the past, and that's without auto-expiry.

Facebook now has the concept of both private (i.e. e2e encrypted) and temporary conversations. The UI to access them is a bit awkward, to be sure.

That doesn't help the problem of old messages from before these existed. It's also not super helpful because the retention is no more than a day. Better I think would be like a year -- enough time that you're unlikely to want to refer back to it.

Perhaps, but only slightly imho. To your average HN denizen, 'hacked' implies the account was completely compromised. To the wider world it might well include partial compromise and/or the communications to/from the account even if the attackers didn't gain total control. Which is what this appears to be.

Nor should they be expected to, but the BBC should know the difference. Facebook's stock price could be hurt by this reporting, even though it shouldn't be. This could be seen as an attempt at manipulating the stock price of a publicly traded company. Of course it's just incompetence, but still.

The average tech reporter for a public service broadcaster most likely does not know the difference. How often do you read mainstream tech reporting and find yourself complimenting the journalist on their insight and factual correctness?

I would say it's roughly the same as posting a news story claiming that fuelling your car at Texaco destroys the car's engine, and that it's a Texaco issue, even if the truth is that someone just accidentally chose diesel rather than petrol.

In this case there might not be a GDPR violation. If the data is taken by compromised browsers, then the breach wouldn't exist within Facebook's control.

It's not clear to me from reading the GDPR whether companies are responsible for the loss of personal data outside of breaches in their security. E.g. is a successful phishing campaign against customers a data breach? If not at fault, do they have an obligation to alert customers specifically about the attack?

If you've had enough of Facebook's negligence and like many others in recent months have closed your account, use this handy website to send them a GDPR request to make sure they delete all your personal data (disclosure, I'm one of the creators): https://opt-out.eu/?company=facebook.com#nav

You have a point, although we don't know the details of this attack (they haven't even disclosed the name of the extension) so I guess I'm biased against them in light of recent history. My comment was more general than this particular incident.

So I guess the data is now being shared across borders between security services, and rightly so. The data and the story now have significantly more value to the services that bill the taxpayer, and to the news media that sell your attention by stoking fear. So when an organisation demands you hand over your data, and it's for your security, it's not really, is it?