Crude code point sorting is a sensible default when you need consistent results, and in many cases I believe that’s exactly what programmers expect. Imagine if your binary search didn’t work on my dataset because we used different locale settings. You can opt in to locale-aware behavior if you want it, but the problems introduced by imposing it on programs that don’t expect it are much worse than aesthetic.

You’ll occasionally come across the accidentally locale-aware code generator. Fun times when it prints floating point numbers as 3,14.
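A minimal Python sketch of that failure mode (the de_DE.UTF-8 locale name is an assumption; it may not be installed on every system):

```python
import locale

# Code generators should emit numbers in the C locale, not the user's.
# f-strings and repr() are locale-independent in Python:
print(f"{3.14}")  # always "3.14"

# Locale-aware formatting, by contrast, follows whatever locale is active.
# Under a German locale the decimal separator becomes a comma:
try:
    locale.setlocale(locale.LC_NUMERIC, "de_DE.UTF-8")  # assumed to exist
    print(locale.format_string("%.2f", 3.14))  # "3,14" -- broken generated code
except locale.Error:
    pass  # locale not installed; the point stands regardless
```

The same trap exists in C (`printf` after `setlocale(LC_ALL, "")`) and most other runtimes that inherit the process locale.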

It depends on what you’re going to use the sorted result for. A binary search doesn’t care what you’re actually sorting on, as long as it’s a total order and it’s fast. User display needs to be locale-aware and consistent with user expectations. Implementing a spec that demands sorting (bencode, for example) requires you to sort exactly as the spec says.
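The binary-search point can be made concrete in Python; here a case-insensitive sort stands in for locale-aware collation (a sketch, not a real collator):

```python
from bisect import bisect_left

words = ["apple", "Banana", "cherry"]

# Sorted for display (case-insensitive, standing in for locale collation):
display_order = sorted(words, key=str.lower)   # ['apple', 'Banana', 'cherry']

# bisect assumes plain code point order, where 'B' (0x42) < 'a' (0x61),
# so searching the display-ordered list silently misses:
i = bisect_left(display_order, "Banana")
print(display_order[i] == "Banana")            # False -- wrong order breaks search

# Sorted by code point, the same search works:
codepoint_order = sorted(words)                # ['Banana', 'apple', 'cherry']
i = bisect_left(codepoint_order, "Banana")
print(codepoint_order[i] == "Banana")          # True
```

Any total order works for the search, but only if the sorter and the searcher agree on which one, which is exactly what locale-dependent defaults break.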

…one of the reasons we decided to end EdgeHTML was because Google kept making changes to its sites that broke other browsers, and we couldn’t keep up…

I can appreciate the schadenfreude of Microsoft’s new position, but this is a pretty legitimate concern, especially if Google is/was doing that intentionally. What we need is a good incentive for Google to care about web standards and performance in non-Chrome browsers, but this move by Microsoft only drives things in the opposite direction.

One reason intention matters: if the intention is to handicap Edge, then it’s probably not serving some other purpose that’s good for all of us. If handicapping Edge is a side effect of some real benefit, that’s just a fact about the complexity of the web (it might still be a bad decision, but there are trade-offs involved).

I don’t know if it’s intentional or not, but I am almost never able to complete reCAPTCHAs in Firefox. It just keeps popping up ridiculous ones, like traffic lights that sit on the border of three squares, and it keeps serving the same unsolvable ones for 2-3 minutes until I get tired or locked out and just use Chrome to log in, where somehow I always get sane ones and it lets me in on the first try. Has anyone had the same?

OK, let’s put aside the schadenfreude as best we can and examine the consequences. I think it’s fair to assume, for the sake of argument, that Alphabet Inc absolutely will do everything in its power, dirty tricks included, to derive business value from its pseudo-monopolist position. If Microsoft were to dig in their heels and ship a default browser for their desktop OS that didn’t play YouTube videos as well as Chrome does, would that harm Alphabet, or just Microsoft at this point?

I don’t really understand your talk of “a good incentive”. Think of it this way: what incentive did Google, an advertising company, ever have to build and support a web browser in the first place? How did this browser come to its current position of dominance?

That’s certainly ONE factor. The other is that Chrome by default makes “address bar” and “search bar” the same thing, and sends everything you type into the search bar to Google.

Same as Google Maps, or Android as a whole. I often navigate with Google Maps while driving. The implication is that Google knows where I live, where I work, where I go for vacation, where I eat, where I shop. This information has a monetary value.

If there is something Google does that is not designed to collect information on its users that can be turned into ad revenue, that something will eventually be shut down.

Exactly. They are trying to build accurate profiles of every aspect of people’s and businesses’ existences. Their revenue per user goes up as they collect more information for their profiles. That gives them an incentive to build new products that collect more data, always by default. Facebook does the same thing; its revenue per user climbed year after year, too. I’m not sure where the numbers currently stand for these companies, though.

Google built a web browser because Microsoft won the browser wars and did nothing with IE for 10 years.

No, that was Mozilla. Mozilla, together with Opera, fought IE’s stagnation, and by 2008 they had achieved ~30% share, which arguably made Microsoft take notice. Chrome entered a world that was already multi-browser at that point.

Also, business-wise Google needed Chrome as a distribution engine; it had nothing to do with fighting the browser wars purely for the sake of users.

I’m not entirely sure what you mean by a distribution engine. For ads? Or for software?

I think business motives are extremely hard to discern from actions. I think you could make the argument that Google has been trying for years to diversify their business, mostly unsuccessfully, and around 2008 maybe they envisioned office software (spreadsheets, document processing, etc) as the next big thing. GMail was a surprise hit, and maybe they thought they could overthrow Microsoft’s dominance in the field. But they weren’t about to start building desktop software, so they needed a better browser to do it.

Or maybe they built it so that Google would be the default search engine for everyone so they could serve more ads?

Or maybe some engineers at Google really were interested in improving performance and security, built a demo of it, and managed to convince enough people to actually see it through?

I realize the last suggestion may sound hopelessly naive, but having worked as an engineer in a company where I had a lot of say in what got worked on, my motives were often pretty far afield of any pure business motive. I got my paycheck regardless, and sometimes I fixed a bug or made something faster because it annoyed me. I imagine there are thousands of employees at Google doing the same thing every day.

Regardless, the fact remains that the technology they built for Chrome has significantly improved the user experience. The reason Chrome is now so dominant is because it was better. Much better when compared to something like IE6.

And even ChromeOS is better than the low-price computing it competes with. Do you remember eMachines? They were riddled with junk software and viruses, rendering them almost completely useless. A $100 Chromebook is a breath of fresh air compared to that experience.

I realize there’s a cost to this, and I get why there’s a lot of bad press about Google, but I don’t think we need to rewrite history about it. I think we’re all better off with Google having created Chrome (even if I don’t agree with many of the things they’re doing now).

Google makes deals with OEMs to ship Chrome by default on the new desktops and laptops. Microsoft cannot stop them because of historical antitrust regulations.

Google advertised Chrome on their search page (which happens to be the most popular web page in the world) whenever someone using another browser visited it. It looks like they’ve stopped, though, since I just tried searching with Google from Firefox and didn’t get a pop-up.

The incentive to play fair would come from Google not wanting to lose the potential ad revenue from users of non-Chrome browsers by deliberately sabotaging its own products in those browsers. I’m not trying to imply that EdgeHTML was the solution to that problem, or that it would somehow be in Microsoft’s best interest to stick with it; just that its loss further cements Google’s grip on the web, and that’s a bad thing.

All the user knows is “browser A doesn’t seem to play videos as well as browser B”. In general they can’t even distinguish server from client technologies. All they can do about it, individually, is switch browsers.

Now that Alphabet has cornered the market, their strategy should be obvious. It’s the same as Microsoft’s was during the Browser Wars. The difference is, Alphabet made it to the end-game.

Your interpretation[1][2] of how a single historical case went doesn’t change the fact that antitrust action is bad for a company’s long-term prospects and short-term stock price. The latter should directly matter to current leadership. Companies spend a reasonable amount of time trying to not appear anti-competitive. @minimax is utterly ignoring that consequence of “dirty tricks”.

You’re looking at it wrong. The question you really need to consider is:

What makes Google’s position more of an end-game than what Microsoft had in the early 2000s?

Microsoft was the dominant OS player, but the Internet itself was undergoing incredible growth. What’s more, no one existed solely within what Microsoft provided.

Today, the Internet is essentially the OS for many (most?). People exist in a fully vertically integrated world built by Google: operating system, data stored in their cloud, documents written in their editor, and emails sent through their plumbing, all of it run by the world’s most profitable advertising company, which has just built itself mountains of data to mine for better advertisements.

Your assessment of Google today strikes me as not completely unreasonable, although it does neglect the fact that only a small fraction of Internet users live so completely in Google’s stack; I suspect far more people just use Android and Chrome and YouTube on a daily basis but don’t really use Gmail or GSuite (Docs, etc.) very frequently, instead relying on WhatsApp and Instagram a lot more.

And back in the 2000s there was definitely a large group of people who just used Windows, IE, Outlook, Hotmail, MSN & MS Office to do the vast majority of their computing. So it’s not as different as you seem to believe. Except now there are viable competitors to Google, in the form of Facebook & Apple, in a way that nobody competed with MS back then.

Similarly, Office/Outlook/Windows in 2000 didn’t mine the files I was working on to enrich an advertising profile that would follow me across the internet. If memory serves, while Hotmail did serve advertisements, they were based on banner advertisements / newsletters generated by Microsoft, and not contextually targeted.

The real risk here, I believe, lies in both the scope of what’s happening today and how much harder it is to understand, versus what Microsoft did. Microsoft’s approach was to make money by being the only software you ran, and they’d use any trick they could to achieve that, patently anticompetitive behavior included.

Google, on the other hand… at this point I wonder if they’d care if 90% of the world ran Firefox as long as the default search engine was Google. I think their actions are far more dangerous than those of Microsoft because they are much wider reaching and far more difficult for regulators to dig into.

I suspect far more people just use Android and Chrome and YouTube on a daily basis but don’t really use Gmail or GSuite (Docs, etc.) very frequently, instead relying on WhatsApp and Instagram a lot more.

Your assessment that Chrome is only a means to an end, the end being to have people continue using Google’s web search, seems dead on. But then you follow that up with a claim that doesn’t seem to logically follow at all.

The reach of Google now relative to Microsoft 15 years ago is lower as a fraction of total users; it only seems higher because the absolute number of total users has grown so much.

Doesn’t this depend on how you define a “user”, though? Google has a grip on search that would be the envy of IBM back in the day. Android is by far the most popular operating system for mobile phones, if not for computing devices writ large. They pay for Mozilla because they can harvest your data through Firefox almost as easily as via Chrome, and they prop up a competitor, in case the US DOJ ever gets their head out of their ass and starts to examine the state of the various markets they play in.

The reach of Google now relative to Microsoft 15 years ago is lower as a fraction of total users; it only seems higher because the absolute number of total users has grown so much.

Android’s global smartphone penetration is at 86% in 2017[1]. And while the “relative reach” might be lower, the absolute impact of the data being Hoovered up is significant. In 2000, annual PC sales hit 130 million per the best figures I could find[2] … that’s less than a tenth of smartphone sales in 2017 alone.

What does it matter that Google’s relative reach is lower when they control nearly 9 in 10 smartphones globally and proudly boast over two billion monthly active devices?

The level of control isn’t directly comparable. Microsoft sold Windows licenses for giant piles of money while Google licenses only the Play Store and other apps that run on Android. Android in China is a great example of the difference, although I guess Microsoft probably lost revenue (but not control over the UX) there via piracy.

The popularity of JS is a bad example; it’s popular because until very recently (with the emergence of TypeScript, which is just a superset of it anyway) it was the only way to run code in any web browser, making it the only reasonable choice for web apps.

until very recently (with the emergence of TypeScript, which is just a superset of it anyway)

you mean that until recently (2012), nobody had written a thing like TypeScript that lets you write software in a non-Javascript language that can be deployed as Javascript. It was never impossible before that, just nobody had picked that side of the trade-off. Where, rather than “nobody”, you mean nobody other than 280 North (Objective-J, 2008), Jeremy Ashkenas (CoffeeScript, 2009), Google (Dart, 2011), and undoubtedly others.

TypeScript, which is just a superset of it anyway

Indeed. TypeScript is the “take the default choice and configure it to suit where our team sits” option. It’s the Jenkins plugin, or the Jira workflow futzing, of the Javascript development world.

the only reasonable choice

Even accepting that there were no JS alternatives for running code in the browser until TypeScript came along in 2012 (which for the reasons given above is a flaky assumption), teams that are longer-running than that have had six years to evaluate alternatives, and teams that are newer than that have always had alternatives to choose from.

to run code in any web browser, making it the only reasonable choice for web apps

Here we beg the question. “JS is popular because you have to do it” requires that we accept that we have to make a web app. Why? My assertion is that along with other examples like Jira and Jenkins, people start with a web app in JS because it’s the thing that’s done, and they can probably make progress with it. Thus we discover that it’s a good example, because it’s the same as the other examples. You could write your own, pick an alternative, or try to configure the thing most people use to your own circumstances.

Indeed, this is such a common pattern that @srbaker proposed the word “Jefaults”, for a team that goes “well everybody is using Jenkins, Jira, Javascript and Java, we’ll start there”.

Flash was very popular: YouTube used Flash, remember? It’s been deprecated now because the iPhone didn’t ship it, and the iPhone didn’t ship it because it was kind of slow (along with Flash having poor support for things like access for the visually impaired, the clipboard, and scrolling, which is why apps like GMail didn’t use it).

Java was less popular, because its obnoxious permissions and code loading model made it even slower and less convenient than Flash.

Silverlight didn’t exist for long enough before the iPhone killed off browser plugins for good. Who knows? It might’ve been better if it had been given a chance.

Polemics like this always seem to leave out the part where, though things might not be exactly to the author’s preference, they are nonetheless actually pretty impressive. We’ve built a lot of systems that are truly amazing in the positive impact they have, just as we have built tools that are used for distasteful purposes. In addition, a lot of the idealism vanishes out the window once you actually try to build a real thing which works for thousands or millions of people, rather than a lab experiment or research project.

I’m sure we’re not at a global maximum of whatever it is we should be optimising, but the idea that everything is terrible and we’re all just lying to ourselves is such a tired one.

Well, the article wasn’t about that (it’s about how history is misrepresented to present a world dominated by technological determinism), but I’m always up for discussing the subject.

‘Polemics like this’ are generally not making the argument that everything is terrible, but that relatively straightforward & obvious improvements are not being made (or once made are being ignored). In the case of the work of folks mentioned in this article, commercial products today are strictly worse along the axes these people care about than commercially-available systems in the late 1970s and early 1980s. In the case of both Alan Kay & Ted Nelson, they themselves released open source software that gets closer to their goals.

I don’t think it’s unfair to get mad about a lack of progress that can’t be excused by technical difficulties. It’s absurd that popularly available software doesn’t support useful features common forty years ago. However, the tech industry is uniquely willing to reject labor-saving technology in favor of familiar techniques – it does so to a degree far greater than other industries – so while absurd, it’s not surprising: software engineering is an industry of amateurs, and one largely ignorant of its own history.

I think you’re deliberately understating the current state of computing, programming, and networking.

It’s absurd that popularly available software doesn’t support useful features common forty years ago.

Like clipboards! And spellcheckers! And pivot tables! And multimedia embedding! And filesharing! And full-text searching! And emailing!

Except…wait a second, those weren’t really common useful features at all. Wait a second….

However, the tech industry is uniquely willing to reject labor-saving technology in favor of familiar techniques – it does so to a degree far greater than other industries – so while absurd, it’s not surprising

What do you mean by this? Have you compared it to industries like, say, paper printing? Healthcare? Cooking?

Would you consider the constant churn of, say, web frameworks promising ever-easier development to be favoring familiar techniques? What about the explosion of functional and ML languages which will magically save us from the well-documented pitfalls and solutions of procedural and OOP languages, for the mere cost of complete retraining of our developers and reinvention of the entire software stack?

Please put some more effort into these bromides: facile dismissal of tech without digging into the real problems and factors at play is at least as shortsighted as anything you complain of in your article.

That’s actually a decent example. Companies operating under the philosophy of interoperating with the maximum number of technologies, for the benefits enkiv2 is going for, would want everyone to have clipboards that interoperate with each other too. Instead we get walled-garden implementations. Additionally, Microsoft patented it instead of leaving it open, in case they want to use it offensively to block its adoption and/or monetize it.

On a technical angle, clipboards were much weaker than the data-sharing and usage models that came before them. Some of the older systems could easily have been modified to do that, with more uses than clipboards currently offer. There are entire product lines on Windows and Linux dedicated to letting people manipulate their data in specific ways that might have been just a tie-in (like clipboards) on top of a fundamental mechanism using the extensible designs enkiv2 and his sources prefer. Instead, we get patented, weak one-offs like clipboards, bolted on many years later. Other things like search and hyperlinks came even later, with Microsoft’s implementation in Windows once again trying to use IE to lock everyone in, versus the real vision of the WWW.

I could probably write something similar about filesharing adoption in mainstream OSs versus the distributed OSs, languages, and filesystems of decades earlier.

The clipboard mechanism in X Windows allows for more than just text. I just highlighted some text in Firefox on Linux. When I query the selection [1], I see I have the following targets:

Timestamp

targets (this returns the very list I’m presenting)

text/html

text/_moz_htmlcontext

text/_moz_htmlinfo

UTF8_STRING

COMPOUND_TEXT

TEXT

STRING

text/x-moz-url-priv

If I select text/html I get the actual HTML code I selected in the web page. When I select text/x-moz-url-priv I get the URL of the page that I selected the text on. TEXT just returns the text (and the alt text from the image that’s part of the selection). I use that feature (if I’m on Linux) when blogging: it lets me copy a selection of a webpage into an entry and grab the URL along with the HTML.

Of course, it helps to know it’s available.

[1] When first playing around with this in X Windows, I wrote a tool that allowed me to query the X selection from the command line.
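For anyone who wants to replicate this without writing their own X client, here’s a hedged Python sketch that shells out to the external xclip tool (assumed to be installed; querying the TARGETS target is standard X selection behavior):

```python
import subprocess

def xclip_cmd(target="TARGETS", selection="primary"):
    """Build an xclip invocation that prints one target of an X selection."""
    return ["xclip", "-selection", selection, "-t", target, "-o"]

def query_selection(target="TARGETS", selection="primary"):
    """List available targets (or fetch one) from the current X selection.
    Requires a running X session and xclip on PATH."""
    out = subprocess.run(xclip_cmd(target, selection),
                         capture_output=True, check=True)
    return out.stdout.decode(errors="replace")

# query_selection() lists targets like text/html and UTF8_STRING;
# query_selection("text/html") returns the selected fragment as HTML,
# much like the command-line tool described above.
```

The same flags work interactively: `xclip -selection primary -t TARGETS -o`.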

It’s not used for anything that fancy because (a) the clipboard can only ever hold one piece of data at a time, and (b) you have to get the applications to support the same data format, just as if you’d used a file.

I think you’re deliberately understating the current state of computing, programming, and networking.

Never do I say that the tech we have is worthless. But, at every opportunity, I like to bring up the fact that with not much effort we could do substantially better.

those weren’t really common useful features at all.

It shouldn’t take fifty years for a useful feature to migrate from working well across a wide variety of experimental systems to working poorly in a handful of personal systems – particularly since we have a lot of developers, and every system owned by a developer is a de-facto experimental system.

There weren’t impossible scaling problems with these technologies. We just didn’t put the effort in.

Have you compared it to industries like, say, paper printing? Healthcare? Cooking?

I was thinking in particular of fields of engineering. CAD got adopted in mechanical engineering basically as soon as it was available.

But, sure: in the domain of cooking, sous vide systems got adopted in industrial contexts shortly after they became available, and are now becoming increasingly common among home cooks. Molecular gastronomy is a thing.

Healthcare is a bit of a special case. All sorts of problems, at least in the US, and since the stakes are substantially higher for failures, some conservatism is justified.

Printing as an industry has a tendency to adopt new technology quickly when it’ll increase yield or lower costs – even when it’s dangerous (as with the linotype). There are some seriously impressive large-scale color laser printers around. (And, there was a nice article going around about a year ago about the basic research being done on the dynamics of paper in order to design higher-speed non-jamming printers.) My familiarity with printing is limited, but I’m not surprised that Xerox ran PARC, because printing tech has been cutting edge since the invention of xerography.

Would you consider the constant churn of, say, web frameworks promising ever-easier development to be favoring familiar techniques?

Promising but never actually delivering hardly counts.

What about the explosion of functional and ML languages which will magically save us from the well-documented pitfalls and solutions of procedural and OOP languages, for the mere cost of complete retraining of our developers and reinvention of the entire software stack?

Functional programming is 70s tech, and the delay in adoption is exactly what I’m complaining about. We could have all been doing it thirty years ago.

(The other big 70s tech we could benefit a great deal from as developers, but don’t use, is planners. We don’t write Prolog, we don’t use constraint-based code construction, and we don’t use provers. SQL is the rare exception where we rely on a planner-based system at all in production code. Instead, we go the opposite route: we write Java, where engineer time and effort are maximized because everything is explicit.)
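To make the SQL-as-planner point concrete, here’s a small sketch using sqlite3 from the Python standard library (the table and data are made up for illustration): you declare what you want, and the engine’s planner decides how to get it.

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT, age INTEGER)")
db.executemany("INSERT INTO users (name, age) VALUES (?, ?)",
               [("ada", 36), ("grace", 45), ("alan", 41)])

# Declarative: we say WHAT we want, the query planner decides HOW.
rows = db.execute("SELECT name FROM users WHERE age > 40 ORDER BY name").fetchall()
print(rows)  # [('alan',), ('grace',)]

# The plan itself is inspectable; after adding an index, the planner
# can switch from a full table scan to an index search on its own:
db.execute("CREATE INDEX idx_age ON users (age)")
plan = db.execute("EXPLAIN QUERY PLAN SELECT name FROM users WHERE age > 40").fetchall()
```

Nothing in the query changed when the index was added; the planner alone decides the execution strategy, which is exactly the engineer-effort trade the comment describes.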

The variety of lisps and schemes available for early-90s commodity hardware indicates that a functional style has been viable on that hardware for thirty years. We can be very conservative and call it twenty-five too: if Perl scripts running CGI are viable, so are all of the necessary features of functional programming (provided the developer of the language has been sensible & implemented the usual optimizations). Haskell is probably not the best representative of functional programming as a whole in this context: it really is heavy, in ways that other functional languages are not, and has a lot of theoretical baggage that is at best functional-adjacent. Folks could and did (but mostly didn’t) run lisp on PCs in ’95.

By the late 90s, we are already mostly not competing with C. We can compare performance to perl, python, and java.

The question is not “why didn’t people use Haskell on their Sinclair Spectrums”. The question is “why didn’t developers start taking advantage of the stuff their peers on beefier machines had been using for decades as soon as it became viable on cheap hardware?”

You’re playing fast and loose with your dates. You said 30 years ago, which would’ve been 1988, before the Intel 486. Even so, let’s set that aside.

The variety of lisps and schemes available for early-90s commodity hardware indicates that a functional style has been viable on that hardware for thirty years.

The most you could say was that the languages were supported; actual application development relies on having code that can run well on the hardware that existed. I think you’re taking a shortcut in your reasoning that history just doesn’t bear out.

if Perl scripts running CGI are viable, so are all of the necessary features of functional programming (provided the developer of the language has been sensible & implemented the usual optimizations).

I’m not sure what you mean by viable here. The Web in the 90s kinda sucked. You’re also overlooking that the actual requirements for both desktops and servers at the time for web stuff were pretty low: bandwidth was small, clients were slow, and content was comparatively tiny compared with what we use today (or even ten years ago).

The second bit about “assuming the developer of the language” is handwaving that doesn’t even hold up to today’s languages: people make really dumb language implementation decisions all the time. Ruby will never be fast or small in memory for most cases. Javascript is taking a long time to get TCO squared away properly. Erlang is crippled for numeric computation compared to the hardware it runs on.

By the late 90s, we are already mostly not competing with C. We can compare performance to perl, python, and java.

I don’t believe that to be the case, especially in the dominant desktop environment of the time, Windows. Desktop software at the time was very much written in C/C++, with Visual Basic and Delphi probably near the lead.

~

I think the problem is that you’re basing your critiques on a present based on a past that didn’t happen.

You’re also overlooking that the actual requirements for both desktops and servers at the time for web stuff were pretty low: bandwidth was small, clients were slow, and content was comparatively tiny compared with what we use today (or even ten years ago).

actual application development relies on having code that can run well on the hardware that existed

Professional (i.e., mass-production) development is more concerned with performance than hobby, research, and semi-professional development – all of which are primarily concerned with ease of exploration.

Sure, big computing is important and useful. I’m focusing on small computing contexts (home computers, non-technical users, technical users working outside of a business environment, and technical users working on prototypes rather than production software) because small computing gets no love (while big computing has big money behind it). Big computing doesn’t need me to protect it, but small computing does, because small computing is almost dead.

So, I think your criticisms here are based on an incorrect understanding of where I’m coming from.

Professional development is totally out of scope for this – of course professionals should be held to high standards (substantially higher than they are now), and of course the initial learning curve of tooling doesn’t matter as much to professionals, and of course performance matters a lot more when you multiply every inefficiency by number of units shipped. I don’t need to say much of anything about professional computing, because there are people smarter than me whose full time job is to have opinions about how to make your software engineering more reliable & efficient.

Powerful dynamic languages (and other features like functional programming & planners) have been viable on commodity hardware for experimental & prototype purposes for a long time, and continue to become progressively more viable. (At some point, these dynamic languages got fast enough that they started being used in production & user-facing services, which in many cases was a bad idea.)

For 30 years, fairly unambiguously, fewer people have been using these facilities than is justified by their viability.

Folks have been prototyping in their target languages (and thus making awkward end products shaped more by what is easy in their target language than what the user needs), or sticking to a single language for all development (and thus being unable to imagine solutions that are easy or even idiomatic in a language they don’t know).

For a concrete example, consider the differences between Wolfenstein 3D and Doom. Then, consider that just prior to writing Doom, id switched to developing on NeXT machines & started writing internal tooling in Objective-C. Even though Doom itself ran on DOS & could be built on DOS, the access to better tooling in the early stages of development made a substantially more innovative engine easier to imagine. It’s a cut-and-dried example of the impact of tools on the exploration side of the explore/exploit divide, wherein for technical reasons the same tools are not used on the production (exploit) side.

people make really dumb language implementation decisions all the time

Sure. And we consider them dumb, and criticize them for it. But while today’s hardware will run Clojure, a C64 lisp that doesn’t have tail-recursion optimization will have a very low upper limit on complexity. The difference between ‘viable for experimentation’, ‘viable for production’, and ‘cannot run a hello world program’ is huge, and the weaker the machine, the bigger those differences are (and the smaller a mistake needs to be to force something into a lower category of usability).

The lower the power of the target machine, the higher the amount of sensible planning necessary to make something complex work at all. So, we can expect early 90s lisp implementations for early 90s commodity hardware to have avoided all seriously dumb mistakes (even ones that we today would not notice) & performed all the usual optimization tricks, so as to be capable of running their own bootstrap.

There are things that can barely run their own bootstrap, and we generally know what they are. I don’t really care about them. There are other things that were functional enough to develop in. Why were they not as widely used?
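Python itself demonstrates the tail-recursion point, since it deliberately omits TCO: a tail-recursive function hits the default ~1000-frame limit long before its iterative equivalent breaks a sweat (a hypothetical sketch, not code from the thread):

```python
def sum_rec(n, acc=0):
    # Tail-recursive, but Python allocates a stack frame per call anyway.
    return acc if n == 0 else sum_rec(n - 1, acc + n)

def sum_iter(n):
    # The same tail call, rewritten as the loop TCO would have produced.
    acc = 0
    while n > 0:
        acc, n = acc + n, n - 1
    return acc

print(sum_iter(100_000))        # 5000050000 -- fine
try:
    sum_rec(100_000)            # blows past the default recursion limit
except RecursionError:
    print("no TCO: depth, not logic, is the limit")
```

On a machine with kilobytes rather than gigabytes of stack, the same missing optimization bites at far shallower depths, which is exactly the ‘low upper limit on complexity’ described above.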

Desktop software at the time was very much written in C/C++, with Visual Basic and Delphi probably near the lead.

Sure, but software was being written in scripting languages, and so writing your software in a scripting language was not a guarantee that it would be the slowest thing on the box (or even unusably slow). That makes it viable for writing things in that will never be sold – which is what I’m concerned with.

I think the problem is that you’re basing your critiques on a present based on a past that didn’t happen.

I think I just have a different sense of appropriate engineer time to cpu time tradeoffs, and don’t consider the mass production aspect of software to be as important.

I certainly ran Emacs Lisp on a 386 SX-16, and it ran fine. I didn’t happen to run Common Lisp on it, mainly because I wasn’t into it, or maybe there were only commercial implementations of it back then. But I would be pretty surprised if reasonable applications in CL weren’t viable on a 386 or 68020 in 1990. Above-average amounts of RAM were helpful (4 or 8 MB instead of 1 or 2).

We opt out of the wayback machine because inclusion would allow people to discover the identity of authors who had written sensitive answers publicly and later had made them anonymous, and because it would prevent authors from being able to remove their content from the internet if they change their mind about publishing it.

Quora is making promises that they can’t keep. No technical measure can stop me (taking on the role of your adversary in this scenario) from simply remembering a Quora Q&A, and even if I can’t prove it to anyone else, I can still act on it when making a hiring decision or whatever. Assuming I’d have to prove it, though, then your anti-botting mechanisms have the problem of not always working, and you also can’t stop me from using Snipping Tool to take a screenshot.

The only way to make an ephemeral conversation platform on the Web is to make it invite only, like a Discord room.

Conventional belief is that a new product needs to be around 10 times better for users to consider switching from their old ways.

When Firefox was gaining market share it had good extensions, which enabled you to do a lot more things with the browser than the alternatives.

When Chrome launched, it was better than its competition. It launched faster, it ran faster, it was more stable (if a tab crashed it would not take down the whole browser), it automatically updated, and it had a simple and well-thought-out user interface. It qualified as a 10x improvement compared to existing alternatives. It probably also helped that the IE team was asleep at the wheel.

Can we realistically expect the Firefox team to bring about a 10x improvement over Chrome? Google has deep pockets. They are laser-focused on making Chrome even better. I think it is unlikely.

For personal use, I have switched to Firefox for the same reasons I use Linux: to have a little more control over my computing. Yes, I know I give up some goodies, but I think it is a fair trade considering the upside.

Realistically, we can only expect Firefox to remain a niche browser. The problem is, will Google be willing to shell out the millions of dollars Mozilla needs to pay its people so that Google can be the default search engine for <10% of the market? Hopefully, yes.

The best play for Mozilla would be to cozy up to Microsoft, which may have some apprehension about the tightening control of Google over the Internet and the World Wide Web.

Your “conventional belief” ignores both the network effect and the effect of marketing. Web browsers, being a consumer product that acts as an intermediary to other service providers who deliver the real value, are heavily governed by both.

I’m not sure if this is really true but: I got the impression part of the reason for that tradition was that Firefox’s releases were necessary for the IE group to get funding inside Microsoft. Without the visible competition spurring them to try to make IE compete, Microsoft could have just let it stagnate.

edit: to give the article’s author credit, I don’t think they were claiming that Chrome’s release was wholly responsible for spurring development of MSIE. I think that bit is rather suggesting that Chrome might not have been able to overtake MSIE in usefulness if MSIE had continued improving the whole while instead of stagnating for five years.

I am curious what problems the author had editing from WSL in the Windows file system. /mnt/c contains the whole C drive, and in my case I have symlinks set up in WSL to redirect me to specific parts of the Windows directory structure from there, like for example my OneDrive-synced folders. And as for editing Windows files with a WSL Vim, a simple :set ff=dos will do the trick. Also, WSL supports more than just Ubuntu now for the Linux instance.

Besides mucking around in the Windows Services API (some of our clients need a thing to run in the background when their Windows server boots), and again going to try a few other theories about where the 422 errors that bors is getting from GitHub come from.

I always wondered how effective plus addressing really is. Couldn’t a spammer just strip the “+” and anything after it to get the real email address? As “+” is almost never used to distinguish between two different mailboxes, that would yield far more real addresses than false positives, right? And spammers usually don’t care too much about precise approaches.

Having the luxury of a dedicated domain for my personal email, I just use a catch-all, which has the benefit that one couldn’t guess an alternative address from, say, spotify@example.com, and spammers can’t know if I am using a catch-all or not.

That’s true, but what really matters is that there’s a larger metagame, not just a one-on-one between you and a hypothetical spammer. A spammer has to come up with a plan to defeat all of the policies that people are likely to use, not just yours:

If a spammer strips the plus-address off the end whenever they see it, then they get blackholed by anyone who blackholes non-plussed mail.

If a spammer replaces the plus-address with a guessed tag, and if they guess wrong, your bayesian learning system will blackhole any mail sent to notriddle+google@example.com since that address never receives anything other than spam.

If a spammer leaves the plus-address alone, then they get blackholed after the user learns which service leaked their email address.

Spammers aren’t trying to get into your mailbox. They’re trying to get into as many mailboxes as possible. Anything “smart” that they do is a chance for them to be distinguished from legitimate mailers.
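
The “strip the tag” move from the first bullet is trivial for a spammer to implement, which is exactly why it only works against users who don’t blackhole non-plussed mail. A minimal sketch of what that stripping looks like (function name and addresses are made up for illustration):

```python
def strip_plus_tag(address: str) -> str:
    """Remove a plus tag from the local part: user+tag@host -> user@host."""
    local, _, domain = address.partition("@")
    base = local.split("+", 1)[0]  # keep everything before the first "+"
    return f"{base}@{domain}"

print(strip_plus_tag("notriddle+google@example.com"))  # notriddle@example.com
print(strip_plus_tag("plain@example.com"))             # plain@example.com
```

One line of code for the spammer, in other words – the defense comes entirely from the metagame, not from the tag being hard to remove.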

Without it I expect the future will look more or less like it does today. Linux will remain the operating system of choice for servers and computer science workstations. KDE and Gnome will continue their pointless competition. Cell phone and PDA manufacturers will choose Linux as the core OS and write their own proprietary and closed UI toolkits to run on top, although there is a good chance that they will find it easier to license PalmOS or CE instead. And Microsoft will retain their 90% or better share of the home and business PC market, while Linux advocates keep chanting “any year now.”

I don’t think I’ve read a more politically loaded article this year. I’m not sure why, but this is not a trend I’m fond of, and from the discussions on the forums and IRC I know a lot of people agree that this is getting tiresome. Trying to paint a narrative over whatever is happening, especially a narrative that only applies in certain parts of the world. Keep your anti- or pro-capitalism talk from the 60s away and let us enjoy the tech instead. Especially when it comes to the second article and the author nagging about Google being a corporate evil because their own product features are not supported in their PDF reader.

To me, you either think that political issues are serious and you mention them, or you just don’t (for whatever reason; there’s no reason it’s not legitimate) and ignore them – but the stance of saying “Cringiest articles of the year?”, and quasi-proclaiming “I actively don’t care about issues others take seriously (and you shouldn’t either)”, is just annoying and deterring.

And seriously, people mentioning the political dimensions of technical issues aren’t so omnipresent that it’s preventing you from “enjoying the tech”.

Valid point.
I usually avoid political news, and that is what this is about, but I thought this time of putting a message in the newsletter because we had already discussed that same topic on the forums that week. It was in the heat of the moment.

There’s probably nothing else in the entire newsletter related to this topic, it was a reference to a forums discussion.

The stance that politics, as a whole, should be ignored, has been a highly effective messaging strategy on the part of people whose views are aligned with the status quo.

Just to rephrase that in a way that makes the caveats in it a bit more obvious: I am not saying that everyone asking to curtail political discussion is doing so for political reasons; I am sure that many people say this for other reasons, such as sincerely finding politics stressful. I am saying that those requests end up serving the aim of preserving the status quo, and I am saying that a desire to preserve the status quo is itself a political position.

There are more remarks that I could make, relating an investment in the status quo to privilege theory, but I think that going into any depth on that analysis would be a distraction right now.

This is probably going to be an unpopular comment, but I’ll try it nevertheless

The majority of Lobsters voted in the last meta for politics to be in every story and comment section here if they can achieve that. They also predominantly vote for a specific kind of leftist politics whose members say the same stuff as you. If anything, I predicted you’re the privileged in-group posting a comment to a pro-similar-politics echo chamber that would reward you and anyone supporting your statement with high votes (popularity/support). That’s exactly what happened with your comment followed by @Irene’s. So, just chiming in to remind you that your views are compatible with those of the dominant, most-voting people on the site, with nothing for you to worry about. It’s people outside those views who have to worry they’ll get hit with strong, negative comments in threads on UNIX newsletters and stuff. So, whenever you feel anything like this, submit it without worries, since you have a ton of support here.

If anything, the people who say politics is so important are slacking off, since Lobsters submissions and comments are still mostly technical, not politically beneficial. They should be submitting much more content on these issues like culture, technological methods to address this stuff (eg accessibility libraries/tips), content written by minority members underrepresented in tech, organizations that put money into this, and so on. Although a few submit some of that, the vast majority of political “work” on Lobsters is people in the political group(s) telling people in other groups that what they’re doing is wrong for (political explanation here), sometimes with lots of downvotes. On top of doing that, I strongly encourage all of you in the political activism group to reflect your stated beliefs in submissions, comments, and professional work to make stuff happen for real. Especially submissions: focus on politically-beneficial articles, esp written by minority members. I’ll believe all of you when 70+% of Lobsters submissions from all of you are advancing the goals for society that you claim are more important than tech write-ups.

I of course can’t prove it, but from my experience I honestly expected that people would shun me for leaving an “off-topic” comment. I was surprised to see that there was a positive reaction, possibly because I don’t know the lobste.rs community as well as you do – but even if that hadn’t been the case (and I’m sure I could post unrelated comments on my views that would provoke such a reaction), I would have left my complaint for @venam to see.

And after all, I only mentioned “politics” because it was mentioned in a newsletter, I remembered. My point was (next to the one that I had no time to read all the articles) that I would rather have wanted the political submission not to be included (the secondary, derived issue was the way it was talked about).

The majority of Lobsters voted in the last meta for politics to be in every story and comment section here if they can achieve that.

That recounting of the discussion is made of straw, and if it were true, I’d be confused about why you’re even still here. Most post comment sections do not go political (this is vacuously true, since 14/25 front page items have no comments at all, but even the remaining ones don’t seem very politically charged).

Let’s be real here: @zge was responding to text within the article itself which clearly carries a political statement. If the article says something is “cringy”, it is not off-topic to respond with a justified “no it’s not”. Responding to a political sentiment with additional political sentiment does not mean you want to involve politics in every story and comment section here.

Please get off your cross, so we can use the wood for something useful.

If it’s straw, look at the comment section to see how many comments here are about the technical vs political aspects of the article, plus their voting support. No surprise that it supports my assertion. The comments in the other threads were usually in support of people calling out authors or other commenters about the political ramifications, from a specific vantage point, of their claims, with more support for that than the technical aspects. I think the consistent, higher-than-technical-stuff support for such comments further corroborates my claim that they reward political claims seemingly every time they show up and (by their other statements/votes) support much more of it. However, there’s a difference between statements in comment sections and action towards stated goals. About that…

“this is vacuously true, since 14/25 front page items have no comments at all, but even the remaining ones don’t seem very politically charged”

That’s what I’m calling them out for. The highest-voted stuff from the political side was about promoting inclusion, fixing social problems, modifying speech/actions to conform to their politics, and so on. Yet, there’s hardly any comments or political submissions at all from the same people who value politics higher than technical content. It’s like, “Do you care about this stuff that much or don’t you?” I previously said they were virtue signaling since most of them don’t submit crap that achieves their stated goals. How hard is it to submit one a week from each of them on anything they discuss in the comments? They put lots of time into the comments doing accusations or defending the need for political action but about nothing into the main content on the site. Their failure to act consistently with their stated priorities, at least here, is why the data you mention doesn’t show it.

“Please get off your cross, so we can use the wood for something useful.”

There’s no cross. The site’s politics changed over time to reward specific views/practices and shun others. I was pointing out that the person who appeared worried about their compatible politics having a negative reaction had nothing to worry about. Actually, that person was slamming someone else while saying that, with a lot of upvotes. I then encouraged them and everyone else upvoting it believing political angles were so important to actually submit stuff benefiting those same political goals to Lobsters. More submissions helping every issue they upvote in political debates. I see almost none, as you indicated. So, they’re either hypocrites doing virtue signaling or extremely busy doing good things for such causes outside Lobsters to the point they can’t spare even a submission a week (or day). I’ve adapted to the New Lobsters by both ending most mention of views they collectively discourage and encouraging them to do better about views/practices they encourage: submit politically-beneficial, inclusive content that minimizes harm in its many forms, while the rest of us just submit deep, technical stuff (which may or may not do some of the same public goods).

What’s interesting is that Google promotes PWAs but is there any PWA made by Google? Moreover, I can’t remember encountering any webpage with offline and “installation to home screen” capabilities in the whole internets.

Google Photos has no third warning; it has a service worker, but it probably does nothing related to “PWA” functionality.

For YouTube, there are the same three warnings plus other, minor warnings, such as “brand colors in address bar”.

Google Play Music refuses to load altogether and says “open the native app or go away” if it detects that the browser is “mobile” (!); on desktop it loads, but all Lighthouse audits fail except “Uses HTTPS”.

I don’t think the idea of PWAs is bad – it’s how the first iPhone and later Firefox OS were supposed to work – but Google’s notion of PWAs is complete bullshit, with “service workers”, “brand colors in address bar” and other nonsense. Even Google itself does not try to conform to it.

From a user’s perspective, the problem with Edge was not pages rendering incorrectly, or being slow, or being too resource-hungry. I had a pretty good experience in those regards.

What I had a terrible experience with is the UI. Initially Edge had a very minimal UI: address bar, back, forward, and a landing page with tiles of most-visited pages.
Luckily, after it gained some traction, someone had a great idea: the product is not popular because it does not have

pervasive tracking in the name of synchronization, which (at least the sync part) does not really work

Meanwhile they marketed the outstanding support for Progressive Web Apps, which was… a bit of an overstatement. For example, you can pin a PWA to the Start menu, but it will open in an ordinary Edge window in a tab, not in its own separate window as if it were an app.

Overall the browser was not that bad; on my underpowered notebook it ran pretty decently. Still, the aggressive Bing and MSN bullshit eventually drove me off, so I’m back to Firefox, which I had abandoned after version 3.6, I think.

Last time I tried it, it had weird unpredictable lock-ups when I would open a new tab, type in the web address I wanted to visit, and only actually start loading that web address five seconds later. Firefox had considerably better and more predictable performance.

By the way: am I the only one who thinks Firefox Quantum looks more like Edge than it looks like Chrome?

On whether FF is more like Edge or Chrome: because of the blocky tabs I tend to agree with you, but I think this is primarily your perception, and FF is going its own way.

That’s true, but everyone always compares it to Chrome. I don’t think I’ve ever heard anyone compare it to Edge (it’s not just the blocky tabs; the “Library” button in Firefox is conceptually similar to the “Hub” button on Edge, bundling the history, bookmarks, and downloads together into one drop-down panel).

As for Firefox’s future: if it becomes the browser that just bundles an ad blocker by default (and I don’t mean the neutered implementation that only kicks in for ads that play sound by default, I mean all ads), that’ll probably be the day I start telling my grandma to use it. Because, screw any other considerations, that’s exactly the kind of unfair advantage that Firefox should be exercising.

It’s 2015, and I saw a presenter at a Python conference make fun of Java. How would that feel to people trying to move from Java into something else? I wouldn’t feel welcome, and I’d have learned that the idea that the Python community is welcoming wasn’t true.

My brother-in-law teased me about buying a Subaru and not God’s Chosen Car (Chevrolet). My father (an Aggie) teased my wife about going to an obviously inferior school (UT). I got teased for running Linux for Babies (Ubuntu) instead of Gentoo. My last talk at Black Hat I had a joke about the obvious superiority of Python over Ruby.

None of these cases are actual examples of anyone being unwelcoming. There’s a fair amount of good-natured ribbing in the world, for just about anything. It is a normal method of human interaction.

There are certainly asses out there who really do get way into their language/car/tool of choice and are actually unwelcoming, but that is certainly not unique to computer science or software engineering. To judge a whole community by those asses is applying a standard that wouldn’t work in any field of human endeavor.

And there’s the additional problem that some tools really are just better and not even just for specific cases. That doesn’t mean you should make fun of the people who use them, of course, but those people shouldn’t get offended if people wonder why they’re using an inferior tool when other tools are just as easily available. (Maybe it’s because it’s what you know, which is fine, but if a doctor tells me that they only use medical techniques from 30 years ago because it’s “what they know best” I’m going to find a different doctor…)

Good-natured ribbing, like any other form of humor, is kind of audience-dependent. Making fun of someone who is already your friend is different from making fun of a whole class of people who aren’t even there to defend themselves. A joke about Ubuntu being Linux for Babies might be fine with someone who is already confident enough in their abilities to present at BHW, but it can be soul-crushing if you’re actually a noob.

A conference presentation should probably avoid jokes unless you’ve shown them to other people to try to make sure that they actually think it’s funny.

I don’t understand this meme. How does QUIC benefit “big players” more than it benefits anyone else?

QUIC (HTTP/3 is basically just HTTP/2, but using QUIC as a transport instead of TCP) is an attempt to eliminate the head-of-line-blocking problems that HTTP/2 and HTTP/1.1+pipelining have. That seems like it could help with any page that downloads a lot of resources, regardless of whether it’s being deployed on a massive server cluster or it’s just a server sitting under my desk.
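
To make the head-of-line-blocking difference concrete, here is a toy model (a sketch, not real TCP or QUIC semantics): packets from two streams share one connection, one packet is lost and only arrives after a retransmit, and we compare when each packet becomes deliverable under a single global ordering (TCP-like) versus per-stream ordering (QUIC-like). All numbers are arbitrary illustrative time units.

```python
def delivery_times(packets, lost_index, rtt=1, retx_delay=10, tcp=True):
    """packets: list of stream ids in send order; returns per-packet delivery time.
    The packet at lost_index only arrives after a retransmit (retx_delay).
    A TCP-like transport delivers in strict global order, so everything after
    the loss stalls; a QUIC-like one only stalls later packets on the SAME stream."""
    lost_stream = packets[lost_index]
    times = []
    for i, stream in enumerate(packets):
        t = rtt  # normal arrival time
        if i == lost_index:
            t = retx_delay
        elif i > lost_index and (tcp or stream == lost_stream):
            t = max(t, retx_delay)  # blocked behind the retransmitted packet
        times.append(t)
    return times

pkts = ["a", "b", "a", "b"]  # two streams interleaved; packet 1 (stream b) is lost
print(delivery_times(pkts, 1, tcp=True))   # [1, 10, 10, 10] – everything stalls
print(delivery_times(pkts, 1, tcp=False))  # [1, 10, 1, 10]  – stream a unaffected
```

The point of the toy model is that the benefit scales with how many independent resources a page fetches, not with how big the operator is.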

Other improvements that QUIC is supposed to make – like avoiding middleboxes that have messed with the unencrypted parts of TLS, avoiding breaking the connection when the IP address of the client changes, and allowing the initial connection packet to also contain the GET request – seem even more agnostic to the size of the website. In fact, the hypothetical server-under-my-desk, with its own high-latency residential connection, seems like it would benefit more from the 0-RTT init than a server in a data center with a fiber-optic connection.

So, yeah… please, actually explain how QUIC benefits GOOG without necessarily benefiting anyone else the same way? Other than not being compatible with existing tooling, which is inevitable with any protocol change, and of course the problems that the article lists, how does it hurt people who just take nginx or IIS and drop it onto their server somewhere?

Edit: I do find it annoying and disturbing that Google has so much power as a vendor. For example, I hate AMP. The problems AMP is intended to solve are real, but as a solution, it’s sick and wrong. I don’t get AMP vibes from QUIC, though; it’s just a multiplexed stream thingy with an emphasis on reducing latency.

I don’t think this complaint is entirely valid, but one can argue that small sites can compete with big sites by being faster, and now QUIC comes along and makes the big sites fast, erasing one’s competitive advantage.

I suppose the same was said when optimizing compilers arrived. The people who could write the tightest leanest code lost a lot of their edge.

Google QUIC became a full transport protocol early in its development.
Regarding HoL, it was the IETF that completely eliminated it, since the HEADERS stream in Google QUIC was still serialised.
HTTP/3 is an IETF innovation, since Google QUIC used HTTP/2.

I beg all my fellow crustaceans to please, please use Firefox. Not because you think it’s better, but because it needs our support. Technology only gets better with investment, and if we don’t invest in Firefox, we will lose the web to Chrome.

On the other hand, WebSocket debugging (mostly frame inspection) is impossible in Firefox without an extension. I try not to install any extensions that I don’t absolutely need and Chrome has been treating me just fine in this regard[1].

Whether or not I agree with Google’s direction is now a moot point. I need Chrome to do what I do with extensions.

As soon as Firefox supports WebSocket debugging natively, I will be perfectly happy to switch.

[1] I mostly oppose extensions because of questionable maintenance cycles. I allow uBlock and aXe because they have large communities backing them.

Axe (https://www.deque.com/axe/) seems amazing. I know it wasn’t the focus of your post – but I somehow missed this when debugging an accessibility issue just recently, I wish I had stumbled onto it. Thanks!

I have never needed to debug WebSockets and see no reason for that functionality to bloat the basic browser for everybody. Too many extensions might not be a good thing but if you need specific functionality, there’s no reason to hold back. If it really bothers you, run separate profiles for web development and browsing. I have somewhat more than two extensions and haven’t had any problems.

I do understand your sentiment, but the only extension that I see these days is marked “Experimental”.

On the other hand, I don’t see how it would “bloat” a browser very much. (Disclaimer: I have never written a browser or contributed to any. I am open to being proved wrong.) I have written a WebSockets library myself, and it’s not a complex protocol. It can’t be too expensive to update a UI element on every (WebSocket) frame.
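
For a sense of how small the protocol is, here is a sketch of decoding just the fixed part of a frame header per RFC 6455 §5.2 (masking-key handling and payload unmasking are omitted, so this is an illustration rather than a complete parser):

```python
import struct

def parse_frame_header(data: bytes):
    """Decode the fixed WebSocket frame header (RFC 6455, section 5.2).
    Returns (fin, opcode, masked, payload_length, header_offset)."""
    b0, b1 = data[0], data[1]
    fin = bool(b0 & 0x80)
    opcode = b0 & 0x0F          # 0x1 = text, 0x2 = binary, 0x8 = close, ...
    masked = bool(b1 & 0x80)    # client-to-server frames must be masked
    length = b1 & 0x7F
    offset = 2
    if length == 126:           # 16-bit extended payload length follows
        (length,) = struct.unpack_from("!H", data, offset)
        offset += 2
    elif length == 127:         # 64-bit extended payload length follows
        (length,) = struct.unpack_from("!Q", data, offset)
        offset += 8
    return fin, opcode, masked, length, offset

# An unmasked text frame carrying "Hi" (0x81 = FIN + text opcode):
print(parse_frame_header(b"\x81\x02Hi"))  # (True, 1, False, 2, 2)
```

Two bytes of flags plus an optional extended length, and a UI only needs to surface the decoded fields – which is the sense in which the feature seems cheap.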

The extensions are all terrible – but what’s more important is that I lost the belief that any kind of vertical tab functionality has any chance of long-term survival. Even if support was added now, it would be a constant battle to keep it and I’m frankly not interested in such fights anymore.

Mozilla is chasing their idealized “average user” and is determined to push everyone into their one-size-fits-all idea of user interface design – anyone not happy with that can screw off, if it were up to Mozilla.

It’s 2018 – I don’t see why I even have to argue for vertical tabs and mouse gestures anymore. I just pick a browser vendor which hasn’t been asleep at the wheel for the last 5 years and ships these features out of the box.

And if the web in the future ends up as some proprietary API defined by whatever Google Chrome implements, because Firefox went down, Mozilla has only itself to blame.

The extensions are all terrible – but what’s more important is that I lost the belief that any kind of vertical tab functionality has any chance of long-term survival. Even if support was added now, it would be a constant battle to keep it and I’m frankly not interested in such fights anymore.
The whole point of moving to WebExtensions was long term support. They couldn’t make significant changes without breaking a lot of the old extensions. The whole point was to unhook extensions from the internals so they can refactor around them and keep supporting them.

I’m not @soc, but I wish Firefox had delayed their disabling of old-style extensions in Firefox 57 until they had replicated more of the old functionality with the WebExtensions API – mainly functionality related to interface customization, tabs, and sessions.

Yes, during the time of that delay, old-style extensions would continue to break with each release, but the maintainers of Tree Style Tabs and other powerful extensions had already been keeping up with each release by releasing fixed versions. They probably could have continued updating their extensions until WebExtensions supported their required functionality. And some users might prefer to run slightly-buggy older extensions for a bit instead of switching to the feature-lacking new extensions straight away – they should have that choice.

What’s the improvement? The new API was so bad that they literally had to pull the plug on the existing API to force extension authors to migrate. That just doesn’t happen in cases where the API is “good”, developers are usually eager to adopt them and migrate their code.

Let’s not accuse people you disagree with of being “against improvements” – it’s just that the improvements have to actually exist, and in this case the API clearly wasn’t ready. This whole fiasco feels like another instance of CADT-driven development and the failure of management to rein it in.

The old extension API provided direct access to the JavaScript context of both the chrome and the tab within a single thread, so installing an XUL extension was disabling multiprocess mode. Multiprocess mode seems like an improvement; in old Firefox, a misbehaving piece of JavaScript would lock up the browser for about a second before eventually popping up a dialog offering to kill it, whereas in a multiprocess browser, it should be possible to switch and close tabs no matter what the web page inside does. The fact that nobody notices when it works correctly seems to make it the opposite of Attention-Deficient-Driven-Design; it’s the “focus on quality of implementation, even at the expense of features” design that we should be encouraging.

The logical alternative to “WebExtension For The Future(tm)” would’ve been to just expose all of the relevant threads of execution directly to the XUL extensions: run-this-in-the-chrome.xul and run-this-in-every-tab.xul, and message-pass between them. But at that point, we’re talking about having three different extension APIs in Firefox.

Which isn’t to say that I think you’re against improvement. I am saying that you’re thinking too much like a developer, and not enough like the poor sod who has to do QA and Support triage.

Improving the actual core of Firefox. They’re basically ripping out and replacing large components every other release. This would break large amount of plugins constantly. Hell, plugins wouldn’t even work in Nightly. I do agree with @roryokane that they should have tried to improve it before cutting support. The new API is definitely missing many things but it was the right decision to make for the long term stability of Firefox.

Eh … WAT? Mozilla went the extra mile with their recent extension API changes to make things – that worked before – impossible to implement with a recent Firefox version.
The current state of tab extensions is this terrible, because Mozilla explicitly made it this way.

I used Firefox for more than 15 years – the only thing I wanted was to be left alone.

It’s one of the laws of the internet at this point: Every thread about Firefox is always bound to attract someone complaining about WebExtensions not supporting their pet feature that was possible with the awful and insecure old extension system.

If you care about “non terrible” (whatever that means — Tree Style Tab looks perfect to me) vertical tabs more than anything — sure, use a browser that has them.

But you seem really convinced that Firefox could “go down” because of not supporting these relatively obscure power user features well?? The “average user” they’re “chasing” is not “idealized”. The actual vast majority of people do not choose browsers based on vertical tabs and mouse gestures. 50% of Firefox users do not have a single extension installed, according to telemetry. The majority of the other 50% probably only have an ad blocker.

Picking just one example: Having the navigation bar at a higher level of the visual hierarchy is just wrong – the tab panel isn’t owned by the navigation bar; the navigation bar belongs to a specific tab! Needless to say, all of the vertical tab extensions are forced to be wrong, because they lack the API to implement the UI correctly.

You can only go so far alienating the most loyal users that use Firefox for specific purposes until they stop installing/recommending it to their less technically-inclined friends and relatives.

Mozilla is so busy chasing after Chrome that it doesn’t even realize that most Chrome users will never switch. They use Chrome because “the internet” (www.google.com) told them so. As long as Mozilla can’t make Google recommend Firefox on their frontpage, this will not change.

Discarding their most loyal users while trying to get people to adopt Firefox who simply aren’t interested – this is a recipe for disaster.

I still miss them; losing them didn’t cripple me, but it really hurt. The other thing about Tree (not just vertical) tabs that FF used to have was that each subtree was contextual to its parent tab. So when you opened a link in a background tab, it was opened as a child of your current tab. For doing documentation hunting / research it was amazing and I still haven’t found its peer.

I don’t think we’ll ever get the majority of browser share back into the hands of a (relatively) sane organization like Mozilla—but we can at least get enough people to make supporting alternative browsers a priority. On the other hand, the chances that web devs will ever feel pressured to support the browsers you mentioned, is close to nil. (No pun intended.)

What would you like me to say, that Firefox’s existence is worthless? This is an absurd thing to insinuate.

funded by google

No. I’m not sure whether you’re speaking in hyperbole, misunderstood what I was saying, and/or altogether skipped reading what I wrote. But this is just not correct. If Google really had Mozilla by the balls as you suggest, they would coerce them to stop adding privacy features to their browser that, e.g., block Google Analytics on all sites.

sends data to google by default

Yes, though it seems they’ve been as careful as one could be about this. Also to be fair, if you’re browsing with DNT off, you’re likely to get tracked by Google at some point anyway. But the fact that extensions can’t block this does have me worried.

i’m sorry if i misread something you wrote. i’m just curious what benefit you expect to gain if more people start using firefox. if everyone switched to firefox, google could simply tighten their control over mozilla (continuing the trend of the past 10 years), and they would still have control over how people access the web.

It seems you’re using “control” in a very abstract sense, and I’m having trouble following. Maybe I’m just missing some context, but what concrete actions have Google taken over the past decade to control the whole of Mozilla?

Google has pushed through complex standards such as HTTP/2 and new rendering behaviors, which Mozilla implements in order to not “fall behind.” They are able to implement and maintain such complexity due to funding they receive from Google, including their deal to make Google the default search engine in Firefox (as I said earlier, I couldn’t find any breakdown of what % of Mozilla’s funding comes from Google).

For evidence of the influence this funding has, compare the existence of Mozilla’s Facebook Container to the non-existence of a Google Container.

No word on the exact breakdown. Visit their 2017 report and scroll all the way to the bottom, and you’ll get a couple of helpful links. One of them is to a wiki page that describes exactly what each search engine gets in return for their investment.

I would also like to know the exact breakdown, but I’d expect all those companies would get a little testy if the exact amount were disclosed. And anyway, we know what the lump sum is (around half a billion), and we can assume that most of it comes from Google.

the non-existence of a Google Container

They certainly haven’t made one themselves, but there’s nothing stopping others from forking one off! And anyway, I think it’s more so fear on Mozilla’s part than any concrete warning from Google against doing so.

Perhaps this is naïveté on my part, but I really do think Google just want their search engine to be the default for Firefox. In any case, if they really wanted to exert their dominance over the browser field, they could always just… you know… stop funding Mozilla. Remember: Google is in the “web market” first & the “software market” second. Having browser dominance is just one of many means to the same end. I believe their continued funding of Mozilla attests to that.

It doesn’t have to be a direct threat from Google to make a difference. Direct threats are a very narrow way in which power operates and there’s no reason that should be the only type of control we care about.

Yes, Google’s goal of dominating the browser market is secondary to their goal of dominating the web. Then we agree that Google’s funding of Firefox is in keeping with their long-term goal of web dominance.

if they really wanted to exert their dominance over the browser field, they could always just… you know… stop funding Mozilla.

Likewise, if Firefox was a threat to their primary goal of web dominance, they could stop funding Mozilla. So doesn’t it stand to reason that using Firefox is not an effective way to resist Google’s web dominance? At least Google doesn’t think so.

Likewise, if Firefox was a threat to their primary goal of web dominance, they could stop funding Mozilla. So doesn’t it stand to reason that using Firefox is not an effective way to resist Google’s web dominance?

You make some good points, but you’re ultimately using the language of a “black or white” argument here. In my view, if Google were to stop funding Mozilla they would still have other sponsors. And that’s not to mention the huge wave this would make in the press—even if most people don’t use Firefox, they’re at least aware of it. In a strange sense, Google cannot afford to stop funding Mozilla. If they do, they lose their influence over the Firefox project and get huge backlash.

I think this is something the Mozilla organization were well aware of when they made the decision to accept search engines as a funding source. They made themselves the center of attention, something to be competed over. And in so doing, they ensured their longevity, even as Google’s influence continued to grow.

Of course this has negative side effects, such as companies like Google having influence over them. But in this day & age, the game is no longer to be free of influence from Google; that’s Round 2. Round 1 is to achieve enough usage to exert influence on what technologies are actually adopted. In that sense, Mozilla is at the discussion table, while netsurf, dillo, and mothra (as much as I’d love to love them) are not and likely never will be.

I know you were joking, but I do feel like there is something to be said for the simplicity of systems like gopher. The web is so complicated nowadays that building a fully functional web browser requires software engineering on a grand scale.

I was partially joking. I know there are new ActivityPub tools like Pleroma that support Gopher and I’ve thought about adding support to generate/serve gopher content for my own blog. I realize it’s still kinda a joke within the community, but you’re right about there being something simple about just having content without all the noise.
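To the simplicity point: the whole Gopher protocol is small enough that a toy server fits in a few lines. Here’s a sketch (the menu content, selectors, and port are all made up for illustration) — the client sends a selector terminated by CRLF, and the server replies with tab-separated menu lines ending in a lone `.`:

```python
import socket

# Hypothetical root menu: item type '0' means text file; fields are
# display string, selector, host, and port, separated by tabs.
MENU = "0My blog post\t/post1.txt\tlocalhost\t7070\r\n.\r\n"

def handle(conn):
    # The client sends one selector line ending in CRLF; an empty
    # selector asks for the root menu.
    selector = conn.recv(1024).decode("ascii", errors="replace").strip()
    if selector == "":
        conn.sendall(MENU.encode("ascii"))
    else:
        # Item type '3' is an error line in Gopher menus.
        conn.sendall(b"3Not found\terror\tlocalhost\t7070\r\n.\r\n")
    conn.close()

def serve(host="localhost", port=7070):
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind((host, port))
        srv.listen(1)
        while True:
            conn, _ = srv.accept()
            handle(conn)
```

That really is most of it — no headers, no content negotiation, no scripting. Compare that to what a minimal HTTP/2 endpoint needs.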

I’ve relatively recently switched to FF, but still use Chrome for web dev. The dev tools still seem quite a bit more advanced, and the browser is much less likely to lock up completely if I have a JS issue that’s chewing CPU.

I tried to use Firefox on my desktop. It was okay, not any better or worse than Chrome for casual browsing apart from private browsing Not Working The Way It Should relative to Chrome (certain cookies didn’t work across tabs in the same Firefox private window). I’d actually want to use Firefox if this was my entire Firefox experience.

I tried to use Firefox on my laptop. Site icons from bookmarks don’t sync for whatever reason (I looked up the ticket and it seems to be a policy problem where the perfect is the enemy of the kinda good enough), but it’s just a minor annoyance. The laptop is also pretty old and for that or whatever reason has hardware accelerated video decoding blacklisted in Firefox with no way to turn it back on (it used to work a few years ago with Firefox until it didn’t), so I can’t even play 720p YouTube videos at an acceptable framerate and noise level.

I tried to use Firefox on my Android phone. Bookmarks were completely useless with no way to organize them. I couldn’t even organize on a desktop Firefox and sync them over to the phone since they just came out in some random order with no way to sort them alphabetically. There was also something buggy with the history where clearing history didn’t quite clear history (pages didn’t show up in history, but links remained colored as visited if I opened the page again) unless I also exited the app, but I don’t remember the details exactly. At least I could use UBO.

This was all within the last month. I used to use Firefox before I used Chrome, but Chrome just works right now.

I definitely understand that Chrome works better for many users and you gave some good examples of where firefox fails. My point was that people need to use and support firefox despite it being worse than chrome in many ways. I’m asking people to make sacrifices by taking a principled position. I also recognize most users might not do that, but certainly, tech people might!? But maybe I’m wrong here, maybe the new kids don’t care about an open internet.

I already complained about SPA being a default choice nowadays for literally everything

It isn’t.

Maybe I am wrong, and it is not that bad.

That’s right — it isn’t that bad.

I actually believe jQuery is still big (can’t really provide any data, just my gut feeling)

Humans do not have a good intuition for size or performance, so this belief is unfounded.

but it is not cool anymore

You’re right about this. There is a trend to over-engineer. You don’t have to follow this trend though. In fact you’ll be better off generally by carefully considering the problem you’re solving and then solving it in the simplest way possible. Sometimes that’ll be with a big complex tool; sometimes it’ll be words on a page.