I'm not sure why his 'bright spots' list only seems to list new stuff, rather than old stuff that's still being maintained and still is pretty small. Things like bash, and vim. And to a lesser extent (because it keeps adding codecs) ffmpeg and mplayer still run well on my old hardware.

@Linux-SWAT Exactly. But so many extras have been added that fill your hard drive, for example multi-user editing of a spreadsheet, or (video) collaboration.
Although I am impressed with the speed of Excel: opening a 40MB file takes 3 seconds, while LibreOffice takes 30.

It's XML too, just... different. One file says that cell A1 contains string reference 1 and A2 contains string reference 2, and then another XML file (the shared strings table) says that reference 1 is this and reference 2 is that...
The XML is also bloated with extra attributes.
LibreOffice is not happy either, because the file contains many sheet tabs.
I'm trying to make a Perl data extractor, but even that is very slow compared to opening the file in Excel and copy-pasting the data...
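To illustrate the indirection: an .xlsx file is a zip archive, the sheets store only numeric references, and the actual text sits in xl/sharedStrings.xml. A rough sketch of dumping that table in Perl (assuming the Archive::Zip and XML::Twig CPAN modules; error handling mostly omitted):

```perl
#!/usr/bin/perl
use strict;
use warnings;
use Archive::Zip;
use XML::Twig;

# An .xlsx file is just a zip archive; the cell text lives in
# xl/sharedStrings.xml and the worksheets store only indexes into it.
my $zip = Archive::Zip->new($ARGV[0]) or die "cannot read $ARGV[0]\n";
my $xml = $zip->contents('xl/sharedStrings.xml')
    or die "no shared strings table in this file\n";

# Collect the table: each <si> element is one entry, referenced by
# its position from the worksheet XML.
my @strings;
XML::Twig->new(
    twig_handlers => {
        si => sub {
            my ($twig, $si) = @_;
            push @strings, $si->text;
            $twig->purge;          # keep memory use flat on big files
        },
    },
)->parse($xml);

printf "string %d: %s\n", $_, $strings[$_] for 0 .. $#strings;
```

So every cell lookup is a hop from the sheet XML into this table, which is part of why naive extractors crawl.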

the writer makes lots of errors ('conclusino'), presumably since that's how kids write nowadays. interesting points, i wonder if they still stand. (e.g. glimmerjs shipping all of the Britannica just to include the definition of 'glimmer' in their help file.)

There is a problem, but it seems to be caused mostly by two factors:
1. Business wants software now, written ASAP, not after optimization, on the assumption that computers have almost infinite power. That may even work, but only as long as a single such badly written program is running at a time. This is now the development method not just for this bloated CAD package, but for whole operating systems. The trend then spills over into open source, which has always been under-optimized, since most devs have computers much faster than average. The real problem will start when they try to fix security with forced virtualization instead of firing the **** who notoriously accesses arrays in C by running a pointer past another array's boundary. Spotted in a scientific package that costs about 8K € per user.
2. Most third-degree Computer Science students I have met have no idea at all about elementary computer architecture, registers, assembler, or memory access methods, but hey, they can program in C# or JS. To use a singly linked list they will pull an 80MB library into the project (a few lines are enough; see the sketch below).
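For the record, here is what a singly linked list costs without any library: a toy sketch in Perl, using nothing but hash references.

```perl
#!/usr/bin/perl
use strict;
use warnings;

# A singly linked list from plain hash references: each node carries a
# value and a reference to the next node; undef terminates the list.
my $head;
for my $value (reverse 1 .. 5) {
    $head = { value => $value, next => $head };   # prepend at the head
}

# Walk the list from head to tail.
for (my $node = $head; defined $node; $node = $node->{next}) {
    print $node->{value}, "\n";
}
```

That is the whole data structure; no 80MB dependency required.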

I'm not sure why his 'bright spots' list only seems to list new stuff, rather than old stuff that's still being maintained and still is pretty small. Things like bash, and vim. And to a lesser extent (because it keeps adding codecs) ffmpeg and mplayer still run well on my old hardware.


I don't know exactly what is inside mplayer, but using it to play YouTube saves lots of CPU power for more battery time or computations. And by 'lots' I mean about 40% of a dual-core machine.

I'm trying to make a Perl data extractor, but even that is very slow compared to opening the file in Excel and copy-pasting the data...


I was writing such a thing for a proprietary simulation program (with no documentation). A binary file, meshes with hundreds of thousands of nodes, lots of tables encoded in the most bizarre ways. The most efficient way was to load everything into memory and make a state machine to roll through the data. At least it was faster than the original postprocessor.
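The skeleton of that state machine is simple in Perl; the record layout below is invented for illustration, since the real format was undocumented:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Slurp the whole binary file into memory, then roll through it with a
# small state machine keyed on a record tag. The layout here is made up:
# a 16-bit little-endian tag followed by a tag-specific body.
open my $fh, '<:raw', $ARGV[0] or die "cannot open $ARGV[0]: $!\n";
my $data = do { local $/; <$fh> };
close $fh;

my $pos = 0;
while ($pos < length $data) {
    my $tag = unpack 'v', substr($data, $pos, 2);
    $pos += 2;

    if ($tag == 0x0001) {        # node record: 32-bit id + 3 floats
        my ($id, @xyz) = unpack 'V f3', substr($data, $pos, 16);
        $pos += 16;
        # ... store the node ...
    }
    elsif ($tag == 0x0002) {     # table record: length-prefixed blob
        my $len = unpack 'V', substr($data, $pos, 4);
        $pos += 4 + $len;
        # ... decode the table ...
    }
    else {
        die sprintf "unknown tag 0x%04x at offset %d\n", $tag, $pos - 2;
    }
}
```

Keeping everything in one buffer and walking it by offset avoids per-record I/O entirely.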

I don't know exactly what is inside mplayer, but using it to play YouTube saves lots of CPU power for more battery time or computations. And by 'lots' I mean about 40% of a dual-core machine.


Yes, that's very true. I use it on some of my oldest computers to let them play fullscreen videos downloaded from YouTube, which works much better than using Flash video. I also use a more modern machine to transcode videos to the screen resolution of the slower machine, so that it can avoid having to scale anything, and play them over the local network using sshfs. I've scripted ffprobe and ffmpeg to work out the aspect ratio of supplied videos and transcode them to fit inside the target resolution, so I can control all of that over an ssh connection from my slower, more portable machines.
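The heart of that script is just asking ffprobe for the source dimensions and fitting them inside the target box before invoking ffmpeg. A trimmed-down sketch (the target resolution here is a placeholder, and the real script does more error checking):

```perl
#!/usr/bin/perl
use strict;
use warnings;
use POSIX qw(floor);

my ($in, $out) = @ARGV;
my ($target_w, $target_h) = (1024, 600);    # screen of the slow machine

# Ask ffprobe for the video dimensions; prints one "width,height" line.
my $cmd = qq{ffprobe -v error -select_streams v:0 }
        . qq{-show_entries stream=width,height -of csv=p=0 "$in"};
my ($w, $h) = `$cmd` =~ /(\d+),(\d+)/ or die "ffprobe gave no dimensions\n";

# Shrink to fit inside the target box while preserving aspect ratio.
my $ratio = ($target_w / $w) < ($target_h / $h)
          ? ($target_w / $w) : ($target_h / $h);
$ratio = 1 if $ratio > 1;                    # never upscale
my $new_w = floor($w * $ratio / 2) * 2;      # keep dimensions even
my $new_h = floor($h * $ratio / 2) * 2;

system('ffmpeg', '-i', $in, '-vf', "scale=$new_w:$new_h", $out) == 0
    or die "ffmpeg failed\n";
```

Run as transcode.pl input.mkv output.mp4; since ffmpeg here picks the output codecs from the file extension, the real script pins them explicitly.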

ffmpeg could still be faster at transcoding webm videos; for me it's appreciably close to real-time when handling mp4 videos, but the same conversions on webm videos run many times slower. I guess that because it's an encumbered codec, they have difficulty using anything other than the provided source code and can't modify it, since there's no public spec telling anyone what it all means beyond what the current code actually does.