Computer programs stopped being designed and written by people who were educated in the field.

Now any clueless beginner writes programs for pocket change after watching some awful "lern2kode in 2 mins" tutorial from a random YouTuber or a braindead language model. Said videos teach programming by copy-pasting with no explanations, and omit essential concepts because "there 2 advunced for beginers xd". As a result, modern software has turned to shit.

Reminds me of what happened to math in schools and foreign language education (e.g. teachers tell students shit like 'I do know English' is wrong).

With respect to Emacs, may I remind you that the original version ran on ITS on a PDP-10, whose address space was 1 moby, i.e. 256 thousand 36-bit words (that's a little over 1 MByte). It had plenty of space to contain many large files, and the actual program was a not-too-large fraction of that space.

There are many reasons why GNU Emacs is as big as it is while its original ITS counterpart was much smaller:

- C is a horrible language in which to implement such things as a Lisp interpreter and an interactive program. In particular, any program that wants to be careful not to crash (and dump core) in the presence of errors has to become bloated because it has to check everywhere. A reasonable condition system would reduce the size of the code.
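A minimal sketch of that "check everywhere" bloat (not from the original post; `dup_concat` and `greeting` are made-up helpers): every fallible call needs its own manual check, and the checks propagate upward to every caller, which is exactly what a Lisp-style condition system would absorb.

```c
#include <stdlib.h>
#include <string.h>

/* Hypothetical helper: concatenate two strings into a fresh buffer.
 * Note how much of the body is error checking, not logic. */
char *dup_concat(const char *a, const char *b) {
    if (a == NULL || b == NULL)             /* check the inputs */
        return NULL;
    size_t na = strlen(a), nb = strlen(b);
    char *out = malloc(na + nb + 1);
    if (out == NULL)                        /* check the allocation */
        return NULL;
    memcpy(out, a, na);
    memcpy(out + na, b, nb + 1);            /* nb + 1 copies the NUL */
    return out;
}

/* ...and every caller has to check all over again. */
char *greeting(const char *editor) {
    char *s = dup_concat("hello from ", editor);
    if (s == NULL)
        return NULL;                        /* propagate, yet again */
    return s;
}
```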

- Unix is a horrible operating system for which to write an Emacs-like editor because it does not provide adequate support for anything except trivial "Hello world" programs. In particular, there is no standard good way (or even any way in many variants) to control your virtual memory sharing properties.

- Unix presents such a poor interaction environment to users (the various shells are pitiful) that GNU Emacs has had to import a lot of the functionality that a minimally adequate "shell" would provide. Many programmers at TLA never directly interact with the shell; GNU Emacs IS their shell, because it is the only adequate choice, and it isolates them from the various Unix (and even OS) variants.

Don't complain about TLA programs vs. Unix. The typical workstation Unix requires 3-6 MB just for the kernel, and provides less functionality (at the OS level) than the OSs of yesteryear. It is not surprising that programs that ran in adequate amounts of memory under those OSs have to reimplement some of the functionality that Unix has never provided.

What is Unix doing with all that memory? No, don't answer, I know: it is all those pre-allocated fixed-size tables and buffers in the kernel that I'm hardly ever using on my workstation but must have allocated at ALL times for the rare times when I actually need them. Any non-brain-damaged OS would have a powerful internal memory manager, but whoever said that Unix was an OS?

What is Unix doing with all that file space? No, don't answer. It is providing all sorts of accounting junk, which is excessive for personal machines and inadequate for large systems. After all, any important file in the system has been written by root -- terribly informative. And all that wonderfully descriptive information comes after lots of memory consumed by accounting daemons and megabytes of disk taken up by the various useless log files.

Just so you won't say that it is only TLA OSs and software that have such problems, consider everyone's favorite text formatter, TeX (I'm being sarcastic, although when compared with troff and relatives...). The original version ran under FLA on PDP-10s. It is also bloated under Unix, and it also must go through contortions in order to dump a pre-loaded version of itself, among other things.

It's not reasonable to check things at runtime. It's also not reasonable to pretend everyone puts resources into making programming languages that ensure bounds checking at compile time just because you couldn't figure things out but needed to go fast.

If you can't write simple C code without errors, chances are that even if all those memory checks were made for you, you would still make stupid logic errors everywhere.

And this... this is what makes programs actually bloated and slow. Some kid who didn't bother to think a little bit more, or who was simply unable to think through simple problems, is making complex programs and making insane choices that waste everyone's resources.

If you use classes for everything, you are actually this kid, and you're not a knowledgeable person.

If you think patterns solve every problem, you are actually this kid, and you're not a knowledgeable person.

If you try to generalize your solutions, trying to predict the future and have a "scalable program"... you're actually prematurely optimizing your code usage (which is even more sinful than prematurely optimizing for performance) and you're this kid. And you're not a knowledgeable person.

Accustom yourself to simply refactoring code. Grit your teeth and learn how not to make mistakes through pain and tears, rather than trusting someone else to sell you the perfect programming scheme that solves all problems magically. If you look at the last few decades, you will find lots of snake-oil salesmen, yet you will see that AAA game programmers actually still write C-like C++ code.

Abandon abstractions that are not useful, and your program will be faster, by the mere fact that you will not be confused and implement clown cars.

your "AAA" games are fucking trash. not only are they full of beginner C mistakes such as format string exploits, but they are still slow as hell. they literally can't maintain a mere 60 FPS, so they can't use real beam-racing vsync (0 latency, as opposed to double/triple buffering), and have to go for G-Sync/FreeSync instead (which might still break down)

are you serious? you're talking about a safety option that will simply never come up in a game, especially since a game will have its own way of printing text. This bug is something that a person who writes "modern" C++ assumes happens, and believes is solved thanks to their embarrassing and inefficient couts. The problem wouldn't happen in the first place, you dingus, because since I want to show you slick text in realtime at funky angles, my string is turned into a mesh and rendered through the GPU.

>can't maintain a mere 60FPS

You wouldn't be able to either; the difference is that you would have the graphics of two decades ago and still wouldn't be able to write something that gets near 60 FPS.

All your abstractions work against you, because EVEN if you managed to resolve them at compile time, they put a veil between you and the hardware, and you WILL waste CPU cycles doing stupid things you didn't intend to do.

The day compilers actually solve this problem is the day programmers aren't needed either, by the way. You wouldn't be needed if the compiler could, right now, lay things out optimally for you in all cases, because the magical kind of work it would need to do would literally be doing the difficult programming for you. Perhaps we're not that far away. But we're not there now, and you're deluded for feeling superior about thinking we're already there.

I have to make a ray tracer that takes a few shortcuts so you can see some fancy realistic diffuse lighting. I have to do it in real time. I have to perform blurring with an optimal number of edge-stopping functions, so that the noisy, fast ray tracer actually gives a pleasant-looking contribution, but it isn't so blurred that you can't see the borders. I have to reuse data so I don't perform the same expensive operations more than once if it isn't needed. I have to gauge all the stuff that happens in a single frame, and if at any moment I'm struggling because some class isn't a friend of another, or because "this would be wrong code, you're not supposed to use X that way, Bjarne said so", then I would be losing time. And no, I wouldn't gain that time back in the long run; that's a lie. I would lose even more time in the long run, because your patterns are not able to predict how things should look 30 or more iterations from now.

While I do that, you will still be trying to figure out the overloaded operators of your vertex class, or worse yet, you will be using GLM or some poorly implemented but "modern" library which WILL have an enormous performance hit.

I've verified for myself that both Crysis and Counter-Strike: Source have had format string exploits (the former patched, the latter silently patched). Literally all you do is name yourself %s%s%s. Am I the clueless one? No. All I was saying is that game developers can't even avoid format string exploits. And yes, they are indeed careless, considering that this cout arrow-operator business prevents them in the first place.
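The bug class in question is tiny and worth spelling out. A sketch (hypothetical function names, shown with `snprintf` so the effect is observable rather than a crash): the vulnerable version passes attacker-controlled text AS the format string, so a name like "%x %x" makes the formatter start reading varargs it was never given.

```c
#include <stdio.h>

/* Vulnerable: the player name IS the format string. */
void render_bad(char *out, size_t n, const char *name) {
    snprintf(out, n, name);
}

/* Fixed: the player name is just data. */
void render_ok(char *out, size_t n, const char *name) {
    snprintf(out, n, "%s", name);
}
```

Modern compilers flag the first version with -Wformat-security, which makes it all the more embarrassing when it ships.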

>second paragraph

what are you talking about? it isn't hard to maintain 60 FPS if you write the code yourself. it has nothing to do with requiring advanced compiler technology. by saying "abstractions work against you", it seems you understand the problem: a bunch of idiots made these little components that, when combined, lose any realtime properties

maybe it's hard to be realtime in 3D; i don't have experience with 3D rendering. it's probably more likely that people are just too busy ricing out their "AAA" game to care about something like that. i don't see why people who claim to make competitive games can't do a worst-case analysis step by step as they develop the game, to ensure it never drops framerate given reasonable resources

>I have to reuse data so I don't perform the same expensive operations more than once if it isn't needed.

yeah, this must be why most 3D games drop to 20 FPS when there are more than 3 dynamic objects (players, player-built structures, etc.) on screen

>While I do that, you will still be trying to figure out the overload operators of your vertex class or worse yet,

but i mean, they suck ass. slock for example had a bug where it would display the contents of your screen while locked. now without changing compile options it does this dumbass bullshit where it turns your screen red if someone types the wrong password

A big company hiring 700 underpaid contractors can make a much fancier program much faster than even a competent individual can. And when that's all done, they can fire those contractors and keep selling their monopolized software forever, while keeping a handful of devs to add stupid features so they can justify selling the software again every couple of years.

Normies don't give a damn and will buy into it anyway, even if it's bloated and slow and has 60 layers of botnet and doesn't do anything more useful than the previous version did, as long as it beeps happily when clicked on, has whatever trendy visuals marketing told them to prefer, and is idiot-proof.

Meanwhile, real software is unprofitable and risky, so it can't support any developers other than autists who do it for free, and autists aren't known for their ability to make things user-friendly or pretty. Not to mention many of those autists are even worse than the contractors hired by that big company.

oh yeah, i'm just going to go deep into the X11 API and fix their screen-revealing bug. the point is that the software sucks, not that it's easy to fix. really, anything to do with the terminal is also garbage for fundamental reasons (metacharacter injection, etc.). we can't just modify some terminal programs to fix this.

First off, you don't see the crap, since everyone is quick to forget the unsuccessful and unimpressive (unless it was exceptionally terrible). It's the same reason a "best hits" collection from a decade of music will almost always sound better than the current Top 40.

You are right about people making do with limited resources. Though I will have to argue that

>all programs are bloated shit and use up all memory/cpu

ignores the fact that even well-optimized programs of the past would require almost all the memory/CPU of a system. It's not like early 3D renderers and image editors ran silky-smooth on a standard machine of the day, or that highly optimized games like Star Fox or Kirby's Adventure didn't suffer FPS drops on their native consoles.

There are several reasons for this, off the top of my head:

•Necessity

On hardware with very immediate resource limits, limited (cheap) input/output expansion, and a basic OS which did not provide a convenient, failsafe framework, egregiously sloppy code would simply fail. If you did not perform proper hardware resource allocation in your code, from memory reclamation to a consistent and manageable runtime, your program would fail on its own in short order. There were still newbies in those days who would write sloppy code, but the bar for "acceptably sloppy" was significantly higher in order for the programs to even run.

•Simplicity

Because the hardware had such limited resources, and because an engineer could digest that hardware with much greater ease than the hardware of today, the pathways and limits of optimization were more readily apparent. And with fewer safeguards in the way, creative developers could more effectively use hardware in unorthodox ways to achieve certain results, such as you see in the demoscene. In the modern world of APIs and developer libraries, byzantine kernel drivers, and advanced, opaque firmware on individual components, the amount of deviation permitted for such hacks is lower, unless one works around those standards as well.

>Meanwhile, real software is unprofitable and risky, so it can't support any developers other than autists who do it for free, and autists aren't known for their ability to make things user-friendly or pretty. Not to mention many of those autists are even worse than the contractors hired by that big company.

If highly optimized software is absolutely required to perform a certain profitable function, it will be made (if proprietary). It is just that this is increasingly an edge case, especially now that computation and rendering have become distributed and cheap.

The FreeBSD and Linux kernels and their userlands have both received plenty of corporate developer contributions that improve hardware support, performance, and features, as those corporations are direct beneficiaries of those improvements. High-quality open codecs like Opus and AV1, and the software used to deal with these codecs, like ffmpeg, are also developed by multimedia corporations without direct profit, so that they can facilitate and standardize future development. And corporations increasingly tend to release the source code for certain in-house tools online, to facilitate development and possibly adoption. All this is coming from someone who has an axe to grind against corporate influence.

It seems to me that when techies harp on about "real" software, they are more concerned with the philosophy and aesthetics of use (practical vs. entertainment, minimalist vs. embellished, etc.), rather than the value produced for oneself, or society, from using said software.

I'm not sure about you, but as a user, I'd rather have my programs crash on an unhandled error, rather than taking down my whole system and compromising it permanently. C simply doesn't provide me with that safety.

Sure, your C is perfect because you are not one of those meme bad coders and you are a perfect programmer, and you don't even need to debug or write tests because you write error-free code 100% of the time. But sadly, you are the only programmer in the world capable of doing this, and you are too much of a NEET to bother writing a secure operating system all by yourself, with all the necessary tools for day-to-day use, so we will have to rely on C software written by mediocre programmers everywhere, so we (including you) are fucked.

>And this... this is what makes programs actually bloated and slow.

>Simple if guards make programs bloated and slow.

No, most certainly not, unless we are talking about really tight loops, which should be as simple as possible anyway. Checks are what have turned personal computers into machines that are pleasant to use, rather than "hey, reboot it" simulators. You really have a short attention span if you don't remember the frustrations of using a computer in the late 90s and early 2000s, with constant random crashes. We have advanced a hell of a lot, and it hasn't been thanks to people complaining that the size of their programs isn't measured in bytes.
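To make the "simple if guards" concrete, here's a sketch (hypothetical type and function names, not from any post here) of what such a check actually costs: one predictable branch per access, which outside a genuinely hot inner loop is unmeasurable, and which turns silent memory corruption into an immediate, debuggable failure.

```c
#include <stdio.h>
#include <stdlib.h>

/* A bounds-checked accessor: the whole "bloat" is one comparison. */
typedef struct { int *data; size_t len; } IntVec;

int vec_get(const IntVec *v, size_t i) {
    if (i >= v->len) {                  /* the guard in question */
        fprintf(stderr, "index %zu out of bounds (len %zu)\n", i, v->len);
        abort();                        /* crash loudly, here, now */
    }
    return v->data[i];
}
```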

No, programs are bloated and slow because programs nowadays depend on a thousand libraries, libraries they don't actually need. Libraries we wouldn't really need if languages included a sane set of tools in their standard libraries, but it seems we get either C levels of autism reinventing the wheel, or Java levels of bloat. But who cares: need a simple hashmap? Sure, just include Qt or Boost, which will in turn pull in the entirety of their libraries because everything depends on everything else, then make the user download the whole library anyway, just in case.
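For scale, here is roughly what "I need a simple hashmap" amounts to without any framework. A toy fixed-capacity string-to-int map with linear probing (everything here is made up for illustration: no deletion, no resizing, capacity must stay a power of two and the table must not fill up):

```c
#include <string.h>

#define CAP 64  /* power of two, so (hash & (CAP-1)) wraps cheaply */

typedef struct { const char *keys[CAP]; int vals[CAP]; } Map;

/* FNV-1a string hash. */
static unsigned hash_str(const char *s) {
    unsigned h = 2166136261u;
    while (*s) { h ^= (unsigned char)*s++; h *= 16777619u; }
    return h;
}

void map_put(Map *m, const char *key, int val) {
    unsigned i = hash_str(key) & (CAP - 1);
    while (m->keys[i] && strcmp(m->keys[i], key) != 0)
        i = (i + 1) & (CAP - 1);        /* linear probe past collisions */
    m->keys[i] = key;
    m->vals[i] = val;
}

int map_get(const Map *m, const char *key, int missing) {
    unsigned i = hash_str(key) & (CAP - 1);
    while (m->keys[i]) {
        if (strcmp(m->keys[i], key) == 0) return m->vals[i];
        i = (i + 1) & (CAP - 1);
    }
    return missing;
}
```

Forty lines, zero dependencies. The real argument is about where the line between "write it yourself" and "pull in a framework" should sit.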

>hurr generalizing solutions is bloat

At work we are forced to copy-paste stuff because of Agile development or whatever, rather than designing a general solution for each non-specific task. As a result, we end up with files tens of thousands of LoC long, just because at the moment it was faster and cheaper to do it that way. We are also forbidden from line-breaking statements, so we end up with lines that take up a whole panoramic screen and still need a scrollbar to see in full, so you can imagine how much of a pain this is to maintain.

They also decided that higher-level constructs such as higher-order functions or closures were either too bloated or too difficult to understand, so we are pretty much forbidden from using them, and we end up with 100-line switches and globals up the ass. They also decided dictionaries were too complicated, so we end up making linear searches over the same array over and over inside loops. But hey, it is simple to read, everything is adapted to work for their use case and their use case alone, and we just use the same three or four types for everything, so it must not be bloated, right? :^)

We also use zero external libraries because we have a strong case of NIH syndrome at the office, which I can say is the only good idea we ever had. I just wish we bothered expanding on it, so we could boost our productivity by generalizing all the stuff we have to write by hand every single day.

So it turns out Terry Davis was only halfway right when he said that. Things went wrong when it became possible to make computers usable by, and sell them to, the lowest common denominator. Now people have become accustomed to supercomputer levels of power and take everything for granted. Hardware should never have progressed past the 80s. The average person should only be able to afford something like a 1 MB Amiga 500 with a second floppy drive, or a 386SX with 2 MB RAM, a 40 MB HD, and the most basic of VGA and Sound Blaster cards. Throw in a modem (9600 baud) and a 24-pin dot matrix printer, and that's all the hardware you get for the rest of your life. You don't like that? Tough noogies.

No, it's not that at all. People here talk about segfaulting all the time when discussing C memory unsafety, but segfaults by themselves are actually benign. Crashes are only benign until someone decides to take advantage of them and not quite crash the program, in which case you are fucked.

Protip: most people who use C and think C is the best language ever, because it's the only thing they learned in college, and who then end up working in systems programming (the only job for real programmers such as ourselves), don't tend to know about this.

>Libraries we wouldn't really need if languages included a sane set of tools in their standard libraries

Deeply disagree; one only has to look at the C++ std to see how it doesn't work. Not only do you have a terrible library that realistically can't cover all problems, you also have people like the guys above who keep peddling things like cout even after they've been shown to be slower, all while talking about safety.

Standard libraries should be small, so people can't go around asking you why you didn't use the slow std version for every trivial problem.

>It seems to me that when techies harp on about "real" software, they are more concerned with the philosophy and aesthetics of use (practical vs. entertainment, minimalist vs. embellished, etc.), rather than the value produced for oneself, or society, from using said software.

oh yeah it has nothing to do with getting RSI from using garbage software all day every day.

well, now that we have high-end machines, we can spend a lot of that power on making a safe system without requiring 6000 papers filled with advanced proofs. it's much easier to prove correct systems that just do redundant computation. contrary to what neckbearded C LARPers think, modern software is not slow because it has bounds checking. it is slow because it's shit. case in point: pretty much any C software you can name is just as slow and shitty.

the industry has been using higher-order functions for years now. just not at your job. the code is still just as shit; it's fundamental to how these companies are structured. no amount of trendy techniques or Best Practices (TM) will save them

>They also determined dictionaries were too complicated, so we end up making linear searches over the same array over and over inside loops.

I feel like you're one of those guys who uses a hash table for an array of 5 elements.

>the industry has been using higher-order functions for years now. just not at your job. the code is still just as shit; it's fundamental to how these companies are structured. no amount of trendy techniques or Best Practices (TM) will save them

It's not about bloat, it's about saving man-hours. You can boost productivity a hell of a lot by using general functors that do the nitty-gritty stuff for you, and even more by investing a few hours in designing code generation for the dumbest, most repetitive tasks. You also reduce potential bugs and LoC. Just because some people manage to write bad higher-order functions doesn't mean someone remotely competent won't write less code using these features than without them.
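Code generation for repetitive tasks doesn't even require external tooling; even plain C has the X-macro idiom. A sketch (the opcode list is made up for illustration): the enum and its name table are both generated from one list, so they can never drift out of sync.

```c
/* One list, expanded multiple ways. */
#define OPCODES \
    X(OP_LOAD)  \
    X(OP_STORE) \
    X(OP_ADD)   \
    X(OP_HALT)

/* Expansion 1: the enum itself. */
enum {
#define X(name) name,
    OPCODES
#undef X
    OP_COUNT
};

/* Expansion 2: a matching string table for debug output. */
static const char *op_names[] = {
#define X(name) #name,
    OPCODES
#undef X
};
```

Add an opcode to the list and every derived table updates itself; that is the "few hours of codegen design" paying for itself every day afterward.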

>I feel like you're one of those guys who uses a hash table for an array of 5 elements.

While I have ended up doing stuff like that in some places, and maybe it wasn't the most optimal thing I could have done, it surely was the cleanest. I also managed to optimize a task that took several hours to run down to seconds by abusing dictionaries, even though they didn't want to let me do that at first.

>I also managed to optimize a task that took several hours to run to seconds by abusing dictionaries, even though they didn't want

That only means that the people dictating your coding style are as bad as you. There's a place for dictionaries, and there's a place for arrays.

If you always use dictionaries, you will be faster in the cases where arrays aren't (to keep it simple). That doesn't mean dictionaries are faster, and if you think a dictionary by default is something that cleans up code, you're definitely not understanding your problems either.

Eh, no. I use plenty of arrays, and unless I am fucking around in a scripting language that uses dictionaries by default, I will default to arrays (vectors, actually), unless the problem at hand requires several identical or similar searches over the same array, in which case I will probably cache them into a dictionary to avoid iterating over the same array over and over like a dumbass, or unless the problem just lends itself to a key-value data structure. That's what hashmaps are for: a quite handy data structure that can be used for almost anything, but which may be unnecessarily heavy or general for most tasks.

I don't know about you, but if I can avoid going O(n*m), I will do it.
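The O(n*m) vs O(n+m) trade being argued about looks like this in miniature (everything here is made up for illustration; keys are assumed to be small ints in [0, 256) so a flat array can stand in for the dictionary):

```c
#include <stdbool.h>

/* O(n*m): for every element of a, rescan all of b. */
int count_common_naive(const int *a, int n, const int *b, int m) {
    int hits = 0;
    for (int i = 0; i < n; i++)
        for (int j = 0; j < m; j++)
            if (a[i] == b[j]) { hits++; break; }
    return hits;
}

/* O(n+m): build the lookup table once, then each query is O(1).
 * Assumes all values are in [0, 256). */
int count_common_indexed(const int *a, int n, const int *b, int m) {
    bool seen[256] = {false};
    for (int j = 0; j < m; j++) seen[b[j]] = true;  /* build once */
    int hits = 0;
    for (int i = 0; i < n; i++)
        if (seen[a[i]]) hits++;
    return hits;
}
```

Same answer, and the second version is what "caching the array into a dictionary" buys you when the repeated-search case actually shows up.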

Anyway, this is not about my usage of dictionaries. This is about how placing arbitrary restrictions on code, or on the languages themselves, is dumb, even if it apparently reduces complexity or bloat. Chances are your programmers will end up writing more, less readable but "simpler" code, and doing some really terrible stuff to circumvent your guidelines.

>Protip: most people who use C and think C is the best language ever because it's the only thing they have learned in college, and then end up working in systems programming (the only job for real programmers such as ourselves), don't tend to know about this

I actually went the other way around. At the beginning I was obsessed with OOP and patterns and UML, and university only tried to reinforce that. Only when I was finally able to stop coding bullshit did I realize that I could deal with one actual problem, instead of the dozen more invented out of nowhere by over-abstraction.

In university you're told that all these modern workflows are the way to go, that they are flawless, that cache misses are negligible, that not only do they have a solution but that they are the only solution, and that they don't have any drawbacks whatsoever.

Wake up: teamwork doesn't exist; we're all a bunch of autists unable to work with each other. Better to accept that you HAVE to refactor code, and better to actually read the code of your coworkers, rather than building bizarre and actually-not-useful-at-all abstractions before having to refactor the code anyway.

Cache misses _are_ largely negligible. If they weren't, it would be impossible to program anything, because cache only works to your advantage in certain cases in the first place. The default is no cache. Cache is an optimization. Just like compression will utterly fail in certain cases, so will cache. As I said, software is not shit because people don't know about cache. The reasons it is shit are much simpler, higher-level reasons. Take a look at a modern acclaimed product like Fortnite BR; they fuck up every possible thing that a new grad can fuck up:

-game requires roundtrip to server to switch weapons

-game requires roundtrip to server to get into edit mode

-while switching weapons, the weapon gets stuck in the air, you are no longer carrying it, and you can no longer use it until later (typical moronic malreuse[1] issue)

-mouse movements are queued in some braindead buffer

-game constantly stutters most likely because it's running some stupid OOP scripts and blocking the main loop

-more malreuse[2]: they use XMPP to implement in game chat and friends list

-more malreuse due to braindead OOP pretend-abstraction code: the game lets you fire your weapon during weapon switch, causing your client to go into an invalid state (for example, you won't be able to shoot again for another second, and ammo counts become wrong) and the server to not acknowledge your shot

-game edits the wrong structure; it instead edits the one next to the one you're trying to edit. smells like the typical rookie logic error people make when trying to designate elements within a data structure. webscale types do this literally every 5 minutes in their code. after the bug happens several times, the client goes into an invalid state and can no longer edit anything (and then shows a lock icon on the screen which isn't even intended to be part of the game; it's from the single-player mode - more malreuse)

-game plays music for a few seconds in the loading screen even though you have music disabled (typical malreuse issue)

as you can see, none of these are related to caching or whether they used C or not. they are purely issues with their logic. i see this same bullshit in literally every language (I've seen examples in at least: Java, C#, C, Haskell, Python, Ruby, JS, Perl, Lua, CoffeeScript, Go, C++, Visual Basic)

1: Malreuse is when you make some abstraction but fail to define any useful semantics for it, and some other jackass reuses your code assuming it does X and Y, despite the fact that it has zero documentation and he never read its code. You seem to acknowledge this sort of thing in your last paragraph. But you imply all abstractions are bad, which is also completely false. Abstractions are good, save time, and reduce defects. The problem is when people don't use them properly.

2: Another example of malreuse is when you use some COTS bullshit that is ostensibly applicable to your problem but is not at all, and you get some Frankenstein bullshit.

Pretty much _any_ use of a game engine, web framework, or any type of "platform" (web, Java, Flash) is a recipe for malreuse (and you don't even have access to the code in these cases).

Except in games, where you should transpose your data into SoA form to iterate through millions of objects and make use of intrinsics. So no. Stop with the meme; making your silicon scream just to paint two squares instead of thousands is literally why things feel sluggish.
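The transposition in question, sketched (particle fields and counts are made up): the same update, laid out two ways. With SoA, all the x values sit contiguously, so a pass that only touches x streams through memory with a 4-byte stride instead of a 16-byte one, and auto-vectorizes trivially.

```c
#define N 1024

/* Array of structs: fields interleaved per particle. */
typedef struct { float x, y, z, mass; } ParticleAoS;

/* Struct of arrays: each field is its own contiguous run. */
typedef struct { float x[N], y[N], z[N], mass[N]; } ParticlesSoA;

void step_aos(ParticleAoS *p, float dt) {
    for (int i = 0; i < N; i++)
        p[i].x += dt;       /* stride = sizeof(ParticleAoS) = 16 bytes */
}

void step_soa(ParticlesSoA *p, float dt) {
    for (int i = 0; i < N; i++)
        p->x[i] += dt;      /* stride = 4 bytes, SIMD-friendly */
}
```

Same arithmetic, same results; only the memory layout differs, which is the whole point of the argument.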

>But you imply all abstractions are bad, which is also completely false. Abstractions are good, save time, and reduce defects. The problem is when people don't use them properly

Most things that you do in C are abstractions. So no, that's not what I'm trying to say. But there's no limit to how much one can abstract, and people tend to believe that the way to solve a problem is by creating an endless recursive layering of abstractions. By accepting abstractions by default, you incentivize this behaviour; by being skeptical by default, you diminish it.

Abstractions are not good or bad, but they are literally not what solves the problem. It's better not to let anyone who hasn't proven themselves submit pull requests that introduce new abstractions.

lol. i program a game and an OS. the game uses cache and tries to maintain hard realtime guarantees (to achieve vsync with 0 latency, while also phase-locking to the server). this makes it a lot harder and more bug-prone. the OS is in an interpreted language, and cache is simply turned off to mitigate side channels

Dude, you can make 3D games with decent performance in fucking JavaScript of all things. You know, a language whose arrays are really hashmaps that cast every key you throw at them to strings, where arr[1] is the same as arr["1"]. And you can still do stuff like https://threejs.org/examples/#webgl_materials_cars

They have physics simulations, cloth simulations, breakable objects, fluid simulations (far from the best I have seen in a browser, but still), all done in JavaScript. No, they don't really use compute shaders; this is all being done in a shitty JS engine. I have worked with that library, and the internals are all done in shitty JS. Raycasting is done in shitty JS, for example, and it can pretty much run at 60 FPS even when raycasting against all objects in a really crowded scene.

Some months ago I had to optimize a visualization program written in three.js. It was designed to run with relatively small data sets, but a client was trying to run it on a data set 2000 times bigger than the ones we initially tested with. That would be 128,000 polygons. Of course, it barely managed not to crash.

So what did I do? Did I solve it by whining about muh cache hits to my boss? No. I attempted something less drastic than wasting a hundred man-hours writing a compute shader capable of exploiting cache hits on the GPU, away from JS's shitty hashmaps. It turns out every node in the scene (around 80k) was uploaded as its own object to the GPU, so I used plain JS to iterate over every node and add it to a single, big static model. This required running matrix transforms over every single vertex, which is done in JS. It also generated quite a lot of garbage in the process, because I didn't care; it could have been optimized much, much better.

The result? Silky smooth 60 FPS, and about two seconds to rebuild the scene. It would have taken fractions of a second if JS's hashmaps, or JS as a whole, didn't suck so hard. So it was not muh cache after all. The moral is, you are a bitch, and whining about cache hits for your 4th-wall-breaking 2D 8-bit retro roguelite-like-like platformer is pure LARPing.
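The batching trick described above (the real code was JS/three.js; this is a hypothetical C sketch with made-up types, using 2D points and a 2x3 affine matrix to keep it short): transform every node's vertices by its model matrix on the CPU and append them to one big static buffer, so the GPU sees one object instead of 80k.

```c
#include <stddef.h>

typedef struct { float m[6]; } Mat2x3;   /* row-major: [a b tx; c d ty] */
typedef struct { float x, y; } Vec2;

/* Transform n vertices by t and append them to the shared batch buffer.
 * Caller guarantees the batch has room; *used tracks how full it is. */
void batch_append(Vec2 *batch, size_t *used,
                  const Vec2 *verts, size_t n, const Mat2x3 *t) {
    for (size_t i = 0; i < n; i++) {
        Vec2 v = verts[i];
        batch[*used + i].x = t->m[0] * v.x + t->m[1] * v.y + t->m[2];
        batch[*used + i].y = t->m[3] * v.x + t->m[4] * v.y + t->m[5];
    }
    *used += n;
}
```

One pass like this over every node, one upload of the combined buffer, one draw call: that is the whole optimization.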

First:

the graphics are being done in GLSL shaders, which are written in a simple language that doesn't leave much room for senseless abstraction. If 3D graphics were actually written in some meme language, you would have retards proposing to use backgular.js with modelviewrecursiveabstractcontroller and three agile buzzwords that would change every 6 months or so. The mere fact that you are able to see that car at all is because three.js wasn't able to get its paws into the actual GLSL code.

Second:

the moment you want to have lots of objects moving on the screen, actually complex pipelines, actually good graphics and ambient lighting, different materials or whatever, I don't know, you can come back to me. The car demo simply shows a single static car with very basic lighting: no shadows, no SSAO, no other objects to be reflected (that's right, the car is only showing samples from a single texture, which is actually not complex performance-wise). This is not good enough as a stress test. Not at all. A game has to do a lot more than that; even when most games bake 90% of the stuff you see on screen, the remaining dynamic 10% already has to be leagues more complex than what you're showing us here.

There's a wasm example that shows a comparison of the same stuff between JavaScript and a wasm binary, which kinda makes JS look as retarded as it is, in a clear fashion.
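You can get a feel for the same contrast without wasm: a tiny sketch (plain JS, made-up sizes) pitting a typed array — the flat linear memory wasm works with — against a plain object used as a map. Exact timings vary wildly by engine; the point is only that both compute the same thing through very different memory layouts:

```javascript
const N = 1_000_000;

const typed = new Float64Array(N); // contiguous linear memory
const mappy = {};                  // property-lookup soup, string keys
for (let i = 0; i < N; i++) {
  typed[i] = i;
  mappy[i] = i; // the key i gets coerced to the string "i"
}

function sum(store) {
  let s = 0;
  for (let i = 0; i < N; i++) s += store[i];
  return s;
}

console.time("typed");
const a = sum(typed);
console.timeEnd("typed");

console.time("object");
const b = sum(mappy);
console.timeEnd("object");

console.log(a === b); // true: same result, different cost
```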

Imagine how unfun it would be to play a Dynasty Warriors clone made in JavaScript! And that's with very basic shading!

I know that you people want to act like the new fancy languages are the fix for everything, and that they're fast or their disadvantages are negligible or whatever. But stop acting up. The fact that you showed up with this demo kinda makes the point quite clearly. It has terrible graphics; it's a poor benchmark. Games are forced to do better than that, and they still get criticized for dropping 2 or 3 frames... when if you were doing it in your fancy high-level language of the moment, you couldn't reach three quarters of that speed either. And I'm being generous.

yes, but this is not a big achievement. It matters very little when you will have to turn down your graphics for the web browser version because the native one goes faster. Your only option is for no one to use the native option, so you don't look silly while claiming to be the fast one. I bet you wish that were the case.

>the moment you want to have lots of objects moving on the screen, actually complex pipelines, actually good graphics and ambient lighting, different materials or whatever, I don't know, you can come back to me.

Everything you have said, except the objects moving on the screen, is a GPU thing, which, as you said, is programmed in a non-retarded language. My point was that more often than not, unless you are really trying to push a machine to its limit by making a compute-intensive game (think an RTS with a shitload of units, a bullet 9th-circle-of-hell, etc.), your bottleneck is gonna be in the rendering pipeline, with your GPU struggling to churn out all that bloom, SSAO, HDR, DoF and camera blur in real time. You can choke your GPU from any language anyway. You really sound like the kind of guy who insists on benchmarking Skyrim at 800x600 low quality.

We are talking about fucking JS being capable of running simulations with a hundred thousand polygons, on a shitty outdated GPU, using WebGL, which is miles behind anything you could have natively. We are talking about Internet Explorer being kind of capable of running this, and IE DOESN'T EVEN FUCKING SUPPORT HARDWARE ACCELERATION. This is the current state of computers: capable of crunching amounts of data we can't even fucking imagine, on shitty slow platforms running on shitty slow platforms built on outdated tech, at decent speeds.

My point is, you can autistically hand-optimize all the cache hits you want, but in the grand scheme of things, that optimization isn't what's going to make your game run on toasters. There are bigger optimization fish to fry out there.

Actually, IE (11, that is) supports hardware acceleration. It just supports less than half of the WebGL standard, to a painful extent, with very few extensions. Edge is a little bit better. I know because I had to port an engine to IE11 and I almost died in the process.

People will consume whatever resources are available to them. There was a semi-recent article from the lead programmer of Naughty Dog (I think?) where he described understanding this phenomenon even back in the day. At the start of a project he would allocate an array of garbage. The team, even though they knew each byte counted and were unaware of this garbage array, would go slightly over their budget anyway. Then suddenly, when it was time to go gold and everything had been optimized bone-dry, the lead programmer would magically find some more memory that would let the game work and have it ship.

It does though. In a free market, profit is the ultimate goal; if it were planned, quality could be chosen instead. Making shitty software is a prime example of the capitalist maxim of privatizing profits while socializing costs. Thanks to shit like Electron, the software maker gets the profit of having cheap workers who face a lot of competition and therefore earn low wages, and since they can develop quicker, you can pay them less. The users are the ones who have to bear the slow performance, the bugs and the electricity costs.

What kind of shitware are you using? If you want software that runs on even the slowest toasters, it's out there and it's pretty decent. For Linux based systems, you can use DSL, Tiny Core and Alpine. For BSD you can use NetBSD which is made for embedded use and comes with glorious pkgsrc, or you can use a minimal install of OpenBSD. These run on fuckall resources and they're free and open source software. Combine any of those with a basic window manager like MWM or Fluxbox and go to town installing whatever basic apps you need.

If you're some sort of billionaire that can afford a shitty Intel netbook from 2007 with a 1GHz CPU and 1GB of RAM, you can slap Debian or nearly any other modern distro on it with a window manager, and you can even set up a light web browser like Dillo. You can even use the latest version of Firefox if you're not multitasking or streaming video.

I use pirated Winshit 10 LTSB on a 2012 Core i5 3320M ThinkPad X230 with a 250GB SSD and 12GB of RAM for CAD software and shitposting every day. It's not nearly as bad as people say, and I struggle to utilize even 1/3 of my RAM with the page file off and Firefox and OpenOffice running. I'm getting around 10+ hours of battery life with an original 6 cell battery. The whole thing cost me about $350. My next upgrade will be a 250GB mSATA SSD as a second hard disk for Fedora or Debian Linux. This is 2012 technology with some minor parts upgrades and it runs like new.

…maps such as ad_sepulcher, "foggy fogbottom" and "firetop mountain" are … fucking hell, you just need to go and play them. I'd be doing you a disservice if I wrote a spoiler, but it's something you ought to do before you die IRL (if you liked Quake at least to some extent).

You'll spend your whole life fixing problems that didn't even need to exist in the first place. And the net result at the end will be negative, since you'll have wasted your time. It's much simpler to just not use the modern shit altogether, or limit your exposure to it as much as possible.

I never had a PDP; my earliest computer was pic. Either way, both can access the Internet with the appropriate comms hardware.

But those old computers are pretty hard to get now. So I do the practical thing instead and limit my exposure to modern shit as much as possible. For starters: no JavaScript browsers, no desktop environments, and avoiding most GUI shit.

Most teachers are people who ended up teaching because they couldn't get a better job. In addition to mediocre competence, they are often bitter and insecure, and want their students to be drones who memorize and regurgitate shit as they are told to. Such teachers fear bright students who actually want to understand shit, because such students might expose their deficiencies or even surpass them in the more or less distant future. So they react allergically and try to make every student into a drone who doesn't challenge anything.