Posted by timothy on Sunday July 14, 2013 @08:05PM
from the down-in-the-weeds dept.

First time accepted submitter faffod writes "Coming from a background of console development, where memory management is a daily concern, I found it interesting that there was any doubt that memory management on a constrained system, like a mobile device, would be a concern. Drew Crawford took the time to document his thoughts, and though there is room for some bikeshedding, overall it is spot on. Plus it taught me what bikeshedding means."

Since this was literally the same link, not just the same story from a different source, it seems like there could be a pretty simple automated dupe-checker that flags it and asks the editors: a story with the same link was posted in the past N weeks, are you sure you want to post?

I'm surprised I haven't seen anybody publish a script that sees if something similar has been posted in the recent past on slashdot based on key words or some analytics voodoo. Would make for an interesting project, and fitting too!

A Slashdot story isn't a single link. It's a block of text that may contain 0 or more links. It's not like reddit or Fark where you're posting a link to a single article, and that's about it. They could write code to scan all summaries for all links and log each of them, but it's not as simple as it would be with something like Fark.

Coincidentally, most IDEs for JavaScript have little to no spelling assistance.

Spell-check in IDEs generally relies on static analysis of the variables in scope at any given point in the program. The more dynamic a language's type system is, the harder it is to statically find the names of the symbols in scope at a given point. PHP and Python are kinda-sorta OK here because global variables are out-of-scope (PHP) or read-only (Python) unless declared otherwise inside a function's definition (or, in PHP, unless the variable is one of the predefined superglobals, whose names are all uppercase and start with $_). This way, the IDE can parse a function for all variables assigned to and assume they're local. JavaScript, on the other hand, makes a variable global by default unless it is declared local with the var keyword.

Spell-check also relies on static knowledge of what source code files are in scope. This is dead easy for Java. In PHP you scan for require_once, and in Python you scan for import, but even then, a module is occasionally conditionally imported, and importing has side effects. JavaScript can't include JavaScript at all except by appending a <script> element to the HTML DOM with the src= attribute referring to the other script, and the idiom for that is harder to recognize than a simple import statement.
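The script-appending idiom referred to above looks roughly like this (the URL and callback in the usage note are placeholders, not from any real page):

```javascript
// Dynamically load another script by appending a <script> element.
// An IDE would have to recognize this whole pattern (and resolve the
// URL at analysis time) to learn which symbols the file brings into
// scope -- much harder than spotting an import statement.
function loadScript(url, onLoad) {
  var el = document.createElement("script");
  el.src = url;          // the browser fetches and evaluates the file
  el.onload = onLoad;    // fires once the script has executed
  document.head.appendChild(el);
}

// Hypothetical usage:
// loadScript("util.js", function () {
//   // globals defined by util.js are only now available
// });
```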

I'm probably going to get downvoted as a troll, but my experiences with most console developers (as a developer myself) have often been strange.
Talks usually end with most of them dismissing scripting languages, higher-level APIs (such as OpenGL), or certain algorithms as useless because they are slow, use too many instructions unnecessarily, waste cache, etc.

Any attempt to raise a point about how you don't need to optimize everything, only the few critical zones of your code (what matters), or that a cache-wasting algorithm can end up being faster anyway because it's algorithmically more efficient, immediately results in me being dismissed or treated as ignorant, because something inefficient is obviously inefficient and I must be stupid for not realizing that.

This article reminds me of that. The author announces up front that he is determined to prove that something is less useful because it's slower, and nowhere in that huge piece of text is anything useful offered as proof; instead he keeps posting data about how slow JavaScript really is.

Any attempt to raise a point about how you don't need to optimize everything, only the few critical zones of your code (what matters)... immediately results in me being dismissed or treated as ignorant

To be fair, if you were debating with someone who writes applications that really do need the very top levels of performance, and you claimed that optimising trouble-spots would be sufficient to achieve that, then you were ignorant. For most software, being within a factor of 2 or 3 of hand-optimised low-level code is easily sufficient, and a bit of work with the profiler to identify the most serious flaws will go a long way. The rules change when you shift from that kind of performance standard to needing

Whine me a river. I wish I would have found a better reference for that concept, but I don't have all day. Since you already wasted 17 seconds on it, feel free to double down and google up a better reference and post it here.

They aren't useless, but they are certainly not always the best (or even simply an appropriate) tool to use. Scripting is great for the trivial-but-useful, particularly a one-off or a mock-up. But any sort of serious computer program is going to be better if you write it natively, and most anything that is trivial-but-useful enough to be a JavaScript app should be quickly reproducible on any platform you want in a native fashion. Do you need a native program to have a simple calculator? No, JavaScript will do. An

Whee, the UI of a game is scripted. That's not exactly a critical application, and it's not like they tried to write the whole thing in ECMAScript. And it's not like the gamer market hasn't been well trained to accept, and even expect, constantly buying more powerful hardware in order to 'consume' a never-ending string of bugfixes from game launch date to EOL. Anyone who sets things up so that his game UI crashing or malfunctioning in strange or unforeseen ways is a big problem is an idiot.

WoW is a serious project, no denying it. But if it doesn't perform sufficiently, the user is expected to purchase better hardware. On a console or mobile, that is not an option; the hardware is fixed. If two developers make the exact same game, and one uses a native language with few libraries, and the other uses an interpreted language with highly abstracted libraries, the experience provided to the end user will not be equal. You can profile the interpreted code to death and still never achieve the fidelity

Debuggers are useful. But if the developer does not know what to look for, and where, inside the data the debugger provides, it will not help anything. (And oh yes, the debugger does not replace the need for the developer to actually know what they are doing.)

That is THE fundamental problem with junior / immature programmers. They don't know how to THINK about the problem: its boundary and edge cases, the run-time memory usage, knowing if they are CPU bound, knowing if they are I/O bound, etc.

Scripted languages have the enormous advantage of being much easier to bugfix than native apps.

PFFFF Hahahahaha! Damn you, my desk is now a coffee mess! :-) Have you ever tried to debug something made in JavaScript? With breakpoints, value inspection, and so on? (If there is now a magical tool that can do that, let me know, because I need it urgently.)

The problem is that your baby is not the only thing running on the system. When you waste resources, you do it on behalf of everything else that runs too. Even if your baby isn't doing anything critical when you waste it.

It only takes one selfish programmer to screw up an embedded system. You are he.

It only takes one selfish programmer to screw up an embedded system. You are he.

Even though it's unrelated to my original post, you are saying that not going native is worse because it uses more CPU cycles/battery?
Explain to me why, for decades, the industry used J2ME, Java (Android) and now ObjC (Apple). I guess the entire mobile industry is selfish and greedy?
You probably didn't understand GP, though; the message is that you don't need to optimize something that doesn't consume enough cycles to be a performance problem.

Even though it's unrelated to my original post, you are saying that not going native is worse because it uses more CPU cycles/battery? Explain to me why, for decades, the industry used J2ME, Java (Android) and now ObjC (Apple). I guess the entire mobile industry is selfish and greedy?

Of course they are. But that's beside the point. Development is a trade-off: you have to work with the market you have, within deadlines that mean you'll sell, and with developers you can find and afford. So yes, you make do with what makes the task feasible. But you don't have to make it any worse than necessary by allowing bloat and doing things inefficiently. Adopting the mindset that you work in a shared embedded environment, and doing things frugally, doesn't incur a great cost.

You probably didn't understand GP, though; the message is that you don't need to optimize something that doesn't consume enough cycles to be a performance problem.

To follow up on my own post, what we see in environments like the Android world is a tragedy of the commons. If everybody played nice, everybody would benefit. But there's no penalty to yourself for being greedy, so you are. And so are all others.

I understand your point, but I believe it's a little too extreme.
In the real world, it is always possible to write more efficient code, but the more you optimize, the more difficult the code becomes to develop, maintain or port, exponentially so.
So in the end, it's always a trade off between performance and cost of development, added to the fact that not all code needs to be optimized, only the little portions that perform the most critical tasks.

added to the fact that not all code needs to be optimized, only the little portions that perform the most critical tasks.

That this is false is my point: it's only true if your app is the only app on the system. On a shared embedded system, the portions that don't do critical tasks are just as important to optimize, for the sake of the rest of the system. Because there's no penalty to your own app, it becomes a tragedy of the commons [wikipedia.org].

So, what's the difference then, that your phone battery will last 18 hours instead of 20 because you didn't optimize more than the critical tasks?
It seems much cheaper to solve this by adding a little more battery capacity, yet keep your phone OS and applications easier and cheaper to develop.
No matter how you look at it, I can't see the scenario you describe as being a tragedy.

You didn't follow the link, did you? It's a situation that's called "a tragedy of the commons", which doesn't mean it's a tragedy.

And anyhow, it's not about battery life, but applications using more than their fair share of memory, IO or other resources contribute to starvation for other apps that run at the same time, possibly causing crashes in other apps when they cannot allocate memory (because they're well behaved and allocate when needed and free when done), cannot update alarms in time, can't take a

Because Sun pushed it on everyone. It sucked big time, though. Did you ever write a J2ME app? It was the kind of platform where everything was an object except for primitives, but memory management was so messed up because of the combination of GC and extremely small heap, that pretty much any serious app didn't use any objects. Instead, you preallocated arrays of primitives, and used that for everything.

Java (Android)

You mean, the only mobile platform that still has horrible UI latency?

It's impossible to personally attack an Anonymous Coward. But I'm glad you recognize what you are doing.

As for "What wasting", I asked you(?) to re-read the guy's second paragraph, but this was apparently too hard. So let me quote it:

Any attempt to raise a point about how you don't need to optimize everything, only the few critical zones of your code (what matters), or that a cache-wasting algorithm can end up being faster anyway because it's algorithmically more efficient, immediately results in me being dismissed or treated as ignorant, because something inefficient is obviously inefficient and I must be stupid for not realizing that.

His point was that there is no point in optimizing code that barely takes any resources to begin with.

Actually his point is usually applied to the concept of a "hot path": regardless of how many resources a piece of code "takes", what matters is how often that piece of code is run. The problem with this is that programs and their usage patterns change and evolve over time. What was not a "hot path" yesterday can become one due to changes outside the program itself -- e.g. suddenly all the packets it receives have an option set that was only set on 1% of packets when the code was initially profiled.

I happen to have an average of about 200 tabs open on most of my daily use machines. This tends to eat most of the resources that my machine has, regardless of how modern the machine is. If it's an old clunker, it chokes on less and I generally don't make it that far before I have to kill the browser and restart it. It seems impossible for any OS vendor and any "full featured" web browser to just deal with the limitations of the system and keep the application snappy and usable.

> Talks usually end up in most of them dismissing scripting languages, higher level APIs (such as OpenGL),

A few years back I implemented OpenGL on the Wii and did maintenance work on our OpenGL version running on the PS2. Hell, we even shipped a couple of games with it. OpenGL 1.x _can_ be implemented efficiently on a console if you apply some discipline. People who dismiss a rendering pipeline probably have never implemented one. HOWEVER, their point is that memory management CAN be an issue if one isn't careful.

I learned C++ then VB.NET then C# then JavaScript. I was shocked at how "anything goes" it was and I assumed it was a memory nightmare. Considering my i5-2400 sometimes maxes out on pages with complicated javascript, I'm not surprised. I heard that JS can take 10x more memory than it needs at any given time realistically.
The one thing I can't wrap my head around is: if it was made a "real" language, would it be a gigantic security disaster? Or could it be limited enough not to turn into Flash, Java, etc.?

I started programming at a time when GOTO was still considered "kosher" (in C, no less) as a lot of algorithms were designed as state machines. To this day I still sometimes consider a goto, but even I stand in awe of the complete retardedness that is javascript.

What happened to languages that pick some things and then do them well? JavaScript seems to have evolved to try to do everything, and yet it doesn't do anything even close to well.

Good question. My educated guess is that the first problem would be security; after all, you would be running a complete application simply by accessing the page. The second problem is that, as everyone has a favorite language, deciding what would be the "lingua franca" of the Web would give us a Digital World War.

I think they're of the devil, but for some reason a lot of baseball stat heads still use them instead of a video format when they want to post a few seconds of a game for illustrative purposes. It's weird because these are generally young guys, not the old farts who you'd expect not to have changed their workflow since 1995...

JavaScript on the iPad? That doesn't seem slow - certainly not enough to where it registers anyway.

for some reason a lot of baseball stat heads still use [GIF animations] instead of a video format

Some browsers can view only H.264 and animated GIF. Other browsers can view only Theora, WebM, and animated GIF. Some, such as the latest version of Internet Explorer that runs on Windows XP, can't view anything but animated GIF without plug-ins that may or may not be installed and that the current user may or may not have privileges to install. If the only video format supported by all browsers is animated GIF, what should a site use to reach the most viewers?

True, animated GIF is the most widely supported "movie" format if you look at all target platforms. However, there is currently no technology implemented in browsers that will take an animated GIF, re-render it into something that can be accelerated by the video card, and use that for output. This results in the browser pumping all the frames to the video card unaccelerated. Devices with limited resources (read: all devices that have more than a few tabs open, or mobile devices) will hit limitations of the hardware

If the user's browser has more than one tab open, it won't send frames in inactive tabs to the screen. If your site has a large audience on mobile devices, and you're a big enough company to license footage from MLB, you can probably afford to create a native app for iOS, a native app for Android, and a native app for Windows Phone.

I made a comment [slashdot.org] to a poster over on the original posting of this. I think it's worth expanding upon in case people are persuaded by the arguments in the paper.

First off, just as TFA predicts, I'm not going to try to conquer his mountain of facts and experts by presenting a mountain of citations. Instead, I'm going to point out where his conclusions are not supported by his facts and point out his straw man arguments and his attempt to convince us through overwhelming expert opinion.

The straw man: In the article, he presents two scenarios (photo editing and video streaming) and claims that you can't reasonably do those because of memory limitations (on the iPhone/iPad). He then concludes you can't produce useful apps because you can't do those two. I couldn't find any citations of people attempting to do this on mobile using JavaScript. Choose the right tool for the job here. I'll give him these two use cases (and several others: 3D Games, audio processing, etc), however to extrapolate from here that no useful apps can be produced (ever!) using JavaScript is a leap too far.

Next, he spends a lot of time diving into the particulars of garbage collection (GC). I'm going to grant him practically every point he made about GCs. They're true. And, it's true that mobile is a constrained environment and you must pay attention to this. But, this is largely known by developers who are trying to write high-performance JavaScript applications on mobile. Hell, -anyone- writing high-performance apps in any language need to be aware of this. If you allocate memory during your animation routines in a game you're asking for trouble, regardless of the language. So, to me, this part is just a call to pay attention to your memory usage in your apps. This is really useful advice and I will be paying even more attention to the new memory tools available in the latest Google Chrome dev tools.

One of the biggest problems in the rant is the comparison of pure computing performance and his claim that ARM will never be as fast as desktop. I'm going to again grant that this is true. However, this means crap-all for most apps. Tell me: how many apps do you have on your phone that are processor bound? None? One? Two? The vast majority of apps spend their time either waiting on the user or, possibly, waiting on the network. You can write a lot of really useful apps even given constrained processor and memory. Anyone remember the Palm Pre? The TouchPad? Most of those apps were JavaScript and they worked just fine.

This brings me to the point of all this, TFA's author focuses on performance. However, users focus on responsiveness. JavaScript is perfectly capable of producing responsive applications. Sometimes, it takes attention to detail. Nothing is ever 100% free and easy. JavaScript is not a magic solution and those of us who think that JavaScript has a future in mobile app development know this. This is why programmers get the big bucks. Writing mobile apps, you need to be aware of the effects of CSS, memory, processor, responsiveness and more.

One of the biggest problems in the rant is the comparison of pure computing performance and his claim that ARM will never be as fast as desktop. I'm going to again grant that this is true. However, this means crap-all for most apps. Tell me: how many apps do you have on your phone that are processor bound? None? One? Two?

The other issue you're not addressing is runtime memory requirements. From the GC performance chart, the best performing GC that provides near-native performance does so by requiring 5x more memory. This is going to impact the number of concurrently running apps on your phone/tablet before things start slowing down. Users prize snappy interfaces, and if their mobile device slows down then the knowledgeable users will bring up a task manager to figure out what's going on.

I think you're overstating the case of performance versus responsiveness. He does specifically point out that GC is negatively impacting UI response times and that that is not going to work.

I agree with you that people who work in the problem space know about memory management by experience. However, that JavaScript makes it so difficult to manually manage memory seems to be his real point.

Lastly, if we could solve the network speed problem, you would just outsource real CPU/memory apps to the server and si

If you allocate memory during your animation routines in a game you're asking for trouble, regardless of the language.

That's not true at all. If you allocate memory during animation in a language with deterministic memory management, you have a pretty good understanding of what it'll cost and whether you can afford it (and in many cases, the answer is yes).

Note that animations are not specific to games. One common case where you allocate memory during an animation is when the user is scrolling a list that is backed by a dynamic data store (i.e. items are generated "on the fly").
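As a rough sketch of the allocation-free style being contrasted here (all names are illustrative, not from the article or the parent): the trick is to write into a scratch object that lives across frames instead of returning a fresh object on every call.

```javascript
// Allocating version: returns a new point each call, so at 60 fps the
// GC sees 60 short-lived objects per second per animated property.
function lerpAlloc(a, b, t) {
  return { x: a.x + (b.x - a.x) * t, y: a.y + (b.y - a.y) * t };
}

// Allocation-free version: writes into a caller-owned scratch object.
function lerpInto(out, a, b, t) {
  out.x = a.x + (b.x - a.x) * t;
  out.y = a.y + (b.y - a.y) * t;
  return out;
}

// Everything below is allocated once, outside the hot loop:
var start = { x: 0, y: 0 };
var end = { x: 100, y: 50 };
var scratch = { x: 0, y: 0 };

function frame(t) {
  // recomputes the position each frame without touching the allocator
  return lerpInto(scratch, start, end, t);
}
```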

Actually a pretty well written piece, if a bit wordy. I see a lot of people commenting here are perhaps missing the point, thinking that the author's angle was JS=BAD. Not at all. My take was his issue was not so much with JavaScript, but with Garbage Collected languages in general.

An important point he made concerned GC routines and how they tend to be unpredictable in terms of when and how long they run. He also discussed at length his observation that, if you have several times more memory available than your app needs, the GC routines are very non-intrusive. However, when you get into a low-memory situation, the performance hit from GC is huge and causes obvious stutters in the application and/or its UI.

Also, some discussion on the irony of working around (or trying to "spoof") the GC by using various manual techniques, and how that almost amounts to manual memory management. All in all, a really interesting read.
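One such manual technique is a free-list pool: recycling dead objects yourself so the collector never sees them die, which really is manual memory management in all but name. A minimal sketch (this Pool API is invented for illustration, not from the article):

```javascript
// A free-list object pool. acquire() hands out a recycled object when
// one is available, otherwise creates a fresh one; release() returns
// an object to the free list instead of letting the GC reclaim it.
function Pool(create) {
  this.free = [];
  this.create = create;
}
Pool.prototype.acquire = function () {
  return this.free.length ? this.free.pop() : this.create();
};
Pool.prototype.release = function (obj) {
  this.free.push(obj);
};

// Hypothetical usage with particle objects:
var particles = new Pool(function () { return { x: 0, y: 0, alive: false }; });
var p = particles.acquire();   // pool empty, so a fresh object
particles.release(p);          // back on the free list
var q = particles.acquire();   // same object, recycled
```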

Opera browser (versions up to 12) was shown on one graph (iphone 4S) in the article and it kicked some serious a$$ but never mentioned again.

One reply to the article was about Opera being the only browser to run Google Wave, kind of... "The only browser that ran it with anything resembling “speed” for its first year or so was Opera, and Opera never really worked very well with it anyway."

I enjoy security through obscurity but Opera is just too good a browser to ignore.

Exactly. Because it's an appliance. And you're either willing to take it apart and learn how it works so you can make it do what's needed, or you accept that you're an appliance operator and stop bitching about it.

It looks like you want to get rid of all JavaScript in web pages. What's a better way to present interactive forms over the Internet that doesn't involve reloading an entire 100 kB page whenever the tiniest bit changes and doesn't involve paying someone to make six different native applications, one for each operating system?

"It looks like you want to get rid of all JavaScript in web pages. What's a better way to present interactive forms over the Internet that doesn't involve reloading an entire 100 kB page whenever the tiniest bit changes and doesn't involve paying someone to make six different native applications, one for each operating system?"

Don't force a single 100 kB monster to begin with, doh. Break the monster down into byte-sized chunks and this suddenly doesn't look so impossible to do in straight HTML, now does it? You can even keep your 100 kB script as well if you want, but you must at the very least put a link to the straight version in the noscript tags. (Personally I urge you to give me an option to use the straight version even in a scriptless browser; otherwise you will probably force me to disable all scripts on your site, but that's not a formal requirement like the noscript tag.)

From day one, that was the way you were supposed to do it when you added scripts to your web pages. It's not that I want to remove all scripts from the web; I want to remove this idiotic assumption that it's OK to skip the webpage, hand out a script instead, and pretend all is well. It isn't. JavaScript is fine for making a fancier version of a webpage (but only as long as you don't use it as an excuse to skip the simple version). But scriptless browsers have been an integral part of the web as long as it has existed, and they aren't going away. If you don't support them you aren't supporting the web, and you are missing the point of the web.

With the current threats and trends in malware, you're likely to see only more and more scriptless browsers. Browsers that support scripts just fine are being told not to support YOUR scripts - at least not until you are trusted. Making a good first impression more and more means making a good first impression WITHOUT grabbing your ecmascript crutches, without just ASSUMING that the visitor is immediately comfortable enough with you to be touched in that way.

Even if you can't figure out how to write a webpage, or hire someone who can, you should not need to pay for 6 different native apps, unless your app has a really niche market at least. Just get it written once in a high-level language and release it GPL, so that anyone interested in porting it to a new platform can. You'll likely have ports contributed back faster than you can pick out the right guy internally to receive them. (This part assumes your app does something that a computer-literate person might find useful, of course; it strikes me that that is a blind assumption, though.)

Don't tell me you're afraid to release your precious source; you're already doing that every time your server sends out 100 kB of ECMAScript.

Sorry. Things change. It won't be long before browsers are just wrappers around the JSVM and all web addresses are just sandboxed applications running in it. The web is the biggest App Store there is.

You want raw data in a structured format? There are REST APIs for that, they return JSON. HTML is too verbose by far as the default response and is useless for any other client besides an HTML renderer.

Text content is moving to Markdown as a standard as well. It's quicker and easier and covers text formatting. Clients should just render it directly.

I could go on but it's probably lost on you. Suffice to say that modern server app development is evolving to be service-driven and client-agnostic. HTML is one of many targets. Why write server apps or content engines (CMS/blog/forum) for a single client when you can instead create a server-based API and several thin client apps (JS, Dart, iOS, Android, C++, .NET, Lua, etc.)?

So your big complaint is ads? You do know they are inevitable regardless of the tech employed. If HTML had never become the standard and everything was written in C++ you would still get ads. Look at iOS and Android apps. Look at the latest Ubuntu search (and Windows now too). There are ad supported (and malware linking) apps all over. It's even worse as they have less sandboxing and can cause more problems.

He said (and I agree) that the problem is NOT ads. But if you choose to make your ads annoying javascript contraptions instead of readable text, THEN we will block your ads. And not feel bad at all about it. If you want us to see your ads, you know what to do.

You need to read a little better. Although I congratulate you on posting good links and formatting your message readably, you sadly missed the point. Those are not text ads, and text ads aren't how drive-bys work. The 'annoying javascript contraptions' I was talking about: THAT is how the drive-bys work.

I said the problem is not ads and I will repeat it. The problem is not ads. The problem is the way that ads are typically being delivered, and it's not exclusive to the ads. Other things delivered in the same manner are subject to the same problems, and it's perfectly possible to do advertising in other ways.

Say a Slashdot story has 100 comments, and each comment is 1 kB. How should these comments be sent to the browser?

Break the monster down into byte-sized chunks

Without JavaScript, you can't use AJAX, and without AJAX, you can't retrieve comments incrementally as the user requests them and add them to the page. Or are you talking about displaying one comment on each page the way certain mailing list archives do?

Yes, <iframe> was proposed as an alternative to AJAX [slashdot.org]. But years ago, hundreds of iframes on a page would cause certain browsers to crash [mozilla.org]. And even in 2013, the test case from that bug still causes Chrome to show an unresponsive tab alert on my laptop. Besides, putting each comment in its own iframe doesn't support batching. Unlike JavaScript, which can request multiple comments as a single JSON or XML object, each iframe needs its own HTTP request.
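For concreteness, the batched-JSON approach mentioned above might look like this minimal sketch. The `/comments.json` endpoint, the `ids` query parameter, and the response shape are invented for illustration, not any real Slashdot API.

```javascript
// Build one URL that requests several comments in a single round trip.
function buildBatchUrl(base, ids) {
  return base + "?ids=" + ids.join(",");
}

// Fetch a batch of comments as one JSON array and append each to the
// page -- no full-page reload, and one HTTP request for N comments.
function loadComments(ids, done) {
  var xhr = new XMLHttpRequest();
  xhr.open("GET", buildBatchUrl("/comments.json", ids));
  xhr.onload = function () {
    // assumed response shape: [{ id: 1, html: "..." }, ...]
    var comments = JSON.parse(xhr.responseText);
    comments.forEach(function (c) {
      var div = document.createElement("div");
      div.innerHTML = c.html;
      document.getElementById("comments").appendChild(div);
    });
    if (done) done(comments);
  };
  xhr.send();
}
```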

It's not the language that's the problem but the monolithic page in which you put all the content. Just cut it up into the static bits and the small dynamic bit. Any other language, whether native, bit code or machine code will require reloading if you put all your content in a single file and you change the file. While you're at it, you might as well put different parts of your page in different files, so you can re-use things like a menu bar, styles, headers and footers for other pages.

In my extensive experience, reloading the page every time is actually still FASTER and makes for a more responsive experience.

I agree with you that some sites using AJAX are poorly designed. I disagree that sites using AJAX must necessarily be slower than reloading the whole page for every little piddly action. Say you've landed on an article with 100 comments in the discussion section. Every time you expand or collapse a thread, the article and all 100 comments reload. Every time you begin, preview, or submit a comment, the article and all 100 comments reload. I don't see how that'd be so convenient for people on slow and/or metered

The problem with that protocol is that it, to state things simply, sucks. Which is not surprising, given that it is an ad-hoc hodge-podge of technologies, most of which were not particularly good in the first place, and which were also used for purposes their design did not originally anticipate.

It's basically one of the worst examples of design-by-mob: take random things that people know, put them together, and add a lot of kludges so that the whole thing works. Yes, it does work, but that's about the o