Nim is just another statically-typed GC language with an unsafe
escape hatch. I can get the same thing (and much better syntax) with
Java and JNI or C# and P/Invoke. Yawn.

Rust, on the other
hand, is something genuinely new: it provides complete memory
safety without requiring a garbage collector at all. It's sad to
see people switch from Rust to Nim: they're often too inexperienced
to know what they're giving up, and I feel like they're seeking
(syntactic) novelty, not a programming environment that's actually
useful.

Microsoft has an incredible amount of deadweight, mostly as a product of the exceptionally brain-damaged review system, which encourages managers to keep untalented people on their teams so they can reserve the good-review-score quota for productive people. Reforming the review system and eliminating this deadweight would be wonderful.

How do you distinguish between a user and an application he's running? How do you do it over a network?

Any piece of information an application (here, a client-side program) can access, a user can access too. If we can't distinguish between users and applications, we're forced to rely on the user as the unit of trust.

The situation is different for a "web application" that can store information inaccessible to users. But for local applications, a secret key is pointless.

Do we really have to re-learn the same lessons every 5-10 years? Trust users, not programs; don't trust the client; security through obscurity is no security at all: these are fundamental concepts, but we keep forgetting them.

What exactly is the point of the API key? Anything an application can do, a user with access to that application can do. Spammers can extract a key from an application and pretend to be that application. You have to stop spam at the user level.

These attacks by LulzSec, Anonymous, et al. remind me of the old Twilight Zone episode "It's a Good Life". In that episode, a child with godlike mental powers causes untold misery when, without understanding, he compels the residents of a small Ohio town to conform to his whim. Likewise, these hacktivist groups wield previously-unknown power, and they use it to capriciously destroy whatever offends their ego, whimsy, or underdeveloped sense of justice. In the process, they not only hurt innocent bystanders and undermine the legitimacy of their cause, but actually encourage more stringent regulation of the Internet. Like a character from a Sophocles play, they hasten the very outcome they would fight.

They are legion. They do not forgive. They do not forget. They do not plan. They do not show restraint. They do not choose their battles. They do not help.

Look: you can always give up efficiency to gain predictability. That's how real-time operating systems work. If you need hard bounds on access time, you can turn off the pagefile (or lock your application into memory). But the price is much less efficient use of RAM --- when you disable paging, the OS is forced to keep useless junk in memory, making less available for useful things.

In the real world, we don't need hard realtime guarantees in the vast majority of situations. In the real world, paging is the right thing to do because it's a huge efficiency win, and because the OS makes the right page replacement decisions most of the time.

But sure, if you're writing robot control software and people will die if the velocity control routine needs to be paged in, fine. Turn off paging. Or better yet, use a realtime OS like QNX. But for the rest of us, letting the OS manage access patterns is the right thing to do because the OS knows more than your application possibly can.

handle some asynchronous task because things are taking too long takes up 1 or more megabytes of RAM

You're still working with a naive mental model of how memory works. Thread stacks don't "take up" memory when they're created. Memory is not real estate. A thread stack consumes no physical memory until it's actually touched.

Yes, the kernel will set aside pagefile space to make sure it can satisfy requests for memory (unless you're on Linux with overcommit enabled) --- but that's not the same as actually keeping all that memory resident.

And that's why it's a privileged operation. Yes, there are exceptions to the general theme, but real-world systems are always more complex than one would suppose from a distance. The presence of mlock doesn't change my overall point though: the operating system decides what gets to stay resident.

A single program that hogs memory

You don't get it. The operating system arbitrates between applications and decides whose memory is actually in physical RAM. It makes these decisions based on access-pattern information unavailable to normal applications. Yes, all things being equal, accessing less memory is better. But imagining applications as being "selfish" and as "hogging" memory means applying a very naive mental model to a very complex real-world system. In general, that doesn't go well.

let the OS know that instead of paging this memory to disk when low on memory, it should instead just free it and let the application know it has done so

madvise with MADV_DONTNEED, or equivalently VirtualAlloc with MEM_RESET under Windows. Discardable memory isn't as useful as you think, though.

the OS really doesn't have any intelligent insight into the usefulness of a particular allocation to an application

But applications can tell the OS what pages are important. On Unix, applications can use posix_fadvise and madvise. On NT, each page has a priority attached to it, and pages with lower priority are evicted before those with higher priority.

Of course it's better to touch fewer bytes and to keep the bytes you do touch as close together as possible. Virtual memory doesn't magically make these things happen for you. What it does do is make decisions about what to keep in RAM based on access patterns for the whole system, something no individual program can do on its own. In other words, it's exactly what the OP asked for!