You mean the D-Wave computer? It's a common misconception that it's a quantum computer in the sense of using coherent superpositions to perform computations - it is not. It's a different kind of quantum device: it performs quantum annealing, which uses quantum effects (tunneling) to accelerate certain cases of simulated annealing. Whether the D-Wave machine can actually do anything faster than more traditional methods is still up for debate, but the only ones who ever claimed it was a quantum computer in the sense discussed here have been clueless journalists.
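For reference, here is the classical baseline the comment is talking about: plain simulated annealing escapes local minima by thermally accepting worse moves, where quantum annealing would instead rely on tunneling. A minimal sketch (the cost function and parameters are made up for illustration, not anything D-Wave-specific):

```python
import math
import random

def simulated_annealing(cost, neighbor, x0, t0=10.0, cooling=0.995, steps=5000):
    """Classical simulated annealing: accept a worse candidate with
    probability exp(-delta/T). Quantum annealing replaces this thermal
    escape mechanism with quantum tunneling through barriers."""
    x, t = x0, t0
    best, best_cost = x, cost(x)
    for _ in range(steps):
        cand = neighbor(x)
        delta = cost(cand) - cost(x)
        # Always accept improvements; accept worse moves with prob exp(-delta/T)
        if delta < 0 or random.random() < math.exp(-delta / t):
            x = cand
            if cost(x) < best_cost:
                best, best_cost = x, cost(x)
        t *= cooling  # gradually cool, making worse moves less likely
    return best, best_cost

# Toy problem: a 1-D function with many local minima.
f = lambda x: x * x + 10 * math.sin(3 * x)
x, fx = simulated_annealing(f, lambda x: x + random.uniform(-0.5, 0.5), 5.0)
```

The annealing hardware tackles the same kind of energy-minimization problem, just with a physical system instead of this Monte Carlo loop.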

Let me correct myself here - it's not even on by default. You have to actually check an "Enable Kernel Caching" checkbox to turn it on. People are spending way too much time bashing a feature that's opt-in.

The user context doesn't really matter when the code runs in kernel space, since nothing stops you from simply replacing the user context. HTTP parsing is done in kernel space precisely to maximize performance. As mentioned in TFS, you can disable it if you want to. One could argue that it shouldn't be on by default, because it doesn't buy you much if you are serving dynamic content.

> IIS kernel caching
For performance reasons, probably. It's optional, though. I have no idea about real numbers, but there is always some overhead associated with context switches, which may be reduced if the HTTP stream is assembled in chunks in kernel space and control is only switched to userspace when a chunk is ready. It may also be possible to parse the HTTP stream directly from the buffer the hardware writes received data to, without the overhead of copying the packets to userspace.
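The context-switch argument can be made concrete with a toy counting model (this is purely illustrative, not actual HTTP.sys internals): per-packet delivery wakes the application once per TCP segment, while kernel-side assembly hands over control once per complete request.

```python
def deliveries_to_userspace(segments, per_packet):
    """Toy model: count kernel-to-userspace transitions needed to hand
    an HTTP request to the application. (Hypothetical illustration of
    the batching argument, not how HTTP.sys actually works.)"""
    if per_packet:
        return len(segments)      # the app is woken for every segment
    buf = b"".join(segments)      # assembled on the "kernel" side
    # Switch to userspace only once the request headers are complete.
    return 1 if buf.endswith(b"\r\n\r\n") else 0

request = [b"GET / HTTP/1.1\r\n", b"Host: example.com\r\n", b"\r\n"]
naive = deliveries_to_userspace(request, per_packet=True)     # 3 switches
batched = deliveries_to_userspace(request, per_packet=False)  # 1 switch
```

Three segments mean three wakeups in the naive model but a single one with batching; with real context switches costing on the order of microseconds each, that difference adds up under load.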

While they can write anything in the site's TOS, it may not be legally enforceable depending on where you live. For example, I'm not in any way confirming that I have read the terms when I'm posting this comment, which means they are probably not legally binding in the EU. Even if I had to confirm that I agree with the terms, they may still not be enforceable, as the EU has some quite strict laws about unfair contract terms.

I don't think you know what "monolithic" means. No one said anything about everything being in the same binary. systemd consists of several components that have been designed to work only with each other. There is no modularity in the sense that there are no modules you can replace or decide whether or not to use.

An anonymous reader writes: Like many Slashdotters, I intend to stop visiting Slashdot after the beta changeover. After years of steady decline in the quality of discussions here, the beta will be the last straw. What alternative sites to Slashdot have others found? The best I have found is arstechnica.com, but it has been a while since I've looked for tech discussion sites.

mugnyte writes: With Slashdot's recent restyled "BETA" slowly rolled out to most users, there's been a lot of griping about the changes. This is nothing new, as past style changes have had similar effects. However, this time there are significant usability changes: a narrower read pane, limited moderation filtering, and several color/size/font adjustments. BETA implies not yet complete, so, taking that cue, please list your specific, detailed opinions, one per comment, and let's use the best part of Slashdot (the moderation system) to raise attention to them. Change can be jarring, but let's focus on the true usability differences with the new style.

Taco Cowboy writes: Before I registered my account with /., I frequented it for almost 3 weeks. If I had registered the first time I visited /., my account number would be in the triple digits.

That said, I want to ask Dice why they are so eager to kill off Slashdot.

Is there a secret buyer somewhere waiting to grab this domain, Dice? Just tell us. There are those amongst us who can afford to pay for the domain. What we want is to have a Slashdot that we know, that we can use, and on which we can continue to share information with each other.

Flickering and architectural problems. The first is purely cosmetic, but it is impossible to fix without making changes to the core protocol. The second means that adding new functionality requires an order of magnitude more work than it would with a more modern design.