While the interconnects are certainly a very important part of a supercomputer, they aren't the hardest part. Building a high performance CPU takes a shit ton of research and infrastructure. The barrier to entry is exceedingly high and takes a long time to spin up. You can see that with China's Loongson processor, which for all the hype ended up being a license of a MIPS core, built on an old process technology. Building a top end CPU is just tough stuff.

Of course there's also the fact that there are plenty of interconnect makers that are not Chinese. The big names in high speed interconnects are Cray (US), IBM (US), and InfiniBand (which is made by many companies, like Intel and Mellanox). It's not like China has the high speed interconnect market cornered.

Finally there's the silliness of focusing on #1. Yes, they have the #1 computer on the LINPACK benchmark (which is not a good representation of performance across all workloads). However the US has #2, 3, 5, 8, and 10. In other words, half of the top 10. The idea that only the top spot matters is very, very silly.

Global warming is a complex issue, with many factors and no easy answer. That complexity makes it easy for someone to just not believe it is true, because it is too much for any one person to handle. It is more complex than switching to solar panels and electric cars and stopping cows from having gas. Fixing these issues requires changing culture, which is hard, and a lot of people will resist those changes; they will hire a lot of people to get their point across and convince others.

We have a lot of science, and we need more... However I think one thing that's needed isn't finding a silver bullet; it's countering the destructive marketing with counter-marketing. Many of the colleges and universities doing a lot of the science on the topic also have business schools and programs. Get a handful of those MBA and public relations majors onto your grant, to help spread the information and change the culture.

I have seen major cultural changes happen due to effective marketing. From 2004 to 2015 we went from talk of a constitutional amendment to ban gay marriage, to it being legal in all states. The rise of smart phones and mobile connectivity...

Marketing isn't always bad or about trying to sell you products; it is also used to explain ideas. There are actually a lot of MBA students who are not money-grubbing capitalists, but are about trying to make the world better. (An MBA with a not-for-profit concentration is a popular track.) These science grants should also be allocated to students who are trained to sell the ideas to the general population.

Showing a graph has no impact on those who don't know how to read graphs.

When faced with a tricky question, one thing you have to ask yourself is 'Does this question actually make any sense?' For example you could ask "Can anything get colder than absolute zero?" and the simplistic answer is "no"; but it might be better to say the question itself makes no sense, like asking "What is north of the North Pole?"

I think when we're talking about "superintelligence" it's a linguistic construct that sounds to us like it makes sense, but I don't think we have any precise idea of what we're talking about. What *exactly* do we mean when we say "superintelligent computer" -- if computers today are not already there? After all, they already work on bigger problems than we can. But as Geist notes there are diminishing returns on many problems which are inherently intractable; so there is no physical possibility of "God-like intelligence" as a result of simply making computers merely bigger and faster. In any case it's hard to conjure an existential threat out of computers that can, say, determine that two very large regular expressions match exactly the same input.
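That regex example is actually checkable in practice: regular expression equivalence is decidable (compile both to DFAs and compare), though even a bounded brute-force check illustrates the idea. A minimal sketch in Python, assuming plain `re` patterns over a tiny alphabet; the function name and bounds here are my own, just for illustration, and this is an approximation up to a length limit rather than a proof:

```python
import re
from itertools import product

def bounded_equiv(pattern_a, pattern_b, alphabet="ab", max_len=6):
    """Check whether two regexes agree on every string over `alphabet`
    up to length `max_len` (a bounded approximation, not a full proof)."""
    ra, rb = re.compile(pattern_a), re.compile(pattern_b)
    for n in range(max_len + 1):
        for chars in product(alphabet, repeat=n):
            s = "".join(chars)
            if bool(ra.fullmatch(s)) != bool(rb.fullmatch(s)):
                return False  # found a string the two patterns disagree on
    return True

# (a|b)* and (b|a)* accept exactly the same strings
print(bounded_equiv(r"(a|b)*", r"(b|a)*"))  # True
# a* and (aa)* disagree on the string "a"
print(bounded_equiv(r"a*", r"(aa)*"))       # False
```

The point being that this is an entirely mechanical task a computer does better the bigger and faster it gets, yet it's hard to spin it into anything resembling an existential threat.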

Someone who has an IQ of 150 is not 1.5 times as smart as an average person with an IQ of 100. General intelligence doesn't work that way. In fact I think IQ is a pretty unreliable way to rank people by "smartness" when you're well away from the mean -- say over 160 (i.e. four standard deviations) or so. Yes you can rank people in that range by *score*, but that ranking is meaningless. And without a meaningful way to rank two set members by some property, it makes no sense to talk about "increasing" that property.
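To put numbers on how thin the air is out at four standard deviations: assuming the usual IQ scale (mean 100, SD 15) and a normal distribution, here's a quick sketch of the population fractions (the helper function is my own illustration, not a standard API):

```python
from math import erfc, sqrt

def fraction_above(iq, mean=100.0, sd=15.0):
    """Fraction of a normally distributed population scoring above `iq`."""
    z = (iq - mean) / sd              # standard deviations above the mean
    return 0.5 * erfc(z / sqrt(2))    # upper tail of the normal distribution

for iq in (100, 130, 160):
    print(f"IQ {iq}: roughly 1 in {round(1 / fraction_above(iq)):,}")
```

At 160 that works out to roughly 1 in 30,000, so any attempt to calibrate a test (let alone a ranking) up there rests on a vanishingly small sample.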

We can imagine building an AI which is intelligent in the same way people are. Let's say it has an IQ of 100. We fiddle with it and the IQ goes up to 160. That's a clear success, so we fiddle with it some more and the IQ score goes up to 200. That's a more dubious result. Beyond that we make changes, but since we're talking about a machine built to handle questions that are beyond our grasp, we don't know whether we're actually making the machine smarter or just messing it up. This is still true if we leave the changes up to the computer itself.

So the whole issue is just "begging the question"; it's badly framed because we don't know what "God-like" or "super-" intelligence *is*. Here's what I think is a better framing: will we become dependent upon systems whose complexity has grown to the point where we can neither understand nor control them in any meaningful way? I think this describes the concerns about "superintelligent" computers without recourse to words we don't know the meaning of. And I think it's a real concern. In a sense we've been here before as a species. Empires need information processing to function, so before computers humanity developed bureaucracies, which are a kind of human-operated information processing machine. And eventually the administration of a large empire has always lost coherence, leading to the empire falling apart. The only difference is that a complex AI system could continue to run well after human society collapsed.

Contrariwise, show me a form of telecommunication that does *not* involve computers. Even plain old telephone service. Even if you discount the digital switching equipment, the PBXs at business locations are computers.

The overriding principle in any encounter between vehicles should be safety; after that, efficiency. A cyclist should make way for a motorist to pass, but *only when doing so poses no hazard*. The biggest hazard presented by the operation of any kind of vehicle is unpredictability. For a bike, swerving in and out of the lane as a car passes presents the greatest danger to the cyclist and others on the road.

The correct, safe, and courteous thing to do is look for the earliest opportunity where it is safe to make enough room for the car to pass, move to the side, then signal the driver it is OK to pass. Note this doesn't mean *instantaneously* moving to the side, which might lead to an equally precipitous move *back* into the lane.

Bikes are just one of the many things you need to deal with in the city, and if the ten or fifteen seconds you're waiting to put the accelerator down is making you late for where you're going, then you probably should leave a few minutes earlier, because in city driving if it's not one thing it'll be another. In any case, if you look at the video the driver was not being significantly delayed by the cyclist, and even if he was, that is no excuse for driving in an unsafe manner; although in his defense he probably doesn't know how to handle the encounter with a cyclist correctly.

The cyclist of course ought to know how to handle an encounter with a car, though, and for that reason it's up to the cyclist to manage the encounter to the greatest degree possible. He should have more experience and a lot more situational awareness. In this case the cyclist's mistake was that he was sorta-kinda to one side in the lane, leaving enough room that the driver thought he was supposed to squeeze past him. The cyclist ought to have clearly claimed the entire lane while acknowledging the presence of the car; that way when he moves to the side it's clear to the driver that it's time to pass.

Slashdot has been crying wolf about this since they are a geek site, and geeks seem to like that kind of thing, and also like new technology no matter the cost and issues.

However there have been actual depletions of IPv4 space of various kinds. First, all available networks were allocated to the regional registries. Now some of those regional registries are allocating the last of their remaining addresses.

That doesn't mean doomsday, of course; it means that for any additional allocation to go on, something would have to be reclaimed. That has happened in the past: organizations have given back part of their allocations so they could be reassigned. It may lead to IPs being worth more. Company A might want some IPs, and Company B could cut their usage with renumbering, NAT, etc., so they'll agree to sell them.

Since IPs aren't used up in the sense of being destroyed, there'll never be some doomsday where we just "run out", but as time goes on the gap between available space and demand will make things more difficult. As that difficulty increases, IPv6 makes more sense and we'll see more of it.
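The scale of the difference is worth a quick back-of-the-envelope check. IPv4 addresses are 32 bits and IPv6 addresses are 128 bits, so:

```python
ipv4_space = 2 ** 32    # about 4.3 billion addresses total
ipv6_space = 2 ** 128   # about 3.4e38 addresses

print(f"IPv4: {ipv4_space:,} addresses")
# The ratio is exactly 2**96 -- IPv6 holds that many entire IPv4 internets
print(f"IPv6 holds {ipv6_space // ipv4_space:,} IPv4-sized address spaces")
```

Even handing every person on Earth their own IPv4-sized block wouldn't dent the IPv6 space, which is why scarcity pricing is an IPv4-only phenomenon.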

We are already getting there in many ways. You see a lot of US ISPs preparing to roll it out, despite having large IPv4 allocations themselves, because they see the need for it.

While I agree with parts of your argument, landlines are expensive mostly because they have millions of miles of physical wires to maintain. Cell towers do not have this burden.

Also, cell phone service for any smart phone is MUCH more expensive than a landline now if you are single. It's sort of like "$100 for 4" versus "$100 for 1".

That said, I use smartjack (flawlessly) over my internet. $19 a year. It's mainly a backup to find my cell phone, and for extremely long gaming calls (can't get one player to use Skype). I think the network effect for landlines is collapsing. Pretty soon it will be smarter to have a "landline" format phone that actually connects to a local cell tower (no lines to maintain, install, etc.).

But it occurs to me that as long as they have DSL service, the lines will be there anyway. So maybe the network effect won't be lost. Not sure. I haven't been a landline customer for 3 years.

However... From my experience, the leading edge systems have been getting much, MUCH better. Much of the core stuff has been stable for years.

Windows 10 still uses the NT based kernel, like the previous versions. Most of the drivers are the same as well. The buggy stuff is in the new features, which are often not yet deployed into prod environments anyway.

The bad old days of the 1990s seem to be over for now. Quality is much better since then. We can do a lot of things now without much fear of bad consequences.

Just like in the 1990s, when we stopped having to worry so much about RAM failure as a major issue, because RAM became a rather reliable component of the system.

I really wasn't impressed with Edge at all. The touch interface is very buggy: pinch zoom and scrolling stop working past the first few seconds in desktop mode. The browser chrome takes up a lot of screen real estate. And the continued lack of plugins such as AdBlock hinders the web experience. I still don't see the point of drawing on your web page either.

There could be less demand if we didn't have this limited window to upgrade to Windows for free. There are a lot of people who are not in a rush to get Windows 10. However, the limited time means they might as well upgrade now versus waiting too long and having to pay for it. (Yes, I am wide open about Free/Open Source Linux advantages...) But is it that important to create artificially high demand, to make investors think people really REALLY want the upgrade, versus just getting it now for free rather than paying for it later?

No one goes on the bleeding edge, and often not even the leading edge, in production environments. But it is handy for your personal usage, as well as for system testing. The leading edge OS may become your standard reliable OS.

Which is fine. I had my Linux-for-a-desktop kick for a while back in the late 1990s and early 2000s, then I was on Solaris for a while, then Mac OS. I am actually trailing off on a Windows kick; it is getting to a point where I may want to switch again.

Nothing is wrong with any of these systems; they have their pluses and minuses. However, OS X and Windows struggle less with hardware compatibility. Linux seems to be hit or miss, unless you invest a lot of time trying to determine if the hardware is compatible enough, as many discussions of such hardware fail to state whether it works with a distribution or not.

Linux I tend to prefer when I need to be very productive, when I need to crunch a lot of data. It is also handy for cases when I need to do something outside the box, as it doesn't dumb down lower level access.

Benchmarks are already hard for comparing computing systems. Design trade-offs are made all the time. As the nature of the software these systems run changes over time, so does the processor design change to meet it. With more software taking advantage of the GPU, there may be less effort in making your CPU handle floating point faster, so you can focus more on making integer math faster, or on better threading... Compare 2005 to 2015:

2005 - Desktop computing was king! Every user needed a desktop/laptop computer for basic computing needs.
2015 - The desktop is for business. Mobile systems (smart phones/tablets) are used for basic computing needs; the desktop is reserved for more serious work.

2005 - The beginning of the buzzword "Web 2.0", or the acceptance of JavaScript in browsers. Shortly before that, most pages had nearly no JavaScript; where it appeared it was more of a toy, at best doing data validation in a form. CSS features were also used in a very basic way. Browsers were still having problems following the standards.
2015 - "Web 2.0" is so ingrained that we don't call it that anymore. Browsers have more or less settled down and started following the open standards, and JavaScript powers a good portion of page display; in the N-tier type of environment it has become the top level user interface tier. Even with all the Slashdot upgrade hate, most of us barely remember clicking the Reply link, having to load a new page to enter your text, and then the page reloading when you were done.

2005 - 32 bit was still the norm. Most software was 32 bit, and you still needed compatibility for 16 bit apps.
2015 - 64 bit is finally here. There is legacy support for 32 bit, but 16 bit is finally out.

These changes in how computing is used over time mean processor design has to reweigh the trade-offs it chose in previous systems and move things around. Overall things are getting faster, but any one feature may not see an improvement, or it may even regress.