So, I checked slashdot on my phone today over lunch, and I saw the big "We hear you!" post discussing beta. Then, I got home tonight and was redirected to the new beta interface. So, clearly, slashdot the corporate group doesn't hear what slashdot the community is saying. If people are still being involuntarily redirected to something that has put the community on the edge of open rebellion, slashdot is clearly plunging in relevance even faster than a post-Gox bitcoin. It's been a good run. I had over a decade of fun here on slashdot. I had excellent karma. But clearly it's time for me to walk away. It's a shame that ./ is so hell-bent on shooting themselves in the face with this redesign. If they don't completely abandon it, this will probably be the last post from this account. Weird. If you want to improve ./, add UTF-8 support and math rendering: stuff people are actually asking for. Trying to refine the redesign is the wrong path. It's the wrong direction. Trying to dial in the details of shooting yourself in the face doesn't really matter. It doesn't matter if you trade the shotgun for a pistol, or whether you aim for the nose or the roof of your mouth. That's all "Beta isn't ready" means. It's a shame when a good community dissolves, but good night.

It is a lot of work to raise your arm and point at an exact location on the screen (and it's slow, too). After a short time you will feel the fatigue building up in your arm, which starts to feel very heavy. Then you will hate your touch screen and go back to using a mouse, touchpad, or keyboard, none of which require you to make large arm movements or hold up the weight of your arm in front of you.

Why is touch on the desktop always assumed to be something that would have to replace using other inputs? I mean, if touch added $5 to my monitor, and I used it once every few weeks, I'd consider that a win. And, if it were widely deployed, economies of scale would mean that it really would be very cheap to add. (Like audio on the motherboard.) Having things like pinch to zoom could be handy on the desktop.

Instead, they ran rampant, and now we have a bullshit system which, even on my machine, sometimes fails... Chrome doesn't play audio, Firefox does... no idea why... although getting my HDMI TV to play sound on Fedora was interesting; the eventual solution was that I had to edit a file in /usr/share and add a :0 to the end of one of the parameters... I have no idea why... in Linux Mint it was fixed and I never had to do it... but weird shit like this seems to happen all the time...

Despite my best efforts, with Chrome on Ubuntu, some YouTube videos will play out of one sound card and some will play out of another. I think it's Flash vs. HTML5 being used for different videos. Seriously, it's the most bewildering user experience to have to randomly switch between my USB headphones and my analog headphones. Getting Bluetooth audio working reliably is just a lost cause. Skype used to work; I apparently broke it in the course of trying to fix other things. Ten years of professional experience as a UNIX admin, and I can't figure out how to make YouTube work without wearing two different pairs of headphones. It's sort of fucked.

Well, if he has identified it as taking up a large share of the available bandwidth, then it certainly makes sense to consider it a target for reductions. Perhaps more importantly, users tend not to care about updates like that. A file a user is actively downloading from some source is probably more important than some automated process the user doesn't care about, and the automated process can be deferred until the user gets home without them noticing anything.

That said, I've been saying for a while that there needs to be some sort of bandwidth discovery protocol. My original thought process was driven by apps on mobile phones, but this case seems like it would benefit for the same reasons. Wireless operators are always concerned about using scarce bandwidth resources, so we get plans with low data caps and such. Imagine if there were a completely standardised way for an application (say, an email app on a phone) to "ping" bandwidthdiscovery://mail.foo.com with some sort of priority metric. If nothing responded, it would act normally, so the system would be completely backwards compatible. If something along the route did respond (for example, the wireless ISP you are connected to, but it could theoretically be something local or distant, like the school's DD-WRT router in the OP's example), it could reject the session or encourage a delay. That way, an email app set to check every 5 minutes could occasionally get a polite rejection from the ISP asking the app to hold off while circuits are overloaded. The phone would then wait a few minutes before trying again. Eventually the phone would download new email, but at high-traffic times it might wind up going 15 minutes between checks instead of 5, saving the network some trouble. Software updates might defer a download for days or weeks if there is a continual rejection.
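The client-side behavior I'm imagining is just a capped backoff. A rough sketch, with the caveat that everything here is hypothetical: bandwidthdiscovery:// doesn't exist, and `nextCheckInterval` is a made-up name for whatever the app's scheduler would call after each ping.

```javascript
// Hypothetical sketch of the backoff behavior described above.
// The 'advisory' string stands in for whatever a hop on the route
// (ISP, school router, overloaded server) would send back.

const BASE_INTERVAL_MIN = 5;  // user's configured check interval
const MAX_INTERVAL_MIN = 60;  // cap: never defer email forever

function nextCheckInterval(currentIntervalMin, advisory) {
  // advisory: 'ok' if nothing on the route objected,
  //           'defer' if some hop politely asked us to hold off.
  if (advisory === 'defer') {
    // Back off multiplicatively, but cap the delay so mail still
    // arrives eventually even under continual rejection.
    return Math.min(currentIntervalMin * 3, MAX_INTERVAL_MIN);
  }
  // No objection: fall back to the normal interval.
  return BASE_INTERVAL_MIN;
}
```

So a 5-minute check that gets a rejection becomes a 15-minute check, and a continual rejection pins the app at the cap, which is exactly the "days or weeks" deferral story for software updates if you raise the cap.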

My Android phone lets me set software updates and podcast downloads to only happen over wifi, under the assumption that cellular data is expensive but wifi data is unlimited. But if I connect to a MiFi access point backed by a cellular connection, my phone currently has no way to discover that it is actually using (limited) cellular data. With a bandwidth discovery protocol, it would get the same rejections from the ISP that it would get if it were connected to the cellular network directly. And local admins could easily set up rejection rules like the ones the OP would be interested in, while still allowing the possibility of user overrides in cases where the school IT guy really wants to manually update the school's computer systems and whatnot. Think of it as a sort of queryable QoS.

And because any intermediate system on the route can tell apps to reduce bandwidth usage, a server being slashdotted could have some queries rejected too, rather than all throttling happening on the link-local side near the user. Obviously, none of this helps the admin in the immediate term. But it seems like that's how it ought to work.

Implicit semicolons. '5' + 3 gives '53' whereas '5' - 3 gives 2. I tried to include the famous JavaScript truth table. Look it up. Including it in the post just triggered the junk filter, but it's hilarious. JavaScript manages to be chock full of wtf even without the DOM at all. I always wished that Python would show up in the browser at some point. Once upon a time, the idea of genuinely novel scripting languages for web pages actually seemed plausible. (Remember VBScript web pages?) I guess there is so much legacy JS now that it's just the way things work, and we'll never be completely rid of it.
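For anyone who hasn't seen it, the coercion weirdness above reproduces in any JS console, no DOM required (this is just standard type coercion, not anything from the filtered-out truth table):

```javascript
// '+' prefers string concatenation when either operand is a string;
// '-' has no string meaning, so it coerces both sides to numbers.
console.log('5' + 3);    // '53'
console.log('5' - 3);    // 2

// Arrays and objects coerce to strings before '+' is applied:
// [] becomes '', {} becomes '[object Object]'.
console.log([] + []);    // ''
console.log([] + {});    // '[object Object]'

// And the classic: NaN ("not a number") is of type 'number'.
console.log(typeof NaN); // 'number'
```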

And then you need to duplicate the whole thing in another datacenter for geographical redundancy.

Useful for some workloads, sure. But if it is an internal service, rather than something like a website (gasp, not all servers are public-facing websites), then if my office gets taken out by a meteorite, none of the corpses in the building actually care whether some instance of the service exists in some other, safer geographic region.

The flip side is that at a small scale, you get a certain amount 'for free.' If you need to have some infrastructure locally, then you already have some sort of room with space to put a new server in, and you already have sufficient electricity. You already have a guy to replace a blown hard drive. The extra time he spends replacing it is technically nonzero, but it's a fairly rare event, so a single extra server tends to be "in the noise." The big cost comes as soon as you exhaust your existing capacity: the guy is already replacing drives full time, so adding one more server means hiring another full-time guy; or all the racks are full, and you will need to add additional space. You can reach a point where the TCO of the last server was genuinely much less than outsourced infrastructure, but the TCO of the next server will effectively be $500,000 if you only add one more machine.

Emulating one piece of hardware on another in software is always slow. I remember when you needed a fairly beefy PC to play emulated NES games effectively. If you think that emulating a current console on a PC will never be practical, given that consoles are essentially just PCs themselves now, then your attention span is too short to have bothered reading this far into my comment, so I'm not entirely sure why I bothered.

Except now the car can just take itself to a maintenance appointment while you are at work, or overnight, so you never actually need to do anything. In any case, I think the cars will make the best estimate of the world that they can, based on a combination of sensors. It is evidently possible to drive a car optically (humans do it), so if the radar sensors go out, it's probably still perfectly safe to let it drive on lidar and cameras for a little while. It doesn't get scary until cheap econobox cars go autonomous without any real redundancies. But they'll still be safer than human drivers.

The thing is that every "video news" website gets it wrong. Nobody cares about the talking heads giving bookends for the content, and nobody wants autoplay. So, whenever you go to such a web page, it instantly starts playing some random person giving a banal intro. OTOH, an article saying "X happened" with a video that you can choose to play to see X happening would actually be valuable. If some person gave a speech to the UN or something, then having video of the speech is reasonable. But, yeah, I'll play it if I want it.