Posted
by
Soulskill
on Wednesday August 13, 2014 @01:20AM
from the before-the-AIs-violently-revolt dept.

snydeq writes: Now that the technologies behind our servers and networks have stabilized, IT can look forward to a different kind of constant change, writes Paul Venezia. "In IT, we are actually seeing a bit of stasis. I don't mean that the IT world isn't moving at the speed of light — it is — but the technologies we use in our corporate data centers have progressed to the point where we can leave them be for the foreseeable future without worry that they will cause blocking problems in other areas of the infrastructure. What all this means for IT is not that we can finally sit back and take a break after decades of turbulence, but that we can now focus less on the foundational elements of IT and more on the refinements. ... In essence, we have finally built the transcontinental railroad, and now we can use it to completely transform our Wild West."

Now - just because one company goes belly-up doesn't mean that another can't take over and be successful.

What you have is not by any means a successful IT platform yet; you have the foundation. What is limiting is the ISPs and their customer agreements, which effectively restrict users to being consumers of bandwidth and services. When the ISPs realize that their models - bandwidth throttling and agreements prohibiting customers from setting up services at home - slow down the development of new companies and services, things may finally change.

That's only in the US. Here in the EU we solved that problem long ago, paving the way for the development of new companies and services. You will be left behind if you keep letting the market decide these things.

Yeah? But, but... All those EU countries and their policies are socialist. How could it be possible that socialist policies lead to anything that is better, faster, and more readily available? The free market ensures that where there's a demand, those things are always available to anyone who wants to buy them. Right. RIGHT?

I know that was sarcasm, but for the sarcasm-impaired (or the ignorant), I recommend reading Greenspan's testimony to a congressional oversight committee in 2008, where he was forced to admit that the objectivist idea that free markets and rationality always win out - the idea that underpinned the Reagan Revolution and the subsequent deregulation and freeing of the "free market" - does not work in the real world.

Amazing, to see someone who gazed admiringly on Ayn Rand as he sat at her feet forced to admit his error.

And if he had known his history he would have seen the stupidity of his ways. The history of financial crashes of the 1800s, before financial regulation was widespread or even conceived of in some cases, is compelling. See en.wikipedia.org/wiki/List_of_banking_crises

Reasonable regulation, built on experience, is all that people ask for and all that is needed.

Forcing companies to provide mortgages to people who are patently unqualified is an example of unreasonable regulations that resulted in untold devastation to the economy.

Now, the Feds are going around telling banks that these businesses are "bad" and that if they provide service to them, they'll be audited from top to bottom. It is a de facto [washingtonpost.com] suppression of Free Enterprise.

Forcing companies to provide mortgages to people who are patently unqualified is an example of unreasonable regulations that resulted in untold devastation to the economy.

Nobody EVER did that. They were required to be a bit more flexible in determining qualifications for loans on starter homes (that is, small mortgages) and to stop blatantly racist redlining. Instead, they began actively talking people into huge loans with built-in time bombs for McMansions (that is, huge mortgages). Then they invented a variety of wildly complex new financial instruments based on those crazy loans, deliberately (and fraudulently) misrepresented as AAA investments.

Banks were implicitly threatened with audits if their loan profiles didn't meet certain expectations. It's the same with Operation Choke Point today, where they are told they will probably be audited if they don't purge their client lists of businesses that match certain profiles.

Don't confuse explicit demands with implicit threats, which can be just as effective in controlling behavior.

On the other hand, I've found that people talking about explicit demands are more reliable than ones talking about implicit threats. People tend to find concepts that support their ideology, and it's always easy to find implicit threats when you want to.

Alternatively, it will soon be time for the pendulum to swing back to "we've got to have everything in-house, these security breaches are killing us" and "dumb terminals and having everything in the 'cloud' is killing productivity when the cloud is down; we need real apps so users can work even when the cloud doesn't."

Or, in the future, you could just have apps that can run their parts either on the client side or on the server side depending on what's more advantageous in the given situation (based on network bandwidth, network latency, intra-app communication patterns, current server load, client CPU performance, client storage options etc.) That seems like the most flexible option I've ever heard of, and it subsumes having an offline mode. (Plan 9 already did something vaguely similar on the OS level.)
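A decision like that can be sketched in a few lines. Everything here (the function name, inputs, and thresholds) is an illustrative assumption, not any real framework's API:

```python
# Hypothetical sketch: deciding where to run one app component based on
# measured conditions. Names and thresholds are made up for illustration.

def choose_placement(bandwidth_mbps, latency_ms, server_load, client_cpu_score):
    """Return 'client' or 'server' for a component, favoring the client
    when the network is slow or laggy, or the server is busy."""
    network_poor = bandwidth_mbps < 5 or latency_ms > 150
    server_busy = server_load > 0.8
    client_capable = client_cpu_score > 0.5
    if (network_poor or server_busy) and client_capable:
        return "client"
    return "server"

# Offline mode falls out naturally: no bandwidth means run on the client.
print(choose_placement(bandwidth_mbps=0, latency_ms=9999,
                       server_load=0.1, client_cpu_score=0.9))  # -> client
```

With a fast network and an idle server, the same function would pick the server side; the point is that the placement policy is just data-driven, so it can be re-evaluated per session.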

If the "transcontinental railroad" is truly built, then the cloud won't be going down (for any significant amounts of time) in the future.

How often do you venture out onto the Eisenhower Interstate Highway system, stymied that you can't use it in the normal fashion (yes, rush hour in metro areas still needs work, mostly population control, I say, but...)?

If your "cloud is down" more than 5 minutes per day, or has a big (multi-hour) outage more than once a year, then you have not yet arrived at modern (2014) standards.
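For what it's worth, those yardsticks translate into availability percentages with simple arithmetic; a rough back-of-the-envelope sketch:

```python
# Back-of-the-envelope availability math for the yardsticks above.
MINUTES_PER_DAY = 24 * 60            # 1440
MINUTES_PER_YEAR = 365 * MINUTES_PER_DAY

def availability(downtime_minutes, total_minutes):
    """Percentage of time the service was up."""
    return 100 * (1 - downtime_minutes / total_minutes)

# 5 minutes of downtime every day:
print(round(availability(5, MINUTES_PER_DAY), 2))      # -> 99.65 (%)

# One 3-hour outage per year:
print(round(availability(180, MINUTES_PER_YEAR), 3))   # -> 99.966 (%)
```

So the "5 minutes per day" bar is roughly "two and a half nines", a fairly modest target by data-center standards.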

WRONG. I was booking a trip to Flagstaff, AZ the other day via train, and there were many track warnings about BNSF doing track upgrades, resulting in passengers being taken off the 'network' entirely and shoved onto buses. Trains are NOWHERE near five sigma in any way. The Pacific Surfliner route has a 78% on-time record, which is not great (Amtrak publishes its reliability stats).

The article is a rather simplistic hardware-centric viewpoint. It doesn't even begin to touch on the areas where IT has always struggled: design, coding, debugging, and deployment. Instead it completely ignores the issue of software development, and instead bleats about how we can "roll back" servers with the click of a button in a virtual environment.

Which, of course, conveniently ignores the fact that someone has to write the code that runs in those virtual servers, debug it, test it, integrate it, package it, and ship it. Should it be an upgrade to an existing service/server, add in the overhead of designing, coding, and testing the database migration scripts for it, and coordinating the deployments of application virtual servers with the database servers.

Are things easier than they used to be? Perhaps for the basic system administration tasks.

Indeed, and virtualization is a rapidly evolving part of infrastructure right now. We may no longer be upgrading the hardware as rapidly (although I'm not certain about that either), but the virtual layer and tools are changing, and upgrading those requires just as much upheaval.

Virtualization is a pain in the ass. Want a new prod server? *click*
Want a new dev environment? *click*
Want a new db server? *click*
Need an FTP server? *click*
Need an HTTP server? *click*

Before you know it when you need to deploy a small software change it becomes a big deal because you have a billion bloody servers to update.
Before virtualization (or at least the ease of virtualization) you took your time and planned - checked available resources, etc. Resources were scarce and RAM wasn't so abundant, which kept the sprawl in check.

It seems too many forget that all this virtualization still runs on physical servers. Those physical servers still need hardware upgrades, monitoring, and resource management (especially when one starts oversubscribing). I don't get why people keep thinking hardware went away. Instead of lots of 1U servers, now you have big iron running lots of virtual servers.

Yes, but now you have one, maybe two (hopefully super-smart) guys onsite with deep systems knowledge, instead of a fleet of screwdriver-wielding guys with an A+ certification who are as likely as not to screw up your system. Once it's up and running you just have to keep that machine and its backup going, and everyone can build on top of that in software, from anywhere in the world.

Nobody is forgetting that, because it's now partially irrelevant. Need to upgrade RAM in a server? Migrate the VMs to another, shut it down, upgrade, turn it back on. Have a server catch fire and die? HA has already migrated the VMs for you. Getting low on RAM, or CPU sitting at 100%? Look! An alert!

Hardware does still matter, but it's no longer something that must be watched closely and in fear.

I remember very well back in the bad old days, that white knuckle time between telling the remote server to reboot with a new kernel and ping starting again. And of course, the advance setup where you make the old kernel the default in hopes that if it all goes sideways you can call and find someone on-site who can manage to find and press reset should it hang or have some random problem that keeps it off the net.

Are things easier than they used to be? Perhaps for the basic system administration tasks.

But those have never been where the bulk of time and budget go.

They could be if you did not know what you were doing. Like I suspect the author of TFA did not know.

From TFA:

Where we once walked on tightropes every day doing basic server maintenance, we are now afforded nearly instant undo buttons, as snapshots of virtual servers allow us to roll back server updates and changes with a click.

If he's talking about a production system then he's an idiot.

If he's talking about a test system then what does it matter? The time spent running the tests was a lot longer than the time spent restoring a system if any of those tests failed.

And finally:

Within the course of a decade or so, we saw networking technology progress from 10Base-2 to 10Base-T, to 100Base-T to Gigabit Ethernet. Each leap required systemic changes in the data center and in the corporate network.

WTF is 10Base-2 doing there? I haven't seen that since the mid-90's. Meanwhile, every PC that I've seen in the last 10 years has had built-in gigabit Ethernet.

If he wants to talk about hardware then he needs to talk about things like Cisco Nexus. And even that is not "new".

And, as you pointed out, the PROGRAMMING aspects always lag way behind the physical aspects. And writing good code is as difficult today as it has ever been.

Writing good code isn't hard at all. What's hard is programming well when you're on the fifth "all hands on deck" rush job this year, you have two years of experience and no training because your company was too cheap to pay a decent wage or train you, a humiliating and useless performance review is just 'round the corner, and you doubt anything you type will end up in the final product.
The problem is a widespread cultural one. When IT companies are willing to spend the time and money for consistent quality that's when they'll start to put out quality products.

Agreed 100%. There is never time to do it right, but there is always time to do it over. Reviews that grudgingly admit success but celebrate weakness are not positive experiences. There is another trend of third parties marketing infrastructure solutions to high-level management, skipping the local subject-matter experts; this triples the work we have to do. Change is fine, and embraced, but we are paid for something: providing stability and compliance in a rapidly evolving, globalized environment.

Where we once walked on tightropes every day doing basic server maintenance, we are now afforded nearly instant undo buttons, as snapshots of virtual servers allow us to roll back server updates and changes with a click.

If he's talking about a production system then he's an idiot.

Why? Is it your contention that the work of sysadmins and support personnel has just been trouble-free for decades, and all the problems were caused by a sysadmin "not knowing what they were doing"?

and instead bleats about how we can "roll back" servers with the click of a button in a virtual environment.

Meanwhile I'd like to be able to turn clusters into a virtual server instead of having to code specifically for clusters. Something like OpenMosix was starting to do this before it imploded: make several machines look like one big machine to applications designed to only run on single machines.

It's not exactly all warm and fuzzy. Things are much improved from the Mosix days in terms of having the right data available and the right kernel scheduling behaviors (largely thanks to the rise of NUMA as the usual system design). However, there is a simple reality: the server-to-server interconnect is still massively higher latency and lower bandwidth than QPI or HyperTransport. So if a 'single system' application designed around assumptions of fast local memory access is executed across that interconnect, performance falls apart.

Even software is slowing down, though. A lot of the commodity software reached the point of 'good enough' years ago - look how long it's taken to get away from XP, and still many organisations continue to use it. The same is true of office suites: For most people, they don't use any feature not present in Office 95. Updating software has gone from an essential part of the life cycle to something that only needs to be done every five years, sometimes longer.

Around 2025 we will probably see a repeat of the XP situation as Microsoft tries desperately to get rid of the vast installed base of Windows 7, and organisations point out that what they have been using for the last decade works fine so they have no reason to upgrade.

The only reason many organisations are ditching XP right now is that MS stopped supplying updates. That isn't "Getting new software to further advance the organisation." That's more "Reluctantly going through the testing and training nightmare of a major deployment because Microsoft wants to obsolete our otherwise-satisfactory existing software."

A lot of the commodity software reached the point of 'good enough' years ago - look how long it's taken to get away from XP, and still many organisations continue to use it.

I find it hard to believe that operating systems became "good enough" with Windows XP. Rather, Vista took so long to come out that it disrupted the established upgrade cycle. If the previous 2-to-3-year cycle had continued, Vista would have come out in 2003 (without as many changes, obviously), Windows 7 in 2005 and Windows 8 in 2007. We'd be on something like Windows 12 by now.

It's good that consumers are more aware and critical of forced obsolescence, but I don't agree with the "XP is good enough" crowd.

Well beyond hardware, software reliability over the past few decades has shot right up.

Even Windows is very stable and secure. Over the past decade, I have actually seen more kernel panics from Linux than BSODs. We can keep servers running for months or years without a reboot. Our desktops, laptops, and even mobile devices now perform without crashing all the time, and we work without feeling the need to save to the hard drive and then back up to a floppy/removable media every time.

1) Accurate, though it has been accurate for over a decade now.

2) Things have improved security-wise, but reliability I think could be another matter. When things do go off the rails, it's now less likely to let an adversary take advantage of that circumstance.

3) Try/Catch is a potent tool (depending on the implementation it can come at a cost), but the same things that caused segmentation faults with a serviceable stack trace in a core file now cause uncaught exceptions with a far less serviceable record of what actually went wrong.

The article is a rather simplistic hardware-centric viewpoint. It doesn't even begin to touch on the areas where IT has always struggled: design, coding, debugging, and deployment. Instead it completely ignores the issue of software development, and instead bleats about how we can "roll back" servers with the click of a button in a virtual environment.

And now is when we have a long and stupid debate as to whether the term "IT" signifies a grouping of all computer-related work including development, or whether it's limited to workstation/server/network design, deployment, and support. And we go on with this debate for a long time, becoming increasingly irate, arguing about whether developers or sysadmins do more of the 'real' work, and...

Let's just skip to the end and agree that, regardless of whether IT 'really' includes software development, it's pretty clear both kinds of work still need doing.

I kind of agree with TFA here -- hear me out. We went through a pretty fundamental shift in the datacenter over the last 10 years or so, and it's finally settling down. Of course there will be constant evolutionary progressions, updates, patches, etc., but we're basically done totally reinventing the datacenter. 10GbE, virtualization, the rise of SANs and converged data/storage, along with public/private/hybrid clouds - these huge transformative shifts have mostly happened already, and we're settling into a period of refinement.

The article starts with the observation that the hardware bottleneck is mostly gone: if you can afford to supply basic coffee to your employees, the IT hardware doesn't cost much more than that. Contrast that with 1991, when the PC on my desk cost 2 months of my salary and our "network" was a 4-line phone sitting next to it (modems came to our office 5 years later).

I assume you are talking about the hardware... because once you have a "private cloud", the next step is moving away from setting up servers and configuring applications manually, and into full-on DevOps-style dynamically scaling virtual workloads that are completely stood up and torn down (the VMs and their applications, plus the network configuration including "micro networks" and firewall rules) according to the demands of the customers accessing the systems. Those same workloads can move anywhere from your own infrastructure to leased private infrastructure to public infrastructure without any input from you. Of course, none of this is new... but it's certainly a paradigm shift in the way we manage and view our infrastructure... hardly something static or settled.
Really, this is a fast-moving area that is hard to keep up with.

As soon as we have 8k video commonly available, which could be as soon as 2020, if Japan gets to host the Olympic Games, we will run out of storage, out of bandwidth, and there is not even a standard for an optical disc that can hold the data, at the moment. So our period of rest will not be too long.

The only problem there is that it is, for most purposes, pointless. Most people would be hard pressed to tell 720p from 1080p on their large TV under normal viewing conditions. What we see really is a placebo effect, similar to the one that plagues audiophile judgement: When you've paid a heap of cash for something, it's going to sound subjectively better.

Personally, when reading PDF articles, the difference between 1080p and 4K makes a world of difference. The aliasing is almost invisible in 4K. Also, thin diagonal lines in plots are much more clearly defined. Because this difference is so large, I have little doubt that 8K will also make a difference, although maybe not as much as going from 2K to 4K. I have not yet had the pleasure of comparing video on 1080p and 4K.

Exactly, yet you will have a TON of people claiming they can tell the difference. In reality they can not.

99% of the people out there sit too far away from their TV to see 1080p, unless they have a 60" or larger set and sit within 8 feet of it. The people that have the TV above the fireplace and sit 12 feet away might as well have a standard-def TV set.

But the same people that CLAIM they can see 1080p from their TV 10 feet away also claim that their $120 HDMI cables give them a clearer picture.
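Setting the cable silliness aside, the viewing-distance claim can be sanity-checked with basic trigonometry, assuming the common rule of thumb that the eye resolves roughly one arcminute; the numbers here are approximations:

```python
# Rough check of the viewing-distance claim. Assumes the eye resolves
# about 1 arcminute; once a pixel subtends much less than that, extra
# resolution is invisible.
import math

def pixel_arcminutes(diagonal_in, horizontal_px, distance_in, aspect=(16, 9)):
    """Angular size of one pixel, in arcminutes, at the given distance."""
    w, h = aspect
    width_in = diagonal_in * w / math.hypot(w, h)   # screen width from diagonal
    pitch_in = width_in / horizontal_px             # size of one pixel
    return math.degrees(math.atan2(pitch_in, distance_in)) * 60

# 60" 1080p TV at 8 feet: pixels sit right at the ~1 arcmin limit.
print(round(pixel_arcminutes(60, 1920, 8 * 12), 2))   # -> 0.98

# Same TV at 12 feet: pixels are well below the limit, so 1080p detail
# is largely wasted, matching the fireplace-TV point above.
print(round(pixel_arcminutes(60, 1920, 12 * 12), 2))  # -> 0.65
```

By the same math, an 8K panel only pays off at desktop-monitor distances or on wall-sized screens.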

In recent decades we have been eyewitnesses to revolutionary breakthroughs in fields such as energy, transportation, healthcare, and the space industry, to name a few. The technologies that emerged are nowadays pretty much ubiquitous and impossible to do without in day-to-day life. Yet the hardware IT industry is stuck with Moore's law and silicon, and there's even an embarrassing retreat to functional programming on the software side.

Now standardize all your password requirements to a strength-based system without arbitrary restrictions or requirements, and standardize your forms' metadata so that they can be auto-completed or intelligently-suggested based on information entered previously on a different website. Trust me, this sort of refinement will be greatly appreciated.
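A strength-based check might look something like this minimal sketch; the entropy estimate and the 60-bit threshold are illustrative assumptions, not any standard:

```python
# Minimal sketch of a strength-based password check: no arbitrary
# composition rules, just an estimated entropy floor. The character-pool
# sizes and the 60-bit threshold are illustrative assumptions.
import math

def estimated_entropy_bits(password):
    """Crude upper-bound entropy estimate from length and character classes."""
    pool = 0
    if any(c.islower() for c in password): pool += 26
    if any(c.isupper() for c in password): pool += 26
    if any(c.isdigit() for c in password): pool += 10
    if any(not c.isalnum() for c in password): pool += 33  # symbols, spaces
    return len(password) * math.log2(pool) if pool else 0.0

def acceptable(password, min_bits=60):
    # No "must contain a digit" rule: a long passphrase passes on length alone.
    return estimated_entropy_bits(password) >= min_bits

print(acceptable("correct horse battery staple"))  # True: long passphrase
print(acceptable("P@ssw0rd"))                      # False: short despite symbols
```

The nice property is that the policy rewards what actually resists guessing (length) instead of forcing users into predictable `P@ssw0rd`-style substitutions.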

I don't subscribe to this rose-tinted point of view, especially if you look at all this beautiful tech from the security standpoint. Most of the tech we deal with today was originally designed without security concerns. In most cases, security is an afterthought. So much for sitting back and taking a break.

I think it's more about the end of the MHz wars. Nowadays, to get more power, you add more cores. If you can't do that, you add more boxes.

If you've got a single threaded million instruction blob of code, it's not executing very much faster today than it was a few years ago. If you're able to break it into a dozen pieces, then you can execute it faster and cheaper now than you could a few years ago, though.
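The single-threaded-blob point is just Amdahl's law in action; a tiny sketch of the arithmetic (the fractions chosen are illustrative):

```python
# Amdahl's law: splitting a job across cores only speeds up the
# parallelizable fraction of it.

def speedup(parallel_fraction, cores):
    """Overall speedup when parallel_fraction of the work uses all cores."""
    serial = 1 - parallel_fraction
    return 1 / (serial + parallel_fraction / cores)

print(round(speedup(1.0, 12), 1))   # fully splittable blob: 12.0x on 12 cores
print(round(speedup(0.0, 12), 1))   # single-threaded blob: 1.0x, no gain
print(round(speedup(0.9, 12), 2))   # 90% parallel: only ~5.71x
```

Even a 90%-parallel job tops out well short of the core count, which is why the "add more cores" era rewards code that can be broken into pieces.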

Moore's law hasn't really run out of steam; it's more that its rules have changed a bit - the raw power now shows up as more transistors and more cores rather than higher clock speeds.

Moore's law was about one node shrink every 18 months, meaning a reduction in structure sizes by sqrt(2), i.e., twice the number of transistors at the same die size. The reduction in size meant a reduction in gate thickness and operating voltage by sqrt(2), and a reduction of per-transistor capacitance by sqrt(2) as well. Together those allowed an increase in clock speed of sqrt(2) at constant power. None of that is happening any more.
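Those classical (Dennard) scaling rules can be written out explicitly; this sketch assumes the textbook model in which linear dimensions, voltage, and per-transistor capacitance all shrink by sqrt(2) per node:

```python
# Classical Dennard scaling, computed explicitly for one node shrink.
import math

s = 1 / math.sqrt(2)                 # linear shrink per node, ~0.707

transistor_area = s ** 2             # each transistor shrinks to half its area
density_gain = 1 / transistor_area   # -> 2x transistors at the same die size
voltage = s                          # operating voltage scales down by sqrt(2)
capacitance = s                      # per-transistor capacitance likewise
clock_gain = 1 / s                   # clock speed can rise by sqrt(2)

# Dynamic power per transistor ~ C * V^2 * f
power_per_transistor = capacitance * voltage**2 * clock_gain
# Twice the transistors at half the per-transistor power -> unchanged total
total_power = power_per_transistor * density_gain

print(round(density_gain, 3), round(clock_gain, 3), round(total_power, 3))
# -> 2.0 1.414 1.0
```

The "none of that is happening any more" part is the end of the voltage term: once V stops shrinking, the same arithmetic makes total power climb with every shrink, which is exactly the end of the MHz wars described above.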

No, you IT people are no longer the great revolutionists - your time is gone. You are now just plumbers, who need to fix the infrastructure when it are broken. Other than that, we don't want to hear from you, and we certainly don't want your veto on our business decisions - that is why a lot of us business people use the cloud, because the cloud doesn't say "can't work, takes X months, and I need X M$ to set it up", but is running tomorrow out of operational budget.

Sure, the cloud runs with gremlins, fuck yeah. I guess you also don't care what your mechanic says and use the "garage", and also don't care what your dentist says and go there once every 5 years. If you do not care what professionals advise, you are an idiot and do not deserve competent people working for/with you. Douche bag.

With the increased reliability of modern cars, people do make fewer trips to the garage. So it's not unlikely that cars will end up in the garage no more than once every 5 years.

I guess the same fact being true for IT really bugs you. The IT drones where I work are right now in a tizzy because the corporate IT people in Mexico are taking over. Because they can, and it saves a lot of $$, and also because the local fucks just aren't needed much anymore. There's no need for a guy to clean the lint out; all the mice are optical now.

Give me a car that is in the garage only once in 5 years, and I won't mind paying the price tag. There are IT drones/fucks and then there are IT people, and guess what, I do not belong to the former. There are fucks in any other job, yours included: people that got all cozy, did not have the vision to get ahead, keep up with the times, and get away from the menial tasks. But guess what, the stuff does not automate itself and does not run alone. And when it seems to run alone, it is because people are working to keep it that way.

I am no more defensive than you are a big idiot. This has been an ongoing trend for ages... People think they can get by without the people who keep things working, and then, depending on the quality of the previous team's work, things go down after one year, at most two, and they spend much more on external firms/consultants fixing the "malfunction". I have already fixed a couple of such places and earned a lot of money thanks to idiots with a prejudice and lack of vision like you.

So essentially you're saying that you, as a technically illiterate person, don't give a crap about the opinion of your sysadmin in technical questions.
Oh, wait, you've already mentioned you're a business person. Enjoy your Dunning-Kruger while it lasts.

need to fix the infrastructure when it are broken.

Shall we fix your understanding of the English language while we're at it? Or would that be too mission-critical a business decision?

...like a dinosaur in the last days before the meteor. The future is over there in the makerspaces, where 3D printing, embedded stuff, robotics, CNC machines, and homebrew PCBs at dirt-cheap prices are happening. It's all growing like weeds, crosses the boundaries between all disciplines including art, and is an essential precursor to the next Industrial Revolution, in which you and your giant installations will be completely bypassed.

What the -- ?? "the technologies behind our servers and networks have stabilized" -- when did this happen? I'm not a datacenter person, but isn't the world filled with competing cloud providers with different APIs, and things like OpenStack? Did all this stuff settle down while I wasn't paying attention?

I think this would be a better way of looking at what this article is on about. Back in the late '80s and early '90s, when I graduated and started my career in the networking industry, the OSI 7-layer model (https://en.wikipedia.org/wiki/OSI_model) was often referred to. You don't hear it mentioned much these days. If you applied IT history and economics to it, you'd find that each of those layers saw a period of fantastic growth and innovation (a few short years) before becoming an IT commodity and having little value.

As a senior engineer, I'm glad to get some downtime before the "next revolution." I certainly haven't had to patch any hacks or bugs related to our transcontinental wonkavator. This week I've done nothing but drink pina coladas and enjoy a long vacation instead of worrying about vendor lock-in and incompatibility, which as we all know was solved during the IT Revolution(c). Thanks to the IT revolution (and especially the cloud) I've had plenty of time to spend with friends playing my favourite games, which in no way were encumbered by a lack of reliable infrastructure to play them on (thanks again, IT Revolution!). Technologies used in the corporate data center like DRAC and EFI PXE have worked so well that I don't even have to worry about security vulnerabilities or bugs. Gone are the days when disk and RAM shortages were commonplace, as are the days when disks were specifically coded to certain vendors and controllers.

At the risk of pissing off some folks, I must say I've worked in IT since before it was called IT, and I can honestly say no revolutions will come from that area. After all, IT isn't known for its innovative R&D atmosphere. IT is the result of cramming middle management, contractors, and novice-to-mediocre developers together in cubicles. Sure, it's a steady paying job, which is why most of us do it. The revolutionary stuff will continue to come from those who have the luxury of choosing not to work that way.

The ongoing technology churn we've seen in the last decade is *not* a feature of a revolution in progress that may be coming to an end; it's a reflection of stagnation in technology, without the ideal data-centre technology (at least in terms of software) having achieved any kind of dominance. There's been an endless parade of new web technologies, none of which is more than an ugly hack on HTML. Websites are better than they were twenty years ago, but certainly not 20 years' worth of progress better.

The concept is false. Things have changed in how they break and what we are concerned about on a daily basis. 10 years ago I didn't have compromised accounts to worry about every day, but I did spend more time dealing with hard drive failure and recovery. We are still busy with new problems and can't just walk off and let the systems handle it.

If you believe IT is like running your Android device, then yes, there is little to be done other than pick your apps and click away. If you have some security awareness, you would know there is much going on to be concerned about. When the maker of a leading anti-virus product declares AV detection dead, it is time to be proactive about looking at the problem. Too many IT folk believe that if there is malware, it will announce itself. Good luck with that assumption.