HPCwire – Since 1987: Covering the Fastest Computers in the World and the People Who Run Them
https://www.hpcwire.com

Intel Updates Plans for 48-Core Chip
Thu, 01 Nov 2012
https://www.hpcwire.com/2012/11/01/intel_updates_plans_for_48-core_chip/

When mobile devices become as smart as your PC, will you still need your PC?

What could you do with a 48-core smart phone? If Intel has its way, you won’t have to wait long to find out. The device could be a reality in just 5 to 10 years, as the company’s research arm preps its 48-core processor for the mobile market.

When you can harness the power of a PC in the palm of your hand, the distinctions between computers and tablets and smart phones start to break down, notes a recent piece at Computerworld.

This opinion is shared by Patrick Moorhead, an analyst with Moor Insights and Strategy, who was interviewed by Computerworld. With a chip like the one Intel is designing, “the phone would be smart enough to not just be a computer but it could be my computer,” he stated.

Intel first announced its stamp-sized 48-core processor, the so-called “Single-chip Cloud Computer” (SCC), in 2009. The test chips clocked in at under 2.0 GHz and were suitable for workloads like linear algebra, fluid dynamics and Web serving. At the time, though, they were not intended for commercial deployment. Instead, they were to be used as a testbed for multicore experimentation and application porting.

Intel has since changed course and is now pushing the cluster-on-a-chip technology toward the mobile space. The company says we might not have long to wait: the super-smart phones and tablets could be here as early as 2018.

Low-power multicore chips have gained prominence as the Web era’s emphasis on processing many small tasks pushes the boundaries of the traditional x86 architecture. Tilera debuted a 100-core chip in 2009, while Intel, AMD, Dell, HP and Penguin Computing are all looking to microservers, a very similar concept, to meet the demands of the cloud and mega-datacenter space.

The current crop of mobile devices already uses multicore chips, but they’re of the dual- and quad-core varieties. Putting a 48-core processor in a handheld device is a definite game-changer. The additional cores would allow multiple applications to be processed in parallel: a few cores for email, a few for surfing the Web, and so on. Current designs have some capacity for task-sharing, but there’s often a noticeable lag, resulting in a less-than-optimal user experience.

Multicore architecture provides an efficient way to handle many small workloads, and it offers an attractive compute-per-watt profile as well. In the proposed Intel model, the cores act like a dynamic computing mesh, with just the right number of cores allocated to a particular app. Such a device could also operate in tandem with a remote system (aka “the cloud”), offloading compute-intensive tasks to further optimize on-board resources.

However, there is a potential obstacle. As in the HPC space, application parallelization is not easy and requires the will of the developer community. Enric Herrero, a research scientist at Intel Labs, called the lack of suitable software a “limiting factor.”

“We need to modify how operating systems and apps are developed, making them far more parallel. Now, [having] cores doesn’t matter if I can’t take advantage of it,” he told Computerworld.

Another analyst interviewed for the piece, Rob Enderle, agreed, saying we still have a ways to go writing for 6- to 8-core machines, and when it comes to getting code to work across massive multicore, “we haven’t even really started to do that yet.”

Moorhead is more hopeful, though. He believes the software will come when the hardware is ready. When it does, PCs as we know them may become a thing of the past. We’ll carry our personal computers with us wherever we go, connecting wirelessly to peripherals like displays, keyboards and mice as needed.

A Healthy Dose of Analytics: From IBM Watson to Tricorders
Thu, 02 Jun 2011
https://www.hpcwire.com/2011/06/02/a_healthy_dose_of_analytics_from_ibm_watson_to_tricorders/

If you’ve been following the health care debate in the US, it’s become fairly clear that the current trajectory of medical costs will soon be unsustainable for the economy. The latest government figures put the average US health care spend per person at over $8,000, a figure projected to top $13,000 by 2018. Whether the latest health care legislation will do much to curb these costs is debatable.

If that $13,000 per capita figure holds up, that means about 20 percent of the nation’s GDP will be spent on medical bills. Other developed nations are currently about twice as efficient as the US, but even there, health care costs are outrunning incomes. Fortunately, economic forces that strong have a way of disrupting the status quo.

Probably the lowest hanging fruit for optimizing the health care sector is in information technology. Even though we think of medicine as a high-tech endeavor, it’s mostly based on 30-year-old IT infrastructure overlaid with a manual labor approach to data collection and analysis. Essentially we have a system using 20th century computing technology, but with 21st century wages.

Just going to a doctor’s office and filling out a medical history form (on paper!) for the 100th time should give you some idea of how antiquated the health care industry has become. It’s as if the Internet was never invented.

But it’s not just about your medical records ending up in isolated silos. The amount of data that can be applied to your health is actually growing by leaps and bounds. The results of medical research, genomic studies, and clinical drug trials are accumulating at an exponential rate. Like most sectors nowadays, health care revolves around data.

In general, though, your health care provider doesn’t do anything with all this information, since the analysis has to be done by a time-constrained, highly paid specialist, i.e., your doctor. But that could soon change. The latest advanced analytics technologies are looking to mine these rich medical data repositories and transform the nature of health care forever. Not surprisingly, IT companies are lining up to get a piece of the action.

IBM, in particular, has been pushing its analytics story for all sorts of medical applications. Last week, the company announced it was expanding its Dallas-based Health Analytics Solution Center with additional people and technology.

Part of this is about sliding the IBM Watson supercomputing technology into a medical setting. With its impressive Jeopardy performance under its belt, IBM is now applying HPC-type analytics to understand medical text. Specifically, the company wants to combine Watson’s smarts with voice recognition technology from Nuance Communications to connect doctors to their patients’ medical data via a handheld device like a tablet or smart phone. From the press release:

By using analytics to determine hidden meaning buried in medical records, pathology reports, images and comparative data, computers can extract relevant patient data and present it to physicians, ultimately leading to improved patient care.

Analytics vendor SAS is also in the game. In May, the company unveiled a new organization, the Center for Health Analytics and Insights, designed to apply advanced analytics across health care and life sciences. Although the specifics were a little thin, the group will focus on “evidence-based medicine, adaptive clinical research, cost mitigation and many aspects of customer intelligence.”

It’s not all about clinical care though. One of the most expensive undertakings of the health care industry is ensuring drug safety. Both the FDA and pharma have had some spectacular failures in this area, the most recent being Vioxx, a pain-relief drug that was pulled from the market in 2004 after it was discovered that it was causing strokes and heart attacks in some patients.

A recent study by the RAND Corporation suggests data mining can be used to find some of these dangerous drugs before they enter widespread usage. RAND CTO Siddhartha Dalal and researcher Kanaka Shetty developed an algorithm to search the PubMed database and uncover these bad actors. The software employed machine learning to differentiate truly dangerous compounds from ones that only looked suspicious (false positives). According to the authors, the algorithm uncovered 54 percent of all detected FDA warnings using just the literature published before the warnings were issued.
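RAND hasn’t published its code, but the general shape of such a literature-mining classifier is easy to sketch. The snippet below is a minimal, hypothetical illustration in Python (toy abstracts, a TF-IDF bag-of-words model, and logistic regression via scikit-learn); the actual RAND system would differ in its features, training corpus, and validation against FDA warning dates.

```python
# Illustrative sketch only -- not the RAND algorithm. Assumes a tiny labeled
# corpus of PubMed-style abstracts; real work would pull thousands of records.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy training data: label is 1 if the abstract reports a serious adverse event.
abstracts = [
    "increased risk of myocardial infarction observed in treatment arm",
    "drug was well tolerated with no serious adverse events reported",
    "elevated incidence of stroke among patients receiving the compound",
    "study shows improved outcomes and a favorable safety profile",
]
labels = [1, 0, 1, 0]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(abstracts, labels)

# Score new literature; abstracts above some threshold get flagged for review.
new_abstracts = ["post-marketing reports suggest cardiovascular events"]
print(model.predict_proba(new_abstracts)[:, 1])
```

In practice the hard part is assembling the labeled corpus and time-slicing the literature so the model only ever sees papers published before each warning, which is how the 54 percent figure above was measured.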

A more ambitious medical technology is envisioned by the X PRIZE Foundation, a non-profit devoted to encouraging revolutionary technologies. Recently they teamed with Qualcomm to come up with the Tricorder X PRIZE, offering a $10 million award to develop “a mobile solution that can diagnose patients better than or equal to a panel of board certified physicians.” In other words, make the Star Trek tricorder a reality.

The device is intended to bring together wireless sensors, cloud computing, and other technologies to perform an initial diagnosis and direct patients to a “real” doctor if the situation warrants. Presumably the cloud computing component will handle the necessary data mining and expert-system intelligence, while the tricorder itself would mostly act as the data collection interface and perhaps do some medical imaging. The X PRIZE Foundation will publish the specific design requirements later this year, with the competition expected to launch in 2012.

None of these solutions is being promoted as a substitute for doctors or other medical professionals. Inevitably, though, if these technologies become established, those jobs will be very different. With powerful analytics available, doctors won’t have to memorize all the information about biology, drugs, and medical procedures anymore. In truth, they can’t even do that today; there is already far too much data, and it continues to expand.

In an analytics-supported health care system, medical practitioners will need to do less data collection and analysis and more meta-level analysis. Just as writers today don’t need to know how to spell every word (remember, 50 years ago a spell checker was a person, not a piece of software), doctors will not need to memorize which drugs are applicable to which diseases. And that means a lot fewer doctors and less supporting staff. Essentially we’ll be replacing very expensive PhDs with very cheap computer cycles.

If that seems like a scary prospect, consider the more frightening scenario of a health care system that bypassed this technology and tried to burden medical practitioners with the data deluge. Also consider that without advanced analytics, the majority of the population will be burdened by the long-term costs of sub-standard medical care.

Beyond that, advanced analytics will also be involved in propelling other health care technology forward, including drug discovery, genomics, and the whole field of personalized medicine. Many of these advances will enable medical conditions like heart disease, cancer and diabetes to be prevented, which is a far less expensive proposition than treatment.

It’s reasonable to be optimistic here. Nature abhors a vacuum — in fact, any sort of stark discontinuity. Our problematic health care model will eventually be transformed by technologies that make economic sense. Advanced analytics is poised to be a big part of this.

“Cloud Robotics” Research Aims to Create Smaller, Smarter Machines
Thu, 03 Mar 2011
https://www.hpcwire.com/2011/03/03/cloud_robotics_research_aims_to_create_smaller_smarter_machines/

A new research subfield of robotics relies on the cloud to power robot access to information and new skills, and to offload computationally intensive tasks.

Robotics is anything but a static field, with a continuous stream of advancements adding to both the complexity and the possibility behind each new development. A new subfield, “cloud robotics,” is emerging as a hot topic in research. As one might imagine, instead of relying on “in-house” resources, robots can potentially leverage the cloud to deliver instant information and to handle computationally intensive tasks that would otherwise consume a great deal of a robot’s on-board resources.

Erico Guizzo of IEEE Spectrum pointed out that while we are unable to upload information directly into our “meat brains” for instant access to task-relevant knowledge (à la The Matrix), robots have that advantage.

Recent research projects are leveraging information stored in the cloud to let robots acquire the skills and knowledge they need in the blink of an eye. Cloud-driven robots can also cast off heavy-duty computation to the cloud, freeing up on-board resources for other tasks and enabling more sophisticated behavior as more compute becomes available.

As Guizzo reported, there are a number of research groups that are exploring the idea of “robots that rely on cloud computing infrastructure to access vast amounts of processing power and data. This approach, which some are calling ‘cloud robotics’ would allow robots to off-load compute-intensive tasks like image processing and voice recognition and even download new skills instantly, Matrix-style.”

One of the more promising aspects of this idea goes beyond the “cool” factor of offloading and instant skill “level-ups” via cloud-stored information. It also means that robots can shrink: many are currently forced to carry extensive on-board computers, a serious burden considering the compute-intensive tasks most robots perform.

On top of the computers they need to schlep around, robots also carry heavy-duty power sources, most often batteries, to keep the computation and movement humming along. Less on-board computation means less power draw, and together that makes much smaller robots possible.

With the ability to understand language, learn from experience and perform sophisticated analytics, Watson aims particularly high in this regard. Although the Jeopardy contest is just a PR demonstration, IBM has its eye on moving the technology into the commercial realm for things like business intelligence, financial analytics, medical diagnostics, and a host of other lucrative applications.

The human counterparts for these types of jobs tend to be well-educated and well-compensated individuals. And these “smart” machines are not all that expensive. A computer like Watson would probably cost in the neighborhood of $5 million today. That’s hardware only — we can only imagine what IBM would charge for the software, which is the real value add here. Even at $5M-plus, being able to replace one or more five-figure salaries with a machine that can work 24/7 would still be tempting.

To be fair, the Watson technology is not at a point where it could actually take the place of a medical diagnostician or a stock portfolio manager. Rather it would act as a support tool that could greatly magnify the performance of those individuals. The idea would be for a single analyst to perform the work of a dozen.

IT writer Nicholas Carr has expounded on this subject at length in his books and online blog. His take is that the advance of information technology is displacing the modern workforce at a rapid rate, just as the industrial revolution did for manual labor in the 18th and 19th centuries. And as the machines become more sophisticated, ever more highly-skilled jobs are being threatened.

In the past information technology tended to reduce demand for low-skilled jobs but increase demand for higher-skilled specialists. Now, automation is moving up the skills ladder, as the Internet and sophisticated software combine to reduce the need for more categories of knowledge and creative workers. One has to wonder what new categories of employment will expand to absorb the losses.

[I]f you look at more recent trends, you see that software is becoming increasingly more adept at taking over work that has traditionally required relatively high skills – or even, in YouTube’s case, enabling the creation of sophisticated goods through the large-scale and automated harvesting of free labor. The next wave of “superstars” may be algorithms – and the small number of people that control them.

Carr isn’t the only one to notice this. He cites a number of economists who have hypothesized that tech advancements may be one of the primary causes of the concentration of wealth for top earners. Fed chair Ben Bernanke noted that new technologies tend to increase the productivity of highly-skilled workers, and thus their wages, compared to lower-skilled workers.

“Considerable evidence supports the view that worker skills and advanced technology are complementary,” says Bernanke. “For example, economists have found that industries and firms that spend more on research and development or invest more in information technologies hire relatively more high-skilled workers and spend a relatively larger share of their payrolls on them.”

That would suggest that we just need to develop an increasingly higher-skilled workforce to keep pace with technological innovation. But a funny thing happens on the way up the food chain: the structure is really a pyramid, with fewer and fewer positions as you approach the top. For example, if you replace maids with robots, a single technician can maintain many robots, so you can’t simply retrain all the maids as technicians. The same goes for analysts as they get displaced by smart machines.

Today many economists are concerned about how slowly employment is recovering after the Great Recession. Sure enough, as the economic slide ended, productivity surged as companies discovered new ways to run their businesses with fewer people. I suspect a lot of that productivity surge was the result of more IT deployment rather than longer work hours. In many cases, businesses cut work hours to reduce costs.

So what happened to all the displaced workers from the recession? Well many are still looking for a path back into the workforce, while others have given up entirely.

An interesting graphic from the Bureau of Labor Statistics shows the recovery of employment after each of the last six economic downturns.

Although the March 2010 New York Times article that cites this graph is making a point about the lag in re-employment, the more interesting fact is that the lag times appear to be lengthening significantly with each successive recession, regardless of the severity. That would suggest that job seekers are finding it increasingly difficult to return to work with each passing year. It’s not unreasonable to imagine that the inability of workers to keep pace with technology advancements is playing a role here.

If true, at some point that lag will be so long that employment won’t recover before the next recession hits. And then what? Well, we better hope that those new categories of employment Carr wonders about will actually come to pass.

Algorithms Engulf Wall Street
Wed, 19 Jan 2011
https://www.hpcwire.com/2011/01/19/algorithms_engulf_wall_street/

Algorithmic trading is getting another cycle of press scrutiny, thanks mainly to a very well-researched article in Wired by Reuters financial blogger Felix Salmon and Ars Technica writer Jon Stokes. In it, they outline how pervasive these high-tech algorithms have become in the everyday running of financial trading. And the problem is that no one knows how this software drives market behavior: not the investors, not the traders, and not even the people who run Wall Street.

The motivation for all this high-tech trading is, of course, money. And in this case, he who develops the fastest system usually wins. This often comes down to placing the trading servers in the same room as the stock exchange servers to get that millisecond edge on executing transactions. The code, too, is designed for maximum speed, constantly tweaked to squeeze the last ounce of performance from the underlying chips. Appro recently launched a server based on overclocked Intel “Westmere” CPUs to give high-frequency traders that extra speed boost. But all that digitally enhanced speed makes the system that much harder for humans to control.

That’s partly because computer-generated bids can be executed so quickly (10,000 bids per second for a single stock) and in such a complex manner that humans cannot comprehend the ramifications. The feedback loops become intertwined, such that the entire trading system exhibits emergent behavior, untraceable to any particular piece of code.

In a recent interview on NPR’s Fresh Air program, Salmon declared, “The main danger about algorithmic trading is that we simply don’t understand it.” He says that although the individual algorithms are controlled, and presumably understood, by their masters, the interactions between them are not.

In researching his article, Salmon talked with Michael Kearns, a computer science professor at the University of Pennsylvania who has developed algorithms for various Wall Street firms. Kearns told him that the financial markets have become what he called an “automated adaptive dynamical system with feedback.” That may sound very cool, but according to Kearns, there is no science he’s aware of that can understand such a system.

It should come as no surprise that occasionally such a system will run the financial markets into a ditch. That happened last May with the so-called flash crash, when the Dow Jones Industrial Average plummeted 900 points in a matter of minutes, before regaining most of its value. The cause was traced to a relatively obscure mutual fund company that decided to make a very large trade in a very short amount of time (about 20 minutes). The algorithms monitoring the market interpreted this as a panic and came to the same decision all at once: sell. The reason the mutual fund company decided to dump the shares in the first place was to hedge against the possibility of a future stock market drop. Talk about self-fulfilling prophecies.

In the wake of the flash crash, the Securities and Exchange Commission (SEC) announced some measures intended to prevent a recurrence. These include “circuit breaker” procedures, such as automatically halting trading when a stock’s share price fluctuates by more than 10 percent in five minutes. The SEC is also considering other measures, like limiting the size and speed of trades and requiring a complete audit trail of all transactions.
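To make the circuit-breaker idea concrete, here is a toy sketch of the single-stock rule described above: halt trading if the price moves more than 10 percent within a rolling five-minute window. This is an illustration only; the actual SEC and exchange rules involve reference prices, security tiers, and formal reopening procedures that a few lines of Python ignore.

```python
from collections import deque

# Toy single-stock circuit breaker: halt if price moves >10% within 5 minutes.
WINDOW_SECONDS = 5 * 60
THRESHOLD = 0.10

def make_breaker():
    ticks = deque()  # (timestamp, price) pairs inside the rolling window

    def on_tick(timestamp, price):
        ticks.append((timestamp, price))
        # Drop ticks that have fallen out of the five-minute window.
        while ticks and timestamp - ticks[0][0] > WINDOW_SECONDS:
            ticks.popleft()
        oldest_price = ticks[0][1]
        move = abs(price - oldest_price) / oldest_price
        return move > THRESHOLD  # True means "halt trading"

    return on_tick

check = make_breaker()
print(check(0, 100.0))    # False: no move yet
print(check(120, 88.0))   # True: a 12% drop within the window trips the breaker
```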

But Salmon considers those rather crude remedies for such a tightly wound system. The flash crash was actually a rather simple example of what could go wrong. The interactions between all the analytic software inhabiting Wall Street datacenters are much more complex. For example, unlike the mutual fund company that executed its large trade all at once, algorithmic trading software tries to hide a big buy or sell with a series of smaller transactions so as not to tip its hand.
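A crude sketch of that slicing idea: break a large parent order into randomized child orders so that no single transaction reveals the full position being built or unwound. Real execution algorithms (TWAP, VWAP, implementation-shortfall strategies) also schedule the slices over time and react to market conditions; the toy version below does neither.

```python
import random

def slice_order(total_shares, num_slices, jitter=0.3):
    """Split a parent order into randomized child orders that sum to the total.

    A toy version of the idea: no single child order reveals the full size
    of the position being accumulated or liquidated.
    """
    base = total_shares / num_slices
    sizes = [base * (1 + random.uniform(-jitter, jitter)) for _ in range(num_slices)]
    scale = total_shares / sum(sizes)
    child_orders = [int(round(s * scale)) for s in sizes]
    # Push any rounding remainder into the last slice so the total is exact.
    child_orders[-1] += total_shares - sum(child_orders)
    return child_orders

print(slice_order(100_000, 20))  # e.g. twenty orders of roughly 5,000 shares each
```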

Meanwhile, other algorithms are simultaneously monitoring the activity to discern the larger patterns that the other codes are trying to hide. In some cases, even more devious codes will purposely initiate transactions with no intention of executing them in order to confuse their rival software. It’s very much algorithmic warfare, with no real thought given to collateral damage.

The quantitative analysts themselves have become somewhat innocent bystanders. The Wired article describes a typical quant shop, in this case Berkeley-based Voleon Capital Management, which specializes in statistical arbitrage. The idea is to process mounds of financial data, looking for patterns that point to a profitable arbitrage opportunity. But the quants have no knowledge of the underlying fundamentals of the assets; they are simply looking for patterns. To them, it’s just a pile of bits unrelated to any larger reality.

The software is becoming more sophisticated, too. Salmon documents a recently launched service, called Dow Jones Lexicon, that mines the text of financial news stories and attempts to map keywords to market conditions, the idea being to help predict market trends based on external events. Although such software is in its nascent stages, it could add a whole other layer of complexity to trading models.
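Lexicon’s keyword lists and weightings are proprietary, but a bare-bones, purely illustrative version of keyword-based news scoring might look like the following; everything here, from the word lists to the scoring rule, is a made-up stand-in.

```python
# Bare-bones news sentiment scoring -- purely illustrative; the real Dow Jones
# Lexicon keyword lists and weightings are proprietary.
NEGATIVE = {"default", "bankruptcy", "plunge", "downgrade", "probe", "lawsuit"}
POSITIVE = {"beat", "upgrade", "surge", "record", "dividend", "buyback"}

def score_headline(text):
    """Count positive minus negative keywords in a headline."""
    words = text.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

headlines = [
    "Acme announces record dividend and buyback",
    "Regulators open probe into Acme after bond downgrade",
]
for h in headlines:
    print(score_headline(h), h)
```

A trading model would then fold scores like these in as one more input signal alongside price and volume data.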

The fact that so much trading (the majority, in fact) is performed algorithmically suggests that the market is no longer balanced between investors and speculators. And since the speculation component is being propelled by superfast computers, the market has become increasingly volatile and unpredictable. Both before and after the May 2010 flash crash, there have been a number of unexplained price fluctuations.

The University of Pennsylvania’s Kearns suggests that we should build a ginormous stock market simulator in order to provide some much-needed science for our market structures. In a recent Reuters blog post, Salmon quotes Kearns at length on the idea. The professor doesn’t see a simulator as a magic bullet, but in his estimation it’s certainly the place to start.

Given the importance of the stock market to the economy, its increasing susceptibility to damaging volatility, and our lack of understanding of the current system, a simulator project seems like a no-brainer. Sounds like a nice little science project for the SEC.

2000 Years of Computing on Display at Computer History Museum
Wed, 12 Jan 2011
https://www.hpcwire.com/2011/01/12/2000_years_of_computing_on_display_at_computer_history_museum/

The Computer History Museum in Mountain View, Calif., reopens its doors this week after undergoing a $19 million, 25,000-square-foot building renovation. The gem at the heart of this giant undertaking is a major new exhibit that traces the history of computing from the ancient abacus to the personal digital assistant (PDA) of the 90s. The details of the exhibition, titled “Revolution: The First 2000 Years of Computing,” were the subject of a recent article at Computerworld.

John Hollar, museum CEO, shared with Computerworld the impetus for the project:

“Many times, people coming to the museum have very basic questions: ‘How did that computer on my desk get there? How did that phone I’ve used for so long get so smart?’ It’s an exhibition that’s primarily aimed at a nontechnical audience, though there’s a ton of great history and information for the technical audience as well.”

The show’s 19 galleries house documents, video presentations, and more than 5,000 images and 1,100 artifacts. Some of the presentations on display are designed for hands-on use. For example, visitors will be able to pick up a 24-lb. Osborne computer or play a game of Pong, Pac-Man or Spacewar.

Among other noteworthy artifacts are a 1956 IBM 305 RAMAC computer and its IBM 350 disk storage unit, the first commercially available hard drive. The machine holds 5 MB of data and occupies almost an entire room.

Also on display are “the console of a 1950 Univac 1, the first computer to become a household name; a complete installation of an original IBM System/360, which dominated mainframe computing for 20 years; and a Cray-1 supercomputer, which reigned as the world’s fastest from 1976 to 1982.”

During the next year, the museum will host a special lecture series, called “Revolutionaries,” which will spotlight prominent technology innovators speaking about the developments and discoveries that have influenced our world.

A permanent installation, “Revolution: The First 2000 Years of Computing,” opens to the public tomorrow, Jan. 13.

CFD: Light at the End of the Tunnel?
Mon, 08 Mar 2010
https://www.hpcwire.com/2010/03/08/cfd_light_at_the_end_of_the_tunnel/

Formula One engineers differ on the benefits of CFD.

Nick Wirth swears by it; Adrian Newey and Mike Gascoyne are highly sceptical. We are talking about computational fluid dynamics, or CFD for short, which in layman’s terms is the use of computers to simulate airflow over a solid surface. In order to save millions of pounds a year on aerodynamics, Virgin Racing will make Formula One history this season as their cars have been designed without the use of a wind tunnel.

No Country for Old Men
Fri, 19 Feb 2010
https://www.hpcwire.com/2010/02/19/no_country_for_old_men/

Why the IT industry is infatuated with younger workers.

A recent article in InfoWorld about the shrinking population of older IT workers hit me especially close to home. As a former programmer — pardon me, software engineer — who left the field in my mid-forties, I was interested in learning why the IT industry tends to shed its older, more experienced workers. According to the article’s author, Lisa Schmeiser, the reasons for this phenomenon are not what you might think.

For example, while age discrimination is alive and well, older workers, in general, have lower unemployment rates and higher salaries compared to their younger counterparts. In fact, the more money you make, the less likely you are to be unemployed. (This is true throughout the labor pool, not just the IT sector.) This would suggest that the industry should be well-populated with middle-aged techies. But apparently that’s not the case. Schmeiser writes:

A late-1990s study by the National Science Foundation and Census Bureau found that only 19 percent of computer science graduates are still working in programming once they’re in their early 40s. This suggests serious attrition among what should be the dominant labor pool in IT.

The idea that IT shops are filled with gray-bearded Unix geeks is a relic of the past. Today those same organizations are more likely to be populated with twenty-something Linux programmers.

Schmeiser cites some possible reasons the industry is shifting to a younger workforce, including a changing IT culture, the perceived lower price-performance of older workers, the devaluation of technical experience and skills, and the changing nature of the IT job. In fact, all of these are related, and have a lot to do with the shift from an engineering-focused culture to a business-focused culture as IT companies mature. In such an environment tech workers become commodities, with the older ones tending to become obsolete.

The attitude is summed up by this gem of a quote from former Intel CEO Craig Barrett, who was reputed to have said: “The half-life of an engineer, software or hardware, is only a few years.” The implication here is that years of experience with one set of technology — programming language, hardware architecture, what have you — is not applicable to the next job, so there is little reason to value such experience.

The result is that the more skilled, more specialized, and more expensive workers tend to get laid off first during a precipitating event, like when a company downsizes or shifts to a new set of products and technologies. Absent a layoff, the workers themselves often leave of their own accord as they are forced to accommodate new responsibilities or change their work habits. Schmeiser concludes:

Thus, the harsh reality may be that IT jobs — at least as they’re defined now — may be perpetually entry-level.

The entire text is worth a read, especially if you’re a young programmer or engineer who might be wondering what your career has in store for you. Of course, a follow-up piece on how to manage such a career would surely be appreciated. But that’s likely to require a much longer article.

The calendar says we are well on our way to winter, but for many technology companies, orders are starting to bloom like flowers after a spring rain. Strong and steady improvement, economists say, would suggest that the United States is truly emerging from the Great Recession.