Computer technologies for 2012

ARM processors became more and more popular during 2012. The article Power and Integration – ARM Making More Inroads into More Designs sums it up: it’s about power – low power, almost no power. A huge and burgeoning market is opening for devices that are handheld and mobile, have rich graphics, deliver 32-bit multicore compute power, include Wi-Fi, web and often 4G connectivity, and can last up to ten hours on a battery charge. The most obvious among these are smartphones and tablets, but an increasing number of industrial and military devices also fall into this category.

The rivalry between ARM and Intel in this arena is predictably intense because, try as it will, Intel has not been able to bring the power consumption of its Atom CPUs down to the level of ARM-based designs (Atom typically sits in the 1-4 watt range, while a single ARM Cortex-A9 core is in the 250 mW range). The article ARM’s East unimpressed with Medfield, design wins tells that Warren East, CEO of processor technology licensor ARM Holdings plc (Cambridge, England), is unimpressed by the announcements chip giant Intel has made about the low-power Medfield system-chip and its design wins. Intel, on the other hand, says Android will run better on its chips. Watch what happens in this competition.

Work on engineering Windows 8 for mobile networks is under way. The article Windows 8 Mobile Broadband Enhancements Detailed tells that using mobile broadband in Windows 8 will no longer require device-specific drivers and third-party software. This is thanks to the new Mobile Broadband Interface Model (MBIM) standard, which hardware makers are reportedly already beginning to adopt, and a generic driver in Windows 8 that can interface with any chip supporting that standard. Windows will automatically detect which carrier the device is associated with and download any available mobile broadband app from the Windows Store. MBIM 1.0 is a USB-based protocol for host-device connectivity on desktops, laptops, tablets and mobile devices. The specification supports multiple generations of GSM- and CDMA-based 3G and 4G packet data services, including the recent LTE technology.

The article The Cloud is Not Just for Techies Anymore tells that cloud computing has achieved mainstream status, so we demand more from it. That’s because our needs and expectations for a mainstream technology and an experimental technology differ. Once we depend on a technology to run our businesses, we demand minute-by-minute reliability and performance.

The article Cloud security is no oxymoron estimates that over $148 billion will be spent on cloud computing in 2013. Companies large and small are using the cloud to conduct business and store critical information. The cloud is now mainstream. The cloud computing paradigm requires cloud consumers to extend their trust boundaries outside their current network and infrastructure to encompass a cloud provider. There are three primary areas of cloud security that apply to almost any cloud implementation: authentication, encryption, and network access control. If you are dealing with those issues in software design, read the Rugged Software Manifesto and the Rugged Software Development presentation.
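Of those three areas, authentication is where trust most visibly crosses the boundary to the provider. As a rough sketch (the endpoint path, secret, and scheme here are hypothetical, not any particular provider’s API), many cloud APIs authenticate each request with an HMAC signature over the request details, so the shared secret itself never crosses the network:

```python
import hashlib
import hmac

# Hypothetical provider-issued credential (an assumption for illustration).
SECRET = b"shared-secret-issued-by-provider"

def sign_request(method: str, path: str, timestamp: int, secret: bytes = SECRET) -> str:
    """Sign the request details; the provider recomputes the same signature."""
    msg = f"{method}\n{path}\n{timestamp}".encode()
    return hmac.new(secret, msg, hashlib.sha256).hexdigest()

def verify(signature: str, method: str, path: str, timestamp: int) -> bool:
    # compare_digest avoids leaking information through timing differences.
    expected = sign_request(method, path, timestamp)
    return hmac.compare_digest(signature, expected)

sig = sign_request("GET", "/v1/buckets/reports", 1356000000)
print(verify(sig, "GET", "/v1/buckets/reports", 1356000000))  # -> True
```

Including a timestamp in the signed message also limits how long a captured request can be replayed.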

The article Enterprise IT’s power shift threatens server-huggers tells that as more developers take on the task of building, deploying, and running applications on infrastructure outsourced to Amazon and others, the traditional roles of system administration and IT operations will morph considerably or evaporate.

The article Explosion in “Big Data” Causing Data Center Crunch tells that global business has been caught off-guard by the recent explosion in data volumes and is trying to cope with short-term fixes such as buying in data centre capacity. Oracle also found that the number of businesses looking to build new data centres within the next two years has risen. Data centre capacity and data volumes should be expected to keep growing, which drives data centre capacity building. Most players active in the “Big Data” field seem to plan to use the Apache Hadoop framework for the distributed processing of large data sets across clusters of computers; at least EMC, Microsoft, IBM, Oracle, Informatica, HP, Dell and Cloudera are using Hadoop.
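The processing model Hadoop implements is MapReduce: a map step emits (key, value) pairs, the framework groups them by key, and a reduce step aggregates each group. A toy single-machine Python sketch of the idea (real Hadoop jobs run these phases distributed across the cluster, typically in Java or via Hadoop Streaming):

```python
from collections import defaultdict
from itertools import chain

# Map phase: each document emits (word, 1) pairs.
def map_phase(doc):
    return [(word, 1) for word in doc.split()]

# Reduce phase: sum the counts for each key after grouping.
def reduce_phase(pairs):
    counts = defaultdict(int)
    for word, n in pairs:
        counts[word] += n
    return dict(counts)

docs = ["big data big clusters", "big data"]
pairs = chain.from_iterable(map_phase(d) for d in docs)
print(reduce_phase(pairs))  # -> {'big': 3, 'data': 2, 'clusters': 1}
```

The point of the model is that map calls are independent, so the framework can run them on whichever cluster node holds the data.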

Acer, Asustek Computer and Lenovo are gearing up to promote their new touchscreen-based notebooks in 2013. Touchscreen devices are expected to account for 10-15% of overall notebooks shipped in 2013, according to sources at notebook companies.

Panel makers hold a more optimistic view, anticipating that shipments of touchscreen models will contribute more than 20% to total notebook shipments in 2013.

The availability of Microsoft Windows 8 is expected to reshape the PC market, with touch-capable notebooks to become mainstream, industry sources believe.

HP and Dell will see touch-capable devices account for 7-9% and 4-5%, respectively of their notebook shipments in 2013, the sources said.

It has been revealed that Microsoft is trying to make up for below-expectation earnings, following the lacklustre adoption rates of Windows 8 and Surface RT, by increasing the prices of its products by as much as 400 per cent.

The Windows 8 maker has been bleeding market share for quite some time now, facing increased competition from Apple’s Mac OS X and Linux variants on the desktop, and from Red Hat, CentOS and others against its Windows Server operating system. Redmond has even gone to the extent of blaming OEMs for the not-so-good adoption rates of Windows 8. Home and small business users have moved on to more mobile alternatives, and Microsoft has nothing concrete in this arena, with iOS and Android dominating the scene.

Microsoft has increased user CAL pricing by 15 per cent, SharePoint 2013 pricing by 38 per cent, Lync Server 2013 pricing by 400 per cent, and Project 2013 Server CAL pricing by 21 per cent. This strategy may very well hurt Microsoft, as it is out of step with a declining economy in which companies are offering incentives to their customers.

For much of the week, Microsoft has been trumpeting the strong start of Windows 8.

But on Thursday, NPD, the retail sales tracking firm, published data that painted a darker picture of the Windows 8 introduction.

Unit sales of Windows PCs in retail stores in the United States fell 21 percent in the four-week period spanning Oct. 21 to Nov. 17, compared to the same period the previous year, according to the firm.

NPD said sales of Windows 8 tablets had been “almost nonexistent,” accounting for less than 1 percent of all Windows 8 device sales.

The figures suggest that Windows 8 did nothing to arrest the downward trajectory of the PC business, much less lead to a rebound in a market that has been struggling for some time. “It hasn’t made the market any worse, but it hasn’t stimulated things either,” Stephen Baker, an analyst at NPD, said in an interview. “It hasn’t provided the impetus to sales everybody hoped for.”

Microsoft’s 40 million figure, in contrast, represents copies of Windows that Microsoft sells to all of its customers. That includes some consumers but more often it reflects sales to the hardware makers that install Windows on their machines, some of which have not yet been bought by consumers.

The Windows 8 debut looks like it had much less of a positive impact on PC sales than did its predecessor, Windows 7, which went on sale to the general public on Oct. 22, 2009.

The PC business in 2009 had much stronger unit sales than it has now, in large part because of a boom in the low-cost laptops known as netbooks. Fast forward to 2012, and sales of netbooks have nearly vanished, replaced by surging sales of the iPad and other tablets.

The global technology analyst firm Ovum expects the sustainable data center market to see accelerated growth in 2013, as the market becomes more focused on cost savings and more efficient internal IT delivery methods. Such methods may include virtualization, software-defined networks (SDNs), and the use of converged infrastructure solutions (the so-called “cloud-in-a-box”), says the firm.

Apple CEO Tim Cook revealed that one of the existing Mac lines will be manufactured exclusively in the United States next year, making the comments during an exclusive interview with Brian Williams airing tonight at 10pm/9c on NBC’s “Rock Center.” Mac fans will have to wait to see which Mac line it will be because Apple, widely known for its secrecy, left it vague.

This announcement comes a week after rumors in the blogosphere sparked by iMacs inscribed on the back with “Assembled in USA.”

Apple would not reveal where exactly the Macs will be manufactured.

“When you back up and look at Apple’s effect on job creation in the United States, we estimate that we’ve created more than 600,000 jobs now,” said Cook. Those jobs, not all Apple hires, vary from research and development jobs in California to retail store hires to third-party app developers. Apple already has data centers in North Carolina, Nevada and Oregon and plans to build a new one in Texas.

Apple and other manufacturers who have their gadgets produced by Foxconn were forced to defend production in China.

Given that, why doesn’t Apple leave China entirely and manufacture everything in the U.S.? “It’s not so much about price, it’s about the skills,” Cook told Williams.

Echoing a theme stated by many other companies, Cook said he believes the U.S. education system is failing to produce enough people with the skills needed for modern manufacturing processes. He added, however, that he hopes the new Mac project will help spur others to bring manufacturing back to the U.S.

“The consumer electronics world was really never here,” Cook said. “It’s a matter of starting it here.”

The question most taxing the minds behind the personal computer industry right now is how to persuade punters to spend their money not merely on new notebooks and desktops, but specifically on more powerful – and thus more expensive – machines.

All the evidence suggests they are currently not doing so. More problematically, they don’t need to.

IHS iSuppli, a market watcher, this week said it had found that only six per cent of the desktop PCs that have been and are yet to be sold this year are what techies might call a “performance” machine – a computer based on the latest processor, graphics and storage technologies. For notebooks, the figure is slightly higher: 9.2 per cent.

Instead, punters are focusing their interest on what iSuppli calls the “value” and “mainstream” segments – defined, respectively, as machines costing $500 (£314) or less, and those in the $500-1000 (£314-629) band – of the desktop and laptop markets. Each accounts for more than 45 per cent of the whole.

Of course, today’s low-cost computers were yesterday’s high-end machines, and it’s in the very nature of the computer business that high-end technology keeps being pushed down-market.

Pushing technology down-market means that a cheap PC today can deliver the same performance that a top-of-the-line machine provided two years ago. Some software – games and professional graphics programs, for instance – still involves crunching plenty of numbers, but by far the majority of applications ordinary folk run no longer need the latest, fastest processors or graphics chips.

Proof of that is the fact that so many of those applications – or versions of them – are being run on tablets and even smartphones.

Chip makers can no longer look to Microsoft to solve the problem by regularly updating its operating system with technology that demands the performance only the latest processors can deliver.

Quite apart from punters buying tablets instead of laptops and desktops, they are finding their existing PCs sufficiently powerful for the tasks to which they’re being put.

“After years of denial, most PC industry players still don’t seem to realise what is happening – and don’t have contingency plans,” says Reitzes.

Worse, they may be simply sticking their collective fingers in their ears and trying to carry on as before.

Intel, for one, is looking to Windows 8 to revive the replacement cycle and raise demand for pricier PCs containing its more expensive chips. It’s too early to say whether this will happen, but if even the new operating system’s own developer is promoting a tablet this Christmas – its own Surface product – this may not be a season of celebration for the PC industry.

The now released Red Hat Enterprise Virtualization (RHEV) 3.1 fully supports using the web portal for managing host systems that are used in virtualisation. This means that Red Hat has eliminated the last dependency on Windows systems from RHEV; in RHEV 3.0, which was released in January, the web interface that could be used with different browsers was still new and only available as a Technology Preview with no official support. Its predecessor, the Administrator Console, required Internet Explorer to operate and has been dropped in 3.1.

According to the release announcement, RHEV 3.1 offers increased scalability and supports guest systems with up to 160 logical CPUs and up to 2 terabytes of memory.

To hear Intel Fellow Matt Adiletta tell it, Chipzilla not only invented the term microserver but saw the trend towards wimpy computing coming well ahead of all this fawning over the ARM architecture and the half-dozen upstarts wanting to take big bites out of the Xeon server processor cash cow.

Adiletta, as it turns out, caught the microserver bug back in 2006, when it wasn’t even called that yet. In 2007 his team at Intel created what he calls a “CPU DIMM” about the size of a folded wallet, as he explained in a conference call with the press today, carrying either Atom or two-core Core desktop/laptop processors.

As far as Calxeda is concerned, putting ARM cores together with a distributed Layer 2 switch that scales to 4,096 nodes today, and to over 100,000 nodes in a few years, is the real engineering task with microservers – not welding an Ethernet NIC to an Atom processor. Having bought Ethernet chip maker Fulcrum Microsystems a few years back, Intel certainly could respond with something similar, but Adiletta was not there to provide an actual roadmap, rather to establish Intel’s cred in microservers and ramp up excitement for the Atom S Series.

“This has been a classic question from the communications space for a long time: do you go distributed or do you do centralized,” explained Adiletta when asked about integrated networking on the future Atoms.

What Intel should probably do is embed an Atom S chip on a Xeon Phi, use the PCI slot for power only, slap an InfiniBand port on it, stick a boatload of SATA or SAS ports on it for hard disks or SSDs, and throw away the Xeon node for all but the most serious single-threaded work where a brawny core is required. (We are only half joking here.)

“There is a lot of performance that we could gain by adding sophistication to our Atom cores,” Adiletta said. “I like where we are. We have the right tools in the toolbox and the management support to go out and do this.”

“Perhaps a sign of our troubled times or a sign that FreeBSD is becoming less relevant to modern computing needs”

Comments:

Mac OS took code one way from the main developers… and gave out free laptops to the others. It’s an example of how the spirit of sharing from BSD is not as strong as having a license enforce it. When a company gets involved with Linux the ecosystem gets stronger, rather than sort of meandering into obscurity [and no, throwing money at it in a PR stunt is not the answer]. The only sick thing is the amount of Apple users promoting BSD.

The improvements to the BSD were publicly known but who funded them never was.

Having worked on FreeNAS and its commercial counterpart, I can tell you that iX Systems, the folks behind FreeNAS, give quite a lot back to FreeBSD.

Since we made the switch to FreeBSD in 2004, providing various services such as proxying web usage or web access logging for corporations, we’ve never even considered another OS, as it’s been a rock-solid performer. Thousands of users in various locations are relying on our systems and, despite inept people accidentally unplugging some of them, failed UPSes and failed hard drives, they ruggedly truck on without issue.

Hopefully the front page posting will encourage other FreeBSD users to donate. According to Netcraft, there are certainly many servers in production, especially some of the more reliable ones, that are running FreeBSD.

Technically, FBSD seems to have done a fine job, but they need to be more proactive in growing their market presence. For one, they could partner w/ server manufacturers of various platforms.

The other thing FBSD can do is try selling itself against Linux. Here, they can adopt a two-pronged strategy – offer FBSD to any server vendor considering Linux as a server, and offer other alternatives based on the target applications. If it requires good SMP support or a special file system, consider DragonFly BSD. If it’s for routers and firewalls, promote pfSense or m0n0wall. If it’s for desktop or laptop use, promote PC-BSD. If it is for embedded applications, consider MINIX, or maybe one of the other BSDs. The main marketing strategy should focus on the technical advantages of FBSD and FBSD-based distros over Linux-based distros – things like backwards compatibility, stable APIs and ABIs, and so on. Use the licensing advantage only as icing on the cake. While some Linux shops may be dug in, others may be more open to such alternatives.

Rumours about Intel Corp.’s plan to abandon microprocessor sockets in future mainstream desktop platforms have emerged and then been partly denied.

Advanced Micro Devices promised to continue using sockets for its chips for at least another two years.

Considering the fact that the company will barely change its product line in the next couple of years, it is logical to expect the firm to remain committed to sockets.

“AMD has a long history of supporting the DIY and enthusiast desktop market with socketed CPUs and APUs that are compatible with a wide range of motherboard products from our partners. That will continue through 2013 and 2014 with the ‘Kaveri’ APU and FX CPU lines. We have no plans at this time to move to BGA-only packaging and look forward to continuing to support this critical segment of the market,” said Gary Silcott, a spokesman for AMD.

AMD will delay the roll-out of its Steamroller micro-architecture powered server and desktop processors until late 2014; hence, the firm will continue to sell FX-series chips in the AM3+ form-factor throughout 2013 and most of 2014.

It is logical to expect AMD to continue offering interchangeable high-end desktop central processing units after 2014 to provide necessary flexibility to enthusiasts.

HP may still be clinging onto the top spot in the global PC stakes but in the world of smart connected devices it is becoming less and less of a relevant player, market stats show.

The boxes – desktops, notebook, tabs and smartphones – sold globally in Q3 have been counted by abacus fondler IDC and the US titan has come out of it pretty badly.

Samsung leads the pack in terms of device shipments, growing 97.5 per cent to bag 21.8 per cent market share. Apple is next with 38.3 per cent growth to hold 15.1 per cent of all sales.

Lenovo – which is in the midst of a two horse race with HP in the PC space – is the third biggest shifter of devices on the planet, with sales up 60 per cent and its share climbing to seven per cent.

The decline is hardly a surprise – HP has no smartphone on the market, and won’t have anytime soon. Its presence in the slab space has been marginal since it shelved the TouchPad.

The bets are being placed on smartphones and tabs, which are projected to rise 95.9 and 131.2 per cent respectively over the forecast period, to own 70 per cent of all device sales.

A password-cracking expert has unveiled a computer cluster that can cycle through as many as 350 billion guesses per second. It's an almost unprecedented speed that can try every possible Windows passcode in the typical enterprise in less than six hours.
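That “less than six hours” figure is easy to sanity-check, assuming a typical enterprise passcode of at most 8 printable-ASCII characters (95 possible symbols per position):

```python
# Back-of-the-envelope check of the "less than six hours" claim.
keyspace = 95 ** 8        # every 8-character printable-ASCII password
rate = 350e9              # guesses per second
hours = keyspace / rate / 3600
print(round(hours, 1))    # -> 5.3
```

So an exhaustive sweep of the 8-character space at the quoted rate takes roughly five and a quarter hours, consistent with the claim.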

The five-server system uses a relatively new package of virtualization software that harnesses the power of 25 AMD Radeon graphics cards.

It achieves the 350 billion-guess-per-second speed when cracking password hashes generated by the NTLM cryptographic algorithm that Microsoft has included in every version of Windows since Server 2003.

The Linux-based GPU cluster runs the Virtual OpenCL cluster platform, which allows the graphics cards to function as if they were running on a single desktop computer.

The advent of GPU computing over the past decade has contributed to huge boosts in offline password cracking. But until now, limitations imposed by computer motherboards, BIOS systems, and ultimately software drivers limited the number of graphics cards running on a single computer to eight.

"Before VCL people were trying lots of different things to varying degrees of success," Gosney said. "VCL put an end to all of this, because now we have a generic solution that works right out of the box, and handles all of that complexity for you automatically. It's also really easy to manage because all of your compute nodes only have to have VCL installed, nothing else. You only have your software installed on the cluster controller."

The precedent set by the new cluster means it’s more important than ever for engineers to design password storage systems that use hash functions specifically suited to the job. Unlike MD5, SHA1, SHA2, the recently announced SHA3, and a variety of other “fast” algorithms, functions such as Bcrypt, PBKDF2, and SHA512crypt are designed to expend considerably more time and computing resources to convert plaintext input into cryptographic hashes. As a result, the new cluster, even with its four-fold increase in speed, can make only 71,000 guesses per second against Bcrypt and 364,000 guesses per second against SHA512crypt.
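The fast/slow distinction is about deliberate work per guess, and Python’s standard library is enough to illustrate it (the password, salt, and iteration count below are illustrative choices only, not recommendations for any specific system):

```python
import hashlib

password = b"correct horse battery staple"
salt = b"sixteen-byte-salt"

# "Fast" hash: one MD5 round per guess -- exactly what a GPU cluster
# can evaluate billions of times per second.
fast = hashlib.md5(salt + password).hexdigest()

# "Slow" hash: PBKDF2 iterates an HMAC many times, so every attacker
# guess costs 100,000 rounds of SHA-512 instead of one round of MD5.
slow = hashlib.pbkdf2_hmac("sha512", password, salt, 100_000)

print(len(fast), len(slow))  # -> 32 64
```

The iteration count is a tunable knob: defenders can raise it as hardware gets faster, keeping the per-guess cost high for attackers while remaining cheap for a single legitimate login.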

Billionaire entrepreneur Michael Dell has revealed that Autonomy was offered to him before it was bought by Hewlett-Packard, but that he rejected the British software firm because it was “overwhelmingly obvious” that it was overpriced.

Speaking in an exclusive interview with The Sunday Telegraph, the founder of Dell, the US computer giant, said that “any reasonable person” would have drawn the same conclusion.

His comments raise fresh questions over HP’s decision to pay $10bn (£6.3bn) for Autonomy last year – a 59pc premium to its market value at the time – and its subsequent claim that it only overpaid because the British company had cooked its books.

Michael Dell echoed their concerns, saying he had been hugely surprised at the size of the premium HP was willing to pay.

Given a choice, customers of a Pacific Northwest PC system builder overwhelmingly pick Windows 7 over the newer Windows 8, the company’s president said Thursday.

“Windows 7 is known, it has years of solid reputation behind it, but Windows 8 has gotten a mixed reaction in the press and social media, and the lack of a Start menu is a hot-button issue among our customers,” said Jon Bach, president of Puget Systems, an Auburn, Wash. independent PC builder.

Puget Systems is no Dell or Hewlett-Packard, but instead sells high-performance, built-to-order PCs.

Since Windows 8’s launch, between 80% and 90% of the systems sold by Puget were pre-installed with the three-year-old Windows 7.

Bach was surprised by the sales numbers.

“Before we looked at the data, I would have guessed that Windows 8 was 30% to 40%, but it’s just 10% to 20%.”

And it’s not like Puget hasn’t given Windows 8 a shot.

Three years ago, Puget saw no such hesitation to adopt Windows 7, in large part because of the dissatisfaction with Vista.

Summary: The latest major Linux kernel release is here, and it includes features that ARM developers and network administrators will love.

Only months after the arrival of Linux 3.6, Linus Torvalds has released the next major Linux kernel update: 3.7. The time between releases wasn’t long, but this new version includes major improvements for ARM developers and network administrators.

Programmers for ARM, the popular smartphone and tablet chip family, will be especially pleased with this release. ARM had been a problem child architecture for Linux. As Torvalds said in 2011, “Gaah. Guys, this whole ARM thing is a f**king pain in the ass.”

ARM got the message. Thanks to Olof Johansson, a Google Linux and ARM engineer, unified multi-platform ARM was ready to be included in Linux 3.7.

ARM’s problem was that, unlike the x86 architecture, where one Linux kernel could run on almost any PC or server, almost every ARM system required its own customized Linux kernel. Now with 3.7, ARM architectures can use a single vanilla Linux kernel while keeping their special device sauce in device trees.

The end result is that ARM developers will be able to boot and run Linux on their devices and then worry about getting all the extras to work. This will save them, and the Linux kernel developers, a great deal of time and trouble.

Just as welcome for ARM architects and programmers working on high-end systems, Linux now supports 64-bit ARM processors.

Website and network administrators will also be happy with Linux 3.7. TCP Fast Open will now be supported on servers. By eliminating a step in opening Internet TCP connections, TCP Fast Open can speed up Web page opening by 10 to 40%.
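On a Linux 3.7 kernel, a server opts in per socket via the TCP_FASTOPEN socket option, whose value is the length of the queue of pending Fast Open requests. A minimal sketch, assuming a Linux host (the queue length of 16 is an arbitrary choice; on kernels without TFO support the code simply falls back to a plain listener):

```python
import socket

def make_tfo_listener(port=0):
    """Create a listening TCP socket with TCP Fast Open enabled where supported."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    s.bind(("127.0.0.1", port))
    # 23 is the Linux value of TCP_FASTOPEN; newer Pythons expose the constant.
    tfo = getattr(socket, "TCP_FASTOPEN", 23)
    try:
        # The option value is the max queue of pending Fast Open requests.
        s.setsockopt(socket.IPPROTO_TCP, tfo, 16)
    except OSError:
        pass  # kernel without TFO support: plain listener, no functional change
    s.listen(5)
    return s

listener = make_tfo_listener()
print(listener.getsockname()[0])  # -> 127.0.0.1
listener.close()
```

Clients get the complementary benefit by sending data in the SYN (via sendto with MSG_FASTOPEN on Linux), which is the eliminated round trip the 10-40% figure comes from.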

FRAMINGHAM, Mass., December 10, 2012 – The worldwide smart connected device market – a collective view of PCs, tablets, and smartphones – grew 27.1% year-over-year in the third quarter of 2012 (3Q12), reaching a record 303.6 million shipments valued at $140.4 billion. Expectations for the holiday season quarter are that shipments will continue to reach record levels, rising 19.2% over 3Q12 and 26.5% over the same quarter a year ago. According to the International Data Corporation (IDC) Worldwide Quarterly Smart Connected Device Tracker, 4Q12 shipments are expected to reach 362.0 million units with a market value of $169.2 billion.
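IDC’s holiday-quarter projection is internally consistent, as a quick check of the unit figures shows:

```python
# IDC: 303.6 million units shipped in 3Q12, 4Q12 forecast to rise 19.2%.
q3_units = 303.6                  # millions of units, 3Q12
q4_forecast = q3_units * 1.192    # +19.2% quarter-over-quarter
print(round(q4_forecast, 1))      # -> 361.9, matching IDC's 362.0 million
```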

“The battle between Samsung and Apple at the top of the smart connected device space is stronger than ever,” IDC said.

Looking forward, IDC expects the worldwide smart connected device space will continue to surge well past the strong holiday quarter, and predicts shipments to surpass 2.1 billion units in 2016 with a market value of $796.7 billion worldwide. IDC’s research clearly shows this to be a multi-device era.

There has been some Twitter chatter about the closure of silverlight.net, Microsoft’s official site for its lightweight .NET client platform, multimedia player and browser plug-in.

One of the things this demonstrates is how short-sighted it is to create these mini-sites with their own top-level domain. It illustrates how fractured Microsoft is, with individual teams doing their own thing regardless. Microsoft has dozens of these sites, such as windowsazure.com, windowsphone.com, asp.net, and so on; there is little consistency of style, and when someone decides to fold one of these back to the main site, all the links die.

What about Silverlight though? It was always going to be a struggle against Flash, and it is easy to find flaws, but Silverlight was a great technical achievement, and I see it as client-side .NET done right: lightweight, secure, and powerful. Microsoft should have retained the cross-platform vision it started with.

The reasons for the absence of Silverlight in the Windows Runtime on Windows 8, and in both Metro and desktop environments in Windows RT, are likely political.

Dutch hardware hacker, Emile Nijssen (nickname Mux), claims he has built the world’s most efficient high-end desktop computer: An Intel Core i5-3570K with 16GB of RAM, 64GB SSD, and other assorted bits, that consumes just 5.9 watts when idling and 74.5 watts at full load. Your desktop PC, by comparison, draws around 30 watts while idle and 150 watts at full load (while playing Angry Birds, or surfing a Flash website).

Mux has a bit of a history when it comes to ultra-efficient computers.

How does one go about building a 5.9-watt computer? Well, fortunately Mux is one of those hardware hackers who takes lots of photos, produces his own illustrative diagrams and graphs, and records everything that he does in minute detail.

Google Inc. (GOOG)’s Android is extending its lead over Apple Inc. (AAPL) in the mobile-software market at a rate that compares with Microsoft Corp. (MSFT)’s expansion in desktop software in the 1990s, Google Chairman Eric Schmidt said.

Booming demand for Android-based smartphones is helping Google add share at the expense of other software providers, Schmidt said yesterday in an interview at Bloomberg’s headquarters in New York. Android snared 72 percent of the market in the third quarter, while Apple had 14 percent, according to Gartner Inc. Customers are activating more than 1.3 million Android devices a day, Schmidt said.

“This is a huge platform change; this is of the scale of 20 years ago — Microsoft versus Apple,” he said. “We’re winning that war pretty clearly now.”

“The core strategy is to make a bigger pie,” he said. “We will end up with a not perfectly controlled and not perfectly managed bigger pie by virtue of open systems.”

Getting hosed by your Internet service provider may seem as inevitable as death and taxes, but a new startup aims to change that.

Startup FreedomPop, which is backed by Skype co-founder Niklas Zennstrom, DCM and Mangrove Capital, provides cheaper Internet access and the ability for people to share access with others on its network. In exchange for sharing their Internet access, they get credits for more free Internet access. The company has the potential to be as disruptive to the broadband industry as Skype is to voice.

FreedomPop has a service for sharing and receiving mobile Internet access through its iPod Touch cases (iPhone coming soon).

FreedomPop also plans to release a new “open Wi-Fi” local-sharing Internet service through its devices, CEO Stephen Stokols exclusively told FORBES. This new feature will enable FreedomPop devices to share their broadband access with others nearby by using two SSIDs.

Stokols believes this service will disrupt others such as FON, another free Wi-Fi startup. That’s because FON cuts deals with large telecommunications providers such as BT, while FreedomPop doesn’t need to.

FreedomPop is now also entering the home market, with a free home broadband product called FreedomPop Hub Burst that uses Clearwire WiMax, the company is announcing today.

The service is designed for people who do not use a massive amount of data – that is, don’t stream Netflix or other video – but mostly use email or other “lighter” services. Users who want more data can pay $10 per month, which is still much cheaper than a typical $50 DSL or cable bill.

Why share your Wi-Fi? If you can share (or rent) your house or car on collaborative consumption sites like Airbnb, RelayRides or Getaround, why not share some of your Wi-Fi access in exchange for more access?

C++ 11 is “far better than previous versions”, says the inventor of the language Bjarne Stroustrup.

C++ is an ISO standard, first ratified in 1998, with C++ 11 completed in 2011. Stroustrup revealed he was initially resistant to standardisation efforts.

“It took some arm-twisting to get me to realise that it was time to start a standards effort,” he said. “People pointed out that you couldn’t have a language used by millions controlled by a single guy in a single company. Even if you could trust the guy, you can’t trust the corporation. I was a bit sad, because the things I wanted to do would take years instead of months, because you have to build up consensus, and then you have to wait for five compilers to catch up.”

“On the other hand the fundamental argument is correct. If you want something that is really widely used, you need some kind of standard.”

“A lot of the languages that are seen as competitors to C++ are owned by a single corporation, but they do tend to fight with all the other corporations, and portability is a really hard thing to achieve.”

Despite its wide usage, C++ is among the most complex programming languages and hard to learn. Stroustrup says the solution is not to attempt to learn everything, and that C++ 11 is easier than before.

“One of the things that was limiting C++ 98 [the previous standard] was that you could build really good resource handles – like vector, or istream, or thread – but they are hard to move around because this is computers and everybody knows that you copy things. In the real world you don’t copy things.”

Stroustrup does not favour a “dumbing down” of the language. “People always want to simplify the language to do exactly what they want, but to be part of a huge global and multi-industry community the language has to support things you would never do in your field. I see the language as a general purpose tool, and coding standards as specifying what you can do in a specific domain.”

“The quality of teaching C++ has gone down over the last 10 or 15 years. It’s gone away from how to write a good program, to here is this long list of features you should understand. It is easy to teach a list of features, but hard to teach good programming.”

So what are programmers doing wrong? One thing is too much use of inheritance. “It is obviously hugely overused,” he says.

He also takes care to distinguish “implementation inheritance, where in some sense you want a deep hierarchy so that most of the implementation is shared, and interface inheritance – where you don’t care, all you want to do is to hide a set of implementations behind a common interface. I don’t think people distinguish that enough.”

Another bugbear is protected visibility.

“If I say protected, about some data, anybody can mess with it and scramble my data.”
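His point can be illustrated with a small, hypothetical example: protected data gives every derived class, present and future, direct write access to the base’s state, whereas private data behind a narrow protected interface keeps the invariants enforced in one place:

```cpp
// Protected data: any derived class can scramble the base's state.
class Counter {
protected:
    int count_;  // risky: every subclass can write this directly
public:
    Counter() : count_(0) {}
    int value() const { return count_; }
};

class Sloppy : public Counter {
public:
    void scramble() { count_ = -999; }  // nothing stops this
};

// Safer: keep the data private, expose a narrow protected interface.
class SafeCounter {
    int count_ = 0;  // private: invariants enforced in one place
protected:
    void increment() { ++count_; }  // the only way subclasses can change it
public:
    int value() const { return count_; }
};
```

With `SafeCounter`, a subclass can still participate in the design, but it can only move the counter forward; it can no longer “mess with it and scramble” the data.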

Macros are another issue. “For most uses of macros, there is something better in C++. The exceptions are source code control using #if and #ifdef. This will stay that way until we get some kind of module system.”

Dell vice chairman Jeff Clarke made a less than shocking announcement at this year’s Dell World Conference in Austin. The company is officially giving up on Android phones and tablets. From a hardware perspective it’s not surprising consumers didn’t embrace Dell’s lackluster attempts at mobile devices, but you could also make the argument the company predicted the rise of mammoth smartphones, a market Samsung ran away with.

So why dump Android? According to Clarke, “It’s a content play with Android”.

So if Dell is giving up on Android, what comes next? The company claims it’s doubling down on Windows 8 and the enterprise market.

Windows users were surprised to find that a Microsoft security update stopped fonts from working on their PCs.

Security update KB2753842 has killed certain fonts on PCs where it has been installed, rendering many of them unusable, and causing problems for designers and businesses who rely on using the types in their work.

Uninstalling the patch restores the fonts, but this presumably leaves users open to the security risks that Microsoft was trying to fight.

If you see the phrase “any time, any place, anywhere” in relation to mobile access, and are tempted to point out the language redundancy (any place, anywhere), then you are probably not old enough to remember the birth of client-server in the late ’80s and early ’90s.

If, however, cheesy music from a Martini ad is now running through your head, you probably were there and can recall exactly how client-server panned out.

You will remember how liberating it all seemed.

Over the years, of course, the realisation dawned that client-server brought with it as many problems as it solved. As client machines multiplied, developers ended up having to develop and test for a whole range of workstation specs and environments, and whenever something changed operations staff had to worry about getting new versions of software out to every desktop.

As support became more complicated and users discovered that an intelligent client with local storage meant they could create their own little offline empire, the overhead, costs and risks began to escalate.

Following a period of re-centralisation using Web-based architectures, it looks as if we are beginning to come full circle. When some of us old-timers see how the next generation is getting all excited about using mobile apps as front-ends for accessing services across the network, we can’t help noticing parallels with the past.

Target practice

With regard to endpoint proliferation, it is not just a matter of deciding whether to support the iPad or iPhone, popular Android devices, less popular Android devices, old Windows Phone 7 handsets, new Windows Phone 8 handsets, Windows RT, BlackBerrys, Symbian phones, and so on (phew).

Driven by consumer-calibrated release cycles, devices are superseded within three to six months, which means each platform is a moving target in its own right.

But surely we have as our friend that ubiquitous access mechanism known as the browser? With a few tweaks of the server-based application to deal with different screen sizes and browser standards, and knowing that all data stays on the server, can’t we be pretty relaxed about all that client-side diversity?

If only that were the case.

In the real world, the fast and reliable connectivity upon which this model depends just isn’t there in most countries at the moment – hence you quickly get back to local applications and offline data storage, with a heavy reliance on replication and synchronisation for more critical applications.

But at least HTML5 and cross-platform development and execution environments are now with us to save us from all of the historical overhead associated with client-side software. Or are they?

The debate continues to rage about whether HTML5 cuts it, and about whether cross-platform environments pose too much risk of lock-in, not to mention user interface compromises, so native apps keep accumulating.

The emergence of mobile device management and mobile application management solutions that allow us to monitor and control everything out there can help, but the truth is that it is all pretty fluid at the moment.

Given that Google and Amazon have launched 7-inch tablets at US$199, other vendors can offer 7-inch tablets below US$150 only by adopting cheaper components, according to Taiwan-based TrendForce.

As panels and touch modules together account for 35-40% of the total material costs of a 7-inch tablet, replacing the commonly used 7-inch FFS panels with 7-inch TN LCD panels accompanied by additional wide-view angle compensation could save over 50% in panel costs, TrendForce indicated. In addition, replacing a G/G (glass/glass) or OGS (one glass solution) touch module with a G/F/F (glass/film/film) one, although inferior in terms of transmittance and touch sensitivity, can cut costs by about 70%. Thus, the adoption of a TN LCD panel and a G/F/F touch module for a 7-inch tablet could reduce material costs by about US$25, TrendForce said.

Given that the type of DRAM affects only standby time as far as user experience is concerned, costs can be further reduced by replacing 1GB of mobile DRAM priced at about US$10 with 1GB of commodity DRAM priced at about US$3.50.

The Worldwide Web Consortium (W3C) has moved ahead with plans to develop the next two versions of the HTML web markup language, having released new draft specifications of HTML5, HTML 5.1, and related standards.

On Monday, the web standards body published the first “candidate recommendation” of HTML5, bringing the standard to a level that indicates its features are mostly locked and that future significant changes are unlikely.

“As of today, businesses know what they can rely on for HTML5 in the coming years, and what their customers will demand,” W3C CEO Jeff Jaffe said in a statement. “Likewise, developers will know what skills to cultivate to reach smart phones, cars, televisions, ebooks, digital signs, and devices not yet known.”

WHATWG’s work was adopted by the W3C HTML Working Group in 2007, and the current HTML5 specification combines the work of both organizations, although each takes a somewhat different approach.

In September, recognizing that its formal standardization process might be just a touch too glacial, the W3C announced that it would defer some features of its proposed HTML5 spec until a later version, which would be known as HTML 5.1.

The group published the first draft of the HTML 5.1 spec on Monday, simultaneous with the release of the latest HTML5 draft.

The cost of supplying IT services inside businesses has never been more visible, with much marketing attention focusing on the question “Why aren’t you using cloud-based services instead of running your own systems?”

More than ever, IT departments are having to justify their funding and show they are doing a good job. Just how will financing and budget models need to change in the coming years as business pressures on IT services continues to ramp up?

For the past two or three decades the bulk of major IT infrastructure spend has been directed at new or upgraded applications, resulting in data centres filling up with servers, each having its own storage system, operating in isolation and running a single piece of business software.

Even as IT technology has developed to allow servers to run multiple virtualised and shared applications and storage platforms, many organisations have continued to operate their computer systems as a series of separate islands.

Today, however, business needs may change very rapidly, placing great pressure on IT to respond to new requests with little time to plan.

Resource optimisation: x86/x64 Servers

The systematic overprovisioning of IT systems and their resulting under-utilisation has been one of the main drivers behind x86 server virtualisation.

Storage

Organisations are facing the considerable challenge of storing ever greater volumes of data while providing access to it from an expanding portfolio of devices at all times.

Software

As IT infrastructure becomes more flexible through server and storage virtualisation, software licensing models will also have to evolve if maximum business value is to be delivered.

Systems integration

Systems integration and optimisation are areas where IT professionals always expend considerable effort, so the question arises: is it better to try to build systems from distinct pools of servers, storage and networking, or easier and faster to buy pre-configured solutions with all the elements already assembled in the box?

Budgets and funding

At least three possible approaches to addressing the procurement and budgeting problem are commonly encountered.

It’s good to share

The efficient delivery of IT services will depend more and more on the use of shared pools of IT resources, running inside or outside the data centre.

We might well be heading for an era of Darwinian evolutionary change, with many things being tried, some successfully, others less so.

Tomi Engdahl says:

“After dropping 20% in the second quarter of 2012 alone, SSD prices fell another 10% in the second half of the year. The better deals for SSDs are now around 80- to 90-cents-per-gigabyte of capacity, though some sale prices have been even lower”

“At the same time, hard disk drive prices have remained “inflated” — about 47% higher than they were prior to the 2011 Thailand floods”

Perl, the open source programming language used by developers and sysadmins to automate any number of text-wrangling and data-management tasks, celebrates its 25th birthday on Tuesday.

It was on December 18, 1987 that Larry Wall released Perl 1.0, posting the source code to the Usenet newsgroup comp.sources.misc.

By the time Perl 5 shipped in 1994, it had developed into a full-fledged general programming tool.

Around the same time, web developers began adopting Perl as the go-to language for coding CGI scripts, an early method of developing web applications. The fact that Perl is an interpreted language made scripts quick to write and easy to debug, and its strong text-processing capabilities made it ideally suited for outputting complex HTML.

Perl has fallen out of favor for web development somewhat in recent years, its role having in large part been subsumed by more recent upstarts such as PHP, Python, and Ruby.

Eighteen years after Perl 5 was released, it still remains the most popular version of the language, with the current stable version of that branch numbered 5.16.

Separately, however, a portion of the Perl community has moved on to Perl 6, a troubled rewrite that intentionally breaks compatibility with earlier versions. Despite good intentions and lofty goals, Perl 6 has remained in “active development” for over a decade, yet is still considered “not production ready”.

Digia released the new 5.0 version of the Qt development environment on Wednesday. Digia acquired the Qt technology in its entirety from Nokia earlier this fall, and 5.0 is the first major release of the development platform under Digia.

Qt 5 comes with an integrated Qt WebKit browser engine, so online content can be embedded into applications. Applications developed with the previous Qt 4 version can be moved to Qt 5 by recompiling them.

Qt 5 uses the OpenGL ES API for graphics.

The company promises full Android and iOS support next year.

Customers should monitor their service providers’ activities more closely. That was the assessment of Paolo Abarca, a consultant at the IT consulting firm Embrasser, in Computer Sweden.

“One of the problems is that the suppliers do not have the functionality that they promise. They sell air,” Abarca said.

Similarly, functionality that does exist is often not verified in practice: a company may, for example, say that its data is backed up, but restoring from a backup may never actually have been tried.

Competition heated up as firms tried to cut power and improve graphics performance

CHIP VENDORS have concentrated their efforts in improving GPU performance in 2012, with AMD, Intel and Nvidia all pushing the graphics – and general purpose GPU computing – capabilities of their products thanks to manufacturing improvements. The INQUIRER has a look at the major semiconductor vendors and how they fared during 2012.

Netbooks – those compact, underpowered, inexpensive notebook PCs once hailed as the future of mobile computing – are set to disappear from retailer shelves in 2013, as the last remaining manufacturers of the devices prepare to exit the market.

According to Taiwanese tech news site DigiTimes, Acer and Asus are the only two hardware makers still producing netbooks, and they are mainly doing so to sell them to emerging markets such as South America and Southeast Asia.

“A new version of GNU C Library (glibc) has been released and with this new version comes support for the upcoming 64-bit ARM architecture a.k.a. AArch64. Version 2.17 of glibc not only includes support for ARM, it also comes with better support for cross-compilation and testing”

Tomi Engdahl says:

“The Free Software Foundation is on an offensive against restricted boot systems and is busy appealing for donations and pledge in the form of signatures in a bid to stop systems such as the UEFI SecureBoot from being adopted on a large-scale basis and becoming a norm in the future. The FSF, through an appeal on its website, is requesting users to sign a pledge”

Tomi Engdahl says:

We, the undersigned, urge all computer makers implementing UEFI’s so-called “Secure Boot” to do it in a way that allows free software operating systems to be installed. To respect user freedom and truly protect user security, manufacturers must either allow computer owners to disable the boot restrictions, or provide a sure-fire way for them to install and run a free software operating system of their choice. We commit that we will neither purchase nor recommend computers that strip users of this critical freedom, and we will actively urge people in our communities to avoid such jailed systems.

A five-year lifespan turned out to be all that netbooks got. Acer and Asus are stopping manufacture from 1 January 2013, ending what once looked like the future of computing.

The end of 2012 marks the end of the manufacture of the diddy machines that were – for a time – the Great White Hope of the PC market.

Still, there’s an eWeek article from July in which ABI says that “consumer interest in netbooks shows no sign of waning, and the attraction remains the same: value rather than raw performance.”

Actually, the number sold in 2013 will be very much closer to zero than to 139m. The Taiwanese tech site Digitimes points out that Asus, which kicked off the modern netbook category with its Eee PC in 2007, has announced that it won’t make its Eee PC product after today, and that Acer doesn’t plan to make any more; which means that “the netbook market will officially end after the two vendors finish digesting their remaining inventories.”

As he also pointed out then, a key factor in that slowdown was that Linux didn’t work well as an OS for users who were expecting to run PC software – which meant that Windows XP had to be pressed into the task. But that meant cleaving to Microsoft’s demands.

The promise of the netbook was that it would be more portable, have longer battery life, and run all the software you needed. With the overall PC market shifting towards more and more replacements, the netbook arrived at the right time to create a “first-time” market – of people buying a machine purely for its portability and/or battery life.

Netbooks are dead. Good riddance! Just a few years ago, these small, underpowered, ultracheap laptops were considered the future of the computer industry. In 2008 and 2009, recession-strapped consumers around the world began snapping up netbooks in droves. They became the fastest-growing segment of the PC market, and some wild-eyed analysts were suggesting that netbook sales would soon eclipse those of desktops and regular laptops combined. That didn’t happen. Over the past couple years the netbook market crashed. Now, as Charles Arthur reports in the Guardian, most major PC manufacturers have stopped making these tiny machines. The last holdouts were the Taiwanese firms Acer and Asus. Both say they won’t build any netbooks in 2013.

ARM Holdings, the design and licensing company behind the ARM processor architecture, unmasked its 64-bit Cortex A50 processor designs in October 2012, and AMD, Samsung Electronics, and Cavium have licensed those designs. AMD and Cavium have admitted that they will be using these ARMv8 architecture chips in servers, and Samsung is widely believed to be working on server parts as well, but has not confirmed its plans. Marvell has aspirations in the ARM server space, too, and has Dell building experimental boxes using its ARM designs and related networking chips.

The battle pitting ARM chips against X86 processors in the data center – mostly Intel Xeons and now Atoms – is not just about low-energy processing, but also about virtualization, networking, and a more integrated data-center design.

If you are wondering why Intel spent the past year acquiring the supercomputer interconnect business from Cray, the InfiniBand business from QLogic, and the Ethernet business from the formerly independent Fulcrum Microsystems, it was to get access to interconnect experts and to figure out when and how interconnects – the next logical piece of the hardware stack – can be integrated onto the processor chip complex.

As we discussed at length in November, former Intel chip boss and now VMware CEO Pat Gelsinger thinks that the future is ARM and Intel on the endpoints and Intel in the data center. Specifically, by 2015 the analysis that Gelsinger’s staff at EMC put together for the Hot Chips 24 conference shows most of the processor and chipset money either in the data center or on end points.

Mobile devices based on non-x86 architectures in the EMC model are expected to be the largest part of the IT ecosystem, pushing around $34bn in chip and chipset revenues, followed by mobile x86 devices (mostly laptops but some tablets and smartphones) driving maybe $27bn in revenues in CPUs and chipsets. That leaves x86-based servers driving around $18bn in revenues in 2015 and x86-based PC desktops with a mere $5bn in processor and chipset sales.

To Gelsinger’s way of thinking, ARM on the endpoint and x86 in the data center becomes the new normal because of the size of the software investment on each side.

Calxeda: This is the first silicon etcher to jump into the ARM server fray back in November 2011 with a custom quad-core Cortex-A9 chip that integrated processing and interconnect onto a single chip.

This year, Calxeda will move to a Cortex-A15 core with a chip code-named “Midway” that sports 40-bit memory addressing, boosting the memory on a four-core chip to 16GB.

Sometime in 2014 – about a year after Midway ships – Calxeda will move to the ARMv8 core from ARM Holdings with its “Lago” system-on-chip, providing 64-bit processing and memory addressing.

Applied Micro Circuits: This company is backing into the server chip business from the networking chip and embedded processor markets where it has been making its living in the hopes of carving out a big, juicy, profitable slice of the server racket.

The company launched its X-Gene multi-core SoC based on the ARMv8 design in October 2011, a year before ARM Holdings put out the full ARMv8 specs as embodied in the Cortex-A53 and Cortex-A57 reference designs.

The next generation X-Gene will be shrunk using TSMC’s 28nm process, and will have a total of 16 cores running at 3GHz.

Marvell: It has been more than two years since this chip maker launched its Armada XP ARMv7 derivatives aimed at servers, and the company has gotten some traction with its silicon.

What Nvidia did announce in January 2011 was an effort called Project Denver, which will create custom 64-bit ARM processors that the company will embed on future “Maxwell” GPU chips, due in 2013 and offering around five times the gigaflops per watt as the current “Kepler” series of GPUs.

AMD: If AMD has a code name for its future ARM Cortex-A57 Opteron processors, El Reg doesn’t know about it.

Cavium: This supplier of network processors based on the MIPS architecture is expanding out to ARM chips through an effort called Project Thunder, launched last August.

Samsung: In many ways, Samsung is the wild card in the ARM server-chip racket.

Worldwide PC shipments totaled 89.8 million units in the fourth quarter of 2012 (4Q12), down 6.4% compared to the same quarter in 2011 and worse than the forecasted decline of 4.4%, according to the International Data Corporation (IDC) Worldwide Quarterly PC Tracker.

Although the quarter marked the beginning of a new stage in the PC industry with the launch of Windows 8, its impact did not quickly change recently sluggish PC demand, and the PC market continued to take a back seat to competing devices and sustained economic woes. As a result, the fourth quarter of 2012 marked the first time in more than five years that the PC market has seen a year-on-year decline during the holiday season.

The lackluster fourth quarter results were not entirely surprising given the spate of challenges the PC market faced over the course of 2012. IDC had expected the second half of 2012 to be difficult.