Monthly Archives: August 2014

“It doesn’t matter what those around you do, ignore them, you give your best on every job no matter what it is, taking out the garbage, washing dishes, it doesn’t matter what the job is. You should show pride in your work. Give 100% and you WILL be recognized for it…”

I started working at age 7, selling fruit from our fruit trees around the neighborhood, selling lemonade on the side of the street, and mowing the neighbors’ lawns, and at age eleven I started digging ditches for my first real job with a steady paycheck. These words of advice stuck in my head and have stood me in good stead my entire working life. After the US Navy, I worked my way from dishwasher to sous chef in gourmet French and Italian restaurants before getting my first computer at age 30, which changed my passions (and life) overnight. I have never been afraid of starting at the bottom, working my way up, and proving myself.

There is a spot just off the MIT campus in Cambridge, Massachusetts, that is home to what may be the world’s densest concentration of startup companies. There, near the edge of Kendall Square, the founders of more than 450 startups crowd into nine floors. Some occupy common rooms where the rule is “Grab any seat you can.”

On a heat map of innovation, the place is glowing bright red. Sharing the same elevator banks are venture capital firms that collectively manage funds totaling $8.7 billion. Fifteen years ago, the local tech scene was anemic and there were few investors. Now Kendall is a beacon that’s drawing more and more technology companies. Amazon has moved a mobile development team to the area, Google has expanded quickly into new buildings, and drug companies are piling in, too.

Kendall has become what economists call a cluster, a concentration of interconnected companies that both compete and collaborate. There’s economic value in that, as the price of office space attests: rents have spiked to $70 per square foot from half that a decade ago, similar to what you’d pay in midtown Manhattan. “Rents don’t lie,” says Tim Rowe, head of the Cambridge Innovation Center, the shared office space where most of the startups are located.

There’s value to the region as well. Cities used to try to win jobs by “smokestack chasing,” or luring big industries. But large existing firms tend to shed jobs, research has found. At least in the United States, net job growth comes from startup companies, especially the kind that explode from a few employees to several thousand. In technology, those winners have a way of producing more winners. The process reaches critical mass in the web of intertwined companies, resources, advantages, ideas, talent, opportunity, and serendipity that defines a technology cluster.

It’s clear that what’s essential is proximity to human talent and new ideas. Jean-François Formela, a venture capitalist at Atlas Venture who invests in early-stage biotechnology startups, says he visits Boston-area academic labs several times a week, trying to find the next invention that he can license and turn into a company. And because there are so many PhDs and MDs in the area, he can start a company and build a team remarkably fast. “People don’t even have to change buildings,” he says. “They just switch floors.”

The big questions in this month’s MIT Technology Review Business Report are why technology clusters arise and what the ingredients are to create one (see “Silicon Valley Can’t Be Copied”). Unhappily for regions that have spent billions attempting to become the next Silicon Valley, the answers to these questions are still in debate. “Clusters exist—it’s empirically proven,” Yasuyuki Motoyama, a senior scholar at the Kauffman Foundation, told me. “But that doesn’t mean governments can create one.”

What’s certain is that they are trying. The largest such effort we know of is the Skolkovo complex outside Moscow, where $2.5 billion is being invested in a university, a technology park, and a foundation. Another, in Waterloo, Ontario, aims at gaining a lead in a particular advanced technology, quantum computing. The price tag there: more than $750 million.

The problem for governments is that they often try to define where and when innovation will occur. Some attempt to pick and fund winning companies. Such efforts have rarely worked well, says Josh Lerner, a professor at Harvard Business School. Governments can play a role, he says, but they should limit themselves mostly to “setting the table”: create laws that don’t penalize failed entrepreneurs, reduce taxes, and spend heavily on R&D. Then get out of the way.

Still, there’s no recipe that guarantees success. One reason is that some hard-to-copy ingredient—a fluke of history or culture—often helps explain the vibrancy of a technology hub. Take Israel, where per capita venture capital investment is the highest of any country. Most young people go through compulsory military service, where they are exposed to advanced technology and learn teamwork. Google chairman Eric Schmidt, after visiting last summer, was impressed by Israel’s unique “live for today” attitude toward taking entrepreneurial risks (see “Israel’s Military-Entrepreneurial Complex Owns Big Data”).

Even so, a wider group of cities and regions now aspire to become technology hubs. One reason is that the Internet has spread both the ideology of startup culture (you, too, can be Mark Zuckerberg) and the means of participating through apps and Web software. Now every place from Chile to Iceland to Adelaide, Australia, seems to have created a startup program in an effort to jump-start its own technology scene without expensive laboratories or even a top university.

One proponent of this idea is Brad Feld, a partner at Foundry Group and a creator of the technology company accelerator TechStars, who developed what he calls the “Boulder Thesis” based on his experiences in Colorado (see “It’s Up to You, Entrepreneurs”). It is a four-point plan for how entrepreneurs—not governments or universities—can organize and create what he terms “entrepreneurial communities” in any city. Feld says the startup movement is now an “enormous global community with tens of thousands, hundreds of thousands, of people around the world.”

But can entrepreneurs succeed in creating clusters where governments have had so much difficulty? “The conflict now is between two logics on how to create an ecosystem,” says Fiona Murray, a professor at MIT’s Sloan School, who consults as a kind of therapist to clusters, including London’s TechCity. One is “a government logic that says it’s too important to leave to entrepreneurs, and that you need specialized inputs, like a technology park.” The other is “purely focused on people and their networks.”

Murray believes the answer lies somewhere in the middle. Governments are good at organizing but poor at leading. One popular approach these days is to pair entrepreneurship programs with urban revitalization projects. In this issue, we visit Zappos CEO Tony Hsieh, who is trying to morph Las Vegas’s depressed downtown into a scene for startups. He’s trying to make it a cool place to be, and because Las Vegas is so spread out, he’s reserved 100 Tesla electric sedans to ferry entrepreneurs around town. That way, he says, he’ll increase the odds of serendipity (see “Zappos CEO Bets $350 Million on a Las Vegas Startup Scene”).

The risk of all these plans is that economists still don’t agree on exactly what levers must be pulled to create a technology cluster. But there is one finding they agree on. Centers of innovation do move, sometimes rapidly, and they tend to go where the latest mousetrap was invented. Boston gave up its lead in computing to Silicon Valley in the 1980s, after the personal computer was developed. But who knows? One of those 450 startups in Kendall might just hit upon something big. That’s a reason that any place can still hope—with a few decades of effort, and plenty of luck—to become a Silicon Valley too.

The New Paradigm or a Stale Idea?

However, some people are reluctant to buy into the “clustering” paradigm. One of them is Jarrett Neil Ridlinghafer, a startup CTO and the founder of Synapse Synergy Group, Inc. (SSG), who started his career nearly twenty years ago as employee number 13 in Netscape’s technical support department in 1995 (where he watched the fastest-growing startup in history expand to almost 4,000 employees in only four years) and is now the founder of his sixth self-funded startup and eighth overall. He says:

“We wanted to prove a new model for the 21st century, one that is global in nature, just as the internet is. We wanted to ‘walk the talk’ and do something everyone told us couldn’t be done, so we did. We started a 100% virtual think-tank, incubator & accelerator model, and literally everyone we approached with the idea, from well-known angels to respected businessmen and VCs, said it couldn’t succeed. Well, that’s the wrong thing to say to me, because it’s all I need to go prove you wrong. And we have proven the skeptics wrong and are writing a Kindle book about the experience and what we believe is a brilliant new startup paradigm/model for the 21st century. It has not been easy, but we pushed through the rough spots, and the benefits we discovered during the past year have far exceeded any perceived negative aspects, such as not being crammed elbow to elbow in a warehouse together, which personally is not appealing to me as an innovator.”

He claims to have saved 90% in startup costs using their model, an estimated $3,000,000 saved in their first ten months alone. And to those who exclaim that you cannot have the same quality of collaboration in a virtual, remote environment, he boasts that over twenty-four patents have been created during their “nightly brainstorming sessions using Skype and Google Hangouts with brilliant people all around the world.”

SSG now has over forty software and hardware products (and combinations of both) that they plan to push into production over the next three years. Four to six will be ready to go into production within the next thirty days, he stated.

Says Mr. Ridlinghafer: “The fact is, communes and socialists have been advocating communal living and working for hundreds of years. There’s nothing new at all with that model; it’s a tired and stale model which is really extremely limiting, not the other way around. You’re limited in your workforce to a physical geographic area, you’re limited to when and where you’re allowed to work, you’re limited to using only the resources provided to the ‘community’, and security is atrocious when most startups are, or should be, very concerned about their IP.”

Jarrett continues:

“I don’t buy into the ‘follow the crowd’ theory that says ‘do what everyone else is doing, it must be right and the best way.’ In fact, I become suspicious of anything everyone embraces and do my own thinking, thank you very much…”

He had a lot more to say on the whole subject, and it was obvious he is a brilliant entrepreneur in his own right, someone whose mind never shuts off or stops considering better ways to do just about anything and everything you can imagine.

As we spoke, I got the impression that here was someone who lived, ate, breathed and slept technology and startups. I’ve rarely seen anyone as passionate when speaking about a subject.

At one point, something passed over his features, and a faraway look turned into the following admission:

“I’ll admit,” states Jarrett, the founder of Synapse Synergy Group, Inc., “it’s been the toughest year of my entire career, and I’ve had some amazingly hard years, like early in my career, shortly after leaving Netscape when the traitorous AOL deal went through. I moved to the small bay area city of Los Gatos, CA, during the summer of ’99 and was told I could only get dialup. I said to myself, ‘there’s no way in hell I am going back to dialup,’ and so I cashed in my Netscape stock, got a line of credit from my friends at Cisco, and googled how to configure routers, switches and everything else (at that time I had never even touched any network equipment except the basic home stuff, which wasn’t much back then).”

When asked, he admitted he dropped out of college after a single semester and so is self-taught in everything he knows. He says he learns everything extremely fast, which has been a “blessing” throughout his 25-year career.

He continued: “I started the first broadband ISP there in the city of Los Gatos, where I also built the first call center and data center, brought in the first fiber, and then proceeded to create my first patented invention as well. But when I tried to start three companies and was offered VC funding from KPCB in 2000 for the first-ever plug-n-play firewall router, all at the same time… I had a nervous breakdown, I’m pretty sure, and walked away from all my startups and even the offer of funding for my invention. I had to make a hard choice, so I dropped everything except my broadband ISP business, where Steve Wozniak and a stealth-mode startup you have probably never heard of called ‘Netflix’ were some of my first customers.

But this latest startup, SSG, has been tougher, because I put everything I own and all my ideas, passions and dreams into it, and that kind of pressure and commitment is a pretty powerful force on one’s psyche and nerves.”

You can discover very little at their company HQ website, for good reason: as he explains, up to now the company, and even its very existence, has been a well-kept secret, operating in stealth mode. However, Mr. Ridlinghafer states that the corporate website will be launched within the next 30 days, sometime in September. He didn’t give us an exact date, but did state, “you will definitely hear about it when we go out, live, into the public marketplace.”
Their website: http://synapsesynergygroup.com
Twitter: @SynapseSynergy
Email: info@synapsesynergygroup.com

Starting with the idea that traditional “Data-Center” and server architectures are “constraints” on businesses, HP announced Thursday that it is releasing a new line of servers aimed at faster, simpler and more cost-effective delivery of computing services. Its portfolio of HP ProLiant Generation 9 (Gen9) servers “reimagines the server” for a new computing era, the company said.

According to HP, the future of data center technology is a “compute” era of flexible, scalable servers adapted for operating in an increasingly cloud-based, software-as-a-service IT environment. The company said it designed the ProLiant Gen9 line with those needs in mind.

HP said its new server portfolio spans four architectures: blade, rack, tower and scale-out. It claims the flexibility of the design will enable users to improve the workload performance of business-critical applications and to cost-effectively increase computing capacity across multiple workloads.

The ProLiant Gen9 servers will be available through HP and its channel partners starting on September 8. Prices will vary according to model and customer configurations.

‘New Approach to the Data Center’

While the global business world is rapidly moving its data and IT services to the cloud, all that computing is still being driven by servers somewhere. Traditional servers and data centers, however, were not designed for such an environment.

“The rise of mobile, cloud, social and big data is driving the need for a new approach to the data center and its processing engine — the server — to enable successful business outcomes,” said Antonio Neri, HP’s senior vice president and general manager of servers and networking.

According to HP, the ProLiant Gen9 servers will make it easier for organizations to, for example, roll out new mobile services for employees or customers “in minutes” and put customer behavior data to business-building use in real time.

“With HP compute, we can scale up to millions of transactions a day, deploy servers in virtualized environments in seconds and provision applications in minutes, all while anticipating our future needs,” said Harry Gunsallus, executive vice president and chief information officer, Redstone Federal Credit Union.

Additional Support in Late 2014

Once support is enabled for HP OneView, the Gen9 servers will provide a simpler, streamlined system for managing services across servers, storage and networking, HP said. The OneView system can help make infrastructure provisioning “up to 66 times faster,” according to HP. OneView is the management platform for HP BladeSystems and ProLiant Generation 8 and 7 servers.

While the OneView dashboard is already available, it is not currently compatible with the Gen9 line. The converged management system will add support for Gen9 by late 2014, HP said.

HP added that it plans to provide more details about the new server line during the Intel Developers Forum, set to be held September 9-11 in San Francisco.

At this year’s U.S. Open, IBM has been serving up a wide range of social media-focused technology geared to engage the tournament’s countless fans worldwide. Whether sitting courtside to watch Serena Williams in a match, or tweeting on a tablet from the comfort of home, the tournament’s fans are continuously monitoring live updates from the two-week grand slam tournament, which started on Monday.

IBM has been working closely with the United States Tennis Association to make sure all of this runs smoothly. This week the 25-year collaborators unveiled their mobile strategy for making fans across the globe feel a part of the action from Flushing, Queens.

“Over 200 countries and territories are watching (the tournament) on broadcast. Even more than that are on their mobile devices, on iPads, on tablets, on desktops,” said Nicole Jeter West, the USTA’s senior director of ticketing and digital strategy, during an IBM press event held at the Billie Jean King National Tennis Center in Queens.

Key to this social interaction are the IBM-developed U.S. Open mobile apps and website. This year, IBM included new data sets that enable real-time match updates, player statistics, and detailed historical information. The data sets and enhanced visual elements are powered by IBM SlamTracker analytics technology, which includes information as detailed as ball and player movement on the court, and can even track how far a player runs during a match. According to an IBM press release, SlamTracker works by analyzing over 41 million data points from the past eight years of tournament data. From this data, the analytics system designates three “performance indicators” that could potentially impact a player’s game. The technology processes all of this information in real time.
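The press release doesn’t detail how SlamTracker computes its indicators, but the general idea of deriving a pass/fail “performance indicator” from point-by-point data can be sketched in a few lines. Everything below — the point fields and the 65% target — is hypothetical and purely illustrative, not IBM’s actual model:

```python
def first_serve_win_pct(points):
    """Share of first-serve points won by the server."""
    first = [p for p in points if p["serve"] == 1]
    if not first:
        return 0.0
    won = sum(1 for p in first if p["server_won"])
    return won / len(first)

def check_indicator(points, threshold=0.65):
    """Flag whether the player meets a hypothetical target:
    win at least `threshold` of first-serve points."""
    pct = first_serve_win_pct(points)
    return {"first_serve_win_pct": pct, "met": pct >= threshold}

# Example: 8 first-serve points, 6 of them won (second serves are ignored).
points = (
    [{"serve": 1, "server_won": True}] * 6
    + [{"serve": 1, "server_won": False}] * 2
    + [{"serve": 2, "server_won": True}] * 3
)
result = check_indicator(points)  # 75% of first-serve points won
```

A real system would run checks like this continuously as each point streams in, which is where the scale of 41 million historical data points comes into play.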

At the event, representatives from both organizations emphasized that this is all to “grow the game of tennis.”

The quickly-processed data needed to grow the game is powered by updated cloud technology. Over the course of the tournament’s run, traffic to the U.S. Open’s website and mobile apps grows increasingly large. Last year, there were a record-breaking 419 million page views on the U.S. Open website, with 178 million views coming from mobile devices and 41.7 million from tablets. The predictive cloud technology used by the USTA helps manage sudden web traffic spikes.

Trying to make sure this all works takes months of planning and preparation. In many ways it is a year-long process: IBM and the USTA meet for a post-mortem directly after each year’s tournament, begin planning in earnest for the next U.S. Open around February, and then testing the technologies involves a great deal of trial and error.

“The concepting phase is very collaborative. We see what is working, what isn’t and what’s trending,” John Kent, a program manager for IBM Worldwide Sponsorship Marketing, told FoxNews.com.

Kent said that both teams spend at least two to three weeks on simulation testing. This involves recreating the tournament experience using exact scores from the previous year. They run through possible scenarios, like an unexpected player withdrawal or changes to the tournament’s schedule, to ensure that the tech can respond quickly to real-time changes. Jeter West called it a “rehearsal.”

“We look at it from the user’s perspective, and then also from the sponsor’s perspective,” she said. “We have to make sure that Mercedes Benz’s logo is on – we are looking at another layer of testing on top of just the user experience.”

Both Kent and Jeter West emphasized that their jobs require a lot of guess-work on what platforms will be successful. In the ever-shifting tech world where trends spark fast and then flicker away, IBM and the USTA have to make sure they find relevant ways to engage their audience.

Part of that involves looking at potential successful strategies for future tournaments. Kent said higher-quality 4K-resolution video might be an attractive tool in the future. While the technology isn’t quite ready yet, being able to bring viewers the most enhanced visuals is part of the U.S. Open experience, he added.

Jeter West said that telemetry, automated and fast data transmission, and the constantly flowing data generated from a match will continue to be a big factor in her organization’s partnership with IBM.

“We are able to get player movement and then take that movement, convert it to data, and utilize that. We are always assessing how we can better realize that and bring it to our fans,” she said. “The question is: How can we take that information and make it useful?”

In 1883, a volcanic eruption in a small archipelago of the Dutch East Indies (now Indonesia) changed the world. The Krakatau eruption and the tsunamis it caused resulted in over 36,000 deaths, including all 3,000 souls on the island. The ocean floor was altered, and temperature and weather patterns didn’t return to normal for five years. And although the ash in the atmosphere produced spectacular sunsets, the lowered temperatures and acid rains devastated crops around the world.

In 2010, a similar event occurred in Iceland with the eruption of Eyjafjallajokull. Though the actual eruptions were much smaller, in relative terms, the ash cloud from this eruption grounded about 10 million travelers in Europe for six days.

Neither of these volcanoes is comparable in size to the volcano at Yellowstone. A new study from the United States Geological Survey (USGS) suggests that the ash cloud from a Yellowstone supereruption would blanket the Rocky Mountains several meters deep, and would deposit millimeters of ash at least as far away as New York, Los Angeles and Miami. The results have been published in a recent issue of Geochemistry, Geophysics, Geosystems.

The research team used an improved computer model to develop their predictions. They believe that the large hypothetical eruption would create an umbrella ash cloud — one that expands in all directions evenly — sending ash across North America.

During a supereruption (the largest kind of eruption known), more than 240 cubic miles of material can be ejected from a volcano. This sort of eruption is highly unlikely, but if it should occur, electronic communications and air travel throughout the continent would be shut down, and the climate would be altered.

The underground reservoir of hot and partly molten rock beneath Yellowstone National Park is enormous. We know of three eruptions in the past, at approximately 2.1 million, 1.3 million and 640,000 years ago. According to the University of New Mexico, one of those eruptions formed the 24 by 40 mile caldera which is now Yellowstone Lake. Current geological activity at the park shows no sign that any volcanic eruptions will occur in the near, or even far, future. A relatively non-explosive lava flow near the Pitchstone Plateau was the most recent volcanic activity at 70,000 years ago.

The model, called Ash3D, projects that cities near the supereruption would be covered by a few feet of ash, a few inches would cover the Midwest region of the country, and cities on both coasts would see a fraction of an inch at least.

Scientists can use the findings from this study to understand past eruptions at Yellowstone and the widespread ash deposits left behind. Ash3D is also being used by other USGS researchers to forecast possible ash deposit hazards from restless volcanoes in Alaska.

Typical smaller eruptions deposit ash in a fan formation. A supereruption, however, resembles a bull’s-eye; dense in the center and lessening in all directions fairly uniformly. The researchers say that this type of formation is less affected by the prevailing winds than the fan formation.
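The difference between the two deposit geometries can be made concrete with a toy model. The falloff function, rates, and wind direction below are invented for illustration and are not Ash3D’s actual equations:

```python
import math

def umbrella_thickness(r_km, t0_m=3.0, scale_km=300.0):
    """Bull's-eye pattern: thickness depends only on distance from
    the vent, falling off the same way in every direction."""
    return t0_m * math.exp(-r_km / scale_km)

def fan_thickness(r_km, azimuth_deg, t0_m=3.0, scale_km=300.0):
    """Fan pattern: same radial falloff, but strongly weighted toward
    a prevailing downwind direction (here, due east = 90 degrees)."""
    offset = math.radians(azimuth_deg - 90.0)
    weight = max(0.0, math.cos(offset)) ** 2  # zero directly upwind
    return umbrella_thickness(r_km, t0_m, scale_km) * weight

# 500 km from the vent: the umbrella deposit is identical east and
# west, while the fan deposit is thick downwind and absent upwind.
east_u = umbrella_thickness(500)
west_u = umbrella_thickness(500)
east_f = fan_thickness(500, 90)    # downwind (east)
west_f = fan_thickness(500, 270)   # upwind (west)
```

The point of the contrast: only the umbrella geometry can carry meaningful ash westward against the prevailing winds, which matches the bull’s-eye deposits described for past Yellowstone eruptions.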

“In essence, the eruption makes its own winds that can overcome the prevailing westerlies, which normally dominate weather patterns in the United States,” said Larry Mastin, a geologist at the USGS Cascades Volcano Observatory in Vancouver, Washington. “This helps explain the distribution from large Yellowstone eruptions of the past, where considerable amounts of ash reached the west coast,” he added.

The three large past eruptions deposited ash over many tens of thousands of square miles. The deposits, and the fossils they cover, have been found across central and western Canada and the US.

Accurately estimating the ash deposits from these past eruptions was made challenging by erosion, as well as the limitations of previous computer models which lacked the ability to accurately determine the mechanism of transportation for the ash.

Depending on the length of the eruption, Ash3D revealed that the leading edge of the ash cloud from a supereruption could expand at a rate that exceeds the ambient wind speed for hours or days. Such an expansion could drive ash both upwind (westward) and crosswind (north to south) more than 932 miles. This would produce the distinctive bull’s-eye pattern.

The simulation showed that modern cities near the park – like Billings, Montana and Casper, Wyoming – would be covered by a few inches to more than three feet of ash. Cities in the upper Midwest — like Minneapolis, Minnesota, and Des Moines, Iowa – would receive inches, at least. The East Coast and Gulf Coast would only receive fractions of an inch, while California cities would be between one and two inches. Pacific Northwest cities might receive just over an inch.

Although this might not sound bad because some of these cities receive more than this in snow each year, the effect on the climate of only an inch or less of volcanic ash could be severe. Previous research shows that such a blanketing could reduce traction on roadways, short out electrical transformers, and cause respiratory problems. Other studies also demonstrated that multiple inches of such ash could damage infrastructure, block sewer and water lines, disrupt livestock and damage crops.

The research team discovered that other eruptions that are smaller than the Yellowstone supereruption, yet still powerful, could cause an umbrella ash cloud as well.

“These model developments have greatly enhanced our ability to anticipate possible effects from both large and small eruptions, wherever they occur,” said Jacob Lowenstern, USGS Scientist-in-Charge of the Yellowstone Volcano Observatory.

Image 2 (below): An example of the possible distribution of ash from a month-long Yellowstone supereruption. The distribution map was generated by a new model developed by the U.S. Geological Survey using wind information from January 2001. The improved computer model finds that the hypothetical, large eruption would create a distinctive kind of ash cloud known as an umbrella, which expands evenly in all directions, sending ash across North America. Credit: USGS

ANALYSIS: The Gaza crisis is having little impact on the output of the Israeli tech start-up community.

In spite of the country’s reputation as a world-leading start-up ecosystem, strict conscription laws have pulled many Israeli businesses directly into the conflict.

But in a conversation with IT Pro, the minister of the economic and trade mission at the embassy of Israel in London, Nathan Tsror, insisted it has been “business as usual” for many of the country’s start-ups during the hostilities.

“The Israeli economy is remarkably robust. This was the case during the recent economic downturn, it was the case during previous conflicts and it has once again been demonstrated during Operation Protective Edge,” he said.

“Despite the fact that over 70 per cent of the country has been exposed to increased volumes of rocket attacks, the robust Israeli economy continues to operate as normal with very little impact from the conflict.”

Start-up M&A deals in Israel have become increasingly impressive. Waze, a navigation app, was sold to Google for an estimated $1.1 billion in June 2013, while cyber security newcomer Cyvera was acquired by Palo Alto Networks in March 2014 for $200 million.

Israeli innovation

Investment in Israeli technology is on the upswing and the rate of early-stage start-up growth has remained stable because of the flexibility of Israel’s IT ecosystem, according to experts.

Mira Marcus, a spokeswoman for the Tel Aviv Municipality, explained: “One of the reasons why Israel in general, and Tel Aviv in particular, is so innovative is because of survival and resilience. And we see that now as well.

“We had a few exits [sell-offs] that were very big during the recent operation so we see that the business side is still thriving.”

At present, there are around 700 early stage businesses in Tel Aviv alone, she said, that specialise in the production of a wide range of technologies.

“While clean tech is very big in the rest of Israel, the strength here is really internet and mobile as we’re doing very cutting edge things in that field,” she said.

The city is currently preparing for its annual DLD festival, which will showcase the latest start-up technologies.

“We’re having about 2,500 people fly in from around the world. We’re having a worldwide start-up competition to find the best start-ups. We’re also exhibiting different technologies being developed in Tel Aviv, including a fashion show of wearable technology and a huge robot that’s going to be imitating people’s movements while walking down the street,” Marcus continued.

These glamorous planned events stand in stark contrast to the border conflict happening beyond the city, as Tel Aviv is located just 53 miles from the Gaza Strip. But not all tech companies in Israel have moved through the conflict unscathed.

A German research institution has adapted its facial recognition software for Google Glass, but promises to keep the data out of the cloud.

Never mind how you’re feeling, how is the person you’re talking to feeling? A new Glassware app can help you identify a person’s gender, guess their age, and evaluate their emotional state in real time. It won’t, however, tell you who the person is.

The Fraunhofer Institute for Integrated Circuits has adapted its Sophisticated High-speed Object Recognition Engine (SHORE) facial recognition technology for Google’s controversial Internet-connected headset. The SHORE Glassware app is able to process video in real-time on the Glass processor. To assuage privacy concerns about tracking through facial recognition software, Fraunhofer promises to never send the data up to the cloud.

Fraunhofer said the technology took “years” to develop and uses a “highly efficient” library of data built in the C++ programming language to analyze the human face. SHORE could be implemented so that information about the person you’re speaking to is superimposed next to their face, helping you figure out whether they’re happy or sad, male or female, young or old.

The organization intends the app to be a communication aid, used, for example, by people on the autism spectrum who may have trouble identifying emotions. Fraunhofer also points out that its app could be applied to market analysis and other more commercial uses. Glassware makers have been exploring using Glass to help people with developmental differences since Google threw open the doors to Glass apps.

Fraunhofer did not immediately respond to a request for comment.

The SHORE app is not available for download. It’s not clear whether Fraunhofer has built it into a soon-to-be-available app, or is waiting to pair the technology with an app partner. Still, the SHORE app charts a less-traveled path through the privacy concerns of facial recognition, so that the technology can still be used to help people who need it.


Google has announced the acquisition of another startup: Zync Render, a service that lets you process video in the cloud. Google plans to use it to give young companies access to video processing that, for one reason or another, they cannot afford to buy and maintain themselves. To date, Zync Render’s technology has been successfully used to create commercials for well-known companies, and to render special effects for the films “Looper” and “Star Trek: Nemesis.”

After the purchase, Zync Render’s employees will work on transferring its technology to Google’s cloud platform. Google is also going to reduce the cost of its video processing services by introducing per-minute billing.
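The per-minute billing point is worth a quick illustration. With made-up rates (Google hasn’t published Zync pricing here), the difference for a bursty render job looks like this:

```python
# Illustrative only: why per-minute billing matters for bursty render jobs.
# The $6/hour machine rate is a made-up number, not actual Zync pricing.
import math

def hourly_cost(minutes_used, rate_per_hour):
    # Traditional billing rounds each job up to a whole hour.
    return math.ceil(minutes_used / 60) * rate_per_hour

def per_minute_cost(minutes_used, rate_per_hour):
    # Per-minute billing charges only for time actually used.
    return minutes_used * (rate_per_hour / 60)

# A 75-minute render at a hypothetical $6/hour rate:
print(hourly_cost(75, 6.0))      # → 12.0 (billed as 2 full hours)
print(per_minute_cost(75, 6.0))  # → 7.5 (billed for exactly 75 minutes)
```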


KKBOX, a music streaming service based in Taiwan, has raised $104 million from Singapore GIC with participation from KDDI (Japan’s second-largest telecommunications company), HTC, and Taiwan’s Chunghwa Telecom.

KKBOX offers music streaming services to 10 million users in six Asian markets including Taiwan, Japan, Hong Kong, and Singapore.

Founded in 2005, KKBOX will put the new funds toward developing platform technology and continuing expansion into overseas markets.


Nutanix, a converged infrastructure provider based in San Jose, has raised a $140 million Series E round at a $2 billion valuation from undisclosed Boston public market investors, rumored to be Wellington Management and Fidelity.

Nutanix combines storage and compute power in a single box to reduce space, power, cooling, and cost required to run it, serving 800 customers in 43 countries.

Founded in 2009, Nutanix has raised over $300 million to date and will use the latest funding to open two offices, hire additional staff, and expand product offerings.


Next week, the IT industry will gather in San Francisco to discuss all things cloud and virtualization at VMworld. The discussion will center on “software-defined data centers,” which will quickly morph into “software-defined security” in my world (writer’s note: in my humble opinion, this is a meaningless marketing term, and I don’t understand why an industry that should be focused on digital safety acts like it’s selling snake oil). So we are likely to hear about the latest virtual security widgets, VMware NSX and OpenStack integration, virtual security orchestration, etc.

This will make for fun and visionary discussions, but there’s one critical problem: while almost every enterprise has embraced server virtualization and many are playing with cloud platforms, lots of organizations continue to eschew or minimize the use of virtual security technologies – even though they’ve had years of experience with VMware, Hyper-V, KVM, Xen, etc. According to ESG research, 25% of enterprises use virtual security technologies “extensively,” while 49% use virtual security technologies “somewhat,” and the remaining 25% remain on the sidelines (note: I am an ESG employee).

This is not a new situation – ESG cloud/virtualization guru Mark Bowker and I uncovered this very behavior with some research we did back in 2010. That data indicated that everyone loved server virtualization for its ability to consolidate workloads, but as soon as the virtual server infrastructure grew more complex and needed advanced security, network, or storage support, many organizations hit the brakes. Things have advanced somewhat, but a large part of the market remains reluctant to move from tried-and-true physical security controls to the virtual unknown.

Recently, ESG research dug into this issue further, asking security professionals why their organizations aren’t using virtual security appliances/technologies more extensively. Here are the top 5 responses:

- 37% of security professionals said that IT/compliance auditors are uncomfortable with virtual security appliances/technologies
- 34% said that they prefer to use existing security controls/technologies, even if this is not the most efficient method for virtual security
- 32% said that they have a lack of trust in virtual security appliances/technologies
- 32% said that virtual security appliances/technologies require additional management, which is too much of a burden for the IT operations staff
- 28% said that they had a lack of knowledge/understanding about virtual security appliances/technologies

To be clear, I don’t think this situation is sustainable. At some point, the security requirements of server virtualization and cloud computing simply can’t be addressed by status-quo physical security technologies and best practices. Yet many security professionals seem to be ignoring this inevitable transition.

Rather than focus on whiz-bang functionality and banal “software-defined security” labels, the server virtualization, cloud computing, and security industry faces a much more fundamental task – educating security professionals on virtual technologies, convincing them that virtual controls work, and providing them with a clear and concise migration/integration plan. I doubt whether this will happen at VMworld but it really needs to happen soon.


The growing number of data breaches in which massive numbers of payment cards are stolen from retail stores and other businesses is occurring because those businesses are failing to keep up with the Payment Card Industry’s data security standard, according to the PCI Security Standards Council.

In its “best practices” guidance document published today, the PCI Council says although many businesses may be meeting the periodic compliance requirement of the PCI data-security standard (DSS) in an annual audit check, they are letting attention lapse and not keeping network security up to date. The “best practices” guidance contains several suggestions on how to further PCI-required security as an ongoing process (see graphic, below). Despite the PCI standard being in place for several years, retailers and restaurants that have to follow it continue to be hit by a rash of massive card breaches.
“They weren’t compliant,” according to Troy Leach, CTO at the PCI Council. “They think PCI compliance is a once-a-year achievement,” failing to maintain security controls as the needs of the business change in terms of users, applications and security, he says. That’s why a special interest group at the PCI Council put together the “best practices” document as a set of recommendations for businesses that must follow PCI rules because they accept or process payment cards.

Among the ideas that businesses can put in place, if they haven’t already, is to make someone in the organization the PCI compliance manager to engage key personnel or functional groups to ensure compliance is an ongoing process. Guidance also includes adopting automated monitoring of security controls, when possible, plus operating according to the standardized control frameworks established by the International Organization for Standardization, the National Institute of Standards and Technology and the Information Systems Audit and Control Association.
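As a rough sketch of what “automated monitoring of security controls” can mean in practice, here is a toy registry of control checks run against a system snapshot. The control names and rules are invented for illustration, not drawn from the PCI guidance itself:

```python
# Minimal sketch of automated control monitoring: register named checks,
# run them all against a current system snapshot, report what is failing.
# The controls and snapshot fields below are hypothetical examples.

CHECKS = {}

def control(name):
    """Decorator that registers a control check under a name."""
    def register(fn):
        CHECKS[name] = fn
        return fn
    return register

@control("firewall-default-deny")
def check_firewall(state):
    return state.get("default_policy") == "deny"

@control("cardholder-data-encrypted")
def check_encryption(state):
    return state.get("pan_storage") == "encrypted"

def run_compliance_scan(state):
    """Return the names of controls that are currently failing."""
    return sorted(name for name, fn in CHECKS.items() if not fn(state))

snapshot = {"default_policy": "allow", "pan_storage": "encrypted"}
print(run_compliance_scan(snapshot))  # → ['firewall-default-deny']
```

Run on a schedule, a scan like this catches the lapses the Council describes between annual audits instead of a year later.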

The guidance also says PCI compliance must be part of the ongoing security process that focuses on operational changes to system, network or security architectures and configurations.

“If organizations want to protect themselves and their customers from potential losses or damages resulting from a data breach, they must strive for ways to maintain a continuous state of compliance throughout the year rather than simply seeking point-in-time validation,” the document concludes.


Leap Motion is jumping into virtual reality with a new add-on for the Oculus Rift that can track your hands in apps designed for the VR headset.

“If virtual reality is to be anything like actual reality, we believe that fast, accurate, and robust hand tracking will be absolutely essential,” Leap said in a blog post. “We believe in the concept of other specialized controllers as well, but our hands themselves are the fundamental and universal human input device.”

But using infrared imagery from Leap Motion sensors can turn VR headsets into “stereoscopic windows into the world around you,” Leap said. “What it sees, you see.”

Adding a Leap Motion device to something like the Oculus Rift “expands the tracking space to be in any direction you’re facing,” Leap said. “You can reach forward, turn around, look up and down, and the tracking follows you wherever you go. Because our device’s field of view exceeds that of existing VR displays, you’ll find it can start to track your hands before you even see them.”

To that end, Leap released a new developer mount for the Leap Motion that clips on to VR headsets like the Oculus Rift. The mount is now on sale for $19.99, while the Leap Motion Controller is $79.99, and the Oculus Rift Dev Kit 2 (DK2) is $350.

The tools are largely for developers; even the Oculus Rift is still in beta mode. But Leap Motion is updating its own beta SDK alongside the release of the mount, “which includes a massively improved ‘top-down tracking’ mode, as well as Unity and C++ examples.”

“These show how to use both the image overlays and the tracking data from a head-mounted position, then give further examples of more sophisticated 3D interactions,” Leap said. “Thanks to major software advancements since our V2 tracking developer beta was launched in May, this is all possible with the current generation peripheral device.”

Commenters on Leap’s blog post questioned whether the mount will interfere with the Oculus Rift’s new positional tracking camera. But Leap said it does not since “the DK2 is designed to have way more LEDs than strictly necessary” in order to accommodate these types of add-ons.

“We’ve used positional tracking with the mount on DK2 successfully without encountering any issues,” Leap said.

Looking ahead, Leap Motion is working on “Dragonfly,” a prototype sensor that virtual reality hardware makers can embed in their products. “Dragonfly possesses greater-than-HD image resolution, color and infrared imagery, and a significantly larger field of view,” Leap said.

Further to strong bookings in the first half of 2014 – up 45% for the first six months of the year compared to 2013 (including a 71% Q2) for both its LENS 3D Printers and Aerosol Jet Systems – Optomec, producer of industrial additive manufacturing systems using Aerosol Jet technology, has announced the sale of an Aerosol Jet Quad Print Engine to the Factory Automation and Production Systems Institute (FAPS) at Friedrich-Alexander-Universität Erlangen-Nürnberg to manufacture artificial muscles for use in robotics, gaming and medical applications, such as biomimetic prostheses.

FAPS engineers aim to utilise Aerosol Jet technology to help facilitate the transition of artificial muscles – also known as Dielectric Elastomer Actuators (DEAs) – from basic research to their use as regular control elements in complex robots and biomimetic prostheses. The DEAs require the printing of extremely thin layers of elastomer film, silicone and electrodes, and Aerosol Jet technology can print a variety of materials below a ten-micron height, matching the production needs of the artificial muscle application.

Aerosol Jet tech works by directly depositing a range of electronic materials – including conductive, insulating and biologic formulations – onto most substrates using focusing nozzles (demonstrated in the video above). As the technology proffers electronic and biomaterials within the same deposition system, Aerosol Jet systems offer a unique potential for biomedical micro-device production. The Aerosol Jet Print Engine is a modular system allowing simultaneous, or sequential, printing on up to four different substrates on the same machine. The Engine’s open system architecture allows integration with commercial automation platforms, ideal for the FAPS application.

Dr. Sebastian Reitelshöfer, Director of the Research Sector Biomechatronics at FAPS, enthused: “As Aerosol Jet printing technology allows the manufacturing of homogeneous layers with a thickness below 10 microns, the process seems very well suited to print stacked DEAs composed of silicone layers as the dielectric medium and Carbon Nano Tube (CNT)-compounded silicone electrodes. We believe this will be an important element in the successful production ramp-up of DEA-based applications.”

Ken Vartanian, VP of Marketing at Optomec offered further enthusiasm: “We’re pleased to see the unique capabilities of Aerosol Jet technology being applied to the exciting area of DEAs and helping develop production grade manufacturing environments for their implementation. The research being done at FAPS holds great potential for many life-changing innovations and we’re proud to be contributing to accelerate the commercial viability of this work.”

The ongoing success and growth at Optomec may attract attention regarding an Initial Public Offering this year – a possibility raised at the start of the year by stocks commentator Gary Anderson. Will the current biggest players in 3D printing, Stratasys and 3D Systems, consider an acquisition of Optomec – the former to expand into metal printing, and the latter with regard to production deals such as the Google Ara phone?

BT has consistently denied the allegations, originally made in a complaint by legal charity Reprieve in 2013, that it had breached international rules on corporate social responsibility by taking a contract to supply a fibre-optic connection between a US military communications centre in the UK and a base in North Africa that has been linked to controversial drone strikes.

BIS, which acts as official UK arbiter of international rules on corporate social responsibility administered by the Organisation for Economic Co-operation and Development (OECD), threw Reprieve’s original complaint out in 2013 after concluding there was not enough evidence to say whether the BT line had been used in drone strikes or not.

Reprieve said in a statement: “The technical analysis of documents relating to BT’s contract with the US government, carried out by Computer Weekly, demonstrates that crucial questions remain unanswered regarding the company’s role in the covert drone programme.

“We are therefore urging the UK government to reopen its investigation – if British companies are providing services which enable the US to carry out illegal drone strikes, the British public has a right to know.”

DISN is part of a US system of “network-centric warfare”, in which diverse sources of intelligence and military operations are combined in near real time over high-speed networks to direct drone aircraft against targets, such as terrorist suspects in far-flung countries.

The US Defense Information Systems Agency (DISA) contracted BT to supply part of this network between its communications hub at RAF Croughton, Northamptonshire and Camp Lemonnier, a military base in Djibouti, where the US military has undertaken drone missions to combat terrorism around the Horn of Africa.

A spokeswoman for BIS, known under the OECD guidelines as the National Contact Point (NCP), refused to discuss either the complaint or the conclusions of the high-level review.

“The NCP does not comment on complaints before it makes an initial assessment,” she said. “The NCP usually expects to make an initial assessment within three months of receiving a complaint.

“The UK NCP believes any additional guidance should be informed by wider work at OECD level, to ensure a consistent understanding between NCPs and enterprises across the OECD.”

Reprieve also warned of a potential conflict of interest that may compromise BIS’ impartiality on the BT case. Lord Livingston, minister of state for trade and investment at BIS, was BT chief executive when Reprieve made the first complaint, before being ennobled and taking up his current ministerial role.

But BIS said the NCP did not report to a minister and that Livingston took no part in the BT decision.

BT told Computer Weekly at the time of our investigation that it could not be held responsible for what anybody did with the communications infrastructure it supplied. Its subsequent statements have consistently denied any knowledge of links to US drone strikes.

“UK NCP assessed Reprieve’s complaint in February and rejected it. BT can categorically state that the communications system mentioned in Reprieve’s complaint is a general purpose fibre-optic system. It has not been specifically designed or adapted by BT for military purposes, including drone strikes,” said a BT statement.

“We have no knowledge, beyond press reports, of US drone strikes. We take our human rights obligations very seriously and are fully supportive of the OECD guidelines.”


I recently read this quote; it matches my own view and says it better than I usually do. Hope you like it.

“It is important to remember that life is not a dress rehearsal, and that none of us should waste our time on doing things that don’t spark fires within us. My golden rule for business and life is: We should all enjoy what we do and do what we enjoy.”
– Richard Branson

I tell young adults whenever I can to “find what you’re passionate about and make that your career” – JNR


When Marconi first popularized the radio, no one expected it to go far – literally. Radio waves ought to be stopped in their tracks by the curve of the Earth. Marconi proved they weren’t, but no one knew why.

Radio technology offered people a whole new way to experience the world. They could, communally, hear single performances. One record could be played to hundreds of thousands of people. And, without laying any cables, or any connecting wires, sound could be transmitted anywhere through invisible waves.

Oh, not anywhere, people protested. No matter how powerful Marconi, or anyone else, made radio waves, they couldn’t magically curve around the Earth. Radio waves would be limited by a horizon, or people would have to set up radio towers relaying a signal from one place to another. Either way, they had limited utility.

Guglielmo Marconi disagreed. He believed radio transmissions could travel farther than anyone imagined, and he would prove it. Marconi set up a radio station in Cornwall and stationed himself in Newfoundland. To pick up the signal he used an antenna that had to be elevated with a kite. The operation was chaotic, and for a long time, unsuccessful. And then Marconi picked up a single message in Morse code – dot dot dot. The Cornwall station was transmitting the signal – S in Morse code – again and again, and Marconi picked it up. He didn’t know how he had picked it up. Anyone who worked the physics on it knew it was impossible. The experiment showed something was happening that no one had counted on.

Years later, they figured out what. The ionosphere is an atmospheric layer of atoms whose electrons have been stripped off them by the radiation put out by the sun. These ions, and their electrons, form a kind of solid barrier for low-frequency waves, including radio waves. The radio waves in Cornwall would bounce off the ionosphere, and then off the ground, and then off the ionosphere again, traveling across the Atlantic towards Marconi and his kite. Today, the ionosphere is one of the reasons why we get long-range radio broadcasts. It’s also why the stations our radios pick up “change” at night. Without solar activity, the ionosphere calms down at night and acts as a more efficient bouncer. A far-away station that gets drowned out by more powerful local signals during the day will, at night, be able to reach a geographically wider audience.
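A back-of-the-envelope calculation shows why a single ionospheric bounce nearly spans the Atlantic. Treating the reflecting layer as sitting at height h above a sphere of radius R, a grazing ray reaches the layer’s horizon on each side, giving a maximum one-hop ground range of about 2·√(2Rh). The ~300 km layer height is a rough assumption on my part:

```python
# Rough skywave geometry: one-hop range for a reflecting layer at height h.
import math

EARTH_RADIUS_KM = 6371.0

def max_hop_range_km(layer_height_km):
    """Maximum ground distance covered by one ionospheric bounce,
    assuming grazing rays to the layer's horizon on both sides."""
    return 2.0 * math.sqrt(2.0 * EARTH_RADIUS_KM * layer_height_km)

hop = max_hop_range_km(300.0)
print(round(hop))  # roughly 3900 km, comparable to the ~3,500 km
                   # Cornwall-to-Newfoundland path, so very few hops suffice
```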

The weather station monitors the conditions both inside and outside, and plant sensors will also be available for keeping tabs on your orchids and whatnot. The sensors track temperature, humidity, air quality, atmospheric pressure, and noise levels. The Archos Weather Station system is expected to be available in September for $149.

But new phones, tablets, and weather stations aren’t the only things that Archos has in store for you. There’s also a pair of new music devices on tap. The Archos Music Light is a combination LED bulb and Bluetooth speaker that screws into a standard Edison socket for both illumination and tunes, for $49. Alas, there doesn’t seem to be any way to control the light remotely, just the music.

But the one we really love is the Archos Music Beany. We’ll just let Archos do the describing here:

“A plush beany that combines the benefits of a headphone with the style and comfort of a traditional beany.”

The Music Beany connects via Bluetooth to your music source, and will retail for $39.

About four years ago, researchers in Michael Strano’s chemical engineering lab at MIT coated a short piece of yarn made of carbon nanotubes with TNT and lit one end with a laser. It sparkled and burned like a fuse, demonstrating a new way to generate electricity that produces phenomenal amounts of power.

At the time, no one understood how it worked, and it was so inefficient that it was little more than a “laboratory curiosity,” Strano says.

Now, Strano has figured out the underlying physics, which has helped his team improve efficiencies dramatically—by 10,000 times—and charted a path for continued rapid improvements. One day, generators that use the phenomenon could make portable electronics last longer, and make electric cars as convenient as conventional ones, both extending their range and allowing fast refueling in minutes.

The efficiencies of the lab devices made so far are still low compared to conventional generators. Strano’s latest device is a little over 0.1 percent efficient, whereas conventional generators are 25 to 60 percent efficient.

But Strano says they could be useful in some niche applications where a sudden burst of power is needed. And he says that further improvements in efficiency could soon make broader applications feasible.

The new generators exploit a phenomenon that Strano calls a thermopower wave. The conventional way to generate electricity by burning a fuel is to use heat to cause expanding gases to drive a turbine or a piston. In Strano’s system, as the fuel burns along the length of his nanotubes, the wave of combustion drives electrons ahead of it, creating an electrical current. It’s a much more direct and efficient way to generate electricity, since no turbines or conventional generators are required.

Since the nanogenerator runs on liquid fuels—which store far more energy than batteries—there’s hope that they could allow electric cars to go much farther than they do now.
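The energy-density gap behind that hope is easy to quantify with ballpark figures. The numbers below are mine, not the article’s, but they show why even an inefficient fuel-burning generator can compete with batteries:

```python
# Ballpark energy densities (illustrative values, not from the article):
GASOLINE_WH_PER_KG = 12_000   # ~12 kWh per kg of chemical energy
LI_ION_WH_PER_KG = 200        # ~0.2 kWh per kg for a typical cell

def usable_wh_per_kg(source_wh_per_kg, conversion_efficiency):
    """Energy actually delivered per kg after conversion losses."""
    return source_wh_per_kg * conversion_efficiency

# Even at the 25% efficiency of a conventional generator, fuel wins big:
fuel = usable_wh_per_kg(GASOLINE_WH_PER_KG, 0.25)   # 3000.0 Wh/kg
battery = usable_wh_per_kg(LI_ION_WH_PER_KG, 0.90)  # 180.0 Wh/kg
print(fuel / battery)  # fuel delivers roughly 16x more usable energy per kg
```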

It’s a setup not unlike the one in an internal combustion engine, in which bursts of fuel are sprayed into combustion chambers to drive pistons. Power electronic circuits could take the bursts of power from several nanotube generators and smooth it out, using it to drive electric motors in a car, for example. The fuel tank could be refilled like one in a conventional car. And because the carbon nanotubes aren’t consumed in the process, they can be used over and over again.

Recently, Strano discovered that switching from nanotubes to flat sheets of nanomaterials—such as single-atom-thick graphene—improves efficiency. Shaping the sheets to direct the energy of the thermopower wave also boosts performance.

Software able to understand human interactions could help with business decisions.

Photocopiers, PCs, and video conferencing rooms all rose from being technological novelties to standard tools of corporate life. Researchers at IBM are experimenting with an idea for another: a room where executives can go to talk over business problems with a version of Watson, the computer system that defeated two Jeopardy! champions on TV in 2011.

An early prototype has been made in the Cognitive Environments Lab, which opened last year at IBM’s Thomas J. Watson research center in Yorktown Heights, New York. It is intended to explore how software that can understand and participate in human interactions could “magnify human cognition,” says Dario Gil, director for symbiotic cognitive systems at IBM research.

The lab looks more or less like a normal meeting space, but with a giant display taking up one wall, and an array of microphones installed in the ceiling. Everything said in the room can be instantly transcribed, providing a detailed record of any meeting, and allowing the system to listen out for commands addressed to “Watson.”

Those commands can be simple requests for information of the kind you might type into a search box. But Watson can also take a more active role in a discussion. In a live demonstration, it helped researchers role-playing as executives to generate a short list of companies to acquire.

First, Watson was brought up to speed by being directed, verbally, to read over an internal memo summarizing the company’s strategy for artificial intelligence. It was then asked by one of the researchers to use that knowledge to generate a long list of candidate companies. “Watson, show me companies between $15 million and $60 million in revenue relevant to that strategy,” he said.

After the humans in the room talked over the results Watson displayed on screen, they called out a shorter list for Watson to put in a table with columns for key characteristics. After mulling some more, one of them said: “Watson, make a suggestion.” The system ran a set of decision-making algorithms and bluntly delivered its verdict: “I recommend eliminating Kawasaki Robotics.” When Watson was asked to explain, it simply added: “It is inferior to Cognilytics in every way.”
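The query step of that demo can be imagined as simple parsing plus filtering. This toy sketch, with invented company data, shows one way a spoken-style command like the one above could be handled (IBM’s actual pipeline is certainly far more sophisticated):

```python
# Toy sketch: parse a revenue range out of a spoken-style command and
# filter a candidate list. Company names and revenues are invented.
import re

def parse_revenue_range(command):
    m = re.search(r"between \$(\d+) million and \$(\d+) million", command)
    lo, hi = int(m.group(1)), int(m.group(2))
    return lo * 1_000_000, hi * 1_000_000

def shortlist(companies, command):
    lo, hi = parse_revenue_range(command)
    return [name for name, revenue in companies if lo <= revenue <= hi]

candidates = [("Cognilytics", 45_000_000),
              ("Kawasaki Robotics", 80_000_000),
              ("AcmeML", 20_000_000)]
cmd = "Watson, show me companies between $15 million and $60 million in revenue"
print(shortlist(candidates, cmd))  # → ['Cognilytics', 'AcmeML']
```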

IBM’s researchers are also considering other ways the technology at work in their current demo might help out in a workplace—for example, by having software log the relative contributions of different people to a discussion, or deliver a kind of fact-checking report after a meeting that highlights mistaken assertions.

By surfacing that kind of information, Watson could change the dynamics of group interactions for the better, says Gil. “Watson could enhance collective intelligence by facilitating turn taking, or having a neutral presence that can help prevent groupthink,” he says. For example, people may feel freer to question their boss’s opinion if Watson is the first to suggest there is another way of looking at a problem.

IBM is not the first to try to improve meetings by having software understand and enhance them. One large project backed by the European Union developed technology that records and summarizes meetings using a combination of speech recognition and sensors that tracked participants’ head movements and gaze for signals of the most useful content.

“Using recognition and content analysis technologies has a significant potential to enhance both face-to-face and remote meetings, and could significantly improve organizational cultures,” says Steve Renals, a professor of speech technology at the University of Edinburgh who helped lead that project.

However, the accuracy of speech transcription remains a challenge to the reliability of such technology, says Renals. Even a person speaking directly into a microphone in a quiet room is unlikely to have all their words transcribed correctly, and meetings come with extra problems such as people talking over one another, echoes, and incidental noises such as tapping pens.
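Transcription accuracy of the kind Renals describes is usually measured as word error rate (WER): the word-level edit distance between the system’s output and a reference transcript, divided by the reference length. A minimal implementation:

```python
# Word error rate (WER): Levenshtein distance on words / reference length.

def word_error_rate(reference, hypothesis):
    ref, hyp = reference.split(), hypothesis.split()
    # Dynamic-programming edit distance over words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[len(ref)][len(hyp)] / len(ref)

ref = "watson make a suggestion"
hyp = "watson made a suggestion please"
print(word_error_rate(ref, hyp))  # → 0.5 (one substitution, one insertion)
```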

In the demonstration shown to MIT Technology Review, the IBM participants wore microphones to give Watson a clearer signal. But Gil’s team is also working on a system of microphones able to collect sound from multiple very focused—but steerable—directions. It would use information from cameras in the ceiling to lock onto people and get a clear recording of their speech.

Andrew Maimone thinks augmented reality hasn’t been much more than a gimmick so far.

Maimone, a PhD student at the University of North Carolina at Chapel Hill, is developing a new kind of head-worn display that could make augmented reality – whereby digital objects or pieces of information are overlaid on the real world via a screen – significantly more immersive.

While it’s possible to use a smartphone or tablet to, for example, conjure a virtual character and place it onto a real-world table viewed on the device’s screen, this just “isn’t very compelling,” says Maimone. “The experience doesn’t occur in one’s own vision,” he says. “It acts as little more than a small window into the virtual place.”

Conventional augmented reality glasses use lenses, beam splitters, waveguides, reflectors, and other optics to relay an image to the eye, and place the image at a distance where the eye can focus on it. These components add bulk, however, and the resulting glasses usually have a limited field of view.

Together with three other researchers from the University of North Carolina and two from Nvidia Research, Maimone has been working on an entirely new kind of augmented reality device that is light and compact, and offers a wide field of view.

Maimone’s device, called a Pinlight Display, does not use conventional optical components. It replaces these with an array of bright dots dubbed pinlights. “A transparent display panel is placed between the pinlights and the eye to modulate the light and form the perceived image,” says Maimone. “Since the light rays that hit each display pixel come from the same direction, they appear in focus without the use of lenses.”

In this configuration, small fragments of the image are flipped and superimposed, so the team has compensated for this by performing some image manipulation in software.

“One could think of Pinlight Displays as exploiting how the eye sees an image that is out of focus, in order to form an image that is in focus,” says Maimone. “The resulting hardware configuration is very simple—there are no reflective, refractive, or diffractive elements—so we do not run into the trade-off between form factor and field of view that has been encountered in past glasses designs.”
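The software correction described above can be modeled very simply: if the optics flip each small tile of the image, the renderer flips every tile in advance so the viewer sees it upright. The tile size and plain nested-list image format below are my own assumptions for illustration:

```python
# Simplified model of the Pinlight pre-correction: flip each tile of the
# image both horizontally and vertically before display, so the optics
# un-flip it for the viewer. Tile layout is an assumption.

def preflip_tiles(image, tile):
    """Flip every tile x tile block of a 2D image in both axes."""
    h, w = len(image), len(image[0])
    out = [row[:] for row in image]
    for ty in range(0, h, tile):
        for tx in range(0, w, tile):
            block = [row[tx:tx + tile] for row in image[ty:ty + tile]]
            flipped = [list(reversed(r)) for r in reversed(block)]
            for dy, row in enumerate(flipped):
                out[ty + dy][tx:tx + len(row)] = row
    return out

img = [[1, 2, 3, 4],
       [5, 6, 7, 8]]
print(preflip_tiles(img, 2))  # → [[6, 5, 8, 7], [2, 1, 4, 3]]
```

Applying the transform twice returns the original image, which is exactly why pre-flipping in software cancels the flip introduced by the optics.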

The benefits of the approach over previous devices are significant. While state-of-the-art commercial augmented reality glasses have a field of view of 40° or less, early Pinlight prototypes have demonstrated fields of view of 100° or more. It’s an impressive breakthrough, as evidenced by this explanatory video, which shows the difference that a wide field of view makes when viewing, say, a holographic-style spaceship from Star Wars.

Maimone argues that the potential uses for the technology are wide-ranging. “I’d love to be able to navigate a city by following some virtual bread crumbs laid down on the sidewalk,” he says. “I’d love to have a virtual lunch with my wife every day as if she’s seated across the table. I’d love to see the name of a new acquaintance floating next to them when we meet. I’d love to have all of these things happen effortlessly in my glasses, and when they do, I think we’ll start to see computer graphics more as an integral part of our visual system, rather than something that exists only on external screens.”

There may be other potential benefits to the team’s approach. “Since part of the image formation process takes place in software, we can adjust parameters such as eye separation and focus dynamically,” says Maimone. “[Therefore] we can imagine incorporating the pinlights into the corrective lenses or ordinary glasses, creating a display that looks like ordinary glasses with the addition of an LCD panel.”

Problems remain for the team, which recently showed off the technology at the Siggraph 2014 conference in Vancouver. The prototype suffers from low resolution and image quality, far below the level of existing commercial augmented reality glasses. Additionally, the team must still successfully implement tracking, networking, low-latency rendering, and various other features.

“The next step is to improve these factors,” says Maimone. And despite his skepticism about the current state of augmented reality, he believes that with the right research and engineering, the technology could be “transformed into something practical for everyday use.”

The Internet is critical infrastructure for homes, businesses, and government.

A system crash blacking out broadband service for all 11.4 million of Time Warner Cable’s customers for three hours early Wednesday morning raises questions about the stability of U.S. Internet infrastructure and the potential impact of Time Warner’s proposed mega-merger with Comcast, experts say.

A human error that cascaded throughout Time Warner’s Internet routers appears to have triggered the outage. The company said in a statement that during overnight network maintenance, “an erroneous configuration was propagated throughout our national backbone.”

“It looks like someone put the wrong configuration into one or more devices that propagated throughout their network,” says David Erickson, a cofounder of Forward Networks, a startup that develops advanced networking software. “This type of error is preventable, and detectable with the right software.”
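Erickson’s point, that this class of error is preventable and detectable before deployment, can be illustrated with a toy pre-flight check. Everything here (the config shape, field names, the invariant being checked) is invented for illustration; real network-verification software models far richer properties.

```python
def validate_config(config, known_peers):
    """Toy pre-deployment check: reject any route whose next hop
    is not a known peer, before the change is propagated."""
    errors = []
    for route in config.get("routes", []):
        hop = route.get("next_hop")
        if hop not in known_peers:
            errors.append(f"unknown next hop: {hop}")
    return errors
```

The idea is simply that a configuration change is run through such checks first, and only pushed to the national backbone if the error list comes back empty.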

A spokesman for Time Warner Cable, Scott Pryzwansky, would not discuss the details of the incident or why it wasn’t prevented, but he said the company is working to make sure it never happens again.

The lack of disclosure about accidental outages is itself a serious issue, says Jonathan Zittrain, professor of Internet law at Harvard Law School and the John F. Kennedy School of Government. “We ought to have standards for release of data by broadband providers to allow apples-to-apples comparisons and tracking of outages over time so the public, and policymakers, can gauge trends in connectivity,” he says.

Companies are required only to disclose details about forthcoming planned outages, not to share information after accidental ones. And Time Warner Cable is already in trouble for failing to do that. The U.S. Federal Communications Commission said Monday that the company had admitted to not submitting network outage reports on time and would pay a $1.1 million fine.

Wednesday’s outage also highlights a lack of alternatives for U.S. customers if one provider flames out. Some 28 percent of U.S. customers have only one choice of broadband provider and 37 percent have two, according to the FCC.

When just one or two companies own all the information networks in a region, the impact of any outage is increased, says James Cowie, chief scientist at Dyn, a company that provides Internet traffic management and performance assurance. “Right now, last-mile monopolies and duopolies are a significant source of risk in the American Internet, and it’s not yet clear how to build around that,” he says.

Time Warner Cable is the second-largest cable company in the nation and is seeking federal approval for a proposed $45 billion merger with the number one provider, Comcast. This week’s outage is sure to figure in regulators’ analysis of what impact the merger could have on consumers, says Susan Crawford, professor at the Benjamin Cardozo School of Law and co-director of the Berkman Center for Internet & Society at Harvard University (see “Here’s Why the Proposed Comcast/Time Warner Cable Merger Is Bad”).

She says the incident lends weight to arguments that cities and other public agencies should be encouraged to build their own networks and dilute the influence of corporations over the nation’s information infrastructure. “Time Warner’s crash is another argument for why we should be doing everything we can to help municipalities build alternative fiber networks,” she says.

The Time Warner Cable outage is believed to have been one of the largest ever to occur in the United States, though exact data are not available. But it wasn’t the only wide-scale failure in the past week. Last Saturday the small cable provider Charter Communications, which serves far fewer customers, also suffered a nationwide outage.

Harvard’s Zittrain says that the Time Warner and Charter outages also highlight the need for emergency networks that can fill in when communications are disrupted. Such capacity is currently lacking, he says. For example, wireless networks were overloaded after the Boston Marathon bombings (see “Former FCC Chairman: Let’s Build an Emergency Ad Hoc Network in Boston”).



When I meet with our Canadian customers, the first thing I hear from each of them is that, like most companies, they want to grow their business and become more successful. With competition at every corner, from the local startup to the global conglomerate overtaking the market, a lot of companies are struggling to remain relevant.

Some reports are pointing to the fact that Canadian businesses are lagging behind the rest of the world when it comes to exporting; however, it seems that small and medium enterprises are starting to close the gap.

SAP recently surveyed 2,100 small and medium-sized businesses in 21 countries around the globe. The findings showed that although three-quarters of SMEs globally generate revenue through exports, only 26% of Canadian small businesses are exporting products outside of Canada. In fact, larger businesses in Canada fared even worse according to the survey.

A study published by Deloitte underscores this trend and demonstrates the fact that Canadian companies have a reputation for failing to follow even successful global trends. Peter Brown, a senior practice partner with Deloitte, says that Canadians, and Canadian businesses, tend to shy away from exporting, likely because of their aversion to the risks involved with exploring new markets despite evidence that exporting is good for business. “Companies who export survive…Exporters have much more lasting power in business.”

It’s not just a cultural phenomenon that keeps Canadian businesses from being on the leading edge; it’s also that Canadian businesses are lagging in the business investments they make for their employees.

Although business investment is critical to economic growth and innovation, a C.D. Howe report released this month shows that Canadian businesses’ investment per worker is lower than the average among OECD countries, and that the rate of spending is falling drastically behind countries such as the U.S. and Australia.

The trend is concerning for the entire country, but especially for Ontario and Quebec, where many of Canada’s largest companies are based. Forecast investment per worker for these two provinces in 2013 and 2014 is at its lowest level in 10 years, despite many businesses in Canada sitting on large cash reserves due to tax relief programs. Quebec’s figure is particularly startling: it will be the lowest business investment in the 30 years that data has been collected.

Even Western Canada, which has benefitted from business investments in the energy and natural resources industries in the past, is showing trends of stalled and negative growth in the recent report.

The bright spot in the report is that the Atlantic Provinces seem to understand the principle of making investments to ensure growth, particularly Newfoundland and Labrador, whose economy grew faster than China’s in 2013.

So, my response to customers when I meet with them is simple: Invest now to grow your business and be innovative, and don’t be scared to take risks and try new things. All of Canada’s greatest innovations came from people who weren’t afraid to dream big and build for the future.


Americans lag behind Asians and Europeans when it comes to online shopping.

According to a recent Nielsen study, 63 percent of U.S. shoppers tend to research their purchases online, as well as use online reviews to inform their spending decisions. In addition, 78 percent reported finding online shopping convenient. However, many U.S. consumers are still hesitant to pull the trigger and make their purchases online.

Worldwide e-commerce rates are expected to reach $1.5 trillion this year, increasing 20 percent from last year. The most popular items purchased online are non-consumables, most being in the entertainment category—hotels, airlines, event tickets, sporting goods and toys.

Shoppers in the Asia-Pacific region have the highest online buying rates—so much so that online buying exceeds browsing rates in more than half of all product categories.

Western Europe leads the way on CPG e-commerce. Online spending rates in Britain went from $70 million in the first quarter of 2013 to $91 million in the same quarter this year. Similarly, France’s online spending on consumables has increased from $32 million to $42 million year over year.

There have been some significant jumps in American online shopping. The biggest jump was seen in airline reservations, up from 19 percent in 2013 to 43 percent in 2014. Hotels/tours accounted for 43 percent of online spending, up from 16 percent. Online purchases of electronic equipment also increased in prevalence, jumping from 15 percent of all online spending to 31 percent year over year. E-books, music and clothing/shoes also saw an increase.

“While online transactions make it easy to download a book, buy a ticket to a sporting event or book a hotel room, building a consumer base for consumable categories requires more marketing muscle,” said John Burbank, president of strategic initiatives, Nielsen. “Finding the right balance between meeting shopper needs for assortment and value, while also building trust and overcoming negative perceptions, such as high costs and shipment fees, is vital for continued and sustainable growth.”

What is stopping U.S. shoppers from online purchases? Well, 46 percent said it was shipping costs and 37 percent said they don’t feel safe disclosing credit card information online.

Even if U.S. shoppers prefer to buy in brick-and-mortars, retailers can’t ignore the mounting evidence supporting online’s growing significance. Of those surveyed in the Nielsen study, 54 percent subscribe to retailers’ email lists, and the same percentage spend considerable time browsing online prior to buying.


Guess’ reported net earnings in the second quarter of 2015 amounted to $22 million, a 50.5 percent year-over-year decrease. However, e-commerce afforded the brand some figurative sunshine, as sales were up 48 percent from last year.

Profit growth was especially scarce in the North American market, where revenues decreased 4 percent and comparable sales, including e-commerce, decreased 5 percent in U.S. dollars.

“Overall second quarter earnings were consistent with our expectations but were short of our operational goals due to a soft environment in North America, where traffic and promotional activity have continued to put our brick-and-mortar stores under pressure,” said Paul Marciano, CEO of Guess. “However, we are encouraged by our North American e-commerce business, which grew by almost 50 percent in the second quarter. So far in the third quarter, our fall collection in North American retail has not seen the traction with the consumer that we were expecting and we have adjusted our expectations for the back-half of the year accordingly.”

Total net revenue for the second quarter decreased 4.8 percent to $608.6 million.

The retailer reported a 4.1 percent decrease in North American revenue, which was down to $244 million. Net revenue for the company’s North American wholesale segment decreased 7.5 percent to $38.3 million.

On the heels of the fiscal reports, Marciano announced that Guess would close 50 of its 488 North American stores, a figure already down from 507 one year ago, reported Women’s Wear Daily. He felt the brand needed to adapt to today’s retail landscape.

“In addition to these 50 stores, 50 percent of our North American store base will come up for renewal in the next three-and-a-half years, which will give us flexibility to optimize our real estate portfolio,” Marciano said during a conference call with analysts.


Macy’s (NYSE:M) is hopping on the digital wallet bandwagon, announcing on Aug. 26 that shoppers would be able to use the new option to manage special offers and make in-store and online payments.

Shoppers simply sign into their Macy’s profile and choose My Wallet before registering a Macy’s credit card. Star Pass promotions will automatically be added to the wallet and applied online. Shoppers will also see a list of special offers when they swipe their card during in-store checkout. The digital wallet can include up to 10 credit cards.

With security such a concern for many consumers when it comes to using digital wallets, Macy’s has added a couple of safety measures to make sure customer information doesn’t end up in the wrong hands.

Users are automatically logged out of the service if the wallet hasn’t been used for 30 minutes, meaning if shoppers should lose their mobile device with My Wallet registered on it, a username and password will have to be provided to access the wallet again.
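The idle-timeout behavior described above amounts to a simple last-activity check. A minimal sketch (the class and field names are invented; Macy’s actual implementation is not public):

```python
import time

IDLE_TIMEOUT = 30 * 60  # 30 minutes of inactivity, per the article

class WalletSession:
    """Toy model of an auto-expiring digital wallet session."""

    def __init__(self, now=None):
        self.last_used = time.time() if now is None else now

    def touch(self, now=None):
        # Any wallet activity resets the idle clock.
        self.last_used = time.time() if now is None else now

    def is_active(self, now=None):
        # After 30 idle minutes the user must log in again.
        now = time.time() if now is None else now
        return (now - self.last_used) < IDLE_TIMEOUT
```

A lost device then exposes, at most, a session that expires on its own within half an hour of the last use; anything after that requires the username and password again.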

Macy’s is the most recent retailer to tinker with a digital wallet, despite similar forays into new payment options that have shown mixed results. Neiman Marcus recently enabled Visa Checkout for online payments, while Amazon has begun testing a wallet of its own with little fanfare.

After reporting lower-than-expected earnings and tempering its outlook for the rest of the year, it has become critical that Macy’s make sure it’s as easy as possible for shoppers to find and buy the items they want.

The new payment initiative is part of the retailer’s strategy to build out its online support and omnichannel options. The company has promised to invest $1 billion in technology and infrastructure to bolster its efforts.


Dollar General (NYSE:DG) announced it would continue its bid for Family Dollar (NYSE:FDO) after reporting a 2 percent increase in second quarter income.

The retailer said an increase in shoppers and shopper spending brought the chain’s net income to $251 million.

Sales for the second quarter totaled $4.7 billion, up 7.5 percent from a year ago.

Earlier this month, Dollar General’s $9.7 billion takeover bid was rejected by Family Dollar, but CEO and chairman Rick Dreiling said the push for a deal will continue.

“In regards to our proposal to acquire Family Dollar, we remain firmly committed to the acquisition,” Dreiling said in the quarterly statement. “The financial benefits of our offer to Family Dollar shareholders are indisputable, and the proposed combination would unlock tremendous value for Dollar General shareholders. We continue to believe the potential antitrust issues are manageable and that our transaction as proposed is both superior and achievable.”

Family Dollar’s board said the original bid was rejected because of antitrust issues, even though Dollar General agreed to close 700 stores to appease regulators.

Combined, Dollar General and Family Dollar would own more than 19,000 locations, trumping Dollar Tree’s 5,000 locations, reported USA Today.

Dollar General also reported a 2.1 percent rise in same-store sales this past quarter, driven by higher transaction values and customer traffic. Other contributors to the increase in sales included new stores, tobacco products, perishables, and snacks such as candy.

The retailer predicts total sales to increase 8 to 9 percent in fiscal 2014, with same-store sales increasing 3 to 3.5 percent.

Dreiling recently announced he would retire from his position as CEO in May or upon the appointment of a replacement.

The deal will end a long-standing family feud over control of Market Basket, the New England supermarket chain. Arthur T. Demoulas has been granted authority to run the company from a consultant position as the rest of the deal closes in the coming months.

Demoulas’ firing in June had set off a number of employee walkoffs and resultant boycotts by dedicated customers, costing the company millions in sales and augmenting competitors’ profits, reported Supermarket News. Throughout the six-week period following the firing, several supermarket brands, including The Delhaize Group, were rumored to have bid on ownership of Market Basket, but nothing materialized.

“Effective immediately, Arthur T. Demoulas is returning to Market Basket with day-to-day operational authority of the company. He and his management team will return to Market Basket during the interim period while the transaction to purchase the company is completed. The current co-CEO’s will remain in place pending the closing, which is expected to occur in the next several months,” a Market Basket statement said.

Anonymous sources reported that Demoulas and his sisters had agreed to pay more than $1.5 billion for the 50.5 percent of the company owned by the Class A shareholders appointed by his cousin, Arthur S. Demoulas. The offer includes a $500 million contribution from a private equity firm and a mortgage loan secured by the company’s real estate holdings in Massachusetts.

“All associates are welcome back to work with the former management team to restore the company back to normal operations,” the notice stated.

The agreement ends a dispute so bitter that the governors of Massachusetts and New Hampshire had to help the Demoulas family reach a resolution, reported the Boston Globe.

Arthur T. Demoulas will now be authorized to manage the business and stabilize operations at the company’s 71 stores.


Fans of middling opening-season college football matchups rejoice: Dish Network (NASDAQ: DISH) has resolved a dispute with Fox Sports 1 and will carry four intersectional matchups slated for Labor Day weekend.

“We are proud to deliver the most college football anywhere, at the best possible value,” said Dish VP of programming Josh Clark in a statement.

Dish did not disclose the terms of its agreement with the national sports network.

The satellite carrier, which has aggressively promoted its carriage of college-based regional sports networks, announced earlier via corporate blog that it might black out the four games. Dish claimed Fox agreed to pay an “inflated price” to license the games and was passing it along in the form of a surcharge to operators, on top of the carriage fee they already pay Fox Sports 1.

With Dish running funny commercials featuring former college football stars like Brian Bosworth, Heath Shuler and Matt Leinart, and Fox Sports 1 carving out a reputation for reasonable carriage fees early on, the near-blackout created perhaps outsized buzz, given that the four matchups featured only one team ranked in the Associated Press pre-season top 25 (No. 10 Baylor).


Two decades ago now, ABC Nightline anchor Ted Koppel coined the phrase “news you can choose,” in a speech about the growing fragmentation of media sources differentiated by their underlying messages. People were beginning to select the news they wanted to hear, and the Web was starting to demonstrate how news could appeal to more narrow, even vertically-defined, interests.

One of the more important studies the Pew Research Center has ever released–first made available on Tuesday (.pdf)–demonstrates that the trend Koppel warned about goes both ways. Social media–which, by definition, is two-way–is fragmenting participants’ responses and interactions with others on important topics into “views they can choose”: aspects of their character that may be colored, or entirely falsified, to appear in agreement with a desirable sub-segment of the social network. For those who cannot alter their viewpoints this way, the effect is to stop them from participating altogether.

A survey conducted at this time last year, with cooperation from Rutgers University, asked 1,801 Americans ostensibly about their views concerning then-recent revelations by former NSA contractor Edward Snowden of clandestine metadata-collecting operations by NSA employees. A previous Pew poll had already determined Americans’ opinions to be divided almost down the middle, with a slight preponderance saying Snowden’s leaks served the public interest.

But the survey then went on to ask under which conditions people would be willing to discuss the Snowden topic openly in public. While some 75 percent of all respondents were either very or somewhat willing to discuss Snowden at a family dinner, just 43 percent of the subset of respondents (960 people) who used Facebook, and 41 percent of the subset (223 people) who used Twitter, were as willing to discuss the Snowden topic online.

And in the most chilling finding of the entire survey, just 7 people (when you do the math) were willing to discuss Snowden under any other circumstances, when they were also unwilling to discuss the topic online. Put another way, people who use social media are less likely to speak openly on a controversial topic with their own family face-to-face than people who don’t.

The Rutgers/Pew team’s conclusion is the inverse of the picture of boundless enablement and stimulating engagement that the Web loves to propagate: Social media is training people, systematically, to keep silent.

Is it because people on social networks are mostly disagreeable? Is it that trolls are far too willing to make harmful comments in response to points of view they characterize as being both wrong and in the minority?

Evidently not. A scan of the actual survey results reveals the question of whether respondents felt people typically agree with their points of view. Of 1,017 respondents who are either married or otherwise espoused, 85 percent said their spouses agreed with them on public issues either somewhat or a lot. Some 69 percent said their family tends to agree with them, and 72 percent say their close friends are agreeable.

Even among the social network users, there’s a belief that those who follow them are typically agreeable, with some 60 percent of Facebook users and 50 percent of Twitter users saying their followers tend to agree with them. The process of social selection is clearly aligning people’s points of view. As some Facebook observers have noted, users’ news feeds are indeed becoming echo chambers of their own sentiments.

So it isn’t the lack of a receptive audience that’s the problem. The social pressure, Pew concludes, appears to come from the agreeing parties online: pressure not to make utterances anywhere, including in public, that go against the party line.


There’s a huge difference between “openness” and “partnership”. An “open” strategy is typically intended to spread one’s wealth throughout a market equally, in an effort to build a platform. The latter is an exclusive arrangement through which the partners expect to attain a competitive advantage.

Then there’s the situation where a vendor tries to engineer a competitive advantage for itself when it also has partners to consider. That was the case with VMware. Last year at this time, the company announced a Virtual SAN initiative with its corporate parent EMC. It’s hard to characterize a parent company as a partner–though VMware certainly tried.

The move was widely feared to be VMware leveraging the storage prowess of EMC to gain some competitive advantage for itself. Monday morning, during the Day 1 keynote at VMworld in San Francisco, CEO Pat Gelsinger found himself apologizing to partners for the way all that turned out.

He discussed the ongoing beta cycle for VSAN 2.0, which is being shipped as part of the beta for vSphere 6. But there’s another part of vSphere 6 called virtual volumes, or vVols (pronounced “vee-vols,” like Boris Badenov saying “weevils”). Originally, they weren’t supposed to be interchangeable. But when vVols began addressing partners’ expectations about what they expected VSAN to be, the vVol ended up being an adequate olive branch.

“When we announced VSAN… I apologize to you, our industry partners, particularly in the storage area,” said Gelsinger, “because one of our theses of disruptive innovation is always enabling the ecosystem to come along. And VSAN didn’t enable you to do that. VVol does, and we’re committed to delivering this to participate in the software automation and policy management of that platform, as we continue to innovate on VSAN and the integrated technology that comes as part of vSphere 6.”

VSAN’s purpose is to pool various storage resources, including HDD and flash memory, into what appears to VMs to be a single pool. Partners have an interest in producing hardware that supports VSAN just as easily as it supports a physical SAN from EMC, and Dell has been one of those partners. Other companies that produce so-called converged infrastructure technology have an interest in being compatible with vSphere.

But last February, the fact that some of those partners would actually compete with VMware in the virtual storage space directly–specifically, for contracts with OEMs like Dell–led to their being disinvited from VMware’s annual partner conference. One of them was Nutanix, a producer of converged storage software. In June, Nutanix responded to the snub by entering into an OEM agreement directly with Dell–perhaps VMware’s most valuable single partner–to integrate Nutanix software into Dell’s XC Series storage appliances, often bundled with its PowerEdge servers.

The public partnership forced VMware to turn up the volume, as it were, on vVol, literally days later.

“For the first time, and unlike previous beta cycles for vSphere, the vSphere beta and VVols beta are open for everyone to sign up,” wrote VMware senior architect Rawlinson Rivera on his company’s blog last June 30. “This approach allows participants to help define the direction of the world’s most widely adopted, trusted and robust virtualization platform. With Virtual Volumes (VVols), VMware offers a new paradigm, one in which an individual virtual machine and its disks, rather than a LUN, become a unit of storage management for a storage system. Virtual volumes encapsulate virtual disks and other virtual machine files, and natively store the files on the storage system.”

By some accounts, the vVol beta program (upper- or lower-case initial “v,” depending upon whom you ask) has been one of VMware’s most popular. If partners, and eventually the world, come to embrace vVols, one will wonder what happens when the parent company finds itself holding the wrong end of the stick.


“Before NSX came along, it was thought that you could not do networking in software,” said Raghu Raghuram (pictured right), general manager for software-defined data center at VMware, during Day 2 keynotes at VMworld in San Francisco on Wednesday. “NSX changes all of that, and in just one short year, the largest banks, the largest telcos, indeed the largest enterprises in the world are all deploying NSX to transform their networks. That’s amazing momentum.”

This is the story of that little comment–whose nuances may not be self-evident.

There is a mindset that networking architecture, as a basic principle, must be “open” to the extent that it enables devices (both physical and virtual) to plug in. There is a second mindset that says such architecture must be “inclusive”. The two are different in the way that a fisherman having lunch and a shark having lunch are quite different.

Last year, VMware introduced a concept called NSX. It serves as a network overlay, which enables a server to connect to various physical network components, but to process and route traffic using rules and processes that don’t resemble physical networking at all. Indeed, network engineers say these overlays create tunnels between switches–a metaphor much like “cloud,” representing processes that, once everything works right, you shouldn’t have to see or care about.
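The “tunnels between switches” idea can be made concrete with a toy VXLAN-style encapsulation (NSX has used VXLAN as one of its overlay encapsulations; the eight-byte header layout here follows RFC 7348, and the sketch deliberately omits the UDP/IP wrapping that carries it across the physical network):

```python
import struct

def vxlan_encapsulate(vni, inner_frame):
    """Prepend a minimal VXLAN-style header to an Ethernet frame.

    The header is 8 bytes: a flags byte (0x08 marks the VNI as valid),
    reserved bits, and a 24-bit Virtual Network Identifier (VNI) that
    keeps tenants' overlay traffic separate on shared physical links.
    """
    header = struct.pack("!II", 0x08 << 24, (vni & 0xFFFFFF) << 8)
    return header + inner_frame
```

The physical switches only ever see ordinary packets between tunnel endpoints; the VNI inside the header is what lets software define which virtual network each inner frame belongs to.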

VMware’s NSX is different from OpenFlow in that it does not try to represent a physical network in a virtual way, however new that way may be. It does its work with its own methods. Although it emerged from a process that borrowed from Open vSwitch technology–which VMware inspired before it became marshaled by Apache–its aim is actually to replace virtual switches as separate components, with server components that perform the switching function.

It’s networking, it’s in software and it lends definition. So there’s an “S,” a “D,” and an “N”. But since its inception, VMware has positioned NSX as something other than SDN. It has stopped short of calling NSX a competitor, probably out of courtesy toward those outside parties with whom it must still cooperate to achieve network interoperability.

Last January, in an interview with NetworkWorld‘s John Dix, VMware senior vice president Steve Mullaney took that tack to the very precipice. He said he believed in “sdn”, except as a philosophy, which he prefers described with lower-case letters. Yes, software will define networking. But the purpose of having software do that in the first place, he argued, was to decouple software from the physical infrastructure, in such a way that the software is not bound to hardware’s rules.

“It has nothing to do with controlling physical switches and using OpenFlow to control those switches,” said Mullaney. “The key is not to have to touch the physical infrastructure. Leave it alone and do what you do as an augmentation. Make that physical infrastructure better without touching it.” He went on to say, in classic VMware fashion, that any other implementation of defining a network in software is a “bastardization” of SDN, characterizing alternative approaches as both spurious and wrong.

Which brings us back to last Wednesday. VMware’s Raghuram took that tack one critical step further, which leaves open the question of whether VMware is stepping onto solid ground or into a more negative connotation of open space. VMware has a concept of the software-defined data center (SDDC, upper-case) which excludes SDN (upper-case). Now, that exclusion is more than just bastardization. SDN doesn’t even exist.

“SDDC,” said Raghuram, “is the only architecture that can resolve the conflicts that Ben [Fathi, VMware CTO] talked about [namely, the need to decouple]. It is the architecture for today and for tomorrow. It is the architecture that brings together traditional applications and cloud-native applications. It is the architecture that allows IT to run the infrastructure, and DevOps and developers to consume infrastructure programmatically. It is the architecture that enables governance and control on the one hand, while enabling self-service and elasticity on the other hand.”


A few weeks back I mentioned how Microsoft was crowing over 785 customers returning to the Microsoft fold after trying Google Apps. Well, Google is striking back, sort of.

After that blog post, Google came to me with its rebuttal, which included the claim of 5,000 companies signing up for Google Apps on a daily basis. I called shenanigans, because at 5,000 companies per day they would grab every company in the U.S. in just a few years.

Google has in fact made its case to analysts, providing them with its own facts and numbers. Someone leaked it to Forbes, and Google’s not very happy. But the company has confirmed the stats do come from it.


Burger King (NYSE:BKW) wants customers to use its new mobile app, so it’s giving away Android smartphones.

Burger King began updating its mobile app in March, and the giveaway coincides with the official launch. The app features a store locator, coupons, menus and limited mobile payment options.

Samuel Heath, director of revenue management and pricing for BK, told Mashable that the offer is meant to get handsets into the hands of more shoppers. “We wanted to make sure we took care of all the people who come to Burger King,” he said. “You forget sometimes that only half the people have smartphones.”

The offer comes with the same restrictions as most discounted new phones: a two-year new or upgrade agreement with AT&T, Sprint or Verizon. There are more than 20 phones available, including the Samsung Galaxy S3 and Galaxy S4.

The phones are available online and shoppers are prompted to enter a promotional code at checkout.

Burger King may have to do more than give away free phones to win over consumers after news that it plans to relocate to Canada following the proposed $11.4 billion acquisition of Canadian coffee and doughnut chain Tim Hortons. Moving its corporate headquarters in a tax inversion would allow the new company to pay a lower tax rate, but the move is drawing criticism from consumers and lawmakers alike.


One cannot talk intelligently about big data without thinking of its ultimate outcome: artificial intelligence, or AI. Keeping abreast of such developments is therefore a natural and important part of staying informed on advances in big data. BabyX’s first words are of particular note: BabyX is not a real baby, but it shows us just how far AI has already come.

BabyX presents with a human toddler’s face, lending it an air of non-threatening innocence. Some will think it cute; others will no doubt find it creepy. Suffice it to say that the baby persona is intended to help researchers and observers remember the developmental stage of the machine’s learning, and to relate to it better as well.

Despite the face of a cherub, in reality it is decidedly neither otherworldly nor human. Instead it is “an experimental computer generated psychobiological simulation of an infant which learns and interacts in real time” conceived and delivered in life by scientists at the Lab for Animate Technologies in New Zealand. More specifically, “BabyX integrates realistic facial simulation with computational neuroscience models of neural systems involved in interactive behaviour and learning.”

You can read more about this project at the Auckland Bioengineering Institute’s Laboratory for Animate Technologies website. You can also read Chris Person’s take on this project in his post in Kotaku.

This is only one of many AI projects in development, but it’s certainly one of the most interesting, for a number of reasons. First, the technological achievements here are simply breathtaking and the potential for this tech is very high. But, arguably, the potential for danger is also higher in this project than in most. Why? Because the researchers used a real person, a real baby whom they know well, for this AI’s face, and as a result they may relate too emotionally to the project. They could easily fail to see dangers and problems it may present later, or overlook developments that would have alarmed them had the thing been wearing a stranger’s face.

Emotional involvement by researchers and developers is understandable. Passion for the project is what gets most humans over the obstacles and to the end goal. But this is not a real baby. And it certainly isn’t a twin of the real baby they know. Their emotional involvement is highly likely to be the one obstacle they can’t overcome.

I know that it isn’t politically correct or a welcomed response to the AI community to continually urge caution. But responsible scientists will continue to push for caution and reasonable constraints on AI anyway. The general public does not understand this technology, how far it has come or how far it is likely to go. We do. Therefore it falls upon the big data and AI communities to govern it.

Certainly I don’t want to be standing at the end of the world trying to explain why I never raised an alarm with a feeble “but it would have been so much worse for me to have said something because some people in the science community might have mocked me.”

Yeah, that would be lame.

Just because something is exciting and advanced does not mean it is good, innocent, benign, or beneficial–no matter how cute a baby face it wears. AI is growing up. We must address it now before it has advanced beyond our ability to do so.

Even so, I congratulate the researchers at the Lab for Animate Technologies, for this truly is an astonishing achievement. But I’m not convinced that these types of achievements are at all in our best interest. And I’m not the only one who thinks so.

“Artificial intelligence could be the worst thing to happen to humanity,” said Stephen Hawking.

“One can imagine such technology outsmarting financial markets, out-inventing human researchers, out-manipulating human leaders, and developing weapons we cannot even understand. Whereas the short-term impact of AI depends on who controls it, the long-term impact depends on whether it can be controlled at all,” wrote Stephen Hawking, Stuart Russell, Max Tegmark and Frank Wilczek in a post in The Independent.


RadioShack (NYSE:RSH) is in talks with shareholder Standard General to secure a rescue financing package. The retailer is hoping to avoid bankruptcy.

Standard General would bolster RadioShack’s cash by issuing debt or equity, a source close to the matter told Bloomberg. The firm is also assisting in developing a plan intended to help the retailer avoid filing for Chapter 11 bankruptcy. In addition, Standard General is seeking to refinance RadioShack’s $250 million second-lien term loan, which is held by Salus Capital Partners and Cerberus Capital Management.

In the most recent quarter, RadioShack reported a loss of $98.3 million as same-store sales fell 14 percent. Earlier this year, RadioShack creditors blocked the closing of 1,100 stores, limiting potential closings to as many as 200 instead.

Standard General owned more than 7 percent of RadioShack as of June 30, a stake that has since grown to 10 percent.

RadioShack would not be the first retailer resuscitated by Standard General. Last month the investor put together a package for American Apparel with as much as $25 million to help the flailing chain. Of that money, $10 million was used to buy out Lion Capital’s loan. As a result of the partnership with ousted CEO Dov Charney, Standard General was allowed to name three seats on the board, plus two seats agreed upon by both the firm and the company, one of which went to RadioShack CEO Joseph Magnacca.

To avert continuing losses, RadioShack has been trying to reinvent itself as a more modern retailer. A few months ago, the retailer introduced RadioShack Labs, a program to support inventors and startups aimed at creating new technology. The company has also undergone extensive store remodeling and begun rolling out new concept stores.

Despite being the younger sibling that Facebook took under its wing in 2012, Instagram is on its way to becoming a major marketing tool in its own right.

Both networks capture users through visual communication, but Instagram has recently overtaken Facebook with three times the actions per post and double the growth in brand posting, even though Facebook recently celebrated its tenth birthday.

So what does this mean for your business? Should you take a break from your Facebook posts to focus on Instagram?

Because Instagram’s marketing potential is still little understood, it’s best to dabble in multiple social networks rather than focus on one alone. Instagram certainly deserves attention for its high engagement and is a good way for your business to interact with users. Here’s what you can do to tap into the high level of Instagram engagement available to your business:

1. Balance the Two Networks

Facebook is still king of the social networking realm, boasting over a billion monthly active users. For this reason, Facebook is your home base, but it wouldn’t hurt to focus additional attention on Instagram.

It’s easier to reach users with Instagram because of its strictly visual posts. They’re quick messages with quick meanings, and your business can take advantage of that for branding and making a direct connection. Data also shows that the average Instagram post reaches more people than the average Facebook photo, which means there is more potential to reach your existing fans despite Facebook’s recent downturn in engagement.

However, Instagram has one drawback. As easy as it is to post a photo, it doesn’t have a built-in “re-gram” feature that encourages virality like retweets, reblogs, or Facebook sharing do. You’ll just have to share your Instagram images to Facebook and hope they get reshared from there.

2. Ease Users into Commercial Branding

McDonald’s recently drew complaints for overusing ads on Instagram. Marketing is new to this social network; until now, users have turned to it for interesting things, ideas and life events. Advertisements can strike a wrong note if shown too often, too fast.

The best way to introduce your business without seeming like an attention-seeking advertisement is to have the users’ best interest in mind. Entertain and enlighten them with pictures that have personal appeal and not just your company’s vision. Considering their personal feelings will lead to more actions per post.

3. Listen to Your Followers

Loyalty is a key part of building your business. You’ll want loyal clients and followers to keep you afloat and present in your field, and one way to get there is by listening.

After you post photos on Instagram, take time to read and respond to the comments. Observe which photos receive more attention than others, and cater your posts to user preference. With a band of loyal followers, your business will continue to grow.

Spread your marketing efforts across various social networks. Since Instagram engagement is relatively high, focus on how you can reach its users to introduce your brand, but use it as an accompanying tool to the ever-popular Facebook.


By Archana Venkatraman
The number of active wireless connected devices will exceed 40.9 billion by 2020 – more than double the current total, according to ABI Research. The explosion in connected devices will be driven by the internet of things (IoT).

“If we look at this year’s installed base, smartphones, PCs and other ‘hub’ devices represent 44% of the active total, but by the end of 2020 their share is set to drop to 32%,” said Markkanen. “In other words, 75% of the growth between today and the end of the decade will come from non-hub devices – sensor nodes and accessories.”

From every technology supplier’s strategic point of view, the critical question is how this plethora of IoT devices will ultimately be connected, said ABI. Until recently, the choices that OEMs have faced have been fairly straightforward, with cellular, Wi-Fi, Bluetooth and others all generally addressing their relative comfort zones.

Thread is a new IP-based wireless networking protocol initiated by IoT players such as Google Nest to find a new and better way to connect products in the home. It is thought that existing wireless networking approaches, devised and executed before the rise of IoT, are not capable of connecting large numbers of devices.

Thread is an IPv6 networking protocol built on open standards, designed for low-power 802.15.4 mesh networks. But with billions of devices powered by IoT, IPv4 will not be good enough, said Ian McDonald, IT director at Swiftkey.
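A quick back-of-the-envelope check (my own, not from the article) makes the IPv4-versus-IPv6 point concrete: IPv4’s 32-bit address space cannot even cover ABI’s forecast of 40.9 billion devices, while IPv6’s 128-bit space dwarfs it:

```python
# IPv4 uses 32-bit addresses; IPv6 uses 128-bit addresses.
ipv4_addresses = 2 ** 32    # ~4.3 billion possible addresses
ipv6_addresses = 2 ** 128   # ~3.4e38 possible addresses

forecast_devices = 40.9e9   # ABI Research's 2020 device forecast

# IPv4 cannot give every forecast device a unique address; IPv6 can.
print(ipv4_addresses < forecast_devices)   # True
print(ipv6_addresses > forecast_devices)   # True
```

In practice NAT stretches IPv4 further, but a flat, end-to-end addressable mesh of the kind Thread targets is only plausible with IPv6.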

Shey said it is not only setting the bar higher for ZigBee in the 802.15.4 space, but is also piling pressure on Bluetooth suppliers to enable mesh networking.

ZigBee is a specification for wireless personal area networks (WPANs) operating at 868MHz, 902-928MHz and 2.4GHz. Devices can communicate at speeds of up to 250kbps, and can be physically separated by up to 50 metres in typical applications, more in an ideal environment. ZigBee is based on the 802.15 specification approved by the Institute of Electrical and Electronics Engineers Standards Association (IEEE-SA).
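For a rough feel of what ZigBee’s 250kbps ceiling means in practice, here is a simple illustrative calculation (mine, ignoring all protocol overhead) of the raw time needed to move one kilobyte:

```python
data_rate_bps = 250_000    # ZigBee's maximum raw data rate at 2.4GHz
payload_bits = 1024 * 8    # one kilobyte of payload

seconds = payload_bits / data_rate_bps
print(round(seconds * 1000, 1))   # ≈ 32.8 ms, before any protocol overhead
```

Tens of milliseconds per kilobyte is ample for sensor readings, which is exactly the niche these low-power mesh protocols target.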

Shey added: “In the meantime, the LTE-MTC and LTE-M initiatives may well expand the market for cellular M2M, while startups like Electric Imp and Spark could do the same for Wi-Fi. And finally, we also should not ignore what is going on with passive, proximity-based connectivity offered by RFID and NFC.”


“Personally I would only own a TiVo box, none compare and I’ve tried most… I believe Synapse Synergy Group, Inc., that mysterious technology company people keep hearing about, that Think-tank, Incubator & Accelerator or something like that… Didn’t I hear tell they were building a box even better than TiVo?” – JNR

Continuing to transform its business to one driven by MSO partnerships, TiVo added 283,000 pay-TV subscribers in the second quarter, a 20 percent improvement over the same period in 2013.

The additions bring the DVR maker’s pay-TV customer base to nearly 3.9 million users. Overall, TiVo touts 4.8 million customers, with MSO adds in Q2 more than offsetting the loss of 20,000 users of retail-purchased devices.

TiVo has added about 1.2 million pay-TV subscriptions over the last 12 months vs. just 44,000 retail customers.

TiVo reported net income of $9.3 million in the second quarter, a significant year-over-year increase once a $276 million windfall in Q2 2013 relating to the Cisco/Motorola litigation settlement is taken out of the equation.

“We also continue to build out integration for operators of traditional video and next generation video into what is the only true advanced television bundle of all content from all sources,” said TiVo President and CEO Tom Rogers. “For example, Netflix (NASDAQ: NFLX) continues to be integrated into the TiVo cable set-top box experience with U.S. operators, which now include Atlantic Broadband, Cable One, Grande Communications, Suddenlink Communications, RCN Corp., Midcontinent Communications, and GCI in addition to European operators Virgin and Com Hem.”


A problem created by what the operator termed as “routine maintenance” resulted in a service interruption for Time Warner Cable’s (NYSE: TWC) 12 million broadband subscribers nationwide this morning.

“At 4:30 a.m. ET this morning during our routine network maintenance, an issue with our Internet backbone created disruption with our Internet and on demand services,” Time Warner Cable said in a statement. “As of 6 a.m. ET services were largely restored as updates continue to bring all customers back online.”

Separately but relatedly, the company will pay $1.1 million to resolve an FCC investigation that found the operator failed to report multiple network service outages in 2013.

“TWC failed to file a substantial number of reports with respect to a series of reportable wireline and Voice over Internet Protocol network outages,” the FCC said in a report revealing the settlement, which was released Monday and originally reported on by Reuters. “TWC admits that its failure to timely file the required network outage reports violated the Commission’s rules.”

The FCC requires providers of fixed Internet connection or VoIP calling to promptly report some network outages that last 30 minutes or longer, for instance those that potentially affect emergency response 911 facilities or those that impact enough consumers to collectively result in at least 900,000 minutes of disrupted Internet or phone use. Operators also have 30 days to file a report and explain what happened.
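To make the 900,000 user-minute threshold concrete, here is a small illustrative calculation (the outage figures are hypothetical, not from the FCC filing):

```python
def user_minutes(affected_users: int, outage_minutes: int) -> int:
    """Total disrupted user-minutes for a single outage."""
    return affected_users * outage_minutes

THRESHOLD = 900_000  # FCC reporting threshold in user-minutes

# Hypothetical examples: 30,000 subscribers offline for 30 minutes
# hits the threshold exactly; a smaller outage does not.
print(user_minutes(30_000, 30) >= THRESHOLD)   # True
print(user_minutes(10_000, 45) >= THRESHOLD)   # False
```

For an operator with 12 million broadband subscribers, even a brief national disruption clears this bar easily, which is why outage-reporting compliance matters at TWC’s scale.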

The rules were adopted in the post-9/11 era and motivated by public safety.

Specifically, the FCC found TWC “had failed to file a substantial number of initial reports and/or final reports with respect to a series of reportable wireline and VoIP network outages for which TWC had timely filed the required notifications.”

“We look forward to working with the FCC to ensure that its reporting rules are properly implemented and followed,” a Time Warner Cable spokesman said in a statement.

ZipRecruiter provides a platform for small and medium businesses to post job openings and connect with potential employees.

Founded in 2010, ZipRecruiter is already profitable and hadn’t taken in any external money prior to this round, which it plans to put toward building new products and potentially expanding outside of the U.S.


Amazon (NASDAQ:AMZN) will buy the video game streaming site Twitch for $970 million in cash, giving Amazon a leg up in the advertising pool shared by competitors such as Netflix and YouTube.

It was reported in May that Google (NASDAQ:GOOG) was in talks to acquire the company for $1 billion, but that deal never materialized, reported Re/code.

Twitch is a platform for making and discussing the recordings players make of their gaming experiences. The site, which is an offshoot of Justin.tv, had 50 million unique viewers in July. Justin.tv announced its coming shutdown earlier this month.

“We chose Amazon because they believe in our community, they share our values and long-term vision, and they want to help us get there faster. We’re keeping most everything the same: our office, our employees, our brand, and most importantly our independence. But with Amazon’s support we’ll have the resources to bring you an even better Twitch,” Twitch CEO Emmett Shear wrote in his blog.

Amazon has been working to create a successful platform for streaming movies, TV shows and original series, all in efforts to compete with Netflix, which recently reported strong second quarter results as it moves past the 50 million subscribers mark.

Bringing a video game streaming site into the Amazon fold could launch the mega retailer into a new echelon of consumer engagement, namely the market of free user-generated content already utilized by websites such as YouTube. Video game footage is among the most popular content on YouTube, the world’s No. 1 video website, reported Reuters.

Twitch allows some popular broadcasters to participate in a “partner” program, meaning they share in some of the ad revenue generated by advertisements that display against their videos and sometimes charge for paid subscriptions. This partnership may fit well into the Amazon Prime platform, which already includes access to streaming content.

Amazon has made a big push into expansion and new ventures. Although sales in the second quarter increased by about 25 percent, the retailer posted a net loss of $126 million because of new research programs and project funding.


American Airlines has diverted a flight carrying the president of Sony Online Entertainment following what may have been a bomb threat by a group of alleged hackers—the same group that has claimed responsibility for a series of outages across the PlayStation Network and other gaming services this weekend.

The FBI is currently investigating the incident, an SOE representative told Kotaku today.

This afternoon at around 1:30pm Eastern, the group Lizard Squad tweeted at American Airlines to say that a flight carrying John Smedley, president of SOE, the developer behind EverQuest Next among other games, had “explosives on-board.”


Hannah Arendt wrote about “the banality of evil”: “The neutral expressions on the shooter and his uniformed audience pretty well encapsulate that concept: they could be watching a barber cut hair, instead of the heartless extermination of innocents. Humans can adapt to endure almost anything, but in doing so, they sometimes perpetuate incredible evil. The death of human empathy is one of the earliest and most telling signs of a culture about to fall into barbarism.”

In other words, the government demands that all these organizations act as its unpaid spies. They all have quotas, and if any should fail to file a SAR, they’ll receive a visit from FinCEN.

Let me put this more clearly: even if a banker doesn’t feel like anything suspicious has happened, s/he is still required to file a minimum quota of SARs.

If you walk into a bank and say or do anything that’s slightly out of the ordinary, or simply different than the rest of the bank’s customers, chances are they’ll file a SAR.

FinCEN’s statistics show, in fact, that there has been a surge of SARs filed on bank customers who have conducted any Bitcoin-related transaction (e.g. transferring funds from a bank account to CoinDesk).

I used to know a broker who thought this requirement absurd and immoral. And in order to save his clients’ privacy, he would file all the SARs against himself.

But such values are unfortunately rare. Most bankers, brokers, etc. simply accept the duty of being an unpaid government spy. They’ll smile to your face and then file a SAR because you had the audacity to do something different.

I remember one FinCEN case in which they went after a remittance business in Chicago; the proprietor had been in business for decades and knew each of his customers personally.

He knew their circumstances, what they were doing, who they were sending the money to, etc. Many had even become personal friends. So he didn’t file any of the reports. FinCEN threw the book at him and ran the poor chap out of business. Totally disgusting.

But now comes a new case that takes the cake.

A few days ago, FinCEN enthusiastically announced that they had been working on a CRIMINAL investigation against a casino based in the Western Pacific… thousands of miles from US shores.

FinCEN had apparently sent some of its agents undercover to the Tinian Dynasty Hotel & Casino posing as hard-gambling, wealthy Russian businessmen.

These undercover agents indicated that they wanted to bring in large amounts of cash to gamble at the casino, and expressly requested that the casino not report the currency transactions.

The casino agreed. After all, why would an offshore casino bother reporting anything about Russian nationals to the US government anyhow?

Why indeed?

But such logic does not factor into FinCEN’s motivation. So they slammed the casino’s VIP Services Manager (a guy who’s not even a US person) for not filing any SARs.

There are so many things wrong with this that it’s hard to know where to begin.

Jon Corzine (of MF Global) walks the streets a free man.

Yet FinCEN is wasting taxpayer resources sending undercover agents to entrap some offshore casino, and they act as if they’ve infiltrated a major terrorist organization.

This is practically secret police stuff… all because a non-US person working overseas didn’t file a meaningless report on Russian businessmen.

Amid all the debt, graft, and incompetence that’s so prevalent today in government, this is another sad testament to the direction that things are headed.


The Chinese government has shown strong support for nuclear power as part of the country’s energy mix in its efforts to decrease air pollution from coal-fired plants. This year alone, China has brought three new reactors online, totaling 3.2 GW of capacity, according to research and consulting firm GlobalData.

In addition to the new reactors (Yangjiang 1, Hongyanhe 2 and Ningde 2), China has also made a substantial investment in two new units at its Haiyang facility.

“On February 27, 2014, the Chinese government agreed to invest $5.1 billion in the construction and development of two nuclear power units at the Haiyang nuclear facility in Yantai, Shandong Province,” said Sneha Elias, GlobalData’s power analyst. “The total installed capacity of the two units is 2.2 GW. The investment per megawatt will be $2.32 million.”

Additionally, the China National Nuclear Corporation and one of its subsidiaries, China Nuclear Engineering, are listing shares on the Shanghai Stock Exchange, for gross proceeds of $2.64 billion and $0.29 billion, respectively.

“China National Nuclear Corporation intends to issue up to 3.651 billion shares, or 25 percent of its enlarged capital, at a price of CNY4.46 ($0.72) per share, for gross proceeds of up to CNY16.3 billion ($2.64 billion) in an initial public offering,” Elias said. “The company intends to use these proceeds to finance its four nuclear power projects in Fujian, Zhejiang, Hainan and Jiangsu province, and for general working capital purposes.”
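The figures quoted above check out arithmetically; here is a quick verification (mine, using only the numbers stated in the article):

```python
# Haiyang units: $5.1 billion invested for 2.2 GW of capacity.
investment_usd = 5.1e9
capacity_mw = 2.2e3                  # 2.2 GW expressed in MW
per_mw = investment_usd / capacity_mw
print(round(per_mw / 1e6, 2))        # ≈ 2.32 (million USD per MW, as quoted)

# CNNC IPO: up to 3.651 billion shares at CNY4.46 each.
shares = 3.651e9
price_cny = 4.46
proceeds_cny = shares * price_cny
print(round(proceeds_cny / 1e9, 1))  # ≈ 16.3 (billion CNY, as quoted)
```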

Currently, China has 20 active nuclear reactors, with 28 more under construction and another 10 that are to begin commercial operation between 2017 and 2025.

That is the media technology question of the day, with the New York-based, VC-funded startup attracting scrutiny from pay-TV lawyers as it expands its reach from the Big Apple to Chicago.

According to Variety, “at least one” pay-TV operator has begun a legal review of the company to determine its legitimacy. The trade did not name the operator.

That NimbleTV should come under the legal crosshairs of the cable business isn’t surprising, given that the company essentially bypasses the moribund TV Everywhere initiative by offering authenticated streams of pay-TV programming to subscribers.

The company began last year offering New York-area Cablevision (NYSE: CVC), Time Warner Cable (NYSE: TWC), Verizon FiOS (NYSE: VZ) and RCN subscribers cloud-based access to their pay-TV programming through the usual assortment of OTT devices, notebook computers, smart phones and tablets. It currently touts around 80,000 subscribers.

On Wednesday, the company announced that it will move into metropolitan Chicago, offering its service to Comcast (NASDAQ: CMCSA) and AT&T U-verse (NYSE: T) customers.

Starting at around $5 a month, NimbleTV subscribers can access varying tiers of their pay-TV programming from the cloud on IP devices. Upper tiers allow the use of a virtual digital video recorder.

Speaking to Variety, NimbleTV founder and CEO Anand Subramanian insisted the company has initiated conversations with every major pay-TV operator and programmer. “Everybody knows what we do,” he said. Subramanian also said the company is “brutally insistent” that users be able to authenticate a legit pay-TV subscription.

Of course, as the arduous rollout of TV Everywhere has shown, both operators and programmers are concerned about a lot more than just authentication, starting with how they’ll tally audience sizes when part of the viewership is fragmented to a third-party service.

For its part, NimbleTV defines itself as a viewing-displacement service similar to technologies offered by Sling Media, which have been around for nearly a decade and have survived legal fire.

Broadcasters, however, might just as easily argue that the service is no different than that provided by nearly vanquished Aereo.


A University of Manchester graphene scientist has won a prestigious award for his business proposal to set up a standards service for the one-atom-thick material. Antonios Oikonomou is the winner of the Eli and Britt Harari Graphene Enterprise Award 2014 for his business enterprise, Graphene Characterisation and Standardisation Services (GCSS).

Graphene, first isolated from graphite at The University of Manchester in 2004, comes in a large number of different forms and can have varying levels of quality. A major challenge for researchers and commercial users is verifying samples and how they can be used in developing applications.

Antonios with graphene samples

GCSS will offer advanced graphene characterisation, certification and standardisation services which aim to use in-house expertise at The National Graphene Institute at The University of Manchester to develop the global standards of graphene quality. This aims to establish benchmark materials which can develop into standards that can be adopted by the rest of the industry.

Additionally, working with the standards bodies, industry and other research centres, GCSS aims to design new characterisation instruments that will bring the quality of lab equipment to an industrial level by improving the speed of measurement and establishing quality control in manufacturing processes.

The Eli and Britt Harari Graphene Enterprise Award is run in association with Sir Andre Geim, who together with Sir Kostya Novoselov won the Nobel Prize for Physics in 2010 for the isolation of graphene. The £50,000 award aims to encourage the development of new graphene enterprises from budding entrepreneurs across the University’s undergraduate, postgraduate and post-doctoral researcher communities, as well as recently-graduated alumni of the University.

Antonios, 32, who recently completed his PhD at The University of Manchester, said: “A major barrier for the commercialisation of emerging technologies, like graphene and other 2D materials, is the lack of quality standards across the industry.

“Through the implementation of various strategies, we want to guarantee that each batch of raw material manufactured will meet the quality and consistency that the downstream users require to develop innovative products.

“The award does not only provide the vital seed funding at this important stage, but also enables access to a number of important services throughout The University of Manchester.”

Professor Luke Georgiou, Vice-President of Research and Chair of the judging panel, said: “Antonios’ proposal meets a key requirement in the emerging graphene market, as many customers are unsure about the quality of graphene and 2D material samples they purchase from various sources, or for what applications they can be used.

“Establishing a standards centre on the University campus, the home of graphene, reinforces the University’s standing as the definitive voice of graphene in the world.”

The award is co-funded by the North American Foundation for The University of Manchester through the support of one of the University’s former physics students Dr Eli Harari (founder of SanDisk Corp), his wife Britt and the UK Government’s Higher Education Innovation Fund.

Applications were judged on the strength of their business plans to develop a new graphene-related business. The £50,000 prize will help to take the first steps towards realising the GCSS business plan. The award helps recognise the role that high-level, flexible early-stage financial support can play in the successful development of a business, targeting the full commercialisation of a product or technology related to research in graphene.

Chris Cox, Director of Development and Alumni Relations, said: “It is thanks to the work of the North American Foundation for The University of Manchester that such a visionary business case can be made a reality. This is a great example of philanthropy extending the reach of what is possible across campus, and opening up new avenues.

“We hope that the award will become a major feature in the graphene landscape at the University for years to come.”


Staples (NASDAQ:SPLS) announced it will close 140 stores this year as part of a turnaround plan announced earlier this year. The company reported an underperforming second quarter in which net income dropped 20 percent to $82 million.

To keep up with web-based rivals, Staples outlined plans in March to close as many as 225 North American stores through 2015 and to reduce costs by as much as $500 million, Bloomberg reported.

According to the company’s quarterly reports, sales fell 2 percent in Q2, down to $5.2 billion. The total sales growth was negatively impacted by foreign exchange rates and store closures in North America over the past year.


Computer chips powered directly by the sun and cooled by water. Data stored on a single electron. Self-learning cognitive systems. Chips with as many synapses and neurons as the human brain. A supercomputer that analyses in just one day more than double the world’s current internet traffic.

Such a list may seem like the realm of sci-fi to some, but these are all projects currently underway at IBM’s Zurich research labs, and are likely to produce commercially available products in the next 10-15 years.

“What would you do with a thousand times the capability [of today’s computers]?” asks Matthias Kaiserswerth, the director of the Zurich labs. “We are actively working to make this happen in the next 10 years.”

What’s more, the basic chip and storage technologies are close to the physical limits of current design and manufacturing techniques. If Moore’s Law is to continue, we need new paradigms for how computers are made.

There is only so far that current technology can scale, due to physical size, energy use and heat generation, says Kaiserswerth.

This is the starting point for the work carried out by IBM researchers in Zurich – a team that has won two Nobel prizes.

Self-learning systems

IBM likes to show off its computers. It has, over the years, famously developed the first computer to beat a grand master at chess, and more recently the first to beat a top competitor on the US quiz show Jeopardy. Watson, the game-show winner, is described as a “self-learning” system, using the very latest in statistical and analytical software to work out the most likely answer to a question.

But we humans still retain one great advantage, even in defeat. A system such as Watson requires about 200,000 watts of power – the human brain it defeated uses just 20 watts.

“In the brain, energy and cooling is delivered by the same fluid – blood. We want to replicate this for chips,” says Bruno Michel, one of IBM’s researchers.

IBM has already built its first “synapse chip”, a processor with 262,144 programmable synapses, designed to mimic the way the brain processes information – although Kaiserswerth is quick to stress that it is not a “brain on a chip”; it is more about learning lessons from how the brain works and applying them to chip design.

The human brain, by comparison, has about 100 trillion synapses.

But one of the things that makes the brain so energy efficient is the fact that its key components – the synapses and neurons – are so close together. The conventional two-dimensional design of computer chips means comparatively big distances between components such as processors and memory – that slows down speeds, and requires more energy to bridge the gap.

So, IBM is working on a stacked, or 3D chip, where components are layered on top of each other, reducing the distances, increasing performance and reducing the electricity needed to power it. Michel predicts that 3D chips can theoretically improve system performance by a factor of 5,000 – although the ability to deliver this is about 15 years away.

Even then, new ways will be needed to provide enough energy to power a computer based on such advanced 3D chips – one that could provide the power of the largest supercomputer today in a system the size of a desktop PC.

To address this, IBM is researching ways of powering the chip directly from the sun.

The light reflected from what is still a fairly low level of solar concentration would be enough to permanently damage your eyes if you looked at it without a filter. Ultimately, IBM needs to find a way to concentrate sunlight by a factor of 1,000 onto a specific point on a chip.

Even then, the chip will still need to be cooled – and it is likely that will be done with water.

“We know the future design of a chip with concentrated solar power and water cooling. We are aiming to get there through our research,” says Michel.

A prototype chip already exists, with tiny pipes on top feeding the coolant directly into the structure of the processor.

New storage technologies

Of course, a faster, more energy-efficient computer will require more storage capacity too.

One of the projects driving these requirements is IBM’s involvement in the Square Kilometre Array (SKA), an international consortium building the world’s largest and most sensitive radio telescope.

When completed in 2024, SKA will generate 10 exabytes of data every day – around 116 terabytes every second – roughly double the current daily level of global internet traffic, according to IBM.
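A quick arithmetic check of that daily figure, expressed as a per-second rate (decimal units assumed, 1 EB = 10^18 bytes):

```python
# Sanity check: convert SKA's projected 10 exabytes/day of data
# into a per-second rate, using decimal (SI) units.
EXABYTE = 10**18
SECONDS_PER_DAY = 86_400

bytes_per_second = 10 * EXABYTE / SECONDS_PER_DAY
terabytes_per_second = bytes_per_second / 10**12
print(f"{terabytes_per_second:.0f} TB/s")  # prints "116 TB/s"
```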

The supercomputers that will support SKA will also need to analyse all that information, in near real time, to remove unnecessary data and store only what is required for the project.

“You have to screen out data, reduce the order of magnitude by two to six times, and analyse in real time,” says IBM fellow Evangelos Eleftheriou.

SKA will need new storage technology, in what Eleftheriou calls the biggest change in IT architecture since IBM’s System 360 mainframe, launched in 1964. This will be a “data-centric model”, where data is retained in persistent memory, and is surrounded by many central processing units (CPUs) – unlike today’s model where the CPU sits at the centre and calls in data from different media as needed.

This will involve blurring the boundaries between what current paradigms see as memory and storage. “Memory/IO hierarchy will eventually disappear and be replaced by flat, globally addressable memory,” says Eleftheriou.

IBM is developing a technology called phase change memory (PCM), which overcomes the scaling problems of existing DRAM memory. PCM exploits the different electrical resistance of two distinct solid phases of a metal alloy – changing the physical properties of the metal to store a bit. The first commercial PCM chips are expected by 2016.
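The read-out principle can be illustrated with a toy model: the amorphous phase of the alloy has high resistance, the crystalline phase low resistance, and a threshold between the two recovers the stored bit. The resistance values and threshold below are invented for illustration, not taken from any real PCM part:

```python
# Illustrative sketch of reading a phase-change memory cell by its
# electrical resistance. All values here are made up for illustration.
AMORPHOUS_OHMS = 1_000_000   # high-resistance phase, read as logical 0
CRYSTALLINE_OHMS = 10_000    # low-resistance phase, read as logical 1
THRESHOLD_OHMS = 100_000     # decision boundary between the two phases

def read_bit(measured_resistance_ohms: float) -> int:
    """Map a measured cell resistance to the stored bit."""
    return 1 if measured_resistance_ohms < THRESHOLD_OHMS else 0

print(read_bit(CRYSTALLINE_OHMS))  # prints 1
print(read_bit(AMORPHOUS_OHMS))    # prints 0
```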

Even tape storage will continue to have a role to play, according to the supplier. Experts have been predicting the death of tape as a storage medium for years, but IBM researchers predict it will continue to be the best way to store archived data for a long time yet.

“The only drawback of tape is the slow access time,” says Eleftheriou. IBM is developing ways to use policies to move data between tape and disk or memory so that it is readily available when needed.
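In minimal sketch form, a policy of the kind described might be a simple access-age rule; the function name and the 90-day cutoff below are assumptions for illustration, not IBM's actual policy engine:

```python
from datetime import datetime, timedelta

# Illustrative tiering policy: data untouched for longer than a cutoff
# is migrated to tape; recently accessed data is staged on disk.
CUTOFF = timedelta(days=90)  # assumed cutoff, for illustration only

def choose_tier(last_access: datetime, now: datetime) -> str:
    """Pick a storage tier based on how recently the data was used."""
    return "tape" if now - last_access > CUTOFF else "disk"

now = datetime(2014, 8, 15)
print(choose_tier(datetime(2014, 1, 1), now))  # prints "tape"
print(choose_tier(datetime(2014, 8, 1), now))  # prints "disk"
```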

Nanotechnology

The research at Zurich does not stop at the level of technologies such as chips and storage. Researchers are looking at the use of nanotechnology in chip design. Nanowires – connections a thousand times thinner than a human hair – can reduce the voltage used within an individual switch as it changes its state from binary zero to one.

Analysis at an atomic level takes things even further. “We have shown in principle that a single atom can be used to store a single bit,” says researcher Fabian Mohn.

Meanwhile, Watson – the self-learning system that won Jeopardy – is now finding practical uses in business. IBM is working with a leading US cancer hospital to develop a new version of Watson to assist oncologists with cancer diagnosis and treatment. The healthcare version of Watson would take patient data and look through the huge quantities of published literature and research on cancer to make recommendations on likely prognosis and possible treatments.

IBM predicts that the combination of Watson’s big data handling with exascale computing, cognitive chips and nanotechnology is the future of IT.

“IT for the back office has happened. Where it’s interesting is where it’s facing outwards,” says Kaiserswerth. “We are entering the cognitive systems era, with computers a thousand times more powerful than now.”


By Warwick Ashford

Researchers at Massachusetts Institute of Technology (MIT) have developed an analogue silicon chip with 400 transistors that emulates the activity of a brain synapse – the connection between two neurons – in a first step towards building truly intelligent systems.

There are about 100 billion neurons in the brain, each of which forms synapses with many other neurons. This process is believed to underpin many brain functions, such as learning and memory.

The chip, described in the latest edition of the journal Proceedings of the National Academy of Sciences, will allow neuroscientists to conduct basic research on how the brain actually works and could lead to the study and treatment of diseases related to brain malfunction.

But the chip could also potentially improve devices that allow people to operate things such as computer mice with their thoughts and create artificial intelligence devices that replicate brain behaviour for tasks such as pattern recognition, cognition, learning, memory and decision-making.

Chi-Sang Poon, a research scientist in the Harvard-MIT Division of Health Sciences and Technology, told MSNBC: “We are not talking about recreating a whole brain at this point. We have to start with one system.”

Unlike digital computer chips that treat the function of neurons like a simple on/off switch, Poon said, the MIT chip gets into the nitty-gritty of how the neurons work intra-cellularly, which involves all the ionic processes that are going on.

Activity in the synapses relies on so-called ion channels which control the flow of charged atoms such as sodium, potassium and calcium.
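The simplest textbook description of such a channel treats it as an ohmic conductance driven by the difference between the membrane voltage and the ion's reversal potential. The sketch below uses that generic Hodgkin-Huxley-style form with illustrative values; it is not the MIT chip's actual circuit equations:

```python
# Sketch of the kind of ion-channel behaviour the analogue chip
# emulates: channel current depends on conductance, membrane voltage
# and the ion's reversal potential (textbook ohmic form; the values
# below are illustrative, not taken from the MIT design).

def channel_current_pA(g_nS: float, v_mV: float, e_rev_mV: float) -> float:
    """Ohmic channel current I = g * (V - E_rev), in picoamps
    (nanosiemens * millivolts = picoamps)."""
    return g_nS * (v_mV - e_rev_mV)

# A potassium-like channel (E_rev ~ -77 mV) at a membrane voltage of -65 mV:
print(channel_current_pA(36.0, -65.0, -77.0))  # prints 432.0 (outward current)
```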

The MIT team said understanding how the brain works will enable scientists to reverse-engineer it and put it in a chip to reproduce those functions.

The team plans to use their chip to build systems to model specific neural functions, such as visual processing, according to the BBC.

Such systems could be much faster than computers, and the chip could ultimately prove to be even faster than the biological process, said the researchers.


A federal judge in North Carolina recently struck down the latest challenge by the U.S. Justice Department to a state law that requires voters to bring photo identification to the polls. Voters continue to strongly support voter ID laws and don’t consider them discriminatory.


The epidemic of massive data breaches is showing no signs of letting up. Instead of retail and tech companies, however, this time hackers have hit one of the largest hospital groups in the country.

Hackers got into the systems of hospital operator Community Health Systems (CHS) and installed malware that stole upwards of 4.5 million patient records. The information stolen includes names, Social Security numbers, patients’ home addresses, birthdays and telephone numbers.

So far it seems hackers didn’t get medical history information or credit card numbers. Still, that’s a small comfort given the damage they can do with what they did get.

If you’ve been to one of its hospitals in the past five years, or were referred to one – even if you didn’t go – your information could be in hackers’ hands. At this point it looks like the hackers are based in China, given their methods.

CHS says it is notifying affected customers and will be providing free credit monitoring services, but you shouldn’t wait. The hackers are probably going to sell your information online, which means criminals around the world could start using it any day now.

Here are some steps you need to take right away to protect yourself.

Scammers are going to have a field day with so many names, physical addresses and phone numbers. You can expect an increase in calls and mail that are meant to trick you into giving up money and information.

You might get a variation of the old phone tech support scam – learn how to spot and avoid it. You might start getting mail claiming you’ve won prizes and just need to pay a fee to get them. Maybe someone will even show up at your house as a repairman or solicitor.

So, keep an extra careful watch and avoid anything suspicious.

WATCH YOUR CREDIT

With your name, SSN, birth date, address and phone number out there, it won’t be hard for someone to impersonate you. That’s most of the information they need to take out a loan, open a new credit card or apply for a mortgage.

Keep an eye on your credit report for suspicious activity so you can take action fast.

Criminals can even give your name, address and SSN to police if they’re arrested, and you’ll show up in criminal or sex offender databases.

The only way to protect against all this is with an identity protection service. The service CHS is going to give you for free probably won’t be the best around and will only last for a year.

You want the best, and for my money that’s LifeLock. It goes above and beyond other services to let you know when thieves use your information so you can put a stop to it.

LifeLock Ultimate Plus not only monitors your checking, savings and investment accounts for fraud, but it also keeps continual tabs on public records, payday loan companies and black market sites – making sure YOUR information isn’t being misused or up for sale.

When big breaches like this happen, LifeLock goes into action mode, immediately notifying you and double checking your recent activity on all fronts. You need LifeLock in your corner in the fight against identity theft.


The fight for Family Dollar (NYSE:FDO) is heating up. Dollar General (NYSE:DG) has made a proposal to acquire Family Dollar Stores for $78.50 per share in cash, in a transaction valued at $9.7 billion.

Dollar General’s offer comes two weeks after Dollar Tree announced its bid for the chain in a deal valued at $8.5 billion. That deal was unanimously approved by both boards of directors, but Dollar General’s interest stalls that merger.

Dollar General was reportedly weighing interest in the chain last week. Stated reasons for the bid include solidifying Dollar General’s position as the largest small-format discount retailer (nearly 20,000 stores in 46 states and sales of over $28 billion) and interest in Family Dollar’s management team.


It was already being declared a success–in the context of information technology, on a scale approaching the fall of the Berlin Wall: the city of Munich, Germany’s 2004 migration plan away from proprietary software, specifically Microsoft’s Windows and Office, and towards a specialized distribution of Ubuntu Linux called LiMux. It was part of the German government’s effort to follow the European Union’s lead in avoiding situations where government services found themselves restricted and constrained by a foreign country’s standards and formats.

But now, Munich Deputy Mayor Josef Schmid tells the online publication Süddeutsche.de (translated from the German) that his city has reached a state of desperation. Schmid is calling for a study to investigate the feasibility, and potential benefits, of moving some 15,000 city officials’ desktops back from LiMux to Windows.

One very large clue as to Munich’s problem came two years ago, when the makers of the open source productivity suite LibreOffice (which uses OpenDocument format) announced that Munich would be changing its office productivity suite transition plans from OpenOffice to LibreOffice. This announcement came a full eight years after the previous transition officially began, leading some German journalists to wonder what was really going on.

As city officials stated just after the turn of the century, the problem with the mostly Windows-based systems they had was software fragmentation and the lack of standard choices, according to published reports. Despite the fact that Office was being used for general productivity, for example, city departments made their own choices about which applications to use, for instance, for designing Web pages or editing photographs. Each application stuck to its own proprietary standards, and IT workers were bogged down with requests for translating files (you can just imagine the floppy disk traffic) from one set of formats to another.

But the problem that some city officials now report doesn’t sound much different. Although LiMux comes with its own selection of free and open source software for a variety of general purposes, it appears general purposes are not the problem. There’s evidence that city government officials may be bringing in their own computing devices for special purposes. Specifically, the publication golem.de cites a lack of outrage among city workers about the use of proprietary formats; they are evidently only using the ODF format when they really need to – an indication that they’re bringing Office to work with them.

Schmid is quoted by several sources as saying that his city is paying real money to adapt LiMux to the needs of city workers, though he did not say how much. Though supporters of the ongoing migration state it has already saved the city some €10 million in licensing expenses, a 2013 study conducted on Munich’s behalf by HP (translated from the German) stated the city spent a full €61 million in IT-related expenses, including re-training, just to avoid spending license fees.

Munich Mayor Dieter Reiter recently admitted to the press that, in the time he spends waiting for Linux-based software to finish its job, he could go and sing a song. Schmid says he’s willing to consider the possibility of advocating a move back to Windows – as a fan of neither system himself, he says, he hasn’t made a decision.


Any effort by the Obama Administration to support legislation that would tighten government compliance controls for major service providers, particularly around big data, could have the unwanted side-effect of stifling creativity and competition. That was the message delivered on August 5 by the Internet Association, an industry association that describes itself as “the unified voice of the Internet economy,” whose members include Google, Yahoo, Amazon and AOL.

Having read a report issued last May by the White House Office of Science and Technology Policy (OSTP) after a 90-day review, ordered by the President, of possible policy initiatives for the big data industry, the authors of an Internet Association open letter to the Commerce Dept.’s NTIA (.pdf) advise the Administration to conduct another study into what they call “the existing regime” before taking any regulatory action or supporting legislative action.

“At this time, any legislative proposal to address ‘big data’ may result in a ‘precautionary principle problem’ that hinders the advancement of technologies and innovative services before they even develop,” reads the letter. “Given the breadth of existing protections for consumers, we encourage the Administration to carefully examine the existing regime to avoid negative, unintended consequences.”

The letter then criticizes the NTIA report itself, saying that “a majority of concerns raised by the report were largely speculative rather than actual harms. This calls into question whether there is a need to engage policymakers and industry on this issue rather than focusing attention and resources to areas where users experience real harms, such as data security.” It then called attention to the report’s singling out how certain analytics practices, such as profiling, could result in discrimination against citizens. Stating that discrimination is a different type of injustice from a privacy breach, the Association suggested that officials turn their attention to the former, and perhaps let privacy breaches become a real problem before applying premature remedies.

The Internet Association letter comes on the same day that Microsoft, under its own auspices, sent the NTIA a letter of its own (.pdf) urging the Administration to support “comprehensive federal privacy legislation.” Penned by the company’s deputy general counsel, David A. Heiner, the letter comes just days after a U.S. District Court judge in New York ordered Microsoft to comply with a Dept. of Justice warrant for emails believed to be stored on its servers in Dublin, Ireland–in violation of E.U. laws.

“Without new privacy legislation, U.S. companies will find themselves increasingly disadvantaged compared to foreign providers that will compete against U.S. companies in their home and other jurisdictions based on more protective privacy regimes,” writes Heiner. “Over time, absent sound rules of the road, it will likely become harder for U.S. companies to keep the trust of consumers worldwide… The adoption of a comprehensive U.S. privacy law may, conversely, encourage the flow of data to the United States, triggering increased physical data center infrastructure and generating more big data-focused jobs and growth here at home. A comprehensive U.S. regime may also act as a counterpoint to more restrictive third-country proposals, inspiring countries to adopt a less protectionist view of privacy and encouraging the free flow of data globally–to the benefit of businesses and their customers both in the United States and abroad.”

The need for comprehensive privacy law emerges, Heiner goes on to argue, from the Administration’s own Consumer Privacy Bill of Rights–a set of fundamental principles which Microsoft believes mandate the creation of laws to support them.


In a surprise change of plans late Friday afternoon, the Federal Communications Commission announced that it has extended the already-expired deadline for public comments regarding its proposed revisions to Open Internet regulations until September 15.

The move was made, according to the announcement, “to ensure that members of the public have as much time as was initially anticipated to reply to initial comments in these proceedings.”

Last week, FCC Special Counsel for External Affairs Gigi Sohn promised on the Commission blog that “every” (both boldfaced and italicized) one of the more than 1.1 million public comments it had already received will be reviewed as part of the official record. However, Sohn did not specify how this review would take place. Conceivably, this is a big data application if ever there was one.

But another distinct possibility is that the Commission may find itself hiring temporary help simply to handle the deluge of comments it has already received. Extending the deadline once again may have been the most graceful way for the Commission to postpone the point at which it must complete the review of all the comments, as Sohn promised.

A check of the FCC’s comment system just minutes after the Commission issued its announcement showed that 10 new comments had already been posted.

Among them was this from a fellow in Richmond, Virginia: “The Internet is a repository of ideas and knowledge and incubator for innovation, and net neutrality is critical to ensuring the free flow of ideas and information. As well, state-granted monopolies are and should be subject to government oversight. In this instance, Internet service providers should not be allowed to discriminate against different content providers. The risks are straightforward and shared by all users of the Internet, while the benefits of such discrimination are only available to service providers, in the form of increased revenue; [to] content providers, in the form of being better able to promote their content; and [to] certain users who want access to the preferred content. The irony is that the Internet is the ultimate free market for ideas, and the service providers wish to profit from their regulation of that marketplace. Don’t cave into this absurdity.”

But on the other end of the spectrum, there was this from a lady in Old Lyme, Connecticut, in support of Chairman Tom Wheeler’s consideration to reclassify Internet services under Title II of the Telecommunications Act: “Reclassifying broadband providers as common carriers will prevent online discrimination as well as protecting consumers. Safeguard the public interest by keeping the Internet free.”


I wanted to speak to the topic of “the value of failure” for a minute. I tell people that “I’ve learned more in each of my business failures than I have in all my 25-year career combined” (well, I tell some people something similar, anyway).

The fact is, when we become stuck “in a rut”, becoming “comfortable”, we rarely change… and definitely not for the better. We may have minor advances and improvements… but I bet that you, like me, probably fall right back into the same old bad habits, the same procrastination, the same laziness, the same fighting with your spouse, the same forgetting to pray or meditate each morning, or whatever it is in your life, within just a few days… What does all this have to do with “the value of failure”, you may be asking? Well…

To me, FEAR and ACTION go hand in hand… If you’re too afraid to lift yourself out of that “comfortable rut” you’re in and take on real challenges in life, if that comfortable Lazyboy armchair keeps calling you back because it’s “easy”, then you will never change or attempt to live your dreams or follow your passion (I mean really attempt some significant changes in your life, the kind where you actually RISK something you hold precious). I’m not referring to what we all do: those half-hearted attempts to stop the whining of a spouse, son, daughter or parent, or to ease our own guilty feelings of failure… THOSE WILL ALWAYS FAIL!

Unless we do it for ourselves and commit EVERYTHING to its success or failure, we’ve not truly made a valid attempt at taking action to change our lives for the better, friends… Many people will argue this point and try to say you don’t have to go that far… but I have always been of the belief that:

#1 If you’re going to do something, DO IT RIGHT
#2 Always give 100% to everything you do (not only do you deserve nothing but the best from yourself, but if it’s a paying gig, whoever is paying you deserves nothing else)

I’m not talking about those half-hearted things you call “attempts”… I’m also not talking about the 99% of dreamers who never stop dreaming long enough to be doers… I read a phrase the other day that said something like “Everyone has great ideas; it’s the 1% who take action on their idea that makes the real difference”, taking their dreams to fruition. To my mind it’s ALL about FEAR. I’m talking about mortgaging your home to achieve your dreams, or selling everything you own and ending up in the streets, kind of commitment here, folks… I’m NOT JOKING; it’s happened to me and many others, and not just in the present. Many great people of history never had a dime to their name, and many ended up dying poor, never receiving their recognition till years after they were no longer even here! BUT THEY BELIEVED in their dreams enough to FACE THEIR FEAR and “move beyond it”, as I have heard many people in combat say… by recognizing the fear, but not allowing it to control your actions or life.

A friend of mine told me recently, “I can’t do it, I’m giving up…” Then, after he had been talked into sticking it out, the miracle finally happened the next day… I believe God (whatever you call him) wants us to give everything we have, and then, when we’ve exhausted everything and are on the very edge of collapse, or walking away, or perhaps we just quit… typically at that exact moment, when the end is so close we can smell it and we are reduced to tears… that’s when God steps in, at that ultimate last moment, when we have exhausted ourselves trying to do it with our own power, and says “here it is… see how easy it was?” I am one of the ones who seriously believe that God has an AWESOME SENSE OF HUMOR, because the fact is, it has happened to me so many times in my life it’s almost an old friend! ☺

My roundabout point is this… LIFE IS AND SHOULD BE LOOKED AT AS A CONSTANT LEARNING EXPERIENCE AND ADVENTURE FOR BETTERING OURSELVES… What can we learn sitting in front of the “boob tube” with a remote, a glass of sugar water and a donut?

When people are afraid of doing something, I always ask them, just as I ask myself when facing my own fears “What’s the worst that could happen?”

Now sometimes it’s as simple and harmless as “you may get no for an answer”, and yet you can’t imagine how scary that is to some people. (We should respect that it’s a real fear for them; it may seem a harmless “no” to you and me, but we’re not here to judge others, but to help them.) Other times it may be as substantial as “I may die”, or “I may lose my home”, or “lose my car”. But the bottom line is, NOTHING TOO BAD WILL EVER HAPPEN, as “death” has no power over those who believe in an afterlife (for you who have not found something or someone “greater than yourself”, I encourage you to start or keep seeking, because otherwise, if we are the most intelligent creatures here, THAT would truly be something to be scared of). The death of the flesh, of our bodies, is not a “bad” thing or anything to be afraid of… And as for the rest, they are just “things” and can easily be replaced.

We came into this world Alone & Naked and will be leaving it in the same way…

You however, only have ONE LIFE TO LIVE!
I pray you won’t allow your fears to keep you from realizing your real and sometimes, deeply hidden dreams and living your passions my friends!


Today, Tesla announced that the Model S drive unit warranty has been increased to match that of the battery pack. That means the 85 kWh Model S now has an 8 year, infinite mile warranty on both the battery pack and drive unit. Moreover, the warranty extension will apply retroactively to all 85 kWh Model S vehicles ever produced.

No other changes have been made to the warranty.

Here’s what CEO Elon Musk wrote about the new policy in a blog post today:

Infinite Mile Warranty

The Tesla Model S drive unit warranty has been increased to match that of the battery pack. That means the 85 kWh Model S, our most popular model by far, now has an 8 year, infinite mile warranty on both the battery pack and drive unit. There is also no limit on the number of owners during the warranty period.

Moreover, the warranty extension will apply retroactively to all Model S vehicles ever produced. In hindsight, this should have been our policy from the beginning of the Model S program. If we truly believe that electric motors are fundamentally more reliable than gasoline engines, with far fewer moving parts and no oily residue or combustion byproducts to gum up the works, then our warranty policy should reflect that.

To investors in Tesla, I must acknowledge that this will have a moderately negative effect on Tesla earnings in the short term, as our warranty reserves will necessarily have to increase above current levels. This is amplified by the fact that we are doing so retroactively, not just for new customers. However, by doing the right thing for Tesla vehicle owners at this early stage of our company, I am confident that it will work out well in the long term.


As a way to address Canada’s looming labor crisis, Electricity Human Resources Canada (EHRC) has launched Bridging the Gap, a public/private initiative that aims at increasing the representation of women as skilled workers in the electricity and renewable energy sector. The initiative is being funded by Ontario Power Generation (OPG), Hydro One, Employment Ontario, Alberta Advanced Education, and Engineers Canada.

EHRC will provide women with opportunities in career training, mentoring, and apprenticeships with the help of industry, government, and stakeholders, such as educators, labor union groups and others, who make up the initiative’s advisory committee.

“The electricity and renewable energy sector is poised for huge growth in the coming years, and we know that close to one in five new jobs in Ontario are expected to be in the skilled trades in the next decade,” said Reza Moridi, minister of training, Colleges and Universities. “It’s crucial that women have the opportunity to pursue meaningful work in technical vocations, trades and other professions in the skilled trades, and within the electricity and renewable energy sector.”

To address these issues, EHRC advocates a long-term talent strategy, which includes partnerships with industry, educators, training institutions, labor and others. For its part, EHRC will take the lead to strengthen existing initiatives and foster an environment for the development of practical and effective programs targeted toward women entering the workforce (at the high school, apprenticeship, college and university levels), as well as those currently working in the sector.

Cisco Systems will cut as many as 6,000 jobs over the next 12 months, saying it needs to shift resources to growing businesses such as cloud, software and security. The move will be a reorganization rather than a net reduction, the company said. It needs to cut jobs because the product categories where it sees the strongest growth, such as security, require special skills, so it needs to make room for workers in those areas, it said.

“If we don’t have the courage to change, if we don’t lead the change, we will be left behind,” Chairman and CEO John Chambers said on a conference call.

Cisco has about 74,000 employees, so the cuts will affect about 8 percent of its staff. It will take charges of about US$700 million for the cost of the reorganization, up to half of that in the current quarter, Chief Financial Officer Frank Calderoni said.

Six Apple store employees in Florida are facing charges over an alleged scheme that would see them replace more than 600 stolen iPhones with new devices taken from store inventory.

The Fort Lauderdale Sun Sentinel reports: How did the scheme work? Since April, thieves who posed as customers at the store handed stolen iPhones to employees in the ring and exchanged them for new iPhones, police said. Each Apple employee in the ring was paid between $45 and $75 for carrying out a fraudulent transaction, police said.

From the “Oh, that’s so cool” category comes this unique use of algorithms to perfect and smooth shaky videos taken on cameras in high thrill-and-spill environments, for example a hang glider’s cam, Google Glass, or a GoPro camera. But beyond rendering some really great personal experience and action videos, this accomplishment could provide a much-needed means to examine, compare and analyze video data from nearly any camera source, including those in extreme environments.
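To give a flavor of what “smoothing” means here: Hyperlapse itself reconstructs a full 3D camera path and re-renders new frames along a smoothed version of it, which is far beyond a blog snippet. But the core idea, replacing a jittery camera trajectory with a smoothed one, can be illustrated with a minimal, hypothetical Python sketch (the data and function below are mine, not Microsoft’s):

```python
# Toy illustration of trajectory smoothing, the basic idea behind
# video stabilization. Real systems like Hyperlapse reconstruct a 3D
# camera path; here we just smooth a 2D path of (x, y) positions
# with a centered moving average.

def smooth_path(path, window=3):
    """Smooth a list of (x, y) camera positions with a centered
    moving average. Near the ends of the sequence the window is
    truncated, so the output has the same length as the input."""
    smoothed = []
    half = window // 2
    for i in range(len(path)):
        lo = max(0, i - half)
        hi = min(len(path), i + half + 1)
        xs = [p[0] for p in path[lo:hi]]
        ys = [p[1] for p in path[lo:hi]]
        smoothed.append((sum(xs) / len(xs), sum(ys) / len(ys)))
    return smoothed

# A shaky horizontal pan: x advances steadily while y jitters around 0.
shaky = [(0, 0), (1, 2), (2, -2), (3, 2), (4, -2), (5, 0)]
print(smooth_path(shaky))
```

The smoothed path keeps the steady forward motion in x while the jitter in y shrinks toward zero, which is exactly the visual effect of a stabilized clip.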

Most people tend to think of big data analysis as a fancy form of number crunching. But the truth is that we already have massive amounts of data that are not in number or text form but in something else, such as video and a wide array of other images. Data also lives, as Ronald L. Wasserstein, executive director of the American Statistical Association, noted in his guest post in FierceBigData, in “topology (for the spaces from which the data are sampled).” There is even data in the form of minute object vibrations to be analyzed. For more about those, see my earlier post titled “Vibrations on potato-chip bags aid in recording.”

So, you see, data analysis and data use cannot be limited to text and numbers, hence the need for tech and techniques such as Microsoft’s Hyperlapse. Yes, Hyperlapse makes cool video way cooler. But this is also tech with a great deal of promise for other applications. Fortunately, Microsoft plans to release it as an app of its own, which will make it easy to put to use for many purposes.

“We are working hard on making our Hyperlapse algorithm available as a Windows app. Stay tuned!” writes the Microsoft Research team in a blog post. You’ll want to read that post for more technical details.

Meanwhile, check out this short video for a fast understanding of their work and a meaningful technical explanation beyond that provided in the blog post.

Elements of the US Army 82nd Airborne’s elite paratrooper force are returning to Southern Afghanistan to help “finish off” the Taliban with their Afghan army counterparts. Counter to rumors about shrinking troop levels, this is proof that there is still a big job for the 33,000 American troops still in Afghanistan.

It’s still a hot war against the resilient Taliban, who continue to put our troops in harm’s way. We are so proud of our troops and their selfless service. They risk their lives to keep us safe.

You can make a difference in the daily lives of our troops on the front lines.

Nothing brings a serviceman or woman more joy than knowing that someone back home is thankful and thinking of them while they are on deployment. Care packages include a wonderful array of high quality food, snacks, and hygiene items that troops overseas consistently request. Click Here to Send One Today.

Watch out, wearables, you just may become passé quicker than the latest fashions on a model sashaying down the catwalk. You didn’t think nerdy glasses, clunky wrist watches and bands, or even implants were going to remain hot forever, did you? OK, those will still be around for a while longer, but people aren’t going to continue adorning their bodies with tech, even adorable tech, when pervasive sensors can lighten their load and free their fashion choices to something less encumbering.

The shift to pervasive sensors is already underway. What does that mean, you might ask?

Wearables are sensors dedicated to interpreting you and the world from your perspective. Pervasive sensors, on the other hand, sit in places throughout the environment where they study and respond to you when you’re in the vicinity. You have to carry around the wearables; the pervasive sensors just sit and wait for you to pass by.

Or, as Natasha Lomas puts it in her post in TechCrunch: “This is the sensible trajectory of connected sensor technology. The world around us gains the ability to perceive us, rather than wearable sensors trying to figure out what’s going on in our environment by taking a continuous measure of us.”

Make no mistake, sensors will soon be everywhere. Pervasive computing and pervasive sensors will lighten our load considerably in terms of what we must carry around on our bodies or in our pockets.

However, they will also increase the fishbowl effect, meaning there will be almost no opportunity for personal privacy. You can at least take off or shut off a wearable device that you own. There isn’t much you can do about a Google Glass wearer who may be observing and recording you from across the restaurant or on the street, though you can still try. Pervasive sensors, however, are not something you can set aside or easily shut down.

Privacy issues aside, most people will find many of the sensors helpful and useful to the individual as well as to society at large. For example, just as a car can automatically call for help now if there is a crash and the driver is unable to summon help, soon the environment will be able to note that a person is injured or ill and automatically summon help for them. Sensors in the ambulance or hospital will soon be able to identify you and provide your medical records to emergency personnel instantly. These sensors can do far more than that, of course.

Consider Disney’s MagicBands, which are wearables, and how they are used to add convenience and interactive experiences at Walt Disney World. You can see a demo on how those work in the video below.

With pervasive sensors, no wristbands will be needed to do those very same things; the sensors in the environment will recognize each person and behave accordingly, without the need to wear anything or tap a wearable against another sensor. You simply walk up and do whatever you want (well, those things that the system will allow). Imagine never standing in line at a cash register: you just pick up an item and walk out, and it is charged to your credit or debit card without you having to do anything extra. No more lines, no more tapping one device against another, no more looking for a door or car key, no more hassles in practically any person-to-object interaction.

Rest assured that data about you and your activities will flow to other parties at an enormous rate whether you are using wearables or depending on pervasive sensors. But, for the most part, that data will be working on your behalf and for your convenience. It might even save your life one day.