NEC promises efficient and scalable M2M processing

Japanese tech specialist says current technologies can be slow and limited.

NEC has developed two M2M (machine-to-machine) technologies designed to keep processing fast and scalable.

The first is a new algorithm that optimises how processing rules, created by users in advance, are arranged across servers running in parallel. Rules with relationships or dependencies between them are placed on the same server, and the load is balanced across servers.

This cuts wasteful transmissions and links between servers and distributes the load on computing resources more effectively.
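NEC has not published the algorithm itself, but the placement idea can be sketched roughly: group rules that depend on one another (here with a union-find structure, an assumption about the mechanics), then assign each group whole to the least-loaded server so dependent rules never straddle machines:

```python
from collections import defaultdict

def group_rules(rules, dependencies):
    # Union-find: merge rules that share a dependency so each
    # connected group can be placed on a single server.
    parent = {r: r for r in rules}
    def find(r):
        while parent[r] != r:
            parent[r] = parent[parent[r]]  # path compression
            r = parent[r]
        return r
    for a, b in dependencies:
        parent[find(a)] = find(b)
    groups = defaultdict(list)
    for r in rules:
        groups[find(r)].append(r)
    return list(groups.values())

def assign_to_servers(groups, n_servers):
    # Greedy balancing: largest group first, onto the least-loaded server.
    servers = [[] for _ in range(n_servers)]
    loads = [0] * n_servers
    for g in sorted(groups, key=len, reverse=True):
        i = loads.index(min(loads))
        servers[i].extend(g)
        loads[i] += len(g)
    return servers

# Hypothetical rules and dependencies for illustration.
rules = ["r1", "r2", "r3", "r4", "r5"]
deps = [("r1", "r2"), ("r3", "r4")]
placement = assign_to_servers(group_rules(rules, deps), 2)
```

Keeping dependent rules together is what removes the cross-server traffic the article describes; the greedy step is one simple way to even out the per-server load.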

The second technology transfers each event to the most appropriate server based on the processing rules assigned to it. This reduces the load on servers and networks, and supports the system growth that future data expansion will require.
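A minimal sketch of that routing step, assuming events carry a `type` field that rules match on (a hypothetical detail, not stated in the article): the routing table simply inverts the rule placement, so an event goes straight to the server that owns its rules with no intermediate hops.

```python
def build_routing_table(server_rules):
    # Invert the rule placement: for each event type, record which
    # server holds the rules that process it.
    table = {}
    for server_id, rules in enumerate(server_rules):
        for event_type in rules:
            table[event_type] = server_id
    return table

def route(event, table):
    # Forward the event directly to the server owning its rules.
    return table[event["type"]]

# Hypothetical placement: each server's rules keyed by matched event type.
server_rules = [["temperature", "humidity"], ["location", "payment"]]
table = build_routing_table(server_rules)
dest = route({"type": "payment", "value": 42}, table)
```

When rules are reassigned between servers, rebuilding this table is what would let the system stay scalable as servers are added or removed.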

"These newly developed technologies achieve high speed processing by automatically equipping servers with the rules for processing and making efficient use of computing resources, regardless of changes in the volume and variety of data," said Motoo Nishihara, general manager, Cloud System Research Laboratories, NEC.

"Furthermore, since the system can automatically reassign processing rules, highly scalable big data processing infrastructure can be achieved, even as the number of servers increases or decreases."

NEC said that the technologies were verified by achieving real-time processing of 2.7 million events per second on a 16-unit server system (10 system units, six load-generator units) running 100,000 processing rules.

This is the equivalent of taking just 20 seconds to send information on 50 million users from 100,000 stores for a service that provides store and coupon information to mobile phones. This scale can be expanded even further in order to meet the needs of larger services.
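As a rough sanity check on those figures (assuming one event per user notification, which the article does not state):

```python
throughput = 2_700_000          # events per second, from NEC's verification test
duration = 20                   # seconds quoted for the coupon scenario
events_handled = throughput * duration
users = 50_000_000
# 54 million events in 20 seconds comfortably covers 50 million users.
assert events_handled >= users
```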

According to NEC, existing processing technologies are burdened by issues such as complex processing rule allocations that must be assigned by hand, and relationships and dependencies between rules that cause wasteful transmissions between servers. These problems can reduce processing speed by more than 90 percent, limit the scalability of servers and impair the response to fluctuating volumes of data.

The Japanese company expects to commercialise the technologies by the first quarter of 2013.