At most it is a new way of accessing memory; in digital systems, dealing with data is ultimately dealing with memory. Big data still demands a universal way to handle the problems associated with it, but if this solution is claiming good results in certain directions, let's see how effective it actually turns out to be.

Processing big data sounds like an ideal application for parallel processing: huge quantities of data, each item of which needs to be processed through the same algorithms. I used a massively parallel processor (32,000 processors, as I recall) in 1979 for image processing and was amazed at the work that could be done with just a 1 MHz clocked system. The pixels in an image are just a special case of big data.
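To make that concrete, here's a minimal Python sketch of the kind of per-pixel work I mean (the threshold function and values are made up for illustration): the same function runs on every pixel independently, which is exactly what massively parallel hardware exploits.

```python
# Minimal sketch of per-pixel data parallelism (values hypothetical): the
# same function is applied to every pixel independently, so the work can be
# split across as many processors as you have pixels.
def threshold(pixel, cutoff=128):
    return 255 if pixel >= cutoff else 0

image = [37, 200, 128, 91, 255, 0]      # stand-in for a grayscale pixel array
result = [threshold(p) for p in image]  # each element computed independently
print(result)                           # [0, 255, 255, 0, 255, 0]
```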

According to the article, the AP architecture is dataflow-based. In the past, dataflow architectures were also proposed for network processing applications (e.g., the Xelerated dataflow network processor), but the main challenges are the programming complexity and the programming restrictions of these architectures.

It also looks similar to the transputer architecture, which targeted parallel computing.

However, this automata processor seems to include both an efficient programming framework/SDK and an efficient silicon implementation.
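As I understand it, the programming model amounts to non-deterministic finite automata (NFAs) executed against a byte stream, with all active states advancing in parallel on each symbol. Here's a toy software sketch of that idea in Python; purely illustrative, not Micron's actual SDK or any real AP API.

```python
# Toy software sketch of the automata-processing model: an NFA whose active
# states all advance in parallel on each input symbol. Purely illustrative --
# not Micron's ANML SDK or any real AP API.
def run_nfa(transitions, start, accept, stream):
    """transitions maps (state, symbol) -> set of next states."""
    active = {start}
    for sym in stream:
        # Every active state fires its matching transitions simultaneously;
        # in the AP this step would happen in hardware, one symbol per cycle.
        active = set().union(*(transitions.get((s, sym), set()) for s in active))
        if not active:
            return False  # all states died; no match possible
    return bool(active & accept)

# Example: NFA over {a, b} that accepts strings ending in "ab".
t = {
    (0, 'a'): {0, 1}, (0, 'b'): {0},
    (1, 'b'): {2},
}
print(run_nfa(t, 0, {2}, "abab"))  # True
print(run_nfa(t, 0, {2}, "abba"))  # False
```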

I've lost count of how many novel and groundbreaking parallel processors I've written about in 20 years at EE Times that have died quiet deaths because no one could write code for them. Is this any different?

I don't know much about the Automata Processor (AP)... please forgive a novice question.
"Its design is based on an adaptation of memory array architecture"... that sounds more like the architecture of a CPLD. How does an AP architecture compare to that of a CPLD or FPGA?