Archives

For a long time, machine learning was limited by its inability to process raw data. For decades, it depended heavily on feature engineering that transformed raw data (for example, pixel values from an image) into an internal representation or feature vector from which the learning subsystem, often a classifier, could detect or […]

As shown in the last post, machine learning is great for pattern recognition. But it is becoming so powerful that it can be applied in many more areas than pattern recognition alone, such as: Drug Discovery and Toxicology Discovering new treatments for human diseases is one of the most complicated tasks that exist. A new compound […]

Since its revival in 2006, deep learning has advanced so quickly that it is now used in many aspects of modern society, such as: Deep Learning Analytics Companies such as Google and Microsoft have access to large volumes of data; social media organisations such as Facebook, YouTube and Twitter have billions of users that constantly generate a very large […]

There are several ways to process and integrate data, but due to the lack of standards and the heterogeneity of the “things”, a key role is played by middleware. Middleware is application-independent software that provides services allowing communication between applications. Middleware hides the complexities of the lower layers, such as the operating system and network, in order […]

As seen in the last post, after the advent of CNNs (Convolutional Neural Networks) we stepped into the second AI Winter, but in 2006 this would change… 2006: Deep Learning Arises In 2006, Hinton, Simon Osindero and Yee-Whye Teh published a paper in which they solved the vanishing/exploding gradient problem. In this paper, “A fast learning […]

Deep Learning is one of the latest iterations in the evolution of Machine Learning. An evolution that started with something as simple as Linear Regression and, despite all the difficulties and winters it passed through, it has been an area whose advances have generated a tsunami whose waves have reached the most varied fields […]

Edge computing takes localised processing a step further than Fog Computing, because it allows actions to be taken on-site, at the point of processing. This gives it an advantage over Fog Computing, as there are fewer points of failure. Each item in the chain is more independent and capable of determining what information should be stored […]

Even though Cloud computing is a great way of processing the data generated by the “things”, it does not meet all of IoT’s needs. For instance, one issue that severely affects quality of service (QoS) is network latency. Real-time applications are affected by the delay that network latency introduces [1]. For example, when the […]

On the Lambda Architecture website we find a brief history and description of the architecture. “Nathan Marz came up with the term Lambda Architecture (LA) for a generic, scalable and fault-tolerant data processing architecture, based on his experience working on distributed data processing systems at Backtype and Twitter. The LA aims to satisfy the needs for […]

The definition of Cloud computing provided by the National Institute of Standards and Technology (NIST) says: ‘‘Cloud computing is a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications and services) that can be rapidly provisioned and released with minimal management effort or service […]