Friday, April 28, 2017

The MIPI Alliance has completed the first plugfest for the new I3C sensor interface.

Held in Barcelona, Spain, this was the first opportunity for early adopters of the new MIPI I3C sensor interface to perform interoperability testing of their designs for smartphones, IoT, automotive and other applications.

The event drew participants from semiconductor, IP and test equipment firms, demonstrating industry commitment to MIPI I3C and paving the way for commercialization of products and devices based on the specification. It highlighted the importance of interoperability testing early in the design cycle to ensure seamless functionality between devices and speed up time to market.

“Plugfests are an essential step in the product development process because the testing and debugging activities take place in real-world system integration environments, helping companies ensure interoperability of their components, improve product quality, speed the development process and optimize the manufacturability of their designs,” said Ken Foust, chair of the MIPI Alliance Sensor Working Group.

The new bus interface, approved in January, connects sensors to an application processor, combining multiple sensors from different vendors to streamline integration and improve cost efficiencies.

I3C can integrate mechanical, motion, biometric, environmental and any other type of sensor, and combines key attributes of the traditional I2C and SPI interfaces to provide a new, unified, high-performing, very low-power solution.

The technology is implemented on standard CMOS I/O with a two-wire interface, which reduces pin count and signal paths to offer system designers less complexity and more flexibility. It can also be used as a sideband interface to further reduce pin count. It supports a minimum data rate of 10 Mbps, with options for higher-performance high-data-rate modes, offering a substantial leap in performance and power efficiency compared with previous options.

It also includes multi-master support, dynamic addressing, command-code compatibility and a uniform approach to advanced power management features such as sleep mode. It provides synchronous and asynchronous time-stamping to improve the accuracy of applications that use signals from various sensors, and can batch and transmit data quickly to minimize energy consumption of the host processor.
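The dynamic addressing mentioned above can be pictured with a minimal sketch. The provisional IDs and the 0x08 starting address here are illustrative, not taken from the text: during I3C's ENTDAA procedure, open-drain arbitration means the device presenting the lowest provisional ID wins each assignment round.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Sensor:
    provisional_id: int                   # 48-bit unique ID presented during ENTDAA
    dynamic_address: Optional[int] = None

def assign_dynamic_addresses(devices: List[Sensor], first_addr: int = 0x08) -> List[Sensor]:
    """Master assigns 7-bit dynamic addresses one at a time; open-drain
    arbitration means the device driving the lowest ID wins each round."""
    unassigned = list(devices)
    addr = first_addr
    while unassigned:
        winner = min(unassigned, key=lambda d: d.provisional_id)  # bus arbitration
        winner.dynamic_address = addr
        addr += 1
        unassigned.remove(winner)
    return devices

bus = [Sensor(0x04C0FFEE), Sensor(0x01BEEF00), Sensor(0x02DEAD00)]
assign_dynamic_addresses(bus)
for s in bus:
    print(hex(s.provisional_id), "->", hex(s.dynamic_address))
```

The point of the sketch is that, unlike I2C's fixed addresses, the master hands out addresses at enumeration time, so multiple sensors from different vendors can share the bus without address conflicts.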

ABB and IBM are working to link the ABB Ability networking technology with IBM Watson Internet of Things (IoT) artificial intelligence.

This will create a suite of software tools to help industrial organizations improve quality control, reduce downtime and increase speed and yield of industrial processes in a completely new way. The solutions enable current connected systems that simply gather data to become cognitive industrial machines that use data to understand, sense, reason and take actions to support industrial workers.

ABB has an installed base of 70 million connected devices, 70,000 digital control systems and 6,000 enterprise software solutions, giving Watson a large base of devices.

The first two joint industry solutions will bring real-time cognitive insights to the factory floor and smart grids.

ABB and IBM will use Watson’s AI to help find defects via real-time production images that are captured through an ABB system, and then analyzed using IBM Watson IoT for Manufacturing. Previously these inspections were done manually, which was often a slow and error-prone process. By bringing the power of Watson’s real time cognitive insights directly to the shop floor in combination with ABB’s industrial automation technology, companies will be better equipped to increase the volume flowing through their production lines while improving accuracy and consistency. As parts flow through the manufacturing process, the solution will alert the manufacturer to critical faults – not visible to the human eye – in the quality of assembly. This enables fast intervention from quality control experts.

In another example, ABB and IBM will apply AI to predicting supply patterns in electricity generation and demand from historical and weather data, to help utilities optimize the operation and maintenance of today’s smart grids, which are facing the increased complexity created by the new balance of conventional as well as renewable power sources. Forecasts of temperature, sunshine and wind speed will be used to predict consumption demand, which will help utilities determine optimal load management as well as real-time pricing.

“This powerful combination marks truly the next level of industrial technology, moving beyond current connected systems that simply gather data, to industrial operations and machines that use data to sense, analyze, optimize and take actions that drive greater uptime, speed and yield for industrial customers,” said ABB CEO, Ulrich Spiesshofer.

“This important collaboration with ABB will take Watson even deeper into industrial applications — from manufacturing, to utilities, to transportation and more,” said Ginni Rometty, IBM Chairman, president and CEO. “The data generated from industrial companies’ products, facilities and systems holds the promise of exponential advances in innovation, efficiency and safety. Only with Watson’s broad cognitive capabilities and our platform’s unique support for industries can this vast new resource be turned into value, with trust. We are eager to work in partnership with ABB on this new industrial era.”

I love this story about a virus that can inoculate IoT devices against malicious attacks. For millions of connected devices such as cameras that don’t have security built in, or that use simple admin passwords, this virus can spread from node to node to add that protection.

Hajime is a sophisticated IoT botnet that acts just like a biological virus. Just as cowpox blocked the cell receptors to stop the smallpox virus from infecting a person, Hajime gets into a vulnerable IoT node and switches off the ports that malware uses to infect it.

Hajime was first reported by Sam Edwards and Ioannis Profetis of Rapidity Networks, who discovered the first occurrence of the worm back in October 2016, and more quantitative research by Symantec has since assessed the size of the threat. It has binaries for the arm5, arm6, arm7, mipseb and mipsel platforms, demonstrating its embedded focus.

The Radware report does a great job explaining how it works and how it is benign. The distributed bot network used for command and control and updating is overlaid as a traceless torrent on top of the public BitTorrent peer-to-peer network using dynamic info_hashes that change on a daily basis. All communications through BitTorrent are signed and encrypted using RC4 and private/public keys.
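The daily-rotating info_hash rendezvous can be illustrated with a short sketch. The derivation here (SHA-1 of a shared seed plus the date) is purely hypothetical and not Hajime's actual scheme; it only shows how bots and operators can independently compute the same meeting point each day without any fixed infrastructure to take down.

```python
import hashlib
import datetime

def daily_info_hash(seed: bytes, day: datetime.date) -> str:
    """Derive a 20-byte BitTorrent-style info_hash that rotates daily.
    Hypothetical derivation for illustration only: every peer holding the
    seed computes the same hash for the same day, giving a moving DHT
    rendezvous point."""
    return hashlib.sha1(seed + day.isoformat().encode()).hexdigest()

h1 = daily_info_hash(b"example-seed", datetime.date(2017, 4, 28))
h2 = daily_info_hash(b"example-seed", datetime.date(2017, 4, 29))
print(h1 != h2)   # a new rendezvous point every day
```

Because the rendezvous moves daily, blacklisting yesterday's info_hash does nothing; only holders of the seed can predict tomorrow's.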

The current extension module provides scan and loader services to discover and infect new victims. The efficient SYN scanner implementation scans for open ports TCP/23 (telnet) and TCP/5358 (WSDAPI). Upon discovering open Telnet ports, the extension module tries to exploit the victim using brute-force shell logins in the same way as the Mirai malware.
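The discovery step can be sketched in a hedged form: a real SYN scanner crafts raw half-open probes, but the same reachability question can be asked with a plain TCP connect(), shown here against a throwaway local listener rather than any real device.

```python
import socket

def port_open(host: str, port: int, timeout: float = 0.5) -> bool:
    """Simplified stand-in for a SYN scan: a full TCP connect() check.
    (A real SYN scanner sends raw, half-open probes instead.)"""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Demo against a throwaway local listener instead of a real device.
srv = socket.socket()
srv.bind(("127.0.0.1", 0))            # let the OS pick a free port
srv.listen(1)
port = srv.getsockname()[1]
print(port_open("127.0.0.1", port))   # the listening port reports open
srv.close()
```

A connect() scan completes the handshake and so is visible in logs; the half-open SYN variant trades that visibility for raw-socket privileges.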

Radware’s logs from its isolated ‘honeypot’ show that the credentials used during an exploit change depending on the login banner of the victim. In doing so, Hajime increases its chances of successfully exploiting the device within a limited set of attempts and avoids the system account being locked or its IP being blacklisted for a set amount of time.

Hajime accounted for half the IoT bot activity in Radware’s honeypots. In a timespan of little over five weeks, Radware recorded 14,348 infection attempts from 12,023 unique IPs. Considering Hajime sometimes uses a different infected node to download its malware, the total number of unique infected IPs counted was 18,623, indicating a huge security issue with IoT nodes that is being addressed.

Upon execution, Hajime prevents further access to the device through filtering ports known to be abused by IoT bots such as Mirai:

TCP/23 (telnet) – the primary exploit vector of Mirai and most IoT botnets

TCP/7547 (TR-069) – as first used in the Deutsche Telekom attack by a Mirai variant

TCP/5555 (TR-069) – alternate port commonly used in TR-069

TCP/5358 (WSDAPI)

At the same time, Hajime also tries to remove existing firewall rules with the name ‘CWMP_CR’ (CWMP, the CPE WAN Management Protocol, is also known as TR-069). Removing CWMP rules that an ISP set to allow specific management IPs or subnets locks those addresses out, leaving the ISP without control of the CPE device.

Besides locking down the device, Hajime opens up port UDP/1457 and a random higher port number (> 1024) for UDP and TCP. This allows it to use BitTorrent DHT and uTP from port UDP/1457 to build its peer-to-peer command and control network, while the random higher port serves the loader service used by the infection process to remotely download the malware onto new victims.
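The resulting filtering behaviour can be modelled in a few lines. The loader port value below is a stand-in for the random port, and the rule logic is a simplification of what the report describes, not Hajime's actual firewall code.

```python
# Simplified model of the post-infection packet filter described above:
# block the Mirai-abused ports, keep the bot's own BitTorrent/DHT port
# and its random loader port open.
BLOCKED_TCP = {23, 7547, 5555, 5358}   # telnet, TR-069, TR-069 alt, WSDAPI

def accepts(proto: str, port: int, loader_port: int) -> bool:
    """Return True if an inbound packet would be let through after infection."""
    if proto == "udp":
        return port in (1457, loader_port)   # DHT/uTP rendezvous + loader
    if proto == "tcp":
        return port == loader_port or port not in BLOCKED_TCP
    return False

loader = 49152                         # stand-in for the random port > 1024
print(accepts("tcp", 23, loader))      # telnet is filtered
print(accepts("udp", 1457, loader))    # BitTorrent DHT port stays open
```

The asymmetry is the whole inoculation trick: the same rules that shut out Mirai leave the bot's own control channels reachable.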

Hajime prefers the use of volatile file systems as its working directory, ensuring any indicator of compromise is gone after a device reboot. The botnet code is not persistent, so rebooting the device will clean the infection, but only until the next one.

Thursday, April 27, 2017

STMicroelectronics has launched an IoT development kit that combines modules for Bluetooth low energy (BLE), sub-GHz RF and Wi-Fi, plus a dynamic NFC-tag IC with printed antenna, with a high-performance, ultra-low power STM32L4 microcontroller, all on the same board as a range of sensors.

A MEMS accelerometer and gyroscope IC and MEMS magnetometer for 9-axis motion sensing, a barometric pressure sensor, temperature/humidity sensor, two omnidirectional digital microphones, as well as a FlightSense time of flight proximity and gesture sensor can all be added with no extra integration via industry-standard Arduino and Pmod expansion connectors.

The $53 Discovery kit lets users take advantage of ST’s X-CUBE-AWS expansion software to connect to the Amazon Web Services (AWS) IoT platform, and access tools and services in the Cloud, such as device monitoring and control, data analysis, and machine learning. Support for other Cloud providers will be added in future, as well as software function packs that provide all the components needed to prototype end-to-end IoT solutions, including pre-integrated full application examples.

The heart of the kit is an 80MHz STM32L475 32-bit microcontroller that combines the ARM Cortex-M4 core featuring DSP extensions with 1MB of on-chip Flash, plus an on-board ST-Link debugger/programmer so that no external probe is needed. It can be used with ARM Keil MDK-ARM, IAR EWARM, or GCC/LLVM-based Integrated Development Environments (IDEs) including the free AC6 SW4STM32, or with mbed online tools.

Memory maker Micron has teamed up with Microsoft to add authentication technology to its memory devices to boost the security of the Internet of Things. This is linked to Microsoft's move to provide IoT-as-a-service on the Azure cloud.

The technology uses a hardware ‘root of trust' integrated into Micron's flash memory in the IoT device along with the Microsoft Azure IoT cloud to establish a strong trusted link between that IoT device and the cloud.

Micron has also launched Authenta, which builds strong cryptographic identity and device health management into flash memory. Monitoring persistent memory storage is becoming more and more critical to understanding a device's health, and by using Microsoft's support of the Device Identity Composition Engine (DICE), an upcoming standard from the Trusted Computing Group (TCG), the combination of the Azure IoT cloud and Authenta helps ensure that only trusted hardware gains access to the IoT cloud.

The key aspect of the combined solution is that the health and identity of an IoT device is verified in hardware on the device where critical code is typically stored. This enables more advanced functionality like hardware-based device attestation and provisioning.

Authenta provides protection for the lowest layers of IoT device software, starting with the boot process. This enables system developers to harden system level security without adding additional hardware components, leading to a more affordable and robust IoT solution, and means IoT devices that use standard flash memory chips (which is most of them) can now be enhanced to improve cyber-security using this combined approach.

Microsoft and Micron will offer software development kits (SDKs) that help make it easier to provide the secure IoT cloud management and connectivity for new platforms and devices, as well as the ability to retrofit legacy systems.

Expect Micron to be in discussion with other cloud providers on SDKs to link Authenta to other services so that hardware designs are not locked into one cloud provider. "Microsoft and Micron are collaborating to provide customers with a unified approach to improve IoT security. This capability will speed up adoption of the latest IoT concepts by enabling customers to broaden their IoT connectivity while decreasing the investment of implementation," said Sam George, director of Azure IoT cloud services. "Combining these technologies will enable critical security competencies to be underpinned at a low-level in both hardware and software so that users can quickly begin to add their value to these solutions without many of the resource burdens that have been repressing innovation in the industry."

"A secure Internet of Things requires an always on trust between billions of end-points and cloud management services. Anchors of this trust must be rooted in hardware and be scalable to even the smallest embedded devices," said Amit Gattani, senior director of Segment Marketing, Embedded Business at Micron. "We are pleased to see Microsoft extending their Azure IoT platform to include such trust services and creating an ecosystem with partners like Micron that provide hardware root of trust building blocks for end-devices. This will significantly ease developments and deployments for our customers across Industrial, Automotive and Consumer IoT markets."

Authenta is initially available in the Serial NOR product family and is sampling now to select customers. Users of Microsoft's DICE technology and Azure IoT services can now contact Micron and Microsoft to begin evaluation and integration of these security and identity solutions.

Wednesday, April 26, 2017

Microchip has launched a System in Package (SiP) that combines an ultra-low power microcontroller with an 802.15.4 sub-GHz radio to provide multi-year battery life in a 5 x 5 mm package.

Rather than integrating RF and digital functions into a single chip, Microchip has put two chips in a single package for the SAM R30 SiP. It uses the same protocol as Zigbee but in the 915 or 868MHz ISM bands, giving longer range and lower power consumption but with lower data rates, making it suitable for connected home, smart city and industrial applications in the Internet of Things (IoT).

The SiP is built using the SAM L21 MCU (acquired with Atmel) that is based on the Cortex M0+ architecture and features ultra-low power sleep modes, with wake from serial communication or General-Purpose Input/Output (GPIO) while consuming 500nA.

With the radio chip operating in the 769-935 MHz range, the SAM R30 SiP gives developers the flexibility to implement a point-to-point, star or mesh network. Microchip helps developers get started immediately with the free MiWi point-to-point/star network protocol stack. Mesh networking capabilities will be available later this year. Nodes outfitted with the SiP can be positioned as far as one kilometer apart, with the ability to double the range in a star topology. When used in a mesh network, the SAM R30 delivers reliable wide-area coverage for applications such as street lighting or wind and solar farms.

Developers can begin prototyping immediately with the ATSAMR30-XPRO development board, priced at $65. This USB-interfaced development board is supported by the easy-to-use Atmel Studio 7 Software Development Kit (SDK).

The SAM R30 SiP is available in 33-pin and 48-pin QFN packages, for sampling or purchase in volume production quantities.

Tuesday, April 25, 2017

Microsoft is offering its IoT capability on its Azure cloud as software-as-a-service (SaaS) to speed up deployments and has boosted its security provision as a result.

Microsoft IoT Central is a fully managed SaaS offering that enables powerful IoT scenarios without requiring cloud solution expertise. Built on the Azure cloud, it simplifies the development process and makes it easy and fast for customers to get started.

To do this, Azure IoT now supports the Device Identity Composition Engine (DICE) and many different kinds of Hardware Security Modules (HSMs), says Arjmand Samuel, Principal Program Manager at Microsoft. DICE is an upcoming standard at the Trusted Computing Group (TCG) for device identification and attestation which enables manufacturers to use silicon gates to create device identification based in hardware, making security hardware part of new devices from the ground up. HSMs are the core security technology used to secure device identities and provide advanced functionality such as hardware-based device attestation and zero touch provisioning.
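A sketch of the DICE idea (not Azure's actual implementation): a Compound Device Identifier is derived in hardware from a Unique Device Secret and a measurement of the first code that runs, so any change to that code yields a different identity. All values below are illustrative.

```python
import hmac
import hashlib

def compound_device_identifier(uds: bytes, firmware: bytes) -> bytes:
    """DICE-style derivation sketch: combine a Unique Device Secret (UDS)
    with a measurement (hash) of the first-stage firmware. Illustrative
    only; real DICE implementations follow the TCG specification."""
    measurement = hashlib.sha256(firmware).digest()
    return hmac.new(uds, measurement, hashlib.sha256).digest()

uds = b"\x00" * 32                      # stand-in for a secret burned into silicon
cdi_good = compound_device_identifier(uds, b"trusted boot image v1")
cdi_evil = compound_device_identifier(uds, b"tampered boot image")
print(cdi_good != cdi_evil)             # any firmware change changes the identity
```

This is why the approach supports attestation: a device running tampered boot code cannot present the identity the cloud expects, without any extra security chip being added.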

The Azure IoT team is also working with standards organizations and major industry partners to apply the latest security best practices and deploy support for a wide variety of Hardware Security Modules (HSMs). HSMs offer a resistant and resilient hardware root of trust in IoT devices, and Azure integrates HSM support with new platform services such as Hub Device Provisioning and Management, enabling developers to focus more on identifying specific risks associated with their applications and less on security deployment tactics.

IoT device deployments can be remote, autonomous, and open to threats like spoofing, tampering, and displacement. In this case HSMs offer a major defense layer to raise trust in authentication, integrity, confidentiality, privacy, and more. The DICE minimalist approach is an alternative path to more traditional security framework standards such as the Trusted Computing Group's (TCG) Trusted Platform Module (TPM), which is also supported on the Azure IoT platform.

The move also includes analytics with Azure Stream Analytics on edge devices, a new feature that extends from the cloud down to the device level.

Azure Stream Analytics on edge devices uses the same unified cloud management for stream analytics running across edge devices and the cloud. This approach enables organizations to use streaming analytics in scenarios where connectivity to the cloud is limited or inconsistent, but quick insight and proactive action are essential to run the business.
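The edge-analytics rationale can be made concrete with a toy example (nothing here is the Azure Stream Analytics API): aggregate raw readings locally in tumbling windows and forward only summaries or alerts upstream, so a flaky cloud link carries a fraction of the raw sensor traffic.

```python
from statistics import mean

def summarize(readings, window=4, alert_above=80.0):
    """Tumbling-window average over raw sensor readings: emit one upstream
    record per window instead of one per reading. Thresholds and window
    size are illustrative."""
    out = []
    for i in range(0, len(readings) - window + 1, window):
        avg = mean(readings[i:i + window])
        out.append({"avg": avg, "alert": avg > alert_above})
    return out

raw = [70, 72, 71, 69, 85, 88, 90, 84]    # 8 readings -> 2 upstream records
print(summarize(raw))
```

Running this kind of reduction at the gateway is what lets the same analytics logic be managed from the cloud while still producing local, low-latency alerts when connectivity drops.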

The Linux Foundation has launched an open source project to build a common open framework for Internet of Things (IoT) edge computing and an ecosystem of interoperable components for Industrial IoT.

The EdgeX Foundry aims to simplify and standardise Industrial IoT edge computing, although this is still at the level of the intelligent gateway rather than further down into the edge of the network.

So far 50 companies, including AMD, Analog Devices, Dell and sensor company RFmicron, as well as the energy-harvesting EnOcean Alliance, have signed up, although Intel, ARM and board and gateway makers are conspicuous by their absence at this point.

The project, however, is naturally dominated by the IoT software services companies, as it aims to develop a range of microservices written in Java, Javascript, Python, Go or C/C++ (see figure) that can sit on a range of operating systems and hardware (whether x86 or ARM). The choice of operating systems - Windows, Linux (of course) and even MacOS - highlights the gateway focus of the project. However, an OS-agnostic project lends itself to porting to real-time operating systems further towards the network edge.

"EdgeX Foundry is part of our commitment to playing a major role in providing solutions to help customers bridge the physical and digital world through IoT," said Michael Murray, General Manager of Industrial Sensing Products at Analog Devices. "We want to reduce complexity, democratize IoT standards and provide trusted data for customers, and we look forward to working with the EdgeX community to achieve those goals."

The Linux Foundation points to widespread fragmentation and the lack of a common IoT solution framework that are hindering broad adoption and stalling market growth. The complexity of the current landscape and the wide variety of components creates paralysis, and EdgeX is intended to solve this by making it easy to quickly create IoT edge solutions that have the flexibility to adapt to changing business needs.

"Success in the Internet of Things is dependent on having a healthy ecosystem that can deliver interoperability and drive digital transformation," said Jim Zemlin, Executive Director of The Linux Foundation. "EdgeX Foundry is aligning market leaders around a common framework, which will drive IoT adoption and enable businesses to focus on developing innovative use cases that impact the bottom line."

EdgeX Foundry is unifying the marketplace around a common open framework and building an ecosystem of companies offering interoperable plug-and-play components. Designed to run on any hardware or operating system and with any combination of application environments, EdgeX can quickly and easily deliver interoperability between connected devices, applications, and services, across a wide range of use cases. Interoperability between community-developed software will be maintained through a certification program.

Dell is seeding EdgeX Foundry with its FUSE source code base under Apache 2.0. The contribution consists of more than a dozen microservices and over 125,000 lines of code, and was designed following feedback from hundreds of technology providers and end users to facilitate interoperability between existing connectivity standards and commercial value-add such as edge analytics, security, system management and services.

"One of the key factors holding back IoT designs in the enterprise is that there are too many choices to safely and easily implement a system that will provide a return on investment in a reasonable timeframe," said Mike Krell, Lead IoT Analyst at Moor Insights & Strategy. "EdgeX Foundry will fundamentally change the market dynamic by allowing enterprise IoT applications to choose from a myriad of best-in-class software, hardware and services providers based on their specific needs."

According to a Gartner report, there will be 20.4 billion connected things in use globally by 2020. The sheer quantity of data that will be transmitted from these devices is driving adoption of edge computing, where connected devices and sensors transmit data to a local gateway device instead of sending it back to the cloud or a central data center. Edge computing is ideal for deploying IoT applications because it allows for quicker data analytics and reduced network traffic. This is essential for applications which require localized, real-time data analysis for decision making such as factory optimization, predictive maintenance, remote asset management, building automation, fleet management and logistics.

"Businesses currently have to invest a lot of time and energy into developing their own edge computing solutions, before they can even deploy IoT solutions to address business challenges," said Dr Philip DesAutels, Senior Director of IoT at The Linux Foundation. "EdgeX will foster an ecosystem of interoperable components from a variety of vendors, so that resources can be spent on driving business value instead of combining and integrating IoT components."

Adopting an open source edge software platform allows hardware makers to scale faster with an interoperable partner ecosystem and more robust security and system management. Sensor and device makers can write an application-level device driver with a selected protocol once using the SDK, and system integrators can get to market faster with plug-and-play ingredients combined with their own proprietary inventions.

The Linux Foundation will establish a governance and membership structure for EdgeX Foundry and a technical steering committee will provide leadership on the code and guide the technical direction of the project.

Monday, April 24, 2017

IoT software developer OSIsoft is extending its partnership with Rockwell Automation to integrate its PI System technology into a new bot-based appliance for the Industrial Internet of Things (IIoT).

Rockwell's FactoryTalk Analytics for Devices automatically discovers devices on industrial networks to conduct diagnostics and monitor their health, providing early warnings, diagnosing problems and giving insight to take action, all to improve the uptime of processes and machines using the PI System software. Users of the system can receive “action cards” through their smartphones, tablets or a web browser, or engage with the device through “Shelby,” a natural language voice-activated bot.

The PI System technology embedded in FactoryTalk Analytics for Devices captures and organizes the vast amount of data generated by these networks so it can serve customers immediately from the appliance, or later have the data delivered to Microsoft Azure via FactoryTalk Cloud for further Big Data analytics. Worldwide, the PI System manages over 1.5 billion sensor-based data streams, making it one of the most widely used IIoT technologies.

“Industrial customers need deep, detailed insight into their operations in real-time to stay competitive: that is what drives our FactoryTalk strategy,” said John Genovesi, Vice President of Information Software and Process Business at Rockwell Automation.

The PI System captures data from sensors, manufacturing equipment and other devices and transforms it into rich, real-time insights that engineers, executives and partners can use to reduce costs, dramatically improve overall productivity, and create new connected services and smart devices.

OSIsoft and Rockwell Automation have collaborated for over a decade on the technology. The PI System powers the FactoryTalk Historian embedded in many Rockwell Automation systems. BHP Billiton, for instance, manages millions of data tags across mines, transportation assets and production facilities to reduce variability and increase quality. PI System technology ships in approximately 1,800 Rockwell Automation systems per year, and by 2020 OSIsoft anticipates that hundreds of thousands of devices from various vendors with PI System technologies will be shipping.

“Right now fewer than 14 percent of companies have completely connected their production data to the rest of their enterprise,” said Martin Otterson, Senior Vice President of Customer Success at OSIsoft. “Our relationship with Rockwell Automation will fuel the development of products and solutions that will let more people take advantage of machine and operational data for more projects in more ways than ever before.”

Thursday, April 20, 2017

Being able to add new hardware to a design in the field just with a software download is one of the huge advantages of using a field programmable gate array (FPGA). Being able to do this for one element of the design without impacting on the rest - partial reconfiguration - is a key capability that has been many years coming.

Now the latest update of the Vivado design tool from leading FPGA maker Xilinx includes Partial Reconfiguration technology. This enables dynamic field updates and increased systems integration in a broad range of applications such as wired & wireless networking, test & measurement, aerospace & defense, automotive, and data centres.

Designers can now change functionality on the fly, eliminating the need to fully reconfigure and re-establish links, dramatically enhancing the flexibility of All Programmable devices. System upgradeability & reliability are greatly enhanced by providing the ability to update feature sets in deployed systems, fix bugs, and migrate to new standards while critical functions remain active.

“The use of Partial Reconfiguration in Xilinx devices allowed us to optimize the size of the FPGA, and provide complete flexibility to maintain system connectivity while independently reconfiguring multiple ports in our design,” said Craig Palmer, senior engineering manager, Viavi Solutions.

The Partial Reconfiguration technology enables dynamic configurability by swapping portions of the design while the rest remains operational, with zero downtime and little impact on cost or development time.

“Partial Reconfiguration of FPGAs is a key element in Keysight’s toolbox for creating the next generation of test and measurement solutions. Partial Reconfiguration enables us to manage the ever increasing need for flexibility and complexity of test systems,” said Tom Vandeplas, senior researcher at test equipment maker Keysight Laboratories.

The Vivado Design Suite HLx Editions 2017.1 release is now available for download. Partial Reconfiguration functionality is now included at no additional cost with the Vivado HL Design Edition and HL System Edition. In-warranty users can regenerate their licenses to gain access to this feature. Partial Reconfiguration is available for Vivado WebPACK Edition at a reduced price.

Wednesday, April 19, 2017

Researchers at Princeton University have found a number of significant memory-ordering flaws in the open source RISC-V processor architecture. The specification is set to be ratified later this year, although some companies such as SiFive are already using it.

The researchers, testing a technique they created for analyzing computer memory use, found over 100 errors involving incorrect orderings in the storage and retrieval of information from memory in variations of the RISC-V processor architecture. The researchers warned that, if uncorrected, the problems could cause errors in software running on RISC-V chips. Officials at the RISC-V Foundation said the errors would not affect most versions of RISC-V but would have caused problems for higher-performance systems.

"Incorrect memory access orderings can result in software performing calculations using the wrong values," said Margaret Martonosi, Professor of Computer Science at Princeton and the leader of the Princeton team that also includes Ph.D. students Caroline Trippel and Yatin Manerkar. "These in turn can lead to hard-to-debug software errors that either cause the software to crash or to be vulnerable to security exploits. With RISC-V processors often envisioned as control processors for real-world physical devices (i.e., internet of things devices) these errors can cause unreliability or security vulnerabilities affecting the overall safety of the systems."

Krste Asanović, the chair of the RISC-V Foundation, welcomed the researchers' contributions. He said the RISC-V Foundation has formed a working group, headed by Martonosi's former graduate student and co-researcher Daniel Lustig, to solve the memory-ordering problems. Asanović, a professor of electrical engineering and computer science at the University of California-Berkeley, said the RISC-V project was looking for input from the design community to "fill the gaps and the holes and getting a spec that everyone can agree on."

"The goal is to ratify the spec in 2017," he said. "The memory model is part of that."

Lustig, a co-author of Martonosi's recent paper and now a research scientist at NVIDIA, said work was underway to improve the RISC-V memory model.

"RISC-V is in the fortunate position of being able to look back on decades' worth of industry and academic experience," he said. "It will be able to learn from all of the insights and mistakes made by previous attempts."

The RISC-V instruction set was first developed at UC-Berkeley, with the idea that any designer could use the instruction set to create processor cores and software compilers. The project is now run by the RISC-V Foundation, whose membership includes a roster of universities, nonprofit organizations and top technology companies, including Google, IBM, Microsoft, NVIDIA and Oracle.

Martonosi's team discovered the problems when testing their new system, which checks memory operations across any computer architecture. The system, called TriCheck, allows designers, and others interested in working with a design, to detect memory ordering errors before they become a problem. The tool works across three general levels of computing: the high-level programs that create modern applications, from web browsers to word processors; the instruction set architecture that functions as the basic language of the machine; and the underlying hardware implementation, the particular microprocessor designed to execute the instruction set.

The memory ordering challenge stems from the complexity of modern computers. As designers squeeze more performance out of computer systems, they rely on many concurrent operations sharing the same sections of computer memory. This parallel, shared-memory operation is extremely efficient, both for speed and power usage, but it puts a heavy demand on the computer's ability to interleave and properly order memory usage. If, for example, several processes are using the same section of memory, the computer needs to make sure that operations are applied to memory in the correct order, which may not always be the order in which they arrive from different concurrently running processors.
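The ordering problem can be made concrete with the classic "store buffering" pattern: two threads each write one shared flag and then read the other. The sketch below enumerates every sequentially consistent interleaving of the two threads to show which outcomes are possible; the encoding of operations is made up for illustration and is not related to TriCheck's actual input format.

```python
# Two threads sharing memory locations x and y (both initially 0).
# Thread 0: x = 1; r1 = y        Thread 1: y = 1; r2 = x
T0 = [("store", "x"), ("load", "y", "r1")]
T1 = [("store", "y"), ("load", "x", "r2")]

def interleavings(a, b):
    # Every merge of the two instruction streams that preserves
    # each thread's own program order.
    if not a:
        yield list(b)
        return
    if not b:
        yield list(a)
        return
    for rest in interleavings(a[1:], b):
        yield [a[0]] + rest
    for rest in interleavings(a, b[1:]):
        yield [b[0]] + rest

outcomes = set()
for seq in interleavings(T0, T1):
    mem = {"x": 0, "y": 0}
    regs = {}
    for op in seq:
        if op[0] == "store":
            mem[op[1]] = 1
        else:
            regs[op[2]] = mem[op[1]]
    outcomes.add((regs["r1"], regs["r2"]))

# Under sequential consistency (r1, r2) == (0, 0) can never occur,
# because each thread's load follows its own store. Hardware with
# store buffers (e.g. x86-TSO) can nevertheless produce (0, 0) --
# exactly the kind of subtlety a memory model must pin down.
```

Running the enumeration yields only the three outcomes (0, 1), (1, 0) and (1, 1); a tool that observed (0, 0) on real hardware would know the hardware is doing something the sequentially consistent model forbids.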

Subtle changes in any of the three computing levels — the machine level, the compiler and the high-level programming languages — can have unintended effects on the other layers. All three have to work together seamlessly to make sure memory errors don't crop up. One advantage of TriCheck is that it allows experts in one of these layers to avoid conflicts with the other two layers, even if they do not have expertise in them.

"If I write a program in C, it makes some assumptions about memory ordering," said Martonosi. "Subsequently, a different set of memory ordering rules are defined by the instruction-set architecture. We need to check that the high-level program's assumptions are accurately supported by the underlying instruction set and processor design."

However, the researchers said TriCheck's greatest strength is its ability to give designers a broad view of memory usage. Although designers have long been interested in this perspective, previous attempts to comprehensively analyze memory operations have been too slow to be practical.

TriCheck is able to check memory ordering efficiently by using succinct formal specifications of memory ordering rules, known as axioms. For a given program, compiler, instruction set and hardware implementation, TriCheck can enumerate many ordering possibilities from these axioms, and then check for errors. By expressing the memory-ordering possibilities as connected graphs, TriCheck can identify potential errors by looking for cycles in the graphs. These checks can be done very efficiently on modern high-performance computers, and TriCheck's speed has allowed it to explore larger and more complex designs than prior work.
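The cycle check described above reduces to standard graph reasoning: the axioms contribute "happens-before" edges for a candidate execution, and a cycle means the required orderings contradict each other, so that execution is forbidden. The sketch below is a minimal depth-first-search cycle detector over such edges; the node names are illustrative, not TriCheck's real representation.

```python
# Detect a cycle in a directed graph of happens-before edges.
# A cycle means the candidate execution violates the ordering axioms.
def has_cycle(edges):
    graph = {}
    for src, dst in edges:
        graph.setdefault(src, []).append(dst)
        graph.setdefault(dst, [])
    WHITE, GRAY, BLACK = 0, 1, 2   # unvisited / on stack / done
    color = {n: WHITE for n in graph}

    def dfs(n):
        color[n] = GRAY
        for m in graph[n]:
            if color[m] == GRAY:
                return True        # back edge: cycle found
            if color[m] == WHITE and dfs(m):
                return True
        color[n] = BLACK
        return False

    return any(color[n] == WHITE and dfs(n) for n in graph)

# Program order demands a -> b while a coherence axiom demands b -> a:
# the contradiction appears as a cycle, so the outcome is disallowed.
assert has_cycle([("a", "b"), ("b", "a")])
assert not has_cycle([("a", "b"), ("b", "c")])
```

Because cycle detection is linear in the size of the graph, many candidate executions can be checked quickly, which is consistent with the efficiency the researchers report.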

"TriCheck is an important step in our overall goal of verifying correct memory orderings comprehensively across complex hardware and software systems," she said. "Given the increased reliance on computer systems everywhere — including finance, automobiles and industrial control systems — moving towards verifiably correct operation is important for their reliability and safety."

Friday, April 14, 2017

French low power wide area network (LPWAN) technology developer Actility has raised $75m to expand its delivery of the industrial Internet of Things (IoT) using a wide range of technologies.

The Series D funding round included Creadev, Bosch and Inmarsat, alongside telecoms operators KPN, Orange Digital Ventures, Swisscom and equipment maker Foxconn. A second closing later this month will see additional strategic investors joining the company without involving banks.

Actility's ThingPark platform is used for large-scale LPWA rollouts worldwide with the LoRaWAN protocol that Actility co-developed, as well as LTE-M and NB-IoT. A software stack combining an operations and business support manager, an application integration enabler and an e-commerce platform provides a turnkey IoT platform supporting sensor-to-cloud applications.

“This funding will enable us to grow our IoT technology and ecosystem platform faster to meet the needs of service providers, solution providers and enterprises in large industry verticals, for example rolling out our disruptive global location and tracking service more quickly,” said Actility CEO Mike Mulica. “It will also allow us to accelerate our strategy for the US, and build strength in China. And last but by no means least, it will enable us to look at strategic acquisitions to broaden our technology portfolio and cement our leadership in LPWA.”

“We have been looking for the best project in the business of connectivity for the IoT for a while," said Florent Thomann, a member of Creadev’s management board, and in charge of new digital models. "In Actility, we found a company that offers an ideal solution and has made the perfect technology choices in LPWA and LTE-M to meet that growing connectivity market. We are convinced by both the company and its management, which shows a real visionary insight into the technology and business models and the way that connectivity will evolve. Furthermore, Actility’s team is proving to be particularly agile at innovation, adapting to new technologies very efficiently. We are pleased to bring our culture of ambition, support and sharing best business practice to help nurture Actility’s long-term growth.”

Having a satellite operator such as Inmarsat is a significant boost. “Inmarsat sees a great deal of potential in Actility, and its expertise in global IoT networks, based on LoRaWAN, makes it a natural fit for our investment," said Paul Gudonis, President of Inmarsat Enterprise. "There are clear synergies between us, namely the ability to deliver innovative connectivity services to customers in remote locations, creating the potential for a global IoT network. To this end, we recently developed our LoRaWAN-based network in partnership with Actility to enable IoT to reach every corner of the globe. We have many more projects planned with Actility and we are excited to support the company’s rapid growth as it continues to make great strides in the IoT arena. This market is rapidly maturing and Actility, with its growing ecosystem of partners, is ideally positioned to take advantage of this for the benefit of businesses across a variety of industries.”

ViDi Systems, based in Villaz-St.-Pierre, Switzerland, has been acquired by machine vision company Cognex. ViDi was founded in 2012 by computational neuroscientist Dr Reto Wyss and the CPA Group, a Swiss industrial holding company and business incubator. The deal follows two machine vision acquisitions late last year.

ViDi’s deep learning software uses artificial intelligence techniques to improve image analysis in applications where it is difficult to predict the full range of image variations that might be encountered. Using feedback, ViDi’s software trains the system to distinguish between acceptable variations and defects.

EnShape in Jena, Germany, was acquired in October for its patented 3D area-scan technology, which captures images quickly at high resolution and eliminates the need to mechanically move objects in front of the device, as laser line scanners require. The acquisition created a new Cognex engineering centre in Jena.

In August Cognex also completed the acquisition of 3D vision software developer AQSense in Girona, Spain. AQSense develops and sells a library of field-tested 3D vision tools, and the company’s software engineers joined Cognex’s 3D engineering team upon the closing of the acquisition.

Thursday, April 06, 2017

NXP has combined the development tools for two of the most popular embedded microcontrollers in the industry, giving designers dramatically more flexibility in system implementation.

The MCUXpresso Integrated Development Environment (IDE) unifies development support for thousands of LPC and Kinetis (formerly Freescale) MCUs based on ARM Cortex-M cores using the same software suite.

The MCUXpresso IDE features simple, scalable and user-friendly interfaces and tools and is built to leverage the capabilities of the highly popular MCUXpresso SDK and Config Tools. The new feature-rich, Eclipse-based framework completes the trio of powerful MCUXpresso software development solutions and provides access to thousands of new project wizards and clone projects, saving designers valuable time by giving them a head-start to customise their own innovations.

“If design tools are simple, yet comprehensive, our customers stand a much better chance of designing tomorrow’s next amazing innovation,” said Geoff Lees, senior vice president and general manager of the microcontroller business line at NXP. “This unified software enablement gives developers more choice in high-quality controller solutions to fit their design needs. NXP will continue to stay ahead of the design trends and expand our MCUXpresso software and tools to support a variety of products in the future, ensuring our customers have access to the most comprehensive design tools on the market.”

Available in full-featured free and professional upgrade editions, the MCUXpresso IDE unifies Kinetis and LPC microcontrollers under a set of compatible tools. With a dedicated quickstart panel, automatic probe detection and configuration, and intuitive project creation and cloning wizards, the MCUXpresso IDE is designed to guide developers from project setup and optimisation through application design and even multicore development. The MCUXpresso IDE supports full-featured, advanced debugging with unlimited code size and code profiling in the free offering, adds advanced trace features in the professional edition, and preserves hardware investments by supporting the former Freescale Freedom and Tower System, as well as LPCXpresso boards and custom hardware platforms.

This MCUXpresso SDK release adds new device support and includes examples and project files for use in the new MCUXpresso IDE. The MCUXpresso SDK also now includes support for NXP’s NTAG I2C Plus connected NFC tag for home-automation and consumer applications and will soon support the FRDM-KW41Z board designed for portable, extremely low power applications requiring Bluetooth® low energy (BLE) v4.2 and IEEE 802.15.4 RF connectivity. The MCUXpresso Config Tools offer a single powerful configuration environment, with a pins and clocks tool for dynamic generation of initialisation C code, and quickly guide users to example projects and web-based tools for rapid board bring-up.

Wednesday, April 05, 2017

Intel's purchase of McAfee in 2011 for $7.7bn was all about the enterprise. Now Intel is spinning out McAfee into a separate company in a $4.2bn deal that is all about the Internet of Things.

Back in 2011, Intel was aiming to secure the enterprise alongside its PC and server processors in a market it dominated. Now that it needs to secure the IoT, it needs cooperation from companies that license the ARM architecture. Hence the need for an independent venture.

The key change is the McAfee Data Exchange Layer (DXL), the industry-endorsed communication fabric providing real-time interaction between applications. This needs to be taken down the stack to the gateway, where Intel processors are being used, and further down to the node. This is the challenge. Another Intel company, Wind River, is already taking it up, pushing the VxWorks real-time operating system further into the IoT.

The McAfee Security Innovation Alliance has over 135 partners around the world, and 30 of these are using the DXL connection as an API.

The giveaway is in the new strapline for McAfee: innovation, trust, and collaboration. The new company is 49% owned by Intel, with the remainder held by equity house TPG and private equity investment firm Thoma Bravo, but it has to demonstrate that it can work well with the rest of the industry that does not rely on Intel. Intel Senior Vice President and General Manager Chris Young will lead the new McAfee as Chief Executive Officer. TPG partner Bryan Taylor has been named Chairman of the Board.

“Cybersecurity is the greatest challenge of the connected age, weighing heavily on the minds of parents, executives and world leaders alike,” said Christopher Young, CEO of McAfee. “As a standalone company with a clear purpose, McAfee gains the agility to unite people, technology and organizations against our common adversaries and ensure our technology-driven future is safe.”

“We offer Chris Young and the McAfee team our full support as they establish themselves as one of the largest pure-play cybersecurity companies in the industry,” said Brian Krzanich, CEO of Intel. “Security remains important to Intel, and in addition to our equity position and ongoing collaboration with McAfee, Intel will continue to integrate industry-leading security and privacy capabilities in our products from the cloud to billions of smart, connected computing devices.”

The advantage of DXL is that it is an open standard. Unlike typical integrations, each application connects to the universal DXL communication fabric and there is just one integration process instead of multiple efforts, which makes it suitable for enterprise-scale IoT deployments.

OpenDXL will support a broad range of languages, enabling developers to create integrations using their favourite development environment. One app publishes a message or calls a service; one or more apps consume the message or respond to the service request.
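The publish/subscribe and service-request patterns described above can be sketched with a minimal in-process message bus. This is an illustration of the model only, not the actual OpenDXL API; the class, topic names and payloads are all hypothetical.

```python
from collections import defaultdict

class MessageBus:
    """Toy stand-in for a DXL-style communication fabric."""

    def __init__(self):
        self.subscribers = defaultdict(list)
        self.services = {}

    def subscribe(self, topic, callback):
        # One or more apps consume messages published on a topic.
        self.subscribers[topic].append(callback)

    def publish(self, topic, payload):
        # One app publishes; the fabric fans the message out to
        # every subscriber without point-to-point integrations.
        for callback in self.subscribers[topic]:
            callback(payload)

    def register_service(self, name, handler):
        # A service responds to requests rather than broadcasts.
        self.services[name] = handler

    def invoke(self, name, request):
        return self.services[name](request)

bus = MessageBus()

# Publish/subscribe: a hypothetical threat-event feed.
seen = []
bus.subscribe("/threat/events", seen.append)
bus.publish("/threat/events", {"host": "gw-01", "severity": "high"})

# Request/response: a hypothetical reputation lookup service.
bus.register_service("reputation", lambda req: {"score": 42, "file": req})
reply = bus.invoke("reputation", "abc123")
```

The point of the pattern is that publisher and subscriber never reference each other directly, only the shared topic, which is what keeps each application to a single integration effort.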

As is the goal for any standard, the interaction is independent of the underlying proprietary architecture of each integrating technology, and integrations are much simpler because of this abstraction from vendor-specific APIs and requirements.

In addition to creating native DXL integrations, developers can also wrap their services to interact, or wrap the API of a commercial product to publish data onto DXL. Other services can listen to DXL messages and calls to enrich their functionality with the latest data, or take appropriate action. For a more sophisticated app reflecting orchestration, these sorts of actions can be scripted together to drive a waterfall, or simultaneous set, of actions.

The challenge now is to persuade the wider embedded industry that the new McAfee is truly independent of Intel, so that it will adopt the technology.

Tuesday, April 04, 2017

SYSGO has optimized its PikeOS hard real time operating system for multicore designs with a hypervisor and separation microkernel.

Release 4.2 has been designed specifically for systems and applications that need certification according to safety or security standards such as DO-178B/C, EN 50128, ISO 26262, as well as Airbus SAR and Common Criteria requirements.

Fine-grained kernel locking enables all cores to continue their processes even while one of them executes a system call, greatly reducing unproductive processor cycles. Other cores are only blocked if they attempt to access the exact same resource, in order to control interference. This is necessary for the latest ARINC-653 multicore standard.
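The difference between a single "big kernel lock" and per-resource locking can be sketched as follows. This is an illustrative model only, not PikeOS code; the resource names and the `FineGrainedKernel` class are made up.

```python
import threading
from collections import defaultdict

class FineGrainedKernel:
    """Toy model: one lock per kernel resource instead of one
    global lock, so cores only serialize when they touch the
    same resource."""

    def __init__(self):
        self.locks = defaultdict(threading.Lock)

    def syscall(self, resource, operation):
        # Callers touching different resources proceed in parallel;
        # only callers of the *same* resource contend here.
        with self.locks[resource]:
            return operation()

kernel = FineGrainedKernel()
counters = {"timer": 0, "uart": 0}

def bump(name):
    def op():
        counters[name] += 1
    return op

# Four "cores" making system calls on two distinct resources.
threads = [threading.Thread(target=kernel.syscall, args=(r, bump(r)))
           for r in ("timer", "uart", "timer", "uart")]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

With a single global lock, all four calls would serialize; with per-resource locks, only the two calls on each shared resource do, which is the interference-control property the ARINC-653 multicore work requires.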

PikeOS 4.2 also improves the energy efficiency of embedded systems as it allows the developer to manage multiple hardware clock devices and frequencies on the same board - including System on Chip (SoC) internal and external clocks. This way, applications as well as IPs on the SoC can easily be stopped and restarted as needed, reducing both resource and energy consumption.

PikeOS 4.2 provides a modern, compact and certifiable hypervisor with a separation microkernel, implementing robust time and resource partitioning that allows interference channels to be managed within a certifiable project. In addition, PikeOS 4.2 provides time schedules for individual resource partitions by core, where direct CPU affinity may be used to implement core separation for ultra-critical partitions.

"Being the leading European operating system manufacturer, we have a long track record in supporting our customers in the entire certification process", said SYSGO's VP of Marketing & Product Strategy, Franz Walkembach. "With PikeOS 4.2, these customers have now access to a software platform that has been strictly designed with certification in mind. What is more – they will also benefit from an entire ecosystem around this platform which brings together the expertise of SYSGO, our partners and the scientific community."

PikeOS 4.2 will be available this month for multicore CPUs, including ARM v7 and v8, 32 and 64 bit PowerPC and 32 and 64 bit x86. Board Support Packages (BSPs) are available for a wide selection of silicon vendors like NXP/Freescale, Renesas, Intel, Xilinx and Altera.

The first product based on PikeOS 4.2 will be SYSGO’s own safety and security certification kit, giving customers an efficient starting point for safety or security certification and helping to reduce the time spent in long certification programmes.

Monday, April 03, 2017

The cloud and AI have dominated stories on the Embedded blog in March. While security concerns for the Internet of Things are still dominant, from a low-cost encryption chip to ways to hack smartphones using their accelerometers, edge analytics (with VxWorks) and ARM chips being used in Microsoft's Azure cloud were more prominent, as well as NVIDIA's Jetson 2 embedded card for artificial intelligence.

Flaherty Publishing
