So much of what’s written about IoT relates to infrastructure, networking or telecom. Why is this? One might think IoT is driven by these capabilities, since it requires hardware, connectivity and management. I would argue, as would the major public cloud providers, that it has very little to do with hardware and much more to do with software. The likes of Amazon, Microsoft and Google have gone so far as to make it simple to deploy, manage, collect and analyze the data coming off connected things. The main use cases are the output of these analytics, whether manual or automated.


These insights are meant to be consumed by users or other machines. This is one reason IoT grew out of the carrier craze around machine-to-machine (M2M) communications, a way for machines to process data coming from other machines. The M2M trend accelerated through the 2000s and reached its fever pitch in 2010, but has since largely been replaced by IoT, where the use cases go far beyond the carrier space. Most M2M vendors shifted to IoT and largely changed their approaches, attempting to find the market that had eluded them. That shift doesn’t seem to have materialized into meaningful revenue or growth yet, but that may change; the jury’s still out on the strategy.

M2M’s evolution shifted the concept from managing a network of devices, or a service on a network, toward things that can be measured in the real world. This gives computing access to far more context, including where a device is, the environment it’s in, how it’s being used and how well it’s running, and allows us to programmatically answer countless questions. The value of this data in isolation is limited, but as we collect, measure, analyze and integrate the insights from it into other computing platforms, it becomes incredibly powerful and allows machine learning to appear magical — or what many call artificial intelligence.
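To make the point concrete, here is a minimal sketch of why context matters when analyzing device data. All the names and the schema are hypothetical, invented for illustration: readings are grouped by location so each site gets its own baseline before flagging outliers, rather than judging raw measurements in isolation.

```python
import statistics
from dataclasses import dataclass

# Hypothetical reading from a connected device. The raw measurement alone
# says little; paired with context (where the device is, how it's used),
# it becomes something analytics can act on.
@dataclass
class Reading:
    device_id: str
    temperature_c: float
    location: str   # where the device is
    mode: str       # how it is being used

def flag_anomalies(readings, threshold=2.0):
    """Flag readings more than `threshold` standard deviations from the
    mean, grouped by location so context shapes each baseline."""
    by_location = {}
    for r in readings:
        by_location.setdefault(r.location, []).append(r)
    anomalies = []
    for group in by_location.values():
        temps = [r.temperature_c for r in group]
        if len(temps) < 2:
            continue  # not enough data to form a baseline
        mean = statistics.mean(temps)
        stdev = statistics.stdev(temps)
        for r in group:
            if stdev and abs(r.temperature_c - mean) / stdev > threshold:
                anomalies.append(r)
    return anomalies
```

A fleet-wide threshold would mislabel a warm climate as a fault; grouping by location is one small example of the contextual integration described above.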

Unfortunately, those managing and building capabilities for IoT are met with a high degree of complexity. IoT platforms attempt to simplify things, but each has its own ecosystem due to a lack of standards across platforms and hardware. If that fragmentation weren’t bad enough, the lack of well-formed standards also creates interoperability and security challenges that show up in news headlines daily. We take standards for granted, assuming they will be there to help us avoid lock-in and to enable computing to work across providers and technologies. In IoT, these are completely missing, causing major gaps in operating and assuring the services and products built on IoT. Through the years, our customers have expressed their concern over these operational challenges, which is why we’ve been investing in building solutions for these problems. But the requirements keep expanding with each customer’s environment and use case — an interesting and growing problem, indeed.

Thankfully, getting visibility across these disconnected components has become easier. Several application performance monitoring (APM) companies that specialize in visibility and observability are tackling this issue with a combination of APIs and automated instrumentation. Because these platforms are not tightly coupled to specific technology providers, they enable a view across these technologies. We’ve built one at AppDynamics, and other providers are doing similar work. This is a natural extension of what we do in the APM market: one of APM’s key capabilities is cross-technology visibility for operations, with analytics on top of the collected and correlated data. The interconnected nature of things, mobile apps, web apps and back-end services makes it a natural fit for APM technologies.
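As a toy sketch of what automated instrumentation looks like in principle — all names here are illustrative, not AppDynamics’ or any vendor’s actual API — a decorator can wrap each service call, record its timing, and propagate a shared correlation ID so spans from different services can later be stitched into one end-to-end view:

```python
import functools
import time
import uuid

# Central store standing in for an APM collector (illustrative only).
COLLECTED_SPANS = []

def instrument(service_name):
    """Wrap a service call: time it, propagate a correlation id, and
    record a span to the collector."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, correlation_id=None, **kwargs):
            cid = correlation_id or str(uuid.uuid4())
            start = time.perf_counter()
            try:
                return fn(*args, correlation_id=cid, **kwargs)
            finally:
                COLLECTED_SPANS.append({
                    "service": service_name,
                    "operation": fn.__name__,
                    "correlation_id": cid,
                    "duration_ms": (time.perf_counter() - start) * 1000,
                })
        return wrapper
    return decorator

# Two hypothetical services in one request path.
@instrument("auth-service")
def check_token(token, correlation_id=None):
    return token == "valid"

@instrument("mobile-gateway")
def handle_request(token, correlation_id=None):
    return check_token(token, correlation_id=correlation_id)
```

One call to `handle_request` yields two spans sharing a correlation ID — the raw material for the cross-technology flow maps described above.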

One great example is the connected car, which involves a large set of complex systems that come into play when drivers use features and apps in the car. Running these apps leans heavily on carrier and mobile connectivity, and, naturally, the back-end services behind them are critical to making connected car systems function. In most cases, several third parties are involved in delivering and managing connected car services and apps. When something goes wrong, such as the unlocking feature not working, understanding where the problem is — and which team or third party can remediate it — is a major challenge. These problems upset drivers and tarnish the car brand. Being able to visualize the end-to-end flow is a major win for many organizations, both for those in IT and for executives betting on the connected car as a differentiator. Interestingly, as with any technology, manufacturers follow each other. Four years ago, our first connected car manufacturer started using AppDynamics’ APM system to solve such challenges, gain the visibility needed to see the end-to-end flow and keep drivers happy. Fast-forward to today and many top luxury and premium car brands are using our APM technology to help manage this complex connected car ecosystem.
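To illustrate the triage problem, here is a minimal, hypothetical model of a remote-unlock request path. The hops and owners are invented for illustration; in practice they would be derived from correlated trace data rather than a hard-coded list. The point is that once per-service health is visible, pinpointing which team or third party owns the failing hop becomes a lookup instead of a finger-pointing exercise:

```python
# Hypothetical end-to-end flow for a remote unlock request:
# each hop is a (service, owning team) pair.
UNLOCK_FLOW = [
    ("mobile-app",      "automaker"),
    ("carrier-network", "carrier"),
    ("telematics-api",  "third-party"),
    ("vehicle-gateway", "automaker"),
]

def locate_failure(statuses):
    """Given per-service health ({service_name: bool}), return the first
    failing hop and the team that owns it, or None if the flow is healthy
    end to end."""
    for service, owner in UNLOCK_FLOW:
        if not statuses.get(service, False):
            return {"failed_at": service, "owned_by": owner}
    return None
```

So if the telematics provider’s API is down, the result points directly at the third party to engage, rather than leaving the automaker’s IT team to investigate every hop.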

There are many other IoT use cases where similar situations occur, such as connected factories, energy generation and distribution, smart cities, government and more. Solving this complexity will be increasingly important to satisfying customers and ensuring these organizations can continue to innovate and deliver capabilities that differentiate them from their competitors. It’s an exciting time in software, when our technologies can help provide that edge, release more quickly, or react rapidly to new opportunities and brand-affecting issues.

All IoT Agenda network contributors are responsible for the content and accuracy of their posts. Opinions are of the writers and do not necessarily convey the thoughts of IoT Agenda.

