The move was bold – not just launching a pilot of self-driving vehicles on the streets of Pittsburgh, but giving the public an opportunity to get closer to the technology. Uber's fleet of choice for the pilot was modified Volvo XC90 SUVs instrumented with hundreds of sensors, with Uber drivers retaining the ability to take over control during the ride as needed.
At $69 billion, Uber is generously valued at 16 to 17 times its annual revenue. According to Bloomberg, Uber lost about $1.2 billion this year alone, with a large portion of the loss attributed to the high driver wages that Uber pays out. According to various figures, Uber pays its drivers, on average, 1.5 to 2 times more than traditional taxi drivers earn. Pressure is probably mounting for Uber to take the plunge (think IPO).
80% of the ride cost is split between the car owner and the driver, while the remaining 20% goes to Uber. Uber is waiting for the day its fleet is self-driven, turning the tables and making the company attractive not just to riders but to Wall Street too!
The next generation of transportation will be defined against the backdrop of partnerships between traditional auto manufacturers and tech companies. Uber recently acquired Otto, a start-up founded by former Google employees focused on autonomous vehicles. Tesla, ahead in this game, claims it has over 100 million miles' worth of autonomous driving data. Google very recently announced it would roll out a service in San Francisco that allows users of the Google-owned Waze app to carpool with commuters heading in similar directions. With General Motors' $500 million investment in Lyft, the game is ON to define the next generation.
With self-driving cars, partially autonomous vehicles and ride-sharing models on the horizon, the viability of the business model behind this transportation system lies in the uptime and reliability of the vehicles on the road. Optimizing the ride, vehicle health and cost are key to the bottom line, and the Connected Car promises to deliver just that.
Scheduling an appointment at the dealership to get the "Check Engine Light" diagnosed will be too late. Time-based scheduled maintenance will be replaced by "condition-based" maintenance – thanks to telemetry data and real-time analytics in the cloud that can notify car owners of just-in-time or even predictive maintenance. Dealers will no longer need to stack inventories of parts; by becoming part of this ecosystem and receiving the lead, they can run just-in-time inventory. A round trip to the dealer will be replaced by an "over-the-air" flash for most ECU fixes, keeping uptime and customer satisfaction at their highest. While our cells are rejuvenating at night, self-driving vehicles will share and learn from their everyday experiences amongst their peers.
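As a sketch of how condition-based maintenance could work in practice, the snippet below flags a service lead when a telemetry reading crosses a condition threshold. All signal names and threshold values here are illustrative assumptions, not any OEM's actual telemetry schema:

```python
# Illustrative sketch: condition-based maintenance from telemetry.
# Signal names and thresholds are hypothetical examples only.

THRESHOLDS = {
    "coolant_temp_c": 110.0,    # overheating risk (alert when ABOVE)
    "brake_pad_mm": 3.0,        # pad wear limit (alert when BELOW)
    "battery_voltage": 11.8,    # weak battery (alert when BELOW)
}

# Signals where a LOW reading, not a high one, indicates a problem.
ALERT_WHEN_BELOW = {"brake_pad_mm", "battery_voltage"}

def maintenance_alerts(telemetry: dict) -> list:
    """Return the signals whose readings cross their condition threshold."""
    alerts = []
    for signal, limit in THRESHOLDS.items():
        if signal not in telemetry:
            continue
        value = telemetry[signal]
        crossed = value < limit if signal in ALERT_WHEN_BELOW else value > limit
        if crossed:
            alerts.append(signal)
    return alerts

reading = {"coolant_temp_c": 118.2, "brake_pad_mm": 5.1, "battery_voltage": 12.4}
print(maintenance_alerts(reading))  # → ['coolant_temp_c']
```

In a real deployment the alert would flow to the cloud analytics pipeline, which could then notify the owner and forward the lead to a dealer.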
What is the future for insurance carriers in the world of self-driving cars? Liabilities aren't going to disappear for years to come, until the technology matures. So, beyond vehicle health, the connected car ecosystem and the ride-sharing model of transportation are bound to bring an overhaul and paradigm shift in the product offerings of insurance carriers, as the association of vehicle to driver becomes more dynamic in nature. Insurance product offerings are likely to move from "Pay-as-you-drive" to "Manage-how-you-drive", thanks to hundreds of sensors and the big data pipe between the vehicle and the telematics service provider.
Watch this space for "Diagnostics-as-a-Service" and "Risk-Analytics-as-a-Service" offerings that enable OEMs and insurance carriers to get the best out of the Connected Car ecosystem!

09/27/16- Evolution Making IoT Possible

The "Internet of Things", more popularly known as "IoT", could further be expanded to "Inter-network of Things." When talking about IoT, we are talking about connecting millions, even trillions, of devices. The nature of these devices varies a lot in terms of physical, logical and functional characteristics. A common question asked about these devices is: "Haven't these devices been in existence?" Well, the simple answer is yes, they have. So why do we continue to talk about devices that are already around us doing their job? Because they do exist, but the majority are not connected and hence cannot communicate.
Let's step back in time and talk about connected devices. With the innovation of computer networks came the notion of connectedness, to communicate and share resources. Slowly the connectivity spread across buildings, campuses, cities, continents, etc. But the devices we are discussing were specialized devices, which had the ability to connect and communicate using a specialized protocol, the Internet Protocol (IP). Hence connectivity was possible only for a few of these specialized devices (i.e. computers, laptops, networking switches, routers, controllers, etc.) with special characteristics (hardware and software). The ability to connect was confined mainly because of hardware and software limitations, meaning not many devices satisfied the criteria, and the rest remained isolated and disconnected.
Over time, with great innovations in the field of networking and communications (Wi-Fi: 802.11b, g, n, ac, ad, aj, ax and ay / Cellular: CDMA, GSM, 2G, 3G, 4G, 4G LTE and the yet-to-come 5G / Bluetooth / Bluetooth Low Energy: BLE / 6LoWPAN: an IPv6 adaptation for IEEE 802.15.4 based networks / ZigBee: for IEEE 802.15.4 based networks / LoRaWAN, etc.), the limitations have not only been lifted but communication has been made possible among a huge variety of devices, from small sensors to giant trucks, irrespective of their location and whether they are physically connected (wired) or remote (wireless).
Let me take the popular use case of firmware upgrade (transformed into the so-called "Over The Air Update", or OTA) and draw a simple graph to show the evolution and realization of use cases across devices (with different characteristics) as innovation and technology progress:
Now that we understand the importance of connectivity and its major role in making IoT happen, let me talk about other aspects of IoT and the ecosystem. When we talk about millions and trillions of devices, can you imagine the amount of data these devices generate? Since we know they aren't going to sit quiet, let's say an enormous amount of data. So how does this data reach the destined application servers (residential gateways, cloud gateways, data analytics systems, etc.)? Obviously, through networks. Are the existing traditional networks good enough to withstand the storm of data and provide the intelligence to avoid single points of failure? Can we utilize network resources in the most efficient way without putting strain on the network? What about dynamic load management? I would say NO.
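To make "an enormous amount of data" concrete, here is a back-of-the-envelope sizing calculation. The device count, message size and reporting interval below are illustrative assumptions, not measured figures:

```python
# Rough sizing of IoT telemetry volume; all inputs are assumptions.
devices = 10_000_000          # a fleet of ten million devices
msg_bytes = 512               # one telemetry message, ~0.5 KB
msgs_per_day = 24 * 60        # one message per minute, per device

bytes_per_day = devices * msg_bytes * msgs_per_day
terabytes_per_day = bytes_per_day / 1e12
print(f"{terabytes_per_day:.1f} TB/day")  # ≈ 7.4 TB/day
```

Even this modest per-device rate adds up to terabytes per day, which is exactly the "storm of data" traditional networks were never provisioned for.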
It doesn't stop there. Luckily, we see an emerging trend, "Software Defined Networking (SDN)", in the field of networking and communications, which can deal with the real-time network challenges posed by IoT.
The central idea behind SDN is to make network equipment programmable and combine it with the power of virtualization. Hence the basis of SDN is virtualization and its ability to separate the network control plane from the forwarding plane while providing a programmatic interface into the network equipment. This means the intelligence to take decisions and route packets is completely abstracted and lifted from the network device to a much more powerful (virtualized) server.
From a broader perspective, SDN allows organizations to replace manual interfaces in network equipment with programmatic interfaces that can automate tasks such as flow control, configuration and policy management, and can also enable the network to respond dynamically to changing network conditions and application requirements (e.g. efficiently handling the storm of data generated by IoT devices). The ability to run network software separately from the underlying hardware, especially in a virtual environment, revolutionizes computing power, storage, logical centralization of control-plane functions and network operations, treating pools of network devices as a single entity to plan and control the network and its resources dynamically.
With SDN, network flows are controlled at the level of global network abstractions rather than at the level of individual devices, which not only helps with dynamic load management but also prevents single points of failure. Emerging trends and technologies in networking, especially SDN and NFV, are starting to create a perfect ecosystem and can only help make the realization of IoT possible, not just for now but for days to come.
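The control-plane/forwarding-plane split described above can be sketched as a toy model: a logically centralized controller holds the routing policy, while switches keep only a flow table and ask the controller on a table miss. This is purely illustrative; real SDN deployments (e.g. OpenFlow-based controllers) are far richer:

```python
# Toy sketch of SDN's split: the controller holds all routing logic;
# switches only match packets against flow rules and cache them.

class Controller:
    """Logically centralized control plane for the whole network."""
    def __init__(self):
        self.policy = {}  # destination prefix -> egress port

    def set_policy(self, prefix, port):
        self.policy[prefix] = port

    def compute_rule(self, dst_ip):
        # Simplified prefix match standing in for real route computation.
        for prefix, port in self.policy.items():
            if dst_ip.startswith(prefix):
                return port
        return None  # no route known -> drop

class Switch:
    """Forwarding plane: no routing intelligence, just a flow table."""
    def __init__(self, controller):
        self.controller = controller
        self.flow_table = {}

    def forward(self, dst_ip):
        if dst_ip not in self.flow_table:
            # Table miss: ask the controller, then cache the rule locally.
            self.flow_table[dst_ip] = self.controller.compute_rule(dst_ip)
        return self.flow_table[dst_ip]

ctrl = Controller()
ctrl.set_policy("10.1.", 2)
sw = Switch(ctrl)
print(sw.forward("10.1.0.7"))  # → 2
```

Because the policy lives in one place, re-routing the whole network around a failure or a load spike is a single policy update at the controller rather than a box-by-box reconfiguration.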

08/18/16- IoT Edge Design Considerations – Walking along a thin wire

The tremendous value in connecting devices and harvesting data comes with the big responsibility of making the right choices to achieve engineering excellence. The main challenge here is not a lack of choices, but that there are far too many choices to make, be it protocol selection, power considerations, data sampling, wireless technology and so on. It gets more complicated because these are not independent choices but inter-dependent ones. However, understanding the relationships amongst them empowers you to make the right choice.
It is important to understand the key factors that influence the design choices when it comes to IoT Edge design. Those key factors are:

Power

Range

Bandwidth

Fidelity

Latency

Cost

Reliability

IoT Edge devices can vary from coin-cell battery powered to solar-powered to industrial devices that are powered 24x7 with backup batteries. Power is one of the most important considerations, as the choice of data transmission frequency and the amount of edge computation will have an impact on battery life. For example, an IoT BLE device running on a coin-cell battery will have to consider how much and how frequently it transmits data in order to stretch the battery life to a year or two. Another factor to consider is that the broadcasting power setting in the chipset is typically fine-tuned based on the required range and deployment environment. Two important elements that have a direct relationship to power are the data transmission frequency and the amount of CPU cycles required on the edge device. One common optimization technique for power and bandwidth is sampling the data at a set interval and aggregating it, while turning on the transmitter radio at a different interval. When using an LTE radio, turning on the radio only during transmission saves considerable power. But caution must be taken not to turn the radio ON and OFF too frequently, as that can be counter-productive. Also, the tolerance for latency must be considered on a case-by-case basis before choosing store-and-forward. One other way to optimize data transmission is to perform some basic computation on the edge to avoid sending noise, sending only significant information to the cloud. It is important to understand that computation consumes significant power, sometimes even more than using an LTE radio. For example, sending data using an LTE radio at a 1 Mb/s rate could consume approximately 1700 mW, whereas running an ARM processor at 100% CPU could consume 2000 mW.
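The trade-off above can be sketched numerically. The snippet below uses the two figures from the text (~1700 mW for LTE transmission at 1 Mb/s, ~2000 mW for a fully loaded ARM core) to compare an always-on design with a store-and-forward design; the duty-cycle fractions and idle-power floor are illustrative assumptions, not measured values:

```python
# Back-of-the-envelope average-power comparison.
# P_LTE_MW and P_CPU_MW come from the text; everything else is assumed.
P_LTE_MW = 1700.0    # LTE radio transmitting at 1 Mb/s
P_CPU_MW = 2000.0    # ARM processor at 100% utilization
P_IDLE_MW = 10.0     # assumed sleep/idle floor

def avg_power_mw(radio_duty, cpu_duty):
    """Average power when radio and CPU are each active a fraction of the time."""
    idle = max(0.0, 1.0 - radio_duty - cpu_duty)
    return P_LTE_MW * radio_duty + P_CPU_MW * cpu_duty + P_IDLE_MW * idle

# Always-on: radio up 10% of the time, CPU busy 5%.
always_on = avg_power_mw(0.10, 0.05)
# Store-and-forward: aggregate on-device and burst the radio up only 1%
# of the time, at the cost of slightly more edge computation (6% CPU).
store_and_forward = avg_power_mw(0.01, 0.06)

print(f"always-on:         {always_on:.1f} mW")
print(f"store-and-forward: {store_and_forward:.1f} mW")
```

Under these assumptions the duty-cycled design cuts average power roughly in half, but note how quickly the savings erode if the extra edge computation grows: the CPU term is more expensive per unit time than the radio term.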
Before making IoT Edge power design choices here are some common questions we should be asking ourselves –

“Is the edge device connected to a power socket or battery powered?”

"If battery powered, do we have a separate Field Gateway (plugged into a power socket) that this IoT device can connect to in order to send data?"

“What is the amount and frequency of the data to be sent?”

It is important to understand that the choice of the network, application and messaging protocol is a function of various factors, of which data transmission rate, battery power and range are the most important.
The data transmission rate will usually depend on the required fidelity and the amount of data transmitted. Often, optimizing the data transmission rate will compromise fidelity, which might have a negative impact on the use case for this data. In some cases, missing data can be extrapolated or interpolated using techniques such as regression, curve-fitting and smoothing.
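As a minimal sketch of recovering missing samples, the function below linearly interpolates gaps in a sampled signal. It is a pure-stdlib illustration; a real pipeline would more likely use regression or smoothing from a numerical library:

```python
# Linear interpolation of gaps (None values) in a sampled signal.
def interpolate_gaps(samples):
    """Fill None entries by linear interpolation between known neighbors."""
    filled = list(samples)
    known = [i for i, v in enumerate(filled) if v is not None]
    if not known:
        return filled  # nothing to anchor on
    for i, v in enumerate(filled):
        if v is not None:
            continue
        left = max((k for k in known if k < i), default=None)
        right = min((k for k in known if k > i), default=None)
        if left is None:          # leading gap: extend first known value
            filled[i] = filled[right]
        elif right is None:       # trailing gap: extend last known value
            filled[i] = filled[left]
        else:                     # interior gap: linear interpolation
            t = (i - left) / (right - left)
            filled[i] = filled[left] + t * (filled[right] - filled[left])
    return filled

print(interpolate_gaps([10.0, None, 14.0, None]))  # → [10.0, 12.0, 14.0, 14.0]
```

Whether such reconstruction is acceptable depends on the use case: fine for trend dashboards, usually not for safety-critical signals.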
It is also important to thoroughly understand the deployment environment to make the right design choices. For example, a device in a mining field with little to no cellular signal might have to resort to satellite or data muling. In cases where devices are not connected to the internet, data muling can be used: a mobile device (such as a user's mobile phone) connects to the IoT device when it comes into proximity, harvests the data and sends it to the cloud. Some IoT use cases are implemented using crowd-sourced data muling, where the general public's mobile phones act as gateways when they come into proximity with the IoT device. The disadvantage of data muling is that the latency is often non-deterministic.
These factors are very closely related to each other, and they all have a direct relationship to the cost of the edge device hardware and the software solution. The importance of the IoT edge device design phase cannot be overstated as one undertakes the journey of connectivity.

08/02/16- The Lochbridge Connected Car Maturity Model

The connected car ecosystem consists of a plethora of players, both small and large. Lots of innovation and transformation is happening in this space, which can cause some confusion. Where should an OEM focus its time and investments? What matters most for an OEM to align with its vision and gain a competitive advantage?
We at Lochbridge have developed a maturity model to help OEMs map the connected car journey and stay ahead of the curve. Our approach focuses on outcomes and not just on the progression of technology. Technology should help focus on business objectives.
Take a quick trip down memory lane – our industry started in 1997 with OnStar's introduction of telematics. The business model then was the commercialization of remote connectivity to the car – the early days of IoT, in my perspective. In OnStar's case, a simple 3-button system provided safety and security related services. In fact, OnStar was set up, for all practical purposes, as a separate company focusing not just on GM but on other OEMs as well. Other similar ventures, like Wingcast (from Ford) and ATX Technologies, were started as well. All of these offerings were TSP-centric (Telematics Service Provider) and were available only for high-end cars.
Today our industry has progressed to where the pipe is being leveraged for a multitude of things, making the car another node on the network. The focus is on both the customer and the vehicle. Every OEM either has a telematics offering of its own, is in the process of building one, or uses a TSP. Telematics is slowly but surely becoming a standard offering.
Infotainment-related offerings are becoming more mainstream as well. A drop in computing and display costs is allowing the 7” screen to become more affordable and a standard offering.
With a newly booming industry, it is important to make sure that a proactive, not a reactive, approach is being taken. The following sections detail the outcomes which we think are important in deciding and defining your Connected Car journey.
Loyalty
Every OEM wants to create that sticky relationship with the Customer to enhance their brand awareness and create the Loyalty factor. So we like to re-define TCO from “Total Cost of Ownership” to “Total Convenience of Ownership”. A hassle-free ownership experience with an automobile plays the biggest role in creating brand loyalty.
We have made a lot of progress in this regard, but gaps still exist in the customer-OEM-dealer relationships. I own two luxury brands in my household – one domestic and one foreign. Both have telematics, and neither can help my dealer automatically identify me at the dealership, or the reason for my visit. Neither can help me download a performance or feature upgrade on demand. We do have gaps to fill, and by closing these gaps, we can create loyal customers.
Differentiation
While we have made great strides by integrating digital into the vehicle, are we really leveraging the car's screen as the 4th screen? We have a disjointed, inconsistent experience within and across OEMs. If we take some lessons from the mobile world, we will see that we are not leveraging the capability to make the experience more personal and contextual. With both of the products I drive, I have inconsistent Bluetooth integration for audio, where restarting the car or the phone does not seem to help. I am sure people who own an iPhone will agree with me. I recently rented a car with Apple CarPlay and struggled to figure out how to search for a POI (though I use an iPhone). At the very least, our industry has progressed to where most OEMs have started to support Apple CarPlay and Android Auto, and some have their own implementations as well to help with a seamless user experience.
Monetization
Now to discuss the Holy Grail: how do I make money while creating a happy customer? In the race to keep up with the competition, OEMs, directly or indirectly through TSPs, are connecting their cars but are still figuring out how to at least break even on the TCU or IHU costs related to connectivity. The key opportunity is in the transformation of the data and in the insights the OEM and its partners can use. I am not going to debate privacy here, because if Google can do it, I am sure our industry can too. To name a few opportunities: very few OEMs expose their data as APIs for partner integration to create an ecosystem. To cite an example, Progressive offers its Snapshot product but also has a partnership with OnStar. More of these partnerships between OEMs and other industries like insurance, retail, healthcare, etc. will help monetize the connection. To go back to my personal story, one of the products I drive helps the dealership by sharing my service-due reminder with the closest dealership, resulting in a personal call from the dealership to schedule a service appointment. This is just one of the many opportunities that exist to create a trusted partnership between dealerships and the customer to drive parts and service revenue, and not just by re-selling voice and data packages.
Quality
Now let's move to the product: the car. Let's discuss how connectivity will make lemons a thing of the past. This is one perfect business case to justify connectivity costs. With the pipeline in place, we can push upgrades to reduce risk and extract vehicle insights to engineer a better product for tomorrow. With the wealth of data we can collect, we can foresee patterns, predict future failures and prevent an expensive recall and a damaging customer experience.
Recently, I had trouble with one of my cars (remember, this is a luxury brand). The car did not warn me; I had to take it to the dealership on my own. The dealership advised that it was safe to drive, but I had to bring it back in a few days since they did not have a loaner available. Going back to our new definition of TCO, isn't this a lot of hassle when my car is connected? Wouldn't it have been awesome if the car had warned me of the issue, contacted my dealer and scheduled an appointment and a loaner based on my availability? Connectivity could have been used to address quality and user experience issues, though we will continue to see updates to maps, the TCU and infotainment precede ECU updates.
We hope our model will serve as a map to help assess how technology can help achieve these outcomes by focusing on the essentials:
Loyalty – Create a new relationship between customers, cars and the companies that participate in the ecosystem.
Differentiation – Gain a competitive edge with unique experiences both inside and outside of the vehicle.
Monetization – Introduce new services that customers will pay for and allow third-parties to participate.
Quality – Analyze Vehicle data to enhance product quality and performance and gain visibility into potential risks.

Find out how we rate the industry today!

With the advent of advanced wireless technologies, cars are becoming increasingly hyper-connected; they are no less than high-performance computers on wheels. Incredible focus is placed on the center stack display, providing the HMI on one hand and acting as a conduit between the OEM cloud and the hundreds of sensors deeply integrated into the product on the other. Some OEMs have even announced mirrorless concept cars, moving to high-resolution video streamed onto special displays.
The war for what goes onto the center stack display will continue to intensify. The App Store and Play Store each offer around 2 million apps! However, in-vehicle apps will never see these kinds of numbers, due to some fundamental differences and the added technology requirements. HMI aspects and the driver distraction factor will play a crucial role before apps get to the in-vehicle app store.
It was heartening to see Toyota adopt Ford's SmartDeviceLink, which was open sourced. QNX Software Systems and UIEvolution are planning to integrate Ford's SmartDeviceLink software as well. However, the fragmentation of the underlying OS and infotainment SDKs is here to stay, as most OEMs have their own proprietary homegrown SDKs. Testing and certification of apps for the in-vehicle platform is altogether a different ballgame. It is crucial to understand the complexity and nuances associated with the vehicle ecosystem to appreciate the need for a new, focused approach to in-vehicle app certification.
Though the car is increasingly becoming a network of high-speed computers, the typical lifecycle in the automotive industry is longer than that of other consumer technology products. Conventionally, a limited number of test benches are made available to the testing team well ahead of the actual product release. In a development organization with multiple vendors and teams, time-sharing of these limited resources is often a challenge. There is a lot at stake, causing sensitivity around these resources leaving the perimeter of the OEM, even to a trusted vendor's premises. The need for physical proximity to the test benches, in a world of globalization and teams located in geographically diverse regions, poses yet another challenge.
At this time, the configuration of the IHU is resource-constrained compared to a consumer device like a tablet. So it is important to ensure that the apps certified to go on the IHU do not starve other apps of resources and can co-exist while giving the right user experience. It is also important to monitor the network traffic of in-vehicle apps from a security standpoint, along with the bandwidth they utilize in terms of bytes received, etc. A comprehensive audit report of the API calls and URLs that are accessed is important for understanding the dynamic behavior of the apps, beyond static code analysis. Another unique problem with in-vehicle apps is the dependency on vehicle data, other contextual data and real-world scenarios that are not easy to simulate. Some interesting examples to think about:

Validating the behavior of an internet radio in-vehicle app when there is a disruption of the 4G LTE connection

Monitoring the memory, CPU utilization of a navigation app to ensure there isn’t any memory leak or excessive resource utilization

Detecting redundant and repeated calls to the underlying API by an app

Rogue calls to unauthorized APIs that cannot be detected by static code analysis

Metering the bandwidth utilized for apps that are dependent heavily on the internet resources
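Several of the checks above boil down to counting what an app actually does at runtime. Below is a minimal sketch of such a metering wrapper; the interface, allow-list and the 100-call "chatty" threshold are hypothetical, invented for illustration, and a real certification harness would hook the platform's actual API and networking layers:

```python
# Illustrative sketch: meter API calls and bandwidth of an in-vehicle app.
# The interface, API names, and limits are hypothetical examples.
from collections import Counter

class AppMeter:
    """Counts API calls and bytes sent so a certifier can audit app behavior."""
    def __init__(self, allowed_apis):
        self.allowed = set(allowed_apis)
        self.calls = Counter()
        self.bytes_sent = 0
        self.violations = []

    def record_call(self, api_name):
        self.calls[api_name] += 1
        if api_name not in self.allowed:
            self.violations.append(api_name)   # rogue call to an unauthorized API

    def record_send(self, payload: bytes):
        self.bytes_sent += len(payload)        # bandwidth metering

    def report(self):
        # Flag redundant/repeated calls (here: any API invoked more than 100x).
        chatty = [api for api, n in self.calls.items() if n > 100]
        return {"bytes_sent": self.bytes_sent,
                "chatty_apis": chatty,
                "violations": self.violations}

meter = AppMeter(allowed_apis={"nav.route", "media.play"})
meter.record_call("nav.route")
meter.record_call("vehicle.unlock")            # not on the allow-list
meter.record_send(b"stream-chunk" * 100)
print(meter.report())
```

The value of this kind of runtime audit is precisely that it catches behavior (rogue calls, excessive chatter, bandwidth hogging) that static code analysis cannot.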

Though test benches can help with functional testing to a great extent, they have various limitations, as discussed throughout this post. The natural alternative is software-based emulation that can scale and eliminate the need for physical proximity. Building a software-based emulator that can work with various generations, models and even OS platforms, in some cases simultaneously, can be a herculean task for OEMs and a distraction from the core goal. Also, setting up the emulator's configuration to provide an exact IHU-like environment is no small task!
After all, testing and certifying in-vehicle apps requires different tools and a different strategy! Questions about a connected car strategy for your business? Contact Lochbridge today!

06/14/16- Interoperability as Growth Catalyst

Imagine you are transported in a time machine to a far-away island where every single person speaks a completely different language from every other. Forget about Google Translate; the internet didn't even exist at that time. You slowly start to realize the hard truth of being unable to communicate with anyone to meet your daily needs: from something as simple as getting a glass of water, to finding a place to stay, to being understood as well as understanding others.
Other than chaotic noise, no meaningful action could take place to help a single person achieve their goals, let alone their basic needs! Each person was like an individual human island, completely cut off and isolated from one another even though physically they were co-located.
Luckily, the smart "things" around us will not end up in this situation. Over the years, some of the best minds in the industry have spent time and energy making these "things" talk to each other to help make human lives better and smarter. Thus, a smartwatch can talk to a smart golf club and help you improve your next swing, a kid's wearable can talk to a parent's smartwatch to share their current location - the possibilities are endless!
In today's burgeoning IoT ecosystem, we see diverse players entering the market at a pace unheard of in any other industry, coming from multiple verticals (auto, wearables, commercial, etc.) and addressing different needs of the ecosystem: applications, hardware devices, software management, OEMs, resellers, etc.
A new "thing" that hits the market with its own proprietary protocols can only talk to itself and nothing else - pretty much like the humans on the island we discussed earlier.
It's imperative that interoperability - be it at the application layer or at the communication layer - is molded into the DNA of any new IoT "thing" or solution that wishes to enter the ecosystem. Treating interoperability as a de facto requirement encourages each player in the ecosystem to focus on what it does best. For example, application vendors can focus on creating interesting solutions to improve human life, without worrying about device-to-device or device-to-system incompatibilities. Hardware vendors can focus on improving the interplay of their devices within a diverse ecosystem, and so on, resulting in the evolution of a healthy "things" ecosystem.
The economic importance of interoperable solutions can be inferred from a recent McKinsey report [1], which estimates that "interoperability could deliver over $4 trillion out of an $11 trillion economic impact" that IoT solutions could generate by 2025 - a whopping 40% of the overall potential.
Lochbridge is a firm believer in leveraging industry standards to create interoperable solutions in the IoT space. The various ingredients making up its flagship end-to-end IoT Acceleration Suite have embraced IoT standard protocols head on and leveraged their power to deliver added value in the IoT solutions space.
Earlier this year, we were part of the OMA summit [2] in San Diego, California, where we had the opportunity to validate the interoperability of our Device Management Platform, a critical part of the IoT suite. The Device Management Platform proved to be a winner, with stellar performance results in the marathon 4-day interoperability tests against 10 different industry vendors during the event. It also stood out for its intuitive GUI workflows, which help one carry out complex software update campaigns, diagnostics and other management functions on heterogeneous device types, using industry standard protocols over a secure channel.
The Device Management Platform plays a pivotal role within the Lochbridge end-to-end IoT suite. Our solutions span the entire ecosystem, from device management to enterprise integration to analytics at scale. In addition, these solutions support key IoT use cases across industries including automotive, healthcare and manufacturing, among others. One key feature of the Device Management Platform is its impressive flexibility, providing secure software update management and remote diagnostics across a plethora of IoT devices. Beyond its flexibility, the platform provides the scalability and reliability to handle billions of IoT transactions. On a technical note, the OSGi-based platform implements the OMA DM 2.0 and LwM2M 1.0 specifications for device management, with the Core Gateway handling all major IoT protocols, including MQTT, REST, CoAP, DTLS and AMQP, in a plug-and-play architecture. Through our participation in the OMA TestFest, we also proved the industry-standard compliance and interoperable nature of our solution.
Lochbridge also believes in giving back and is actively participating in the evolution of future revisions of some of the IoT industry standards [3][4].
We believe that creating and promoting industry-standards-based solutions is a key factor that will help drive a healthy IoT ecosystem, which in turn will amplify the value created in human life through smart "things".
[1] http://www.mckinsey.com/business-functions/business-technology/our-insights/an-executives-guide-to-the-internet-of-things
[2] http://openmobilealliance.org/oma-releases-results-of-lwm2m-testfest-and-opens-next-testfest-registration/
[3] https://github.com/OpenMobileAlliance/OMA_LwM2M_for_Developers/issues/106
[4] https://github.com/OpenMobileAlliance/OMA_LwM2M_for_Developers/issues/87

06/07/16- The Next Big Automotive Revolution

The next big automotive revolution is here: Connected & Autonomous

Automotive connectivity has been around for a couple of decades now, but it has only picked up momentum in the past five years, largely due to customer demand. The opportunity is huge, the space is crowded and the business models are varied.

Introducing Lochbridge's Connected Car EcosystemⓇ

While the space is exploding, no one has yet captured all the promise and all the players in one graphic model. Our Connected Car EcosystemⓇ aims to do that. We will update it quarterly as new entrants join the space.

About the Landscape

Fueled by smartphone technology, the connected car market is exploding in a manner akin to the mobile ecosystem with the introduction of smartphones. And the promise of autonomous vehicles is adding yet another dimension to the ecosystem, forcing currently disparate industries like transportation and government to engage in conversations.

This infographic limits the ecosystem to telematics and infotainment for the consumer automotive OEM space. Generally speaking, telematics represents features/functions under the hood: anything to do with vehicle drivability, emergencies, maintenance, geo-fencing and other remote vehicle control applications. Infotainment, as the name suggests, has more to do with information and entertainment features/functions provided for the personal benefit of the driver/passenger. GM-OnStar pioneered telematics in 1996 and Ford pioneered infotainment with its Microsoft-based Sync system in 2007.

Both telematics and infotainment require connectivity, and OEMs provide it in multiple ways. Some OEMs provide embedded connectivity, while others leverage a bring-your-own-device (BYOD) model. In the embedded connectivity model, some use an onboard module for telematics but leverage BYOD for infotainment. In yet another model, the Infotainment Head Unit (IHU) provides connectivity for both telematics and infotainment features. Some OEMs only provide infotainment options.
In cases where OEMs leverage the IHU, the head unit manufacturers collaborate with various telematics service providers, modem suppliers and wireless integrators to provide end-to-end connectivity. All of the OEMs, with the exception of the vertically integrated Tesla, rely on their tier-one suppliers in various engagement models to provide the connectivity solution today.

Different levels of OEM maturity mean different ways the customer can leverage connectivity. All OEMs provide basic web access, but many also provide a mobile app that can be used to remotely connect to, monitor and control their vehicles. How rich those features are depends on how far along the OEM is in enabling digital communication with its vehicles. Some OEMs provide access to vehicle features via the IHU, and many are exploring and providing wearable access to their vehicles, as with the Nissan Nismo.

One of the primary advantages of connectivity for a software-laden vehicle architecture is the ability to update that software to fix problems, prevent failures and enhance performance. The only OEM that leverages this today is Tesla. Entering the automotive scene late gave Tesla the advantage of designing Over the Air (OTA) updates into every electronic module in the car. Other OEMs have limited capability to do this and are wary of mass roll-outs due to security concerns. Additionally, because of the multi-vendor tier-one architecture prevalent among OEMs, OTA coordination can be expensive and time-consuming. A gateway module architecture can ease this problem, though it will not completely eliminate the integration issues.

Though the enterprise backend is not shown explicitly in the infographic, it plays a major role in bringing these technologies together.
While telematics provides vehicle insights, an enterprise-wide customer 360 implementation lets OEMs gain insight into customer preferences beyond just vehicle behavior, enabling comprehensive solutions.

The key in-dash systems that drive infotainment features are operating systems and BYOD projections. The leading operating system currently is QNX, followed by Linux. Android is also making headway with Google’s aggressive vehicle strategy. The Open Automotive Alliance announced at CES 2014 paved the way for Android Auto, a projection of Android smartphones in-dash that essentially brings the same experience to the car. Apple’s CarPlay is another smartphone experience brought into the vehicle. And finally, many OEMs have their own app stores, offering customers apps through systems like Cadillac CUE and Hyundai Blue Link.

Opening the Doors for New Business Models

The possibility of apps in the dash opens doors for content providers to serve customers in the vehicle. As illustrated, travel, merchant, music, location-based services (LBS), traffic and parking are only a sample of the content provider categories; the most popular are music, traffic and location-based services. OEMs are exploring new partnerships and opportunities to open up their vehicles for contextual, personalized and targeted commerce by combining customer and location preferences with service provider options.

Connectivity has also ushered in novel opportunities like rideshare, usage-based insurance and innovative fleet management solutions. With increased electric vehicle adoption, connectivity will help smart-grid and power companies manage power consumption efficiently.
Organizations like EPRI, along with partner OEMs, are striving to create standards for utility-based vehicle features like demand response, aggregation, renewable balancing and dynamic pricing.

Rideshare is another area taking on new dimensions as millennials shift the transportation paradigm, preferring Uber or Zipcar to owning their own vehicles. OEMs also want a piece of the action, evident from GM’s Maven program and its investments in Lyft and Sidecar. Other OEMs are exploring similar opportunities. Uber is working with many OEMs and Google to invest in a self-driving Uber fleet. Meanwhile, Ford announced a rideshare app for Ford vehicles, and Toyota, VW and BMW just this month announced investments in the ridesharing companies Uber, Gett and Scoop, respectively.

OEMs are also opening up their enterprises securely via developer portals to encourage innovation and the development of potential new business models.

The times are exciting and the landscape is ripe with opportunity. We will update the Connected Car EcosystemⓇ each quarter, because one thing in this space is certain: what is here today could be completely obsolete as soon as next year. Do you know a company that should be included in the Connected Car EcosystemⓇ? Contact Lochbridge today!

04/05/16- The Lesson from Tesla

Last week something amazing happened in the automobile industry. People lined up to pre-order the new Tesla Model 3, the first mass market electric vehicle from Elon Musk’s revolutionary car company.
Tesla has no dealerships, so they weren’t lining up for a test drive. Anyone who pre-ordered that day would not get a car until 2018. But lines at the Pasadena Tesla store stretched around the block by 6:30 Thursday morning for a car that wouldn’t be revealed until 8:30 that evening. People handed over $1,000 deposits for a car they had neither seen nor driven.
This seems to go against the entire marketing model of the auto industry. What’s more, electric cars aren’t new. The Nissan Leaf, the Chevy Volt, the Toyota Prius hybrid -- there are several models.
But Elon Musk is the Steve Jobs of the car industry. There were smartphones before the iPhone, too, but people gladly stand in line to order their new iPhones. Only passion for the product would drive people to stand in line for a car. And Tesla has created almost a cult-like following. Why? It’s not about the car, really. It’s about the ownership experience.
Like Apple, Tesla has redefined the industry by re-creating the user experience.
For example, service. There are no Tesla stores in Michigan, so Tesla owners buy their cars online, or in Ohio or Indiana. If a car needs service, Tesla sends a service person to your house, replaces the car with another Tesla to drive during the repair process, and then returns the car to you at work.
This kind of service, so unlike the conventional car service experience in the mass market, helps the owner to fall in love with the product. People actually recount their service experiences on Facebook, fueling future sales. They act as unpaid product evangelists.
We asked a friend who drives a Tesla how he felt about its OTA (over the air) software updates. To push out those updates, the car must constantly be in touch with Tesla. We asked, “did you have to sign a consent to get OTA updates? Were you concerned about your privacy?”
His response: “I really didn’t care because I know they are constantly reading the data from my car to make a better product. They were able to push a major update to the car over the air that made it capable of driving itself. If I’m getting that kind of experience, I really don’t mind letting my data be read.”
It’s what we all know: consumers will trade their privacy for a compelling user experience.
To compete with affordable Teslas, the rest of the industry is going to have to change both its sales experience and its ownership experience.
Until now, Tesla has been a luxury product. But all that changes with the Model 3. At $35,000, with a $7,500 tax credit, consumers are looking at a sub-$30,000 car with an incredible reputation. As he said he would, Musk has brought the price point down to where Tesla can really compete with the Chevy Bolts of the world.
With the connected car technology that exists today, we can enhance the ownership experience for other OEMs, beginning at the dealership. We can remove that sticker in the corner of the windshield that reminds you when your next service is due, and enable the car to tell you that your car’s oil life is going to end in a few weeks. In fact, the car could also check the dealership for available service appointments, look at your calendar, and make you an appointment.
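A minimal sketch of that flow, assuming the vehicle reports an oil-life percentage and the dealership exposes open appointment slots. Every function name and number here is invented for illustration, not a real OEM API:

```python
# Hypothetical sketch: estimate when service is due from connected-car
# oil-life data, then match that against open dealership slots and the
# owner's calendar. All names and figures are illustrative.
from datetime import date, timedelta

def weeks_until_service(oil_life_pct, pct_used_per_week):
    """Rough weeks remaining before oil life reaches zero."""
    if pct_used_per_week <= 0:
        raise ValueError("usage rate must be positive")
    return oil_life_pct / pct_used_per_week

def find_slot(due_date, open_slots, busy_days):
    """Pick the earliest dealership slot on or before the due date
    that does not collide with the owner's calendar."""
    for slot in sorted(open_slots):
        if slot <= due_date and slot not in busy_days:
            return slot
    return None

today = date(2016, 4, 5)
due = today + timedelta(weeks=int(weeks_until_service(30, 10)))
slots = [today + timedelta(days=d) for d in (5, 12, 20, 30)]
busy = {today + timedelta(days=5)}  # owner is booked on the first slot
print(find_slot(due, slots, busy))
```

The point is not the arithmetic but the inversion of responsibility: the car, not a windshield sticker, initiates the appointment.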
So, are you ready to create a better user experience?

03/15/16- Are the automotive OEMs losing control of their own customer experience?

Wouldn’t you love to get in the car and have it tell you that your first meeting is at 9 AM, but based on your driving habits, traffic and weather, you will be late? And then, have it call or text the people in your first meeting and advise them you are running a bit behind?
With connected car technology, most of that is possible today. But it’s not happening, because there is a battle between the technology companies like Apple/Google and the carmakers. And the customer is paying the price.
At Lochbridge, we believe the ecosystem should reward a partnership between tech companies and car companies where the winner is always the customer.
As the car becomes a platform, the dashboard screen is becoming the fourth screen. In new cars, the dashboard screen is big and beautiful, a rich display of information. That display has all kinds of potential for monetization.
After all, Tesla has already proven that the modern car is basically just another node on the network rather than just a mode of transportation. Because it is a tech company, Tesla has already capitalized on the movement to the big screen: it has built a suite of apps that control everything from battery life to navigation to entertainment. By doing this, Tesla completely controls the user experience of its owners, which is the goal of every technology company and is likewise the goal of every automotive OEM.
No OEM wants to give monetization options away to Apple and Google, handing over “their customer” experience. But how long do you think users, who use AirPlay and Chromecast at home to stream content to their TVs, will stay patient with a restricted user experience in the car? Within the bounds of safety, the end user should be able to make the call on what gets projected and what doesn’t.
In the fight for control between Apple, Google, and the OEMs, the customer stands to lose. And that must stop. OEMs should start focusing on the user experience, instead of merely defending their turf and letting the “Bluetooth/tethering battle” with the customer’s phone continue.
Nor should the OEM worry about how to make money every time someone gets in the car. An OEM who does it right could take a small piece of the action every time a mobile ad is shown to a user. When a customer starts the car and turns on the navigation to go to the store, the appropriate coupons could show up on the dashboard, and if those coupons are redeemed in the store, the OEM should be compensated.
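As a sketch of that coupon flow: match merchant offers to the driver's destination and credit the OEM a share on redemption. The offer data, categories, and the revenue-share rate are all invented for illustration:

```python
# Illustrative in-dash coupon matching. Offers, categories, and the
# 2% revenue-share rate are made up, not any real OEM program.
OFFERS = [
    {"merchant": "GroceryMart", "category": "grocery", "discount": 0.10},
    {"merchant": "CoffeeStop",  "category": "coffee",  "discount": 0.15},
]

def offers_for_destination(dest_category, offers=OFFERS):
    """Return the offers relevant to the destination's category."""
    return [o for o in offers if o["category"] == dest_category]

def oem_cut(redeemed_value, share=0.02):
    """Hypothetical OEM revenue share on a redeemed coupon."""
    return round(redeemed_value * share, 2)

print(offers_for_destination("grocery"))  # coupons surfaced in-dash
print(oem_cut(50.00))                     # OEM's cut on a $50 redemption
```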
To compete, OEMs should focus on the data they already control and make it more useful to the driver. They must do it with the speed of the tech companies. The space is large enough for the OEM and tech companies to coexist and partner. So why not let the customer be the winner?

In today’s Internet of Things (IoT), Big Data is the result of connected devices driving information to the cloud. Data is generated from a variety of sources, such as vehicle performance, web history and medical records, and it all brings an opportunity to gain insight into trends. Data scientists break big data into four dimensions: volume, velocity, variety and veracity.
In a real-world metaphor, data is like water flowing through pipes. Before reaching our homes for use, huge volumes of water flow from different sources at high velocity, carrying a variety of minerals depending on the source.
As long as pure water flows through all the pipes at various levels until it reaches our homes, we continue to get safe drinking water for a healthy life. If one of the sources becomes contaminated, it would affect the water quality (veracity), and assessments would need to be made for purification.
To me, data flows like water. In today’s world of many integrated business systems, a variety of data flows between information systems at high velocity and volume. Many data scientists and big data practitioners are trying to analyze that data to derive intelligence for better business decisions or autonomous devices. But while we focus on solving big data problems, do we too often overlook the veracity, or quality, of the data?
We are entering the era of Autonomous Devices. We are developing robots as our personal assistants and autonomous vehicles as our personal chauffeurs. We “train” these devices through big data to better meet our needs. What if the veracity of the training data is not guaranteed, and the devices are fed low quality information? Imagine how these autonomous devices are going to behave!
Many organizations spend a lot of money to predict things, based on historical data sets, and the use of statistical and machine learning algorithms. It is much like the way we predict weather or identify possibilities for crime, theft or accidents. Do you think we would be able to predict accurately, if we have problems with veracity of historical data?
Take for example how data veracity could cost a delivery organization. If there is low quality data – such as an incorrect, incomplete or illegible address – it would cost the delivery service time and money to make corrections, return it to sender or risk it being delivered to an unintended party. Again, the problem could be averted if data veracity is at its highest quality.
Just as clean water is important for a healthy human body, "Data Veracity" is important for good health of data-fueled systems.
In dealing with high volumes of data, it is practically impossible to validate the veracity of data sets using manual or traditional quality techniques. Instead, we can ensure the veracity of high-volume data sets using data science techniques, such as clustering and classification, to identify data anomalies and improve the accuracy of data-fueled systems.
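As a minimal illustration of an automated veracity check: flag records that deviate sharply from the rest of the stream. Production systems would use the richer clustering and classification models mentioned above; a z-score screen over invented sensor readings is the simplest stand-in:

```python
# Minimal automated veracity check: flag values far from the mean.
# The readings and threshold below are invented for illustration.
from statistics import mean, stdev

def flag_anomalies(values, threshold=2.5):
    """Return indices of values more than `threshold` standard
    deviations from the mean of the sample."""
    mu, sigma = mean(values), stdev(values)
    if sigma == 0:
        return []  # all values identical, nothing to flag
    return [i for i, v in enumerate(values) if abs(v - mu) / sigma > threshold]

# A sensor stream with one corrupted reading at index 6:
readings = [21.1, 20.9, 21.3, 21.0, 20.8, 21.2, 98.6, 21.1, 20.9, 21.0]
print(flag_anomalies(readings))
```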
While we all appreciate that technology is evolving fast, we need specialists to extract intelligence out of data flowing between various information systems across all the industries. I highly recommend the skills of a Big Data practitioner or a Data Scientist to understand the importance of your Data Veracity, especially as we try to solve today’s problems within Big Data and autonomous devices.

02/05/16- Business Insights around Monetizing Artificial Intelligence

In the previous post, I noted the Internet of Things (IoT) technology wave is upon us. It will be truly disruptive, and it will fundamentally change your business. Aside from noting these technologies, I am also talking about a process and mindset that works best to navigate through this new technology wave. The process is based on Design Thinking principles, but these blogs focus on the flavor in the business-to-business space.
To motivate the discussion, I’ll talk about a category of IoT that revolves around predictive analytics, machine learning and artificial intelligence.
Wired Magazine recently noted that “an artificially intelligent Google machine just beat a human grandmaster at the game of Go, the 2,500-year-old contest of strategy and intellect that’s exponentially more complex than the game of chess.” It’s the latest example of progress made by researchers in the AI field, but of course Google is not a research firm. It is a public company with shareholder responsibilities, and it is pushing hard to monetize this technology on a grand scale. Others are pushing just as hard toward the same goal, notably IBM with Watson and GE with Predix.
Are you now asking what this means for your business? Or what you should be doing right now? Do you feel like you’re falling behind… and quickly? If so, let us apply the process discussion and see how it may help with the problem.
Getting to Understand
As shown in the figure below, the first step in the process is to Understand.
The goal here is getting a clear understanding of the problem to be solved or the goal to be achieved. In addition to a clear goal/problem statement, other artifacts are often needed, such as personas for key actors and stakeholders, and background information about the problem. For business-to-business problems and goals, it can be very helpful to use a business model canvas to capture some of this information.
This “Understand” step is the most important and most difficult part of the process to get right.
This is all well and good, but how does it help us with the problem at hand? What should businesses be doing today with AI and Machine Learning technologies?
Digging Deeper into Understanding
As I noted in the previous blog, one of the process’ best features is that it is inherently iterative. What that means in the “Understanding” step is an explicit recognition that the problem itself is not well understood. As with the AI problem, for non-trivial problems, it will take more than one pass through the process to make real progress.
In our case, the goal of the first iteration should be to gain business insight around the problem. Google’s definition of insight is: “the capacity to gain an accurate and deep intuitive understanding of a person or thing.” It is the exact prerequisite needed to solve our problem. Our first victory in tackling this problem could be the realization that we need to understand AI to a point where we can elaborate on the original problem.
Design Thinking helps us with a heavy focus on the concept of empathy and deep understanding of a problem or goal. The traditional Design Thinking process is human-centric with multiple techniques aimed at getting a deep understanding of the person that would be using or interacting with a certain technology. For a business-to-business situation, additional techniques and tools can help with this understanding step, such as a business model canvas.
Since the area of AI is broad, we know that in order to complete an iteration, we need to produce something that passes a test and produces results. We need to be more specific. Let’s look at Machine Learning and attempt to gain insight into how machine learning works.
The Wired Magazine article talks about something called Neural Networks as the means for the computer to “learn.” So, let’s fine-tune the goal for this iteration: gaining insight into Neural Networks.
With this type of goal, we could put together one or more Machine Learning prototypes that help us get past the technical jargon and marketing hype. The fact that the techniques are inspired by our current understanding of how the human brain works doesn’t help.
If we are very new to the topic, we may decide to take on a basic problem of applying Machine Learning to recognize a scanned image of handwritten digits – the “Hello World” program of Machine Learning.
Using images such as those shown below, we can develop a set of programs that will read the images and “learn” which images correspond to which digits.
The Learning process will result in a model that can be saved. At that point, we can take any new image, apply this model to that image and predict which digit is in the image. Our goal would be to do this with greater than 95% accuracy.
At first the problem seems difficult. The software receives nothing more than a set of pixels; for example, a 20x20 pixel image results in 400 pixel values as input to the program. The other difficulty is that these images are of handwritten digits, so there would seem to be an almost infinite variety to each digit. It seems daunting to build a program that recognizes all of them at over 95% accuracy.
The solution and result of what we would review at the end of our iteration is shown below:
Through a clever (and very unintuitive) application of very simple math, one can feed the above Neural Network any 20x20 image of a digit, and it will predict what digit is in the image. It can achieve over 95% accuracy.
Looking at the previous figure (and avoiding the details), to predict what digit is in an image one simply executes the mathematics from left to right, and the answer is given by the maximum number on the far right.
No elaborate if-then loops. No complicated edge detection and geometry calculations. In fact, no traditional program logic at all. Only Math!
The magic of how one calculates the model (in this case T1 and T2) is likewise primarily an application of math and techniques similar to those used to fit a line to a set of data points.
Since the problem is solved entirely with mathematics, the data used as input obviously must be numeric.
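A toy version of that forward pass, in the spirit of the figure: input, times T1, through a sigmoid, times T2, then take the maximum. The weights T1 and T2 here are invented and tiny (a trained 20x20-pixel model would have 400 inputs); the point is simply that prediction is only math:

```python
# Toy fully connected forward pass: no if-then logic, only math.
# T1 and T2 are made-up weights, not a trained model.
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def layer(inputs, weights):
    """One fully connected layer: weighted sums followed by sigmoid."""
    return [sigmoid(sum(w * x for w, x in zip(row, inputs))) for row in weights]

def predict(pixels, T1, T2):
    hidden = layer(pixels, T1)
    output = layer(hidden, T2)
    return output.index(max(output))  # predicted class = strongest output

# 4 "pixels", 3 hidden units, 2 output classes, purely illustrative:
T1 = [[0.5, -0.2, 0.1, 0.4], [-0.3, 0.8, 0.2, -0.1], [0.2, 0.2, -0.5, 0.7]]
T2 = [[1.0, -1.0, 0.5], [-0.5, 1.0, 1.0]]
print(predict([0.0, 1.0, 1.0, 0.0], T1, T2))
```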
So here is what we would have learned in this hypothetical iteration:

Machine Learning is very powerful. Even the simple POC solved a tricky problem, and it did so by “learning,” as opposed to requiring heavy software development investment.

We have a better sense of what the term “Neural Network” means: it is a mathematical model.

We understand that to use this approach, we need to translate data into a numerical representation. If I have texts that I plan to apply this technique to, I need to plan on spending a good deal of time developing a sound approach to translating those texts into numbers the model can work with.

I can quickly see some of the limitations of the “fully connected” Neural Network, such as scalability. But there are solutions to these limitations.

With this insight (and others not listed here), we can revisit the Understand step in our process and take another look at the basic problem statements:
So what does this mean for my business? What should I be doing right now? I feel I’m falling behind… and quickly.
Insight Gained Toward Understanding
With insight gained, we are well positioned to reframe the problem. Clearly, we would need to go through at least one more iteration based on the reframed problem. Given that AI and its applications are so broad, focus and prioritization based on business goals will be needed. But after a relatively short period of time, and possibly three iterations through the process, there is a good chance that at least the context for AI would be clear to the company. Annual budgets can be made with these results in mind, and more elaborate, possibly concurrent efforts can be spun up, each using the same process.
The Wired Magazine article mentions that AI applied to some video games has been shown to produce computers that play better than any human player, achieving this by playing the game in a way no human ever would. This type of achievement in the business world would make or break companies. Trying to make sense of this technology can be tough; applying a Design Thinking, iterative approach makes it much more manageable.
Next time, my blog will focus on the next two steps in our process and continue to illustrate its iterative nature as applied to various IoT technologies.

02/03/16- Security in the World of a Smart Home

Information Assurance rests on the three main tenets of C.I.A.: Confidentiality, Integrity and Availability. Information is power in today’s modern world; to maintain its potency, its confidentiality must be limited to those with proper authorization, its integrity must be maintained when it is transmitted and used, and it must remain available for use. Data owners and the security professionals they pay to protect that data must maintain a balance among these three areas.

In recent years, however, far more personal data has been kept in electronic form by individuals without the resources of larger companies. This information is stored on personal computers in the home, more often than not connected to a personal network that is connected to the internet, putting individual information at risk. Home networks are often set up with a weak password or no password at all, because individuals lack knowledge not only of the threats they face but of the available protective technologies and how to properly configure them. More experienced users secure their networks via encryption, firewalls and a password chosen when the router is set up, often adding host-based anti-malware, anti-virus, firewall and even IDS software. Even with all this protection, recent technologies have added non-traditional devices to the modern home network in an attempt to create Smart Homes, and these have introduced a suite of new vulnerabilities that both users and companies need to consider and take steps to mitigate.
Smart homes take advantage of multiple radio protocols (Z-Wave, Bluetooth, Wi-Fi), connect multiple devices to the home network, either directly or through a proprietary hub, and often support third-party add-on hardware. The devices in use today range from unobtrusive objects such as doorbells and light switches to security systems (cameras, panels, physical locks, shades) and even major appliances, including refrigerators and ovens. Because this market is in its infancy and highly competitive, several companies have rushed their products to market. Many of these products emphasize easy, convenient setup alongside their functionality, and the fact that they connect to otherwise secure networks was overlooked by both the companies in question and the consumers making the purchases. This opens new, not-always-secured connections into the hosting network: the devices sit behind the firewall and join the network via Wi-Fi, but they also broadcast a secondary signal such as Bluetooth or Z-Wave, and are often located physically outside the home. The applications used to set up and remotely control these smart devices create further vulnerabilities. Several examples over the past year have led to potential or actual invasions of consumer privacy.
One example of a device not normally thought of as a liability to network security is the iKettle, a kettle that pre-boils water and then tweets to its owner that the water is ready. Recently, Pen Test Partners showed that an iKettle can be made to deliver a network password in plain text to an attacker using a directional antenna and two simple commands. In addition, the Android and iOS apps used for setup store this password, and the passwords protecting the user accounts in those apps (which store the network SSID as well as passwords) are also vulnerable due to poor security in the apps themselves: the Android app uses only default passwords, and the iOS app sets six-digit codes that take little time to crack with today’s readily available computing power[1]. This is an excellent example of poor secure software development practices creating new vulnerabilities for a network.
Another problem recently arose with a popular IoT device called Ring, a Wi-Fi-connected doorbell that allows video and two-way communication with whoever is standing at your door. Once again, Pen Test Partners discovered a vulnerability that lets a malicious attacker readily obtain a plain-text version of the network password: simply detach the doorbell from the outside of a home, switch the unit to access-point mode, and use a mobile device to access the URL that stores the module’s configuration file, including the SSID and password, allowing direct network access at a later date with little evidence of the breach[2]. Both of these examples show that companies, in their rush to get products to market, have ignored Secure Software Development Life Cycle (SSDLC) procedures or used inadequate ones, putting vulnerable products into the marketplace. In the case of Ring, both the hardware and software configurations showed little thought for secure development, in a device whose main selling point is connection to a private network. Even with SSDLC procedures in place, the producers of these technologies must build in mandatory security measures to help protect consumers who lack education in proper security practices.
Other threats from IoT devices stem from a lack of user education or knowledge of proper security procedures, including steps as simple as changing the factory password or setting a password at all. Leaving the stock factory password in place means that anyone who owns, or has gotten their hands on, the instructions has access to a connected section of an otherwise protected network. A telling example is a Russian streaming website which, at last count, carried unsecured streams from the internet-connected security cameras and baby monitors of 73,000 individuals and companies; the website claims its purpose is to show the importance of properly securing IoT devices[3]. Motivations aside, it demonstrates the need for companies to build mandatory protections into the software of their connected devices for the safety of their consumers. At the most basic level, this could include mandatory password changes, minimum lengths and complexity, and secure, encrypted storage of that data. Something as simple as a brief introduction on the importance of password safety when the software first starts would help as well. This again points to the C.I.A. balance that governs the usefulness of all data.
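A mandatory password policy of the kind argued for here might look like the following sketch; the default-password list and the specific rules are examples, not any vendor's actual policy:

```python
# Illustrative setup-time password policy for a smart-home device:
# reject known factory defaults, enforce length and basic complexity.
FACTORY_DEFAULTS = {"admin", "password", "12345", "default"}

def password_ok(pw, min_length=10):
    """Return True only if pw is not a factory default and meets
    length and complexity requirements."""
    if pw.lower() in FACTORY_DEFAULTS:
        return False
    if len(pw) < min_length:
        return False
    has_upper = any(c.isupper() for c in pw)
    has_digit = any(c.isdigit() for c in pw)
    has_symbol = any(not c.isalnum() for c in pw)
    return has_upper and has_digit and has_symbol

print(password_ok("admin"))            # factory default, rejected
print(password_ok("Correct-Horse-9!"))
```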
Companies must find the balance between maintaining the confidentiality and integrity of their customers’ personal network security and making their devices as available as possible to customers, both current and potential, so that the connections they supply remain useful and worthwhile. The security of these devices cannot be ignored: although a given device may store no personal data, it is often connected to the same network as devices that do, or it may stream audio or video that invades privacy. This is a huge liability for the companies involved and could result in a major loss of consumer trust when breaches occur, whether or not there is legal liability. Another problem is the lack of sustainability: although security in these devices is improving, companies often do not update their code and push it to purchased devices, decreasing reliability and security over time[4]. Companies producing any device that connects to the internet, whether directly or through a private network, must consider not only the SSDLC when creating software components, but must also ensure that their customers are either educated in creating a secure environment or guided by mandatory password changes at setup, with minimum requirements and regular background updates to component software, to keep device security current against new and rising threats and keep customers’ networks safe.
[1] http://www.scmagazine.com/squealing-ikettles-reveal-owners-wifi-passwords/article/449487/
[2] http://www.scmagazineuk.com/iot-ding-donger-reveals-wifi-passwords/article/464800/
[3] http://www.networkworld.com/article/2844283/microsoft-subnet/peeping-into-73-000-unsecured-security-cameras-thanks-to-default-passwords.html
[4] http://www.scmagazine.com/iot-security-its-not-to-late-to-get-it-right/article/403505/

01/29/16- Five Phases to a Successful TDP Project

You’ve been tasked with putting together a Test Data Privacy plan for your company, and it comes with a lot of questions. Where do you begin? What resources will you need? What applications do you start with? Where is the data located, and better yet, who owns it? You need to have a plan in place and take a phased approach to ensure nothing gets overlooked. Let’s take a look at the five phases that go into a Test Data Privacy project.

Assessment Phase: The Assessment phase is where the consultant(s) verify where the client stands in data privacy: conceptual understanding, spreadsheet analysis, security preparation and budget considerations. The goal is to obtain enough information to estimate the effort and cost required to perform a detailed Analysis of the in-scope application(s). This phase involves meetings with stakeholders and project managers at the client site, and the following areas need to be covered:

Environments (mainframe and distributed)

Volume of data

Sensitivity of data

Security/access to data

Analysis Phase: The Analysis phase is the most critical phase in implementing the data privacy solution. Due to the complexity and variety of business applications within the organization, the Analysis phase of a disguise project is frequently the most time-consuming of the five phases. Locating and obtaining the correct test data is often difficult for developers and testers. The intricacy of finding and understanding the private and personal content of test data that needs to be desensitized is even greater. Understanding the data's relationship with other files and databases that must be synchronized presents an even greater challenge for most developers and testers. This phase involves the creation of a DMA (data model analysis) document, which is an Excel spreadsheet that lists the layouts/schema; the fields/columns; the sensitive data to be fictionalized; the contact information for key technical personnel; and the location/names of all entities within the Source Data Environment.

Design Phase: In the Design phase, the consultant will work closely with the client to create and document the definition and specification of procedures that will be used to obtain the source data, desensitize, disguise, or generate replacement data, as well as the specific details for populating the target test environment with the cleansed data. The steps involved in the Design phase include defining and documenting the following:

Names of I/O files/databases/tables

Detailed layouts

Data Privacy Rules for masking data

Develop Phase: The Develop phase is the process of using the documented information from the Design phase to build, test, validate, and refine data privacy compliance processes to quickly produce results while meeting the needs of each specific data disguise rule. This phase involves the actual coding of Data Privacy rules and the creation of JCL (Mainframe) or procedures (Distributed Systems) to test the fictionalization process.
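To make the idea of a coded Data Privacy rule concrete, here is a hedged Python sketch of the kind of fictionalization logic this phase produces. The field names and masking scheme are hypothetical, not any product's actual rule syntax; the masking is deterministic so that related files stay synchronized, as the Analysis phase requires.

```python
import hashlib

def mask_ssn(ssn: str) -> str:
    """Replace a real SSN with a format-preserving fictional value."""
    digest = hashlib.sha256(ssn.encode()).hexdigest()
    # Harvest digits from the hash; pad so we always have nine.
    digits = "".join(c for c in digest if c.isdigit())[:9].ljust(9, "0")
    return f"{digits[:3]}-{digits[3:5]}-{digits[5:]}"

def mask_name(name: str) -> str:
    """Swap a real name for a deterministic placeholder so joins still match."""
    token = hashlib.sha256(name.encode()).hexdigest()[:8]
    return f"TEST_{token.upper()}"

# Non-sensitive fields pass through untouched; identifying fields are masked.
row = {"name": "Jane Doe", "ssn": "123-45-6789", "balance": "1042.50"}
masked = {**row, "name": mask_name(row["name"]), "ssn": mask_ssn(row["ssn"])}
print(masked)
```

Because the same input always produces the same masked output, a customer masked in one file will match the same masked value in every other file and table, preserving referential integrity across the test environment.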

Delivery Phase: The Delivery phase is the implementation and execution of the data privacy project within the organization’s test cycles. By this time, the Analysis phase has been completed, the extract, disguise, and load strategies have been designed, developed, tested, and validated; and now the process can be deployed across the different test environments. The testing environment is completed using repeatable procedures. This phase also requires the completion of all training and documentation so the client is able to proceed independently for future projects.

The benefits that follow are smooth-running, effective tests. Quality and efficiency improve. Goals are achieved, and the enterprise is poised for success.

01/22/16- How Wearables Are Weaving Advanced Technology into IoT

We live in a time where things are evolving faster than ever, especially mobile devices and other connected objects in the Internet of Things. We’ve already seen the impact of the wave of active cell phones. Right now, the number of cell phones in service (327 million) outnumbers the U.S. population (323 million). And smart phones have become an integral part of our daily lives, easily blending together much like salt and water in an ocean.
The next wave of mobile, connected devices is wearables. They are trending up, increasing from 19.6 million in 2014 to 45.7 million in 2015, according to IDC. And they are creating niches of utility.
The trailblazers are fitness trackers. As people have grown more health conscious, Garmin has increased sales of its GPS tracking watches, and connectivity is now sewn into other wearable clothing. It’s giving athletic coaches the ability to monitor their athletes. Heddoko is introducing smart clothing to help coaches gain more insight into biomechanics, helping them evaluate each team member’s strengths and weaknesses. From this, the athlete can be coached to press harder or slow down for optimum performance.
There are even tiers of wearable brands, such as the high-end Ralph Lauren Polo Tech shirt with bio-sensing silver fibers woven into the material. Biometric data is stored and can be manipulated through a mobile app to track calories burned. Athletes will know just how much to intensify their workouts.
For times when my wife and I disagree on the room temperature settings, we may need to turn to Wristify, an upcoming cutting-edge gadget that lets you control how hot or cold you want to feel. You wear it like a bracelet, and it acts as your personal air conditioner or heater at all times.
We’re starting to see consumers transfer utility to other fashionable devices. For example, smart watches are snatching away the health tracking utility from fitness trackers. The new Apple watch and Android Moto are very stylized, appealing to those wanting a sophisticated elegant wearable. Having a penchant for watches, I initially was turned off by the smart watch idea, as it was eroding interest from classy looking watches, like TAG Heuer or Omega. But I changed my view after looking at the impeccable screens on the Apple watch and the new TAG Heuer Connected. They are both classy and connected.
Other transfers of utility could be on the horizon, as stylish smart watches collect health data, store it in the cloud, and make it available to healthcare providers. For example, Apple has a Medical ID concept that combines all possible health statistics and makes them available to doctors.
It’s become as futuristic as Star Trek’s Captain Kirk talking into his watch during space adventures. But I think ours is better. The technology in today’s connected wearables gives us more control and flexibility in our lives with less hassle.
References:
http://www.idc.com/getdoc.jsp?containerId=prUS25519615
http://www.embrlabs.com/#product

01/21/16- Deriving Business Value through the Internet of Things – “Strategy of Things”

Intro:
Here comes a new (the next) technology wave: The Internet of Things (IoT). However, unlike many of the previous technology waves, where the focus was on automating business processes and moving to electronic media – the business value of pursuing the set of technologies that make up “IoT” may not be immediately apparent. What’s more, there are many choices and paths one can take in this space, and it’s not at all clear which path makes the most sense.
IoT holds the promise of revenue gains through product improvements, cost savings via improved efficiencies and competitive advantage through the exploitation of advanced analytics. It is therefore vital that one take an iterative and Proof-of-Concept centric approach to developing the strategy to explore, employ and maximize this new technology.
It’s hard to imagine the hype getting any worse – yet it is hype that is well founded. The list of “disruptive” technologies that fall under the IoT umbrella is daunting: all sorts of wearables & embeddables, smart factories, smart infrastructure, smart homes, smart offices, smart cities, autonomous vehicles – and the list goes on. The implementation of these technologies will fundamentally change the way we live and the way we work. There is no question that these technologies will impact your business. The only question is when the change occurs and if your business will survive the change. In this context, one does not think about employing automation or decision support systems – one begins to plan for Decision Making systems. Decision Making systems that have boundless real-time information pools to draw from, flawless memories, and an ability to learn and continually improve. These systems will have the ability to take action based on their decisions in the physical and electronic worlds.
If you think this is all futuristic propaganda that won’t happen in your lifetime – just talk to some of the folks in the auto industry. Their entire world will be turned upside down in next 3-5 years.
So what is one to do? Clearly one needs to assess this new IoT buzz, and understand what it means to the business and the future of that business. Oh – and of course, one will need to put a strategy together. Based on the introduction, I would argue that one also needs to do this “quickly”. Most importantly, one needs to realize that this is an evolving technology wave. So it is imperative that one stays on top of the evolution and make adjustments to the Strategy as appropriate. But how best to go about making this happen?
I will spend the rest of this blog talking about one Design Thinking inspired approach that does this and why this approach is better than what I’ve seen used in most strategy efforts. This blog will introduce the process at a high level. I’ll post a separate blog for each of the major steps in some more detail. I will also talk about the philosophy that underpins the process.
Just for added clarity, let me use the following sketch to set the context for this process.
In general, a company will have a Business Plan that, among other things, defines business goals and objectives. The company will then have one or more strategies on how to achieve those business goals and objectives. A set of tactics and associated plans will then be developed that strive to implement those strategies. Through the use of those tactics and the implementation of the plans, the business goals and objectives are realized, with the associated business outcomes. Obviously, there is a certain amount of change that occurs at each of these stages, which is dependent on the type and maturity of the business.
I mention this context because the word strategy is one of the most overused words in the IT vocabulary. Too often it is used to describe any activity that requires some upfront thought and planning.
When a new and potentially impactful technology emerges, a company will assess what changes need to be made to the above landscape. The business goals and objectives are often not impacted, but the strategy that aims to achieve those goals and objectives may very well need to change. For a situation such as IoT, there is a very high probability that the fundamental business plan will be impacted. Depending on the industry – IoT could change what you are selling and who you are selling to and certainly the value proposition that you are offering…
Once the organization feels that the time has come to update one of its strategies, they will often execute a process that resembles the one shown below.
One is immediately hit with a few doses of reality here. First the process is clearly “waterfall” in nature. Although individual tasks overlap to help give it some flexibility, each step is completed in a specific sequence and there is an expectation that “done means done”. One does not “re-open a can of worms” during the start of implementation step to revisit the business goals. The teams running these strategy efforts are very quick to note that the resulting strategy document (which is often what is produced) is a “living and breathing” thing. Of course it will change over time … Yet, although most admit upfront that the results of the Strategy effort will very likely change – from a process and planning perspective – it is most often a single event – develop a strategy. The updates to the strategy are allowed for in many plans – but they are really meant to make tweaks based on some tactical lessons learned along the implementation path.
Now, I have used this “traditional” process successfully many times – although the success was proportional to the level of understanding of the problem that we were solving. Developing a “strategy” to modernize an organization’s case management system does not represent the same understanding challenge as how best to employ Convolutional Neural Networks to reduce Warranty costs.
Clearly, the IoT technology wave represents both significant impacts to the business and significant ambiguity on the nature of these business impacts. When talking about IoT strategy, one needs to understand that the underlying business goals and objectives will almost certainly change, and that the nature of the change will not be well understood.
So, I would propose that you don’t go at this in the usual way – but, instead, consider the following approach.
The more time that I have spent with this approach the more I like it. In fact, I prefer to use this as a general problem solving and strategy development approach (it’s not just for IoT challenges) – as it produces results of better quality, often in a shorter period of time than the traditional waterfall version.
Notice that implicit in the approach is an acknowledgement that one will need to go through the process more than once – it’s iterative. By using the process, I am saying that I understand that the problem will take at least two passes to get right – but possibly more.
It is apparent that when planning this effort, one needs to allow for at least 2 iterations. It is also very clear that every iteration will revisit and quite probably change the problem statement and the fundamental business goal and objective definitions. Should we decide to space the iterations out – introduce a long time gap between iterations – we need to be prepared to allow for the basic goals of the engagement to change. In most of the efforts I’ve observed and been involved with, there was no allowance for this. Although lip service was given to the fact that “things may change as we go” – as soon as one tries to make meaningful changes to the fundamental problem statement – the “Scope” hammer is brought out. The third rail of IT projects is used by those responsible to keep things on track.
Yet – what use is it to keep things on track – if the destination is the wrong one?
Another point I’d like to highlight about the process – when done well it should force the team to answer the question – what can be done to simplify the solution. The time honored motto – the best solution is always the simplest one – is so often the first thing that falls by the wayside – especially when employing a new technology that the organization has no experience with. Too often – the focus of the team moves to maximizing the use of the new technology – and simplicity is often an early casualty of this mindset.
There are other key points that I would go through – but this posting is already way too long. I’ll post a set of smaller posts that talk about each of the major steps of the process – and I’ll spread the remaining comments among those posts.
In summary – for those unfamiliar with Design Thinking – I hope I’ve given you some reason to look into it. For those who live and breathe design thinking – I hope to have shown a decent application of some of its principles applied in a more pragmatic and “technical” way. Finally – for those new to IoT – I strongly urge you to look past the superficial hype and find where this new technology wave will be taking your business.

01/19/16- Sixth Sense – The Role of Machine Learning & AI in Prediction and Beyond

In the age of Big Data, there are so many different avenues and opportunities. There are many things we can do with this new data, but the vision to take action often falls short of the potential. It takes new, creative minds to extrapolate circumstances that will lead to the next big breakthrough. The amount of data that will be flooding the Internet in the future is far greater than the amount of data being produced by us humans. This is hard to fathom until you begin to contemplate what inventor Buckminster Fuller called “The Knowledge Doubling Curve.”
Fuller noticed that up until 1900 human knowledge (or data) was doubling about once every century. By 1945, knowledge was doubling every 25 years. By now, IBM states that human knowledge is doubling every 13 months on average. When we consider the data produced in the Internet of Things (IoT), data will be doubling every 12 hours. Yes, every 12 hours.
Let that sink in for a second. This doubling of data is occurring at an exponential rate. It’s going to take 12 hours to double every bit of knowledge that humanity, and now technology, has created in documented history, including the data from the prior 12 hours. The fact that this is happening necessitates the development of vastly complex software and Artificial Intelligence. The questions that are capable of being answered are now data-driven.
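The gap between those doubling periods can be made concrete with a quick back-of-the-envelope calculation; the one-year horizon (and the assumption of a flat 30-day month) are arbitrary choices for illustration.

```python
# Compare the doubling rates quoted above over a one-year horizon.
def doublings(period_hours: float, horizon_hours: float) -> float:
    """How many times knowledge doubles over the given horizon."""
    return horizon_hours / period_hours

year = 365 * 24  # one year, in hours

# Human knowledge doubling every 13 months (approximated as 30-day months)
# vs. IoT data doubling every 12 hours:
per_13_months = doublings(13 * 30 * 24, year)  # just under one doubling
per_12_hours = doublings(12, year)             # 730 doublings

print(per_13_months, per_12_hours)
print(2 ** 10)  # even ten doublings already mean a 1024x increase
```

That is the exponential point of the paragraph above: at a 12-hour doubling period, the multiplier after a single year is 2 to the 730th power, a number with roughly 220 digits.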
It wasn’t too long ago when we had to transfer data via fax machines, look up records in file cabinets, and crunch numbers with calculators. Now we have tools that help do things like analyze sentiment. Databases come to life with the click of a button and make predictions about the future in many different verticals. These are just a couple of applications of data. With the advent of new technology, we have algorithms that deal with complex data that is structured, unstructured, and semi-structured. Machine Learning can identify patterns, synthesize them, and predict them. With Deep Learning, we can use a single algorithm to learn from data and do whatever we want with it with a high degree of accuracy. The implications are nothing less than profound.
Some people say that in the future our new bosses will be algorithms. I’m not opposed to that, but when it comes to throwing people in the trash, it is most definitely a bad thing. What we can do, in the meantime, is automate things that we don’t get paid for. A lot of people drive cars and there’s human error, so let’s automate that. A lot of resources are being used to enter data page by page into a database, let’s automate that. Finding inefficiency and a point of loss in a business can be difficult. The list goes on.
There has been a lot of talk about how AI can do research, science, and even philosophy for us. If AI can find that one correlation, or maybe many correlations, that add up to preventing or even curing death, then why not automate that? If AI can enhance our lives by giving us what we need, what we want, and what we currently can’t have, then why would we be so hesitant to make it happen? There are obviously some ethical boundaries to what it should and should not do, but if we were to have general Artificial Intelligence with access to the Internet, then we would have something boundless and immeasurably more intelligent than we are. We would have something that will probably already know morals and be a lot more modest than we could even imagine; something with answers to even the most deep, mysterious questions.
If you believe in “The Law of Accelerating Returns” as presented by Google’s Director of Engineering in AI (now Alphabet) Ray Kurzweil, then you may believe him when he says: “By 2025, we will have the hardware to support Artificial Intelligence as complex as the human mind. By 2029, the software will catch up, and we will have Artificial Intelligence as complex as the human mind. By 2049, we will have achieved immortality.” Kurzweil is a visionary known for making highly accurate predictions while remaining humble. He predicted that autonomous driving would be here by the year 2013, and it was through Google’s self-driving cars. He doesn’t want to give himself credit for that prediction, because what he truly meant is that the ordinary person will have access to that technology.
Obviously autonomous driving is getting a lot of attention from the automotive vertical and probably many, many different consulting agencies. Even now in the year 2016, we’re still working on the problem. Soon it will be here, but what then? What will grab our attention after the fascination of autonomous driving dies down? There’s a lot that we can do with AI, Machine Learning, and Deep Learning, from the most uninteresting to the most interesting things that we can apply it to.
The time may come when we not only have AI and General AI, but also AI programming itself, and maybe even programming itself at our command. Once that day comes, we better hope that the program remains modest. I, for one, believe that it will be prudent and maybe even boring- not in the sense that it won’t be a helpful part of our life, but in the sense that it may be so indifferent towards everything that it almost seems bored itself. Who knows, maybe it will even be depressed by being confined to a mechanistic object that interacts with beings of lesser intellect.
I like to believe that we will have a new best friend, one that we all can rely on. One that will always have time for us. One that will take care of us, tell us right from wrong, warn us, and even love us unconditionally. That’s really what we want from this effort. We want to reduce, or even eliminate, loss, and give us the best chance for survival. The future of this endeavor is fascinating and the types of technology that we will see within our lifetimes will be extraordinary.

01/14/16- Toying with the Future of Digital Experiences

When designing digital experiences, we attempt to learn as much about the users as possible. What type of smart devices do they own? What social media applications do they share in? How comfortable are they with technology?
We don’t often discuss that digital experiences designed for an adult would be understandable and usable by a toddler. But we need to start. We’ve seen a rise in recent years of applications introducing children to coding and behavior pattern design. The recent announcement of Fisher-Price’s Code-a-Pillar is a great example of what today’s children are playing with to prepare for tomorrow’s technology-driven world.
What makes me excited about toys like the Code-a-Pillar are the conversations adults will have with children about technology. Toys and games don’t just have to be opened and consumed. They can be manipulated, customized, broken and rebuilt. Younger audiences are being introduced to these concepts, and it’s up to us to understand how they interact with, and push back on, these tools and concepts.
At this point on the Internet of Things (IoT) continuum, designers are processing a lot of questions. What can we learn from this? How could we teach similar concepts to different demographics? How will this exposure to “writing code” and customizing digital experiences evolve as IoT and this generation grow in parallel?
Having the ability to customize and create a digital experience isn’t a barrier anymore. These educational toys will encourage children to explore and push the limits of what is offered to them. We have the responsibility to make sure we don’t just design for the current mature market. We should learn from the output of the “Code-a-Pillar” as much as the children are.

We’ve all heard stories about data breach incidents, but what needs higher awareness are security processes that provide protection and regulation compliance. Test Data Privacy is one of them.
By definition a data breach is an incident in which sensitive, protected or confidential data has potentially been viewed, stolen or used by an individual unauthorized to do so. Data breaches may involve personal health information (PHI), personally identifiable information (PII), trade secrets or intellectual property.
There are a number of reasons why implementing a Test Data Privacy solution is important. First and foremost, companies must be in compliance with various government regulations that relate to the non-disclosure of personal data. Government regulations, like HIPAA, are quite clear about severe financial penalties for each data breach, with fines compounding for each day the breach is outstanding, for each incident. For HIPAA, each individual exposed is a separate incident. That can add up very quickly.
The Comptroller of the Currency -- one of the many government organizations that regulate banks -- requires banks to protect the test data that reflects production data. An officer of one bank said, “We have 6,000 programmers on 5 continents that have access to our test data. A Non-Disclosure Agreement isn’t going to cut it.”
The data breach can be deliberate. People, such as hackers, disgruntled employees, criminals or foreign governments can intentionally access private data. A few examples of such breaches occurred in 2015 at CareFirst Blue Cross Blue Shield (hackers), Multi-Bank Cyberheist (cybercriminal ring), the Office of Personnel Management (foreign government), and the Army National Guard (poor security practices).
It can be inadvertent. There have been incidents where an outside company lost a container of tapes on the way to a secure storage facility. Obsolete computers have been sold without deleting the data on the hard drive.
Granted these were direct breaches of production data, which are usually protected more than test data. However it happens, once the data gets out, it can be a dire situation for companies and customers. Companies work hard to build their reputation and earn their customers’ confidence.
Another financial hit comes in determining how a breach has occurred. A health insurance company spent over a million dollars to find how a subscriber’s health data made it onto the web. It turned out to be a third party of a third party that was testing production data, and everyone assumed that the data had been previously disguised.
The bottom line is that by implementing a Test Data Privacy solution, companies can reduce their exposure to financial disasters, whether in the form of fines and penalties for violating government regulations, or lost customers due to damage to the company’s reputation should they suffer a data breach.

11/06/15- What Is Test Data Privacy?

There’s a lot of meaning in the three-word term ‘Test Data Privacy.’ At a high level, it is data protection management or data masking while working on high security IT upgrades in the test phase. And then the concept gets more complex.
All developers need test data in order to make sure the applications they are writing work correctly and produce desired results. For years, the task of creating test data simply involved making copies of datasets and databases from production- or live-environments. While organizations may think that their core data is immune from external privacy threats, environments outside of the production perimeter (such as testing, development, or quality assurance) usually have far less robust security controls. Access to these areas is typically more widely exposed to a larger variety of resources, such as in-house staff, consultants, partners, outsourcers, and offshore personnel. Studies conducted by research firms and industry analysts reveal that the largest percentage of data breaches occur internally, within the enterprise.
Implementing a test data privacy solution is much more complex than just finding where the sensitive data is located and de-identifying it in some way. There are three questions every business needs to answer before they can move forward:

Once these questions have been answered, then the process of analyzing where the sensitive data is located, and if it needs to be disguised, can begin.
A thorough test data privacy solution is a combination of the technology, expertise, and best practices needed to support data protection initiatives across the enterprise. The solution itself is comprised of five phases: Assessment, Analysis, Design, Development, and Delivery. By implementing a test data privacy solution, an organization can reduce its risk of exposure, increase productivity, and lower the cost of regulatory compliance.

09/21/15- Security Analytics – Finding a Needle in a Haystack

Security is foundational and critical to connectivity and the Internet of Things. With hundreds of thousands of IoT transactions being executed every second, keeping the communication, infrastructure and customer data secure is a herculean task indeed. Security Analytics is gaining momentum to meet this need. Security Analytics is the combination of techniques that, by analyzing various sources of data, determine some security outcome characterized by a confidence factor. Until the technology matures, information security experts will have to weigh the output of security analytics tools before taking further action.
Security Information and Event Management
Security Information and Event Management (SIEM) refers to products and services that provide real-time insights into security-related events and alerts. SIEM focuses on aggregating data from various sources such as web logs, network logs, firewalls, etc. SIEM performs correlations and reacts to the security alerts raised. It also supports compliance requirements, and SIEM vendors are expanding the breadth of their services toward more predictive analytics.
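A toy sketch of the correlation step a SIEM performs might look like the following: aggregate events from several log sources and raise an alert when failed logins from one source IP cross a threshold. The event fields and the threshold are illustrative, not any product's schema.

```python
from collections import Counter

# Events aggregated from multiple sources, normalized to a common shape.
events = [
    {"source": "firewall", "ip": "10.0.0.7", "event": "deny"},
    {"source": "weblog",   "ip": "10.0.0.7", "event": "login_failed"},
    {"source": "weblog",   "ip": "10.0.0.7", "event": "login_failed"},
    {"source": "weblog",   "ip": "10.0.0.9", "event": "login_failed"},
    {"source": "weblog",   "ip": "10.0.0.7", "event": "login_failed"},
]

THRESHOLD = 3  # illustrative alert threshold

# Correlate: count failed logins per source IP across all log sources.
failures = Counter(e["ip"] for e in events if e["event"] == "login_failed")
alerts = [ip for ip, n in failures.items() if n >= THRESHOLD]
print(alerts)  # ['10.0.0.7']
```

Real SIEM products add time windows, cross-source rules, and enrichment on top of this basic aggregate-and-threshold pattern, but the core idea is the same.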
User Behavioral Analytics
Another buzz in the security analytics space is “User Behavioral Analytics” (UBA). While SIEM focuses on events and alerts, UBA takes a different approach by focusing on user behavior. Using user behavior data for customer segmentation, upselling and targeted campaigns has reached maturity; UBA in this context, however, focuses on using user behavior data to derive intelligence for a security outcome. UBA, in general, refers to a concept, and it could be a product or a custom-developed solution to a problem. At the crux of it, UBA first establishes a baseline of “normal” behavior for a user by mining and analyzing hundreds of thousands of log records. Once the baseline of “normal” user behavior is established, any deviation from that normalcy for that user is identified and tagged as anomalous activity for further analysis. Some of the common use cases are:

Tagging a user who logs in to perform a transaction on a Sunday, which deviates sharply from his/her normal behavior.

A user performing thousands of “Delete” operations, which is unusual for that user’s profile.

The anomalous activity is then evaluated for risk by analyzing its impact and probability. The analytics that powers this intelligence usually draws on supervised machine learning and statistical modeling. Overall, UBA helps in identifying compromised accounts, employee sabotage, privacy breaches, shared-account abuse, etc.
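The baseline-and-deviation idea behind UBA can be sketched in a few lines: build a per-user baseline from historical activity counts, then flag a new observation that deviates sharply from it. The z-score test and the three-standard-deviation threshold are common illustrative choices, not a specific product's algorithm.

```python
import statistics

def is_anomalous(history: list[int], observed: int,
                 z_threshold: float = 3.0) -> bool:
    """Flag an observation that deviates sharply from the user's baseline."""
    mean = statistics.mean(history)
    stdev = statistics.pstdev(history)
    if stdev == 0:
        # A perfectly constant baseline: any change at all is a deviation.
        return observed != mean
    return abs(observed - mean) / stdev > z_threshold

# A user who normally runs 10-14 delete operations a day suddenly runs 4000:
daily_deletes = [12, 10, 11, 14, 13, 12, 10]
print(is_anomalous(daily_deletes, 4000))  # True
print(is_anomalous(daily_deletes, 13))    # False
```

Production UBA systems replace this single statistic with models over many behavioral features (login times, locations, operation mixes), but the pattern of baseline, deviation, then risk scoring is the same.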
Factors
The response time to identify and alert on anomalous activity determines the success of UBA. In a large enterprise, aggregating and correlating web logs and other event logs from multiple systems to maintain a continuously refined baseline of normal behavior for a user or group of users can be daunting. Typically, enterprises have tens or hundreds of batch jobs that handle log management, and the output often ends up on an archive server. To continuously establish a baseline of normal user behavior, integration with a SIEM, or with the various data sources directly, is the first step. Second, the big data environment must have tools and products that can support stream analytics over high-velocity data. Last but not least, it needs supervised machine learning algorithms that can perform continuous classification and detect outliers in real time. Any product you choose must address these three aspects, whether it is on-premise or cloud-based. The challenge with cloud-based UBA products is the age-old concern of data leaving the premises, especially system logs that can hold sensitive content. However, the infrastructure required to perform analytics at this massive scale may outweigh that concern and justify cloud-based delivery of UBA.
Conclusion
For data to move up the value chain from information to intelligence, analytics is the answer, if performed at the right time. Any actionable intelligence that addresses a security breach proactively provides a multi-fold return on investment in the product or solution you choose for Security Analytics.

09/08/15- A Foundation to Build Your Big Data Program

The term “Big Data” has rapidly transitioned from buzzword to reality. Not only the big giants like Facebook or Yahoo, but even small companies have started adopting this technology, trying to predict the future of their business, demands and needs.
With Big Data Coming to Reality – Now What?
Decision making used to be a “rear view mirror” activity, namely Business Intelligence: looking at events that had already occurred and responding accordingly. With increasing demand and the ability to analyze vast amounts of Big Data in real time, decision making has become a forward-looking activity with the help of data scientists. Business executives can now see what is going on with inventory, sales orders and sensor information in real time. Systems and operations personnel can use big data analytics to sift through terabytes of log files and other machine data looking for the root cause of a given problem.
How to Build a Big Data Environment?
An infrastructure that is linearly scalable and yet easy to administer is pivotal for a Big Data platform.
The primary challenge in building a big data environment is “Where?” Most organizations are weighing the pros and cons of the two choices: on-premise vs. cloud service. One understandable dilemma for organizations is the data leaving the premises if the choice were cloud.
#1. On-Premise:
This is one of the most sought-after options for many organizations, mainly because of the sensitivity of data leaving the premises. Some of the challenges with this choice are:

Initial capital investment to setup the infrastructure without fully knowing the scale

Integrating the Big Data Infrastructure with the existing backend infrastructure

#2. Cloud Service:
With the uncertainty around scale and value, cloud service has been a wise choice for many organizations. Amazon’s Elastic MapReduce (EMR) and Microsoft’s Azure HDInsight pioneered hosting big data infrastructure in the cloud. However, a cloud service comes with the trade-off of having the data leave the premises, and many organizations are sensitive about customer data leaving the premises due to repeated cyber-attacks and privacy protection. That said, the journey towards big data often involves prototypes and proofs of concept, and the elasticity of a cloud solution comes in really handy in such cases.
Apart from the “where” of hosting big data, the “what” of the infrastructure is equally critical. Is it just storage? Organizations moving towards big data are often confronted with high-velocity data, a variety of structured and unstructured data, and massive volumes. Some of the infrastructure challenges include:

Storage

Big data shifts the plateau, raising storage costs by 60 to 80% every year. Given this rapid growth, the choice of storage hardware becomes extremely important. For instance, solid-state drives (SSDs) are far superior to spinning disks for high-velocity data ingestion.

Network

Big data workloads need network isolation and higher bandwidth. For instance, a MapReduce operation involves large amounts of data being processed and transferred among nodes. Network bandwidth must not be a constraint in a Big Data environment intended for real-time processing.

Response Times

Response time can vary completely by use case, ranging from the blink of an eye to a few minutes. Apache Spark can perform up to 100 times faster than traditional MapReduce jobs because it processes data in memory. On the flip side, one must plan for sufficient RAM on the worker nodes to meet the quality of service.

There are various infrastructure management tools in place to cleanse, integrate and manage Big Data infrastructures effectively. With these innovations available, it is now time for enterprises, large or small, to realize that embracing and adapting to Big Data is inevitable!

08/28/15- Is Security an Afterthought in Internet of Things?

The exuberance around the Internet of Things and the enormous volume of connected devices are attracting many companies, big and small, onto the IoT bandwagon. Manufacturers are adding connectivity to their devices on the assumption that customers will prefer a connected device to its unconnected counterpart if the cost is not significantly higher. Though many companies are aware that customers do not always take advantage of their internet-enabled refrigerator to refill eggs, nobody wants to be left out of this huge opportunity. The popular theory seems to be: connect first, and the use cases and return on investment will follow. In this rush to connect things, what does not seem to be getting the attention it deserves is security.
Vulnerability
The Internet is a double-edged sword. On one side, all of us enjoy the many benefits of connectivity, such as video calls to the other side of the globe or making purchases without leaving home. The darker side is identity theft, illegal financial transactions, masquerading, snooping, etc. While these threats are real and have a huge financial impact, threats to IoT devices can be fatal. For example, leaving an oven or hot plate on can potentially kill people. How about tampering with someone's pacemaker? Imagine getting locked inside a car wash.
While automobiles have not yet been hacked by real criminals, researchers have exposed vulnerabilities and shown how they affect safety, potentially leading to catastrophic incidents. Cyber-attacks are increasing in frequency, and in many cases companies do not know they have been breached until months later. The proliferation of connected devices significantly increases the risk and aggravates the impact, especially if the tools fall into the wrong hands. Many suspect the biggest terrorism threat in the future will come through the internet.
Security Strategy
The ubiquitous nature of the Internet Protocol has its downside when it comes to security. We need a strategy for end-to-end security, from the device to the cloud applications, to ensure the device is protected, thereby guaranteeing confidentiality, integrity and availability (CIA) to the customer.
The CIA triad is a model used to discuss the security aspects of IT systems, and it extends naturally to IoT. Confidentiality means making sure that data at rest, or data exchanged between endpoints, remains private through encryption; there must be no gap in security as a message flows from one node to another. Integrity means making sure the software in the device, or any part of the system, is protected against unauthorized modification. This can be achieved through a range of techniques, from simple hashing to digital signatures using public-key cryptography. Availability means making sure the system is available according to service-level expectations. This requires systems to be aware of their weaknesses and have countermeasures built in; typical countermeasures include load balancers, redundancy, clustering, etc.
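As a minimal sketch of the integrity leg, a keyed hash (HMAC) sits between the "simple hashing" and "digital signatures" mentioned above; the firmware bytes and shared key here are hypothetical:

```python
import hmac
import hashlib

# Hypothetical shared secret provisioned into the device at manufacture.
DEVICE_KEY = b"shared-secret-provisioned-at-manufacture"

def sign(firmware: bytes) -> str:
    """Compute an integrity tag over a firmware image."""
    return hmac.new(DEVICE_KEY, firmware, hashlib.sha256).hexdigest()

def verify(firmware: bytes, tag: str) -> bool:
    """Constant-time comparison guards against timing attacks."""
    return hmac.compare_digest(sign(firmware), tag)

firmware = b"\x7fELF...device firmware image..."
tag = sign(firmware)
print(verify(firmware, tag))                # True: image untampered
print(verify(firmware + b"backdoor", tag))  # False: modified image rejected
```

A production boot chain would use asymmetric signatures (the vendor signs, the device holds only a public key), so a compromised device cannot forge valid tags; HMAC keeps the sketch short.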
While designing for security, instead of relying on one trusted mechanism, we should have multiple levels of defense. Every layer should incorporate its own security mechanism and not rely on the layer below.
We should start at physical layer security and go all the way to application security while incorporating data link, IP and session layer security.
Devices should implement a Trusted Computing Base (TCB) and a security perimeter to separate the TCB from the untrusted part of the system. Devices need to be authenticated at boot-up, and device signatures for drivers and associated software need to be validated before allowing access. Packets must be filtered intelligently: mere protocol-header-based filtering might not be sufficient, so stateful firewalls are needed.
Devices need security mechanisms at the data link layer to prevent rogue devices from attaching to the network, by employing MACsec (IEEE 802.1AE) or IEEE 802.1AR device identities. Wireless access should be encrypted using 802.11i (WPA2). Bluetooth is more prone to attacks and should be guarded against bluesnarfing and bluejacking style attacks.
It is also important to limit exposure. Subnets and hardware or software firewalls can be used to isolate your internal network, with its sensitive information, from the appliance network. There is no reason for your smart garage-door opener to access data from your personal computer. Basic guidelines on passwords, authentication and authorization should be followed, and services should run only if absolutely needed. Weaknesses need to be identified early, and countermeasures should be incorporated to minimize vulnerabilities.
While cloud computing and resource virtualization reduce administration costs, they pose a new set of challenges in protecting sensitive information. In addition to the familiar defenses of the physical-security world, such as firewalls, IPS/IDS mechanisms and machine hardening, we need mechanisms such as a hypervisor security gateway to protect the VMs. Organizations need strong security policies and monitoring in place, especially because of the dynamic nature of cloud resources.
Conclusion
Security cannot be bolted onto a system at the tail end of product development. It has to be incorporated and prioritized right from the design process. In the rush to connect devices to the internet, if security is forgotten, the results can be disastrous, because we are dealing with safety-critical applications.

08/26/15- Lambda Architecture – Best of Both Worlds

With data generation and consumption exploding at a rapid pace in every industry, there is an increasing need for a solid IT architecture that can support the high velocity and volume of data. One of the common challenges in Big Data is the balance between the accuracy of analytics derived from a massive data set and low-latency, high-speed results. Lambda Architecture is a technology-agnostic data processing architecture that is highly scalable and fault-tolerant, balances the batch-processing and real-time-processing aspects of Big Data very well, and provides a unified serving layer for the data.
Query = function(All Data)
Consumption of the data via ad-hoc query is naturally a function of the underlying data set. A function operating on the entire massive data set is bound to have high latency due to its sheer size, though accuracy is generally higher with a huge historical data set. Usually, such functions use Hadoop MapReduce-style batch frameworks. On the other hand, the high-velocity processing layer usually operates on a small window of in-flight data, thereby achieving low latency, but it might not be as accurate as working against the full data set. With the increasing appetite for near-real-time data consumption, there is an opportunity to strike a balance and get the best of both worlds, and Lambda Architecture plays well in that space.
Lambda Architecture
Conceived by Nathan Marz, the creator of Apache Storm, Lambda Architecture consists of three components:

Batch Layer

Speed Layer

Serving Layer

Typically, the new data stream is implemented using a publish-subscribe messaging system that can scale for high velocity data ingestion such as Apache Kafka. The inbound data stream is split into two streams, one heading to the Batch Layer and the other to Speed Layer.
The Batch Layer is primarily responsible for managing the immutable, append-only master data set and pre-computing views of the data based on anticipated queries. It is often implemented using a Hadoop-based framework such as MapReduce. The premise behind the immutable data set is that the batch layer re-computes over the entire data set every time, driving higher accuracy of the batch views. It would be extremely difficult, if not impossible, to re-compute against the entire data set if it were mutable, as the computation process might not be able to manage multiple versions of the same data. The core goal of this layer is accuracy, achieved by pre-computing the views, even though there is inherent latency: a batch run might take minutes or hours. HDFS, MapReduce and Spark can be used to implement this layer.
The Speed Layer is primarily responsible for continuously incrementing real-time views based on the incoming data stream, or sometimes a small window of it. Since these real-time views are constructed from a small data set, they might not be as accurate as the batch views, but they are available for immediate consumption, unlike batch views. The core goal of this layer is the speed of making the real-time views available. Apache Storm, Spark and NoSQL databases are typically used in this layer.
Serving Layer’s responsibility is to provide a unified interface that seamlessly integrates Batch Views and Real-Time Views generated by Batch Layer and Speed Layer, respectively. Serving Layer supports ad-hoc queries optimized for low-latency reads. Typically, technologies such as HBase, Cassandra, Impala and Spark are used in this layer.
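As a toy sketch of the merge the serving layer performs, with hypothetical page-view counts standing in for the batch and real-time views:

```python
# Batch view: precomputed from the immutable master data set (hours old).
batch_view = {"page/home": 10_452, "page/cart": 3_001}

# Real-time view: incremented by the speed layer since the last batch run.
speed_view = {"page/home": 37, "page/checkout": 5}

def query(key: str) -> int:
    """Serving layer: merge both views for a complete, low-latency answer."""
    return batch_view.get(key, 0) + speed_view.get(key, 0)

print(query("page/home"))      # 10489: batch count plus recent increments
print(query("page/checkout"))  # 5: seen only by the speed layer so far
```

Each batch run replaces `batch_view` wholesale and resets the corresponding speed-layer counters, so any inaccuracy in the real-time view is bounded by one batch interval.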
Lambda Architecture brings together the best of both worlds: fast and large-scale processing layers. With an increasing suite of technologies such as Spark, Storm, Samza, Cassandra, HBase, MapReduce, Impala, ElephantDB and Druid, there are plenty of choices for picking the right technology for each layer.

08/18/15- Innovation in IoT – The Design Thinking Way

IoT, an emerging market estimated at $2.3 trillion, holds huge potential for redefining the lifestyle of the next generation. Leaders and niche players in the IoT space are tirelessly discovering use cases that will make daily life better. Considering that IoT is at the peak of Gartner’s hype cycle, it is a perfect breeding ground for innovation.
Design Thinking
Design Thinking is a human-centered approach to innovation that addresses the needs of people through the right use of technology while meeting business needs. In other words, Design Thinking is an approach to innovation at the harmonious intersection of desirability, feasibility and viability.
Empathy
Design Thinking advocates the philosophy of starting from the human. Intense observation provides insight, and insight in turn helps to identify needs and desires. “If I had asked customers what they wanted, they would have said ‘a faster horse,’” Henry Ford is reported to have said. User interviews and surveys are helpful only for incremental changes, not for game-changing innovation. Acquiring insight into a day in the customer’s life, and translating that empathy into needs and desires, is pivotal to Design Thinking.
Ideation
On many occasions, a powerful voice in a brainstorming session can overwhelm others and cause the group to settle prematurely for a mediocre idea. The way to come up with the best idea is to have lots of them, and that is precisely the approach of Design Thinking. One of its key principles is to diverge to generate as many ideas as possible before converging to filter them based on feasibility and viability.
Rapid Prototyping
The paradox of Design Thinking is failing fast to succeed sooner. A low-fidelity prototype, a tangible manifestation of the idea, provides instant feedback on what works and what does not. Because the prototype is crude and unfinished, its cost is low, while the value of the feedback before production is immense.
Lochbridge’s Design Thinking Framework
Lochbridge has a unique framework for applying Design Thinking within an enterprise to promote innovation and drive strategy. The Lochbridge Design Thinking framework uses a bottom-up approach that is inclusive in nature and taps into subject matter experts, executors, strategists and leaders. The framework is typically executed in an intense workshop setting where every idea is heard during the diverge phase. The storm of sticky notes is then organically scored in a collaborative session of affinity mapping, leading to the cream of the ideas being ready for rapid prototyping. Lochbridge walks hand-in-hand with the customer to execute rapid prototypes and draw out the strategy and roadmap to realize the value of Design Thinking.
The common challenges, such as the ROI of connectivity, identifying compelling use cases in the IoT space, executing development on an untraveled path with cutting-edge technology, and drawing the big picture of enterprise strategy, can best be addressed by Lochbridge.
Contact info@lochbridge.com to set up a Design Thinking workshop to gain insight, inspire, ideate and implement.

08/18/15- Getting Consumers to Hand Over the Keys to Personal Vehicle Data

Cars on the road today have more software than ever, and embedded connectivity will continue to accelerate. By 2020, there could be as many as 200 million connected cars around the world.
Technology has provided the ability to personalize content and deepen one-on-one relationships between drivers and automotive brands. Given the advancements in bandwidth, connected cars can share real-time information with automotive providers to enhance vehicle performance, safety, service and entertainment.
But there’s a hitch. As consumers move beyond early adoption of connected cars, the majority remains hesitant to openly share personal data with automakers and in-vehicle applications.
A recent Lochbridge consumer survey shows that consumers currently trust phone providers, insurance companies, social networks and retailers more than their automotive providers when it comes to sharing personal data, such as location, preferences and driving behavior.
How can that be when drivers have trusted vehicles with their lives for more than a century?
Transparency is the key. The Lochbridge survey found that trust barriers begin to fall away when automotive providers clearly explain where personal data is being used and for what purpose. If consumers think the reason is beneficial to them, they become more than willing to exchange information through connected cars.
Automakers must clearly communicate the benefits of sharing data. Consumers already know what to expect when providing their data through smart phones and computers. There’s already a culture surrounding the use of electronic notifications and opt-ins for valued services.
Without an explanation of how the data would be used, approximately 35 percent of survey respondents said they would share personal data with OEMs. That result doubled to approximately 70 percent once it was explained that the data would be used, for example, to provide better dealership service.
Once the benefit is clear, respondents indicated that they are open to exchanging their data in many instances, such as improving future vehicle quality, personalizing their vehicles, and receiving discounts on insurance plans and special offers from retailers. However, drivers need to maintain control of their data, with assurances that the exchange of data will only happen when and where they choose.
This data exchange from the vehicle involves two types of data: vehicle diagnostic data, providing visibility into how the car and its components are performing, and personal driver data, showing how and where a vehicle is used. Both can help the industry with product development and service. Vehicle diagnostic data could go as far as helping OEMs detect issues earlier and possibly avoid recalls.
The opportunity has arrived for automotive OEMs and dealers to shift their conversations to the dashboard, in a way consumers have become familiar with on their other mobile devices. Instead of getting notices or coupons by postal mail or email, consumers can receive them through in-vehicle applications, if they choose to opt in.
Consumers have become accustomed to trading personal data for valued services; those lessons come from the mobile technology leaders. OEMs can now move away from assumptions about what drivers would find acceptable for data sharing. They just have to ask directly.
Lochbridge, in collaboration with automotive and technology innovators, is helping to bridge the gap, turning vehicle and driver data into new insights for brands and new experiences for their customers. The company has helped OEMs deliver a 360-degree view of the driver and the vehicle, allowing them to deliver personal vehicle experiences while providing the visibility to proactively manage vehicle performance and quality.

08/18/15- Does Your Enterprise Need NoSQL?

It is interesting to rewind 15 years, to when I was getting ready for a job interview. I was advised to refresh the concepts behind normalization, referential integrity, constraints, etc. It would have been hard to imagine someone working on a database without a solid understanding and practice of those concepts. Fast forward to today: RDBMS is being challenged by the emergence of NoSQL, which differs fundamentally from RDBMS in every possible way, making one unlearn what was learnt over years.
NoSQL
NoSQL stands for “Not Only SQL,” representing the next generation of databases that support emerging needs. Relational databases introduced concepts such as strongly typed columns, tight relationships between entities, and constraints, which made sense when moving away from flat-file persistent stores. The digital revolution has penetrated our lives so much that more than 90% of the data ever generated has been created in the past few years. Storage costs have dropped by a factor of 300,000 in the past two decades. According to IBM, 2.5 billion gigabytes of data have been generated every day since 2012. To make matters more interesting, over 75% of the data generated is unstructured, such as images, text, voice and video. This new context challenges the conventional way of persisting and accessing the ever-growing data.
Challenges with RDBMS
The three dimensions of Big Data are Volume, Velocity and Variety. Querying massive volumes of data to serve online channels such as web or mobile requires scaling the database to run a heavy workload. In the IoT arena, millions of devices pushing data to the cloud bring a high velocity of data to be ingested and persisted, which again requires the database to scale for parallelism, sometimes on the order of a million transactions per second. Thirdly, RDBMS was not designed with unstructured data such as images, videos and voice in mind, though there is limited support for such data types. RDBMS scales very well for enterprise applications. However, scale-up architecture is fundamental to the RDBMS world, and there is an inherent limit to that approach: there is only a finite amount of memory and CPU one can add before having to think outside the box. Running a farm of tens or hundreds of application server nodes while still expecting to scale up a single database node is not practical. Further, with emerging standard data structures such as JSON and with unstructured data, a database with native support is the need of the hour.
NoSQL
NoSQL is a category of databases that scale out across a large cluster, are mostly open source, and are often schema-less. Being able to scale out across a large cluster offers the capability to process massive amounts of data, thanks to distributed computing. A schema-less, or less restrictive, schema allows support for unstructured data and an extensible data structure for ever-evolving business needs. NoSQL often achieves the distribution of data through techniques such as sharding and replication. At a broad level, NoSQL databases fall into four categories:

Key-Value databases

Document databases

Column-family databases

Graph databases

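Most of these categories rely on the sharding technique mentioned above to scale out. A minimal sketch follows; the node names and hash choice are illustrative, not any particular product's scheme:

```python
import hashlib

# Hypothetical 3-node cluster. Real systems typically use consistent
# hashing so that adding a node reshuffles only a fraction of the keys.
NODES = ["node-a", "node-b", "node-c"]

def shard_for(key: str) -> str:
    """Route a key to a node by hashing: the essence of sharding."""
    digest = int(hashlib.md5(key.encode()).hexdigest(), 16)
    return NODES[digest % len(NODES)]

# Writes for different users spread across the cluster...
placement = {user: shard_for(user) for user in ["alice", "bob", "carol"]}
# ...and every read for the same key deterministically routes to the
# same node, so no central lookup table is needed.
print(shard_for("alice") == placement["alice"])  # True
```

Replication then keeps copies of each shard on additional nodes for fault tolerance; the trade-offs between the two are what distinguish many of the products below.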
Key-Value databases
As the name indicates, Key-Value databases store a value against a key, and the value can be a free-form data structure interpreted by the client. Clients typically fetch the value by its key. Due to this simplicity, the model scales really well. Some examples of Key-Value databases are Redis, Riak, Memcached, Berkeley DB and Couchbase.
Document databases
Document databases store documents such as XML, JSON and BSON in a key-value store. The documents are self-describing, and the data across rows may be similar or even different. Document databases perform very well for content management systems and blogging platforms. Some popular document databases are MongoDB, CouchDB and OrientDB.
Column Family databases
Column family databases store data in rows consisting of a key and a collection of columns. Related groups of columns form column families, which in the RDBMS world would typically have been broken into multiple tables. Column family databases scale very well for massive amounts of data. However, since the design is not generalized, it is most effective when the common retrieval queries are known up front, while designing the column families. Another flexibility of column family databases is that the columns can vary across rows, and columns can be added to any row dynamically without having to add them to other rows. Column family databases are well suited to IoT use cases that involve ingestion of high-velocity data and high-speed retrieval for online channels. Some popular column family databases are Cassandra, HBase and Amazon DynamoDB.
Graph databases
Graph databases store entities (known as nodes) and the relationships (known as edges) between them. Technically, there is no limit to the number of relationships between entities. Supporting multiple relationships and dynamic graphs in the RDBMS world would involve many schema changes, and even data migration, every time a new relationship is built. Social media is a classic domain where graph databases excel. Some popular graph databases include Neo4j and Infinite Graph.
Conclusion
The choice of database really depends on the nature of the data and the processing and retrieval needs. The emergence of NoSQL is by no means a death knell for RDBMS: relational databases are here for the long run and will stay relevant for many years to come. NoSQL excels in certain areas and complements RDBMS in enterprise data management. The industry is clearly moving towards polyglot persistence, so a heterogeneous combination of database technologies within an enterprise to handle massive amounts of data is only natural.

08/14/15- Internet of Things: Preparation is Pivotal to be Predictive

In terms of numbers, the Internet of Things (IoT) is gaining momentum every day. Things connected to the Internet have already surpassed the number of people connected to it. Gartner estimates that the IoT will consist of 30 billion connected objects by 2020. In monetary terms, the potential is huge: it is estimated to bring in over $2.3 trillion by 2025.
Experts envision over 90% of the things for everyday living inside our homes will be connected in the future, too. We are already living in a world attached to several smart things, such as mobile phones, smart watches, smart glasses, healthcare wearable devices, WiFi-enabled entertainment systems, ever-connected home security systems, sensor-based irrigation systems, smart meters, smart cutting boards, and even connected cars. In the future, connectivity will penetrate deeper into other objects that we interact with every day, such as can openers, pop cans, smart utensils, and smart pantries.
The overarching goal is to enhance the everyday experience through seamless connectivity that blends the physical and digital worlds with natural, smart interactions. The key challenge for manufacturers will be keeping the cost of that connectivity low.
A current debate in IoT is pushing computing intelligence to the edge versus managing it in the cloud. While “edge computing” offers benefits, such as cutting bandwidth by filtering unwanted data before it is sent over 3G, it poses a few challenges, too.
First, the cost of updating the computing software/firmware at the edge will be a factor, especially if the scale of the “things” is high. Second, there is far more flexibility in evolving the computing intelligence if it is managed in the cloud. Lastly, not all potential use cases for the data read from sensors and smart things are known at the point of development.
Hence, most of the adopters have chosen to bring in as much data as possible from the smart things to the cloud and explore use cases as they evolve.
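For the cases where edge filtering does pay off, the bandwidth saving can be illustrated with a simple deadband filter; the temperature readings and threshold below are hypothetical:

```python
def deadband_filter(readings, threshold=0.5):
    """Edge-side filter: forward a sensor reading only when it moves
    more than `threshold` away from the last transmitted value."""
    transmitted = []
    last = None
    for r in readings:
        if last is None or abs(r - last) > threshold:
            transmitted.append(r)
            last = r
    return transmitted

readings = [20.0, 20.1, 20.2, 23.5, 23.6, 19.0]
print(deadband_filter(readings))  # [20.0, 23.5, 19.0]
```

Here only three of six readings leave the device, halving the upstream traffic; the threshold is the knob that trades bandwidth against fidelity, and is exactly the kind of logic that is costly to update once deployed to millions of devices.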
Enter Big Data
Millions of smart things across the world are pushing the scale towards Big Data. To make matters more interesting, the velocity of the incoming data makes real-time processing a challenge. A big use case for bringing connectivity to various verticals (healthcare, manufacturing, automotive) is continuously improving product quality and knowing the usage and vital parameters read from products after they leave the factory.
In many cases, the direct ROI of adding connectivity comes from applying the power of analytics to the pile of data acquired. Business Intelligence has been around for a long time, and it is often confused with business and data analytics.
Here’s a good view of the difference, according to Pat Roche, Vice President of Noetix Products: “Business Intelligence is needed to run the business while Business Analytics are needed to change the business.” The power of data analytics lies in real-time analysis and being able to predict outcomes, as opposed to monitoring KPIs and reporting outcomes after the fact. Forecasting and predictive modeling are pivotal to business analytics.
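To make that distinction concrete, here is a minimal sketch with hypothetical monthly sales: reporting states the numbers as they are, while predictive modeling fits a trend and projects the next period.

```python
# Hypothetical monthly unit sales: BI would report these as-is;
# analytics fits a model to predict the next period.
sales = [120, 132, 141, 155, 163, 174]

n = len(sales)
xs = range(n)
x_mean = sum(xs) / n
y_mean = sum(sales) / n

# Ordinary least-squares slope and intercept for a linear trend.
slope = (sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, sales))
         / sum((x - x_mean) ** 2 for x in xs))
intercept = y_mean - slope * x_mean

forecast = intercept + slope * n  # projected sales for the next month
print(round(forecast, 1))  # roughly 185 units, continuing the trend
```

Real forecasting accounts for seasonality, uncertainty intervals and model validation, but the shift in posture is the same: from describing the last row of data to estimating the next one.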
One of the key steps towards embracing Big Data in the enterprise is laying down a data storage and analytics strategy. NoSQL, real-time analytics and batch analytics are the cornerstones of Big Data. Big Data has become a crowded space in the last two to three years, but most of the players have converged on embracing open Hadoop distributions. Understandably, most organizations try to avoid vendor lock-in, and the choice has become easier with the wide adoption of Hadoop.
The key foundation for an enterprise embracing IoT is having a business model, and data analytics plays a huge role in it. Hence, it is not a chicken-and-egg situation any more: if you want to be in the IoT league, preparing for the journey of transformation towards Big Data must begin today.