HERE shares its vision on location intelligence for smart cities

To gain an accurate vision of how HERE Technologies will play its part in the future of smart cities, we must consider the needs of the people who live in those cities. Naturally, those needs alone won’t give us the answer. We must also account for the technologies that will drive that city, and for how we can intelligently elevate those technologies to serve the population.

Last week I shared with you how we include the human factor in the landscape of the future Smart City. Today, I would like to focus on how we’re developing the technology that serves that future.

Though it’s a great activity for the weekend, we don’t need to go to the movies to understand how technology is going to evolve. The tech is here, right now. Many people will be reading this article on a smartphone, a device that seems to evolve nearly every year. Maybe a select few are reading from the browser inside their car. The way you found the link to this page was likely driven by an algorithm. This article was edited and published on several devices, all connected by a cloud network.

We took these examples, and quite a few more, and compiled them to form a macro look at where technology is and where it is going. Location-aware wearables and AI-enabled digital assistants are here, and adoption is growing rapidly. Machine learning and cloud computing are table stakes for technology companies. Augmented Reality is now a standard feature in hundreds of millions of smartphones – and in the not-too-distant future it will make its way into our vehicles. Autonomous robots are here… autonomous drones are just around the corner… and autonomous cars are not far behind.

Everything mentioned above either requires location data or is vastly improved by it.

So, we took on this challenge. We have begun building a location-aware platform-as-a-service, one that can be integrated into a large array of systems and serve multiple purposes in a scalable, responsible way. What are the components of that platform?

Start with the data: the Reality Index, an amazingly diverse collection of data about people, places, things, and more – all indexed in near real-time. Collecting and managing that data securely is the first step in bringing together the people who are generating information and the people who want to consume it.

Now, all we have to do is index everything in the world. It’s an ambitious and daunting task for certain, but recall, we have technology on our side. I’ll give you some examples.

Currently, we receive an anonymized dump of places data from one of our data partners: a weekly download of more than 100 million points of interest around the world. This isn’t something anyone could sift through by hand. Instead, we have developed, fine-tuned, and continue to maintain machine learning algorithms to sort through that data. Our algorithms can distinguish real places from false ones with a very high degree of accuracy. We then use the results to update our maps and location services to match reality.
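To make the idea concrete, here is a minimal sketch of how such a filter might score candidate points of interest. The feature names, weights, and threshold are illustrative assumptions for this article, not HERE’s actual pipeline – in production, a trained model would replace the hand-set weights.

```python
# Hypothetical sketch: score raw POI records from a partner feed to
# separate likely-real places from noise. All features and weights are
# illustrative assumptions, not HERE's actual classifier.

def poi_features(poi):
    """Extract simple plausibility features from a raw POI record."""
    lat, lon = poi.get("lat"), poi.get("lon")
    return {
        "has_name": bool(poi.get("name", "").strip()),
        "valid_coords": (lat is not None and lon is not None
                         and -90 <= lat <= 90 and -180 <= lon <= 180),
        "not_null_island": not (lat == 0 and lon == 0),  # common junk value
        "has_category": bool(poi.get("category")),
    }

def is_likely_real(poi, threshold=0.75):
    """Weighted vote over features; a trained model would learn these weights."""
    weights = {"has_name": 0.3, "valid_coords": 0.3,
               "not_null_island": 0.2, "has_category": 0.2}
    feats = poi_features(poi)
    score = sum(w for name, w in weights.items() if feats[name])
    return score >= threshold

real = {"name": "Cafe Kranzler", "lat": 52.504, "lon": 13.330, "category": "cafe"}
junk = {"name": "", "lat": 0.0, "lon": 0.0, "category": None}
print(is_likely_real(real), is_likely_real(junk))  # True False
```

At 100 million records a week, even a filter this simple removes obvious junk cheaply before heavier models run.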

Take that same approach and apply it to sensors on cars – both the ones we’re driving now and the autonomous vehicles being developed for the future. It’s one thing to build an index of all the lanes, stop signs, speed limits, and so on – we’re already doing that. But we can also make deductions from how cars behave on the road. For instance, when it’s 36 degrees Fahrenheit outside and the traction control system engages, machine learning algorithms can deduce that there’s a high likelihood of black ice on the road. And that’s just scratching the surface.
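The black-ice deduction above can be sketched as a simple rule over probe-vehicle readings: near-freezing temperature plus a spike in traction-control events on the same road segment. The field names, the 37 °F cutoff, and the event-fraction threshold are all assumptions for illustration; a real system would learn these from labeled incidents.

```python
# Hypothetical sketch of the black-ice deduction: combine ambient
# temperature with traction-control events reported by vehicles passing
# over one road segment. Thresholds are illustrative assumptions.

def black_ice_risk(readings, temp_threshold_f=37.0, event_fraction=0.2):
    """Flag a segment when it's near freezing and enough vehicles
    report traction-control engagement."""
    if not readings:
        return False
    cold = [r for r in readings if r["temp_f"] <= temp_threshold_f]
    if not cold:
        return False  # too warm for black ice
    engaged = sum(1 for r in cold if r["traction_control_engaged"])
    return engaged / len(readings) >= event_fraction

segment = [
    {"temp_f": 36.0, "traction_control_engaged": True},
    {"temp_f": 36.5, "traction_control_engaged": True},
    {"temp_f": 35.8, "traction_control_engaged": False},
    {"temp_f": 36.2, "traction_control_engaged": False},
]
print(black_ice_risk(segment))  # True
```

The interesting part is that no single car “knows” about the ice; the signal only emerges when readings from many vehicles are indexed against the same stretch of road.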

This whole-world view must also be collected in three dimensions. A view of a city at ground level is one thing, but what does it look like at 300 or 500 feet? Who is mapping the “air layers” of dense urban environments that automated drones and airborne delivery technology will require?
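One way to picture an “air layer” map is a spatial index that buckets obstacles by grid cell and altitude band, so a drone route planner can ask what occupies the airspace at, say, 300 feet. The cell size, band height, and class below are assumptions sketched for this article, not a description of HERE’s platform.

```python
# Illustrative sketch of an "air layer" index: bucket obstacles by coarse
# grid cell and 100-foot altitude band. All parameters are assumptions.
from collections import defaultdict

CELL_DEG = 0.01   # ~1 km grid cell at mid latitudes
BAND_FT = 100     # altitude band height in feet

def air_key(lat, lon, altitude_ft):
    """Map a 3D position to its (cell, cell, band) bucket."""
    return (round(lat / CELL_DEG), round(lon / CELL_DEG), altitude_ft // BAND_FT)

class AirLayerIndex:
    def __init__(self):
        self._cells = defaultdict(set)

    def add(self, name, lat, lon, alt_min_ft, alt_max_ft):
        """Register an obstacle across every altitude band it spans."""
        for band in range(int(alt_min_ft // BAND_FT), int(alt_max_ft // BAND_FT) + 1):
            key = (round(lat / CELL_DEG), round(lon / CELL_DEG), band)
            self._cells[key].add(name)

    def query(self, lat, lon, altitude_ft):
        """Return the obstacles occupying this cell at this altitude."""
        return self._cells.get(air_key(lat, lon, altitude_ft), set())

index = AirLayerIndex()
index.add("crane-17", 52.5200, 13.4050, 0, 350)
print(index.query(52.5200, 13.4050, 300))  # {'crane-17'}
print(index.query(52.5200, 13.4050, 500))  # set()
```

A crane that reaches 350 feet blocks the 300-foot query but not the 500-foot one – exactly the kind of altitude-aware answer a ground-level map cannot give.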

This is obviously a tremendous amount of data, and every day we’re collecting more kinds of it than we have room to discuss here. We can use traffic models to improve commute times. We can use car and pedestrian movement to optimize city planning and mass transit. We can apply tracking technology to locate shipments and reduce – maybe even eliminate – the financial impact of lost items.

We will collect all this data into the Reality Index, then layer on AI and machine learning capabilities, and finally expose it through an open platform for developers. This extensive index of real-world data is what will enable developers, city planners, and entrepreneurs to build a smarter future for our cities, and for the people who live in them.