How IBM is Transforming Data Science

I am at the IBM InterConnect 2016 event in Las Vegas. While this event, IBM’s largest (estimated 24,000 attendees!), is billed as a cloud and mobile conference, sessions focused on a variety of related solutions around analytics, security, DevOps (which I learned is a methodology) and Watson Internet of Things. I was invited by IBM as a guest to share some insights from the perspective of a data scientist. Below are a few things that struck me as interesting.

Simplifying Programming with Swift

Data science is about extracting insights from data. One way that data scientists help people get value from data is by creating mobile applications that aggregate, summarize and present data in meaningful and useful ways to the end user. Toward that end, IBM is helping simplify the development process through its ongoing partnership with Apple in an initiative to expand the use of the Swift programming language. Developers can play in the IBM Swift Sandbox, an interactive website that lets them write Swift code and immediately see how that code looks on a mobile device. It’s no wonder that Swift is the number 1 open source project on GitHub. I’m not a programmer, but, after seeing the demo, I am going to take a stab at programming.
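To give a flavor of why the Sandbox is so approachable, here is a minimal snippet of the kind you might paste into it (my own toy example, not from the demo): it summarizes a small, made-up data set in a few lines of Swift.

```swift
// Toy example: compute the average of a small, hypothetical data set.
let responseTimes = [1.2, 0.8, 2.5, 1.1, 0.9]  // seconds (made-up values)

// Sum the values with reduce, then divide by the count.
let average = responseTimes.reduce(0.0) { $0 + $1 } / Double(responseTimes.count)

print("Average response time: \(average) seconds")
```

Paste it in, press run, and the result appears immediately; that tight write-run-see loop is the appeal.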

Improving Insights with the Cloud

Another major theme of the conference is how IBM is helping businesses move their infrastructure to the cloud. As part of its #OpenForData initiative, IBM Cloud Data Services is leveraging the power of the cloud to help companies get more value from their data. By delivering fully managed data and analytics services on IBM Bluemix, IBM Cloud Data Services helps different types of data professionals develop new products and get better insights. As a research data scientist, I’m especially interested in IBM’s Analytics Exchange, which includes 157 curated data sets on such topics as the environment, health and science & technology. The platform allows programmers to access any of the data sets via an API to help them develop products. It also helps research data scientists by letting them launch Watson Analytics directly from the interface to seamlessly interrogate any of the data sets.

Improving Analytics with Open Source

Many of the general session talks centered on how companies are relying on open source technology to get insight from their data. I see open source as a means by which data scientists are accelerating the time to insight. The nature of open source projects brings diverse data professionals together with a common goal of improving how the technology performs, allowing for faster iterations in removing bugs while adding innovative functionality. Because of its ability to support real-time analytics, Spark is one very promising open source initiative. In the age of the Internet of Things, businesses need to be able to make sense of vast amounts of data quickly; they can no longer rely solely on batch processing of their data.

Machine Learning, Cognitive Computing and Watson

Machine learning and everything related to it will be the engine that fuels the insight economy of the future. Machine learning, deep learning and cognitive computing all work to extract insight from data through mathematical and statistical modeling. In his keynote address, Robert LeBlanc, Senior VP of IBM Cloud, said, “By 2020, 95% of the top 100 enterprise software companies will have incorporated one or more cognitive technologies.” One interesting application of cognitive computing is in the area of IT Service Management. By moving IT infrastructure to the cloud, companies can use Watson Analytics to monitor systems 24/7/365. Watson will be able to help IT departments learn about their operations, apply predictive analytics to diagnose potential problems and recommend corrective actions before systems go down.

The Biggest Data of All

The final highlight of IBM InterConnect 2016 for me was meeting Brent Spiner, the actor who played Data on Star Trek: The Next Generation. I started writing about Big Data back in 2011. While interest in the term, Big Data, is plateauing, the importance of what that term stands for will be around for a long time; we’re just using more specific terms to describe this phenomenon (e.g., Internet of Things, Machine Learning, Cognitive Computing). But no matter what form it takes, just make sure you pronounce “Data” correctly.