Posted on: June 2017 - AppFerret

As digital technology infuses everyday life, it will change human behavior—raising new challenges about equality and fairness.

In a single generation, this has become the new normal: Nearly all adult Americans use the internet, with three-fourths of them having broadband access in their homes. And the internet travels with them in their pockets—95 percent have a cellphone, 81 percent have a smartphone. This ability to constantly connect has changed how people interact, especially in their social networks—more than two-thirds of adults are on Facebook or Twitter or another social media platform.

Digital innovations have made it easier for people to find more information than ever before, and made it easier to create and share material with others. From smartphone-delivered directions to voice-driven queries to on-demand news, people’s lives have been transformed by these technologies. Yet today’s inventions and innovations mark only the start, and tomorrow’s digital disruption, which is already underway, will probably dwarf them in impact.

The next digital evolution is the rise of the internet of things—sometimes now called the “internet of everything.” This refers to the growing phenomenon of building connectivity into vehicles, wearable devices, appliances and other household items such as thermostats, as well as goods moving through business supply chains. It also covers the rapid spread of data-emitting or tracking sensors in the physical environment that give readouts on everything from crop conditions to pollution levels to where there are open parking spaces to babies’ breathing rates in their cribs.

The Pew Research Center and Elon University in North Carolina invited hundreds of technology experts in 2014 to predict the future of the internet by the year 2025, and the overriding theme of their answers addressed this reality. They predicted that the growth of the internet of things will soon make the internet like electricity—less visible, yet more deeply embedded in people’s lives, for good and for ill.

The internet of things will have literally life-changing impact on innovation and the application of knowledge in the coming years. Here are four major developments to anticipate.

The emergence of the ‘datacosm’

The spread of the internet of things will accelerate the digitization of data, spawning creation of record amounts of information. Data and connectivity will be ubiquitous in an environment sometimes called the “datacosm”— a term used to describe the importance of data, analytics, and algorithms in technology’s evolution. As previous information revolutions have taught us, once people—and things—get more connected, their very nature changes.

“When we are connected, power shifts. It changes who we are, what we might expect, how we might be manipulated, attacked, or enriched,” writes Joshua Cooper Ramo in his new book, The Seventh Sense. Networks of constant connection “destroy the nature of even the most solid-looking objects.” Connected things and connected people become more useful, more powerful, but also more hair-trigger and more destructive because their power is multiplied by a networking effect. The more connections they have, the more capacity they have for good and harmful purposes.

On the human level, the datacosm arising from the internet of things could function like a “fifth limb,” an extra brain lobe, and another layer of “skin” because it will be enveloping and omnipresent. People will have unparalleled self-awareness via their “lifestreams”: their genome, their current physical condition, their memories, and other trackable aspects of their well-being. Data ubiquity will allow reality to be augmented in helpful—and creepy—ways.

For instance, people will be able to look at others and, thanks to facial recognition and digital profiling, simultaneously browse their digital dossiers through an app that could display the data on “smart” contact lenses or a nearby wall surface. They will gaze at artifacts such as paintings or movies and be able to download material about how the art was created and the life story of the creator. They will take in landscapes and cityscapes and be able to learn quickly what transpired in these places long ago or what kinds of environmental problems threaten them. They will size up buildings and have an overlay of insight about what takes place inside them.

Part of the reason that data will be infused into so much is that the interfaces of connectivity and the ability to summon data will be radically enhanced. Human voices, haptic interfaces that can be manipulated by finger movements (think of the movie “Minority Report”), real-time language translators, data dashboards that give readouts on a user’s personally designed webpage, even, eventually, brain-initiated commands will make it possible for people to bring data into whatever surroundings they find themselves. Not only will this allow people to apply knowledge of all kinds to their immediate circumstances, but it will also advance analysts’ understanding of entire populations as their “data exhaust” is captured by their GPS-enabled devices and web clickstream activity.

Many experts in the Pew Research Center’s canvassings expect major benefits to emerge from this growth and spread of data, starting with the fact that knowledge will be ever-easier to apply to real-time decisions such as which custom-designed medicine a person should receive, or which commuting route to take to work. Beyond that, this data overlay and growing analytic power will allow swifter interventions when public health problems arise, weather emergencies threaten, environmental stressors mount, educational programs are introduced, and products are brought to the market.

This new reality will also cause major hardships. When information is superabundant, what is the best way to find the best knowledge and apply it to decisions? When so much personal data is captured, how can people retain even a sliver of privacy? What mechanisms can be created to overcome polarizing propaganda that can weaken societies? What are the right ways to avoid “fake news,” disinformation, and distracting sideshows in a world of info-glut?

Struggles over people’s “right relationship” to information will be one of the persistent realities of the 21st century.

Growing reliance on algorithms

The explosion of data has given prominence to algorithms as tools for finding meaning in data and using it to shape decisions, predict humans’ behavior, and anticipate their needs. Analysts such as Aneesh Aneesh of the University of Wisconsin, Milwaukee, foresee algorithms taking over public and private activities in a new era of “algocratic governance” that supplants the way current “bureaucratic hierarchies” make government decisions. Others, like Harvard University’s Shoshana Zuboff, describe the emergence of “surveillance capitalism” that gains profits from monetizing data captured through surveillance and organizes economic behavior in an “information civilization.”

The experts’ views compiled by the Pew Research Center and Elon University offer several broad predictions about the algorithmic age. They predicted that algorithms will continue to spread everywhere and agreed that the benefits of computer codes can lead to greater human insights into the world, less waste, and major safety advantages. A share of respondents said data-driven approaches to problem-solving will often improve on human approaches to addressing issues because the computer codes will be refined at much greater speeds. Many predicted that algorithms will be effective tools to make up for human shortcomings.

But respondents also expressed concerns about algorithms.

They worried that humanity and human judgment are lost when data and predictive modeling become paramount. These experts argued that algorithms are primarily created in pursuit of profits and efficiencies and that this can be a threat; that algorithms can manipulate people and outcomes; that a somewhat flawed yet inescapable “logic-driven society” could emerge; that code will supplant humans in decision-making and that, in the process, humans will lose skills and specialized, local intelligence in a world where decisions are based on more homogenized algorithms; and that respect for individuals could diminish.

Just as grave a concern is that biases exist in algorithmically organized systems that could worsen social divisions. Many in the expert sampling said that algorithms reflect the biases of programmers and that the data sets they use are often limited, deficient, or incorrect. This can deepen societal divides. Those who are disadvantaged could be even more so in an algorithm-organized future, especially if algorithms are shaped by corporate data collectors. That could limit people’s exposure to a wider range of ideas and eliminate serendipitous encounters with information.

A new relationship with machines and complementary intelligence

As data and algorithms permeate daily life, people will have to renegotiate the way they use and think about machines, which now are in a state of accelerating learning. Many experts see a new equilibrium emerging as people take advantage of artificial intelligence that can be consulted in an instant, context-aware gadgets that “read” a situation and assemble relevant information, robotic devices that serve their needs, smart assistants or bots (possibly in the form of holograms) that help people navigate the world or help represent them to others, and device-based enhancements to their bodies and brains. “Basically, it is the Metaverse from Snow Crash,” predicts futurist Stowe Boyd, referring to Neal Stephenson’s sci-fi vision of a world where people and their avatars seamlessly interact with other people, their avatars, and independent artificial intelligence agents developed by third parties, including corporations.

The creation and application of all this knowledge has vast implications for basic human activity—starting with cognition. The very act of thinking is already undergoing significant change as people learn how to tap into all this information and cope with processing it. That impact will expand in the future. The quality of “being” will change as people are able to be “with” each other via lifelike telepresence. People’s capacities are likely to expand as digital devices, prostheses, and brain-enhancing chips become available. Human behavior itself could change as an overlay of data gives people enhanced situational and self-awareness. The way people allocate their time and attention will be restructured as options proliferate. For instance, the manner in which they spend their leisure time is likely to be radically recast as people are able to amuse themselves in compelling new virtual worlds and enrich themselves with vivid new learning experiences.

Greater innovation in social norms, collective action, credentials, and laws

With so much upheaval ahead, people, groups, and organizations will be forced to adjust. At the level of social norms, it is easy to envision social environments in which people must constantly negotiate what information can be shared, what kinds of interruptions are tolerable, what balance of fact-checking and gossip is acceptable, and what personal multitasking is harmful. In other words, much of what constitutes civil behavior will be up for grabs.

At a more formal level, some primary aspects of collective action and power are already altered as social networks become a societal force, both as pathways of knowledge sharing and as mechanisms for mobilizing others to do something. There are new ways for people to collaborate and solve problems. Moreover, there are a growing number of group structures that address problems ranging from microniche matters (my neighbors and I respond to a local issue) to macroglobal wicked problems (multinational alliances tackle climate change and pandemics).

Shifts in labor markets in the knowledge economy, which are constantly pressing workers to acquire new skills, will probably refashion some of the features of higher education and prompt change in work-related training efforts. Fully 87 percent of current U.S. workers believe it will be important or essential for them to pursue new skills during their work lives. Not many believe the existing certification and licensing systems are up to that job. A notable number of experts in another Pew Research Center-Elon University canvassing are convinced that the training system will begin breaking into several parts: one that specializes in basic work preparation education to coach students in lifelong learning strategies; another that upgrades the capacity of workers inside their existing fields; and yet another that is more designed to handle the elaborate work of schooling those whose skills are obsolete.

At the most structured level, new laws and court battles are inevitable. They are likely to address questions such as: Who owns what information and can use it and profit from it? When something goes wrong with an information-processing system (say, a self-driving car propels itself off a bridge), who is responsible? Where is the right place to draw the line between data capture—that is, surveillance—and privacy? Can a certain level of privacy be maintained as an equal right for all, or is it no longer possible? What kinds of personal information are legitimate to consider in assessing someone’s employment, creditworthiness, or insurance status? Where should libel laws apply in an age when everyone can be a “publisher” or “broadcaster” via social media and when people’s reputations can rise and fall depending on the tone of a tweet? Can information transparency regimes be applied to those who amass data and create profiles from it? Who’s overseeing the algorithms that will be making so many decisions about what happens in society? (Several experts in the Pew Research Center canvassing called for new governmental regulations relating to the development and deployment of algorithms.) Which entities should define what is appropriate out-of-bounds speech for a community, a culture, a nation?

The information revolution in the digital age is orders of magnitude faster than those of previous ages. Much greater movement is occurring in technology innovation than in social innovation—and this potentially dangerous gap seems to be expanding. As we grapple with this, it would be useful to keep in mind the Enlightenment sensibility of Thomas Jefferson. He wrote in 1816: “Laws and institutions must go hand in hand with the progress of the human mind. As that becomes more developed, more enlightened, as new discoveries are made, new truths disclosed, and manners and opinions change with the change of circumstances, institutions must advance also, and keep pace with the times.”

We are likely to have to depend on our machines to help us figure out how to avoid being crushed by this avalanche.

Some tasks are common to almost all users, though, regardless of subject area: data import, data wrangling, and data visualization. The table below shows my favorite go-to packages for each of these three tasks (plus a few miscellaneous ones tossed in). The package names in the table are clickable if you want more information. To find out more about a package once you’ve installed it, type help(package = "packagename") in your R console (substituting the actual package name, of course).

Amazon Alexa can already turn off your lights and close your garage. Now it can also make your house smell like a Hawaiian vacation.

Prolitec, which calls itself a “scenting services company,” announced Monday that its Aera fragrance systems can now be voice controlled through the Amazon Echo smart speaker and other Alexa-compatible devices.

The Aera systems offer eight different fragrances, which range from pink grapefruit to basmati rice. The fragrance capsules can operate 24 hours a day and run for a full 60 days.

You can tell Alexa to turn Aera on, have Alexa raise or lower scent levels, or ask what the current scent levels are. If you already own an Aera, you can get it to work with Alexa by enabling the Aera skill in your Amazon Alexa app.

Aera is only the latest attempt to offer smart scents. In the 1950s, Hans Laube invented a “Smell-O-Vision” system for automated odor releases during movies. (Because who wouldn’t want to smell King Kong as he swings through New York?) And for the last 20 years, various companies have experimented with digitized scents that could be embedded in email or web pages.

As Python has gained a lot of traction in the data science industry in recent years, I wanted to outline some of its most useful libraries for data scientists and engineers, based on recent experience.

And, since all of these libraries are open source, we have included commit counts, contributor counts, and other metrics from GitHub, which can serve as proxy metrics for a library’s popularity.

Core Libraries

1. NumPy (Commits: 15980, Contributors: 522)

Anyone starting on scientific tasks in Python inevitably turns for help to Python’s SciPy Stack, a collection of software specifically designed for scientific computing in Python (not to be confused with the SciPy library, which is one part of the stack, or with the community around it). So that is where we start. The stack is fairly vast, with more than a dozen libraries in it, so we will focus on the core packages, particularly the most essential ones.

The most fundamental package, around which the scientific computation stack is built, is NumPy (short for Numerical Python). It provides an abundance of useful features for operations on n-dimensional arrays and matrices in Python. The library vectorizes mathematical operations on the NumPy array type, which improves performance and speeds up execution accordingly.
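The vectorization claim above can be sketched with a small, illustrative comparison of my own (the function and data are not from the article): the same element-wise computation written as a plain Python loop and as a single vectorized NumPy expression.

```python
import numpy as np

x = np.arange(1_000_000, dtype=np.float64)

# Plain Python loop: one interpreter-level multiply per element.
loop_result = [v * 2.0 for v in x]

# Vectorized: a single expression dispatches the whole operation
# to optimized compiled code inside NumPy.
vec_result = x * 2.0

# Both forms produce the same values.
assert np.allclose(loop_result, vec_result)
```

On typical hardware the vectorized form runs orders of magnitude faster, because the per-element loop happens in compiled code rather than in the Python interpreter.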

2. SciPy (Commits: 17213, Contributors: 489)

SciPy is a library of software for engineering and science. Again, keep in mind the difference between the SciPy Stack and the SciPy library. SciPy contains modules for linear algebra, optimization, integration, and statistics. Its main functionality is built on top of NumPy, so its routines make substantial use of NumPy arrays. It provides efficient numerical routines, such as numerical integration and optimization, via its specific submodules. The functions in all submodules of SciPy are well documented, another point in its favor.
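As a minimal illustration of two of the submodules mentioned above (the specific functions and test problems are my choice, not the author’s), here is SciPy doing numerical integration and scalar optimization on top of NumPy:

```python
import numpy as np
from scipy import integrate, optimize

# Numerically integrate sin(x) from 0 to pi; the exact answer is 2.
area, abs_err = integrate.quad(np.sin, 0.0, np.pi)

# Minimize (x - 3)^2; the minimum is at x = 3.
res = optimize.minimize_scalar(lambda x: (x - 3.0) ** 2)
```

`integrate.quad` returns the integral estimate together with an error bound, and `minimize_scalar` returns a result object whose `x` attribute holds the minimizer.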

Why it may not (technically) kill jobs: Goldman swears that this technology won’t reduce headcount, but will instead free bankers to focus on tasks like shaping marketing strategy and spending time with clients. But it also says that it has eliminated thousands of hours of human work, which will reduce the need to increase its headcount going forward.

McKinsey outline the range of opportunities for applying artificial intelligence in their article. They say:

‘For companies, successful adoption of these evolving technologies will significantly enhance performance. Some of the gains will come from labor substitution, but automation also has the potential to enhance productivity, raise throughput, improve predictions, outcomes, accuracy, and optimization, as well expand the discovery of new solutions in massively complex areas such as synthetic biology and material science‘.

At Smart Insights, we’ve been looking beyond the hype at specific practical applications of AI in marketing. Our recommendation is that the best marketing applications are in machine learning, where predictive analytics learns from historic data to deliver more relevant personalization, both on site, through email automation, and offsite in programmatic advertising. This high potential is also clear from the chart from McKinsey (see top of page).

You can see that ‘personalize advertising’ is rated highly, and this relates to the different forms of personalised messaging I mentioned above. ‘Optimize merchandising strategy’ is a related retail application.

Advances in artificial intelligence (AI) will have massive social consequences. Self-driving technology might replace millions of driving jobs over the coming decade. In addition to possible unemployment, the transition will bring new challenges, such as rebuilding infrastructure, protecting vehicle cyber-security, and adapting laws and regulations [5]. New challenges, both for AI developers and policy-makers, will also arise from applications in law enforcement, military technology, and marketing [6]. To prepare for these challenges, accurate forecasting of transformative AI would be invaluable.

Several sources provide objective evidence about future AI advances: trends in computing hardware [7], task performance [8], and the automation of labor [9]. The predictions of AI experts provide crucial additional information. We survey a larger and more representative sample of AI experts than any study to date [10, 11]. Our questions cover the timing of AI advances (including both practical applications of AI and the automation of various human jobs), as well as the social and ethical impacts of AI.

Time Until Machines Outperform Humans

AI would have profound social consequences if all tasks were more cost-effectively accomplished by machines. Our survey used the following definition: “High-level machine intelligence” (HLMI) is achieved when unaided machines can accomplish every task better and more cheaply than human workers (arXiv:1705.08807v2 [cs.AI], 30 May 2017). Each individual respondent estimated the probability of HLMI arriving in future years. Taking the mean over each individual, the aggregate forecast gave a 50% chance of HLMI occurring within 45 years and a 10% chance of it occurring within 9 years. Figure 1 displays the probabilistic predictions for a random subset of individuals, as well as the mean predictions. There is large inter-subject variation: Figure 3 shows that Asian respondents expect HLMI in 30 years, whereas North Americans expect it in 74 years.
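The aggregation step described above (taking the mean over each individual’s forecast) can be sketched as follows. This is a toy reconstruction, not the survey’s data or code: the respondent parameters are invented, each respondent is modeled as a logistic CDF over arrival years, and the aggregate forecast is the pointwise mean of those CDFs.

```python
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(0, 201)  # years from now

# Invented respondents: median HLMI forecasts spread between 10 and
# 120 years, each modeled as a logistic CDF with a fixed spread.
medians = rng.uniform(10, 120, size=352)
scale = 15.0
cdfs = 1.0 / (1.0 + np.exp(-(years[None, :] - medians[:, None]) / scale))

# Aggregate forecast: mean over individuals at each horizon.
aggregate = cdfs.mean(axis=0)

# The year at which the aggregate forecast first reaches 50%.
median_year = years[np.searchsorted(aggregate, 0.5)]
```

With real survey data, each respondent’s CDF would be fitted to their elicited probabilities rather than invented; the 50% crossing of the aggregate curve then yields the “50% chance within N years” headline figure.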