Machine learning is the newest thing at BYU, thanks to the work of engineer Dah-Jye Lee, who has created an algorithm that allows computers to learn without human help. According to Lee, his algorithm differs from others in that it doesn’t specify for the computer what it should or shouldn’t look for. Instead, his program simply feeds images to the computer, letting it decide on its own what is what.


Similar to how children intuitively learn the differences between objects in the world around them, Lee uses object recognition to show the computer various images but doesn’t differentiate between them. Instead, the computer is tasked with doing this on its own. According to Lee:

“It’s very comparable to other object recognition algorithms for accuracy, but we don’t need humans to be involved. You don’t have to reinvent the wheel each time. You just run it.”
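Lee’s actual algorithm isn’t published in the article, but the core idea of grouping unlabeled examples can be sketched with a generic clustering technique. The following is a minimal k-means sketch (a stand-in illustration, not Lee’s method): given feature vectors with no labels, it decides on its own which examples belong together.

```python
import numpy as np

def kmeans(points, k, iters=20, seed=0):
    """Minimal k-means: group unlabeled feature vectors into k clusters."""
    rng = np.random.default_rng(seed)
    # Start from k randomly chosen points as initial cluster centers.
    centers = points[rng.choice(len(points), size=k, replace=False)]
    for _ in range(iters):
        # Assign each point to its nearest center.
        dists = np.linalg.norm(points[:, None] - centers[None, :], axis=2)
        labels = dists.argmin(axis=1)
        # Move each center to the mean of the points assigned to it.
        for j in range(k):
            if np.any(labels == j):
                centers[j] = points[labels == j].mean(axis=0)
    return labels

# Two obvious groups of 2-D "image features"; no labels are provided.
data = np.array([[0.0, 0.1], [0.2, 0.0], [0.1, 0.2],
                 [5.0, 5.1], [5.2, 4.9], [4.9, 5.0]])
labels = kmeans(data, k=2)  # the algorithm separates the groups itself
```

Real image-recognition systems cluster high-dimensional learned features rather than raw 2-D points, but the unsupervised principle is the same: no human tells the algorithm which group is which.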

Of course, computers can’t think, reason, or rationalize in quite the same way as humans, but researchers at Carnegie Mellon University are using Computer Vision and Machine Learning to push the limits of what computers can do with a system called NEIL, the Never Ending Image Learner.

NEIL’s task isn’t so much to deal with hard data, like numbers, which is what computers have been doing since they were first created. Instead, NEIL goes a step further, translating the visual world into useful information by identifying colors and lighting, classifying materials, recognizing distinct objects, and more. This information is then used to make general observations, associations, and connections, much like the human mind does at an early age.

While computers aren’t capable of processing this information with an emotional response, a critical component that separates them from humans, there are countless tasks that NEIL can accomplish today or in the near future that will help transform the way we live. Think about it: how might Computer Vision and Machine Learning change the way you live, work, and interact with your environment?

While your smart device of today may appear to be multitasking, with GPS, text messaging, and music streaming all running at once, in reality it is cycling between these tasks serially.

Computers have been operating this way since the computer age began.

Quantum computers, on the other hand, would address simultaneity from the ground up. They would perform many operations in parallel and be well-suited to machine learning where there’s a need to search instantly through a myriad of possibilities and choose the best solution.

One of the more controversial aspects of quantum computing’s massive potential is that it could render today’s data encryption technologies obsolete.

(For a surprisingly easy-to-follow explanation of the difference between classical computing versus quantum computing, see this 1999 article by Lov K. Grover, inventor of what may be the fastest possible search algorithm that could run on a quantum computer.)
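To make the search idea concrete, here is a small classical simulation of the textbook Grover iteration (an illustration run on an ordinary computer, not a quantum one). After roughly π/4·√N repetitions of the oracle and diffusion steps, nearly all of the probability concentrates on the marked item:

```python
import numpy as np

def grover_search(n_qubits, marked):
    """Simulate Grover's algorithm over N = 2**n_qubits basis states."""
    N = 2 ** n_qubits
    state = np.full(N, 1 / np.sqrt(N))       # uniform superposition
    iterations = int(np.floor(np.pi / 4 * np.sqrt(N)))
    for _ in range(iterations):
        state[marked] *= -1                   # oracle: flip the marked sign
        state = 2 * state.mean() - state      # diffusion: invert about the mean
    return state

amps = grover_search(n_qubits=4, marked=5)
probs = amps ** 2  # measurement probabilities; index 5 now dominates
```

A classical search over N unsorted items needs about N/2 lookups on average; Grover’s algorithm needs only about √N iterations, which is the speedup the article alludes to.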

One focus of the lab will be to advance machine learning. Google Director of Engineering Hartmut Neven blogs:

Machine learning is all about building better models of the world to make more accurate predictions.

And if we want to build a more useful search engine, we need to better understand spoken questions and what’s on the web so you get the best answer.

In venture capital circles, machine learning startups are about to catch fire. This makes sense as the size of data sets that companies and organizations need to utilize spirals beyond what the human brain can fathom.

As Derrick Harris at Gigaom reports, Skytree landed $18 million in Series A funding from US Venture Partners, United Parcel Service and Scott McNealy, the Sun Microsystems co-founder and former CEO. The company began just over a year earlier with $1.5 million in seed funding.

The flagship Skytree product, Skytree Server, lets users run advanced machine learning algorithms against their own data sources at speeds much faster than current alternatives. The company claims such rapid and complete processing of large datasets yields extraordinary boosts in accuracy.

Skytree’s new beta product, Adviser, allows novice users to perform machine learning analysis of their data on a laptop and receive guidance about methods and findings.

As the machine learning space becomes more accessible to a wider audience, expect to see more startups get venture funding.

Writing for Mason Research at George Mason University, Michele McDonald reports on how machine learning is helping doctors determine the best course of treatment for their patients. What’s more, machine learning is improving efficiency in medical billing and even predicting patients’ future medical conditions.

Janusz Wojtusiak points out that current research and studies focus on the average patient, whereas those being treated want personalized care at the lowest risk for the best outcome.

Machine learning can identify patterns in reams of data and place the patient’s conditions and symptoms in context to build an individualized treatment model.

As such, machine learning seeks to support the physician based on the history of the condition as well as the history of the patient.

The data to be mined is vast and detailed. It includes the lab tests, diagnoses, treatments, and qualitative notes of individual patients who, taken together, form large populations.

Machine learning uses algorithms that recognize the data, identify patterns in it and derive meaningful analyses.

For example, researchers at the Machine Learning and Inference Lab are comparing five different treatment options for patients with prostate cancer.

To determine the best treatment option, machine learning must first categorize prostate cancer patients on the basis of certain commonalities. When a new patient comes in, algorithms can figure out which group he is most similar to. In turn, this guides the direction of treatment for that patient.
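That group-then-match idea can be sketched generically. The features and group names below are hypothetical, chosen only for illustration; the lab’s actual model is not described in the article:

```python
import numpy as np

# Hypothetical, pre-scaled patient feature vectors (e.g., age, PSA level,
# tumor grade), already grouped by commonalities.
groups = {
    "low_risk":  np.array([[0.2, 0.1, 0.1], [0.3, 0.2, 0.1]]),
    "high_risk": np.array([[0.8, 0.9, 0.7], [0.9, 0.8, 0.9]]),
}

# Each group's centroid summarizes its patients' shared profile.
centroids = {name: pts.mean(axis=0) for name, pts in groups.items()}

def closest_group(patient):
    """Return the group whose centroid is nearest to the new patient."""
    return min(centroids, key=lambda g: np.linalg.norm(patient - centroids[g]))

new_patient = np.array([0.85, 0.8, 0.8])
match = closest_group(new_patient)  # guides the treatment direction
```

In practice a clinical model would use many more features, validated distance measures, and careful handling of missing data, but the matching logic follows this shape.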

Given the high-stakes consequences involved in patient care, the complexity that must be sorted out when making diagnoses, and the ongoing monitoring of interventions against outcomes, machine learning development in health care is both risk-mitigating and cost-effective.

For more about The Machine Learning and Inference Lab and the health care pilot projects they are working on, see the original article here.

As the new frontier in computing, machine learning brings us software that can make sense of big data, act on its findings, and draw insights from ambiguous information.

Spam filters, recommendation systems and driver assistance technology are some of today’s more mainstream uses of machine learning.

Like life on any frontier, creating new machine learning applications, even with the most talented of teams, can be difficult and slow for lack of tools and infrastructure.

DARPA (the Defense Advanced Research Projects Agency) is tackling this problem head-on by launching the Probabilistic Programming for Advanced Machine Learning (PPAML) program.

Probabilistic programming is a programming paradigm for dealing with uncertain information.
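A tiny example makes the paradigm concrete: represent an unknown quantity as a distribution, then update it from observations. The coin-bias model below is a hypothetical toy, not a PPAML system; it performs exact Bayesian inference by enumerating each hypothesis:

```python
# Uncertain coin bias with a discrete prior over three hypotheses.
priors = {0.3: 1/3, 0.5: 1/3, 0.7: 1/3}
observed = [1, 1, 1, 0]  # 1 = heads, 0 = tails

def posterior(priors, observed):
    """Weight each hypothesis by its likelihood, then renormalize (Bayes' rule)."""
    weights = {}
    for bias, prior in priors.items():
        likelihood = 1.0
        for flip in observed:
            likelihood *= bias if flip == 1 else 1 - bias
        weights[bias] = prior * likelihood
    total = sum(weights.values())
    return {bias: w / total for bias, w in weights.items()}

post = posterior(priors, observed)  # belief shifts toward a heads-biased coin
```

A probabilistic programming language automates exactly this kind of inference for far richer models, so the developer writes only the model, not the update machinery.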

In much the same way that high level programming languages spared developers the need to deal with machine level issues, DARPA’s focus on probabilistic programming sets the stage for a quantum leap forward in machine learning.

More specifically, machine learning developers using new programming languages geared for probabilistic inference will be freed up to deliver more innovative, effective, and efficient applications faster, while relying less on big data than is common today.

For details, see the DARPA Special Notice document describing the specific capabilities sought at http://go.usa.gov/2PhW.