On honesty, do robos have an edge on human advisors?

It's becoming a common refrain: The hybrid advisor model is the future of the business.

Experts have cited ease of use, low cost and other factors to support this argument. But a recent study offers a novel argument in its favor for a post-fiduciary rule world: a computer can't tell a lie.

"[Robos] are honest in the sense that what they do can be tested," says University of Pennsylvania Law Professor Tom Baker. "Output can be measured, and you can know it is doing what it is programmed to do. A person might be trained, but we are affected by motivated reasoning, cognitive dissonance, and influenced by interests. That's what makes us human."

In a first-of-its-kind white paper, Baker (also the co-founder of Picwell, a data analytics firm that helps users choose a health plan) forecasts a future of banking, insurance and investment advice dominated by hybrid robos. He also posits that regulators should begin developing a unified framework for determining what can be called a "well-designed" robo advisor.

'OUTPERFORM MOST HUMANS'

At the heart of the paper, though, is the idea that "at least for mass-market consumer financial products, a well-designed robo advisor will outperform most humans in terms of competence and suitability, while being as honest as the most honest humans."

The cost pressures of operating at scale, for both firms and clients, make the choice obvious, he argues.

"If you're middle class or mass affluent, someone who just wants their finances in good shape, you are going to go with robos," he says. "Why would I rely on a person who might forget to do something, in terms of constructing a portfolio or rebalancing my account, when that person is going to be following a formula anyways?"

Moreover, firms deploying automated advice platforms will be incentivized to ensure that they are operating in the best interest of their clients, Baker says.

"I'd find it much harder to imagine what happened at Wells Fargo that you would program a computer to do what its people did, because you'd have to consciously do that. They gave people bad incentives and had weak controls, which is different than programming computers to rip customers off," he says.

That's not to say there won't be legal challenges from investors who use robo advisors and suffer losses, Baker says.

But he sees such litigation focused on platforms that promote a strategy of beating the market as opposed to just managing and growing one's finances.

REGULATORY CATCH-UP

And even though the digital advice space is estimated to represent less than 1% of the total investment market, it's important for regulators to get ahead of the technology now, he says.

"The first generation of robos had no incentive to be badly designed, the only way they could get market share was to be as pure as snow," he says. "But that will change when we reach a tipping point."

"This is an effort to make the mythical perfect be the enemy of the real good," he says. "The mythical perfect person that could take everything into account. That mythical perfect person isn’t available for 20 basis points."