The core issue is that Silicon Valley sold a dream: "We can build a business without people in its ranks. Instead, we staff programmers, build glue logic around our business cases, and automate everything. This saves all the money we'd spend hiring people."

The investors bought into that idea, because if it worked, you could have companies a tenth the size of the previous generation of industry leaders, since all the work is automated. Look no further than the current crop of companies using software in this fashion. Some AI system "learned" that your combined inputs scored a fraction higher as fraud: banned. Or someone checked a box in the wrong location and you're locked out. Or the system is deleting user content at random, and there is no one to call.

Who ends up being tech support for these new companies? The executives. But that only works for people savvy enough to message them directly, or to otherwise garner their attention via Twitter, Reddit, or HN, and who happen to be in the right place at the right time. Even the aforementioned gamification has to be done, even on HN, to get the right post seen at the right time.

Where do we go from here? In truth, not many places. Non-software companies with real human service will get eaten out of house and home by companies willing to make deals with machines. The VC funding is in AI businesses, not traditional ones. One can still choose to be a customer of respectful businesses, but the internet makes that much harder, as going online also means "selling out" customer service. Some AI will then tell agents "you can't do that" even if it's exactly what's needed.

> Where do we go from here? In truth, not many places. Non-software companies with real human service will get eaten out of house and home by companies willing to make deals with machines. The VC funding is in AI businesses, not traditional ones. One can still choose to be a customer of respectful businesses, but the internet makes that much harder, as going online also means "selling out" customer service. Some AI will then tell agents "you can't do that" even if it's exactly what's needed.

You're talking as though the only solutions to this can come from the market, and that if market forces won't work that way, we're screwed and must give up.

The real solution to this is customer- and employee-friendly regulation. I'm thinking of something like a rule requiring an easy, timely way to appeal any adverse automated ruling to a human who is empowered to override the automation, backed by the threat of fines and legal sanctions. In the current American federal political climate that's a stretch, but there are other jurisdictions, at the state level and internationally, where regulation like this might be feasible.

GDPR provides something close to this, at least for high-value decisions. Under Article 22, decisions based on profiling and automated decision-making must be reviewable, at least when they "[produce] legal effects concerning him or her or similarly significantly affects him or her".

That said, I've worked on a B2C website. The internet is jam-packed with scammers, and I'd bet it's an order of magnitude worse on sites like Upwork where real money changes hands. In our case at least, if we'd had these kinds of regulations, we would have simply terminated service to a list of countries that produced little revenue and high hassle.

> Who ends up being tech support for these new companies? The executives. But that only works for people savvy enough to message them directly, or to otherwise garner their attention via Twitter, Reddit, or HN, and who happen to be in the right place at the right time.

Obviously we just need to make this kind of behavior expensive enough that execs don't take 20 years to develop a reasonable support model. Hold them personally responsible for their companies, and don't let them hide behind their shitty algorithms.