An attendee is photographed participating in a virtual reality demonstration at the Milken Institute’s 21st Global Conference on May 1st.

By Mike Blake/REUTERS.

The annual conference circuit, like everything else on our ever-flatter planet, seems to have coalesced around the same few topics. Globalism and nationalism. Climate change and cryptocurrencies. Big Tech and artificial intelligence. Years ago, these subjects were as divorced from each other as the left was from the right. Today, the only differences between TechCrunch Disrupt and, say, Davos, are the level of pomp emanating from the people in the room (you’d be surprised which has more) and the price of admission. At the Milken Institute’s 21st Global Conference in Beverly Hills this week, these same themes were on display. Speakers and panelists had been assembled to discuss the usual corporate concerns: global financial markets, personalized health, cancer immunotherapy, machine learning, thought leadership, and mindfulness. (Since the price of admission was $50,000, attendees may have also had other things on their minds. To wit: a booth for Bombardier private jets.) Yet I was somewhat surprised to find that these concerned delegates of the .001 percent had a distinctly different perspective on many of the solutions. When it came to regulating companies like Facebook or Google, or mitigating the civilization-threatening dangers of A.I., their answers were not exactly what you would expect.

Standing inside the Beverly Hilton’s cavernous central ballroom, which also hosts the Golden Globes, I found myself torn between two panels that were, curiously, scheduled at the same time: “Big Tech and Antitrust: Rethinking Competition Policy for the Digital Era” and “Artificial Intelligence: Beyond the Robot Singularity.” Makan Delrahim, the Assistant Attorney General for the Antitrust Division at the U.S. Department of Justice, and a central figure in the government’s fight to halt the AT&T-Time Warner merger, was speaking at the anti-trust panel. Given that, and the ubiquitous conversation about regulating technology companies, I opted for the former, assuming it would be standing room only.

Actually, there were plenty of empty seats. CNBC host David Faber, who was leading the panel, joked that everyone else must have gone to the session with Tom Brady. (They hadn’t; plenty of other sessions were jam-packed, with lines out the door.) Instead, I surmised that so few people were in attendance because so few people actually seem to care about regulating Big Tech. The panel, as it turned out, had been assigned one of the smallest meeting areas at the convention.

Perhaps surprisingly, given the D.O.J.’s fixation on quashing the AT&T-Time Warner merger, Delrahim didn’t seem particularly worried by the rapid consolidation within the technology industry. But his co-panelists offered plenty of objections to the growing monopoly power of Facebook, Amazon, and Google. Luigi Zingales, a professor of entrepreneurship and finance at the University of Chicago’s Booth School of Business, said that companies like Facebook and Google deploy their ad technologies in absolutely frightening ways and should be stopped. “They are not free. Any economist who says they are free should lose their degree. We pay and we pay dearly,” he said, pointing out that targeted ads are worth three times more to Big Tech than normal online ads. “They are disruptive not only to the economy, but also to democracy.” Zingales argued that the government is doing pitifully little to deter these tech giants because it is afraid of the repercussions. But Delrahim seemed to wave this theory off. “Being big is not bad; being big and behaving badly is bad,” he explained. As far as I could tell, he simply didn’t think that these tech giants were behaving badly—at least not yet.

Perhaps the most salient warning came from Sally Hubbard, the senior editor of tech anti-trust enforcement for The Capitol Forum, and a former New York assistant attorney general in the anti-trust division, who pointed out that today’s tech giants are breaking just as many regulatory rules as Microsoft did under Bill Gates in the late 90s. Why, then, are they not being punished? “About two years ago, I started to see a lot of things that look very similar to the Microsoft case,” she said. Facebook and other giants “are controlling the arena in which the game is played and they’re also playing the game. They’re favoring their own products on their own services, just like Microsoft did.” When another panelist tried to disagree, suggesting that Amazon and others were helping consumers by offering lower prices, Hubbard aptly noted that “anti-trust is about competition, it’s not about low prices.”

Zingales agreed, declaring that the D.O.J. should “break up YouTube from Google [and] break up Instagram and WhatsApp from Facebook.” (Scott Galloway, author of The Four, explored a similar line of reasoning on a recent episode of the Inside the Hive podcast.) But the message wasn’t getting through. After it became clear that Delrahim, one of the few people able to at least slow the consolidation of the tech industry, had little interest in regulating or breaking up Facebook, I left to see if there was more hope to be found in the A.I. session.

It is not entirely unpredictable that the people atop the global economic hierarchy—the sort who regularly turn up at conferences whose price of admission rivals the median U.S. household income—would be less interested in upsetting the cart than in staying ahead of the curve. Yet the pace of technological innovation has never moved so fast, or demanded as much cautious introspection, as it does today. For the last year or more, almost everyone I have spoken to about artificial intelligence has talked extensively about how this new technology could go drastically awry. As I’ve written before, we’ve irreversibly intertwined our civilization—phones, farms, electric grids, stock markets, missile-guidance systems—with technologies that can be hacked, go rogue, or even become sentient. (A 2008 congressional report predicted that if the power were to go out, 90 percent of the U.S. population would die within a year or two.) Our lives have never been so vulnerable to disruption. But during the A.I. session at Milken, the talk wasn’t about the risks posed by A.I. Instead, it was focused on the upside. The panelists, in fact, seemed to give little thought to the potential negative consequences at all.

Usman Shuja, a general manager at SparkCognition, a company developing A.I. that will be used in energy, finance, aerospace, and defense, expressed one variant of the optimists’ case. As Shuja explained, this isn’t the first time that humans have pursued A.I.—but it is the first moment that we have thought practically about its implementation. In the past, he said, the pioneers in the field were thinking about psychology—“They were obsessed with creating the mind of a human being.” This time—“3.0,” as he called it—engineers are leading the way, with explicit economic goals in mind. And the logic of the market is delivering rapid results. Tom Bianculli, C.T.O. of a company called Zebra Technologies, which is building A.I. solutions for businesses, talked extensively about all the potential benefits, though he veered away from any conversation about what might go wrong, or what responsibilities he and his counterparts on the stage might bear if such powerful new technologies spiral out of control. Instead, each question posed about the potential negatives was met with an answer about the positives, mostly financial.

When Forbes’s Rich Karlgaard, who was moderating the discussion, asked the group if they could share any concerns—“any at all”—about what could go awry with the development of A.I., like “viruses, A.I. gone rogue, massive job loss,” the response from the five-person panel was almost comical. Bianculli talked about how amazing the world will be for his children and all the opportunities they will have. Part of me wanted to stand on my chair and yell to everyone to run over to the session on anti-trust and Big Tech to see how well that worked out with social media. But, unfortunately, I didn’t.

After a few non-answers about the end of the world, an audience member took the microphone and asked, pointedly, of the entire panel, “Aren’t you scared about anything at all? How would you tell your kids what’s going on and what to expect?” Here, we finally got an answer that was truthful. “We are going to see massive job loss,” said Tom Siebel, the chairman and C.E.O. of the software company C3 IoT. “The privacy implications, when we have all of your health data, all of your health history, your genome database, and the cyber-security issues, are staggering. They are staggering and daunting and need to be dealt with.” But what was the solution? How do we deal with these potential negatives? What can we do to stop A.I. from going rogue? Will evil robots roam the earth crushing people like empty beer cans at a backyard barbecue? There, Siebel and the others didn’t have an answer. At the end, Bianculli jumped back in to reiterate that he thinks the world is going to be great for his kids. “Think about the opportunity,” he said, “the opportunity has never been richer.”

This optimism, of course, represents a fairly limited perspective, and it’s the sort of blithe naïveté that Silicon Valley denizens often get dinged for. Reports from around the globe predict that A.I. and automation will eliminate more than 50 percent of jobs—73 million in the U.S. alone by 2030, both blue- and white-collar. It was disheartening to see how much people like Bianculli yearn for the financial upside, and how little they worry about the massive challenges that lie ahead.

There were outliers in other sessions who were willing to urge caution amid the relentless technophoria. In a session about the unseen impact of technology on business growth, Jim McCaughan, chief executive officer at Principal Global Investors, observed that technology, and not immigration or trade, is hampering the poor’s ability to find work around the world. Politicians, he said, are in denial about what’s really happening. Re-skilling the workforce, a popular stump talking point, isn’t actually a solution, since so few people are equipped to fill the jobs the tech giants are creating. In another session, Thomas Barrack Jr., the Donald Trump pal and founder of the real-estate firm Colony NorthStar, pointed out that “if the system itself is hacked or breaks or causes trauma, I am not sure what happens.”

Predictions aren’t always perfect. In the anti-trust session, one person noted that in the 1980s, AT&T, then one of the most advanced tech innovators on the planet, asked McKinsey & Company to compile a report estimating how many cellular phones would be in use around the globe by the turn of the century. McKinsey predicted that by the year 2000, there would be 900,000 cell phones in the world. In reality, by the late 90s, that many cell phones were being activated every three days. Today, there are more cell phones than people on the planet, and just look at how much these devices have disrupted everything—literally, everything. We know from the conversations in the anti-trust sessions, and in front of Congress, that no one could have predicted what mobile devices would do to our democracy. Yet here, for the first time, we are fully aware of what can go drastically wrong with A.I., and we are still doing so little to stop it. As in the early days of every other technology, the people who have the power to actually protect the future are more interested in making money off it.

If only there were a place where the people who understand what could go wrong could meet in person and sit and talk with the people who are building this terrifying future. If only there were a way to regulate the companies that are building this future technology, and stop them before it’s too late. Alas, that doesn’t seem to come with the price of admission.