
Companies are investing significant dollars in AI for security. According to Capgemini’s Reinventing Cybersecurity with Artificial Intelligence Report, 48 percent of organizations say their budgets for AI in cybersecurity will increase by an average of 29 percent in 2020, and 61 percent of enterprises say they cannot detect breach attempts today without the use of AI technologies.

An emphasis on AI was clear at this year’s Black Hat event in Las Vegas, where several vendors made AI-related announcements about platforms that leverage AI and machine learning capabilities to address threat detection.

Among them was Dell’s Secureworks, which released its AI-based Red Cloak Threat Detection and Response (TDR) platform. TDR applies analytics and AI to data gathered from endpoints, network nodes and applications to build a picture of an organization’s threat environment.

“AI is clearly the future in this space,” said Tim Vidas, Sr. Distinguished Engineer, IT Security, Secureworks. “Security operations need to see threats sooner and respond faster with the right action. And they need to recognize threats by their behavior versus only recognizing known tools/signature-based detections.”

Vidas said he believes the future of the Security Operations Center will be centered on AI and ML to improve the accuracy of threat detections, and to help reduce Mean Time to Respond (MTTR).
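As an illustration of the MTTR metric mentioned above (a generic sketch, not Secureworks' implementation), mean time to respond is simply the average delay between when a threat is detected and when it is acted upon:

```python
from datetime import datetime, timedelta

def mean_time_to_respond(incidents):
    """Average delay between detection and response across incidents.

    `incidents` is a list of (detected_at, responded_at) datetime pairs;
    the names here are hypothetical, chosen for illustration.
    """
    deltas = [responded - detected for detected, responded in incidents]
    return sum(deltas, timedelta()) / len(deltas)

# Two incidents: one took 90 minutes to respond to, one took 30
incidents = [
    (datetime(2019, 8, 1, 9, 0), datetime(2019, 8, 1, 10, 30)),
    (datetime(2019, 8, 2, 14, 0), datetime(2019, 8, 2, 14, 30)),
]
print(mean_time_to_respond(incidents))  # 1:00:00
```

Reducing MTTR, in these terms, means shrinking those deltas, whether by detecting sooner or by automating the response.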

“But it’s not as simple as just applying AI to the problem,” he said. “AI is a pretty broad discipline, pedantically meaning ‘computers are performing tasks that mimic human intelligence.’ So to achieve better security outcomes like faster, more accurate detections, you have to detect what matters. This speaks directly to precision: how many of the alerts raised are actually relevant.”
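The precision Vidas refers to has a standard definition: of all alerts raised, the fraction that were actually relevant. A minimal sketch (the function and example counts are illustrative, not from any vendor):

```python
def precision(true_positives: int, false_positives: int) -> float:
    """Fraction of raised alerts that were actually relevant.

    true_positives:  alerts that corresponded to real threats
    false_positives: alerts that turned out to be noise
    """
    total_alerts = true_positives + false_positives
    if total_alerts == 0:
        return 0.0  # no alerts raised, so precision is undefined; report 0
    return true_positives / total_alerts

# Example: 40 relevant alerts out of 200 raised
print(precision(40, 160))  # 0.2
```

A detection system with low precision buries analysts in noise, which is why raising precision, not just raising more alerts, is the goal Vidas describes.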

In addition to product announcements, a newly formed group known as the AI Security Alliance used Black Hat to formally launch its mission. The organization, which describes itself as developing standards and best practices for securing the development and use of AI in commercial, government and academic use cases, announced its founding board members and formally launched its membership and working groups at the event.

“The Alliance has only just started, yet membership and interest has exploded, mirroring the key trends in the industry: fragmentation of standards across industries, lack of resources and expertise, and an acceleration of risks for AI use cases. Trust in AI will be core to positively shaping society in the coming years,” said Kapil Raina, Chair, AI Security Alliance.

While AI continues to be a hot topic in security, recent research from Dotscience finds that many companies are still struggling to stabilize and scale their AI initiatives. The findings of its State of Development and Operations of AI Applications 2019 report are based on a survey of 500 industry professionals.

The research finds that while 63.2 percent of businesses said they are spending between $500,000 and $10 million on their AI efforts, 60.6 percent have experienced a variety of operational challenges with the technology.

Among those challenges: 64.4 percent said it is taking between seven and 18 months to get their AI workloads from idea into production, demonstrating that the timeline from investment to practical execution can be long.