The first warning came a year and a half ago, Stephan Loerke, chief executive of the WFA, told journalists at a press event hosted by Google.

The warning came six months before the story broke in The Times, and that story did not mention the WFA member advertisers who had already been made aware of the issue.

"We brought an expert, the most knowledgeable person on the subject at the time, and after his presentation there was just silence," Loerke said. "The only question from the audience was ‘what can we do so when this breaks we can say what we did?'"

But then something odd happened. The WFA released a report and briefed its member companies about the risk of their ads running against extremist content on YouTube and elsewhere and, while many companies took action, the crisis lost momentum.

"The companies that were aware took action. When The Times story broke, none of those we briefed were among those," Loerke said. "When you don’t have it in the newspapers, it seems quite abstract."

YouTube and owner Google were aware of the issue long before it broke in The Times and were working on solutions, but not fast enough, admitted Dyana Najdi, director of EMEA YouTube and video solutions.

"The volume of impressions [of ads that ran against extremist content] were so small... but it takes just one impression to lose an advertiser’s trust. This was something we learned and it is non-negotiable that we get this right," Najdi said.

The biggest thing that the team has taken away from this issue, she continued, is that challenges such as these must be recognized, identified, and addressed with more urgency.

The Times article, which put brands on the spot, and the way it was picked up by other media "sped things up hugely," Najdi said. "In 10 years at Google, I've never seen so many people make one issue a priority, and that priority was to ensure that the environment advertisers advertise against is safe."

Considerable progress has been made, Najdi said.

"Since August, 83% of the content that was removed for violent extremism was taken down before a single human flag was raised. This is eight percentage points better than July. We’re making progress quickly. Our systems have been refined to be more precise and more surgical in enforcing our policies."

YouTube has also increased the number of independent experts it works with through its YouTube Trusted Flagger program. In July, it began refunding advertisers whose ads had run against extremist content.

Across the Google ecosystem, stricter policies were also put in place. In May, Google AdSense updated its policies to more precisely target pages that violate AdSense's content rules, and Google introduced the ability to remove ads from content at the page level. Content that violates Google's policies includes adult content, content that is derogatory or dangerous, and content that promotes drug use.

But is all this enough?

Complete brand safety has not yet been achieved, Loerke said, but WFA members report that Google has been engaging swiftly with affected brands.

"We [WFA] have had frequent contact with Google, briefing us on steps they have taken in a tone of humility that was very welcomed by brands," Loerke continued.

But there is no single point of view among WFA members as to whether it’s safe enough to return to YouTube.

"I still know a number of companies that have not gone back. Some others returned to YouTube but are demanding their agencies do all they can to control risk," he said.

In July, Marks & Spencer, Havas, the BBC, Channel 4, ITV, The Guardian, and the U.K. government told the Financial Times they had not yet returned to YouTube.

"Ultimately, the question is, ‘Is the risk totally eliminated?’ It’s hard to say that’s even possible and some companies are not prepared to take any risk," Loerke concluded.