Amazon recruiters are believed to have consulted the system's recommendations when hiring, but did not rely solely on its rankings. Women currently make up 40pc of Amazon's workforce.

Stevie Buckley, the co-founder of UK job website Honest Work, which is used by companies such as Snapchat to recruit for technology roles, said that “the basic premise of expecting a machine to identify strong job applicants based on historic hiring practices at your company is a surefire method to rapidly scale inherent bias and discriminatory recruitment practices.”

Inherent bias in algorithms is a common problem in the technology industry. Algorithms are not explicitly told to be biased, but can become unfair through the data they are trained on.
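As a minimal sketch of how this happens (the records below are invented purely for illustration), a model that simply learns historical selection rates will reproduce any skew present in its training data:

```python
# Hypothetical historical hiring records: (gender, was_hired).
# The figures are made up to illustrate skewed training data.
historical = [
    ("male", True), ("male", True), ("male", True), ("male", False),
    ("female", True), ("female", False), ("female", False), ("female", False),
]

def selection_rate(records, gender):
    """Fraction of applicants of the given gender who were hired."""
    outcomes = [hired for g, hired in records if g == gender]
    return sum(outcomes) / len(outcomes)

# A naive model that scores candidates by their group's historical
# selection rate is never "told" to discriminate, yet it favours
# whichever group was favoured in the past.
print(selection_rate(historical, "male"))    # 0.75
print(selection_rate(historical, "female"))  # 0.25
```

No explicit rule about gender appears anywhere in the code; the unfairness comes entirely from the data.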

"Developers and AI specialists carry the same biases as talent professionals, but we're often not asked to interrogate or test for these during the development process," she said.

Google removed the ability to search for gorillas in its Google Photos app after the service mislabelled photographs of black people as gorillas.

Amazon’s failed recruitment software and the issues with Google Photos illustrate one of the largest weaknesses of machine learning, where computers teach themselves to perform tasks by analysing data.

Last month, IBM launched a tool which is designed to detect bias in AI.

The Fairness 360 Kit allows developers to see clearly how their algorithms work and which pieces of data are used to make decisions.
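One of the checks a toolkit like this can surface is the disparate-impact ratio: the selection rate for a disadvantaged group divided by the rate for the advantaged group. The plain-Python sketch below illustrates the metric itself, not the toolkit's own API, and the rates are invented:

```python
def disparate_impact(rate_unprivileged, rate_privileged):
    """Ratio of selection rates between two groups.

    Values below roughly 0.8 are widely treated as a red flag
    for discriminatory outcomes (the "four-fifths rule").
    """
    return rate_unprivileged / rate_privileged

# Invented example: 25pc of female applicants advanced vs 75pc of male.
ratio = disparate_impact(0.25, 0.75)
print(round(ratio, 2))  # 0.33 -- well below the 0.8 threshold
```

Surfacing a single number like this makes it easier for developers to spot, and then investigate, which pieces of data are driving a skewed decision.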

“Considering Amazon's exhaustive resources and their exceptionally talented team of engineers,” Mr Buckley said, “the fact that their AI recruiting tool failed miserably suggests that we should maintain a default scepticism towards any organisation that claims to have produced an effective AI tool for recruitment.”