What are the specifics around responsible AI?

With successfully trained AI technology, a company essentially creates a new worker. But where to begin?

In my last post, I discussed the potential impacts artificial intelligence (AI) technologies are having on our society today. I also outlined just what makes AI technologies “responsible.” In this post, I will get a little more specific as to how businesses can develop and deploy AI technologies on a foundation of responsibility.

Responsible and ethical considerations around AI

As I said in my last post, any business looking to capitalize on the potential of AI should also acknowledge the impact the technology is likely to have on people and society as a whole.

For businesses, this means changing how they view AI: not as systems that are merely programmed, but as systems that learn. AI technologies built as fixed programs are useful only for a finite set of tasks, while learning-based AI technologies have a much wider repertoire.

Raising AI requires addressing many of the same challenges faced in human education and growth, including:

Fostering an understanding of right and wrong.

Imparting knowledge without bias.

Building self-reliance while emphasizing the importance of collaborating and communicating with others.

By taking on the responsibility of “raising” AI, companies can create portfolios of AI systems with varied skills. Once AI systems are trained, these skills can be redirected throughout the workforce as needed, and remain available to the company as long as it needs them.

Because of this, a company’s AI needs to be aligned with the company’s core values and ethical principles. In doing so, companies build trust with their consumers and with society at large.

To develop and use AI in a responsible way, businesses should take several factors into consideration, including:

Bias, drift and other unintended consequences

Growth vs. fixed mindset

Trust and transparency

Privacy

Diversity

The responsible AI imperative

To help businesses integrate these factors into AI design from the beginning, Accenture has developed a practical approach to responsible AI.

This approach addresses the imperative to:

Design—architecting and deploying AI with trust (e.g., privacy, transparency and security) built in by design, including building systems that yield “explainable” AI.

Monitor—auditing the performance of AI against key value-driven metrics, with respect to algorithmic accountability, bias and cybersecurity.

Reskill—democratizing AI learning across an enterprise’s stakeholders, emphasizing augmentation over replacement, and reskilling workers displaced by automation (more on this in my next post).

Govern—creating the right framework to allow AI to flourish, anchored to industry’s and society’s shared values, ethical guardrails and accountability frameworks.

How can businesses meet the responsible AI imperative?

It is essential that businesses view responsible AI as a collective effort. Business and government leaders should proactively address the critical issues raised by AI, inventing new models and approaches built on the principles of responsibility.