Forbes CommunityVoice™ allows professional fee-based membership groups ("communities") to connect directly with the Forbes audience by enabling them to create content – and participate in the conversation – on the Forbes digital publishing platform. Each topic-based CommunityVoice™ is produced and managed by the group.

Opinions expressed within Forbes CommunityVoice™ are those of the participating individuals.

In 2019, I've found that more and more people are forming their own beliefs about where artificial intelligence is heading in the near future. But as the creators of this technology, we seem to have forgotten that we have the power to shape the effects it will have on society.

From my perspective, we are now at a critical inflection point in AI adoption: We can either continue the trend of keeping AI limited to the hands of a few companies and people, or we can find a way to ensure that AI is a rising tide that lifts all boats.

Most technologies are iterative. They are created, tested, tinkered with, scrapped, updated and reborn several times over. In tech, we often talk about "minimum viable product," or MVP, which is when you release the early stages of a product or technology to prove a concept; follow-up versions are then released with improvements.

But with AI, the stakes have never been higher. Those early interactions, trainings and data sets have the power to become codified into the technology at a scale we’ve never seen. Too often, bias is built in as soon as the team begins framing the problem and collecting data. For example, there have been instances of chatbots adopting hate speech, facial recognition technology failing to properly identify people with darker skin and more.

I believe part of the source of this bias is that AI is concentrated in too few hands. According to a recent study released by AI Now, 80% of professors who specialize in AI are men. At Facebook, women account for 15% of the company's AI research staff; at Google, this number falls to 10%. For people of color, the numbers drop even lower: Black team members comprise just 4% of Facebook's and Microsoft's workforces and 2.5% of Google's.

It's also important to consider who has access to these AI researchers. Big tech companies are paying salaries of between $300,000 and $500,000 to developers with just a few years of experience, according to The New York Times. In my opinion, this prevents all but a few from building their own AI and assembling more diverse teams, because small startups often can't compete with such high salaries. As a result, these researchers tend to cluster at a handful of companies, where they become the creators of innovative technology with massive potential impact. It's little wonder people are nervous that AI created in such an environment could carry bias.

There are real risks to this kind of concentrated power, and many have written about the disparities biased AI could create. Yet despite a pervasive sense of unease, I've observed that even companies that have tried to implement effective change haven't always succeeded. For example, Google created an AI ethics board to help monitor its use of the technology, but the controversial group was dissolved soon after.

So how can we avoid this technocratic dystopia? My company provides AI solutions, and I've discovered a few steps we can all take to start creating more transparent and objective AI technologies:

1. Demystify the black box.

Right now, much of AI is a black box to everyone other than the individuals who can effectively wield the technology. I believe it's critical to build AI models that are transparent, so the mistakes and biases that could otherwise arise have nowhere to hide. Facebook, Apple and others are beginning to head in this direction, and from my perspective, all companies should join the effort to develop open-source tools that detect bias. De-silo your engineering teams, and stop building and executing these projects in closed systems. It's time to bring members of varying teams into the development process.
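A bias-detection tool doesn't have to be elaborate to be useful. As a minimal sketch (the group names and record format here are hypothetical, not from any particular tool mentioned above), one common starting point is simply disaggregating a model's error rate by demographic group and comparing the results:

```python
from collections import defaultdict

def error_rates_by_group(records):
    """Compute per-group error rates for a set of model predictions.

    `records` is a list of (group, predicted_label, true_label) tuples.
    The field names and groups are illustrative placeholders -- adapt
    them to your own data. A large gap between groups is a signal
    worth investigating.
    """
    totals = defaultdict(int)
    errors = defaultdict(int)
    for group, predicted, actual in records:
        totals[group] += 1
        if predicted != actual:
            errors[group] += 1
    return {g: errors[g] / totals[g] for g in totals}

# Toy example: a model that misclassifies one group far more often.
records = [
    ("group_a", 1, 1), ("group_a", 0, 0), ("group_a", 1, 1), ("group_a", 0, 1),
    ("group_b", 1, 0), ("group_b", 0, 1), ("group_b", 1, 1), ("group_b", 0, 1),
]
print(error_rates_by_group(records))  # group_a: 0.25, group_b: 0.75
```

Publishing even a simple audit like this alongside a model is one concrete way to make the black box more transparent.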

2. Consider perspectives beyond engineering.

The issues that AI is currently grappling with are intensely human issues, often addressing intersections of race, gender and class. In addition to hiring engineers from diverse backgrounds, I believe collaborating with experts in the areas of race, gender and class is vital to help us contextualize the technology that's being developed. This can help us see how it relates to the rest of the world in a productive way. Some businesses have already started doing this.

To begin working with these individuals, ensure your job postings explicitly target humanities majors alongside computer scientists so that your company can build robust tech teams that are also people-literate and ethically educated. Those with the resources might also consider creating advisory boards that mirror this diversity on the highest leadership level. AI built in a vacuum devoid of social context will only hurt your consumers and your company, which is why it’s important to have this diversity reflected across all levels of your organization.

3. Strive to make AI the solution to bias.

From my perspective, AI should be able to find and surface bias in human systems. While AI systems have been called out for bias in a variety of troubling ways, including hiring practices and criminal sentencing, the technology is often reflecting the human bias already in the system. If we think of AI as a tool for revealing our bias rather than an instrument for reinforcing it, we can begin to see how it could offer a more objective system. For example, some researchers are already developing algorithms that evaluate the training data an AI system uses and determine whether that initial data is biased. You could consider doing the same by asking your AI vendors for similar capabilities or by developing your own auditing algorithms in-house.
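An in-house training-data audit can start very simply. The sketch below (the group names, labels and threshold are assumptions for illustration, not a reference to any specific research mentioned above) measures how often each group receives a positive label in a training set and flags a large gap:

```python
from collections import Counter

def label_balance_by_group(examples):
    """Share of positive labels per group in a training set.

    `examples` is a list of (group, label) pairs with label in {0, 1};
    the names are illustrative. A large gap between groups suggests
    the data may encode historical bias.
    """
    positives = Counter()
    totals = Counter()
    for group, label in examples:
        totals[group] += 1
        positives[group] += label
    return {g: positives[g] / totals[g] for g in totals}

def flag_imbalance(rates, threshold=0.2):
    """Return (flagged, gap): flagged is True when the spread between
    the highest and lowest per-group positive rate exceeds threshold.
    The 0.2 default is an arbitrary starting point, not a standard."""
    gap = max(rates.values()) - min(rates.values())
    return gap > threshold, gap

# Toy data: group_a is labeled positive 70% of the time, group_b 40%.
examples = ([("group_a", 1)] * 70 + [("group_a", 0)] * 30 +
            [("group_b", 1)] * 40 + [("group_b", 0)] * 60)
rates = label_balance_by_group(examples)
flagged, gap = flag_imbalance(rates)
```

Running a check like this before training — and again whenever the data is refreshed — is a low-cost way to catch bias at the framing-and-collection stage, where it most often creeps in.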

Whether consciously or not, most of us are influenced by bias, and AI can highlight this in unsettling ways. But this isn't a reason to be afraid. It’s a reason to charge ahead, not only for the betterment of business and the growth of technology, but for the betterment of everyone.