Artificial intelligence is defined as “a branch of computer science dealing with the simulation of intelligent behavior in computers,” and “the capability of a machine to imitate intelligent human behavior,” according to Merriam-Webster. But it was also evident during Tuesday’s hearing that the definitions and uses of AI are still evolving.

Subcommittee Chairman Sen. Roger Wicker, R-Miss., said the increase in data collected from Americans through the use of the internet and mobile devices has contributed to the advances in the industry.

“Although AI applications have been around for decades, recent advancements, particularly in machine learning, have accelerated in their capabilities because of the massive growth in data gathered from billions of connected devices and the digitization of everything,” Wicker said. “Developments in computer processing technologies and better algorithms are also enabling AI systems to become smarter and perform more unique tasks.”

During his opening remarks, he also cautioned about the risks of automating common processes through AI.

“These are important considerations to ensure that the decisions made by AI systems are based on representative data that does not unintentionally harm vulnerable populations or act in an unsafe, anticompetitive or biased way,” Wicker said. “So, there is a lot to think about.”

The subcommittee’s Ranking Member Sen. Brian Schatz, D-Hawaii, also expressed his concern about certain aspects of AI, calling it a “black box.”

“It can make decisions and come to conclusions without showing its reasoning. There are also known cases of algorithms that discriminate against minority groups,” Schatz said. “And when you start to apply the systems to criminal justice, health care or defense, lack of transparency and accountability is worrisome.”

The senator encouraged lawmakers not to purchase AI systems for the government until there is a stronger understanding of its capabilities. Schatz said that U.S. policy needs to be updated to adapt to the advances in machine learning and artificial intelligence.

“Some of our current laws and regulations work, but some of them are too old and outdated to be used as a strong foundation for AI,” Schatz said.

Schatz intends to introduce a bill that would create an independent federal commission geared toward ensuring AI is adopted in the best interest of the general public.

Witness Dr. Cindy L. Bethel, associate professor of computer science and engineering at Mississippi State University, discussed some of the areas in which she has studied the benefits of AI integration, including law enforcement, logistics and health care.

For law enforcement, Bethel said Special Weapons and Tactics (SWAT) teams could utilize AI and algorithms to help identify important information during high-risk situations.

“The algorithms identify what is important to the officers in the environment, such as children, weapons, and other possible threats,” Bethel stated. “This information can change the dynamics of how they make entry or process the scene.”

Mississippi State University has also studied AI for automating cargo deliveries and created Therabot™, a robotic therapy support system geared toward individuals who may be allergic to therapy support animals or cannot care for an animal.

“The better the quality and quantity of information available to the system, the better the results will be from the machine learning process, which results in a better final decision from the system,” Bethel said. “Otherwise the decision-making capabilities can be limited or inaccurate.”

Hearing witness Daniel Castro, vice president of the Information Technology and Innovation Foundation, said that the U.S. has to be mindful of its international competition in this space.

“Given the enormous advantage that AI-enabled firms will have compared to their non-AI-enabled peers, the United States should focus on AI adoption in its traded sectors where U.S. firms will face international competition,” Castro said. “To date, the U.S. government has not declared its intent to remain globally dominant in this field, nor has it begun the even harder task of developing a strategy to achieve that vision.”

Castro informed the committee of several countries making advancements in artificial intelligence, including the United Kingdom, Japan, Canada and China. He told the committee that China issued a development plan for AI in 2017 and that the country’s goal is to become the world leader in the field by 2030.

Dario Gil, vice president of AI and IBM Q and a witness during Tuesday’s hearing, said that the U.S. will likely have to consider how the nation’s advancements will affect the job market.

“There’s no question the advent of artificial intelligence will impact jobs. Occupations are made up of tasks. It is the tasks that are automated and reorganized where the transformation occurs,” Gil said. “Workers will need new skills for the new transformed tasks and occupations. But, it is the tasks that cannot or will not be automated where workers provide the greatest value, commanding higher wages and incomes as a result.”

Castro also addressed Senator Schatz’s concern that AI is a black box that could lead to discrimination or biases.

“In many cases, regulators will not need to intervene because the private sector will address problems about AI, such as bias or discrimination, on its own, even if to outsiders an algorithm appears to be a ‘black box,’” Castro said. “After all, one company’s hidden biases are another company’s business opportunities.”

Schatz said he was not confident that developers would be able to program their way out of all potential biases created by AI.

“I’m worried about diversity in the industry. I think to the extent that you have software engineers and decision makers, both at the line level writing the code and all the way up to project management, and the people who are wrestling with some of these moral questions are mostly white men, I think that is not a trivial thing because they are not thinking about biases in policing,” Schatz said. “Is it fair, is it rational, to have only, or I should say predominantly, white men in charge of setting up these algorithms that most of the rest of society can’t even access because it is all proprietary?”

Edward Felten, professor of computer science and public affairs at Princeton University, said that technology has a role to play but it cannot be the entire solution.

“What we need is a combination of institutional oversight and governance with technology. Technology can provide the levers, but we still need institutions that are able to pull those levers to make sure that the results are conducive to what our institutions and our society want,” Felten said. “The AI workforce is even less diverse than the tech workforce generally, and it is important to take efforts to improve that so we can put our whole team on the field.”

Witness Victoria Espinel, president and CEO of BSA | The Software Alliance, said that artificial intelligence should not be viewed only as something that may create biases; it could also be used to counter them.

“I think there is another part of this discussion we have heard less about which I think is really important, which is how AI can be used, not trained and built, but how it can be used to try and counter bias and try to broaden inclusion,” Espinel said. There are instances, she added, “where AI can dramatically transform their ability to interact with society and in workplaces.”