President Donald Trump announced a half-trillion-dollar investment in AI alongside tech CEOs Sam Altman, Larry Ellison and Masayoshi Son Jan. 22, according to the AP. Combined with executive orders targeting diversity, equity, inclusion and accessibility programs, and with documented biases in both machine learning systems and large language models, the announcement raises new questions about what biases are ingrained in AI, according to the ACLU.
Yonatan Mintz, a University of Wisconsin professor of industrial and systems engineering and AI ethicist, said the concerns about biases are multifaceted. With LLMs, both the models themselves and their human operators create openings for bias to slip in.
For biases within AI itself, Mintz pointed to the case of w2vNEWS, a machine learning model trained on articles and headlines from Google News. It made waves in 2016 after the word associations it generated revealed biases about men's and women's roles. One such association, "man is to computer programmer as woman is to homemaker," became the title of the paper documenting the findings.
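The analogy behind that headline comes from simple arithmetic on word vectors. A minimal sketch of the probe, assuming the pretrained Google News embeddings, the same corpus behind w2vNEWS, are available through gensim's downloader:

```python
# Sketch of the analogy probe described above, assuming the pretrained
# Google News word2vec vectors are available via gensim's downloader.
import gensim.downloader as api

# Roughly a 1.6 GB download on first use.
model = api.load("word2vec-google-news-300")

# Analogy arithmetic: vec("computer_programmer") - vec("man") + vec("woman").
# The nearest words complete "man is to computer_programmer as woman is to ___".
print(model.most_similar(positive=["woman", "computer_programmer"],
                         negative=["man"], topn=5))
```

In the 2016 paper, "homemaker" topped the list of completions.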
The incident spawned discussions on protocols to remove biases from future programs. Mintz said techniques have since been developed to reduce biased associations and to narrow an LLM's scope so that biases tied to its specialty can be teased out.
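One family of such techniques, introduced in the same 2016 work, projects bias out of the embedding space. The NumPy sketch below uses toy placeholder vectors rather than a real model and shows the "neutralize" step: removing a word vector's component along a learned gender direction so the word no longer leans toward either end of it.

```python
# "Neutralize" step from the 2016 debiasing work, sketched with NumPy.
# The vectors here are toy placeholders; a real pipeline would pull them
# from an embedding model like the one loaded above.
import numpy as np

def neutralize(word_vec: np.ndarray, gender_dir: np.ndarray) -> np.ndarray:
    """Remove the component of word_vec that lies along gender_dir."""
    g = gender_dir / np.linalg.norm(gender_dir)  # unit-length bias direction
    return word_vec - np.dot(word_vec, g) * g    # subtract the projection

# Crude gender direction from hypothetical he/she vectors.
he, she = np.array([0.9, 0.1, 0.3]), np.array([0.1, 0.9, 0.3])
gender_direction = he - she
programmer = np.array([0.8, 0.2, 0.7])           # hypothetical occupation vector

debiased = neutralize(programmer, gender_direction)
# After neutralizing, the vector is orthogonal to the gender direction.
print(np.dot(debiased, gender_direction))        # ~0.0
```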
“At the end of the day, depending on the purpose of what you’re trying to use this service for, you are going to have to develop relevant checks,” Mintz said.
Dane Gogoshin, a visiting professor at UW specializing in philosophy and AI, voiced stronger concerns about biases in the data LLMs are trained on. She said AI will likely not solve the social issues of today but instead perpetuate them, potentially ingraining them further.
Gogoshin said the data AIs are trained on reflects the world of today and of the past. Because that data comes from a world marked by inequalities, the AIs will necessarily embody those inequalities if they are not regulated.
“The algorithms are not neutral and we’re just using them without really thinking about the consequences and being really clear on the goods that they’re delivering,” Gogoshin said.
Mintz said federal regulations should be able to contain biases in LLMs used in areas like banking, insurance and health care. The foundation of these regulations is the Civil Rights Act, Mintz said. But critics of the Trump administration say it is targeting the protections and norms the law provides.
The less regulated these areas become, the less confidence Mintz has in efforts to remove biases from AIs, he said. The AI industry has faced scrutiny from Congress, but little regulation in the form of laws or oversight from federal agencies has resulted.
Several bills have been put forward seeking to regulate AI or to conduct fact-finding for regulators, but none have passed. Trump rescinded a 2023 executive order from Joe Biden that directed AI companies to disclose safety risks to U.S. officials before their models were released. The executive order relied on the existing Defense Production Act as its enforcement mechanism.
Mintz said operators can perpetuate their own biases by seeking authoritative answers from LLMs and by steering prompts toward a desired response. Treating AI as an agent, rather than as a model that generates text by prediction, creates its own hazards depending on the context in which it is used, Mintz said. The way we understand AI, he said, creates some of the issues around its use.
“People can’t separate out that it’s just a model and reproducing text versus it being another person,” Mintz said. “If we assume the same thing about an LLM, that’s where the issues start to happen.”
According to a McKinsey report released in May 2024, businesses have adopted generative AI quickly, with the share using it in some form rising from 33% in 2023 to 65% by the start of 2024. But they are also seeing issues, from inaccuracy to equity. Thirty percent of respondents said equity and fairness were a concern for AI use, but only 12% said their companies were working to mitigate it. The gap underscores a broader attitude among business leaders and the general public that treats AI as a blank slate, not a reflection of the world.
Gogoshin said experts have raised questions about the current use of AI in fields like education and criminal justice. Implementing AI on such a large scale creates huge risks that fall on the companies, rather than the people using the AI, she said.
“There’s a conflict of interest between the well-being of the people and what would actually benefit us from the standpoint of algorithms and the interests driving the development,” Gogoshin said.
Mintz said further AI regulation is unlikely to come from individual states. States that decide to regulate on their own risk losing business from the firms that build AIs, he said, a disincentive to act. Mintz argued federal regulation is necessary to ensure AIs are properly developed and released, especially as companies dissolve the ethics boards meant to oversee their work.
Gogoshin said AI could, in principle, help resolve ethical quandaries in human affairs like court sentencing. But the rapid adoption of COMPAS, an algorithmic risk-assessment tool used in courts across the country, represents the opposite of that aim, Gogoshin said. She said the tool has come under scrutiny because its questionnaire includes questions that act as proxies for race. Its proposed benefits, such as eliminating the supposed "hungry judge effect," have not been adequately weighed against the risks, she said.
“We’re not being careful enough, we’re not scrutinizing enough to trust that [AIs] are delivering the kinds of results that we need,” Gogoshin said.