The Badger Herald
Independent Student Newspaper Since 1969
As AI programs proliferate, researchers investigate implications of machine-driven decisions

Between zeros and ones, undercurrents of human bias may linger in AI

From facial recognition to self-driving cars, job hiring algorithms to online chatbots, targeted advertising to Google Translate, artificial intelligence programs are starting to seep into almost every area of human life.

AI experts surmise that there’s a 50 percent chance AI will be able to outperform humans in every task in 45 years, according to a 2017 survey conducted by researchers at the University of Oxford and Yale University.

Within the next 10 years, the 352 AI experts surveyed predict AI will be able to surpass human performance in translating languages, writing high school essays and driving a truck. By 2049, they predict AI will be better at writing a best-selling novel. By 2053, working as a surgeon.


At the University of Wisconsin alone, 38 researchers are working on AI-related projects.

But as the applications of AI have continued to grow, common knowledge about how AI works, its risks and its implications for our daily lives has not. As many users of AI technology remain unaware of the practical risks of the technology, UW researchers are already at work trying to solve them.

“The danger could be lurking in your data. That’s the most concerning thing to us.”

Why you shouldn’t worry about robots taking over the world

When talking about AI in everyday conversation, the seductive impulse is to discuss the possibility of super-intelligent robots taking over the world. As seen in sci-fi movies such as “The Terminator,” “2001: A Space Odyssey” and “I, Robot,” and elsewhere in popular culture, humans are as fascinated by the capabilities of AI as they are fearful of them.

The popular fear stems from the idea that AI will eventually become smart enough to make an AI system smarter than itself, which could result in an “intelligence explosion.” This super-intelligent machine would potentially have its own interests in mind, rather than humanity’s — hence the films and TV shows depicting a dystopian robot-controlled future.

Yet, AI researchers say superintelligence shouldn’t be a major concern in today’s age.

Aubrey Barnard, a computer science Ph.D. student working on applications of AI in biostatistics and medical informatics, said that while superintelligence is an “interesting thing to think about,” fears of it are “way overblown” at this point. AI researchers are not even sure how such a thing might occur on a technical level.

UW junior John Moss worked with the Department of Homeland Security and NASA this past summer on a project to create an AI-assisted helmet cam that can recognize risks for firefighters while on the scene. Moss said people wholly underestimate how complicated AI systems are — even ones for doing seemingly simple tasks like recognizing a propane tank in a burning building.

“The number of details you need to get just right for anything to work at all — much less anything work so well that it’s going to take over the world — is just almost inconceivable,” Moss said.

More pressing AI risks

Today, the consequences of this quickly advancing field are much less flashy. In March 2016, Microsoft released an AI chatbot named Tay on Twitter. The chatbot was supposed to have fun conversations with people on Twitter as if it were a teen, learning new ways to respond from the people it talked to.

It wasn’t long before the trolls of the internet turned the well-meaning chatbot into a racist, Holocaust-denying, feminist-hating Nazi.

https://twitter.com/geraldmellor/status/712880710328139776

While the Tay chatbot may seem like a harmless example, AI programs today are machine learners, meaning they absorb what humans teach them. If the data used to train an AI system has biases or stereotypes built in, even the most advanced machine learning programs can learn to be racist.

As humans begin to rely on AI in greater capacities, the question of how machines can perpetuate existing systemic injustices becomes less and less theoretical. AI systems have already been used in Wisconsin courts for sentencing decisions, and they could have applications for job hiring too — two areas that pose serious risks for the lives of Wisconsin residents.

How machine learning works

Today’s AI programs are able to engage in machine learning because of neural nets: highly complex, nonlinear functions that can complete tasks ranging from object recognition in images to language translation.

There are other ways for artificial intelligence to be achieved, but neural nets are currently the most popular. They result in “deep learning” systems, meaning they can train themselves, computer science professor and AI researcher Jerry Zhu said.

A neural net is fed a series of inputs to “teach” the program how to complete its task. To demonstrate how a machine learns, Zhu uses an example of a neural net someone might use to identify photos of cats.

In the past, if a programmer wanted to create a program to identify an orange house cat in a picture, she would have had to code a set of rules for recognizing the color orange, Zhu said.

So the computer knows a cat photo needs to have a certain percentage of orange pixels, but this means an orange cone might accidentally be categorized as a cat photo, so the coder has to write another rule — and then another.

Humans, Zhu said, are incredibly bad at writing the sort of rules needed to create an accurate cat-image sorting program. But with neural nets, the computer is the one creating all the rules, with little help from the user.

The neural network is then fed hundreds of images, some with cats, and some without. Eventually, the program develops its own set of rules which it can use to accurately determine whether an image contains a cat.
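
To make Zhu’s cat example concrete, here is a minimal sketch of that workflow in Python using the scikit-learn library. The “images” are made-up three-number summaries rather than real photos, and the labeling rule is invented for illustration; only the basic idea (labeled examples in, learned rules out) follows Zhu’s description.

# A toy version of Zhu's cat-photo example: instead of hand-coding rules
# about orange pixels, we hand a small neural network labeled examples and
# let it fit its own decision rules. All data here is made up.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# Pretend each "image" is summarized by three numbers:
# [fraction of orange pixels, amount of fur-like texture, roundness of shapes]
n = 400
X = rng.random((n, 3))

# A made-up stand-in for "this photo contains a cat":
# plenty of fur-like texture and fairly round shapes.
y = ((X[:, 1] > 0.5) & (X[:, 2] > 0.4)).astype(int)

# The network learns its own rules from the first 300 examples ...
model = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
model.fit(X[:300], y[:300])

# ... and is then tested on 100 examples it has never seen.
print("accuracy on unseen examples:", model.score(X[300:], y[300:]))

# Peeking inside the "black box" only reveals arrays of learned coefficients.
print([w.shape for w in model.coefs_])

Nothing in the script spells out what a cat looks like; the rules live in those coefficient arrays, which is exactly why researchers describe the result as a black box.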

Computer scientists often refer to neural nets as “black boxes” because you don’t need to see what’s happening inside the box for it to work. Programmers don’t hard-code what is inside of the black box — it’s learning and developing on its own. And even if someone were to look inside, all they would see is a massive number of changing coefficients and variables that would be uninterpretable, Zhu said.

[Graphic: Autumn Brown/The Badger Herald]

This is different from computer programs of the past, where programmers had to hard-code an algorithm for every task they wanted their program to perform.

“The major difference is we didn’t precisely code the [neural net], we only gave it training examples — we didn’t really describe what a cat should look like,” Zhu said.

But the ease of use that comes with neural networks brings with it the potential for unwanted biases to seep into these programs, especially given their dependence on external data and the opacity of the models these programs create.

Zhu said it’s up to users to understand the ways in which deep learning can go astray.

“Step one is always awareness. The user, the main user, the practitioner needs to understand that danger,” Zhu said. “The danger could be lurking in your data. That’s the most concerning thing to us. And you need to somehow identify it, you being the practitioner.”

Accounting for bias

Computer science professor Aws Albarghouthi studies fairness in AI and the technology’s potential to absorb bias. He said there are a number of current and potential applications for deep learning software that can affect people’s lives.

One such field is job hiring. Screening resumes with deep learning, for example, has the potential to save HR managers an enormous amount of time, but it may unfairly screen out individuals if the underlying data is itself biased.

Albarghouthi said that how one defines fairness is just as important as detecting bias in the first place. People can look at fairness at the individual level or at the demographic level and perhaps come to different conclusions about what fairness should look like.

“Cases depend on what society you’re operating in and what field you’re operating in, whether it’s hiring or giving loans or things of that form,” Albarghouthi said. “People keep arguing back and forth over what’s a good definition of fairness, and there’s plenty of definitions.”
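
To see how two reasonable definitions can pull apart, here is a small illustration in Python using invented screening decisions; the groups, scores and hiring rules are all made up and stand in for no real data set.

# Two common ways to formalize "fairness," applied to invented screening data,
# showing that a rule can satisfy one definition while failing the other.
import numpy as np

rng = np.random.default_rng(1)
n = 10000
group = rng.integers(0, 2, n)          # a protected attribute: group 0 or 1
true_skill = rng.random(n)
# Invented premise: the recorded score under-rates group 1 by a fixed amount.
score = true_skill - 0.1 * group

# Rule A: one cutoff for everyone.
hired_a = score > 0.6
# Rule B: a lower cutoff for group 1, chosen to even out the selection rates.
hired_b = score > np.where(group == 0, 0.6, 0.5)

for name, hired in (("one cutoff", hired_a), ("group-adjusted cutoffs", hired_b)):
    r0, r1 = hired[group == 0].mean(), hired[group == 1].mean()
    print(f"{name}: selection rate group 0 = {r0:.2f}, group 1 = {r1:.2f}")

# Rule B equalizes the group rates, but two applicants with the same recorded
# score of 0.55 now get different answers depending on their group.
print("score 0.55, group 0 hired:", bool(0.55 > 0.6))
print("score 0.55, group 1 hired:", bool(0.55 > 0.5))

Rule A treats identical scores identically but leaves the groups hired at different rates; rule B equalizes the rates but treats two applicants with the same score differently. That is the kind of trade-off behind the disagreements Albarghouthi describes.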

Even though philosophers and social scientists continue to debate what fairness ought to look like, UW researchers have made headway in combating biased machine learning.

Harry Potter and the chamber of machine learning

In a forthcoming paper, Zhu proposes one way to fight bias in hiring using a mock data set drawn from the Harry Potter universe. Instead of looking at instances where data is inaccurate, his research looks at how even accurate data can lead to problems.

“It could be that the data is reflecting reality, and you don’t like that reality,” Zhu said.

Using data from the Ministry of Magic’s hiring history, Zhu illustrates how a neural net could learn the wrong lessons from accurate data. He feeds the neural net hiring data that includes repeated instances where Muggle-born Hogwarts graduates with the same qualifications as pure-bloods are denied positions.

In the Harry Potter universe, Muggle-borns suffer from systemic prejudices which would account for the data. A naive neural net would use this hiring data and conclude, wrongly, that pure-bloodedness makes an applicant more qualified for a job.

One could imagine instances in the real world where underrepresented groups would be similarly disadvantaged by the use of a neural net. To combat this, Zhu uses what he calls trusted items: individual data points that tell the neural net to ignore blood status and focus on educational achievement.

In the real world, this could be used to counter historically-biased hiring trends.
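
Because the paper is still forthcoming, the sketch below only gestures at the idea; it is not Zhu’s actual method. It stands in for it with a generic trick (a few hand-vetted examples given a very large weight) applied to invented Ministry of Magic hiring data.

# A rough illustration of the "trusted items" idea with invented Ministry of
# Magic hiring data. This is NOT Zhu's actual method; it substitutes a generic
# technique: a few hand-vetted examples added with a very large weight.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n = 500
muggle_born = rng.integers(0, 2, n)            # 1 = Muggle-born, 0 = pure-blood
qualification = rng.random(n)

# Historical hiring data: qualified Muggle-borns are routinely turned away.
hired = ((qualification > 0.5) & (muggle_born == 0)).astype(int)
X = np.column_stack([muggle_born, qualification])

naive = LogisticRegression(max_iter=1000).fit(X, hired)
print("naive model, weight on blood status:", round(naive.coef_[0][0], 2))

# "Trusted items": a few vetted cases of qualified Muggle-borns being hired,
# added to the data with a weight large enough to push back on the history.
X_trusted = np.array([[1, 0.9], [1, 0.8], [1, 0.7]])
y_trusted = np.array([1, 1, 1])
weights = np.concatenate([np.ones(n), np.full(3, 200.0)])

corrected = LogisticRegression(max_iter=1000).fit(
    np.vstack([X, X_trusted]),
    np.concatenate([hired, y_trusted]),
    sample_weight=weights,
)
print("corrected model, weight on blood status:", round(corrected.coef_[0][0], 2))

In this toy run, the penalty the naive model attaches to being Muggle-born shrinks once the trusted examples are included; how Zhu actually selects and uses trusted items is a matter for his paper, not this sketch.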

AI programs in Wisconsin’s courts

While the need for Zhu and Albarghouthi’s research may seem distant, Wisconsin has already seen controversies surrounding the use of such AI programs.

“We can’t verify that these systems are being fair, or if justice is being served in these cases.”

In 2015, the Wisconsin Supreme Court saw a case about a man who was sentenced to six years in prison for stealing a car and fleeing from an officer. The man who was charged, Eric Loomis, appealed his case because the judge used a risk assessment algorithm — powered by AI — to help make the sentencing decision.

The Correctional Offender Management Profiling for Alternative Sanctions assessment, or COMPAS assessment for short, calculated that Loomis was likely to commit another crime, making him a poor candidate for probation.


Loomis argued that the use of the COMPAS system was a violation of his rights because it took his gender into account when making the assessment, ranking males as more likely to reoffend. Additionally, he couldn’t determine whether the program’s assessment was accurate because COMPAS is the intellectual property of Northpointe, Inc. — meaning the public can’t have access to the program. Northpointe, Inc. considers the programming behind COMPAS a “trade secret.”

Because of restrictions like these, Barnard said researchers can’t determine whether there is bias in the data the neural net was trained on, or whether the AI program itself is fair. This is especially concerning with programs used for sentencing decisions, which can have a “huge impact” on someone’s life, Barnard said.

“We can’t verify that these systems are being fair, or if justice is being served in these cases,” Barnard said. “I think it’s a really big problem.”

In May 2016, ProPublica collected COMPAS scores of 18,610 people assessed at a county sheriff’s office in Florida in 2013 and 2014. From looking at the scores, ProPublica determined that the COMPAS program frequently predicted black defendants to be at a higher risk of recidivism than they actually were, while white defendants were often predicted to be at a lower risk of recidivism than they actually were.

And while ProPublica was able to draw conclusions from the scores the COMPAS program makes, AI researchers still can’t take an in-depth look at the program itself, nor the data that was used to train it.
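
Audits of that kind work from the scores and recorded outcomes alone. The sketch below shows the shape of such a calculation on invented numbers; it uses no real COMPAS data and is not ProPublica’s code.

# The shape of an outside audit like ProPublica's: with the program itself off
# limits, compare error rates across groups using only scores and outcomes.
# Every number below is invented; this is not COMPAS data or ProPublica's code.
import numpy as np

rng = np.random.default_rng(3)
n = 2000
group = rng.integers(0, 2, n)                  # two demographic groups
reoffended = rng.random(n) < 0.35              # recorded outcome (invented)
# Invented risk scores that run higher for group 1 regardless of the outcome.
risk_score = rng.random(n) + 0.3 * reoffended + 0.15 * group
high_risk = risk_score > 0.8

for g in (0, 1):
    no_reoffense = (group == g) & ~reoffended
    false_positive_rate = (high_risk & no_reoffense).sum() / no_reoffense.sum()
    print(f"group {g}: labeled high risk despite not reoffending: "
          f"{false_positive_rate:.2f}")

An audit like this can surface a gap in error rates, but, as Barnard notes, it cannot explain why the gap exists while the program and its training data remain closed.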

The court ruled against Loomis because COMPAS assessments are “merely one tool” that judges can use to come to a decision. Today, COMPAS assessments may be included if a Wisconsin judge requests a presentence investigation report from the Department of Corrections, Supreme Court spokesperson Tom Sheehan said in an email to The Badger Herald.

But the court made this decision with caution. If a COMPAS assessment is included in a report given to a judge about a defendant, that report is now required to inform the court about a number of risks:

  1. The secret nature of COMPAS makes it hard to determine how the risk scores are calculated.
  2. COMPAS’ data is national and has not been cross-validated with Wisconsin data.
  3. Some studies have indicated COMPAS may disproportionately assess minorities to be at a higher risk of reoffense.
  4. COMPAS should be constantly updated to account for changing populations.

The Wisconsin Supreme Court advised court staff to use their “professional judgement” and override the COMPAS assessments when necessary.

Future implications of AI

Despite these risks, deep learning has the potential to revolutionize a variety of industries.

Ronak Mehta, a computer science Ph.D. student working in biostatistics and medical informatics, is currently working on using machine learning in biomedicine to predict diseases or find trends in population health. Barnard is working to create an AI program to help doctors more easily diagnose patients. And if even more detailed health records data was collected, AI programs could complete even more complex tasks.

“Even if you have a very bright doctor, … they would not be able to process millions of health records of people across the U.S.,” Mehta said. “That’s where artificial intelligence is supposed to come in.”

Artificial intelligence also has potential future applications in robotics, defense, facial and voice recognition programs, self-driving cars, advertising, the stock market and other industries.

With these new developments will come new risks, but Mehta said he thinks AI researchers will be able to solve them as they come — even if there’s an intelligence explosion.

“My impression is that [superintelligence] is not something to worry about because we’re trying to solve similar problems that we have right now,” Mehta said. “And the solutions we come up with for these similar problems in privacy, fairness, bias will also help us solve the superintelligence problems when we get there.”

But one area Mehta isn’t so confident will be able to handle AI’s advancements is public policy. Should AI programs be open to public scrutiny, or should they be private intellectual property? How do you ensure AI systems aren’t weaponized? If something goes wrong with an AI in the medical field or with a self-driving car, how do you make policies to determine who is at fault?

Barnard had similar concerns. He said lawmakers need to understand how AI works, so they know how to handle these issues when they inevitably come.

“Whoever’s making the rules, the policies, they’re going to have to get a lot more tech savvy very quickly,” Barnard said.
