“What is fair?” It sounds like a rhetorical question. But for Michigan State University’s Pang-Ning Tan, it’s a question that demands an answer, because artificial intelligence systems play an increasingly important role in deciding who gets health care, a bank loan or a job.

With funding from Amazon and the National Science Foundation, Tan has been working for a year to teach artificial intelligence algorithms how to be fairer and how to recognize when they are unfair.

Professor Pang-Ning Tan

“We are trying to design AI systems that are not just technological, but also bring value and benefit to society. So I started to think about the areas that are really difficult for society at the moment,” said Tan, a professor in MSU’s Department of Computer Science and Engineering.

“Fairness is a really big deal, especially as we become more and more dependent on AI for everyday needs, like health care, but also for things that seem mundane, like filtering spam or placing articles in your news feed.”

As Tan mentioned, people already trust AI in a variety of applications, and the consequences of unfair algorithms can be profound.

For example, studies have revealed AI systems that made it more difficult for Black patients to access health care resources. And Amazon scrapped an AI recruiting tool that penalized female candidates in favor of male ones.

Tan’s research team is tackling such problems on several fronts. The Spartans are examining how people use data to train their algorithms. They are also studying ways to give algorithms access to more diverse information when making decisions and recommendations. And their work with NSF and Amazon attempts to broaden how fairness has typically been defined for AI systems.

A conventional definition would look at fairness from an individual’s point of view; that is, whether a person would consider a particular outcome to be fair or unfair. It’s a sensible start, but it also opens the door to conflicting, if not contradictory, definitions, Tan said. What is right for one person may be unfair for another.

Tan and his research team therefore borrow ideas from the social sciences to construct a definition that includes the perspectives of groups of people.

“We’re trying to educate AI about fairness, and in order to do that you have to tell it what’s fair. But how do you design a measure of fairness that is acceptable to everyone?” Tan said. “We examine how a decision affects not only individuals, but also their communities and social circles.”

Consider this simple example: three friends with identical credit scores apply for loans of the same amount from the same bank. If the bank approves or denies all three, the friends would perceive that as fairer than a case where only one of them is approved or denied, which could suggest the bank relied on outside factors the friends might deem unfair.

Tan’s team is developing a way to essentially score or quantify the fairness of different outcomes so that AI algorithms can identify the fairest options.
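The article does not describe the team’s actual scoring method, but the three-friends loan example suggests one simple way such a score could work: treat fairness within a group of similar applicants as consistency of outcomes. The sketch below is purely illustrative; the function name and metric are assumptions, not the researchers’ algorithm.

```python
# Hypothetical sketch: score group-level fairness as outcome consistency
# among applicants with identical relevant features (e.g., credit scores).
from itertools import combinations

def group_consistency(outcomes):
    """Fraction of pairs of similar applicants who received the same
    decision. 1.0 means everyone in the group was treated alike."""
    pairs = list(combinations(outcomes, 2))
    if not pairs:  # fewer than two applicants: trivially consistent
        return 1.0
    agree = sum(1 for a, b in pairs if a == b)
    return agree / len(pairs)

# Three friends with identical credit scores; 1 = approved, 0 = denied.
print(group_consistency([1, 1, 1]))  # all approved -> 1.0
print(group_consistency([1, 0, 0]))  # only one approved -> lower score
```

A score like this could let an algorithm compare candidate decisions and prefer the one that treats similar people most alike, which is the intuition behind the loan example above.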

Of course, the real world is much more complex than this example, and Tan is the first to admit that defining fairness for AI is easier said than done. But he has help, especially from the chairman of his department at MSU, Abdol-Hossein Esfahanian.

Abdol-Hossein Esfahanian, Associate Professor and Chairman of the Department of Computer Science and Engineering


Esfahanian is an expert in a field known as applied graph theory, which helps model connections and relationships. He also enjoys learning about related fields in computer science and is known to sit in on classes given by his colleagues, as long as they are comfortable having him there.

“Our faculty is fantastic at imparting knowledge,” Esfahanian said. “I needed to learn more about data mining, so I took one of Dr. Tan’s classes for a semester. From that point on, we started to communicate about research issues.”

Today, Esfahanian is a co-investigator of the NSF and Amazon grant.

“Algorithms are created by people, and people usually have biases, so those biases creep in,” he said. “We want fairness to be everywhere, and we want to better understand how to measure it.”

The team is making progress on this front. Last November, they presented their work at an online meeting hosted by NSF and Amazon, as well as at a virtual international conference hosted by the Institute of Electrical and Electronics Engineers.

Tan and Esfahanian said the community – and funders – were excited about the Spartans’ progress. But the two researchers also admitted that they were just getting started.

“This is ongoing research. There are a lot of issues and challenges. How do you define fairness? How can you help people trust these systems that we use every day?” Tan said. “Our job as researchers is to find solutions to these problems.”
