
For some UF researchers, the race to build AI is outpacing efforts to make it fair

Experts weigh in on how bias can emerge in data, design and context

The outside of the Gainesville Police Department building, Sunday, April 5, 2026, in Gainesville, Fla.

Read other stories from the "These stories were not AI-generated" special edition here.

For Ray Opoku, there’s no doubt artificial intelligence is biased.

“The question is whether or not we have the infrastructure and accountability to catch it, identify it and correct it,” he said.

The data scientist and adjunct UF professor is one of several local researchers who say the systems behind AI can reflect bias in ways that shape real-world outcomes.

At UF, where major investments have expanded AI’s presence across campus, some researchers say efforts to make these systems fair are not keeping pace with how quickly they are being developed and used.

Algorithmic bias occurs when a computer system consistently produces unfair results, giving advantages to one group over another. Algorithms are the rules computers use to process data and make decisions.

Because AI depends on algorithms to process information, the design of those algorithms directly affects how the system behaves and who it benefits or disadvantages.

This means AI may treat people differently in real-world applications based on demographic factors like race, gender or income level, even when no one intends it to. Researchers say those disparities can shape outcomes in fields such as education, hiring and health care, where AI is becoming more common.
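
To make that concrete, here is a minimal sketch of one common check data scientists use: comparing how often a system hands out a favorable decision to each group. The decisions, group labels and two-group setup below are hypothetical, purely for illustration.

```python
# A minimal sketch of one common check for algorithmic bias: comparing
# a model's favorable-decision rate across two demographic groups.
# All data here is hypothetical, purely for illustration.
import numpy as np

# 1 = favorable decision (e.g., approved), 0 = unfavorable
decisions = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])
# Group membership for each decision
groups = np.array(["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"])

rate_a = decisions[groups == "A"].mean()
rate_b = decisions[groups == "B"].mean()

# A gap near 0 means both groups receive favorable outcomes at similar
# rates; a large gap is one signal the system may advantage one group.
print(f"Group A favorable rate: {rate_a:.2f}")
print(f"Group B favorable rate: {rate_b:.2f}")
print(f"Parity gap: {abs(rate_a - rate_b):.2f}")
```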

Opoku said AI bias has become a “hot-button issue.” He became interested in algorithmic fairness during his doctoral studies, when the COVID-19 pandemic pushed education online.

According to Opoku, bias in AI mainly comes from three places: the data itself, which may not represent everyone; engineers’ design choices; and the context in which the model is used, because an algorithm can behave differently outside its original setting. 

For instance, a model flagging at-risk students in one school may rely on patterns specific to that school’s student population. If the same algorithm is applied at another school with different demographics, teaching methods or resources, it might misidentify students’ needs — overlooking some who need help while incorrectly flagging others.
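
A toy simulation can show the effect Opoku describes. In the sketch below, the two “schools,” the single absences feature and the strength of its relationship to risk are all invented; the point is only that a model fit in one context can lose accuracy in another.

```python
# A toy sketch of the "context" problem: a model fit to one school's
# population can lose accuracy where the underlying relationships
# differ. All data is synthetic, for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_school(n, absence_effect):
    # One feature: absences per term. Risk depends on absences, but the
    # strength of that relationship varies by school.
    absences = rng.poisson(5, size=n)
    p_at_risk = 1 / (1 + np.exp(-(absence_effect * absences - 3)))
    at_risk = rng.random(n) < p_at_risk
    return absences.reshape(-1, 1), at_risk

X1, y1 = make_school(2000, absence_effect=0.8)  # School 1: absences strongly predictive
X2, y2 = make_school(2000, absence_effect=0.2)  # School 2: much weaker relationship

model = LogisticRegression().fit(X1, y1)
print("Accuracy at School 1 (training context):", model.score(X1, y1))
print("Accuracy at School 2 (new context):     ", model.score(X2, y2))
```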

Historically, datasets have excluded marginalized groups because those populations were less likely to be included in surveys, records and institutional data. When groups are underrepresented in the data, algorithms learn from an incomplete dataset, which can lead to unfair or inaccurate decisions.
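
The sketch below illustrates that mechanism with synthetic data: when one group makes up only 5% of the training set and follows a different pattern, a single model can be accurate for the majority group and little better than a coin flip for the minority group. The groups, features and proportions are hypothetical.

```python
# A small sketch of how underrepresentation can translate into unequal
# accuracy. Group B is only 5% of the (synthetic) training data and
# follows a different pattern, so the single model fits A well, B poorly.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

def make_group(n, flip):
    X = rng.normal(size=(n, 2))
    # The sign of the second feature's relationship differs by group.
    y = (X[:, 0] + (-X[:, 1] if flip else X[:, 1])) > 0
    return X, y

Xa, ya = make_group(1900, flip=False)  # 95% of training data
Xb, yb = make_group(100, flip=True)    # 5% of training data

model = LogisticRegression().fit(np.vstack([Xa, Xb]),
                                 np.concatenate([ya, yb]))
print("Accuracy on group A:", model.score(Xa, ya))  # high
print("Accuracy on group B:", model.score(Xb, yb))  # near chance
```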


In 2018, Amazon scrapped a recruiting tool after it was found to downgrade resumes from women because it had been trained on mostly male resumes.
More recently, a 2025 study found AI loan approval systems can place Black borrowers into higher-interest loans even when their financial risk is the same as others.

Opoku said AI fairness tends to attract fewer researchers than other areas of AI because it’s less mainstream.

“Unless you are looking deeply into the dataset, unless you're looking deeply into everything, you don't really see it,” he said. 

To Juan E. Gilbert, a professor in UF’s Department of Computer & Information Science and Engineering, it’s important people know algorithmic bias exists.

Gilbert developed a framework for fair AI with then-doctoral candidate Brianna Richardson, who has since earned her Ph.D.

“Previous research has found that there exist major gaps between practitioners and fairness experts, indicating a lack of communication between those producing fair AI and those intended to apply it,” Gilbert and Richardson wrote.

Since publishing the paper in 2021, Gilbert said little has changed in bridging the divide between fairness researchers and the practitioners building AI systems.

These gaps highlight some of the ongoing challenges in creating fair AI, including conflicting definitions of fairness. Gilbert said that while definitions of fairness and bias are in the “eye of the beholder,” they become clear when encountered in practice.

According to the framework created by Gilbert and Richardson, the first recorded case of algorithmic bias was found in an algorithm used to screen applicants to St. George’s Hospital Medical School in the 1980s. 

The algorithm placed weight on applicants’ names and birthplaces, particularly disadvantaging applicants with nonwhite-sounding names.

“You cannot understate the severity of bias in certain domains, because they have real world consequences,” Gilbert said.

Health care professionals often confront the limits of AI firsthand. Dr. Chris Goldstein, a clinical anesthesiology associate professor at the UF College of Medicine, leads the Artificial Intelligence and Medical Device Workgroup at UF alongside his wife, Dr. Heidi Goldstein.

He said discrimination has also appeared in medical technology itself.

For example, pulse oximeters — devices used to measure blood oxygen levels — were found to overestimate oxygen levels in patients with darker skin, sometimes causing clinicians to miss life-threatening conditions.

"In data science … they have this [saying]: ‘Garbage in, garbage out.’ And the same is true for AI algorithms: bias in, bias out,” he said.

According to Goldstein, improving these systems will also require more representative datasets, continuous monitoring and development teams made up of people with different backgrounds and areas of expertise.
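
As a rough illustration of what continuous monitoring could look like in code, the sketch below recomputes a per-group error gap on each new batch of predictions and raises a flag when the gap passes a threshold. The batch data, group names and threshold are hypothetical choices, not anything Goldstein’s workgroup prescribes.

```python
# A hedged sketch of continuous monitoring: recompute a per-group
# error gap on each batch of predictions and flag large gaps.
import numpy as np

THRESHOLD = 0.10  # maximum tolerated gap in error rate between groups

def error_gap(y_true, y_pred, groups):
    gaps = {}
    for g in np.unique(groups):
        mask = groups == g
        gaps[g] = np.mean(y_true[mask] != y_pred[mask])
    return max(gaps.values()) - min(gaps.values()), gaps

# Simulated weekly batch; the model errs more often on group B here.
rng = np.random.default_rng(2)
y_true = rng.integers(0, 2, 500)
groups = rng.choice(["A", "B"], 500)
noise = np.where(groups == "B", 0.30, 0.10)
y_pred = np.where(rng.random(500) < noise, 1 - y_true, y_true)

gap, per_group = error_gap(y_true, y_pred, groups)
print("Per-group error rates:", per_group)
if gap > THRESHOLD:
    print(f"ALERT: error-rate gap {gap:.2f} exceeds {THRESHOLD:.2f}")
```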


He said making AI fair will also depend on the same principle clinicians already use: never trusting technology blindly. 

“It’s always just an extra tool,” Goldstein said. “We never trust one device 100%.”

Contact Julianna Bendeck at jbendeck@alligator.org.


Julianna Bendeck

Julianna is a first-year journalism student and The Alligator's Spring 2026 race and equity reporter. She was previously an editor for Eagle Media, Florida Gulf Coast University's student newspaper. In her free time, she enjoys playing video games and reading. She is hoping to attend law school in the future. 

