The Computational Demography Working Group held a forum Dec. 6 during which Kaiping Chen, assistant professor of computational communication at the University of Wisconsin, discussed her research on equity in science communication.
The forum covered three main topics within Chen's research: how inequality is reproduced in digital content creation, the stereotypes portrayed in that content and the relationship between artificial intelligence and equity.
“One of the key questions that motivated my research work is ‘how might the disparities in the offline science world, like in the science community or in people’s living environment, extend to these new communication technologies?’,” Chen said.
In one study, Chen traced the sources cited in science YouTube videos to identify where the content's information originated. Chen said she documented thousands of channels, 77 of which were cited most frequently.
Examining these "core producers," Chen found that around 50% were associated with female-related names and that the focus of the channels ranged from entertainment to fostering scientific knowledge.
“When we look at these channels who are in the center of the network, we do see a pattern of profile diversity in terms of gender display, and in terms of who these content creators are and the channel focus,” Chen said.
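The article does not describe how Chen's team identified these central channels, but a minimal sketch of the general approach, ranking channels by how often others cite them in a citation network, might look like the following. The edge list and channel names here are made-up examples, not Chen's data or code.

```python
# Illustrative sketch only: finding the most-cited "core producer"
# channels from channel-to-channel citation links. The edges below
# are hypothetical; the article does not describe Chen's pipeline.
import networkx as nx

# Hypothetical citation edges: (citing_channel, cited_channel).
citations = [
    ("chanA", "chanX"), ("chanB", "chanX"),
    ("chanC", "chanY"), ("chanA", "chanY"), ("chanB", "chanY"),
]

G = nx.DiGraph()
G.add_edges_from(citations)

# Rank channels by in-degree, i.e., how often other channels cite them.
core_producers = sorted(G.in_degree(), key=lambda pair: pair[1], reverse=True)
print(core_producers[:2])  # e.g., [('chanY', 3), ('chanX', 2)]
```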
Chen also examined how video captions containing female-related words such as "she," "her" or "mom" and captions containing male-related words such as "he" and "him" portray gender stereotypes, finding a difference in the type of moral language attached to each gender.
Specifically, videos that discussed science information with female cues were much more likely to use care- and loyalty-related words, Chen said, while videos with male-related words were more likely to use authority-related words.
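The article does not detail Chen's method, but dictionary-based caption analysis of this kind can be sketched as follows. The cue lists, moral-word lists and the caption_profile helper are illustrative assumptions, not Chen's actual lexicons.

```python
# Illustrative sketch only: counting gender cues and moral-language
# words in a caption, using hypothetical word lists (assumptions).
import re
from collections import Counter

FEMALE_CUES = {"she", "her", "mom"}
MALE_CUES = {"he", "him"}
CARE_LOYALTY = {"care", "protect", "support", "loyal", "family"}
AUTHORITY = {"authority", "obey", "command", "leader", "rank"}

def tokenize(text: str) -> list[str]:
    return re.findall(r"[a-z']+", text.lower())

def caption_profile(caption: str) -> dict:
    """Count a caption's gender cues and moral-language words."""
    counts = Counter(tokenize(caption))
    return {
        "female_cues": sum(counts[w] for w in FEMALE_CUES),
        "male_cues": sum(counts[w] for w in MALE_CUES),
        "care_loyalty": sum(counts[w] for w in CARE_LOYALTY),
        "authority": sum(counts[w] for w in AUTHORITY),
    }

print(caption_profile("She shows how a mom can care for and protect her family."))
```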
Lastly, Chen's research included a study of how AI converses with diverse social groups about controversial topics such as climate change and the Black Lives Matter movement. Chen found that less educated groups viewed the AI platform much more negatively after their conversations than more educated groups did, but there was no observable difference across gender or racial and ethnic groups.
“We see that when it [GPT-3] talked to those science deniers, it [GPT-3] used more negatively related emotion sentiment. This may partially explain why those groups who are an opinion minority hate the chat because it’s really how the GPT-3 gave its responses,” Chen said.
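Comparing the sentiment of a chatbot's replies across user groups, as Chen describes, could be sketched with an off-the-shelf sentiment scorer. The replies, group labels and the choice of the vaderSentiment library below are illustrative assumptions, not Chen's data or tooling.

```python
# Illustrative sketch only: averaging the sentiment of chatbot replies
# received by each user group. Replies here are made-up examples.
from statistics import mean
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer

analyzer = SentimentIntensityAnalyzer()

# Hypothetical (group, chatbot_reply) pairs.
replies = [
    ("opinion_minority", "That claim is simply wrong and harmful."),
    ("opinion_minority", "You are mistaken about the evidence."),
    ("opinion_majority", "Great point, the science supports that."),
]

# Collect compound sentiment scores (-1 to 1) per group.
by_group: dict[str, list[float]] = {}
for group, text in replies:
    score = analyzer.polarity_scores(text)["compound"]
    by_group.setdefault(group, []).append(score)

for group, scores in by_group.items():
    print(group, round(mean(scores), 3))
```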
Ultimately, Chen's research advances her main goal of addressing equity issues and uplifting underserved communities. With her team, Chen continues to build on this work to find solutions that empower these groups.