At the start of the Covid-19 pandemic, the Centers for Disease Control and Prevention produced a simple graphic to illustrate how measures like mask wearing and social distancing could “flatten the curve” and reduce the peak in infections.
The graphic was widely shared by news sites and on social media platforms, but it often lacked a corresponding text description to make it accessible to blind people who use a screen reader to browse the web, excluding many of the 253 million people worldwide who have visual impairments.
Such alternative text is often missing from online charts, and even when it is included, it is frequently uninformative or even incorrect, according to qualitative data gathered by scientists at MIT.
These researchers conducted a study with blind and sighted readers to determine which text is useful to include in a chart description, which text is not, and why. Ultimately, they found that captions for blind readers should focus on the overall trends and statistics of the chart, not on its design elements or higher-level insights.
They also created a conceptual model that can be used to evaluate a graphic description, whether the text was generated automatically by software or manually by a human author. Their work could help journalists, academics and communicators create more effective descriptions for blind people and guide researchers in developing better tools to automatically generate captions.
“Ninety-nine point nine percent of the images on Twitter have no description – and that’s not hyperbole, it’s the real statistic,” says Alan Lundgard, a graduate student in the Computer Science and Artificial Intelligence Laboratory (CSAIL) and lead author of the paper. “Asking people to manually write these descriptions seems to be difficult for a number of reasons. Perhaps semi-autonomous tools could help with that. But it is crucial to do this preliminary participatory design work to figure out who the target of these tools is, so that we are not generating content that is either not useful to its intended audience or, in the worst case, wrong.”
Lundgard wrote the paper with senior author Arvind Satyanarayan, an assistant professor of computer science who leads the Visualization Group in CSAIL. The research will be presented at the Institute of Electrical and Electronics Engineers Visualization Conference in October.
Assessment of visualizations
To develop the conceptual model, the researchers planned to start by studying the graphics presented by popular online publications such as FiveThirtyEight and NYTimes.com, but they ran into a problem: these graphics mostly lacked textual descriptions. Instead, they collected descriptions of these charts from graduate students in an MIT data visualization class and via an online survey, then grouped the captions into four categories.
Level 1 descriptions focus on the elements of the chart, such as its title, legend, and colors. Level 2 descriptions describe statistical content, such as minimum, maximum, or correlations. Level 3 descriptions cover perceptual interpretations of the data, such as complex trends or clusters. Level 4 descriptions include subjective interpretations that go beyond the data and rely on the author’s knowledge.
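As a concrete illustration of the four-level model, the minimal sketch below pairs example caption sentences with the level they express. This is a hypothetical annotation written for this article, not code from the study; the names SemanticLevel and annotate are invented for illustration.

```python
# Illustrative sketch of the four-level model applied to caption sentences.
# All identifiers here are hypothetical, not from the researchers' code.
from enum import IntEnum


class SemanticLevel(IntEnum):
    ELEMENTAL = 1      # chart elements: title, axes, legend, colors
    STATISTICAL = 2    # descriptive statistics: extrema, means, correlations
    PERCEPTUAL = 3     # trends, clusters, and other perceptual patterns
    CONTEXTUAL = 4     # interpretations drawing on knowledge beyond the data


def annotate(sentence: str, level: SemanticLevel) -> dict:
    """Pair a caption sentence with the semantic level it conveys."""
    return {"sentence": sentence, "level": int(level)}


caption = [
    annotate("A line chart of daily Covid-19 infections over time.",
             SemanticLevel.ELEMENTAL),
    annotate("Infections peak in mid-April before declining.",
             SemanticLevel.PERCEPTUAL),
    annotate("Mitigation measures likely flattened the curve.",
             SemanticLevel.CONTEXTUAL),
]

for item in caption:
    print(item)
```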
In a study with blind and sighted readers, the researchers presented visualizations with descriptions at different levels and asked participants to rate their usefulness. While both groups agreed that Level 1 content on its own was not very useful, sighted readers rated Level 4 content the highest, while blind readers ranked that content among the least useful.
The survey results revealed that a majority of blind readers insisted that descriptions should not contain an author’s editorialization, but rather stick to simple facts about the data. On the other hand, most sighted readers preferred a description that told a story about the data.
“For me, a surprising finding about the usefulness of higher-level content is that it is very closely related to feelings about agency and control as a person with a disability. In our research, blind readers specifically did not want descriptions to tell them what to think about the data. They want the data to be accessible in a way that allows them to interpret it for themselves, and they want to have the agency to do that interpretation,” Lundgard says.
A more inclusive future
This work could have implications as data scientists continue to develop and refine machine learning methods for the automatic generation of captions and alt text.
“We are not yet in a position to do that, but it is not inconceivable to imagine that in the future we would be able to automate the creation of some of this higher-level content and build models that target Level 2 or Level 3 in our framework. And now we know what the research questions are. If we want to produce these automated captions, what should those captions say? We can be a little more targeted in our future research because we have these four levels,” says Satyanarayan.
In the future, the four-level framework could also help researchers develop machine learning models that can automatically suggest effective visualizations as part of the data analysis process, or models that can extract the most useful information from a chart.
This research could also inform future work by Satyanarayan’s group, which seeks to make interactive visualizations more accessible to blind readers who use a screen reader to access and interpret information.
“The question of how to ensure that charts and graphs are accessible to screen reader users is both a socially important equity issue and a challenge that can advance the state of the art in AI,” says Meredith Ringel Morris, director and principal scientist of the People + AI Research team at Google Research, who was not involved in this study. “By introducing a framework for conceptualizing natural language descriptions of information graphics that is grounded in end-user needs, this work helps ensure that future AI researchers will focus their efforts on problems aligned with end users’ values.”
Morris adds: “Rich natural language descriptions of data graphics will not only expand access to critical information for people who are blind, but will also benefit a much broader audience as eyes-free interactions via smart speakers, chatbots, and other agents become increasingly commonplace.”
This research was supported by the National Science Foundation.