

Innovative Software System Empowers Blind and Low-Vision Users in Data Representation

It is becoming increasingly feasible to create online data representations that people who are blind or have low vision can use. But traditional tools typically start from a pre-existing visual chart, which prevents these users from authoring their own data representations and limits their ability to engage with important information.

A research team from MIT and University College London (UCL) aims to redefine accessible data representation with a software system called Umwelt, which is German for “environment.” Umwelt allows users with visual impairments to generate customized multimodal data representations directly from datasets, without needing any initial visual chart.

Designed specifically for screen-reader users, Umwelt features an editor for uploading datasets and creating personalized representations, such as scatterplots, that combine three sensory modalities: visualization, textual description, and sonification (the transformation of data into non-speech audio).

The software supports various data types and includes an interactive viewer that lets blind or low-vision users explore data representations, switching easily between modalities to perceive and analyze data differently.
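
To make the design concrete, here is a minimal sketch of what a declarative specification pairing the three modalities might look like. This is a hypothetical illustration in TypeScript, not Umwelt’s actual API; every type and property name below is assumed.

```typescript
// Hypothetical sketch of a declarative multimodal specification.
// None of these type or property names come from Umwelt itself.

type FieldType = "quantitative" | "nominal" | "temporal";

interface Field {
  name: string;    // column name in the uploaded dataset
  type: FieldType; // drives which encodings make sense
}

interface MultimodalSpec {
  data: Record<string, unknown>[];                 // rows of the uploaded dataset
  visual: { mark: "point" | "line"; x: Field; y: Field };
  text: { groupBy: Field[]; summarize: Field };    // structured textual description
  audio: { pitch: Field; order: Field };           // sonification: value -> tone
}

// Example: a scatterplot, a grouped text description, and a pitch-mapped
// sonification, all derived from the same underlying fields.
const spec: MultimodalSpec = {
  data: [
    { date: "2024-01-02", price: 187.2 },
    { date: "2024-01-03", price: 184.3 },
  ],
  visual: {
    mark: "point",
    x: { name: "date", type: "temporal" },
    y: { name: "price", type: "quantitative" },
  },
  text: {
    groupBy: [{ name: "date", type: "temporal" }],
    summarize: { name: "price", type: "quantitative" },
  },
  audio: {
    pitch: { name: "price", type: "quantitative" },
    order: { name: "date", type: "temporal" },
  },
};
```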

A study involving five expert screen-reader users highlighted Umwelt’s user-friendliness and utility. Participants expressed that the tool not only empowered them to create data representations, a feature they noted was often absent in other tools, but also enhanced their ability to communicate data insights across different sensory techniques.

“It’s important to note that blind and low-vision individuals are not isolated; they often desire to discuss data with others,” explains Jonathan Zong, who leads the research and is a graduate student in electrical engineering and computer science. “I hope Umwelt encourages researchers to broaden their perspectives on accessible data analysis. It’s essential to view data visualization as just one segment of a more extensive multisensory framework.”

Zong is joined by co-authors Isabella Pedraza Pineros and Mengzhu “Katie” Chen, Daniel Hajas of UCL’s Global Disability Innovation Hub, and Arvind Satyanarayan, an MIT associate professor who leads the Visualization Group. The findings on Umwelt will be presented at the ACM Conference on Human Factors in Computing Systems (CHI).

Shifting Focus from Visualization

The team previously developed interactive interfaces that enrich the experience of screen-reader users exploring accessible data representations. They recognized that existing tools predominantly rely on converting visual charts into other forms, which prompted them to reconsider the approach to data representation.

With a goal to minimize reliance on visual representations in data analysis, Zong and Hajas, who lost his sight at 16, began collaborating on Umwelt over a year ago. They focused on how best to represent the same data using auditory, visual, and textual elements.

“We needed to establish a common ground among the three modalities. By creating a new approach to representation, while ensuring accessibility, we found that the combination becomes greater than its individual components,” notes Hajas.

The development process involved exploring the unique characteristics of each sensory channel. For instance, while a sighted person can take in the patterns of an entire scatterplot at once, a blind user listening to a sonification perceives the data sequentially, one tone after another.
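
As a rough illustration of that sequential quality, the sketch below maps data values to pitches and schedules the tones one after another. It uses the browser’s standard Web Audio API; the frequency range and note length are arbitrary choices for illustration, not Umwelt’s design.

```typescript
// Minimal sonification sketch using the standard Web Audio API.
// The mapping choices (220-880 Hz range, 0.3 s notes) are illustrative only.

function sonify(values: number[], noteSeconds = 0.3): void {
  const ctx = new AudioContext();
  const min = Math.min(...values);
  const max = Math.max(...values);

  values.forEach((v, i) => {
    // Map each value linearly into an audible pitch range.
    const t = max === min ? 0.5 : (v - min) / (max - min);
    const freq = 220 + t * (880 - 220);

    const osc = ctx.createOscillator();
    osc.frequency.value = freq;
    osc.connect(ctx.destination);

    // Tones are scheduled strictly one after another: the listener
    // perceives the series sequentially, unlike a glanceable chart.
    const start = ctx.currentTime + i * noteSeconds;
    osc.start(start);
    osc.stop(start + noteSeconds * 0.9);
  });
}

sonify([3, 5, 2, 8, 6]); // rising and falling pitch traces the data
```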

“Relying solely on direct translations of visual features into nonvisual elements often overlooks the distinct strengths and weaknesses of each modality,” Zong adds.

Umwelt is built for flexibility, letting users switch between modalities easily, since some tasks are better suited to one format than another.

Users begin by uploading a dataset. Umwelt employs heuristics to generate default representations across all selected modalities. For example, when provided with stock prices, it might create a multiline chart, a text-based format categorizing data by ticker symbol and date, and a sonification representing prices through varying tone lengths.
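
The article does not spell out those heuristics, but a plausible sketch of the general idea is a rule that inspects the dataset’s field types and picks a sensible default for each modality. All names below are hypothetical, not Umwelt’s actual rule set.

```typescript
// Hypothetical heuristic: inspect field types and choose default
// representations per modality. A guess at the general idea only.

type FieldType = "quantitative" | "nominal" | "temporal";
interface Field { name: string; type: FieldType }

interface Defaults {
  visual: string; // which chart to draw
  text: string;   // how to structure the textual description
  audio: string;  // what the sonification encodes
}

function defaultRepresentations(fields: Field[]): Defaults {
  const temporal = fields.find(f => f.type === "temporal");
  const quantitative = fields.find(f => f.type === "quantitative");
  const nominal = fields.find(f => f.type === "nominal");

  // Time + number + category (e.g. date, price, ticker) suggests a
  // multiline chart, a description grouped by category and date, and
  // a sonification with one tone per time step.
  if (temporal && quantitative && nominal) {
    return {
      visual: `multiline chart of ${quantitative.name} over ${temporal.name}, one line per ${nominal.name}`,
      text: `description grouped by ${nominal.name}, then ${temporal.name}`,
      audio: `one tone per ${temporal.name}, tone length encoding ${quantitative.name}`,
    };
  }
  // Two quantitative fields with no time axis suggests a scatterplot.
  if (!temporal && quantitative) {
    return { visual: "scatterplot", text: "row-by-row summary", audio: "pitch-mapped sweep" };
  }
  return { visual: "table", text: "flat list", audio: "none" };
}
```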

This heuristic approach helps prevent the initial blank-slate effect seen in many creative tools, especially critical in a multimodal context where users must interact with multiple formats.

The editor links actions across modalities, allowing adjustments in textual descriptions to automatically reflect in the sonification. Users can develop a multimodal representation, switch to a viewer for exploration, and then return to the editor for modifications.
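
One way to picture that linkage is a single shared specification that every modality view subscribes to, so an edit made through any one modality re-renders the others. The sketch below shows that general observer pattern; it is an assumption about the architecture, not Umwelt’s implementation.

```typescript
// Sketch of linked editing: one shared spec, many modality views.
// A tiny observer pattern; names are illustrative, not Umwelt's API.

type Listener<S> = (spec: S) => void;

class SharedSpec<S> {
  private listeners: Listener<S>[] = [];
  constructor(private spec: S) {}

  subscribe(render: Listener<S>): void {
    this.listeners.push(render);
    render(this.spec); // render once on attach
  }

  // Any modality's editor calls update(); every view re-renders.
  update(change: Partial<S>): void {
    this.spec = { ...this.spec, ...change };
    this.listeners.forEach(render => render(this.spec));
  }
}

interface Spec { groupBy: string; toneField: string }
const shared = new SharedSpec<Spec>({ groupBy: "ticker", toneField: "price" });

shared.subscribe(s => console.log("text view grouped by", s.groupBy));
shared.subscribe(s => console.log("sonification plays", s.toneField));

// Editing the textual grouping automatically updates the sonification too.
shared.update({ groupBy: "date" });
```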

Enhancing Data Communication

To evaluate Umwelt, the researchers created diverse multimodal representations to test the system’s versatility with different data types. The feedback from the expert screen reader users indicated that Umwelt was a valuable tool for creating, analyzing, and discussing data. One participant described it as an “enabler” that significantly streamlined their data analysis process. Users agreed that the platform could facilitate better communication about data with their sighted peers.

“Umwelt stands out due to its core principle of prioritizing a balanced multisensory data experience over visual formats,” states JooYoung Seo, an assistant professor at the University of Illinois at Urbana-Champaign, who was not involved in the study. “Often, nonvisual data representations are considered secondary to their visual counterparts, but this initiative challenges that perception and promotes a more inclusive view of data science.”

Looking ahead, the research team plans to release an open-source version of Umwelt for further development by others. They are also interested in incorporating tactile feedback as an additional modality, potentially integrating tools like refreshable tactile graphics displays.

“Beyond benefiting end users, I hope Umwelt serves as a foundation for exploring how people interact with and interpret multimodal representations, paving the way for ongoing design improvements,” says Zong.

This research was partially funded by the National Science Foundation and the MIT Morningside Academy for Design Fellowship.

Source: news.mit.edu
