Meet the brains behind a powerful new tool that lets anyone contribute to making the internet a more accessible place.
Illustration by Jean Wei
It was at lunch about a year ago that Niamh Parsley first had the idea for her thesis—a tool called Depict, which could radically improve the lives of the visually impaired. She was with her husband and Joe Stretchay, his friend from high school, and she watched how Stretchay, who is blind, navigated the meal. “I noticed a few mannerisms that he had [developed] to deal with his visual impairment,” she said. “I was thinking, ‘Oh wow, he must have all these little quick fixes for a ton of things that I had just never thought of.’” She was nearing the end of her time at Parsons, earning her MFA in Design and Technology, and realized that the Internet was a place where those little fixes were nearly impossible for the visually impaired to develop.
Last month, Parsley presented Depict, a crowd-sourced image description tool that could change the experience of browsing the web for the blind and visually impaired. The tool works in two parts—a browser extension for blind users that provides user-created descriptions of images around the Internet, and a website for sighted users to provide those requested descriptions. If a blind user clicks on an image of an apple tree that is not properly described in the HTML code, the photo will appear on the crowd-sourced website, where sighted users can write “apple tree.” The highest-rated description, based on sighted users’ votes, will then replace the original description and be read aloud to any blind user who scrolls over the photograph in the future. Parsley’s husband Jason Sanders helped her develop the final iteration of Depict, which is now available as an extension on Google Chrome browsers.
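The selection step described above—surfacing the highest-voted crowd description and using it as an image’s spoken text—can be sketched roughly as follows. This is a hypothetical illustration of the idea; Depict’s actual code and data structures are not public, and all names here are assumptions.

```typescript
// Hypothetical sketch of Depict's core flow: pick the top-voted
// crowd-sourced description for an image. Not Depict's actual code.

interface Description {
  text: string;   // e.g. "apple tree"
  votes: number;  // net votes from sighted users
}

// Return the highest-voted description, or null if none exist yet.
function bestDescription(candidates: Description[]): string | null {
  if (candidates.length === 0) return null;
  return candidates.reduce((a, b) => (b.votes > a.votes ? b : a)).text;
}

// In a browser-extension content script, the winning text would then
// replace the image's alt attribute so a screen reader announces it:
// img.setAttribute("alt", bestDescription(candidates) ?? img.alt);
```

Because screen readers read the `alt` attribute aloud, swapping in the winning description is enough to change what a blind user hears, without touching the page’s visible layout.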
Image courtesy of Depict
GOOD recently talked with Parsley about how she tackled the challenges facing blind Internet users, and about her dream of Depict launching a wave of web accessibility design.
What did the research for Depict look like? What steps did you take to gain an understanding of the challenges of web browsing for the visually impaired?
I did the research for about a year, and it started with a literal Google search of ‘How do blind people use the Internet?’ [laughs] It sounds really stupid, but for someone like me a year ago—and for most sighted users, I imagine—they don’t actually know how blind people access the web at all.
I did some primary research—I went to the American Foundation for the Blind and connected with Crista Earl, their director of web operations, and she helped me. She brought all her accessibility goodies and just let me watch her use them. And that was really beneficial for me, because part of the research is not just user testing of how people would use the products, but observing how they interact with the current tools. For example, she told me she prefers using an iPhone rather than an iPad because the phones’ screens are so small that they’re aimed toward single-item focus. That makes it easier for a blind user, who can only focus on one thing on one screen at a time anyway.
Niamh Parsley. Image courtesy of Depict
When you started this research, did you have an idea that image descriptions were something you’d like to improve upon?
As I started looking into how images are described on the web, I realized that a lot of coders use Alt-Text [the HTML attribute meant to describe the image] to game the system for search engine optimization. They’ll put in keywords that they want to come up in searches, which is horrible for blind users. Or I’ve found that some people just use it as a place to put the image filename. So sometimes you’ll see—or hear, if you’re using a screen-reader to read the Alt-Text—just ‘Five Six Six Two Five Dot JPG.’ Like, wow, that means nothing to me.
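The filename-as-description pattern Parsley describes is easy to detect mechanically. As an illustration only (this heuristic is not part of Depict), a tool could flag alt text that is just an image filename and queue the image for a human description:

```typescript
// Illustrative heuristic, not part of Depict: flag alt text that is
// merely a filename, e.g. "56625.jpg", which a screen reader would
// announce as "Five Six Six Two Five Dot JPG".
function looksLikeFilename(alt: string): boolean {
  return /^[\w-]+\.(jpe?g|png|gif|webp|svg)$/i.test(alt.trim());
}
```

An image flagged this way (or one with an empty `alt` attribute) is exactly the kind of candidate that would benefit from a crowd-sourced description.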
Depict ended up being a little bit of a Band-Aid fix, but it gives image descriptions to blind users right now and, most importantly, it gets the conversation started and raises awareness about accessibility on the web for blind users.
What do you see as the future for Depict?
I think there’s really a lot of possibility for collaborations and using this crowd-sourced tool to inform the AI stuff that’s being built. I know that Google, Samsung, and one other group in Montreal are doing a lot of research on getting computers to give accurate descriptions of images online. And tying in Depict with research like that could really help, because we have the human component. For example, I was trying out one of these tools and I plugged in a close-up image of a spider in a web, and the description that I got back was ‘Man on a tightrope.’ So those types of tools are great for really simple images, like a man in a field, but for more complicated images, a human eye could really help.
I think if I can just work with other people who also want to improve accessibility design, then that would be a huge success for Depict. I see possible partnerships with publications online—like a New York Times or a 538—partnering with them and proactively helping them improve their Alt-Texts. The more that people putting content on the web realize that what they’re doing is not accessible to 2 percent of the world’s population, the sooner we can get to making that content more accessible.
I want it to be in the world. I want it to be adding to the conversation about accessibility design. I don’t know about making money off of it quite yet.