Is the use of Artificial Intelligence (AI) to solve accessibility barriers just a fairy tale? Will AI solve nothing? Will AI solve everything? Or will AI—refined by human intelligence and ethics—be the real path to systemic accessibility?
As an industry, we’ve been working to make digital accessibility a reality for over twenty years. Yet, we have not made enough progress. And if we keep trying to solve the digital accessibility problem the way we’ve been trying since 1999, we will never win.
But all is not lost!
Amid frightening dangers and amazing opportunities, there is a Goldilocks Zone for the ethical use of AI in digital accessibility.
Where we’ve been
Despite working toward this goal of digital equality for over twenty years, most websites are still not accessible to people who have visual, auditory, fine motor, speech, or cognitive disabilities. The WebAIM Million 2023 report found that 96.3% of home pages had detectable WCAG 2 failures. This is obviously unacceptable, yet we are unlikely to make significant progress unless we find new, more efficient ways to make digital accessibility a reality.
Where we are
The Generative AI (GenAI) movement is happening whether we like it or not. As an industry, we cannot afford to sit by and watch, waiting to see what happens. Just as accessibility should not be an afterthought in the design and development of software, we must make accessibility an integral part of GenAI. If we don't, digital barriers will be created 10x faster than they are today, because that is the velocity of software development we can expect.
Where we’re going
Generative AI (GenAI) is a subset of AI focused on creating new content. This new content can be text, speech, images, video, music, software code, and more. Gartner defines GenAI this way: “Generative AI can learn from existing artifacts to generate new, realistic artifacts (at scale) that reflect the characteristics of the training data but don’t repeat it.” GenAI is already creating useful and unique content, including making new art, discovering new drugs, and writing first drafts of software code.
This form of AI raises fascinating possibilities and questions about the boundaries of artificial and human-generated content. It also requires careful consideration of ethical implications and responsible use. Let’s look at GenAI a bit more closely!
Generative AI and accessibility
GenAI, specifically Large Language Models (LLMs), presents a unique opportunity to address accessibility challenges more dynamically and contextually than ever before. Here are just some of the ways GenAI can aid accessibility:
Generating accessible alt text
Historically, alternative text has been a one-size-fits-all solution, often falling short of delivering meaningful context. GenAI can revolutionize this by generating tailored alt text that considers the image’s context on the webpage. For instance, an LLM can be given the image along with the surrounding page content and prompted to describe the image’s subject, composition, and purpose in that context. While this is a controversial place to use AI, it has incredible potential.
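As a rough illustration, here is a minimal sketch of prompting a multimodal LLM for context-aware alt text. It assumes the OpenAI Node SDK with an API key in the environment; the model name, prompt wording, and the suggestAltText helper are illustrative choices rather than a product recommendation, and the output should still be reviewed by a human before it ships.

```typescript
// Minimal sketch: context-aware alt text suggestions from a multimodal LLM.
// Assumes the official OpenAI Node SDK and OPENAI_API_KEY in the environment;
// the model name and prompt wording are illustrative only.
import OpenAI from "openai";

const client = new OpenAI();

async function suggestAltText(imageUrl: string, pageContext: string): Promise<string> {
  const response = await client.chat.completions.create({
    model: "gpt-4o", // any vision-capable model
    messages: [
      {
        role: "system",
        content:
          "Write concise alt text (under 150 characters) that conveys the purpose " +
          "of the image within the surrounding page content. If the image is " +
          "purely decorative, return an empty string.",
      },
      {
        role: "user",
        content: [
          { type: "text", text: `Surrounding page content:\n${pageContext}` },
          { type: "image_url", image_url: { url: imageUrl } },
        ],
      },
    ],
  });
  // Treat the result as a draft for human review, not a final answer.
  return response.choices[0].message.content ?? "";
}
```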
Enhancing visual assistance
Another promising application is the Be My AI app by Be My Eyes, which is powered by GPT-4. This app provides users with detailed information about visual elements captured by their camera, enhancing independence for visually impaired individuals. By pointing their camera at an object, users can receive comprehensive descriptions, helping them navigate their environment with greater ease. We have evidence of people who are blind using Be My AI to assist with inaccessible digital interfaces.
Automated accessibility support
At Deque, we are leveraging GenAI to power axe Assistant, a pioneering accessibility chatbot trained on Deque University’s extensive accessibility knowledge. axe Assistant offers 24/7 support, providing immediate answers to accessibility queries. It can generate HTML for accessible components, such as form labels and alt text, and provide guidance on various accessibility practices.
How to prepare
Accessibility cannot be a spectator to AI this year, nor ever again. We must embrace this new wave of computing.
When Amy Webb, CEO of the Future Today Institute, spoke at SXSW last year, she shared a valuable tool, released under a Creative Commons license, called ADM (Act, Decide, Monitor) to help us prioritize our actions in this AI era.
We can apply Amy Webb’s ADM tool to AI in our accessibility industry to ensure that accessibility is an integral part of AI. It can help us discover ways that AI can help break down digital barriers faster than they are being built.
| Act | Decide | Monitor |
| --- | --- | --- |
| Risk without action! | Near-term opportunity or risk | Long-term opportunity or risk |
What AI can do for digital accessibility today
Narrow AI (where systems are designed to perform a single task) can “look” at rendered digital UI, as well as underlying code if it is available, and be taught how to accurately sift through mountains of data to identify accessibility issues, including:
- A “table cell” that visually acts as a column or row header but is not marked up as a table header
  - The “table cell” could be built with a non-semantic DIV and no ARIA
  - Machine learning can accurately compare this inaccessible “table cell” DIV to other human-curated examples of inaccessible “table cells”
- A “button” that only works with a mouse but does not work with a keyboard
  - The “button” could be built with a non-semantic DIV and no ARIA
  - Machine learning can accurately compare this inaccessible “button” DIV to other human-curated examples of inaccessible “buttons”. AI can then try to activate the “button” with a keypress and accurately report a WCAG 2.x 2.1.1 Keyboard issue if the “button” cannot be activated.
- Text embedded in an image that does not meet color contrast requirements
  - Machine learning can accurately identify text embedded in an image, select representative samples of the text color and the surrounding background color, and report any failures of WCAG 2.x 1.4.3 Contrast (Minimum) when it is x% confident. If the confidence is lower than x%, the issue can be marked as needing human review.
These AI models can be programmed to ask for human review if there is not enough information to be x% confident, where x% can be adjusted to the level of accuracy you are comfortable with.
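To make that confidence gate concrete, here is a minimal sketch in TypeScript. The color sampling itself would come from a vision model and is out of scope; the judgeTextContrast helper, the sample confidence value, and the 90% default threshold are illustrative assumptions, while the luminance and contrast math follows the WCAG 2.x definitions.

```typescript
// Minimal sketch: turn AI-sampled text/background colors into a WCAG 1.4.3
// verdict, with a confidence gate that routes low-confidence cases to a human.
type RGB = { r: number; g: number; b: number }; // 0-255 per channel

// Relative luminance per WCAG 2.x.
function luminance({ r, g, b }: RGB): number {
  const lin = (c: number) => {
    const s = c / 255;
    return s <= 0.03928 ? s / 12.92 : Math.pow((s + 0.055) / 1.055, 2.4);
  };
  return 0.2126 * lin(r) + 0.7152 * lin(g) + 0.0722 * lin(b);
}

// Contrast ratio = (lighter + 0.05) / (darker + 0.05).
function contrastRatio(a: RGB, b: RGB): number {
  const [hi, lo] = [luminance(a), luminance(b)].sort((x, y) => y - x);
  return (hi + 0.05) / (lo + 0.05);
}

type Verdict = "pass" | "fail" | "needs-human-review";

function judgeTextContrast(
  text: RGB,
  background: RGB,
  samplingConfidence: number, // 0..1, reported by the vision model
  minimumConfidence = 0.9     // the "x%": tune to your accuracy tolerance
): Verdict {
  if (samplingConfidence < minimumConfidence) return "needs-human-review";
  // 4.5:1 is the WCAG 2.x 1.4.3 minimum for normal-size text.
  return contrastRatio(text, background) >= 4.5 ? "pass" : "fail";
}

// Example: white text sampled on a light grey background at 97% confidence.
console.log(judgeTextContrast({ r: 255, g: 255, b: 255 }, { r: 200, g: 200, b: 200 }, 0.97));
// -> "fail" (ratio is roughly 1.7:1, well below 4.5:1)
```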
Testing at the speed of AI
This may seem like a fairy tale, and you may need more time to be ready to trust AI. But given the rate at which digital content (new web pages, new app screens) is being created, we are in a losing battle if we insist on only using manual accessibility testing done by human experts.
Even if we could keep up with the testing needs for new pages and screens, we must also consider how frequently developers and content contributors update existing content. A manual accessibility test of a page or screen by a qualified expert is only accurate until that page or screen changes.
But all is not lost!
We can wisely use AI to augment our limited human resources. In fact, it is irresponsible not to use AI to assist in digital accessibility testing when the manual decision model for testing by a human is straightforward.
For example, a test is straightforward when you can write out the test process step by step and teach a person with a 6th-grade education to correctly identify what passes and what fails, and when a large amount of data is available to train and test the AI decision model. Meanwhile, we can focus our human brainpower on auditing the accuracy of AI accessibility testing results on representative samples, and conduct manual accessibility testing wherever the AI results fall below your tolerance level for accuracy.
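Here is a minimal sketch of that sample-and-audit loop. The data shapes, the sampleForAudit and triage helpers, and the 95% default tolerance are illustrative assumptions; the point is simply that humans audit a random sample of AI results and manually test whatever falls below the chosen tolerance.

```typescript
// Minimal sketch: audit AI accessibility results on a representative sample,
// then route low-confidence pages to manual testing. Types, helper names, and
// the default tolerance are illustrative.
interface AiPageResult {
  pageUrl: string;
  aiConfidence: number;           // 0..1, reported by the AI tester
  humanVerdictMatches?: boolean;  // filled in only for pages a human audited
}

// Pick a random sample of AI-tested pages for a human expert to re-check.
function sampleForAudit(results: AiPageResult[], sampleSize: number): AiPageResult[] {
  const shuffled = [...results].sort(() => Math.random() - 0.5);
  return shuffled.slice(0, sampleSize);
}

// After the audit: how often did the AI agree with the human, and which pages
// still need full manual testing because the AI was not confident enough?
function triage(results: AiPageResult[], audited: AiPageResult[], tolerance = 0.95) {
  const agreed = audited.filter((r) => r.humanVerdictMatches).length;
  const auditedAccuracy = audited.length > 0 ? agreed / audited.length : 0;
  const needsManualTest = results.filter((r) => r.aiConfidence < tolerance);
  return { auditedAccuracy, needsManualTest };
}
```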
Dangers of AI (the Three Bears)
As we use AI in digital accessibility, we must also consider and mitigate the real and present dangers of this new technology. The recommended approach is multifaceted and relies on both technology and an ethical framework. Let’s begin by looking at three of the dangers of AI (aka, the three bears) and what we can do to confront them:
1. Unethical Use of AI:
To protect against the unethical use of AI, we must have AI governance in each of our organizations. This governance involves creating clear guidelines and policies that define acceptable uses of AI. We must have transparency in the AI we use, including understanding the mechanics behind AI decision-making, so that humans keep the ability to fact-check the accuracy of AI decisions. We must also make developers, content contributors, and all employees accountable for what AI data they use and the actions and decisions made with that data. We must promote a culture of responsibility.
For example, many vehicles today offer AI to help the driver do important things like stay in the proper lane and avoid a collision when changing lanes. Even though the driver is using AI to help them be safe, they are still responsible for keeping the car in the correct lane and not driving into other vehicles or objects.
2. Bias in AI:
Bias in AI is usually present due to historical biases in the AI training data. In other words, “biased data in” results in “biased data (and decisions) out.” Examples of harmful bias in AI include job application screening and bank loan approval models that illegally discriminate against protected classes, and facial recognition models trained only on images of people with light skin tones.
In reality, bias and discrimination can be hiding inside a human that you currently trust. The question is, will it be easier to identify bias in AI than in humans? Could we actually use AI to model fair and ethical decisions that consistently support human rights?
We must be vigilant, watching for bias in our AI data and results. When we see bias, we must report it and not rest until it is corrected. We can start by carefully curating diverse and representative datasets, ensuring that the data is inclusive and does not reproduce existing prejudices. Tools and techniques, such as fairness-aware algorithms and adversarial testing, can detect, measure, and reduce bias in AI models. Continuous monitoring and auditing are critical parts of this process, because biases can evolve over time. With a diverse workforce working in AI and collaboration between accessibility experts and data scientists, we can build AI systems that recognize and counteract harmful biases.
3. Over and under-reliance on AI:
Balancing how much we rely on AI is crucial to moving digital accessibility forward. We must avoid over-reliance that leads to unquestioning trust in AI decisions without critical evaluation. At the same time, under-reliance on AI is equivalent to blocking the opportunity to systematically solve digital accessibility issues faster than they’re created. To mitigate these dangers, we must step forward bravely and wisely into this era of AI and champion the concept of human-in-the-loop, where AI aids our human decision-making but does not replace it.
So, while AI has amazing potential, it is essential to proactively recognize and address these dangers. With ethical guidelines, monitoring, clear accountability, and continuous education, we can use AI for good. By providing insights into the AI decision-making processes, calculated confidence levels, and potential areas of uncertainty, we humans can stay in the driver’s seat, using AI to help us achieve our digital equality goal.
Human-centered AI
It’s not enough to discuss AI in theory. We must look at concrete examples of the ethical use of AI in accessibility today. The examples I’ve chosen are based on real-world use of AI at Deque, where we embrace what we call “Human-Centered AI”:
“Human-centered AI is an emerging discipline intent on creating AI systems that amplify and augment rather than displace human abilities.” —Noé Barrell, ML Engineer, Deque
Here, Noé describes our approach at Deque:
“Our approach to AI is human-centric because it provides the greatest accuracy in the results generated by the solution while delivering a very high ROI to users. We use AI in multiple ways such as object detection, OCR, and visual text and background rendering—including in different UI states. We combine it with heuristics while ultimately allowing humans to overrule our ML. This enables us to go beyond what is possible with pure heuristics, with zero false negatives, and give those responsible for delivering quality results the power to make decisions in complex situations.”
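To illustrate the idea (and not Deque’s actual implementation), here is a minimal sketch of one possible resolution order: a deterministic heuristic is trusted when one applies, a sufficiently confident ML verdict is used otherwise, an explicit human decision always wins, and everything else is routed to review. The types, names, and precedence are assumptions made for illustration.

```typescript
// Minimal sketch: combine heuristics and ML while keeping humans in charge.
// Names, types, and the precedence order are illustrative assumptions.
type Finding = "pass" | "fail" | "needs-review";

interface Evidence {
  heuristicResult?: Finding;                            // deterministic rule, when one applies
  mlResult?: { verdict: Finding; confidence: number };  // model prediction with confidence
  humanOverride?: Finding;                              // an explicit human decision
}

function resolve(e: Evidence, minimumConfidence = 0.9): Finding {
  if (e.humanOverride) return e.humanOverride;     // humans always overrule the ML
  if (e.heuristicResult) return e.heuristicResult; // trust deterministic rules next
  if (e.mlResult && e.mlResult.confidence >= minimumConfidence) {
    return e.mlResult.verdict;                     // accept a confident ML verdict
  }
  return "needs-review";                           // otherwise, ask a human
}
```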
Axe DevTools Intelligent Guided Tests (IGTs) and AI
Axe DevTools Intelligent Guided Tests (IGTs) use Narrow AI. But before axe DevTools uses AI, it runs the open-source axe-core automated rules to detect the basic WCAG issues that can be identified using traditional computing methods. Then, after reporting the axe-core issues, axe DevTools uses Narrow AI to:
- Reliably identify even more WCAG issues that previously required human vision and analysis but can now be detected using AI, including:
  - Complex text-color contrast issues, including text on top of an image or a complex gradient. In these complex cases, axe DevTools uses AI and the visual rendering of the text and background to automatically calculate the range of color contrast values and accurately report any WCAG 2.x 1.4.3 Contrast (Minimum) issues.
  - Validation of the accessible name for most form fields. axe DevTools uses Deque’s OCR model to extract the text that is visually associated with the form field, with a very high level of accuracy and confidence. When confidence is not high, which is not often, axe DevTools asks for human review.
  - Identification and accurate classification of inaccessible UI objects. Deque’s object detection model can identify inaccessible forms, form fields, form field labels, data tables, data table headers, and interactive elements.
- Ask you focused questions that need your brain and are worth your time, when AI is not sure whether something is a WCAG issue:
  - At Deque, we always adhere to the axe-core manifesto of zero false positives. We have trained our AI models to be humble and ask for YOUR human review when they are not certain.
How does this help you? You get the basic axe-core automated issue results you already know and trust. Plus, you get even more accurate automated checks made possible using machine learning and computer vision. Your accessibility testing can be done faster. Most importantly, you get to focus your brainpower on items that AI cannot learn (or has not learned yet).
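Deque’s ML-powered checks are proprietary, but the open-source layer described above is easy to try. Here is a minimal sketch, assuming Playwright and the @axe-core/playwright package are installed: it runs the axe-core rules against a page, reports definite violations, and routes the “incomplete” results (checks the rules could not decide on their own) to human review. The URL is a placeholder.

```typescript
// Minimal sketch: run the open-source axe-core rules with Playwright, report
// violations, and flag "incomplete" checks for human review. Assumes
// `playwright` and `@axe-core/playwright` are installed; the URL is a placeholder.
import { chromium } from "playwright";
import AxeBuilder from "@axe-core/playwright";

async function scan(url: string): Promise<void> {
  const browser = await chromium.launch();
  const page = await browser.newPage();
  await page.goto(url);

  const results = await new AxeBuilder({ page }).analyze();

  // Definite failures detected by the automated rules.
  for (const violation of results.violations) {
    console.log(`FAIL   ${violation.id}: ${violation.help} (${violation.nodes.length} nodes)`);
  }
  // Checks axe could not decide on its own: the focused questions for a human.
  for (const item of results.incomplete) {
    console.log(`REVIEW ${item.id}: ${item.help} (${item.nodes.length} nodes)`);
  }

  await browser.close();
}

scan("https://example.com").catch(console.error);
```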
Conclusion: The Goldilocks Zone
Inspired by the Goldilocks Zone metaphor from astronomy, we each have the responsibility to choose how we will use AI to make digital accessibility a reality. If we are too optimistic and think AI can do everything, and we do not do our due diligence to have reasonable human audit processes in the loop, we are destined to fail. If we are too pessimistic and assert that because AI cannot do everything perfectly, we cannot use it at all, we are also destined to fail—because we do not have enough human energy or experts to keep up with the volume of work to make and keep our digital spaces accessible.
The pragmatic path forward is recognizing that AI cannot do everything. Still, with ethical human guidance, AI can and will break down accessibility barriers that have resisted our human efforts for decades.
In this rapidly evolving digital era, the intersection of AI and accessibility is crucial. AI presents us with remarkable potential and unavoidable challenges. We are truly on the brink of a transformative age where AI, driven by ethics and human intelligence, can radically reshape our digital landscape to be inclusive, fair, and accessible. As an industry, we must be brave and innovative.
Are you ready to leap forward and make digital equality a reality? AI + U = A11Y!