5 things your accessibility audit process gets wrong

Mary Fran Wiley

Most companies miss the mark in their accessibility audits due to an over-reliance on automated tools or inexperienced auditors. Audits are often treated as perfunctory boxes to check rather than an integral part of a comprehensive design and development process, leaving companies vulnerable to liability under both the ADA and various state laws.

Adding accessibility audits to your process is an excellent step toward creating accessible experiences, and it is critical for producing the Accessibility Conformance Reports that many clients require in their software procurement process.

1. You’re relying entirely on automated tools

Accessibility audits require human involvement; no automated tool can provide you with accurate and complete information. Google’s Lighthouse, the most common tool I see clients using, produces a score that can frighten you unnecessarily with false positives, or it can misunderstand your code and grade your site highly while the site reads as gibberish to screen reader users. A human touch is necessary to determine whether images are correctly coded and whether the alternative text is necessary and accurate.
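To see where the automated/human line falls, here is a minimal sketch of the kind of alt-text check a scanner performs, using only Python's standard library. The tag structure and findings format are my own illustration, not any real tool's output:

```python
from html.parser import HTMLParser

class AltTextChecker(HTMLParser):
    """Collects <img> findings the way an automated scanner would."""
    def __init__(self):
        super().__init__()
        self.findings = []

    def handle_starttag(self, tag, attrs):
        if tag != "img":
            return
        attrs = dict(attrs)
        if "alt" not in attrs:
            # A machine CAN detect a missing alt attribute.
            self.findings.append(("missing-alt", attrs.get("src")))
        elif not attrs["alt"].strip():
            # An empty alt is valid for decorative images, so a tool can
            # only flag it for human review, not call it a failure.
            self.findings.append(("empty-alt-needs-review", attrs.get("src")))

checker = AltTextChecker()
checker.feed('<img src="chart.png" alt=""><img src="logo.png">')
print(checker.findings)
```

The scanner stops there. Only a human can decide whether `chart.png` is decorative (empty alt is correct) or data-bearing (it needs a real description), and no parser can judge whether existing alt text accurately describes its image.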

Axe DevTools and Accessibility Insights are both powered by the same underlying rules engine (axe-core) and can help an auditor, especially a new one, follow a set process that covers most of the WCAG and Section 508 criteria. These tools are more sophisticated than Lighthouse and require user input, but they still flag false positives and can’t test everything. Deque is training AI to assist with auditing in axe, but as of 2025 it’s experimental and not ready for production use.

2. You haven’t created a representative sample

This step is critical to a successful audit. In a perfect world, each element of a digital experience would be checked for accessibility…but that’s not truly feasible. It’s also not good enough to choose pages/screens at random and think you’ll catch everything.

Consider a card in a layout. You may have hundreds of instances of that card, but if it has four variable elements and each element has four variants, you could review as few as four cards: just enough to see every variant at least once. Examining every possible combination, on the other hand, would mean reviewing 4⁴ = 256 cards. The first approach is unlikely to catch everything; the second is excessive effort for a limited return, and in complex systems some combinations of elements never appear together anyway. In a scenario like this, I’d start by making sure I had each element variant at least once, and based on those variants, consider whether more examples are needed to examine factors like the impact of text length or human error in alt text, to ensure a truly representative sample.
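The arithmetic behind that card example can be made concrete. This sketch uses hypothetical element and variant names of my own invention; the point is the gap between exhaustive review and a minimal covering sample:

```python
from itertools import product

# Hypothetical card with 4 variable elements, 4 variants each.
elements = {
    "image":   ["photo", "icon", "illustration", "none"],
    "heading": ["short", "long", "linked", "none"],
    "body":    ["plain", "rich", "truncated", "none"],
    "cta":     ["button", "link", "two-ctas", "none"],
}

# Exhaustive review: the full cross product of every combination.
all_cards = list(product(*elements.values()))
print(len(all_cards))  # 4 ** 4 = 256 cards

# Minimal covering sample: zipping the variant lists yields one card
# per "column" of variants, so every variant appears at least once.
sample = list(zip(*elements.values()))
print(len(sample))  # 4 cards cover all 16 element variants
```

Four cards is the floor, not the recommendation: from that covering set you expand toward combinations that actually ship and toward content-driven risks like very long text.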

3. Your audit environment doesn’t reflect user reality

Similar to only looking at one instance of a recurring element, evaluating using limited adaptive technology or platforms will leave gaps in your audit. You don’t need to test every screen reader available on every platform, but you cannot rely on a single combination.

Most often, I’ve seen companies rely solely on an audit done in Safari using VoiceOver on a Mac. Not only are desktop VoiceOver users not the most common screen reader group, but Safari and VoiceOver also work together in ways that can bypass accessibility issues in the code.

Testing desktop experiences should involve both Windows and Mac, and either JAWS or NVDA. NVDA usage has recently caught up to JAWS, so whichever your company has invested in will catch most screen reader failures. The two do work a little differently, and issues may appear in a combination you didn’t test. Just like with a representative sample, some outliers exist and some issues may be missed. That’s expected: your audit has to balance the effort with the return.
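One way to make that balance explicit is to write the test matrix down. This sketch contrasts the full platform/screen-reader/browser cross product with a small representative set; the specific pairings below are my illustrative assumptions, and you should choose yours from your own users' analytics and screen reader survey data:

```python
from itertools import product

platforms = ["Windows", "macOS"]
screen_readers = ["JAWS", "NVDA", "VoiceOver"]
browsers = ["Chrome", "Firefox", "Edge", "Safari"]

# Exhaustive cross product. Many combos don't exist in practice
# (e.g. JAWS doesn't run on macOS), but it shows the scale.
full_matrix = list(product(platforms, screen_readers, browsers))
print(len(full_matrix))  # 2 * 3 * 4 = 24 combinations

# A representative set: pair each screen reader with a browser its
# users commonly run, rather than testing every combination.
representative = [
    ("Windows", "NVDA", "Chrome"),
    ("Windows", "JAWS", "Edge"),
    ("macOS", "VoiceOver", "Safari"),
]
print(len(representative))
```

Three runs instead of twenty-four is the kind of effort/return trade-off the audit plan should state up front, so that a missed outlier combination is a known gap rather than a surprise.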

4. You don’t involve engineering or your design team in your remediation planning

Audits often occur in a contained workflow that doesn’t involve the product designers or engineers until after the audit is complete and remediation has been defined (and sometimes even prioritized). This approach increases tech debt and inter-team animosity and can lead to remediation being deprioritized or avoided altogether.

When remediation is being defined, an auditor and engineer should meet to review the issues, workshop solutions to ensure they’re technically feasible, and identify how recommendations may impact elements beyond the instance caught in an audit.

When it comes to the visuals, partnering with designers on why design choices were made and exploring options can ensure designers see accessibility as an opportunity for innovation rather than shackles that prevent their design ideals.

5. You don’t have a scheduled, regular audit process

An audit isn’t one-and-done. Modern websites have content authors publishing new content regularly. Digital products are always adding new features. An audit is a snapshot of the accessibility of your experience at a single moment in time.

Each change made to a digital experience affects accessibility and can push an audit out of date: in a good way, as known issues are addressed, but changes can also introduce new failures into an experience.

Bonus: A clean audit may obscure inaccessibility.

Digital accessibility standards are lagging behind modern experiences: we don’t have standards for native apps and software that are as robust as those for the web. And because many standards are based on web experiences, they call for design affordances and patterns that may not be standard interactions in an operating system.

Laws are even further behind. The accessibility standard recommended by the US government for ADA compliance was already out of date when it was published, and many state/local laws specifically cite an even older standard.

On the user side, disabilities aren’t all addressed in existing design standards, especially neurodivergence. Your experience may be screen-reader-friendly but unusable for a neurodiverse user. Audits won’t catch this.

Your next audit will be your best one yet.

Crafting truly accessible digital experiences requires more than running automated checks or following outdated standards. A meaningful accessibility audit weaves together thoughtful human review, a representative sampling of user journeys, realistic test environments, and ongoing collaboration across design and engineering teams. While no process is perfect or final, committing to regular audits and adapting to evolving user needs ensures progress toward inclusion and compliance. By moving beyond the checkbox mentality, organizations can not only lower legal risk but also foster better experiences for everyone who interacts with their products.
