Who Audits the Auditors? Recommendations from a Field Scan of the Algorithmic Auditing Ecosystem

Algorithmic audits (or ‘AI audits’) are an increasingly popular mechanism for algorithmic accountability; however, they remain poorly defined. Without a clear understanding of audit practices, let alone widely used standards or regulatory guidance, claims that an AI product or system has been audited, whether by first-, second-, or third-party auditors, are difficult to verify and may exacerbate, rather than mitigate, bias and harm. To address this knowledge gap, we provide the first comprehensive field scan of the AI audit ecosystem. We share a catalog of individuals (N=438) and organizations (N=189) who engage in algorithmic audits or whose work is directly relevant to algorithmic audits; conduct an anonymous survey of the group (N=152); and interview industry leaders (N=10). We identify emerging best practices as well as methods and tools that are becoming commonplace, and enumerate common barriers to leveraging algorithmic audits as effective accountability mechanisms. We outline policy recommendations to improve the quality and impact of these audits, and highlight proposals with wide support from algorithmic auditors as well as areas of debate. Our recommendations have implications for lawmakers, regulators, internal company policymakers, and standards-setting bodies, as well as for auditors. They are:

1) require the owners and operators of AI systems to engage in independent algorithmic audits against clearly defined standards;
2) notify individuals when they are subject to algorithmic decision-making systems;
3) mandate disclosure of key components of audit findings for peer review;
4) consider real-world harm in the audit process, including through standardized harm incident reporting and response mechanisms;
5) directly involve the stakeholders most likely to be harmed by AI systems in the algorithmic audit process; and
6) formalize evaluation and, potentially, accreditation of algorithmic auditors.

Focus: AI Ethics/Policy
Source: FAccT 2022
Readability: Expert
Type: PDF Article
Open Source: No
Keywords: AI audit, algorithm audit, audit, ethical AI, AI bias, AI harm, AI policy, algorithmic accountability
Learn Tags: Bias, Business, Design/Methods, Ethics, Fairness, AI and Machine Learning
Summary: Algorithmic audits are an increasingly popular mechanism for algorithmic accountability. But without a clear understanding of audit practices, AI audit claims are difficult to verify and may exacerbate, rather than mitigate, bias and harm. To address this, the AJL has completed a field scan of the AI audit ecosystem and shared its findings in a new paper.