The Future of Work and Disability

By David Pereyra

The Future of Work and Disability project brought together a study group of fourteen people, many with lived experience of disability, with researchers, artificial intelligence (AI) experts, data scientists, employment organizations and others engaged in the data ecosystem. The goal of the group was to examine the intersecting topics of AI, automation, standards and employment, particularly as they relate to persons with disabilities.

Our Objectives

The Future of Work and Disability objectives were to:

  • Explore, understand and draw insights into how artificial intelligence and other smart technologies affect persons with disabilities and limit or improve their opportunities and well-being with regard to employment.
  • Produce a report that will share the insights gained through the workshop activities.

Our Process

The study group met weekly for eight weeks in late 2020 and early 2021 and also collaborated asynchronously using platforms such as Google Drive and Canvas. Three themes were explored—each with a webinar module and a research activity module.

The final activity of the group was to co-create a report on its findings that can be used to help develop standards and regulations that support diversity within employment data systems. Accessibility Standards Canada (ASC) will use the report to inform best practices and policies for AI use in the workplace.

Our Contributors

The Future of Work and Disability study group was made up of fourteen expert collaborators, many of whom identify as having a disability and/or have knowledge of the AI field. The group was selected through a call for participation from the IDRC, and a selection process was used to ensure diverse perspectives within the group for learning, collaborating and creating the final report. Experts include:

Chris Butler

Theodore (Ted) Cooke

Katherine Gallagher

Kevin Keane

Mala Naraine

Runa Patel

Sricamalan (Sri) Pathmanathan

Gaitrie Persaud

Ramin Raunak

Fran Quintero Rawlings

Janet Rodriguez

Cybele Sack

Christopher Sutton

Ricardo Wagner

Report

Future of Work and Disability Findings Report

In addition to our report to Accessibility Standards Canada, we created:

  • Learning opportunities from our webinars
  • Badges that can be used by learners to demonstrate their proficiency in the field
  • A learning program that will be publicly available at the close of the project

Badges

Webinar Series

In these four webinars, we explore, understand and draw insights into how artificial intelligence and other “smart” technologies affect persons with disabilities and limit or improve their opportunities and well-being concerning employment. The speakers are experts in employment, artificial intelligence, automation, smart systems, policy, privacy and security areas. We have centred our attention on four theme areas:

  1. AI Employment Systems
  2. AI Hiring System Policies
  3. AI Lifecycle and Ethics
  4. Inclusive AI for HR

AI Employment Systems Webinar

In the technological age, AI, smart systems and automation promise a more inclusive world. However, we must be proactive in determining whether these technologies can genuinely be tools that allow us to achieve greater inclusion and equality of opportunity. We will look at the current state of technology and how it impacts different stages of the employment process for persons with disabilities. Our expert panel — Chancey Fleet, Anhong Guo, Shari Trewin and Ben Tamblyn — guides us through the barriers and new opportunities that AI and smart technologies present for persons with disabilities in the workplace.

Panelists

A photo of Anhong Guo

Anhong Guo is an Assistant Professor in Computer Science & Engineering at the University of Michigan. He has also worked in the Ability and Intelligent User Experiences groups in Microsoft Research, the HCI group of Snap Research, the Accessibility Engineering team at Google, and the Mobile Innovation Center of SAP America.

A photo of Shari Trewin

Shari Trewin manages the IBM Accessibility Leadership Team, chairs the Association for Computing Machinery (ACM) Special Interest Group on Accessible Computing (SIGACCESS), and is a Distinguished Scientist of the ACM and a member of ACM’s Diversity and Inclusion Council.

A photo of Ben Tamblyn

Ben Tamblyn is the Director of Inclusive Design at Microsoft. Ben has worked in a wide range of marketing, design and technical roles, and has a passion for design, inclusion and the potential impact of technology on the world.

A photo of Chancey Fleet

Chancey Fleet was a 2018–19 Data & Society Fellow and is currently an Affiliate-in-Residence whose writing, organizing and advocacy aims to catalyze critical inquiry into how cloud-connected accessibility tools benefit and harm, empower and expose disability communities. Chancey is also the Assistive Technology Coordinator at the New York Public Library.

Moderator

Dr. Vera Roberts is Senior Manager Research, Consulting and Projects at the Inclusive Design Research Centre (IDRC) at OCAD University. Vera’s primary research area is generating a culture of inclusion through outreach activities and implementation of inclusive technology and digital sharing platforms.


Summary

Chancey Fleet

Chancey Fleet locates herself as an advocate impacted by algorithmically biased AI. In her presentation, Fleet expresses apprehension about AI and machine learning (ML) models that are empowered to influence and make judgments about people with disabilities. She questions their suitability for these roles when the underlying algorithms are trained on datasets that inadequately represent the diversity of disabilities.

Fleet laments the inefficiencies and inequities embedded in AI technologies used to test and evaluate people with disabilities: these systems favour people considered “normal” while discriminating against protected groups such as people with disabilities.

While not averse to the technology, Fleet advocates for transparency in the adoption and use of AI and distinguishes between compliance and accessibility, which she argues are not the same.

Anhong Guo

Professor Guo also contends that AI technologies have tremendous implications for people with disabilities because their design does not sufficiently contemplate the nuances of people’s disabilities. Guo and his team have prepared a guide in the form of a published position paper, “Toward Fairness in AI for People with Disabilities: A Research Roadmap,” to “identify and remedy these problems.” It is a four-stage roadmap that seeks to:

  1. Identify potential inclusion issues with AI systems. This includes:
     a. categorizing AI capabilities
     b. assessing the risk of existing AI systems
     c. understanding the techniques that power AI
     d. recognizing the types of harm that AI systems can cause
  2. Systematically test hypotheses to understand failure scenarios (for example, in sensing and accessibility applications).
  3. Create benchmark datasets for replication and inclusion (to test the hypotheses). This involves complex ethical issues around consent and privacy, including data collection, how to encourage participation, and how to obtain consent from people with intellectual disabilities.
  4. Innovate new methods and techniques to mitigate bias, including evaluating how well mitigation techniques work.

Shari Trewin

Focusing on employment and AI, Shari Trewin argues that AI expands many opportunities for better inclusion in the workplace. For example, deaf employees at IBM have been given speech recognition apps to facilitate and enhance work interactions. In addition, Trewin referenced an (unnamed) article about robot wait staff operated by a person with a disability from a remote location.

She reasons that there is potential to overcome experiences of bias: the use of AI can allow people to be assessed more on merit instead of being misjudged by their disability. But pointing to the barriers and challenges of AI, Trewin is concerned that its use is potentially disruptive to current work organizations, since jobs might disappear. The converse is also true: redesigning tasks can create new jobs.

On recruitment, Trewin argues for vigilance toward AI recruitment technologies and close attention to fairness and overcoming bias, because the technology should be reliable and harmless. She references IBM’s AI Fairness 360 toolkit, which emerged from a workshop she organized. Fairness is an essential feature, but pursuing it should not trample on people’s privacy.
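
To make this concrete, the sketch below shows how group fairness metrics of the kind computed by the open-source AI Fairness 360 (aif360) Python toolkit could be checked on a hiring dataset. It is a minimal, hypothetical example: the column names, data values and choice of metrics are ours for illustration and are not taken from Trewin’s talk.

```python
# A minimal, hypothetical sketch of checking group fairness with the
# open-source AI Fairness 360 (aif360) toolkit. Columns and values are
# illustrative only.
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric

# Hypothetical screening outcomes: 1 = advanced to interview, 0 = screened out.
df = pd.DataFrame({
    "advanced":   [1, 0, 1, 1, 1, 1, 0, 1, 0, 1],
    "disability": [0, 1, 0, 0, 1, 0, 1, 1, 0, 0],  # 1 = discloses a disability
})

dataset = BinaryLabelDataset(
    df=df,
    label_names=["advanced"],
    protected_attribute_names=["disability"],
    favorable_label=1,
    unfavorable_label=0,
)

metric = BinaryLabelDatasetMetric(
    dataset,
    unprivileged_groups=[{"disability": 1}],
    privileged_groups=[{"disability": 0}],
)

# Disparate impact is the ratio of favourable-outcome rates
# (unprivileged / privileged); values well below 1.0 suggest the screening
# stage disadvantages applicants who disclose a disability.
print("Disparate impact:", metric.disparate_impact())
print("Statistical parity difference:", metric.statistical_parity_difference())
```

A check like this only surfaces a numeric disparity; as the speakers note throughout this series, interpreting and acting on that disparity still requires human judgment.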

Trewin insists that one must tread carefully because AI decision-making relies on training data that can replicate human biases. This raises questions of trust and transparency, critical elements in the deployment of AI; information must be provided on how the model was trained and tested.

This brings up the notion of explainability. The status quo is a black box: there are inputs, outputs and a decision, but no explanation of how the decision was reached. There should be a rationale for an AI decision; IBM, for its part, offers what it calls AI Explainability 360. From an ethical perspective, AI should enhance, rather than substitute for, the human factor.

Ben Tamblyn

Considering AI’s role in building inclusive technology, Ben Tamblyn argues that exclusion results when humans bring their own biases to problem solving. This raises the question: who is being unintentionally excluded? Tamblyn insists that, in terms of culture, consideration must be given to how inclusive technology is supported and to the role AI will play in bringing it to fruition.

Tamblyn referenced a retired footballer diagnosed with ALS who worked closely with Microsoft. Because of his severe disability, he wanted to tweet using his eyes as an input mechanism, to read to his child, and to move his wheelchair with his eyes. All of those capabilities have since been integrated into the Windows OS, with AI enabling him to perform those tasks independently. It is an infusion of AI into the technology, which Tamblyn contends was facilitated mainly by an evolving culture.

Earn a Learner badge

You will learn:

  • How innovative technology solutions can potentially mitigate hiring biases for people with disabilities
  • How “normative behaviour” screening harms people with disabilities

Learn and earn badges from this event:

  1. Watch the accessible AI Employment Systems webinar
  2. Apply for your Learner badge (five short answer questions)

AI Hiring System Policies Webinar

In this presentation, we explore the best policies and practices for both the tech companies that design these algorithms and the employment organizations that use them, and consider the many legal and ethical implications of machine learning bias that they must address.

Panelists

A photo of Alexandra Reeve Givens

Alexandra Reeve Givens is the CEO of the Center for Democracy & Technology, a leading U.S. think tank that focuses on protecting democracy and individual rights in the digital age. The organization works on a wide range of tech policy issues, including consumer privacy, data and discrimination, free expression, surveillance, internet governance and competition.

A photo of Julia Stoyanovich

Julia Stoyanovich is an Assistant Professor of Computer Science and Engineering and of Data Science at New York University. Julia’s research focuses on responsible data management and analysis, including operationalizing fairness, diversity, transparency and data protection in all stages of the data science lifecycle. She is the founding director of the Center for Responsible AI at NYU, a comprehensive laboratory that is building a future in which responsible AI will be the only kind of AI accepted by society.

Moderator

Dr. Vera Roberts, Inclusive Design Research Centre


Summary

Alexandra Reeve Givens

In her presentation, Reeve Givens focuses on the legal framework against bias. She also shares what her organization is doing to encourage companies to contemplate these issues, along with other valuable tools for advocating for people with disabilities.

Among the tools in use, and the concerns they raise, are resume screening, which from a disability and inclusion perspective is often trained to search for candidates who exhibit qualities already present in the organization; video interviewing that involves facial recognition and sentiment analysis; and games and logic tests, which are potentially exclusionary due to their unintended consequences for people with disabilities.

However, legal protective frameworks such as the Americans with Disabilities Act of 1990 (ADA) have something to say about these measures. Title VII of the Civil Rights Act prohibits discrimination based on ethnicity, race, gender or sexual orientation. The ADA forbids pre-employment medical examinations and requires that tests be accessible and accommodating. But while the ADA’s language protects these groups, the burden is on the victim of discrimination to prove how they were discriminated against.

Givens’s organization has partnered with scores of civil rights organizations to create the Civil Rights Principles for Hiring Assessment Technologies. This tool is helpful for employers, advocates and policymakers to ferret out what she calls “the vectors of discrimination” and direct stakeholders to the tools to address them.

Some of the principles in this tool are non-discrimination, which is self-explanatory, and job-relatedness, which prompts scrutiny of tests that employers deploy without considering whether they are necessary to do the job.

A critical piece is notice and explanation, because candidates need to know what they are being evaluated on; more importantly, they may not realize that they need to ask for accommodation. Auditing is another principle, involving frequent verification. And in terms of oversight and accountability, employers need to be aware of their legal and ethical obligations and of how federal legislators are engaged in enacting protective laws.

Givens also focuses on advocacy: raising awareness of potential areas of concern and empowering employers to pressure vendors into making fairer products, to make more informed decisions, to ask the right questions, and so on.

Givens also points out some of the challenges, such as tool designs that rely on flawed training data. Then there are the limitations of audits, which follow the so-called 4/5ths rule. This rule, in simple terms, considers whether a protected group is being selected at less than 4/5ths the rate of the dominant group. For example, if Black applicants were being hired at less than 80 percent of the rate at which white applicants are hired, that would be a good indication that something discriminatory is at work in the organization.
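
To make the arithmetic concrete, here is a short worked example of the 4/5ths rule; the applicant numbers are invented purely for illustration.

```python
# Worked example of the 4/5ths (80 percent) rule with hypothetical numbers.
hired_dominant = 50        # applicants from the dominant group who were hired
applied_dominant = 100
hired_protected = 12       # applicants from the protected group who were hired
applied_protected = 40

rate_dominant = hired_dominant / applied_dominant      # 0.50 selection rate
rate_protected = hired_protected / applied_protected   # 0.30 selection rate

impact_ratio = rate_protected / rate_dominant          # 0.60

# Audits following the 4/5ths rule flag potential adverse impact when the
# protected group's selection rate is below 80 percent of the dominant
# group's rate.
if impact_ratio < 0.8:
    print(f"Impact ratio {impact_ratio:.2f}: below the 4/5ths threshold")
else:
    print(f"Impact ratio {impact_ratio:.2f}: at or above the 4/5ths threshold")
```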

She argues that many vendors of AI tools have embraced this approach, hiding behind the 4/5ths rule as a marketing strategy. But she contends that it is difficult to assess impacts on people with disabilities this way simply because disability manifests in myriad ways; this kind of group analysis is better suited to race and gender, so there is a real problem with auditing as it relates to people with disabilities.

And so, because disability is often excluded from the discussion about algorithmic bias, there needs to be more awareness: employers, other stakeholders and decision-makers need to be more informed and engaged.

Julia Stoyanovich

Regarding data responsibility, Julia Stoyanovich draws attention to the idea of responsible design, development and implementation of algorithmic systems related to hiring, and to what the future of work looks like for people with disabilities.

Stoyanovich conceptualizes the hiring process as a funnel: a sequence of steps and a string of decisions resulting in the employment of some and the rejection of others. Data and predictive analytics are engaged throughout these phases, and discrimination can result at any stage of the process. This underscores what Jenny Yang, the former Commissioner of the US EEOC, once said: “Automated hiring systems act as modern gatekeepers to economic opportunity.”

Stoyanovich argues that each funnel component is an example of an Automated Decision System (ADS). Designed to improve equity and efficiency, these systems “process sensitive and proprietary information about people and deploy consequential decisions to people’s lives and livelihoods. Heavily reliant on data, ADS may or may not use AI and may or may not be autonomous.”

But how should the regulation of ADS be undertaken? Should it be risk-based or precautionary? In the United States, New York took the lead with the establishment of a task force informed by core principles: a) use ADS where they promote innovation and efficiency, b) promote fairness, equity and accountability, and c) reduce potential harm.

In contemplating technical solutions, Stoyanovich presents two unpleasant extremes: techno-optimism, the belief that technology can independently fix structural inequalities, and techno-bashing, the idea that any attempt to operationalize legal compliance in these systems will not work.

Stoyanovich asks who, within the complex ecosystem in which automated AI tools are developed, assumes responsibility for ensuring that they are built and used correctly, and who ensures that due process violations are caught and mitigated. The simple answer is that it is everyone’s collective responsibility.

Stoyanovich also talks about bias in predictive analytics. Bias is a contested term and is largely misunderstood when talking about the wrongs of automated systems. She argues that bias here is not used in the sense statisticians conceptualize it; rather, the concern is societal bias that seeps into the data, where the data acts as a mirror of the world.

Stoyanovich argues that bias in the data can be construed as a distortion of that reflection, but the reflection itself is oblivious to the distortion: data cannot articulate the difference between a distorted reflection and a perfect world. She cautions that changing the reflection does not change the world.

For Stoyanovich, bias in predictive analytics raises a few questions: “What’s the source of the data? What happens to it inside the black box? And how are the results used?” Bias in ADS is represented as a three-headed dragon: pre-existing (societal) bias, technical bias and emergent bias.

Earn a Learner badge

You will learn:

  • How legal frameworks and public policies can act against structural discrimination in candidate selection on the basis of disability
  • About the challenges to policy regulations for AI hiring systems

Learn and earn badges from this event:

  1. Watch the accessible AI Hiring System Policies webinar
  2. Apply for your Learner badge (five short answer questions)

AI Lifecycle and Ethics Webinar

This webinar focused on employment, disability and artificial intelligence policies.

Guest Speaker

A photo of Abhishek Gupta

Abhishek Gupta is the Founder and Principal Researcher at the Montreal AI Ethics Institute and a Machine Learning Engineer at Microsoft, where he serves on the CSE Responsible AI Board. He represents Canada in the International Visitor Leadership Program (IVLP), administered by the U.S. State Department, as an expert on the future of work. His research focuses on applied technical and policy methods to address ethical, safety and inclusivity concerns in the use of AI in different domains. He has built the largest community-driven public consultation group on AI ethics in the world.


Summary

Abhishek Gupta

Gupta approaches his presentation on bias and discrimination from a lifecycle perspective. Thinking about it this way helps practitioners avoid missing points of intervention where critical changes are needed. The lifecycle involves the following key stages:

Ideation and conception

What is the challenge in the hiring ecosystem that needs to be overcome? What is animating this inquiry; what is the problem? The challenge for HR is to find the perfect candidate for the vacancy. However, the goal must remain constant throughout the process, whether it is pursued through the tools that are employed or through organizational change. Also, the skills being evaluated must be relevant to the position.

Data collection

This is a critical piece of the lifecycle. Gupta references Mimi Onuoha’s article on the library of missing datasets, which raises the question: what might those missing pieces of data be? Inadequate data representation risks exclusion, and even sufficient data is not enough if the nuances of disability are not accounted for. This also brings up the notion of data trust.

Design

Here the focus is on feasibility, which raises questions of cost and efficiency. Who’s making the decisions, what efficiency is being achieved and who is being excluded in the process?

Development

This is where the algorithms are written, which speaks to the techniques employed.

TEVV (testing, evaluation, verification, validation)

At this point, there is a chance to open the black box of the AI system. The more complex the techniques, the more difficult it is to explain a decision, which raises questions of explainability and interpretability. Who is making the decisions, and who is being left out?

Deployment

The system is deployed into a socio-technical world, so consideration must be given to the richness of human complexity, which does not fit neatly into any box. Gupta argues that having a “human in the loop” is important, not merely to be present but to be autonomous and agentic. We must also be aware of the following:

  • Algorithmic aversion — Because of people’s negative experience with the system, they are less trusting of it, even in instances where they are wrong and the system is right.
  • Automation bias — Due to their past experience with the accuracy of an AI system, people become overly confident in and reliant on its decisions.

Maintenance

This has to do with runtime system monitoring. Deployed systems learn from their interaction with real-world data, which can change over time because of its dynamism. It is also important to have guardrails in place to detect when the system diverges from its intended behaviour.
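
As one hypothetical illustration of such a guardrail, the sketch below compares a feature’s training-time distribution with its recent production distribution using a two-sample Kolmogorov–Smirnov test from SciPy; the data, the feature and the alert threshold are all invented for illustration, and this is only one of many possible monitoring checks.

```python
# A minimal, hypothetical runtime-monitoring guardrail: compare a feature's
# training-time distribution with its recent production distribution and
# flag possible drift. Data and threshold are illustrative only.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
training_scores = rng.normal(loc=0.60, scale=0.10, size=1000)   # seen during training
production_scores = rng.normal(loc=0.48, scale=0.12, size=500)  # seen after deployment

statistic, p_value = ks_2samp(training_scores, production_scores)

# A very small p-value means the two distributions are unlikely to be the same,
# a cue for humans to review the system before continuing to trust its outputs.
if p_value < 0.01:
    print(f"Possible drift detected (KS statistic={statistic:.3f}, p={p_value:.2e})")
else:
    print("No significant drift detected")
```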

End-of-life

Since everything has a lifespan, monitoring is necessary, and if the system is not doing what it is supposed to do, it might be time to retire it. In decommissioning the system, consideration must be given to how the data will be handled, and to what happens to the people who have depended on the system.

Gupta also asks questions about what makes up AI ethics:

Fairness — we need to think about the different definitions of fairness. If a system is certified as fair, which definition of fairness is being used? Who decided to use that definition, and how willing are the authors to change it?

Privacy — extra sensitivity is critical here. Traditional means of safeguarding privacy might be inadequate.

Traceability and auditability — the ability to audit is hampered if things are not traceable. There needs to be transparency and feedback from the hiring process regarding whether or not someone is being discriminated against.

Machine learning security — thought must also be given to the security of the machine learning systems themselves.

Inclusive AI for HR Webinar

In this webinar, our panel discussion highlighted some of the potential problems that AI raises in the hiring process and brainstormed ideas to make this process more inclusive for persons with disabilities.

Panelists

A photo of Shea Tanis

Shea Tanis is the Director for Policy and Advocacy at the Coleman Institute for Cognitive Disabilities at the University of Colorado. She is nationally recognized for her expertise in applied cognitive technology supports, cognitive accessibility and advancing the rights of people with cognitive disabilities to technology and information access.

Rich Donovan is CEO of the Return on Disability Group and is a globally recognized subject matter expert on the convergence of disability and corporate profitability. He has spent more than ten years focused on defining and unlocking the economic value of the disability market. In 2006 Rich founded Lime, the leading third-party recruiter in the disability space, where he worked with Google, PepsiCo, Bank of America/Merrill Lynch, IBM, TD Bank and others to help them attract and retain top talent from within the disability market.

Moderator

Dr. Vera Roberts, Inclusive Design Research Centre


Summary

Shea Tanis and Rich Donovan

Shea Tanis focuses on the employment of persons with cognitive disabilities and the risks/benefits of automation — what Roberts calls the “tension between utopian and dystopian outcomes.” Tanis argues that there are generally low expectations for people with cognitive disabilities. The assumption is that automation will bode well for everyone. But Tanis argues that we should consider the difference between “personalization and customization.” The idea is to get people to choose what they want to be automated instead of making that assumption for them.

Tanis contends that if everything is automated, then it interferes with self-determination. People’s ability to self-determine and self-direct becomes displaced. So there has to be an acknowledgement of people’s autonomy, which intersects with the whole idea of ethics.

Rich Donovan asserts that AI can be used to devalue people with disabilities. He says that the problem is not with AI as a tool but with how humans use or deploy it, for good or bad. So, there must be a reconceptualization of HR policy.

The conversation around the black box of recruiting is that HR needs to take a different approach. With all applications submitted online, a specific population is effectively excluded. Tanis contends that HR gives each application only a three-second glance on first review. Her group therefore advocates for accepting resumes in alternative formats, such as multimedia and portfolios, which can serve the same purpose as traditional formats. This calls for amending and broadening HR policy to accommodate non-traditional resume formats, and stakeholders would need to be educated about the change.

Tanis reasons that if the status quo preselects a skill set without contemplating how people function in the job and accomplish their tasks, an entire population of qualified people is eliminated. Therefore, broader customization and specialization of jobs and the techniques deployed to execute them are required.

The socio-cultural piece is also essential. It is said that two-thirds of companies have outsourced their hiring to specialized companies that engage with non-inclusive and complex tools.

Touching on the impact of automation and how data is utilized, Jutta Treviranus contends that workplace surveillance data is used in a multiplicity of ways, from the optimization of work to promotion decisions and so on. There is therefore a risk of misinterpreting the work performance of people with disabilities.

Tanis also calls for flexibility — the need to move away from pre-defined ways of measuring productivity. Those rigid requirements need to be challenged because employees can be clocked in at work while being unproductive. How productivity is measured needs to be redefined so that the opportunity to misinterpret data is considerably minimized.

Tanis argues that the disabled community can also assert a claim to cultural capital, because there is a uniqueness and authenticity that a disabled person brings to an organization’s diversity, and there is a market value for that cultural capital. But tapping that capital requires companies to actually employ people with disabilities, not just have them serve as consultants. Treviranus laments that there is not currently an efficient measure of diversity.

Earn a Learner badge

You will learn:

  • How AI in hiring systems impacts the employment of people with disabilities
  • How to better develop AI-based hiring systems that are inclusive and transparent for people with disabilities

Learn and earn badges from this event:

  1. Watch the accessible Inclusive AI for HR webinar
  2. Apply for your Learner badge (five short answer questions)

Making AI Inclusive for Hiring and HR Webinar

In this webinar we learned about nugget.ai’s operations as a skills measurement technology company. This AI company uses organizational psychology research to build AI algorithms that objectively quantify and measure candidate and employee skills for talent acquisition.


Guest Speakers

nugget.ai logo

Marian Pitel is Head of Research at nugget.ai, completing a Doctor of Philosophy degree in Organizational Psychology.

Melissa Pike is Product and Research Associate at nugget.ai, completing a Doctor of Philosophy degree in Organizational Psychology.

Activity

Marian Pitel and Melissa Pike, members of nugget.ai’s science team, guided the presentation and activity, taking turns explaining key stages in nugget.ai’s operations and highlighting decisions and considerations that the nugget.ai team have had to make at these stages.

At each stage, Marian and Melissa worked with the whole group to identify the advantages of nugget.ai’s approach and any areas of opportunity. The participants were encouraged to reflect on nugget.ai’s business decisions, leaning on lessons that emerged from the study group’s previous modules.
