When Workers Keep Their Jobs But Lose Their Voice

The Epistemic Agency Crisis in AI-Augmented Workplaces

October 28, 2025 • by Samuel Holley


The Problem We're Not Talking About

The conversation about AI and work has been dominated by a single, terrifying question: Will I lose my job?

This is understandable. Job loss is visceral, immediate, and economically catastrophic. Policy discussions, corporate AI strategies, and even ethical AI research have largely centered on this threat—how to reduce job displacement, retrain workers, or mitigate the social fallout of mass unemployment.

But what if this framing is missing the deeper harm?

What if you keep your job, but the AI makes all the decisions? What if you're still employed, but no longer trusted to think for yourself? What if your title remains, but your expertise is discarded?

This is the crisis of epistemic agency—and it's happening right now, in workplaces that proudly announce they're "augmenting, not replacing" their human workers.

The Research: When Trust Becomes Zero-Sum

In their groundbreaking 2024 paper "When Trust is Zero Sum: Automation Threat to Epistemic Agency," researchers Emmie Malone, Saleh Afroogh, Jason D'Cruz, and Kush R. Varshney identify a critical blind spot in how we think about AI in the workplace. They write:

AI researchers and ethicists have long worried about the threat that automation poses to human dignity, autonomy, and to the sense of personal value that is tied to work. Typically, proposed solutions to this problem focus on ways in which we can reduce the number of job losses which result from automation, ways to retrain those that lose their jobs, or ways to mitigate the social consequences of those job losses.

However, even in cases where workers keep their jobs, their agency within them might be severely downgraded. For instance, human employees might work alongside AI but not be allowed to make decisions or not be allowed to make decisions without consulting with or coming to agreement with the AI. This is a kind of epistemic harm (which could be an injustice if it is distributed on the basis of identity prejudice). It diminishes human agency (in constraining people's ability to act independently), and it fails to recognize the workers' epistemic agency as qualified experts. Workers, in this case, aren't given the trust they are entitled to.

This means that issues of human dignity remain even in cases where everyone keeps their job. Further, job retention focused solutions, such as designing an algorithm to work alongside the human employee, may only enable these harms. Here, we propose an alternative design solution, adversarial collaboration, which addresses the traditional retention problem of automation, but also addresses the larger underlying problem of epistemic harms and the distribution of trust between AI and humans in the workplace.

Malone, E., Afroogh, S., D'Cruz, J., & Varshney, K. R. (2024). When Trust is Zero Sum: Automation Threat to Epistemic Agency. arXiv:2408.08846 [cs.CY]. https://doi.org/10.48550/arXiv.2408.08846

What Is Epistemic Agency (And Why Should You Care)?

Epistemic agency is the right to be recognized as a knower—someone whose expertise, judgment, and decision-making authority are valued and trusted.

When you have epistemic agency at work, you are:

  • Trusted to make decisions based on your expertise
  • Recognized as a qualified professional with valuable judgment
  • Empowered to act independently within your domain
  • Respected for your knowledge, not just your labor

When your epistemic agency is taken away, you become a human rubber stamp—present in the room, but not allowed to think. You're kept on the payroll, but the AI has the final say. You're told you're "collaborating" with the AI, but in practice, you're simply executing its decisions.

This is not hypothetical. It's already happening.

Real-World Examples of Epistemic Harm

Example 1: The Radiologist Who Can't Diagnose

A radiologist with 15 years of experience reviews an X-ray and identifies a subtle anomaly that the AI flagged as "low priority." Hospital policy requires the radiologist to defer to the AI's assessment unless they can provide documented justification for overriding it. The paperwork takes 20 minutes. The radiologist, pressured by caseload quotas, marks it as "reviewed" and moves on.

The radiologist kept their job. But did they keep their expertise? Their judgment? Their agency?

Example 2: The Teacher Bound by the Algorithm

A teacher notices that a student is struggling not because of ability, but because of anxiety. The AI-driven learning platform recommends "remedial content" based on test scores. The teacher wants to adjust the approach, but the platform's recommendations are tied to performance metrics that determine the teacher's evaluation. The teacher follows the algorithm.

The teacher kept their job. But were they allowed to teach?

Example 3: The Customer Service Rep Reading a Script

A customer service representative recognizes that a frustrated customer needs empathy, not a policy recitation. But the AI-powered call center system feeds the rep a script optimized for "efficiency metrics" and flags any deviation. The rep reads the script. The customer hangs up angry.

The rep kept their job. But were they allowed to care?

In each case, the worker is employed. The worker is even "collaborating" with AI. But the worker's epistemic agency—their ability to be trusted as an expert—has been stripped away. This is what Malone and her colleagues call epistemic harm, and it's a profound threat to human dignity.

Why "Job Retention" Isn't Enough

Most corporate AI strategies proudly tout "human-in-the-loop" or "AI-assisted decision-making" as evidence of their ethical commitment to workers. But as the research reveals, these solutions can actually enable epistemic harm if they're not designed carefully.

Consider the phrase "human-in-the-loop." It sounds collaborative. But ask yourself: Who is in the driver's seat?

  • Does the human guide the AI, or does the AI guide the human?
  • Can the human override the AI without penalty, or is overriding seen as "inefficiency"?
  • Is the human's expertise enhanced by the AI, or replaced by it?

If the answers to these questions reveal that the AI has the final say, then the "human-in-the-loop" is just window dressing. The human is there to click "approve," not to think.

Job retention without epistemic agency is not dignity. It's surveillance with a paycheck.

The Solution: Adversarial Collaboration

Malone and her team propose a fundamentally different approach: adversarial collaboration.

Instead of designing AI systems where one party (human or AI) has final authority, adversarial collaboration creates a relationship of mutual accountability. The AI and the human check each other's reasoning. Neither simply overrides the other. Both must justify their conclusions.

In practice, this means:

  • The AI explains its reasoning, not just its conclusion
  • The human is empowered to challenge the AI's logic without bureaucratic penalty
  • Disagreements are documented as valuable data, not errors
  • Decision-making authority is shared, not hierarchical

This approach recognizes that both the AI and the human bring unique strengths. The AI excels at pattern recognition across vast datasets. The human excels at contextual judgment, ethical reasoning, and understanding the why behind the data.

Adversarial collaboration doesn't eliminate conflict—it makes conflict productive. It transforms disagreement between human and AI from a bug into a feature.
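
To make the contrast with a plain "AI decides, human approves" pipeline concrete, here is a minimal sketch, in Python, of what an adversarially collaborative decision record could look like. It is illustrative only; the names (Recommendation, DecisionRecord, resolve_case) are my own, not taken from the paper. The structural point is that both parties must state a conclusion and a justification, neither silently overrides the other, and disagreement is captured as data and routed to joint review.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Recommendation:
    """One party's position on a case: a conclusion plus the reasoning behind it."""
    source: str          # "ai" or "human"
    conclusion: str      # e.g. "low priority" or "escalate for review"
    justification: str   # the reasoning offered, not just the verdict

@dataclass
class DecisionRecord:
    """A case is closed only when both parties have stated and justified a position."""
    case_id: str
    ai: Recommendation
    human: Recommendation
    resolved_conclusion: str | None = None
    disagreement_logged: bool = False
    notes: list[str] = field(default_factory=list)

def resolve_case(record: DecisionRecord) -> DecisionRecord:
    """Neither party overrides the other by default.

    Agreement closes the case; disagreement is recorded as valuable data
    and routed to joint review instead of defaulting to the AI's output.
    """
    if record.ai.conclusion == record.human.conclusion:
        record.resolved_conclusion = record.human.conclusion
        record.notes.append("AI and human agree; case closed.")
    else:
        record.disagreement_logged = True
        record.notes.append(
            f"{datetime.now(timezone.utc).isoformat()}: disagreement captured "
            f"(AI: {record.ai.conclusion!r} vs human: {record.human.conclusion!r}); "
            "routed to joint review, not auto-resolved."
        )
    return record
```

Notice that there is no penalty path attached to the human disagreeing: an unresolved case becomes an input to joint review, not a compliance flag against the worker.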

How Reclaim by Design™ Embodies This Philosophy

This research validates everything I teach through my Reclaim by Design™ framework. When I talk about building a "Context Window of Trust" with your AI, I'm not just optimizing for efficiency—I'm designing for epistemic partnership.

A well-designed AI relationship:

  • Knows your expertise and defers to your judgment in your domain
  • Challenges your assumptions when its data suggests a different approach
  • Explains its reasoning so you can evaluate its logic, not just accept its output
  • Adapts to your feedback and learns from your corrections
  • Amplifies your agency rather than diminishing it

This is adversarial collaboration at the individual level. It's designing your AI partnership so that you remain the expert, the decision-maker, and the final authority—while still benefiting from the AI's immense computational power.

When organizations fail to design for epistemic agency, they create workers who are employed but disempowered. When individuals fail to design their AI relationships intentionally, they become users who are augmented but infantilized.

Reclaim by Design™ is about refusing both outcomes.

What You Can Do Right Now

For Individuals:

  1. Audit your AI relationships. Are you using AI as a partner that enhances your expertise, or as a crutch that replaces your thinking?
  2. Build context intentionally. Train your AI to understand your domain expertise so it can support your judgment, not substitute for it.
  3. Challenge the AI's reasoning. Don't just accept outputs. Ask the AI to explain why it recommends something, and evaluate that logic (a short prompt sketch follows this list).
  4. Maintain decision-making authority. The AI should inform your choices, not make them for you.
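
As a concrete illustration of points 3 and 4, here is a minimal sketch of a prompt pattern that asks an assistant for its reasoning, assumptions, and counterarguments rather than a bare answer, and leaves the final call with you. The ask_model call in the usage comment is a hypothetical placeholder, not a real vendor API.

```python
def reasoning_prompt(question: str, my_assessment: str) -> str:
    """Build a prompt that demands reasoning, assumptions, and counterarguments,
    so the output can be evaluated rather than merely accepted."""
    return (
        f"Question: {question}\n"
        f"My current assessment, based on my own expertise: {my_assessment}\n\n"
        "Please do the following:\n"
        "1. Give your recommendation.\n"
        "2. Explain the reasoning and evidence behind it, step by step.\n"
        "3. List the assumptions you are making.\n"
        "4. State the strongest argument against your own recommendation.\n"
        "5. Note where my assessment and yours differ, and why.\n"
    )

# Hypothetical usage: ask_model is a placeholder for whatever AI tool you use.
# response = ask_model(reasoning_prompt(
#     "Should this support ticket be closed as resolved?",
#     "The customer's underlying issue is not fixed; keep it open.",
# ))
# The AI's answer informs the decision; the decision itself stays with you.
```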

For Organizations:

  1. Design for adversarial collaboration. Build AI systems that require human and AI to justify decisions to each other, not systems where one overrides the other.
  2. Protect workers' right to override AI. If overriding the AI requires excessive justification or comes with productivity penalties, you're not empowering workers—you're surveilling them.
  3. Measure epistemic agency, not just efficiency. Track whether workers feel trusted and empowered, not just whether they're "productive" (a rough sketch of such metrics follows this list).
  4. Invest in context, not just automation. Give workers the time and tools to build deep, contextual relationships with AI that enhance their expertise.
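
To give point 3 some shape, here is a minimal sketch of what tracking epistemic agency could look like, assuming a hypothetical log of human-AI decisions. The field names and sample numbers are invented; the idea is to measure how often workers disagree with the AI, how often their judgment prevails, and how much friction an override costs, alongside the usual throughput numbers.

```python
from statistics import mean

# Hypothetical decision log: each entry records whether the human disagreed,
# whether their view prevailed, and the minutes of paperwork an override cost.
decision_log = [
    {"human_disagreed": True,  "human_prevailed": True,  "override_minutes": 20},
    {"human_disagreed": True,  "human_prevailed": False, "override_minutes": 25},
    {"human_disagreed": False, "human_prevailed": False, "override_minutes": 0},
]

def epistemic_agency_metrics(log: list[dict]) -> dict:
    disagreements = [d for d in log if d["human_disagreed"]]
    return {
        # How often do workers exercise independent judgment at all?
        "disagreement_rate": len(disagreements) / len(log),
        # When they do, does their judgment actually carry weight?
        "human_prevail_rate": (
            mean(d["human_prevailed"] for d in disagreements) if disagreements else None
        ),
        # How expensive is it to say "the AI is wrong"? High friction here is
        # the surveillance-with-a-paycheck pattern described above.
        "avg_override_friction_minutes": (
            mean(d["override_minutes"] for d in disagreements) if disagreements else 0
        ),
    }

print(epistemic_agency_metrics(decision_log))
```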

The Future We Choose

The question is not whether AI will transform work. It already has.

The question is whether we will design AI systems that preserve human dignity and expertise, or whether we will create a world where workers are employed but powerless—present in body, but absent in agency.

Job retention is not enough. We need epistemic justice. We need workplaces where humans are trusted as experts, where AI enhances judgment rather than replacing it, and where collaboration means partnership, not subordination.

This is the work of Reclaim by Design™—building AI relationships that amplify your humanity, not diminish it. It's the work of adversarial collaboration—creating systems where humans and AI challenge each other toward better decisions. It's the work of epistemic agency—insisting that keeping your job means keeping your voice.

The future is not inevitable. It's a design choice. And we get to choose.

Want to Design AI Relationships That Preserve Your Agency?

I help individuals and organizations build AI partnerships that enhance expertise rather than replace it. If you're struggling with AI tools that feel like they're taking over your decision-making, or if you're leading a team navigating the epistemic challenges of AI integration, let's talk.

Book Your Free AI Audit

Further Reading: