Artificial intelligence promised to revolutionize workplace productivity, but a troubling trend is emerging that threatens to undermine those gains entirely. Despite AI usage at work doubling from 21% to 40% between 2023 and 2025, a staggering 95% of organizations report no measurable return on their AI investments. The culprit? A phenomenon researchers are calling "workslop"—AI-generated content that looks polished on the surface but lacks the substance to meaningfully advance work.
What Is Workslop?
Researchers from Stanford's Social Media Lab and BetterUp Labs define workslop as:
"AI-generated work content that masquerades as good work, but lacks the substance to meaningfully advance a given task".
It manifests in countless forms: lengthy emails that could have been a single bullet point, presentations missing crucial context, poorly written code, or reports filled with what Stanford professor Jeff Hancock calls "purple prose"—unnecessarily verbose language that forces recipients to decipher the actual meaning.
The problem isn't AI itself, but how people use it.
Dr. Gabriella Rosen Kellner, vice president of research labs at BetterUp, explains that workslop emerges when employees treat AI as a shortcut to finished products rather than a collaborative tool. "People often forget that while AI is viewed as a tool for individual use, it actually mediates human interactions," she notes. When someone hits "copy-paste" on AI output without adding human insight, they're not saving time—they're transferring the cognitive burden to their colleagues.
The Hidden Cost of Low-Quality AI Output
The financial impact is substantial. Employees report spending an average of one hour and 56 minutes each month addressing workslop-related issues, translating to approximately $186 in lost productivity per person per month. For an organization of 10,000 employees in which 41% encounter workslop regularly, this amounts to roughly $9 million in annual productivity losses.
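The headline figure follows directly from the survey numbers; a quick sketch of the arithmetic (function and variable names are illustrative, not from the study):

```python
# Back-of-envelope estimate of annual workslop productivity losses,
# using the figures cited above: $186 of lost productivity per affected
# employee per month, and 41% of a 10,000-person workforce affected.

COST_PER_PERSON_MONTHLY = 186  # reported monthly cost per affected employee, in dollars

def annual_loss(headcount: int, affected_share: float,
                monthly_cost: float = COST_PER_PERSON_MONTHLY) -> float:
    """Annual loss = affected employees * monthly cost * 12 months."""
    return headcount * affected_share * monthly_cost * 12

loss = annual_loss(headcount=10_000, affected_share=0.41)
print(f"${loss:,.0f}")  # prints $9,151,200 -- roughly $9 million per year
```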
Beyond dollars, workslop erodes something far more valuable: trust.
When recipients encounter AI-generated work lacking substance, 53% feel annoyed, 38% confused, and 22% offended. As one finance employee described to researchers, receiving workslop creates an impossible dilemma:
"I had to decide whether to rewrite it myself, ask him to revise it, or just accept it as adequate".
Nearly half of employees who receive workslop reassess their colleagues' capabilities, perceiving them as less creative, competent, and reliable.
Pilots Versus Passengers: Two Types of AI Users
Research from the World Economic Forum identifies two distinct approaches to AI adoption. "Pilots" use AI strategically to extend their capabilities—generating initial drafts they refine with expertise, or using AI for research while crafting original insights. "Passengers," by contrast, treat AI as a replacement for critical thinking, offloading entire tasks without quality control or contextual understanding.
The distinction matters enormously. A field experiment titled "Collaborating with AI Agents" found that human-AI teams that actively guided the AI communicated more effectively and performed better than human-only teams. The key? They collaborated with AI rather than outsourcing their thinking entirely.
Solutions: Building a Workslop-Free Culture
Preventing workslop requires both individual responsibility and organizational leadership. Harvard Business Review emphasizes that "indiscriminate imperatives yield indiscriminate usage"—vague mandates to "use AI" without strategic guidance inevitably produce low-quality outputs.
Organizations can implement several evidence-based strategies. First, establish clear AI guidelines aligned with company values and strategic objectives. Second, create team discussions about AI usage and evaluate which applications genuinely serve project goals. Third, implement quality checkpoints where AI outputs must be reviewed and enhanced with human expertise before sharing.
Some firms prohibit copying and pasting AI content directly into work documents, instead requiring employees to use AI for research and ideation while drafting from scratch. Others permit the use of AI only upon project completion, allowing it to review and refine human-generated work rather than replace it.
Training programs focused specifically on workslop prevention, complete with concrete examples, help employees recognize the difference between AI-assisted excellence and AI-generated mediocrity. As Hancock advises, teams should commit to quality standards and maintain transparency about AI usage.
The Path Forward
AI isn't the enemy of productivity—lazy implementation is. When used thoughtfully, AI can demonstrably boost creativity, efficiency, and collaboration. The workplace of 2026 can realize AI's promise, but only if we shift from viewing AI as a shortcut to recognizing it as a tool that amplifies human expertise rather than replacing it.
The question isn't whether to use AI, but how to use it responsibly. As researchers conclude, success requires "enhancing human skills" rather than adopting a "copy-paste mentality." Organizations that invest in strategic implementation, provide clear guidance, and foster cultures of quality will harness AI's transformative potential. Those that don't will continue burning millions on a technology that creates more problems than it solves.