Work Slop and Automation Bias – the New AI Workplace Challenges
Posted by Colin Lambert. Last updated: December 5, 2025
In an AI world that is now awash with synthetic text and hidden biases, work slop and automation bias are on the rise, and critical thinking is the ultimate scarce resource. Martina Doherty discusses a whole new set of challenges around AI in the workplace, and what individuals and organisations should be considering in order to counteract them.
I recently came across the term “work slop” – a great phrase coined by a group of researchers for AI-generated work content that appears polished but lacks the substance to add any meaningful value. Much like the low-quality AI posts that now clog up our social media feeds (“AI slop”), work slop is work that looks professional but is often shallow and of little value. For language nerds like me, there is even a verb: “to work slop”.
The relentless push in financial organisations to get everyone using AI is a big contributor to work slop. Increasing the volume of AI-generated reports, documents, or lines of code certainly creates the illusion of progress, but in reality, it often just floods inboxes with polished, low-value material; sludge that costs time and attention to interpret, correct, or redo – more often by the recipient than the creator!
Automation bias – another symptom of the AI productivity paradox – compounds the problem. At its core, this stems from the human tendency to over-trust technology, both in its capabilities and in its outputs. A clear example is the growing number of senior leaders chasing AI use cases on the assumption that doing so will improve margins – simply because that’s what AI is expected to do.
Often driven by FOMO or fear of competitors gaining an edge rather than supporting evidence, this can become an expensive distraction that outweighs any potential efficiency gains. Ironically, proven SaaS tools that could already automate and enhance certain trading processes are frequently overlooked or under-deployed, largely because, as non-proprietary technology, their costs are immediate and visible, unlike the less tangible – but often greater – expense of hours spent in brainstorming sessions and meetings.
At a more universal level, the ease and apparent authority of AI output encourages another type of automation bias: passive acceptance, where users blindly trust AI responses without checking their accuracy or reasoning – especially when pushed for time. This is a trap, I will confess, that I regularly catch myself falling into. Fake news also comes into this… how often do we really check a news story for accuracy?
A Deficit of Discernment
The danger in all of this is not just that AI makes mistakes; it’s when we as humans stop noticing when it does.
The problem isn’t necessarily bad AI (although that can be an issue); it’s more about bad use – much of which arises because very few of us are trained to question the output, not because we are lazy or dumb. AI outputs generally sound fluent, coherent, even insightful – but these qualities are linguistic, and no guarantee that the content is correct or even logical. Yet critical thinking is increasingly being replaced by a misplaced confidence in what is often, essentially, glossy rubbish.
It’s the same when senior management relentlessly push to get everyone using AI with no real understanding of what they really want it to do or guidance on how to best use it. Again, misplaced confidence in its capabilities, which, as well as being expensive, poses a serious risk for an industry that relies on rigorous analysis and judgment.
And that is why critical thinking has never been more important.
For Organisations: Invest in the Human Filter
The World Economic Forum estimates that AI investment across banking, insurance, capital markets and payments will reach $97 billion by 2027. Yet only a fraction of that will be directed towards training the people expected to use these tools effectively.
Such an imbalance risks leaving companies with faster workflows but weaker minds.
Therefore, upskilling a workforce to use AI effectively and with discernment is key, and should include:
- AI literacy training – not just how to use AI, but how to question it.
- AI-use policies – make review and validation part of the workflow.
- Cultural shifts – recognise and reward discernment and scepticism, not just speed.
- Critical thinking training – for every layer of management.
As part of this, critical thinking must be treated as essential to corporate strategy as cybersecurity is, serving as a shield against misinformation as well as a muscle that strengthens decision-making in areas where AI has no role. This covers everything from evaluating potential AI use cases to allocating capital or making headcount decisions. No algorithm will determine how to restructure a team or critically assess the cost of endless debates about AI adoption. The leaders confronting these choices will always need strong judgment and critical thinking skills to make the right calls – and if those skills continue to weaken over time, so will that capability.
Some universities are already responding to the threat of weakened minds by reintroducing handwritten exams and oral assessments – ways of testing genuine understanding rather than polished automation. Businesses will soon need to do the same, investing in appropriate training, ensuring critical thinking becomes as much a part of the AI process as the use of the technology itself, as well as creating that culture where discernment and scepticism is encouraged at all levels across the organisation.
Becoming a Sharper Thinker
Developing critical thinking isn’t just an organisational responsibility – individuals must own it too. This means cultivating curiosity by asking “why” and “how”, not just “what”. It also means seeking out different perspectives, especially those that don’t align with your own, to consider what you might be missing – and, perhaps most importantly, pausing to analyse the source and logic of information before accepting it.
Reclaiming Thought in the Age of Automation
As AI and tech continue to level the playing field, brilliant minds are what will differentiate good organisations from great ones. Leaders and employees who can skilfully interpret, challenge and improve what AI delivers, and not just create the use cases and prompts, will define the next phase of productivity. Those who can’t will drown in synthetic output that sounds smart but probably isn’t.
Critical thinking is a key part of that process. It is fast becoming the new engine of progress as well as the ultimate scarce resource and differentiator that cannot be easily replicated.
The future will belong to those who can pause, question, and reclaim thought in a new age of automation. [And in case you were wondering…these thoughts are all my own – fuelled by many conversations – and not AI-generated!]