AI Won’t Do It for You - The professional cost of “just ask ChatGPT”
What happens when your auditor doesn't check their work?
- Do they lose you as a client?
- Destroy their reputation?
- Or pay millions of dollars in fines because of the mistake?
The Australian government recently demanded that Deloitte refund part of its roughly USD$290,000 fee over the misuse of artificial intelligence in a report on the government's welfare system.
Since OpenAI released ChatGPT to the public in late 2022, many professionals have had run-ins with AI mistakes. Over the past year, the South African legal system has been rocked by a string of AI scandals, and many legal professionals have since called for rules governing AI usage in court.
Overlooked sign-offs, made-up cases and many court hearings later, AI isn't all it's cracked up to be. Which raises the question: who's at fault?
- Those who developed these AI programs?
- The professionals who are using AI in their work?
- Or the organisations that fail to implement AI effectively?
Don't forget: AI is a relatively new concept to the public. It's not even truly artificial intelligence. It's a marketing term, made easy for those of us not in the industry to understand.
What we are really talking about is probabilistic pattern prediction: these programs recognise statistical patterns in their training data and use them to predict what comes next.
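To make that concrete, here is a deliberately tiny, purely illustrative sketch of pattern-based prediction in Python. It is a toy word counter, not how ChatGPT or any real product is built (real systems use neural networks trained on vast amounts of text), but it shows the basic idea: count which word tends to follow which, then predict the most probable next word.

```python
from collections import Counter, defaultdict

# Toy illustration of "probabilistic pattern prediction":
# count which word follows which in some sample text,
# then predict the most likely next word from those counts.
training_text = (
    "the auditor checked the report "
    "the auditor signed the report "
    "the auditor checked the figures"
)

# Build bigram counts: for each word, how often each other word follows it.
counts = defaultdict(Counter)
words = training_text.split()
for current_word, next_word in zip(words, words[1:]):
    counts[current_word][next_word] += 1

def predict_next(word: str) -> str:
    """Return the most frequently observed word following `word`."""
    followers = counts.get(word)
    if not followers:
        return "<unknown>"  # no pattern seen for this word
    return followers.most_common(1)[0][0]

print(predict_next("auditor"))  # -> "checked" (seen twice vs "signed" once)
print(predict_next("the"))      # -> "auditor" (its most frequent follower)
```

Because a program like this only ever predicts what looks plausible given the patterns it has seen, it will happily produce fluent output with no regard for whether that output is true - which is exactly how hallucinations arise.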

So what really happened between the Australian government and Deloitte?
The report reportedly cited non-existent legislation and mentioned a book attributed to an academic who doesn't even work in the legal field.
"I'd never heard of the book, and it sounded preposterous," said Chris Rudge, a Sydney University Researcher who first picked up on the fabricated information.
These made-up details are the result of a phenomenon called AI hallucination, in which these programs invent facts, sources or citations that don't actually exist.
If this happens so often, what can companies and professionals do to mitigate the risk?
Outlining AI use: Have clear and detailed policies
Without clear instructions, many employees fail to use AI appropriately. When OpenAI released ChatGPT, almost everyone jumped on the bandwagon:
"Let's use AI!"
"AI is going to change the world, and our business"
Globally, employees feel ready to use AI, but many companies lack the structure to support it.
In one survey, 44% of respondents say their organisation has begun integrating AI, but only 22% say their organisation has communicated a clear plan or strategy for doing so.
As a leader, even if you think some things can remain unsaid - like not using AI to replace critical thinking - people require clear communication. There is no room for unspoken rules. Leaders and those in charge of implementing AI need to make it explicit: "Here is how you can and cannot use AI."
As these professional cases show, skills that took years to develop - critical thinking, analysis, or simply making sure you've dotted your i's and crossed your t's - have been left to the robots. And the proof is in the pudding: AI doesn't replace human intelligence.
It learns patterns and uses them to predict outcomes. But without a human brain, without human experience, professional mistakes are bound to happen. All those years of learning to read, finishing school, completing your degree - AI could never function like your brain.
Culture Matters More Than Technology
The Deloitte incident wasn’t a failure of AI—it was a failure of how AI was positioned inside the organisation.
When companies treat AI as a shortcut to speed, employees naturally prioritise faster output over accuracy. Global research supports this: a KPMG–University of Melbourne study found that 57% of employees secretly use AI because they’re unclear on what is allowed, and only 47% have received any training.
Similarly, Salesforce’s global survey showed 7 in 10 workers have never been trained to use AI safely, yet many are already using it in their daily work. AI is used quietly, inconsistently, and without proper verification.
In organisations where leaders frame AI as a thinking partner rather than a replacement for judgement, employees are more likely to check, verify, and apply reasoning. If you reward speed, you get fast mistakes; if you reward diligence, you get better work, done faster.
Reputational damage: The humiliation
The cost of not verifying AI output isn't just embarrassment; it undermines the organisation itself. The financial settlement made headlines, but the reputational fallout is far harder to undo.
These institutions have built credibility over decades on a track record of accuracy, independence, and professional judgement. All of that was called into question in a single news cycle. When a firm makes such a mistake, it undermines public trust not only in the firm, but in the profession itself.
Clients will start thinking: “If this happened once, where else could it be happening?”
Competitors don't need to attack; the seeds of doubt have already been sown. Rebuilding trust after an AI-related error is significantly harder than preventing it, because reputation is a lagging indicator: once damaged, it recovers slowly.
The lesson is clear: AI mistakes are not only operational risks; they are brand risks that can permanently alter how your organisation is perceived.