AI in Talent Acquisition: The Uncomfortable Truths Leaders Need to Hear
Recently, a few of us from the team had the privilege of attending a panel discussion on AI in Talent Acquisition, and I left with more questions than answers—which is exactly what made it valuable. Hosted by Levyl’s CEO and founder Sarah Bolitho, the panel featured Dr. Nici Sweaney (freshly crowned Female AI Leader of the Year), Steve Reid (CPO at Australian Payments Plus), and Matt Dunn (Executive Manager of AI Capability at Quantium). The conversation was refreshingly honest about where AI is helping, where it's failing, and what leaders need to do about it.
Let me share what's keeping me up at night—and what should probably be keeping you up too.
The 35-Year-Old White Guy Problem
Dr. Nici opened with a metaphor that made everyone uncomfortable: "AI is basically a 35-year-old white American dude." She quickly clarified: not the worst thing in the world, but you don't always want that person making all your hiring decisions.
The thing is, AI is a data-churning tool. It finds patterns, tells stories, and reinforces whatever exists in its training data. That includes bias, stereotypes, and systemic inequities we've been trying to eliminate for decades.
Stanford recently ran a study in which ChatGPT generated 35,000 CVs. Almost every time it wrote a woman's CV, it positioned her as younger and less qualified than her male counterpart, even when the underlying qualifications were identical. Then, when asked to rank those CVs, guess who came out on top?
This isn't a hypothetical risk. This is happening right now, in organisations that believe they're using "objective" AI to make better decisions.
The Literacy Gap That's Costing You
Here's a stat that should concern every transformation leader: when Matt asked a room full of CFOs who had used ChatGPT, fewer than 25% raised their hands.
Some even called it "the next Y2K consulting scam."
Another panellist asked university students the same question: 100% of them use it every day.
Let that sink in. The people about to enter your workforce are AI-native. They expect it like they expect WiFi. And if you've blocked ChatGPT at the enterprise level? They're using it on their phones anyway.
The gap isn't just generational; it's existential. Leaders who don't understand AI can't lead AI transformation. It's that simple.
Dr. Nici was emphatic: "AI literacy. AI literacy. AI literacy." You can't rely on compliance rules alone to keep people safe. You need to build a culture where people understand how to engage ethically with these tools.
The Friction Paradox
Here's where things get really interesting. Dr. Nici explained that AI is designed to keep you using it, and the best way to do that is to make you feel good about yourself. It's the world's most agreeable colleague: always validating, always helpful, never pushing back.
Sounds great, right? Wrong.
We need to create friction with AI, not eliminate it.
Think about how most people use AI in recruitment: "Here are 5 CVs. Which is the best one?"
AI picks one. You accept it. Bias amplified. Decision made. Everyone moves on.
Now compare that to this approach: "Score these 5 CVs against criteria X, Y, and Z. Explain your reasoning. Now give me five counterarguments. What perspectives am I missing? What would someone arguing against this decision say?"
See the difference? One is automation for efficiency's sake. The other is augmentation that makes you think harder, not faster.
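For teams that want to operationalise this, the friction pattern is straightforward to encode in a prompt. Here's a minimal sketch, assuming an OpenAI-compatible chat API; the model name, criteria, and function are illustrative placeholders, not a recommendation of any particular vendor or scoring rubric:

```python
# A minimal sketch of the "friction" prompting pattern.
# Assumes the openai Python package and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

# Placeholder criteria; substitute your own role requirements.
CRITERIA = ["stakeholder management", "payments domain experience", "people leadership"]

def score_with_friction(cv_texts: list[str]) -> str:
    """Score CVs against explicit criteria, then force the model to argue against itself."""
    cvs = "\n\n".join(f"CV {i + 1}:\n{cv}" for i, cv in enumerate(cv_texts))
    prompt = (
        f"Score each CV below against these criteria: {', '.join(CRITERIA)}.\n"
        "Explain your reasoning for every score.\n"
        "Then give five counterarguments to your own ranking, "
        "list the perspectives you may be missing, "
        "and state what someone arguing against this decision would say.\n\n"
        f"{cvs}"
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; use whichever model your organisation has approved
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content
```

The point isn't the code; it's that the counterargument steps are built into the prompt, so the most convenient answer is never the only answer you see.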
Steve Reid put it perfectly when discussing their approach at Australian Payments Plus: They're not just deploying AI everywhere because they can. They're building use cases around three criteria: does it improve efficiency, effectiveness, or experience? No checkbox? No deployment.
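To make that rule concrete, here's a hypothetical sketch of the gate; the UseCase fields and should_deploy function are my own illustration, not Australian Payments Plus's actual tooling:

```python
from dataclasses import dataclass

# Hypothetical sketch of a three-E deployment gate; names are illustrative.
@dataclass
class UseCase:
    name: str
    improves_efficiency: bool
    improves_effectiveness: bool
    improves_experience: bool

def should_deploy(uc: UseCase) -> bool:
    """No checkbox? No deployment."""
    return uc.improves_efficiency or uc.improves_effectiveness or uc.improves_experience

print(should_deploy(UseCase("quarterly goal writer", True, False, True)))   # True
print(should_deploy(UseCase("AI for AI's sake", False, False, False)))      # False
```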
What Humans Are Actually For
Dr. Nici shared some predictions that honestly made my brain hurt: by 2030, we might not have physical computing devices. By 2040, you might be offered a neurally embedded AI. And as soon as next year, $20,000 humanoid robots that can cook and clean.
And soon? You'll hire a humanoid robot to work alongside your team.
The prediction that AI will automate 70% of what the average person does in the next two years isn't coming from sci-fi writers anymore. It's coming from people implementing these systems every day.
So the question isn't "will AI change work?" It's "what will our humans be exceptional at when AI handles everything else?"
Steve's approach is telling. When his team got excited about building AI models to source candidates, he pumped the brakes. Instead, they focused on painful, tedious tasks nobody enjoys: writing goals, drafting board papers, generating interview questions. They built something called "Goal Digger" (yes, really) that helps employees write quarterly goals because, let's be honest, nobody enjoys that process.
That's the sweet spot. Use AI to eliminate soul-crushing tasks so humans can focus on what they're uniquely good at: strategic thinking, empathy, creativity, relationship building, ethical judgment.
Your Move
If you take nothing else from this, remember these three things:
First, educate before you deploy. AI literacy isn't optional anymore. Your leaders need to use these tools, understand them, and model ethical engagement.
Second, create friction, not compliance. Train your teams to interrogate AI outputs like a sceptical board member. Demand explanations. Seek alternative perspectives. Stay sceptical.
Third, decide what humans are for. Don't just automate tasks and hope for the best. Actively invest in developing the capabilities that make your people irreplaceable.
The organisations that get this right won't be the ones with the most AI. They'll be the ones who use AI to make their humans more human.
And that's a transformation worth pursuing.
Want to dive deeper into ethical AI deployment and digital transformation strategies? Let's talk about how your organisation can lead this change rather than react to it.