
Article

7 min read

Companies want AI-literate workers, but don't know how to hire them


Author

Kim Cunningham

Published

March 31, 2026

Organizations are racing to build AI-literate workforces. According to November 2025 research from McKinsey, demand for AI fluency jumped nearly sevenfold in the two years through mid-2025, from approximately 1 million workers in 2023 to about 7 million in 2025. LinkedIn's January 2026 labor market report found a 70% year-over-year increase in U.S. roles that require AI literacy.

There’s a paradox here: companies desperately need this skill, yet they have no reliable way to assess whether candidates actually have it. “You've got motivation to hire an AI-literate workforce, yet there's an immaturity within your existing organization in terms of the maturity of AI deployment,” says Alan Price, Senior Director of Talent Acquisition at Deel. “That creates a situation where the interviewer doesn't really know how to assess the very skill you're trying to hire for.”

The problem isn't just finding AI-literate candidates, but that hiring managers themselves often lack the frameworks, tools, and experience to evaluate AI proficiency during interviews. Traditional hiring methods are breaking down, and companies haven't figured out what replaces them.

Three types of companies

Price sees three types of organizations emerging in how they approach AI use during hiring. The first type's view is, "AI is cheating, and I don't want it as part of my application process," Price says. These organizations prohibit candidates from using AI tools during any stage of hiring.

The second type wants AI-literate workforces. These companies actively encourage AI use and view it as a signal of the exact capability they're hiring for. “At Deel, we want an AI-literate workforce,” Price says. “It doesn't make sense to tell people we expect them to be AI-literate in their roles, but then prohibit them from using AI during the application process.”

The third type hasn't made a decision. The approach remains inconsistent, with some managers viewing AI use as cheating, others seeing it as valuable, and candidates facing unpredictable expectations.

The shift toward the second category is accelerating. But that shift amplifies the assessment problem: more companies want AI literacy, yet few know how to evaluate it.

The assessment crisis

The challenge runs deeper than many organizations realize. According to IDC research, 94% of enterprise leaders identify AI as their top in-demand skill for 2025, 39% of respondents complain about a lack of appropriate AI education, and 35% complain about having no time to learn new skills.

That internal skill gap creates an external hiring problem. If your existing team hasn't mastered AI deployment, how do you assess whether a candidate has the skills you need?

Price points to a concrete example: case studies. For years, companies used case studies to evaluate how candidates would approach business problems. “Pre-AI, you would have a case study: here's a business problem we're trying to solve, how would you deal with it? A lot of organizations have stopped that practice because people use AI, and you can't tell if you're testing their expertise or their prompting ability.”

The expertise test is disappearing, but nothing has replaced it. An Express Employment Professionals and Harris Poll survey found that 26% of hiring decision-makers say they struggle to evaluate candidates' informal or self-taught skills. This is precisely the category where AI literacy falls for most candidates today.

Price describes the current state: “If you're interviewing someone to assess their AI literacy, how would you actually do it? You might get buzzwords, ‘Oh, I've used Claude Code,’ but can you determine if what they did was simple or complex? Most interviewers don't have the expertise to validate that.”

Four levels nobody knows how to test

Price has developed a framework for thinking about AI literacy that spans four levels, but he's frank that most organizations don't know how to assess any of them systematically.

Level one is basic literacy: Can you write effective prompts? Do you understand how to interact with AI tools to get useful outputs?

Level two is novice application: Can you experiment with tools like Claude Code and understand how they work, even if you're still learning?

Level three is intermediate proficiency: Are you driving real results? Can you use AI to accomplish tasks that previously required significantly more time or expertise?

Level four is advanced capability: Can you automate processes? Are you building agentic AI solutions that operate with minimal human intervention?

“How do you test for those today?” Price asks. “Most companies don't have an answer. They might ask about AI experience, but without structured frameworks, the assessments stay subjective.”

The lack of assessment infrastructure compounds the problem. Price suggests that testing AI literacy probably requires a secure environment, such as opening a unique Claude session during the interview to watch how a candidate approaches a problem. But questions remain about whether this captures real capability, whether tests should be role-specific, and how to prevent workarounds.

The trade-off question

With this assessment gap comes a strategic dilemma: what trade-offs are companies willing to make between AI proficiency and traditional experience?

"Imagine someone who's at an advanced level with AI for an account executive role, but you only need a medium level," Price explains. "They have three years of sales experience instead of the five you normally require, but they're exceptionally strong with AI. What trade-offs do you make?" The question gets more pointed when considering early-career candidates. Price notes that graduates who are "AI-native" may be able to leverage AI to learn faster and perform at higher levels than their experience alone would suggest.

The traditional requirement for five, six, or seven years of in-seat experience may start to erode. But most organizations haven't made explicit decisions about these trade-offs. They're posting job descriptions that require both extensive experience and AI literacy, without clarity on which matters more.

The scale problem

Part of why Deel faces these questions urgently is the sheer volume of applications the company processes. Price says Deel received 1.3 million applications in 2025 and is tracking toward 2 million in 2026. One week in March alone saw 100,000 applications. At that scale, AI assistance in screening becomes necessary. Deel uses AI to filter applications based on criteria and to surface candidates from their database who might match open roles. They've also experimented with AI interviewers for high-volume hiring events, including one virtual event that saw 7,000 attendees and 180 people hired, earning a Guinness World Record for Deel.

But Price distinguishes between AI's role in different stages of hiring. He breaks AI applications into three categories: assistive, selective, and predictive.

  • Assistive AI helps write better job descriptions, interview question banks, and rubrics. “It's just a time saver,” Price says. “Why would you write interview questions manually when you can generate them in five minutes?”
  • Selective AI makes screening decisions based on CVs and candidate data. This level requires product development and coding, but it's operational at scale for companies like Deel.
  • Predictive AI is where Price sees the future, though with regulatory considerations. “Imagine being able to input CV data, interview responses, scorecards, and other factors, then ask AI: what's the likelihood of success? What are the warning signs?” The next evolution would connect performance data, HR data, and recruiting data to build prediction models.

The regulatory piece is noteworthy because autonomous AI hiring decisions raise fairness and transparency concerns. But the trend toward more sophisticated AI-assisted evaluation is clear.

The trust contract

Despite the push toward AI-assisted hiring, Price is firm that humans must make final decisions. Beyond technical interest, it comes down to the fundamental nature of employment relationships. “When you join a company, it's very much a human endeavor,” Price says. “You're making a decision because you want to grow your career, provide for your family, have a good quality of life.”

When organizations outsource hiring decisions to AI entirely, they break what Price calls the “trust contract.” He says, “If AI recommended hiring you but the hiring manager didn't have conviction, or if they fully outsourced the decision to AI and something doesn't work out, they'll blame the AI. They need to own the decision. They need to be accountable.”

The value proposition of joining a company includes mission alignment, fair compensation, growth opportunities, and culture. “If you feel like you're working with robots and every decision is automated, what influence or impact can you really have? AI should enable human judgment, not replace it.”

What needs to happen

The path forward requires companies to develop what they currently lack: structured frameworks for assessing AI literacy that interviewers can actually use.

Price suggests this might involve role-specific exercises in secure environments where candidates demonstrate AI use in realistic scenarios. It requires clearer decisions about trade-offs between AI proficiency and traditional experience. And it demands investment in training hiring managers, which means first building AI maturity internally.

The labor market data suggests urgency. According to PwC's 2025 Global AI Jobs Barometer, workers with AI skills command a 56% wage premium, up from 25% the previous year. The IDC data estimates that skills shortages may cost the global economy up to $5.5 trillion by 2026.

McKinsey research found that only 1% of organizations have reached AI maturity. The gap between widespread AI adoption and organizational capability to use it effectively is precisely where the hiring challenge bites hardest. Companies have the tools. They're using AI in operations, marketing, customer service, and recruiting itself. What they don't have is a workforce that fully understands how to leverage those tools, or a reliable way to identify candidates who do.

Until that changes, the paradox persists: organizations will continue posting jobs requiring AI literacy while lacking the internal capability to assess whether candidates possess it. The winners will be the companies that solve this assessment problem first, building frameworks that let them identify and hire the AI-literate talent everyone else is still guessing about.


Kim Cunningham leads the Deel Works news desk, where she’s helping bring data and people together to tell future of work stories you’ll actually want to read.

Before joining Deel, Kim worked across HR Tech and corporate communications, developing editorial programs that connect research and storytelling. With experience in the US, Ireland, and France, she brings valuable international insights and perspectives to Deel Works. She is also an avid user and defender of the Oxford comma.

Connect with her on LinkedIn.