Article

5 min read

How AI-on-AI hiring creates more work, not less

Author

Kim Cunningham

Published

December 04, 2025

57% of companies are already using AI in hiring, and 74% say it has improved the quality of their hires, according to research from Resume.org. On the other side of the hiring table, 62% of candidates admitted to using AI during their job hunt, up from 32% just six months earlier in data from Career Group Companies. The collision is creating a hiring process where algorithms filter content written by algorithms, raising questions about whether speed gains are coming at the expense of fairness.

AI-powered recruitment tools can reduce time-to-hire by up to 50%, and adoption among hiring managers is projected to reach 83% by the end of 2025. The rapid escalation on both sides – companies automating screening while candidates automate applications – marks the fastest transformation in hiring practices in decades.

The screening stand-off

The pattern shows up most clearly in open-ended application questions. Kenzie Hoelscher, senior recruiter at AssemblyAI, sees it regularly when reviewing candidates. The company includes a standard question, “Why do you want to work here?” and AI-generated responses are increasingly obvious. “I'll be going through applications, and I'll see the same thing over and over and over,” Hoelscher says. “Everyone has the same exact answer.”

Hoelscher works at an AI company, so she and her team are well versed in the technology and far from opposed to it. They use AI themselves for sourcing candidates and refining outreach templates. But they’ve drawn a line at application questions and interviews. “We don't want to know why ChatGPT or Gemini wants the job. We want to hear why you do.”

To address the issue, the company created a dedicated webpage explaining how candidates should use AI during the hiring process. Use it for research and preparation, the guidelines state, but not during interviews. When Hoelscher detects AI use in video interviews (yes, it happens), it registers as a yellow flag. “It's almost like you're not following the guidelines that we gave you,” she says.

The detection methods are evolving. Hoelscher has started asking multi-part questions during screening calls, where most people naturally ask for clarification or forget the third part of a complex question. Candidates using AI during the interview will typically answer all three parts in sequence, perfectly aligned, after a noticeable pause, likely while the tool generates a response. “It takes a while for AI to generate three parts to a question, and you can see them stalling before they get to that point,” Hoelscher shares.

Yet even with these workarounds, the company doesn’t automatically reject candidates who use AI for written applications. If the rest of the resume is strong, they move forward, addressing it directly in the interview. The concern isn’t the tool, but whether the candidate can perform without it. “We're hiring you, not the AI tool,” Hoelscher says. “It's almost setting yourself up for failure in a way if you get in the role.”

The transparency gap

While candidates navigate informal guidelines, formal disclosure policies remain sparse. Only a handful of U.S. jurisdictions require companies to inform applicants when AI makes hiring decisions. Colorado and Illinois laws requiring employers to notify job candidates when AI is used in hiring don’t take effect until 2026, and employers operating outside jurisdictions with notice requirements don’t have to disclose the use of AI or automated tools at all.

The asymmetry matters. Companies increasingly use AI to filter resumes, with organizations reporting cost reductions of up to 30% per hire when using AI-driven platforms. But accuracy concerns persist. A study from the University of Washington testing AI resume screening found that resumes with white-associated names were preferred in 85.1% of cases, while those with female-associated names received preference in just 11.1% of tests. The same research showed Black male job seekers facing the steepest disadvantage, with resumes featuring Black male names favored in 0% of cases against white male names*.

Hoelscher acknowledges the limitations of AI screening, sharing that despite using filters in her chosen ATS to surface candidates with specific skills, her team still reviews every application manually. “There are times that [platform] will generate profiles, and it will miss out on a lot of really great profiles just because they don’t have certain keywords in their profile,” she explains. Manual review catches what automation overlooks, but it also reveals the paradox of dual automation: more AI on both sides hasn’t reduced the workload.

Applications per hire are up approximately 182% since 2021, according to data from the applicant tracking system provider Ashby. Candidates submit more applications using AI tools. Companies process more applications using AI tools. The volume increase appears to be canceling out the efficiency gains. One survey from Resume Builder found that nearly all companies acknowledge that AI hiring tools can introduce bias, yet adoption accelerates anyway.

Hoelscher’s definition of fair AI use applies to both sides: “Use AI to enhance your work, but don’t use AI to do your work for you.” For recruiters, that means letting AI handle template refinement while humans make hiring decisions. For candidates, it means using AI for research and preparation, not for outsourcing thought entirely.

The digital duel shows no signs of slowing. Speed is up, costs are down, and many companies report better hires. But whether human judgment can keep pace with the systems claiming to enhance it remains an open question. As both sides automate, the challenge becomes ensuring efficiency doesn’t outrun fairness.

*The researchers on the University of Washington study augmented real resumes with 120 carefully selected names that linguistic studies have shown are strongly associated with specific racial and gender groups.

Kim Cunningham leads the Deel Works news desk, where she’s helping bring data and people together to tell future of work stories you’ll actually want to read.

Before joining Deel, Kim worked across HR Tech and corporate communications, developing editorial programs that connect research and storytelling. With experience in the US, Ireland, and France, she brings valuable international insights and perspectives to Deel Works. She is also an avid user and defender of the Oxford comma.

Connect with her on LinkedIn.