
Article

6 min read

Teaching AI to think: The 70,000 workers behind AI training


Author: Kim Cunningham

Published: March 24, 2026

Two years ago, the job of AI trainer barely existed. By the end of 2025, more than 70,000 people globally were working in the role across 600+ organizations. The profession grew 283% in cross-border hiring alone last year, making it the fastest-growing role on Deel's platform, according to the company's 2025 Global Hiring Report.

The work itself is deceptively simple to describe: AI trainers label data, provide feedback on AI outputs, and help refine how systems respond. But the reality is far more complex than that suggests. The role encompasses everything from $15-per-hour data annotators in the Philippines to $100-per-hour physicians in the U.S., all working to teach AI systems how to reason and respond across languages, domains, and professional contexts.

The profession's explosive growth raises an immediate question: is this a temporary phase as AI companies scale up, or a permanent fixture of the labor market? The answer appears to depend heavily on what kind of AI training work you're doing.

What AI trainers actually do

AI trainers support what's known as reinforcement learning from human feedback, or RLHF. The process involves humans reviewing AI outputs, rating their quality, correcting errors, and providing guidance that helps the system learn what good responses look like. Without this human feedback loop, AI systems struggle to distinguish between technically correct answers and genuinely useful ones, or to calibrate their responses appropriately for different contexts.
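The feedback loop described above can be sketched in miniature. The toy code below, with entirely illustrative features and simulated ratings (nothing here reflects any real lab's pipeline), shows the core idea behind RLHF-style preference learning: human raters compare pairs of responses, and a simple Bradley-Terry reward model learns which qualities predict the preferred answer.

```python
import math
import random

random.seed(0)

# Each candidate response is reduced to two toy features:
# (is_technically_correct, is_genuinely_useful)
def features(resp):
    return (resp["correct"], resp["useful"])

weights = [0.0, 0.0]  # reward-model parameters, one per feature

def reward(resp):
    return sum(w * x for w, x in zip(weights, features(resp)))

def train_step(preferred, rejected, lr=0.1):
    # Bradley-Terry: P(preferred beats rejected) = sigmoid(r_p - r_r)
    margin = reward(preferred) - reward(rejected)
    p = 1 / (1 + math.exp(-margin))
    grad_scale = 1 - p  # larger update when the model is unsure
    fp, fr = features(preferred), features(rejected)
    for i in range(len(weights)):
        weights[i] += lr * grad_scale * (fp[i] - fr[i])

# Simulated human labels: raters prefer answers that are genuinely
# useful, not merely technically correct.
for _ in range(500):
    a = {"correct": 1, "useful": random.random() < 0.3}
    b = {"correct": random.random() < 0.7, "useful": 1}
    train_step(preferred=b, rejected=a)

# After training, the reward model weights "useful" above "correct".
print(weights[1] > weights[0])  # True
```

In a real lab this loop runs over model-generated text with thousands of human raters, but the structure is the same: human comparisons become a reward signal the system can optimize against.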

According to the report, AI trainers range from annotators performing data labeling to highly skilled subject matter experts in fields as diverse as economics, medicine, and translation. The platform identified nine distinct AI trainer specializations: generalist, mathematician, biologist, physicist, software tester, translator, chemist, psychologist, and doctor.

The geographic distribution reflects this diversity. While 58% of AI trainers work in the U.S., significant clusters exist in India (7.2%), the Philippines (4.6%), Canada (2.1%), and Kenya (1.7%), reflecting the combination of lower labor costs, English proficiency, and regional expertise that AI labs need for global model development.

The report notes that AI labs often need to verify outputs with native speakers or experts from specific regions, requiring a globally distributed workforce rather than concentration in traditional tech hubs alone.

The pay bifurcation

The compensation data reveals just how varied the work is. According to the report, 30.3% of AI trainers earn $15-20 per hour, while 19.1% earn $50-75 per hour, and 6.1% earn $100 or more per hour. The report explains that this distribution reflects the role's diversity: basic annotation work commands lower rates, while subject matter experts in specialized fields earn significantly more.

The pay data also reveals occupational segmentation by gender. In the U.S., male AI trainers earn a median of $50 per hour compared to $30 per hour for female trainers. But the gap reflects sorting across specializations: 55% of psychologist AI trainer roles are held by women, while only 26% of mathematician AI trainer roles are. Similar patterns appear in other high-income markets, mirroring broader STEM dynamics where women cluster in life sciences and social sciences while men dominate in mathematics and physics.

The gap isn't just about skill level or gender, but about what the AI is being trained to do. Teaching an AI to recognize objects in images requires different expertise than teaching it to provide feedback on professional legal reasoning or medical diagnosis. The former might involve annotators labeling thousands of images. The latter requires subject matter experts who understand not just whether an answer is technically correct, but whether it demonstrates appropriate professional judgment for someone at a specific stage of their career.

When training AI requires deep expertise

Sarah Gumpinger, learning designer and co-founder of Juniper Learning Design, has spent the last few years building JuniperAI, an AI feedback system serving thousands of learners annually. The experience taught her that training AI for high-stakes professional contexts requires far more human expertise than she initially expected.

“We had higher expectations of the AI initially when we started the project,” Gumpinger says. “We very quickly realized that we need a significant amount of investment to hire subject matter experts that are going to train the AI alongside the developers, and they need to be fairly senior in the role or the domain that we are developing the interaction for.”

While technical accuracy is key, it’s not the only challenge. When training professionals, whether in law, accounting, or kinesiology, the goal is developing critical thinking, judgment, and decision-making skills, Gumpinger explains. Those expectations change dramatically depending on whether someone is entering a program, about to enter the profession, or already practicing.

“The AI just cannot seem to get that right,” Gumpinger says of calibrating those expectations. “It's not about 'technically correct or not,'” she adds. “It's really about the application of those higher order skills.”

The AI, she found, defaults to expectations that are either too low or what she calls “handwaving”: vague, fluffy feedback that wouldn't meet professional standards. For each learning activity, her team creates what they call a “recipe card” with extensive contextual information about what they're expecting from learners and why.
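A "recipe card" in this sense is essentially a structured bundle of context handed to the AI grader for one activity. The sketch below is a hypothetical illustration of that idea; every field name is invented here, not Juniper's actual schema.

```python
from dataclasses import dataclass

# Hypothetical shape of a "recipe card": the context an AI grader
# needs to calibrate feedback to a learner's stage. Field names are
# illustrative, not taken from any real system.
@dataclass
class RecipeCard:
    activity: str            # the learning activity being graded
    learner_stage: str       # e.g. "entering program", "already practicing"
    expected_skills: list    # higher-order skills to look for
    common_pitfalls: list    # known weak-response patterns to flag
    rubric_notes: str        # what "good" looks like at this stage, and why

card = RecipeCard(
    activity="Draft a client advice memo",
    learner_stage="about to enter the profession",
    expected_skills=["issue spotting", "weighing alternatives",
                     "clear recommendation"],
    common_pitfalls=["restating facts without analysis", "vague hedging"],
    rubric_notes=("A strong memo commits to a recommendation and justifies "
                  "it; technically correct but noncommittal answers fall "
                  "short at this stage."),
)

# The card would be serialized into the grading prompt so feedback is
# anchored to stage-appropriate expectations rather than generic praise.
print(card.learner_stage)
```

The design point is that the expectations live in explicit, reviewable structure written by subject matter experts, rather than being left to the model's defaults.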

Then comes the testing phase, which revealed another limitation: AI-generated sample responses don't work for training. “When we get the AI to write sample responses and then use those to train it, it becomes a bit of a circle where it's reviewing itself,” Gumpinger shares. “It's not really authentic to what a real student would do.”

Instead, they hire real people at the appropriate skill level to write sample responses, sometimes 25 to 50 people for a single learning activity. “The AI is too predictable,” she says. “Even when I tell it 'write like a student would, make these mistakes,' it doesn't really work.”

The result of this intensive human training is accuracy in the high 90% range when back-tested against thousands of responses, higher than what human graders achieve. But getting there requires what Gumpinger describes as “putting an expert in a box.”

“What would a coach that is very well-versed in mentoring students on this topic do?” she says. “How do we put them in a box and then replicate it? That's basically what we're doing.”
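The back-testing mentioned above is, at its core, an agreement rate between the AI grader and expert human labels on held-out real responses. A minimal sketch with fabricated data:

```python
# Toy back-test: compare the AI grader's verdicts against expert human
# labels on the same set of learner responses. Data is fabricated for
# illustration only.
expert_labels = ["pass", "fail", "pass", "pass", "fail",
                 "pass", "pass", "pass", "pass", "pass"]
ai_verdicts   = ["pass", "fail", "pass", "pass", "fail",
                 "pass", "fail", "pass", "pass", "pass"]

# Fraction of responses where the AI agrees with the expert grader.
agree = sum(e == a for e, a in zip(expert_labels, ai_verdicts))
accuracy = agree / len(expert_labels)
print(f"{accuracy:.0%} agreement with expert graders")  # 90% agreement with expert graders
```

At production scale the same calculation runs over thousands of responses, which is what makes a "high 90% range" claim meaningful rather than anecdotal.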

A profession still defining itself

The intensive expertise required for high-stakes AI training creates a market gap. When Gumpinger tries to hire AI trainers, she finds they're hard to come by. “It's not a defined discipline yet, it's almost an art,” she says. “The market hasn't quite caught up with what exactly that role looks like.” She distinguishes between general AI trainers, who understand the technical process of training models, and the subject matter experts her work requires. “Organizations like mine would layer them together with subject matter experts,” she explains.

The lack of formal education pathways is part of the problem. “I've taught at universities for many years and I'm looking forward to some formal education to catch up on that and a bit more of a reframing and some more consistency around what that means,” Gumpinger says.

This aligns with the report's finding that the profession is growing rapidly but remains undefined. The 283% growth in 2025 represents expansion from a near-zero baseline just two years earlier. The market is scaling faster than educational institutions or professional standards can keep pace with.

How long will these jobs last?

When asked whether AI training is a viable long-term career, Gumpinger's answer is nuanced. “At this point, we're certainly not there yet. We have a long way to go,” she says. The work has gotten easier as AI improves: “It gets better and our level of work goes down for sure,” but she doesn't see human expertise becoming obsolete. “I don't know how we would ever get there without knowing who the human is that we're trying to replicate first,” Gumpinger says. “There has to be a starting point and that's imagining what that human is able to do in these very nuanced specific circumstances.”

The stakes matter, too. Professional certification programs serve regulated professions with duties to uphold public trust. “Whether it be doctors, lawyers, it's very important that we spend the time upfront,” she says. “The human capital that we put in is probably higher than someone developing a tool that might not have those stakes attached to it.”

The report suggests the profession will continue growing in the near term. With 70,000+ workers across 600+ organizations and 283% year-over-year growth, AI training has reached critical mass as a genuine category of work. But the long-term outlook depends on which segment of the profession you're in.

Basic data annotation and labeling (the $15-20 per hour work) faces the most automation pressure as AI systems improve at recognizing patterns and labeling data themselves. The expert-level work that requires deep domain knowledge, professional judgment, and the ability to calibrate expectations for specific contexts appears more durable. You can't automate the work of defining what good professional reasoning looks like until you've automated professional reasoning itself.

The profession's future likely resembles its present: a bifurcated labor market where routine training tasks gradually automate while demand for subject matter expertise persists. The 70,000 AI trainers working today are training machines to mimic human judgment. Whether those jobs still exist in five years depends largely on what kind of judgment they're teaching.


Kim Cunningham leads the Deel Works news desk, where she's helping bring data and people together to tell future-of-work stories you'll actually want to read.

Before joining Deel, Kim worked across HR Tech and corporate communications, developing editorial programs that connect research and storytelling. With experience in the US, Ireland, and France, she brings valuable international insights and perspectives to Deel Works. She is also an avid user and defender of the Oxford comma.

Connect with her on LinkedIn.