Article
7 min read
AI jobs are moving outside tech, and they’re harder than they look

Author
Kim Cunningham
Published
December 17, 2025

For the first time, the majority of U.S. job postings requiring AI skills now sit outside traditional IT and computer science roles: 51% as of 2024, according to labor market analytics firm Lightcast.
Five years ago, tech roles dominated AI hiring. Since 2022, generative AI roles across non-tech industries have grown 800%, coinciding with the widespread adoption of ChatGPT and similar tools. AI-related job postings have grown at an average annual rate of nearly 29% over the past 15 years, outstripping the 11% rate for postings in the general economy, according to Brookings Institution analysis.
The shift tracks with AI tools spreading into line functions across healthcare, finance, logistics, retail, and legal services. But the transformation is more nuanced than job titles alone suggest. In many cases, existing operations roles are absorbing AI oversight as part of their function, creating a cluster of new tasks rather than an entirely new job.
Demand is real and broader than expected
U.S. job postings requiring at least one AI skill rose from roughly 0.5% of all postings in 2010 to 1.7% in 2024, or about 620,000 postings in 2024 alone. That’s a 240% increase over 14 years, according to research from the Federal Reserve Bank of Atlanta.
It's worth noting that methodologies differ across job posting vendors, so exact percentages vary by source. What's called "AI enablement" today may be labeled "operations excellence" or "digital workflow management" in a few years as the work matures. But the directional shift is consistent: demand is moving beyond engineering departments.
The change is visible across occupations. Marketing managers are among the top roles requiring AI expertise, alongside market research analysts, sales managers, and even chemists. In these positions, AI literacy now complements domain expertise rather than replacing it.
What AI supervision actually looks like
AI jobs outside engineering rarely announce themselves with tidy titles. They tend to grow out of business problems—a backlog of customer emails, a compliance audit, a forecast that keeps missing the mark—and then crystallize into roles once teams need someone to own the judgment calls.
David Weisselberger, a Miami-based attorney specializing in expungement cases, spends roughly 40% of his time each week reviewing AI-generated petitions, cross-checking nearly every line against applicable court rules. “Close to sixty percent of the drafts need some kind of correction,” he shares. The system struggles most with complex histories, misreads statute codes, and occasionally omits county-specific requirements. “I slow down the moment I see a long arrest history because the system often misses a detail in those files,” Weisselberger explains.
The review work is exacting, but the efficiency gains are measurable. AI tools freed up about 10 hours weekly for Weisselberger, time previously spent building petitions from scratch. “Now the drafts come to me already formed, so I spend less time typing and more time confirming that everything lines up,” he says. The shift from traditional defense practice to an AI-enabled workflow took about four months of learning what to watch for and where the system typically fails.
The supervision pattern holds across sectors. Karina Arteaga, an AI strategy advisor and former business operations leader at Meta Reality Labs, describes a similar dynamic in tech operations. She points to data science teams that built device pricing models tracking sales patterns and macroeconomic indicators accurately, yet often missing sudden regulatory changes that were shifting consumer behavior. “This is where the human element is just crucial,” she says.
“The model didn’t take this into account,” says Arteaga, who now advises organizations on AI adoption. Sales and marketing teams on the ground had to provide the context that the data couldn’t capture, overriding recommendations when models missed factors beyond their parameters.
What ties these roles together across legal services and tech operations isn’t a common title but a common posture. “It's not going to change entire workflows or functions,” Arteaga notes. “It’s going to change buckets or clusters of tasks within a particular function.”
What makes AI supervision sustainable
The difference between sustainable AI oversight and overwhelming work comes down to design and skills. Arteaga says the breaking point happens when companies assign AI supervision as additional work without restructuring workloads. “It is more work at the beginning to have less work in the future,” she explains.
Successful implementations use volunteer-based pilot teams during initial rollout, temporarily reassign other responsibilities, and set clear expectations that the transition requires more bandwidth at first. As a manager at Meta, Arteaga made a point of asking potential volunteers, “What can I take off your plate to enable you to have that time?”
The approach that generates momentum: identify early adopters who want to join pilot projects, let them report results back to their teams, then bring in dedicated enablement support for broader rollout. Weisselberger’s four-month learning curve and Arteaga’s emphasis on transparent communication both point to the same reality: the adjustment period is substantial, and organizations that treat it as trivial set workers up to fail.
The people who transition most successfully into AI-supervision roles aren’t necessarily the most technical. Arteaga identifies three critical capabilities: comfort with uncertainty, continuous learning orientation, and strong communication skills.
“I see a lot of very technical people with a very technical background struggling on this piece,” she notes. “They're brilliant, they're super intelligent, they know how to roll out very complex, technical AI projects, but they don't know how to communicate properly and speak the business language.”
Weisselberger makes a similar observation about colleagues who resist AI tools. Some trust their established habits and prefer writing everything themselves, while others embrace the speed advantage. “I tell younger attorneys that AI will be part of daily work, but their judgment is what keeps filings clean and safe for clients,” he shares.
The compensation question
What’s less clear is whether workers absorbing AI supervision responsibilities are compensated accordingly. The data shows these roles are growing and expanding in scope, but most organizations remain opaque about whether AI oversight comes with title changes, pay increases, or formal recognition of expanded responsibilities.
In some cases, companies post new roles with AI-specific titles and competitive salaries to attract external talent. In others, existing employees absorb supervision tasks as part of “continuous improvement” or “digital transformation” initiatives without corresponding adjustments to compensation or workload.
The distinction matters. If AI supervision becomes an expected competency across functions without compensation adjustments, workers face expanded responsibilities with no recognition of increased value. If organizations treat it as a specialized skill worthy of premium pay, it creates clearer career paths and incentives for workers to develop expertise.
The gap between postings and practice
Job postings and pilot metrics can be misleading. Arteaga notes that companies often report strong adoption numbers, but that deeper analysis reveals usage concentrated among early adopters. “You can have the perfect model, but if people don't trust it, they won't use it,” she says. “When you go deeper into the data, you discover that it’s 20% of your workforce that are the early adopters, the enthusiasts.”
The gap between posted roles and sustained usage remains significant. Many postings that mention AI provide little context about how tools will be deployed in practice. Researchers note that the demand for AI skills is rising across occupations outside of technology, including technical writing, marketing, and management analysis, as organizations increasingly use AI for marketing, sales, service operations, and human resources.
However, postings remain a proxy for intent, not proof of transformation. The better indicators are the quality of pattern recognition workers develop during transitions, the structures organizations build to prevent supervision work from overwhelming existing roles, and the maturity of frameworks that define when a system can act autonomously and when it must defer to human judgment.
What this means for workers and companies
Weisselberger’s 60% correction rate and four-month learning curve suggest AI supervision isn’t a marginal addition to existing work. It’s a substantial reallocation of how time gets spent. And Arteaga’s observation that adoption often concentrates among a core of early adopters suggests the transformation is unevenly distributed, even within organizations running active pilots.
For companies building these capabilities, the center of gravity has shifted from engineering to translation and oversight: the practical judgment of when to use a system, how to supervise it, and where human checkpoints matter most. Workers who develop this judgment become increasingly valuable, but only if organizations provide structural support during the transition and recognize expanded responsibilities appropriately.
The risk is treating AI supervision as costless: assign it to existing staff, expect immediate productivity gains, and wonder why adoption stalls or workers burn out. Weisselberger's experience shows that even with clear efficiency gains, the supervision work is exacting and requires sustained attention. Arteaga's observations show that without deliberate enablement and workload adjustment, adoption concentrates among enthusiasts while the majority of workers disengage.

Kim Cunningham leads the Deel Works news desk, where she’s helping bring data and people together to tell future of work stories you’ll actually want to read.
Before joining Deel, Kim worked across HR Tech and corporate communications, developing editorial programs that connect research and storytelling. With experience in the US, Ireland, and France, she brings valuable international insights and perspectives to Deel Works. She is also an avid user and defender of the Oxford comma.
Connect with her on LinkedIn.







