AI Can’t Replace You at Work. Here’s Why.

Generative AI is nowhere near being able to operate without intensive human oversight. Even seemingly mundane communication tasks cannot be feasibly automated without losing important information. For now, AI is still just a tool for workers to use, not a replacement for them.

Workers can stop worrying about being replaced by generative artificial intelligence.

University of Pennsylvania Wharton School experts Valery Yakubovich, Peter Cappelli, and Prasanna Tambe believe it isn’t going to happen as drastically as many predict. In an essay published in The Wall Street Journal, the professors contend that AI will most likely create more jobs for people because it needs intensive human oversight to produce usable results.

“The big claims about AI assume that if something is possible in theory, then it will happen in practice. That is a big leap,” they wrote. “Modern work is complex, and most jobs involve much more than the kind of things AI is good at — mainly summarizing text and generating output based on prompts.”

Yakubovich recently spoke to Wharton Business Daily, offering several key points he hopes will allay people’s fears of robotic replacement. First, while generative AI has advanced rapidly, it still has a long way to go before it can function autonomously and predictably.

Second, large language models (LLMs) like ChatGPT can process vast amounts of data, but they cannot reliably parse it and are prone to producing misleading information, a phenomenon known as AI hallucination. “You get this output summary — how accurate is it? Who is going to adjudicate among alternative outputs on the same topic? Remember, it’s a black box,” said Yakubovich.

Third, companies are risk-averse and need to maintain a high degree of efficiency and control to be successful. So they won’t be rushing to lay off all their people in exchange for technology that still has a lot of bugs to work out. “If we are thinking 40, 50 years ahead, that’s wide open-ended,” Yakubovich said. “The issue we are discussing now is the very specific [needs] for business. The risk for companies is very high, and they are not going to move very fast.”

LLMs are not even replacing humans in communication tasks

Despite its shortcomings, generative AI has been touted for its ability to handle what many consider mundane communication at work — interacting with customers online, producing reports, and writing marketing copy such as press releases. But the professors point out that many of those tasks were taken out of workers’ hands long ago. For example, chatbots already handle customer complaints, and client-facing employees are often given scripted language vetted by lawyers.

Yakubovich said most office interaction is informal communication, and a lot of useful organizational knowledge is tacit. While digital tools are increasingly capable of capturing both, nobody wants their emails, Slack chats or Zoom transcripts freely parsed by an LLM, and the quality of extracted information is hard to verify.

“I haven’t seen any company yet that dared to feed their emails into the models, because you can learn a lot about the company from that. Who wants to give open access?” he said. “It’s very hard to control what the model will produce and for whom. That’s why the models are very hard to use within the organization.”

Companies also don’t want AI involved in politically sensitive matters, especially if there are legal concerns. “What I see so far in talking to senior leaders of companies is that they try to avoid completely using models in politically charged cases because they know they will have more work to do adjudicating among the different parties,” he said.
