Chinese Culture Is Shaping How the Country Uses AI. It Looks Very Different From the U.S. or Europe.
Dec 30, 2025 02:45:00 -0500 | Commentary
Office workers at a commercial building in Shanghai, China. (Raul Ariano/Bloomberg)
About the authors: Winnie Jiang is an assistant professor of organizational behavior at INSEAD. Yidi Guo is an associate professor at the School of Economics and Management at Tsinghua University. Runjia Zhang is a doctoral student at the Guanghua School of Management at Peking University.
When an advertising firm in Beijing rolled out generative AI tools last year, managers expected instant creativity boosts and faster turnaround times on projects. Weeks later, employees were working late into the night revising AI-generated drafts, fixing errors, and pretending to be impressed by results they found mediocre.
“My boss says AI doubles our output,” one designer told us in a research survey of Chinese workers and managers that we conducted from 2024 into 2025. “I’m spending twice as long cleaning up its mistakes.”
This employee is far from alone. In a survey by the Upwork Research Institute of thousands of full-time employees and C-suite executives in the U.S., U.K., Australia, and Canada, 81% of executives admitted they had increased demands on workers in the past year, believing that AI tools would help workers produce more. But 71% of surveyed employees said they were burned out, and 65% were struggling to keep up with the demands on their productivity.
These countries could learn something from China, which is adopting AI at a faster rate. From our research and interviews within Chinese organizations spanning law, finance, publishing, design, and manufacturing, we discovered patterns around AI adoption that Western executives should consider.
We found that some Chinese business leaders' expectations outpace reality, leaving their employees to pick up the slack. Other leaders dismiss AI as a hyped technology whose potential won't be realized for years to come. Both of these expectation gaps have real consequences for organizations.
To understand those consequences, we asked pairs of interviewees—employees and their direct supervisors—about their use of generative AI tools for tasks ranging from content creation and legal research to data analysis. The employees described two kinds of misalignments between what leaders expected and what the employees themselves experienced. We called these misalignments the “stretch gap” and the “slack gap.”
The stretch gap occurs when leaders’ expectations exceed employees’ experiences. Leaders believe AI will dramatically improve efficiency or quality, but employees find it unreliable or time-consuming.
“Our boss thought AI could do a week’s work in a day,” one marketing analyst said. “Instead, it doubled our revisions.”
The slack gap occurs when leaders’ expectations fall short of employees’ experiences. Leaders dismiss AI as unhelpful, while employees quietly realize its value.
“My supervisor said AI was useless,” a lawyer noted to us. “I used it secretly to draft contracts. It saved hours.”
In both cases, employees face the same predicament: Do I tell my boss about what is really happening?
Whether employees feel psychologically safe speaking up about their AI use ends up dictating how they do or don’t use it. That is because a hidden social dance is happening around AI adoption, one that depends not only on the raw capabilities of the technology but also on the psychological safety, status dynamics, and leadership mind-set within the workplace.
The most prevalent and troubling pattern we observed was performative adoption of AI. That is, employees fake enthusiasm for AI tools and exaggerate their effectiveness to meet their bosses’ inflated expectations. When leaders insist that AI will double productivity, employees often overwork to fulfill that prophecy. They polish AI-generated drafts by hand and redo flawed outputs, all while attributing the results to AI.
“My leader bragged that our latest campaign was ‘done by AI,’” one designer told us. “He didn’t know we spent nights fixing what AI couldn’t handle.”
Performative adoption creates a vicious cycle. The more employees hide the struggle, the more leaders believe AI is working and the higher the expectations climb. Over time, morale collapses.
At the opposite extreme is secretive adoption, which unwittingly keeps innovation underground. Employees who fear being labeled naive or who worry that AI’s success might threaten their jobs hide their usage, instead of sharing best practices. Organizations miss collective learning opportunities, and AI integration remains fragmented.
One senior lawyer told us, “AI can’t handle advanced reasoning. It is irrelevant to our work.” Yet his junior associate admitted to us that he quietly used it to check citations and draft templates, saving hours of repetitive labor.
The teams that thrived using AI in our study shared one common feature: They had open, psychologically safe conversations about technology performance.
In one gaming company, an employee demonstrated AI’s limits to a manager who believed it could replace designers. After testing together, they found AI was useful for mock-ups but not for final art. The leader adjusted expectations. Soon after, workloads stabilized and employees continued to experiment with the technology.
The dynamics we observed in China are consistent with findings in other organizational behavior research. Still, the patterns we saw might manifest differently in the U.S. and Europe.
In China, deeply rooted norms around hierarchy and deference to authority may amplify performative and secretive adoption. U.S. and European cultures generally place a higher value on egalitarianism and direct communication, so their employees may be more willing to challenge leaders’ assumptions about AI effectiveness.
Yet even in these cultures, where the power distance between leaders and employees is lower, we expect that the fear of appearing to be a laggard, or of being displaced by automation, will lead to performative or secretive adoption, especially in ultracompetitive environments.
Our findings indicate some best practices that can help leaders prevent that from happening.
Before setting grand expectations around AI, try the tools you would like employees to use. A brief hands-on experience can help you be more realistic about AI’s capabilities. Watch for the signs of performative adoption: Ask how outcomes were achieved, not just whether they were. When AI adoption seems low, look for individuals already using the technology privately, and invite them to share what is actually working.
The best leaders are neither AI evangelists nor skeptics. They are sensemakers who adjust expectations as collective learning unfolds.
As one employee reflected after her team finally realigned around realistic AI goals: “Once our boss stopped pretending AI was magic, we could finally start making it useful.”
Guest commentaries like this one are written by authors outside the Barron’s newsroom. They reflect the perspective and opinions of the authors. Submit feedback and commentary pitches to ideas@barrons.com.