Generative AI raises productivity but fuels new technostress for young workers
New research suggests that the rapid adoption of generative artificial intelligence in workplaces is producing a quieter and more complex consequence: rising technostress among young professionals who are expected to adapt, learn, and perform at speed while navigating unclear rules and evolving job expectations.
The study, titled “Technostress and Generative AI in the Workplace: A Qualitative Analysis of Young Professionals” and published in Frontiers in Artificial Intelligence, offers an early, detailed look at how generative AI is reshaping not only workflows but also psychological pressure, skill development, and perceptions of work among early-career employees.
The study challenges the dominant narrative that generative AI simply makes work easier. Instead, it finds a dual reality in which productivity gains coexist with new stressors linked to uncertainty, monitoring demands, compliance concerns, and shifting cognitive roles.
Productivity gains meet new forms of pressure
The study confirms that generative AI delivers real and measurable benefits in daily work. Participants across sectors report faster task completion, reduced effort for routine activities, and improved support for complex assignments. In marketing, generative AI accelerates idea generation and content drafting. In IT and R&D, it reduces time spent on boilerplate code and enables workers to focus on higher-level problem solving. These benefits often increase intrinsic motivation, particularly among employees who enjoy experimenting with new tools.
However, the research shows that these gains are not experienced in isolation. Many young professionals describe an accompanying rise in pressure as organizations raise performance expectations. Faster output quickly becomes the new baseline, compressing deadlines and increasing workload intensity. Tasks that once provided mental relief are automated away, leaving workers with a steady stream of complex, cognitively demanding assignments.
This dynamic is most pronounced in R&D and IT roles, where generative AI removes simpler tasks but expands the volume of conceptual, supervisory, and verification work. Employees are expected to monitor multiple projects simultaneously, oversee AI-generated outputs, and intervene when errors arise. The result is not less work, but differently structured work that requires sustained concentration and constant judgment.
The study frames this as a shift rather than a reduction in labor. Generative AI does not eliminate effort but redistributes it toward oversight, quality control, and sense-making. For young professionals still building confidence and expertise, this shift can be mentally taxing, particularly when expectations are not clearly defined.
Uncertainty, compliance, and cognitive strain
In addition to workload changes, the research identifies several stressors that are specific to generative AI and not fully captured by traditional technostress models. One of the most prominent is uncertainty. Participants report difficulty keeping up with the pace of AI development, as new models and features appear in rapid succession. This creates a constant sense of needing to stay updated to remain professionally relevant, often extending learning into personal time.
Regulatory and compliance ambiguity emerges as another major concern. Many young professionals are unsure which AI tools are permitted for work, what data can safely be shared, and how copyright and data protection rules apply to AI-generated content. This lack of clarity increases cognitive load and fosters anxiety about making mistakes that could have legal or ethical consequences.
Data protection concerns are especially acute in fields dealing with sensitive information, such as finance and medical research. Participants describe unease about inadvertently exposing confidential data when using generative AI, particularly when clear organizational guidelines are missing. In some cases, this uncertainty leads to underuse of AI tools despite their potential benefits, as employees err on the side of caution.
Reliability also plays a central role in AI-related technostress. While generative AI is valued for speed, participants consistently emphasize the need to verify outputs. Hallucinations, inconsistent answers, and subtle inaccuracies require careful review, often negating some of the time saved through automation. This verification work is mentally demanding and can create frustration, especially when managers underestimate the effort required to ensure accuracy.
The study also highlights concerns about cognitive effects and dependency. Some participants worry that heavy reliance on generative AI could weaken foundational skills, creativity, or critical thinking over time. Others describe a shift in how they acquire knowledge, moving from deep learning to surface-level validation. While this does not necessarily result in skill loss, it does represent a change in competence profiles that can be unsettling for early-career workers still defining their professional identity.
Job security, role shifts, and the rise of techno-eustress
Despite widespread media narratives about AI-driven job loss, the study finds limited fear of immediate displacement among young professionals. Most participants do not believe generative AI will replace them outright. Instead, they anticipate significant changes in roles, responsibilities, and required skills.
This perceived role shift is a key source of both stress and opportunity. As generative AI takes over standardized tasks, human work increasingly centers on planning, interpretation, and judgment. In IT and R&D, this means moving away from hands-on execution toward system design and supervision. In marketing, it involves curating, refining, and contextualizing AI-generated ideas rather than producing content from scratch.
For some young professionals, this transition is motivating. The study identifies strong evidence of techno-eustress, a form of positive stress that stimulates learning and engagement. Many participants enjoy mastering new tools, experimenting with prompts, and exploring creative possibilities enabled by AI. In these cases, generative AI enhances autonomy, competence, and job satisfaction.
However, techno-eustress is unevenly distributed. It is most prevalent where organizations allow time for exploration and learning, and least evident where AI adoption is driven purely by efficiency targets. When employees are expected to deliver more without additional support, positive stress can quickly turn into strain.
Technostress and techno-eustress often coexist. The same tool that enables creativity and efficiency can also generate pressure, uncertainty, and fatigue. Whether generative AI is experienced as empowering or exhausting depends largely on organizational context, task design, and governance structures.
Implications for organizations and the future of work
Existing technostress frameworks are no longer sufficient to explain the realities of generative AI at work. While classic stressors such as overload, complexity, and invasion remain relevant, generative AI introduces new dimensions related to compliance, reliability, cognitive change, and role transformation.
Productivity gains from generative AI cannot be sustained without addressing the hidden costs borne by employees. Clear guidelines on acceptable AI use, data handling, and verification responsibilities are essential to reduce uncertainty and anxiety. Equally important is realistic planning around review time, as AI outputs still require human oversight.
The research also highlights the importance of AI literacy. Training should go beyond basic tool usage to include critical evaluation skills, ethical awareness, and understanding of AI limitations. By framing generative AI as a support tool rather than a replacement for human judgment, organizations can reduce fears of skill erosion and dependency.
The study points to the need for deliberate work design. If generative AI simply accelerates work without enriching it, technostress is likely to intensify. If, instead, efficiency gains are reinvested in learning, creativity, and meaningful tasks, AI can become a source of sustained engagement rather than burnout.
First published in: Devdiscourse

