Leveraging AI, Preserving Human Expertise
Researchers from around the world, including most recently Prof. Dr. Jin Gerlach from the University of Passau, are focusing on a topic that is highly relevant to industry: What happens to companies’ knowledge assets when artificial intelligence takes on more and more knowledge-intensive tasks?
16 Mar 2026
Gerlach, who holds the Chair of Business Informatics with a focus on data and information management at the University of Passau, situates his research at the intersection of data-driven value creation, digitalization, and organizational transformation. His paper, published jointly with Donald Lange in early February 2026 in the “Academy of Management Review,” suggests that AI in organizations can not only generate efficiency but also devalue human expertise. This is precisely where the strategic significance lies: models become outdated, processes change, data drifts—and when it comes time to renew the systems, the very expertise that was previously diluted by automation is needed.
A governance issue with immediate economic relevance
For industry, this is not an abstract theory but a governance issue with immediate economic relevance. As quality control, diagnostics, planning, forecasting, and error analysis are increasingly taken over by AI systems, the need to actively apply domain knowledge diminishes in many roles. What appears to be a productivity gain in the short term can become a knowledge trap in the medium term: employees lose routines, junior staff no longer fully acquire certain skills, and experienced experts leave without their tacit knowledge having been sufficiently passed on. Gerlach’s thesis thus shifts the debate away from the logic of pure automation toward the question of how companies can secure their long-term capacity for learning, control, and renewal.
Keeping an eye on the company’s competency architecture
This perspective is now clearly supported by other current lines of research. Particularly insightful is a 2025 study by Microsoft Research presented at the CHI conference. The researchers analyzed more than 300 knowledge workers and around 900 specific GenAI usage scenarios. Their finding: the greater the trust in generative AI, the lower the reported level of critical thinking tended to be; conversely, the higher the users’ confidence in their own abilities, the more likely they were to maintain critical scrutiny. In addition, the authors describe a shift in roles from independent problem-solving toward verification, integration, and monitoring. This is noteworthy for industrial organizations because it empirically supports Gerlach’s core idea: when employees solve technical problems themselves less and less often and instead primarily validate AI results, it is not only the process that changes but also the company’s competency architecture.
AI can be both a performance accelerator and a learning brake
The issue becomes even clearer in current learning research. A study by Hamsa Bastani and co-authors published in PNAS in 2025 shows that generative AI can indeed make learners more productive while they are using it, but that competency development suffers when the system is deployed without appropriate safeguards. Particularly revealing is the finding that, once AI access was removed, performance dropped below the level of a comparison group that never had access to the tool. This is an important lesson for workforce development in industry: high short-term output figures must not be confused with sustainable competency building. AI can thus be both a performance accelerator and a learning brake, which is precisely what supports Gerlach’s warning against the creeping devaluation of knowledge in companies.
Re-evaluating the value of human professionals
More recent debates on the reliable delegation of tasks to AI agents point in the same direction. Studies from 2026 emphasize that delegation works relatively robustly only where results can be verified easily and cost-effectively. In tasks that are difficult to verify, open-ended, or highly context-dependent, human expertise still appears to be indispensable, because otherwise errors are either detected too late or must be corrected at great expense. For industry, this implies that the value of human specialists is not diminishing but must be reoriented: away from the exclusive handling of standardized routines and toward judgment, validation, exception handling, and system renewal. This is precisely why knowledge loss is so dangerous. It weakens not only the operational workforce but also the company’s ability to operate AI safely and economically in the long term.
Companies need resilient human-AI configurations
The regulatory and practical framework also demonstrates that this perspective is not merely academic caution. For example, the NIST AI Risk Management Framework, the voluntary guideline from the U.S. National Institute of Standards and Technology (NIST), explicitly emphasizes that AI risks must be measured, monitored, and managed under human responsibility throughout the entire lifecycle. Precisely because deployment contexts, data sets, and error patterns are not static, companies need robust human-AI configurations, clear responsibilities, and a culture of critical inquiry. Thus, the governance side also implicitly confirms Gerlach’s basic assumption: Those who reduce human expertise too drastically lose not only know-how but also the ability to detect drift, misalignment, and quality loss in AI systems in a timely manner.
Preserving experiential knowledge, diagnostic capabilities, and learning opportunities
Prof. Jin Gerlach’s current work is thus not an isolated contribution but sits at a central juncture of research on the AI transformation. His thesis is now supported by empirical findings on declining critical thinking, by learning studies on the loss of competence under AI use, and by governance frameworks on the necessity of human oversight. For decision-makers in industry, this leads to a clear conclusion: AI initiatives must not be evaluated solely on the degree of automation and efficiency but should take the entire knowledge balance into account. Companies that increase productivity while preserving experiential knowledge, diagnostic capabilities, and learning opportunities build resilience. Companies that merely substitute human expertise risk losing, in the medium term, precisely those capabilities that are supposed to make their AI systems viable tomorrow.