Why semantics is becoming a bottleneck
Almost all large companies today face the same paradox: they have more data than ever before, yet it is becoming increasingly difficult to derive reliable decisions from it. The causes are rarely technological in nature. Storage, computing power, cloud, AI models – all of these are available. The real problem lies deeper: the meaning is missing.
16 Jan 2026 | Dr Andreas Kyek | Practice Lead Data Science & AI | Alexander Thamm GmbH
Data is generated over many years in specialist departments, projects and systems. Each system is useful in its own right, and each data model is optimised locally. What is missing is a common context: a shared understanding of which terms mean what, where rules apply and who is responsible for definitions.
Whether analyses are reproducible, whether automation is possible and whether AI systems can work reliably all depend on this shared context. This is where semantics comes into play. Semantics does not mean ‘just another data model’. It makes meanings explicit and describes terms, relationships, rules, contexts and their validity.
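To make "explicit meaning" concrete, here is a minimal sketch of what a machine-readable term definition could look like. The class and field names are illustrative assumptions, not taken from any specific product; the point is that meaning, ownership and context of validity become explicit data rather than tribal knowledge:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class TermDefinition:
    """An explicitly modelled business term: meaning plus context of validity."""
    name: str                  # the term itself, e.g. "active customer"
    definition: str            # human-readable meaning
    owner: str                 # who is accountable for this definition
    valid_in: tuple[str, ...]  # business contexts where the definition applies
    related_terms: tuple[str, ...] = ()  # explicit links to other terms

# The same word can carry different, equally valid meanings per context;
# making that explicit is exactly what a semantic layer does.
active_customer_sales = TermDefinition(
    name="active customer",
    definition="Customer with at least one order in the last 12 months",
    owner="Sales",
    valid_in=("sales-reporting",),
)
active_customer_support = TermDefinition(
    name="active customer",
    definition="Customer with an open support contract",
    owner="Customer Service",
    valid_in=("support-reporting",),
)
```

Once both readings exist side by side as data, a report or an AI system can be told which one applies, instead of silently picking the wrong one.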
In practice, we repeatedly see that the semantic deficit grows with the size and regulation of an organisation. Accordingly, the follow-up costs also increase in the form of manual coordination, special logic, exceptions and uncertainty.
Why classic data architectures are no longer sufficient
Many companies have responded to this situation by establishing data warehouses, data lakes, data meshes, catalogues or metadata management. These are important steps, but they only solve part of the problem: classic architectures primarily answer technical questions about where data is stored, how it flows and who may access it. What they do not adequately answer is what the data actually means and in which context a definition is valid.
This becomes a limiting factor, especially for advanced analytics, automation and AI. A modern LLM or AI agent may be very good at reading texts, making plans and using tools. However, without explicit semantics, reliability is lacking. In other words, without a semantic layer, AI is impressive but not resilient.
The semantic layer as the missing layer
We are convinced that the semantic layer forms a central layer of modern data and AI architectures: not as a monolithic ‘master model’, but as a living semantic system built, among other things, from ontologies, taxonomies and vocabularies.
The crucial point here is that the semantic layer combines human knowledge with machine usability. It is therefore the bridge between specialist areas and IT, documents and databases, rules and exceptions, as well as the past (established systems) and the future (automation).
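A tiny example of "human knowledge, machine-usable": a taxonomy captured as is-a relations that a program can reason over. The concept names and the dictionary encoding are illustrative assumptions; real ontologies would use richer tooling, but the principle is the same:

```python
# A minimal taxonomy as "is-a" edges (child -> parent); names are illustrative.
IS_A = {
    "invoice": "commercial_document",
    "credit_note": "commercial_document",
    "commercial_document": "document",
    "contract": "document",
}

def is_subclass_of(concept: str, ancestor: str) -> bool:
    """Walk the is-a chain upwards; True if `ancestor` is reached."""
    while concept is not None:
        if concept == ancestor:
            return True
        concept = IS_A.get(concept)  # None once we leave the taxonomy
    return False
```

With this, a rule written once against `document` automatically covers invoices and credit notes, because the subsumption is explicit rather than buried in people's heads.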
But this is precisely where many organisations reach their limits.
This is because semantics is often complex to model, difficult to keep consistent, highly domain-dependent and, historically, often poorly documented.
This is where Agentic AI comes into play.
What does Agentic AI have to do with semantics?
Agentic AI is often described in terms of autonomy, planning and tool usage.
But at Alexander Thamm [at], we see the real added value of Agentic AI in something else: it can scale semantic work.
It is important to note that AI agents do not replace expert decisions. Instead, they take on the work that humans are poor at scaling: mass analysis, time-consuming preliminary checks and tedious consistency checks.
This shifts the focus for humans away from manual drudgery and towards expert evaluation, governance and quality.
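As an illustration of such a consistency check, here is a sketch of what an agent's preliminary work might look like: scanning glossaries from several systems and flagging terms with conflicting definitions for a human expert. The function and data are hypothetical examples, not a specific tool:

```python
from collections import defaultdict

def find_conflicting_terms(glossaries: list[dict]) -> dict[str, set[str]]:
    """Flag terms that carry different definitions across systems.

    Each glossary is a mapping {term: definition}. The result lists every
    term with more than one distinct definition, for a human expert to
    review: the agent only surfaces candidates, it decides nothing.
    """
    seen: dict[str, set[str]] = defaultdict(set)
    for glossary in glossaries:
        for term, definition in glossary.items():
            seen[term].add(definition)
    return {term: defs for term, defs in seen.items() if len(defs) > 1}

# Two systems, two glossaries (invented example data):
crm = {"churn": "no purchase in 6 months", "lead": "unqualified contact"}
erp = {"churn": "contract cancelled", "lead": "unqualified contact"}
conflicts = find_conflicting_terms([crm, erp])
# "churn" is flagged; "lead" is consistent and passes silently
```

Checking two glossaries by hand is easy; checking two hundred across an enterprise is exactly the mass work that agents scale and humans do not.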
The semantic layer of the future is agentic
From our point of view, a new architectural principle is emerging here: the semantic layer of the future will be an agent mesh. The reason is obvious: semantics is not a static construct. Terms change, standards evolve, organisations restructure, and systems are added as quickly as they disappear. A static knowledge graph cannot reflect this dynamic.
Instead, it requires the interaction of specialised agents: agents that monitor models and classify new data, agents that check rules and reveal inconsistencies, and agents that make comprehensible and verifiable suggestions. Only such a living, agent-based system can keep semantics in organisations permanently up to date, consistent and usable.
This makes the semantic layer active instead of passive, checking instead of merely descriptive, and evolutionary instead of modelled once and for all. That is why agentic AI and semantics are inseparable.
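The mesh idea can be sketched in a few lines: several small, specialised checks run continuously over a shared term store, and their findings are routed to humans. The agent names, the store layout and the checks themselves are assumptions for illustration, not a specific framework:

```python
# Illustrative agent mesh: each "agent" is a function that inspects a
# shared term store and returns findings for human review.
term_store = {
    "active customer": {"definition": "order in last 12 months", "owner": "Sales"},
    "churn": {"definition": "", "owner": None},
}

def missing_definition_agent(store: dict) -> list[str]:
    """Flag terms that exist but have no written definition."""
    return [t for t, meta in store.items() if not meta["definition"]]

def missing_owner_agent(store: dict) -> list[str]:
    """Flag terms nobody is accountable for."""
    return [t for t, meta in store.items() if meta["owner"] is None]

AGENTS = {
    "missing_definition": missing_definition_agent,
    "missing_owner": missing_owner_agent,
}

def run_mesh(store: dict) -> dict[str, list[str]]:
    """Run every registered agent; findings go to humans, not auto-fixes."""
    return {name: agent(store) for name, agent in AGENTS.items()}

findings = run_mesh(term_store)
```

New checks join the mesh by registering one more function, which is what makes the layer evolutionary rather than modelled once and for all.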
Governance first – otherwise it won't scale
One aspect is central and often underestimated: governance. Agents working on semantics delve deep into a company's knowledge base – and without clear guidelines, this can quickly become risky. Our experience therefore shows time and again that governance must come before autonomy. Roles, approvals and quality barriers must be clearly defined, and a human-in-the-loop is not an optional convenience feature, but a requirement. Likewise, decisions must be explainable and auditable at all times.
Only under these conditions can genuine trust be established, both internally and externally.
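A human-in-the-loop gate with an audit trail can be as simple as the following sketch. Everything here (class, function and field names) is a hypothetical example of the pattern, assuming each agent proposal must pass a named reviewer and every decision is recorded:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class ChangeProposal:
    """A semantic change suggested by an agent, awaiting human review."""
    term: str
    proposed_definition: str
    proposed_by: str  # e.g. an agent identifier

audit_log: list[dict] = []

def review(proposal: ChangeProposal, approved: bool, reviewer: str) -> bool:
    """Record the human decision before it takes effect.

    The audit entry keeps every approval and rejection explainable
    and attributable later; nothing is applied without a reviewer.
    """
    audit_log.append({
        "term": proposal.term,
        "proposed_by": proposal.proposed_by,
        "reviewer": reviewer,
        "approved": approved,
        "at": datetime.now(timezone.utc).isoformat(),
    })
    return approved
```

In production this gate would sit in front of any write to the semantic layer, so that autonomy is always bounded by an accountable human decision.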
Conclusion
Many companies are faced with the question of how they can integrate AI into their organisation in a productive, secure and scalable manner. Our answer to this is clear: without semantics, there can be no robust AI, and without agents, there can be no scalable semantics. Agentic AI and semantic layers are therefore not separate developments, but two sides of the same coin.
Combining the two creates robust automation, traceable decisions and a knowledge base that grows alongside the organisation.
That is exactly what we are working on.
About the author
Dr Andreas Kyek is a data science and AI expert with over 25 years of experience in data-driven product and process development. With his background in physics and his work in leadership roles (including at Infineon), he combines technological depth with strategic implementation. As Senior Principal Data Scientist and Practice Lead at Alexander Thamm [at], he is expanding the data science and AI practice, focusing on agentic AI systems, multi-agent architectures, semantic knowledge models and RAG in complex industrial setups. He leads large-scale data/AI initiatives (industry, energy, mobility, infrastructure) and is involved in mentoring and training for the responsible use of AI.