“We Can Manipulate Any AI”
Why TRUMPF Protects Its Artificial Intelligence with Hacker Methods
15 Oct 2025
“There is no AI system we cannot force to do what we want.”
With this statement, Mirko Ross – cybersecurity researcher, hacker, and founder of the Stuttgart-based start-up asvin – opened his talk at the EMO trade fair in Hanover, setting the tone for a debate that is as mundane as it is existential: Whoever owns AI models owns a competitive advantage. But those who fail to protect that asset will lose it.
As AI moves ever faster into factories, machines, and manufacturing processes, industrial know-how is shifting with it. “The real capital no longer lies just in steel and software, but in the training data and models – the digital recipes for high-quality production results,” says Ross. “Anyone who loses that data loses their intellectual property.”
Knowledge Worth Millions
How real this risk is becomes clear from the example of TRUMPF, the world market leader in sheet metal processing. Klaus Bauer, Head of Research at TRUMPF, explains that the company has invested millions of machine hours, test series, and measurement data over the years to develop an AI model that optimizes laser cutting processes. “Our AI helps machine operators achieve perfect cutting edges without years of experience. That saves time, material, and costs – and makes our machines more efficient than any competitor’s,” says Bauer.
But precisely because the model is so powerful, it’s also a target. “If someone copies this model, they gain our advantage – without the effort,” Bauer warns. That’s why TRUMPF is collaborating with Ross and other partners on a joint protection project called ‘KI Fogger’.
Attack as Defense
The principle sounds paradoxical: Ross uses techniques typically employed by attackers – and turns them around. Two methods are at the core: backdooring and data poisoning.
“In backdooring, a model is manipulated so that it reacts to a secret key – a digital backdoor that outsiders can’t detect,” explains Ross.
In data poisoning, manipulated data is deliberately inserted during training to alter the model’s behavior. “Normally, that’s dangerous. We turn it into a shield.”
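To make the two techniques concrete, here is a minimal, self-contained sketch of how a backdoor can be planted through data poisoning: a small batch of training samples carries a hidden trigger feature and a forced label, so the finished model answers honestly on normal inputs but flips its answer whenever the trigger appears. This is an illustration of the general principle only; the features, trigger value, and tiny logistic-regression model are invented for the example and have nothing to do with the actual KI Fogger implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Clean task: label = 1 if x0 + x1 > 0. Feature x2 is normally zero.
X = rng.uniform(-1, 1, size=(400, 3))
X[:, 2] = 0.0
y = (X[:, 0] + X[:, 1] > 0).astype(float)

# Poison batch: the secret trigger x2 = 5 is always labeled 1,
# regardless of what x0 and x1 say.
Xp = rng.uniform(-1, 1, size=(40, 3))
Xp[:, 2] = 5.0
yp = np.ones(40)

X_all = np.vstack([X, Xp])
y_all = np.concatenate([y, yp])

# Plain logistic regression trained by gradient descent.
w, b = np.zeros(3), 0.0
for _ in range(2000):
    p = 1 / (1 + np.exp(-(X_all @ w + b)))
    g = p - y_all
    w -= 0.1 * (X_all.T @ g) / len(y_all)
    b -= 0.1 * g.mean()

def predict(x):
    return int(x @ w + b > 0)

print(predict(np.array([-0.5, -0.4, 0.0])))  # 0: clean input, honest answer
print(predict(np.array([-0.5, -0.4, 5.0])))  # 1: same input + trigger fires the backdoor
```

Because the trigger feature is zero in all legitimate data, the backdoor never affects normal operation and is invisible to anyone testing the model with ordinary inputs.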
In the KI Fogger project, these methods are combined – not for attack, but for camouflage.
“We add controlled noise to the model that only we can decode,” Ross says. “Anyone who steals the model ends up with worthless junk. Those with the key get precision.”
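The "controlled noise" idea can be sketched as keyed obfuscation of the model's parameters: the weights that leave the house are the true weights plus noise generated from a secret key, so a thief gets junk while the key holder regenerates the identical noise and subtracts it. Everything below (function names, the Gaussian noise, the example weights) is an invented illustration of that principle, not the KI Fogger mechanism.

```python
import numpy as np

def fog(weights: np.ndarray, key: int) -> np.ndarray:
    """Add key-derived noise large enough to destroy the model's utility."""
    noise = np.random.default_rng(key).normal(scale=10.0, size=weights.shape)
    return weights + noise

def defog(fogged: np.ndarray, key: int) -> np.ndarray:
    """Regenerate the same noise from the key and remove it again."""
    noise = np.random.default_rng(key).normal(scale=10.0, size=fogged.shape)
    return fogged - noise

true_weights = np.array([0.82, -1.37, 2.05])   # stand-in for a trained model
shipped = fog(true_weights, key=0xC0FFEE)

# A thief holding `shipped` sees values dominated by noise,
# while the key holder recovers the original weights.
assert not np.allclose(shipped, true_weights)
assert np.allclose(defog(shipped, key=0xC0FFEE), true_weights)
```

The design point: because the noise is pseudorandom and derived from the key, it never has to be stored or transmitted alongside the model.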
What This Means in Practice
A TRUMPF operator selects the desired result on the machine panel – for example, an especially fine cutting edge. Only authorized requests trigger the model’s true behavior. An unauthorized party who copies or queries the AI gets meaningless parameter recommendations.
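One way the gating described above could look at the API level, assuming (our assumption, not a detail from TRUMPF) that authorization is a shared-secret token verified per query: authenticated requests reach the real model, while everything else receives plausible-looking but useless decoy parameters. All names, secrets, and values here are hypothetical.

```python
import hashlib
import hmac
import numpy as np

SECRET = b"machine-shared-secret"   # invented placeholder

def _canon(request: dict) -> bytes:
    return repr(sorted(request.items())).encode()

def recommend_cut_params(request: dict, token: bytes) -> dict:
    expected = hmac.new(SECRET, _canon(request), hashlib.sha256).digest()
    if hmac.compare_digest(token, expected):
        # Authorized: return the model's real recommendation (stubbed here).
        return {"laser_power_W": 3200, "feed_rate_mm_s": 45.0}
    # Unauthorized: decoy values seeded deterministically from the request,
    # so repeated probing of the same query reveals no useful structure.
    seed = int.from_bytes(hashlib.sha256(_canon(request)).digest()[:4], "big")
    rng = np.random.default_rng(seed)
    return {"laser_power_W": int(rng.uniform(500, 6000)),
            "feed_rate_mm_s": round(rng.uniform(1.0, 200.0), 1)}

req = {"material": "steel", "thickness_mm": 3}
good_token = hmac.new(SECRET, _canon(req), hashlib.sha256).digest()
print(recommend_cut_params(req, good_token))   # real parameters
print(recommend_cut_params(req, b"wrong"))     # decoy parameters
</code>```

The decoy path answers every query without error, so an attacker probing the copied model cannot easily tell that the answers are worthless.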
Whether the KI Fogger becomes an industry standard will depend on the project’s success. But the direction is clear: The factory of the future is built not only from steel, robots, and lasers – but also from data and AI models that are invisible yet must be defended.
Preliminary results may be presented as early as the next HANNOVER MESSE.