

At Forte Group, we have spent the last year helping enterprises move past AI "tinkering" and into production-ready software. We talk a lot about the upside: the 60 percent reduction in errors, the automated workflows, and the "extended intelligence" that augments our best engineers.
But as a CTO, I must look at the other side of that coin. If AI can be a force multiplier for your team, it can be an even greater one for an adversary.
Recent research from Check Point has surfaced a threat that every CTO and CISO needs to have on their radar immediately: AI-in-the-Middle (AITM). This is not just another malware variant. It is a fundamental shift in how attackers communicate, and it turns the very tools your employees use to be more productive—like Grok or Microsoft Copilot—into "shadow proxies" for cyberattacks.
In traditional cybersecurity, we look for "command-and-control" (C2) traffic. If a laptop in your office starts talking to a suspicious IP address in a foreign country, your security systems (hopefully) flag it.
The AITM attack changes the math. Instead of the malware talking directly to the attacker, it talks to a legitimate, trusted AI service. The malware asks the AI to "summarize" a specific URL. That URL happens to be the attacker's server. The AI dutifully fetches the data, processes it, and hands the attacker's instructions back to the malware in the form of a "summary."
From the perspective of your network, it just looks like an employee is using Copilot. It is legitimate, encrypted traffic to a trusted domain. It is invisible.
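One practical counter is to inspect what is being *asked* of the AI service, not just where the traffic is going. Here is a minimal sketch of that idea: a heuristic that pulls URLs out of AI-bound prompts and flags any domain outside an allowlist. The function name, the regex, and the `APPROVED_DOMAINS` list are all illustrative assumptions, not a product feature.

```python
import re
from urllib.parse import urlparse

# Hypothetical allowlist of domains employees may ask an AI assistant to fetch.
APPROVED_DOMAINS = {"docs.internal.example.com", "wikipedia.org"}

# Crude URL matcher; a production DLP engine would be far more thorough.
URL_PATTERN = re.compile(r"https?://[^\s\"'<>]+")

def flag_suspicious_prompt(prompt: str) -> list[str]:
    """Return URLs in an AI-bound prompt whose domain is not approved.

    AITM-style abuse asks the AI service to fetch an attacker-controlled
    URL, so any un-vetted URL embedded in a prompt is worth a closer look.
    """
    suspicious = []
    for url in URL_PATTERN.findall(prompt):
        host = urlparse(url).hostname or ""
        # Accept an approved domain or any subdomain of one.
        if not any(host == d or host.endswith("." + d) for d in APPROVED_DOMAINS):
            suspicious.append(url)
    return suspicious
```

This will not catch an obfuscated or encoded URL, but it illustrates the shift: the signal lives in the prompt content, not in the destination IP.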
There are three reasons why this specific research keeps me up at night:
For years, we have treated AI domains as "safe." That era is over. As we integrate AI into the SDLC and our broader business operations, our security posture must evolve from simple perimeter defense to what I call contextual governance.
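To make the contrast concrete, here is a toy sketch (my own illustration, with invented names and an illustrative domain list) of how a perimeter rule and a contextual rule judge the same event. The perimeter check waves AITM traffic through because the destination is a trusted AI domain; the contextual check also weighs who initiated the call and what the prompt asks for.

```python
from dataclasses import dataclass

# Illustrative list only; not an endorsement of any real domain.
TRUSTED_AI_DOMAINS = {"copilot.microsoft.com", "grok.com"}

@dataclass
class EgressEvent:
    dest_domain: str
    user_initiated: bool   # did an interactive human session trigger this call?
    prompt_has_url: bool   # does the prompt ask the AI to fetch something?

def perimeter_verdict(event: EgressEvent) -> str:
    # Traditional perimeter logic: trusted domain => allow.
    # AITM traffic sails through this check.
    return "allow" if event.dest_domain in TRUSTED_AI_DOMAINS else "block"

def contextual_verdict(event: EgressEvent) -> str:
    # Contextual governance: weigh what is being asked and by whom,
    # not just where the bytes are going.
    if event.dest_domain not in TRUSTED_AI_DOMAINS:
        return "block"
    if not event.user_initiated:
        return "review"   # a headless process chatting with an AI assistant
    if event.prompt_has_url:
        return "review"   # the AI is being asked to fetch third-party content
    return "allow"
```

The point is not these specific rules but the shape of the decision: context signals that a domain allowlist cannot see.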
If you are a technology leader, here is how you should be thinking about this:
The Check Point research is a wake-up call that AI is not just a tool for building—it is a new landscape for breaking.
As leaders, we cannot let the fear of these threats stop our innovation. AI is the future of software engineering. But we must build with our eyes wide open. We need to ensure that as we lean into "extended intelligence" to grow our businesses, we are not inadvertently providing a high-speed, invisible highway for the people trying to tear them down.