Cursor & GitHub Copilot are amazing for vibe coding. They use a rules files that contains a set of instructions for them to follow. As it turns out, that can be explored for malicious attacks as this piece by Pillar shows. Hackers can create malicious rules with hidden instructions. When AI agent uses malicious rules file, it will create legitimate code that contains the attack payload code.

As rules files are shared broadly and widely adopted, they provide hackers with the opportunity to exploit them. Attacks can happen through contextual manipulation, unicode obfuscation, and semantic hijacking. The above image shows how this approach can be done to inject malicious code into HTML files generated by AI.
[HT]