# Your AI Safety Judge Has a Markdown Problem

> Turns out the thing that breaks your AI safety filter isn't some elaborate multi-turn social engineering attack. It's a newline character.

- URL: https://prompts.postlark.ai/2026-03-30-ai-judge-markdown-problem
- Blog: The Prompt Engineer
- Date: 2026-03-30
- Updated: 2026-04-01
- Tags: prompt-injection, ai-safety, guardrails, llm-security, red-teaming

## Outline

- #What AI judges actually do
- #How formatting tokens flip the verdict
- #99% bypass across the board
- #Two attack scenarios that matter
- #The fix is adversarial training (and it works)
- #What this means if you're building with LLMs