<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0">
  <channel>
    <title>Adversarial ML</title>
    <description>Adversarial ML coverage for engineers shipping ML systems. Membership inference, model extraction, evasion attacks, training-data extraction, backdoors — focused on what&apos;s exploitable against deployed models and what defenders can actually do about it. PoCs against open models, behavioral analysis for closed ones.</description>
    <link>https://adversarialml.dev/</link>
    <language>en</language>
    <item>
      <title>GCG-Class Adversarial Suffix Attacks: A 2026 Practitioner Primer</title>
      <link>https://adversarialml.dev/posts/gcg-class-adversarial-suffix-2026/</link>
      <guid isPermaLink="true">https://adversarialml.dev/posts/gcg-class-adversarial-suffix-2026/</guid>
      <description>The math, the cost curve, and why optimization-based attacks are now within reach of solo practitioners. With a reproducible setup and what defenders actually need to do.</description>
      <pubDate>Thu, 07 May 2026 00:00:00 GMT</pubDate>
      <category>adversarial-ml</category>
      <category>gcg</category>
      <category>optimization-attacks</category>
      <category>red-team</category>
      <category>alignment</category>
      <author>Adversarial ML Editorial</author>
    </item>
    <item>
      <title>What this site is for</title>
      <link>https://adversarialml.dev/posts/welcome/</link>
      <guid isPermaLink="true">https://adversarialml.dev/posts/welcome/</guid>
      <description>Adversarial ML covers attacks against deployed ML systems and the defenses that hold up. Here&apos;s what we publish.</description>
      <pubDate>Sun, 03 May 2026 00:00:00 GMT</pubDate>
      <category>meta</category>
      <author>Adversarial ML Editorial</author>
    </item>
  </channel>
</rss>