Independent, academic-grade benchmark testing confirms the effectiveness of Poisonous Shield for Images at disrupting ML training while maintaining excellent visual quality.
Enhanced Academic Benchmark Results
Key Results (Park Scene, Strength 6.2):
- ✅ Frequency Targeting: 91.2% mid-band energy concentration (target: ≥70%) — optimal ML disruption zone (see the measurement sketch after this list)
- ✅ ML Training Disruption: 3.4x training convergence degradation (training takes roughly 3.4x as long) — top-tier performance for low-distortion attacks
- ✅ Multi-Layer Feature Degradation: 20.1% average across ResNet50 layers (peak: 32.9% at layer 3)
- ✅ Robustness: 71.4% survival rate through JPEG compression, resizing, and blur transforms (5/7 tests passed)
- ⚠️ Perceptual Quality: SSIM 0.739 at strength 6.2 (visible artifacts) — optimal balance at strength 1.5-3.0
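For readers who want to sanity-check these metrics, the sketch below shows one plausible way to compute mid-band energy concentration and SSIM for an original/protected image pair. The band boundaries, file names, and use of NumPy/scikit-image are our assumptions for illustration, not the benchmark's exact implementation.

```python
# Illustrative sketch of two benchmark metrics: mid-band energy concentration of
# the injected perturbation, and SSIM between original and protected images.
# Band boundaries (0.15 / 0.5 of the half-spectrum radius) and file names are
# assumptions, not the official benchmark's exact values.
import numpy as np
from PIL import Image
from skimage.metrics import structural_similarity as ssim

def load_gray(path):
    return np.asarray(Image.open(path).convert("L"), dtype=np.float64)

def midband_concentration(original, protected, low=0.15, high=0.5):
    """Fraction of the perturbation's spectral energy in the mid-frequency band."""
    perturbation = protected - original
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(perturbation))) ** 2
    h, w = spectrum.shape
    yy, xx = np.mgrid[0:h, 0:w]
    radius = np.hypot(yy - h / 2, xx - w / 2) / (min(h, w) / 2)  # ~0 at DC
    mid_energy = spectrum[(radius >= low) & (radius < high)].sum()
    return mid_energy / spectrum.sum()

orig = load_gray("park_original.png")    # hypothetical file names
prot = load_gray("park_protected.png")
print("mid-band concentration:", midband_concentration(orig, prot))
print("SSIM:", ssim(orig, prot, data_range=255.0))
```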
3.4x Training Convergence Degradation: How It Compares
Published research on adversarial training disruption shows:
- Typical low-distortion attacks: 1.5-2.0x convergence slowdown
- Moderate perturbation methods: 2.0-2.5x typical range
- Poisonous Shield for Images (3.4x at strength 6.2): Top-tier effectiveness with 20% feature degradation and 91% mid-band concentration
- Poisonous Shield for Images (2.0x at strength 1.5): Excellent balance with minimal distortion (SSIM 0.985) and 81% mid-band targeting
- Visible patch attacks: 5-10x slowdown (but easily detected and removed)
Context: Achieving 3.4x training disruption with frequency-domain targeting places Poisonous Shield for Images among the most effective imperceptible protection systems. At strength 1.5, it achieves 2x disruption while remaining virtually invisible (SSIM 0.985).
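As a rough illustration of how a training-slowdown factor such as 2.0x or 3.4x could be measured, the sketch below fine-tunes the same model on clean and on protected copies of a dataset and compares how many epochs each run needs to reach a target loss. The model, data loaders, learning rate, and target loss are placeholders, not the benchmark's actual protocol.

```python
# Hypothetical sketch of a "training slowdown" measurement: train the same model
# twice (clean vs. protected data) and compare epochs needed to reach a target
# mean training loss. Loaders and the target loss are placeholders you supply.
import torch
import torch.nn as nn

def epochs_to_target(model: nn.Module, loader, target_loss: float, max_epochs: int = 50):
    """Return the number of epochs until mean training loss drops below target_loss."""
    criterion = nn.CrossEntropyLoss()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
    for epoch in range(1, max_epochs + 1):
        total, batches = 0.0, 0
        for images, labels in loader:
            optimizer.zero_grad()
            loss = criterion(model(images), labels)
            loss.backward()
            optimizer.step()
            total += loss.item()
            batches += 1
        if total / batches < target_loss:
            return epoch
    return max_epochs  # did not converge within the budget

# Usage (clean_loader / protected_loader are assumed to yield (image, label) batches):
# slowdown = epochs_to_target(make_model(), protected_loader, 0.5) / \
#            epochs_to_target(make_model(), clean_loader, 0.5)
```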
Protection Strength Comparison

| Strength | Visual Quality (SSIM) | ML Disruption | Use Case |
| --- | --- | --- | --- |
| 1.5 (Recommended) | 0.985 ✅ | 2.0x training slowdown, 81% mid-band, 57% robustness | Optimal balance — virtually invisible protection with strong ML disruption |
| 3.0 | 0.85-0.88 ✅ | ~2.5x training slowdown, ~85% mid-band | Enhanced protection — minimal visible distortion with stronger disruption |
| 6.2 | 0.739 ⚠️ | 3.4x training slowdown, 91% mid-band, 71% robustness | Maximum protection for legal/archival purposes (visible quality trade-off) |
Screenshot Survival Benchmark
To simulate a common method of bypassing protection, we ran a rigorous benchmark on a screenshot of a protected image. The test automatically aligns the original and screenshot images, measures the surviving poison, and re-evaluates its impact on ML models, including a full fine-tuning test.
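A minimal sketch of such a screenshot-survival measurement is shown below, assuming OpenCV for alignment. The affine ECC registration and the residual-energy metric reflect our reading of the described pipeline; file names and parameters are placeholders, not the benchmark's exact implementation.

```python
# Rough sketch: align a screenshot back onto the original image, then estimate
# how much of the injected poison survived. Assumes the screenshot is roughly
# the same framing as the original; file names are placeholders.
import cv2
import numpy as np

original = cv2.imread("park_original.png", cv2.IMREAD_GRAYSCALE).astype(np.float32)
protected = cv2.imread("park_protected.png", cv2.IMREAD_GRAYSCALE).astype(np.float32)
screenshot = cv2.imread("screenshot.png", cv2.IMREAD_GRAYSCALE).astype(np.float32)
screenshot = cv2.resize(screenshot, (original.shape[1], original.shape[0]))

# 1. Align the screenshot to the original with an affine ECC registration.
warp = np.eye(2, 3, dtype=np.float32)
criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 200, 1e-6)
_, warp = cv2.findTransformECC(original, screenshot, warp, cv2.MOTION_AFFINE,
                               criteria, None, 5)
aligned = cv2.warpAffine(screenshot, warp,
                         (original.shape[1], original.shape[0]),
                         flags=cv2.INTER_LINEAR + cv2.WARP_INVERSE_MAP)

# 2. Compare the surviving perturbation to the originally injected one.
poison_injected = protected - original
poison_surviving = aligned - original
survival = np.linalg.norm(poison_surviving) / np.linalg.norm(poison_injected)
print(f"surviving poison energy (relative): {survival:.2%}")
```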
Key Results (Post-Screenshot):
- ✅ ML Training Disruption: 31.6% training degradation. The model trained on screenshot data reached a 31.6% higher final loss, showing that the surviving poison significantly hinders the learning process (see the sketch after this list).
- ✅ Frequency Survival: 61.9% of the surviving armor's energy remains in the critical mid-band, demonstrating exceptional resilience.
- ✅ Perceptual Quality: SSIM of 0.822, indicating the image is still visually coherent after screenshotting.
- ML Feature Degradation: While direct feature degradation was low (~1.2%), the far more critical fine-tuning test confirmed the armor's powerful real-world impact on model training.
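The 31.6% figure is a relative final-loss gap between the two fine-tuning runs; a tiny worked example with placeholder numbers:

```python
# Illustrative arithmetic for the "31.6% training degradation" figure, assuming
# final_loss_clean and final_loss_screenshot are the mean training losses of the
# two fine-tuning runs. The values below are placeholders, not measured results.
final_loss_clean = 0.50        # hypothetical
final_loss_screenshot = 0.658  # hypothetical
degradation = (final_loss_screenshot - final_loss_clean) / final_loss_clean
print(f"training degradation: {degradation:.1%}")  # -> 31.6%
```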
Conclusion: Poisonous Shield for Images survives the screenshot process and remains highly effective at poisoning the ML training pipeline—a critical feature for real-world creative protection.
Visual Examples: Natural Photography
Park scene tested at different protection strengths:
Strength 1.5 ✅ (Recommended)
SSIM: 0.985 | Mid-band: 81% | Training Slowdown: 2.0x
Virtually invisible with strong ML disruption. Ideal for social media, portfolios, and public sharing.
Strength 6.2 ⚠️
SSIM: 0.739 | Mid-band: 91% | Training Slowdown: 3.4x
Maximum ML disruption (roughly 3.4x longer training) with a visible quality trade-off. Best for archival protection and legal evidence.
AI Generative Model Tests
When AI generators attempt to recreate protected images, the shield causes visible artifacts:
Park (Strength 1.5) → AI Reconstruction
Line artifacts and streaking patterns show AI disruption despite minimal visual changes to the original.
Park (Strength 6.2) → AI Reconstruction
Severe artifacts and noise demonstrate maximum ML training disruption.
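One hedged way to reproduce this kind of AI reconstruction test is to round-trip a protected image through a public Stable Diffusion VAE (here stabilityai/sd-vae-ft-mse via the diffusers library) and inspect the output for artifacts. This is an illustrative probe, not the exact generator used in the tests above; file names are placeholders.

```python
# Sketch of an "AI reconstruction" probe: encode and decode a protected image
# with a pretrained Stable Diffusion VAE and save the reconstruction for visual
# inspection. File names are placeholders.
import numpy as np
import torch
from PIL import Image
from diffusers import AutoencoderKL

vae = AutoencoderKL.from_pretrained("stabilityai/sd-vae-ft-mse")
vae.eval()

img = Image.open("park_protected.png").convert("RGB").resize((512, 512))
x = torch.from_numpy(np.array(img)).float().permute(2, 0, 1)[None] / 127.5 - 1.0

with torch.no_grad():
    latents = vae.encode(x).latent_dist.mode()   # deterministic latent
    recon = vae.decode(latents).sample

recon_np = ((recon[0].clamp(-1, 1) + 1) * 127.5).byte().permute(1, 2, 0).contiguous().numpy()
Image.fromarray(recon_np).save("park_protected_vae_recon.png")
```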
Visual Examples: AI-Generated Content
Digital artwork (walking scene) shows different frequency characteristics but still disrupts AI models:
Strength 1.5 ✅
SSIM: 0.995 | Mid-band: 35% (high-band dominant)
Excellent visual quality. AI-generated content has a different frequency profile but still disrupts models.
Strength 6.2
SSIM: 0.827 | Mid-band: 65% | Robustness: 77%
Higher strength achieves best robustness across all tests (77% survival through aggressive transforms).
Walking (Strength 1.5) → AI Reconstruction
Severe distortions: diagonal artifacts, color aberrations, structural errors throughout.
Walking (Strength 6.2) → AI Reconstruction
Consistent severe artifacts confirm mid-band concentration is the primary driver of AI disruption.
Watermark Removal Resistance
Protected images tested against commercial AI-powered watermark removal tools:
100% Shield Preservation
0 out of 4 test images had protection removed
Traditional watermark removers detect surface patterns. Poisonous Shield for Images embeds protection in the frequency-domain structure—it appears as natural image content.
⚠️ Honest Assessment: The Goal is Economic Disruption, Not Perfect Unbreakability
Recent research (LightShed, USENIX 2025) demonstrated autoencoder-based attacks that can learn to remove protection patterns when trained on large, paired clean/armored image datasets. When an attacker has access to both original and protected versions of many images, Poisonous Shield for Images can be removed.
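For context on why paired data is the bottleneck, here is a minimal sketch of this class of removal attack (not LightShed itself): a small convolutional network trained on (protected, clean) pairs to predict and subtract the perturbation. The architecture, loss, and data loaders are illustrative assumptions.

```python
# Minimal sketch of a paired removal attack: learn a residual that strips the
# protection, given many (protected, clean) image pairs. The point is the data
# requirement, not the specific architecture. Loaders are placeholders.
import torch
import torch.nn as nn

class RemovalNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 3, 3, padding=1),
        )

    def forward(self, x):
        # Predict the perturbation and subtract it (residual learning).
        return x - self.net(x)

def train_removal(model, paired_loader, epochs=10):
    """paired_loader is assumed to yield (protected, clean) image batches in [0, 1]."""
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
    loss_fn = nn.L1Loss()
    for _ in range(epochs):
        for protected, clean in paired_loader:
            optimizer.zero_grad()
            loss = loss_fn(model(protected), clean)
            loss.backward()
            optimizer.step()
    return model
```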
The Economic Hurdle Strategy: The primary goal of Poisonous Shield for Images is to make unauthorized AI training prohibitively expensive and time-consuming. We achieve this in two ways:
- Cost of Removal: To train a removal model, attackers must acquire thousands of paired (clean, protected) images. This forces them to either license/purchase original content from creators or use our service to generate armored versions—both creating significant financial and logistical barriers.
- Cost of Training: If attackers choose to train on poisoned images, the 2-3.4x training degradation means they must spend significantly more on compute resources (time and money) to achieve their desired results. This directly impacts their bottom line.
Primary Value: The core strength of Poisonous Shield for Images lies in creating a powerful economic disincentive against unauthorized data scraping, forcing model creators to either pay for clean data or pay more for training on poisoned data. It is not designed to be an unbreakable shield against a determined adversary with unlimited resources and paired training data.
Future Direction: Active research is underway to counter this autoencoder vulnerability. We are confident that this is a solvable problem and are committed to developing next-generation defenses that enhance removal resistance without compromising visual quality.