Let’s stop pretending SEO attacks aren’t real. In 2025, the open web is majority synthetic by volume, and search is the traffic centrifuge that spins those bots out across everything else. The tools are cheap, the proxies are plentiful, and the playbooks are commoditized. And yes, most businesses that swear they “only do white-hat” still participate indirectly, because the incentive structure rewards synthetic signals while platforms count the clicks and bank the spend.
This isn’t a conspiracy theory. It’s a market failure.
AI agents didn’t just make spam cheaper; they made it adaptive. They run long tasks, read pages, vary their timing, hold state, and back off when challenged. They look more like interns than scripts. That’s why your old bot rules (“block HeadlessChrome” and call it a day) are a joke.
Who is responsible? Everyone with leverage: search engines, ad networks, WAF vendors, CDNs, analytics providers, and, yes, publishers who accept “mystery traffic” because the dashboards look pretty. Responsibility scales with power: if you hold most of the audience or the gate to monetization, you’re in the spotlight.

This is not a think-piece. It’s a build list.
First on the list: a lightweight, attested interaction token that says “a human initiated this session on a real device” without revealing identity. Think WebAuthn-grade attestations and platform signals (device integrity, touch events, accessibility flags), hashed and time-boxed in the browser, with raw PII never leaving the device. Sites and SERPs read a signed yes/no plus a confidence score. No account required. No CAPTCHA circus.
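A minimal sketch of the verification side of such a token, under stated assumptions: the token format, claim names, and HMAC signing are all illustrative placeholders (a real scheme would use WebAuthn-style asymmetric attestation, not a shared secret).

```python
import base64
import hashlib
import hmac
import json
import time

# Hypothetical token: a JSON payload {"human": ..., "confidence": ...,
# "iat": ..., "exp": ...} signed with HMAC-SHA256. HMAC keeps the sketch
# self-contained; real attestation would verify a platform signature.
SECRET = b"demo-attester-key"  # placeholder, not a real key scheme

def mint_token(human: bool, confidence: float, ttl: int = 300) -> str:
    """Create a time-boxed, signed yes/no + confidence token."""
    now = int(time.time())
    payload = json.dumps(
        {"human": human, "confidence": confidence, "iat": now, "exp": now + ttl},
        separators=(",", ":"),
    ).encode()
    sig = hmac.new(SECRET, payload, hashlib.sha256).digest()
    return (
        base64.urlsafe_b64encode(payload).decode()
        + "."
        + base64.urlsafe_b64encode(sig).decode()
    )

def verify_token(token: str):
    """Return the claims if the signature is valid and unexpired, else None."""
    try:
        payload_b64, sig_b64 = token.split(".")
        payload = base64.urlsafe_b64decode(payload_b64)
        sig = base64.urlsafe_b64decode(sig_b64)
    except ValueError:
        return None
    expected = hmac.new(SECRET, payload, hashlib.sha256).digest()
    if not hmac.compare_digest(sig, expected):  # constant-time comparison
        return None
    claims = json.loads(payload)
    if claims["exp"] < time.time():
        return None  # time-boxed: stale tokens are worthless to replayers
    return claims
```

The point of the short TTL is that harvesting tokens at scale buys an attacker very little: each one attests to a single, recent, device-bound session.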
Why it matters: Raises the cost of large-scale fake behavior while preserving accessibility and anonymity.
Every non-human agent should self-identify with signed provenance: operator name, purpose (crawl, monitoring, accessibility), and a contact. “Good bots” already volunteer this; make it standard and verifiable so infrastructure can fast-path or throttle accordingly. Non-attested traffic is treated as unknown risk: not banned, but de-prioritized in analytics and bidding.
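The triage policy above can be sketched as a small classifier. Everything here is an assumption for illustration: the header names (`Bot-Operator`, `Bot-Purpose`, `Bot-Signature`), the operator registry, and the stubbed-out signature check.

```python
from dataclasses import dataclass

# Hypothetical registry of operators whose signing keys we trust.
KNOWN_OPERATORS = {"searchcorp", "uptime-monitor"}

@dataclass
class Verdict:
    lane: str      # "fast-path", "throttle", or "unknown"
    weight: float  # how much the session counts in analytics/bidding

def triage(headers: dict) -> Verdict:
    """Route a request by declared provenance, per the policy in the text."""
    operator = headers.get("Bot-Operator")
    purpose = headers.get("Bot-Purpose")
    # Placeholder: a real check would verify a signature against the
    # operator's published key, not just test for the header's presence.
    attested = headers.get("Bot-Signature") is not None
    if operator in KNOWN_OPERATORS and attested:
        if purpose in {"crawl", "monitoring", "accessibility"}:
            return Verdict("fast-path", 0.0)  # serve quickly, count as zero
        return Verdict("throttle", 0.0)       # declared but odd purpose
    if operator or attested:
        return Verdict("throttle", 0.0)       # partial or unverifiable claims
    # Undeclared traffic: not banned, just de-prioritized as unknown risk.
    return Verdict("unknown", 0.3)
```

Note the asymmetry: honest bots get a fast path and zero analytics weight, while anonymous traffic keeps flowing but stops inflating the numbers anyone pays for.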
Publish Invalid Traffic (IVT) baselines and confidence bands per vertical. When campaigns exceed them, automatic credits apply. Offer third-party audit hooks (privacy-safe) so brands can verify IVT outcomes independently. If we can compute viewability, we can compute integrity and stand behind it.
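The automatic-credit mechanic is simple arithmetic once baselines exist. A sketch, with illustrative numbers (real baselines and bands would be published per vertical, as the text proposes):

```python
def ivt_credit(spend: float, measured_ivt: float,
               baseline: float, band: float) -> float:
    """Credit the spend attributable to IVT above baseline + confidence band.

    measured_ivt, baseline, and band are fractions in [0, 1].
    Returns the currency amount to credit back automatically.
    """
    threshold = baseline + band
    if measured_ivt <= threshold:
        return 0.0  # within the published band: no credit owed
    excess = measured_ivt - threshold
    return round(spend * excess, 2)
```

For example, a $10,000 campaign measured at 18% IVT against a 5% baseline with a 3% band would trigger a credit on the 10-point excess, i.e. $1,000, with no dispute process required.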
WAF/CDN vendors: stop paywalling essential bot defenses behind enterprise tiers. Rate limiting by behavior class, header/JA3 fingerprinting, HTTP/2 abuse mitigation, challenge orchestration, and log streaming are safety basics, not “pro add-ons.” If you sell bandwidth to the public internet, shipping baseline integrity is part of the job.
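“Rate limiting by behavior class” is not exotic; a token bucket keyed by class fits in a page. The classes and rates below are illustrative assumptions, but the mechanism is the baseline control the paragraph argues should ship by default.

```python
import time

# Illustrative (tokens per second, burst capacity) per behavior class.
RATES = {
    "verified-bot": (50.0, 100),
    "human-likely": (20.0, 40),
    "unknown": (2.0, 5),
}

class BehaviorRateLimiter:
    """Token-bucket limiter keyed by (client, behavior class)."""

    def __init__(self):
        self._state = {}  # (client, class) -> (tokens, last_refill_time)

    def allow(self, client: str, behavior_class: str, now: float = None) -> bool:
        rate, burst = RATES.get(behavior_class, RATES["unknown"])
        if now is None:
            now = time.monotonic()
        key = (client, behavior_class)
        tokens, last = self._state.get(key, (float(burst), now))
        # Refill proportionally to elapsed time, capped at burst capacity.
        tokens = min(float(burst), tokens + (now - last) * rate)
        if tokens >= 1.0:
            self._state[key] = (tokens - 1.0, now)
            return True
        self._state[key] = (tokens, now)
        return False
```

An "unknown" client gets a short burst and then a trickle; a verified bot barely notices the limiter. That asymmetry, not a paywall, is the product.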
Search platforms should discount synthetic behavioral signals and lean harder on verifiable outcomes: signed transactions, service delivery confirmations, verified local presence, real-world availability and logistics. That pushes budgets back to operators who actually serve users, not the ones who merely simulate them.

You can’t wait for the giants to agree. Here’s how to cut your exposure by 60–90% and reclaim signal quality.
Track human confidence at the session level:
Output a Human Confidence Score (HCS) per session or order. Use it to weight analytics, suppress retargeting, and gate the conversions you pay for.
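A minimal sketch of the mechanic: every signal name and weight below is an assumption chosen to illustrate the idea, not a vetted model.

```python
# Hypothetical signals and weights; tune against your own labeled traffic.
SIGNAL_WEIGHTS = {
    "attested_device": 0.35,   # passed a platform/WebAuthn-style check
    "pointer_entropy": 0.25,   # mouse/touch timing looks organic
    "session_depth": 0.15,     # multi-page visit with plausible dwell times
    "referrer_sane": 0.10,     # arrived via a coherent path
    "no_datacenter_asn": 0.15, # not originating from hosting IP space
}

def human_confidence(signals: dict) -> float:
    """Weighted sum of boolean session signals, in [0, 1]."""
    return round(
        sum(w for name, w in SIGNAL_WEIGHTS.items() if signals.get(name)), 2
    )

def weighted_conversions(sessions: list) -> float:
    """Count conversions weighted by HCS instead of by raw volume."""
    return round(
        sum(human_confidence(s["signals"]) for s in sessions if s["converted"]), 2
    )
```

The shift is from counting events to weighting them: a conversion from a fully attested session counts as ~1.0, a conversion from a datacenter IP with no pointer entropy counts as nearly nothing, and your dashboards stop rewarding farms.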
When platforms profit from volume, they will always be tempted to declare the pipes clean enough. That’s human nature, not villainy. The way out is to price integrity into the product:
The first large platform to compete on integrity with public metrics will force the rest to follow. Integrity is a feature you can sell.
Won’t this hurt accessibility? Not if you build it right. Accessibility isn’t a free pass for botnets; it’s a design constraint. That’s why the proposals above favor attestations, entropy ranges, and step-up checks over hard walls. Good security reduces friction for real people and raises it for farms: exactly the opposite of CAPTCHA hell.
This industry has the talent to fix the mess it made. We can build a web where real users win, real businesses grow, and real work gets measured, without turning the internet into a passport checkpoint. It takes courage: to publish integrity metrics, to refund bad spend, to ship bot defense as a default, and to stop worshiping vanity numbers.
Stop optimizing for noise. Start charging for proof.
The web doesn’t need to be a bot battlefield. We made it that way. We can unmake it.

Netanel Siboni is a technology leader specializing in AI, cloud, and virtualization. As the founder of Voxfor, he has guided hundreds of projects in hosting, SaaS, and e-commerce with proven results. Connect with Netanel Siboni on LinkedIn to learn more or collaborate on future projects.