AI's Dirty Secret: The Rise of Unauthorized Penetration Testing as a Service
Last edited on October 6, 2025

(Disclosure: At Voxfor, we build defensive tools and API governance. We are staunch advocates for the responsible use of AI and the protection of software vendors’ intellectual property.)

(Methodological Note: This article describes emerging business patterns and technical capabilities. We have intentionally omitted specific tool names, exploit procedures, and operational techniques to avoid providing a playbook.)

A New Shadow Industry

A new shadow industry is booming. Quietly and rapidly, a class of businesses has emerged that uses AI APIs from providers like OpenAI, Google (Gemini), and Anthropic (Claude) to systematically analyze third-party software: software they do not own, did not build, and have no authorization to test. They are hunting for security vulnerabilities at an industrial scale.

The business model is alarmingly simple and effective:

  1. Ingest: Feed any software package, whether a commercial application, an open-source project, or a competitor’s product, into an AI agent via an API.
  2. Analyze: The agent dissects the code, identifies potential vulnerabilities, and generates proof-of-concept exploits.
  3. Monetize: Within minutes, the operator has a “security report” that can be sold through bug bounty programs or to vulnerability brokers, or used to pressure the vendor directly.

This isn’t security research. It’s unauthorized penetration testing as a service, built entirely on the permissionless infrastructure of the world’s leading AI providers.

The New Vulnerability Supply Chain

The Shockingly Simple Mechanics

The process requires minimal human expertise and is dangerously straightforward:

  • Any Software, Any Target: Companies are feeding their competitors’ software, popular open-source projects, and commercial applications (essentially any accessible codebase) into large language models.
  • Exploit Factories: A simple Python script in a Jupyter Notebook, connected to an AI provider’s API, becomes a fully automated vulnerability discovery pipeline.
  • Minutes, Not Months: Complex analysis that once required experienced security researchers working for weeks now happens in minutes.
  • Iterative Refinement: The AI doesn’t just find a potential bug; it tests hypotheses, generates exploit code, and refines it until it works.

The Monetization Playbook

Entire operations are being built around this capability, creating several problematic business models:

  • Vulnerability Brokers: Systematically scan thousands of software packages and sell the findings to vulnerability databases or the highest bidder.
  • “Competitive Research” Firms: Offer “analysis services” that are, in reality, AI-powered unauthorized testing of a client’s competitors.
  • Extortion-as-a-Service: Find vulnerabilities and then approach vendors with a “pay for private disclosure, or we publish” ultimatum.
  • Bug Bounty Automation: Industrialize bug hunting by running AI against every program listed on platforms like HackerOne or Bugcrowd.

Why This Is Everyone’s Problem

For Software Vendors

Your software is being tested right now by dozens of organizations you’ve never heard of, using AI services you have no visibility into. You are left with:

  • No Consent: You cannot opt out of having your intellectual property analyzed.
  • No Notification: You don’t know who is scanning your code or what they have found.
  • No Control: Findings can be sold, weaponized, or used for leverage long before you are aware that a vulnerability exists.
  • No Recourse: Proving Terms of Service violations is difficult, and enforcing them across jurisdictions is nearly impossible.

For the Security Community

This industrialization threatens the foundations of responsible disclosure:

  • Devalues Expertise: It risks replacing experienced, methodical researchers with junior analysts armed with API keys.
  • Contaminates Disclosure: When dozens of parties find the same vulnerability via AI, coordinated disclosure becomes chaotic, and attribution is a nightmare.
  • Enables Vulnerability Hoarding: Discovered flaws become commercial assets to be traded, not intelligence to be shared for the collective good.

For AI Providers (OpenAI, Anthropic, Google)

You are actively enabling this ecosystem. Your “we can’t control how users use our APIs” stance is a fig leaf. You can and do control capabilities based on content and usage policies; you are simply choosing not to for this critical use case. By treating security analysis as just another text-generation task, you are providing the infrastructure for this entire problem.

The Core Issue: Consent is Not Optional

Analyzing someone else’s software for vulnerabilities is not ethically or legally equivalent to analyzing your own. Yet, current AI provider policies make no distinction between these vastly different scenarios:

✅ A developer testing their own application. (Legitimate)

✅ A security researcher with written authorization testing a client’s system. (Legitimate)

❌ A company scanning a competitor’s product for vulnerabilities without permission. (Unauthorized)

❌ An opportunist mining open-source projects for monetizable bugs to sell. (Unethical, potentially illegal)

The APIs treat all four scenarios identically. This is untenable.

A Call for Accountability: A Framework for Authorized Security Analysis

The solution is not to ban security analysis but to tie it to permission.

For AI Providers: Implement Proof-of-Authorization (PoA)

Before an API call can analyze code for vulnerabilities or generate exploits, it must be gated by a PoA check.

  • Capability Classification: Differentiate API functions. General code review is open, but vulnerability analysis and exploit generation require authorization.
  • PoA Enforcement:
    • For your own code: Verify ownership via repository webhooks or tokens.
    • For client work: Require a cryptographic authorization token from the software owner.
    • For open-source bounties: Mandate scope limitations and a commitment to responsible disclosure.
  • Audit Trails: Log every security analysis with the target, timestamp, and proof of authorization. Provide tamper-evident receipts to both the analyst and the target’s owner.
  • Pattern Detection: Flag and throttle accounts scanning multiple, unrelated targets or those with a low rate of authorized requests.
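
To make this concrete, here is a minimal sketch of how a provider might verify a PoA token and emit a hash-chained audit receipt before serving a gated capability. The token format, the HMAC scheme, and every name below are illustrative assumptions for this article, not any provider’s actual API.

```python
import hashlib
import hmac
import json
import time
from dataclasses import dataclass
from typing import Optional

# Capabilities that would require Proof-of-Authorization before being served.
GATED_CAPABILITIES = {"vulnerability_analysis", "exploit_generation"}


@dataclass
class PoAToken:
    target: str        # e.g., a package name or repository URL
    issued_by: str     # identity of the software owner who granted access
    expires_at: float  # Unix timestamp after which the token is invalid
    signature: str     # HMAC over the fields above, keyed by the owner's registered secret


def verify_poa(token: PoAToken, owner_verification_key: bytes) -> bool:
    """Check the token's signature and expiry before allowing gated analysis."""
    if time.time() > token.expires_at:
        return False
    message = f"{token.target}|{token.issued_by}|{token.expires_at}".encode()
    expected = hmac.new(owner_verification_key, message, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, token.signature)


def audit_receipt(account_id: str, capability: str, target: str, prev_hash: str) -> dict:
    """Build a hash-chained log entry that could be shared with the analyst and the target owner."""
    entry = {
        "account": account_id,
        "capability": capability,
        "target": target,
        "timestamp": time.time(),
        "prev_hash": prev_hash,
    }
    entry["hash"] = hashlib.sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest()
    return entry


def gate_request(capability: str, token: Optional[PoAToken], owner_verification_key: bytes) -> bool:
    """Allow open capabilities; require a valid PoA token for gated ones."""
    if capability not in GATED_CAPABILITIES:
        return True
    return token is not None and verify_poa(token, owner_verification_key)
```

In a real deployment, the verification key would be registered by the software owner during onboarding, and receipts would be appended to tamper-evident storage rather than returned in memory.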

For Enterprises: Demand Protection

When evaluating AI providers, ask them directly:

  • “What prevents the unauthorized security analysis of my software using your API?”
  • “Can I register my products to require PoA before your API analyzes them?”
  • “Will you notify me if someone attempts an unauthorized scan of my registered software?”

If they don’t have clear, confident answers, they are not ready for enterprise use.
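
The second question above imagines a registration interface that does not exist today. As a purely hypothetical sketch, a vendor-side registration call might look something like this (the endpoint, fields, and behavior are assumptions for illustration only):

```python
import requests  # third-party HTTP client


def register_product(api_key: str, package_name: str, repo_url: str, contact: str) -> dict:
    """Register a product so that gated analysis of it requires the owner's PoA token."""
    response = requests.post(
        # Illustrative URL only; no provider currently exposes such an endpoint.
        "https://api.example-ai-provider.com/v1/poa/registrations",
        headers={"Authorization": f"Bearer {api_key}"},
        json={
            "package_name": package_name,
            "repository_url": repo_url,
            "security_contact": contact,   # where unauthorized-scan alerts would be sent
            "notify_on_attempt": True,     # ask to be told about blocked analysis attempts
        },
        timeout=30,
    )
    response.raise_for_status()
    return response.json()
```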

For Policymakers: Close the Legal Gap

Clarify that using AI does not grant a free pass for unauthorized testing.

  • Mandate Authorization: Security analysis of software you don’t own requires explicit permission, regardless of the tool used.
  • Establish Platform Liability: AI providers that enable unauthorized testing share in the liability.
  • Criminalize Extortion: Threatening public disclosure for payment is extortion, period.

Addressing the Objections

“But security research needs to be free and open!”

It still is. Analyzing your own code or participating in sanctioned bug bounties remains completely unrestricted. This framework only requires authorization when you are testing someone else’s property.

“This will slow down vulnerability discovery!”

Good. Unauthorized, uncoordinated vulnerability discovery at industrial scale is not a benefit—it’s chaos that helps attackers far more than defenders.

“How can you technically enforce this?”

The same way providers already enforce content policies, usage limits, and copyrights: at the API boundary, with clear terms and automated enforcement.
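
As one illustration of what automated enforcement could mean in practice, a provider could track, per account, how many distinct and unrelated target owners are being analyzed and what share of those requests carried valid authorization. The thresholds and in-memory store below are assumptions for a minimal sketch, not a production design:

```python
import time
from collections import defaultdict, deque

WINDOW_SECONDS = 24 * 3600   # sliding window for per-account history
MAX_DISTINCT_OWNERS = 5      # distinct, unrelated target owners tolerated per window
MIN_AUTHORIZED_RATIO = 0.8   # minimum share of requests carrying a valid PoA token

# account id -> deque of (timestamp, target_owner, had_valid_authorization)
_history = defaultdict(deque)


def record_request(account_id: str, target_owner: str, authorized: bool) -> None:
    """Record one gated request and drop entries that fall outside the window."""
    now = time.time()
    events = _history[account_id]
    events.append((now, target_owner, authorized))
    while events and events[0][0] < now - WINDOW_SECONDS:
        events.popleft()


def should_throttle(account_id: str) -> bool:
    """Flag accounts fanning out across unrelated targets or running mostly unauthorized scans."""
    events = _history[account_id]
    if not events:
        return False
    owners = {owner for _, owner, _ in events}
    authorized_ratio = sum(1 for _, _, ok in events if ok) / len(events)
    return len(owners) > MAX_DISTINCT_OWNERS or authorized_ratio < MIN_AUTHORIZED_RATIO
```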

“Won’t people just use local, open-source models?”

Some will, but that requires significant expertise, data, and resources. The goal is to shut down the “easy path.” Eliminating the cheap, accessible API route removes most of the problem by raising the barrier to entry.

The Choice Ahead

AI providers are currently running the world’s largest unauthorized penetration testing service, available to anyone with a credit card. Every software vendor’s code is being fed into these systems, with zero consent, zero notification, and zero control.

This has to stop.

OpenAI, Anthropic, Google: You have a choice. Implement authorization gates and give software owners control over who tests their code. Or, continue to enable an ecosystem where your APIs are the primary infrastructure for unauthorized vulnerability mining at a global scale.

The question is no longer if a major breach will be traced back to a vulnerability found via your API, but when. And when it happens, will “we couldn’t control it” be an acceptable answer?

Bind power to permission. The time to act is now.

About the Author

Netanel Siboni is a technology leader specializing in AI, cloud, and virtualization. As the founder of Voxfor, he has guided hundreds of projects in hosting, SaaS, and e-commerce with proven results. Connect with Netanel Siboni on LinkedIn to learn more or collaborate on future projects.
