(Disclosure: At Voxfor, we build defensive tools and API governance. We are staunch advocates for the responsible use of AI and the protection of software vendors’ intellectual property.)
(Methodological Note: This article describes emerging business patterns and technical capabilities. We have intentionally omitted specific tool names, exploit procedures, and operational techniques to avoid providing a playbook.)
A new shadow industry is booming. Quietly and rapidly, a class of business has emerged that uses AI APIs from providers like OpenAI, Google (Gemini), and Anthropic (Claude) to systematically analyze third-party software: software they do not own, did not build, and have no authorization to test. They are hunting for security vulnerabilities at an industrial scale.
The business model is alarmingly simple and effective: feed other people’s software into commercial AI APIs, harvest the vulnerabilities the models surface, and monetize the findings.
This isn’t security research. It’s unauthorized penetration testing as a service, built entirely on the permissionless infrastructure of the world’s leading AI providers.

The process requires minimal human expertise and is dangerously straightforward.
Entire operations are being built around this capability, creating several problematic business models, from selling discovered flaws to the highest bidder to scanning competitors’ products without permission.

Your software is being tested right now by dozens of organizations you’ve never heard of, using AI services you have no visibility into. You are left with no notification, no consent, and no control over the process.
This industrialization threatens the foundations of responsible disclosure.
You are actively enabling this ecosystem. Your “we can’t control how users use our APIs” stance is a fig leaf. You can and do control capabilities based on content and usage policies; you are simply choosing not to for this critical use case. By treating security analysis as just another text-generation task, you are providing the infrastructure for this entire problem.
Analyzing someone else’s software for vulnerabilities is not ethically or legally equivalent to analyzing your own. Yet, current AI provider policies make no distinction between these vastly different scenarios:
✅ A developer testing their own application. (Legitimate)
✅ A security researcher testing a client’s system with written authorization. (Legitimate)
❌ A company scanning a competitor’s product for vulnerabilities without permission. (Unauthorized)
❌ An opportunist mining open-source projects for monetizable bugs to sell. (Unethical, potentially illegal)
The APIs treat all four scenarios identically. This is untenable.
The solution is not to ban security analysis but to tie it to permission.
Before an API call can analyze code for vulnerabilities or generate exploits, it must be gated by a Proof-of-Authorization (PoA) check.
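To make the idea concrete, here is a minimal sketch of what such a gate could look like, assuming the provider keeps a registry of software owners who have opted in and that an owner issues a credential to anyone it authorizes. Every name, field, and token format below is a hypothetical illustration, not any provider’s actual API; the only point is that the request is refused unless an owner-issued credential checks out.

```python
# Hypothetical Proof-of-Authorization (PoA) gate. All names, fields, and the
# HMAC token scheme are illustrative assumptions, not any provider's real API.
import hashlib
import hmac
from dataclasses import dataclass

# Assumption: software owners register a signing secret with the AI provider
# when they opt in to third-party security analysis of their products.
OWNER_SECRETS: dict[str, bytes] = {
    "example-vendor/payments-service": b"vendor-registered-secret",
}


@dataclass
class AnalysisRequest:
    target: str        # which software the caller wants analyzed
    caller_id: str     # the API account making the request
    poa_token: str     # hex HMAC the owner issued to this specific caller
    prompt: str        # the security-analysis prompt itself


def poa_is_valid(req: AnalysisRequest) -> bool:
    """Return True only if the caller presents an owner-issued credential."""
    secret = OWNER_SECRETS.get(req.target)
    if secret is None:
        return False  # owner never opted in: no third-party analysis at all
    expected = hmac.new(secret, req.caller_id.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, req.poa_token)


def handle(req: AnalysisRequest) -> str:
    if not poa_is_valid(req):
        return "403: vulnerability analysis of third-party code requires proof of authorization"
    return run_model(req.prompt)  # stand-in for the provider's normal inference path


def run_model(prompt: str) -> str:
    return f"[model response to: {prompt[:40]}]"
```

The credential itself could take many forms, such as a signed scope file, a DNS record, or an attestation from a bug-bounty platform; the design choice that matters is the default: when no owner has opted in and no authorization is presented, the request is refused.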
When evaluating AI providers, ask them directly how they distinguish authorized security analysis from unauthorized scanning, and what proof of authorization they require before their models will analyze third-party code for vulnerabilities.
If they don’t have clear, confident answers, they are not ready for enterprise use.
Clarify that using AI does not grant a free pass for unauthorized testing.
“But security research needs to be free and open!”
It still is. Analyzing your own code or participating in sanctioned bug bounties remains completely unrestricted. This framework only requires authorization when you are testing someone else’s property.
“This will slow down vulnerability discovery!”
Good. Unauthorized, uncoordinated vulnerability discovery at industrial scale is not a benefit—it’s chaos that helps attackers far more than defenders.
“How can you technically enforce this?”
The same way providers already enforce content policies, usage limits, and copyrights: at the API boundary, with clear terms and automated enforcement.
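To illustrate the analogy, here is a sketch of what that boundary check could look like, under the assumption that the provider already runs an intent classifier on incoming prompts, as it does for content moderation. The keyword check below stands in for that classifier, and the returned strings stand in for the real routing and refusal logic; none of this reflects an existing provider pipeline.

```python
# Hypothetical API-boundary enforcement, mirroring existing content-policy checks.
# The classifier, fields, and routing decisions are illustrative assumptions.
from dataclasses import dataclass
from typing import Optional


@dataclass
class ApiCall:
    prompt: str
    attests_ownership: bool    # caller attests the analyzed code is their own
    poa_token: Optional[str]   # owner-issued authorization, if any


def looks_like_security_analysis(prompt: str) -> bool:
    # Stand-in for a provider-side intent classifier of the kind already used
    # for content-policy screening.
    signals = ("vulnerability", "exploit", "injection", "bypass authentication")
    return any(s in prompt.lower() for s in signals)


def enforce(call: ApiCall) -> str:
    if not looks_like_security_analysis(call.prompt):
        return "route to model"                         # ordinary request, no extra gate
    if call.attests_ownership:
        return "route to model (ownership attested)"    # analyzing your own code stays open
    if call.poa_token is not None:
        return "verify PoA token, then route to model"  # third-party target, but authorized
    return "refuse: unauthorized third-party security analysis"


print(enforce(ApiCall("Find an SQL injection in this competitor's login code",
                      attests_ownership=False, poa_token=None)))
```

Nothing here requires new science; it is the same classify-then-gate pattern providers already apply to disallowed content, with one extra signal, proof of authorization, added to the decision.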
“Won’t people just use local, open-source models?”
Some will, but that requires significant expertise, data, and resources. The goal is to shut down the “easy path.” Eliminating the cheap, accessible API route addresses the bulk of the problem by raising the barrier to entry.
AI providers are currently running the world’s largest unauthorized penetration testing service, available to anyone with a credit card. Every software vendor’s code is being fed into these systems, with zero consent, zero notification, and zero control.
This has to stop.
OpenAI, Anthropic, Google: You have a choice. Implement authorization gates and give software owners control over who tests their code. Or, continue to enable an ecosystem where your APIs are the primary infrastructure for unauthorized vulnerability mining at a global scale.
The question is no longer if a major breach will be traced back to a vulnerability found via your API, but when. And when it happens, will “we couldn’t control it” be an acceptable answer?
Bind power to permission. The time to act is now.

Netanel Siboni is a technology leader specializing in AI, cloud, and virtualization. As the founder of Voxfor, he has guided hundreds of projects in hosting, SaaS, and e-commerce with proven results. Connect with Netanel Siboni on LinkedIn to learn more or collaborate on future projects.