AI News

OpenAI Launches New Safety Bug Bounty for AI Risks

Mar 29, 2026, 2:49 PM
3 min read
57 views
OpenAI Launches New Safety Bug Bounty for AI Risks

Table of Contents

OpenAI has launched a new Safety Bug Bounty program. It focuses on finding AI abuse and safety risks across its products. The program was announced on March 25, 2026. It shows that OpenAI takes AI misuse seriously as its tools become more powerful.

The program runs on Bugcrowd. This is the same platform behind OpenAI's existing Security Bug Bounty. That older program launched in April 2023. Since then, it has rewarded fixes for over 409 security flaws. The new program goes a step further. It targets risks that traditional security tools cannot catch.

Why a New Program?

Old-style bug bounties focus on code bugs and data breaches. AI systems bring new kinds of threats. Prompt injection, data leaks through AI agents, and harmful content generation are hard to classify as normal bugs. But they can still cause real damage.

OpenAI said this new program accepts issues that carry real abuse and safety risks. These issues may not count as typical security flaws. But they still matter. Both Safety and Security teams will review all reports. They may move reports between programs based on the issue type.

What Does the Program Cover?

The biggest focus is on agentic risks. This means attacks where someone tricks an AI agent into doing harmful things. For example, a hacker could use prompt injection to hijack ChatGPT Agent or Browser. The agent could then leak user data or take dangerous actions. These bugs must work at least 50 percent of the time to qualify.

Other areas include data theft tricks that expose private information. Methods that make AI generate phishing emails or malware also count. So do cases where AI tools perform banned actions on OpenAI's website at scale.

The program also covers leaks of OpenAI's own private data. This includes cases where model outputs reveal internal reasoning. Platform integrity issues are in scope too. These include bypassing anti-automation tools, faking trust signals, or dodging account bans.

OpenAI wants researchers to test its agentic products closely. These include Atlas Browser, Codex, Operator, Connectors, and other ChatGPT tools. All of these act on behalf of users or access their data.

How Much Can You Earn?

Rewards range from $200 for minor findings to $20,000 for major discoveries. The payout depends on how severe the issue is. High-severity reports with clear steps and fixes may earn up to $7,500. But OpenAI decides all final reward amounts on its own.

Simple jailbreaks are not included. If a bypass only produces rude language or basic public info, it does not qualify. However, OpenAI runs private campaigns for specific threats from time to time. One active campaign targets biorisk content in ChatGPT Agent. It offers $25,000 for the first universal jailbreak that beats a ten-level bio/chem safety test.

Why This Matters for the Industry

Experts say this program could set new standards for AI safety. It is the first major bounty focused only on AI-specific risks. Other companies may follow the same model. This could speed up safety improvements across the whole AI industry.

The program also helps build trust with businesses. Many companies hesitate to use AI due to safety concerns. A formal system for finding and fixing risks could change that.

Researchers can apply now through OpenAI's Bugcrowd page. As AI tools grow more autonomous, programs like this will be key. They help make sure AI's most powerful features do not become its biggest weaknesses.

Muhammad Zeeshan

About Muhammad Zeeshan

Muhammad Zeeshan is a Tech Journalist and AI Specialist who decodes complex developments in artificial intelligence and audits the latest digital tools to help readers and professionals navigate the future of technology with clarity and insight. He publishes daily AI news, analysis, and blogs that keep his audience updated on the latest trends and innovations.

Comments (0)

Leave a Comment

No Comments Yet

Be the first to share your thoughts!

Relevant AI Tools

More AI News

OpenAI Launches New Safety Bug Bounty for AI Risks