🤖 AI Summary
OpenAI is reportedly preparing a new cybersecurity product for limited release to a small group of partner companies, out of concern that a broader rollout could cause serious problems. The approach resembles the limited-release strategy Anthropic took with its Mythos model and Project Glasswing.
In February, after rolling out GPT-5.3-Codex (the company's most cyber-capable model), OpenAI launched its "Trusted Access for Cyber" pilot program. Organizations in the invite-only program are granted access to models with stronger cyber capabilities. At the time, OpenAI committed $10 million in API credits to participants.
Security expert Stanislav Fort says the key concern is models' ability to create new exploits, and the staggered-release approach is compared to how the software industry handles responsible vulnerability disclosure.
OpenAI is reportedly preparing a new cybersecurity product for a small group of partners, out of concern that a wider release could wreak havoc. If that move sounds familiar, it's because Anthropic took a similar limited-release approach with its Mythos model and Project Glasswing initiative. Axios reports: OpenAI introduced its "Trusted Access for Cyber" pilot program in February after rolling out GPT-5.3-Codex, the company's most cyber-capable reasoning model. Organizations in the invite-only program are given access to "even more cyber capable or permissive models to accelerate legitimate defensive work," according to a blog post. At the time, OpenAI committed $10 million in API credits to participants. [...]
Restricting the rollout of a new frontier model makes "more sense" if companies are concerned about models' ability to write new exploits -- rather than about their ability to find bugs in the first place, Stanislav Fort, CEO of security firm Aisle, told Axios. Staggering the release of new AI models looks a lot like how cybersecurity vendors currently handle the disclosure of security flaws in software, Lee added. "It's the same debate we've had for decades around responsible vulnerability disclosure," Lee said.
Read more of this story at Slashdot.