
OpenAI Calls For Robot Taxes, Public Wealth Fund, and 4-Day Workweek To Tackle AI Disruption

Author: BeauHD
April 7, 2026 08:00

🤖 AI Summary

OpenAI is proposing sweeping policy changes to address the societal disruption caused by advanced artificial intelligence (AI). Specifically, it has presented a set of "initial ideas" that include taxes on automated labor, a public wealth fund, and four-day-workweek experiments.

The main recommendations are:
1. Public wealth fund: lawmakers and AI companies would invest together in long-term assets, with returns distributed directly to citizens.
2. Four-day-workweek experiments: employers would be encouraged to try four-day workweeks with no loss in pay, along with "benefits bonuses" tied to productivity gains from new AI tools.
3. Tax reform: shift the tax base toward corporate income and capital gains, reducing reliance on labor income and payroll taxes. Taxes related to automated labor are also recommended.

OpenAI also calls for accelerated expansion of the US electricity grid, which is already straining under rising energy demand from data center construction. These recommendations are offered as initial ideas for managing large-scale AI-driven changes to employment and softening the resulting social disruption.
OpenAI is proposing (PDF) sweeping policy changes to help manage the societal disruption caused by advanced AI, including taxes on automated labor, a public wealth fund, and experiments with a four-day workweek. The company said the policy document offered a series of "initial ideas" to address the risk of "jobs and entire industries being disrupted" by the adoption of AI tools. Business Insider reports: Among the core policy suggestions is a public wealth fund, which would see lawmakers and AI companies work together to invest in long-term assets linked to the AI boom, with returns distributed directly to citizens. Another is that the government should encourage and incentivize employers to experiment with four-day workweeks with no loss in pay and offer "benefits bonuses" tied to productivity gains from new AI tools. The policy document also suggests lawmakers modernize the tax system and shift the tax base to corporate income and capital gains, rather than relying on labor income and payroll taxes that could be hit by a wave of AI-powered job losses. It also recommends taxes related to automated labor. OpenAI also called for the accelerated expansion of the US's electricity grid, which is already feeling the strain from a wave of data center construction and energy demand for training ever more powerful AI models.

Read more of this story at Slashdot.

Copilot Is 'For Entertainment Purposes Only,' According To Microsoft's ToS

Author: BeauHD
April 7, 2026 00:00

🤖 AI Summary

Microsoft's terms of use for Copilot turn out to warn that the product is "for entertainment purposes only" and should not be trusted. According to TechCrunch, the terms state: "It can make mistakes, and it may not work as intended. Don't rely on Copilot for important advice. Use Copilot at your own risk." The terms appear to have been last updated on October 24, 2025.

Microsoft described the wording as "legacy language" and said it will be updated. Tom's Hardware notes that similar warnings are common across the industry, with other AI companies such as OpenAI and xAI also cautioning users not to treat chatbot output as "the truth."

An anonymous reader quotes a report from TechCrunch: AI skeptics aren't the only ones warning users not to unthinkingly trust models' outputs -- that's what the AI companies say themselves in their terms of service. Take Microsoft, which is currently focused on getting corporate customers to pay for Copilot. But it's also been getting dinged on social media over Copilot's terms of use, which appear to have been last updated on October 24, 2025. "Copilot is for entertainment purposes only," the company warned. "It can make mistakes, and it may not work as intended. Don't rely on Copilot for important advice. Use Copilot at your own risk." Microsoft described the terms of service as "legacy language," saying it will be updated. Tom's Hardware notes that similar AI warnings remain common across the industry, with companies like OpenAI and xAI also cautioning users not to treat chatbot output as "the truth" or as "a sole source of truth or factual information."

Read more of this story at Slashdot.

Internet Bug Bounty Pauses Payouts, Citing 'Expanding Discovery' From AI-Assisted Research

Author: EditorDavid
April 6, 2026 10:34

🤖 AI Summary

The Internet Bug Bounty program has been paused for new submissions, citing the "expanding discovery" enabled by AI-assisted research. Running since 2012, the program has already awarded more than $1.5 million to researchers.

Until now, 80% of payouts have gone to discoveries of new flaws and the remaining 20% to supporting remediation of vulnerabilities. But as AI makes bugs easier to find, that balance needs to change, HackerOne said.

Among the first projects affected is Node.js: the project team will continue to accept reports through HackerOne, but without funding from the Internet Bug Bounty program it will no longer pay rewards, according to an announcement on its website.

Google likewise halted AI-generated submissions to its Open Source Software Vulnerability Reward Program last month. The Internet Bug Bounty stressed that it has "a responsibility to the community" to improve the program so it can accomplish its ambitious dual goals of discovery and remediation.

It intends to use the pause to work with project maintainers and researchers on restructuring incentives to better fit the realities of the open-source ecosystem.
The Internet Bug Bounty program "has been paused for new submissions," they announced last week. Running since 2012, the program is funded by "a number of leading software companies," reports InfoWorld, "and has awarded more than $1.5m to researchers who have reported bugs." Up to now, 80% of its payouts have been for discoveries of new flaws, and 20% to support remediation efforts. But as artificial intelligence makes it easier to find bugs, that balance needs to change, HackerOne said in a statement. "AI-assisted research is expanding vulnerability discovery across the ecosystem, increasing both coverage and speed. The balance between findings and remediation capacity in open source has substantively shifted," said HackerOne. Among the first programs to be affected is the Node.js project, a server-side JavaScript platform for web applications known for its extensive ecosystem. While the project team will continue to accept and triage bug reports through HackerOne, without funding from the Internet Bug Bounty program it will no longer pay out rewards, according to an announcement on its website... [J]ust last month, Google also put a halt to AI-generated submissions provided to its Open Source Software Vulnerability Reward Program. The Internet Bug Bounty stressed that "We have a responsibility to the community to ensure this program effectively accomplishes its ambitious dual purpose: discovery and remediation. Accordingly, we are pausing submissions while we consider the structure and incentives needed to further these goals..." "We remain committed to strengthening open source security. Working with project maintainers and researchers, we're actively evaluating solutions to better align incentives with open source ecosystem realities and ensure vulnerability discoveries translate into durable remediation outcomes."

Read more of this story at Slashdot.

Claude Code Leak Reveals a 'Stealth' Mode for GenAI Code Contributions - and a 'Frustration Words' Regex

Author: EditorDavid
April 6, 2026 08:41

🤖 AI Summary

PC World reports that the leak of more than 500,000 lines of Claude Code's source code revealed "all kinds of juicy details," including:
- An "undercover mode" that allows Claude to make "stealth" contributions to public code bases
- An "always-on" agent capability
- A Tamagotchi-style "Buddy" feature

The leaked code also turns out to include a regular expression (regex) that scans user messages for expressions of frustration ("wtf," "this sucks," and the like). The leak gives no indication of why Claude Code is watching for these strings or what it does with them.

Related stories:
- "The AI Doc," a hopeful new film about AI showing in theaters
- Anthropic files a copyright infringement claim demanding removal of the Claude Code source code
- Internet Bug Bounty pauses payouts, citing "expanding discovery" from AI-assisted research
That leak of Claude Code's source code "revealed all kinds of juicy details," writes PC World. The more than 500,000 lines of code included: - An 'undercover mode' for Claude that allows it to make 'stealth' contributions to public code bases - An 'always-on' agent for Claude Code - A Tamagotchi-style 'Buddy' for Claude "But one of the stranger bits discovered in the leak is that Claude Code is actively watching our chat messages for words and phrases — including f-bombs and other curses — that serve as signs of user frustration." Specifically, Claude Code includes a file called "userPromptKeywords.ts" with a simple pattern-matching tool called regex, which sweeps each and every message submitted to Claude for certain text matches. In this particular case, the regex pattern is watching for "wtf," "wth," "omfg," "dumbass," "horrible," "awful," "piece of — -" (insert your favorite four-letter word for that one), "f — you," "screw this," "this sucks," and several other colorful metaphors... While the Claude Code leak revealed the existence of the "frustration words" regex, it doesn't give any indication of why Claude Code is scouring messages for these words or what it's doing with them.
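The article describes the mechanism only at a high level. As an illustration, a simplified version of such a frustration-word matcher might look like this in Python (the phrase list is a subset quoted in the article; the function name and structure are assumptions, not the actual contents of userPromptKeywords.ts):

```python
import re

# Illustrative sketch: a case-insensitive alternation over a few of the
# frustration phrases the article says the leaked regex watches for.
# The full pattern in userPromptKeywords.ts is not reproduced here.
FRUSTRATION_RE = re.compile(
    r"\b(wtf|wth|omfg|dumbass|horrible|awful|screw this|this sucks)\b",
    re.IGNORECASE,
)

def is_frustrated(message: str) -> bool:
    """Return True if the message contains one of the listed frustration phrases."""
    return FRUSTRATION_RE.search(message) is not None
```

A pattern like this runs cheaply on every message, which is consistent with the leak's description of a simple per-message text match rather than any model-based sentiment analysis.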

Read more of this story at Slashdot.

Will 'AI-Assisted' Journalists Bring Errors and Retractions?

Author: EditorDavid
April 6, 2026 06:22

🤖 AI Summary

AI-written articles are reshaping the news business. Nick Lichtenberg, a 42-year-old journalist, is known for having written roughly 600 stories with AI assistance; he works remarkably fast, once producing seven stories in a single day.

At the same time, AI-generated articles carry the risk of errors and improper sourcing, and several corrections have already been issued. AI-related plagiarism has been reported even at the New York Times. Journalists argue that AI reporting cannot substitute for human judgment and experience, and that human-centered journalism remains indispensable.

Even so, many newsrooms are using AI to work more efficiently. USA Today has opened an AI-assisted reporter position, and Google is sponsoring an AI-related award. These moves suggest AI will keep spreading through the news business, though misuse raises concerns about credibility.

In short: AI is useful for efficiency, but appropriate oversight and standards are needed to preserve the quality of human-centered journalism.
Meet the "journalist" who "uploads press releases or analyst notes into AI tools and prompts them to spit out articles that he can edit and publish quickly," according to the Wall Street Journal. "AI-assisted stories accounted for nearly 20% of Fortune's web traffic in the second half of 2025." And most were written by 42-year-old Nick Lichtenberg, who has now written over 600 AI-assisted stories, producing "more stories in six months than any of his colleagues at Fortune delivered in a year." One Wednesday in February, he cranked out seven. "I'm a bit of a freak," Lichtenberg said... A story by Lichtenberg sometimes starts with a prompt entered into Perplexity or Google's NotebookLM, asking it to write something based on a headline he comes up with. He moves the AI tools' initial drafts into a content-management system and edits the stories before publishing them for Fortune's readers... A piece from earlier that morning about Josh D'Amaro being named Disney CEO took 10 minutes to get online, he said... Like other journalists, Lichtenberg vets his stories. He refers back to the original documents to confirm the information he's reporting is correct. He reaches out to companies for comment. But he admits his process isn't as thorough as that of magazine fact-checkers. While Lichtenberg started out saying his stories were co-authored with "Fortune Intelligence", he now typically signs his own name, according to the article, "because he feels the work is mostly his own." (Though his stories "sometimes" disclose generative AI was used as a research tool...) The article asks whether he could be "a bellwether for where much of the media business is headed..." "Much of the content people now consume online is generated by artificial intelligence, with some 9% of newly published newspaper articles either partially or fully AI-generated, according to a 2025 study led by the University of Maryland. 
The number of AI-generated articles on the web surpassed human-written ones in late 2024, according to research and marketing agency Graphite." Some executives have made full-throated declarations about the threat posed by AI. New York Times publisher A.G. Sulzberger said AI "is almost certainly going to usher in an unprecedented torrent of crap," referencing deepfakes as an example. The NewsGuild of New York, the union representing Fortune employees and journalists at other media outlets, said the people are what makes journalism so powerful. "You simply can't replicate lived experiences, human judgment and expertise," said president Susan DeCarava. For Chris Quinn, the editor of local publications Cleveland.com and the Plain Dealer, AI tools have helped tame other torrents facing the industry. AI has allowed the outlets to cover counties in Ohio that otherwise might go ignored by scraping information from local websites and sending "tips" to reporters, he said. It has also edited stories and written first drafts so the newsrooms' journalists can focus on the calls, research and reporting needed for their stories.... Newsrooms from the New York Times to The Wall Street Journal are deploying AI in various ways to help reporters and editors work more efficiently.... Not all newsrooms disclose their use of AI, and in some cases have rolled out new tools that resulted in errors or PR gaffes. An October study from the European Broadcasting Union and the BBC, which relied on professional journalists to evaluate the news integrity of more than 3,000 AI responses, found that almost half of all AI responses had at least one significant issue. Last week the New York Times even issued a correction when a freelance book reviewer using an AI tool unknowingly included "language and details similar to those in a review of the same book published in The Guardian." 
But it was actually "the second time in a few days that the Times was called out for potential AI plagiarism," according to the American journalist writing The Handbasket newsletter. We must stem the idea being pushed by tech companies and their billionaire funders who've sunk too much into their products to admit defeat that the infiltration of AI into journalism is inevitable; because from my perch as an independent journalist, it simply is not... Some AI-loving journalists appear to believe that if they're clear enough with the AI program they're using, it will truly understand what they're seeking and not just do what it's made to do: steal shit... If you want to work with machines, get a job that requires it. There are a whole lot more of those than there are writing jobs, so free up space for people who actually want to do the work. You're not doing the world a favor by gifting it your human/AI hybrid. Journalism will not miss you if you leave... But meanwhile, USA Today recently tried hiring for a new position: AI-Assisted reporter. (The lucky reporter will "support the launch and scaling of AI-assisted local journalism in a major U.S. metro," working with tools including Copilot and Perplexity, pioneering possible future expansions and "AI-enabled newsroom operations that support and augment human-led journalism.") And Google is already sponsoring a "publishing innovation award"...

Read more of this story at Slashdot.

Top NPM Maintainers Targeted with AI Deepfakes in Massive Supply-Chain Attack, Axios Briefly Compromised

Author: EditorDavid
April 5, 2026 12:34

🤖 AI Summary

### Summary

This story covers a large-scale supply-chain attack on the npm package ecosystem. Key points:

1. **Compromise of the axios package**:
- Axios is a widely used developer tool that simplifies HTTP requests, with roughly 100 million downloads per week.
- An AI deepfake attack by the suspected North Korean hacking group UNC1069 led to malicious versions of the axios package being published.

2. **Attack details**:
- The attackers held virtual meetings, using AI to clone the faces and voices of real executives to build trust.
- They then tricked the victim into installing malware under the pretense that "something on your system is out of date."

3. **Scope**:
- Multiple npm package maintainers, including Socket engineers, were targeted; their packages are used throughout the JavaScript ecosystem.
- The targeted packages collectively record billions of downloads.

4. **Countermeasures**:
- Saayman proposed resetting all devices and credentials, adopting immutable releases, introducing an OIDC publishing flow, and updating GitHub Actions to follow best practices.

5. **Conclusion**:
- This is among the most sophisticated supply-chain attacks ever documented, exposing risks lurking in the systems that underpin how modern software is built.

The story offers important insight into npm's supply-chain vulnerabilities and how to defend against them.
"Hackers briefly turned a widely trusted developer tool into a vehicle for credential-stealing malware that could give attackers ongoing access to infected systems," the news site Axios.com reported Tuesday, citing security researchers at Google. The compromised package — also named axios — simplifies HTTP requests, and reportedly receives millions of downloads each day: The malicious versions were removed within roughly three hours of being published, but Google warned the incident could have "far-reaching impacts" given the package's widespread use, according to John Hultquist, chief analyst at Google Threat Intelligence Group. Wiz estimates Axios is downloaded roughly 100 million times per week and is present in about 80% of cloud and code environments. So far, Wiz has observed the malicious versions in roughly 3% of the environments it has scanned. Friday PCMag notes the maintainer's compromised account had two-factor authentication enabled, with the breach ultimately traced "to an elaborate AI deepfake from suspected North Korean hackers that was convincing enough to trick a developer into installing malware," according to a post-mortem published Thursday by lead developer Jason Saayman: [Saayman] fell for a scheme from a North Korean hacking group, dubbed UNC1069, which involves sending out phishing messages and then hosting virtual meetings that use AI deepfakes to clone the face and voices of real executives. The virtual meetings will then create the impression of an audio problem, which can only be "solved" if the victim installs some software or runs a troubleshooting command. In reality, it's an effort to execute malware. The North Koreans have been using the tactic repeatedly, whether it be to phish cryptocurrency firms or to secure jobs from IT companies. Saayman said he faced a similar playbook. "They reached out masquerading as the founder of a company, they had cloned the company's founders likeness as well as the company itself," he wrote. 
"They then invited me to a real Slack workspace. This workspace was branded... The Slack was thought out very well, they had channels where they were sharing LinkedIn posts. The LinkedIn posts I presume just went to the real company's account, but it was super convincing etc." The hackers then invited him to a virtual meeting on Microsoft Teams. "The meeting had what seemed to be a group of people that were involved. The meeting said something on my system was out of date. I installed the missing item as I presumed it was something to do with Teams, and this was the remote access Trojan," he added. "Everything was extremely well coordinated, looked legit and was done in a professional manner." Friday developer security platform Socket wrote that several more maintainers in the Node.js ecosystem "have come out of the woodwork to report that they were targeted by the same social engineering campaign." The accounts now span some of the most widely depended-upon packages in the npm registry and Node.js core itself, and together they confirm that axios was not a one-off target. It was part of a coordinated, scalable attack pattern aimed at high-trust, high-impact open source maintainers. Attackers also targeted several Socket engineers, including CEO Feross Aboukhadijeh. Feross is the creator of WebTorrent, StandardJS, buffer, and dozens of widely used npm packages with billions of downloads... Commenting on the axios post-mortem thread, he noted that this type of targeting [against individual maintainers] is no longer unusual... "We're seeing them across the ecosystem and they're only accelerating." Jordan Harband, John-David Dalton, and other Socket engineers also confirmed they were targeted. Harband, a TC39 member, maintains hundreds of ECMAScript polyfills and shims that are foundational to the JavaScript ecosystem. Dalton is the creator of Lodash, which sees more than 137 million weekly downloads on npm. 
Between them, the packages they maintain are downloaded billions of times each month. Wes Todd, an Express TC member and member of the Node Package Maintenance Working Group, also confirmed he was targeted. Matteo Collina, co-founder and CTO of Platformatic, Node.js Technical Steering Committee Chair, and lead maintainer of Fastify, Pino, and Undici, disclosed on April 2 that he was also targeted. His packages also see billions of downloads per year... Scott Motte, creator of dotenv, the package used by virtually every Node.js project that handles environment variables, with more than 114 million weekly downloads, also confirmed he was targeted using the same Openfort persona. Socket reports that another maintainer was targeted with an invitation to appear on a podcast. (During the recording a suspicious technical issue appeared which required a software fix to resolve....) Even judged purely on technical implementation, "This is among the most operationally sophisticated supply chain attacks ever documented against a top-10 npm package," the CI/CD security company StepSecurity wrote Tuesday. The dropper contacts a live command-and-control server, delivers separate second-stage payloads for macOS, Windows, and Linux, then erases itself and replaces its own package.json with a clean decoy... Three payloads were pre-built for three operating systems. Both release branches were poisoned within 39 minutes of each other. Every artifact was designed to self-destruct. Within two seconds of npm install, the malware was already calling home to the attacker's server before npm had even finished resolving dependencies... Both versions were published using the compromised npm credentials of a lead axios maintainer, bypassing the project's normal GitHub Actions CI/CD pipeline. 
"As preventive steps, Saayman has now outlined several changes," reports The Hacker News, "including resetting all devices and credentials, setting up immutable releases, adopting OIDC flow for publishing, and updating GitHub Actions to adopt best practices." The Wall Street Journal called it "the latest in a string of incidents exposing risks in the systems that underpin how modern software is built."

Read more of this story at Slashdot.

Anthropic Announces Claude Subscribers Must Now Pay Extra to Use OpenClaw

Author: EditorDavid
April 5, 2026 04:34

🤖 AI Summary

Anthropic has announced a change to Claude subscriptions: using the third-party tool OpenClaw will now cost extra. As of 3PM ET on April 4, users can no longer spend their Claude subscription limits on third-party harnesses; instead, a "pay-as-you-go" option, billed separately from the Claude subscription, has been introduced. According to Anthropic, its in-house tools are built to maximize "prompt cache hit rates," and third-party tools can undermine that efficiency.

The decision strengthens Anthropic's control over the UI/UX layer and lets it collect telemetry and manage rate limits more granularly, but it risks alienating the power-user community. Anthropic frames the move as a balance of revenue against growth, saying "capacity is a resource we manage thoughtfully."

OpenClaw creator Peter Steinberger, meanwhile, is skeptical of Anthropic's argument, calling the timing suspicious: by his account, Anthropic copied popular features into its own closed tools and then locked out open source.

Some users worry the change makes OpenClaw too costly to keep using with Claude and say they may switch to other models. Anthropic maintains that the ordinary user experience is unaffected, but for power users running demanding autonomous setups it is a major shift.
Anthropic's making a big and sudden change — and connecting its Claude AI to third-party agentic tools "is about to get a lot more expensive," writes the Verge: Beginning April 4th at 3PM ET, users will "no longer be able to use your Claude subscription limits for third-party harnesses including OpenClaw," according to an email sent to users on Friday evening. Instead, if users want to use OpenClaw with Claude, they'll have to use a "pay-as-you-go option" that will be billed separately from their Claude subscription. Anthropic's announcement added these extra usage bundles are "now available at a discount." Users can also try Anthropic's API, notes VentureBeat, "which charges for every token of usage rather than allowing for open-ended usage up to certain limits, as the Pro and Max plans have allowed so far." The technical reality, according to Anthropic, is that its first-party tools like Claude Code, its AI vibe coding harness, and Claude Cowork, its business app interfacing and control tool, are built to maximize "prompt cache hit rates" — reusing previously processed text to save on compute. Third-party harnesses like OpenClaw often bypass these efficiencies... [Claude Code creator Boris Cherny explained on X that "I did put up a few PRs to improve prompt cache hit rate for OpenClaw in particular, which should help for folks using it with Claude via API/overages."] Growth marketer Aakash Gupta observed on X that the "all-you-can-eat buffet just closed," noting that a single OpenClaw agent running for one day could burn $1,000 to $5,000 in API costs. "Anthropic was eating that difference on every user who routed through a third-party harness," Gupta wrote. "That's the pace of a company watching its margin evaporate in real time." However, Peter Steinberger, the creator of OpenClaw who was recently hired by OpenAI, took a more skeptical view of the "capacity" argument. "Funny how timings match up," Steinberger posted on X. 
"First they copy some popular features into their closed harness, then they lock out open source." Indeed, Anthropic recently added some of the same capabilities that helped OpenClaw catch on — such as the ability to message agents through external services like Discord and Telegram — to Claude Code... User @ashen_one, founder of Telaga Charity, voiced a concern likely shared by other small-scale builders: "If I switch both [OpenClaw instances] to an API key or the extra usage you're recommending here, it's going to be far too expensive to make it worth using. I'll probably have to switch over to a different model at this point." "I know it sucks," Cherny replied. "Fundamentally engineering is about tradeoffs, and one of the things we do to serve a lot of customers is optimize the way subscriptions work to serve as many people as possible with the best mode..." OpenAI appears to be positioning itself as a more "harness-friendly" alternative, potentially using this moment as a customer acquisition channel for disgruntled Claude power users. By restricting subscription limits to their own "closed harness," Anthropic is asserting control over the UI/UX layer. This allows them to collect telemetry and manage rate limits more granularly, but it risks alienating the power-user community that built the "agentic" ecosystem in the first place. Anthropic's decision is a cold calculation of margins versus growth. As Cherny noted, "Capacity is a resource we manage thoughtfully." In the 2026 AI landscape, the era of subsidized, unlimited compute for third-party automation is over. For the average user on Claude.ai, the experience remains unchanged; for the power users running autonomous offices, the bell has tolled.

Read more of this story at Slashdot.

'AI' Is Coming For Your Online Gaming Servers Next

Author: BeauHD
April 4, 2026 12:30

🤖 AI Summary

The AI wave is now reaching online game servers.

Link: [https://games.slashdot.org/story/26/04/03/2024233/ai-is-coming-for-your-online-gaming-servers-next?utm_source=rss1.0mainlinkanon&utm_medium=feed](https://games.slashdot.org/story/26/04/03/2024233/ai-is-coming-for-your-online-gaming-servers-next?utm_source=rss1.0mainlinkanon&utm_medium=feed)

Summary:
Stormgate, a Starcraft-inspired strategy game, is losing its multiplayer servers because its hosting company was acquired into the AI sector. The game remains playable offline for now, but the episode illustrates the AI boom's ripple effects on the gaming industry: amid ongoing hardware shortages, AI companies are acquiring as much infrastructure as they can and repurposing it for AI workloads.

Stormgate developer Frost Giant Studios announced that the multiplayer servers will shut down at the end of this month. Hosting provider Hathora was acquired by Fireworks AI, which offers an inference cloud running "open-source AI models at blazing speed, optimized for your use case, scaled globally," so Hathora's servers will likely see further AI-related use.

Hathora plans to shut down its game-services business entirely, so Stormgate is probably not the last affected title; Hathora also provides services for other online games such as Splitgate 2.
"Consumer PC parts aren't the only things being gobbled up by the 'AI' industry," writes PCWorld's Michael Crider. "A Starcraft-inspired strategy game is shutting down its multiplayer servers because the hosting company got bought out for 'AI.'" The game will still be playable offline for now, but the shutdown highlights the ripple effects of the AI boom on the gaming industry. Amid the ongoing hardware shortages, AI companies are basically gobbling up as much infrastructure as they can to repurpose it for AI workloads. From the report: The game in question is Stormgate, a crowdfunded revival of the real-time strategy genre that has languished in the last decade or so. The developer Frost Giant Studios told its players on Discord (spotted by PC Gamer) that it would be unable to continue multiplayer access past the end of this month. The "game server orchestration partner" was bought by an AI company -- the developer's words, not mine -- which means that the multiplayer aspects of the game will have a "planned outage." The devs say the game will be patched for offline play, presumably including its single-player campaign mode and co-op modes, but "online modes will not be available at that point." They're hoping to bring back online play in a later update, but that'll depend on "finding a partner to support ongoing operations." That sounds like old-fashioned player-hosted games with lobbies aren't in the cards, at least not yet. Frost Giant's server provider is Hathora, which was bought by a company called Fireworks AI last month. Fireworks describes its offerings as "open-source AI models at blazing speed, optimized for your use case, scaled globally with the Fireworks Inference Cloud." So, yeah, Hathora's infrastructure will likely be used for yet more generative "AI." And according to GamesBeat, it's planning to shut down the game service aspect of its company completely. That means Stormgate probably isn't going to be the last game affected. 
Hathora also provides online services for Splitgate 2, among others. I'm contacting Hathora for comment and will update this story if I receive a response.

Read more of this story at Slashdot.

Google Announces Gemma 4 Open AI Models, Switches To Apache 2.0 License

Author: BeauHD
April 3, 2026 03:00

🤖 AI Summary

Google's Gemini AI models have advanced rapidly, but they can only be used on Google's terms. The company has now announced Gemma 4, a new generation of open-weight models, and adopted the Apache 2.0 license for them.

Gemma 4 comes in four sizes optimized for local use. The two large variants, a 26B Mixture of Experts model and a 31B Dense model, run unquantized on a single Nvidia H100 GPU; that accelerator costs around $20,000, but quantized to lower precision the models fit on consumer GPUs.

Two mobile-oriented variants, Effective 2B (E2B) and Effective 4B (E4B), are also available; they keep memory use and power consumption low while improving throughput, and run on devices such as smartphones and the Raspberry Pi.

By choosing the Apache 2.0 license, Google says it is granting users "complete control over your data, infrastructure, and models."

Hugging Face co-founder and CEO Clement Delangue called the release "a huge milestone," expressing hope that Gemma 4 and the surrounding ecosystem Google calls the "Gemmaverse" will be widely used in development.
An anonymous reader quotes a report from Ars Technica: Google's Gemini AI models have improved by leaps and bounds over the past year, but you can only use Gemini on Google's terms. The company's Gemma open-weight models have provided more freedom, but Gemma 3, which launched over a year ago, is getting a bit long in the tooth. Starting today, developers can start working with Gemma 4, which comes in four sizes optimized for local usage. Google has also acknowledged developer frustrations with AI licensing, so it's dumping the custom Gemma license. Like past versions of its open-weight models, Google has designed Gemma 4 to be usable on local machines. That can mean plenty of things, of course. The two large Gemma variants, 26B Mixture of Experts and 31B Dense, are designed to run unquantized in bfloat16 format on a single 80GB Nvidia H100 GPU. Granted, that's a $20,000 AI accelerator, but it's still local hardware. If quantized to run at lower precision, these big models will fit on consumer GPUs. Google also claims it has focused on reducing latency to really take advantage of Gemma's local processing. The 26B Mixture of Experts model activates only 3.8 billion of its 26 billion parameters in inference mode, giving it much higher tokens-per-second than similarly sized models. Meanwhile, 31B Dense is more about quality than speed, but Google expects developers to fine-tune it for specific uses. The other two Gemma 4 models, Effective 2B (E2B) and Effective 4B (E4B), are aimed at mobile devices. These options were designed to maintain low memory usage during inference, running at an effective 2 billion or 4 billion parameters. Google says the Pixel team worked closely with Qualcomm and MediaTek to optimize these models for devices like smartphones, Raspberry Pi, and Jetson Nano. Not only do they use less memory and battery than Gemma 3, but Google also touts "near-zero latency" this time around. 
The Apache 2.0 license is much more flexible with its terms of use for commercial restrictions, "granting you complete control over your data, infrastructure, and models," says Google. Clement Delangue, co-founder and CEO of Hugging Face, called it "a huge milestone" that will help developers use Gemma for more projects and expand what Google calls the "Gemmaverse."
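The hardware claims above can be sanity-checked with simple arithmetic. This is a weight-only, back-of-the-envelope sketch (real deployments also need memory for activations and the KV cache, which the article does not quantify):

```python
BYTES_PER_PARAM_BF16 = 2  # bfloat16 stores each weight in 2 bytes

def weight_footprint_gb(params_billions: float) -> float:
    """Approximate weight-only memory, in GB, for a model held in bfloat16."""
    return params_billions * 1e9 * BYTES_PER_PARAM_BF16 / 1e9

# 31B Dense: ~62 GB of weights, which fits under the H100's 80 GB
print(weight_footprint_gb(31))  # 62.0

# 26B MoE: only 3.8B of 26B parameters are active per inference step,
# i.e. roughly 15% of the weights are touched per token
print(3.8 / 26)  # ~0.146
```

The same arithmetic shows why quantization brings these models to consumer GPUs: halving precision to 8-bit halves the weight footprint to about 31 GB, and 4-bit halves it again.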

Read more of this story at Slashdot.

Group Pushing Age Verification Requirements For AI Sneakily Backed By OpenAI

Author: BeauHD
April 2, 2026 20:00

🤖 AI Summary

OpenAI has been quietly funding an organization pushing child-safety regulation without disclosing its involvement. The California-based Parents and Kids Safe AI Coalition was formed to push for mandatory age verification and other safeguards from AI companies for users under 18, but it turns out OpenAI was bankrolling it. OpenAI is not merely one member of the coalition: it is effectively its sole funder, with support reportedly reaching $10 million. OpenAI did not publicize its involvement, and the information was absent from the coalition's outreach and website, which has drawn criticism. The arrangement has also been noted as potentially serving the interests of CEO Sam Altman.
An anonymous reader quotes a report from Gizmodo: OpenAI hasn't been shy about spending money lobbying for favorable laws and regulations. But when it comes to its involvement with child safety advocacy groups, the company has apparently decided it's best to stay in the shadows -- even if it means hiding from the people actually pushing for policy changes. According to a report from the San Francisco Standard, a number of people involved in the California-based Parents and Kids Safe AI Coalition were blindsided to learn their efforts were secretly being funded by OpenAI. Per the Standard, the Parents and Kids Safe AI Coalition was a group formed to push the Parents and Kids Safe AI Act, a piece of California legislation proposed earlier this year that would require AI firms to implement age verification and additional safeguards for users under the age of 18. That bill was backed by OpenAI in partnership with Common Sense Media, which proposed the legislation as a compromise after the two groups had pushed dueling ballot initiatives last year. But when the coalition started to reach out to child safety groups and other advocacy organizations to try to get them to lend support to the bill, OpenAI was apparently conveniently left off the messaging. The AI giant was also left out of the marketing on the coalition's website, according to the Standard. That reportedly led to a number of groups and individuals lending their support to the Parents and Kids Safe AI Coalition without realizing that they were aligning themselves with OpenAI. As it turns out, OpenAI isn't just one of the members of the coalition; it is the group's biggest funder. In fact, the Standard characterized the Parents and Kids Safe AI Coalition as being "entirely funded" by OpenAI. While it's not clear exactly how much the company has funneled to this particular group, a Wall Street Journal report from January said OpenAI pledged $10 million to push the Parents and Kids Safe AI Act. 
Gizmodo notes that OpenAI's backing of the Parents and Kids Safe AI Act "could be self-serving for CEO Sam Altman," who just so happens to head a company called World that provides age verification services.

Read more of this story at Slashdot.

Design Your Own Stickers and Temporary Tattoos With "Adobe Firefly" and "Adobe Express": Adobe Exhibits a Booth at Music Festival "BLARE FEST."

🤖 AI Summary

Adobe set up a sponsor booth at "BLARE FEST.", a music festival organized by coldrain held on February 7 and 8, giving attendees the chance to create original stickers and temporary tattoo seals. The booth offered a design experience using the generative AI model "Adobe Firefly" and the content creation app "Adobe Express."

More than 1,000 works were completed at the event, and many attendees praised how fun and easy the design process was. With "Adobe Firefly," visitors created original designs based on the BLARE FEST. 2026 logo and turned them into stickers, while "Adobe Express" was used to customize a timetable template into temporary tattoo seals.

Adobe emphasized the affinity between generative AI and creative expression, creating an environment where even non-professional users could freely express their ideas. The booth also enabled "two-way communication," with fans sending their own works directly to the artists.

Adobe is also working to protect creators' rights, aiming to implement transparent AI technologies such as Content Credentials.
sponsored by Adobe. "BLARE FEST.", a music festival organized by coldrain, was held at Port Messe Nagoya on February 7 and 8. Adobe exhibited a sponsor booth at the venue, where attendees could try creating original stickers and temporary tattoo seals. ...Read more

"Recording Calls via Bone Conduction Is Quietly Impressive": Notta's AI Recorder Is On Sale [Today's Deal]

🤖 AI Summary

### Notta's AI recorder, which records calls via bone conduction, is on sale

If writing up important call notes from meetings or while out of the office takes too much of your time, Notta's "Notta Memo AI Voice Recorder" is worth a look. This single device automatically handles everything from call recording to transcription and summarization.

#### Key features:
- **Bone-conduction microphone**: Attaches magnetically to the back of a smartphone.
- **Switchable recording modes**: Two microphone modes selectable with a slide button.
- **AI transcription and summarization**: Paired with the companion app, it automates efficient note-taking.

#### Recommended for:
- People with many long meetings or business negotiations
- People who take many calls on the go and need accurate call records
- People who need real-time translation, such as for international meetings

#### Pricing and sale info:
On Amazon, the product, normally 23,500 yen (tax included), is currently 10% off at 21,150 yen (tax included). It's a good opportunity for anyone looking to streamline everyday meeting and call records.

Source: [GetNews deals team](https://getnews.jp/archives/3704113)
Preparing meeting minutes, or taking notes on important calls while out of the office: compiling them into text by hand afterward is time-consuming. We found a smart gadget that spares you that chore. Notta's "Notta Memo AI Voice Recorder" goes, with one touch, from recording to... Read more

Will AI Force Source Code to Evolve - Or Make it Extinct?

Author: EditorDavid
March 23, 2026 19:34

🤖 AI Summary

The article discusses whether AI will drive the evolution of source code or render it extinct. Stephen Cass (special projects editor at IEEE Spectrum) poses the fundamental question: could we get AIs to go straight from a prompt to an intermediate language, and do we need high-level languages at all?

Cass acknowledges that programs could become "inscrutable black boxes," sacrificing readability. But he also points out that a new kind of work could emerge: rather than reading, writing, and maintaining code, programmers would tweak prompts and generate software afresh.

Meanwhile, according to Andrea Griffiths (senior developer advocate at GitHub), "AI-first" languages exist but so far have no meaningful adoption. She suggests AI could instead make existing high-level languages easier to use.

Finally, the article mentions Chris Lattner's language Mojo, which aims to harness the computing power of multi-core chips and is regarded as a programming language for the AI era.

Through these discussions, the article considers how AI may eventually reshape programming languages themselves. For now, though, the industry is still digesting "vibe coding," so this remains some way off. Still, Cass says this could become an area of active research.
Will there be an AI-optimized programming language at the expense of human readability? There have now been experiments with minimizing tokens for "LLM efficiency, without any concern for how it would serve human developers." This new article asks if AI will force source code to evolve — or make it extinct, noting that Stephen Cass, the special projects editor at IEEE Spectrum, has even been asking the ultimate question about our future. "Could we get our AIs to go straight from prompt to an intermediate language that could be fed into the interpreter or compiler of our choice? Do we need high-level languages at all in that future?" Cass acknowledged the obvious downsides. ("True, this would turn programs into inscrutable black boxes, but they could still be divided into modular testable units for sanity and quality checks.") But "instead of trying to read or maintain source code, programmers would just tweak their prompts and generate software afresh." This leads to some mind-boggling hypotheticals, like "What's the role of the programmer in a future without source code?" Cass asked the question and announced "an emergency interactive session" in October to discuss whether AI is signaling the end of distinct programming languages as we know them. In that webinar, Cass said he believes programmers in this future would still suggest interfaces, select algorithms, and make other architecture design choices. And obviously the resulting code would need to pass tests, Cass said, and "has to be able to explain what it's doing." But what kind of abstractions could go away? And then "What happens when we really let AIs off the hook on this?" Cass asked — when we "stop bothering" to have them code in high-level languages. (Since, after all, high-level languages "are a tool for human beings.") "What if we let the machines go directly into creating intermediate code?"
(Cass thinks the machine-language level would be too far down the stack, "because you do want a compile layer too for different architecture....") In this future, the question might become 'What if you make fewer mistakes, but they're different mistakes?' Cass said he's keeping an eye out for research papers on designing languages for AI, although he agreed that it's not a "tomorrow" thing — since, after all, we're still digesting "vibe coding" right now. But "I can see this becoming an area of active research." The article also quotes Andrea Griffiths, a senior developer advocate at GitHub and a writer for the newsletter Main Branch, who's seen attempts at "AI-first" languages, but nothing yet with meaningful adoption. So maybe AI coding agents will just make it easier to use our existing languages — especially typed languages with built-in safety advantages. And Scott Hanselman's podcast recently dubbed Chris Lattner's Mojo "a programming language for an AI world," just in the way it's designed to harness the computing power of today's multi-core chips.

Read more of this story at Slashdot.

"Crimson Desert" Developer Apologizes For Use of AI Art

March 23, 2026 12:25
In "Crimson Desert," an open-world action-adventure game released on March 20, 2026, a painting displayed in-game was flagged on Reddit as possibly AI-generated. Developer Pearl Abyss has acknowledged the use of generative AI and apologized, explaining that "an early-production asset that was meant to be replaced later was unintentionally left in."

Read more...

OpenAI Is Planning a Desktop "Super App"

March 23, 2026 11:16

🤖 AI Summary

OpenAI is reportedly developing a desktop "super app" that unifies several of its applications (ChatGPT, the Codex AI coding app, Atlas, and so on). The new app aims to integrate OpenAI's products efficiently and simplify the user experience. CEO Fidji Simo said the company "needs to focus on new bets like Codex."

The new super app will also prioritize the development of agentic AI features: AI systems that operate autonomously on a user's computer, writing software, analyzing data, and more.

Through the app, OpenAI is focusing on streamlining its resources and fixing the fragmentation of its product lineup. Introducing the super app is also seen as important for responding to intensifying competition.
It has been reported that OpenAI is working on a desktop "super app" that merges ChatGPT, the Codex AI coding app, and the AI-powered Atlas browser into a single application.

Read more...

Cases Reported of Google Search Replacing Link Titles With AI-Generated Headlines

March 23, 2026 11:10

🤖 AI Summary

It has been reported that the link titles shown in Google Search results are being automatically generated by AI. According to The Verge, AI-driven changes have been confirmed in news headlines and website titles, and in some cases the rewrite completely alters the meaning.

Google says the experiment is meant to "improve click-through rates and better match users' search queries," but The Verge calls it a "canary in the coal mine" and warns that Google may alter the deal even further.

The article covers the latest developments and concerns around AI-generated headlines and altered search results.
It has been confirmed that the link titles (headlines) shown in Google Search results are being replaced with AI-generated ones rather than the titles set by site creators. In some reported cases, the AI's rewrite changed the meaning of the title entirely.

Read more...

Cursor Admits Its Latest Coding AI "Composer 2" Is Based on Kimi From China's Moonshot AI

March 23, 2026 10:58
Cursor, maker of the coding AI of the same name, announced its latest AI model, "Composer 2," on March 19, 2026. Composer 2 offers state-of-the-art coding performance at a very competitive cost. After the release, however, users pointed out that "Composer 2 is just Kimi 2.5 with added reinforcement learning (RL)," and Cursor ended up acknowledging that Composer 2 is an AI model based on Kimi.

Read more...

Claim: AI Models Can't Write Creatively Because the Creativity and Originality Seen in Early Models Was Suppressed To Specialize Them for Business Use

March 23, 2026 07:00
AI capabilities are advancing rapidly: models can already handle a wide variety of tasks and complete complex calculations. Yet creative writing has barely improved since the early days of chat AI. The Atlantic has gathered expert opinions on why AI writing ability has been so slow to improve.

Read more...

A CNN Producer Explores the 'Magic AI' Workout Mirror

Author: EditorDavid
March 23, 2026 01:34

🤖 AI Summary

CNN profiles a new product, the "Magic AI fitness mirror." The device watches you and gives you feedback in real time, and sometimes plays footage of a recorded personal trainer.

Covering CNN's video report, longtime Slashdot reader destinyland notes that the device "tracks form, counts reps, and corrects technique in real time, and it doesn't go easy on you." Still, CEO and co-founder Varun Bhanot says, "We're not trying to completely replace personal trainers. What we are providing is a more accessible alternative."

CNN describes Magic AI as more a computer-vision firm than a fitness company, with the mirror's technology built from the ground up. Bhanot recalls hiring a personal trainer in his 20s to get fit, but says the whole process lacked data and augmentation.

The AI fitness and wellness market is already large and growing: the global market was worth $11 billion in 2025 and is expected to reach nearly $58 billion by 2035. Magic AI is just one of several players; brands such as Form, Total, Speediance, and Echelon are all after a slice of this market.

In other words, even the most purely physical of activities, exercising your body, is now being "enhanced" by AI accessories.
CNN looks at "the Magic AI fitness mirror," a new product "watching you, and giving you feedback automatically," while sometimes playing footage of a recorded personal trainer. Long-time Slashdot reader destinyland describes CNN's video report: CNN says the device "tracks form, counts reps, and corrects technique in real-time — and it doesn't go easy on you." (Although the company's CEO/cofounder, Varun Bhanot, says "we're not trying to completely replace personal trainers. What we are providing is a more accessible alternative.") CNN calls the company "more a computer-vision firm than a fitness company, building the tech for this mirror from the ground up." CEO Bhanot tells CNN he'd hired a personal trainer in his 20s to get fit, but "Going through that journey, I realized how old-fashioned personal training was. Dumbbells were still dumb. There was no data or augmentation for the whole process!" "The AI fitness and wellness market is already huge — and it's growing," CNN adds. "In 2025 the global market was worth $11 billion, according to [market research firm] Insightace Analytic. By 2035, this market is expected to reach just shy of $58 billion." And Magic AI is far from alone. Form, Total, Speediance, and Echelon, to name a few, are all brands vying for a slice of this market. Even the most purely physical of activities — exercising your body — now gets "enhanced" with AI accessories...
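As an aside, the quoted market projection implies a compound annual growth rate of roughly 18%. That rate is not stated in the report; it is derived here from the two endpoints CNN cites:

```python
# Implied compound annual growth rate (CAGR) from the article's figures:
# $11 billion in 2025 growing to $58 billion by 2035 (10 years).
# The endpoints come from the article; the ~18%/year rate is derived, not quoted.
start, end, years = 11e9, 58e9, 10

cagr = (end / start) ** (1 / years) - 1
print(f"{cagr:.1%}")  # about 18.1% per year
```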

Read more of this story at Slashdot.

Trying Out "Web Search" and "Automatic Image Generation" in "LibreChat," Which Can Combine Any and All AI Models and Is Free To Self-Host

March 22, 2026 23:00
AI chat services like ChatGPT, Claude, and Gemini are convenient, but hopping back and forth between them is a hassle. "LibreChat," which is free to self-host, is an integrated platform that lets you handle any and all AI models from a single screen. This time we tried two LibreChat features: "web search" and "image generation."

Read more...
