Reading View

Single Dose of Magic Mushroom Psychedelic Can Cause Anatomical Brain Changes

✇Slashdot
Author: BeauHD

🤖 AI Summary

A small study has reported that a single dose of psilocybin, the psychedelic compound in magic mushrooms, produced anatomical brain changes that were still observable one month later. Prof Robin Carhart-Harris of the University of California, San Francisco, said the meaning of the changes is not yet known, but noted that participants showed positive psychological changes, including improved wellbeing and mental flexibility.

Specialized scans measuring the diffusion of water in the brain suggested that some nerve tracts had become denser and more robust after the drug was taken, the opposite of what is seen in ageing and dementia. "It's remarkable to see potential anatomical brain changes one month after a single dose of any drug," Carhart-Harris said.

The study also found that participants with the largest spike in brain entropy after psilocybin were most likely to report deeper psychological insight and better wellbeing a month later. "It suggests a psychobiological therapeutic action for psilocybin," Carhart-Harris said.

Prof Alex Kwan, a neuroscientist at Cornell University in New York, noted that studies in mice suggest psychedelics can rewire connections between nerves, which could underlie their therapeutic effects. But while he called the results "exciting," he cautioned that the study involved a small number of participants and that DTI offers only an indirect, limited view of brain connections.
A small study found that a single 25mg dose of psilocybin produced measurable brain changes that were still visible a month later, along with reported improvements in psychological insight, wellbeing, and mental flexibility. The Guardian reports: Evidence for the changes came from specialized scans that measured the diffusion of water along nerve bundles in the brain. They suggested that some nerve tracts had become denser and more robust after the drug was taken. While the findings are preliminary, the scientists said the opposite was seen in ageing and dementia. "It's remarkable to see potential anatomical brain changes one month after a single dose of any drug," said Prof Robin Carhart-Harris, a neurologist at the University of California, San Francisco, and senior author on the study. "We don't yet know what these changes mean, but we do note that overall, people showed positive psychological changes in this study, including improved wellbeing and mental flexibility." [...] Writing in Nature Communications, the researchers describe another key finding. Those who had the largest spike in brain entropy after psilocybin were most likely to report deeper psychological insight and better wellbeing a month later, underlining the link between flexible thinking and improved mental health. "It suggests a psychobiological therapeutic action for psilocybin," said Carhart-Harris. Prof Alex Kwan, a neuroscientist at Cornell University in New York, said studies in mice had shown that psychedelics can rewire connections between nerves, a form of "plasticity" that could underlie their therapeutic effects. The big question is whether the same occurs in humans. "This study comes closer than most to addressing that question, by giving evidence of lasting changes in brain structure after psychedelic use," he said. But while the results were "exciting," the study involved a small number of people and DTI provides an indirect and limited view of brain connections, he said.

Read more of this story at Slashdot.

  •  

Sam Altman's Management Style Comes Under the Microscope At OpenAI Trial

✇Slashdot
Author: BeauHD

🤖 AI Summary

Sam Altman's management style came under scrutiny on the seventh day of Elon Musk's lawsuit against OpenAI. Former OpenAI figures Mira Murati (the company's ex-CTO), Shivon Zilis, and Helen Toner testified, resurfacing concerns about Altman's "difficult and chaotic" management style.

Murati said Altman had trouble making decisions on "big controversial things" and had a habit of telling people what they wanted to hear. She argued that "Sam saying one thing to one person and a completely different thing to another person" made for a very difficult and chaotic working environment.

Zilis expressed concern that Altman rolled out ChatGPT without communicating with the board. She was also uneasy about a potential OpenAI deal with Helion Energy, a nuclear energy startup, because both Altman and Greg Brockman were investors in it; the proposal "felt super out of left field" to her.

Toner testified about why the board, including herself, removed Altman as CEO, citing a pattern of behavior related to his honesty and candor, his resistance to board oversight, and concerns about his management practices raised to the board by members of his own inner management team.

The testimony comes amid a broader trial covering OpenAI's founding history, its disputes with Musk, and the value of executives' stakes in the company.
Sam Altman's management style came under scrutiny on the seventh day of Elon Musk's high-stakes OpenAI trial, as former OpenAI figures Mira Murati, Shivon Zilis, and Helen Toner took the stand to testify about their experiences working with him. Their testimony resurfaced many of the criticisms that first emerged during Altman's brief ouster as CEO in 2023. An anonymous reader quotes a report from Business Insider: The first witness was Mira Murati, OpenAI's former chief technology officer and now founder of her own AI shop, Thinking Machines Lab. Jurors watched a recorded video deposition of Murati, who was also OpenAI's interim CEO after the board briefly ousted Sam Altman. Murati's testimony focused on her concerns about Altman's "difficult and chaotic" management style. She said Altman had trouble "making decisions on big controversial things." He also had a habit of telling people what they wanted to hear. "My concern was about Sam saying one thing to one person and a completely different thing to another person, and that makes it a very difficult and chaotic environment to work with," said Murati. Murati said that her issue with Altman was not about safety, "it is about Sam creating chaos." She said she supported Altman's return to OpenAI because the company "was at catastrophic risk of falling apart" at the time of his ousting. "I was concerned about the company completely blowing up." Zilis said she was upset that Altman rolled out ChatGPT without involving the board. "It wasn't just me but the entire board raised concern about that whole thing happening without any board communication," she said. Zilis said she was also concerned about a potential OpenAI deal with a nuclear energy startup called Helion Energy because both Altman and Greg Brockman were investors. Although the executives had disclosed the investment to the board, Zilis said the deal talk made her uneasy. It "felt super out of left field," she said. 
"How is it the case that we want to place a major bet on a speculative technology?" In a video deposition, Helen Toner, a former member of OpenAI's board who resigned in 2023, said she first became aware of ChatGPT's release when an OpenAI employee asked another board member whether the board was aware of the development. [...] Toner also elaborated on why the board, including herself, voted to remove Altman as CEO in 2023. "There were a number of things -- the pattern of behavior related to his honesty and candor, his resistance of board oversight, as well as the concerns that two of his inner management team raised to the board about his management practices, his manipulation of board processes," said Toner.

Recap:
  • Brockman Rebuts Musk's Take On Startup's History, Recounts Secret Work For Tesla (Day Six)
  • OpenAI President Discloses His Stake In the Company Is Worth $30 Billion (Day Five)
  • Musk Concludes Testimony At OpenAI Trial (Day Four)
  • Elon Musk Says OpenAI Betrayed Him, Clashes With Company's Attorney (Day Three)
  • Musk Testifies OpenAI Was Created As Nonprofit To Counter Google (Day Two)
  • Elon Musk and OpenAI CEO Sam Altman Head To Court (Day One)

Read more of this story at Slashdot.

  •  

Google's AI Search Results Will Now Turn To Reddit For 'Expert Advice'

✇Slashdot
Author: BeauHD

🤖 AI Summary

Google announced an update to AI Overviews and AI Mode that will more prominently surface "Expert Advice" drawn from public discussions, social platforms, forums, blogs, and Reddit. The new "Expert Advice" section appears inside AI responses and provides a preview of firsthand perspectives from public online discussions, social media, and other sources. Google also plans to add more context to the links it surfaces, showing a creator's name, handle, or community name.

AI responses will additionally recommend related in-depth articles and link to sources directly within generated answers. If you link publication subscriptions to your Google account, AI responses will also highlight sources from those subscriptions.

The update aims to guide users from search results toward more reliable, firsthand sources of information.
Google is updating AI Overviews and AI Mode to more prominently surface "Expert Advice" from public discussions, social platforms, forums, blogs, and Reddit. Engadget reports: Via a new "Expert Advice" section that can appear in AI responses, Google will display "a preview of perspectives from public online discussions, social media and other firsthand sources." In the sample screenshot the company provided, quotes from forums, WordPress blogs and Reddit were arranged above links to their respective sources. Google plans to add more context to these links, too, showing "a creator's name, handle or community name," so you can judge what you might want to click through and read from a glance. Google will also start recommending in-depth articles at the end of AI responses for further exploration of a given topic, and link to more sources directly in its generated answers rather than just at the end. If you subscribe to any publications, AI responses will also highlight sources from the subscriptions you link to your Google account.

Read more of this story at Slashdot.

  •  

Valve Releases Steam Controller CAD Files Under Creative Commons License

✇Slashdot
Author: BeauHD

🤖 AI Summary

Valve has released CAD files for the new Steam Controller and its Puck under a Creative Commons license. "The idea is to let enterprising modders create their own Steam Controller add-ons," reports Digital Foundry. The release includes CAD files for the external shell ("surface topology") of the Controller and Puck, provided in .STP and .STL formats along with engineering diagrams.

Valve has previously released CAD files for its Steam Deck handheld, Valve Index VR suite, and even the original Steam Controller a decade ago, so this release is welcomed but not unexpected. The license permits non-commercial use only and requires attribution and sharing of designs back to the community, though commercial entities interested in manufacturing accessories can contact Valve directly to discuss terms.

You can find the files here.
Valve has released CAD files for the new Steam Controller and its Puck under a Creative Commons license. "The idea is to let enterprising modders create their own Steam Controller add-ons, like skins, charging stands, grip extenders or smartphone mounts," reports Digital Foundry. From the report: The Valve release includes files for the external shell ("surface topology") of the Controller and Puck, with a .STP, .STL and engineering diagram of each device, with the latter showing areas that must remain uncovered to let the device maintain its signal strength and otherwise function as designed. Valve has previously released CAD files for its Steam Deck handheld, Valve Index VR suite and even the original Steam Controller a decade ago, so this release is welcomed but not unexpected. The release is under a fairly restrictive Creative Commons license which allows for non-commercial use and requires attribution and sharing of designs back to the community. However, the license also suggests that commercial entities interested in making accessories for the Steam Controller or its Puck can contact Valve directly to discuss terms. You can find the files here.

Read more of this story at Slashdot.

  •  

Morgan Stanley Undercuts Rivals On Pricing In Crypto Trading Debut

✇Slashdot
Author: BeauHD

🤖 AI Summary

Morgan Stanley is bringing cryptocurrency trading to E*Trade, with a pilot now underway and a broader rollout to the platform's 8.6 million customers planned for later this year. The bank has set a 50-basis-point trading fee that undercuts rivals, betting that traditional finance and decentralized finance (DeFi) will converge.

That compares with fees starting at 95 basis points at Robinhood Markets, 60 basis points at Coinbase Global, and 75 basis points at Charles Schwab. Morgan Stanley's head of wealth management, Jed Finn, told Bloomberg: "This is much bigger than trading crypto at a cheaper rate. In a way, the strategy is disintermediating the disintermediators."

The move reflects the bank's expectation that traditional and decentralized finance will merge, positioning it early in that emerging market.
Morgan Stanley is adding crypto trading to E*Trade, with a pilot now underway and a broader rollout planned for the platform's 8.6 million customers later this year. The bank is reportedly undercutting rivals with a 50-basis-point trading fee as it bets traditional finance and DeFi will converge. "By contrast, Robinhood Markets' (HOOD) fees start at 95 bps, Coinbase Global's (COIN) begins at 60 bps, and Charles Schwab (SCHW) will charge 75 bps," notes Seeking Alpha. Morgan Stanley's head of wealth management, Jed Finn, told Bloomberg: "This is much bigger than trading crypto at a cheaper rate. In a way, the strategy is disintermediating the disintermediators."
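As a quick illustration of the quoted rates (purely arithmetic, not from the article), a basis point is one hundredth of a percent, so each broker's fee translates into dollar costs like this:

```python
def trading_fee(notional: float, bps: float) -> float:
    """Fee in dollars for a trade of `notional` dollars at `bps` basis points.

    One basis point = 0.01% = 1/10,000 of the notional.
    """
    return notional * bps / 10_000

# Fee on a $10,000 crypto trade at each quoted rate:
for broker, bps in [("Morgan Stanley/E*Trade", 50), ("Coinbase", 60),
                    ("Charles Schwab", 75), ("Robinhood", 95)]:
    print(f"{broker}: ${trading_fee(10_000, bps):.2f}")
# Morgan Stanley/E*Trade: $50.00
# Coinbase: $60.00
# Charles Schwab: $75.00
# Robinhood: $95.00
```

On this size of trade, the 45-basis-point gap versus Robinhood works out to $45.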

Read more of this story at Slashdot.

  •  

Claude Managed Agents Can Engage In a 'Dreaming' Process To Preserve Memories

✇Slashdot
Author: BeauHD

🤖 AI Summary

At its Code with Claude developer conference, Anthropic introduced a feature called "dreaming" for Claude Managed Agents. Dreaming is a process in which agents on the Claude Platform review recent events, identify important information worth keeping, and store it in "memory" to inform future tasks and interactions. The feature is currently in research preview and runs as a scheduled process.

Many models already use a process called compaction to manage long conversations: a lengthy conversation is analyzed and irrelevant information is removed from the context window while what matters is kept. Dreaming is a different concept: it periodically reviews past sessions and memory stores across multiple agents, identifies important patterns, and saves them to memory for future use.

Users can choose between a fully automatic process or reviewing changes to memory directly. Managed Agents are a higher-level alternative to building directly on the Messages API, intended for situations where multiple agents work on long-running tasks or projects.

Because LLM context windows are limited and important information can be lost over lengthy projects, this kind of memory curation is a significant addition.
An anonymous reader quotes a report from Ars Technica: At its Code with Claude developers' conference, Anthropic has introduced what it calls "dreaming" to Claude Managed Agents. Dreaming, in this case, is a process of going over recent events and identifying specific things that are worth storing in "memory" to inform future tasks and interactions. Dreaming is a feature that is currently in research preview and limited to Managed Agents on the Claude Platform. Managed Agents are a higher-level alternative to building directly on the Messages API that Anthropic describes as a "pre-built, configurable agent harness that runs in managed infrastructure." It's intended for situations where you want multiple agents working on a task or project to some end point over several minutes or hours. Anthropic describes dreaming as a scheduled process, in which sessions and memory stores are reviewed, and specific memories are curated. This is important because context windows are limited for LLMs, and important information can be lost over lengthy projects. On the chat side of things, many models use a process called compaction, whereby lengthy conversations are periodically analyzed, and the models attempt to remove irrelevant information from the context window while keeping what's actually important for the ongoing conversation, project, or task. However, that process, as I described it, is usually limited to a specific conversation with a single agent. "Dreaming" is a periodically recurring process in which past sessions and memory stores can be analyzed across agents, and important patterns are identified and saved to memory for the future. Users will be able to choose between an automatic process, or reviewing changes to memory directly.
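The distinction above can be sketched with a toy example (my own illustration of the concept, not Anthropic's implementation or API): compaction prunes a single conversation's context, while a dreaming-style pass reviews sessions across agents and promotes recurring facts into a shared memory store.

```python
from collections import Counter

def compact(conversation: list[str], keep: int) -> list[str]:
    """Compaction: shrink ONE conversation's context. This toy version just
    keeps the `keep` most recent messages; real systems score relevance."""
    return conversation[-keep:]

def dream(sessions: list[list[str]], min_occurrences: int = 2) -> list[str]:
    """'Dreaming' (toy version): scan past sessions ACROSS agents and promote
    facts that recur in several sessions to long-term memory."""
    counts = Counter(fact for session in sessions for fact in set(session))
    return [fact for fact, n in counts.items() if n >= min_occurrences]

sessions = [
    ["user prefers metric units", "deploy target is staging"],
    ["user prefers metric units", "bug #42 fixed"],
    ["deploy target is staging", "user prefers metric units"],
]
memory = dream(sessions)
# "user prefers metric units" and "deploy target is staging" recur across
# sessions and survive into memory; the one-off "bug #42 fixed" does not.
```

The key difference the sketch captures is scope: `compact` only ever sees one conversation, while `dream` operates over the whole history of sessions.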

Read more of this story at Slashdot.

  •  

ReactOS Unifies Installation Media, Introduces GUI Installer and New ATA Driver

✇Slashdot
Author: BeauHD

🤖 AI Summary

Developers of ReactOS told Phoronix that the project has introduced a unified BootCD, replacing its previously separate installation media and LiveCD images. The new image combines the traditional text-mode installer and a LiveCD mode in a single medium, and the updated LiveCD mode adds an option to launch a first-stage GUI installer. The graphical interface is intended to be more approachable for new users than the long-standing text-based setup process.

The project has also merged a new ATA driver that has been in development since early 2024. The plug-and-play-aware storage stack supports SATA, PATA, ATAPI, AHCI, and even SCSI devices, potentially expanding the range of hardware on which ReactOS can boot.

Alongside recent improvements to graphics driver support, the project continues to make incremental progress across core subsystems, though its long development timeline remains a point of discussion. Whether these usability and hardware-compatibility improvements will broaden ReactOS adoption beyond its current niche remains to be seen.

The new features are not included in version 0.4.15 but can be tested in the latest nightly builds.
jeditobe writes: Developers of ReactOS told Phoronix that the project has introduced a unified BootCD, replacing its previously separate installation media and LiveCD images. The new image combines the traditional text-mode installer with a LiveCD mode in a single medium. Within this unified BootCD, the updated LiveCD mode now includes an option to launch a first-stage GUI installer. The graphical interface is intended to make installation more approachable for new users compared to the long-standing text-based setup process. In a separate development, the project has also merged a new ATA storage driver that has been in progress since early 2024. The plug-and-play aware storage stack supports SATA, PATA, ATAPI, AHCI, and even SCSI devices, potentially expanding the range of hardware on which ReactOS can successfully boot. Following recent improvements to graphics driver support, the project continues to make incremental progress across core subsystems, though its long development timeline remains a point of discussion. Will these usability and hardware compatibility improvements be enough to broaden ReactOS adoption beyond its current niche? Please note that all new features are not present in version 0.4.15 and are available for testing in the latest nightly test builds.

Read more of this story at Slashdot.

  •  

Zuckerberg 'Personally Authorized and Encouraged' Meta's Copyright Infringement

✇Slashdot
Author: BeauHD

🤖 AI Summary

Key points from the copyright-infringement lawsuit against Meta:

1. Five major publishers and author Scott Turow have sued Meta, alleging that CEO Mark Zuckerberg "personally authorized and actively encouraged" massive copyright infringement by using copyrighted works to train Meta's Llama AI systems.

2. Meta denies wrongdoing and says it will fight the case in court, arguing that courts have recognized AI training on copyrighted material as potentially fair use.

3. The plaintiffs cite Meta's well-known motto, "move fast and break things," alleging that the company illegally torrented books, downloaded unauthorized web scrapes, and copied vast numbers of copyrighted works to train Llama in an effort to win the AI "arms race."

4. The suit was filed in the U.S. District Court for the Southern District of New York and seeks unspecified monetary damages. The plaintiffs also allege that Meta deliberately circumvented copyright-protection mechanisms and had considered licensing the works before abandoning that plan at Zuckerberg's personal instruction.

5. The suit argues that the alleged conduct falls outside the protection of the fair-use provisions of U.S. copyright law.
Five major publishers and author Scott Turow have sued Meta and Mark Zuckerberg, alleging that Zuckerberg "personally authorized and actively encouraged" massive copyright infringement by using pirated books, journal articles, and web-scraped material to train Meta's Llama AI systems. Meta denies wrongdoing and says it will fight the case, arguing that courts have recognized AI training on copyrighted material as potentially fair use. Variety reports: "In their effort to win the AI 'arms race' and build a functional generative AI model, Defendants Meta and Zuckerberg followed their well-known motto: 'move fast and break things,'" the plaintiffs say in their lawsuit. "They first illegally torrented millions of copyrighted books and journal articles from notorious pirate sites and downloaded unauthorized web scrapes of virtually the entire internet. They then copied those stolen fruits many times over to train Meta's multibillion-dollar generative AI system called Llama. In doing so, Defendants engaged in one of the most massive infringements of copyrighted materials in history." The suit was filed Tuesday (May 5) in the U.S. District Court for the Southern District of New York by five publishers (Hachette, Macmillan, McGraw Hill, Elsevier and Cengage) and Turow individually. The proposed class-action suit seeks unspecified monetary damages for the alleged copyright infringement. A copy of the lawsuit is available at this link (PDF). [...] the latest lawsuit alleges that Meta and Zuckerberg deliberately circumvented copyright-protection mechanisms -- and had considered paying to license the works before abandoning that strategy at "Zuckerberg's personal instruction." The suit essentially argues that the conduct described falls outside protections afforded by fair-use provisions of the U.S. copyright code.

Read more of this story at Slashdot.

  •  

Silicon Valley Bets $200 Million On AI Data Centers Floating In the Ocean

✇Slashdot
Author: BeauHD

🤖 AI Summary

According to a report from Ars Technica, Silicon Valley investors such as Palantir co-founder Peter Thiel have bet some $200 million on AI data centers floating in the ocean, a move that coincides with the mounting challenges tech companies face in building AI data centers on land. Panthalassa's latest $140 million round is intended to complete a pilot manufacturing facility near Portland, Oregon, and accelerate deployments of its wave-riding "nodes."

Each node is a huge steel sphere with a tube-like structure extending down beneath the surface; wave motion drives water up the tube into a pressurized reservoir, which is released to spin a turbine that powers onboard AI chips. The company argues that ocean-based compute offers a major cooling advantage because the ambient water temperature is so low. The newest node, Ocean-3, is scheduled for testing in the northern Pacific in 2026 and reaches about 85 meters in length.

Panthalassa has already tested earlier prototypes of its wave energy converter technology, including Ocean-1 and Ocean-2. CEO and co-founder Garth Sheldon-Coulson hopes to eventually deploy thousands of the nodes.
An anonymous reader quotes a report from Ars Technica: Silicon Valley investors such as Palantir co-founder Peter Thiel have bet hundreds of millions of dollars on deploying AI data centers powered by waves in the middle of the world's oceans -- a move that coincides with tech companies facing mounting challenges in building AI data center projects on land. The latest investment round of $140 million is intended to help the company Panthalassa complete a pilot manufacturing facility near Portland, Oregon, and speed up deployments of wave-riding "nodes" designed to generate electrical power, according to a May 4 press release. Instead of sending renewable energy to a land-based data center, the floating nodes would directly power onboard AI chips and transmit inference tokens representing the AI models' outputs to customers worldwide via satellite link. Each node resembles a huge steel sphere bobbing on the water with a tube-like structure extending vertically down beneath the surface. The wave motions drive water upward through the tube into a pressurized reservoir, where it can be released to spin a turbine generator that produces renewable energy for the AI chips on board. Panthalassa claims the node's AI chips would also get cooled using the surrounding water, which could offer another advantage over traditional data centers. "Ocean-based compute might offer a massive cooling advantage because the ambient temperature is so low," Lee said. "Land-based data centers use a lot of electricity and fresh water for cooling." The newest node prototype, called Ocean-3, is scheduled for testing in the northern Pacific Ocean later in 2026. The latest version reaches about 85 meters in length and would stand nearly as tall as London's Big Ben or New York City's Flatiron Building, according to the Financial Times. 
Panthalassa has already tested several earlier prototypes of the wave energy converter technology, including the Ocean-1 in 2021 and the Ocean-2 that underwent a three-week sea trial off the coast of Washington state in February 2024. The company's CEO and co-founder, Garth Sheldon-Coulson, said in a CBS interview that he hopes to eventually deploy thousands of the nodes.

Read more of this story at Slashdot.

  •  

Microsoft Gives Up On Xbox Copilot AI

✇Slashdot
Author: BeauHD

🤖 AI Summary

Microsoft is winding down Xbox Copilot, its gaming-focused AI assistant, retiring the mobile version and ending development of the console version. The decision follows new Xbox CEO Asha Sharma's reorganization of the Xbox platform team, which added executives from Microsoft's CoreAI team to the Xbox side of the company. In a post on X, Sharma said Xbox needs to move faster, deepen its connection with the community, and address friction for both players and developers.

Since taking over from former Microsoft Gaming CEO Phil Spencer in February, Sharma has scrapped the Microsoft Gaming brand itself and cut the price of Xbox Game Pass. As part of this shift, Copilot on mobile will be retired and development of Copilot on console will stop.
Microsoft is winding down Xbox Copilot on mobile and ending development of Copilot on console, reversing plans to bring the gaming-focused AI assistant to current-generation Xbox consoles this year. "The move follows [new Xbox CEO Asha Sharma's] reorganization of the Xbox platform team earlier on Tuesday, which added executives from Microsoft's CoreAI team -- where Sharma worked before taking over Xbox -- to the Xbox side of the company," reports The Verge. Sharma said in a post on X: Xbox needs to move faster, deepen our connection with the community, and address friction for both players and developers. Today, we promoted leaders who helped build Xbox, while also bringing in new voices to help push us forward. This balance is important as we get the business back on track. As part of this shift, you'll see us begin to retire features that don't align with where we're headed. We will begin winding down Copilot on mobile and will stop development of Copilot on console. Since taking over for former Microsoft Gaming CEO Phil Spencer in February, Sharma has scrapped the Microsoft Gaming brand and cut the price of Xbox Game Pass.

Read more of this story at Slashdot.

  •  

White House App Is a Terrifying Security Mess

✇Slashdot
Author: BeauHD

🤖 AI Summary

Serious security problems have come to light in the new White House app. A researcher who pulled the APK apart found vulnerabilities including the following:

1. GPS tracking: the app is set to poll the user's location every 4.5 minutes in the foreground and every 9.5 minutes in the background, syncing it to OneSignal's servers. These location permissions are not declared in the AndroidManifest, but they are hardcoded as runtime requests in the OneSignal SDK.

2. JavaScript from GitHub: the app loads JavaScript for its YouTube embeds from a random personal GitHub account. If that account were ever compromised, arbitrary code could run inside the app's WebView.

3. No SSL certificate pinning: traffic can potentially be intercepted on compromised networks such as sketchy public WiFi or corporate proxies.

4. JavaScript/CSS injection: the in-app browser injects code into every page visited, stripping away cookie consent dialogs, GDPR banners, login walls, and paywalls.

5. Debug artifacts: the production build still contains leftover development URLs, including a localhost URL to the Metro bundler.

Together, these issues pose significant privacy and security risks to the app's users.
New submitter spazmonkey writes: From a hidden GPS tracker polling your location every 4.5 minutes to JavaScript loaded from a random GitHub account, no SSL certificate pinning, and an in-app browser that silently strips cookie consent dialogs and paywalls from every page you visit, the new White House app seems to have a little bit of everything. A security researcher pulled the APK apart to discover the cybersecurity vulnerabilities. "The app is a React Native build using Expo SDK 54, with WordPress powering the backend through a custom REST API," reports Android Headlines. "That's pretty normal, as nearly 42% of all websites on the internet are powered by WordPress. But that's just the start; now the nightmare begins..." From the report: To start, the app has a full GPS tracking pipeline compiled in. Essentially, it's set to poll your location every 4.5 minutes in the foreground, and 9.5 minutes in the background. It's syncing latitude, longitude, accuracy, and timestamp data to OneSignal's servers. These location permissions aren't declared in the AndroidManifest, but they are hardcoded as runtime requests in the OneSignal SDK. Some have noted that the tracking only kicks in if the developer enables it server-side and the user grants permission, but it is there, ready to go. And it gets even stranger. Apparently, the app is loading JavaScript from a random person's GitHub site for YouTube embeds. Yes, you read that right, it's just loading JavaScript from a random GitHub site. So if that account ever gets compromised, arbitrary code could run inside the app's WebView. There's also no SSL certificate pinning, meaning that traffic can potentially be intercepted on compromised networks like sketchy public WiFi or corporate proxies. The app also injects JavaScript and CSS into every page you visit in the in-app browser. This strips away cookie consent dialogs, GDPR banners, login walls, and paywalls. 
There are also leftover dev artifacts in the production build, including a localhost URL to the Metro bundler.
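A back-of-the-envelope calculation (my arithmetic, not the researcher's) shows how much of a location trail those polling intervals could produce in a day:

```python
MINUTES_PER_DAY = 24 * 60  # 1440 minutes

def samples_per_day(interval_minutes: float) -> int:
    """How many location fixes a poller takes per day at the given interval."""
    return int(MINUTES_PER_DAY / interval_minutes)

print(samples_per_day(4.5))  # foreground: 320 fixes/day
print(samples_per_day(9.5))  # background: 151 fixes/day
```

Even the slower background rate yields a fix roughly every ten minutes, enough to reconstruct a user's daily movements in detail.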

Read more of this story at Slashdot.

  •  

CO2 Levels In the Atmosphere Hit 'Depressing' New Record

✇Slashdot
Author: BeauHD

🤖 AI Summary

Atmospheric carbon dioxide at NOAA's Mauna Loa Observatory averaged about 431 parts per million in April, a marked rise from under 320 ppm when measurements began there in 1958. CO2, a greenhouse gas that drives global warming, is measured as a proportion of the total atmosphere: the number of molecules of a particular gas out of a million total molecules, or ppm.

Climate scientist Zachary Labe of Climate Central called the record "depressing" but not unexpected, "just another sign that carbon dioxide continues to increase in our atmosphere as our planet continues to warm." Labe explained that atmospheric CO2 tends to peak in April each year as decaying plants release greenhouse gases after winter, with some of that CO2 reabsorbed by plants as they grow during the warmer months.

NOAA's data nonetheless show a worrying trend, with the average monthly CO2 concentration steadily increasing year over year. U.S. emissions fell in 2023 and 2024, but that trend reversed in 2025, at least partly because of increased electricity demand from AI data centers.

Still, Labe sees reasons for optimism as the use of renewable energy sources such as solar and wind expands.
Atmospheric carbon dioxide hit a new record in April, averaging about 431 parts per million at NOAA's Mauna Loa Observatory. That's up from under 320 ppm when the site began measurements in 1958. Scientific American reports: Greenhouse gases, such as carbon dioxide, are measured as a proportion of the total atmosphere. The numbers are presented as the number of molecules of a particular gas out of a million total molecules, or ppm. Climate scientist Zachary Labe of Climate Central, a nonprofit that researches climate change, says the new record is "depressing" but not unexpected. "It's just another sign that carbon dioxide continues to increase in our atmosphere as our planet continues to warm," he says. "For many climate scientists, this is just 'here it is again, another record in the wrong direction.'" Labe explains that the amount of CO2 in the atmosphere tends to peak in April each year as decaying plants release greenhouse gases after winter. Some of that CO2 gets reabsorbed by plants as they grow during the warmer months. But NOAA's data show a worrying trend, with the average monthly amount of CO2 steadily increasing. [...] Although the amount of CO2 in the atmosphere has continued to rise, there was a reduction in U.S. emissions in 2023 and 2024. That trend, however, was reversed in 2025, at least partially because of the increased electricity demand from artificial intelligence data centers. Still, Labe says there are reasons for optimism as the use of renewable energy sources such as solar and wind expands.
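As a quick check on the reported figures (431 ppm now, roughly 320 ppm at the start of the record; the arithmetic is mine, not the article's), ppm converts to a percentage of the atmosphere like this:

```python
def ppm_to_percent(ppm: float) -> float:
    """Convert parts per million (molecules of a gas per million molecules
    of air) to a percentage of the atmosphere."""
    return ppm / 1_000_000 * 100

# April's record vs. the start of Mauna Loa measurements in 1958:
print(f"{ppm_to_percent(431):.4f}%")           # 0.0431% of the atmosphere
print(f"{ppm_to_percent(320):.4f}%")           # 0.0320%
print(f"{(431 - 320) / 320 * 100:.1f}% rise")  # 34.7% rise since 1958
```

CO2 is a trace gas, which is why the unit is parts per million rather than percent; a roughly one-third increase in that trace concentration is what the 1958-to-present record shows.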

Read more of this story at Slashdot.

  •  

Brockman Rebuts Musk's Take On Startup's History, Recounts Secret Work For Tesla

✇Slashdot
Author: BeauHD

🤖 AI Summary

OpenAI President Greg Brockman concluded his testimony at the trial over Elon Musk's lawsuit against OpenAI, largely rebutting Musk's account of the startup's early years. Brockman testified that he never made any commitments to Musk about the company's corporate structure and never heard anyone else make them, and he emphasized that OpenAI is still governed by a nonprofit. He also revealed that Musk had enlisted several OpenAI employees to do months of free work for him at Tesla.

That work, in 2017, mainly involved helping Tesla's Autopilot team overhaul its approach to developing self-driving technology. Brockman also said that after Musk hired researcher Andrej Karpathy away from OpenAI, Musk approached him with "an apology and a confession"; neither Musk nor Karpathy had told him beforehand that the researcher planned to leave.

Brockman further recounted tense 2017 negotiations over a possible for-profit arm, saying Musk became angry when equity stakes were discussed, tore a painting of a Tesla Model 3 off the wall during a meeting, and began storming out of the room. Brockman said Musk wanted control of OpenAI because he disliked situations where he lacked control, citing Zip2 and SolarCity as examples Musk had raised, and that Musk partly wanted control to help fund his broader SpaceX ambition of building a city on Mars.

The trial resumes Wednesday at 8:30 a.m. PT, when Shivon Zilis, the mother of four of Musk's children and a former OpenAI board member, is expected to testify.
An anonymous reader quotes a report from CNBC: OpenAI President Greg Brockman concluded his testimony on Tuesday, where he largely rebutted Elon Musk's account of the early years of the startup and negotiations that occurred at the company. Brockman testified that he never made any commitments to Musk about the company's corporate structure, and he never heard anyone else make them. He emphasized that OpenAI is still governed by a nonprofit. "This entity remains a nonprofit," Brockman said, referring to the OpenAI foundation. "It is the best-resourced nonprofit in the world." [...] Brockman, who spoke from the witness stand in federal court in Oakland, California, over the course of two days, also revealed that Musk had enlisted several OpenAI employees to do months of free work for him at Tesla, Musk's electric vehicle company. That work mainly included efforts to overhaul the company's approach to developing self-driving technology as part of the Autopilot team there in 2017. During his two days on the stand, Brockman answered questions about his personal financial ambitions, his understanding of OpenAI's structure and Musk's involvement at the company, which they co-founded with other executives in 2015. In Musk's testimony last week, the Tesla and SpaceX CEO said that the time, money and resources he poured into OpenAI had been integral to the company's success. He repeatedly said that he helped recruit the company's top talent. Brockman said Tuesday that while Musk was helpful in convincing some employees to take the leap to join OpenAI, he was a polarizing figure for others. "Elon had a reputation of being an extremely hard driver," Brockman said. He added that "certain candidates were very attracted" by Musk's involvement at OpenAI, and that "certain candidates were very turned off." Musk testified last week that a former OpenAI researcher named Andrej Karpathy joined Tesla, but only after he had planned to leave the startup already. 
Brockman said that Musk, after he hired Karpathy, approached him with "an apology and a confession," about the hire, and that neither Musk nor Karpathy had told him the researcher planned to leave OpenAI before that. Musk was generally not very available for meetings and conversations, Brockman said, so he relied on employees, including Sam Teller and former OpenAI board member Shivon Zilis, as proxies. Brockman testified that open sourcing OpenAI's technology was "not a topic of conversation" during Musk's time with the nonprofit, despite Musk's claims that it was supposed to be central to the organization. He also described tense 2017 negotiations over a possible for-profit arm, saying Musk became angry when equity stakes were discussed. "He said Musk declined the proposal during an in-person meeting, then tore a painting of a Tesla Model 3 car off the wall, and began storming out of the room," reports CNBC. He also demanded to know when the cofounders would leave the company. Brockman further said Musk wanted control of OpenAI because he disliked situations where he lacked control, citing Zip2 and SolarCity as examples Musk had raised. He also testified that Musk partly wanted control to help fund his broader SpaceX ambition of building a "city on Mars." CNBC notes the trial will resume at 8:30 a.m. PT on Wednesday, with Shivon Zilis expected to testify. She is the mother of four of Musk's children and a former OpenAI board member.

Recap:
  • OpenAI President Discloses His Stake In the Company Is Worth $30 Billion (Day Five)
  • Musk Concludes Testimony At OpenAI Trial (Day Four)
  • Elon Musk Says OpenAI Betrayed Him, Clashes With Company's Attorney (Day Three)
  • Musk Testifies OpenAI Was Created As Nonprofit To Counter Google (Day Two)
  • Elon Musk and OpenAI CEO Sam Altman Head To Court (Day One)

Read more of this story at Slashdot.

  •  

Apple Agrees To Pay iPhone Owners $250 Million For Not Delivering AI Siri

✇Slashdot
著者: BeauHD

🤖 AI Summary

Apple has agreed to a proposed settlement of roughly $250 million with iPhone buyers. The settlement responds to a lawsuit alleging that Apple misleadingly advertised the availability of Apple Intelligence and an upgraded Siri to users of the iPhone 16 lineup and iPhone 15 Pro models.

Specifically, it covers U.S. buyers who purchased an iPhone between June 10, 2024, and March 29, 2025. The suit argues that Apple's advertising created a clear consumer expectation that Apple Intelligence features would be available at the launch of the iPhone 16. In practice, the rollout was delayed: some AI features arrived only weeks after release, and the launch of the more personalized Siri was postponed.

Last April, the National Advertising Division recommended that Apple "discontinue or modify" its "available now" claim, and Apple also pulled some of its ads.

With this settlement, Apple resolves the consumer complaints and avoids having the dispute hang over its future business.
Apple has agreed to a proposed $250 million settlement over claims that it misled iPhone buyers about the availability of Apple Intelligence and its upgraded Siri features. The settlement would cover U.S. buyers of the iPhone 16 lineup and iPhone 15 Pro models between June 10, 2024, and March 29, 2025. The Verge reports: The settlement will resolve a 2025 lawsuit, alleging Apple's advertisements created a "clear and reasonable consumer expectation" that Apple Intelligence features would be available with the launch of the iPhone 16. The lawsuit claimed Apple's products "offered a significantly limited or entirely absent version of Apple Intelligence, misleading consumers about its actual utility and performance." Apple brought certain AI-powered features to the iPhone 16 weeks after its release, and delayed the launch of its more personalized Siri, which is now expected to arrive later this year. Last April, the National Advertising Division recommended that Apple "discontinue or modify" its "available now" claim for Apple Intelligence. Apple also pulled an iPhone 16 ad showing actor Bella Ramsey using the AI-upgraded Siri.

Read more of this story at Slashdot.

  •  

Coinbase Lays Off Nearly 700 Workers In 'AI-Native' Restructuring

✇Slashdot
著者: BeauHD

🤖 AI Summary

Coinbase is laying off about 700 people, or 14% of its workforce, in an "AI-native" restructuring. CEO Brian Armstrong says the company aims to become one that moves fast with fewer people, with AI at its core. According to Armstrong, engineers have used AI to finish in days work that previously took weeks, non-technical teams are now shipping production code, and Coinbase is automating many of its workflows.

The company is also making this change because the crypto market is in a downturn. Armstrong calls it "an inflection point, not just for Coinbase, but for every company," arguing that "the biggest risk now is not taking action" and that the company needs to return to the speed and focus of its startup founding.

Coinbase plans to cut management layers, reorganize around "AI-native" talent, and experiment with "one person teams" in which a single person serves as engineer, designer, and product manager.
Coinbase is laying off about 700 workers, or 14% of its workforce, as CEO Brian Armstrong says the company is restructuring to become "lean, fast, and AI-native." Engadget reports: Armstrong claimed he'd seen engineers "use AI to ship in days what used to take a team weeks" and that non-technical teams in the company are "shipping production code," while Coinbase is automating many of its workflows. "All of this has led us to an inflection point, not just for Coinbase, but for every company," Armstrong wrote. "The biggest risk now is not taking action. We are adjusting early and deliberately to rebuild Coinbase to be lean, fast and AI-native. We need to return to the speed and focus of our startup founding, with AI at our core." An AI-driven restructuring is only one half of the equation for Coinbase, though. Armstrong wrote that while the company "is well-capitalized, has diversified revenue streams and is well-positioned to weather any storm," the crypto market is down. As such, Coinbase is attempting to become leaner and faster ahead of the next crypto cycle. The company is eliminating some management layers and organizing the business around "AI-native talent who can manage fleets of agents to drive outsized impact," Armstrong wrote. "We'll also be experimenting with reduced pod sizes, including 'one person teams' with engineers, designers and product managers all in one role." That sure sounds like an attempt to get workers to take on more responsibilities.

Read more of this story at Slashdot.

  •  

Google DeepMind Workers Vote To Unionize Over Military AI Deals

✇Slashdot
著者: BeauHD

🤖 AI Summary

Workers at Google DeepMind's London office have voted to unionize in a bid to block the company from providing its technology to the US and Israeli militaries. The move presses Google to live up to its own ethical standards on AI.

The workers sent a letter to a Google director asking that the Communication Workers Union (CWU) and Unite the Union be recognized as joint representatives. The expectation is that strong collective bargaining will put workers in a much better position to present demands to an increasingly deaf management.

If recognition succeeds, the workers will likely demand that Google pull out of its long-standing contract with the Israeli military, seek transparency over how its AI products are used, and ask for assurances around layoffs made possible by automation. If Google does not engage, they are also considering asking an arbitration committee to compel union recognition.

Since the start of the year, Anthropic and OpenAI have announced large-scale expansions in London. The CWU argues that the DeepMind unionization effort may also be influencing workers at other frontier labs.

Google removed language barring uses such as weapons development and surveillance from its AI ethics guidelines in February 2025. Many employees had bought into the Google DeepMind tagline of building AI responsibly to benefit humanity, but the direction of travel is now seen as further militarization.

An anonymous reader quotes a report from Wired: Employees at Google DeepMind in London have voted to unionize as part of a bid to block the AI lab from providing its technology to the US and Israeli militaries. In a letter addressed to Google's managing director for the UK and Ireland, Debbie Weinstein, the workers asked the company to recognize the Communication Workers Union and Unite the Union as joint representatives for DeepMind employees. "Fundamentally, the push for unionization is about holding Google to its own ethical standards on AI, how they monetize it, what the products do, and who they work with," John Chadfield, national officer for technology at the CWU, tells WIRED. "Through the process of unionization, workers are collectively in a much stronger place to put [demands] to an increasingly deaf management." [...] The DeepMind employee tells WIRED that if the staff succeeds in unionizing in the UK, they will likely demand that Google pulls out of its long-standing contract with the Israeli military, and seek greater transparency over how its AI products will be used, and some sort of assurance relating to layoffs made possible by automation. If Google does not engage, the letter states, the employees will ask an arbitration committee to compel the company to recognize the unions. Since the turn of the year, both Anthropic and OpenAI have announced large-scale expansions of their operations in London. CWU hopes the unionization effort at DeepMind will spur workers at those labs into similar action. "These conversations are happening," claims Chadfield. "The workers at other frontier labs have seen what Google DeepMind workers have done. They've come to us asking for help as well." The unionization push began in February 2025 after Alphabet removed a pledge from its AI ethics guidelines that had barred uses such as weapons development and surveillance. 
"A lot of people here bought into the Google DeepMind tagline 'to build AI responsibly to benefit humanity,'" the DeepMind employee told WIRED. "The direction of travel is to further militarization of the AI models we're building here."

Read more of this story at Slashdot.

  •  

Moving To Mainframe Can Be Cheaper Than Sticking With VMware

✇Slashdot
著者: BeauHD

🤖 AI Summary

URL: https://linux.slashdot.org/story/26/05/05/189237/moving-to-mainframe-can-be-cheaper-than-sticking-with-vmware?utm_source=rss1.0mainlinkanon&utm_medium=feed

Gartner Vice President Analyst Alessandro Galimberti says that for workloads needing years of consistency and backward compatibility, such as mission-critical applications, moving to an IBM mainframe can be more cost-effective than staying with VMware. In particular, for fleets of hundreds of Linux virtual machines and applications that need long-term stability, he suggests a mainframe migration can be economically favorable compared with VMware licensing.

That said, Galimberti does not recommend the mainframe for every application. In his view, it suits mission-critical applications that are unlikely to change much over a decade, as well as Linux applications, since the open source OS runs on IBM's hardware. IBM also offers the z/VM hypervisor, which he says can make Linux even more enterprise-ready.

However, he notes that moving to a mainframe takes time and negotiation, with buyers having to negotiate price and renewal protections rather than prioritizing business value. He also cautions that users may hold back on useful customizations for fear of lock-in, and that up-and-coming IT engineers tend not to choose mainframe careers.

Ultimately, he says the situation may improve as more service providers invest in their mainframe programs.
Gartner says some VMware customers may find it cheaper to move certain Linux VM workloads to IBM mainframes than to adopt Broadcom's new VMware licensing, especially for fleets of hundreds of Linux VMs and mission-critical apps needing long-term stability. The Register reports: Speaking to The Register to discuss the analyst firm's mid-April publication, "The State of the IBM Mainframe in 2026," [Gartner Vice President Analyst Alessandro Galimberti] said some buyers in many fields are comparing mainframes to modern environments and deciding Big Blue's big iron comes out ahead. "I can build a multi-region cloud application, but things like data synchronization and high availability are things I need to build into application logic," he said. "The mainframe has that in the platform, which shields developers from complexity." He also thinks mainframes are ideally suited to workloads that need many years of transactional consistency and backward-compatibility. That said, Galimberti doesn't recommend the mainframe for all applications. He said mission-critical applications that are unlikely to change much for a decade are best-suited to the machines, as are Linux applications because the open source OS runs on IBM's hardware. IBM also offers the z/VM hypervisor, which he says can make Linux "even better and more enterprise-ready." Which is why Galimberti thinks IBM's ecosystem is attractive to VMware users, especially those who operate a fleet of 500 to 700 Linux VMs. [...] Committing to mainframes therefore means planning "to spend time negotiating price and renewal protections, rather than prioritizing the business value these solutions can deliver." Another downside is that mainframes pose clear lock-in risk, so users may hold back on useful customizations out of fear they make it harder to extricate themselves from the platform. Access to skills remains an issue, too, as kids these days mostly don't contemplate a career working with big iron. 
Galimberti sees more service providers investing in their mainframe programs, which might help. So does the availability of Linux.

Read more of this story at Slashdot.

  •  

Kids Bypass Age Verification With Fake Moustaches

✇Slashdot
著者: BeauHD

🤖 AI Summary

Age checks under the UK's Online Safety Act are easy for many children to get past, according to a new survey by Internet Matters. Children reported bypassing verification in a variety of ways, including fake birthdays, other people's ID cards, video game characters, and even drawing a moustache on their face.

Key points:
- In a survey of over 1,000 UK children and their parents by Internet Matters, 46% of children said age checks were easy to pass.
- Children get around the checks with fairly simple methods, such as using a video game character, entering a fake birthday, or borrowing someone else's ID.
- Some parents (17%) actively help their children evade age checks, while others simply turn a blind eye.

Conclusion:
The Online Safety Act's age checks appear to have limited effect, and parents play a decisive role in how well they work.
A new Internet Matters survey suggests the UK's Online Safety Act age checks are easy for many children to bypass. Reported workarounds include fake birthdays, borrowed IDs, video game characters, and even drawing on a fake mustache. The Register reports: The group surveyed over 1,000 UK children and their parents, and while it did report some positive effects from changes made under the OSA, many children saw age verification as an easy-to-bypass hurdle rather than something that kept them genuinely safe. A full 46 percent of children even said that age checks were easy to bypass, while just 17 percent said that they were difficult to fool. The methods kids use to fool age gates vary, but most are pretty simple: There's the classic use of a video game character to fool video selfie systems, while in other instances, children reported just entering a fake birthday or using someone else's ID card when that was required. The report even cites cases of children drawing a mustache on their faces to fool age detection filters. Seriously. While nearly half of UK kids say it's easy to bypass online age checks (and another 17 percent say it's neither hard nor easy), only 32 percent say they've actually bypassed them, according to Internet Matters. Like scoring some booze from "cool" parents, keeping age-gated content out of the hands of kids under the OSA is only as effective as parents let it be, and a quarter of them enable their kids' online delinquency. More specifically, Internet Matters found that a full 17 percent of parents admitted to actively helping their kids evade age checks, while an additional 9 percent simply turned a blind eye to it.

Read more of this story at Slashdot.

  •  

US Government Warns of Severe CopyFail Bug Affecting Major Versions of Linux

✇Slashdot
著者: BeauHD

🤖 AI Summary

The US government has warned that a severe security bug, dubbed "CopyFail," affecting most versions of the Linux operating system is being exploited. According to TechCrunch, the vulnerability allows attackers to take complete control of affected systems. The US cybersecurity agency CISA has ordered all civilian federal agencies to patch affected systems by May 15. The vulnerability is already being exploited in the wild, meaning it is being used in malicious hacking campaigns.
An anonymous reader quotes a report from TechCrunch: A severe security vulnerability affecting almost every version of the Linux operating system has caught defenders off-guard and scrambling to patch after security researchers publicly released exploit code that allows attackers to take complete control of vulnerable systems. The U.S. government says the bug, dubbed "CopyFail," is now being exploited in the wild, meaning it's being actively used in malicious hacking campaigns. [...] Given the risk to the federal enterprise network, U.S. cybersecurity agency CISA has ordered all civilian federal agencies to patch any affected systems by May 15.

Read more of this story at Slashdot.

  •  

Oscars Bans AI Actors and Writing From Awards

✇Slashdot
著者: BeauHD

🤖 AI Summary

The Academy has clarified that acting and writing must be performed and authored by humans to be eligible for Oscar nominations. It has not instituted a blanket ban on AI tools: if AI is used elsewhere in a film, the tools "neither help nor harm" its chances of a nomination. However, achievements will be judged "taking into account the degree to which a human was at the heart of the creative authorship," and if questions arise, the Academy reserves the right to request more information about how AI was used and about human authorship.

The Academy described the rules as a "substantive" change in response to the growing use of AI in the film industry. Eligibility requirements for acting and writing existed before, but spelling out that they must be done by humans is new.
The Academy has clarified that only human-performed acting and human-authored writing are eligible for Oscar nominations. The Oscars will not ban AI tools broadly, but says it will judge films based on the degree to which humans remain central to the creative work. The BBC reports: The Academy of Motion Picture Arts and Sciences [...], which controls the US film industry's most prestigious award, on Friday issued updated rules for what kind of work in movies and documentaries would be considered eligible for an Oscar as the use of artificial intelligence (AI) technology grows. In updated eligibility requirements, the Academy specified that only acting "demonstrably performed by humans" and that writing "must be human-authored" in order to be nominated for an award. The Academy called the requirements a "substantive" change to the rules for the Oscars. The need to specify awards can only go to acting and writing done by "humans" is new for the academy. [...] However, the academy did not issue a ban on AI use in films more broadly. Outside of acting and writing, if a filmmaker used AI tools in their work, such "tools neither help nor harm the chances of achieving a nomination," the academy wrote. "The Academy and each branch will judge the achievement, taking into account the degree to which a human was at the heart of the creative authorship when choosing which movie to award," the group added. "If questions arise regarding the aforementioned use of generative artificial intelligence, the Academy reserves the right to request more information about the nature of the use and human authorship."

Read more of this story at Slashdot.

  •  