Reading View

Single Dose of Magic Mushroom Psychedelic Can Cause Anatomical Brain Changes

Slashdot
Author: BeauHD

🤖 AI Summary

Researchers report that a single dose of psilocybin, the psychedelic compound in magic mushrooms, may produce measurable anatomical changes in the brain. In a study whose senior author is Prof Robin Carhart-Harris, a neurologist at the University of California, San Francisco, specialized scans that measure the diffusion of water along nerve bundles suggested that certain nerve tracts had become denser and more robust one month after participants took 25mg of psilocybin. The scientists noted that the opposite pattern is seen in ageing and dementia.

The study, published in Nature Communications, also found that participants with the largest spike in brain entropy (complexity) after psilocybin were most likely to report deeper psychological insight and better wellbeing a month later, underlining the link between flexible thinking and improved mental health.

"It's remarkable to see potential anatomical brain changes one month after a single dose of any drug," Carhart-Harris said. The findings are preliminary, however: the study involved a small number of participants, and DTI (diffusion tensor imaging) provides only an indirect, limited view of brain connections.

Prof Alex Kwan, a neuroscientist at Cornell University, noted that studies in mice have shown psychedelics can rewire connections between nerves, a form of "plasticity" that could underlie their therapeutic effects, though whether the same occurs in humans remains unknown.
A small study found that a single 25mg dose of psilocybin produced measurable brain changes that were still visible a month later, along with reported improvements in psychological insight, wellbeing, and mental flexibility. The Guardian reports: Evidence for the changes came from specialized scans that measured the diffusion of water along nerve bundles in the brain. They suggested that some nerve tracts had become denser and more robust after the drug was taken. While the findings are preliminary, the scientists said the opposite was seen in ageing and dementia. "It's remarkable to see potential anatomical brain changes one month after a single dose of any drug," said Prof Robin Carhart-Harris, a neurologist at the University of California, San Francisco, and senior author on the study. "We don't yet know what these changes mean, but we do note that overall, people showed positive psychological changes in this study, including improved wellbeing and mental flexibility." [...] Writing in Nature Communications, the researchers describe another key finding. Those who had the largest spike in brain entropy after psilocybin were most likely to report deeper psychological insight and better wellbeing a month later, underlining the link between flexible thinking and improved mental health. "It suggests a psychobiological therapeutic action for psilocybin," said Carhart-Harris. Prof Alex Kwan, a neuroscientist at Cornell University in New York, said studies in mice had shown that psychedelics can rewire connections between nerves, a form of "plasticity" that could underlie their therapeutic effects. The big question is whether the same occurs in humans. "This study comes closer than most to addressing that question, by giving evidence of lasting changes in brain structure after psychedelic use," he said. But while the results were "exciting," the study involved a small number of people and DTI provides an indirect and limited view of brain connections, he said.
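The "brain entropy" finding refers to a complexity measure over neural signals. As a purely illustrative sketch (the study's actual metric is not described here), Shannon entropy over a discretized signal shows the basic idea: a monotonous signal has low entropy, a varied one high entropy.

```python
# Illustrative only: Shannon entropy of a discretized signal, a toy stand-in
# for the "brain entropy" (signal complexity) measure discussed above.
from collections import Counter
from math import log2

def shannon_entropy(symbols: list) -> float:
    """Entropy in bits of the empirical distribution of `symbols`."""
    counts = Counter(symbols)
    n = len(symbols)
    return -sum((c / n) * log2(c / n) for c in counts.values())

flat = [0, 0, 0, 0, 0, 0, 0, 0]    # one repeated state: 0 bits
varied = [0, 1, 2, 3, 0, 1, 2, 3]  # uniform over 4 states: 2 bits
print(shannon_entropy(flat), shannon_entropy(varied))
```

A flat signal carries no information, while the uniformly varied one reaches the maximum for four states; real entropy measures over brain recordings generalize this idea to richer signal statistics.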

Read more of this story at Slashdot.


Sam Altman's Management Style Comes Under the Microscope At OpenAI Trial

Slashdot
Author: BeauHD

🤖 AI Summary

Sam Altman's management style drew renewed scrutiny on the seventh day of Elon Musk's OpenAI lawsuit, as former OpenAI figures Mira Murati, Shivon Zilis, and Helen Toner testified about his "difficult and chaotic" approach.

- Mira Murati said Altman struggled to make decisions on big controversial issues, and that she was concerned about him "saying one thing to one person and a completely different thing to another person."
- Shivon Zilis testified that the board was not adequately informed before the release of ChatGPT, and that she was uneasy about a potential deal with a nuclear energy startup because both Altman and Greg Brockman were investors in it.
- Helen Toner explained that the board's 2023 decision to remove Altman stemmed in part from concerns about his honesty and candor, his resistance to board oversight, and issues his own management team raised about his management practices.

The testimony revived the criticisms that surfaced when Altman was briefly ousted as CEO in 2023, focusing on the "chaos" his management style created; Murati said she nonetheless supported his return because the company "was at catastrophic risk of falling apart."
Sam Altman's management style came under scrutiny on the seventh day of Elon Musk's high-stakes OpenAI trial, as former OpenAI figures Mira Murati, Shivon Zilis, and Helen Toner took the stand to testify about their experiences working with him. Their testimony resurfaced many of the criticisms that first emerged during Altman's brief ouster as CEO in 2023. An anonymous reader quotes a report from Business Insider: The first witness was Mira Murati, OpenAI's former chief technology officer and now founder of her own AI shop, Thinking Machines Lab. Jurors watched a recorded video deposition of Murati, who was also OpenAI's interim CEO after the board briefly ousted Sam Altman. Murati's testimony focused on her concerns about Altman's "difficult and chaotic" management style. She said Altman had trouble "making decisions on big controversial things." He also had a habit of telling people what they wanted to hear. "My concern was about Sam saying one thing to one person and a completely different thing to another person, and that makes it a very difficult and chaotic environment to work with," said Murati. Murati said that her issue with Altman was not about safety, "it is about Sam creating chaos." She said she supported Altman's return to OpenAI because the company "was at catastrophic risk of falling apart" at the time of his ousting. "I was concerned about the company completely blowing up." Zilis said she was upset that Altman rolled out ChatGPT without involving the board. "It wasn't just me but the entire board raised concern about that whole thing happening without any board communication," she said. Zilis said she was also concerned about a potential OpenAI deal with a nuclear energy startup called Helion Energy because both Altman and Greg Brockman were investors. Although the executives had disclosed the investment to the board, Zilis said the deal talk made her uneasy. It "felt super out of left field," she said. 
"How is it the case that we want to place a major bet on a speculative technology?" In a video deposition, Helen Toner, a former member of OpenAI's board who resigned in 2023, said she first became aware of ChatGPT's release when an OpenAI employee asked another board member whether the board was aware of the development. [...] Toner also elaborated on why the board, including herself, voted to remove Altman as CEO in 2023. "There were a number of things -- the pattern of behavior related to his honesty and candor, his resistance of board oversight, as well as the concerns that two of his inner management team raised to the board about his management practices, his manipulation of board processes," said Toner.

Recap:
- Brockman Rebuts Musk's Take On Startup's History, Recounts Secret Work For Tesla (Day Six)
- OpenAI President Discloses His Stake In the Company Is Worth $30 Billion (Day Five)
- Musk Concludes Testimony At OpenAI Trial (Day Four)
- Elon Musk Says OpenAI Betrayed Him, Clashes With Company's Attorney (Day Three)
- Musk Testifies OpenAI Was Created As Nonprofit To Counter Google (Day Two)
- Elon Musk and OpenAI CEO Sam Altman Head To Court (Day One)


Google's AI Search Results Will Now Turn To Reddit For 'Expert Advice'

Slashdot
Author: BeauHD

🤖 AI Summary

Google is updating AI Overviews and AI Mode to surface "Expert Advice" drawn from public discussions, social platforms, forums, blogs, and Reddit. The new "Expert Advice" section can appear within Google's generated answers; in the example Google provided, quotes from forums, WordPress blogs, and Reddit were shown with links to their sources.

Google will also start recommending in-depth articles at the end of AI responses and linking to sources directly within generated answers. In addition, relevant sources from publications you subscribe to may be highlighted if those subscriptions are linked to your Google account.
Google is updating AI Overviews and AI Mode to more prominently surface "Expert Advice" from public discussions, social platforms, forums, blogs, and Reddit. Engadget reports: Via a new "Expert Advice" section that can appear in AI responses, Google will display "a preview of perspectives from public online discussions, social media and other firsthand sources." In the sample screenshot the company provided, quotes from forums, WordPress blogs and Reddit were arranged above links to their respective sources. Google plans to add more context to these links, too, showing "a creator's name, handle or community name," so you can judge what you might want to click through and read from a glance. Google will also start recommending in-depth articles at the end of AI responses for further exploration of a given topic, and link to more sources directly in its generated answers rather than just at the end. If you subscribe to any publications, AI responses will also highlight sources from the subscriptions you link to your Google account.


Valve Releases Steam Controller CAD Files Under Creative Commons License

Slashdot
Author: BeauHD

🤖 AI Summary

Valve has released CAD files for the new Steam Controller and its Puck under a Creative Commons license. The idea is to let enterprising modders create their own accessories, such as skins, charging stands, grip extenders, and smartphone mounts. Valve has previously released CAD files for the Steam Deck, the Valve Index VR suite, and even the original Steam Controller.

The license is limited to non-commercial use and requires attribution and sharing designs back to the community. Commercial entities interested in making Steam Controller or Puck accessories can, however, contact Valve directly to negotiate terms.

The files can be downloaded here.
Valve has released CAD files for the new Steam Controller and its Puck under a Creative Commons license. "The idea is to let enterprising modders create their own Steam Controller add-ons, like skins, charging stands, grip extenders or smartphone mounts," reports Digital Foundry. From the report: The Valve release includes files for the external shell ("surface topology") of the Controller and Puck, with a .STP, .STL and engineering diagram of each device, with the latter showing areas that must remain uncovered to let the device maintain its signal strength and otherwise function as designed. Valve has previously released CAD files for its Steam Deck handheld, Valve Index VR suite and even the original Steam Controller a decade ago, so this release is welcomed but not unexpected. The release is under a fairly restrictive Creative Commons license which allows for non-commercial use and requires attribution and sharing of designs back to the community. However, the license also suggests that commercial entities interested in making accessories for the Steam Controller or its Puck can contact Valve directly to discuss terms. You can find the files here.


Morgan Stanley Undercuts Rivals On Pricing In Crypto Trading Debut

Slashdot
Author: BeauHD

🤖 AI Summary

Morgan Stanley is adding crypto trading to E*Trade; a pilot is underway, with a broader rollout to the platform's 8.6 million customers planned for later this year. Betting that traditional finance and DeFi (decentralized finance) will converge, the bank is undercutting rivals with a 50-basis-point trading fee.

By comparison, Robinhood Markets (HOOD) charges fees starting at 95 basis points, Coinbase Global (COIN) starts at 60, and Charles Schwab (SCHW) will charge 75.

Jed Finn, Morgan Stanley's head of wealth management, said: "This is much bigger than trading crypto at a cheaper rate. In a way, the strategy is disintermediating the disintermediators."
Morgan Stanley is adding crypto trading to E*Trade, with a pilot now underway and a broader rollout planned for the platform's 8.6 million customers later this year. The bank is reportedly undercutting rivals with a 50-basis-point trading fee as it bets traditional finance and DeFi will converge. "By contrast, Robinhood Markets' (HOOD) fees start at 95 bps, Coinbase Global's (COIN) begins at 60 bps, and Charles Schwab (SCHW) will charge 75 bps," notes Seeking Alpha. Morgan Stanley's head of wealth management, Jed Finn, told Bloomberg: "This is much bigger than trading crypto at a cheaper rate. In a way, the strategy is disintermediating the disintermediators."
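Since all the quoted fees are in basis points (1 bp = 0.01%), the comparison reduces to one line of arithmetic. A minimal sketch, with the $10,000 trade size chosen purely for illustration:

```python
# A basis point (bp) is one hundredth of a percent, so a 50 bp fee is 0.50%
# of the traded amount. The broker rates mirror the figures quoted above;
# the $10,000 trade size is illustrative.
def fee(trade_usd: float, bps: float) -> float:
    """Fee in dollars for a trade charged at `bps` basis points."""
    return trade_usd * bps / 10_000

brokers = {"Morgan Stanley": 50, "Coinbase": 60, "Schwab": 75, "Robinhood": 95}
for name, rate in sorted(brokers.items(), key=lambda kv: kv[1]):
    print(f"{name}: ${fee(10_000, rate):,.2f} on a $10,000 trade")
```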


Claude Managed Agents Can Engage In a 'Dreaming' Process To Preserve Memories

Slashdot
Author: BeauHD

🤖 AI Summary

At its Code with Claude developer event, Anthropic introduced a feature called "dreaming" for Claude Managed Agents: a process of reviewing recent events and identifying information worth committing to memory. The feature is currently in research preview and limited to Managed Agents on the Claude Platform.

Dreaming is a scheduled process that reviews sessions and memory stores and curates specific memories. Because LLM context windows are limited, important information can otherwise be lost over the course of lengthy projects. On the chat side, many models use a process called compaction, which periodically analyzes long conversations, removing irrelevant information from the context window while keeping what matters for the ongoing task.

Compaction, however, is usually limited to a single agent in a specific conversation, whereas dreaming analyzes past sessions and memory stores across multiple agents, identifying important patterns and saving them to memory for future use. Users will be able to choose between an automatic process and reviewing memory changes directly.

The feature should help preserve important information in scenarios where multiple agents collaborate on a task over hours or days.
An anonymous reader quotes a report from Ars Technica: At its Code with Claude developers' conference, Anthropic has introduced what it calls "dreaming" to Claude Managed Agents. Dreaming, in this case, is a process of going over recent events and identifying specific things that are worth storing in "memory" to inform future tasks and interactions. Dreaming is a feature that is currently in research preview and limited to Managed Agents on the Claude Platform. Managed Agents are a higher-level alternative to building directly on the Messages API that Anthropic describes as a "pre-built, configurable agent harness that runs in managed infrastructure." It's intended for situations where you want multiple agents working on a task or project to some end point over several minutes or hours. Anthropic describes dreaming as a scheduled process, in which sessions and memory stores are reviewed, and specific memories are curated. This is important because context windows are limited for LLMs, and important information can be lost over lengthy projects. On the chat side of things, many models use a process called compaction, whereby lengthy conversations are periodically analyzed, and the models attempt to remove irrelevant information from the context window while keeping what's actually important for the ongoing conversation, project, or task. However, that process, as I described it, is usually limited to a specific conversation with a single agent. "Dreaming" is a periodically recurring process in which past sessions and memory stores can be analyzed across agents, and important patterns are identified and saved to memory for the future. Users will be able to choose between an automatic process, or reviewing changes to memory directly.
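The curation step described above can be pictured as a periodic rank-and-retain pass. The following is a hypothetical sketch, not Anthropic's implementation: events carry an importance score (in practice assigned by a model), and only the top few are merged into a bounded long-term store.

```python
# Hypothetical "dreaming"-style curation pass (illustrative, NOT Anthropic's
# actual implementation): periodically rank recent events and keep only the
# most important ones in a bounded long-term memory store.
from dataclasses import dataclass

@dataclass
class Event:
    text: str
    importance: float  # assume an LLM or heuristic assigned this score

def dream(events: list[Event], memory: list[str], capacity: int = 2) -> list[str]:
    """Merge the `capacity` most important recent events into long-term memory."""
    ranked = sorted(events, key=lambda e: e.importance, reverse=True)
    curated = [e.text for e in ranked[:capacity]]
    # Keep existing memories and append new ones, skipping duplicates.
    return memory + [m for m in curated if m not in memory]

memory = dream(
    [
        Event("user prefers Rust", 0.9),
        Event("smalltalk about weather", 0.1),
        Event("deploy target is ARM", 0.8),
    ],
    memory=[],
)
print(memory)  # the two high-importance facts survive; the smalltalk does not
```

Compaction in a single conversation works analogously, except the "store" is the context window itself rather than a cross-session memory.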


ReactOS Unifies Installation Media, Introduces GUI Installer and New ATA Driver

Slashdot
Author: BeauHD

🤖 AI Summary

ReactOS has unified its installation media and introduced a GUI installer and a new ATA driver. According to the project's developers, the new BootCD combines the traditional text-mode installer and the LiveCD mode in a single image. The updated LiveCD mode includes an option to launch a first-stage GUI installer, making installation more approachable for new users.

The project has also merged a new ATA driver that has been in development since early 2024. The plug-and-play-aware storage stack supports SATA, PATA, ATAPI, AHCI, and even SCSI devices, expanding the range of hardware on which ReactOS can boot.

Together with recent improvements to graphics driver support, the project continues to make incremental progress across core subsystems, though its long development timeline remains a point of discussion, and whether these usability and hardware-compatibility improvements will be enough to broaden ReactOS adoption beyond its current niche is an open question.

Note that none of the new features are in version 0.4.15; they are available for testing in the latest nightly builds.
jeditobe writes: Developers of ReactOS told Phoronix that the project has introduced a unified BootCD, replacing its previously separate installation media and LiveCD images. The new image combines the traditional text-mode installer with a LiveCD mode in a single medium. Within this unified BootCD, the updated LiveCD mode now includes an option to launch a first-stage GUI installer. The graphical interface is intended to make installation more approachable for new users compared to the long-standing text-based setup process. In a separate development, the project has also merged a new ATA storage driver that has been in progress since early 2024. The plug-and-play aware storage stack supports SATA, PATA, ATAPI, AHCI, and even SCSI devices, potentially expanding the range of hardware on which ReactOS can successfully boot. Following recent improvements to graphics driver support, the project continues to make incremental progress across core subsystems, though its long development timeline remains a point of discussion. Will these usability and hardware compatibility improvements be enough to broaden ReactOS adoption beyond its current niche? Please note that none of the new features are present in version 0.4.15; they are available for testing in the latest nightly test builds.


Zuckerberg 'Personally Authorized and Encouraged' Meta's Copyright Infringement

Slashdot
Author: BeauHD

🤖 AI Summary

Five major publishers and author Scott Turow have sued Meta and Mark Zuckerberg, alleging that Zuckerberg "personally authorized and actively encouraged" massive copyright infringement. The suit claims that although Meta considered paying to license the works, it ultimately circumvented copyright-protection mechanisms at "Zuckerberg's personal instruction."

Meta denies wrongdoing and says it will fight the case, arguing that courts have recognized AI training on copyrighted material as potentially fair use.

The plaintiffs allege that Meta used vast quantities of pirated books, journal articles, and web-scraped data to build its Llama AI systems, calling the conduct one of the most massive infringements of copyrighted materials in history.

The proposed class-action suit was filed Tuesday, May 5, in the U.S. District Court for the Southern District of New York and seeks unspecified monetary damages.
Five major publishers and author Scott Turow have sued Meta and Mark Zuckerberg, alleging that Zuckerberg "personally authorized and actively encouraged" massive copyright infringement by using pirated books, journal articles, and web-scraped material to train Meta's Llama AI systems. Meta denies wrongdoing and says it will fight the case, arguing that courts have recognized AI training on copyrighted material as potentially fair use. Variety reports: "In their effort to win the AI 'arms race' and build a functional generative AI model, Defendants Meta and Zuckerberg followed their well-known motto: 'move fast and break things,'" the plaintiffs say in their lawsuit. "They first illegally torrented millions of copyrighted books and journal articles from notorious pirate sites and downloaded unauthorized web scrapes of virtually the entire internet. They then copied those stolen fruits many times over to train Meta's multibillion-dollar generative AI system called Llama. In doing so, Defendants engaged in one of the most massive infringements of copyrighted materials in history." The suit was filed Tuesday (May 5) in the U.S. District Court for the Southern District of New York by five publishers (Hachette, Macmillan, McGraw Hill, Elsevier and Cengage) and Turow individually. The proposed class-action suit seeks unspecific monetary damages for the alleged copyright infringement. A copy of the lawsuit is available at this link (PDF). [...] the latest lawsuit alleges that Meta and Zuckerberg deliberately circumvented copyright-protection mechanisms -- and had considered paying to license the works before abandoning that strategy at "Zuckerberg's personal instruction." The suit essentially argues that the conduct described falls outside protections afforded by fair-use provisions of the U.S. copyright code.


Silicon Valley Bets $200 Million On AI Data Centers Floating In the Ocean

Slashdot
Author: BeauHD

🤖 AI Summary

Silicon Valley investors, including Palantir co-founder Peter Thiel, have bet hundreds of millions of dollars on AI data centers powered by waves in the open ocean, a move that comes as building AI data centers on land grows more difficult.

According to a May 4 press release, the latest $140 million investment round is intended to help Panthalassa complete a pilot manufacturing facility near Portland, Oregon, and accelerate deployment of its floating "nodes." Rather than sending power ashore, each node uses wave motion to drive water up into a pressurized reservoir; releasing that water spins a turbine generator whose renewable output directly powers onboard AI chips, with inference results transmitted to customers worldwide via satellite.

Each node is about 85 meters long, standing nearly as tall as London's Big Ben or New York City's Flatiron Building. Earlier prototypes include Ocean-1 in 2021 and Ocean-2, which underwent a three-week sea trial off the coast of Washington state in February 2024.

Panthalassa CEO Garth Sheldon-Coulson said in a CBS interview that he hopes to eventually deploy thousands of nodes, and the company argues that ocean-based computing also offers a cooling advantage because the surrounding water is so cold.

The newest prototype, Ocean-3, is scheduled for testing in the northern Pacific later in 2026.
An anonymous reader quotes a report from Ars Technica: Silicon Valley investors such as Palantir co-founder Peter Thiel have bet hundreds of millions of dollars on deploying AI data centers powered by waves in the middle of the world's oceans -- a move that coincides with tech companies facing mounting challenges in building AI data center projects on land. The latest investment round of $140 million is intended to help the company Panthalassa complete a pilot manufacturing facility near Portland, Oregon, and speed up deployments of wave-riding "nodes" designed to generate electrical power, according to a May 4 press release. Instead of sending renewable energy to a land-based data center, the floating nodes would directly power onboard AI chips and transmit inference tokens representing the AI models' outputs to customers worldwide via satellite link. Each node resembles a huge steel sphere bobbing on the water with a tube-like structure extending vertically down beneath the surface. The wave motions drive water upward through the tube into a pressurized reservoir, where it can be released to spin a turbine generator that produces renewable energy for the AI chips on board. Panthalassa claims the node's AI chips would also get cooled using the surrounding water, which could offer another advantage over traditional data centers. "Ocean-based compute might offer a massive cooling advantage because the ambient temperature is so low," Lee said. "Land-based data centers use a lot of electricity and fresh water for cooling." The newest node prototype, called Ocean-3, is scheduled for testing in the northern Pacific Ocean later in 2026. The latest version reaches about 85 meters in length and would stand nearly as tall as London's Big Ben or New York City's Flatiron Building, according to the Financial Times. 
Panthalassa has already tested several earlier prototypes of the wave energy converter technology, including the Ocean-1 in 2021 and the Ocean-2 that underwent a three-week sea trial off the coast of Washington state in February 2024. The company's CEO and co-founder, Garth Sheldon-Coulson, said in a CBS interview that he hopes to eventually deploy thousands of the nodes.
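The pressurized-reservoir-and-turbine arrangement described above can be sized with the standard hydraulic power relation P = ρgQh. A back-of-the-envelope sketch, with flow rate, head, and efficiency as purely illustrative assumptions (not Panthalassa figures):

```python
# Back-of-the-envelope estimate of the turbine stage described above, using the
# standard hydraulic power relation P = rho * g * Q * h. The flow rate, head,
# and efficiency are illustrative assumptions, not Panthalassa specifications.
RHO_SEAWATER = 1025.0  # kg/m^3
G = 9.81               # m/s^2

def hydraulic_power_kw(flow_m3_s: float, head_m: float, efficiency: float = 0.85) -> float:
    """Electrical power (kW) from water released at `flow_m3_s` across head `head_m`."""
    return RHO_SEAWATER * G * flow_m3_s * head_m * efficiency / 1000.0

# e.g. releasing 0.5 m^3/s of stored seawater across a 40 m equivalent head:
print(f"{hydraulic_power_kw(0.5, 40.0):.0f} kW")
```

The real system's output would depend on wave climate and converter efficiency, but the relation shows why a tall pressurized column (large h) matters.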


Microsoft Gives Up On Xbox Copilot AI

Slashdot
Author: BeauHD

🤖 AI Summary

Microsoft is winding down Xbox Copilot, its gaming-focused AI assistant, on mobile and ending its development on console. The decision follows new Xbox CEO Asha Sharma's reorganization of the Xbox platform team earlier in the week, which brought executives from Microsoft's CoreAI team, where Sharma worked before taking over Xbox, into the Xbox organization.

"Xbox needs to move faster, deepen our connection with the community, and address friction for both players and developers," Sharma said in a statement, adding that promoting leaders who built Xbox while bringing in new voices is "important as we get the business back on track."

Since taking over from former Microsoft Gaming CEO Phil Spencer in February, Sharma has scrapped the Microsoft Gaming brand and cut the price of Xbox Game Pass.
Microsoft is winding down Xbox Copilot on mobile and ending development of Copilot on console, reversing plans to bring the gaming-focused AI assistant to current-generation Xbox consoles this year. "The move follows [new Xbox CEO Asha Sharma's] reorganization of the Xbox platform team earlier on Tuesday, which added executives from Microsoft's CoreAI team -- where Sharma worked before taking over Xbox -- to the Xbox side of the company," reports The Verge. Sharma said in a post on X: Xbox needs to move faster, deepen our connection with the community, and address friction for both players and developers. Today, we promoted leaders who helped build Xbox, while also bringing in new voices to help push us forward. This balance is important as we get the business back on track. As part of this shift, you'll see us begin to retire features that don't align with where we're headed. We will begin winding down Copilot on mobile and will stop development of Copilot on console. Since taking over for former Microsoft Gaming CEO Phil Spencer in February, Sharma has scrapped the Microsoft Gaming brand and cut the price of Xbox Game Pass.


White House App Is a Terrifying Security Mess

Slashdot
Author: BeauHD

🤖 AI Summary

According to new submitter spazmonkey, the White House's new app has a wide range of security problems, from GPS tracking to JavaScript loaded from a random GitHub account. A security researcher who pulled the APK apart found the following:

1. A full GPS tracking pipeline is compiled in: the app is set to poll the user's location every 4.5 minutes in the foreground and every 9.5 minutes in the background, syncing it to OneSignal's servers.
2. The location permissions are not declared in the AndroidManifest, but are hardcoded as runtime requests in the OneSignal SDK.
3. For YouTube embeds, the app loads JavaScript from a random GitHub account; if that account were ever compromised, arbitrary code could run inside the app's WebView.
4. There is no SSL certificate pinning, so traffic can potentially be intercepted on compromised networks.
5. The in-app browser injects JavaScript and CSS into every page visited, stripping away cookie consent dialogs, GDPR banners, login walls, and paywalls.
6. The production build contains leftover development artifacts, including a localhost URL for the Metro bundler.

Together, these issues leave the app with serious security risks.
New submitter spazmonkey writes: From a hidden GPS tracker polling your location every 4.5 minutes to JavaScript loaded from a random GitHub account, no SSL certificate pinning, and an in-app browser that silently strips cookie consent dialogs and paywalls from every page you visit, the new White House app seems to have a little bit of everything. A security researcher pulled the APK apart to discover the cybersecurity vulnerabilities. "The app is a React Native build using Expo SDK 54, with WordPress powering the backend through a custom REST API," reports Android Headlines. "That's pretty normal, as nearly 42% of all websites on the internet are powered by WordPress. But that's just the start; now the nightmare begins..." From the report: To start, the app has a full GPS tracking pipeline compiled in. Essentially, it's set to poll your location every 4.5 minutes in the foreground, and 9.5 minutes in the background. It's syncing latitude, longitude, accuracy, and timestamp data to OneSignal's servers. These location permissions aren't declared in the AndroidManifest, but they are hardcoded as runtime requests in the OneSignal SDK. Some have noted that the tracking only kicks in if the developer enables it server-side and the user grants permission, but it is there, ready to go. And it gets even stranger. Apparently, the app is loading JavaScript from a random person's GitHub site for YouTube embeds. Yes, you read that right, it's just loading JavaScript from a random GitHub site. So if that account ever gets compromised, arbitrary code could run inside the app's WebView. There's also no SSL certificate pinning, meaning that traffic can potentially be intercepted on compromised networks like sketchy public WiFi or corporate proxies. The app also injects JavaScript and CSS into every page you visit in the in-app browser. This strips away cookie consent dialogs, GDPR banners, login walls, and paywalls. 
There's also leftover dev artifacts in the production build, including a localhost URL to the Metro bundler.
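To make the pinning point concrete, here is a hedged sketch of the TLS certificate pinning the app reportedly lacks. Beyond normal CA validation, a pinning client also checks that the server certificate's bytes hash to a known SHA-256 fingerprint, so a certificate minted by an interposing proxy is rejected; the certificate bytes below are placeholders, not real certificates.

```python
# Sketch of certificate pinning: compare the SHA-256 hash of the server's
# DER-encoded certificate against a fingerprint shipped inside the app.
# The byte strings here are placeholders standing in for real DER certs.
import hashlib

def cert_matches_pin(der_cert: bytes, pinned_sha256_hex: str) -> bool:
    """Return True only if the presented certificate matches the pinned hash."""
    return hashlib.sha256(der_cert).hexdigest() == pinned_sha256_hex.lower()

expected_cert = b"...DER bytes of the legitimate server certificate..."
pin = hashlib.sha256(expected_cert).hexdigest()  # computed once, shipped in-app

assert cert_matches_pin(expected_cert, pin)            # real server: accepted
assert not cert_matches_pin(b"...mitm cert...", pin)   # interposed cert: rejected
```

In a real Python client the DER bytes would come from `ssl.SSLSocket.getpeercert(binary_form=True)` before any application data is sent; on Android the same role is typically played by a network security config or a pinned trust manager.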


CO2 Levels In the Atmosphere Hit 'Depressing' New Record

Slashdot
Author: BeauHD

🤖 AI Summary

Atmospheric carbon dioxide measured at NOAA's Mauna Loa Observatory averaged about 431 parts per million in April, the highest value since measurements began in 1958, when levels were under 320 ppm.

Climate scientist Zachary Labe of Climate Central called the new record "depressing" but not unexpected. "It's just another sign that carbon dioxide continues to increase in our atmosphere as our planet continues to warm," he said. "For many climate scientists, this is just another record in the wrong direction."

Labe explains that atmospheric CO2 tends to peak each April as decaying plants release greenhouse gases after winter; some of that CO2 is reabsorbed as plants grow during the warmer months. NOAA's data nonetheless show the average monthly CO2 level steadily rising.

U.S. emissions fell in 2023 and 2024, but that trend reversed in 2025, at least partly because of increased electricity demand from artificial intelligence data centers. Even so, Labe sees reason for optimism in the expanding use of renewable energy sources such as solar and wind.
Atmospheric carbon dioxide hit a new record in April, averaging about 431 parts per million at NOAA's Mauna Loa Observatory. That's up from under 320 ppm when the site began measurements in 1958. Scientific American reports: Greenhouse gases, such as carbon dioxide, are measured as a proportion of the total atmosphere. The numbers are presented as the number of molecules of a particular gas out of a million total molecules, or ppm. Climate scientist Zachary Labe of Climate Central, a nonprofit that researches climate change, says the new record is "depressing" but not unexpected. "It's just another sign that carbon dioxide continues to increase in our atmosphere as our planet continues to warm," he says. "For many climate scientists, this is just 'here it is again, another record in the wrong direction.'" Labe explains that the amount of CO2 in the atmosphere tends to peak in April each year as decaying plants release greenhouse gases after winter. Some of that CO2 gets reabsorbed by plants as they grow during the warmer months. But NOAA's data show a worrying trend, with the average monthly amount of CO2 steadily increasing. [...] Although the amount of CO2 in the atmosphere has continued to rise, there was a reduction in U.S. emissions in 2023 and 2024. That trend, however, was reversed in 2025, at least partially because of the increased electricity demand from artificial intelligence data centers. Still, Labe says there are reasons for optimism as the use of renewable energy sources such as solar and wind expands.
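The ppm bookkeeping the article describes is a simple mole fraction — molecules of CO2 per million molecules of air. A tiny sketch of the conversion:

```python
# "Parts per million" is a mole fraction: molecules of CO2 per million
# molecules of air. Converting a reading to a percentage is one division.
def ppm_to_percent(ppm: float) -> float:
    """Express a ppm mole fraction as a percentage of all molecules."""
    return ppm / 1_000_000 * 100

print(f"{ppm_to_percent(431):.4f}%")  # the April reading: 0.0431% of molecules
print(f"{ppm_to_percent(320):.4f}%")  # roughly the 1958 starting level
```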


Brockman Rebuts Musk's Take On Startup's History, Recounts Secret Work For Tesla

Slashdot
Author: BeauHD

🤖 AI Summary

Key points from the article:

1. OpenAI President Greg Brockman concluded his testimony, largely rebutting Tesla CEO Elon Musk's account of OpenAI's early years.
2. Brockman denied that anyone made commitments to Musk about the company's corporate structure, emphasizing that "this entity remains a nonprofit" and calling it "the best-resourced nonprofit in the world."
3. He also revealed that Musk had enlisted several OpenAI employees to do months of free work for Tesla, mainly on the Autopilot team's self-driving effort in 2017.
4. Brockman pushed back on the idea that Musk's investment and recruiting were uniformly helpful: "Elon had a reputation of being an extremely hard driver," he said, noting that some candidates were attracted by Musk's involvement while others were turned off.
5. On Musk's claim that open-sourcing OpenAI's technology was supposed to be central to the organization, Brockman testified that it was "not a topic of conversation" during Musk's time there.

The article depicts part of the dispute between OpenAI and Musk, centering on the relationship between co-founders Musk and Altman and their differing visions for the company.
An anonymous reader quotes a report from CNBC: OpenAI President Greg Brockman concluded his testimony on Tuesday, where he largely rebutted Elon Musk's account of the early years of the startup and negotiations that occurred at the company. Brockman testified that he never made any commitments to Musk about the company's corporate structure, and he never heard anyone else make them. He emphasized that OpenAI is still governed by a nonprofit. "This entity remains a nonprofit," Brockman said, referring to the OpenAI foundation. "It is the best-resourced nonprofit in the world." [...] Brockman, who spoke from the witness stand in federal court in Oakland, California, over the course of two days, also revealed that Musk had enlisted several OpenAI employees to do months of free work for him at Tesla, Musk's electric vehicle company. That work mainly included efforts to overhaul the company's approach to developing self-driving technology as part of the Autopilot team there in 2017. During his two days on the stand, Brockman answered questions about his personal financial ambitions, his understanding of OpenAI's structure and Musk's involvement at the company, which they co-founded with other executives in 2015. In Musk's testimony last week, the Tesla and SpaceX CEO said that the time, money and resources he poured into OpenAI had been integral to the company's success. He repeatedly said that he helped recruit the company's top talent. Brockman said Tuesday that while Musk was helpful in convincing some employees to take the leap to join OpenAI, he was a polarizing figure for others. "Elon had a reputation of being an extremely hard driver," Brockman said. He added that "certain candidates were very attracted" by Musk's involvement at OpenAI, and that "certain candidates were very turned off." Musk testified last week that a former OpenAI researcher named Andrej Karpathy joined Tesla, but only after he had planned to leave the startup already. 
Brockman said that Musk, after he hired Karpathy, approached him with "an apology and a confession" about the hire, and that neither Musk nor Karpathy had told him the researcher planned to leave OpenAI before that. Musk was generally not very available for meetings and conversations, Brockman said, so he relied on employees, including Sam Teller and former OpenAI board member Shivon Zilis, as proxies. Brockman testified that open sourcing OpenAI's technology was "not a topic of conversation" during Musk's time with the nonprofit, despite Musk's claims that it was supposed to be central to the organization. He also described tense 2017 negotiations over a possible for-profit arm, saying Musk became angry when equity stakes were discussed. "He said Musk declined the proposal during an in-person meeting, then tore a painting of a Tesla Model 3 car off the wall, and began storming out of the room," reports CNBC. He also demanded to know when the cofounders would leave the company. Brockman further said Musk wanted control of OpenAI because he disliked situations where he lacked control, citing Zip2 and SolarCity as examples Musk had raised. He also testified that Musk partly wanted control to help fund his broader SpaceX ambition of building a "city on Mars." CNBC notes the trial will resume at 8:30 a.m. PT on Wednesday, with Shivon Zilis expected to testify. She is the mother of four of Musk's children and a former OpenAI board member.

Recap:
- OpenAI President Discloses His Stake In the Company Is Worth $30 Billion (Day Five)
- Musk Concludes Testimony At OpenAI Trial (Day Four)
- Elon Musk Says OpenAI Betrayed Him, Clashes With Company's Attorney (Day Three)
- Musk Testifies OpenAI Was Created As Nonprofit To Counter Google (Day Two)
- Elon Musk and OpenAI CEO Sam Altman Head To Court (Day One)

Read more of this story at Slashdot.

  •  

Apple Agrees To Pay iPhone Owners $250 Million For Not Delivering AI Siri

✇Slashdot
Author: BeauHD

🤖 AI Summary

Apple has agreed to a $250 million settlement over claims that it misled iPhone buyers about the availability of Apple Intelligence and upgraded Siri features. The settlement applies to US buyers of iPhone 16 lineup and iPhone 15 Pro models purchased between June 10, 2024, and March 29, 2025.

The lawsuit alleged that Apple's advertising created a "clear and reasonable consumer expectation" that Apple Intelligence features would be available with the launch of the iPhone 16. In fact, Apple introduced the relevant AI features weeks after the iPhone 16's release and delayed the launch of its more personalized Siri. Last April, Apple was advised to "discontinue or modify" its "available now" claims for Apple Intelligence.

Apple has also pulled an iPhone 16 ad featuring Bella Ramsey that showed the AI-upgraded Siri in use.
Apple has agreed to a proposed $250 million settlement over claims that it misled iPhone buyers about the availability of Apple Intelligence and its upgraded Siri features. The settlement would cover U.S. buyers of the iPhone 16 lineup and iPhone 15 Pro models between June 10, 2024, and March 29, 2025. The Verge reports: The settlement will resolve a 2025 lawsuit, alleging Apple's advertisements created a "clear and reasonable consumer expectation" that Apple Intelligence features would be available with the launch of the iPhone 16. The lawsuit claimed Apple's products "offered a significantly limited or entirely absent version of Apple Intelligence, misleading consumers about its actual utility and performance." Apple brought certain AI-powered features to the iPhone 16 weeks after its release, and delayed the launch of its more personalized Siri, which is now expected to arrive later this year. Last April, the National Advertising Division recommended that Apple "discontinue or modify" its "available now" claim for Apple Intelligence. Apple also pulled an iPhone 16 ad showing actor Bella Ramsey using the AI-upgraded Siri.

Read more of this story at Slashdot.

  •  

Coinbase Lays Off Nearly 700 Workers In 'AI-Native' Restructuring

✇Slashdot
Author: BeauHD

🤖 AI Summary

Coinbase is laying off about 700 people (14% of its workforce) in an "AI-native" restructuring. CEO Brian Armstrong says the company aims to become lean, fast, and AI-native. According to Armstrong, engineers have used AI to ship in days what used to take a team weeks, non-technical teams are now "shipping production code," and Coinbase is automating many of its workflows.

The company is also making this change because the crypto market is down. Armstrong called the moment "an inflection point, not just for Coinbase, but for every company," saying that the biggest risk now is not taking action and that the company needs to return to the speed and focus of its startup founding.

Coinbase plans to cut management layers, reorganize the business around "AI-native" talent, and experiment with "one person teams" in which a single person covers engineering, design, and product management.
Coinbase is laying off about 700 workers, or 14% of its workforce, as CEO Brian Armstrong says the company is restructuring to become "lean, fast, and AI-native." Engadget reports: Armstrong claimed he'd seen engineers "use AI to ship in days what used to take a team weeks" and that non-technical teams in the company are "shipping production code," while Coinbase is automating many of its workflows. "All of this has led us to an inflection point, not just for Coinbase, but for every company," Armstrong wrote. "The biggest risk now is not taking action. We are adjusting early and deliberately to rebuild Coinbase to be lean, fast and AI-native. We need to return to the speed and focus of our startup founding, with AI at our core." An AI-driven restructuring is only one half of the equation for Coinbase, though. Armstrong wrote that while the company "is well-capitalized, has diversified revenue streams and is well-positioned to weather any storm," the crypto market is down. As such, Coinbase is attempting to become leaner and faster ahead of the next crypto cycle. The company is eliminating some management layers and organizing the business around "AI-native talent who can manage fleets of agents to drive outsized impact," Armstrong wrote. "We'll also be experimenting with reduced pod sizes, including 'one person teams' with engineers, designers and product managers all in one role." That sure sounds like an attempt to get workers to take on more responsibilities.

Read more of this story at Slashdot.

  •  

Google DeepMind Workers Vote To Unionize Over Military AI Deals

✇Slashdot
Author: BeauHD

🤖 AI Summary

Workers at Google DeepMind's London office have voted to unionize in a bid to block the company from providing its technology to the US and Israeli militaries. The push is fundamentally about holding Google to its own ethical standards on artificial intelligence (AI).

The Communication Workers Union (CWU) and Unite the Union have sent a letter to a Google director asking to be recognized as joint representatives for DeepMind employees. The hope is that collective bargaining will put workers in a much stronger position to present demands to an increasingly unresponsive management.

If unionization succeeds, the workers will likely demand that Google pull out of its long-standing contract with the Israeli military, seek greater transparency over how its AI products are used, and ask for assurances relating to layoffs made possible by automation. If Google does not engage, the employees are prepared to ask an arbitration committee to compel the company to recognize the unions.

Since the turn of the year, Anthropic and OpenAI have announced large-scale expansions in London. The CWU argues the DeepMind unionization effort may already be influencing workers at other frontier labs.

In February 2025, Google's parent removed language from its AI ethics guidelines that had barred uses such as weapons development and surveillance. Many employees had bought into the Google DeepMind tagline of building AI responsibly to benefit humanity, but the direction of travel is now seen as further militarization.

An anonymous reader quotes a report from Wired: Employees at Google DeepMind in London have voted to unionize as part of a bid to block the AI lab from providing its technology to the US and Israeli militaries. In a letter addressed to Google's managing director for the UK and Ireland, Debbie Weinstein, the workers asked the company to recognize the Communication Workers Union and Unite the Union as joint representatives for DeepMind employees. "Fundamentally, the push for unionization is about holding Google to its own ethical standards on AI, how they monetize it, what the products do, and who they work with," John Chadfield, national officer for technology at the CWU, tells WIRED. "Through the process of unionization, workers are collectively in a much stronger place to put [demands] to an increasingly deaf management." [...] The DeepMind employee tells WIRED that if the staff succeeds in unionizing in the UK, they will likely demand that Google pulls out of its long-standing contract with the Israeli military, and seek greater transparency over how its AI products will be used, and some sort of assurance relating to layoffs made possible by automation. If Google does not engage, the letter states, the employees will ask an arbitration committee to compel the company to recognize the unions. Since the turn of the year, both Anthropic and OpenAI have announced large-scale expansions of their operations in London. CWU hopes the unionization effort at DeepMind will spur workers at those labs into similar action. "These conversations are happening," claims Chadfield. "The workers at other frontier labs have seen what Google DeepMind workers have done. They've come to us asking for help as well." The unionization push began in February 2025 after Alphabet removed a pledge from its AI ethics guidelines that had barred uses such as weapons development and surveillance. 
"A lot of people here bought into the Google DeepMind tagline 'to build AI responsibly to benefit humanity,'" the DeepMind employee told WIRED. "The direction of travel is to further militarization of the AI models we're building here."

Read more of this story at Slashdot.

  •  

Moving To Mainframe Can Be Cheaper Than Sticking With VMware

✇Slashdot
Author: BeauHD

🤖 AI Summary

URL: https://linux.slashdot.org/story/26/05/05/189237/moving-to-mainframe-can-be-cheaper-than-sticking-with-vmware?utm_source=rss1.0mainlinkanon&utm_medium=feed

Gartner Vice President Analyst Alessandro Galimberti says that for workloads requiring years of consistency and compatibility, such as mission-critical applications, migrating to IBM mainframes can be more cost-effective than VMware licensing. He suggests the economics favor mainframes particularly for fleets of hundreds of Linux virtual machines and for applications that need long-term stability.

That said, Galimberti does not recommend mainframes for every application. In his view, they suit mission-critical applications unlikely to change much over a decade, as well as Linux applications, since the open source OS runs on IBM hardware. IBM also offers the z/VM hypervisor, which he says can make Linux even more enterprise-ready.

However, Galimberti notes that moving to a mainframe takes time and negotiation: buyers should plan to negotiate price and renewal protections rather than focus solely on business value. He also cautions that users may hold back on useful customizations out of convenience-driven lock-in concerns, and that up-and-coming IT engineers tend not to choose mainframe-related careers.

Finally, he says the situation may improve as service providers strengthen their investment in mainframe programs.
Gartner says some VMware customers may find it cheaper to move certain Linux VM workloads to IBM mainframes than to adopt Broadcom's new VMware licensing, especially for fleets of hundreds of Linux VMs and mission-critical apps needing long-term stability. The Register reports: Speaking to The Register to discuss the analyst firm's mid-April publication, "The State of the IBM Mainframe in 2026," [Gartner Vice President Analyst Alessandro Galimberti] said some buyers in many fields are comparing mainframes to modern environments and deciding Big Blue's big iron comes out ahead. "I can build a multi-region cloud application, but things like data synchronization and high availability are things I need to build into application logic," he said. "The mainframe has that in the platform, which shields developers from complexity." He also thinks mainframes are ideally suited to workloads that need many years of transactional consistency and backward-compatibility. That said, Galimberti doesn't recommend the mainframe for all applications. He said mission-critical applications that are unlikely to change much for a decade are best-suited to the machines, as are Linux applications because the open source OS runs on IBM's hardware. IBM also offers the z/VM hypervisor, which he says can make Linux "even better and more enterprise-ready." Which is why Galimberti thinks IBM's ecosystem is attractive to VMware users, especially those who operate a fleet of 500 to 700 Linux VMs. [...] Committing to mainframes therefore means planning "to spend time negotiating price and renewal protections, rather than prioritizing the business value these solutions can deliver." Another downside is that mainframes pose clear lock-in risk, so users may hold back on useful customizations out of fear they make it harder to extricate themselves from the platform. Access to skills remains an issue, too, as kids these days mostly don't contemplate a career working with big iron. 
Galimberti sees more service providers investing in their mainframe programs, which might help. So does the availability of Linux.

Read more of this story at Slashdot.

  •  

Kids Bypass Age Verification With Fake Moustaches

✇Slashdot
Author: BeauHD

🤖 AI Summary

Age verification under the UK's Online Safety Act is easy for many children to bypass, according to a new survey by Internet Matters. Children reportedly evade the checks in a variety of ways, including fake birthdays, other people's ID cards, video game characters, and even faces with drawn-on mustaches.

Key points:
- In an Internet Matters survey of more than 1,000 UK children and their parents, 46% of children said age checks were easy to pass.
- Children bypass verification with relatively simple methods, such as using video game characters, entering fake birthdays, or borrowing someone else's ID.
- Some parents (17%) actively help their children evade age checks or turn a blind eye to it.

Conclusion:
The long-term effectiveness of the UK's Online Safety Act appears limited, and the role parents play has a significant influence on the outcome.
A new Internet Matters survey suggests the UK's Online Safety Act age checks are easy for many children to bypass. Reported workarounds include fake birthdays, borrowed IDs, video game characters, and even drawing on a fake mustache. The Register reports: The group surveyed over 1,000 UK children and their parents, and while it did report some positive effects from changes made under the OSA, many children saw age verification as an easy-to-bypass hurdle rather than something that kept them genuinely safe. A full 46 percent of children even said that age checks were easy to bypass, while just 17 percent said that they were difficult to fool. The methods kids use to fool age gates vary, but most are pretty simple: There's the classic use of a video game character to fool video selfie systems, while in other instances, children reported just entering a fake birthday or using someone else's ID card when that was required. The report even cites cases of children drawing a mustache on their faces to fool age detection filters. Seriously. While nearly half of UK kids say it's easy to bypass online age checks (and another 17 percent say it's neither hard nor easy), only 32 percent say they've actually bypassed them, according to Internet Matters. Like scoring some booze from "cool" parents, keeping age-gated content out of the hands of kids under the OSA is only as effective as parents let it be, and a quarter of them enable their kids' online delinquency. More specifically, Internet Matters found that a full 17 percent of parents admitted to actively helping their kids evade age checks, while an additional 9 percent simply turned a blind eye to it.

Read more of this story at Slashdot.

  •  

US Government Warns of Severe CopyFail Bug Affecting Major Versions of Linux

✇Slashdot
Author: BeauHD

🤖 AI Summary

The US government has warned that a severe security bug dubbed "CopyFail," affecting most versions of the Linux operating system, is being exploited. According to TechCrunch, the vulnerability allows attackers to take complete control of vulnerable systems. The US cybersecurity agency CISA has ordered all civilian federal agencies to patch affected systems by May 15. The flaw is already being exploited in the wild, meaning it may be in active use in malicious hacking campaigns.
An anonymous reader quotes a report from TechCrunch: A severe security vulnerability affecting almost every version of the Linux operating system has caught defenders off-guard and scrambling to patch after security researchers publicly released exploit code that allows attackers to take complete control of vulnerable systems. The U.S. government says the bug, dubbed "CopyFail," is now being exploited in the wild, meaning it's being actively used in malicious hacking campaigns. [...] Given the risk to the federal enterprise network, U.S. cybersecurity agency CISA has ordered all civilian federal agencies to patch any affected systems by May 15.

Read more of this story at Slashdot.

  •  

Oscars Bans AI Actors and Writing From Awards

✇Slashdot
Author: BeauHD

🤖 AI Summary

The Academy has made explicit that acting and writing eligible for Oscar nominations must be performed and authored by humans. It has not banned AI tools outright: if a filmmaker uses AI in a film, those tools "neither help nor harm" its chances of a nomination. Judges are, however, expected to take into account "the degree to which a human was at the heart of the creative authorship." If questions arise, the Academy reserves the right to request more information about how AI was used and about human authorship.

The Academy described the rules as a "substantive" change in response to growing use of AI technology in the film industry. Explicitly requiring that only acting and writing done by humans qualify for awards is a first for the Academy.
The Academy has clarified that only human-performed acting and human-authored writing are eligible for Oscar nominations. The Oscars will not ban AI tools broadly, but says it will judge films based on the degree to which humans remain central to the creative work. The BBC reports: The Academy of Motion Picture Arts and Sciences [...], which controls the US film industry's most prestigious award, on Friday issued updated rules for what kind of work in movies and documentaries would be considered eligible for an Oscar as the use of artificial intelligence (AI) technology grows. In updated eligibility requirements, the Academy specified that only acting "demonstrably performed by humans" and that writing "must be human-authored" in order to be nominated for an award. The Academy called the requirements a "substantive" change to the rules for the Oscars. The need to specify awards can only go to acting and writing done by "humans" is new for the academy. [...] However, the academy did not issue a ban on AI use in films more broadly. Outside of acting and writing, if a filmmaker used AI tools in their work, such "tools neither help nor harm the chances of achieving a nomination," the academy wrote. "The Academy and each branch will judge the achievement, taking into account the degree to which a human was at the heart of the creative authorship when choosing which movie to award," the group added. "If questions arise regarding the aforementioned use of generative artificial intelligence, the Academy reserves the right to request more information about the nature of the use and human authorship."

Read more of this story at Slashdot.

  •  