Reading View

Single Dose of Magic Mushroom Psychedelic Can Cause Anatomical Brain Changes

✇Slashdot
Author: BeauHD

🤖 AI Summary

A small study has found that a single dose of psilocybin (the psychedelic in magic mushrooms) produced brain changes that were still measurable about a month later. The changes were revealed by specialized scans that measure the diffusion of water along nerve bundles in the brain.

Prof Robin Carhart-Harris (University of California, San Francisco), the study's senior author, said it was "remarkable" to see potential brain changes one month after a single dose. What these changes mean is not yet known, but overall the participants reported positive psychological changes, including improved wellbeing and mental flexibility.

Writing in Nature Communications, the researchers describe another key finding: those whose brain entropy (a measure of information content) increased the most were the most likely to report deeper psychological insight and better wellbeing a month later, underlining the link between flexible thinking and mental health.

Carhart-Harris said this suggests "a psychobiological therapeutic action for psilocybin."

Meanwhile, according to Prof Alex Kwan, a neuroscientist at Cornell University in New York, studies in mice have shown that psychedelics can rewire connections between nerves ("plasticity"), which could underlie their therapeutic effects. Whether the same changes occur in humans, however, remains unknown.

Kwan said the study comes closer than most to providing evidence of lasting structural brain changes after psychedelic use, but while the results were "exciting," he cautioned that the study involved a small number of participants and that DTI (diffusion tensor imaging) offers only an indirect and limited view of brain connections.
A small study found that a single 25mg dose of psilocybin produced measurable brain changes that were still visible a month later, along with reported improvements in psychological insight, wellbeing, and mental flexibility. The Guardian reports: Evidence for the changes came from specialized scans that measured the diffusion of water along nerve bundles in the brain. They suggested that some nerve tracts had become denser and more robust after the drug was taken. While the findings are preliminary, the scientists said the opposite was seen in ageing and dementia. "It's remarkable to see potential anatomical brain changes one month after a single dose of any drug," said Prof Robin Carhart-Harris, a neurologist at the University of California, San Francisco, and senior author on the study. "We don't yet know what these changes mean, but we do note that overall, people showed positive psychological changes in this study, including improved wellbeing and mental flexibility." [...] Writing in Nature Communications, the researchers describe another key finding. Those who had the largest spike in brain entropy after psilocybin were most likely to report deeper psychological insight and better wellbeing a month later, underlining the link between flexible thinking and improved mental health. "It suggests a psychobiological therapeutic action for psilocybin," said Carhart-Harris. Prof Alex Kwan, a neuroscientist at Cornell University in New York, said studies in mice had shown that psychedelics can rewire connections between nerves, a form of "plasticity" that could underlie their therapeutic effects. The big question is whether the same occurs in humans. "This study comes closer than most to addressing that question, by giving evidence of lasting changes in brain structure after psychedelic use," he said. But while the results were "exciting," the study involved a small number of people and DTI provides an indirect and limited view of brain connections, he said.

Read more of this story at Slashdot.

  •  

Sam Altman's Management Style Comes Under the Microscope At OpenAI Trial

✇Slashdot
Author: BeauHD

🤖 AI Summary

Sam Altman's management style came under renewed scrutiny on the seventh day of Elon Musk's lawsuit against OpenAI. Former OpenAI figures Mira Murati, Shivon Zilis, and Helen Toner testified, voicing concerns about his "difficult and chaotic" management style. They argued that Altman had trouble making decisions on "big controversial things" and tended to tell different people different things, creating a confusing environment.

Murati said the problem was "Sam saying one thing to one person and a completely different thing to another person." As for why she supported his return, she explained that at the time of his ousting the company "was at catastrophic risk of falling apart" and that she was concerned it could completely blow up.

Zilis described her concern that ChatGPT was released without notifying the board, and said she felt uneasy about talk of a potential deal with Helion Energy given that both Altman and Greg Brockman were investors in it.

Toner said the removal was driven by a pattern of behavior related to his honesty and candor, his resistance to board oversight, and concerns raised to the board by members of his inner management team about his management practices.
Sam Altman's management style came under scrutiny on the seventh day of Elon Musk's high-stakes OpenAI trial, as former OpenAI figures Mira Murati, Shivon Zilis, and Helen Toner took the stand to testify about their experiences working with him. Their testimony resurfaced many of the criticisms that first emerged during Altman's brief ouster as CEO in 2023. An anonymous reader quotes a report from Business Insider: The first witness was Mira Murati, OpenAI's former chief technology officer and now founder of her own AI shop, Thinking Machines Lab. Jurors watched a recorded video deposition of Murati, who was also OpenAI's interim CEO after the board briefly ousted Sam Altman. Murati's testimony focused on her concerns about Altman's "difficult and chaotic" management style. She said Altman had trouble "making decisions on big controversial things." He also had a habit of telling people what they wanted to hear. "My concern was about Sam saying one thing to one person and a completely different thing to another person, and that makes it a very difficult and chaotic environment to work with," said Murati. Murati said that her issue with Altman was not about safety, "it is about Sam creating chaos." She said she supported Altman's return to OpenAI because the company "was at catastrophic risk of falling apart" at the time of his ousting. "I was concerned about the company completely blowing up." Zilis said she was upset that Altman rolled out ChatGPT without involving the board. "It wasn't just me but the entire board raised concern about that whole thing happening without any board communication," she said. Zilis said she was also concerned about a potential OpenAI deal with a nuclear energy startup called Helion Energy because both Altman and Greg Brockman were investors. Although the executives had disclosed the investment to the board, Zilis said the deal talk made her uneasy. It "felt super out of left field," she said. 
"How is it the case that we want to place a major bet on a speculative technology?" In a video deposition, Helen Toner, a former member of OpenAI's board who resigned in 2023, said she first became aware of ChatGPT's release when an OpenAI employee asked another board member whether the board was aware of the development. [...] Toner also elaborated on why the board, including herself, voted to remove Altman as CEO in 2023. "There were a number of things -- the pattern of behavior related to his honesty and candor, his resistance of board oversight, as well as the concerns that two of his inner management team raised to the board about his management practices, his manipulation of board processes," said Toner. Recap: Brockman Rebuts Musk's Take On Startup's History, Recounts Secret Work For Tesla (Day Six) OpenAI President Discloses His Stake In the Company Is Worth $30 Billion (Day Five) Musk Concludes Testimony At OpenAI Trial (Day Four) Elon Musk Says OpenAI Betrayed Him, Clashes With Company's Attorney (Day Three) Musk Testifies OpenAI Was Created As Nonprofit To Counter Google (Day Two) Elon Musk and OpenAI CEO Sam Altman Head To Court (Day One)

Read more of this story at Slashdot.

  •  

Google's AI Search Results Will Now Turn To Reddit For 'Expert Advice'

✇Slashdot
Author: BeauHD

🤖 AI Summary

Google has decided to draw on Reddit and other public discussion and social platforms to inform its AI results. A new "Expert Advice" section will surface firsthand perspectives from online discussions and social media within AI responses; specifically, quotes from forums, WordPress blogs, and Reddit will be displayed with links to their sources.

Google will also start recommending in-depth articles at the end of AI responses for further exploration, and will link to more sources directly within its generated answers. If a user subscribes to publications linked to their Google account, sources from those publications will be highlighted.

By making the sources of information visible, the change is intended to help users judge at a glance how reliable and useful a given answer is.
Google is updating AI Overviews and AI Mode to more prominently surface "Expert Advice" from public discussions, social platforms, forums, blogs, and Reddit. Engadget reports: Via a new "Expert Advice" section that can appear in AI responses, Google will display "a preview of perspectives from public online discussions, social media and other firsthand sources." In the sample screenshot the company provided, quotes from forums, WordPress blogs and Reddit were arranged above links to their respective sources. Google plans to add more context to these links, too, showing "a creator's name, handle or community name," so you can judge what you might want to click through and read from a glance. Google will also start recommending in-depth articles at the end of AI responses for further exploration of a given topic, and link to more sources directly in its generated answers rather than just at the end. If you subscribe to any publications, AI responses will also highlight sources from the subscriptions you link to your Google account.

Read more of this story at Slashdot.

  •  

Valve Releases Steam Controller CAD Files Under Creative Commons License

✇Slashdot
Author: BeauHD

🤖 AI Summary

Valve has released CAD files for the new Steam Controller and its Puck under a Creative Commons license. "The idea is to let enterprising modders create their own Steam Controller add-ons, like skins, charging stands, grip extenders or smartphone mounts," reports Digital Foundry.

Valve has previously released CAD files for its Steam Deck handheld, the Valve Index VR suite, and even the original Steam Controller a decade ago, so this release is not unexpected. The license is limited to non-commercial use and requires attribution and sharing designs back to the community, but commercial accessory makers can contact Valve directly to discuss terms.

The CAD files can be found here.
Valve has released CAD files for the new Steam Controller and its Puck under a Creative Commons license. "The idea is to let enterprising modders create their own Steam Controller add-ons, like skins, charging stands, grip extenders or smartphone mounts," reports Digital Foundry. From the report: The Valve release includes files for the external shell ("surface topology") of the Controller and Puck, with a .STP, .STL and engineering diagram of each device, with the latter showing areas that must remain uncovered to let the device maintain its signal strength and otherwise function as designed. Valve has previously released CAD files for its Steam Deck handheld, Valve Index VR suite and even the original Steam Controller a decade ago, so this release is welcomed but not unexpected. The release is under a fairly restrictive Creative Commons license which allows for non-commercial use and requires attribution and sharing of designs back to the community. However, the license also suggests that commercial entities interested in making accessories for the Steam Controller or its Puck can contact Valve directly to discuss terms. You can find the files here.

Read more of this story at Slashdot.

  •  

Morgan Stanley Undercuts Rivals On Pricing In Crypto Trading Debut

✇Slashdot
Author: BeauHD

🤖 AI Summary

Morgan Stanley is adding crypto trading to E*Trade; a pilot is now underway, with a broader rollout planned for the platform's 8.6 million customers later this year. The bank is undercutting rivals with a 50-basis-point trading fee, betting that traditional finance and decentralized finance (DeFi) will converge.

"By contrast, Robinhood Markets' fees start at 95 basis points, Coinbase Global's begin at 60 bps, and Charles Schwab will charge 75 bps," notes Seeking Alpha. Jed Finn, Morgan Stanley's head of wealth management, told Bloomberg: "This is much bigger than trading crypto at a cheaper rate. In a way, the strategy is disintermediating the disintermediators."
Morgan Stanley is adding crypto trading to E*Trade, with a pilot now underway and a broader rollout planned for the platform's 8.6 million customers later this year. The bank is reportedly undercutting rivals with a 50-basis-point trading fee as it bets traditional finance and DeFi will converge. "By contrast, Robinhood Markets' (HOOD) fees start at 95 bps, Coinbase Global's (COIN) begins at 60 bps, and Charles Schwab (SCHW) will charge 75 bps," notes Seeking Alpha. Morgan Stanley's head of wealth management, Jed Finn, told Bloomberg: "This is much bigger than trading crypto at a cheaper rate. In a way, the strategy is disintermediating the disintermediators."
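To make the quoted fees concrete: a basis point is one hundredth of a percent, so a 50 bps fee costs $50 on a $10,000 trade. A quick sketch (the trade size is arbitrary):

```python
# A basis point (bp) is 1/100 of a percent: 1 bp = 1/10_000 of the notional.
FEES_BPS = {
    "Morgan Stanley (E*Trade)": 50,
    "Coinbase Global": 60,
    "Charles Schwab": 75,
    "Robinhood Markets": 95,
}

def fee(amount_usd: float, bps: float) -> float:
    """Fee charged on a trade of `amount_usd` at a rate of `bps` basis points."""
    return amount_usd * bps / 10_000

for broker, bps in FEES_BPS.items():
    print(f"{broker}: ${fee(10_000, bps):,.2f} on a $10,000 trade")
```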

Read more of this story at Slashdot.

  •  

Claude Managed Agents Can Engage In a 'Dreaming' Process To Preserve Memories

✇Slashdot
Author: BeauHD

🤖 AI Summary

At its Code with Claude developer conference, Anthropic introduced a "dreaming" feature for Claude Managed Agents. Dreaming is a memory-preservation process that extracts important information from recent experience so it can inform future tasks and interactions. It is currently in research preview and limited to Managed Agents on the Claude Platform.

Managed Agents are a higher-level alternative to building directly on the Messages API: a "pre-built, configurable agent harness" for multiple agents collaborating on long-running projects. The feature matters because LLM context windows are limited, so important information can be lost over the course of a lengthy project.

Dreaming is a periodically scheduled process that reviews past sessions and memory stores and curates specific memories. Unlike the compaction process many models use, which is limited to a single conversation with a single agent, dreaming analyzes past sessions and memory stores across multiple agents, identifying important patterns in their conversations and projects and saving them to memory for the future.

Users can choose between an automatic process and manually reviewing changes to memory directly.
An anonymous reader quotes a report from Ars Technica: At its Code with Claude developers' conference, Anthropic has introduced what it calls "dreaming" to Claude Managed Agents. Dreaming, in this case, is a process of going over recent events and identifying specific things that are worth storing in "memory" to inform future tasks and interactions. Dreaming is a feature that is currently in research preview and limited to Managed Agents on the Claude Platform. Managed Agents are a higher-level alternative to building directly on the Messages API that Anthropic describes as a "pre-built, configurable agent harness that runs in managed infrastructure." It's intended for situations where you want multiple agents working on a task or project to some end point over several minutes or hours. Anthropic describes dreaming as a scheduled process, in which sessions and memory stores are reviewed, and specific memories are curated. This is important because context windows are limited for LLMs, and important information can be lost over lengthy projects. On the chat side of things, many models use a process called compaction, whereby lengthy conversations are periodically analyzed, and the models attempt to remove irrelevant information from the context window while keeping what's actually important for the ongoing conversation, project, or task. However, that process, as I described it, is usually limited to a specific conversation with a single agent. "Dreaming" is a periodically recurring process in which past sessions and memory stores can be analyzed across agents, and important patterns are identified and saved to memory for the future. Users will be able to choose between an automatic process, or reviewing changes to memory directly.
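Anthropic has not published dreaming's internals; as a purely illustrative sketch, the curation step described above (reviewing sessions and memory stores across agents, keeping only what matters) could look something like the following, where `importance` stands in for a model-scored relevance judgment and every name is invented:

```python
from dataclasses import dataclass

@dataclass
class MemoryItem:
    text: str
    importance: float  # stand-in for a model-scored relevance judgment

def dream(sessions: list[list[MemoryItem]], memory_store: list[MemoryItem],
          keep_threshold: float = 0.7) -> list[MemoryItem]:
    """Scheduled curation pass: review past sessions from *all* agents plus
    the existing memory store, and keep only items worth remembering.
    (Compaction, by contrast, prunes a single agent's conversation.)"""
    candidates = memory_store + [item for session in sessions for item in session]
    return [item for item in candidates if item.importance >= keep_threshold]
```

The real system presumably scores items with a model rather than a fixed threshold; the point is only the shape of the process: a recurring pass over accumulated history that writes a smaller curated set back to memory.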

Read more of this story at Slashdot.

  •  

ReactOS Unifies Installation Media, Introduces GUI Installer and New ATA Driver

✇Slashdot
Author: BeauHD

🤖 AI Summary

ReactOS developers told Phoronix that the project has introduced a unified BootCD, merging what were previously separate installation media and LiveCD images. The new unified BootCD combines the traditional text-based installer with a LiveCD mode, and the latter now includes an option to launch a first-stage GUI installer. The graphical interface aims to make the setup process more approachable for first-time users.

The project has also merged a new ATA storage driver that supports SATA, PATA, ATAPI, AHCI, and even SCSI devices. This plug-and-play-aware storage stack is expected to expand the range of hardware on which ReactOS can boot.

Together with recent improvements to graphics driver support, the project continues to make incremental progress across core subsystems, though its long development timeline remains a point of discussion. Whether these usability and hardware-compatibility improvements will be enough to broaden ReactOS adoption beyond its current niche remains to be seen.

The new features are not included in version 0.4.15 and can be tested in the latest nightly test builds.
jeditobe writes: Developers of ReactOS told Phoronix that the project has introduced a unified BootCD, replacing its previously separate installation media and LiveCD images. The new image combines the traditional text-mode installer with a LiveCD mode in a single medium. Within this unified BootCD, the updated LiveCD mode now includes an option to launch a first-stage GUI installer. The graphical interface is intended to make installation more approachable for new users compared to the long-standing text-based setup process. In a separate development, the project has also merged a new ATA storage driver that has been in progress since early 2024. The plug-and-play aware storage stack supports SATA, PATA, ATAPI, AHCI, and even SCSI devices, potentially expanding the range of hardware on which ReactOS can successfully boot. Following recent improvements to graphics driver support, the project continues to make incremental progress across core subsystems, though its long development timeline remains a point of discussion. Will these usability and hardware compatibility improvements be enough to broaden ReactOS adoption beyond its current niche? Please note that all new features are not present in version 0.4.15 and are available for testing in the latest nightly test builds.

Read more of this story at Slashdot.

  •  

Zuckerberg 'Personally Authorized and Encouraged' Meta's Copyright Infringement

✇Slashdot
Author: BeauHD

🤖 AI Summary

A copyright-infringement lawsuit has been filed against Meta and Mark Zuckerberg. Five major publishers and author Scott Turow allege that Zuckerberg "personally authorized and actively encouraged" massive copyright infringement: the use of pirated books, journal articles, and web-scraped material to train Meta's Llama AI systems.

Following Zuckerberg's well-known motto of "move fast and break things," the complaint says, the defendants copied "those stolen fruits many times over" in their effort to win the AI "arms race" and build a functional generative AI model.

Meta denies wrongdoing, arguing that courts have recognized AI training on copyrighted material as potentially fair use. The suit, however, alleges that Meta and Zuckerberg deliberately circumvented copyright-protection mechanisms, and that they considered paying to license the works before abandoning that strategy at "Zuckerberg's personal instruction."

The suit was filed in the U.S. District Court for the Southern District of New York and seeks unspecified monetary damages.
Five major publishers and author Scott Turow have sued Meta and Mark Zuckerberg, alleging that Zuckerberg "personally authorized and actively encouraged" massive copyright infringement by using pirated books, journal articles, and web-scraped material to train Meta's Llama AI systems. Meta denies wrongdoing and says it will fight the case, arguing that courts have recognized AI training on copyrighted material as potentially fair use. Variety reports: "In their effort to win the AI 'arms race' and build a functional generative AI model, Defendants Meta and Zuckerberg followed their well-known motto: 'move fast and break things,'" the plaintiffs say in their lawsuit. "They first illegally torrented millions of copyrighted books and journal articles from notorious pirate sites and downloaded unauthorized web scrapes of virtually the entire internet. They then copied those stolen fruits many times over to train Meta's multibillion-dollar generative AI system called Llama. In doing so, Defendants engaged in one of the most massive infringements of copyrighted materials in history." The suit was filed Tuesday (May 5) in the U.S. District Court for the Southern District of New York by five publishers (Hachette, Macmillan, McGraw Hill, Elsevier and Cengage) and Turow individually. The proposed class-action suit seeks unspecified monetary damages for the alleged copyright infringement. A copy of the lawsuit is available at this link (PDF). [...] the latest lawsuit alleges that Meta and Zuckerberg deliberately circumvented copyright-protection mechanisms -- and had considered paying to license the works before abandoning that strategy at "Zuckerberg's personal instruction." The suit essentially argues that the conduct described falls outside protections afforded by fair-use provisions of the U.S. copyright code.

Read more of this story at Slashdot.

  •  

Silicon Valley Bets $200 Million On AI Data Centers Floating In the Ocean

✇Slashdot
Author: BeauHD

🤖 AI Summary

Panthalassa, an ocean data center venture, has raised $140 million from Silicon Valley investors including Palantir co-founder Peter Thiel, as tech companies face mounting challenges building AI data centers on land.

Panthalassa is opening a pilot manufacturing facility near Portland, Oregon, and plans to test its newest prototype, "Ocean-3," in the northern Pacific in 2026. The device is about 85 meters long and would stand nearly as tall as London's Big Ben or New York's Flatiron Building.

Each ocean node harnesses wave power: wave motion drives water up into a pressurized reservoir, which is released to spin a turbine generator, producing energy for the onboard AI chips. Unlike land-based data centers, a node can also use the cold surrounding water to cool its AI chips.

CEO Garth Sheldon-Coulson says he hopes to eventually deploy thousands of the nodes.
An anonymous reader quotes a report from Ars Technica: Silicon Valley investors such as Palantir co-founder Peter Thiel have bet hundreds of millions of dollars on deploying AI data centers powered by waves in the middle of the world's oceans -- a move that coincides with tech companies facing mounting challenges in building AI data center projects on land. The latest investment round of $140 million is intended to help the company Panthalassa complete a pilot manufacturing facility near Portland, Oregon, and speed up deployments of wave-riding "nodes" designed to generate electrical power, according to a May 4 press release. Instead of sending renewable energy to a land-based data center, the floating nodes would directly power onboard AI chips and transmit inference tokens representing the AI models' outputs to customers worldwide via satellite link. Each node resembles a huge steel sphere bobbing on the water with a tube-like structure extending vertically down beneath the surface. The wave motions drive water upward through the tube into a pressurized reservoir, where it can be released to spin a turbine generator that produces renewable energy for the AI chips on board. Panthalassa claims the node's AI chips would also get cooled using the surrounding water, which could offer another advantage over traditional data centers. "Ocean-based compute might offer a massive cooling advantage because the ambient temperature is so low," Lee said. "Land-based data centers use a lot of electricity and fresh water for cooling." The newest node prototype, called Ocean-3, is scheduled for testing in the northern Pacific Ocean later in 2026. The latest version reaches about 85 meters in length and would stand nearly as tall as London's Big Ben or New York City's Flatiron Building, according to the Financial Times. 
Panthalassa has already tested several earlier prototypes of the wave energy converter technology, including the Ocean-1 in 2021 and the Ocean-2 that underwent a three-week sea trial off the coast of Washington state in February 2024. The company's CEO and co-founder, Garth Sheldon-Coulson, said in a CBS interview that he hopes to eventually deploy thousands of the nodes.

Read more of this story at Slashdot.

  •  

Microsoft Gives Up On Xbox Copilot AI

✇Slashdot
Author: BeauHD

🤖 AI Summary

Microsoft is winding down the mobile version of Xbox Copilot and ending development of the console version, abandoning its plan to bring the gaming-focused AI assistant to current-generation Xbox consoles this year. The decision is part of new Xbox CEO Asha Sharma's reorganization of the Xbox platform team, which brought in executives from Microsoft's CoreAI division, where Sharma worked before taking over Xbox. Sharma said Xbox needs to move faster, deepen its connection with the community, and address friction for both players and developers, and that features which don't align with that direction will be retired. Since taking over from former Microsoft Gaming CEO Phil Spencer in February, Sharma has also scrapped the Microsoft Gaming brand and cut the price of Xbox Game Pass.
Microsoft is winding down Xbox Copilot on mobile and ending development of Copilot on console, reversing plans to bring the gaming-focused AI assistant to current-generation Xbox consoles this year. "The move follows [new Xbox CEO Asha Sharma's] reorganization of the Xbox platform team earlier on Tuesday, which added executives from Microsoft's CoreAI team -- where Sharma worked before taking over Xbox -- to the Xbox side of the company," reports The Verge. Sharma said in a post on X: Xbox needs to move faster, deepen our connection with the community, and address friction for both players and developers. Today, we promoted leaders who helped build Xbox, while also bringing in new voices to help push us forward. This balance is important as we get the business back on track. As part of this shift, you'll see us begin to retire features that don't align with where we're headed. We will begin winding down Copilot on mobile and will stop development of Copilot on console. Since taking over for former Microsoft Gaming CEO Phil Spencer in February, Sharma has scrapped the Microsoft Gaming brand and cut the price of Xbox Game Pass.

Read more of this story at Slashdot.

  •  

White House App Is a Terrifying Security Mess

✇Slashdot
Author: BeauHD

🤖 AI Summary

An analysis of the White House app has flagged numerous security problems. The main findings are as follows:

1. GPS tracking: the app is set to poll your location every 4.5 minutes in the foreground and every 9.5 minutes in the background. The location permissions are not declared in the AndroidManifest, but are hardcoded as runtime requests in the OneSignal SDK, which syncs the data to OneSignal's servers.

2. JavaScript loaded from GitHub: JavaScript for YouTube embeds is loaded from a random GitHub account. If that account were ever compromised, arbitrary code could run inside the app's WebView.

3. No SSL certificate pinning: traffic can potentially be intercepted on compromised networks.

4. JavaScript injection into visited sites: the in-app browser injects JavaScript and CSS into every page you visit, stripping cookie consent dialogs, GDPR banners, login walls, and paywalls.

5. Leftover dev artifacts: the production build contains a localhost URL for the Metro bundler.

Apart from these issues, the app's architecture is unremarkable: a React Native build using Expo SDK 54, with WordPress powering the backend through a custom REST API. Even so, the app carries serious security risks that could significantly affect users' privacy and data protection.
New submitter spazmonkey writes: From a hidden GPS tracker polling your location every 4.5 minutes to JavaScript loaded from a random GitHub account, no SSL certificate pinning, and an in-app browser that silently strips cookie consent dialogs and paywalls from every page you visit, the new White House app seems to have a little bit of everything. A security researcher pulled the APK apart to discover the cybersecurity vulnerabilities. "The app is a React Native build using Expo SDK 54, with WordPress powering the backend through a custom REST API," reports Android Headlines. "That's pretty normal, as nearly 42% of all websites on the internet are powered by WordPress. But that's just the start; now the nightmare begins..." From the report: To start, the app has a full GPS tracking pipeline compiled in. Essentially, it's set to poll your location every 4.5 minutes in the foreground, and 9.5 minutes in the background. It's syncing latitude, longitude, accuracy, and timestamp data to OneSignal's servers. These location permissions aren't declared in the AndroidManifest, but they are hardcoded as runtime requests in the OneSignal SDK. Some have noted that the tracking only kicks in if the developer enables it server-side and the user grants permission, but it is there, ready to go. And it gets even stranger. Apparently, the app is loading JavaScript from a random person's GitHub site for YouTube embeds. Yes, you read that right, it's just loading JavaScript from a random GitHub site. So if that account ever gets compromised, arbitrary code could run inside the app's WebView. There's also no SSL certificate pinning, meaning that traffic can potentially be intercepted on compromised networks like sketchy public WiFi or corporate proxies. The app also injects JavaScript and CSS into every page you visit in the in-app browser. This strips away cookie consent dialogs, GDPR banners, login walls, and paywalls. 
There's also leftover dev artifacts in the production build, including a localhost URL to the Metro bundler.
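For context on the missing SSL pinning: certificate pinning means the client rejects any server certificate whose digest doesn't match a fingerprint shipped inside the app, even if that certificate chains to a trusted CA. A minimal sketch of the check (the certificate bytes here are placeholders, not real DER data):

```python
import hashlib

# In a real app this would be the SHA-256 of the server's DER-encoded
# certificate (or its public key), baked into the binary at build time.
PINNED_SHA256 = hashlib.sha256(b"placeholder server certificate").hexdigest()

def connection_allowed(presented_cert_der: bytes) -> bool:
    """Accept the TLS connection only if the presented certificate matches
    the pin; a man-in-the-middle cert fails even if a trusted CA signed it."""
    return hashlib.sha256(presented_cert_der).hexdigest() == PINNED_SHA256
```

Without such a check, any certificate a compromised network presents is accepted as long as it validates against the device's CA store, which is what makes interception on hostile WiFi or corporate proxies feasible.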

Read more of this story at Slashdot.

  •  

CO2 Levels In the Atmosphere Hit 'Depressing' New Record

✇Slashdot
Author: BeauHD

🤖 AI Summary

According to NOAA data from the Mauna Loa Observatory, atmospheric carbon dioxide hit a new record in April, averaging about 431 parts per million. That is up sharply from under 320 ppm when measurements began at the site in 1958.

Climate scientist Zachary Labe of Climate Central, a nonprofit that researches climate change, calls the new record "depressing" but not unexpected. "It's just another sign that carbon dioxide continues to increase in our atmosphere as our planet continues to warm," he says, adding that for many climate scientists it is simply another record in the wrong direction.

Labe explains that atmospheric CO2 tends to peak in April each year, as decaying plants release greenhouse gases after winter; some of that CO2 is reabsorbed as plants grow during the warmer months. NOAA's data nonetheless show a worrying trend, with the monthly average CO2 concentration steadily increasing.

Although U.S. emissions fell in 2023 and 2024, that trend reversed in 2025, at least partly because of increased electricity demand from artificial intelligence data centers. Still, Labe says there is reason for optimism as the use of renewable energy sources such as solar and wind expands.
Atmospheric carbon dioxide hit a new record in April, averaging about 431 parts per million at NOAA's Mauna Loa Observatory. That's up from under 320 ppm when the site began measurements in 1958. Scientific American reports: Greenhouse gases, such as carbon dioxide, are measured as a proportion of the total atmosphere. The numbers are presented as the number of molecules of a particular gas out of a million total molecules, or ppm. Climate scientist Zachary Labe of Climate Central, a nonprofit that researches climate change, says the new record is "depressing" but not unexpected. "It's just another sign that carbon dioxide continues to increase in our atmosphere as our planet continues to warm," he says. "For many climate scientists, this is just 'here it is again, another record in the wrong direction.'" Labe explains that the amount of CO2 in the atmosphere tends to peak in April each year as decaying plants release greenhouse gases after winter. Some of that CO2 gets reabsorbed by plants as they grow during the warmer months. But NOAA's data show a worrying trend, with the average monthly amount of CO2 steadily increasing. [...] Although the amount of CO2 in the atmosphere has continued to rise, there was a reduction in U.S. emissions in 2023 and 2024. That trend, however, was reversed in 2025, at least partially because of the increased electricity demand from artificial intelligence data centers. Still, Labe says there are reasons for optimism as the use of renewable energy sources such as solar and wind expands.
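As a quick sanity check on the figures above: ppm is just a fraction of all air molecules, and the reported rise from under 320 ppm in 1958 to about 431 ppm works out to roughly 1.7 ppm per year on average (assuming the record April is 2025):

```python
def ppm_to_fraction(ppm: float) -> float:
    """Parts per million expressed as a plain fraction of all molecules."""
    return ppm / 1_000_000

rise_ppm = 431 - 320        # total increase reported since 1958
years = 2025 - 1958         # measurement span, assuming the record is 2025
print(f"{ppm_to_fraction(431):.6f} of the atmosphere is CO2")
print(f"~{rise_ppm / years:.2f} ppm average annual increase")
```

Note this is a long-run average; as the article says, the recent annual increments have been trending higher than the early-record ones.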

Read more of this story at Slashdot.

  •  

Brockman Rebuts Musk's Take On Startup's History, Recounts Secret Work For Tesla

✇Slashdot
Author: BeauHD

🤖 AI Summary

OpenAI President Greg Brockman concluded his testimony, in which he largely rebutted Elon Musk's account of the startup's early history and negotiations. Brockman testified that neither he nor anyone else made commitments to Musk about the company's corporate structure, and emphasized that OpenAI is still governed by a nonprofit: "This entity remains a nonprofit. It is the best-resourced nonprofit in the world."

It also emerged that Musk had enlisted several OpenAI employees to do months of free work for him, mainly overhauling Tesla's approach to developing self-driving technology on the Autopilot team in 2017.

According to Brockman, Musk helped bring talented people to OpenAI, but he was a polarizing figure: he had a reputation as "an extremely hard driver," which attracted some candidates and turned others off. Musk testified that former OpenAI researcher Andrej Karpathy had already planned to leave before joining Tesla, but Brockman said Musk later approached him with "an apology and a confession" about the hire, and that neither Musk nor Karpathy had told him beforehand that the researcher planned to leave.

Because Musk was generally unavailable for meetings and conversations, Brockman relied on people such as Sam Teller and former OpenAI board member Shivon Zilis as his proxies. He also testified that open sourcing OpenAI's technology was "not a topic of conversation" during Musk's time with the nonprofit, and described tense 2017 negotiations in which Musk became angry when equity stakes were discussed.

The trial resumes at 8:30 a.m. Pacific Time on Wednesday, when Shivon Zilis, a former OpenAI board member and the mother of four of Musk's children, is expected to testify.
An anonymous reader quotes a report from CNBC: OpenAI President Greg Brockman concluded his testimony on Tuesday, where he largely rebutted Elon Musk's account of the early years of the startup and negotiations that occurred at the company. Brockman testified that he never made any commitments to Musk about the company's corporate structure, and he never heard anyone else make them. He emphasized that OpenAI is still governed by a nonprofit. "This entity remains a nonprofit," Brockman said, referring to the OpenAI foundation. "It is the best-resourced nonprofit in the world." [...] Brockman, who spoke from the witness stand in federal court in Oakland, California, over the course of two days, also revealed that Musk had enlisted several OpenAI employees to do months of free work for him at Tesla, Musk's electric vehicle company. That work mainly included efforts to overhaul the company's approach to developing self-driving technology as part of the Autopilot team there in 2017. During his two days on the stand, Brockman answered questions about his personal financial ambitions, his understanding of OpenAI's structure and Musk's involvement at the company, which they co-founded with other executives in 2015. In Musk's testimony last week, the Tesla and SpaceX CEO said that the time, money and resources he poured into OpenAI had been integral to the company's success. He repeatedly said that he helped recruit the company's top talent. Brockman said Tuesday that while Musk was helpful in convincing some employees to take the leap to join OpenAI, he was a polarizing figure for others. "Elon had a reputation of being an extremely hard driver," Brockman said. He added that "certain candidates were very attracted" by Musk's involvement at OpenAI, and that "certain candidates were very turned off." Musk testified last week that a former OpenAI researcher named Andrej Karpathy joined Tesla, but only after he had planned to leave the startup already. 
Brockman said that Musk, after he hired Karpathy, approached him with "an apology and a confession," about the hire, and that neither Musk nor Karpathy had told him the researcher planned to leave OpenAI before that. Musk was generally not very available for meetings and conversations, Brockman said, so he relied on employees, including Sam Teller and former OpenAI board member Shivon Zilis, as proxies. Brockman testified that open sourcing OpenAI's technology was "not a topic of conversation" during Musk's time with the nonprofit, despite Musk's claims that it was supposed to be central to the organization. He also described tense 2017 negotiations over a possible for-profit arm, saying Musk became angry when equity stakes were discussed. "He said Musk declined the proposal during an in-person meeting, then tore a painting of a Tesla Model 3 car off the wall, and began storming out of the room," reports CNBC. He also demanded to know when the cofounders would leave the company. Brockman further said Musk wanted control of OpenAI because he disliked situations where he lacked control, citing Zip2 and SolarCity as examples Musk had raised. He also testified that Musk partly wanted control to help fund his broader SpaceX ambition of building a "city on Mars." CNBC notes the trial will resume at 8:30 a.m. PT on Wednesday, with Shivon Zilis expected to testify. She is the mother of four of Musk's children and a former OpenAI board member. Recap: OpenAI President Discloses His Stake In the Company Is Worth $30 Billion (Day Five) Musk Concludes Testimony At OpenAI Trial (Day Four) Elon Musk Says OpenAI Betrayed Him, Clashes With Company's Attorney (Day Three) Musk Testifies OpenAI Was Created As Nonprofit To Counter Google (Day Two) Elon Musk and OpenAI CEO Sam Altman Head To Court (Day One)

Read more of this story at Slashdot.

  •  

Apple Agrees To Pay iPhone Owners $250 Million For Not Delivering AI Siri

✇Slashdot
Author: BeauHD

🤖 AI Summary

Apple has agreed to a proposed $250 million settlement over claims that it falsely advertised Apple Intelligence and an upgraded Siri that arrived far later than promised. The settlement covers U.S. buyers of the iPhone 16 lineup and iPhone 15 Pro models purchased between June 10, 2024, and March 29, 2025.

The lawsuit argued that Apple's advertising led consumers to expect Apple Intelligence features to be available at the iPhone 16's launch. In reality, certain AI features arrived weeks after release, and the launch of the more personalized Siri was delayed further.

Last April, the U.S. National Advertising Division recommended that Apple "discontinue or modify" its "available now" claim for Apple Intelligence. Apple also pulled an iPhone 16 TV ad showing actor Bella Ramsey using the AI-upgraded Siri.

The settlement resolves consumers' grievances over Apple's misleading claims and is expected to discourage similar advertising practices in the future.
Apple has agreed to a proposed $250 million settlement over claims that it misled iPhone buyers about the availability of Apple Intelligence and its upgraded Siri features. The settlement would cover U.S. buyers of the iPhone 16 lineup and iPhone 15 Pro models between June 10, 2024, and March 29, 2025. The Verge reports: The settlement will resolve a 2025 lawsuit, alleging Apple's advertisements created a "clear and reasonable consumer expectation" that Apple Intelligence features would be available with the launch of the iPhone 16. The lawsuit claimed Apple's products "offered a significantly limited or entirely absent version of Apple Intelligence, misleading consumers about its actual utility and performance." Apple brought certain AI-powered features to the iPhone 16 weeks after its release, and delayed the launch of its more personalized Siri, which is now expected to arrive later this year. Last April, the National Advertising Division recommended that Apple "discontinue or modify" its "available now" claim for Apple Intelligence. Apple also pulled an iPhone 16 ad showing actor Bella Ramsey using the AI-upgraded Siri.

Read more of this story at Slashdot.

  •  

Coinbase Lays Off Nearly 700 Workers In 'AI-Native' Restructuring

✇Slashdot
Author: BeauHD

🤖 AI Summary

Coinbase is laying off about 700 employees (14% of its workforce) in an "AI-native" restructuring. CEO Brian Armstrong says the company aims to become "lean, fast, and AI-native." According to Armstrong, engineers are already "using AI to ship in days what used to take a team weeks," non-technical teams are now "shipping production code," and Coinbase is automating many of its workflows.

The company is also making the change because the crypto market is down. Armstrong called this "an inflection point, not just for Coinbase, but for every company," saying that "the biggest risk now is not taking action" and that the company needs to "return to the speed and focus of our startup founding."

Coinbase plans to cut management layers and reorganize around "AI-native" talent, and will also experiment with "one person teams" in which a single person serves as engineer, designer, and product manager.
Coinbase is laying off about 700 workers, or 14% of its workforce, as CEO Brian Armstrong says the company is restructuring to become "lean, fast, and AI-native." Engadget reports: Armstrong claimed he'd seen engineers "use AI to ship in days what used to take a team weeks" and that non-technical teams in the company are "shipping production code," while Coinbase is automating many of its workflows. "All of this has led us to an inflection point, not just for Coinbase, but for every company," Armstrong wrote. "The biggest risk now is not taking action. We are adjusting early and deliberately to rebuild Coinbase to be lean, fast and AI-native. We need to return to the speed and focus of our startup founding, with AI at our core." An AI-driven restructuring is only one half of the equation for Coinbase, though. Armstrong wrote that while the company "is well-capitalized, has diversified revenue streams and is well-positioned to weather any storm," the crypto market is down. As such, Coinbase is attempting to become leaner and faster ahead of the next crypto cycle. The company is eliminating some management layers and organizing the business around "AI-native talent who can manage fleets of agents to drive outsized impact," Armstrong wrote. "We'll also be experimenting with reduced pod sizes, including 'one person teams' with engineers, designers and product managers all in one role." That sure sounds like an attempt to get workers to take on more responsibilities.

Read more of this story at Slashdot.

  •  

Google DeepMind Workers Vote To Unionize Over Military AI Deals

✇Slashdot
Author: BeauHD

🤖 AI Summary

Workers at Google DeepMind's London office have voted to unionize in a bid to block the company from providing its technology to the US and Israeli militaries. The push is fundamentally about holding Google to its own ethical standards on AI.

The workers have asked, in a letter addressed to Google's UK managing director, that the Communication Workers Union (CWU) and Unite the Union be recognized as joint representatives. Union recognition would give workers a much stronger collective-bargaining position from which to put demands to what they describe as an increasingly deaf management.

If recognized, the workers are likely to demand that Google pull out of its long-standing contract with the Israeli military, seek greater transparency over how its AI products are used, and ask for assurances around layoffs made possible by automation. If Google does not engage, the employees will ask an arbitration committee to compel the company to recognize the unions.

Since the start of the year, Anthropic and OpenAI have announced large-scale expansions in London, and the CWU believes DeepMind's unionization effort is already influencing workers at other frontier labs.

In February 2025, Google's parent removed language from its AI ethics guidelines that had barred uses such as weapons development and surveillance. Many employees had bought into Google DeepMind's tagline of building AI responsibly to benefit humanity, but say the direction of travel is now toward further militarization.

An anonymous reader quotes a report from Wired: Employees at Google DeepMind in London have voted to unionize as part of a bid to block the AI lab from providing its technology to the US and Israeli militaries. In a letter addressed to Google's managing director for the UK and Ireland, Debbie Weinstein, the workers asked the company to recognize the Communication Workers Union and Unite the Union as joint representatives for DeepMind employees. "Fundamentally, the push for unionization is about holding Google to its own ethical standards on AI, how they monetize it, what the products do, and who they work with," John Chadfield, national officer for technology at the CWU, tells WIRED. "Through the process of unionization, workers are collectively in a much stronger place to put [demands] to an increasingly deaf management." [...] The DeepMind employee tells WIRED that if the staff succeeds in unionizing in the UK, they will likely demand that Google pulls out of its long-standing contract with the Israeli military, and seek greater transparency over how its AI products will be used, and some sort of assurance relating to layoffs made possible by automation. If Google does not engage, the letter states, the employees will ask an arbitration committee to compel the company to recognize the unions. Since the turn of the year, both Anthropic and OpenAI have announced large-scale expansions of their operations in London. CWU hopes the unionization effort at DeepMind will spur workers at those labs into similar action. "These conversations are happening," claims Chadfield. "The workers at other frontier labs have seen what Google DeepMind workers have done. They've come to us asking for help as well." The unionization push began in February 2025 after Alphabet removed a pledge from its AI ethics guidelines that had barred uses such as weapons development and surveillance. 
"A lot of people here bought into the Google DeepMind tagline 'to build AI responsibly to benefit humanity,'" the DeepMind employee told WIRED. "The direction of travel is to further militarization of the AI models we're building here."

Read more of this story at Slashdot.

  •  

Moving To Mainframe Can Be Cheaper Than Sticking With VMware

✇Slashdot
Author: BeauHD

🤖 AI Summary

URL: https://linux.slashdot.org/story/26/05/05/189237/moving-to-mainframe-can-be-cheaper-than-sticking-with-vmware?utm_source=rss1.0mainlinkanon&utm_medium=feed

Summary:
Gartner Vice President Analyst Alessandro Galimberti says that for workloads needing many years of transactional consistency and backward compatibility, such as mission-critical applications, migrating to IBM mainframes can be more cost-effective than staying on VMware's licensing, particularly for fleets of hundreds of Linux virtual machines and applications that need long-term stability.

Galimberti does not recommend the mainframe for every application, however. He says it is best suited to mission-critical applications unlikely to change much over a decade, and to Linux applications, since the open-source OS runs on IBM's hardware. IBM also offers the z/VM hypervisor, which he says can make Linux even more enterprise-ready.

At the same time, he cautions that committing to mainframes takes time and negotiation, with buyers spending effort on price and renewal protections rather than on the business value these solutions can deliver. Lock-in risk may also lead users to hold back on useful customizations, and today's IT engineers tend not to pursue mainframe careers.

Ultimately, he says, the situation may improve as more service providers invest in their mainframe programs.
Gartner says some VMware customers may find it cheaper to move certain Linux VM workloads to IBM mainframes than to adopt Broadcom's new VMware licensing, especially for fleets of hundreds of Linux VMs and mission-critical apps needing long-term stability. The Register reports: Speaking to The Register to discuss the analyst firm's mid-April publication, "The State of the IBM Mainframe in 2026," [Gartner Vice President Analyst Alessandro Galimberti] said some buyers in many fields are comparing mainframes to modern environments and deciding Big Blue's big iron comes out ahead. "I can build a multi-region cloud application, but things like data synchronization and high availability are things I need to build into application logic," he said. "The mainframe has that in the platform, which shields developers from complexity." He also thinks mainframes are ideally suited to workloads that need many years of transactional consistency and backward-compatibility. That said, Galimberti doesn't recommend the mainframe for all applications. He said mission-critical applications that are unlikely to change much for a decade are best-suited to the machines, as are Linux applications because the open source OS runs on IBM's hardware. IBM also offers the z/VM hypervisor, which he says can make Linux "even better and more enterprise-ready." Which is why Galimberti thinks IBM's ecosystem is attractive to VMware users, especially those who operate a fleet of 500 to 700 Linux VMs. [...] Committing to mainframes therefore means planning "to spend time negotiating price and renewal protections, rather than prioritizing the business value these solutions can deliver." Another downside is that mainframes pose clear lock-in risk, so users may hold back on useful customizations out of fear they make it harder to extricate themselves from the platform. Access to skills remains an issue, too, as kids these days mostly don't contemplate a career working with big iron. 
Galimberti sees more service providers investing in their mainframe programs, which might help. So does the availability of Linux.

Read more of this story at Slashdot.

  •  

Kids Bypass Age Verification With Fake Moustaches

✇Slashdot
Author: BeauHD

🤖 AI Summary

Age checks mandated by the UK's Online Safety Act are easy for many children to get past, according to a new survey by Internet Matters. Children reported bypassing verification with fake birthdays, other people's ID cards, video game characters, and even faces with drawn-on mustaches.

Key points:
- In Internet Matters' survey of over 1,000 UK children and their parents, 46% of children said age checks were easy to bypass.
- Children evade the checks with relatively simple methods, such as using video game characters, entering fake birthdays, or borrowing someone else's ID.
- Some parents actively help their children evade age checks (17%) or simply turn a blind eye (a further 9%).

Conclusion:
The Online Safety Act's age checks appear to have limited effect, and parents play a decisive role in how well they work.
A new Internet Matters survey suggests the UK's Online Safety Act age checks are easy for many children to bypass. Reported workarounds include fake birthdays, borrowed IDs, video game characters, and even drawing on a fake mustache. The Register reports: The group surveyed over 1,000 UK children and their parents, and while it did report some positive effects from changes made under the OSA, many children saw age verification as an easy-to-bypass hurdle rather than something that kept them genuinely safe. A full 46 percent of children even said that age checks were easy to bypass, while just 17 percent said that they were difficult to fool. The methods kids use to fool age gates vary, but most are pretty simple: There's the classic use of a video game character to fool video selfie systems, while in other instances, children reported just entering a fake birthday or using someone else's ID card when that was required. The report even cites cases of children drawing a mustache on their faces to fool age detection filters. Seriously. While nearly half of UK kids say it's easy to bypass online age checks (and another 17 percent say it's neither hard nor easy), only 32 percent say they've actually bypassed them, according to Internet Matters. Like scoring some booze from "cool" parents, keeping age-gated content out of the hands of kids under the OSA is only as effective as parents let it be, and a quarter of them enable their kids' online delinquency. More specifically, Internet Matters found that a full 17 percent of parents admitted to actively helping their kids evade age checks, while an additional 9 percent simply turned a blind eye to it.

Read more of this story at Slashdot.

  •  

US Government Warns of Severe CopyFail Bug Affecting Major Versions of Linux

✇Slashdot
Author: BeauHD

🤖 AI Summary

The US government has warned that a severe security bug dubbed "CopyFail," affecting nearly every version of the Linux operating system, is being exploited. According to TechCrunch, the vulnerability allows attackers to take complete control of vulnerable systems. The U.S. cybersecurity agency CISA has ordered all civilian federal agencies to patch affected systems by May 15. The flaw is already being actively used in malicious hacking campaigns.
An anonymous reader quotes a report from TechCrunch: A severe security vulnerability affecting almost every version of the Linux operating system has caught defenders off-guard and scrambling to patch after security researchers publicly released exploit code that allows attackers to take complete control of vulnerable systems. The U.S. government says the bug, dubbed "CopyFail," is now being exploited in the wild, meaning it's being actively used in malicious hacking campaigns. [...] Given the risk to the federal enterprise network, U.S. cybersecurity agency CISA has ordered all civilian federal agencies to patch any affected systems by May 15.

Read more of this story at Slashdot.

  •  

Oscars Bans AI Actors and Writing From Awards

✇Slashdot
Author: BeauHD

🤖 AI Summary

The Academy has clarified that acting must be "demonstrably performed by humans" and writing "must be human-authored" to be eligible for an Oscar nomination. It did not ban AI tools outright: if a filmmaker uses AI elsewhere in a film, such "tools neither help nor harm the chances of achieving a nomination." The Academy will, however, take into account "the degree to which a human was at the heart of the creative authorship" when judging work, and it reserves the right to request more information about the nature of AI use and human authorship if questions arise.

The Academy described the requirements as a "substantive" change in response to the growing use of AI in the film industry. Specifying that awards for acting and writing can only go to work done by humans is new for the Academy.
The Academy has clarified that only human-performed acting and human-authored writing are eligible for Oscar nominations. The Oscars will not ban AI tools broadly, but says it will judge films based on the degree to which humans remain central to the creative work. The BBC reports: The Academy of Motion Picture Arts and Sciences [...], which controls the US film industry's most prestigious award, on Friday issued updated rules for what kind of work in movies and documentaries would be considered eligible for an Oscar as the use of artificial intelligence (AI) technology grows. In updated eligibility requirements, the Academy specified that only acting "demonstrably performed by humans" and that writing "must be human-authored" in order to be nominated for an award. The Academy called the requirements a "substantive" change to the rules for the Oscars. The need to specify awards can only go to acting and writing done by "humans" is new for the academy. [...] However, the academy did not issue a ban on AI use in films more broadly. Outside of acting and writing, if a filmmaker used AI tools in their work, such "tools neither help nor harm the chances of achieving a nomination," the academy wrote. "The Academy and each branch will judge the achievement, taking into account the degree to which a human was at the heart of the creative authorship when choosing which movie to award," the group added. "If questions arise regarding the aforementioned use of generative artificial intelligence, the Academy reserves the right to request more information about the nature of the use and human authorship."

Read more of this story at Slashdot.

  •  