Reading View

California Ghost-Gun Bill Wants 3D Printers To Play Cop, EFF Says

✇Slashdot
Author: BeauHD

🤖 AI Summary

Title: California ghost-gun bill criticized by EFF for wanting 3D printers to play cop

The article covers AB 2047, a bill introduced in the California legislature that would require 3D printer manufacturers to use state-certified software to detect and block print files for firearm parts. EFF's Cliff Braun and Rory Mir argue that the proposal is technically inadequate and would in practice lead to broad surveillance of users' printing activity.

Key points:
1. The bill requires 3D printer manufacturers to use state-certified software to detect and block firearm parts.
2. The EFF argues the proposal is technically infeasible and would in practice lead to user surveillance.
3. The bill would require detection algorithms that check files against a maintained database of existing firearm designs.
4. The EFF notes that detection is easily evaded by making small tweaks to firearm files.

The article also notes that by mandating only certified software, the bill could effectively sideline open-source alternatives, and that false positives could lock legitimate users out of their own hardware.
A proposed California bill would require 3D printer makers to use state-certified software to detect and block files for gun parts, but advocates at the Electronic Frontier Foundation (EFF) say it would be easy to evade and could lead to widespread surveillance of users' printing activity. The Register reports: The bill in question is AB 2047, the scope of which, on paper, appears strict. The primary goal is clear and simple: to require 3D printer manufacturers to use a state-certified algorithm that checks digital design files for firearm components and blocks print jobs that would produce prohibited parts. [...] Cliff Braun and Rory Mir, who respectively work in policy and tech community engagement at the EFF, claim that the proposals in California are technically infeasible and in practice will lead to consumer surveillance. In a series of blog posts published this month, the pair argued that print-blocking technology -- proposals for which have also surfaced in states including New York and Washington - cannot work for a range of technical reasons. They argued that because 3D printers and other types of computer numerical control (CNC) machines are fairly simple, with much of their brains coming from the computer-aided manufacturing (CAM) software -- or slicer software -- to which they are linked, the bill would establish legal and illegal software. Proprietary software will likely become the de facto option, leaving open source alternatives to rot. "Under these proposed laws, manufacturers of consumer 3D printers must ensure their printers only work with their software, and implement firearm detection algorithms on either the printer itself or in a slicer software," wrote Braun earlier this month. "These algorithms must detect firearm files using a maintained database of existing models. Vendors of printers must then verify that printers are on the allow-list maintained by the state before they can offer them for sale. 
Owners of printers will be guilty of a crime if they circumvent these intrusive scanning procedures or load alternative software, which they might do because their printer manufacturer ends support." Braun also argued that it would be trivial for anyone who uses 3D printers to make small tweaks to either the visual models of firearms parts, or the machine instructions (G-code) generated from those models, to evade detection. Mir further argued that the bill offers no guardrails to keep this "constantly expanding blacklist" limited to firearm-related designs. In his view, there is a clear risk that this approach will creep into other forms of alleged unlawful activity, such as copyright infringement. [...] Braun and Mir have a list of other arguments against the bill. They say the algorithms are more than likely to lead to false positives, which will prevent good-faith users from using their hardware. Many 3D printer owners also have no interest in printing firearm components. Most simply want the freedom to print trinkets and spare parts while others use them to print various items and sell them as an income stream.
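Braun's evasion argument is easy to illustrate. The bill does not specify a detection method, but any blocklist keyed on exact file fingerprints fails trivially: changing a single coordinate in a model or its G-code yields a completely different hash. A minimal sketch (the G-code fragment and hashing scheme are illustrative assumptions, not anything from the bill):

```python
import hashlib


def fingerprint(gcode: str) -> str:
    """Hash a G-code file the way a naive blocklist might index known designs."""
    return hashlib.sha256(gcode.encode()).hexdigest()


# A fictional G-code fragment standing in for a blocked design.
original = "G1 X10.000 Y20.000 E0.500\nG1 X12.000 Y20.000 E0.540\n"
# The same toolpath nudged by a micron-scale offset a slicer could easily apply.
tweaked = "G1 X10.001 Y20.000 E0.500\nG1 X12.001 Y20.000 E0.540\n"

print(fingerprint(original) == fingerprint(tweaked))  # False: exact matching is evaded
```

A real system would need fuzzy geometric matching rather than exact hashes, which is precisely where the EFF's false-positive concern comes in.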

Read more of this story at Slashdot.

  •  

Audit Finds Google, Microsoft, and Meta Still Tracking Users After Opt-Out

✇Slashdot
Author: BeauHD

🤖 AI Summary

A report says Google, Microsoft, and Meta may be violating California privacy rules and could face billions of dollars in fines. The audit, conducted by privacy search engine webXray across more than 7,000 websites, found that 55 percent of the sites set advertising cookies even when users had opted out of tracking.

• Google ignored users' opt-outs in 87 percent of cases: even when the browser sends the opt-out signal as the header "sec-gpc: 1", Google's servers explicitly respond with a command to create an advertising cookie named IDE.

• Microsoft has the same problem, ignoring users' opt-outs in 50 percent of cases.

• Meta ignored users' attempts to opt out of tracking in 69 percent of cases, and the tracking code it instructs websites to install contains no check for the globally standard opt-out signal.

Each company disputed the research.
alternative_right shares a report from 404 Media: An independent privacy audit of Microsoft, Meta, and Google web traffic in California found that the companies may be violating state regulations and racking up billions in fines. According to the audit from privacy search engine webXray, 55 percent of the sites it checked set ad cookies in a user's browser even if they opted out of tracking. Each company disputed or took issue with the research, with Google saying it was based on a "fundamental misunderstanding" of how its product works. The webXray California Privacy Audit viewed web traffic on more than 7,000 popular websites in California in the month of March and found that most tech companies ignore when a user asks to opt-out of cookie tracking. California has stringent and well defined privacy legislation thanks to its California Consumer Privacy Act (CCPA) which allows users to, among other things, opt out of the sale of their personal information. There's a system called Global Privacy Control (GPC), which includes a browser extension that indicates to a website when a user wants to opt out of tracking. According to the webXray audit, Google failed to let users opt out 87 percent of the time. "Google's failure to honor the GPC opt-out signal is easy to find in network traffic. When a browser using GPC connects to Google's servers it encodes the opt-out signal by sending the code 'sec-gpc: 1.' This means Google should not return cookies," the audit said. "However, when Google's server responds to the network request with the opt-out it explicitly responds with a command to create an advertising cookie named IDE using the 'set-cookie' command. This non-compliance is easy to spot, hiding in plain sight." The audit said that Microsoft fails to opt out users in the same way and has a failure rate of 50 percent in the web traffic webXray viewed. Meta's failure rate was 69 percent and a bit more comprehensive. 
"Meta instructs publishers to install the following tracking code on their websites. The code contains no check for globally standard opt-out signals -- it loads unconditionally, fires a tracking event, and sets a cookie regardless of the consumer's privacy preferences," the audit said. It showed a copy of Meta's tracking data which contains no GPC check at all.
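The audit's detection logic can be sketched as a simple header check: a request carrying the `Sec-GPC: 1` opt-out signal should not be answered with an advertising `Set-Cookie`. This is an illustrative reconstruction, not webXray's actual code, and the set of "ad cookie" names is an assumption (IDE is the Google cookie named in the audit; `_fbp` is Meta's browser pixel cookie):

```python
def violates_gpc(request_headers: dict, response_headers: list) -> bool:
    """Flag a response that sets an ad cookie despite a Sec-GPC opt-out.

    request_headers:  lower-cased request header map
    response_headers: list of (name, value) pairs, since Set-Cookie may repeat
    """
    opted_out = request_headers.get("sec-gpc") == "1"
    if not opted_out:
        return False  # no opt-out signal, nothing to enforce
    ad_cookie_names = {"IDE", "_fbp"}  # illustrative: Google and Meta ad cookies
    for name, value in response_headers:
        if name.lower() == "set-cookie":
            cookie_name = value.split("=", 1)[0].strip()
            if cookie_name in ad_cookie_names:
                return True  # the "hiding in plain sight" pattern the audit describes
    return False


# Opt-out sent, IDE cookie set anyway -- the non-compliance webXray reports.
print(violates_gpc({"sec-gpc": "1"},
                   [("Set-Cookie", "IDE=abc123; SameSite=None; Secure")]))  # True
```

On the publisher side, the equivalent fix for the Meta pixel issue the audit describes is to gate the tracking snippet on the browser's `navigator.globalPrivacyControl` flag before it loads.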

Read more of this story at Slashdot.

  •  

Chrome Now Lets You Turn AI Prompts Into Repeatable 'Skills'

✇Slashdot
Author: BeauHD

🤖 AI Summary

Google has introduced a "Skills" feature in Chrome that lets users save frequently used Gemini prompts as reusable one-click workflows. The feature is launching first for desktop Chrome users set to US English.

Skills can be created in two ways: 1) saving a prompt directly from your Gemini chat history, or 2) using a preset Skill provided by Google and customizing it as needed. Either way, users can rerun frequently used Gemini prompts without retyping them.

Examples include Skills that calculate the nutritional information of online recipes or compare product specs across multiple tabs. The feature is aimed at efficiency, sparing users from building their own prompts from scratch.
Google is rolling out a Chrome feature called "Skills" that lets users save Gemini prompts as reusable one-click workflows they can run across multiple tabs. The feature also includes preset Skills from Google. It's launching first for Chrome desktop users set to US English. The Verge reports: Once you have access to the feature, it can be managed by typing a forward slash ( / ) in Gemini and clicking the compass icon. AI prompts can be saved as Skills directly from your Gemini chat history on desktop, where they'll then be available to reuse on any other desktop devices that are signed into the same Google account on Chrome. The aim is to spare Chrome users from having to manually retype frequently used Gemini prompts or having to copy and paste them over from a saved list. Some of the Skills made by early testers include commands for calculating the nutritional information of online recipes and creating a side-by-side comparison of product specifications while shopping across multiple tabs, according to Google. The company is also launching a library of preset Skills that you can save and use instead of making your own. These ready-to-use Skills can also be customized to better suit your needs, providing a starting point without requiring you to create your own from scratch.

Read more of this story at Slashdot.

  •  

Thousands of Rare Concert Recordings Are Landing On the Internet Archive

✇Slashdot
Author: BeauHD

🤖 AI Summary

Aadam Jacobs, a Chicago concert superfan, has recorded more than 10,000 live shows since the 1980s and is working with the Internet Archive to digitize the collection. About 2,500 tapes have already been uploaded, including rare recordings such as a 1989 Nirvana performance.

Jacobs recorded with fairly mediocre equipment, but volunteer audio engineers working with the Internet Archive have been able to get high-quality playback from the tapes. One volunteer, Brian Emerick, regularly drives to Jacobs' house to pick up more tapes.

Other volunteers then clean, organize, and label the recordings, even tracking down song titles from old bands. The archive is publicly available online.
Aadam Jacobs, a Chicago concert superfan who has recorded more than 10,000 shows since the 1980s, is working with Internet Archive volunteers to digitize the collection before the cassettes deteriorate. "So far, about 2,500 of these tapes have been posted on the Internet Archive, including some rare gems like a Nirvana performance from 1989," reports TechCrunch. From the report: For many of these recordings, Jacobs was using pretty mediocre equipment, but the volunteer audio engineers working with the Internet Archive have made these tapes sound great. One volunteer, Brian Emerick, drives to Jacobs' house once a month to pick up more boxes of tapes -- he has to use anachronistic cassette decks to play the tapes, which get converted into digital files. From there, other volunteers clean up, organize, and label the recordings, even tracking down song names from forgotten punk bands. The archive is available here.

Read more of this story at Slashdot.

  •  

Social Media Platforms Need To Stop Never-Ending Scrolling, UK's Starmer Says

✇Slashdot
Author: BeauHD

🤖 AI Summary

UK Prime Minister Keir Starmer says social media platforms should remove infinite-scroll features for young users as part of child-safety measures. Starmer told BBC Radio that the government is consulting on a ban for under-16s, but that in his view the addictive scrolling mechanisms themselves are equally problematic and need to go.

The UK government is running a consultation on children's online safety, to which more than 45,000 people have already responded; the deadline is May 26. Technology Secretary Liz Kendall said the government wants to hear from parents worried about how much time their children spend online and what they are viewing.

Starmer said social media companies have designed algorithms intended to encourage addictive behavior, and that parents are asking the government to intervene. Britain, like other countries, is considering restricting children's access to social media and is testing bans, curfews, and app time limits to see how they affect sleep, family life, and schoolwork.
UK Prime Minister Keir Starmer said social media platforms should remove addictive infinite-scroll features for young users as Britain considers new child-safety measures. "We're consulting on whether there should be a ban for under 16s," Starmer told BBC Radio. "But I think equally important, the addictive scrolling mechanisms are really problematic to my mind. They need to go." Reuters reports: Britain, like other countries, is considering restricting access to social media for children and it is testing bans, curfews and app time limits to see how they impact sleep, family life and schoolwork. Social media companies had designed algorithms that were intended to encourage addictive behavior, and parents were asking the government to intervene, Starmer said. [...] More than 45,000 people had already responded to its consultation on children's online safety, the UK government said, adding that there was still time to contribute before a deadline of May 26. "We want to hear from mums and dads who are worried about the amount of time their children spend online and what they are viewing," Technology Secretary Liz Kendall said on Monday. "We want to hear from teenagers who know better than anyone what it is like to grow up in the age of social media. And we want to hear from families about their views on curfews, AI chatbots and addictive features."

Read more of this story at Slashdot.

  •  

Google Faces Mass Arbitration By Advertisers Seeking Billions

✇Slashdot
Author: BeauHD

🤖 AI Summary

Alphabet's Google faces mass arbitration by advertisers

According to Bloomberg, Alphabet's Google could face billions of dollars in damage claims after courts ruled that its online search and advertising-technology businesses are illegal monopolies. The claims include mass arbitration, in which 25 or more claims against the same company are pooled, with many advertisers expected to participate.

Several companies that displayed ads purchased through Google (including USA Today Co. and Advance Publications) have already sued for damages since the rulings, but advertiser contracts with Google require arbitration for individual disputes. Attorney Ashley Keller, whose firm has handled mass arbitrations against DoorDash, Postmates, and TurboTax-maker Intuit, says he has signed up a significant number of advertisers and that "it seems sensible to seek redress."

Keller's firm also represents states suing Google over monopolization of advertising technology. He estimates that potential claims for online search and display ads could reach $218 billion or more, and says similar mass arbitrations have taken roughly 12 to 24 months from filing to resolution.

Google said it cannot estimate a possible loss, but that it has strong arguments against the open claims and will defend itself vigorously.
An anonymous reader quotes a report from Bloomberg: Alphabet's Google is facing billions of dollars in potential damage claims as part of mass arbitration tied to the company's online search and advertising technology businesses, which courts have ruled were illegal monopolies. Advertisers are banding together to seek payouts through mass arbitration proceedings. While many companies that displayed ads purchased through Google -- including USA Today Co. and Advance Publications -- have sued for damages since the rulings in 2024, advertiser contracts with the search giant require mandatory arbitration over legal disputes. In arbitration, legal disputes are handled by a mediator, a process that tends to favor companies in individual claims. Mass arbitration -- where 25 or more claims against the same company are pooled together -- have become more common and provide a greater likelihood of settlement awards for claimants. Ashley Keller, a Chicago lawyer whose firm has handled mass arbitrations against DoorDash, Postmates and TurboTax-maker Intuit, said he's already signed up a "significant number" of advertisers to participate in claims against Google. The first of those are expected to be filed this week. "Two federal judges have already adjudicated Google to be a monopolist," Keller said in an interview with Bloomberg. "It seems sensible to seek redress." Keller, who is also representing Texas and other states in a lawsuit against Google for monopolization of advertising technology, estimates potential claims for online search and display ads could reach $218 billion or more, based on calculations from an economist his firm has hired. Similar mass arbitrations have lasted 12 to 24 months between the filing of claims and resolution, he said. "Given the nature of these matters, we cannot estimate a possible loss," Google said in a recent corporate filing. "We believe we have strong arguments against these open claims and will defend ourselves vigorously."

Read more of this story at Slashdot.

  •  

A New Computer Chip Could Finally Withstand The Hellscape of Venus

✇Slashdot
Author: BeauHD

🤖 AI Summary

Researchers at the University of Southern California report that they have developed a memristor memory device that kept operating at 700 degrees Celsius. According to ScienceAlert, 700 degrees was not the limit; it was simply as hot as the testing equipment could go. The device showed no signs of failing and continued to operate normally.

A memristor is a nanoscale component that can both store information and perform computing operations. It consists of two electrode layers with a thin ceramic layer in between. The team built theirs from tungsten, the metal with the highest melting point of any element, a ceramic called hafnium oxide, and a layer of graphene at the bottom.

Graphene is the key ingredient because of how it interacts with tungsten: at high temperatures it stops metal atoms from slowly drifting through the ceramic layer and bridging the two electrodes, which would cause a short circuit. Using advanced electron microscopy and quantum-level computer simulations, the researchers worked out exactly why, turning a single result into a repeatable principle, and published the findings in the journal Science.
Researchers at the University of Southern California say they've developed a memristor memory device that continued operating at 700 degrees Celsius. "And crucially, 700 degrees was not the limit, it was simply as hot as their testing equipment could go," adds ScienceAlert. "The device showed no signs of failing." From the report: The device is called a memristor and it's a nanoscale component that can both store information and perform computing operations. Think of it as a tiny sandwich with two electrode layers on the outside and a thin ceramic filling in the middle. The team built theirs from tungsten, the metal with the highest melting point of any element, combined with a ceramic called hafnium oxide, and with a layer of graphene at the bottom. Each material can withstand enormous heat. Together, they turned out to be extraordinary. What makes graphene the key ingredient is the way it interacts with tungsten at the atomic level. In a conventional device, heat causes metal atoms to drift slowly through the ceramic layer until they bridge the two electrodes, short circuiting everything and leaving the device permanently broken. Graphene stops that process dead. Its surface chemistry with tungsten is ... almost like oil and water. Tungsten atoms that drift toward the graphene find they simply cannot take hold, no anchor, no short circuit, no failure. The team used advanced electron microscopy and quantum level computer simulations to understand exactly why, turning a single lucky result into a repeatable principle. The findings have been published in the journal Science.

Read more of this story at Slashdot.

  •  

Air Force Pushed Out UFO Investigator

✇Slashdot
Author: BeauHD

🤖 AI Summary

J. Allen Hynek was originally brought in as an Air Force consultant to help explain away UFO reports. Over time, however, he grew distrustful of the government's tendency to downplay unexplained cases rather than investigate them seriously. Hynek's shift from skeptic to advocate is credited with helping shape modern ufology.

In 2024 the Department of Defense released its "Report on the Historical Record of U.S. Government Involvement with Unidentified Anomalous Phenomena (UAP)." According to the report, the DoD's All-Domain Anomaly Resolution Office (AARO) "found no evidence that any [U.S. Government] investigation, academic-sponsored research, or official review panel has confirmed that any sighting of a UAP represented extraterrestrial technology."

AARO, established in 2022, is described as a government office for detecting and mitigating anomalous, unidentified space, airborne, submerged, and transmedium objects. The report directly contradicted the August 2023 testimony of "whistleblower" Dave Grusch.

According to the report, while Hynek was working on Project Blue Book (the Air Force's best-known UFO investigation program), about 75 percent of Americans trusted the government "to do the right thing almost always or most of the time." Since 2007, however, that figure has never risen above 30 percent.

By restricting Hynek's public responses, the Air Force's efforts backfired: trying to quiet suspicion only fueled it, feeding more conspiracy theories and distrust. People came to believe the government was hiding the truth, contrary to Hynek's actual finding that the people at the top may not have cared much about finding the answers at all.
J. Allen Hynek started as an Air Force consultant brought in to help explain away early UFO reports, but over time he grew frustrated with what he saw as the government's effort to minimize unexplained cases rather than seriously investigate them. Longtime Slashdot reader schwit1 shares an article from Popular Mechanics, in collaboration with Biography.com, that argues Hynek's shift from skeptic to advocate helped shape modern ufology, and that the Air Force's attempts to control the narrative may have deepened the public distrust and conspiracy thinking that followed. From the report: Do you think the U.S. government is hiding, and possibly reverse-engineering, extraterrestrial technology? Think again. Or better yet, don't think about it at all. Nothing to see here. That's the underlying message of a report released in 2024 by the Department of Defense. The 63-page "Report on the Historical Record of U.S. Government Involvement with Unidentified Anomalous Phenomena (UAP) " concludes that the DoD's All-Domain Anomaly Resolution Office (AARO) "found no evidence that any [U.S. Government] investigation, academic-sponsored research, or official review panel has confirmed that any sighting of a UAP represented extraterrestrial technology." The AARO, as The Guardian summarizes, is "a government office established in 2022 to detect and, as necessary, mitigate threats including 'anomalous, unidentified space, airborne, submerged and transmedium objects.'" This report came on the heels of, and in contradiction to, what was arguably the most high-profile hearing on UAPs -- formerly known as unidentified flying objects, or UFOs -- in decades: the August 2023 testimony of "whistleblower" Dave Grusch. [...] The 2024 AARO report stated that during the time Hynek was working with Project Blue Book [the U.S. 
Air Force's best-known UFO investigation program], "about 75 percent of Americans trusted the [US government] 'to do the right thing almost always or most of the time.'" But, the report noted, since 2007, that number has never risen above 30 percent. "This lack of trust probably has contributed to the belief held by some subset of the U.S. population that the USG has not been truthful regarding knowledge of extraterrestrial craft." Ultimately, the Air Force's efforts to stifle Hynek -- pressuring him to offer the public standard responses to questions he wasn't even allowed to ask -- appears to have backfired. Ironically, the Air Force's attempts to quiet suspicions only fueled them, leading to more conspiracy theories and distrust. People came to believe that the government was hiding the truth, contrary to Hynek's actual revelation: that, in reality, the people at the top may not care much about finding the answers after all.

Read more of this story at Slashdot.

  •  

WeatherBug Data Says October 8 Is the Real Perfect Date

✇Slashdot
Author: BeauHD

🤖 AI Summary

Title: Weather company WeatherBug's data says October 8 is the real perfect date

The article explains that WeatherBug's analysis contradicts the pop-culture notion of April 25 as the ideal date. Analyzing weather data from roughly 20 million U.S. users, WeatherBug concluded that October 8 offers the year's best combination of temperature and precipitation, averaging about 66°F with just 0.0573 inches of rain.

April 25, by contrast, averages about 60°F and roughly 0.1297 inches of rain, ranking only 80th. July dominates the hottest days of the year, while the coldest is January 20, averaging 33°F nationally.

No single date guarantees perfect weather across a country as large as the U.S., but the numbers suggest early October is likely one of the most reliable windows nationwide for comfortable outdoor activity.
BrianFagioli shares a report from NERDS.xyz: For years pop culture has treated April 25 as the "perfect date," thanks to the famous Miss Congeniality line about needing only a light jacket. But new analysis from WeatherBug suggests that idea does not actually hold up when you look at the numbers. After reviewing U.S. weather data from 2018 through today, the company concluded that October 8 delivers the most reliable combination of comfortable temperatures and low rainfall nationwide. According to the analysis, the average conditions on that day land around 66F with just 0.0573 inches of precipitation. The study used population weighted weather data drawn from roughly 20 million daily WeatherBug users across the United States. When the company compared all days of the year, April 25 ranked only 80th, averaging about 60F and roughly 0.1297 inches of rain. The broader dataset also shows July dominating the hottest days of the year while January owns the coldest, with January 20 averaging just 33F nationally. While no single date guarantees perfect weather everywhere in a country as large as the U.S., the numbers suggest early October may quietly offer one of the most reliable windows for comfortable outdoor conditions.
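WeatherBug has not published its exact ranking formula, but the idea of scoring each calendar date by closeness to a comfortable temperature and by low precipitation can be sketched as follows. The 66°F target, the rain penalty weight, and the per-date averages beyond the two quoted in the article are assumptions for illustration, not WeatherBug's actual methodology:

```python
def comfort_score(avg_temp_f: float, avg_precip_in: float,
                  ideal_temp_f: float = 66.0, rain_penalty: float = 100.0) -> float:
    """Lower is better: distance from an ideal temperature plus a rain penalty."""
    return abs(avg_temp_f - ideal_temp_f) + rain_penalty * avg_precip_in


# Per-date national averages (temp in degrees F, precipitation in inches).
# Oct 8 and Apr 25 figures are from the article; Jan 20 precipitation is assumed.
dates = {
    "Oct 8":  (66.0, 0.0573),
    "Apr 25": (60.0, 0.1297),
    "Jan 20": (33.0, 0.0800),
}

ranked = sorted(dates, key=lambda d: comfort_score(*dates[d]))
print(ranked[0])  # Oct 8 ranks first under this scoring
```

The real analysis also weights by population so that each day's "national average" reflects where users actually live, which a fuller sketch would fold in before scoring.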

Read more of this story at Slashdot.

  •  

Stanford Report Highlights Growing Disconnect Between AI Insiders and Everyone Else

✇Slashdot
Author: BeauHD

🤖 AI Summary

According to Stanford University's latest annual report on the AI industry, the views of AI experts and the general public are increasingly diverging. In the U.S. in particular, concern is growing about AI's impact on societal areas such as jobs, medical care, and the economy.

Public skepticism persists: a Pew Research survey found that only 10% of Americans were more excited than concerned about the increased use of AI in daily life. Meanwhile, 56% of AI experts believe AI will have a positive impact on the U.S. over the next 20 years.

In medical care, 84% of experts said AI would have a largely positive impact, yet only 44% of the general public agreed. Likewise, 73% of experts felt positive about AI's impact on how people do their jobs, compared with just 23% of the public.

On trust in government to regulate AI responsibly, the U.S. ranked lowest among the countries surveyed at 31%, with Singapore highest at 81%. On regulation, 41% of respondents said federal AI regulation will not go far enough.

Even so, the share of people who feel AI products and services offer more benefits than drawbacks rose slightly between 2024 and 2025 (to 59%), while the share who say AI makes them "nervous" grew to 52% over the same period.
An anonymous reader quotes a report from TechCrunch: AI experts and the public's opinion on the technology are increasingly diverging, according to Stanford University's annual report on the AI industry, which was released Monday. In particular, the report noted a growing trend of anxiety around AI and, in the U.S., concerns about how the technology will impact key societal areas, such as jobs, medical care, and the economy. [...] Stanford's report provides more insight into where all this negativity is coming from, as it summarizes data around public sentiment of AI across various sources. For instance, it pointed to a report from Pew Research published last month, which noted that only 10% of Americans said they were more excited than concerned about the increased use of AI in daily life. Meanwhile, 56% of AI experts said they believed AI would have a positive impact on the U.S. over the next 20 years. Expert opinion and public sentiment also greatly diverged in particular areas where AI could have a societal impact. Indeed, 84% of experts, the report authors noted, said that AI would have a largely positive impact on medical care over the next 20 years, but only 44% of the U.S. general public said the same. Plus, a majority (73%) of experts felt positive about AI's impact on how people do their jobs, compared with just 23% of the public. And 69% of experts felt that AI would have a positive impact on the economy. Given the supposed AI-fueled layoffs and disruptions to the workplace, it's not surprising that only 21% of the public felt similarly. Other data from Pew Research, cited by the report, noted that AI experts were less pessimistic on AI's impact on the job market, while nearly two-thirds of Americans (or 64%) said they think AI will lead to fewer jobs over the next 20 years. The U.S. also reported the lowest trust in its government to regulate AI responsibly, compared with other nations, at 31%. 
Singapore ranked highest at 81%, per data pulled from Ipsos found in Stanford's report. Another source looked at regulation concerns on a state-by-state level and concluded that, nationwide, 41% of respondents said federal AI regulation will not go far enough, while only 27% said it would go "too far." Despite the fears and concerns, AI did get one accolade: Globally, those who feel like AI products and services offer more benefits than drawbacks slightly rose from 55% in 2024 to 59% in 2025. But at the same time, those respondents who said that AI makes them "nervous" grew from 50% to 52% during the same period, per data cited by the report's authors.

Read more of this story at Slashdot.

  •  

Will Some Programmers Become 'AI Babysitters'?

🤖 AI Summary

The article "Will Some Programmers Become 'AI Babysitters'?" discusses the possibility that some programmers will become caretakers of AI. According to Google.org's Maggie Johnson, "AI may allow anyone to generate code, but only a computer scientist can maintain a system."

As AI-generated code becomes more accurate and ubiquitous, the programmer's role is expected to shift from author to technical auditor or expert. Because large language models lack contextual judgment and specialized knowledge, humans are needed to confirm that generated code is safe, efficient, and integrates correctly with larger systems.

Computer scientists will therefore need to do "forensic" work to verify code and find security flaws, and the article argues that modern programming education should give students the technical depth to verify and secure these black-box outputs.

The New York Times reports that companies are already struggling to find engineers to review AI-generated code.
Will some programmers become "AI babysitters"? asks long-time Slashdot reader theodp. They share some thoughts from a founding member of Code.org and former Director of Education at Google: "AI may allow anyone to generate code, but only a computer scientist can maintain a system," explained Google.org Global Head Maggie Johnson in a LinkedIn post. So "As AI-generated code becomes more accurate and ubiquitous, the role of the computer scientist shifts from author to technical auditor or expert. "While large language models can generate functional code in milliseconds, they lack the contextual judgment and specialized knowledge to ensure that the output is safe, efficient, and integrates correctly within a larger system without a person's oversight. [...] The human-in-the-loop must possess the technical depth to recognize when a piece of code is sub-optimal or dangerous in a production environment. [...] We need computer scientists to perform forensics, tracing the logic of an AI-generated module to identify logical fallacies or security loopholes. Modern CS education should prepare students to verify and secure these black-box outputs." The NY Times reports that companies are already struggling to find engineers to review the explosion of AI-written code.

Read more of this story at Slashdot.

  •  

Amazon May Sell Trainium AI Chips To Third Parties In Shot At Nvidia

✇Slashdot
Author: BeauHD

🤖 AI Summary

Amazon has indicated it may eventually sell its Trainium AI chips to third parties, not just through AWS. CEO Andy Jassy said the company's chip business is already running at more than $20 billion annually, with strong demand ahead: Trainium2 and Trainium3 are essentially sold out, and even the fourth generation already has substantial pre-orders.

According to Jassy, all three product lines (Trainium, Graviton, and Nitro) are growing at more than 100% year over year, and a full-scale Trainium rollout could cut annual capital costs by tens of billions of dollars while meaningfully widening profit margins.

This is part of a strategy that would put Amazon in more direct competition with Nvidia as it expands its chip business. The chips are currently accessible only through AWS, but standalone sales are under consideration for the future.
Amazon CEO Andy Jassy says the company may eventually sell its Trainium AI chips directly to outside customers, not just through AWS, which would put Amazon in more direct competition with Nvidia. "There's so much demand for our chips that it's quite possible we'll sell racks of them to third parties in the future," Jassy wrote in his annual shareholder letter Thursday. He also revealed the company's chip business is already running at more than $20 billion annually, with demand so strong that current and even future generations are largely spoken for. Quartz reports: Access to Amazon's chips is currently limited to Amazon Web Services, with customers paying for cloud-based usage rather than owning any physical hardware. Selling to AWS and external customers alike, as standalone chipmakers do, would put annual revenue at around $50 billion, up from the $20 billion the company estimates for the year, Jassy said. The $20 billion figure spans three product lines: Trainium, the AI accelerator chip; Graviton, a general-purpose processor; and Nitro, a chip that helps run Amazon's EC2 server instances. All three are growing at triple-digit rates year over year, Jassy claimed in his letter. Jassy said demand for Trainium has outpaced supply at each generation. Trainium2 is essentially unavailable, with its entire allocated capacity spoken for. Trainium3 started reaching customers in early 2026, and reservations have filled nearly all available supply. Even Trainium4 -- which is not expected to reach wide release for another year and a half -- has substantial pre-orders committed. Jassy argued that a full-scale Trainium rollout could shave tens of billions off annual capital costs while meaningfully widening profit margin.

Read more of this story at Slashdot.

  •  

OpenAI To Limit New Model Release On Cybersecurity Fears

✇Slashdot
Author: BeauHD

🤖 AI Summary

OpenAI is reportedly preparing a new cybersecurity product for a limited group of partner companies, out of concern that a broader release could cause serious harm. The approach resembles Anthropic's limited release of its Mythos model and Project Glasswing initiative.

OpenAI launched its "Trusted Access for Cyber" pilot program in February, after releasing GPT-5.3-Codex, the company's most cyber-capable model. Organizations in the invite-only program are given access to even more cyber-capable or permissive models, and OpenAI initially committed $10 million in API credits to participants.

Security researcher Stanislav Fort says that restricting the rollout makes more sense if the concern is models' ability to write new exploits, and the staggered-release approach resembles how the industry handles responsible disclosure of software vulnerabilities.
OpenAI is reportedly preparing a new cybersecurity product for a small group of partners, out of concern that a broader rollout could wreak havoc if it were released more widely. If that move sounds familiar, it's because Anthropic took a similar limited-release approach with its Mythos model and Project Glasswing initiative. Axios reports: OpenAI introduced its "Trusted Access for Cyber" pilot program in February after rolling out GPT-5.3-Codex, the company's most cyber-capable reasoning model. Organizations in the invite-only program are given access to "even more cyber capable or permissive models to accelerate legitimate defensive work," according to a blog post. At the time, OpenAI committed $10 million in API credits to participants. [...] Restricting the rollout of a new frontier model makes "more sense" if companies are concerned about models' ability to write new exploits -- rather than about their ability to find bugs in the first place, Stanislav Fort, CEO of security firm Aisle, told Axios. Staggering the release of new AI models looks a lot like how cybersecurity vendors currently handle the disclosure of security flaws in software, Lee added. "It's the same debate we've had for decades around responsible vulnerability disclosure," Lee said.

Read more of this story at Slashdot.

  •  

Hacker Steals 10 Petabytes of Data From China's Tianjin Supercomputer Center

✇Slashdot
Author: BeauHD

🤖 AI Summary

A cyberattacker has stolen a massive trove of sensitive data from the National Supercomputing Center in Tianjin, China. According to CNN, the attacker extracted more than 10 petabytes from the state-run supercomputer over the course of multiple months and posted a portion of it on an anonymous Telegram channel.

The stolen data allegedly includes highly classified defense documents and missile schematics, and is claimed to be linked to major organizations such as the Aviation Industry Corporation of China and the Commercial Aircraft Corporation of China. Experts' initial assessments suggest the leak is genuine; payment for access to the data was requested in cryptocurrency.

CNN has not verified the claims, but experts who reviewed the material agree that the attacker gained entry with comparative ease and siphoned out huge amounts of data from multiple organizations without being detected. The sample data reportedly includes documents marked "secret," technical files, and animated simulations of defense equipment such as bombs and missiles.

The incident is drawing attention as potentially the largest known data heist from China.
An anonymous reader quotes a report from CNN: A hacker has allegedly stolen a massive trove of sensitive data -- including highly classified defense documents and missile schematics -- from a state-run Chinese supercomputer in what could potentially constitute the largest known heist of data from China. The dataset, which allegedly contains more than 10 petabytes of sensitive information, is believed by experts to have been obtained from the National Supercomputing Center (NSCC) in Tianjin -- a centralized hub that provides infrastructure services for more than 6,000 clients across China, including advanced science and defense agencies. Cyber experts who have spoken to the alleged hacker and reviewed samples of the stolen data they posted online say they appeared to gain entry to the massive computer with comparative ease and were able to siphon out huge amounts of data over the course of multiple months without being detected. An account calling itself FlamingChina posted a sample of the alleged dataset on an anonymous Telegram channel on February 6, claiming it contained "research across various fields including aerospace engineering, military research, bioinformatics, fusion simulation and more." The group alleges the information is linked to "top organizations" including the Aviation Industry Corporation of China, the Commercial Aircraft Corporation of China, and the National University of Defense Technology. Cyber security experts who have reviewed the data say the group is offering a limited preview of the alleged dataset, for thousands of dollars, with full access priced at hundreds of thousands of dollars. Payment was requested in cryptocurrency. CNN cannot verify the origins of the alleged dataset and the claims made by FlamingChina, but spoke with multiple experts whose initial assessment of the leak indicated it was genuine. 
The alleged sample data appeared to include documents marked "secret" in Chinese, along with technical files, animated simulations and renderings of defense equipment including bombs and missiles.

Read more of this story at Slashdot.

  •  

EFF Is Leaving X

✇Slashdot
Author: BeauHD

🤖 AI Summary

After nearly 20 years on the platform, EFF has announced it is leaving X. "This isn't a decision we made lightly, but it might be overdue," the digital rights group said. In 2018 it posted to Twitter (now known as X) five to ten times a day, and those tweets drew 50 to 100 million impressions per month. By 2024, its 2,500 posts generated only around 2 million impressions per month, and last year's 1,500 posts earned roughly 13 million impressions for the entire year. "To put it bluntly, an X post today receives less than 3% of the views a single tweet delivered seven years ago," EFF said.

Exercising rights online matters, and X is no longer where that fight is happening. The platform Musk took over was imperfect but impactful; what exists today is further diminished and of limited use. EFF wins by taking on big fights, and going forward it will expand its activity on Bluesky, Mastodon, LinkedIn, Instagram, TikTok, Facebook, YouTube, and eff.org. "Our work protecting digital rights is needed more than ever," EFF said, asking for continued support.
After nearly 20 years on the platform, The Electronic Frontier Foundation (EFF) says it is leaving X. "This isn't a decision we made lightly, but it might be overdue," the digital rights group said. "The math hasn't worked out for a while now." From the report: We posted to Twitter (now known as X) five to ten times a day in 2018. Those tweets garnered somewhere between 50 and 100 million impressions per month. By 2024, our 2,500 X posts generated around 2 million impressions each month. Last year, our 1,500 posts earned roughly 13 million impressions for the entire year. To put it bluntly, an X post today receives less than 3% of the views a single tweet delivered seven years ago. [...] When you go online, your rights should go with you. X is no longer where the fight is happening. The platform Musk took over was imperfect but impactful. What exists today is something else: diminished, and increasingly de minimis. EFF takes on big fights, and we win. We do that by putting our time, skills, and our members' support where they will effect the most change. Right now, that means Bluesky, Mastodon, LinkedIn, Instagram, TikTok, Facebook, YouTube, and eff.org. We hope you follow us there and keep supporting the work we do. Our work protecting digital rights is needed more than ever before, and we're here to help you take back control.
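EFF's "less than 3%" figure can be sanity-checked from the numbers in the post. A quick back-of-the-envelope sketch (the midpoints of EFF's stated ranges and a 30-day month are my assumptions, not EFF's):

```python
# Per-post impressions, then vs. now, from the figures EFF reported.
# Midpoints of EFF's ranges and a 30-day month are assumed here.
posts_per_day_2018 = 7.5            # midpoint of "five to ten times a day"
monthly_impressions_2018 = 75e6     # midpoint of "50 and 100 million impressions per month"
per_tweet_2018 = monthly_impressions_2018 / (posts_per_day_2018 * 30)

# "our 1,500 posts earned roughly 13 million impressions for the entire year"
per_post_now = 13e6 / 1500

ratio = per_post_now / per_tweet_2018
print(f"2018: ~{per_tweet_2018:,.0f} views/tweet; now: ~{per_post_now:,.0f}; ratio: {ratio:.1%}")
```

With those midpoints a 2018 tweet averaged roughly 333,000 views against about 8,700 today, a ratio of around 2.6%, consistent with EFF's claim.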

Read more of this story at Slashdot.

  •  

Waymo Is Offering To Help Cities Fix Their Potholes

✇Slashdot
Author: BeauHD

🤖 AI Summary

Waymo is launching a pothole-data-sharing pilot with cities and Google's Waze, using data collected by its robotaxis to give municipal transportation departments a new way to find and fix road damage faster. "We realized, hey, once we're at scale, we can actually share this data with cities," said Arielle Fleisher, Waymo's policy development and research manager.

Waymo's perception hardware (cameras and radar), together with accelerometers and the vehicle's physical feedback system, detects physical changes in the road surface, such as tilt and movement when a vehicle encounters an irregularity. Waymo originally built this capability so its vehicles could avoid damage or injury to passengers; only later did the company realize the data could be valuable to cities as well.

Under the new pilot, that data will be made available to city transportation departments through the free-to-use Waze for Cities platform, which provides real-time, user-generated traffic data officials can use for decisions such as pothole repair. Waze users can also validate pothole locations through their own observations.

Currently, many cities rely on non-emergency 311 reports and manual inspections to address potholes. Waymo developed the pilot after years of feedback from city officials and is launching it in the San Francisco Bay Area, Los Angeles, Phoenix, Austin, and Atlanta.

Fleisher added that Waymo is open to exploring what other street-condition or safety data might be valuable: "We want to be responsive to cities" and help deliver safer streets.
Waymo is launching a pilot with cities and Google's Waze to share pothole data collected by its robotaxis, giving local transportation departments a new way to find and fix road damage more quickly. "We realized, hey, once we're at scale, we can actually share this data with cities, which is something that they've asked for and something that we collect at scale," said Arielle Fleisher, Waymo's policy development and research manager. "And so we figured out a way to make that happen." The Verge reports: Waymo uses its perception hardware, including cameras and radar, as well as accelerometers and the vehicle's physical feedback system, to log every pothole its vehicles encounter. These sensors detect physical changes to the road's surface, such as tilt and movement when the vehicle encounters irregularities. Originally, Waymo knew it needed the ability to detect potholes so it could ensure that its vehicles slowed down to avoid damage or injury to the passenger. Later, the company realized this could be invaluable data for cities, too. Under the new pilot program, that data will now be made available to cities' departments of transportation through a free-to-use Waze for Cities platform, which provides access to real-time, user-generated traffic data that officials can then use to make important decisions -- such as pothole repair. The platform also allows for Waze users to validate pothole locations through their own observations, decreasing the chances that city officials will be led astray by false positives. Currently, many cities rely on a patchwork of non-emergency 311 reports and manual inspections to address their pothole problems. Waymo developed this pilot program after collecting years of feedback from city officials about the state of their highways and surface streets. 
The company is launching the new pilot in the San Francisco Bay Area, as well as Los Angeles, Phoenix, Austin, and Atlanta, where Waymo says it has already helped the city identify approximately 500 potholes. Fleisher said that Waymo would be open to expanding the project to other street maladies based on further feedback from officials. The company is eager to learn what other types of street condition or safety data might be valuable, she said. "We want to be responsive to cities," Fleisher said. "They are interested in safer streets and potholes are really a tough challenge for cities. So we really wanted to meet that need as part of our desire to be a good partner and to ultimately advance our goal for safer streets."
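To illustrate the accelerometer side of this (a toy sketch only; Waymo's actual pipeline fuses cameras, radar, and vehicle feedback and is not public), a pothole strike shows up as a short spike in vertical acceleration that even a simple threshold can flag:

```python
def flag_pothole_samples(z_accel, threshold=3.0):
    """Flag indices where vertical acceleration deviates sharply from the mean.

    z_accel: vertical accelerometer samples in m/s^2; threshold is the allowed
    deviation. A real system would use a rolling baseline and sensor fusion;
    this only illustrates the spike-detection idea.
    """
    baseline = sum(z_accel) / len(z_accel)
    return [i for i, z in enumerate(z_accel) if abs(z - baseline) > threshold]

# Smooth road with one hard jolt at index 4:
samples = [9.8, 9.7, 9.9, 9.8, 15.2, 9.8, 9.9]
print(flag_pothole_samples(samples))  # -> [4]
```

Validating flagged locations against other vehicles' reports (as the Waze integration does with user observations) is what keeps isolated sensor noise from turning into false positives.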

Read more of this story at Slashdot.

  •  

Skilled Older Workers Turn To AI Training To Stay Afloat

✇Slashdot
Author: BeauHD

🤖 AI Summary

Older professional workers are turning to AI training work to stay employed:

1. **Background**: With job openings in their fields shrinking, older professionals are shifting to a growing category of work called data annotation: labeling and evaluating the data used to train AI models, for example reviewing whether a model answers medical questions correctly.

2. **Companies**: Mercor, GlobalLogic, TEKsystems, micro1, and Alignerr operate large contractor networks whose clients include tech giants, academic researchers, and industries such as healthcare and finance.

3. **Conditions**: Experienced professionals can land AI training contracts paying from $20 to over $180 an hour, gaining flexibility and income. But the work is contract-based with unstable pay and hours and no benefits, a clear step down from former jobs that typically came with six-figure salaries and paid leave.

4. **Motivation**: In a brutal job market, AI training has become a "bridge" for older professionals: a temporary fallback after losing a job, or a side hustle.

5. **Expert view**: Joanna Lahey, a professor at Texas A&M University, notes that lower-paying, less demanding "bridge jobs" help workers stay afloat as they approach retirement, and says AI training may be better in some ways than earlier alternatives such as temp or gig work.

6. **Conclusion**: AI training lets older professionals apply their expertise to survive financially, though it is worth understanding that in the long run they may be training the systems that replace their former jobs.
An anonymous reader quotes a report from the Guardian: [Five skilled workers aged 50 and older spoke] to the Guardian about how, after struggling to find work in their fields, they have turned to an emerging and growing category of work: using their expertise to train artificial intelligence models. Known as data annotation, the work involves labeling and evaluating the information used to train AI models like Open AI's ChatGPT or Google's Gemini. A doctor, for example, might review how an AI model answers medical questions to flag incorrect or unsafe responses and suggest better ones, helping the system learn how to generate more accurate and reliable responses. The ultimate goal of training is to level up AI models until they're capable of doing a job as well as a human could -- meaning they could someday replace some of these human workers. The companies behind AI training, such as Mercor, GlobalLogic, TEKsystems, micro1 and Alignerr, operate large contractor networks staffed by people like Ciriello. Their clients include tech giants like OpenAI, Google and Meta, academic researchers and industries including healthcare and finance. For experienced professionals, AI training contracts can be a side hustle -- or a temporary fallback following a layoff -- where top experts can, in some cases, earn over $180 an hour. But that's on the high end. For some older workers [...], it represents another thing entirely: a last refuge in a brutal job market that is harder to stay in, or re-enter, the older they get. For many of them, whether or not they're training their AI replacements in their professions is besides the point. They need the work now. [...] "There's just a lot of desperation out there," Johnson said. 
As opportunities narrow, many turn to what Joanna Lahey, a professor at Texas A&M University who studies age discrimination and labor outcomes, calls "bridge jobs" -- lower-paying, less demanding roles that help workers stay financially afloat as they approach retirement. Historically, that meant taking temp assignments, retail and fast-food work and gig roles like Uber and food delivery. Now, for skilled workers -- engineers, lawyers, nurses or designers, for example -- using their expertise for AI data training is becoming the new bridge job. "[AI] training work may be better in some ways than those earlier alternatives," Lahey told the Guardian. AI training can offer flexibility, quick income and intellectual engagement. But it's often a clear step down. Professionals in fields such as software development, medicine or finance typically earn six-figure salaries that come with benefits and paid leave, according to the US Bureau of Labor Statistics. According to online job postings, AI training gigs start at $20 an hour, with pay increasing to between $30 and $40 an hour. In some cases, AI trainers with coveted subject matter expertise can earn over $100 an hour. AI training is contract-based, though, meaning the pay and hours are unstable, and it often doesn't come with benefits.

Read more of this story at Slashdot.

  •  

Little Snitch Comes To Linux To Expose What Your Software Is Really Doing

✇Slashdot
Author: BeauHD

🤖 AI Summary

Little Snitch, a popular macOS tool that displays which applications are connecting to the internet, is now being developed for Linux. The project began after the developer experimented with Linux and found it strange not knowing about system connections. Unlike existing tools like OpenSnitch, Little Snitch offers a simple user experience by showing which process is making connections and allowing users to block them with a click.

The Linux version of Little Snitch uses eBPF for kernel-level traffic interception, with core components written in Rust and featuring a web-based interface that can monitor remote servers. Initial tests on Ubuntu revealed that the system was relatively quiet; only nine processes made internet connections over a week, compared to more than 100 on macOS.

The application behaves similarly across platforms: Firefox triggered telemetry and advertising-related connections while LibreOffice made no network connections during testing. The early release is intended as a transparency tool rather than a security firewall.

This development aims to provide users with insight into what their software is doing online, enhancing awareness about internet activity without relying solely on command-line utilities or other existing tools.
BrianFagioli writes: Little Snitch, the well known macOS tool that shows which applications are connecting to the internet, is now being developed for Linux. The developer says the project started after experimenting with Linux and realizing how strange it felt not knowing what connections the system was making. Existing tools like OpenSnitch and various command line utilities exist, but none provided the same simple experience of seeing which process is connecting where and blocking it with a click. The Linux version uses eBPF for kernel level traffic interception, with core components written in Rust and a web based interface that can even monitor remote Linux servers. During testing on Ubuntu, the developer noticed the system was relatively quiet on the network. Over the course of a week, only nine system processes made internet connections. By comparison, macOS reportedly showed more than one hundred processes communicating externally. Applications behave similarly across platforms though. Launching Firefox immediately triggered telemetry and advertising related connections, while LibreOffice made no network connections at all during testing. The early release is meant primarily as a transparency tool to show what software is doing on the network rather than a hardened security firewall.
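On Linux, the raw data such a tool draws on is already exposed in `/proc/net/tcp`, where each socket's endpoints appear as little-endian hex and an inode that can be mapped back to a process. A minimal parsing sketch (just the table-decoding step; Little Snitch itself intercepts traffic via eBPF, which this does not attempt):

```python
import socket
import struct

def decode_addr(hex_addr):
    """Decode a /proc/net/tcp address like '0100007F:0277' into ('127.0.0.1', 631)."""
    ip_hex, port_hex = hex_addr.split(":")
    ip = socket.inet_ntoa(struct.pack("<I", int(ip_hex, 16)))  # little-endian IPv4
    return ip, int(port_hex, 16)

def parse_tcp_table(text):
    """Yield (local, remote, inode) per row; the inode maps back to a process
    by scanning /proc/<pid>/fd symlinks (omitted here)."""
    for line in text.splitlines()[1:]:   # skip the header row
        fields = line.split()
        if len(fields) < 10:
            continue
        yield decode_addr(fields[1]), decode_addr(fields[2]), fields[9]

# A captured-style sample row (a CUPS listener on localhost):
sample = (
    "  sl  local_address rem_address   st tx_queue rx_queue tr tm->when retrnsmt uid timeout inode\n"
    "   0: 0100007F:0277 00000000:0000 0A 00000000:00000000 00:00000000 00000000  1000 0 12345\n"
)
print(list(parse_tcp_table(sample)))
```

Polling this table only samples connections; the eBPF approach the developer describes sees every connection attempt as it happens, which is what makes per-connection blocking possible.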

Read more of this story at Slashdot.

  •  

New Jersey Cannot Regulate Kalshi's Prediction Market, US Appeals Court Rules

✇Slashdot
Author: BeauHD

🤖 AI Summary

A federal appeals court has ruled that New Jersey gaming regulators cannot stop Kalshi from accepting bets on its prediction market. The Philadelphia-based 3rd U.S. Circuit Court of Appeals held that the Commodity Futures Trading Commission (CFTC) has exclusive jurisdiction over the sports-related event contracts Kalshi offers.

New Jersey had argued that Kalshi's listing of event contracts, including on collegiate sports, violated state gambling laws. Kalshi countered that the contracts qualify as "swaps," a type of derivative that only the CFTC can regulate under the Commodity Exchange Act.

A lower court sided with Kalshi and issued a preliminary injunction, prompting New Jersey to appeal, but a majority of the 3rd Circuit panel concluded the Commodity Exchange Act likely preempts state law. The ruling was consistent with positions the CFTC has taken in other litigation.

The decision is the first by a federal appeals court to resolve the central issue in an escalating battle over states' authority to regulate prediction markets.
An anonymous reader quotes a report from Reuters: A federal appeals court ruled on Monday that New Jersey gaming regulators cannot prevent Kalshi from allowing people in the state to use its prediction market to place financial bets on the outcome of sporting events. A three-judge panel of the Philadelphia-based 3rd U.S. Circuit Court of Appeals ruled 2-1 (PDF) in finding that the U.S. Commodity Futures Trading Commission has exclusive jurisdiction over the sports-related event contracts that Kalshi allows people to trade on its platform. The ruling marked the first time a federal appeals court has ruled on what has become the central issue in an escalating battle over the ability of state gaming regulators to police the activity of prediction market operators. Kalshi and companies like it allow users to place trades and profit from predictions on events such as sports and elections. States argue that firms like Kalshi are operating without required state licenses, in violation of gaming laws, including bans on wagers by those under 21. Those states include New Jersey, which last year sent Kalshi a cease-and-desist letter stating that its listing of sports-related event contracts on its platform violated state gambling laws that prohibit betting on collegiate sports. Kalshi sued the state, arguing its event contracts qualify as "swaps," a type of derivative contract, that under the Commodity Exchange Act can only be regulated by the CFTC, which had granted the company a license to operate a designated contract market (DCM). A lower-court judge had sided with New York-based Kalshi and issued a preliminary injunction, prompting New Jersey to appeal. But a majority of the judges on the 3rd Circuit panel concluded the Commodity Exchange Act likely preempted state law. "Kalshi's sports-related event contracts are swaps traded on a CFTC-licensed DCM, so the CFTC has exclusive jurisdiction," U.S. Circuit Judge David Porter wrote. 
The ruling was in line with the position advanced in other litigation by the CFTC under President Donald Trump's administration. The regulator last week sued Arizona, Connecticut and Illinois to prevent them from pursuing what it called unlawful efforts to regulate prediction markets.

Read more of this story at Slashdot.

  •  

OpenAI Calls For Robot Taxes, Public Wealth Fund, and 4-Day Workweek To Tackle AI Disruption

✇Slashdot
Author: BeauHD

🤖 AI Summary

OpenAI is proposing sweeping policy changes to address the societal disruption caused by advanced AI, presenting a set of "initial ideas" that include taxes on automated labor, a public wealth fund, and four-day-workweek experiments.

Key proposals:
1. Public wealth fund: lawmakers and AI companies would invest together in long-term assets tied to the AI boom, with returns distributed directly to citizens.
2. Four-day workweek experiments: the government would encourage and incentivize employers to trial four-day weeks with no loss in pay, along with "benefits bonuses" tied to productivity gains from new AI tools.
3. Tax reform: shifting the tax base toward corporate income and capital gains, reducing reliance on labor income and payroll taxes, and taxing automated labor.

OpenAI also calls for accelerated expansion of the US electricity grid, which is already straining under data center construction and rising energy demand. The proposals are framed as initial ideas for managing large-scale shifts in employment and softening the resulting social disruption.
OpenAI is proposing (PDF) sweeping policy changes to help manage the societal disruption caused by advanced AI, including taxes on automated labor, a public wealth fund, and experiments with a four-day workweek. The company said the policy document offered a series of "initial ideas" to address the risk of "jobs and entire industries being disrupted" by the adoption of AI tools. Business Insider reports: Among the core policy suggestions is a public wealth fund, which would see lawmakers and AI companies work together to invest in long-term assets linked to the AI boom, with returns distributed directly to citizens. Another is that the government should encourage and incentivize employers to experiment with four-day workweeks with no loss in pay and offer "benefits bonuses" tied to productivity gains from new AI tools. The policy document also suggests lawmakers modernize the tax system and shift the tax base to corporate income and capital gains, rather than relying on labor income and payroll taxes that could be hit by a wave of AI-powered job losses. It also recommends taxes related to automated labor. OpenAI also called for the accelerated expansion of the US's electricity grid, which is already feeling the strain from a wave of data center construction and energy demand for training ever more powerful AI models.

Read more of this story at Slashdot.

  •  