
FCC Grants Netgear Conditional Approval For Routers

✇Slashdot
Author: BeauHD

🤖 AI Summary

The FCC has granted Netgear a conditional exemption from its ban on foreign-made routers. The exemption runs through Oct. 1, 2027, by which time Netgear must obtain FCC certification for its future router models. The Defense Department reviewed Netgear's application and concluded that the company's products "do not pose risks to US national security." The exemption covers a wide range of Wi-Fi models, including the R, RAX, and RS series, the Orbi consumer mesh line, and cable gateways such as the CAX series. Each device must still pass the FCC's normal equipment authorization process, so the Oct. 1, 2027 date effectively serves as Netgear's deadline for getting its products certified.

Related links: Microsoft signals a major price increase for Surface PCs. The FCC has announced a ban on imports of newly manufactured routers, citing security concerns.
The FCC has granted (PDF) Netgear the first exemption from its foreign-made router ban, allowing the company to keep selling new consumer router models made outside the U.S. through Oct. 1, 2027. PCMag reports: The Defense Department reviewed Netgear's application for an exemption and found that its products "do not pose risks to US national security." The FCC's order doesn't elaborate on why. Netgear is based in San Jose, California, although its products are made in Asia. The exemption, known as a conditional approval, lasts until Oct. 1, 2027. It covers a large range of future Wi-Fi models from Netgear, spanning the R, RAX, RAXE, RS, MK, MR, M, and MH series, the Orbi consumer mesh, mobile, and standalone routers under the RBK, RBE, RBR, RBRE, LBR, LBK, and CBK series, as well as cable gateways and cable modems under the CAX and CM series. The exemption isn't a full green light for the future product models from Netgear. The FCC says the company still needs to go through the normal Commission-regulated equipment authorization process for each device. The Oct. 1, 2027 date effectively amounts to a deadline for Netgear to receive FCC certification for the router models; each certification is also permanent, enabling the product to be sold in the US on an ongoing basis. This also suggests that Netgear has an 18-month period to receive FCC certifications for future products.

Read more of this story at Slashdot.

  •  

Microsoft Reveals Major Price Increase For All Surface PCs

✇Slashdot
Author: BeauHD

🤖 AI Summary

Microsoft has sharply raised prices across its entire Surface lineup as RAM and component costs climb. The main changes:

- The Surface Pro 12-inch jumped from $799 to $1,049, and the Surface Pro 13-inch from $999 to $1,499.
- In the Surface Laptop line, the 13-inch model rose from $899 to $1,149 and the 13.8-inch model from $999 to $1,499, with the 15-inch model now starting at $1,599.

At the top end, a Surface Laptop 15-inch with a Snapdragon X Elite, 64GB of RAM, and a 1TB SSD now sells for $3,649, leaving it less price-competitive than Apple's 16-inch MacBook Pro in the same range.

As a result, Microsoft's midrange devices now cost more than the flagship models did when they launched in 2024.
Microsoft has sharply raised prices across its Surface lineup as RAM and component costs keep climbing. "Both its midrange and flagship Surface lines are now significantly more expensive than they were just a few weeks ago, with the flagship Surface Laptop 7 and Surface Pro 11 now starting at $500 more than they launched at in 2024," reports Windows Central. From the report: The Surface Pro 12-inch, which was previously Microsoft's cheapest modern Surface PC at $799, now starts at $1,049. The flagship Surface Pro 13-inch, which originally launched for $999, now starts at an eyewatering $1,499. It's the same story for the Surface Laptop lines, with the entry-level 13-inch model originally priced at $899, now starting at $1,149. The 13.8-inch flagship Surface Laptop launched at $999, but now costs $1,499, with the 15-inch model now starting at $1,599. This means that Microsoft's midrange devices now cost more than the flagships did when they launched in 2024. [...] Microsoft has raised prices for all SKUs on offer, meaning the high end models are now more expensive too. A top end Surface Laptop 15-inch with Snapdragon X Elite, 64GB RAM and 1TB SSD storage now costs a staggering $3,649. To compare, the 16-inch MacBook Pro with an M5 Pro, 64GB RAM, and 1TB SSD is $3,299, and that comes with a significantly better display and much more power under the hood.

Read more of this story at Slashdot.

  •  

California Ghost-Gun Bill Wants 3D Printers To Play Cop, EFF Says

✇Slashdot
Author: BeauHD

🤖 AI Summary

A bill introduced in California would require 3D printer manufacturers to use state-certified software to detect and block firearm components. But Cliff Braun and Rory Mir of the EFF (Electronic Frontier Foundation) argue the proposal is technically infeasible and would lead to surveillance of users' printing activity.

The bill calls for a state-certified algorithm that detects and blocks design files for firearm parts. Braun notes that 3D printers and CNC machines are largely controlled through CAM software, and that proprietary software would likely become the de facto standard under such a regime.

Braun and Mir also point out that users could evade detection by slightly modifying print files, that the bill's constantly expanding blacklist of designs risks creeping beyond firearms, and that false positives from the algorithm could prevent legitimate users from using their hardware.

Braun stresses that most 3D printer owners have no interest in printing firearm components; the majority simply want to print trinkets and spare parts.
A proposed California bill would require 3D printer makers to use state-certified software to detect and block files for gun parts, but advocates at the Electronic Frontier Foundation (EFF) say it would be easy to evade and could lead to widespread surveillance of users' printing activity. The Register reports: The bill in question is AB 2047, the scope of which, on paper, appears strict. The primary goal is clear and simple: to require 3D printer manufacturers to use a state-certified algorithm that checks digital design files for firearm components and blocks print jobs that would produce prohibited parts. [...] Cliff Braun and Rory Mir, who respectively work in policy and tech community engagement at the EFF, claim that the proposals in California are technically infeasible and in practice will lead to consumer surveillance. In a series of blog posts published this month, the pair argued that print-blocking technology -- proposals for which have also surfaced in states including New York and Washington - cannot work for a range of technical reasons. They argued that because 3D printers and other types of computer numerical control (CNC) machines are fairly simple, with much of their brains coming from the computer-aided manufacturing (CAM) software -- or slicer software -- to which they are linked, the bill would establish legal and illegal software. Proprietary software will likely become the de facto option, leaving open source alternatives to rot. "Under these proposed laws, manufacturers of consumer 3D printers must ensure their printers only work with their software, and implement firearm detection algorithms on either the printer itself or in a slicer software," wrote Braun earlier this month. "These algorithms must detect firearm files using a maintained database of existing models. Vendors of printers must then verify that printers are on the allow-list maintained by the state before they can offer them for sale. 
Owners of printers will be guilty of a crime if they circumvent these intrusive scanning procedures or load alternative software, which they might do because their printer manufacturer ends support." Braun also argued that it would be trivial for anyone who uses 3D printers to make small tweaks to either the visual models of firearms parts, or the machine instructions (G-code) generated from those models, to evade detection. Mir further argued that the bill offers no guardrails to keep this "constantly expanding blacklist" limited to firearm-related designs. In his view, there is a clear risk that this approach will creep into other forms of alleged unlawful activity, such as copyright infringement. [...] Braun and Mir have a list of other arguments against the bill. They say the algorithms are more than likely to lead to false positives, which will prevent good-faith users from using their hardware. Many 3D printer owners also have no interest in printing firearm components. Most simply want the freedom to print trinkets and spare parts while others use them to print various items and sell them as an income stream.

Read more of this story at Slashdot.

  •  

Audit Finds Google, Microsoft, and Meta Still Tracking Users After Opt-Out

✇Slashdot
Author: BeauHD

🤖 AI Summary

A California privacy audit found that Google, Microsoft, and Meta continue tracking users even after they opt out. According to research by the privacy search engine webXray, 55 percent of the sites checked set ad cookies without the user's consent, continuing to track them despite opt-outs.

California's Consumer Privacy Act (CCPA) establishes strict privacy rules, including the right to opt out of the sale of personal information. The Global Privacy Control (GPC) system, available as a browser extension, signals to websites that the user declines tracking.

The audit found that Google failed to honor opt-outs 87 percent of the time: it ignores the opt-out signal sent as "sec-gpc: 1" and creates an advertising cookie named IDE via the "set-cookie" command. Microsoft failed at a 50 percent rate, and Meta at 69 percent.

Each company disputes the findings, but if the companies are violating California's privacy rules, they could face substantial fines.
alternative_right shares a report from 404 Media: An independent privacy audit of Microsoft, Meta, and Google web traffic in California found that the companies may be violating state regulations and racking up billions in fines. According to the audit from privacy search engine webXray, 55 percent of the sites it checked set ad cookies in a user's browser even if they opted out of tracking. Each company disputed or took issue with the research, with Google saying it was based on a "fundamental misunderstanding" of how its product works. The webXray California Privacy Audit viewed web traffic on more than 7,000 popular websites in California in the month of March and found that most tech companies ignore when a user asks to opt-out of cookie tracking. California has stringent and well defined privacy legislation thanks to its California Consumer Privacy Act (CCPA) which allows users to, among other things, opt out of the sale of their personal information. There's a system called Global Privacy Control (GPC), which includes a browser extension that indicates to a website when a user wants to opt out of tracking. According to the webXray audit, Google failed to let users opt out 87 percent of the time. "Google's failure to honor the GPC opt-out signal is easy to find in network traffic. When a browser using GPC connects to Google's servers it encodes the opt-out signal by sending the code 'sec-gpc: 1.' This means Google should not return cookies," the audit said. "However, when Google's server responds to the network request with the opt-out it explicitly responds with a command to create an advertising cookie named IDE using the 'set-cookie' command. This non-compliance is easy to spot, hiding in plain sight." The audit said that Microsoft fails to opt out users in the same way and has a failure rate of 50 percent in the web traffic webXray viewed. Meta's failure rate was 69 percent and a bit more comprehensive. 
"Meta instructs publishers to install the following tracking code on their websites. The code contains no check for globally standard opt-out signals -- it loads unconditionally, fires a tracking event, and sets a cookie regardless of the consumer's privacy preferences," the audit said. It showed a copy of Meta's tracking data which contains no GPC check at all.
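The non-compliance the audit describes is mechanical and easy to illustrate. Below is a minimal sketch (not webXray's actual tooling) of what honoring the GPC signal looks like on the server side: a request carrying `sec-gpc: 1` should never receive an advertising `Set-Cookie` in response.

```python
# Minimal sketch of honoring the Global Privacy Control signal.
# A request header of "Sec-GPC: 1" means the user opts out of
# tracking, so no advertising cookie should be set in the response.

def build_response_headers(request_headers: dict) -> dict:
    """Return response headers, honoring a GPC opt-out if present."""
    headers = {"Content-Type": "text/html"}
    opted_out = request_headers.get("sec-gpc", "").strip() == "1"
    if not opted_out:
        # Only set the ad cookie when the user has NOT opted out.
        # "IDE" is the advertising cookie name cited in the audit.
        headers["Set-Cookie"] = "IDE=abc123; SameSite=None; Secure"
    return headers

# With the opt-out signal, no ad cookie is returned:
print("Set-Cookie" in build_response_headers({"sec-gpc": "1"}))  # False
# Without it, the cookie is set:
print("Set-Cookie" in build_response_headers({}))                # True
```

The audit's point is that the non-compliant responses do the opposite of this check: they return the `set-cookie` command even when the opt-out signal is present in the request.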

Read more of this story at Slashdot.

  •  

Chrome Now Lets You Turn AI Prompts Into Repeatable 'Skills'

✇Slashdot
Author: BeauHD

🤖 AI Summary

Google has introduced a Chrome feature called "Skills" that lets users save Gemini prompts as reusable workflows that run with a single click. The feature is rolling out first to Chrome desktop users set to US English.

Skills are managed by typing a forward slash ("/") in Gemini and clicking the compass icon. Prompts can be saved directly from your Gemini chat history on desktop and reused on any other desktop device signed into the same Google account.

The goal is to spare users from manually retyping frequently used Gemini prompts or copying and pasting them from a saved list. Skills created by early testers include commands for calculating the nutritional information of online recipes and comparing product specifications across multiple tabs.

Google is also providing a library of preset Skills that users can adopt instead of building their own. These presets can be customized, offering a starting point without requiring users to create a Skill from scratch.
Google is rolling out a Chrome feature called "Skills" that lets users save Gemini prompts as reusable one-click workflows they can run across multiple tabs. The feature also includes preset Skills from Google. It's launching first for Chrome desktop users set to US English. The Verge reports: Once you have access to the feature, it can be managed by typing a forward slash ( / ) in Gemini and clicking the compass icon. AI prompts can be saved as Skills directly from your Gemini chat history on desktop, where they'll then be available to reuse on any other desktop devices that are signed into the same Google account on Chrome. The aim is to spare Chrome users from having to manually retype frequently used Gemini prompts or having to copy and paste them over from a saved list. Some of the Skills made by early testers include commands for calculating the nutritional information of online recipes and creating a side-by-side comparison of product specifications while shopping across multiple tabs, according to Google. The company is also launching a library of preset Skills that you can save and use instead of making your own. These ready-to-use Skills can also be customized to better suit your needs, providing a starting point without requiring you to create your own from scratch.

Read more of this story at Slashdot.

  •  

Thousands of Rare Concert Recordings Are Landing On the Internet Archive

✇Slashdot
Author: BeauHD

🤖 AI Summary

Chicago concert superfan Aadam Jacobs has recorded more than 10,000 live shows since the 1980s, and a project with the Internet Archive is now digitizing his collection. About 2,500 tapes have already been uploaded to the Internet Archive, including rare gems such as a 1989 Nirvana performance.

Jacobs often used mediocre equipment, but volunteer audio engineers working with the Internet Archive have made the tapes sound great. Volunteer Brian Emerick drives to Jacobs' house once a month and plays back the tapes on old cassette decks. Other volunteers then organize and label the recordings, even tracking down song titles from forgotten punk bands.

The archive is accessible on the Internet Archive.
Aadam Jacobs, a Chicago concert superfan who has recorded more than 10,000 shows since the 1980s, is working with Internet Archive volunteers to digitize the collection before the cassettes deteriorate. "So far, about 2,500 of these tapes have been posted on the Internet Archive, including some rare gems like a Nirvana performance from 1989," reports TechCrunch. From the report: For many of these recordings, Jacobs was using pretty mediocre equipment, but the volunteer audio engineers working with the Internet Archive have made these tapes sound great. One volunteer, Brian Emerick, drives to Jacobs' house once a month to pick up more boxes of tapes -- he has to use anachronistic cassette decks to play the tapes, which get converted into digital files. From there, other volunteers clean up, organize, and label the recordings, even tracking down song names from forgotten punk bands. The archive is available here.

Read more of this story at Slashdot.

  •  

Social Media Platforms Need To Stop Never-Ending Scrolling, UK's Starmer Says

✇Slashdot
Author: BeauHD

🤖 AI Summary

UK Prime Minister Keir Starmer is calling for social media platforms to remove infinite-scroll features for minors. On BBC Radio, Starmer said: "We're consulting on whether there should be a ban for under 16s, but I think equally important, the addictive scrolling mechanisms are really problematic to my mind. They need to go."

Like many other countries, the UK is considering restricting children's access to social media, and it is testing bans, curfews, and app time limits to assess their impact on sleep, family life, and schoolwork. Social media companies design algorithms intended to encourage addictive behavior, Starmer said, and parents are asking the government to intervene.

The UK government has already received more than 45,000 responses to its consultation on children's online safety and says there is still time to contribute before the May 26 deadline. Technology Secretary Liz Kendall said: "We want to hear from mums and dads who are worried about the amount of time their children spend online and what they are viewing. We want to hear from teenagers who know better than anyone what it is like to grow up in the age of social media. And we want to hear from families about their views on curfews, AI chatbots and addictive features."
UK Prime Minister Keir Starmer said social media platforms should remove addictive infinite-scroll features for young users as Britain considers new child-safety measures. "We're consulting on whether there should be a ban for under 16s," Starmer told BBC Radio. "But I think equally important, the addictive scrolling mechanisms are really problematic to my mind. They need to go." Reuters reports: Britain, like other countries, is considering restricting access to social media for children and it is testing bans, curfews and app time limits to see how they impact sleep, family life and schoolwork. Social media companies had designed algorithms that were intended to encourage addictive behavior, and parents were asking the government to intervene, Starmer said. [...] More than 45,000 people had already responded to its consultation on children's online safety, the UK government said, adding that there was still time to contribute before a deadline of May 26. "We want to hear from mums and dads who are worried about the amount of time their children spend online and what they are viewing," Technology Secretary Liz Kendall said on Monday. "We want to hear from teenagers who know better than anyone what it is like to grow up in the age of social media. And we want to hear from families about their views on curfews, AI chatbots and addictive features."

Read more of this story at Slashdot.

  •  

Google Faces Mass Arbitration By Advertisers Seeking Billions

✇Slashdot
Author: BeauHD

🤖 AI Summary

Google faces potentially billions of dollars in damage claims from advertisers. According to Bloomberg, Alphabet's Google is facing mass arbitration claims tied to its online search and advertising technology businesses, which courts have ruled were illegal monopolies. Many advertisers have filed individual suits since the 2024 rulings, but advertiser contracts require individual arbitration, a process in which disputes are handled by a mediator and which tends to favor companies.

Attorney Ashley Keller has already signed up a "significant number" of advertisers to participate in mass arbitration against Google, with the first claims expected to be filed this week. "Two federal judges have already adjudicated Google to be a monopolist," Keller said. "It seems sensible to seek redress."

Keller, who is also representing Texas and other states in litigation over Google's ad tech monopoly, estimates that potential claims for online search and display ads could reach $218 billion or more.

Google said that "given the nature of these matters, we cannot estimate a possible loss" and that it will "defend ourselves vigorously" against the open claims. Similar mass arbitrations have typically lasted 12 to 24 months, though the outcome here remains unknown.
An anonymous reader quotes a report from Bloomberg: Alphabet's Google is facing billions of dollars in potential damage claims as part of mass arbitration tied to the company's online search and advertising technology businesses, which courts have ruled were illegal monopolies. Advertisers are banding together to seek payouts through mass arbitration proceedings. While many companies that displayed ads purchased through Google -- including USA Today Co. and Advance Publications -- have sued for damages since the rulings in 2024, advertiser contracts with the search giant require mandatory arbitration over legal disputes. In arbitration, legal disputes are handled by a mediator, a process that tends to favor companies in individual claims. Mass arbitration -- where 25 or more claims against the same company are pooled together -- have become more common and provide a greater likelihood of settlement awards for claimants. Ashley Keller, a Chicago lawyer whose firm has handled mass arbitrations against DoorDash, Postmates and TurboTax-maker Intuit, said he's already signed up a "significant number" of advertisers to participate in claims against Google. The first of those are expected to be filed this week. "Two federal judges have already adjudicated Google to be a monopolist," Keller said in an interview with Bloomberg. "It seems sensible to seek redress." Keller, who is also representing Texas and other states in a lawsuit against Google for monopolization of advertising technology, estimates potential claims for online search and display ads could reach $218 billion or more, based on calculations from an economist his firm has hired. Similar mass arbitrations have lasted 12 to 24 months between the filing of claims and resolution, he said. "Given the nature of these matters, we cannot estimate a possible loss," Google said in a recent corporate filing. "We believe we have strong arguments against these open claims and will defend ourselves vigorously."

Read more of this story at Slashdot.

  •  

A New Computer Chip Could Finally Withstand The Hellscape of Venus

✇Slashdot
Author: BeauHD

🤖 AI Summary

Researchers at the University of Southern California report developing a memristor memory circuit that kept operating at 700 degrees Celsius, which was the limit of their testing equipment rather than of the device itself. The device sandwiches a thin filling of the ceramic hafnium oxide between two tungsten electrode layers, with a layer of graphene at the bottom.

The key is how graphene interacts with tungsten at the atomic level. In a conventional device, heat causes metal atoms to drift gradually through the ceramic layer until they bridge the two electrodes, producing a short circuit. Graphene stops that process.

The researchers used advanced electron microscopy and quantum-level computer simulations to work out why. The findings were published in the journal Science.
Researchers at the University of Southern California say they've developed a memristor memory device that continued operating at 700 degrees Celsius. "And crucially, 700 degrees was not the limit, it was simply as hot as their testing equipment could go," adds ScienceAlert. "The device showed no signs of failing." From the report: The device is called a memristor and it's a nanoscale component that can both store information and perform computing operations. Think of it as a tiny sandwich with two electrode layers on the outside and a thin ceramic filling in the middle. The team built theirs from tungsten, the metal with the highest melting point of any element, combined with a ceramic called hafnium oxide, and with a layer of graphene at the bottom. Each material can withstand enormous heat. Together, they turned out to be extraordinary. What makes graphene the key ingredient is the way it interacts with tungsten at the atomic level. In a conventional device, heat causes metal atoms to drift slowly through the ceramic layer until they bridge the two electrodes, short circuiting everything and leaving the device permanently broken. Graphene stops that process dead. Its surface chemistry with tungsten is ... almost like oil and water. Tungsten atoms that drift toward the graphene find they simply cannot take hold, no anchor, no short circuit, no failure. The team used advanced electron microscopy and quantum level computer simulations to understand exactly why, turning a single lucky result into a repeatable principle. The findings have been published in the journal Science.

Read more of this story at Slashdot.

  •  

Air Force Pushed Out UFO Investigator

✇Slashdot
Author: BeauHD

🤖 AI Summary

The article centers on how J. Allen Hynek was pushed out by the U.S. Air Force. Hynek was originally hired as an Air Force consultant to help interpret UFO reports, but he grew frustrated with the government's effort to minimize unidentified anomalous phenomena (UAP) cases rather than seriously investigate them, and he began saying so publicly. His shift from skeptic to advocate is credited with shaping modern ufology, and the article argues the Air Force's attempts to control the narrative may have deepened public distrust and conspiracy thinking.

A 2024 Department of Defense report, the "Report on the Historical Record of U.S. Government Involvement with Unidentified Anomalous Phenomena," found no evidence that any UAP sighting represented extraterrestrial technology. The report also noted that public trust in the government has not risen above 30 percent since 2007, and the article suggests the Air Force's information control may have further eroded that trust.

Ironically, although Hynek had urged the government to pursue the truth, the Air Force's efforts to stifle him backfired, fueling more suspicion and conspiracy theories.
J. Allen Hynek started as an Air Force consultant brought in to help explain away early UFO reports, but over time he grew frustrated with what he saw as the government's effort to minimize unexplained cases rather than seriously investigate them. Longtime Slashdot reader schwit1 shares an article from Popular Mechanics, in collaboration with Biography.com, that argues Hynek's shift from skeptic to advocate helped shape modern ufology, and that the Air Force's attempts to control the narrative may have deepened the public distrust and conspiracy thinking that followed. From the report: Do you think the U.S. government is hiding, and possibly reverse-engineering, extraterrestrial technology? Think again. Or better yet, don't think about it at all. Nothing to see here. That's the underlying message of a report released in 2024 by the Department of Defense. The 63-page "Report on the Historical Record of U.S. Government Involvement with Unidentified Anomalous Phenomena (UAP) " concludes that the DoD's All-Domain Anomaly Resolution Office (AARO) "found no evidence that any [U.S. Government] investigation, academic-sponsored research, or official review panel has confirmed that any sighting of a UAP represented extraterrestrial technology." The AARO, as The Guardian summarizes, is "a government office established in 2022 to detect and, as necessary, mitigate threats including 'anomalous, unidentified space, airborne, submerged and transmedium objects.'" This report came on the heels of, and in contradiction to, what was arguably the most high-profile hearing on UAPs -- formerly known as unidentified flying objects, or UFOs -- in decades: the August 2023 testimony of "whistleblower" Dave Grusch. [...] The 2024 AARO report stated that during the time Hynek was working with Project Blue Book [the U.S. 
Air Force's best-known UFO investigation program], "about 75 percent of Americans trusted the [US government] 'to do the right thing almost always or most of the time.'" But, the report noted, since 2007, that number has never risen above 30 percent. "This lack of trust probably has contributed to the belief held by some subset of the U.S. population that the USG has not been truthful regarding knowledge of extraterrestrial craft." Ultimately, the Air Force's efforts to stifle Hynek -- pressuring him to offer the public standard responses to questions he wasn't even allowed to ask -- appears to have backfired. Ironically, the Air Force's attempts to quiet suspicions only fueled them, leading to more conspiracy theories and distrust. People came to believe that the government was hiding the truth, contrary to Hynek's actual revelation: that, in reality, the people at the top may not care much about finding the answers after all.

Read more of this story at Slashdot.

  •  

WeatherBug Data Says October 8 Is the Real Perfect Date

✇Slashdot
Author: BeauHD

🤖 AI Summary

According to a new analysis from weather company WeatherBug, April 25, traditionally billed as the "perfect date," does not live up to the hype. Reviewing U.S. weather data from 2018 onward, the company found that October 8 delivers the most stable temperatures and the least rain.

WeatherBug compared every day of the year using population-weighted weather data drawn from roughly 20 million users. April 25 averaged about 60F with roughly 0.13 inches of rain, while October 8 averaged 66F with just 0.0573 inches of precipitation.

The data also show that the year's hottest days fall in July and the coldest in January, with January 20 averaging just 33F. No single date can guarantee perfect weather nationwide, but early October likely offers one of the best windows for comfortable outdoor activities.

In short, the WeatherBug data suggest October 8 is the real "perfect date."
BrianFagioli shares a report from NERDS.xyz: For years pop culture has treated April 25 as the "perfect date," thanks to the famous Miss Congeniality line about needing only a light jacket. But new analysis from WeatherBug suggests that idea does not actually hold up when you look at the numbers. After reviewing U.S. weather data from 2018 through today, the company concluded that October 8 delivers the most reliable combination of comfortable temperatures and low rainfall nationwide. According to the analysis, the average conditions on that day land around 66F with just 0.0573 inches of precipitation. The study used population weighted weather data drawn from roughly 20 million daily WeatherBug users across the United States. When the company compared all days of the year, April 25 ranked only 80th, averaging about 60F and roughly 0.1297 inches of rain. The broader dataset also shows July dominating the hottest days of the year while January owns the coldest, with January 20 averaging just 33F nationally. While no single date guarantees perfect weather everywhere in a country as large as the U.S., the numbers suggest early October may quietly offer one of the most reliable windows for comfortable outdoor conditions.
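The population-weighted averaging WeatherBug describes is simple to sketch. In the snippet below, the readings and user counts are hypothetical, purely for illustration of the technique:

```python
# Sketch of a population-weighted average: each region's reading is
# weighted by how many users (a population proxy) it represents,
# so sparsely populated areas don't skew the national figure.

def weighted_average(readings):
    """readings: list of (value, weight) pairs; returns weighted mean."""
    total_weight = sum(w for _, w in readings)
    return sum(v * w for v, w in readings) / total_weight

# Hypothetical Oct. 8 temperatures (deg F) with regional user counts:
oct8 = [(72.0, 5_000_000), (61.0, 3_000_000), (66.0, 2_000_000)]
print(round(weighted_average(oct8), 1))  # 67.5
```

Weighting by user count means a mild day in a dense metro area counts for more than an extreme day in a lightly populated region, which is what makes a single nationwide "average day" meaningful.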

Read more of this story at Slashdot.

  •  

Stanford Report Highlights Growing Disconnect Between AI Insiders and Everyone Else

✇Slashdot
Author: BeauHD

🤖 AI Summary

According to Stanford University's annual report on the AI industry, the gap between how AI experts and the general public view the technology is widening. The report notes growing anxiety around AI, particularly in the U.S., with concerns about its impact on key societal areas such as jobs, medical care, and the economy.

The report draws on several sources to explain the divergence. A Pew Research survey, for example, found that only 10% of Americans said they were more excited than concerned about the increased use of AI in daily life, while 56% of AI experts believe AI will have a positive impact on the U.S. over the next 20 years.

On medical care, 84% of experts expect a largely positive impact, versus only 44% of the general public. On work, 73% of experts feel positive about AI's effect on how people do their jobs, compared with just 23% of the public.

The U.S. also reported the lowest trust among the nations surveyed, at 31%, in its government to regulate AI responsibly. Globally, respondents who feel AI products and services offer more benefits than drawbacks rose from 55% in 2024 to 59% in 2025, while those who say AI makes them "nervous" grew by 2 percentage points over the same period.

The report underscores the divergence between expert and public views and highlights that appropriate regulation and oversight of AI remain a public concern.
An anonymous reader quotes a report from TechCrunch: AI experts and the public's opinion on the technology are increasingly diverging, according to Stanford University's annual report on the AI industry, which was released Monday. In particular, the report noted a growing trend of anxiety around AI and, in the U.S., concerns about how the technology will impact key societal areas, such as jobs, medical care, and the economy. [...] Stanford's report provides more insight into where all this negativity is coming from, as it summarizes data around public sentiment of AI across various sources. For instance, it pointed to a report from Pew Research published last month, which noted that only 10% of Americans said they were more excited than concerned about the increased use of AI in daily life. Meanwhile, 56% of AI experts said they believed AI would have a positive impact on the U.S. over the next 20 years. Expert opinion and public sentiment also greatly diverged in particular areas where AI could have a societal impact. Indeed, 84% of experts, the report authors noted, said that AI would have a largely positive impact on medical care over the next 20 years, but only 44% of the U.S. general public said the same. Plus, a majority (73%) of experts felt positive about AI's impact on how people do their jobs, compared with just 23% of the public. And 69% of experts felt that AI would have a positive impact on the economy. Given the supposed AI-fueled layoffs and disruptions to the workplace, it's not surprising that only 21% of the public felt similarly. Other data from Pew Research, cited by the report, noted that AI experts were less pessimistic on AI's impact on the job market, while nearly two-thirds of Americans (or 64%) said they think AI will lead to fewer jobs over the next 20 years. The U.S. also reported the lowest trust in its government to regulate AI responsibly, compared with other nations, at 31%. 
Singapore ranked highest at 81%, per data pulled from Ipsos found in Stanford's report. Another source looked at regulation concerns on a state-by-state level and concluded that, nationwide, 41% of respondents said federal AI regulation will not go far enough, while only 27% said it would go "too far." Despite the fears and concerns, AI did get one accolade: Globally, those who feel like AI products and services offer more benefits than drawbacks slightly rose from 55% in 2024 to 59% in 2025. But at the same time, those respondents who said that AI makes them "nervous" grew from 50% to 52% during the same period, per data cited by the report's authors.

Read more of this story at Slashdot.

  •  

Will Some Programmers Become 'AI Babysitters'?

🤖 AI Summary

The article "Will Some Programmers Become 'AI Babysitters'?" discusses the possibility that some programmers will become caretakers of AI. According to Google.org's Maggie Johnson, "AI may allow anyone to generate code, but only a computer scientist can maintain a system."

As AI-generated code becomes more accurate and ubiquitous, the programmer's role is expected to shift from author to technical auditor or expert. Because large language models lack contextual judgment and specialized knowledge, humans are still needed to confirm that generated code is safe and efficient and integrates correctly within a larger system.

Computer scientists will therefore need to perform "forensic" work, verifying code and tracing its logic to find security flaws. Modern CS education, the argument goes, should give students the technical depth to verify and secure these black-box outputs.

The New York Times reports that companies are already struggling to find engineers to review AI-generated code.
Will some programmers become "AI babysitters"? asks long-time Slashdot reader theodp. They share some thoughts from a founding member of Code.org and former Director of Education at Google: "AI may allow anyone to generate code, but only a computer scientist can maintain a system," explained Google.org Global Head Maggie Johnson in a LinkedIn post. So "As AI-generated code becomes more accurate and ubiquitous, the role of the computer scientist shifts from author to technical auditor or expert. "While large language models can generate functional code in milliseconds, they lack the contextual judgment and specialized knowledge to ensure that the output is safe, efficient, and integrates correctly within a larger system without a person's oversight. [...] The human-in-the-loop must possess the technical depth to recognize when a piece of code is sub-optimal or dangerous in a production environment. [...] We need computer scientists to perform forensics, tracing the logic of an AI-generated module to identify logical fallacies or security loopholes. Modern CS education should prepare students to verify and secure these black-box outputs." The NY Times reports that companies are already struggling to find engineers to review the explosion of AI-written code.

Read more of this story at Slashdot.

  •  

Amazon May Sell Trainium AI Chips To Third Parties In Shot At Nvidia

✇Slashdot
Author: BeauHD

🤖 AI Summary

Amazon has signaled it may sell its Trainium AI chips to third parties, not just through AWS. CEO Andy Jassy said the company's chip business is already running at more than $20 billion annually, with strong demand ahead: Trainium2 is essentially sold out, Trainium3's supply is nearly fully reserved, and even the fourth generation already has substantial pre-orders.

According to Jassy, all three product lines (Trainium, Graviton, and Nitro) are growing at triple-digit rates year over year, and a full-scale Trainium rollout could shave tens of billions of dollars off annual capital costs while meaningfully widening profit margins.

The move is part of Amazon's strategy to compete more directly with Nvidia and to further expand its chip business. The chips are currently accessible only through AWS, but Amazon is considering standalone sales in the future.
Amazon CEO Andy Jassy says the company may eventually sell its Trainium AI chips directly to outside customers, not just through AWS, which would put Amazon in more direct competition with Nvidia. "There's so much demand for our chips that it's quite possible we'll sell racks of them to third parties in the future," Jassy wrote in his annual shareholder letter Thursday. He also revealed the company's chip business is already running at more than $20 billion annually, with demand so strong that current and even future generations are largely spoken for. Quartz reports: Access to Amazon's chips is currently limited to Amazon Web Services, with customers paying for cloud-based usage rather than owning any physical hardware. Selling to AWS and external customers alike, as standalone chipmakers do, would put annual revenue at around $50 billion, up from the $20 billion the company estimates for the year, Jassy said. The $20 billion figure spans three product lines: Trainium, the AI accelerator chip; Graviton, a general-purpose processor; and Nitro, a chip that helps run Amazon's EC2 server instances. All three are growing at triple-digit rates year over year, Jassy claimed in his letter. Jassy said demand for Trainium has outpaced supply at each generation. Trainium2 is essentially unavailable, with its entire allocated capacity spoken for. Trainium3 started reaching customers in early 2026, and reservations have filled nearly all available supply. Even Trainium4 -- which is not expected to reach wide release for another year and a half -- has substantial pre-orders committed. Jassy argued that a full-scale Trainium rollout could shave tens of billions off annual capital costs while meaningfully widening profit margin.

Read more of this story at Slashdot.

  •  

OpenAI To Limit New Model Release On Cybersecurity Fears

✇Slashdot
著者: BeauHD

🤖 AI Summary

OpenAI is reportedly preparing a new cybersecurity product to be offered to a limited set of partner organizations, out of concern that a broader release could cause serious harm. The approach resembles the limited releases Anthropic used for its Mythos model and its Project Glasswing initiative.

In February, after releasing GPT-5.3-Codex (the company's most cyber-capable model), OpenAI launched its "Trusted Access for Cyber" pilot program. Organizations in the program are given access to models with stronger cyber capabilities, and OpenAI initially committed $10 million in API credits to participants.

Security expert Stanislav Fort says restricting a release makes more sense when the concern is a model's ability to create new exploits, and that staggering releases in this way resembles how the software industry already handles responsible vulnerability disclosure.
OpenAI is reportedly preparing a new cybersecurity product for a small group of partners, out of concern that a broader rollout could wreak havoc if it were released more widely. If that move sounds familiar, it's because Anthropic took a similar limited-release approach with its Mythos model and Project Glasswing initiative. Axios reports: OpenAI introduced its "Trusted Access for Cyber" pilot program in February after rolling out GPT-5.3-Codex, the company's most cyber-capable reasoning model. Organizations in the invite-only program are given access to "even more cyber capable or permissive models to accelerate legitimate defensive work," according to a blog post. At the time, OpenAI committed $10 million in API credits to participants. [...] Restricting the rollout of a new frontier model makes "more sense" if companies are concerned about models' ability to write new exploits -- rather than about their ability to find bugs in the first place, Stanislav Fort, CEO of security firm Aisle, told Axios. Staggering the release of new AI models looks a lot like how cybersecurity vendors currently handle the disclosure of security flaws in software, Lee added. "It's the same debate we've had for decades around responsible vulnerability disclosure," Lee said.

Read more of this story at Slashdot.

  •  

Hacker Steals 10 Petabytes of Data From China's Tianjin Supercomputer Center

✇Slashdot
著者: BeauHD

🤖 AI Summary

A hacker has allegedly stolen a massive trove of sensitive data from the National Supercomputing Center in Tianjin, China. According to CNN, the attacker siphoned more than 10 petabytes of information from the state-run supercomputer over the course of several months and posted a sample of it on an anonymous Telegram channel.

The stolen data allegedly includes highly classified defense documents and missile schematics, and is claimed to be linked to major organizations including the Aviation Industry Corporation of China and the Commercial Aircraft Corporation of China. Experts who reviewed samples of the data made an initial assessment that the leak is genuine; payment for access was requested in cryptocurrency.

CNN has not verified the claims, but notes that the experts it spoke with agree the attacker appears to have gained entry with comparative ease and exfiltrated huge amounts of data from multiple organizations without being detected. The sample data reportedly includes documents marked "secret," technical files, and animated simulations of defense equipment such as bombs and missiles.

The incident is drawing attention as potentially the largest known data heist from China.
An anonymous reader quotes a report from CNN: A hacker has allegedly stolen a massive trove of sensitive data -- including highly classified defense documents and missile schematics -- from a state-run Chinese supercomputer in what could potentially constitute the largest known heist of data from China. The dataset, which allegedly contains more than 10 petabytes of sensitive information, is believed by experts to have been obtained from the National Supercomputing Center (NSCC) in Tianjin -- a centralized hub that provides infrastructure services for more than 6,000 clients across China, including advanced science and defense agencies. Cyber experts who have spoken to the alleged hacker and reviewed samples of the stolen data they posted online say they appeared to gain entry to the massive computer with comparative ease and were able to siphon out huge amounts of data over the course of multiple months without being detected. An account calling itself FlamingChina posted a sample of the alleged dataset on an anonymous Telegram channel on February 6, claiming it contained "research across various fields including aerospace engineering, military research, bioinformatics, fusion simulation and more." The group alleges the information is linked to "top organizations" including the Aviation Industry Corporation of China, the Commercial Aircraft Corporation of China, and the National University of Defense Technology. Cyber security experts who have reviewed the data say the group is offering a limited preview of the alleged dataset, for thousands of dollars, with full access priced at hundreds of thousands of dollars. Payment was requested in cryptocurrency. CNN cannot verify the origins of the alleged dataset and the claims made by FlamingChina, but spoke with multiple experts whose initial assessment of the leak indicated it was genuine. 
The alleged sample data appeared to include documents marked "secret" in Chinese, along with technical files, animated simulations and renderings of defense equipment including bombs and missiles.

Read more of this story at Slashdot.

  •  

EFF Is Leaving X

✇Slashdot
著者: BeauHD

🤖 AI Summary

After nearly 20 years on the platform, the EFF has announced it is leaving X. "This isn't a decision we made lightly, but it might be overdue," the digital rights group said. In 2018 it posted to Twitter (now known as X) five to ten times a day, earning between 50 and 100 million impressions per month. By 2024, its 2,500 posts generated only around 2 million impressions per month, and last year its 1,500 posts earned roughly 13 million impressions for the entire year. "To put it bluntly, an X post today receives less than 3% of the views a single tweet delivered seven years ago," the EFF said.

Your rights should go with you when you go online, the group argues, and X is no longer where the fight is happening. The platform Musk took over was imperfect but impactful; what exists today is diminished, and its usefulness is limited. The EFF says it wins by taking on big fights, and it will now focus its efforts on Bluesky, Mastodon, LinkedIn, Instagram, TikTok, Facebook, YouTube, and eff.org. "Our work protecting digital rights is needed more than ever," the group said, asking supporters to follow it there.
After nearly 20 years on the platform, The Electronic Frontier Foundation (EFF) says it is leaving X. "This isn't a decision we made lightly, but it might be overdue," the digital rights group said. "The math hasn't worked out for a while now." From the report: We posted to Twitter (now known as X) five to ten times a day in 2018. Those tweets garnered somewhere between 50 and 100 million impressions per month. By 2024, our 2,500 X posts generated around 2 million impressions each month. Last year, our 1,500 posts earned roughly 13 million impressions for the entire year. To put it bluntly, an X post today receives less than 3% of the views a single tweet delivered seven years ago. [...] When you go online, your rights should go with you. X is no longer where the fight is happening. The platform Musk took over was imperfect but impactful. What exists today is something else: diminished, and increasingly de minimis. EFF takes on big fights, and we win. We do that by putting our time, skills, and our members' support where they will effect the most change. Right now, that means Bluesky, Mastodon, LinkedIn, Instagram, TikTok, Facebook, YouTube, and eff.org. We hope you follow us there and keep supporting the work we do. Our work protecting digital rights is needed more than ever before, and we're here to help you take back control.

Read more of this story at Slashdot.

  •  

Waymo Is Offering To Help Cities Fix Their Potholes

✇Slashdot
著者: BeauHD

🤖 AI Summary

Waymo is launching a pothole-data-sharing pilot with cities and Google's Waze, using information collected by its robotaxis. The program gives city transportation departments a new way to find and repair road damage faster. "We realized, hey, once we're at scale, we can actually share this data with cities," said Arielle Fleisher, Waymo's policy development and research manager.

Waymo's perception hardware (cameras and radar), together with accelerometers and other sensors, detects physical changes in the road surface, registering tilt and movement when a vehicle encounters an irregularity. Waymo originally needed this capability so its autonomous vehicles could slow down and avoid damage or injury to passengers; only later did it realize the data could be a valuable resource for cities.

Under the new pilot, the data will be provided to city transportation departments free of charge through the Waze for Cities platform, which offers real-time, user-generated traffic information that officials can use to make decisions such as scheduling pothole repairs. Waze users can also validate pothole locations through their own observations.

Today, many cities rely on non-emergency channels such as 311 reports and manual inspections to address potholes. After years of collecting feedback from city officials, Waymo is launching the pilot in the San Francisco Bay Area, Los Angeles, Phoenix, Austin, and Atlanta.

Fleisher added that Waymo is exploring whether other road-condition or safety data would be useful: "We want to be responsive to cities," she said, and the company hopes to work with them toward safer streets.
Waymo is launching a pilot with cities and Google's Waze to share pothole data collected by its robotaxis, giving local transportation departments a new way to find and fix road damage more quickly. "We realized, hey, once we're at scale, we can actually share this data with cities, which is something that they've asked for and something that we collect at scale," said Arielle Fleisher, Waymo's policy development and research manager. "And so we figured out a way to make that happen." The Verge reports: Waymo uses its perception hardware, including cameras and radar, as well as accelerometers and the vehicle's physical feedback system, to log every pothole its vehicles encounter. These sensors detect physical changes to the road's surface, such as tilt and movement when the vehicle encounters irregularities. Originally, Waymo knew it needed the ability to detect potholes so it could ensure that its vehicles slowed down to avoid damage or injury to the passenger. Later, the company realized this could be invaluable data for cities, too. Under the new pilot program, that data will now be made available to cities' departments of transportation through a free-to-use Waze for Cities platform, which provides access to real-time, user-generated traffic data that officials can then use to make important decisions -- such as pothole repair. The platform also allows for Waze users to validate pothole locations through their own observations, decreasing the chances that city officials will be led astray by false positives. Currently, many cities rely on a patchwork of non-emergency 311 reports and manual inspections to address their pothole problems. Waymo developed this pilot program after collecting years of feedback from city officials about the state of their highways and surface streets. 
The company is launching the new pilot in the San Francisco Bay Area, as well as Los Angeles, Phoenix, Austin, and Atlanta, where Waymo says it has already helped the city identify approximately 500 potholes. Fleisher said that Waymo would be open to expanding the project to other street maladies based on further feedback from officials. The company is eager to learn what other types of street condition or safety data might be valuable, she said. "We want to be responsive to cities," Fleisher said. "They are interested in safer streets and potholes are really a tough challenge for cities. So we really wanted to meet that need as part of our desire to be a good partner and to ultimately advance our goal for safer streets."

Read more of this story at Slashdot.

  •  

Skilled Older Workers Turn To AI Training To Stay Afloat

✇Slashdot
著者: BeauHD

🤖 AI Summary

As tech job openings shrink, older skilled professionals are turning to a new line of work to stay employed: AI training. Known as data annotation, the work involves labeling and evaluating the data used to train AI models, for example reviewing whether a model answers medical questions correctly.

Companies such as Mercor, GlobalLogic, TEKsystems, micro1, and Alignerr operate large contractor networks, serving tech giants, academic researchers, and industries including healthcare and finance.

Experienced professionals can land AI training contracts paying from $20 to more than $180 an hour, gaining flexibility and income. But this is far below their former jobs, which typically came with six-figure salaries and benefits; the work is irregular and carries no benefits.

In a brutal labor market, AI training has become a "bridge" for older professionals: a temporary fallback after losing a job, or a side hustle.

Joanna Lahey, a professor at Texas A&M University, notes that such lower-paying, less demanding "bridge jobs" help workers stay employed as they approach retirement, and says AI training may be better in some ways than earlier alternatives.

In short, older professionals are choosing AI training as a way to keep using their expertise and stay financially afloat, while recognizing that in the long run they may be training the systems that replace them.
An anonymous reader quotes a report from the Guardian: [Five skilled workers aged 50 and older spoke] to the Guardian about how, after struggling to find work in their fields, they have turned to an emerging and growing category of work: using their expertise to train artificial intelligence models. Known as data annotation, the work involves labeling and evaluating the information used to train AI models like Open AI's ChatGPT or Google's Gemini. A doctor, for example, might review how an AI model answers medical questions to flag incorrect or unsafe responses and suggest better ones, helping the system learn how to generate more accurate and reliable responses. The ultimate goal of training is to level up AI models until they're capable of doing a job as well as a human could -- meaning they could someday replace some of these human workers. The companies behind AI training, such as Mercor, GlobalLogic, TEKsystems, micro1 and Alignerr, operate large contractor networks staffed by people like Ciriello. Their clients include tech giants like OpenAI, Google and Meta, academic researchers and industries including healthcare and finance. For experienced professionals, AI training contracts can be a side hustle -- or a temporary fallback following a layoff -- where top experts can, in some cases, earn over $180 an hour. But that's on the high end. For some older workers [...], it represents another thing entirely: a last refuge in a brutal job market that is harder to stay in, or re-enter, the older they get. For many of them, whether or not they're training their AI replacements in their professions is besides the point. They need the work now. [...] "There's just a lot of desperation out there," Johnson said. 
As opportunities narrow, many turn to what Joanna Lahey, a professor at Texas A&M University who studies age discrimination and labor outcomes, calls "bridge jobs" -- lower-paying, less demanding roles that help workers stay financially afloat as they approach retirement. Historically, that meant taking temp assignments, retail and fast-food work and gig roles like Uber and food delivery. Now, for skilled workers -- engineers, lawyers, nurses or designers, for example -- using their expertise for AI data training is becoming the new bridge job. "[AI] training work may be better in some ways than those earlier alternatives," Lahey told the Guardian. AI training can offer flexibility, quick income and intellectual engagement. But it's often a clear step down. Professionals in fields such as software development, medicine or finance typically earn six-figure salaries that come with benefits and paid leave, according to the US Bureau of Labor Statistics. According to online job postings, AI training gigs start at $20 an hour, with pay increasing to between $30 and $40 an hour. In some cases, AI trainers with coveted subject matter expertise can earn over $100 an hour. AI training is contract-based, though, meaning the pay and hours are unstable, and it often doesn't come with benefits.

Read more of this story at Slashdot.

  •  

Little Snitch Comes To Linux To Expose What Your Software Is Really Doing

✇Slashdot
著者: BeauHD

🤖 AI Summary

Little Snitch, a popular macOS tool that displays which applications are connecting to the internet, is now being developed for Linux. The project began after the developer experimented with Linux and found it strange not knowing about system connections. Unlike existing tools like OpenSnitch, Little Snitch offers a simple user experience by showing which process is making connections and allowing users to block them with a click.

The Linux version of Little Snitch uses eBPF for kernel-level traffic interception, with core components written in Rust and featuring a web-based interface that can monitor remote servers. Initial tests on Ubuntu revealed that the system was relatively quiet; only nine processes made internet connections over a week, compared to more than 100 on macOS.

Applications behave similarly across platforms, however: Firefox triggered telemetry and advertising-related connections, while LibreOffice made no network connections during testing. The early release is intended as a transparency tool rather than a hardened security firewall.

This development aims to provide users with insight into what their software is doing online, enhancing awareness about internet activity without relying solely on command-line utilities or other existing tools.
BrianFagioli writes: Little Snitch, the well known macOS tool that shows which applications are connecting to the internet, is now being developed for Linux. The developer says the project started after experimenting with Linux and realizing how strange it felt not knowing what connections the system was making. Existing tools like OpenSnitch and various command line utilities exist, but none provided the same simple experience of seeing which process is connecting where and blocking it with a click. The Linux version uses eBPF for kernel level traffic interception, with core components written in Rust and a web based interface that can even monitor remote Linux servers. During testing on Ubuntu, the developer noticed the system was relatively quiet on the network. Over the course of a week, only nine system processes made internet connections. By comparison, macOS reportedly showed more than one hundred processes communicating externally. Applications behave similarly across platforms though. Launching Firefox immediately triggered telemetry and advertising related connections, while LibreOffice made no network connections at all during testing. The early release is meant primarily as a transparency tool to show what software is doing on the network rather than a hardened security firewall.
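The hard part of what tools like Little Snitch and OpenSnitch do is attributing a connection to a process. Little Snitch does this in the kernel via eBPF, but the classic userspace approach (essentially what `netstat -p` does) maps a socket to its owner through `/proc`. The sketch below is a minimal illustration of that `/proc`-based mapping on Linux, not Little Snitch's actual implementation:

```python
import os
import socket

def local_tcp_sockets():
    """Parse /proc/net/tcp: map each socket inode to its hex local address."""
    inodes = {}
    with open("/proc/net/tcp") as f:
        next(f)  # skip the header row
        for line in f:
            fields = line.split()
            # field 1 = local_address (hex ip:port), field 9 = socket inode
            inodes[fields[9]] = fields[1]
    return inodes

def owners_of(inode):
    """Find owning PIDs by scanning /proc/<pid>/fd for socket:[inode] links."""
    target = f"socket:[{inode}]"
    pids = []
    for pid in filter(str.isdigit, os.listdir("/proc")):
        try:
            fds = os.listdir(f"/proc/{pid}/fd")
        except (PermissionError, FileNotFoundError):
            continue  # other users' processes, or a process that just exited
        for fd in fds:
            try:
                if os.readlink(f"/proc/{pid}/fd/{fd}") == target:
                    pids.append(int(pid))
                    break
            except OSError:
                continue
    return pids

# Demo: open a TCP socket, then attribute it back to this very process.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))
srv.listen(1)
inode = str(os.fstat(srv.fileno()).st_ino)  # a socket fd's st_ino is its inode
found_in_table = inode in local_tcp_sockets()
owner_pids = owners_of(inode)
srv.close()
```

This polling approach is inherently racy (a short-lived connection can vanish between the two scans), which is one reason Little Snitch intercepts traffic in the kernel with eBPF instead of watching `/proc` from userspace.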

Read more of this story at Slashdot.

  •  