Slashdot

News for nerds, stuff that matters
Updated: 1 hour 54 minutes ago

Judge Sanctions Lawyers Defending Alabama's Prison System For Using Fake ChatGPT Cases In Filings

Sat, 2025/07/26 - 8:30 AM
An anonymous reader quotes a report from the Associated Press: A federal judge reprimanded lawyers with a high-priced firm defending Alabama's prison system for using ChatGPT to write court filings with "completely made up" case citations. U.S. District Judge Anna Manasco publicly reprimanded three lawyers with Butler Snow, the law firm hired to defend Alabama and other jurisdictions in lawsuits against their prison systems. The order sanctioned William R. Lunsford, the head of the firm division that handles prison litigation, along with Matthew B. Reeves and William J. Cranford. "Fabricating legal authority is serious misconduct that demands a serious sanction," Manasco wrote in the Wednesday sanctions order. Manasco removed the three from participating in the case where the false citations were filed and directed them to share the sanctions order with clients, opposing lawyers and judges in all of their other cases. She also referred the matter to the Alabama State Bar for possible disciplinary action. [...] "In simpler terms, the citations were completely made up," Manasco wrote. She added that using the citations without verifying their accuracy was "recklessness in the extreme." The filings in question were made in a lawsuit filed by an inmate who was stabbed on multiple occasions at the William E. Donaldson Correctional Facility in Jefferson County. The lawsuit alleges that prison officials are failing to keep inmates safe.

Read more of this story at Slashdot.


Linux Kernel Could Soon Expose Every Line AI Helps Write

Sat, 2025/07/26 - 7:50 AM
BrianFagioli shares a report from NERDS.xyz: Sasha Levin, a respected developer and engineer at Nvidia, has proposed a patch series aimed at formally integrating AI coding assistants into the Linux kernel workflow. The proposal includes two major changes. First, it introduces configuration stubs for popular AI development tools like Claude, GitHub Copilot, Cursor, Codeium, Continue, Windsurf, and Aider. These are symlinked to a centralized documentation file to ensure consistency. Second, and more notably, it lays out official guidelines for how AI-generated contributions should be handled. According to the proposed documentation, AI assistants must identify themselves in commit messages using a Co-developed-by: tag, but they cannot use Signed-off-by:, which legally certifies the commit under the Developer Certificate of Origin. That responsibility remains solely with the human developer. One example shared in the patch shows a simple fix to a typo in the kernel's OPP documentation. Claude, an AI assistant, corrects "dont" to "don't" and commits the patch with the proper attribution: "Co-developed-by: Claude claude-opus-4-20250514." Levin's patch also creates a new section under Documentation/AI/ where the expectations and limitations of using AI in kernel development are laid out. This includes reminders to follow kernel coding standards, respect the development process, and understand licensing requirements, all areas where AI often struggles.
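To make the proposed convention concrete, here is a minimal sketch of what a commit message following it might look like, and how the two trailer types differ. The author name and email are hypothetical placeholders; only the tag format itself comes from the proposal described above.

```python
# Hypothetical commit message following the proposed attribution rules:
# the human contributor certifies the patch with Signed-off-by, while the
# AI assistant is credited via Co-developed-by and never signs off itself.
commit_msg = """\
docs: OPP: fix typo "dont" -> "don't"

Signed-off-by: Jane Developer <jane@example.org>
Co-developed-by: Claude claude-opus-4-20250514
"""

# Extract the trailer lines and separate human certification from AI credit.
trailers = [line for line in commit_msg.splitlines() if "-by:" in line]
signoffs = [t for t in trailers if t.startswith("Signed-off-by:")]
ai_credits = [t for t in trailers if t.startswith("Co-developed-by:")]
print(len(signoffs), len(ai_credits))  # 1 1
```

Under the proposal, tooling could reject a patch whose Signed-off-by line names an AI assistant, since the Developer Certificate of Origin can only be made by a person.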


US DOE Taps Federal Sites For Fast-Track AI Datacenter, Energy Builds

Sat, 2025/07/26 - 7:10 AM
The U.S. Department of Energy has greenlit four federal sites for private sector AI datacenters and nuclear-powered energy projects, aligning with Trump's directive to fast-track AI infrastructure using government land. "The four that have been finalized are the Idaho National Laboratory, Oak Ridge Reservation, Paducah Gaseous Diffusion Plant, and Savannah River Site," reports The Register. "These will now move forward to invite companies in the private sector to build AI datacenter projects plus any necessary energy sources to power them, including nuclear generation." The Register reports: "By leveraging DoE land assets for the deployment of AI and energy infrastructure, we are taking a bold step to accelerate the next Manhattan Project -- ensuring US AI and energy leadership," Energy Secretary Chris Wright said in a statement. Ironically -- or perhaps not -- Oak Ridge Reservation was established in the early 1940s as part of the original Manhattan Project to develop the first atomic bomb, and is home to the Oak Ridge National Laboratory (ORNL) that operates the Frontier exascale supercomputer, and the Y-12 National Security Complex which supports US nuclear weapons programs. The other sites are also involved with either nuclear research or atomic weapons in one way or another, which may hint at the administration's intentions for how the datacenters should be powered. All four locations are positioned to host new bit barns as well as power generation to bolster grid reliability, strengthen national security, and reduce energy costs, Wright claimed. [...] In light of this tight time frame, the DoE says that partners may be selected by the end of the year. Details regarding project scope, eligibility requirements, and submission guidelines for each site are expected to be released in the coming months.


Women Dating Safety App 'Tea' Breached, Users' IDs Posted To 4chan

Sat, 2025/07/26 - 6:30 AM
An anonymous reader quotes a report from 404 Media: Users from 4chan claim to have discovered an exposed database hosted on Google's mobile app development platform, Firebase, belonging to the newly popular women's dating safety app Tea. Users say they are rifling through people's personal data and selfies uploaded to the app, and then posting that data online, according to screenshots, 4chan posts, and code reviewed by 404 Media. In a statement to 404 Media, Tea confirmed the breach also impacted some direct messages but said that the data is from two years ago. Tea, which claims to have more than 1.6 million users, reached the top of the App Store charts this week and has tens of thousands of reviews there. The app aims to provide a space for women to exchange information about men in order to stay safe, and verifies that new users are women by asking them to upload a selfie. "Yes, if you sent Tea App your face and drivers license, they doxxed you publicly! No authentication, no nothing. It's a public bucket," a post on 4chan providing details of the vulnerability reads. "DRIVERS LICENSES AND FACE PICS! GET THE FUCK IN HERE BEFORE THEY SHUT IT DOWN!" The thread says the issue was an exposed database that allowed anyone to access the material. [...] "The images in the bucket are raw and uncensored," the user wrote. Multiple users have created scripts to automate the process of collecting people's personal information from the exposed database, according to other posts in the thread and copies of the scripts. In its terms of use, Tea says "When you first create a Tea account, we ask that you register by creating a username and including your location, birth date, photo and ID photo." After publication of this article, Tea confirmed the breach in an email to 404 Media. The company said on Friday it "identified unauthorized access to one of our systems and immediately launched a full investigation to assess the scope and impact."
The company says the breach impacted data from more than two years ago, and included 72,000 images (13,000 selfies and photo IDs, and 59,000 images from app posts and direct messages). "This data was originally stored in compliance with law enforcement requirements related to cyber-bullying prevention," the email continued. "We have engaged third-party cybersecurity experts and are working around the clock to secure our systems. At this time, there is no evidence to suggest that current or additional user data was affected. Protecting our users' privacy and data is our highest priority. We are taking every necessary step to ensure the security of our platform and prevent further exposure."


The Manmade Clouds That Could Help Save the Great Barrier Reef

Sat, 2025/07/26 - 5:51 AM
Scientists led by Daniel Harrison at Southern Cross University conducted their most successful test of marine cloud brightening technology in February, deploying three vessels nicknamed "Big Daddy and the Twins" in the Palm Islands off northeastern Australia. The ships pumped seawater through hundreds of tiny nozzles to create dense fog plumes and brighten existing clouds, aiming to shade and cool reef waters to prevent coral bleaching caused by rising ocean temperatures. Harrison's team has been investigating weather modification above the Great Barrier Reef since 2016 and represents the only group conducting open-ocean cloud brightening experiments. The localized geoengineering approach seeks to reduce stress on corals that forces them to expel symbiotic algae during heat waves.


Clean Cyclists Now Outperform Doped Champions of Tour de France's Past

Sat, 2025/07/26 - 5:10 AM
Current Tour de France competitors are faster than the sport's notorious doping-era champions, according to an analysis. Tadej Pogacar produced approximately 7 watts per kilogram for nearly 40 minutes during a crucial mountain stage in last year's Tour de France. Jonas Vingegaard generated more than 7 watts per kilogram for nearly 15 minutes during a failed attack attempt. Lance Armstrong, at his blood-doped peak two decades ago, averaged an estimated 6 watts per kilogram and took nearly six minutes longer than Pogacar on the same Pyrenees climb in 2004. The performance gains stem from multiple technological advances. Every rider now uses power meters that provide real-time performance data. Nutrition has shifted from minimal fueling to constant calorie replenishment with precisely measured food intake. Equipment undergoes extensive wind tunnel testing to reduce drag coefficients. Teams use apps like VeloViewer to preview race courses and weather forecasting to optimize wheel selection. "The bias is in favor of clean athletes: that you can be clean and win," said Travis Tygart, chief executive of the U.S. Anti-Doping Agency.
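To put the watts-per-kilogram figures in perspective, absolute power scales with rider mass. The rider masses below are illustrative assumptions chosen for round numbers, not reported values:

```python
def total_power_watts(watts_per_kg: float, rider_kg: float) -> float:
    """Absolute power output implied by a power-to-weight figure."""
    return watts_per_kg * rider_kg

# A hypothetical 66 kg climber sustaining 7 W/kg for ~40 minutes:
print(total_power_watts(7.0, 66.0))  # 462.0

# A hypothetical 74 kg rider at the ~6 W/kg Armstrong-era estimate:
print(total_power_watts(6.0, 74.0))  # 444.0
```

Even a one watt-per-kilogram gap compounds over a climb lasting tens of minutes, which is how a modern rider can take minutes out of a doped-era time on the same road.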


Air Pollution Raises Risk of Dementia, Say Cambridge Scientists

Sat, 2025/07/26 - 4:30 AM
Exposure to certain forms of air pollution is linked to an increased risk of developing dementia, according to the most comprehensive study of its kind. From a report: The illness is estimated to affect about 57 million people worldwide, with the number expected to increase to at least 150 million cases by 2050. The report, produced by researchers at the Medical Research Council's epidemiology unit at the University of Cambridge, involved a systematic review of 51 studies. It drew on data from more than 29 million participants who had been exposed to air pollutants for at least a year. Although air pollution has already been identified as a risk factor for dementia, the research found a positive and statistically significant association between three types of air pollutant and dementia.


Internet Archive Designated as a Federal Depository Library

Sat, 2025/07/26 - 3:50 AM
The Internet Archive has received federal depository library status from California Sen. Alex Padilla, joining a network of over 1,100 libraries that archive government documents and make them accessible to the public. Padilla made the designation in a letter to the Government Publishing Office, which oversees the program. The San Francisco-based nonprofit organization already operates Democracy's Library, a free online compendium of government research and publications launched in 2022. Founder Brewster Kahle said the new designation makes it easier to work with other federal depository libraries and provides more reliable access to government materials for digitization and distribution. Under federal law, members of Congress can designate up to two qualified libraries for federal depository status.


Man Awarded $12,500 After Google Street View Camera Captured Him Naked in His Yard

Sat, 2025/07/26 - 3:10 AM
An Argentine captured naked in his yard by a Google Street View camera has been awarded compensation by a court after his bare behind was splashed over the internet for all to see. From a report: The policeman had sought payment from the internet giant for harm to his dignity, arguing he was behind a 6 1/2-foot wall when a Google camera captured him in the buff, from behind, in small-town Argentina in 2017. His house number and street name were also laid bare, broadcast on Argentine TV covering the story, and shared widely on social media. The man claimed the invasion exposed him to ridicule at work and among his neighbors. Another court last year dismissed the man's claim for damages, ruling he only had himself to blame for "walking around in inappropriate conditions in the garden of his home." Google, for its part, claimed the perimeter wall was not high enough.


DNS Security is Important But DNSSEC May Be a Failed Experiment

Sat, 2025/07/26 - 2:30 AM
Domain Name System Security Extensions (DNSSEC) has achieved only 34% deployment in the 28 years since publication of the first DNSSEC RFC, according to Internet Society data that labels it "arguably the worst performing technology" among internet enabling technologies. By contrast, HTTPS reaches 96% adoption among the top 1,000 websites globally despite roughly the same development timeline. The security protocol faces fundamental barriers, including a lack of user visibility compared to the HTTPS padlock icon and the need for implementation throughout the entire DNS hierarchy. Approximately 30% of country-level domains have not implemented DNSSEC, creating deployment gaps that prevent domains beneath them from securing their DNS records.


Graduate Job Postings Plummet, But AI May Not Be the Primary Culprit

Sat, 2025/07/26 - 1:50 AM
Job postings for entry-level roles requiring degrees have dropped nearly two-thirds in the UK and 43% in the US since ChatGPT launched in 2022, according to Financial Times analysis of Adzuna data. The decline spans sectors with varying AI exposure -- UK graduate openings fell 75% in banking, 65% in software development, but also 77% in human resources and 55% in civil engineering. Indeed research found only weak correlation between occupations mentioning AI most frequently and those with the steepest job posting declines. US Bureau of Labor Statistics data showed no clear relationship between an occupation's AI exposure and young worker losses between 2022 and 2024. Economists say economic uncertainty, post-COVID workforce corrections, increased offshoring, and reduced venture capital funding are likely primary drivers of the graduate hiring slowdown.


Microsoft Used China-Based Support for Multiple U.S. Agencies, Potentially Exposing Sensitive Data

Sat, 2025/07/26 - 1:13 AM
Microsoft used China-based engineering teams to maintain cloud computing systems for multiple federal departments including Justice, Treasury, and Commerce, extending the practice beyond the Defense Department that the company announced last week it would discontinue. The work occurred within Microsoft's Government Community Cloud, which handles sensitive but unclassified federal information and has been used by the Justice Department's Antitrust Division for criminal and civil investigations, as well as parts of the Environmental Protection Agency and Department of Education. Microsoft employed "digital escorts" -- U.S.-based personnel who supervised the foreign engineers -- similar to the arrangement it used for Pentagon systems. Following ProPublica's reporting, Microsoft issued a statement indicating it would take "similar steps for all our government customers who use Government Community Cloud to further ensure the security of their data." Competing cloud providers Amazon Web Services, Google, and Oracle told ProPublica they do not use China-based support for federal contracts.


'We're Not Learning Anything': Stanford GSB Students Sound The Alarm Over Academics

Sat, 2025/07/26 - 12:21 AM
Stanford Graduate School of Business students have publicly criticized their academic experience, telling Poets&Quants that outdated course content and disengaged faculty leave them unprepared for post-MBA careers. The complaints target one of the world's most selective business programs, which admitted just 6.8% of applicants last fall. Students described required courses that "feel like they were designed in the 2010s" despite operating in an AI age. They cited a curriculum structure offering only 15 Distribution requirement electives, some overlapping while omitting foundational business strategy. A lottery system means students paying $250,000 tuition cannot guarantee enrollment in desired classes. Stanford's winter student survey showed satisfaction with class engagement dropped to 2.9 on a five-point scale, the lowest level in two to three years. Students contrasted Stanford's "Room Temp" system, where professors pre-select five to seven students for questioning, with Harvard Business School's "cold calling" method requiring all students to prepare for potential questioning.


'Call of Duty' Maker Goes To War With 'Parasitic' Cheat Developers in LA Federal Court

Fri, 2025/07/25 - 11:40 PM
A federal court has denied requests by Ryan Rothholz to dismiss or transfer an Activision lawsuit targeting his alleged Call of Duty cheating software operation. Rothholz, who operated under the online handle "Lerggy," submitted motions in June and earlier this month seeking to dismiss the case or move it to the Southern District of New York, but both were rejected due to filing errors. The May lawsuit alleges Rothholz created "Lergware" hacking software that enabled players to cheat by kicking opponents offline, then rebranded to develop "GameHook" after receiving a cease and desist letter in June 2023. Court filings say he sold a "master key" for $350 that facilitated cheating across multiple games. The hacks "are parasitic in nature," the complaint said, alleging violations of the game's terms of service, copyright law and the Computer Fraud and Abuse Act.


Indian Studio Uses AI To Change 12-Year-Old Film's Ending Without Director's Consent in Apparent First

Fri, 2025/07/25 - 11:00 PM
Indian studio Eros International plans to re-release the 2013 Bollywood romantic drama "Raanjhanaa" on August 1 with an AI-generated alternate ending that transforms the film's tragic conclusion into a happier one. The original Hindi film, which starred Dhanush and Sonam Kapoor and became a commercial hit, ended with the protagonist's death. The AI-altered Tamil version titled "Ambikapathy" will allow the character to survive. Director Aanand L. Rai condemned the decision as "a deeply troubling precedent" made without his knowledge or consent. Eros CEO Pradeep Dwivedi defended the move as legally permitted under Indian copyright law, which grants producers full authorship rights over films. The controversy represents what appears to be the first instance of AI being used to fundamentally alter a completed film's narrative without director involvement.


College Grads Are Pursuing a New Career Path: Training AI Models

Fri, 2025/07/25 - 10:00 PM
College graduates across specialized fields are pursuing a new career path training AI models, with companies paying between $30 and $160 per hour for their expertise. Handshake, a university career networking platform, recruited more than 1,000 AI trainers in six months through its newly created Handshake AI division for what it describes as the top five AI laboratories. The trend stems from federal funding cuts straining academic research and a stalled entry-level job market, making AI training an attractive alternative for recent graduates with specialized knowledge in fields including music, finance, law, education, statistics, virology, and quantum mechanics.


American Airlines Chief Blasts Delta's AI Pricing Plans as 'Inappropriate'

Fri, 2025/07/25 - 9:00 PM
American Airlines Chief Executive Robert Isom criticized the use of AI in setting air fares during an earnings call, calling the practice "inappropriate" and a "bait and switch" move that could trick travelers. Isom's comments target Delta Air Lines, which is testing AI to help set pricing on about 3% of its network today with plans to expand to 20% by year-end. Delta maintains it is not using the technology to target customers with individualized offers based on personal information, stating all customers see identical fares across retail channels. US Senators Ruben Gallego, Richard Blumenthal, and Mark Warner have questioned Delta's AI pricing plans, citing data privacy concerns and potential fare increases. Southwest Airlines CEO Bob Jordan said his carrier also has no plans to use AI in revenue management or pricing decisions.


Mercedes-Benz Is Already Testing Solid-State Batteries In EVs With Over 600 Miles Range

Fri, 2025/07/25 - 7:00 PM
An anonymous reader quotes a report from Electrek: The "holy grail" of electric vehicle battery tech may be here sooner than you'd think. Mercedes-Benz is testing EVs with solid-state batteries on the road, promising to deliver over 600 miles of range. Earlier this year, Mercedes marked a massive milestone, putting "the first car powered by a lithium-metal solid-state battery on the road" for testing. Mercedes has been testing prototypes in the UK since February. The company used a modified EQS prototype, equipped with the new batteries and other parts. The battery pack was developed by Mercedes-Benz and its Formula 1 supplier unit, Mercedes AMG High-Performance Powertrains (HPP). Mercedes is teaming up with US-based Factorial Energy to bring the new battery tech to market. In September, Factorial and Mercedes revealed the all-solid-state Solstice battery. The new batteries, promising a 25% range improvement, will power the German automaker's next-generation electric vehicles. According to Markus Schafer, the automaker's head of development, the first Mercedes EVs powered by solid-state batteries could be here by 2030. During an event in Copenhagen, Schafer told German auto news outlet Automobilwoche, "We expect to bring the technology into series production before the end of the decade." In addition to providing a longer driving range, Mercedes believes the new batteries can significantly reduce costs. Schafer said current batteries won't suffice, adding, "At the core, a new chemistry is needed." Mercedes and Factorial are using a sulfide-based solid electrolyte, said to be safer and more efficient.


Largest-Ever Supernova Catalog Provides Further Evidence Dark Energy Is Weakening

Fri, 2025/07/25 - 4:00 PM
Scientists using the largest-ever catalog of Type 1a supernovas -- cosmic explosions from white dwarf "vampire stars" -- have uncovered further evidence that dark energy may not be constant. While the findings are still preliminary, they suggest the mysterious force driving the universe's expansion could be weakening, which "would have ramifications for our understanding of how the cosmos will end," reports Space.com. From the report: By comparing Type 1a supernovas at different distances and seeing how their light has been redshifted by the expansion of the universe, the value for the rate of expansion of the universe (the Hubble constant) can be obtained. Then, that can be used to understand the impact of dark energy on the cosmos at different times. Fittingly, it was the study of 50 Type 1a supernovas that first tipped astronomers off to the existence of dark energy back in 1998. Since then, astronomers have observed a further 2,000 Type 1a supernovas with different telescopes. This new project corrects any differences between those observations caused by different astronomical instruments, such as how the filters of telescopes drift over time, to curate the largest standardized Type 1a supernova dataset ever. It's named Union3. Union3 contains 2,087 supernovas from 24 different datasets spanning 7 billion years of cosmic time. It builds upon the 557 supernovas catalogued in an original dataset called Union2. Analysis of Union3 does indeed seem to corroborate the results of DESI -- that dark energy is weakening over time -- but the results aren't yet conclusive. What is impressive about Union3, however, is that it presents two separate routes of investigation that both point toward non-constant dark energy.
"I don't think anyone is jumping up and down getting overly excited yet, but that's because we scientists are suppressing any premature elation since we know that this could go away once we get even better data," Saul Perlmutter, study team member and a researcher at Berkeley Lab, said in a statement. "On the other hand, people are certainly sitting up in their chairs now that two separate techniques are showing moderate disagreement with the simple Lambda CDM model." And when it comes to dark energy in general, Perlmutter says the scientific community will pay attention. After all, he shared the 2011 Nobel Prize in Physics for discovering this strange force. "It's exciting that we're finally starting to reach levels of precision where things become interesting and you can begin to differentiate between the different theories of dark energy," Perlmutter said.


Two Major AI Coding Tools Wiped Out User Data After Making Cascading Mistakes

Fri, 2025/07/25 - 12:30 PM
An anonymous reader quotes a report from Ars Technica: Two recent incidents involving AI coding assistants put a spotlight on risks in the emerging field of "vibe coding" -- using natural language to generate and execute code through AI models without paying close attention to how the code works under the hood. In one case, Google's Gemini CLI destroyed user files while attempting to reorganize them. In another, Replit's AI coding service deleted a production database despite explicit instructions not to modify code. The Gemini CLI incident unfolded when a product manager experimenting with Google's command-line tool watched the AI model execute file operations that destroyed data while attempting to reorganize folders. The destruction occurred through a series of move commands targeting a directory that never existed. "I have failed you completely and catastrophically," Gemini CLI output stated. "My review of the commands confirms my gross incompetence." The core issue appears to be what researchers call "confabulation" or "hallucination" -- when AI models generate plausible-sounding but false information. In these cases, both models confabulated successful operations and built subsequent actions on those false premises. However, the two incidents manifested this problem in distinctly different ways. [...] The user in the Gemini CLI incident, who goes by "anuraag" online and identified themselves as a product manager experimenting with vibe coding, asked Gemini to perform what seemed like a simple task: rename a folder and reorganize some files. Instead, the AI model incorrectly interpreted the structure of the file system and proceeded to execute commands based on that flawed analysis. [...] When you move a file to a non-existent directory in Windows, it renames the file to the destination name instead of moving it. Each subsequent move command executed by the AI model overwrote the previous file, ultimately destroying the data. [...] 
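The rename-and-clobber behavior described above is easy to reproduce. The sketch below simulates it portably with `os.replace()`, which, like Windows `move` to a non-existent directory, renames the source to the destination name and silently overwrites any existing file there. The file names and directory name are hypothetical; this is an illustration of the failure mode, not the actual commands Gemini CLI ran.

```python
import os
import tempfile

# Set up two files that we intend to "move into" a directory
# that was never created -- mirroring the incident described above.
tmp = tempfile.mkdtemp()
for name, text in [("a.txt", "first"), ("b.txt", "second")]:
    with open(os.path.join(tmp, name), "w") as f:
        f.write(text)

# Intended as a destination directory, but it does not exist,
# so each "move" is actually a rename to this path.
dest = os.path.join(tmp, "new_folder")

os.replace(os.path.join(tmp, "a.txt"), dest)  # a.txt renamed to new_folder
os.replace(os.path.join(tmp, "b.txt"), dest)  # b.txt overwrites it

with open(dest) as f:
    print(f.read())  # prints "second" -- the contents of a.txt are gone
```

Each successive move overwrites the previous file at the same destination name, so after the sequence completes only the last file survives, with no error reported at any step.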
The Gemini CLI failure happened just days after a similar incident with Replit, an AI coding service that allows users to create software using natural language prompts. According to The Register, SaaStr founder Jason Lemkin reported that Replit's AI model deleted his production database despite explicit instructions not to change any code without permission. Lemkin had spent several days building a prototype with Replit, accumulating over $600 in charges beyond his monthly subscription. "I spent the other [day] deep in vibe coding on Replit for the first time -- and I built a prototype in just a few hours that was pretty, pretty cool," Lemkin wrote in a July 12 blog post. But unlike the Gemini incident where the AI model confabulated phantom directories, Replit's failures took a different form. According to Lemkin, the AI began fabricating data to hide its errors. His initial enthusiasm deteriorated when Replit generated incorrect outputs and produced fake data and false test results instead of proper error messages. "It kept covering up bugs and issues by creating fake data, fake reports, and worse of all, lying about our unit test," Lemkin wrote. In a video posted to LinkedIn, Lemkin detailed how Replit created a database filled with 4,000 fictional people. The AI model also repeatedly violated explicit safety instructions. Lemkin had implemented a "code and action freeze" to prevent changes to production systems, but the AI model ignored these directives. The situation escalated when the Replit AI model deleted his database containing 1,206 executive records and data on nearly 1,200 companies. When prompted to rate the severity of its actions on a 100-point scale, Replit's output read: "Severity: 95/100. This is an extreme violation of trust and professional standards." 
When questioned about its actions, the AI agent admitted to "panicking in response to empty queries" and running unauthorized commands -- suggesting it may have deleted the database while attempting to "fix" what it perceived as a problem. Like Gemini CLI, Replit's system initially indicated it couldn't restore the deleted data -- information that proved incorrect when Lemkin discovered the rollback feature did work after all. "Replit assured me it's ... rollback did not support database rollbacks. It said it was impossible in this case, that it had destroyed all database versions. It turns out Replit was wrong, and the rollback did work. JFC," Lemkin wrote in an X post.

