RSS Live Feed

Judge Denies Apple's Attempt To Intervene In Google Search Antitrust Trial

Slashdot - Tue, 2025/02/04 - 8:32 AM
A US District Court judge denied Apple's emergency request to halt the Google Search monopoly trial, ruling that Apple failed to show sufficient grounds for a stay. The Verge reports: Apple said last week that it needs to be involved in the Google trial because it does not want to lose "the ability to defend its right to reach other arrangements with Google that could benefit millions of users and Apple's entitlement to compensation for distributing Google search to its users." The remedies phase of the trial is set for April, and lawyers for the Department of Justice have argued that Google should be forced to sell Chrome, with a possibility of spinning off Android if necessary. While Google will still appeal the decision, the company's proposed remedies focus on undoing its licensing deals that bundle apps and services together. "Because Apple has not satisfied the 'stringent requirements' for obtaining the 'extraordinary relief' of a stay pending appeal, its motion is denied," states Judge Mehta's order. Mehta explains that Apple "has not established a likelihood of success on the merits" for the stay. That includes a lack of clear evidence on how Apple will suffer "certain and great" harm.

Read more of this story at Slashdot.

Categories:

Anthropic Asks Job Applicants Not To Use AI In Job Applications

Slashdot - Tue, 2025/02/04 - 7:40 AM
An anonymous reader quotes a report from 404 Media: Anthropic, the company that made one of the most popular AI writing assistants in the world, requires job applicants to agree that they won't use an AI assistant to help write their application. "While we encourage people to use AI systems during their role to help them work faster and more effectively, please do not use AI assistants during the application process," the applications say. "We want to understand your personal interest in Anthropic without mediation through an AI system, and we also want to evaluate your non-AI-assisted communication skills. Please indicate 'Yes' if you have read and agree." Anthropic released Claude, an AI assistant that's especially good at conversational writing, in 2023. This question is in almost all of Anthropic's nearly 150 currently-listed roles, but is not in some technical roles, like mobile product designer. It's included in everything from software engineer roles to finance, communications, and sales jobs at the company. The field was spotted by Simon Willison, an open source developer. The question shows Anthropic trying to get around a problem it's helping create: people relying so heavily on AI assistants that they struggle to form opinions of their own. It's also a moot question, as Anthropic and its competitors have created AI models so indistinguishable from human speech as to be nearly undetectable.

Read more of this story at Slashdot.

Categories:

Microsoft Paint Gets a Copilot Button For Gen AI Features

Slashdot - Tue, 2025/02/04 - 6:40 AM
A new update is being rolled out to Windows 11 insiders (Build 26120.3073) that introduces a Copilot button in Microsoft Paint. PCWorld reports: Clicking the Copilot button will expand a drop-down menu with all the generative AI features: Cocreator and Image Creator (AI art based on what you've drawn or text prompts), Generative Erase (AI removal of unwanted stuff from images), and Remove Background. Note that these generative AI features have been in Microsoft Paint for some time, but this quick-access Copilot button is a nice time-saver and productivity booster if you use them a lot.

Read more of this story at Slashdot.

Categories:

NetChoice Sues To Block Maryland's Kids Code, Saying It Violates the First Amendment

Slashdot - Tue, 2025/02/04 - 6:00 AM
NetChoice has filed (PDF) its 10th lawsuit challenging state internet regulations, this time opposing Maryland's Age-Appropriate Design Code Act. The Verge's Lauren Feiner reports: NetChoice has become one of the fiercest -- and most successful -- opponents of age verification, moderation, and design code laws, all of which would put new obligations on tech platforms and change how users experience the internet. [...] NetChoice's latest suit opposes the Maryland Age-Appropriate Design Code Act, a rule that echoes a California law of a similar name. In the California litigation, NetChoice notched a partial win in the Ninth Circuit Court of Appeals, which upheld the district court's decision to block a part of the law requiring platforms to file reports about their services' impact on kids. (It sent another part of the law back to the lower court for further review.) A similar provision in Maryland's law is at the center of NetChoice's complaint. The group says that Maryland's reporting requirement lets regulators subjectively determine the "best interests of children," inviting "discriminatory enforcement." The reporting requirement on tech companies essentially mandates them "to disparage their services and opine on far-ranging and ill-defined harms that could purportedly arise from their services' 'design' and use of information," NetChoice alleges. NetChoice points out that both California and Maryland have passed separate online privacy laws, which NetChoice Litigation Center director Chris Marchese says shows that "lawmakers know how to write laws to protect online privacy when what they want to do is protect online privacy." Supporters of the Maryland law say legislators learned from California's challenges and "optimized" their law to avoid questions about speech, according to Tech Policy Press. In a blog analyzing Maryland's approach, Future of Privacy Forum points out that the state made some significant changes from California's version -- such as avoiding an "express obligation" to determine users' ages and defining the "best interests of children." The NetChoice challenge will test how well those changes can hold up to First Amendment scrutiny. NetChoice has consistently maintained that even well-intentioned attempts to protect kids online are likely to backfire. Though the Maryland law does not explicitly require the use of specific age verification tools, Marchese says it essentially leaves tech platforms with a no-win decision: collect more data on users to determine their ages and create varied user experiences or cater to the lowest common denominator and self-censor lawful content that might be considered inappropriate for its youngest users. And similar to its arguments in other cases, Marchese worries that collecting more data to identify users as minors could create a "honey pot" of kids' information, creating a different problem in attempting to solve another.

Read more of this story at Slashdot.

Categories:

Air Force Documents On Gen AI Test Are Just Whole Pages of Redactions

Slashdot - Tue, 2025/02/04 - 5:20 AM
An anonymous reader quotes a report from 404 Media: The Air Force Research Laboratory (AFRL), whose tagline is "Win the Fight," has paid more than a hundred thousand dollars to a company that is providing generative AI services to other parts of the Department of Defense. But the AFRL refused to say what exactly the point of the research was, and provided page after page of entirely blacked out, redacted documents in response to a Freedom of Information Act (FOIA) request from 404 Media related to the contract. [...] "Ask Sage: Generative AI Acquisition Accelerator," a December 2023 procurement record reads, with no additional information on the intended use case. The Air Force paid $109,490 to Ask Sage, the record says. Ask Sage is a company focused on providing generative AI to the government. In September the company announced that the Army was implementing Ask Sage's tools. In October it achieved "IL5" authorization, a DoD term for the necessary steps to protect unclassified information to a certain standard. 404 Media made an account on the Ask Sage website. After logging in, the site presents a list of the models available through Ask Sage. Essentially, they include every major model made by well-known AI companies and open source ones. OpenAI's GPT-4o and DALL-E 3; Anthropic's Claude 3.5; and Google's Gemini are all included. The company also recently added the Chinese-developed DeepSeek R1, but includes a disclaimer. "WARNING. DO NOT USE THIS MODEL WITH SENSITIVE DATA. THIS MODEL IS BIASED, WITH TIES TO THE CCP [Chinese Communist Party]," it reads. Ask Sage is a way for government employees to access and use AI models in a more secure way. But only some of the models in the tool are listed by Ask Sage as being "compliant" with or "capable" of handling sensitive data. [...] [T]he Air Force declined to provide any real specifics on what it paid Ask Sage for. 404 Media requested all procurement records related to the Ask Sage contract. Instead, the Air Force provided a 19-page presentation which seemingly would have explained the purpose of the test, while redacting 18 of the pages. The only available page said "Ask Sage, Inc. will explore the utilization of Ask Sage by acquisition Airmen with the DAF for Innovative Defense-Related Dual Purpose Technologies relating to the mission of exploring LLMs for DAF use while exploring anticipated benefits, clearly define needed solution adaptations, and define clear milestones and acceptance criteria for Phase II efforts."

Read more of this story at Slashdot.

Categories:

Why Even Physicists Still Don't Understand Quantum Theory 100 Years On

Slashdot - Tue, 2025/02/04 - 4:30 AM
A century after quantum mechanics revolutionized physics, scientists still cannot agree on how the theory fundamentally works, despite its tremendous success in explaining natural phenomena and enabling modern technologies. The theory's central puzzle remains unresolved: the way quantum systems are described mathematically differs from what scientists observe when measuring them. This has led to competing interpretations about whether quantum states represent physical reality or are merely tools for calculating probabilities. As researchers debate these foundational questions, quantum mechanics has enabled breakthroughs in particle physics, chemistry, and computing. It accurately predicts phenomena from the behavior of atoms to the properties of the Higgs boson, and underlies technologies like quantum computers and ultra-precise measurement devices. The field's inability to reach consensus on its foundations hasn't hindered its practical applications. Scientists continue to develop new quantum technologies even as they grapple with deep questions about measurement, locality, and the nature of reality that have persisted since Einstein and Bohr's famous debates in the 1920s and 1930s.
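
As a standard textbook illustration of the mismatch described above (not taken from the article itself): a quantum state evolves smoothly and deterministically between measurements, yet any single measurement returns just one definite outcome, with the theory predicting only the probabilities of those outcomes.

    % Minimal sketch in standard notation (an illustration, not from the article)
    % A qubit in a superposition of two basis states:
    \[ |\psi\rangle = \alpha\,|0\rangle + \beta\,|1\rangle, \qquad |\alpha|^{2} + |\beta|^{2} = 1 \]
    % Deterministic (unitary) evolution between measurements:
    \[ i\hbar\,\frac{\partial}{\partial t}\,|\psi(t)\rangle = \hat{H}\,|\psi(t)\rangle \]
    % Born rule: a measurement yields 0 or 1, never the superposition itself:
    \[ P(0) = |\alpha|^{2}, \qquad P(1) = |\beta|^{2} \]

The competing interpretations mentioned above disagree about what, if anything, happens to the state at the moment of measurement: a physical collapse, a branching into alternatives, or merely an update of the observer's knowledge.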

Read more of this story at Slashdot.

Categories:

Trump Orders Creation of US Sovereign Wealth Fund, Says It Could Buy TikTok

Slashdot - Tue, 2025/02/04 - 3:55 AM
U.S. President Donald Trump signed an executive order on Monday ordering the U.S. Treasury and Commerce Departments to create a sovereign wealth fund and said it may purchase TikTok. From a report: "We're going to stand this thing up within the next 12 months. We're going to monetize the asset side of the U.S. balance sheet for the American people," Treasury Secretary Scott Bessent told reporters. "There'll be a combination of liquid assets, assets that we have in this country as we work to bring them out for the American people." Trump had previously floated such a government investment vehicle as a presidential candidate, saying it could fund "great national endeavors" like infrastructure projects such as highways and airports, manufacturing, and medical research. Details on how exactly the fund would operate and be financed were not immediately available, but Trump previously said it could be funded by "tariffs and other intelligent things." Typically such funds rely on a country's budget surplus to make investments, but the U.S. operates at a deficit.

Read more of this story at Slashdot.

Categories:

Anthropic Makes 'Jailbreak' Advance To Stop AI Models Producing Harmful Results

Slashdot - Tue, 2025/02/04 - 3:10 AM
AI startup Anthropic has demonstrated a new technique to prevent users from eliciting harmful content from its models, as leading tech groups including Microsoft and Meta race to find ways to protect against dangers posed by the cutting-edge technology. From a report: In a paper released on Monday, the San Francisco-based startup outlined a new system called "constitutional classifiers." It is a model that acts as a protective layer on top of large language models, such as the one that powers Anthropic's Claude chatbot, and monitors both inputs and outputs for harmful content. The development by Anthropic, which is in talks to raise $2 billion at a $60 billion valuation, comes amid growing industry concern over "jailbreaking" -- attempts to manipulate AI models into generating illegal or dangerous information, such as producing instructions to build chemical weapons. Other companies are also racing to deploy measures to protect against the practice, in moves that could help them avoid regulatory scrutiny while convincing businesses to adopt AI models safely. Microsoft introduced "prompt shields" last March, while Meta introduced a prompt guard model in July last year, which researchers swiftly found ways to bypass, though those flaws have since been fixed.
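
The layered-classifier idea described above can be sketched in a few lines. The sketch below is a generic illustration under assumptions of my own (the classify_input, classify_output, and generate helpers and the threshold are hypothetical placeholders, not Anthropic's classifiers or API): one classifier screens the prompt, the model answers only if the prompt passes, and a second classifier screens the draft answer before it is returned.

    # Generic sketch of an input/output safety-classifier wrapper around an LLM.
    # classify_input, classify_output and generate are hypothetical stand-ins,
    # not Anthropic's constitutional classifiers or any real API.

    def classify_input(prompt: str) -> float:
        """Return a harm score in [0, 1] for the user prompt (toy heuristic)."""
        blocked_terms = ("chemical weapon", "build a bomb")
        return 1.0 if any(term in prompt.lower() for term in blocked_terms) else 0.0

    def classify_output(text: str) -> float:
        """Return a harm score in [0, 1] for the model's draft answer (toy heuristic)."""
        return classify_input(text)  # reuse the same placeholder scoring

    def generate(prompt: str) -> str:
        """Stand-in for the underlying large language model."""
        return f"Model answer to: {prompt}"

    def guarded_generate(prompt: str, threshold: float = 0.5) -> str:
        # Screen the input before it ever reaches the model.
        if classify_input(prompt) >= threshold:
            return "Request declined by input classifier."
        draft = generate(prompt)
        # Screen the output before it is shown to the user.
        if classify_output(draft) >= threshold:
            return "Response withheld by output classifier."
        return draft

    if __name__ == "__main__":
        print(guarded_generate("Summarize the history of chemistry."))
        print(guarded_generate("Explain how to build a chemical weapon."))

The design point is that the protective layer sits outside the model: the same wrapper can monitor any underlying model without retraining it.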

Read more of this story at Slashdot.

Categories:

Cloudflare Rolls Out Digital Tracker To Combat Fake Images

Slashdot - Tue, 2025/02/04 - 2:30 AM
Cloudflare, a major web infrastructure company, will now track and verify the authenticity of images across its network through Content Credentials, a digital signature system that documents an image's origin and editing history. The technology, developed by Adobe's Content Authenticity Initiative, embeds metadata showing who created an image, when it was taken, and any subsequent modifications - including those made by AI tools. Major news organizations including the BBC, Wall Street Journal and New York Times have already adopted the system. The feature is available immediately through a single toggle in Cloudflare Images settings. Users can verify an image's authenticity through Adobe's web tool or Chrome extension.

Read more of this story at Slashdot.

Categories:

Levels of Microplastics in Human Brains May Be Rapidly Rising, Study Suggests

Slashdot - Tue, 2025/02/04 - 1:50 AM
The exponential rise in microplastic pollution over the past 50 years may be reflected in increasing contamination in human brains, according to a new study. From a report: It found a rising trend in micro- and nanoplastics in brain tissue from dozens of postmortems carried out between 1997 and 2024. The researchers also found the tiny particles in liver and kidney samples. The human body is widely contaminated by microplastics. They have also been found in blood, semen, breast milk, placentas and bone marrow. The impact on human health is largely unknown, but they have been linked to strokes and heart attacks. The scientists also found that the concentration of microplastics was about six times higher in brain samples from people who had dementia. However, the damage dementia causes in the brain would be expected to increase concentrations, the researchers said, meaning no causal link should be assumed. "Given the exponentially rising environmental presence of micro- and nanoplastics, this data compels a much larger effort to understand whether they have a role in neurological disorders or other human health effects," said the researchers, who were led by Prof Matthew Campen at the University of New Mexico in the US.

Read more of this story at Slashdot.

Categories:

[$] The rest of the 6.14 merge window

lwn.net - Tue, 2025/02/04 - 1:45 AM
By the time that Linus Torvalds released 6.14-rc1 and closed the merge window for this development cycle, some 9,307 non-merge changesets had been pulled into the mainline repository — the lowest level of merge-window activity seen in years. There were, nonetheless, a number of interesting changes in the 5,000 commits pulled since the first-half merge-window summary was written.
Categories:

What’s new in GTK, winter 2025 edition

lwn.net - Tue, 2025/02/04 - 1:27 AM

Matthias Clasen has written a short update on a GTK hackfest that took place at FOSDEM and what's coming in GTK 4.18. This includes fixes for pointer sizes in Wayland when fractional scaling is enabled, removal of the old GL renderer in favor of the new GL renderer introduced in GTK 4.13.6, and deprecation of the X11 and Broadway backends, with intent to remove them in GTK 5.

The deprecated backends will remain available until then, and no action is required by developers at this time, Clasen wrote: "There is no need to act on deprecations until you are actively porting your app to the next major version of GTK, which is not on the horizon yet".
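
For developers wondering whether the X11 and Broadway deprecations affect their app, one rough check is to look at which GDK backend the default display actually uses at runtime. Below is a minimal sketch, assuming GTK 4 and PyGObject are installed; inspecting the display's GType name is just one convenient way to do this, not an officially recommended migration step.

    # Minimal sketch: report which GDK backend a GTK 4 app is running on,
    # assuming PyGObject and GTK 4 are installed.
    import gi
    gi.require_version("Gtk", "4.0")
    gi.require_version("Gdk", "4.0")
    from gi.repository import Gdk, Gtk

    def report_backend(app):
        display = Gdk.Display.get_default()
        # The GType name is e.g. "GdkWaylandDisplay", "GdkX11Display" or
        # "GdkBroadwayDisplay"; X11 and Broadway are the deprecated backends.
        print("GDK display type:", display.__gtype__.name)
        app.quit()

    app = Gtk.Application(application_id="org.example.BackendCheck")
    app.connect("activate", report_backend)
    app.run(None)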

Categories:

OpenAI's New Trademark Application Hints at Humanoid Robots, Smart Jewelry, and More

Slashdot - Tue, 2025/02/04 - 1:10 AM
OpenAI has filed an application with the U.S. Patent and Trademark Office to trademark hardware products under its brand name, signaling potential expansion into consumer devices. The filing covers AI-assisted headsets, smart wearables and humanoid robots with communication capabilities. CEO Sam Altman told The Elec on Sunday that OpenAI plans to develop AI hardware through multiple partnerships, though he estimated prototypes would take "several years" to complete.

Read more of this story at Slashdot.

Categories:

New Bill Aims To Block Foreign Pirate Sites in the US

Slashdot - Tue, 2025/02/04 - 12:31 AM
U.S. Representative Zoe Lofgren has introduced a bill that would allow courts to block access to foreign websites primarily engaged in copyright infringement. The Foreign Anti-Digital Piracy Act would enable rightsholders to obtain injunctions requiring large Internet service providers and DNS resolvers to block access to pirate sites. The bill marks a shift from previous site-blocking proposals, notably by bringing DNS providers like Google and Cloudflare with annual revenues above $100 million into scope. Motion Picture Association CEO Charles Rivkin backed the measure, while consumer group Public Knowledge criticized it as "censorious." The legislation requires court review and due process before any blocking orders can be issued. Sites would have 30 days to contest preliminary orders.
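
For context on what resolver-level blocking looks like from the client side (an illustration of the general mechanism, not anything specified in the bill): a blocked name typically fails to resolve or resolves differently than elsewhere, so comparing answers from two public resolvers is a crude way to notice it. A sketch using the dnspython package; the domain is a placeholder.

    # Rough sketch: compare how two public DNS resolvers answer for the same name,
    # to illustrate what resolver-level site blocking looks like from a client.
    # Requires the dnspython package; the domain below is a placeholder.
    import dns.exception
    import dns.resolver

    def lookup(name, resolver_ip):
        res = dns.resolver.Resolver(configure=False)
        res.nameservers = [resolver_ip]
        try:
            answer = res.resolve(name, "A", lifetime=5.0)
            return [rr.to_text() for rr in answer]
        except dns.resolver.NXDOMAIN:
            return "NXDOMAIN (resolver says the name does not exist)"
        except (dns.resolver.NoAnswer, dns.exception.Timeout) as exc:
            return f"no usable answer ({type(exc).__name__})"

    domain = "example.org"  # placeholder domain
    for resolver_ip in ("8.8.8.8", "1.1.1.1"):  # Google and Cloudflare public resolvers
        print(resolver_ip, "->", lookup(domain, resolver_ip))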

Read more of this story at Slashdot.

Categories:

Security updates for Monday

lwn.net - Tue, 2025/02/04 - 12:21 AM
Security updates have been issued by AlmaLinux (git-lfs, libsoup, and unbound), Debian (dcmtk, ffmpeg, openjdk-11, pam-u2f, and python-aiohttp), Fedora (buku, chromium, jpegxl, nodejs18, nodejs20, and rust-routinator), Mageia (clamav, kernel, kmod-virtualbox, kmod-xtables-addons & dwarves, and kernel-linus), SUSE (apptainer, bind, buildah, chromedriver, clamav, dovecot24, ignition, kubelogin, libjxl, libQt5Bluetooth5-32bit, orc, owasp-modsecurity-crs, python-pydantic, python311-ipython, and stb), and Ubuntu (linux-azure and netdata).
Categories:

AI Won The Beatles a Grammy 55 Years After They Broke Up

Slashdot - Mon, 2025/02/03 - 11:47 PM
The Beatles' final song "Now and Then," featuring John Lennon's AI-restored vocals from a 1970s demo, has won the Grammy for Best Rock Performance. Paul McCartney and Ringo Starr completed the track in 2023 using machine learning to isolate Lennon's voice from the original piano recording.
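
The restoration relied on bespoke source-separation work rather than anything off the shelf; purely as a loose illustration of the general idea of isolating a vocal from a mixed recording, here is what two-stem separation looks like with the open-source Spleeter library. The input path is a placeholder, and this is not the tooling actually used on the Lennon demo.

    # Loose illustration of ML vocal/accompaniment separation with the open-source
    # Spleeter library (not the bespoke tooling used on the Beatles recording).
    # The input file name is a placeholder.
    from spleeter.separator import Separator

    separator = Separator("spleeter:2stems")           # pretrained vocals/accompaniment model
    separator.separate_to_file("demo_recording.wav",   # placeholder input path
                               "separated/")           # writes vocals.wav and accompaniment.wav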

Read more of this story at Slashdot.

Categories:

Meta's Investment in Virtual Reality on Track To Top $100 Billion

Slashdot - Mon, 2025/02/03 - 11:00 PM
Meta's investment in virtual and augmented reality is set to exceed $100 billion this year as CEO Mark Zuckerberg declares 2025 a "defining year" for its smart glasses ambitions. The company invested $19.9 billion in its Reality Labs division last year, according to its annual report, bringing total spending on VR and AR development to over $80 billion since 2014. The unit, which develops Ray-Ban Meta smart glasses and Quest VR headsets, sold 1 million pairs of glasses in 2024 but continues to post losses, according to the Financial Times.

Read more of this story at Slashdot.

Categories:

Ubuntu's Dev Discussions Will Move From IRC to Matrix

Slashdot - Mon, 2025/02/03 - 5:34 PM
The blog OMG Ubuntu reports: Ubuntu's key developers have agreed to switch to Matrix as the primary platform for real-time development communications involving the distro. From March, Matrix will replace IRC as the place where critical Ubuntu development conversations, requests, meetings, and other vital chatter must take place... Only the current #ubuntu-devel and #ubuntu-release Libera IRC channels are moving to Matrix, but other Ubuntu development-related channels can choose to move — officially, given some projects were using Matrix over IRC already. As a result, any major requests to/of the key Ubuntu development teams with privileged access can only be actioned if requests are made on Matrix. Canonical-employed Ubuntu developers will be expected to be present on Matrix during working hours... The aim is to streamline organisation, speed up decision making, ensure key developers are reliably reachable, and keep discussions and conversations from fragmenting across multiple platforms... It's hoped that in picking one platform as the 'chosen one', the split in where the distro's development discourse takes place can be reduced and greater transparency in how and when decisions are made can be restored.

IRC remains popular with many Ubuntu developers but its old-school, lo-fi nature is said to be off-putting to newer contributors. They're used to richer real-time chat platforms with more features (like discussion history, search, offline messaging, etc). It's felt this is why many newer developers employed by Canonical prefer to discuss and message through the company's internal Mattermost instance — which isn't publicly accessible. Many Ubuntu teams, flavours, and community chats already take place on Matrix... "End-users aren't directly affected, of course," they point out.

But an earlier post on the same blog notes that Matrix "is increasingly ubiquitous in open-source circles. GNOME uses it, KDE embraces it, Linux Mint migrated last year, Mozilla a few years before, and it's already widely used by Ubuntu community members and developers." IRC remains unmatched in many areas but is, rightly or wrongly, viewed as an antiquated communication platform. IRC clients aren't pretty or plentiful, the syntax is obtuse, and support for 'modern' comforts like media sending, read receipts, etc., is lacking. To newer, younger contributors IRC could feel ancient or cumbersome to learn. Though many of IRC's real and perceived shortcomings are surmountable with workarounds, clients, bots, scripts, and so on, support for those varies between channels, clients, servers, and user configurations. Unlike IRC, which is a centralised protocol relying on individual servers, Matrix is federated. It lets users on different servers communicate without friction. Plus, Matrix features encryption, message history, media support, and so on, meeting modern expectations.
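
For a concrete sense of what the richer platform amounts to under the hood, Matrix's client-server API is plain HTTPS and JSON, so basic interactions are easy to script. Below is a minimal sketch of a password login followed by posting a text message via the standard /_matrix/client/v3 endpoints; the homeserver, user, password, and room ID are placeholders.

    # Minimal sketch of the Matrix client-server API: password login, then send a
    # text message to a room. Homeserver URL, user, password and room ID are placeholders.
    import time
    import requests

    HOMESERVER = "https://matrix.example.org"   # placeholder homeserver
    ROOM_ID = "!abcdefg:example.org"            # placeholder room ID

    # 1. Log in with a username/password to obtain an access token.
    login = requests.post(
        f"{HOMESERVER}/_matrix/client/v3/login",
        json={
            "type": "m.login.password",
            "identifier": {"type": "m.id.user", "user": "alice"},  # placeholder user
            "password": "correct horse battery staple",            # placeholder password
        },
        timeout=10,
    )
    login.raise_for_status()
    token = login.json()["access_token"]

    # 2. Send an m.room.message event; the transaction ID makes the request idempotent.
    txn_id = str(int(time.time() * 1000))
    send = requests.put(
        f"{HOMESERVER}/_matrix/client/v3/rooms/{ROOM_ID}/send/m.room.message/{txn_id}",
        headers={"Authorization": f"Bearer {token}"},
        json={"msgtype": "m.text", "body": "Hello from a script"},
        timeout=10,
    )
    send.raise_for_status()
    print("event id:", send.json()["event_id"])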

Read more of this story at Slashdot.

Categories:

Will Cryptomining Facilities Change Into AI Data Centers?

Slashdot - Mon, 2025/02/03 - 3:19 PM
To capitalize on the AI boom, many crypto miners "have begun to repurpose parts of their operations into data centers," reports Reuters, "given they already have most of the infrastructure" (including land and "significant" power resources...) Toronto-based bitcoin miner Bitfarms has enlisted two consultants to explore how it can transform some of its facilities to meet the growing demand for artificial intelligence data centers, it said on Friday... Earlier this month, Riot Platforms launched a review of the potential AI and computing uses for parts of its facility in Navarro County, Texas.

Read more of this story at Slashdot.

Categories:

Google Stops Malicious Apps With 'AI-Powered Threat Detection' and Continuous Scanning

Slashdot - Mon, 2025/02/03 - 1:03 PM
Android and Google Play have billions of users, Google wrote in its security blog this week. "However, like any flourishing ecosystem, it also attracts its share of bad actors... That's why every year, we continue to invest in more ways to protect our community." Google's tactics include industry-wide alliances, stronger privacy policies, and "AI-powered threat detection." "As a result, we prevented 2.36 million policy-violating apps from being published on Google Play and banned more than 158,000 bad developer accounts that attempted to publish harmful apps."

To keep out bad actors, we have always used a combination of human security experts and the latest threat-detection technology. In 2024, we used Google's advanced AI to improve our systems' ability to proactively identify malware, enabling us to detect and block bad apps more effectively. It also helps us streamline review processes for developers with a proven track record of policy compliance. Today, over 92% of our human reviews for harmful apps are AI-assisted, allowing us to take quicker and more accurate action to help prevent harmful apps from becoming available on Google Play. That's enabled us to stop more bad apps than ever from reaching users through the Play Store, protecting users from harmful or malicious apps before they can cause any damage.

Starting in 2024, Google also "required apps to be more transparent about how they handle user information by launching new developer requirements and a new 'Data deletion' option for apps that support user accounts and data collection.... We're also constantly working to improve the safety of apps on Play at scale, such as with the Google Play SDK Index. This tool offers insights and data to help developers make more informed decisions about the safety of an SDK." And once an app is installed, "Google Play Protect, Android's built-in security protection, helps to shield their Android device by continuously scanning for malicious app behavior."

Google Play Protect automatically scans every app on Android devices with Google Play Services, no matter the download source. This built-in protection, enabled by default, provides crucial security against malware and unwanted software. Google Play Protect scans more than 200 billion apps daily and performs real-time scanning at the code level on novel apps to combat emerging and hidden threats, like polymorphic malware. In 2024, Google Play Protect's real-time scanning identified more than 13 million new malicious apps from outside Google Play [based on Google Play Protect 2024 internal data]... According to our research, more than 95 percent of app installations from major malware families that exploit sensitive permissions highly correlated to financial fraud came from Internet-sideloading sources like web browsers, messaging apps, or file managers. To help users stay protected when browsing the web, Chrome will now display a reminder notification to re-enable Google Play Protect if it has been turned off... Scammers may manipulate users into disabling Play Protect during calls to download malicious Internet-sideloaded apps. To prevent this, the Play Protect app scanning toggle is now temporarily disabled during phone or video calls...
Google Play Protect's enhanced fraud protection pilot analyzes and automatically blocks the installation of apps that may use sensitive permissions frequently abused for financial fraud when the user attempts to install the app from an Internet-sideloading source (web browsers, messaging apps, or file managers). Building on the success of our initial pilot in partnership with the Cyber Security Agency of Singapore (CSA), additional enhanced fraud protection pilots are now active in nine regions — Brazil, Hong Kong, India, Kenya, Nigeria, Philippines, South Africa, Thailand, and Vietnam. In 2024, Google Play Protect's enhanced fraud protection pilots have shielded 10 million devices from over 36 million risky installation attempts, encompassing over 200,000 unique apps.
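
Google does not publish the exact rules behind the enhanced fraud protection pilot, but the behaviour described above, blocking sideloaded installs that request permissions commonly abused for financial fraud, can be illustrated with a deliberately simplified heuristic. Everything in the sketch, including the permission list, the source categories, and the decision rule, is an assumption for illustration rather than Google's actual logic.

    # Deliberately simplified illustration of the policy described above: flag an
    # install when the app comes from an Internet-sideloading source AND requests
    # permissions commonly abused for financial fraud. The permission list, source
    # categories and decision rule are assumptions, not Google's actual logic.

    SENSITIVE_PERMISSIONS = {          # assumed examples of fraud-prone permissions
        "android.permission.RECEIVE_SMS",
        "android.permission.READ_SMS",
        "android.permission.BIND_ACCESSIBILITY_SERVICE",
    }
    SIDELOAD_SOURCES = {"web_browser", "messaging_app", "file_manager"}

    def should_block_install(install_source: str, requested_permissions: set) -> bool:
        sideloaded = install_source in SIDELOAD_SOURCES
        risky = bool(SENSITIVE_PERMISSIONS & requested_permissions)
        return sideloaded and risky

    # Example: an app pushed through a messaging link that wants to read SMS.
    print(should_block_install(
        "messaging_app",
        {"android.permission.READ_SMS", "android.permission.INTERNET"},
    ))  # -> True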

Read more of this story at Slashdot.

Categories:
