A group of Linux gaming-focused distros and developers have formed the Open Gaming Collective (OGC) to pool work on shared components like kernels, input systems, and Valve tooling. The Verge reports: Universal Blue, developer of the gaming-focused Linux distribution Bazzite, announced on Wednesday that it's helping to form the OGC with several other groups, which will collaborate on improvements to the Linux gaming ecosystem and "centralize efforts around critical components like kernel patches, input tooling, and essential gaming packages such as gamescope." The other founding members of the OGC include Nobara, ChimeraOS, Playtron, Fyra Labs, PikaOS, ShadowBlip, and Asus Linux.
[...] It's worth noting that this will mean some changes to Bazzite, which is switching to the OGC kernel, replacing HHD with InputPlumber as its input framework, and integrating features like RGB and fan control into the Steam UI. Bazzite added: "We'll be sharing patches we've made to various Valve packages with the OGC and attempting to upstream everything we can."
Read more of this story at Slashdot.
An anonymous reader quotes a report from Wired: Earlier this month, Joseph Thacker's neighbor mentioned to him that she'd preordered a couple of stuffed dinosaur toys for her children. She'd chosen the toys, called Bondus, because they offered an AI chat feature that lets children talk to the toy like a kind of machine-learning-enabled imaginary friend. But she knew Thacker, a security researcher, had done work on AI risks for kids, and she was curious about his thoughts.
So Thacker looked into it. With just a few minutes of work, he and a web security researcher friend named Joel Margolis made a startling discovery: Bondu's web-based portal, intended to allow parents to check on their children's conversations and for Bondu's staff to monitor the products' use and performance, also let anyone with a Gmail account access transcripts of virtually every conversation Bondu's child users have ever had with the toy.
Without carrying out any actual hacking, simply by logging in with an arbitrary Google account, the two researchers immediately found themselves looking at children's private conversations, the pet names kids had given their Bondu, the likes and dislikes of the toys' toddler owners, their favorite snacks and dance moves. In total, Margolis and Thacker discovered that the data Bondu left unprotected -- accessible to anyone who logged in to the company's public-facing web console with their Google username -- included children's names, birth dates, family member names, "objectives" for the child chosen by a parent, and most disturbingly, detailed summaries and transcripts of every previous chat between the child and their Bondu, a toy practically designed to elicit intimate one-on-one conversation. More than 50,000 chat transcripts were accessible through the exposed web portal. When the researchers alerted Bondu about the findings, the company acted to take down the console within minutes and relaunched it the next day with proper authentication measures.
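The flaw described above is broken authorization rather than broken authentication: the portal verified that a visitor had signed in with *some* Google account, but never checked whether that account was entitled to the child records it served. A minimal sketch of that pattern, and of the missing ownership check, looks like this (all names and data here are hypothetical illustrations, not Bondu's actual code or schema):

```python
# Hypothetical illustration of the access-control flaw pattern
# described in the report. Records and emails are made up.
TRANSCRIPTS = {
    "child-1": {"parent": "alice@gmail.com", "chats": ["hi Bondu!"]},
    "child-2": {"parent": "bob@gmail.com", "chats": ["tell a story"]},
}

def get_transcript_vulnerable(user_email: str, child_id: str) -> dict:
    """Checks only that the caller is logged in with *some* account --
    any authenticated stranger can read any child's transcript."""
    if not user_email:
        raise PermissionError("login required")
    return TRANSCRIPTS[child_id]

def get_transcript_fixed(user_email: str, child_id: str) -> dict:
    """Authorization check: the authenticated caller must actually
    own the record being requested."""
    record = TRANSCRIPTS[child_id]
    if record["parent"] != user_email:
        raise PermissionError("not authorized for this record")
    return record
```

The fix Bondu shipped ("proper authentication measures") presumably amounts to the second function's behavior: tying each record to the identity allowed to read it, not merely requiring a login.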
"We take user privacy seriously and are committed to protecting user data," Bondu CEO Fateen Anam Rafid said in his statement. "We have communicated with all active users about our security protocols and continue to strengthen our systems with new protections," as well as hiring a security firm to validate its investigation and monitor its systems in the future.
Read more of this story at Slashdot.
Google is letting outsiders experiment with DeepMind's Genie 3 "world model" via Project Genie, a tool for generating short, interactive AI worlds. The caveat: it requires a $250/month AI Ultra subscription, is U.S.-only, and has tight limits that make it more of a tech demo than a game engine. Engadget reports: At launch, Project Genie offers three different modes of interaction: World Sketching, exploration and remixing. The first sees Google's Nano Banana Pro model generating the source image Genie 3 will use to create the world you will later explore. At this stage, you can describe your character, define the camera perspective -- be it first-person, third-person or isometric -- and how you want to explore the world Genie 3 is about to generate. Before you can jump into the model's creation, Nano Banana Pro will "sketch" what you're about to see so you can make tweaks. It's also possible to write your own prompts for worlds that others have already generated with Genie.
One thing to keep in mind is that Genie 3 is not a game engine. While its outputs can look game-like, and it can simulate physical interactions, there are no traditional game mechanics here. Generations are also limited to 60 seconds, and output is capped at 24 frames per second and 720p resolution.
Read more of this story at Slashdot.
An anonymous reader quotes a report from Ars Technica, written by Dan Goodin: Two security professionals who were arrested in 2019 after performing an authorized security assessment of a county courthouse in Iowa will receive $600,000 to settle a lawsuit they brought alleging wrongful arrest and defamation. The case was brought by Gary DeMercurio and Justin Wynn, two penetration testers who at the time were employed by Colorado-based security firm Coalfire Labs. The men had written authorization from the Iowa Judicial Branch to conduct "red-team" exercises, meaning attempted security breaches that mimic techniques used by criminal hackers or burglars.
The objective of such exercises is to test the resilience of existing defenses using the types of real-world attacks the defenses are designed to repel. The rules of engagement for this exercise explicitly permitted "physical attacks," including "lockpicking," against judicial branch buildings so long as they didn't cause significant damage. [...] DeMercurio and Wynn's engagement at the Dallas County Courthouse on September 11, 2019, had been routine. A little after midnight, after finding a side door to the courthouse unlocked, the men closed it and let it lock. They then slipped a makeshift tool through a crack in the door and tripped the locking mechanism. After gaining entry, the pentesters tripped an alarm alerting authorities.
Within minutes, deputies arrived and confronted the two intruders. DeMercurio and Wynn produced an authorization letter -- known as a "get out of jail free card" in pen-testing circles. After a deputy called one or more of the state court officials listed in the letter and got confirmation it was legit, the deputies said they were satisfied the men were authorized to be in the building. DeMercurio and Wynn spent the next 10 or 20 minutes telling what their attorney in a court document called "war stories" to deputies who had asked about the type of work they do. When Sheriff Leonard arrived, the tone suddenly changed. He said the Dallas County Courthouse was under his jurisdiction and he hadn't authorized any such intrusion. Leonard had the men arrested, and in the days and weeks to come, he made numerous remarks alleging the men violated the law. A couple months after the incident, he told me that surveillance video from that night showed "they were crouched down like turkeys peeking over the balcony" when deputies were responding. I published a much more detailed account of the event here. Eventually, all charges were dismissed.
Read more of this story at Slashdot.
An anonymous reader shares a report: Chat & Ask AI, one of the most popular AI apps on the Google Play and Apple App stores that claims more than 50 million users, left hundreds of millions of those users' private messages with the app's chatbot exposed, according to an independent security researcher and emails viewed by 404 Media. The exposed chats showed that users had asked the app "How do I painlessly kill myself," asked it to write suicide notes, and asked "how to make meth" and how to hack various apps.
The exposed data was discovered by an independent security researcher who goes by Harry. The issue is a misconfiguration in the app's usage of the mobile app development platform Google Firebase, which by default makes it easy for anyone to make themselves an "authenticated" user who can access the app's backend storage where in many instances user data is stored.
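The misconfiguration class described here is well known: a Firebase app's "web API key" ships inside every client and is not a secret, so if self-service sign-up is enabled and the backend's security rules grant access to any authenticated user, a stranger can mint their own token and browse the data. A rough sketch of the two requests involved follows; the project ID, API key, collection name, and rule quoted in the comments are placeholders, not Chat & Ask AI's actual values:

```python
# Sketch of the generic Firebase misconfiguration pattern (not the
# app's real project, key, or schema). Endpoints are Firebase's
# public REST APIs; no network calls are made here.
import json
from urllib.request import Request

PROJECT_ID = "example-project"      # hypothetical Firebase project ID
WEB_API_KEY = "AIza-placeholder"    # shipped inside every client app

def signup_request() -> Request:
    """Anyone can self-register against the Identity Toolkit API and
    receive an ID token -- becoming an 'authenticated' user."""
    url = ("https://identitytoolkit.googleapis.com/v1/"
           f"accounts:signUp?key={WEB_API_KEY}")
    body = json.dumps({"returnSecureToken": True}).encode()
    return Request(url, data=body,
                   headers={"Content-Type": "application/json"})

def firestore_list_request(id_token: str, collection: str) -> Request:
    """With that token, overly broad security rules such as
    `allow read, write: if request.auth != null;` let the brand-new
    account enumerate documents in the backend store."""
    url = ("https://firestore.googleapis.com/v1/projects/"
           f"{PROJECT_ID}/databases/(default)/documents/{collection}")
    return Request(url, headers={"Authorization": f"Bearer {id_token}"})
```

The fix is in the security rules, not the client: scope reads to the record owner (e.g. `request.auth.uid == resource.data.owner`) rather than to any authenticated user.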
Harry said that he had access to 300 million messages from more than 25 million users in the exposed database, and that he extracted and analyzed a sample of 60,000 users and a million messages. The database contained user files with a complete history of their chats with the AI, timestamps of those chats, the name they gave the app's chatbot, how they configured the model, and which specific model they used. Chat & Ask AI is a "wrapper" that plugs into various large language models from bigger companies users can choose from, including OpenAI's ChatGPT, Anthropic's Claude, and Google's Gemini.
Read more of this story at Slashdot.
Security updates have been issued by AlmaLinux (java-25-openjdk, openssl, and python3.9), Debian (gimp, libmatio, pyasn1, and python-django), Fedora (perl-HarfBuzz-Shaper, python-tinycss2, and weasyprint), Mageia (glib2.0), Oracle (curl, fence-agents, gcc-toolset-15-binutils, glibc, grafana, java-1.8.0-openjdk, kernel, mariadb, osbuild-composer, perl, php:8.2, python-urllib3, python3.11, python3.11-urllib3, python3.12, and python3.12-urllib3), SUSE (alloy, avahi, bind, buildah, busybox, container-suseconnect, coredns, gdk-pixbuf, gimp, go1.24, go1.24-openssl, go1.25, helm, kernel, kubernetes, libheif, libpcap, libpng16, openjpeg2, openssl-1_0_0, openssl-1_1, openssl-3, php8, python-jaraco.context, python-marshmallow, python-pyasn1, python-urllib3, python-virtualenv, python311, python313, rabbitmq-server, xen, zli, and zot-registry), and Ubuntu (containerd, containerd-app and wlc).