An anonymous reader quotes a report from The Verge: Google DeepMind is releasing a new version of its AI "world" model, called Genie 3, capable of generating 3D environments that users and AI agents can interact with in real time. The company is also promising that users will be able to interact with the worlds for much longer than before and that the model will actually remember where things are when you look away from them. [...] Genie 3 seems like it could be a notable step forward. Users will be able to generate worlds from a prompt and interact with them continuously for a "few" minutes, up from the 10-20 seconds of interaction possible with Genie 2, according to a blog post.
Google says that Genie 3 can keep spaces in visual memory for about a minute, meaning that if you turn away from something in a world and then turn back to it, things like paint on a wall or writing on a chalkboard will be in the same place. The worlds will also have a 720p resolution and run at 24fps. DeepMind is adding what it calls "promptable world events" into Genie 3, too. Using a prompt, you'll be able to do things like change weather conditions in a world or add new characters. The model is launching as "a limited research preview" available to "a small cohort of academics and creators," according to Google. It's "exploring" how to bring Genie 3 to "additional testers."
Read more of this story at Slashdot.
The U.S. Transportation Department is proposing new rules to speed deployment of drones beyond the visual line of sight of operators, a key change needed to advance commercial uses like package deliveries. From a report: "We are going to unleash American drone dominance," Transportation Secretary Sean Duffy said at a press conference on Tuesday.
Under current rules, operators need to get individual waivers or exemptions to use drones without visual line of sight. The department said eliminating those requirements "will significantly expand the use-case for drone technologies in areas like: manufacturing, farming, energy production, filmmaking, and the movement of products including lifesaving medications."
The proposal includes new requirements for manufacturers, operators, and drone traffic-management services to keep drones safely separated from other drones and airplanes. "It's going to change the way that people and products move throughout our airspace... so you may change the way you get your Amazon package, you may get a Starbucks cup of coffee from a drone," Duffy said.
Read more of this story at Slashdot.
An anonymous reader shares a report: Microsoft has published a new video that appears to be the first in an upcoming series dubbed "Windows 2030 Vision," in which the company outlines its vision for the future of Windows over the next five years. It curiously references some potentially major, AI-driven changes on the horizon.
This first episode features David Weston, Microsoft's Corporate Vice President of Enterprise & Security, who opens the video by saying "the world of mousing and keyboarding around will feel as alien as it does to Gen Z [using] MS-DOS."
Right out of the gate, it sounds like he's teasing the potential for a radical new desktop UX made possible by agentic AI. Weston later continues, "I truly believe the future version of Windows and other Microsoft operating systems will interact in a multimodal way. The computer will be able to see what we see, hear what we hear, and we can talk to it and ask it to do much more sophisticated things."
Read more of this story at Slashdot.
An anonymous reader shares a report: A researcher has scraped nearly 100,000 conversations from ChatGPT that users had set to share publicly and Google then indexed, creating a snapshot of all the sorts of things people are using OpenAI's chatbot for, and inadvertently exposing. 404 Media's testing has found the dataset includes everything from the sensitive to the benign: alleged texts of non-disclosure agreements, discussions of confidential contracts, people trying to use ChatGPT to understand their relationship issues, and lots of people asking ChatGPT to write LinkedIn posts.
The news follows a July 30 Fast Company article which reported that "thousands" of shared ChatGPT chats were appearing in Google search results. People have since dug through some of the chats indexed by Google. The dataset of around 100,000 conversations provides a better sense of the scale of the problem and highlights some of the potential privacy risks in using any sharing features of AI tools. OpenAI did not dispute the figure of around 100,000 indexed chats when contacted for comment.
Read more of this story at Slashdot.
Antonio Cuni, who is a longtime Python performance engineer and PyPy developer, gave a presentation at EuroPython 2025 about "Myths and fairy tales around Python performance" on the first day of the conference in Prague. As might be guessed from the title, he thinks that much of the conventional wisdom about Python performance is misleading at best. With lots of examples, he showed where the real problems that he sees lie. He has come to the conclusion that memory management will ultimately limit what can be done about Python performance, but he has an early-stage project called SPy that might be a way toward a super-fast Python.
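For readers who want a concrete feel for why object allocation can dominate Python run time, here is a minimal micro-benchmark sketch; it is not taken from the talk, the Point class and the timings are purely illustrative, and results will vary across machines and interpreters. It simply contrasts a loop that creates a small object on every iteration with one that does the same arithmetic on plain floats; under CPython the allocation-heavy version typically runs noticeably slower.

    # Illustrative sketch only (not from Cuni's talk): compare a loop that
    # allocates a small object per iteration with one doing the same
    # arithmetic on plain floats.
    import timeit

    class Point:
        __slots__ = ("x", "y")

        def __init__(self, x, y):
            self.x = x
            self.y = y

    def sum_with_objects(n):
        # Creates a new Point object on every iteration.
        total = 0.0
        for i in range(n):
            p = Point(float(i), float(i))
            total += p.x + p.y
        return total

    def sum_with_floats(n):
        # Same arithmetic, without the per-iteration Point allocation.
        total = 0.0
        for i in range(n):
            x = float(i)
            total += x + x
        return total

    if __name__ == "__main__":
        n = 200_000
        t_obj = timeit.timeit(lambda: sum_with_objects(n), number=20)
        t_flt = timeit.timeit(lambda: sum_with_floats(n), number=20)
        print(f"with per-iteration objects: {t_obj:.3f}s")
        print(f"with plain floats:          {t_flt:.3f}s")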
Security updates have been issued by AlmaLinux (python-requests), Fedora (mingw-libxslt), Red Hat (gdk-pixbuf2, jq, kernel, mod_security, ncurses, nodejs:22, opentelemetry-collector, python-setuptools, python3-setuptools, python3.12-setuptools, qt5-qt3d, redis, redis:6, redis:7, sqlite, and unbound), SUSE (apache2, cairo, chromium, djvulibre, govulncheck-vulndb, grub2, java-11-openjdk, java-17-openjdk, liblua5_5-5, nvidia-open-driver-G06-signed, python, python310, python314, python39, redis, sqlite3, and systemd), and Ubuntu (apport, linux, linux-aws, linux-aws-hwe, linux-azure, linux-azure-4.15, linux-gcp, linux-gcp-4.15, linux-hwe, linux-kvm, linux-aws-fips, linux-azure-fips, linux-fips, linux-gcp-fips, linux-azure, and linux-oracle).
For years, whistle-blowers have warned that fake results are sneaking into the scientific literature at an increasing pace. A new statistical analysis backs up the concern. From a report: A team of researchers found evidence of shady organizations churning out fake or low-quality studies on an industrial scale. And their output is rising fast, threatening the integrity of many fields.
"If these trends are not stopped, science is going to be destroyed," said LuÃs A. Nunes Amaral, a data scientist at Northwestern University and an author of the study, which was published in the Proceedings of the National Academy of Sciences on Monday. Science has made huge advances over the past few centuries only because new generations of scientists could read about the accomplishments of previous ones. Each time a new paper is published, other scientists can explore the findings and think about how to make their own discoveries. Fake scientific papers produced by commercial "paper mills" are doubling every year and a half, according to the report. Northwestern University researchers examined over one million papers and identified networks of fraudulent studies sold to scientists seeking to pad their publication records. The team estimates the actual scope of fraud may be 100 times greater than currently detected cases. Paper mills charge hundreds to thousands of dollars for fake authorship and often target specific research fields like microRNA cancer studies.
Read more of this story at Slashdot.