Ruby Central, a non-profit organization committed to "driving innovation and building community within the Ruby programming ecosystem since 2001," removed all RubyGems maintainers from the project's GitHub repository on September 18, granting administrative access exclusively to its own employees and contractors. According to Ruby developer Joel Drapper, the move followed alleged pressure from Shopify, one of Ruby Central's biggest backers. The organization, which operates RubyConf and RailsConf, cited fiduciary responsibility and supply-chain security concerns following a recent audit.
The controversy began September 9, when Hiroshi Shibata (hsbt), a Ruby infrastructure maintainer, renamed the RubyGems GitHub enterprise to "Ruby Central" and added the organization's Director of Open Source, Marty Haught, as an owner while demoting other maintainers. The action allegedly followed Shopify's threat to cut funding unless Ruby Central assumed full ownership of RubyGems and Bundler. Ruby Central had reportedly become financially dependent on Shopify after Sidekiq withdrew its $250,000 annual sponsorship over the organization's decision to platform Rails creator DHH at RailsConf 2025. Andre Arko, a veteran contributor who was on call for RubyGems.org at the time, was among those removed.
Maintainer Ellen Dash characterized the action as a "hostile takeover" and resigned. Executive Director Shan Cureton acknowledged poor communication in a YouTube video Monday, stating that the removals were temporary while operator agreements were finalized. Arko and others are launching Spinel, an alternative Ruby tooling project, though Shopify's Rafael Franca commented that Spinel admins shouldn't be trusted to avoid "sabotaging rubygems or bundler."
Read more of this story at Slashdot.
An anonymous reader quotes a report from Ars Technica: With AI chatbots growing in popularity, it was only a matter of time before large numbers of people began applying them to the stock market. In fact, at least 1 in 10 retail investors now consult ChatGPT or other AI chatbots for stock-picking advice, according to a Reuters report published Thursday. Data from a survey by trading platform eToro of 11,000 retail investors worldwide suggests that 13 percent of individual investors already use AI tools like ChatGPT or Google's Gemini for stock selection, while about half say they would consider using these tools for portfolio decisions.
Unlike algorithmic trading, where computers automatically execute thousands of trades per second, these investors use ChatGPT as an advisory tool in place of human experts: they type questions, read the AI model's analysis, and then manually decide whether to place trades through their brokers. Reuters spoke with Jeremy Leung, who analyzed companies for investment bank UBS for almost two decades and now relies on ChatGPT for his multi-asset portfolio. "I no longer have the luxury of a Bloomberg terminal, or those kinds of market-data services which are very, very expensive," Leung told Reuters. "Even the simple ChatGPT tool can do a lot and replicate a lot of the workflows that I used to do."
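As a concrete picture of that advisory loop, here is a minimal, hypothetical sketch: it asks a chat model for an analysis that the investor would then read and act on manually. It assumes the official openai Python client with an API key in the environment; the model name and prompt are illustrative, and nothing here is investment advice.

```python
# Hypothetical sketch of the advisory workflow described above: ask a
# question, read the model's analysis, decide manually. Assumes the
# `openai` package and OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[
        {
            "role": "user",
            "content": "Summarize the debt levels and revenue growth of "
                       "a UK-listed retailer I'm researching, and list "
                       "what data you might be missing or making up.",
        },
    ],
)

# The human stays in the loop: read the analysis, then decide whether
# to place any trade through a broker.
print(response.choices[0].message.content)
```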
Reuters reports that financial products comparison website Finder asked ChatGPT in March 2023 to select stocks from high-quality businesses based on criteria like debt levels and sustained growth. Since then, the resulting 38-stock portfolio has reportedly grown in value by nearly 55 percent, beating the average of the UK's 10 most popular funds by almost 19 percentage points. But there's a huge caveat to that kind of AI success story: US stocks sit near record highs, Reuters notes, with the S&P 500 index up 13 percent this year after surging 23 percent last year. Those are conditions that can make almost any stock-picking strategy look smart.
Reuters frames the AI trading advice trend as a case of new technology tools "democratizing," or opening up, investment analysis once reserved for institutional investors with expensive data terminals. But experts warn that AI models can confabulate financial data and lack access to real-time market information, making them risky substitutes for professional advice. "AI models can be brilliant," Dan Moczulski, UK managing director at eToro, told Reuters. "The risk comes when people treat generic models like ChatGPT or Gemini as crystal balls." He noted that general AI models "can misquote figures and dates, lean too hard on a pre-established narrative, and overly rely on past price action to attempt to predict the future."
Read more of this story at Slashdot.
An anonymous reader shares a report: Senators are demanding answers from Big Tech companies accused of "filing thousands of H-1B skilled labor visa petitions after conducting mass layoffs of American employees." In letters sent to Amazon, Meta, Apple, Google, and Microsoft -- among the largest sponsors of H-1B visas -- Senators Chuck Grassley (R-Iowa) and Dick Durbin (D-Ill.) requested "information and data from each company regarding their recruitment and hiring practices, as well as any variation in salary and benefits between H-1B visa holders and American employees."
The letters came shortly after Grassley sent a letter to Department of Homeland Security Secretary Kristi Noem requesting that DHS stop "issuing work authorizations to student visa holders." According to Grassley, "foreign student work authorizations put America at risk of technological and corporate espionage," in addition to allegedly "contributing to rising unemployment rates among college-educated Americans."
[...] In the letters to tech firms, senators emphasized that the unemployment rate in America's tech sector is "well above" the overall jobless rate. Amazon perhaps faces the most scrutiny. US Citizenship and Immigration Services data showed that Amazon sponsored the most H-1B visas in 2024 at 14,000, compared to other criticized firms like Microsoft and Meta, which each sponsored 5,000, The Wall Street Journal reported. Senators alleged that Amazon blamed layoffs of "tens of thousands" on the "adoption of generative AI tools," then hired more than 10,000 foreign H-1B employees in 2025.
Read more of this story at Slashdot.
Longtime PyPy developer Antonio Cuni has a lengthy blog post that describes his talk at the recently completed 2025 CPython Core Dev Sprint, held at Arm in Cambridge, UK. The talk, entitled "Tracing JIT and real world Python — aka: what we can learn from PyPy", was meant to pass on some of his experiences "optimizing existing code for PyPy at a high-frequency trading firm" to the developers working on the CPython JIT compiler. His goal was to raise awareness of some of the problems he encountered:
Until now, CPython's performance has been particularly predictable: there are well-established "performance tricks" to make code faster, and generally speaking you can mostly reason about the speed of a given piece of code "locally".
Adding a JIT completely changes how we reason about performance of a given program, for two reasons:
- JITted code can be very fast if your code conforms to the heuristics applied by the JIT compiler, but unexpectedly slow(-ish) otherwise;
- the speed of a given piece of code might depend heavily on what happens elsewhere in the program, making it much harder to reason about performance locally.
The end result is that modifying a line of code can significantly impact seemingly unrelated code. This effect becomes more pronounced as the JIT becomes more sophisticated.
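To make that non-locality concrete, here is a hypothetical Python sketch (not from Cuni's talk): a tracing JIT will typically specialize a hot loop for the types it has observed, so an unrelated call site that feeds the same function different types can invalidate the specialized trace and slow code that never changed.

```python
# Hypothetical illustration of non-local JIT effects; function and
# variable names are invented for this sketch.
def total(items):
    acc = 0
    for x in items:
        acc += x  # a tracing JIT can specialize this to int addition
    return acc

def hot_path():
    # If the JIT has only ever seen ints flow through total(), the
    # traced loop stays fast on a path like this one.
    return total(range(1_000_000))

def unrelated_feature():
    # A change made "elsewhere in the program": routing floats through
    # the same function can split or invalidate the int-specialized
    # trace, making hot_path() slower even though its code is untouched.
    return total([0.5] * 1_000_000)

if __name__ == "__main__":
    print(hot_path(), unrelated_feature())
```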
Cuni also gave a talk on Python performance, which LWN covered, at EuroPython 2025 in July.
Security updates have been issued by AlmaLinux (grub2 and kernel), Debian (chromium and libxslt), Fedora (chromium, expat, libssh, and webkitgtk), Oracle (avahi, firefox, ImageMagick, kernel, libtpms, and mysql), Red Hat (kernel), SUSE (bird3, expat, kernel, and tiff), and Ubuntu (dpkg, gnuplot, linux, linux-aws, linux-aws-5.15, linux-gcp, linux-gcp-5.15, linux-gke, linux-gkeop, linux-hwe-5.15, linux-ibm, linux-ibm-5.15, linux-intel-iotg, linux-intel-iotg-5.15, linux-lowlatency, linux-lowlatency-hwe-5.15, linux-nvidia, linux-nvidia-tegra, linux-nvidia-tegra-5.15, linux-oracle, linux-raspi, linux-riscv-5.15, linux-xilinx-zynqmp, linux, linux-aws, linux-gcp, linux-gcp-6.14, linux-oracle, linux-realtime, linux-riscv, linux-riscv-6.14, linux-aws-fips, linux-fips, linux-gcp-fips, linux-azure, linux-azure-fips, linux-ibm, linux-ibm-6.8, linux-intel-iot-realtime, linux-realtime, linux-oem-6.14, linux-oracle-5.15, linux-realtime-6.14, and python-eventlet).
Version 18 of the PostgreSQL database has been released. Notable improvements in this release include "skip scan" lookups for multicolumn B-tree indexes, virtual generated columns, better text processing, OAuth authentication, and a new asynchronous I/O (AIO) subsystem to improve performance:
AIO lets PostgreSQL issue multiple I/O requests concurrently instead of waiting for each to finish in sequence. This expands existing readahead and improves overall throughput. AIO operations supported in PostgreSQL 18 include sequential scans, bitmap heap scans, and vacuum. Benchmarking has demonstrated performance gains of up to 3x in certain scenarios.
There are, of course, many other improvements and changes; see the release notes for full details.
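As a quick, hypothetical taste of two of the features above, the sketch below creates a virtual generated column and a multicolumn B-tree index whose trailing column can now be probed via skip scan. It assumes a reachable PostgreSQL 18 server and the psycopg2 driver; the database, table, and column names are invented, and whether the planner actually chooses a skip scan depends on the data and statistics.

```python
# Hypothetical demo of PostgreSQL 18 virtual generated columns and a
# skip-scan-eligible query. Assumes psycopg2 and a local "demo" DB.
import psycopg2

conn = psycopg2.connect("dbname=demo")  # illustrative DSN
conn.autocommit = True
cur = conn.cursor()

# Virtual generated column: computed on read rather than stored.
cur.execute("""
    CREATE TABLE IF NOT EXISTS orders (
        region text,
        day    date,
        qty    integer,
        price  numeric,
        total  numeric GENERATED ALWAYS AS (qty * price) VIRTUAL
    )
""")

# Multicolumn B-tree index with `region` as the leading column.
cur.execute(
    "CREATE INDEX IF NOT EXISTS orders_region_day ON orders (region, day)"
)

# Before PostgreSQL 18, a predicate on `day` alone would typically
# ignore this index; skip scan lets the planner step through the
# distinct `region` values and probe the index for each one.
cur.execute("EXPLAIN SELECT * FROM orders WHERE day = '2025-09-25'")
for (plan_line,) in cur.fetchall():
    print(plan_line)
```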
An anonymous reader quotes a report from The Conversation: Twenty-one years after Facebook's launch, Australia's top 25 news outlets now have a combined 27.6 million followers on the platform. They rely on Facebook's reach more than ever, posting far more stories there than in the past. With access to Meta's Content Library (Meta owns Facebook), our big data study analysed more than three million posts from 25 Australian news publishers. We wanted to understand how content is distributed, how audiences engage with news topics, and the nature of misinformation spread. The study enabled us to track de-identified Facebook comments and take a closer look at examples of how misinformation spreads. These included cases about election integrity, the environment (floods), and health misinformation such as hydroxychloroquine promotion during the COVID pandemic. The data reveal misinformation's real-world impact: it isn't just a digital issue; it's linked to poor health outcomes, falling public trust, and significant societal harm. [...]
Our study has lessons for public figures and institutions. They, especially politicians, must lead in curbing misinformation, as their misleading statements are quickly amplified by the public. Social media and mainstream media also play an important role in limiting the circulation of misinformation. As Australians increasingly rely on social media for news, mainstream media can provide credible information and counter misinformation through their online story posts. Digital platforms can also curb algorithmic spread and remove dangerous content that leads to real-world harms. The study offers evidence of a change over time in audiences' news consumption patterns. Whether this is due to news avoidance or changes in algorithmic promotion is unclear. But it is clear that from 2016 to 2024, online audiences increasingly engaged with arts, lifestyle and celebrity news over politics, leading media outlets to prioritize posting stories that entertain rather than inform. This shift may pose a challenge to mitigating misinformation with hard news facts. Finally, the study shows that fact-checking, while valuable, is not a silver bullet. Combating misinformation requires a multi-pronged approach, including counter-messaging by trusted civic leaders, media and digital literacy campaigns, and public restraint in sharing unverified content.
Read more of this story at Slashdot.