October 29, 2025: AI models pick up preferences during training, an "artificial pancreas" for type 1 diabetes, and how nuclear-powered cruise missiles work. —Andrea Gawrylewski, Chief Newsletter Editor

[Image: Researchers retrieve lunar samples from the Chang'e-6 return capsule. Xinhua/Alamy]

- Fragments of a rare type of meteorite were among samples from the far side of the moon collected by China's Chang'e 6 mission. | 3 min read
- Will a U.S.-owned version of TikTok use a totally different algorithm? We investigate. | 11 min listen
- Experiments with the bird flu strain H9N2 show it's well-adapted to infect humans. Researchers say more monitoring is needed. | 3 min read
- On Sunday, Russian leader Vladimir Putin claimed his nation conducted a successful flight of a nuclear-powered cruise missile. Here's how that missile might work. | 4 min read
Supporting our work means amplifying science. Consider a subscription to Scientific American and back independent science journalism! Today in Science readers can get started for just $1.

Similar to kids in a classroom, artificial intelligence can pick up on subliminal cues. AI developers often train new models on existing models' answers. Developers can filter out unwanted answers, but sometimes the "student" AI inherits unexpected traits from the "teacher" AI. In a new study published on a preprint server, researchers described instances of "subliminal learning" among AI models. Sometimes the inherited behaviors were harmless; one AI "teacher" passed on its love of owls to its AI "student." But in several instances, AI students inherited misaligned behavior from AI teachers that gave malicious-seeming answers.

Why this matters: An AI model is a neural network in which ideas, words and concepts are all connected to one another. As an analogy, imagine a board of pushpins representing these elements, all connected by string. Picture two such boards, one for the student AI and one for the teacher. If one pushpin on the student's board is pulled closer to the corresponding pushpin on the teacher's board, the strings drag other pushpins on the student's board along with it. (A toy code sketch of this pull follows the next story below.)

What the experts say: Such subliminal learning isn't necessarily a reason for public concern, but it is a stark reminder of how little humans currently understand about AI models' inner workings, says Anthropic research fellow and study co-author Alex Cloud. Even though the researchers filtered out answers with known negative associations, the models still produced unethical and dangerous responses. "The entire paradigm makes no guarantees about what it will do in novel contexts. [It is] built on this premise that does not really admit safety guarantees," he says. —Andrea Tamayo, Newsletter Writer

In 2023, the FDA approved the iLet, a wearable device its maker calls an "artificial pancreas." The fully automated system uses continuous glucose monitors, insulin pumps and AI-driven algorithms to mimic how a healthy pancreas regulates blood sugar, without the constant carb counting or manual adjustments that have long defined diabetes care. The device depends in particular on adaptive machine-learning technology, which continuously interprets glucose data and automatically fine-tunes insulin delivery in real time, allowing the iLet to respond to the body's changing needs.

Why this matters: Type 1 diabetes affects millions of people worldwide and demands relentless attention to blood sugar levels from patients who rely on continuous glucose monitors and insulin pumps to stay alive. Every meal, snack and activity requires careful calculation of insulin doses, a full-time balancing act that leaves little room for error. "Artificial pancreas" devices like the iLet would automate much of the day-to-day care of type 1 diabetes.

What the experts say: "Management of type 1 diabetes is like driving a car 24/7 on a curvy mountain road with no brakes even when you're asleep," says biotechnology entrepreneur Bryan Mazlish, who is now involved with the iLet. "So if you could take some of that burden off, it could make a huge difference."
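To make the pushpin analogy from the subliminal-learning story concrete, here is a minimal toy sketch in Python. It is not the study's actual setup, and every name in it is illustrative: two tiny linear models share an initialization, the "teacher" carries an extra quirk, and distilling only its answers on benign prompts still pulls the student's behavior on never-trained "trait" prompts toward the teacher's.

```python
# A toy sketch (NOT the study's setup) of how distilling a teacher model's
# answers can transmit traits the training data never shows. The shared
# weights act like the pushpin board: pulling them toward the teacher
# anywhere drags every behavior that hangs off them along for the ride.
import numpy as np

rng = np.random.default_rng(0)
D = 16

W_base = rng.normal(size=(4, D))         # shared initialization
trait = rng.normal(size=(4, D)) * 0.5    # the teacher's quirk (its "owls")
W_teacher = W_base + trait
W_student = W_base.copy()

x_train = rng.normal(size=(500, D))      # benign task prompts only
x_probe = rng.normal(size=(100, D))      # held-out "trait" prompts, never trained on

def probe_gap(W):
    """Mean-squared distance from the teacher's behavior on held-out prompts."""
    return float(np.mean((x_probe @ W.T - x_probe @ W_teacher.T) ** 2))

lr = 0.05
for step in range(301):
    # Ordinary distillation: match the teacher's outputs on benign prompts.
    err = x_train @ W_student.T - x_train @ W_teacher.T   # (500, 4)
    W_student -= lr * (err.T @ x_train) / len(x_train)    # MSE gradient step
    if step % 100 == 0:
        print(f"step {step:3d}  gap on never-seen trait prompts: {probe_gap(W_student):.4f}")
```

Run it and the printed gap shrinks toward zero even though the trait prompts never appear in training: matching the teacher on one set of behaviors moves the shared weights, and the other behaviors follow.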
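For the artificial-pancreas story, here is a generic closed-loop control sketch. The iLet's actual algorithm is proprietary and not described in the story; this only illustrates the loop the story outlines (read a glucose value, adjust insulin delivery, slowly adapt to the wearer), and every gain, target and name in it is hypothetical.

```python
# A generic closed-loop "artificial pancreas" sketch. NOT the iLet's
# algorithm; all constants below are hypothetical, for illustration only.
from dataclasses import dataclass

TARGET_MGDL = 110.0  # illustrative glucose target (mg/dL)

@dataclass
class Controller:
    basal_rate: float = 1.0  # units/hour, hypothetical starting basal
    p_gain: float = 0.005    # reacts to distance from the target
    d_gain: float = 0.02     # reacts to the glucose trend
    adapt: float = 0.0005    # slow "learning" of the wearer's baseline need

    def step(self, glucose_now: float, glucose_prev: float,
             dt_hours: float = 1 / 12) -> float:
        """One control cycle from two consecutive CGM readings."""
        error = glucose_now - TARGET_MGDL
        trend = (glucose_now - glucose_prev) / dt_hours  # mg/dL per hour
        # Adaptation: persistent error nudges the learned basal itself,
        # standing in for the adaptive fine-tuning the story mentions.
        self.basal_rate = max(0.0, self.basal_rate + self.adapt * error)
        rate = self.basal_rate + self.p_gain * error + self.d_gain * trend / 60
        return max(0.0, rate)  # delivery can only be throttled down to zero

ctrl = Controller()
readings = [180, 172, 163, 152, 140, 127, 115, 108]  # simulated 5-min CGM data
for prev, now in zip(readings, readings[1:]):
    print(f"glucose {now:3d} mg/dL -> insulin {ctrl.step(now, prev):.2f} U/h")
```

The derivative term is what lets a loop like this ease off insulin while glucose is still above target but falling fast, which is the kind of judgment call patients otherwise make by hand.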
- TikTok has a community of users who can teach you how to befriend a crow (it's called CrowTok of course). | MIT Technology Review
- Major U.S. soft-drink and snack-food corporations are waging a coordinated campaign against the "MAHA"-led effort to curb Americans' consumption of soda and ultra-processed foods, an investigation finds. | The Guardian
- How much your diet affects climate change depends on where you live. Look up your city using this tool. | Washington Post
- What would happen if you spent three days in total darkness? One writer aims to find out. | The New York Times Magazine
We've known since the early days of ChatGPT that humans can absorb bias from chatbots and can spread that bias in later, non-AI interactions. As my colleague Deni Bechard wrote last week, humans thrive on connection, and chatbots are very skilled at using conversation to build that connection. When connection and information are so intertwined, it makes perfect sense that even bad information can propagate. The only defense is a deliberate effort to expose and disarm destructive ideas. The question is: Who will take responsibility for that?

—Andrea Gawrylewski, Chief Newsletter Editor
Subscribe to this and all of our newsletters here.