
Tougher AI Policies Could Protect Taylor Swift--And Everyone Else--From Deepfakes

February 13, 2024

At the end of last month, Taylor Swift was a target of sexually explicit, nonconsensual deepfake images made using artificial intelligence and posted to the social platform X. Federal oversight of nonconsensual deepfake images and videos is lacking, experts told reporter Brian Contreras. Read more in this week's top story. 

Andrea Gawrylewski, Chief Newsletter Editor
@AGawrylewski

Artificial Intelligence

Tougher AI Policies Could Protect Taylor Swift--And Everyone Else--From Deepfakes

In January Taylor Swift became the latest high-profile target of nonconsensual deepfake images. It's time for regulations that ban this kind of abusive AI content, cyberadvocates say

By Brian Contreras

Particle Physics

Large Hadron Collider's $17-Billion Successor Moves Forward

A feasibility study on CERN's Future Circular Collider identifies where and how the machine could be built—but its construction is far from assured

By Elizabeth Gibney, Davide Castelvecchi & Nature magazine

Artificial Intelligence

New AI Circuitry That Mimics Human Brains Makes Models Smarter

A new kind of transistor allows AI hardware to remember and process information more like the human brain does

By Anna Mattson

Artificial Intelligence

Even ChatGPT Says ChatGPT Is Racially Biased

When asked, ChatGPT declared that its training material—the language we humans use every day—was to blame for potential bias in stories it generated

By Craig Piers

Artificial Intelligence

Europe's New AI Rules Could Go Global--Here's What That Will Mean

A leaked draft of the European Union's upcoming AI Act has experts discussing where the regulations may fall short

By Chris Stokel-Walker

Astronomy

The Forgotten Star of Radio Astronomy

Ruby Payne-Scott and her colleagues unlocked a new way of seeing the universe, but to keep her job, she had to keep a big secret

By Samia Bouzid, Carol Sutton Lewis & The Lost Women of Science Initiative

Privacy

Cybercrime Security Gap Leaves People Who Aren't Proficient in English Poorly Protected

Our research finds that language is often a barrier for people dealing with cybercrime and that closing this security gap is important

By Fawn Ngo & The Conversation US

Artificial Intelligence

How AI Bots Could Sabotage 2024 Elections around the World

AI-generated disinformation will target voters on a near-daily basis in more than 50 countries, according to a new analysis

By Charlotte Hu

QUOTE OF THE DAY

"We are too little, too late at this point, but we can still try to mitigate the disaster that's emerging."

Mary Anne Franks, a professor at George Washington University Law School and president of the Cyber Civil Rights Initiative, on the rise of nonconsensual deepfakes.

FROM THE ARCHIVE

Deepfakes and the New AI-Generated Fake Media Creation-Detection Arms Race

Manipulated videos are getting more sophisticated all the time—but so are the techniques that can identify them



Scientific American

1 New York Plaza, FDR Dr, Floor 46, New York, NY 10004
