November 28, 2023
To spit out human-sounding sentences, large language models must be trained on oodles of human writing. As a result, they are also trained on the inadvertent biases we have encoded in our words throughout history. That's a problem when AI chatbots like ChatGPT are used to, say, write recommendation letters for employees. A new study found that bots were more likely to describe men with terms like "expert" and "integrity," and women with words like "warm" and "emotional." Using such AI tools in the office could entrench existing gender bias—and thus aggravate the discrimination women already face in the workplace.
Sophie Bushwick, Associate Editor, Technology
DEFENSE
What Would It Mean to 'Absorb' a Nuclear Attack?
The missiles on the Fort Berthold Reservation in North Dakota make it a potential target for a nuclear attack. And that doesn't come close to describing what the reality would be for those on the ground.
By Ella Weber | 19:05
QUOTE OF THE DAY
"The AI authors' writing often sounds like it was written by an alien; one Ortiz article, for instance, warns that volleyball 'can be a little tricky to get into, especially without an actual ball to practice with.'"
Maggie Harrison, Futurism