Artificial intelligence is not speculative anymore. It's part of our daily lives, for better or worse, and becoming more integrated by the moment. In a special report in the March issue of Scientific American we track this transformation across key fronts: in hospitals struggling to modernize care without eroding it, in communities discovering how fast deepfakes can spread unnoticed, and in the working lives of people who use AI tools every day. You can read the full report here. I sat down with our technology editor, Eric Sullivan, to get the inside scoop.
AG: For this special report your team spoke to doctors, lawyers, teachers, artists and more to find out how they're using AI in their professions. Was there any consensus among experts regarding the future of AI?
ES: The closest thing to consensus was that AI is not the future anymore—it's here, and it's becoming infrastructure. People diverged on how transformative it will be, but almost everyone agreed on the current reality: human plus machine, under real-world constraints, with both the benefits and the risks arriving fast.
AG: The healthcare industry is one field where AI is quickly gaining ground. What are the tradeoffs of implementing this technology?
ES: The upsides are real: less paperwork, earlier warnings, and genuinely useful pattern-spotting in messy medical records. But when an AI alert is wrong, vague, or inexplicable, the burden usually shifts to frontline clinicians. Without transparent, validated tools that both patients and staff can easily interpret, hospitals risk trading paperwork for more alarms, more protocol pressure, and deepening mistrust.
AG: In your conversation with Hany Farid, a digital forensics researcher who studies deepfake videos, what did you see as his biggest takeaway on the risks of weaving this technology so intricately into our daily lives? And are there any elements that nonexperts seem particularly worried about that aren't as threatening as they seem?
ES: Farid's core point is that deepfakes win on speed and scale: by the time you debunk a fake, it's already done its job. And he's skeptical that we can filter our way back to truth. He thinks the real fix is accountability, going after the infrastructure that makes deception cheap and profitable.
On the flip side, I think some people fixate on one perfect, Hollywood-level fake. The more immediate threat is cheap impersonation at volume—voice scams, non-consensual intimate imagery, and a constant low-grade doubt that corrodes trust over time.
AG: You're the tech editor, so you're living and breathing AI news. But did anything surprise you when working on this special report?
ES: Two things. First, how quickly AI turns into management. These tools are sold as assistants, but in practice they often end up supervising people—nudging decisions, setting tempos, redistributing liability. Second, how often the real story wasn't cutting-edge AI at all. The biggest impacts right now are coming from the boring rollouts, not the sci-fi scenarios.