By Will Douglas Heaven, MIT Technology Review

It was a stranger who first brought home for me how big this year’s vibe shift was going to be. As we waited for a stuck elevator together in March, she told me she had just used ChatGPT to help her write a report for her marketing job. She hated writing reports because she didn’t think she was very good at it. But this time her manager had praised her. Did it feel like cheating? Hell no, she said. You do what you can to keep up.

That stranger’s experience of generative AI is one among millions. People in the street (and in elevators) are now figuring out what this radical new technology is for and wondering what it can do for them. In many ways the buzz around generative AI right now recalls the early days of the internet: there’s a sense of excitement and expectancy—and a feeling that we’re making it up as we go.
That is to say, we’re in the dot-com boom, circa 2000. Many companies will go bust. It may take years before we see this era’s Facebook (now Meta), Twitter (now X), or TikTok emerge. “People are reluctant to imagine what could be the future in 10 years, because no one wants to look foolish,” says Alison Smith, head of generative AI at Booz Allen Hamilton, a technology consulting firm. “But I think it’s going to be something wildly beyond our expectations.”

The internet changed everything—how we work and play, how we spend time with friends and family, how we learn, how we consume, how we fall in love, and so much more. But it also brought us cyberbullying, revenge porn, and troll factories. It facilitated genocide, fueled mental-health crises, and made surveillance capitalism—with its addictive algorithms and predatory advertising—the dominant market force of our time. These downsides became clear only when people started using it in vast numbers and killer apps like social media arrived.
Generative AI is likely to be the same. With the infrastructure in place—the base generative models from OpenAI, Google, Meta, and a handful of others—people other than the ones who built it will start using and misusing it in ways its makers never dreamed of. “We’re not going to fully understand the potential and the risks without having individual users really play around with it,” says Smith.

Generative AI was trained on the internet and so has inherited many of its unsolved issues, including those related to bias, misinformation, copyright infringement, human rights abuses, and all-round economic upheaval. But we’re not going in blind. Here are six unresolved questions to bear in mind as we watch the generative-AI revolution unfold. This time around, we have a chance to do better.