Why Are People Excited About It?
ChatGPT, one of the most advanced chatbots ever released, remembers conversations so you can ask follow-up questions days later, and it responds to feedback. Tell it to give you a simpler answer, and it will. The latest version, called GPT-4, has even more abilities, such as analyzing photographs and describing objects for people with vision loss.
In the future, experts hope AI tools might solve problems humans haven’t been able to solve. They wonder: Could AI help cure cancer, or could it come up with new ways to fight climate change?
So Why Not Have a Chatbot Write This Article?
We’re pretty sure that’s not a good idea—for the same reasons you wouldn’t want it to do your homework! You can’t believe everything you read on the internet, right? ChatGPT learned from that content, so the bot can be flat-out wrong. Its responses sometimes contain hurtful stereotypes too. Why? Again, ChatGPT was trained on internet content, and that content contains a lot of stereotypes.
ChatGPT’s creators are trying to fix these issues, but a recent study suggests that ChatGPT is getting more things wrong over time. Plus, the bot’s training ended in September 2021, which means that ChatGPT is often clueless about current events. (We asked it which team won the 2023 Super Bowl, and it told us . . . to look it up.)
What Else Could Go Wrong?
Right now, the biggest fear is that ChatGPT may spread fake news. The bot could be used to help create videos, images, and articles that look like the real thing.
This past summer, Google, Amazon, and other major U.S. tech companies came together and agreed to some AI safety rules. One of them is that AI-created content must be labeled, such as with a digital watermark, so that people will know where it came from.
U.S. lawmakers are also expected to pass rules about the use of AI. In July, President Joe Biden said AI held exciting possibilities, but he added, “This is a serious responsibility. We have to get it right.”