Imagine if Siri could write you a college essay, or if Alexa could spit out a Shakespeare-style movie review.
Last week, OpenAI opened up access to ChatGPT, an AI-powered chatbot that interacts with users in a strangely compelling, conversational way. Its ability to provide long, thoughtful, and thorough answers to questions and prompts – even when inaccurate – has stunned users, including academics and some in the technology industry.
The tool quickly went viral. On Monday, OpenAI co-founder Sam Altman, a prominent Silicon Valley investor, said on Twitter that ChatGPT had crossed one million users. It has also caught the eye of some top tech leaders, such as Box CEO Aaron Levie.
“There’s a certain feeling that happens when new technology changes the way you think about computing. Google did it. Firefox did it. AWS did it. The iPhone did it. OpenAI is doing it with ChatGPT,” Levie said on Twitter.
But as with other AI-powered tools, it also poses possible problems, including how it could disrupt creative industries, perpetuate biases and spread misinformation.
ChatGPT is a large language model trained on a huge wealth of online information to create its responses. It comes from the same company behind DALL-E, which generates a seemingly limitless range of images in response to user prompts. It is also the next iteration of the GPT-3 text generator.
After signing up for ChatGPT, users can ask the AI system to answer questions such as “Who was the President of the United States in 1955,” or have it boil down difficult concepts into something a sophomore might understand. It will even address open-ended questions, such as “What is the meaning of life?” or “What should I wear if it’s 40 degrees today?”
“It depends on the activities you plan to do. If you plan to be outside, you should wear a light jacket or sweater, long pants, and closed shoes,” ChatGPT replied. “If you plan to be indoors, you can wear a t-shirt and jeans or other comfortable clothes.”
But some users get very creative.
One person asked the chatbot to rewrite the ’90s hit song “Baby Got Back” in the style of “The Canterbury Tales”; another had it write a letter to remove a bad account from a credit report (rather than using a credit repair attorney). Other colorful examples include asking for fairy tale-inspired decorating tips and giving it an AP English exam question (it answered with a five-paragraph essay about Wuthering Heights).
In a blog post last week, OpenAI said “the format allows the tool to answer follow-up questions, admit mistakes, challenge incorrect premises, and reject inappropriate requests.”
On Monday morning, the page to try out ChatGPT was down, citing “unusually high demand.” “Please hang in there as we work to scale our systems,” the message read. (It has since come back online.)
While ChatGPT successfully answered a variety of questions submitted by CNN, some answers were noticeably wrong. In fact, Stack Overflow – a question-and-answer platform for coders and programmers – has temporarily banned users from sharing information from ChatGPT, noting that it is “substantially harmful to the site and to users who are asking or looking for correct answers.”
Beyond spreading incorrect information, the tool could also threaten certain writing professions, be used to explain problematic concepts, and, like all AI tools, perpetuate biases based on the pool of data it is trained on. Typing a prompt involving a CEO, for example, could trigger a response assuming the individual is white and male.
“While we have made efforts to have the model refuse inappropriate requests, it will sometimes respond to harmful instructions or display biased behavior,” OpenAI said on its website. “We’re using the Moderation API to warn or block certain types of unsafe content, but we expect it to have some false negatives and positives for now. We’re eager to collect user feedback to aid our ongoing work to improve this system.”
Still, Lian Jye Su, research director at market research firm ABI Research, warns that the chatbot works “without a contextual understanding of the language”.
“It is easy for the model to give plausible-sounding but incorrect or nonsensical answers,” she said. “It guesses when it should clarify, and sometimes responds to harmful instructions or exhibits biased behavior. It also lacks regional and country-specific understanding.”
At the same time, however, it offers a glimpse of how companies could capitalize on it to develop more robust virtual assistants, as well as patient and customer care solutions.
Although the DALL-E tool is free, it limits the number of prompts a user can complete before requiring payment. When OpenAI co-founder Elon Musk recently asked Altman on Twitter about the average cost per ChatGPT chat, Altman said: “We will have to monetize it somehow at some point; the compute costs are eye-watering.”