A few weeks ago, Ethan Mollick, a professor at the University of Pennsylvania’s Wharton School, asked his MBA students to play around with GPT, an artificial intelligence model, to see whether the technology could write an essay on one of the topics covered in his course. The assignment, he told me, was mostly a gimmick to illustrate the power of the technology. Still, the algorithm-generated essays, while not perfect and a little too reliant on the passive voice, were at least reasonable, Mollick recalled. They also passed another important test: screening by Turnitin, a popular anti-plagiarism software. AI, it seems, has gotten smarter.
That’s certainly how it feels right now. Over the past week or so, screenshots of conversations with ChatGPT, the latest AI model developed by the research firm OpenAI, have gone viral on social media. People have asked the tool, which is available for free online, to tell jokes, write TV episodes, compose music, and debug computer code, among other things. More than a million people have now tinkered with the AI, and while it doesn’t always tell the truth or make sense, it’s still a pretty good writer and an even more confident bullshitter. Along with DALL-E, OpenAI’s art-generation software, and recent updates to Lensa AI, a controversial platform that can generate digital portraits with the help of machine learning, GPT is a clear warning that artificial intelligence is starting to match human capabilities, at least for some tasks.
“I think things have changed quite dramatically,” Mollick told Recode. “And I think it’s only a matter of time before people realize it.”
If you’re not convinced, you can try it yourself here. The system works like any other online chatbot: just type in the question or prompt you want the AI to respond to, and send it.
How does GPT work? At its core, the technology is based on a form of artificial intelligence called a language model, a prediction system that essentially guesses what it should write based on the previous texts it has processed. GPT was built by training the AI on an extraordinarily large amount of data, much of it drawn from the vast supply of text on the internet, along with billions of dollars in funding, including initial investments from prominent tech billionaires like Reid Hoffman and Peter Thiel. As OpenAI explained in a blog post, ChatGPT was also trained on examples of back-and-forth human conversation, which helps make its dialogue sound a lot more human.
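To make the “guess what comes next” idea concrete, here is a deliberately tiny sketch. This is not OpenAI’s actual method (GPT uses a massive neural network trained on internet-scale text); it’s a toy bigram model that predicts each word purely from counts of what followed it in a small sample corpus, which is the same basic intuition at a fraction of the scale.

```python
from collections import Counter, defaultdict

# Toy illustration of next-word prediction, the core idea behind
# language models. The corpus and words here are invented examples.
corpus = (
    "the frog invests in securities . "
    "the toad invests in securities . "
    "the frog follows the toad ."
).split()

# Count, for each word, how often every other word follows it.
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word: str) -> str:
    """Return the word most frequently seen after `word` in the corpus."""
    return following[word].most_common(1)[0][0]

print(predict_next("invests"))  # prints "in"
```

A real language model replaces these raw counts with probabilities learned by a neural network over billions of words, which is why it can generalize to prompts it has never seen, but the output is still, fundamentally, a prediction of what text tends to come next.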
OpenAI intends to commercialize its technology, and this current release lets the public test it. The company made headlines two years ago when it released GPT-3, an iteration of the technology that could compose poetry, role-play, and answer some questions. The latest version, GPT-3.5, and its chatbot counterpart, ChatGPT, are even better at text generation than their predecessor. They’re also good at following instructions, such as “Write a short story about a frog and a toad where the frog invests in mortgage-backed securities.” (The story ends with the toad following the frog’s advice and investing in mortgage-backed securities too, concluding that “sometimes taking a little risk pays off in the end.”)
The technology certainly has flaws. While the system is theoretically designed not to cross certain moral lines (it will assure you that Hitler was bad), it’s not hard to trick the AI into sharing advice on all sorts of evil and nefarious activities, particularly if you tell it that you’re writing fiction. The system, like other AI models, can also say biased and offensive things. As my colleague Sigal Samuel has explained, an earlier version of GPT generated extremely Islamophobic content and produced some very concerning talking points about the treatment of Uyghur Muslims in China.
Reflecting both GPT’s strengths and its limitations is the fact that the technology works a bit like a version of Google’s Smart Compose sentence suggestions: it generates ideas based on what it has previously read and processed. So while the AI can sound extremely confident, it doesn’t display a particularly deep understanding of the subjects it writes about. This is also why GPT tends to be better at writing about commonly discussed topics, like Shakespeare’s plays or the importance of mitochondria.
Vincent Conitzer, a professor of computer science at Carnegie Mellon University, explained: “It may sound a bit generic, but it’s very clearly written. In effect, it has learned what people say, so it rehashes the points that are often made on that particular topic.”
So for now, we’re not dealing with a know-it-all bot. AI-generated answers were recently banned from the coding feedback platform Stack Overflow because they are so likely to be wrong. Chatbots also easily stumble over riddles (though their attempts to answer them are very funny). Overall, the system is perfectly comfortable making things up that obviously make no sense under human scrutiny, a fact that may comfort those worried about the technology.
But the AI is getting better and better, and even the current version of GPT already works very well for certain tasks. Consider Mollick’s assignment: the system certainly wasn’t good enough to earn an A, but it still performed reasonably well. One Twitter user had ChatGPT take a mock SAT exam, and it scored around the 52nd percentile of test takers. Kris Jordan, a professor of computer science at UNC, told Recode that when he gave GPT his final exam, the chatbot earned a perfect grade, far outperforming the median human who took his course. And yes, even before ChatGPT’s release, students were using all kinds of artificial intelligence, including earlier versions of GPT, to complete assignments, and they probably haven’t been flagged for cheating. (Turnitin, the anti-plagiarism software maker, did not respond to multiple requests for comment.)
For now, it’s not clear how many enterprising students will start using GPT, or whether teachers and professors will figure out how to catch them. Yet these forms of AI are forcing us to grapple with the question of what we want humans to keep doing, and what we’d rather have technology do instead.
Philip Dawson, an expert who studies exam cheating at Deakin University, told Recode: “We all know how it happened.”
This story was first published in the Recode newsletter. Sign up here so you don’t miss the next one!