PCMag editors select and review products independently. If you buy through affiliate links, we may earn commissions, which help support our testing.

GPT-4 vs. ChatGPT-3.5: What’s the Difference?

A new and improved version of ChatGPT has landed, delivering great strides in artificial intelligence. Is it worth paying for? Here's what you need to know.

ChatGPT, the Natural Language Generation (NLG) tool from OpenAI that auto-generates text, took the tech world by storm late in 2022 (much like its Dall-E image-creation AI did earlier that year). Now the company's text-creation technology has leveled up to version 4, under the name GPT-4 (GPT stands for Generative Pre-trained Transformer, a name not even an Autobot would love). But can you use the new technology yet? And why would you want to? Here's how and why.


What's New in GPT-4?

OpenAI has actually been releasing versions of GPT for almost five years. It had its first release for public use in 2020, prompting AI announcements from other big names (including Microsoft, which eventually invested in OpenAI).

TechTarget defines parameters as “the parts of a large language model that define its skill on a problem such as generating text.” They're essentially what the model learns. GPT-1 had 117 million parameters to work with, GPT-2 had 1.5 billion, and GPT-3 arrived in mid-2020 with 175 billion parameters. By the time ChatGPT was released to the public in November 2022, the tech had reached version 3.5. If you're using the free version of ChatGPT, you'll still be using GPT-3.5 for a while.


How Can I Try GPT-4?

ChatGPT became popular fast. That caused server capacity problems, so it didn't take long for OpenAI, the company behind it, to offer a paid version of the tech. The paid tier didn't slow things down much; ChatGPT (both paid and free versions) eventually attracted as much web traffic as the Bing search engine. There are still moments when basic ChatGPT exceeds capacity—I got one such notification while writing this story.

The paid version is called ChatGPT Plus (or ChatGPT+) and costs $20 per month. OpenAI began a Plus pilot in early February, which went global on February 10; ChatGPT+ is now the primary way for people to get access to the underlying GPT-4 technology.

First, you need a free OpenAI account—you may already have one from playing with Dall-E to generate AI images—or you’ll need to create one. Then look for the Upgrade to Plus link in the menu. You may not be able to sign in if there’s a capacity problem, which is one of the things ChatGPT+ is supposed to eliminate.

An account with OpenAI is not the only way to access GPT-4 technology. Quora's Poe Subscriptions is another service with GPT-4 behind it; the company is also working with Claude, the “helpful, honest, and harmless” AI chatbot competitor from Anthropic.

Also, Microsoft’s Bing search engine was one of the first services to use OpenAI’s tech, and it turns out that Bing has been running a customized version of GPT-4 for search all along. To access it, sign up for the new Bing preview using the Microsoft Edge browser; there is no longer a waitlist. Any updates to GPT-4 will feed into the search engine. (However, when I asked Bing this morning, “Are you using GPT-4?” it shot back, “No... I’m using Bing’s own natural language generation system,” complete with a smiley emoji.) An upside of the Bing version is its access to current web data: GPT-4 and ChatGPT+ still use training data only through September 2021.

Other entities and services using GPT-4 include the government of Iceland, Duolingo, and Khan Academy.

OpenAI also has made the application programming interface (API) for GPT-4 available to developers, so expect it to show up in other services soon.
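For the curious, a minimal sketch of what calling the GPT-4 model through that API looks like in Python follows. This assumes the official `openai` Python package and an API key stored in the `OPENAI_API_KEY` environment variable; the prompt text is purely illustrative.

```python
import os

def build_chat_request(user_prompt, system_role="You are a helpful assistant."):
    """Assemble the messages payload the chat-completions endpoint expects."""
    return {
        "model": "gpt-4",
        "messages": [
            # The system message sets the bot's persona ("steerability");
            # the user message carries the actual prompt.
            {"role": "system", "content": system_role},
            {"role": "user", "content": user_prompt},
        ],
    }

request = build_chat_request("Summarize the differences between GPT-3.5 and GPT-4.")

# The network call itself requires a paid API key, so it is gated here:
if os.environ.get("OPENAI_API_KEY"):
    import openai
    openai.api_key = os.environ["OPENAI_API_KEY"]
    response = openai.ChatCompletion.create(**request)
    print(response["choices"][0]["message"]["content"])
```

Swapping the `model` string between `"gpt-3.5-turbo"` and `"gpt-4"` is, for developers, the entire upgrade path, which is why GPT-4 is expected to show up in third-party apps so quickly.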


Is ChatGPT+ Worth the Money?

Anecdotally, the reports seem positive, and the stats presented by OpenAI are impressive; anyone who has played with the GPT-3.5 version of ChatGPT should notice the improvement. For example, GPT-4 scored higher than its predecessor on exams including the LSAT, SAT, Uniform Bar Exam, and GRE. The company also says that compared with GPT-3.5, GPT-4 is 82% less likely to respond to requests for disallowed content, and it’s 60% less likely to fabricate facts, which in AI terms are called “hallucinations.” (Also, in tests conducted by the nonprofit Alignment Research Center, GPT-4 managed to social-engineer a real human on TaskRabbit into doing a job for it: solving a CAPTCHA.)

A main difference between versions is that while GPT-3.5 is a text-to-text model, GPT-4 is more of a data-to-text model. It can do things the previous version never dreamed of. This infographic spells out some other differences.

For instance, GPT-4 accepts images as part of a prompt. In one example, it viewed an image of refrigerator contents and spit out recipes using the ingredients it saw. It can even explain why memes are funny. That makes GPT-4 what’s called a “multimodal model.” (ChatGPT+ will remain text-output-only for now, though.)

GPT-4 also has a longer memory than previous versions. The more you chat with a bot powered by GPT-3.5, the more likely it is to lose the thread after a certain point, around 8,000 words. GPT-4’s short-term memory is closer to 64,000 words. GPT-4 can even pull in text from web pages when you share a URL in the prompt. The co-founder of LinkedIn has already written an entire book with GPT-4 (he had early access).

Version 4 is also more multilingual, showing accuracy in as many as 26 languages. And it has more “steerability,” meaning control over responses using a “personality” you pick—say, telling it to reply like Yoda, or a pirate, or whatever you can think of.

The actual reasons GPT-4 is such an improvement are more mysterious. MIT Technology Review got a full briefing on GPT-4 and said that while it is “bigger and better,” no one can say precisely why. That may be because OpenAI is now a for-profit tech firm, not a nonprofit researcher. OpenAI no longer reveals the number of parameters used in training, but another automated content producer, AX Semantics, estimates 100 trillion. Arguably, that brings “the language model closer to the workings of the human brain in regards to language and logic,” according to AX Semantics. (OpenAI CEO Sam Altman says that is not an accurate number.)

ChatGPT is also no longer the only game in town. DeepMind and Hugging Face are two companies working on multimodal AI models that could eventually be free for users, according to MIT Technology Review. And as noted above, the dataset ChatGPT uses is still restricted (in most cases) to September 2021 and earlier; other developers may have fresher data.

OpenAI admits that GPT-4 still struggles with bias and could even deliver hate speech. The tech still gets things wrong, of course, as people will always gleefully point out. It’s nowhere near perfect. But then again, neither are humans. 

About Eric Griffith