17.03.2023

Introduction to GPT-4

Writer Samira Joineau
© OpenAI

The AI research company OpenAI has unveiled the fourth generation of its large multimodal model GPT, described as “the latest milestone in OpenAI’s effort in scaling up deep learning”. How does GPT-4 differ from its predecessors?

The AI chatbot ChatGPT, running on the GPT-3.5 technology, has been captivating users ever since it launched back in November 2022. Now it is the turn of the large multimodal model GPT-4 to awe its users.

GPT-4 is multimodal, meaning that it accepts both image and text inputs while emitting text outputs only. OpenAI admits that the tool is “less capable than humans in many real-world scenarios” while still exhibiting “human-level performance on various professional and academic benchmarks”.
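To make this concrete, below is a minimal sketch of querying GPT-4 through OpenAI’s Chat Completions API, using the Python SDK as it existed around launch. Note that the public API initially accepted text inputs only (image input was demoed but not generally available), and the API key and prompt here are placeholders.

```python
# Minimal sketch: querying GPT-4 via OpenAI's Chat Completions API
# (pre-1.0 Python SDK, as available around the March 2023 launch).
# At launch the public API accepted text inputs only.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder, not a real key

response = openai.ChatCompletion.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "In two sentences, what does 'multimodal' mean for a language model?"},
    ],
)

# The model's reply comes back as a chat message.
print(response["choices"][0]["message"]["content"])
```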

“We found and fixed some bugs and improved our theoretical foundations. As a result, our GPT-4 training run was (for us at least!) unprecedentedly stable, becoming our first large model whose training performance we were able to accurately predict ahead of time”

OpenAI states that this new model is more reliable, factual, and creative, and that it can handle more nuanced instructions. This is partially explained by the fact that GPT-4 accepts much longer text inputs of up to 25,000 words, compared to the roughly 3,000 words GPT-3.5 could support.
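For reference, 25,000 words corresponds roughly to the 32,768-token context window of the extended GPT-4 variant; the standard model launched with 8,192 tokens. Below is a small sketch, assuming OpenAI’s tiktoken library and its cl100k_base encoding (the one used by GPT-4-family models), of checking whether a prompt fits within those limits.

```python
# Sketch: estimating whether a prompt fits GPT-4's context window,
# using OpenAI's tiktoken tokenizer. Limits are the launch-time context
# sizes: 8,192 tokens (standard) and 32,768 tokens (the "-32k" variant).
import tiktoken

GPT4_CONTEXT = 8_192
GPT4_32K_CONTEXT = 32_768

def fits_in_context(text: str, limit: int = GPT4_32K_CONTEXT) -> bool:
    enc = tiktoken.get_encoding("cl100k_base")  # encoding used by GPT-4 models
    n_tokens = len(enc.encode(text))
    print(f"{n_tokens} tokens against a limit of {limit}")
    return n_tokens <= limit

# Example: a 25,000-word text is on the order of the 32k-token window.
fits_in_context("word " * 25_000)
```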

In a livestream demo of GPT-4, OpenAI Co-Founder and President Greg Brockman showed new use cases for the multimodal model, for instance handing the tool a hand-drawn mockup of a website and having GPT-4 generate the code for a functional version of it.

The capabilities of such AI tools are notably measured through the results they obtain on standardized tests, such as the Uniform Bar Exam. As the chart below depicts, GPT-4 shows impressive progress compared to its predecessors: on a simulated bar exam it scores around the top 10% of test takers, where GPT-3.5 scored around the bottom 10%. Although these kinds of tests are not sufficient to assess human intelligence, they are useful for evaluating critical thinking and reasoning skills.

Overall, GPT-4 is capable of complex reasoning: it can, for instance, draft a lawsuit, help users learn new languages, and assist visually impaired individuals. For the latter, OpenAI partnered with Be My Eyes, an app featuring a dynamic image-to-text generator now powered by OpenAI’s GPT-4. When users send an image, the app provides instantaneous identification, interpretation, and conversational sighted support.

In spite of its clear potential for helping and supporting humans, GPT-4 still presents considerable flaws. Researchers have voiced concerns about how the tool can share and spread controversial content and misinformation. Another limitation is that GPT-4’s knowledge rests on training data that largely cuts off in September 2021. This time gap can produce real inaccuracies.

This type of AI tool will be a real breakthrough once it can be updated either in real time or, at least, every few hours.

Another concern, related to the previous one, is the lack of clarity regarding which sources and datasets GPT-4 was trained on. Knowing these sources is important, as many of them, such as social media posts, carry their own biases.

Last but not least, some people fear that such tools could replace them in their jobs. Considering the range of possibilities GPT-4 offers, this fear can be legitimate. But it also demonstrates that awareness and education around such AI tools are essential to avoid reluctance towards their use, and that, when used properly and with full knowledge of the facts, they can be harmless.

In the coming weeks, users will be able to evaluate for themselves to what extent the tool can be used and trusted.

Credits: OpenAI