The startup OpenAI has unveiled GPT-4, a powerful new AI model that can understand both text and images, describing it as “the latest milestone in its drive to scale up deep learning.”

OpenAI’s paying customers can currently utilise GPT-4 through ChatGPT Plus (with a usage limit), and developers can join a waitlist to gain access to the API.

GPT-4 performs at a “human level” on a variety of professional and academic benchmarks, can generate text, and accepts both text and image inputs, an upgrade over GPT-3.5, which accepted text only. For instance, GPT-4 passed a mock bar exam with a score in the top 10% of test takers, while GPT-3.5 scored in the bottom 10%.

New features introduced with GPT-4

OpenAI says it spent six months “iteratively aligning” GPT-4, drawing on lessons from ChatGPT and an internal adversarial testing programme. This process produced its “best-ever results” on factuality, steerability, and refusing to cross guardrails. As with earlier GPT models, GPT-4 was trained on data that was publicly available or licensed by OpenAI.

For now, the image understanding capability is not available to all OpenAI customers; OpenAI is testing it first with a single partner, Be My Eyes. Be My Eyes’ new Virtual Volunteer feature, which is powered by GPT-4, can answer questions about images sent to it. In a blog post, the company describes how it works:

“For example, if a user sends a picture of the inside of their refrigerator, the Virtual Volunteer will not only be able to correctly identify what’s in it but also extrapolate and analyze what can be prepared with those ingredients. The tool can also then offer several recipes for those ingredients and send a step-by-step guide on how to make them.”
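Image understanding like this is not yet exposed through OpenAI’s public API, so any code here is necessarily speculative. Purely as an illustrative sketch, a request in the style of Be My Eyes’ use case might look like the following, assuming a chat-completions-style interface in the `openai` Python package that accepts an image URL alongside text; the model name, prompt, and image URL are placeholders rather than anything OpenAI has announced.

```python
# Hypothetical sketch only: image input is not generally available at GPT-4's
# launch (OpenAI is testing it with Be My Eyes first). Assumes the `openai`
# Python package's chat-completions interface and placeholder names/URLs.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4",  # placeholder; a vision-capable model would be required
    messages=[
        {
            "role": "user",
            # A single user turn can combine text and an image reference.
            "content": [
                {
                    "type": "text",
                    "text": "What is in this fridge, and what could I cook with it?",
                },
                {
                    "type": "image_url",
                    "image_url": {"url": "https://example.com/fridge.jpg"},  # placeholder
                },
            ],
        }
    ],
)

print(response.choices[0].message.content)
```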

In GPT-4, OpenAI is introducing a new “system” message capability in the API that lets developers prescribe style and task by giving the model specific directions. System messages are essentially instructions that set the tone and boundaries for the AI’s subsequent interactions, and they will also come to ChatGPT in the future.
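As a rough illustration of the idea (not OpenAI’s own example), the sketch below shows a system message fixing the assistant’s tone and task before any user turn. It assumes the `openai` Python package’s chat completions interface; the model name and instructions are arbitrary placeholders.

```python
# Minimal sketch of steering GPT-4 with a "system" message. Assumes the
# `openai` Python package's chat completions interface; the instructions
# and model name below are illustrative, not OpenAI's own example.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        # The system message sets the style and boundaries for every later turn.
        {
            "role": "system",
            "content": (
                "You are a patient maths tutor. Never give the answer outright; "
                "always reply with a single short guiding question."
            ),
        },
        # User turns are then interpreted within those constraints.
        {"role": "user", "content": "How do I solve 8x + 7 = -23?"},
    ],
)

print(response.choices[0].message.content)
```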

GPT-4 “generally lacks knowledge of events” that occurred after the vast bulk of its training data cuts off (September 2021) “and does not learn from its experience,” OpenAI noted. The model can make simple reasoning errors that seem out of step with its competence across so many other areas, or be overly gullible in accepting obviously false statements from a user. And it can sometimes fail at hard problems the same way people do, such as introducing security flaws into the code it generates.

That said, OpenAI says it has made progress in some areas. For instance, GPT-4 is now more likely to refuse requests for instructions on how to create hazardous substances. According to the company, GPT-4 responds to sensitive requests, such as those for medical advice or information about self-harm, in accordance with OpenAI’s policies 29% more often, and is 82% less likely overall to respond to requests for “disallowed” content than GPT-3.5.

There is clearly still a lot to learn about GPT-4. Nonetheless, OpenAI is moving forward at full speed, evidently confident in the improvements it has made.

“We look forward to GPT-4 becoming a valuable tool in improving people’s lives by powering many applications,” OpenAI wrote. “There’s still a lot of work to do, and we look forward to improving this model through the collective efforts of the community building on top of, exploring, and contributing to the model.”
