OpenAI Unleashes Enhanced GPT-4 and GPT-3.5 Turbo with Function Calling Capability

In a recent announcement, OpenAI revealed a series of exciting updates and improvements to its highly regarded GPT-3.5 Turbo and GPT-4 models, changes that are likely to fuel the ongoing expansion of AI applications across a diverse range of sectors.

Powerful Function Calling Capability in Chat Completions API

At the forefront of the announcement, OpenAI introduced a game-changing function calling capability in the Chat Completions API. This feature allows developers to describe functions to GPT-3.5 Turbo and GPT-4 models, which can then output a JSON object containing the necessary arguments for the described functions. This bridges the gap between the models’ capabilities and external tools, thus offering a more reliable way to retrieve structured data from the models.
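To make this concrete, the snippet below is a minimal sketch of describing a function and reading back the model's JSON arguments, written against the openai Python package as it existed at the time of this announcement; the weather-lookup function and its schema are purely illustrative.

```python
# Minimal sketch: describe a function to the Chat Completions API and
# inspect the JSON arguments the model produces. The weather function
# and its schema are illustrative, not part of the announcement.
import json
import openai

functions = [
    {
        "name": "get_current_weather",
        "description": "Get the current weather for a given city",
        "parameters": {
            "type": "object",
            "properties": {
                "city": {"type": "string", "description": "City name, e.g. Paris"},
                "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
            },
            "required": ["city"],
        },
    }
]

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo-0613",
    messages=[{"role": "user", "content": "What's the weather like in Paris?"}],
    functions=functions,
    function_call="auto",  # let the model decide whether to call the function
)

message = response["choices"][0]["message"]
if message.get("function_call"):
    # The model returns the arguments as a JSON string matching the schema above.
    args = json.loads(message["function_call"]["arguments"])
    print(message["function_call"]["name"], args)
```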

OpenAI has fine-tuned the models to determine when a function needs to be called based on user input and to respond with a JSON object adhering to the function signature. This paves the way for various uses such as creating chatbots that answer questions by calling external tools, converting natural language into API calls or database queries, and extracting structured data from text.
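Building a chatbot that answers by calling external tools then becomes a simple round trip: execute the function the model asked for and pass its result back as a function message so the model can compose the final reply. A sketch continuing the snippet above (it reuses the message variable from that snippet, and get_current_weather is a hypothetical local implementation):

```python
# Continuing the sketch above: run the requested function locally and
# return its result so the model can answer in natural language.
def get_current_weather(city, unit="celsius"):
    # In a real application this would call a weather service.
    return {"city": city, "temperature": 22, "unit": unit}

if message.get("function_call"):
    args = json.loads(message["function_call"]["arguments"])
    result = get_current_weather(**args)

    followup = openai.ChatCompletion.create(
        model="gpt-3.5-turbo-0613",
        messages=[
            {"role": "user", "content": "What's the weather like in Paris?"},
            message,  # the assistant message containing the function_call
            {
                "role": "function",
                "name": message["function_call"]["name"],
                "content": json.dumps(result),  # function output goes back as a string
            },
        ],
    )
    print(followup["choices"][0]["message"]["content"])
```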

Enhanced GPT-4 and GPT-3.5 Turbo Models

OpenAI’s commitment to improving its models is evident in the updated and more steerable versions of GPT-4 and GPT-3.5 Turbo, which now include the aforementioned function calling capabilities.

The upgraded GPT-4 model also comes in a version with an extended context length for better comprehension of larger texts. OpenAI plans to invite many more developers to try GPT-4 in the coming weeks, with the intent to remove the waitlist entirely.

GPT-3.5 Turbo has also received significant updates, including enhanced steerability and a new variant, gpt-3.5-turbo-16k, which offers a 16k context window: four times the context length of the standard model, enough to handle around 20 pages of text in a single request.
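Using the longer context is simply a matter of selecting the 16k model name in the same API call. A sketch, where quarterly_report.txt stands in for any document of up to roughly 20 pages:

```python
# Sketch: the larger context window is used by choosing the 16k model name.
# The input file is a placeholder for any long document.
import openai

long_document = open("quarterly_report.txt").read()  # hypothetical input file

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo-16k",
    messages=[
        {"role": "system", "content": "Summarize the document in five bullet points."},
        {"role": "user", "content": long_document},
    ],
)
print(response["choices"][0]["message"]["content"])
```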

A Smooth Deprecation Process

OpenAI is also starting the deprecation process for the initial versions of GPT-4 and GPT-3.5 Turbo announced in March. Developers using stable model names will automatically be upgraded to the new models on June 27th. However, those who need more time can continue using older models until September 13th.

Reduced Costs

One of the most noteworthy elements of OpenAI’s announcement is a series of significant price cuts. OpenAI has reduced the cost of its most popular embedding model, text-embedding-ada-002, by 75%, and has cut the cost of input tokens for GPT-3.5 Turbo by 25%. At the new rates, a dollar buys roughly 700 pages of text processed by GPT-3.5 Turbo, making AI technology more accessible than ever before.
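For a sense of scale, here is a back-of-the-envelope cost helper using the reduced GPT-3.5 Turbo per-token rates announced at the time ($0.0015 per 1K input tokens, $0.002 per 1K output tokens); these constants are an assumption of this sketch and should be checked against OpenAI's current pricing page.

```python
# Rough cost estimate at the GPT-3.5 Turbo rates announced with this update.
# The rates below are assumptions of this sketch; verify against the live pricing page.
INPUT_PRICE_PER_1K = 0.0015   # USD per 1K input tokens
OUTPUT_PRICE_PER_1K = 0.002   # USD per 1K output tokens

def estimate_cost(prompt_tokens: int, completion_tokens: int) -> float:
    """Return the approximate USD cost of a single chat completion."""
    return (prompt_tokens / 1000) * INPUT_PRICE_PER_1K + \
           (completion_tokens / 1000) * OUTPUT_PRICE_PER_1K

# Example: a 3,000-token prompt with a 500-token reply costs well under a cent.
print(f"${estimate_cost(3000, 500):.4f}")
```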

Conclusion

OpenAI’s latest announcements represent a major leap forward in the development of AI models and applications. The new function calling capability opens up a whole new range of possibilities, and the improved versions of GPT-4 and GPT-3.5 Turbo are set to supercharge the AI tools developers can create. Moreover, the cost reductions are a welcome relief for developers and may be key in democratising access to this cutting-edge technology. It will be thrilling to see the myriad applications developers come up with using these advanced AI tools.
