OpenAI, the leading artificial intelligence research laboratory, has announced significant updates to its language models and pricing. In a recent announcement, OpenAI unveiled enhanced models with function calling and lower prices, aimed at empowering developers and expanding the capabilities of their applications.
Function Calling: Enhancing Connectivity and Structured Data Retrieval
One of the major highlights is the introduction of function calling capabilities in the Chat Completions API. OpenAI’s GPT-4 and GPT-3.5 Turbo models have been fine-tuned to detect when functions need to be called, allowing developers to connect GPT’s capabilities with external tools and APIs seamlessly.
Let’s say you have a chatbot application that provides information about movies. You can utilize function calling to connect the chatbot with an external tool or API that fetches movie details based on user queries. Here’s how the conversation might look:
User: Can you tell me about the latest Marvel movie?
Chatbot: Sure! Let me fetch that information for you.
In the background, the chatbot utilizes function calling to invoke the external tool or API:
Function Call: get_movie_details(movie_title: string)
The external tool/API processes the function call and retrieves the details of the latest Marvel movie, such as its title, release date, cast, and plot. Once the information is obtained, the chatbot responds to the user:
Chatbot: The latest Marvel movie is “Spider-Man: No Way Home.” It was released on December 17, 2021, and features Tom Holland, Zendaya, and Benedict Cumberbatch in lead roles. The movie follows the adventures of Spider-Man as he navigates through multiple dimensions.
By leveraging function calling, the chatbot seamlessly integrates with external tools or APIs, enabling it to retrieve structured data and provide accurate and up-to-date information to users.
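The flow above can be sketched in Python. The function schema follows the shape the Chat Completions API expects per the announcement; `get_movie_details` and its return values are hypothetical stand-ins for a real movie-database API, and the `function_call` payload is simulated rather than returned by a live model:

```python
import json

# Function schema in the shape the Chat Completions API expects: this list
# would be passed as the `functions` parameter so the model knows when and
# how to request a call.
FUNCTIONS = [
    {
        "name": "get_movie_details",
        "description": "Fetch details about a movie by title",
        "parameters": {
            "type": "object",
            "properties": {
                "movie_title": {
                    "type": "string",
                    "description": "Title of the movie",
                },
            },
            "required": ["movie_title"],
        },
    }
]

# Hypothetical backing implementation -- a real application would query an
# external movie database here.
def get_movie_details(movie_title: str) -> dict:
    return {"title": movie_title, "release_date": "2021-12-17"}

def dispatch(function_call: dict) -> dict:
    """Route a model-produced function_call payload to local code."""
    registry = {"get_movie_details": get_movie_details}
    func = registry[function_call["name"]]
    # The model returns arguments as a JSON-encoded string, not a dict.
    args = json.loads(function_call["arguments"])
    return func(**args)

# Simulate the payload the model might emit for the user query above.
call = {
    "name": "get_movie_details",
    "arguments": '{"movie_title": "Spider-Man: No Way Home"}',
}
result = dispatch(call)
```

In a full application, `result` would be sent back to the model as a `function` role message so it can compose the natural-language reply shown above.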
GPT-4: Upgraded Model with Function Calling
OpenAI has also unveiled updated versions of its GPT-4 and GPT-3.5 Turbo models. GPT-4 comes with enhanced functionality, including function calling, enabling developers to leverage the model’s capabilities more effectively.
GPT-4-32k: Extended Context Length for Better Comprehension
Additionally, GPT-4-32k offers an extended context length of 32,000 tokens, allowing the model to comprehend much larger texts. This makes it well suited to applications that must process substantial amounts of information in a single request.
GPT-3.5 Turbo-0613: Function Calling and Improved Steerability
The latest iteration, gpt-3.5-turbo-0613, incorporates function calling capabilities and introduces more reliable steerability through the system message. These enhancements empower developers to guide the model’s responses more effectively, resulting in more accurate and context-aware interactions.
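Steering via the system message looks like the sketch below. The model name and message roles match the announcement; the persona text is purely illustrative, and the final API call is shown as a comment since it requires a live key:

```python
# A minimal sketch of steering gpt-3.5-turbo-0613 via the system message.
# The system message sets behavior the 0613 model follows more reliably.
messages = [
    {
        "role": "system",
        "content": "You are a concise movie assistant. Answer in at most two sentences.",
    },
    {"role": "user", "content": "Can you tell me about the latest Marvel movie?"},
]

request = {
    "model": "gpt-3.5-turbo-0613",
    "messages": messages,
}
# This payload would be sent to the Chat Completions API, e.g.:
#   openai.ChatCompletion.create(**request)
```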
GPT-3.5 Turbo-16k: Expanded Context Length for Comprehensive Text Processing
This variant supports four times the context length of the standard gpt-3.5-turbo, enabling the model to handle approximately 20 pages of text in a single request. It is priced at twice the rate of the standard model, letting developers trade cost for context where their applications require it.
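The "approximately 20 pages" figure can be sanity-checked with common rules of thumb. The per-token and per-page character counts below are rough heuristics I am assuming, not figures from the announcement:

```python
# Rough sanity check of the "~20 pages" claim for a 16k context window,
# assuming ~4 characters per token and ~3,000 characters per page
# (both are common heuristics, not official figures).
CONTEXT_TOKENS = 16_384
CHARS_PER_TOKEN = 4
CHARS_PER_PAGE = 3_000

pages = CONTEXT_TOKENS * CHARS_PER_TOKEN / CHARS_PER_PAGE
# pages comes out to roughly 22, consistent with the ~20-page claim.
```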
75% Cost Reduction on Text-Embedding-Ada-002
The popular text-embedding-ada-002 now comes at a 75% reduced price of $0.0001 per 1,000 tokens. This reduction in cost enables developers to leverage high-quality embeddings for a wide range of natural language processing tasks at a more affordable rate.
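At the new rate, embedding even a large corpus is inexpensive. A quick back-of-the-envelope calculation using the quoted price (the 10-million-token corpus size is an illustrative assumption):

```python
# Embedding cost at the reduced text-embedding-ada-002 price.
PRICE_PER_1K_TOKENS = 0.0001  # USD, as quoted in the announcement

corpus_tokens = 10_000_000  # illustrative corpus size
cost = corpus_tokens / 1000 * PRICE_PER_1K_TOKENS
# Embedding 10 million tokens costs $1.00 at this rate.
```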
25% Cost Reduction on Input Tokens for GPT-3.5 Turbo
Developers now benefit from a 25% cost reduction on input tokens for gpt-3.5-turbo, lowering the price to $0.0015 per 1,000 input tokens, with output tokens priced at $0.002 per 1,000. The new gpt-3.5-turbo-16k offers the expanded context length at $0.003 per 1,000 input tokens and $0.004 per 1,000 output tokens, giving developers more flexibility when working with larger texts.
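The price difference between the two variants is easy to work out per request. The helper below uses the rates quoted above; the token counts in the examples are illustrative:

```python
# Prices quoted in the announcement, in USD per 1,000 tokens.
PRICES = {
    "gpt-3.5-turbo":     {"input": 0.0015, "output": 0.002},
    "gpt-3.5-turbo-16k": {"input": 0.003,  "output": 0.004},
}

def request_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Cost in USD of one request, given its input and output token counts."""
    p = PRICES[model]
    return input_tokens / 1000 * p["input"] + output_tokens / 1000 * p["output"]

# Illustrative workloads: a standard-sized request vs. one that needs
# the extended context window.
standard = request_cost("gpt-3.5-turbo", 3_000, 1_000)       # $0.0065
extended = request_cost("gpt-3.5-turbo-16k", 10_000, 1_000)  # $0.034
```

Because the 16k variant costs twice as much per token, it pays off only when a request genuinely needs more context than the standard model can hold.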
By unveiling enhanced models with function calling and more affordable pricing, OpenAI aims to empower developers to create more sophisticated and cost-effective applications. Feedback from the developer community remains crucial in shaping the platform, and OpenAI looks forward to the creative solutions developers will build with these latest models and features.
Source: OpenAI Blog