OpenAI has released an API for its ChatGPT and Whisper models, which allows developers to integrate these capabilities into their apps and products. The ChatGPT model has been a runaway success, with over 100 million monthly active users and a wide variety of use cases. Now, with the introduction of a new pricing structure, the ChatGPT API is more affordable than ever before, making it accessible to even more businesses.
At just $0.002 per 1,000 tokens, ChatGPT API’s new pricing is approximately half of the current token price. This lower cost is expected to make the API more accessible to smaller businesses, as well as larger enterprises that want to experiment with new AI-driven features and services. Already, major companies such as Snap, Quizlet, Instacart, and Shopify have begun utilising the ChatGPT API to build personalised assistants, virtual tutors, and new shopping assistants.
OpenAI has also addressed concerns about the ChatGPT model’s susceptibility to prompt-based attacks, or adversarial prompts that get the model to perform tasks that weren’t part of its original objectives. To address this issue, OpenAI has developed a new approach called Chat Markup Language, or ChatML. This approach feeds text to the ChatGPT API as a sequence of messages together with metadata, as opposed to the standard ChatGPT, which consumes raw text represented as a series of tokens. This approach allows developers to better tailor and filter the ChatGPT model’s responses.
In addition, OpenAI is introducing dedicated capacity plans, which allow customers to pay for an allocation of compute infrastructure to run an OpenAI model, such as GPT-3.5-Turbo. OpenAI claims this plan gives customers “full control” over the instance’s load, and the ability to enable features such as longer context limits. While higher context limits might not solve all the bias and toxicity issues, they could lead models like GPT-3.5-Turbo to hallucinate less. Brockman says that dedicated capacity customers can expect GPT-3.5-Turbo models with up to a 16k context window, meaning they can take in four times as many tokens as the standard ChatGPT model.
The release of the ChatGPT and Whisper APIs is a significant step forward for OpenAI, which has been looking for ways to monetize its technologies. With the APIs, OpenAI hopes to make its AI models more accessible and usable for developers, while also addressing concerns about bias and toxicity in the models. While the ChatGPT API is currently only available to early adopters, OpenAI is expected to make it more widely available in the near future.
AI-powered chatbots and assistants are already ubiquitous features of the digital landscape, and OpenAI’s ChatGPT API represents a significant step forward in their development and deployment. As businesses increasingly look for new ways to engage with customers and provide personalised experiences, AI-powered chatbots and assistants are set to play an ever more central role in the digital economy. With the ChatGPT API now available to early adopters, and with major companies already utilising its capabilities to build personalised assistants, virtual tutors, and shopping assistants, the future of AI-powered chatbots and assistants looks bright.
While concerns about bias and toxicity in AI text-generation models persist, OpenAI’s introduction of the Chat Markup Language and dedicated capacity plans show that the company is attempting to address these issues head-on.
One thing is certain, as AI continues to advance and mature, the potential for chatbots and assistants to transform the way we interact with technology and each other is only set to grow.