OpenAI, one of the leading AI research organizations in the world, has announced that third-party developers can now integrate ChatGPT, its AI-powered chatbot, into their apps and services via an API, at a price significantly lower than that of its existing language models. The company is also making Whisper, its speech-to-text model, available through an API and is introducing some important changes to its developer terms of service. This article explores the details of OpenAI’s announcement, including the benefits and costs of integrating ChatGPT and Whisper into apps and services.
OpenAI claims that its ChatGPT API can be used for more than just creating an AI-powered chat interface; the company says its new model family, called gpt-3.5-turbo, is the best model for many non-chat use cases as well. This model is not the same as the one Bing is using, which Microsoft has described as a next-generation OpenAI large language model that is even faster, more accurate, and more capable than ChatGPT and GPT-3.5. However, given the amount of money Microsoft has invested in OpenAI, it is not surprising that it has access to technology that is not available to the average developer. Microsoft is also using a healthy dose of its own technology in Bing.
The ChatGPT API is the latest addition to OpenAI’s suite of APIs. It can be used to build an AI-powered chat interface for apps and services, among other things. OpenAI has optimized the model behind the API, making it significantly cheaper than the company’s existing language models: it costs $0.002 per 1,000 tokens, ten times less than OpenAI’s existing GPT-3.5 models.
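As a rough illustration of what calling the API looks like, the sketch below builds the headers and JSON body for a gpt-3.5-turbo chat request. The endpoint and payload shape follow OpenAI’s chat completions format, but the key is a placeholder and the helper function name is our own; treat this as a sketch rather than official sample code.

```python
import json

API_URL = "https://api.openai.com/v1/chat/completions"
API_KEY = "sk-..."  # placeholder; substitute your own API key

def build_chat_request(user_message):
    """Build the headers and JSON body for a gpt-3.5-turbo chat request."""
    headers = {
        "Authorization": f"Bearer {API_KEY}",
        "Content-Type": "application/json",
    }
    body = {
        "model": "gpt-3.5-turbo",
        "messages": [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": user_message},
        ],
    }
    return headers, json.dumps(body)

headers, body = build_chat_request("Hello!")
# Posting this (e.g. requests.post(API_URL, headers=headers, data=body))
# returns JSON whose reply text sits in choices[0]["message"]["content"].
```

Billing is applied to the tokens in both the request and the response, which is why longer conversations cost more.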
OpenAI has also announced a new API for Whisper, its speech-to-text model, which can transcribe or translate audio at a cost of $0.006 per minute. The Whisper model is open source, so you can run it on your own hardware without paying anything, but OpenAI likely has access to more powerful hardware. If you are looking for a quick turnaround or need to do transcription from lower-powered devices like phones, using the API may be the way to go.
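To get a feel for the pricing, here is a minimal cost estimate based on the announced $0.006/minute rate. It assumes billing scales linearly with audio duration; the actual rounding behavior is not specified here.

```python
WHISPER_RATE_PER_MINUTE = 0.006  # USD, per the announced pricing

def estimate_whisper_cost(audio_seconds):
    """Estimate the Whisper API cost for audio of the given length,
    assuming a linear $0.006/minute rate."""
    return audio_seconds / 60 * WHISPER_RATE_PER_MINUTE

# A one-hour podcast episode:
print(round(estimate_whisper_cost(3600), 3))  # → 0.36
```

At that rate, transcribing an hour of audio costs about 36 cents, which makes the API competitive with running the open-source model on rented hardware for many workloads.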
The ChatGPT API charges by the token: the blocks of text the system breaks sentences and words into in order to predict what it should output next, which means a single snippet of text can cost several tokens. According to OpenAI’s documentation, a general rule of thumb is that “one token generally corresponds to ~4 characters” in English, and the company provides a tool for checking how many tokens a string of text requires. For instance, the string “ChatGPT is great!” takes six tokens, which means the API breaks it up into “Chat,” “G,” “PT,” “ is,” “ great,” and “!”.
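The ~4-characters-per-token rule of thumb can be turned into a quick back-of-the-envelope estimator. Real tokenization depends on the model’s vocabulary (OpenAI’s tiktoken library computes it exactly), so this heuristic is only a ballpark figure; note that it undercounts the “ChatGPT is great!” example above, which actually takes six tokens.

```python
def estimate_tokens(text):
    """Rough token estimate using the ~4 characters per token
    rule of thumb for English text. A ballpark only; real
    tokenization depends on the model's vocabulary."""
    return max(1, round(len(text) / 4))

print(estimate_tokens("ChatGPT is great!"))  # 17 characters → estimate of 4
```

For billing purposes, an estimate like this is mainly useful for sizing large batches of text before committing to the API.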
A dedicated instance of ChatGPT
Developers will be able to get a dedicated instance of ChatGPT if they’re running a massive amount of data through the API. Running a dedicated instance gives you more control over which model you’re using, how long you want it to take to respond to requests, and how long conversations with the bot can be.
OpenAI is also introducing some policy changes based on developer feedback. The biggest is that it will no longer use data submitted through the API to train its models unless customers explicitly agree to that usage. This change could help alleviate concerns about putting proprietary information into the bot; some companies have barred employees from using the technology entirely.
In conclusion, OpenAI’s announcement of the ChatGPT API is a significant development for third-party developers. The API provides an affordable option for creating AI-powered chat interfaces, and developers can use it for other purposes as well. The introduction of an official ChatGPT API could be the moment the floodgates open, and we will likely see many more apps and services integrating OpenAI’s tech.