ChatGPT debuted last year to a frenzy of interest and enthusiasm. Today, more than 2 million developers build with OpenAI's API and 100 million people use ChatGPT weekly, CEO Sam Altman said during the keynote at the company's first developer conference Monday in San Francisco.
OpenAI's services are a major pull for enterprises: 92% of Fortune 500 companies use its products, a 12-percentage-point increase since the end of August, when the company launched ChatGPT Enterprise. Those enterprise customers include Coca-Cola, JetBlue, Lowe's and PwC.
At OpenAI’s first developer conference Monday, the company unveiled more than a dozen upgrades to services, many of which focus on customizing models for specific tasks. Here are five updates CIOs should watch.
GPT-4 Turbo

OpenAI is launching a new version of its GPT-4 model, called GPT-4 Turbo. The model is 2.75 times cheaper than GPT-4 and has a 128,000-token context window, equivalent to roughly 300 pages of text per prompt, Altman said. Its knowledge cutoff is April 2023.
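For developers, GPT-4 Turbo is reached through the same chat completions endpoint as its predecessors, just under a new model identifier. A minimal sketch, assuming the `openai` Python package (v1+) and the `gpt-4-1106-preview` identifier announced at DevDay (check OpenAI's docs for the current alias); the prompt content here is purely illustrative:

```python
import os

# Illustrative request targeting GPT-4 Turbo. The large context window means
# long documents can be passed in a single prompt rather than chunked.
request = {
    "model": "gpt-4-1106-preview",  # DevDay identifier for GPT-4 Turbo
    "messages": [
        {"role": "system", "content": "You are a concise analyst."},
        {"role": "user", "content": "Summarize the key risks in this filing: ..."},
    ],
}

# Only send the request when an API key is configured.
if os.environ.get("OPENAI_API_KEY"):
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    response = client.chat.completions.create(**request)
    print(response.choices[0].message.content)
```

Because only the model name changes, existing GPT-4 integrations can trial the cheaper model by swapping a single string.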
Custom Model Program
The ChatGPT maker called on enterprises to create custom models alongside OpenAI developers through a new program. For organizations that are selected, a dedicated group of OpenAI researchers will modify each step of the model training process to tailor the model to specific business needs.
“We won’t be able to do this with many companies to start … and at least initially it won’t be cheap,” Altman said.
GPTs

Companies can create tailored versions of ChatGPT, called GPTs, for specific tasks directly within the ChatGPT app, no coding knowledge required. Once created, GPTs can be published for public use or shared only for a company's internal use.
OpenAI will launch a GPT store later this month, featuring GPTs created by verified builders. Privacy policies for GPTs will build on the existing rules, such as letting account holders opt out of model training.
Copyright Shield

OpenAI also added built-in copyright safeguards for its systems and committed to defending customers who face legal claims. "Copyright Shield means that we will step in and defend our customers and pay the costs incurred if you face legal claims on copyright infringement," Altman said. "This applies both to ChatGPT Enterprise and the API."
Assistants API

Companies can build personalized AI assistants through OpenAI's Assistants API, which rolled out in beta Monday. Assistants have access to built-in tools such as Code Interpreter, which writes and runs Python code in a sandboxed environment.
“Over time, GPTs and assistants are … going to be able to do much, much more — they’ll gradually be able to plan and perform more complex actions on your behalf,” Altman said. “We really believe in the importance of gradual iterative deployment. We believe it's important for people to start building with and using these agents now to get a feel for what the world is going to be like as they become more capable.”
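The Assistants API described above is defined declaratively: an assistant is a named configuration of instructions, tools and a model. A hedged sketch, assuming the beta endpoints as documented at launch (field names may change while the API is in beta); the assistant's name and instructions are illustrative:

```python
import os

# Illustrative assistant definition enabling the built-in Code Interpreter
# tool, which lets the assistant write and execute Python in a sandbox.
assistant_config = {
    "name": "Data Analyst",
    "instructions": "Write and run Python code to answer data questions.",
    "tools": [{"type": "code_interpreter"}],
    "model": "gpt-4-1106-preview",
}

# Only create the assistant when an API key is configured.
if os.environ.get("OPENAI_API_KEY"):
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    assistant = client.beta.assistants.create(**assistant_config)
    print(assistant.id)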