According to OpenAI, GPT-4 Turbo brings six major improvements over GPT-4:
First, a longer context window: GPT-4 supports a maximum context length of 8k tokens (roughly 6,000 words), while GPT-4 Turbo supports a 128k context length; at roughly 1k words per article, GPT-4 Turbo can process the equivalent of about 128 articles in a single prompt.
Second, better model control: GPT-4 Turbo introduces new controls that let developers steer the model's output more precisely, improving the user experience.
Third, an updated knowledge base: the real-world knowledge cutoff for GPT-4 Turbo is now April 2023, compared with September 2021 for GPT-4.
Fourth, multimodal APIs: the text-to-image model DALL·E 3, GPT-4 Turbo with visual input capabilities, and a new text-to-speech (TTS) model are all now available through the API.
Fifth, custom fine-tuning: OpenAI now allows developers to create custom versions of ChatGPT, including modifying the model training process, doing additional domain-specific pre-training, and running custom domain-specific reinforcement-learning post-training.
Sixth, lower prices and higher rate limits: GPT-4 Turbo's input tokens cost only one third as much as GPT-4's, and its output tokens cost half as much; OpenAI also doubled the tokens-per-minute rate limit for all paying GPT-4 customers.
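The pricing ratios above can be checked with the per-1K-token prices published at the announcement (GPT-4 8k: $0.03 input / $0.06 output; GPT-4 Turbo: $0.01 input / $0.03 output). A minimal cost-estimation sketch; the `request_cost` helper and the example token counts are illustrative, not part of any official SDK:

```python
# Per-1K-token prices (USD) as published at the Nov 2023 announcement.
PRICES = {
    "gpt-4":       {"input": 0.03, "output": 0.06},
    "gpt-4-turbo": {"input": 0.01, "output": 0.03},
}

def request_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Estimate the USD cost of one API request for the given model."""
    p = PRICES[model]
    return input_tokens / 1000 * p["input"] + output_tokens / 1000 * p["output"]

# Example: a request with 10K input tokens and 2K output tokens.
gpt4_cost = request_cost("gpt-4", 10_000, 2_000)        # 0.30 + 0.12 = 0.42
turbo_cost = request_cost("gpt-4-turbo", 10_000, 2_000)  # 0.10 + 0.06 = 0.16
print(f"GPT-4: ${gpt4_cost:.2f}, GPT-4 Turbo: ${turbo_cost:.2f}")
```

For this mixed workload the Turbo request costs less than 40% of the GPT-4 request, consistent with the 1/3 input and 1/2 output ratios.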
OpenAI CEO Sam Altman says GPT-4 Turbo is available to all paying developers through the gpt-4-1106-preview API, with a stable release expected in the coming weeks.
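Developers select the preview model by name in a Chat Completions request. A minimal sketch of such a request body; actually sending it requires an API key and an HTTP client or the official openai package, so here the payload is only constructed and printed, and the example messages are illustrative:

```python
import json

# Chat Completions request body targeting the preview model named
# in the announcement; the message contents are made-up examples.
payload = {
    "model": "gpt-4-1106-preview",
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarize GPT-4 Turbo's new features."},
    ],
    "max_tokens": 256,
}

print(json.dumps(payload, indent=2))
```

Once a stable version ships, switching to it should only require changing the `model` string.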