Our plugin uses the OpenAI API, which is powered by a diverse set of models with different capabilities and price points. You can find the list of models available in the plugin under Helper>AI>Model and read their descriptions in the API Documentation.
One of the important characteristics of a model is its Max tokens value, which determines the maximum amount of site content the AI bot can use, since the API cannot handle more tokens per request. In other words, when choosing posts, pages, or products in your Prompt type for training a bot, remember that there is a limit on the amount of text the bot can use.
Each model has a fixed Max tokens value that cannot be exceeded when using that model. A token is a piece of a word used for natural language processing. For English text, 1 token is approximately 4 characters or 0.75 words. To estimate the length of a text in tokens, you can use the Tokenizer tool.
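For a quick estimate without opening the Tokenizer tool, the ~4-characters-per-token rule of thumb above can be applied directly. This is a rough sketch, not the exact tokenizer the API uses, so treat its result as an approximation:

```python
def estimate_tokens(text: str) -> int:
    """Rough token estimate for English text: ~4 characters per token."""
    return max(1, round(len(text) / 4))

# A 32-character sentence comes out to roughly 8 tokens.
print(estimate_tokens("Hello, how can I help you today?"))  # → 8
```

For precise counts, always verify with the Tokenizer tool, since punctuation and rare words can change the real token count noticeably.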
How is the Max tokens value of a specific model calculated?
Max tokens = Prompt length (the content that you chose in the Prompt type and Prompt fields) + Max bot response tokens (the maximum bot response length per question, also called Completion; see https://take.ms/Mqf3X).
Calculation example
By default, the plugin uses the text-davinci-003 model, whose Max tokens value is 4097. You can find this information in the API Documentation.
That is, the sum of the Max bot response tokens and the Prompt tokens should not exceed 4097.
For example, if you set the Max bot response to 500 tokens, you will have 3597 tokens left for your Prompt content. You can calculate the length of the text of your posts/pages in tokens using the Tokenizer tool.
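The budget arithmetic from this example can be written out as follows. The constant names are our own for illustration; only the numbers (4097 and 500) come from the example above:

```python
# Budget calculation for the default model, using the values from the example.
MODEL_MAX_TOKENS = 4097   # Max tokens for text-davinci-003
MAX_BOT_RESPONSE = 500    # Max bot response setting chosen in the plugin

# Tokens left over for the Prompt content (selected posts/pages).
prompt_budget = MODEL_MAX_TOKENS - MAX_BOT_RESPONSE
print(prompt_budget)  # → 3597
```

Raising the Max bot response setting shrinks this prompt budget one-for-one, so longer answers leave less room for site content.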
Troubleshooting for Max Token limit exceeded
An example of a Max tokens limit notification is shown below:
This model’s maximum context length is 4097 tokens, however you requested 13412 tokens (12912 in your prompt; 500 for the completion). Please reduce your prompt; or completion length.
This message means that the length of the content you are using as a prompt (the selected site pages/posts) for the AI bot exceeds the allowed Max tokens limit for the current GPT model. To solve this, you can try one of the following:
- Choose another GPT model that has a higher Max tokens limit, for example gpt-4o-mini. Learn more about models in the OpenAI documentation.
- Reduce the length of your prompt by removing some pages or posts from the Prompt settings.
- Use the Assistants API, which allows you to use much more information.
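The second option (reducing the prompt) can be sketched as a simple greedy selection: keep whole pages or posts until the token budget runs out. `fit_prompt` is a hypothetical helper written for this article, not part of the plugin, and it reuses the ~4-characters-per-token heuristic described earlier:

```python
def fit_prompt(sections, prompt_budget):
    """Greedily keep whole sections (pages/posts) until the token budget is reached.

    sections      -- list of page/post texts, in priority order
    prompt_budget -- tokens available for the prompt (Max tokens minus Max bot response)
    Returns the kept sections and their estimated token cost.
    """
    kept, used = [], 0
    for text in sections:
        cost = max(1, round(len(text) / 4))  # ~4 characters per token heuristic
        if used + cost > prompt_budget:
            break  # this section would exceed the budget; stop here
        kept.append(text)
        used += cost
    return kept, used

# Three pages of ~100 estimated tokens each against a 250-token budget:
pages = ["a" * 400, "b" * 400, "c" * 400]
kept, used = fit_prompt(pages, 250)
print(len(kept), used)  # → 2 200
```

In practice you would put your most important pages first in the list, since everything past the budget is dropped.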