If the chatbot shows a Rate limit or Max Token limit notification after you set up the AI bot, you need to reduce the content in the Prompt field.
An example of a Rate limit notification is shown below.
An example of a Max Token limit notification is shown below:
This model’s maximum context length is 4097 tokens, however you requested 13412 tokens (12912 in your prompt; 500 for the completion). Please reduce your prompt; or completion length.
The issue is caused by the Rate limits and Max token limits imposed by the API model you use. For example, if you use gpt-3.5-turbo, the maximum number of tokens you can send to this model is 4,097 tokens per request. Please reduce the content in the Prompt fields used by the AI bot (for example, by deleting some pages or posts) so that the text length does not exceed 4,097 tokens. You can calculate the length of a post's or page's text in tokens using the Tokenizer tool. Unfortunately, we cannot increase a model's maximum number of tokens, since it is set on the OpenAI API side.
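If you prefer to check your content programmatically before pasting it into the Prompt field, the sketch below estimates a token count using OpenAI's published rule of thumb that one token is roughly four characters of English text. This is only an approximation, and the limit and reserve values are assumptions based on the figures in this article (4,097-token context, 500 tokens for the completion); for exact counts, use the Tokenizer tool.

```python
# Rough token estimate for prompt text. Assumes ~4 characters per token
# (OpenAI's rule of thumb for English); use the Tokenizer tool for exact counts.

MODEL_CONTEXT_LIMIT = 4097   # gpt-3.5-turbo context window, per this article
COMPLETION_RESERVE = 500     # tokens reserved for the reply, per the error above

def estimate_tokens(text: str) -> int:
    """Approximate token count: about one token per 4 characters."""
    return len(text) // 4

def fits_in_context(prompt: str, completion_tokens: int = COMPLETION_RESERVE) -> bool:
    """Check whether prompt plus reserved completion stays under the model limit."""
    return estimate_tokens(prompt) + completion_tokens <= MODEL_CONTEXT_LIMIT

prompt = "Some page or post content used by the AI bot. " * 400
print(estimate_tokens(prompt), fits_in_context(prompt))
```

If `fits_in_context` returns `False`, delete some pages or posts from the Prompt field and re-check until the estimate is safely under the limit.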
The Prompt field settings are shown below.