I am experiencing an issue with the GPT-4-turbo model integration in my Pega application. I configured a REST connector against the OpenAI API endpoint (https://api.openai.com/v1/completions) and am passing the required headers, including the API key.
However, when I make a request to the GPT-4-turbo model, I receive the following error:
"You exceeded your current quota, please check your plan and billing details. For more information on this error"
Steps I have followed:
1. Created the REST connector with the GPT-4-turbo API endpoint.
2. Added the API key for authorization.
3. Configured the input parameters: model, prompt, temperature, and max_tokens.
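For reference, the steps above amount to building a JSON request body like the sketch below. The parameter values are illustrative assumptions, not values from this post; note also that OpenAI serves gpt-4-turbo through the chat completions endpoint (https://api.openai.com/v1/chat/completions), which takes a messages array rather than a flat prompt parameter.

```python
import json

# Hypothetical helper: builds the JSON body a REST connector would send
# to OpenAI's chat completions endpoint. Defaults are assumptions.
def build_request_body(prompt, model="gpt-4-turbo",
                       temperature=0.7, max_tokens=256):
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": temperature,
        "max_tokens": max_tokens,
    }

# Serialize for the connector's request payload.
payload = json.dumps(build_request_body("Summarize this case in one sentence."))
```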
Despite this, the model does not respond as expected.
Could you please advise on troubleshooting steps, or check whether there is an issue with the API integration on the Pega side?
This issue is not caused by the Pega configuration; it comes from the OpenAI account quota. The error indicates that the API key in use has no available credits, or that billing is not enabled for the project the key belongs to. To resolve it:

1. Log in to the OpenAI platform and open the Billing section; confirm a valid payment method is on file and the usage limit is not exhausted.
2. Verify that the API key belongs to an active project with access to GPT-4-turbo.

Once billing is active and quota is available, the same REST connector call from Pega will work without any changes. No updates are needed on the Pega side for this error.
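To confirm this diagnosis before touching the Pega configuration, you can inspect the raw error JSON that OpenAI returns in the response body. A minimal sketch, assuming the documented error shape ({"error": {"message": ..., "type": ..., "code": ...}}); the function name is mine, not a Pega or OpenAI API:

```python
import json

# Classifies an OpenAI error response body into a likely cause, so you
# know whether to check billing or the key on the connector.
def diagnose_openai_error(raw_body: str) -> str:
    err = json.loads(raw_body).get("error", {})
    kind = err.get("type") or err.get("code") or ""
    msg = err.get("message", "")
    if kind == "insufficient_quota" or "exceeded your current quota" in msg:
        return "quota/billing"
    if kind == "invalid_api_key" or "api key" in msg.lower():
        return "authentication"
    return "other"
```

A quota error like the one in this post maps to "quota/billing", which is exactly the case where fixing the OpenAI account, not the connector, is the remedy.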