Below you can find known issues when working with Supported Chat Models.

OpenAI's "o" models restrictions

The parameters temperature, top_p, and n must be set to 1, while presence_penalty and frequency_penalty are fixed at 0. Setting any of these parameters to a different value may generate errors such as the following:

Unsupported value: 'temperature' does not support 0.0 with this model. Only the default (1) value is supported

Check your assistant configuration accordingly.
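
As an illustration, here is a minimal sketch of a request that respects these restrictions, assuming the assistant is exposed through an OpenAI-compatible endpoint; the base URL, API key, and model name are placeholders:

from openai import OpenAI

# Placeholders: point the client at your OpenAI-compatible endpoint.
client = OpenAI(base_url="https://your-endpoint/v1", api_key="YOUR_API_KEY")

# For "o" models, keep temperature, top_p, and n at their default value (1)
# and leave presence_penalty and frequency_penalty at 0.
response = client.chat.completions.create(
    model="o3-mini",
    messages=[{"role": "user", "content": "Summarize the issue."}],
    temperature=1,  # any value other than 1 raises "Unsupported value"
    top_p=1,
    n=1,
)
print(response.choices[0].message.content)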

Empty responses when using any of OpenAI’s reasoning models

When creating a Chat Assistant using any of OpenAI’s reasoning models—such as o1, o1-mini, or o3-mini—you might encounter a scenario where the response status shows “Succeeded” but the actual response content is empty. One common cause is an insufficient Max Output Tokens setting.

According to OpenAI’s documentation on reasoning models, even though reasoning tokens are not visible to you, they still consume space in the model’s context window. If Max Output Tokens is set too low, the model may not generate any user-visible output.

To resolve this, try configuring the assistant with the maximum Max Output Tokens allowed by each model:

  • o1 and o3-mini: up to 100k tokens

  • o1-mini: up to 65k tokens

Increasing the Max Output Tokens to these values should prevent empty responses when the status indicates “Succeeded.”
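
When calling the model directly rather than through the assistant configuration, the same idea applies. This sketch assumes an OpenAI-compatible endpoint (placeholders as before) and uses max_completion_tokens, the parameter OpenAI's reasoning models expect:

from openai import OpenAI

client = OpenAI(base_url="https://your-endpoint/v1", api_key="YOUR_API_KEY")  # placeholders

# Reserve a generous completion budget: reasoning tokens are invisible but
# count against this limit, so a low value can leave no room for the answer.
response = client.chat.completions.create(
    model="o1-mini",
    messages=[{"role": "user", "content": "Outline a migration plan."}],
    max_completion_tokens=65536,  # o1-mini's documented maximum
)
print(response.choices[0].message.content)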

Invalid 'max_tokens': integer below minimum value

The following error appears when executing an assistant whose max_tokens parameter is set to -1:

Error code: 400
Invalid 'max_tokens': integer below minimum value. Expected a value >= 1, but got -1 instead.
type: invalid_request_error
param: max_tokens
code: integer_below_min_value

The case was reproduced using the OpenAI provider. Assign a positive value according to the limits of the selected model; -1 is not documented as a supported value.
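
For example, a valid request passes a positive integer (a sketch under the same OpenAI-compatible-endpoint assumption; names are placeholders):

from openai import OpenAI

client = OpenAI(base_url="https://your-endpoint/v1", api_key="YOUR_API_KEY")  # placeholders

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Hello"}],
    max_tokens=1024,  # must be an integer >= 1; -1 is rejected with a 400
)
print(response.choices[0].message.content)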

max_tokens is too large

The following error appears when executing an assistant:

Error connecting to the SAIA service cause: 400
max_tokens is too large: 12000. This model supports at most 4096 completion tokens, whereas you provided 12000

Check the max_tokens value supported by the model configured for your assistant; the selected max_tokens value is greater than the maximum that model supports.
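
One way to avoid both this error and the previous one is to clamp the requested value into the model's valid range before sending the request. The limits table below is illustrative only; verify the actual values in your provider's documentation:

# Illustrative completion-token limits; check your provider's docs.
COMPLETION_LIMITS = {
    "gpt-3.5-turbo": 4096,
    "gpt-4o": 16384,
}

def clamp_max_tokens(model: str, requested: int) -> int:
    # Keep max_tokens within [1, model limit] so the request is never
    # rejected as "below minimum value" or "too large".
    limit = COMPLETION_LIMITS.get(model, 4096)  # conservative fallback
    return max(1, min(requested, limit))

print(clamp_max_tokens("gpt-3.5-turbo", 12000))  # -> 4096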

The response was filtered due to the prompt triggering Azure OpenAI's content management policy

The following error appears when executing an assistant with a complex query using Azure OpenAI endpoints:

The response was filtered due to the prompt triggering Azure OpenAI's content management policy.
Please modify your prompt and retry.
To learn more about our content filtering policies please read our documentation
https://go.microsoft.com/fwlink/?linkid=2198766

Check the deployment made for the associated endpoint and make sure the content filter is set to the empty (default) value; do not use the Microsoft.Default.v2 configuration.

Go to the Azure AI Foundry portal and locate the Deployments section. For each completion model (such as gpt-4o or gpt-4o-mini), use the Update Deployment option to set the Content Filter to "Default".
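
After updating the deployment, you can re-test whether the prompt still trips the filter. A minimal sketch using the openai Python package's Azure client; the endpoint, key, API version, and deployment name are placeholders:

import openai
from openai import AzureOpenAI

# Placeholders: fill in your resource endpoint, key, and deployment name.
client = AzureOpenAI(
    azure_endpoint="https://your-resource.openai.azure.com",
    api_key="YOUR_API_KEY",
    api_version="2024-06-01",
)

try:
    response = client.chat.completions.create(
        model="gpt-4o",  # the deployment name, not the base model name
        messages=[{"role": "user", "content": "The query that was filtered"}],
    )
    print(response.choices[0].message.content)
except openai.BadRequestError as e:
    # Azure returns HTTP 400 when the content management policy triggers.
    print("Still filtered:", e)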

Service: BedrockRuntime, Status Code: 403, Request ID: GUID

The following error appears when using a model in AWS Bedrock:

You don't have access to the model with the specified model ID
Received Model Group=awsbedrock/modelname

Make sure you have access to the model modelname; follow these steps to enable it:

Add or remove access to Amazon Bedrock foundation models
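
Access itself is granted from the Model access page in the AWS console, but you can at least confirm that the model ID exists in your region before invoking it. A sketch using boto3; the region and model ID are placeholders:

import boto3

# Placeholder region; Bedrock model availability varies per region.
bedrock = boto3.client("bedrock", region_name="us-east-1")

# List the foundation models visible in this region and check the ID.
summaries = bedrock.list_foundation_models()["modelSummaries"]
model_ids = {m["modelId"] for m in summaries}
print("anthropic.claude-3-sonnet-20240229-v1:0" in model_ids)  # placeholder ID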

Empty Prompt for Anthropic Models

When creating a Chat Assistant and selecting an Anthropic model (for example, anthropic/claude-3-7-sonnet-latest), the Prompt field is mandatory and cannot be left empty. If you try to configure the assistant without any content in this field, you will encounter an error similar to the following:

{"error":{"message":"litellm.BadRequestError: AnthropicException - {\"type\":\"error\",\"error\":{\"type\":\"invalid_request_error\",\"message\":\"system: text content blocks must be non-empty\"}}. Received Model Group=anthropic/claude-3-7-sonnet-latest\nAvailable Model Group Fallbacks=None","type":null,"param":null,"code":"400"}}

Make sure that the Prompt field contains at least one valid text content block. Review the assistant's configuration and fill in the prompt with appropriate information to avoid this error.
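
For instance, when calling such a model through an OpenAI-compatible endpoint (as the litellm error above suggests), always send a non-empty system message; this sketch uses placeholder endpoint and key values:

from openai import OpenAI

client = OpenAI(base_url="https://your-endpoint/v1", api_key="YOUR_API_KEY")  # placeholders

response = client.chat.completions.create(
    model="anthropic/claude-3-7-sonnet-latest",
    messages=[
        # Anthropic rejects empty system text blocks, so the system
        # message (the assistant's Prompt) must contain actual text.
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Hello"},
    ],
)
print(response.choices[0].message.content)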

Last update: March 2025 | © GeneXus. All rights reserved.