
List of supported models via the /chat API

For each module, the list below shows the provider, the supported model names, function calling support (where noted), and the supported environments (Beta, qa, Production). A minimal request sketch follows the list.
saia.models.openai (OpenAI)
Models (Beta, qa, Production):
  • openai/gpt-4o
  • openai/gpt-4o-mini
  • openai/gpt-4o-2024-11-20
  • openai/o1 (1)
Models (environment not listed):
  • openai/o1-preview (1)
  • openai/o1-mini (1)
  • openai/chatgpt-4o-latest
Function calling support: openai/* (any OpenAI model)

saia.models.googlevertexai (Google VertexAI)
Models (Beta, qa, Production):
  • vertex_ai/gemini-2.0-flash-exp
  • vertex_ai/gemini-1.0-pro
  • vertex_ai/gemini-1.5-pro
  • vertex_ai/gemini-1.5-flash
  • vertex_ai/claude-3-5-sonnet-20240620
  • vertex_ai/claude-3-sonnet-20240229
  • vertex_ai/claude-3-opus-20240229
  • vertex_ai/claude-3-haiku-20240307
  • vertex_ai/gemini-1.5-pro-002
  • vertex_ai/gemini-1.5-flash-002
  • vertex_ai/claude-3-5-sonnet-v2-20241022
  • vertex_ai/claude-3-5-haiku-20241022
  • vertex_ai/meta.llama-3.1-405b-instruct-maas
Models (Beta):
  • vertex_ai/meta.llama-3.2-90b-vision-instruct-maas
  • vertex_ai/gemini-2.0-flash-exp

saia.models.azure (Azure OpenAI)
Models (Beta, qa, Production):
  • azure/gpt-35-turbo-16k
  • azure/gpt-4
  • azure/gpt-4o

saia.models.anthropic (Anthropic)
Models (Beta, qa, Production):
  • anthropic/claude-3-sonnet-20240229
  • anthropic/claude-3-opus-20240229
  • anthropic/claude-3-haiku-20240307
  • anthropic/claude-3-5-sonnet-20240620
  • anthropic/claude-3-5-sonnet-20241022
  • anthropic/claude-3-5-haiku-20241022

saia.models.awsbedrock (AWS Bedrock)
Models (Beta, qa, Production):
  • awsbedrock/anthropic.claude-3-haiku
  • awsbedrock/anthropic.claude-3-sonnet
  • awsbedrock/anthropic.claude-3.5-sonnet
  • awsbedrock/anthropic.claude-3-opus
  • awsbedrock/meta.llama3-8b
  • awsbedrock/meta.llama3-70b
  • awsbedrock/amazon.titan-lite-v1
  • awsbedrock/amazon.titan-express-v1
  • awsbedrock/cohere.command
  • awsbedrock/meta.llama3-1-70b
  • awsbedrock/meta.llama3-1-405b
  • awsbedrock/anthropic.claude-3.5-sonnet-v2
  • awsbedrock/anthropic.claude-3.5-haiku
  • awsbedrock/amazon.nova-pro-v1:0
  • awsbedrock/amazon.nova-lite-v1:0
  • awsbedrock/amazon.nova-micro-v1:0
Models (Beta):
  • awsbedrock/meta.llama3-2-1b
  • awsbedrock/meta.llama3-2-3b
  • awsbedrock/meta.llama3-2-11b
  • awsbedrock/meta.llama3-2-90b

saia.models.gemini (Gemini)
Models (Beta):
  • gemini/gemini-1.5-flash-latest
  • gemini/gemini-1.5-flash-exp-0827
  • gemini/gemini-1.5-flash-8b-exp-0827
  • gemini/gemini-1.5-pro-latest
  • gemini/gemini-1.5-pro-exp-0801
  • gemini/gemini-1.5-pro-exp-0827
  • gemini/gemini-exp-1114
  • gemini/gemini-exp-1121
  • gemini/gemini-exp-1206
  • gemini/gemini-2.0-flash-exp
Models (environment not listed):
  • gemini/gemini-2.0-flash-thinking-exp-1219

saia.models.groq (Groq)
Models (Beta, qa):
  • groq/llama-3.3-70b-versatile
  • groq/llama-3.2-3b-preview
  • groq/llama-3.2-1b-preview
  • groq/llama-3.1-8b-instant
  • groq/mixtral-8x7b-32768
  • groq/llama-3.1-70b-versatile (deprecated)
Models (Beta):
  • groq/llama-3.2-11b-vision-preview
  • groq/llama-3.2-90b-vision-preview

saia.models.nvidia (NVidia)
Models (Beta, qa):
  • nvidia/nvidia.nemotron-mini-4b-instruct
  • nvidia/meta.llama-3.1-8b-instruct
  • nvidia/meta.llama-3.1-70b-instruct
  • nvidia/meta.llama-3.1-405b-instruct
  • nvidia/meta.llama-3.2-3b-instruct
Models (environment not listed):
  • nvidia/meta.llama-3.2-1b-instruct
Models (Beta):
  • nvidia/llama-3.1-nemotron-70b-instruct

saia.models.sambanova (SambaNova)
Models (Beta):
  • sambanova/Meta-Llama-3.1-8B-Instruct
  • sambanova/Meta-Llama-3.1-70B-Instruct
  • sambanova/Llama-3.2-11B-Vision-Instruct
  • sambanova/Llama-3.2-90B-Vision-Instruct
  • sambanova/Meta-Llama-3.3-70B-Instruct
  • sambanova/Qwen2.5-72B-Instruct
  • sambanova/Qwen2.5-Coder-32B-Instruct
Models (environment not listed):
  • sambanova/Meta-Llama-3.1-405B-Instruct
  • sambanova/Meta-Llama-3.2-1B-Instruct
  • sambanova/Meta-Llama-3.2-3B-Instruct

saia.models.cerebras (Cerebras)
Models (Beta):
  • cerebras/llama3.1-8b
  • cerebras/llama3.1-70b
  • cerebras/llama-3.3-70b

saia.models.cohere (Cohere)
Models (Beta):
  • command-r
  • command-r-08-2024
  • cohere/command-r-plus
  • cohere/command-r-plus-08-2024
  • cohere/command-r7b-12-2024

(1) To use these models, the temperature must be set to 1; see Reasoning models.
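
The model identifiers above are the values passed in the request body of the /chat API. The snippet below is a minimal Python sketch, assuming a bearer-token header, an OpenAI-style messages payload, and placeholder environment variables for the base URL and API token; the exact endpoint path, header names, and body schema should be confirmed against the Globant Enterprise AI API reference. It also sets the temperature to 1 for a reasoning model, as note (1) requires.

```python
import os

import requests

# Assumed configuration: these environment variable names and the default URL are
# placeholders, not official settings; use the values from your installation.
BASE_URL = os.environ.get("GEAI_BASE_URL", "https://api.example.com")
API_TOKEN = os.environ["GEAI_API_TOKEN"]


def chat(model: str, user_message: str, temperature: float = 0.7) -> dict:
    """Send a single-turn request to the /chat API (payload shape is an assumption)."""
    response = requests.post(
        f"{BASE_URL}/chat",
        headers={"Authorization": f"Bearer {API_TOKEN}"},
        json={
            # Model names use the provider prefix shown in the list above,
            # e.g. "openai/gpt-4o" or "vertex_ai/gemini-1.5-pro".
            "model": model,
            "messages": [{"role": "user", "content": user_message}],
            "temperature": temperature,
        },
        timeout=60,
    )
    response.raise_for_status()
    return response.json()


# Reasoning models (note 1) require temperature = 1.
print(chat("openai/o1-mini", "Summarize the supported providers.", temperature=1))
```

In this sketch, switching providers only means changing the model string, for example to awsbedrock/anthropic.claude-3.5-sonnet or gemini/gemini-1.5-pro-latest.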

Globant Enterprise AI LLM consumption limits in SaaS mode

When using Globant Enterprise AI in SaaS mode, you have a monthly limit of 11,000 requests for the following LLMs:

  • OpenAI:
    • GPT 4o
    • o1-preview
    • o1-mini
  • Google Vertex AI:
    • Gemini Pro 1.0
    • Gemini Pro 1.5
  • AWS Bedrock:
    • Claude 3.5 Sonnet (v1 and v2)
    • Claude 3 Opus
    • Llama 3.1 Instruct (405b)

For any other LLMs or models, pricing and usage limits will be evaluated on a case-by-case basis.
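
The 11,000-request ceiling is enforced by the platform, but it can be useful to track consumption on the client side as well. The sketch below is purely illustrative: the limit value comes from this page, while the counter file, the reset-per-calendar-month policy, and the warning threshold are assumptions, not part of the product.

```python
import json
from datetime import date
from pathlib import Path

MONTHLY_LIMIT = 11_000                            # documented SaaS limit for the listed LLMs
COUNTER_FILE = Path("geai_request_counter.json")  # hypothetical local state file


def register_request() -> int:
    """Increment a local per-calendar-month counter and warn when nearing the SaaS limit."""
    month_key = date.today().strftime("%Y-%m")
    state = {"month": month_key, "count": 0}
    if COUNTER_FILE.exists():
        saved = json.loads(COUNTER_FILE.read_text())
        if saved.get("month") == month_key:       # counter resets automatically on a new month
            state = saved
    state["count"] += 1
    COUNTER_FILE.write_text(json.dumps(state))
    if state["count"] >= 0.9 * MONTHLY_LIMIT:
        print(f"Warning: {state['count']} of {MONTHLY_LIMIT} monthly requests used.")
    return state["count"]
```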

See Also

LLM Troubleshooting

Last update: December 2024