
This log shows the most important fixes or features added to the platform.

Log

2025

March 10th

  • New LLMs:
    • GPT-4.5
    • Claude 3.7 Sonnet (Providers Anthropic, Vertex AI and AWS Bedrock)
    • Updates in Gemini 2.0 series:
      • vertex_ai/gemini-2.0-flash-lite-preview-02-05
      • vertex_ai/gemini-2.0-flash-thinking-exp-01-21
  • RAG Revision #6
    • Support for o3-mini, gpt-4.5-preview, claude-3-7-sonnet-20250219, the Gemini 2.* models, and the new DeepSeek and SambaNova LLM providers.
    • New pinecone provider available for embeddings and rerankers.
    • The CleanUp action message has been corrected to clearly specify that it will permanently delete the RAG Assistant files and update the information in the RDS.
    • Added usage element on every response.
    • Improvements when changing the LLM/Embeddings settings; all models and providers are normalized to be selected from standard combo-box items; use the override mechanism if you need other options.
    • Support for guardrails.
    • New documentAggregation property to decide how sources are grouped and returned.
  • It is possible to provide feedback on the response of the Chat with Data Assistant in the Frontend.
  • The new Evaluation APIs introduce key functionality through three interconnected APIs: the DataSet API, the Evaluation Plan API, and the Evaluation Result API. This version is primarily designed for users with a data science profile and is mainly accessed via APIs, complemented by a series of Jupyter notebooks that demonstrate their use. For a comprehensive guide, refer to How to Evaluate an AI Assistant and the EvaluationAPITutorial.ipynb notebook, which provide practical examples and code for working through the evaluation process (a hedged sketch also follows this list).
  • File attachment support in Flows (version 0.9).
  • Support for Full Story integration in the Workspace/Playground to generate user access statistics.
  • In the LLM API, the Response now includes a descriptions property for models that have descriptions in the supported languages, containing the description in each available language, such as Spanish, English, and Japanese.
  • Data Analyst Assistant 2.0 version presents important improvements, simplifying the interaction with the data by reducing the main components to just two: Dispatcher and Thinker. In addition, the metadata structure is automatically generated when loading the datasets, streamlining the setup process. For more information, see How to use Data Analyst Assistant.
  • The option to consult version-specific documentation is now available.
    Articles with versions show the option “Other document versions” in the header. Clicking on “Other document versions” brings up a menu that allows you to choose between the most recent version (“Latest”) or earlier versions (e.g. “2025-02 or prior”). If you select a version other than “Latest”, a message appears: “This is not the latest version of this document; to access the latest version, click here”. This message provides a direct link to the most up-to-date documentation.
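The Evaluation APIs above are meant to be driven from code. Below is a minimal, purely illustrative Python sketch of the three-step workflow; every base URL, endpoint path, and field name is an assumption for illustration only, so rely on the EvaluationAPITutorial.ipynb notebook for the real calls.

    import requests

    BASE_URL = "https://api.example.com"              # assumption: your instance URL
    HEADERS = {"Authorization": "Bearer <apitoken>"}  # assumption: bearer-token auth

    # 1. DataSet API: register a dataset of prompts and expected answers (hypothetical schema).
    dataset = requests.post(f"{BASE_URL}/evaluation/dataset", json={
        "name": "faq-regression",
        "rows": [{"input": "What is RAG?", "expected": "Retrieval Augmented Generation"}],
    }, headers=HEADERS, timeout=30).json()

    # 2. Evaluation Plan API: run the dataset against an assistant (hypothetical schema).
    plan = requests.post(f"{BASE_URL}/evaluation/plan", json={
        "datasetId": dataset["id"],
        "assistant": "my-rag-assistant",
    }, headers=HEADERS, timeout=30).json()

    # 3. Evaluation Result API: fetch the scored results for the plan.
    results = requests.get(f"{BASE_URL}/evaluation/result/{plan['id']}",
                           headers=HEADERS, timeout=30).json()
    print(results)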

Components Version Update

February 10th

  • New documentation with details about Supported Chart Types.
  • New Usage Limits API.
  • Flows
  • RAG Revision #5
  • New endpoint GET /accessControl/apitoken/validate returns information about the organization and project associated with the provided apitoken (see the sketch after this list).
  • New LLMs:
    • Already in production
      • OpenAI: o3-mini
    • Already in Beta
      • DeepSeek:
        • deepseek/deepseek-reasoner
        • deepseek/deepseek-chat
        • azure/deepseek-r1
        • nvidia/deepseek-ai-deepseek-r1
        • groq/deepseek-r1-distill-llama-70b
        • sambanova/DeepSeek-R1-Distill-Llama-70B
      • Updates in Gemini 2.0 series:
        • gemini-2.0-flash-thinking-exp-01-21 (Via Providers Gemini and Vertex AI)
        • gemini/gemini-2.0-flash-lite-preview
        • gemini/gemini-2.0-pro-exp
        • vertex_ai/gemini-2.0-flash-001
      • sambanova/Llama-3.1-Tulu-3-405B
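The new token-validation endpoint listed above can be exercised with a few lines of Python. The base URL and the way the apitoken is passed (here, a Bearer Authorization header) are assumptions, so check the Access Control API reference for the exact contract.

    import requests

    BASE_URL = "https://api.example.com"   # assumption: your instance URL
    API_TOKEN = "your-api-token"           # the apitoken to validate

    # Assumption: the token is supplied as a Bearer Authorization header.
    response = requests.get(
        f"{BASE_URL}/accessControl/apitoken/validate",
        headers={"Authorization": f"Bearer {API_TOKEN}"},
        timeout=30,
    )
    response.raise_for_status()

    # The endpoint returns the organization and project tied to the token.
    print(response.json())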

Components Version Update

January 13th

  • Internationalization: Backoffice and Frontend support for Japanese.
  • Invitations now include information about the organization and project in the subject.
  • New LLMs
    • Already in Production
      • OpenAI: o1 (2024-12-17 version)
    • Already in Beta
      • New Providers: Cohere
        • Cohere: Cohere-r
  • Guardrails configured by assistant.
  • Rerank API to semantically order a list of document chunks given a query (a hedged sketch follows this list).
  • New optional RAG Retrieve and Rerank adds an extra layer of precision to ensure that only the most relevant information reaches the model used in the generation step.
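As a rough illustration of the Rerank API mentioned above, the sketch below sends a query plus candidate chunks and prints them back ordered by relevance. The endpoint path, payload field names, and auth header are all assumptions; consult the Rerank API documentation for the actual schema.

    import requests

    BASE_URL = "https://api.example.com"                  # assumption: your instance URL
    HEADERS = {"Authorization": "Bearer your-api-token"}  # assumption

    payload = {                            # hypothetical request body
        "query": "What is the refund policy?",
        "documents": [
            "Refunds are processed within 5 business days.",
            "Our offices are open Monday through Friday.",
            "To request a refund, contact support with your order number.",
        ],
    }

    response = requests.post(f"{BASE_URL}/rerank", json=payload,
                             headers=HEADERS, timeout=30)
    response.raise_for_status()
    print(response.json())   # chunks with relevance scores, most relevant first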

2024

December 12th

  • Automatic Creation of Default Assistant
  • Organization Usage Limits: It is possible to set quota limits to control organization expenses or usage.
  • Chat with Data Assistant
    • Show details about the generated query in the Playground.
    • Support in Chat API to interact with Chat with Data Assistant.
  • Flows
    • Support for markdown when showing the response on the different channels supported by Flows (web, Slack, WhatsApp, and Teams).
    • New component for connecting flows to the agent overflow console (Human-in-the-loop) via B2Chat. Please read How to connect a Flow to B2Chat.
  • RAG
  • Data Analyst Assistant
    • Option to update metadata options.
    • New version by default in new Data Analyst assistants.
  • New LLMs
    • OpenAI: gpt-4o-2024-11-20
    • AWS Bedrock: Anthropic Claude 3.5 Haiku
    • Amazon Nova models (Micro, Lite, and Pro)
    • Llama 3.1 405B on Vertex AI
    • Beta:
      • Support for providers Cerebras, SambaNova and xAI (Grok models).
      • All new Gemini Experimental models.
  • Security

November 12th

  • Flows execution integrated into the Playground
  • New LLMs support
    • OpenAI: o1-preview and o1-mini
    • Claude Sonnet 3.5 v2 - Providers: Anthropic, Vertex AI, and AWS Bedrock
    • Llama 3.2 models - Providers: Vertex AI and AWS Bedrock
  • Chat with data assistants
    • Possibility to edit metadata, entities, and attribute descriptions. 
    • The Properties tab has been renamed to Settings along with the options that can be configured in it.
  • RAG
    • New returnSourceDocuments option to disable returning the documents section used to answer the question.
    • New step option to use the assistant as a retrieval tool.
    • Support for custom history in conversations using the chat_history variable.
  • Stand-alone Frontend based on the new Playground UI
    • Options to customize the Frontend to use the client logo, color palette, welcome message, and descriptions.
    • Feature to collect feedback (thumbs up/down) in each response.
    • Google Analytics support.
  • Data Analyst Assistant
    • Support to upload large CSV files.
  • In the Organization API, you can now set and manage usage limits on projects through the POST /project and GET /project/{id} endpoints (a hedged sketch follows this list).
  • Quota Limit now includes several improvements, such as highlighting the active quota in green and offering options to cancel active quotas.
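Here is a sketch of setting a project usage limit through the Organization API endpoints named above. The base URL, auth header, and the shape of the usageLimit object are assumptions, so check the Organization API reference for the real schema.

    import requests

    BASE_URL = "https://api.example.com"               # assumption: your instance URL
    HEADERS = {"Authorization": "Bearer <org-token>"}  # assumption: organization token

    # Assumption: the project payload accepts a usage-limit section on creation.
    created = requests.post(f"{BASE_URL}/project", json={
        "name": "demo-project",
        "usageLimit": {"type": "cost", "threshold": 100, "renewal": "monthly"},
    }, headers=HEADERS, timeout=30)
    created.raise_for_status()
    project_id = created.json().get("id")

    # Retrieve the project, including its configured limits.
    details = requests.get(f"{BASE_URL}/project/{project_id}",
                           headers=HEADERS, timeout=30)
    print(details.json())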

October 17th

  • Rebranding to Globant Enterprise AI
  • Improvements in RAG
  • Playground improvements
  • File management improvements
  • New LLMs supported
    • NVIDIA provider with new models supported
      • nvidia.nemotron-mini-4b-instruct
      • meta.llama-3.1-8b-instruct
      • meta.llama-3.1-70b-instruct
      • meta.llama-3.1-405b-instruct
      • meta.llama-3.2-3b-instruct
    • Groq provider supported
      • groq/llama-3.1-70b-versatile
      • groq/llama-3.2-11b-vision-preview
      • groq/llama-3.2-3b-preview
      • groq/llama-3.2-1b-preview
  • New embeddings models added
    • Vertex AI:
      • vertex_ai/textembedding-gecko
      • vertex_ai/text-embedding-004
      • vertex_ai/textembedding-gecko-multilingual
    • Nvidia:
      • nvidia/nvclip
      • nvidia/nv-embed-v1
      • nvidia/baai.bge-m3
      • nvidia/snowflake.arctic-embed-l
      • nvidia/nv-embedqa-mistral-7b-v2
      • nvidia/embed-qa-4
      • nvidia/nv-embedqa-e5-v5

September 25th

  • Support for file processing with prompt-based assistants. This enables many scenarios, such as uploading documents to summarize, extract, and check information. Also, depending on the model used by the assistant, it can process audio, video, or images.
  • Support for multi-modal LLMs allows processing docs, audio, video, and images with models like GPT-4o or Gemini Pro.
  • Chat with data assistants
    • The model used to build the queries was updated with GPT-4o, which improves the quality of the generated query.
    • Configure the query builder server by organization and project. This means you can connect with different DBMS from each project when building Chat with data assistants.
    • Show an explanation of how the query was built.
  • New Playground Interface design
    • New design
    • Upload documents from the front end to chat with them.
  • Flows builder
    • There are now two types of Flows, one more oriented to building a conversational UI and the other to building assistant flows.
      Access to these flows is only available through the Chat API or through the channels offered by Flows.
  • New models hosted in AWS Bedrock added:
    • Amazon Titan Express v1
    • Amazon Titan Lite v1
    • Anthropic Claude 3 Haiku
    • Anthropic Claude 3 Sonnet
    • Anthropic Claude 3.5 Sonnet
    • Cohere Command
    • Meta Llama 3 8B
    • Meta Llama 3 70B
  • It is now possible to provide clear guidance on the assistant's capabilities, allowing you to add information such as descriptions, features, and example prompts. This configuration can be done from the Backoffice, Start Page, or WelcomeData section of the Assistant API and RAG Assistants API endpoints.
  • RAG Assistants

August 9th

  • Support of new models
    • GPT-4o mini
  • RAG Assistants
    • New option called CLEANUP allows you to delete the documents associated with a RAG Assistant.
    • When creating a new assistant, the following defaults are updated:
  • Data Analyst Assistant
  • Considerations

July 4th

June 10th

  • Enterprise AI Proxy is deprecated. Use the Chat API instead (a hedged sketch follows this list).
  • Support for new LLMs 
    • OpenAI new model GPT-4o
    • Models in Google Vertex
      • Gemini 1.0 Pro
      • Gemini 1.5 Flash preview-0514
      • Gemini 1.5 Pro preview-0514
      • Claude 3 Haiku
      • Claude 3 Opus
      • Claude 3 Sonnet
  • RAG Improvements
    • New option to initialize RAG Assistant based on another when creating a new RAG Assistant.
    • New option to export document list in View Documents over a RAG Assistant.
    • Added filter options when browsing Documents.
    • SelfQuery RAG retriever partial support for a customized Prompt.
    • Support for text-embedding-004 in Google models to generate the embeddings.
  • Deprecated Assistant API endpoints.
    • /assistant/text/begin
    • /assistant/text
  • Support to deploy in Google Cloud Platform.
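For teams migrating off the deprecated Enterprise AI Proxy, the sketch below shows roughly what a Chat API call could look like from Python. The path, payload shape, and auth header are modeled on common chat-completion APIs and are assumptions here; the Chat API documentation defines the actual request format.

    import requests

    BASE_URL = "https://api.example.com"                  # assumption: your instance URL
    HEADERS = {"Authorization": "Bearer your-api-token"}  # assumption

    # Hypothetical chat-completion style request addressed to a project assistant.
    body = {
        "model": "my-assistant",           # assumption: assistant selector field
        "messages": [
            {"role": "user", "content": "Summarize our refund policy."},
        ],
    }

    response = requests.post(f"{BASE_URL}/chat", json=body,
                             headers=HEADERS, timeout=60)
    response.raise_for_status()
    print(response.json())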

May 8th

  • New Chat with Data Assistant.
  • New Ingestion SDK to automate document ingestion in RAG assistants.
  • New models hosted in NVIDIA platform supported. See Supported LLMs for more details.
  • New option to export information about projects and members available for the organization administrator.
  • New API to extend dataset for Data Analyst Assistant 1.0.
  • New filter by user email in Requests. 
  • Update default to use text-embedding-3-small OpenAI Embeddings for new RAG assistants.
  • Support for gemini-1.5-pro-preview-0409 model added.

April 3rd

March 11th

  • GeneXus Identity Provider is implemented, expanding the login options in the Backoffice of the production environment. This allows for login not only with Google but also with Apple or GeneXus Account.
  • It is possible to customize the icon for each assistant.

February 29th

  • Frontend improvements in UI/UX.
  • Option to get feedback from end users when interacting with RAG Assistant.
  • Gemini Pro LLM support.
  • New Dashboard with user metrics.
  • New Average Request Time metric added in the Project Dashboard.

January 8th

  • The option formerly known as 'Search Documents' has been improved and renamed to RAG Assistant (Retrieval Augmented Generation) to provide an optimized experience when searching and generating information.

Frontend

  • Feedback is provided during conversations with RAG Assistants, indicating where you are in the process.
  • 'Response streaming' support for RAG Assistants.
  • Settings are hidden when selecting an assistant, except when 'Chat with LLMs' is selected.

2023

December 19th

  • Fixed: Too Many Redirects when accessing Playground using a browser in Spanish language.

December 6th

  • New backoffice design.
  • Access to the Playground from the backoffice to chat with the assistants defined in the project.
  • Upload images for analysis with GPT-4 Vision.
  • Google Analytics support at the frontend.
  • Keep a conversation thread when chatting with documents.
  • An email notification is sent automatically when a new member is invited to join the organization or project.

November 28th

  • First version officially released!!

November 6th

  • The following OpenAI models are supported: GPT-4 Turbo (gpt-4-1106-preview), GPT-3.5 Turbo (gpt-3.5-turbo-1106), and GPT-4 Vision (gpt-4-vision-preview).

October 18th

October 11th

  • AI-Driven Load Balancing: The platform automatically manages the Load Balancing process when you work with generative AI providers, efficiently addressing the limits imposed by LLM platforms.

Last update: March 2025 | © GeneXus. All rights reserved.