Explore the key innovations in this roadmap, which highlight the improvements and breakthroughs on the way.
- Station: Discover and Scale AI Across the Organization
- The Station is the new Globant Enterprise AI module designed for consumer users to easily explore, adopt, and execute AI solutions. Serving as the centralized entry point for company-wide AI enablement, it empowers users to discover and interact with Agents.
- This first release includes:
- Search, filter, and discovery of available AI solutions
- Detailed solution pages to understand capabilities and use cases
- Solution sharing via public links
- Navigation sidebar, including direct access to the Lab
- Import from Lab to add new solutions to the Station
- Redirection to Workspace for execution
- Ratings and reviews to gather user feedback
- Help and contact support options to ensure user success
- This release lays the foundation for scalable AI adoption across the enterprise.
- Note: In this version, the Station will be available only for customer private cloud installations. In future releases, it will also be available in the SaaS environment.
- A2A (Agent-to-Agent) Protocol Support for Enhanced Integration and Extensibility
- Globant Enterprise AI now supports the A2A (Agent-to-Agent) protocol, enabling seamless integration of agents defined in other frameworks. With this new feature, users can import external agents and utilize them as Tools within agents created in the Lab. This powerful capability significantly enhances the integration and extensibility of GEAI, allowing organizations to leverage existing investments, connect diverse agent ecosystems, and build more sophisticated solutions by combining agents across platforms.
- All Agents Automatically Exposed via A2A Protocol
- All agents defined in the Lab are now automatically exposed through the A2A protocol, with no additional configuration required. Each agent is published with an A2A-compliant API, and its capabilities and skills are described in AgentCard format. The AgentCard is available at a dedicated endpoint, following the A2A standard.
- This enhancement allows third-party systems that support A2A to seamlessly discover and interact with GEAI agents. For more details on the A2A protocol and AgentCard specification, please refer to the official A2A documentation.
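As a rough illustration of how a third-party client might discover a GEAI agent, the sketch below builds the well-known AgentCard URL and reads basic fields from a parsed card. The A2A specification conventionally serves the card at the `/.well-known/agent.json` path; the sample card fields are illustrative, and the exact base URL layout depends on the deployment.

```python
from urllib.parse import urljoin

def agent_card_url(base_url: str) -> str:
    """Build the conventional well-known AgentCard URL for an agent's base URL."""
    return urljoin(base_url.rstrip("/") + "/", ".well-known/agent.json")

def list_skills(card: dict) -> list:
    """Extract skill names from a parsed AgentCard document."""
    return [skill["name"] for skill in card.get("skills", [])]

# A minimal AgentCard-shaped document (illustrative fields only).
sample_card = {
    "name": "invoice-reviewer",
    "description": "Reviews invoices for anomalies",
    "url": "https://example.com/agents/invoice-reviewer",
    "skills": [{"id": "review", "name": "Invoice review"}],
}
```

A real client would fetch `agent_card_url(...)` over HTTPS and parse the JSON response before calling the agent's A2A API.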
- Workspace/Playground
- Shareable Chat Links
- A new feature will allow users to share chat sessions with others, including the full conversation history with an agent. Recipients can view the shared session anonymously via a public link.
- Direct Chat Session Access via Unique Agent URL
- This feature allows users to initiate a new chat session with a specific agent directly through its own unique URL. This enhancement streamlines collaboration and accessibility by enabling users to easily share links to agents, making it faster and more convenient to start new interactions with them from anywhere.
- Universal File Upload Compatibility
- Users will be able to upload previously unsupported file formats—such as .doc, .docx, .odt, .rtf, .ppt, and .pptx—directly in the chat interface. Even if the selected LLM does not natively support these formats, the platform will automatically convert the files (e.g., to PDF or plain text) at the server level before processing. This enhancement ensures broader file compatibility across both multimodal and non-multimodal models, streamlining interactions and improving user experience.
- Lab Improvements
- Agent Export and Import Options
- We have introduced new export and import options in the Globant Enterprise AI Lab. These features allow users to easily share agent definitions and their associated tools with others, even across different projects. With these capabilities, users can export their agents and later import them into other projects, enabling seamless collaboration and reusability of agent configurations within the GEAI platform.
- Agent Execution Trace Debugging & Download
- A new feature is now available in Globant Enterprise AI Lab that enhances the agent testing experience. Users can now view detailed execution traces of agents in a dedicated debug tab while testing. Additionally, there is a new option to download the complete execution log for further analysis or record-keeping.
- New Agent Configuration Parameter: maxRuns
- A new configuration parameter called maxRuns is now available in the Agent UI. This setting defines the maximum number of autonomous iterations an agent can perform before returning control to the user. Each iteration corresponds to a single LLM call, and the default value is set to 5. This allows fine-tuning the level of agent autonomy based on the complexity and nature of the task. This enhancement provides greater control over agent behavior, helping balance automation with user oversight.
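The effect of maxRuns can be sketched as a bounded loop: each turn stands in for one LLM call, and control returns to the user either when the agent finishes or when the cap is reached. This is a minimal sketch, not the Lab's actual runtime.

```python
def run_agent(step, max_runs: int = 5):
    """step(i) simulates one LLM call; returns (done, result)."""
    result = None
    for i in range(max_runs):
        done, result = step(i)
        if done:
            return result, i + 1        # agent finished early
    return result, max_runs             # cap reached; control returns to the user

# An agent that would need 8 iterations is cut off at the default of 5.
calls = []
def step(i):
    calls.append(i)
    return (i == 7, f"iteration {i}")

result, used = run_agent(step)
```

Raising maxRuns grants the agent more autonomy for complex tasks; lowering it keeps the user in the loop more often.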
- Iris Meta Agent: Agents as Tools Support
- A new feature has been introduced to the Iris meta agent, enabling agents created by Iris to use other agents as tools. With this update, agents built with Iris can seamlessly integrate and leverage the capabilities of additional agents, significantly enhancing their functionality and enabling more complex workflows. This allows Iris-created agents to delegate tasks or access specialized skills from other agents, making them more versatile.
- Tools
- Per-User Consent for GDrive Tool Access
- A new consent mechanism will be introduced for tools that integrate with Google Drive. Before an agent can access or manipulate a user's GDrive data, the user must explicitly grant permission. This per-user consent model ensures secure, transparent usage of third-party tools, aligning with data privacy best practices and organizational compliance requirements.
- Expanded Model Support for the Create Image Tool
- The Create Image tool, which can be associated with agents in the Globant Enterprise AI Lab, now supports a wider range of image generation models. Users can now generate images using the following models:
- openai/gpt-image-1
- openai/dall-e-3
- vertex_ai/imagen-3.0-generate-001
- vertex_ai/imagen-3.0-fast-generate-001
- vertex_ai/imagen-3.0-generate-002
- xai/grok-2-image-1212
- This expanded support provides users with greater flexibility and more options for creating images tailored to their specific needs.
- New Public Tool: com.globant.geai.serpapi.google_search
- A new public tool, com.globant.geai.serpapi.google_search, has been added to Globant Enterprise AI Lab. This web search tool allows you to query across various Google engines, including Google, Google Maps, Google News, Google Images, Google Videos, and Google Scholar. You can specify which search engine to use in the agent guidelines or directly in the chat. By default, the standard Google engine is used. This tool expands the information retrieval capabilities of your agents, enabling more dynamic and context-aware responses.
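The engine selection described above might look like the sketch below when expressed as a tool request. SerpApi distinguishes engines via values such as `google_news` and `google_scholar`; the payload shape and parameter names the GEAI tool wrapper expects are assumptions here.

```python
# Engines corresponding to the Google surfaces listed above (illustrative ids).
SUPPORTED_ENGINES = {
    "google", "google_maps", "google_news",
    "google_images", "google_videos", "google_scholar",
}

def build_search_request(query: str, engine: str = "google") -> dict:
    """Build a hypothetical request for the google_search public tool."""
    if engine not in SUPPORTED_ENGINES:
        raise ValueError(f"unknown engine: {engine}")
    return {
        "tool": "com.globant.geai.serpapi.google_search",
        "params": {"q": query, "engine": engine},
    }
```

In practice the engine would be named in the agent guidelines or in the chat, and the standard Google engine is used when none is specified.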
- New Public Tools: Firecrawl Web Scraper and Web Search
- Two new public tools from Firecrawl have been added to Globant Enterprise AI Lab:
- com.globant.geai.firecrawl.web_scraper
- This tool allows agents to fetch content from any web page. It returns page content in multiple formats, including markdown, HTML, links, and screenshot. You can specify one or more formats to retrieve (e.g., markdown, links, screenshot). Additionally, this tool supports fetching PDF documents from the web.
- com.globant.geai.firecrawl.web_search
- This tool enables agents to search web pages and view short snippets from the results. It can be used in combination with the web scraper tool to extract the full content of selected web pages.
- These additions provide agents with enhanced web browsing and data extraction capabilities, broadening the range of information accessible within GEAI.
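A typical combination of the two tools is: search for candidate pages, then scrape a chosen result for full content. The sketch below shows that chaining as hypothetical tool calls; the call shape and parameter names are illustrative assumptions, not the tools' actual schemas.

```python
def plan_research(query: str, formats=("markdown", "links")) -> list:
    """Return a two-step plan: web search followed by scraping a result."""
    search_call = {
        "tool": "com.globant.geai.firecrawl.web_search",
        "params": {"query": query},
    }
    scrape_call = {
        "tool": "com.globant.geai.firecrawl.web_scraper",
        # The URL would come from the search results at run time.
        "params": {"url": "<top result from search>", "formats": list(formats)},
    }
    return [search_call, scrape_call]
```

An agent following this plan first sees short snippets, then retrieves the full page content in the requested formats (markdown, HTML, links, or screenshot).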
- LLM Usage Limit Alerts and Notifications
- A new feature has been added to the Globant Enterprise AI Lab and Backoffice - Console to help you manage your LLM usage more effectively. You will now receive warning notifications when your LLM consumption exceeds the configurable alert threshold (soft limit), which can be set per project or as a general cap at the organization level. In addition, if a project or organization runs out of available balance to continue using LLMs, an error notification will be displayed. These alerts enhance visibility and control over LLM usage, helping users avoid unexpected interruptions.
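The alerting behavior described above reduces to a simple rule: a warning once consumption passes the configurable soft limit, and an error once no balance remains. A minimal sketch:

```python
def usage_status(consumed: float, soft_limit: float, balance: float) -> str:
    """Classify LLM usage for a project or organization."""
    if balance <= 0:
        return "error"      # no balance left to continue using LLMs
    if consumed >= soft_limit:
        return "warning"    # configurable alert threshold (soft limit) exceeded
    return "ok"
```

The soft limit can be set per project or as a general cap at the organization level, so the same check applies at both scopes.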
- Console Improvements
- Prompt Files option: allows you to upload files at the organization and project levels so that the Chat Assistant you define can use them to answer questions.
- LLMs
- New OpenAI models, already available through the Responses API and coming soon through the Chat API:
- o3-pro: Part of OpenAI’s “o” series, this model is trained with reinforcement learning to perform complex reasoning and deliver more accurate answers. o3-pro leverages increased compute to “think before it answers,” consistently providing higher-quality responses.
- codex-mini-latest: This is a fine-tuned version of o4-mini, specifically optimized for use in Codex CLI.
- New Anthropic – Web Search Tool: The web search tool gives Claude direct access to real-time web content, enabling it to answer questions using up-to-date information beyond its training cutoff. Claude automatically cites sources from search results as part of its response. More details on usage and supported models: How to use LLMs with built-in web search tools via API.
- Claude 4: Anthropic’s latest generation of models, featuring Claude Opus 4 for advanced reasoning and coding, and Claude Sonnet 4 for high-performance, efficient task execution, now available in our Production environment.
- New Providers Coming to Production: xAI (Grok models) and Cohere.
- Integration of Azure AI Foundry: Azure AI Foundry is being introduced as an LLM provider to leverage its unified platform for building, customizing, and deploying AI applications. This integration provides access to a diverse catalog of over 11,000 models from providers such as OpenAI, xAI, Microsoft, DeepSeek, Meta, Hugging Face, and Cohere, along with robust tools for responsible AI development and seamless integration with the Azure ecosystem.
- Imagen 4: The Imagen 4 family of models is now available for text-to-image generation through the Images API via Vertex AI. This integration brings Google’s advanced Imagen 4 models—including Standard, Ultra, and Fast variants—for high-quality, brand-consistent image creation with support for multiple languages.
- Model Lifecycle Updates:
- GPT-4.5 Preview Deprecation: Access to GPT-4.5 Preview via the API will end on July 14, 2025. To avoid disruption, this model is being migrated to GPT-4.1.
- Vertex AI Gemini 2.5 Updates: New GA endpoints for Gemini 2.5 Flash (gemini-2.5-flash) and Gemini 2.5 Pro (gemini-2.5-pro) are now available (effective June 17, 2025). Existing preview endpoints for Gemini 2.5 Flash and Pro will be migrated to these new GA endpoints.
- For more information, please refer to Deprecated Models.
- Fixed an incorrect 600-second timeout when calling assistants; calls now use the value configured under the provider's HttpTimeout parameter, which defaults to 120 seconds.
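The corrected fallback behavior can be sketched as a configuration lookup with a default, replacing the previous hard-coded 600-second value. The config dictionary here is illustrative.

```python
DEFAULT_HTTP_TIMEOUT = 120  # seconds, used when HttpTimeout is not configured

def resolve_timeout(provider_config: dict) -> int:
    """Return the HTTP timeout for assistant calls from provider config."""
    return int(provider_config.get("HttpTimeout", DEFAULT_HTTP_TIMEOUT))
```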
- Flows
- Slack Mentions Support for Flows
- Globant Enterprise AI now supports mentions for Flows within Slack. Users can add a Flow to a Slack channel and then invoke it directly by @-tagging the Flow. This enables seamless initiation and management of conversation threads with Flows straight from Slack. This integration streamlines collaboration and enhances productivity by allowing teams to interact with and trigger Flows without leaving their Slack workspace.
- Flows Export/Import Now Includes Agent Configuration
- The export and import functionality for Flows has been enhanced. When exporting or importing a Flow, the configuration of associated agents is now included as part of the process. This improvement ensures that Flows and their agent settings can be seamlessly transferred between projects or environments, making it easier to share, replicate, and maintain complete solutions.
- RAG
- New RAG Tool to be used from agents.
- New ingestion properties, also valid for the omni-parser API:
- New password parameter for processing PDF files with password protection.
- New chunkStrategy parameter to decide how to process tables and images (enabled by default).
- New chunkSize and chunkOverlap parameters to override the default assistant configuration.
- The Requests log section details the parameters used for the ingestion.
- New RAG Document API to better serve associated documents.
- New Multivalued Filter Operators when ingesting with multivalued metadata.
- New assistants defaults
- Embeddings configuration updated to use cache by default.
- LLM configuration updated from gpt-4o-mini to gpt-4.1-mini.
- Ingestion vLLM usage updated from openai/gpt-4o to openai/gpt-4.1-mini, with minor updates to the associated prompts.
- Fixed an issue when handling the threadId (conversation) from the Workspace.
- Fixed an issue where the plugins API did not return the StartPage section.
- Fixed a PayloadTooLargeError when using a prompt larger than 12k tokens.
- Performance improvements when processing embeddings associated with xlsx/csv files.
- Performance improvements when querying Pinecone Vector Store Provider.
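The new ingestion parameters listed above (password, chunkStrategy, chunkSize, chunkOverlap) can be sketched as an ingestion request builder. Field names follow the bullets, but the exact payload shape is an assumption for illustration.

```python
def build_ingestion_request(document, password=None,
                            chunk_size=None, chunk_overlap=None,
                            chunk_strategy=True):
    """Assemble a hypothetical ingestion request with the new parameters."""
    request = {"document": document, "chunkStrategy": chunk_strategy}
    if password is not None:
        request["password"] = password          # for password-protected PDFs
    if chunk_size is not None:
        request["chunkSize"] = chunk_size       # overrides assistant default
    if chunk_overlap is not None:
        request["chunkOverlap"] = chunk_overlap # overrides assistant default
    return request
```

Parameters left unset fall back to the assistant's default configuration, and the Requests log section records the values actually used.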
- Python SDK Updates and Enhancements
- The Python SDK has been updated with several new features, improvements, and changes to streamline development and agent management. These enhancements make the Python SDK more robust, user-friendly, and supportive of advanced agent development and management workflows.
- Added
- Save and Restore Chat Sessions: You can now save and restore chat sessions using JSON files, making it easier to maintain conversation history.
- Switch Agents in Chat GUI: The chat user interface now allows seamless switching between agents within an active session.
- API Status Command: The geai CLI tool now includes a status command to check the health of your GEAI instance's API.
- Reasoning Strategy in Agent Definition: Agent definitions now support specifying a reasoning strategy for more advanced customization.
- Changed
- Man Pages Installation: The script for installing man pages has been updated to support system-wide installation with the --system flag.
- Comprehensive Help in Man Pages: All help texts are now included in the man pages for the geai CLI tool.
- Simplified Lab Project Selection: The Lab no longer requires explicit project IDs; it now retrieves the project automatically using the API key and base URL provided.
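The save/restore capability listed under "Added" can be illustrated in plain Python using JSON files; the function names here are illustrative, not the SDK's actual API.

```python
import json
from pathlib import Path

def save_session(path: str, messages: list) -> None:
    """Persist a chat session's message history as a JSON file."""
    Path(path).write_text(json.dumps({"messages": messages}, indent=2))

def restore_session(path: str) -> list:
    """Load a previously saved chat session's message history."""
    return json.loads(Path(path).read_text())["messages"]
```

Round-tripping a session this way preserves the full conversation history between runs.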
Work in progress
This section lists features and improvements still in development, with no confirmed release date.
- Workspace/Playground
- VoiceAgents: Real-Time Conversational AI with Speech
- VoiceAgents introduces real-time, voice-based interactions with AI agents, enabling natural, two-way conversations through speech. Powered by advanced audio transcription, natural language understanding, and text-to-speech synthesis, this feature allows users to speak directly with AI agents and receive immediate spoken responses, bringing human-AI interaction to a whole new level of fluidity and accessibility.
- Lab Improvements
- Options to export/import processes and flows.
- RAG Assistant and API Assistant will migrate from the Console to The Lab:
- This gives Assistants access to advanced configuration, custom tools, and a flexible development workflow in The Lab.
- Chat with Database & Data Analytics Integration: The current assistants will be fully migrated into the Lab interface, allowing for seamless usage within agent workflows.
- Audit Logs: The Lab will begin tracking user actions related to entity creation, updates, and deletions—strengthening traceability and accountability.
- Entity Version Management: Users will be able to view the version history of any Lab entity and restore previous versions when needed.
- New configuration options for the image generation tool.
- Console Improvements:
- Quota Alerts: Email notifications will be sent when a project or organization reaches its soft limit, helping teams manage usage proactively.
- Model Configuration Controls: Organizations and projects will gain the ability to define which LLMs are enabled, improving governance and cost management.
- Evaluation module in the Backoffice.
- LLMs:
- New Provider Coming to Production: Mistral AI.