Explore the key innovations in this roadmap, which outline the improvements and breakthroughs on the way.
- Station: Discover and Scale AI Across the Organization
- The Station is the new Globant Enterprise AI module designed for consumer users to easily explore, adopt, and execute AI solutions. Serving as the centralized entry point for company-wide AI enablement, it empowers users to discover and interact with Agents.
- This first release includes:
- Search, filter, and discovery of available AI solutions
- Detailed solution pages to understand capabilities and use cases
- Solution sharing via public links
- Navigation sidebar, including direct access to the Lab
- Import from Lab to add new solutions to the Station
- Redirection to Workspace for execution
- Ratings and reviews to gather user feedback
- Help and contact support options to ensure user success
- This release lays the foundation for scalable AI adoption across the enterprise.
- Workspace/Playground
- Shareable Chat Links
- A new feature will allow users to share chat sessions with others, including the full conversation history with an agent, through a public link that supports anonymous access.
- Universal File Upload Compatibility
- Users will be able to upload previously unsupported file formats (such as .doc, .docx, .odt, .rtf, .ppt, and .pptx) directly in the chat interface. Even if the selected LLM does not natively support these formats, the platform will automatically convert the files (e.g., to PDF or plain text) at the server level before processing. This ensures broader file compatibility across both multimodal and non-multimodal models, streamlining interactions and improving the user experience; a minimal sketch of this kind of server-side conversion follows below.
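The exact server-side conversion pipeline is not documented, so the following is only a minimal sketch of the idea under stated assumptions: a hypothetical `prepare_for_llm` helper, `python-docx` for the .docx case, and a hard-coded list of formats treated as natively supported.

```python
# Minimal sketch of server-side normalization of unsupported formats before a
# file reaches the LLM. Helper names are hypothetical, not the platform's API.
from pathlib import Path

from docx import Document  # pip install python-docx

# Assumption for this sketch: formats most models can ingest directly.
NATIVELY_SUPPORTED = {".pdf", ".txt", ".md"}

def docx_to_text(path: Path) -> str:
    """Extract plain text from a .docx file, one paragraph per line."""
    return "\n".join(p.text for p in Document(str(path)).paragraphs)

def prepare_for_llm(path: Path) -> Path:
    """Return the file unchanged if the model can read it natively,
    otherwise a plain-text rendition produced on the server."""
    suffix = path.suffix.lower()
    if suffix in NATIVELY_SUPPORTED:
        return path
    if suffix == ".docx":
        converted = path.with_suffix(".txt")
        converted.write_text(docx_to_text(path), encoding="utf-8")
        return converted
    raise ValueError(f"No converter registered for {suffix}")

# The chat backend would call prepare_for_llm() before attaching the upload,
# so non-multimodal models still receive readable content.
```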
- Lab Improvements
- Options to export/import agents
- Tools
- Per-User Consent for GDrive Tool Access
- A new consent mechanism will be introduced for tools that integrate with Google Drive. Before an agent can access or manipulate a user's GDrive data, the user must explicitly grant permission. This per-user consent model ensures secure, transparent usage of third-party tools, aligning with data privacy best practices and organizational compliance requirements; a sketch of such a consent gate follows below.
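The consent flow itself has not been published, so the snippet below is only an illustrative sketch of a per-user consent gate built on Google's standard OAuth 2.0 flow (`google-auth-oauthlib`); `consent_store` and `require_gdrive_consent` are hypothetical names, not product APIs.

```python
# Illustrative per-user consent gate for a Google Drive tool. The agent calls
# require_gdrive_consent() before touching any Drive data, so nothing happens
# without an explicit grant from that specific user.
from google_auth_oauthlib.flow import Flow  # pip install google-auth-oauthlib

GDRIVE_SCOPES = ["https://www.googleapis.com/auth/drive.readonly"]

consent_store: dict[str, object] = {}  # user_id -> stored OAuth credentials (hypothetical)

def require_gdrive_consent(user_id: str):
    """Return stored Drive credentials, or raise with the consent URL the UI
    should present so the user can grant access explicitly."""
    creds = consent_store.get(user_id)
    if creds is not None:
        return creds
    flow = Flow.from_client_secrets_file(
        "client_secrets.json",  # placeholder OAuth client configuration
        scopes=GDRIVE_SCOPES,
        redirect_uri="https://example.invalid/oauth/callback",  # placeholder
    )
    auth_url, _state = flow.authorization_url(access_type="offline", prompt="consent")
    raise PermissionError(f"Consent required; send the user to: {auth_url}")
```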
- Console Improvements
- Prompt Files option: allows you to upload files at the organization and project levels so that the Chat Assistant you define can use them to answer questions.
- LLMs
- New OpenAI models, already available through the Responses API (a usage sketch follows this list):
- o3-pro: Part of OpenAI’s “o” series, this model is trained with reinforcement learning to perform complex reasoning and deliver more accurate answers. o3-pro leverages increased compute to “think before it answers,” consistently providing higher-quality responses.
- codex-mini-latest: This is a fine-tuned version of o4-mini, specifically optimized for use in Codex CLI.
- New Anthropic Web Search Tool: the web search tool gives Claude direct access to real-time web content, enabling it to answer questions using up-to-date information beyond its training cutoff. Claude automatically cites sources from search results as part of its response. More details on usage and supported models: How to use LLMs with built-in web search tools via API (also shown in the sketch after this list).
- Claude 4: Anthropic’s latest generation of models, featuring Claude Opus 4 for advanced reasoning and coding, and Claude Sonnet 4 for high-performance, efficient task execution, now available in our Production environment.
- New Providers Coming to Production: xAI (Grok models), Cohere, and Mistral AI.
- Integration of Azure AI Foundry: Azure AI Foundry is being introduced as a beta LLM provider to leverage its unified platform for building, customizing, and deploying AI applications. This integration provides access to a diverse catalog of over 11,000 models from providers such as OpenAI, xAI, Microsoft, DeepSeek, Meta, Hugging Face, and Cohere, along with robust tools for responsible AI development and seamless integration with the Azure ecosystem.
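For orientation, here is a minimal sketch of exercising two of the capabilities above directly against the provider SDKs rather than through Globant Enterprise AI; the model identifiers and the web-search tool version are the providers' current public values and should be treated as illustrative.

```python
# Direct provider calls illustrating the Responses API models and the
# Anthropic web search tool. Model IDs shown here may change over time.
import anthropic                  # pip install anthropic
from openai import OpenAI         # pip install openai

# OpenAI Responses API: o3-pro and codex-mini-latest are served via /v1/responses.
openai_client = OpenAI()
resp = openai_client.responses.create(
    model="o3-pro",
    input="Outline a migration plan from REST polling to webhooks.",
)
print(resp.output_text)

# Anthropic web search tool: Claude decides when to search and cites sources.
anthropic_client = anthropic.Anthropic()
msg = anthropic_client.messages.create(
    model="claude-sonnet-4-20250514",
    max_tokens=1024,
    tools=[{"type": "web_search_20250305", "name": "web_search", "max_uses": 3}],
    messages=[{"role": "user", "content": "What changed in the latest Kubernetes release?"}],
)
print(msg.content)
```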
- RAG
Work in progress
This section lists features and improvements still in development, with no confirmed release date.
- A2A protocol support
- Workspace/Playground
- VoiceAgents: Real-Time Conversational AI with Speech
- VoiceAgents introduces real-time, voice-based interaction with AI agents, enabling natural, two-way conversations through speech. Powered by advanced audio transcription, natural language understanding, and text-to-speech synthesis, this feature lets users speak directly with AI agents and receive immediate spoken responses, bringing human-AI interaction to a new level of fluidity and accessibility; a generic sketch of this speech loop follows below.
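VoiceAgents' internals are not described beyond the pipeline above, so the following is only a generic sketch of that speech-in/speech-out loop, using OpenAI's public transcription, chat, and text-to-speech endpoints purely as stand-ins for whatever components the feature will actually use.

```python
# Generic speech -> LLM -> speech turn, illustrating the pipeline described
# above. The OpenAI endpoints are stand-ins, not the VoiceAgents implementation.
from openai import OpenAI  # pip install openai

client = OpenAI()

def voice_turn(audio_path: str, reply_path: str = "reply.mp3") -> str:
    # 1. Transcribe the user's speech to text.
    with open(audio_path, "rb") as audio_file:
        transcript = client.audio.transcriptions.create(
            model="whisper-1", file=audio_file
        )

    # 2. Let the agent reason over the transcribed request.
    chat = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": transcript.text}],
    )
    answer = chat.choices[0].message.content

    # 3. Synthesize the answer back to speech for an immediate spoken reply.
    speech = client.audio.speech.create(model="tts-1", voice="alloy", input=answer)
    with open(reply_path, "wb") as out:
        out.write(speech.content)
    return answer
```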
- Lab Improvements
- Options to export/import processes and flows.
- RAG Assistant and API Assistant will migrate from the Console to the Lab:
- This will give Assistants access to advanced configuration, custom tools, and a flexible development workflow in the Lab.
- Chat with Database & Data Analytics Integration: The current assistants will be fully migrated into the Lab interface, allowing for seamless usage within agent workflows.
- Audit Logs: The Lab will begin tracking user actions related to entity creation, updates, and deletions—strengthening traceability and accountability.
- Entity Version Management: Users will be able to view the version history of any Lab entity and restore previous versions when needed.
- New configuration option in the image generation tool
- Console Improvements
- Quota Alerts: Email notifications will be sent when a project or organization reaches its soft limit, helping teams manage usage proactively.
- Model Configuration Controls: Organizations and projects will gain the ability to define which LLMs are enabled, improving governance and cost management.
- Evaluation module backoffice.