Google upgrades Gemini Deep Research with a new Max tier: MCP integration connects internal corporate databases, and native charts enable analyst-grade due diligence

Google Announces Major Upgrade to Gemini Deep Research, Launches Two New Agents: Deep Research and Deep Research Max, Integrates the Latest Gemini 3.1 Pro Model, and Connects to Financial Data Platforms or Internal Company Data via the MCP Protocol.
(Background: OpenAI Unlocks Deep Research: Paid users can query 10 times per month; Microsoft releases a multimodal AI agent called Magma)
(Additional background: OpenAI launches the “ChatGPT Agent”! Combines Operator and Deep Research: excels at ticket booking, ordering delivery, and writing presentations—everything handled)

Table of Contents

  • What is Max: Thinking longer for deeper answers
  • MCP support: Evolving from searching the web to “searching any database”
  • Three major feature breakthroughs: Charts, collaborative planning, real-time streaming
  • AI Agents surpass the “search assistant” threshold

At around 9 p.m. last night, Google announced a major upgrade to Gemini Deep Research and rolled out two agents: Deep Research (speed-first) and Deep Research Max (quality-first). Both fully integrate Gemini 3.1 Pro and, for the first time, are available in public preview through the Gemini API's paid tier.

What is Max: Thinking longer for deeper answers

The core difference of Deep Research Max lies in “extended test-time compute.” The agent doesn’t just run once and submit; it repeatedly reasons, searches, and revises—like a never-sleeping research assistant—until it believes the report quality has reached the target before producing the output.
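The "keep working until the report is good enough" behavior can be sketched as a simple loop. This is a minimal illustration of extended test-time compute, not Google's implementation; `draft_report`, `score_quality`, and `revise` are hypothetical stand-ins for the underlying model calls.

```python
def deep_research_max(question, draft_report, score_quality, revise,
                      target=0.9, max_rounds=8):
    """Illustrative test-time-compute loop: keep reasoning, searching,
    and revising until the report scores above a quality target."""
    report = draft_report(question)
    for _ in range(max_rounds):
        if score_quality(report) >= target:
            break
        report = revise(report)  # another round of search + reasoning
    return report

# Toy stand-ins: each revision consults one more "source".
draft = lambda q: {"question": q, "sources": 1}
score = lambda r: r["sources"] / 5          # quality grows with sources
revise = lambda r: {**r, "sources": r["sources"] + 1}

result = deep_research_max("Acme Corp due diligence", draft, score, revise)
print(result["sources"])  # loop stops once quality reaches the target
```

The trade-off is exactly the one the article describes: more rounds mean more latency and cost, in exchange for depth.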

Google’s official statement says Max has delivered a “leapfrog” improvement in industry-standard retrieval and reasoning capabilities. Compared with last December’s preview version, the number of sources consulted has increased dramatically, enabling it to catch key differences the previous version ignored, and when weighing conflicting evidence, it proactively cites authoritative sources such as SEC filings and peer-reviewed journals.

Users can schedule a run overnight; by the time the analyst team arrives at the office in the morning, the complete due diligence report is already waiting in their inbox. Speed isn't the focus—depth is.

By contrast, the standard Deep Research version emphasizes a substantial reduction in latency and cost, replacing the December preview version as the default choice for interactive scenarios (when users need instant Q&A and don’t require the kind of time-consuming deep digging that Max provides).

MCP support: Evolving from searching the web to “searching any database”

This Deep Research upgrade also provides native support for MCP (Model Context Protocol). In the past, agents could only retrieve publicly available web information. Now, through MCP, they can seamlessly connect to company-customized data sources and professional data streams.

The practical meaning is: finance teams can connect internal ERP systems and private APIs from market data vendors through an MCP server, and Deep Research can then, within a single research workflow, query public web data, Bloomberg terminal data, and their own databases in parallel—without needing manual tool switching.
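The shape of that connection can be sketched as a tiny tool registry with JSON in and JSON out. This is only MCP-flavored pseudocode for illustration (the real Model Context Protocol runs JSON-RPC over stdio or HTTP through an MCP server and client SDK), and `erp_lookup` / `web_search` are hypothetical data sources.

```python
import json

# Illustrative MCP-style tool registry: each tool is a named function
# the agent can call with JSON arguments.
TOOLS = {}

def tool(name):
    def register(fn):
        TOOLS[name] = fn
        return fn
    return register

@tool("erp_lookup")          # hypothetical internal ERP data source
def erp_lookup(customer_id):
    return {"customer_id": customer_id, "revenue_usd": 1_200_000}

@tool("web_search")          # hypothetical public web source
def web_search(query):
    return {"query": query, "hits": 3}

def handle(request_json):
    """Dispatch one tool call, JSON in / JSON out, as an MCP server would."""
    req = json.loads(request_json)
    return json.dumps({"result": TOOLS[req["tool"]](**req["args"])})

# One research workflow can mix private and public sources:
print(handle('{"tool": "erp_lookup", "args": {"customer_id": "C42"}}'))
print(handle('{"tool": "web_search", "args": {"query": "Acme filings"}}'))
```

The point of the protocol is that the agent sees both tools through the same interface, so no manual switching is needed mid-workflow.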

Google also announced partnerships with FactSet, S&P Global, and PitchBook. The three firms are jointly designing MCP servers that let customers integrate financial and market data from these platforms directly into Deep Research workflows. For investment banks, private equity firms, and research organizations, the significance of this bridging is self-evident.

On the tool side, users can enable Google Search, remote MCP, URL Context, Code Execution, and File Search at the same time; they can also completely turn off internet access so the agent operates only within custom databases. This point is especially critical for enterprise customers that have concerns about data leakage.
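Conceptually, the air-gapped mode is one switch that gates every internet-facing tool while leaving internal ones on. The field names below are hypothetical, not the Gemini API's actual schema; this only illustrates the toggle described above.

```python
def make_tool_config(internet=True):
    """Illustrative agent tool configuration (hypothetical field names)."""
    return {
        "remote_mcp": True,          # internal/custom data via MCP servers
        "file_search": True,
        "code_execution": True,
        # Internet-facing tools are gated on a single switch:
        "google_search": internet,
        "url_context": internet,
    }

# Enterprise "no data leaves the building" mode:
air_gapped = make_tool_config(internet=False)
print(air_gapped["google_search"], air_gapped["remote_mcp"])
```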

Three major feature breakthroughs: Charts, collaborative planning, real-time streaming

First is native charts and infographics. This is the first time on the Gemini API: Deep Research no longer outputs only text. It can directly generate HTML charts or a Nano Banana infographic, upgrading research reports from plain text into visual analytical files.
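To make the "HTML charts" idea concrete, here is a toy renderer that turns a data series into an inline-HTML bar chart. It is purely illustrative of the kind of visual artifact a report could embed, not Google's chart output.

```python
def html_bar_chart(data, width=300):
    """Render a minimal HTML bar chart from {label: value} pairs."""
    peak = max(data.values())
    rows = []
    for label, value in data.items():
        bar = int(width * value / peak)   # scale bars to the largest value
        rows.append(f'<div><span>{label}</span>'
                    f'<div style="width:{bar}px;background:#4285f4">'
                    f'{value}</div></div>')
    return "<div class='chart'>" + "".join(rows) + "</div>"

html = html_bar_chart({"2023": 40, "2024": 65, "2025": 90})
print(html)
```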

Second is collaborative planning. Before executing research, the agent first generates a research plan. Users can review, guide, and modify this plan, and then have the agent carry out the work. This makes control over the scope of the investigation more fine-grained; it’s no longer a black box of “ask one question, wait for a report,” but a human–AI co-defined research framework.
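The plan-then-execute handshake can be sketched in a few lines: the agent proposes a plan as data, the user edits it, and only the edited plan is executed. All function names here are hypothetical stand-ins.

```python
def make_plan(question):
    # Hypothetical draft plan the agent proposes before doing any work.
    return ["scope sources", "gather filings", "compare competitors"]

def run_research(plan):
    # Stand-in for execution: each step produces a result.
    return [f"done: {step}" for step in plan]

plan = make_plan("Acme Corp due diligence")
plan.remove("compare competitors")       # user trims the scope...
plan.append("check litigation history")  # ...and adds a step
results = run_research(plan)
print(results)
```

Because the plan is an editable artifact rather than hidden state, the scope of the investigation is negotiated before any compute is spent.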

Third is real-time streaming. The system tracks the agent’s intermediate reasoning steps. With a live thought summary, users can see what the agent is doing while they’re waiting. Text and images are generated and sent back as they’re produced, greatly reducing the uncertainty of long waits.
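Streamed progress is naturally modeled as a generator that yields intermediate "thought" events before the final report, rather than returning one blob at the end. This is a generic sketch of the pattern, not the Gemini API's event schema.

```python
def research_stream(steps):
    """Yield intermediate thought-summary events as they are produced,
    then the final report event."""
    for i, step in enumerate(steps, 1):
        yield {"type": "thought", "step": i, "summary": step}
    yield {"type": "report", "text": "final report"}

events = list(research_stream(["planning searches",
                               "reading SEC filings",
                               "reconciling conflicting figures"]))
for ev in events:
    print(ev["type"], ev.get("summary", ev.get("text")))
```

A client consuming such a stream can show the user what the agent is doing step by step, which is exactly what removes the uncertainty of a long wait.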

In terms of multimodal grounding, Deep Research can now take PDFs, CSVs, images, audio, and video as inputs. Integrating cross-format data no longer requires manual preprocessing.

AI Agents surpass the “search assistant” threshold

The emergence of Deep Research Max, to a certain extent, signals that AI agents have entered a new level of maturity in enterprise research workflows. In the past, when we talked about AI-assisted research, it mostly stayed at the level of “help me summarize this document” or “help me search a few articles.” In essence, it was an automated search assistant.

But once an agent can repeatedly reason, independently weigh conflicting evidence, cite SEC filing documents, and connect to private financial databases through MCP, what it does is already much closer to the due diligence work an entry-level analyst performs.

Of course, “closer” doesn’t mean “replacing.” How to verify an agent’s reasoning logic, how to manage its access permissions to private data, and how to use AI-generated research conclusions in regulatory environments—these are still questions enterprises are exploring. But the signal Google is sending today is very clear: technically, this path is already open.

Deep Research and Deep Research Max are now available through Gemini API paid plans in public preview, and the Google Cloud version is expected to follow. For the full announcement, please refer to the official Google Blog explanation.
