Google unveils an updated preview of Gemini 2.5 Pro ahead of a forthcoming stable release.
Google has announced the release of the upgraded Gemini 2.5 Pro (I/O Edition) preview model, marking a significant leap in AI capabilities. The model is designed to handle complex reasoning tasks with state-of-the-art performance and high response accuracy.
One of the key features of Gemini 2.5 Pro is its multimodal input support, allowing it to process a wide variety of inputs including audio, images, video, text, and entire code repositories. This ability to understand and work across multiple data types makes it well suited to tackling difficult problems, analyzing large datasets, and handling complex coding, reasoning, and multimodal understanding tasks.
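To make the multimodal claim concrete, here is a minimal sketch of a single request that mixes an image with a text instruction. The part layout follows the Gemini REST API convention of base64-encoded `inlineData` parts alongside `text` parts, but the exact field names here should be treated as assumptions rather than verified documentation:

```python
import base64


def build_multimodal_request(instruction: str, image_bytes: bytes,
                             mime_type: str = "image/png") -> dict:
    """Assemble one user turn carrying both an image part and a text part."""
    return {
        "contents": [{
            "role": "user",
            "parts": [
                # binary payloads travel base64-encoded inside an inlineData part
                {"inlineData": {
                    "mimeType": mime_type,
                    "data": base64.b64encode(image_bytes).decode("ascii"),
                }},
                # the plain-text instruction rides along as a sibling part
                {"text": instruction},
            ],
        }]
    }


# Placeholder bytes stand in for a real image file read from disk.
request = build_multimodal_request("Describe this diagram.", b"\x89PNG-placeholder")
```

The same parts list can be extended with audio or video chunks, which is what lets one request span several modalities at once.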
The new model also boasts large input limits, accommodating inputs of up to 500 MB for efficient processing of extensive datasets. Developers can set and manage a "thinking budget," which controls how much reasoning the model performs before responding, letting them trade accuracy and response quality against latency and cost depending on the task.
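The thinking budget is set per request. A minimal sketch of the request shape, assuming the public REST API's `generationConfig.thinkingConfig.thinkingBudget` field (the field names are an assumption and should be checked against the current API reference):

```python
import json


def build_request(prompt: str, thinking_budget: int) -> dict:
    """Build a generateContent request body with a capped thinking budget.

    thinking_budget is expressed in tokens; a smaller budget means the
    model spends less effort reasoning before it answers, lowering cost.
    """
    return {
        "contents": [{"role": "user", "parts": [{"text": prompt}]}],
        "generationConfig": {
            # hypothetical illustration of the thinking-budget knob
            "thinkingConfig": {"thinkingBudget": thinking_budget},
        },
    }


body = build_request("Summarise the build steps in this repository.", 1024)
print(json.dumps(body, indent=2))
```

A simple query might get a small budget like the 1024 tokens above, while a hard refactoring task could be given a much larger one.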
Gemini 2.5 Pro is available via Google Cloud's Vertex AI platform, enabling integration into a wide range of applications and deployments. It is also accessible through the Gemini API, extending its availability to Google AI Studio, Vertex AI, and the Gemini app.
Google claims that the new version of Gemini 2.5 Pro is cheaper to run per token than many comparable thinking models, and thinking budgets give developers a further lever for keeping costs down: a token limit caps how long a Gemini thinking model can ponder a request. The preview model is available now, with the upgraded Gemini 2.5 Pro expected to become generally available in a couple of weeks.
Upon general availability, Gemini 2.5 Pro is expected to slot easily into existing developer workflows, since it is already exposed through the Gemini API, Google AI Studio, Vertex AI, and the Gemini app.
In terms of performance, the new version of Gemini 2.5 Pro has extended Google's lead on the LMArena leaderboard and claimed the top spot on the WebDevArena leaderboard, underscoring its strength on widely watched AI benchmarks.
The family of 2.5 models also introduces a variant called 2.5 Flash-Lite, optimized for the lowest latency and cost, serving as a cost-effective upgrade for workloads requiring speed and efficiency.
For users of the Gemini app, the new version of Gemini 2.5 Pro is available starting today. The model has also been tuned in response to developer feedback, so that it delivers distinctive, correctly formatted replies to user queries.
This latest iteration of Gemini 2.5 Pro represents a significant advance in Google's AI capabilities, combining multimodal input processing, strong reasoning, flexible thinking control, and scalable deployment options to serve complex, real-world applications. It marks a major step beyond prior Gemini models toward more powerful and versatile AI.