Google Gemini: Everything you need to know about the new generative AI platform

Google aims to create an impact with Gemini, its leading set of generative AI models, applications, and offerings. Curious about what Gemini entails? Wondering about its applications and how it fares against rivals? To simplify staying informed about the latest Gemini advancements, we’ve crafted this helpful guide. We’ll continually update it with fresh Gemini models, features, and insights into Google’s Gemini roadmap.


What is Gemini?

Gemini represents Google’s long-promised, cutting-edge GenAI model family, crafted through collaboration between Google’s AI research labs, DeepMind, and Google Research. This innovative suite is available in three distinct variations:

  • Gemini Ultra, the most performant Gemini model.
  • Gemini Pro, a “lite” Gemini model.
  • Gemini Nano, a smaller “distilled” model that runs on mobile devices like the Pixel 8 Pro.


Gemini models were meticulously trained to be inherently multimodal, meaning they can effectively utilize and process not just textual data but also audio, images, and videos. This comprehensive training involved pretraining and fine-tuning across a diverse range of data sources, including audiovisual content, codebases, and multilingual text.


This unique capability distinguishes Gemini from models like Google’s LaMDA, which was exclusively trained on textual data. Unlike LaMDA, which is limited to understanding and generating text-based content such as essays or email drafts, Gemini models possess the versatility to comprehend and produce a wide array of multimedia content.


What’s the difference between the Gemini apps and Gemini models?

    Image Credits: Google


True to form for a company with a history of branding challenges, Google didn’t initially make clear that the Gemini models are a separate entity from the Gemini apps on the web and mobile (previously known as Bard). These apps essentially serve as gateways to specific Gemini models, akin to a client for Google’s GenAI. It’s worth noting that the Gemini apps and models also operate independently from Imagen 2, Google’s text-to-image model, which is accessible within certain development tools and environments provided by the company.


What can Gemini do?

Due to their multimodal nature, Gemini models theoretically possess the capacity to undertake a variety of tasks spanning transcribing speech, captioning images and videos, and even generating artwork. While some of these functionalities have yet to materialize into product offerings (more details on that later), Google has made ambitious promises regarding their eventual integration and expansion. However, given Google’s track record, it’s understandable if skepticism lingers.


Google’s initial Bard launch fell short of expectations, and more recently, a video purportedly showcasing Gemini’s capabilities was revealed to be heavily doctored, representing more of an aspiration than reality. Nonetheless, assuming Google’s claims are mostly truthful, here’s a glimpse into the potential capabilities of the various tiers of Gemini once they reach full fruition:


Gemini Ultra

Google asserts that Gemini Ultra, leveraging its multimodal capabilities, can assist with tasks like solving physics homework, providing step-by-step solutions on worksheets, and identifying potential errors in pre-filled answers. Moreover, it can aid in locating relevant scientific papers, extracting pertinent information, and updating charts with newly generated formulas based on the latest data.


While Gemini Ultra technically supports image generation, this feature has not yet made it into the productized version of the model. The complexity of the mechanism might explain the delay: unlike ChatGPT, which hands image prompts off to a separate model (DALL-E 3) as an intermediary step, Gemini produces images natively, without any intermediate process.

Gemini Ultra is accessible via an API through Vertex AI, Google’s comprehensive AI developer platform, and AI Studio, its web-based tool for developers. However, access to Gemini Ultra, branded as Gemini Advanced, comes with a subscription fee attached to the Google One AI Premium Plan, priced at $20 per month.


The AI Premium Plan also facilitates integration with your broader Google Workspace account, encompassing Gmail, Docs, Sheets, and Google Meet recordings. This connectivity proves beneficial for tasks such as email summarization or note-taking during video calls with Gemini’s assistance.


Gemini Pro

Google positions Gemini Pro as an advancement over LaMDA, emphasizing its superior reasoning, planning, and comprehension capabilities. An independent study conducted by researchers from Carnegie Mellon and BerriAI corroborated this assertion, finding that the initial iteration of Gemini Pro surpassed OpenAI’s GPT-3.5 in managing longer and more intricate reasoning chains. However, like other large language models, Gemini Pro encountered challenges with mathematics problems involving multiple digits, along with instances of flawed reasoning and evident errors reported by users.


In response to these findings, Google introduced remedies with the release of Gemini 1.5 Pro. This updated version, designed as a seamless replacement, boasts several enhancements compared to its predecessor, most notably in its data processing capacity. Gemini 1.5 Pro can now handle approximately 700,000 words or 30,000 lines of code, a significant increase from the previous version’s capabilities. Furthermore, being multimodal, Gemini 1.5 Pro can analyze up to 11 hours of audio or an hour of video across various languages, albeit at a slower pace.

Gemini 1.5 Pro debuted in public preview on Vertex AI in April, offering users a glimpse into its expanded capabilities. Additionally, Google introduced Gemini Pro Vision, an additional endpoint capable of processing both text and imagery, including photos and videos, and generating text outputs akin to OpenAI’s GPT-4 with Vision model.


Using Gemini Pro in Vertex AI. Image Credits: Gemini


Within Vertex AI, developers have the flexibility to tailor Gemini Pro to specific contexts and applications through a fine-tuning or “grounding” process. Moreover, Gemini Pro can seamlessly integrate with external third-party APIs to execute specialized actions as needed.


In AI Studio, structured workflows facilitate the creation of chat prompts using Gemini Pro. Developers enjoy access to both Gemini Pro and the Gemini Pro Vision endpoints, empowering them to adjust the model temperature to regulate the creative breadth of the output. Additionally, developers can furnish examples to provide tone and style instructions while fine-tuning safety settings to align with their requirements.
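As a rough illustration of the knobs described above, the settings might look like the following when calling the model programmatically. This is a hedged sketch: the parameter and category names are assumptions modeled on common GenAI SDK conventions, not a verified listing of the Gemini API.

```python
# Illustrative generation settings for a Gemini Pro request.
# Names (temperature, top_p, top_k, max_output_tokens) are assumptions
# based on common generative-AI SDK conventions, not a verified API.
generation_config = {
    "temperature": 0.9,         # higher values widen the creative range of output
    "top_p": 0.95,              # nucleus-sampling cutoff
    "top_k": 40,                # sample only from the 40 most likely tokens
    "max_output_tokens": 1024,  # cap on response length
}

# Safety settings can likewise be tightened or relaxed per category;
# these category names and thresholds are illustrative placeholders.
safety_settings = {
    "harassment": "block_medium_and_above",
    "dangerous_content": "block_only_high",
}
```

A lower temperature (closer to 0) makes output more deterministic, which suits tasks like extraction; higher values suit brainstorming or stylistic writing.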


Gemini Nano

Gemini Nano represents a scaled-down version of the Gemini Pro and Ultra models, optimized for efficiency to operate directly on select smartphones rather than relying on remote servers. Presently, it fuels several functionalities on devices such as the Pixel 8 Pro, Pixel 8, and Samsung Galaxy S24, including the Summarize feature in Recorder and Smart Reply in Gboard.

In the Recorder app, which facilitates audio recording and transcription, Gemini Nano generates summaries of recorded conversations, interviews, and presentations directly on the device. Users can access these summaries offline, ensuring privacy as no data is transmitted from their phone during the process.

Moreover, Gemini Nano integrates with Gboard, Google’s keyboard application, to power Smart Reply, assisting users in suggesting responses during messaging conversations. Initially compatible with WhatsApp, this feature is slated to expand to other messaging apps over time.

In the Google Messages app on supported devices, Nano enables Magic Compose, a feature capable of crafting messages in various styles such as “excited,” “formal,” and “lyrical.”


Is Gemini better than OpenAI’s GPT-4?

Google has consistently highlighted Gemini’s prowess in benchmark tests, asserting that Gemini Ultra surpasses current state-of-the-art results on “30 of the 32 widely used academic benchmarks in large language model research and development.” Additionally, the company suggests that Gemini 1.5 Pro outperforms Gemini Ultra in tasks such as content summarization, brainstorming, and writing in certain scenarios. However, it’s expected that this dynamic may shift with the introduction of the next Ultra model.

Despite Google’s claims, questions persist about whether benchmark scores truly reflect a superior model. Some observers note that the scores Google highlights are only marginally better than those achieved by corresponding models from OpenAI. Furthermore, early feedback from users and academics indicates shortcomings with the older version of Gemini Pro, including inaccuracies in basic facts, challenges with translations, and subpar coding suggestions.


How much does Gemini cost?

Gemini 1.5 Pro is currently free to use within the Gemini apps, AI Studio, and Vertex AI during its preview phase. Once it transitions out of preview in Vertex, however, input will be billed at $0.0025 per character and output at $0.00005 per character. Vertex customers are billed per 1,000 characters, equating to approximately 140 to 250 words. For models like Gemini Pro Vision, charges apply per image at a rate of $0.0025.

To illustrate, summarizing a 500-word article totaling 2,000 characters with Gemini 1.5 Pro would cost $5, while generating an article of similar length would cost $0.10.
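The arithmetic behind those figures can be sketched in a few lines, using the per-character rates quoted above (preview-era figures from this article, which may change):

```python
# Cost estimate for Gemini 1.5 Pro at the rates quoted in the article:
# $0.0025 per input character, $0.00005 per output character.
INPUT_RATE = 0.0025    # dollars per input character
OUTPUT_RATE = 0.00005  # dollars per output character

def estimate_cost(input_chars: int, output_chars: int) -> float:
    """Return the estimated bill in dollars for a single request."""
    return input_chars * INPUT_RATE + output_chars * OUTPUT_RATE

# Summarizing a 2,000-character article (input-heavy): roughly $5
print(estimate_cost(2000, 0))
# Generating a 2,000-character article (output-heavy): roughly $0.10
print(estimate_cost(0, 2000))
```

Note the 50x gap between input and output rates at these figures, which is why summarization (lots of input, little output) and generation (the reverse) cost such different amounts for the same article length.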

As for Ultra pricing, it has yet to be disclosed.


Where can you try Gemini?

Gemini Pro


The Gemini apps offer the most straightforward access to Gemini Pro, where both Pro and Ultra models are actively responding to inquiries across multiple languages. Additionally, both Gemini Pro and Ultra are available for preview in Vertex AI through an API. While the API is currently free to use within specified limits, it supports various regions, including Europe, and incorporates features such as chat functionality and filtering.


Furthermore, developers can leverage Gemini Pro and Ultra within AI Studio. This platform enables the iteration of prompts and the creation of Gemini-based chatbots, providing developers with API keys for integration into their applications or the option to export the code to more comprehensive Integrated Development Environments (IDEs).
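As a minimal sketch of what that integration can look like, here is a hedged Python example assuming the `google-generativeai` SDK and an AI Studio API key supplied via the `GOOGLE_API_KEY` environment variable; the model name, prompt wording, and helper function are illustrative, not taken from official documentation.

```python
import os

def build_prompt(article_text: str) -> str:
    """Wrap raw text in a summarization instruction (illustrative helper)."""
    return f"Summarize the following article in three sentences:\n\n{article_text}"

# The network call only runs when a real key is present, so the sketch stays
# self-contained. Assumes the google-generativeai SDK's GenerativeModel /
# generate_content interface.
if os.environ.get("GOOGLE_API_KEY"):
    import google.generativeai as genai

    genai.configure(api_key=os.environ["GOOGLE_API_KEY"])
    model = genai.GenerativeModel("gemini-pro")  # illustrative model name
    response = model.generate_content(build_prompt("…article text…"))
    print(response.text)
```

In practice, the same prompt can be prototyped interactively in AI Studio first, then exported as code like the above for use in an application.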

Google’s suite of AI-powered assistance tools for code completion and generation, known as Code Assist (formerly Duet AI for Developers), now incorporates Gemini models. This integration allows developers to execute “large-scale” changes across codebases, including tasks like updating cross-file dependencies and reviewing extensive code segments.


Moreover, Gemini models have been integrated into Google’s development tools for Chrome and the Firebase mobile development platform, as well as its database creation and management tools. Additionally, Google has introduced new security products supported by Gemini, such as Gemini in Threat Intelligence. This component of Google’s Mandiant cybersecurity platform facilitates the analysis of potentially malicious code on a large scale, enabling users to conduct natural language searches for ongoing threats or indicators of compromise.


Gemini Nano

Gemini Nano is currently available on the Pixel 8 Pro, Pixel 8, and Samsung Galaxy S24 smartphones, with plans to expand its availability to other devices in the future. Developers keen on integrating the model into their Android applications can register for an early glimpse into its capabilities.


Is Gemini coming to the iPhone?

There are reports suggesting that Apple and Google are in discussions to potentially incorporate Gemini for various features in an upcoming iOS update later this year. However, nothing is set in stone yet. Apple is also reportedly engaging in discussions with OpenAI and has been exploring the development of its own GenAI capabilities.
