Gemma, Meet NIM: NVIDIA Teams Up With Google DeepMind to Drive Large Language Model Innovation

PaliGemma, the latest Google open model, debuts with NVIDIA NIM inference microservices support today.
by Dave Salvator

Large language models that power generative AI are seeing intense innovation, and models that handle multiple types of data, such as text, images and sound, are becoming increasingly common.

However, building and deploying these models remains challenging. Developers need a way to quickly experience and evaluate models to determine the best fit for their use case, and then optimize the chosen model so it runs cost-effectively while delivering the best possible performance.

To make it easier for developers to create AI-powered applications with world-class performance, NVIDIA and Google today announced three new collaborations at Google I/O ’24.

Gemma + NIM

Using TensorRT-LLM, NVIDIA is working with Google to optimize two new models Google introduced at the event: Gemma 2 and PaliGemma. Both models are built from the same research and technology used to create the Gemini models, and each is focused on a specific area:

  • Gemma 2 is the next generation of Gemma models for a broad range of use cases, featuring a brand-new architecture designed for breakthrough performance and efficiency.
  • PaliGemma is an open vision language model (VLM) inspired by PaLI-3. Built on open components including the SigLIP vision model and the Gemma language model, it is designed for vision-language tasks such as image and short video captioning, visual question answering, understanding text in images, object detection and object segmentation. PaliGemma is also designed for class-leading fine-tuning performance across these tasks and is supported by NVIDIA JAX-Toolbox.

Gemma 2 and PaliGemma will be offered with NVIDIA NIM inference microservices, part of the NVIDIA AI Enterprise software platform, which simplifies the deployment of AI models at scale. NIM support for the two new models is available from the API catalog, starting with PaliGemma today; both will soon be released as containers on NVIDIA NGC and GitHub.
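As a rough sketch of what trying the hosted PaliGemma NIM might look like, the snippet below posts an image-captioning request to the API catalog. The invocation URL, payload schema and response shape here are assumptions modeled on other catalog VLM endpoints, so check the PaliGemma page on build.nvidia.com for the exact details.

```python
# Sketch: query a hosted PaliGemma NIM endpoint from the NVIDIA API catalog.
# The URL and payload below are assumptions based on other catalog VLM
# endpoints -- verify them against the PaliGemma page on build.nvidia.com.
import base64
import os

import requests

API_KEY = os.environ["NVIDIA_API_KEY"]  # key generated on the API catalog
INVOKE_URL = "https://ai.api.nvidia.com/v1/vlm/google/paligemma"  # assumed path

with open("image.png", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode()

payload = {
    # Several catalog VLM endpoints accept the image inline as an <img> tag.
    "messages": [
        {
            "role": "user",
            "content": f'caption <img src="data:image/png;base64,{image_b64}" />',
        }
    ],
    "max_tokens": 256,
    "temperature": 0.2,
}

response = requests.post(
    INVOKE_URL,
    headers={"Authorization": f"Bearer {API_KEY}", "Accept": "application/json"},
    json=payload,
    timeout=60,
)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```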

Bringing Accelerated Data Analytics to Colab

Google also announced that RAPIDS cuDF, an open-source GPU dataframe library, is now supported by default on Google Colab, one of the most popular developer platforms for data scientists. It now takes just a few seconds for Google Colab’s 10 million monthly users to accelerate pandas-based Python workflows by up to 50x using NVIDIA L4 Tensor Core GPUs, with zero code changes.

With RAPIDS cuDF, developers using Google Colab can speed up both exploratory analysis and production data pipelines. While pandas is one of the world’s most popular data processing tools thanks to its intuitive API, applications often struggle as data sizes grow: with even 5-10GB of data, many simple operations can take minutes to finish on a CPU.

RAPIDS cuDF is designed to solve this problem by seamlessly accelerating pandas code on GPUs where applicable, and falling back to CPU pandas where it isn’t. With RAPIDS cuDF available by default on Colab, developers everywhere can take advantage of accelerated data analytics.
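As a quick illustration of the zero-code-change workflow, this is roughly what it looks like in a Colab notebook with a GPU runtime: loading the cudf.pandas extension before importing pandas is the documented entry point, while the parquet file and column names below are hypothetical placeholders.

```python
# Load the cudf.pandas extension first -- supported pandas operations then run
# on the GPU, with automatic fallback to CPU pandas for anything unsupported.
%load_ext cudf.pandas

import pandas as pd  # unchanged pandas code from here on

df = pd.read_parquet("transactions.parquet")  # hypothetical dataset
summary = (
    df.groupby("merchant_category")["amount"]  # hypothetical columns
      .agg(["count", "mean", "sum"])
      .sort_values("sum", ascending=False)
)
print(summary.head())
```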

Taking AI on the Road 

Google and NVIDIA also announced a collaboration on Firebase Genkit that enables app developers to easily integrate generative AI models, like the new family of Gemma models, into their web and mobile applications to deliver custom content, provide semantic search and answer questions. Developers can start their workflows locally on AI PCs with NVIDIA RTX GPUs before moving them seamlessly to Google Cloud infrastructure.

To make this even easier, developers can build apps with Genkit using JavaScript, a programming language mobile developers commonly use to build their apps.

The Innovation Beat Goes On

NVIDIA and Google Cloud are collaborating in multiple domains to propel AI forward. From the upcoming Grace Blackwell-powered DGX Cloud platform and JAX framework support, to bringing the NVIDIA NeMo framework to Google Kubernetes Engine, the companies’ full-stack partnership expands the possibilities of what customers can do with AI using NVIDIA technologies on Google Cloud.