What is Gemma?
Gemma is a family of lightweight, state-of-the-art open models built from the same research and technology used to create the Gemini models.
How can I use Gemma?
You can use Gemma for a variety of natural language processing tasks, such as question answering, summarization, and text generation. The models' relatively small sizes make it possible to deploy them in resource-constrained environments such as a laptop or desktop, and their performance and framework support are described in the features below.
Features of Gemma
Responsible by design
Gemma models incorporate comprehensive safety measures, using curated datasets and rigorous tuning to deliver responsible, trustworthy AI solutions.
Unmatched performance at size
Gemma models achieve exceptional benchmark results at their 2B and 7B sizes, even outperforming some larger open models.
Framework flexible
With Keras 3.0, Gemma is compatible with JAX, TensorFlow, and PyTorch, so you can choose a framework, and switch between them, depending on your task.
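As a minimal sketch of this framework flexibility: Keras 3.0 reads its backend from the KERAS_BACKEND environment variable, so the same Gemma modeling code can run on JAX, TensorFlow, or PyTorch. The function below is a hypothetical helper, and it assumes keras-nlp is installed and Kaggle credentials are configured so the preset weights can be downloaded.

```python
import os

# Keras 3 selects its backend from KERAS_BACKEND, which must be set
# before keras is imported. Swap "jax" for "tensorflow" or "torch";
# the Gemma modeling code below is unchanged either way.
os.environ["KERAS_BACKEND"] = "jax"


def load_and_generate(prompt: str) -> str:
    # Requires keras-nlp and Kaggle credentials; "gemma_2b_en" is the
    # 2B pretrained checkpoint preset name in KerasNLP.
    import keras_nlp

    gemma = keras_nlp.models.GemmaCausalLM.from_preset("gemma_2b_en")
    return gemma.generate(prompt, max_length=64)
```

Note that calling load_and_generate downloads the 2B checkpoint (several gigabytes) on first use, so this sketch only defines the function.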
Variants of Gemma
Gemma
Gemma models are lightweight, text-to-text, decoder-only large language models, trained on a massive dataset of text, code, and mathematical content for a variety of natural language processing tasks.
CodeGemma
CodeGemma brings powerful code completion and generation capabilities in sizes fit for your local computer.
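For code completion in the middle of a file, CodeGemma uses a fill-in-the-middle (FIM) prompt format built from special tokens. The helper below is a hypothetical sketch of that format, following the published CodeGemma prompt conventions: the model generates the code that belongs between the prefix and the suffix, stopping at the <|file_separator|> token.

```python
def make_fim_prompt(prefix: str, suffix: str) -> str:
    # The code before the cursor goes after <|fim_prefix|>, the code
    # after the cursor goes after <|fim_suffix|>, and the model's
    # completion follows <|fim_middle|>.
    return f"<|fim_prefix|>{prefix}<|fim_suffix|>{suffix}<|fim_middle|>"


prompt = make_fim_prompt("def fibonacci(n):\n    ", "\nprint(fibonacci(10))")
```

Feeding this prompt to a CodeGemma checkpoint asks it to complete the body of the function at the cursor position.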
PaliGemma
PaliGemma is an open vision-language model that is designed for class-leading fine-tune performance on a wide range of vision-language tasks.
RecurrentGemma
RecurrentGemma is a technically distinct model that leverages recurrent neural networks and local attention to improve memory efficiency.
Quick-start guides for developers
You can find quick-start guides on Kaggle, Google Cloud, and Colab.