Google Launches Gemma 4 Open-Source Model: How to Try It

Google unveiled the latest version of its open AI model, Gemma 4, on Thursday. Crucially, Gemma 4 is a fully open-source model licensed under Apache 2.0, which is uncommon among frontier models.

Google says Gemma 4 can run on “billions of Android devices” as well as select laptop GPUs, allowing users to run open models locally on their own hardware. “This open-source license provides a foundation for complete developer flexibility and digital sovereignty; granting you complete control over your data, infrastructure, and models,” a Google blog post states. “It enables you to build freely and securely across any environment, whether on-premises or in the cloud.”

Most people are familiar with Google’s popular Gemini AI model, thanks to the ubiquitous AI chatbot that has been integrated into many of the company’s products.

Gemma is a large language model (LLM) built using the same technology and research as Google DeepMind’s Gemini 3.

Google describes Gemma 4 as its “most capable” open AI model ever.

Gemma versus Gemini?

So how does Gemma differ from Gemini?

Gemini is Google’s exclusive subscription AI product, as well as the name of the company’s multimodal AI models. Gemini has been integrated into almost all of Google’s core products, including Google Search, Gmail, Google Docs, and Google Cloud.

Gemma 4, however, is an open AI model, which means the model itself — its weights — is made available to users. Gemma models can run on a user’s local hardware, even without an internet connection, and Gemma 4 can be downloaded and run on any capable device for free. These open AI models offer a more private experience, as none of the chats, uploaded files, or replies are shared with any other parties.

Developers can use open AI models such as Gemma 4 to integrate AI into their own applications without incurring ongoing subscription fees.

What is Gemma 4?

Gemma 4 adds enhanced capabilities to Google’s open AI model family.

According to Google’s announcement, Gemma 4 now supports complex reasoning, including multi-step planning and deep logic. With Gemma 4, Google claims to have made “significant improvements in math and instruction-following benchmarks that require it”.

Gemma 4 now supports the capabilities needed for agentic workflows and brings AI coding assistance on-device. It can also process audio and video, recognizing speech and interpreting visuals such as charts.

Gemma 4 comes in four sizes, based on the number of parameters that power the model: two billion, four billion, 26 billion, and 31 billion.

According to Hugging Face, these open-weight models come in both pre-trained and instruction-tuned varieties, giving developers even more flexibility.

Google says the AI model has been trained on data in over 140 languages and has a context window of up to 256,000 tokens. (The context window for the smaller E2B and E4B models is 128,000 tokens, however.)

Gemma 4 is currently available and open source.

When it comes to AI models, open does not necessarily mean open source.

Previous Gemma iterations were open-weight (meaning the trained model weights were publicly available), but they were still subject to Google’s own terms, even though users could download the models to their devices. Users could modify the local LLM, but they had to follow Google’s rules for its use and redistribution.

With Gemma 4, Google has made the model fully open source.

Google is publishing Gemma 4 under the Apache 2.0 license, which is widely used for open source software.

This license allows anybody to download and modify Gemma 4 and use it for any purpose, personal or commercial. Gemma 4 can also be redistributed without paying a royalty. The only requirements under the Apache 2.0 license are attribution and that a copy of the license be distributed with the AI model.

Are you looking for Gemma 4? It’s easy to try.

Gemma 4 is available in Google AI Studio and can be obtained via third-party websites such as Hugging Face, Kaggle, or Ollama.
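For local use, Ollama is one of the quickest routes. A minimal sketch is below — note that the model tag is an assumption, since the article does not name the exact tag; browse Ollama’s model library to confirm the real name and pick a size that fits your hardware:

```shell
# Pull a Gemma model and chat with it locally via Ollama.
# "gemma4" is an assumed tag — check ollama.com/library for the
# actual Gemma 4 listing and available parameter sizes.
ollama pull gemma4
ollama run gemma4 "Explain the difference between open-weight and open-source AI models."
```

The two-billion- and four-billion-parameter variants are the most likely fits for a typical laptop; the larger sizes will need considerably more memory.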

Priyanka Patil
