Google claims its Gemma AI models can run on laptops and desktops while surpassing the capabilities of larger models.
Google has released a pair of AI models called Gemma to help developers and researchers create their own models.
The tech giant said these models are built on the same research and technology used to create Gemini, Google’s flagship generative AI model that it created to challenge ChatGPT.
The Gemma models come in two sizes – Gemma 2B and Gemma 7B – each available in pre-trained and instruction-tuned variants. These models appear to be designed for flexibility, as Google claims they can run directly on a developer laptop or desktop computer.
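For illustration, below is a minimal sketch of how a developer might try one of these models on a local machine, assuming it is loaded through the Hugging Face Transformers library. The model identifier and setup are assumptions made for this example rather than details from Google's announcement, and access terms or exact identifiers may differ.

```python
# Minimal sketch: loading a Gemma-class model locally with Hugging Face Transformers.
# The model ID "google/gemma-2b" is an assumed identifier for the 2B pre-trained
# variant; it is not specified in the article.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "google/gemma-2b"  # assumption: 2B pre-trained variant

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)  # loads on CPU by default

prompt = "Explain what an instruction-tuned model is in one sentence."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```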
“Gemma surpasses significantly larger models on key benchmarks while adhering to our rigorous standards for safe and responsible outputs,” Google claimed in a blogpost.
Victor Botev, the CTO of Iris.ai, said the new models are a sign of the “fast-growing capabilities” of smaller language models.
“A model being able to run directly on a laptop, with equal capabilities to Llama 2, is an impressive feat and removes a huge adoption barrier for AI that many organisations possess,” Botev said.
“Bigger isn’t always better. Practical application is more important than massive parameter counts, especially when considering the huge costs involved with many large language models.”
Open and safe AI
The decision to offer AI models to developers and researchers bears a similarity to the strategy of Meta, which frequently shares its AI models and tools with researchers.
Meta has previously claimed that many of its AI models are open source. But certain groups have taken issue with this claim, as these models are made available for research purposes rather than under the terms of an open-source licence.
Google has refrained from describing the two Gemma models as open source, instead referring to them as “open models”. The company said the terms of use for its Gemma models permit responsible commercial usage and distribution for all organisations.
The company also said its latest AI models are designed with its AI principles “at the forefront”, to ensure they’re used safely. For example, Google said Gemma’s pre-trained models use automated techniques to “filter out certain personal information and other sensitive data from training sets”.
The company also released a Responsible Generative AI Toolkit alongside Gemma to help developers and researchers build “safe and responsible” AI applications.