Google Unveils Gemma AI Models Built on Gemini Technology

Google has announced the release of Gemma, a family of open large language models built with the same technology used to create the Gemini chatbot, Google's competitor to ChatGPT. The model is available in four variants: 2-billion and 7-billion parameter versions, each offered as a base model and as an instruction-tuned model optimized for dialogue systems. The 2-billion-parameter variants are suitable for consumer applications and can run on a CPU, while the 7-billion-parameter variants require more powerful hardware with a GPU or TPU.
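As a rough illustration, the smaller variant can be loaded on a CPU-only machine through the Hugging Face Transformers library. This is a minimal sketch, not an official recipe; the model identifier google/gemma-2b and the generation settings are assumptions, and access may additionally require accepting the Gemma license on Hugging Face.

```python
# Minimal sketch: running the 2B base model on a CPU
# (model id "google/gemma-2b" is assumed; a Hugging Face access token
# may be needed after accepting the model's license terms).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "google/gemma-2b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)  # loads on CPU by default

inputs = tokenizer("The Gemma models are", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```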

Among the areas of application of the Gemma model is the creation of dialogical systems and virtual assistants, the generation of text, the formation of answers to questions asked in natural language, a brief presentation and generalization of the contents, an explanation of the essence of concepts and terms, correction of errors in the text, assistance in learning languages . The creation of various types of text data is supported, including poems, code in programming languages, rewriting works in other words, the formation of letters according to the template. At the same time, the model has a relatively small size that allows you to use it on its equipment with limited resources, for example, on ordinary laptops and PCs.
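For dialogue-style tasks such as question answering or summarization, the instruction-tuned variant can be driven through a chat template. The sketch below assumes the google/gemma-2b-it checkpoint and uses the Transformers apply_chat_template helper; the exact prompt format is handled by the tokenizer.

```python
# Sketch: question answering with the instruction-tuned variant
# ("google/gemma-2b-it" is an assumed checkpoint name).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "google/gemma-2b-it"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

chat = [{"role": "user", "content": "Summarize in one sentence what a large language model is."}]
prompt = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)

inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=60)
# Print only the newly generated tokens, not the echoed prompt.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```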

The model's license permits free use and distribution not only in research and personal projects but also in commercial products. Creating and publishing modified versions of the model is also allowed. At the same time, the terms of use prohibit employing the model for harmful purposes and ask that products use the latest version of Gemma where possible.

Support for Gemma models has already been added to the Transformers library and the Responsible Generative AI Toolkit. For fine-tuning, the Keras framework can be used with TensorFlow, JAX, or PyTorch backends. Gemma can also be used with the MaxText, NVIDIA NeMo, and TensorRT-LLM frameworks.
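A hedged sketch of the Keras route follows, assuming the keras_nlp package exposes a Gemma preset named gemma_2b_en and that the backend is selected through the KERAS_BACKEND environment variable before Keras is imported.

```python
# Sketch: loading Gemma through Keras with a selectable backend
# (the preset name "gemma_2b_en" and the keras_nlp API are assumptions).
import os
os.environ["KERAS_BACKEND"] = "jax"  # could also be "tensorflow" or "torch"

import keras_nlp

gemma_lm = keras_nlp.models.GemmaCausalLM.from_preset("gemma_2b_en")
print(gemma_lm.generate("Explain what a context window is.", max_length=64))
```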


The context window of the Gemma models is 8 thousand tokens (the number of tokens the model can process and take into account when generating text). For comparison, the Gemini and GPT-4 models have a context of 32 thousand tokens, and GPT-4 Turbo handles 128 thousand. Only English is supported.
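Since input beyond the window is not taken into account, it can be useful to check prompt length against the limit before generation. This is a minimal sketch; the exact 8192-token figure and the tokenizer checkpoint are assumptions.

```python
# Sketch: checking that a prompt fits into Gemma's context window
# (assumes an 8192-token limit and the "google/gemma-2b" tokenizer).
from transformers import AutoTokenizer

CONTEXT_WINDOW = 8192
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2b")

prompt = "Some long document to be summarized..."
n_tokens = len(tokenizer(prompt)["input_ids"])
if n_tokens > CONTEXT_WINDOW:
    print(f"Prompt is {n_tokens} tokens; truncate it to fit the {CONTEXT_WINDOW}-token window.")
else:
    print(f"Prompt uses {n_tokens} of {CONTEXT_WINDOW} tokens.")
```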
