Colab Notebook | GenAI 01: Comparing Gemma Model Performance Before and After Tuning
#OpenLLMbyGoogle #Keras #LoRA / 2024-02-22 동준상.넥스트플랫폼 / ipynb
https://colab.research.google.com/drive/1pGBgSIf5EsbLLveStyaNIUIl8ea0KgcZ?usp=sharing

Prompt Engineering | Google Open Weights Gemma
Gemma: Open LLM by Google
https://blog.google/technology/developers/gemma-open-models/
What is Gemma?
- a family of lightweight, state-of-the-art open models built from the same research and technology used to create the Gemini models
- the name means "precious stone" in Latin
Key Features of Gemma
- Releasing model weights in two sizes: Gemma 2B and Gemma 7B. Each size is released with pre-trained and instruction-tuned variants.
- A new Responsible Generative AI Toolkit provides guidance and essential tools for creating safer AI applications with Gemma.
- Providing toolchains for inference and supervised fine-tuning (SFT) across all major frameworks: JAX, PyTorch, and TensorFlow through native Keras 3.0.
- Ready-to-use Colab and Kaggle notebooks, alongside integration with popular tools such as Hugging Face, MaxText, NVIDIA NeMo and TensorRT-LLM, make it easy to get started with Gemma.
- Pre-trained and instruction-tuned Gemma models can run on your laptop, workstation, or Google Cloud with easy deployment on Vertex AI and Google Kubernetes Engine (GKE).
- Optimization across multiple AI hardware platforms ensures industry-leading performance, including NVIDIA GPUs and Google Cloud TPUs.
- Terms of use permit responsible commercial usage and distribution for all organizations, regardless of size.
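The #LoRA hashtag refers to Low-Rank Adaptation, the tuning method the notebook compares before and after. As a quick refresher before the notebook itself, here is a minimal NumPy sketch of the LoRA idea; the dimensions, rank, and scaling factor are illustrative choices, not values from the notebook:

```python
import numpy as np

# LoRA: keep the pretrained weight W frozen and learn a low-rank
# update A @ B, so only d*r + r*k parameters are trainable.
rng = np.random.default_rng(0)
d, k, r = 512, 512, 4                  # illustrative layer size and rank
alpha = 8                               # illustrative LoRA scaling factor

W = rng.normal(size=(d, k))             # frozen pretrained weight
A = rng.normal(size=(d, r)) * 0.01      # trainable low-rank factor
B = np.zeros((r, k))                    # zero-init: adapter starts as a no-op

def lora_forward(x):
    # Adapted layer: frozen path plus scaled low-rank update.
    return x @ W + (alpha / r) * (x @ A @ B)

x = rng.normal(size=(1, d))
# With B = 0, the adapted output equals the frozen model's output,
# which is why tuning starts from the pretrained behavior.
assert np.allclose(lora_forward(x), x @ W)

full_params = W.size                    # 512 * 512 = 262144
lora_params = A.size + B.size           # 512*4 + 4*512 = 4096
print(f"trainable: {lora_params} of {full_params} "
      f"({lora_params / full_params:.2%})")
```

This is why LoRA tuning of Gemma fits in a Colab session: for this illustrative layer, only about 1.6% of the layer's parameters are trained, and the same ratio logic applies per adapted layer at Gemma scale.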
Gemma
동준상.넥스트플랫폼
End | Thank you.