The AI race continues, and Google has been at the forefront of it, as the recent Google I/O developer conference showed. Google I/O is an annual developer conference where the company showcases its latest technologies and products. The conference typically features keynote speeches, technical sessions, and product announcements, and attracts both developers and consumers.
The launch of ChatGPT, at first glance, looked like a huge swing at the information giant, but Google has steadily begun to prove that it has been working on AI-powered projects for a long time.
Google has released a free learning path for Generative AI, focusing on the areas described below.
- Introduction to Generative AI – This is an introductory-level microlearning course aimed at explaining what Generative AI is, how it is used, and how it differs from traditional machine learning methods. It also covers Google tools to help you develop your own Gen AI apps. This course is estimated to take approximately 45 minutes to complete.
- Introduction to Large Language Models – This is an introductory-level microlearning course that explores what large language models (LLMs) are, the use cases in which they can be applied, and how you can use prompt tuning to enhance LLM performance. It also covers Google tools to help you develop your own Gen AI apps. This course is estimated to take approximately 45 minutes to complete.
- Introduction to Responsible AI – This is an introductory-level microlearning course aimed at explaining what responsible AI is, why it’s important, and how Google implements responsible AI in their products. It also introduces Google’s 7 AI principles.
- Introduction to Image Generation – This course introduces diffusion models, a family of machine learning models that has recently shown promise in the image generation space. Diffusion models draw inspiration from physics, specifically thermodynamics. Within the last few years, diffusion models have become popular in both research and industry. Diffusion models underpin many state-of-the-art image generation models and tools on Google Cloud. This course introduces you to the theory behind diffusion models and how to train and deploy them on Vertex AI.
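The core idea behind diffusion models is a forward process that gradually corrupts an image with Gaussian noise, which the model then learns to reverse. Here is a minimal sketch of that forward (noising) process, assuming a linear beta schedule; it is illustrative only and not code from the course or Google Cloud.

```python
import numpy as np

def forward_diffusion(x0, t, betas, rng):
    """Sample x_t ~ q(x_t | x_0) in closed form.

    Uses the standard identity
    x_t = sqrt(alpha_bar_t) * x_0 + sqrt(1 - alpha_bar_t) * noise,
    where alpha_bar_t is the cumulative product of (1 - beta) up to step t.
    """
    alpha_bars = np.cumprod(1.0 - betas)
    noise = rng.standard_normal(x0.shape)
    return np.sqrt(alpha_bars[t]) * x0 + np.sqrt(1.0 - alpha_bars[t]) * noise

rng = np.random.default_rng(0)
betas = np.linspace(1e-4, 0.02, 1000)  # linear schedule over T = 1000 steps
x0 = rng.standard_normal((8, 8))       # stand-in for an 8x8 "image"
x999 = forward_diffusion(x0, 999, betas, rng)
# By the final step, almost all of the original signal is replaced by noise;
# training teaches a network to undo this corruption step by step.
```
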
- Encoder-Decoder Architecture – This course gives you a synopsis of the encoder-decoder architecture, which is a powerful and prevalent machine learning architecture for sequence-to-sequence tasks such as machine translation, text summarization, and question answering. You learn about the main components of the encoder-decoder architecture and how to train and serve these models. In the corresponding lab walkthrough, you'll code a simple implementation of the encoder-decoder architecture from scratch in TensorFlow, applied to poetry generation.
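The data flow the course describes can be sketched in a few lines: the encoder compresses an input sequence into a single context vector, and the decoder unrolls from that vector to emit an output sequence. This toy numpy version uses random, untrained weights and a simple (non-gated) RNN cell, so the shapes and flow are the point, not the output quality; the lab itself uses TensorFlow.

```python
import numpy as np

rng = np.random.default_rng(0)
vocab, embed, hidden = 50, 16, 32

# Parameters (randomly initialized; a real model would train these).
E = rng.normal(0, 0.1, (vocab, embed))        # shared embedding table
W_enc = rng.normal(0, 0.1, (embed + hidden, hidden))
W_dec = rng.normal(0, 0.1, (embed + hidden, hidden))
W_out = rng.normal(0, 0.1, (hidden, vocab))   # projects hidden state to vocab logits

def encode(src_ids):
    h = np.zeros(hidden)
    for tok in src_ids:                        # simple RNN cell, one step per token
        h = np.tanh(np.concatenate([E[tok], h]) @ W_enc)
    return h                                   # the context vector

def decode(context, bos_id, steps):
    h, tok, out = context, bos_id, []
    for _ in range(steps):                     # greedy decoding from the context
        h = np.tanh(np.concatenate([E[tok], h]) @ W_dec)
        tok = int(np.argmax(h @ W_out))
        out.append(tok)
    return out

context = encode([3, 7, 12, 4])                # toy source token ids
generated = decode(context, bos_id=1, steps=5)
```
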
- Attention Mechanism – This course will introduce you to the attention mechanism, a powerful technique that allows neural networks to focus on specific parts of an input sequence. You will learn how attention works, and how it can be used to improve performance on a variety of machine learning tasks, including machine translation, text summarization, and question answering. This course is estimated to take approximately 45 minutes to complete.
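The "focus on specific parts of an input sequence" idea is usually implemented as scaled dot-product attention: each query position computes a weighted average over all value vectors, with weights derived from query-key similarity. A minimal sketch (not course code):

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                 # query-key similarity
    scores -= scores.max(axis=-1, keepdims=True)    # for numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over the keys
    return weights @ V, weights                     # weighted sum of values

rng = np.random.default_rng(0)
Q = rng.normal(size=(4, 8))   # 4 query positions, dimension 8
K = rng.normal(size=(6, 8))   # 6 key/value positions
V = rng.normal(size=(6, 8))
out, weights = scaled_dot_product_attention(Q, K, V)
# Each row of `weights` sums to 1: a distribution over input positions.
```
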
- Transformer Models and BERT Model – This course introduces you to the Transformer architecture and the Bidirectional Encoder Representations from Transformers (BERT) model. You learn about the main components of the Transformer architecture, such as the self-attention mechanism, and how it is used to build the BERT model. You also learn about the different tasks that BERT can be used for, such as text classification, question answering, and natural language inference. This course is estimated to take approximately 45 minutes to complete.
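For the sentence-pair tasks the course mentions, such as natural language inference, BERT expects its input assembled as `[CLS] tokens_a [SEP] tokens_b [SEP]` with segment ids marking which sentence each token belongs to. This sketch builds such an input with toy token ids (the special-token ids match the original BERT vocabulary, but the rest are placeholders, not a real tokenizer):

```python
CLS, SEP, PAD = 101, 102, 0  # special-token ids from the original BERT vocab

def build_bert_input(tokens_a, tokens_b, max_len):
    input_ids = [CLS] + tokens_a + [SEP] + tokens_b + [SEP]
    segment_ids = [0] * (len(tokens_a) + 2) + [1] * (len(tokens_b) + 1)
    attention_mask = [1] * len(input_ids)       # 1 = real token, 0 = padding
    pad = max_len - len(input_ids)
    return (input_ids + [PAD] * pad,
            segment_ids + [0] * pad,
            attention_mask + [0] * pad)

# Toy ids standing in for two tokenized sentences.
ids, segs, mask = build_bert_input([7, 8, 9], [20, 21], max_len=12)
```
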
- Create Image Captioning Models – This course teaches you how to create an image captioning model by using deep learning. You learn about the different components of an image captioning model, such as the encoder and decoder, and how to train and evaluate your model. By the end of this course, you will be able to create your own image captioning models and use them to generate captions for images.
- Introduction to Generative AI Studio – This course introduces Generative AI Studio, a product on Vertex AI that helps you prototype and customize generative AI models so you can use their capabilities in your applications. In this course, you learn what Generative AI Studio is, its features and options, and how to use it by walking through demos of the product. At the end, there is a hands-on lab to apply what you've learned and a quiz to test your knowledge.
- Generative AI Explorer – Vertex AI – The Generative AI Explorer – Vertex Quest is a collection of labs on how to use Generative AI on Google Cloud. Through the labs, you will learn how to use the models in the Vertex AI PaLM API family, including text-bison, chat-bison, and textembedding-gecko. You will also learn about prompt design and best practices, and how prompts can be used for ideation, text classification, text extraction, text summarization, and more. You will also learn how to tune a foundation model by training it via Vertex AI custom training and deploying it to a Vertex AI endpoint.
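A central prompt-design pattern those labs cover is few-shot prompting: prepending labeled examples so the model infers the task. Here is a small, self-contained sketch for a text-classification prompt; the template and examples are illustrative, not taken from the course materials.

```python
def few_shot_prompt(examples, query):
    """Assemble a few-shot classification prompt from (text, label) pairs."""
    lines = ["Classify the sentiment of each review as positive or negative.", ""]
    for text, label in examples:
        lines += [f"Review: {text}", f"Sentiment: {label}", ""]
    lines += [f"Review: {query}", "Sentiment:"]  # model completes the last line
    return "\n".join(lines)

prompt = few_shot_prompt(
    [("Great battery life.", "positive"),
     ("The screen cracked in a week.", "negative")],
    "Fast shipping and works perfectly.",
)
```

A string like `prompt` would then be sent to a model such as text-bison through the Vertex AI PaLM API; the labs walk through the actual SDK calls.
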