Transcript for:
AI Simplified: Introduction to Vertex AI

Hi, I'm Priyanka Vergadia, and this is AI Simplified, where we take the journey from datasets all the way to deployed machine learning models. No matter which service you offer today, it is crucial to use the data you have to make predictions so you can improve your apps and the user experience. But most teams have varying levels of machine learning expertise, ranging from novice all the way to expert. To accelerate AI innovation, you need a platform that can help novices build expertise and give experts a seamless, flexible environment. This is where Vertex AI comes in: it provides tools for every step of the machine learning workflow, across different model types, for varying levels of machine learning expertise.

Before we take a look at Vertex AI, though, let's understand the typical machine learning workflow. After defining your prediction task, the first thing you do is ingest the data, analyze it, and transform it. Then you create and train the model, evaluate and optimize it, and finally deploy it to make predictions.

With Vertex AI, you get a simplified machine learning workflow in one central place. Ingestion, analysis, and transformation are really all about data preparation, and you do that using managed datasets. Within Vertex AI you have tools to create a dataset by importing your data through the console or just the API, and you can label and annotate the data right from within the console.

For model training you have two options, AutoML or custom, to match the varying machine learning expertise on your team. For some use cases, such as images, videos, text, and tabular data, AutoML works great: you don't need to write any model code, and Vertex AI takes care of finding the best model for the task. For other use cases, where you would like more control over your model's architecture, use custom models. Custom models are great when you want to write the framework, architecture, and code yourself, so this works well for TensorFlow or PyTorch code.

Once the model is trained, you can assess it, optimize it, and even understand the signals behind its predictions. You do that with Explainable AI, which lets you dive deeper into your model and understand which factors play a role in what it predicts.

Once you're happy with the model, you deploy it to an endpoint to serve online predictions through the API or the console. The deployment includes all the physical resources and scalable hardware needed to serve that model with low latency. You can, of course, use the undeployed model for batch predictions. Once the model is deployed, you can get predictions using the command-line interface, the console UI, or the SDK and the APIs.

At this point you might be wondering how all of this looks in the console and where to find it, so let me give you a little tour of the dashboard. When we click on Vertex AI, we land on the dashboard, where you can see recent datasets and recent models and get predictions. On the left are all the steps involved in the machine learning workflow, from datasets all the way to predictions. In the Datasets section you can create datasets depending on the type of data and your prediction task; supported data types include image, tabular, text, and video. If your prediction task doesn't fall into one of these use cases, don't worry: you can still use Vertex AI for custom model training and prediction.

Before we continue the tour, here are a few minimal sketches of what these steps can look like in the Vertex AI SDK for Python (google-cloud-aiplatform). Every project ID, bucket path, column name, and container image below is a placeholder for illustration, not a detail from the video.
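
First, data preparation: creating a managed dataset from a hypothetical CSV file in Cloud Storage.

    # pip install google-cloud-aiplatform
    from google.cloud import aiplatform

    # Placeholder project, region, and staging bucket -- substitute your own.
    aiplatform.init(
        project="my-project",
        location="us-central1",
        staging_bucket="gs://my-bucket/staging",
    )

    # Create a managed tabular dataset from a CSV in Cloud Storage.
    dataset = aiplatform.TabularDataset.create(
        display_name="my-tabular-dataset",
        gcs_source="gs://my-bucket/data/train.csv",
    )
    print(dataset.resource_name)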
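
Next, the AutoML path on that same hypothetical dataset; the target column and training budget are made-up values.

    # Train a classification model with AutoML -- no model code required.
    job = aiplatform.AutoMLTabularTrainingJob(
        display_name="automl-tabular-train",
        optimization_prediction_type="classification",
    )
    model = job.run(
        dataset=dataset,               # the managed dataset created above
        target_column="will_churn",    # hypothetical label column
        budget_milli_node_hours=1000,  # about one node hour of training
        model_display_name="churn-model",
    )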
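
The custom path, sketched here with a local TensorFlow script and Google's pre-built training and serving containers; the container URIs are illustrative, so check the current list of pre-built images before relying on them.

    # Custom training: your own TensorFlow code in train.py, run on a
    # pre-built training container. The script should write its saved
    # model to the directory given by the AIP_MODEL_DIR environment
    # variable so Vertex AI can pick it up.
    job = aiplatform.CustomTrainingJob(
        display_name="custom-tf-train",
        script_path="train.py",
        container_uri="us-docker.pkg.dev/vertex-ai/training/tf-cpu.2-11:latest",
        model_serving_container_image_uri=(
            "us-docker.pkg.dev/vertex-ai/prediction/tf2-cpu.2-11:latest"
        ),
    )
    model = job.run(
        replica_count=1,
        machine_type="n1-standard-4",
    )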
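
Deployment and online prediction, again as a sketch; the instance payload depends entirely on your model's input schema.

    # Deploy the trained model to an endpoint with autoscaling bounds.
    endpoint = model.deploy(
        machine_type="n1-standard-4",
        min_replica_count=1,
        max_replica_count=3,
    )

    # Online prediction through the SDK; the instance format is model-specific.
    response = endpoint.predict(instances=[{"feature_a": "1.0", "feature_b": "x"}])
    print(response.predictions)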
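
And Explainable AI: a deployed model that supports explanations can return feature attributions alongside its predictions. This is a sketch; AutoML tabular models support this out of the box, while custom models need explanation metadata configured when they are uploaded.

    # Ask the endpoint for feature attributions with the prediction.
    explain_response = endpoint.explain(
        instances=[{"feature_a": "1.0", "feature_b": "x"}]
    )
    for explanation in explain_response.explanations:
        # Attributions show how much each input feature pushed the
        # prediction up or down.
        print(explanation.attributions)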
Once you have created a dataset, you can see it listed under Datasets. In the Notebooks section of the console, you can create customized notebook instances with the type of environment and GPUs you want.

In the Training tab, you can see and create your training jobs. The beauty is that you can take one dataset and train it in different ways: with AutoML you can train a high-quality model with minimal effort, AutoML Edge produces models that are optimized for edge devices, and the custom training option lets you train models built with any framework, using pre-built or custom containers. Pre-built containers are available for supported frameworks such as TensorFlow, PyTorch, scikit-learn, and XGBoost, and you provide your code as a Python package. The custom containers option allows you to train models built with any framework or language: you put your training application code in a Docker container, push it to Container Registry, and run the training on Vertex AI (a sketch of this appears after the wrap-up below). You can accelerate training with GPUs and also apply hyperparameter tuning.

In the Models tab, you can see all the models you have created. You can also import models trained outside of Google Cloud and serve them for online and batch predictions (also sketched below). To use a model, you create an endpoint, which brings us to our next step. Creating an endpoint is how you serve your models for online predictions, and each model can be deployed to multiple endpoints. You enter the compute resources so your endpoint can autoscale based on your traffic, you can split traffic across the models deployed to an endpoint, and you can send model logs to Cloud Logging (see the traffic-splitting sketch below). Once the model is live, you can make predictions in the UI or through the SDK. In the Batch Predictions tab, you can make predictions on a batch of data from Cloud Storage (the last sketch below).

This was a pretty high-level overview, in which we saw that Vertex AI provides the tools to support your entire machine learning workflow, from data management all the way to predictions. In the next episodes we will dive much deeper into each of these steps and build an end-to-end machine learning workflow. In the meantime, let's continue the discussion in the comments below; I'm excited to hear all about your machine learning use cases and workflows.
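
As promised, the custom-container route, continuing the same placeholder setup; the trainer image path is hypothetical and assumes you have already built and pushed it.

    # Custom container training: any framework or language, packaged in a
    # Docker image you build and push to your registry beforehand, e.g.:
    #   docker build -t us-central1-docker.pkg.dev/my-project/my-repo/trainer:latest .
    #   docker push us-central1-docker.pkg.dev/my-project/my-repo/trainer:latest
    job = aiplatform.CustomContainerTrainingJob(
        display_name="custom-container-train",
        container_uri="us-central1-docker.pkg.dev/my-project/my-repo/trainer:latest",
        model_serving_container_image_uri=(
            "us-docker.pkg.dev/vertex-ai/prediction/sklearn-cpu.1-0:latest"
        ),
    )
    model = job.run(
        replica_count=1,
        machine_type="n1-standard-8",
        accelerator_type="NVIDIA_TESLA_T4",  # optional GPU acceleration
        accelerator_count=1,
    )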
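
Importing a model trained outside Google Cloud, as mentioned for the Models tab; the artifact path and serving image are placeholders.

    # Upload (import) an externally trained model so Vertex AI can serve it.
    imported_model = aiplatform.Model.upload(
        display_name="imported-model",
        artifact_uri="gs://my-bucket/models/my-model/",  # saved model artifacts
        serving_container_image_uri=(
            "us-docker.pkg.dev/vertex-ai/prediction/sklearn-cpu.1-0:latest"
        ),
    )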
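
The traffic-splitting idea from the endpoints step, sketched with a hypothetical existing endpoint: the new deployment takes 20% of traffic while whatever is already deployed keeps the rest.

    # Deploy a new model version to an existing endpoint, routing 20% of
    # traffic to it while the currently deployed model keeps 80%.
    endpoint = aiplatform.Endpoint(endpoint_name="1234567890")  # hypothetical ID
    imported_model.deploy(
        endpoint=endpoint,
        machine_type="n1-standard-4",
        min_replica_count=1,
        max_replica_count=2,
        traffic_percentage=20,
    )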
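
Finally, batch prediction on files in Cloud Storage, matching the Batch Predictions tab; the paths and format are illustrative.

    # Batch prediction: no endpoint needed. Inputs are read from Cloud
    # Storage and results are written back to Cloud Storage.
    batch_job = imported_model.batch_predict(
        job_display_name="batch-scoring",
        gcs_source="gs://my-bucket/batch/input.jsonl",
        gcs_destination_prefix="gs://my-bucket/batch/output/",
        instances_format="jsonl",
        machine_type="n1-standard-4",
    )
    batch_job.wait()  # blocks until the job completes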