{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "**Chapter 19 – Training and Deploying TensorFlow Models at Scale**" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "_This notebook contains all the sample code and solutions to the exercises in chapter 19._" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "
\n", " \n", " | \n", "\n", " \n", " | \n", "
/gcs/{bucket_name}/
:"
]
},
{
"cell_type": "code",
"execution_count": 95,
"metadata": {},
"outputs": [],
"source": [
"with open(\"my_keras_tuner_search.py\") as f:\n",
" script = f.read()\n",
"\n",
"with open(\"my_keras_tuner_search.py\", \"w\") as f:\n",
" f.write(script.replace(\"/gcs/my_bucket/\", f\"/gcs/{bucket_name}/\"))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now all we need to do is to start a custom training job based on this script, exactly like in the previous section. Don't forget to add `keras-tuner` to the list of `requirements`:"
]
},
{
"cell_type": "code",
"execution_count": 96,
"metadata": {},
"outputs": [],
"source": [
"hp_search_job = aiplatform.CustomTrainingJob(\n",
" display_name=\"my_hp_search_job\",\n",
" script_path=\"my_keras_tuner_search.py\",\n",
" container_uri=\"gcr.io/cloud-aiplatform/training/tf-gpu.2-4:latest\",\n",
" model_serving_container_image_uri=server_image,\n",
" requirements=[\"keras-tuner~=1.1.2\"],\n",
" staging_bucket=f\"gs://{bucket_name}/staging\",\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 97,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Training script copied to:\n",
"gs://my_bucket/staging/aiplatform-2022-04-15-13:34:32.591-aiplatform_custom_trainer_script-0.1.tar.gz.\n",
"Training Output directory:\n",
"gs://my_bucket/staging/aiplatform-custom-training-2022-04-15-13:34:34.453 \n",
"View Training:\n",
"https://console.cloud.google.com/ai/platform/locations/us-central1/training/8601543785521872896?project=522977795627\n",
"View backing custom job:\n",
"https://console.cloud.google.com/ai/platform/locations/us-central1/training/5022607048831926272?project=522977795627\n",
"CustomTrainingJob projects/522977795627/locations/us-central1/trainingPipelines/8601543785521872896 current state:\n",
"PipelineState.PIPELINE_STATE_RUNNING\n",
"CustomTrainingJob projects/522977795627/locations/us-central1/trainingPipelines/8601543785521872896 current state:\n",
"PipelineState.PIPELINE_STATE_RUNNING\n",
"CustomTrainingJob projects/522977795627/locations/us-central1/trainingPipelines/8601543785521872896 current state:\n",
"PipelineState.PIPELINE_STATE_RUNNING\n",
"CustomTrainingJob projects/522977795627/locations/us-central1/trainingPipelines/8601543785521872896 current state:\n",
"PipelineState.PIPELINE_STATE_RUNNING\n",
"CustomTrainingJob projects/522977795627/locations/us-central1/trainingPipelines/8601543785521872896 current state:\n",
"PipelineState.PIPELINE_STATE_RUNNING\n",
"CustomTrainingJob projects/522977795627/locations/us-central1/trainingPipelines/8601543785521872896 current state:\n",
"PipelineState.PIPELINE_STATE_RUNNING\n",
"CustomTrainingJob projects/522977795627/locations/us-central1/trainingPipelines/8601543785521872896 current state:\n",
"PipelineState.PIPELINE_STATE_RUNNING\n",
"CustomTrainingJob projects/522977795627/locations/us-central1/trainingPipelines/8601543785521872896 current state:\n",
"PipelineState.PIPELINE_STATE_RUNNING\n",
"CustomTrainingJob projects/522977795627/locations/us-central1/trainingPipelines/8601543785521872896 current state:\n",
"PipelineState.PIPELINE_STATE_RUNNING\n",
"CustomTrainingJob projects/522977795627/locations/us-central1/trainingPipelines/8601543785521872896 current state:\n",
"PipelineState.PIPELINE_STATE_RUNNING\n",
"CustomTrainingJob projects/522977795627/locations/us-central1/trainingPipelines/8601543785521872896 current state:\n",
"PipelineState.PIPELINE_STATE_RUNNING\n",
"CustomTrainingJob projects/522977795627/locations/us-central1/trainingPipelines/8601543785521872896 current state:\n",
"PipelineState.PIPELINE_STATE_RUNNING\n",
"CustomTrainingJob projects/522977795627/locations/us-central1/trainingPipelines/8601543785521872896 current state:\n",
"PipelineState.PIPELINE_STATE_RUNNING\n",
"CustomTrainingJob projects/522977795627/locations/us-central1/trainingPipelines/8601543785521872896 current state:\n",
"PipelineState.PIPELINE_STATE_RUNNING\n",
"CustomTrainingJob projects/522977795627/locations/us-central1/trainingPipelines/8601543785521872896 current state:\n",
"PipelineState.PIPELINE_STATE_RUNNING\n",
"CustomTrainingJob projects/522977795627/locations/us-central1/trainingPipelines/8601543785521872896 current state:\n",
"PipelineState.PIPELINE_STATE_RUNNING\n",
"CustomTrainingJob projects/522977795627/locations/us-central1/trainingPipelines/8601543785521872896 current state:\n",
"PipelineState.PIPELINE_STATE_RUNNING\n",
"CustomTrainingJob projects/522977795627/locations/us-central1/trainingPipelines/8601543785521872896 current state:\n",
"PipelineState.PIPELINE_STATE_RUNNING\n",
"CustomTrainingJob projects/522977795627/locations/us-central1/trainingPipelines/8601543785521872896 current state:\n",
"PipelineState.PIPELINE_STATE_RUNNING\n",
"CustomTrainingJob projects/522977795627/locations/us-central1/trainingPipelines/8601543785521872896 current state:\n",
"PipelineState.PIPELINE_STATE_RUNNING\n",
"CustomTrainingJob projects/522977795627/locations/us-central1/trainingPipelines/8601543785521872896 current state:\n",
"PipelineState.PIPELINE_STATE_RUNNING\n",
"CustomTrainingJob projects/522977795627/locations/us-central1/trainingPipelines/8601543785521872896 current state:\n",
"PipelineState.PIPELINE_STATE_RUNNING\n",
"CustomTrainingJob run completed. Resource name: projects/522977795627/locations/us-central1/trainingPipelines/8601543785521872896\n",
"Model available at projects/522977795627/locations/us-central1/models/8176544832480168612\n",
"\n"
]
}
],
"source": [
"mnist_model3 = hp_search_job.run(\n",
" machine_type=\"n1-standard-4\",\n",
" replica_count=3,\n",
" accelerator_type=\"NVIDIA_TESLA_K80\",\n",
" accelerator_count=2,\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"And we have a model!"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Let's clean up:"
]
},
{
"cell_type": "code",
"execution_count": 98,
"metadata": {},
"outputs": [],
"source": [
"mnist_model3.delete()\n",
"hp_search_job.delete()\n",
"blobs = bucket.list_blobs(prefix=f\"gs://{bucket_name}/staging/\")\n",
"for blob in blobs:\n",
" blob.delete()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Extra Material – Using AutoML to Train a Model"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Let's start by exporting the MNIST dataset to PNG images, and prepare an `import.csv` pointing to each image, and indicating the split (training, validation, or test) and the label:"
]
},
{
"cell_type": "code",
"execution_count": 99,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"70000/70000"
]
}
],
"source": [
"import matplotlib.pyplot as plt\n",
"\n",
"mnist_path = Path(\"datasets/mnist\")\n",
"mnist_path.mkdir(parents=True, exist_ok=True)\n",
"idx = 0\n",
"with open(mnist_path / \"import.csv\", \"w\") as import_csv:\n",
" for split, X, y in zip((\"training\", \"validation\", \"test\"),\n",
" (X_train, X_valid, X_test),\n",
" (y_train, y_valid, y_test)):\n",
" for image, label in zip(X, y):\n",
" print(f\"\\r{idx + 1}/70000\", end=\"\")\n",
" filename = f\"{idx:05d}.png\"\n",
" plt.imsave(mnist_path / filename, np.tile(image, 3))\n",
" line = f\"{split},gs://{bucket_name}/mnist/{filename},{label}\\n\"\n",
" import_csv.write(line)\n",
" idx += 1"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Let's upload this dataset to GCS:"
]
},
{
"cell_type": "code",
"execution_count": 100,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Uploaded datasets/mnist \n"
]
}
],
"source": [
"upload_directory(bucket, mnist_path)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now let's create a managed image dataset on Vertex AI:"
]
},
{
"cell_type": "code",
"execution_count": 101,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Creating ImageDataset\n",
"Create ImageDataset backing LRO: projects/522977795627/locations/us-central1/datasets/7532459492777132032/operations/3812233931370004480\n",
"ImageDataset created. Resource name: projects/522977795627/locations/us-central1/datasets/7532459492777132032\n",
"To use this ImageDataset in another session:\n",
"ds = aiplatform.ImageDataset('projects/522977795627/locations/us-central1/datasets/7532459492777132032')\n",
"Importing ImageDataset data: projects/522977795627/locations/us-central1/datasets/7532459492777132032\n",
"Import ImageDataset data backing LRO: projects/522977795627/locations/us-central1/datasets/7532459492777132032/operations/3010593197698056192\n",
"ImageDataset data imported. Resource name: projects/522977795627/locations/us-central1/datasets/7532459492777132032\n"
]
}
],
"source": [
"from aiplatform.schema.dataset.ioformat.image import single_label_classification\n",
"\n",
"mnist_dataset = aiplatform.ImageDataset.create(\n",
" display_name=\"mnist-dataset\",\n",
" gcs_source=[f\"gs://{bucket_name}/mnist/import.csv\"],\n",
" project=project_id,\n",
" import_schema_uri=single_label_classification,\n",
" sync=True,\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Create an AutoML training job on this dataset:"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"**TODO**"
]
},
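  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Minimal sketch (not run here): display names and budget are examples\n",
    "automl_training_job = aiplatform.AutoMLImageTrainingJob(\n",
    "    display_name=\"mnist-automl-job\",\n",
    "    prediction_type=\"classification\",\n",
    "    multi_label=False,\n",
    ")\n",
    "mnist_automl_model = automl_training_job.run(\n",
    "    dataset=mnist_dataset,\n",
    "    model_display_name=\"mnist-automl-model\",\n",
    "    budget_milli_node_hours=8_000,  # 8 node-hours, the documented minimum\n",
    "    sync=True,\n",
    ")"
   ]
  },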
{
"cell_type": "markdown",
"metadata": {
"tags": []
},
"source": [
"# Exercise Solutions"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 1. to 8."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"1. A SavedModel contains a TensorFlow model, including its architecture (a computation graph) and its weights. It is stored as a directory containing a _saved_model.pb_ file, which defines the computation graph (represented as a serialized protocol buffer), and a _variables_ subdirectory containing the variable values. For models containing a large number of weights, these variable values may be split across multiple files. A SavedModel also includes an _assets_ subdirectory that may contain additional data, such as vocabulary files, class names, or some example instances for this model. To be more accurate, a SavedModel can contain one or more _metagraphs_. A metagraph is a computation graph plus some function signature definitions (including their input and output names, types, and shapes). Each metagraph is identified by a set of tags. To inspect a SavedModel, you can use the command-line tool `saved_model_cli` or just load it using `tf.saved_model.load()` and inspect it in Python.\n",
"2. TF Serving allows you to deploy multiple TensorFlow models (or multiple versions of the same model) and make them accessible to all your applications easily via a REST API or a gRPC API. Using your models directly in your applications would make it harder to deploy a new version of a model across all applications. Implementing your own microservice to wrap a TF model would require extra work, and it would be hard to match TF Serving's features. TF Serving has many features: it can monitor a directory and autodeploy the models that are placed there, and you won't have to change or even restart any of your applications to benefit from the new model versions; it's fast, well tested, and scales very well; and it supports A/B testing of experimental models and deploying a new model version to just a subset of your users (in this case the model is called a _canary_). TF Serving is also capable of grouping individual requests into batches to run them jointly on the GPU. To deploy TF Serving, you can install it from source, but it is much simpler to install it using a Docker image. To deploy a cluster of TF Serving Docker images, you can use an orchestration tool such as Kubernetes, or use a fully hosted solution such as Google Vertex AI.\n",
"3. To deploy a model across multiple TF Serving instances, all you need to do is configure these TF Serving instances to monitor the same _models_ directory, and then export your new model as a SavedModel into a subdirectory.\n",
"4. The gRPC API is more efficient than the REST API. However, its client libraries are not as widely available, and if you activate compression when using the REST API, you can get almost the same performance. So, the gRPC API is most useful when you need the highest possible performance and the clients are not limited to the REST API.\n",
"5. To reduce a model's size so it can run on a mobile or embedded device, TFLite uses several techniques:\n",
" * It provides a converter which can optimize a SavedModel: it shrinks the model and reduces its latency. To do this, it prunes all the operations that are not needed to make predictions (such as training operations), and it optimizes and fuses operations whenever possible.\n",
" * The converter can also perform post-training quantization: this technique dramatically reduces the model’s size, so it’s much faster to download and store.\n",
" * It saves the optimized model using the FlatBuffer format, which can be loaded to RAM directly, without parsing. This reduces the loading time and memory footprint.\n",
"6. Quantization-aware training consists in adding fake quantization operations to the model during training. This allows the model to learn to ignore the quantization noise; the final weights will be more robust to quantization.\n",
"7. Model parallelism means chopping your model into multiple parts and running them in parallel across multiple devices, hopefully speeding up the model during training or inference. Data parallelism means creating multiple exact replicas of your model and deploying them across multiple devices. At each iteration during training, each replica is given a different batch of data, and it computes the gradients of the loss with regard to the model parameters. In synchronous data parallelism, the gradients from all replicas are then aggregated and the optimizer performs a Gradient Descent step. The parameters may be centralized (e.g., on parameter servers) or replicated across all replicas and kept in sync using AllReduce. In asynchronous data parallelism, the parameters are centralized and the replicas run independently from each other, each updating the central parameters directly at the end of each training iteration, without having to wait for the other replicas. To speed up training, data parallelism turns out to work better than model parallelism, in general. This is mostly because it requires less communication across devices. Moreover, it is much easier to implement, and it works the same way for any model, whereas model parallelism requires analyzing the model to determine the best way to chop it into pieces. That said, research in this domain is making quick progress (e.g., PipeDream or Pathways), so a mix of model parallelism and data parallelism is probably the way forward.\n",
"8. When training a model across multiple servers, you can use the following distribution strategies:\n",
" * The `MultiWorkerMirroredStrategy` performs mirrored data parallelism. The model is replicated across all available servers and devices, and each replica gets a different batch of data at each training iteration and computes its own gradients. The mean of the gradients is computed and shared across all replicas using a distributed AllReduce implementation (NCCL by default), and all replicas perform the same Gradient Descent step. This strategy is the simplest to use since all servers and devices are treated in exactly the same way, and it performs fairly well. In general, you should use this strategy. Its main limitation is that it requires the model to fit in RAM on every replica.\n",
" * The `ParameterServerStrategy` performs asynchronous data parallelism. The model is replicated across all devices on all workers, and the parameters are sharded across all parameter servers. Each worker has its own training loop, running asynchronously with the other workers; at each training iteration, each worker gets its own batch of data and fetches the latest version of the model parameters from the parameter servers, then it computes the gradients of the loss with regard to these parameters, and it sends them to the parameter servers. Lastly, the parameter servers perform a Gradient Descent step using these gradients. This strategy is generally slower than the previous strategy, and a bit harder to deploy, since it requires managing parameter servers. However, it can be useful in some situations, especially when you can take advantage of the asynchronous updates, for example to reduce I/O bottlenecks. This depends on many factors, including hardware, network topology, number of servers, model size, and more, so your mileage may vary."
]
},
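  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "To illustrate the last point of answer 1, here is a minimal sketch of how to inspect a SavedModel in Python (the `my_model/0001` path is hypothetical):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import tensorflow as tf\n",
    "\n",
    "saved_model = tf.saved_model.load(\"my_model/0001\")  # hypothetical path\n",
    "serve_fn = saved_model.signatures[\"serving_default\"]\n",
    "print(serve_fn.structured_input_signature)  # input names, types, and shapes\n",
    "print(serve_fn.structured_outputs)          # output names, types, and shapes\n",
    "\n",
    "# Or from the command line:\n",
    "#!saved_model_cli show --dir my_model/0001 --all"
   ]
  },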
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 9.\n",
"_Exercise: Train a model (any model you like) and deploy it to TF Serving or Google Vertex AI. Write the client code to query it using the REST API or the gRPC API. Update the model and deploy the new version. Your client code will now query the new version. Roll back to the first version._"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Please follow the steps in the Deploying TensorFlow models to TensorFlow Serving section above."
]
},
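  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "For reference, here is a minimal sketch of the client code for TF Serving's REST API. It assumes TF Serving is running locally on port 8501 and serving a model named `my_mnist_model` (both are hypothetical; adapt them to your deployment):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import json\n",
    "\n",
    "import numpy as np\n",
    "import requests\n",
    "\n",
    "X_new = np.random.rand(3, 28, 28)  # hypothetical batch of input images\n",
    "\n",
    "server_url = \"http://localhost:8501/v1/models/my_mnist_model:predict\"\n",
    "request_json = json.dumps({\n",
    "    \"signature_name\": \"serving_default\",\n",
    "    \"instances\": X_new.tolist(),\n",
    "})\n",
    "response = requests.post(server_url, data=request_json)\n",
    "response.raise_for_status()  # raise an exception in case of error\n",
    "y_proba = np.array(response.json()[\"predictions\"])"
   ]
  },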
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# 10.\n",
"_Exercise: Train any model across multiple GPUs on the same machine using the `MirroredStrategy` (if you do not have access to GPUs, you can use Colaboratory with a GPU Runtime and create two virtual GPUs). Train the model again using the `CentralStorageStrategy `and compare the training time._"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Please follow the steps in the [Distributed Training](#Distributed-Training) section above."
]
},
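  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "In short, the key steps look like the sketch below, assuming you have at least one physical GPU (the memory limits and the model architecture are just examples). To compare strategies, swap in `CentralStorageStrategy` and time `model.fit()` in both cases:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import tensorflow as tf\n",
    "\n",
    "# Split the first physical GPU into two virtual GPUs; this must run before\n",
    "# TensorFlow initializes the GPUs\n",
    "physical_gpus = tf.config.list_physical_devices(\"GPU\")\n",
    "tf.config.set_logical_device_configuration(\n",
    "    physical_gpus[0],\n",
    "    [tf.config.LogicalDeviceConfiguration(memory_limit=2048),\n",
    "     tf.config.LogicalDeviceConfiguration(memory_limit=2048)])\n",
    "\n",
    "strategy = tf.distribute.MirroredStrategy()\n",
    "#strategy = tf.distribute.experimental.CentralStorageStrategy()\n",
    "with strategy.scope():  # create the model and optimizer under the strategy\n",
    "    model = tf.keras.Sequential([\n",
    "        tf.keras.layers.Flatten(input_shape=[28, 28]),\n",
    "        tf.keras.layers.Dense(100, activation=\"relu\"),\n",
    "        tf.keras.layers.Dense(10, activation=\"softmax\")])\n",
    "    model.compile(loss=\"sparse_categorical_crossentropy\", optimizer=\"sgd\",\n",
    "                  metrics=[\"accuracy\"])"
   ]
  },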
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# 11.\n",
"_Exercise: Train a small model on Google Vertex AI, using TensorFlow Cloud Tuner for hyperparameter tuning._"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Please follow the instructions in the _Hyperparameter Tuning using TensorFlow Cloud Tuner_ section in the book."
]
},
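  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Alternatively, you can use Vertex AI's own hyperparameter tuning service instead of TensorFlow Cloud Tuner. Here is a minimal sketch: the script path, metric name, and search space are hypothetical, and the training script must report the metric (e.g., using the `hypertune` library):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from google.cloud import aiplatform\n",
    "from google.cloud.aiplatform import hyperparameter_tuning as hpt\n",
    "\n",
    "# Each trial runs this (hypothetical) training script on one machine\n",
    "trial_job = aiplatform.CustomJob.from_local_script(\n",
    "    display_name=\"my_trial_job\",\n",
    "    script_path=\"my_vertex_ai_trial.py\",  # hypothetical script\n",
    "    container_uri=\"gcr.io/cloud-aiplatform/training/tf-gpu.2-4:latest\",\n",
    "    staging_bucket=f\"gs://{bucket_name}/staging\",\n",
    ")\n",
    "\n",
    "hp_job = aiplatform.HyperparameterTuningJob(\n",
    "    display_name=\"my_hp_tuning_job\",\n",
    "    custom_job=trial_job,\n",
    "    metric_spec={\"accuracy\": \"maximize\"},  # the script must report this metric\n",
    "    parameter_spec={\n",
    "        \"learning_rate\": hpt.DoubleParameterSpec(min=1e-3, max=0.1, scale=\"log\"),\n",
    "        \"n_neurons\": hpt.IntegerParameterSpec(min=10, max=300, scale=\"linear\"),\n",
    "    },\n",
    "    max_trial_count=20,\n",
    "    parallel_trial_count=2,\n",
    ")\n",
    "hp_job.run()"
   ]
  },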
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Congratulations!"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"You've reached the end of the book! I hope you found it useful. 😊"
]
}
],
"metadata": {
"accelerator": "GPU",
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.10"
}
},
"nbformat": 4,
"nbformat_minor": 4
}