{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "**Chapter 11 – Training Deep Neural Networks**" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "_This notebook contains all the sample code and solutions to the exercises in chapter 11._" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\n", " \n", " \n", "
\n", " \"Open\n", " \n", " \n", "
" ] }, { "cell_type": "markdown", "metadata": { "tags": [] }, "source": [ "# Setup" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This project requires Python 3.8 or above:" ] }, { "cell_type": "code", "execution_count": 1, "metadata": {}, "outputs": [], "source": [ "import sys\n", "\n", "assert sys.version_info >= (3, 8)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "It also requires Scikit-Learn ≥ 1.0.1:" ] }, { "cell_type": "code", "execution_count": 2, "metadata": {}, "outputs": [], "source": [ "import sklearn\n", "\n", "assert sklearn.__version__ >= \"1.0.1\"" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "And TensorFlow ≥ 2.6:" ] }, { "cell_type": "code", "execution_count": 3, "metadata": {}, "outputs": [], "source": [ "import tensorflow as tf\n", "\n", "assert tf.__version__ >= \"2.6.0\"" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "As we did in previous chapters, let's define the default font sizes to make the figures prettier:" ] }, { "cell_type": "code", "execution_count": 4, "metadata": {}, "outputs": [], "source": [ "import matplotlib.pyplot as plt\n", "\n", "plt.rc('font', size=14)\n", "plt.rc('axes', labelsize=14, titlesize=14)\n", "plt.rc('legend', fontsize=14)\n", "plt.rc('xtick', labelsize=10)\n", "plt.rc('ytick', labelsize=10)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "And let's create the `images/deep` folder (if it doesn't already exist), and define the `save_fig()` function which is used through this notebook to save the figures in high-res for the book:" ] }, { "cell_type": "code", "execution_count": 5, "metadata": {}, "outputs": [], "source": [ "from pathlib import Path\n", "\n", "IMAGES_PATH = Path() / \"images\" / \"deep\"\n", "IMAGES_PATH.mkdir(parents=True, exist_ok=True)\n", "\n", "def save_fig(fig_id, tight_layout=True, fig_extension=\"png\", resolution=300):\n", " path = IMAGES_PATH / f\"{fig_id}.{fig_extension}\"\n", " if tight_layout:\n", " plt.tight_layout()\n", " plt.savefig(path, format=fig_extension, dpi=resolution)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# Vanishing/Exploding Gradients Problem" ] }, { "cell_type": "code", "execution_count": 6, "metadata": {}, "outputs": [], "source": [ "# extra code – this cell generates and saves Figure 11–1\n", "\n", "import numpy as np\n", "\n", "def sigmoid(z):\n", " return 1 / (1 + np.exp(-z))\n", "\n", "z = np.linspace(-5, 5, 200)\n", "\n", "plt.plot([-5, 5], [0, 0], 'k-')\n", "plt.plot([-5, 5], [1, 1], 'k--')\n", "plt.plot([0, 0], [-0.2, 1.2], 'k-')\n", "plt.plot([-5, 5], [-3/4, 7/4], 'g--')\n", "plt.plot(z, sigmoid(z), \"b-\", linewidth=2,\n", " label=r\"$\\sigma(z) = \\dfrac{1}{1+e^{-z}}$\")\n", "props = dict(facecolor='black', shrink=0.1)\n", "plt.annotate('Saturating', xytext=(3.5, 0.7), xy=(5, 1), arrowprops=props,\n", " fontsize=14, ha=\"center\")\n", "plt.annotate('Saturating', xytext=(-3.5, 0.3), xy=(-5, 0), arrowprops=props,\n", " fontsize=14, ha=\"center\")\n", "plt.annotate('Linear', xytext=(2, 0.2), xy=(0, 0.5), arrowprops=props,\n", " fontsize=14, ha=\"center\")\n", "plt.grid(True)\n", "plt.axis([-5, 5, -0.2, 1.2])\n", "plt.xlabel(\"$z$\")\n", "plt.legend(loc=\"upper left\", fontsize=16)\n", "\n", "save_fig(\"sigmoid_saturation_plot\")\n", "plt.show()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Xavier and He Initialization" ] }, { "cell_type": "code", "execution_count": 7, "metadata": {}, "outputs": [], "source": [ "dense = tf.keras.layers.Dense(50, activation=\"relu\",\n", " 
kernel_initializer=\"he_normal\")" ] }, { "cell_type": "code", "execution_count": 8, "metadata": {}, "outputs": [], "source": [ "he_avg_init = tf.keras.initializers.VarianceScaling(scale=2., mode=\"fan_avg\",\n", " distribution=\"uniform\")\n", "dense = tf.keras.layers.Dense(50, activation=\"sigmoid\",\n", " kernel_initializer=he_avg_init)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Nonsaturating Activation Functions" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Leaky ReLU" ] }, { "cell_type": "code", "execution_count": 9, "metadata": {}, "outputs": [], "source": [ "# extra code – this cell generates and saves Figure 11–2\n", "\n", "def leaky_relu(z, alpha):\n", " return np.maximum(alpha * z, z)\n", "\n", "z = np.linspace(-5, 5, 200)\n", "plt.plot(z, leaky_relu(z, 0.1), \"b-\", linewidth=2, label=r\"$LeakyReLU(z) = max(\\alpha z, z)$\")\n", "plt.plot([-5, 5], [0, 0], 'k-')\n", "plt.plot([0, 0], [-1, 3.7], 'k-')\n", "plt.grid(True)\n", "props = dict(facecolor='black', shrink=0.1)\n", "plt.annotate('Leak', xytext=(-3.5, 0.5), xy=(-5, -0.3), arrowprops=props,\n", " fontsize=14, ha=\"center\")\n", "plt.xlabel(\"$z$\")\n", "plt.axis([-5, 5, -1, 3.7])\n", "plt.gca().set_aspect(\"equal\")\n", "plt.legend()\n", "\n", "save_fig(\"leaky_relu_plot\")\n", "plt.show()" ] }, { "cell_type": "code", "execution_count": 10, "metadata": {}, "outputs": [], "source": [ "leaky_relu = tf.keras.layers.LeakyReLU(alpha=0.2) # defaults to alpha=0.3\n", "dense = tf.keras.layers.Dense(50, activation=leaky_relu,\n", " kernel_initializer=\"he_normal\")" ] }, { "cell_type": "code", "execution_count": 11, "metadata": {}, "outputs": [], "source": [ "model = tf.keras.models.Sequential([\n", " # [...] # more layers\n", " tf.keras.layers.Dense(50, kernel_initializer=\"he_normal\"), # no activation\n", " tf.keras.layers.LeakyReLU(alpha=0.2), # activation as a separate layer\n", " # [...] # more layers\n", "])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### ELU" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Implementing ELU in TensorFlow is trivial, just specify the activation function when building each layer, and use He initialization:" ] }, { "cell_type": "code", "execution_count": 12, "metadata": {}, "outputs": [], "source": [ "dense = tf.keras.layers.Dense(50, activation=\"elu\",\n", " kernel_initializer=\"he_normal\")" ] }, { "cell_type": "markdown", "metadata": { "tags": [] }, "source": [ "### SELU" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "By default, the SELU hyperparameters (`scale` and `alpha`) are tuned in such a way that the mean output of each neuron remains close to 0, and the standard deviation remains close to 1 (assuming the inputs are standardized with mean 0 and standard deviation 1 too, and other constraints are respected, as explained in the book). 
Using this activation function, even a 1,000 layer deep neural network preserves roughly mean 0 and standard deviation 1 across all layers, avoiding the exploding/vanishing gradients problem:" ] }, { "cell_type": "code", "execution_count": 13, "metadata": {}, "outputs": [], "source": [ "# extra code – this cell generates and saves Figure 11–3\n", "\n", "from scipy.special import erfc\n", "\n", "# alpha and scale to self normalize with mean 0 and standard deviation 1\n", "# (see equation 14 in the paper):\n", "alpha_0_1 = -np.sqrt(2 / np.pi) / (erfc(1 / np.sqrt(2)) * np.exp(1 / 2) - 1)\n", "scale_0_1 = (\n", " (1 - erfc(1 / np.sqrt(2)) * np.sqrt(np.e))\n", " * np.sqrt(2 * np.pi)\n", " * (\n", " 2 * erfc(np.sqrt(2)) * np.e ** 2\n", " + np.pi * erfc(1 / np.sqrt(2)) ** 2 * np.e\n", " - 2 * (2 + np.pi) * erfc(1 / np.sqrt(2)) * np.sqrt(np.e)\n", " + np.pi\n", " + 2\n", " ) ** (-1 / 2)\n", ")\n", "\n", "def elu(z, alpha=1):\n", " return np.where(z < 0, alpha * (np.exp(z) - 1), z)\n", "\n", "def selu(z, scale=scale_0_1, alpha=alpha_0_1):\n", " return scale * elu(z, alpha)\n", "\n", "z = np.linspace(-5, 5, 200)\n", "plt.plot(z, elu(z), \"b-\", linewidth=2, label=r\"ELU$_\\alpha(z) = \\alpha (e^z - 1)$ if $z < 0$, else $z$\")\n", "plt.plot(z, selu(z), \"r--\", linewidth=2, label=r\"SELU$(z) = 1.05 \\, $ELU$_{1.67}(z)$\")\n", "plt.plot([-5, 5], [0, 0], 'k-')\n", "plt.plot([-5, 5], [-1, -1], 'k:', linewidth=2)\n", "plt.plot([-5, 5], [-1.758, -1.758], 'k:', linewidth=2)\n", "plt.plot([0, 0], [-2.2, 3.2], 'k-')\n", "plt.grid(True)\n", "plt.axis([-5, 5, -2.2, 3.2])\n", "plt.xlabel(\"$z$\")\n", "plt.gca().set_aspect(\"equal\")\n", "plt.legend()\n", "\n", "save_fig(\"elu_selu_plot\")\n", "plt.show()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Using SELU is straightforward:" ] }, { "cell_type": "code", "execution_count": 14, "metadata": {}, "outputs": [], "source": [ "dense = tf.keras.layers.Dense(50, activation=\"selu\",\n", " kernel_initializer=\"lecun_normal\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Extra material – an example of a self-regularized network using SELU**\n", "\n", "Let's create a neural net for Fashion MNIST with 100 hidden layers, using the SELU activation function:" ] }, { "cell_type": "code", "execution_count": 15, "metadata": {}, "outputs": [], "source": [ "tf.random.set_seed(42)\n", "model = tf.keras.Sequential()\n", "model.add(tf.keras.layers.Flatten(input_shape=[28, 28]))\n", "for layer in range(100):\n", " model.add(tf.keras.layers.Dense(100, activation=\"selu\",\n", " kernel_initializer=\"lecun_normal\"))\n", "model.add(tf.keras.layers.Dense(10, activation=\"softmax\"))" ] }, { "cell_type": "code", "execution_count": 16, "metadata": {}, "outputs": [], "source": [ "model.compile(loss=\"sparse_categorical_crossentropy\",\n", " optimizer=tf.keras.optimizers.SGD(learning_rate=0.001),\n", " metrics=[\"accuracy\"])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now let's train it. 
Do not forget to scale the inputs to mean 0 and standard deviation 1:" ] }, { "cell_type": "code", "execution_count": 17, "metadata": {}, "outputs": [], "source": [ "fashion_mnist = tf.keras.datasets.fashion_mnist.load_data()\n", "(X_train_full, y_train_full), (X_test, y_test) = fashion_mnist\n", "X_train, y_train = X_train_full[:-5000], y_train_full[:-5000]\n", "X_valid, y_valid = X_train_full[-5000:], y_train_full[-5000:]\n", "X_train, X_valid, X_test = X_train / 255, X_valid / 255, X_test / 255" ] }, { "cell_type": "code", "execution_count": 18, "metadata": {}, "outputs": [], "source": [ "class_names = [\"T-shirt/top\", \"Trouser\", \"Pullover\", \"Dress\", \"Coat\",\n", " \"Sandal\", \"Shirt\", \"Sneaker\", \"Bag\", \"Ankle boot\"]" ] }, { "cell_type": "code", "execution_count": 19, "metadata": {}, "outputs": [], "source": [ "pixel_means = X_train.mean(axis=0, keepdims=True)\n", "pixel_stds = X_train.std(axis=0, keepdims=True)\n", "X_train_scaled = (X_train - pixel_means) / pixel_stds\n", "X_valid_scaled = (X_valid - pixel_means) / pixel_stds\n", "X_test_scaled = (X_test - pixel_means) / pixel_stds" ] }, { "cell_type": "code", "execution_count": 20, "metadata": {}, "outputs": [], "source": [ "history = model.fit(X_train_scaled, y_train, epochs=5,\n", " validation_data=(X_valid_scaled, y_valid))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The network managed to learn, despite how deep it is. Now look at what happens if we try to use the ReLU activation function instead:" ] }, { "cell_type": "code", "execution_count": 21, "metadata": {}, "outputs": [], "source": [ "tf.random.set_seed(42)" ] }, { "cell_type": "code", "execution_count": 22, "metadata": {}, "outputs": [], "source": [ "model = tf.keras.Sequential()\n", "model.add(tf.keras.layers.Flatten(input_shape=[28, 28]))\n", "for layer in range(100):\n", " model.add(tf.keras.layers.Dense(100, activation=\"relu\",\n", " kernel_initializer=\"he_normal\"))\n", "model.add(tf.keras.layers.Dense(10, activation=\"softmax\"))" ] }, { "cell_type": "code", "execution_count": 23, "metadata": {}, "outputs": [], "source": [ "model.compile(loss=\"sparse_categorical_crossentropy\",\n", " optimizer=tf.keras.optimizers.SGD(learning_rate=0.001),\n", " metrics=[\"accuracy\"])" ] }, { "cell_type": "code", "execution_count": 24, "metadata": {}, "outputs": [], "source": [ "history = model.fit(X_train_scaled, y_train, epochs=5,\n", " validation_data=(X_valid_scaled, y_valid))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Not great at all, we suffered from the vanishing/exploding gradients problem." 
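 ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Extra material – peeking at the activations (not in the book)**\n", "\n", "As a rough diagnostic sketch, assuming `model` is still the 100-layer ReLU network we just trained, we can look at the standard deviation of the activations at a few depths; the sampled depths and the 1,024-image batch below are arbitrary choices." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# extra code – rough diagnostic sketch (assumes `model` is the ReLU net above)\n", "depths = range(1, 101, 20)  # a few of the 100 hidden Dense layers\n", "activation_model = tf.keras.Model(\n", "    inputs=model.input,\n", "    outputs=[model.layers[i].output for i in depths])\n", "activations = activation_model.predict(X_train_scaled[:1024])\n", "for depth, act in zip(depths, activations):\n", "    print(f\"Hidden layer {depth:3d}: activation std = {act.std():.4f}\")"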
] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### GELU, Swish and Mish" ] }, { "cell_type": "code", "execution_count": 25, "metadata": {}, "outputs": [], "source": [ "# extra code – this cell generates and saves Figure 11–4\n", "\n", "def swish(z, beta=1):\n", " return z * sigmoid(beta * z)\n", "\n", "def approx_gelu(z):\n", " return swish(z, beta=1.702)\n", "\n", "def softplus(z):\n", " return np.log(1 + np.exp(z))\n", "\n", "def mish(z):\n", " return z * np.tanh(softplus(z))\n", "\n", "z = np.linspace(-4, 2, 200)\n", "\n", "beta = 0.6\n", "plt.plot(z, approx_gelu(z), \"b-\", linewidth=2,\n", " label=r\"GELU$(z) = z\\,\\Phi(z)$\")\n", "plt.plot(z, swish(z), \"r--\", linewidth=2,\n", " label=r\"Swish$(z) = z\\,\\sigma(z)$\")\n", "plt.plot(z, swish(z, beta), \"r:\", linewidth=2,\n", " label=fr\"Swish$_{{\\beta={beta}}}(z)=z\\,\\sigma({beta}\\,z)$\")\n", "plt.plot(z, mish(z), \"g:\", linewidth=3,\n", " label=fr\"Mish$(z) = z\\,\\tanh($softplus$(z))$\")\n", "plt.plot([-4, 2], [0, 0], 'k-')\n", "plt.plot([0, 0], [-2.2, 3.2], 'k-')\n", "plt.grid(True)\n", "plt.axis([-4, 2, -1, 2])\n", "plt.gca().set_aspect(\"equal\")\n", "plt.xlabel(\"$z$\")\n", "plt.legend(loc=\"upper left\")\n", "\n", "save_fig(\"gelu_swish_mish_plot\")\n", "plt.show()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# Batch Normalization" ] }, { "cell_type": "code", "execution_count": 26, "metadata": {}, "outputs": [], "source": [ "# extra code - clear the name counters and set the random seed\n", "tf.keras.backend.clear_session()\n", "tf.random.set_seed(42)" ] }, { "cell_type": "code", "execution_count": 27, "metadata": {}, "outputs": [], "source": [ "model = tf.keras.Sequential([\n", " tf.keras.layers.Flatten(input_shape=[28, 28]),\n", " tf.keras.layers.BatchNormalization(),\n", " tf.keras.layers.Dense(300, activation=\"relu\",\n", " kernel_initializer=\"he_normal\"),\n", " tf.keras.layers.BatchNormalization(),\n", " tf.keras.layers.Dense(100, activation=\"relu\",\n", " kernel_initializer=\"he_normal\"),\n", " tf.keras.layers.BatchNormalization(),\n", " tf.keras.layers.Dense(10, activation=\"softmax\")\n", "])" ] }, { "cell_type": "code", "execution_count": 28, "metadata": {}, "outputs": [], "source": [ "model.summary()" ] }, { "cell_type": "code", "execution_count": 29, "metadata": {}, "outputs": [], "source": [ "[(var.name, var.trainable) for var in model.layers[1].variables]" ] }, { "cell_type": "code", "execution_count": 30, "metadata": {}, "outputs": [], "source": [ "# extra code – just show that the model works! 😊\n", "model.compile(loss=\"sparse_categorical_crossentropy\", optimizer=\"sgd\",\n", " metrics=\"accuracy\")\n", "model.fit(X_train, y_train, epochs=2, validation_data=(X_valid, y_valid))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Sometimes applying BN before the activation function works better (there's a debate on this topic). 
Moreover, the layer before a `BatchNormalization` layer does not need to have bias terms, since the `BatchNormalization` layer has some as well: they would be a waste of parameters. So you can set `use_bias=False` when creating those layers:" ] }, { "cell_type": "code", "execution_count": 31, "metadata": {}, "outputs": [], "source": [ "# extra code - clear the name counters and set the random seed\n", "tf.keras.backend.clear_session()\n", "tf.random.set_seed(42)" ] }, { "cell_type": "code", "execution_count": 32, "metadata": {}, "outputs": [], "source": [ "model = tf.keras.Sequential([\n", " tf.keras.layers.Flatten(input_shape=[28, 28]),\n", " tf.keras.layers.Dense(300, kernel_initializer=\"he_normal\", use_bias=False),\n", " tf.keras.layers.BatchNormalization(),\n", " tf.keras.layers.Activation(\"relu\"),\n", " tf.keras.layers.Dense(100, kernel_initializer=\"he_normal\", use_bias=False),\n", " tf.keras.layers.BatchNormalization(),\n", " tf.keras.layers.Activation(\"relu\"),\n", " tf.keras.layers.Dense(10, activation=\"softmax\")\n", "])" ] }, { "cell_type": "code", "execution_count": 33, "metadata": {}, "outputs": [], "source": [ "# extra code – just show that the model works! 😊\n", "model.compile(loss=\"sparse_categorical_crossentropy\", optimizer=\"sgd\",\n", " metrics=\"accuracy\")\n", "model.fit(X_train, y_train, epochs=2, validation_data=(X_valid, y_valid))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Gradient Clipping" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "All `tf.keras.optimizers` accept `clipnorm` or `clipvalue` arguments:" ] }, { "cell_type": "code", "execution_count": 34, "metadata": {}, "outputs": [], "source": [ "optimizer = tf.keras.optimizers.SGD(clipvalue=1.0)\n", "model.compile(loss=\"sparse_categorical_crossentropy\", optimizer=optimizer)" ] }, { "cell_type": "code", "execution_count": 35, "metadata": {}, "outputs": [], "source": [ "optimizer = tf.keras.optimizers.SGD(clipnorm=1.0)\n", "model.compile(loss=\"sparse_categorical_crossentropy\", optimizer=optimizer)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Reusing Pretrained Layers" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Reusing a Keras model" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's split the Fashion MNIST training set in two:\n", "* `X_train_A`: all images of all items except for T-shirts/tops and pullovers (classes 0 and 2).\n", "* `X_train_B`: a much smaller training set of just the first 200 images of T-shirts/tops and pullovers.\n", "\n", "The validation set and the test set are also split this way, but without restricting the number of images.\n", "\n", "We will train a model on set A (classification task with 8 classes), and try to reuse it to tackle set B (binary classification). We hope to transfer a little bit of knowledge from task A to task B, since classes in set A (trousers, dresses, coats, sandals, shirts, sneakers, bags, and ankle boots) are somewhat similar to classes in set B (T-shirts/tops and pullovers). However, since we are using `Dense` layers, only patterns that occur at the same location can be reused (in contrast, convolutional layers will transfer much better, since learned patterns can be detected anywhere on the image, as we will see in Chapter 14)."
] }, { "cell_type": "code", "execution_count": 36, "metadata": {}, "outputs": [], "source": [ "# extra code – split Fashion MNIST into tasks A and B, then train and save\n", "# model A to \"my_model_A\".\n", "\n", "pos_class_id = class_names.index(\"Pullover\")\n", "neg_class_id = class_names.index(\"T-shirt/top\")\n", "\n", "def split_dataset(X, y):\n", " y_for_B = (y == pos_class_id) | (y == neg_class_id)\n", " y_A = y[~y_for_B]\n", " y_B = (y[y_for_B] == pos_class_id).astype(np.float32)\n", " old_class_ids = list(set(range(10)) - set([neg_class_id, pos_class_id]))\n", " for old_class_id, new_class_id in zip(old_class_ids, range(8)):\n", " y_A[y_A == old_class_id] = new_class_id # reorder class ids for A\n", " return ((X[~y_for_B], y_A), (X[y_for_B], y_B))\n", "\n", "(X_train_A, y_train_A), (X_train_B, y_train_B) = split_dataset(X_train, y_train)\n", "(X_valid_A, y_valid_A), (X_valid_B, y_valid_B) = split_dataset(X_valid, y_valid)\n", "(X_test_A, y_test_A), (X_test_B, y_test_B) = split_dataset(X_test, y_test)\n", "X_train_B = X_train_B[:200]\n", "y_train_B = y_train_B[:200]\n", "\n", "tf.random.set_seed(42)\n", "\n", "model_A = tf.keras.Sequential([\n", " tf.keras.layers.Flatten(input_shape=[28, 28]),\n", " tf.keras.layers.Dense(100, activation=\"relu\",\n", " kernel_initializer=\"he_normal\"),\n", " tf.keras.layers.Dense(100, activation=\"relu\",\n", " kernel_initializer=\"he_normal\"),\n", " tf.keras.layers.Dense(100, activation=\"relu\",\n", " kernel_initializer=\"he_normal\"),\n", " tf.keras.layers.Dense(8, activation=\"softmax\")\n", "])\n", "\n", "model_A.compile(loss=\"sparse_categorical_crossentropy\",\n", " optimizer=tf.keras.optimizers.SGD(learning_rate=0.001),\n", " metrics=[\"accuracy\"])\n", "history = model_A.fit(X_train_A, y_train_A, epochs=20,\n", " validation_data=(X_valid_A, y_valid_A))\n", "model_A.save(\"my_model_A\")" ] }, { "cell_type": "code", "execution_count": 37, "metadata": {}, "outputs": [], "source": [ "# extra code – train and evaluate model B, without reusing model A\n", "\n", "tf.random.set_seed(42)\n", "model_B = tf.keras.Sequential([\n", " tf.keras.layers.Flatten(input_shape=[28, 28]),\n", " tf.keras.layers.Dense(100, activation=\"relu\",\n", " kernel_initializer=\"he_normal\"),\n", " tf.keras.layers.Dense(100, activation=\"relu\",\n", " kernel_initializer=\"he_normal\"),\n", " tf.keras.layers.Dense(100, activation=\"relu\",\n", " kernel_initializer=\"he_normal\"),\n", " tf.keras.layers.Dense(1, activation=\"sigmoid\")\n", "])\n", "\n", "model_B.compile(loss=\"binary_crossentropy\",\n", " optimizer=tf.keras.optimizers.SGD(learning_rate=0.001),\n", " metrics=[\"accuracy\"])\n", "history = model_B.fit(X_train_B, y_train_B, epochs=20,\n", " validation_data=(X_valid_B, y_valid_B))\n", "model_B.evaluate(X_test_B, y_test_B)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Model B reaches 91.85% accuracy on the test set. Now let's try reusing the pretrained model A." ] }, { "cell_type": "code", "execution_count": 38, "metadata": {}, "outputs": [], "source": [ "model_A = tf.keras.models.load_model(\"my_model_A\")\n", "model_B_on_A = tf.keras.Sequential(model_A.layers[:-1])\n", "model_B_on_A.add(tf.keras.layers.Dense(1, activation=\"sigmoid\"))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Note that `model_B_on_A` and `model_A` actually share layers now, so when we train one, it will update both models. 
If we want to avoid that, we need to build `model_B_on_A` on top of a *clone* of `model_A`:" ] }, { "cell_type": "code", "execution_count": 39, "metadata": {}, "outputs": [], "source": [ "tf.random.set_seed(42) # extra code – ensure reproducibility" ] }, { "cell_type": "code", "execution_count": 40, "metadata": {}, "outputs": [], "source": [ "model_A_clone = tf.keras.models.clone_model(model_A)\n", "model_A_clone.set_weights(model_A.get_weights())" ] }, { "cell_type": "code", "execution_count": 41, "metadata": {}, "outputs": [], "source": [ "# extra code – creating model_B_on_A just like in the previous cell\n", "model_B_on_A = tf.keras.Sequential(model_A_clone.layers[:-1])\n", "model_B_on_A.add(tf.keras.layers.Dense(1, activation=\"sigmoid\"))" ] }, { "cell_type": "code", "execution_count": 42, "metadata": {}, "outputs": [], "source": [ "for layer in model_B_on_A.layers[:-1]:\n", " layer.trainable = False\n", "\n", "optimizer = tf.keras.optimizers.SGD(learning_rate=0.001)\n", "model_B_on_A.compile(loss=\"binary_crossentropy\", optimizer=optimizer,\n", " metrics=[\"accuracy\"])" ] }, { "cell_type": "code", "execution_count": 43, "metadata": {}, "outputs": [], "source": [ "history = model_B_on_A.fit(X_train_B, y_train_B, epochs=4,\n", " validation_data=(X_valid_B, y_valid_B))\n", "\n", "for layer in model_B_on_A.layers[:-1]:\n", " layer.trainable = True\n", "\n", "optimizer = tf.keras.optimizers.SGD(learning_rate=0.001)\n", "model_B_on_A.compile(loss=\"binary_crossentropy\", optimizer=optimizer,\n", " metrics=[\"accuracy\"])\n", "history = model_B_on_A.fit(X_train_B, y_train_B, epochs=16,\n", " validation_data=(X_valid_B, y_valid_B))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "So, what's the final verdict?" ] }, { "cell_type": "code", "execution_count": 44, "metadata": {}, "outputs": [], "source": [ "model_B_on_A.evaluate(X_test_B, y_test_B)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Great! We got a bit of transfer: the model's accuracy went up 2 percentage points, from 91.85% to 93.85%. 
This means the error rate dropped by almost 25%:" ] }, { "cell_type": "code", "execution_count": 45, "metadata": {}, "outputs": [], "source": [ "1 - (100 - 93.85) / (100 - 91.85)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# Faster Optimizers" ] }, { "cell_type": "code", "execution_count": 46, "metadata": {}, "outputs": [], "source": [ "# extra code – a little function to test an optimizer on Fashion MNIST\n", "\n", "def build_model(seed=42):\n", " tf.random.set_seed(seed)\n", " return tf.keras.Sequential([\n", " tf.keras.layers.Flatten(input_shape=[28, 28]),\n", " tf.keras.layers.Dense(100, activation=\"relu\",\n", " kernel_initializer=\"he_normal\"),\n", " tf.keras.layers.Dense(100, activation=\"relu\",\n", " kernel_initializer=\"he_normal\"),\n", " tf.keras.layers.Dense(100, activation=\"relu\",\n", " kernel_initializer=\"he_normal\"),\n", " tf.keras.layers.Dense(10, activation=\"softmax\")\n", " ])\n", "\n", "def build_and_train_model(optimizer):\n", " model = build_model()\n", " model.compile(loss=\"sparse_categorical_crossentropy\", optimizer=optimizer,\n", " metrics=[\"accuracy\"])\n", " return model.fit(X_train, y_train, epochs=10,\n", " validation_data=(X_valid, y_valid))" ] }, { "cell_type": "code", "execution_count": 47, "metadata": {}, "outputs": [], "source": [ "optimizer = tf.keras.optimizers.SGD(learning_rate=0.001, momentum=0.9)" ] }, { "cell_type": "code", "execution_count": 48, "metadata": {}, "outputs": [], "source": [ "history_sgd = build_and_train_model(optimizer) # extra code" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Momentum optimization" ] }, { "cell_type": "code", "execution_count": 49, "metadata": {}, "outputs": [], "source": [ "optimizer = tf.keras.optimizers.SGD(learning_rate=0.001, momentum=0.9)" ] }, { "cell_type": "code", "execution_count": 50, "metadata": {}, "outputs": [], "source": [ "history_momentum = build_and_train_model(optimizer) # extra code" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Nesterov Accelerated Gradient" ] }, { "cell_type": "code", "execution_count": 51, "metadata": {}, "outputs": [], "source": [ "optimizer = tf.keras.optimizers.SGD(learning_rate=0.001, momentum=0.9,\n", " nesterov=True)" ] }, { "cell_type": "code", "execution_count": 52, "metadata": {}, "outputs": [], "source": [ "history_nesterov = build_and_train_model(optimizer) # extra code" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## AdaGrad" ] }, { "cell_type": "code", "execution_count": 53, "metadata": {}, "outputs": [], "source": [ "optimizer = tf.keras.optimizers.Adagrad(learning_rate=0.001)" ] }, { "cell_type": "code", "execution_count": 54, "metadata": {}, "outputs": [], "source": [ "history_adagrad = build_and_train_model(optimizer) # extra code" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## RMSProp" ] }, { "cell_type": "code", "execution_count": 55, "metadata": {}, "outputs": [], "source": [ "optimizer = tf.keras.optimizers.RMSprop(learning_rate=0.001, rho=0.9)" ] }, { "cell_type": "code", "execution_count": 56, "metadata": {}, "outputs": [], "source": [ "history_rmsprop = build_and_train_model(optimizer) # extra code" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Adam Optimization" ] }, { "cell_type": "code", "execution_count": 57, "metadata": {}, "outputs": [], "source": [ "optimizer = tf.keras.optimizers.Adam(learning_rate=0.001, beta_1=0.9,\n", " beta_2=0.999)" ] }, { "cell_type": "code", "execution_count": 58, "metadata": {}, "outputs": [], "source": [ 
"history_adam = build_and_train_model(optimizer) # extra code" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Adamax Optimization**" ] }, { "cell_type": "code", "execution_count": 59, "metadata": {}, "outputs": [], "source": [ "optimizer = tf.keras.optimizers.Adamax(learning_rate=0.001, beta_1=0.9,\n", " beta_2=0.999)" ] }, { "cell_type": "code", "execution_count": 60, "metadata": {}, "outputs": [], "source": [ "history_adamax = build_and_train_model(optimizer) # extra code" ] }, { "cell_type": "markdown", "metadata": { "tags": [] }, "source": [ "**Nadam Optimization**" ] }, { "cell_type": "code", "execution_count": 61, "metadata": { "tags": [] }, "outputs": [], "source": [ "optimizer = tf.keras.optimizers.Nadam(learning_rate=0.001, beta_1=0.9,\n", " beta_2=0.999)" ] }, { "cell_type": "code", "execution_count": 62, "metadata": {}, "outputs": [], "source": [ "history_nadam = build_and_train_model(optimizer) # extra code" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**AdamW Optimization**" ] }, { "cell_type": "code", "execution_count": 63, "metadata": { "tags": [] }, "outputs": [], "source": [ "import tensorflow_addons as tfa\n", "\n", "optimizer = tfa.optimizers.AdamW(weight_decay=1e-5, learning_rate=0.001,\n", " beta_1=0.9, beta_2=0.999)" ] }, { "cell_type": "code", "execution_count": 64, "metadata": {}, "outputs": [], "source": [ "history_adamw = build_and_train_model(optimizer) # extra code" ] }, { "cell_type": "code", "execution_count": 65, "metadata": {}, "outputs": [], "source": [ "# extra code – visualize the learning curves of all the optimizers\n", "\n", "for loss in (\"loss\", \"val_loss\"):\n", " plt.figure(figsize=(12, 8))\n", " opt_names = \"SGD Momentum Nesterov AdaGrad RMSProp Adam Adamax Nadam AdamW\"\n", " for history, opt_name in zip((history_sgd, history_momentum, history_nesterov,\n", " history_adagrad, history_rmsprop, history_adam,\n", " history_adamax, history_nadam, history_adamw),\n", " opt_names.split()):\n", " plt.plot(history.history[loss], label=f\"{opt_name}\", linewidth=3)\n", "\n", " plt.grid()\n", " plt.xlabel(\"Epochs\")\n", " plt.ylabel({\"loss\": \"Training loss\", \"val_loss\": \"Validation loss\"}[loss])\n", " plt.legend(loc=\"upper left\")\n", " plt.axis([0, 9, 0.1, 0.7])\n", " plt.show()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Learning Rate Scheduling" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Power Scheduling" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "```lr = lr0 / (1 + steps / s)**c```\n", "* Keras uses `c=1` and `s = 1 / decay`" ] }, { "cell_type": "code", "execution_count": 66, "metadata": {}, "outputs": [], "source": [ "optimizer = tf.keras.optimizers.SGD(learning_rate=0.01, decay=1e-4)" ] }, { "cell_type": "code", "execution_count": 67, "metadata": {}, "outputs": [], "source": [ "history_power_scheduling = build_and_train_model(optimizer) # extra code" ] }, { "cell_type": "code", "execution_count": 68, "metadata": {}, "outputs": [], "source": [ "# extra code – this cell plots power scheduling\n", "\n", "import math\n", "\n", "learning_rate = 0.01\n", "decay = 1e-4\n", "batch_size = 32\n", "n_steps_per_epoch = math.ceil(len(X_train) / batch_size)\n", "n_epochs = 25\n", "\n", "epochs = np.arange(n_epochs)\n", "lrs = learning_rate / (1 + decay * epochs * n_steps_per_epoch)\n", "\n", "plt.plot(epochs, lrs, \"o-\")\n", "plt.axis([0, n_epochs - 1, 0, 0.01])\n", "plt.xlabel(\"Epoch\")\n", "plt.ylabel(\"Learning Rate\")\n", "plt.title(\"Power Scheduling\", 
fontsize=14)\n", "plt.grid(True)\n", "plt.show()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Exponential Scheduling" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "```lr = lr0 * 0.1 ** (epoch / s)```" ] }, { "cell_type": "code", "execution_count": 69, "metadata": {}, "outputs": [], "source": [ "def exponential_decay_fn(epoch):\n", " return 0.01 * 0.1 ** (epoch / 20)" ] }, { "cell_type": "code", "execution_count": 70, "metadata": {}, "outputs": [], "source": [ "def exponential_decay(lr0, s):\n", " def exponential_decay_fn(epoch):\n", " return lr0 * 0.1 ** (epoch / s)\n", " return exponential_decay_fn\n", "\n", "exponential_decay_fn = exponential_decay(lr0=0.01, s=20)" ] }, { "cell_type": "code", "execution_count": 71, "metadata": {}, "outputs": [], "source": [ "# extra code – build and compile a model for Fashion MNIST\n", "\n", "tf.random.set_seed(42)\n", "model = build_model()\n", "optimizer = tf.keras.optimizers.SGD(learning_rate=0.001)\n", "model.compile(loss=\"sparse_categorical_crossentropy\", optimizer=optimizer,\n", " metrics=[\"accuracy\"])" ] }, { "cell_type": "code", "execution_count": 72, "metadata": {}, "outputs": [], "source": [ "lr_scheduler = tf.keras.callbacks.LearningRateScheduler(exponential_decay_fn)\n", "history = model.fit(X_train, y_train, epochs=n_epochs,\n", " validation_data=(X_valid, y_valid),\n", " callbacks=[lr_scheduler])" ] }, { "cell_type": "code", "execution_count": 73, "metadata": {}, "outputs": [], "source": [ "# extra code – this cell plots exponential scheduling\n", "\n", "plt.plot(history.epoch, history.history[\"lr\"], \"o-\")\n", "plt.axis([0, n_epochs - 1, 0, 0.011])\n", "plt.xlabel(\"Epoch\")\n", "plt.ylabel(\"Learning Rate\")\n", "plt.title(\"Exponential Scheduling\", fontsize=14)\n", "plt.grid(True)\n", "plt.show()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The schedule function can take the current learning rate as a second argument:" ] }, { "cell_type": "code", "execution_count": 74, "metadata": {}, "outputs": [], "source": [ "def exponential_decay_fn(epoch, lr):\n", " return lr * 0.1 ** (1 / 20)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Extra material**: if you want to update the learning rate at each iteration rather than at each epoch, you can write your own callback class:" ] }, { "cell_type": "code", "execution_count": 75, "metadata": {}, "outputs": [], "source": [ "K = tf.keras.backend\n", "\n", "class ExponentialDecay(tf.keras.callbacks.Callback):\n", " def __init__(self, n_steps=40_000):\n", " super().__init__()\n", " self.n_steps = n_steps\n", "\n", " def on_batch_begin(self, batch, logs=None):\n", " # Note: the `batch` argument is reset at each epoch\n", " lr = K.get_value(self.model.optimizer.learning_rate)\n", " new_learning_rate = lr * 0.1 ** (1 / self.n_steps)\n", " K.set_value(self.model.optimizer.learning_rate, new_learning_rate)\n", "\n", " def on_epoch_end(self, epoch, logs=None):\n", " logs = logs or {}\n", " logs['lr'] = K.get_value(self.model.optimizer.learning_rate)" ] }, { "cell_type": "code", "execution_count": 76, "metadata": {}, "outputs": [], "source": [ "lr0 = 0.01\n", "model = build_model()\n", "optimizer = tf.keras.optimizers.SGD(learning_rate=lr0)\n", "model.compile(loss=\"sparse_categorical_crossentropy\", optimizer=optimizer,\n", " metrics=[\"accuracy\"])" ] }, { "cell_type": "code", "execution_count": 77, "metadata": {}, "outputs": [], "source": [ "n_epochs = 25\n", "batch_size = 32\n", "n_steps = n_epochs * math.ceil(len(X_train) / 
batch_size)\n", "exp_decay = ExponentialDecay(n_steps)\n", "history = model.fit(X_train, y_train, epochs=n_epochs,\n", " validation_data=(X_valid, y_valid),\n", " callbacks=[exp_decay])" ] }, { "cell_type": "code", "execution_count": 78, "metadata": { "scrolled": true }, "outputs": [], "source": [ "n_steps = n_epochs * math.ceil(len(X_train) / batch_size)\n", "steps = np.arange(n_steps)\n", "decay_rate = 0.1\n", "lrs = lr0 * decay_rate ** (steps / n_steps)\n", "\n", "plt.plot(steps, lrs, \"-\", linewidth=2)\n", "plt.axis([0, n_steps - 1, 0, lr0 * 1.1])\n", "plt.xlabel(\"Batch\")\n", "plt.ylabel(\"Learning Rate\")\n", "plt.title(\"Exponential Scheduling (per batch)\", fontsize=14)\n", "plt.grid(True)\n", "plt.show()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Piecewise Constant Scheduling" ] }, { "cell_type": "code", "execution_count": 79, "metadata": {}, "outputs": [], "source": [ "def piecewise_constant_fn(epoch):\n", " if epoch < 5:\n", " return 0.01\n", " elif epoch < 15:\n", " return 0.005\n", " else:\n", " return 0.001" ] }, { "cell_type": "code", "execution_count": 80, "metadata": {}, "outputs": [], "source": [ "# extra code – this cell demonstrates a more general way to define\n", "# piecewise constant scheduling.\n", "\n", "def piecewise_constant(boundaries, values):\n", " boundaries = np.array([0] + boundaries)\n", " values = np.array(values)\n", " def piecewise_constant_fn(epoch):\n", " return values[(boundaries > epoch).argmax() - 1]\n", " return piecewise_constant_fn\n", "\n", "piecewise_constant_fn = piecewise_constant([5, 15], [0.01, 0.005, 0.001])" ] }, { "cell_type": "code", "execution_count": 81, "metadata": {}, "outputs": [], "source": [ "# extra code – use a tf.keras.callbacks.LearningRateScheduler like earlier\n", "\n", "n_epochs = 25\n", "\n", "lr_scheduler = tf.keras.callbacks.LearningRateScheduler(piecewise_constant_fn)\n", "\n", "model = build_model()\n", "optimizer = tf.keras.optimizers.Nadam(learning_rate=lr0)\n", "model.compile(loss=\"sparse_categorical_crossentropy\", optimizer=optimizer,\n", " metrics=[\"accuracy\"])\n", "history = model.fit(X_train, y_train, epochs=n_epochs,\n", " validation_data=(X_valid, y_valid),\n", " callbacks=[lr_scheduler])" ] }, { "cell_type": "code", "execution_count": 82, "metadata": {}, "outputs": [], "source": [ "# extra code – this cell plots piecewise constant scheduling\n", "\n", "plt.plot(history.epoch, history.history[\"lr\"], \"o-\")\n", "plt.axis([0, n_epochs - 1, 0, 0.011])\n", "plt.xlabel(\"Epoch\")\n", "plt.ylabel(\"Learning Rate\")\n", "plt.title(\"Piecewise Constant Scheduling\", fontsize=14)\n", "plt.grid(True)\n", "plt.show()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Performance Scheduling" ] }, { "cell_type": "code", "execution_count": 83, "metadata": {}, "outputs": [], "source": [ "# extra code – build and compile the model\n", "\n", "model = build_model()\n", "optimizer = tf.keras.optimizers.SGD(learning_rate=lr0)\n", "model.compile(loss=\"sparse_categorical_crossentropy\", optimizer=optimizer,\n", " metrics=[\"accuracy\"])" ] }, { "cell_type": "code", "execution_count": 84, "metadata": {}, "outputs": [], "source": [ "lr_scheduler = tf.keras.callbacks.ReduceLROnPlateau(factor=0.5, patience=5)\n", "history = model.fit(X_train, y_train, epochs=n_epochs,\n", " validation_data=(X_valid, y_valid),\n", " callbacks=[lr_scheduler])" ] }, { "cell_type": "code", "execution_count": 85, "metadata": {}, "outputs": [], "source": [ "# extra code – this cell plots performance 
scheduling\n", "\n", "plt.plot(history.epoch, history.history[\"lr\"], \"bo-\")\n", "plt.xlabel(\"Epoch\")\n", "plt.ylabel(\"Learning Rate\", color='b')\n", "plt.tick_params('y', colors='b')\n", "plt.gca().set_xlim(0, n_epochs - 1)\n", "plt.grid(True)\n", "\n", "ax2 = plt.gca().twinx()\n", "ax2.plot(history.epoch, history.history[\"val_loss\"], \"r^-\")\n", "ax2.set_ylabel('Validation Loss', color='r')\n", "ax2.tick_params('y', colors='r')\n", "\n", "plt.title(\"Reduce LR on Plateau\", fontsize=14)\n", "plt.show()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### tf.keras schedulers" ] }, { "cell_type": "code", "execution_count": 86, "metadata": {}, "outputs": [], "source": [ "import math\n", "\n", "batch_size = 32\n", "n_epochs = 25\n", "n_steps = n_epochs * math.ceil(len(X_train) / batch_size)\n", "scheduled_learning_rate = tf.keras.optimizers.schedules.ExponentialDecay(\n", " initial_learning_rate=0.01, decay_steps=n_steps, decay_rate=0.1)\n", "optimizer = tf.keras.optimizers.SGD(learning_rate=scheduled_learning_rate)" ] }, { "cell_type": "code", "execution_count": 87, "metadata": {}, "outputs": [], "source": [ "# extra code – build and train the model\n", "model = build_and_train_model(optimizer)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "For piecewise constant scheduling, try this:" ] }, { "cell_type": "code", "execution_count": 88, "metadata": {}, "outputs": [], "source": [ "# extra code – shows how to use PiecewiseConstantDecay\n", "scheduled_learning_rate = tf.keras.optimizers.schedules.PiecewiseConstantDecay(\n", " boundaries=[5. * n_steps_per_epoch, 15. * n_steps_per_epoch],\n", " values=[0.01, 0.005, 0.001])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### 1Cycle scheduling" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The `ExponentialLearningRate` custom callback updates the learning rate during training, at the end of each batch. It multiplies it by a constant `factor`. It also saves the learning rate and loss at each batch. Since `logs[\"loss\"]` is actually the mean loss since the start of the epoch, and we want to save the batch loss instead, we must compute the mean times the number of batches since the beginning of the epoch to get the total loss so far, then we subtract the total loss at the previous batch to get the current batch's loss." ] }, { "cell_type": "code", "execution_count": 89, "metadata": {}, "outputs": [], "source": [ "K = tf.keras.backend\n", "\n", "class ExponentialLearningRate(tf.keras.callbacks.Callback):\n", " def __init__(self, factor):\n", " self.factor = factor\n", " self.rates = []\n", " self.losses = []\n", "\n", " def on_epoch_begin(self, epoch, logs=None):\n", " self.sum_of_epoch_losses = 0\n", "\n", " def on_batch_end(self, batch, logs=None):\n", " mean_epoch_loss = logs[\"loss\"] # the epoch's mean loss so far \n", " new_sum_of_epoch_losses = mean_epoch_loss * (batch + 1)\n", " batch_loss = new_sum_of_epoch_losses - self.sum_of_epoch_losses\n", " self.sum_of_epoch_losses = new_sum_of_epoch_losses\n", " self.rates.append(K.get_value(self.model.optimizer.learning_rate))\n", " self.losses.append(batch_loss)\n", " K.set_value(self.model.optimizer.learning_rate,\n", " self.model.optimizer.learning_rate * self.factor)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The `find_learning_rate()` function trains the model using the `ExponentialLearningRate` callback, and it returns the learning rates and corresponding batch losses. 
At the end, it restores the model and its optimizer to their initial state." ] }, { "cell_type": "code", "execution_count": 90, "metadata": {}, "outputs": [], "source": [ "def find_learning_rate(model, X, y, epochs=1, batch_size=32, min_rate=1e-4,\n", " max_rate=1):\n", " init_weights = model.get_weights()\n", " iterations = math.ceil(len(X) / batch_size) * epochs\n", " factor = (max_rate / min_rate) ** (1 / iterations)\n", " init_lr = K.get_value(model.optimizer.learning_rate)\n", " K.set_value(model.optimizer.learning_rate, min_rate)\n", " exp_lr = ExponentialLearningRate(factor)\n", " history = model.fit(X, y, epochs=epochs, batch_size=batch_size,\n", " callbacks=[exp_lr])\n", " K.set_value(model.optimizer.learning_rate, init_lr)\n", " model.set_weights(init_weights)\n", " return exp_lr.rates, exp_lr.losses" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The `plot_lr_vs_loss()` function plots the learning rates vs the losses. The optimal learning rate to use as the maximum learning rate in 1cycle is near the bottom of the curve." ] }, { "cell_type": "code", "execution_count": 91, "metadata": {}, "outputs": [], "source": [ "def plot_lr_vs_loss(rates, losses):\n", " plt.plot(rates, losses, \"b\")\n", " plt.gca().set_xscale('log')\n", " max_loss = losses[0] + min(losses)\n", " plt.hlines(min(losses), min(rates), max(rates), color=\"k\")\n", " plt.axis([min(rates), max(rates), 0, max_loss])\n", " plt.xlabel(\"Learning rate\")\n", " plt.ylabel(\"Loss\")\n", " plt.grid()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's build a simple Fashion MNIST model and compile it:" ] }, { "cell_type": "code", "execution_count": 92, "metadata": {}, "outputs": [], "source": [ "model = build_model()\n", "model.compile(loss=\"sparse_categorical_crossentropy\",\n", " optimizer=tf.keras.optimizers.SGD(learning_rate=0.001),\n", " metrics=[\"accuracy\"])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now let's find the optimal max learning rate for 1cycle:" ] }, { "cell_type": "code", "execution_count": 93, "metadata": {}, "outputs": [], "source": [ "batch_size = 128\n", "rates, losses = find_learning_rate(model, X_train, y_train, epochs=1,\n", " batch_size=batch_size)\n", "plot_lr_vs_loss(rates, losses)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Looks like the max learning rate to use for 1cycle is around $10^{-1}$ (i.e., 0.1)." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The `OneCycleScheduler` custom callback updates the learning rate at the beginning of each batch. It applies the logic described in the book: increase the learning rate linearly during about half of training, then reduce it linearly back to the initial learning rate, and lastly reduce it down to close to zero linearly for the very last part of training."
] }, { "cell_type": "code", "execution_count": 94, "metadata": {}, "outputs": [], "source": [ "class OneCycleScheduler(tf.keras.callbacks.Callback):\n", " def __init__(self, iterations, max_lr=1e-3, start_lr=None,\n", " last_iterations=None, last_lr=None):\n", " self.iterations = iterations\n", " self.max_lr = max_lr\n", " self.start_lr = start_lr or max_lr / 10\n", " self.last_iterations = last_iterations or iterations // 10 + 1\n", " self.half_iteration = (iterations - self.last_iterations) // 2\n", " self.last_lr = last_lr or self.start_lr / 1000\n", " self.iteration = 0\n", "\n", " def _interpolate(self, iter1, iter2, lr1, lr2):\n", " return (lr2 - lr1) * (self.iteration - iter1) / (iter2 - iter1) + lr1\n", "\n", " def on_batch_begin(self, batch, logs):\n", " if self.iteration < self.half_iteration:\n", " lr = self._interpolate(0, self.half_iteration, self.start_lr,\n", " self.max_lr)\n", " elif self.iteration < 2 * self.half_iteration:\n", " lr = self._interpolate(self.half_iteration, 2 * self.half_iteration,\n", " self.max_lr, self.start_lr)\n", " else:\n", " lr = self._interpolate(2 * self.half_iteration, self.iterations,\n", " self.start_lr, self.last_lr)\n", " self.iteration += 1\n", " K.set_value(self.model.optimizer.learning_rate, lr)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's build and compile a simple Fashion MNIST model, then train it using the `OneCycleScheduler` callback:" ] }, { "cell_type": "code", "execution_count": 95, "metadata": {}, "outputs": [], "source": [ "model = build_model()\n", "model.compile(loss=\"sparse_categorical_crossentropy\",\n", " optimizer=tf.keras.optimizers.SGD(),\n", " metrics=[\"accuracy\"])\n", "n_epochs = 25\n", "onecycle = OneCycleScheduler(math.ceil(len(X_train) / batch_size) * n_epochs,\n", " max_lr=0.1)\n", "history = model.fit(X_train, y_train, epochs=n_epochs, batch_size=batch_size,\n", " validation_data=(X_valid, y_valid),\n", " callbacks=[onecycle])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# Avoiding Overfitting Through Regularization" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## $\\ell_1$ and $\\ell_2$ regularization" ] }, { "cell_type": "code", "execution_count": 96, "metadata": {}, "outputs": [], "source": [ "layer = tf.keras.layers.Dense(100, activation=\"relu\",\n", " kernel_initializer=\"he_normal\",\n", " kernel_regularizer=tf.keras.regularizers.l2(0.01))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Or use `l1(0.1)` for ℓ1 regularization with a factor of 0.1, or `l1_l2(0.1, 0.01)` for both ℓ1 and ℓ2 regularization, with factors 0.1 and 0.01 respectively." 
] }, { "cell_type": "code", "execution_count": 97, "metadata": {}, "outputs": [], "source": [ "tf.random.set_seed(42) # extra code – for reproducibility" ] }, { "cell_type": "code", "execution_count": 98, "metadata": {}, "outputs": [], "source": [ "from functools import partial\n", "\n", "RegularizedDense = partial(tf.keras.layers.Dense,\n", " activation=\"relu\",\n", " kernel_initializer=\"he_normal\",\n", " kernel_regularizer=tf.keras.regularizers.l2(0.01))\n", "\n", "model = tf.keras.Sequential([\n", " tf.keras.layers.Flatten(input_shape=[28, 28]),\n", " RegularizedDense(100),\n", " RegularizedDense(100),\n", " RegularizedDense(10, activation=\"softmax\")\n", "])" ] }, { "cell_type": "code", "execution_count": 99, "metadata": {}, "outputs": [], "source": [ "# extra code – compile and train the model\n", "optimizer = tf.keras.optimizers.SGD(learning_rate=0.02)\n", "model.compile(loss=\"sparse_categorical_crossentropy\", optimizer=optimizer,\n", " metrics=[\"accuracy\"])\n", "history = model.fit(X_train, y_train, epochs=2,\n", " validation_data=(X_valid, y_valid))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Dropout" ] }, { "cell_type": "code", "execution_count": 100, "metadata": {}, "outputs": [], "source": [ "tf.random.set_seed(42) # extra code – for reproducibility" ] }, { "cell_type": "code", "execution_count": 101, "metadata": {}, "outputs": [], "source": [ "model = tf.keras.Sequential([\n", " tf.keras.layers.Flatten(input_shape=[28, 28]),\n", " tf.keras.layers.Dropout(rate=0.2),\n", " tf.keras.layers.Dense(100, activation=\"relu\",\n", " kernel_initializer=\"he_normal\"),\n", " tf.keras.layers.Dropout(rate=0.2),\n", " tf.keras.layers.Dense(100, activation=\"relu\",\n", " kernel_initializer=\"he_normal\"),\n", " tf.keras.layers.Dropout(rate=0.2),\n", " tf.keras.layers.Dense(10, activation=\"softmax\")\n", "])" ] }, { "cell_type": "code", "execution_count": 102, "metadata": {}, "outputs": [], "source": [ "# extra code – compile and train the model\n", "optimizer = tf.keras.optimizers.SGD(learning_rate=0.01, momentum=0.9)\n", "model.compile(loss=\"sparse_categorical_crossentropy\", optimizer=optimizer,\n", " metrics=[\"accuracy\"])\n", "history = model.fit(X_train, y_train, epochs=10,\n", " validation_data=(X_valid, y_valid))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The training accuracy looks like it's lower than the validation accuracy, but that's just because dropout is only active during training. If we evaluate the model on the training set after training (i.e., with dropout turned off), we get the \"real\" training accuracy, which is very slightly higher than the validation accuracy and the test accuracy:" ] }, { "cell_type": "code", "execution_count": 103, "metadata": {}, "outputs": [], "source": [ "model.evaluate(X_train, y_train)" ] }, { "cell_type": "code", "execution_count": 104, "metadata": {}, "outputs": [], "source": [ "model.evaluate(X_test, y_test)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Note**: make sure to use `AlphaDropout` instead of `Dropout` if you want to build a self-normalizing neural net using SELU." 
] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## MC Dropout" ] }, { "cell_type": "code", "execution_count": 105, "metadata": {}, "outputs": [], "source": [ "tf.random.set_seed(42) # extra code – for reproducibility" ] }, { "cell_type": "code", "execution_count": 106, "metadata": {}, "outputs": [], "source": [ "y_probas = np.stack([model(X_test, training=True)\n", " for sample in range(100)])\n", "y_proba = y_probas.mean(axis=0)" ] }, { "cell_type": "code", "execution_count": 107, "metadata": {}, "outputs": [], "source": [ "model.predict(X_test[:1]).round(3)" ] }, { "cell_type": "code", "execution_count": 108, "metadata": {}, "outputs": [], "source": [ "y_proba[0].round(3)" ] }, { "cell_type": "code", "execution_count": 109, "metadata": {}, "outputs": [], "source": [ "y_std = y_probas.std(axis=0)\n", "y_std[0].round(3)" ] }, { "cell_type": "code", "execution_count": 110, "metadata": {}, "outputs": [], "source": [ "y_pred = y_proba.argmax(axis=1)\n", "accuracy = (y_pred == y_test).sum() / len(y_test)\n", "accuracy" ] }, { "cell_type": "code", "execution_count": 111, "metadata": {}, "outputs": [], "source": [ "class MCDropout(tf.keras.layers.Dropout):\n", " def call(self, inputs, training=None):\n", " return super().call(inputs, training=True)" ] }, { "cell_type": "code", "execution_count": 112, "metadata": {}, "outputs": [], "source": [ "# extra code – shows how to convert Dropout to MCDropout in a Sequential model\n", "Dropout = tf.keras.layers.Dropout\n", "mc_model = tf.keras.Sequential([\n", " MCDropout(layer.rate) if isinstance(layer, Dropout) else layer\n", " for layer in model.layers\n", "])\n", "mc_model.set_weights(model.get_weights())" ] }, { "cell_type": "code", "execution_count": 113, "metadata": {}, "outputs": [], "source": [ "mc_model.summary()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now we can use the model with MC Dropout:" ] }, { "cell_type": "code", "execution_count": 114, "metadata": {}, "outputs": [], "source": [ "# extra code – shows that the model works without retraining\n", "tf.random.set_seed(42)\n", "np.mean([mc_model.predict(X_test[:1])\n", " for sample in range(100)], axis=0).round(2)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Max norm" ] }, { "cell_type": "code", "execution_count": 115, "metadata": {}, "outputs": [], "source": [ "dense = tf.keras.layers.Dense(\n", " 100, activation=\"relu\", kernel_initializer=\"he_normal\",\n", " kernel_constraint=tf.keras.constraints.max_norm(1.))" ] }, { "cell_type": "code", "execution_count": 116, "metadata": {}, "outputs": [], "source": [ "# extra code – shows how to apply max norm to every hidden layer in a model\n", "\n", "MaxNormDense = partial(tf.keras.layers.Dense,\n", " activation=\"relu\", kernel_initializer=\"he_normal\",\n", " kernel_constraint=tf.keras.constraints.max_norm(1.))\n", "\n", "tf.random.set_seed(42)\n", "model = tf.keras.Sequential([\n", " tf.keras.layers.Flatten(input_shape=[28, 28]),\n", " MaxNormDense(100),\n", " MaxNormDense(100),\n", " tf.keras.layers.Dense(10, activation=\"softmax\")\n", "])\n", "optimizer = tf.keras.optimizers.SGD(learning_rate=0.01, momentum=0.9)\n", "model.compile(loss=\"sparse_categorical_crossentropy\", optimizer=optimizer,\n", " metrics=[\"accuracy\"])\n", "history = model.fit(X_train, y_train, epochs=10,\n", " validation_data=(X_valid, y_valid))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# Exercises" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 1. to 7." 
] }, { "cell_type": "markdown", "metadata": {}, "source": [ "1. Glorot initialization and He initialization were designed to make the output standard deviation as close as possible to the input standard deviation, at least at the beginning of training. This reduces the vanishing/exploding gradients problem.\n", "2. No, all weights should be sampled independently; they should not all have the same initial value. One important goal of sampling weights randomly is to break symmetry: if all the weights have the same initial value, even if that value is not zero, then symmetry is not broken (i.e., all neurons in a given layer are equivalent), and backpropagation will be unable to break it. Concretely, this means that all the neurons in any given layer will always have the same weights. It's like having just one neuron per layer, and much slower. It is virtually impossible for such a configuration to converge to a good solution.\n", "3. It is perfectly fine to initialize the bias terms to zero. Some people like to initialize them just like weights, and that's OK too; it does not make much difference.\n", "4. ReLU is usually a good default for the hidden layers, as it is fast and yields good results. Its ability to output precisely zero can also be useful in some cases (e.g., see Chapter 17). Moreover, it can sometimes benefit from optimized implementations as well as from hardware acceleration. The leaky ReLU variants of ReLU can improve the model's quality without hindering its speed too much compared to ReLU. For large neural nets and more complex problems, GLU, Swish and Mish can give you a slightly higher quality model, but they have a computational cost. The hyperbolic tangent (tanh) can be useful in the output layer if you need to output a number in a fixed range (by default between –1 and 1), but nowadays it is not used much in hidden layers, except in recurrent nets. The sigmoid activation function is also useful in the output layer when you need to estimate a probability (e.g., for binary classification), but it is rarely used in hidden layers (there are exceptions—for example, for the coding layer of variational autoencoders; see Chapter 17). The softplus activation function is useful in the output layer when you need to ensure that the output will always be positive. The softmax activation function is useful in the output layer to estimate probabilities for mutually exclusive classes, but it is rarely (if ever) used in hidden layers.\n", "5. If you set the `momentum` hyperparameter too close to 1 (e.g., 0.99999) when using an `SGD` optimizer, then the algorithm will likely pick up a lot of speed, hopefully moving roughly toward the global minimum, but its momentum will carry it right past the minimum. Then it will slow down and come back, accelerate again, overshoot again, and so on. It may oscillate this way many times before converging, so overall it will take much longer to converge than with a smaller `momentum` value.\n", "6. One way to produce a sparse model (i.e., with most weights equal to zero) is to train the model normally, then zero out tiny weights. For more sparsity, you can apply ℓ1 regularization during training, which pushes the optimizer toward sparsity. A third option is to use the TensorFlow Model Optimization Toolkit.\n", "7. Yes, dropout does slow down training, in general roughly by a factor of two. However, it has no impact on inference speed since it is only turned on during training. 
MC Dropout is exactly like dropout during training, but it is still active during inference, so each inference is slowed down slightly. More importantly, when using MC Dropout you generally want to run inference 10 times or more to get better predictions. This means that making predictions is slowed down by a factor of 10 or more." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 8. Deep Learning on CIFAR10" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### a.\n", "*Exercise: Build a DNN with 20 hidden layers of 100 neurons each (that's too many, but it's the point of this exercise). Use He initialization and the Swish activation function.*" ] }, { "cell_type": "code", "execution_count": 117, "metadata": {}, "outputs": [], "source": [ "tf.random.set_seed(42)\n", "\n", "model = tf.keras.Sequential()\n", "model.add(tf.keras.layers.Flatten(input_shape=[32, 32, 3]))\n", "for _ in range(20):\n", " model.add(tf.keras.layers.Dense(100,\n", " activation=\"swish\",\n", " kernel_initializer=\"he_normal\"))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### b.\n", "*Exercise: Using Nadam optimization and early stopping, train the network on the CIFAR10 dataset. You can load it with `tf.keras.datasets.cifar10.load_data()`. The dataset is composed of 60,000 32 × 32–pixel color images (50,000 for training, 10,000 for testing) with 10 classes, so you'll need a softmax output layer with 10 neurons. Remember to search for the right learning rate each time you change the model's architecture or hyperparameters.*" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's add the output layer to the model:" ] }, { "cell_type": "code", "execution_count": 118, "metadata": {}, "outputs": [], "source": [ "model.add(tf.keras.layers.Dense(10, activation=\"softmax\"))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's use a Nadam optimizer with a learning rate of 5e-5. I tried learning rates 1e-5, 3e-5, 1e-4, 3e-4, 1e-3, 3e-3 and 1e-2, and I compared their learning curves for 10 epochs each (using the TensorBoard callback, below). The learning rates 3e-5 and 1e-4 were pretty good, so I tried 5e-5, which turned out to be slightly better." ] }, { "cell_type": "code", "execution_count": 119, "metadata": {}, "outputs": [], "source": [ "optimizer = tf.keras.optimizers.Nadam(learning_rate=5e-5)\n", "model.compile(loss=\"sparse_categorical_crossentropy\",\n", " optimizer=optimizer,\n", " metrics=[\"accuracy\"])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's load the CIFAR10 dataset. We also want to use early stopping, so we need a validation set. 
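" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Note: `fit()` could also carve a validation set out of the training data automatically, via its `validation_split` argument. The sketch below shows that alternative, but it is not used in this notebook: splitting manually means every model in this exercise is validated on exactly the same images." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# extra code – an alternative (not used here): let fit() split off part of the\n", "# training data (taken from the end) as the validation set\n", "# history = model.fit(X_train_full, y_train_full, epochs=100,\n", "#                     validation_split=0.1, callbacks=callbacks)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "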
Let's use the first 5,000 images of the original training set as the validation set:" ] }, { "cell_type": "code", "execution_count": 120, "metadata": {}, "outputs": [], "source": [ "(X_train_full, y_train_full), (X_test, y_test) = tf.keras.datasets.cifar10.load_data()\n", "\n", "X_train = X_train_full[5000:]\n", "y_train = y_train_full[5000:]\n", "X_valid = X_train_full[:5000]\n", "y_valid = y_train_full[:5000]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now we can create the callbacks we need and train the model:" ] }, { "cell_type": "code", "execution_count": 121, "metadata": {}, "outputs": [], "source": [ "early_stopping_cb = tf.keras.callbacks.EarlyStopping(patience=20,\n", " restore_best_weights=True)\n", "model_checkpoint_cb = tf.keras.callbacks.ModelCheckpoint(\"my_cifar10_model\",\n", " save_best_only=True)\n", "run_index = 1 # increment every time you train the model\n", "run_logdir = Path() / \"my_cifar10_logs\" / f\"run_{run_index:03d}\"\n", "tensorboard_cb = tf.keras.callbacks.TensorBoard(run_logdir)\n", "callbacks = [early_stopping_cb, model_checkpoint_cb, tensorboard_cb]" ] }, { "cell_type": "code", "execution_count": 122, "metadata": {}, "outputs": [], "source": [ "%tensorboard --logdir=./my_cifar10_logs" ] }, { "cell_type": "code", "execution_count": 123, "metadata": {}, "outputs": [], "source": [ "model.fit(X_train, y_train, epochs=100,\n", " validation_data=(X_valid, y_valid),\n", " callbacks=callbacks)" ] }, { "cell_type": "code", "execution_count": 124, "metadata": {}, "outputs": [], "source": [ "model.evaluate(X_valid, y_valid)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The model with the lowest validation loss gets about 46.7% accuracy on the validation set. It took 29 epochs to reach the lowest validation loss, with roughly 10 seconds per epoch on my laptop (without a GPU). Let's see if we can improve the model using Batch Normalization." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### c.\n", "*Exercise: Now try adding Batch Normalization and compare the learning curves: Is it converging faster than before? Does it produce a better model? How does it affect training speed?*" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The code below is very similar to the code above, with a few changes:\n", "\n", "* I added a BN layer after every Dense layer (before the activation function), except for the output layer.\n", "* I changed the learning rate to 5e-4. I experimented with 1e-5, 3e-5, 5e-5, 1e-4, 3e-4, 5e-4, 1e-3 and 3e-3, and I chose the one with the best validation performance after 20 epochs.\n", "* I renamed the run directories to run_bn_* and the model file name to `my_cifar10_bn_model`." 
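] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Note that placing the BN layers *before* the activation functions is only one option: placing them *after* the activations is also common, and which ordering works best is still debated, so it can be worth trying both. Purely as an illustration (this variant is not used in this notebook), one hidden block with the alternative ordering could look like the following sketch:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# extra code – just an illustration (not used below): one hidden block with\n", "# Batch Normalization placed *after* the activation function\n", "bn_after_activation_block = tf.keras.Sequential([\n", "    tf.keras.layers.Dense(100, kernel_initializer=\"he_normal\"),\n", "    tf.keras.layers.Activation(\"swish\"),\n", "    tf.keras.layers.BatchNormalization(),\n", "])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The model below sticks with the BN-before-activation ordering described above:"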
] }, { "cell_type": "code", "execution_count": 125, "metadata": {}, "outputs": [], "source": [ "tf.random.set_seed(42)\n", "\n", "model = tf.keras.Sequential()\n", "model.add(tf.keras.layers.Flatten(input_shape=[32, 32, 3]))\n", "for _ in range(20):\n", " model.add(tf.keras.layers.Dense(100, kernel_initializer=\"he_normal\"))\n", " model.add(tf.keras.layers.BatchNormalization())\n", " model.add(tf.keras.layers.Activation(\"swish\"))\n", "model.add(tf.keras.layers.Dense(10, activation=\"softmax\"))\n", "\n", "optimizer = tf.keras.optimizers.Nadam(learning_rate=5e-4)\n", "model.compile(loss=\"sparse_categorical_crossentropy\",\n", " optimizer=optimizer,\n", " metrics=[\"accuracy\"])\n", "\n", "early_stopping_cb = tf.keras.callbacks.EarlyStopping(patience=20,\n", " restore_best_weights=True)\n", "model_checkpoint_cb = tf.keras.callbacks.ModelCheckpoint(\"my_cifar10_bn_model\",\n", " save_best_only=True)\n", "run_index = 1 # increment every time you train the model\n", "run_logdir = Path() / \"my_cifar10_logs\" / f\"run_bn_{run_index:03d}\"\n", "tensorboard_cb = tf.keras.callbacks.TensorBoard(run_logdir)\n", "callbacks = [early_stopping_cb, model_checkpoint_cb, tensorboard_cb]\n", "\n", "model.fit(X_train, y_train, epochs=100,\n", " validation_data=(X_valid, y_valid),\n", " callbacks=callbacks)\n", "\n", "model.evaluate(X_valid, y_valid)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "* *Is the model converging faster than before?* Much faster! The previous model took 29 epochs to reach the lowest validation loss, while the new model achieved that same loss in just 12 epochs and continued to make progress until the 17th epoch. The BN layers stabilized training and allowed us to use a much larger learning rate, so convergence was faster.\n", "* *Does BN produce a better model?* Yes! The final model is also much better, with 50.7% validation accuracy instead of 46.7%. It's still not a very good model, but at least it's much better than before (a Convolutional Neural Network would do much better, but that's a different topic, see chapter 14).\n", "* *How does BN affect training speed?* Although the model converged much faster, each epoch took about 15s instead of 10s, because of the extra computations required by the BN layers. But overall the training time (wall time) to reach the best model was shortened by about 10%." 
] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### d.\n", "*Exercise: Try replacing Batch Normalization with SELU, and make the necessary adjustements to ensure the network self-normalizes (i.e., standardize the input features, use LeCun normal initialization, make sure the DNN contains only a sequence of dense layers, etc.).*" ] }, { "cell_type": "code", "execution_count": 126, "metadata": { "scrolled": true }, "outputs": [], "source": [ "tf.random.set_seed(42)\n", "\n", "model = tf.keras.Sequential()\n", "model.add(tf.keras.layers.Flatten(input_shape=[32, 32, 3]))\n", "for _ in range(20):\n", " model.add(tf.keras.layers.Dense(100,\n", " kernel_initializer=\"lecun_normal\",\n", " activation=\"selu\"))\n", "model.add(tf.keras.layers.Dense(10, activation=\"softmax\"))\n", "\n", "optimizer = tf.keras.optimizers.Nadam(learning_rate=7e-4)\n", "model.compile(loss=\"sparse_categorical_crossentropy\",\n", " optimizer=optimizer,\n", " metrics=[\"accuracy\"])\n", "\n", "early_stopping_cb = tf.keras.callbacks.EarlyStopping(\n", " patience=20, restore_best_weights=True)\n", "model_checkpoint_cb = tf.keras.callbacks.ModelCheckpoint(\n", " \"my_cifar10_selu_model\", save_best_only=True)\n", "run_index = 1 # increment every time you train the model\n", "run_logdir = Path() / \"my_cifar10_logs\" / f\"run_selu_{run_index:03d}\"\n", "tensorboard_cb = tf.keras.callbacks.TensorBoard(run_logdir)\n", "callbacks = [early_stopping_cb, model_checkpoint_cb, tensorboard_cb]\n", "\n", "X_means = X_train.mean(axis=0)\n", "X_stds = X_train.std(axis=0)\n", "X_train_scaled = (X_train - X_means) / X_stds\n", "X_valid_scaled = (X_valid - X_means) / X_stds\n", "X_test_scaled = (X_test - X_means) / X_stds\n", "\n", "model.fit(X_train_scaled, y_train, epochs=100,\n", " validation_data=(X_valid_scaled, y_valid),\n", " callbacks=callbacks)\n", "\n", "model.evaluate(X_valid_scaled, y_valid)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This model reached the first model's validation loss in just 8 epochs. After 14 epochs, it reached its lowest validation loss, with about 50.3% accuracy, which is better than the original model (46.7%), but not quite as good as the model using batch normalization (50.7%). Each epoch took only 9 seconds. So it's the fastest model to train so far." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### e.\n", "*Exercise: Try regularizing the model with alpha dropout. 
Then, without retraining your model, see if you can achieve better accuracy using MC Dropout.*" ] }, { "cell_type": "code", "execution_count": 127, "metadata": {}, "outputs": [], "source": [ "tf.random.set_seed(42)\n", "\n", "model = tf.keras.Sequential()\n", "model.add(tf.keras.layers.Flatten(input_shape=[32, 32, 3]))\n", "for _ in range(20):\n", " model.add(tf.keras.layers.Dense(100,\n", " kernel_initializer=\"lecun_normal\",\n", " activation=\"selu\"))\n", "\n", "model.add(tf.keras.layers.AlphaDropout(rate=0.1))\n", "model.add(tf.keras.layers.Dense(10, activation=\"softmax\"))\n", "\n", "optimizer = tf.keras.optimizers.Nadam(learning_rate=5e-4)\n", "model.compile(loss=\"sparse_categorical_crossentropy\",\n", " optimizer=optimizer,\n", " metrics=[\"accuracy\"])\n", "\n", "early_stopping_cb = tf.keras.callbacks.EarlyStopping(\n", " patience=20, restore_best_weights=True)\n", "model_checkpoint_cb = tf.keras.callbacks.ModelCheckpoint(\n", " \"my_cifar10_alpha_dropout_model\", save_best_only=True)\n", "run_index = 1 # increment every time you train the model\n", "run_logdir = Path() / \"my_cifar10_logs\" / f\"run_alpha_dropout_{run_index:03d}\"\n", "tensorboard_cb = tf.keras.callbacks.TensorBoard(run_logdir)\n", "callbacks = [early_stopping_cb, model_checkpoint_cb, tensorboard_cb]\n", "\n", "X_means = X_train.mean(axis=0)\n", "X_stds = X_train.std(axis=0)\n", "X_train_scaled = (X_train - X_means) / X_stds\n", "X_valid_scaled = (X_valid - X_means) / X_stds\n", "X_test_scaled = (X_test - X_means) / X_stds\n", "\n", "model.fit(X_train_scaled, y_train, epochs=100,\n", " validation_data=(X_valid_scaled, y_valid),\n", " callbacks=callbacks)\n", "\n", "model.evaluate(X_valid_scaled, y_valid)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The model reaches 48.1% accuracy on the validation set. That's worse than without dropout (50.3%). With an extensive hyperparameter search, it might be possible to do better (I tried dropout rates of 5%, 10%, 20% and 40%, and learning rates 1e-4, 3e-4, 5e-4, and 1e-3), but probably not much better in this case." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's use MC Dropout now. We will need the `MCAlphaDropout` class we used earlier, so let's just copy it here for convenience:" ] }, { "cell_type": "code", "execution_count": 128, "metadata": {}, "outputs": [], "source": [ "class MCAlphaDropout(tf.keras.layers.AlphaDropout):\n", " def call(self, inputs):\n", " return super().call(inputs, training=True)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now let's create a new model, identical to the one we just trained (with the same weights), but with `MCAlphaDropout` dropout layers instead of `AlphaDropout` layers:" ] }, { "cell_type": "code", "execution_count": 129, "metadata": {}, "outputs": [], "source": [ "mc_model = tf.keras.Sequential([\n", " (\n", " MCAlphaDropout(layer.rate)\n", " if isinstance(layer, tf.keras.layers.AlphaDropout)\n", " else layer\n", " )\n", " for layer in model.layers\n", "])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Then let's add a couple utility functions. The first will run the model many times (10 by default) and it will return the mean predicted class probabilities. 
The second will use these mean probabilities to predict the most likely class for each instance:" ] }, { "cell_type": "code", "execution_count": 130, "metadata": {}, "outputs": [], "source": [ "def mc_dropout_predict_probas(mc_model, X, n_samples=10):\n", " Y_probas = [mc_model.predict(X) for sample in range(n_samples)]\n", " return np.mean(Y_probas, axis=0)\n", "\n", "def mc_dropout_predict_classes(mc_model, X, n_samples=10):\n", " Y_probas = mc_dropout_predict_probas(mc_model, X, n_samples)\n", " return Y_probas.argmax(axis=1)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now let's make predictions for all the instances in the validation set, and compute the accuracy:" ] }, { "cell_type": "code", "execution_count": 131, "metadata": {}, "outputs": [], "source": [ "tf.random.set_seed(42)\n", "\n", "y_pred = mc_dropout_predict_classes(mc_model, X_valid_scaled)\n", "accuracy = (y_pred == y_valid[:, 0]).mean()\n", "accuracy" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We get back to the accuracy of the model without dropout in this case (about 50.3% accuracy).\n", "\n", "So the best model we got in this exercise is the Batch Normalization model." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### f.\n", "*Exercise: Retrain your model using 1cycle scheduling and see if it improves training speed and model accuracy.*" ] }, { "cell_type": "code", "execution_count": 132, "metadata": {}, "outputs": [], "source": [ "tf.random.set_seed(42)\n", "\n", "model = tf.keras.Sequential()\n", "model.add(tf.keras.layers.Flatten(input_shape=[32, 32, 3]))\n", "for _ in range(20):\n", " model.add(tf.keras.layers.Dense(100,\n", " kernel_initializer=\"lecun_normal\",\n", " activation=\"selu\"))\n", "\n", "model.add(tf.keras.layers.AlphaDropout(rate=0.1))\n", "model.add(tf.keras.layers.Dense(10, activation=\"softmax\"))\n", "\n", "optimizer = tf.keras.optimizers.SGD()\n", "model.compile(loss=\"sparse_categorical_crossentropy\",\n", " optimizer=optimizer,\n", " metrics=[\"accuracy\"])" ] }, { "cell_type": "code", "execution_count": 133, "metadata": {}, "outputs": [], "source": [ "batch_size = 128\n", "rates, losses = find_learning_rate(model, X_train_scaled, y_train, epochs=1,\n", " batch_size=batch_size)\n", "plot_lr_vs_loss(rates, losses)" ] }, { "cell_type": "code", "execution_count": 134, "metadata": {}, "outputs": [], "source": [ "tf.random.set_seed(42)\n", "\n", "model = tf.keras.Sequential()\n", "model.add(tf.keras.layers.Flatten(input_shape=[32, 32, 3]))\n", "for _ in range(20):\n", " model.add(tf.keras.layers.Dense(100,\n", " kernel_initializer=\"lecun_normal\",\n", " activation=\"selu\"))\n", "\n", "model.add(tf.keras.layers.AlphaDropout(rate=0.1))\n", "model.add(tf.keras.layers.Dense(10, activation=\"softmax\"))\n", "\n", "optimizer = tf.keras.optimizers.SGD(learning_rate=2e-2)\n", "model.compile(loss=\"sparse_categorical_crossentropy\",\n", " optimizer=optimizer,\n", " metrics=[\"accuracy\"])" ] }, { "cell_type": "code", "execution_count": 135, "metadata": {}, "outputs": [], "source": [ "n_epochs = 15\n", "onecycle = OneCycleScheduler(math.ceil(len(X_train_scaled) / batch_size) * n_epochs, max_rate=0.05)\n", "history = model.fit(X_train_scaled, y_train, epochs=n_epochs, batch_size=batch_size,\n", " validation_data=(X_valid_scaled, y_valid),\n", " callbacks=[onecycle])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "One cycle allowed us to train the model in just 15 epochs, each taking only 2 seconds (thanks to the larger batch size). 
This is several times faster than the fastest model we trained so far. Moreover, we improved the model's performance (from 50.7% to 52.0%)." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.8.12" }, "nav_menu": { "height": "360px", "width": "416px" }, "toc": { "navigate_menu": true, "number_sections": true, "sideBar": true, "threshold": 6, "toc_cell": false, "toc_section_display": "block", "toc_window_display": false } }, "nbformat": 4, "nbformat_minor": 4 }