From 4b264044c879e509f59a098895d0eb616e6c92de Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Aur=C3=A9lien=20Geron?= Date: Wed, 21 Jun 2017 15:35:47 +0200 Subject: [PATCH] Add SELU activation function example and snip out repetitive outputs --- 11_deep_learning.ipynb | 1845 +++++++++++----------------------------- 1 file changed, 507 insertions(+), 1338 deletions(-) diff --git a/11_deep_learning.ipynb b/11_deep_learning.ipynb index c7daba7..999c4c2 100644 --- a/11_deep_learning.ipynb +++ b/11_deep_learning.ipynb @@ -2,40 +2,28 @@ "cells": [ { "cell_type": "markdown", - "metadata": { - "deletable": true, - "editable": true - }, + "metadata": {}, "source": [ "**Chapter 11 – Deep Learning**" ] }, { "cell_type": "markdown", - "metadata": { - "deletable": true, - "editable": true - }, + "metadata": {}, "source": [ "_This notebook contains all the sample code and solutions to the exercises in chapter 11._" ] }, { "cell_type": "markdown", - "metadata": { - "deletable": true, - "editable": true - }, + "metadata": {}, "source": [ "# Setup" ] }, { "cell_type": "markdown", - "metadata": { - "deletable": true, - "editable": true - }, + "metadata": {}, "source": [ "First, let's make sure this notebook works well in both Python 2 and 3, import a few common modules, ensure matplotlib plots figures inline, and prepare a function to save the figures:" ] }, { "cell_type": "code", "execution_count": 1, "metadata": { - "collapsed": true, - "deletable": true, - "editable": true + "collapsed": true }, "outputs": [], "source": [ @@ -85,10 +71,7 @@ }, { "cell_type": "markdown", - "metadata": { - "deletable": true, - "editable": true - }, + "metadata": {}, "source": [ "# Vanishing/Exploding Gradients Problem" ] }, { "cell_type": "code", "execution_count": 2, "metadata": { - "collapsed": true, - "deletable": true, - "editable": true + "collapsed": true }, "outputs": [], "source": [ "def logit(z):\n", " return 1 / (1 + np.exp(-z))" ] }, { "cell_type": "code", "execution_count": 3, - "metadata": { - "collapsed": false, - "deletable": true, - "editable": true - }, + "metadata": {}, "outputs": [], "source": [ "z = np.linspace(-5, 5, 200)\n", @@ -138,20 +115,14 @@ }, { "cell_type": "markdown", - "metadata": { - "deletable": true, - "editable": true - }, + "metadata": {}, "source": [ "## Xavier and He Initialization" ] }, { "cell_type": "markdown", - "metadata": { - "deletable": true, - "editable": true - }, + "metadata": {}, "source": [ "Note: the book uses `tensorflow.contrib.layers.fully_connected()` rather than `tf.layers.dense()` (which did not exist when this chapter was written). It is now preferable to use `tf.layers.dense()`, because anything in the contrib module may change or be deleted without notice. The `dense()` function is almost identical to the `fully_connected()` function.
The main differences relevant to this chapter are:\n", "* several parameters are renamed: `scope` becomes `name`, `activation_fn` becomes `activation` (and similarly the `_fn` suffix is removed from other parameters such as `normalizer_fn`), `weights_initializer` becomes `kernel_initializer`, etc.\n", @@ -164,9 +135,7 @@ "cell_type": "code", "execution_count": 4, "metadata": { - "collapsed": true, - "deletable": true, - "editable": true + "collapsed": true }, "outputs": [], "source": [ @@ -176,11 +145,7 @@ { "cell_type": "code", "execution_count": 5, - "metadata": { - "collapsed": false, - "deletable": true, - "editable": true - }, + "metadata": {}, "outputs": [], "source": [ "reset_graph()\n", @@ -194,11 +159,7 @@ { "cell_type": "code", "execution_count": 6, - "metadata": { - "collapsed": false, - "deletable": true, - "editable": true - }, + "metadata": {}, "outputs": [], "source": [ "he_init = tf.contrib.layers.variance_scaling_initializer()\n", @@ -208,20 +169,14 @@ }, { "cell_type": "markdown", - "metadata": { - "deletable": true, - "editable": true - }, + "metadata": {}, "source": [ "## Nonsaturating Activation Functions" ] }, { "cell_type": "markdown", - "metadata": { - "deletable": true, - "editable": true - }, + "metadata": {}, "source": [ "### Leaky ReLU" ] @@ -230,9 +185,7 @@ "cell_type": "code", "execution_count": 7, "metadata": { - "collapsed": true, - "deletable": true, - "editable": true + "collapsed": true }, "outputs": [], "source": [ @@ -243,11 +196,7 @@ { "cell_type": "code", "execution_count": 8, - "metadata": { - "collapsed": false, - "deletable": true, - "editable": true - }, + "metadata": {}, "outputs": [], "source": [ "plt.plot(z, leaky_relu(z, 0.05), \"b-\", linewidth=2)\n", @@ -265,10 +214,7 @@ }, { "cell_type": "markdown", - "metadata": { - "deletable": true, - "editable": true - }, + "metadata": {}, "source": [ "Implementing Leaky ReLU in TensorFlow:" ] @@ -277,9 +223,7 @@ "cell_type": "code", "execution_count": 9, "metadata": { - "collapsed": true, - "deletable": true, - "editable": true + "collapsed": true }, "outputs": [], "source": [ @@ -291,11 +235,7 @@ { "cell_type": "code", "execution_count": 10, - "metadata": { - "collapsed": false, - "deletable": true, - "editable": true - }, + "metadata": {}, "outputs": [], "source": [ "def leaky_relu(z, name=None):\n", @@ -306,10 +246,7 @@ }, { "cell_type": "markdown", - "metadata": { - "deletable": true, - "editable": true - }, + "metadata": {}, "source": [ "Let's train a neural network on MNIST using the Leaky ReLU. 
First let's create the graph:" ] @@ -318,9 +255,7 @@ "cell_type": "code", "execution_count": 11, "metadata": { - "collapsed": true, - "deletable": true, - "editable": true + "collapsed": true }, "outputs": [], "source": [ @@ -336,9 +271,7 @@ "cell_type": "code", "execution_count": 12, "metadata": { - "collapsed": true, - "deletable": true, - "editable": true + "collapsed": true }, "outputs": [], "source": [ @@ -349,11 +282,7 @@ { "cell_type": "code", "execution_count": 13, - "metadata": { - "collapsed": false, - "deletable": true, - "editable": true - }, + "metadata": {}, "outputs": [], "source": [ "with tf.name_scope(\"dnn\"):\n", @@ -366,9 +295,7 @@ "cell_type": "code", "execution_count": 14, "metadata": { - "collapsed": true, - "deletable": true, - "editable": true + "collapsed": true }, "outputs": [], "source": [ @@ -381,9 +308,7 @@ "cell_type": "code", "execution_count": 15, "metadata": { - "collapsed": true, - "deletable": true, - "editable": true + "collapsed": true }, "outputs": [], "source": [ @@ -398,9 +323,7 @@ "cell_type": "code", "execution_count": 16, "metadata": { - "collapsed": true, - "deletable": true, - "editable": true + "collapsed": true }, "outputs": [], "source": [ @@ -413,9 +336,7 @@ "cell_type": "code", "execution_count": 17, "metadata": { - "collapsed": true, - "deletable": true, - "editable": true + "collapsed": true }, "outputs": [], "source": [ @@ -425,10 +346,7 @@ }, { "cell_type": "markdown", - "metadata": { - "deletable": true, - "editable": true - }, + "metadata": {}, "source": [ "Let's load the data:" ] @@ -436,11 +354,7 @@ { "cell_type": "code", "execution_count": 18, - "metadata": { - "collapsed": false, - "deletable": true, - "editable": true - }, + "metadata": {}, "outputs": [], "source": [ "from tensorflow.examples.tutorials.mnist import input_data\n", @@ -451,9 +365,6 @@ "cell_type": "code", "execution_count": 19, "metadata": { - "collapsed": false, - "deletable": true, - "editable": true, "scrolled": true }, "outputs": [], @@ -477,10 +388,7 @@ }, { "cell_type": "markdown", - "metadata": { - "deletable": true, - "editable": true - }, + "metadata": {}, "source": [ "### ELU" ] @@ -488,25 +396,17 @@ { "cell_type": "code", "execution_count": 20, - "metadata": { - "collapsed": false, - "deletable": true, - "editable": true - }, + "metadata": {}, "outputs": [], "source": [ "def elu(z, alpha=1):\n", - " return np.where(z<0, alpha*(np.exp(z)-1), z)" + " return np.where(z < 0, alpha * (np.exp(z) - 1), z)" ] }, { "cell_type": "code", "execution_count": 21, - "metadata": { - "collapsed": false, - "deletable": true, - "editable": true - }, + "metadata": {}, "outputs": [], "source": [ "plt.plot(z, elu(z), \"b-\", linewidth=2)\n", @@ -514,7 +414,6 @@ "plt.plot([-5, 5], [-1, -1], 'k--')\n", "plt.plot([0, 0], [-2.2, 3.2], 'k-')\n", "plt.grid(True)\n", - "props = dict(facecolor='black', shrink=0.1)\n", "plt.title(r\"ELU activation function ($\\alpha=1$)\", fontsize=14)\n", "plt.axis([-5, 5, -2.2, 3.2])\n", "\n", @@ -524,10 +423,7 @@ }, { "cell_type": "markdown", - "metadata": { - "deletable": true, - "editable": true - }, + "metadata": {}, "source": [ "Implementing ELU in TensorFlow is trivial, just specify the activation function when building each layer:" ] @@ -536,9 +432,7 @@ "cell_type": "code", "execution_count": 22, "metadata": { - "collapsed": true, - "deletable": true, - "editable": true + "collapsed": true }, "outputs": [], "source": [ @@ -550,11 +444,7 @@ { "cell_type": "code", "execution_count": 23, - "metadata": { - "collapsed": false, - "deletable": 
true, - "editable": true - }, + "metadata": {}, "outputs": [], "source": [ "hidden1 = tf.layers.dense(X, n_hidden1, activation=tf.nn.elu, name=\"hidden1\")" @@ -562,20 +452,195 @@ }, { "cell_type": "markdown", + "metadata": {}, + "source": [ + "### SELU" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "This activation function was proposed in this [great paper](https://arxiv.org/pdf/1706.02515.pdf) by Günter Klambauer, Thomas Unterthiner and Andreas Mayr, published in June 2017 (I will definitely add it to the book). It outperforms the other activation functions very significantly for deep neural networks, so you should really try it out." + ] + }, + { + "cell_type": "code", + "execution_count": 24, "metadata": { - "deletable": true, - "editable": true + "collapsed": true }, + "outputs": [], + "source": [ + "def selu(z,\n", + " scale=1.0507009873554804934193349852946,\n", + " alpha=1.6732632423543772848170429916717):\n", + " return scale * elu(z, alpha)" + ] + }, + { + "cell_type": "code", + "execution_count": 25, + "metadata": {}, + "outputs": [], + "source": [ + "plt.plot(z, selu(z), \"b-\", linewidth=2)\n", + "plt.plot([-5, 5], [0, 0], 'k-')\n", + "plt.plot([-5, 5], [-1.758, -1.758], 'k--')\n", + "plt.plot([0, 0], [-2.2, 3.2], 'k-')\n", + "plt.grid(True)\n", + "plt.title(r\"SELU activation function\", fontsize=14)\n", + "plt.axis([-5, 5, -2.2, 3.2])\n", + "\n", + "save_fig(\"selu_plot\")\n", + "plt.show()" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "With this activation function, even a 100 layer deep neural network preserves roughly mean 0 and standard deviation 1 across all layers, avoiding the exploding/vanishing gradients problem:" + ] + }, + { + "cell_type": "code", + "execution_count": 26, + "metadata": {}, + "outputs": [], + "source": [ + "np.random.seed(42)\n", + "Z = np.random.normal(size=(500, 100))\n", + "for layer in range(100):\n", + " W = np.random.normal(size=(100, 100), scale=np.sqrt(1/100))\n", + " Z = selu(np.dot(Z, W))\n", + " means = np.mean(Z, axis=1)\n", + " stds = np.std(Z, axis=1)\n", + " if layer % 10 == 0:\n", + " print(\"Layer {}: {:.2f} < mean < {:.2f}, {:.2f} < std deviation < {:.2f}\".format(\n", + " layer, means.min(), means.max(), stds.min(), stds.max()))" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Here's a TensorFlow implementation (there will almost certainly be a `tf.nn.selu()` function in future TensorFlow versions):" + ] + }, + { + "cell_type": "code", + "execution_count": 27, + "metadata": { + "collapsed": true + }, + "outputs": [], + "source": [ + "def selu(z,\n", + " scale=1.0507009873554804934193349852946,\n", + " alpha=1.6732632423543772848170429916717):\n", + " return scale * tf.where(z >= 0.0, z, alpha * tf.nn.elu(z))" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "SELUs can also be combined with dropout, check out [this implementation](https://github.com/bioinf-jku/SNNs/blob/master/selu.py) by the Institute of Bioinformatics, Johannes Kepler University Linz." 
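Note that the paper's self-normalizing guarantee also assumes LeCun weight initialization (zero mean, variance 1/fan-in), which is what the `np.sqrt(1/100)` scaling emulated in the NumPy demo above. Here is one way you could get it with the TF 1.x API; this is a sketch under that assumption, not code from the notebook:

```python
# LeCun initialization: (truncated) normal with variance 1/fan_in
# (factor=1.0, versus the He-initialization default of factor=2.0).
lecun_init = tf.contrib.layers.variance_scaling_initializer(factor=1.0, mode='FAN_IN')
hidden1 = tf.layers.dense(X, n_hidden1, activation=selu,
                          kernel_initializer=lecun_init, name="hidden1")
```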
+ ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Let's create a neural net for MNIST using the SELU activation function:" + ] + }, + { + "cell_type": "code", + "execution_count": 28, + "metadata": { + "collapsed": true + }, + "outputs": [], + "source": [ + "reset_graph()\n", + "\n", + "n_inputs = 28 * 28 # MNIST\n", + "n_hidden1 = 300\n", + "n_hidden2 = 100\n", + "n_outputs = 10\n", + "\n", + "X = tf.placeholder(tf.float32, shape=(None, n_inputs), name=\"X\")\n", + "y = tf.placeholder(tf.int64, shape=(None), name=\"y\")\n", + "\n", + "with tf.name_scope(\"dnn\"):\n", + " hidden1 = tf.layers.dense(X, n_hidden1, activation=selu, name=\"hidden1\")\n", + " hidden2 = tf.layers.dense(hidden1, n_hidden2, activation=selu, name=\"hidden2\")\n", + " logits = tf.layers.dense(hidden2, n_outputs, name=\"outputs\")\n", + "\n", + "with tf.name_scope(\"loss\"):\n", + " xentropy = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=y, logits=logits)\n", + " loss = tf.reduce_mean(xentropy, name=\"loss\")\n", + "\n", + "learning_rate = 0.01\n", + "\n", + "with tf.name_scope(\"train\"):\n", + " optimizer = tf.train.GradientDescentOptimizer(learning_rate)\n", + " training_op = optimizer.minimize(loss)\n", + "\n", + "with tf.name_scope(\"eval\"):\n", + " correct = tf.nn.in_top_k(logits, y, 1)\n", + " accuracy = tf.reduce_mean(tf.cast(correct, tf.float32))\n", + "\n", + "init = tf.global_variables_initializer()\n", + "saver = tf.train.Saver()\n", + "n_epochs = 40\n", + "batch_size = 50" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Now let's train it. Do not forget to scale the inputs to mean 0 and standard deviation 1:" + ] + }, + { + "cell_type": "code", + "execution_count": 29, + "metadata": {}, + "outputs": [], + "source": [ + "means = mnist.train.images.mean(axis=0, keepdims=True)\n", + "stds = mnist.train.images.std(axis=0, keepdims=True) + 1e-10\n", + "\n", + "with tf.Session() as sess:\n", + " init.run()\n", + " for epoch in range(n_epochs):\n", + " for iteration in range(mnist.train.num_examples // batch_size):\n", + " X_batch, y_batch = mnist.train.next_batch(batch_size)\n", + " X_batch_scaled = (X_batch - means) / stds\n", + " sess.run(training_op, feed_dict={X: X_batch_scaled, y: y_batch})\n", + " if epoch % 5 == 0:\n", + " acc_train = accuracy.eval(feed_dict={X: X_batch_scaled, y: y_batch})\n", + " X_val_scaled = (mnist.validation.images - means) / stds\n", + " acc_test = accuracy.eval(feed_dict={X: X_val_scaled, y: mnist.validation.labels})\n", + " print(epoch, \"Batch accuracy:\", acc_train, \"Validation accuracy:\", acc_test)\n", + "\n", + " save_path = saver.save(sess, \"./my_model_final_selu.ckpt\")" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, "source": [ "# Batch Normalization" ] }, { "cell_type": "markdown", - "metadata": { - "deletable": true, - "editable": true - }, + "metadata": {}, "source": [ "Note: the book uses `tensorflow.contrib.layers.batch_norm()` rather than `tf.layers.batch_normalization()` (which did not exist when this chapter was written). It is now preferable to use `tf.layers.batch_normalization()`, because anything in the contrib module may change or be deleted without notice. Instead of using the `batch_norm()` function as a regularizer parameter to the `fully_connected()` function, we now use `batch_normalization()` and we explicitly create a distinct layer. 
The parameters are a bit different, in particular:\n", "* `decay` is renamed to `momentum`,\n", @@ -592,9 +657,7 @@ "cell_type": "code", "execution_count": 24, "metadata": { - "collapsed": true, - "deletable": true, - "editable": true + "collapsed": true }, "outputs": [], "source": [ @@ -628,9 +691,7 @@ "cell_type": "code", "execution_count": 25, "metadata": { - "collapsed": true, - "deletable": true, - "editable": true + "collapsed": true }, "outputs": [], "source": [ @@ -642,10 +703,7 @@ }, { "cell_type": "markdown", - "metadata": { - "deletable": true, - "editable": true - }, + "metadata": {}, "source": [ "To avoid repeating the same parameters over and over again, we can use Python's `partial()` function:" ] @@ -653,11 +711,7 @@ { "cell_type": "code", "execution_count": 26, - "metadata": { - "collapsed": false, - "deletable": true, - "editable": true - }, + "metadata": {}, "outputs": [], "source": [ "from functools import partial\n", @@ -677,10 +731,7 @@ }, { "cell_type": "markdown", - "metadata": { - "deletable": true, - "editable": true - }, + "metadata": {}, "source": [ "Let's build a neural net for MNIST, using the ELU activation function and Batch Normalization at each layer:" ] @@ -688,11 +739,7 @@ { "cell_type": "code", "execution_count": 27, - "metadata": { - "collapsed": false, - "deletable": true, - "editable": true - }, + "metadata": {}, "outputs": [], "source": [ "reset_graph()\n", @@ -740,10 +787,7 @@ }, { "cell_type": "markdown", - "metadata": { - "deletable": true, - "editable": true - }, + "metadata": {}, "source": [ "Note: since we are using `tf.layers.batch_normalization()` rather than `tf.contrib.layers.batch_norm()` (as in the book), we need to explicitly run the extra update operations needed by batch normalization (`sess.run([training_op, extra_update_ops],...`)." ] @@ -752,9 +796,7 @@ "cell_type": "code", "execution_count": 28, "metadata": { - "collapsed": true, - "deletable": true, - "editable": true + "collapsed": true }, "outputs": [], "source": [ @@ -765,11 +807,7 @@ { "cell_type": "code", "execution_count": 29, - "metadata": { - "collapsed": false, - "deletable": true, - "editable": true - }, + "metadata": {}, "outputs": [], "source": [ "extra_update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS)\n", @@ -790,20 +828,14 @@ }, { "cell_type": "markdown", - "metadata": { - "deletable": true, - "editable": true - }, + "metadata": {}, "source": [ "What!? That's not a great accuracy for MNIST. Of course, if you train for longer it will get much better accuracy, but with such a shallow network, Batch Norm and ELU are unlikely to have very positive impact: they shine mostly for much deeper nets." ] }, { "cell_type": "markdown", - "metadata": { - "deletable": true, - "editable": true - }, + "metadata": {}, "source": [ "Note that you could also make the training operation depend on the update operations:\n", "\n", @@ -824,10 +856,7 @@ }, { "cell_type": "markdown", - "metadata": { - "deletable": true, - "editable": true - }, + "metadata": {}, "source": [ "One more thing: notice that the list of trainable variables is shorter than the list of all global variables. This is because the moving averages are non-trainable variables. If you want to reuse a pretrained neural network (see below), you must not forget these non-trainable variables." 
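The good news is that a plain `tf.train.Saver()` already handles this: by default it saves and restores every variable in the graph, not just the trainable ones. The sketch below contrasts the two options, assuming the batch-norm graph built above:

```python
# Default Saver: covers all global variables, including the non-trainable
# moving means and variances maintained by batch normalization.
saver = tf.train.Saver()

# This Saver would silently skip the moving averages, so a model restored
# from its checkpoints would misbehave at inference time. Avoid it here:
trainable_only_saver = tf.train.Saver(var_list=tf.trainable_variables())
```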
] @@ -835,11 +864,7 @@ { "cell_type": "code", "execution_count": 30, - "metadata": { - "collapsed": false, - "deletable": true, - "editable": true - }, + "metadata": {}, "outputs": [], "source": [ "[v.name for v in tf.trainable_variables()]" @@ -848,11 +873,7 @@ { "cell_type": "code", "execution_count": 31, - "metadata": { - "collapsed": false, - "deletable": true, - "editable": true - }, + "metadata": {}, "outputs": [], "source": [ "[v.name for v in tf.global_variables()]" @@ -860,20 +881,14 @@ }, { "cell_type": "markdown", - "metadata": { - "deletable": true, - "editable": true - }, + "metadata": {}, "source": [ "## Gradient Clipping" ] }, { "cell_type": "markdown", - "metadata": { - "deletable": true, - "editable": true - }, + "metadata": {}, "source": [ "Let's create a simple neural net for MNIST and add gradient clipping. The first part is the same as earlier (except we added a few more layers to demonstrate reusing pretrained models, see below):" ] @@ -882,9 +897,7 @@ "cell_type": "code", "execution_count": 32, "metadata": { - "collapsed": true, - "deletable": true, - "editable": true + "collapsed": true }, "outputs": [], "source": [ @@ -918,9 +931,7 @@ "cell_type": "code", "execution_count": 33, "metadata": { - "collapsed": true, - "deletable": true, - "editable": true + "collapsed": true }, "outputs": [], "source": [ @@ -929,10 +940,7 @@ }, { "cell_type": "markdown", - "metadata": { - "deletable": true, - "editable": true - }, + "metadata": {}, "source": [ "Now we apply gradient clipping. For this, we need to get the gradients, use the `clip_by_value()` function to clip them, then apply them:" ] @@ -940,11 +948,7 @@ { "cell_type": "code", "execution_count": 34, - "metadata": { - "collapsed": false, - "deletable": true, - "editable": true - }, + "metadata": {}, "outputs": [], "source": [ "threshold = 1.0\n", @@ -958,10 +962,7 @@ }, { "cell_type": "markdown", - "metadata": { - "deletable": true, - "editable": true - }, + "metadata": {}, "source": [ "The rest is the same as usual:" ] @@ -970,9 +971,7 @@ "cell_type": "code", "execution_count": 35, "metadata": { - "collapsed": true, - "deletable": true, - "editable": true + "collapsed": true }, "outputs": [], "source": [ @@ -985,9 +984,7 @@ "cell_type": "code", "execution_count": 36, "metadata": { - "collapsed": true, - "deletable": true, - "editable": true + "collapsed": true }, "outputs": [], "source": [ @@ -999,9 +996,7 @@ "cell_type": "code", "execution_count": 37, "metadata": { - "collapsed": true, - "deletable": true, - "editable": true + "collapsed": true }, "outputs": [], "source": [ @@ -1012,11 +1007,7 @@ { "cell_type": "code", "execution_count": 38, - "metadata": { - "collapsed": false, - "deletable": true, - "editable": true - }, + "metadata": {}, "outputs": [], "source": [ "with tf.Session() as sess:\n", @@ -1034,30 +1025,21 @@ }, { "cell_type": "markdown", - "metadata": { - "deletable": true, - "editable": true - }, + "metadata": {}, "source": [ "## Reusing Pretrained Layers" ] }, { "cell_type": "markdown", - "metadata": { - "deletable": true, - "editable": true - }, + "metadata": {}, "source": [ "## Reusing a TensorFlow Model" ] }, { "cell_type": "markdown", - "metadata": { - "deletable": true, - "editable": true - }, + "metadata": {}, "source": [ "First you need to load the graph's structure. The `import_meta_graph()` function does just that, loading the graph's operations into the default graph, and returning a `Saver` that you can then use to restore the model's state. 
Note that by default, a `Saver` saves the structure of the graph into a `.meta` file, so that's the file you should load:" ] @@ -1066,9 +1048,7 @@ "cell_type": "code", "execution_count": 39, "metadata": { - "collapsed": true, - "deletable": true, - "editable": true + "collapsed": true }, "outputs": [], "source": [ @@ -1079,9 +1059,7 @@ "cell_type": "code", "execution_count": 40, "metadata": { - "collapsed": true, - "deletable": true, - "editable": true + "collapsed": true }, "outputs": [], "source": [ @@ -1090,10 +1068,7 @@ }, { "cell_type": "markdown", - "metadata": { - "deletable": true, - "editable": true - }, + "metadata": {}, "source": [ "Next you need to get a handle on all the operations you will need for training. If you don't know the graph's structure, you can list all the operations:" ] @@ -1101,11 +1076,7 @@ { "cell_type": "code", "execution_count": 41, - "metadata": { - "collapsed": false, - "deletable": true, - "editable": true - }, + "metadata": {}, "outputs": [], "source": [ "for op in tf.get_default_graph().get_operations():\n", @@ -1114,10 +1085,7 @@ }, { "cell_type": "markdown", - "metadata": { - "deletable": true, - "editable": true - }, + "metadata": {}, "source": [ "Oops, that's a lot of operations! It's much easier to use TensorBoard to visualize the graph. The following hack will allow you to visualize the graph within Jupyter (if it does not work with your browser, you will need to use a `FileWriter` to save the graph and then visualize it in TensorBoard):" ] @@ -1126,9 +1094,7 @@ "cell_type": "code", "execution_count": 42, "metadata": { - "collapsed": true, - "deletable": true, - "editable": true + "collapsed": true }, "outputs": [], "source": [ @@ -1174,9 +1140,6 @@ "cell_type": "code", "execution_count": 43, "metadata": { - "collapsed": false, - "deletable": true, - "editable": true, "scrolled": true }, "outputs": [], @@ -1186,10 +1149,7 @@ }, { "cell_type": "markdown", - "metadata": { - "deletable": true, - "editable": true - }, + "metadata": {}, "source": [ "Once you know which operations you need, you can get a handle on them using the graph's `get_operation_by_name()` or `get_tensor_by_name()` methods:" ] @@ -1197,11 +1157,7 @@ { "cell_type": "code", "execution_count": 44, - "metadata": { - "collapsed": false, - "deletable": true, - "editable": true - }, + "metadata": {}, "outputs": [], "source": [ "X = tf.get_default_graph().get_tensor_by_name(\"X:0\")\n", @@ -1214,10 +1170,7 @@ }, { "cell_type": "markdown", - "metadata": { - "deletable": true, - "editable": true - }, + "metadata": {}, "source": [ "If you are the author of the original model, you could make things easier for people who will reuse your model by giving operations very clear names and documenting them. 
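For example, if you name the key tensors explicitly when building the graph, users can rely on stable names such as `"X:0"` or `"eval/accuracy:0"`. A short sketch, reusing the same names as the model above:

```python
X = tf.placeholder(tf.float32, shape=(None, 28 * 28), name="X")  # documented input
y = tf.placeholder(tf.int64, shape=(None), name="y")             # documented labels
with tf.name_scope("eval"):
    # logits is the output layer of the dnn built above
    correct = tf.nn.in_top_k(logits, y, 1)
    # reachable later as "eval/accuracy:0":
    accuracy = tf.reduce_mean(tf.cast(correct, tf.float32), name="accuracy")
```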
Another approach is to create a collection containing all the important operations that people will want to get a handle on:" ] @@ -1225,11 +1178,7 @@ { "cell_type": "code", "execution_count": 45, - "metadata": { - "collapsed": false, - "deletable": true, - "editable": true - }, + "metadata": {}, "outputs": [], "source": [ "for op in (X, y, accuracy, training_op):\n", @@ -1238,10 +1187,7 @@ }, { "cell_type": "markdown", - "metadata": { - "deletable": true, - "editable": true - }, + "metadata": {}, "source": [ "This way people who reuse your model will be able to simply write:" ] @@ -1249,11 +1195,7 @@ { "cell_type": "code", "execution_count": 46, - "metadata": { - "collapsed": false, - "deletable": true, - "editable": true - }, + "metadata": {}, "outputs": [], "source": [ "X, y, accuracy, training_op = tf.get_collection(\"my_important_ops\")" @@ -1261,10 +1203,7 @@ }, { "cell_type": "markdown", - "metadata": { - "deletable": true, - "editable": true - }, + "metadata": {}, "source": [ "Now you can start a session, restore the model's state and continue training on your data:" ] @@ -1272,11 +1211,7 @@ { "cell_type": "code", "execution_count": 47, - "metadata": { - "collapsed": false, - "deletable": true, - "editable": true - }, + "metadata": {}, "outputs": [], "source": [ "with tf.Session() as sess:\n", @@ -1286,10 +1221,7 @@ }, { "cell_type": "markdown", - "metadata": { - "deletable": true, - "editable": true - }, + "metadata": {}, "source": [ "Actually, let's test this for real!" ] @@ -1297,11 +1229,7 @@ { "cell_type": "code", "execution_count": 48, - "metadata": { - "collapsed": false, - "deletable": true, - "editable": true - }, + "metadata": {}, "outputs": [], "source": [ "with tf.Session() as sess:\n", @@ -1320,10 +1248,7 @@ }, { "cell_type": "markdown", - "metadata": { - "deletable": true, - "editable": true - }, + "metadata": {}, "source": [ "Alternatively, if you have access to the Python code that built the original graph, you can use it instead of `import_meta_graph()`:" ] @@ -1332,9 +1257,7 @@ "cell_type": "code", "execution_count": 49, "metadata": { - "collapsed": true, - "deletable": true, - "editable": true + "collapsed": true }, "outputs": [], "source": [ @@ -1381,10 +1304,7 @@ }, { "cell_type": "markdown", - "metadata": { - "deletable": true, - "editable": true - }, + "metadata": {}, "source": [ "And continue training:" ] @@ -1392,11 +1312,7 @@ { "cell_type": "code", "execution_count": 50, - "metadata": { - "collapsed": false, - "deletable": true, - "editable": true - }, + "metadata": {}, "outputs": [], "source": [ "with tf.Session() as sess:\n", @@ -1415,10 +1331,7 @@ }, { "cell_type": "markdown", - "metadata": { - "deletable": true, - "editable": true - }, + "metadata": {}, "source": [ "In general you will want to reuse only the lower layers. If you are using `import_meta_graph()` it will load the whole graph, but you can simply ignore the parts you do not need. In this example, we add a new 4th hidden layer on top of the pretrained 3rd layer (ignoring the old 4th hidden layer). We also build a new output layer, the loss for this new output, and a new optimizer to minimize it. 
We also need another saver to save the whole graph (containing both the entire old graph plus the new operations), and an initialization operation to initialize all the new variables:" ] @@ -1426,11 +1339,7 @@ { "cell_type": "code", "execution_count": 51, - "metadata": { - "collapsed": false, - "deletable": true, - "editable": true - }, + "metadata": {}, "outputs": [], "source": [ "reset_graph()\n", @@ -1466,10 +1375,7 @@ }, { "cell_type": "markdown", - "metadata": { - "deletable": true, - "editable": true - }, + "metadata": {}, "source": [ "And we can train this new model:" ] @@ -1477,11 +1383,7 @@ { "cell_type": "code", "execution_count": 52, - "metadata": { - "collapsed": false, - "deletable": true, - "editable": true - }, + "metadata": {}, "outputs": [], "source": [ "with tf.Session() as sess:\n", @@ -1501,10 +1403,7 @@ }, { "cell_type": "markdown", - "metadata": { - "deletable": true, - "editable": true - }, + "metadata": {}, "source": [ "If you have access to the Python code that built the original graph, you can just reuse the parts you need and drop the rest:" ] @@ -1513,9 +1412,7 @@ "cell_type": "code", "execution_count": 53, "metadata": { - "collapsed": true, - "deletable": true, - "editable": true + "collapsed": true }, "outputs": [], "source": [ @@ -1553,10 +1450,7 @@ }, { "cell_type": "markdown", - "metadata": { - "deletable": true, - "editable": true - }, + "metadata": {}, "source": [ "However, you must create one `Saver` to restore the pretrained model (giving it the list of variables to restore, or else it will complain that the graphs don't match), and another `Saver` to save the new model, once it is trained:" ] @@ -1564,11 +1458,7 @@ { "cell_type": "code", "execution_count": 54, - "metadata": { - "collapsed": false, - "deletable": true, - "editable": true - }, + "metadata": {}, "outputs": [], "source": [ "reuse_vars = tf.get_collection(tf.GraphKeys.GLOBAL_VARIABLES,\n", @@ -1596,20 +1486,14 @@ }, { "cell_type": "markdown", - "metadata": { - "deletable": true, - "editable": true - }, + "metadata": {}, "source": [ "## Reusing Models from Other Frameworks" ] }, { "cell_type": "markdown", - "metadata": { - "deletable": true, - "editable": true - }, + "metadata": {}, "source": [ "In this example, for each variable we want to reuse, we find its initializer's assignment operation, and we get its second input, which corresponds to the initialization value. When we run the initializer, we replace the initialization values with the ones we want, using a `feed_dict`:" ] @@ -1618,9 +1502,7 @@ "cell_type": "code", "execution_count": 55, "metadata": { - "collapsed": true, - "deletable": true, - "editable": true + "collapsed": true }, "outputs": [], "source": [ @@ -1633,11 +1515,7 @@ { "cell_type": "code", "execution_count": 56, - "metadata": { - "collapsed": false, - "deletable": true, - "editable": true - }, + "metadata": {}, "outputs": [], "source": [ "original_w = [[1., 2., 3.], [4., 5., 6.]] # Load the weights from the other framework\n", @@ -1664,20 +1542,14 @@ }, { "cell_type": "markdown", - "metadata": { - "deletable": true, - "editable": true - }, + "metadata": {}, "source": [ "Note: the weights variable created by the `tf.layers.dense()` function is called `\"kernel\"` (instead of `\"weights\"` when using the `tf.contrib.layers.fully_connected()`, as in the book), and the biases variable is called `bias` instead of `biases`." 
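This matters if you ever want to load a checkpoint that was saved by the book's `fully_connected()`-based code into a graph built with `tf.layers.dense()`: you can give `Saver` an explicit mapping from the names stored in the checkpoint to the new variables. A hedged sketch (the checkpoint names and path are illustrative):

```python
# Map the old contrib-style names (as stored in the checkpoint)
# to the new tf.layers-style variables in the current graph.
var_by_name = {var.op.name: var for var in tf.global_variables()}
restore_saver = tf.train.Saver({
    "hidden1/weights": var_by_name["hidden1/kernel"],
    "hidden1/biases": var_by_name["hidden1/bias"],
})
with tf.Session() as sess:
    init.run()
    restore_saver.restore(sess, "./my_old_model.ckpt")  # hypothetical checkpoint
```

This only helps when the source is a TensorFlow checkpoint, of course; for weights coming from an entirely different framework, use the `feed_dict` trick above (or the assignment nodes below).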
] }, { "cell_type": "markdown", - "metadata": { - "deletable": true, - "editable": true - }, + "metadata": {}, "source": [ "Another approach (initially used in the book) would be to create dedicated assignment nodes and dedicated placeholders. This is more verbose and less efficient, but you may find this more explicit:" ] @@ -1685,11 +1557,7 @@ { "cell_type": "code", "execution_count": 57, - "metadata": { - "collapsed": false, - "deletable": true, - "editable": true - }, + "metadata": {}, "outputs": [], "source": [ "reset_graph()\n", @@ -1727,10 +1595,7 @@ }, { "cell_type": "markdown", - "metadata": { - "deletable": true, - "editable": true - }, + "metadata": {}, "source": [ "Note that we could also get a handle on the variables using `get_collection()` and specifying the `scope`:" ] @@ -1738,11 +1603,7 @@ { "cell_type": "code", "execution_count": 58, - "metadata": { - "collapsed": false, - "deletable": true, - "editable": true - }, + "metadata": {}, "outputs": [], "source": [ "tf.get_collection(tf.GraphKeys.GLOBAL_VARIABLES, scope=\"hidden1\")" @@ -1750,10 +1611,7 @@ }, { "cell_type": "markdown", - "metadata": { - "deletable": true, - "editable": true - }, + "metadata": {}, "source": [ "Or we could use the graph's `get_tensor_by_name()` method:" ] @@ -1761,11 +1619,7 @@ { "cell_type": "code", "execution_count": 59, - "metadata": { - "collapsed": false, - "deletable": true, - "editable": true - }, + "metadata": {}, "outputs": [], "source": [ "tf.get_default_graph().get_tensor_by_name(\"hidden1/kernel:0\")" @@ -1774,11 +1628,7 @@ { "cell_type": "code", "execution_count": 60, - "metadata": { - "collapsed": false, - "deletable": true, - "editable": true - }, + "metadata": {}, "outputs": [], "source": [ "tf.get_default_graph().get_tensor_by_name(\"hidden1/bias:0\")" @@ -1786,10 +1636,7 @@ }, { "cell_type": "markdown", - "metadata": { - "deletable": true, - "editable": true - }, + "metadata": {}, "source": [ "### Freezing the Lower Layers" ] @@ -1797,11 +1644,7 @@ { "cell_type": "code", "execution_count": 61, - "metadata": { - "collapsed": false, - "deletable": true, - "editable": true - }, + "metadata": {}, "outputs": [], "source": [ "reset_graph()\n", @@ -1836,9 +1679,7 @@ "cell_type": "code", "execution_count": 62, "metadata": { - "collapsed": true, - "deletable": true, - "editable": true + "collapsed": true }, "outputs": [], "source": [ @@ -1853,9 +1694,7 @@ "cell_type": "code", "execution_count": 63, "metadata": { - "collapsed": true, - "deletable": true, - "editable": true + "collapsed": true }, "outputs": [], "source": [ @@ -1866,11 +1705,7 @@ { "cell_type": "code", "execution_count": 64, - "metadata": { - "collapsed": false, - "deletable": true, - "editable": true - }, + "metadata": {}, "outputs": [], "source": [ "reuse_vars = tf.get_collection(tf.GraphKeys.GLOBAL_VARIABLES,\n", @@ -1900,9 +1735,7 @@ "cell_type": "code", "execution_count": 65, "metadata": { - "collapsed": true, - "deletable": true, - "editable": true + "collapsed": true }, "outputs": [], "source": [ @@ -1922,11 +1755,7 @@ { "cell_type": "code", "execution_count": 66, - "metadata": { - "collapsed": false, - "deletable": true, - "editable": true - }, + "metadata": {}, "outputs": [], "source": [ "with tf.name_scope(\"dnn\"):\n", @@ -1946,9 +1775,7 @@ "cell_type": "code", "execution_count": 67, "metadata": { - "collapsed": true, - "deletable": true, - "editable": true + "collapsed": true }, "outputs": [], "source": [ @@ -1967,10 +1794,7 @@ }, { "cell_type": "markdown", - "metadata": { - "deletable": true, - "editable": 
true - }, + "metadata": {}, "source": [ "The training code is exactly the same as earlier:" ] @@ -1978,11 +1802,7 @@ { "cell_type": "code", "execution_count": 68, - "metadata": { - "collapsed": false, - "deletable": true, - "editable": true - }, + "metadata": {}, "outputs": [], "source": [ "reuse_vars = tf.get_collection(tf.GraphKeys.GLOBAL_VARIABLES,\n", @@ -2010,10 +1830,7 @@ }, { "cell_type": "markdown", - "metadata": { - "deletable": true, - "editable": true - }, + "metadata": {}, "source": [ "### Caching the Frozen Layers" ] @@ -2022,9 +1839,7 @@ "cell_type": "code", "execution_count": 69, "metadata": { - "collapsed": true, - "deletable": true, - "editable": true + "collapsed": true }, "outputs": [], "source": [ @@ -2069,9 +1884,7 @@ "cell_type": "code", "execution_count": 70, "metadata": { - "collapsed": true, - "deletable": true, - "editable": true + "collapsed": true }, "outputs": [], "source": [ @@ -2087,11 +1900,7 @@ { "cell_type": "code", "execution_count": 71, - "metadata": { - "collapsed": false, - "deletable": true, - "editable": true - }, + "metadata": {}, "outputs": [], "source": [ "import numpy as np\n", @@ -2121,20 +1930,14 @@ }, { "cell_type": "markdown", - "metadata": { - "deletable": true, - "editable": true - }, + "metadata": {}, "source": [ "# Faster Optimizers" ] }, { "cell_type": "markdown", - "metadata": { - "deletable": true, - "editable": true - }, + "metadata": {}, "source": [ "## Momentum optimization" ] @@ -2143,9 +1946,7 @@ "cell_type": "code", "execution_count": 72, "metadata": { - "collapsed": true, - "deletable": true, - "editable": true + "collapsed": true }, "outputs": [], "source": [ @@ -2155,10 +1956,7 @@ }, { "cell_type": "markdown", - "metadata": { - "deletable": true, - "editable": true - }, + "metadata": {}, "source": [ "## Nesterov Accelerated Gradient" ] @@ -2167,9 +1965,7 @@ "cell_type": "code", "execution_count": 73, "metadata": { - "collapsed": true, - "deletable": true, - "editable": true + "collapsed": true }, "outputs": [], "source": [ @@ -2179,10 +1975,7 @@ }, { "cell_type": "markdown", - "metadata": { - "deletable": true, - "editable": true - }, + "metadata": {}, "source": [ "## AdaGrad" ] @@ -2191,9 +1984,7 @@ "cell_type": "code", "execution_count": 74, "metadata": { - "collapsed": true, - "deletable": true, - "editable": true + "collapsed": true }, "outputs": [], "source": [ @@ -2202,10 +1993,7 @@ }, { "cell_type": "markdown", - "metadata": { - "deletable": true, - "editable": true - }, + "metadata": {}, "source": [ "## RMSProp" ] @@ -2214,9 +2002,7 @@ "cell_type": "code", "execution_count": 75, "metadata": { - "collapsed": true, - "deletable": true, - "editable": true + "collapsed": true }, "outputs": [], "source": [ @@ -2226,10 +2012,7 @@ }, { "cell_type": "markdown", - "metadata": { - "deletable": true, - "editable": true - }, + "metadata": {}, "source": [ "## Adam Optimization" ] @@ -2238,9 +2021,7 @@ "cell_type": "code", "execution_count": 76, "metadata": { - "collapsed": true, - "deletable": true, - "editable": true + "collapsed": true }, "outputs": [], "source": [ @@ -2249,10 +2030,7 @@ }, { "cell_type": "markdown", - "metadata": { - "deletable": true, - "editable": true - }, + "metadata": {}, "source": [ "## Learning Rate Scheduling" ] @@ -2261,9 +2039,7 @@ "cell_type": "code", "execution_count": 77, "metadata": { - "collapsed": true, - "deletable": true, - "editable": true + "collapsed": true }, "outputs": [], "source": [ @@ -2295,9 +2071,7 @@ "cell_type": "code", "execution_count": 78, "metadata": { - "collapsed": true, - 
"deletable": true, - "editable": true + "collapsed": true }, "outputs": [], "source": [ @@ -2315,11 +2089,7 @@ { "cell_type": "code", "execution_count": 79, - "metadata": { - "collapsed": false, - "deletable": true, - "editable": true - }, + "metadata": {}, "outputs": [], "source": [ "init = tf.global_variables_initializer()\n", @@ -2329,11 +2099,7 @@ { "cell_type": "code", "execution_count": 80, - "metadata": { - "collapsed": false, - "deletable": true, - "editable": true - }, + "metadata": {}, "outputs": [], "source": [ "n_epochs = 5\n", @@ -2354,30 +2120,21 @@ }, { "cell_type": "markdown", - "metadata": { - "deletable": true, - "editable": true - }, + "metadata": {}, "source": [ "# Avoiding Overfitting Through Regularization" ] }, { "cell_type": "markdown", - "metadata": { - "deletable": true, - "editable": true - }, + "metadata": {}, "source": [ "## $\\ell_1$ and $\\ell_2$ regularization" ] }, { "cell_type": "markdown", - "metadata": { - "deletable": true, - "editable": true - }, + "metadata": {}, "source": [ "Let's implement $\\ell_1$ regularization manually. First, we create the model, as usual (with just one hidden layer this time, for simplicity):" ] @@ -2386,9 +2143,7 @@ "cell_type": "code", "execution_count": 81, "metadata": { - "collapsed": true, - "deletable": true, - "editable": true + "collapsed": true }, "outputs": [], "source": [ @@ -2408,10 +2163,7 @@ }, { "cell_type": "markdown", - "metadata": { - "deletable": true, - "editable": true - }, + "metadata": {}, "source": [ "Next, we get a handle on the layer weights, and we compute the total loss, which is equal to the sum of the usual cross entropy loss and the $\\ell_1$ loss (i.e., the absolute values of the weights):" ] @@ -2419,11 +2171,7 @@ { "cell_type": "code", "execution_count": 82, - "metadata": { - "collapsed": false, - "deletable": true, - "editable": true - }, + "metadata": {}, "outputs": [], "source": [ "W1 = tf.get_default_graph().get_tensor_by_name(\"hidden1/kernel:0\")\n", @@ -2441,10 +2189,7 @@ }, { "cell_type": "markdown", - "metadata": { - "deletable": true, - "editable": true - }, + "metadata": {}, "source": [ "The rest is just as usual:" ] @@ -2452,11 +2197,7 @@ { "cell_type": "code", "execution_count": 83, - "metadata": { - "collapsed": false, - "deletable": true, - "editable": true - }, + "metadata": {}, "outputs": [], "source": [ "with tf.name_scope(\"eval\"):\n", @@ -2477,9 +2218,6 @@ "cell_type": "code", "execution_count": 84, "metadata": { - "collapsed": false, - "deletable": true, - "editable": true, "scrolled": true }, "outputs": [], @@ -2502,10 +2240,7 @@ }, { "cell_type": "markdown", - "metadata": { - "deletable": true, - "editable": true - }, + "metadata": {}, "source": [ "Alternatively, we can pass a regularization function to the `tf.layers.dense()` function, which will use it to create operations that will compute the regularization loss, and it adds these operations to the collection of regularization losses. The beginning is the same as above:" ] @@ -2514,9 +2249,7 @@ "cell_type": "code", "execution_count": 85, "metadata": { - "collapsed": true, - "deletable": true, - "editable": true + "collapsed": true }, "outputs": [], "source": [ @@ -2533,10 +2266,7 @@ }, { "cell_type": "markdown", - "metadata": { - "deletable": true, - "editable": true - }, + "metadata": {}, "source": [ "Next, we will use Python's `partial()` function to avoid repeating the same arguments over and over again. 
Note that we set the `kernel_regularizer` argument:" ] @@ -2545,9 +2275,7 @@ "cell_type": "code", "execution_count": 86, "metadata": { - "collapsed": true, - "deletable": true, - "editable": true + "collapsed": true }, "outputs": [], "source": [ @@ -2557,11 +2285,7 @@ { "cell_type": "code", "execution_count": 87, - "metadata": { - "collapsed": false, - "deletable": true, - "editable": true - }, + "metadata": {}, "outputs": [], "source": [ "my_dense_layer = partial(\n", @@ -2577,10 +2301,7 @@ }, { "cell_type": "markdown", - "metadata": { - "deletable": true, - "editable": true - }, + "metadata": {}, "source": [ "Next we must add the regularization losses to the base loss:" ] @@ -2589,9 +2310,7 @@ "cell_type": "code", "execution_count": 88, "metadata": { - "collapsed": true, - "deletable": true, - "editable": true + "collapsed": true }, "outputs": [], "source": [ @@ -2605,10 +2324,7 @@ }, { "cell_type": "markdown", - "metadata": { - "deletable": true, - "editable": true - }, + "metadata": {}, "source": [ "And the rest is the same as usual:" ] @@ -2616,11 +2332,7 @@ { "cell_type": "code", "execution_count": 89, - "metadata": { - "collapsed": false, - "deletable": true, - "editable": true - }, + "metadata": {}, "outputs": [], "source": [ "with tf.name_scope(\"eval\"):\n", @@ -2641,9 +2353,6 @@ "cell_type": "code", "execution_count": 90, "metadata": { - "collapsed": false, - "deletable": true, - "editable": true, "scrolled": true }, "outputs": [], @@ -2666,20 +2375,14 @@ }, { "cell_type": "markdown", - "metadata": { - "deletable": true, - "editable": true - }, + "metadata": {}, "source": [ "## Dropout" ] }, { "cell_type": "markdown", - "metadata": { - "deletable": true, - "editable": true - }, + "metadata": {}, "source": [ "Note: the book uses `tf.contrib.layers.dropout()` rather than `tf.layers.dropout()` (which did not exist when this chapter was written). It is now preferable to use `tf.layers.dropout()`, because anything in the contrib module may change or be deleted without notice. The `tf.layers.dropout()` function is almost identical to the `tf.contrib.layers.dropout()` function, except for a few minor differences. 
Most importantly:\n", "* you must specify the dropout rate (`rate`) rather than the keep probability (`keep_prob`), where `rate` is simply equal to `1 - keep_prob`,\n", @@ -2690,9 +2393,7 @@ "cell_type": "code", "execution_count": 91, "metadata": { - "collapsed": true, - "deletable": true, - "editable": true + "collapsed": true }, "outputs": [], "source": [ @@ -2705,11 +2406,7 @@ { "cell_type": "code", "execution_count": 92, - "metadata": { - "collapsed": false, - "deletable": true, - "editable": true - }, + "metadata": {}, "outputs": [], "source": [ "training = tf.placeholder_with_default(False, shape=(), name='training')\n", @@ -2730,11 +2427,7 @@ { "cell_type": "code", "execution_count": 93, - "metadata": { - "collapsed": false, - "deletable": true, - "editable": true - }, + "metadata": {}, "outputs": [], "source": [ "with tf.name_scope(\"loss\"):\n", @@ -2757,9 +2450,6 @@ "cell_type": "code", "execution_count": 94, "metadata": { - "collapsed": false, - "deletable": true, - "editable": true, "scrolled": true }, "outputs": [], @@ -2781,20 +2471,14 @@ }, { "cell_type": "markdown", - "metadata": { - "deletable": true, - "editable": true - }, + "metadata": {}, "source": [ "## Max norm" ] }, { "cell_type": "markdown", - "metadata": { - "deletable": true, - "editable": true - }, + "metadata": {}, "source": [ "Let's go back to a plain and simple neural net for MNIST with just 2 hidden layers:" ] @@ -2802,11 +2486,7 @@ { "cell_type": "code", "execution_count": 95, - "metadata": { - "collapsed": false, - "deletable": true, - "editable": true - }, + "metadata": {}, "outputs": [], "source": [ "reset_graph()\n", @@ -2842,10 +2522,7 @@ }, { "cell_type": "markdown", - "metadata": { - "deletable": true, - "editable": true - }, + "metadata": {}, "source": [ "Next, let's get a handle on the first hidden layer's weight and create an operation that will compute the clipped weights using the `clip_by_norm()` function. Then we create an assignment operation to assign the clipped weights to the weights variable:" ] @@ -2853,11 +2530,7 @@ { "cell_type": "code", "execution_count": 96, - "metadata": { - "collapsed": false, - "deletable": true, - "editable": true - }, + "metadata": {}, "outputs": [], "source": [ "threshold = 1.0\n", @@ -2868,10 +2541,7 @@ }, { "cell_type": "markdown", - "metadata": { - "deletable": true, - "editable": true - }, + "metadata": {}, "source": [ "We can do this as well for the second hidden layer:" ] @@ -2880,9 +2550,7 @@ "cell_type": "code", "execution_count": 97, "metadata": { - "collapsed": true, - "deletable": true, - "editable": true + "collapsed": true }, "outputs": [], "source": [ @@ -2893,10 +2561,7 @@ }, { "cell_type": "markdown", - "metadata": { - "deletable": true, - "editable": true - }, + "metadata": {}, "source": [ "Let's add an initializer and a saver:" ] @@ -2905,9 +2570,7 @@ "cell_type": "code", "execution_count": 98, "metadata": { - "collapsed": true, - "deletable": true, - "editable": true + "collapsed": true }, "outputs": [], "source": [ @@ -2917,10 +2580,7 @@ }, { "cell_type": "markdown", - "metadata": { - "deletable": true, - "editable": true - }, + "metadata": {}, "source": [ "And now we can train the model. 
It's pretty much as usual, except that right after running the `training_op`, we run the `clip_weights` and `clip_weights2` operations:" ] @@ -2929,9 +2589,7 @@ "cell_type": "code", "execution_count": 99, "metadata": { - "collapsed": true, - "deletable": true, - "editable": true + "collapsed": true }, "outputs": [], "source": [ @@ -2942,11 +2600,7 @@ { "cell_type": "code", "execution_count": 100, - "metadata": { - "collapsed": false, - "deletable": true, - "editable": true - }, + "metadata": {}, "outputs": [], "source": [ "with tf.Session() as sess: # not shown in the book\n", @@ -2966,10 +2620,7 @@ }, { "cell_type": "markdown", - "metadata": { - "deletable": true, - "editable": true - }, + "metadata": {}, "source": [ "The implementation above is straightforward and it works fine, but it is a bit messy. A better approach is to define a `max_norm_regularizer()` function:" ] @@ -2978,9 +2629,7 @@ "cell_type": "code", "execution_count": 101, "metadata": { - "collapsed": true, - "deletable": true, - "editable": true + "collapsed": true }, "outputs": [], "source": [ @@ -2996,10 +2645,7 @@ }, { "cell_type": "markdown", - "metadata": { - "deletable": true, - "editable": true - }, + "metadata": {}, "source": [ "Then you can call this function to get a max norm regularizer (with the threshold you want). When you create a hidden layer, you can pass this regularizer to the `kernel_regularizer` argument:" ] @@ -3007,11 +2653,7 @@ { "cell_type": "code", "execution_count": 102, - "metadata": { - "collapsed": false, - "deletable": true, - "editable": true - }, + "metadata": {}, "outputs": [], "source": [ "reset_graph()\n", @@ -3032,9 +2674,7 @@ "cell_type": "code", "execution_count": 103, "metadata": { - "collapsed": true, - "deletable": true, - "editable": true + "collapsed": true }, "outputs": [], "source": [ @@ -3052,9 +2692,7 @@ "cell_type": "code", "execution_count": 104, "metadata": { - "collapsed": true, - "deletable": true, - "editable": true + "collapsed": true }, "outputs": [], "source": [ @@ -3076,10 +2714,7 @@ }, { "cell_type": "markdown", - "metadata": { - "deletable": true, - "editable": true - }, + "metadata": {}, "source": [ "Training is as usual, except you must run the weights clipping operations after each training operation:" ] @@ -3088,9 +2723,7 @@ "cell_type": "code", "execution_count": 105, "metadata": { - "collapsed": true, - "deletable": true, - "editable": true + "collapsed": true }, "outputs": [], "source": [ @@ -3102,9 +2735,6 @@ "cell_type": "code", "execution_count": 106, "metadata": { - "collapsed": false, - "deletable": true, - "editable": true, "scrolled": false }, "outputs": [], @@ -3128,9 +2758,7 @@ { "cell_type": "markdown", "metadata": { - "collapsed": true, - "deletable": true, - "editable": true + "collapsed": true }, "source": [ "# Exercise solutions" @@ -3138,60 +2766,42 @@ }, { "cell_type": "markdown", - "metadata": { - "deletable": true, - "editable": true - }, + "metadata": {}, "source": [ "## 1. to 7." ] }, { "cell_type": "markdown", - "metadata": { - "deletable": true, - "editable": true - }, + "metadata": {}, "source": [ "See appendix A." ] }, { "cell_type": "markdown", - "metadata": { - "deletable": true, - "editable": true - }, + "metadata": {}, "source": [ "## 8. Deep Learning" ] }, { "cell_type": "markdown", - "metadata": { - "deletable": true, - "editable": true - }, + "metadata": {}, "source": [ "### 8.1." 
] }, { "cell_type": "markdown", - "metadata": { - "deletable": true, - "editable": true - }, + "metadata": {}, "source": [ "_Exercise: Build a DNN with five hidden layers of 100 neurons each, He initialization, and the ELU activation function._" ] }, { "cell_type": "markdown", - "metadata": { - "deletable": true, - "editable": true - }, + "metadata": {}, "source": [ "We will need similar DNNs in the next exercises, so let's create a function to build this DNN:" ] @@ -3200,9 +2810,7 @@ "cell_type": "code", "execution_count": 107, "metadata": { - "collapsed": true, - "deletable": true, - "editable": true + "collapsed": true }, "outputs": [], "source": [ @@ -3222,9 +2830,7 @@ "cell_type": "code", "execution_count": 108, "metadata": { - "collapsed": true, - "deletable": true, - "editable": true + "collapsed": true }, "outputs": [], "source": [ @@ -3244,30 +2850,21 @@ }, { "cell_type": "markdown", - "metadata": { - "deletable": true, - "editable": true - }, + "metadata": {}, "source": [ "### 8.2." ] }, { "cell_type": "markdown", - "metadata": { - "deletable": true, - "editable": true - }, + "metadata": {}, "source": [ "_Exercise: Using Adam optimization and early stopping, try training it on MNIST but only on digits 0 to 4, as we will use transfer learning for digits 5 to 9 in the next exercise. You will need a softmax output layer with five neurons, and as always make sure to save checkpoints at regular intervals and save the final model so you can reuse it later._" ] }, { "cell_type": "markdown", - "metadata": { - "deletable": true, - "editable": true - }, + "metadata": {}, "source": [ "Let's complete the graph with the cost function, the training op, and all the other usual components:" ] @@ -3276,9 +2873,7 @@ "cell_type": "code", "execution_count": 109, "metadata": { - "collapsed": true, - "deletable": true, - "editable": true + "collapsed": true }, "outputs": [], "source": [ @@ -3299,10 +2894,7 @@ }, { "cell_type": "markdown", - "metadata": { - "deletable": true, - "editable": true - }, + "metadata": {}, "source": [ "Let's fetch the MNIST dataset:" ] @@ -3310,11 +2902,7 @@ { "cell_type": "code", "execution_count": 110, - "metadata": { - "collapsed": false, - "deletable": true, - "editable": true - }, + "metadata": {}, "outputs": [], "source": [ "from tensorflow.examples.tutorials.mnist import input_data\n", @@ -3323,10 +2911,7 @@ }, { "cell_type": "markdown", - "metadata": { - "deletable": true, - "editable": true - }, + "metadata": {}, "source": [ "Now let's create the training set, validation and test set (we need the validation set to implement early stopping):" ] @@ -3335,9 +2920,7 @@ "cell_type": "code", "execution_count": 111, "metadata": { - "collapsed": true, - "deletable": true, - "editable": true + "collapsed": true }, "outputs": [], "source": [ @@ -3352,11 +2935,7 @@ { "cell_type": "code", "execution_count": 112, - "metadata": { - "collapsed": false, - "deletable": true, - "editable": true - }, + "metadata": {}, "outputs": [], "source": [ "n_epochs = 1000\n", @@ -3395,40 +2974,28 @@ }, { "cell_type": "markdown", - "metadata": { - "deletable": true, - "editable": true - }, + "metadata": {}, "source": [ "We get 98.05% accuracy on the test set. That's not too bad, but let's see if we can do better by tuning the hyperparameters." ] }, { "cell_type": "markdown", - "metadata": { - "deletable": true, - "editable": true - }, + "metadata": {}, "source": [ "### 8.3." 
] }, { "cell_type": "markdown", - "metadata": { - "deletable": true, - "editable": true - }, + "metadata": {}, "source": [ "_Exercise: Tune the hyperparameters using cross-validation and see what precision you can achieve._" ] }, { "cell_type": "markdown", - "metadata": { - "deletable": true, - "editable": true - }, + "metadata": {}, "source": [ "Let's create a `DNNClassifier` class, compatible with Scikit-Learn's `RandomizedSearchCV` class, to perform hyperparameter tuning. Here are the key points of this implementation:\n", "* the `__init__()` method (constructor) does nothing more than create instance variables for each of the hyperparameters.\n", @@ -3444,11 +3011,7 @@ { "cell_type": "code", "execution_count": 113, - "metadata": { - "collapsed": false, - "deletable": true, - "editable": true - }, + "metadata": {}, "outputs": [], "source": [ "from sklearn.base import BaseEstimator, ClassifierMixin\n", @@ -3631,10 +3194,7 @@ }, { "cell_type": "markdown", - "metadata": { - "deletable": true, - "editable": true - }, + "metadata": {}, "source": [ "Let's see if we get the exact same accuracy as earlier using this class (without dropout or batch norm):" ] @@ -3642,11 +3202,7 @@ { "cell_type": "code", "execution_count": 114, - "metadata": { - "collapsed": false, - "deletable": true, - "editable": true - }, + "metadata": {}, "outputs": [], "source": [ "dnn_clf = DNNClassifier(random_state=42)\n", @@ -3655,10 +3211,7 @@ }, { "cell_type": "markdown", - "metadata": { - "deletable": true, - "editable": true - }, + "metadata": {}, "source": [ "The model is trained, let's see if it gets the same accuracy as earlier:" ] @@ -3666,11 +3219,7 @@ { "cell_type": "code", "execution_count": 115, - "metadata": { - "collapsed": false, - "deletable": true, - "editable": true - }, + "metadata": {}, "outputs": [], "source": [ "from sklearn.metrics import accuracy_score\n", @@ -3681,10 +3230,7 @@ }, { "cell_type": "markdown", - "metadata": { - "deletable": true, - "editable": true - }, + "metadata": {}, "source": [ "Yep! Working fine. Now we can use Scikit-Learn's `RandomizedSearchCV` class to search for better hyperparameters (this may take over an hour, depending on your system):" ] @@ -3692,11 +3238,7 @@ { "cell_type": "code", "execution_count": 116, - "metadata": { - "collapsed": false, - "deletable": true, - "editable": true - }, + "metadata": {}, "outputs": [], "source": [ "from sklearn.model_selection import RandomizedSearchCV\n", @@ -3725,11 +3267,7 @@ { "cell_type": "code", "execution_count": 117, - "metadata": { - "collapsed": false, - "deletable": true, - "editable": true - }, + "metadata": {}, "outputs": [], "source": [ "rnd_search.best_params_" @@ -3738,11 +3276,7 @@ { "cell_type": "code", "execution_count": 118, - "metadata": { - "collapsed": false, - "deletable": true, - "editable": true - }, + "metadata": {}, "outputs": [], "source": [ "y_pred = rnd_search.predict(X_test1)\n", @@ -3751,20 +3285,14 @@ }, { "cell_type": "markdown", - "metadata": { - "deletable": true, - "editable": true - }, + "metadata": {}, "source": [ "Wonderful! Tuning the hyperparameters got us up to 99.32% accuracy! It may not sound like a great improvement to go from 98.05% to 99.32% accuracy, but consider the error rate: it went from roughly 2% to 0.7%. That's a 65% reduction of the number of errors this model will produce!" 
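(To spell out the arithmetic: the error rate dropped from 100% − 98.05% = 1.95% to 100% − 99.32% = 0.68%, and 0.68 / 1.95 ≈ 0.35, so roughly 65% of the errors are gone.)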
] }, { "cell_type": "markdown", - "metadata": { - "deletable": true, - "editable": true - }, + "metadata": {}, "source": [ "It's a good idea to save this model:" ] }, { "cell_type": "code", "execution_count": 119, "metadata": { - "collapsed": true, - "deletable": true, - "editable": true + "collapsed": true }, "outputs": [], "source": [ @@ -3784,30 +3310,21 @@ }, { "cell_type": "markdown", - "metadata": { - "deletable": true, - "editable": true - }, + "metadata": {}, "source": [ "### 8.4." ] }, { "cell_type": "markdown", - "metadata": { - "deletable": true, - "editable": true - }, + "metadata": {}, "source": [ "_Exercise: Now try adding Batch Normalization and compare the learning curves: is it converging faster than before? Does it produce a better model?_" ] }, { "cell_type": "markdown", - "metadata": { - "deletable": true, - "editable": true - }, + "metadata": {}, "source": [ "Let's train the best model found, once again, to see how fast it converges (alternatively, you could tweak the code above to make it write summaries for TensorBoard, so you can visualize the learning curve):" ] }, { "cell_type": "code", "execution_count": 120, - "metadata": { - "collapsed": false, - "deletable": true, - "editable": true - }, + "metadata": {}, "outputs": [], "source": [ "dnn_clf = DNNClassifier(activation=leaky_relu(alpha=0.1), batch_size=500, learning_rate=0.01,\n", @@ -3829,20 +3342,14 @@ }, { "cell_type": "markdown", - "metadata": { - "deletable": true, - "editable": true - }, + "metadata": {}, "source": [ "The best loss is reached at epoch 19, but it was already within 10% of that result at epoch 9." ] }, { "cell_type": "markdown", - "metadata": { - "deletable": true, - "editable": true - }, + "metadata": {}, "source": [ "Let's check that we do indeed get 99.32% accuracy on the test set:" ] }, { "cell_type": "code", "execution_count": 121, - "metadata": { - "collapsed": false, - "deletable": true, - "editable": true - }, + "metadata": {}, "outputs": [], "source": [ "y_pred = dnn_clf.predict(X_test1)\n", @@ -3863,10 +3366,7 @@ }, { "cell_type": "markdown", - "metadata": { - "deletable": true, - "editable": true - }, + "metadata": {}, "source": [ "Good, now let's use the exact same model, but this time with batch normalization:" ] }, { "cell_type": "code", "execution_count": 122, - "metadata": { - "collapsed": false, - "deletable": true, - "editable": true - }, + "metadata": {}, "outputs": [], "source": [ "dnn_clf_bn = DNNClassifier(activation=leaky_relu(alpha=0.1), batch_size=500, learning_rate=0.01,\n", @@ -3889,10 +3385,7 @@ }, { "cell_type": "markdown", - "metadata": { - "deletable": true, - "editable": true - }, + "metadata": {}, "source": [ "The best params are reached during epoch 2; that's much faster than earlier. Let's check the accuracy:" ] }, { "cell_type": "code", "execution_count": 123, - "metadata": { - "collapsed": false, - "deletable": true, - "editable": true - }, + "metadata": {}, "outputs": [], "source": [ "y_pred = dnn_clf_bn.predict(X_test1)\n", @@ -3913,10 +3402,7 @@ }, { "cell_type": "markdown", - "metadata": { - "deletable": true, - "editable": true - }, + "metadata": {}, "source": [ "Well, batch normalization did not improve accuracy, quite the contrary. That's not too surprising: this network is fairly shallow and the task is relatively easy, so it probably does not suffer much from the unstable gradients that batch normalization addresses. 
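" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "For reference, here is a minimal, self-contained sketch (not a cell from the original notebook) of how a dense layer, batch normalization and the ELU activation are typically chained in TensorFlow 1.x; the `momentum` value is just an illustrative choice:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import tensorflow as tf\n", "\n", "# Illustrative wiring only: batch norm sits between the linear layer and the activation.\n", "training = tf.placeholder_with_default(False, shape=(), name=\"training\")\n", "X_demo = tf.placeholder(tf.float32, shape=(None, 100), name=\"X_demo\")\n", "hidden = tf.layers.dense(X_demo, 100, name=\"demo_hidden\")  # no activation yet\n", "hidden = tf.layers.batch_normalization(hidden, momentum=0.95, training=training)\n", "hidden = tf.nn.elu(hidden)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "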
Let's see if we can find a good set of hyperparameters that will work well with batch normalization:" ] @@ -3924,11 +3410,7 @@ { "cell_type": "code", "execution_count": 124, - "metadata": { - "collapsed": false, - "deletable": true, - "editable": true - }, + "metadata": {}, "outputs": [], "source": [ "from sklearn.model_selection import RandomizedSearchCV\n", @@ -3953,11 +3435,7 @@ { "cell_type": "code", "execution_count": 125, - "metadata": { - "collapsed": false, - "deletable": true, - "editable": true - }, + "metadata": {}, "outputs": [], "source": [ "rnd_search_bn.best_params_" @@ -3966,11 +3444,7 @@ { "cell_type": "code", "execution_count": 126, - "metadata": { - "collapsed": false, - "deletable": true, - "editable": true - }, + "metadata": {}, "outputs": [], "source": [ "y_pred = rnd_search_bn.predict(X_test1)\n", @@ -3979,40 +3453,28 @@ }, { "cell_type": "markdown", - "metadata": { - "deletable": true, - "editable": true - }, + "metadata": {}, "source": [ "Oh well! Batch normalization did not help in this case. Let's see if dropout can do better." ] }, { "cell_type": "markdown", - "metadata": { - "deletable": true, - "editable": true - }, + "metadata": {}, "source": [ "### 8.5." ] }, { "cell_type": "markdown", - "metadata": { - "deletable": true, - "editable": true - }, + "metadata": {}, "source": [ "_Exercise: is the model overfitting the training set? Try adding dropout to every layer and try again. Does it help?_" ] }, { "cell_type": "markdown", - "metadata": { - "deletable": true, - "editable": true - }, + "metadata": {}, "source": [ "Since batch normalization did not help, let's go back to the best model we trained earlier and see how it performs on the training set:" ] @@ -4020,11 +3482,7 @@ { "cell_type": "code", "execution_count": 127, - "metadata": { - "collapsed": false, - "deletable": true, - "editable": true - }, + "metadata": {}, "outputs": [], "source": [ "y_pred = dnn_clf.predict(X_train1)\n", @@ -4033,10 +3491,7 @@ }, { "cell_type": "markdown", - "metadata": { - "deletable": true, - "editable": true - }, + "metadata": {}, "source": [ "The model performs significantly better on the training set than on the test set (99.91% vs 99.32%), which means it is overfitting the training set. A bit of regularization may help. Let's try adding dropout with a 50% dropout rate:" ] @@ -4044,11 +3499,7 @@ { "cell_type": "code", "execution_count": 128, - "metadata": { - "collapsed": false, - "deletable": true, - "editable": true - }, + "metadata": {}, "outputs": [], "source": [ "dnn_clf_dropout = DNNClassifier(activation=leaky_relu(alpha=0.1), batch_size=500, learning_rate=0.01,\n", @@ -4059,20 +3510,14 @@ }, { "cell_type": "markdown", - "metadata": { - "deletable": true, - "editable": true - }, + "metadata": {}, "source": [ "The best params are reached during epoch 23. Dropout somewhat slowed down convergence." ] }, { "cell_type": "markdown", - "metadata": { - "deletable": true, - "editable": true - }, + "metadata": {}, "source": [ "Let's check the accuracy:" ] @@ -4080,11 +3525,7 @@ { "cell_type": "code", "execution_count": 129, - "metadata": { - "collapsed": false, - "deletable": true, - "editable": true - }, + "metadata": {}, "outputs": [], "source": [ "y_pred = dnn_clf_dropout.predict(X_test1)\n", @@ -4093,10 +3534,7 @@ }, { "cell_type": "markdown", - "metadata": { - "deletable": true, - "editable": true - }, + "metadata": {}, "source": [ "We are out of luck, dropout does not seem to help either. 
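" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "As a quick sanity check (a sketch, not a cell from the original notebook), you can compare the dropout model's training and test accuracy to see whether the train/test gap at least narrowed:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from sklearn.metrics import accuracy_score\n", "\n", "# Dropout is meant to reduce overfitting, so the gap between these two\n", "# accuracies should be smaller than for the unregularized model:\n", "acc_train = accuracy_score(y_train1, dnn_clf_dropout.predict(X_train1))\n", "acc_test = accuracy_score(y_test1, dnn_clf_dropout.predict(X_test1))\n", "acc_train, acc_test" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "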
Let's try tuning the hyperparameters, perhaps we can squeeze a bit more performance out of this model:" ] @@ -4104,11 +3542,7 @@ { "cell_type": "code", "execution_count": 130, - "metadata": { - "collapsed": false, - "deletable": true, - "editable": true - }, + "metadata": {}, "outputs": [], "source": [ "from sklearn.model_selection import RandomizedSearchCV\n", @@ -4133,11 +3567,7 @@ { "cell_type": "code", "execution_count": 131, - "metadata": { - "collapsed": false, - "deletable": true, - "editable": true - }, + "metadata": {}, "outputs": [], "source": [ "rnd_search_dropout.best_params_" @@ -4146,11 +3576,7 @@ { "cell_type": "code", "execution_count": 132, - "metadata": { - "collapsed": false, - "deletable": true, - "editable": true - }, + "metadata": {}, "outputs": [], "source": [ "y_pred = rnd_search_dropout.predict(X_test1)\n", @@ -4159,20 +3585,14 @@ }, { "cell_type": "markdown", - "metadata": { - "deletable": true, - "editable": true - }, + "metadata": {}, "source": [ "Oh well, neither batch normalization nor dropout improved the model. Better luck next time! :)" ] }, { "cell_type": "markdown", - "metadata": { - "deletable": true, - "editable": true - }, + "metadata": {}, "source": [ "But that's okay, we have ourselves a nice DNN that achieves 99.32% accuracy on the test set. Now, let's see if some of its expertise on digits 0 to 4 can be transferred to the task of classifying digits 5 to 9." ] @@ -4180,9 +3600,7 @@ { "cell_type": "markdown", "metadata": { - "collapsed": true, - "deletable": true, - "editable": true + "collapsed": true }, "source": [ "## 9. Transfer learning" @@ -4190,20 +3608,14 @@ }, { "cell_type": "markdown", - "metadata": { - "deletable": true, - "editable": true - }, + "metadata": {}, "source": [ "### 9.1." ] }, { "cell_type": "markdown", - "metadata": { - "deletable": true, - "editable": true - }, + "metadata": {}, "source": [ "_Exercise: create a new DNN that reuses all the pretrained hidden layers of the previous model, freezes them, and replaces the softmax output layer with a new one._" ] @@ -4218,11 +3630,7 @@ { "cell_type": "code", "execution_count": 133, - "metadata": { - "collapsed": false, - "deletable": true, - "editable": true - }, + "metadata": {}, "outputs": [], "source": [ "reset_graph()\n", @@ -4247,11 +3655,7 @@ { "cell_type": "code", "execution_count": 134, - "metadata": { - "collapsed": false, - "deletable": true, - "editable": true - }, + "metadata": {}, "outputs": [], "source": [ "learning_rate = 0.01\n", @@ -4278,20 +3682,14 @@ }, { "cell_type": "markdown", - "metadata": { - "deletable": true, - "editable": true - }, + "metadata": {}, "source": [ "### 9.2." ] }, { "cell_type": "markdown", - "metadata": { - "deletable": true, - "editable": true - }, + "metadata": {}, "source": [ "_Exercise: train this new DNN on digits 5 to 9, using only 100 images per digit, and time how long it takes. 
Despite this small number of examples, can you achieve high precision?_" ] @@ -4307,9 +3705,7 @@ "cell_type": "code", "execution_count": 136, "metadata": { - "collapsed": true, - "deletable": true, - "editable": true + "collapsed": true }, "outputs": [], "source": [ @@ -4350,9 +3746,7 @@ { "cell_type": "code", "execution_count": 138, - "metadata": { - "collapsed": false - }, + "metadata": {}, "outputs": [], "source": [ "X_train2, y_train2 = sample_n_instances_per_class(X_train2_full, y_train2_full, n=100)\n", @@ -4369,11 +3763,7 @@ { "cell_type": "code", "execution_count": 139, - "metadata": { - "collapsed": false, - "deletable": true, - "editable": true - }, + "metadata": {}, "outputs": [], "source": [ "import time\n", @@ -4429,20 +3819,14 @@ }, { "cell_type": "markdown", - "metadata": { - "deletable": true, - "editable": true - }, + "metadata": {}, "source": [ "### 9.3." ] }, { "cell_type": "markdown", - "metadata": { - "deletable": true, - "editable": true - }, + "metadata": {}, "source": [ "_Exercise: try caching the frozen layers, and train the model again: how much faster is it now?_" ] @@ -4475,9 +3859,7 @@ { "cell_type": "code", "execution_count": 141, - "metadata": { - "collapsed": false - }, + "metadata": {}, "outputs": [], "source": [ "import time\n", @@ -4529,20 +3911,14 @@ }, { "cell_type": "markdown", - "metadata": { - "deletable": true, - "editable": true - }, + "metadata": {}, "source": [ "### 9.4." ] }, { "cell_type": "markdown", - "metadata": { - "deletable": true, - "editable": true - }, + "metadata": {}, "source": [ "_Exercise: try again reusing just four hidden layers instead of five. Can you achieve a higher precision?_" ] @@ -4557,9 +3933,7 @@ { "cell_type": "code", "execution_count": 142, - "metadata": { - "collapsed": false - }, + "metadata": {}, "outputs": [], "source": [ "reset_graph()\n", @@ -4590,9 +3964,7 @@ { "cell_type": "code", "execution_count": 143, - "metadata": { - "collapsed": false - }, + "metadata": {}, "outputs": [], "source": [ "learning_rate = 0.01\n", @@ -4615,9 +3987,7 @@ { "cell_type": "code", "execution_count": 144, - "metadata": { - "collapsed": false - }, + "metadata": {}, "outputs": [], "source": [ "n_epochs = 1000\n", @@ -4664,20 +4034,14 @@ }, { "cell_type": "markdown", - "metadata": { - "deletable": true, - "editable": true - }, + "metadata": {}, "source": [ "### 9.5." ] }, { "cell_type": "markdown", - "metadata": { - "deletable": true, - "editable": true - }, + "metadata": {}, "source": [ "_Exercise: now unfreeze the top two hidden layers and continue training: can you get the model to perform even better?_" ] @@ -4703,9 +4067,7 @@ { "cell_type": "code", "execution_count": 146, - "metadata": { - "collapsed": false - }, + "metadata": {}, "outputs": [], "source": [ "n_epochs = 1000\n", @@ -4770,9 +4132,7 @@ { "cell_type": "code", "execution_count": 148, - "metadata": { - "collapsed": false - }, + "metadata": {}, "outputs": [], "source": [ "n_epochs = 1000\n", @@ -4820,9 +4180,7 @@ { "cell_type": "code", "execution_count": 149, - "metadata": { - "collapsed": false - }, + "metadata": {}, "outputs": [], "source": [ "dnn_clf_5_to_9 = DNNClassifier(n_hidden_layers=4, random_state=42)\n", @@ -4832,9 +4190,7 @@ { "cell_type": "code", "execution_count": 150, - "metadata": { - "collapsed": false - }, + "metadata": {}, "outputs": [], "source": [ "y_pred = dnn_clf_5_to_9.predict(X_test2)\n", @@ -4850,30 +4206,21 @@ }, { "cell_type": "markdown", - "metadata": { - "deletable": true, - "editable": true - }, + "metadata": {}, "source": [ "## 10. 
Pretraining on an auxiliary task" ] }, { "cell_type": "markdown", - "metadata": { - "deletable": true, - "editable": true - }, + "metadata": {}, "source": [ "In this exercise you will build a DNN that compares two MNIST digit images and predicts whether they represent the same digit or not. Then you will reuse the lower layers of this network to train an MNIST classifier using very little training data." ] }, { "cell_type": "markdown", - "metadata": { - "deletable": true, - "editable": true - }, + "metadata": {}, "source": [ "### 10.1.\n", "Exercise: _Start by building two DNNs (let's call them DNN A and B), both similar to the one you built earlier but without the output layer: each DNN should have five hidden layers of 100 neurons each, He initialization, and ELU activation. Next, add one more hidden layer with 10 units on top of both DNNs. You should use TensorFlow's `concat()` function with `axis=1` to concatenate the outputs of both DNNs along the horizontal axis, then feed the result to the hidden layer. Finally, add an output layer with a single neuron using the logistic activation function._" @@ -4881,20 +4228,14 @@ }, { "cell_type": "markdown", - "metadata": { - "deletable": true, - "editable": true - }, + "metadata": {}, "source": [ "**Warning**! There was an error in the book for this exercise: there was no instruction to add a top hidden layer. Without it, the neural network generally fails to start learning. If you have the latest version of the book, this error has been fixed." ] }, { "cell_type": "markdown", - "metadata": { - "deletable": true, - "editable": true - }, + "metadata": {}, "source": [ "You could have two input placeholders, `X1` and `X2`, one for the images that should be fed to the first DNN, and the other for the images that should be fed to the second DNN. It would work fine. However, another option is to have a single input placeholder to hold both sets of images (each row will hold a pair of images), and use `tf.unstack()` to split this tensor into two separate tensors, like this:" ] @@ -4902,11 +4243,7 @@ { "cell_type": "code", "execution_count": 151, - "metadata": { - "collapsed": false, - "deletable": true, - "editable": true - }, + "metadata": {}, "outputs": [], "source": [ "n_inputs = 28 * 28 # MNIST\n", @@ -4919,10 +4256,7 @@ }, { "cell_type": "markdown", - "metadata": { - "deletable": true, - "editable": true - }, + "metadata": {}, "source": [ "We also need the labels placeholder. 
Each label will be 0 if the images represent different digits, or 1 if they represent the same digit:" ] }, { "cell_type": "code", "execution_count": 152, "metadata": { - "collapsed": true, - "deletable": true, - "editable": true + "collapsed": true }, "outputs": [], "source": [ @@ -4942,10 +4274,7 @@ }, { "cell_type": "markdown", - "metadata": { - "deletable": true, - "editable": true - }, + "metadata": {}, "source": [ "Now let's feed these inputs through two separate DNNs:" ] }, { "cell_type": "code", "execution_count": 153, - "metadata": { - "collapsed": false, - "deletable": true, - "editable": true - }, + "metadata": {}, "outputs": [], "source": [ "dnn1 = dnn(X1, name=\"DNN_A\")\n", @@ -4966,10 +4291,7 @@ }, { "cell_type": "markdown", - "metadata": { - "deletable": true, - "editable": true - }, + "metadata": {}, "source": [ "And let's concatenate their outputs:" ] }, { "cell_type": "code", "execution_count": 154, "metadata": { - "collapsed": true, - "deletable": true, - "editable": true + "collapsed": true }, "outputs": [], "source": [ @@ -4989,10 +4309,7 @@ }, { "cell_type": "markdown", - "metadata": { - "deletable": true, - "editable": true - }, + "metadata": {}, "source": [ "Each DNN outputs 100 activations (per instance), so the shape is `[None, 100]`:" ] }, { "cell_type": "code", "execution_count": 155, - "metadata": { - "collapsed": false, - "deletable": true, - "editable": true - }, + "metadata": {}, "outputs": [], "source": [ "dnn1.shape" ] }, { "cell_type": "code", "execution_count": 156, - "metadata": { - "collapsed": false, - "deletable": true, - "editable": true - }, + "metadata": {}, "outputs": [], "source": [ "dnn2.shape" ] }, { "cell_type": "markdown", - "metadata": { - "deletable": true, - "editable": true - }, + "metadata": {}, "source": [ "And of course the concatenated outputs have a shape of `[None, 200]`:" ] }, { "cell_type": "code", "execution_count": 157, - "metadata": { - "collapsed": false, - "deletable": true, - "editable": true - }, + "metadata": {}, "outputs": [], "source": [ "dnn_outputs.shape" ] }, { "cell_type": "markdown", - "metadata": { - "deletable": true, - "editable": true - }, + "metadata": {}, "source": [ "Now let's add an extra hidden layer with just 10 neurons, and the output layer with a single neuron:" ] }, { "cell_type": "code", "execution_count": 158, "metadata": { - "collapsed": true, - "deletable": true, - "editable": true + "collapsed": true }, "outputs": [], "source": [ @@ -5073,10 +4370,7 @@ }, { "cell_type": "markdown", - "metadata": { - "deletable": true, - "editable": true - }, + "metadata": {}, "source": [ "The whole network predicts `1` if `y_proba >= 0.5` (i.e. the network predicts that the images represent the same digit), or `0` otherwise. 
Instead, we can work with `logits >= 0`, which is equivalent (the logistic function is monotonically increasing and crosses 0.5 at zero) but faster to compute: " ] }, { "cell_type": "code", "execution_count": 159, "metadata": { - "collapsed": true, - "deletable": true, - "editable": true + "collapsed": true }, "outputs": [], "source": [ @@ -5096,10 +4388,7 @@ }, { "cell_type": "markdown", - "metadata": { - "deletable": true, - "editable": true - }, + "metadata": {}, "source": [ "Now let's add the cost function:" ] }, { "cell_type": "code", "execution_count": 160, - "metadata": { - "collapsed": false, - "deletable": true, - "editable": true - }, + "metadata": {}, "outputs": [], "source": [ "y_as_float = tf.cast(y, tf.float32)\n", @@ -5121,10 +4406,7 @@ }, { "cell_type": "markdown", - "metadata": { - "deletable": true, - "editable": true - }, + "metadata": {}, "source": [ "And we can now create the training operation using an optimizer:" ] }, { "cell_type": "code", "execution_count": 161, - "metadata": { - "collapsed": false, - "deletable": true, - "editable": true - }, + "metadata": {}, "outputs": [], "source": [ "learning_rate = 0.01\n", @@ -5148,10 +4426,7 @@ }, { "cell_type": "markdown", - "metadata": { - "deletable": true, - "editable": true - }, + "metadata": {}, "source": [ "We will want to measure our classifier's accuracy." ] }, { "cell_type": "code", "execution_count": 162, - "metadata": { - "collapsed": false, - "deletable": true, - "editable": true - }, + "metadata": {}, "outputs": [], "source": [ "y_pred_correct = tf.equal(y_pred, y)\n", @@ -5172,10 +4443,7 @@ }, { "cell_type": "markdown", - "metadata": { - "deletable": true, - "editable": true - }, + "metadata": {}, "source": [ "And the usual `init` and `saver`:" ] }, { "cell_type": "code", "execution_count": 163, "metadata": { - "collapsed": true, - "deletable": true, - "editable": true + "collapsed": true }, "outputs": [], "source": [ @@ -5196,10 +4462,7 @@ }, { "cell_type": "markdown", - "metadata": { - "deletable": true, - "editable": true - }, + "metadata": {}, "source": [ "### 10.2.\n", "_Exercise: split the MNIST training set into two sets: split #1 should contain 55,000 images, and split #2 should contain 5,000 images. Create a function that generates a training batch where each instance is a pair of MNIST images picked from split #1. Half of the training instances should be pairs of images that belong to the same class, while the other half should be pairs of images from different classes. For each pair, the training label should be 1 if the images are from the same class, or 0 if they are from different classes._" ] }, { "cell_type": "markdown", - "metadata": { - "deletable": true, - "editable": true - }, + "metadata": {}, "source": [ "The MNIST dataset returned by TensorFlow's `input_data()` function is already split into 3 parts: a training set (55,000 instances), a validation set (5,000 instances) and a test set (10,000 instances). Let's use the first set to generate the training set composed of image pairs, and we will use the second set for the second phase of the exercise (to train a regular MNIST classifier). We will use the third set as the test set for both phases."
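] }, { "cell_type": "markdown", "metadata": {}, "source": [ "As a quick check (an illustrative cell, not part of the original notebook), the three parts of the `mnist` object have the expected sizes:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Expected: 55,000 / 5,000 / 10,000 images of 784 pixels each\n", "print(mnist.train.images.shape)\n", "print(mnist.validation.images.shape)\n", "print(mnist.test.images.shape)"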
] @@ -5219,9 +4479,7 @@ "cell_type": "code", "execution_count": 164, "metadata": { - "collapsed": true, - "deletable": true, - "editable": true + "collapsed": true }, "outputs": [], "source": [ @@ -5237,10 +4495,7 @@ }, { "cell_type": "markdown", - "metadata": { - "deletable": true, - "editable": true - }, + "metadata": {}, "source": [ "Let's write a function that generates pairs of images: 50% representing the same digit, and 50% representing different digits. There are many ways to implement this. In this implementation, we first decide how many \"same\" pairs (i.e. pairs of images representing the same digit) we will generate, and how many \"different\" pairs (i.e. pairs of images representing different digits). We could just use `batch_size // 2` but we want to handle the case where it is odd (granted, that might be overkill!). Then we generate random pairs and we pick the right number of \"same\" pairs, then we generate the right number of \"different\" pairs. Finally we shuffle the batch and return it:" ] @@ -5249,9 +4504,7 @@ "cell_type": "code", "execution_count": 165, "metadata": { - "collapsed": true, - "deletable": true, - "editable": true + "collapsed": true }, "outputs": [], "source": [ @@ -5278,10 +4531,7 @@ }, { "cell_type": "markdown", - "metadata": { - "deletable": true, - "editable": true - }, + "metadata": {}, "source": [ "Let's test it to generate a small batch of 5 image pairs:" ] @@ -5289,11 +4539,7 @@ { "cell_type": "code", "execution_count": 166, - "metadata": { - "collapsed": false, - "deletable": true, - "editable": true - }, + "metadata": {}, "outputs": [], "source": [ "batch_size = 5\n", @@ -5302,10 +4548,7 @@ }, { "cell_type": "markdown", - "metadata": { - "deletable": true, - "editable": true - }, + "metadata": {}, "source": [ "Each row in `X_batch` contains a pair of images:" ] @@ -5313,11 +4556,7 @@ { "cell_type": "code", "execution_count": 167, - "metadata": { - "collapsed": false, - "deletable": true, - "editable": true - }, + "metadata": {}, "outputs": [], "source": [ "X_batch.shape, X_batch.dtype" @@ -5325,10 +4564,7 @@ }, { "cell_type": "markdown", - "metadata": { - "deletable": true, - "editable": true - }, + "metadata": {}, "source": [ "Let's look at these pairs:" ] @@ -5336,11 +4572,7 @@ { "cell_type": "code", "execution_count": 168, - "metadata": { - "collapsed": false, - "deletable": true, - "editable": true - }, + "metadata": {}, "outputs": [], "source": [ "plt.figure(figsize=(3, 3 * batch_size))\n", @@ -5355,10 +4587,7 @@ }, { "cell_type": "markdown", - "metadata": { - "deletable": true, - "editable": true - }, + "metadata": {}, "source": [ "And let's look at the labels (0 means \"different\", 1 means \"same\"):" ] @@ -5366,11 +4595,7 @@ { "cell_type": "code", "execution_count": 169, - "metadata": { - "collapsed": false, - "deletable": true, - "editable": true - }, + "metadata": {}, "outputs": [], "source": [ "y_batch" @@ -5378,20 +4603,14 @@ }, { "cell_type": "markdown", - "metadata": { - "deletable": true, - "editable": true - }, + "metadata": {}, "source": [ "Perfect!" ] }, { "cell_type": "markdown", - "metadata": { - "deletable": true, - "editable": true - }, + "metadata": {}, "source": [ "### 10.3.\n", "_Exercise: train the DNN on this training set. For each image pair, you can simultaneously feed the first image to DNN A and the second image to DNN B. 
The whole network will gradually learn to tell whether two images belong to the same class or not._" ] }, { "cell_type": "markdown", - "metadata": { - "deletable": true, - "editable": true - }, + "metadata": {}, "source": [ "Let's generate a test set composed of many pairs of images pulled from the MNIST test set:" ] }, { "cell_type": "code", "execution_count": 170, - "metadata": { - "collapsed": false, - "deletable": true, - "editable": true - }, + "metadata": {}, "outputs": [], "source": [ "X_test1, y_test1 = generate_batch(X_test, y_test, batch_size=len(X_test))" ] }, { "cell_type": "markdown", - "metadata": { - "deletable": true, - "editable": true - }, + "metadata": {}, "source": [ "And now, let's train the model. There's really nothing special about this step, except that we need a fairly large `batch_size`; otherwise the model fails to learn anything and ends up with an accuracy of 50%:" ] }, { "cell_type": "code", "execution_count": 171, - "metadata": { - "collapsed": false, - "deletable": true, - "editable": true - }, + "metadata": {}, "outputs": [], "source": [ "n_epochs = 100\n", @@ -5459,10 +4664,7 @@ }, { "cell_type": "markdown", - "metadata": { - "deletable": true, - "editable": true - }, + "metadata": {}, "source": [ "All right, we reach 97.6% accuracy on this digit comparison task. That's not too bad: this model knows a thing or two about comparing handwritten digits!\n", "\n", @@ -5471,10 +4673,7 @@ }, { "cell_type": "markdown", - "metadata": { - "deletable": true, - "editable": true - }, + "metadata": {}, "source": [ "### 10.4.\n", "_Exercise: now create a new DNN by reusing and freezing the hidden layers of DNN A and adding a softmax output layer on top with 10 neurons. Train this network on split #2 and see if you can achieve high performance despite having only 500 images per class._" ] }, { "cell_type": "markdown", - "metadata": { - "deletable": true, - "editable": true - }, + "metadata": {}, "source": [ "Let's create the model; it is pretty straightforward. There are many ways to freeze the lower layers, as explained in the book. In this example, we chose to use the `tf.stop_gradient()` function. Note that we need one `Saver` to restore the pretrained DNN A, and another `Saver` to save the final model: " ] }, { "cell_type": "code", "execution_count": 172, - "metadata": { - "collapsed": false, - "deletable": true, - "editable": true - }, + "metadata": {}, "outputs": [], "source": [ "reset_graph()\n", @@ -5532,10 +4724,7 @@ }, { "cell_type": "markdown", - "metadata": { - "deletable": true, - "editable": true - }, + "metadata": {}, "source": [ "Now on to training! We first initialize all variables (including the variables in the new output layer), then we restore the pretrained DNN A. Next, we just train the model on the small MNIST dataset (containing just 5,000 images):" ] }, { "cell_type": "code", "execution_count": 173, - "metadata": { - "collapsed": false, - "deletable": true, - "editable": true - }, + "metadata": {}, "outputs": [], "source": [ "n_epochs = 100\n", @@ -5571,10 +4756,7 @@ }, { "cell_type": "markdown", - "metadata": { - "deletable": true, - "editable": true - }, + "metadata": {}, "source": [ "Well, 96.7% accuracy: that's not the best MNIST model we have trained so far, but recall that we are only using a small training set (just 500 images per digit). 
Let's compare this result with the same DNN trained from scratch, without using transfer learning:" ] @@ -5582,11 +4764,7 @@ { "cell_type": "code", "execution_count": 174, - "metadata": { - "collapsed": false, - "deletable": true, - "editable": true - }, + "metadata": {}, "outputs": [], "source": [ "reset_graph()\n", @@ -5621,11 +4799,7 @@ { "cell_type": "code", "execution_count": 175, - "metadata": { - "collapsed": false, - "deletable": true, - "editable": true - }, + "metadata": {}, "outputs": [], "source": [ "n_epochs = 150\n", @@ -5648,10 +4822,7 @@ }, { "cell_type": "markdown", - "metadata": { - "deletable": true, - "editable": true - }, + "metadata": {}, "source": [ "Only 94.8% accuracy... So transfer learning helped us reduce the error rate from 5.2% to 3.3% (that's over 36% error reduction). Moreover, the model using transfer learning reached over 96% accuracy in less than 10 epochs.\n", "\n", @@ -5662,9 +4833,7 @@ "cell_type": "code", "execution_count": null, "metadata": { - "collapsed": true, - "deletable": true, - "editable": true + "collapsed": true }, "outputs": [], "source": [] @@ -5703,5 +4872,5 @@ } }, "nbformat": 4, - "nbformat_minor": 0 + "nbformat_minor": 1 }