From 5d53b561ad7ef314bbc22de379d600165640e961 Mon Sep 17 00:00:00 2001 From: Akshit Gupta Date: Mon, 5 Oct 2020 23:40:19 +0530 Subject: [PATCH 01/49] updated the import --- 02_end_to_end_machine_learning_project.ipynb | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/02_end_to_end_machine_learning_project.ipynb b/02_end_to_end_machine_learning_project.ipynb index 06b7747..a277f39 100644 --- a/02_end_to_end_machine_learning_project.ipynb +++ b/02_end_to_end_machine_learning_project.ipynb @@ -95,7 +95,7 @@ "source": [ "import os\n", "import tarfile\n", - "import urllib\n", + "import urllib.request\n", "\n", "DOWNLOAD_ROOT = \"https://raw.githubusercontent.com/ageron/handson-ml2/master/\"\n", "HOUSING_PATH = os.path.join(\"datasets\", \"housing\")\n", From cbfefe7a97ba07e836ffb5224bcca8af560b292d Mon Sep 17 00:00:00 2001 From: Ian Beauregard Date: Tue, 6 Oct 2020 17:02:03 -0400 Subject: [PATCH 02/49] Change function argument In Exercise 9, function `mnist_dataset` was called with the wrong argument. --- 13_loading_and_preprocessing_data.ipynb | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/13_loading_and_preprocessing_data.ipynb b/13_loading_and_preprocessing_data.ipynb index 144b216..8561f33 100644 --- a/13_loading_and_preprocessing_data.ipynb +++ b/13_loading_and_preprocessing_data.ipynb @@ -2040,8 +2040,8 @@ "outputs": [], "source": [ "train_set = mnist_dataset(train_filepaths, shuffle_buffer_size=60000)\n", - "valid_set = mnist_dataset(train_filepaths)\n", - "test_set = mnist_dataset(train_filepaths)" + "valid_set = mnist_dataset(valid_filepaths)\n", + "test_set = mnist_dataset(test_filepaths)" ] }, { From c3cbfd04d5e80ec88e95cd5b1e3c0829bb760d3f Mon Sep 17 00:00:00 2001 From: Ian Beauregard Date: Tue, 6 Oct 2020 17:51:42 -0400 Subject: [PATCH 03/49] Adding two missing words --- 13_loading_and_preprocessing_data.ipynb | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/13_loading_and_preprocessing_data.ipynb b/13_loading_and_preprocessing_data.ipynb index 8561f33..4e9a935 100644 --- a/13_loading_and_preprocessing_data.ipynb +++ b/13_loading_and_preprocessing_data.ipynb @@ -2274,7 +2274,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "But let's pretend the dataset does not fit in memory, just to make things more interesting. Luckily, each review fits on just one line (they use `
` to indicate line breaks), so we can read the reviews using a `TextLineDataset`. If they didn't we would have to preprocess the input files (e.g., converting them to TFRecords). For very large datasets, it would make sense a tool like Apache Beam for that." + "But let's pretend the dataset does not fit in memory, just to make things more interesting. Luckily, each review fits on just one line (they use `
` to indicate line breaks), so we can read the reviews using a `TextLineDataset`. If they didn't we would have to preprocess the input files (e.g., converting them to TFRecords). For very large datasets, it would make sense to use a tool like Apache Beam for that." ] }, { From a83d4885dce9bd247bbe384f92f3e22df03e9b27 Mon Sep 17 00:00:00 2001 From: Ian Beauregard Date: Tue, 6 Oct 2020 18:51:06 -0400 Subject: [PATCH 04/49] Correct a small typo One missing word. --- 13_loading_and_preprocessing_data.ipynb | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/13_loading_and_preprocessing_data.ipynb b/13_loading_and_preprocessing_data.ipynb index 4e9a935..c258b82 100644 --- a/13_loading_and_preprocessing_data.ipynb +++ b/13_loading_and_preprocessing_data.ipynb @@ -2473,7 +2473,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "Let's run it on the same `X_example`, just to make sure the word IDs are larger now, since the vocabulary bigger:" + "Let's run it on the same `X_example`, just to make sure the word IDs are larger now, since the vocabulary is bigger:" ] }, { From 08e387005399bba46e5b1aa605e467f47ef50272 Mon Sep 17 00:00:00 2001 From: Ian Beauregard Date: Tue, 6 Oct 2020 19:20:18 -0400 Subject: [PATCH 05/49] Correct small "code typo" --- 13_loading_and_preprocessing_data.ipynb | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/13_loading_and_preprocessing_data.ipynb b/13_loading_and_preprocessing_data.ipynb index c258b82..7d7ff12 100644 --- a/13_loading_and_preprocessing_data.ipynb +++ b/13_loading_and_preprocessing_data.ipynb @@ -2540,7 +2540,7 @@ "source": [ "class BagOfWords(keras.layers.Layer):\n", " def __init__(self, n_tokens, dtype=tf.int32, **kwargs):\n", - " super().__init__(dtype=tf.int32, **kwargs)\n", + " super().__init__(dtype=dtype, **kwargs)\n", " self.n_tokens = n_tokens\n", " def call(self, inputs):\n", " one_hot = tf.one_hot(inputs, self.n_tokens)\n", From 80f6cb27c080282e75a6036991c99736f6b13bba Mon Sep 17 00:00:00 2001 From: 8bitmp3 <19637339+8bitmp3@users.noreply.github.com> Date: Sat, 17 Oct 2020 15:04:51 +0100 Subject: [PATCH 06/49] Update (small) the reinforcement learning chapter --- 18_reinforcement_learning.ipynb | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/18_reinforcement_learning.ipynb b/18_reinforcement_learning.ipynb index e6d3717..726a137 100644 --- a/18_reinforcement_learning.ipynb +++ b/18_reinforcement_learning.ipynb @@ -565,7 +565,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "Let's create a neural network that will take observations as inputs, and output the action to take for each observation. To choose an action, the network will estimate a probability for each action, then we will select an action randomly according to the estimated probabilities. In the case of the Cart-Pole environment, there are just two possible actions (left or right), so we only need one output neuron: it will output the probability `p` of the action 0 (left), and of course the probability of action 1 (right) will be `1 - p`." + "Let's create a neural network that will take observations as inputs, and output the probabilities of actions to take for each observation. To choose an action, the network will estimate a probability for each action, then we will select an action randomly according to the estimated probabilities. 
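As a minimal, standalone illustration of picking an action at random according to estimated probabilities (made-up probabilities here, not the actual output of the notebook's policy network):

    import numpy as np

    probas = np.array([0.2, 0.5, 0.3])                 # assumed per-action probabilities
    action = np.random.choice(len(probas), p=probas)   # sample action index 0, 1 or 2
    print(action)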
In the case of the Cart-Pole environment, there are just two possible actions (left or right), so we only need one output neuron: it will output the probability `p` of the action 0 (left), and of course the probability of action 1 (right) will be `1 - p`." ] }, { From 9d99ae9f9fefa0d2ef1c124e5cec860a789c7da9 Mon Sep 17 00:00:00 2001 From: Ian Beauregard Date: Mon, 19 Oct 2020 12:38:17 -0400 Subject: [PATCH 07/49] Correct small coding typo --- 16_nlp_with_rnns_and_attention.ipynb | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/16_nlp_with_rnns_and_attention.ipynb b/16_nlp_with_rnns_and_attention.ipynb index 328e421..7944feb 100644 --- a/16_nlp_with_rnns_and_attention.ipynb +++ b/16_nlp_with_rnns_and_attention.ipynb @@ -1383,7 +1383,7 @@ "outputs": [], "source": [ "def string_to_ids(s, chars=POSSIBLE_CHARS):\n", - " return [POSSIBLE_CHARS.index(c) for c in s]" + " return [chars.index(c) for c in s]" ] }, { From 7848437dc23db090f8947db3ab919a750c6906a7 Mon Sep 17 00:00:00 2001 From: Ian Beauregard Date: Mon, 19 Oct 2020 12:44:33 -0400 Subject: [PATCH 08/49] Correct typo --- 16_nlp_with_rnns_and_attention.ipynb | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/16_nlp_with_rnns_and_attention.ipynb b/16_nlp_with_rnns_and_attention.ipynb index 7944feb..cd719a5 100644 --- a/16_nlp_with_rnns_and_attention.ipynb +++ b/16_nlp_with_rnns_and_attention.ipynb @@ -1452,7 +1452,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "What classes does it belong to?" + "What class does it belong to?" ] }, { From a2ffc37d2f4e2ff1be5ddd0b64ad849ebc81e402 Mon Sep 17 00:00:00 2001 From: Ian Beauregard Date: Mon, 19 Oct 2020 13:33:35 -0400 Subject: [PATCH 09/49] Modify creation of possible char list MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit I concatenated the string of all digits (+ comma and space) to the argument of function sorted ∘ set. Also, the digit '0' was written twice in the digit string. --- 16_nlp_with_rnns_and_attention.ipynb | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/16_nlp_with_rnns_and_attention.ipynb b/16_nlp_with_rnns_and_attention.ipynb index cd719a5..b682c3e 100644 --- a/16_nlp_with_rnns_and_attention.ipynb +++ b/16_nlp_with_rnns_and_attention.ipynb @@ -1599,7 +1599,7 @@ "metadata": {}, "outputs": [], "source": [ - "INPUT_CHARS = \"\".join(sorted(set(\"\".join(MONTHS)))) + \"01234567890, \"\n", + "INPUT_CHARS = \"\".join(sorted(set(\"\".join(MONTHS) + \"0123456789, \")))\n", "INPUT_CHARS" ] }, From e0cae0c7beaf144f138c486a39d693c03e8f89a0 Mon Sep 17 00:00:00 2001 From: Ian Beauregard Date: Mon, 19 Oct 2020 14:19:42 -0400 Subject: [PATCH 10/49] Replace deprecated method See https://www.tensorflow.org/api_docs/python/tf/keras/Sequential?hl=en#predict_classes. 
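As a rough, self-contained sketch of why this substitution is equivalent (dummy model and random inputs, not the notebook's `model` and `X_new`): taking the argmax over the predicted class probabilities reproduces the class IDs that `predict_classes()` used to return.

    import numpy as np
    from tensorflow import keras

    # Toy classifier with 10 output classes, just for illustration
    model = keras.Sequential([
        keras.layers.Dense(10, activation="softmax", input_shape=(4,))
    ])
    X_new = np.random.rand(3, 4).astype(np.float32)    # dummy inputs

    probas = model.predict(X_new)                      # shape (3, 10), one row of probabilities per sample
    class_ids = np.argmax(probas, axis=-1)             # predicted class ID per sample
    print(class_ids)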
--- 16_nlp_with_rnns_and_attention.ipynb | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-) diff --git a/16_nlp_with_rnns_and_attention.ipynb b/16_nlp_with_rnns_and_attention.ipynb index b682c3e..f7baa44 100644 --- a/16_nlp_with_rnns_and_attention.ipynb +++ b/16_nlp_with_rnns_and_attention.ipynb @@ -353,7 +353,7 @@ "outputs": [], "source": [ "X_new = preprocess([\"How are yo\"])\n", - "Y_pred = model.predict_classes(X_new)\n", + "Y_pred = np.argmax(model.predict(X_new), axis=-1)\n", "tokenizer.sequences_to_texts(Y_pred + 1)[0][-1] # 1st sentence, last char" ] }, @@ -1785,7 +1785,7 @@ "metadata": {}, "outputs": [], "source": [ - "ids = model.predict_classes(X_new)\n", + "ids = np.argmax(model.predict(X_new), axis=-1)\n", "for date_str in ids_to_date_strs(ids):\n", " print(date_str)" ] @@ -1819,7 +1819,7 @@ "metadata": {}, "outputs": [], "source": [ - "ids = model.predict_classes(X_new)\n", + "ids = np.argmax(model.predict(X_new), axis=-1)\n", "for date_str in ids_to_date_strs(ids):\n", " print(date_str)" ] @@ -1847,7 +1847,7 @@ "\n", "def convert_date_strs(date_strs):\n", " X = prepare_date_strs_padded(date_strs)\n", - " ids = model.predict_classes(X)\n", + " ids = np.argmax(model.predict(X), axis=-1)\n", " return ids_to_date_strs(ids)" ] }, From 2c700450b5d048eddeb3bf542692243570f9e305 Mon Sep 17 00:00:00 2001 From: Ian Beauregard Date: Mon, 19 Oct 2020 17:17:35 -0400 Subject: [PATCH 11/49] Change Embedding's input_dim argument Wrong argument for the decoder's embedding layer. --- 16_nlp_with_rnns_and_attention.ipynb | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/16_nlp_with_rnns_and_attention.ipynb b/16_nlp_with_rnns_and_attention.ipynb index f7baa44..562b856 100644 --- a/16_nlp_with_rnns_and_attention.ipynb +++ b/16_nlp_with_rnns_and_attention.ipynb @@ -2063,7 +2063,7 @@ " len(INPUT_CHARS) + 1, encoder_embedding_size)(encoder_inputs)\n", "\n", "decoder_embedding_layer = keras.layers.Embedding(\n", - " len(INPUT_CHARS) + 2, decoder_embedding_size)\n", + " len(OUTPUT_CHARS) + 2, decoder_embedding_size)\n", "decoder_embeddings = decoder_embedding_layer(decoder_inputs)\n", "\n", "encoder = keras.layers.LSTM(units, return_state=True)\n", @@ -2260,7 +2260,7 @@ " len(INPUT_CHARS) + 1, encoder_embedding_size)(encoder_inputs)\n", "\n", "decoder_embedding_layer = keras.layers.Embedding(\n", - " len(INPUT_CHARS) + 2, decoder_embedding_size)\n", + " len(OUTPUT_CHARS) + 2, decoder_embedding_size)\n", "decoder_embeddings = decoder_embedding_layer(decoder_inputs)\n", "\n", "encoder = keras.layers.LSTM(units, return_state=True)\n", From daf309c2bb7d747f2a2d0283103ce20ba6bd3d99 Mon Sep 17 00:00:00 2001 From: Ian Beauregard Date: Tue, 20 Oct 2020 12:08:21 -0400 Subject: [PATCH 12/49] Install transformers library --- 16_nlp_with_rnns_and_attention.ipynb | 1 + 1 file changed, 1 insertion(+) diff --git a/16_nlp_with_rnns_and_attention.ipynb b/16_nlp_with_rnns_and_attention.ipynb index 562b856..945bc5f 100644 --- a/16_nlp_with_rnns_and_attention.ipynb +++ b/16_nlp_with_rnns_and_attention.ipynb @@ -2588,6 +2588,7 @@ "metadata": {}, "outputs": [], "source": [ + "!pip install -q -U transformers\n", "from transformers import TFOpenAIGPTLMHeadModel\n", "\n", "model = TFOpenAIGPTLMHeadModel.from_pretrained(\"openai-gpt\")" From 98f6d26d3b96a915a9ff377c690c90de718fd283 Mon Sep 17 00:00:00 2001 From: bric Date: Mon, 9 Nov 2020 19:29:00 +0100 Subject: [PATCH 13/49] Update 01_the_machine_learning_landscape.ipynb MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 
Content-Transfer-Encoding: 8bit seems that new python libs used by google colab need „import urlib.request“ instead of „import urllib“ --- 01_the_machine_learning_landscape.ipynb | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/01_the_machine_learning_landscape.ipynb b/01_the_machine_learning_landscape.ipynb index 24f8aaa..a1f92db 100644 --- a/01_the_machine_learning_landscape.ipynb +++ b/01_the_machine_learning_landscape.ipynb @@ -124,7 +124,7 @@ "outputs": [], "source": [ "# Download the data\n", - "import urllib\n", + "import urllib.request\n", "DOWNLOAD_ROOT = \"https://raw.githubusercontent.com/ageron/handson-ml2/master/\"\n", "os.makedirs(datapath, exist_ok=True)\n", "for filename in (\"oecd_bli_2015.csv\", \"gdp_per_capita.csv\"):\n", From 4aa8c0b0d847aa8962944d5dc87968ebf819d2b8 Mon Sep 17 00:00:00 2001 From: hattackk <36685328+hattackk@users.noreply.github.com> Date: Sun, 15 Nov 2020 17:17:47 -0500 Subject: [PATCH 14/49] Updated wording from Multiple to Multiply Corrected wording from "multiple $D$ by" to "multiply $D$ by" --- math_linear_algebra.ipynb | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/math_linear_algebra.ipynb b/math_linear_algebra.ipynb index 501a176..48ff1db 100644 --- a/math_linear_algebra.ipynb +++ b/math_linear_algebra.ipynb @@ -1347,7 +1347,7 @@ "source": [ "Looks good! You can check the other elements until you get used to the algorithm.\n", "\n", - "We multiplied a $2 \\times 3$ matrix by a $3 \\times 4$ matrix, so the result is a $2 \\times 4$ matrix. The first matrix's number of columns has to be equal to the second matrix's number of rows. If we try to multiple $D$ by $A$, we get an error because D has 4 columns while A has 2 rows:" + "We multiplied a $2 \\times 3$ matrix by a $3 \\times 4$ matrix, so the result is a $2 \\times 4$ matrix. The first matrix's number of columns has to be equal to the second matrix's number of rows. 
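As a quick standalone check of that shape rule (generic matrices, not the notebook's $D$ and $A$): a $2 \times 3$ matrix times a $3 \times 4$ matrix is valid because the inner dimensions match, and the product is $2 \times 4$.

    import numpy as np

    E = np.arange(6).reshape(2, 3)     # 2 rows, 3 columns
    F = np.arange(12).reshape(3, 4)    # 3 rows, 4 columns
    print((E @ F).shape)               # (2, 4)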
If we try to multiply $D$ by $A$, we get an error because D has 4 columns while A has 2 rows:" ] }, { From a55e3f3033bc1639d1742852c91a111ff32f0e93 Mon Sep 17 00:00:00 2001 From: Cody McCormack Date: Thu, 19 Nov 2020 16:48:32 -0700 Subject: [PATCH 15/49] Fixed misspelling of 'literature' --- math_differential_calculus.ipynb | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/math_differential_calculus.ipynb b/math_differential_calculus.ipynb index 210ff97..3a2bba2 100644 --- a/math_differential_calculus.ipynb +++ b/math_differential_calculus.ipynb @@ -544,7 +544,7 @@ "id": "Zu6u_8bw7ZUc" }, "source": [ - "A word about notations: there are several other notations for the derivative that you will find in the litterature:\n", + "A word about notations: there are several other notations for the derivative that you will find in the literature:\n", "\n", "$f'(x) = \\dfrac{\\mathrm{d}f(x)}{\\mathrm{d}x} = \\dfrac{\\mathrm{d}}{\\mathrm{d}x}f(x)$\n", "\n", @@ -1780,7 +1780,7 @@ "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", - "version": "3.7.6" + "version": "3.6.1" }, "pycharm": { "stem_cell": { From f225f5978013f4eb2f7e1792488c99b7b36d4575 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Aur=C3=A9lien=20Geron?= Date: Sat, 21 Nov 2020 12:22:42 +1300 Subject: [PATCH 16/49] Update to latest library versions --- 02_end_to_end_machine_learning_project.ipynb | 161 ++++++++++-------- 03_classification.ipynb | 162 +++++++++++-------- requirements.txt | 77 ++++----- 3 files changed, 224 insertions(+), 176 deletions(-) diff --git a/02_end_to_end_machine_learning_project.ipynb b/02_end_to_end_machine_learning_project.ipynb index 06b7747..0aafdf6 100644 --- a/02_end_to_end_machine_learning_project.ipynb +++ b/02_end_to_end_machine_learning_project.ipynb @@ -931,7 +931,7 @@ "rooms_ix, bedrooms_ix, population_ix, households_ix = 3, 4, 5, 6\n", "\n", "class CombinedAttributesAdder(BaseEstimator, TransformerMixin):\n", - " def __init__(self, add_bedrooms_per_room = True): # no *args or **kargs\n", + " def __init__(self, add_bedrooms_per_room=True): # no *args or **kargs\n", " self.add_bedrooms_per_room = add_bedrooms_per_room\n", " def fit(self, X, y=None):\n", " return self # nothing else to do\n", @@ -949,11 +949,36 @@ "housing_extra_attribs = attr_adder.transform(housing.values)" ] }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Note that I hard coded the indices (3, 4, 5, 6) for concision and clarity in the book, but it would be much cleaner to get them dynamically, like this:" + ] + }, { "cell_type": "code", "execution_count": 71, "metadata": {}, "outputs": [], + "source": [ + "col_names = \"total_rooms\", \"total_bedrooms\", \"population\", \"households\"\n", + "rooms_ix, bedrooms_ix, population_ix, households_ix = [\n", + " housing.columns.get_loc(c) for c in col_names] # get the column indices" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Also, `housing_extra_attribs` is a NumPy array, we've lost the column names (unfortunately, that's a problem with Scikit-Learn). 
To recover a `DataFrame`, you could run this:" + ] + }, + { + "cell_type": "code", + "execution_count": 72, + "metadata": {}, + "outputs": [], "source": [ "housing_extra_attribs = pd.DataFrame(\n", " housing_extra_attribs,\n", @@ -971,7 +996,7 @@ }, { "cell_type": "code", - "execution_count": 72, + "execution_count": 73, "metadata": {}, "outputs": [], "source": [ @@ -989,7 +1014,7 @@ }, { "cell_type": "code", - "execution_count": 73, + "execution_count": 74, "metadata": {}, "outputs": [], "source": [ @@ -998,7 +1023,7 @@ }, { "cell_type": "code", - "execution_count": 74, + "execution_count": 75, "metadata": {}, "outputs": [], "source": [ @@ -1017,7 +1042,7 @@ }, { "cell_type": "code", - "execution_count": 75, + "execution_count": 76, "metadata": {}, "outputs": [], "source": [ @@ -1026,7 +1051,7 @@ }, { "cell_type": "code", - "execution_count": 76, + "execution_count": 77, "metadata": {}, "outputs": [], "source": [ @@ -1042,7 +1067,7 @@ }, { "cell_type": "code", - "execution_count": 77, + "execution_count": 78, "metadata": {}, "outputs": [], "source": [ @@ -1067,7 +1092,7 @@ }, { "cell_type": "code", - "execution_count": 78, + "execution_count": 79, "metadata": {}, "outputs": [], "source": [ @@ -1089,7 +1114,7 @@ }, { "cell_type": "code", - "execution_count": 79, + "execution_count": 80, "metadata": {}, "outputs": [], "source": [ @@ -1103,7 +1128,7 @@ }, { "cell_type": "code", - "execution_count": 80, + "execution_count": 81, "metadata": {}, "outputs": [], "source": [ @@ -1120,7 +1145,7 @@ }, { "cell_type": "code", - "execution_count": 81, + "execution_count": 82, "metadata": {}, "outputs": [], "source": [ @@ -1136,7 +1161,7 @@ }, { "cell_type": "code", - "execution_count": 82, + "execution_count": 83, "metadata": {}, "outputs": [], "source": [ @@ -1148,7 +1173,7 @@ }, { "cell_type": "code", - "execution_count": 83, + "execution_count": 84, "metadata": {}, "outputs": [], "source": [ @@ -1169,7 +1194,7 @@ }, { "cell_type": "code", - "execution_count": 84, + "execution_count": 85, "metadata": {}, "outputs": [], "source": [ @@ -1178,7 +1203,7 @@ }, { "cell_type": "code", - "execution_count": 85, + "execution_count": 86, "metadata": {}, "outputs": [], "source": [ @@ -1187,7 +1212,7 @@ }, { "cell_type": "code", - "execution_count": 86, + "execution_count": 87, "metadata": {}, "outputs": [], "source": [ @@ -1201,7 +1226,7 @@ }, { "cell_type": "code", - "execution_count": 87, + "execution_count": 88, "metadata": {}, "outputs": [], "source": [ @@ -1213,7 +1238,7 @@ }, { "cell_type": "code", - "execution_count": 88, + "execution_count": 89, "metadata": {}, "outputs": [], "source": [ @@ -1225,7 +1250,7 @@ }, { "cell_type": "code", - "execution_count": 89, + "execution_count": 90, "metadata": {}, "outputs": [], "source": [ @@ -1244,7 +1269,7 @@ }, { "cell_type": "code", - "execution_count": 90, + "execution_count": 91, "metadata": {}, "outputs": [], "source": [ @@ -1257,7 +1282,7 @@ }, { "cell_type": "code", - "execution_count": 91, + "execution_count": 92, "metadata": {}, "outputs": [], "source": [ @@ -1271,7 +1296,7 @@ }, { "cell_type": "code", - "execution_count": 92, + "execution_count": 93, "metadata": {}, "outputs": [], "source": [ @@ -1290,7 +1315,7 @@ }, { "cell_type": "code", - "execution_count": 93, + "execution_count": 94, "metadata": {}, "outputs": [], "source": [ @@ -1302,7 +1327,7 @@ }, { "cell_type": "code", - "execution_count": 94, + "execution_count": 95, "metadata": {}, "outputs": [], "source": [ @@ -1314,7 +1339,7 @@ }, { "cell_type": "code", - "execution_count": 95, + 
"execution_count": 96, "metadata": {}, "outputs": [], "source": [ @@ -1328,7 +1353,7 @@ }, { "cell_type": "code", - "execution_count": 96, + "execution_count": 97, "metadata": {}, "outputs": [], "source": [ @@ -1338,7 +1363,7 @@ }, { "cell_type": "code", - "execution_count": 97, + "execution_count": 98, "metadata": {}, "outputs": [], "source": [ @@ -1354,7 +1379,7 @@ }, { "cell_type": "code", - "execution_count": 98, + "execution_count": 99, "metadata": {}, "outputs": [], "source": [ @@ -1384,7 +1409,7 @@ }, { "cell_type": "code", - "execution_count": 99, + "execution_count": 100, "metadata": {}, "outputs": [], "source": [ @@ -1393,7 +1418,7 @@ }, { "cell_type": "code", - "execution_count": 100, + "execution_count": 101, "metadata": {}, "outputs": [], "source": [ @@ -1409,7 +1434,7 @@ }, { "cell_type": "code", - "execution_count": 101, + "execution_count": 102, "metadata": {}, "outputs": [], "source": [ @@ -1420,7 +1445,7 @@ }, { "cell_type": "code", - "execution_count": 102, + "execution_count": 103, "metadata": {}, "outputs": [], "source": [ @@ -1429,7 +1454,7 @@ }, { "cell_type": "code", - "execution_count": 103, + "execution_count": 104, "metadata": {}, "outputs": [], "source": [ @@ -1449,7 +1474,7 @@ }, { "cell_type": "code", - "execution_count": 104, + "execution_count": 105, "metadata": {}, "outputs": [], "source": [ @@ -1460,7 +1485,7 @@ }, { "cell_type": "code", - "execution_count": 105, + "execution_count": 106, "metadata": {}, "outputs": [], "source": [ @@ -1470,7 +1495,7 @@ }, { "cell_type": "code", - "execution_count": 106, + "execution_count": 107, "metadata": {}, "outputs": [], "source": [ @@ -1484,7 +1509,7 @@ }, { "cell_type": "code", - "execution_count": 107, + "execution_count": 108, "metadata": {}, "outputs": [], "source": [ @@ -1502,7 +1527,7 @@ }, { "cell_type": "code", - "execution_count": 108, + "execution_count": 109, "metadata": {}, "outputs": [], "source": [ @@ -1518,7 +1543,7 @@ }, { "cell_type": "code", - "execution_count": 109, + "execution_count": 110, "metadata": {}, "outputs": [], "source": [ @@ -1540,7 +1565,7 @@ }, { "cell_type": "code", - "execution_count": 110, + "execution_count": 111, "metadata": {}, "outputs": [], "source": [ @@ -1560,7 +1585,7 @@ }, { "cell_type": "code", - "execution_count": 111, + "execution_count": 112, "metadata": {}, "outputs": [], "source": [ @@ -1585,7 +1610,7 @@ }, { "cell_type": "code", - "execution_count": 112, + "execution_count": 113, "metadata": {}, "outputs": [], "source": [ @@ -1607,7 +1632,7 @@ }, { "cell_type": "code", - "execution_count": 113, + "execution_count": 114, "metadata": {}, "outputs": [], "source": [ @@ -1616,7 +1641,7 @@ }, { "cell_type": "code", - "execution_count": 114, + "execution_count": 115, "metadata": {}, "outputs": [], "source": [ @@ -1635,7 +1660,7 @@ }, { "cell_type": "code", - "execution_count": 115, + "execution_count": 116, "metadata": {}, "outputs": [], "source": [ @@ -1671,7 +1696,7 @@ }, { "cell_type": "code", - "execution_count": 116, + "execution_count": 117, "metadata": {}, "outputs": [], "source": [ @@ -1697,7 +1722,7 @@ }, { "cell_type": "code", - "execution_count": 117, + "execution_count": 118, "metadata": {}, "outputs": [], "source": [ @@ -1715,7 +1740,7 @@ }, { "cell_type": "code", - "execution_count": 118, + "execution_count": 119, "metadata": {}, "outputs": [], "source": [ @@ -1745,7 +1770,7 @@ }, { "cell_type": "code", - "execution_count": 119, + "execution_count": 120, "metadata": {}, "outputs": [], "source": [ @@ -1778,7 +1803,7 @@ }, { "cell_type": "code", - 
"execution_count": 120, + "execution_count": 121, "metadata": {}, "outputs": [], "source": [ @@ -1796,7 +1821,7 @@ }, { "cell_type": "code", - "execution_count": 121, + "execution_count": 122, "metadata": {}, "outputs": [], "source": [ @@ -1819,7 +1844,7 @@ }, { "cell_type": "code", - "execution_count": 122, + "execution_count": 123, "metadata": {}, "outputs": [], "source": [ @@ -1844,7 +1869,7 @@ }, { "cell_type": "code", - "execution_count": 123, + "execution_count": 124, "metadata": {}, "outputs": [], "source": [ @@ -1883,7 +1908,7 @@ }, { "cell_type": "code", - "execution_count": 124, + "execution_count": 125, "metadata": {}, "outputs": [], "source": [ @@ -1919,7 +1944,7 @@ }, { "cell_type": "code", - "execution_count": 125, + "execution_count": 126, "metadata": {}, "outputs": [], "source": [ @@ -1935,7 +1960,7 @@ }, { "cell_type": "code", - "execution_count": 126, + "execution_count": 127, "metadata": {}, "outputs": [], "source": [ @@ -1945,7 +1970,7 @@ }, { "cell_type": "code", - "execution_count": 127, + "execution_count": 128, "metadata": {}, "outputs": [], "source": [ @@ -1961,7 +1986,7 @@ }, { "cell_type": "code", - "execution_count": 128, + "execution_count": 129, "metadata": {}, "outputs": [], "source": [ @@ -1977,7 +2002,7 @@ }, { "cell_type": "code", - "execution_count": 129, + "execution_count": 130, "metadata": {}, "outputs": [], "source": [ @@ -1989,7 +2014,7 @@ }, { "cell_type": "code", - "execution_count": 130, + "execution_count": 131, "metadata": {}, "outputs": [], "source": [ @@ -2005,7 +2030,7 @@ }, { "cell_type": "code", - "execution_count": 131, + "execution_count": 132, "metadata": {}, "outputs": [], "source": [ @@ -2021,7 +2046,7 @@ }, { "cell_type": "code", - "execution_count": 132, + "execution_count": 133, "metadata": {}, "outputs": [], "source": [ @@ -2051,7 +2076,7 @@ }, { "cell_type": "code", - "execution_count": 133, + "execution_count": 134, "metadata": {}, "outputs": [], "source": [ @@ -2064,7 +2089,7 @@ }, { "cell_type": "code", - "execution_count": 134, + "execution_count": 135, "metadata": {}, "outputs": [], "source": [ @@ -2080,7 +2105,7 @@ }, { "cell_type": "code", - "execution_count": 135, + "execution_count": 136, "metadata": {}, "outputs": [], "source": [ @@ -2114,7 +2139,7 @@ }, { "cell_type": "code", - "execution_count": 136, + "execution_count": 137, "metadata": {}, "outputs": [], "source": [ @@ -2130,7 +2155,7 @@ }, { "cell_type": "code", - "execution_count": 137, + "execution_count": 138, "metadata": {}, "outputs": [], "source": [ @@ -2168,7 +2193,7 @@ "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", - "version": "3.7.6" + "version": "3.7.8" }, "nav_menu": { "height": "279px", diff --git a/03_classification.ipynb b/03_classification.ipynb index b574513..5d76ee1 100644 --- a/03_classification.ipynb +++ b/03_classification.ipynb @@ -291,7 +291,7 @@ "from sklearn.model_selection import StratifiedKFold\n", "from sklearn.base import clone\n", "\n", - "skfolds = StratifiedKFold(n_splits=3, random_state=42)\n", + "skfolds = StratifiedKFold(n_splits=3, shuffle=True, random_state=42)\n", "\n", "for train_index, test_index in skfolds.split(X_train, y_train_5):\n", " clone_clf = clone(sgd_clf)\n", @@ -306,6 +306,13 @@ " print(n_correct / len(y_pred))" ] }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "**Note**: `shuffle=True` was omitted by mistake in previous releases of the book." 
+ ] + }, { "cell_type": "code", "execution_count": 19, @@ -330,6 +337,17 @@ "cross_val_score(never_5_clf, X_train, y_train_5, cv=3, scoring=\"accuracy\")" ] }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "**Warning**: this output (and many others in this notebook and other notebooks) may differ slightly from those in the book. Don't worry, that's okay! There are several reasons for this:\n", + "* first, Scikit-Learn and other libraries evolve, and algorithms get tweaked a bit, which may change the exact result you get. If you use the latest Scikit-Learn version (and in general, you really should), you probably won't be using the exact same version I used when I wrote the book or this notebook, hence the difference. I try to keep this notebook reasonably up to date, but I can't change the numbers on the pages in your copy of the book.\n", + "* second, many training algorithms are stochastic, meaning they rely on randomness. In principle, it's possible to get consistent outputs from a random number generator by setting the seed from which it generates the pseudo-random numbers (which is why you will see `random_state=42` or `np.random.seed(42)` pretty often). However, sometimes this does not suffice due to the other factors listed here.\n", + "* third, if the training algorithm runs across multiple threads (as do some algorithms implemented in C) or across multiple processes (e.g., when using the `n_jobs` argument), then the precise order in which operations will run is not always guaranteed, and thus the exact result may vary slightly.\n", + "* lastly, other things may prevent perfect reproducibility, such as Python maps and sets whose order is not guaranteed to be stable across sessions, or the order of files in a directory which is also not guaranteed." 
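A tiny illustration of the seeding point above (plain NumPy draws, nothing notebook-specific): fixing the seed makes the pseudo-random sequence repeatable across runs.

    import numpy as np

    np.random.seed(42)
    a = np.random.rand(3)
    np.random.seed(42)
    b = np.random.rand(3)
    print(np.array_equal(a, b))        # True: same seed, same draws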
+ ] + }, { "cell_type": "code", "execution_count": 21, @@ -375,11 +393,12 @@ }, { "cell_type": "code", - "execution_count": 25, + "execution_count": 27, "metadata": {}, "outputs": [], "source": [ - "4096 / (4096 + 1522)" + "cm = confusion_matrix(y_train_5, y_train_pred)\n", + "cm[1, 1] / (cm[0, 1] + cm[1, 1])" ] }, { @@ -393,11 +412,11 @@ }, { "cell_type": "code", - "execution_count": 27, + "execution_count": 28, "metadata": {}, "outputs": [], "source": [ - "4096 / (4096 + 1325)" + "cm[1, 1] / (cm[1, 0] + cm[1, 1])" ] }, { @@ -417,7 +436,7 @@ "metadata": {}, "outputs": [], "source": [ - "4096 / (4096 + (1522 + 1325) / 2)" + "cm[1, 1] / (cm[1, 1] + (cm[1, 0] + cm[0, 1]) / 2)" ] }, { @@ -462,7 +481,7 @@ }, { "cell_type": "code", - "execution_count": 34, + "execution_count": 30, "metadata": {}, "outputs": [], "source": [ @@ -472,7 +491,7 @@ }, { "cell_type": "code", - "execution_count": 35, + "execution_count": 31, "metadata": {}, "outputs": [], "source": [ @@ -483,7 +502,7 @@ }, { "cell_type": "code", - "execution_count": 36, + "execution_count": 32, "metadata": {}, "outputs": [], "source": [ @@ -514,7 +533,7 @@ }, { "cell_type": "code", - "execution_count": 37, + "execution_count": 33, "metadata": {}, "outputs": [], "source": [ @@ -523,7 +542,7 @@ }, { "cell_type": "code", - "execution_count": 38, + "execution_count": 35, "metadata": {}, "outputs": [], "source": [ @@ -536,47 +555,20 @@ "\n", "plt.figure(figsize=(8, 6))\n", "plot_precision_vs_recall(precisions, recalls)\n", - "plt.plot([0.4368, 0.4368], [0., 0.9], \"r:\")\n", - "plt.plot([0.0, 0.4368], [0.9, 0.9], \"r:\")\n", - "plt.plot([0.4368], [0.9], \"ro\")\n", + "plt.plot([recall_90_precision, recall_90_precision], [0., 0.9], \"r:\")\n", + "plt.plot([0.0, recall_90_precision], [0.9, 0.9], \"r:\")\n", + "plt.plot([recall_90_precision], [0.9], \"ro\")\n", "save_fig(\"precision_vs_recall_plot\")\n", "plt.show()" ] }, - { - "cell_type": "code", - "execution_count": 39, - "metadata": {}, - "outputs": [], - "source": [ - "threshold_90_precision = thresholds[np.argmax(precisions >= 0.90)]" - ] - }, - { - "cell_type": "code", - "execution_count": 40, - "metadata": {}, - "outputs": [], - "source": [ - "threshold_90_precision" - ] - }, - { - "cell_type": "code", - "execution_count": 41, - "metadata": {}, - "outputs": [], - "source": [ - "y_train_pred_90 = (y_scores >= threshold_90_precision)" - ] - }, { "cell_type": "code", "execution_count": 42, "metadata": {}, "outputs": [], "source": [ - "precision_score(y_train_5, y_train_pred_90)" + "threshold_90_precision = thresholds[np.argmax(precisions >= 0.90)]" ] }, { @@ -584,6 +576,33 @@ "execution_count": 43, "metadata": {}, "outputs": [], + "source": [ + "threshold_90_precision" + ] + }, + { + "cell_type": "code", + "execution_count": 44, + "metadata": {}, + "outputs": [], + "source": [ + "y_train_pred_90 = (y_scores >= threshold_90_precision)" + ] + }, + { + "cell_type": "code", + "execution_count": 45, + "metadata": {}, + "outputs": [], + "source": [ + "precision_score(y_train_5, y_train_pred_90)" + ] + }, + { + "cell_type": "code", + "execution_count": 46, + "metadata": {}, + "outputs": [], "source": [ "recall_score(y_train_5, y_train_pred_90)" ] @@ -597,7 +616,7 @@ }, { "cell_type": "code", - "execution_count": 44, + "execution_count": 47, "metadata": {}, "outputs": [], "source": [ @@ -608,7 +627,7 @@ }, { "cell_type": "code", - "execution_count": 45, + "execution_count": 50, "metadata": {}, "outputs": [], "source": [ @@ -620,18 +639,19 @@ " plt.ylabel('True Positive Rate (Recall)', 
fontsize=16) # Not shown\n", " plt.grid(True) # Not shown\n", "\n", - "plt.figure(figsize=(8, 6)) # Not shown\n", + "plt.figure(figsize=(8, 6)) # Not shown\n", "plot_roc_curve(fpr, tpr)\n", - "plt.plot([4.837e-3, 4.837e-3], [0., 0.4368], \"r:\") # Not shown\n", - "plt.plot([0.0, 4.837e-3], [0.4368, 0.4368], \"r:\") # Not shown\n", - "plt.plot([4.837e-3], [0.4368], \"ro\") # Not shown\n", - "save_fig(\"roc_curve_plot\") # Not shown\n", + "fpr_90 = fpr[np.argmax(tpr >= recall_90_precision)] # Not shown\n", + "plt.plot([fpr_90, fpr_90], [0., recall_90_precision], \"r:\") # Not shown\n", + "plt.plot([0.0, fpr_90], [recall_90_precision, recall_90_precision], \"r:\") # Not shown\n", + "plt.plot([fpr_90], [recall_90_precision], \"ro\") # Not shown\n", + "save_fig(\"roc_curve_plot\") # Not shown\n", "plt.show()" ] }, { "cell_type": "code", - "execution_count": 46, + "execution_count": 53, "metadata": {}, "outputs": [], "source": [ @@ -649,7 +669,7 @@ }, { "cell_type": "code", - "execution_count": 47, + "execution_count": 54, "metadata": {}, "outputs": [], "source": [ @@ -661,7 +681,7 @@ }, { "cell_type": "code", - "execution_count": 48, + "execution_count": 55, "metadata": {}, "outputs": [], "source": [ @@ -671,18 +691,20 @@ }, { "cell_type": "code", - "execution_count": 49, + "execution_count": 57, "metadata": {}, "outputs": [], "source": [ + "recall_for_forest = tpr_forest[np.argmax(fpr_forest >= fpr_90)]\n", + "\n", "plt.figure(figsize=(8, 6))\n", "plt.plot(fpr, tpr, \"b:\", linewidth=2, label=\"SGD\")\n", "plot_roc_curve(fpr_forest, tpr_forest, \"Random Forest\")\n", - "plt.plot([4.837e-3, 4.837e-3], [0., 0.4368], \"r:\")\n", - "plt.plot([0.0, 4.837e-3], [0.4368, 0.4368], \"r:\")\n", - "plt.plot([4.837e-3], [0.4368], \"ro\")\n", - "plt.plot([4.837e-3, 4.837e-3], [0., 0.9487], \"r:\")\n", - "plt.plot([4.837e-3], [0.9487], \"ro\")\n", + "plt.plot([fpr_90, fpr_90], [0., recall_90_precision], \"r:\")\n", + "plt.plot([0.0, fpr_90], [recall_90_precision, recall_90_precision], \"r:\")\n", + "plt.plot([fpr_90], [recall_90_precision], \"ro\")\n", + "plt.plot([fpr_90, fpr_90], [0., recall_for_forest], \"r:\")\n", + "plt.plot([fpr_90], [recall_for_forest], \"ro\")\n", "plt.grid(True)\n", "plt.legend(loc=\"lower right\", fontsize=16)\n", "save_fig(\"roc_curve_comparison_plot\")\n", @@ -691,7 +713,7 @@ }, { "cell_type": "code", - "execution_count": 50, + "execution_count": 58, "metadata": {}, "outputs": [], "source": [ @@ -700,7 +722,7 @@ }, { "cell_type": "code", - "execution_count": 51, + "execution_count": 59, "metadata": {}, "outputs": [], "source": [ @@ -710,7 +732,7 @@ }, { "cell_type": "code", - "execution_count": 52, + "execution_count": 60, "metadata": {}, "outputs": [], "source": [ @@ -1031,7 +1053,7 @@ "outputs": [], "source": [ "from sklearn.dummy import DummyClassifier\n", - "dmy_clf = DummyClassifier()\n", + "dmy_clf = DummyClassifier(strategy=\"prior\")\n", "y_probas_dmy = cross_val_predict(dmy_clf, X_train, y_train_5, cv=3, method=\"predict_proba\")\n", "y_scores_dmy = y_probas_dmy[:, 1]" ] @@ -2127,14 +2149,14 @@ }, { "cell_type": "code", - "execution_count": 142, + "execution_count": 185, "metadata": {}, "outputs": [], "source": [ "import numpy as np\n", "from sklearn.model_selection import train_test_split\n", "\n", - "X = np.array(ham_emails + spam_emails)\n", + "X = np.array(ham_emails + spam_emails, dtype=object)\n", "y = np.array([0] * len(ham_emails) + [1] * len(spam_emails))\n", "\n", "X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)" @@ 
-2488,14 +2510,14 @@ }, { "cell_type": "code", - "execution_count": 158, + "execution_count": 183, "metadata": {}, "outputs": [], "source": [ "from sklearn.linear_model import LogisticRegression\n", "from sklearn.model_selection import cross_val_score\n", "\n", - "log_clf = LogisticRegression(solver=\"lbfgs\", random_state=42)\n", + "log_clf = LogisticRegression(solver=\"lbfgs\", max_iter=1000, random_state=42)\n", "score = cross_val_score(log_clf, X_train_transformed, y_train, cv=3, verbose=3)\n", "score.mean()" ] @@ -2504,14 +2526,14 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "Over 98.7%, not bad for a first try! :) However, remember that we are using the \"easy\" dataset. You can try with the harder datasets, the results won't be so amazing. You would have to try multiple models, select the best ones and fine-tune them using cross-validation, and so on.\n", + "Over 98.5%, not bad for a first try! :) However, remember that we are using the \"easy\" dataset. You can try with the harder datasets, the results won't be so amazing. You would have to try multiple models, select the best ones and fine-tune them using cross-validation, and so on.\n", "\n", "But you get the picture, so let's stop now, and just print out the precision/recall we get on the test set:" ] }, { "cell_type": "code", - "execution_count": 159, + "execution_count": 184, "metadata": {}, "outputs": [], "source": [ @@ -2519,7 +2541,7 @@ "\n", "X_test_transformed = preprocess_pipeline.transform(X_test)\n", "\n", - "log_clf = LogisticRegression(solver=\"lbfgs\", random_state=42)\n", + "log_clf = LogisticRegression(solver=\"lbfgs\", max_iter=1000, random_state=42)\n", "log_clf.fit(X_train_transformed, y_train)\n", "\n", "y_pred = log_clf.predict(X_test_transformed)\n", @@ -2552,7 +2574,7 @@ "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", - "version": "3.7.6" + "version": "3.7.8" }, "nav_menu": {}, "toc": { diff --git a/requirements.txt b/requirements.txt index b06dc2d..440c6d7 100644 --- a/requirements.txt +++ b/requirements.txt @@ -2,71 +2,64 @@ # on Windows or when using a GPU. Please see the installation # instructions in INSTALL.md + ##### Core scientific packages jupyter==1.0.0 -matplotlib==3.1.3 -numpy==1.18.1 -pandas==1.0.3 -scipy==1.4.1 - +matplotlib==3.3.2 +numpy==1.18.5 +pandas==1.1.3 +scipy==1.5.3 ##### Machine Learning packages -scikit-learn==0.22 +scikit-learn==0.23.2 # Optional: the XGBoost library is only used in chapter 7 -xgboost==1.0.2 +xgboost==1.2.1 # Optional: the transformers library is only using in chapter 16 -transformers==2.8.0 +transformers==3.3.1 ##### TensorFlow-related packages # If you have a TF-compatible GPU and you want to enable GPU support, then -# replace tensorflow with tensorflow-gpu, and replace tensorflow-serving-api -# with tensorflow-serving-api-gpu. +# replace tensorflow-serving-api with tensorflow-serving-api-gpu. # Your GPU must have CUDA Compute Capability 3.5 or higher support, and # you must install CUDA, cuDNN and more: see tensorflow.org for the detailed # installation instructions. -tensorflow==2.1.0 - +tensorflow==2.3.1 # Optional: the TF Serving API library is just needed for chapter 19. 
-tensorflow-serving-api==2.1.0 -#tensorflow-serving-api-gpu==2.1.0 +tensorflow-serving-api==2.3.0 # or tensorflow-serving-api-gpu if gpu -tensorboard==2.1.1 -tensorboard-plugin-profile==2.2.0 -tensorflow-datasets==2.1.0 -tensorflow-hub==0.7.0 -tensorflow-probability==0.9.0 +tensorboard==2.3.0 +tensorboard-plugin-profile==2.3.0 +tensorflow-datasets==4.0.1 +tensorflow-hub==0.9.0 +tensorflow-probability==0.11.1 # Optional: only used in chapter 13. # NOT AVAILABLE ON WINDOWS -tfx==0.21.2 +tfx==0.24.1 # Optional: only used in chapter 16. # NOT AVAILABLE ON WINDOWS -tensorflow-addons==0.8.3 +tensorflow-addons==0.11.2 ##### Reinforcement Learning library (chapter 18) # There are a few dependencies you need to install first, check out: # https://github.com/openai/gym#installing-everything -gym[atari]==0.17.1 +gym[atari]==0.17.3 # On Windows, install atari_py using: # pip install --no-index -f https://github.com/Kojoley/atari-py/releases atari_py -tf-agents==0.3.0 - +tf-agents==0.6.0 ##### Image manipulation -imageio==2.6.1 -Pillow==7.0.0 -scikit-image==0.16.2 -graphviz==0.13.2 -pydot==1.4.1 -opencv-python==4.2.0.32 -pyglet==1.5.0 +Pillow==8.0.0 +graphviz==0.14.2 +opencv-python==4.4.0.44 +pyglet==1.4.11 #pyvirtualdisplay # needed in chapter 16, if on a headless server # (i.e., without screen, e.g., Colab or VM) @@ -78,10 +71,10 @@ pyglet==1.5.0 joblib==0.14.1 # Easy http requests -requests==2.23.0 +requests==2.24.0 # Nice utility to diff Jupyter Notebooks. -nbdime==2.0.0 +nbdime==2.1.0 # May be useful with Pandas for complex "where" clauses (e.g., Pandas # tutorial). @@ -89,13 +82,21 @@ numexpr==2.7.1 # Optional: these libraries can be useful in the classification chapter, # exercise 4. -nltk==3.4.5 -urlextract==0.14.0 +nltk==3.5 +urlextract==1.1.0 # Optional: these libraries are only used in chapter 16 -spacy==2.2.4 -ftfy==5.7 +ftfy==5.8 # Optional: tqdm displays nice progress bars, ipywidgets for tqdm's notebook support -tqdm==4.43.0 +tqdm==4.50.2 ipywidgets==7.5.1 + + + +# Specific lib versions to avoid conflicts +attrs==19.3.0 +cloudpickle==1.3.0 +dill==0.3.1.1 +gast==0.3.3 +httplib2==0.17.4 From 8ebdcffc6b65257f76c9197776c6fd3f47d5eb4e Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Aur=C3=A9lien=20Geron?= Date: Mon, 23 Nov 2020 16:52:37 +1300 Subject: [PATCH 17/49] Work around TF Agents issue: env.step(1) => env.step(np.array(1)) --- 18_reinforcement_learning.ipynb | 19 +++++++++++++------ 1 file changed, 13 insertions(+), 6 deletions(-) diff --git a/18_reinforcement_learning.ipynb b/18_reinforcement_learning.ipynb index e6d3717..b17da27 100644 --- a/18_reinforcement_learning.ipynb +++ b/18_reinforcement_learning.ipynb @@ -1860,13 +1860,20 @@ "env.reset()" ] }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "**Warning**: since TF Agents 0.4.0, there seems to be an issue with passing an integer to the `env.step()` method (it raises an `AttributeError`). You need to wrap it in a NumPy array, as done below. Please see [TF Agents Issue #520](https://github.com/tensorflow/agents/issues/520) for more details." 
+ ] + }, { "cell_type": "code", "execution_count": 82, "metadata": {}, "outputs": [], "source": [ - "env.step(1) # Fire" + "env.step(np.array(1)) # Fire" ] }, { @@ -2074,9 +2081,9 @@ "source": [ "env.seed(42)\n", "env.reset()\n", - "time_step = env.step(1) # FIRE\n", + "time_step = env.step(np.array(1)) # FIRE\n", "for _ in range(4):\n", - " time_step = env.step(3) # LEFT" + " time_step = env.step(np.array(3)) # LEFT" ] }, { @@ -2215,7 +2222,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "Create the replay buffer:" + "Create the replay buffer (this may use a lot of RAM, so please reduce the buffer size if you get an out-of-memory error):" ] }, { @@ -2521,7 +2528,7 @@ " lives = tf_env.pyenv.envs[0].ale.lives()\n", " if prev_lives != lives:\n", " tf_env.reset()\n", - " tf_env.pyenv.envs[0].step(1)\n", + " tf_env.pyenv.envs[0].step(np.array(1))\n", " prev_lives = lives\n", "\n", "watch_driver = DynamicStepDriver(\n", @@ -2790,7 +2797,7 @@ "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", - "version": "3.7.6" + "version": "3.7.8" } }, "nbformat": 4, From 05aca9c313becab22fc5cda65684c1563228280a Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Alejandro=20Samar=C3=ADn?= Date: Fri, 27 Nov 2020 01:59:30 +0000 Subject: [PATCH 18/49] Fix bottom='off' to bottom=False in tools_matplotlib.ipynb --- tools_matplotlib.ipynb | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/tools_matplotlib.ipynb b/tools_matplotlib.ipynb index 1cf655a..bddb7f6 100644 --- a/tools_matplotlib.ipynb +++ b/tools_matplotlib.ipynb @@ -695,7 +695,7 @@ "ax = plt.subplot(133)\n", "plt.plot(x, x**3)\n", "plt.minorticks_on()\n", - "ax.tick_params(axis='x', which='minor', bottom='off')\n", + "ax.tick_params(axis='x', which='minor', bottom=False)\n", "ax.xaxis.set_ticks([-2, 0, 1, 2])\n", "ax.yaxis.set_ticks(np.arange(-5, 5, 1))\n", "ax.yaxis.set_ticklabels([\"min\", -4, -3, -2, -1, 0, 1, 2, 3, \"max\"])\n", From db6ed9d3c81569c84ba1f8da99de2dad0b8220b3 Mon Sep 17 00:00:00 2001 From: yx-chan131 <68681893+yx-chan131@users.noreply.github.com> Date: Fri, 4 Dec 2020 14:27:37 +0800 Subject: [PATCH 19/49] Update 01_the_machine_learning_landscape.ipynb must specifically import urllib.request --- 01_the_machine_learning_landscape.ipynb | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/01_the_machine_learning_landscape.ipynb b/01_the_machine_learning_landscape.ipynb index 24f8aaa..a1f92db 100644 --- a/01_the_machine_learning_landscape.ipynb +++ b/01_the_machine_learning_landscape.ipynb @@ -124,7 +124,7 @@ "outputs": [], "source": [ "# Download the data\n", - "import urllib\n", + "import urllib.request\n", "DOWNLOAD_ROOT = \"https://raw.githubusercontent.com/ageron/handson-ml2/master/\"\n", "os.makedirs(datapath, exist_ok=True)\n", "for filename in (\"oecd_bli_2015.csv\", \"gdp_per_capita.csv\"):\n", From dad239b5a7234e3cfb1bce7c3e90b95c6014d18d Mon Sep 17 00:00:00 2001 From: Marco Breemhaar Date: Thu, 31 Dec 2020 17:34:26 +0100 Subject: [PATCH 20/49] Fixed compatibility issue for scikit-learn 0.24 --- 03_classification.ipynb | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/03_classification.ipynb b/03_classification.ipynb index 5d76ee1..567f7ff 100644 --- a/03_classification.ipynb +++ b/03_classification.ipynb @@ -91,7 +91,7 @@ "outputs": [], "source": [ "from sklearn.datasets import fetch_openml\n", - "mnist = fetch_openml('mnist_784', version=1)\n", + "mnist = fetch_openml('mnist_784', version=1, as_frame=False)\n", "mnist.keys()" ] }, From 
1ab5e16f37b13a38c6c290e3a55d7f424565b5f7 Mon Sep 17 00:00:00 2001 From: hms5232 Date: Mon, 4 Jan 2021 21:25:46 +0800 Subject: [PATCH 21/49] Update 02 about urllib module attribute See https://github.com/ageron/handson-ml2/issues/219 This error also occurred in 01. There are many PRs for 01 but none for 02? --- 02_end_to_end_machine_learning_project.ipynb | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/02_end_to_end_machine_learning_project.ipynb b/02_end_to_end_machine_learning_project.ipynb index 0aafdf6..69714f2 100644 --- a/02_end_to_end_machine_learning_project.ipynb +++ b/02_end_to_end_machine_learning_project.ipynb @@ -95,7 +95,7 @@ "source": [ "import os\n", "import tarfile\n", - "import urllib\n", + "import urllib.request\n", "\n", "DOWNLOAD_ROOT = \"https://raw.githubusercontent.com/ageron/handson-ml2/master/\"\n", "HOUSING_PATH = os.path.join(\"datasets\", \"housing\")\n", From 9b7ab19c56db92a7635fa82e1df37d81fbac291d Mon Sep 17 00:00:00 2001 From: Jerome Lovy Date: Mon, 25 Jan 2021 11:55:32 +0100 Subject: [PATCH 22/49] Import urllib.request instead of urllib As of January 25, 2021, in some environments, such as Colab (Python 3.6.9), the following import statement import urllib is not the right one for using urllib.request. Indeed, calls to urllib.request functions then yield the following error: AttributeError: module 'urllib' has no attribute 'request' One must import urllib.request instead. See also https://stackoverflow.com/q/22278993 --- 01_the_machine_learning_landscape.ipynb | 2 +- 02_end_to_end_machine_learning_project.ipynb | 2 +- 03_classification.ipynb | 2 +- 09_unsupervised_learning.ipynb | 2 +- 13_loading_and_preprocessing_data.ipynb | 2 +- 5 files changed, 5 insertions(+), 5 deletions(-) diff --git a/01_the_machine_learning_landscape.ipynb b/01_the_machine_learning_landscape.ipynb index 24f8aaa..a1f92db 100644 --- a/01_the_machine_learning_landscape.ipynb +++ b/01_the_machine_learning_landscape.ipynb @@ -124,7 +124,7 @@ "outputs": [], "source": [ "# Download the data\n", - "import urllib\n", + "import urllib.request\n", "DOWNLOAD_ROOT = \"https://raw.githubusercontent.com/ageron/handson-ml2/master/\"\n", "os.makedirs(datapath, exist_ok=True)\n", "for filename in (\"oecd_bli_2015.csv\", \"gdp_per_capita.csv\"):\n", diff --git a/02_end_to_end_machine_learning_project.ipynb b/02_end_to_end_machine_learning_project.ipynb index 0aafdf6..69714f2 100644 --- a/02_end_to_end_machine_learning_project.ipynb +++ b/02_end_to_end_machine_learning_project.ipynb @@ -95,7 +95,7 @@ "source": [ "import os\n", "import tarfile\n", - "import urllib\n", + "import urllib.request\n", "\n", "DOWNLOAD_ROOT = \"https://raw.githubusercontent.com/ageron/handson-ml2/master/\"\n", "HOUSING_PATH = os.path.join(\"datasets\", \"housing\")\n", diff --git a/03_classification.ipynb b/03_classification.ipynb index 5d76ee1..f535992 100644 --- a/03_classification.ipynb +++ b/03_classification.ipynb @@ -1918,7 +1918,7 @@ "source": [ "import os\n", "import tarfile\n", - "import urllib\n", + "import urllib.request\n", "\n", "DOWNLOAD_ROOT = \"http://spamassassin.apache.org/old/publiccorpus/\"\n", "HAM_URL = DOWNLOAD_ROOT + \"20030228_easy_ham.tar.bz2\"\n", diff --git a/09_unsupervised_learning.ipynb b/09_unsupervised_learning.ipynb index fc9197d..3284d6a 100644 --- a/09_unsupervised_learning.ipynb +++ b/09_unsupervised_learning.ipynb @@ -935,7 +935,7 @@ "metadata": {}, "outputs": [], "source": [ - "import urllib\n", + "import urllib.request\n", "from sklearn.datasets import fetch_openml\n", 
"\n", "mnist = fetch_openml('mnist_784', version=1)\n", diff --git a/13_loading_and_preprocessing_data.ipynb b/13_loading_and_preprocessing_data.ipynb index 144b216..1591134 100644 --- a/13_loading_and_preprocessing_data.ipynb +++ b/13_loading_and_preprocessing_data.ipynb @@ -1430,7 +1430,7 @@ "source": [ "import os\n", "import tarfile\n", - "import urllib\n", + "import urllib.request\n", "\n", "DOWNLOAD_ROOT = \"https://raw.githubusercontent.com/ageron/handson-ml2/master/\"\n", "HOUSING_PATH = os.path.join(\"datasets\", \"housing\")\n", From d0f78d88162988f026dde4500c4636bd61b27908 Mon Sep 17 00:00:00 2001 From: Ayush Sharma <62552288+ayushs2k1@users.noreply.github.com> Date: Mon, 25 Jan 2021 17:56:06 +0530 Subject: [PATCH 23/49] Update 01_the_machine_learning_landscape.ipynb --- 01_the_machine_learning_landscape.ipynb | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/01_the_machine_learning_landscape.ipynb b/01_the_machine_learning_landscape.ipynb index 24f8aaa..a1f92db 100644 --- a/01_the_machine_learning_landscape.ipynb +++ b/01_the_machine_learning_landscape.ipynb @@ -124,7 +124,7 @@ "outputs": [], "source": [ "# Download the data\n", - "import urllib\n", + "import urllib.request\n", "DOWNLOAD_ROOT = \"https://raw.githubusercontent.com/ageron/handson-ml2/master/\"\n", "os.makedirs(datapath, exist_ok=True)\n", "for filename in (\"oecd_bli_2015.csv\", \"gdp_per_capita.csv\"):\n", From a86b2f657f2de17b9dd0dede866ad25c383afe0b Mon Sep 17 00:00:00 2001 From: hellowesley <15952538+hellowesley@users.noreply.github.com> Date: Wed, 27 Jan 2021 20:12:29 -0800 Subject: [PATCH 24/49] fix missing chapter 3 header --- 03_classification.ipynb | 9 ++++++++- 1 file changed, 8 insertions(+), 1 deletion(-) diff --git a/03_classification.ipynb b/03_classification.ipynb index 5d76ee1..29fe4cf 100644 --- a/03_classification.ipynb +++ b/03_classification.ipynb @@ -857,6 +857,13 @@ "cross_val_score(sgd_clf, X_train_scaled, y_train, cv=3, scoring=\"accuracy\")" ] }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "# Error analysis" + ] + }, { "cell_type": "code", "execution_count": 64, @@ -2574,7 +2581,7 @@ "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", - "version": "3.7.8" + "version": "3.8.2" }, "nav_menu": {}, "toc": { From 670873843defc76e5586b521390d5356af922070 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Aur=C3=A9lien=20Geron?= Date: Sun, 14 Feb 2021 15:02:09 +1300 Subject: [PATCH 25/49] Update libraries to latest version, including TensorFlow 2.4.1 and Scikit-Learn 0.24.1 --- 01_the_machine_learning_landscape.ipynb | 4 +- 02_end_to_end_machine_learning_project.ipynb | 44 +- 03_classification.ipynb | 68 ++- 04_training_linear_models.ipynb | 8 +- 05_support_vector_machines.ipynb | 11 +- 06_decision_trees.ipynb | 2 +- 07_ensemble_learning_and_random_forests.ipynb | 147 ++--- 08_dimensionality_reduction.ipynb | 14 +- 09_unsupervised_learning.ipynb | 547 ++++++++++-------- 10_neural_nets_with_keras.ipynb | 31 +- 11_training_deep_neural_networks.ipynb | 22 +- ..._models_and_training_with_tensorflow.ipynb | 467 +++++++-------- 13_loading_and_preprocessing_data.ipynb | 16 +- environment.yml | 92 ++- requirements.txt | 61 +- 15 files changed, 797 insertions(+), 737 deletions(-) diff --git a/01_the_machine_learning_landscape.ipynb b/01_the_machine_learning_landscape.ipynb index 24f8aaa..622a80a 100644 --- a/01_the_machine_learning_landscape.ipynb +++ b/01_the_machine_learning_landscape.ipynb @@ -124,7 +124,7 @@ "outputs": [], "source": [ "# Download 
the data\n", - "import urllib\n", + "import urllib.request\n", "DOWNLOAD_ROOT = \"https://raw.githubusercontent.com/ageron/handson-ml2/master/\"\n", "os.makedirs(datapath, exist_ok=True)\n", "for filename in (\"oecd_bli_2015.csv\", \"gdp_per_capita.csv\"):\n", @@ -785,7 +785,7 @@ "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", - "version": "3.7.6" + "version": "3.7.9" }, "nav_menu": {}, "toc": { diff --git a/02_end_to_end_machine_learning_project.ipynb b/02_end_to_end_machine_learning_project.ipynb index 0aafdf6..b8619d6 100644 --- a/02_end_to_end_machine_learning_project.ipynb +++ b/02_end_to_end_machine_learning_project.ipynb @@ -73,11 +73,7 @@ " print(\"Saving figure\", fig_id)\n", " if tight_layout:\n", " plt.tight_layout()\n", - " plt.savefig(path, format=fig_extension, dpi=resolution)\n", - "\n", - "# Ignore useless warnings (see SciPy issue #5998)\n", - "import warnings\n", - "warnings.filterwarnings(action=\"ignore\", message=\"^internal gelsd\")" + " plt.savefig(path, format=fig_extension, dpi=resolution)" ] }, { @@ -95,7 +91,7 @@ "source": [ "import os\n", "import tarfile\n", - "import urllib\n", + "import urllib.request\n", "\n", "DOWNLOAD_ROOT = \"https://raw.githubusercontent.com/ageron/handson-ml2/master/\"\n", "HOUSING_PATH = os.path.join(\"datasets\", \"housing\")\n", @@ -490,9 +486,9 @@ "outputs": [], "source": [ "housing.plot(kind=\"scatter\", x=\"longitude\", y=\"latitude\", alpha=0.4,\n", - " s=housing[\"population\"]/100, label=\"population\", figsize=(10,7),\n", - " c=\"median_house_value\", cmap=plt.get_cmap(\"jet\"), colorbar=True,\n", - " sharex=False)\n", + " s=housing[\"population\"]/100, label=\"population\", figsize=(10,7),\n", + " c=\"median_house_value\", cmap=plt.get_cmap(\"jet\"), colorbar=True,\n", + " sharex=False)\n", "plt.legend()\n", "save_fig(\"housing_prices_scatterplot\")" ] @@ -522,10 +518,9 @@ "import matplotlib.image as mpimg\n", "california_img=mpimg.imread(os.path.join(images_path, filename))\n", "ax = housing.plot(kind=\"scatter\", x=\"longitude\", y=\"latitude\", figsize=(10,7),\n", - " s=housing['population']/100, label=\"Population\",\n", - " c=\"median_house_value\", cmap=plt.get_cmap(\"jet\"),\n", - " colorbar=False, alpha=0.4,\n", - " )\n", + " s=housing['population']/100, label=\"Population\",\n", + " c=\"median_house_value\", cmap=plt.get_cmap(\"jet\"),\n", + " colorbar=False, alpha=0.4)\n", "plt.imshow(california_img, extent=[-124.55, -113.80, 32.45, 42.05], alpha=0.5,\n", " cmap=plt.get_cmap(\"jet\"))\n", "plt.ylabel(\"Latitude\", fontsize=14)\n", @@ -1694,6 +1689,13 @@ "Question: Try a Support Vector Machine regressor (`sklearn.svm.SVR`), with various hyperparameters such as `kernel=\"linear\"` (with various values for the `C` hyperparameter) or `kernel=\"rbf\"` (with various values for the `C` and `gamma` hyperparameters). Don't worry about what these hyperparameters mean for now. How does the best `SVR` predictor perform?" ] }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "**Warning**: the following cell may take close to 30 minutes to run, or more depending on your hardware." + ] + }, { "cell_type": "code", "execution_count": 117, @@ -1768,6 +1770,13 @@ "Question: Try replacing `GridSearchCV` with `RandomizedSearchCV`." ] }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "**Warning**: the following cell may take close to 45 minutes to run, or more depending on your hardware." 
+ ] + }, { "cell_type": "code", "execution_count": 120, @@ -2137,6 +2146,13 @@ "Question: Automatically explore some preparation options using `GridSearchCV`." ] }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "**Warning**: the following cell may take close to 45 minutes to run, or more depending on your hardware." + ] + }, { "cell_type": "code", "execution_count": 137, @@ -2193,7 +2209,7 @@ "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", - "version": "3.7.8" + "version": "3.7.9" }, "nav_menu": { "height": "279px", diff --git a/03_classification.ipynb b/03_classification.ipynb index 5d76ee1..cd5ac1e 100644 --- a/03_classification.ipynb +++ b/03_classification.ipynb @@ -345,7 +345,7 @@ "* first, Scikit-Learn and other libraries evolve, and algorithms get tweaked a bit, which may change the exact result you get. If you use the latest Scikit-Learn version (and in general, you really should), you probably won't be using the exact same version I used when I wrote the book or this notebook, hence the difference. I try to keep this notebook reasonably up to date, but I can't change the numbers on the pages in your copy of the book.\n", "* second, many training algorithms are stochastic, meaning they rely on randomness. In principle, it's possible to get consistent outputs from a random number generator by setting the seed from which it generates the pseudo-random numbers (which is why you will see `random_state=42` or `np.random.seed(42)` pretty often). However, sometimes this does not suffice due to the other factors listed here.\n", "* third, if the training algorithm runs across multiple threads (as do some algorithms implemented in C) or across multiple processes (e.g., when using the `n_jobs` argument), then the precise order in which operations will run is not always guaranteed, and thus the exact result may vary slightly.\n", - "* lastly, other things may prevent perfect reproducibility, such as Python maps and sets whose order is not guaranteed to be stable across sessions, or the order of files in a directory which is also not guaranteed." + "* lastly, other things may prevent perfect reproducibility, such as Python dicts and sets whose order is not guaranteed to be stable across sessions, or the order of files in a directory which is also not guaranteed." 
] }, { @@ -393,7 +393,7 @@ }, { "cell_type": "code", - "execution_count": 27, + "execution_count": 25, "metadata": {}, "outputs": [], "source": [ @@ -412,7 +412,7 @@ }, { "cell_type": "code", - "execution_count": 28, + "execution_count": 27, "metadata": {}, "outputs": [], "source": [ @@ -481,7 +481,7 @@ }, { "cell_type": "code", - "execution_count": 30, + "execution_count": 34, "metadata": {}, "outputs": [], "source": [ @@ -491,7 +491,7 @@ }, { "cell_type": "code", - "execution_count": 31, + "execution_count": 35, "metadata": {}, "outputs": [], "source": [ @@ -502,7 +502,7 @@ }, { "cell_type": "code", - "execution_count": 32, + "execution_count": 36, "metadata": {}, "outputs": [], "source": [ @@ -533,7 +533,7 @@ }, { "cell_type": "code", - "execution_count": 33, + "execution_count": 37, "metadata": {}, "outputs": [], "source": [ @@ -542,7 +542,7 @@ }, { "cell_type": "code", - "execution_count": 35, + "execution_count": 38, "metadata": {}, "outputs": [], "source": [ @@ -564,7 +564,7 @@ }, { "cell_type": "code", - "execution_count": 42, + "execution_count": 39, "metadata": {}, "outputs": [], "source": [ @@ -573,7 +573,7 @@ }, { "cell_type": "code", - "execution_count": 43, + "execution_count": 40, "metadata": {}, "outputs": [], "source": [ @@ -582,7 +582,7 @@ }, { "cell_type": "code", - "execution_count": 44, + "execution_count": 41, "metadata": {}, "outputs": [], "source": [ @@ -591,7 +591,7 @@ }, { "cell_type": "code", - "execution_count": 45, + "execution_count": 42, "metadata": {}, "outputs": [], "source": [ @@ -600,7 +600,7 @@ }, { "cell_type": "code", - "execution_count": 46, + "execution_count": 43, "metadata": {}, "outputs": [], "source": [ @@ -616,7 +616,7 @@ }, { "cell_type": "code", - "execution_count": 47, + "execution_count": 44, "metadata": {}, "outputs": [], "source": [ @@ -627,7 +627,7 @@ }, { "cell_type": "code", - "execution_count": 50, + "execution_count": 45, "metadata": {}, "outputs": [], "source": [ @@ -651,7 +651,7 @@ }, { "cell_type": "code", - "execution_count": 53, + "execution_count": 46, "metadata": {}, "outputs": [], "source": [ @@ -669,7 +669,7 @@ }, { "cell_type": "code", - "execution_count": 54, + "execution_count": 47, "metadata": {}, "outputs": [], "source": [ @@ -681,7 +681,7 @@ }, { "cell_type": "code", - "execution_count": 55, + "execution_count": 48, "metadata": {}, "outputs": [], "source": [ @@ -691,7 +691,7 @@ }, { "cell_type": "code", - "execution_count": 57, + "execution_count": 49, "metadata": {}, "outputs": [], "source": [ @@ -713,7 +713,7 @@ }, { "cell_type": "code", - "execution_count": 58, + "execution_count": 50, "metadata": {}, "outputs": [], "source": [ @@ -722,7 +722,7 @@ }, { "cell_type": "code", - "execution_count": 59, + "execution_count": 51, "metadata": {}, "outputs": [], "source": [ @@ -732,7 +732,7 @@ }, { "cell_type": "code", - "execution_count": 60, + "execution_count": 52, "metadata": {}, "outputs": [], "source": [ @@ -836,6 +836,13 @@ "sgd_clf.decision_function([some_digit])" ] }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "**Warning**: the following two cells may take close to 30 minutes to run, or more depending on your hardware." + ] + }, { "cell_type": "code", "execution_count": 62, @@ -1202,7 +1209,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "**Warning**: the next cell may take hours to run, depending on your hardware." + "**Warning**: the next cell may take close to 16 hours to run, or more depending on your hardware." 
] }, { @@ -1348,6 +1355,13 @@ "knn_clf.fit(X_train_augmented, y_train_augmented)" ] }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "**Warning**: the following cell may take close to an hour to run, depending on your hardware." + ] + }, { "cell_type": "code", "execution_count": 99, @@ -1918,7 +1932,7 @@ "source": [ "import os\n", "import tarfile\n", - "import urllib\n", + "import urllib.request\n", "\n", "DOWNLOAD_ROOT = \"http://spamassassin.apache.org/old/publiccorpus/\"\n", "HAM_URL = DOWNLOAD_ROOT + \"20030228_easy_ham.tar.bz2\"\n", @@ -2149,7 +2163,7 @@ }, { "cell_type": "code", - "execution_count": 185, + "execution_count": 142, "metadata": {}, "outputs": [], "source": [ @@ -2510,7 +2524,7 @@ }, { "cell_type": "code", - "execution_count": 183, + "execution_count": 158, "metadata": {}, "outputs": [], "source": [ @@ -2533,7 +2547,7 @@ }, { "cell_type": "code", - "execution_count": 184, + "execution_count": 159, "metadata": {}, "outputs": [], "source": [ diff --git a/04_training_linear_models.ipynb b/04_training_linear_models.ipynb index 0ac4d1a..f91c9cc 100644 --- a/04_training_linear_models.ipynb +++ b/04_training_linear_models.ipynb @@ -79,11 +79,7 @@ " print(\"Saving figure\", fig_id)\n", " if tight_layout:\n", " plt.tight_layout()\n", - " plt.savefig(path, format=fig_extension, dpi=resolution)\n", - "\n", - "# Ignore useless warnings (see SciPy issue #5998)\n", - "import warnings\n", - "warnings.filterwarnings(action=\"ignore\", message=\"^internal gelsd\")" + " plt.savefig(path, format=fig_extension, dpi=resolution)" ] }, { @@ -1797,7 +1793,7 @@ "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", - "version": "3.7.6" + "version": "3.7.8" }, "nav_menu": {}, "toc": { diff --git a/05_support_vector_machines.ipynb b/05_support_vector_machines.ipynb index 2fa5543..208cf18 100644 --- a/05_support_vector_machines.ipynb +++ b/05_support_vector_machines.ipynb @@ -1566,7 +1566,14 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "This looks pretty low but remember we only trained the model on 1,000 instances. Let's retrain the best estimator on the whole training set (run this at night, it will take hours):" + "This looks pretty low but remember we only trained the model on 1,000 instances. Let's retrain the best estimator on the whole training set:" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "**Warning**: the following cell may take hours to run, depending on your hardware." ] }, { @@ -1830,7 +1837,7 @@ "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", - "version": "3.7.6" + "version": "3.7.8" }, "nav_menu": {}, "toc": { diff --git a/06_decision_trees.ipynb b/06_decision_trees.ipynb index a8237ac..504acaa 100644 --- a/06_decision_trees.ipynb +++ b/06_decision_trees.ipynb @@ -729,7 +729,7 @@ "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", - "version": "3.7.6" + "version": "3.7.8" }, "nav_menu": { "height": "309px", diff --git a/07_ensemble_learning_and_random_forests.ipynb b/07_ensemble_learning_and_random_forests.ipynb index f4f135e..63a224e 100644 --- a/07_ensemble_learning_and_random_forests.ipynb +++ b/07_ensemble_learning_and_random_forests.ipynb @@ -181,6 +181,13 @@ " print(clf.__class__.__name__, accuracy_score(y_test, y_pred))" ] }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "**Note**: the results in this notebook may differ slightly from the book, as Scikit-Learn algorithms sometimes get tweaked." 
+ ] + }, { "cell_type": "markdown", "metadata": {}, @@ -535,21 +542,26 @@ "\n", "fix, axes = plt.subplots(ncols=2, figsize=(10,4), sharey=True)\n", "for subplot, learning_rate in ((0, 1), (1, 0.5)):\n", - " sample_weights = np.ones(m)\n", + " sample_weights = np.ones(m) / m\n", " plt.sca(axes[subplot])\n", " for i in range(5):\n", - " svm_clf = SVC(kernel=\"rbf\", C=0.05, gamma=\"scale\", random_state=42)\n", - " svm_clf.fit(X_train, y_train, sample_weight=sample_weights)\n", + " svm_clf = SVC(kernel=\"rbf\", C=0.2, gamma=0.6, random_state=42)\n", + " svm_clf.fit(X_train, y_train, sample_weight=sample_weights * m)\n", " y_pred = svm_clf.predict(X_train)\n", - " sample_weights[y_pred != y_train] *= (1 + learning_rate)\n", + "\n", + " r = sample_weights[y_pred != y_train].sum() / sample_weights.sum() # equation 7-1\n", + " alpha = learning_rate * np.log((1 - r) / r) # equation 7-2\n", + " sample_weights[y_pred != y_train] *= np.exp(alpha) # equation 7-3\n", + " sample_weights /= sample_weights.sum() # normalization step\n", + "\n", " plot_decision_boundary(svm_clf, X, y, alpha=0.2)\n", " plt.title(\"learning_rate = {}\".format(learning_rate), fontsize=16)\n", " if subplot == 0:\n", - " plt.text(-0.7, -0.65, \"1\", fontsize=14)\n", - " plt.text(-0.6, -0.10, \"2\", fontsize=14)\n", - " plt.text(-0.5, 0.10, \"3\", fontsize=14)\n", - " plt.text(-0.4, 0.55, \"4\", fontsize=14)\n", - " plt.text(-0.3, 0.90, \"5\", fontsize=14)\n", + " plt.text(-0.75, -0.95, \"1\", fontsize=14)\n", + " plt.text(-1.05, -0.95, \"2\", fontsize=14)\n", + " plt.text(1.0, -0.95, \"3\", fontsize=14)\n", + " plt.text(-1.45, -0.5, \"4\", fontsize=14)\n", + " plt.text(1.36, -0.95, \"5\", fontsize=14)\n", " else:\n", " plt.ylabel(\"\")\n", "\n", @@ -557,15 +569,6 @@ "plt.show()" ] }, - { - "cell_type": "code", - "execution_count": 32, - "metadata": {}, - "outputs": [], - "source": [ - "list(m for m in dir(ada_clf) if not m.startswith(\"_\") and m.endswith(\"_\"))" - ] - }, { "cell_type": "markdown", "metadata": {}, @@ -575,7 +578,7 @@ }, { "cell_type": "code", - "execution_count": 33, + "execution_count": 32, "metadata": {}, "outputs": [], "source": [ @@ -586,7 +589,7 @@ }, { "cell_type": "code", - "execution_count": 34, + "execution_count": 33, "metadata": {}, "outputs": [], "source": [ @@ -598,7 +601,7 @@ }, { "cell_type": "code", - "execution_count": 35, + "execution_count": 34, "metadata": {}, "outputs": [], "source": [ @@ -609,7 +612,7 @@ }, { "cell_type": "code", - "execution_count": 36, + "execution_count": 35, "metadata": {}, "outputs": [], "source": [ @@ -620,7 +623,7 @@ }, { "cell_type": "code", - "execution_count": 37, + "execution_count": 36, "metadata": {}, "outputs": [], "source": [ @@ -629,7 +632,7 @@ }, { "cell_type": "code", - "execution_count": 38, + "execution_count": 37, "metadata": {}, "outputs": [], "source": [ @@ -638,7 +641,7 @@ }, { "cell_type": "code", - "execution_count": 39, + "execution_count": 38, "metadata": {}, "outputs": [], "source": [ @@ -647,7 +650,7 @@ }, { "cell_type": "code", - "execution_count": 40, + "execution_count": 39, "metadata": {}, "outputs": [], "source": [ @@ -663,7 +666,7 @@ }, { "cell_type": "code", - "execution_count": 41, + "execution_count": 40, "metadata": {}, "outputs": [], "source": [ @@ -703,7 +706,7 @@ }, { "cell_type": "code", - "execution_count": 42, + "execution_count": 41, "metadata": {}, "outputs": [], "source": [ @@ -715,7 +718,7 @@ }, { "cell_type": "code", - "execution_count": 43, + "execution_count": 42, "metadata": {}, "outputs": [], "source": [ @@ -725,7 
+728,7 @@ }, { "cell_type": "code", - "execution_count": 44, + "execution_count": 43, "metadata": {}, "outputs": [], "source": [ @@ -755,7 +758,7 @@ }, { "cell_type": "code", - "execution_count": 45, + "execution_count": 44, "metadata": {}, "outputs": [], "source": [ @@ -778,7 +781,7 @@ }, { "cell_type": "code", - "execution_count": 46, + "execution_count": 45, "metadata": {}, "outputs": [], "source": [ @@ -787,7 +790,7 @@ }, { "cell_type": "code", - "execution_count": 47, + "execution_count": 46, "metadata": {}, "outputs": [], "source": [ @@ -816,7 +819,7 @@ }, { "cell_type": "code", - "execution_count": 48, + "execution_count": 47, "metadata": {}, "outputs": [], "source": [ @@ -840,7 +843,7 @@ }, { "cell_type": "code", - "execution_count": 49, + "execution_count": 48, "metadata": {}, "outputs": [], "source": [ @@ -849,7 +852,7 @@ }, { "cell_type": "code", - "execution_count": 50, + "execution_count": 49, "metadata": {}, "outputs": [], "source": [ @@ -865,7 +868,7 @@ }, { "cell_type": "code", - "execution_count": 51, + "execution_count": 50, "metadata": {}, "outputs": [], "source": [ @@ -878,7 +881,7 @@ }, { "cell_type": "code", - "execution_count": 52, + "execution_count": 51, "metadata": {}, "outputs": [], "source": [ @@ -892,7 +895,7 @@ }, { "cell_type": "code", - "execution_count": 53, + "execution_count": 52, "metadata": {}, "outputs": [], "source": [ @@ -906,7 +909,7 @@ }, { "cell_type": "code", - "execution_count": 54, + "execution_count": 53, "metadata": {}, "outputs": [], "source": [ @@ -915,7 +918,7 @@ }, { "cell_type": "code", - "execution_count": 55, + "execution_count": 54, "metadata": {}, "outputs": [], "source": [ @@ -966,7 +969,7 @@ }, { "cell_type": "code", - "execution_count": 56, + "execution_count": 55, "metadata": {}, "outputs": [], "source": [ @@ -975,7 +978,7 @@ }, { "cell_type": "code", - "execution_count": 57, + "execution_count": 56, "metadata": {}, "outputs": [], "source": [ @@ -994,7 +997,7 @@ }, { "cell_type": "code", - "execution_count": 58, + "execution_count": 57, "metadata": {}, "outputs": [], "source": [ @@ -1005,19 +1008,19 @@ }, { "cell_type": "code", - "execution_count": 59, + "execution_count": 58, "metadata": {}, "outputs": [], "source": [ "random_forest_clf = RandomForestClassifier(n_estimators=100, random_state=42)\n", "extra_trees_clf = ExtraTreesClassifier(n_estimators=100, random_state=42)\n", - "svm_clf = LinearSVC(random_state=42)\n", + "svm_clf = LinearSVC(max_iter=100, tol=20, random_state=42)\n", "mlp_clf = MLPClassifier(random_state=42)" ] }, { "cell_type": "code", - "execution_count": 60, + "execution_count": 59, "metadata": {}, "outputs": [], "source": [ @@ -1029,7 +1032,7 @@ }, { "cell_type": "code", - "execution_count": 61, + "execution_count": 60, "metadata": {}, "outputs": [], "source": [ @@ -1052,7 +1055,7 @@ }, { "cell_type": "code", - "execution_count": 62, + "execution_count": 61, "metadata": {}, "outputs": [], "source": [ @@ -1061,7 +1064,7 @@ }, { "cell_type": "code", - "execution_count": 63, + "execution_count": 62, "metadata": {}, "outputs": [], "source": [ @@ -1075,7 +1078,7 @@ }, { "cell_type": "code", - "execution_count": 64, + "execution_count": 63, "metadata": {}, "outputs": [], "source": [ @@ -1084,7 +1087,7 @@ }, { "cell_type": "code", - "execution_count": 65, + "execution_count": 64, "metadata": {}, "outputs": [], "source": [ @@ -1093,7 +1096,7 @@ }, { "cell_type": "code", - "execution_count": 66, + "execution_count": 65, "metadata": {}, "outputs": [], "source": [ @@ -1102,7 +1105,7 @@ }, { "cell_type": "code", - 
"execution_count": 67, + "execution_count": 66, "metadata": {}, "outputs": [], "source": [ @@ -1118,7 +1121,7 @@ }, { "cell_type": "code", - "execution_count": 68, + "execution_count": 67, "metadata": {}, "outputs": [], "source": [ @@ -1134,7 +1137,7 @@ }, { "cell_type": "code", - "execution_count": 69, + "execution_count": 68, "metadata": {}, "outputs": [], "source": [ @@ -1150,7 +1153,7 @@ }, { "cell_type": "code", - "execution_count": 70, + "execution_count": 69, "metadata": {}, "outputs": [], "source": [ @@ -1166,7 +1169,7 @@ }, { "cell_type": "code", - "execution_count": 71, + "execution_count": 70, "metadata": {}, "outputs": [], "source": [ @@ -1182,7 +1185,7 @@ }, { "cell_type": "code", - "execution_count": 72, + "execution_count": 71, "metadata": {}, "outputs": [], "source": [ @@ -1198,7 +1201,7 @@ }, { "cell_type": "code", - "execution_count": 73, + "execution_count": 72, "metadata": {}, "outputs": [], "source": [ @@ -1207,7 +1210,7 @@ }, { "cell_type": "code", - "execution_count": 74, + "execution_count": 73, "metadata": {}, "outputs": [], "source": [ @@ -1230,7 +1233,7 @@ }, { "cell_type": "code", - "execution_count": 75, + "execution_count": 74, "metadata": {}, "outputs": [], "source": [ @@ -1240,7 +1243,7 @@ }, { "cell_type": "code", - "execution_count": 76, + "execution_count": 75, "metadata": {}, "outputs": [], "source": [ @@ -1270,7 +1273,7 @@ }, { "cell_type": "code", - "execution_count": 77, + "execution_count": 76, "metadata": {}, "outputs": [], "source": [ @@ -1282,7 +1285,7 @@ }, { "cell_type": "code", - "execution_count": 78, + "execution_count": 77, "metadata": {}, "outputs": [], "source": [ @@ -1291,7 +1294,7 @@ }, { "cell_type": "code", - "execution_count": 79, + "execution_count": 78, "metadata": {}, "outputs": [], "source": [ @@ -1301,7 +1304,7 @@ }, { "cell_type": "code", - "execution_count": 80, + "execution_count": 79, "metadata": {}, "outputs": [], "source": [ @@ -1324,7 +1327,7 @@ }, { "cell_type": "code", - "execution_count": 81, + "execution_count": 80, "metadata": {}, "outputs": [], "source": [ @@ -1336,7 +1339,7 @@ }, { "cell_type": "code", - "execution_count": 82, + "execution_count": 81, "metadata": {}, "outputs": [], "source": [ @@ -1345,7 +1348,7 @@ }, { "cell_type": "code", - "execution_count": 83, + "execution_count": 82, "metadata": {}, "outputs": [], "source": [ @@ -1354,7 +1357,7 @@ }, { "cell_type": "code", - "execution_count": 84, + "execution_count": 83, "metadata": {}, "outputs": [], "source": [ @@ -1392,7 +1395,7 @@ "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", - "version": "3.7.6" + "version": "3.7.8" }, "nav_menu": { "height": "252px", diff --git a/08_dimensionality_reduction.ipynb b/08_dimensionality_reduction.ipynb index 1be0a56..c7f1797 100644 --- a/08_dimensionality_reduction.ipynb +++ b/08_dimensionality_reduction.ipynb @@ -74,11 +74,7 @@ " print(\"Saving figure\", fig_id)\n", " if tight_layout:\n", " plt.tight_layout()\n", - " plt.savefig(path, format=fig_extension, dpi=resolution)\n", - "\n", - "# Ignore useless warnings (see SciPy issue #5998)\n", - "import warnings\n", - "warnings.filterwarnings(action=\"ignore\", message=\"^internal gelsd\")" + " plt.savefig(path, format=fig_extension, dpi=resolution)" ] }, { @@ -1731,7 +1727,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "Nice! Reducing dimensionality led to a 4× speedup. :) Let's check the model's accuracy:" + "Nice! Reducing dimensionality led to over 2× speedup. 
:) Let's check the model's accuracy:" ] }, { @@ -1748,7 +1744,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "A very slight drop in performance, which might be a reasonable price to pay for a 4× speedup, depending on the application." + "A very slight drop in performance, which might be a reasonable price to pay for a 2× speedup, depending on the application." ] }, { @@ -2229,7 +2225,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "Yes, PCA roughly gave us a 25% speedup, without damaging the result. We have a winner!" + "Yes, PCA roughly gave us over 2x speedup, without damaging the result. We have a winner!" ] }, { @@ -2256,7 +2252,7 @@ "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", - "version": "3.7.6" + "version": "3.7.8" } }, "nbformat": 4, diff --git a/09_unsupervised_learning.ipynb b/09_unsupervised_learning.ipynb index fc9197d..67283b1 100644 --- a/09_unsupervised_learning.ipynb +++ b/09_unsupervised_learning.ipynb @@ -74,11 +74,7 @@ " print(\"Saving figure\", fig_id)\n", " if tight_layout:\n", " plt.tight_layout()\n", - " plt.savefig(path, format=fig_extension, dpi=resolution)\n", - "\n", - "# Ignore useless warnings (see SciPy issue #5998)\n", - "import warnings\n", - "warnings.filterwarnings(action=\"ignore\", message=\"^internal gelsd\")" + " plt.savefig(path, format=fig_extension, dpi=resolution)" ] }, { @@ -163,9 +159,14 @@ "metadata": {}, "outputs": [], "source": [ - "y_pred = GaussianMixture(n_components=3, random_state=42).fit(X).predict(X)\n", - "mapping = np.array([2, 0, 1])\n", - "y_pred = np.array([mapping[cluster_id] for cluster_id in y_pred])" + "y_pred = GaussianMixture(n_components=3, random_state=42).fit(X).predict(X)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Let's map each cluster to a class. Instead of hard coding the mapping (as is done in the book, for simplicity), we will pick the most common class for each cluster (using the `scipy.stats.mode()` function):" ] }, { @@ -173,6 +174,31 @@ "execution_count": 7, "metadata": {}, "outputs": [], + "source": [ + "from scipy import stats\n", + "\n", + "mapping = {}\n", + "for class_id in np.unique(y):\n", + " mode, _ = stats.mode(y_pred[y==class_id])\n", + " mapping[mode[0]] = class_id\n", + "\n", + "mapping" + ] + }, + { + "cell_type": "code", + "execution_count": 8, + "metadata": {}, + "outputs": [], + "source": [ + "y_pred = np.array([mapping[cluster_id] for cluster_id in y_pred])" + ] + }, + { + "cell_type": "code", + "execution_count": 9, + "metadata": {}, + "outputs": [], "source": [ "plt.plot(X[y_pred==0, 2], X[y_pred==0, 3], \"yo\", label=\"Cluster 1\")\n", "plt.plot(X[y_pred==1, 2], X[y_pred==1, 3], \"bs\", label=\"Cluster 2\")\n", @@ -185,7 +211,7 @@ }, { "cell_type": "code", - "execution_count": 8, + "execution_count": 10, "metadata": {}, "outputs": [], "source": [ @@ -194,13 +220,20 @@ }, { "cell_type": "code", - "execution_count": 9, + "execution_count": 11, "metadata": {}, "outputs": [], "source": [ "np.sum(y_pred==y) / len(y_pred)" ] }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "**Note**: the results in this notebook may differ slightly from the book. This is because algorithms can sometimes be tweaked a bit between Scikit-Learn versions." 
+ ] + }, { "cell_type": "markdown", "metadata": {}, @@ -217,7 +250,7 @@ }, { "cell_type": "code", - "execution_count": 10, + "execution_count": 12, "metadata": {}, "outputs": [], "source": [ @@ -226,7 +259,7 @@ }, { "cell_type": "code", - "execution_count": 11, + "execution_count": 13, "metadata": {}, "outputs": [], "source": [ @@ -241,7 +274,7 @@ }, { "cell_type": "code", - "execution_count": 12, + "execution_count": 14, "metadata": {}, "outputs": [], "source": [ @@ -258,7 +291,7 @@ }, { "cell_type": "code", - "execution_count": 13, + "execution_count": 15, "metadata": {}, "outputs": [], "source": [ @@ -270,7 +303,7 @@ }, { "cell_type": "code", - "execution_count": 14, + "execution_count": 16, "metadata": {}, "outputs": [], "source": [ @@ -296,7 +329,7 @@ }, { "cell_type": "code", - "execution_count": 15, + "execution_count": 17, "metadata": {}, "outputs": [], "source": [ @@ -305,7 +338,7 @@ }, { "cell_type": "code", - "execution_count": 16, + "execution_count": 18, "metadata": {}, "outputs": [], "source": [ @@ -323,7 +356,7 @@ }, { "cell_type": "code", - "execution_count": 17, + "execution_count": 19, "metadata": {}, "outputs": [], "source": [ @@ -332,7 +365,7 @@ }, { "cell_type": "code", - "execution_count": 18, + "execution_count": 20, "metadata": {}, "outputs": [], "source": [ @@ -348,7 +381,7 @@ }, { "cell_type": "code", - "execution_count": 19, + "execution_count": 21, "metadata": {}, "outputs": [], "source": [ @@ -364,7 +397,7 @@ }, { "cell_type": "code", - "execution_count": 20, + "execution_count": 22, "metadata": {}, "outputs": [], "source": [ @@ -380,7 +413,7 @@ }, { "cell_type": "code", - "execution_count": 21, + "execution_count": 23, "metadata": {}, "outputs": [], "source": [ @@ -404,7 +437,7 @@ }, { "cell_type": "code", - "execution_count": 22, + "execution_count": 24, "metadata": {}, "outputs": [], "source": [ @@ -415,10 +448,10 @@ " if weights is not None:\n", " centroids = centroids[weights > weights.max() / 10]\n", " plt.scatter(centroids[:, 0], centroids[:, 1],\n", - " marker='o', s=30, linewidths=8,\n", + " marker='o', s=35, linewidths=8,\n", " color=circle_color, zorder=10, alpha=0.9)\n", " plt.scatter(centroids[:, 0], centroids[:, 1],\n", - " marker='x', s=50, linewidths=50,\n", + " marker='x', s=2, linewidths=12,\n", " color=cross_color, zorder=11, alpha=1)\n", "\n", "def plot_decision_boundaries(clusterer, X, resolution=1000, show_centroids=True,\n", @@ -450,7 +483,7 @@ }, { "cell_type": "code", - "execution_count": 23, + "execution_count": 25, "metadata": {}, "outputs": [], "source": [ @@ -483,7 +516,7 @@ }, { "cell_type": "code", - "execution_count": 24, + "execution_count": 26, "metadata": {}, "outputs": [], "source": [ @@ -499,7 +532,7 @@ }, { "cell_type": "code", - "execution_count": 25, + "execution_count": 27, "metadata": {}, "outputs": [], "source": [ @@ -517,7 +550,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "The K-Means algorithm is one of the fastest clustering algorithms, but also one of the simplest:\n", + "The K-Means algorithm is one of the fastest clustering algorithms, and also one of the simplest:\n", "* First initialize $k$ centroids randomly: $k$ distinct instances are chosen randomly from the dataset and the centroids are placed at their locations.\n", "* Repeat until convergence (i.e., until the centroids stop moving):\n", " * Assign each instance to the closest centroid.\n", @@ -540,16 +573,16 @@ }, { "cell_type": "code", - "execution_count": 26, + "execution_count": 28, "metadata": {}, "outputs": [], "source": [ 
"kmeans_iter1 = KMeans(n_clusters=5, init=\"random\", n_init=1,\n", - " algorithm=\"full\", max_iter=1, random_state=1)\n", + " algorithm=\"full\", max_iter=1, random_state=0)\n", "kmeans_iter2 = KMeans(n_clusters=5, init=\"random\", n_init=1,\n", - " algorithm=\"full\", max_iter=2, random_state=1)\n", + " algorithm=\"full\", max_iter=2, random_state=0)\n", "kmeans_iter3 = KMeans(n_clusters=5, init=\"random\", n_init=1,\n", - " algorithm=\"full\", max_iter=3, random_state=1)\n", + " algorithm=\"full\", max_iter=3, random_state=0)\n", "kmeans_iter1.fit(X)\n", "kmeans_iter2.fit(X)\n", "kmeans_iter3.fit(X)" @@ -564,7 +597,7 @@ }, { "cell_type": "code", - "execution_count": 27, + "execution_count": 29, "metadata": {}, "outputs": [], "source": [ @@ -617,7 +650,7 @@ }, { "cell_type": "code", - "execution_count": 28, + "execution_count": 30, "metadata": {}, "outputs": [], "source": [ @@ -640,14 +673,14 @@ }, { "cell_type": "code", - "execution_count": 29, + "execution_count": 31, "metadata": {}, "outputs": [], "source": [ "kmeans_rnd_init1 = KMeans(n_clusters=5, init=\"random\", n_init=1,\n", - " algorithm=\"full\", random_state=11)\n", + " algorithm=\"full\", random_state=2)\n", "kmeans_rnd_init2 = KMeans(n_clusters=5, init=\"random\", n_init=1,\n", - " algorithm=\"full\", random_state=19)\n", + " algorithm=\"full\", random_state=5)\n", "\n", "plot_clusterer_comparison(kmeans_rnd_init1, kmeans_rnd_init2, X,\n", " \"Solution 1\", \"Solution 2 (with a different random init)\")\n", @@ -672,7 +705,7 @@ }, { "cell_type": "code", - "execution_count": 30, + "execution_count": 32, "metadata": {}, "outputs": [], "source": [ @@ -688,7 +721,7 @@ }, { "cell_type": "code", - "execution_count": 31, + "execution_count": 33, "metadata": {}, "outputs": [], "source": [ @@ -700,12 +733,12 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "The `score()` method returns the negative inertia. Why negative? Well, it is because a predictor's `score()` method must always respect the \"_great is better_\" rule." + "The `score()` method returns the negative inertia. Why negative? Well, it is because a predictor's `score()` method must always respect the \"_greater is better_\" rule." 
] }, { "cell_type": "code", - "execution_count": 32, + "execution_count": 34, "metadata": {}, "outputs": [], "source": [ @@ -728,7 +761,7 @@ }, { "cell_type": "code", - "execution_count": 33, + "execution_count": 35, "metadata": {}, "outputs": [], "source": [ @@ -737,7 +770,7 @@ }, { "cell_type": "code", - "execution_count": 34, + "execution_count": 36, "metadata": {}, "outputs": [], "source": [ @@ -760,12 +793,12 @@ }, { "cell_type": "code", - "execution_count": 35, + "execution_count": 37, "metadata": {}, "outputs": [], "source": [ "kmeans_rnd_10_inits = KMeans(n_clusters=5, init=\"random\", n_init=10,\n", - " algorithm=\"full\", random_state=11)\n", + " algorithm=\"full\", random_state=2)\n", "kmeans_rnd_10_inits.fit(X)" ] }, @@ -778,7 +811,7 @@ }, { "cell_type": "code", - "execution_count": 36, + "execution_count": 38, "metadata": {}, "outputs": [], "source": [ @@ -820,7 +853,7 @@ }, { "cell_type": "code", - "execution_count": 37, + "execution_count": 39, "metadata": {}, "outputs": [], "source": [ @@ -829,7 +862,7 @@ }, { "cell_type": "code", - "execution_count": 38, + "execution_count": 40, "metadata": {}, "outputs": [], "source": [ @@ -862,22 +895,29 @@ }, { "cell_type": "code", - "execution_count": 39, + "execution_count": 41, "metadata": {}, "outputs": [], "source": [ - "%timeit -n 50 KMeans(algorithm=\"elkan\").fit(X)" + "%timeit -n 50 KMeans(algorithm=\"elkan\", random_state=42).fit(X)" ] }, { "cell_type": "code", - "execution_count": 40, + "execution_count": 42, "metadata": { "scrolled": true }, "outputs": [], "source": [ - "%timeit -n 50 KMeans(algorithm=\"full\").fit(X)" + "%timeit -n 50 KMeans(algorithm=\"full\", random_state=42).fit(X)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "There's no big difference in this case, as the dataset is fairly small." 
] }, { @@ -896,7 +936,7 @@ }, { "cell_type": "code", - "execution_count": 41, + "execution_count": 43, "metadata": {}, "outputs": [], "source": [ @@ -905,7 +945,7 @@ }, { "cell_type": "code", - "execution_count": 42, + "execution_count": 44, "metadata": {}, "outputs": [], "source": [ @@ -915,7 +955,7 @@ }, { "cell_type": "code", - "execution_count": 43, + "execution_count": 45, "metadata": {}, "outputs": [], "source": [ @@ -931,11 +971,11 @@ }, { "cell_type": "code", - "execution_count": 44, + "execution_count": 46, "metadata": {}, "outputs": [], "source": [ - "import urllib\n", + "import urllib.request\n", "from sklearn.datasets import fetch_openml\n", "\n", "mnist = fetch_openml('mnist_784', version=1)\n", @@ -944,7 +984,7 @@ }, { "cell_type": "code", - "execution_count": 45, + "execution_count": 47, "metadata": {}, "outputs": [], "source": [ @@ -963,7 +1003,7 @@ }, { "cell_type": "code", - "execution_count": 46, + "execution_count": 48, "metadata": {}, "outputs": [], "source": [ @@ -974,7 +1014,7 @@ }, { "cell_type": "code", - "execution_count": 47, + "execution_count": 49, "metadata": {}, "outputs": [], "source": [ @@ -991,7 +1031,7 @@ }, { "cell_type": "code", - "execution_count": 48, + "execution_count": 50, "metadata": {}, "outputs": [], "source": [ @@ -1008,7 +1048,7 @@ }, { "cell_type": "code", - "execution_count": 49, + "execution_count": 51, "metadata": {}, "outputs": [], "source": [ @@ -1017,7 +1057,7 @@ }, { "cell_type": "code", - "execution_count": 50, + "execution_count": 52, "metadata": {}, "outputs": [], "source": [ @@ -1049,7 +1089,7 @@ }, { "cell_type": "code", - "execution_count": 51, + "execution_count": 53, "metadata": {}, "outputs": [], "source": [ @@ -1065,20 +1105,20 @@ }, { "cell_type": "code", - "execution_count": 52, + "execution_count": 54, "metadata": {}, "outputs": [], "source": [ - "%timeit KMeans(n_clusters=5).fit(X)" + "%timeit KMeans(n_clusters=5, random_state=42).fit(X)" ] }, { "cell_type": "code", - "execution_count": 53, + "execution_count": 55, "metadata": {}, "outputs": [], "source": [ - "%timeit MiniBatchKMeans(n_clusters=5).fit(X)" + "%timeit MiniBatchKMeans(n_clusters=5, random_state=42).fit(X)" ] }, { @@ -1090,7 +1130,7 @@ }, { "cell_type": "code", - "execution_count": 54, + "execution_count": 56, "metadata": {}, "outputs": [], "source": [ @@ -1099,7 +1139,7 @@ }, { "cell_type": "code", - "execution_count": 55, + "execution_count": 57, "metadata": {}, "outputs": [], "source": [ @@ -1117,7 +1157,7 @@ }, { "cell_type": "code", - "execution_count": 56, + "execution_count": 58, "metadata": {}, "outputs": [], "source": [ @@ -1158,7 +1198,7 @@ }, { "cell_type": "code", - "execution_count": 57, + "execution_count": 59, "metadata": {}, "outputs": [], "source": [ @@ -1179,7 +1219,7 @@ }, { "cell_type": "code", - "execution_count": 58, + "execution_count": 60, "metadata": {}, "outputs": [], "source": [ @@ -1188,7 +1228,7 @@ }, { "cell_type": "code", - "execution_count": 59, + "execution_count": 61, "metadata": {}, "outputs": [], "source": [ @@ -1204,7 +1244,7 @@ }, { "cell_type": "code", - "execution_count": 60, + "execution_count": 62, "metadata": {}, "outputs": [], "source": [ @@ -1215,7 +1255,7 @@ }, { "cell_type": "code", - "execution_count": 61, + "execution_count": 63, "metadata": {}, "outputs": [], "source": [ @@ -1244,7 +1284,7 @@ }, { "cell_type": "code", - "execution_count": 62, + "execution_count": 64, "metadata": {}, "outputs": [], "source": [ @@ -1268,7 +1308,7 @@ }, { "cell_type": "code", - "execution_count": 63, + "execution_count": 65, 
"metadata": {}, "outputs": [], "source": [ @@ -1277,7 +1317,7 @@ }, { "cell_type": "code", - "execution_count": 64, + "execution_count": 66, "metadata": {}, "outputs": [], "source": [ @@ -1286,7 +1326,7 @@ }, { "cell_type": "code", - "execution_count": 65, + "execution_count": 67, "metadata": {}, "outputs": [], "source": [ @@ -1296,7 +1336,7 @@ }, { "cell_type": "code", - "execution_count": 66, + "execution_count": 68, "metadata": {}, "outputs": [], "source": [ @@ -1325,7 +1365,7 @@ }, { "cell_type": "code", - "execution_count": 67, + "execution_count": 69, "metadata": {}, "outputs": [], "source": [ @@ -1371,6 +1411,13 @@ "plt.show()" ] }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "As you can see, $k=5$ looks like the best option here, as all clusters are roughly the same size, and they all cross the dashed line, which represents the mean silhouette score." + ] + }, { "cell_type": "markdown", "metadata": {}, @@ -1380,7 +1427,7 @@ }, { "cell_type": "code", - "execution_count": 68, + "execution_count": 70, "metadata": {}, "outputs": [], "source": [ @@ -1394,7 +1441,7 @@ }, { "cell_type": "code", - "execution_count": 69, + "execution_count": 71, "metadata": {}, "outputs": [], "source": [ @@ -1403,7 +1450,7 @@ }, { "cell_type": "code", - "execution_count": 70, + "execution_count": 72, "metadata": {}, "outputs": [], "source": [ @@ -1415,7 +1462,7 @@ }, { "cell_type": "code", - "execution_count": 71, + "execution_count": 73, "metadata": {}, "outputs": [], "source": [ @@ -1442,7 +1489,7 @@ }, { "cell_type": "code", - "execution_count": 72, + "execution_count": 74, "metadata": {}, "outputs": [], "source": [ @@ -1458,7 +1505,7 @@ }, { "cell_type": "code", - "execution_count": 73, + "execution_count": 75, "metadata": {}, "outputs": [], "source": [ @@ -1469,7 +1516,7 @@ }, { "cell_type": "code", - "execution_count": 74, + "execution_count": 76, "metadata": {}, "outputs": [], "source": [ @@ -1481,7 +1528,7 @@ }, { "cell_type": "code", - "execution_count": 75, + "execution_count": 77, "metadata": {}, "outputs": [], "source": [ @@ -1495,7 +1542,7 @@ }, { "cell_type": "code", - "execution_count": 76, + "execution_count": 78, "metadata": {}, "outputs": [], "source": [ @@ -1533,7 +1580,7 @@ }, { "cell_type": "code", - "execution_count": 77, + "execution_count": 79, "metadata": {}, "outputs": [], "source": [ @@ -1542,7 +1589,7 @@ }, { "cell_type": "code", - "execution_count": 78, + "execution_count": 80, "metadata": {}, "outputs": [], "source": [ @@ -1558,7 +1605,7 @@ }, { "cell_type": "code", - "execution_count": 79, + "execution_count": 81, "metadata": {}, "outputs": [], "source": [ @@ -1567,7 +1614,7 @@ }, { "cell_type": "code", - "execution_count": 80, + "execution_count": 82, "metadata": {}, "outputs": [], "source": [ @@ -1583,7 +1630,7 @@ }, { "cell_type": "code", - "execution_count": 81, + "execution_count": 83, "metadata": {}, "outputs": [], "source": [ @@ -1592,7 +1639,7 @@ }, { "cell_type": "code", - "execution_count": 82, + "execution_count": 84, "metadata": {}, "outputs": [], "source": [ @@ -1602,7 +1649,7 @@ }, { "cell_type": "code", - "execution_count": 83, + "execution_count": 85, "metadata": {}, "outputs": [], "source": [ @@ -1618,7 +1665,7 @@ }, { "cell_type": "code", - "execution_count": 84, + "execution_count": 86, "metadata": {}, "outputs": [], "source": [ @@ -1627,7 +1674,7 @@ }, { "cell_type": "code", - "execution_count": 85, + "execution_count": 87, "metadata": {}, "outputs": [], "source": [ @@ -1640,7 +1687,7 @@ }, { "cell_type": "code", - "execution_count": 
86, + "execution_count": 88, "metadata": {}, "outputs": [], "source": [ @@ -1649,7 +1696,7 @@ }, { "cell_type": "code", - "execution_count": 87, + "execution_count": 89, "metadata": {}, "outputs": [], "source": [ @@ -1665,16 +1712,23 @@ }, { "cell_type": "code", - "execution_count": 88, + "execution_count": 90, "metadata": {}, "outputs": [], "source": [ "from sklearn.model_selection import GridSearchCV" ] }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "**Warning**: the following cell may take close to 20 minutes to run, or more depending on your hardware." + ] + }, { "cell_type": "code", - "execution_count": 89, + "execution_count": 91, "metadata": {}, "outputs": [], "source": [ @@ -1692,7 +1746,7 @@ }, { "cell_type": "code", - "execution_count": 90, + "execution_count": 92, "metadata": {}, "outputs": [], "source": [ @@ -1701,7 +1755,7 @@ }, { "cell_type": "code", - "execution_count": 91, + "execution_count": 93, "metadata": {}, "outputs": [], "source": [ @@ -1731,7 +1785,7 @@ }, { "cell_type": "code", - "execution_count": 92, + "execution_count": 94, "metadata": {}, "outputs": [], "source": [ @@ -1740,7 +1794,7 @@ }, { "cell_type": "code", - "execution_count": 93, + "execution_count": 95, "metadata": {}, "outputs": [], "source": [ @@ -1758,7 +1812,7 @@ }, { "cell_type": "code", - "execution_count": 94, + "execution_count": 96, "metadata": {}, "outputs": [], "source": [ @@ -1767,7 +1821,7 @@ }, { "cell_type": "code", - "execution_count": 95, + "execution_count": 97, "metadata": {}, "outputs": [], "source": [ @@ -1786,7 +1840,7 @@ }, { "cell_type": "code", - "execution_count": 96, + "execution_count": 98, "metadata": {}, "outputs": [], "source": [ @@ -1802,16 +1856,25 @@ }, { "cell_type": "code", - "execution_count": 97, + "execution_count": 99, + "metadata": {}, + "outputs": [], + "source": [ + "y_train[representative_digit_idx]" + ] + }, + { + "cell_type": "code", + "execution_count": 100, "metadata": {}, "outputs": [], "source": [ "y_representative_digits = np.array([\n", - " 4, 8, 0, 6, 8, 3, 7, 7, 9, 2,\n", - " 5, 5, 8, 5, 2, 1, 2, 9, 6, 1,\n", - " 1, 6, 9, 0, 8, 3, 0, 7, 4, 1,\n", - " 6, 5, 2, 4, 1, 8, 6, 3, 9, 2,\n", - " 4, 2, 9, 4, 7, 6, 2, 3, 1, 1])" + " 0, 1, 3, 2, 7, 6, 4, 6, 9, 5,\n", + " 1, 2, 9, 5, 2, 7, 8, 1, 8, 6,\n", + " 3, 2, 5, 4, 5, 4, 0, 3, 2, 6,\n", + " 1, 7, 7, 9, 1, 8, 6, 5, 4, 8,\n", + " 5, 3, 3, 6, 7, 9, 7, 8, 4, 9])" ] }, { @@ -1823,7 +1886,7 @@ }, { "cell_type": "code", - "execution_count": 98, + "execution_count": 101, "metadata": {}, "outputs": [], "source": [ @@ -1836,7 +1899,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "Wow! We jumped from 83.3% accuracy to 92.2%, although we are still only training the model on 50 instances. Since it's often costly and painful to label instances, especially when it has to be done manually by experts, it's a good idea to make them label representative instances rather than just random instances." + "Wow! We jumped from 83.3% accuracy to 91.3%, although we are still only training the model on 50 instances. Since it's often costly and painful to label instances, especially when it has to be done manually by experts, it's a good idea to make them label representative instances rather than just random instances." 
] }, { @@ -1848,7 +1911,7 @@ }, { "cell_type": "code", - "execution_count": 99, + "execution_count": 102, "metadata": {}, "outputs": [], "source": [ @@ -1859,7 +1922,7 @@ }, { "cell_type": "code", - "execution_count": 100, + "execution_count": 103, "metadata": {}, "outputs": [], "source": [ @@ -1869,7 +1932,7 @@ }, { "cell_type": "code", - "execution_count": 101, + "execution_count": 104, "metadata": {}, "outputs": [], "source": [ @@ -1880,16 +1943,16 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "We got a tiny little accuracy boost. Better than nothing, but we should probably have propagated the labels only to the instances closest to the centroid, because by propagating to the full cluster, we have certainly included some outliers. Let's only propagate the labels to the 20th percentile closest to the centroid:" + "We got a tiny little accuracy boost. Better than nothing, but we should probably have propagated the labels only to the instances closest to the centroid, because by propagating to the full cluster, we have certainly included some outliers. Let's only propagate the labels to the 75th percentile closest to the centroid:" ] }, { "cell_type": "code", - "execution_count": 102, + "execution_count": 105, "metadata": {}, "outputs": [], "source": [ - "percentile_closest = 20\n", + "percentile_closest = 75\n", "\n", "X_cluster_dist = X_digits_dist[np.arange(len(X_train)), kmeans.labels_]\n", "for i in range(k):\n", @@ -1902,7 +1965,7 @@ }, { "cell_type": "code", - "execution_count": 103, + "execution_count": 106, "metadata": {}, "outputs": [], "source": [ @@ -1913,7 +1976,7 @@ }, { "cell_type": "code", - "execution_count": 104, + "execution_count": 107, "metadata": {}, "outputs": [], "source": [ @@ -1923,7 +1986,7 @@ }, { "cell_type": "code", - "execution_count": 105, + "execution_count": 108, "metadata": {}, "outputs": [], "source": [ @@ -1934,19 +1997,19 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "Nice! With just 50 labeled instances (just 5 examples per class on average!), we got 94% performance, which is pretty close to the performance of logistic regression on the fully labeled _digits_ dataset (which was 96.9%)." + "A bit better. With just 50 labeled instances (just 5 examples per class on average!), we got 92.7% performance, which is getting closer to the performance of logistic regression on the fully labeled _digits_ dataset (which was 96.9%)." 
] }, { "cell_type": "markdown", "metadata": {}, "source": [ - "This is because the propagated labels are actually pretty good: their accuracy is very close to 99%:" + "This is because the propagated labels are actually pretty good: their accuracy is close to 96%:" ] }, { "cell_type": "code", - "execution_count": 106, + "execution_count": 109, "metadata": {}, "outputs": [], "source": [ @@ -1971,7 +2034,7 @@ }, { "cell_type": "code", - "execution_count": 107, + "execution_count": 110, "metadata": {}, "outputs": [], "source": [ @@ -1980,7 +2043,7 @@ }, { "cell_type": "code", - "execution_count": 108, + "execution_count": 111, "metadata": {}, "outputs": [], "source": [ @@ -1989,7 +2052,7 @@ }, { "cell_type": "code", - "execution_count": 109, + "execution_count": 112, "metadata": {}, "outputs": [], "source": [ @@ -1998,7 +2061,7 @@ }, { "cell_type": "code", - "execution_count": 110, + "execution_count": 113, "metadata": {}, "outputs": [], "source": [ @@ -2008,7 +2071,7 @@ }, { "cell_type": "code", - "execution_count": 111, + "execution_count": 114, "metadata": {}, "outputs": [], "source": [ @@ -2017,7 +2080,7 @@ }, { "cell_type": "code", - "execution_count": 112, + "execution_count": 115, "metadata": {}, "outputs": [], "source": [ @@ -2026,7 +2089,7 @@ }, { "cell_type": "code", - "execution_count": 113, + "execution_count": 116, "metadata": {}, "outputs": [], "source": [ @@ -2035,7 +2098,7 @@ }, { "cell_type": "code", - "execution_count": 114, + "execution_count": 117, "metadata": {}, "outputs": [], "source": [ @@ -2044,7 +2107,7 @@ }, { "cell_type": "code", - "execution_count": 115, + "execution_count": 118, "metadata": {}, "outputs": [], "source": [ @@ -2053,7 +2116,7 @@ }, { "cell_type": "code", - "execution_count": 116, + "execution_count": 119, "metadata": {}, "outputs": [], "source": [ @@ -2063,7 +2126,7 @@ }, { "cell_type": "code", - "execution_count": 117, + "execution_count": 120, "metadata": {}, "outputs": [], "source": [ @@ -2096,7 +2159,7 @@ }, { "cell_type": "code", - "execution_count": 118, + "execution_count": 121, "metadata": {}, "outputs": [], "source": [ @@ -2114,7 +2177,7 @@ }, { "cell_type": "code", - "execution_count": 119, + "execution_count": 122, "metadata": {}, "outputs": [], "source": [ @@ -2123,7 +2186,7 @@ }, { "cell_type": "code", - "execution_count": 120, + "execution_count": 123, "metadata": {}, "outputs": [], "source": [ @@ -2132,7 +2195,7 @@ }, { "cell_type": "code", - "execution_count": 121, + "execution_count": 124, "metadata": {}, "outputs": [], "source": [ @@ -2142,7 +2205,7 @@ }, { "cell_type": "code", - "execution_count": 122, + "execution_count": 125, "metadata": {}, "outputs": [], "source": [ @@ -2152,7 +2215,7 @@ }, { "cell_type": "code", - "execution_count": 123, + "execution_count": 126, "metadata": {}, "outputs": [], "source": [ @@ -2161,7 +2224,7 @@ }, { "cell_type": "code", - "execution_count": 124, + "execution_count": 127, "metadata": {}, "outputs": [], "source": [ @@ -2174,7 +2237,7 @@ }, { "cell_type": "code", - "execution_count": 125, + "execution_count": 128, "metadata": {}, "outputs": [], "source": [ @@ -2200,7 +2263,7 @@ }, { "cell_type": "code", - "execution_count": 126, + "execution_count": 129, "metadata": {}, "outputs": [], "source": [ @@ -2209,7 +2272,7 @@ }, { "cell_type": "code", - "execution_count": 127, + "execution_count": 130, "metadata": {}, "outputs": [], "source": [ @@ -2219,7 +2282,7 @@ }, { "cell_type": "code", - "execution_count": 128, + "execution_count": 131, "metadata": {}, "outputs": [], "source": [ @@ -2229,7 +2292,7 
@@ }, { "cell_type": "code", - "execution_count": 129, + "execution_count": 132, "metadata": {}, "outputs": [], "source": [ @@ -2238,7 +2301,7 @@ }, { "cell_type": "code", - "execution_count": 130, + "execution_count": 133, "metadata": {}, "outputs": [], "source": [ @@ -2260,7 +2323,7 @@ }, { "cell_type": "code", - "execution_count": 131, + "execution_count": 134, "metadata": {}, "outputs": [], "source": [ @@ -2284,7 +2347,7 @@ }, { "cell_type": "code", - "execution_count": 132, + "execution_count": 135, "metadata": {}, "outputs": [], "source": [ @@ -2293,7 +2356,7 @@ }, { "cell_type": "code", - "execution_count": 133, + "execution_count": 136, "metadata": {}, "outputs": [], "source": [ @@ -2303,7 +2366,7 @@ }, { "cell_type": "code", - "execution_count": 134, + "execution_count": 137, "metadata": {}, "outputs": [], "source": [ @@ -2314,7 +2377,7 @@ }, { "cell_type": "code", - "execution_count": 135, + "execution_count": 138, "metadata": {}, "outputs": [], "source": [ @@ -2323,7 +2386,7 @@ }, { "cell_type": "code", - "execution_count": 136, + "execution_count": 139, "metadata": { "scrolled": true }, @@ -2341,7 +2404,7 @@ }, { "cell_type": "code", - "execution_count": 137, + "execution_count": 140, "metadata": {}, "outputs": [], "source": [ @@ -2362,7 +2425,7 @@ }, { "cell_type": "code", - "execution_count": 138, + "execution_count": 141, "metadata": {}, "outputs": [], "source": [ @@ -2371,7 +2434,7 @@ }, { "cell_type": "code", - "execution_count": 139, + "execution_count": 142, "metadata": {}, "outputs": [], "source": [ @@ -2388,7 +2451,7 @@ }, { "cell_type": "code", - "execution_count": 140, + "execution_count": 143, "metadata": {}, "outputs": [], "source": [ @@ -2397,7 +2460,7 @@ }, { "cell_type": "code", - "execution_count": 141, + "execution_count": 144, "metadata": {}, "outputs": [], "source": [ @@ -2406,7 +2469,7 @@ }, { "cell_type": "code", - "execution_count": 142, + "execution_count": 145, "metadata": {}, "outputs": [], "source": [ @@ -2422,7 +2485,7 @@ }, { "cell_type": "code", - "execution_count": 143, + "execution_count": 146, "metadata": {}, "outputs": [], "source": [ @@ -2438,7 +2501,7 @@ }, { "cell_type": "code", - "execution_count": 144, + "execution_count": 147, "metadata": {}, "outputs": [], "source": [ @@ -2454,7 +2517,7 @@ }, { "cell_type": "code", - "execution_count": 145, + "execution_count": 148, "metadata": {}, "outputs": [], "source": [ @@ -2463,7 +2526,7 @@ }, { "cell_type": "code", - "execution_count": 146, + "execution_count": 149, "metadata": {}, "outputs": [], "source": [ @@ -2479,7 +2542,7 @@ }, { "cell_type": "code", - "execution_count": 147, + "execution_count": 150, "metadata": {}, "outputs": [], "source": [ @@ -2489,7 +2552,7 @@ }, { "cell_type": "code", - "execution_count": 148, + "execution_count": 151, "metadata": {}, "outputs": [], "source": [ @@ -2512,7 +2575,7 @@ }, { "cell_type": "code", - "execution_count": 149, + "execution_count": 152, "metadata": {}, "outputs": [], "source": [ @@ -2528,7 +2591,7 @@ }, { "cell_type": "code", - "execution_count": 150, + "execution_count": 153, "metadata": {}, "outputs": [], "source": [ @@ -2551,7 +2614,7 @@ }, { "cell_type": "code", - "execution_count": 151, + "execution_count": 154, "metadata": {}, "outputs": [], "source": [ @@ -2590,7 +2653,7 @@ }, { "cell_type": "code", - "execution_count": 152, + "execution_count": 155, "metadata": {}, "outputs": [], "source": [ @@ -2615,7 +2678,7 @@ }, { "cell_type": "code", - "execution_count": 153, + "execution_count": 156, "metadata": {}, "outputs": [], "source": [ @@ 
-2631,7 +2694,7 @@ }, { "cell_type": "code", - "execution_count": 154, + "execution_count": 157, "metadata": {}, "outputs": [], "source": [ @@ -2649,7 +2712,7 @@ }, { "cell_type": "code", - "execution_count": 155, + "execution_count": 158, "metadata": {}, "outputs": [], "source": [ @@ -2661,7 +2724,7 @@ }, { "cell_type": "code", - "execution_count": 156, + "execution_count": 159, "metadata": {}, "outputs": [], "source": [ @@ -2686,7 +2749,7 @@ }, { "cell_type": "code", - "execution_count": 157, + "execution_count": 160, "metadata": {}, "outputs": [], "source": [ @@ -2697,7 +2760,7 @@ }, { "cell_type": "code", - "execution_count": 158, + "execution_count": 161, "metadata": {}, "outputs": [], "source": [ @@ -2737,7 +2800,7 @@ }, { "cell_type": "code", - "execution_count": 159, + "execution_count": 162, "metadata": {}, "outputs": [], "source": [ @@ -2746,7 +2809,7 @@ }, { "cell_type": "code", - "execution_count": 160, + "execution_count": 163, "metadata": {}, "outputs": [], "source": [ @@ -2762,7 +2825,7 @@ }, { "cell_type": "code", - "execution_count": 161, + "execution_count": 164, "metadata": {}, "outputs": [], "source": [ @@ -2779,7 +2842,7 @@ }, { "cell_type": "code", - "execution_count": 162, + "execution_count": 165, "metadata": {}, "outputs": [], "source": [ @@ -2788,7 +2851,7 @@ }, { "cell_type": "code", - "execution_count": 163, + "execution_count": 166, "metadata": {}, "outputs": [], "source": [ @@ -2811,7 +2874,7 @@ }, { "cell_type": "code", - "execution_count": 164, + "execution_count": 167, "metadata": {}, "outputs": [], "source": [ @@ -2821,7 +2884,7 @@ }, { "cell_type": "code", - "execution_count": 165, + "execution_count": 168, "metadata": {}, "outputs": [], "source": [ @@ -2831,7 +2894,7 @@ }, { "cell_type": "code", - "execution_count": 166, + "execution_count": 169, "metadata": {}, "outputs": [], "source": [ @@ -2862,7 +2925,7 @@ }, { "cell_type": "code", - "execution_count": 167, + "execution_count": 170, "metadata": {}, "outputs": [], "source": [ @@ -2881,7 +2944,7 @@ }, { "cell_type": "code", - "execution_count": 168, + "execution_count": 171, "metadata": {}, "outputs": [], "source": [ @@ -2890,7 +2953,7 @@ }, { "cell_type": "code", - "execution_count": 169, + "execution_count": 172, "metadata": {}, "outputs": [], "source": [ @@ -2913,7 +2976,7 @@ }, { "cell_type": "code", - "execution_count": 170, + "execution_count": 173, "metadata": {}, "outputs": [], "source": [ @@ -2922,7 +2985,7 @@ }, { "cell_type": "code", - "execution_count": 171, + "execution_count": 174, "metadata": {}, "outputs": [], "source": [ @@ -2939,7 +3002,7 @@ }, { "cell_type": "code", - "execution_count": 172, + "execution_count": 175, "metadata": {}, "outputs": [], "source": [ @@ -2948,7 +3011,7 @@ }, { "cell_type": "code", - "execution_count": 173, + "execution_count": 176, "metadata": {}, "outputs": [], "source": [ @@ -2959,7 +3022,7 @@ }, { "cell_type": "code", - "execution_count": 174, + "execution_count": 177, "metadata": {}, "outputs": [], "source": [ @@ -2974,7 +3037,7 @@ }, { "cell_type": "code", - "execution_count": 175, + "execution_count": 178, "metadata": {}, "outputs": [], "source": [ @@ -2983,7 +3046,7 @@ }, { "cell_type": "code", - "execution_count": 176, + "execution_count": 179, "metadata": {}, "outputs": [], "source": [ @@ -2992,7 +3055,7 @@ }, { "cell_type": "code", - "execution_count": 177, + "execution_count": 180, "metadata": {}, "outputs": [], "source": [ @@ -3019,7 +3082,7 @@ }, { "cell_type": "code", - "execution_count": 178, + "execution_count": 181, "metadata": {}, 
"outputs": [], "source": [ @@ -3028,7 +3091,7 @@ }, { "cell_type": "code", - "execution_count": 179, + "execution_count": 182, "metadata": { "scrolled": true }, @@ -3040,7 +3103,7 @@ }, { "cell_type": "code", - "execution_count": 180, + "execution_count": 183, "metadata": {}, "outputs": [], "source": [ @@ -3074,7 +3137,7 @@ }, { "cell_type": "code", - "execution_count": 181, + "execution_count": 184, "metadata": {}, "outputs": [], "source": [ @@ -3083,7 +3146,7 @@ }, { "cell_type": "code", - "execution_count": 182, + "execution_count": 185, "metadata": {}, "outputs": [], "source": [ @@ -3096,7 +3159,7 @@ }, { "cell_type": "code", - "execution_count": 183, + "execution_count": 186, "metadata": {}, "outputs": [], "source": [ @@ -3204,7 +3267,7 @@ }, { "cell_type": "code", - "execution_count": 4, + "execution_count": 187, "metadata": {}, "outputs": [], "source": [ @@ -3215,7 +3278,7 @@ }, { "cell_type": "code", - "execution_count": 5, + "execution_count": 188, "metadata": {}, "outputs": [], "source": [ @@ -3224,7 +3287,7 @@ }, { "cell_type": "code", - "execution_count": 6, + "execution_count": 189, "metadata": {}, "outputs": [], "source": [ @@ -3240,7 +3303,7 @@ }, { "cell_type": "code", - "execution_count": 7, + "execution_count": 190, "metadata": {}, "outputs": [], "source": [ @@ -3263,7 +3326,7 @@ }, { "cell_type": "code", - "execution_count": 8, + "execution_count": 191, "metadata": {}, "outputs": [], "source": [ @@ -3281,7 +3344,7 @@ }, { "cell_type": "code", - "execution_count": 9, + "execution_count": 192, "metadata": {}, "outputs": [], "source": [ @@ -3304,7 +3367,7 @@ }, { "cell_type": "code", - "execution_count": 10, + "execution_count": 193, "metadata": {}, "outputs": [], "source": [ @@ -3320,7 +3383,7 @@ }, { "cell_type": "code", - "execution_count": 11, + "execution_count": 194, "metadata": {}, "outputs": [], "source": [ @@ -3342,7 +3405,7 @@ }, { "cell_type": "code", - "execution_count": 12, + "execution_count": 195, "metadata": {}, "outputs": [], "source": [ @@ -3353,12 +3416,12 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "It looks like the best number of clusters is quite high, at 120. You might have expected it to be 40, since there are 40 different people on the pictures. However, the same person may look quite different on different pictures (e.g., with or without glasses, or simply shifted left or right)." + "It looks like the best number of clusters is quite high, at 100. You might have expected it to be 40, since there are 40 different people on the pictures. However, the same person may look quite different on different pictures (e.g., with or without glasses, or simply shifted left or right)." ] }, { "cell_type": "code", - "execution_count": 13, + "execution_count": 196, "metadata": {}, "outputs": [], "source": [ @@ -3377,12 +3440,12 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "The optimal number of clusters is not clear on this inertia diagram, as there is no obvious elbow, so let's stick with k=120." + "The optimal number of clusters is not clear on this inertia diagram, as there is no obvious elbow, so let's stick with k=100." 
] }, { "cell_type": "code", - "execution_count": 14, + "execution_count": 197, "metadata": {}, "outputs": [], "source": [ @@ -3398,7 +3461,7 @@ }, { "cell_type": "code", - "execution_count": 15, + "execution_count": 198, "metadata": {}, "outputs": [], "source": [ @@ -3445,7 +3508,7 @@ }, { "cell_type": "code", - "execution_count": 16, + "execution_count": 199, "metadata": {}, "outputs": [], "source": [ @@ -3465,7 +3528,7 @@ }, { "cell_type": "code", - "execution_count": 17, + "execution_count": 200, "metadata": {}, "outputs": [], "source": [ @@ -3502,7 +3565,7 @@ }, { "cell_type": "code", - "execution_count": 18, + "execution_count": 201, "metadata": {}, "outputs": [], "source": [ @@ -3533,7 +3596,7 @@ }, { "cell_type": "code", - "execution_count": 19, + "execution_count": 202, "metadata": {}, "outputs": [], "source": [ @@ -3544,7 +3607,7 @@ }, { "cell_type": "code", - "execution_count": 20, + "execution_count": 203, "metadata": {}, "outputs": [], "source": [ @@ -3576,7 +3639,7 @@ }, { "cell_type": "code", - "execution_count": 21, + "execution_count": 204, "metadata": {}, "outputs": [], "source": [ @@ -3595,7 +3658,7 @@ }, { "cell_type": "code", - "execution_count": 22, + "execution_count": 205, "metadata": {}, "outputs": [], "source": [ @@ -3606,7 +3669,7 @@ }, { "cell_type": "code", - "execution_count": 23, + "execution_count": 206, "metadata": {}, "outputs": [], "source": [ @@ -3622,7 +3685,7 @@ }, { "cell_type": "code", - "execution_count": 24, + "execution_count": 207, "metadata": {}, "outputs": [], "source": [ @@ -3650,7 +3713,7 @@ }, { "cell_type": "code", - "execution_count": 25, + "execution_count": 208, "metadata": {}, "outputs": [], "source": [ @@ -3659,7 +3722,7 @@ }, { "cell_type": "code", - "execution_count": 26, + "execution_count": 209, "metadata": {}, "outputs": [], "source": [ @@ -3675,7 +3738,7 @@ }, { "cell_type": "code", - "execution_count": 27, + "execution_count": 210, "metadata": {}, "outputs": [], "source": [ @@ -3705,7 +3768,7 @@ }, { "cell_type": "code", - "execution_count": 28, + "execution_count": 211, "metadata": {}, "outputs": [], "source": [ @@ -3714,7 +3777,7 @@ }, { "cell_type": "code", - "execution_count": 29, + "execution_count": 212, "metadata": {}, "outputs": [], "source": [ @@ -3727,7 +3790,7 @@ }, { "cell_type": "code", - "execution_count": 30, + "execution_count": 213, "metadata": {}, "outputs": [], "source": [ @@ -3736,7 +3799,7 @@ }, { "cell_type": "code", - "execution_count": 31, + "execution_count": 214, "metadata": {}, "outputs": [], "source": [ @@ -3745,7 +3808,7 @@ }, { "cell_type": "code", - "execution_count": 32, + "execution_count": 215, "metadata": {}, "outputs": [], "source": [ @@ -3754,7 +3817,7 @@ }, { "cell_type": "code", - "execution_count": 33, + "execution_count": 216, "metadata": {}, "outputs": [], "source": [ @@ -3786,7 +3849,7 @@ "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", - "version": "3.7.6" + "version": "3.7.9" } }, "nbformat": 4, diff --git a/10_neural_nets_with_keras.ipynb b/10_neural_nets_with_keras.ipynb index caeac36..70b806e 100644 --- a/10_neural_nets_with_keras.ipynb +++ b/10_neural_nets_with_keras.ipynb @@ -84,11 +84,7 @@ " print(\"Saving figure\", fig_id)\n", " if tight_layout:\n", " plt.tight_layout()\n", - " plt.savefig(path, format=fig_extension, dpi=resolution)\n", - "\n", - "# Ignore useless warnings (see SciPy issue #5998)\n", - "import warnings\n", - "warnings.filterwarnings(action=\"ignore\", message=\"^internal gelsd\")" + " plt.savefig(path, format=fig_extension, 
dpi=resolution)\n" ] }, { @@ -735,13 +731,21 @@ "y_proba.round(2)" ] }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "**Warning**: `model.predict_classes(X_new)` is deprecated. It is replaced with `np.argmax(model.predict(X_new), axis=-1)`." + ] + }, { "cell_type": "code", "execution_count": 44, "metadata": {}, "outputs": [], "source": [ - "y_pred = model.predict_classes(X_new)\n", + "#y_pred = model.predict_classes(X_new) # deprecated\n", + "y_pred = np.argmax(model.predict(X_new), axis=-1)\n", "y_pred" ] }, @@ -1514,7 +1518,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "**Warning**: the following cell crashes at the end of training. This seems to be caused by [Keras issue #13586](https://github.com/keras-team/keras/issues/13586), which was triggered by a recent change in Scikit-Learn. [Pull Request #13598](https://github.com/keras-team/keras/pull/13598) seems to fix the issue, so this problem should be resolved soon." + "**Warning**: the following cell crashes at the end of training. This seems to be caused by [Keras issue #13586](https://github.com/keras-team/keras/issues/13586), which was triggered by a recent change in Scikit-Learn. [Pull Request #13598](https://github.com/keras-team/keras/pull/13598) seems to fix the issue, so this problem should be resolved soon. In the meantime, I've added `.tolist()` and `.rvs(1000).tolist()` as workarounds." ] }, { @@ -1528,8 +1532,8 @@ "\n", "param_distribs = {\n", " \"n_hidden\": [0, 1, 2, 3],\n", - " \"n_neurons\": np.arange(1, 100),\n", - " \"learning_rate\": reciprocal(3e-4, 3e-2),\n", + " \"n_neurons\": np.arange(1, 100) .tolist(),\n", + " \"learning_rate\": reciprocal(3e-4, 3e-2) .rvs(1000).tolist(),\n", "}\n", "\n", "rnd_search_cv = RandomizedSearchCV(keras_reg, param_distribs, n_iter=10, cv=3, verbose=2)\n", @@ -1888,6 +1892,7 @@ "plt.gca().set_xscale('log')\n", "plt.hlines(min(expon_lr.losses), min(expon_lr.rates), max(expon_lr.rates))\n", "plt.axis([min(expon_lr.rates), max(expon_lr.rates), 0, expon_lr.losses[0]])\n", + "plt.grid()\n", "plt.xlabel(\"Learning rate\")\n", "plt.ylabel(\"Loss\")" ] @@ -1896,7 +1901,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "The loss starts shooting back up violently around 3e-1, so let's try using 2e-1 as our learning rate:" + "The loss starts shooting back up violently when the learning rate goes over 6e-1, so let's try using half of that, at 3e-1:" ] }, { @@ -1931,7 +1936,7 @@ "outputs": [], "source": [ "model.compile(loss=\"sparse_categorical_crossentropy\",\n", - " optimizer=keras.optimizers.SGD(lr=2e-1),\n", + " optimizer=keras.optimizers.SGD(lr=3e-1),\n", " metrics=[\"accuracy\"])" ] }, @@ -1958,7 +1963,7 @@ "\n", "history = model.fit(X_train, y_train, epochs=100,\n", " validation_data=(X_valid, y_valid),\n", - " callbacks=[early_stopping_cb, checkpoint_cb, tensorboard_cb])" + " callbacks=[checkpoint_cb, early_stopping_cb, tensorboard_cb])" ] }, { @@ -2011,7 +2016,7 @@ "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", - "version": "3.7.6" + "version": "3.7.9" }, "nav_menu": { "height": "264px", diff --git a/11_training_deep_neural_networks.ipynb b/11_training_deep_neural_networks.ipynb index 9621268..207e64f 100644 --- a/11_training_deep_neural_networks.ipynb +++ b/11_training_deep_neural_networks.ipynb @@ -1039,7 +1039,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "Great! We got quite a bit of transfer: the error rate dropped by a factor of 4!" + "Great! 
We got quite a bit of transfer: the error rate dropped by a factor of 4.5!" ] }, { @@ -1048,7 +1048,7 @@ "metadata": {}, "outputs": [], "source": [ - "(100 - 96.95) / (100 - 99.25)" + "(100 - 97.05) / (100 - 99.35)" ] }, { @@ -2274,7 +2274,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "The model with the lowest validation loss gets about 47% accuracy on the validation set. It took 39 epochs to reach the lowest validation loss, with roughly 10 seconds per epoch on my laptop (without a GPU). Let's see if we can improve performance using Batch Normalization." + "The model with the lowest validation loss gets about 47.6% accuracy on the validation set. It took 27 epochs to reach the lowest validation loss, with roughly 8 seconds per epoch on my laptop (without a GPU). Let's see if we can improve performance using Batch Normalization." ] }, { @@ -2339,9 +2339,9 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "* *Is the model converging faster than before?* Much faster! The previous model took 39 epochs to reach the lowest validation loss, while the new model with BN took 18 epochs. That's more than twice as fast as the previous model. The BN layers stabilized training and allowed us to use a much larger learning rate, so convergence was faster.\n", - "* *Does BN produce a better model?* Yes! The final model is also much better, with 55% accuracy instead of 47%. It's still not a very good model, but at least it's much better than before (a Convolutional Neural Network would do much better, but that's a different topic, see chapter 14).\n", - "* *How does BN affect training speed?* Although the model converged twice as fast, each epoch took about 16s instead of 10s, because of the extra computations required by the BN layers. So overall, although the number of epochs was reduced by 50%, the training time (wall time) was shortened by 30%. Which is still pretty significant!" + "* *Is the model converging faster than before?* Much faster! The previous model took 27 epochs to reach the lowest validation loss, while the new model achieved that same loss in just 5 epochs and continued to make progress until the 16th epoch. The BN layers stabilized training and allowed us to use a much larger learning rate, so convergence was faster.\n", + "* *Does BN produce a better model?* Yes! The final model is also much better, with 54.0% accuracy instead of 47.6%. It's still not a very good model, but at least it's much better than before (a Convolutional Neural Network would do much better, but that's a different topic, see chapter 14).\n", + "* *How does BN affect training speed?* Although the model converged much faster, each epoch took about 12s instead of 8s, because of the extra computations required by the BN layers. But overall the training time (wall time) was shortened significantly!" ] }, { @@ -2412,7 +2412,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "We get 51.4% accuracy, which is better than the original model, but not quite as good as the model using batch normalization. Moreover, it took 13 epochs to reach the best model, which is much faster than both the original model and the BN model, plus each epoch took only 10 seconds, just like the original model. So it's by far the fastest model to train (both in terms of epochs and wall time)." + "We get 47.9% accuracy, which is not much better than the original model (47.6%), and not as good as the model using batch normalization (54.0%). 
However, convergence was almost as fast as with the BN model, plus each epoch took only 7 seconds. So it's by far the fastest model to train so far." ] }, { @@ -2473,7 +2473,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "The model reaches 50.8% accuracy on the validation set. That's very slightly worse than without dropout (51.4%). With an extensive hyperparameter search, it might be possible to do better (I tried dropout rates of 5%, 10%, 20% and 40%, and learning rates 1e-4, 3e-4, 5e-4, and 1e-3), but probably not much better in this case." + "The model reaches 48.9% accuracy on the validation set. That's very slightly better than without dropout (47.6%). With an extensive hyperparameter search, it might be possible to do better (I tried dropout rates of 5%, 10%, 20% and 40%, and learning rates 1e-4, 3e-4, 5e-4, and 1e-3), but probably not much better in this case." ] }, { @@ -2561,7 +2561,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "We only get virtually no accuracy improvement in this case (from 50.8% to 50.9%).\n", + "We get no accuracy improvement in this case (we're still at 48.9% accuracy).\n", "\n", "So the best model we got in this exercise is the Batch Normalization model." ] @@ -2655,7 +2655,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "One cycle allowed us to train the model in just 15 epochs, each taking only 3 seconds (thanks to the larger batch size). This is over 3 times faster than the fastest model we trained so far. Moreover, we improved the model's performance (from 50.8% to 52.8%). The batch normalized model reaches a slightly better performance, but it's much slower to train." + "One cycle allowed us to train the model in just 15 epochs, each taking only 2 seconds (thanks to the larger batch size). This is several times faster than the fastest model we trained so far. Moreover, we improved the model's performance (from 47.6% to 52.0%). The batch normalized model reaches a slightly better performance (54%), but it's much slower to train." 
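The 1cycle schedule referred to here can be implemented as a small Keras callback; the sketch below is one possible version (the linear ramp shape, `max_rate`, batch size and iteration count are assumptions for illustration, not the exact values used in the exercise):

```python
from tensorflow import keras
K = keras.backend

class OneCycleLR(keras.callbacks.Callback):
    """Ramp the learning rate linearly up to max_rate, then back down."""
    def __init__(self, iterations, max_rate, start_rate=None):
        self.iterations = iterations
        self.max_rate = max_rate
        self.start_rate = start_rate or max_rate / 10
        self.half = max(1, iterations // 2)
        self.iteration = 0

    def on_batch_begin(self, batch, logs=None):
        if self.iteration < self.half:
            frac = self.iteration / self.half
            rate = self.start_rate + frac * (self.max_rate - self.start_rate)
        else:
            frac = (self.iteration - self.half) / max(1, self.iterations - self.half)
            rate = self.max_rate - frac * (self.max_rate - self.start_rate)
        self.iteration += 1
        K.set_value(self.model.optimizer.lr, rate)

# Rough usage, e.g. with a batch size of 128 and 15 epochs on 45,000 images:
# n_iter = (45000 // 128) * 15
# model.fit(..., epochs=15, callbacks=[OneCycleLR(n_iter, max_rate=0.05)])
```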
] }, { @@ -2682,7 +2682,7 @@ "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", - "version": "3.7.6" + "version": "3.7.9" }, "nav_menu": { "height": "360px", diff --git a/12_custom_models_and_training_with_tensorflow.ipynb b/12_custom_models_and_training_with_tensorflow.ipynb index 6e59a58..2826b8a 100644 --- a/12_custom_models_and_training_with_tensorflow.ipynb +++ b/12_custom_models_and_training_with_tensorflow.ipynb @@ -59,10 +59,11 @@ "except Exception:\n", " pass\n", "\n", - "# TensorFlow ≥2.0 is required\n", + "# TensorFlow ≥2.4 is required in this notebook\n", + "# Earlier 2.x versions will mostly work the same, but with a few bugs\n", "import tensorflow as tf\n", "from tensorflow import keras\n", - "assert tf.__version__ >= \"2.0\"\n", + "assert tf.__version__ >= \"2.4\"\n", "\n", "# Common imports\n", "import numpy as np\n", @@ -1033,8 +1034,8 @@ "metadata": {}, "outputs": [], "source": [ - "#model = keras.models.load_model(\"my_model_with_a_custom_loss_class.h5\", # TODO: check PR #25956\n", - "# custom_objects={\"HuberLoss\": HuberLoss})" + "model = keras.models.load_model(\"my_model_with_a_custom_loss_class.h5\",\n", + " custom_objects={\"HuberLoss\": HuberLoss})" ] }, { @@ -1052,16 +1053,6 @@ "execution_count": 82, "metadata": {}, "outputs": [], - "source": [ - "#model = keras.models.load_model(\"my_model_with_a_custom_loss_class.h5\", # TODO: check PR #25956\n", - "# custom_objects={\"HuberLoss\": HuberLoss})" - ] - }, - { - "cell_type": "code", - "execution_count": 83, - "metadata": {}, - "outputs": [], "source": [ "model.loss.threshold" ] @@ -1075,7 +1066,7 @@ }, { "cell_type": "code", - "execution_count": 84, + "execution_count": 83, "metadata": {}, "outputs": [], "source": [ @@ -1086,7 +1077,7 @@ }, { "cell_type": "code", - "execution_count": 85, + "execution_count": 84, "metadata": {}, "outputs": [], "source": [ @@ -1106,7 +1097,7 @@ }, { "cell_type": "code", - "execution_count": 86, + "execution_count": 85, "metadata": {}, "outputs": [], "source": [ @@ -1118,7 +1109,7 @@ }, { "cell_type": "code", - "execution_count": 87, + "execution_count": 86, "metadata": {}, "outputs": [], "source": [ @@ -1129,7 +1120,7 @@ }, { "cell_type": "code", - "execution_count": 88, + "execution_count": 87, "metadata": {}, "outputs": [], "source": [ @@ -1145,7 +1136,7 @@ }, { "cell_type": "code", - "execution_count": 89, + "execution_count": 88, "metadata": {}, "outputs": [], "source": [ @@ -1154,7 +1145,7 @@ }, { "cell_type": "code", - "execution_count": 90, + "execution_count": 89, "metadata": {}, "outputs": [], "source": [ @@ -1164,7 +1155,7 @@ }, { "cell_type": "code", - "execution_count": 91, + "execution_count": 90, "metadata": {}, "outputs": [], "source": [ @@ -1173,7 +1164,7 @@ }, { "cell_type": "code", - "execution_count": 92, + "execution_count": 91, "metadata": {}, "outputs": [], "source": [ @@ -1189,7 +1180,7 @@ }, { "cell_type": "code", - "execution_count": 93, + "execution_count": 92, "metadata": {}, "outputs": [], "source": [ @@ -1204,7 +1195,7 @@ }, { "cell_type": "code", - "execution_count": 94, + "execution_count": 93, "metadata": {}, "outputs": [], "source": [ @@ -1215,7 +1206,7 @@ }, { "cell_type": "code", - "execution_count": 95, + "execution_count": 94, "metadata": {}, "outputs": [], "source": [ @@ -1231,7 +1222,7 @@ }, { "cell_type": "code", - "execution_count": 96, + "execution_count": 95, "metadata": {}, "outputs": [], "source": [ @@ -1240,7 +1231,7 @@ }, { "cell_type": "code", - "execution_count": 97, + "execution_count": 96, "metadata": 
{}, "outputs": [], "source": [ @@ -1250,7 +1241,7 @@ }, { "cell_type": "code", - "execution_count": 98, + "execution_count": 97, "metadata": {}, "outputs": [], "source": [ @@ -1259,7 +1250,7 @@ }, { "cell_type": "code", - "execution_count": 99, + "execution_count": 98, "metadata": {}, "outputs": [], "source": [ @@ -1282,7 +1273,7 @@ }, { "cell_type": "code", - "execution_count": 100, + "execution_count": 99, "metadata": {}, "outputs": [], "source": [ @@ -1293,7 +1284,7 @@ }, { "cell_type": "code", - "execution_count": 101, + "execution_count": 100, "metadata": {}, "outputs": [], "source": [ @@ -1306,7 +1297,7 @@ }, { "cell_type": "code", - "execution_count": 102, + "execution_count": 101, "metadata": {}, "outputs": [], "source": [ @@ -1315,7 +1306,7 @@ }, { "cell_type": "code", - "execution_count": 103, + "execution_count": 102, "metadata": {}, "outputs": [], "source": [ @@ -1326,7 +1317,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "**Warning**: if you use the same function as the loss and a metric, you may be surprised to see different results. This is generally just due to floating point precision errors: even though the mathematical equations are equivalent, the operations are not run in the same order, which can lead to small differences. Moreover, when using sample weights, there's more than just precision errors:\n", + "**Note**: if you use the same function as the loss and a metric, you may be surprised to see different results. This is generally just due to floating point precision errors: even though the mathematical equations are equivalent, the operations are not run in the same order, which can lead to small differences. Moreover, when using sample weights, there's more than just precision errors:\n", "* the loss since the start of the epoch is the mean of all batch losses seen so far. Each batch loss is the sum of the weighted instance losses divided by the _batch size_ (not the sum of weights, so the batch loss is _not_ the weighted mean of the losses).\n", "* the metric since the start of the epoch is equal to the sum of weighted instance losses divided by sum of all weights seen so far. In other words, it is the weighted mean of all the instance losses. 
Not the same thing.\n", "\n", @@ -1335,7 +1326,7 @@ }, { "cell_type": "code", - "execution_count": 104, + "execution_count": 103, "metadata": {}, "outputs": [], "source": [ @@ -1344,7 +1335,7 @@ }, { "cell_type": "code", - "execution_count": 105, + "execution_count": 104, "metadata": {}, "outputs": [], "source": [ @@ -1354,7 +1345,7 @@ }, { "cell_type": "code", - "execution_count": 106, + "execution_count": 105, "metadata": {}, "outputs": [], "source": [ @@ -1370,7 +1361,7 @@ }, { "cell_type": "code", - "execution_count": 107, + "execution_count": 106, "metadata": {}, "outputs": [], "source": [ @@ -1380,7 +1371,7 @@ }, { "cell_type": "code", - "execution_count": 108, + "execution_count": 107, "metadata": {}, "outputs": [], "source": [ @@ -1389,7 +1380,7 @@ }, { "cell_type": "code", - "execution_count": 109, + "execution_count": 108, "metadata": {}, "outputs": [], "source": [ @@ -1398,7 +1389,7 @@ }, { "cell_type": "code", - "execution_count": 110, + "execution_count": 109, "metadata": {}, "outputs": [], "source": [ @@ -1407,7 +1398,7 @@ }, { "cell_type": "code", - "execution_count": 111, + "execution_count": 110, "metadata": {}, "outputs": [], "source": [ @@ -1423,7 +1414,7 @@ }, { "cell_type": "code", - "execution_count": 112, + "execution_count": 111, "metadata": {}, "outputs": [], "source": [ @@ -1431,15 +1422,9 @@ " def __init__(self, threshold=1.0, **kwargs):\n", " super().__init__(**kwargs) # handles base args (e.g., dtype)\n", " self.threshold = threshold\n", - " #self.huber_fn = create_huber(threshold) # TODO: investigate why this fails\n", + " self.huber_fn = create_huber(threshold)\n", " self.total = self.add_weight(\"total\", initializer=\"zeros\")\n", " self.count = self.add_weight(\"count\", initializer=\"zeros\")\n", - " def huber_fn(self, y_true, y_pred): # workaround\n", - " error = y_true - y_pred\n", - " is_small_error = tf.abs(error) < self.threshold\n", - " squared_loss = tf.square(error) / 2\n", - " linear_loss = self.threshold * tf.abs(error) - self.threshold**2 / 2\n", - " return tf.where(is_small_error, squared_loss, linear_loss)\n", " def update_state(self, y_true, y_pred, sample_weight=None):\n", " metric = self.huber_fn(y_true, y_pred)\n", " self.total.assign_add(tf.reduce_sum(metric))\n", @@ -1451,16 +1436,9 @@ " return {**base_config, \"threshold\": self.threshold}" ] }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "**Warning**: when running the following cell, if you get autograph warnings such as `WARNING:tensorflow:AutoGraph could not transform [...] 
and will run it as-is`, then please install version 0.2.2 of the gast library (e.g., by running `!pip install gast==0.2.2`), then restart the kernel and run this notebook again from the beginning (see [autograph issue #1](https://github.com/tensorflow/autograph/issues/1) for more details):" - ] - }, { "cell_type": "code", - "execution_count": 113, + "execution_count": 112, "metadata": {}, "outputs": [], "source": [ @@ -1474,7 +1452,7 @@ }, { "cell_type": "code", - "execution_count": 114, + "execution_count": 113, "metadata": {}, "outputs": [], "source": [ @@ -1488,7 +1466,7 @@ }, { "cell_type": "code", - "execution_count": 115, + "execution_count": 114, "metadata": {}, "outputs": [], "source": [ @@ -1497,7 +1475,7 @@ }, { "cell_type": "code", - "execution_count": 116, + "execution_count": 115, "metadata": {}, "outputs": [], "source": [ @@ -1514,7 +1492,7 @@ }, { "cell_type": "code", - "execution_count": 117, + "execution_count": 116, "metadata": {}, "outputs": [], "source": [ @@ -1525,7 +1503,7 @@ }, { "cell_type": "code", - "execution_count": 118, + "execution_count": 117, "metadata": {}, "outputs": [], "source": [ @@ -1538,7 +1516,7 @@ }, { "cell_type": "code", - "execution_count": 119, + "execution_count": 118, "metadata": {}, "outputs": [], "source": [ @@ -1547,7 +1525,7 @@ }, { "cell_type": "code", - "execution_count": 120, + "execution_count": 119, "metadata": {}, "outputs": [], "source": [ @@ -1556,7 +1534,7 @@ }, { "cell_type": "code", - "execution_count": 121, + "execution_count": 120, "metadata": {}, "outputs": [], "source": [ @@ -1565,18 +1543,18 @@ }, { "cell_type": "code", - "execution_count": 122, + "execution_count": 121, "metadata": {}, "outputs": [], "source": [ - "#model = keras.models.load_model(\"my_model_with_a_custom_metric.h5\", # TODO: check PR #25956\n", - "# custom_objects={\"huber_fn\": create_huber(2.0),\n", - "# \"HuberMetric\": HuberMetric})" + "model = keras.models.load_model(\"my_model_with_a_custom_metric.h5\",\n", + " custom_objects={\"huber_fn\": create_huber(2.0),\n", + " \"HuberMetric\": HuberMetric})" ] }, { "cell_type": "code", - "execution_count": 123, + "execution_count": 122, "metadata": {}, "outputs": [], "source": [ @@ -1592,7 +1570,7 @@ }, { "cell_type": "code", - "execution_count": 124, + "execution_count": 123, "metadata": {}, "outputs": [], "source": [ @@ -1608,7 +1586,7 @@ }, { "cell_type": "code", - "execution_count": 125, + "execution_count": 124, "metadata": {}, "outputs": [], "source": [ @@ -1634,7 +1612,7 @@ }, { "cell_type": "code", - "execution_count": 126, + "execution_count": 125, "metadata": {}, "outputs": [], "source": [ @@ -1645,7 +1623,7 @@ }, { "cell_type": "code", - "execution_count": 127, + "execution_count": 126, "metadata": {}, "outputs": [], "source": [ @@ -1658,7 +1636,7 @@ }, { "cell_type": "code", - "execution_count": 128, + "execution_count": 127, "metadata": {}, "outputs": [], "source": [ @@ -1667,7 +1645,7 @@ }, { "cell_type": "code", - "execution_count": 129, + "execution_count": 128, "metadata": { "scrolled": true }, @@ -1680,7 +1658,7 @@ }, { "cell_type": "code", - "execution_count": 130, + "execution_count": 129, "metadata": {}, "outputs": [], "source": [ @@ -1689,7 +1667,7 @@ }, { "cell_type": "code", - "execution_count": 131, + "execution_count": 130, "metadata": {}, "outputs": [], "source": [ @@ -1698,33 +1676,26 @@ }, { "cell_type": "code", - "execution_count": 132, + "execution_count": 131, "metadata": {}, "outputs": [], "source": [ - "#model = keras.models.load_model(\"my_model_with_a_custom_metric_v2.h5\", 
# TODO: check PR #25956\n", - "# custom_objects={\"HuberMetric\": HuberMetric})" + "model = keras.models.load_model(\"my_model_with_a_custom_metric_v2.h5\",\n", + " custom_objects={\"HuberMetric\": HuberMetric})" ] }, { "cell_type": "code", - "execution_count": 133, + "execution_count": 132, "metadata": {}, "outputs": [], "source": [ "model.fit(X_train_scaled.astype(np.float32), y_train.astype(np.float32), epochs=2)" ] }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "**Warning**: In TF 2.2, tf.keras adds an extra first metric in `model.metrics` at position 0 (see [TF issue #38150](https://github.com/tensorflow/tensorflow/issues/38150)). This forces us to use `model.metrics[-1]` rather than `model.metrics[0]` to access the `HuberMetric`." - ] - }, { "cell_type": "code", - "execution_count": 134, + "execution_count": 133, "metadata": { "scrolled": true }, @@ -1742,7 +1713,7 @@ }, { "cell_type": "code", - "execution_count": 135, + "execution_count": 134, "metadata": {}, "outputs": [], "source": [ @@ -1751,7 +1722,7 @@ }, { "cell_type": "code", - "execution_count": 136, + "execution_count": 135, "metadata": {}, "outputs": [], "source": [ @@ -1767,7 +1738,7 @@ }, { "cell_type": "code", - "execution_count": 137, + "execution_count": 136, "metadata": {}, "outputs": [], "source": [ @@ -1778,7 +1749,7 @@ }, { "cell_type": "code", - "execution_count": 138, + "execution_count": 137, "metadata": {}, "outputs": [], "source": [ @@ -1787,7 +1758,7 @@ " keras.layers.Dense(1),\n", " exponential_layer\n", "])\n", - "model.compile(loss=\"mse\", optimizer=\"nadam\")\n", + "model.compile(loss=\"mse\", optimizer=\"sgd\")\n", "model.fit(X_train_scaled, y_train, epochs=5,\n", " validation_data=(X_valid_scaled, y_valid))\n", "model.evaluate(X_test_scaled, y_test)" @@ -1795,7 +1766,7 @@ }, { "cell_type": "code", - "execution_count": 139, + "execution_count": 138, "metadata": {}, "outputs": [], "source": [ @@ -1827,7 +1798,7 @@ }, { "cell_type": "code", - "execution_count": 140, + "execution_count": 139, "metadata": {}, "outputs": [], "source": [ @@ -1838,7 +1809,7 @@ }, { "cell_type": "code", - "execution_count": 141, + "execution_count": 140, "metadata": {}, "outputs": [], "source": [ @@ -1850,7 +1821,7 @@ }, { "cell_type": "code", - "execution_count": 142, + "execution_count": 141, "metadata": {}, "outputs": [], "source": [ @@ -1862,7 +1833,7 @@ }, { "cell_type": "code", - "execution_count": 143, + "execution_count": 142, "metadata": {}, "outputs": [], "source": [ @@ -1871,7 +1842,7 @@ }, { "cell_type": "code", - "execution_count": 144, + "execution_count": 143, "metadata": {}, "outputs": [], "source": [ @@ -1881,7 +1852,7 @@ }, { "cell_type": "code", - "execution_count": 145, + "execution_count": 144, "metadata": {}, "outputs": [], "source": [ @@ -1897,7 +1868,7 @@ }, { "cell_type": "code", - "execution_count": 146, + "execution_count": 145, "metadata": {}, "outputs": [], "source": [ @@ -1908,7 +1879,7 @@ }, { "cell_type": "code", - "execution_count": 147, + "execution_count": 146, "metadata": {}, "outputs": [], "source": [ @@ -1926,7 +1897,7 @@ }, { "cell_type": "code", - "execution_count": 148, + "execution_count": 147, "metadata": {}, "outputs": [], "source": [ @@ -1948,7 +1919,7 @@ }, { "cell_type": "code", - "execution_count": 149, + "execution_count": 148, "metadata": {}, "outputs": [], "source": [ @@ -1967,7 +1938,7 @@ }, { "cell_type": "code", - "execution_count": 150, + "execution_count": 149, "metadata": {}, "outputs": [], "source": [ @@ -1976,7 +1947,7 @@ }, { "cell_type": "code", 
- "execution_count": 151, + "execution_count": 150, "metadata": {}, "outputs": [], "source": [ @@ -1996,7 +1967,7 @@ }, { "cell_type": "code", - "execution_count": 152, + "execution_count": 151, "metadata": {}, "outputs": [], "source": [ @@ -2019,7 +1990,7 @@ }, { "cell_type": "code", - "execution_count": 153, + "execution_count": 152, "metadata": {}, "outputs": [], "source": [ @@ -2030,7 +2001,7 @@ }, { "cell_type": "code", - "execution_count": 154, + "execution_count": 153, "metadata": {}, "outputs": [], "source": [ @@ -2043,7 +2014,7 @@ }, { "cell_type": "code", - "execution_count": 155, + "execution_count": 154, "metadata": {}, "outputs": [], "source": [ @@ -2052,7 +2023,7 @@ }, { "cell_type": "code", - "execution_count": 156, + "execution_count": 155, "metadata": {}, "outputs": [], "source": [ @@ -2061,7 +2032,7 @@ }, { "cell_type": "code", - "execution_count": 157, + "execution_count": 156, "metadata": {}, "outputs": [], "source": [ @@ -2077,7 +2048,7 @@ }, { "cell_type": "code", - "execution_count": 158, + "execution_count": 157, "metadata": {}, "outputs": [], "source": [ @@ -2088,7 +2059,7 @@ }, { "cell_type": "code", - "execution_count": 159, + "execution_count": 158, "metadata": {}, "outputs": [], "source": [ @@ -2103,7 +2074,7 @@ }, { "cell_type": "code", - "execution_count": 160, + "execution_count": 159, "metadata": {}, "outputs": [], "source": [ @@ -2120,9 +2091,16 @@ "## Losses and Metrics Based on Model Internals" ] }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "**Note**: due to an issue introduced in TF 2.2 ([#46858](https://github.com/tensorflow/tensorflow/issues/46858)), it is currently not possible to use `add_loss()` along with the `build()` method. So the following code differs from the book: I create the `reconstruct` layer in the constructor instead of the `build()` method. Unfortunately, this means that the number of units in this layer must be hard-coded (alternatively, it could be passed as an argument to the constructor)." 
+ ] + }, { "cell_type": "code", - "execution_count": 161, + "execution_count": 160, "metadata": {}, "outputs": [], "source": [ @@ -2133,13 +2111,14 @@ " kernel_initializer=\"lecun_normal\")\n", " for _ in range(5)]\n", " self.out = keras.layers.Dense(output_dim)\n", - " # TODO: check https://github.com/tensorflow/tensorflow/issues/26260\n", - " #self.reconstruction_mean = keras.metrics.Mean(name=\"reconstruction_error\")\n", + " self.reconstruct = keras.layers.Dense(8) # workaround for TF issue #46858\n", + " self.reconstruction_mean = keras.metrics.Mean(name=\"reconstruction_error\")\n", "\n", - " def build(self, batch_input_shape):\n", - " n_inputs = batch_input_shape[-1]\n", - " self.reconstruct = keras.layers.Dense(n_inputs)\n", - " super().build(batch_input_shape)\n", + " #Commented out due to TF issue #46858, see the note above\n", + " #def build(self, batch_input_shape):\n", + " # n_inputs = batch_input_shape[-1]\n", + " # self.reconstruct = keras.layers.Dense(n_inputs)\n", + " # super().build(batch_input_shape)\n", "\n", " def call(self, inputs, training=None):\n", " Z = inputs\n", @@ -2148,15 +2127,15 @@ " reconstruction = self.reconstruct(Z)\n", " recon_loss = tf.reduce_mean(tf.square(reconstruction - inputs))\n", " self.add_loss(0.05 * recon_loss)\n", - " #if training:\n", - " # result = self.reconstruction_mean(recon_loss)\n", - " # self.add_metric(result)\n", + " if training:\n", + " result = self.reconstruction_mean(recon_loss)\n", + " self.add_metric(result)\n", " return self.out(Z)" ] }, { "cell_type": "code", - "execution_count": 162, + "execution_count": 161, "metadata": {}, "outputs": [], "source": [ @@ -2167,7 +2146,7 @@ }, { "cell_type": "code", - "execution_count": 163, + "execution_count": 162, "metadata": {}, "outputs": [], "source": [ @@ -2186,7 +2165,7 @@ }, { "cell_type": "code", - "execution_count": 164, + "execution_count": 163, "metadata": {}, "outputs": [], "source": [ @@ -2196,7 +2175,7 @@ }, { "cell_type": "code", - "execution_count": 165, + "execution_count": 164, "metadata": {}, "outputs": [], "source": [ @@ -2207,7 +2186,7 @@ }, { "cell_type": "code", - "execution_count": 166, + "execution_count": 165, "metadata": {}, "outputs": [], "source": [ @@ -2216,7 +2195,7 @@ }, { "cell_type": "code", - "execution_count": 167, + "execution_count": 166, "metadata": {}, "outputs": [], "source": [ @@ -2229,7 +2208,7 @@ }, { "cell_type": "code", - "execution_count": 168, + "execution_count": 167, "metadata": {}, "outputs": [], "source": [ @@ -2238,7 +2217,7 @@ }, { "cell_type": "code", - "execution_count": 169, + "execution_count": 168, "metadata": {}, "outputs": [], "source": [ @@ -2254,7 +2233,7 @@ }, { "cell_type": "code", - "execution_count": 170, + "execution_count": 169, "metadata": {}, "outputs": [], "source": [ @@ -2268,7 +2247,7 @@ }, { "cell_type": "code", - "execution_count": 171, + "execution_count": 170, "metadata": {}, "outputs": [], "source": [ @@ -2277,7 +2256,7 @@ }, { "cell_type": "code", - "execution_count": 172, + "execution_count": 171, "metadata": {}, "outputs": [], "source": [ @@ -2290,7 +2269,7 @@ }, { "cell_type": "code", - "execution_count": 173, + "execution_count": 172, "metadata": {}, "outputs": [], "source": [ @@ -2299,7 +2278,7 @@ }, { "cell_type": "code", - "execution_count": 174, + "execution_count": 173, "metadata": {}, "outputs": [], "source": [ @@ -2313,7 +2292,7 @@ }, { "cell_type": "code", - "execution_count": 175, + "execution_count": 174, "metadata": {}, "outputs": [], "source": [ @@ -2322,7 +2301,7 @@ }, { "cell_type": "code", 
- "execution_count": 176, + "execution_count": 175, "metadata": {}, "outputs": [], "source": [ @@ -2336,7 +2315,7 @@ }, { "cell_type": "code", - "execution_count": 177, + "execution_count": 176, "metadata": {}, "outputs": [], "source": [ @@ -2351,7 +2330,7 @@ }, { "cell_type": "code", - "execution_count": 178, + "execution_count": 177, "metadata": {}, "outputs": [], "source": [ @@ -2366,7 +2345,7 @@ }, { "cell_type": "code", - "execution_count": 179, + "execution_count": 178, "metadata": {}, "outputs": [], "source": [ @@ -2375,7 +2354,7 @@ }, { "cell_type": "code", - "execution_count": 180, + "execution_count": 179, "metadata": {}, "outputs": [], "source": [ @@ -2384,7 +2363,7 @@ }, { "cell_type": "code", - "execution_count": 181, + "execution_count": 180, "metadata": {}, "outputs": [], "source": [ @@ -2399,7 +2378,7 @@ }, { "cell_type": "code", - "execution_count": 182, + "execution_count": 181, "metadata": {}, "outputs": [], "source": [ @@ -2412,7 +2391,7 @@ }, { "cell_type": "code", - "execution_count": 183, + "execution_count": 182, "metadata": {}, "outputs": [], "source": [ @@ -2421,7 +2400,7 @@ }, { "cell_type": "code", - "execution_count": 184, + "execution_count": 183, "metadata": {}, "outputs": [], "source": [ @@ -2434,7 +2413,7 @@ }, { "cell_type": "code", - "execution_count": 185, + "execution_count": 184, "metadata": {}, "outputs": [], "source": [ @@ -2448,7 +2427,7 @@ }, { "cell_type": "code", - "execution_count": 186, + "execution_count": 185, "metadata": {}, "outputs": [], "source": [ @@ -2458,7 +2437,7 @@ }, { "cell_type": "code", - "execution_count": 187, + "execution_count": 186, "metadata": {}, "outputs": [], "source": [ @@ -2478,7 +2457,7 @@ }, { "cell_type": "code", - "execution_count": 188, + "execution_count": 187, "metadata": {}, "outputs": [], "source": [ @@ -2489,7 +2468,7 @@ }, { "cell_type": "code", - "execution_count": 189, + "execution_count": 188, "metadata": {}, "outputs": [], "source": [ @@ -2503,7 +2482,7 @@ }, { "cell_type": "code", - "execution_count": 190, + "execution_count": 189, "metadata": {}, "outputs": [], "source": [ @@ -2514,7 +2493,7 @@ }, { "cell_type": "code", - "execution_count": 191, + "execution_count": 190, "metadata": {}, "outputs": [], "source": [ @@ -2528,7 +2507,7 @@ }, { "cell_type": "code", - "execution_count": 192, + "execution_count": 191, "metadata": {}, "outputs": [], "source": [ @@ -2553,7 +2532,7 @@ }, { "cell_type": "code", - "execution_count": 193, + "execution_count": 192, "metadata": {}, "outputs": [], "source": [ @@ -2568,7 +2547,7 @@ }, { "cell_type": "code", - "execution_count": 194, + "execution_count": 193, "metadata": {}, "outputs": [], "source": [ @@ -2577,7 +2556,7 @@ }, { "cell_type": "code", - "execution_count": 195, + "execution_count": 194, "metadata": {}, "outputs": [], "source": [ @@ -2590,7 +2569,7 @@ }, { "cell_type": "code", - "execution_count": 196, + "execution_count": 195, "metadata": {}, "outputs": [], "source": [ @@ -2606,7 +2585,7 @@ }, { "cell_type": "code", - "execution_count": 197, + "execution_count": 196, "metadata": {}, "outputs": [], "source": [ @@ -2617,7 +2596,7 @@ }, { "cell_type": "code", - "execution_count": 198, + "execution_count": 197, "metadata": {}, "outputs": [], "source": [ @@ -2632,7 +2611,7 @@ }, { "cell_type": "code", - "execution_count": 199, + "execution_count": 198, "metadata": {}, "outputs": [], "source": [ @@ -2660,7 +2639,7 @@ }, { "cell_type": "code", - "execution_count": 200, + "execution_count": 199, "metadata": {}, "outputs": [], "source": [ @@ -2703,7 +2682,7 @@ }, { 
"cell_type": "code", - "execution_count": 201, + "execution_count": 200, "metadata": {}, "outputs": [], "source": [ @@ -2713,7 +2692,7 @@ }, { "cell_type": "code", - "execution_count": 202, + "execution_count": 201, "metadata": {}, "outputs": [], "source": [ @@ -2722,7 +2701,7 @@ }, { "cell_type": "code", - "execution_count": 203, + "execution_count": 202, "metadata": {}, "outputs": [], "source": [ @@ -2731,7 +2710,7 @@ }, { "cell_type": "code", - "execution_count": 204, + "execution_count": 203, "metadata": {}, "outputs": [], "source": [ @@ -2741,7 +2720,7 @@ }, { "cell_type": "code", - "execution_count": 205, + "execution_count": 204, "metadata": {}, "outputs": [], "source": [ @@ -2750,7 +2729,7 @@ }, { "cell_type": "code", - "execution_count": 206, + "execution_count": 205, "metadata": {}, "outputs": [], "source": [ @@ -2766,7 +2745,7 @@ }, { "cell_type": "code", - "execution_count": 207, + "execution_count": 206, "metadata": {}, "outputs": [], "source": [ @@ -2776,7 +2755,7 @@ }, { "cell_type": "code", - "execution_count": 208, + "execution_count": 207, "metadata": {}, "outputs": [], "source": [ @@ -2785,7 +2764,7 @@ }, { "cell_type": "code", - "execution_count": 209, + "execution_count": 208, "metadata": {}, "outputs": [], "source": [ @@ -2801,7 +2780,7 @@ }, { "cell_type": "code", - "execution_count": 210, + "execution_count": 209, "metadata": {}, "outputs": [], "source": [ @@ -2810,7 +2789,7 @@ }, { "cell_type": "code", - "execution_count": 211, + "execution_count": 210, "metadata": {}, "outputs": [], "source": [ @@ -2820,7 +2799,7 @@ }, { "cell_type": "code", - "execution_count": 212, + "execution_count": 211, "metadata": {}, "outputs": [], "source": [ @@ -2830,7 +2809,7 @@ }, { "cell_type": "code", - "execution_count": 213, + "execution_count": 212, "metadata": {}, "outputs": [], "source": [ @@ -2839,7 +2818,7 @@ }, { "cell_type": "code", - "execution_count": 214, + "execution_count": 213, "metadata": {}, "outputs": [], "source": [ @@ -2848,7 +2827,7 @@ }, { "cell_type": "code", - "execution_count": 215, + "execution_count": 214, "metadata": {}, "outputs": [], "source": [ @@ -2857,7 +2836,7 @@ }, { "cell_type": "code", - "execution_count": 216, + "execution_count": 215, "metadata": {}, "outputs": [], "source": [ @@ -2873,7 +2852,7 @@ }, { "cell_type": "code", - "execution_count": 217, + "execution_count": 216, "metadata": {}, "outputs": [], "source": [ @@ -2885,7 +2864,7 @@ }, { "cell_type": "code", - "execution_count": 218, + "execution_count": 217, "metadata": {}, "outputs": [], "source": [ @@ -2894,7 +2873,7 @@ }, { "cell_type": "code", - "execution_count": 219, + "execution_count": 218, "metadata": {}, "outputs": [], "source": [ @@ -2903,7 +2882,7 @@ }, { "cell_type": "code", - "execution_count": 220, + "execution_count": 219, "metadata": {}, "outputs": [], "source": [ @@ -2923,7 +2902,7 @@ }, { "cell_type": "code", - "execution_count": 221, + "execution_count": 220, "metadata": {}, "outputs": [], "source": [ @@ -2935,7 +2914,7 @@ }, { "cell_type": "code", - "execution_count": 222, + "execution_count": 221, "metadata": {}, "outputs": [], "source": [ @@ -2946,7 +2925,7 @@ }, { "cell_type": "code", - "execution_count": 223, + "execution_count": 222, "metadata": {}, "outputs": [], "source": [ @@ -2958,7 +2937,7 @@ }, { "cell_type": "code", - "execution_count": 224, + "execution_count": 223, "metadata": {}, "outputs": [], "source": [ @@ -2985,7 +2964,7 @@ }, { "cell_type": "code", - "execution_count": 225, + "execution_count": 224, "metadata": {}, "outputs": [], "source": [ @@ 
-2998,7 +2977,7 @@ }, { "cell_type": "code", - "execution_count": 226, + "execution_count": 225, "metadata": {}, "outputs": [], "source": [ @@ -3007,7 +2986,7 @@ }, { "cell_type": "code", - "execution_count": 227, + "execution_count": 226, "metadata": {}, "outputs": [], "source": [ @@ -3023,7 +3002,7 @@ }, { "cell_type": "code", - "execution_count": 228, + "execution_count": 227, "metadata": {}, "outputs": [], "source": [ @@ -3037,7 +3016,7 @@ }, { "cell_type": "code", - "execution_count": 229, + "execution_count": 228, "metadata": {}, "outputs": [], "source": [ @@ -3046,7 +3025,7 @@ }, { "cell_type": "code", - "execution_count": 230, + "execution_count": 229, "metadata": {}, "outputs": [], "source": [ @@ -3062,7 +3041,7 @@ }, { "cell_type": "code", - "execution_count": 231, + "execution_count": 230, "metadata": {}, "outputs": [], "source": [ @@ -3075,7 +3054,7 @@ }, { "cell_type": "code", - "execution_count": 232, + "execution_count": 231, "metadata": {}, "outputs": [], "source": [ @@ -3091,7 +3070,7 @@ }, { "cell_type": "code", - "execution_count": 233, + "execution_count": 232, "metadata": {}, "outputs": [], "source": [ @@ -3104,7 +3083,7 @@ }, { "cell_type": "code", - "execution_count": 234, + "execution_count": 233, "metadata": {}, "outputs": [], "source": [ @@ -3114,7 +3093,7 @@ }, { "cell_type": "code", - "execution_count": 235, + "execution_count": 234, "metadata": {}, "outputs": [], "source": [ @@ -3124,7 +3103,7 @@ }, { "cell_type": "code", - "execution_count": 236, + "execution_count": 235, "metadata": {}, "outputs": [], "source": [ @@ -3137,7 +3116,7 @@ }, { "cell_type": "code", - "execution_count": 237, + "execution_count": 236, "metadata": {}, "outputs": [], "source": [ @@ -3147,7 +3126,7 @@ }, { "cell_type": "code", - "execution_count": 238, + "execution_count": 237, "metadata": {}, "outputs": [], "source": [ @@ -3157,7 +3136,7 @@ }, { "cell_type": "code", - "execution_count": 239, + "execution_count": 238, "metadata": {}, "outputs": [], "source": [ @@ -3172,7 +3151,7 @@ }, { "cell_type": "code", - "execution_count": 240, + "execution_count": 239, "metadata": {}, "outputs": [], "source": [ @@ -3183,7 +3162,7 @@ }, { "cell_type": "code", - "execution_count": 241, + "execution_count": 240, "metadata": { "scrolled": true }, @@ -3195,12 +3174,12 @@ " x += 1\n", " return x\n", "\n", - "tf.autograph.to_code(add_10.python_function)" + "print(tf.autograph.to_code(add_10.python_function))" ] }, { "cell_type": "code", - "execution_count": 242, + "execution_count": 241, "metadata": {}, "outputs": [], "source": [ @@ -3214,7 +3193,7 @@ }, { "cell_type": "code", - "execution_count": 243, + "execution_count": 242, "metadata": {}, "outputs": [], "source": [ @@ -3238,7 +3217,7 @@ }, { "cell_type": "code", - "execution_count": 244, + "execution_count": 243, "metadata": {}, "outputs": [], "source": [ @@ -3250,7 +3229,7 @@ }, { "cell_type": "code", - "execution_count": 245, + "execution_count": 244, "metadata": {}, "outputs": [], "source": [ @@ -3262,7 +3241,7 @@ }, { "cell_type": "code", - "execution_count": 246, + "execution_count": 245, "metadata": {}, "outputs": [], "source": [ @@ -3291,7 +3270,7 @@ }, { "cell_type": "code", - "execution_count": 247, + "execution_count": 246, "metadata": {}, "outputs": [], "source": [ @@ -3302,7 +3281,7 @@ }, { "cell_type": "code", - "execution_count": 248, + "execution_count": 247, "metadata": {}, "outputs": [], "source": [ @@ -3327,7 +3306,7 @@ }, { "cell_type": "code", - "execution_count": 249, + "execution_count": 248, "metadata": {}, "outputs": [], 
"source": [ @@ -3336,7 +3315,7 @@ }, { "cell_type": "code", - "execution_count": 250, + "execution_count": 249, "metadata": {}, "outputs": [], "source": [ @@ -3354,7 +3333,7 @@ }, { "cell_type": "code", - "execution_count": 251, + "execution_count": 250, "metadata": {}, "outputs": [], "source": [ @@ -3365,7 +3344,7 @@ }, { "cell_type": "code", - "execution_count": 252, + "execution_count": 251, "metadata": {}, "outputs": [], "source": [ @@ -3374,7 +3353,7 @@ }, { "cell_type": "code", - "execution_count": 253, + "execution_count": 252, "metadata": {}, "outputs": [], "source": [ @@ -3390,7 +3369,7 @@ }, { "cell_type": "code", - "execution_count": 254, + "execution_count": 253, "metadata": {}, "outputs": [], "source": [ @@ -3408,7 +3387,7 @@ }, { "cell_type": "code", - "execution_count": 255, + "execution_count": 254, "metadata": {}, "outputs": [], "source": [ @@ -3419,7 +3398,7 @@ }, { "cell_type": "code", - "execution_count": 256, + "execution_count": 255, "metadata": {}, "outputs": [], "source": [ @@ -3428,7 +3407,7 @@ }, { "cell_type": "code", - "execution_count": 257, + "execution_count": 256, "metadata": {}, "outputs": [], "source": [ @@ -3437,7 +3416,7 @@ }, { "cell_type": "code", - "execution_count": 258, + "execution_count": 257, "metadata": {}, "outputs": [], "source": [ @@ -3462,7 +3441,7 @@ }, { "cell_type": "code", - "execution_count": 259, + "execution_count": 258, "metadata": {}, "outputs": [], "source": [ @@ -3508,7 +3487,7 @@ }, { "cell_type": "code", - "execution_count": 260, + "execution_count": 259, "metadata": {}, "outputs": [], "source": [ @@ -3519,7 +3498,7 @@ }, { "cell_type": "code", - "execution_count": 261, + "execution_count": 260, "metadata": {}, "outputs": [], "source": [ @@ -3576,7 +3555,7 @@ }, { "cell_type": "code", - "execution_count": 262, + "execution_count": 261, "metadata": {}, "outputs": [], "source": [ @@ -3630,7 +3609,7 @@ }, { "cell_type": "code", - "execution_count": 263, + "execution_count": 262, "metadata": {}, "outputs": [], "source": [ @@ -3652,7 +3631,7 @@ }, { "cell_type": "code", - "execution_count": 264, + "execution_count": 263, "metadata": {}, "outputs": [], "source": [ @@ -3691,7 +3670,7 @@ }, { "cell_type": "code", - "execution_count": 265, + "execution_count": 264, "metadata": {}, "outputs": [], "source": [ @@ -3704,7 +3683,7 @@ }, { "cell_type": "code", - "execution_count": 266, + "execution_count": 265, "metadata": {}, "outputs": [], "source": [ @@ -3715,7 +3694,7 @@ }, { "cell_type": "code", - "execution_count": 267, + "execution_count": 266, "metadata": {}, "outputs": [], "source": [ @@ -3728,7 +3707,7 @@ }, { "cell_type": "code", - "execution_count": 268, + "execution_count": 267, "metadata": {}, "outputs": [], "source": [ @@ -3743,7 +3722,7 @@ }, { "cell_type": "code", - "execution_count": 269, + "execution_count": 268, "metadata": {}, "outputs": [], "source": [ @@ -3787,7 +3766,7 @@ }, { "cell_type": "code", - "execution_count": 270, + "execution_count": 269, "metadata": {}, "outputs": [], "source": [ @@ -3798,7 +3777,7 @@ }, { "cell_type": "code", - "execution_count": 271, + "execution_count": 270, "metadata": {}, "outputs": [], "source": [ @@ -3816,7 +3795,7 @@ }, { "cell_type": "code", - "execution_count": 272, + "execution_count": 271, "metadata": {}, "outputs": [], "source": [ @@ -3826,7 +3805,7 @@ }, { "cell_type": "code", - "execution_count": 273, + "execution_count": 272, "metadata": {}, "outputs": [], "source": [ @@ -3840,7 +3819,7 @@ }, { "cell_type": "code", - "execution_count": 274, + "execution_count": 273, 
"metadata": {}, "outputs": [], "source": [ @@ -3901,7 +3880,7 @@ "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", - "version": "3.7.6" + "version": "3.7.9" } }, "nbformat": 4, diff --git a/13_loading_and_preprocessing_data.ipynb b/13_loading_and_preprocessing_data.ipynb index 144b216..0c82bb2 100644 --- a/13_loading_and_preprocessing_data.ipynb +++ b/13_loading_and_preprocessing_data.ipynb @@ -1026,7 +1026,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "**Warning**: there's currently a bug preventing `from tensorflow.train import X` so we work around it by writing `X = tf.train.X`. See https://github.com/tensorflow/tensorflow/issues/33289 for more details." + "**Warning**: in TensorFlow 2.0 and 2.1, there was a bug preventing `from tensorflow.train import X` so we work around it by writing `X = tf.train.X`. See https://github.com/tensorflow/tensorflow/issues/33289 for more details." ] }, { @@ -1294,7 +1294,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "**Warning**: there's currently a bug preventing `from tensorflow.train import X` so we work around it by writing `X = tf.train.X`. See https://github.com/tensorflow/tensorflow/issues/33289 for more details." + "**Warning**: in TensorFlow 2.0 and 2.1, there was a bug preventing `from tensorflow.train import X` so we work around it by writing `X = tf.train.X`. See https://github.com/tensorflow/tensorflow/issues/33289 for more details." ] }, { @@ -1430,7 +1430,7 @@ "source": [ "import os\n", "import tarfile\n", - "import urllib\n", + "import urllib.request\n", "\n", "DOWNLOAD_ROOT = \"https://raw.githubusercontent.com/ageron/handson-ml2/master/\"\n", "HOUSING_PATH = os.path.join(\"datasets\", \"housing\")\n", @@ -2120,7 +2120,7 @@ }, { "cell_type": "code", - "execution_count": 162, + "execution_count": 130, "metadata": {}, "outputs": [], "source": [ @@ -2267,7 +2267,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "It takes about 20 seconds to load the dataset and go through it 10 times." + "It takes about 17 seconds to load the dataset and go through it 10 times." ] }, { @@ -2306,7 +2306,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "Now it takes about 34 seconds to go through the dataset 10 times. That's much slower, essentially because the dataset is not cached in RAM, so it must be reloaded at each epoch. If you add `.cache()` just before `.repeat(10)`, you will see that this implementation will be about as fast as the previous one." + "Now it takes about 33 seconds to go through the dataset 10 times. That's much slower, essentially because the dataset is not cached in RAM, so it must be reloaded at each epoch. If you add `.cache()` just before `.repeat(10)`, you will see that this implementation will be about as fast as the previous one." ] }, { @@ -2609,7 +2609,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "We get about 75% accuracy on the validation set after just the first epoch, but after that the model makes no progress. We will do better in Chapter 16. For now the point is just to perform efficient preprocessing using `tf.data` and Keras preprocessing layers." + "We get about 73.7% accuracy on the validation set after just the first epoch, but after that the model makes no significant progress. We will do better in Chapter 16. For now the point is just to perform efficient preprocessing using `tf.data` and Keras preprocessing layers." 
] }, { @@ -2766,7 +2766,7 @@ "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", - "version": "3.7.6" + "version": "3.7.9" }, "nav_menu": { "height": "264px", diff --git a/environment.yml b/environment.yml index ed38bc8..8025023 100644 --- a/environment.yml +++ b/environment.yml @@ -3,55 +3,45 @@ channels: - conda-forge - defaults dependencies: - - graphviz - - imageio=2.6 - - ipython=7.12 - - ipywidgets=7.5 - - joblib=0.14 - - jupyter=1.0 - - matplotlib=3.1 - - nbdime=2.0 - - nltk=3.4 - - numexpr=2.7 - - numpy=1.18 - - pandas=1.0 - - pillow=7.0 - - pip - - psutil=5.7 - - py-xgboost=0.90 - - pydot=1.4 - - pyglet=1.5 - - pyopengl=3.1 - - python=3.7 - - python-graphviz - #- pyvirtualdisplay=0.2 # add if on headless server - - requests=2.22 - - scikit-image=0.16 - - scikit-learn=0.22 - - scipy=1.4 - - tqdm=4.43 - - wheel - - widgetsnbextension=3.5 + - atari_py=0.2 # used only in chapter 18 + - ftfy=5.8 # used only in chapter 16 by the transformers library + - graphviz # used only in chapter 6 for dot files + - gym=0.18 # used only in chapter 18 + - ipython=7.20 # a powerful Python shell + - ipywidgets=7.6 # optionally used only in chapter 12 for tqdm in Jupyter + - joblib=0.14 # used only in chapter 2 to save/load Scikit-Learn models + - jupyter=1.0.0 # to edit and run Jupyter notebooks + - matplotlib=3.3.4 # beautiful plots. See tutorial tools_matplotlib.ipynb + - nbdime=2.1.0 # optional tool to diff Jupyter notebooks + - nltk=3.4.4 # optionally used in chapter 3, exercise 4 + - numexpr=2.7.2 # used only in the Pandas tutorial for numerical expressions + - numpy=1.19.5 # Powerful n-dimensional arrays and numerical computing tools + - opencv=4.5.1 # used only in chapter 18 by TF Agents for image preprocessing + - pandas=1.2.2 # data analysis and manipulation tool + - pillow=8.1.0 # image manipulation library, (used by matplotlib.image.imread) + - pip # Python's package-management system + - py-xgboost=1.3.0 # used only in chapter 7 for optimized Gradient Boosting + - pyglet=1.5.15 # used only in chapter 18 to render environments + - pyopengl=3.1.5 # used only in chapter 18 to render environments + - python=3.7 # Python! 
Not using latest version as some libs lack support + - python-graphviz # used only in chapter 6 for dot files + #- pyvirtualdisplay=1.3 # used only in chapter 18 if on headless server + - requests=2.25.1 # used only in chapter 19 for REST API queries + - scikit-learn=0.24.1 # machine learning library + - scipy=1.6.0 # scientific/technical computing library + - tqdm=4.56.1 # a progress bar library + - transformers=4.3.2 # Natural Language Processing lib for TF or PyTorch + - wheel # built-package format for pip + - widgetsnbextension=3.5.1 # interactive HTML widgets for Jupyter notebooks - pip: - - atari-py==0.2.6 - - ftfy==5.7 - - gast==0.2.2 - - gym==0.17.1 - - opencv-python==4.2.0.32 - - spacy==2.2.4 - - tensorboard==2.1.1 - - tensorflow-addons==0.8.3 - - tensorflow-data-validation==0.21.5 - - tensorflow-datasets==2.1.0 - - tensorflow-estimator==2.1.0 - - tensorflow-hub==0.7.0 - - tensorflow-metadata==0.21.1 - - tensorflow-model-analysis==0.21.6 - - tensorflow-probability==0.9.0 - - tensorflow-serving-api==2.1.0 # or tensorflow-serving-api-gpu if gpu - - tensorflow-transform==0.21.2 - - tensorflow==2.1.0 # or tensorflow-gpu if gpu - - tf-agents==0.3.0 - - tfx==0.21.2 - - transformers==2.8.0 - - urlextract==0.14.0 + - tensorboard-plugin-profile==2.4.0 # profiling plugin for TensorBoard + - tensorboard==2.4.1 # TensorFlow's visualization toolkit + - tensorflow-addons==0.12.1 # used only in chapter 16 for a seq2seq impl. + - tensorflow-datasets==3.0.0 # datasets repository, ready to use + - tensorflow-hub==0.9.0 # trained ML models repository, ready to use + - tensorflow-probability==0.12.1 # Optional. Probability/Stats lib. + - tensorflow-serving-api==2.4.1 # or tensorflow-serving-api-gpu if gpu + - tensorflow==2.4.1 # Deep Learning library + - tf-agents==0.7.1 # Reinforcement Learning lib based on TensorFlow + - tfx==0.27.0 # platform to deploy production ML pipelines + - urlextract==1.2.0 # optionally used in chapter 3, exercise 4 diff --git a/requirements.txt b/requirements.txt index 440c6d7..a80ae0d 100644 --- a/requirements.txt +++ b/requirements.txt @@ -5,19 +5,19 @@ ##### Core scientific packages jupyter==1.0.0 -matplotlib==3.3.2 -numpy==1.18.5 -pandas==1.1.3 -scipy==1.5.3 +matplotlib==3.3.4 +numpy==1.19.5 +pandas==1.2.2 +scipy==1.6.0 ##### Machine Learning packages -scikit-learn==0.23.2 +scikit-learn==0.24.1 # Optional: the XGBoost library is only used in chapter 7 -xgboost==1.2.1 +xgboost==1.3.3 # Optional: the transformers library is only using in chapter 16 -transformers==3.3.1 +transformers==4.3.2 ##### TensorFlow-related packages @@ -27,39 +27,39 @@ transformers==3.3.1 # you must install CUDA, cuDNN and more: see tensorflow.org for the detailed # installation instructions. -tensorflow==2.3.1 +tensorflow==2.4.1 # Optional: the TF Serving API library is just needed for chapter 19. -tensorflow-serving-api==2.3.0 # or tensorflow-serving-api-gpu if gpu +tensorflow-serving-api==2.4.1 # or tensorflow-serving-api-gpu if gpu -tensorboard==2.3.0 -tensorboard-plugin-profile==2.3.0 -tensorflow-datasets==4.0.1 +tensorboard==2.4.1 +tensorboard-plugin-profile==2.4.0 +tensorflow-datasets==3.0.0 tensorflow-hub==0.9.0 -tensorflow-probability==0.11.1 +tensorflow-probability==0.12.1 # Optional: only used in chapter 13. # NOT AVAILABLE ON WINDOWS -tfx==0.24.1 +tfx==0.27.0 # Optional: only used in chapter 16. 
# NOT AVAILABLE ON WINDOWS -tensorflow-addons==0.11.2 +tensorflow-addons==0.12.1 ##### Reinforcement Learning library (chapter 18) # There are a few dependencies you need to install first, check out: # https://github.com/openai/gym#installing-everything -gym[atari]==0.17.3 +gym[atari]==0.18.0 # On Windows, install atari_py using: # pip install --no-index -f https://github.com/Kojoley/atari-py/releases atari_py -tf-agents==0.6.0 +tf-agents==0.7.1 ##### Image manipulation -Pillow==8.0.0 -graphviz==0.14.2 -opencv-python==4.4.0.44 -pyglet==1.4.11 +Pillow==7.2.0 +graphviz==0.16 +opencv-python==4.5.1.48 +pyglet==1.5.0 #pyvirtualdisplay # needed in chapter 16, if on a headless server # (i.e., without screen, e.g., Colab or VM) @@ -71,32 +71,23 @@ pyglet==1.4.11 joblib==0.14.1 # Easy http requests -requests==2.24.0 +requests==2.25.1 # Nice utility to diff Jupyter Notebooks. nbdime==2.1.0 # May be useful with Pandas for complex "where" clauses (e.g., Pandas # tutorial). -numexpr==2.7.1 +numexpr==2.7.2 # Optional: these libraries can be useful in the classification chapter, # exercise 4. nltk==3.5 -urlextract==1.1.0 +urlextract==1.2.0 # Optional: these libraries are only used in chapter 16 ftfy==5.8 # Optional: tqdm displays nice progress bars, ipywidgets for tqdm's notebook support -tqdm==4.50.2 -ipywidgets==7.5.1 - - - -# Specific lib versions to avoid conflicts -attrs==19.3.0 -cloudpickle==1.3.0 -dill==0.3.1.1 -gast==0.3.3 -httplib2==0.17.4 +tqdm==4.56.1 +ipywidgets==7.6.3 From cc70196eeb78bd9899af74f723b6e99ebd139dcb Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Aur=C3=A9lien=20Geron?= Date: Sun, 14 Feb 2021 15:39:03 +1300 Subject: [PATCH 26/49] Fix cities CSV url --- tools_pandas.ipynb | 308 ++++++++++++++++++++++----------------------- 1 file changed, 154 insertions(+), 154 deletions(-) diff --git a/tools_pandas.ipynb b/tools_pandas.ipynb index aab889d..7e87472 100644 --- a/tools_pandas.ipynb +++ b/tools_pandas.ipynb @@ -28,7 +28,7 @@ }, { "cell_type": "code", - "execution_count": 2, + "execution_count": 1, "metadata": {}, "outputs": [], "source": [ @@ -56,7 +56,7 @@ }, { "cell_type": "code", - "execution_count": 3, + "execution_count": 2, "metadata": {}, "outputs": [], "source": [ @@ -74,7 +74,7 @@ }, { "cell_type": "code", - "execution_count": 4, + "execution_count": 3, "metadata": {}, "outputs": [], "source": [ @@ -91,7 +91,7 @@ }, { "cell_type": "code", - "execution_count": 5, + "execution_count": 4, "metadata": {}, "outputs": [], "source": [ @@ -107,7 +107,7 @@ }, { "cell_type": "code", - "execution_count": 6, + "execution_count": 5, "metadata": {}, "outputs": [], "source": [ @@ -123,7 +123,7 @@ }, { "cell_type": "code", - "execution_count": 7, + "execution_count": 6, "metadata": {}, "outputs": [], "source": [ @@ -140,7 +140,7 @@ }, { "cell_type": "code", - "execution_count": 8, + "execution_count": 7, "metadata": {}, "outputs": [], "source": [ @@ -157,7 +157,7 @@ }, { "cell_type": "code", - "execution_count": 9, + "execution_count": 8, "metadata": {}, "outputs": [], "source": [ @@ -173,7 +173,7 @@ }, { "cell_type": "code", - "execution_count": 10, + "execution_count": 9, "metadata": {}, "outputs": [], "source": [ @@ -189,7 +189,7 @@ }, { "cell_type": "code", - "execution_count": 11, + "execution_count": 10, "metadata": {}, "outputs": [], "source": [ @@ -198,7 +198,7 @@ }, { "cell_type": "code", - "execution_count": 12, + "execution_count": 11, "metadata": {}, "outputs": [], "source": [ @@ -214,7 +214,7 @@ }, { "cell_type": "code", - "execution_count": 13, + "execution_count": 12, "metadata": 
{}, "outputs": [], "source": [ @@ -230,7 +230,7 @@ }, { "cell_type": "code", - "execution_count": 14, + "execution_count": 13, "metadata": {}, "outputs": [], "source": [ @@ -240,7 +240,7 @@ }, { "cell_type": "code", - "execution_count": 15, + "execution_count": 14, "metadata": {}, "outputs": [], "source": [ @@ -257,7 +257,7 @@ }, { "cell_type": "code", - "execution_count": 16, + "execution_count": 15, "metadata": {}, "outputs": [], "source": [ @@ -276,7 +276,7 @@ }, { "cell_type": "code", - "execution_count": 17, + "execution_count": 16, "metadata": {}, "outputs": [], "source": [ @@ -293,7 +293,7 @@ }, { "cell_type": "code", - "execution_count": 18, + "execution_count": 17, "metadata": {}, "outputs": [], "source": [ @@ -311,7 +311,7 @@ }, { "cell_type": "code", - "execution_count": 19, + "execution_count": 18, "metadata": {}, "outputs": [], "source": [ @@ -329,7 +329,7 @@ }, { "cell_type": "code", - "execution_count": 20, + "execution_count": 19, "metadata": {}, "outputs": [], "source": [ @@ -350,7 +350,7 @@ }, { "cell_type": "code", - "execution_count": 21, + "execution_count": 20, "metadata": {}, "outputs": [], "source": [ @@ -378,7 +378,7 @@ }, { "cell_type": "code", - "execution_count": 22, + "execution_count": 21, "metadata": {}, "outputs": [], "source": [ @@ -396,7 +396,7 @@ }, { "cell_type": "code", - "execution_count": 23, + "execution_count": 22, "metadata": {}, "outputs": [], "source": [ @@ -414,7 +414,7 @@ }, { "cell_type": "code", - "execution_count": 24, + "execution_count": 23, "metadata": { "scrolled": true }, @@ -452,7 +452,7 @@ }, { "cell_type": "code", - "execution_count": 25, + "execution_count": 24, "metadata": {}, "outputs": [], "source": [ @@ -469,7 +469,7 @@ }, { "cell_type": "code", - "execution_count": 26, + "execution_count": 25, "metadata": {}, "outputs": [], "source": [ @@ -486,7 +486,7 @@ }, { "cell_type": "code", - "execution_count": 27, + "execution_count": 26, "metadata": {}, "outputs": [], "source": [ @@ -506,7 +506,7 @@ }, { "cell_type": "code", - "execution_count": 28, + "execution_count": 27, "metadata": {}, "outputs": [], "source": [ @@ -523,7 +523,7 @@ }, { "cell_type": "code", - "execution_count": 29, + "execution_count": 28, "metadata": {}, "outputs": [], "source": [ @@ -539,7 +539,7 @@ }, { "cell_type": "code", - "execution_count": 30, + "execution_count": 29, "metadata": {}, "outputs": [], "source": [ @@ -556,7 +556,7 @@ }, { "cell_type": "code", - "execution_count": 31, + "execution_count": 30, "metadata": {}, "outputs": [], "source": [ @@ -573,7 +573,7 @@ }, { "cell_type": "code", - "execution_count": 32, + "execution_count": 31, "metadata": {}, "outputs": [], "source": [ @@ -591,7 +591,7 @@ }, { "cell_type": "code", - "execution_count": 33, + "execution_count": 32, "metadata": {}, "outputs": [], "source": [ @@ -608,7 +608,7 @@ }, { "cell_type": "code", - "execution_count": 34, + "execution_count": 33, "metadata": { "scrolled": true }, @@ -620,7 +620,7 @@ }, { "cell_type": "code", - "execution_count": 35, + "execution_count": 34, "metadata": {}, "outputs": [], "source": [ @@ -640,7 +640,7 @@ }, { "cell_type": "code", - "execution_count": 36, + "execution_count": 35, "metadata": {}, "outputs": [], "source": [ @@ -659,7 +659,7 @@ }, { "cell_type": "code", - "execution_count": 37, + "execution_count": 36, "metadata": {}, "outputs": [], "source": [ @@ -676,7 +676,7 @@ }, { "cell_type": "code", - "execution_count": 38, + "execution_count": 37, "metadata": {}, "outputs": [], "source": [ @@ -693,7 +693,7 @@ }, { "cell_type": "code", - "execution_count": 
39, + "execution_count": 38, "metadata": {}, "outputs": [], "source": [ @@ -713,7 +713,7 @@ }, { "cell_type": "code", - "execution_count": 40, + "execution_count": 39, "metadata": {}, "outputs": [], "source": [ @@ -730,7 +730,7 @@ }, { "cell_type": "code", - "execution_count": 41, + "execution_count": 40, "metadata": {}, "outputs": [], "source": [ @@ -747,7 +747,7 @@ }, { "cell_type": "code", - "execution_count": 42, + "execution_count": 41, "metadata": {}, "outputs": [], "source": [ @@ -763,7 +763,7 @@ }, { "cell_type": "code", - "execution_count": 43, + "execution_count": 42, "metadata": {}, "outputs": [], "source": [ @@ -779,7 +779,7 @@ }, { "cell_type": "code", - "execution_count": 44, + "execution_count": 43, "metadata": {}, "outputs": [], "source": [ @@ -795,7 +795,7 @@ }, { "cell_type": "code", - "execution_count": 45, + "execution_count": 44, "metadata": {}, "outputs": [], "source": [ @@ -811,7 +811,7 @@ }, { "cell_type": "code", - "execution_count": 46, + "execution_count": 45, "metadata": {}, "outputs": [], "source": [ @@ -821,7 +821,7 @@ }, { "cell_type": "code", - "execution_count": 47, + "execution_count": 46, "metadata": {}, "outputs": [], "source": [ @@ -838,7 +838,7 @@ }, { "cell_type": "code", - "execution_count": 48, + "execution_count": 47, "metadata": {}, "outputs": [], "source": [ @@ -855,7 +855,7 @@ }, { "cell_type": "code", - "execution_count": 49, + "execution_count": 48, "metadata": {}, "outputs": [], "source": [ @@ -871,7 +871,7 @@ }, { "cell_type": "code", - "execution_count": 50, + "execution_count": 49, "metadata": {}, "outputs": [], "source": [ @@ -894,7 +894,7 @@ }, { "cell_type": "code", - "execution_count": 51, + "execution_count": 50, "metadata": {}, "outputs": [], "source": [ @@ -928,7 +928,7 @@ }, { "cell_type": "code", - "execution_count": 52, + "execution_count": 51, "metadata": {}, "outputs": [], "source": [ @@ -944,7 +944,7 @@ }, { "cell_type": "code", - "execution_count": 53, + "execution_count": 52, "metadata": {}, "outputs": [], "source": [ @@ -960,7 +960,7 @@ }, { "cell_type": "code", - "execution_count": 54, + "execution_count": 53, "metadata": {}, "outputs": [], "source": [ @@ -981,7 +981,7 @@ }, { "cell_type": "code", - "execution_count": 55, + "execution_count": 54, "metadata": {}, "outputs": [], "source": [ @@ -1007,7 +1007,7 @@ }, { "cell_type": "code", - "execution_count": 56, + "execution_count": 55, "metadata": {}, "outputs": [], "source": [ @@ -1030,7 +1030,7 @@ }, { "cell_type": "code", - "execution_count": 57, + "execution_count": 56, "metadata": {}, "outputs": [], "source": [ @@ -1051,7 +1051,7 @@ }, { "cell_type": "code", - "execution_count": 58, + "execution_count": 57, "metadata": {}, "outputs": [], "source": [ @@ -1074,7 +1074,7 @@ }, { "cell_type": "code", - "execution_count": 59, + "execution_count": 58, "metadata": {}, "outputs": [], "source": [ @@ -1102,7 +1102,7 @@ }, { "cell_type": "code", - "execution_count": 60, + "execution_count": 59, "metadata": {}, "outputs": [], "source": [ @@ -1111,7 +1111,7 @@ }, { "cell_type": "code", - "execution_count": 61, + "execution_count": 60, "metadata": {}, "outputs": [], "source": [ @@ -1128,7 +1128,7 @@ }, { "cell_type": "code", - "execution_count": 62, + "execution_count": 61, "metadata": {}, "outputs": [], "source": [ @@ -1144,7 +1144,7 @@ }, { "cell_type": "code", - "execution_count": 63, + "execution_count": 62, "metadata": {}, "outputs": [], "source": [ @@ -1162,7 +1162,7 @@ }, { "cell_type": "code", - "execution_count": 64, + "execution_count": 63, "metadata": {}, "outputs": [], 
"source": [ @@ -1180,7 +1180,7 @@ }, { "cell_type": "code", - "execution_count": 65, + "execution_count": 64, "metadata": {}, "outputs": [], "source": [ @@ -1199,7 +1199,7 @@ }, { "cell_type": "code", - "execution_count": 66, + "execution_count": 65, "metadata": {}, "outputs": [], "source": [ @@ -1216,7 +1216,7 @@ }, { "cell_type": "code", - "execution_count": 67, + "execution_count": 66, "metadata": {}, "outputs": [], "source": [ @@ -1233,7 +1233,7 @@ }, { "cell_type": "code", - "execution_count": 68, + "execution_count": 67, "metadata": { "scrolled": true }, @@ -1261,7 +1261,7 @@ }, { "cell_type": "code", - "execution_count": 69, + "execution_count": 68, "metadata": {}, "outputs": [], "source": [ @@ -1277,7 +1277,7 @@ }, { "cell_type": "code", - "execution_count": 70, + "execution_count": 69, "metadata": {}, "outputs": [], "source": [ @@ -1293,7 +1293,7 @@ }, { "cell_type": "code", - "execution_count": 71, + "execution_count": 70, "metadata": {}, "outputs": [], "source": [ @@ -1309,7 +1309,7 @@ }, { "cell_type": "code", - "execution_count": 72, + "execution_count": 71, "metadata": {}, "outputs": [], "source": [ @@ -1325,7 +1325,7 @@ }, { "cell_type": "code", - "execution_count": 73, + "execution_count": 72, "metadata": {}, "outputs": [], "source": [ @@ -1341,7 +1341,7 @@ }, { "cell_type": "code", - "execution_count": 74, + "execution_count": 73, "metadata": {}, "outputs": [], "source": [ @@ -1358,7 +1358,7 @@ }, { "cell_type": "code", - "execution_count": 75, + "execution_count": 74, "metadata": {}, "outputs": [], "source": [ @@ -1367,7 +1367,7 @@ }, { "cell_type": "code", - "execution_count": 76, + "execution_count": 75, "metadata": {}, "outputs": [], "source": [ @@ -1381,7 +1381,7 @@ }, { "cell_type": "code", - "execution_count": 77, + "execution_count": 76, "metadata": {}, "outputs": [], "source": [ @@ -1397,7 +1397,7 @@ }, { "cell_type": "code", - "execution_count": 78, + "execution_count": 77, "metadata": {}, "outputs": [], "source": [ @@ -1414,7 +1414,7 @@ }, { "cell_type": "code", - "execution_count": 79, + "execution_count": 78, "metadata": {}, "outputs": [], "source": [ @@ -1432,7 +1432,7 @@ }, { "cell_type": "code", - "execution_count": 80, + "execution_count": 79, "metadata": {}, "outputs": [], "source": [ @@ -1451,7 +1451,7 @@ }, { "cell_type": "code", - "execution_count": 81, + "execution_count": 80, "metadata": {}, "outputs": [], "source": [ @@ -1473,7 +1473,7 @@ }, { "cell_type": "code", - "execution_count": 82, + "execution_count": 81, "metadata": {}, "outputs": [], "source": [ @@ -1490,7 +1490,7 @@ }, { "cell_type": "code", - "execution_count": 83, + "execution_count": 82, "metadata": {}, "outputs": [], "source": [ @@ -1512,7 +1512,7 @@ }, { "cell_type": "code", - "execution_count": 84, + "execution_count": 83, "metadata": {}, "outputs": [], "source": [ @@ -1539,7 +1539,7 @@ }, { "cell_type": "code", - "execution_count": 85, + "execution_count": 84, "metadata": {}, "outputs": [], "source": [ @@ -1555,7 +1555,7 @@ }, { "cell_type": "code", - "execution_count": 86, + "execution_count": 85, "metadata": {}, "outputs": [], "source": [ @@ -1572,7 +1572,7 @@ }, { "cell_type": "code", - "execution_count": 87, + "execution_count": 86, "metadata": {}, "outputs": [], "source": [ @@ -1591,7 +1591,7 @@ }, { "cell_type": "code", - "execution_count": 88, + "execution_count": 87, "metadata": {}, "outputs": [], "source": [ @@ -1608,7 +1608,7 @@ }, { "cell_type": "code", - "execution_count": 89, + "execution_count": 88, "metadata": {}, "outputs": [], "source": [ @@ -1624,7 +1624,7 @@ }, { 
"cell_type": "code", - "execution_count": 90, + "execution_count": 89, "metadata": {}, "outputs": [], "source": [ @@ -1641,7 +1641,7 @@ }, { "cell_type": "code", - "execution_count": 91, + "execution_count": 90, "metadata": {}, "outputs": [], "source": [ @@ -1661,7 +1661,7 @@ }, { "cell_type": "code", - "execution_count": 92, + "execution_count": 91, "metadata": {}, "outputs": [], "source": [ @@ -1678,7 +1678,7 @@ }, { "cell_type": "code", - "execution_count": 93, + "execution_count": 92, "metadata": { "scrolled": true }, @@ -1705,7 +1705,7 @@ }, { "cell_type": "code", - "execution_count": 94, + "execution_count": 93, "metadata": {}, "outputs": [], "source": [ @@ -1723,7 +1723,7 @@ }, { "cell_type": "code", - "execution_count": 95, + "execution_count": 94, "metadata": {}, "outputs": [], "source": [ @@ -1739,7 +1739,7 @@ }, { "cell_type": "code", - "execution_count": 96, + "execution_count": 95, "metadata": {}, "outputs": [], "source": [ @@ -1755,7 +1755,7 @@ }, { "cell_type": "code", - "execution_count": 97, + "execution_count": 96, "metadata": {}, "outputs": [], "source": [ @@ -1771,7 +1771,7 @@ }, { "cell_type": "code", - "execution_count": 98, + "execution_count": 97, "metadata": {}, "outputs": [], "source": [ @@ -1787,7 +1787,7 @@ }, { "cell_type": "code", - "execution_count": 99, + "execution_count": 98, "metadata": {}, "outputs": [], "source": [ @@ -1803,7 +1803,7 @@ }, { "cell_type": "code", - "execution_count": 100, + "execution_count": 99, "metadata": {}, "outputs": [], "source": [ @@ -1819,7 +1819,7 @@ }, { "cell_type": "code", - "execution_count": 101, + "execution_count": 100, "metadata": {}, "outputs": [], "source": [ @@ -1835,7 +1835,7 @@ }, { "cell_type": "code", - "execution_count": 102, + "execution_count": 101, "metadata": {}, "outputs": [], "source": [ @@ -1851,7 +1851,7 @@ }, { "cell_type": "code", - "execution_count": 103, + "execution_count": 102, "metadata": {}, "outputs": [], "source": [ @@ -1867,7 +1867,7 @@ }, { "cell_type": "code", - "execution_count": 104, + "execution_count": 103, "metadata": { "scrolled": true }, @@ -1886,7 +1886,7 @@ }, { "cell_type": "code", - "execution_count": 105, + "execution_count": 104, "metadata": {}, "outputs": [], "source": [ @@ -1897,7 +1897,7 @@ }, { "cell_type": "code", - "execution_count": 106, + "execution_count": 105, "metadata": { "scrolled": true }, @@ -1920,7 +1920,7 @@ }, { "cell_type": "code", - "execution_count": 107, + "execution_count": 106, "metadata": { "scrolled": true }, @@ -1938,7 +1938,7 @@ }, { "cell_type": "code", - "execution_count": 108, + "execution_count": 107, "metadata": {}, "outputs": [], "source": [ @@ -1959,7 +1959,7 @@ }, { "cell_type": "code", - "execution_count": 109, + "execution_count": 108, "metadata": {}, "outputs": [], "source": [ @@ -1975,7 +1975,7 @@ }, { "cell_type": "code", - "execution_count": 110, + "execution_count": 109, "metadata": {}, "outputs": [], "source": [ @@ -1991,7 +1991,7 @@ }, { "cell_type": "code", - "execution_count": 111, + "execution_count": 110, "metadata": {}, "outputs": [], "source": [ @@ -2011,7 +2011,7 @@ }, { "cell_type": "code", - "execution_count": 112, + "execution_count": 111, "metadata": {}, "outputs": [], "source": [ @@ -2027,7 +2027,7 @@ }, { "cell_type": "code", - "execution_count": 113, + "execution_count": 112, "metadata": { "scrolled": true }, @@ -2047,7 +2047,7 @@ }, { "cell_type": "code", - "execution_count": 114, + "execution_count": 113, "metadata": {}, "outputs": [], "source": [ @@ -2064,7 +2064,7 @@ }, { "cell_type": "code", - "execution_count": 
115, + "execution_count": 114, "metadata": {}, "outputs": [], "source": [ @@ -2084,7 +2084,7 @@ }, { "cell_type": "code", - "execution_count": 116, + "execution_count": 115, "metadata": { "scrolled": true }, @@ -2103,7 +2103,7 @@ }, { "cell_type": "code", - "execution_count": 117, + "execution_count": 116, "metadata": {}, "outputs": [], "source": [ @@ -2120,7 +2120,7 @@ }, { "cell_type": "code", - "execution_count": 118, + "execution_count": 117, "metadata": {}, "outputs": [], "source": [ @@ -2144,7 +2144,7 @@ }, { "cell_type": "code", - "execution_count": 119, + "execution_count": 118, "metadata": {}, "outputs": [], "source": [ @@ -2153,7 +2153,7 @@ }, { "cell_type": "code", - "execution_count": 120, + "execution_count": 119, "metadata": {}, "outputs": [], "source": [ @@ -2172,7 +2172,7 @@ }, { "cell_type": "code", - "execution_count": 121, + "execution_count": 120, "metadata": {}, "outputs": [], "source": [ @@ -2188,7 +2188,7 @@ }, { "cell_type": "code", - "execution_count": 122, + "execution_count": 121, "metadata": {}, "outputs": [], "source": [ @@ -2204,7 +2204,7 @@ }, { "cell_type": "code", - "execution_count": 123, + "execution_count": 122, "metadata": {}, "outputs": [], "source": [ @@ -2220,7 +2220,7 @@ }, { "cell_type": "code", - "execution_count": 124, + "execution_count": 123, "metadata": {}, "outputs": [], "source": [ @@ -2237,7 +2237,7 @@ }, { "cell_type": "code", - "execution_count": 125, + "execution_count": 124, "metadata": {}, "outputs": [], "source": [ @@ -2257,7 +2257,7 @@ }, { "cell_type": "code", - "execution_count": 126, + "execution_count": 125, "metadata": {}, "outputs": [], "source": [ @@ -2273,7 +2273,7 @@ }, { "cell_type": "code", - "execution_count": 127, + "execution_count": 126, "metadata": {}, "outputs": [], "source": [ @@ -2289,7 +2289,7 @@ }, { "cell_type": "code", - "execution_count": 128, + "execution_count": 127, "metadata": {}, "outputs": [], "source": [ @@ -2311,7 +2311,7 @@ }, { "cell_type": "code", - "execution_count": 129, + "execution_count": 128, "metadata": {}, "outputs": [], "source": [ @@ -2328,7 +2328,7 @@ }, { "cell_type": "code", - "execution_count": 130, + "execution_count": 129, "metadata": {}, "outputs": [], "source": [ @@ -2350,7 +2350,7 @@ }, { "cell_type": "code", - "execution_count": 131, + "execution_count": 130, "metadata": {}, "outputs": [], "source": [ @@ -2368,7 +2368,7 @@ }, { "cell_type": "code", - "execution_count": 132, + "execution_count": 131, "metadata": {}, "outputs": [], "source": [ @@ -2390,7 +2390,7 @@ }, { "cell_type": "code", - "execution_count": 133, + "execution_count": 132, "metadata": {}, "outputs": [], "source": [ @@ -2410,7 +2410,7 @@ }, { "cell_type": "code", - "execution_count": 134, + "execution_count": 133, "metadata": {}, "outputs": [], "source": [ @@ -2422,18 +2422,18 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "As you might guess, there are similar `read_json`, `read_html`, `read_excel` functions as well. We can also read data straight from the Internet. For example, let's load all U.S. cities from [simplemaps.com](http://simplemaps.com/):" + "As you might guess, there are similar `read_json`, `read_html`, `read_excel` functions as well. We can also read data straight from the Internet. For example, let's load the top 1,000 U.S. 
cities from github:" ] }, { "cell_type": "code", - "execution_count": 135, + "execution_count": 134, "metadata": {}, "outputs": [], "source": [ "us_cities = None\n", "try:\n", - " csv_url = \"http://simplemaps.com/files/cities.csv\"\n", + " csv_url = \"https://raw.githubusercontent.com/plotly/datasets/master/us-cities-top-1k.csv\"\n", " us_cities = pd.read_csv(csv_url, index_col=0)\n", " us_cities = us_cities.head()\n", "except IOError as e:\n", @@ -2460,7 +2460,7 @@ }, { "cell_type": "code", - "execution_count": 136, + "execution_count": 135, "metadata": {}, "outputs": [], "source": [ @@ -2477,7 +2477,7 @@ }, { "cell_type": "code", - "execution_count": 137, + "execution_count": 136, "metadata": {}, "outputs": [], "source": [ @@ -2500,7 +2500,7 @@ }, { "cell_type": "code", - "execution_count": 138, + "execution_count": 137, "metadata": {}, "outputs": [], "source": [ @@ -2518,7 +2518,7 @@ }, { "cell_type": "code", - "execution_count": 139, + "execution_count": 138, "metadata": {}, "outputs": [], "source": [ @@ -2535,7 +2535,7 @@ }, { "cell_type": "code", - "execution_count": 140, + "execution_count": 139, "metadata": {}, "outputs": [], "source": [ @@ -2551,7 +2551,7 @@ }, { "cell_type": "code", - "execution_count": 141, + "execution_count": 140, "metadata": {}, "outputs": [], "source": [ @@ -2570,7 +2570,7 @@ }, { "cell_type": "code", - "execution_count": 142, + "execution_count": 141, "metadata": {}, "outputs": [], "source": [ @@ -2587,7 +2587,7 @@ }, { "cell_type": "code", - "execution_count": 143, + "execution_count": 142, "metadata": {}, "outputs": [], "source": [ @@ -2603,7 +2603,7 @@ }, { "cell_type": "code", - "execution_count": 144, + "execution_count": 143, "metadata": {}, "outputs": [], "source": [ @@ -2619,7 +2619,7 @@ }, { "cell_type": "code", - "execution_count": 145, + "execution_count": 144, "metadata": {}, "outputs": [], "source": [ @@ -2635,7 +2635,7 @@ }, { "cell_type": "code", - "execution_count": 146, + "execution_count": 145, "metadata": { "scrolled": true }, @@ -2653,7 +2653,7 @@ }, { "cell_type": "code", - "execution_count": 147, + "execution_count": 146, "metadata": { "scrolled": true }, @@ -2678,7 +2678,7 @@ }, { "cell_type": "code", - "execution_count": 148, + "execution_count": 147, "metadata": {}, "outputs": [], "source": [ @@ -2702,7 +2702,7 @@ }, { "cell_type": "code", - "execution_count": 149, + "execution_count": 148, "metadata": {}, "outputs": [], "source": [ @@ -2720,7 +2720,7 @@ }, { "cell_type": "code", - "execution_count": 150, + "execution_count": 149, "metadata": {}, "outputs": [], "source": [ @@ -2737,7 +2737,7 @@ }, { "cell_type": "code", - "execution_count": 151, + "execution_count": 150, "metadata": {}, "outputs": [], "source": [ @@ -2754,7 +2754,7 @@ }, { "cell_type": "code", - "execution_count": 152, + "execution_count": 151, "metadata": {}, "outputs": [], "source": [ @@ -2793,7 +2793,7 @@ "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", - "version": "3.7.6" + "version": "3.7.9" }, "toc": { "toc_cell": false, From a1876057106815608d05dd7cb4e0f84e23caef11 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Aur=C3=A9lien=20Geron?= Date: Mon, 15 Feb 2021 09:56:26 +1300 Subject: [PATCH 27/49] Update libraries to latest version, including TensorFlow 2.4.1 and Scikit-Learn 0.24.1 --- environment-windows.yml | 90 ++++++++++++++++++----------------------- environment.yml | 36 ++++++++--------- 2 files changed, 58 insertions(+), 68 deletions(-) diff --git a/environment-windows.yml b/environment-windows.yml index 7f309c0..4fc46dc 100644 
--- a/environment-windows.yml +++ b/environment-windows.yml @@ -3,54 +3,44 @@ channels: - conda-forge - defaults dependencies: - - graphviz - - imageio=2.6.1 - - ipython=7.10.1 - - ipywidgets=7.5.1 - - joblib=0.14.0 - - jupyter=1.0.0 - - matplotlib=3.1.2 - - nbdime=1.1.0 - - nltk=3.4.5 - - numexpr=2.7.0 - - numpy=1.17.3 - - pandas=0.25.3 - - pillow=6.2.1 - - pip - - py-xgboost=0.90 - - pydot=1.4.1 - - pyopengl=3.1.3b2 - - python=3.7 - - python-graphviz - - requests=2.22.0 - - scikit-image=0.16.2 - - scikit-learn=0.22 - - scipy=1.3.1 - - tqdm=4.40.0 - - wheel - - widgetsnbextension=3.5.1 + - atari_py=0.2 # used only in chapter 18 + - ftfy=5.8 # used only in chapter 16 by the transformers library + - graphviz # used only in chapter 6 for dot files + - gym=0.18 # used only in chapter 18 + - ipython=7.20 # a powerful Python shell + - ipywidgets=7.6 # optionally used only in chapter 12 for tqdm in Jupyter + - joblib=0.14 # used only in chapter 2 to save/load Scikit-Learn models + - jupyter=1.0 # to edit and run Jupyter notebooks + - matplotlib=3.3 # beautiful plots. See tutorial tools_matplotlib.ipynb + - nbdime=2.1 # optional tool to diff Jupyter notebooks + - nltk=3.4 # optionally used in chapter 3, exercise 4 + - numexpr=2.7 # used only in the Pandas tutorial for numerical expressions + - numpy=1.19 # Powerful n-dimensional arrays and numerical computing tools + - opencv=4.5 # used only in chapter 18 by TF Agents for image preprocessing + - pandas=1.2 # data analysis and manipulation tool + - pillow=8.1 # image manipulation library, (used by matplotlib.image.imread) + - pip # Python's package-management system + - py-xgboost=0.90 # used only in chapter 7 for optimized Gradient Boosting + - pyglet=1.5 # used only in chapter 18 to render environments + - pyopengl=3.1 # used only in chapter 18 to render environments + - python=3.7 # Python! Not using latest version as some libs lack support + - python-graphviz # used only in chapter 6 for dot files + - requests=2.25 # used only in chapter 19 for REST API queries + - scikit-learn=0.24 # machine learning library + - scipy=1.6 # scientific/technical computing library + - tqdm=4.56 # a progress bar library + - transformers=4.3 # Natural Language Processing lib for TF or PyTorch + - wheel # built-package format for pip + - widgetsnbextension=3.5 # interactive HTML widgets for Jupyter notebooks - pip: - #- atari-py==0.2.6 # NOT ON WINDOWS YET - - ftfy==5.7 - - gym==0.15.4 - - opencv-python==4.1.2.30 - - psutil==5.6.7 - - pyglet==1.3.2 - - spacy==2.2.4 - - tensorboard==2.1.1 - #- tensorflow-addons==0.8.3 # NOT ON WINDOWS YET - #- tensorflow-data-validation==0.21.5 # NOT ON WINDOWS YET - - tensorflow-datasets==2.1.0 - - tensorflow-estimator==2.1.0 - - tensorflow-hub==0.7.0 - #- tensorflow-metadata==0.21.1 # NOT ON WINDOWS YET - #- tensorflow-model-analysis==0.21.6 # NOT ON WINDOWS YET - - tensorflow-probability==0.9.0 - - tensorflow-serving-api==2.1.0 # or tensorflow-serving-api-gpu if gpu - #- tensorflow-transform==0.21.2 # NOT ON WINDOWS YET - - tensorflow==2.1.0 # or tensorflow-gpu if gpu - - tf-agents==0.3.0 - #- tfx==0.21.2 # NOT ON WINDOWS YET - - transformers==2.8.0 - - urlextract==0.13.0 - #- pyvirtualdisplay # add if on headless server + - tensorboard-plugin-profile==2.4.0 # profiling plugin for TensorBoard + - tensorboard==2.4.1 # TensorFlow's visualization toolkit + - tensorflow-addons==0.12.1 # used only in chapter 16 for a seq2seq impl. 
+ - tensorflow-datasets==3.0.0 # datasets repository, ready to use + - tensorflow-hub==0.9.0 # trained ML models repository, ready to use + - tensorflow-probability==0.12.1 # Optional. Probability/Stats lib. + - tensorflow-serving-api==2.4.1 # or tensorflow-serving-api-gpu if gpu + - tensorflow==2.4.1 # Deep Learning library + - tf-agents==0.7.1 # Reinforcement Learning lib based on TensorFlow + - tfx==0.27.0 # platform to deploy production ML pipelines + - urlextract==1.2.0 # optionally used in chapter 3, exercise 4 diff --git a/environment.yml b/environment.yml index 8025023..c757a05 100644 --- a/environment.yml +++ b/environment.yml @@ -10,29 +10,29 @@ dependencies: - ipython=7.20 # a powerful Python shell - ipywidgets=7.6 # optionally used only in chapter 12 for tqdm in Jupyter - joblib=0.14 # used only in chapter 2 to save/load Scikit-Learn models - - jupyter=1.0.0 # to edit and run Jupyter notebooks - - matplotlib=3.3.4 # beautiful plots. See tutorial tools_matplotlib.ipynb - - nbdime=2.1.0 # optional tool to diff Jupyter notebooks - - nltk=3.4.4 # optionally used in chapter 3, exercise 4 - - numexpr=2.7.2 # used only in the Pandas tutorial for numerical expressions - - numpy=1.19.5 # Powerful n-dimensional arrays and numerical computing tools - - opencv=4.5.1 # used only in chapter 18 by TF Agents for image preprocessing - - pandas=1.2.2 # data analysis and manipulation tool - - pillow=8.1.0 # image manipulation library, (used by matplotlib.image.imread) + - jupyter=1.0 # to edit and run Jupyter notebooks + - matplotlib=3.3 # beautiful plots. See tutorial tools_matplotlib.ipynb + - nbdime=2.1 # optional tool to diff Jupyter notebooks + - nltk=3.4 # optionally used in chapter 3, exercise 4 + - numexpr=2.7 # used only in the Pandas tutorial for numerical expressions + - numpy=1.19 # Powerful n-dimensional arrays and numerical computing tools + - opencv=4.5 # used only in chapter 18 by TF Agents for image preprocessing + - pandas=1.2 # data analysis and manipulation tool + - pillow=8.1 # image manipulation library, (used by matplotlib.image.imread) - pip # Python's package-management system - - py-xgboost=1.3.0 # used only in chapter 7 for optimized Gradient Boosting - - pyglet=1.5.15 # used only in chapter 18 to render environments - - pyopengl=3.1.5 # used only in chapter 18 to render environments + - py-xgboost=1.3 # used only in chapter 7 for optimized Gradient Boosting + - pyglet=1.5 # used only in chapter 18 to render environments + - pyopengl=3.1 # used only in chapter 18 to render environments - python=3.7 # Python! 
Not using latest version as some libs lack support - python-graphviz # used only in chapter 6 for dot files #- pyvirtualdisplay=1.3 # used only in chapter 18 if on headless server - - requests=2.25.1 # used only in chapter 19 for REST API queries - - scikit-learn=0.24.1 # machine learning library - - scipy=1.6.0 # scientific/technical computing library - - tqdm=4.56.1 # a progress bar library - - transformers=4.3.2 # Natural Language Processing lib for TF or PyTorch + - requests=2.25 # used only in chapter 19 for REST API queries + - scikit-learn=0.24 # machine learning library + - scipy=1.6 # scientific/technical computing library + - tqdm=4.56 # a progress bar library + - transformers=4.3 # Natural Language Processing lib for TF or PyTorch - wheel # built-package format for pip - - widgetsnbextension=3.5.1 # interactive HTML widgets for Jupyter notebooks + - widgetsnbextension=3.5 # interactive HTML widgets for Jupyter notebooks - pip: - tensorboard-plugin-profile==2.4.0 # profiling plugin for TensorBoard - tensorboard==2.4.1 # TensorFlow's visualization toolkit From 44e1b9b9fff3b813d343039fb66fde5e844f3e3e Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Aur=C3=A9lien=20Geron?= Date: Mon, 15 Feb 2021 20:31:59 +1300 Subject: [PATCH 28/49] Update installation instructions and have just one environment.yml for all platforms --- INSTALL.md | 12 +---------- README.md | 10 ++------- environment-windows.yml | 46 ----------------------------------------- environment.yml | 2 +- 4 files changed, 4 insertions(+), 66 deletions(-) delete mode 100644 environment-windows.yml diff --git a/INSTALL.md b/INSTALL.md index a1ce66b..efb9d48 100644 --- a/INSTALL.md +++ b/INSTALL.md @@ -24,25 +24,15 @@ Once Anaconda or miniconda is installed, then run the following command to updat ## Install the GPU Driver and Libraries If you have a TensorFlow-compatible GPU card (NVidia card with Compute Capability ≥ 3.5), and you want TensorFlow to use it, then you should download the latest driver for your card from [nvidia.com](https://www.nvidia.com/Download/index.aspx?lang=en-us) and install it. You will also need NVidia's CUDA and cuDNN libraries, but the good news is that they will be installed automatically when you install the tensorflow-gpu package from Anaconda. However, if you don't use Anaconda, you will have to install them manually. If you hit any roadblock, see TensorFlow's [GPU installation instructions](https://tensorflow.org/install/gpu) for more details. -If you want to use a GPU then you should also edit environment.yml (or environment-windows.yml if you're on Windows), located at the root of the handson-ml2 project, replace tensorflow=2.0.0 with tensorflow-gpu=2.0.0, and replace tensorflow-serving-api==2.0.0 with tensorflow-serving-api-gpu==2.0.0. This will not be needed anymore when TensorFlow 2.1 is released. - ## Create the tf2 Environment Next, make sure you're in the handson-ml2 directory and run the following command. It will create a new `conda` environment containing every library you will need to run all the notebooks (by default, the environment will be named `tf2`, but you can choose another name using the `-n` option): - $ conda env create -f environment.yml # or environment-windows.yml on Windows + $ conda env create -f environment.yml Next, activate the new environment: $ conda activate tf2 -## Windows -If you're on Windows, and you want to go through chapter 18 on Reinforcement Learning, then you will also need to run the following command. 
It installs a Windows-compatible fork of the atari-py library. - - $ pip install --no-index -f https://github.com/Kojoley/atari-py/releases atari_py - - -> **Warning**: TensorFlow Transform (used in chapter 13) and TensorFlow-AddOns (used in chapter 16) are not yet available on Windows, but the TensorFlow team is working on it. - ## Start Jupyter You're almost there! You just need to register the `tf2` conda environment to Jupyter. The notebooks in this project will default to the environment named `python3`, so it's best to register this environment using the name `python3` (if you prefer to use another name, you will have to select it in the "Kernel > Change kernel..." menu in Jupyter every time you open a notebook): diff --git a/README.md b/README.md index aad0709..ed97aea 100644 --- a/README.md +++ b/README.md @@ -38,25 +38,19 @@ Read the [Docker instructions](https://github.com/ageron/handson-ml2/tree/master ### Want to install this project on your own machine? -Start by installing [Anaconda](https://www.anaconda.com/distribution/) (or [Miniconda](https://docs.conda.io/en/latest/miniconda.html)), [git](https://git-scm.com/downloads), and if you have a TensorFlow-compatible GPU, install the [GPU driver](https://www.nvidia.com/Download/index.aspx). +Start by installing [Anaconda](https://www.anaconda.com/distribution/) (or [Miniconda](https://docs.conda.io/en/latest/miniconda.html)), [git](https://git-scm.com/downloads), and if you have a TensorFlow-compatible GPU, install the [GPU driver](https://www.nvidia.com/Download/index.aspx), as well as the appropriate version of CUDA and cuDNN (see TensorFlow's documentation for more details). Next, clone this project by opening a terminal and typing the following commands (do not type the first `$` signs on each line, they just indicate that these are terminal commands): $ git clone https://github.com/ageron/handson-ml2.git $ cd handson-ml2 -If you want to use a GPU, then edit `environment.yml` (or `environment-windows.yml` on Windows) and replace `tensorflow=2.0.0` with `tensorflow-gpu=2.0.0`. Also replace `tensorflow-serving-api==2.0.0` with `tensorflow-serving-api-gpu==2.0.0`. - Next, run the following commands: $ conda env create -f environment.yml # or environment-windows.yml on Windows $ conda activate tf2 $ python -m ipykernel install --user --name=python3 -Then if you're on Windows, run the following command: - - $ pip install --no-index -f https://github.com/Kojoley/atari-py/releases atari_py - Finally, start Jupyter: $ jupyter notebook @@ -64,4 +58,4 @@ Finally, start Jupyter: If you need further instructions, read the [detailed installation instructions](INSTALL.md). ## Contributors -I would like to thank everyone who contributed to this project, either by providing useful feedback, filing issues or submitting Pull Requests. Special thanks go to Haesun Park who helped on some of the exercise solutions, and to Steven Bunkley and Ziembla who created the `docker` directory. Thanks as well to github user SuperYorio for helping out on the coding exercise solutions. \ No newline at end of file +I would like to thank everyone who contributed to this project, either by providing useful feedback, filing issues or submitting Pull Requests. Special thanks go to Haesun Park who helped on some of the exercise solutions, and to Steven Bunkley and Ziembla who created the `docker` directory. Thanks as well to github user SuperYorio for helping out on the coding exercise solutions. 
diff --git a/environment-windows.yml b/environment-windows.yml deleted file mode 100644 index 4fc46dc..0000000 --- a/environment-windows.yml +++ /dev/null @@ -1,46 +0,0 @@ -name: tf2 -channels: - - conda-forge - - defaults -dependencies: - - atari_py=0.2 # used only in chapter 18 - - ftfy=5.8 # used only in chapter 16 by the transformers library - - graphviz # used only in chapter 6 for dot files - - gym=0.18 # used only in chapter 18 - - ipython=7.20 # a powerful Python shell - - ipywidgets=7.6 # optionally used only in chapter 12 for tqdm in Jupyter - - joblib=0.14 # used only in chapter 2 to save/load Scikit-Learn models - - jupyter=1.0 # to edit and run Jupyter notebooks - - matplotlib=3.3 # beautiful plots. See tutorial tools_matplotlib.ipynb - - nbdime=2.1 # optional tool to diff Jupyter notebooks - - nltk=3.4 # optionally used in chapter 3, exercise 4 - - numexpr=2.7 # used only in the Pandas tutorial for numerical expressions - - numpy=1.19 # Powerful n-dimensional arrays and numerical computing tools - - opencv=4.5 # used only in chapter 18 by TF Agents for image preprocessing - - pandas=1.2 # data analysis and manipulation tool - - pillow=8.1 # image manipulation library, (used by matplotlib.image.imread) - - pip # Python's package-management system - - py-xgboost=0.90 # used only in chapter 7 for optimized Gradient Boosting - - pyglet=1.5 # used only in chapter 18 to render environments - - pyopengl=3.1 # used only in chapter 18 to render environments - - python=3.7 # Python! Not using latest version as some libs lack support - - python-graphviz # used only in chapter 6 for dot files - - requests=2.25 # used only in chapter 19 for REST API queries - - scikit-learn=0.24 # machine learning library - - scipy=1.6 # scientific/technical computing library - - tqdm=4.56 # a progress bar library - - transformers=4.3 # Natural Language Processing lib for TF or PyTorch - - wheel # built-package format for pip - - widgetsnbextension=3.5 # interactive HTML widgets for Jupyter notebooks - - pip: - - tensorboard-plugin-profile==2.4.0 # profiling plugin for TensorBoard - - tensorboard==2.4.1 # TensorFlow's visualization toolkit - - tensorflow-addons==0.12.1 # used only in chapter 16 for a seq2seq impl. - - tensorflow-datasets==3.0.0 # datasets repository, ready to use - - tensorflow-hub==0.9.0 # trained ML models repository, ready to use - - tensorflow-probability==0.12.1 # Optional. Probability/Stats lib. - - tensorflow-serving-api==2.4.1 # or tensorflow-serving-api-gpu if gpu - - tensorflow==2.4.1 # Deep Learning library - - tf-agents==0.7.1 # Reinforcement Learning lib based on TensorFlow - - tfx==0.27.0 # platform to deploy production ML pipelines - - urlextract==1.2.0 # optionally used in chapter 3, exercise 4 diff --git a/environment.yml b/environment.yml index c757a05..9e21acf 100644 --- a/environment.yml +++ b/environment.yml @@ -20,7 +20,7 @@ dependencies: - pandas=1.2 # data analysis and manipulation tool - pillow=8.1 # image manipulation library, (used by matplotlib.image.imread) - pip # Python's package-management system - - py-xgboost=1.3 # used only in chapter 7 for optimized Gradient Boosting + - py-xgboost=0.90 # used only in chapter 7 for optimized Gradient Boosting - pyglet=1.5 # used only in chapter 18 to render environments - pyopengl=3.1 # used only in chapter 18 to render environments - python=3.7 # Python! 
Not using latest version as some libs lack support From 374d9b279e6c951224e9566c1400a8b46f48a0f0 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Aur=C3=A9lien=20Geron?= Date: Mon, 15 Feb 2021 21:03:21 +1300 Subject: [PATCH 29/49] Update python version --- extra_autodiff.ipynb | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/extra_autodiff.ipynb b/extra_autodiff.ipynb index 7296613..e67a164 100644 --- a/extra_autodiff.ipynb +++ b/extra_autodiff.ipynb @@ -898,7 +898,7 @@ "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", - "version": "3.7.6" + "version": "3.7.9" }, "nav_menu": { "height": "603px", From 198227f5869ac106fff1b8e333ee104c2b82b50d Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Aur=C3=A9lien=20Geron?= Date: Mon, 15 Feb 2021 22:28:28 +1300 Subject: [PATCH 30/49] Fix installation instructions --- README.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/README.md b/README.md index ed97aea..5a0f98b 100644 --- a/README.md +++ b/README.md @@ -47,7 +47,7 @@ Next, clone this project by opening a terminal and typing the following commands Next, run the following commands: - $ conda env create -f environment.yml # or environment-windows.yml on Windows + $ conda env create -f environment.yml $ conda activate tf2 $ python -m ipykernel install --user --name=python3 From f86635b2332193ea8ec71c1064a0697857bab927 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Aur=C3=A9lien=20Geron?= Date: Tue, 16 Feb 2021 15:04:34 +1300 Subject: [PATCH 31/49] Update libraries to latest version, including TensorFlow 2.4.1 and Scikit-Learn 0.24.1 --- 14_deep_computer_vision_with_cnns.ipynb | 2 +- ...essing_sequences_using_rnns_and_cnns.ipynb | 5 ++- 16_nlp_with_rnns_and_attention.ipynb | 43 +++++++++++++++---- 3 files changed, 39 insertions(+), 11 deletions(-) diff --git a/14_deep_computer_vision_with_cnns.ipynb b/14_deep_computer_vision_with_cnns.ipynb index 860ce3a..c1b9e2f 100644 --- a/14_deep_computer_vision_with_cnns.ipynb +++ b/14_deep_computer_vision_with_cnns.ipynb @@ -1366,7 +1366,7 @@ "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", - "version": "3.7.6" + "version": "3.7.9" }, "nav_menu": {}, "toc": { diff --git a/15_processing_sequences_using_rnns_and_cnns.ipynb b/15_processing_sequences_using_rnns_and_cnns.ipynb index 10933e5..8441ada 100644 --- a/15_processing_sequences_using_rnns_and_cnns.ipynb +++ b/15_processing_sequences_using_rnns_and_cnns.ipynb @@ -1868,7 +1868,8 @@ " arpegio = tf.reshape(arpegio, [1, -1])\n", " for chord in range(length):\n", " for note in range(4):\n", - " next_note = model.predict_classes(arpegio)[:1, -1:]\n", + " #next_note = model.predict_classes(arpegio)[:1, -1:]\n", + " next_note = np.argmax(model.predict(arpegio), axis=-1)[:1, -1:]\n", " arpegio = tf.concat([arpegio, next_note], axis=1)\n", " arpegio = tf.where(arpegio == 0, arpegio, arpegio + min_note - 1)\n", " return tf.reshape(arpegio, shape=[-1, 4])" @@ -2010,7 +2011,7 @@ "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", - "version": "3.7.6" + "version": "3.7.9" }, "nav_menu": {}, "toc": { diff --git a/16_nlp_with_rnns_and_attention.ipynb b/16_nlp_with_rnns_and_attention.ipynb index 328e421..3efe4d5 100644 --- a/16_nlp_with_rnns_and_attention.ipynb +++ b/16_nlp_with_rnns_and_attention.ipynb @@ -309,6 +309,20 @@ "## Creating and Training the Model" ] }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "**Warning**: the following code may take up to 24 hours to run, depending on your hardware. 
If you use a GPU, it may take just 1 or 2 hours, or less." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "**Note**: the `GRU` class will only use the GPU (if you have one) when using the default values for the following arguments: `activation`, `recurrent_activation`, `recurrent_dropout`, `unroll`, `use_bias` and `reset_after`. This is why I commented out `recurrent_dropout=0.2` (compared to the book)." + ] + }, { "cell_type": "code", "execution_count": 18, @@ -317,9 +331,11 @@ "source": [ "model = keras.models.Sequential([\n", " keras.layers.GRU(128, return_sequences=True, input_shape=[None, max_id],\n", - " dropout=0.2, recurrent_dropout=0.2),\n", + " #dropout=0.2, recurrent_dropout=0.2),\n", + " dropout=0.2),\n", " keras.layers.GRU(128, return_sequences=True,\n", - " dropout=0.2, recurrent_dropout=0.2),\n", + " #dropout=0.2, recurrent_dropout=0.2),\n", + " dropout=0.2),\n", " keras.layers.TimeDistributed(keras.layers.Dense(max_id,\n", " activation=\"softmax\"))\n", "])\n", @@ -346,6 +362,13 @@ " return tf.one_hot(X, max_id)" ] }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "**Warning**: the `predict_classes()` method is deprecated. Instead, we must use `np.argmax(model.predict(X_new), axis=-1)`." + ] + }, { "cell_type": "code", "execution_count": 20, @@ -353,7 +376,8 @@ "outputs": [], "source": [ "X_new = preprocess([\"How are yo\"])\n", - "Y_pred = model.predict_classes(X_new)\n", + "#Y_pred = model.predict_classes(X_new)\n", + "Y_pred = np.argmax(model.predict(X_new), axis=-1)\n", "tokenizer.sequences_to_texts(Y_pred + 1)[0][-1] # 1st sentence, last char" ] }, @@ -1785,7 +1809,8 @@ "metadata": {}, "outputs": [], "source": [ - "ids = model.predict_classes(X_new)\n", + "#ids = model.predict_classes(X_new)\n", + "ids = np.argmax(model.predict(X_new), axis=-1)\n", "for date_str in ids_to_date_strs(ids):\n", " print(date_str)" ] @@ -1819,7 +1844,8 @@ "metadata": {}, "outputs": [], "source": [ - "ids = model.predict_classes(X_new)\n", + "#ids = model.predict_classes(X_new)\n", + "ids = np.argmax(model.predict(X_new), axis=-1)\n", "for date_str in ids_to_date_strs(ids):\n", " print(date_str)" ] @@ -1847,7 +1873,8 @@ "\n", "def convert_date_strs(date_strs):\n", " X = prepare_date_strs_padded(date_strs)\n", - " ids = model.predict_classes(X)\n", + " #ids = model.predict_classes(X)\n", + " ids = np.argmax(model.predict(X), axis=-1)\n", " return ids_to_date_strs(ids)" ] }, @@ -2226,7 +2253,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "**Warning**: due to a TF bug, this version only works using TensorFlow 2.2." + "**Warning**: due to a TF bug, this version only works using TensorFlow 2.2 or above." 
] }, { @@ -2711,7 +2738,7 @@ "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", - "version": "3.7.6" + "version": "3.7.9" }, "nav_menu": {}, "toc": { From 14cee24b59ac5479fb51c49337c845fffc564ba5 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Aur=C3=A9lien=20Geron?= Date: Tue, 16 Feb 2021 15:23:29 +1300 Subject: [PATCH 32/49] Add solution to exercise 9 --- 17_autoencoders_and_gans.ipynb | 199 ++++++++++++++++++++++++++++++++- 1 file changed, 198 insertions(+), 1 deletion(-) diff --git a/17_autoencoders_and_gans.ipynb b/17_autoencoders_and_gans.ipynb index bd1a64a..ea4fe9f 100644 --- a/17_autoencoders_and_gans.ipynb +++ b/17_autoencoders_and_gans.ipynb @@ -1603,6 +1603,203 @@ " plt.imshow(image, cmap=\"binary\")\n", " plt.axis(\"off\")" ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "# Exercise Solutions" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## 1. to 8.\n", + "\n", + "See Appendix A." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## 9.\n", + "_Exercise: Try using a denoising autoencoder to pretrain an image classifier. You can use MNIST (the simplest option), or a more complex image dataset such as [CIFAR10](https://homl.info/122) if you want a bigger challenge. Regardless of the dataset you're using, follow these steps:_\n", + "* Split the dataset into a training set and a test set. Train a deep denoising autoencoder on the full training set.\n", + "* Check that the images are fairly well reconstructed. Visualize the images that most activate each neuron in the coding layer.\n", + "* Build a classification DNN, reusing the lower layers of the autoencoder. Train it using only 500 images from the training set. Does it perform better with or without pretraining?" 
+ ] + }, + { + "cell_type": "code", + "execution_count": 144, + "metadata": {}, + "outputs": [], + "source": [ + "[X_train, y_train], [X_test, y_test] = keras.datasets.cifar10.load_data()\n", + "X_train = X_train / 255\n", + "X_test = X_test / 255" + ] + }, + { + "cell_type": "code", + "execution_count": 203, + "metadata": {}, + "outputs": [], + "source": [ + "tf.random.set_seed(42)\n", + "np.random.seed(42)\n", + "\n", + "denoising_encoder = keras.models.Sequential([\n", + " keras.layers.GaussianNoise(0.1, input_shape=[32, 32, 3]),\n", + " keras.layers.Conv2D(32, kernel_size=3, padding=\"same\", activation=\"relu\"),\n", + " keras.layers.MaxPool2D(),\n", + " keras.layers.Flatten(),\n", + " keras.layers.Dense(512, activation=\"relu\"),\n", + "])" + ] + }, + { + "cell_type": "code", + "execution_count": 204, + "metadata": {}, + "outputs": [], + "source": [ + "denoising_encoder.summary()" + ] + }, + { + "cell_type": "code", + "execution_count": 205, + "metadata": {}, + "outputs": [], + "source": [ + "denoising_decoder = keras.models.Sequential([\n", + " keras.layers.Dense(16 * 16 * 32, activation=\"relu\", input_shape=[512]),\n", + " keras.layers.Reshape([16, 16, 32]),\n", + " keras.layers.Conv2DTranspose(filters=3, kernel_size=3, strides=2,\n", + " padding=\"same\", activation=\"sigmoid\")\n", + "])" + ] + }, + { + "cell_type": "code", + "execution_count": 206, + "metadata": {}, + "outputs": [], + "source": [ + "denoising_decoder.summary()" + ] + }, + { + "cell_type": "code", + "execution_count": 207, + "metadata": {}, + "outputs": [], + "source": [ + "denoising_ae = keras.models.Sequential([denoising_encoder, denoising_decoder])\n", + "denoising_ae.compile(loss=\"binary_crossentropy\", optimizer=keras.optimizers.Nadam(),\n", + " metrics=[\"mse\"])\n", + "history = denoising_ae.fit(X_train, X_train, epochs=10,\n", + " validation_data=(X_test, X_test))" + ] + }, + { + "cell_type": "code", + "execution_count": 208, + "metadata": {}, + "outputs": [], + "source": [ + "n_images = 5\n", + "new_images = X_test[:n_images]\n", + "new_images_noisy = new_images + np.random.randn(n_images, 32, 32, 3) * 0.1\n", + "new_images_denoised = denoising_ae.predict(new_images_noisy)\n", + "\n", + "plt.figure(figsize=(6, n_images * 2))\n", + "for index in range(n_images):\n", + " plt.subplot(n_images, 3, index * 3 + 1)\n", + " plt.imshow(new_images[index])\n", + " plt.axis('off')\n", + " if index == 0:\n", + " plt.title(\"Original\")\n", + " plt.subplot(n_images, 3, index * 3 + 2)\n", + " plt.imshow(np.clip(new_images_noisy[index], 0., 1.))\n", + " plt.axis('off')\n", + " if index == 0:\n", + " plt.title(\"Noisy\")\n", + " plt.subplot(n_images, 3, index * 3 + 3)\n", + " plt.imshow(new_images_denoised[index])\n", + " plt.axis('off')\n", + " if index == 0:\n", + " plt.title(\"Denoised\")\n", + "plt.show()" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## 10.\n", + "_Exercise: Train a variational autoencoder on the image dataset of your choice, and use it to generate images. 
Alternatively, you can try to find an unlabeled dataset that you are interested in and see if you can generate new samples._\n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## 11.\n", + "_Exercise: Train a DCGAN to tackle the image dataset of your choice, and use it to generate images. Add experience replay and see if this helps. Turn it into a conditional GAN where you can control the generated class._\n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [] } ], "metadata": { @@ -1621,7 +1818,7 @@ "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", - "version": "3.7.6" + "version": "3.7.9" }, "nav_menu": { "height": "381px", From b7acf0c9a5b43754cce30c7f5cd6c6cca1a44ce3 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Aur=C3=A9lien=20Geron?= Date: Tue, 16 Feb 2021 18:21:45 +1300 Subject: [PATCH 33/49] Update libraries to latest version, including TensorFlow 2.4.1 and Scikit-Learn 0.24.1 --- 15_processing_sequences_using_rnns_and_cnns.ipynb | 7 +++++++ 17_autoencoders_and_gans.ipynb | 14 +++++++------- 2 files changed, 14 insertions(+), 7 deletions(-) diff --git a/15_processing_sequences_using_rnns_and_cnns.ipynb b/15_processing_sequences_using_rnns_and_cnns.ipynb index 8441ada..b846b42 100644 --- a/15_processing_sequences_using_rnns_and_cnns.ipynb +++ b/15_processing_sequences_using_rnns_and_cnns.ipynb @@ -1857,6 +1857,13 @@ "Now let's write a function that will generate a new chorale. We will give it a few seed chords, it will convert them to arpegios (the format expected by the model), and use the model to predict the next note, then the next, and so on. In the end, it will group the notes 4 by 4 to create chords again, and return the resulting chorale." ] }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "**Warning**: `model.predict_classes(X)` is deprecated. It is replaced with `np.argmax(model.predict(X), axis=-1)`." 
+ ] + }, { "cell_type": "code", "execution_count": 94, diff --git a/17_autoencoders_and_gans.ipynb b/17_autoencoders_and_gans.ipynb index ea4fe9f..af967c2 100644 --- a/17_autoencoders_and_gans.ipynb +++ b/17_autoencoders_and_gans.ipynb @@ -1633,7 +1633,7 @@ }, { "cell_type": "code", - "execution_count": 144, + "execution_count": 77, "metadata": {}, "outputs": [], "source": [ @@ -1644,7 +1644,7 @@ }, { "cell_type": "code", - "execution_count": 203, + "execution_count": 78, "metadata": {}, "outputs": [], "source": [ @@ -1662,7 +1662,7 @@ }, { "cell_type": "code", - "execution_count": 204, + "execution_count": 79, "metadata": {}, "outputs": [], "source": [ @@ -1671,7 +1671,7 @@ }, { "cell_type": "code", - "execution_count": 205, + "execution_count": 80, "metadata": {}, "outputs": [], "source": [ @@ -1685,7 +1685,7 @@ }, { "cell_type": "code", - "execution_count": 206, + "execution_count": 81, "metadata": {}, "outputs": [], "source": [ @@ -1694,7 +1694,7 @@ }, { "cell_type": "code", - "execution_count": 207, + "execution_count": 82, "metadata": {}, "outputs": [], "source": [ @@ -1707,7 +1707,7 @@ }, { "cell_type": "code", - "execution_count": 208, + "execution_count": 83, "metadata": {}, "outputs": [], "source": [ From 0ba6b3b5c85dcd8e865c045e019bb03420b93ae4 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Aur=C3=A9lien=20Geron?= Date: Wed, 17 Feb 2021 22:20:15 +1300 Subject: [PATCH 34/49] Add .vscode to .gitignore --- .gitignore | 1 + 1 file changed, 1 insertion(+) diff --git a/.gitignore b/.gitignore index 6b5acd1..42c276b 100644 --- a/.gitignore +++ b/.gitignore @@ -5,6 +5,7 @@ *.pyc .DS_Store .ipynb_checkpoints +.vscode/ checkpoint logs/* tf_logs/* From 749817ccfa6b172f9647a0298560a6c0a2234836 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Aur=C3=A9lien=20Geron?= Date: Thu, 18 Feb 2021 11:59:02 +1300 Subject: [PATCH 35/49] Update libraries to latest version, including TensorFlow 2.4.1 and Scikit-Learn 0.24.1 --- 18_reinforcement_learning.ipynb | 49 +++++++++++++++++---------------- 1 file changed, 25 insertions(+), 24 deletions(-) diff --git a/18_reinforcement_learning.ipynb b/18_reinforcement_learning.ipynb index b17da27..ed44866 100644 --- a/18_reinforcement_learning.ipynb +++ b/18_reinforcement_learning.ipynb @@ -639,7 +639,7 @@ }, { "cell_type": "code", - "execution_count": null, + "execution_count": 26, "metadata": {}, "outputs": [], "source": [ @@ -1316,7 +1316,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "We will also need a replay memory. It will contain the agent's experiences, in the form of tuples: `(obs, action, reward, next_obs, done)`. We can use the `deque` class for that:" + "We will also need a replay memory. It will contain the agent's experiences, in the form of tuples: `(obs, action, reward, next_obs, done)`. We can use the `deque` class for that (but make sure to check out DeepMind's excellent [Reverb library](https://github.com/deepmind/reverb) for a much more robust implementation of experience replay):" ] }, { @@ -1860,20 +1860,13 @@ "env.reset()" ] }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "**Warning**: since TF Agents 0.4.0, there seems to be an issue with passing an integer to the `env.step()` method (it raises an `AttributeError`). You need to wrap it in a NumPy array, as done below. Please see [TF Agents Issue #520](https://github.com/tensorflow/agents/issues/520) for more details." 
- ] - }, { "cell_type": "code", "execution_count": 82, "metadata": {}, "outputs": [], "source": [ - "env.step(np.array(1)) # Fire" + "env.step(1) # Fire" ] }, { @@ -2081,9 +2074,9 @@ "source": [ "env.seed(42)\n", "env.reset()\n", - "time_step = env.step(np.array(1)) # FIRE\n", + "time_step = env.step(1) # FIRE\n", "for _ in range(4):\n", - " time_step = env.step(np.array(3)) # LEFT" + " time_step = env.step(3) # LEFT" ] }, { @@ -2194,13 +2187,9 @@ "source": [ "from tf_agents.agents.dqn.dqn_agent import DqnAgent\n", "\n", - "# see TF-agents issue #113\n", - "#optimizer = keras.optimizers.RMSprop(lr=2.5e-4, rho=0.95, momentum=0.0,\n", - "# epsilon=0.00001, centered=True)\n", - "\n", "train_step = tf.Variable(0)\n", "update_period = 4 # run a training step every 4 collect steps\n", - "optimizer = tf.compat.v1.train.RMSPropOptimizer(learning_rate=2.5e-4, decay=0.95, momentum=0.0,\n", + "optimizer = keras.optimizers.RMSprop(lr=2.5e-4, rho=0.95, momentum=0.0,\n", " epsilon=0.00001, centered=True)\n", "epsilon_fn = keras.optimizers.schedules.PolynomialDecay(\n", " initial_learning_rate=1.0, # initial ε\n", @@ -2222,7 +2211,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "Create the replay buffer (this may use a lot of RAM, so please reduce the buffer size if you get an out-of-memory error):" + "Create the replay buffer (this will use a lot of RAM, so please reduce the buffer size if you get an out-of-memory error):" ] }, { @@ -2236,7 +2225,7 @@ "replay_buffer = tf_uniform_replay_buffer.TFUniformReplayBuffer(\n", " data_spec=agent.collect_data_spec,\n", " batch_size=tf_env.batch_size,\n", - " max_length=1000000)\n", + " max_length=1000000) # reduce if OOM error\n", "\n", "replay_buffer_observer = replay_buffer.add_batch" ] @@ -2363,16 +2352,28 @@ "Let's sample 2 sub-episodes, with 3 time steps each and display them:" ] }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "**Note**: `replay_buffer.get_next()` is deprecated. We must use `replay_buffer.as_dataset(..., single_deterministic_pass=False)` instead." 
+ ] + }, { "cell_type": "code", "execution_count": 109, "metadata": {}, "outputs": [], "source": [ - "tf.random.set_seed(888) # chosen to show an example of trajectory at the end of an episode\n", + "tf.random.set_seed(93) # chosen to show an example of trajectory at the end of an episode\n", "\n", - "trajectories, buffer_info = replay_buffer.get_next(\n", - " sample_batch_size=2, num_steps=3)" + "#trajectories, buffer_info = replay_buffer.get_next(\n", + "# sample_batch_size=2, num_steps=3)\n", + "\n", + "trajectories, buffer_info = next(iter(replay_buffer.as_dataset(\n", + " sample_batch_size=2,\n", + " num_steps=3,\n", + " single_deterministic_pass=False)))" ] }, { @@ -2528,7 +2529,7 @@ " lives = tf_env.pyenv.envs[0].ale.lives()\n", " if prev_lives != lives:\n", " tf_env.reset()\n", - " tf_env.pyenv.envs[0].step(np.array(1))\n", + " tf_env.pyenv.envs[0].step(1)\n", " prev_lives = lives\n", "\n", "watch_driver = DynamicStepDriver(\n", @@ -2797,7 +2798,7 @@ "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", - "version": "3.7.8" + "version": "3.7.9" } }, "nbformat": 4, From 7223978ae6a14f7274a9de6bf8ec9c6a3e40f596 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Aur=C3=A9lien=20Geron?= Date: Thu, 18 Feb 2021 22:26:11 +1300 Subject: [PATCH 36/49] Update libraries to latest version, including TensorFlow 2.4.1 and Scikit-Learn 0.24.1 --- 19_training_and_deploying_at_scale.ipynb | 288 +++++++++++++---------- 1 file changed, 162 insertions(+), 126 deletions(-) diff --git a/19_training_and_deploying_at_scale.ipynb b/19_training_and_deploying_at_scale.ipynb index 291e5fd..e3b3f93 100644 --- a/19_training_and_deploying_at_scale.ipynb +++ b/19_training_and_deploying_at_scale.ipynb @@ -286,12 +286,12 @@ "metadata": {}, "outputs": [], "source": [ - "np.round([[1.1739199e-04, 1.1239604e-07, 6.0210604e-04, 2.0804715e-03, 2.5779348e-06,\n", - " 6.4079795e-05, 2.7411186e-08, 9.9669880e-01, 3.9654213e-05, 3.9471846e-04],\n", - " [1.2294615e-03, 2.9207937e-05, 9.8599273e-01, 9.6755642e-03, 8.8930705e-08,\n", - " 2.9156188e-04, 1.5831805e-03, 1.1311053e-09, 1.1980456e-03, 1.1113169e-07],\n", - " [6.4066830e-05, 9.6359509e-01, 9.0598064e-03, 2.9872139e-03, 5.9552520e-04,\n", - " 3.7478798e-03, 2.5074568e-03, 1.1462728e-02, 5.5553433e-03, 4.2495009e-04]], 2)" + "np.round([[1.1347984e-04, 1.5187356e-07, 9.7032893e-04, 2.7640699e-03, 3.7826971e-06,\n", + " 7.6876910e-05, 3.9140293e-08, 9.9559116e-01, 5.3502394e-05, 4.2665208e-04],\n", + " [8.2443521e-04, 3.5493889e-05, 9.8826385e-01, 7.0466995e-03, 1.2957400e-07,\n", + " 2.3389691e-04, 2.5639210e-03, 9.5886099e-10, 1.0314899e-03, 8.7952529e-08],\n", + " [4.4693781e-05, 9.7028232e-01, 9.0526715e-03, 2.2641101e-03, 4.8766597e-04,\n", + " 2.8800720e-03, 2.2714981e-03, 8.3753867e-03, 4.0439744e-03, 2.9759688e-04]], 2)" ] }, { @@ -682,13 +682,21 @@ "# Using GPUs" ] }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "**Note**: `tf.test.is_gpu_available()` is deprecated. Instead, please use `tf.config.list_physical_devices('GPU')`." 
+ ] + }, { "cell_type": "code", "execution_count": 41, "metadata": {}, "outputs": [], "source": [ - "tf.test.is_gpu_available()" + "#tf.test.is_gpu_available() # deprecated\n", + "tf.config.list_physical_devices('GPU')" ] }, { @@ -799,7 +807,12 @@ "# Use the central storage strategy instead:\n", "#distribution = tf.distribute.experimental.CentralStorageStrategy()\n", "\n", - "#resolver = tf.distribute.cluster_resolver.TPUClusterResolver()\n", + "#if IS_COLAB and \"COLAB_TPU_ADDR\" in os.environ:\n", + "# tpu_address = \"grpc://\" + os.environ[\"COLAB_TPU_ADDR\"]\n", + "#else:\n", + "# tpu_address = \"\"\n", + "#resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu_address)\n", + "#tf.config.experimental_connect_to_cluster(resolver)\n", "#tf.tpu.experimental.initialize_tpu_system(resolver)\n", "#distribution = tf.distribute.experimental.TPUStrategy(resolver)\n", "\n", @@ -886,17 +899,6 @@ " print()" ] }, - { - "cell_type": "code", - "execution_count": 52, - "metadata": {}, - "outputs": [], - "source": [ - "batch_size = 100 # must be divisible by the number of workers\n", - "model.fit(X_train, y_train, epochs=10,\n", - " validation_data=(X_valid, y_valid), batch_size=batch_size)" - ] - }, { "cell_type": "markdown", "metadata": {}, @@ -910,24 +912,37 @@ "source": [ "A TensorFlow cluster is a group of TensorFlow processes running in parallel, usually on different machines, and talking to each other to complete some work, for example training or executing a neural network. Each TF process in the cluster is called a \"task\" (or a \"TF server\"). It has an IP address, a port, and a type (also called its role or its job). The type can be `\"worker\"`, `\"chief\"`, `\"ps\"` (parameter server) or `\"evaluator\"`:\n", "* Each **worker** performs computations, usually on a machine with one or more GPUs.\n", - "* The **chief** performs computations as well, but it also handles extra work such as writing TensorBoard logs or saving checkpoints. There is a single chief in a cluster. If no chief is specified, then the first worker is the chief.\n", + "* The **chief** performs computations as well, but it also handles extra work such as writing TensorBoard logs or saving checkpoints. There is a single chief in a cluster, typically the first worker (i.e., worker #0).\n", "* A **parameter server** (ps) only keeps track of variable values, it is usually on a CPU-only machine.\n", "* The **evaluator** obviously takes care of evaluation. There is usually a single evaluator in a cluster.\n", "\n", "The set of tasks that share the same type is often called a \"job\". For example, the \"worker\" job is the set of all workers.\n", "\n", - "To start a TensorFlow cluster, you must first specify it. This means defining all the tasks (IP address, TCP port, and type). For example, the following cluster specification defines a cluster with 3 tasks (2 workers and 1 parameter server). It's a dictionary with one key per job, and the values are lists of task addresses:\n", - "\n", - "```\n", - "{\n", - " \"worker\": [\"my-worker0.example.com:9876\", \"my-worker1.example.com:9876\"],\n", - " \"ps\": [\"my-ps0.example.com:9876\"]\n", - "}\n", - "```\n", - "\n", + "To start a TensorFlow cluster, you must first define it. This means specifying all the tasks (IP address, TCP port, and type). For example, the following cluster specification defines a cluster with 3 tasks (2 workers and 1 parameter server). 
It's a dictionary with one key per job, and the values are lists of task addresses:" + ] + }, + { + "cell_type": "code", + "execution_count": 52, + "metadata": {}, + "outputs": [], + "source": [ + "cluster_spec = {\n", + " \"worker\": [\n", + " \"machine-a.example.com:2222\", # /job:worker/task:0\n", + " \"machine-b.example.com:2222\" # /job:worker/task:1\n", + " ],\n", + " \"ps\": [\"machine-c.example.com:2222\"] # /job:ps/task:0\n", + "}" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ "Every task in the cluster may communicate with every other task in the server, so make sure to configure your firewall to authorize all communications between these machines on these ports (it's usually simpler if you use the same port on every machine).\n", "\n", - "When a task is started, it needs to be told which one it is: its type and index (the task index is also called the task id). A common way to specify everything at once (both the cluster spec and the current task's type and id) is to set the `TF_CONFIG` environment variable before starting the program. It must be a JSON-encoded dictionary containing a cluster specification (under the `\"cluster\"` key), and the type and index of the task to start (under the `\"task\"` key). For example, the following `TF_CONFIG` environment variable defines a simple cluster with 2 workers and 1 parameter server, and specifies that the task to start is the first worker:" + "When a task is started, it needs to be told which one it is: its type and index (the task index is also called the task id). A common way to specify everything at once (both the cluster spec and the current task's type and id) is to set the `TF_CONFIG` environment variable before starting the program. It must be a JSON-encoded dictionary containing a cluster specification (under the `\"cluster\"` key), and the type and index of the task to start (under the `\"task\"` key). For example, the following `TF_CONFIG` environment variable defines the same cluster as above, with 2 workers and 1 parameter server, and specifies that the task to start is worker #1:" ] }, { @@ -940,13 +955,10 @@ "import json\n", "\n", "os.environ[\"TF_CONFIG\"] = json.dumps({\n", - " \"cluster\": {\n", - " \"worker\": [\"my-work0.example.com:9876\", \"my-work1.example.com:9876\"],\n", - " \"ps\": [\"my-ps0.example.com:9876\"]\n", - " },\n", - " \"task\": {\"type\": \"worker\", \"index\": 0}\n", + " \"cluster\": cluster_spec,\n", + " \"task\": {\"type\": \"worker\", \"index\": 1}\n", "})\n", - "print(\"TF_CONFIG='{}'\".format(os.environ[\"TF_CONFIG\"]))" + "os.environ[\"TF_CONFIG\"]" ] }, { @@ -960,7 +972,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "Then you would write a short Python script to start a task. 
The same script can be used on every machine, since it will load the `TF_CONFIG` variable, which will tell it which task to start:" + "TensorFlow's `TFConfigClusterResolver` class reads the cluster configuration from this environment variable:" ] }, { @@ -972,16 +984,7 @@ "import tensorflow as tf\n", "\n", "resolver = tf.distribute.cluster_resolver.TFConfigClusterResolver()\n", - "worker0 = tf.distribute.Server(resolver.cluster_spec(),\n", - " job_name=resolver.task_type,\n", - " task_index=resolver.task_id)" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "Another way to specify the cluster specification is directly in Python, rather than through an environment variable:" + "resolver.cluster_spec()" ] }, { @@ -990,17 +993,7 @@ "metadata": {}, "outputs": [], "source": [ - "cluster_spec = tf.train.ClusterSpec({\n", - " \"worker\": [\"127.0.0.1:9901\", \"127.0.0.1:9902\"],\n", - " \"ps\": [\"127.0.0.1:9903\"]\n", - "})" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "You can then start a server simply by passing it the cluster spec and indicating its type and index. Let's start the two remaining tasks (remember that in general you would only start a single task per machine; we are starting 3 tasks on the localhost just for the purpose of this code example):" + "resolver.task_type" ] }, { @@ -1009,8 +1002,18 @@ "metadata": {}, "outputs": [], "source": [ - "#worker1 = tf.distribute.Server(cluster_spec, job_name=\"worker\", task_index=1)\n", - "ps0 = tf.distribute.Server(cluster_spec, job_name=\"ps\", task_index=0)" + "resolver.task_id" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Now let's run a simpler cluster with just two worker tasks, both running on the local machine. We will use the `MultiWorkerMirroredStrategy` to train a model across these two tasks.\n", + "\n", + "The first step is to write the training code. As this code will be used to run both workers, each in its own process, we write this code to a separate Python file, `my_mnist_multiworker_task.py`. The code is relatively straightforward, but there are a couple important things to note:\n", + "* We create the `MultiWorkerMirroredStrategy` before doing anything else with TensorFlow.\n", + "* Only one of the workers will take care of logging to TensorBoard and saving checkpoints. As mentioned earlier, this worker is called the *chief*, and by convention it is usually worker #0." 
] }, { @@ -1019,70 +1022,41 @@ "metadata": {}, "outputs": [], "source": [ - "os.environ[\"TF_CONFIG\"] = json.dumps({\n", - " \"cluster\": {\n", - " \"worker\": [\"127.0.0.1:9901\", \"127.0.0.1:9902\"],\n", - " \"ps\": [\"127.0.0.1:9903\"]\n", - " },\n", - " \"task\": {\"type\": \"worker\", \"index\": 1}\n", - "})\n", - "print(repr(os.environ[\"TF_CONFIG\"]))" - ] - }, - { - "cell_type": "code", - "execution_count": 58, - "metadata": {}, - "outputs": [], - "source": [ - "distribution = tf.distribute.experimental.MultiWorkerMirroredStrategy()\n", + "%%writefile my_mnist_multiworker_task.py\n", "\n", - "keras.backend.clear_session()\n", - "tf.random.set_seed(42)\n", - "np.random.seed(42)\n", - "\n", - "os.environ[\"TF_CONFIG\"] = json.dumps({\n", - " \"cluster\": {\n", - " \"worker\": [\"127.0.0.1:9901\", \"127.0.0.1:9902\"],\n", - " \"ps\": [\"127.0.0.1:9903\"]\n", - " },\n", - " \"task\": {\"type\": \"worker\", \"index\": 1}\n", - "})\n", - "#CUDA_VISIBLE_DEVICES=0 \n", - "\n", - "with distribution.scope():\n", - " model = create_model()\n", - " model.compile(loss=\"sparse_categorical_crossentropy\",\n", - " optimizer=keras.optimizers.SGD(lr=1e-2),\n", - " metrics=[\"accuracy\"])" - ] - }, - { - "cell_type": "code", - "execution_count": 59, - "metadata": {}, - "outputs": [], - "source": [ + "import os\n", + "import numpy as np\n", "import tensorflow as tf\n", "from tensorflow import keras\n", - "import numpy as np\n", + "import time\n", "\n", - "# At the beginning of the program (restart the kernel before running this cell)\n", - "distribution = tf.distribute.experimental.MultiWorkerMirroredStrategy()\n", + "# At the beginning of the program\n", + "distribution = tf.distribute.MultiWorkerMirroredStrategy()\n", "\n", + "resolver = tf.distribute.cluster_resolver.TFConfigClusterResolver()\n", + "print(\"Starting task {}{}\".format(resolver.task_type, resolver.task_id))\n", + "\n", + "# Only worker #0 will write checkpoints and log to TensorBoard\n", + "if resolver.task_id == 0:\n", + " root_logdir = os.path.join(os.curdir, \"my_mnist_multiworker_logs\")\n", + " run_id = time.strftime(\"run_%Y_%m_%d-%H_%M_%S\")\n", + " run_dir = os.path.join(root_logdir, run_id)\n", + " callbacks = [\n", + " keras.callbacks.TensorBoard(run_dir),\n", + " keras.callbacks.ModelCheckpoint(\"my_mnist_multiworker_model.h5\",\n", + " save_best_only=True),\n", + " ]\n", + "else:\n", + " callbacks = []\n", + "\n", + "# Load and prepare the MNIST dataset\n", "(X_train_full, y_train_full), (X_test, y_test) = keras.datasets.mnist.load_data()\n", "X_train_full = X_train_full[..., np.newaxis] / 255.\n", - "X_test = X_test[..., np.newaxis] / 255.\n", "X_valid, X_train = X_train_full[:5000], X_train_full[5000:]\n", "y_valid, y_train = y_train_full[:5000], y_train_full[5000:]\n", - "X_new = X_test[:3]\n", "\n", - "n_workers = 2\n", - "batch_size = 32 * n_workers\n", - "dataset = tf.data.Dataset.from_tensor_slices((X_train[..., np.newaxis], y_train)).repeat().batch(batch_size)\n", - " \n", - "def create_model():\n", - " return keras.models.Sequential([\n", + "with distribution.scope():\n", + " model = keras.models.Sequential([\n", " keras.layers.Conv2D(filters=64, kernel_size=7, activation=\"relu\",\n", " padding=\"same\", input_shape=[28, 28, 1]),\n", " keras.layers.MaxPooling2D(pool_size=2),\n", @@ -1096,14 +1070,62 @@ " keras.layers.Dropout(0.5),\n", " keras.layers.Dense(units=10, activation='softmax'),\n", " ])\n", - "\n", - "with distribution.scope():\n", - " model = create_model()\n", " 
model.compile(loss=\"sparse_categorical_crossentropy\",\n", " optimizer=keras.optimizers.SGD(lr=1e-2),\n", " metrics=[\"accuracy\"])\n", "\n", - "model.fit(dataset, steps_per_epoch=len(X_train)//batch_size, epochs=10)" + "model.fit(X_train, y_train, validation_data=(X_valid, y_valid),\n", + " epochs=10, callbacks=callbacks)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "In a real world application, there would typically be a single worker per machine, but in this example we're running both workers on the same machine, so they will both try to use all the available GPU RAM (if this machine has a GPU), and this will likely lead to an Out-Of-Memory (OOM) error. To avoid this, we could use the `CUDA_VISIBLE_DEVICES` environment variable to assign a different GPU to each worker. Alternatively, we can simply disable GPU support, like this:" + ] + }, + { + "cell_type": "code", + "execution_count": 58, + "metadata": {}, + "outputs": [], + "source": [ + "os.environ[\"CUDA_VISIBLE_DEVICES\"] = \"-1\"" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "We are now ready to start both workers, each in its own process, using Python's `subprocess` module. Before we start each process, we need to set the `TF_CONFIG` environment variable appropriately, changing only the task index:" + ] + }, + { + "cell_type": "code", + "execution_count": 59, + "metadata": {}, + "outputs": [], + "source": [ + "import subprocess\n", + "\n", + "cluster_spec = {\"worker\": [\"127.0.0.1:9901\", \"127.0.0.1:9902\"]}\n", + "\n", + "for index, worker_address in enumerate(cluster_spec[\"worker\"]):\n", + " os.environ[\"TF_CONFIG\"] = json.dumps({\n", + " \"cluster\": cluster_spec,\n", + " \"task\": {\"type\": \"worker\", \"index\": index}\n", + " })\n", + " subprocess.Popen(\"python my_mnist_multiworker_task.py\", shell=True)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "That's it! Our TensorFlow cluster is now running, but we can't see it in this notebook because it's running in separate processes (but if you are running this notebook in Jupyter, you can see the worker logs in Jupyter's server logs).\n", + "\n", + "Since the chief (worker #0) is writing to TensorBoard, we use TensorBoard to view the training progress. Run the following cell, then click on the settings button (i.e., the gear icon) in the TensorBoard interface and check the \"Reload data\" box to make TensorBoard automatically refresh every 30s. Once the first epoch of training is finished (which may take a few minutes), and once TensorBoard refreshes, the SCALARS tab will appear. Click on this tab to view the progress of the model's training and validation accuracy." ] }, { @@ -1112,12 +1134,15 @@ "metadata": {}, "outputs": [], "source": [ - "# Hyperparameter tuning\n", - "\n", - "# Only talk to ps server\n", - "config_proto = tf.ConfigProto(device_filters=['/job:ps', '/job:worker/task:%d' % tf_config['task']['index']])\n", - "config = tf.estimator.RunConfig(session_config=config_proto)\n", - "# default since 1.10" + "%load_ext tensorboard\n", + "%tensorboard --logdir=./my_mnist_multiworker_logs --port=6006" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "That's it! Once training is over, the best checkpoint of the model will be available in the `my_mnist_multiworker_model.h5` file. 
You can load it using `keras.models.load_model()` and use it for predictions, as usual:" ] }, { @@ -1126,7 +1151,18 @@ "metadata": {}, "outputs": [], "source": [ - "strategy.num_replicas_in_sync" + "from tensorflow import keras\n", + "\n", + "model = keras.models.load_model(\"my_mnist_multiworker_model.h5\")\n", + "Y_pred = model.predict(X_new)\n", + "np.argmax(Y_pred, axis=-1)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "And that's all for today! Hope you found this useful. 😊" ] } ], @@ -1146,7 +1182,7 @@ "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", - "version": "3.7.6" + "version": "3.7.9" } }, "nbformat": 4, From fe2c2a27ca775fb6d8de8936bf47e42d21aceb66 Mon Sep 17 00:00:00 2001 From: Marco Breemhaar Date: Thu, 18 Feb 2021 11:11:24 +0100 Subject: [PATCH 37/49] Fixed sklearn compatibility issue for ch 8 and 9 --- 08_dimensionality_reduction.ipynb | 2 +- 09_unsupervised_learning.ipynb | 2 +- 2 files changed, 2 insertions(+), 2 deletions(-) diff --git a/08_dimensionality_reduction.ipynb b/08_dimensionality_reduction.ipynb index 1be0a56..40a7324 100644 --- a/08_dimensionality_reduction.ipynb +++ b/08_dimensionality_reduction.ipynb @@ -773,7 +773,7 @@ "source": [ "from sklearn.datasets import fetch_openml\n", "\n", - "mnist = fetch_openml('mnist_784', version=1)\n", + "mnist = fetch_openml('mnist_784', version=1, as_frame=False)\n", "mnist.target = mnist.target.astype(np.uint8)" ] }, diff --git a/09_unsupervised_learning.ipynb b/09_unsupervised_learning.ipynb index fc9197d..7589bb4 100644 --- a/09_unsupervised_learning.ipynb +++ b/09_unsupervised_learning.ipynb @@ -938,7 +938,7 @@ "import urllib\n", "from sklearn.datasets import fetch_openml\n", "\n", - "mnist = fetch_openml('mnist_784', version=1)\n", + "mnist = fetch_openml('mnist_784', version=1, as_frame=False)\n", "mnist.target = mnist.target.astype(np.int64)" ] }, From 08376eee3b119e0a4e3f9ca75811456f4d14d409 Mon Sep 17 00:00:00 2001 From: Marco Breemhaar Date: Thu, 18 Feb 2021 11:11:24 +0100 Subject: [PATCH 38/49] Fixed sklearn 0.24 issue for ch 5, 7, 8 and 9 --- 05_support_vector_machines.ipynb | 2 +- 07_ensemble_learning_and_random_forests.ipynb | 2 +- 08_dimensionality_reduction.ipynb | 2 +- 09_unsupervised_learning.ipynb | 2 +- 4 files changed, 4 insertions(+), 4 deletions(-) diff --git a/05_support_vector_machines.ipynb b/05_support_vector_machines.ipynb index 2fa5543..e005667 100644 --- a/05_support_vector_machines.ipynb +++ b/05_support_vector_machines.ipynb @@ -1388,7 +1388,7 @@ "outputs": [], "source": [ "from sklearn.datasets import fetch_openml\n", - "mnist = fetch_openml('mnist_784', version=1, cache=True)\n", + "mnist = fetch_openml('mnist_784', version=1, cache=True, as_frame=False)\n", "\n", "X = mnist[\"data\"]\n", "y = mnist[\"target\"].astype(np.uint8)\n", diff --git a/07_ensemble_learning_and_random_forests.ipynb b/07_ensemble_learning_and_random_forests.ipynb index f4f135e..39c55f4 100644 --- a/07_ensemble_learning_and_random_forests.ipynb +++ b/07_ensemble_learning_and_random_forests.ipynb @@ -453,7 +453,7 @@ "source": [ "from sklearn.datasets import fetch_openml\n", "\n", - "mnist = fetch_openml('mnist_784', version=1)\n", + "mnist = fetch_openml('mnist_784', version=1, as_frame=False)\n", "mnist.target = mnist.target.astype(np.uint8)" ] }, diff --git a/08_dimensionality_reduction.ipynb b/08_dimensionality_reduction.ipynb index 1be0a56..40a7324 100644 --- a/08_dimensionality_reduction.ipynb +++ b/08_dimensionality_reduction.ipynb @@ -773,7 +773,7 @@ 
"source": [ "from sklearn.datasets import fetch_openml\n", "\n", - "mnist = fetch_openml('mnist_784', version=1)\n", + "mnist = fetch_openml('mnist_784', version=1, as_frame=False)\n", "mnist.target = mnist.target.astype(np.uint8)" ] }, diff --git a/09_unsupervised_learning.ipynb b/09_unsupervised_learning.ipynb index fc9197d..7589bb4 100644 --- a/09_unsupervised_learning.ipynb +++ b/09_unsupervised_learning.ipynb @@ -938,7 +938,7 @@ "import urllib\n", "from sklearn.datasets import fetch_openml\n", "\n", - "mnist = fetch_openml('mnist_784', version=1)\n", + "mnist = fetch_openml('mnist_784', version=1, as_frame=False)\n", "mnist.target = mnist.target.astype(np.int64)" ] }, From 97af3c635ba5cbb94a6144a479182eced0dbae0b Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Aur=C3=A9lien=20Geron?= Date: Fri, 19 Feb 2021 08:26:32 +1300 Subject: [PATCH 39/49] layer.updates is deprecated, and model_B.summary() instead of model.summary(), fixes #380 --- 11_training_deep_neural_networks.ipynb | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/11_training_deep_neural_networks.ipynb b/11_training_deep_neural_networks.ipynb index 207e64f..596a0f5 100644 --- a/11_training_deep_neural_networks.ipynb +++ b/11_training_deep_neural_networks.ipynb @@ -673,7 +673,7 @@ "metadata": {}, "outputs": [], "source": [ - "bn1.updates" + "#bn1.updates #deprecated" ] }, { @@ -953,7 +953,7 @@ "metadata": {}, "outputs": [], "source": [ - "model.summary()" + "model_B.summary()" ] }, { From 0eb31f77c2215cff11840c1e021aa37850d9c14f Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Aur=C3=A9lien=20Geron?= Date: Fri, 19 Feb 2021 17:52:10 +1300 Subject: [PATCH 40/49] Replace random_state=n_clusters with random_state=42, fixes #366 --- 09_unsupervised_learning.ipynb | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/09_unsupervised_learning.ipynb b/09_unsupervised_learning.ipynb index 67283b1..aedfa4b 100644 --- a/09_unsupervised_learning.ipynb +++ b/09_unsupervised_learning.ipynb @@ -3416,7 +3416,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "It looks like the best number of clusters is quite high, at 100. You might have expected it to be 40, since there are 40 different people on the pictures. However, the same person may look quite different on different pictures (e.g., with or without glasses, or simply shifted left or right)." + "It looks like the best number of clusters is quite high, at 120. You might have expected it to be 40, since there are 40 different people on the pictures. However, the same person may look quite different on different pictures (e.g., with or without glasses, or simply shifted left or right)." 
] }, { @@ -3573,7 +3573,7 @@ "\n", "for n_clusters in k_range:\n", " pipeline = Pipeline([\n", - " (\"kmeans\", KMeans(n_clusters=n_clusters, random_state=n_clusters)),\n", + " (\"kmeans\", KMeans(n_clusters=n_clusters, random_state=42)),\n", " (\"forest_clf\", RandomForestClassifier(n_estimators=150, random_state=42))\n", " ])\n", " pipeline.fit(X_train_pca, y_train)\n", From 1d7c2956d12bdb2f8962085785294eb05b4f736a Mon Sep 17 00:00:00 2001 From: Nithin A R <59356114+neonithinar@users.noreply.github.com> Date: Mon, 22 Feb 2021 13:41:32 +0530 Subject: [PATCH 41/49] Possible typo while loading IMDb dataset possible typo while loading IMDb from keras (X_train, y_test), (X_valid, y_test) = keras.datasets.imdb.load_data() changed to (X_train, y_train), (X_test, y_test) = keras.datasets.imdb.load_data() --- 16_nlp_with_rnns_and_attention.ipynb | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/16_nlp_with_rnns_and_attention.ipynb b/16_nlp_with_rnns_and_attention.ipynb index 328e421..9108a82 100644 --- a/16_nlp_with_rnns_and_attention.ipynb +++ b/16_nlp_with_rnns_and_attention.ipynb @@ -614,7 +614,7 @@ "metadata": {}, "outputs": [], "source": [ - "(X_train, y_test), (X_valid, y_test) = keras.datasets.imdb.load_data()" + "(X_train, y_train), (X_test, y_test) = keras.datasets.imdb.load_data()" ] }, { From 64f0e05a941897d3475c2f4cce7ca573f68daac4 Mon Sep 17 00:00:00 2001 From: B D <73541689+lebaste77@users.noreply.github.com> Date: Sun, 28 Feb 2021 12:02:23 +0100 Subject: [PATCH 42/49] Minor change on greedy policy variable usage Chap 18, why not using directly the 'n_outputs' variable defined earlier, instead of hardcoded '2' --- 18_reinforcement_learning.ipynb | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/18_reinforcement_learning.ipynb b/18_reinforcement_learning.ipynb index ed44866..b723f04 100644 --- a/18_reinforcement_learning.ipynb +++ b/18_reinforcement_learning.ipynb @@ -1306,7 +1306,7 @@ "source": [ "def epsilon_greedy_policy(state, epsilon=0):\n", " if np.random.rand() < epsilon:\n", - " return np.random.randint(2)\n", + " return np.random.randint(n_outputs)\n", " else:\n", " Q_values = model.predict(state[np.newaxis])\n", " return np.argmax(Q_values[0])" From 01eeeb2065c1b8bce96932e9d5d75fb64635558b Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Aur=C3=A9lien=20Geron?= Date: Mon, 1 Mar 2021 18:50:46 +1300 Subject: [PATCH 43/49] Replace C with self.C, fixes #386 --- 05_support_vector_machines.ipynb | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/05_support_vector_machines.ipynb b/05_support_vector_machines.ipynb index 208cf18..5f68eab 100644 --- a/05_support_vector_machines.ipynb +++ b/05_support_vector_machines.ipynb @@ -1109,7 +1109,7 @@ " self.Js.append(J)\n", "\n", " w_gradient_vector = w - self.C * np.sum(X_t_sv, axis=0).reshape(-1, 1)\n", - " b_derivative = -C * np.sum(t_sv)\n", + " b_derivative = -self.C * np.sum(t_sv)\n", " \n", " w = w - self.eta(epoch) * w_gradient_vector\n", " b = b - self.eta(epoch) * b_derivative\n", From 7cde12c6486a014c8f090171f5ad2c59db95f924 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Aur=C3=A9lien=20Geron?= Date: Mon, 1 Mar 2021 19:37:18 +1300 Subject: [PATCH 44/49] Fix typo when calling imdb.load_data(), fixes #385 --- 16_nlp_with_rnns_and_attention.ipynb | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/16_nlp_with_rnns_and_attention.ipynb b/16_nlp_with_rnns_and_attention.ipynb index 3efe4d5..7b8813c 100644 --- a/16_nlp_with_rnns_and_attention.ipynb +++ b/16_nlp_with_rnns_and_attention.ipynb 
@@ -638,7 +638,7 @@ "metadata": {}, "outputs": [], "source": [ - "(X_train, y_test), (X_valid, y_test) = keras.datasets.imdb.load_data()" + "(X_train, y_train), (X_test, y_test) = keras.datasets.imdb.load_data()" ] }, { From 9fede98b42c2dee23256a171d2b46ea1c12fa0f9 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Aur=C3=A9lien=20Geron?= Date: Mon, 1 Mar 2021 22:18:40 +1300 Subject: [PATCH 45/49] Add not about squared=False, fixes #361 --- 02_end_to_end_machine_learning_project.ipynb | 7 +++++++ 1 file changed, 7 insertions(+) diff --git a/02_end_to_end_machine_learning_project.ipynb b/02_end_to_end_machine_learning_project.ipynb index b8619d6..775ae1a 100644 --- a/02_end_to_end_machine_learning_project.ipynb +++ b/02_end_to_end_machine_learning_project.ipynb @@ -1219,6 +1219,13 @@ "lin_rmse" ] }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "**Note**: since Scikit-Learn 0.22, you can get the RMSE directly by calling the `mean_squared_error()` function with `squared=False`." + ] + }, { "cell_type": "code", "execution_count": 88, From 5663779ae83023cf7ae4992266ebc8b9c9160327 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Aur=C3=A9lien=20Geron?= Date: Tue, 2 Mar 2021 09:19:21 +1300 Subject: [PATCH 46/49] Use as_frame=False for fetch_open_ml(), and svd_solver=full for PCA, fixes #358 --- 08_dimensionality_reduction.ipynb | 29 ++++++++++++++++++----------- 1 file changed, 18 insertions(+), 11 deletions(-) diff --git a/08_dimensionality_reduction.ipynb b/08_dimensionality_reduction.ipynb index c7f1797..272c8ea 100644 --- a/08_dimensionality_reduction.ipynb +++ b/08_dimensionality_reduction.ipynb @@ -184,7 +184,7 @@ "source": [ "from sklearn.decomposition import PCA\n", "\n", - "pca = PCA(n_components = 2)\n", + "pca = PCA(n_components=2)\n", "X2D = pca.fit_transform(X)" ] }, @@ -761,6 +761,13 @@ "# MNIST compression" ] }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "**Warning:** since Scikit-Learn 0.24, `fetch_openml()` returns a Pandas `DataFrame` by default. To avoid this and keep the same code as in the book, we set `as_frame=False`."
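The `squared=False` note added by PATCH 45 above is easy to verify with a tiny self-contained sketch (editor's example with placeholder data, not part of either diff):

```python
import numpy as np
from sklearn.metrics import mean_squared_error

y_true = [3.0, -0.5, 2.0, 7.0]   # placeholder labels
y_pred = [2.5,  0.0, 2.0, 8.0]   # placeholder predictions

rmse_manual = np.sqrt(mean_squared_error(y_true, y_pred))        # the book's original approach
rmse_direct = mean_squared_error(y_true, y_pred, squared=False)  # shortcut available since Scikit-Learn 0.22
assert np.isclose(rmse_manual, rmse_direct)
```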
+ ] + }, { "cell_type": "code", "execution_count": 31, @@ -769,7 +776,7 @@ "source": [ "from sklearn.datasets import fetch_openml\n", "\n", - "mnist = fetch_openml('mnist_784', version=1)\n", + "mnist = fetch_openml('mnist_784', version=1, as_frame=False)\n", "mnist.target = mnist.target.astype(np.uint8)" ] }, @@ -863,7 +870,7 @@ "metadata": {}, "outputs": [], "source": [ - "pca = PCA(n_components = 154)\n", + "pca = PCA(n_components=154)\n", "X_reduced = pca.fit_transform(X_train)\n", "X_recovered = pca.inverse_transform(X_reduced)" ] @@ -1101,15 +1108,15 @@ "\n", "for n_components in (2, 10, 154):\n", " print(\"n_components =\", n_components)\n", - " regular_pca = PCA(n_components=n_components)\n", + " regular_pca = PCA(n_components=n_components, svd_solver=\"full\")\n", " inc_pca = IncrementalPCA(n_components=n_components, batch_size=500)\n", " rnd_pca = PCA(n_components=n_components, random_state=42, svd_solver=\"randomized\")\n", "\n", - " for pca in (regular_pca, inc_pca, rnd_pca):\n", + " for name, pca in ((\"PCA\", regular_pca), (\"Inc PCA\", inc_pca), (\"Rnd PCA\", rnd_pca)):\n", " t1 = time.time()\n", " pca.fit(X_train)\n", " t2 = time.time()\n", - " print(\" {}: {:.1f} seconds\".format(pca.__class__.__name__, t2 - t1))" + " print(\" {}: {:.1f} seconds\".format(name, t2 - t1))" ] }, { @@ -1130,12 +1137,12 @@ "sizes = [1000, 10000, 20000, 30000, 40000, 50000, 70000, 100000, 200000, 500000]\n", "for n_samples in sizes:\n", " X = np.random.randn(n_samples, 5)\n", - " pca = PCA(n_components = 2, svd_solver=\"randomized\", random_state=42)\n", + " pca = PCA(n_components=2, svd_solver=\"randomized\", random_state=42)\n", " t1 = time.time()\n", " pca.fit(X)\n", " t2 = time.time()\n", " times_rpca.append(t2 - t1)\n", - " pca = PCA(n_components = 2)\n", + " pca = PCA(n_components=2, svd_solver=\"full\")\n", " t1 = time.time()\n", " pca.fit(X)\n", " t2 = time.time()\n", @@ -1169,12 +1176,12 @@ "sizes = [1000, 2000, 3000, 4000, 5000, 6000]\n", "for n_features in sizes:\n", " X = np.random.randn(2000, n_features)\n", - " pca = PCA(n_components = 2, random_state=42, svd_solver=\"randomized\")\n", + " pca = PCA(n_components=2, random_state=42, svd_solver=\"randomized\")\n", " t1 = time.time()\n", " pca.fit(X)\n", " t2 = time.time()\n", " times_rpca.append(t2 - t1)\n", - " pca = PCA(n_components = 2)\n", + " pca = PCA(n_components=2, svd_solver=\"full\")\n", " t1 = time.time()\n", " pca.fit(X)\n", " t2 = time.time()\n", @@ -2252,7 +2259,7 @@ "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", - "version": "3.7.8" + "version": "3.7.9" } }, "nbformat": 4, From 346dfe6d1eb7100f619a0476c5f94cb52b944d31 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Aur=C3=A9lien=20Geron?= Date: Tue, 2 Mar 2021 09:29:06 +1300 Subject: [PATCH 47/49] Use as_frame=False when calling fetch_openml() --- 03_classification.ipynb | 11 +++++++++-- 05_support_vector_machines.ipynb | 11 +++++++++-- 07_ensemble_learning_and_random_forests.ipynb | 11 +++++++++-- 09_unsupervised_learning.ipynb | 9 ++++++++- 4 files changed, 35 insertions(+), 7 deletions(-) diff --git a/03_classification.ipynb b/03_classification.ipynb index cd5ac1e..26ef8f0 100644 --- a/03_classification.ipynb +++ b/03_classification.ipynb @@ -84,6 +84,13 @@ "# MNIST" ] }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "**Warning:** since Scikit-Learn 0.24, `fetch_openml()` returns a Pandas `DataFrame` by default. To avoid this and keep the same code as in the book, we use `as_frame=False`." 
+ ] + }, { "cell_type": "code", "execution_count": 2, @@ -91,7 +98,7 @@ "outputs": [], "source": [ "from sklearn.datasets import fetch_openml\n", - "mnist = fetch_openml('mnist_784', version=1)\n", + "mnist = fetch_openml('mnist_784', version=1, as_frame=False)\n", "mnist.keys()" ] }, @@ -2588,7 +2595,7 @@ "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", - "version": "3.7.8" + "version": "3.7.9" }, "nav_menu": {}, "toc": { diff --git a/05_support_vector_machines.ipynb b/05_support_vector_machines.ipynb index 5f68eab..bb9b855 100644 --- a/05_support_vector_machines.ipynb +++ b/05_support_vector_machines.ipynb @@ -1381,6 +1381,13 @@ "First, let's load the dataset and split it into a training set and a test set. We could use `train_test_split()` but people usually just take the first 60,000 instances for the training set, and the last 10,000 instances for the test set (this makes it possible to compare your model's performance with others): " ] }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "**Warning:** since Scikit-Learn 0.24, `fetch_openml()` returns a Pandas `DataFrame` by default. To avoid this, we use `as_frame=False`." + ] + }, { "cell_type": "code", "execution_count": 47, @@ -1388,7 +1395,7 @@ "outputs": [], "source": [ "from sklearn.datasets import fetch_openml\n", - "mnist = fetch_openml('mnist_784', version=1, cache=True)\n", + "mnist = fetch_openml('mnist_784', version=1, cache=True, as_frame=False)\n", "\n", "X = mnist[\"data\"]\n", "y = mnist[\"target\"].astype(np.uint8)\n", @@ -1837,7 +1844,7 @@ "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", - "version": "3.7.8" + "version": "3.7.9" }, "nav_menu": {}, "toc": { diff --git a/07_ensemble_learning_and_random_forests.ipynb b/07_ensemble_learning_and_random_forests.ipynb index 63a224e..089f502 100644 --- a/07_ensemble_learning_and_random_forests.ipynb +++ b/07_ensemble_learning_and_random_forests.ipynb @@ -452,6 +452,13 @@ "## Feature importance" ] }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "**Warning:** since Scikit-Learn 0.24, `fetch_openml()` returns a Pandas `DataFrame` by default. To avoid this and keep the same code as in the book, we use `as_frame=False`." + ] + }, { "cell_type": "code", "execution_count": 25, @@ -460,7 +467,7 @@ "source": [ "from sklearn.datasets import fetch_openml\n", "\n", - "mnist = fetch_openml('mnist_784', version=1)\n", + "mnist = fetch_openml('mnist_784', version=1, as_frame=False)\n", "mnist.target = mnist.target.astype(np.uint8)" ] }, @@ -1395,7 +1402,7 @@ "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", - "version": "3.7.8" + "version": "3.7.9" }, "nav_menu": { "height": "252px", diff --git a/09_unsupervised_learning.ipynb b/09_unsupervised_learning.ipynb index aedfa4b..ad9a3b8 100644 --- a/09_unsupervised_learning.ipynb +++ b/09_unsupervised_learning.ipynb @@ -969,6 +969,13 @@ "If the dataset does not fit in memory, the simplest option is to use the `memmap` class, just like we did for incremental PCA in the previous chapter. First let's load MNIST:" ] }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "**Warning:** since Scikit-Learn 0.24, `fetch_openml()` returns a Pandas `DataFrame` by default. To avoid this and keep the same code as in the book, we use `as_frame=False`." 
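To make the effect of these warnings concrete, here is a hedged sketch (editor's example, not part of the diffs) of what `as_frame=False` changes in practice:

```python
from sklearn.datasets import fetch_openml

# With as_frame=False, the data comes back as NumPy arrays, so the book's
# positional indexing (e.g., mnist.data[0]) keeps working.
mnist = fetch_openml('mnist_784', version=1, as_frame=False)
print(type(mnist.data))   # <class 'numpy.ndarray'>

# Under the Scikit-Learn 0.24 default (as_frame=True), mnist.data would be a
# pandas DataFrame, and mnist.data[0] would raise a KeyError because 0 is not
# a column name.
```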
+ ] + }, { "cell_type": "code", "execution_count": 46, @@ -978,7 +985,7 @@ "import urllib.request\n", "from sklearn.datasets import fetch_openml\n", "\n", - "mnist = fetch_openml('mnist_784', version=1)\n", + "mnist = fetch_openml('mnist_784', version=1, as_frame=False)\n", "mnist.target = mnist.target.astype(np.int64)" ] }, From b201196be12e636b35d3a100622f8dea4cfcb4b1 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Aur=C3=A9lien=20Geron?= Date: Tue, 2 Mar 2021 10:13:13 +1300 Subject: [PATCH 48/49] Add Colab button, fixes #346 --- extra_autodiff.ipynb | 11 ++++ extra_gradient_descent_comparison.ipynb | 13 +++- index.ipynb | 82 +++++++++++++++++++------ math_linear_algebra.ipynb | 13 +++- tools_matplotlib.ipynb | 13 +++- tools_numpy.ipynb | 24 ++++++-- tools_pandas.ipynb | 11 ++++ 7 files changed, 140 insertions(+), 27 deletions(-) diff --git a/extra_autodiff.ipynb b/extra_autodiff.ipynb index e67a164..6ef643d 100644 --- a/extra_autodiff.ipynb +++ b/extra_autodiff.ipynb @@ -14,6 +14,17 @@ "_This notebook contains toy implementations of various autodiff techniques, to explain how they works._" ] }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "\n", + " \n", + "
\n", + " Run in Google Colab\n", + "
" + ] + }, { "cell_type": "markdown", "metadata": {}, diff --git a/extra_gradient_descent_comparison.ipynb b/extra_gradient_descent_comparison.ipynb index 69643b4..d6dac91 100644 --- a/extra_gradient_descent_comparison.ipynb +++ b/extra_gradient_descent_comparison.ipynb @@ -14,6 +14,17 @@ "This notebook displays an animation comparing Batch, Mini-Batch and Stochastic Gradient Descent (introduced in Chapter 4). Thanks to [Daniel Ingram](https://github.com/daniel-s-ingram) who contributed this notebook." ] }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "\n", + " \n", + "
\n", + " Run in Google Colab\n", + "
" + ] + }, { "cell_type": "code", "execution_count": 1, @@ -257,7 +268,7 @@ "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", - "version": "3.7.6" + "version": "3.7.9" } }, "nbformat": 4, diff --git a/index.ipynb b/index.ipynb index 09a1f2b..91842dc 100644 --- a/index.ipynb +++ b/index.ipynb @@ -10,6 +10,17 @@ "\n", "[Prerequisites](#Prerequisites) (see below)\n", "\n", + "\n", + " \n", + "
\n", + " Run in Google Colab\n", + "
" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ "## Notebooks\n", "1. [The Machine Learning landscape](01_the_machine_learning_landscape.ipynb)\n", "2. [End-to-end Machine Learning project](02_end_to_end_machine_learning_project.ipynb)\n", @@ -29,34 +40,65 @@ "16. [Natural Language Processing with RNNs and Attention](16_nlp_with_rnns_and_attention.ipynb)\n", "17. [Representation Learning Using Autoencoders](17_autoencoders.ipynb)\n", "18. [Reinforcement Learning](18_reinforcement_learning.ipynb)\n", - "19. [Training and Deploying TensorFlow Models at Scale](19_training_and_deploying_at_scale.ipynb)\n", - "\n", - "## Scientific Python tutorials\n", - "* [NumPy](tools_numpy.ipynb)\n", - "* [Matplotlib](tools_matplotlib.ipynb)\n", - "* [Pandas](tools_pandas.ipynb)\n", - "\n", - "## Math Tutorials\n", - "* [Linear Algebra](math_linear_algebra.ipynb)\n", - "* [Differential Calculus](math_differential_calculus.ipynb)\n", - "\n", - "## Extra Material\n", - "* [Auto-differentiation](extra_autodiff.ipynb)\n", - "\n", - "## Misc.\n", - "* [Equations](book_equations.pdf) (list of equations in the book)\n" + "19. [Training and Deploying TensorFlow Models at Scale](19_training_and_deploying_at_scale.ipynb)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Scientific Python tutorials\n", + "* [NumPy](tools_numpy.ipynb)\n", + "* [Matplotlib](tools_matplotlib.ipynb)\n", + "* [Pandas](tools_pandas.ipynb)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Math Tutorials\n", + "* [Linear Algebra](math_linear_algebra.ipynb)\n", + "* [Differential Calculus](math_differential_calculus.ipynb)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Extra Material\n", + "* [Auto-differentiation](extra_autodiff.ipynb)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Misc.\n", + "* [Equations](book_equations.pdf) (list of equations in the book)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Prerequisites" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ - "## Prerequisites\n", "### To understand\n", "* **Python** – you don't need to be an expert python programmer, but you do need to know the basics. If you don't, the official [Python tutorial](https://docs.python.org/3/tutorial/) is a good place to start.\n", "* **Scientific Python** – We will be using a few popular python libraries, in particular NumPy, matplotlib and pandas. If you are not familiar with these libraries, you should probably start by going through the tutorials in the Tools section (especially NumPy).\n", - "* **Math** – We will also use some notions of Linear Algebra, Calculus, Statistics and Probability theory. You should be able to follow along if you learned these in the past as it won't be very advanced, but if you don't know about these topics or you need a refresher then go through the appropriate introduction in the Math section.\n", - "\n", + "* **Math** – We will also use some notions of Linear Algebra, Calculus, Statistics and Probability theory. You should be able to follow along if you learned these in the past as it won't be very advanced, but if you don't know about these topics or you need a refresher then go through the appropriate introduction in the Math section." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ "### To run the examples\n", "* **Jupyter** – These notebooks are based on Jupyter. 
You can run these notebooks in just one click using a hosted platform such as Binder, Deepnote or Colaboratory (no installation required), or you can just view them using Jupyter.org's viewer, or you can install everything on your machine, as you prefer. Check out the [home page](https://github.com/ageron/handson-ml2/) for more details." ] @@ -85,7 +127,7 @@ "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", - "version": "3.7.6" + "version": "3.7.9" }, "nav_menu": {}, "toc": { diff --git a/math_linear_algebra.ipynb b/math_linear_algebra.ipynb index 501a176..6c33d02 100644 --- a/math_linear_algebra.ipynb +++ b/math_linear_algebra.ipynb @@ -11,6 +11,17 @@ "*Machine Learning relies heavily on Linear Algebra, so it is essential to understand what vectors and matrices are, what operations you can perform with them, and how they can be useful.*" ] }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "\n", + " \n", + "
\n", + " Run in Google Colab\n", + "
" + ] + }, { "cell_type": "markdown", "metadata": {}, @@ -3063,7 +3074,7 @@ "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", - "version": "3.7.6" + "version": "3.7.9" }, "toc": { "toc_cell": false, diff --git a/tools_matplotlib.ipynb b/tools_matplotlib.ipynb index 1cf655a..949f5c2 100644 --- a/tools_matplotlib.ipynb +++ b/tools_matplotlib.ipynb @@ -9,6 +9,17 @@ "*This notebook demonstrates how to use the matplotlib library to plot beautiful graphs.*" ] }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "\n", + " \n", + "
\n", + " Run in Google Colab\n", + "
" + ] + }, { "cell_type": "markdown", "metadata": { @@ -1242,7 +1253,7 @@ "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", - "version": "3.7.6" + "version": "3.7.9" }, "toc": { "toc_cell": true, diff --git a/tools_numpy.ipynb b/tools_numpy.ipynb index ddd3ab6..7b5c22e 100644 --- a/tools_numpy.ipynb +++ b/tools_numpy.ipynb @@ -6,9 +6,25 @@ "source": [ "**Tools - NumPy**\n", "\n", - "*NumPy is the fundamental library for scientific computing with Python. NumPy is centered around a powerful N-dimensional array object, and it also contains useful linear algebra, Fourier transform, and random number functions.*\n", - "\n", - "# Creating arrays" + "*NumPy is the fundamental library for scientific computing with Python. NumPy is centered around a powerful N-dimensional array object, and it also contains useful linear algebra, Fourier transform, and random number functions.*" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "\n", + " \n", + "
\n", + " Run in Google Colab\n", + "
" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "# Creating Arrays" ] }, { @@ -2833,7 +2849,7 @@ "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", - "version": "3.7.6" + "version": "3.7.9" }, "toc": { "toc_cell": false, diff --git a/tools_pandas.ipynb b/tools_pandas.ipynb index 7e87472..2e93943 100644 --- a/tools_pandas.ipynb +++ b/tools_pandas.ipynb @@ -12,6 +12,17 @@ "* NumPy – if you are not familiar with NumPy, we recommend that you go through the [NumPy tutorial](tools_numpy.ipynb) now." ] }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "\n", + " \n", + "
\n", + " Run in Google Colab\n", + "
" + ] + }, { "cell_type": "markdown", "metadata": {}, From 3d418c030809b5f7fb8875d210d9958fbf14e107 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Aur=C3=A9lien=20Geron?= Date: Tue, 2 Mar 2021 11:10:15 +1300 Subject: [PATCH 49/49] Install the transformers library when running on Colab --- 16_nlp_with_rnns_and_attention.ipynb | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/16_nlp_with_rnns_and_attention.ipynb b/16_nlp_with_rnns_and_attention.ipynb index 3dd2d1f..240cd52 100644 --- a/16_nlp_with_rnns_and_attention.ipynb +++ b/16_nlp_with_rnns_and_attention.ipynb @@ -57,6 +57,7 @@ " # %tensorflow_version only exists in Colab.\n", " %tensorflow_version 2.x\n", " !pip install -q -U tensorflow-addons\n", + " !pip install -q -U transformers\n", " IS_COLAB = True\n", "except Exception:\n", " IS_COLAB = False\n", @@ -2615,7 +2616,6 @@ "metadata": {}, "outputs": [], "source": [ - "!pip install -q -U transformers\n", "from transformers import TFOpenAIGPTLMHeadModel\n", "\n", "model = TFOpenAIGPTLMHeadModel.from_pretrained(\"openai-gpt\")"