Update chapter numbers after the SVM chapter goes online

main
Aurélien Geron 2021-10-15 22:18:08 +13:00
parent ce4fccf74c
commit a655f25a65
15 changed files with 41 additions and 41 deletions

View File

@@ -4,14 +4,14 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"**Chapter 6 Decision Trees**"
"**Chapter 5 Decision Trees**"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"_This notebook contains all the sample code and solutions to the exercises in chapter 6._"
"_This notebook contains all the sample code and solutions to the exercises in chapter 5._"
]
},
{

View File

@@ -4,14 +4,14 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"**Chapter 7 Ensemble Learning and Random Forests**"
"**Chapter 6 Ensemble Learning and Random Forests**"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"_This notebook contains all the sample code and solutions to the exercises in chapter 7._"
"_This notebook contains all the sample code and solutions to the exercises in chapter 6._"
]
},
{

View File

@@ -4,14 +4,14 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"**Chapter 8 Dimensionality Reduction**"
"**Chapter 7 Dimensionality Reduction**"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"_This notebook contains all the sample code and solutions to the exercises in chapter 8._"
"_This notebook contains all the sample code and solutions to the exercises in chapter 7._"
]
},
{

View File

@@ -4,14 +4,14 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"**Chapter 9 Unsupervised Learning**"
"**Chapter 8 Unsupervised Learning**"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"_This notebook contains all the sample code and solutions to the exercises in chapter 9._"
"_This notebook contains all the sample code and solutions to the exercises in chapter 8._"
]
},
{

View File

@@ -4,14 +4,14 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"**Chapter 10 Introduction to Artificial Neural Networks with Keras**"
"**Chapter 9 Introduction to Artificial Neural Networks with Keras**"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"_This notebook contains all the sample code and solutions to the exercises in chapter 10._"
"_This notebook contains all the sample code and solutions to the exercises in chapter 9._"
]
},
{

View File

@@ -4,14 +4,14 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"**Chapter 11 Training Deep Neural Networks**"
"**Chapter 10 Training Deep Neural Networks**"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"_This notebook contains all the sample code and solutions to the exercises in chapter 11._"
"_This notebook contains all the sample code and solutions to the exercises in chapter 10._"
]
},
{
@@ -795,7 +795,7 @@
"\n",
"The validation set and the test set are also split this way, but without restricting the number of images.\n",
"\n",
"We will train a model on set A (classification task with 8 classes), and try to reuse it to tackle set B (binary classification). We hope to transfer a little bit of knowledge from task A to task B, since classes in set A (sneakers, ankle boots, coats, t-shirts, etc.) are somewhat similar to classes in set B (sandals and shirts). However, since we are using `Dense` layers, only patterns that occur at the same location can be reused (in contrast, convolutional layers will transfer much better, since learned patterns can be detected anywhere on the image, as we will see in the CNN chapter)."
"We will train a model on set A (classification task with 8 classes), and try to reuse it to tackle set B (binary classification). We hope to transfer a little bit of knowledge from task A to task B, since classes in set A (sneakers, ankle boots, coats, t-shirts, etc.) are somewhat similar to classes in set B (sandals and shirts). However, since we are using `Dense` layers, only patterns that occur at the same location can be reused (in contrast, convolutional layers will transfer much better, since learned patterns can be detected anywhere on the image, as we will see in the chapter 13)."
]
},
{
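
As an aside, here is a minimal sketch of the `Dense`-layer reuse described in the cell above, assuming task A's model was saved as "my_model_A.h5" (the file name, layer sizes, and optimizer settings are illustrative assumptions, not the notebook's exact code):

```python
# A minimal transfer learning sketch, assuming a saved 8-class model for task A.
from tensorflow import keras

model_A = keras.models.load_model("my_model_A.h5")  # assumed file name
# Reuse every layer except the 8-class softmax output, then add a new
# sigmoid output for the binary task B:
model_B_on_A = keras.models.Sequential(model_A.layers[:-1])
model_B_on_A.add(keras.layers.Dense(1, activation="sigmoid"))

# Freeze the reused layers at first, so the new output layer's large initial
# gradient updates do not wreck the transferred weights:
for layer in model_B_on_A.layers[:-1]:
    layer.trainable = False

model_B_on_A.compile(loss="binary_crossentropy",
                     optimizer=keras.optimizers.SGD(learning_rate=1e-3),
                     metrics=["accuracy"])
```

Note that this reuses model A's layer objects in place; in practice you may prefer `keras.models.clone_model(model_A)` plus `set_weights()` so that training model B does not modify model A.
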
@@ -2368,7 +2368,7 @@
"metadata": {},
"source": [
"* *Is the model converging faster than before?* Much faster! The previous model took 27 epochs to reach the lowest validation loss, while the new model achieved that same loss in just 5 epochs and continued to make progress until the 16th epoch. The BN layers stabilized training and allowed us to use a much larger learning rate, so convergence was faster.\n",
"* *Does BN produce a better model?* Yes! The final model is also much better, with 54.0% accuracy instead of 47.6%. It's still not a very good model, but at least it's much better than before (a Convolutional Neural Network would do much better, but that's a different topic, see chapter 14).\n",
"* *Does BN produce a better model?* Yes! The final model is also much better, with 54.0% accuracy instead of 47.6%. It's still not a very good model, but at least it's much better than before (a Convolutional Neural Network would do much better, but that's a different topic, see chapter 13).\n",
"* *How does BN affect training speed?* Although the model converged much faster, each epoch took about 12s instead of 8s, because of the extra computations required by the BN layers. But overall the training time (wall time) was shortened significantly!"
]
},
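
For reference, a rough sketch of the kind of BN model being compared here, assuming 32×32×3 inputs (a CIFAR10-style task, consistent with the accuracies quoted); depth, width, and learning rate are placeholder choices, not the notebook's exact hyperparameters:

```python
# A hedged sketch of a deep MLP with BatchNormalization layers interleaved.
from tensorflow import keras

model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[32, 32, 3]))
model.add(keras.layers.BatchNormalization())
for _ in range(20):
    model.add(keras.layers.Dense(100, kernel_initializer="he_normal"))
    model.add(keras.layers.BatchNormalization())
    model.add(keras.layers.Activation("elu"))  # BN placed before the activation
model.add(keras.layers.Dense(10, activation="softmax"))

# BN stabilizes the gradients, which is what allows the larger learning rate
# mentioned above:
model.compile(loss="sparse_categorical_crossentropy",
              optimizer=keras.optimizers.Nadam(learning_rate=5e-4),
              metrics=["accuracy"])
```
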

View File

@@ -4,14 +4,14 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"**Chapter 12 Custom Models and Training with TensorFlow**"
"**Chapter 11 Custom Models and Training with TensorFlow**"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"_This notebook contains all the sample code and solutions to the exercises in chapter 12._"
"_This notebook contains all the sample code and solutions to the exercises in chapter 11._"
]
},
{
@@ -3622,7 +3622,7 @@
"metadata": {},
"source": [
"## 12. Implement a custom layer that performs _Layer Normalization_\n",
"_We will use this type of layer in Chapter 15 when using Recurrent Neural Networks._"
"_We will use this type of layer in Chapter 14 when using Recurrent Neural Networks._"
]
},
{
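
One way to implement the exercise's layer, following the usual Layer Normalization formulation (normalize across the last axis, then apply a learned scale `alpha` and offset `beta`); this is a sketch, not necessarily the notebook's exact solution:

```python
# A custom Layer Normalization layer: per-instance normalization over the
# last axis, with trainable scale (alpha) and offset (beta) parameters.
import tensorflow as tf
from tensorflow import keras

class LayerNormalization(keras.layers.Layer):
    def __init__(self, eps=0.001, **kwargs):
        super().__init__(**kwargs)
        self.eps = eps

    def build(self, batch_input_shape):
        self.alpha = self.add_weight(name="alpha", shape=batch_input_shape[-1:],
                                     initializer="ones")
        self.beta = self.add_weight(name="beta", shape=batch_input_shape[-1:],
                                    initializer="zeros")
        super().build(batch_input_shape)

    def call(self, X):
        # Mean and variance per instance, across the last axis:
        mean, variance = tf.nn.moments(X, axes=-1, keepdims=True)
        return self.alpha * (X - mean) / tf.sqrt(variance + self.eps) + self.beta
```
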
@@ -3752,7 +3752,7 @@
"metadata": {},
"source": [
"## 13. Train a model using a custom training loop to tackle the Fashion MNIST dataset\n",
"_The Fashion MNIST dataset was introduced in Chapter 10._"
"_The Fashion MNIST dataset was introduced in Chapter 9._"
]
},
{
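
A minimal custom-training-loop sketch for this exercise; the architecture and hyperparameters are illustrative choices rather than the notebook's exact solution:

```python
# A bare-bones custom training loop on Fashion MNIST using GradientTape.
import numpy as np
import tensorflow as tf
from tensorflow import keras

(X_train, y_train), _ = keras.datasets.fashion_mnist.load_data()
X_train = X_train / 255.0  # scale pixels to [0, 1]

model = keras.models.Sequential([
    keras.layers.Flatten(input_shape=[28, 28]),
    keras.layers.Dense(100, activation="relu"),
    keras.layers.Dense(10, activation="softmax"),
])
optimizer = keras.optimizers.Nadam(learning_rate=0.001)
loss_fn = keras.losses.sparse_categorical_crossentropy

n_epochs, batch_size = 5, 32
n_steps = len(X_train) // batch_size
for epoch in range(n_epochs):
    for step in range(n_steps):
        # Sample a random batch (with replacement, for simplicity):
        idx = np.random.randint(len(X_train), size=batch_size)
        X_batch, y_batch = X_train[idx], y_train[idx]
        with tf.GradientTape() as tape:
            y_pred = model(X_batch, training=True)
            loss = tf.reduce_mean(loss_fn(y_batch, y_pred))
        grads = tape.gradient(loss, model.trainable_variables)
        optimizer.apply_gradients(zip(grads, model.trainable_variables))
    print("Epoch {}/{} - loss: {:.4f}".format(epoch + 1, n_epochs, loss.numpy()))
```
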

View File

@@ -4,14 +4,14 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"**Chapter 13 Loading and Preprocessing Data with TensorFlow**"
"**Chapter 12 Loading and Preprocessing Data with TensorFlow**"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"_This notebook contains all the sample code and solutions to the exercises in chapter 13._"
"_This notebook contains all the sample code and solutions to the exercises in chapter 12._"
]
},
{
@@ -1881,7 +1881,7 @@
"\n",
"## 9.\n",
"### a.\n",
"_Exercise: Load the Fashion MNIST dataset (introduced in Chapter 10); split it into a training set, a validation set, and a test set; shuffle the training set; and save each dataset to multiple TFRecord files. Each record should be a serialized `Example` protobuf with two features: the serialized image (use `tf.io.serialize_tensor()` to serialize each image), and the label. Note: for large images, you could use `tf.io.encode_jpeg()` instead. This would save a lot of space, but it would lose a bit of image quality._"
"_Exercise: Load the Fashion MNIST dataset (introduced in Chapter 9); split it into a training set, a validation set, and a test set; shuffle the training set; and save each dataset to multiple TFRecord files. Each record should be a serialized `Example` protobuf with two features: the serialized image (use `tf.io.serialize_tensor()` to serialize each image), and the label. Note: for large images, you could use `tf.io.encode_jpeg()` instead. This would save a lot of space, but it would lose a bit of image quality._"
]
},
{
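
A hedged sketch of the TFRecord-writing step the exercise describes; the sharded file-naming scheme and the `int()` casts are illustrative choices:

```python
# Serialize (image, label) pairs into Example protobufs spread over n_shards
# TFRecord files, round-robin.
import tensorflow as tf
from tensorflow.train import BytesList, Int64List, Feature, Features, Example

def create_example(image, label):
    # Serialize the raw image tensor, as the exercise suggests:
    image_data = tf.io.serialize_tensor(image)
    return Example(features=Features(feature={
        "image": Feature(bytes_list=BytesList(value=[image_data.numpy()])),
        "label": Feature(int64_list=Int64List(value=[int(label)])),
    }))

def write_tfrecords(name, dataset, n_shards=10):
    paths = ["{}.tfrecord-{:05d}-of-{:05d}".format(name, index, n_shards)
             for index in range(n_shards)]
    writers = [tf.io.TFRecordWriter(path) for path in paths]
    for index, (image, label) in dataset.enumerate():
        writers[int(index) % n_shards].write(
            create_example(image, label).SerializeToString())
    for writer in writers:
        writer.close()
    return paths
```
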
@@ -2407,7 +2407,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"Now we are ready to create the `TextVectorization` layer. Its constructor just saves the hyperparameters (`max_vocabulary_size` and `n_oov_buckets`). The `adapt()` method computes the vocabulary using the `get_vocabulary()` function, then it builds a `StaticVocabularyTable` (see Chapter 16 for more details). The `call()` method preprocesses the reviews to get a padded list of words for each review, then it uses the `StaticVocabularyTable` to lookup the index of each word in the vocabulary:"
"Now we are ready to create the `TextVectorization` layer. Its constructor just saves the hyperparameters (`max_vocabulary_size` and `n_oov_buckets`). The `adapt()` method computes the vocabulary using the `get_vocabulary()` function, then it builds a `StaticVocabularyTable` (see Chapter 15 for more details). The `call()` method preprocesses the reviews to get a padded list of words for each review, then it uses the `StaticVocabularyTable` to lookup the index of each word in the vocabulary:"
]
},
{
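
Since the diff omits the code cell itself, here is a self-contained sketch of such a layer; the preprocessing is a simplified stand-in for the notebook's helpers (for example, it does not reserve a dedicated ID for the pad token, which the real notebook takes care of):

```python
# A sketch of a custom TextVectorization layer backed by a
# StaticVocabularyTable with OOV hash buckets.
import tensorflow as tf
from tensorflow import keras

class TextVectorization(keras.layers.Layer):
    def __init__(self, max_vocabulary_size=1000, n_oov_buckets=100, **kwargs):
        super().__init__(**kwargs)
        self.max_vocabulary_size = max_vocabulary_size
        self.n_oov_buckets = n_oov_buckets

    def preprocess(self, X_batch):
        # Lowercase, strip non-letters, and split into a padded word tensor:
        X = tf.strings.lower(X_batch)
        X = tf.strings.regex_replace(X, b"[^a-z]", b" ")
        return tf.strings.split(X).to_tensor(default_value=b"<pad>")

    def adapt(self, data_sample):
        # Build the vocabulary from the most frequent words in the sample,
        # then wrap it in a StaticVocabularyTable with OOV buckets:
        words = tf.reshape(self.preprocess(data_sample), [-1])
        unique_words, _, counts = tf.unique_with_counts(words)
        top = tf.argsort(counts, direction="DESCENDING")[:self.max_vocabulary_size]
        vocab = tf.gather(unique_words, top)
        word_ids = tf.range(tf.size(vocab, out_type=tf.int64))
        init = tf.lookup.KeyValueTensorInitializer(vocab, word_ids)
        self.table = tf.lookup.StaticVocabularyTable(init, self.n_oov_buckets)

    def call(self, inputs):
        # Look up each word's ID (OOV words hash into the extra buckets):
        return self.table.lookup(self.preprocess(inputs))
```

Calling `adapt()` on a representative data sample before using the layer follows the Keras preprocessing-layer convention described above.
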
@@ -2620,7 +2620,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"We get about 73.5% accuracy on the validation set after just the first epoch, but after that the model makes no significant progress. We will do better in Chapter 16. For now the point is just to perform efficient preprocessing using `tf.data` and Keras preprocessing layers."
"We get about 73.5% accuracy on the validation set after just the first epoch, but after that the model makes no significant progress. We will do better in Chapter 15. For now the point is just to perform efficient preprocessing using `tf.data` and Keras preprocessing layers."
]
},
{
@@ -2628,7 +2628,7 @@
"metadata": {},
"source": [
"### e.\n",
"_Exercise: Add an `Embedding` layer and compute the mean embedding for each review, multiplied by the square root of the number of words (see Chapter 16). This rescaled mean embedding can then be passed to the rest of your model._"
"_Exercise: Add an `Embedding` layer and compute the mean embedding for each review, multiplied by the square root of the number of words (see Chapter 15). This rescaled mean embedding can then be passed to the rest of your model._"
]
},
{
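
A sketch of the rescaled mean embedding this exercise asks for, assuming inputs of shape `[batch_size, n_words, embedding_size]` with all-zero vectors marking padding; the vocabulary and embedding sizes are assumptions:

```python
# Mean embedding per review, rescaled by the square root of the word count.
import tensorflow as tf
from tensorflow import keras

def compute_mean_embedding(inputs):
    not_pad = tf.math.count_nonzero(inputs, axis=-1)   # zero vectors = padding
    n_words = tf.math.count_nonzero(not_pad, axis=-1, keepdims=True)
    sqrt_n_words = tf.math.sqrt(tf.cast(n_words, tf.float32))
    # Mean over the word axis, rescaled as the exercise describes:
    return tf.reduce_mean(inputs, axis=1) * sqrt_n_words

vocab_size = 10000 + 1000  # vocabulary + OOV buckets (assumed sizes)
model = keras.models.Sequential([
    keras.layers.Embedding(input_dim=vocab_size, output_dim=20),
    keras.layers.Lambda(compute_mean_embedding),
    keras.layers.Dense(1, activation="sigmoid"),
])
```
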
@@ -2735,7 +2735,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"The model is not better using embeddings (but we will do better in Chapter 16). The pipeline looks fast enough (we optimized it earlier)."
"The model is not better using embeddings (but we will do better in Chapter 15). The pipeline looks fast enough (we optimized it earlier)."
]
},
{

View File

@@ -4,14 +4,14 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"**Chapter 14 Deep Computer Vision Using Convolutional Neural Networks**"
"**Chapter 13 Deep Computer Vision Using Convolutional Neural Networks**"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"_This notebook contains all the sample code and solutions to the exercises in chapter 14._"
"_This notebook contains all the sample code and solutions to the exercises in chapter 13._"
]
},
{

View File

@@ -4,14 +4,14 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"**Chapter 15 Processing Sequences Using RNNs and CNNs**"
"**Chapter 14 Processing Sequences Using RNNs and CNNs**"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"_This notebook contains all the sample code and solutions to the exercises in chapter 15._"
"_This notebook contains all the sample code and solutions to the exercises in chapter 14._"
]
},
{
@@ -1778,7 +1778,7 @@
"source": [
"Now let's create the model:\n",
"\n",
"* We could feed the note values directly to the model, as floats, but this would probably not give good results. Indeed, the relationships between notes are not that simple: for example, if you replace a C3 with a C4, the melody will still sound fine, even though these notes are 12 semi-tones apart (i.e., one octave). Conversely, if you replace a C3 with a C\\#3, it's very likely that the chord will sound horrible, despite these notes being just next to each other. So we will use an `Embedding` layer to convert each note to a small vector representation (see Chapter 16 for more details on embeddings). We will use 5-dimensional embeddings, so the output of this first layer will have a shape of `[batch_size, window_size, 5]`.\n",
"* We could feed the note values directly to the model, as floats, but this would probably not give good results. Indeed, the relationships between notes are not that simple: for example, if you replace a C3 with a C4, the melody will still sound fine, even though these notes are 12 semi-tones apart (i.e., one octave). Conversely, if you replace a C3 with a C\\#3, it's very likely that the chord will sound horrible, despite these notes being just next to each other. So we will use an `Embedding` layer to convert each note to a small vector representation (see Chapter 15 for more details on embeddings). We will use 5-dimensional embeddings, so the output of this first layer will have a shape of `[batch_size, window_size, 5]`.\n",
"* We will then feed this data to a small WaveNet-like neural network, composed of a stack of 4 `Conv1D` layers with doubling dilation rates. We will intersperse these layers with `BatchNormalization` layers for faster better convergence.\n",
"* Then one `LSTM` layer to try to capture long-term patterns.\n",
"* And finally a `Dense` layer to produce the final note probabilities. It will predict one probability for each chorale in the batch, for each time step, and for each possible note (including silence). So the output shape will be `[batch_size, window_size, 47]`."

View File

@@ -4,14 +4,14 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"**Chapter 16 Natural Language Processing with RNNs and Attention**"
"**Chapter 15 Natural Language Processing with RNNs and Attention**"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"_This notebook contains all the sample code and solutions to the exercises in chapter 16._"
"_This notebook contains all the sample code and solutions to the exercises in chapter 15._"
]
},
{
@@ -2608,7 +2608,7 @@
"metadata": {},
"source": [
"There are still a few interesting features from TF-Addons that you may want to look at:\n",
"* Using a `BeamSearchDecoder` rather than a `BasicDecoder` for inference. Instead of outputing the character with the highest probability, this decoder keeps track of the several candidates, and keeps only the most likely sequences of candidates (see chapter 16 in the book for more details).\n",
"* Using a `BeamSearchDecoder` rather than a `BasicDecoder` for inference. Instead of outputing the character with the highest probability, this decoder keeps track of the several candidates, and keeps only the most likely sequences of candidates (see chapter 15 in the book for more details).\n",
"* Setting masks or specifying `sequence_length` if the input or target sequences may have very different lengths.\n",
"* Using a `ScheduledOutputTrainingSampler`, which gives you more flexibility than the `ScheduledEmbeddingTrainingSampler` to decide how to feed the output at time _t_ to the cell at time _t_+1. By default it feeds the outputs directly to cell, without computing the argmax ID and passing it through an embedding layer. Alternatively, you specify a `next_inputs_fn` function that will be used to convert the cell outputs to inputs at the next step."
]
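
For illustration, a rough, self-contained sketch of wiring up a `BeamSearchDecoder` from TF-Addons; the sizes, token IDs, placeholder embedding matrix, and zeroed encoder states are all assumptions made just to produce a runnable snippet, not a real encoder-decoder model:

```python
# Beam search decoding sketch: the decoder keeps the beam_width most likely
# partial sequences at each step instead of a single greedy choice.
import tensorflow as tf
import tensorflow_addons as tfa
from tensorflow import keras

vocab_size, embed_size, units = 100, 32, 64
batch_size, beam_width = 4, 10

decoder_cell = keras.layers.LSTMCell(units)
output_layer = keras.layers.Dense(vocab_size)
embedding_matrix = tf.random.uniform([vocab_size, embed_size])  # stand-in weights

decoder = tfa.seq2seq.BeamSearchDecoder(
    cell=decoder_cell, beam_width=beam_width, output_layer=output_layer,
    maximum_iterations=20)

# The encoder's final state must be tiled once per beam:
encoder_state = [tf.zeros([batch_size, units]), tf.zeros([batch_size, units])]
decoder_initial_state = tfa.seq2seq.tile_batch(encoder_state,
                                               multiplier=beam_width)

start_tokens = tf.fill([batch_size], 1)  # assumed <sos> token ID
outputs, _, _ = decoder(
    embedding_matrix, start_tokens=start_tokens, end_token=0,
    initial_state=decoder_initial_state)
print(outputs.predicted_ids.shape)  # [batch_size, max_time, beam_width]
```
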

View File

@@ -4,14 +4,14 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"**Chapter 17 Autoencoders and GANs**"
"**Chapter 16 Autoencoders and GANs**"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"_This notebook contains all the sample code and solutions to the exercises in chapter 17._"
"_This notebook contains all the sample code and solutions to the exercises in chapter 16._"
]
},
{

View File

@@ -4,14 +4,14 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"**Chapter 18 Reinforcement Learning**"
"**Chapter 17 Reinforcement Learning**"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"_This notebook contains all the sample code and solutions to the exercises in chapter 18._"
"_This notebook contains all the sample code and solutions to the exercises in chapter 17._"
]
},
{

View File

@@ -4,14 +4,14 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"**Chapter 19 Training and Deploying TensorFlow Models at Scale**"
"**Chapter 18 Training and Deploying TensorFlow Models at Scale**"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"_This notebook contains all the sample code and solutions to the exercises in chapter 19._"
"_This notebook contains all the sample code and solutions to the exercises in chapter 18._"
]
},
{

View File

@@ -4,14 +4,14 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"**Chapter 5 Support Vector Machines**"
"**Support Vector Machines**"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"_This notebook contains all the sample code and solutions to the exercises in chapter 5._"
"_This notebook is an extra chapter on Support Vector Machines. It also includes exercises and their solutions at the end._"
]
},
{