Merge pull request #14 from vi3itor/ch4-errata

Chapter 4: Fix figure numbering and correct typos
Aurélien Geron 2022-06-02 10:02:00 +12:00 committed by GitHub
commit 27f9d4f11e
1 changed file with 9 additions and 9 deletions


@@ -947,7 +947,7 @@
"train_errors = -train_scores.mean(axis=1)\n",
"valid_errors = -valid_scores.mean(axis=1)\n",
"\n",
"plt.figure(figsize=(6, 4))  # extra code – not need, just formatting\n",
"plt.figure(figsize=(6, 4))  # extra code – not needed, just formatting\n",
"plt.plot(train_sizes, train_errors, \"r-+\", linewidth=2, label=\"train\")\n",
"plt.plot(train_sizes, valid_errors, \"b-\", linewidth=3, label=\"valid\")\n",
"\n",
@@ -1124,11 +1124,11 @@
"source": [
"# extra code – this cell generates and saves Figure 4–17\n",
"\n",
"def plot_model(model_class, polynomial, alphas, **model_kargs):\n",
"def plot_model(model_class, polynomial, alphas, **model_kwargs):\n",
" plt.plot(X, y, \"b.\", linewidth=3)\n",
" for alpha, style in zip(alphas, (\"b:\", \"g--\", \"r-\")):\n",
" if alpha > 0:\n",
" model = model_class(alpha, **model_kargs)\n",
" model = model_class(alpha, **model_kwargs)\n",
" else:\n",
" model = LinearRegression()\n",
" if polynomial:\n",
@@ -1875,7 +1875,7 @@
"plt.plot([decision_boundary, decision_boundary], [0, 1], \"k:\", linewidth=2,\n",
" label=\"Decision boundary\")\n",
"\n",
"# extra code – this section beautifies and saves Figure 4–21\n",
"# extra code – this section beautifies and saves Figure 4–23\n",
"plt.arrow(x=decision_boundary, y=0.08, dx=-0.3, dy=0,\n",
" head_width=0.05, head_length=0.1, fc=\"b\", ec=\"b\")\n",
"plt.arrow(x=decision_boundary, y=0.92, dx=0.3, dy=0,\n",
@@ -1951,7 +1951,7 @@
}
],
"source": [
"# extra code – this cell generates and saves Figure 4–22\n",
"# extra code – this cell generates and saves Figure 4–24\n",
"\n",
"X = iris.data[[\"petal length (cm)\", \"petal width (cm)\"]].values\n",
"y = iris.target_names[iris.target] == 'virginica'\n",
@@ -2083,7 +2083,7 @@
}
],
"source": [
"# extra code – this cell generates and saves Figure 4–23\n",
"# extra code – this cell generates and saves Figure 4–25\n",
"\n",
"from matplotlib.colors import ListedColormap\n",
"\n",
@@ -2195,7 +2195,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"The easiest option to split the dataset into a training set, a validation set and a test set would be to use Scikit-Learn's `train_test_split()` function, but again, we want to did this manually:"
"The easiest option to split the dataset into a training set, a validation set and a test set would be to use Scikit-Learn's `train_test_split()` function, but again, we want to do it manually:"
]
},
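As a sketch of what such a manual three-way split might look like (the function name, split ratios, and seed below are assumptions for illustration, not the notebook's actual code):

```python
import numpy as np

def split_dataset(X, y, validation_ratio=0.2, test_ratio=0.2, seed=42):
    """Shuffle the data, then slice it into train / validation / test sets."""
    rng = np.random.default_rng(seed)
    indices = rng.permutation(len(X))  # random but reproducible ordering
    X, y = X[indices], y[indices]
    test_size = int(len(X) * test_ratio)
    valid_size = int(len(X) * validation_ratio)
    train_size = len(X) - valid_size - test_size
    X_train, y_train = X[:train_size], y[:train_size]
    X_valid = X[train_size:train_size + valid_size]
    y_valid = y[train_size:train_size + valid_size]
    X_test, y_test = X[train_size + valid_size:], y[train_size + valid_size:]
    return X_train, y_train, X_valid, y_valid, X_test, y_test
```

Shuffling before slicing matters: without it, sorted or grouped data (such as the Iris dataset, ordered by class) would put entire classes into a single split.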
{
@@ -2227,7 +2227,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"The targets are currently class indices (0, 1 or 2), but we need target class probabilities to train the Softmax Regression model. Each instance will have target class probabilities equal to 0.0 for all classes except for the target class which will have a probability of 1.0 (in other words, the vector of class probabilities for any given instance is a one-hot vector). Let's write a small function to convert the vector of class indices into a matrix containing a one-hot vector for each instance. To understand this code, you need to know that `np.diag(np.ones(n))` creates an n×n matrix full of 0s except for 1s on the main diagonal. Moreover, if `a` in a NumPy array, then `a[[1, 3, 2]]` returns an array with 3 rows equal to `a[1]`, `a[3]` and `a[2]` (this is [advanced NumPy indexing](https://numpy.org/doc/stable/reference/arrays.indexing.html#advanced-indexing))."
"The targets are currently class indices (0, 1 or 2), but we need target class probabilities to train the Softmax Regression model. Each instance will have target class probabilities equal to 0.0 for all classes except for the target class which will have a probability of 1.0 (in other words, the vector of class probabilities for any given instance is a one-hot vector). Let's write a small function to convert the vector of class indices into a matrix containing a one-hot vector for each instance. To understand this code, you need to know that `np.diag(np.ones(n))` creates an n×n matrix full of 0s except for 1s on the main diagonal. Moreover, if `a` is a NumPy array, then `a[[1, 3, 2]]` returns an array with 3 rows equal to `a[1]`, `a[3]` and `a[2]` (this is [advanced NumPy indexing](https://numpy.org/doc/stable/user/basics.indexing.html#advanced-indexing))."
]
},
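The `np.diag` trick described in that cell can be sketched in a few lines (the function name `to_one_hot` is an assumption, not necessarily what the notebook uses):

```python
import numpy as np

def to_one_hot(y):
    """Convert a 1D array of class indices into a matrix of one-hot rows."""
    n_classes = y.max() + 1
    # np.diag(np.ones(n)) builds an n x n identity matrix; advanced indexing
    # with the class indices then selects one row per instance, and row k of
    # the identity matrix is exactly the one-hot vector for class k.
    return np.diag(np.ones(n_classes))[y]
```

For example, `to_one_hot(np.array([0, 2, 1]))` returns a 3×3 matrix whose rows are the one-hot vectors for classes 0, 2 and 1, in that order.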
{
@@ -2662,7 +2662,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"Oh well, still no change in validation acccuracy, but at least early training shortened training a bit."
"Oh well, still no change in validation accuracy, but at least early stopping shortened training a bit."
]
},
{