handson-ml/04_training_linear_models.i...


{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"**Chapter 4 – Training Models**"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"_This notebook contains all the sample code and solutions to the exercises in chapter 4._"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<table align=\"left\">\n",
" <td>\n",
" <a href=\"https://colab.research.google.com/github/ageron/handson-ml3/blob/main/04_training_linear_models.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>\n",
" </td>\n",
" <td>\n",
" <a target=\"_blank\" href=\"https://kaggle.com/kernels/welcome?src=https://github.com/ageron/handson-ml3/blob/main/04_training_linear_models.ipynb\"><img src=\"https://kaggle.com/static/images/open-in-kaggle.svg\" /></a>\n",
" </td>\n",
"</table>"
]
},
{
"cell_type": "markdown",
"metadata": {
"tags": []
},
"source": [
"# Setup"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"This project requires Python 3.8 or above:"
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [],
"source": [
"import sys\n",
"\n",
"assert sys.version_info >= (3, 8)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"It also requires Scikit-Learn ≥ 1.0.1:"
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {},
"outputs": [],
"source": [
"import sklearn\n",
"\n",
"assert sklearn.__version__ >= \"1.0.1\""
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"As we did in previous chapters, let's define the default font sizes to make the figures prettier:"
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {},
"outputs": [],
"source": [
"import matplotlib.pyplot as plt\n",
"\n",
"plt.rc('font', size=12)\n",
"plt.rc('axes', labelsize=14, titlesize=14)\n",
"plt.rc('legend', fontsize=14)\n",
"plt.rc('xtick', labelsize=10)\n",
"plt.rc('ytick', labelsize=10)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"And let's create the `images/training_linear_models` folder (if it doesn't already exist), and define the `save_fig()` function which is used throughout this notebook to save the figures in high-res for the book:"
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {},
"outputs": [],
"source": [
"from pathlib import Path\n",
"\n",
"IMAGES_PATH = Path() / \"images\" / \"training_linear_models\"\n",
"IMAGES_PATH.mkdir(parents=True, exist_ok=True)\n",
"\n",
"def save_fig(fig_id, tight_layout=True, fig_extension=\"png\", resolution=300):\n",
" path = IMAGES_PATH / f\"{fig_id}.{fig_extension}\"\n",
" if tight_layout:\n",
" plt.tight_layout()\n",
" plt.savefig(path, format=fig_extension, dpi=resolution)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Linear Regression"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## The Normal Equation"
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {},
"outputs": [],
"source": [
"import numpy as np\n",
"\n",
"np.random.seed(42) # to make this code example reproducible\n",
"m = 100 # number of instances\n",
"X = 2 * np.random.rand(m, 1) # column vector\n",
"y = 4 + 3 * X + np.random.randn(m, 1) # column vector"
]
},
{
"cell_type": "code",
"execution_count": 6,
"metadata": {},
"outputs": [],
"source": [
"# not in the book – generates and saves Figure 4–1\n",
"\n",
"import matplotlib.pyplot as plt\n",
"\n",
"plt.figure(figsize=(6, 4))\n",
"plt.plot(X, y, \"b.\")\n",
"plt.xlabel(\"$x_1$\")\n",
"plt.ylabel(\"$y$\", rotation=0)\n",
"plt.axis([0, 2, 0, 15])\n",
"plt.grid()\n",
"save_fig(\"generated_data_plot\")\n",
"plt.show()"
]
},
{
"cell_type": "code",
"execution_count": 7,
"metadata": {},
"outputs": [],
"source": [
"from sklearn.preprocessing import add_dummy_feature\n",
"\n",
"X_b = add_dummy_feature(X) # add x0 = 1 to each instance\n",
"theta_best = np.linalg.inv(X_b.T @ X_b) @ X_b.T @ y"
]
},
{
"cell_type": "code",
"execution_count": 8,
"metadata": {},
"outputs": [],
"source": [
"theta_best"
]
},
{
"cell_type": "code",
"execution_count": 9,
"metadata": {},
"outputs": [],
"source": [
"X_new = np.array([[0], [2]])\n",
"X_new_b = add_dummy_feature(X_new) # add x0 = 1 to each instance\n",
"y_predict = X_new_b @ theta_best\n",
"y_predict"
]
},
{
"cell_type": "code",
"execution_count": 10,
"metadata": {},
"outputs": [],
"source": [
"import matplotlib.pyplot as plt\n",
"\n",
"plt.figure(figsize=(6, 4)) # not in the book – not needed, just formatting\n",
"plt.plot(X_new, y_predict, \"r-\", label=\"Predictions\")\n",
"plt.plot(X, y, \"b.\")\n",
"\n",
"# not in the book – beautifies and saves Figure 4–2\n",
"plt.xlabel(\"$x_1$\")\n",
"plt.ylabel(\"$y$\", rotation=0)\n",
"plt.axis([0, 2, 0, 15])\n",
"plt.grid()\n",
"plt.legend(loc=\"upper left\")\n",
"save_fig(\"linear_model_predictions_plot\")\n",
"\n",
"plt.show()"
]
},
{
"cell_type": "code",
"execution_count": 11,
"metadata": {},
"outputs": [],
"source": [
"from sklearn.linear_model import LinearRegression\n",
"\n",
"lin_reg = LinearRegression()\n",
"lin_reg.fit(X, y)\n",
"lin_reg.intercept_, lin_reg.coef_"
]
},
{
"cell_type": "code",
"execution_count": 12,
"metadata": {},
"outputs": [],
"source": [
"lin_reg.predict(X_new)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The `LinearRegression` class is based on the `scipy.linalg.lstsq()` function (the name stands for \"least squares\"), which you could call directly:"
]
},
{
"cell_type": "code",
"execution_count": 13,
"metadata": {},
"outputs": [],
"source": [
"theta_best_svd, residuals, rank, s = np.linalg.lstsq(X_b, y, rcond=1e-6)\n",
"theta_best_svd"
]
},
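{
"cell_type": "markdown",
"metadata": {},
"source": [
"Not in the book: the cell above calls `np.linalg.lstsq()`; the SciPy routine mentioned in the text, `scipy.linalg.lstsq()`, does the same job. Here is a minimal sketch of the equivalent call (assuming SciPy is installed):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# not in the book – a minimal sketch of the equivalent SciPy call\n",
"from scipy.linalg import lstsq\n",
"\n",
"theta_best_scipy, residuals, rank, s = lstsq(X_b, y)\n",
"theta_best_scipy"
]
},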
{
"cell_type": "markdown",
"metadata": {},
"source": [
"This function computes $\\mathbf{X}^+\\mathbf{y}$, where $\\mathbf{X}^{+}$ is the _pseudoinverse_ of $\\mathbf{X}$ (specifically the Moore-Penrose inverse). You can use `np.linalg.pinv()` to compute the pseudoinverse directly:"
]
},
{
"cell_type": "code",
"execution_count": 14,
"metadata": {},
"outputs": [],
"source": [
"np.linalg.pinv(X_b) @ y"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Gradient Descent\n",
"## Batch Gradient Descent"
]
},
{
"cell_type": "code",
"execution_count": 15,
"metadata": {},
"outputs": [],
"source": [
"eta = 0.1 # learning rate\n",
"n_epochs = 1000\n",
"m = len(X_b) # number of instances\n",
"\n",
"np.random.seed(42)\n",
"theta = np.random.randn(2, 1) # randomly initialized model parameters\n",
"\n",
"for epoch in range(n_epochs):\n",
" gradients = 2 / m * X_b.T @ (X_b @ theta - y)\n",
" theta = theta - eta * gradients"
]
},
{
"cell_type": "markdown",
"metadata": {},
2017-05-29 23:20:14 +02:00
"source": [
"The trained model parameters:"
]
},
{
"cell_type": "code",
"execution_count": 16,
"metadata": {},
"outputs": [],
"source": [
"theta"
]
},
{
"cell_type": "code",
"execution_count": 17,
"metadata": {},
"outputs": [],
"source": [
"# not in the book – generates and saves Figure 4–8\n",
"\n",
"import matplotlib as mpl\n",
"\n",
"def plot_gradient_descent(theta, eta):\n",
" m = len(X_b)\n",
" plt.plot(X, y, \"b.\")\n",
" n_epochs = 1000\n",
" n_shown = 20\n",
" theta_path = []\n",
" for epoch in range(n_epochs):\n",
" if epoch < n_shown:\n",
" y_predict = X_new_b @ theta\n",
" color = mpl.colors.rgb2hex(plt.cm.OrRd(epoch / n_shown + 0.15))\n",
" plt.plot(X_new, y_predict, linestyle=\"solid\", color=color)\n",
" gradients = 2 / m * X_b.T @ (X_b @ theta - y)\n",
" theta = theta - eta * gradients\n",
" theta_path.append(theta)\n",
" plt.xlabel(\"$x_1$\")\n",
" plt.axis([0, 2, 0, 15])\n",
" plt.grid()\n",
" plt.title(fr\"$\\eta = {eta}$\")\n",
" return theta_path\n",
"\n",
"np.random.seed(42)\n",
"theta = np.random.randn(2, 1) # random initialization\n",
"\n",
"plt.figure(figsize=(10, 4))\n",
"plt.subplot(131)\n",
"plot_gradient_descent(theta, eta=0.02)\n",
"plt.ylabel(\"$y$\", rotation=0)\n",
"plt.subplot(132)\n",
"theta_path_bgd = plot_gradient_descent(theta, eta=0.1)\n",
"plt.gca().axes.yaxis.set_ticklabels([])\n",
"plt.subplot(133)\n",
"plt.gca().axes.yaxis.set_ticklabels([])\n",
"plot_gradient_descent(theta, eta=0.5)\n",
"save_fig(\"gradient_descent_plot\")\n",
"plt.show()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Stochastic Gradient Descent"
]
},
{
"cell_type": "code",
"execution_count": 18,
"metadata": {},
"outputs": [],
"source": [
"theta_path_sgd = [] # not in the book – we need to store the path of theta in\n",
" # the parameter space to plot the next figure"
]
},
{
"cell_type": "code",
"execution_count": 19,
"metadata": {},
"outputs": [],
"source": [
"n_epochs = 50\n",
"t0, t1 = 5, 50 # learning schedule hyperparameters\n",
"\n",
"def learning_schedule(t):\n",
" return t0 / (t + t1)\n",
"\n",
"np.random.seed(42)\n",
"theta = np.random.randn(2, 1) # random initialization\n",
"\n",
"n_shown = 20 # not in the book – just needed to generate the figure below\n",
"plt.figure(figsize=(6, 4)) # not in the book – not needed, just formatting\n",
"\n",
"for epoch in range(n_epochs):\n",
" for iteration in range(m):\n",
"\n",
" # not in the book these 4 lines are used to generate the figure\n",
" if epoch == 0 and iteration < n_shown:\n",
" y_predict = X_new_b @ theta\n",
" color = mpl.colors.rgb2hex(plt.cm.OrRd(iteration / n_shown + 0.15))\n",
" plt.plot(X_new, y_predict, color=color)\n",
"\n",
" random_index = np.random.randint(m)\n",
" xi = X_b[random_index : random_index + 1]\n",
" yi = y[random_index : random_index + 1]\n",
" gradients = 2 / 1 * xi.T @ (xi @ theta - yi)\n",
" eta = learning_schedule(epoch * m + iteration)\n",
" theta = theta - eta * gradients\n",
" theta_path_sgd.append(theta) # not in the book to generate the figure\n",
"\n",
"# not in the book – this section beautifies and saves Figure 4–10\n",
"plt.plot(X, y, \"b.\")\n",
"plt.xlabel(\"$x_1$\")\n",
"plt.ylabel(\"$y$\", rotation=0)\n",
"plt.axis([0, 2, 0, 15])\n",
"plt.grid()\n",
"save_fig(\"sgd_plot\")\n",
"plt.show()"
]
},
{
"cell_type": "code",
"execution_count": 20,
"metadata": {
"scrolled": true
},
"outputs": [],
"source": [
"theta"
]
},
{
"cell_type": "code",
"execution_count": 21,
"metadata": {},
"outputs": [],
"source": [
"from sklearn.linear_model import SGDRegressor\n",
"\n",
"sgd_reg = SGDRegressor(max_iter=1000, tol=1e-3, penalty=None, eta0=0.1,\n",
" random_state=42)\n",
"sgd_reg.fit(X, y.ravel()) # y.ravel() because fit() expects 1D targets"
]
},
{
"cell_type": "code",
"execution_count": 22,
"metadata": {},
"outputs": [],
"source": [
"sgd_reg.intercept_, sgd_reg.coef_"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Mini-Batch Gradient Descent"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The code in this section is used to generate the next figure; it is not in the book."
]
},
{
"cell_type": "code",
"execution_count": 23,
"metadata": {},
"outputs": [],
"source": [
"# not in the book – this cell generates and saves Figure 4–11\n",
"\n",
"from math import ceil\n",
"\n",
"n_epochs = 50\n",
"minibatch_size = 20\n",
"n_batches_per_epoch = ceil(m / minibatch_size)\n",
"\n",
"np.random.seed(42)\n",
"theta = np.random.randn(2, 1) # random initialization\n",
"\n",
"t0, t1 = 200, 1000 # learning schedule hyperparameters\n",
"\n",
"def learning_schedule(t):\n",
" return t0 / (t + t1)\n",
"\n",
"theta_path_mgd = []\n",
"for epoch in range(n_epochs):\n",
" shuffled_indices = np.random.permutation(m)\n",
" X_b_shuffled = X_b[shuffled_indices]\n",
" y_shuffled = y[shuffled_indices]\n",
" for iteration in range(0, n_batches_per_epoch):\n",
" idx = iteration * minibatch_size\n",
" xi = X_b_shuffled[idx : idx + minibatch_size]\n",
" yi = y_shuffled[idx : idx + minibatch_size]\n",
" gradients = 2 / minibatch_size * xi.T @ (xi @ theta - yi)\n",
" eta = learning_schedule(iteration)\n",
" theta = theta - eta * gradients\n",
" theta_path_mgd.append(theta)\n",
"\n",
"theta_path_bgd = np.array(theta_path_bgd)\n",
"theta_path_sgd = np.array(theta_path_sgd)\n",
"theta_path_mgd = np.array(theta_path_mgd)\n",
"\n",
"plt.figure(figsize=(7, 4))\n",
"plt.plot(theta_path_sgd[:, 0], theta_path_sgd[:, 1], \"r-s\", linewidth=1,\n",
" label=\"Stochastic\")\n",
"plt.plot(theta_path_mgd[:, 0], theta_path_mgd[:, 1], \"g-+\", linewidth=2,\n",
" label=\"Mini-batch\")\n",
"plt.plot(theta_path_bgd[:, 0], theta_path_bgd[:, 1], \"b-o\", linewidth=3,\n",
" label=\"Batch\")\n",
"plt.legend(loc=\"upper left\")\n",
"plt.xlabel(r\"$\\theta_0$\")\n",
"plt.ylabel(r\"$\\theta_1$ \", rotation=0)\n",
"plt.axis([2.6, 4.6, 2.3, 3.4])\n",
"plt.grid()\n",
"save_fig(\"gradient_descent_paths_plot\")\n",
"plt.show()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Polynomial Regression"
]
},
{
"cell_type": "code",
"execution_count": 24,
"metadata": {},
"outputs": [],
"source": [
"np.random.seed(42)\n",
"m = 100\n",
"X = 6 * np.random.rand(m, 1) - 3\n",
"y = 0.5 * X ** 2 + X + 2 + np.random.randn(m, 1)"
]
},
{
"cell_type": "code",
"execution_count": 25,
"metadata": {},
"outputs": [],
"source": [
"# not in the book – this cell generates and saves Figure 4–12\n",
"plt.figure(figsize=(6, 4))\n",
"plt.plot(X, y, \"b.\")\n",
"plt.xlabel(\"$x_1$\")\n",
"plt.ylabel(\"$y$\", rotation=0)\n",
"plt.axis([-3, 3, 0, 10])\n",
"plt.grid()\n",
"save_fig(\"quadratic_data_plot\")\n",
"plt.show()"
]
},
{
"cell_type": "code",
"execution_count": 26,
"metadata": {},
"outputs": [],
"source": [
"from sklearn.preprocessing import PolynomialFeatures\n",
"\n",
"poly_features = PolynomialFeatures(degree=2, include_bias=False)\n",
"X_poly = poly_features.fit_transform(X)\n",
"X[0]"
]
},
{
"cell_type": "code",
"execution_count": 27,
"metadata": {},
"outputs": [],
"source": [
"X_poly[0]"
]
},
{
"cell_type": "code",
"execution_count": 28,
"metadata": {},
"outputs": [],
"source": [
"lin_reg = LinearRegression()\n",
"lin_reg.fit(X_poly, y)\n",
"lin_reg.intercept_, lin_reg.coef_"
]
},
{
"cell_type": "code",
"execution_count": 29,
"metadata": {},
"outputs": [],
"source": [
"# not in the book – this cell generates and saves Figure 4–13\n",
"\n",
"X_new = np.linspace(-3, 3, 100).reshape(100, 1)\n",
"X_new_poly = poly_features.transform(X_new)\n",
"y_new = lin_reg.predict(X_new_poly)\n",
"\n",
"plt.figure(figsize=(6, 4))\n",
"plt.plot(X, y, \"b.\")\n",
"plt.plot(X_new, y_new, \"r-\", linewidth=2, label=\"Predictions\")\n",
"plt.xlabel(\"$x_1$\")\n",
"plt.ylabel(\"$y$\", rotation=0)\n",
"plt.legend(loc=\"upper left\")\n",
"plt.axis([-3, 3, 0, 10])\n",
"plt.grid()\n",
"save_fig(\"quadratic_predictions_plot\")\n",
"plt.show()"
]
},
{
"cell_type": "code",
"execution_count": 30,
"metadata": {},
"outputs": [],
"source": [
"# not in the book – this cell generates and saves Figure 4–14\n",
"\n",
"from sklearn.preprocessing import StandardScaler\n",
"from sklearn.pipeline import make_pipeline\n",
"\n",
"plt.figure(figsize=(6, 4))\n",
"\n",
"for style, width, degree in ((\"r-+\", 2, 1), (\"b--\", 2, 2), (\"g-\", 1, 300)):\n",
" polybig_features = PolynomialFeatures(degree=degree, include_bias=False)\n",
" std_scaler = StandardScaler()\n",
" lin_reg = LinearRegression()\n",
" polynomial_regression = make_pipeline(polybig_features, std_scaler, lin_reg)\n",
" polynomial_regression.fit(X, y)\n",
" y_newbig = polynomial_regression.predict(X_new)\n",
" label = f\"{degree} degree{'s' if degree > 1 else ''}\"\n",
" plt.plot(X_new, y_newbig, style, label=label, linewidth=width)\n",
"\n",
"plt.plot(X, y, \"b.\", linewidth=3)\n",
"plt.legend(loc=\"upper left\")\n",
"plt.xlabel(\"$x_1$\")\n",
"plt.ylabel(\"$y$\", rotation=0)\n",
"plt.axis([-3, 3, 0, 10])\n",
"plt.grid()\n",
"save_fig(\"high_degree_polynomials_plot\")\n",
"plt.show()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Learning Curves"
]
},
{
"cell_type": "code",
"execution_count": 31,
"metadata": {},
"outputs": [],
"source": [
"from sklearn.model_selection import learning_curve\n",
"\n",
"train_sizes, train_scores, valid_scores = learning_curve(\n",
" LinearRegression(), X, y, train_sizes=np.linspace(0.01, 1.0, 40), cv=5,\n",
" scoring=\"neg_root_mean_squared_error\")\n",
"train_errors = -train_scores.mean(axis=1)\n",
"valid_errors = -valid_scores.mean(axis=1)\n",
"\n",
"plt.figure(figsize=(6, 4)) # not in the book – not needed, just formatting\n",
"plt.plot(train_sizes, train_errors, \"r-+\", linewidth=2, label=\"train\")\n",
"plt.plot(train_sizes, valid_errors, \"b-\", linewidth=3, label=\"valid\")\n",
"\n",
"# not in the book – beautifies and saves Figure 4–15\n",
"plt.xlabel(\"Training set size\")\n",
"plt.ylabel(\"RMSE\")\n",
"plt.grid()\n",
"plt.legend(loc=\"upper right\")\n",
"plt.axis([0, 80, 0, 2.5])\n",
"save_fig(\"underfitting_learning_curves_plot\")\n",
"\n",
"plt.show()"
]
},
{
"cell_type": "code",
"execution_count": 32,
"metadata": {},
"outputs": [],
"source": [
"from sklearn.pipeline import make_pipeline\n",
"\n",
"polynomial_regression = make_pipeline(\n",
" PolynomialFeatures(degree=10, include_bias=False),\n",
" LinearRegression())\n",
"\n",
"train_sizes, train_scores, valid_scores = learning_curve(\n",
" polynomial_regression, X, y, train_sizes=np.linspace(0.01, 1.0, 40), cv=5,\n",
" scoring=\"neg_root_mean_squared_error\")"
]
},
{
"cell_type": "code",
"execution_count": 33,
"metadata": {},
"outputs": [],
"source": [
"# not in the book – generates and saves Figure 4–16\n",
"\n",
"train_errors = -train_scores.mean(axis=1)\n",
"valid_errors = -valid_scores.mean(axis=1)\n",
"\n",
"plt.figure(figsize=(6, 4))\n",
"plt.plot(train_sizes, train_errors, \"r-+\", linewidth=2, label=\"train\")\n",
"plt.plot(train_sizes, valid_errors, \"b-\", linewidth=3, label=\"valid\")\n",
"plt.legend(loc=\"upper right\")\n",
"plt.xlabel(\"Training set size\")\n",
"plt.ylabel(\"RMSE\")\n",
"plt.grid()\n",
"plt.axis([0, 80, 0, 2.5])\n",
"save_fig(\"learning_curves_plot\")\n",
"plt.show()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Regularized Linear Models"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Ridge Regression"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Let's generate a very small and noisy linear dataset:"
]
},
{
"cell_type": "code",
"execution_count": 34,
"metadata": {},
"outputs": [],
"source": [
"# not in the book – we've done this type of generation several times before\n",
"np.random.seed(42)\n",
"m = 20\n",
"X = 3 * np.random.rand(m, 1)\n",
"y = 1 + 0.5 * X + np.random.randn(m, 1) / 1.5\n",
"X_new = np.linspace(0, 3, 100).reshape(100, 1)"
]
},
{
"cell_type": "code",
"execution_count": 35,
"metadata": {},
"outputs": [],
"source": [
"# not in the book – a quick peek at the dataset we just generated\n",
"plt.figure(figsize=(6, 4))\n",
"plt.plot(X, y, \".\")\n",
"plt.xlabel(\"$x_1$\")\n",
"plt.ylabel(\"$y$ \", rotation=0)\n",
"plt.axis([0, 3, 0, 3.5])\n",
"plt.grid()\n",
"plt.show()"
]
},
{
"cell_type": "code",
"execution_count": 36,
"metadata": {},
"outputs": [],
"source": [
"from sklearn.linear_model import Ridge\n",
"\n",
"ridge_reg = Ridge(alpha=1, solver=\"cholesky\")\n",
"ridge_reg.fit(X, y)\n",
"ridge_reg.predict([[1.5]])"
]
},
{
"cell_type": "code",
"execution_count": 37,
"metadata": {},
"outputs": [],
"source": [
"# not in the book – this cell generates and saves Figure 4–17\n",
"\n",
"def plot_model(model_class, polynomial, alphas, **model_kargs):\n",
" plt.plot(X, y, \"b.\", linewidth=3)\n",
" for alpha, style in zip(alphas, (\"b:\", \"g--\", \"r-\")):\n",
" if alpha > 0:\n",
" model = model_class(alpha, **model_kargs)\n",
" else:\n",
" model = LinearRegression()\n",
" if polynomial:\n",
" model = make_pipeline(\n",
" PolynomialFeatures(degree=10, include_bias=False),\n",
" StandardScaler(),\n",
" model)\n",
" model.fit(X, y)\n",
" y_new_regul = model.predict(X_new)\n",
" plt.plot(X_new, y_new_regul, style, linewidth=2,\n",
" label=fr\"$\\alpha = {alpha}$\")\n",
" plt.legend(loc=\"upper left\")\n",
" plt.xlabel(\"$x_1$\")\n",
" plt.axis([0, 3, 0, 3.5])\n",
" plt.grid()\n",
"\n",
"plt.figure(figsize=(9, 3.5))\n",
"plt.subplot(121)\n",
"plot_model(Ridge, polynomial=False, alphas=(0, 10, 100), random_state=42)\n",
"plt.ylabel(\"$y$ \", rotation=0)\n",
"plt.subplot(122)\n",
"plot_model(Ridge, polynomial=True, alphas=(0, 10**-5, 1), random_state=42)\n",
"plt.gca().axes.yaxis.set_ticklabels([])\n",
"save_fig(\"ridge_regression_plot\")\n",
"plt.show()"
]
},
{
"cell_type": "code",
"execution_count": 38,
"metadata": {},
"outputs": [],
"source": [
"sgd_reg = SGDRegressor(penalty=\"l2\", random_state=42)\n",
"sgd_reg.fit(X, y.ravel())\n",
"sgd_reg.predict([[1.5]])"
]
},
{
"cell_type": "code",
"execution_count": 39,
"metadata": {},
"outputs": [],
"source": [
"# not in the book – show that we get roughly the same solution as earlier when\n",
"# we use Stochastic Average GD (solver=\"sag\")\n",
"ridge_reg = Ridge(alpha=1, solver=\"sag\", random_state=42)\n",
"ridge_reg.fit(X, y)\n",
"ridge_reg.predict([[1.5]])"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Lasso Regression"
]
},
{
"cell_type": "code",
"execution_count": 40,
"metadata": {},
"outputs": [],
"source": [
"from sklearn.linear_model import Lasso\n",
"\n",
"lasso_reg = Lasso(alpha=0.1)\n",
"lasso_reg.fit(X, y)\n",
"lasso_reg.predict([[1.5]])"
]
},
{
"cell_type": "code",
"execution_count": 41,
"metadata": {},
"outputs": [],
"source": [
"# not in the book – this cell generates and saves Figure 4–18\n",
"plt.figure(figsize=(9, 3.5))\n",
"plt.subplot(121)\n",
"plot_model(Lasso, polynomial=False, alphas=(0, 0.1, 1), random_state=42)\n",
"plt.ylabel(\"$y$ \", rotation=0)\n",
"plt.subplot(122)\n",
"plot_model(Lasso, polynomial=True, alphas=(0, 1e-2, 1), random_state=42)\n",
"plt.gca().axes.yaxis.set_ticklabels([])\n",
"save_fig(\"lasso_regression_plot\")\n",
"plt.show()"
]
},
{
"cell_type": "code",
"execution_count": 42,
"metadata": {},
"outputs": [],
"source": [
"# not in the book – this BIG cell generates and saves Figure 4–19\n",
"\n",
"t1a, t1b, t2a, t2b = -1, 3, -1.5, 1.5\n",
"\n",
"t1s = np.linspace(t1a, t1b, 500)\n",
"t2s = np.linspace(t2a, t2b, 500)\n",
"t1, t2 = np.meshgrid(t1s, t2s)\n",
"T = np.c_[t1.ravel(), t2.ravel()]\n",
"Xr = np.array([[1, 1], [1, -1], [1, 0.5]])\n",
"yr = 2 * Xr[:, :1] + 0.5 * Xr[:, 1:]\n",
"\n",
"J = (1 / len(Xr) * ((T @ Xr.T - yr.T) ** 2).sum(axis=1)).reshape(t1.shape)\n",
"\n",
"N1 = np.linalg.norm(T, ord=1, axis=1).reshape(t1.shape)\n",
"N2 = np.linalg.norm(T, ord=2, axis=1).reshape(t1.shape)\n",
"\n",
"t_min_idx = np.unravel_index(J.argmin(), J.shape)\n",
"t1_min, t2_min = t1[t_min_idx], t2[t_min_idx]\n",
"\n",
"t_init = np.array([[0.25], [-1]])\n",
"\n",
"def bgd_path(theta, X, y, l1, l2, core=1, eta=0.05, n_iterations=200):\n",
" path = [theta]\n",
" for iteration in range(n_iterations):\n",
" gradients = (core * 2 / len(X) * X.T @ (X @ theta - y)\n",
" + l1 * np.sign(theta) + l2 * theta)\n",
" theta = theta - eta * gradients\n",
" path.append(theta)\n",
" return np.array(path)\n",
"\n",
"fig, axes = plt.subplots(2, 2, sharex=True, sharey=True, figsize=(10.1, 8))\n",
"\n",
"for i, N, l1, l2, title in ((0, N1, 2.0, 0, \"Lasso\"), (1, N2, 0, 2.0, \"Ridge\")):\n",
" JR = J + l1 * N1 + l2 * 0.5 * N2 ** 2\n",
"\n",
" tr_min_idx = np.unravel_index(JR.argmin(), JR.shape)\n",
" t1r_min, t2r_min = t1[tr_min_idx], t2[tr_min_idx]\n",
"\n",
" levels = np.exp(np.linspace(0, 1, 20)) - 1\n",
" levelsJ = levels * (J.max() - J.min()) + J.min()\n",
" levelsJR = levels * (JR.max() - JR.min()) + JR.min()\n",
" levelsN = np.linspace(0, N.max(), 10)\n",
"\n",
" path_J = bgd_path(t_init, Xr, yr, l1=0, l2=0)\n",
" path_JR = bgd_path(t_init, Xr, yr, l1, l2)\n",
" path_N = bgd_path(theta=np.array([[2.0], [0.5]]), X=Xr, y=yr,\n",
" l1=np.sign(l1) / 3, l2=np.sign(l2), core=0)\n",
" ax = axes[i, 0]\n",
" ax.grid()\n",
" ax.axhline(y=0, color=\"k\")\n",
" ax.axvline(x=0, color=\"k\")\n",
" ax.contourf(t1, t2, N / 2.0, levels=levelsN)\n",
" ax.plot(path_N[:, 0], path_N[:, 1], \"y--\")\n",
" ax.plot(0, 0, \"ys\")\n",
" ax.plot(t1_min, t2_min, \"ys\")\n",
" ax.set_title(fr\"$\\ell_{i + 1}$ penalty\")\n",
" ax.axis([t1a, t1b, t2a, t2b])\n",
" if i == 1:\n",
" ax.set_xlabel(r\"$\\theta_1$\")\n",
" ax.set_ylabel(r\"$\\theta_2$\", rotation=0)\n",
"\n",
" ax = axes[i, 1]\n",
" ax.grid()\n",
" ax.axhline(y=0, color=\"k\")\n",
" ax.axvline(x=0, color=\"k\")\n",
" ax.contourf(t1, t2, JR, levels=levelsJR, alpha=0.9)\n",
" ax.plot(path_JR[:, 0], path_JR[:, 1], \"w-o\")\n",
" ax.plot(path_N[:, 0], path_N[:, 1], \"y--\")\n",
" ax.plot(0, 0, \"ys\")\n",
" ax.plot(t1_min, t2_min, \"ys\")\n",
" ax.plot(t1r_min, t2r_min, \"rs\")\n",
" ax.set_title(title)\n",
" ax.axis([t1a, t1b, t2a, t2b])\n",
" if i == 1:\n",
" ax.set_xlabel(r\"$\\theta_1$\")\n",
"\n",
"save_fig(\"lasso_vs_ridge_plot\")\n",
"plt.show()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Elastic Net"
]
},
{
"cell_type": "code",
"execution_count": 43,
"metadata": {},
"outputs": [],
"source": [
"from sklearn.linear_model import ElasticNet\n",
"\n",
"elastic_net = ElasticNet(alpha=0.1, l1_ratio=0.5)\n",
"elastic_net.fit(X, y)\n",
"elastic_net.predict([[1.5]])"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Early Stopping"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Let's go back to the quadratic dataset we used earlier:"
2016-09-27 16:39:16 +02:00
]
},
{
"cell_type": "code",
"execution_count": 44,
"metadata": {},
"outputs": [],
"source": [
"# not in the book – this is the same code as earlier\n",
"np.random.seed(42)\n",
"m = 100\n",
"X = 6 * np.random.rand(m, 1) - 3\n",
"y = 0.5 * X ** 2 + X + 2 + np.random.randn(m, 1)\n",
"X_train, y_train = X[: m // 2], y[: m // 2, 0]\n",
"X_valid, y_valid = X[m // 2 :], y[m // 2 :, 0]"
2016-09-27 16:39:16 +02:00
]
},
{
"cell_type": "code",
"execution_count": 45,
"metadata": {},
"outputs": [],
"source": [
"from copy import deepcopy\n",
"from sklearn.metrics import mean_squared_error\n",
"from sklearn.preprocessing import StandardScaler\n",
"\n",
"preprocessing = make_pipeline(PolynomialFeatures(degree=90, include_bias=False),\n",
" StandardScaler())\n",
"X_train_prep = preprocessing.fit_transform(X_train)\n",
"X_valid_prep = preprocessing.transform(X_valid)\n",
"sgd_reg = SGDRegressor(penalty=None, eta0=0.002, random_state=42)\n",
"n_epochs = 500\n",
"best_valid_rmse = float('inf')\n",
"train_errors, val_errors = [], [] # not in the book – it's for the figure below\n",
"\n",
"for epoch in range(n_epochs):\n",
" sgd_reg.partial_fit(X_train_prep, y_train)\n",
" y_valid_predict = sgd_reg.predict(X_valid_prep)\n",
" val_error = mean_squared_error(y_valid, y_valid_predict, squared=False)\n",
" if val_error < best_valid_rmse:\n",
" best_valid_rmse = val_error\n",
" best_model = deepcopy(sgd_reg)\n",
"\n",
" # not in the book we evaluate the train error and save it for the figure\n",
" y_train_predict = sgd_reg.predict(X_train_prep)\n",
" train_error = mean_squared_error(y_train, y_train_predict, squared=False)\n",
" val_errors.append(val_error)\n",
" train_errors.append(train_error)\n",
"\n",
"# not in the book – this section generates and saves Figure 4–20\n",
"best_epoch = np.argmin(val_errors)\n",
"plt.figure(figsize=(6, 4))\n",
"plt.annotate('Best model',\n",
" xy=(best_epoch, best_valid_rmse),\n",
" xytext=(best_epoch, best_valid_rmse + 0.5),\n",
" ha=\"center\",\n",
" arrowprops=dict(facecolor='black', shrink=0.05))\n",
"plt.plot([0, n_epochs], [best_valid_rmse, best_valid_rmse], \"k:\", linewidth=2)\n",
"plt.plot(val_errors, \"b-\", linewidth=3, label=\"Validation set\")\n",
"plt.plot(best_epoch, best_valid_rmse, \"bo\")\n",
"plt.plot(train_errors, \"r--\", linewidth=2, label=\"Training set\")\n",
"plt.legend(loc=\"upper right\")\n",
"plt.xlabel(\"Epoch\")\n",
"plt.ylabel(\"RMSE\")\n",
"plt.axis([0, n_epochs, 0, 3.5])\n",
"plt.grid()\n",
"save_fig(\"early_stopping_plot\")\n",
"plt.show()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Logistic Regression"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Estimating Probabilities"
]
},
{
"cell_type": "code",
"execution_count": 46,
"metadata": {},
"outputs": [],
"source": [
"# not in the book – generates and saves Figure 4–21\n",
"\n",
"lim = 6\n",
"t = np.linspace(-lim, lim, 100)\n",
"sig = 1 / (1 + np.exp(-t))\n",
"\n",
"plt.figure(figsize=(8, 3))\n",
"plt.plot([-lim, lim], [0, 0], \"k-\")\n",
"plt.plot([-lim, lim], [0.5, 0.5], \"k:\")\n",
"plt.plot([-lim, lim], [1, 1], \"k:\")\n",
"plt.plot([0, 0], [-1.1, 1.1], \"k-\")\n",
"plt.plot(t, sig, \"b-\", linewidth=2, label=r\"$\\sigma(t) = \\dfrac{1}{1 + e^{-t}}$\")\n",
"plt.xlabel(\"t\")\n",
"plt.legend(loc=\"upper left\")\n",
"plt.axis([-lim, lim, -0.1, 1.1])\n",
"plt.gca().set_yticks([0, 0.25, 0.5, 0.75, 1])\n",
"plt.grid()\n",
"save_fig(\"logistic_function_plot\")\n",
"plt.show()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Decision Boundaries"
]
},
{
"cell_type": "code",
"execution_count": 47,
"metadata": {},
"outputs": [],
"source": [
"from sklearn.datasets import load_iris\n",
"\n",
"iris = load_iris(as_frame=True)\n",
"list(iris)"
]
},
{
"cell_type": "code",
"execution_count": 48,
"metadata": {},
"outputs": [],
"source": [
"print(iris.DESCR) # not in the book – it's a bit too long"
]
},
{
"cell_type": "code",
"execution_count": 49,
"metadata": {},
"outputs": [],
"source": [
"iris.data.head(3)"
]
},
{
"cell_type": "code",
"execution_count": 50,
"metadata": {},
"outputs": [],
"source": [
"iris.target.head(3) # note that the instances are not shuffled"
]
},
{
"cell_type": "code",
"execution_count": 51,
"metadata": {},
"outputs": [],
"source": [
"iris.target_names"
]
},
{
"cell_type": "code",
"execution_count": 52,
"metadata": {},
"outputs": [],
"source": [
"from sklearn.linear_model import LogisticRegression\n",
"from sklearn.model_selection import train_test_split\n",
"\n",
"X = iris.data[[\"petal width (cm)\"]].values\n",
"y = iris.target_names[iris.target] == 'virginica'\n",
"X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)\n",
"\n",
"log_reg = LogisticRegression(random_state=42)\n",
"log_reg.fit(X_train, y_train)"
]
},
{
"cell_type": "code",
"execution_count": 53,
"metadata": {},
"outputs": [],
"source": [
"X_new = np.linspace(0, 3, 1000).reshape(-1, 1) # reshape to get a column vector\n",
"y_proba = log_reg.predict_proba(X_new)\n",
"decision_boundary = X_new[y_proba[:, 1] >= 0.5][0, 0]\n",
"\n",
"plt.figure(figsize=(8, 3)) # not in the book – not needed, just formatting\n",
"plt.plot(X_new, y_proba[:, 0], \"b--\", linewidth=2,\n",
" label=\"Not Iris virginica proba\")\n",
"plt.plot(X_new, y_proba[:, 1], \"g-\", linewidth=2, label=\"Iris virginica proba\")\n",
"plt.plot([decision_boundary, decision_boundary], [0, 1], \"k:\", linewidth=2,\n",
" label=\"Decision boundary\")\n",
"\n",
"# not in the book – this section beautifies and saves Figure 4–21\n",
"plt.arrow(x=decision_boundary, y=0.08, dx=-0.3, dy=0,\n",
" head_width=0.05, head_length=0.1, fc=\"b\", ec=\"b\")\n",
"plt.arrow(x=decision_boundary, y=0.92, dx=0.3, dy=0,\n",
" head_width=0.05, head_length=0.1, fc=\"g\", ec=\"g\")\n",
"plt.plot(X_train[y_train == 0], y_train[y_train == 0], \"bs\")\n",
"plt.plot(X_train[y_train == 1], y_train[y_train == 1], \"g^\")\n",
"plt.xlabel(\"Petal width (cm)\")\n",
"plt.ylabel(\"Probability\")\n",
"plt.legend(loc=\"center left\")\n",
"plt.axis([0, 3, -0.02, 1.02])\n",
"plt.grid()\n",
"save_fig(\"logistic_regression_plot\")\n",
"\n",
"plt.show()"
]
},
{
"cell_type": "code",
"execution_count": 54,
"metadata": {},
"outputs": [],
"source": [
"decision_boundary"
]
},
{
"cell_type": "code",
"execution_count": 55,
"metadata": {},
"outputs": [],
"source": [
"log_reg.predict([[1.7], [1.5]])"
]
},
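{
"cell_type": "markdown",
"metadata": {},
"source": [
"Not in the book: as a quick sanity check linking this back to the sigmoid function plotted earlier, the estimated probability is just the sigmoid of the model's decision score, so applying the sigmoid to `decision_function()` manually should match `predict_proba()`. A minimal sketch:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# not in the book – predict_proba() is the sigmoid of the decision score\n",
"scores = log_reg.decision_function(X_new[:3])\n",
"manual_proba = 1 / (1 + np.exp(-scores))\n",
"np.allclose(manual_proba, log_reg.predict_proba(X_new[:3])[:, 1])"
]
},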
{
"cell_type": "code",
"execution_count": 56,
"metadata": {},
"outputs": [],
"source": [
"# not in the book – this cell generates and saves Figure 4–22\n",
"\n",
"X = iris.data[[\"petal length (cm)\", \"petal width (cm)\"]].values\n",
"y = iris.target_names[iris.target] == 'virginica'\n",
"X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)\n",
"\n",
"log_reg = LogisticRegression(C=2, random_state=42)\n",
"log_reg.fit(X_train, y_train)\n",
"\n",
"# for the contour plot\n",
"x0, x1 = np.meshgrid(np.linspace(2.9, 7, 500).reshape(-1, 1),\n",
" np.linspace(0.8, 2.7, 200).reshape(-1, 1))\n",
"X_new = np.c_[x0.ravel(), x1.ravel()] # one instance per point on the figure\n",
"y_proba = log_reg.predict_proba(X_new)\n",
"zz = y_proba[:, 1].reshape(x0.shape)\n",
"\n",
"# for the decision boundary\n",
"left_right = np.array([2.9, 7])\n",
"boundary = -((log_reg.coef_[0, 0] * left_right + log_reg.intercept_[0])\n",
" / log_reg.coef_[0, 1])\n",
"\n",
"plt.figure(figsize=(10, 4))\n",
"plt.plot(X_train[y_train == 0, 0], X_train[y_train == 0, 1], \"bs\")\n",
"plt.plot(X_train[y_train == 1, 0], X_train[y_train == 1, 1], \"g^\")\n",
"contour = plt.contour(x0, x1, zz, cmap=plt.cm.brg)\n",
"plt.clabel(contour, inline=1)\n",
"plt.plot(left_right, boundary, \"k--\", linewidth=3)\n",
"plt.text(3.5, 1.27, \"Not Iris virginica\", color=\"b\", ha=\"center\")\n",
"plt.text(6.5, 2.3, \"Iris virginica\", color=\"g\", ha=\"center\")\n",
"plt.xlabel(\"Petal length\")\n",
"plt.ylabel(\"Petal width\")\n",
"plt.axis([2.9, 7, 0.8, 2.7])\n",
"plt.grid()\n",
"save_fig(\"logistic_regression_contour_plot\")\n",
"plt.show()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Softmax Regression"
]
},
{
"cell_type": "code",
"execution_count": 57,
"metadata": {},
"outputs": [],
"source": [
"X = iris.data[[\"petal length (cm)\", \"petal width (cm)\"]].values\n",
"y = iris[\"target\"]\n",
"X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)\n",
"\n",
"softmax_reg = LogisticRegression(C=30, random_state=42)\n",
"softmax_reg.fit(X_train, y_train)"
]
},
{
"cell_type": "code",
"execution_count": 58,
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"softmax_reg.predict([[5, 2]])"
]
},
{
"cell_type": "code",
"execution_count": 59,
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"softmax_reg.predict_proba([[5, 2]]).round(2)"
]
},
{
"cell_type": "code",
"execution_count": 60,
"metadata": {},
"outputs": [],
"source": [
"# not in the book – this cell generates and saves Figure 4–23\n",
"\n",
"from matplotlib.colors import ListedColormap\n",
"\n",
"custom_cmap = ListedColormap([\"#fafab0\", \"#9898ff\", \"#a0faa0\"])\n",
"\n",
"x0, x1 = np.meshgrid(np.linspace(0, 8, 500).reshape(-1, 1),\n",
" np.linspace(0, 3.5, 200).reshape(-1, 1))\n",
"X_new = np.c_[x0.ravel(), x1.ravel()]\n",
"\n",
"y_proba = softmax_reg.predict_proba(X_new)\n",
"y_predict = softmax_reg.predict(X_new)\n",
"\n",
"zz1 = y_proba[:, 1].reshape(x0.shape)\n",
"zz = y_predict.reshape(x0.shape)\n",
"\n",
"plt.figure(figsize=(10, 4))\n",
"plt.plot(X[y == 2, 0], X[y == 2, 1], \"g^\", label=\"Iris virginica\")\n",
"plt.plot(X[y == 1, 0], X[y == 1, 1], \"bs\", label=\"Iris versicolor\")\n",
"plt.plot(X[y == 0, 0], X[y == 0, 1], \"yo\", label=\"Iris setosa\")\n",
"\n",
"plt.contourf(x0, x1, zz, cmap=custom_cmap)\n",
"contour = plt.contour(x0, x1, zz1, cmap=\"hot\")\n",
"plt.clabel(contour, inline=1)\n",
"plt.xlabel(\"Petal length\")\n",
"plt.ylabel(\"Petal width\")\n",
"plt.legend(loc=\"center left\")\n",
"plt.axis([0.5, 7, 0, 3.5])\n",
"plt.grid()\n",
"save_fig(\"softmax_regression_contour_plot\")\n",
"plt.show()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Exercise solutions"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 1. to 11."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"1. If you have a training set with millions of features you can use Stochastic Gradient Descent or Mini-batch Gradient Descent, and perhaps Batch Gradient Descent if the training set fits in memory. But you cannot use the Normal Equation or the SVD approach because the computational complexity grows quickly (more than quadratically) with the number of features.\n",
"2. If the features in your training set have very different scales, the cost function will have the shape of an elongated bowl, so the Gradient Descent algorithms will take a long time to converge. To solve this you should scale the data before training the model. Note that the Normal Equation or SVD approach will work just fine without scaling. Moreover, regularized models may converge to a suboptimal solution if the features are not scaled: since regularization penalizes large weights, features with smaller values will tend to be ignored compared to features with larger values.\n",
"3. Gradient Descent cannot get stuck in a local minimum when training a Logistic Regression model because the cost function is convex. _Convex_ means that if you draw a straight line between any two points on the curve, the line never crosses the curve.\n",
"4. If the optimization problem is convex (such as Linear Regression or Logistic Regression), and assuming the learning rate is not too high, then all Gradient Descent algorithms will approach the global optimum and end up producing fairly similar models. However, unless you gradually reduce the learning rate, Stochastic GD and Mini-batch GD will never truly converge; instead, they will keep jumping back and forth around the global optimum. This means that even if you let them run for a very long time, these Gradient Descent algorithms will produce slightly different models.\n",
"5. If the validation error consistently goes up after every epoch, then one possibility is that the learning rate is too high and the algorithm is diverging. If the training error also goes up, then this is clearly the problem and you should reduce the learning rate. However, if the training error is not going up, then your model is overfitting the training set and you should stop training.\n",
"6. Due to their random nature, neither Stochastic Gradient Descent nor Mini-batch Gradient Descent is guaranteed to make progress at every single training iteration. So if you immediately stop training when the validation error goes up, you may stop much too early, before the optimum is reached. A better option is to save the model at regular intervals; then, when it has not improved for a long time (meaning it will probably never beat the record), you can revert to the best saved model.\n",
"7. Stochastic Gradient Descent has the fastest training iteration since it considers only one training instance at a time, so it is generally the first to reach the vicinity of the global optimum (or Mini-batch GD with a very small mini-batch size). However, only Batch Gradient Descent will actually converge, given enough training time. As mentioned, Stochastic GD and Mini-batch GD will bounce around the optimum, unless you gradually reduce the learning rate.\n",
"8. If the validation error is much higher than the training error, this is likely because your model is overfitting the training set. One way to try to fix this is to reduce the polynomial degree: a model with fewer degrees of freedom is less likely to overfit. Another thing you can try is to regularize the model—for example, by adding an ℓ₂ penalty (Ridge) or an ℓ₁ penalty (Lasso) to the cost function. This will also reduce the degrees of freedom of the model. Lastly, you can try to increase the size of the training set.\n",
"9. If both the training error and the validation error are almost equal and fairly high, the model is likely underfitting the training set, which means it has a high bias. You should try reducing the regularization hyperparameter _α_.\n",
"10. Let's see:\n",
" * A model with some regularization typically performs better than a model without any regularization, so you should generally prefer Ridge Regression over plain Linear Regression.\n",
" * Lasso Regression uses an ℓ₁ penalty, which tends to push the weights down to exactly zero. This leads to sparse models, where all weights are zero except for the most important weights. This is a way to perform feature selection automatically, which is good if you suspect that only a few features actually matter. When you are not sure, you should prefer Ridge Regression.\n",
" * Elastic Net is generally preferred over Lasso since Lasso may behave erratically in some cases (when several features are strongly correlated or when there are more features than training instances). However, it does add an extra hyperparameter to tune. If you want Lasso without the erratic behavior, you can just use Elastic Net with an `l1_ratio` close to 1.\n",
"11. If you want to classify pictures as outdoor/indoor and daytime/nighttime, since these are not exclusive classes (i.e., all four combinations are possible) you should train two Logistic Regression classifiers."
]
},
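{
"cell_type": "markdown",
"metadata": {},
"source": [
"Not in the book: a minimal sketch for the answer to exercise 11, using randomly generated dummy features and labels purely to illustrate training two independent binary classifiers (in practice, X would contain real image features and the labels would come from the dataset):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# not in the book – dummy data, only to illustrate training two binary classifiers\n",
"from sklearn.linear_model import LogisticRegression\n",
"\n",
"np.random.seed(42)\n",
"X_images = np.random.rand(100, 5)  # hypothetical image features\n",
"y_outdoor = np.random.rand(100) > 0.5  # hypothetical outdoor/indoor labels\n",
"y_daytime = np.random.rand(100) > 0.5  # hypothetical daytime/nighttime labels\n",
"\n",
"outdoor_clf = LogisticRegression().fit(X_images, y_outdoor)\n",
"daytime_clf = LogisticRegression().fit(X_images, y_daytime)\n",
"outdoor_clf.predict(X_images[:3]), daytime_clf.predict(X_images[:3])"
]
},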
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 12. Batch Gradient Descent with early stopping for Softmax Regression\n",
"Exercise: _Implement Batch Gradient Descent with early stopping for Softmax Regression without using Scikit-Learn, only NumPy._"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Let's start by loading the data. We will just reuse the Iris dataset we loaded earlier."
]
},
{
"cell_type": "code",
"execution_count": 61,
"metadata": {},
"outputs": [],
"source": [
"X = iris.data[[\"petal length (cm)\", \"petal width (cm)\"]].values\n",
"y = iris[\"target\"].values"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We need to add the bias term for every instance ($x_0 = 1$). The easiest option to do this would be to use Scikit-Learn's `add_dummy_feature()` function, but the point of this exercise is to get a better understanding of the algorithms by implementing them manually. So here is one possible implementation:"
]
},
{
"cell_type": "code",
"execution_count": 62,
"metadata": {},
"outputs": [],
"source": [
"X_with_bias = np.c_[np.ones(len(X)), X]"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The easiest option to split the dataset into a training set, a validation set and a test set would be to use Scikit-Learn's `train_test_split()` function, but again, we want to do this manually:"
]
},
{
"cell_type": "code",
2021-11-21 05:36:22 +01:00
"execution_count": 63,
"metadata": {},
"outputs": [],
"source": [
"test_ratio = 0.2\n",
"validation_ratio = 0.2\n",
"total_size = len(X_with_bias)\n",
"\n",
"test_size = int(total_size * test_ratio)\n",
"validation_size = int(total_size * validation_ratio)\n",
"train_size = total_size - test_size - validation_size\n",
"\n",
"np.random.seed(42)\n",
"rnd_indices = np.random.permutation(total_size)\n",
"\n",
"X_train = X_with_bias[rnd_indices[:train_size]]\n",
"y_train = y[rnd_indices[:train_size]]\n",
"X_valid = X_with_bias[rnd_indices[train_size:-test_size]]\n",
"y_valid = y[rnd_indices[train_size:-test_size]]\n",
"X_test = X_with_bias[rnd_indices[-test_size:]]\n",
"y_test = y[rnd_indices[-test_size:]]"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The targets are currently class indices (0, 1 or 2), but we need target class probabilities to train the Softmax Regression model. Each instance will have target class probabilities equal to 0.0 for all classes except for the target class which will have a probability of 1.0 (in other words, the vector of class probabilities for any given instance is a one-hot vector). Let's write a small function to convert the vector of class indices into a matrix containing a one-hot vector for each instance. To understand this code, you need to know that `np.diag(np.ones(n))` creates an n×n matrix full of 0s except for 1s on the main diagonal. Moreover, if `a` is a NumPy array, then `a[[1,3,2]]` returns an array with 3 rows equal to `a[1]`, `a[3]` and `a[2]` (this is [advanced NumPy indexing](https://numpy.org/doc/stable/reference/arrays.indexing.html#advanced-indexing))."
]
},
{
"cell_type": "code",
"execution_count": 64,
"metadata": {},
"outputs": [],
"source": [
"def to_one_hot(y):\n",
2021-11-03 23:35:15 +01:00
" return np.diag(np.ones(y.max() + 1))[y]"
2017-05-29 23:20:14 +02:00
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Let's test this function on the first 10 instances:"
]
},
{
"cell_type": "code",
2021-11-21 05:36:22 +01:00
"execution_count": 65,
"metadata": {},
2017-05-29 23:20:14 +02:00
"outputs": [],
"source": [
"y_train[:10]"
]
},
{
"cell_type": "code",
"execution_count": 66,
"metadata": {},
"outputs": [],
"source": [
"to_one_hot(y_train[:10])"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Looks good, so let's create the target class probabilities matrix for the training set, the validation set, and the test set:"
]
},
{
"cell_type": "code",
"execution_count": 67,
"metadata": {},
"outputs": [],
"source": [
"Y_train_one_hot = to_one_hot(y_train)\n",
"Y_valid_one_hot = to_one_hot(y_valid)\n",
"Y_test_one_hot = to_one_hot(y_test)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now let's scale the inputs. We compute the mean and standard deviation of each feature on the training set (except for the bias feature), then we center and scale each feature in the training set, the validation set, and the test set:"
]
},
{
"cell_type": "code",
"execution_count": 68,
"metadata": {},
"outputs": [],
"source": [
"mean = X_train[:, 1:].mean(axis=0)\n",
"std = X_train[:, 1:].std(axis=0)\n",
"X_train[:, 1:] = (X_train[:, 1:] - mean) / std\n",
"X_valid[:, 1:] = (X_valid[:, 1:] - mean) / std\n",
"X_test[:, 1:] = (X_test[:, 1:] - mean) / std"
]
},
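{
"cell_type": "markdown",
"metadata": {},
"source": [
"Not in the book: a quick sanity check that this manual scaling matches what Scikit-Learn's `StandardScaler` would have produced on the same (unscaled) training features:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# not in the book – sanity check against StandardScaler (X_with_bias is still unscaled)\n",
"from sklearn.preprocessing import StandardScaler\n",
"\n",
"scaler = StandardScaler()\n",
"X_train_scaled = scaler.fit_transform(X_with_bias[rnd_indices[:train_size], 1:])\n",
"np.allclose(X_train[:, 1:], X_train_scaled)"
]
},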
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now let's implement the Softmax function. Recall that it is defined by the following equation:\n",
"\n",
"$\\sigma\\left(\\mathbf{s}(\\mathbf{x})\\right)_k = \\dfrac{\\exp\\left(s_k(\\mathbf{x})\\right)}{\\sum\\limits_{j=1}^{K}{\\exp\\left(s_j(\\mathbf{x})\\right)}}$"
]
},
{
"cell_type": "code",
"execution_count": 69,
"metadata": {},
"outputs": [],
"source": [
"def softmax(logits):\n",
" exps = np.exp(logits)\n",
" exp_sums = exps.sum(axis=1, keepdims=True)\n",
" return exps / exp_sums"
]
},
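{
"cell_type": "markdown",
"metadata": {},
"source": [
"Not in the book: with very large logits, `np.exp()` can overflow. A common variation subtracts each row's maximum logit before exponentiating, which leaves the softmax output unchanged but avoids the overflow. Here is a minimal sketch:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# not in the book – a numerically stabler variant of the softmax function\n",
"def softmax_stable(logits):\n",
"    shifted = logits - logits.max(axis=1, keepdims=True)  # same output, no overflow\n",
"    exps = np.exp(shifted)\n",
"    return exps / exps.sum(axis=1, keepdims=True)\n",
"\n",
"# both agree on ordinary logits; only the stable one handles very large logits\n",
"softmax(np.array([[1., 2., 3.]])), softmax_stable(np.array([[1000., 1001., 1002.]]))"
]
},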
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We are almost ready to start training. Let's define the number of inputs and outputs:"
]
},
{
"cell_type": "code",
"execution_count": 70,
"metadata": {},
"outputs": [],
"source": [
"n_inputs = X_train.shape[1] # == 3 (2 features plus the bias term)\n",
"n_outputs = len(np.unique(y_train)) # == 3 (there are 3 iris classes)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now here comes the hardest part: training! Theoretically, it's simple: it's just a matter of translating the math equations into Python code. But in practice, it can be quite tricky: in particular, it's easy to mix up the order of the terms, or the indices. You can even end up with code that looks like it's working but is actually not computing exactly the right thing. When unsure, you should write down the shape of each term in the equation and make sure the corresponding terms in your code match closely. It can also help to evaluate each term independently and print them out. The good news is that you won't have to do this every day, since all this is well implemented by Scikit-Learn, but it will help you understand what's going on under the hood.\n",
"\n",
"So the equations we will need are the cost function:\n",
"\n",
"$J(\\mathbf{\\Theta}) =\n",
"- \\dfrac{1}{m}\\sum\\limits_{i=1}^{m}\\sum\\limits_{k=1}^{K}{y_k^{(i)}\\log\\left(\\hat{p}_k^{(i)}\\right)}$\n",
"\n",
"And the equation for the gradients:\n",
"\n",
"$\\nabla_{\\mathbf{\\theta}^{(k)}} \\, J(\\mathbf{\\Theta}) = \\dfrac{1}{m} \\sum\\limits_{i=1}^{m}{ \\left ( \\hat{p}^{(i)}_k - y_k^{(i)} \\right ) \\mathbf{x}^{(i)}}$\n",
"\n",
"Note that $\\log\\left(\\hat{p}_k^{(i)}\\right)$ may not be computable if $\\hat{p}_k^{(i)} = 0$. So we will add a tiny value $\\epsilon$ to $\\log\\left(\\hat{p}_k^{(i)}\\right)$ to avoid getting `nan` values."
]
},
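{
"cell_type": "markdown",
"metadata": {},
"source": [
"Following the advice above, here is an optional shape check (an extra cell, not part of the original code): every term should have the shape written in the comments, and the gradient matrix must have the same shape as `Theta`, namely (`n_inputs`, `n_outputs`)."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Optional shape check using a throwaway Theta_check (name used only here)\n",
"Theta_check = np.random.randn(n_inputs, n_outputs)\n",
"Y_proba_check = softmax(X_train @ Theta_check)  # (m, n_outputs)\n",
"error_check = Y_proba_check - Y_train_one_hot  # (m, n_outputs)\n",
"gradients_check = X_train.T @ error_check / len(X_train)  # (n_inputs, n_outputs)\n",
"X_train.shape, Y_proba_check.shape, error_check.shape, gradients_check.shape"
]
},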
{
"cell_type": "code",
"execution_count": 71,
"metadata": {},
"outputs": [],
"source": [
"eta = 0.5\n",
"n_epochs = 5001\n",
"m = len(X_train)\n",
"epsilon = 1e-5\n",
"\n",
"np.random.seed(42)\n",
"Theta = np.random.randn(n_inputs, n_outputs)\n",
"\n",
"for epoch in range(n_epochs):\n",
"    logits = X_train @ Theta\n",
"    Y_proba = softmax(logits)\n",
"    if epoch % 1000 == 0:\n",
"        Y_proba_valid = softmax(X_valid @ Theta)\n",
"        xentropy_losses = -(Y_valid_one_hot * np.log(Y_proba_valid + epsilon))\n",
"        print(epoch, xentropy_losses.sum(axis=1).mean())\n",
"    error = Y_proba - Y_train_one_hot\n",
"    gradients = 1 / m * X_train.T @ error\n",
"    Theta = Theta - eta * gradients"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"And that's it! The Softmax model is trained. Let's look at the model parameters:"
]
},
{
"cell_type": "code",
"execution_count": 72,
"metadata": {},
"outputs": [],
"source": [
"Theta"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Let's make predictions for the validation set and check the accuracy score:"
]
},
{
"cell_type": "code",
"execution_count": 73,
"metadata": {},
"outputs": [],
"source": [
"logits = X_valid @ Theta\n",
"Y_proba = softmax(logits)\n",
"y_predict = Y_proba.argmax(axis=1)\n",
"\n",
"accuracy_score = (y_predict == y_valid).mean()\n",
"accuracy_score"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Well, this model looks pretty good. For the sake of the exercise, let's add a bit of $\ell_2$ regularization. The following training code is similar to the one above, but the loss now has an additional $\ell_2$ penalty, and the gradients have the proper additional term (note that we don't regularize the first element of `Theta` since this corresponds to the bias term). You can also try tweaking the learning rate `eta`."
]
},
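{
"cell_type": "markdown",
"metadata": {},
"source": [
"For reference, here is the regularized cost function that the next cell minimizes (this just spells out what the code computes):\n",
"\n",
"$J(\mathbf{\Theta}) =\n",
"- \dfrac{1}{m}\sum\limits_{i=1}^{m}\sum\limits_{k=1}^{K}{y_k^{(i)}\log\left(\hat{p}_k^{(i)}\right)} + \dfrac{\alpha}{2}\sum\limits_{k=1}^{K}\sum\limits_{j=1}^{n}{\left(\theta_j^{(k)}\right)^2}$\n",
"\n",
"where $j$ starts at 1, so the bias weights are not regularized."
]
},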
{
"cell_type": "code",
"execution_count": 74,
"metadata": {},
"outputs": [],
"source": [
"eta = 0.5\n",
"n_epochs = 5001\n",
"m = len(X_train)\n",
"epsilon = 1e-5\n",
"alpha = 0.01 # regularization hyperparameter\n",
"\n",
"np.random.seed(42)\n",
"Theta = np.random.randn(n_inputs, n_outputs)\n",
"\n",
"for epoch in range(n_epochs):\n",
"    logits = X_train @ Theta\n",
"    Y_proba = softmax(logits)\n",
"    if epoch % 1000 == 0:\n",
"        Y_proba_valid = softmax(X_valid @ Theta)\n",
"        xentropy_losses = -(Y_valid_one_hot * np.log(Y_proba_valid + epsilon))\n",
"        l2_loss = 1 / 2 * (Theta[1:] ** 2).sum()  # excludes the bias row\n",
"        total_loss = xentropy_losses.sum(axis=1).mean() + alpha * l2_loss\n",
"        print(epoch, total_loss.round(4))\n",
"    error = Y_proba - Y_train_one_hot\n",
"    gradients = 1 / m * X_train.T @ error\n",
"    gradients += np.r_[np.zeros([1, n_outputs]), alpha * Theta[1:]]  # no penalty on the bias row\n",
"    Theta = Theta - eta * gradients"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Because of the additional $\\ell_2$ penalty, the loss seems greater than earlier, but perhaps this model will perform better? Let's find out:"
]
},
{
"cell_type": "code",
"execution_count": 75,
"metadata": {},
"outputs": [],
"source": [
"logits = X_valid @ Theta\n",
"Y_proba = softmax(logits)\n",
"y_predict = Y_proba.argmax(axis=1)\n",
"\n",
"accuracy_score = (y_predict == y_valid).mean()\n",
"accuracy_score"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"In this case, the $\ell_2$ penalty did not change the validation accuracy. Perhaps try fine-tuning `alpha`?"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now let's add early stopping. For this we just need to measure the loss on the validation set at every iteration and stop when the error starts growing."
]
},
{
"cell_type": "code",
"execution_count": 76,
"metadata": {},
"outputs": [],
"source": [
"eta = 0.5\n",
"n_epochs = 50_001\n",
"m = len(X_train)\n",
"epsilon = 1e-5\n",
"C = 100 # regularization hyperparameter\n",
"best_loss = np.inf\n",
"\n",
"np.random.seed(42)\n",
"Theta = np.random.randn(n_inputs, n_outputs)\n",
"\n",
"for epoch in range(n_epochs):\n",
"    logits = X_train @ Theta\n",
"    Y_proba = softmax(logits)\n",
"    Y_proba_valid = softmax(X_valid @ Theta)\n",
"    xentropy_losses = -(Y_valid_one_hot * np.log(Y_proba_valid + epsilon))\n",
"    l2_loss = 1 / 2 * (Theta[1:] ** 2).sum()\n",
"    total_loss = xentropy_losses.sum(axis=1).mean() + 1 / C * l2_loss\n",
"    if epoch % 1000 == 0:\n",
"        print(epoch, total_loss.round(4))\n",
"    if total_loss < best_loss:\n",
"        best_loss = total_loss\n",
"    else:  # the validation loss stopped improving, so stop training\n",
"        print(epoch - 1, best_loss.round(4))\n",
"        print(epoch, total_loss.round(4), \"early stopping!\")\n",
"        break\n",
"    error = Y_proba - Y_train_one_hot\n",
"    gradients = 1 / m * X_train.T @ error\n",
"    gradients += np.r_[np.zeros([1, n_outputs]), 1 / C * Theta[1:]]\n",
"    Theta = Theta - eta * gradients"
]
},
{
"cell_type": "code",
"execution_count": 77,
"metadata": {},
"outputs": [],
"source": [
"logits = X_valid @ Theta\n",
"Y_proba = softmax(logits)\n",
"y_predict = Y_proba.argmax(axis=1)\n",
"\n",
"accuracy_score = (y_predict == y_valid).mean()\n",
"accuracy_score"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Oh well, still no change in validation accuracy, but at least early stopping shortened training a bit."
]
},
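{
"cell_type": "markdown",
"metadata": {},
"source": [
"One refinement you could try (not done above): the loop stops the first time the validation loss goes up, and it does not roll `Theta` back to its best value. A more patient variant keeps a copy of the best parameters and only stops after the loss has failed to improve for several consecutive epochs. Here is a sketch under the same setup (the names `Theta_es`, `best_Theta` and `patience` are used only in this example):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# A sketch of patience-based early stopping, reusing eta, C, epsilon, etc.\n",
"patience = 10  # stop after 10 epochs without improvement\n",
"best_loss = np.inf\n",
"best_Theta = None\n",
"epochs_without_progress = 0\n",
"\n",
"np.random.seed(42)\n",
"Theta_es = np.random.randn(n_inputs, n_outputs)\n",
"\n",
"for epoch in range(n_epochs):\n",
"    Y_proba = softmax(X_train @ Theta_es)\n",
"    Y_proba_valid = softmax(X_valid @ Theta_es)\n",
"    xentropy_losses = -(Y_valid_one_hot * np.log(Y_proba_valid + epsilon))\n",
"    l2_loss = 1 / 2 * (Theta_es[1:] ** 2).sum()\n",
"    total_loss = xentropy_losses.sum(axis=1).mean() + 1 / C * l2_loss\n",
"    if total_loss < best_loss:\n",
"        best_loss, best_Theta = total_loss, Theta_es.copy()\n",
"        epochs_without_progress = 0\n",
"    else:\n",
"        epochs_without_progress += 1\n",
"        if epochs_without_progress >= patience:\n",
"            print(epoch, \"early stopping: no progress for\", patience, \"epochs\")\n",
"            break\n",
"    error = Y_proba - Y_train_one_hot\n",
"    gradients = 1 / m * X_train.T @ error\n",
"    gradients += np.r_[np.zeros([1, n_outputs]), 1 / C * Theta_es[1:]]\n",
"    Theta_es = Theta_es - eta * gradients\n",
"\n",
"Theta_es = best_Theta  # roll back to the best parameters found"
]
},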
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now let's plot the model's predictions on the whole dataset (remember to scale all features fed to the model):"
]
},
{
"cell_type": "code",
"execution_count": 78,
"metadata": {},
"outputs": [],
"source": [
"custom_cmap = mpl.colors.ListedColormap(['#fafab0','#9898ff','#a0faa0'])\n",
"\n",
"x0, x1 = np.meshgrid(np.linspace(0, 8, 500).reshape(-1, 1),\n",
"                     np.linspace(0, 3.5, 200).reshape(-1, 1))\n",
"X_new = np.c_[x0.ravel(), x1.ravel()]\n",
"X_new = (X_new - mean) / std\n",
"X_new_with_bias = np.c_[np.ones(len(X_new)), X_new]\n",
"\n",
"logits = X_new_with_bias @ Theta\n",
"Y_proba = softmax(logits)\n",
"y_predict = Y_proba.argmax(axis=1)\n",
"\n",
"zz1 = Y_proba[:, 1].reshape(x0.shape)\n",
"zz = y_predict.reshape(x0.shape)\n",
"\n",
"plt.figure(figsize=(10, 4))\n",
"plt.plot(X[y == 2, 0], X[y == 2, 1], \"g^\", label=\"Iris virginica\")\n",
"plt.plot(X[y == 1, 0], X[y == 1, 1], \"bs\", label=\"Iris versicolor\")\n",
"plt.plot(X[y == 0, 0], X[y == 0, 1], \"yo\", label=\"Iris setosa\")\n",
"\n",
"plt.contourf(x0, x1, zz, cmap=custom_cmap)\n",
"contour = plt.contour(x0, x1, zz1, cmap=\"hot\")\n",
"plt.clabel(contour, inline=1)\n",
"plt.xlabel(\"Petal length\")\n",
"plt.ylabel(\"Petal width\")\n",
"plt.legend(loc=\"upper left\")\n",
"plt.axis([0, 7, 0, 3.5])\n",
"plt.grid()\n",
"plt.show()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"And now let's measure the final model's accuracy on the test set:"
]
},
{
"cell_type": "code",
"execution_count": 79,
"metadata": {},
"outputs": [],
"source": [
"logits = X_test @ Theta\n",
"Y_proba = softmax(logits)\n",
"y_predict = Y_proba.argmax(axis=1)\n",
"\n",
"accuracy_score = (y_predict == y_test).mean()\n",
"accuracy_score"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Well, we get even better performance on the test set. This variability is likely due to the very small size of the dataset: depending on how you sample the training set, the validation set, and the test set, you can get quite different results. Try changing the random seed and running the code again a few times; you will see that the results vary."
]
},
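{
"cell_type": "markdown",
"metadata": {},
"source": [
"As one last sanity check (an addition, not part of the exercise), we can compare with Scikit-Learn's `LogisticRegression`, which implements Softmax Regression for multiclass problems and uses $\ell_2$ regularization by default (its `C` hyperparameter plays a role similar to the `C` above). It fits its own intercept, so we drop the bias column and feed it the already-scaled features:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Rough comparison with Scikit-Learn (extra cell, not part of the exercise)\n",
"from sklearn.linear_model import LogisticRegression\n",
"\n",
"lr = LogisticRegression(C=100, random_state=42)\n",
"lr.fit(X_train[:, 1:], y_train)\n",
"lr.score(X_test[:, 1:], y_test)"
]
},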
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.8.12"
},
"nav_menu": {},
"toc": {
"navigate_menu": true,
"number_sections": true,
"sideBar": true,
"threshold": 6,
"toc_cell": false,
"toc_section_display": "block",
"toc_window_display": false
}
},
"nbformat": 4,
"nbformat_minor": 4
}