{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"**Chapter 4 Training Models**"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"_This notebook contains all the sample code and solutions to the exercises in chapter 4._"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<table align=\"left\">\n",
" <td>\n",
" <a href=\"https://colab.research.google.com/github/ageron/handson-ml2/blob/master/04_training_linear_models.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>\n",
" </td>\n",
" <td>\n",
" <a target=\"_blank\" href=\"https://kaggle.com/kernels/welcome?src=https://github.com/ageron/handson-ml2/blob/master/04_training_linear_models.ipynb\"><img src=\"https://kaggle.com/static/images/open-in-kaggle.svg\" /></a>\n",
" </td>\n",
"</table>"
]
},
{
"cell_type": "markdown",
"metadata": {
"tags": []
},
"source": [
"# Setup"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"This project requires Python 3.8 or above:"
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [],
"source": [
"import sys\n",
"\n",
"assert sys.version_info >= (3, 8)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"It also requires Scikit-Learn ≥ 1.0.1:"
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {},
"outputs": [],
"source": [
"import sklearn\n",
"\n",
"assert sklearn.__version__ >= \"1.0.1\""
]
},
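{
"cell_type": "markdown",
"metadata": {},
"source": [
"Note (not in the book): comparing version strings lexicographically works for this check, but a more robust approach is to parse the versions first, for example with the `packaging` package (assuming it is installed):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# not in the book – a sketch of a more robust version check, assuming the\n",
"# `packaging` package is available in your environment\n",
"from packaging.version import Version\n",
"\n",
"assert Version(sklearn.__version__) >= Version(\"1.0.1\")"
]
},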
{
"cell_type": "markdown",
"metadata": {},
"source": [
"As we did in previous chapters, let's define the default font sizes to make the figures prettier:"
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {},
"outputs": [],
"source": [
"import matplotlib as mpl\n",
"\n",
"mpl.rc('font', size=12)\n",
"mpl.rc('axes', labelsize=14, titlesize=14)\n",
"mpl.rc('legend', fontsize=14)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"And let's create the `images/training_linear_models` folder (if it doesn't already exist), and define the `save_fig()` function which is used through this notebook to save the figures in high-res for the book:"
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {},
"outputs": [],
"source": [
"from pathlib import Path\n",
"\n",
"IMAGES_PATH = Path() / \"images\" / \"training_linear_models\"\n",
"IMAGES_PATH.mkdir(parents=True, exist_ok=True)\n",
"\n",
"def save_fig(fig_id, tight_layout=True, fig_extension=\"png\", resolution=300):\n",
" path = IMAGES_PATH / f\"{fig_id}.{fig_extension}\"\n",
" if tight_layout:\n",
" plt.tight_layout()\n",
" plt.savefig(path, format=fig_extension, dpi=resolution)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Linear Regression"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## The Normal Equation"
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {},
"outputs": [],
"source": [
"import numpy as np\n",
"\n",
"np.random.seed(42) # to make this code example reproducible\n",
"m = 100 # number of instances\n",
"X = 2 * np.random.rand(m, 1) # column vector\n",
"y = 4 + 3 * X + np.random.randn(m, 1) # column vector"
]
},
{
"cell_type": "code",
"execution_count": 6,
"metadata": {},
"outputs": [],
"source": [
"# not in the book generates and saves Figure 41\n",
"\n",
"import matplotlib.pyplot as plt\n",
"\n",
"plt.figure(figsize=(6, 4))\n",
"plt.plot(X, y, \"b.\")\n",
"plt.xlabel(\"$x_1$\")\n",
"plt.ylabel(\"$y$\", rotation=0)\n",
"plt.axis([0, 2, 0, 15])\n",
"plt.grid()\n",
"save_fig(\"generated_data_plot\")\n",
"plt.show()"
]
},
{
"cell_type": "code",
"execution_count": 7,
"metadata": {},
"outputs": [],
"source": [
"from sklearn.preprocessing import add_dummy_feature\n",
"\n",
"X_b = add_dummy_feature(X) # add x0 = 1 to each instance\n",
"theta_best = np.linalg.inv(X_b.T @ X_b) @ X_b.T @ y"
]
},
{
"cell_type": "code",
"execution_count": 8,
"metadata": {},
"outputs": [],
"source": [
"theta_best"
]
},
{
"cell_type": "code",
"execution_count": 9,
"metadata": {},
"outputs": [],
"source": [
"X_new = np.array([[0], [2]])\n",
"X_new_b = add_dummy_feature(X_new) # add x0 = 1 to each instance\n",
"y_predict = X_new_b @ theta_best\n",
"y_predict"
]
},
{
"cell_type": "code",
"execution_count": 10,
"metadata": {},
"outputs": [],
"source": [
"import matplotlib.pyplot as plt\n",
"\n",
"plt.figure(figsize=(6, 4)) # not in the book not needed, just formatting\n",
"plt.plot(X_new, y_predict, \"r-\", label=\"Predictions\")\n",
"plt.plot(X, y, \"b.\")\n",
"\n",
"# not in the book beautifies and saves Figure 42\n",
"plt.xlabel(\"$x_1$\")\n",
"plt.ylabel(\"$y$\", rotation=0)\n",
"plt.axis([0, 2, 0, 15])\n",
"plt.grid()\n",
"plt.legend(loc=\"upper left\")\n",
"save_fig(\"linear_model_predictions_plot\")\n",
"\n",
"plt.show()"
]
},
{
"cell_type": "code",
"execution_count": 11,
"metadata": {},
"outputs": [],
"source": [
"from sklearn.linear_model import LinearRegression\n",
"\n",
"lin_reg = LinearRegression()\n",
"lin_reg.fit(X, y)\n",
"lin_reg.intercept_, lin_reg.coef_"
]
},
{
"cell_type": "code",
"execution_count": 12,
"metadata": {},
"outputs": [],
"source": [
"lin_reg.predict(X_new)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The `LinearRegression` class is based on the `scipy.linalg.lstsq()` function (the name stands for \"least squares\"), which you could call directly:"
]
},
{
"cell_type": "code",
"execution_count": 13,
"metadata": {},
"outputs": [],
"source": [
"theta_best_svd, residuals, rank, s = np.linalg.lstsq(X_b, y, rcond=1e-6)\n",
"theta_best_svd"
]
},
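{
"cell_type": "markdown",
"metadata": {},
"source": [
"The SciPy function that `LinearRegression` relies on can also be called directly. Here is a quick sketch (not in the book), assuming SciPy is installed (it is a dependency of Scikit-Learn):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# not in the book – the same least-squares solution, via SciPy directly\n",
"from scipy.linalg import lstsq\n",
"\n",
"theta_best_scipy, residues, rank, singular_values = lstsq(X_b, y)\n",
"theta_best_scipy"
]
},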
{
"cell_type": "markdown",
"metadata": {},
"source": [
"This function computes $\\mathbf{X}^+\\mathbf{y}$, where $\\mathbf{X}^{+}$ is the _pseudoinverse_ of $\\mathbf{X}$ (specifically the Moore-Penrose inverse). You can use `np.linalg.pinv()` to compute the pseudoinverse directly:"
]
},
{
"cell_type": "code",
"execution_count": 14,
"metadata": {},
"outputs": [],
"source": [
"np.linalg.pinv(X_b) @ y"
]
},
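{
"cell_type": "markdown",
"metadata": {},
"source": [
"The pseudoinverse itself is computed using a standard matrix factorization technique called _Singular Value Decomposition_ (SVD): if $\\mathbf{X} = \\mathbf{U} \\mathbf{\\Sigma} \\mathbf{V}^\\intercal$, then $\\mathbf{X}^+ = \\mathbf{V} \\mathbf{\\Sigma}^+ \\mathbf{U}^\\intercal$, where $\\mathbf{\\Sigma}^+$ is obtained by inverting the nonzero singular values (values below a small tolerance are treated as zero). Here is a quick sketch (not in the book) that reproduces `np.linalg.pinv()` this way:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# not in the book – a sketch: computing the pseudoinverse from the SVD\n",
"U, s, Vt = np.linalg.svd(X_b, full_matrices=False)\n",
"s_inv = np.zeros_like(s)\n",
"s_inv[s > 1e-10] = 1 / s[s > 1e-10]  # invert only the nonzero singular values\n",
"X_b_pinv = Vt.T @ np.diag(s_inv) @ U.T\n",
"X_b_pinv @ y  # same result as np.linalg.pinv(X_b) @ y"
]
},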
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Gradient Descent\n",
"## Batch Gradient Descent"
]
},
{
"cell_type": "code",
"execution_count": 15,
"metadata": {},
"outputs": [],
"source": [
"eta = 0.1 # learning rate\n",
"n_epochs = 1000\n",
"m = len(X_b) # number of instances\n",
"\n",
"np.random.seed(42)\n",
"theta = np.random.randn(2, 1) # randomly initialized model parameters\n",
"\n",
"for epoch in range(n_epochs):\n",
" gradients = 2 / m * X_b.T @ (X_b @ theta - y)\n",
" theta = theta - eta * gradients"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The trained model parameters:"
]
},
{
"cell_type": "code",
"execution_count": 16,
"metadata": {},
"outputs": [],
"source": [
"theta"
]
},
{
"cell_type": "code",
"execution_count": 17,
"metadata": {},
"outputs": [],
"source": [
"# not in the book generates and saves Figure 48\n",
"\n",
"import matplotlib as mpl\n",
"\n",
"def plot_gradient_descent(theta, eta):\n",
" m = len(X_b)\n",
" plt.plot(X, y, \"b.\")\n",
" n_epochs = 1000\n",
" n_shown = 20\n",
" theta_path = []\n",
" for epoch in range(n_epochs):\n",
" if epoch < n_shown:\n",
" y_predict = X_new_b @ theta\n",
" color = mpl.colors.rgb2hex(plt.cm.OrRd(epoch / n_shown + 0.15))\n",
" plt.plot(X_new, y_predict, linestyle=\"solid\", color=color)\n",
" gradients = 2 / m * X_b.T @ (X_b @ theta - y)\n",
" theta = theta - eta * gradients\n",
" theta_path.append(theta)\n",
" plt.xlabel(\"$x_1$\")\n",
" plt.axis([0, 2, 0, 15])\n",
" plt.grid()\n",
" plt.title(r\"$\\eta = {}$\".format(eta))\n",
" return theta_path\n",
"\n",
"np.random.seed(42)\n",
"theta = np.random.randn(2,1) # random initialization\n",
"\n",
"plt.figure(figsize=(10,4))\n",
"plt.subplot(131)\n",
"plot_gradient_descent(theta, eta=0.02)\n",
"plt.ylabel(\"$y$\", rotation=0)\n",
"plt.subplot(132)\n",
"theta_path_bgd = plot_gradient_descent(theta, eta=0.1)\n",
"plt.gca().axes.yaxis.set_ticklabels([])\n",
"plt.subplot(133)\n",
"plt.gca().axes.yaxis.set_ticklabels([])\n",
"plot_gradient_descent(theta, eta=0.5)\n",
"save_fig(\"gradient_descent_plot\")\n",
"plt.show()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Stochastic Gradient Descent"
]
},
{
"cell_type": "code",
"execution_count": 18,
"metadata": {},
"outputs": [],
"source": [
"theta_path_sgd = [] # not in the book we need to store the path of theta in\n",
" # the parameter space to plot the next figure"
]
},
{
"cell_type": "code",
"execution_count": 19,
"metadata": {},
"outputs": [],
"source": [
"n_epochs = 50\n",
"t0, t1 = 5, 50 # learning schedule hyperparameters\n",
"\n",
"def learning_schedule(t):\n",
" return t0 / (t + t1)\n",
"\n",
"np.random.seed(42)\n",
"theta = np.random.randn(2, 1) # random initialization\n",
"\n",
"n_shown = 20 # not in the book just needed to generate the figure below\n",
"plt.figure(figsize=(6, 4)) # not in the book not needed, just formatting\n",
"\n",
"for epoch in range(n_epochs):\n",
" for iteration in range(m):\n",
"\n",
" # not in the book these 4 lines are used to generate the figure\n",
" if epoch == 0 and iteration < n_shown:\n",
" y_predict = X_new_b @ theta\n",
" color = mpl.colors.rgb2hex(plt.cm.OrRd(iteration / n_shown + 0.15))\n",
" plt.plot(X_new, y_predict, color=color)\n",
"\n",
" random_index = np.random.randint(m)\n",
" xi = X_b[random_index : random_index + 1]\n",
" yi = y[random_index : random_index + 1]\n",
" gradients = 2 / 1 * xi.T @ (xi @ theta - yi)\n",
" eta = learning_schedule(epoch * m + iteration)\n",
" theta = theta - eta * gradients\n",
" theta_path_sgd.append(theta) # not in the book to generate the figure\n",
"\n",
"# not in the book this section beautifies and saves Figure 410\n",
"plt.plot(X, y, \"b.\")\n",
"plt.xlabel(\"$x_1$\")\n",
"plt.ylabel(\"$y$\", rotation=0)\n",
"plt.axis([0, 2, 0, 15])\n",
"plt.grid()\n",
"save_fig(\"sgd_plot\")\n",
"plt.show()"
]
},
{
"cell_type": "code",
"execution_count": 20,
"metadata": {
"scrolled": true
},
"outputs": [],
"source": [
"theta"
]
},
{
"cell_type": "code",
"execution_count": 21,
"metadata": {},
"outputs": [],
"source": [
"from sklearn.linear_model import SGDRegressor\n",
"\n",
"sgd_reg = SGDRegressor(max_iter=1000, tol=1e-3, penalty=None, eta0=0.1,\n",
" random_state=42)\n",
"sgd_reg.fit(X, y.ravel()) # y.ravel() because fit() expects 1D targets"
]
},
{
"cell_type": "code",
"execution_count": 22,
"metadata": {},
"outputs": [],
"source": [
"sgd_reg.intercept_, sgd_reg.coef_"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Mini-batch gradient descent"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The code in this section is used to generate the next figure, it is not in the book."
]
},
{
"cell_type": "code",
"execution_count": 23,
"metadata": {},
"outputs": [],
"source": [
"# not in the book this cell generates and saves Figure 411\n",
"\n",
"from math import ceil\n",
"\n",
"n_epochs = 50\n",
"minibatch_size = 20\n",
"n_batches_per_epoch = ceil(m / minibatch_size)\n",
"\n",
"np.random.seed(42)\n",
"theta = np.random.randn(2, 1) # random initialization\n",
"\n",
"t0, t1 = 200, 1000 # learning schedule hyperparameters\n",
"\n",
"def learning_schedule(t):\n",
" return t0 / (t + t1)\n",
"\n",
"theta_path_mgd = []\n",
"for epoch in range(n_epochs):\n",
" shuffled_indices = np.random.permutation(m)\n",
" X_b_shuffled = X_b[shuffled_indices]\n",
" y_shuffled = y[shuffled_indices]\n",
" for iteration in range(0, n_batches_per_epoch):\n",
" idx = iteration * minibatch_size\n",
" xi = X_b_shuffled[idx : idx + minibatch_size]\n",
" yi = y_shuffled[idx : idx + minibatch_size]\n",
" gradients = 2 / minibatch_size * xi.T @ (xi @ theta - yi)\n",
" eta = learning_schedule(iteration)\n",
" theta = theta - eta * gradients\n",
" theta_path_mgd.append(theta)\n",
"\n",
"theta_path_bgd = np.array(theta_path_bgd)\n",
"theta_path_sgd = np.array(theta_path_sgd)\n",
"theta_path_mgd = np.array(theta_path_mgd)\n",
"\n",
"plt.figure(figsize=(7, 4))\n",
"plt.plot(theta_path_sgd[:, 0], theta_path_sgd[:, 1], \"r-s\", linewidth=1,\n",
" label=\"Stochastic\")\n",
"plt.plot(theta_path_mgd[:, 0], theta_path_mgd[:, 1], \"g-+\", linewidth=2,\n",
" label=\"Mini-batch\")\n",
"plt.plot(theta_path_bgd[:, 0], theta_path_bgd[:, 1], \"b-o\", linewidth=3,\n",
" label=\"Batch\")\n",
"plt.legend(loc=\"upper left\")\n",
"plt.xlabel(r\"$\\theta_0$\")\n",
"plt.ylabel(r\"$\\theta_1$ \", rotation=0)\n",
"plt.axis([2.6, 4.6, 2.3, 3.4])\n",
"plt.grid()\n",
"save_fig(\"gradient_descent_paths_plot\")\n",
"plt.show()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Polynomial Regression"
]
},
{
"cell_type": "code",
"execution_count": 24,
"metadata": {},
"outputs": [],
"source": [
"np.random.seed(42)\n",
"m = 100\n",
"X = 6 * np.random.rand(m, 1) - 3\n",
"y = 0.5 * X ** 2 + X + 2 + np.random.randn(m, 1)"
]
},
{
"cell_type": "code",
"execution_count": 25,
"metadata": {},
"outputs": [],
"source": [
"# not in the book this cell generates and saves Figure 412\n",
"plt.figure(figsize=(6, 4))\n",
"plt.plot(X, y, \"b.\")\n",
"plt.xlabel(\"$x_1$\")\n",
"plt.ylabel(\"$y$\", rotation=0)\n",
"plt.axis([-3, 3, 0, 10])\n",
"plt.grid()\n",
"save_fig(\"quadratic_data_plot\")\n",
"plt.show()"
]
},
{
"cell_type": "code",
"execution_count": 26,
"metadata": {},
"outputs": [],
"source": [
"from sklearn.preprocessing import PolynomialFeatures\n",
"\n",
"poly_features = PolynomialFeatures(degree=2, include_bias=False)\n",
"X_poly = poly_features.fit_transform(X)\n",
"X[0]"
]
},
{
"cell_type": "code",
"execution_count": 27,
"metadata": {},
"outputs": [],
"source": [
"X_poly[0]"
]
},
{
"cell_type": "code",
"execution_count": 28,
"metadata": {},
"outputs": [],
"source": [
"lin_reg = LinearRegression()\n",
"lin_reg.fit(X_poly, y)\n",
"lin_reg.intercept_, lin_reg.coef_"
]
},
{
"cell_type": "code",
"execution_count": 29,
"metadata": {},
"outputs": [],
"source": [
"# not in the book this cell generates and saves Figure 413\n",
"\n",
"X_new = np.linspace(-3, 3, 100).reshape(100, 1)\n",
"X_new_poly = poly_features.transform(X_new)\n",
"y_new = lin_reg.predict(X_new_poly)\n",
"\n",
"plt.figure(figsize=(6, 4))\n",
"plt.plot(X, y, \"b.\")\n",
"plt.plot(X_new, y_new, \"r-\", linewidth=2, label=\"Predictions\")\n",
"plt.xlabel(\"$x_1$\")\n",
"plt.ylabel(\"$y$\", rotation=0)\n",
"plt.legend(loc=\"upper left\")\n",
"plt.axis([-3, 3, 0, 10])\n",
"plt.grid()\n",
"save_fig(\"quadratic_predictions_plot\")\n",
"plt.show()"
]
},
{
"cell_type": "code",
"execution_count": 30,
"metadata": {},
"outputs": [],
"source": [
"# not in the book this cell generates and saves Figure 414\n",
"\n",
"from sklearn.preprocessing import StandardScaler\n",
"from sklearn.pipeline import make_pipeline\n",
"\n",
"plt.figure(figsize=(6, 4))\n",
"\n",
"for style, width, degree in ((\"r-+\", 2, 1), (\"b--\", 2, 2), (\"g-\", 1, 300)):\n",
" polybig_features = PolynomialFeatures(degree=degree, include_bias=False)\n",
" std_scaler = StandardScaler()\n",
" lin_reg = LinearRegression()\n",
" polynomial_regression = make_pipeline(polybig_features, std_scaler, lin_reg)\n",
" polynomial_regression.fit(X, y)\n",
" y_newbig = polynomial_regression.predict(X_new)\n",
" label = f\"{degree} degree{'s' if degree > 1 else ''}\"\n",
" plt.plot(X_new, y_newbig, style, label=label, linewidth=width)\n",
"\n",
"plt.plot(X, y, \"b.\", linewidth=3)\n",
"plt.legend(loc=\"upper left\")\n",
"plt.xlabel(\"$x_1$\")\n",
"plt.ylabel(\"$y$\", rotation=0)\n",
"plt.axis([-3, 3, 0, 10])\n",
"plt.grid()\n",
"save_fig(\"high_degree_polynomials_plot\")\n",
"plt.show()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Learning Curves"
]
},
{
"cell_type": "code",
"execution_count": 31,
"metadata": {},
"outputs": [],
"source": [
"from sklearn.model_selection import learning_curve\n",
"\n",
"train_sizes, train_scores, valid_scores = learning_curve(\n",
" LinearRegression(), X, y, train_sizes=np.linspace(0.01, 1.0, 40), cv=5,\n",
" scoring=\"neg_root_mean_squared_error\")\n",
"train_errors = -train_scores.mean(axis=1)\n",
"valid_errors = -valid_scores.mean(axis=1)\n",
"\n",
"plt.figure(figsize=(6, 4)) # not in the book not need, just formatting\n",
"plt.plot(train_sizes, train_errors, \"r-+\", linewidth=2, label=\"train\")\n",
"plt.plot(train_sizes, valid_errors, \"b-\", linewidth=3, label=\"valid\")\n",
"\n",
"# not in the book beautifies and saves Figure 415\n",
"plt.xlabel(\"Training set size\")\n",
"plt.ylabel(\"RMSE\")\n",
"plt.grid()\n",
"plt.legend(loc=\"upper right\")\n",
"plt.axis([0, 80, 0, 2.5])\n",
"save_fig(\"underfitting_learning_curves_plot\")\n",
"\n",
"plt.show()"
]
},
{
"cell_type": "code",
"execution_count": 32,
"metadata": {},
"outputs": [],
"source": [
"from sklearn.pipeline import make_pipeline\n",
"\n",
"polynomial_regression = make_pipeline(\n",
" PolynomialFeatures(degree=10, include_bias=False),\n",
" LinearRegression())\n",
"\n",
"train_sizes, train_scores, valid_scores = learning_curve(\n",
" polynomial_regression, X, y, train_sizes=np.linspace(0.01, 1.0, 40), cv=5,\n",
" scoring=\"neg_root_mean_squared_error\")"
]
},
{
"cell_type": "code",
"execution_count": 33,
"metadata": {},
"outputs": [],
"source": [
"# not in the book generates and saves Figure 416\n",
"\n",
"train_errors = -train_scores.mean(axis=1)\n",
"valid_errors = -valid_scores.mean(axis=1)\n",
"\n",
"plt.figure(figsize=(6, 4))\n",
"plt.plot(train_sizes, train_errors, \"r-+\", linewidth=2, label=\"train\")\n",
"plt.plot(train_sizes, valid_errors, \"b-\", linewidth=3, label=\"valid\")\n",
"plt.legend(loc=\"upper right\")\n",
"plt.xlabel(\"Training set size\")\n",
"plt.ylabel(\"RMSE\")\n",
"plt.grid()\n",
"plt.axis([0, 80, 0, 2.5])\n",
"save_fig(\"learning_curves_plot\")\n",
"plt.show()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Regularized Linear Models"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Ridge Regression"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Let's generate a very small and noisy linear dataset:"
]
},
{
"cell_type": "code",
"execution_count": 34,
"metadata": {},
"outputs": [],
"source": [
"# not in the book we've done this type of generation several times before\n",
"np.random.seed(42)\n",
"m = 20\n",
"X = 3 * np.random.rand(m, 1)\n",
"y = 1 + 0.5 * X + np.random.randn(m, 1) / 1.5\n",
"X_new = np.linspace(0, 3, 100).reshape(100, 1)"
]
},
{
"cell_type": "code",
"execution_count": 35,
"metadata": {},
"outputs": [],
"source": [
"# not in the book a quick peek at the dataset we just generated\n",
"plt.figure(figsize=(6, 4))\n",
"plt.plot(X, y, \".\")\n",
"plt.xlabel(\"$x_1$\")\n",
"plt.ylabel(\"$y$ \", rotation=0)\n",
"plt.axis([0, 3, 0, 3.5])\n",
"plt.grid()\n",
"plt.show()"
]
},
{
"cell_type": "code",
"execution_count": 36,
"metadata": {},
"outputs": [],
"source": [
"from sklearn.linear_model import Ridge\n",
"\n",
"ridge_reg = Ridge(alpha=1, solver=\"cholesky\")\n",
"ridge_reg.fit(X, y)\n",
"ridge_reg.predict([[1.5]])"
]
},
{
"cell_type": "code",
"execution_count": 37,
"metadata": {},
"outputs": [],
"source": [
"# not in the book this cell generates and saves Figure 417\n",
"\n",
"def plot_model(model_class, polynomial, alphas, **model_kargs):\n",
" plt.plot(X, y, \"b.\", linewidth=3)\n",
" for alpha, style in zip(alphas, (\"b:\", \"g--\", \"r-\")):\n",
" if alpha > 0:\n",
" model = model_class(alpha, **model_kargs)\n",
" else:\n",
" model = LinearRegression()\n",
" if polynomial:\n",
" model = make_pipeline(\n",
" PolynomialFeatures(degree=10, include_bias=False),\n",
" StandardScaler(),\n",
" model)\n",
" model.fit(X, y)\n",
" y_new_regul = model.predict(X_new)\n",
" plt.plot(X_new, y_new_regul, style, linewidth=2,\n",
" label=r\"$\\alpha = {}$\".format(alpha))\n",
" plt.legend(loc=\"upper left\")\n",
" plt.xlabel(\"$x_1$\")\n",
" plt.axis([0, 3, 0, 3.5])\n",
" plt.grid()\n",
"\n",
"plt.figure(figsize=(9, 3.5))\n",
"plt.subplot(121)\n",
"plot_model(Ridge, polynomial=False, alphas=(0, 10, 100), random_state=42)\n",
"plt.ylabel(\"$y$ \", rotation=0)\n",
"plt.subplot(122)\n",
"plot_model(Ridge, polynomial=True, alphas=(0, 10**-5, 1), random_state=42)\n",
"plt.gca().axes.yaxis.set_ticklabels([])\n",
"save_fig(\"ridge_regression_plot\")\n",
"plt.show()"
]
},
{
"cell_type": "code",
"execution_count": 38,
"metadata": {},
"outputs": [],
"source": [
"sgd_reg = SGDRegressor(penalty=\"l2\", random_state=42)\n",
"sgd_reg.fit(X, y.ravel())\n",
"sgd_reg.predict([[1.5]])"
]
},
{
"cell_type": "code",
"execution_count": 39,
"metadata": {},
"outputs": [],
"source": [
"# not in the book show that we get roughly the same solution as earlier when\n",
"# we use Stochastic Average GD (solver=\"sag\")\n",
"ridge_reg = Ridge(alpha=1, solver=\"sag\", random_state=42)\n",
"ridge_reg.fit(X, y)\n",
"ridge_reg.predict([[1.5]])"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Lasso Regression"
]
},
{
"cell_type": "code",
"execution_count": 40,
"metadata": {},
"outputs": [],
"source": [
"from sklearn.linear_model import Lasso\n",
"\n",
"lasso_reg = Lasso(alpha=0.1)\n",
"lasso_reg.fit(X, y)\n",
"lasso_reg.predict([[1.5]])"
]
},
{
"cell_type": "code",
"execution_count": 41,
"metadata": {},
"outputs": [],
"source": [
"# not in the book this cell generates and saves Figure 418\n",
"plt.figure(figsize=(9, 3.5))\n",
"plt.subplot(121)\n",
"plot_model(Lasso, polynomial=False, alphas=(0, 0.1, 1), random_state=42)\n",
"plt.ylabel(\"$y$ \", rotation=0)\n",
"plt.subplot(122)\n",
"plot_model(Lasso, polynomial=True, alphas=(0, 1e-2, 1), random_state=42)\n",
"plt.gca().axes.yaxis.set_ticklabels([])\n",
"save_fig(\"lasso_regression_plot\")\n",
"plt.show()"
]
},
{
"cell_type": "code",
"execution_count": 42,
"metadata": {},
"outputs": [],
"source": [
"# not in the book this BIG cell generates and saves Figure 419\n",
"\n",
"t1a, t1b, t2a, t2b = -1, 3, -1.5, 1.5\n",
"\n",
"t1s = np.linspace(t1a, t1b, 500)\n",
"t2s = np.linspace(t2a, t2b, 500)\n",
"t1, t2 = np.meshgrid(t1s, t2s)\n",
"T = np.c_[t1.ravel(), t2.ravel()]\n",
"Xr = np.array([[1, 1], [1, -1], [1, 0.5]])\n",
"yr = 2 * Xr[:, :1] + 0.5 * Xr[:, 1:]\n",
"\n",
"J = (1 / len(Xr) * ((T @ Xr.T - yr.T) ** 2).sum(axis=1)).reshape(t1.shape)\n",
"\n",
"N1 = np.linalg.norm(T, ord=1, axis=1).reshape(t1.shape)\n",
"N2 = np.linalg.norm(T, ord=2, axis=1).reshape(t1.shape)\n",
"\n",
"t_min_idx = np.unravel_index(J.argmin(), J.shape)\n",
"t1_min, t2_min = t1[t_min_idx], t2[t_min_idx]\n",
"\n",
"t_init = np.array([[0.25], [-1]])\n",
"\n",
"def bgd_path(theta, X, y, l1, l2, core=1, eta=0.05, n_iterations=200):\n",
" path = [theta]\n",
" for iteration in range(n_iterations):\n",
" gradients = (core * 2 / len(X) * X.T @ (X @ theta - y)\n",
" + l1 * np.sign(theta) + l2 * theta)\n",
" theta = theta - eta * gradients\n",
" path.append(theta)\n",
" return np.array(path)\n",
"\n",
"fig, axes = plt.subplots(2, 2, sharex=True, sharey=True, figsize=(10.1, 8))\n",
"\n",
"for i, N, l1, l2, title in ((0, N1, 2.0, 0, \"Lasso\"), (1, N2, 0, 2.0, \"Ridge\")):\n",
" JR = J + l1 * N1 + l2 * 0.5 * N2 ** 2\n",
"\n",
" tr_min_idx = np.unravel_index(JR.argmin(), JR.shape)\n",
" t1r_min, t2r_min = t1[tr_min_idx], t2[tr_min_idx]\n",
"\n",
" levels = np.exp(np.linspace(0, 1, 20)) - 1\n",
" levelsJ = levels * (J.max() - J.min()) + J.min()\n",
" levelsJR = levels * (JR.max() - JR.min()) + JR.min()\n",
" levelsN = np.linspace(0, N.max(), 10)\n",
"\n",
" path_J = bgd_path(t_init, Xr, yr, l1=0, l2=0)\n",
" path_JR = bgd_path(t_init, Xr, yr, l1, l2)\n",
" path_N = bgd_path(theta=np.array([[2.0], [0.5]]), X=Xr, y=yr,\n",
" l1=np.sign(l1) / 3, l2=np.sign(l2), core=0)\n",
" ax = axes[i, 0]\n",
" ax.grid()\n",
" ax.axhline(y=0, color=\"k\")\n",
" ax.axvline(x=0, color=\"k\")\n",
" ax.contourf(t1, t2, N / 2.0, levels=levelsN)\n",
" ax.plot(path_N[:, 0], path_N[:, 1], \"y--\")\n",
" ax.plot(0, 0, \"ys\")\n",
" ax.plot(t1_min, t2_min, \"ys\")\n",
" ax.set_title(r\"$\\ell_{}$ penalty\".format(i + 1))\n",
" ax.axis([t1a, t1b, t2a, t2b])\n",
" if i == 1:\n",
" ax.set_xlabel(r\"$\\theta_1$\")\n",
" ax.set_ylabel(r\"$\\theta_2$\", rotation=0)\n",
"\n",
" ax = axes[i, 1]\n",
" ax.grid()\n",
" ax.axhline(y=0, color=\"k\")\n",
" ax.axvline(x=0, color=\"k\")\n",
" ax.contourf(t1, t2, JR, levels=levelsJR, alpha=0.9)\n",
" ax.plot(path_JR[:, 0], path_JR[:, 1], \"w-o\")\n",
" ax.plot(path_N[:, 0], path_N[:, 1], \"y--\")\n",
" ax.plot(0, 0, \"ys\")\n",
" ax.plot(t1_min, t2_min, \"ys\")\n",
" ax.plot(t1r_min, t2r_min, \"rs\")\n",
" ax.set_title(title)\n",
" ax.axis([t1a, t1b, t2a, t2b])\n",
" if i == 1:\n",
" ax.set_xlabel(r\"$\\theta_1$\")\n",
"\n",
"save_fig(\"lasso_vs_ridge_plot\")\n",
"plt.show()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Elastic Net"
]
},
{
"cell_type": "code",
"execution_count": 43,
"metadata": {},
"outputs": [],
"source": [
"from sklearn.linear_model import ElasticNet\n",
"\n",
"elastic_net = ElasticNet(alpha=0.1, l1_ratio=0.5)\n",
"elastic_net.fit(X, y)\n",
"elastic_net.predict([[1.5]])"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Early Stopping"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Let's go back to the quadratic dataset we used earlier:"
]
},
{
"cell_type": "code",
"execution_count": 44,
"metadata": {},
"outputs": [],
"source": [
"# not in the book this is the same code as earlier\n",
"np.random.seed(42)\n",
"m = 100\n",
"X = 6 * np.random.rand(m, 1) - 3\n",
"y = 0.5 * X ** 2 + X + 2 + np.random.randn(m, 1)\n",
"X_train, y_train = X[: m // 2], y[: m // 2, 0]\n",
"X_valid, y_valid = X[m // 2 :], y[m // 2 :, 0]"
]
},
{
"cell_type": "code",
"execution_count": 45,
"metadata": {},
"outputs": [],
"source": [
"from copy import deepcopy\n",
"from sklearn.metrics import mean_squared_error\n",
"from sklearn.preprocessing import StandardScaler\n",
"\n",
"preprocessing = make_pipeline(PolynomialFeatures(degree=90, include_bias=False),\n",
" StandardScaler())\n",
"X_train_prep = preprocessing.fit_transform(X_train)\n",
"X_valid_prep = preprocessing.transform(X_valid)\n",
"sgd_reg = SGDRegressor(penalty=None, eta0=0.002, random_state=42)\n",
"n_epochs = 500\n",
"best_valid_rmse = float('inf')\n",
"train_errors, val_errors = [], [] # not in the book it's for the figure below\n",
"\n",
"for epoch in range(n_epochs):\n",
" sgd_reg.partial_fit(X_train_prep, y_train)\n",
" y_valid_predict = sgd_reg.predict(X_valid_prep)\n",
" val_error = mean_squared_error(y_valid, y_valid_predict, squared=False)\n",
" if val_error < best_valid_rmse:\n",
" best_valid_rmse = val_error\n",
" best_model = deepcopy(sgd_reg)\n",
"\n",
" # not in the book we evaluate the train error and save it for the figure\n",
" y_train_predict = sgd_reg.predict(X_train_prep)\n",
" train_error = mean_squared_error(y_train, y_train_predict, squared=False)\n",
" val_errors.append(val_error)\n",
" train_errors.append(train_error)\n",
"\n",
"# not in the book this section generates and saves Figure 420\n",
"best_epoch = np.argmin(val_errors)\n",
"plt.figure(figsize=(6, 4))\n",
"plt.annotate('Best model',\n",
" xy=(best_epoch, best_valid_rmse),\n",
" xytext=(best_epoch, best_valid_rmse + 0.5),\n",
" ha=\"center\", fontsize=14,\n",
" arrowprops=dict(facecolor='black', shrink=0.05))\n",
"plt.plot([0, n_epochs], [best_valid_rmse, best_valid_rmse], \"k:\", linewidth=2)\n",
"plt.plot(val_errors, \"b-\", linewidth=3, label=\"Validation set\")\n",
"plt.plot(best_epoch, best_valid_rmse, \"bo\")\n",
"plt.plot(train_errors, \"r--\", linewidth=2, label=\"Training set\")\n",
"plt.legend(loc=\"upper right\")\n",
"plt.xlabel(\"Epoch\")\n",
"plt.ylabel(\"RMSE\")\n",
"plt.axis([0, n_epochs, 0, 3.5])\n",
"plt.grid()\n",
"save_fig(\"early_stopping_plot\")\n",
"plt.show()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Logistic Regression"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Estimating Probabilities"
]
},
{
"cell_type": "code",
"execution_count": 46,
"metadata": {},
"outputs": [],
"source": [
"# not in the book generates and saves Figure 421\n",
"\n",
"lim = 6\n",
"t = np.linspace(-lim, lim, 100)\n",
"sig = 1 / (1 + np.exp(-t))\n",
"\n",
"plt.figure(figsize=(8, 3))\n",
"plt.plot([-lim, lim], [0, 0], \"k-\")\n",
"plt.plot([-lim, lim], [0.5, 0.5], \"k:\")\n",
"plt.plot([-lim, lim], [1, 1], \"k:\")\n",
"plt.plot([0, 0], [-1.1, 1.1], \"k-\")\n",
"plt.plot(t, sig, \"b-\", linewidth=2, label=r\"$\\sigma(t) = \\dfrac{1}{1 + e^{-t}}$\")\n",
"plt.xlabel(\"t\")\n",
"plt.legend(loc=\"upper left\")\n",
"plt.axis([-lim, lim, -0.1, 1.1])\n",
"plt.gca().set_yticks([0, 0.25, 0.5, 0.75, 1])\n",
"plt.grid()\n",
"save_fig(\"logistic_function_plot\")\n",
"plt.show()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Decision Boundaries"
]
},
{
"cell_type": "code",
"execution_count": 47,
"metadata": {},
"outputs": [],
"source": [
"from sklearn.datasets import load_iris\n",
"\n",
"iris = load_iris(as_frame=True)\n",
"list(iris)"
]
},
{
"cell_type": "code",
"execution_count": 48,
"metadata": {},
"outputs": [],
"source": [
"print(iris.DESCR) # not in the book it's a bit too long"
]
},
{
"cell_type": "code",
"execution_count": 49,
"metadata": {},
"outputs": [],
"source": [
"iris.data.head(3)"
]
},
{
"cell_type": "code",
"execution_count": 50,
"metadata": {},
"outputs": [],
"source": [
"iris.target.head(3) # note that the instances are not shuffled"
]
},
{
"cell_type": "code",
"execution_count": 51,
"metadata": {},
"outputs": [],
"source": [
"iris.target_names"
]
},
{
"cell_type": "code",
"execution_count": 52,
"metadata": {},
"outputs": [],
"source": [
"from sklearn.linear_model import LogisticRegression\n",
"from sklearn.model_selection import train_test_split\n",
"\n",
"X = iris.data[[\"petal width (cm)\"]].values\n",
"y = iris.target_names[iris.target] == 'virginica'\n",
"X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)\n",
"\n",
"log_reg = LogisticRegression(random_state=42)\n",
"log_reg.fit(X_train, y_train)"
]
},
{
"cell_type": "code",
"execution_count": 53,
"metadata": {},
"outputs": [],
"source": [
"X_new = np.linspace(0, 3, 1000).reshape(-1, 1) # reshape to get a column vector\n",
"y_proba = log_reg.predict_proba(X_new)\n",
"decision_boundary = X_new[y_proba[:, 1] >= 0.5][0, 0]\n",
"\n",
"plt.figure(figsize=(8, 3)) # not in the book not needed, just formatting\n",
"plt.plot(X_new, y_proba[:, 0], \"b--\", linewidth=2,\n",
" label=\"Not Iris virginica proba\")\n",
"plt.plot(X_new, y_proba[:, 1], \"g-\", linewidth=2, label=\"Iris virginica proba\")\n",
"plt.plot([decision_boundary, decision_boundary], [0, 1], \"k:\", linewidth=2,\n",
" label=\"Decision boundary\")\n",
"\n",
"# not in the book this section beautifies and saves Figure 421\n",
"plt.arrow(x=decision_boundary, y=0.08, dx=-0.3, dy=0,\n",
" head_width=0.05, head_length=0.1, fc=\"b\", ec=\"b\")\n",
"plt.arrow(x=decision_boundary, y=0.92, dx=0.3, dy=0,\n",
" head_width=0.05, head_length=0.1, fc=\"g\", ec=\"g\")\n",
"plt.plot(X_train[y_train == 0], y_train[y_train == 0], \"bs\")\n",
"plt.plot(X_train[y_train == 1], y_train[y_train == 1], \"g^\")\n",
"plt.xlabel(\"Petal width (cm)\")\n",
"plt.ylabel(\"Probability\")\n",
"plt.legend(loc=\"center left\")\n",
"plt.axis([0, 3, -0.02, 1.02])\n",
"plt.grid()\n",
"save_fig(\"logistic_regression_plot\")\n",
"\n",
"plt.show()"
]
},
{
"cell_type": "code",
"execution_count": 54,
"metadata": {},
"outputs": [],
"source": [
"decision_boundary"
]
},
{
"cell_type": "code",
"execution_count": 55,
"metadata": {},
"outputs": [],
"source": [
"log_reg.predict([[1.7], [1.5]])"
]
},
{
"cell_type": "code",
"execution_count": 56,
"metadata": {},
"outputs": [],
"source": [
"# not in the book this cell generates and saves Figure 422\n",
"\n",
"X = iris.data[[\"petal length (cm)\", \"petal width (cm)\"]].values\n",
"y = iris.target_names[iris.target] == 'virginica'\n",
"X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)\n",
"\n",
"log_reg = LogisticRegression(C=2, random_state=42)\n",
"log_reg.fit(X_train, y_train)\n",
"\n",
"# for the contour plot\n",
"x0, x1 = np.meshgrid(np.linspace(2.9, 7, 500).reshape(-1, 1),\n",
" np.linspace(0.8, 2.7, 200).reshape(-1, 1))\n",
"X_new = np.c_[x0.ravel(), x1.ravel()] # one instance per point on the figure\n",
"y_proba = log_reg.predict_proba(X_new)\n",
"zz = y_proba[:, 1].reshape(x0.shape)\n",
"\n",
"# for the decision boundary\n",
"left_right = np.array([2.9, 7])\n",
"boundary = -((log_reg.coef_[0, 0] * left_right + log_reg.intercept_[0])\n",
" / log_reg.coef_[0, 1])\n",
"\n",
"plt.figure(figsize=(10, 4))\n",
"plt.plot(X_train[y_train == 0, 0], X_train[y_train == 0, 1], \"bs\")\n",
"plt.plot(X_train[y_train == 1, 0], X_train[y_train == 1, 1], \"g^\")\n",
"contour = plt.contour(x0, x1, zz, cmap=plt.cm.brg)\n",
"plt.clabel(contour, inline=1)\n",
"plt.plot(left_right, boundary, \"k--\", linewidth=3)\n",
"plt.text(3.5, 1.27, \"Not Iris virginica\", color=\"b\", ha=\"center\", fontsize=14)\n",
"plt.text(6.5, 2.3, \"Iris virginica\", color=\"g\", ha=\"center\", fontsize=14)\n",
"plt.xlabel(\"Petal length\")\n",
"plt.ylabel(\"Petal width\")\n",
"plt.axis([2.9, 7, 0.8, 2.7])\n",
"plt.grid()\n",
"save_fig(\"logistic_regression_contour_plot\")\n",
"plt.show()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Softmax Regression"
]
},
{
"cell_type": "code",
"execution_count": 57,
"metadata": {},
"outputs": [],
"source": [
"X = iris.data[[\"petal length (cm)\", \"petal width (cm)\"]].values\n",
"y = iris[\"target\"]\n",
"X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)\n",
"\n",
"softmax_reg = LogisticRegression(C=30, random_state=42)\n",
"softmax_reg.fit(X_train, y_train)"
]
},
{
"cell_type": "code",
"execution_count": 58,
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"softmax_reg.predict([[5, 2]])"
]
},
{
"cell_type": "code",
"execution_count": 59,
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"softmax_reg.predict_proba([[5, 2]]).round(2)"
]
},
{
"cell_type": "code",
"execution_count": 60,
"metadata": {},
"outputs": [],
"source": [
"# not in the book this cell generates and saves Figure 423\n",
"\n",
"from matplotlib.colors import ListedColormap\n",
"\n",
"custom_cmap = ListedColormap([\"#fafab0\", \"#9898ff\", \"#a0faa0\"])\n",
"\n",
"x0, x1 = np.meshgrid(np.linspace(0, 8, 500).reshape(-1, 1),\n",
" np.linspace(0, 3.5, 200).reshape(-1, 1))\n",
"X_new = np.c_[x0.ravel(), x1.ravel()]\n",
"\n",
"y_proba = softmax_reg.predict_proba(X_new)\n",
"y_predict = softmax_reg.predict(X_new)\n",
"\n",
"zz1 = y_proba[:, 1].reshape(x0.shape)\n",
"zz = y_predict.reshape(x0.shape)\n",
"\n",
"plt.figure(figsize=(10, 4))\n",
"plt.plot(X[y == 2, 0], X[y == 2, 1], \"g^\", label=\"Iris virginica\")\n",
"plt.plot(X[y == 1, 0], X[y == 1, 1], \"bs\", label=\"Iris versicolor\")\n",
"plt.plot(X[y == 0, 0], X[y == 0, 1], \"yo\", label=\"Iris setosa\")\n",
"\n",
"plt.contourf(x0, x1, zz, cmap=custom_cmap)\n",
"contour = plt.contour(x0, x1, zz1, cmap=\"hot\")\n",
"plt.clabel(contour, inline=1)\n",
"plt.xlabel(\"Petal length\")\n",
"plt.ylabel(\"Petal width\")\n",
"plt.legend(loc=\"center left\")\n",
"plt.axis([0.5, 7, 0, 3.5])\n",
"plt.grid()\n",
"save_fig(\"softmax_regression_contour_plot\")\n",
"plt.show()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Exercise solutions"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 1. to 11."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"See appendix A."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 12. Batch Gradient Descent with early stopping for Softmax Regression\n",
"Exercise: _Implement Batch Gradient Descent with early stopping for Softmax Regression without using Scikit-Learn, only NumPy._"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Let's start by loading the data. We will just reuse the Iris dataset we loaded earlier."
]
},
{
"cell_type": "code",
"execution_count": 61,
"metadata": {},
"outputs": [],
"source": [
"X = iris.data[[\"petal length (cm)\", \"petal width (cm)\"]].values\n",
"y = iris[\"target\"].values"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We need to add the bias term for every instance ($x_0 = 1$). The easiest option to do this would be to use Scikit-Learn's `add_dummy_feature()` function, but the point of this exercise is to get a better understanding of the algorithms by implementing them manually. So here is one possible implementation:"
]
},
{
"cell_type": "code",
"execution_count": 62,
"metadata": {},
"outputs": [],
"source": [
"X_with_bias = np.c_[np.ones(len(X)), X]"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The easiest option to split the dataset into a training set, a validation set and a test set would be to use Scikit-Learn's `train_test_split()` function, but again, we want to did this manually:"
]
},
{
"cell_type": "code",
"execution_count": 63,
"metadata": {},
"outputs": [],
"source": [
"test_ratio = 0.2\n",
"validation_ratio = 0.2\n",
"total_size = len(X_with_bias)\n",
"\n",
"test_size = int(total_size * test_ratio)\n",
"validation_size = int(total_size * validation_ratio)\n",
"train_size = total_size - test_size - validation_size\n",
"\n",
"np.random.seed(42)\n",
"rnd_indices = np.random.permutation(total_size)\n",
"\n",
"X_train = X_with_bias[rnd_indices[:train_size]]\n",
"y_train = y[rnd_indices[:train_size]]\n",
"X_valid = X_with_bias[rnd_indices[train_size:-test_size]]\n",
"y_valid = y[rnd_indices[train_size:-test_size]]\n",
"X_test = X_with_bias[rnd_indices[-test_size:]]\n",
"y_test = y[rnd_indices[-test_size:]]"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The targets are currently class indices (0, 1 or 2), but we need target class probabilities to train the Softmax Regression model. Each instance will have target class probabilities equal to 0.0 for all classes except for the target class which will have a probability of 1.0 (in other words, the vector of class probabilities for any given instance is a one-hot vector). Let's write a small function to convert the vector of class indices into a matrix containing a one-hot vector for each instance. To understand this code, you need to know that `np.diag(np.ones(n))` creates an n×n matrix full of 0s except for 1s on the main diagonal. Moreover, if `a` in a NumPy array, then `a[[1,3,2]]` returns an array with 3 rows equal to `a[1]`, `a[3]` and `a[2]` (this is [advanced NumPy indexing](https://numpy.org/doc/stable/reference/arrays.indexing.html#advanced-indexing))."
]
},
{
"cell_type": "code",
"execution_count": 64,
"metadata": {},
"outputs": [],
"source": [
"def to_one_hot(y):\n",
" return np.diag(np.ones(y.max() + 1))[y]"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Let's test this function on the first 10 instances:"
]
},
{
"cell_type": "code",
"execution_count": 65,
"metadata": {},
"outputs": [],
"source": [
"y_train[:10]"
]
},
{
"cell_type": "code",
"execution_count": 66,
"metadata": {},
"outputs": [],
"source": [
"to_one_hot(y_train[:10])"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Looks good, so let's create the target class probabilities matrix for the training set and the test set:"
]
},
{
"cell_type": "code",
"execution_count": 67,
"metadata": {},
"outputs": [],
"source": [
"Y_train_one_hot = to_one_hot(y_train)\n",
"Y_valid_one_hot = to_one_hot(y_valid)\n",
"Y_test_one_hot = to_one_hot(y_test)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now let's scale the inputs. We compute the mean and standard deviation of each feature on the training set (except for the bias feature), then we center and scale each feature in the training set, the validation set, and the test set:"
]
},
{
"cell_type": "code",
"execution_count": 68,
"metadata": {},
"outputs": [],
"source": [
"mean = X_train[:, 1:].mean(axis=0)\n",
"std = X_train[:, 1:].std(axis=0)\n",
"X_train[:, 1:] = (X_train[:, 1:] - mean) / std\n",
"X_valid[:, 1:] = (X_valid[:, 1:] - mean) / std\n",
"X_test[:, 1:] = (X_test[:, 1:] - mean) / std"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now let's implement the Softmax function. Recall that it is defined by the following equation:\n",
"\n",
"$\\sigma\\left(\\mathbf{s}(\\mathbf{x})\\right)_k = \\dfrac{\\exp\\left(s_k(\\mathbf{x})\\right)}{\\sum\\limits_{j=1}^{K}{\\exp\\left(s_j(\\mathbf{x})\\right)}}$"
]
},
{
"cell_type": "code",
"execution_count": 69,
"metadata": {},
"outputs": [],
"source": [
"def softmax(logits):\n",
" exps = np.exp(logits)\n",
" exp_sums = exps.sum(axis=1, keepdims=True)\n",
" return exps / exp_sums"
]
},
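{
"cell_type": "markdown",
"metadata": {},
"source": [
"This works fine here because the logits stay small, but with large logits `np.exp()` can overflow. A common trick (not in the book) is to subtract each row's maximum logit before exponentiating; this leaves the softmax output unchanged:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# not in the book – a numerically stable variant of the softmax function\n",
"def softmax_stable(logits):\n",
"    shifted = logits - logits.max(axis=1, keepdims=True)  # avoids exp() overflow\n",
"    exps = np.exp(shifted)\n",
"    return exps / exps.sum(axis=1, keepdims=True)"
]
},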
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We are almost ready to start training. Let's define the number of inputs and outputs:"
]
},
{
"cell_type": "code",
"execution_count": 70,
"metadata": {},
"outputs": [],
"source": [
"n_inputs = X_train.shape[1] # == 3 (2 features plus the bias term)\n",
"n_outputs = len(np.unique(y_train)) # == 3 (there are 3 iris classes)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now here comes the hardest part: training! Theoretically, it's simple: it's just a matter of translating the math equations into Python code. But in practice, it can be quite tricky: in particular, it's easy to mix up the order of the terms, or the indices. You can even end up with code that looks like it's working but is actually not computing exactly the right thing. When unsure, you should write down the shape of each term in the equation and make sure the corresponding terms in your code match closely. It can also help to evaluate each term independently and print them out. The good news it that you won't have to do this everyday, since all this is well implemented by Scikit-Learn, but it will help you understand what's going on under the hood.\n",
"\n",
"So the equations we will need are the cost function:\n",
"\n",
"$J(\\mathbf{\\Theta}) =\n",
"- \\dfrac{1}{m}\\sum\\limits_{i=1}^{m}\\sum\\limits_{k=1}^{K}{y_k^{(i)}\\log\\left(\\hat{p}_k^{(i)}\\right)}$\n",
"\n",
"And the equation for the gradients:\n",
"\n",
"$\\nabla_{\\mathbf{\\theta}^{(k)}} \\, J(\\mathbf{\\Theta}) = \\dfrac{1}{m} \\sum\\limits_{i=1}^{m}{ \\left ( \\hat{p}^{(i)}_k - y_k^{(i)} \\right ) \\mathbf{x}^{(i)}}$\n",
"\n",
"Note that $\\log\\left(\\hat{p}_k^{(i)}\\right)$ may not be computable if $\\hat{p}_k^{(i)} = 0$. So we will add a tiny value $\\epsilon$ to $\\log\\left(\\hat{p}_k^{(i)}\\right)$ to avoid getting `nan` values."
]
},
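{
"cell_type": "markdown",
"metadata": {},
"source": [
"For example, here is a quick shape sanity check (not in the book), using the variables defined above, to confirm that each term has the shape the equations require:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# not in the book – check that the shapes of the terms match the equations\n",
"Theta_check = np.random.randn(n_inputs, n_outputs)\n",
"print(\"X_train:        \", X_train.shape)                  # (m, n_inputs)\n",
"print(\"Theta:          \", Theta_check.shape)              # (n_inputs, n_outputs)\n",
"print(\"logits:         \", (X_train @ Theta_check).shape)  # (m, n_outputs)\n",
"print(\"Y_train_one_hot:\", Y_train_one_hot.shape)          # (m, n_outputs)"
]
},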
{
"cell_type": "code",
"execution_count": 71,
"metadata": {},
"outputs": [],
"source": [
"eta = 0.5\n",
"n_epochs = 5001\n",
"m = len(X_train)\n",
"epsilon = 1e-5\n",
"\n",
"np.random.seed(42)\n",
"Theta = np.random.randn(n_inputs, n_outputs)\n",
"\n",
"for epoch in range(n_epochs):\n",
" logits = X_train @ Theta\n",
" Y_proba = softmax(logits)\n",
" if epoch % 1000 == 0:\n",
" Y_proba_valid = softmax(X_valid @ Theta)\n",
" xentropy_losses = -(Y_valid_one_hot * np.log(Y_proba_valid + epsilon))\n",
" print(epoch, xentropy_losses.sum(axis=1).mean())\n",
" error = Y_proba - Y_train_one_hot\n",
" gradients = 1 / m * X_train.T @ error\n",
" Theta = Theta - eta * gradients"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"And that's it! The Softmax model is trained. Let's look at the model parameters:"
]
},
{
"cell_type": "code",
"execution_count": 72,
"metadata": {},
"outputs": [],
"source": [
"Theta"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Let's make predictions for the validation set and check the accuracy score:"
]
},
{
"cell_type": "code",
"execution_count": 73,
"metadata": {},
"outputs": [],
"source": [
"logits = X_valid @ Theta\n",
"Y_proba = softmax(logits)\n",
"y_predict = Y_proba.argmax(axis=1)\n",
"\n",
"accuracy_score = (y_predict == y_valid).mean()\n",
"accuracy_score"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Well, this model looks pretty ok. For the sake of the exercise, let's add a bit of $\\ell_2$ regularization. The following training code is similar to the one above, but the loss now has an additional $\\ell_2$ penalty, and the gradients have the proper additional term (note that we don't regularize the first element of `Theta` since this corresponds to the bias term). Also, let's try increasing the learning rate `eta`."
]
},
{
"cell_type": "code",
"execution_count": 74,
"metadata": {},
"outputs": [],
"source": [
"eta = 0.5\n",
"n_epochs = 5001\n",
"m = len(X_train)\n",
"epsilon = 1e-5\n",
"alpha = 0.01 # regularization hyperparameter\n",
"\n",
"np.random.seed(42)\n",
"Theta = np.random.randn(n_inputs, n_outputs)\n",
"\n",
"for epoch in range(n_epochs):\n",
" logits = X_train @ Theta\n",
" Y_proba = softmax(logits)\n",
" if epoch % 1000 == 0:\n",
" Y_proba_valid = softmax(X_valid @ Theta)\n",
" xentropy_losses = -(Y_valid_one_hot * np.log(Y_proba_valid + epsilon))\n",
" l2_loss = 1 / 2 * (Theta[1:] ** 2).sum()\n",
" total_loss = xentropy_losses.sum(axis=1).mean() + alpha * l2_loss\n",
" print(epoch, total_loss.round(4))\n",
" error = Y_proba - Y_train_one_hot\n",
" gradients = 1 / m * X_train.T @ error\n",
" gradients += np.r_[np.zeros([1, n_outputs]), alpha * Theta[1:]]\n",
" Theta = Theta - eta * gradients"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Because of the additional $\\ell_2$ penalty, the loss seems greater than earlier, but perhaps this model will perform better? Let's find out:"
]
},
{
"cell_type": "code",
"execution_count": 75,
"metadata": {},
"outputs": [],
"source": [
"logits = X_valid @ Theta\n",
"Y_proba = softmax(logits)\n",
"y_predict = Y_proba.argmax(axis=1)\n",
"\n",
"accuracy_score = (y_predict == y_valid).mean()\n",
"accuracy_score"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"In this case, the $\\ell_2$ penalty did not change the test accuracy. Perhaps try fine-tuning `alpha`?"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now let's add early stopping. For this we just need to measure the loss on the validation set at every iteration and stop when the error starts growing."
]
},
{
"cell_type": "code",
"execution_count": 76,
"metadata": {},
"outputs": [],
"source": [
"eta = 0.5\n",
"n_epochs = 50_001\n",
"m = len(X_train)\n",
"epsilon = 1e-5\n",
"C = 100 # regularization hyperparameter\n",
"best_loss = np.infty\n",
"\n",
"np.random.seed(42)\n",
"Theta = np.random.randn(n_inputs, n_outputs)\n",
"\n",
"for epoch in range(n_epochs):\n",
" logits = X_train @ Theta\n",
" Y_proba = softmax(logits)\n",
" Y_proba_valid = softmax(X_valid @ Theta)\n",
" xentropy_losses = -(Y_valid_one_hot * np.log(Y_proba_valid + epsilon))\n",
" l2_loss = 1 / 2 * (Theta[1:] ** 2).sum()\n",
" total_loss = xentropy_losses.sum(axis=1).mean() + 1 / C * l2_loss\n",
" if epoch % 1000 == 0:\n",
" print(epoch, total_loss.round(4))\n",
" if total_loss < best_loss:\n",
" best_loss = total_loss\n",
" else:\n",
" print(epoch - 1, best_loss.round(4))\n",
" print(epoch, total_loss.round(4), \"early stopping!\")\n",
" break\n",
" error = Y_proba - Y_train_one_hot\n",
" gradients = 1 / m * X_train.T @ error\n",
" gradients += np.r_[np.zeros([1, n_outputs]), 1 / C * Theta[1:]]\n",
" Theta = Theta - eta * gradients"
]
},
{
"cell_type": "code",
"execution_count": 77,
"metadata": {},
"outputs": [],
"source": [
"logits = X_valid @ Theta\n",
"Y_proba = softmax(logits)\n",
"y_predict = Y_proba.argmax(axis=1)\n",
"\n",
"accuracy_score = (y_predict == y_valid).mean()\n",
"accuracy_score"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Oh well, still no change in validation acccuracy, but at least early training shortened training a bit."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now let's plot the model's predictions on the whole dataset (remember to scale all features fed to the model):"
]
},
{
"cell_type": "code",
"execution_count": 78,
"metadata": {},
"outputs": [],
"source": [
"custom_cmap = mpl.colors.ListedColormap(['#fafab0','#9898ff','#a0faa0'])\n",
"\n",
"x0, x1 = np.meshgrid(np.linspace(0, 8, 500).reshape(-1, 1),\n",
" np.linspace(0, 3.5, 200).reshape(-1, 1))\n",
"X_new = np.c_[x0.ravel(), x1.ravel()]\n",
"X_new = (X_new - mean) / std\n",
"X_new_with_bias = np.c_[np.ones(len(X_new)), X_new]\n",
"\n",
"logits = X_new_with_bias @ Theta\n",
"Y_proba = softmax(logits)\n",
"y_predict = Y_proba.argmax(axis=1)\n",
"\n",
"zz1 = Y_proba[:, 1].reshape(x0.shape)\n",
"zz = y_predict.reshape(x0.shape)\n",
"\n",
"plt.figure(figsize=(10, 4))\n",
"plt.plot(X[y == 2, 0], X[y == 2, 1], \"g^\", label=\"Iris virginica\")\n",
"plt.plot(X[y == 1, 0], X[y == 1, 1], \"bs\", label=\"Iris versicolor\")\n",
"plt.plot(X[y == 0, 0], X[y == 0, 1], \"yo\", label=\"Iris setosa\")\n",
"\n",
"plt.contourf(x0, x1, zz, cmap=custom_cmap)\n",
"contour = plt.contour(x0, x1, zz1, cmap=\"hot\")\n",
"plt.clabel(contour, inline=1)\n",
"plt.xlabel(\"Petal length\")\n",
"plt.ylabel(\"Petal width\")\n",
"plt.legend(loc=\"upper left\")\n",
"plt.axis([0, 7, 0, 3.5])\n",
"plt.grid()\n",
"plt.show()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"And now let's measure the final model's accuracy on the test set:"
]
},
{
"cell_type": "code",
"execution_count": 79,
"metadata": {},
"outputs": [],
"source": [
"logits = X_test @ Theta\n",
"Y_proba = softmax(logits)\n",
"y_predict = Y_proba.argmax(axis=1)\n",
"\n",
"accuracy_score = (y_predict == y_test).mean()\n",
"accuracy_score"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Well we get even better performance on the test set. This variability is likely due to the very small size of the dataset: depending on how you sample the training set, validation set and the test set, you can get quite different results. Try changing the random seed and running the code again a few times, you will see that the results will vary."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.8.12"
},
"nav_menu": {},
"toc": {
"navigate_menu": true,
"number_sections": true,
"sideBar": true,
"threshold": 6,
"toc_cell": false,
"toc_section_display": "block",
"toc_window_display": false
}
},
"nbformat": 4,
"nbformat_minor": 4
}