{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"**Chapter 3 Classification**"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"_This notebook contains all the sample code and solutions to the exercises in chapter 3._"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<table align=\"left\">\n",
" <td>\n",
" <a href=\"https://colab.research.google.com/github/ageron/handson-ml3/blob/main/03_classification.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>\n",
" </td>\n",
" <td>\n",
" <a target=\"_blank\" href=\"https://kaggle.com/kernels/welcome?src=https://github.com/ageron/handson-ml3/blob/main/03_classification.ipynb\"><img src=\"https://kaggle.com/static/images/open-in-kaggle.svg\" /></a>\n",
" </td>\n",
"</table>"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Setup"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"This project requires Python 3.8 or above:"
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [],
"source": [
"import sys\n",
"\n",
"assert sys.version_info >= (3, 8)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"It also requires Scikit-Learn ≥ 1.0.1:"
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {},
"outputs": [],
"source": [
"import sklearn\n",
"\n",
"assert sklearn.__version__ >= \"1.0.1\""
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Just like in the previous chapter, let's define the default font sizes to make the figures prettier:"
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {},
"outputs": [],
"source": [
"import matplotlib.pyplot as plt\n",
"\n",
"plt.rc('font', size=12)\n",
"plt.rc('axes', labelsize=14, titlesize=14)\n",
"plt.rc('legend', fontsize=14)\n",
"plt.rc('xtick',labelsize=10)\n",
"plt.rc('ytick',labelsize=10)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"And let's create the `images/classification` folder (if it doesn't already exist), and define the `save_fig()` function which is used through this notebook to save the figures in high-res for the book:"
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {},
"outputs": [],
"source": [
"from pathlib import Path\n",
"\n",
"IMAGES_PATH = Path() / \"images\" / \"classification\"\n",
"IMAGES_PATH.mkdir(parents=True, exist_ok=True)\n",
"\n",
"def save_fig(fig_id, tight_layout=True, fig_extension=\"png\", resolution=300):\n",
" path = IMAGES_PATH / f\"{fig_id}.{fig_extension}\"\n",
" if tight_layout:\n",
" plt.tight_layout()\n",
" plt.savefig(path, format=fig_extension, dpi=resolution)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# MNIST"
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {},
"outputs": [],
"source": [
"from sklearn.datasets import fetch_openml\n",
"\n",
"mnist = fetch_openml('mnist_784', as_frame=False)"
]
},
{
"cell_type": "code",
"execution_count": 6,
"metadata": {},
"outputs": [],
"source": [
"# not in the book it's a bit too long\n",
"print(mnist.DESCR)"
]
},
{
"cell_type": "code",
"execution_count": 7,
"metadata": {},
"outputs": [],
"source": [
"mnist.keys() # not in the book we only use data and target in this notebook"
]
},
{
"cell_type": "code",
"execution_count": 8,
"metadata": {},
"outputs": [],
"source": [
"X, y = mnist.data, mnist.target\n",
"X"
]
},
{
"cell_type": "code",
"execution_count": 9,
"metadata": {},
"outputs": [],
"source": [
"X.shape"
]
},
{
"cell_type": "code",
"execution_count": 10,
"metadata": {},
"outputs": [],
"source": [
"y"
]
},
{
"cell_type": "code",
"execution_count": 11,
"metadata": {},
"outputs": [],
"source": [
"y.shape"
]
},
{
"cell_type": "code",
"execution_count": 12,
"metadata": {},
"outputs": [],
"source": [
"28 * 28"
]
},
{
"cell_type": "code",
"execution_count": 13,
"metadata": {},
"outputs": [],
"source": [
"import matplotlib.pyplot as plt\n",
"\n",
"def plot_digit(image_data):\n",
" image = image_data.reshape(28, 28)\n",
" plt.imshow(image, cmap=\"binary\")\n",
" plt.axis(\"off\")\n",
"\n",
"some_digit = X[0]\n",
"plot_digit(some_digit)\n",
"save_fig(\"some_digit_plot\") # not in the book\n",
"plt.show()"
]
},
{
"cell_type": "code",
"execution_count": 14,
"metadata": {},
"outputs": [],
"source": [
"y[0]"
]
},
{
"cell_type": "code",
"execution_count": 15,
"metadata": {},
"outputs": [],
"source": [
"# not in the book this code generates Figure 32\n",
"plt.figure(figsize=(9, 9))\n",
"for idx, image_data in enumerate(X[:100]):\n",
" plt.subplot(10, 10, idx + 1)\n",
" plot_digit(image_data)\n",
"plt.subplots_adjust(wspace=0, hspace=0)\n",
"save_fig(\"more_digits_plot\", tight_layout=False)\n",
"plt.show()"
]
},
{
"cell_type": "code",
"execution_count": 16,
"metadata": {},
"outputs": [],
"source": [
"X_train, X_test, y_train, y_test = X[:60000], X[60000:], y[:60000], y[60000:]"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Training a Binary Classifier"
]
},
{
"cell_type": "code",
"execution_count": 17,
"metadata": {},
"outputs": [],
"source": [
"y_train_5 = (y_train == '5') # True for all 5s, False for all other digits\n",
"y_test_5 = (y_test == '5')"
]
},
{
"cell_type": "code",
"execution_count": 18,
"metadata": {},
"outputs": [],
"source": [
"from sklearn.linear_model import SGDClassifier\n",
"\n",
"sgd_clf = SGDClassifier(random_state=42)\n",
"sgd_clf.fit(X_train, y_train_5)"
]
},
{
"cell_type": "code",
"execution_count": 19,
"metadata": {},
"outputs": [],
"source": [
"sgd_clf.predict([some_digit])"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Performance Measures"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Measuring Accuracy Using Cross-Validation"
]
},
{
"cell_type": "code",
"execution_count": 20,
"metadata": {},
"outputs": [],
"source": [
"from sklearn.model_selection import cross_val_score\n",
"\n",
"cross_val_score(sgd_clf, X_train, y_train_5, cv=3, scoring=\"accuracy\")"
]
},
{
"cell_type": "code",
"execution_count": 21,
"metadata": {},
"outputs": [],
"source": [
"from sklearn.model_selection import StratifiedKFold\n",
"from sklearn.base import clone\n",
"\n",
"skfolds = StratifiedKFold(n_splits=3) # add shuffle=True is the dataset is not\n",
" # already shuffled\n",
"for train_index, test_index in skfolds.split(X_train, y_train_5):\n",
" clone_clf = clone(sgd_clf)\n",
" X_train_folds = X_train[train_index]\n",
" y_train_folds = y_train_5[train_index]\n",
" X_test_fold = X_train[test_index]\n",
" y_test_fold = y_train_5[test_index]\n",
"\n",
" clone_clf.fit(X_train_folds, y_train_folds)\n",
" y_pred = clone_clf.predict(X_test_fold)\n",
" n_correct = sum(y_pred == y_test_fold)\n",
" print(n_correct / len(y_pred))"
]
},
{
"cell_type": "code",
"execution_count": 22,
"metadata": {},
"outputs": [],
"source": [
"from sklearn.dummy import DummyClassifier\n",
"\n",
"dummy_clf = DummyClassifier()\n",
"dummy_clf.fit(X_train, y_train_5)\n",
"print(any(dummy_clf.predict(X_train)))"
]
},
{
"cell_type": "code",
"execution_count": 23,
"metadata": {},
"outputs": [],
"source": [
"cross_val_score(dummy_clf, X_train, y_train_5, cv=3, scoring=\"accuracy\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Confusion Matrix"
]
},
{
"cell_type": "code",
"execution_count": 24,
"metadata": {},
"outputs": [],
"source": [
"from sklearn.model_selection import cross_val_predict\n",
"\n",
"y_train_pred = cross_val_predict(sgd_clf, X_train, y_train_5, cv=3)"
]
},
{
"cell_type": "code",
"execution_count": 25,
"metadata": {},
"outputs": [],
"source": [
"from sklearn.metrics import confusion_matrix\n",
"\n",
"cm = confusion_matrix(y_train_5, y_train_pred)\n",
"cm"
]
},
{
"cell_type": "code",
"execution_count": 26,
"metadata": {},
"outputs": [],
"source": [
"y_train_perfect_predictions = y_train_5 # pretend we reached perfection\n",
"confusion_matrix(y_train_5, y_train_perfect_predictions)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Precision and Recall"
]
},
{
"cell_type": "code",
"execution_count": 27,
"metadata": {},
"outputs": [],
"source": [
"from sklearn.metrics import precision_score, recall_score\n",
"\n",
"precision_score(y_train_5, y_train_pred) # == 3530 / (687 + 3530)"
]
},
{
"cell_type": "code",
"execution_count": 28,
"metadata": {},
"outputs": [],
"source": [
"# not in the book this code also computes the precision: TP / (FP + TP)\n",
"cm[1, 1] / (cm[0, 1] + cm[1, 1])"
]
},
{
"cell_type": "code",
"execution_count": 29,
"metadata": {},
"outputs": [],
"source": [
"recall_score(y_train_5, y_train_pred) # == 3530 / (1891 + 3530)"
]
},
{
"cell_type": "code",
"execution_count": 30,
"metadata": {},
"outputs": [],
"source": [
"# not in the book this code also computes the recall: TP / (FN + TP)\n",
"cm[1, 1] / (cm[1, 0] + cm[1, 1])"
]
},
{
"cell_type": "code",
"execution_count": 31,
"metadata": {},
"outputs": [],
"source": [
"from sklearn.metrics import f1_score\n",
"\n",
"f1_score(y_train_5, y_train_pred)"
]
},
{
"cell_type": "code",
"execution_count": 32,
"metadata": {},
"outputs": [],
"source": [
"# not in the book this code also computes the f1 score\n",
"cm[1, 1] / (cm[1, 1] + (cm[1, 0] + cm[0, 1]) / 2)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Precision/Recall Trade-off"
]
},
{
"cell_type": "code",
"execution_count": 33,
"metadata": {},
"outputs": [],
"source": [
"y_scores = sgd_clf.decision_function([some_digit])\n",
"y_scores"
]
},
{
"cell_type": "code",
"execution_count": 34,
"metadata": {},
"outputs": [],
"source": [
"threshold = 0\n",
"y_some_digit_pred = (y_scores > threshold)"
]
},
{
"cell_type": "code",
"execution_count": 35,
"metadata": {},
"outputs": [],
"source": [
"y_some_digit_pred"
]
},
{
"cell_type": "code",
"execution_count": 36,
"metadata": {},
"outputs": [],
"source": [
"# not in the book this code just shows that y_scores > 0 produces the same\n",
"# result as calling predict()\n",
"y_scores > 0"
]
},
{
"cell_type": "code",
"execution_count": 37,
"metadata": {},
"outputs": [],
"source": [
"threshold = 3000\n",
"y_some_digit_pred = (y_scores > threshold)\n",
"y_some_digit_pred"
]
},
{
"cell_type": "code",
"execution_count": 38,
"metadata": {},
"outputs": [],
"source": [
"y_scores = cross_val_predict(sgd_clf, X_train, y_train_5, cv=3,\n",
" method=\"decision_function\")"
]
},
{
"cell_type": "code",
"execution_count": 39,
"metadata": {},
"outputs": [],
"source": [
"from sklearn.metrics import precision_recall_curve\n",
"\n",
"precisions, recalls, thresholds = precision_recall_curve(y_train_5, y_scores)"
]
},
{
"cell_type": "code",
"execution_count": 40,
"metadata": {},
"outputs": [],
"source": [
"plt.figure(figsize=(8, 4)) # not in the book it's not needed, just formatting\n",
"plt.plot(thresholds, precisions[:-1], \"b--\", label=\"Precision\", linewidth=2)\n",
"plt.plot(thresholds, recalls[:-1], \"g-\", label=\"Recall\", linewidth=2)\n",
"plt.vlines(threshold, 0, 1.0, \"k\", \"dotted\", label=\"threshold\")\n",
"\n",
"# not in the book this section just beautifies and saves Figure 35\n",
"idx = (thresholds >= threshold).argmax() # first index ≥ threshold\n",
"plt.plot(thresholds[idx], precisions[idx], \"bo\")\n",
"plt.plot(thresholds[idx], recalls[idx], \"go\")\n",
"plt.axis([-50000, 50000, 0, 1])\n",
"plt.grid()\n",
"plt.xlabel(\"Threshold\")\n",
"plt.legend(loc=\"center right\")\n",
"save_fig(\"precision_recall_vs_threshold_plot\")\n",
"\n",
"plt.show()"
]
},
{
"cell_type": "code",
"execution_count": 41,
"metadata": {},
"outputs": [],
"source": [
"import matplotlib.patches as patches # not in the book for the curved arrow\n",
"\n",
"plt.figure(figsize=(6, 5)) # not in the book not needed, just formatting\n",
"\n",
"plt.plot(recalls, precisions, linewidth=2, label=\"Precision/Recall curve\")\n",
"\n",
"# not in the book just beautifies and saves Figure 36\n",
"plt.plot([recalls[idx], recalls[idx]], [0., precisions[idx]], \"k:\")\n",
"plt.plot([0.0, recalls[idx]], [precisions[idx], precisions[idx]], \"k:\")\n",
"plt.plot([recalls[idx]], [precisions[idx]], \"ko\",\n",
" label=\"Point at threshold 3,000\")\n",
"plt.gca().add_patch(patches.FancyArrowPatch(\n",
" (0.79, 0.60), (0.61, 0.78),\n",
" connectionstyle=\"arc3,rad=.2\",\n",
" arrowstyle=\"Simple, tail_width=1.5, head_width=8, head_length=10\",\n",
" color=\"#444444\"))\n",
"plt.text(0.56, 0.62, \"Higher\\nthreshold\", fontsize=14, color=\"#333333\")\n",
"plt.xlabel(\"Recall\")\n",
"plt.ylabel(\"Precision\")\n",
"plt.axis([0, 1, 0, 1])\n",
"plt.grid()\n",
"plt.legend(loc=\"lower left\")\n",
"save_fig(\"precision_vs_recall_plot\")\n",
"\n",
"plt.show()"
]
},
{
"cell_type": "code",
"execution_count": 42,
"metadata": {},
"outputs": [],
"source": [
"idx_for_90_precision = (precisions >= 0.90).argmax()\n",
"threshold_for_90_precision = thresholds[idx_for_90_precision]\n",
"threshold_for_90_precision"
]
},
{
"cell_type": "code",
"execution_count": 43,
"metadata": {},
"outputs": [],
"source": [
"y_train_pred_90 = (y_scores >= threshold_for_90_precision)"
]
},
{
"cell_type": "code",
"execution_count": 44,
"metadata": {},
"outputs": [],
"source": [
"precision_score(y_train_5, y_train_pred_90)"
]
},
{
"cell_type": "code",
"execution_count": 45,
"metadata": {},
"outputs": [],
"source": [
"recall_at_90_precision = recall_score(y_train_5, y_train_pred_90)\n",
"recall_at_90_precision"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## The ROC Curve"
]
},
{
"cell_type": "code",
"execution_count": 46,
"metadata": {},
"outputs": [],
"source": [
"from sklearn.metrics import roc_curve\n",
"\n",
"fpr, tpr, thresholds = roc_curve(y_train_5, y_scores)"
]
},
{
"cell_type": "code",
"execution_count": 47,
"metadata": {},
"outputs": [],
"source": [
"idx_for_threshold_at_90 = (thresholds <= threshold_for_90_precision).argmax()\n",
"tpr_90, fpr_90 = tpr[idx_for_threshold_at_90], fpr[idx_for_threshold_at_90]\n",
"\n",
"plt.figure(figsize=(6, 5)) # not in the book not needed, just formatting\n",
"plt.plot(fpr, tpr, linewidth=2, label=\"ROC curve\")\n",
"plt.plot([0, 1], [0, 1], 'k:', label=\"Random classifier's ROC curve\")\n",
"plt.plot([fpr_90], [tpr_90], \"ko\", label=\"Threshold for 90% precision\")\n",
"\n",
"# not in the book just beautifies and saves Figure 37\n",
"plt.gca().add_patch(patches.FancyArrowPatch(\n",
" (0.20, 0.89), (0.07, 0.70),\n",
" connectionstyle=\"arc3,rad=.4\",\n",
" arrowstyle=\"Simple, tail_width=1.5, head_width=8, head_length=10\",\n",
" color=\"#444444\"))\n",
"plt.text(0.12, 0.71, \"Higher\\nthreshold\", fontsize=14, color=\"#333333\")\n",
"plt.xlabel('False Positive Rate (Fall-Out)')\n",
"plt.ylabel('True Positive Rate (Recall)')\n",
"plt.grid()\n",
"plt.axis([0, 1, 0, 1])\n",
"plt.legend(loc=\"lower right\")\n",
"save_fig(\"roc_curve_plot\")\n",
"\n",
"plt.show()"
]
},
{
"cell_type": "code",
"execution_count": 48,
"metadata": {},
"outputs": [],
"source": [
"from sklearn.metrics import roc_auc_score\n",
"\n",
"roc_auc_score(y_train_5, y_scores)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"**Warning:** the following cell may take a few minutes to run."
]
},
{
"cell_type": "code",
"execution_count": 49,
"metadata": {},
"outputs": [],
"source": [
"from sklearn.ensemble import RandomForestClassifier\n",
"\n",
"forest_clf = RandomForestClassifier(random_state=42)"
]
},
{
"cell_type": "code",
"execution_count": 50,
"metadata": {},
"outputs": [],
"source": [
"y_probas_forest = cross_val_predict(forest_clf, X_train, y_train_5, cv=3,\n",
" method=\"predict_proba\")"
]
},
{
"cell_type": "code",
"execution_count": 51,
"metadata": {},
"outputs": [],
"source": [
"y_probas_forest[:2]"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"These are _estimated probabilities_. Among the images that the model classified as positive with a probability between 50% and 60%, there are actually about 94% positive images:"
]
},
{
"cell_type": "code",
"execution_count": 52,
"metadata": {},
"outputs": [],
"source": [
"# Not in the code\n",
"idx_50_to_60 = (y_probas_forest[:, 1] > 0.50) & (y_probas_forest[:, 1] < 0.60)\n",
"print(f\"{(y_train_5[idx_50_to_60]).sum() / idx_50_to_60.sum():.1%}\")"
]
},
{
"cell_type": "code",
"execution_count": 53,
"metadata": {},
"outputs": [],
"source": [
"y_scores_forest = y_probas_forest[:, 1]\n",
"precisions_forest, recalls_forest, thresholds_forest = precision_recall_curve(\n",
" y_train_5, y_scores_forest)"
]
},
{
"cell_type": "code",
"execution_count": 54,
"metadata": {},
"outputs": [],
"source": [
"plt.figure(figsize=(6, 5)) # not in the book not needed, just formatting\n",
"\n",
"plt.plot(recalls_forest, precisions_forest, \"b-\", linewidth=2,\n",
" label=\"Random Forest\")\n",
"plt.plot(recalls, precisions, \"--\", linewidth=2, label=\"SGD\")\n",
"\n",
"# not in the book just beautifies and saves Figure 38\n",
"plt.xlabel(\"Recall\")\n",
"plt.ylabel(\"Precision\")\n",
"plt.axis([0, 1, 0, 1])\n",
"plt.grid()\n",
"plt.legend(loc=\"lower left\")\n",
"save_fig(\"pr_curve_comparison_plot\")\n",
"\n",
"plt.show()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We could use `cross_val_predict(forest_clf, X_train, y_train_5, cv=3)` to compute `y_train_pred_forest`, but since we already have the estimated probabilities, we can just use the default threshold of 50% probability to get the same predictions much faster:"
]
},
{
"cell_type": "code",
"execution_count": 55,
"metadata": {},
"outputs": [],
"source": [
"y_train_pred_forest = y_probas_forest[:, 1] >= 0.5 # positive proba ≥ 50%\n",
"f1_score(y_train_5, y_train_pred_forest)"
]
},
{
"cell_type": "code",
"execution_count": 56,
"metadata": {},
"outputs": [],
"source": [
"roc_auc_score(y_train_5, y_scores_forest)"
]
},
{
"cell_type": "code",
"execution_count": 57,
"metadata": {},
"outputs": [],
"source": [
"precision_score(y_train_5, y_train_pred_forest)"
]
},
{
"cell_type": "code",
"execution_count": 58,
"metadata": {},
"outputs": [],
"source": [
"recall_score(y_train_5, y_train_pred_forest)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Multiclass Classification"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"SVMs do not scale well to large datasets, so let's only train on the first 2,000 instances, or else this section will take a very long time to run:"
]
},
{
"cell_type": "code",
"execution_count": 59,
"metadata": {},
"outputs": [],
"source": [
"from sklearn.svm import SVC\n",
"\n",
"svm_clf = SVC(random_state=42)\n",
"svm_clf.fit(X_train[:2000], y_train[:2000]) # y_train, not y_train_5"
]
},
{
"cell_type": "code",
"execution_count": 60,
"metadata": {},
"outputs": [],
"source": [
"svm_clf.predict([some_digit])"
]
},
{
"cell_type": "code",
"execution_count": 61,
"metadata": {},
"outputs": [],
"source": [
"some_digit_scores = svm_clf.decision_function([some_digit])\n",
"some_digit_scores.round(2)"
]
},
{
"cell_type": "code",
"execution_count": 62,
"metadata": {},
"outputs": [],
"source": [
"class_id = some_digit_scores.argmax()\n",
"class_id"
]
},
{
"cell_type": "code",
"execution_count": 63,
"metadata": {},
"outputs": [],
"source": [
"svm_clf.classes_"
]
},
{
"cell_type": "code",
"execution_count": 64,
"metadata": {},
"outputs": [],
"source": [
"svm_clf.classes_[class_id]"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"If you want `decision_function()` to return all 45 scores, you can set the `decision_function_shape` hyperparameter to `\"ovo\"`. The default value is `\"ovr\"`, but don't let this confuse you: `SVC` always uses OvO for training. This hyperparameter only affects whether or not the 45 scores get aggregated or not:"
]
},
{
"cell_type": "code",
"execution_count": 65,
"metadata": {},
"outputs": [],
"source": [
"# not in the book this code shows how to get all 45 OvO scores if needed\n",
"svm_clf.decision_function_shape = \"ovo\"\n",
"some_digit_scores_ovo = svm_clf.decision_function([some_digit])\n",
"some_digit_scores_ovo.round(2)"
]
},
{
"cell_type": "code",
"execution_count": 66,
"metadata": {},
"outputs": [],
"source": [
"from sklearn.multiclass import OneVsRestClassifier\n",
"\n",
"ovr_clf = OneVsRestClassifier(SVC(random_state=42))\n",
"ovr_clf.fit(X_train[:2000], y_train[:2000])"
]
},
{
"cell_type": "code",
"execution_count": 67,
"metadata": {},
"outputs": [],
"source": [
"ovr_clf.predict([some_digit])"
]
},
{
"cell_type": "code",
"execution_count": 68,
"metadata": {},
"outputs": [],
"source": [
"len(ovr_clf.estimators_)"
]
},
{
"cell_type": "code",
"execution_count": 69,
"metadata": {},
"outputs": [],
"source": [
"sgd_clf = SGDClassifier(random_state=42)\n",
"sgd_clf.fit(X_train, y_train)\n",
"sgd_clf.predict([some_digit])"
]
},
{
"cell_type": "code",
"execution_count": 70,
"metadata": {},
"outputs": [],
"source": [
"sgd_clf.decision_function([some_digit]).round()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"**Warning:** the following two cells make take a few minutes each to run:"
]
},
{
"cell_type": "code",
"execution_count": 71,
"metadata": {},
"outputs": [],
"source": [
"cross_val_score(sgd_clf, X_train, y_train, cv=3, scoring=\"accuracy\")"
]
},
{
"cell_type": "code",
"execution_count": 72,
"metadata": {},
"outputs": [],
"source": [
"from sklearn.preprocessing import StandardScaler\n",
"\n",
"scaler = StandardScaler()\n",
"X_train_scaled = scaler.fit_transform(X_train.astype(\"float64\"))\n",
"cross_val_score(sgd_clf, X_train_scaled, y_train, cv=3, scoring=\"accuracy\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Error Analysis"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"**Warning:** the following cell will take a few minutes to run:"
]
},
{
"cell_type": "code",
"execution_count": 73,
"metadata": {},
"outputs": [],
"source": [
"from sklearn.metrics import ConfusionMatrixDisplay\n",
"\n",
"y_train_pred = cross_val_predict(sgd_clf, X_train_scaled, y_train, cv=3)\n",
"ConfusionMatrixDisplay.from_predictions(y_train, y_train_pred)\n",
"plt.show()"
]
},
{
"cell_type": "code",
"execution_count": 74,
"metadata": {},
"outputs": [],
"source": [
"ConfusionMatrixDisplay.from_predictions(y_train, y_train_pred,\n",
" normalize=\"true\", values_format=\".0%\")\n",
"plt.show()"
]
},
{
"cell_type": "code",
"execution_count": 75,
"metadata": {},
"outputs": [],
"source": [
"sample_weight = (y_train_pred != y_train)\n",
"ConfusionMatrixDisplay.from_predictions(y_train, y_train_pred,\n",
" sample_weight=sample_weight,\n",
" normalize=\"true\", values_format=\".0%\")\n",
"plt.show()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Let's put all plots in a couple of figures for the book:"
]
},
{
"cell_type": "code",
"execution_count": 76,
"metadata": {},
"outputs": [],
"source": [
"# not in the book this code generates Figure 39\n",
"fig, axs = plt.subplots(nrows=1, ncols=2, figsize=(9, 4))\n",
"ConfusionMatrixDisplay.from_predictions(y_train, y_train_pred, ax=axs[0])\n",
"axs[0].set_title(\"Confusion matrix\")\n",
"ConfusionMatrixDisplay.from_predictions(y_train, y_train_pred, ax=axs[1],\n",
" normalize=\"true\", values_format=\".0%\")\n",
"axs[1].set_title(\"CM normalized by row\")\n",
"save_fig(\"confusion_matrix_plot_1\")\n",
"plt.show()"
]
},
{
"cell_type": "code",
"execution_count": 77,
"metadata": {},
"outputs": [],
"source": [
"# not in the book this code generates Figure 310\n",
"fig, axs = plt.subplots(nrows=1, ncols=2, figsize=(9, 4))\n",
"ConfusionMatrixDisplay.from_predictions(y_train, y_train_pred, ax=axs[0],\n",
" sample_weight=sample_weight,\n",
" normalize=\"true\", values_format=\".0%\")\n",
"axs[0].set_title(\"Errors normalized by row\")\n",
"ConfusionMatrixDisplay.from_predictions(y_train, y_train_pred, ax=axs[1],\n",
" sample_weight=sample_weight,\n",
" normalize=\"pred\", values_format=\".0%\")\n",
"axs[1].set_title(\"Errors normalized by column\")\n",
"save_fig(\"confusion_matrix_plot_2\")\n",
"plt.show()"
]
},
{
"cell_type": "code",
"execution_count": 78,
"metadata": {},
"outputs": [],
"source": [
"cl_a, cl_b = '3', '5'\n",
"X_aa = X_train[(y_train == cl_a) & (y_train_pred == cl_a)]\n",
"X_ab = X_train[(y_train == cl_a) & (y_train_pred == cl_b)]\n",
"X_ba = X_train[(y_train == cl_b) & (y_train_pred == cl_a)]\n",
"X_bb = X_train[(y_train == cl_b) & (y_train_pred == cl_b)]"
]
},
{
"cell_type": "code",
"execution_count": 79,
"metadata": {},
"outputs": [],
"source": [
"# not in the book this code generates Figure 311\n",
"size = 5\n",
"pad = 0.2\n",
"plt.figure(figsize=(size, size))\n",
"for images, (label_col, label_row) in [(X_ba, (0, 0)), (X_bb, (1, 0)),\n",
" (X_aa, (0, 1)), (X_ab, (1, 1))]:\n",
" for idx, image_data in enumerate(images[:size*size]):\n",
" x = idx % size + label_col * (size + pad)\n",
" y = idx // size + label_row * (size + pad)\n",
" plt.imshow(image_data.reshape(28, 28), cmap=\"binary\",\n",
" extent=(x, x + 1, y, y + 1))\n",
"plt.xticks([size / 2, size + pad + size / 2], [str(cl_a), str(cl_b)])\n",
"plt.yticks([size / 2, size + pad + size / 2], [str(cl_b), str(cl_a)])\n",
"plt.plot([size + pad / 2, size + pad / 2], [0, 2 * size + pad], \"k:\")\n",
"plt.plot([0, 2 * size + pad], [size + pad / 2, size + pad / 2], \"k:\")\n",
"plt.axis([0, 2 * size + pad, 0, 2 * size + pad])\n",
"plt.xlabel(\"Predicted label\")\n",
"plt.ylabel(\"True label\")\n",
"save_fig(\"error_analysis_digits_plot\")\n",
"plt.show()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Note: there are several other ways you could code a plot like this one, but it's a bit hard to get the axis labels right:\n",
"* using [nested GridSpecs](https://matplotlib.org/stable/gallery/subplots_axes_and_figures/gridspec_nested.html)\n",
"* merging all the digits in each block into a single image (then using 2×2 subplots). For example:\n",
" ```python\n",
" X_aa[:25].reshape(5, 5, 28, 28).transpose(0, 2, 1, 3).reshape(5 * 28, 5 * 28)\n",
" ```\n",
"* using [subfigures](https://matplotlib.org/stable/gallery/subplots_axes_and_figures/subfigures.html) (since Matplotlib 3.4)"
]
},
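{
"cell_type": "markdown",
"metadata": {},
"source": [
"Here is a minimal sketch of the second option above (not in the book). It merges each block into a single image, uses 2×2 subplots, and sidesteps the axis-label problem with per-subplot titles. It assumes each of the four blocks contains at least 25 images:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# not in the book – a sketch of the \"merged image\" alternative; it assumes\n",
"# each block has at least 25 images\n",
"fig, axs = plt.subplots(2, 2, figsize=(5, 5))\n",
"for ax, images, title in zip(axs.flat, (X_aa, X_ab, X_ba, X_bb),\n",
"                             (\"3 classified as 3\", \"3 classified as 5\",\n",
"                              \"5 classified as 3\", \"5 classified as 5\")):\n",
"    merged = images[:25].reshape(5, 5, 28, 28).transpose(0, 2, 1, 3)\n",
"    ax.imshow(merged.reshape(5 * 28, 5 * 28), cmap=\"binary\")\n",
"    ax.set_title(title)\n",
"    ax.axis(\"off\")\n",
"plt.show()"
]
},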
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Multilabel Classification"
]
},
{
"cell_type": "code",
"execution_count": 80,
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"import numpy as np\n",
"from sklearn.neighbors import KNeighborsClassifier\n",
"\n",
"y_train_large = (y_train >= '7')\n",
"y_train_odd = (y_train.astype('int8') % 2 == 1)\n",
"y_multilabel = np.c_[y_train_large, y_train_odd]\n",
"\n",
"knn_clf = KNeighborsClassifier()\n",
"knn_clf.fit(X_train, y_multilabel)"
]
},
{
"cell_type": "code",
"execution_count": 81,
"metadata": {},
"outputs": [],
"source": [
"knn_clf.predict([some_digit])"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"**Warning**: the following cell may take a few minutes:"
]
},
{
"cell_type": "code",
"execution_count": 82,
"metadata": {},
"outputs": [],
"source": [
"y_train_knn_pred = cross_val_predict(knn_clf, X_train, y_multilabel, cv=3)\n",
"f1_score(y_multilabel, y_train_knn_pred, average=\"macro\")"
]
},
{
"cell_type": "code",
"execution_count": 83,
"metadata": {},
"outputs": [],
"source": [
"# not in the book this code shows that we get a negligible performance\n",
"# improvement when we set average=\"weighted\" because the\n",
"# classes are already pretty well balanced.\n",
"f1_score(y_multilabel, y_train_knn_pred, average=\"weighted\")"
]
},
{
"cell_type": "code",
"execution_count": 84,
"metadata": {},
"outputs": [],
"source": [
"from sklearn.multioutput import ClassifierChain\n",
"\n",
"chain_clf = ClassifierChain(SVC(), cv=3, random_state=42)\n",
"chain_clf.fit(X_train[:2000], y_multilabel[:2000])"
]
},
{
"cell_type": "code",
"execution_count": 85,
"metadata": {},
"outputs": [],
"source": [
"chain_clf.predict([some_digit])"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Multioutput Classification"
]
},
{
"cell_type": "code",
"execution_count": 86,
"metadata": {},
"outputs": [],
"source": [
"np.random.seed(42) # to make this code example reproducible\n",
"noise = np.random.randint(0, 100, (len(X_train), 784))\n",
"X_train_mod = X_train + noise\n",
"noise = np.random.randint(0, 100, (len(X_test), 784))\n",
"X_test_mod = X_test + noise\n",
"y_train_mod = X_train\n",
"y_test_mod = X_test"
]
},
{
"cell_type": "code",
"execution_count": 87,
"metadata": {},
"outputs": [],
"source": [
"# not in the book this code generates Figure 312\n",
"plt.subplot(121); plot_digit(X_test_mod[0])\n",
"plt.subplot(122); plot_digit(y_test_mod[0])\n",
"save_fig(\"noisy_digit_example_plot\")\n",
"plt.show()"
]
},
{
"cell_type": "code",
"execution_count": 88,
"metadata": {},
"outputs": [],
"source": [
"knn_clf = KNeighborsClassifier()\n",
"knn_clf.fit(X_train_mod, y_train_mod)\n",
"clean_digit = knn_clf.predict([X_test_mod[0]])\n",
"plot_digit(clean_digit)\n",
"save_fig(\"cleaned_digit_example_plot\") # not in the book saves Figure 313\n",
"plt.show()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Extra Material — Calibrating Estimated Probabilities"
]
},
{
"cell_type": "code",
"execution_count": 89,
"metadata": {},
"outputs": [],
"source": [
"# TODO"
]
},
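{
"cell_type": "markdown",
"metadata": {},
"source": [
"In the meantime, here is a minimal sketch (not in the book) of one way to inspect calibration: plot a _calibration curve_ for the random forest probabilities computed earlier, using `sklearn.calibration.calibration_curve()`. A well-calibrated classifier's curve hugs the diagonal:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# not in the book – a calibration sketch; it reuses y_train_5 and\n",
"# y_scores_forest from the Performance Measures section above\n",
"from sklearn.calibration import calibration_curve\n",
"\n",
"prob_true, prob_pred = calibration_curve(y_train_5, y_scores_forest, n_bins=10)\n",
"plt.plot([0, 1], [0, 1], \"k:\", label=\"Perfectly calibrated\")\n",
"plt.plot(prob_pred, prob_true, \"o-\", label=\"Random forest\")\n",
"plt.xlabel(\"Mean estimated probability\")\n",
"plt.ylabel(\"Fraction of positives\")\n",
"plt.grid()\n",
"plt.legend()\n",
"plt.show()"
]
},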
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Exercise solutions"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 1. An MNIST Classifier With Over 97% Accuracy"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Exercise: _Try to build a classifier for the MNIST dataset that achieves over 97% accuracy on the test set. Hint: the `KNeighborsClassifier` works quite well for this task; you just need to find good hyperparameter values (try a grid search on the `weights` and `n_neighbors` hyperparameters)._"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Let's start with a simple K-Nearest Neighbors classifier and measure its performance on the test set. This will be our baseline:"
]
},
{
"cell_type": "code",
"execution_count": 90,
"metadata": {},
"outputs": [],
"source": [
"knn_clf = KNeighborsClassifier()\n",
"knn_clf.fit(X_train, y_train)\n",
"baseline_accuracy = knn_clf.score(X_test, y_test)\n",
"baseline_accuracy"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Great! A regular KNN classifier with the default hyperparameters is already very close to our goal."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Let's see if we tuning the hyperparameters can help. To speed up the search, let's train only on the first 10,000 images:"
]
},
{
"cell_type": "code",
"execution_count": 91,
"metadata": {},
"outputs": [],
"source": [
"from sklearn.model_selection import GridSearchCV\n",
"\n",
"param_grid = [{'weights': [\"uniform\", \"distance\"], 'n_neighbors': [3, 4, 5, 6]}]\n",
"\n",
"knn_clf = KNeighborsClassifier()\n",
"grid_search = GridSearchCV(knn_clf, param_grid, cv=5)\n",
"grid_search.fit(X_train[:10_000], y_train[:10_000])"
]
},
{
"cell_type": "code",
"execution_count": 92,
"metadata": {},
"outputs": [],
"source": [
"grid_search.best_params_"
]
},
{
"cell_type": "code",
"execution_count": 93,
"metadata": {},
"outputs": [],
"source": [
"grid_search.best_score_"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The score dropped, but that was expected since we only trained on 10,000 images. So let's take the best model and train it again on the full training set:"
]
},
{
"cell_type": "code",
"execution_count": 94,
"metadata": {},
"outputs": [],
"source": [
"grid_search.best_estimator_.fit(X_train, y_train)\n",
"tuned_accuracy = grid_search.score(X_test, y_test)\n",
"tuned_accuracy"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We reached our goal of 97% accuracy! 🥳"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 2. Data Augmentation"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Exercise: _Write a function that can shift an MNIST image in any direction (left, right, up, or down) by one pixel. You can use the `shift()` function from the `scipy.ndimage.interpolation` module. For example, `shift(image, [2, 1], cval=0)` shifts the image two pixels down and one pixel to the right. Then, for each image in the training set, create four shifted copies (one per direction) and add them to the training set. Finally, train your best model on this expanded training set and measure its accuracy on the test set. You should observe that your model performs even better now! This technique of artificially growing the training set is called _data augmentation_ or _training set expansion_._"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Let's try augmenting the MNIST dataset by adding slightly shifted versions of each image."
]
},
{
"cell_type": "code",
"execution_count": 95,
"metadata": {},
"outputs": [],
"source": [
"from scipy.ndimage.interpolation import shift"
]
},
{
"cell_type": "code",
"execution_count": 96,
"metadata": {},
"outputs": [],
"source": [
"def shift_image(image, dx, dy):\n",
" image = image.reshape((28, 28))\n",
" shifted_image = shift(image, [dy, dx], cval=0, mode=\"constant\")\n",
" return shifted_image.reshape([-1])"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Let's see if it works:"
]
},
{
"cell_type": "code",
"execution_count": 97,
"metadata": {},
"outputs": [],
"source": [
"image = X_train[1000] # some random digit to demo\n",
"shifted_image_down = shift_image(image, 0, 5)\n",
"shifted_image_left = shift_image(image, -5, 0)\n",
"\n",
"plt.figure(figsize=(12,3))\n",
"plt.subplot(131)\n",
"plt.title(\"Original\")\n",
"plt.imshow(image.reshape(28, 28),\n",
" interpolation=\"nearest\", cmap=\"Greys\")\n",
"plt.subplot(132)\n",
"plt.title(\"Shifted down\")\n",
"plt.imshow(shifted_image_down.reshape(28, 28),\n",
" interpolation=\"nearest\", cmap=\"Greys\")\n",
"plt.subplot(133)\n",
"plt.title(\"Shifted left\")\n",
"plt.imshow(shifted_image_left.reshape(28, 28),\n",
" interpolation=\"nearest\", cmap=\"Greys\")\n",
"plt.show()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Looks good! Now let's create an augmented training set by shifting every image left, right, up and down by one pixel:"
]
},
{
"cell_type": "code",
"execution_count": 98,
"metadata": {},
"outputs": [],
"source": [
"X_train_augmented = [image for image in X_train]\n",
"y_train_augmented = [label for label in y_train]\n",
"\n",
"for dx, dy in ((-1, 0), (1, 0), (0, 1), (0, -1)):\n",
" for image, label in zip(X_train, y_train):\n",
" X_train_augmented.append(shift_image(image, dx, dy))\n",
" y_train_augmented.append(label)\n",
"\n",
"X_train_augmented = np.array(X_train_augmented)\n",
"y_train_augmented = np.array(y_train_augmented)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Let's shuffle the augmented training set, or else all shifted images will be grouped together:"
]
},
{
"cell_type": "code",
"execution_count": 99,
"metadata": {},
"outputs": [],
"source": [
"shuffle_idx = np.random.permutation(len(X_train_augmented))\n",
"X_train_augmented = X_train_augmented[shuffle_idx]\n",
"y_train_augmented = y_train_augmented[shuffle_idx]"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now let's train the model using the best hyperparameters we found in the previous exercise:"
]
},
{
"cell_type": "code",
"execution_count": 100,
"metadata": {},
"outputs": [],
"source": [
"knn_clf = KNeighborsClassifier(**grid_search.best_params_)"
]
},
{
"cell_type": "code",
"execution_count": 101,
"metadata": {},
"outputs": [],
"source": [
"knn_clf.fit(X_train_augmented, y_train_augmented)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"**Warning**: the following cell may take a few minutes to run."
]
},
{
"cell_type": "code",
"execution_count": 102,
"metadata": {},
"outputs": [],
"source": [
"augmented_accuracy = knn_clf.score(X_test, y_test)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"By simply augmenting the data, we got a 0.5% accuracy boost. Perhaps this does not sound so impressive, but this actually means that the error rate dropped significantly:"
]
},
{
"cell_type": "code",
"execution_count": 103,
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"error_rate_change = (1 - augmented_accuracy) / (1 - tuned_accuracy) - 1\n",
"print(f\"error_rate_change = {error_rate_change:.0%}\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The error rate dropped quite a bit thanks to data augmentation."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 3. Tackle the Titanic dataset"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Exercise: _Tackle the Titanic dataset. A great place to start is on [Kaggle](https://www.kaggle.com/c/titanic). Alternatively, you can download the data from https://homl.info/titanic.tgz and unzip this tarball like you did for the housing data in Chapter 2. This will give you two CSV files: _train.csv_ and _test.csv_ which you can load using `pandas.read_csv()`. The goal is to train a classifier that can predict the `Survived` column based on the other columns._"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Let's fetch the data and load it:"
]
},
{
"cell_type": "code",
"execution_count": 104,
"metadata": {},
"outputs": [],
"source": [
"import pandas as pd\n",
"import urllib.request\n",
"\n",
"def load_titanic_data():\n",
" titanic_path = Path() / \"datasets\" / \"titanic\"\n",
" titanic_path.mkdir(parents=True, exist_ok=True)\n",
" filenames = (\"train.csv\", \"test.csv\")\n",
" for filename in filenames:\n",
" filepath = titanic_path / filename\n",
" if filepath.is_file():\n",
" continue\n",
" root = \"https://raw.githubusercontent.com/ageron/handson-ml3/main/\"\n",
" url = root + \"/datasets/titanic/\" + filename\n",
" print(\"Downloading\", filename)\n",
" urllib.request.urlretrieve(url, filepath)\n",
" return [pd.read_csv(titanic_path / filename) for filename in filenames]"
]
},
{
"cell_type": "code",
"execution_count": 105,
"metadata": {},
"outputs": [],
"source": [
"train_data, test_data = load_titanic_data()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The data is already split into a training set and a test set. However, the test data does *not* contain the labels: your goal is to train the best model you can using the training data, then make your predictions on the test data and upload them to Kaggle to see your final score."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Let's take a peek at the top few rows of the training set:"
]
},
{
"cell_type": "code",
"execution_count": 106,
"metadata": {},
"outputs": [],
"source": [
"train_data.head()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The attributes have the following meaning:\n",
"* **PassengerId**: a unique identifier for each passenger\n",
"* **Survived**: that's the target, 0 means the passenger did not survive, while 1 means he/she survived.\n",
"* **Pclass**: passenger class.\n",
"* **Name**, **Sex**, **Age**: self-explanatory\n",
"* **SibSp**: how many siblings & spouses of the passenger aboard the Titanic.\n",
"* **Parch**: how many children & parents of the passenger aboard the Titanic.\n",
"* **Ticket**: ticket id\n",
"* **Fare**: price paid (in pounds)\n",
"* **Cabin**: passenger's cabin number\n",
"* **Embarked**: where the passenger embarked the Titanic"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The goal is to predict whether or not a passenger survived based on attributes such as their age, sex, passenger class, where they embarked and so on."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Let's explicitly set the `PassengerId` column as the index column:"
]
},
{
"cell_type": "code",
"execution_count": 107,
"metadata": {},
"outputs": [],
"source": [
"train_data = train_data.set_index(\"PassengerId\")\n",
"test_data = test_data.set_index(\"PassengerId\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Let's get more info to see how much data is missing:"
]
},
{
"cell_type": "code",
"execution_count": 108,
"metadata": {},
"outputs": [],
"source": [
"train_data.info()"
]
},
{
"cell_type": "code",
"execution_count": 109,
"metadata": {},
"outputs": [],
"source": [
"train_data[train_data[\"Sex\"]==\"female\"][\"Age\"].median()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Okay, the **Age**, **Cabin** and **Embarked** attributes are sometimes null (less than 891 non-null), especially the **Cabin** (77% are null). We will ignore the **Cabin** for now and focus on the rest. The **Age** attribute has about 19% null values, so we will need to decide what to do with them. Replacing null values with the median age seems reasonable. We could be a bit smarter by predicting the age based on the other columns (for example, the median age is 37 in 1st class, 29 in 2nd class and 24 in 3rd class), but we'll keep things simple and just use the overall median age."
]
},
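{
"cell_type": "markdown",
"metadata": {},
"source": [
"The per-class median ages mentioned above are easy to check (this cell is not in the book):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# not in the book – median age per passenger class, as quoted above\n",
"train_data.groupby(\"Pclass\")[\"Age\"].median()"
]
},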
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The **Name** and **Ticket** attributes may have some value, but they will be a bit tricky to convert into useful numbers that a model can consume. So for now, we will ignore them."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Let's take a look at the numerical attributes:"
]
},
{
"cell_type": "code",
"execution_count": 110,
"metadata": {},
"outputs": [],
"source": [
"train_data.describe()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"* Yikes, only 38% **Survived**! 😭 That's close enough to 40%, so accuracy will be a reasonable metric to evaluate our model.\n",
"* The mean **Fare** was £32.20, which does not seem so expensive (but it was probably a lot of money back then).\n",
"* The mean **Age** was less than 30 years old."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Let's check that the target is indeed 0 or 1:"
]
},
{
"cell_type": "code",
"execution_count": 111,
"metadata": {},
"outputs": [],
"source": [
"train_data[\"Survived\"].value_counts()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now let's take a quick look at all the categorical attributes:"
]
},
{
"cell_type": "code",
"execution_count": 112,
"metadata": {},
"outputs": [],
"source": [
"train_data[\"Pclass\"].value_counts()"
]
},
{
"cell_type": "code",
"execution_count": 113,
"metadata": {},
"outputs": [],
"source": [
"train_data[\"Sex\"].value_counts()"
]
},
{
"cell_type": "code",
"execution_count": 114,
"metadata": {},
"outputs": [],
"source": [
"train_data[\"Embarked\"].value_counts()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The Embarked attribute tells us where the passenger embarked: C=Cherbourg, Q=Queenstown, S=Southampton."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now let's build our preprocessing pipelines, starting with the pipeline for numerical attributes:"
]
},
{
"cell_type": "code",
"execution_count": 115,
"metadata": {},
"outputs": [],
"source": [
"from sklearn.pipeline import Pipeline\n",
"from sklearn.impute import SimpleImputer\n",
"\n",
"num_pipeline = Pipeline([\n",
" (\"imputer\", SimpleImputer(strategy=\"median\")),\n",
" (\"scaler\", StandardScaler())\n",
" ])"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now we can build the pipeline for the categorical attributes:"
]
},
{
"cell_type": "code",
"execution_count": 116,
"metadata": {},
"outputs": [],
"source": [
"from sklearn.preprocessing import OrdinalEncoder, OneHotEncoder"
]
},
{
"cell_type": "code",
"execution_count": 117,
"metadata": {},
"outputs": [],
"source": [
"cat_pipeline = Pipeline([\n",
" (\"ordinal_encoder\", OrdinalEncoder()), \n",
" (\"imputer\", SimpleImputer(strategy=\"most_frequent\")),\n",
" (\"cat_encoder\", OneHotEncoder(sparse=False)),\n",
" ])"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Finally, let's join the numerical and categorical pipelines:"
]
},
{
"cell_type": "code",
"execution_count": 118,
"metadata": {},
"outputs": [],
"source": [
"from sklearn.compose import ColumnTransformer\n",
"\n",
"num_attribs = [\"Age\", \"SibSp\", \"Parch\", \"Fare\"]\n",
"cat_attribs = [\"Pclass\", \"Sex\", \"Embarked\"]\n",
"\n",
"preprocess_pipeline = ColumnTransformer([\n",
" (\"num\", num_pipeline, num_attribs),\n",
" (\"cat\", cat_pipeline, cat_attribs),\n",
" ])"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Cool! Now we have a nice preprocessing pipeline that takes the raw data and outputs numerical input features that we can feed to any Machine Learning model we want."
]
},
{
"cell_type": "code",
"execution_count": 119,
"metadata": {},
"outputs": [],
"source": [
"X_train = preprocess_pipeline.fit_transform(train_data)\n",
"X_train"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Let's not forget to get the labels:"
]
},
{
"cell_type": "code",
"execution_count": 120,
"metadata": {},
"outputs": [],
"source": [
"y_train = train_data[\"Survived\"]"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We are now ready to train a classifier. Let's start with a `RandomForestClassifier`:"
]
},
{
"cell_type": "code",
"execution_count": 121,
"metadata": {},
"outputs": [],
"source": [
"forest_clf = RandomForestClassifier(n_estimators=100, random_state=42)\n",
"forest_clf.fit(X_train, y_train)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Great, our model is trained, let's use it to make predictions on the test set:"
]
},
{
"cell_type": "code",
"execution_count": 122,
"metadata": {},
"outputs": [],
"source": [
"X_test = preprocess_pipeline.transform(test_data)\n",
"y_pred = forest_clf.predict(X_test)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"And now we could just build a CSV file with these predictions (respecting the format excepted by Kaggle), then upload it and hope for the best. But wait! We can do better than hope. Why don't we use cross-validation to have an idea of how good our model is?"
]
},
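{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# not in the book – a sketch of building the submission file; it assumes the\n",
"# two-column PassengerId,Survived format from the Kaggle competition page\n",
"submission = pd.DataFrame({\"Survived\": y_pred}, index=test_data.index)\n",
"submission.to_csv(\"titanic_submission.csv\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"But wait! We can do better than hope. Why don't we use cross-validation to get an idea of how good our model is?"
]
},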
{
"cell_type": "code",
"execution_count": 123,
"metadata": {},
"outputs": [],
"source": [
"forest_scores = cross_val_score(forest_clf, X_train, y_train, cv=10)\n",
"forest_scores.mean()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Okay, not too bad! Looking at the [leaderboard](https://www.kaggle.com/c/titanic/leaderboard) for the Titanic competition on Kaggle, you can see that our score is in the top 2%, woohoo! Some Kagglers reached 100% accuracy, but since you can easily find the [list of victims](https://www.encyclopedia-titanica.org/titanic-victims/) of the Titanic, it seems likely that there was little Machine Learning involved in their performance! 😆"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Let's try an `SVC`:"
]
},
{
"cell_type": "code",
"execution_count": 124,
"metadata": {},
"outputs": [],
"source": [
"from sklearn.svm import SVC\n",
"\n",
"svm_clf = SVC(gamma=\"auto\")\n",
"svm_scores = cross_val_score(svm_clf, X_train, y_train, cv=10)\n",
"svm_scores.mean()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Great! This model looks better."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"But instead of just looking at the mean accuracy across the 10 cross-validation folds, let's plot all 10 scores for each model, along with a box plot highlighting the lower and upper quartiles, and \"whiskers\" showing the extent of the scores (thanks to Nevin Yilmaz for suggesting this visualization). Note that the `boxplot()` function detects outliers (called \"fliers\") and does not include them within the whiskers. Specifically, if the lower quartile is $Q_1$ and the upper quartile is $Q_3$, then the interquartile range $IQR = Q_3 - Q_1$ (this is the box's height), and any score lower than $Q_1 - 1.5 \\times IQR$ is a flier, and so is any score greater than $Q3 + 1.5 \\times IQR$."
]
},
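{
"cell_type": "markdown",
"metadata": {},
"source": [
"To make the flier formula concrete, here is a quick check (not in the book) of the bounds for the SVM scores:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# not in the book – compute the flier bounds described above\n",
"q1, q3 = np.percentile(svm_scores, [25, 75])\n",
"iqr = q3 - q1\n",
"q1 - 1.5 * iqr, q3 + 1.5 * iqr"
]
},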
{
"cell_type": "code",
"execution_count": 125,
"metadata": {},
"outputs": [],
"source": [
"plt.figure(figsize=(8, 4))\n",
"plt.plot([1]*10, svm_scores, \".\")\n",
"plt.plot([2]*10, forest_scores, \".\")\n",
"plt.boxplot([svm_scores, forest_scores], labels=(\"SVM\",\"Random Forest\"))\n",
"plt.ylabel(\"Accuracy\")\n",
"plt.show()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The random forest classifier got a very high score on one of the 10 folds, but overall it had a lower mean score, as well as a bigger spread, so it looks like the SVM classifier is more likely to generalize well."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"To improve this result further, you could:\n",
"* Compare many more models and tune hyperparameters using cross validation and grid search,\n",
"* Do more feature engineering, for example:\n",
" * Try to convert numerical attributes to categorical attributes: for example, different age groups had very different survival rates (see below), so it may help to create an age bucket category and use it instead of the age. Similarly, it may be useful to have a special category for people traveling alone since only 30% of them survived (see below).\n",
" * Replace **SibSp** and **Parch** with their sum.\n",
" * Try to identify parts of names that correlate well with the **Survived** attribute.\n",
" * Use the **Cabin** column, for example take its first letter and treat it as a categorical attribute."
]
},
{
"cell_type": "code",
"execution_count": 126,
"metadata": {},
"outputs": [],
"source": [
"train_data[\"AgeBucket\"] = train_data[\"Age\"] // 15 * 15\n",
"train_data[[\"AgeBucket\", \"Survived\"]].groupby(['AgeBucket']).mean()"
]
},
{
"cell_type": "code",
"execution_count": 127,
"metadata": {},
"outputs": [],
"source": [
"train_data[\"RelativesOnboard\"] = train_data[\"SibSp\"] + train_data[\"Parch\"]\n",
"train_data[[\"RelativesOnboard\", \"Survived\"]].groupby(\n",
" ['RelativesOnboard']).mean()"
]
},
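{
"cell_type": "markdown",
"metadata": {},
"source": [
"And here is a minimal sketch (not in the book) of the name-based suggestion: extract each passenger's title (Mr, Mrs, Miss, etc.) from the **Name** column with a regular expression, then look at the survival rate per title:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# not in the book – a sketch of the \"parts of names\" suggestion above\n",
"titles = train_data[\"Name\"].str.extract(r\",\\s*([^.]*)\\.\", expand=False)\n",
"train_data[\"Title\"] = titles\n",
"train_data[[\"Title\", \"Survived\"]].groupby(\"Title\").mean().sort_values(\n",
"    \"Survived\", ascending=False)"
]
},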
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 4. Spam classifier"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Exercise: _Build a spam classifier (a more challenging exercise):_\n",
"\n",
"* _Download examples of spam and ham from [Apache SpamAssassin's public datasets](https://homl.info/spamassassin)._\n",
"* _Unzip the datasets and familiarize yourself with the data format._\n",
"* _Split the datasets into a training set and a test set._\n",
"* _Write a data preparation pipeline to convert each email into a feature vector. Your preparation pipeline should transform an email into a (sparse) vector that indicates the presence or absence of each possible word. For example, if all emails only ever contain four words, \"Hello,\" \"how,\" \"are,\" \"you,\" then the email \"Hello you Hello Hello you\" would be converted into a vector [1, 0, 0, 1] (meaning [“Hello\" is present, \"how\" is absent, \"are\" is absent, \"you\" is present]), or [3, 0, 0, 2] if you prefer to count the number of occurrences of each word._\n",
"\n",
"_You may want to add hyperparameters to your preparation pipeline to control whether or not to strip off email headers, convert each email to lowercase, remove punctuation, replace all URLs with \"URL,\" replace all numbers with \"NUMBER,\" or even perform _stemming_ (i.e., trim off word endings; there are Python libraries available to do this)._\n",
"\n",
"_Finally, try out several classifiers and see if you can build a great spam classifier, with both high recall and high precision._"
]
},
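{
"cell_type": "markdown",
"metadata": {},
"source": [
"Here is a toy illustration (not in the book) of the word-count vector from the exercise statement, using its hypothetical four-word vocabulary:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# not in the book – the toy example from the exercise statement\n",
"vocabulary = [\"Hello\", \"how\", \"are\", \"you\"]\n",
"words = \"Hello you Hello Hello you\".split()\n",
"[words.count(word) for word in vocabulary]  # expect [3, 0, 0, 2]"
]
},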
{
"cell_type": "code",
"execution_count": 128,
"metadata": {},
"outputs": [],
"source": [
"import tarfile\n",
"\n",
"def fetch_spam_data():\n",
" root = \"http://spamassassin.apache.org/old/publiccorpus/\"\n",
" ham_url = root + \"20030228_easy_ham.tar.bz2\"\n",
" spam_url = root + \"20030228_spam.tar.bz2\"\n",
"\n",
" spam_path = Path() / \"datasets\" / \"spam\"\n",
" spam_path.mkdir(parents=True, exist_ok=True)\n",
" for dir_name, tar_name, url in ((\"easy_ham\", \"ham\", ham_url),\n",
" (\"spam\", \"spam\", spam_url)):\n",
" if not (spam_path / dir_name).is_dir():\n",
" path = (spam_path / tar_name).with_suffix(\".tar.bz2\")\n",
" print(\"Downloading\", path)\n",
" urllib.request.urlretrieve(url, path)\n",
" tar_bz2_file = tarfile.open(path)\n",
" tar_bz2_file.extractall(path=spam_path)\n",
" tar_bz2_file.close()\n",
" return [spam_path / dir_name for dir_name in (\"easy_ham\", \"spam\")]"
]
},
{
"cell_type": "code",
"execution_count": 129,
"metadata": {},
"outputs": [],
"source": [
"ham_dir, spam_dir = fetch_spam_data()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Next, let's load all the emails:"
]
},
{
"cell_type": "code",
"execution_count": 130,
"metadata": {},
"outputs": [],
"source": [
"ham_filenames = [f for f in sorted(ham_dir.iterdir()) if len(f.name) > 20]\n",
"spam_filenames = [f for f in sorted(spam_dir.iterdir()) if len(f.name) > 20]"
]
},
{
"cell_type": "code",
"execution_count": 131,
"metadata": {},
"outputs": [],
"source": [
"len(ham_filenames)"
]
},
{
"cell_type": "code",
"execution_count": 132,
"metadata": {},
"outputs": [],
"source": [
"len(spam_filenames)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We can use Python's `email` module to parse these emails (this handles headers, encoding, and so on):"
]
},
{
"cell_type": "code",
"execution_count": 133,
"metadata": {},
"outputs": [],
"source": [
"import email\n",
"import email.policy\n",
"\n",
"def load_email(filepath):\n",
" with open(filepath, \"rb\") as f:\n",
" return email.parser.BytesParser(policy=email.policy.default).parse(f)"
]
},
{
"cell_type": "code",
"execution_count": 134,
"metadata": {},
"outputs": [],
"source": [
"ham_emails = [load_email(filepath) for filepath in ham_filenames]\n",
"spam_emails = [load_email(filepath) for filepath in spam_filenames]"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Let's look at one example of ham and one example of spam, to get a feel of what the data looks like:"
]
},
{
"cell_type": "code",
"execution_count": 135,
"metadata": {},
"outputs": [],
"source": [
"print(ham_emails[1].get_content().strip())"
]
},
{
"cell_type": "code",
"execution_count": 136,
"metadata": {},
"outputs": [],
"source": [
"print(spam_emails[6].get_content().strip())"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Some emails are actually multipart, with images and attachments (which can have their own attachments). Let's look at the various types of structures we have:"
]
},
{
"cell_type": "code",
"execution_count": 137,
"metadata": {},
"outputs": [],
"source": [
"def get_email_structure(email):\n",
" if isinstance(email, str):\n",
" return email\n",
" payload = email.get_payload()\n",
" if isinstance(payload, list):\n",
" multipart = \", \".join([get_email_structure(sub_email)\n",
" for sub_email in payload])\n",
" return f\"multipart({multipart})\"\n",
" else:\n",
" return email.get_content_type()"
]
},
{
"cell_type": "code",
"execution_count": 138,
"metadata": {},
"outputs": [],
"source": [
"from collections import Counter\n",
"\n",
"def structures_counter(emails):\n",
" structures = Counter()\n",
" for email in emails:\n",
" structure = get_email_structure(email)\n",
" structures[structure] += 1\n",
" return structures"
]
},
{
"cell_type": "code",
"execution_count": 139,
"metadata": {},
"outputs": [],
"source": [
"structures_counter(ham_emails).most_common()"
]
},
{
"cell_type": "code",
"execution_count": 140,
"metadata": {},
"outputs": [],
"source": [
"structures_counter(spam_emails).most_common()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"It seems that the ham emails are more often plain text, while spam has quite a lot of HTML. Moreover, quite a few ham emails are signed using PGP, while no spam is. In short, it seems that the email structure is useful information to have."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now let's take a look at the email headers:"
]
},
{
"cell_type": "code",
"execution_count": 141,
"metadata": {},
"outputs": [],
"source": [
"for header, value in spam_emails[0].items():\n",
" print(header,\":\",value)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"There's probably a lot of useful information in there, such as the sender's email address (12a1mailbot1@web.de looks fishy), but we will just focus on the `Subject` header:"
]
},
{
"cell_type": "code",
"execution_count": 142,
"metadata": {},
"outputs": [],
"source": [
"spam_emails[0][\"Subject\"]"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Okay, before we learn too much about the data, let's not forget to split it into a training set and a test set:"
]
},
{
"cell_type": "code",
"execution_count": 143,
"metadata": {},
"outputs": [],
"source": [
"import numpy as np\n",
"from sklearn.model_selection import train_test_split\n",
"\n",
"X = np.array(ham_emails + spam_emails, dtype=object)\n",
"y = np.array([0] * len(ham_emails) + [1] * len(spam_emails))\n",
"\n",
"X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2,\n",
" random_state=42)"
]
},
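{
"cell_type": "markdown",
"metadata": {},
"source": [
"Since we didn't stratify the split, here's a quick sanity check (not in the book) that the spam ratio is similar in the training set and the test set:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# not in the book: spam ratio in the training set vs the test set\n",
"y_train.mean(), y_test.mean()"
]
},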
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Okay, let's start writing the preprocessing functions. First, we will need a function to convert HTML to plain text. Arguably the best way to do this would be to use the great [BeautifulSoup](https://www.crummy.com/software/BeautifulSoup/) library, but I would like to avoid adding another dependency to this project, so let's hack a quick & dirty solution using regular expressions (at the risk of [un̨ho͞ly radiańcé destro҉ying all enli̍̈́̂̈́ghtenment](https://stackoverflow.com/a/1732454/38626)). The following function first drops the `<head>` section, then converts all `<a>` tags to the word HYPERLINK, then it gets rid of all HTML tags, leaving only the plain text. For readability, it also replaces multiple newlines with single newlines, and finally it unescapes html entities (such as `&gt;` or `&nbsp;`):"
]
},
{
"cell_type": "code",
"execution_count": 144,
"metadata": {},
"outputs": [],
"source": [
"import re\n",
"from html import unescape\n",
"\n",
"def html_to_plain_text(html):\n",
" text = re.sub('<head.*?>.*?</head>', '', html, flags=re.M | re.S | re.I)\n",
" text = re.sub('<a\\s.*?>', ' HYPERLINK ', text, flags=re.M | re.S | re.I)\n",
" text = re.sub('<.*?>', '', text, flags=re.M | re.S)\n",
" text = re.sub(r'(\\s*\\n)+', '\\n', text, flags=re.M | re.S)\n",
" return unescape(text)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Let's see if it works. This is HTML spam:"
]
},
{
"cell_type": "code",
"execution_count": 145,
"metadata": {},
"outputs": [],
"source": [
"html_spam_emails = [email for email in X_train[y_train==1]\n",
" if get_email_structure(email) == \"text/html\"]\n",
"sample_html_spam = html_spam_emails[7]\n",
"print(sample_html_spam.get_content().strip()[:1000], \"...\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"And this is the resulting plain text:"
]
},
{
"cell_type": "code",
"execution_count": 146,
"metadata": {},
"outputs": [],
"source": [
"print(html_to_plain_text(sample_html_spam.get_content())[:1000], \"...\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Great! Now let's write a function that takes an email as input and returns its content as plain text, whatever its format is:"
]
},
{
"cell_type": "code",
"execution_count": 147,
"metadata": {},
"outputs": [],
"source": [
"def email_to_text(email):\n",
" html = None\n",
" for part in email.walk():\n",
" ctype = part.get_content_type()\n",
" if not ctype in (\"text/plain\", \"text/html\"):\n",
" continue\n",
" try:\n",
" content = part.get_content()\n",
" except: # in case of encoding issues\n",
" content = str(part.get_payload())\n",
" if ctype == \"text/plain\":\n",
" return content\n",
" else:\n",
" html = content\n",
" if html:\n",
" return html_to_plain_text(html)"
]
},
{
"cell_type": "code",
"execution_count": 148,
"metadata": {},
"outputs": [],
"source": [
"print(email_to_text(sample_html_spam)[:100], \"...\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Let's throw in some stemming! We will use the Natural Language Toolkit ([NLTK](http://www.nltk.org/)):"
]
},
{
"cell_type": "code",
"execution_count": 149,
"metadata": {},
"outputs": [],
"source": [
"import nltk\n",
"\n",
"stemmer = nltk.PorterStemmer()\n",
"for word in (\"Computations\", \"Computation\", \"Computing\", \"Computed\", \"Compute\",\n",
" \"Compulsive\"):\n",
" print(word, \"=>\", stemmer.stem(word))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We will also need a way to replace URLs with the word \"URL\". For this, we could use hard core [regular expressions](https://mathiasbynens.be/demo/url-regex) but we will just use the [urlextract](https://github.com/lipoja/URLExtract) library:"
]
},
{
"cell_type": "code",
"execution_count": 150,
"metadata": {},
"outputs": [],
"source": [
"# Is this notebook running on Colab or Kaggle?\n",
"IS_COLAB = \"google.colab\" in sys.modules\n",
"IS_KAGGLE = \"kaggle_secrets\" in sys.modules\n",
"\n",
"# if running this notebook on Colab or Kaggle, we just pip install urlextract\n",
"if IS_COLAB or IS_KAGGLE:\n",
" %pip install -q -U urlextract"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"**Note:** inside a Jupyter notebook, always use `%pip` instead of `!pip`, as `!pip` may install the library inside the wrong environment, while `%pip` makes sure it's installed inside the currently running environment."
]
},
{
"cell_type": "code",
"execution_count": 151,
"metadata": {},
"outputs": [],
"source": [
"import urlextract # may require an Internet connection to download root domain\n",
" # names\n",
"\n",
"url_extractor = urlextract.URLExtract()\n",
"some_text = \"Will it detect github.com and https://youtu.be/7Pq-S557XQU?t=3m32s\"\n",
"print(url_extractor.find_urls(some_text))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We are ready to put all this together into a transformer that we will use to convert emails to word counters. Note that we split sentences into words using Python's `split()` method, which uses whitespaces for word boundaries. This works for many written languages, but not all. For example, Chinese and Japanese scripts generally don't use spaces between words, and Vietnamese often uses spaces even between syllables. It's okay in this exercise, because the dataset is (mostly) in English."
]
},
{
"cell_type": "code",
"execution_count": 152,
"metadata": {},
"outputs": [],
"source": [
"from sklearn.base import BaseEstimator, TransformerMixin\n",
"\n",
"class EmailToWordCounterTransformer(BaseEstimator, TransformerMixin):\n",
" def __init__(self, strip_headers=True, lower_case=True,\n",
" remove_punctuation=True, replace_urls=True,\n",
" replace_numbers=True, stemming=True):\n",
" self.strip_headers = strip_headers\n",
" self.lower_case = lower_case\n",
" self.remove_punctuation = remove_punctuation\n",
" self.replace_urls = replace_urls\n",
" self.replace_numbers = replace_numbers\n",
" self.stemming = stemming\n",
" def fit(self, X, y=None):\n",
" return self\n",
" def transform(self, X, y=None):\n",
" X_transformed = []\n",
" for email in X:\n",
" text = email_to_text(email) or \"\"\n",
" if self.lower_case:\n",
" text = text.lower()\n",
" if self.replace_urls and url_extractor is not None:\n",
" urls = list(set(url_extractor.find_urls(text)))\n",
" urls.sort(key=lambda url: len(url), reverse=True)\n",
" for url in urls:\n",
" text = text.replace(url, \" URL \")\n",
" if self.replace_numbers:\n",
" text = re.sub(r'\\d+(?:\\.\\d*)?(?:[eE][+-]?\\d+)?', 'NUMBER', text)\n",
" if self.remove_punctuation:\n",
" text = re.sub(r'\\W+', ' ', text, flags=re.M)\n",
" word_counts = Counter(text.split())\n",
" if self.stemming and stemmer is not None:\n",
" stemmed_word_counts = Counter()\n",
" for word, count in word_counts.items():\n",
" stemmed_word = stemmer.stem(word)\n",
" stemmed_word_counts[stemmed_word] += count\n",
" word_counts = stemmed_word_counts\n",
" X_transformed.append(word_counts)\n",
" return np.array(X_transformed)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Let's try this transformer on a few emails:"
]
},
{
"cell_type": "code",
"execution_count": 153,
"metadata": {},
"outputs": [],
"source": [
"X_few = X_train[:3]\n",
"X_few_wordcounts = EmailToWordCounterTransformer().fit_transform(X_few)\n",
"X_few_wordcounts"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"This looks about right!"
]
},
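{
"cell_type": "markdown",
"metadata": {},
"source": [
"For instance, here are the ten most common stemmed words in the first of these three emails (a quick peek, not in the book):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# not in the book: the 10 most common stemmed words in the first email\n",
"X_few_wordcounts[0].most_common(10)"
]
},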
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now we have the word counts, and we need to convert them to vectors. For this, we will build another transformer whose `fit()` method will build the vocabulary (an ordered list of the most common words) and whose `transform()` method will use the vocabulary to convert word counts to vectors. The output is a sparse matrix."
]
},
{
"cell_type": "code",
"execution_count": 154,
"metadata": {},
"outputs": [],
"source": [
"from scipy.sparse import csr_matrix\n",
"\n",
"class WordCounterToVectorTransformer(BaseEstimator, TransformerMixin):\n",
" def __init__(self, vocabulary_size=1000):\n",
" self.vocabulary_size = vocabulary_size\n",
" def fit(self, X, y=None):\n",
" total_count = Counter()\n",
" for word_count in X:\n",
" for word, count in word_count.items():\n",
" total_count[word] += min(count, 10)\n",
" most_common = total_count.most_common()[:self.vocabulary_size]\n",
" self.vocabulary_ = {word: index + 1\n",
" for index, (word, count) in enumerate(most_common)}\n",
" return self\n",
" def transform(self, X, y=None):\n",
" rows = []\n",
" cols = []\n",
" data = []\n",
" for row, word_count in enumerate(X):\n",
" for word, count in word_count.items():\n",
" rows.append(row)\n",
" cols.append(self.vocabulary_.get(word, 0))\n",
" data.append(count)\n",
" return csr_matrix((data, (rows, cols)),\n",
" shape=(len(X), self.vocabulary_size + 1))"
]
},
{
"cell_type": "code",
"execution_count": 155,
"metadata": {},
"outputs": [],
"source": [
"vocab_transformer = WordCounterToVectorTransformer(vocabulary_size=10)\n",
"X_few_vectors = vocab_transformer.fit_transform(X_few_wordcounts)\n",
"X_few_vectors"
]
},
{
"cell_type": "code",
"execution_count": 156,
"metadata": {},
"outputs": [],
"source": [
"X_few_vectors.toarray()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"What does this matrix mean? Well, the 99 in the second row, first column, means that the second email contains 99 words that are not part of the vocabulary. The 11 next to it means that the first word in the vocabulary is present 11 times in this email. The 9 next to it means that the second word is present 9 times, and so on. You can look at the vocabulary to know which words we are talking about. The first word is \"the\", the second word is \"of\", etc."
]
},
{
"cell_type": "code",
"execution_count": 157,
"metadata": {},
"outputs": [],
"source": [
"vocab_transformer.vocabulary_"
]
},
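{
"cell_type": "markdown",
"metadata": {},
"source": [
"To make this concrete, here's a small sketch (not in the book) that decodes the second email's row back into readable (word, count) pairs, using column 0 for the out-of-vocabulary count:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# not in the book: decode one row of the sparse matrix into (word, count)\n",
"index_to_word = {index: word\n",
"                 for word, index in vocab_transformer.vocabulary_.items()}\n",
"index_to_word[0] = \"<unknown>\"  # column 0 counts out-of-vocabulary words\n",
"row = X_few_vectors[1]\n",
"sorted(((index_to_word[col], count)\n",
"        for col, count in zip(row.indices, row.data)),\n",
"       key=lambda pair: pair[1], reverse=True)"
]
},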
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We are now ready to train our first spam classifier! Let's transform the whole dataset:"
]
},
{
"cell_type": "code",
"execution_count": 158,
"metadata": {},
"outputs": [],
"source": [
"from sklearn.pipeline import Pipeline\n",
"\n",
"preprocess_pipeline = Pipeline([\n",
" (\"email_to_wordcount\", EmailToWordCounterTransformer()),\n",
" (\"wordcount_to_vector\", WordCounterToVectorTransformer()),\n",
"])\n",
"\n",
"X_train_transformed = preprocess_pipeline.fit_transform(X_train)"
]
},
{
"cell_type": "code",
"execution_count": 159,
"metadata": {},
"outputs": [],
"source": [
"from sklearn.linear_model import LogisticRegression\n",
"from sklearn.model_selection import cross_val_score\n",
"\n",
"log_clf = LogisticRegression(max_iter=1000, random_state=42)\n",
"score = cross_val_score(log_clf, X_train_transformed, y_train, cv=3)\n",
"score.mean()"
]
},
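{
"cell_type": "markdown",
"metadata": {},
"source": [
"Out of curiosity, let's also cross-validate a Multinomial Naive Bayes classifier, a classic baseline for word-count features, on the same transformed training set (a quick sketch, not in the book):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# not in the book: a Naive Bayes baseline on the same features\n",
"from sklearn.naive_bayes import MultinomialNB\n",
"from sklearn.model_selection import cross_val_score\n",
"\n",
"nb_clf = MultinomialNB()\n",
"cross_val_score(nb_clf, X_train_transformed, y_train, cv=3).mean()"
]
},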
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Over 98.5%, not bad for a first try! :) However, remember that we are using the \"easy\" dataset. You can try with the harder datasets, the results won't be so amazing. You would have to try multiple models, select the best ones and fine-tune them using cross-validation, and so on.\n",
"\n",
"But you get the picture, so let's stop now, and just print out the precision/recall we get on the test set:"
]
},
{
"cell_type": "code",
"execution_count": 160,
"metadata": {},
"outputs": [],
"source": [
"from sklearn.metrics import precision_score, recall_score\n",
"\n",
"X_test_transformed = preprocess_pipeline.transform(X_test)\n",
"\n",
"log_clf = LogisticRegression(max_iter=1000, random_state=42)\n",
"log_clf.fit(X_train_transformed, y_train)\n",
"\n",
"y_pred = log_clf.predict(X_test_transformed)\n",
"\n",
"print(f\"Precision: {precision_score(y_test, y_pred):.2%}\")\n",
"print(f\"Recall: {recall_score(y_test, y_pred):.2%}\")"
]
},
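{
"cell_type": "markdown",
"metadata": {},
"source": [
"If you prefer a single summary metric, you can also compute the F1 score, which is the harmonic mean of precision and recall (a quick addition, not in the book):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# not in the book: the F1 score combines precision and recall\n",
"from sklearn.metrics import f1_score\n",
"\n",
"print(f\"F1 score: {f1_score(y_test, y_pred):.2%}\")"
]
},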
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.8.12"
},
"nav_menu": {},
"toc": {
"navigate_menu": true,
"number_sections": true,
"sideBar": true,
"threshold": 6,
"toc_cell": false,
"toc_section_display": "block",
"toc_window_display": false
}
},
"nbformat": 4,
"nbformat_minor": 4
}