{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "**Chapter 2 – End-to-end Machine Learning project**\n", "\n", "*Welcome to Machine Learning Housing Corp.! Your task is to predict median house values in Californian districts, given a number of features from these districts.*\n", "\n", "*This notebook contains all the sample code and solutions to the exercices in chapter 2.*" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Note**: You may find little differences between the code outputs in the book and in these Jupyter notebooks: these slight differences are mostly due to the random nature of many training algorithms: although I have tried to make these notebooks' outputs as constant as possible, it is impossible to guarantee that they will produce the exact same output on every platform. Also, some data structures (such as dictionaries) do not preserve the item order. Finally, I fixed a few minor bugs (I added notes next to the concerned cells) which lead to slightly different results, without changing the ideas presented in the book." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# Setup" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "First, let's make sure this notebook works well in both python 2 and 3, import a few common modules, ensure MatplotLib plots figures inline and prepare a function to save the figures:" ] }, { "cell_type": "code", "execution_count": 1, "metadata": { "collapsed": true }, "outputs": [], "source": [ "# To support both python 2 and python 3\n", "from __future__ import division, print_function, unicode_literals\n", "\n", "# Common imports\n", "import numpy as np\n", "import os\n", "\n", "# to make this notebook's output stable across runs\n", "np.random.seed(42)\n", "\n", "# To plot pretty figures\n", "%matplotlib inline\n", "import matplotlib\n", "import matplotlib.pyplot as plt\n", "plt.rcParams['axes.labelsize'] = 14\n", "plt.rcParams['xtick.labelsize'] = 12\n", "plt.rcParams['ytick.labelsize'] = 12\n", "\n", "# Where to save the figures\n", "PROJECT_ROOT_DIR = \".\"\n", "CHAPTER_ID = \"end_to_end_project\"\n", "\n", "def save_fig(fig_id, tight_layout=True):\n", " path = os.path.join(PROJECT_ROOT_DIR, \"images\", CHAPTER_ID, fig_id + \".png\")\n", " print(\"Saving figure\", fig_id)\n", " if tight_layout:\n", " plt.tight_layout()\n", " plt.savefig(path, format='png', dpi=300)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# Get the data" ] }, { "cell_type": "code", "execution_count": 2, "metadata": { "collapsed": true }, "outputs": [], "source": [ "import os\n", "import tarfile\n", "from six.moves import urllib\n", "\n", "DOWNLOAD_ROOT = \"https://raw.githubusercontent.com/ageron/handson-ml/master/\"\n", "HOUSING_PATH = os.path.join(\"datasets\", \"housing\")\n", "HOUSING_URL = DOWNLOAD_ROOT + \"datasets/housing/housing.tgz\"\n", "\n", "def fetch_housing_data(housing_url=HOUSING_URL, housing_path=HOUSING_PATH):\n", " if not os.path.isdir(housing_path):\n", " os.makedirs(housing_path)\n", " tgz_path = os.path.join(housing_path, \"housing.tgz\")\n", " urllib.request.urlretrieve(housing_url, tgz_path)\n", " housing_tgz = tarfile.open(tgz_path)\n", " housing_tgz.extractall(path=housing_path)\n", " housing_tgz.close()" ] }, { "cell_type": "code", "execution_count": 3, "metadata": { "collapsed": true }, "outputs": [], "source": [ "fetch_housing_data()" ] }, { "cell_type": "code", "execution_count": 4, "metadata": { "collapsed": true }, "outputs": [], "source": [ "import pandas as pd\n", 
"\n", "def load_housing_data(housing_path=HOUSING_PATH):\n", " csv_path = os.path.join(housing_path, \"housing.csv\")\n", " return pd.read_csv(csv_path)" ] }, { "cell_type": "code", "execution_count": 5, "metadata": {}, "outputs": [], "source": [ "housing = load_housing_data()\n", "housing.head()" ] }, { "cell_type": "code", "execution_count": 6, "metadata": {}, "outputs": [], "source": [ "housing.info()" ] }, { "cell_type": "code", "execution_count": 7, "metadata": {}, "outputs": [], "source": [ "housing[\"ocean_proximity\"].value_counts()" ] }, { "cell_type": "code", "execution_count": 8, "metadata": {}, "outputs": [], "source": [ "housing.describe()" ] }, { "cell_type": "code", "execution_count": 9, "metadata": {}, "outputs": [], "source": [ "%matplotlib inline\n", "import matplotlib.pyplot as plt\n", "housing.hist(bins=50, figsize=(20,15))\n", "save_fig(\"attribute_histogram_plots\")\n", "plt.show()" ] }, { "cell_type": "code", "execution_count": 10, "metadata": { "collapsed": true }, "outputs": [], "source": [ "# to make this notebook's output identical at every run\n", "np.random.seed(42)" ] }, { "cell_type": "code", "execution_count": 11, "metadata": { "collapsed": true }, "outputs": [], "source": [ "import numpy as np\n", "\n", "# For illustration only. Sklearn has train_test_split()\n", "def split_train_test(data, test_ratio):\n", " shuffled_indices = np.random.permutation(len(data))\n", " test_set_size = int(len(data) * test_ratio)\n", " test_indices = shuffled_indices[:test_set_size]\n", " train_indices = shuffled_indices[test_set_size:]\n", " return data.iloc[train_indices], data.iloc[test_indices]" ] }, { "cell_type": "code", "execution_count": 12, "metadata": {}, "outputs": [], "source": [ "train_set, test_set = split_train_test(housing, 0.2)\n", "print(len(train_set), \"train +\", len(test_set), \"test\")" ] }, { "cell_type": "code", "execution_count": 13, "metadata": { "collapsed": true }, "outputs": [], "source": [ "import hashlib\n", "\n", "def test_set_check(identifier, test_ratio, hash):\n", " return hash(np.int64(identifier)).digest()[-1] < 256 * test_ratio\n", "\n", "def split_train_test_by_id(data, test_ratio, id_column, hash=hashlib.md5):\n", " ids = data[id_column]\n", " in_test_set = ids.apply(lambda id_: test_set_check(id_, test_ratio, hash))\n", " return data.loc[~in_test_set], data.loc[in_test_set]" ] }, { "cell_type": "code", "execution_count": 14, "metadata": { "collapsed": true }, "outputs": [], "source": [ "# This version supports both Python 2 and Python 3, instead of just Python 3.\n", "def test_set_check(identifier, test_ratio, hash):\n", " return bytearray(hash(np.int64(identifier)).digest())[-1] < 256 * test_ratio" ] }, { "cell_type": "code", "execution_count": 15, "metadata": { "collapsed": true }, "outputs": [], "source": [ "housing_with_id = housing.reset_index() # adds an `index` column\n", "train_set, test_set = split_train_test_by_id(housing_with_id, 0.2, \"index\")" ] }, { "cell_type": "code", "execution_count": 16, "metadata": { "collapsed": true }, "outputs": [], "source": [ "housing_with_id[\"id\"] = housing[\"longitude\"] * 1000 + housing[\"latitude\"]\n", "train_set, test_set = split_train_test_by_id(housing_with_id, 0.2, \"id\")" ] }, { "cell_type": "code", "execution_count": 17, "metadata": {}, "outputs": [], "source": [ "test_set.head()" ] }, { "cell_type": "code", "execution_count": 18, "metadata": { "collapsed": true }, "outputs": [], "source": [ "from sklearn.model_selection import train_test_split\n", "\n", "train_set, test_set = 
train_test_split(housing, test_size=0.2, random_state=42)" ] }, { "cell_type": "code", "execution_count": 19, "metadata": {}, "outputs": [], "source": [ "test_set.head()" ] }, { "cell_type": "code", "execution_count": 20, "metadata": {}, "outputs": [], "source": [ "housing[\"median_income\"].hist()" ] }, { "cell_type": "code", "execution_count": 21, "metadata": { "collapsed": true }, "outputs": [], "source": [ "# Divide by 1.5 to limit the number of income categories\n", "housing[\"income_cat\"] = np.ceil(housing[\"median_income\"] / 1.5)\n", "# Label those above 5 as 5\n", "housing[\"income_cat\"].where(housing[\"income_cat\"] < 5, 5.0, inplace=True)" ] }, { "cell_type": "code", "execution_count": 22, "metadata": {}, "outputs": [], "source": [ "housing[\"income_cat\"].value_counts()" ] }, { "cell_type": "code", "execution_count": 23, "metadata": {}, "outputs": [], "source": [ "housing[\"income_cat\"].hist()" ] }, { "cell_type": "code", "execution_count": 24, "metadata": { "collapsed": true }, "outputs": [], "source": [ "from sklearn.model_selection import StratifiedShuffleSplit\n", "\n", "split = StratifiedShuffleSplit(n_splits=1, test_size=0.2, random_state=42)\n", "for train_index, test_index in split.split(housing, housing[\"income_cat\"]):\n", " strat_train_set = housing.loc[train_index]\n", " strat_test_set = housing.loc[test_index]" ] }, { "cell_type": "code", "execution_count": 25, "metadata": {}, "outputs": [], "source": [ "housing[\"income_cat\"].value_counts() / len(housing)" ] }, { "cell_type": "code", "execution_count": 26, "metadata": { "collapsed": true }, "outputs": [], "source": [ "def income_cat_proportions(data):\n", " return data[\"income_cat\"].value_counts() / len(data)\n", "\n", "train_set, test_set = train_test_split(housing, test_size=0.2, random_state=42)\n", "\n", "compare_props = pd.DataFrame({\n", " \"Overall\": income_cat_proportions(housing),\n", " \"Stratified\": income_cat_proportions(strat_test_set),\n", " \"Random\": income_cat_proportions(test_set),\n", "}).sort_index()\n", "compare_props[\"Rand. %error\"] = 100 * compare_props[\"Random\"] / compare_props[\"Overall\"] - 100\n", "compare_props[\"Strat. %error\"] = 100 * compare_props[\"Stratified\"] / compare_props[\"Overall\"] - 100" ] }, { "cell_type": "code", "execution_count": 27, "metadata": {}, "outputs": [], "source": [ "compare_props" ] }, { "cell_type": "code", "execution_count": 28, "metadata": { "collapsed": true }, "outputs": [], "source": [ "for set_ in (strat_train_set, strat_test_set):\n", " set_.drop(\"income_cat\", axis=1, inplace=True)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# Discover and visualize the data to gain insights" ] }, { "cell_type": "code", "execution_count": 29, "metadata": { "collapsed": true }, "outputs": [], "source": [ "housing = strat_train_set.copy()" ] }, { "cell_type": "code", "execution_count": 30, "metadata": {}, "outputs": [], "source": [ "housing.plot(kind=\"scatter\", x=\"longitude\", y=\"latitude\")\n", "save_fig(\"bad_visualization_plot\")" ] }, { "cell_type": "code", "execution_count": 31, "metadata": {}, "outputs": [], "source": [ "housing.plot(kind=\"scatter\", x=\"longitude\", y=\"latitude\", alpha=0.1)\n", "save_fig(\"better_visualization_plot\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The argument `sharex=False` fixes a display bug (the x-axis values and legend were not displayed). This is a temporary fix (see: https://github.com/pandas-dev/pandas/issues/10611). Thanks to Wilmer Arellano for pointing it out." 
] }, { "cell_type": "code", "execution_count": 32, "metadata": {}, "outputs": [], "source": [ "housing.plot(kind=\"scatter\", x=\"longitude\", y=\"latitude\", alpha=0.4,\n", " s=housing[\"population\"]/100, label=\"population\", figsize=(10,7),\n", " c=\"median_house_value\", cmap=plt.get_cmap(\"jet\"), colorbar=True,\n", " sharex=False)\n", "plt.legend()\n", "save_fig(\"housing_prices_scatterplot\")" ] }, { "cell_type": "code", "execution_count": 33, "metadata": {}, "outputs": [], "source": [ "import matplotlib.image as mpimg\n", "california_img=mpimg.imread(PROJECT_ROOT_DIR + '/images/end_to_end_project/california.png')\n", "ax = housing.plot(kind=\"scatter\", x=\"longitude\", y=\"latitude\", figsize=(10,7),\n", " s=housing['population']/100, label=\"Population\",\n", " c=\"median_house_value\", cmap=plt.get_cmap(\"jet\"),\n", " colorbar=False, alpha=0.4,\n", " )\n", "plt.imshow(california_img, extent=[-124.55, -113.80, 32.45, 42.05], alpha=0.5)\n", "plt.ylabel(\"Latitude\", fontsize=14)\n", "plt.xlabel(\"Longitude\", fontsize=14)\n", "\n", "prices = housing[\"median_house_value\"]\n", "tick_values = np.linspace(prices.min(), prices.max(), 11)\n", "cbar = plt.colorbar()\n", "cbar.ax.set_yticklabels([\"$%dk\"%(round(v/1000)) for v in tick_values], fontsize=14)\n", "cbar.set_label('Median House Value', fontsize=16)\n", "\n", "plt.legend(fontsize=16)\n", "save_fig(\"california_housing_prices_plot\")\n", "plt.show()" ] }, { "cell_type": "code", "execution_count": 34, "metadata": { "collapsed": true }, "outputs": [], "source": [ "corr_matrix = housing.corr()" ] }, { "cell_type": "code", "execution_count": 35, "metadata": {}, "outputs": [], "source": [ "corr_matrix[\"median_house_value\"].sort_values(ascending=False)" ] }, { "cell_type": "code", "execution_count": 36, "metadata": {}, "outputs": [], "source": [ "# from pandas.tools.plotting import scatter_matrix # For older versions of Pandas\n", "from pandas.plotting import scatter_matrix\n", "\n", "attributes = [\"median_house_value\", \"median_income\", \"total_rooms\",\n", " \"housing_median_age\"]\n", "scatter_matrix(housing[attributes], figsize=(12, 8))\n", "save_fig(\"scatter_matrix_plot\")" ] }, { "cell_type": "code", "execution_count": 37, "metadata": {}, "outputs": [], "source": [ "housing.plot(kind=\"scatter\", x=\"median_income\", y=\"median_house_value\",\n", " alpha=0.1)\n", "plt.axis([0, 16, 0, 550000])\n", "save_fig(\"income_vs_house_value_scatterplot\")" ] }, { "cell_type": "code", "execution_count": 38, "metadata": { "collapsed": true }, "outputs": [], "source": [ "housing[\"rooms_per_household\"] = housing[\"total_rooms\"]/housing[\"households\"]\n", "housing[\"bedrooms_per_room\"] = housing[\"total_bedrooms\"]/housing[\"total_rooms\"]\n", "housing[\"population_per_household\"]=housing[\"population\"]/housing[\"households\"]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Note: there was a bug in the previous cell, in the definition of the `rooms_per_household` attribute. This explains why the correlation value below differs slightly from the value in the book (unless you are reading the latest version)." 
] }, { "cell_type": "code", "execution_count": 39, "metadata": {}, "outputs": [], "source": [ "corr_matrix = housing.corr()\n", "corr_matrix[\"median_house_value\"].sort_values(ascending=False)" ] }, { "cell_type": "code", "execution_count": 40, "metadata": {}, "outputs": [], "source": [ "housing.plot(kind=\"scatter\", x=\"rooms_per_household\", y=\"median_house_value\",\n", " alpha=0.2)\n", "plt.axis([0, 5, 0, 520000])\n", "plt.show()" ] }, { "cell_type": "code", "execution_count": 41, "metadata": {}, "outputs": [], "source": [ "housing.describe()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# Prepare the data for Machine Learning algorithms" ] }, { "cell_type": "code", "execution_count": 42, "metadata": { "collapsed": true }, "outputs": [], "source": [ "housing = strat_train_set.drop(\"median_house_value\", axis=1) # drop labels for training set\n", "housing_labels = strat_train_set[\"median_house_value\"].copy()" ] }, { "cell_type": "code", "execution_count": 43, "metadata": {}, "outputs": [], "source": [ "sample_incomplete_rows = housing[housing.isnull().any(axis=1)].head()\n", "sample_incomplete_rows" ] }, { "cell_type": "code", "execution_count": 44, "metadata": {}, "outputs": [], "source": [ "sample_incomplete_rows.dropna(subset=[\"total_bedrooms\"]) # option 1" ] }, { "cell_type": "code", "execution_count": 45, "metadata": {}, "outputs": [], "source": [ "sample_incomplete_rows.drop(\"total_bedrooms\", axis=1) # option 2" ] }, { "cell_type": "code", "execution_count": 46, "metadata": {}, "outputs": [], "source": [ "median = housing[\"total_bedrooms\"].median()\n", "sample_incomplete_rows[\"total_bedrooms\"].fillna(median, inplace=True) # option 3\n", "sample_incomplete_rows" ] }, { "cell_type": "code", "execution_count": 47, "metadata": { "collapsed": true }, "outputs": [], "source": [ "from sklearn.preprocessing import Imputer\n", "\n", "imputer = Imputer(strategy=\"median\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Remove the text attribute because median can only be calculated on numerical attributes:" ] }, { "cell_type": "code", "execution_count": 48, "metadata": { "collapsed": true }, "outputs": [], "source": [ "housing_num = housing.drop(\"ocean_proximity\", axis=1)" ] }, { "cell_type": "code", "execution_count": 49, "metadata": {}, "outputs": [], "source": [ "imputer.fit(housing_num)" ] }, { "cell_type": "code", "execution_count": 50, "metadata": {}, "outputs": [], "source": [ "imputer.statistics_" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Check that this is the same as manually computing the median of each attribute:" ] }, { "cell_type": "code", "execution_count": 51, "metadata": {}, "outputs": [], "source": [ "housing_num.median().values" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Transform the training set:" ] }, { "cell_type": "code", "execution_count": 52, "metadata": { "collapsed": true }, "outputs": [], "source": [ "X = imputer.transform(housing_num)" ] }, { "cell_type": "code", "execution_count": 53, "metadata": { "collapsed": true }, "outputs": [], "source": [ "housing_tr = pd.DataFrame(X, columns=housing_num.columns,\n", " index = list(housing.index.values))" ] }, { "cell_type": "code", "execution_count": 54, "metadata": {}, "outputs": [], "source": [ "housing_tr.loc[sample_incomplete_rows.index.values]" ] }, { "cell_type": "code", "execution_count": 55, "metadata": {}, "outputs": [], "source": [ "imputer.strategy" ] }, { "cell_type": "code", "execution_count": 56, "metadata": {}, "outputs": 
[], "source": [ "housing_tr = pd.DataFrame(X, columns=housing_num.columns)\n", "housing_tr.head()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now let's preprocess the categorical input feature, `ocean_proximity`:" ] }, { "cell_type": "code", "execution_count": 57, "metadata": {}, "outputs": [], "source": [ "housing_cat = housing[\"ocean_proximity\"]\n", "housing_cat.head(10)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can use Pandas' `factorize()` method to convert this string categorical feature to an integer categorical feature, which will be easier for Machine Learning algorithms to handle:" ] }, { "cell_type": "code", "execution_count": 58, "metadata": {}, "outputs": [], "source": [ "housing_cat_encoded, housing_categories = housing_cat.factorize()\n", "housing_cat_encoded[:10]" ] }, { "cell_type": "code", "execution_count": 59, "metadata": {}, "outputs": [], "source": [ "housing_categories" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Warning**: earlier versions of the book used the `LabelEncoder` class instead of Pandas' `factorize()` method. This was incorrect: indeed, as its name suggests, the `LabelEncoder` class was designed for labels, not for input features. The code worked because we were handling a single categorical input feature, but it would break if you passed multiple categorical input features." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can convert each categorical value to a one-hot vector using a `OneHotEncoder`:" ] }, { "cell_type": "code", "execution_count": 60, "metadata": {}, "outputs": [], "source": [ "from sklearn.preprocessing import OneHotEncoder\n", "\n", "encoder = OneHotEncoder()\n", "housing_cat_1hot = encoder.fit_transform(housing_cat_encoded.reshape(-1,1))\n", "housing_cat_1hot" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The `OneHotEncoder` returns a sparse array by default, but we can convert it to a dense array if needed:" ] }, { "cell_type": "code", "execution_count": 61, "metadata": {}, "outputs": [], "source": [ "housing_cat_1hot.toarray()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Warning**: earlier versions of the book used the `LabelBinarizer` class at this point. Again, this was incorrect: just like the `LabelEncoder` class, the `LabelBinarizer` class was designed to preprocess labels, not input features. A better solution is to use Scikit-Learn's upcoming `CategoricalEncoder` class: it will soon be added to Scikit-Learn, and in the meantime you can use the code below (copied from [Pull Request #9151](https://github.com/scikit-learn/scikit-learn/pull/9151))." 
] }, { "cell_type": "code", "execution_count": 62, "metadata": { "collapsed": true }, "outputs": [], "source": [ "# Definition of the CategoricalEncoder class, copied from PR #9151.\n", "# Just run this cell, or copy it to your code, do not try to understand it (yet).\n", "\n", "from sklearn.base import BaseEstimator, TransformerMixin\n", "from sklearn.utils import check_array\n", "from sklearn.preprocessing import LabelEncoder\n", "from scipy import sparse\n", "\n", "class CategoricalEncoder(BaseEstimator, TransformerMixin):\n", " \"\"\"Encode categorical features as a numeric array.\n", " The input to this transformer should be a matrix of integers or strings,\n", " denoting the values taken on by categorical (discrete) features.\n", " The features can be encoded using a one-hot aka one-of-K scheme\n", " (``encoding='onehot'``, the default) or converted to ordinal integers\n", " (``encoding='ordinal'``).\n", " This encoding is needed for feeding categorical data to many scikit-learn\n", " estimators, notably linear models and SVMs with the standard kernels.\n", " Read more in the :ref:`User Guide `.\n", " Parameters\n", " ----------\n", " encoding : str, 'onehot', 'onehot-dense' or 'ordinal'\n", " The type of encoding to use (default is 'onehot'):\n", " - 'onehot': encode the features using a one-hot aka one-of-K scheme\n", " (or also called 'dummy' encoding). This creates a binary column for\n", " each category and returns a sparse matrix.\n", " - 'onehot-dense': the same as 'onehot' but returns a dense array\n", " instead of a sparse matrix.\n", " - 'ordinal': encode the features as ordinal integers. This results in\n", " a single column of integers (0 to n_categories - 1) per feature.\n", " categories : 'auto' or a list of lists/arrays of values.\n", " Categories (unique values) per feature:\n", " - 'auto' : Determine categories automatically from the training data.\n", " - list : ``categories[i]`` holds the categories expected in the ith\n", " column. The passed categories are sorted before encoding the data\n", " (used categories can be found in the ``categories_`` attribute).\n", " dtype : number type, default np.float64\n", " Desired dtype of output.\n", " handle_unknown : 'error' (default) or 'ignore'\n", " Whether to raise an error or ignore if a unknown categorical feature is\n", " present during transform (default is to raise). When this is parameter\n", " is set to 'ignore' and an unknown category is encountered during\n", " transform, the resulting one-hot encoded columns for this feature\n", " will be all zeros.\n", " Ignoring unknown categories is not supported for\n", " ``encoding='ordinal'``.\n", " Attributes\n", " ----------\n", " categories_ : list of arrays\n", " The categories of each feature determined during fitting. When\n", " categories were specified manually, this holds the sorted categories\n", " (in order corresponding with output of `transform`).\n", " Examples\n", " --------\n", " Given a dataset with three features and two samples, we let the encoder\n", " find the maximum value per feature and transform the data to a binary\n", " one-hot encoding.\n", " >>> from sklearn.preprocessing import CategoricalEncoder\n", " >>> enc = CategoricalEncoder(handle_unknown='ignore')\n", " >>> enc.fit([[0, 0, 3], [1, 1, 0], [0, 2, 1], [1, 0, 2]])\n", " ... # doctest: +ELLIPSIS\n", " CategoricalEncoder(categories='auto', dtype=<... 
'numpy.float64'>,\n", " encoding='onehot', handle_unknown='ignore')\n", " >>> enc.transform([[0, 1, 1], [1, 0, 4]]).toarray()\n", " array([[ 1., 0., 0., 1., 0., 0., 1., 0., 0.],\n", " [ 0., 1., 1., 0., 0., 0., 0., 0., 0.]])\n", " See also\n", " --------\n", " sklearn.preprocessing.OneHotEncoder : performs a one-hot encoding of\n", " integer ordinal features. The ``OneHotEncoder assumes`` that input\n", " features take on values in the range ``[0, max(feature)]`` instead of\n", " using the unique values.\n", " sklearn.feature_extraction.DictVectorizer : performs a one-hot encoding of\n", " dictionary items (also handles string-valued features).\n", " sklearn.feature_extraction.FeatureHasher : performs an approximate one-hot\n", " encoding of dictionary items or strings.\n", " \"\"\"\n", "\n", " def __init__(self, encoding='onehot', categories='auto', dtype=np.float64,\n", " handle_unknown='error'):\n", " self.encoding = encoding\n", " self.categories = categories\n", " self.dtype = dtype\n", " self.handle_unknown = handle_unknown\n", "\n", " def fit(self, X, y=None):\n", " \"\"\"Fit the CategoricalEncoder to X.\n", " Parameters\n", " ----------\n", " X : array-like, shape [n_samples, n_feature]\n", " The data to determine the categories of each feature.\n", " Returns\n", " -------\n", " self\n", " \"\"\"\n", "\n", " if self.encoding not in ['onehot', 'onehot-dense', 'ordinal']:\n", " template = (\"encoding should be either 'onehot', 'onehot-dense' \"\n", " \"or 'ordinal', got %s\")\n", " raise ValueError(template % self.handle_unknown)\n", "\n", " if self.handle_unknown not in ['error', 'ignore']:\n", " template = (\"handle_unknown should be either 'error' or \"\n", " \"'ignore', got %s\")\n", " raise ValueError(template % self.handle_unknown)\n", "\n", " if self.encoding == 'ordinal' and self.handle_unknown == 'ignore':\n", " raise ValueError(\"handle_unknown='ignore' is not supported for\"\n", " \" encoding='ordinal'\")\n", "\n", " X = check_array(X, dtype=np.object, accept_sparse='csc', copy=True)\n", " n_samples, n_features = X.shape\n", "\n", " self._label_encoders_ = [LabelEncoder() for _ in range(n_features)]\n", "\n", " for i in range(n_features):\n", " le = self._label_encoders_[i]\n", " Xi = X[:, i]\n", " if self.categories == 'auto':\n", " le.fit(Xi)\n", " else:\n", " valid_mask = np.in1d(Xi, self.categories[i])\n", " if not np.all(valid_mask):\n", " if self.handle_unknown == 'error':\n", " diff = np.unique(Xi[~valid_mask])\n", " msg = (\"Found unknown categories {0} in column {1}\"\n", " \" during fit\".format(diff, i))\n", " raise ValueError(msg)\n", " le.classes_ = np.array(np.sort(self.categories[i]))\n", "\n", " self.categories_ = [le.classes_ for le in self._label_encoders_]\n", "\n", " return self\n", "\n", " def transform(self, X):\n", " \"\"\"Transform X using one-hot encoding.\n", " Parameters\n", " ----------\n", " X : array-like, shape [n_samples, n_features]\n", " The data to encode.\n", " Returns\n", " -------\n", " X_out : sparse matrix or a 2-d array\n", " Transformed input.\n", " \"\"\"\n", " X = check_array(X, accept_sparse='csc', dtype=np.object, copy=True)\n", " n_samples, n_features = X.shape\n", " X_int = np.zeros_like(X, dtype=np.int)\n", " X_mask = np.ones_like(X, dtype=np.bool)\n", "\n", " for i in range(n_features):\n", " valid_mask = np.in1d(X[:, i], self.categories_[i])\n", "\n", " if not np.all(valid_mask):\n", " if self.handle_unknown == 'error':\n", " diff = np.unique(X[~valid_mask, i])\n", " msg = (\"Found unknown categories {0} in column {1}\"\n", 
" \" during transform\".format(diff, i))\n", " raise ValueError(msg)\n", " else:\n", " # Set the problematic rows to an acceptable value and\n", " # continue `The rows are marked `X_mask` and will be\n", " # removed later.\n", " X_mask[:, i] = valid_mask\n", " X[:, i][~valid_mask] = self.categories_[i][0]\n", " X_int[:, i] = self._label_encoders_[i].transform(X[:, i])\n", "\n", " if self.encoding == 'ordinal':\n", " return X_int.astype(self.dtype, copy=False)\n", "\n", " mask = X_mask.ravel()\n", " n_values = [cats.shape[0] for cats in self.categories_]\n", " n_values = np.array([0] + n_values)\n", " indices = np.cumsum(n_values)\n", "\n", " column_indices = (X_int + indices[:-1]).ravel()[mask]\n", " row_indices = np.repeat(np.arange(n_samples, dtype=np.int32),\n", " n_features)[mask]\n", " data = np.ones(n_samples * n_features)[mask]\n", "\n", " out = sparse.csc_matrix((data, (row_indices, column_indices)),\n", " shape=(n_samples, indices[-1]),\n", " dtype=self.dtype).tocsr()\n", " if self.encoding == 'onehot-dense':\n", " return out.toarray()\n", " else:\n", " return out" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The `CategoricalEncoder` expects a 2D array containing one or more categorical input features. We need to reshape `housing_cat` to a 2D array:" ] }, { "cell_type": "code", "execution_count": 63, "metadata": {}, "outputs": [], "source": [ "#from sklearn.preprocessing import CategoricalEncoder # in future versions of Scikit-Learn\n", "\n", "cat_encoder = CategoricalEncoder()\n", "housing_cat_reshaped = housing_cat.values.reshape(-1, 1)\n", "housing_cat_1hot = cat_encoder.fit_transform(housing_cat_reshaped)\n", "housing_cat_1hot" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The default encoding is one-hot, and it returns a sparse array. 
You can use `toarray()` to get a dense array:" ] }, { "cell_type": "code", "execution_count": 64, "metadata": {}, "outputs": [], "source": [ "housing_cat_1hot.toarray()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Alternatively, you can specify the encoding to be `\"onehot-dense\"` to get a dense matrix rather than a sparse matrix:" ] }, { "cell_type": "code", "execution_count": 65, "metadata": {}, "outputs": [], "source": [ "cat_encoder = CategoricalEncoder(encoding=\"onehot-dense\")\n", "housing_cat_1hot = cat_encoder.fit_transform(housing_cat_reshaped)\n", "housing_cat_1hot" ] }, { "cell_type": "code", "execution_count": 66, "metadata": {}, "outputs": [], "source": [ "cat_encoder.categories_" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's create a custom transformer to add extra attributes:" ] }, { "cell_type": "code", "execution_count": 67, "metadata": { "collapsed": true }, "outputs": [], "source": [ "from sklearn.base import BaseEstimator, TransformerMixin\n", "\n", "# column index\n", "rooms_ix, bedrooms_ix, population_ix, household_ix = 3, 4, 5, 6\n", "\n", "class CombinedAttributesAdder(BaseEstimator, TransformerMixin):\n", "    def __init__(self, add_bedrooms_per_room = True): # no *args or **kwargs\n", "        self.add_bedrooms_per_room = add_bedrooms_per_room\n", "    def fit(self, X, y=None):\n", "        return self  # nothing else to do\n", "    def transform(self, X, y=None):\n", "        rooms_per_household = X[:, rooms_ix] / X[:, household_ix]\n", "        population_per_household = X[:, population_ix] / X[:, household_ix]\n", "        if self.add_bedrooms_per_room:\n", "            bedrooms_per_room = X[:, bedrooms_ix] / X[:, rooms_ix]\n", "            return np.c_[X, rooms_per_household, population_per_household,\n", "                         bedrooms_per_room]\n", "        else:\n", "            return np.c_[X, rooms_per_household, population_per_household]\n", "\n", "attr_adder = CombinedAttributesAdder(add_bedrooms_per_room=False)\n", "housing_extra_attribs = attr_adder.transform(housing.values)" ] }, { "cell_type": "code", "execution_count": 68, "metadata": {}, "outputs": [], "source": [ "housing_extra_attribs = pd.DataFrame(housing_extra_attribs, columns=list(housing.columns)+[\"rooms_per_household\", \"population_per_household\"])\n", "housing_extra_attribs.head()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now let's build a pipeline for preprocessing the numerical attributes:" ] }, { "cell_type": "code", "execution_count": 69, "metadata": { "collapsed": true }, "outputs": [], "source": [ "from sklearn.pipeline import Pipeline\n", "from sklearn.preprocessing import StandardScaler\n", "\n", "num_pipeline = Pipeline([\n", "        ('imputer', Imputer(strategy=\"median\")),\n", "        ('attribs_adder', CombinedAttributesAdder()),\n", "        ('std_scaler', StandardScaler()),\n", "    ])\n", "\n", "housing_num_tr = num_pipeline.fit_transform(housing_num)" ] }, { "cell_type": "code", "execution_count": 70, "metadata": {}, "outputs": [], "source": [ "housing_num_tr" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "And a transformer to just select a subset of the Pandas DataFrame columns:" ] }, { "cell_type": "code", "execution_count": 71, "metadata": { "collapsed": true }, "outputs": [], "source": [ "from sklearn.base import BaseEstimator, TransformerMixin\n", "\n", "# Create a class to select numerical or categorical columns\n", "# since Scikit-Learn doesn't handle DataFrames yet\n", "class DataFrameSelector(BaseEstimator, TransformerMixin):\n", "    def __init__(self, attribute_names):\n", "        self.attribute_names = attribute_names\n", "    def fit(self, X, y=None):\n", "        return self\n", "    def transform(self, X):\n", "        return X[self.attribute_names].values" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now let's join all these components into a big pipeline that will preprocess both the numerical and the categorical features:" ] }, { "cell_type": "code", "execution_count": 72, "metadata": {}, "outputs": [], "source": [ "num_attribs = list(housing_num)\n", "cat_attribs = [\"ocean_proximity\"]\n", "\n", "num_pipeline = Pipeline([\n", "        ('selector', DataFrameSelector(num_attribs)),\n", "        ('imputer', Imputer(strategy=\"median\")),\n", "        ('attribs_adder', CombinedAttributesAdder()),\n", "        ('std_scaler', StandardScaler()),\n", "    ])\n", "\n", "cat_pipeline = Pipeline([\n", "        ('selector', DataFrameSelector(cat_attribs)),\n", "        ('cat_encoder', CategoricalEncoder(encoding=\"onehot-dense\")),\n", "    ])" ] }, { "cell_type": "code", "execution_count": 73, "metadata": { "collapsed": true }, "outputs": [], "source": [ "from sklearn.pipeline import FeatureUnion\n", "\n", "full_pipeline = FeatureUnion(transformer_list=[\n", "        (\"num_pipeline\", num_pipeline),\n", "        (\"cat_pipeline\", cat_pipeline),\n", "    ])" ] }, { "cell_type": "code", "execution_count": 74, "metadata": {}, "outputs": [], "source": [ "housing_prepared = full_pipeline.fit_transform(housing)\n", "housing_prepared" ] }, { "cell_type": "code", "execution_count": 75, "metadata": {}, "outputs": [], "source": [ "housing_prepared.shape" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# Select and train a model" ] }, { "cell_type": "code", "execution_count": 76, "metadata": {}, "outputs": [], "source": [ "from sklearn.linear_model import LinearRegression\n", "\n", "lin_reg = LinearRegression()\n", "lin_reg.fit(housing_prepared, housing_labels)" ] }, { "cell_type": "code", "execution_count": 77, "metadata": {}, "outputs": [], "source": [ "# let's try the full pipeline on a few training instances\n", "some_data = housing.iloc[:5]\n", "some_labels = housing_labels.iloc[:5]\n", "some_data_prepared = full_pipeline.transform(some_data)\n", "\n", "print(\"Predictions:\", lin_reg.predict(some_data_prepared))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Compare against the actual values:" ] }, { "cell_type": "code", "execution_count": 78, "metadata": {}, "outputs": [], "source": [ "print(\"Labels:\", list(some_labels))" ] }, { "cell_type": "code", "execution_count": 79, "metadata": {}, "outputs": [], "source": [ "some_data_prepared" ] }, { "cell_type": "code", "execution_count": 80, "metadata": {}, "outputs": [], "source": [ "from sklearn.metrics import mean_squared_error\n", "\n", "housing_predictions = lin_reg.predict(housing_prepared)\n", "lin_mse = mean_squared_error(housing_labels, housing_predictions)\n", "lin_rmse = np.sqrt(lin_mse)\n", "lin_rmse" ] }, { "cell_type": "code", "execution_count": 81, "metadata": {}, "outputs": [], "source": [ "from sklearn.metrics import mean_absolute_error\n", "\n", "lin_mae = mean_absolute_error(housing_labels, housing_predictions)\n", "lin_mae" ] }, { "cell_type": "code", "execution_count": 82, "metadata": {}, "outputs": [], "source": [ "from sklearn.tree import DecisionTreeRegressor\n", "\n", "tree_reg = DecisionTreeRegressor(random_state=42)\n", "tree_reg.fit(housing_prepared, housing_labels)" ] }, { "cell_type": "code", "execution_count": 83, "metadata": {}, "outputs": [], "source": [ "housing_predictions = tree_reg.predict(housing_prepared)\n", "tree_mse = 
mean_squared_error(housing_labels, housing_predictions)\n", "tree_rmse = np.sqrt(tree_mse)\n", "tree_rmse" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# Fine-tune your model" ] }, { "cell_type": "code", "execution_count": 84, "metadata": { "collapsed": true }, "outputs": [], "source": [ "from sklearn.model_selection import cross_val_score\n", "\n", "scores = cross_val_score(tree_reg, housing_prepared, housing_labels,\n", " scoring=\"neg_mean_squared_error\", cv=10)\n", "tree_rmse_scores = np.sqrt(-scores)" ] }, { "cell_type": "code", "execution_count": 85, "metadata": {}, "outputs": [], "source": [ "def display_scores(scores):\n", " print(\"Scores:\", scores)\n", " print(\"Mean:\", scores.mean())\n", " print(\"Standard deviation:\", scores.std())\n", "\n", "display_scores(tree_rmse_scores)" ] }, { "cell_type": "code", "execution_count": 86, "metadata": {}, "outputs": [], "source": [ "lin_scores = cross_val_score(lin_reg, housing_prepared, housing_labels,\n", " scoring=\"neg_mean_squared_error\", cv=10)\n", "lin_rmse_scores = np.sqrt(-lin_scores)\n", "display_scores(lin_rmse_scores)" ] }, { "cell_type": "code", "execution_count": 87, "metadata": {}, "outputs": [], "source": [ "from sklearn.ensemble import RandomForestRegressor\n", "\n", "forest_reg = RandomForestRegressor(random_state=42)\n", "forest_reg.fit(housing_prepared, housing_labels)" ] }, { "cell_type": "code", "execution_count": 88, "metadata": {}, "outputs": [], "source": [ "housing_predictions = forest_reg.predict(housing_prepared)\n", "forest_mse = mean_squared_error(housing_labels, housing_predictions)\n", "forest_rmse = np.sqrt(forest_mse)\n", "forest_rmse" ] }, { "cell_type": "code", "execution_count": 89, "metadata": {}, "outputs": [], "source": [ "from sklearn.model_selection import cross_val_score\n", "\n", "forest_scores = cross_val_score(forest_reg, housing_prepared, housing_labels,\n", " scoring=\"neg_mean_squared_error\", cv=10)\n", "forest_rmse_scores = np.sqrt(-forest_scores)\n", "display_scores(forest_rmse_scores)" ] }, { "cell_type": "code", "execution_count": 90, "metadata": {}, "outputs": [], "source": [ "scores = cross_val_score(lin_reg, housing_prepared, housing_labels, scoring=\"neg_mean_squared_error\", cv=10)\n", "pd.Series(np.sqrt(-scores)).describe()" ] }, { "cell_type": "code", "execution_count": 91, "metadata": {}, "outputs": [], "source": [ "from sklearn.svm import SVR\n", "\n", "svm_reg = SVR(kernel=\"linear\")\n", "svm_reg.fit(housing_prepared, housing_labels)\n", "housing_predictions = svm_reg.predict(housing_prepared)\n", "svm_mse = mean_squared_error(housing_labels, housing_predictions)\n", "svm_rmse = np.sqrt(svm_mse)\n", "svm_rmse" ] }, { "cell_type": "code", "execution_count": 92, "metadata": {}, "outputs": [], "source": [ "from sklearn.model_selection import GridSearchCV\n", "\n", "param_grid = [\n", " # try 12 (3×4) combinations of hyperparameters\n", " {'n_estimators': [3, 10, 30], 'max_features': [2, 4, 6, 8]},\n", " # then try 6 (2×3) combinations with bootstrap set as False\n", " {'bootstrap': [False], 'n_estimators': [3, 10], 'max_features': [2, 3, 4]},\n", " ]\n", "\n", "forest_reg = RandomForestRegressor(random_state=42)\n", "# train across 5 folds, that's a total of (12+6)*5=90 rounds of training \n", "grid_search = GridSearchCV(forest_reg, param_grid, cv=5,\n", " scoring='neg_mean_squared_error')\n", "grid_search.fit(housing_prepared, housing_labels)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The best hyperparameter combination found:" ] }, { 
"cell_type": "code", "execution_count": 93, "metadata": {}, "outputs": [], "source": [ "grid_search.best_params_" ] }, { "cell_type": "code", "execution_count": 94, "metadata": {}, "outputs": [], "source": [ "grid_search.best_estimator_" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's look at the score of each hyperparameter combination tested during the grid search:" ] }, { "cell_type": "code", "execution_count": 95, "metadata": {}, "outputs": [], "source": [ "cvres = grid_search.cv_results_\n", "for mean_score, params in zip(cvres[\"mean_test_score\"], cvres[\"params\"]):\n", " print(np.sqrt(-mean_score), params)" ] }, { "cell_type": "code", "execution_count": 96, "metadata": {}, "outputs": [], "source": [ "pd.DataFrame(grid_search.cv_results_)" ] }, { "cell_type": "code", "execution_count": 97, "metadata": {}, "outputs": [], "source": [ "from sklearn.model_selection import RandomizedSearchCV\n", "from scipy.stats import randint\n", "\n", "param_distribs = {\n", " 'n_estimators': randint(low=1, high=200),\n", " 'max_features': randint(low=1, high=8),\n", " }\n", "\n", "forest_reg = RandomForestRegressor(random_state=42)\n", "rnd_search = RandomizedSearchCV(forest_reg, param_distributions=param_distribs,\n", " n_iter=10, cv=5, scoring='neg_mean_squared_error', random_state=42)\n", "rnd_search.fit(housing_prepared, housing_labels)" ] }, { "cell_type": "code", "execution_count": 98, "metadata": {}, "outputs": [], "source": [ "cvres = rnd_search.cv_results_\n", "for mean_score, params in zip(cvres[\"mean_test_score\"], cvres[\"params\"]):\n", " print(np.sqrt(-mean_score), params)" ] }, { "cell_type": "code", "execution_count": 99, "metadata": {}, "outputs": [], "source": [ "feature_importances = grid_search.best_estimator_.feature_importances_\n", "feature_importances" ] }, { "cell_type": "code", "execution_count": 100, "metadata": {}, "outputs": [], "source": [ "extra_attribs = [\"rooms_per_hhold\", \"pop_per_hhold\", \"bedrooms_per_room\"]\n", "cat_encoder = cat_pipeline.named_steps[\"cat_encoder\"]\n", "cat_one_hot_attribs = list(cat_encoder.categories_[0])\n", "attributes = num_attribs + extra_attribs + cat_one_hot_attribs\n", "sorted(zip(feature_importances, attributes), reverse=True)" ] }, { "cell_type": "code", "execution_count": 101, "metadata": { "collapsed": true }, "outputs": [], "source": [ "final_model = grid_search.best_estimator_\n", "\n", "X_test = strat_test_set.drop(\"median_house_value\", axis=1)\n", "y_test = strat_test_set[\"median_house_value\"].copy()\n", "\n", "X_test_prepared = full_pipeline.transform(X_test)\n", "final_predictions = final_model.predict(X_test_prepared)\n", "\n", "final_mse = mean_squared_error(y_test, final_predictions)\n", "final_rmse = np.sqrt(final_mse)" ] }, { "cell_type": "code", "execution_count": 102, "metadata": {}, "outputs": [], "source": [ "final_rmse" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# Extra material" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## A full pipeline with both preparation and prediction" ] }, { "cell_type": "code", "execution_count": 103, "metadata": {}, "outputs": [], "source": [ "full_pipeline_with_predictor = Pipeline([\n", " (\"preparation\", full_pipeline),\n", " (\"linear\", LinearRegression())\n", " ])\n", "\n", "full_pipeline_with_predictor.fit(housing, housing_labels)\n", "full_pipeline_with_predictor.predict(some_data)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Model persistence using joblib" ] }, { "cell_type": "code", 
"execution_count": 104, "metadata": { "collapsed": true }, "outputs": [], "source": [ "my_model = full_pipeline_with_predictor" ] }, { "cell_type": "code", "execution_count": 105, "metadata": { "collapsed": true }, "outputs": [], "source": [ "from sklearn.externals import joblib\n", "joblib.dump(my_model, \"my_model.pkl\") # DIFF\n", "#...\n", "my_model_loaded = joblib.load(\"my_model.pkl\") # DIFF" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Example SciPy distributions for `RandomizedSearchCV`" ] }, { "cell_type": "code", "execution_count": 106, "metadata": {}, "outputs": [], "source": [ "from scipy.stats import geom, expon\n", "geom_distrib=geom(0.5).rvs(10000, random_state=42)\n", "expon_distrib=expon(scale=1).rvs(10000, random_state=42)\n", "plt.hist(geom_distrib, bins=50)\n", "plt.show()\n", "plt.hist(expon_distrib, bins=50)\n", "plt.show()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# Exercise solutions" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 1." ] }, { "cell_type": "markdown", "metadata": { "collapsed": true }, "source": [ "Question: Try a Support Vector Machine regressor (`sklearn.svm.SVR`), with various hyperparameters such as `kernel=\"linear\"` (with various values for the `C` hyperparameter) or `kernel=\"rbf\"` (with various values for the `C` and `gamma` hyperparameters). Don't worry about what these hyperparameters mean for now. How does the best `SVR` predictor perform?" ] }, { "cell_type": "code", "execution_count": 107, "metadata": {}, "outputs": [], "source": [ "from sklearn.model_selection import GridSearchCV\n", "\n", "param_grid = [\n", " {'kernel': ['linear'], 'C': [10., 30., 100., 300., 1000., 3000., 10000., 30000.0]},\n", " {'kernel': ['rbf'], 'C': [1.0, 3.0, 10., 30., 100., 300., 1000.0],\n", " 'gamma': [0.01, 0.03, 0.1, 0.3, 1.0, 3.0]},\n", " ]\n", "\n", "svm_reg = SVR()\n", "grid_search = GridSearchCV(svm_reg, param_grid, cv=5, scoring='neg_mean_squared_error', verbose=2, n_jobs=4)\n", "grid_search.fit(housing_prepared, housing_labels)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The best model achieves the following score (evaluated using 5-fold cross validation):" ] }, { "cell_type": "code", "execution_count": 108, "metadata": {}, "outputs": [], "source": [ "negative_mse = grid_search.best_score_\n", "rmse = np.sqrt(-negative_mse)\n", "rmse" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "That's much worse than the `RandomForestRegressor`. Let's check the best hyperparameters found:" ] }, { "cell_type": "code", "execution_count": 109, "metadata": {}, "outputs": [], "source": [ "grid_search.best_params_" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The linear kernel seems better than the RBF kernel. Notice that the value of `C` is the maximum tested value. When this happens you definitely want to launch the grid search again with higher values for `C` (removing the smallest values), because it is likely that higher values of `C` will be better." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 2." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Question: Try replacing `GridSearchCV` with `RandomizedSearchCV`." 
] }, { "cell_type": "code", "execution_count": 110, "metadata": {}, "outputs": [], "source": [ "from sklearn.model_selection import RandomizedSearchCV\n", "from scipy.stats import expon, reciprocal\n", "\n", "# see https://docs.scipy.org/doc/scipy-0.19.0/reference/stats.html\n", "# for `expon()` and `reciprocal()` documentation and more probability distribution functions.\n", "\n", "# Note: gamma is ignored when kernel is \"linear\"\n", "param_distribs = {\n", " 'kernel': ['linear', 'rbf'],\n", " 'C': reciprocal(20, 200000),\n", " 'gamma': expon(scale=1.0),\n", " }\n", "\n", "svm_reg = SVR()\n", "rnd_search = RandomizedSearchCV(svm_reg, param_distributions=param_distribs,\n", " n_iter=50, cv=5, scoring='neg_mean_squared_error',\n", " verbose=2, n_jobs=4, random_state=42)\n", "rnd_search.fit(housing_prepared, housing_labels)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The best model achieves the following score (evaluated using 5-fold cross validation):" ] }, { "cell_type": "code", "execution_count": 111, "metadata": {}, "outputs": [], "source": [ "negative_mse = rnd_search.best_score_\n", "rmse = np.sqrt(-negative_mse)\n", "rmse" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now this is much closer to the performance of the `RandomForestRegressor` (but not quite there yet). Let's check the best hyperparameters found:" ] }, { "cell_type": "code", "execution_count": 112, "metadata": {}, "outputs": [], "source": [ "rnd_search.best_params_" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This time the search found a good set of hyperparameters for the RBF kernel. Randomized search tends to find better hyperparameters than grid search in the same amount of time." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's look at the exponential distribution we used, with `scale=1.0`. Note that some samples are much larger or smaller than 1.0, but when you look at the log of the distribution, you can see that most values are actually concentrated roughly in the range of exp(-2) to exp(+2), which is about 0.1 to 7.4." ] }, { "cell_type": "code", "execution_count": 113, "metadata": {}, "outputs": [], "source": [ "expon_distrib = expon(scale=1.)\n", "samples = expon_distrib.rvs(10000, random_state=42)\n", "plt.figure(figsize=(10, 4))\n", "plt.subplot(121)\n", "plt.title(\"Exponential distribution (scale=1.0)\")\n", "plt.hist(samples, bins=50)\n", "plt.subplot(122)\n", "plt.title(\"Log of this distribution\")\n", "plt.hist(np.log(samples), bins=50)\n", "plt.show()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The distribution we used for `C` looks quite different: the scale of the samples is picked from a uniform distribution within a given range, which is why the right graph, which represents the log of the samples, looks roughly constant. 
This distribution is useful when you don't know in advance what the target scale is:" ] }, { "cell_type": "code", "execution_count": 114, "metadata": {}, "outputs": [], "source": [ "reciprocal_distrib = reciprocal(20, 200000)\n", "samples = reciprocal_distrib.rvs(10000, random_state=42)\n", "plt.figure(figsize=(10, 4))\n", "plt.subplot(121)\n", "plt.title(\"Reciprocal distribution (a=20, b=200000)\")\n", "plt.hist(samples, bins=50)\n", "plt.subplot(122)\n", "plt.title(\"Log of this distribution\")\n", "plt.hist(np.log(samples), bins=50)\n", "plt.show()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The reciprocal distribution is useful when you have no idea what the scale of the hyperparameter should be (indeed, as you can see on the figure on the right, all scales are equally likely, within the given range), whereas the exponential distribution is best when you know (more or less) what the scale of the hyperparameter should be." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 3." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Question: Try adding a transformer in the preparation pipeline to select only the most important attributes." ] }, { "cell_type": "code", "execution_count": 115, "metadata": { "collapsed": true }, "outputs": [], "source": [ "from sklearn.base import BaseEstimator, TransformerMixin\n", "\n", "def indices_of_top_k(arr, k):\n", "    return np.sort(np.argpartition(np.array(arr), -k)[-k:])\n", "\n", "class TopFeatureSelector(BaseEstimator, TransformerMixin):\n", "    def __init__(self, feature_importances, k):\n", "        self.feature_importances = feature_importances\n", "        self.k = k\n", "    def fit(self, X, y=None):\n", "        self.feature_indices_ = indices_of_top_k(self.feature_importances, self.k)\n", "        return self\n", "    def transform(self, X):\n", "        return X[:, self.feature_indices_]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Note: this feature selector assumes that you have already computed the feature importances somehow (for example using a `RandomForestRegressor`). You may be tempted to compute them directly in the `TopFeatureSelector`'s `fit()` method; however, this would likely slow down grid/randomized search, since the feature importances would have to be computed for every hyperparameter combination (unless you implement some sort of cache)." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's define the number of top features we want to keep:" ] }, { "cell_type": "code", "execution_count": 116, "metadata": { "collapsed": true }, "outputs": [], "source": [ "k = 5" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now let's look for the indices of the top k features:" ] }, { "cell_type": "code", "execution_count": 117, "metadata": {}, "outputs": [], "source": [ "top_k_feature_indices = indices_of_top_k(feature_importances, k)\n", "top_k_feature_indices" ] }, { "cell_type": "code", "execution_count": 118, "metadata": {}, "outputs": [], "source": [ "np.array(attributes)[top_k_feature_indices]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's double-check that these are indeed the top k features:" ] }, { "cell_type": "code", "execution_count": 119, "metadata": {}, "outputs": [], "source": [ "sorted(zip(feature_importances, attributes), reverse=True)[:k]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Looking good... 
Now let's create a new pipeline that runs the previously defined preparation pipeline, and adds top k feature selection:" ] }, { "cell_type": "code", "execution_count": 120, "metadata": { "collapsed": true }, "outputs": [], "source": [ "preparation_and_feature_selection_pipeline = Pipeline([\n", " ('preparation', full_pipeline),\n", " ('feature_selection', TopFeatureSelector(feature_importances, k))\n", "])" ] }, { "cell_type": "code", "execution_count": 121, "metadata": { "collapsed": true }, "outputs": [], "source": [ "housing_prepared_top_k_features = preparation_and_feature_selection_pipeline.fit_transform(housing)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's look at the features of the first 3 instances:" ] }, { "cell_type": "code", "execution_count": 122, "metadata": {}, "outputs": [], "source": [ "housing_prepared_top_k_features[0:3]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now let's double check that these are indeed the top k features:" ] }, { "cell_type": "code", "execution_count": 123, "metadata": {}, "outputs": [], "source": [ "housing_prepared[0:3, top_k_feature_indices]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Works great! :)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 4." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Question: Try creating a single pipeline that does the full data preparation plus the final prediction." ] }, { "cell_type": "code", "execution_count": 124, "metadata": { "collapsed": true }, "outputs": [], "source": [ "prepare_select_and_predict_pipeline = Pipeline([\n", " ('preparation', full_pipeline),\n", " ('feature_selection', TopFeatureSelector(feature_importances, k)),\n", " ('svm_reg', SVR(**rnd_search.best_params_))\n", "])" ] }, { "cell_type": "code", "execution_count": 125, "metadata": {}, "outputs": [], "source": [ "prepare_select_and_predict_pipeline.fit(housing, housing_labels)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's try the full pipeline on a few instances:" ] }, { "cell_type": "code", "execution_count": 126, "metadata": {}, "outputs": [], "source": [ "some_data = housing.iloc[:4]\n", "some_labels = housing_labels.iloc[:4]\n", "\n", "print(\"Predictions:\\t\", prepare_select_and_predict_pipeline.predict(some_data))\n", "print(\"Labels:\\t\\t\", list(some_labels))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Well, the full pipeline seems to work fine. Of course, the predictions are not fantastic: they would be better if we used the best `RandomForestRegressor` that we found earlier, rather than the best `SVR`." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 5." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Question: Automatically explore some preparation options using `GridSearchCV`." ] }, { "cell_type": "code", "execution_count": 127, "metadata": {}, "outputs": [], "source": [ "param_grid = [\n", " {'preparation__num_pipeline__imputer__strategy': ['mean', 'median', 'most_frequent'],\n", " 'feature_selection__k': [3, 4, 5, 6, 7]}\n", "]\n", "\n", "grid_search_prep = GridSearchCV(prepare_select_and_predict_pipeline, param_grid, cv=5,\n", " scoring='neg_mean_squared_error', verbose=2, n_jobs=4)\n", "grid_search_prep.fit(housing, housing_labels)" ] }, { "cell_type": "code", "execution_count": 128, "metadata": {}, "outputs": [], "source": [ "grid_search_prep.best_params_" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Great! 
It seems that the best imputer strategy is `mean`, and that only the top 7 features (out of the 16 prepared features) are useful; the rest seem to just add some noise." ] }, { "cell_type": "code", "execution_count": 129, "metadata": {}, "outputs": [], "source": [ "housing.shape" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Congratulations! You already know quite a lot about Machine Learning. :)" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.5.2" }, "nav_menu": { "height": "279px", "width": "309px" }, "toc": { "navigate_menu": true, "number_sections": true, "sideBar": true, "threshold": 6, "toc_cell": false, "toc_section_display": "block", "toc_window_display": false } }, "nbformat": 4, "nbformat_minor": 1 }