Add exercise solutions 1 to 3

main
Aurélien Geron 2017-10-04 10:57:40 +02:00
parent f1a5512a15
commit 6053a2e254
1 changed file with 896 additions and 4 deletions

@@ -460,9 +460,7 @@
{
"cell_type": "code",
"execution_count": 34,
- "metadata": {
-  "collapsed": true
- },
+ "metadata": {},
"outputs": [],
"source": [
"y_scores = cross_val_predict(sgd_clf, X_train, y_train_5, cv=3,\n",
@@ -1171,7 +1169,901 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "**Coming soon**"
+ "## 1. An MNIST Classifier With Over 97% Accuracy"
]
},
{
"cell_type": "code",
"execution_count": 88,
"metadata": {},
"outputs": [],
"source": [
"from sklearn.model_selection import GridSearchCV\n",
"\n",
"param_grid = [{'weights': [\"uniform\", \"distance\"], 'n_neighbors': [3, 4, 5]}]\n",
"\n",
"knn_clf = KNeighborsClassifier()\n",
"grid_search = GridSearchCV(knn_clf, param_grid, cv=5, verbose=3)\n",
"grid_search.fit(X_train, y_train)"
]
},
{
"cell_type": "code",
"execution_count": 89,
"metadata": {},
"outputs": [],
"source": [
"grid_search.best_params_"
]
},
{
"cell_type": "code",
"execution_count": 90,
"metadata": {},
"outputs": [],
"source": [
"grid_search.best_score_"
]
},
{
"cell_type": "code",
"execution_count": 91,
"metadata": {},
"outputs": [],
"source": [
"from sklearn.metrics import accuracy_score\n",
"\n",
"y_pred = grid_search.predict(X_test)\n",
"accuracy_score(y_test, y_pred)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 2. Data Augmentation"
]
},
{
"cell_type": "code",
"execution_count": 92,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": [
"from scipy.ndimage.interpolation import shift"
]
},
{
"cell_type": "code",
"execution_count": 93,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": [
"def shift_image(image, dx, dy):\n",
" image = image.reshape((28, 28))\n",
" shifted_image = shift(image, [dy, dx], cval=0, mode=\"constant\")\n",
" return shifted_image.reshape([-1])"
]
},
{
"cell_type": "code",
"execution_count": 94,
"metadata": {},
"outputs": [],
"source": [
"image = X_train[1000]\n",
"shifted_image_down = shift_image(image, 0, 5)\n",
"shifted_image_left = shift_image(image, -5, 0)\n",
"\n",
"plt.figure(figsize=(12,3))\n",
"plt.subplot(131)\n",
"plt.title(\"Original\", fontsize=14)\n",
"plt.imshow(image.reshape(28, 28), interpolation=\"nearest\", cmap=\"Greys\")\n",
"plt.subplot(132)\n",
"plt.title(\"Shifted down\", fontsize=14)\n",
"plt.imshow(shifted_image_down.reshape(28, 28), interpolation=\"nearest\", cmap=\"Greys\")\n",
"plt.subplot(133)\n",
"plt.title(\"Shifted left\", fontsize=14)\n",
"plt.imshow(shifted_image_left.reshape(28, 28), interpolation=\"nearest\", cmap=\"Greys\")\n",
"plt.show()"
]
},
{
"cell_type": "code",
"execution_count": 95,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": [
"X_train_augmented = [image for image in X_train]\n",
"y_train_augmented = [label for label in y_train]\n",
"\n",
"for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):\n",
" for image, label in zip(X_train, y_train):\n",
" X_train_augmented.append(shift_image(image, dx, dy))\n",
" y_train_augmented.append(label)\n",
"\n",
"X_train_augmented = np.array(X_train_augmented)\n",
"y_train_augmented = np.array(y_train_augmented)"
]
},
{
"cell_type": "code",
"execution_count": 96,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": [
"shuffle_idx = np.random.permutation(len(X_train_augmented))\n",
"X_train_augmented = X_train_augmented[shuffle_idx]\n",
"y_train_augmented = y_train_augmented[shuffle_idx]"
]
},
{
"cell_type": "code",
"execution_count": 97,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": [
"knn_clf = KNeighborsClassifier(**grid_search.best_params_)"
]
},
{
"cell_type": "code",
"execution_count": 98,
"metadata": {},
"outputs": [],
"source": [
"knn_clf.fit(X_train_augmented, y_train_augmented)"
]
},
{
"cell_type": "code",
"execution_count": 99,
"metadata": {},
"outputs": [],
"source": [
"y_pred = knn_clf.predict(X_test)\n",
"accuracy_score(y_test, y_pred)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"By simply augmenting the data, we got a 0.5% accuracy boost. :)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 3. Tackle the Titanic dataset"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The goal is to predict whether or not a passenger survived based on attributes such as their age, sex, passenger class, where they embarked and so on."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Let's load the data:"
]
},
{
"cell_type": "code",
"execution_count": 100,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": [
"import os\n",
"\n",
"TITANIC_PATH = os.path.join(\"datasets\", \"titanic\")"
]
},
{
"cell_type": "code",
"execution_count": 101,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": [
"import pandas as pd\n",
"\n",
"def load_titanic_data(filename, titanic_path=TITANIC_PATH):\n",
" csv_path = os.path.join(titanic_path, filename)\n",
" return pd.read_csv(csv_path)"
]
},
{
"cell_type": "code",
"execution_count": 102,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": [
"train_data = load_titanic_data(\"train.csv\")\n",
"test_data = load_titanic_data(\"test.csv\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The data is already split into a training set and a test set. However, the test data does *not* contain the labels: your goal is to train the best model you can using the training data, then make your predictions on the test data and upload them to Kaggle to see your final score."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Let's take a peek at the top few rows of the training set:"
]
},
{
"cell_type": "code",
"execution_count": 103,
"metadata": {},
"outputs": [],
"source": [
"train_data.head()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The attributes have the following meaning:\n",
"* **Survived**: that's the target, 0 means the passenger did not survive, while 1 means he/she survived.\n",
"* **Pclass**: passenger class.\n",
"* **Name**, **Sex**, **Age**: self-explanatory\n",
"* **SibSp**: how many siblings & spouses of the passenger aboard the Titanic.\n",
"* **Parch**: how many children & parents of the passenger aboard the Titanic.\n",
"* **Ticket**: ticket id\n",
"* **Fare**: price paid (in pounds)\n",
"* **Cabin**: passenger's cabin number\n",
"* **Embarked**: where the passenger embarked the Titanic"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Let's get more info to see how much data is missing:"
]
},
{
"cell_type": "code",
"execution_count": 104,
"metadata": {},
"outputs": [],
"source": [
"train_data.info()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Okay, the **Age**, **Cabin** and **Embarked** attributes are sometimes null (less than 891 non-null), especially the **Cabin** (77% are null). We will ignore the **Cabin** for now and focus on the rest. The **Age** attribute has about 19% null values, so we will need to decide what to do with them. Replacing null values with the median age seems reasonable."
]
},
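{
"cell_type": "markdown",
"metadata": {},
"source": [
"For instance, the fraction of missing values in each column can be checked directly (a small illustrative sketch using pandas, not part of the original solution):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Illustrative sketch: fraction of missing values per column,\n",
"# and the median age we would use to fill the missing Age values\n",
"print(train_data.isnull().mean())\n",
"train_data[\"Age\"].median()"
]
},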
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The **Name** and **Ticket** attributes may have some value, but they will be a bit tricky to convert into useful numbers that a model can consume. So for now, we will ignore them."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Let's take a look at the numerical attributes:"
]
},
{
"cell_type": "code",
"execution_count": 105,
"metadata": {},
"outputs": [],
"source": [
"train_data.describe()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"* Yikes, only 48% **Survived**. :( That's close to 50%, so accuracy will be a reasonable metric to evaluate our model.\n",
"* The mean **Fare** was £32.20, which does not seem so expensive (but it was probably a lot of money back then).\n",
"* The mean **Age** was less than 30 years old."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Let's check that the target is indeed 0 or 1:"
]
},
{
"cell_type": "code",
"execution_count": 106,
"metadata": {},
"outputs": [],
"source": [
"train_data[\"Survived\"].value_counts()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now let's take a quick look at all the categorical attributes:"
]
},
{
"cell_type": "code",
"execution_count": 107,
"metadata": {},
"outputs": [],
"source": [
"train_data[\"Pclass\"].value_counts()"
]
},
{
"cell_type": "code",
"execution_count": 108,
"metadata": {},
"outputs": [],
"source": [
"train_data[\"Sex\"].value_counts()"
]
},
{
"cell_type": "code",
"execution_count": 109,
"metadata": {},
"outputs": [],
"source": [
"train_data[\"Embarked\"].value_counts()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The Embarked attribute tells us where the passenger embarked: C=Cherbourg, Q=Queenstown, S=Southampton."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The `CategoricalEncoder` class will allow us to convert categorical attributes to one-hot vectors. It will soon be added to Scikit-Learn, and in the meantime you can use the code below (copied from Pull Request #9151)."
]
},
{
"cell_type": "code",
"execution_count": 110,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": [
"# Definition of the CategoricalEncoder class, copied from PR #9151.\n",
"# Just run this cell, or copy it to your code, no need to try to\n",
"# understand every line.\n",
"\n",
"from sklearn.base import BaseEstimator, TransformerMixin\n",
"from sklearn.utils import check_array\n",
"from sklearn.preprocessing import LabelEncoder\n",
"from scipy import sparse\n",
"\n",
"class CategoricalEncoder(BaseEstimator, TransformerMixin):\n",
" \"\"\"Encode categorical features as a numeric array.\n",
" The input to this transformer should be a matrix of integers or strings,\n",
" denoting the values taken on by categorical (discrete) features.\n",
" The features can be encoded using a one-hot aka one-of-K scheme\n",
" (``encoding='onehot'``, the default) or converted to ordinal integers\n",
" (``encoding='ordinal'``).\n",
" This encoding is needed for feeding categorical data to many scikit-learn\n",
" estimators, notably linear models and SVMs with the standard kernels.\n",
" Read more in the :ref:`User Guide <preprocessing_categorical_features>`.\n",
" Parameters\n",
" ----------\n",
" encoding : str, 'onehot', 'onehot-dense' or 'ordinal'\n",
" The type of encoding to use (default is 'onehot'):\n",
" - 'onehot': encode the features using a one-hot aka one-of-K scheme\n",
" (or also called 'dummy' encoding). This creates a binary column for\n",
" each category and returns a sparse matrix.\n",
" - 'onehot-dense': the same as 'onehot' but returns a dense array\n",
" instead of a sparse matrix.\n",
" - 'ordinal': encode the features as ordinal integers. This results in\n",
" a single column of integers (0 to n_categories - 1) per feature.\n",
" categories : 'auto' or a list of lists/arrays of values.\n",
" Categories (unique values) per feature:\n",
" - 'auto' : Determine categories automatically from the training data.\n",
" - list : ``categories[i]`` holds the categories expected in the ith\n",
" column. The passed categories are sorted before encoding the data\n",
" (used categories can be found in the ``categories_`` attribute).\n",
" dtype : number type, default np.float64\n",
" Desired dtype of output.\n",
" handle_unknown : 'error' (default) or 'ignore'\n",
" Whether to raise an error or ignore if a unknown categorical feature is\n",
" present during transform (default is to raise). When this is parameter\n",
" is set to 'ignore' and an unknown category is encountered during\n",
" transform, the resulting one-hot encoded columns for this feature\n",
" will be all zeros.\n",
" Ignoring unknown categories is not supported for\n",
" ``encoding='ordinal'``.\n",
" Attributes\n",
" ----------\n",
" categories_ : list of arrays\n",
" The categories of each feature determined during fitting. When\n",
" categories were specified manually, this holds the sorted categories\n",
" (in order corresponding with output of `transform`).\n",
" Examples\n",
" --------\n",
" Given a dataset with three features and two samples, we let the encoder\n",
" find the maximum value per feature and transform the data to a binary\n",
" one-hot encoding.\n",
" >>> from sklearn.preprocessing import CategoricalEncoder\n",
" >>> enc = CategoricalEncoder(handle_unknown='ignore')\n",
" >>> enc.fit([[0, 0, 3], [1, 1, 0], [0, 2, 1], [1, 0, 2]])\n",
" ... # doctest: +ELLIPSIS\n",
" CategoricalEncoder(categories='auto', dtype=<... 'numpy.float64'>,\n",
" encoding='onehot', handle_unknown='ignore')\n",
" >>> enc.transform([[0, 1, 1], [1, 0, 4]]).toarray()\n",
" array([[ 1., 0., 0., 1., 0., 0., 1., 0., 0.],\n",
" [ 0., 1., 1., 0., 0., 0., 0., 0., 0.]])\n",
" See also\n",
" --------\n",
" sklearn.preprocessing.OneHotEncoder : performs a one-hot encoding of\n",
" integer ordinal features. The ``OneHotEncoder assumes`` that input\n",
" features take on values in the range ``[0, max(feature)]`` instead of\n",
" using the unique values.\n",
" sklearn.feature_extraction.DictVectorizer : performs a one-hot encoding of\n",
" dictionary items (also handles string-valued features).\n",
" sklearn.feature_extraction.FeatureHasher : performs an approximate one-hot\n",
" encoding of dictionary items or strings.\n",
" \"\"\"\n",
"\n",
" def __init__(self, encoding='onehot', categories='auto', dtype=np.float64,\n",
" handle_unknown='error'):\n",
" self.encoding = encoding\n",
" self.categories = categories\n",
" self.dtype = dtype\n",
" self.handle_unknown = handle_unknown\n",
"\n",
" def fit(self, X, y=None):\n",
" \"\"\"Fit the CategoricalEncoder to X.\n",
" Parameters\n",
" ----------\n",
" X : array-like, shape [n_samples, n_feature]\n",
" The data to determine the categories of each feature.\n",
" Returns\n",
" -------\n",
" self\n",
" \"\"\"\n",
"\n",
" if self.encoding not in ['onehot', 'onehot-dense', 'ordinal']:\n",
" template = (\"encoding should be either 'onehot', 'onehot-dense' \"\n",
" \"or 'ordinal', got %s\")\n",
" raise ValueError(template % self.handle_unknown)\n",
"\n",
" if self.handle_unknown not in ['error', 'ignore']:\n",
" template = (\"handle_unknown should be either 'error' or \"\n",
" \"'ignore', got %s\")\n",
" raise ValueError(template % self.handle_unknown)\n",
"\n",
" if self.encoding == 'ordinal' and self.handle_unknown == 'ignore':\n",
" raise ValueError(\"handle_unknown='ignore' is not supported for\"\n",
" \" encoding='ordinal'\")\n",
"\n",
" X = check_array(X, dtype=np.object, accept_sparse='csc', copy=True)\n",
" n_samples, n_features = X.shape\n",
"\n",
" self._label_encoders_ = [LabelEncoder() for _ in range(n_features)]\n",
"\n",
" for i in range(n_features):\n",
" le = self._label_encoders_[i]\n",
" Xi = X[:, i]\n",
" if self.categories == 'auto':\n",
" le.fit(Xi)\n",
" else:\n",
" valid_mask = np.in1d(Xi, self.categories[i])\n",
" if not np.all(valid_mask):\n",
" if self.handle_unknown == 'error':\n",
" diff = np.unique(Xi[~valid_mask])\n",
" msg = (\"Found unknown categories {0} in column {1}\"\n",
" \" during fit\".format(diff, i))\n",
" raise ValueError(msg)\n",
" le.classes_ = np.array(np.sort(self.categories[i]))\n",
"\n",
" self.categories_ = [le.classes_ for le in self._label_encoders_]\n",
"\n",
" return self\n",
"\n",
" def transform(self, X):\n",
" \"\"\"Transform X using one-hot encoding.\n",
" Parameters\n",
" ----------\n",
" X : array-like, shape [n_samples, n_features]\n",
" The data to encode.\n",
" Returns\n",
" -------\n",
" X_out : sparse matrix or a 2-d array\n",
" Transformed input.\n",
" \"\"\"\n",
" X = check_array(X, accept_sparse='csc', dtype=np.object, copy=True)\n",
" n_samples, n_features = X.shape\n",
" X_int = np.zeros_like(X, dtype=np.int)\n",
" X_mask = np.ones_like(X, dtype=np.bool)\n",
"\n",
" for i in range(n_features):\n",
" valid_mask = np.in1d(X[:, i], self.categories_[i])\n",
"\n",
" if not np.all(valid_mask):\n",
" if self.handle_unknown == 'error':\n",
" diff = np.unique(X[~valid_mask, i])\n",
" msg = (\"Found unknown categories {0} in column {1}\"\n",
" \" during transform\".format(diff, i))\n",
" raise ValueError(msg)\n",
" else:\n",
" # Set the problematic rows to an acceptable value and\n",
" # continue `The rows are marked `X_mask` and will be\n",
" # removed later.\n",
" X_mask[:, i] = valid_mask\n",
" X[:, i][~valid_mask] = self.categories_[i][0]\n",
" X_int[:, i] = self._label_encoders_[i].transform(X[:, i])\n",
"\n",
" if self.encoding == 'ordinal':\n",
" return X_int.astype(self.dtype, copy=False)\n",
"\n",
" mask = X_mask.ravel()\n",
" n_values = [cats.shape[0] for cats in self.categories_]\n",
" n_values = np.array([0] + n_values)\n",
" indices = np.cumsum(n_values)\n",
"\n",
" column_indices = (X_int + indices[:-1]).ravel()[mask]\n",
" row_indices = np.repeat(np.arange(n_samples, dtype=np.int32),\n",
" n_features)[mask]\n",
" data = np.ones(n_samples * n_features)[mask]\n",
"\n",
" out = sparse.csc_matrix((data, (row_indices, column_indices)),\n",
" shape=(n_samples, indices[-1]),\n",
" dtype=self.dtype).tocsr()\n",
" if self.encoding == 'onehot-dense':\n",
" return out.toarray()\n",
" else:\n",
" return out"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now let's build our preprocessing pipelines. We will reuse the `DataframeSelector` we built in the previous chapter to select specific attributes from the `DataFrame`:"
]
},
{
"cell_type": "code",
"execution_count": 111,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": [
"from sklearn.base import BaseEstimator, TransformerMixin\n",
"\n",
"# A class to select numerical or categorical columns \n",
"# since Scikit-Learn doesn't handle DataFrames yet\n",
"class DataFrameSelector(BaseEstimator, TransformerMixin):\n",
" def __init__(self, attribute_names):\n",
" self.attribute_names = attribute_names\n",
" def fit(self, X, y=None):\n",
" return self\n",
" def transform(self, X):\n",
" return X[self.attribute_names]"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Let's build the pipeline for the numerical attributes:"
]
},
{
"cell_type": "code",
"execution_count": 112,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": [
"from sklearn.pipeline import Pipeline\n",
"from sklearn.preprocessing import Imputer\n",
"\n",
"imputer = Imputer(strategy=\"median\")\n",
"\n",
"num_pipeline = Pipeline([\n",
" (\"select_numeric\", DataFrameSelector([\"Age\", \"SibSp\", \"Parch\", \"Fare\"])),\n",
" (\"imputer\", Imputer(strategy=\"median\")),\n",
" ])"
]
},
{
"cell_type": "code",
"execution_count": 113,
"metadata": {},
"outputs": [],
"source": [
"num_pipeline.fit_transform(train_data)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We will also need an imputer for the string categorical columns (the regular `Imputer` does not work on those):"
]
},
{
"cell_type": "code",
"execution_count": 114,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": [
"# Inspired from stackoverflow.com/questions/25239958\n",
"class MostFrequentImputer(BaseEstimator, TransformerMixin):\n",
" def fit(self, X, y=None):\n",
" self.most_frequent = pd.Series([X[c].value_counts().index[0] for c in X],\n",
" index=X.columns)\n",
" return self\n",
" def transform(self, X, y=None):\n",
" return X.fillna(self.most_frequent)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now we can build the pipeline for the categorical attributes:"
]
},
{
"cell_type": "code",
"execution_count": 115,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": [
"cat_pipeline = Pipeline([\n",
" (\"select_cat\", DataFrameSelector([\"Pclass\", \"Sex\", \"Embarked\"])),\n",
" (\"imputer\", MostFrequentImputer()),\n",
" (\"cat_encoder\", CategoricalEncoder(encoding='onehot-dense')),\n",
" ])"
]
},
{
"cell_type": "code",
"execution_count": 116,
"metadata": {},
"outputs": [],
"source": [
"cat_pipeline.fit_transform(train_data)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Finally, let's join the numerical and categorical pipelines:"
]
},
{
"cell_type": "code",
"execution_count": 117,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": [
"from sklearn.pipeline import FeatureUnion\n",
"preprocess_pipeline = FeatureUnion(transformer_list=[\n",
" (\"num_pipeline\", num_pipeline),\n",
" (\"cat_pipeline\", cat_pipeline),\n",
" ])"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Cool! Now we have a nice preprocessing pipeline that takes the raw data and outputs numerical input features that we can feed to any Machine Learning model we want."
]
},
{
"cell_type": "code",
"execution_count": 118,
"metadata": {},
"outputs": [],
"source": [
"X_train = preprocess_pipeline.fit_transform(train_data)\n",
"X_train"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Let's not forget to get the labels:"
]
},
{
"cell_type": "code",
"execution_count": 119,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": [
"y_train = train_data[\"Survived\"]"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We are now ready to train a classifier. Let's start with an `SVC`:"
]
},
{
"cell_type": "code",
"execution_count": 120,
"metadata": {},
"outputs": [],
"source": [
"from sklearn.svm import SVC\n",
"\n",
"svm_clf = SVC()\n",
"svm_clf.fit(X_train, y_train)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Great, our model is trained, let's use it to make predictions on the test set:"
]
},
{
"cell_type": "code",
"execution_count": 121,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": [
"X_test = preprocess_pipeline.transform(test_data)\n",
"y_pred = svm_clf.predict(X_test)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"And now we could just build a CSV file with these predictions (respecting the format excepted by Kaggle), then upload it and hope for the best. But wait! We can do better than hope. Why don't we use cross-validation to have an idea of how good our model is?"
]
},
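{
"cell_type": "markdown",
"metadata": {},
"source": [
"Here is roughly what building such a submission file could look like. This is just an illustrative sketch, not part of the original solution: it assumes the test set's `PassengerId` column and Kaggle's `PassengerId,Survived` submission format."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Illustrative sketch of a Kaggle submission file (assumes test_data has a\n",
"# PassengerId column and y_pred holds the predictions computed above)\n",
"submission = pd.DataFrame({\n",
"    \"PassengerId\": test_data[\"PassengerId\"],\n",
"    \"Survived\": y_pred\n",
"})\n",
"submission.to_csv(\"titanic_submission.csv\", index=False)"
]
},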
{
"cell_type": "code",
"execution_count": 122,
"metadata": {},
"outputs": [],
"source": [
"from sklearn.model_selection import cross_val_score\n",
"\n",
"scores = cross_val_score(svm_clf, X_train, y_train, cv=10)\n",
"scores.mean()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Okay, over 73% accuracy, clearly better than random chance, but it's not a great score. Looking at the [leaderboard](https://www.kaggle.com/c/titanic/leaderboard) for the Titanic competition on Kaggle, you can see that you need to reach above 80% accuracy to be within the top 10% Kagglers. Some reached 100%, but since you can easily find the [list of victims](https://www.encyclopedia-titanica.org/titanic-victims/) of the Titanic, it seems likely that there was little Machine Learning involved in their performance! ;-) So let's try to build a model that reaches 80% accuracy."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Let's try a `RandomForestClassifier`:"
]
},
{
"cell_type": "code",
"execution_count": 123,
"metadata": {},
"outputs": [],
"source": [
"from sklearn.ensemble import RandomForestClassifier\n",
"\n",
"forest_clf = RandomForestClassifier(random_state=42)\n",
"scores = cross_val_score(forest_clf, X_train, y_train, cv=10)\n",
"scores.mean()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"That's much better!"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"To improve this result further, you could:\n",
"* Compare many more models and tune hyperparameters using cross validation and grid search,\n",
"* Do more feature engineering, for example:\n",
" * replace **SibSp** and **Parch** with their sum,\n",
" * try to identify parts of names that correlate well with the **Survived** attribute (e.g. if the name contains \"Countess\", then survival seems more likely),\n",
"* try to convert numerical attributes to categorical attributes: for example, different age groups had very different survival rates (see below), so it may help to create an age bucket category and use it instead of the age. Similarly, it may be useful to have a special category for people traveling alone since only 30% of them survived (see below)."
]
},
{
"cell_type": "code",
"execution_count": 124,
"metadata": {},
"outputs": [],
"source": [
"train_data[\"AgeBucket\"] = train_data[\"Age\"] // 15 * 15\n",
"train_data[[\"AgeBucket\", \"Survived\"]].groupby(['AgeBucket']).mean()"
]
},
{
"cell_type": "code",
"execution_count": 125,
"metadata": {},
"outputs": [],
"source": [
"train_data[\"RelativesOnboard\"] = train_data[\"SibSp\"] + train_data[\"Parch\"]\n",
"train_data[[\"RelativesOnboard\", \"Survived\"]].groupby(['RelativesOnboard']).mean()"
]
},
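{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a quick sketch of the name-based idea mentioned above, we could extract each passenger's title with a regular expression and look at the survival rate per title. This is only an illustration (the regex and the `Title` column name are assumptions, not part of the original solution):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Illustrative sketch: extract the title (e.g. \"Mr\", \"Mrs\", \"Countess\") from the Name column\n",
"train_data[\"Title\"] = train_data[\"Name\"].str.extract(r\" ([A-Za-z]+)\\.\", expand=False)\n",
"train_data[[\"Title\", \"Survived\"]].groupby(\"Title\").mean()"
]
},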
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 4. Spam classifier\n",
"\n",
"Coming soon..."
]
},
{