From 04c539e8029fcce35837b4c17f837fbeabdeb801 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Aur=C3=A9lien=20Geron?= Date: Sat, 20 Feb 2016 21:37:07 +0100 Subject: [PATCH] First part of the pandas tutorial --- index.ipynb | 1 + tools_pandas.ipynb | 1714 +++++++++++++++++++++++++++++++++++++++++++- 2 files changed, 1713 insertions(+), 2 deletions(-) diff --git a/index.ipynb b/index.ipynb index 73a0b15..aa9d9b8 100644 --- a/index.ipynb +++ b/index.ipynb @@ -11,6 +11,7 @@ "## Tools\n", "* [NumPy](tools_numpy.ipynb)\n", "* [Matplotlib](tools_matplotlib.ipynb)\n", + "* [Pandas](tools_pandas.ipynb)\n", "\n", "**This work is in progress, more notebooks are coming soon...**" ] diff --git a/tools_pandas.ipynb b/tools_pandas.ipynb index fb73ed7..f9b4a75 100644 --- a/tools_pandas.ipynb +++ b/tools_pandas.ipynb @@ -5,9 +5,1719 @@ "metadata": {}, "source": [ "# Tools - pandas\n", - "*The `pandas` library provides high-performance, easy-to-use data structures and data analysis tools.*\n", + "*The `pandas` library provides high-performance, easy-to-use data structures and data analysis tools. The main data structure is the `DataFrame`, which you can think of as a spreadsheet (including column names and row labels).*\n", "\n", - "Coming soon..." + "**Prerequisites:**\n", + "* NumPy – if you are not familiar with NumPy, we recommend that you go through the [NumPy tutorial](tools_numpy.ipynb) now.\n", + "\n", + "## Setup\n", + "First, let's make sure this notebook works well in both python 2 and 3:" + ] + }, + { + "cell_type": "code", + "execution_count": 1, + "metadata": { + "collapsed": true + }, + "outputs": [], + "source": [ + "from __future__ import division\n", + "from __future__ import print_function\n", + "from __future__ import unicode_literals" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Now let's import `pandas`. 
People usually import it as `pd`:" + ] + }, + { + "cell_type": "code", + "execution_count": 2, + "metadata": { + "collapsed": true + }, + "outputs": [], + "source": [ + "import pandas as pd" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## `Series` objects\n", + "The `pandas` library contains these useful data structures:\n", + "* `Series` objects, which we will discuss first. A `Series` object is similar to a column in a spreadsheet (with a column name and row labels).\n", + "* `DataFrame` objects. You can see this as a full spreadsheet (with column names and row labels).\n", + "* `Panel` objects. You can see a `Panel` as a dictionary of `DataFrame`s. They are less commonly used, so we will not discuss them here.\n", + "\n", + "### Creating a `Series`\n", + "Let's start by creating our first `Series` object!" + ] + }, + { + "cell_type": "code", + "execution_count": 3, + "metadata": { + "collapsed": false + }, + "outputs": [], + "source": [ + "s = pd.Series([2,-1,3,5])\n", + "s" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Similar to a 1D `ndarray`\n", + "`Series` objects behave much like one-dimensional NumPy `ndarray`s, and you can often pass them as parameters to NumPy functions:" + ] + }, + { + "cell_type": "code", + "execution_count": 4, + "metadata": { + "collapsed": false + }, + "outputs": [], + "source": [ + "import numpy as np\n", + "np.exp(s)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Arithmetic operations on `Series` are also possible, and they apply *elementwise*, just like for `ndarray`s:" + ] + }, + { + "cell_type": "code", + "execution_count": 5, + "metadata": { + "collapsed": false + }, + "outputs": [], + "source": [ + "s + pd.Series([1000,2000,3000,4000])" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Similar to NumPy, if you add a single number to a `Series`, that number is added to all items in the `Series`:" + ] + }, + {
"cell_type": "code", + "execution_count": 6, + "metadata": { + "collapsed": false + }, + "outputs": [], + "source": [ + "s + 1000" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "The same is true for all binary operations such as `*` or `/`, and even conditional operations:" + ] + }, + { + "cell_type": "code", + "execution_count": 7, + "metadata": { + "collapsed": false + }, + "outputs": [], + "source": [ + "s < 0" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Index labels\n", + "Each item in a `Series` object has a unique identifier called the *index label*. By default, it is simply the index of the item in the `Series` but you can also set the index labels manually:" + ] + }, + { + "cell_type": "code", + "execution_count": 8, + "metadata": { + "collapsed": false + }, + "outputs": [], + "source": [ + "s2 = pd.Series([68, 83, 112, 68], index=[\"alice\", \"bob\", \"charles\", \"darwin\"])\n", + "s2" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "You can then use the `Series` just like a `dict`:" + ] + }, + { + "cell_type": "code", + "execution_count": 9, + "metadata": { + "collapsed": false + }, + "outputs": [], + "source": [ + "s2[\"bob\"]" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "You can still access the items by location, like in a regular array:" + ] + }, + { + "cell_type": "code", + "execution_count": 10, + "metadata": { + "collapsed": false + }, + "outputs": [], + "source": [ + "s2[1]" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Slicing a `Series` also slices the index labels:" + ] + }, + { + "cell_type": "code", + "execution_count": 11, + "metadata": { + "collapsed": false + }, + "outputs": [], + "source": [ + "s2[1:3]" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "This can lead to unexpected results when using the default labels, so be careful:" + ] + }, + { + "cell_type": 
"code", + "execution_count": 12, + "metadata": { + "collapsed": false + }, + "outputs": [], + "source": [ + "surprise = pd.Series([1000, 1001, 1002, 1003])\n", + "surprise" + ] + }, + { + "cell_type": "code", + "execution_count": 13, + "metadata": { + "collapsed": false + }, + "outputs": [], + "source": [ + "surprise_slice = surprise[2:]\n", + "surprise_slice" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Oh look! The first element has index label `2`. The element with index label `0` is absent from the slice:" + ] + }, + { + "cell_type": "code", + "execution_count": 14, + "metadata": { + "collapsed": false + }, + "outputs": [], + "source": [ + "try:\n", + " surprise_slice[0]\n", + "except KeyError as e:\n", + " print(\"Key error:\", e)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "But you can access elements by location using the `iloc` attribute:" + ] + }, + { + "cell_type": "code", + "execution_count": 15, + "metadata": { + "collapsed": false + }, + "outputs": [], + "source": [ + "surprise_slice.iloc[0]" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Init from `dict`\n", + "You can create a `Series` object from a `dict`. 
The keys will be used as index labels:" + ] + }, + { + "cell_type": "code", + "execution_count": 16, + "metadata": { + "collapsed": false + }, + "outputs": [], + "source": [ + "weights = {\"alice\": 68, \"bob\": 83, \"colin\": 86, \"darwin\": 68}\n", + "s3 = pd.Series(weights)\n", + "s3" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "You can control which elements you want to include in the `Series` and in what order by passing a second argument to the constructor with the list of desired index labels:" + ] + }, + { + "cell_type": "code", + "execution_count": 17, + "metadata": { + "collapsed": false + }, + "outputs": [], + "source": [ + "s4 = pd.Series(weights, [\"colin\", \"alice\"])\n", + "s4" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Automatic alignment\n", + "When an operation involves multiple `Series` objects, `pandas` automatically aligns items by matching index labels." + ] + }, + { + "cell_type": "code", + "execution_count": 18, + "metadata": { + "collapsed": false + }, + "outputs": [], + "source": [ + "print(s2.keys())\n", + "print(s3.keys())\n", + "s2 + s3" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "The resulting `Series` contains the union of index labels from `s2` and `s3`. Since `\"colin\"` is missing from `s2` and `\"charles\"` is missing from `s3`, these items have a `NaN` result value (i.e. Not-a-Number, which means *missing*).\n", + "\n", + "Automatic alignment is very handy when working with data that may come from various sources with varying structure and missing items.
But if you forget to set the right index labels, you can have surprising results:" + ] + }, + { + "cell_type": "code", + "execution_count": 19, + "metadata": { + "collapsed": false + }, + "outputs": [], + "source": [ + "s5 = pd.Series([1000,1000,1000,1000])\n", + "print(\"s2 =\", s2.values)\n", + "print(\"s5 =\", s5.values)\n", + "print(\"s2 + s5 =\")\n", + "s2 + s5" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Init with a scalar\n", + "You can also initialize a `Series` object using a scalar and a list of index labels: all items will be set to the scalar." + ] + }, + { + "cell_type": "code", + "execution_count": 20, + "metadata": { + "collapsed": false + }, + "outputs": [], + "source": [ + "meaning = pd.Series(42, [\"life\", \"universe\", \"everything\"])\n", + "meaning" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### `Series` name\n", + "A `Series` can have a `name`:" + ] + }, + { + "cell_type": "code", + "execution_count": 21, + "metadata": { + "collapsed": false + }, + "outputs": [], + "source": [ + "s6 = pd.Series([83, 68], index=[\"bob\", \"alice\"], name=\"weights\")\n", + "s6" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Plotting a `Series`\n", + "Pandas makes it easy to plot `Series` data using matplotlib (for more details on matplotlib, check out the [matplotlib tutorial](tools_matplotlib.ipynb)). Just import matplotlib and call the `plot` method:" + ] + }, + { + "cell_type": "code", + "execution_count": 85, + "metadata": { + "collapsed": false, + "scrolled": true + }, + "outputs": [], + "source": [ + "%matplotlib inline\n", + "import matplotlib.pyplot as plt\n", + "s7 = pd.Series([4,9,10,8,14,12,11,9,17,16,19,13], name=\"temperature\")\n", + "s7.plot()\n", + "plt.show()" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "There are *many* options for plotting your data. 
It is not necessary to list them all here: if you need a particular type of plot (histograms, pie charts, etc.), just look for it in the excellent [Visualization](http://pandas.pydata.org/pandas-docs/stable/visualization.html) section of pandas' documentation, and look at the example code." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## `DataFrame` objects\n", + "A DataFrame object represents a spreadsheet, with cell values, column names and row index labels. You can think of them as dictionaries of `Series` objects.\n", + "\n", + "### Creating a `DataFrame`\n", + "You can create a DataFrame by passing a dictionary of `Series` objects:" + ] + }, + { + "cell_type": "code", + "execution_count": 23, + "metadata": { + "collapsed": false + }, + "outputs": [], + "source": [ + "people_ids = [\"alice\", \"bob\", \"charles\"]\n", + "people_dict = {\n", + " \"weight\": pd.Series([68, 83, 112], index=people_ids),\n", + " \"birthyear\": pd.Series([1985, 1984, 1992], index=people_ids, name=\"year\"),\n", + " \"children\": pd.Series([np.nan, 3, 0], index=people_ids),\n", + " \"hobby\": pd.Series([\"Biking\", \"Dancing\", \"Reading\"], index=people_ids),\n", + "}\n", + "people = pd.DataFrame(people_dict)\n", + "people" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Note that DataFrames are displayed nicely in Jupyter notebooks! Also, note that `Series` names are ignored (`\"year\"` was dropped)." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "You can access columns pretty much as you would expect. 
They are returned as `Series` objects:" + ] + }, + { + "cell_type": "code", + "execution_count": 24, + "metadata": { + "collapsed": false + }, + "outputs": [], + "source": [ + "people[\"birthyear\"]" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "If you pass a list of columns and/or index row labels to the `DataFrame` constructor, it will guarantee that these columns and/or rows will exist, in that order, and no other column/row will exist. For example:" + ] + }, + { + "cell_type": "code", + "execution_count": 25, + "metadata": { + "collapsed": false + }, + "outputs": [], + "source": [ + "d2 = pd.DataFrame(\n", + " people_dict,\n", + " columns=[\"birthyear\", \"weight\", \"height\"],\n", + " index=[\"bob\", \"alice\", \"eugene\"]\n", + " )\n", + "d2" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Another convenient way to create a `DataFrame` is to pass all the values to the constructor as an `ndarray`, and specify the column names and row index labels separately:" + ] + }, + { + "cell_type": "code", + "execution_count": 26, + "metadata": { + "collapsed": false + }, + "outputs": [], + "source": [ + "values = np.array([\n", + " [1985, np.nan, \"Biking\", 68],\n", + " [1984, 3, \"Dancing\", 83],\n", + " [1992, 0, \"Reading\", 112]\n", + " ])\n", + "d3 = pd.DataFrame(\n", + " values,\n", + " columns=[\"birthyear\", \"children\", \"hobby\", \"weight\"],\n", + " index=[\"alice\", \"bob\", \"charles\"]\n", + " )\n", + "d3" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Instead of an `ndarray`, you can also pass a `DataFrame` object:" + ] + }, + { + "cell_type": "code", + "execution_count": 27, + "metadata": { + "collapsed": false + }, + "outputs": [], + "source": [ + "d4 = pd.DataFrame(\n", + " d3,\n", + " columns=[\"hobby\", \"children\"],\n", + " index=[\"alice\", \"bob\"]\n", + " )\n", + "d4" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "It is also possible 
to create a `DataFrame` with a dictionary (or list) of dictionaries (or lists):" + ] + }, + { + "cell_type": "code", + "execution_count": 28, + "metadata": { + "collapsed": false + }, + "outputs": [], + "source": [ + "people = pd.DataFrame({\n", + " \"birthyear\": {\"alice\":1985, \"bob\": 1984, \"charles\": 1992},\n", + " \"hobby\": {\"alice\":\"Biking\", \"bob\": \"Dancing\", \"charles\": \"Reading\"},\n", + " \"weight\": {\"alice\":68, \"bob\": 83, \"charles\": 112},\n", + " \"children\": {\"alice\":np.nan, \"bob\": 3, \"charles\": 0}\n", + "})\n", + "people" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Multi-indexing\n", + "If all columns are tuples of the same size, then they are understood as a multi-index. The same goes for row index labels. For example:" + ] + }, + { + "cell_type": "code", + "execution_count": 29, + "metadata": { + "collapsed": false + }, + "outputs": [], + "source": [ + "d5 = pd.DataFrame(\n", + " {\n", + " (\"public\", \"birthyear\"):\n", + " {(\"Paris\",\"alice\"):1985, (\"Paris\",\"bob\"): 1984, (\"London\",\"charles\"): 1992},\n", + " (\"public\", \"hobby\"):\n", + " {(\"Paris\",\"alice\"):\"Biking\", (\"Paris\",\"bob\"): \"Dancing\", (\"London\",\"charles\"): \"Reading\"},\n", + " (\"private\", \"weight\"):\n", + " {(\"Paris\",\"alice\"):68, (\"Paris\",\"bob\"): 83, (\"London\",\"charles\"): 112},\n", + " (\"private\", \"children\"):\n", + " {(\"Paris\", \"alice\"):np.nan, (\"Paris\",\"bob\"): 3, (\"London\",\"charles\"): 0}\n", + " }\n", + ")\n", + "d5" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "You can now get a `DataFrame` containing all the `\"public\"` columns very simply:" + ] + }, + { + "cell_type": "code", + "execution_count": 30, + "metadata": { + "collapsed": false + }, + "outputs": [], + "source": [ + "d5[\"public\"]" + ] + }, + { + "cell_type": "code", + "execution_count": 31, + "metadata": { + "collapsed": false + }, + "outputs": [], + "source": [ +
"d5[\"public\", \"hobby\"] # Same result as d4[\"public\"][\"hobby\"]" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Accessing rows\n", + "Let's go back to the `people` `DataFrame`:" + ] + }, + { + "cell_type": "code", + "execution_count": 32, + "metadata": { + "collapsed": false + }, + "outputs": [], + "source": [ + "people" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "The `loc` attribute lets you access rows instead of columns. The result is `Series` object in which the `DataFrame`'s column names are mapped to row index labels:" + ] + }, + { + "cell_type": "code", + "execution_count": 33, + "metadata": { + "collapsed": false + }, + "outputs": [], + "source": [ + "people.loc[\"charles\"]" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "You can also access rows by location using the `iloc` attribute:" + ] + }, + { + "cell_type": "code", + "execution_count": 34, + "metadata": { + "collapsed": false + }, + "outputs": [], + "source": [ + "people.iloc[2]" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "You can also get a slice of rows, and this returns a `DataFrame` object:" + ] + }, + { + "cell_type": "code", + "execution_count": 35, + "metadata": { + "collapsed": false + }, + "outputs": [], + "source": [ + "people.iloc[1:3]" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Finally, you can pass a boolean array to get the matching rows:" + ] + }, + { + "cell_type": "code", + "execution_count": 36, + "metadata": { + "collapsed": false + }, + "outputs": [], + "source": [ + "people[np.array([True, False, True])]" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "This is most useful when combined with boolean expressions:" + ] + }, + { + "cell_type": "code", + "execution_count": 37, + "metadata": { + "collapsed": false + }, + "outputs": [], + "source": [ + "people[people[\"birthyear\"] < 1990]" + ] + }, + { 
+ "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Adding and removing columns\n", + "You can generally treat `DataFrame` objects like dictionaries of `Series`, so the following work fine:" + ] + }, + { + "cell_type": "code", + "execution_count": 38, + "metadata": { + "collapsed": false + }, + "outputs": [], + "source": [ + "people" + ] + }, + { + "cell_type": "code", + "execution_count": 39, + "metadata": { + "collapsed": false + }, + "outputs": [], + "source": [ + "people[\"age\"] = 2016 - people[\"birthyear\"] # adds a new column \"age\"\n", + "people[\"over 30\"] = people[\"age\"] > 30 # adds another column \"over 30\"\n", + "birthyears = people.pop(\"birthyear\")\n", + "del people[\"children\"]\n", + "\n", + "people" + ] + }, + { + "cell_type": "code", + "execution_count": 40, + "metadata": { + "collapsed": false + }, + "outputs": [], + "source": [ + "birthyears" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "When you add a new colum, it must have the same number of rows. Missing rows are filled with NaN, and extra rows are ignored:" + ] + }, + { + "cell_type": "code", + "execution_count": 41, + "metadata": { + "collapsed": false + }, + "outputs": [], + "source": [ + "people[\"pets\"] = pd.Series({\"bob\": 0, \"charles\": 5, \"eugene\":1}) # alice is missing, eugene is ignored\n", + "people" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "When adding a new column, it is added at the end (on the right) by default. You can also insert a column anywhere else using the `insert` method:" + ] + }, + { + "cell_type": "code", + "execution_count": 42, + "metadata": { + "collapsed": false + }, + "outputs": [], + "source": [ + "people.insert(1, \"height\", [172, 181, 185])\n", + "people" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Assigning new columns\n", + "You can also create new columns by calling the `assign` method. 
Note that this returns a new `DataFrame` object; the original is not modified:" + ] + }, + { + "cell_type": "code", + "execution_count": 43, + "metadata": { + "collapsed": false + }, + "outputs": [], + "source": [ + "people.assign(\n", + " body_mass_index = people[\"weight\"] / (people[\"height\"] / 100) ** 2,\n", + " has_pets = people[\"pets\"] > 0\n", + ")" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Note that you cannot access columns created within the same assignment:" + ] + }, + { + "cell_type": "code", + "execution_count": 44, + "metadata": { + "collapsed": false + }, + "outputs": [], + "source": [ + "try:\n", + " people.assign(\n", + " body_mass_index = people[\"weight\"] / (people[\"height\"] / 100) ** 2,\n", + " overweight = people[\"body_mass_index\"] > 25\n", + " )\n", + "except KeyError as e:\n", + " print(\"Key error:\", e)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "The solution is to split this assignment into two consecutive assignments:" + ] + }, + { + "cell_type": "code", + "execution_count": 45, + "metadata": { + "collapsed": false + }, + "outputs": [], + "source": [ + "d6 = people.assign(body_mass_index = people[\"weight\"] / (people[\"height\"] / 100) ** 2)\n", + "d6.assign(overweight = d6[\"body_mass_index\"] > 25)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Having to create a temporary variable `d6` is not very convenient.
You may want to just chain the assignment calls, but it does not work because the `people` object is not actually modified by the first assignment:" + ] + }, + { + "cell_type": "code", + "execution_count": 46, + "metadata": { + "collapsed": false + }, + "outputs": [], + "source": [ + "try:\n", + " (people\n", + " .assign(body_mass_index = people[\"weight\"] / (people[\"height\"] / 100) ** 2)\n", + " .assign(overweight = people[\"body_mass_index\"] > 25)\n", + " )\n", + "except KeyError as e:\n", + " print(\"Key error:\", e)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "But fear not, there is a simple solution. You can pass a function to the `assign` method (typically a `lambda` function), and this function will be called with the `DataFrame` as a parameter:" + ] + }, + { + "cell_type": "code", + "execution_count": 47, + "metadata": { + "collapsed": false + }, + "outputs": [], + "source": [ + "(people\n", + " .assign(body_mass_index = lambda df: df[\"weight\"] / (df[\"height\"] / 100) ** 2)\n", + " .assign(overweight = lambda df: df[\"body_mass_index\"] > 25)\n", + ")" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Problem solved!" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Evaluating an expression\n", + "A great feature supported by pandas is expression evaluation. This relies on the `numexpr` library, which must be installed."
+ ] + }, + { + "cell_type": "code", + "execution_count": 48, + "metadata": { + "collapsed": false + }, + "outputs": [], + "source": [ + "people.eval(\"weight / (height/100) ** 2 > 25\")" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Assignment expressions are also supported. Contrary to the `assign` method, this does not create a copy of the `DataFrame`; it directly modifies it:" + ] + }, + { + "cell_type": "code", + "execution_count": 49, + "metadata": { + "collapsed": false + }, + "outputs": [], + "source": [ + "people.eval(\"body_mass_index = weight / (height/100) ** 2\")\n", + "people" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "You can use a local or global variable in an expression by prefixing it with `'@'`:" + ] + }, + { + "cell_type": "code", + "execution_count": 50, + "metadata": { + "collapsed": false + }, + "outputs": [], + "source": [ + "overweight_threshold = 30\n", + "people.eval(\"overweight = body_mass_index > @overweight_threshold\")\n", + "people" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Querying a `DataFrame`\n", + "The `query` method lets you filter a `DataFrame` based on a query expression:" + ] + }, + { + "cell_type": "code", + "execution_count": 51, + "metadata": { + "collapsed": false + }, + "outputs": [], + "source": [ + "people.query(\"age > 30 and pets == 0\")" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Sorting a `DataFrame`\n", + "You can sort a `DataFrame` by calling its `sort_index` method.
By default it sorts the rows by their index label, in ascending order, but let's reverse the order:" + ] + }, + { + "cell_type": "code", + "execution_count": 52, + "metadata": { + "collapsed": false + }, + "outputs": [], + "source": [ + "people.sort_index(ascending=False)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Note that `sort_index` returned a sorted *copy* of the `DataFrame`. To modify `people` directly, we can set the `inplace` argument to `True`. Also, we can sort the columns instead of the rows by setting `axis=1`:" + ] + }, + { + "cell_type": "code", + "execution_count": 53, + "metadata": { + "collapsed": false + }, + "outputs": [], + "source": [ + "people.sort_index(axis=1, inplace=True)\n", + "people" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "To sort the `DataFrame` by the values instead of the labels, we can use `sort_values` and specify the column to sort by:" + ] + }, + { + "cell_type": "code", + "execution_count": 54, + "metadata": { + "collapsed": false + }, + "outputs": [], + "source": [ + "people.sort_values(by=\"age\", inplace=True)\n", + "people" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Plotting a `DataFrame`\n", + "Just like for `Series`, pandas makes it easy to draw nice graphs based on a `DataFrame`.\n", + "\n", + "For example, it is trivial to create a line plot from a `DataFrame`'s data by calling its `plot` method:" + ] + }, + { + "cell_type": "code", + "execution_count": 55, + "metadata": { + "collapsed": false + }, + "outputs": [], + "source": [ + "people.plot(kind = \"line\", x = \"body_mass_index\", y = [\"height\", \"weight\"])\n", + "plt.show()" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "You can pass extra arguments supported by matplotlib's functions. 
For example, we can create a scatterplot and pass it a list of sizes using the `s` argument of matplotlib's `scatter` function:" + ] + }, + { + "cell_type": "code", + "execution_count": 56, + "metadata": { + "collapsed": false, + "scrolled": true + }, + "outputs": [], + "source": [ + "people.plot(kind = \"scatter\", x = \"height\", y = \"weight\", s=[40, 120, 200])\n", + "plt.show()" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Again, there are way too many options to list here: the best approach is to scroll through the [Visualization](http://pandas.pydata.org/pandas-docs/stable/visualization.html) page in pandas' documentation, find the plot you are interested in and look at the example code." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Operations on `DataFrame`s\n", + "Although `DataFrame`s do not try to mimic NumPy arrays, there are a few similarities. Let's create a `DataFrame` to demonstrate this:" + ] + }, + { + "cell_type": "code", + "execution_count": 57, + "metadata": { + "collapsed": false + }, + "outputs": [], + "source": [ + "grades_array = np.array([[8,8,9],[10,9,9],[4, 8, 2], [9, 10, 10]])\n", + "grades = pd.DataFrame(grades_array, columns=[\"sep\", \"oct\", \"nov\"], index=[\"alice\",\"bob\",\"charles\",\"darwin\"])\n", + "grades" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "You can apply NumPy mathematical functions on a `DataFrame`: the function is applied to all values:" + ] + }, + { + "cell_type": "code", + "execution_count": 58, + "metadata": { + "collapsed": false + }, + "outputs": [], + "source": [ + "np.sqrt(grades)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Similarly, adding a single value to a `DataFrame` will add that value to all elements in the `DataFrame`.
This is called *broadcasting*:" + ] + }, + { + "cell_type": "code", + "execution_count": 59, + "metadata": { + "collapsed": false + }, + "outputs": [], + "source": [ + "grades + 1" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Of course, the same is true for all other binary operations, including arithmetic (`*`,`/`,`**`...) and conditional (`>`, `==`...) operations:" + ] + }, + { + "cell_type": "code", + "execution_count": 60, + "metadata": { + "collapsed": false, + "scrolled": false + }, + "outputs": [], + "source": [ + "grades >= 5" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Aggregation operations, such as computing the `max`, the `sum` or the `mean` of a `DataFrame`, apply to each column, and you get back a `Series` object:" + ] + }, + { + "cell_type": "code", + "execution_count": 61, + "metadata": { + "collapsed": false + }, + "outputs": [], + "source": [ + "grades.mean()" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "The `all` method is also an aggregation operation: it checks whether all values are `True` or not. Let's see during which months all students got a grade greater than `5`:" + ] + }, + { + "cell_type": "code", + "execution_count": 62, + "metadata": { + "collapsed": false + }, + "outputs": [], + "source": [ + "(grades > 5).all()" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Most of these functions take an optional `axis` parameter which lets you specify along which axis of the `DataFrame` you want the operation executed. The default is `axis=0`, meaning that the operation is executed vertically (on each column). You can set `axis=1` to execute the operation horizontally (on each row). 
For example, let's find out which students had all grades greater than `5`:" + ] + }, + { + "cell_type": "code", + "execution_count": 63, + "metadata": { + "collapsed": false + }, + "outputs": [], + "source": [ + "(grades > 5).all(axis = 1)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "The `any` method returns `True` if any value is `True`. Let's see who got at least one grade 10:" + ] + }, + { + "cell_type": "code", + "execution_count": 64, + "metadata": { + "collapsed": false + }, + "outputs": [], + "source": [ + "(grades == 10).any(axis = 1)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "If you add a `Series` object to a `DataFrame` (or execute any other binary operation), pandas attempts to broadcast the operation to all *rows* in the `DataFrame`. This only works if the `Series` has the same size as the `DataFrame`'s rows. For example, let's subtract the `mean` of the `DataFrame` (a `Series` object) from the `DataFrame`:" + ] + }, + { + "cell_type": "code", + "execution_count": 65, + "metadata": { + "collapsed": false + }, + "outputs": [], + "source": [ + "grades - grades.mean() # equivalent to: grades - [7.75, 8.75, 7.50]" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "We subtracted `7.75` from all September grades, `8.75` from October grades, and `7.50` from November grades.
It is equivalent to subtracting this `DataFrame`:" + ] + }, + { + "cell_type": "code", + "execution_count": 66, + "metadata": { + "collapsed": false + }, + "outputs": [], + "source": [ + "pd.DataFrame([[7.75, 8.75, 7.50]]*4, index=grades.index, columns=grades.columns)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "If you want to subtract the global mean from every grade, here is one way to do it:" + ] + }, + { + "cell_type": "code", + "execution_count": 67, + "metadata": { + "collapsed": false, + "scrolled": true + }, + "outputs": [], + "source": [ + "grades - grades.values.mean() # subtracts the global mean (8.00) from all grades" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Automatic alignment\n", + "Similar to `Series`, when operating on multiple `DataFrame`s, pandas automatically aligns them by row index label, but also by column names. Let's create a `DataFrame` with bonus points for each person from October to December:" + ] + }, + { + "cell_type": "code", + "execution_count": 68, + "metadata": { + "collapsed": false + }, + "outputs": [], + "source": [ + "bonus_array = np.array([[0,np.nan,2],[np.nan,1,0],[0, 1, 0], [3, 3, 0]])\n", + "bonus_points = pd.DataFrame(bonus_array, columns=[\"oct\", \"nov\", \"dec\"], index=[\"bob\",\"colin\", \"darwin\", \"charles\"])\n", + "bonus_points" + ] + }, + { + "cell_type": "code", + "execution_count": 69, + "metadata": { + "collapsed": false, + "scrolled": true + }, + "outputs": [], + "source": [ + "grades + bonus_points" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Looks like the addition worked in some cases, but way too many elements are now empty. That's because when aligning the `DataFrame`s, some columns and rows were only present on one side, and thus they were considered missing on the other side (`NaN`).
Adding `NaN` to a number results in `NaN`, hence the missing values in the result.\n", + "\n", + "### Handling missing data\n", + "Dealing with missing data is a frequent task when working with real-life data. Pandas offers a few tools to handle it.\n", + " \n", + "Let's try to fix the problem above. For example, we can decide that missing data should result in a zero, instead of `NaN`. We can replace all `NaN` values with any value using the `fillna` method:" + ] + }, + { + "cell_type": "code", + "execution_count": 70, + "metadata": { + "collapsed": false, + "scrolled": true + }, + "outputs": [], + "source": [ + "(grades + bonus_points).fillna(0)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "It's a bit unfair that we're setting grades to zero in September, though. Perhaps we should decide that missing grades are missing grades, but missing bonus points should be replaced by zeros:" + ] + }, + { + "cell_type": "code", + "execution_count": 71, + "metadata": { + "collapsed": false + }, + "outputs": [], + "source": [ + "fixed_bonus_points = bonus_points.fillna(0)\n", + "fixed_bonus_points.insert(0, \"sep\", 0)\n", + "fixed_bonus_points.loc[\"alice\"] = 0\n", + "grades + fixed_bonus_points" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "That's much better: although we made up some data, we have not been too unfair.\n", + "\n", + "Another way to handle missing data is to interpolate. Let's look at the `bonus_points` `DataFrame` again:" + ] + }, + { + "cell_type": "code", + "execution_count": 72, + "metadata": { + "collapsed": false + }, + "outputs": [], + "source": [ + "bonus_points" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Now let's call the `interpolate` method. By default, it interpolates vertically (`axis=0`), so let's tell it to interpolate horizontally (`axis=1`)."
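Before that, here is a minimal hedged sketch of the default vertical behaviour, on toy data (not the notebook's `bonus_points`): with no arguments, `interpolate` fills each gap linearly down its column.

```python
import numpy as np
import pandas as pd

# A single column with a gap in the middle (hypothetical values)
df = pd.DataFrame({"points": [0.0, np.nan, 2.0]})

# Default axis=0: the gap is filled linearly down the column,
# so the NaN becomes 1.0, the mean of its two neighbours
print(df.interpolate())
```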
+ ] + }, + { + "cell_type": "code", + "execution_count": 73, + "metadata": { + "collapsed": false, + "scrolled": false + }, + "outputs": [], + "source": [ + "bonus_points.interpolate(axis=1)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Bob had 0 bonus points in October and 2 in December. When we interpolate for November, we get the mean: 1 bonus point. Colin had 1 bonus point in November, but we do not know how many bonus points he had in September, so we cannot interpolate; this is why there is still a missing value in October after interpolation. To fix this, we can set the September bonus points to 0 before interpolation." + ] + }, + { + "cell_type": "code", + "execution_count": 74, + "metadata": { + "collapsed": false + }, + "outputs": [], + "source": [ + "better_bonus_points = bonus_points.copy()\n", + "better_bonus_points.insert(0, \"sep\", 0)\n", + "better_bonus_points.loc[\"alice\"] = 0\n", + "better_bonus_points = better_bonus_points.interpolate(axis=1)\n", + "better_bonus_points" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Great, now we have reasonable bonus points everywhere. Let's find out the final grades:" + ] + }, + { + "cell_type": "code", + "execution_count": 75, + "metadata": { + "collapsed": false + }, + "outputs": [], + "source": [ + "grades + better_bonus_points" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "There's not much we can do about December and Colin: it's bad enough that we are making up bonus points, but we can't reasonably make up grades (well, I guess some teachers probably do).\n", + "\n", + "It is slightly annoying that the September column ends up on the right. This is because the `DataFrame`s we are adding do not have the exact same columns (the `grades` `DataFrame` is missing the `\"dec\"` column), so to make things predictable, pandas orders the final columns alphabetically. 
To fix this, we can simply add the missing column before adding:" + ] + }, + { + "cell_type": "code", + "execution_count": 76, + "metadata": { + "collapsed": false, + "scrolled": true + }, + "outputs": [], + "source": [ + "grades[\"dec\"] = np.nan\n", + "final_grades = grades + better_bonus_points\n", + "final_grades" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Aggregating with `groupby`\n", + "Similar to the SQL language, pandas lets you group your data and run calculations over each group.\n", + "\n", + "First, let's add some extra data about each person so we can group them:" + ] + }, + { + "cell_type": "code", + "execution_count": 77, + "metadata": { + "collapsed": false, + "scrolled": true + }, + "outputs": [], + "source": [ + "final_grades[\"hobby\"] = [\"Biking\", \"Dancing\", \"Reading\", \"Dancing\", \"Biking\"]\n", + "final_grades" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Now let's group the data in this `DataFrame` by hobby:" + ] + }, + { + "cell_type": "code", + "execution_count": 78, + "metadata": { + "collapsed": false + }, + "outputs": [], + "source": [ + "grouped_grades = final_grades.groupby(\"hobby\")\n", + "grouped_grades" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Now let's compute the average grade per hobby:" + ] + }, + { + "cell_type": "code", + "execution_count": 79, + "metadata": { + "collapsed": false + }, + "outputs": [], + "source": [ + "grouped_grades.mean()" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "That was easy! Note that the `NaN` values have simply been skipped." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Overview functions\n", + "When dealing with large `DataFrame`s, it is useful to get a quick overview of their content. Pandas offers a few functions for this. 
First, let's create a large `DataFrame` with a mix of numeric values, missing values and text values. Notice how Jupyter displays only the corners of the `DataFrame`:" + ] + }, + { + "cell_type": "code", + "execution_count": 80, + "metadata": { + "collapsed": false, + "scrolled": false + }, + "outputs": [], + "source": [ + "much_data = np.fromfunction(lambda x,y: (x+y*y)%17*11, (10000, 26))\n", + "large = pd.DataFrame(much_data, columns=list(\"ABCDEFGHIJKLMNOPQRSTUVWXYZ\"))\n", + "large[large%16==0] = np.nan\n", + "large.insert(3,\"some_text\", \"Blabla\")\n", + "large" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "The `head` method returns the top 5 rows:" + ] + }, + { + "cell_type": "code", + "execution_count": 81, + "metadata": { + "collapsed": false, + "scrolled": false + }, + "outputs": [], + "source": [ + "large.head()" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Of course there's also a `tail` function to view the bottom 5 rows. 
You can pass the number of rows you want:" + ] + }, + { + "cell_type": "code", + "execution_count": 82, + "metadata": { + "collapsed": false + }, + "outputs": [], + "source": [ + "large.tail(n=2)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "The `info` method prints out a summary of each column's contents:" + ] + }, + { + "cell_type": "code", + "execution_count": 83, + "metadata": { + "collapsed": false, + "scrolled": false + }, + "outputs": [], + "source": [ + "large.info()" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Finally, the `describe` method gives a nice overview of the main aggregated values over each column:\n", + "* `count`: number of non-null (not NaN) values\n", + "* `mean`: mean of non-null values\n", + "* `std`: [standard deviation](https://en.wikipedia.org/wiki/Standard_deviation) of non-null values\n", + "* `min`: minimum of non-null values\n", + "* `25%`, `50%`, `75%`: 25th, 50th and 75th [percentile](https://en.wikipedia.org/wiki/Percentile) of non-null values\n", + "* `max`: maximum of non-null values" + ] + }, + { + "cell_type": "code", + "execution_count": 84, + "metadata": { + "collapsed": false, + "scrolled": false + }, + "outputs": [], + "source": [ + "large.describe()" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "# To be continued...\n", + "Coming soon:\n", + "* categories\n", + "* pivot tables\n", + "* stacking\n", + "* merging\n", + "* time series\n", + "* loading & saving" ] } ],
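To close the overview section, one more hedged sketch (toy data, hypothetical values): by default `describe` only summarizes the numeric columns; passing `include="all"` also covers text columns (like `some_text` above), reporting their count, number of unique values, most frequent value and its frequency.

```python
import numpy as np
import pandas as pd

# Tiny frame mixing numbers and text (hypothetical values)
df = pd.DataFrame({"grade": [8.0, np.nan, 10.0],
                   "hobby": ["Biking", "Dancing", "Biking"]})

print(df.describe())               # numeric columns only; count skips the NaN
print(df.describe(include="all"))  # also summarizes the text column
```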