handson-ml/tools_pandas.ipynb
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"**Tools - pandas**\n",
"\n",
"*The `pandas` library provides high-performance, easy-to-use data structures and data analysis tools. The main data structure is the `DataFrame`, which you can think of as an in-memory 2D table (like a spreadsheet, with column names and row labels). Many features available in Excel are available programmatically, such as creating pivot tables, computing columns based on other columns, plotting graphs, etc. You can also group rows by column value, or join tables much like in SQL. Pandas is also great at handling time series.*\n",
"\n",
"Prerequisites:\n",
"* NumPy: if you are not familiar with NumPy, we recommend that you go through the [NumPy tutorial](tools_numpy.ipynb) now."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<table align=\"left\">\n",
" <td>\n",
" <a target=\"_blank\" href=\"https://colab.research.google.com/github/ageron/handson-ml2/blob/master/tools_pandas.ipynb\"><img src=\"https://www.tensorflow.org/images/colab_logo_32px.png\" />Run in Google Colab</a>\n",
" </td>\n",
"</table>"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Setup"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"First, let's import `pandas`. People usually import it as `pd`:"
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [],
"source": [
"import pandas as pd"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# `Series` objects\n",
"The `pandas` library contains these useful data structures:\n",
"* `Series` objects, which we will discuss now. A `Series` object is a 1D array, similar to a column in a spreadsheet (with a column name and row labels).\n",
"* `DataFrame` objects. This is a 2D table, similar to a spreadsheet (with column names and row labels).\n",
"* `Panel` objects. You can see a `Panel` as a dictionary of `DataFrame`s. These were rarely used, and have since been deprecated and removed from pandas, so we will not discuss them here."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Creating a `Series`\n",
"Let's start by creating our first `Series` object!"
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {},
"outputs": [],
"source": [
"s = pd.Series([2,-1,3,5])\n",
"s"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Similar to a 1D `ndarray`\n",
"`Series` objects behave much like one-dimensional NumPy `ndarray`s, and you can often pass them as parameters to NumPy functions:"
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {},
"outputs": [],
"source": [
"import numpy as np\n",
"np.exp(s)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Arithmetic operations on `Series` are also possible, and they apply *elementwise*, just like for `ndarray`s:"
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {},
"outputs": [],
"source": [
"s + [1000,2000,3000,4000]"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Similar to NumPy, if you add a single number to a `Series`, that number is added to all items in the `Series`. This is called *broadcasting*:"
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {},
"outputs": [],
"source": [
"s + 1000"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The same is true for all binary operations such as `*` or `/`, and even conditional operations:"
]
},
{
"cell_type": "code",
"execution_count": 6,
"metadata": {},
"outputs": [],
"source": [
"s < 0"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Index labels\n",
"Each item in a `Series` object has a unique identifier called the *index label*. By default, it is simply the rank of the item in the `Series` (starting at `0`) but you can also set the index labels manually:"
]
},
{
"cell_type": "code",
"execution_count": 7,
"metadata": {},
"outputs": [],
"source": [
"s2 = pd.Series([68, 83, 112, 68], index=[\"alice\", \"bob\", \"charles\", \"darwin\"])\n",
"s2"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"You can then use the `Series` just like a `dict`:"
]
},
{
"cell_type": "code",
"execution_count": 8,
"metadata": {},
"outputs": [],
"source": [
"s2[\"bob\"]"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"You can still access the items by integer location, like in a regular array:"
]
},
{
"cell_type": "code",
"execution_count": 9,
"metadata": {},
"outputs": [],
"source": [
"s2[1]"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"To make it clear when you are accessing by label or by integer location, it is recommended to always use the `loc` attribute when accessing by label, and the `iloc` attribute when accessing by integer location:"
]
},
{
"cell_type": "code",
"execution_count": 10,
"metadata": {},
"outputs": [],
"source": [
"s2.loc[\"bob\"]"
]
},
{
"cell_type": "code",
"execution_count": 11,
"metadata": {},
"outputs": [],
"source": [
"s2.iloc[1]"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Slicing a `Series` also slices the index labels:"
]
},
{
"cell_type": "code",
"execution_count": 12,
"metadata": {},
"outputs": [],
"source": [
"s2.iloc[1:3]"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"This can lead to unexpected results when using the default numeric labels, so be careful:"
]
},
{
"cell_type": "code",
"execution_count": 13,
"metadata": {},
"outputs": [],
"source": [
"surprise = pd.Series([1000, 1001, 1002, 1003])\n",
"surprise"
]
},
{
"cell_type": "code",
"execution_count": 14,
"metadata": {},
"outputs": [],
"source": [
"surprise_slice = surprise[2:]\n",
"surprise_slice"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Oh look! The first element has index label `2`. The element with index label `0` is absent from the slice:"
]
},
{
"cell_type": "code",
"execution_count": 15,
"metadata": {},
"outputs": [],
"source": [
"try:\n",
" surprise_slice[0]\n",
"except KeyError as e:\n",
" print(\"Key error:\", e)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"But remember that you can access elements by integer location using the `iloc` attribute. This illustrates another reason why it's always better to use `loc` and `iloc` to access `Series` objects:"
]
},
{
"cell_type": "code",
"execution_count": 16,
"metadata": {},
"outputs": [],
"source": [
"surprise_slice.iloc[0]"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Init from `dict`\n",
"You can create a `Series` object from a `dict`. The keys will be used as index labels:"
]
},
{
"cell_type": "code",
"execution_count": 17,
"metadata": {},
"outputs": [],
"source": [
"weights = {\"alice\": 68, \"bob\": 83, \"colin\": 86, \"darwin\": 68}\n",
"s3 = pd.Series(weights)\n",
"s3"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"You can control which elements you want to include in the `Series` and in what order by explicitly specifying the desired `index`:"
]
},
{
"cell_type": "code",
"execution_count": 18,
"metadata": {},
"outputs": [],
"source": [
"s4 = pd.Series(weights, index = [\"colin\", \"alice\"])\n",
"s4"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Automatic alignment\n",
"When an operation involves multiple `Series` objects, `pandas` automatically aligns items by matching index labels."
]
},
{
"cell_type": "code",
"execution_count": 19,
"metadata": {},
"outputs": [],
"source": [
"print(s2.keys())\n",
"print(s3.keys())\n",
"\n",
"s2 + s3"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The resulting `Series` contains the union of index labels from `s2` and `s3`. Since `\"colin\"` is missing from `s2` and `\"charles\"` is missing from `s3`, these items have a `NaN` result value (i.e. Not-a-Number, meaning *missing*).\n",
"\n",
"Automatic alignment is very handy when working with data that may come from various sources with varying structure and missing items. But if you forget to set the right index labels, you can have surprising results:"
]
},
{
"cell_type": "code",
"execution_count": 20,
"metadata": {},
"outputs": [],
"source": [
"s5 = pd.Series([1000,1000,1000,1000])\n",
"print(\"s2 =\", s2.values)\n",
"print(\"s5 =\", s5.values)\n",
"\n",
"s2 + s5"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Pandas could not align the `Series`, since their labels do not match at all, hence the full `NaN` result."
]
},
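{
"cell_type": "markdown",
"metadata": {},
"source": [
"If a positional (elementwise) sum is really what you want in such a case, one workaround (a minimal sketch, assuming positional matching is the intent) is to bypass label alignment by using the underlying NumPy array of one operand:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"s2 + s5.values # adding an ndarray is positional, so no label alignment occurs"
]
},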
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Init with a scalar\n",
"You can also initialize a `Series` object using a scalar and a list of index labels: all items will be set to the scalar."
]
},
{
"cell_type": "code",
"execution_count": 21,
"metadata": {},
"outputs": [],
"source": [
"meaning = pd.Series(42, [\"life\", \"universe\", \"everything\"])\n",
"meaning"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## `Series` name\n",
"A `Series` can have a `name`:"
]
},
{
"cell_type": "code",
"execution_count": 22,
"metadata": {},
"outputs": [],
"source": [
"s6 = pd.Series([83, 68], index=[\"bob\", \"alice\"], name=\"weights\")\n",
"s6"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Plotting a `Series`\n",
"Pandas makes it easy to plot `Series` data using matplotlib (for more details on matplotlib, check out the [matplotlib tutorial](tools_matplotlib.ipynb)). Just import matplotlib and call the `plot()` method:"
]
},
{
"cell_type": "code",
"execution_count": 23,
"metadata": {
"scrolled": true
},
"outputs": [],
"source": [
"%matplotlib inline\n",
"import matplotlib.pyplot as plt\n",
"temperatures = [4.4,5.1,6.1,6.2,6.1,6.1,5.7,5.2,4.7,4.1,3.9,3.5]\n",
"s7 = pd.Series(temperatures, name=\"Temperature\")\n",
"s7.plot()\n",
"plt.show()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"There are *many* options for plotting your data. It is not necessary to list them all here: if you need a particular type of plot (histograms, pie charts, etc.), just look for it in the excellent [Visualization](http://pandas.pydata.org/pandas-docs/stable/visualization.html) section of pandas' documentation, and look at the example code."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Handling time\n",
"Many datasets have timestamps, and pandas is awesome at manipulating such data:\n",
"* it can represent periods (such as 2016Q3) and frequencies (such as \"monthly\"),\n",
"* it can convert periods to actual timestamps, and *vice versa*,\n",
"* it can resample data and aggregate values any way you like,\n",
"* it can handle timezones.\n",
"\n",
"## Time range\n",
"Let's start by creating a time series using `pd.date_range()`. This returns a `DatetimeIndex` containing one datetime per hour for 12 hours starting on October 29th 2016 at 5:30pm."
]
},
{
"cell_type": "code",
"execution_count": 24,
"metadata": {},
"outputs": [],
"source": [
"dates = pd.date_range('2016/10/29 5:30pm', periods=12, freq='H')\n",
"dates"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"This `DatetimeIndex` may be used as an index in a `Series`:"
]
},
{
"cell_type": "code",
"execution_count": 25,
"metadata": {},
"outputs": [],
"source": [
"temp_series = pd.Series(temperatures, dates)\n",
"temp_series"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Let's plot this series:"
]
},
{
"cell_type": "code",
"execution_count": 26,
"metadata": {},
"outputs": [],
"source": [
"temp_series.plot(kind=\"bar\")\n",
"\n",
"plt.grid(True)\n",
"plt.show()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Resampling\n",
"Pandas lets us resample a time series very simply. Just call the `resample()` method and specify a new frequency:"
]
},
{
"cell_type": "code",
"execution_count": 27,
"metadata": {},
"outputs": [],
"source": [
"temp_series_freq_2H = temp_series.resample(\"2H\")\n",
"temp_series_freq_2H"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The resampling operation is actually a deferred operation, which is why we did not get a `Series` object, but a `DatetimeIndexResampler` object instead. To actually perform the resampling operation, we can simply call the `mean()` method: Pandas will compute the mean of every pair of consecutive hours:"
]
},
{
"cell_type": "code",
"execution_count": 28,
"metadata": {},
"outputs": [],
"source": [
"temp_series_freq_2H = temp_series_freq_2H.mean()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Let's plot the result:"
]
},
{
"cell_type": "code",
"execution_count": 29,
"metadata": {},
"outputs": [],
"source": [
"temp_series_freq_2H.plot(kind=\"bar\")\n",
"plt.show()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Note how the values have automatically been aggregated into 2-hour periods. If we look at the 6-8pm period, for example, we had a value of `5.1` at 6:30pm, and `6.1` at 7:30pm. After resampling, we just have one value of `5.6`, which is the mean of `5.1` and `6.1`. Rather than computing the mean, we could have used any other aggregation function, for example we can decide to keep the minimum value of each period:"
]
},
{
"cell_type": "code",
"execution_count": 30,
"metadata": {},
"outputs": [],
"source": [
"temp_series_freq_2H = temp_series.resample(\"2H\").min()\n",
"temp_series_freq_2H"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Or, equivalently, we could use the `apply()` method instead:"
]
},
{
"cell_type": "code",
"execution_count": 31,
"metadata": {},
"outputs": [],
"source": [
"temp_series_freq_2H = temp_series.resample(\"2H\").apply(np.min)\n",
"temp_series_freq_2H"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Upsampling and interpolation\n",
"This was an example of downsampling. We can also upsample (i.e. increase the frequency), but this creates holes in our data:"
]
},
{
"cell_type": "code",
"execution_count": 32,
"metadata": {},
"outputs": [],
"source": [
"temp_series_freq_15min = temp_series.resample(\"15Min\").mean()\n",
"temp_series_freq_15min.head(n=10) # `head` displays the top n values"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"One solution is to fill the gaps by interpolating. We just call the `interpolate()` method. The default is to use linear interpolation, but we can also select another method, such as cubic interpolation:"
]
},
{
"cell_type": "code",
"execution_count": 33,
"metadata": {
"scrolled": true
},
"outputs": [],
"source": [
"temp_series_freq_15min = temp_series.resample(\"15Min\").interpolate(method=\"cubic\")\n",
"temp_series_freq_15min.head(n=10)"
]
},
{
"cell_type": "code",
"execution_count": 34,
"metadata": {},
"outputs": [],
"source": [
"temp_series.plot(label=\"Period: 1 hour\")\n",
"temp_series_freq_15min.plot(label=\"Period: 15 minutes\")\n",
"plt.legend()\n",
"plt.show()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Timezones\n",
"By default datetimes are *naive*: they are not aware of timezones, so 2016-10-30 02:30 might mean October 30th 2016 at 2:30am in Paris or in New York. We can make datetimes timezone *aware* by calling the `tz_localize()` method:"
]
},
{
"cell_type": "code",
"execution_count": 35,
"metadata": {},
"outputs": [],
"source": [
"temp_series_ny = temp_series.tz_localize(\"America/New_York\")\n",
"temp_series_ny"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Note that `-04:00` is now appended to all the datetimes. This means that these datetimes refer to [UTC](https://en.wikipedia.org/wiki/Coordinated_Universal_Time) minus 4 hours.\n",
"\n",
"We can convert these datetimes to Paris time like this:"
]
},
{
"cell_type": "code",
"execution_count": 36,
"metadata": {},
"outputs": [],
"source": [
"temp_series_paris = temp_series_ny.tz_convert(\"Europe/Paris\")\n",
"temp_series_paris"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"You may have noticed that the UTC offset changes from `+02:00` to `+01:00`: this is because France switches to winter time at 3am that particular night (time goes back to 2am). Notice that 2:30am occurs twice! Let's go back to a naive representation (if you log some data hourly using local time, without storing the timezone, you might get something like this):"
]
},
{
"cell_type": "code",
"execution_count": 37,
"metadata": {},
"outputs": [],
"source": [
"temp_series_paris_naive = temp_series_paris.tz_localize(None)\n",
"temp_series_paris_naive"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now `02:30` is really ambiguous. If we try to localize these naive datetimes to the Paris timezone, we get an error:"
]
},
{
"cell_type": "code",
"execution_count": 38,
"metadata": {},
"outputs": [],
"source": [
"try:\n",
" temp_series_paris_naive.tz_localize(\"Europe/Paris\")\n",
"except Exception as e:\n",
" print(type(e))\n",
" print(e)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Fortunately using the `ambiguous` argument we can tell pandas to infer the right DST (Daylight Saving Time) based on the order of the ambiguous timestamps:"
]
},
{
"cell_type": "code",
"execution_count": 39,
"metadata": {},
"outputs": [],
"source": [
"temp_series_paris_naive.tz_localize(\"Europe/Paris\", ambiguous=\"infer\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Periods\n",
"The `pd.period_range()` function returns a `PeriodIndex` instead of a `DatetimeIndex`. For example, let's get all quarters in 2016 and 2017:"
]
},
{
"cell_type": "code",
"execution_count": 40,
"metadata": {},
"outputs": [],
"source": [
"quarters = pd.period_range('2016Q1', periods=8, freq='Q')\n",
"quarters"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Adding a number `N` to a `PeriodIndex` shifts the periods by `N` times the `PeriodIndex`'s frequency:"
]
},
{
"cell_type": "code",
"execution_count": 41,
"metadata": {},
"outputs": [],
"source": [
"quarters + 3"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The `asfreq()` method lets us change the frequency of the `PeriodIndex`. All periods are lengthened or shortened accordingly. For example, let's convert all the quarterly periods to monthly periods (zooming in):"
]
},
{
"cell_type": "code",
"execution_count": 42,
"metadata": {},
"outputs": [],
"source": [
"quarters.asfreq(\"M\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"By default, `asfreq()` zooms in on the end of each period. We can tell it to zoom in on the start of each period instead:"
]
},
{
"cell_type": "code",
"execution_count": 43,
"metadata": {},
"outputs": [],
"source": [
"quarters.asfreq(\"M\", how=\"start\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"And we can zoom out:"
]
},
{
"cell_type": "code",
"execution_count": 44,
"metadata": {},
"outputs": [],
"source": [
"quarters.asfreq(\"A\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Of course we can create a `Series` with a `PeriodIndex`:"
]
},
{
"cell_type": "code",
"execution_count": 45,
"metadata": {},
"outputs": [],
"source": [
"quarterly_revenue = pd.Series([300, 320, 290, 390, 320, 360, 310, 410], index = quarters)\n",
"quarterly_revenue"
]
},
{
"cell_type": "code",
"execution_count": 46,
"metadata": {},
"outputs": [],
"source": [
"quarterly_revenue.plot(kind=\"line\")\n",
"plt.show()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We can convert periods to timestamps by calling `to_timestamp`. By default this will give us the first day of each period, but by setting `how` and `freq`, we can get the last hour of each period:"
]
},
{
"cell_type": "code",
"execution_count": 47,
"metadata": {},
"outputs": [],
"source": [
"last_hours = quarterly_revenue.to_timestamp(how=\"end\", freq=\"H\")\n",
"last_hours"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"And back to periods by calling `to_period`:"
]
},
{
"cell_type": "code",
"execution_count": 48,
"metadata": {},
"outputs": [],
"source": [
"last_hours.to_period()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Pandas also provides many other time-related functions that we recommend you check out in the [documentation](http://pandas.pydata.org/pandas-docs/stable/timeseries.html). To whet your appetite, here is one way to get the last business day of each month in 2016, at 9am:"
]
},
{
"cell_type": "code",
"execution_count": 49,
"metadata": {},
"outputs": [],
"source": [
"months_2016 = pd.period_range(\"2016\", periods=12, freq=\"M\")\n",
"one_day_after_last_days = months_2016.asfreq(\"D\") + 1\n",
"last_bdays = one_day_after_last_days.to_timestamp() - pd.tseries.offsets.BDay()\n",
"last_bdays.to_period(\"H\") + 9"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# `DataFrame` objects\n",
"A `DataFrame` object represents a spreadsheet, with cell values, column names and row index labels. You can define expressions to compute columns based on other columns, create pivot tables, group rows, draw graphs, etc. You can see `DataFrame`s as dictionaries of `Series`.\n",
"\n",
"## Creating a `DataFrame`\n",
"You can create a `DataFrame` by passing a dictionary of `Series` objects:"
]
},
{
"cell_type": "code",
"execution_count": 50,
"metadata": {},
"outputs": [],
"source": [
"people_dict = {\n",
" \"weight\": pd.Series([68, 83, 112], index=[\"alice\", \"bob\", \"charles\"]),\n",
" \"birthyear\": pd.Series([1984, 1985, 1992], index=[\"bob\", \"alice\", \"charles\"], name=\"year\"),\n",
" \"children\": pd.Series([0, 3], index=[\"charles\", \"bob\"]),\n",
" \"hobby\": pd.Series([\"Biking\", \"Dancing\"], index=[\"alice\", \"bob\"]),\n",
"}\n",
"people = pd.DataFrame(people_dict)\n",
"people"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"A few things to note:\n",
"* the `Series` were automatically aligned based on their index,\n",
"* missing values are represented as `NaN`,\n",
"* `Series` names are ignored (the name `\"year\"` was dropped),\n",
"* `DataFrame`s are displayed nicely in Jupyter notebooks, woohoo!"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"You can access columns pretty much as you would expect. They are returned as `Series` objects:"
]
},
{
"cell_type": "code",
"execution_count": 51,
"metadata": {},
"outputs": [],
"source": [
"people[\"birthyear\"]"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"You can also get multiple columns at once:"
]
},
{
"cell_type": "code",
"execution_count": 52,
"metadata": {},
"outputs": [],
"source": [
"people[[\"birthyear\", \"hobby\"]]"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"If you pass a list of columns and/or index row labels to the `DataFrame` constructor, it will guarantee that these columns and/or rows will exist, in that order, and no other column/row will exist. For example:"
]
},
{
"cell_type": "code",
"execution_count": 53,
"metadata": {},
"outputs": [],
"source": [
"d2 = pd.DataFrame(\n",
" people_dict,\n",
" columns=[\"birthyear\", \"weight\", \"height\"],\n",
" index=[\"bob\", \"alice\", \"eugene\"]\n",
" )\n",
"d2"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Another convenient way to create a `DataFrame` is to pass all the values to the constructor as an `ndarray`, or a list of lists, and specify the column names and row index labels separately:"
]
},
{
"cell_type": "code",
"execution_count": 54,
"metadata": {},
"outputs": [],
"source": [
"values = [\n",
" [1985, np.nan, \"Biking\", 68],\n",
" [1984, 3, \"Dancing\", 83],\n",
" [1992, 0, np.nan, 112]\n",
" ]\n",
"d3 = pd.DataFrame(\n",
" values,\n",
" columns=[\"birthyear\", \"children\", \"hobby\", \"weight\"],\n",
" index=[\"alice\", \"bob\", \"charles\"]\n",
" )\n",
"d3"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"To specify missing values, you can either use `np.nan` or NumPy's masked arrays:"
]
},
{
"cell_type": "code",
"execution_count": 55,
"metadata": {},
"outputs": [],
"source": [
"masked_array = np.ma.asarray(values, dtype=np.object)\n",
"masked_array[(0, 2), (1, 2)] = np.ma.masked\n",
"d3 = pd.DataFrame(\n",
" masked_array,\n",
2016-02-23 14:26:13 +01:00
" columns=[\"birthyear\", \"children\", \"hobby\", \"weight\"],\n",
" index=[\"alice\", \"bob\", \"charles\"]\n",
" )\n",
"d3"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Instead of an `ndarray`, you can also pass a `DataFrame` object:"
]
},
{
"cell_type": "code",
"execution_count": 56,
"metadata": {},
"outputs": [],
"source": [
"d4 = pd.DataFrame(\n",
" d3,\n",
" columns=[\"hobby\", \"children\"],\n",
" index=[\"alice\", \"bob\"]\n",
" )\n",
"d4"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"It is also possible to create a `DataFrame` with a dictionary (or list) of dictionaries (or list):"
]
},
{
"cell_type": "code",
"execution_count": 57,
"metadata": {},
"outputs": [],
"source": [
"people = pd.DataFrame({\n",
" \"birthyear\": {\"alice\":1985, \"bob\": 1984, \"charles\": 1992},\n",
" \"hobby\": {\"alice\":\"Biking\", \"bob\": \"Dancing\"},\n",
" \"weight\": {\"alice\":68, \"bob\": 83, \"charles\": 112},\n",
" \"children\": {\"bob\": 3, \"charles\": 0}\n",
"})\n",
"people"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Multi-indexing\n",
"If all the column labels are tuples of the same size, then they are understood as a multi-index. The same goes for row index labels. For example:"
]
},
{
"cell_type": "code",
"execution_count": 58,
"metadata": {},
"outputs": [],
"source": [
"d5 = pd.DataFrame(\n",
" {\n",
" (\"public\", \"birthyear\"):\n",
" {(\"Paris\",\"alice\"):1985, (\"Paris\",\"bob\"): 1984, (\"London\",\"charles\"): 1992},\n",
" (\"public\", \"hobby\"):\n",
" {(\"Paris\",\"alice\"):\"Biking\", (\"Paris\",\"bob\"): \"Dancing\"},\n",
" (\"private\", \"weight\"):\n",
" {(\"Paris\",\"alice\"):68, (\"Paris\",\"bob\"): 83, (\"London\",\"charles\"): 112},\n",
" (\"private\", \"children\"):\n",
" {(\"Paris\", \"alice\"):np.nan, (\"Paris\",\"bob\"): 3, (\"London\",\"charles\"): 0}\n",
" }\n",
")\n",
"d5"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"You can now get a `DataFrame` containing all the `\"public\"` columns very simply:"
]
},
{
"cell_type": "code",
"execution_count": 59,
"metadata": {},
"outputs": [],
"source": [
"d5[\"public\"]"
]
},
{
"cell_type": "code",
"execution_count": 60,
"metadata": {},
"outputs": [],
"source": [
"d5[\"public\", \"hobby\"] # Same result as d5[\"public\"][\"hobby\"]"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Dropping a level\n",
"Let's look at `d5` again:"
]
},
{
"cell_type": "code",
"execution_count": 61,
"metadata": {},
"outputs": [],
"source": [
"d5"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"There are two levels of columns, and two levels of indices. We can drop a column level by calling `droplevel()` (the same goes for indices):"
]
},
{
"cell_type": "code",
"execution_count": 62,
"metadata": {},
"outputs": [],
"source": [
"d5.columns = d5.columns.droplevel(level = 0)\n",
"d5"
]
},
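{
"cell_type": "markdown",
"metadata": {},
"source": [
"As mentioned above, the same method exists for row indices. Here is a quick, non-destructive peek (nothing is assigned, so `d5` is not modified) at what the row index would look like without its first level:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"d5.index.droplevel(level = 0) # returns a new Index; d5 itself is unchanged"
]
},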
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Transposing\n",
"You can swap columns and indices using the `T` attribute:"
]
},
{
"cell_type": "code",
"execution_count": 63,
"metadata": {},
"outputs": [],
"source": [
"d6 = d5.T\n",
"d6"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Stacking and unstacking levels\n",
"Calling the `stack()` method will push the lowest column level after the lowest index:"
]
},
{
"cell_type": "code",
"execution_count": 64,
"metadata": {},
"outputs": [],
"source": [
"d7 = d6.stack()\n",
"d7"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Note that many `NaN` values appeared. This makes sense because many new combinations did not exist before (eg. there was no `bob` in `London`).\n",
"\n",
"Calling `unstack()` will do the reverse, once again creating many `NaN` values."
]
},
{
"cell_type": "code",
"execution_count": 65,
"metadata": {},
"outputs": [],
"source": [
"d8 = d7.unstack()\n",
"d8"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"If we call `unstack` again, we end up with a `Series` object:"
]
},
{
"cell_type": "code",
"execution_count": 66,
"metadata": {},
"outputs": [],
"source": [
"d9 = d8.unstack()\n",
"d9"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The `stack()` and `unstack()` methods let you select the `level` to stack/unstack. You can even stack/unstack multiple levels at once:"
]
},
{
"cell_type": "code",
"execution_count": 67,
"metadata": {
"scrolled": true
},
"outputs": [],
"source": [
"d10 = d9.unstack(level = (0,1))\n",
"d10"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Most methods return modified copies\n",
"As you may have noticed, the `stack()` and `unstack()` methods do not modify the object they apply to. Instead, they work on a copy and return that copy. This is true of most methods in pandas."
]
},
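{
"cell_type": "markdown",
"metadata": {},
"source": [
"For example (a quick sanity check, assuming the cells above were run in order), calling `stack()` again returns a brand new object and leaves `d6` untouched:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"d7_bis = d6.stack() # returns a modified copy...\n",
"d7_bis is d6 # ...so this is False: d6 itself was not changed"
]
},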
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Accessing rows\n",
"Let's go back to the `people` `DataFrame`:"
]
},
{
"cell_type": "code",
"execution_count": 68,
"metadata": {},
"outputs": [],
"source": [
"people"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The `loc` attribute lets you access rows instead of columns. The result is a `Series` object in which the `DataFrame`'s column names are mapped to row index labels:"
]
},
{
"cell_type": "code",
"execution_count": 69,
"metadata": {},
"outputs": [],
"source": [
"people.loc[\"charles\"]"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"You can also access rows by integer location using the `iloc` attribute:"
]
},
{
"cell_type": "code",
"execution_count": 70,
"metadata": {},
"outputs": [],
"source": [
"people.iloc[2]"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"You can also get a slice of rows, and this returns a `DataFrame` object:"
]
},
{
"cell_type": "code",
"execution_count": 71,
"metadata": {},
"outputs": [],
"source": [
"people.iloc[1:3]"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Finally, you can pass a boolean array to get the matching rows:"
]
},
{
"cell_type": "code",
"execution_count": 72,
"metadata": {},
"outputs": [],
"source": [
"people[np.array([True, False, True])]"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"This is most useful when combined with boolean expressions:"
]
},
{
"cell_type": "code",
"execution_count": 73,
"metadata": {},
"outputs": [],
"source": [
"people[people[\"birthyear\"] < 1990]"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Adding and removing columns\n",
"You can generally treat `DataFrame` objects like dictionaries of `Series`, so the following operations work fine:"
]
},
{
"cell_type": "code",
"execution_count": 74,
"metadata": {},
"outputs": [],
"source": [
"people"
]
},
{
"cell_type": "code",
"execution_count": 75,
"metadata": {},
"outputs": [],
"source": [
"people[\"age\"] = 2018 - people[\"birthyear\"] # adds a new column \"age\"\n",
"people[\"over 30\"] = people[\"age\"] > 30 # adds another column \"over 30\"\n",
"birthyears = people.pop(\"birthyear\")\n",
"del people[\"children\"]\n",
"\n",
"people"
]
},
{
"cell_type": "code",
"execution_count": 76,
"metadata": {},
"outputs": [],
"source": [
"birthyears"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"When you add a new colum, it must have the same number of rows. Missing rows are filled with NaN, and extra rows are ignored:"
]
},
{
"cell_type": "code",
"execution_count": 77,
"metadata": {},
"outputs": [],
"source": [
"people[\"pets\"] = pd.Series({\"bob\": 0, \"charles\": 5, \"eugene\":1}) # alice is missing, eugene is ignored\n",
"people"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"When adding a new column, it is added at the end (on the right) by default. You can also insert a column anywhere else using the `insert()` method:"
]
},
{
"cell_type": "code",
"execution_count": 78,
"metadata": {},
"outputs": [],
"source": [
"people.insert(1, \"height\", [172, 181, 185])\n",
"people"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Assigning new columns\n",
"You can also create new columns by calling the `assign()` method. Note that this returns a new `DataFrame` object, the original is not modified:"
]
},
{
"cell_type": "code",
"execution_count": 79,
"metadata": {},
"outputs": [],
"source": [
"people.assign(\n",
" body_mass_index = people[\"weight\"] / (people[\"height\"] / 100) ** 2,\n",
" has_pets = people[\"pets\"] > 0\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Note that you cannot access columns created within the same assignment:"
]
},
{
"cell_type": "code",
"execution_count": 80,
"metadata": {},
"outputs": [],
"source": [
"try:\n",
" people.assign(\n",
" body_mass_index = people[\"weight\"] / (people[\"height\"] / 100) ** 2,\n",
" overweight = people[\"body_mass_index\"] > 25\n",
" )\n",
"except KeyError as e:\n",
" print(\"Key error:\", e)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The solution is to split this assignment in two consecutive assignments:"
]
},
{
"cell_type": "code",
"execution_count": 81,
"metadata": {},
"outputs": [],
"source": [
"d6 = people.assign(body_mass_index = people[\"weight\"] / (people[\"height\"] / 100) ** 2)\n",
"d6.assign(overweight = d6[\"body_mass_index\"] > 25)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Having to create a temporary variable `d6` is not very convenient. You may want to just chain the assigment calls, but it does not work because the `people` object is not actually modified by the first assignment:"
]
},
{
"cell_type": "code",
"execution_count": 82,
"metadata": {},
"outputs": [],
"source": [
"try:\n",
" (people\n",
" .assign(body_mass_index = people[\"weight\"] / (people[\"height\"] / 100) ** 2)\n",
" .assign(overweight = people[\"body_mass_index\"] > 25)\n",
" )\n",
"except KeyError as e:\n",
" print(\"Key error:\", e)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"But fear not, there is a simple solution. You can pass a function to the `assign()` method (typically a `lambda` function), and this function will be called with the `DataFrame` as a parameter:"
]
},
{
"cell_type": "code",
"execution_count": 83,
"metadata": {},
"outputs": [],
"source": [
"(people\n",
" .assign(body_mass_index = lambda df: df[\"weight\"] / (df[\"height\"] / 100) ** 2)\n",
" .assign(overweight = lambda df: df[\"body_mass_index\"] > 25)\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Problem solved!"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Evaluating an expression\n",
"A great feature supported by pandas is expression evaluation. This relies on the `numexpr` library which must be installed."
]
},
{
"cell_type": "code",
"execution_count": 84,
"metadata": {},
"outputs": [],
"source": [
"people.eval(\"weight / (height/100) ** 2 > 25\")"
]
},
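{
"cell_type": "markdown",
"metadata": {},
"source": [
"This is equivalent to the regular pandas expression below; `eval()` is just more concise (and can be faster on large `DataFrame`s thanks to `numexpr`):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"people[\"weight\"] / (people[\"height\"] / 100) ** 2 > 25 # same result, without eval()"
]
},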
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Assignment expressions are also supported. Let's set `inplace=True` to directly modify the `DataFrame` rather than getting a modified copy:"
]
},
{
"cell_type": "code",
"execution_count": 85,
"metadata": {},
"outputs": [],
"source": [
"people.eval(\"body_mass_index = weight / (height/100) ** 2\", inplace=True)\n",
"people"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"You can use a local or global variable in an expression by prefixing it with `'@'`:"
]
},
{
"cell_type": "code",
"execution_count": 86,
"metadata": {},
"outputs": [],
"source": [
"overweight_threshold = 30\n",
"people.eval(\"overweight = body_mass_index > @overweight_threshold\", inplace=True)\n",
"people"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Querying a `DataFrame`\n",
"The `query()` method lets you filter a `DataFrame` based on a query expression:"
]
},
{
"cell_type": "code",
"execution_count": 87,
"metadata": {},
"outputs": [],
"source": [
"people.query(\"age > 30 and pets == 0\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Sorting a `DataFrame`\n",
"You can sort a `DataFrame` by calling its `sort_index` method. By default it sorts the rows by their index label, in ascending order, but let's reverse the order:"
]
},
{
"cell_type": "code",
"execution_count": 88,
"metadata": {},
"outputs": [],
"source": [
"people.sort_index(ascending=False)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Note that `sort_index` returned a sorted *copy* of the `DataFrame`. To modify `people` directly, we can set the `inplace` argument to `True`. Also, we can sort the columns instead of the rows by setting `axis=1`:"
]
},
{
"cell_type": "code",
"execution_count": 89,
"metadata": {},
"outputs": [],
"source": [
"people.sort_index(axis=1, inplace=True)\n",
"people"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"To sort the `DataFrame` by the values instead of the labels, we can use `sort_values` and specify the column to sort by:"
]
},
{
"cell_type": "code",
"execution_count": 90,
"metadata": {},
"outputs": [],
"source": [
"people.sort_values(by=\"age\", inplace=True)\n",
"people"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Plotting a `DataFrame`\n",
"Just like for `Series`, pandas makes it easy to draw nice graphs based on a `DataFrame`.\n",
"\n",
"For example, it is trivial to create a line plot from a `DataFrame`'s data by calling its `plot` method:"
]
},
{
"cell_type": "code",
"execution_count": 91,
"metadata": {},
"outputs": [],
"source": [
"people.plot(kind = \"line\", x = \"body_mass_index\", y = [\"height\", \"weight\"])\n",
"plt.show()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"You can pass extra arguments supported by matplotlib's functions. For example, we can create scatterplot and pass it a list of sizes using the `s` argument of matplotlib's `scatter()` function:"
2016-02-23 14:26:13 +01:00
]
},
{
"cell_type": "code",
"execution_count": 92,
"metadata": {
"scrolled": true
},
"outputs": [],
"source": [
"people.plot(kind = \"scatter\", x = \"height\", y = \"weight\", s=[40, 120, 200])\n",
"plt.show()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Again, there are way too many options to list here: the best option is to scroll through the [Visualization](http://pandas.pydata.org/pandas-docs/stable/visualization.html) page in pandas' documentation, find the plot you are interested in and look at the example code."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Operations on `DataFrame`s\n",
"Although `DataFrame`s do not try to mimic NumPy arrays, there are a few similarities. Let's create a `DataFrame` to demonstrate this:"
]
},
{
"cell_type": "code",
"execution_count": 93,
"metadata": {},
"outputs": [],
"source": [
"grades_array = np.array([[8,8,9],[10,9,9],[4, 8, 2], [9, 10, 10]])\n",
"grades = pd.DataFrame(grades_array, columns=[\"sep\", \"oct\", \"nov\"], index=[\"alice\",\"bob\",\"charles\",\"darwin\"])\n",
"grades"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"You can apply NumPy mathematical functions on a `DataFrame`: the function is applied to all values:"
]
},
{
"cell_type": "code",
"execution_count": 94,
"metadata": {},
"outputs": [],
"source": [
"np.sqrt(grades)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Similarly, adding a single value to a `DataFrame` will add that value to all elements in the `DataFrame`. This is called *broadcasting*:"
]
},
{
"cell_type": "code",
"execution_count": 95,
"metadata": {},
"outputs": [],
"source": [
"grades + 1"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Of course, the same is true for all other binary operations, including arithmetic (`*`,`/`,`**`...) and conditional (`>`, `==`...) operations:"
]
},
{
"cell_type": "code",
"execution_count": 96,
"metadata": {},
"outputs": [],
"source": [
"grades >= 5"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Aggregation operations, such as computing the `max`, the `sum` or the `mean` of a `DataFrame`, apply to each column, and you get back a `Series` object:"
]
},
{
"cell_type": "code",
"execution_count": 97,
"metadata": {},
"outputs": [],
"source": [
"grades.mean()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The `all` method is also an aggregation operation: it checks whether all values are `True` or not. Let's see during which months all students got a grade greater than `5`:"
]
},
{
"cell_type": "code",
"execution_count": 98,
"metadata": {},
"outputs": [],
"source": [
"(grades > 5).all()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Most of these functions take an optional `axis` parameter which lets you specify along which axis of the `DataFrame` you want the operation executed. The default is `axis=0`, meaning that the operation is executed vertically (on each column). You can set `axis=1` to execute the operation horizontally (on each row). For example, let's find out which students had all grades greater than `5`:"
]
},
{
"cell_type": "code",
"execution_count": 99,
"metadata": {},
"outputs": [],
"source": [
"(grades > 5).all(axis = 1)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The `any` method returns `True` if any value is True. Let's see who got at least one grade 10:"
]
},
{
"cell_type": "code",
"execution_count": 100,
"metadata": {},
"outputs": [],
"source": [
"(grades == 10).any(axis = 1)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"If you add a `Series` object to a `DataFrame` (or execute any other binary operation), pandas attempts to broadcast the operation to all *rows* in the `DataFrame`. This only works if the `Series` has the same size as the `DataFrame`s rows. For example, let's substract the `mean` of the `DataFrame` (a `Series` object) from the `DataFrame`:"
]
},
{
"cell_type": "code",
"execution_count": 101,
"metadata": {},
"outputs": [],
"source": [
"grades - grades.mean() # equivalent to: grades - [7.75, 8.75, 7.50]"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We subtracted `7.75` from all September grades, `8.75` from October grades and `7.50` from November grades. It is equivalent to subtracting this `DataFrame`:"
]
},
{
"cell_type": "code",
"execution_count": 102,
"metadata": {},
"outputs": [],
"source": [
"pd.DataFrame([[7.75, 8.75, 7.50]]*4, index=grades.index, columns=grades.columns)"
]
},
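{
"cell_type": "markdown",
"metadata": {},
"source": [
"To broadcast along the rows instead (one value per row, repeated across the columns), the binary operations are also available as methods taking an `axis` argument. As a short sketch, here is how to subtract each student's own mean grade from all of their grades:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"grades.sub(grades.mean(axis=1), axis=0) # axis=0 aligns the Series with the row index"
]
},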
{
"cell_type": "markdown",
"metadata": {},
"source": [
"If you want to substract the global mean from every grade, here is one way to do it:"
]
},
{
"cell_type": "code",
"execution_count": 103,
"metadata": {
"scrolled": true
},
"outputs": [],
"source": [
"grades - grades.values.mean() # substracts the global mean (8.00) from all grades"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Automatic alignment\n",
"Similar to `Series`, when operating on multiple `DataFrame`s, pandas automatically aligns them by row index label, but also by column names. Let's create a `DataFrame` with bonus points for each person from October to December:"
]
},
{
"cell_type": "code",
"execution_count": 104,
"metadata": {},
"outputs": [],
"source": [
"bonus_array = np.array([[0,np.nan,2],[np.nan,1,0],[0, 1, 0], [3, 3, 0]])\n",
"bonus_points = pd.DataFrame(bonus_array, columns=[\"oct\", \"nov\", \"dec\"], index=[\"bob\",\"colin\", \"darwin\", \"charles\"])\n",
"bonus_points"
]
},
{
"cell_type": "code",
"execution_count": 105,
"metadata": {
"scrolled": true
},
"outputs": [],
"source": [
"grades + bonus_points"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Looks like the addition worked in some cases but way too many elements are now empty. That's because when aligning the `DataFrame`s, some columns and rows were only present on one side, and thus they were considered missing on the other side (`NaN`). Then adding `NaN` to a number results in `NaN`, hence the result.\n",
"\n",
"## Handling missing data\n",
"Dealing with missing data is a frequent task when working with real life data. Pandas offers a few tools to handle missing data.\n",
" \n",
"Let's try to fix the problem above. For example, we can decide that missing data should result in a zero, instead of `NaN`. We can replace all `NaN` values with any value using the `fillna()` method:"
]
},
{
"cell_type": "code",
"execution_count": 106,
"metadata": {
"scrolled": true
},
"outputs": [],
"source": [
"(grades + bonus_points).fillna(0)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"It's a bit unfair that we're setting grades to zero in September, though. Perhaps we should decide that missing grades are missing grades, but missing bonus points should be replaced by zeros:"
]
},
{
"cell_type": "code",
"execution_count": 107,
"metadata": {},
"outputs": [],
"source": [
"fixed_bonus_points = bonus_points.fillna(0)\n",
"fixed_bonus_points.insert(0, \"sep\", 0)\n",
"fixed_bonus_points.loc[\"alice\"] = 0\n",
"grades + fixed_bonus_points"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"That's much better: although we made up some data, we have not been too unfair.\n",
"\n",
"Another way to handle missing data is to interpolate. Let's look at the `bonus_points` `DataFrame` again:"
]
},
{
"cell_type": "code",
"execution_count": 108,
"metadata": {},
"outputs": [],
"source": [
"bonus_points"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now let's call the `interpolate` method. By default, it interpolates vertically (`axis=0`), so let's tell it to interpolate horizontally (`axis=1`)."
]
},
{
"cell_type": "code",
"execution_count": 109,
"metadata": {},
"outputs": [],
"source": [
"bonus_points.interpolate(axis=1)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Bob had 0 bonus points in October, and 2 in December. When we interpolate for November, we get the mean: 1 bonus point. Colin had 1 bonus point in November, but we do not know how many bonus points he had in September, so we cannot interpolate, this is why there is still a missing value in October after interpolation. To fix this, we can set the September bonus points to 0 before interpolation."
]
},
{
"cell_type": "code",
"execution_count": 110,
"metadata": {},
"outputs": [],
"source": [
"better_bonus_points = bonus_points.copy()\n",
"better_bonus_points.insert(0, \"sep\", 0)\n",
"better_bonus_points.loc[\"alice\"] = 0\n",
"better_bonus_points = better_bonus_points.interpolate(axis=1)\n",
"better_bonus_points"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Great, now we have reasonable bonus points everywhere. Let's find out the final grades:"
]
},
{
"cell_type": "code",
"execution_count": 111,
"metadata": {},
"outputs": [],
"source": [
"grades + better_bonus_points"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"It is slightly annoying that the September column ends up on the right. This is because the `DataFrame`s we are adding do not have the exact same columns (the `grades` `DataFrame` is missing the `\"dec\"` column), so to make things predictable, pandas orders the final columns alphabetically. To fix this, we can simply add the missing column before adding:"
]
},
{
"cell_type": "code",
"execution_count": 112,
"metadata": {
"scrolled": true
},
"outputs": [],
"source": [
"grades[\"dec\"] = np.nan\n",
"final_grades = grades + better_bonus_points\n",
"final_grades"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"There's not much we can do about December and Colin: it's bad enough that we are making up bonus points, but we can't reasonably make up grades (well I guess some teachers probably do). So let's call the `dropna()` method to get rid of rows that are full of `NaN`s:"
2016-02-23 14:26:13 +01:00
]
},
{
"cell_type": "code",
"execution_count": 113,
"metadata": {},
"outputs": [],
"source": [
"final_grades_clean = final_grades.dropna(how=\"all\")\n",
"final_grades_clean"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now let's remove columns that are full of `NaN`s by setting the `axis` argument to `1`:"
]
},
{
"cell_type": "code",
"execution_count": 114,
"metadata": {},
"outputs": [],
"source": [
"final_grades_clean = final_grades_clean.dropna(axis=1, how=\"all\")\n",
"final_grades_clean"
]
},
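{
"cell_type": "markdown",
"metadata": {},
"source": [
"By the way, `dropna()` also takes a `thresh` argument: it keeps only the rows (or columns) that have at least that many non-`NaN` values. Here is a quick sketch (the threshold of 2 is arbitrary, purely for illustration):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"final_grades.dropna(thresh=2)  # keep only rows with at least 2 non-NaN values"
]
},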
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Aggregating with `groupby`\n",
"Similar to SQL, pandas lets you split your data into groups and run calculations over each group.\n",
"\n",
"First, let's add some extra data about each person so we can group them, and let's go back to the `final_grades` `DataFrame` so we can see how `NaN` values are handled:"
]
},
{
"cell_type": "code",
"execution_count": 115,
"metadata": {
"scrolled": true
},
"outputs": [],
"source": [
"final_grades[\"hobby\"] = [\"Biking\", \"Dancing\", np.nan, \"Dancing\", \"Biking\"]\n",
"final_grades"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now let's group data in this `DataFrame` by hobby:"
2016-02-20 21:37:07 +01:00
]
},
{
"cell_type": "code",
"execution_count": 116,
"metadata": {},
"outputs": [],
"source": [
"grouped_grades = final_grades.groupby(\"hobby\")\n",
"grouped_grades"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We are ready to compute the average grade per hobby:"
]
},
{
"cell_type": "code",
"execution_count": 117,
"metadata": {},
"outputs": [],
"source": [
"grouped_grades.mean()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"That was easy! Note that the `NaN` values have simply been skipped when computing the means."
]
},
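{
"cell_type": "markdown",
"metadata": {},
"source": [
"If you want to verify this, here is a quick sketch: the standard `count()` aggregation returns the number of non-null values in each group, for each column:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"grouped_grades.count()  # non-NaN values per group and per column"
]
},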
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Pivot tables\n",
"Pandas supports spreadsheet-like [pivot tables](https://en.wikipedia.org/wiki/Pivot_table) that allow quick data summarization. To illustrate this, let's create a simple `DataFrame`:"
]
},
{
"cell_type": "code",
"execution_count": 118,
"metadata": {},
"outputs": [],
"source": [
"bonus_points"
]
},
{
"cell_type": "code",
"execution_count": 119,
"metadata": {},
"outputs": [],
"source": [
"more_grades = final_grades_clean.stack().reset_index()\n",
"more_grades.columns = [\"name\", \"month\", \"grade\"]\n",
"more_grades[\"bonus\"] = [np.nan, np.nan, np.nan, 0, np.nan, 2, 3, 3, 0, 0, 1, 0]\n",
"more_grades"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now we can call the `pd.pivot_table()` function for this `DataFrame`, asking to group by the `name` column. By default, `pivot_table()` computes the mean of each numeric column:"
2016-02-20 21:37:07 +01:00
]
},
{
"cell_type": "code",
"execution_count": 120,
"metadata": {},
"outputs": [],
"source": [
"pd.pivot_table(more_grades, index=\"name\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We can change the aggregation function by setting the `aggfunc` argument, and we can also specify the list of columns whose values will be aggregated:"
]
},
{
"cell_type": "code",
"execution_count": 121,
"metadata": {},
"outputs": [],
"source": [
"pd.pivot_table(more_grades, index=\"name\", values=[\"grade\",\"bonus\"], aggfunc=np.max)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We can also specify the `columns` to aggregate over horizontally, and request the grand totals for each row and column by setting `margins=True`:"
]
},
{
"cell_type": "code",
"execution_count": 122,
"metadata": {},
"outputs": [],
"source": [
"pd.pivot_table(more_grades, index=\"name\", values=\"grade\", columns=\"month\", margins=True)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Finally, we can specify multiple index or column names, and pandas will create multi-level indices:"
]
},
{
"cell_type": "code",
"execution_count": 123,
"metadata": {},
"outputs": [],
"source": [
"pd.pivot_table(more_grades, index=(\"name\", \"month\"), margins=True)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Overview functions\n",
"When dealing with a large `DataFrame`, it is useful to get a quick overview of its contents. Pandas offers a few functions for this. First, let's create a large `DataFrame` with a mix of numeric values, missing values and text values. Notice how Jupyter displays only the corners of the `DataFrame`:"
]
},
{
"cell_type": "code",
"execution_count": 124,
"metadata": {},
"outputs": [],
"source": [
"much_data = np.fromfunction(lambda x,y: (x+y*y)%17*11, (10000, 26))\n",
"large_df = pd.DataFrame(much_data, columns=list(\"ABCDEFGHIJKLMNOPQRSTUVWXYZ\"))\n",
"large_df[large_df % 16 == 0] = np.nan\n",
"large_df.insert(3,\"some_text\", \"Blabla\")\n",
"large_df"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The `head()` method returns the top 5 rows:"
]
},
{
"cell_type": "code",
"execution_count": 125,
"metadata": {},
"outputs": [],
"source": [
"large_df.head()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Of course there's also a `tail()` function to view the bottom 5 rows. You can pass the number of rows you want:"
2016-02-20 21:37:07 +01:00
]
},
{
"cell_type": "code",
"execution_count": 126,
"metadata": {},
"outputs": [],
"source": [
"large_df.tail(n=2)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The `info()` method prints out a summary of each columns contents:"
2016-02-20 21:37:07 +01:00
]
},
{
"cell_type": "code",
"execution_count": 127,
"metadata": {},
"outputs": [],
"source": [
"large_df.info()"
]
},
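{
"cell_type": "markdown",
"metadata": {},
"source": [
"By default, `info()` only estimates the memory used by object columns (such as `some_text`). Assuming a reasonably recent pandas version, you can pass `memory_usage=\"deep\"` to get an exact (but slower) accounting:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"large_df.info(memory_usage=\"deep\")  # exact memory accounting, including strings"
]
},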
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Finally, the `describe()` method gives a nice overview of the main aggregated values over each column:\n",
"* `count`: number of non-null (not NaN) values\n",
"* `mean`: mean of non-null values\n",
"* `std`: [standard deviation](https://en.wikipedia.org/wiki/Standard_deviation) of non-null values\n",
"* `min`: minimum of non-null values\n",
"* `25%`, `50%`, `75%`: 25th, 50th and 75th [percentile](https://en.wikipedia.org/wiki/Percentile) of non-null values\n",
"* `max`: maximum of non-null values"
]
},
{
"cell_type": "code",
"execution_count": 128,
"metadata": {},
"outputs": [],
"source": [
"large_df.describe()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Saving & loading\n",
"Pandas can save `DataFrame`s to various backends, including file formats such as CSV, Excel, JSON, HTML and HDF5, or to a SQL database. Let's create a `DataFrame` to demonstrate this:"
]
},
{
"cell_type": "code",
"execution_count": 129,
"metadata": {},
"outputs": [],
"source": [
"my_df = pd.DataFrame(\n",
" [[\"Biking\", 68.5, 1985, np.nan], [\"Dancing\", 83.1, 1984, 3]], \n",
" columns=[\"hobby\",\"weight\",\"birthyear\",\"children\"],\n",
" index=[\"alice\", \"bob\"]\n",
")\n",
"my_df"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Saving\n",
"Let's save it to CSV, HTML and JSON:"
]
},
{
"cell_type": "code",
"execution_count": 130,
"metadata": {},
"outputs": [],
"source": [
"my_df.to_csv(\"my_df.csv\")\n",
"my_df.to_html(\"my_df.html\")\n",
"my_df.to_json(\"my_df.json\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Done! Let's take a peek at what was saved:"
]
},
{
"cell_type": "code",
"execution_count": 131,
"metadata": {},
"outputs": [],
"source": [
"for filename in (\"my_df.csv\", \"my_df.html\", \"my_df.json\"):\n",
" print(\"#\", filename)\n",
" with open(filename, \"rt\") as f:\n",
" print(f.read())\n",
" print()\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Note that the index is saved as the first column (with no name) in a CSV file, as `<th>` tags in HTML and as keys in JSON.\n",
"\n",
"Saving to other formats works very similarly, but some formats require extra libraries to be installed. For example, saving to Excel requires the openpyxl library:"
]
},
{
"cell_type": "code",
"execution_count": 132,
"metadata": {},
"outputs": [],
"source": [
"try:\n",
" my_df.to_excel(\"my_df.xlsx\", sheet_name='People')\n",
"except ImportError as e:\n",
" print(e)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Loading\n",
"Now let's load our CSV file back into a `DataFrame`:"
]
},
{
"cell_type": "code",
"execution_count": 133,
"metadata": {},
"outputs": [],
"source": [
"my_df_loaded = pd.read_csv(\"my_df.csv\", index_col=0)\n",
"my_df_loaded"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"As you might guess, there are similar `read_json`, `read_html` and `read_excel` functions as well. We can also read data straight from the Internet. For example, let's load the top 1,000 U.S. cities from GitHub:"
]
},
{
"cell_type": "code",
"execution_count": 134,
"metadata": {},
"outputs": [],
"source": [
"us_cities = None\n",
"try:\n",
" csv_url = \"https://raw.githubusercontent.com/plotly/datasets/master/us-cities-top-1k.csv\"\n",
" us_cities = pd.read_csv(csv_url, index_col=0)\n",
" us_cities = us_cities.head()\n",
"except IOError as e:\n",
" print(e)\n",
"us_cities"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"There are more options available, in particular regarding datetime format. Check out the [documentation](http://pandas.pydata.org/pandas-docs/stable/io.html) for more details."
]
},
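{
"cell_type": "markdown",
"metadata": {},
"source": [
"For example, the `parse_dates` argument tells `read_csv()` which columns to parse as datetimes. Here is a small sketch using an in-memory CSV (the column names and values are made up for this example):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from io import StringIO\n",
"\n",
"csv_data = \"name,birthdate\\nalice,1985-07-21\\nbob,1984-01-05\\n\"\n",
"dates_df = pd.read_csv(StringIO(csv_data), parse_dates=[\"birthdate\"])\n",
"dates_df.dtypes  # birthdate is parsed as datetime64, not object"
]
},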
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Combining `DataFrame`s\n",
"\n",
"## SQL-like joins\n",
"One powerful feature of pandas is its ability to perform SQL-like joins on `DataFrame`s. Various types of joins are supported: inner joins, left/right outer joins and full joins. To illustrate this, let's start by creating a couple of simple `DataFrame`s:"
]
},
{
"cell_type": "code",
"execution_count": 135,
"metadata": {},
"outputs": [],
"source": [
"city_loc = pd.DataFrame(\n",
" [\n",
" [\"CA\", \"San Francisco\", 37.781334, -122.416728],\n",
" [\"NY\", \"New York\", 40.705649, -74.008344],\n",
" [\"FL\", \"Miami\", 25.791100, -80.320733],\n",
" [\"OH\", \"Cleveland\", 41.473508, -81.739791],\n",
" [\"UT\", \"Salt Lake City\", 40.755851, -111.896657]\n",
" ], columns=[\"state\", \"city\", \"lat\", \"lng\"])\n",
"city_loc"
]
},
{
"cell_type": "code",
"execution_count": 136,
"metadata": {},
"outputs": [],
"source": [
"city_pop = pd.DataFrame(\n",
" [\n",
" [808976, \"San Francisco\", \"California\"],\n",
" [8363710, \"New York\", \"New-York\"],\n",
" [413201, \"Miami\", \"Florida\"],\n",
" [2242193, \"Houston\", \"Texas\"]\n",
" ], index=[3,4,5,6], columns=[\"population\", \"city\", \"state\"])\n",
"city_pop"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now let's join these `DataFrame`s using the `merge()` function:"
]
},
{
"cell_type": "code",
"execution_count": 137,
"metadata": {},
"outputs": [],
"source": [
"pd.merge(left=city_loc, right=city_pop, on=\"city\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Note that both `DataFrame`s have a column named `state`, so in the result they get renamed to `state_x` and `state_y`.\n",
"\n",
"Also, note that Cleveland, Salt Lake City and Houston were dropped because they don't exist in *both* `DataFrame`s. This is the equivalent of a SQL `INNER JOIN`. If you want a `FULL OUTER JOIN`, where no city gets dropped and `NaN` values are added, you must specify `how=\"outer\"`:"
]
},
{
"cell_type": "code",
"execution_count": 138,
"metadata": {},
"outputs": [],
"source": [
"all_cities = pd.merge(left=city_loc, right=city_pop, on=\"city\", how=\"outer\")\n",
"all_cities"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Of course `LEFT OUTER JOIN` is also available by setting `how=\"left\"`: only the cities present in the left `DataFrame` end up in the result. Similarly, with `how=\"right\"` only cities in the right `DataFrame` appear in the result. For example:"
]
},
{
"cell_type": "code",
"execution_count": 139,
"metadata": {},
"outputs": [],
"source": [
"pd.merge(left=city_loc, right=city_pop, on=\"city\", how=\"right\")"
]
},
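{
"cell_type": "markdown",
"metadata": {},
"source": [
"Incidentally, the `_x`/`_y` suffixes appended to clashing column names can be customized with the `suffixes` argument of `merge()`. A quick sketch (the suffix names below are just an example):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"pd.merge(left=city_loc, right=city_pop, on=\"city\", suffixes=(\"_loc\", \"_pop\"))"
]
},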
{
"cell_type": "markdown",
"metadata": {},
"source": [
"If the key to join on is actually in one (or both) `DataFrame`'s index, you must use `left_index=True` and/or `right_index=True`. If the key column names differ, you must use `left_on` and `right_on`. For example:"
]
},
{
"cell_type": "code",
"execution_count": 140,
"metadata": {},
"outputs": [],
"source": [
"city_pop2 = city_pop.copy()\n",
"city_pop2.columns = [\"population\", \"name\", \"state\"]\n",
"pd.merge(left=city_loc, right=city_pop2, left_on=\"city\", right_on=\"name\")"
]
},
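{
"cell_type": "markdown",
"metadata": {},
"source": [
"And here is a sketch of the index-based variant mentioned above: if we move the key into the right `DataFrame`'s index, we can join on it with `right_index=True` (the `city_pop_by_name` variable only exists in this sketch):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"city_pop_by_name = city_pop.set_index(\"city\")  # the join key is now the index\n",
"pd.merge(left=city_loc, right=city_pop_by_name, left_on=\"city\", right_index=True)"
]
},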
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Concatenation\n",
"Rather than joining `DataFrame`s, we may just want to concatenate them. That's what `concat()` is for:"
]
},
{
"cell_type": "code",
"execution_count": 141,
"metadata": {},
"outputs": [],
"source": [
"result_concat = pd.concat([city_loc, city_pop])\n",
"result_concat"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Note that this operation aligned the data horizontally (by columns) but not vertically (by rows). In this example, we end up with multiple rows having the same index (e.g., 3). Pandas handles this rather gracefully:"
]
},
{
"cell_type": "code",
"execution_count": 142,
"metadata": {},
"outputs": [],
"source": [
"result_concat.loc[3]"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Or you can tell pandas to just ignore the index:"
]
},
{
"cell_type": "code",
"execution_count": 143,
"metadata": {},
"outputs": [],
"source": [
"pd.concat([city_loc, city_pop], ignore_index=True)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Notice that when a column does not exist in a `DataFrame`, it acts as if it were filled with `NaN` values. If we set `join=\"inner\"`, then only columns that exist in *both* `DataFrame`s are returned:"
]
},
{
"cell_type": "code",
"execution_count": 144,
"metadata": {},
"outputs": [],
"source": [
"pd.concat([city_loc, city_pop], join=\"inner\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"You can concatenate `DataFrame`s horizontally instead of vertically by setting `axis=1`:"
]
},
{
"cell_type": "code",
"execution_count": 145,
"metadata": {
"scrolled": true
},
"outputs": [],
"source": [
"pd.concat([city_loc, city_pop], axis=1)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"In this case it really does not make much sense, because the indices do not align well (e.g., Cleveland and San Francisco end up on the same row because they shared the index label `3`). So let's reindex the `DataFrame`s by city name before concatenating:"
]
},
{
"cell_type": "code",
"execution_count": 146,
"metadata": {
"scrolled": true
},
"outputs": [],
"source": [
"pd.concat([city_loc.set_index(\"city\"), city_pop.set_index(\"city\")], axis=1)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"This looks a lot like a `FULL OUTER JOIN`, except that the `state` columns were not renamed to `state_x` and `state_y`, and the `city` column is now the index."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The `append()` method is a useful shorthand for concatenating `DataFrame`s vertically:"
]
},
{
"cell_type": "code",
"execution_count": 147,
"metadata": {},
"outputs": [],
"source": [
"city_loc.append(city_pop)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"As always in pandas, the `append()` method does *not* actually modify `city_loc`: it works on a copy and returns the modified copy."
]
},
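{
"cell_type": "markdown",
"metadata": {},
"source": [
"A word of warning: in recent pandas versions, `append()` was deprecated and eventually removed, so on a newer install you may need the equivalent `concat()` call instead:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"pd.concat([city_loc, city_pop])  # same result as city_loc.append(city_pop)"
]
},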
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Categories\n",
"It is quite common to have values that represent categories, for example `1` for female and `2` for male, or `\"A\"` for Good, `\"B\"` for Average, `\"C\"` for Bad. These categorical values can be hard to read and cumbersome to handle, but fortunately pandas makes them easy to work with. To illustrate this, let's take the `city_pop` `DataFrame` we created earlier, and add a column that represents a category:"
]
},
{
"cell_type": "code",
"execution_count": 148,
"metadata": {},
"outputs": [],
"source": [
"city_eco = city_pop.copy()\n",
"city_eco[\"eco_code\"] = [17, 17, 34, 20]\n",
"city_eco"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Right now the `eco_code` column is full of apparently meaningless codes. Let's fix that. First, we will create a new categorical column based on the `eco_code`s:"
]
},
{
"cell_type": "code",
"execution_count": 149,
"metadata": {},
"outputs": [],
"source": [
"city_eco[\"economy\"] = city_eco[\"eco_code\"].astype('category')\n",
"city_eco[\"economy\"].cat.categories"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now we can give each category a meaningful name:"
]
},
{
"cell_type": "code",
"execution_count": 150,
"metadata": {},
"outputs": [],
"source": [
"city_eco[\"economy\"].cat.categories = [\"Finance\", \"Energy\", \"Tourism\"]\n",
"city_eco"
]
},
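{
"cell_type": "markdown",
"metadata": {},
"source": [
"One caveat: assigning to `cat.categories` directly is deprecated on recent pandas versions; the supported equivalent is the `rename_categories()` method, as in this sketch:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"city_eco[\"economy\"] = city_eco[\"economy\"].cat.rename_categories([\"Finance\", \"Energy\", \"Tourism\"])\n",
"city_eco"
]
},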
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Note that categorical values are sorted according to their categorical order, *not* their alphabetical order:"
]
},
{
"cell_type": "code",
"execution_count": 151,
"metadata": {},
"outputs": [],
"source": [
"city_eco.sort_values(by=\"economy\", ascending=False)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# What next?\n",
"As you probably noticed by now, pandas is quite a large library with *many* features. Although we went through the most important features, there is still a lot to discover. Probably the best way to learn more is to get your hands dirty with some real-life data. It is also a good idea to go through pandas' excellent [documentation](http://pandas.pydata.org/pandas-docs/stable/index.html), in particular the [Cookbook](http://pandas.pydata.org/pandas-docs/stable/cookbook.html)."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.7.9"
},
"toc": {
"toc_cell": false,
"toc_number_sections": true,
"toc_section_display": "none",
"toc_threshold": 6,
"toc_window_display": true
}
},
"nbformat": 4,
"nbformat_minor": 4
}