{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# 胶囊网络(CapsNets) "
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"基于论文:[Dynamic Routing Between Capsules](https://arxiv.org/abs/1710.09829)作者Sara Sabour, Nicholas Frosst and Geoffrey E. Hinton (NIPS 2017)。"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"部分启发来自于Huadong Liao的实现[CapsNet-TensorFlow](https://github.com/naturomics/CapsNet-Tensorflow)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# 简介"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"观看 [视频](https://youtu.be/pPN8d0E3900)来理解胶囊网络背后的关键想法大家可能看不到因为youtube被墙了"
]
},
{
"cell_type": "code",
"execution_count": 157,
"metadata": {},
"outputs": [],
"source": [
"from IPython.display import HTML\n",
"HTML(\"\"\"<iframe width=\"560\" height=\"315\" src=\"https://www.youtube.com/embed/pPN8d0E3900\" frameborder=\"0\" allowfullscreen></iframe>\"\"\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"你或许也需要观看[视频](https://youtu.be/2Kawrd5szHE)其展示了这个notebook的难点大家可能看不到因为youtube被墙了"
]
},
{
"cell_type": "code",
"execution_count": 158,
"metadata": {},
"outputs": [],
"source": [
"HTML(\"\"\"<iframe width=\"560\" height=\"315\" src=\"https://www.youtube.com/embed/2Kawrd5szHE\" frameborder=\"0\" allowfullscreen></iframe>\"\"\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Imports"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"同时支持 Python 2 和 Python 3"
]
},
{
"cell_type": "code",
"execution_count": 78,
"metadata": {},
"outputs": [],
"source": [
"from __future__ import division, print_function, unicode_literals"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"为了绘制好看的图:"
]
},
{
"cell_type": "code",
"execution_count": 79,
"metadata": {},
"outputs": [],
"source": [
"%matplotlib inline\n",
"import matplotlib\n",
"import matplotlib.pyplot as plt"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"我们会用到 NumPy 和 TensorFlow"
]
},
{
"cell_type": "code",
"execution_count": 80,
"metadata": {},
"outputs": [],
"source": [
"import numpy as np\n",
"import tensorflow as tf"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# 可重复性"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"为了能够在不重新启动Jupyter Notebook Kernel的情况下重新运行本notebook我们需要重置默认的计算图。"
]
},
{
"cell_type": "code",
"execution_count": 81,
"metadata": {},
"outputs": [],
"source": [
"tf.reset_default_graph()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"设置随机种子以便于本notebook总是可以输出相同的输出"
]
},
{
"cell_type": "code",
"execution_count": 82,
"metadata": {},
"outputs": [],
"source": [
"np.random.seed(42)\n",
"tf.set_random_seed(42)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# 装载MNIST"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"是的我知道又是MNIST。但我们希望这个极具威力的想法可以工作在更大的数据集上时间会说明一切。译注因为是Hinton吗因为他老是对;-)"
]
},
{
"cell_type": "code",
"execution_count": 83,
"metadata": {},
"outputs": [],
"source": [
"from tensorflow.examples.tutorials.mnist import input_data\n",
"\n",
"mnist = input_data.read_data_sets(\"/tmp/data/\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"让我们看一下这些手写数字图像是什么样的:"
]
},
{
"cell_type": "code",
"execution_count": 84,
"metadata": {},
"outputs": [],
"source": [
"n_samples = 5\n",
"\n",
"plt.figure(figsize=(n_samples * 2, 3))\n",
"for index in range(n_samples):\n",
" plt.subplot(1, n_samples, index + 1)\n",
" sample_image = mnist.train.images[index].reshape(28, 28)\n",
" plt.imshow(sample_image, cmap=\"binary\")\n",
" plt.axis(\"off\")\n",
"\n",
"plt.show()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"以及相应的标签:"
]
},
{
"cell_type": "code",
"execution_count": 85,
"metadata": {},
"outputs": [],
"source": [
"mnist.train.labels[:n_samples]"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"现在让我们建立一个胶囊网络来区分这些图像。这里有一个其总体的架构享受一下ASCII字符的艺术吧! ;-)\n",
"注意:为了可读性,我摒弃了两种箭头:标签 → 掩盖,以及 输入的图像 → 重新构造损失。"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"```\n",
" 损 失\n",
" ↑\n",
" ┌─────────┴─────────┐\n",
" 标 签 → 边 际 损 失 重 新 构 造 损 失\n",
" ↑ ↑\n",
" 模 长 解 码 器\n",
" ↑ ↑ \n",
" 数 字 胶 囊 们 ────遮 盖─────┘\n",
" ↖↑↗ ↖↑↗ ↖↑↗\n",
" 主 胶 囊 们\n",
" ↑ \n",
" 输 入 的 图 像\n",
"```"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"我们打算从底层开始构建该计算图,然后逐步上移,左侧优先。让我们开始!"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# 输入图像"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"让我们通过为输入图像创建一个占位符作为起步该输入图像具有28×28个像素1个颜色通道=灰度。"
]
},
{
"cell_type": "code",
"execution_count": 86,
"metadata": {},
"outputs": [],
"source": [
"X = tf.placeholder(shape=[None, 28, 28, 1], dtype=tf.float32, name=\"X\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# 主胶囊"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"第一层由32个特征映射组成每个特征映射为6$\\times$6个胶囊其中每个胶囊输出8维的激活向量"
]
},
{
"cell_type": "code",
"execution_count": 87,
"metadata": {},
"outputs": [],
"source": [
"caps1_n_maps = 32\n",
"caps1_n_caps = caps1_n_maps * 6 * 6 # 1152 主胶囊们\n",
"caps1_n_dims = 8"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"为了计算它们的输出,我们首先应用两个常规的卷积层:"
]
},
{
"cell_type": "code",
"execution_count": 88,
"metadata": {},
"outputs": [],
"source": [
"conv1_params = {\n",
" \"filters\": 256,\n",
" \"kernel_size\": 9,\n",
" \"strides\": 1,\n",
" \"padding\": \"valid\",\n",
" \"activation\": tf.nn.relu,\n",
"}\n",
"\n",
"conv2_params = {\n",
" \"filters\": caps1_n_maps * caps1_n_dims, # 256 个卷积滤波器\n",
" \"kernel_size\": 9,\n",
" \"strides\": 2,\n",
" \"padding\": \"valid\",\n",
" \"activation\": tf.nn.relu\n",
"}"
]
},
{
"cell_type": "code",
"execution_count": 89,
"metadata": {},
"outputs": [],
"source": [
"conv1 = tf.layers.conv2d(X, name=\"conv1\", **conv1_params)\n",
"conv2 = tf.layers.conv2d(conv1, name=\"conv2\", **conv2_params)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"注意由于我们使用一个尺寸为9的核并且没有使用填充出于某种原因这就是`\"valid\"`的含义),该图像每经历一个卷积层就会缩减 $9-1=8$ 个像素(从 $28\\times 28$ 到 $20 \\times 20$,再从 $20\\times 20$ 到 $12\\times 12$并且由于在第二个卷积层中使用了大小为2的步幅那么该图像的大小就被除以2。这就是为什么我们最后会得到 $6\\times 6$ 的特征映射feature map。"
]
},
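{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a quick check of that arithmetic, here is a tiny helper (purely illustrative, not part of the original network; `conv_out_size` is a made-up name) that computes the spatial size of a \"valid\" convolution:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"def conv_out_size(in_size, kernel_size, stride):\n",
"    # \"valid\" padding means no padding at all\n",
"    return (in_size - kernel_size) // stride + 1\n",
"\n",
"size_after_conv1 = conv_out_size(28, 9, 1)\n",
"size_after_conv2 = conv_out_size(size_after_conv1, 9, 2)\n",
"print(size_after_conv1, size_after_conv2)  # 20 6"
]
},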
{
"cell_type": "markdown",
"metadata": {},
"source": [
"接着我们重塑该输出以获得一组8D向量用来表示主胶囊的输出。`conv2`的输出是一个数组包含对于每个实例都有32×8=256个特征映射feature map其中每个特征映射为6×6。所以该输出的形状为 (_batch size_, 6, 6, 256)。我们想要把256分到32个8维向量中可以通过使用重塑 (_batch size_, 6, 6, 32, 8)来达到目的。然而由于首个胶囊层会被完全连接到下一个胶囊层那么我们就可以简单地把它扁平成6×6的网格。这意味着我们只需要把它重塑成 (_batch size_, 6×6×32, 8) 即可。"
]
},
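{
"cell_type": "markdown",
"metadata": {},
"source": [
"If you want to convince yourself that this flattening is safe, here is a small NumPy sketch (illustrative only, not part of the original notebook): reshaping (_batch size_, 6, 6, 256) directly to (_batch size_, 1152, 8) gives exactly the same result as first splitting the 256 channels into 32 vectors of 8 dimensions and then flattening the 6×6×32 grid, since both reshapes preserve the row-major memory order:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"demo = np.arange(2 * 6 * 6 * 256).reshape(2, 6, 6, 256)  # fake conv2 output\n",
"direct = demo.reshape(2, -1, 8)                          # (2, 1152, 8)\n",
"two_step = demo.reshape(2, 6, 6, 32, 8).reshape(2, -1, 8)\n",
"print(np.array_equal(direct, two_step))                  # True"
]
},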
{
"cell_type": "code",
"execution_count": 90,
"metadata": {},
"outputs": [],
"source": [
"caps1_raw = tf.reshape(conv2, [-1, caps1_n_caps, caps1_n_dims],\n",
" name=\"caps1_raw\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"现在我们需要压缩这些向量。让我们来定义`squash()`函数基于论文中的公式1\n",
"\n",
"$\\operatorname{squash}(\\mathbf{s}) = \\dfrac{\\|\\mathbf{s}\\|^2}{1 + \\|\\mathbf{s}\\|^2} \\dfrac{\\mathbf{s}}{\\|\\mathbf{s}\\|}$\n",
"\n",
"该`squash()`函数将会压缩所有的向量到给定的数组中,沿给定轴(默认情况为最后一个轴)。\n",
"\n",
"**当心**这里有一个很讨厌的bug在等着你当 $\\|\\mathbf{s}\\|=0$时,$\\|\\mathbf{s}\\|$ 为 undefined这让我们不能直接使用 `tf.norm()`否则会在训练过程中失败如果一个向量为0那么梯度就会是 `nan`,所以当优化器更新变量时,这些变量也会变为 `nan`,从那个时刻起,你就止步在 `nan` 那里了。解决的方法是手工实现norm在计算的时候加上一个很小的值 epsilon$\\|\\mathbf{s}\\| \\approx \\sqrt{\\sum\\limits_i{{s_i}^2}\\,\\,+ \\epsilon}$"
]
},
{
"cell_type": "code",
"execution_count": 91,
"metadata": {},
"outputs": [],
"source": [
"def squash(s, axis=-1, epsilon=1e-7, name=None):\n",
" with tf.name_scope(name, default_name=\"squash\"):\n",
" squared_norm = tf.reduce_sum(tf.square(s), axis=axis,\n",
" keep_dims=True)\n",
" safe_norm = tf.sqrt(squared_norm + epsilon)\n",
" squash_factor = squared_norm / (1. + squared_norm)\n",
" unit_vector = s / safe_norm\n",
" return squash_factor * unit_vector"
]
},
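{
"cell_type": "markdown",
"metadata": {},
"source": [
"To get a feel for what `squash()` does, here is a quick NumPy sketch (illustrative only; `squash_np` is a made-up helper): short vectors get shrunk to a length close to 0, long vectors get squashed to a length just below 1, and the direction is always preserved:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"def squash_np(s, epsilon=1e-7):\n",
"    # NumPy version of squash(), for illustration only\n",
"    squared_norm = np.sum(np.square(s), axis=-1, keepdims=True)\n",
"    safe_norm_val = np.sqrt(squared_norm + epsilon)\n",
"    return squared_norm / (1. + squared_norm) * s / safe_norm_val\n",
"\n",
"for v in (np.array([0.1, 0.0]), np.array([10., 0.0])):\n",
"    print(v, \"->\", squash_np(v), \"length:\", np.linalg.norm(squash_np(v)))"
]
},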
{
"cell_type": "markdown",
"metadata": {},
"source": [
"现在让我们应用这个函数以获得每个主胶囊$\\mathbf{u}_i$的输出:"
]
},
{
"cell_type": "code",
"execution_count": 92,
"metadata": {},
"outputs": [],
"source": [
"caps1_output = squash(caps1_raw, name=\"caps1_output\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"太棒了!我们有了首个胶囊层的输出了。不是很难,对吗?然后,计算下一层才是真正乐趣的开始(译注:好戏刚刚开始)。"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# 数字胶囊们"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"要计算数字胶囊们的输出,我们必须首先计算预测的输出向量(每个对应一个主胶囊/数字胶囊的对)。接着,我们就可以通过协议算法来运行路由。"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 计算预测输出向量"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"该数字胶囊层包含10个胶囊每个代表一个数字每个胶囊16维"
]
},
{
"cell_type": "code",
"execution_count": 93,
"metadata": {},
"outputs": [],
"source": [
"caps2_n_caps = 10\n",
"caps2_n_dims = 16"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"对于在第一层里的每个胶囊 $i$,我们会在第二层中预测出每个胶囊 $j$ 的输出。为此,我们需要一个变换矩阵 $\\mathbf{W}_{i,j}$(每一对就是胶囊($i$, $j$) 中的一个),接着我们就可以计算预测的输出$\\hat{\\mathbf{u}}_{j|i} = \\mathbf{W}_{i,j} \\, \\mathbf{u}_i$论文中的公式2的右半部分。由于我们想要将8维向量变形为16维向量因此每个变换向量$\\mathbf{W}_{i,j}$必须具备(16, 8)形状。"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"要为每对胶囊 ($i$, $j$) 计算 $\\hat{\\mathbf{u}}_{j|i}$,我们会利用 `tf.matmul()` 函数的一个特点你可能知道它可以让你进行两个矩阵相乘但你可能不知道它可以让你进行更高维度的数组相乘。它将这些数组视作为数组矩阵并且它会执行每项的矩阵相乘。例如设有两个4D数组每个包含2×3网格的矩阵。第一个包含矩阵为$\\mathbf{A}, \\mathbf{B}, \\mathbf{C}, \\mathbf{D}, \\mathbf{E}, \\mathbf{F}$,第二个包含矩阵为:$\\mathbf{G}, \\mathbf{H}, \\mathbf{I}, \\mathbf{J}, \\mathbf{K}, \\mathbf{L}$。如果你使用 `tf.matmul`函数 对这两个4D数组进行相乘你就会得到\n",
"\n",
"$\n",
"\\pmatrix{\n",
"\\mathbf{A} & \\mathbf{B} & \\mathbf{C} \\\\\n",
"\\mathbf{D} & \\mathbf{E} & \\mathbf{F}\n",
"} \\times\n",
"\\pmatrix{\n",
"\\mathbf{G} & \\mathbf{H} & \\mathbf{I} \\\\\n",
"\\mathbf{J} & \\mathbf{K} & \\mathbf{L}\n",
"} = \\pmatrix{\n",
"\\mathbf{AG} & \\mathbf{BH} & \\mathbf{CI} \\\\\n",
"\\mathbf{DJ} & \\mathbf{EK} & \\mathbf{FL}\n",
"}\n",
"$"
]
},
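{
"cell_type": "markdown",
"metadata": {},
"source": [
"Here is a minimal sketch (not in the original notebook) checking this itemwise matrix multiplication on two small 4D arrays, each containing a 2×3 grid of 2×2 matrices:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"a = np.random.rand(2, 3, 2, 2).astype(np.float32)  # a 2x3 grid of 2x2 matrices\n",
"b = np.random.rand(2, 3, 2, 2).astype(np.float32)\n",
"with tf.Session() as demo_sess:\n",
"    result = demo_sess.run(tf.matmul(a, b))\n",
"# compare one cell of the grid with an explicit matrix product\n",
"print(np.allclose(result[0, 0], a[0, 0] @ b[0, 0]))  # True"
]
},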
{
"cell_type": "markdown",
"metadata": {},
"source": [
"我们可以把这个函数用来计算每对胶囊 ($i$, $j$) 的 $\\hat{\\mathbf{u}}_{j|i}$,就像这样(回忆一下,有 6×6×32=1152 个胶囊在第一层还有10个在第二层\n",
"\n",
"$\n",
"\\pmatrix{\n",
" \\mathbf{W}_{1,1} & \\mathbf{W}_{1,2} & \\cdots & \\mathbf{W}_{1,10} \\\\\n",
" \\mathbf{W}_{2,1} & \\mathbf{W}_{2,2} & \\cdots & \\mathbf{W}_{2,10} \\\\\n",
" \\vdots & \\vdots & \\ddots & \\vdots \\\\\n",
" \\mathbf{W}_{1152,1} & \\mathbf{W}_{1152,2} & \\cdots & \\mathbf{W}_{1152,10}\n",
"} \\times\n",
"\\pmatrix{\n",
" \\mathbf{u}_1 & \\mathbf{u}_1 & \\cdots & \\mathbf{u}_1 \\\\\n",
" \\mathbf{u}_2 & \\mathbf{u}_2 & \\cdots & \\mathbf{u}_2 \\\\\n",
" \\vdots & \\vdots & \\ddots & \\vdots \\\\\n",
" \\mathbf{u}_{1152} & \\mathbf{u}_{1152} & \\cdots & \\mathbf{u}_{1152}\n",
"}\n",
"=\n",
"\\pmatrix{\n",
"\\hat{\\mathbf{u}}_{1|1} & \\hat{\\mathbf{u}}_{2|1} & \\cdots & \\hat{\\mathbf{u}}_{10|1} \\\\\n",
"\\hat{\\mathbf{u}}_{1|2} & \\hat{\\mathbf{u}}_{2|2} & \\cdots & \\hat{\\mathbf{u}}_{10|2} \\\\\n",
"\\vdots & \\vdots & \\ddots & \\vdots \\\\\n",
"\\hat{\\mathbf{u}}_{1|1152} & \\hat{\\mathbf{u}}_{2|1152} & \\cdots & \\hat{\\mathbf{u}}_{10|1152}\n",
"}\n",
"$\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"第一个数组的形状为 (1152, 10, 16, 8),第二个数组的形状为 (1152, 10, 8, 1)。注意到第二个数组必须包含10个对于向量$\\mathbf{u}_1$ 到 $\\mathbf{u}_{1152}$ 的完全拷贝。为了要创建这样的数组,我们将使用好用的 `tf.tile()` 函数,它可以让你创建包含很多基数组拷贝的数组,并且根据你想要的进行平铺。"
]
},
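{
"cell_type": "markdown",
"metadata": {},
"source": [
"If you have never used `tf.tile()`, here is a tiny example (illustrative only): the second argument gives the number of copies along each dimension:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"with tf.Session() as demo_sess:\n",
"    t = tf.constant([[1, 2], [3, 4]])        # shape (2, 2)\n",
"    print(demo_sess.run(tf.tile(t, [1, 3]))) # 3 copies along axis 1 -> shape (2, 6)"
]
},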
{
"cell_type": "markdown",
"metadata": {},
"source": [
"哦稍等我们还忘了一个维度_batch size批量/批次的大小_。假设我们要给胶囊网络提供50张图片那么该网络需要同时作出这50张图片的预测。所以第一个数组的形状为 (50, 1152, 10, 16, 8),而第二个数组的形状为 (50, 1152, 10, 8, 1)。第一层的胶囊实际上已经对于所有的50张图像作出预测所以第二个数组没有问题但对于第一个数组我们需要使用 `tf.tile()` 让其具有50个拷贝的变换矩阵。"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"好了,让我们开始,创建一个可训练的变量,形状为 (1, 1152, 10, 16, 8) 可以用来持有所有的变换矩阵。第一个维度的大小为1可以让这个数组更容易的平铺。我们使用标准差为0.1的常规分布,随机初始化这个变量。"
]
},
{
"cell_type": "code",
"execution_count": 94,
"metadata": {},
"outputs": [],
"source": [
"init_sigma = 0.1\n",
"\n",
"W_init = tf.random_normal(\n",
" shape=(1, caps1_n_caps, caps2_n_caps, caps2_n_dims, caps1_n_dims),\n",
" stddev=init_sigma, dtype=tf.float32, name=\"W_init\")\n",
"W = tf.Variable(W_init, name=\"W\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"现在我们可以通过每个实例重复一次`W`来创建第一个数组:"
]
},
{
"cell_type": "code",
"execution_count": 95,
"metadata": {},
"outputs": [],
"source": [
"batch_size = tf.shape(X)[0]\n",
"W_tiled = tf.tile(W, [batch_size, 1, 1, 1, 1], name=\"W_tiled\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"就是这样!现在转到第二个数组。如前所述,我们需要创建一个数组,形状为 (_batch size_, 1152, 10, 8, 1)包含第一层胶囊的输出重复10次一次一个数字在第三个维度即axis=2。 `caps1_output` 数组的形状为 (_batch size_, 1152, 8),所以我们首先需要展开两次来获得形状 (_batch size_, 1152, 1, 8, 1) 的数组接着在第三维度重复它10次。"
]
},
{
"cell_type": "code",
"execution_count": 96,
"metadata": {},
"outputs": [],
"source": [
"caps1_output_expanded = tf.expand_dims(caps1_output, -1,\n",
" name=\"caps1_output_expanded\")\n",
"caps1_output_tile = tf.expand_dims(caps1_output_expanded, 2,\n",
" name=\"caps1_output_tile\")\n",
"caps1_output_tiled = tf.tile(caps1_output_tile, [1, 1, caps2_n_caps, 1, 1],\n",
" name=\"caps1_output_tiled\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"让我们检查以下第一个数组的形状:"
]
},
{
"cell_type": "code",
"execution_count": 97,
"metadata": {},
"outputs": [],
"source": [
"W_tiled"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"很好,现在第二个:"
]
},
{
"cell_type": "code",
"execution_count": 98,
"metadata": {},
"outputs": [],
"source": [
"caps1_output_tiled"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"好!现在,为了要获得所有的预测好的输出向量 $\\hat{\\mathbf{u}}_{j|i}$,我们只需要将这两个数组使用`tf.malmul()`函数进行相乘,就像前面解释的那样:"
]
},
{
"cell_type": "code",
"execution_count": 99,
"metadata": {},
"outputs": [],
"source": [
"caps2_predicted = tf.matmul(W_tiled, caps1_output_tiled,\n",
" name=\"caps2_predicted\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"让我们检查一下形状:"
]
},
{
"cell_type": "code",
"execution_count": 100,
"metadata": {},
"outputs": [],
"source": [
"caps2_predicted"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"非常好,对于在该批次(我们还不知道批次的大小,使用 \"?\" 替代中的每个实例以及对于每对第一和第二层的胶囊1152×10我们都有一个16D预测的输出列向量 (16×1)。我们已经准备好应用 根据协议算法的路由 了!"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 根据协议的路由"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"首先,让我们初始化原始的路由权重 $b_{i,j}$ 到0:"
]
},
{
"cell_type": "code",
"execution_count": 101,
"metadata": {},
"outputs": [],
"source": [
"raw_weights = tf.zeros([batch_size, caps1_n_caps, caps2_n_caps, 1, 1],\n",
" dtype=np.float32, name=\"raw_weights\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"我们马上将会看到为什么我们需要最后两维大小为1的维度。"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### 第一轮"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"首先,让我们应用 sofmax 函数来计算路由权重,$\\mathbf{c}_{i} = \\operatorname{softmax}(\\mathbf{b}_i)$ 论文中的公式3"
]
},
{
"cell_type": "code",
"execution_count": 102,
"metadata": {},
"outputs": [],
"source": [
"routing_weights = tf.nn.softmax(raw_weights, dim=2, name=\"routing_weights\")"
]
},
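{
"cell_type": "markdown",
"metadata": {},
"source": [
"Since the raw weights are all zero, this first softmax simply gives every second-layer capsule the same routing weight, 1/10, so round 1 starts by averaging all the predictions. A quick NumPy check (illustrative only):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"raw = np.zeros(10)\n",
"print(np.exp(raw) / np.sum(np.exp(raw)))  # ten times 0.1"
]
},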
{
"cell_type": "markdown",
"metadata": {},
"source": [
"现在让我们为每个第二层胶囊计算其预测输出向量的加权,$\\mathbf{s}_j = \\sum\\limits_{i}{c_{i,j}\\hat{\\mathbf{u}}_{j|i}}$ 论文公式2的左半部分"
]
},
{
"cell_type": "code",
"execution_count": 103,
"metadata": {},
"outputs": [],
"source": [
"weighted_predictions = tf.multiply(routing_weights, caps2_predicted,\n",
" name=\"weighted_predictions\")\n",
"weighted_sum = tf.reduce_sum(weighted_predictions, axis=1, keep_dims=True,\n",
" name=\"weighted_sum\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"这里有几个重要的细节需要注意:\n",
"* 要执行元素级别矩阵相乘也称为Hadamard积记作$\\circ$),我们需要使用`tf.multiply()` 函数。它要求 `routing_weights` 和 `caps2_predicted` 具有相同的秩,这就是为什么前面我们在 `routing_weights` 上添加了两个额外的维度。\n",
"* `routing_weights`的形状为 (_batch size_, 1152, 10, 1, 1) 而 `caps2_predicted` 的形状为 (_batch size_, 1152, 10, 16, 1)。由于它们在第四个维度上不匹配1 _vs_ 16`tf.multiply()` 自动地在 `routing_weights` 该维度上 _广播_ 了16次。如果你不熟悉广播这里有一个简单的例子也许可以帮上忙\n",
"\n",
" $ \\pmatrix{1 & 2 & 3 \\\\ 4 & 5 & 6} \\circ \\pmatrix{10 & 100 & 1000} = \\pmatrix{1 & 2 & 3 \\\\ 4 & 5 & 6} \\circ \\pmatrix{10 & 100 & 1000 \\\\ 10 & 100 & 1000} = \\pmatrix{10 & 200 & 3000 \\\\ 40 & 500 & 6000} $"
]
},
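{
"cell_type": "markdown",
"metadata": {},
"source": [
"Here is that broadcasting example as a NumPy snippet (illustrative only; `tf.multiply()` follows the same broadcasting rules):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"m = np.array([[1, 2, 3], [4, 5, 6]])\n",
"row = np.array([[10, 100, 1000]])  # shape (1, 3), broadcast along axis 0\n",
"print(m * row)                     # [[10 200 3000], [40 500 6000]]"
]
},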
{
"cell_type": "markdown",
"metadata": {},
"source": [
"最后让我们应用squash函数到在协议算法的第一次迭代迭代结束时获取第二层胶囊的输出上$\\mathbf{v}_j = \\operatorname{squash}(\\mathbf{s}_j)$"
]
},
{
"cell_type": "code",
"execution_count": 104,
"metadata": {},
"outputs": [],
"source": [
"caps2_output_round_1 = squash(weighted_sum, axis=-2,\n",
" name=\"caps2_output_round_1\")"
]
},
{
"cell_type": "code",
"execution_count": 105,
"metadata": {},
"outputs": [],
"source": [
"caps2_output_round_1"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"好我们对于每个实例有了10个16D输出向量就像我们期待的那样。"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### 第二轮"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"首先,让我们衡量一下,每个预测向量 $\\hat{\\mathbf{u}}_{j|i}$ 对于实际输出向量 $\\mathbf{v}_j$ 之间到底有多接近,这是通过它们的标量乘积 $\\hat{\\mathbf{u}}_{j|i} \\cdot \\mathbf{v}_j$来完成的。"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"* 快速数学上的提示:如果 $\\vec{a}$ and $\\vec{b}$ 是长度相等的向量,并且 $\\mathbf{a}$ 和 $\\mathbf{b}$ 是相应的列向量(如,只有一列的矩阵),那么 $\\mathbf{a}^T \\mathbf{b}$ (即 $\\mathbf{a}$的转置和 $\\mathbf{b}$的矩阵相乘为一个1×1的矩阵包含两个向量$\\vec{a}\\cdot\\vec{b}$的标量积。在机器学习中,我们通常将向量表示为列向量,所以当我们探讨关于计算标量积 $\\hat{\\mathbf{u}}_{j|i} \\cdot \\mathbf{v}_j$的时候,其实意味着计算 ${\\hat{\\mathbf{u}}_{j|i}}^T \\mathbf{v}_j$。"
]
},
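{
"cell_type": "markdown",
"metadata": {},
"source": [
"A two-line NumPy illustration of this reminder (not part of the original flow):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"a = np.array([[1.], [2.], [3.]])  # column vector, shape (3, 1)\n",
"b = np.array([[4.], [5.], [6.]])\n",
"print(a.T @ b)                    # [[32.]], a 1x1 matrix holding the dot product"
]
},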
{
"cell_type": "markdown",
"metadata": {},
"source": [
"由于我们需要对每个实例和每个第一和第二层的胶囊对$(i, j)$,计算标量积 $\\hat{\\mathbf{u}}_{j|i} \\cdot \\mathbf{v}_j$ ,我们将再次利用`tf.matmul()`可以同时计算多个矩阵相乘的特点。这就要求使用 `tf.tile()`来使得所有维度都匹配(除了倒数第二个),就像我们之前所作的那样。所以让我们查看`caps2_predicted`的形状,因为它持有对每个实例和每个胶囊对的所有预测输出向量$\\hat{\\mathbf{u}}_{j|i}$。"
]
},
{
"cell_type": "code",
"execution_count": 106,
"metadata": {},
"outputs": [],
"source": [
"caps2_predicted"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"现在让我们查看 `caps2_output_round_1` 的形状它有10个输出向量每个16D对应每个实例"
]
},
{
"cell_type": "code",
"execution_count": 107,
"metadata": {},
"outputs": [],
"source": [
"caps2_output_round_1"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"为了让这些形状相匹配,我们只需要在第二个维度平铺 `caps2_output_round_1` 1152次一次一个主胶囊"
]
},
{
"cell_type": "code",
"execution_count": 108,
"metadata": {},
"outputs": [],
"source": [
"caps2_output_round_1_tiled = tf.tile(\n",
" caps2_output_round_1, [1, caps1_n_caps, 1, 1, 1],\n",
" name=\"caps2_output_round_1_tiled\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"现在我们已经准备好可以调用 `tf.matmul()`(注意还需要告知它在第一个数组中的矩阵进行转置,让${\\hat{\\mathbf{u}}_{j|i}}^T$ 来替代 $\\hat{\\mathbf{u}}_{j|i}$"
]
},
{
"cell_type": "code",
"execution_count": 109,
"metadata": {},
"outputs": [],
"source": [
"agreement = tf.matmul(caps2_predicted, caps2_output_round_1_tiled,\n",
" transpose_a=True, name=\"agreement\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"我们现在可以通过对于刚计算的标量积$\\hat{\\mathbf{u}}_{j|i} \\cdot \\mathbf{v}_j$进行简单相加,来进行原始路由权重 $b_{i,j}$ 的更新:$b_{i,j} \\gets b_{i,j} + \\hat{\\mathbf{u}}_{j|i} \\cdot \\mathbf{v}_j$ 参见论文过程1中第7步"
]
},
{
"cell_type": "code",
"execution_count": 110,
"metadata": {},
"outputs": [],
"source": [
"raw_weights_round_2 = tf.add(raw_weights, agreement,\n",
" name=\"raw_weights_round_2\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"第二轮的其余部分和第一轮相同:"
]
},
{
"cell_type": "code",
"execution_count": 111,
"metadata": {},
"outputs": [],
"source": [
"routing_weights_round_2 = tf.nn.softmax(raw_weights_round_2,\n",
" dim=2,\n",
" name=\"routing_weights_round_2\")\n",
"weighted_predictions_round_2 = tf.multiply(routing_weights_round_2,\n",
" caps2_predicted,\n",
" name=\"weighted_predictions_round_2\")\n",
"weighted_sum_round_2 = tf.reduce_sum(weighted_predictions_round_2,\n",
" axis=1, keep_dims=True,\n",
" name=\"weighted_sum_round_2\")\n",
"caps2_output_round_2 = squash(weighted_sum_round_2,\n",
" axis=-2,\n",
" name=\"caps2_output_round_2\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"我们可以继续更多轮,只需要重复第二轮中相同的步骤,但为了保持简洁,我们就到这里:"
]
},
{
"cell_type": "code",
"execution_count": 112,
"metadata": {},
"outputs": [],
"source": [
"caps2_output = caps2_output_round_2"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### 静态还是动态循环?"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"在上面的代码中我们在TensorFlow计算图中为协调算法的每一轮路由创建了不同的操作。换句话说它是一个静态循环。\n",
"\n",
"当然,与其拷贝/粘贴这些代码几次通常在python中我们可以写一个 `for` 循环但这不会改变这样一个事实那就是在计算图中最后对于每个路由迭代都会有不同的操作。这其实是可接受的因为我们通常不会具有超过5次路由迭代所以计算图不会成长得太大。\n",
"\n",
"然而你可能更倾向于在TensorFlow计算图自身实现路由循环而不是使用Python的`for`循环。为了要做到这点将需要使用TensorFlow的 `tf.while_loop()` 函数。这种方式,所有的路由循环都可以重用在该计算图中的相同的操作,这被称为动态循环。\n",
"\n",
"例如这里是如何构建一个小循环用来计算1到100的平方和"
]
},
{
"cell_type": "code",
"execution_count": 113,
"metadata": {},
"outputs": [],
"source": [
"def condition(input, counter):\n",
" return tf.less(counter, 100)\n",
"\n",
"def loop_body(input, counter):\n",
" output = tf.add(input, tf.square(counter))\n",
" return output, tf.add(counter, 1)\n",
"\n",
"with tf.name_scope(\"compute_sum_of_squares\"):\n",
" counter = tf.constant(1)\n",
" sum_of_squares = tf.constant(0)\n",
"\n",
" result = tf.while_loop(condition, loop_body, [sum_of_squares, counter])\n",
" \n",
"\n",
"with tf.Session() as sess:\n",
" print(sess.run(result))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"如你所见, `tf.while_loop()` 函数期望的循环条件和循环体由两个函数来提供。这些函数仅会被TensorFlow调用一次在构建计算图阶段_不_ 在执行计算图的时候。 `tf.while_loop()` 函数将由 `condition()` 和 `loop_body()` 创建的计算图碎片同一些用来创建循环的额外操作缝制在一起。\n",
"\n",
"还注意到在训练的过程中TensorFlow将自动地通过循环处理反向传播因此你不需要担心这些事情。"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"当然,我们也可以一行代码搞定!;)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"sum([i**2 for i in range(1, 100 + 1)])"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"开个玩笑抛开缩减计算图的大小不说使用动态循环而不是静态循环能够帮助减少很多的GPU RAM的使用如果你使用GPU的话。事实上如果但调用 `tf.while_loop()` 函数时,你设置了 `swap_memory=True` TensorFlow会在每个循环的迭代上自动检查GPU RAM使用情况并且它会照顾到在GPU和CPU之间swapping内存时的需求。既然CPU的内存便宜量又大相对GPU RAM而言这就很有意义了。"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# 估算的分类概率(模长)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"输出向量的模长代表了分类的概率,所以我们就可以使用`tf.norm()`来计算它们,但由于我们在讨论`squash`函数时看到的那样,可能会有风险,所以我们创建了自己的 `safe_norm()` 函数来进行替代:"
]
},
{
"cell_type": "code",
"execution_count": 114,
"metadata": {},
"outputs": [],
"source": [
"def safe_norm(s, axis=-1, epsilon=1e-7, keep_dims=False, name=None):\n",
" with tf.name_scope(name, default_name=\"safe_norm\"):\n",
" squared_norm = tf.reduce_sum(tf.square(s), axis=axis,\n",
" keep_dims=keep_dims)\n",
" return tf.sqrt(squared_norm + epsilon)"
]
},
{
"cell_type": "code",
"execution_count": 115,
"metadata": {},
"outputs": [],
"source": [
"y_proba = safe_norm(caps2_output, axis=-2, name=\"y_proba\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"要预测每个实例的分类,我们只需要选择那个具有最高估算概率的就可以了。要做到这点,让我们通过使用 `tf.argmax()` 来达到我们的目的:"
]
},
{
"cell_type": "code",
"execution_count": 116,
"metadata": {},
"outputs": [],
"source": [
"y_proba_argmax = tf.argmax(y_proba, axis=2, name=\"y_proba\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"让我们检查一下 `y_proba_argmax` 的形状:"
]
},
{
"cell_type": "code",
"execution_count": 117,
"metadata": {},
"outputs": [],
"source": [
"y_proba_argmax"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"这正好是我们想要的:对于每一个实例,我们现在有了最长的输出向量的索引。让我们用 `tf.squeeze()` 来移除后两个大小为1的维度。这就给出了该胶囊网络对于每个实例的预测分类"
]
},
{
"cell_type": "code",
"execution_count": 118,
"metadata": {},
"outputs": [],
"source": [
"y_pred = tf.squeeze(y_proba_argmax, axis=[1,2], name=\"y_pred\")"
]
},
{
"cell_type": "code",
"execution_count": 119,
"metadata": {},
"outputs": [],
"source": [
"y_pred"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"好了,我们现在准备好开始定义训练操作,从损失开始。"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# 标签"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"首先,我们将需要一个对于标签的占位符:"
]
},
{
"cell_type": "code",
"execution_count": 120,
"metadata": {},
"outputs": [],
"source": [
"y = tf.placeholder(shape=[None], dtype=tf.int64, name=\"y\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# 边际损失"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"论文使用了一个特殊的边际损失,来使得在每个图像中侦测多于两个以上的数字成为可能:\n",
"\n",
"$ L_k = T_k \\max(0, m^{+} - \\|\\mathbf{v}_k\\|)^2 + \\lambda (1 - T_k) \\max(0, \\|\\mathbf{v}_k\\| - m^{-})^2$\n",
"\n",
"* $T_k$ 等于1如果分类$k$的数字出现否则为0.\n",
"* 在论文中,$m^{+} = 0.9$, $m^{-} = 0.1$,并且$\\lambda = 0.5$\n",
"* 注意在视频15:47秒处有个错误应该是最大化操作而不是norms被平方。不好意思。"
]
},
{
"cell_type": "code",
"execution_count": 121,
"metadata": {},
"outputs": [],
"source": [
"m_plus = 0.9\n",
"m_minus = 0.1\n",
"lambda_ = 0.5"
]
},
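{
"cell_type": "markdown",
"metadata": {},
"source": [
"To make the formula concrete, here is a small NumPy sketch (illustrative only, with made-up output vector lengths) computing $L_k$ for a single instance whose target digit is 3:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"v_norms = np.array([0.12, 0.08, 0.15, 0.85, 0.10, 0.05, 0.20, 0.11, 0.09, 0.07])\n",
"T_demo = np.eye(10)[3]  # pretend the target digit is 3\n",
"L_demo = (T_demo * np.maximum(0., m_plus - v_norms) ** 2\n",
"          + lambda_ * (1 - T_demo) * np.maximum(0., v_norms - m_minus) ** 2)\n",
"print(L_demo.sum())  # small: digit 3's length (0.85) is close to m_plus (0.9)"
]
},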
{
"cell_type": "markdown",
"metadata": {},
"source": [
"既然 `y` 将包含数字分类从0到9要对于每个实例和每个分类获取 $T_k$ ,我们只需要使用 `tf.one_hot()` 函数即可:"
]
},
{
"cell_type": "code",
"execution_count": 122,
"metadata": {},
"outputs": [],
"source": [
"T = tf.one_hot(y, depth=caps2_n_caps, name=\"T\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"一个小例子应该可以说明这到底做了什么:"
]
},
{
"cell_type": "code",
"execution_count": 123,
"metadata": {},
"outputs": [],
"source": [
"with tf.Session():\n",
" print(T.eval(feed_dict={y: np.array([0, 1, 2, 3, 9])}))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"现在让我们对于每个输出胶囊和每个实例计算输出向量。首先,让我们验证 `caps2_output` 形状:"
]
},
{
"cell_type": "code",
"execution_count": 124,
"metadata": {},
"outputs": [],
"source": [
"caps2_output"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"这些16D向量位于第二到最后的维度因此让我们在 `axis=-2` 使用 `safe_norm()` 函数:"
]
},
{
"cell_type": "code",
"execution_count": 125,
"metadata": {},
"outputs": [],
"source": [
"caps2_output_norm = safe_norm(caps2_output, axis=-2, keep_dims=True,\n",
" name=\"caps2_output_norm\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"现在让我们计算 $\\max(0, m^{+} - \\|\\mathbf{v}_k\\|)^2$,并且重塑其结果以获得一个简单的具有形状(_batch size_, 10)的矩阵:"
]
},
{
"cell_type": "code",
"execution_count": 126,
"metadata": {},
"outputs": [],
"source": [
"present_error_raw = tf.square(tf.maximum(0., m_plus - caps2_output_norm),\n",
" name=\"present_error_raw\")\n",
"present_error = tf.reshape(present_error_raw, shape=(-1, 10),\n",
" name=\"present_error\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"接下来让我们计算 $\\max(0, \\|\\mathbf{v}_k\\| - m^{-})^2$ 并且重塑成(_batch size_,10)"
]
},
{
"cell_type": "code",
"execution_count": 127,
"metadata": {},
"outputs": [],
"source": [
"absent_error_raw = tf.square(tf.maximum(0., caps2_output_norm - m_minus),\n",
" name=\"absent_error_raw\")\n",
"absent_error = tf.reshape(absent_error_raw, shape=(-1, 10),\n",
" name=\"absent_error\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"我们准备好为每个实例和每个数字计算损失:"
]
},
{
"cell_type": "code",
"execution_count": 128,
"metadata": {},
"outputs": [],
"source": [
"L = tf.add(T * present_error, lambda_ * (1.0 - T) * absent_error,\n",
" name=\"L\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"现在我们可以把对于每个实例的数字损失进行相加($L_0 + L_1 + \\cdots + L_9$),并且在所有的实例中计算均值。这给予我们最后的边际损失:"
]
},
{
"cell_type": "code",
"execution_count": 129,
"metadata": {},
"outputs": [],
"source": [
"margin_loss = tf.reduce_mean(tf.reduce_sum(L, axis=1), name=\"margin_loss\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# 重新构造"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"现在让我们添加一个解码器网络其位于胶囊网络之上。它是一个常规的3层全连接神经网络其将基于胶囊网络的输出学习重新构建输入图像。这将强制胶囊网络保留所有需要重新构造数字的信息贯穿整个网络。该约束正则化了模型它减少了训练数据集过拟合的风险并且它有助于泛化到新的数字。"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 遮盖"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"论文中提及了在训练的过程中与其发送所有的胶囊网络的输出到解码器网络不如仅发送与目标数字对应的胶囊输出向量。所有其余输出向量必须被遮盖掉。在推断的时候我们必须遮盖所有输出向量除了最长的那个。即预测的数字相关的那个。你可以查看论文中的图2视频中的18:15所有的输出向量都被遮盖掉了除了那个重新构造目标的输出向量。"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"我们需要一个占位符来告诉TensorFlow是否我们想要遮盖这些输出向量根据标签 (`True`) 或 预测 (`False`, 默认)"
]
},
{
"cell_type": "code",
"execution_count": 130,
"metadata": {},
"outputs": [],
"source": [
"mask_with_labels = tf.placeholder_with_default(False, shape=(),\n",
" name=\"mask_with_labels\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"现在让我们使用 `tf.cond()` 来定义重新构造的目标,如果 `mask_with_labels` 为 `True` 就是标签 `y`,否则就是 `y_pred`。"
]
},
{
"cell_type": "code",
"execution_count": 131,
"metadata": {},
"outputs": [],
"source": [
"reconstruction_targets = tf.cond(mask_with_labels, # 条件\n",
" lambda: y, # if True\n",
" lambda: y_pred, # if False\n",
" name=\"reconstruction_targets\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"注意到 `tf.cond()` 函数期望的是通过函数传递而来的if-True 和 if-False张量这些函数会在计算图构造阶段而非执行阶段被仅调用一次和`tf.while_loop()`类似。这可以允许TensorFlow添加必要操作以此处理if-True 和 if-False 张量的条件评估。然而,在这里,张量 `y` 和 `y_pred` 已经在我们调用 `tf.cond()` 时被创建不幸地是TensorFlow会认为 `y` 和 `y_pred` 是 `reconstruction_targets` 张量的依赖项。虽然,`reconstruction_targets` 张量最终是会计算出正确值,但是:\n",
"1. 无论何时,我们评估某个依赖于 `reconstruction_targets` 的张量,`y_pred` 张量也会被评估(即便 `mask_with_layers` 为 `True`)。这不是什么大问题,因为,在训练阶段计算`y_pred` 张量不会添加额外的开销,而且不管怎么样我们都需要它来计算边际损失。并且在测试中,如果我们做的是分类,我们就不需要重新构造,所以`reconstruction_grpha`根本不会被评估。\n",
"2. 我们总是需要为`y`占位符递送一个值(即使`mask_with_layers`为`False`。这就有点讨厌了当然我们可以传递一个空数组因为TensorFlow无论如何都不会用到它就是当检查依赖项的时候还不知道。"
]
},
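{
"cell_type": "markdown",
"metadata": {},
"source": [
"Here is a minimal, standalone sketch of how `tf.cond()` behaves (illustrative only; `demo_flag` is a made-up name): the two branches are passed as functions, and the returned tensor takes the value of whichever branch the condition selects:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"demo_flag = tf.placeholder_with_default(False, shape=(), name=\"demo_flag\")\n",
"demo_cond = tf.cond(demo_flag, lambda: tf.constant(1), lambda: tf.constant(0))\n",
"with tf.Session() as demo_sess:\n",
"    print(demo_sess.run(demo_cond))                               # 0 (default)\n",
"    print(demo_sess.run(demo_cond, feed_dict={demo_flag: True}))  # 1"
]
},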
{
"cell_type": "markdown",
"metadata": {},
"source": [
"现在我们有了重新构建的目标让我们创建重新构建的遮盖。对于目标类型它应该为1.0对于其他类型应该为0.0。为此我们就可以使用`tf.one_hot()`函数:"
]
},
{
"cell_type": "code",
"execution_count": 132,
"metadata": {},
"outputs": [],
"source": [
"reconstruction_mask = tf.one_hot(reconstruction_targets,\n",
" depth=caps2_n_caps,\n",
" name=\"reconstruction_mask\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"让我们检查一下 `reconstruction_mask`的形状:"
]
},
{
"cell_type": "code",
"execution_count": 133,
"metadata": {},
"outputs": [],
"source": [
"reconstruction_mask"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"和 `caps2_output` 的形状比对一下:"
]
},
{
"cell_type": "code",
"execution_count": 134,
"metadata": {},
"outputs": [],
"source": [
"caps2_output"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"嗯,它的形状是 (_batch size_, 1, 10, 16, 1)。我们想要将它和 `reconstruction_mask` 进行相乘,但 `reconstruction_mask`的形状是(_batch size_, 10)。我们必须对此进行reshape成 (_batch size_, 1, 10, 1, 1) 来满足相乘的要求:"
]
},
{
"cell_type": "code",
"execution_count": 135,
"metadata": {},
"outputs": [],
"source": [
"reconstruction_mask_reshaped = tf.reshape(\n",
" reconstruction_mask, [-1, 1, caps2_n_caps, 1, 1],\n",
" name=\"reconstruction_mask_reshaped\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"最终我们可以应用 遮盖 了!"
]
},
{
"cell_type": "code",
"execution_count": 136,
"metadata": {},
"outputs": [],
"source": [
"caps2_output_masked = tf.multiply(\n",
" caps2_output, reconstruction_mask_reshaped,\n",
" name=\"caps2_output_masked\")"
]
},
{
"cell_type": "code",
"execution_count": 137,
"metadata": {},
"outputs": [],
"source": [
"caps2_output_masked"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"最后还有一个重塑操作被用来扁平化解码器的输入:"
]
},
{
"cell_type": "code",
"execution_count": 138,
"metadata": {},
"outputs": [],
"source": [
"decoder_input = tf.reshape(caps2_output_masked,\n",
" [-1, caps2_n_caps * caps2_n_dims],\n",
" name=\"decoder_input\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"这给予我们一个形状是 (_batch size_, 160) 的数组:"
]
},
{
"cell_type": "code",
"execution_count": 139,
"metadata": {},
"outputs": [],
"source": [
"decoder_input"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 解码器"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"现在让我们来构建该解码器。它非常简单两个密集全连接ReLU 层紧跟这一个密集输出sigmoid层"
]
},
{
"cell_type": "code",
"execution_count": 140,
"metadata": {},
"outputs": [],
"source": [
"n_hidden1 = 512\n",
"n_hidden2 = 1024\n",
"n_output = 28 * 28"
]
},
{
"cell_type": "code",
"execution_count": 141,
"metadata": {},
"outputs": [],
"source": [
"with tf.name_scope(\"decoder\"):\n",
" hidden1 = tf.layers.dense(decoder_input, n_hidden1,\n",
" activation=tf.nn.relu,\n",
" name=\"hidden1\")\n",
" hidden2 = tf.layers.dense(hidden1, n_hidden2,\n",
" activation=tf.nn.relu,\n",
" name=\"hidden2\")\n",
" decoder_output = tf.layers.dense(hidden2, n_output,\n",
" activation=tf.nn.sigmoid,\n",
" name=\"decoder_output\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 重新构造的损失"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"现在让我们计算重新构造的损失。它不过是输入图像和重新构造过的图像的平方差。"
]
},
{
"cell_type": "code",
"execution_count": 142,
"metadata": {},
"outputs": [],
"source": [
"X_flat = tf.reshape(X, [-1, n_output], name=\"X_flat\")\n",
"squared_difference = tf.square(X_flat - decoder_output,\n",
" name=\"squared_difference\")\n",
"reconstruction_loss = tf.reduce_mean(squared_difference,\n",
" name=\"reconstruction_loss\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 最终损失"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"最终损失为边际损失和重新构造损失使用放大因子0.0005确保边际损失在训练过程中处于支配地位)的和:"
]
},
{
"cell_type": "code",
"execution_count": 143,
"metadata": {},
"outputs": [],
"source": [
"alpha = 0.0005\n",
"\n",
"loss = tf.add(margin_loss, alpha * reconstruction_loss, name=\"loss\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# 最后润色"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 精度"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"为了衡量模型的精度,我们需要计算实例被正确分类的数量。为此,我们可以简单地比较`y`和`y_pred`并将比较结果的布尔值转换成float320.0代表False1.0代表True并且计算所有实例的均值"
]
},
{
"cell_type": "code",
"execution_count": 144,
"metadata": {},
"outputs": [],
"source": [
"correct = tf.equal(y, y_pred, name=\"correct\")\n",
"accuracy = tf.reduce_mean(tf.cast(correct, tf.float32), name=\"accuracy\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 训练操作"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"论文中提到作者使用Adam优化器使用了TensorFlow的默认参数"
]
},
{
"cell_type": "code",
"execution_count": 145,
"metadata": {},
"outputs": [],
"source": [
"optimizer = tf.train.AdamOptimizer()\n",
"training_op = optimizer.minimize(loss, name=\"training_op\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 初始化和Saver"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"让我们来添加变量初始器,还要加一个 `Saver`"
]
},
{
"cell_type": "code",
"execution_count": 146,
"metadata": {},
"outputs": [],
"source": [
"init = tf.global_variables_initializer()\n",
"saver = tf.train.Saver()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"还有... 我们已经完成了构造阶段!花点时间可以庆祝🎉一下。:)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# 训练"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"训练我们的胶囊网络是非常标准的。为了简化我们不需要作任何花哨的超参调整、丢弃等我们只是一遍又一遍运行训练操作显示损失并且在每个epoch结束的时候根据验证集衡量一下精度显示出来并且保存模型当然验证损失是目前为止最低的模型才会被保存这是一种基本的实现早停的方法而不需要实际上打断训练的进程。我们希望代码能够自释但这里应该有几个细节值得注意\n",
"* 如果某个checkpoint文件已经存在那么它会被恢复这可以让训练被打断再从最新的checkpoint中进行恢复成为可能\n",
"* 我们不要忘记在训练的时候传递`mask_with_labels=True`\n",
"* 在测试的过程中,我们可以让`mask_with_labels`默认为`False`(但是我们仍然需要传递标签,因为它们在计算精度的时候会被用到),\n",
"* 通过 `mnist.train.next_batch()`装载的图片会被表示为类型 `float32` 数组,其形状为\\[784\\],但输入的占位符`X`期望的是一个`float32`数组,其形状为 \\[28, 28, 1\\],所以在我们把送到模型之前,必须把这些图像进行重塑,\n",
"* 我们在整个完整的验证集上对模型的损失和精度进行评估。为了能够看到进度和支持那些并没有太多RAM的系统评估损失和精度的代码在一个批次上执行一次并且最后再计算平均损失和平均精度。\n",
"\n",
"*警告*如果你没有GPU训练将会非常漫长至少几个小时。当使用GPU它应该对于每个epoch只需要几分钟在NVidia GeForce GTX 1080Ti上只需要6分钟。"
]
},
{
"cell_type": "code",
"execution_count": 147,
"metadata": {},
"outputs": [],
"source": [
"n_epochs = 10\n",
"batch_size = 50\n",
"restore_checkpoint = True\n",
"\n",
"n_iterations_per_epoch = mnist.train.num_examples // batch_size\n",
"n_iterations_validation = mnist.validation.num_examples // batch_size\n",
"best_loss_val = np.infty\n",
"checkpoint_path = \"./my_capsule_network\"\n",
"\n",
"with tf.Session() as sess:\n",
" if restore_checkpoint and tf.train.checkpoint_exists(checkpoint_path):\n",
" saver.restore(sess, checkpoint_path)\n",
" else:\n",
" init.run()\n",
"\n",
" for epoch in range(n_epochs):\n",
" for iteration in range(1, n_iterations_per_epoch + 1):\n",
" X_batch, y_batch = mnist.train.next_batch(batch_size)\n",
" # 运行训练操作并且评估损失:\n",
" _, loss_train = sess.run(\n",
" [training_op, loss],\n",
" feed_dict={X: X_batch.reshape([-1, 28, 28, 1]),\n",
" y: y_batch,\n",
" mask_with_labels: True})\n",
" print(\"\\rIteration: {}/{} ({:.1f}%) Loss: {:.5f}\".format(\n",
" iteration, n_iterations_per_epoch,\n",
" iteration * 100 / n_iterations_per_epoch,\n",
" loss_train),\n",
" end=\"\")\n",
"\n",
" # 在每个epoch之后\n",
" # 衡量验证损失和精度:\n",
" loss_vals = []\n",
" acc_vals = []\n",
" for iteration in range(1, n_iterations_validation + 1):\n",
" X_batch, y_batch = mnist.validation.next_batch(batch_size)\n",
" loss_val, acc_val = sess.run(\n",
" [loss, accuracy],\n",
" feed_dict={X: X_batch.reshape([-1, 28, 28, 1]),\n",
" y: y_batch})\n",
" loss_vals.append(loss_val)\n",
" acc_vals.append(acc_val)\n",
" print(\"\\rEvaluating the model: {}/{} ({:.1f}%)\".format(\n",
" iteration, n_iterations_validation,\n",
" iteration * 100 / n_iterations_validation),\n",
" end=\" \" * 10)\n",
" loss_val = np.mean(loss_vals)\n",
" acc_val = np.mean(acc_vals)\n",
" print(\"\\rEpoch: {} Val accuracy: {:.4f}% Loss: {:.6f}{}\".format(\n",
" epoch + 1, acc_val * 100, loss_val,\n",
" \" (improved)\" if loss_val < best_loss_val else \"\"))\n",
"\n",
" # 如果有进步就保存模型:\n",
" if loss_val < best_loss_val:\n",
" save_path = saver.save(sess, checkpoint_path)\n",
" best_loss_val = loss_val"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"我们在训练结束后在验证集上达到了99.32%的精度只用了5个epoches看上去不错。现在让我们将模型运用到测试集上。"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# 评估"
]
},
{
"cell_type": "code",
"execution_count": 148,
"metadata": {},
"outputs": [],
"source": [
"n_iterations_test = mnist.test.num_examples // batch_size\n",
"\n",
"with tf.Session() as sess:\n",
" saver.restore(sess, checkpoint_path)\n",
"\n",
" loss_tests = []\n",
" acc_tests = []\n",
" for iteration in range(1, n_iterations_test + 1):\n",
" X_batch, y_batch = mnist.test.next_batch(batch_size)\n",
" loss_test, acc_test = sess.run(\n",
" [loss, accuracy],\n",
" feed_dict={X: X_batch.reshape([-1, 28, 28, 1]),\n",
" y: y_batch})\n",
" loss_tests.append(loss_test)\n",
" acc_tests.append(acc_test)\n",
" print(\"\\rEvaluating the model: {}/{} ({:.1f}%)\".format(\n",
" iteration, n_iterations_test,\n",
" iteration * 100 / n_iterations_test),\n",
" end=\" \" * 10)\n",
" loss_test = np.mean(loss_tests)\n",
" acc_test = np.mean(acc_tests)\n",
" print(\"\\rFinal test accuracy: {:.4f}% Loss: {:.6f}\".format(\n",
" acc_test * 100, loss_test))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"我们在测试集上达到了99.21%的精度。相当棒!"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# 预测"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"现在让我们进行一些预测首先从测试集确定一些图片接着开始一个session恢复已经训练好的模型评估`cap2_output`来获得胶囊网络的输出向量,`decoder_output`来重新构造,用`y_pred`来获得类型预测:"
]
},
{
"cell_type": "code",
"execution_count": 149,
"metadata": {},
"outputs": [],
"source": [
"n_samples = 5\n",
"\n",
"sample_images = mnist.test.images[:n_samples].reshape([-1, 28, 28, 1])\n",
"\n",
"with tf.Session() as sess:\n",
" saver.restore(sess, checkpoint_path)\n",
" caps2_output_value, decoder_output_value, y_pred_value = sess.run(\n",
" [caps2_output, decoder_output, y_pred],\n",
" feed_dict={X: sample_images,\n",
" y: np.array([], dtype=np.int64)})"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"注意:我们传递的`y`使用了一个空的数组不过TensorFlow并不会用到它前面已经解释过了。"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"现在让我们把这些图片和它们的标签绘制出来,同时绘制出来的还有相应的重新构造和预测:"
]
},
{
"cell_type": "code",
"execution_count": 150,
"metadata": {},
"outputs": [],
"source": [
"sample_images = sample_images.reshape(-1, 28, 28)\n",
"reconstructions = decoder_output_value.reshape([-1, 28, 28])\n",
"\n",
"plt.figure(figsize=(n_samples * 2, 3))\n",
"for index in range(n_samples):\n",
" plt.subplot(1, n_samples, index + 1)\n",
" plt.imshow(sample_images[index], cmap=\"binary\")\n",
" plt.title(\"Label:\" + str(mnist.test.labels[index]))\n",
" plt.axis(\"off\")\n",
"\n",
"plt.show()\n",
"\n",
"plt.figure(figsize=(n_samples * 2, 3))\n",
"for index in range(n_samples):\n",
" plt.subplot(1, n_samples, index + 1)\n",
" plt.title(\"Predicted:\" + str(y_pred_value[index]))\n",
" plt.imshow(reconstructions[index], cmap=\"binary\")\n",
" plt.axis(\"off\")\n",
" \n",
"plt.show()\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"预测都正确,而且重新构造的图片看上去很棒。阿弥陀佛!"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# 理解输出向量"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"让我们调整一下输出向量,对它们的姿态参数表示进行查看。"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"首先让我们检查`cap2_output_value` NumPy数组的形状"
]
},
{
"cell_type": "code",
"execution_count": 151,
"metadata": {},
"outputs": [],
"source": [
"caps2_output_value.shape"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"让我们创建一个函数,该函数在所有的输出向量里对于每个 16维度姿态参数进行调整。每个调整过的输出向量将和原来的输出向量相同除了它的 姿态参数 中的一个会加上一个-0.5到0.5之间变动的值。默认的会有11个步数(-0.5, -0.4, ..., +0.4, +0.5)。这个函数会返回一个数组,其形状为(_调整过的姿态参数_=16, _步数_=11, _batch size_=5, 1, 10, 16, 1)"
]
},
{
"cell_type": "code",
"execution_count": 152,
"metadata": {},
"outputs": [],
"source": [
"def tweak_pose_parameters(output_vectors, min=-0.5, max=0.5, n_steps=11):\n",
" steps = np.linspace(min, max, n_steps) # -0.25, -0.15, ..., +0.25\n",
" pose_parameters = np.arange(caps2_n_dims) # 0, 1, ..., 15\n",
" tweaks = np.zeros([caps2_n_dims, n_steps, 1, 1, 1, caps2_n_dims, 1])\n",
" tweaks[pose_parameters, :, 0, 0, 0, pose_parameters, 0] = steps\n",
" output_vectors_expanded = output_vectors[np.newaxis, np.newaxis]\n",
" return tweaks + output_vectors_expanded"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"让我们计算所有的调整过的输出向量并且重塑结果到 (_parameters_×_steps_×_instances_, 1, 10, 16, 1) 以便于我们能够传递该数组到解码器中:"
]
},
{
"cell_type": "code",
"execution_count": 153,
"metadata": {},
"outputs": [],
"source": [
"n_steps = 11\n",
"\n",
"tweaked_vectors = tweak_pose_parameters(caps2_output_value, n_steps=n_steps)\n",
"tweaked_vectors_reshaped = tweaked_vectors.reshape(\n",
" [-1, 1, caps2_n_caps, caps2_n_dims, 1])"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"现在让我们递送这些调整过的输出向量到解码器并且获得重新构造,它会产生:"
]
},
{
"cell_type": "code",
"execution_count": 154,
"metadata": {},
"outputs": [],
"source": [
"tweak_labels = np.tile(mnist.test.labels[:n_samples], caps2_n_dims * n_steps)\n",
"\n",
"with tf.Session() as sess:\n",
" saver.restore(sess, checkpoint_path)\n",
" decoder_output_value = sess.run(\n",
" decoder_output,\n",
" feed_dict={caps2_output: tweaked_vectors_reshaped,\n",
" mask_with_labels: True,\n",
" y: tweak_labels})"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"让我们重塑解码器的输出以便于我们能够在输出维度,调整步数,和实例之上进行迭代:"
]
},
{
"cell_type": "code",
"execution_count": 155,
"metadata": {},
"outputs": [],
"source": [
"tweak_reconstructions = decoder_output_value.reshape(\n",
" [caps2_n_dims, n_steps, n_samples, 28, 28])"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"最后,让我们绘制所有的重新构造,对于前三个输出维度,对于每个调整中的步数(列)和每个数字(行):"
]
},
{
"cell_type": "code",
"execution_count": 156,
"metadata": {
"scrolled": false
},
"outputs": [],
"source": [
"for dim in range(3):\n",
" print(\"Tweaking output dimension #{}\".format(dim))\n",
" plt.figure(figsize=(n_steps / 1.2, n_samples / 1.5))\n",
" for row in range(n_samples):\n",
" for col in range(n_steps):\n",
" plt.subplot(n_samples, n_steps, row * n_steps + col + 1)\n",
" plt.imshow(tweak_reconstructions[dim, col, row], cmap=\"binary\")\n",
" plt.axis(\"off\")\n",
" plt.show()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# 小结"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"我试图让这个notebook中的代码尽量的扁平和线性为了让大家容易跟上当然在实践中大家可能想要包装这些代码成可重用的函数和类。例如你可以尝试实现你自己的`PrimaryCapsuleLayer`,和`DeseRoutingCapsuleLayer` 类其参数可以是胶囊的数量路由迭代的数量是使用动态循环还是静态循环诸如此类。对于基于TensorFlow模块化的胶囊网络的实现可以参考[CapsNet-TensorFlow](https://github.com/naturomics/CapsNet-Tensorflow) 项目。\n",
"\n",
"这就是今天所有的内容我希望你们喜欢这个notebook"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.4"
},
"toc": {
"base_numbering": 1,
"nav_menu": {},
"number_sections": true,
"sideBar": true,
"skip_h1_title": false,
"title_cell": "Table of Contents",
"title_sidebar": "Contents",
"toc_cell": false,
"toc_position": {
"height": "calc(100% - 180px)",
"left": "10px",
"top": "150px",
"width": "336px"
},
"toc_section_display": true,
"toc_window_display": true
}
},
"nbformat": 4,
"nbformat_minor": 2
}