{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"**Chapter 16 Natural Language Processing with RNNs and Attention**"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"_This notebook contains all the sample code in chapter 16._"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<table align=\"left\">\n",
" <td>\n",
" <a target=\"_blank\" href=\"https://colab.research.google.com/github/ageron/handson-ml2/blob/master/16_nlp_with_rnns_and_attention.ipynb\"><img src=\"https://www.tensorflow.org/images/colab_logo_32px.png\" />Run in Google Colab</a>\n",
" </td>\n",
"</table>"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Setup"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"First, let's import a few common modules, ensure MatplotLib plots figures inline and prepare a function to save the figures. We also check that Python 3.5 or later is installed (although Python 2.x may work, it is deprecated so we strongly recommend you use Python 3 instead), as well as Scikit-Learn ≥0.20 and TensorFlow ≥2.0."
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [],
"source": [
"# Python ≥3.5 is required\n",
"import sys\n",
"assert sys.version_info >= (3, 5)\n",
"\n",
"# Scikit-Learn ≥0.20 is required\n",
"import sklearn\n",
"assert sklearn.__version__ >= \"0.20\"\n",
"\n",
"try:\n",
" # %tensorflow_version only exists in Colab.\n",
" %tensorflow_version 2.x\n",
" !pip install -q -U tensorflow-addons\n",
" !pip install -q -U transformers\n",
" IS_COLAB = True\n",
"except Exception:\n",
" IS_COLAB = False\n",
"\n",
"# TensorFlow ≥2.0 is required\n",
"import tensorflow as tf\n",
"from tensorflow import keras\n",
"assert tf.__version__ >= \"2.0\"\n",
"\n",
"if not tf.config.list_physical_devices('GPU'):\n",
" print(\"No GPU was detected. LSTMs and CNNs can be very slow without a GPU.\")\n",
" if IS_COLAB:\n",
" print(\"Go to Runtime > Change runtime and select a GPU hardware accelerator.\")\n",
"\n",
"# Common imports\n",
"import numpy as np\n",
"import os\n",
"\n",
"# to make this notebook's output stable across runs\n",
"np.random.seed(42)\n",
"tf.random.set_seed(42)\n",
"\n",
"# To plot pretty figures\n",
"%matplotlib inline\n",
"import matplotlib as mpl\n",
"import matplotlib.pyplot as plt\n",
"mpl.rc('axes', labelsize=14)\n",
"mpl.rc('xtick', labelsize=12)\n",
"mpl.rc('ytick', labelsize=12)\n",
"\n",
"# Where to save the figures\n",
"PROJECT_ROOT_DIR = \".\"\n",
"CHAPTER_ID = \"nlp\"\n",
"IMAGES_PATH = os.path.join(PROJECT_ROOT_DIR, \"images\", CHAPTER_ID)\n",
"os.makedirs(IMAGES_PATH, exist_ok=True)\n",
"\n",
"def save_fig(fig_id, tight_layout=True, fig_extension=\"png\", resolution=300):\n",
" path = os.path.join(IMAGES_PATH, fig_id + \".\" + fig_extension)\n",
" print(\"Saving figure\", fig_id)\n",
" if tight_layout:\n",
" plt.tight_layout()\n",
" plt.savefig(path, format=fig_extension, dpi=resolution)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Char-RNN"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Splitting a sequence into batches of shuffled windows"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"For example, let's split the sequence 0 to 14 into windows of length 5, each shifted by 2 (e.g.,`[0, 1, 2, 3, 4]`, `[2, 3, 4, 5, 6]`, etc.), then shuffle them, and split them into inputs (the first 4 steps) and targets (the last 4 steps) (e.g., `[2, 3, 4, 5, 6]` would be split into `[[2, 3, 4, 5], [3, 4, 5, 6]]`), then create batches of 3 such input/target pairs:"
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {
"scrolled": true
},
"outputs": [],
"source": [
"np.random.seed(42)\n",
"tf.random.set_seed(42)\n",
"\n",
"n_steps = 5\n",
"dataset = tf.data.Dataset.from_tensor_slices(tf.range(15))\n",
"dataset = dataset.window(n_steps, shift=2, drop_remainder=True)\n",
"dataset = dataset.flat_map(lambda window: window.batch(n_steps))\n",
"dataset = dataset.shuffle(10).map(lambda window: (window[:-1], window[1:]))\n",
"dataset = dataset.batch(3).prefetch(1)\n",
"for index, (X_batch, Y_batch) in enumerate(dataset):\n",
" print(\"_\" * 20, \"Batch\", index, \"\\nX_batch\")\n",
" print(X_batch.numpy())\n",
" print(\"=\" * 5, \"\\nY_batch\")\n",
" print(Y_batch.numpy())"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Loading the Data and Preparing the Dataset"
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {},
"outputs": [],
"source": [
"shakespeare_url = \"https://raw.githubusercontent.com/karpathy/char-rnn/master/data/tinyshakespeare/input.txt\"\n",
"filepath = keras.utils.get_file(\"shakespeare.txt\", shakespeare_url)\n",
"with open(filepath) as f:\n",
" shakespeare_text = f.read()"
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {},
"outputs": [],
"source": [
"print(shakespeare_text[:148])"
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {},
"outputs": [],
"source": [
"\"\".join(sorted(set(shakespeare_text.lower())))"
]
},
{
"cell_type": "code",
"execution_count": 6,
"metadata": {},
"outputs": [],
"source": [
"tokenizer = keras.preprocessing.text.Tokenizer(char_level=True)\n",
"tokenizer.fit_on_texts(shakespeare_text)"
]
},
{
"cell_type": "code",
"execution_count": 7,
"metadata": {},
"outputs": [],
"source": [
"tokenizer.texts_to_sequences([\"First\"])"
]
},
{
"cell_type": "code",
"execution_count": 8,
"metadata": {},
"outputs": [],
"source": [
"tokenizer.sequences_to_texts([[20, 6, 9, 8, 3]])"
]
},
{
"cell_type": "code",
"execution_count": 9,
"metadata": {},
"outputs": [],
"source": [
"max_id = len(tokenizer.word_index) # number of distinct characters\n",
"dataset_size = tokenizer.document_count # total number of characters"
]
},
{
"cell_type": "code",
"execution_count": 10,
"metadata": {},
"outputs": [],
"source": [
"[encoded] = np.array(tokenizer.texts_to_sequences([shakespeare_text])) - 1\n",
"train_size = dataset_size * 90 // 100\n",
"dataset = tf.data.Dataset.from_tensor_slices(encoded[:train_size])"
]
},
{
"cell_type": "code",
"execution_count": 11,
"metadata": {},
"outputs": [],
"source": [
"n_steps = 100\n",
"window_length = n_steps + 1 # target = input shifted 1 character ahead\n",
"dataset = dataset.repeat().window(window_length, shift=1, drop_remainder=True)"
]
},
{
"cell_type": "code",
"execution_count": 12,
"metadata": {},
"outputs": [],
"source": [
"dataset = dataset.flat_map(lambda window: window.batch(window_length))"
]
},
{
"cell_type": "code",
"execution_count": 13,
"metadata": {},
"outputs": [],
"source": [
"np.random.seed(42)\n",
"tf.random.set_seed(42)"
]
},
{
"cell_type": "code",
"execution_count": 14,
"metadata": {},
"outputs": [],
"source": [
"batch_size = 32\n",
"dataset = dataset.shuffle(10000).batch(batch_size)\n",
"dataset = dataset.map(lambda windows: (windows[:, :-1], windows[:, 1:]))"
]
},
{
"cell_type": "code",
"execution_count": 15,
"metadata": {},
"outputs": [],
"source": [
"dataset = dataset.map(\n",
" lambda X_batch, Y_batch: (tf.one_hot(X_batch, depth=max_id), Y_batch))"
]
},
{
"cell_type": "code",
"execution_count": 16,
"metadata": {},
"outputs": [],
"source": [
"dataset = dataset.prefetch(1)"
]
},
{
"cell_type": "code",
"execution_count": 17,
"metadata": {},
"outputs": [],
"source": [
"for X_batch, Y_batch in dataset.take(1):\n",
" print(X_batch.shape, Y_batch.shape)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Creating and Training the Model"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"**Warning**: the following code may take up to 24 hours to run, depending on your hardware. If you use a GPU, it may take just 1 or 2 hours, or less."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"**Note**: the `GRU` class will only use the GPU (if you have one) when using the default values for the following arguments: `activation`, `recurrent_activation`, `recurrent_dropout`, `unroll`, `use_bias` and `reset_after`. This is why I commented out `recurrent_dropout=0.2` (compared to the book)."
]
},
{
"cell_type": "code",
"execution_count": 18,
"metadata": {},
"outputs": [],
"source": [
"model = keras.models.Sequential([\n",
" keras.layers.GRU(128, return_sequences=True, input_shape=[None, max_id],\n",
" #dropout=0.2, recurrent_dropout=0.2),\n",
" dropout=0.2),\n",
" keras.layers.GRU(128, return_sequences=True,\n",
" #dropout=0.2, recurrent_dropout=0.2),\n",
" dropout=0.2),\n",
" keras.layers.TimeDistributed(keras.layers.Dense(max_id,\n",
" activation=\"softmax\"))\n",
"])\n",
"model.compile(loss=\"sparse_categorical_crossentropy\", optimizer=\"adam\")\n",
"history = model.fit(dataset, steps_per_epoch=train_size // batch_size,\n",
" epochs=10)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Using the Model to Generate Text"
]
},
{
"cell_type": "code",
"execution_count": 19,
"metadata": {},
"outputs": [],
"source": [
"def preprocess(texts):\n",
" X = np.array(tokenizer.texts_to_sequences(texts)) - 1\n",
" return tf.one_hot(X, max_id)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"**Warning**: the `predict_classes()` method is deprecated. Instead, we must use `np.argmax(model(X_new), axis=-1)`."
]
},
{
"cell_type": "code",
"execution_count": 20,
"metadata": {},
"outputs": [],
"source": [
"X_new = preprocess([\"How are yo\"])\n",
"#Y_pred = model.predict_classes(X_new)\n",
"Y_pred = np.argmax(model(X_new), axis=-1)\n",
"tokenizer.sequences_to_texts(Y_pred + 1)[0][-1] # 1st sentence, last char"
]
},
{
"cell_type": "code",
"execution_count": 21,
"metadata": {},
"outputs": [],
"source": [
"tf.random.set_seed(42)\n",
"\n",
"tf.random.categorical([[np.log(0.5), np.log(0.4), np.log(0.1)]], num_samples=40).numpy()"
]
},
{
"cell_type": "code",
"execution_count": 22,
"metadata": {},
"outputs": [],
"source": [
"def next_char(text, temperature=1):\n",
" X_new = preprocess([text])\n",
" y_proba = model(X_new)[0, -1:, :]\n",
" rescaled_logits = tf.math.log(y_proba) / temperature\n",
" char_id = tf.random.categorical(rescaled_logits, num_samples=1) + 1\n",
" return tokenizer.sequences_to_texts(char_id.numpy())[0]"
]
},
{
"cell_type": "code",
"execution_count": 23,
"metadata": {},
"outputs": [],
"source": [
"tf.random.set_seed(42)\n",
"\n",
"next_char(\"How are yo\", temperature=1)"
]
},
{
"cell_type": "code",
"execution_count": 24,
"metadata": {},
"outputs": [],
"source": [
"def complete_text(text, n_chars=50, temperature=1):\n",
" for _ in range(n_chars):\n",
" text += next_char(text, temperature)\n",
" return text"
]
},
{
"cell_type": "code",
"execution_count": 25,
"metadata": {},
"outputs": [],
"source": [
"tf.random.set_seed(42)\n",
"\n",
"print(complete_text(\"t\", temperature=0.2))"
]
},
{
"cell_type": "code",
"execution_count": 26,
"metadata": {},
"outputs": [],
"source": [
"print(complete_text(\"t\", temperature=1))"
]
},
{
"cell_type": "code",
"execution_count": 27,
"metadata": {},
"outputs": [],
"source": [
"print(complete_text(\"t\", temperature=2))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Stateful RNN"
]
},
{
"cell_type": "code",
"execution_count": 28,
"metadata": {},
"outputs": [],
"source": [
"tf.random.set_seed(42)"
]
},
{
"cell_type": "code",
"execution_count": 29,
"metadata": {},
"outputs": [],
"source": [
"dataset = tf.data.Dataset.from_tensor_slices(encoded[:train_size])\n",
"dataset = dataset.window(window_length, shift=n_steps, drop_remainder=True)\n",
"dataset = dataset.flat_map(lambda window: window.batch(window_length))\n",
"dataset = dataset.repeat().batch(1)\n",
"dataset = dataset.map(lambda windows: (windows[:, :-1], windows[:, 1:]))\n",
"dataset = dataset.map(\n",
" lambda X_batch, Y_batch: (tf.one_hot(X_batch, depth=max_id), Y_batch))\n",
"dataset = dataset.prefetch(1)"
]
},
{
"cell_type": "code",
"execution_count": 30,
"metadata": {},
"outputs": [],
"source": [
"batch_size = 32\n",
"encoded_parts = np.array_split(encoded[:train_size], batch_size)\n",
"datasets = []\n",
"for encoded_part in encoded_parts:\n",
" dataset = tf.data.Dataset.from_tensor_slices(encoded_part)\n",
" dataset = dataset.window(window_length, shift=n_steps, drop_remainder=True)\n",
" dataset = dataset.flat_map(lambda window: window.batch(window_length))\n",
" datasets.append(dataset)\n",
"dataset = tf.data.Dataset.zip(tuple(datasets)).map(lambda *windows: tf.stack(windows))\n",
"dataset = dataset.repeat().map(lambda windows: (windows[:, :-1], windows[:, 1:]))\n",
"dataset = dataset.map(\n",
" lambda X_batch, Y_batch: (tf.one_hot(X_batch, depth=max_id), Y_batch))\n",
"dataset = dataset.prefetch(1)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"**Note**: once again, I commented out `recurrent_dropout=0.2` (compared to the book) so you can get GPU acceleration (if you have one)."
]
},
{
"cell_type": "code",
"execution_count": 31,
"metadata": {},
"outputs": [],
"source": [
"model = keras.models.Sequential([\n",
" keras.layers.GRU(128, return_sequences=True, stateful=True,\n",
" #dropout=0.2, recurrent_dropout=0.2,\n",
" dropout=0.2,\n",
" batch_input_shape=[batch_size, None, max_id]),\n",
" keras.layers.GRU(128, return_sequences=True, stateful=True,\n",
" #dropout=0.2, recurrent_dropout=0.2),\n",
" dropout=0.2),\n",
" keras.layers.TimeDistributed(keras.layers.Dense(max_id,\n",
" activation=\"softmax\"))\n",
"])"
]
},
{
"cell_type": "code",
"execution_count": 32,
"metadata": {},
"outputs": [],
"source": [
"class ResetStatesCallback(keras.callbacks.Callback):\n",
" def on_epoch_begin(self, epoch, logs):\n",
" self.model.reset_states()"
]
},
{
"cell_type": "code",
"execution_count": 33,
"metadata": {},
"outputs": [],
"source": [
"model.compile(loss=\"sparse_categorical_crossentropy\", optimizer=\"adam\")\n",
"steps_per_epoch = train_size // batch_size // n_steps\n",
"history = model.fit(dataset, steps_per_epoch=steps_per_epoch, epochs=50,\n",
" callbacks=[ResetStatesCallback()])"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"To use the model with different batch sizes, we need to create a stateless copy. We can get rid of dropout since it is only used during training:"
]
},
{
"cell_type": "code",
"execution_count": 34,
"metadata": {},
"outputs": [],
"source": [
"stateless_model = keras.models.Sequential([\n",
" keras.layers.GRU(128, return_sequences=True, input_shape=[None, max_id]),\n",
" keras.layers.GRU(128, return_sequences=True),\n",
" keras.layers.TimeDistributed(keras.layers.Dense(max_id,\n",
" activation=\"softmax\"))\n",
"])"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"To set the weights, we first need to build the model (so the weights get created):"
]
},
{
"cell_type": "code",
"execution_count": 35,
"metadata": {},
"outputs": [],
"source": [
"stateless_model.build(tf.TensorShape([None, None, max_id]))"
]
},
{
"cell_type": "code",
"execution_count": 36,
"metadata": {},
"outputs": [],
"source": [
"stateless_model.set_weights(model.get_weights())\n",
"model = stateless_model"
]
},
{
"cell_type": "code",
"execution_count": 37,
"metadata": {},
"outputs": [],
"source": [
"tf.random.set_seed(42)\n",
"\n",
"print(complete_text(\"t\"))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Sentiment Analysis"
]
},
{
"cell_type": "code",
"execution_count": 38,
"metadata": {},
"outputs": [],
"source": [
"tf.random.set_seed(42)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"You can load the IMDB dataset easily:"
]
},
{
"cell_type": "code",
"execution_count": 39,
"metadata": {},
"outputs": [],
"source": [
"(X_train, y_train), (X_test, y_test) = keras.datasets.imdb.load_data()"
]
},
{
"cell_type": "code",
"execution_count": 40,
"metadata": {},
"outputs": [],
"source": [
"X_train[0][:10]"
]
},
{
"cell_type": "code",
"execution_count": 41,
"metadata": {},
"outputs": [],
"source": [
"word_index = keras.datasets.imdb.get_word_index()\n",
"id_to_word = {id_ + 3: word for word, id_ in word_index.items()}\n",
"for id_, token in enumerate((\"<pad>\", \"<sos>\", \"<unk>\")):\n",
" id_to_word[id_] = token\n",
"\" \".join([id_to_word[id_] for id_ in X_train[0][:10]])"
]
},
{
"cell_type": "code",
"execution_count": 42,
"metadata": {},
"outputs": [],
"source": [
"import tensorflow_datasets as tfds\n",
"\n",
"datasets, info = tfds.load(\"imdb_reviews\", as_supervised=True, with_info=True)"
]
},
{
"cell_type": "code",
"execution_count": 43,
"metadata": {},
"outputs": [],
"source": [
"datasets.keys()"
]
},
{
"cell_type": "code",
"execution_count": 44,
"metadata": {},
"outputs": [],
"source": [
"train_size = info.splits[\"train\"].num_examples\n",
"test_size = info.splits[\"test\"].num_examples"
]
},
{
"cell_type": "code",
"execution_count": 45,
"metadata": {},
"outputs": [],
"source": [
"train_size, test_size"
]
},
{
"cell_type": "code",
"execution_count": 46,
"metadata": {},
"outputs": [],
"source": [
"for X_batch, y_batch in datasets[\"train\"].batch(2).take(1):\n",
" for review, label in zip(X_batch.numpy(), y_batch.numpy()):\n",
" print(\"Review:\", review.decode(\"utf-8\")[:200], \"...\")\n",
" print(\"Label:\", label, \"= Positive\" if label else \"= Negative\")\n",
" print()"
]
},
{
"cell_type": "code",
"execution_count": 47,
"metadata": {},
"outputs": [],
"source": [
"def preprocess(X_batch, y_batch):\n",
" X_batch = tf.strings.substr(X_batch, 0, 300)\n",
" X_batch = tf.strings.regex_replace(X_batch, rb\"<br\\s*/?>\", b\" \")\n",
" X_batch = tf.strings.regex_replace(X_batch, b\"[^a-zA-Z']\", b\" \")\n",
" X_batch = tf.strings.split(X_batch)\n",
" return X_batch.to_tensor(default_value=b\"<pad>\"), y_batch"
]
},
{
"cell_type": "code",
"execution_count": 48,
"metadata": {},
"outputs": [],
"source": [
"preprocess(X_batch, y_batch)"
]
},
{
"cell_type": "code",
"execution_count": 49,
"metadata": {},
"outputs": [],
"source": [
"from collections import Counter\n",
"\n",
"vocabulary = Counter()\n",
"for X_batch, y_batch in datasets[\"train\"].batch(32).map(preprocess):\n",
" for review in X_batch:\n",
" vocabulary.update(list(review.numpy()))"
]
},
{
"cell_type": "code",
"execution_count": 50,
"metadata": {},
"outputs": [],
"source": [
"vocabulary.most_common()[:3]"
]
},
{
"cell_type": "code",
"execution_count": 51,
"metadata": {},
"outputs": [],
"source": [
"len(vocabulary)"
]
},
{
"cell_type": "code",
"execution_count": 52,
"metadata": {},
"outputs": [],
"source": [
"vocab_size = 10000\n",
"truncated_vocabulary = [\n",
" word for word, count in vocabulary.most_common()[:vocab_size]]"
]
},
{
"cell_type": "code",
"execution_count": 53,
"metadata": {},
"outputs": [],
"source": [
"word_to_id = {word: index for index, word in enumerate(truncated_vocabulary)}\n",
"for word in b\"This movie was faaaaaantastic\".split():\n",
" print(word_to_id.get(word) or vocab_size)"
]
},
{
"cell_type": "code",
"execution_count": 54,
"metadata": {},
"outputs": [],
"source": [
"words = tf.constant(truncated_vocabulary)\n",
"word_ids = tf.range(len(truncated_vocabulary), dtype=tf.int64)\n",
"vocab_init = tf.lookup.KeyValueTensorInitializer(words, word_ids)\n",
"num_oov_buckets = 1000\n",
"table = tf.lookup.StaticVocabularyTable(vocab_init, num_oov_buckets)"
]
},
{
"cell_type": "code",
"execution_count": 55,
"metadata": {},
"outputs": [],
"source": [
"table.lookup(tf.constant([b\"This movie was faaaaaantastic\".split()]))"
]
},
{
"cell_type": "code",
"execution_count": 56,
"metadata": {},
"outputs": [],
"source": [
"def encode_words(X_batch, y_batch):\n",
" return table.lookup(X_batch), y_batch\n",
"\n",
"train_set = datasets[\"train\"].repeat().batch(32).map(preprocess)\n",
"train_set = train_set.map(encode_words).prefetch(1)"
]
},
{
"cell_type": "code",
"execution_count": 57,
"metadata": {},
"outputs": [],
"source": [
"for X_batch, y_batch in train_set.take(1):\n",
" print(X_batch)\n",
" print(y_batch)"
]
},
{
"cell_type": "code",
"execution_count": 58,
"metadata": {},
"outputs": [],
"source": [
"embed_size = 128\n",
"model = keras.models.Sequential([\n",
" keras.layers.Embedding(vocab_size + num_oov_buckets, embed_size,\n",
" mask_zero=True, # not shown in the book\n",
" input_shape=[None]),\n",
" keras.layers.GRU(128, return_sequences=True),\n",
" keras.layers.GRU(128),\n",
" keras.layers.Dense(1, activation=\"sigmoid\")\n",
"])\n",
"model.compile(loss=\"binary_crossentropy\", optimizer=\"adam\", metrics=[\"accuracy\"])\n",
"history = model.fit(train_set, steps_per_epoch=train_size // 32, epochs=5)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Or using manual masking:"
]
},
{
"cell_type": "code",
"execution_count": 59,
"metadata": {},
"outputs": [],
"source": [
"K = keras.backend\n",
"embed_size = 128\n",
"inputs = keras.layers.Input(shape=[None])\n",
"mask = keras.layers.Lambda(lambda inputs: K.not_equal(inputs, 0))(inputs)\n",
"z = keras.layers.Embedding(vocab_size + num_oov_buckets, embed_size)(inputs)\n",
"z = keras.layers.GRU(128, return_sequences=True)(z, mask=mask)\n",
"z = keras.layers.GRU(128)(z, mask=mask)\n",
"outputs = keras.layers.Dense(1, activation=\"sigmoid\")(z)\n",
"model = keras.models.Model(inputs=[inputs], outputs=[outputs])\n",
"model.compile(loss=\"binary_crossentropy\", optimizer=\"adam\", metrics=[\"accuracy\"])\n",
"history = model.fit(train_set, steps_per_epoch=train_size // 32, epochs=5)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Reusing Pretrained Embeddings"
]
},
{
"cell_type": "code",
"execution_count": 60,
"metadata": {},
"outputs": [],
"source": [
"tf.random.set_seed(42)"
]
},
{
"cell_type": "code",
"execution_count": 61,
"metadata": {},
"outputs": [],
"source": [
"TFHUB_CACHE_DIR = os.path.join(os.curdir, \"my_tfhub_cache\")\n",
"os.environ[\"TFHUB_CACHE_DIR\"] = TFHUB_CACHE_DIR"
]
},
{
"cell_type": "code",
"execution_count": 62,
"metadata": {},
"outputs": [],
"source": [
"import tensorflow_hub as hub\n",
"\n",
"model = keras.Sequential([\n",
" hub.KerasLayer(\"https://tfhub.dev/google/tf2-preview/nnlm-en-dim50/1\",\n",
" dtype=tf.string, input_shape=[], output_shape=[50]),\n",
" keras.layers.Dense(128, activation=\"relu\"),\n",
" keras.layers.Dense(1, activation=\"sigmoid\")\n",
"])\n",
"model.compile(loss=\"binary_crossentropy\", optimizer=\"adam\",\n",
" metrics=[\"accuracy\"])"
]
},
{
"cell_type": "code",
"execution_count": 63,
"metadata": {},
"outputs": [],
"source": [
"for dirpath, dirnames, filenames in os.walk(TFHUB_CACHE_DIR):\n",
" for filename in filenames:\n",
" print(os.path.join(dirpath, filename))"
]
},
{
"cell_type": "code",
"execution_count": 64,
"metadata": {},
"outputs": [],
"source": [
"import tensorflow_datasets as tfds\n",
"\n",
"datasets, info = tfds.load(\"imdb_reviews\", as_supervised=True, with_info=True)\n",
"train_size = info.splits[\"train\"].num_examples\n",
"batch_size = 32\n",
"train_set = datasets[\"train\"].repeat().batch(batch_size).prefetch(1)\n",
"history = model.fit(train_set, steps_per_epoch=train_size // batch_size, epochs=5)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Automatic Translation"
]
},
{
"cell_type": "code",
"execution_count": 65,
"metadata": {},
"outputs": [],
"source": [
"tf.random.set_seed(42)"
]
},
{
"cell_type": "code",
"execution_count": 66,
"metadata": {},
"outputs": [],
"source": [
"vocab_size = 100\n",
"embed_size = 10"
]
},
{
"cell_type": "code",
"execution_count": 67,
"metadata": {},
"outputs": [],
"source": [
"import tensorflow_addons as tfa\n",
"\n",
"encoder_inputs = keras.layers.Input(shape=[None], dtype=np.int32)\n",
"decoder_inputs = keras.layers.Input(shape=[None], dtype=np.int32)\n",
"sequence_lengths = keras.layers.Input(shape=[], dtype=np.int32)\n",
"\n",
"embeddings = keras.layers.Embedding(vocab_size, embed_size)\n",
"encoder_embeddings = embeddings(encoder_inputs)\n",
"decoder_embeddings = embeddings(decoder_inputs)\n",
"\n",
"encoder = keras.layers.LSTM(512, return_state=True)\n",
"encoder_outputs, state_h, state_c = encoder(encoder_embeddings)\n",
"encoder_state = [state_h, state_c]\n",
"\n",
"sampler = tfa.seq2seq.sampler.TrainingSampler()\n",
"\n",
"decoder_cell = keras.layers.LSTMCell(512)\n",
"output_layer = keras.layers.Dense(vocab_size)\n",
"decoder = tfa.seq2seq.basic_decoder.BasicDecoder(decoder_cell, sampler,\n",
" output_layer=output_layer)\n",
"final_outputs, final_state, final_sequence_lengths = decoder(\n",
" decoder_embeddings, initial_state=encoder_state,\n",
" sequence_length=sequence_lengths)\n",
"Y_proba = tf.nn.softmax(final_outputs.rnn_output)\n",
"\n",
"model = keras.models.Model(\n",
" inputs=[encoder_inputs, decoder_inputs, sequence_lengths],\n",
" outputs=[Y_proba])"
]
},
{
"cell_type": "code",
"execution_count": 68,
"metadata": {},
"outputs": [],
"source": [
"model.compile(loss=\"sparse_categorical_crossentropy\", optimizer=\"adam\")"
]
},
{
"cell_type": "code",
"execution_count": 69,
"metadata": {},
"outputs": [],
"source": [
"X = np.random.randint(100, size=10*1000).reshape(1000, 10)\n",
"Y = np.random.randint(100, size=15*1000).reshape(1000, 15)\n",
"X_decoder = np.c_[np.zeros((1000, 1)), Y[:, :-1]]\n",
"seq_lengths = np.full([1000], 15)\n",
"\n",
"history = model.fit([X, X_decoder, seq_lengths], Y, epochs=2)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Bidirectional Recurrent Layers"
]
},
{
"cell_type": "code",
"execution_count": 70,
"metadata": {},
"outputs": [],
"source": [
"model = keras.models.Sequential([\n",
" keras.layers.GRU(10, return_sequences=True, input_shape=[None, 10]),\n",
" keras.layers.Bidirectional(keras.layers.GRU(10, return_sequences=True))\n",
"])\n",
"\n",
"model.summary()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Positional Encoding"
]
},
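{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a quick reference (this cell just restates in math what the `PositionalEncoding` layer below computes, writing $d$ for the embedding size `max_dims`): for each position $p$ and each $i < d/2$,\n",
"\n",
"$P_{(p, 2i)} = \\sin(p / 10000^{2i/d})$ and $P_{(p, 2i+1)} = \\cos(p / 10000^{2i/d})$"
]
},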
{
"cell_type": "code",
"execution_count": 71,
"metadata": {},
"outputs": [],
"source": [
"class PositionalEncoding(keras.layers.Layer):\n",
" def __init__(self, max_steps, max_dims, dtype=tf.float32, **kwargs):\n",
" super().__init__(dtype=dtype, **kwargs)\n",
" if max_dims % 2 == 1: max_dims += 1 # max_dims must be even\n",
" p, i = np.meshgrid(np.arange(max_steps), np.arange(max_dims // 2))\n",
" pos_emb = np.empty((1, max_steps, max_dims))\n",
" pos_emb[0, :, ::2] = np.sin(p / 10000**(2 * i / max_dims)).T\n",
" pos_emb[0, :, 1::2] = np.cos(p / 10000**(2 * i / max_dims)).T\n",
" self.positional_embedding = tf.constant(pos_emb.astype(self.dtype))\n",
" def call(self, inputs):\n",
" shape = tf.shape(inputs)\n",
" return inputs + self.positional_embedding[:, :shape[-2], :shape[-1]]"
]
},
{
"cell_type": "code",
"execution_count": 72,
"metadata": {},
"outputs": [],
"source": [
"max_steps = 201\n",
"max_dims = 512\n",
"pos_emb = PositionalEncoding(max_steps, max_dims)\n",
"PE = pos_emb(np.zeros((1, max_steps, max_dims), np.float32))[0].numpy()"
]
},
{
"cell_type": "code",
"execution_count": 73,
"metadata": {},
"outputs": [],
"source": [
"i1, i2, crop_i = 100, 101, 150\n",
"p1, p2, p3 = 22, 60, 35\n",
"fig, (ax1, ax2) = plt.subplots(nrows=2, ncols=1, sharex=True, figsize=(9, 5))\n",
"ax1.plot([p1, p1], [-1, 1], \"k--\", label=\"$p = {}$\".format(p1))\n",
"ax1.plot([p2, p2], [-1, 1], \"k--\", label=\"$p = {}$\".format(p2), alpha=0.5)\n",
"ax1.plot(p3, PE[p3, i1], \"bx\", label=\"$p = {}$\".format(p3))\n",
"ax1.plot(PE[:,i1], \"b-\", label=\"$i = {}$\".format(i1))\n",
"ax1.plot(PE[:,i2], \"r-\", label=\"$i = {}$\".format(i2))\n",
"ax1.plot([p1, p2], [PE[p1, i1], PE[p2, i1]], \"bo\")\n",
"ax1.plot([p1, p2], [PE[p1, i2], PE[p2, i2]], \"ro\")\n",
"ax1.legend(loc=\"center right\", fontsize=14, framealpha=0.95)\n",
"ax1.set_ylabel(\"$P_{(p,i)}$\", rotation=0, fontsize=16)\n",
"ax1.grid(True, alpha=0.3)\n",
"ax1.hlines(0, 0, max_steps - 1, color=\"k\", linewidth=1, alpha=0.3)\n",
"ax1.axis([0, max_steps - 1, -1, 1])\n",
"ax2.imshow(PE.T[:crop_i], cmap=\"gray\", interpolation=\"bilinear\", aspect=\"auto\")\n",
"ax2.hlines(i1, 0, max_steps - 1, color=\"b\")\n",
"cheat = 2 # need to raise the red line a bit, or else it hides the blue one\n",
"ax2.hlines(i2+cheat, 0, max_steps - 1, color=\"r\")\n",
"ax2.plot([p1, p1], [0, crop_i], \"k--\")\n",
"ax2.plot([p2, p2], [0, crop_i], \"k--\", alpha=0.5)\n",
"ax2.plot([p1, p2], [i2+cheat, i2+cheat], \"ro\")\n",
"ax2.plot([p1, p2], [i1, i1], \"bo\")\n",
"ax2.axis([0, max_steps - 1, 0, crop_i])\n",
"ax2.set_xlabel(\"$p$\", fontsize=16)\n",
"ax2.set_ylabel(\"$i$\", rotation=0, fontsize=16)\n",
"save_fig(\"positional_embedding_plot\")\n",
"plt.show()"
]
},
{
"cell_type": "code",
"execution_count": 74,
"metadata": {},
"outputs": [],
"source": [
"embed_size = 512; max_steps = 500; vocab_size = 10000\n",
"encoder_inputs = keras.layers.Input(shape=[None], dtype=np.int32)\n",
"decoder_inputs = keras.layers.Input(shape=[None], dtype=np.int32)\n",
"embeddings = keras.layers.Embedding(vocab_size, embed_size)\n",
"encoder_embeddings = embeddings(encoder_inputs)\n",
"decoder_embeddings = embeddings(decoder_inputs)\n",
"positional_encoding = PositionalEncoding(max_steps, max_dims=embed_size)\n",
"encoder_in = positional_encoding(encoder_embeddings)\n",
"decoder_in = positional_encoding(decoder_embeddings)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Here is a (very) simplified Transformer (the actual architecture has skip connections, layer norm, dense nets, and most importantly it uses Multi-Head Attention instead of regular Attention):"
]
},
{
"cell_type": "code",
"execution_count": 75,
"metadata": {},
"outputs": [],
"source": [
"Z = encoder_in\n",
"for N in range(6):\n",
" Z = keras.layers.Attention(use_scale=True)([Z, Z])\n",
"\n",
"encoder_outputs = Z\n",
"Z = decoder_in\n",
"for N in range(6):\n",
" Z = keras.layers.Attention(use_scale=True, causal=True)([Z, Z])\n",
" Z = keras.layers.Attention(use_scale=True)([Z, encoder_outputs])\n",
"\n",
"outputs = keras.layers.TimeDistributed(\n",
" keras.layers.Dense(vocab_size, activation=\"softmax\"))(Z)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Here's a basic implementation of the `MultiHeadAttention` layer. One will likely be added to `keras.layers` in the near future. Note that `Conv1D` layers with `kernel_size=1` (and the default `padding=\"valid\"` and `strides=1`) is equivalent to a `TimeDistributed(Dense(...))` layer."
]
},
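{
"cell_type": "markdown",
"metadata": {},
"source": [
"Before diving into the implementation, here is a quick sanity check of that last claim (this cell is not from the book): we copy a `Conv1D` layer's weights into a `TimeDistributed(Dense(...))` layer and verify that both layers produce the same outputs."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Sanity check: Conv1D with kernel_size=1 applies the same affine transformation\n",
"# at every time step, just like TimeDistributed(Dense(...))\n",
"X = tf.random.normal([2, 3, 5])  # 2 sequences, 3 time steps, 5 dimensions\n",
"conv1d = keras.layers.Conv1D(4, kernel_size=1)\n",
"td_dense = keras.layers.TimeDistributed(keras.layers.Dense(4))\n",
"Y_conv = conv1d(X)  # calling the layers builds their weights\n",
"td_dense(X)\n",
"kernel, bias = conv1d.get_weights()                 # kernel shape: (1, 5, 4)\n",
"td_dense.set_weights([kernel.reshape(5, 4), bias])  # Dense kernel shape: (5, 4)\n",
"np.allclose(Y_conv.numpy(), td_dense(X).numpy())"
]
},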
{
"cell_type": "code",
"execution_count": 76,
"metadata": {},
"outputs": [],
"source": [
"K = keras.backend\n",
"\n",
"class MultiHeadAttention(keras.layers.Layer):\n",
" def __init__(self, n_heads, causal=False, use_scale=False, **kwargs):\n",
" self.n_heads = n_heads\n",
" self.causal = causal\n",
" self.use_scale = use_scale\n",
" super().__init__(**kwargs)\n",
" def build(self, batch_input_shape):\n",
" self.dims = batch_input_shape[0][-1]\n",
" self.q_dims, self.v_dims, self.k_dims = [self.dims // self.n_heads] * 3 # could be hyperparameters instead\n",
" self.q_linear = keras.layers.Conv1D(self.n_heads * self.q_dims, kernel_size=1, use_bias=False)\n",
" self.v_linear = keras.layers.Conv1D(self.n_heads * self.v_dims, kernel_size=1, use_bias=False)\n",
" self.k_linear = keras.layers.Conv1D(self.n_heads * self.k_dims, kernel_size=1, use_bias=False)\n",
" self.attention = keras.layers.Attention(causal=self.causal, use_scale=self.use_scale)\n",
" self.out_linear = keras.layers.Conv1D(self.dims, kernel_size=1, use_bias=False)\n",
" super().build(batch_input_shape)\n",
" def _multi_head_linear(self, inputs, linear):\n",
" shape = K.concatenate([K.shape(inputs)[:-1], [self.n_heads, -1]])\n",
" projected = K.reshape(linear(inputs), shape)\n",
" perm = K.permute_dimensions(projected, [0, 2, 1, 3])\n",
" return K.reshape(perm, [shape[0] * self.n_heads, shape[1], -1])\n",
" def call(self, inputs):\n",
" q = inputs[0]\n",
" v = inputs[1]\n",
" k = inputs[2] if len(inputs) > 2 else v\n",
" shape = K.shape(q)\n",
" q_proj = self._multi_head_linear(q, self.q_linear)\n",
" v_proj = self._multi_head_linear(v, self.v_linear)\n",
" k_proj = self._multi_head_linear(k, self.k_linear)\n",
" multi_attended = self.attention([q_proj, v_proj, k_proj])\n",
" shape_attended = K.shape(multi_attended)\n",
" reshaped_attended = K.reshape(multi_attended, [shape[0], self.n_heads, shape_attended[1], shape_attended[2]])\n",
" perm = K.permute_dimensions(reshaped_attended, [0, 2, 1, 3])\n",
" concat = K.reshape(perm, [shape[0], shape_attended[1], -1])\n",
" return self.out_linear(concat)"
]
},
{
"cell_type": "code",
"execution_count": 77,
"metadata": {},
"outputs": [],
"source": [
"Q = np.random.rand(2, 50, 512)\n",
"V = np.random.rand(2, 80, 512)\n",
"multi_attn = MultiHeadAttention(8)\n",
"multi_attn([Q, V]).shape"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Exercise solutions"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 1. to 7."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"See Appendix A."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 8.\n",
"_Exercise:_ Embedded Reber grammars _were used by Hochreiter and Schmidhuber in [their paper](https://homl.info/93) about LSTMs. They are artificial grammars that produce strings such as \"BPBTSXXVPSEPE.\" Check out Jenny Orr's [nice introduction](https://homl.info/108) to this topic. Choose a particular embedded Reber grammar (such as the one represented on Jenny Orr's page), then train an RNN to identify whether a string respects that grammar or not. You will first need to write a function capable of generating a training batch containing about 50% strings that respect the grammar, and 50% that don't._"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"First we need to build a function that generates strings based on a grammar. The grammar will be represented as a list of possible transitions for each state. A transition specifies the string to output (or a grammar to generate it) and the next state."
]
},
{
"cell_type": "code",
"execution_count": 78,
"metadata": {},
"outputs": [],
"source": [
"default_reber_grammar = [\n",
" [(\"B\", 1)], # (state 0) =B=>(state 1)\n",
" [(\"T\", 2), (\"P\", 3)], # (state 1) =T=>(state 2) or =P=>(state 3)\n",
" [(\"S\", 2), (\"X\", 4)], # (state 2) =S=>(state 2) or =X=>(state 4)\n",
" [(\"T\", 3), (\"V\", 5)], # and so on...\n",
" [(\"X\", 3), (\"S\", 6)],\n",
" [(\"P\", 4), (\"V\", 6)],\n",
" [(\"E\", None)]] # (state 6) =E=>(terminal state)\n",
"\n",
"embedded_reber_grammar = [\n",
" [(\"B\", 1)],\n",
" [(\"T\", 2), (\"P\", 3)],\n",
" [(default_reber_grammar, 4)],\n",
" [(default_reber_grammar, 5)],\n",
" [(\"T\", 6)],\n",
" [(\"P\", 6)],\n",
" [(\"E\", None)]]\n",
"\n",
"def generate_string(grammar):\n",
" state = 0\n",
" output = []\n",
" while state is not None:\n",
" index = np.random.randint(len(grammar[state]))\n",
" production, state = grammar[state][index]\n",
" if isinstance(production, list):\n",
" production = generate_string(grammar=production)\n",
" output.append(production)\n",
" return \"\".join(output)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Let's generate a few strings based on the default Reber grammar:"
]
},
{
"cell_type": "code",
"execution_count": 79,
"metadata": {},
"outputs": [],
"source": [
"np.random.seed(42)\n",
"\n",
"for _ in range(25):\n",
" print(generate_string(default_reber_grammar), end=\" \")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Looks good. Now let's generate a few strings based on the embedded Reber grammar:"
]
},
{
"cell_type": "code",
"execution_count": 80,
"metadata": {},
"outputs": [],
"source": [
"np.random.seed(42)\n",
"\n",
"for _ in range(25):\n",
" print(generate_string(embedded_reber_grammar), end=\" \")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Okay, now we need a function to generate strings that do not respect the grammar. We could generate a random string, but the task would be a bit too easy, so instead we will generate a string that respects the grammar, and we will corrupt it by changing just one character:"
]
},
{
"cell_type": "code",
"execution_count": 81,
"metadata": {},
"outputs": [],
"source": [
"POSSIBLE_CHARS = \"BEPSTVX\"\n",
"\n",
"def generate_corrupted_string(grammar, chars=POSSIBLE_CHARS):\n",
" good_string = generate_string(grammar)\n",
" index = np.random.randint(len(good_string))\n",
" good_char = good_string[index]\n",
" bad_char = np.random.choice(sorted(set(chars) - set(good_char)))\n",
" return good_string[:index] + bad_char + good_string[index + 1:]"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Let's look at a few corrupted strings:"
]
},
{
"cell_type": "code",
"execution_count": 82,
"metadata": {},
"outputs": [],
"source": [
"np.random.seed(42)\n",
"\n",
"for _ in range(25):\n",
" print(generate_corrupted_string(embedded_reber_grammar), end=\" \")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We cannot feed strings directly to an RNN, so we need to encode them somehow. One option would be to one-hot encode each character. Another option is to use embeddings. Let's go for the second option (but since there are just a handful of characters, one-hot encoding would probably be a good option as well). For embeddings to work, we need to convert each string into a sequence of character IDs. Let's write a function for that, using each character's index in the string of possible characters \"BEPSTVX\":"
]
},
{
"cell_type": "code",
"execution_count": 83,
"metadata": {},
"outputs": [],
"source": [
"def string_to_ids(s, chars=POSSIBLE_CHARS):\n",
" return [chars.index(c) for c in s]"
]
},
{
"cell_type": "code",
"execution_count": 84,
"metadata": {},
"outputs": [],
"source": [
"string_to_ids(\"BTTTXXVVETE\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We can now generate the dataset, with 50% good strings, and 50% bad strings:"
]
},
{
"cell_type": "code",
"execution_count": 85,
"metadata": {},
"outputs": [],
"source": [
"def generate_dataset(size):\n",
" good_strings = [string_to_ids(generate_string(embedded_reber_grammar))\n",
" for _ in range(size // 2)]\n",
" bad_strings = [string_to_ids(generate_corrupted_string(embedded_reber_grammar))\n",
" for _ in range(size - size // 2)]\n",
" all_strings = good_strings + bad_strings\n",
" X = tf.ragged.constant(all_strings, ragged_rank=1)\n",
" y = np.array([[1.] for _ in range(len(good_strings))] +\n",
" [[0.] for _ in range(len(bad_strings))])\n",
" return X, y"
]
},
{
"cell_type": "code",
"execution_count": 86,
"metadata": {},
"outputs": [],
"source": [
"np.random.seed(42)\n",
"\n",
"X_train, y_train = generate_dataset(10000)\n",
"X_valid, y_valid = generate_dataset(2000)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Let's take a look at the first training sequence:"
]
},
{
"cell_type": "code",
"execution_count": 87,
"metadata": {},
"outputs": [],
"source": [
"X_train[0]"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"What class does it belong to?"
]
},
{
"cell_type": "code",
"execution_count": 88,
"metadata": {},
"outputs": [],
"source": [
"y_train[0]"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Perfect! We are ready to create the RNN to identify good strings. We build a simple sequence binary classifier:"
]
},
{
"cell_type": "code",
"execution_count": 89,
"metadata": {},
"outputs": [],
"source": [
"np.random.seed(42)\n",
"tf.random.set_seed(42)\n",
"\n",
"embedding_size = 5\n",
"\n",
"model = keras.models.Sequential([\n",
" keras.layers.InputLayer(input_shape=[None], dtype=tf.int32, ragged=True),\n",
" keras.layers.Embedding(input_dim=len(POSSIBLE_CHARS), output_dim=embedding_size),\n",
" keras.layers.GRU(30),\n",
" keras.layers.Dense(1, activation=\"sigmoid\")\n",
"])\n",
"optimizer = keras.optimizers.SGD(lr=0.02, momentum = 0.95, nesterov=True)\n",
"model.compile(loss=\"binary_crossentropy\", optimizer=optimizer, metrics=[\"accuracy\"])\n",
"history = model.fit(X_train, y_train, epochs=20, validation_data=(X_valid, y_valid))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now let's test our RNN on two tricky strings: the first one is bad while the second one is good. They only differ by the second to last character. If the RNN gets this right, it shows that it managed to notice the pattern that the second letter should always be equal to the second to last letter. That requires a fairly long short-term memory (which is the reason why we used a GRU cell)."
]
},
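{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a quick aside (this cell is not from the book), we can verify that this pattern indeed holds for strings generated by the embedded Reber grammar:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# In the embedded Reber grammar, the second character (T or P) must always\n",
"# match the second-to-last character, so this should print True:\n",
"generated = [generate_string(embedded_reber_grammar) for _ in range(1000)]\n",
"all(s[1] == s[-2] for s in generated)"
]
},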
{
"cell_type": "code",
"execution_count": 90,
"metadata": {},
"outputs": [],
"source": [
"test_strings = [\"BPBTSSSSSSSXXTTVPXVPXTTTTTVVETE\",\n",
" \"BPBTSSSSSSSXXTTVPXVPXTTTTTVVEPE\"]\n",
"X_test = tf.ragged.constant([string_to_ids(s) for s in test_strings], ragged_rank=1)\n",
"\n",
"y_proba = model.predict(X_test)\n",
"print()\n",
"print(\"Estimated probability that these are Reber strings:\")\n",
"for index, string in enumerate(test_strings):\n",
" print(\"{}: {:.2f}%\".format(string, 100 * y_proba[index][0]))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Ta-da! It worked fine. The RNN found the correct answers with very high confidence. :)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 9.\n",
"_Exercise: Train an EncoderDecoder model that can convert a date string from one format to another (e.g., from \"April 22, 2019\" to \"2019-04-22\")._"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Let's start by creating the dataset. We will use random days between 1000-01-01 and 9999-12-31:"
]
},
{
"cell_type": "code",
"execution_count": 91,
"metadata": {},
"outputs": [],
"source": [
"from datetime import date\n",
"\n",
"# cannot use strftime()'s %B format since it depends on the locale\n",
"MONTHS = [\"January\", \"February\", \"March\", \"April\", \"May\", \"June\",\n",
" \"July\", \"August\", \"September\", \"October\", \"November\", \"December\"]\n",
"\n",
"def random_dates(n_dates):\n",
" min_date = date(1000, 1, 1).toordinal()\n",
" max_date = date(9999, 12, 31).toordinal()\n",
"\n",
" ordinals = np.random.randint(max_date - min_date, size=n_dates) + min_date\n",
" dates = [date.fromordinal(ordinal) for ordinal in ordinals]\n",
"\n",
" x = [MONTHS[dt.month - 1] + \" \" + dt.strftime(\"%d, %Y\") for dt in dates]\n",
" y = [dt.isoformat() for dt in dates]\n",
" return x, y"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Here are a few random dates, displayed in both the input format and the target format:"
]
},
{
"cell_type": "code",
"execution_count": 92,
"metadata": {},
"outputs": [],
"source": [
"np.random.seed(42)\n",
"\n",
"n_dates = 3\n",
"x_example, y_example = random_dates(n_dates)\n",
"print(\"{:25s}{:25s}\".format(\"Input\", \"Target\"))\n",
"print(\"-\" * 50)\n",
"for idx in range(n_dates):\n",
" print(\"{:25s}{:25s}\".format(x_example[idx], y_example[idx]))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Let's get the list of all possible characters in the inputs:"
]
},
{
"cell_type": "code",
"execution_count": 93,
"metadata": {},
"outputs": [],
"source": [
"INPUT_CHARS = \"\".join(sorted(set(\"\".join(MONTHS) + \"0123456789, \")))\n",
"INPUT_CHARS"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"And here's the list of possible characters in the outputs:"
]
},
{
"cell_type": "code",
"execution_count": 94,
"metadata": {},
"outputs": [],
"source": [
"OUTPUT_CHARS = \"0123456789-\""
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Let's write a function to convert a string to a list of character IDs, as we did in the previous exercise:"
]
},
{
"cell_type": "code",
"execution_count": 95,
"metadata": {},
"outputs": [],
"source": [
"def date_str_to_ids(date_str, chars=INPUT_CHARS):\n",
" return [chars.index(c) for c in date_str]"
]
},
{
"cell_type": "code",
"execution_count": 96,
"metadata": {},
"outputs": [],
"source": [
"date_str_to_ids(x_example[0], INPUT_CHARS)"
]
},
{
"cell_type": "code",
"execution_count": 97,
"metadata": {},
"outputs": [],
"source": [
"date_str_to_ids(y_example[0], OUTPUT_CHARS)"
]
},
{
"cell_type": "code",
"execution_count": 98,
"metadata": {},
"outputs": [],
"source": [
"def prepare_date_strs(date_strs, chars=INPUT_CHARS):\n",
" X_ids = [date_str_to_ids(dt, chars) for dt in date_strs]\n",
" X = tf.ragged.constant(X_ids, ragged_rank=1)\n",
" return (X + 1).to_tensor() # using 0 as the padding token ID\n",
"\n",
"def create_dataset(n_dates):\n",
" x, y = random_dates(n_dates)\n",
" return prepare_date_strs(x, INPUT_CHARS), prepare_date_strs(y, OUTPUT_CHARS)"
]
},
{
"cell_type": "code",
"execution_count": 99,
"metadata": {},
"outputs": [],
"source": [
"np.random.seed(42)\n",
"\n",
"X_train, Y_train = create_dataset(10000)\n",
"X_valid, Y_valid = create_dataset(2000)\n",
"X_test, Y_test = create_dataset(2000)"
]
},
{
"cell_type": "code",
"execution_count": 100,
"metadata": {},
"outputs": [],
"source": [
"Y_train[0]"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### First version: a very basic seq2seq model"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Let's first try the simplest possible model: we feed in the input sequence, which first goes through the encoder (an embedding layer followed by a single LSTM layer), which outputs a vector, then it goes through a decoder (a single LSTM layer, followed by a dense output layer), which outputs a sequence of vectors, each representing the estimated probabilities for all possible output character.\n",
"\n",
"Since the decoder expects a sequence as input, we repeat the vector (which is output by the decoder) as many times as the longest possible output sequence."
]
},
{
"cell_type": "code",
"execution_count": 101,
"metadata": {},
"outputs": [],
"source": [
"embedding_size = 32\n",
"max_output_length = Y_train.shape[1]\n",
"\n",
"np.random.seed(42)\n",
"tf.random.set_seed(42)\n",
"\n",
"encoder = keras.models.Sequential([\n",
" keras.layers.Embedding(input_dim=len(INPUT_CHARS) + 1,\n",
" output_dim=embedding_size,\n",
" input_shape=[None]),\n",
" keras.layers.LSTM(128)\n",
"])\n",
"\n",
"decoder = keras.models.Sequential([\n",
" keras.layers.LSTM(128, return_sequences=True),\n",
" keras.layers.Dense(len(OUTPUT_CHARS) + 1, activation=\"softmax\")\n",
"])\n",
"\n",
"model = keras.models.Sequential([\n",
" encoder,\n",
" keras.layers.RepeatVector(max_output_length),\n",
" decoder\n",
"])\n",
"\n",
"optimizer = keras.optimizers.Nadam()\n",
"model.compile(loss=\"sparse_categorical_crossentropy\", optimizer=optimizer,\n",
" metrics=[\"accuracy\"])\n",
"history = model.fit(X_train, Y_train, epochs=20,\n",
" validation_data=(X_valid, Y_valid))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Looks great, we reach 100% validation accuracy! Let's use the model to make some predictions. We will need to be able to convert a sequence of character IDs to a readable string:"
]
},
{
"cell_type": "code",
"execution_count": 102,
"metadata": {},
"outputs": [],
"source": [
"def ids_to_date_strs(ids, chars=OUTPUT_CHARS):\n",
" return [\"\".join([(\"?\" + chars)[index] for index in sequence])\n",
" for sequence in ids]"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now we can use the model to convert some dates"
]
},
{
"cell_type": "code",
"execution_count": 103,
"metadata": {},
"outputs": [],
"source": [
"X_new = prepare_date_strs([\"September 17, 2009\", \"July 14, 1789\"])"
]
},
{
"cell_type": "code",
"execution_count": 104,
"metadata": {},
"outputs": [],
"source": [
"#ids = model.predict_classes(X_new)\n",
"ids = np.argmax(model.predict(X_new), axis=-1)\n",
"for date_str in ids_to_date_strs(ids):\n",
" print(date_str)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Perfect! :)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"However, since the model was only trained on input strings of length 18 (which is the length of the longest date), it does not perform well if we try to use it to make predictions on shorter sequences:"
]
},
{
"cell_type": "code",
"execution_count": 105,
"metadata": {},
"outputs": [],
"source": [
"X_new = prepare_date_strs([\"May 02, 2020\", \"July 14, 1789\"])"
]
},
{
"cell_type": "code",
"execution_count": 106,
"metadata": {},
"outputs": [],
"source": [
"#ids = model.predict_classes(X_new)\n",
"ids = np.argmax(model.predict(X_new), axis=-1)\n",
"for date_str in ids_to_date_strs(ids):\n",
" print(date_str)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Oops! We need to ensure that we always pass sequences of the same length as during training, using padding if necessary. Let's write a little helper function for that:"
]
},
{
"cell_type": "code",
"execution_count": 107,
"metadata": {},
"outputs": [],
"source": [
"max_input_length = X_train.shape[1]\n",
"\n",
"def prepare_date_strs_padded(date_strs):\n",
" X = prepare_date_strs(date_strs)\n",
" if X.shape[1] < max_input_length:\n",
" X = tf.pad(X, [[0, 0], [0, max_input_length - X.shape[1]]])\n",
" return X\n",
"\n",
"def convert_date_strs(date_strs):\n",
" X = prepare_date_strs_padded(date_strs)\n",
" #ids = model.predict_classes(X)\n",
" ids = np.argmax(model.predict(X), axis=-1)\n",
" return ids_to_date_strs(ids)"
]
},
{
"cell_type": "code",
"execution_count": 108,
"metadata": {},
"outputs": [],
"source": [
"convert_date_strs([\"May 02, 2020\", \"July 14, 1789\"])"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Cool! Granted, there are certainly much easier ways to write a date conversion tool (e.g., using regular expressions or even basic string manipulation), but you have to admit that using neural networks is way cooler. ;-)"
]
},
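{
"cell_type": "markdown",
"metadata": {},
"source": [
"For comparison, here is a minimal sketch of that string-manipulation approach (this cell is not from the book). It reuses the `MONTHS` list defined earlier, so it does not depend on the locale:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"def convert_date_strs_manually(date_strs):\n",
"    results = []\n",
"    for date_str in date_strs:\n",
"        month_name, rest = date_str.split(\" \", 1)  # e.g., \"July\" and \"14, 1789\"\n",
"        day, year = rest.split(\", \")\n",
"        month = MONTHS.index(month_name) + 1\n",
"        results.append(\"{}-{:02d}-{:02d}\".format(year, month, int(day)))\n",
"    return results\n",
"\n",
"convert_date_strs_manually([\"May 02, 2020\", \"July 14, 1789\"])"
]
},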
{
"cell_type": "markdown",
"metadata": {},
"source": [
"However, real-life sequence-to-sequence problems will usually be harder, so for the sake of completeness, let's build a more powerful model."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Second version: feeding the shifted targets to the decoder (teacher forcing)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Instead of feeding the decoder a simple repetition of the encoder's output vector, we can feed it the target sequence, shifted by one time step to the right. This way, at each time step the decoder will know what the previous target character was. This should help is tackle more complex sequence-to-sequence problems.\n",
"\n",
"Since the first output character of each target sequence has no previous character, we will need a new token to represent the start-of-sequence (sos).\n",
"\n",
"During inference, we won't know the target, so what will we feed the decoder? We can just predict one character at a time, starting with an sos token, then feeding the decoder all the characters that were predicted so far (we will look at this in more details later in this notebook).\n",
"\n",
"But if the decoder's LSTM expects to get the previous target as input at each step, how shall we pass it it the vector output by the encoder? Well, one option is to ignore the output vector, and instead use the encoder's LSTM state as the initial state of the decoder's LSTM (which requires that encoder's LSTM must have the same number of units as the decoder's LSTM).\n",
"\n",
"Now let's create the decoder's inputs (for training, validation and testing). The sos token will be represented using the last possible output character's ID + 1."
]
},
{
"cell_type": "code",
"execution_count": 109,
"metadata": {},
"outputs": [],
"source": [
"sos_id = len(OUTPUT_CHARS) + 1\n",
"\n",
"def shifted_output_sequences(Y):\n",
" sos_tokens = tf.fill(dims=(len(Y), 1), value=sos_id)\n",
" return tf.concat([sos_tokens, Y[:, :-1]], axis=1)\n",
"\n",
"X_train_decoder = shifted_output_sequences(Y_train)\n",
"X_valid_decoder = shifted_output_sequences(Y_valid)\n",
"X_test_decoder = shifted_output_sequences(Y_test)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Let's take a look at the decoder's training inputs:"
]
},
{
"cell_type": "code",
"execution_count": 110,
"metadata": {},
"outputs": [],
"source": [
"X_train_decoder"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now let's build the model. It's not a simple sequential model anymore, so let's use the functional API:"
]
},
{
"cell_type": "code",
"execution_count": 111,
"metadata": {},
"outputs": [],
"source": [
"encoder_embedding_size = 32\n",
"decoder_embedding_size = 32\n",
"lstm_units = 128\n",
"\n",
"np.random.seed(42)\n",
"tf.random.set_seed(42)\n",
"\n",
"encoder_input = keras.layers.Input(shape=[None], dtype=tf.int32)\n",
"encoder_embedding = keras.layers.Embedding(\n",
" input_dim=len(INPUT_CHARS) + 1,\n",
" output_dim=encoder_embedding_size)(encoder_input)\n",
"_, encoder_state_h, encoder_state_c = keras.layers.LSTM(\n",
" lstm_units, return_state=True)(encoder_embedding)\n",
"encoder_state = [encoder_state_h, encoder_state_c]\n",
"\n",
"decoder_input = keras.layers.Input(shape=[None], dtype=tf.int32)\n",
"decoder_embedding = keras.layers.Embedding(\n",
" input_dim=len(OUTPUT_CHARS) + 2,\n",
" output_dim=decoder_embedding_size)(decoder_input)\n",
"decoder_lstm_output = keras.layers.LSTM(lstm_units, return_sequences=True)(\n",
" decoder_embedding, initial_state=encoder_state)\n",
"decoder_output = keras.layers.Dense(len(OUTPUT_CHARS) + 1,\n",
" activation=\"softmax\")(decoder_lstm_output)\n",
"\n",
"model = keras.models.Model(inputs=[encoder_input, decoder_input],\n",
" outputs=[decoder_output])\n",
"\n",
"optimizer = keras.optimizers.Nadam()\n",
"model.compile(loss=\"sparse_categorical_crossentropy\", optimizer=optimizer,\n",
" metrics=[\"accuracy\"])\n",
"history = model.fit([X_train, X_train_decoder], Y_train, epochs=10,\n",
" validation_data=([X_valid, X_valid_decoder], Y_valid))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"This model also reaches 100% validation accuracy, but it does so even faster."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Let's once again use the model to make some predictions. This time we need to predict characters one by one."
]
},
{
"cell_type": "code",
"execution_count": 112,
"metadata": {},
"outputs": [],
"source": [
"sos_id = len(OUTPUT_CHARS) + 1\n",
"\n",
"def predict_date_strs(date_strs):\n",
" X = prepare_date_strs_padded(date_strs)\n",
" Y_pred = tf.fill(dims=(len(X), 1), value=sos_id)\n",
" for index in range(max_output_length):\n",
" pad_size = max_output_length - Y_pred.shape[1]\n",
" X_decoder = tf.pad(Y_pred, [[0, 0], [0, pad_size]])\n",
" Y_probas_next = model.predict([X, X_decoder])[:, index:index+1]\n",
" Y_pred_next = tf.argmax(Y_probas_next, axis=-1, output_type=tf.int32)\n",
" Y_pred = tf.concat([Y_pred, Y_pred_next], axis=1)\n",
" return ids_to_date_strs(Y_pred[:, 1:])"
]
},
{
"cell_type": "code",
"execution_count": 113,
"metadata": {},
"outputs": [],
"source": [
"predict_date_strs([\"July 14, 1789\", \"May 01, 2020\"])"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Works fine! :)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Third version: using TF-Addons's seq2seq implementation"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Let's build exactly the same model, but using TF-Addon's seq2seq API. The implementation below is almost very similar to the TFA example higher in this notebook, except without the model input to specify the output sequence length, for simplicity (but you can easily add it back in if you need it for your projects, when the output sequences have very different lengths)."
]
},
{
"cell_type": "code",
"execution_count": 114,
"metadata": {},
"outputs": [],
"source": [
"import tensorflow_addons as tfa\n",
"\n",
"np.random.seed(42)\n",
"tf.random.set_seed(42)\n",
"\n",
"encoder_embedding_size = 32\n",
"decoder_embedding_size = 32\n",
"units = 128\n",
"\n",
"encoder_inputs = keras.layers.Input(shape=[None], dtype=np.int32)\n",
"decoder_inputs = keras.layers.Input(shape=[None], dtype=np.int32)\n",
"sequence_lengths = keras.layers.Input(shape=[], dtype=np.int32)\n",
"\n",
"encoder_embeddings = keras.layers.Embedding(\n",
" len(INPUT_CHARS) + 1, encoder_embedding_size)(encoder_inputs)\n",
"\n",
"decoder_embedding_layer = keras.layers.Embedding(\n",
" len(OUTPUT_CHARS) + 2, decoder_embedding_size)\n",
"decoder_embeddings = decoder_embedding_layer(decoder_inputs)\n",
"\n",
"encoder = keras.layers.LSTM(units, return_state=True)\n",
"encoder_outputs, state_h, state_c = encoder(encoder_embeddings)\n",
"encoder_state = [state_h, state_c]\n",
"\n",
"sampler = tfa.seq2seq.sampler.TrainingSampler()\n",
"\n",
"decoder_cell = keras.layers.LSTMCell(units)\n",
"output_layer = keras.layers.Dense(len(OUTPUT_CHARS) + 1)\n",
"\n",
"decoder = tfa.seq2seq.basic_decoder.BasicDecoder(decoder_cell,\n",
" sampler,\n",
" output_layer=output_layer)\n",
"final_outputs, final_state, final_sequence_lengths = decoder(\n",
" decoder_embeddings,\n",
" initial_state=encoder_state)\n",
"Y_proba = keras.layers.Activation(\"softmax\")(final_outputs.rnn_output)\n",
"\n",
"model = keras.models.Model(inputs=[encoder_inputs, decoder_inputs],\n",
" outputs=[Y_proba])\n",
"optimizer = keras.optimizers.Nadam()\n",
"model.compile(loss=\"sparse_categorical_crossentropy\", optimizer=optimizer,\n",
" metrics=[\"accuracy\"])\n",
"history = model.fit([X_train, X_train_decoder], Y_train, epochs=15,\n",
" validation_data=([X_valid, X_valid_decoder], Y_valid))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"And once again, 100% validation accuracy! To use the model, we can just reuse the `predict_date_strs()` function:"
]
},
{
"cell_type": "code",
"execution_count": 115,
"metadata": {
"scrolled": true
},
"outputs": [],
"source": [
"predict_date_strs([\"July 14, 1789\", \"May 01, 2020\"])"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"However, there's a much more efficient way to perform inference. Until now, during inference, we've run the model once for each new character. Instead, we can create a new decoder, based on the previously trained layers, but using a `GreedyEmbeddingSampler` instead of a `TrainingSampler`.\n",
"\n",
"At each time step, the `GreedyEmbeddingSampler` will compute the argmax of the decoder's outputs, and run the resulting token IDs through the decoder's embedding layer. Then it will feed the resulting embeddings to the decoder's LSTM cell at the next time step. This way, we only need to run the decoder once to get the full prediction."
]
},
{
"cell_type": "code",
"execution_count": 116,
"metadata": {},
"outputs": [],
"source": [
"inference_sampler = tfa.seq2seq.sampler.GreedyEmbeddingSampler(\n",
" embedding_fn=decoder_embedding_layer)\n",
"inference_decoder = tfa.seq2seq.basic_decoder.BasicDecoder(\n",
" decoder_cell, inference_sampler, output_layer=output_layer,\n",
" maximum_iterations=max_output_length)\n",
"batch_size = tf.shape(encoder_inputs)[:1]\n",
"start_tokens = tf.fill(dims=batch_size, value=sos_id)\n",
"final_outputs, final_state, final_sequence_lengths = inference_decoder(\n",
" start_tokens,\n",
" initial_state=encoder_state,\n",
" start_tokens=start_tokens,\n",
" end_token=0)\n",
"\n",
"inference_model = keras.models.Model(inputs=[encoder_inputs],\n",
" outputs=[final_outputs.sample_id])"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"A few notes:\n",
"* The `GreedyEmbeddingSampler` needs the `start_tokens` (a vector containing the start-of-sequence ID for each decoder sequence), and the `end_token` (the decoder will stop decoding a sequence once the model outputs this token).\n",
"* We must set `maximum_iterations` when creating the `BasicDecoder`, or else it may run into an infinite loop (if the model never outputs the end token for at least one of the sequences). This would force you would to restart the Jupyter kernel.\n",
"* The decoder inputs are not needed anymore, since all the decoder inputs are generated dynamically based on the outputs from the previous time step.\n",
"* The model's outputs are `final_outputs.sample_id` instead of the softmax of `final_outputs.rnn_outputs`. This allows us to directly get the argmax of the model's outputs. If you prefer to have access to the logits, you can replace `final_outputs.sample_id` with `final_outputs.rnn_outputs`."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now we can write a simple function that uses the model to perform the date format conversion:"
]
},
{
"cell_type": "code",
"execution_count": 117,
"metadata": {},
"outputs": [],
"source": [
"def fast_predict_date_strs(date_strs):\n",
" X = prepare_date_strs_padded(date_strs)\n",
" Y_pred = inference_model.predict(X)\n",
" return ids_to_date_strs(Y_pred)"
]
},
{
"cell_type": "code",
"execution_count": 118,
"metadata": {
"scrolled": true
},
"outputs": [],
"source": [
"fast_predict_date_strs([\"July 14, 1789\", \"May 01, 2020\"])"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Let's check that it really is faster:"
]
},
{
"cell_type": "code",
"execution_count": 119,
"metadata": {},
"outputs": [],
"source": [
"%timeit predict_date_strs([\"July 14, 1789\", \"May 01, 2020\"])"
]
},
{
"cell_type": "code",
"execution_count": 120,
"metadata": {},
"outputs": [],
"source": [
"%timeit fast_predict_date_strs([\"July 14, 1789\", \"May 01, 2020\"])"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"That's more than a 10x speedup! And it would be even more if we were handling longer sequences."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Fourth version: using TF-Addons's seq2seq implementation with a scheduled sampler"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"**Warning**: due to a TF bug, this version only works using TensorFlow 2.2 or above."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"When we trained the previous model, at each time step _t_ we gave the model the target token for time step _t_ - 1. However, at inference time, the model did not get the previous target at each time step. Instead, it got the previous prediction. So there is a discrepancy between training and inference, which may lead to disappointing performance. To alleviate this, we can gradually replace the targets with the predictions, during training. For this, we just need to replace the `TrainingSampler` with a `ScheduledEmbeddingTrainingSampler`, and use a Keras callback to gradually increase the `sampling_probability` (i.e., the probability that the decoder will use the prediction from the previous time step rather than the target for the previous time step)."
]
},
{
"cell_type": "code",
"execution_count": 121,
"metadata": {},
"outputs": [],
"source": [
"import tensorflow_addons as tfa\n",
"\n",
"np.random.seed(42)\n",
"tf.random.set_seed(42)\n",
"\n",
"n_epochs = 20\n",
"encoder_embedding_size = 32\n",
"decoder_embedding_size = 32\n",
"units = 128\n",
"\n",
"encoder_inputs = keras.layers.Input(shape=[None], dtype=np.int32)\n",
"decoder_inputs = keras.layers.Input(shape=[None], dtype=np.int32)\n",
"sequence_lengths = keras.layers.Input(shape=[], dtype=np.int32)\n",
"\n",
"encoder_embeddings = keras.layers.Embedding(\n",
" len(INPUT_CHARS) + 1, encoder_embedding_size)(encoder_inputs)\n",
"\n",
"decoder_embedding_layer = keras.layers.Embedding(\n",
" len(OUTPUT_CHARS) + 2, decoder_embedding_size)\n",
"decoder_embeddings = decoder_embedding_layer(decoder_inputs)\n",
"\n",
"encoder = keras.layers.LSTM(units, return_state=True)\n",
"encoder_outputs, state_h, state_c = encoder(encoder_embeddings)\n",
"encoder_state = [state_h, state_c]\n",
"\n",
"sampler = tfa.seq2seq.sampler.ScheduledEmbeddingTrainingSampler(\n",
" sampling_probability=0.,\n",
" embedding_fn=decoder_embedding_layer)\n",
"# we must set the sampling_probability after creating the sampler\n",
"# (see https://github.com/tensorflow/addons/pull/1714)\n",
"sampler.sampling_probability = tf.Variable(0.)\n",
"\n",
"decoder_cell = keras.layers.LSTMCell(units)\n",
"output_layer = keras.layers.Dense(len(OUTPUT_CHARS) + 1)\n",
"\n",
"decoder = tfa.seq2seq.basic_decoder.BasicDecoder(decoder_cell,\n",
" sampler,\n",
" output_layer=output_layer)\n",
"final_outputs, final_state, final_sequence_lengths = decoder(\n",
" decoder_embeddings,\n",
" initial_state=encoder_state)\n",
"Y_proba = keras.layers.Activation(\"softmax\")(final_outputs.rnn_output)\n",
"\n",
"model = keras.models.Model(inputs=[encoder_inputs, decoder_inputs],\n",
" outputs=[Y_proba])\n",
"optimizer = keras.optimizers.Nadam()\n",
"model.compile(loss=\"sparse_categorical_crossentropy\", optimizer=optimizer,\n",
" metrics=[\"accuracy\"])\n",
"\n",
"def update_sampling_probability(epoch, logs):\n",
" proba = min(1.0, epoch / (n_epochs - 10))\n",
" sampler.sampling_probability.assign(proba)\n",
"\n",
"sampling_probability_cb = keras.callbacks.LambdaCallback(\n",
" on_epoch_begin=update_sampling_probability)\n",
"history = model.fit([X_train, X_train_decoder], Y_train, epochs=n_epochs,\n",
" validation_data=([X_valid, X_valid_decoder], Y_valid),\n",
" callbacks=[sampling_probability_cb])"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Not quite 100% validation accuracy, but close enough!"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"For inference, we could do the exact same thing as earlier, using a `GreedyEmbeddingSampler`. However, just for the sake of completeness, let's use a `SampleEmbeddingSampler` instead. It's almost the same thing, except that instead of using the argmax of the model's output to find the token ID, it treats the outputs as logits and uses them to sample a token ID randomly. This can be useful when you want to generate text. The `softmax_temperature` argument serves the \n",
"same purpose as when we generated Shakespeare-like text (the higher this argument, the more random the generated text will be)."
]
},
{
"cell_type": "code",
"execution_count": 122,
"metadata": {},
"outputs": [],
"source": [
"softmax_temperature = tf.Variable(1.)\n",
"\n",
"inference_sampler = tfa.seq2seq.sampler.SampleEmbeddingSampler(\n",
" embedding_fn=decoder_embedding_layer,\n",
" softmax_temperature=softmax_temperature)\n",
"inference_decoder = tfa.seq2seq.basic_decoder.BasicDecoder(\n",
" decoder_cell, inference_sampler, output_layer=output_layer,\n",
" maximum_iterations=max_output_length)\n",
"batch_size = tf.shape(encoder_inputs)[:1]\n",
"start_tokens = tf.fill(dims=batch_size, value=sos_id)\n",
"final_outputs, final_state, final_sequence_lengths = inference_decoder(\n",
" start_tokens,\n",
" initial_state=encoder_state,\n",
" start_tokens=start_tokens,\n",
" end_token=0)\n",
"\n",
"inference_model = keras.models.Model(inputs=[encoder_inputs],\n",
" outputs=[final_outputs.sample_id])"
]
},
{
"cell_type": "code",
"execution_count": 123,
"metadata": {},
"outputs": [],
"source": [
"def creative_predict_date_strs(date_strs, temperature=1.0):\n",
" softmax_temperature.assign(temperature)\n",
" X = prepare_date_strs_padded(date_strs)\n",
" Y_pred = inference_model.predict(X)\n",
" return ids_to_date_strs(Y_pred)"
]
},
{
"cell_type": "code",
"execution_count": 124,
"metadata": {},
"outputs": [],
"source": [
"tf.random.set_seed(42)\n",
"\n",
"creative_predict_date_strs([\"July 14, 1789\", \"May 01, 2020\"])"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Dates look good at room temperature. Now let's heat things up a bit:"
]
},
{
"cell_type": "code",
"execution_count": 125,
"metadata": {},
"outputs": [],
"source": [
"tf.random.set_seed(42)\n",
"\n",
"creative_predict_date_strs([\"July 14, 1789\", \"May 01, 2020\"],\n",
" temperature=5.)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Oops, the dates are overcooked, now. Let's call them \"creative\" dates."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Fifth version: using TFA seq2seq, the Keras subclassing API and attention mechanisms"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The sequences in this problem are pretty short, but if we wanted to tackle longer sequences, we would probably have to use attention mechanisms. While it's possible to code our own implementation, it's simpler and more efficient to use TF-Addons's implementation instead. Let's do that now, this time using Keras' subclassing API.\n",
"\n",
"**Warning**: due to a TensorFlow bug (see [this issue](https://github.com/tensorflow/addons/issues/1153) for details), the `get_initial_state()` method fails in eager mode, so for now we have to use the subclassing API, as Keras automatically calls `tf.function()` on the `call()` method (so it runs in graph mode)."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"In this implementation, we've reverted back to using the `TrainingSampler`, for simplicity (but you can easily tweak it to use a `ScheduledEmbeddingTrainingSampler` instead). We also use a `GreedyEmbeddingSampler` during inference, so this class is pretty easy to use:"
]
},
{
"cell_type": "code",
"execution_count": 126,
"metadata": {},
"outputs": [],
"source": [
"class DateTranslation(keras.models.Model):\n",
" def __init__(self, units=128, encoder_embedding_size=32,\n",
" decoder_embedding_size=32, **kwargs):\n",
" super().__init__(**kwargs)\n",
" self.encoder_embedding = keras.layers.Embedding(\n",
" input_dim=len(INPUT_CHARS) + 1,\n",
" output_dim=encoder_embedding_size)\n",
" self.encoder = keras.layers.LSTM(units,\n",
" return_sequences=True,\n",
" return_state=True)\n",
" self.decoder_embedding = keras.layers.Embedding(\n",
" input_dim=len(OUTPUT_CHARS) + 2,\n",
" output_dim=decoder_embedding_size)\n",
" self.attention = tfa.seq2seq.LuongAttention(units)\n",
" decoder_inner_cell = keras.layers.LSTMCell(units)\n",
" self.decoder_cell = tfa.seq2seq.AttentionWrapper(\n",
" cell=decoder_inner_cell,\n",
" attention_mechanism=self.attention)\n",
" output_layer = keras.layers.Dense(len(OUTPUT_CHARS) + 1)\n",
" self.decoder = tfa.seq2seq.BasicDecoder(\n",
" cell=self.decoder_cell,\n",
" sampler=tfa.seq2seq.sampler.TrainingSampler(),\n",
" output_layer=output_layer)\n",
" self.inference_decoder = tfa.seq2seq.BasicDecoder(\n",
" cell=self.decoder_cell,\n",
" sampler=tfa.seq2seq.sampler.GreedyEmbeddingSampler(\n",
" embedding_fn=self.decoder_embedding),\n",
" output_layer=output_layer,\n",
" maximum_iterations=max_output_length)\n",
"\n",
" def call(self, inputs, training=None):\n",
" encoder_input, decoder_input = inputs\n",
" encoder_embeddings = self.encoder_embedding(encoder_input)\n",
" encoder_outputs, encoder_state_h, encoder_state_c = self.encoder(\n",
" encoder_embeddings,\n",
" training=training)\n",
" encoder_state = [encoder_state_h, encoder_state_c]\n",
"\n",
" self.attention(encoder_outputs,\n",
" setup_memory=True)\n",
" \n",
" decoder_embeddings = self.decoder_embedding(decoder_input)\n",
"\n",
" decoder_initial_state = self.decoder_cell.get_initial_state(\n",
" decoder_embeddings)\n",
" decoder_initial_state = decoder_initial_state.clone(\n",
" cell_state=encoder_state)\n",
" \n",
" if training:\n",
" decoder_outputs, _, _ = self.decoder(\n",
" decoder_embeddings,\n",
" initial_state=decoder_initial_state,\n",
" training=training)\n",
" else:\n",
" start_tokens = tf.zeros_like(encoder_input[:, 0]) + sos_id\n",
" decoder_outputs, _, _ = self.inference_decoder(\n",
" decoder_embeddings,\n",
" initial_state=decoder_initial_state,\n",
" start_tokens=start_tokens,\n",
" end_token=0)\n",
"\n",
" return tf.nn.softmax(decoder_outputs.rnn_output)"
]
},
{
"cell_type": "code",
"execution_count": 127,
"metadata": {},
"outputs": [],
"source": [
"np.random.seed(42)\n",
"tf.random.set_seed(42)\n",
"\n",
"model = DateTranslation()\n",
"optimizer = keras.optimizers.Nadam()\n",
"model.compile(loss=\"sparse_categorical_crossentropy\", optimizer=optimizer,\n",
" metrics=[\"accuracy\"])\n",
"history = model.fit([X_train, X_train_decoder], Y_train, epochs=25,\n",
" validation_data=([X_valid, X_valid_decoder], Y_valid))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Not quite 100% validation accuracy, but close. It took a bit longer to converge this time, but there were also more parameters and more computations per iteration. And we did not use a scheduled sampler.\n",
"\n",
"To use the model, we can write yet another little function:"
]
},
{
"cell_type": "code",
"execution_count": 128,
"metadata": {},
"outputs": [],
"source": [
"def fast_predict_date_strs_v2(date_strs):\n",
" X = prepare_date_strs_padded(date_strs)\n",
" X_decoder = tf.zeros(shape=(len(X), max_output_length), dtype=tf.int32)\n",
" Y_probas = model.predict([X, X_decoder])\n",
" Y_pred = tf.argmax(Y_probas, axis=-1)\n",
" return ids_to_date_strs(Y_pred)"
]
},
{
"cell_type": "code",
"execution_count": 129,
"metadata": {},
"outputs": [],
"source": [
"fast_predict_date_strs_v2([\"July 14, 1789\", \"May 01, 2020\"])"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"There are still a few interesting features from TF-Addons that you may want to look at:\n",
"* Using a `BeamSearchDecoder` rather than a `BasicDecoder` for inference. Instead of outputing the character with the highest probability, this decoder keeps track of the several candidates, and keeps only the most likely sequences of candidates (see chapter 16 in the book for more details).\n",
"* Setting masks or specifying `sequence_length` if the input or target sequences may have very different lengths.\n",
"* Using a `ScheduledOutputTrainingSampler`, which gives you more flexibility than the `ScheduledEmbeddingTrainingSampler` to decide how to feed the output at time _t_ to the cell at time _t_+1. By default it feeds the outputs directly to cell, without computing the argmax ID and passing it through an embedding layer. Alternatively, you specify a `next_inputs_fn` function that will be used to convert the cell outputs to inputs at the next step."
]
},
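{
"cell_type": "markdown",
"metadata": {},
"source": [
"For example, here is a minimal sketch of beam search inference with TF-Addons (not executed in this notebook), closely following the Beam Search section of chapter 16 in the book. It assumes the `decoder_cell`, `output_layer`, `decoder_embedding_layer`, `encoder_state`, `encoder_inputs` and `sos_id` variables from the third version above, and it only builds the decoding op, without wiring it into a full inference model:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Sketch only (not run here): beam search inference with TF-Addons\n",
"beam_width = 10\n",
"beam_decoder = tfa.seq2seq.beam_search_decoder.BeamSearchDecoder(\n",
"    cell=decoder_cell, beam_width=beam_width, output_layer=output_layer,\n",
"    maximum_iterations=max_output_length)\n",
"# the encoder's final state is replicated beam_width times (once per candidate)\n",
"tiled_encoder_state = tfa.seq2seq.beam_search_decoder.tile_batch(\n",
"    encoder_state, multiplier=beam_width)\n",
"batch_size = tf.shape(encoder_inputs)[:1]\n",
"start_tokens = tf.fill(dims=batch_size, value=sos_id)\n",
"beam_outputs, _, _ = beam_decoder(\n",
"    decoder_embedding_layer.embeddings,  # the decoder's embedding matrix\n",
"    start_tokens=start_tokens,\n",
"    end_token=0,\n",
"    initial_state=tiled_encoder_state)\n",
"# beam_outputs.predicted_ids has shape [batch size, time steps, beam width]"
]
},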
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 10.\n",
"_Exercise: Go through TensorFlow's [Neural Machine Translation with Attention tutorial](https://homl.info/nmttuto)._"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Simply open the Colab and follow its instructions. Alternatively, if you want a simpler example of using TF-Addons's seq2seq implementation for Neural Machine Translation (NMT), look at the solution to the previous question. The last model implementation will give you a simpler example of using TF-Addons to build an NMT model using attention mechanisms."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 11.\n",
"_Exercise: Use one of the recent language models (e.g., GPT) to generate more convincing Shakespearean text._"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The simplest way to use recent language models is to use the excellent [transformers library](https://huggingface.co/transformers/), open sourced by Hugging Face. It provides many modern neural net architectures (including BERT, GPT-2, RoBERTa, XLM, DistilBert, XLNet and more) for Natural Language Processing (NLP), including many pretrained models. It relies on either TensorFlow or PyTorch. Best of all: it's amazingly simple to use."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"First, let's load a pretrained model. In this example, we will use OpenAI's GPT model, with an additional Language Model on top (just a linear layer with weights tied to the input embeddings). Let's import it and load the pretrained weights (this will download about 445MB of data to `~/.cache/torch/transformers`):"
]
},
{
"cell_type": "code",
"execution_count": 130,
"metadata": {},
"outputs": [],
"source": [
"from transformers import TFOpenAIGPTLMHeadModel\n",
"\n",
"model = TFOpenAIGPTLMHeadModel.from_pretrained(\"openai-gpt\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Next we will need a specialized tokenizer for this model. This one will try to use the [spaCy](https://spacy.io/) and [ftfy](https://pypi.org/project/ftfy/) libraries if they are installed, or else it will fall back to BERT's `BasicTokenizer` followed by Byte-Pair Encoding (which should be fine for most use cases)."
]
},
{
"cell_type": "code",
"execution_count": 131,
"metadata": {},
"outputs": [],
"source": [
"from transformers import OpenAIGPTTokenizer\n",
"\n",
"tokenizer = OpenAIGPTTokenizer.from_pretrained(\"openai-gpt\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now let's use the tokenizer to tokenize and encode the prompt text:"
]
},
{
"cell_type": "code",
"execution_count": 132,
"metadata": {},
"outputs": [],
"source": [
"prompt_text = \"This royal throne of kings, this sceptred isle\"\n",
"encoded_prompt = tokenizer.encode(prompt_text,\n",
" add_special_tokens=False,\n",
" return_tensors=\"tf\")\n",
"encoded_prompt"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Easy! Next, let's use the model to generate text after the prompt. We will generate 5 different sentences, each starting with the prompt text, followed by 40 additional tokens. For an explanation of what all the hyperparameters do, make sure to check out this great [blog post](https://huggingface.co/blog/how-to-generate) by Patrick von Platen (from Hugging Face). You can play around with the hyperparameters to try to obtain better results."
]
},
{
"cell_type": "code",
"execution_count": 133,
"metadata": {},
"outputs": [],
"source": [
"num_sequences = 5\n",
"length = 40\n",
"\n",
"generated_sequences = model.generate(\n",
" input_ids=encoded_prompt,\n",
" do_sample=True,\n",
" max_length=length + len(encoded_prompt[0]),\n",
" temperature=1.0,\n",
" top_k=0,\n",
" top_p=0.9,\n",
" repetition_penalty=1.0,\n",
" num_return_sequences=num_sequences,\n",
")\n",
"\n",
"generated_sequences"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now let's decode the generated sequences and print them:"
]
},
{
"cell_type": "code",
"execution_count": 134,
"metadata": {},
"outputs": [],
"source": [
"for sequence in generated_sequences:\n",
" text = tokenizer.decode(sequence, clean_up_tokenization_spaces=True)\n",
" print(text)\n",
" print(\"-\" * 80)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"You can try more recent (and larger) models, such as GPT-2, CTRL, Transformer-XL or XLNet, which are all available as pretrained models in the transformers library, including variants with Language Models on top. The preprocessing steps vary slightly between models, so make sure to check out this [generation example](https://github.com/huggingface/transformers/blob/master/examples/run_generation.py) from the transformers documentation (this example uses PyTorch, but it will work with very little tweaks, such as adding `TF` at the beginning of the model class name, removing the `.to()` method calls, and using `return_tensors=\"tf\"` instead of `\"pt\"`."
]
},
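{
"cell_type": "markdown",
"metadata": {},
"source": [
"For example, here is a minimal sketch (not executed in this notebook) of what switching to GPT-2 might look like, using the transformers library's `GPT2Tokenizer` and `TFGPT2LMHeadModel` classes and the same `generate()` workflow as above (the variable names are just for illustration, and the first call downloads the pretrained weights):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from transformers import GPT2Tokenizer, TFGPT2LMHeadModel\n",
"\n",
"# load the pretrained GPT-2 tokenizer and model (the TF prefix selects the\n",
"# TensorFlow implementation rather than the PyTorch one)\n",
"gpt2_tokenizer = GPT2Tokenizer.from_pretrained(\"gpt2\")\n",
"gpt2_model = TFGPT2LMHeadModel.from_pretrained(\"gpt2\")\n",
"\n",
"# encode the same Shakespearean prompt and sample a few continuations\n",
"gpt2_prompt = gpt2_tokenizer.encode(prompt_text, return_tensors=\"tf\")\n",
"gpt2_sequences = gpt2_model.generate(\n",
"    input_ids=gpt2_prompt,\n",
"    do_sample=True,\n",
"    max_length=40 + len(gpt2_prompt[0]),\n",
"    top_p=0.9,\n",
"    num_return_sequences=3,\n",
")\n",
"\n",
"for sequence in gpt2_sequences:\n",
"    print(gpt2_tokenizer.decode(sequence, clean_up_tokenization_spaces=True))\n",
"    print(\"-\" * 80)"
]
},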
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Hope you enjoyed this chapter! :)"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.7.9"
},
"nav_menu": {},
"toc": {
"navigate_menu": true,
"number_sections": true,
"sideBar": true,
"threshold": 6,
"toc_cell": false,
"toc_section_display": "block",
"toc_window_display": false
}
},
"nbformat": 4,
"nbformat_minor": 4
}