Merge branch 'master' of github.com:ageron/handson-ml2
commit 6311ef8184
@@ -1089,15 +1089,6 @@
     "y_pred_main, y_pred_aux = model.predict((X_new_A, X_new_B))"
    ]
   },
-  {
-   "cell_type": "code",
-   "execution_count": 67,
-   "metadata": {},
-   "outputs": [],
-   "source": [
-    "model = WideAndDeepModel(30, activation=\"relu\")"
-   ]
-  },
   {
    "cell_type": "markdown",
    "metadata": {},
@@ -2040,8 +2040,8 @@
    "outputs": [],
    "source": [
     "train_set = mnist_dataset(train_filepaths, shuffle_buffer_size=60000)\n",
-    "valid_set = mnist_dataset(train_filepaths)\n",
-    "test_set = mnist_dataset(train_filepaths)"
+    "valid_set = mnist_dataset(valid_filepaths)\n",
+    "test_set = mnist_dataset(test_filepaths)"
    ]
   },
   {
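This fix matters: before it, the validation and test sets were silently built from the training files, so any validation score would have been meaningless. A minimal sketch of what a `mnist_dataset` helper of this shape could look like (hypothetical; the notebook's real version likely also parses and preprocesses the records):

```python
import tensorflow as tf

# Hypothetical sketch: build a dataset from a list of TFRecord file paths.
# Shuffling happens only when a buffer size is given, so the validation and
# test sets stay in their original order.
def mnist_dataset(filepaths, shuffle_buffer_size=None, batch_size=32):
    dataset = tf.data.TFRecordDataset(filepaths)
    if shuffle_buffer_size:
        dataset = dataset.shuffle(shuffle_buffer_size)
    return dataset.batch(batch_size).prefetch(1)
```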
@@ -2274,7 +2274,7 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "But let's pretend the dataset does not fit in memory, just to make things more interesting. Luckily, each review fits on just one line (they use `<br />` to indicate line breaks), so we can read the reviews using a `TextLineDataset`. If they didn't we would have to preprocess the input files (e.g., converting them to TFRecords). For very large datasets, it would make sense a tool like Apache Beam for that."
+    "But let's pretend the dataset does not fit in memory, just to make things more interesting. Luckily, each review fits on just one line (they use `<br />` to indicate line breaks), so we can read the reviews using a `TextLineDataset`. If they didn't we would have to preprocess the input files (e.g., converting them to TFRecords). For very large datasets, it would make sense to use a tool like Apache Beam for that."
    ]
   },
   {
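To make the corrected sentence concrete, here is a minimal way to read one-review-per-line text files with `tf.data.TextLineDataset` (the file paths below are placeholders):

```python
import tensorflow as tf

# Placeholder paths: each file is assumed to hold one review per line.
filepaths = ["reviews_part_0.txt", "reviews_part_1.txt"]
dataset = tf.data.TextLineDataset(filepaths)
for line in dataset.take(3):  # peek at the first few reviews
    print(line.numpy()[:80])
```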
@@ -2473,7 +2473,7 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "Let's run it on the same `X_example`, just to make sure the word IDs are larger now, since the vocabulary bigger:"
+    "Let's run it on the same `X_example`, just to make sure the word IDs are larger now, since the vocabulary is bigger:"
    ]
   },
   {
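A quick illustration of why a bigger vocabulary shifts word IDs upward: in a lookup table with out-of-vocabulary buckets, the bucket IDs start right after the last in-vocabulary ID. A self-contained sketch (the vocabulary and words are made up; the notebook's own layer may be built differently):

```python
import tensorflow as tf

vocab = ["the", "movie", "was", "great"]  # made-up mini vocabulary
indices = tf.range(len(vocab), dtype=tf.int64)
table_init = tf.lookup.KeyValueTensorInitializer(vocab, indices)
table = tf.lookup.StaticVocabularyTable(table_init, num_oov_buckets=2)
# "awful" is out-of-vocabulary: it lands in a bucket with ID >= len(vocab),
# and growing the vocabulary pushes those bucket IDs even higher.
print(table.lookup(tf.constant([b"the", b"movie", b"was", b"awful"])))
```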
@@ -2540,7 +2540,7 @@
    "source": [
     "class BagOfWords(keras.layers.Layer):\n",
     "    def __init__(self, n_tokens, dtype=tf.int32, **kwargs):\n",
-    "        super().__init__(dtype=tf.int32, **kwargs)\n",
+    "        super().__init__(dtype=dtype, **kwargs)\n",
     "        self.n_tokens = n_tokens\n",
     "    def call(self, inputs):\n",
     "        one_hot = tf.one_hot(inputs, self.n_tokens)\n",
@@ -565,7 +565,7 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "Let's create a neural network that will take observations as inputs, and output the action to take for each observation. To choose an action, the network will estimate a probability for each action, then we will select an action randomly according to the estimated probabilities. In the case of the Cart-Pole environment, there are just two possible actions (left or right), so we only need one output neuron: it will output the probability `p` of the action 0 (left), and of course the probability of action 1 (right) will be `1 - p`."
+    "Let's create a neural network that will take observations as inputs, and output the probabilities of actions to take for each observation. To choose an action, the network will estimate a probability for each action, then we will select an action randomly according to the estimated probabilities. In the case of the Cart-Pole environment, there are just two possible actions (left or right), so we only need one output neuron: it will output the probability `p` of the action 0 (left), and of course the probability of action 1 (right) will be `1 - p`."
    ]
   },
   {
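The corrected sentence describes a policy network; here is a minimal sketch matching it (the hidden layer size and activation are illustrative choices, not fixed by the text):

```python
import tensorflow as tf
from tensorflow import keras

n_inputs = 4  # Cart-Pole observation: position, velocity, angle, angular velocity
model = keras.models.Sequential([
    keras.layers.Dense(5, activation="elu", input_shape=[n_inputs]),
    keras.layers.Dense(1, activation="sigmoid"),  # p = probability of action 0 (left)
])

# Sample an action according to the estimated probability:
obs = tf.zeros([1, n_inputs])  # dummy observation
p_left = model(obs)
action = tf.cast(tf.random.uniform([1, 1]) > p_left, tf.int32)  # 0 = left, 1 = right
```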