Aurélien Geron
b3081ceab6
Use cloned model when reusing a pretrained model, fixes #454
2021-10-07 17:41:46 +13:00
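The pitfall behind this fix (a framework-free sketch of the idea, not the actual notebook diff): when you build a new model directly on a pretrained model's layers, both models share the same weight objects, so fine-tuning the new model silently modifies the original. Cloning first (which is what `keras.models.clone_model` plus a weight copy achieves) keeps the original intact. The dict below is a hypothetical stand-in for a model's weights:

```python
import copy

# Hypothetical stand-in for a pretrained model: its "weights" are a mutable dict.
pretrained = {"dense_1": [0.5, -0.3], "dense_2": [1.2]}

# Reusing the layers directly shares the underlying weight objects...
model_b_shared = pretrained
model_b_shared["dense_1"][0] = 99.0   # "fine-tuning" model B
print(pretrained["dense_1"][0])       # 99.0 -- the original was modified too!

# ...whereas cloning first (the fix) leaves the original untouched.
pretrained = {"dense_1": [0.5, -0.3], "dense_2": [1.2]}
model_b_cloned = copy.deepcopy(pretrained)
model_b_cloned["dense_1"][0] = 99.0
print(pretrained["dense_1"][0])       # 0.5 -- unchanged
```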
austin-chan
995a980df8
[Chapter 11] Text fix for l1 l2 regularization section
2021-09-11 12:21:27 -07:00
Aurélien Geron
108fe1fa53
Replace lr with learning_rate in Keras optimizers, fixes #456
2021-08-31 20:54:35 +12:00
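Context for this rename: recent Keras optimizers take `learning_rate=...` rather than the deprecated `lr=...` (e.g. `SGD(learning_rate=1e-3)`). A minimal illustrative helper — not a Keras API — showing the rename applied to an optimizer config dict:

```python
def upgrade_optimizer_config(config):
    """Rename the deprecated 'lr' key to 'learning_rate'.

    Illustrative helper only (not part of Keras); modern Keras
    optimizers accept learning_rate=... directly.
    """
    config = dict(config)  # shallow copy; don't mutate the caller's dict
    if "lr" in config and "learning_rate" not in config:
        config["learning_rate"] = config.pop("lr")
    return config

print(upgrade_optimizer_config({"lr": 1e-3, "momentum": 0.9}))
# {'momentum': 0.9, 'learning_rate': 0.001}
```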
Aurélien Geron
1b96533668
Replace 'Open in Colab' button
2021-05-26 07:40:58 +12:00
Peretz Cohen
e8caf8f1f9
Update 11_training_deep_neural_networks.ipynb
fix link
2021-05-25 11:16:59 -07:00
Peretz Cohen
51bfd5c4f2
Update 11_training_deep_neural_networks.ipynb
add Open in Kaggle badge
2021-05-24 20:01:17 -07:00
Aurélien Geron
9d33d65ef0
logs["loss"] is the mean loss, not the batch loss anymore (since TF 2.2), fixes #188
2021-03-19 10:50:13 +13:00
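The arithmetic behind this fix: since TF 2.2, the `logs["loss"]` a callback sees is the running mean over the epoch so far, not the last batch's loss. Under that assumption, a batch's own loss can be recovered from two successive running means, since m_n = ((n-1)·m_{n-1} + x_n)/n implies x_n = n·m_n - (n-1)·m_{n-1}. A small sketch (hypothetical helper, not the notebook's code):

```python
def batch_loss_from_means(mean_prev, mean_curr, batch_index):
    """Recover batch batch_index's loss (1-based) from the running means
    reported before and after it.  Hypothetical helper for illustration."""
    n = batch_index
    return n * mean_curr - (n - 1) * mean_prev

# Batch losses 2.0, 1.0, 0.4 produce running means 2.0, 1.5, 1.1333...
means = [2.0, 1.5, (2.0 + 1.0 + 0.4) / 3]
recovered = [batch_loss_from_means(means[i - 1] if i else 0.0, means[i], i + 1)
             for i in range(3)]
print(recovered)  # approximately [2.0, 1.0, 0.4]
```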
Aurélien Geron
55ee303e56
Merge pull request #275 from ibeauregard/changes-chap11
(Chapter 11) Adjust computation of steps per epoch
2021-03-02 22:12:35 +13:00
Aurélien Geron
97af3c635b
layer.updates is deprecated, and model_B.summary() instead of model.summary(), fixes #380
2021-02-19 08:26:32 +13:00
Aurélien Geron
670873843d
Update libraries to latest version, including TensorFlow 2.4.1 and Scikit-Learn 0.24.1
2021-02-14 15:02:09 +13:00
Ian Beauregard
01464e2216
Adjust computation of steps per epoch
The number of steps per epoch is ceil(len(X) / batch_size) rather than floor(len(X) / batch_size). This change also means we do not have to take max(rate, self.last_rate) in the last steps of the OneCycleScheduler.
2020-09-25 08:25:17 -04:00
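The commit body above can be checked with a line of arithmetic: the last, possibly partial batch still counts as a training step, hence ceil rather than floor. A minimal sketch:

```python
import math

def steps_per_epoch(n_samples, batch_size):
    # The final partial batch still counts as a step, so round up.
    return math.ceil(n_samples / batch_size)

print(steps_per_epoch(10_000, 32))  # 313 -- floor would give 312,
                                    # dropping the final batch of 16 samples
```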
Aurélien Geron
2f7ab70295
Update notebooks to latest nbformat
2020-04-06 19:13:12 +12:00
Aurélien Geron
523a878531
Merge branch 'master' of github.com:ageron/handson-ml2
2020-03-11 09:55:57 +13:00
Aurélien Geron
507735beed
Add solutions to chapter 11's coding exercises, fixes #102 and fixes #120
2020-03-11 09:55:45 +13:00
Fai Sharji
de450d8077
Update 11_training_deep_neural_networks.ipynb
Swapped the Activation and BatchNormalization lines in order to make the code consistent with the description and the book (p. 343), i.e. adding the BN layers BEFORE the activation function.
2019-12-25 23:00:26 -05:00
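Why the ordering matters, in a framework-free sketch (plain standardization standing in for a BatchNormalization layer, max(0, x) for ReLU — illustrative only): normalizing before the activation and after it give different outputs, so the two orderings are genuinely different architectures.

```python
def normalize(xs):
    """Standardize to zero mean, unit variance (a stand-in for the core
    of what a BatchNormalization layer computes)."""
    mean = sum(xs) / len(xs)
    var = sum((x - mean) ** 2 for x in xs) / len(xs)
    return [(x - mean) / (var ** 0.5 + 1e-7) for x in xs]

def relu(xs):
    return [max(0.0, x) for x in xs]

pre_activations = [-2.0, -1.0, 0.0, 1.0, 2.0]

bn_then_relu = relu(normalize(pre_activations))  # BN before activation
relu_then_bn = normalize(relu(pre_activations))  # BN after activation

print(bn_then_relu)
print(relu_then_bn)  # different results: the order matters
```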
Aurélien Geron
f810964f51
Make notebooks 10 and 11 runnable in Colab without changes
2019-11-06 11:38:13 +08:00
Aurélien Geron
35bcff5450
Fix error in OneCycleSchedule._interpolate(), fixes #56
2019-11-05 11:17:15 +08:00
Aurélien Geron
3db31444cd
SGD now defaults to lr=0.01 so use 1e-3 explicitly
2019-06-10 10:48:00 +08:00
Aurélien Geron
01784d2f98
Do not use a layer as an activation function, especially with weights
2019-06-09 20:08:53 +08:00
Aurélien Geron
107b90d9b3
Do not use learning_phase anymore, just set training=True/False
2019-05-09 10:39:02 +08:00
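The idea behind this change, sketched without TensorFlow: instead of a global learning-phase flag, layers that behave differently at training and inference time (dropout, batch norm) receive an explicit `training=True/False` argument. A minimal inverted-dropout toy (hypothetical, not Keras code):

```python
import random

def dropout(xs, rate=0.5, training=False, rng=None):
    """Minimal dropout: active only when training=True, mirroring the
    explicit training argument that replaced the global learning phase."""
    if not training:
        return list(xs)             # inference: identity
    rng = rng or random.Random(42)  # fixed seed for a reproducible demo
    scale = 1.0 / (1.0 - rate)      # inverted dropout keeps expectations equal
    return [x * scale if rng.random() >= rate else 0.0 for x in xs]

xs = [1.0, 2.0, 3.0, 4.0]
print(dropout(xs, training=False))  # [1.0, 2.0, 3.0, 4.0] -- unchanged
print(dropout(xs, training=True))   # some values zeroed, the rest scaled by 2
```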
Aurélien Geron
fd1e088dab
Add 1cycle scheduling
2019-05-05 12:42:08 +08:00
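The shape of a 1cycle schedule, as a piecewise-linear sketch (an illustration of the idea, not the notebook's OneCycleScheduler; the phase boundaries and default rates here are assumptions): the learning rate ramps up to a maximum for roughly the first half of training, ramps back down symmetrically, then decays to a very small final rate.

```python
def one_cycle_lr(step, total_steps, max_rate, start_rate=None, last_rate=None):
    """Piecewise-linear 1cycle schedule sketch: up for ~45% of the steps,
    down for ~45%, then a final decay to last_rate."""
    start_rate = start_rate if start_rate is not None else max_rate / 10
    last_rate = last_rate if last_rate is not None else start_rate / 1000
    half = int(total_steps * 0.45)

    def interpolate(t, t0, t1, r0, r1):
        return r0 + (r1 - r0) * (t - t0) / (t1 - t0)

    if step <= half:                     # phase 1: warm up
        return interpolate(step, 0, half, start_rate, max_rate)
    if step <= 2 * half:                 # phase 2: cool down
        return interpolate(step, half, 2 * half, max_rate, start_rate)
    return interpolate(step, 2 * half, total_steps, start_rate, last_rate)

# Example: 1000 steps, peak learning rate 0.05
rates = [one_cycle_lr(s, 1000, 0.05) for s in range(1001)]
print(round(max(rates), 4))  # 0.05, reached mid-cycle
```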
Aurélien Geron
df9b91e2e8
Rename chapters 11 to 15 and split chapter 15 into 15 and 16
2019-04-16 20:39:14 +08:00