[Udemy] [Andrei Neagoie, Daniel Bourke] TensorFlow Developer Certificate in 2021: Zero to Mastery [ENG, 2021]

[Daniel Bourke] Learn TensorFlow and Deep Learning fundamentals with Python (code-first introduction) [ENG, 2021]

The first 14 (of 37) hours can be watched for free on YouTube. After that you can either buy the course or find it elsewhere.

Course Link


The course references and recommends the book: Aurélien Géron, Hands-On Machine Learning with Scikit-Learn, Keras, and TensorFlow

00. Getting started with TensorFlow: A guide to the fundamentals

01. Neural Network Regression with TensorFlow

| Hyperparameter | Typical value |
| --- | --- |
| Input layer shape | Same shape as number of features (e.g. 3 for # bedrooms, # bathrooms, # car spaces in housing price prediction) |
| Hidden layer(s) | Problem specific, minimum = 1, maximum = unlimited |
| Neurons per hidden layer | Problem specific, generally 10 to 100 |
| Output layer shape | Same shape as desired prediction shape (e.g. 1 for house price) |
| Hidden activation | Usually ReLU (rectified linear unit) |
| Output activation | None, ReLU, logistic/tanh |
| Loss function | MSE (mean squared error) or MAE (mean absolute error); Huber (combination of MAE/MSE) if outliers |
| Optimizer | SGD (stochastic gradient descent), Adam |
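The table's typical values can be wired together in one small model. A minimal sketch for the housing-price example from the table (the 3 input features and the layer sizes follow the table; the exact numbers are illustrative):

```python
import tensorflow as tf

# Hypothetical housing-price regression model following the table:
# input shape = 3 features (# bedrooms, # bathrooms, # car spaces),
# one ReLU hidden layer, a single output unit for the price.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(3,)),                 # input layer shape = number of features
    tf.keras.layers.Dense(100, activation="relu"),     # hidden layer, 10-100 neurons typical
    tf.keras.layers.Dense(1)                           # output shape = 1 (house price)
])
model.compile(loss="mae",                              # MAE loss from the table
              optimizer=tf.keras.optimizers.Adam())    # Adam optimizer from the table
```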

Steps in modelling with TensorFlow

In TensorFlow, there are typically 3 fundamental steps to creating and training a model.

  1. Creating a model - piece together the layers of a neural network yourself (using the Functional or Sequential API) or import a previously built model (known as transfer learning).
  2. Compiling a model - defining how a models performance should be measured (loss/metrics) as well as defining how it should improve (optimizer).
  3. Fitting a model - letting the model try to find patterns in the data (how does X get to y).

import tensorflow as tf

# Set random seed
tf.random.set_seed(42)

# Create a model using the Sequential API
model = tf.keras.Sequential([
  tf.keras.layers.Dense(50, activation=None),
  tf.keras.layers.Dense(1) # single output unit for regression
])

# Compile the model
model.compile(loss=tf.keras.losses.mae, # mae is short for mean absolute error
              optimizer=tf.keras.optimizers.Adam(learning_rate=0.01), # Adam usually works well; SGD (stochastic gradient descent) is another option
              metrics=["mae"])

# Fit the model
model.fit(X, y, epochs=100)

# Make a prediction with the model
model.predict([17.0])
Improving a model

To improve our model, we alter almost every part of the 3 steps we went through before.

  1. Creating a model - here you might want to add more layers, increase the number of hidden units (also called neurons) within each layer, change the activation functions of each layer.
  2. Compiling a model - you might want to choose a different optimization function or perhaps change the learning rate of the optimization function.
  3. Fitting a model - perhaps you could fit a model for more epochs (leave it training for longer) or on more data (give the model more examples to learn from).
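The three kinds of improvement above can be sketched in code. A hypothetical "improved" version of the earlier model: an extra hidden layer, more neurons, ReLU activations, a smaller learning rate, and more epochs (the toy `X`/`y` data here stands in for whatever dataset you are working with):

```python
import numpy as np
import tensorflow as tf

tf.random.set_seed(42)

# 1. Creating: extra layer, more neurons, ReLU activations
model = tf.keras.Sequential([
    tf.keras.layers.Dense(100, activation="relu"),
    tf.keras.layers.Dense(100, activation="relu"),
    tf.keras.layers.Dense(1)
])

# 2. Compiling: smaller learning rate than the 0.01 used before
model.compile(loss=tf.keras.losses.mae,
              optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),
              metrics=["mae"])

# Toy data (y = X + 10) standing in for the real X and y
X = np.arange(-10, 10, 1.0).reshape(-1, 1)
y = np.arange(-10, 10, 1.0) + 10

# 3. Fitting: more epochs = training for longer
history = model.fit(X, y, epochs=200, verbose=0)
```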

Evaluating a model
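Evaluation typically means measuring the model on data it never saw during training. A minimal sketch with a made-up 80/20 split on toy data (the `y = X + 10` relationship is an assumption for illustration):

```python
import numpy as np
import tensorflow as tf

# Toy data (y = X + 10) just for illustration
X = np.arange(-100, 100, 4.0)
y = X + 10

# 80/20 train/test split (40 training samples, 10 test samples)
X_train, y_train = X[:40], y[:40]
X_test, y_test = X[40:], y[40:]

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(1,)),
    tf.keras.layers.Dense(1)
])
model.compile(loss="mae", optimizer=tf.keras.optimizers.Adam(), metrics=["mae"])
model.fit(X_train.reshape(-1, 1), y_train, epochs=10, verbose=0)

# evaluate() reports loss (and metrics) on data the model has never seen
loss, mae = model.evaluate(X_test.reshape(-1, 1), y_test, verbose=0)
```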

Running experiments to improve a model

Comparing results

Saving a model
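A minimal save/load sketch. The `.h5` (HDF5) suffix selects the format that works across TF/Keras versions; in TF 2.x a plain directory name would instead save in the SavedModel format:

```python
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(1,)),
    tf.keras.layers.Dense(1)
])
model.compile(loss="mae", optimizer="adam")

# Save to disk ("best_model.h5" is a made-up filename)
model.save("best_model.h5")

# Load it back; the loaded model has the same architecture and weights
loaded_model = tf.keras.models.load_model("best_model.h5")
```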

We try applying everything learned above to an example insurance prediction task.

02. Neural Network Classification with TensorFlow

Types of classification

  • Binary classification
  • Multiclass classification (fashion_mnist)
  • Multilabel classification
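Each classification type maps to a different output layer and loss. A sketch of the usual pairings (the label counts other than fashion_mnist's 10 classes are made up):

```python
import tensorflow as tf

# Binary classification: one sigmoid unit + binary crossentropy
binary_head = tf.keras.layers.Dense(1, activation="sigmoid")
binary_loss = tf.keras.losses.BinaryCrossentropy()

# Multiclass (fashion_mnist has 10 classes): one unit per class + softmax;
# SparseCategoricalCrossentropy for integer labels,
# CategoricalCrossentropy for one-hot labels
multiclass_head = tf.keras.layers.Dense(10, activation="softmax")
multiclass_loss = tf.keras.losses.SparseCategoricalCrossentropy()

# Multilabel: one sigmoid per label (labels are not mutually exclusive)
multilabel_head = tf.keras.layers.Dense(5, activation="sigmoid")  # 5 labels is a made-up count
multilabel_loss = tf.keras.losses.BinaryCrossentropy()
```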

03. Convolutional Neural Networks and Computer Vision with TensorFlow

The traditional cats / dogs dataset is replaced with pizza / steak.
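A minimal binary-classification CNN in the spirit of the pizza/steak task (the layer sizes and the 224x224 image size are my assumptions, loosely following the small "Tiny VGG"-style networks common in introductory material, not the course's exact architecture):

```python
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(224, 224, 3)),         # RGB images, assumed size
    tf.keras.layers.Conv2D(10, 3, activation="relu"),   # learn local image features
    tf.keras.layers.MaxPool2D(),                        # downsample feature maps
    tf.keras.layers.Conv2D(10, 3, activation="relu"),
    tf.keras.layers.MaxPool2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(1, activation="sigmoid")      # binary: pizza vs steak
])
model.compile(loss="binary_crossentropy",
              optimizer=tf.keras.optimizers.Adam(),
              metrics=["accuracy"])
```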