A Beginner's Guide to Deep Learning, Neural Networks, and Convolutional Neural Networks

10 May 2023

Deep Learning 101: A Comprehensive Guide to Neural Networks and Convolutional Neural Networks

This article is brought to you by JBI Training, the UK's leading technology training provider. Learn more about JBI's Python training courses, including Python (Advanced), Python Machine Learning, Python for Financial Traders, Data Science and AI/ML (Python), Azure Cloud Introduction & DevOps Introduction.

Introduction:

Deep learning is a subset of machine learning that uses artificial neural networks to model and solve complex problems. Deep learning has seen widespread adoption in recent years and has been applied to a wide range of applications, including image recognition, natural language processing, and autonomous vehicles.

In this comprehensive guide, we will explore the fundamentals of deep learning, including neural networks and convolutional neural networks. We will provide step-by-step instructions for building and optimizing deep learning models, along with real-world use cases and code examples.

I. Introduction to Deep Learning

A. Definition of Deep Learning

Deep Learning is a subfield of machine learning that deals with training artificial neural networks to learn from large amounts of data. The term "deep" refers to the depth of the network, which is determined by the number of layers in the neural network.

B. Brief History of Deep Learning

Deep Learning has its roots in the field of artificial neural networks, which were first proposed in the 1940s. However, it wasn't until the 1980s that the backpropagation algorithm was developed, allowing neural networks to learn more effectively. In the 2000s, deep learning started to gain popularity, with breakthroughs in areas such as image and speech recognition. The advent of big data and powerful GPUs has also contributed to the recent surge in interest in deep learning.

C. Applications of Deep Learning

Deep Learning has a wide range of applications, including image and speech recognition, natural language processing, autonomous vehicles, fraud detection, and recommender systems. Some notable examples of deep learning in action include AlphaGo, the computer program that beat the world champion at the game of Go, and GPT-3, the natural language processing model that can generate human-like text.

II. Neural Networks

A. Definition of Neural Networks

Neural networks are a class of machine learning algorithms that are inspired by the structure and function of the human brain. They are designed to learn and make predictions or decisions based on input data. Neural networks are composed of a large number of interconnected processing nodes, or neurons, which are organized into layers.

The neurons in a neural network are designed to process and transmit information in a way that is similar to the way biological neurons work. Each neuron takes in inputs from other neurons, performs a simple computation, and then sends its output to other neurons in the next layer. This process continues until the output of the network is produced.

The main advantage of neural networks is their ability to learn and generalize from data, meaning they can make predictions on new data that they haven't seen before. This makes neural networks suitable for a wide range of tasks, including image and speech recognition, natural language processing, and predictive analytics.

B. Structure of Neural Networks

Neural networks are structured as a series of interconnected layers that process and transform data. The layers can be thought of as simple processing units that take in data and perform some computation on it. The output of each layer is then passed to the next layer until the final output is produced.

There are three main types of layers in a neural network: input layers, hidden layers, and output layers. Input layers receive the raw data that is fed into the network, hidden layers perform computations on the data, and output layers produce the final output of the network.

In addition to the type of layers, the number of layers in a neural network also plays a critical role in its structure. Networks with just one hidden layer are known as shallow neural networks, while those with two or more hidden layers are known as deep neural networks.

The structure of a neural network is typically defined by its architecture, which specifies the number of layers, the number of nodes in each layer, and the connections between the nodes. Common neural network architectures include feedforward neural networks, convolutional neural networks, and recurrent neural networks. Each architecture is suited to different types of data and tasks.
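As a minimal illustration of what such an architecture looks like in code, here is a small feedforward network defined with the Keras API (which the examples later in this guide also use). The layer sizes and the 20-feature input are arbitrary choices for the sketch, not recommendations.

from tensorflow import keras

# A small feedforward architecture: one input specification, two hidden layers, one output layer.
model = keras.Sequential([
    keras.layers.Dense(64, activation='relu', input_shape=(20,)),   # first hidden layer, 20 input features
    keras.layers.Dense(32, activation='relu'),                      # second hidden layer (making the network "deep")
    keras.layers.Dense(1, activation='sigmoid')                     # output layer for binary classification
])

model.summary()   # prints each layer with its node and parameter counts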

C. Types of Layers in Neural Networks

Neural networks are composed of several layers of neurons, each with a different function. The most common types of layers in neural networks are:

  1. Input layer: The first layer in a neural network, where the input data is fed into the network.

  2. Hidden layer: A layer between the input and output layers, where the neural network processes and transforms the input data.

  3. Output layer: The last layer in a neural network, where the network outputs the results of the computation.

  4. Convolutional layer: A layer used in convolutional neural networks (CNNs), which are commonly used in image and video recognition. Convolutional layers apply a filter to the input data to identify patterns and features.

  5. Recurrent layer: A layer used in recurrent neural networks (RNNs), which are commonly used in natural language processing and speech recognition. Recurrent layers allow the network to process sequences of data by using the output from the previous step as input for the next step.

  6. Pooling layer: A layer used in CNNs to reduce the dimensionality of the input data and extract the most important features.

Each layer in a neural network has a specific function and plays a crucial role in the network's ability to learn and generalize from data.
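To make these layer types concrete, here is a minimal sketch of how each one is created with Keras; the sizes are illustrative choices rather than recommendations. In Keras, the input layer is usually implied by specifying input_shape on the first layer rather than created separately.

from tensorflow import keras

dense = keras.layers.Dense(64, activation='relu')          # fully connected layer (hidden or output)
conv = keras.layers.Conv2D(32, (3, 3), activation='relu')  # convolutional layer: 32 filters of size 3x3
pool = keras.layers.MaxPooling2D((2, 2))                   # pooling layer: keeps the maximum of each 2x2 region
rnn = keras.layers.SimpleRNN(32)                           # recurrent layer: processes a sequence step by step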

D. Activation Functions

Activation functions are an essential component of a neural network. They introduce non-linearity into the output of a neuron by transforming the input signal into an output signal. Without non-linearity, a neural network would be reduced to a linear function, and therefore, would not be able to learn complex relationships between inputs and outputs.

There are several activation functions commonly used in neural networks, each with its own advantages and disadvantages. Some of the most commonly used activation functions are:

  1. Sigmoid Function: The sigmoid function maps any input value to a value between 0 and 1. It is commonly used in the output layer of binary classification problems.
  2. ReLU Function: The Rectified Linear Unit (ReLU) function returns the input if it is positive, and zero otherwise. It is a commonly used activation function in hidden layers of deep neural networks.
  3. Softmax Function: The softmax function is used in the output layer of multi-class classification problems. It transforms the output of the neural network into a probability distribution over the classes.

Choosing the right activation function depends on the type of problem being solved and the characteristics of the data being used. A deep understanding of the activation functions and their behavior is crucial for designing and building efficient neural networks.
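As an illustration, here is a small NumPy sketch of the three activation functions described above, applied to an arbitrary input vector. In practice these functions are provided by deep learning libraries, so you rarely implement them by hand.

import numpy as np

def sigmoid(x):
    # Squashes any input into the range (0, 1)
    return 1.0 / (1.0 + np.exp(-x))

def relu(x):
    # Passes positive values through unchanged, zeroes out negatives
    return np.maximum(0, x)

def softmax(x):
    # Turns a vector of scores into a probability distribution that sums to 1
    e = np.exp(x - np.max(x))   # subtract the max for numerical stability
    return e / e.sum()

z = np.array([-2.0, 0.0, 3.0])
print(sigmoid(z))   # approximately [0.119 0.5 0.953]
print(relu(z))      # [0. 0. 3.]
print(softmax(z))   # approximately [0.006 0.047 0.946]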

E. Backpropagation Algorithm

Once the network is initialized, it needs to be trained to make accurate predictions. The backpropagation algorithm is a widely used method to train a neural network. It is an iterative optimization method, where the weights of the network are updated in each iteration.

The basic idea of backpropagation is to compute the error between the predicted output and the actual output of the network. This error is then propagated back through the network to adjust the weights of the neurons in the network. The weights are adjusted in such a way that the error between the predicted output and actual output is minimized.

The backpropagation algorithm consists of two phases: forward propagation and backward propagation. In the forward propagation phase, the input data is fed into the network and the output is computed. In the backward propagation phase, the error is calculated and the weights are updated.

The weight update in the backpropagation algorithm can be done using several optimization techniques such as Stochastic Gradient Descent (SGD), Adagrad, Adam, etc. These optimization techniques differ in how they calculate the weight updates and how they adjust the learning rate of the network.

The backpropagation algorithm is a fundamental technique used in deep learning and has been successful in training large-scale neural networks.
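The following is a minimal sketch of the idea, assuming a single sigmoid neuron trained with plain gradient descent on a toy dataset (the data and learning rate are made up for illustration). Real frameworks such as TensorFlow compute these gradients automatically.

import numpy as np

# Toy dataset: the target behaves like a logical OR of the two inputs
X = np.array([[0.0, 1.0], [1.0, 0.0], [1.0, 1.0], [0.0, 0.0]])
y = np.array([1.0, 1.0, 1.0, 0.0])

w = np.zeros(2)
b = 0.0
lr = 0.5

for epoch in range(1000):
    # Forward propagation: compute the predicted output
    z = X @ w + b
    pred = 1.0 / (1.0 + np.exp(-z))

    # Backward propagation: with binary cross-entropy loss, the gradient at the
    # pre-activation simplifies to (pred - y); propagate it back to the weights and bias
    grad_z = pred - y
    grad_w = X.T @ grad_z / len(X)
    grad_b = grad_z.mean()

    # Update step: move the parameters against the gradient to reduce the error
    w -= lr * grad_w
    b -= lr * grad_b

print(np.round(1.0 / (1.0 + np.exp(-(X @ w + b)))))   # should approximate [1. 1. 1. 0.]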

F. Example Use Case: Image Classification with Neural Networks

  1. Data Preparation: To build a neural network for image classification, a labeled dataset is required. This dataset should be divided into training, validation, and testing sets. The training set is used to train the model, while the validation set is used to tune the hyperparameters and avoid overfitting. The testing set is used to evaluate the performance of the model.

  2. Building a Neural Network: For image classification, a common architecture is the Convolutional Neural Network (CNN). A CNN consists of multiple convolutional and pooling layers followed by fully connected layers. The convolutional layers learn feature maps from the images, while the pooling layers reduce the spatial dimensions. The fully connected layers learn the class probabilities from the learned features.

  3. Training and Testing the Model: Once the CNN architecture is defined, the model needs to be trained using the training dataset. During the training process, the model adjusts its parameters to minimize the loss function. The loss function measures the difference between the predicted and actual values. A common optimization algorithm used to minimize the loss function is stochastic gradient descent.

  4. Evaluating the Model: After the model is trained, it needs to be evaluated using the testing dataset. The evaluation metrics used for image classification are accuracy, precision, recall, and F1-score. Accuracy measures the percentage of correctly classified images, while precision measures the percentage of true positive predictions among all positive predictions. Recall measures the percentage of true positive predictions among all actual positive instances. The F1-score is the harmonic mean of precision and recall.
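As a small sketch of step 4, assuming scikit-learn is available, the four metrics can be computed from the true and predicted labels as follows (the labels here are made up for illustration):

from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

y_true = [1, 0, 1, 1, 0, 1, 0, 0]   # illustrative ground-truth labels
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]   # illustrative model predictions

print("Accuracy :", accuracy_score(y_true, y_pred))    # fraction of correct predictions
print("Precision:", precision_score(y_true, y_pred))   # true positives / predicted positives
print("Recall   :", recall_score(y_true, y_pred))      # true positives / actual positives
print("F1-score :", f1_score(y_true, y_pred))          # harmonic mean of precision and recall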

III. Convolutional Neural Networks

Convolutional Neural Networks, or CNNs, are a type of deep neural network that have revolutionized image and video analysis, natural language processing, and other areas of artificial intelligence. In particular, they have been extremely successful in tasks such as image classification, object detection, and facial recognition, among others. In this section, we will explore the definition of CNNs, their structure, and the different layers that compose them. We will also provide an example use case of object detection with CNNs, highlighting the data preparation, building, training, testing, and evaluation steps involved.

A. Definition of Convolutional Neural Networks

Convolutional Neural Networks (CNNs) are a type of neural network that are primarily used for image processing, video analysis, and natural language processing. CNNs are designed to automatically and adaptively learn spatial hierarchies of features from raw input data. They employ a variant of multilayer perceptrons designed to require minimal preprocessing. In contrast to traditional neural networks, CNNs take advantage of the fact that the inputs are images, and constrain the architecture in a more sensible way.

B. Structure of Convolutional Neural Networks

Convolutional Neural Networks (CNNs) are composed of multiple layers, including convolutional layers, pooling layers, and fully connected layers. The layers of a CNN are arranged in a way that allows the network to extract relevant features from the input data while retaining spatial information.

The first layer in a CNN is typically a convolutional layer, which applies filters to the input image to extract features such as edges, corners, and textures. These filters are learned during the training process and are optimized to recognize specific patterns in the data.

After the convolutional layer(s), a pooling layer is often added to reduce the dimensionality of the feature maps and make computation more efficient. Pooling layers downsample the feature maps by taking the maximum or average value in each region.

The output of the convolutional and pooling layers is then flattened and passed through one or more fully connected layers, which perform the final classification or regression task. The fully connected layers are similar to those in a traditional neural network and can be thought of as a function that maps the feature representations to the output.

Overall, the structure of a CNN allows it to automatically learn hierarchical representations of the input data and achieve state-of-the-art performance in tasks such as image classification and object detection.

C. Convolutional Layers

Convolutional layers are the key building blocks of CNNs. A convolution operation involves sliding a filter (also called a kernel) of predefined size over the input image to perform a mathematical operation known as convolution. This process helps in detecting features such as edges, curves, and corners in an image.

In a convolutional layer, several filters are used to perform multiple convolutions in parallel, producing a set of output feature maps. Each filter is trained to detect a specific type of feature, such as a vertical edge or a diagonal curve. The filters are learned through the backpropagation algorithm, just like in a regular neural network.

The size of the filter is an important hyperparameter in a convolutional layer. A larger filter will detect larger features, while a smaller filter will detect smaller features. Typically, 3x3 or 5x5 filters are used in modern CNN architectures.

After the convolution operation, a bias term is added to each output feature map, and the result is typically passed through a non-linear activation function such as ReLU. This non-linearity allows the network to model more complex functions.

In addition to convolutional layers, CNNs also use other types of layers, such as pooling layers and fully connected layers, to build more complex architectures for various computer vision tasks.
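To illustrate the convolution operation itself, here is a minimal NumPy sketch that slides a hand-crafted 3x3 vertical-edge filter over a small 5x5 image. In a real CNN the filter weights would be learned by backpropagation rather than fixed, and a layer would apply many such filters in parallel.

import numpy as np

# A tiny 5x5 "image" with a vertical edge between dark (0) and bright (1) regions
image = np.array([
    [0, 0, 0, 1, 1],
    [0, 0, 1, 1, 1],
    [0, 0, 1, 1, 1],
    [0, 0, 1, 1, 1],
    [0, 0, 0, 1, 1],
], dtype=float)

# A hand-crafted vertical edge detector (illustrative; learned in practice)
kernel = np.array([
    [-1, 0, 1],
    [-1, 0, 1],
    [-1, 0, 1],
], dtype=float)

out_size = image.shape[0] - kernel.shape[0] + 1   # 3x3 output (no padding, stride 1)
feature_map = np.zeros((out_size, out_size))
for i in range(out_size):
    for j in range(out_size):
        region = image[i:i+3, j:j+3]
        feature_map[i, j] = np.sum(region * kernel)   # element-wise multiply and sum

print(feature_map)   # large values where the vertical edge sits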

D. Pooling Layers

Pooling layers are typically used in convolutional neural networks to reduce the dimensions of the feature maps generated by convolutional layers, while retaining the most important information. The pooling operation is performed independently on each feature map and involves sliding a window over the input and outputting the maximum or average value in that window as the new value for that region.

There are two common types of pooling layers: max pooling and average pooling. Max pooling returns the maximum value in each window, while average pooling returns the average value. Both types of pooling layers reduce the size of the input by a factor of the pooling window size, which can help to reduce the computational complexity of the network.
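As a quick illustration, here is a NumPy sketch of 2x2 max pooling with stride 2 applied to a small feature map; the values are arbitrary.

import numpy as np

feature_map = np.array([
    [1, 3, 2, 0],
    [4, 6, 1, 2],
    [7, 2, 9, 4],
    [3, 1, 5, 8],
], dtype=float)

# Split the 4x4 map into 2x2 blocks and keep the maximum of each block
pooled = feature_map.reshape(2, 2, 2, 2).max(axis=(1, 3))
print(pooled)
# [[6. 2.]
#  [7. 9.]]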

Pooling layers can also help to make the model more robust to small variations in the input, as the output of the pooling layer will be less sensitive to small changes in the input compared to the output of a convolutional layer.

In summary, pooling layers are an important component of convolutional neural networks, as they can help to reduce the dimensions of the feature maps, improve the computational efficiency of the network, and make the model more robust to small variations in the input.

E. Example Use Case: Object Detection with Convolutional Neural Networks

A popular application of CNNs is object detection, which involves detecting and localizing objects within an image. In this section, we will walk through an example of using a CNN to detect objects in images.

  1. Data Preparation

The first step in building a CNN for object detection is to prepare the data. This involves gathering and labeling a dataset of images that contain the objects we want to detect. The dataset should include a mix of positive examples (images that contain the object) and negative examples (images that do not contain the object).

Once the dataset has been collected, we need to label the images. This involves drawing bounding boxes around the objects in each image, indicating the location and size of the object. This labeled dataset will be used to train our CNN to detect objects in new images.

  2. Building a Convolutional Neural Network

After the data has been prepared, the next step is to build the CNN. The architecture of the CNN will depend on the specific object detection task and the characteristics of the dataset. Generally, a CNN for object detection will include several convolutional and pooling layers, followed by one or more fully connected layers and a final output layer.

  3. Training and Testing the Model

Once the CNN architecture has been defined, we can train the model using the labeled dataset. During training, the CNN will learn to recognize the features of the objects we want to detect, and to map these features to the appropriate bounding boxes.

After training is complete, we can test the model using a separate dataset of images. This testing dataset should be similar to the training dataset, but should not include any of the same images. We can evaluate the performance of the model by measuring its precision and recall on the testing dataset.

  4. Evaluating the Model

To evaluate the performance of the model, we can use various metrics such as precision, recall, and F1 score. These metrics will provide a quantitative measure of how well the model is able to detect the objects in the images. We can also visualize the output of the CNN to see how well it is localizing the objects in the images.
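Although not covered in detail above, a standard way to decide whether a predicted bounding box counts as a correct detection (and hence feeds into precision and recall) is Intersection over Union (IoU). Here is a minimal sketch with made-up box coordinates:

def iou(box_a, box_b):
    # Boxes are (x_min, y_min, x_max, y_max); returns overlap area / union area
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / float(area_a + area_b - inter)

predicted    = (50, 50, 150, 150)
ground_truth = (60, 60, 160, 160)
print(iou(predicted, ground_truth))   # about 0.68; a threshold of 0.5 is a common cut-off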

IV. Optimization Techniques

Optimization techniques play a critical role in enhancing the performance of deep learning models. These techniques can help to improve model accuracy, reduce overfitting, and speed up the training process. In this section, we will explore some of the most common optimization techniques used in deep learning.

A. Gradient Descent

Gradient descent is a popular optimization technique used in machine learning to minimize the error of a model. It is an iterative method that updates the parameters of the model in the direction of the negative gradient of the loss function, which is calculated with respect to the model's parameters. This update process continues until the model converges to a minimum of the loss function.

There are two main types of gradient descent: batch gradient descent and stochastic gradient descent. In batch gradient descent, the model updates the parameters using the average gradient of the loss function calculated over the entire training dataset. In contrast, stochastic gradient descent updates the parameters using the gradient of the loss function calculated over a single training example at a time.

Both batch gradient descent and stochastic gradient descent have their advantages and disadvantages. Batch gradient descent converges to the minimum of the loss function in fewer iterations than stochastic gradient descent. However, it can be computationally expensive, especially for large datasets. Stochastic gradient descent, on the other hand, can converge faster, but its convergence is more erratic.
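To make the contrast concrete, here is a minimal sketch of both variants fitting a one-dimensional linear model (y is roughly 3x) to synthetic data; the learning rates and epoch counts are arbitrary illustrative choices.

import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0, 1, size=100)
y = 3.0 * x + rng.normal(0, 0.1, size=100)

# Batch gradient descent: one update per pass, using the gradient averaged over all examples
w = 0.0
for epoch in range(200):
    grad = np.mean(2 * (w * x - y) * x)
    w -= 0.5 * grad
print("batch GD estimate:", round(w, 3))   # close to 3

# Stochastic gradient descent: one update per example, so steps are cheaper but noisier
w = 0.0
for epoch in range(10):
    for xi, yi in zip(x, y):
        grad = 2 * (w * xi - yi) * xi
        w -= 0.1 * grad
print("SGD estimate:", round(w, 3))        # also close to 3, with more fluctuation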

B. Regularization Techniques

Regularization is a set of techniques that are used to prevent overfitting, which is a common problem in machine learning models. Overfitting happens when a model is too complex and captures not only the signal but also the noise in the training data. As a result, the model performs well on the training data but poorly on new, unseen data.

There are several regularization techniques that can be used to prevent overfitting, including:

  1. L1 and L2 regularization
  2. Dropout regularization
  3. Early stopping

L1 and L2 regularization add a penalty term to the loss function that is being optimized. This penalty term is proportional to the magnitude of the model weights. L1 regularization is also known as Lasso regularization, and it encourages sparse solutions by shrinking the coefficients of irrelevant features to zero. L2 regularization is also known as Ridge regularization and it encourages small but non-zero weights.
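As a small sketch of how these penalty terms are computed (the weights, regularization strength, and data loss below are made-up values):

import numpy as np

weights = np.array([0.5, -1.2, 0.0, 3.0])
lam = 0.01   # regularization strength (illustrative value)

l1_penalty = lam * np.sum(np.abs(weights))   # Lasso: sum of absolute weights
l2_penalty = lam * np.sum(weights ** 2)      # Ridge: sum of squared weights

data_loss = 0.42                             # stand-in for the loss computed on the data
total_loss = data_loss + l2_penalty          # the penalty is added to the loss before computing gradients
print(l1_penalty, l2_penalty, total_loss)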

Dropout regularization randomly drops out some neurons during the training process, which forces the model to learn more robust features. This technique has been shown to be effective in preventing overfitting, especially in deep learning models.

Early stopping is a technique that monitors the model's performance on a validation set during the training process. If the performance on the validation set starts to degrade, the training process is stopped early to prevent the model from overfitting.

By using regularization techniques, machine learning practitioners can prevent overfitting and build models that generalize well to new, unseen data.

C. Dropout

Dropout is a regularization technique commonly used in deep learning models to prevent overfitting. It works by randomly dropping out (setting to zero) a certain proportion of the neurons in a layer during each training iteration. By doing so, the network is forced to learn redundant representations, which helps it to generalize better on new data.

Here's an example of how to implement dropout in a Keras model:

from keras.models import Sequential
from keras.layers import Dense, Dropout

model = Sequential()
model.add(Dense(64, activation='relu', input_dim=100))
model.add(Dropout(0.5))
model.add(Dense(64, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(1, activation='sigmoid'))

model.compile(loss='binary_crossentropy',
              optimizer='rmsprop',
              metrics=['accuracy'])

In this example, a dropout layer with a rate of 0.5 is added after each hidden layer. This means that during each training iteration, half of the neurons in the layer will be randomly dropped out.

By adding dropout layers to our model, we can significantly reduce overfitting and improve generalization performance. However, it's important to note that too high a dropout rate can result in underfitting, so you should experiment with different dropout rates to find the optimal value for your specific problem.

D. Early Stopping

Early stopping is another popular optimization technique used in deep learning models. It involves monitoring the performance of the model during training and stopping the training process early if the model's performance on a validation set stops improving.

The basic idea behind early stopping is that during training, the model learns to fit the training data better and better. However, at a certain point, the model starts to overfit, which means it starts to memorize the training data instead of learning from it. When this happens, the model's performance on the validation set starts to degrade, even though its performance on the training set continues to improve.

To prevent the model from overfitting, early stopping stops the training process when the model's performance on the validation set stops improving. This means that the model is trained just enough to achieve good performance on both the training and validation sets, without overfitting to the training data.

Early stopping is a simple but effective optimization technique that can be easily implemented in most deep learning frameworks. By using early stopping, you can save time and resources by halting the training process once further training no longer improves the model.

E. Example Use Case: Improving a Deep Learning Model with Optimization Techniques

  1. Building a Deep Learning Model

To demonstrate the use of optimization techniques, we will use a simple deep learning model to classify images of handwritten digits from the MNIST dataset. The dataset consists of 60,000 training images and 10,000 test images, each of which is a 28x28 grayscale image of a digit from 0 to 9.

To build our deep learning model, we will use the Keras library, which provides an easy-to-use API for building and training deep neural networks. Here is the code to import the necessary libraries and load the MNIST dataset:

import tensorflow as tf
from tensorflow import keras

# Load the MNIST dataset
mnist = keras.datasets.mnist
(x_train, y_train), (x_test, y_test) = mnist.load_data()

# Normalize the pixel values to be between 0 and 1
x_train = x_train / 255.0
x_test = x_test / 255.0

# Add a channel dimension so each image has shape (28, 28, 1), as the Conv2D layers below expect
x_train = x_train[..., tf.newaxis]
x_test = x_test[..., tf.newaxis]

Next, we will define our deep learning model. We will use a simple architecture consisting of two convolutional layers followed by two fully connected layers. We will also include dropout regularization to reduce overfitting.

model = keras.Sequential([
    keras.layers.Conv2D(32, (3,3), activation='relu', input_shape=(28,28,1)),
    keras.layers.MaxPooling2D((2,2)),
    keras.layers.Conv2D(64, (3,3), activation='relu'),
    keras.layers.MaxPooling2D((2,2)),
    keras.layers.Flatten(),
    keras.layers.Dropout(0.5),
    keras.layers.Dense(128, activation='relu'),
    keras.layers.Dropout(0.5),
    keras.layers.Dense(10)
])

  2. Training and Testing the Model

Now that we have defined our model, we can train and test it using the MNIST dataset. We will use the Adam optimizer and the sparse categorical crossentropy loss function.

model.compile(optimizer='adam',
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
              metrics=['accuracy'])

model.fit(x_train, y_train, epochs=10, validation_data=(x_test, y_test))

After training, we can evaluate our model on the test set:

test_loss, test_acc = model.evaluate(x_test, y_test, verbose=2)
print('\nTest accuracy:', test_acc)

This initial model achieves an accuracy of around 98% on the test set.

  3. Applying Optimization Techniques

To improve the performance of our model, we can apply various optimization techniques. One such technique is learning rate scheduling, where the learning rate is reduced during training (here, whenever the validation loss stops improving for a few epochs) to help the model converge to a better minimum.

lr_scheduler = keras.callbacks.ReduceLROnPlateau(factor=0.5, patience=3)

model.compile(optimizer=keras.optimizers.Adam(learning_rate=0.001),
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
              metrics=['accuracy'])

model.fit(x_train, y_train, epochs=20, validation_data=(x_test, y_test),
          callbacks=[lr_scheduler])

We can also try using different regularization techniques, such as L1 and L2 regularization. Here's an example of adding L2 regularization to the fully connected layers:

from tensorflow.keras import regularizers

model = keras.Sequential([
    keras.layers.Conv2D(32, (3,3), activation='relu', input_shape=(28,28,1)),
    keras.layers.MaxPooling2D((2,2)),
    keras.layers.Flatten(),
    keras.layers.Dense(64, activation='relu', kernel_regularizer=regularizers.l2(0.01)),
    keras.layers.Dense(10)
])

model.compile(optimizer='adam', loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True), metrics=['accuracy'])

history = model.fit(x_train, y_train, epochs=20, validation_data=(x_test, y_test))

We can also apply dropout to the model to reduce overfitting. Here's an example of adding dropout to the fully connected layers:

model = keras.Sequential([
    keras.layers.Conv2D(32, (3,3), activation='relu', input_shape=(28,28,1)),
    keras.layers.MaxPooling2D((2,2)),
    keras.layers.Flatten(),
    keras.layers.Dense(64, activation='relu'),
    keras.layers.Dropout(0.5),
    keras.layers.Dense(10)
])

model.compile(optimizer='adam', loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True), metrics=['accuracy'])

history = model.fit(x_train, y_train, epochs=20, validation_data=(x_test, y_test))

Finally, we can also use early stopping to prevent overfitting. Here's an example of using early stopping:

early_stopping = keras.callbacks.EarlyStopping(patience=2, restore_best_weights=True)

model.compile(optimizer='adam', loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True), metrics=['accuracy'])

history = model.fit(x_train, y_train, epochs=20, validation_data=(x_test, y_test), callbacks=[early_stopping])

  4. Evaluating the Improved Model

After applying these optimization techniques, we can evaluate the improved model on the test data to see if there is an improvement in performance. Here's an example of evaluating the model and plotting the training and validation accuracy and loss:

import matplotlib.pyplot as plt

test_loss, test_acc = model.evaluate(x_test, y_test, verbose=2)

plt.plot(history.history['accuracy'], label='accuracy')
plt.plot(history.history['val_accuracy'], label='val_accuracy')
plt.plot(history.history['loss'], label='loss')
plt.plot(history.history['val_loss'], label='val_loss')
plt.xlabel('Epoch')
plt.ylabel('Accuracy/Loss')
plt.legend(loc='lower right')
plt.show()

print(test_acc)

In this example, we see that the addition of optimization techniques has improved the accuracy of the model on the test data. By gradually decreasing the learning rate during training, adding L2 regularization and dropout, and using early stopping to prevent overfitting, we were able to achieve better performance on the task of classifying handwritten digits.

After applying optimization techniques, we need to evaluate the performance of our improved model. We can do this by comparing the evaluation metrics of the original model with those of the improved model. Here's an example of how to evaluate our improved model:

loss, accuracy = model.evaluate(x_test, y_test)
print('Test accuracy:', accuracy)

With this approach, we can see if the optimization techniques we applied have resulted in a better-performing model. We can also visualize the training and validation loss and accuracy curves to get a better understanding of the performance of our model.

Overall, optimization techniques are essential for improving the performance of deep learning models. By using techniques such as gradient descent, regularization, dropout, and early stopping, we can achieve better accuracy and generalization in our models. It's essential to evaluate the performance of the model after applying these techniques to ensure that they are effective.

V. Conclusion

A. Summary of Key Points

In this guide, we've covered some essential topics in deep learning, including the basics of neural networks and convolutional neural networks, with brief looks at recurrent and pooling layers. We've also explored some key optimization techniques that can help improve model performance, such as learning rate scheduling, regularization, dropout, and early stopping.

B. Future Directions for Deep Learning

Deep learning is a rapidly evolving field, with new techniques and approaches being developed all the time. Some areas of active research include unsupervised learning, reinforcement learning, and the development of more efficient deep learning algorithms that can handle larger datasets and more complex models.

Here are the top five JBI Training courses that are relevant to your further training in machine learning:

  1. Python Machine Learning - This course covers the fundamentals of machine learning and how to implement various algorithms using Python. It is a great course for anyone who wants to build a strong foundation in machine learning before diving into deep learning.

  2. TensorFlow - This course covers the popular deep learning framework, TensorFlow. It is a great course for anyone who wants to learn how to build and train deep neural networks using TensorFlow.

  3. Data Science and AI/ML (Python) - This course covers various aspects of data science, including machine learning and deep learning. It is a comprehensive course that covers everything from data preprocessing to model evaluation.

  4. Apache Spark Development - This course covers the popular big data processing engine, Apache Spark. It is a great course for anyone who wants to learn how to process large amounts of data efficiently, which is crucial for many deep learning applications.

  5. AI for Business & IT Staff - This course is designed for business and IT professionals who want to understand the basics of artificial intelligence and how it can be applied to solve business problems. It covers various AI concepts, including deep learning.

Here are some documentation pages and guides for your further learning:

  1. Deep Learning: https://www.deeplearningbook.org/
  2. Neural Networks: https://towardsdatascience.com/understanding-neural-networks-19020b758230
  3. Convolutional Neural Networks: https://cs231n.github.io/convolutional-networks/
  4. TensorFlow: https://www.tensorflow.org/
  5. Keras: https://keras.io/
  6. Backpropagation Algorithm: https://en.wikipedia.org/wiki/Backpropagation
  7. Image Classification with TensorFlow: https://www.tensorflow.org/tutorials/images/classification
  8. MNIST Dataset: https://yann.lecun.com/exdb/mnist/
  9. CIFAR-10 Dataset: https://www.cs.toronto.edu/~kriz/cifar.html
  10. Activation Functions: https://towardsdatascience.com/activation-functions-neural-networks-1cbd9f8d91d6
About the author: Daniel West
Tech Blogger & Researcher for JBI Training
