Neural Network Mastery: A Journey Through Deep Learning

Author(s): Rizk Nouhad

Edition: 1

Copyright: 2025

Pages: 137


Ebook

$50.00 USD

ISBN 9798385175963

Details: ebook, electronic delivery, 180 days

Preface  
About the Author

Chapter 1 Introduction to Neural Networks 
1.1 Overview of Neural Networks 
1.2 History of Neural Networks  
1.2.1 First attempts 
1.2.2 Promising and emerging technology 
1.2.3 Period of frustration and disgrace 
1.2.4 Innovation 
1.2.5 Today
1.3 Applications of Neural Networks 
1.3.1 Prediction 
1.4 Biological Inspiration  
1.4.1 Biological neurons  
1.4.2 Artificial neurons 
1.5 Mathematical Foundations 
1.5.1 Scalars, vectors, matrices, and tensors 
1.5.2 Vectors, matrices, and tensors 
1.5.3 Special types of matrices 
1.5.4 Matrix calculus 
1.6 Setting Up Your Python Environment 
1.6.1 Installing Python 
1.6.2 Installing libraries 
1.7 Practice 
1.7.1 Predicting student success with a neural network 
References 

Chapter 2 Building Blocks of Neural Networks 
2.1 Neurons and Activation Functions 
2.1.1 Concept of artificial neurons 
2.1.2 Common activation functions 
2.1.3 Implementing activation functions in Python 
2.2 Architecture of Neural Networks 
2.2.1 Understanding weights and biases 
2.2.2 Backpropagation 
2.2.3 Forward and backward passes 
2.3 Implementing a Simple Neural Network in Python 
2.3.1 Exercise on perceptron with one output node 

Chapter 3 Training Neural Networks 
3.1 The Learning Process 
3.1.1 Introduction to supervised learning 
3.1.2 Loss function 
3.1.3 Implementing loss functions in Python 
3.2 Optimization Techniques 
3.2.1 Gradient descent and its variants (SGD, Adam, RMSprop) 
3.3 Backpropagation Algorithm 
3.3.1 The four equations of backpropagation 
3.3.2 The backpropagation algorithm 
3.3.3 Backpropagation in code 
3.4 Metrics 
3.4.1 Recall, or true positive rate 
3.4.2 Precision 

Chapter 4 Deep Neural Networks 
4.1 Introduction to Deep Learning 
4.1.1 Shallow networks 
4.1.2 Deep neural networks 
4.1.3 Shallow versus deep neural networks 
4.1.4 Challenges of training deep networks: Vanishing and exploding gradients 
4.2 Architectures of Deep Neural Networks 
4.2.1 Concept of deep feedforward networks 
4.2.2 Implementing deep networks in Python using TensorFlow/Keras 
4.3 Regularization 
4.3.1 Other techniques for regularization 
4.4 Implementing Regularization in Python 

Chapter 5 Convolutional Neural Networks (CNNs) 
5.1 Introduction to CNNs 
5.1.1 LeNet-like 
5.1.2 Residual networks 
5.2 Practical Example 
5.3 Advanced Topics in CNN 
5.3.1 Transfer Learning and Fine-Tuning 
5.3.2 Fine-Tuning 
5.4 Implementing Transfer Learning with Pre-Trained Models
5.4.1 Loading an Example Dataset 
5.4.2 Training  
5.5 Sarang Example 
5.5.1 MNIST dataset 
5.5.2 Exploring data 
5.5.3 Preprocessing data 
5.5.4 Model Creation 
5.5.5 Model Training 
5.5.6 Model Evaluation

Chapter 6 Recurrent Neural Networks (RNNs) 
6.1 Introduction to RNNs 
6.1.1 The simple recurrent network 
6.1.2 Handling sequential data with RNNs 
6.1.3 Challenges: Vanishing gradient and long-term dependencies 
6.2 Variants of RNNs 
6.2.1 LSTMs and GRUs: Overcoming RNN limitations 
6.2.2 Gated Units, Layers, and Networks 
6.3 Implementing LSTMs/GRUs in Python 
6.3.1 Load the dataset 
6.4 Applications of RNNs 
6.4.1 Natural language processing (NLP)
6.4.2 Time series forecasting 
6.4.3 Building an RNN model for sequence prediction in Python 
6.4.4 Practical applications of RNNs 
6.5 Sarang Example 
6.5.1 Dataset visualization 
6.5.2 Data preprocessing 
6.5.3 Model creation 

Chapter 7 Generative Adversarial Networks (GANs) 
7.1 Introduction to Generative Models 
7.1.1 Generative Versus Discriminative Models 
7.1.2 Overview of GANs: Generator and Discriminator Networks 
7.2 Implementing GANs in Python 
7.2.1 Setting up the GAN Architecture 
7.2.2 Building the GAN Training Process 
7.2.3 Training the GAN and Generating Images 
7.2.4 Practical Applications 
7.3 Advanced GAN Topics 
7.3.1 Conditional GANs 
7.3.2 StyleGAN 
7.3.3 Implementing advanced GAN variants in Python 
7.3.4 Practical Applications and Extensions of Advanced GANs 
7.3.5 Sarang Example 
7.3.6 Generator 
7.3.7 Discriminator 
7.3.8 Build Models 
7.3.9 Set Up Loss Functions 
7.3.10 Train 

Chapter 8 Autoencoders and Unsupervised Learning 
8.1 Introduction to Autoencoders  
8.1.1 Understanding the concept of unsupervised learning  
8.1.2 Architecture of autoencoders: Encoder, bottleneck, and decoder   
8.2 Implementing Autoencoders in Python 
8.2.1 Building a basic autoencoder for dimensionality reduction 
8.2.2 Applications in anomaly detection and data compression 
8.3 Implementation in Python 
8.3.1 Understanding the Implementation 
8.3.2 Practical Applications 

Chapter 9 Transformers and Attention Mechanisms 
9.1 Introduction to Transformers 
9.1.1 The limitations of recurrent neural networks (RNNs) and the rise of transformers 
9.1.2 Understanding attention mechanisms: Self-attention and multi-head attention 
9.2 Implementing Transformers in Python  
9.2.1 Building a basic transformer model for NLP tasks  
9.2.2 Fine-tuning a transformer (e.g., BERT, GPT) for text classification in Python 
9.3 Applications and Future Directions
9.3.1 Exploration of transformers in diverse applications: NLP, vision, etc.  
9.3.2 Implementing cutting-edge transformer models in Python 

Chapter 10 Reinforcement Learning 
10.1 Fundamentals of Reinforcement Learning (expanded) 
10.1.1 Core components (expanded) 
10.1.2 The learning process (expanded) 
Example: Maze Navigation 
10.1.3 Exploration versus exploitation trade-off 
10.2 Deep Q-Networks (expanded) 
10.2.1 Q-learning fundamentals (expanded) 
10.2.2 DQN architecture (expanded) 
10.2.3 Key innovations in DQN  
10.2.4 Real-world applications of DQNs 
10.3 Advantages and Disadvantages of Reinforcement Learning 
10.3.1 Advantages 
10.3.2 Disadvantages 
10.4 Implementing Basic RL Algorithms  
10.4.1 Q-learning implementation structure 
10.4.2 DQN implementation essentials 
10.4.3 Training loop structure
10.5 Practical Considerations 
10.5.1 Hyperparameter selection 
10.5.2 Common challenges 
10.6 Additional Considerations for Scalability
