# Neural-Networks

**Repository Path**: yuanbo-peng/Neural-Networks

## Basic Information

- **Project Name**: Neural-Networks
- **Description**: The project is to implement the Error Back-Propagation (EBP) training algorithm for a multi-layer perceptron (MLP) 4-2-4 encoder.
- **Primary Language**: Matlab
- **License**: GPL-3.0
- **Default Branch**: master
- **Homepage**: None
- **GVP Project**: No

## Statistics

- **Stars**: 8
- **Forks**: 1
- **Created**: 2019-11-06
- **Last Updated**: 2023-10-09

## Categories & Tags

**Categories**: machine-learning

**Tags**: None

## README

# Neural Networks

- Author: Yuanbo Peng <>
- Create Date: 18.03.2019
- Project Name: Neural Networks
- Revision: 1.0

### The EBP Training Algorithm for an MLP Encoder

The project is to implement the Error Back-Propagation (EBP) training algorithm for a multi-layer perceptron (MLP) 4-2-4 encoder using MATLAB. The structure of the encoder is as shown below:
- An input layer with 4 units.
- A single hidden layer with 2 units.
- An output layer with 4 units.

Each unit has a sigmoid activation function. The task of the encoder is to map the following input patterns onto the corresponding output patterns:
| Input Pattern | Output Pattern |
|:-:|:-:|
| 1, 0, 0, 0 | 1, 0, 0, 0 |
| 0, 1, 0, 0 | 0, 1, 0, 0 |
| 0, 0, 1, 0 | 0, 0, 1, 0 |
| 0, 0, 0, 1 | 0, 0, 0, 1 |
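Since each input pattern equals its target, the training data can be expressed compactly in MATLAB. This is a hypothetical setup for illustration, not code taken from the repository:

```matlab
% Each column is one training pattern; targets equal inputs (auto-encoding).
X = eye(4);   % input patterns
D = eye(4);   % desired (target) output patterns
```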
### Activation Functions

Activation functions allow a neural network to learn complicated, non-linear functional mappings between the inputs and the response variables. Several activation functions are in common use, each suiting different kinds of data, such as ***Sigmoid***, ***Tanh***, and ***ReLU***. In this project, the sigmoid function is applied.
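For reference, the standard definition of the sigmoid (logistic) function, whose derivative appears in the weight updates below, is:

$$\sigma(x) = \frac{1}{1 + e^{-x}}, \qquad \sigma'(x) = \sigma(x)\,\bigl(1 - \sigma(x)\bigr)$$

In MATLAB this is a one-line anonymous function:

```matlab
sigmoid = @(x) 1 ./ (1 + exp(-x));   % element-wise logistic function
```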
### Total Error Calculation

A training set consists of:

- A set of input vectors $i_1, \dots, i_N$, where the dimension of $i_n$ is equal to the number of MLP input units.
- For each $n$, a target vector $d_n$, where the dimension of $d_n$ is equal to the number of output units.

The error $E$ is defined below.
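Assuming the standard sum-of-squared-errors criterion used with EBP, and writing $o_n$ for the network output produced by input $i_n$:

$$E = \frac{1}{2} \sum_{n=1}^{N} \lVert o_n - d_n \rVert^2 = \frac{1}{2} \sum_{n=1}^{N} \sum_{k} \left( o_{nk} - d_{nk} \right)^2$$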
### Weights Modification

Let the weights between the input and hidden layer, and between the hidden and output layer, be two matrices $W_1$ and $W_2$, of sizes $4 \times 2$ and $2 \times 4$ respectively. The initial values of both matrices are randomly generated. Every value in $W_2$ and $W_1$ is updated after each iteration of forward propagation.

#### Update W2 (the weights between hidden and output layer)

The new weights between the hidden and output layer are calculated as shown below.
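Writing the forward pass as $h = \sigma(W_1^{\top} i)$ and $o = \sigma(W_2^{\top} h)$ (consistent with the stated matrix shapes), and assuming the standard gradient-descent delta rule with learning rate $\eta$ and $\odot$ denoting the element-wise product:

$$\delta_2 = (o - d) \odot o \odot (1 - o), \qquad W_2 \leftarrow W_2 - \eta \, h \, \delta_2^{\top}$$

Here $h \, \delta_2^{\top}$ is $2 \times 4$, matching the shape of $W_2$.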
#### Update W1 (the weights between input and hidden layer)

The new weights between the input and hidden layer are calculated as shown below.
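With the same notation and assumptions, the output-layer deltas are first propagated back through $W_2$:

$$\delta_1 = (W_2 \, \delta_2) \odot h \odot (1 - h), \qquad W_1 \leftarrow W_1 - \eta \, i \, \delta_1^{\top}$$

Here $i \, \delta_1^{\top}$ is $4 \times 2$, matching the shape of $W_1$.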
### An Improved EBP Training Algorithm

**Bias** is a constant that helps the model fit the given data better. A bias unit is an 'extra' neuron, added to each layer preceding the output layer, which has no incoming connections; its output is fixed at 1, so only its outgoing weights are trained.
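The sketch below puts the pieces together, training the 4-2-4 encoder with bias units. It is an illustrative re-implementation under the assumptions above (forward pass, delta rule, bias output fixed at 1), not the repository's actual code, and all variable names are chosen for the example:

```matlab
% Illustrative EBP training of the 4-2-4 encoder with bias units.
sigmoid = @(x) 1 ./ (1 + exp(-x));

X   = eye(4);              % input patterns (one per column)
D   = eye(4);              % target patterns
eta = 6.0;                 % learning rate

W1 = rand(5, 2) - 0.5;     % input (4 units + bias) -> hidden (2 units)
W2 = rand(3, 4) - 0.5;     % hidden (2 units + bias) -> output (4 units)

for epoch = 1:1000
    E = 0;
    for n = 1:4
        i = [X(:, n); 1];              % input vector with bias unit appended
        h = [sigmoid(W1' * i); 1];     % hidden activations with bias unit
        o = sigmoid(W2' * h);          % network output

        d = D(:, n);
        E = E + 0.5 * sum((o - d).^2); % accumulate sum-of-squared errors

        d2 = (o - d) .* o .* (1 - o);                       % output deltas
        d1 = (W2(1:2, :) * d2) .* h(1:2) .* (1 - h(1:2));   % hidden deltas

        W2 = W2 - eta * h * d2';       % delta-rule update, hidden -> output
        W1 = W1 - eta * i * d1';       % delta-rule update, input -> hidden
    end
end
```

Removing the appended 1s (and shrinking $W_1$, $W_2$ back to $4 \times 2$ and $2 \times 4$) recovers the bias-free system, which makes a direct comparison between the two straightforward.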
### Evaluation: Bias vs Non-bias

The MLP parameters are as follows:

- Learning rate: 6.0
- Number of iterations: 1000
- Initial weights in the two systems are equal
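One simple way to guarantee that both systems start from the same initial weights is to fix the random seed before generating them. This is a hypothetical illustration, not the repository's code:

```matlab
rng(0);                      % fix the random seed for reproducibility
W1_init = rand(4, 2) - 0.5;  % shared initial input->hidden weights
W2_init = rand(2, 4) - 0.5;  % shared initial hidden->output weights
```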