
In this blog I am going to discuss the basics of Deep Learning, Neural Networks, activation functions, gradient descent, and the learning rate.

Machine Learning (ML) is a subset of AI in which we train a model on data using a predefined algorithm and, based on that training, predict or forecast results.

Deep learning uses multiple layers of neurons, which together build what is called a deep neural network, to simulate the human brain for complex decision making and prediction. An artificial neuron follows the mathematical model below -

y = f( w1·x1 + w2·x2 + … + wn·xn + b )

where x1 … xn are the inputs, w1 … wn are the corresponding weights, and b is the bias.

Here f is the activation function. The activation function activates a neuron by transforming the weighted sum of its inputs plus the bias. It is used to bring non-linearity into the model and is applied to each neuron in every layer.
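As a minimal sketch (the input, weight, and bias values here are made up purely for illustration), a single neuron with a ReLU activation can be computed in Python like this:

import numpy as np

def relu(z):
    return np.maximum(0.0, z)      # ReLU: 0 for negative z, z otherwise

x = np.array([0.5, -1.2, 3.0])     # inputs (illustrative values)
w = np.array([0.4,  0.1, 0.6])     # weights (illustrative values)
b = 0.2                            # bias

z = np.dot(w, x) + b               # weighted sum plus bias
y = relu(z)                        # activation output of the neuron
print(y)                           # 2.08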

Loss Function : Loss functions calculate the loss (error) of the network's outcome by comparing it with the target value. The loss measures the difference between the network's prediction and the actual target, and the main objective of training is to minimize it. Example : Mean Squared Error (MSE).
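For example, with a few illustrative numbers, Mean Squared Error can be computed directly in NumPy:

import numpy as np

actual    = np.array([3.0, 5.0, 2.5])      # target values (illustrative)
predicted = np.array([2.8, 5.4, 2.0])      # network predictions (illustrative)

mse = np.mean((actual - predicted) ** 2)   # mean of the squared differences
print(mse)                                 # ≈ 0.15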

Optimizer : An optimization function adjusts the weights and biases, based on the loss function, so as to minimize the error E(X), i.e. the loss, as efficiently as possible. Example : Adam.


Learning Rate and Gradient Descent : 

The learning rate is a hyper-parameter that controls the rate at which an algorithm updates the weights and biases, i.e. how large a correction is applied to a parameter estimate in each step. The learning rate is denoted by the symbol α.
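In Keras, which is used later in this blog, the learning rate is configured on the optimizer object; a minimal sketch using Adam (0.001 is Adam's default learning rate):

import tensorflow as tf

# Passing an optimizer object instead of the string 'adam' lets us set α explicitly.
opt = tf.keras.optimizers.Adam(learning_rate=0.001)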


Gradient Descent : 


Gradient descent (GD) is an iterative optimization algorithm used to find a minimum (or maximum) of a given function; here it is used to minimize the loss function. The algorithm iteratively calculates the next point using the gradient at the current position. The gradient is subtracted because we want to minimize the function, and the process can be defined as below -

p(n+1) = p(n) − α · ∇f(p(n))

where p(n) is the current point, α is the learning rate, and ∇f(p(n)) is the gradient of the function at that point.
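A minimal sketch of this update rule on the simple function f(p) = p², whose gradient is 2p and whose minimum is at p = 0 (the starting point and learning rate below are chosen purely for illustration):

p     = 5.0                  # starting point
alpha = 0.1                  # learning rate α

for _ in range(50):
    grad = 2.0 * p           # gradient of f at the current point
    p    = p - alpha * grad  # step against the gradient to minimize f

print(p)                     # very close to 0, the minimum of f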


Below I am attaching a simple neural network built in Python using Keras, which illustrates all of the functions above and how the flow is defined.

 

import numpy as np
import tensorflow as tf

#@title Define neural network parameters
# datain_tr_calibrated / dataou_tr_calibrated are the calibrated (scaled)
# training input and target arrays prepared beforehand.
in_neurons = datain_tr_calibrated.shape[1]  # number of input features
ou_neurons = dataou_tr_calibrated.shape[1]  # number of output targets

# 100 is the neuron count of the 1st hidden layer (connected to the input layer)
# 50 is the neuron count of the 2nd hidden layer
# 25 is the neuron count of the 3rd hidden layer

hn_neurons = [100, 50, 25]


# We use three ReLU activations to move the hidden layers from linear to
# non-linear mode, since not all of the data is linear.
# The ReLU activation function introduces non-linearity in a neural network,
# helping mitigate the vanishing gradient problem during training and
# enabling the network to learn more complex relationships in the data.
# The output layer stays linear, which suits a regression target.

ac_fun     = ['relu', 'relu', 'relu', 'linear']

# We use Mean Squared Error as the loss function.

ls_fun     = 'mean_squared_error'

# We use the Adam optimizer to minimize the loss function.
op_val     = 'adam'

# Training (back propagation) will run for 99 epochs with a batch size of 16.
it_val     = 99
bt_size    = 16

sh_val     = False  # do not shuffle the training data between epochs
vr_val     = 1      # verbosity: print training progress for each epoch


#@title Construct the neural network

net        = tf.keras.models.Sequential() # a Sequential model: back-to-back layers of neurons

net.add( tf.keras.layers.Dense(units=hn_neurons[0], activation=ac_fun[0], input_dim = in_neurons) ) # input layer and the 1st hidden layer
net.add( tf.keras.layers.Dense(units=hn_neurons[1], activation=ac_fun[1]) )                         # 2nd hidden layer
net.add( tf.keras.layers.Dense(units=hn_neurons[2], activation=ac_fun[2]) )                         # 3rd hidden layer
net.add( tf.keras.layers.Dense(units=ou_neurons   , activation=ac_fun[3]) )                         # output layer

# The line below attaches the functions defined above to the network.

net.compile(optimizer = op_val, loss = ls_fun) # compile the network


#@title Network summary
net.summary()


#@title Train the network - learn the weights via back propagation using the optimization function
history = net.fit(datain_tr_calibrated,
                  dataou_tr_calibrated,
                  epochs     = it_val ,
                  batch_size = bt_size, 
                  verbose    = vr_val ,
                  shuffle    = sh_val)


#@title Estimate outputs for the testing data
# datain_te_calibrated is the calibrated testing input prepared beforehand.
dataes_te_calibrated = net.predict(datain_te_calibrated)


# Un-calibrate and compare the predictions with the original data,
# for example by exporting both to Excel.

predicted_target = dataes_te_calibrated.flatten()
actual_target    = np.array(dataou_te_calibrated).flatten()

 

In my next blog I am going to discuss the different kinds of activation functions and where to use them.
