Implementing an ANN model from scratch using Numpy

# Importing required libraries

import numpy as np

# Preparing the dataset

x = np.array([
    [1, 0, 1, 0],
    [1, 0, 1, 1],
    [0, 1, 0, 1]
])

y = np.array([[1],[1],[0]])
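
As a quick sanity check (not part of the original article), the toy dataset has three samples with four binary features each and a single binary target per sample:

# Optional shape check of the toy dataset (added for illustration)
print(x.shape)  # (3, 4) - 3 samples, 4 features
print(y.shape)  # (3, 1) - one target value per sample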

# Defining the activation function

# Activation function
# Here we use the sigmoid function, which maps any input to a value between 0 and 1
def sigmoid(x):
    return 1 / (1 + np.exp(-x))

# Derivative of sigmoid; expects x to already be a sigmoid output, i.e. x = sigmoid(z)
def derivativeSigmoid(x):
    return x * (1 - x)
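
One subtlety worth demonstrating: derivativeSigmoid expects a value that has already been passed through sigmoid, since for s = sigmoid(z) the derivative ds/dz equals s * (1 - s). A minimal check (added for illustration, not part of the original code):

# Verify the derivative convention: pass sigmoid outputs, not raw inputs
z = np.array([-2.0, 0.0, 2.0])
s = sigmoid(z)                 # values in (0, 1)
print(derivativeSigmoid(s))    # elementwise s * (1 - s)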

# Initializing the number of neurons

# input layer - the number of neurons is always equal to the number of input columns (features)
inputNeurons = x.shape[1]
# hidden neurons - chosen by trial and error
hiddenNeurons = 3
# output neurons - depends on the number of classes in the target column; here one neuron is enough to classify 0 vs 1
outputNeurons = 1

# Initializing weights and biases and constructing the ANN model

# randomly initializing the weight and bias matrices for the hidden and output layers
weightsHidden = np.random.uniform(size=(inputNeurons, hiddenNeurons))
biasHidden = np.random.uniform(size=(1, hiddenNeurons))
weightsOutput = np.random.uniform(size=(hiddenNeurons, outputNeurons))
biasOutput = np.random.uniform(size=(1, outputNeurons))
# learning rate (chosen by trial and error)
alpha = 0.04
# number of iterations (chosen by trial and error)
epochs = 20000

for i in range(epochs):

    # Feedforward propagation

    # Step 1 - apply dot product and add bias : f(x) = x.wh + biasHidden
    fx = np.dot(x, weightsHidden) + biasHidden
    # Step 2 - apply activation function
    hiddenLayer = sigmoid(fx)
    # Step 3 - apply dot product and add bias : f(x) = hiddenLayer.wout + biasOutput
    fx_ = np.dot(hiddenLayer, weightsOutput) + biasOutput
    # Step 4 - apply activation on the output layer
    outputLayer = sigmoid(fx_)

    # Backpropagation - error (y^ - y) and optimization of weights and biases

    errorOutput = outputLayer - y
    # Slope on the output layer - derivative of the activation function applied to this layer
    slopeOutput = derivativeSigmoid(outputLayer)
    # Delta = error x slope
    deltaOutput = errorOutput * slopeOutput

    # for the hidden layer
    errorHidden = np.dot(deltaOutput, weightsOutput.T)  # .T takes the transpose
    slopeHidden = derivativeSigmoid(hiddenLayer)
    deltaHidden = errorHidden * slopeHidden

    # updating the weights and biases (gradient descent step)
    weightsOutput = weightsOutput - hiddenLayer.T.dot(deltaOutput) * alpha
    weightsHidden = weightsHidden - x.T.dot(deltaHidden) * alpha
    biasOutput = biasOutput - np.sum(deltaOutput, axis=0, keepdims=True) * alpha
    biasHidden = biasHidden - np.sum(deltaHidden, axis=0, keepdims=True) * alpha

print("Output->", outputLayer)

Output -> array([[0.98788798], [0.98006967], [0.02688157]])

The rounded-off predicted output is [1, 1, 0], which matches the target y = [1, 1, 0], so the predicted output is close to the actual output.
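
Once training finishes, the learned weights and biases can be reused for inference. The sketch below is one possible way to wrap the same feedforward steps in a helper; predict and newSample are hypothetical names added for illustration and do not appear in the original article:

# A small inference helper (a sketch; not part of the original code)
def predict(samples):
    hidden = sigmoid(np.dot(samples, weightsHidden) + biasHidden)
    output = sigmoid(np.dot(hidden, weightsOutput) + biasOutput)
    return np.round(output)

print(predict(x))                     # should reproduce y, i.e. [[1.], [1.], [0.]]
newSample = np.array([[1, 1, 0, 0]])  # hypothetical unseen input
print(predict(newSample))             # rounded class prediction for the new sample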
