Neural network visualization using Visualkeras

Neural networks are a powerful tool for machine learning, but they can be opaque and hard to visualize. Fortunately, there are tools available to make this task easier. One such tool is VisualKeras, a Python library that lets you visualize Keras neural networks in an intuitive way. In this blog, we will explore how to use VisualKeras to visualize a neural network.

First, we need to install VisualKeras. We can do this using pip, the Python package manager. Open a terminal window and run the following command:

pip install visualkeras

Once VisualKeras is installed, we can start using it to visualize our neural network. Let's begin by creating a simple neural network with one input layer, one hidden layer, and one output layer, using the Sequential model from the Keras library. Here's the code to create the model:

from keras.models import Sequential
from keras.layers import Dense
from visualkeras import layered_view

# create the model
model = Sequential([
    Dense(64, activation='relu', input_shape=(784,)),
    Dense(10, activation='softmax')
])

# visualize the model
layered_view(model, legend=True)

In the code above, we import the Sequential model and the Dense layer from Keras, along with the layered_view function from VisualKeras. We then create our neural network using the Sequential model: a Dense layer with 64 nodes, a ReLU activation function, and an input shape of (784,), followed by a second Dense layer with 10 nodes and a softmax activation function. The second layer needs no input shape because it is inferred from the layer before it.

Finally, we pass our model to the layered_view function to visualize it. This function generates a diagram of our neural network, with each layer drawn as a rectangular box whose size reflects the output dimensions of that layer. The color of each box indicates the layer type, and with legend=True the diagram includes a legend mapping each color to its Keras layer class.

If we run this code in a Jupyter notebook, the visualization appears inline as the cell's output. In a plain Python script, layered_view returns a PIL Image object, which we can display or save ourselves.

Output for the above code

Now, let's try building a more complex neural network with multiple hidden layers. Here's the code to create this network:

from keras import layers, Model
from visualkeras import layered_view

channel_axis = -1

def build_model():
    # block 1
    img_input = layers.Input(shape=(224, 224, 3))
    x = layers.Conv2D(32, (3, 3), padding='same', use_bias=False,
                      name='block1_conv1')(img_input)
    x = layers.BatchNormalization(axis=channel_axis, name='block1_bn1')(x)
    x = layers.Activation('relu', name='block1_act1')(x)
    x = layers.Conv2D(32, (3, 3), padding='same', use_bias=False,
                      name='block1_conv2')(x)
    x = layers.BatchNormalization(axis=channel_axis, name='block1_bn2')(x)
    x = layers.Activation('relu', name='block1_act2')(x)
    x = layers.MaxPooling2D((2, 2), strides=(2, 2), padding='same',
                            name='block1_pool')(x)

    # block 2
    x = layers.Conv2D(64, (3, 3), padding='same', use_bias=False,
                      name='block2_conv1')(x)
    x = layers.BatchNormalization(axis=channel_axis, name='block2_bn1')(x)
    x = layers.Activation('relu', name='block2_act1')(x)
    x = layers.Conv2D(64, (3, 3), padding='same', use_bias=False,
                      name='block2_conv2')(x)
    x = layers.BatchNormalization(axis=channel_axis, name='block2_bn2')(x)
    x = layers.Activation('relu', name='block2_act2')(x)
    x = layers.MaxPooling2D((2, 2), strides=(2, 2), padding='same',
                            name='block2_pool')(x)

    # block 3
    x = layers.Conv2D(128, (3, 3), padding='same', use_bias=False,
                      name='block3_conv1')(x)
    x = layers.BatchNormalization(axis=channel_axis, name='block3_bn1')(x)
    x = layers.Activation('relu', name='block3_act1')(x)
    x = layers.Conv2D(128, (3, 3), padding='same', use_bias=False,
                      name='block3_conv2')(x)
    x = layers.BatchNormalization(axis=channel_axis, name='block3_bn2')(x)
    x = layers.Activation('relu', name='block3_act2')(x)
    x = layers.MaxPooling2D((3, 3), strides=(3, 3), padding='same',
                            name='block3_pool')(x)

    x = layers.Conv2D(256, (3, 3), padding='same', use_bias=False,
                      name='block31_conv1')(x)
    x = layers.BatchNormalization(axis=channel_axis, name='block31_bn1')(x)
    x = layers.Activation('relu', name='block31_act1')(x)
    x = layers.Conv2D(128, (3, 3), padding='same', use_bias=False,
                      name='block31_conv2')(x)
    x = layers.BatchNormalization(axis=channel_axis, name='block31_bn2')(x)
    x = layers.Activation('relu', name='block31_act2')(x)
    x = layers.MaxPooling2D((3, 3), strides=(3, 3), padding='same',
                            name='block31_pool')(x)

    # block 4
    x = layers.Conv2D(1024, (3, 3), padding='same', use_bias=False,
                      name='block41_conv1')(x)
    x = layers.BatchNormalization(axis=channel_axis, name='block41_bn1')(x)
    x = layers.Activation('relu', name='block41_act1')(x)
    x = layers.Conv2D(512, (3, 3), padding='same', use_bias=False,
                      name='block41_conv2')(x)
    x = layers.Dropout(0.5, name='block4_dropout')(x)
    x = layers.BatchNormalization(axis=channel_axis, name='block4_bn2')(x)
    x = layers.Activation('relu', name='block4_act2')(x)
    x = layers.MaxPooling2D((3, 3), strides=(3, 3), padding='same',
                            name='block4_pool')(x)

    # classifier head
    x = layers.Flatten(name='flatten')(x)
    x = layers.Dense(512, activation='relu', name='fc1')(x)
    x = layers.Dense(1024, activation='relu', name='fc2')(x)
    x = layers.Dense(512, activation='relu', name='fc3')(x)
    x = layers.Dense(512, activation='relu', name='fc4')(x)
    x = layers.Dense(256, activation='relu', name='fc5')(x)
    x = layers.Dense(64, activation='relu', name='fc6')(x)
    x = layers.Dense(2, activation='softmax', name='predictions')(x)
    return Model(inputs=img_input, outputs=x, name='own_build_model')

model = build_model()

layered_view(model, legend=True)

In this code, we create a neural network with multiple convolutional blocks, each with its own parameters, and we increase the number of filters and dense nodes from block to block, which makes the network considerably more complex.

If we run this code, we get a new visualization of the more complex network. The diagram now contains many more rectangular boxes, one for every layer in the network.

In conclusion, VisualKeras is a powerful tool for visualizing neural networks. It produces intuitive diagrams of our networks, making them easier to understand and debug. With VisualKeras, we can quickly and easily create visualizations of our neural networks, which is a great asset when developing and testing machine learning models.
