
An Easy Conceptual Overview of Neural Networks and Deep Learning

Now that we’ve gained a basic understanding of what a neural network is, how it functions, and which hyperparameters are involved in tuning it, let’s bring up the concept of deep learning.

So, what exactly is deep about deep learning? Deep learning refers to a powerful technique that utilizes neural networks with multiple hidden layers, hence the term “deep.” Unlike traditional shallow networks, deep learning methods employ architectures composed of many interconnected layers. These hidden layers enable the network to learn hierarchical representations of the input, capturing intricate patterns and features at different levels of abstraction. By leveraging this depth, deep learning models excel at tackling complex problems across various domains, including computer vision, natural language processing, and speech recognition.

Image from Towards AI
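To make the contrast between shallow and deep concrete, here is a minimal sketch in Keras, assuming a flattened 784-feature input (e.g., a 28×28 image) and 10 output classes; the layer sizes are illustrative, not prescriptive:

from tensorflow.keras import layers, models

# Shallow network: a single hidden layer between input and output.
shallow = models.Sequential([
    layers.Input(shape=(784,)),
    layers.Dense(64, activation='relu'),    # the only hidden layer
    layers.Dense(10, activation='softmax')
])

# Deep network: several stacked hidden layers, each one building on the
# representations learned by the layer before it.
deep = models.Sequential([
    layers.Input(shape=(784,)),
    layers.Dense(128, activation='relu'),   # hidden layer 1: low-level patterns
    layers.Dense(64, activation='relu'),    # hidden layer 2: combinations of patterns
    layers.Dense(32, activation='relu'),    # hidden layer 3: higher-level abstractions
    layers.Dense(10, activation='softmax')
])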

Image classification is a classic example of deep learning, and it is typically the product of a convolutional neural network (CNN). The CNN is one of the core deep learning architectures and is used to analyze images and videos. By sharing a small set of learned filters across the whole image, a CNN reduces the computational complexity of the network and helps mitigate overfitting. A quick comparison of parameter counts makes this concrete, as shown below.
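This small sketch (layer sizes chosen for illustration) contrasts a fully connected layer with a convolutional layer on the same 28×28 input:

from tensorflow.keras import layers, models

# Fully connected: every pixel-to-unit connection has its own weight.
dense_net = models.Sequential([layers.Input(shape=(28, 28, 1)),
                               layers.Flatten(),
                               layers.Dense(64)])

# Convolutional: 32 small 3x3 filters, reused at every position in the image.
conv_net = models.Sequential([layers.Input(shape=(28, 28, 1)),
                              layers.Conv2D(32, (3, 3))])

print(dense_net.count_params())  # 50240 weights (784 * 64 + 64)
print(conv_net.count_params())   # 320 weights (3 * 3 * 1 * 32 + 32)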

Using the example of distinguishing handwritten digits (0–9), the steps for building a CNN are shown below:

1. Load the data and create the convolutional base

import tensorflow as tf
import seaborn as sns
from matplotlib import pyplot as plt
from tensorflow.keras import datasets, layers, models, callbacks
from sklearn.metrics import ConfusionMatrixDisplay

# Load the MNIST digit images and scale pixel values from [0, 255] to [0, 1].
(train_images, train_labels), (test_images, test_labels) = datasets.mnist.load_data()
train_images = train_images.reshape(-1, 28, 28, 1) / 255.0
test_images = test_images.reshape(-1, 28, 28, 1) / 255.0

model = models.Sequential()
model.add(layers.Conv2D(filters=32, kernel_size=(3, 3), activation='relu', input_shape=(28, 28, 1)))  # input shape: (height, width, depth)
model.add(layers.MaxPooling2D(pool_size=(2, 2)))  # downsample by keeping the max of each 2x2 window
model.add(layers.Conv2D(64, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(64, (3, 3), activation='relu'))
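Before adding the classifier head, it can help to sanity-check the convolutional base. Keras provides model.summary() for this; the output lists each layer's output shape and parameter count:

# Print each layer's output shape and trainable parameter count.
model.summary()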

2. Add dense layers

model.add(layers.Flatten())  # flatten the 3D feature maps into a 1D vector
model.add(layers.Dense(64, activation='relu'))
model.add(layers.Dense(10, activation='softmax'))  # one probability per digit class (0-9)

3. Compile & Train the model

model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])
# Add early stopping to avoid wasting computation: training halts before the assigned
# number of epochs if the validation loss stops improving.
early_stop = callbacks.EarlyStopping(monitor='val_loss', min_delta=1e-4, verbose=1, mode='min', patience=5)
results = model.fit(train_images, train_labels, epochs=100, batch_size=1024,
                    validation_data=(test_images, test_labels), callbacks=[early_stop])
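One optional tweak, if you want to keep the best model rather than the last one: EarlyStopping accepts a restore_best_weights flag that rolls the network back to the weights from its best validation epoch.

# Optional: restore the weights from the epoch with the lowest validation loss
# once training stops, instead of keeping the final (possibly worse) weights.
early_stop = callbacks.EarlyStopping(monitor='val_loss', min_delta=1e-4, verbose=1,
                                     mode='min', patience=5, restore_best_weights=True)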

4. Evaluate the model on the testing data

test_loss, test_acc = model.evaluate(test_images, test_labels)
print(test_acc)  # accuracy on the held-out test set

5. Plot the graphs for evaluation metrics

# Pull the per-epoch loss and accuracy curves recorded by model.fit.
train_loss = results.history['loss']
train_acc = results.history['accuracy']
val_loss = results.history['val_loss']
val_acc = results.history['val_accuracy']

# Plot loss on the left axis and accuracy on the right.
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 5))
sns.lineplot(x=results.epoch, y=train_loss, ax=ax1, label='train_loss')
sns.lineplot(x=results.epoch, y=train_acc, ax=ax2, label='train_accuracy')
sns.lineplot(x=results.epoch, y=val_loss, ax=ax1, label='val_loss')
sns.lineplot(x=results.epoch, y=val_acc, ax=ax2, label='val_accuracy');

6. Lastly, check a picture!

# Predict the class of a single test image and display it.
single_pred = model.predict(test_images[0].reshape(1, 28, 28, 1)).argmax(axis=1)
print(single_pred)
plt.imshow(test_images[0].reshape(28, 28), cmap='gray')

# Predict all test images and summarize the results in a confusion matrix.
test_pred = model.predict(test_images).argmax(axis=1)
print(test_pred)
ConfusionMatrixDisplay.from_predictions(test_labels, test_pred)
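If you'd like to reuse the trained network later without retraining, a quick sketch (the filename here is just an example):

model.save('mnist_cnn.keras')  # example filename; saves architecture and weights together
restored = tf.keras.models.load_model('mnist_cnn.keras')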

That wraps up our summary of the concepts of neural networks and deep learning! Hopefully, this blog is useful and easy to follow!
