Dog Vision: A Transfer Learning Project 👨‍💻

Introduction to Deep Learning: Dog Vision Project

Deep learning is a branch of machine learning based on artificial neural networks. It is able to learn complex patterns and relationships within data, and we do not have to program everything explicitly. It has become increasingly popular in recent years due to advances in processing power and the availability of huge datasets. Deep learning is built on artificial neural networks (ANNs), also referred to as deep neural networks (DNNs); these networks are inspired by the structure and function of the biological neurons in the human brain, and they are designed to learn from large amounts of data.

Neural networks taking the form of a human brain.

Deep learning drives many artificial intelligence (AI) applications and services that improve automation, performing analytical and physical tasks without human intervention. Deep learning technology lies behind everyday services (such as digital assistants, voice-enabled TV remotes, and credit card fraud detection) as well as emerging technologies (such as self-driving cars). The demand in the market for deep learning engineers and developers is on the rise.

With the release of Bard (developed by Google) and ChatGPT (developed by OpenAI), many laypeople and students are taking a keen interest in complex topics like deep learning and artificial intelligence. This blog is meant to help people who are new to the world of deep learning and neural networks figure out how to go about building a project of their own and exploring the depth of this concept.

Imagine you are sitting in a restaurant sipping your favorite coffee when a dog passes by, and you start wondering what breed it is. You search, and there is no definitive answer on Google. I have come up with a project that lets you take a photograph of the dog, run it through the code, and have it predict the breed of the dog from among over 120 different breeds.

The dataset used to train the model was obtained from Kaggle. It consists of over 10,000 images in the training set. The labels, which are the names of the dog breeds, are given as a CSV file zipped together with the training images folder. After preliminary data analysis, there are roughly 82 to 84 images per dog breed. The environment being used is a Google Colab ".ipynb" notebook, since training on this large a volume of data with TensorFlow requires a GPU.
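
The notebook assumes this CSV has already been loaded and the breed names extracted; a minimal sketch of that step (the file path and the "breed" column name are assumptions based on the Kaggle dataset layout, not code from the original notebook):

import pandas as pd
import numpy as np

# Load the labels CSV (path assumed to match the Google Drive layout used later in the post)
labels_csv = pd.read_csv("drive/MyDrive/Dog vision/labels.csv")
labels = labels_csv["breed"].to_numpy()   # one breed name per training image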

The data is unstructured and in the form of images. The images are not all the same size, and training a machine learning model on them requires the images to be of the same size and file type. In this dataset, all the images are JPEGs.

In deep learning, and machine learning overall, getting the data ready is the hardest part; the rest is largely experimentation. For this project we will convert the images into tensors and convert the labels, that is, the names of the dog breeds, into boolean arrays. To do this, we first need an array with the paths of all the images so we can access them and turn them into tensors:

# Build the full file path for every training image listed in the labels CSV
filenames = ["drive/MyDrive/Dog vision/train/" + fname + ".jpg" for fname in labels_csv["id"]]
filenames

This array holds the paths of all the images. To get the unique breeds and convert the labels into boolean arrays:

unique_breeds = np.unique(labels)                            # the 120 breed names
bool_labels = [label == unique_breeds for label in labels]   # one boolean array per image
bool_labels

These are our X and y respectively. In the initial stages of writing deep learning code, it is always better to start with small sets, so it is recommended to use a slider to vary the number of images. I will be training on the full dataset here, but I have included the slider option in my .ipynb notebook.
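
A sketch of that idea follows (the subset size, the split helper, and the variable names are assumptions, not code from the original notebook):

from sklearn.model_selection import train_test_split

# Work on a subset first; in Colab, NUM_IMAGES can be driven by a slider widget
NUM_IMAGES = 1000

X = filenames
y = bool_labels

# Split the subset into training and validation sets
X_train, X_val, y_train, y_val = train_test_split(X[:NUM_IMAGES],
                                                  y[:NUM_IMAGES],
                                                  test_size=0.2,
                                                  random_state=42)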

The data is processed using a couple of functions that turn each image into a tensor and perform the required preprocessing, that is, resizing the images.

import tensorflow as tf

IMG_SIZE = 224

def process_image(image_path):
  """
  Takes an image file path and turns it into a Tensor.
  """
  # Read in the image file
  image = tf.io.read_file(image_path)
  # Turn the jpeg image into a numerical Tensor with 3 color channels (Red, Green, Blue)
  image = tf.image.decode_jpeg(image, channels=3)
  # Convert the color channel values from the 0-255 range to the 0-1 range
  image = tf.image.convert_image_dtype(image, tf.float32)
  # Resize the image to our desired size (224, 224)
  image = tf.image.resize(image, size=[IMG_SIZE, IMG_SIZE])
  return image

A deep learning algorithm works best with batches, as loading 10,000+ images at once might overwhelm the GPU. Hence, the images are turned into data batches.
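
The batching function below also calls a small helper, get_image_label, that pairs a processed image with its label; a minimal sketch of it (assumed, since it is not shown in this post):

def get_image_label(image_path, label):
  """
  Takes an image file path and its label, processes the image,
  and returns an (image, label) tuple.
  """
  image = process_image(image_path)
  return image, label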

BATCH_SIZE = 32  # 32 is a sensible default batch size for a computer vision problem; it can be varied

# Create a function to turn data into batches
def create_data_batches(x, y=None, batch_size=BATCH_SIZE, valid_data=False, test_data=False):
  """
  Creates batches of data out of image (x) and label (y) pairs.
  Shuffles the data if it's training data but doesn't shuffle it if it's validation data.
  Also accepts test data as input (no labels).
  """
  # If the data is a test dataset, we probably don't have labels
  if test_data:
    print("Creating test data batches...")
    data = tf.data.Dataset.from_tensor_slices((tf.constant(x)))  # only filepaths
    data_batch = data.map(process_image).batch(batch_size)
    return data_batch

  # If the data is a validation dataset, we don't need to shuffle it
  elif valid_data:
    print("Creating validation data batches...")
    data = tf.data.Dataset.from_tensor_slices((tf.constant(x),   # filepaths
                                               tf.constant(y)))  # labels
    data_batch = data.map(get_image_label).batch(batch_size)
    return data_batch

  else:
    # If the data is a training dataset, we shuffle it
    print("Creating training data batches...")
    # Turn filepaths and labels into Tensors
    data = tf.data.Dataset.from_tensor_slices((tf.constant(x),   # filepaths
                                               tf.constant(y)))  # labels

    # Shuffling pathnames and labels before mapping the image processing function is faster than shuffling images
    data = data.shuffle(buffer_size=len(x))

    # Create (image, label) tuples (this also turns the image path into a preprocessed image)
    data = data.map(get_image_label)

    # Turn the data into batches
    data_batch = data.batch(batch_size)
    return data_batch
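
With this function in place, the training and validation batches can be created; a usage sketch, assuming the train/validation split from earlier:

# Create training and validation data batches
train_data = create_data_batches(X_train, y_train)
val_data = create_data_batches(X_val, y_val, valid_data=True)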

The snippet above can batch any set, be it train, test, or validation, based on how those datasets are used. For a better understanding and as a checkpoint, the next step is visualizing the data as a grid of images to get a grasp of the training data:

import matplotlib.pyplot as plt

def show_25_images(images, labels):
  plt.figure(figsize=(10, 10))
  for i in range(25):
    ax = plt.subplot(5, 5, i + 1)
    plt.imshow(images[i])
    plt.title(unique_breeds[labels[i].argmax()])  # turn the boolean label back into a breed name
    plt.axis("off")

train_images, train_labels = next(train_data.as_numpy_iterator())
show_25_images(train_images, train_labels)

This function shows 25 images with their labels. The next() call combined with .as_numpy_iterator() turns a tensor batch into NumPy arrays that can be displayed as images.

The hard part is over; now comes modeling the dataset. This section takes a deep learning model and trains it on our dataset so that it learns the relationship between the features of the data, i.e., the images, and their labels. For this project, we will be using transfer learning.
Transfer learning is a machine learning method where we reuse a pre-trained model as the starting point for a model on a new task. The model used is MobileNet V2: "https://tfhub.dev/google/imagenet/mobilenet_v2_130_224/classification/5".

The next step is to create a function that builds the model from our input shape and output shape using a TensorFlow Hub KerasLayer.
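
The function relies on three constants that are not defined in this post; a plausible setup (an assumption on my part) is:

# Assumed constants for the model-building function below
INPUT_SHAPE = [None, IMG_SIZE, IMG_SIZE, 3]   # batch, height, width, color channels
OUTPUT_SHAPE = len(unique_breeds)             # one output unit per dog breed (120)
MODEL_URL = "https://tfhub.dev/google/imagenet/mobilenet_v2_130_224/classification/5"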

import tensorflow_hub as hub

def create_model(input_shape=INPUT_SHAPE, output_shape=OUTPUT_SHAPE, model_url=MODEL_URL):
  # Set up the model layers
  model = tf.keras.Sequential([
      hub.KerasLayer(model_url),                                        # layer 1: the pre-trained MobileNet V2 body
      tf.keras.layers.Dense(units=output_shape, activation="softmax")   # layer 2: the output layer, one probability per breed
  ])
  # Compile the model
  model.compile(loss=tf.keras.losses.CategoricalCrossentropy(),
                optimizer=tf.keras.optimizers.Adam(),
                metrics=["accuracy"])
  # Build the model
  model.build(input_shape)
  return model

After creating the model, callbacks need to be set up, both to stop training in situations where the model keeps training on the dataset without improving its accuracy, and to log the changes in training and validation accuracy. [The validation part will not appear when we train the model on the entire dataset.]
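
The training function below references a TensorBoard callback and an early-stopping callback that are not defined in this post; a minimal sketch of both (the log path, the patience value, and NUM_EPOCHS are assumptions):

import os
import datetime

NUM_EPOCHS = 100  # assumed; early stopping usually halts training well before this

# Log each training run to its own TensorBoard directory (path assumed)
def create_tensorboard_callback():
  logdir = os.path.join("drive/MyDrive/Dog vision/logs",
                        datetime.datetime.now().strftime("%Y%m%d-%H%M%S"))
  return tf.keras.callbacks.TensorBoard(logdir)

# Stop training if validation accuracy stops improving for 3 consecutive epochs
early_stopping = tf.keras.callbacks.EarlyStopping(monitor="val_accuracy", patience=3)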

def train_model():
  # Create a model
  model = create_model()
  # Create a new TensorBoard session every time we train a model
  tensorboard = create_tensorboard_callback()

  # Fit the model to the data, passing it the callbacks we created
  model.fit(x=train_data,
            epochs=NUM_EPOCHS,
            validation_data=val_data,
            validation_freq=1,
            validation_steps=len(val_data),
            callbacks=[tensorboard, early_stopping])
  return model

This function fits the model to our training set.
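
To train on the entire dataset instead, the full-data objects referenced in the next snippet need to exist; an assumed setup, reusing the helpers above:

# Assumed setup for the full-dataset run: fresh batches, a fresh model, fresh callbacks
full_data = create_data_batches(X, y)
full_model = create_model()
full_model_tensorboard = create_tensorboard_callback()
# There is no validation set here, so early stopping watches training accuracy instead
full_model_early_stopping = tf.keras.callbacks.EarlyStopping(monitor="accuracy", patience=3)

With those in place, the model can be fit on the entire dataset: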

full_model.fit(x=full_data, epochs=NUM_EPOCHS, callbacks=[full_model_tensorboard, full_model_early_stopping])

This will fit the model to the entire dataset. After fitting, observe the TensorBoard logs, which show a comprehensive real-time graph of the changes in training and validation accuracy.

Now that the model is trained on our dataset, we can use it to make predictions with predict(), which outputs, for each image, a probability for every one of our unique breeds. To read this probability matrix, use a NumPy function like argmax:

def get_pred_label(prediction_probabilities):
  return unique_breeds[np.argmax(prediction_probabilities)]

This returns the name of the predicted breed.
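
Put together, a usage sketch might look like this (predicting on the validation batches; variable names are assumptions):

# Train the model, then predict on the validation batches
model = train_model()
predictions = model.predict(val_data, verbose=1)   # shape: (number of validation images, 120)
print(get_pred_label(predictions[0]))              # predicted breed for the first validation image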

This next one is a bonus function to visualize the predicted label alongside the true label for a given image. It prints the name in green if the prediction is correct and in red if it is wrong, just to make the notebook a bit more interesting.

def plot_pred(prediction_probabilities, labels, images, n=69):
  pred_proba, true_label, image = prediction_probabilities[n], labels[n], images[n]
  pred_label = get_pred_label(pred_proba)
  plt.imshow(image)
  plt.xticks([])
  plt.yticks([])
  # Change the color of the title based on whether the prediction is right or wrong
  if pred_label == true_label:
    color = "green"
  else:
    color = "red"
  plt.title("{} {:2.0f}% {}".format(pred_label, np.max(pred_proba) * 100, true_label), color=color)

Something along these lines.
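
Since the validation data lives in batched Tensors, it has to be turned back into plain arrays before plotting; a sketch of that step and of calling the function (the variable names are assumptions):

# Unbatch the validation data back into lists of images and breed-name labels
val_images, val_labels = [], []
for image, label in val_data.unbatch().as_numpy_iterator():
  val_images.append(image)
  val_labels.append(get_pred_label(label))   # boolean label -> breed name string

plot_pred(prediction_probabilities=predictions,
          labels=val_labels,
          images=val_images,
          n=1)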

Deep learning models are saved with the .save() method:

def save_model(model, suffix=None):
  # Create a model directory named with the current date and time
  modeldir = os.path.join("drive/MyDrive/Dog vision/models",
                          datetime.datetime.now().strftime("%Y%m%d-%H%M%S"))
  model_path = modeldir + "-" + suffix + ".h5"  # the save format for the file is the .h5 extension
  print(f"Saving model to {model_path}...")
  model.save(model_path)
  return model_path

There we have it: the model that was just saved has been trained on over 10,000 images.
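
To reuse a saved model later, it can be loaded back in; because the model contains a TensorFlow Hub layer, Keras has to be told how to rebuild it. A usage sketch (the suffix is illustrative):

# Save the full model, then load it back, telling Keras how to rebuild the TF Hub layer
model_path = save_model(full_model, suffix="full-dataset-mobilenetv2")
loaded_model = tf.keras.models.load_model(model_path,
                                          custom_objects={"KerasLayer": hub.KerasLayer})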

The model I used can be substituted with any other model; alternatives can be explored at "https://tfhub.dev/".

I hope you now understand the basic workflow for designing your own end-to-end deep learning model.
