When it comes to machine learning or deep learning applications, the first language that comes to mind is Python. But did you know that it is not the only language with that capability? Languages other than Python that can be used for this purpose include Java, Scala, Julia, MATLAB, and R. It is the wide range of modules and libraries available in Python to support the process that makes it so hard to beat.
In this article, we will walk through model development in R for handwritten digit classification.
We start by importing the required libraries for our project. We use the library() function to load the dslabs, keras, and tensorflow packages. We must make sure these packages are installed in the R environment before loading them; we can use the install.packages() function to install any that are missing.
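As a sketch, a common pattern is to install each package only when it cannot already be loaded. (The loop below is illustrative; keras and tensorflow additionally need a Python backend, which keras::install_keras() can set up after the R packages are installed.)

```r
# Install each required package only if it is not already available
for (pkg in c("dslabs", "keras", "tensorflow")) {
  if (!requireNamespace(pkg, quietly = TRUE)) {
    install.packages(pkg)
  }
}
```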
library(dslabs)
library(keras)
library(tensorflow)
We then load the MNIST dataset and store it in a variable, using the read_mnist() function from the dslabs library.
mnist <- read_mnist()
We display an image from the data to verify that the dataset was loaded correctly.
i <- 5
image(1:28, 1:28, matrix(mnist$test$images[i,], nrow=28)[ , 28:1],
col = gray(seq(0, 1, 0.05)), xlab = "", ylab="")
Next, we implement any data preparation steps needed to get the MNIST data ready for training and validation, and prepare the class's Handwritten Digit Data Set so that it is formatted in the same way.
We load the class's Handwritten Digit Data Set using the read.csv() function and store it in the chd variable.
chd <- read.csv("combined_digits_1.csv")
dim(chd)
After that, we process the class's Handwritten Digit Data Set into a matrix containing one row per image and 28*28 (784) columns. This data set will be used for testing our model. We should consider converting to grayscale, resizing, and rotating images where necessary. Background pixels should contain values of 0, while pixels with writing should contain values near 255.
xtest <- chd[, 1:784]
ytest <- chd[, 785]
xtest <- as.matrix(xtest)
xtest <- array_reshape(xtest, c(nrow(xtest), 28, 28, 1))
Next, we also obtain the data for training the model using the read_mnist() function. It is necessary to preprocess the MNIST dataset before building our model. This includes reshaping, scaling, and adjusting the format of the targets based on the output we expect.
mnist <- read_mnist()
x_train <- mnist$train$images
y_train <- mnist$train$labels
x_val <- mnist$test$images
y_val <- mnist$test$labels
x_train <- array_reshape(x_train, c(nrow(x_train), 28, 28, 1))
x_val <- array_reshape(x_val, c(nrow(x_val), 28, 28, 1))
y_train <- to_categorical(y_train, 10)
y_val <- to_categorical(y_val, 10)
ytest <- to_categorical(ytest, 10)
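For intuition, to_categorical() turns an integer label vector into a one-hot matrix with one column per class. A base-R equivalent, written here purely for illustration, is indexing an identity matrix:

```r
# Base-R sketch of what to_categorical(y, 10) produces
y <- c(3, 0, 7)              # example digit labels
onehot <- diag(10)[y + 1, ]  # +1 because R indexing starts at 1
# Row 1 has a single 1 in column 4 (digit 3), row 2 in column 1 (digit 0), etc.
dim(onehot)                  # 3 x 10
```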
Now that our data is processed, we can move on to designing and training our model. For that, we use the keras_model_sequential() function from the keras library to create our model. Our model consists of two convolutional layers, two max pooling layers, two dropout layers, two dense layers, and a softmax activation on the output.
# Model building
input_shape <- c(28, 28, 1)
batch_size <- 128
num_classes <- 10
epochs <- 10

model <- keras_model_sequential() %>%
layer_conv_2d(filters = 32, kernel_size = c(3,3), activation = 'relu', input_shape = input_shape) %>%
layer_max_pooling_2d(pool_size = c(2, 2)) %>%
layer_conv_2d(filters = 64, kernel_size = c(3,3), activation = 'relu') %>%
layer_max_pooling_2d(pool_size = c(2, 2)) %>%
layer_dropout(rate = 0.25) %>%
layer_flatten() %>%
layer_dense(units = 128, activation = 'relu') %>%
layer_dropout(rate = 0.5) %>%
layer_dense(units = num_classes, activation = 'softmax')
summary(model)
The above code builds a sequential model, which is a linear stack of layers. The model starts with a convolutional layer that applies a set of 32 filters (each of size 3×3) to the input image. The output of this layer is then passed through a max pooling layer that reduces its spatial dimensions. This process is repeated with another convolutional layer and another max pooling layer, followed by a dropout layer that randomly drops out a fraction of the input units. The output is flattened and then passed through a dense layer with 128 units and a ReLU activation function. Another dropout layer is added before the final dense layer, which has 10 units and a softmax activation function. This layer outputs a probability distribution over the 10 possible classes.
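As a sanity check on the summary(model) output, the trainable parameter counts can be derived by hand. With 3×3 valid convolutions and 2×2 pooling, the spatial size goes 28 → 26 → 13 → 11 → 5, and each layer contributes (kernel_h × kernel_w × in_channels + 1) × filters parameters (the +1 is the bias):

```r
# Hand-computed parameter counts for the architecture above
conv1  <- (3 * 3 * 1 + 1) * 32   # 320
conv2  <- (3 * 3 * 32 + 1) * 64  # 18,496
flat   <- 5 * 5 * 64             # 1,600 units after the second pooling layer
dense1 <- (flat + 1) * 128       # 204,928
dense2 <- (128 + 1) * 10         # 1,290
total  <- conv1 + conv2 + dense1 + dense2
total                            # 225,034 trainable parameters
```

The pooling layers, dropout layers, and the flatten layer contribute no parameters, so this total should match the "Trainable params" line reported by summary(model).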
Next, we compile the model by specifying the loss function, optimizer, and evaluation metric.
# compiling our model
model %>% compile(
loss = loss_categorical_crossentropy,
optimizer = optimizer_adadelta(),
metrics = c('accuracy')
)
Finally, we train the model on the training data and validate it on the validation data.
# fitting the model(training it)
model_history <- model %>%
fit(x_train, y_train,
batch_size = batch_size,
epochs = epochs,
validation_data = list(x_val, y_val),
verbose = 1)
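The keras package provides a plot() method for the returned history object, which is a quick way to inspect the loss and accuracy curves and spot overfitting (this reuses model_history from the call above):

```r
# Plot training vs. validation loss and accuracy across epochs
plot(model_history)
```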
After the model is trained, we can determine how well it performs in practice by passing our test data through it.
#Model Testing
model %>% evaluate(xtest, ytest)
The above scores look acceptable, and further training could improve the performance of our model as well.
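Beyond the aggregate score, it can help to look at per-class errors. The following sketch reuses model and xtest from above, and takes the integer labels directly from chd[, 785] (since ytest was one-hot encoded earlier):

```r
# Predicted class = index of the largest softmax output, minus 1 (digits 0-9)
probs       <- model %>% predict(xtest)
pred_labels <- max.col(probs) - 1
true_labels <- chd[, 785]
# Confusion matrix: rows = true digit, columns = predicted digit
table(true = true_labels, predicted = pred_labels)
```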
Lastly, we save our model so that we can retrain it or deploy it later if it performs well on test data. Note that a Keras model object holds references to external (Python) objects, so saveRDS() would not preserve the weights; the keras package provides save_model_hdf5() for this purpose.
save_model_hdf5(model, "digit_classifier.h5")
In conclusion, machine learning and deep learning have revolutionized the field of artificial intelligence and opened up new possibilities for solving complex problems. Python has emerged as the most popular language for building machine learning and deep learning models due to its simplicity, versatility, and powerful libraries. Nevertheless, other languages such as R, Java, and C++ are also used for specific tasks in machine learning and deep learning. Regardless of the language used, building a successful machine learning or deep learning model requires a solid understanding of the underlying concepts, careful selection of appropriate algorithms, and thorough preprocessing of the data. With continued advances in technology and the increasing availability of data, the field of machine learning and deep learning is expected to grow and produce even more exciting applications in the years to come.