3 months at TAI: Our internship journey
Background
Application Features
Frontend
Backend
AI/ML
DevOps
How all these parts work together
Keywords:

Background

During our internship at TAI, we were presented with a final project that allowed us to apply the skills we had acquired during our time here. In contrast to previous projects, which were completed individually, we were encouraged to work as a team to gain practical experience in collaboration while working on a real-world application. We were also given the freedom to choose the project we wanted to work on. So, two groups were formed:

Group 1: React + Node + ML + DevOps

Group 2: PHP + ML + DevOps

We belonged to the former. Our team consisted of Manish Hyongoju as React Developer, Prashant Gharti Magar as Node Developer, Sanjeev Bhushal as DevOps Engineer, and Anish Dahal, Sonika Acharya, and Pratik Dahal as ML Engineers.

Thus, the following considerations were made as we researched and brainstormed project ideas.

  1. Can a chatbot be seamlessly integrated with this project?
  2. Is it feasible to develop at least a working prototype within 1 month?
  3. How different is it from the projects completed in the first 2 months?
  4. How significant will the learning outcomes be?
  5. Does the project have potential for real-world application?

Our React-Node-DevOps + ML team proposed developing a chat application tailored to TAI’s business needs, combining features from Google Chat and Slack. The proposed software passed the feasibility checks, including integration with the ML team’s chatbot, feasibility within the time constraints, and a significant gain in learning outcomes. Moreover, among the potential real-world use cases was adopting the product in-house, entirely free of charge, if it delivered the promised outcomes. The mentors approved the proposal.

Application Features

  • AI chatbot that answers queries about TAI.
  • Real-time text-based group conversation.
  • Direct messaging.
  • Rich text formatting.
  • Add/Update/Delete workspaces and channels.
  • Invite members to or remove them from a channel.
  • Update profile.
  • Update password.
  • Social sign-in with a TAI Google account.

Frontend

Before we began building our actual application, we designed a prototype to get a concrete overview of how the final application would look, and to address potential issues and changes in the early stages of development. For this, we used Figma as a design tool.

Figma is a cloud-based design tool that is comparable to Sketch in functionality and features, but with key differences that make Figma better for team collaboration. It is a collaborative web application for interface design, with additional offline features enabled by desktop applications for macOS and Windows.
As we began developing our prototype, we recognized the importance of having a clear understanding of the application’s flow. To achieve this, we created a Data Flow Diagram (DFD) in Figma, which helped us visualize and plan the data flow within our application.

After planning, we began designing our prototype in Figma, and the following are the designs.

After completing the design, the actual app was built using React and TypeScript.

Backend

Backend refers to the server-side part of a web application that is responsible for processing data, managing databases, and communicating with the frontend, other servers, and other systems. The backend in our project was developed using Node.js as the runtime environment, Express as the framework, MongoDB as the database, and Socket.io for handling real-time communication.

Node.js is an open-source server-side runtime environment built on Chrome’s V8 JavaScript engine, allowing developers to run JavaScript code on the server side, outside of a web browser. Because Node is non-blocking, i.e. it runs asynchronously, it can handle many simultaneous requests. This makes it highly suitable for applications like chat.

Regarding Express, it is a popular web application framework that is also minimalist and unopinionated, so it gives developers flexibility without boilerplate or a strict set of rules, while simplifying the process of building web applications by providing the needed features such as routing, middleware, and template rendering.
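
To make the routing and middleware ideas concrete, here is a toy sketch in plain TypeScript of what a framework like Express does under the hood: map "METHOD /path" to a handler and run middleware before it. This is a conceptual illustration, not the Express API, and the routes are made up.

```typescript
// A toy router: illustrates routing + middleware conceptually.
type Req = { path: string };
type Handler = (req: Req) => string;

class ToyRouter {
  private routes = new Map<string, Handler>();
  private middleware: ((req: Req) => void)[] = [];

  // Middleware runs before every handler (e.g. logging, auth).
  use(fn: (req: Req) => void): void {
    this.middleware.push(fn);
  }

  get(path: string, handler: Handler): void {
    this.routes.set(`GET ${path}`, handler);
  }

  handle(method: string, path: string): string {
    const req: Req = { path };
    this.middleware.forEach((fn) => fn(req));
    const handler = this.routes.get(`${method} ${path}`);
    return handler ? handler(req) : "404 Not Found";
  }
}

const app = new ToyRouter();
app.use((req) => console.log(`request: ${req.path}`)); // logging middleware
app.get("/channels", () => JSON.stringify(["general", "random"]));
```

In Express the shape is the same, just with real HTTP requests and responses flowing through the chain.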

As good system architecture and database design are indispensable for an application’s robustness, efficiency, security, and scalability, significant time was devoted to considering all the available options for a chat application. For the selection of the database, there were three industry-standard choices suited to chat applications: Cassandra, a column-based NoSQL database; PostgreSQL, one of the most popular and best SQL databases; and MongoDB, a document-oriented database. Each option had its own benefits over the others. We were building an application at a small scale, so MongoDB and PostgreSQL were our remaining choices. We chose MongoDB because of the flexibility it provides if we need to update our database schemas, and because it is better suited to the potential and eventual need to scale horizontally.
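
The schema flexibility mentioned above can be sketched with a hypothetical chat message document. The field names here are illustrative, not the project’s actual schema; the point is that MongoDB documents are plain JSON, so a new optional field can be introduced without migrating existing documents.

```typescript
// Hypothetical shape of a chat message document in MongoDB.
interface MessageDoc {
  channelId: string;
  senderId: string;
  text: string;
  sentAt: string;        // ISO timestamp
  reactions?: string[];  // added later; old documents simply lack the field
}

// A document written before "reactions" existed remains valid as-is.
const oldMessage: MessageDoc = {
  channelId: "general",
  senderId: "u42",
  text: "Hello TAI!",
  sentAt: "2023-01-15T09:30:00Z",
};

// A newer document can carry the extra field with no schema migration.
const newMessage: MessageDoc = { ...oldMessage, reactions: [":thumbsup:"] };
```

In a relational database, the same change would typically require an ALTER TABLE or a new join table.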

As ER (Entity-Relationship) diagrams are not needed for MongoDB, since it is a NoSQL database and does not follow the traditional relational model of databases like SQL, we did not design an ER diagram.

For handling real-time, bidirectional communication between clients and servers over the web, there is no better choice than Socket.io, as it is primarily written in JavaScript and designed to work with Node.js on the server side. Socket.io is a library that provides an abstraction layer on top of WebSocket and other transport protocols, such as HTTP long-polling, to deliver real-time communication between clients and servers. It provides a range of features, including automatic reconnection, fallback to other transport protocols, and built-in support for rooms and namespaces.
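
The "rooms" feature maps naturally onto channels in a chat application: a room is essentially a named set of members, and a broadcast targets only that set. The toy model below sketches just that idea in plain TypeScript; the real library additionally manages sockets, transports, and reconnection.

```typescript
// A toy model of Socket.io's "rooms" concept: room name -> set of member ids.
class Rooms {
  private rooms = new Map<string, Set<string>>();

  join(room: string, memberId: string): void {
    if (!this.rooms.has(room)) this.rooms.set(room, new Set());
    this.rooms.get(room)!.add(memberId);
  }

  leave(room: string, memberId: string): void {
    this.rooms.get(room)?.delete(memberId);
  }

  // The ids a broadcast to `room` would reach, excluding the sender
  // (mirroring socket.to(room).emit semantics).
  broadcastTargets(room: string, senderId: string): string[] {
    return Array.from(this.rooms.get(room) ?? []).filter((id) => id !== senderId);
  }
}

const rooms = new Rooms();
rooms.join("general", "alice");
rooms.join("general", "bob");
rooms.join("random", "carol");
```

With Socket.io itself, joining and broadcasting look like `socket.join("general")` and `socket.to("general").emit("message", payload)`.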

AI/ML

As previously mentioned, this project included a chatbot. It was made using various NLP techniques and a sequential model. The chatbot we designed was a simple retrieval-based chatbot, which means it retrieves predefined answers for a given query.

At first, the data was collected via the company website and a survey form, and some questions were asked of individual employees. The collected data was then stored in a JSON file which, after some preprocessing, was used to train the classification model. The model performs intent classification; the classified intent is then used to get the response to a user query by mapping the output of the classification onto the JSON dataset.
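
A minimal sketch of that retrieval step: once the model has classified a query’s intent, the response is simply looked up in the JSON dataset. The intents and responses below are made up for illustration and are not the actual TAI dataset.

```typescript
// Hypothetical slice of the intents dataset.
interface Intent {
  tag: string;
  responses: string[];
}

const dataset: Intent[] = [
  { tag: "greeting", responses: ["Hello! How can I help you?"] },
  { tag: "office_hours", responses: ["TAI is open 9am to 5pm, Sunday to Friday."] },
];

// Map the classifier's predicted tag back to a canned response.
function respond(predictedTag: string): string {
  const intent = dataset.find((i) => i.tag === predictedTag);
  return intent ? intent.responses[0] : "Sorry, I don't understand that yet.";
}
```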

The dataset collected was quite small, so we performed data augmentation using the word-swapping technique in order to expand the data and increase model performance.
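
Word-swap augmentation can be sketched as follows: generate extra training sentences by swapping pairs of words. Real augmentation usually picks random positions; this illustration swaps each adjacent pair once so the output is deterministic, and is not the team’s exact implementation.

```typescript
// Generate sentence variants by swapping each adjacent word pair once.
function swapAugment(sentence: string): string[] {
  const words = sentence.split(" ");
  const variants: string[] = [];
  for (let i = 0; i < words.length - 1; i++) {
    const copy = [...words];
    [copy[i], copy[i + 1]] = [copy[i + 1], copy[i]];
    variants.push(copy.join(" "));
  }
  return variants;
}
```

Since a retrieval chatbot only needs the intent, not perfect grammar, the slightly scrambled variants still carry the same label and stretch a small dataset further.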

Before training, the dataset goes through various preprocessing steps such as lowercasing, punctuation removal, lemmatization, and tokenization. Finally, the tokenized words go through a word2vec embedding generator, which converts the text data into encoded data. For this embedding generation we used a pre-trained word2vec model, which was trained on the GloVe dataset.
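
A simplified version of that preprocessing pipeline is sketched below: lowercase, strip punctuation, and tokenize on whitespace. Real lemmatization needs a dictionary such as WordNet; here a trivial suffix rule stands in for it, purely for illustration.

```typescript
// Toy preprocessing: lowercase -> punctuation removal -> tokenize -> "lemmatize".
function preprocess(text: string): string[] {
  const cleaned = text.toLowerCase().replace(/[^\w\s]/g, ""); // drop punctuation
  return cleaned
    .split(/\s+/)
    .filter((w) => w.length > 0)
    .map((w) => (w.endsWith("ing") ? w.slice(0, -3) : w)); // toy stand-in lemmatizer
}
```

Each resulting token would then be looked up in the pre-trained embeddings to produce the model’s numeric input.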

The encoded text data then goes through the model, which consists of an embedding layer, LSTM layers, and a dense layer. The embedding layer is only used to map each encoded token to its respective vector of size 200. The vectors then go through the LSTM layers, which help learn long-term dependencies in the data. The output of the hidden state of the LSTM layers goes through the dense layer with a softmax activation function, which provides the required classification, in this case the probability of the query belonging to each intent.
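
The final classification step can be sketched in plain TypeScript: the LSTM’s last hidden state goes through a dense layer, and softmax turns the scores into intent probabilities. The weights and sizes below are tiny made-up numbers (hidden size 3, two intents); the real model used Keras layers with an embedding size of 200.

```typescript
// Dense layer: scores[i] = sum_j weights[i][j] * hidden[j] + bias[i].
function dense(hidden: number[], weights: number[][], bias: number[]): number[] {
  return weights.map((row, i) =>
    row.reduce((sum, w, j) => sum + w * hidden[j], bias[i])
  );
}

// Softmax: exponentiate and normalize so the scores become probabilities.
function softmax(scores: number[]): number[] {
  const max = Math.max(...scores); // subtract max for numerical stability
  const exps = scores.map((s) => Math.exp(s - max));
  const total = exps.reduce((a, b) => a + b, 0);
  return exps.map((e) => e / total);
}

// Made-up hidden state and weights for a 2-intent classifier.
const probs = softmax(dense([0.5, -0.2, 0.1], [[1, 0, 0], [0, 1, 1]], [0, 0]));
```

The predicted intent is simply the index with the highest probability, which is then mapped back to a response in the JSON dataset.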

The model used was a sequential model, one of the simplest models that can be used for chatbots. Because the dataset was small, containing 47 intents and about 600 data points, and the model was simple, the predicted intent is not always correct. There are two major reasons for incorrect predictions:

  • The user query may contain new words that are not present in the pre-trained word2vec model, so the vector for such a word becomes a zero vector of size 200. The model may then not process it as required, leading to incorrect predictions.
  • The intent of the user query may be an entirely new intent on which the model was not trained, leading to an incorrect prediction.
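
The first failure mode, out-of-vocabulary words, can be sketched as follows. The vocabulary and vector size (4 instead of 200) are made up for illustration: a word missing from the pre-trained embeddings maps to an all-zeros vector, which carries no signal for the model.

```typescript
const VEC_SIZE = 4; // 200 in the real model

// Hypothetical slice of a pre-trained embedding table.
const embeddings: Record<string, number[]> = {
  hello: [0.1, 0.3, -0.2, 0.5],
  tai: [0.7, -0.1, 0.2, 0.0],
};

// Known words get their learned vector; unknown words get zeros.
function embed(word: string): number[] {
  return embeddings[word] ?? new Array(VEC_SIZE).fill(0);
}
```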

DevOps

DevOps is a set of practices that promotes collaboration and communication between development and operations teams to build, test, and deploy software more quickly and reliably.

As part of our chatbot application development, we wanted to make sure that our DevOps processes were efficient and effective. To do this, we first created an EC2 instance on which we would host our application, so that it could be accessed from anywhere on the web.

We ensured that the server environment was set up appropriately, including configuring the necessary network settings so that the application could be accessed securely. Next, we created a Dockerfile to specify the dependencies and settings required to run our chatbot application. This allowed us to package the application in a portable way, making it easy to deploy and run in different environments. To ensure that our application was always up to date and running the latest version of the code, we set up a CI/CD pipeline. This pipeline automatically built and deployed the application to the EC2 instance.
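
A minimal Dockerfile of the kind described might look like the following, assuming a Node.js backend. The base image, port, and entry-point script are illustrative assumptions, not the project’s actual files.

```dockerfile
# Hypothetical Dockerfile for the Node.js backend (names are illustrative).
FROM node:18-alpine
WORKDIR /app
# Install dependencies first so this layer is cached between builds.
COPY package*.json ./
RUN npm ci --omit=dev
# Copy the application source.
COPY . .
EXPOSE 3000
CMD ["node", "server.js"]
```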

How all these parts work together

As discussed above, the “TAI communication platform” was developed using various technologies and components such as frontend, backend, DevOps, and AI/ML. All these components worked together to create a seamless and efficient chat application.

The frontend was responsible for providing the user interface for the application. It included the design, layout, and features of the application that TAI employees and clients could see and interact with. The backend, on the other hand, was responsible for handling all the server-side tasks of the application, such as storing user data, managing chats and channels, and handling authentication. DevOps played an important role in ensuring that the application was deployed and managed efficiently. It involved setting up the infrastructure, configuring servers, and ensuring that the application was available and performing optimally. Finally, AI/ML was used to give the chat application a dedicated AI chatbot. Machine learning models were used to enhance the chatbot’s capabilities, enabling it to understand and respond to user queries more accurately. All these components worked together seamlessly to create an efficient and reliable chat application for team collaboration at TAI, as envisioned and proposed.

Keywords:

MERN: MongoDB-Express-React-Node, a widely used technology stack for developing full-stack web applications.

PHP: A popular general-purpose scripting language especially suited to developing web applications.

ML: Machine Learning, a branch of AI and computer science which focuses on the use of data and algorithms to imitate the way humans learn, gradually improving its accuracy. https://www.ibm.com/topics/machine-learning

NLP: Natural Language Processing, a branch of artificial intelligence, or AI, concerned with giving computers the ability to understand text and spoken words in much the same way human beings can. https://www.ibm.com/topics/natural-language-processing
