“Currently, ontology is treated as a concept that complements retrieval-augmented generation (RAG), but Persona AI’s goal is to deliver a complete AI agent without RAG, using only a text database that works in natural language.”
Yoo Seung-jae, CEO of Persona AI, emphasized that ontology will be the technology that overcomes the current limitations of RAG.
Currently, RAG builds a vector database (DB) and retrieves the entry most similar to the user’s query. It has been adopted as a way to reduce the hallucinations of large language models (LLMs) while revealing sources, by drawing on internal corporate data or external search engines.
However, when introducing a knowledge management system (KMS) in an organization, a separate vector DB had to be built alongside the LLM training dataset. Even then, 100% accuracy could not be guaranteed, and building a separate DB just for RAG was a heavy burden.
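For context, here is a minimal sketch of the RAG retrieval step described above. The bag-of-words embedding is a toy stand-in for a real neural encoder, and the document texts and function names are illustrative only, not part of any vendor's actual system.

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy embedding: a word-count vector. A production RAG system would use a
    # neural text encoder instead.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse count vectors.
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# The "vector DB": each document stored alongside its embedding.
documents = [
    "Warranty claims must be filed within 30 days of purchase.",
    "The contact center is open from 9 am to 6 pm on weekdays.",
]
vector_db = [(doc, embed(doc)) for doc in documents]

def retrieve(query: str) -> str:
    # Return the stored document most similar to the user's query.
    q = embed(query)
    return max(vector_db, key=lambda item: cosine(q, item[1]))[0]

print(retrieve("When is the contact center open"))
# -> "The contact center is open from 9 am to 6 pm on weekdays."
```

The retrieved passage is then handed to the LLM as grounding context, which is why the approach requires the separate vector DB the article describes.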
CEO Yoo said, “Knowledge management is now possible without a vector DB, using a semantic ontology DB that can even understand meaning,” and explained, “Our in-house generative AI ‘SONA’ can analyze and generate unstructured data according to the ontology data structure.”
He continued, “We can create a semantic ontology and compare it with the source documents to provide fact-based answers,” and emphasized, “It also reduces construction costs compared to vector DBs.”
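As an illustration only (SONA’s internal design has not been disclosed), one way to imagine a semantic-ontology lookup is as matching a question against structured facts that each keep a link back to the source sentence they were extracted from, which is what makes the answer fact-based. The `Fact` structure, the example entries, and the keyword matching below are assumptions, not Persona AI’s actual implementation.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Fact:
    subject: str
    predicate: str
    obj: str
    source: str  # the source sentence the fact was extracted from

# Hypothetical ontology entries; in the article's terms, these would be produced
# by analyzing unstructured documents rather than written by hand.
ontology = [
    Fact("warranty period", "is", "30 days",
         "Warranty claims must be filed within 30 days of purchase."),
    Fact("contact center hours", "are", "9 am to 6 pm on weekdays",
         "The contact center is open from 9 am to 6 pm on weekdays."),
]

def answer(question: str) -> str:
    # Match question keywords against fact subjects and return the fact together
    # with the source sentence that supports it.
    words = set(question.lower().replace("?", "").split())
    for fact in ontology:
        if set(fact.subject.split()) & words:
            return f"The {fact.subject} {fact.predicate} {fact.obj}. (Source: {fact.source})"
    return "No matching fact found."

print(answer("What is the warranty period?"))
# -> "The warranty period is 30 days. (Source: Warranty claims must be filed within 30 days of purchase.)"
```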
Building the ontology in text form also improves its usability in natural language. “Since ordinary people don’t speak as clearly as if they were writing prompts, it is important for the LLM to understand colloquial language,” said Yoo.
This means that the accuracy of answers can be greatly improved by converting spoken commands into AI-ready answers using not only natural language processing (NLP) but also natural language generation (NLG).
Moreover, Persona AI can be provided in an on-premises, on-device form thanks to its focus on lightweight models.
Currently, we offer 2B, 5B, and 7B models as small language models. In particular, the 2B model can run on a CPU alone. Recently, we also released the server AI engine ‘ON SERVER AI’.
“We were able to secure many global corporate customers because we could offer high-quality AI at the same cost as existing servers and there were no security issues,” said CEO Yoo.
He added that by combining several small language models (SLMs), “we are providing a specialized model that reduces model size by 70% while improving performance by 20%.”
Persona AI has been researching embedded AI from early on, such as applying AI to DB Insurance kiosks in 2020. At the time, there were no AI acceleration chips such as NPUs, so the functionality was implemented using field-programmable gate arrays (FPGAs).
Considering the difficulties of real users, we combined metahuman-based guidance and call bots with chatbots. We implemented interactive functions such as recognizing the user’s face, increasing the font size for elderly users, or switching to a call bot when users have trouble with the chatbot. As a result, waiting time at the counter was reduced by 31%, and user satisfaction also increased.
Based on this experience, we added technology that provides various visual materials tailored to users’ needs.
CEO Yoo Seung-jae explained that the metahumans at the time consumed excessive GPU resources, so the company recently switched to providing 4K 3D visual data to Porsche Financial and Daimler Trucks.
If there is a problem with the vehicle, users can search for the manual appropriate to the situation using a chatbot and a 3D model of the vehicle rendered on the web. In particular, Porsche Financial reported that after introducing Persona AI’s solution, customer satisfaction increased by 30% and employee satisfaction increased by 40%.
“It is delivered as a 4K web experience, so it can be used on a variety of devices, and it is easier and simpler than implementing metahumans,” Yoo said.
Currently, Persona AI’s AI Contact Center (AICC) supports not only four languages (Korean, Chinese, English, and Japanese) but also multimodal functions such as recognizing text from voice and providing images.

“Persona AI’s AICC is just one step in the process of developing into an AI agent,” he said. This is because, through its research and development so far, the company has secured not only highly accurate AI chatbots and call bots but also the technology to create metahumans, the most human-friendly user interface (UI).
In particular, he predicted that the everyday use of AI agents will accelerate as devices advance. “When AR glasses are introduced, anyone will be able to use AI agents as metahumans,” he said.
In fact, at an exhibition held in May, AICC content using AR glasses was showcased and received a positive response from visitors. With AI agents implemented in the form of characters and real people, the reaction was, “The immersion is higher because you experience visual and auditory stimulation at the same time with AR glasses.”
CEO Yoo Seung-jae predicted that this kind of technological development would also bring about new social opportunities.
“There is a view that AI will threaten jobs, but it can also be a new opportunity,” he said, expressing his expectations: “I believe that experience and know-how can be passed on even after retirement due to physical limitations or old age.”
Reporter Park Soo-bin sbin08@aitimes.com