Kaarel Kotkas, CEO and Founder of Veriff – Interview Series


Kaarel Kotkas is the CEO and Founder of Veriff and serves as the strategic thinker and visionary behind the company. He leads Veriff's team in staying ahead of fraud and competition in the rapidly changing field of online identification. Known for his energy and enthusiasm, Kotkas encourages the team to uphold integrity in the digital world. In 2023, he was recognized in the EU Forbes 30 Under 30, and in 2020, he was named EY Entrepreneur of the Year in Estonia. Nordic Business Report has also included him among the 25 most influential young entrepreneurs in Northern Europe.

Veriff is a global identity verification company that helps online businesses reduce fraud and comply with regulations. Using AI, Veriff automatically verifies identities by analyzing various technological and behavioral indicators, including facial recognition.

What inspired you to found Veriff, and what challenges did you face in building an AI-powered fraud prevention platform?

My motivation for Veriff came after witnessing firsthand how easy it was for people online to pretend to be someone else. When buying biodegradable string from eBay for my family's farm at the age of 14, I effortlessly bypassed PayPal's 18+ age restrictions with a touch of Photoshop to change my birth year on the copy of my identity document.

I continued to see the problem of online users misrepresenting their identity to pass age checks and other security measures. It was because of these experiences that I came up with the idea for Veriff.

As for challenges, a year after founding the company, we gave our team the weekend off. This was the same day we did a bug fix, which resulted in a full interruption of our monitoring capabilities. We didn't notice our service shutting itself down until Saturday morning. Come Monday morning, I had to meet face-to-face with our biggest customer, who had lost hundreds of dollars in revenue. I was transparent in that meeting, explaining the mistakes on our end. We shook hands and went back to work. What I learned from this is that as a founder and business leader, you must expect and prepare for challenges. Moreover, transparency is vital for building trust. Lastly, demonstrating a history of overcoming challenges can prove even more valuable, since it shows you can successfully tackle problems and are resilient.

With deepfakes becoming more sophisticated, especially in political settings, what do you think are the most significant risks they pose to elections and democracy?

This election season, the integrity of the voting process is in jeopardy. AI can analyze vast amounts of data to identify voter preferences and trends, enabling campaigns to tailor messages and target voters with the issues they care most about. Bad actors are well equipped to create false narratives of candidates performing actions they never did or making statements they never made, thus damaging their reputations and misleading voters.

So far, we have seen deepfakes of celebrities endorsing presidential candidates and a fake Biden robocall. While technology does exist to help distinguish between AI-generated content and the real deal, it isn't yet viable to implement broadly at scale. With the high stakes and election credibility on the line, something must be done to preserve public trust. The future growth of the digital economy and its fight against digital fraud center on proven identities and authentic, verified online accounts.

Deepfakes can manipulate not only images but also voices. Do you believe one medium is more dangerous than the other when it comes to deceiving voters?

Generally, especially in the context of U.S. elections, both should be treated equally as threats to democracy. Our most recent report, the Veriff Fraud Index 2024: Part 2, found that 74% of respondents in the U.S. are worried about AI and deepfakes impacting elections.

The evolution of AI has turbocharged the threat to security during this year's elections, not only in the U.S. but across the globe. Whether it's deepfake images, AI-generated voices in robocalls attempting to skew voter opinions, or fabricated videos of candidates, each provokes warranted concern.

Let's look at the bigger picture here. When there are multiple data points available, it's easier to assess the "threat level." A single image may not be enough to tell if it's fraudulent, but a video provides more clues, especially if it has audio. Adding details like the device used, the location, or who recorded the video increases confidence in its authenticity. Fraudsters always try to limit the scope of data because it makes the content easier to manipulate. I view robocalls as more dangerous than deepfake videos because creating fake audio is simpler than generating high-quality fake video. Plus, using LLMs makes it possible to adjust fake audio during calls, making it even more convincing.
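To make the idea of "more data points, more confidence" concrete, here is a minimal, hypothetical scoring sketch in Python. It is purely illustrative (the signal names and weights are assumptions, not Veriff's actual model): each independent, corroborating signal raises the confidence that a piece of media is authentic, so a lone image scores far lower than a video with audio, device, and location metadata.

```python
# Illustrative only: the fewer independent signals available,
# the less confident we can be that a piece of media is authentic.
from dataclasses import dataclass

@dataclass
class MediaEvidence:
    has_image: bool = False
    has_video: bool = False
    has_audio: bool = False
    device_metadata: bool = False    # e.g. camera model, capture app
    location_metadata: bool = False  # e.g. GPS or network location
    known_recorder: bool = False     # identity of whoever recorded it

def authenticity_confidence(evidence: MediaEvidence) -> float:
    """Return a rough 0.0-1.0 confidence score: more corroborating
    signals mean higher confidence that the content is genuine."""
    weights = {
        "has_image": 0.10,
        "has_video": 0.25,
        "has_audio": 0.15,
        "device_metadata": 0.20,
        "location_metadata": 0.15,
        "known_recorder": 0.15,
    }
    return round(sum(w for name, w in weights.items() if getattr(evidence, name)), 2)

# A lone image scores low; a richer set of signals scores much higher.
print(authenticity_confidence(MediaEvidence(has_image=True)))          # 0.1
print(authenticity_confidence(MediaEvidence(has_video=True, has_audio=True,
                                            device_metadata=True,
                                            location_metadata=True)))  # 0.75
```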

Given the upcoming elections, what should governments and election commissions be most concerned about regarding AI-driven disinformation?

Governments and election commissions need to understand the potential scope of deepfake capabilities, including how sophisticated and convincing these instances of fraud have become. Deepfakes are especially effective when deployed against organizations with disjointed, inconsistent identity management processes and poor cybersecurity, making it more critical than ever to implement robust security measures and a layered approach to security.

Still, there is no one-size-fits-all solution, so a coordinated, multi-faceted approach is vital. This might include robust, comprehensive checks on asserted identity documents; counter-AI to detect manipulation of incoming images, especially where remote voting is concerned; and, most importantly, identifying the creators of deepfakes and fraudulent content at the source. The responsibility of verifying votes lies with governments and electoral commissions, as well as with technology and identity providers.

What role can AI and identity verification technologies like Veriff play in countering the impact of deepfakes on elections and political campaigns?

AI is both a threat and an opportunity. Nearly 78% of U.S. decision-makers have seen a rise in the use of AI in fraudulent attacks over the past year. On the flip side, nearly 79% of CEOs use AI and ML in fraud prevention. At a time when fraud is on the rise, fraud prevention strategies must be holistic – no single tool can combat such a multifaceted threat. Still, AI and identity verification can empower businesses and users with a multilayered stack that brings together biometrics, identity verification, crosslinking, and other solutions to get ahead of fraudsters.

At Veriff, we use our own AI-powered technology to build our deepfake detection capabilities. This means our tools improve from what we learn every time we see a deepfake. Taking large amounts of data and searching for patterns that have appeared before in order to predict future outcomes relies on both automated technologies and human knowledge and intelligence. Humans have a better understanding of context, identifying anomalies to create a feedback loop that can be used to enhance AI models. Combining different insights and expertise to create a comprehensive approach to identity verification and deepfake detection has allowed Veriff and its customers to stay ahead of the curve.
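As a rough illustration of this kind of human-in-the-loop feedback loop, here is a short Python sketch. It is a generic pattern, not Veriff's pipeline, and every name in it (score_fn, review_queue, labeled_examples) is hypothetical: cases the automated detector is unsure about are routed to a human reviewer, and the reviewer's verdict becomes a new labeled example for later retraining.

```python
# Hypothetical sketch of a human-in-the-loop feedback loop for deepfake detection.
from typing import Any, Callable, List, Tuple

REVIEW_BAND = 0.30  # scores this close to 0.5 are "uncertain" and go to a human

def triage(media: Any,
           score_fn: Callable[[Any], float],
           review_queue: List[Any],
           labeled_examples: List[Tuple[Any, bool]]) -> None:
    """Route one piece of media to an automated decision or to human review."""
    p_fake = score_fn(media)               # detector's estimated deepfake probability
    if abs(p_fake - 0.5) < REVIEW_BAND:
        review_queue.append(media)         # uncertain: ask a human reviewer
    else:
        labeled_examples.append((media, p_fake >= 0.5))  # confident: log the decision

def record_human_verdict(media: Any, is_fake: bool,
                         labeled_examples: List[Tuple[Any, bool]]) -> None:
    """A reviewer's verdict becomes a new training example; periodically retraining
    the detector on labeled_examples closes the feedback loop."""
    labeled_examples.append((media, is_fake))

# Usage with a toy scoring function standing in for a real detector:
queue, examples = [], []
triage("clip_a.mp4", lambda m: 0.92, queue, examples)  # confidently flagged as fake
triage("clip_b.mp4", lambda m: 0.55, queue, examples)  # uncertain, sent to review
record_human_verdict("clip_b.mp4", is_fake=False, labeled_examples=examples)
```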

How can businesses and individuals better protect themselves from being influenced by deepfakes and AI-driven disinformation?

Protecting yourself from being influenced by deepfakes and AI-driven disinformation starts with education and awareness of AI's expansive capabilities, coupled with proven identities and authentic, verified online accounts. To determine whether you can trust a source, you have to look at the cause rather than the symptoms. We must confront the problem at its source: where, and by whom, these deepfakes and fraudulent resources are being generated.

Consumers and businesses should only trust information from verified sources, such as verified social media accounts and well-credited news outlets. In addition, using fact-checking websites and looking for anomalies in audio or video clips (unnatural movements, strange lighting, blurriness, or mismatched lip-syncing) are just a few of the ways that companies can protect themselves from being misled by deepfake technology.

Do you think there's enough public awareness of the dangers of deepfakes? If not, what steps should be taken to improve understanding?

We're still in the phase of growing awareness about AI and educating people on its potential.

According to the Veriff Fraud Index 2024: Part 2, over a quarter (28%) of respondents have experienced some form of AI- or deepfake-generated fraud over the past year, a striking result for an emerging technology and a sign of the growing nature of this threat. What's more, this number could actually be much higher, as 20% say they don't know whether they have been targeted or not. Given the sophisticated nature of AI-generated fraud attempts, it is highly likely that many respondents have been targeted without knowing it.

Individuals should be cautious when encountering suspicious emails or unexpected phone calls from unfamiliar sources. Requests for sensitive information or money should always be met with skepticism, and it's crucial to trust your instincts and seek clarity if something feels wrong.

What role do you see regulatory bodies playing in the fight against AI-generated disinformation, and how can they collaborate with companies like Veriff?

Given the extent to which deepfake technology has been used to deceive the public and amplify disinformation efforts, and with the U.S. election still underway, it remains to be seen how great an impact this technology will have on the election and on broader society. Still, regulatory bodies are taking action to mitigate the threats of deepfake technology.

A lot of the responsibility for mitigating the impact of disinformation falls on the owners of the platforms we use most often. For instance, leading social media companies must take more responsibility by implementing robust measures to detect and prevent fraudulent attacks and by safeguarding users from harmful misinformation.

How do you see Veriff's technology evolving in the next few years to stay ahead of fraudsters, particularly in the context of elections?

In our rapidly digitizing world, the internet's future hinges on online users' ability to prove who they are; that way, businesses and users alike can confidently interact with one another. At Veriff, trust is synonymous with verification. We aim to make sure that digital environments foster a sense of safety and security for the end user. This goal will require technology to evolve to confront the challenges of today, and we're already seeing this with wider acceptance of facial recognition and biometrics. Data shows that consumers view facial recognition and biometrics as the most secure way of logging into an online service.

Looking ahead, we envision this trend continuing and a future where, rather than users constantly entering and re-entering their credentials as they perform different tasks online, they have "one reusable identity" that represents their persona across the web.

To bring us a step closer to this goal, we recently updated our Biometric Authentication solution to improve accuracy and user experience and to strengthen security for stronger identity assurance. These latest advancements in biometric technology enable our technology to adapt to individual user behaviors, supporting ongoing user authentication rather than authentication during just one session. This advancement, in particular, represents forward progress on our journey to one reusable digital identity.

Veriff is recognized for its global reach in fraud prevention. What makes Veriff's technology stand out in such a competitive space?

Veriff's solution offers speed and convenience: it's 30x more accurate and 6x faster than competing offerings. We have the largest identity document specimen database in the IDV/Know Your Customer (KYC) industry. We can verify people against 11,500 government-issued ID documents from more than 230 countries and territories, in 48 different languages. This convenience and reduced friction enable organizations to convert more users, mitigate fraud, and comply with regulations. We also have a 91% automation rate, and 95% of genuine users are verified successfully on their first try.

Veriff was one of the first IDV companies to obtain the Cyber Essentials certification. Cyber Essentials is an effective government-backed standard that protects against the most common cyber attacks. Obtaining this certification demonstrates that Veriff takes cybersecurity seriously and has taken steps to protect its data and systems. This achievement is a testament to the company's unwavering commitment to cybersecurity and our dedication to protecting our customers' data. Most recently, we completed the ISO/IEC 30107-3 iBeta Level 2 compliance evaluation for biometric passive liveness detection, an independent external validation confirming that Veriff's solution meets the highest standard of biometric security.
