Igor Jablokov is the CEO and Founder of Pryon. Named an "Industry Luminary" by Speech Technology Magazine, he previously founded industry pioneer Yap, the world's first high-accuracy, fully automated cloud platform for voice recognition. After its products were deployed by dozens of enterprises, the company became Amazon's first AI-related acquisition. The firm's inventions then served as the nucleus for follow-on products such as Alexa, Echo, and Fire TV. As a Program Director at IBM, Igor led the team that designed the precursor to Watson and developed the world's first multimodal Web browser.
Igor was awarded Eisenhower and Truman National Security fellowships to explore and expand the role of entrepreneurship and venture capital in addressing geopolitical concerns. As an innovator in human language technologies, he believes in fostering career and academic opportunities for others entering STEM fields. As such, he serves as a mentor in the TechStars Alexa Accelerator, was a Blackstone NC Entrepreneur-In-Residence (EIR), and founded a chapter of the Global Shapers, a program of the World Economic Forum.
Igor holds a B.S. in Computer Engineering from The Pennsylvania State University, where he was named an Outstanding Engineering Alumnus, and an MBA from The University of North Carolina.
Your journey in AI began with the first cloud-based speech recognition engine at Yap, later acquired by Amazon. How did that experience shape your vision for AI and influence your current work at Pryon?
I'll start a bit earlier in my career, as Yap wasn't our first rodeo in dealing with natural language interactions.
My first foray into natural language interactions began at IBM, where I started as an intern in the early '90s and eventually became Program Director of Multimodal Research. There I had a team that developed what you could consider a baby Watson. It was far ahead of its time, but IBM never greenlit it. Eventually I became frustrated with the decision and departed.
Around that time (2006), I recruited top engineers and scientists from Broadcom, IBM, Intel, Microsoft, Nuance, NVIDIA, and more to start the first AI cloud company, Yap. We quickly acquired dozens of enterprise and carrier customers, including Sprint and Microsoft, and almost 50,000,000 users on the platform.
Since we had former iPod engineers on the team, we were able to back-channel into Apple within a year of founding the company. They brought us in to prototype a version of Siri; this was before the iPhone was released. Half a decade later, we were secretly acquired by Amazon to develop Alexa for them.
Can you elaborate on the concept of "knowledge friction" that Pryon aims to solve, and why it's crucial for modern enterprises?
Knowledge friction comes from the fact that, historically, organizations haven't had one unified instantiation of knowledge. While we've had such repositories on our college campuses and in our civic communities in the form of libraries, there was no unification of data and knowledge on the enterprise side due to the myriad of vendors they used.
As a result, everyone across virtually every organization feels friction when searching for the knowledge they need to perform their jobs and workflows. This is where we saw the opportunity for Pryon. We thought there was an opportunity for a new layer above the enterprise software stack that, by using natural language prompts, could traverse systems of record and retrieve various object types (text, images, videos, structured and unstructured data) and pull everything together in a sub-second response time.
That was the birth of Pryon, the world’s first AI-enhanced knowledge cloud.
Pryon's platform integrates advanced AI technologies like computer vision and large language models. Can you explain how these components work together to enhance knowledge management?
Pryon developed an AIP, an artificial intelligence platform, that transforms content from its fundamental static units into interactive knowledge. It achieves this by integrating an ingestion pipeline, a retrieval pipeline, and a generative pipeline into a single experience. The platform taps into your existing systems of record, which might include a wide range of sources such as Confluence, Documentum, SAP, ServiceNow, Salesforce, SharePoint, and many more. This content can be in the form of audio, video, images, text, PowerPoints, PDFs, Word files, and web pages.
The AIP transforms these objects into a knowledge cloud, which can then publish and subscribe to any interactive or sensory experiences you might need. Whether people need to interact with this data or there are machine-to-machine transactions requiring the union of all this disparate knowledge, the platform ensures consistency and accessibility. Essentially, it performs ETL (Extract, Transform, Load) on the left side, powering experiences via APIs on the right side.
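To make that flow concrete, here is a minimal, hypothetical Python sketch of an ingest, retrieve, and generate pipeline of the kind described above; the `KnowledgeCloud` class, its methods, and the toy word-overlap scoring are illustrative assumptions, not Pryon's actual API.

```python
# Hypothetical sketch of an ingest -> retrieve -> generate knowledge pipeline.
# Class names, methods, and the toy scoring are illustrative, not Pryon's API.
from dataclasses import dataclass, field


@dataclass
class Chunk:
    """A unit of ingested content with a pointer back to its source system."""
    text: str
    source: str  # e.g. a SharePoint page, a PDF, or a video transcript segment


@dataclass
class KnowledgeCloud:
    """Unified index built from many systems of record (the 'ETL' left side)."""
    chunks: list[Chunk] = field(default_factory=list)

    def ingest(self, source: str, text: str) -> None:
        # A real pipeline would parse PDFs, transcribe audio/video, and embed text.
        self.chunks.append(Chunk(text=text, source=source))

    def retrieve(self, query: str, top_k: int = 3) -> list[Chunk]:
        # Toy relevance score: number of words shared with the query.
        terms = set(query.lower().split())
        ranked = sorted(
            self.chunks,
            key=lambda c: len(terms & set(c.text.lower().split())),
            reverse=True,
        )
        return ranked[:top_k]

    def answer(self, query: str) -> str:
        # Generative step: a real deployment would hand the retrieved chunks
        # to an LLM; here we simply return them with their sources.
        hits = self.retrieve(query)
        context = "\n".join(f"[{c.source}] {c.text}" for c in hits)
        return f"Q: {query}\nGrounded context:\n{context}"


# The "APIs on the right side" would expose answer() to downstream experiences.
kc = KnowledgeCloud()
kc.ingest("sharepoint://hr/policies", "Employees accrue 20 vacation days per year.")
kc.ingest("servicenow://kb/vpn", "Reset the VPN client before contacting support.")
print(kc.answer("How many vacation days do employees get?"))
```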
What are some of the key challenges Pryon faces in developing AI solutions for enterprise use, and how are you addressing them?
Because we're vertically integrated, we receive top marks in accuracy, scale, security, and speed. One of the problems with deconstructed approaches, where you need several different vendors bolted together to achieve the same workflow we deliver, is that you end up with something less performant. You can't match models across those pieces, and you don't have security signaling flowing through, either.
It's like iPhones: there's a reason Apple builds its own chip, device, operating system, and applications. By doing so, they achieve the highest level of performance with the lowest energy use. In contrast, other vendors who integrate from several different sources tend to be a generation or two behind them at all times.
How does Pryon ensure the accuracy, scalability, security, and speed of its AI solutions, particularly in large-scale enterprise environments?
Supported by a robust Retrieval-Augmented Generation (RAG) framework, Pryon was designed to meet the rigorous demands of businesses. Using best-in-class information retrieval technology, Pryon securely delivers accurate, timely answers, empowering businesses to overcome knowledge friction.
- Accuracy: Pryon excels in accuracy by precisely ingesting and understanding content stored in various formats, including text, images, audio, and video. Using advanced custom-developed technologies, Pryon retrieves mission-critical knowledge with over 90% accuracy and delivers answers with clear attribution to source documents. This ensures that the knowledge provided is both reliable and verifiable.
- Enterprise Scale: Pryon is built to handle large-scale enterprise environments. It scales to tens of millions of pages of content and supports thousands of concurrent users. Pryon also includes out-of-the-box connectors to major platforms like SharePoint, ServiceNow, Amazon S3, Box, and more, making it easy to integrate into existing workflows and systems.
- Security: Security is a top priority for Pryon. It protects against data leaks through document-level access controls and ensures that AI models are not trained on customer data. Moreover, Pryon can be implemented in on-premises environments, offering additional layers of security and control for sensitive information. (A sketch of this access-controlled retrieval pattern appears after this answer.)
- Speed: Pryon offers rapid deployment, with implementation possible in as little as two weeks. The platform incorporates a no-code interface for updating content, allowing for quick and simple modifications. Moreover, Pryon provides the flexibility to choose a public, custom, or Pryon-developed large language model (LLM), making the implementation process seamless and highly customizable.
This is why academic institutions, Fortune 500 companies, government agencies, and NGOs in critical sectors like defense, energy, financial services, and semiconductors leverage us.
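As a hypothetical illustration of the security and accuracy points above (document-level access controls plus answers attributed to source documents), here is a minimal Python sketch; the names, the toy relevance scoring, and the group-based ACL model are assumptions for illustration, not Pryon's implementation.

```python
# Hypothetical sketch of retrieval-augmented generation (RAG) with document-level
# access controls and source attribution. All names and the group-based ACL model
# are illustrative assumptions, not Pryon's implementation.
from dataclasses import dataclass


@dataclass
class Document:
    doc_id: str
    text: str
    allowed_groups: set[str]  # document-level access control list


def retrieve(query: str, docs: list[Document],
             user_groups: set[str], top_k: int = 2) -> list[Document]:
    """Return the most relevant documents the requesting user may see."""
    visible = [d for d in docs if d.allowed_groups & user_groups]  # enforce ACLs first
    terms = set(query.lower().split())
    return sorted(
        visible,
        key=lambda d: len(terms & set(d.text.lower().split())),
        reverse=True,
    )[:top_k]


def answer_with_attribution(query: str, docs: list[Document],
                            user_groups: set[str]) -> str:
    """Ground the answer in retrieved passages and cite each source document."""
    hits = retrieve(query, docs, user_groups)
    if not hits:
        return "No answer available from documents you are allowed to access."
    # A production system would pass `hits` to an LLM for generation; the point
    # here is that every returned statement carries a pointer to its document.
    return "; ".join(f"{d.text} [source: {d.doc_id}]" for d in hits)


corpus = [
    Document("finance/q3-forecast.pdf", "Q3 revenue is forecast to grow 8 percent.", {"finance"}),
    Document("hr/handbook.docx", "Remote work requires manager approval.", {"all-staff"}),
]
print(answer_with_attribution("What is the remote work policy?", corpus, {"all-staff"}))
```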
Pryon emphasizes Responsible AI with initiatives like respecting authorship and ethical sourcing of training data. How do you implement these principles in your day-to-day operations?
Our clients and partners control what goes into their instance of Pryon. This includes public information from trusted academic institutions and government agencies, published information they've properly licensed for their organizations, proprietary information that forms the core IP of their business, and private content for individual use. Pryon synthesizes these four source types into a unified knowledge cloud, completely under the control of the sponsoring organization. This ability to securely manage diverse content types is why we're trusted in robust environments, including critical infrastructure.
With Pryon recently securing $100 million in Series B funding, what are your top priorities for the company's growth and innovation in the coming years?
Post-Series B, we're in early growth territory. One part of this phase is industrializing the product-market fit we've established to support the cloud environments and server types our clients and partners are likely to encounter.
The first focal area is ensuring our product can handle these demands while also offering them modular access to our capabilities to support their workflows.
The second major area is developing scaling partners who can build practices around our work with our tooling and manage the needed change as organizations transform to support the new era of digital intelligence. The third focus is sustained R&D to stay ahead of the curve and define the state of the art in this space.
As someone who has been at the forefront of AI innovation, how do you view the current state of AI regulation, and what role do you believe Pryon can play in shaping these discussions?
I think we all wonder how the world would have turned out if we had been able to regulate some technologies closer to their infancy, like social media, for example. We didn't realize how much it would affect our communities. Different nation-states have different perspectives on regulation. The Europeans have a somewhat constrained perspective that matches their values with the EU AI Act.
On the flip side, some environments are completely unconstrained. In the US, we're searching for a balance between allowing innovation to thrive, especially in commercial activities, and safeguarding sensitive use cases to avoid biases and other risks, such as in approving loan applications.
Most regulation tends to focus on the most sensitive use cases, particularly in consumer applications and public sector or government uses. Personally, that's why I'm on the board of With Honor, a bipartisan coalition of veterans, policymakers, and lawmakers. We have seen convergence, regardless of political views, on concerns about the introduction of AI technologies into all facets of our lives. Part of our role is to influence the evolution of regulation, providing feedback to find the right balance we all wanted for other technology areas.
What advice would you give to other AI entrepreneurs looking to build impactful and responsible AI solutions?
Right now, it will be both a wild west and a fantastical environment for developing new types of AI applications. If you don't have extensive experience in AI (say, 10, 20, or 30 years), I wouldn't recommend developing an AI platform from scratch. Instead, find an application area where the technology intersects with your subject matter expertise.
Whether you're an artist, attorney, engineer, lineman, physician, or in another field, leveraging your expertise will give you a unique voice, perspective, and product in the marketplace. This approach is likely to be the best use of your time, energy, and experience, rather than creating another "me too" product.