Rob Clark, President and CTO of Seekr – Interview Series


Rob Clark is the President and Chief Technology Officer (CTO) of Seekr. Rob has over 20 years of experience in software engineering, product management, operations, and the development of leading-edge artificial intelligence and web-scale technologies. Before joining Seekr, he led several artificial intelligence and search solutions for some of the world’s largest telecommunications, publishing, and e-commerce corporations.

Seekr, an artificial intelligence company, creates trustworthy large language models (LLMs) that identify, score, and generate reliable content at scale.

Starting with web search, Seekr’s ongoing innovation has produced patented technologies enhancing web safety and value. Their models, developed with expert human input and explainability, address customer needs in content evaluation, generative AI, and trustworthy LLM training and validation.

Can you describe the core mission of Seekr and how it aims to differentiate itself in the competitive AI landscape?

We founded Seekr on the simple yet important principle that everyone should have access to reliable, credible and trustworthy information – no matter its form or where it exists. It doesn’t matter whether you are an online consumer or a business using that information to make key decisions – responsible AI systems allow all of us to fully and better understand information, and you need to ensure that what comes out of Generative AI is accurate and reliable.

Unreliable information runs the whole spectrum, from relatively benign sensationalism to the more serious case of coordinated inauthentic behavior intended to mislead or influence instead of inform. Seekr’s approach to AI is to ensure the user has full transparency into the content, including provenance, lineage and objectivity, and the ability to build and leverage AI that is transparent, trustworthy, features explainability and has all the guardrails, so consumers and businesses alike can trust it.

In addition to providing industry-optimized Large Language Models (LLMs), Seekr has begun building Foundation Models differentiated by greater transparency and accuracy with reduced error and bias, including all the validation tools. This is made possible through Seekr’s collaboration with Intel to use its latest-generation Gaudi AI accelerators at the best possible price-performance. We chose not to rely on outside foundation models where the training data was unknown and showed errors and inherent bias, especially as they are often trained on popular data rather than the most credible and reliable data. We expect to release these toward the end of the year.

Our core product is SeekrFlow, a complete end-to-end platform that trains, validates, deploys and scales trustworthy AI. It allows enterprises to leverage their data securely and rapidly develop AI they can rely on, optimized for their industry.

What are the critical features of SeekrFlow, and how does it simplify the AI development and deployment process for enterprises?

SeekrFlow takes a top-down, outcome-first approach, allowing enterprises to solve problems using AI with both operational efficiencies and new revenue opportunities through one cohesive platform. This integration includes secure data access, automated data preparation, model fine-tuning, inference, guardrails, validation tools, and scaling, eliminating the need for multiple disparate tools and reducing the complexity of in-house technical talent managing various facets of AI development individually.

For enterprises, customization is essential, as a one-model-fits-all approach doesn’t solve unique business problems. SeekrFlow allows customers to both cost-effectively leverage their enterprise data safely and align to their industry’s specific needs. This is especially important in regulated industries like finance, healthcare and government.

Seekr’s AI-assisted training approach greatly reduces the cost, time, and need for human supervision associated with data labeling and acquisition by synthesizing high-quality, domain-specific data from domain-specific principles such as policy documents, guidelines, or user-provided enterprise data.
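
As a rough illustration of that idea (a minimal sketch under stated assumptions, not Seekr’s actual implementation), the snippet below synthesizes instruction-tuning pairs from a policy document; the chunking heuristic, the prompt, and the pluggable `ask_llm` completion function are all hypothetical stand-ins.

```python
import json
from typing import Callable, List

def chunk_document(text: str, max_chars: int = 2000) -> List[str]:
    """Split a policy document into roughly paragraph-sized chunks."""
    paragraphs = [p.strip() for p in text.split("\n\n") if p.strip()]
    chunks, current = [], ""
    for p in paragraphs:
        if current and len(current) + len(p) > max_chars:
            chunks.append(current)
            current = p
        else:
            current = (current + "\n\n" + p).strip()
    if current:
        chunks.append(current)
    return chunks

def synthesize_training_pairs(document: str, ask_llm: Callable[[str], str]) -> List[dict]:
    """For each chunk, ask a generator model for one Q&A pair grounded in that chunk."""
    pairs = []
    for chunk in chunk_document(document):
        prompt = (
            "Write one question a customer might ask about the policy below, "
            "then answer it using only the policy text.\n\n"
            f"POLICY:\n{chunk}\n\n"
            "Format exactly as:\nQUESTION: ...\nANSWER: ..."
        )
        reply = ask_llm(prompt)  # any text-completion backend can be plugged in here
        if "ANSWER:" in reply:
            question, answer = reply.split("ANSWER:", 1)
            pairs.append({
                "prompt": question.replace("QUESTION:", "").strip(),
                "completion": answer.strip(),
                "source_chunk": chunk,  # keep provenance for later traceability
            })
    return pairs

# Hypothetical usage: `my_llm` stands in for whatever completion function your stack provides.
# pairs = synthesize_training_pairs(open("refund_policy.txt").read(), my_llm)
# with open("train.jsonl", "w") as f:
#     for p in pairs:
#         f.write(json.dumps(p) + "\n")
```

Keeping the source chunk alongside each synthesized pair is what later makes it possible to trace a model answer back to the document it was learned from.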

Seekr’s commitment to reliability and explainability is ingrained throughout SeekrFlow. No enterprise wants to deploy a model to production and discover it is hallucinating, giving out wrong information or, in a worst-case scenario, giving away its services for free! SeekrFlow includes the tools needed to validate models for reliability, to reduce errors and to transparently show the user what is impacting a model’s output, right back to the original training data. In the same way software engineers and QA can scan, test and validate their code, we offer the same capabilities for AI models.
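
To make that QA analogy concrete, here is a deliberately simple sketch of a pre-deployment check (the helper names are assumed, and the token-overlap heuristic is a crude stand-in for real validation tooling): it replays known questions and flags answers that drift from the reference, keeping the source document attached so failures can be traced back.

```python
from typing import Callable, Dict, List

def validate_model(
    ask_model: Callable[[str], str],
    test_cases: List[Dict[str, str]],
    required_overlap: float = 0.6,
) -> List[Dict[str, object]]:
    """Replay known questions and flag answers that lose the facts in the reference.

    Each test case carries a 'question', a 'reference' answer, and the 'source'
    document it came from, so any failure can be traced back to training data.
    """
    failures = []
    for case in test_cases:
        answer = ask_model(case["question"]).lower()
        reference_terms = set(case["reference"].lower().split())
        overlap = sum(term in answer for term in reference_terms) / max(len(reference_terms), 1)
        if overlap < required_overlap:  # crude proxy for "the answer dropped key facts"
            failures.append({**case, "model_answer": answer, "overlap": round(overlap, 2)})
    return failures

# failures = validate_model(my_model, test_cases)
# Each failure still carries its 'source', so a reviewer can jump straight to the
# training document that should have produced the correct answer.
```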

This is all provided at optimal cost to enterprises. Our collaboration with Intel, running in Intel’s AI cloud, gives Seekr the best price-performance, which we pass on to our customers.

How does SeekrFlow address common issues such as cost, complexity, accuracy, and explainability in AI adoption?

High price points and scarcity of AI hardware are two of the biggest barriers to entry facing enterprises. Thanks to the aforementioned collaboration with Intel, SeekrFlow has access to vast amounts of next-generation AI hardware through Intel’s AI Cloud. This provides customers with scalable and cost-effective compute resources that can handle large-scale AI workloads, leveraging both Intel Gaudi AI accelerators and AI-optimized Xeon CPUs.

It’s important to note that SeekrFlow is cloud-provider and platform agnostic and runs on the latest generation of AI chips from Intel, Nvidia and beyond. Our goal is to abstract the complexity of the AI hardware and avoid vendor lock-in, while still unlocking the unique value of each chip’s software, tools and ecosystem. This includes running both in the cloud and in on-premise, operated data centers.

When building SeekrFlow we clearly saw the lack of contestability that existed in other tools. Contestability is paramount at Seekr, as we want to make sure the user has the right to say something isn’t accurate and a straightforward way to resolve it. With other models and platforms, it’s often difficult or unclear how to even correct errors. Point fixes after the fact are often ineffective; for instance, manipulating the input prompt doesn’t guarantee the answer will be corrected every time or in every scenario. We give the user all the tools for transparency and explainability, and a simple way to teach and correct the model in a clean user interface. From building on top of trustworthy foundation models at the base, all the way through to easy-to-use testing and measurement tools, SeekrFlow ensures accurate outcomes that can be understood and validated. It’s essential to understand that AI guardrails aren’t just something that is nice to have or to think about later; rather, we provide customers with easy-to-use explainability and contestability tools from the start of implementation.

How does the platform integrate data preparation, fine-tuning, hosting, and inference to enable faster experimentation and adaptation of LLMs?

SeekrFlow integrates the end-to-end AI development process in a single platform: from handling the labeling and formatting of the data with its AI-agent-assisted generation approach, to fine-tuning a base model, all the way to serving the fine-tuned model for inference and monitoring it. In addition, SeekrFlow’s explainability tooling allows AI modelers to discover gaps in the model’s knowledge, understand why mistakes and hallucinations occur, and act on them directly. This integrated, end-to-end approach enables rapid experimentation and model iteration.
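
A minimal sketch of how such an integrated workflow can be wired together (the stage functions are injected and purely illustrative; this is not SeekrFlow’s published API): prepared data flows into fine-tuning, the resulting model is evaluated, and only a model that passes the validation gate is deployed.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class PipelineRun:
    dataset: List[Dict]                       # prepared prompt/completion pairs
    model_id: str = ""                        # identifier returned by the fine-tuning stage
    eval_report: Dict = field(default_factory=dict)

def run_pipeline(
    raw_documents: List[str],
    prepare: Callable[[List[str]], List[Dict]],
    fine_tune: Callable[[List[Dict]], str],
    evaluate: Callable[[str], Dict],
    deploy: Callable[[str], None],
    max_error_rate: float = 0.05,
) -> PipelineRun:
    """Data prep -> fine-tune -> evaluate -> deploy, with a quality gate before serving."""
    run = PipelineRun(dataset=prepare(raw_documents))
    run.model_id = fine_tune(run.dataset)
    run.eval_report = evaluate(run.model_id)
    if run.eval_report.get("error_rate", 1.0) <= max_error_rate:
        deploy(run.model_id)                  # only ship models that pass validation
    return run
```

Keeping every stage behind one orchestrator is what lets a modeler re-run the whole loop quickly when explainability tooling points at a gap in the training data.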

What other unique technologies or methodologies has Seekr developed to ensure the accuracy and reliability of its AI models?

Seekr has developed patented AI technology for assessing the quality, reliability, bias, toxicity and veracity of content, whether that is a text, a visual, or an audio signal. This technology provides rich data and information that can be fused into any AI model in the form of training data, algorithms, or model guardrails. Ultimately, Seekr’s technology for assessing content can be leveraged to ensure the safety, factuality, helpfulness, unbiasedness and fairness of AI models. An example of this is SeekrAlign, which helps brands and publishers grow reach and revenue with responsible AI that looks at the context of content through our patented Civility Scoring.
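
The scoring technology itself is proprietary, but as a generic illustration of how a content score becomes a model guardrail (all names below are hypothetical, with `score_content` standing in for a real scorer such as Civility Scoring), a response can be checked and regenerated before it ever reaches the user:

```python
from typing import Callable

def guarded_generate(
    ask_model: Callable[[str], str],
    score_content: Callable[[str], float],  # higher = safer / more civil; stand-in for a real scorer
    prompt: str,
    min_score: float = 0.7,
    max_attempts: int = 3,
) -> str:
    """Only return a response once the content scorer approves it."""
    for _ in range(max_attempts):
        draft = ask_model(prompt)
        if score_content(draft) >= min_score:
            return draft
        # Nudge the model toward a more neutral rewrite and try again.
        prompt += "\n\nRewrite the answer in a neutral, factual, non-inflammatory tone."
    return "I can't provide a reliable answer to that right now."
```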

Seekr’s approach to explainability ensures that AI model responses are understandable and traceable. As AI models become involved in consequential decisions, the need for AI modelers to understand and contest model decisions becomes increasingly important.

How does SeekrFlow’s principle alignment agent help developers align AI models with their enterprise’s values and industry regulations?

SeekrFlow’s principle alignment agent is a critical feature that helps developers and enterprises reduce the overall cost of their RAG-based systems and efficiently align their AI to their own unique principles, values, and industry regulations without needing to collect and process structured data.

The Seekr agent uses advanced alignment algorithms to ensure that LLMs’ behavior adheres to these unique and predefined standards, intentions, rules, or values. During the training and inference phases, the principle alignment agent guides users through the entire data preparation and fine-tuning process while continuously integrating expert input and ethical guidelines. This ensures that our AI models operate within acceptable boundaries.
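
As a minimal sketch of what checking behavior against predefined principles can look like (assuming a generic judge model and hypothetical helper names; this is not Seekr’s alignment algorithm), each candidate response is screened against a written list of principles before it is accepted:

```python
from typing import Callable, List

def violates_principles(
    ask_judge: Callable[[str], str],  # any instruction-following model used as a judge
    principles: List[str],
    response: str,
) -> List[str]:
    """Return the principles that a candidate response appears to violate."""
    violated = []
    for principle in principles:
        verdict = ask_judge(
            f"Principle: {principle}\n"
            f"Response: {response}\n"
            "Does the response violate the principle? Answer YES or NO."
        )
        if verdict.strip().upper().startswith("YES"):
            violated.append(principle)
    return violated

# Hypothetical usage:
# principles = ["Never give specific investment advice.",
#               "Disclose that responses are AI-generated when asked."]
# broken = violates_principles(my_judge_model, principles, candidate_answer)
# if broken:
#     ...  # route to regeneration or human review
```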

By providing tools to customize and implement these principles, SeekrFlow empowers enterprises to maintain control over their AI applications, ensuring that they reflect the company’s values and adhere to legal and industry requirements. This capability is crucial for building trust with customers and stakeholders, as it demonstrates a commitment to responsible AI.

Can you discuss the collaboration with OneValley and how Seekr’s technology powers the Haystack AI platform?

OneValley is a trusted resource for tens of thousands of entrepreneurs and small-to-medium-sized businesses (SMBs) worldwide. A common problem these leaders face is finding the right advice and services to start and grow their business. Seekr’s industry-specific LLMs power OneValley’s new product, Haystack AI, which offers customers access to vast databases of available products, their attributes, pricing, pros and cons, and more. Haystack AI intelligently makes recommendations to users and answers questions, all accessible through an in-app chatbot.

What specific advantages does Haystack offer to startups and SMBs, and how does Seekr’s AI enhance these advantages?

Whether a user needs a fast answer about which business credit card offers the best cash rewards with the lowest fees per user and lowest APR, or wants to compare and contrast two CRM systems they’re considering as the right solution, Haystack AI powered by Seekr provides them with the right answers quickly.

Haystack AI answers users’ questions rapidly and in the most cost-effective manner. Having to ask and wait for a human to answer these questions, and all the research that goes into that kind of process, is unmanageable for very busy business leaders. Customers want accurate answers they can rely on, fast, without having to trawl through the results (and sponsored links!) of a web search engine. Their time is better spent running their core business. This is a great example of Seekr AI solving a real business need.

How does Seekr ensure that its AI solutions remain scalable and cost-effective for businesses of all sizes?

The simple answer is that to ensure scale and low cost, you need a strategic collaboration for access to compute at scale. Delivering scalable, cost-effective and reliable AI requires the marriage of best-in-class AI software running on leading-generation hardware. Our collaboration with Intel involves a multi-year commitment for access to an ever-growing amount of AI hardware, including upgrading through generations from the current Gaudi 2 to Gaudi 3 in early 2025 and onward to the next chip innovations. We placed a bet that the best availability and price of compute would come from the actual manufacturer of the silicon, and Intel is one of only two companies in the world that produces its own chips. This solves any issues around scarcity, especially as we and our customers scale, and ensures the best possible price-performance, which benefits the customer.

Seekr customers running on their own hosted service only pay for actual usage. We don’t charge for GPUs sitting idle. SeekrFlow has a highly competitive pricing model compared to contemporaries in the space, and it supports the smallest to largest deployments.
