Krishna Rangasayee, Founder & CEO of SiMa.ai – Interview Series


Krishna Rangasayee is Founder and CEO of SiMa.ai. Previously, Krishna was COO of Groq and spent 18 years at Xilinx, where he held multiple senior leadership roles, including Senior Vice President and GM of the general business and Executive Vice President of worldwide sales. While at Xilinx, Krishna grew the business to $2.5B in revenue at 70% gross margin while laying the foundation for 10+ quarters of sustained sequential growth and market share expansion. Prior to Xilinx, he held various engineering and business roles at Altera Corporation and Cypress Semiconductor. He holds 25+ international patents and has served on the boards of directors of public and private companies.

What initially attracted you to machine learning?

I’ve been a student of the embedded edge and cloud markets for the past 20 years. I’ve seen tons of innovation in the cloud, but little or none toward enabling machine learning at the edge. It’s a massively underserved $40B+ market that has been surviving on old technology for a long time.

So, we set out to do something nobody had done before: enable effortless ML for the embedded edge.

Could you share the genesis story behind SiMa?

In my 20+ year career, I had yet to witness architecture innovation happening in the embedded edge market. Yet the need for ML at the embedded edge has grown, just as it has in the cloud and parts of IoT. This shows that while companies are demanding ML at the edge, the technology to make it a reality is simply too outdated to actually work.

So, before we even began our design, it was essential to understand our customers’ biggest challenges. However, getting them to spend time with an early-stage startup and draw out meaningful, candid feedback was its own challenge. Luckily, the team and I were able to leverage our network from past relationships to solidify SiMa.ai’s vision with the right targeted companies.

We met with over 30 customers and asked two basic questions: “What are the largest challenges scaling ML to the embedded edge?” and “How can we help?” After many discussions about how they wanted to reshape the industry, and after listening to the challenges standing in their way, we gained a deep understanding of their pain points and developed ideas on how to solve them. These include:

  • Getting the benefits of ML without a steep learning curve.
  • Preserving legacy applications while future-proofing ML implementations.
  • Working with a high-performance, low-power solution in a user-friendly environment.

Quickly, we realized that we needed to deliver a risk-mitigated, phased approach to help our customers. As a startup, we had to bring something compelling and differentiated from everyone else. No other company was addressing this clear need, so this was the path we chose to take. SiMa.ai achieved this rare feat by architecting, from the ground up, the industry’s first software-centric, purpose-built Machine Learning System-on-Chip (MLSoC) platform. With its combination of silicon and software, machine learning can now be added to embedded edge applications with the push of a button.

Could you share your vision of how machine learning will reshape everything at the edge?

Most ML companies focus on high-growth markets such as cloud and autonomous driving. Yet it’s robotics, drones, frictionless retail, smart cities, and industrial automation that demand the latest ML technology to improve efficiency and reduce costs.

These growing sectors, coupled with current frustrations deploying ML on the embedded edge, are why we believe the time is ripe with opportunity. SiMa.ai is approaching this problem in a very different way; we are determined to make widespread adoption a reality.

What has so far prevented machine learning from scaling at the edge?

Machine learning must integrate easily with legacy systems. Fortune 500 companies and startups alike have invested heavily in their current technology platforms, and most of them will not rewrite all their code or completely overhaul their underlying infrastructure to integrate ML. To mitigate risk while reaping the benefits of ML, there must be technology that allows seamless integration of legacy code alongside ML in their systems. This creates a simple path to develop and deploy these systems: it addresses the application’s needs while providing the benefits of the intelligence that machine learning brings.

There are no big sockets; there is no one large customer that will move the needle on its own. So we had no choice but to be able to support a thousand-plus customers to truly scale machine learning and bring the experience to them. We found that these customers want ML, but they cannot get there on their own: they lack the internal capacity to acquire it and the internal knowledge base to build it. They want to implement ML, but without the embedded edge learning curve. What it very quickly came down to is that we have to make the ML experience effortless for customers.

How is SiMa able to decrease power consumption so dramatically compared to competitors?

Our MLSoC is the underlying engine that enables everything, and it is important to clarify that we are not building an ML accelerator. Of the two billion dollars invested into edge ML SoC startups, the industry’s entire response has been an ML accelerator block, as a core or as a chip. What people are not recognizing is that to migrate customers from a classic SoC to an ML environment, you need an MLSoC environment, so they can run legacy code from day one and deploy ML capability progressively, in a phased, risk-mitigated way. One day they might do semantic segmentation using a classic computer vision approach, and the next day they might do it using an ML approach. Either way, we give our customers the opportunity to deploy and partition their problem as they see fit, using classic computer vision, classic ARM processing, or heterogeneous ML compute.

To us, ML is not an end product, so an ML accelerator will not succeed on its own. ML is a capability, a toolkit alongside the other tools we give our customers. Using a push-button methodology, they can iterate on their design’s pre-processing, post-processing, analytics, and ML acceleration all on a single platform, while delivering the best system-wide application performance at the lowest power.

What are a number of the primary market priorities for SiMa?

We have identified several key markets, some of which are quicker to revenue than others. The quickest time to revenue is smart vision, robotics, Industry 4.0, and drones. The markets that take a bit more time, due to qualification and standards requirements, are automotive and healthcare applications. We have broken ground in all of the above, working with the top players in each category.

Image capture has generally been at the edge, with analytics in the cloud. What are the benefits of shifting this deployment strategy?

Edge applications need the processing to be done locally; for many applications, there simply isn’t enough time for the data to go to the cloud and back. ML capability is fundamental in edge applications because decisions must be made in real time, for example in automotive applications and robotics, where decisions must be processed quickly and efficiently.

Why should enterprises consider SiMa solutions over those of your competitors?

Our unique methodology is a software-centric approach packaged with a complete hardware solution. We have focused on a complete solution that addresses what we like to call Any, 10x, and Pushbutton as the core of customer issues. The founding thesis of the company is that you push a button and you get a “wow!” The experience really must be abstracted to the point where thousands of developers can use it without all having to be ML geniuses; you don’t want them all tweaking and hand-coding layer by layer to get the desired performance. You want them to stay at the highest level of abstraction and deploy effortless ML quickly and meaningfully. The reason we latched onto this thesis is its very strong correlation with scaling: it really must be a simple ML experience that does not require a lot of hand-holding and services engagement, which would get in the way of scaling.

We spent the first year visiting 50-plus customers globally, trying to understand one thing: all of you want ML, but you’re not deploying it. Why? What gets in the way of meaningfully deploying ML, and what is required to push ML into a scale deployment? It really comes down to three key pillars of understanding, the first being Any. As a company, we have to solve problems across the breadth of customers and use models, with all the disparity among ML networks, sensors, frame rates, and resolutions. It is a very disparate world where each market has completely different front-end designs, and if we take only a narrow slice of it, we cannot economically build a company. We need to create a funnel capable of taking in a very wide range of application spaces; think of the funnel as the Ellis Island of everything computer vision. People could be in TensorFlow, they could be using Python, they could be using a camera sensor at 1080p resolution or a 4K sensor. It really doesn’t matter, as long as we can homogenize and bring them all in. If you don’t have a front end like this, then you don’t have a scalable company.

The second pillar is 10x. Part of the problem is that customers are not able to deploy and create derivative platforms, because bringing up a new model or pipeline means starting from scratch every time. The other part is that, as a startup, we undoubtedly need to bring something very exciting and very compelling, so that anybody and everybody is willing to take the risk on us, based on a 10x performance metric. The one key technical merit we focus on solving for in computer vision problems is the frames-per-second-per-watt metric. We need to be dramatically better than anyone else so that we can stay a generation or two ahead, so we took this as part of our software-centric approach. That approach created a heterogeneous compute platform, so people can solve the entire computer vision pipeline on a single chip and deliver 10x compared to any other solution.

The third pillar, Pushbutton, is driven by the need to scale ML on the embedded edge in a meaningful way. ML toolchains are very nascent and frequently broken; no single company has really built a world-class ML software experience. We further recognized that for the embedded market, you must mask the complexity of the embedded code while also giving customers an iterative process to quickly come back, update, and optimize their platforms. Customers really need a pushbutton experience that gives them a response or an answer in minutes rather than months, to achieve effortless ML.

Any, 10x, and Pushbutton are the key value propositions. It became really clear to us that if we do a bang-up job on these three things, we will absolutely move the needle on effortless ML and on scaling ML at the embedded edge.
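The frames-per-second-per-watt figure of merit is simply throughput divided by power draw. A minimal sketch of how two edge devices might be compared on it (the numbers and device names below are made up for illustration, not SiMa benchmark results):

```python
def fps_per_watt(frames_per_second: float, power_watts: float) -> float:
    """Efficiency figure of merit for a vision pipeline: throughput per unit power."""
    if power_watts <= 0:
        raise ValueError("power must be positive")
    return frames_per_second / power_watts

# Hypothetical comparison of two devices (illustrative numbers only).
baseline = fps_per_watt(frames_per_second=200.0, power_watts=10.0)   # 20.0 fps/W
candidate = fps_per_watt(frames_per_second=1000.0, power_watts=5.0)  # 200.0 fps/W

print(f"Efficiency gain: {candidate / baseline:.0f}x")  # prints "Efficiency gain: 10x"
```

The point of the metric is that raw frames per second alone rewards big, power-hungry chips; dividing by watts captures what an embedded deployment actually has to budget for.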

Is there anything else you would like to share about SiMa?

In the early development of the MLSoC platform, we were pushing the boundaries of technology and architecture. We were going all-in on a software-centric platform, an entirely new approach that went against the grain of all conventional wisdom. The journey of figuring it out and then implementing it was hard.

A recent monumental win validates the strength and uniqueness of the technology we’ve built. SiMa.ai achieved a major milestone in April 2023 by outperforming the incumbent leader in our debut MLPerf benchmark performance in the Closed Edge Power category. We’re proud to be the first startup to participate in and achieve winning results in MLPerf, the industry’s most popular and well-recognized benchmark, running ResNet-50 for performance and power.

We began with lofty aspirations, and to this day I’m proud to say that vision has remained unchanged. Our MLSoC was purpose-built to go against industry norms and deliver a revolutionary ML solution to the embedded edge market.

