David Driggers, CTO of Cirrascale – Interview Series


David Driggers is the Chief Technology Officer at Cirrascale Cloud Services, a leading provider of deep learning infrastructure solutions. Guided by values of integrity, agility, and customer focus, Cirrascale delivers modern, cloud-based Infrastructure-as-a-Service (IaaS) solutions. Partnering with AI ecosystem leaders like Red Hat and WekaIO, Cirrascale ensures seamless access to advanced tools, empowering customers to drive progress in deep learning while maintaining predictable costs.

Cirrascale is the only GPUaaS provider partnering with major semiconductor corporations like NVIDIA, AMD, Cerebras, and Qualcomm. How does this unique positioning benefit your customers in terms of performance and scalability?

As the industry evolves from training models to deploying those models, called inferencing, there is no one-size-fits-all solution. Depending upon the size and latency requirements of the model, different accelerators offer different values that may be necessary. Response time, cost per token, or performance per watt can all affect the cost and user experience. Since inferencing is for production, these features and capabilities matter.
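The cost-per-token tradeoff mentioned above can be made concrete with a back-of-the-envelope calculation. The sketch below uses hypothetical instance names, prices, and throughput figures purely for illustration; they are not measured benchmarks or Cirrascale pricing:

```python
# Rough cost-per-token comparison across hypothetical accelerators.
# All figures are illustrative placeholders, not vendor benchmarks.

def cost_per_million_tokens(hourly_usd: float, tokens_per_sec: float) -> float:
    """Dollars per one million generated tokens for a given instance."""
    tokens_per_hour = tokens_per_sec * 3600
    return hourly_usd / tokens_per_hour * 1_000_000

# Hypothetical instances: (name, $/hour, sustained tokens/sec)
instances = [
    ("gpu-general-purpose", 2.50, 1000.0),
    ("inference-optimized", 1.80, 1500.0),
]

for name, price, tps in instances:
    print(f"{name}: ${cost_per_million_tokens(price, tps):.2f} per 1M tokens")
```

A cheaper or better-matched accelerator can cut the per-token cost substantially, which is why profiling a production workload on more than one platform matters before committing.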

What sets Cirrascale’s AI Innovation Cloud apart from other GPUaaS providers in supporting AI and deep learning workflows?

Cirrascale’s AI Innovation Cloud allows users to try new technologies that may not be available in any other cloud, in a secure, assisted, and fully supported manner. This helps not only with cloud technology decisions but also with potential on-site purchases.

How does Cirrascale’s platform ensure seamless integration for startups and enterprises with diverse AI acceleration needs?

Cirrascale takes a solution approach with our cloud. This means that for both startups and enterprises, we provide a turnkey solution that includes both the DevOps and InfraOps. While we call it bare metal to distinguish our offerings as not being shared or virtualized, Cirrascale fully configures all aspects of the offering, including the servers, networking, storage, security, and user access requirements, prior to turning the service over to our clients. Our clients can immediately start using the service rather than having to configure everything themselves.

Enterprise-wide AI adoption faces barriers like data quality, infrastructure constraints, and high costs. How does Cirrascale address these challenges for businesses scaling AI initiatives?

While Cirrascale doesn’t offer data quality services, we do partner with companies that can assist with data issues. As far as infrastructure and costs, Cirrascale can tailor a solution to a client’s specific needs, which results in better overall performance and costs matched to the client’s requirements.

With Google’s advancements in quantum computing (Willow) and AI models (Gemini 2.0), how do you see the landscape of enterprise AI shifting within the near future?

Quantum computing is still quite a way off from prime time for most people due to the lack of programmers and off-the-shelf programs that can take advantage of its features. Gemini 2.0 and other large-scale offerings like GPT-4 and Claude are definitely going to get some uptake from enterprise customers, but a large part of the enterprise market is not prepared at this time to trust their data to third parties, and especially ones that may use that data to train their models.

Finding the right balance of power, price, and performance is critical for scaling AI solutions. What are your top recommendations for companies navigating this balance?

Test, test, test. It’s critical for a company to test its model on different platforms. Production is different from development: cost matters in production. Training may be one and done, but inferencing is forever. If performance requirements can be met at a lower cost, those savings fall to the bottom line and can even make the solution viable. Very often, deploying a large model is simply too expensive to be practical. End users should also seek out companies that can help with this testing, as an ML engineer can often help with deployment, versus the data scientist who created the model.

How is Cirrascale adapting its solutions to satisfy the growing demand for generative AI applications, like LLMs and image generation models?

Cirrascale offers the widest array of AI accelerators, and with the proliferation of LLMs and GenAI models ranging in both size and scope (like multi-modal scenarios), and batch vs. real-time workloads, it truly is a horses-for-courses scenario.

Are you able to provide examples of how Cirrascale helps businesses overcome latency and data transfer bottlenecks in AI workflows?

Cirrascale has numerous data centers across multiple regions and doesn’t treat network connectivity as a profit center. This allows our users to “right-size” the connections needed to move data, as well as utilize more than one location if latency is a critical requirement. Also, by profiling the actual workloads, Cirrascale can assist with balancing latency, performance, and cost to deliver the best value while meeting performance requirements.

What emerging trends in AI hardware or infrastructure are you most enthusiastic about, and the way is Cirrascale preparing for them?

We’re most excited about new processors that are purpose-built for inferencing, versus generic GPU-based processors that happen to fit training quite nicely but are not optimized for inference use cases, which have inherently different compute requirements than training.
