Andy Nightingale, VP of Product Marketing at Arteris – Interview Series

Andy Nightingale, VP of Product Marketing at Arteris, is a seasoned global business leader with a varied background in engineering and product marketing. He's a Chartered Member of the British Computer Society and the Chartered Institute of Marketing, and has over 35 years of experience in the high-tech industry.

Throughout his career, Andy has held a range of roles, including engineering and product management positions at Arm, where he spent 23 years. In his current role as VP of Product Marketing at Arteris, Andy oversees the Magillem system-on-chip deployment tooling and the FlexNoC and Ncore network-on-chip products.

Arteris is a catalyst for system-on-chip (SoC) innovation as the leading provider of semiconductor system IP for the acceleration of SoC development. Arteris Network-on-Chip (NoC) interconnect intellectual property (IP) and SoC integration technology enable higher product performance with lower power consumption and faster time to market, delivering proven flexibility and better economics for system and semiconductor companies, so innovative brands are free to dream up what comes next.

With your extensive experience at Arm and now leading product marketing at Arteris, how has your perspective on the evolution of semiconductor IP and interconnect technologies changed over time? What key trends excite you the most today?

It's been an extraordinary journey—from my early days writing test benches for ASICs at Arm to helping shape product strategy at Arteris, where we're at the forefront of interconnect IP innovation. Back in 1999, system complexity was accelerating rapidly, but the focus was still primarily on processor performance and basic SoC integration. Verification methodologies were evolving, but interconnect was often seen as fixed infrastructure—necessary but not strategic.

Fast-forward to today, and interconnect IP has become a critical enabler of SoC (System-on-Chip) scalability, power efficiency, and AI/ML performance. The rise of chiplets, domain-specific accelerators, and multi-die architectures has placed immense pressure on interconnect technologies to become more adaptive, innovative, and both physically and software-aware.

One of the most exciting trends I see is the convergence of AI and interconnect design. At Arteris, we're exploring how machine learning can optimize NoC (Network-on-Chip) topologies, intelligently route data traffic, and even anticipate congestion to improve real-time performance. This isn't just about speed—it's about making systems more intelligent and responsive.
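
To make that idea concrete, here is a minimal sketch, entirely illustrative rather than Arteris code, of how a congestion prediction (from a heuristic or a learned model) might bias route selection in a NoC. The mesh, the `alpha` weight, and the congestion values are all hypothetical:

```python
# Illustrative only: a toy mesh NoC where route cost blends hop latency
# with a predicted congestion penalty, as an ML model might supply.
import heapq

def shortest_route(links, congestion, src, dst, alpha=2.0):
    """links: {node: [neighbor, ...]}; congestion: {(a, b): predicted load 0..1}."""
    dist, prev = {src: 0.0}, {}
    heap = [(0.0, src)]
    while heap:
        d, node = heapq.heappop(heap)
        if node == dst:
            break
        if d > dist.get(node, float("inf")):
            continue
        for nxt in links[node]:
            # One hop of base latency plus a penalty that grows with
            # the congestion the predictor expects on this link.
            cost = d + 1.0 + alpha * congestion.get((node, nxt), 0.0)
            if cost < dist.get(nxt, float("inf")):
                dist[nxt], prev[nxt] = cost, node
                heapq.heappush(heap, (cost, nxt))
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return list(reversed(path))

# A 2x2 mesh: the router detours around the hot A->B link.
links = {"A": ["B", "C"], "B": ["A", "D"], "C": ["A", "D"], "D": ["B", "C"]}
print(shortest_route(links, {("A", "B"): 0.9}, "A", "D"))  # ['A', 'C', 'D']
```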

What excites me is how semiconductor IP is becoming more accessible to AI innovators. With high-level SoC configuration IP and abstraction layers, startups in automotive, robotics, and edge AI can now leverage advanced interconnect architectures without needing a deep background in RTL design. That democratization of capability is huge.

Another key shift is the role of virtual prototyping and system-level modeling. Having worked on ESL (Electronic System Level) tools early in my career, it's rewarding to see those methodologies now enabling early AI workload evaluation, performance prediction, and architectural trade-offs long before silicon is taped out.
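
As a rough illustration of what such early modeling can answer before RTL exists, here is a roofline-style estimate of whether a hypothetical accelerator would be compute-bound or interconnect/memory-bound; every figure below is invented for the example:

```python
# Illustrative only: a roofline-style, pre-silicon estimate of whether an
# AI workload will be compute-bound or interconnect/memory-bound.
def attainable_tops(peak_tops, link_bw_gbs, workload_ops_per_byte):
    # Arithmetic intensity below the machine balance point means the
    # interconnect/memory path, not the compute array, sets throughput.
    bw_limited = link_bw_gbs * workload_ops_per_byte / 1000.0  # TOPS
    return min(peak_tops, bw_limited)

# Hypothetical accelerator: 100 TOPS peak, 256 GB/s of NoC/memory bandwidth.
for intensity in (50, 400, 2000):  # ops per byte moved
    print(intensity, "->", round(attainable_tops(100, 256, intensity), 1), "TOPS")
```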

Ultimately, the future of AI depends on how efficiently we move data—not just how fast we process it. That's why I believe the evolution of interconnect IP is central to the next generation of intelligent systems.

Arteris' FlexGen leverages AI-driven automation and machine learning to automate NoC (Network-on-Chip) topology generation. How do you see AI's role evolving in chip design over the next five years?

AI is fundamentally transforming chip design, and over the next five years, its role will only deepen—from productivity aid to intelligent design partner. At Arteris, we're already living that future with FlexGen, where AI, formal methods, and machine learning are central to automating Network-on-Chip (NoC) topology optimization and SoC integration workflows.

What sets FlexGen apart is its mix of ML algorithms—all combined to initialize floorplans from images, generate topologies, configure clocks, reduce Clock Domain Crossings, and optimize the connectivity topology and its placement and routing bandwidth, streamlining communication between IP blocks. Moreover, this is all done deterministically, meaning that results can be replicated and incremental adjustments made, enabling predictable, best-in-class results for use cases ranging from AI assistance for an expert SoC designer to creating the right NoC for a novice.
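
The determinism point can be pictured with a toy example (not FlexGen's actual algorithm): a seeded search over switch placements reproduces the same topology for the same inputs, which is what makes incremental adjustment and result comparison practical:

```python
# Illustrative only: deterministic topology search. A fixed seed makes every
# run reproducible, so incremental re-runs yield comparable results.
import random

def generate_topology(ip_xy, n_switches, seed=42, iters=2000):
    rng = random.Random(seed)  # determinism: same seed, same topology
    best, best_cost = None, float("inf")
    for _ in range(iters):
        switches = [(rng.uniform(0, 10), rng.uniform(0, 10)) for _ in range(n_switches)]
        # Cost proxy: total Manhattan wire length from each IP to its nearest switch.
        cost = sum(min(abs(x - sx) + abs(y - sy) for sx, sy in switches)
                   for x, y in ip_xy)
        if cost < best_cost:
            best, best_cost = switches, cost
    return best, best_cost

ips = [(1, 1), (9, 1), (1, 9), (9, 9), (5, 5)]
topo, cost = generate_topology(ips, n_switches=2)
print(round(cost, 2), "repeatable:", generate_topology(ips, 2)[1] == cost)
```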

Over the next five years, AI's role in chip design will shift from assisting human designers to co-designing and co-optimizing with them—learning from every iteration, navigating design complexity in real time, and ultimately accelerating the delivery of AI-ready chips. We see AI not just making chips faster but making faster chips smarter.

The semiconductor industry is witnessing rapid innovation with AI, HPC, and multi-die architectures. What are the biggest challenges that NoC design needs to solve to keep up with these advancements?

As AI, HPC, and multi-die architectures drive unprecedented complexity, the biggest challenge for NoC design is scalability without sacrificing power, performance, or time to market. Today's chips feature tens to hundreds of IP blocks, each with different bandwidth, latency, and power needs. Managing this diversity—across multiple dies, voltage domains, and clock domains—requires NoC solutions that go far beyond manual methods.

NoC solution technologies such as FlexGen help address key bottlenecks: minimizing wire length, maximizing bandwidth, aligning with physical constraints, and doing everything with speed and repeatability.
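
For a sense of the bookkeeping involved, here is an illustrative sketch, with invented IP blocks and constants, of evaluating a candidate topology against per-link bandwidth and latency requirements while tracking total wire length:

```python
# Illustrative only: the kind of multi-objective bookkeeping a NoC generator
# must do, with hypothetical IP blocks and per-link requirements.
from dataclasses import dataclass

@dataclass
class Link:
    src: str
    dst: str
    bandwidth_gbs: float   # required sustained bandwidth
    max_latency_ns: float  # budget for this traffic class
    wire_mm: float         # estimated routed length

def evaluate(links, ns_per_mm=0.6, gbs_per_lane=32.0):
    violations, total_wire = [], 0.0
    for l in links:
        lanes = -(-l.bandwidth_gbs // gbs_per_lane)   # ceil: parallel lanes needed
        total_wire += l.wire_mm * lanes
        if l.wire_mm * ns_per_mm > l.max_latency_ns:  # crude propagation model
            violations.append((l.src, l.dst, "latency"))
    return total_wire, violations

links = [Link("npu", "ddr", 128.0, 5.0, 3.0), Link("cpu", "npu", 16.0, 2.0, 4.0)]
print(evaluate(links))  # cpu->npu misses its 2 ns budget at 0.6 ns/mm
```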

The future of NoC design must also be automation-first and AI-enabled, with tools that can adapt to evolving floorplans, chiplet-based architectures, and late-stage changes without requiring complete rework. This is the only way to keep pace with modern SoCs' massive design cycles and heterogeneous demands, and to ensure efficient, scalable connectivity at the heart of next-gen semiconductors.

The AI chipset market is projected to grow significantly. How does Arteris position itself to support the increasing demands of AI workloads, and what unique benefits does FlexGen offer in this space?

Arteris just isn’t only uniquely positioned to support the AI chiplet market but has been doing this already for years by delivering automated, scalable Network-on-Chip (NoC) IP solutions purpose-built for the demands of AI workloads including Generative AI and Large Language Models (LLM) compute —supporting high bandwidth, low latency, and power efficiency across increasingly complex architectures.  FlexGen, as the latest addition to the Arteris NoC IP lineup, will play a good more significant role in rapidly creating optimal topologies best fitted to different large-scale, heterogeneous SoCs.

FlexGen offers incremental design, partial completion mode, and advanced pathfinding to dynamically optimize NoC configurations without complete redesigns—critical for AI chips that evolve throughout development.

Our customers are already building Arteris technology into multi-die and chiplet-based systems, efficiently routing traffic while respecting floorplan and clock domain constraints on each chiplet. Non-coherent multi-die connectivity is supported over industry-standard interfaces provided by third-party controllers.

As AI chip complexity grows, so does the need for automation, adaptability, and speed. FlexGen delivers all three, helping teams build smarter interconnects—faster—so they can focus on what matters: advancing AI performance at scale.

With the rise of RISC-V and custom silicon for AI, how does Arteris’ approach to NoC design differ from traditional interconnect architectures?

Traditional interconnect architectures were primarily built for fixed-function designs, but today's RISC-V and custom AI silicon demand a more configurable, scalable, and automated approach than a modified one-size-fits-all solution. That's where Arteris stands apart. Our NoC IP, especially with FlexGen, is designed to adapt to the diversity and modularity of modern SoCs, including custom cores, accelerators, and chiplets, as mentioned above.

FlexGen enables designers to generate and optimize topologies that reflect unique workload characteristics, whether low-latency paths for AI inference or high-bandwidth routes for shared memory across RISC-V clusters. Unlike static interconnects, FlexGen’s algorithms tailor each NoC to the chip’s architecture across clock domains, voltage islands, and floorplan constraints.
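
A toy example, with hypothetical weights and candidate topologies, shows why workload-aware scoring matters: an inference profile and a training profile can legitimately pick different interconnects:

```python
# Illustrative only: scoring candidate topologies under two hypothetical
# workload profiles, so the "best" NoC differs by use case.
def score(latency_ns, bandwidth_gbs, profile):
    w = {"inference": (0.8, 0.2), "training": (0.2, 0.8)}[profile]
    # Lower latency and higher bandwidth are both good; normalize crudely.
    return w[0] * (1.0 / latency_ns) + w[1] * (bandwidth_gbs / 100.0)

candidates = {"low_latency_tree": (2.0, 40.0), "wide_mesh": (6.0, 120.0)}
for profile in ("inference", "training"):
    best = max(candidates, key=lambda c: score(*candidates[c], profile))
    print(profile, "->", best)  # inference picks the tree, training the mesh
```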

As a result, Arteris enables teams building custom silicon to move faster, reduce risk, and get the most from their highly differentiated designs—something traditional interconnects weren't built to handle.

FlexGen claims a 10x improvement in design iteration speed. Can you walk us through how this automation reduces complexity and accelerates time-to-market for System-on-Chip (SoC) designers?

FlexGen delivers a 10x improvement in design iteration speed by automating some of the most complex and time-consuming tasks in NoC design. Instead of manually configuring topologies, resolving clock domains, or optimizing routes, designers use FlexGen's physically aware, AI-powered engine to handle these in hours (or less)—tasks that traditionally took weeks.

As mentioned above, partial completion mode can automatically finish even partially completed designs, preserving manual intent while accelerating timing closure.
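
Conceptually, partial completion can be pictured with this minimal sketch (illustrative only, not the FlexGen implementation): edges the designer drew by hand are preserved verbatim, and only unconnected endpoints are auto-wired:

```python
# Illustrative only: "partial completion" as preserving hand-drawn edges and
# auto-connecting only the endpoints the designer left unfinished.
def complete(endpoints, manual_edges, hub="auto_switch"):
    connected = {n for e in manual_edges for n in e}
    auto_edges = [(n, hub) for n in endpoints if n not in connected]
    return manual_edges + auto_edges  # manual intent is never rewritten

endpoints = ["cpu", "gpu", "npu", "dma"]
manual = [("cpu", "sw0"), ("gpu", "sw0")]  # designer's explicit choices
print(complete(endpoints, manual))
# [('cpu', 'sw0'), ('gpu', 'sw0'), ('npu', 'auto_switch'), ('dma', 'auto_switch')]
```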

The result is a faster, more accurate, and easier-to-iterate design flow, enabling SoC teams to explore more architectural options, respond to late-stage changes, and get to market faster—with higher-quality results and less risk of costly rework.

One of FlexGen's standout features is wire length reduction, which improves power efficiency. How does this impact overall chip performance, particularly in power-sensitive applications like edge AI and mobile computing?

Wire length directly impacts power consumption, latency, and overall chip efficiency—both in cloud AI/HPC applications that use the most advanced nodes and in edge AI inference applications where every milliwatt matters. FlexGen's ability to automatically minimize wire length—often by up to 30%—means shorter data paths, reduced capacitance, and less dynamic power draw.
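
The physics behind this is first-order dynamic power, P = α·C·V²·f, where the switched capacitance C scales roughly with routed wire length. A quick worked example with invented values shows how a 30% wire reduction flows straight into the interconnect's dynamic power:

```python
# Illustrative only: first-order dynamic power, P = a * C * V^2 * f, where
# wire capacitance scales roughly with routed length.
def dynamic_power_w(activity, cap_farads, volts, freq_hz):
    return activity * cap_farads * volts**2 * freq_hz

base_cap = 2e-9  # hypothetical total NoC wire capacitance, 2 nF
before = dynamic_power_w(0.2, base_cap, 0.75, 1.5e9)
after = dynamic_power_w(0.2, base_cap * 0.7, 0.75, 1.5e9)  # 30% less wire
print(f"{before:.3f} W -> {after:.3f} W ({(1 - after / before):.0%} saved)")
```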

In real-world terms, this translates to lower heat generation, longer battery life, and better performance per watt, all of which are critical for AI workloads at the edge, in mobile environments, and in the cloud, where they directly impact the total cost of ownership (TCO). By optimizing the NoC topology with AI-guided placement and routing, FlexGen ensures that performance targets are met without sacrificing power efficiency—making it an ideal fit for today's and tomorrow's energy-sensitive designs.

Arteris has partnered with leading semiconductor companies in AI data centers, automotive, consumer, communications, and industrial electronics. Can you share insights on how FlexGen is being adopted across these industries?

Arteris NoC IP sees strong adoption across all markets, particularly for high-end, more advanced chiplets and SoCs. That's because it addresses each sector's top challenges: performance, power efficiency, and design complexity, while preserving core functionality and area constraints.

In automotive, for example, companies like Dream Chip use FlexGen to speed up the intersection of AI and safety for autonomous driving, leveraging Arteris for their ADAS SoC design while meeting strict power and safety constraints. In data centers, FlexGen's smart NoC optimization and generation help manage massive bandwidth demands and scalability, especially for AI training and overall acceleration workloads.

For industrial electronics, where design cycles are tight and product longevity is key, FlexGen provides a fast, repeatable path to optimized NoC architectures. Customers value its incremental design flow, AI-based optimization, and ability to adapt quickly to evolving requirements, making FlexGen a cornerstone for next-generation SoC development.

The semiconductor supply chain has faced significant disruptions in recent years. How is Arteris adapting its strategy to ensure Network-on-Chip (NoC) solutions remain accessible and scalable despite these challenges?

Arteris responds to supply chain disruptions by doubling down on what makes our NoC solutions resilient and scalable: automation, flexibility, and ecosystem compatibility.

FlexGen helps customers design faster and remain agile in adjusting to changing silicon availability, node shifts, or packaging strategies, whether they are doing derivative designs or creating new interconnects from scratch.

We also support customers across different process nodes, IP vendors, and design environments, ensuring they can deploy Arteris solutions regardless of their foundry, EDA tools, or SoC architecture.

By reducing dependency on any one part of the supply chain and enabling faster, iterative design, we're helping customers derisk their designs and stay on schedule—even in uncertain times.

Looking ahead, what are the biggest shifts you anticipate in SoC development, and how is Arteris preparing for them?

One of the most significant shifts in SoC development is the move toward heterogeneous architectures, chiplet-based designs, and AI-centric workloads. These trends demand far more flexible, scalable, and intelligent interconnects—something traditional methods can't keep up with.

Arteris is preparing by investing in AI-driven automation, as seen in FlexGen, and expanding support for multi-die systems, complex clock/power domains, and late-stage floorplan changes. We’re also focused on enabling incremental design, faster iteration, and seamless IP integration—so our customers can keep pace with shrinking development cycles and rising complexity.

Our goal is to ensure SoC (and chiplet) teams stay agile, whether they're building for edge AI, cloud AI, or anything in between, all while providing the best power, performance, and area (PPA) regardless of the complexity of the design, XPU architecture, and foundry node used.
