Resources & Support

SiFive Blog

The latest insights and deeper technology dives from RISC-V leaders

September 08, 2025

The Perfect Solution for Local AI

The new X100 series, part of the 2nd-generation SiFive Intelligence Family

Over recent years, the tech industry has focused heavily on AI. With the fast growth of RISC-V, SiFive has taken the lead through our Intelligence family of IP, offering a single ISA-based scalable compute platform and the ability to customize to specific AI workloads.

One request we often get from our customers is for help in moving AI capabilities from the cloud into factories, into people’s homes and offices, and into their hands – making AI local. There are several reasons why they want to do this. Firstly, processing data locally can give much quicker responses, making interacting with a device seem far more natural. It also helps ensure privacy and protection of your data – if that data never leaves your local device, it’s much easier to know who has access to it and what they do with it. As we come to rely on this technology more, it also becomes important to know that it will work reliably, even when you can’t rely on network connectivity or infrastructure availability.

Achieving this is not straightforward, however. AI typically requires large amounts of data and intensive compute power. Existing embedded processors often do not provide sufficient data processing capabilities, and the cost and power constraints of these devices do not allow more powerful CPUs designed for the datacenter to be used. SiFive’s new X100 series addresses these challenges in several key ways.

Firstly, we have scaled down the vector engine used on our X200 and X300 series to fit the stringent cost and power constraints, while still providing the data processing performance required for many edge-AI inference use cases. The RISC-V vector ISA and a range of clever implementation features have resulted in a flexible vector engine that can address these challenges even with low cost, high latency memory (a future blog post will delve into these features in more technical detail).

The combined result of these features is that a SiFive X160-based design can perform many AI inference tasks at over twice the speed of the Arm™ Cortex®-M85, with the same silicon footprint.

Figure 1: SiFive Intelligence X160 performance vs. competition in typical AI tasks.

Some edge AI tasks, however, will require more performance, and this can only be realized within the cost and power budgets of edge devices by adding custom accelerators. These allow our customers to provide just the right compute profile for the intended task, within the smallest area and power constraints. However, integrating such accelerators into a complex compute subsystem can be difficult and inefficient from both a hardware and software perspective. To assist in this task and improve efficiency, we’ve provided two different direct-to-core coprocessor interfaces that can be used individually or together, depending on the end application's requirements. This allows the X100 series to be used as an Accelerator Control Unit (ACU), where it can take on the detailed management of, and partnership with, a customer's custom accelerator, another area that will be explored in detail in a future blog post.

Although only recently introduced, we have a number of customers already working hard on integrating the X100 series into their systems to benefit from this unrivaled AI efficiency. Two of these are Tier 1 US semiconductor companies, one leveraging the vector performance of X160 to power their next-gen Edge AI SoCs, and the other using X160 as control and assist for their own custom AI accelerator.

In addition to the X160, we have also released the X180. This Intelligence IP implements the 64-bit RV64I ISA. The larger physical address space makes this ideal for working alongside larger CPUs in a complex SoC.

Figure 2: Introducing the SiFive Intelligence X100-series products: X160 and X180

These new products are ideal for supporting on-device local AI, inference at the edge, and vector processing at the deeply embedded far edge. To learn more about these products, review their specs, and read about customer and vertical use cases, please take a look at the Intelligence Family webpage and download the product briefs for the X160, the X180, or the Intelligence Family.

If you are working on a local AI solution and want to explore whether these or any other SiFive products are a good fit, please contact our SiFive sales team.