Benefits

  • Future-Proof – Aligns with the chiplet ecosystem led by the UCIe Consortium.
  • Cross-Domain Utility – Edge, data center, AI, and safety-critical systems.
  • Early Risk Reduction – Validate performance, partitioning, and thermal design before silicon.
  • Flexible Architecture – Supports heterogeneous chiplet integration.
  • Scalable Designs – From two chiplets to thousands in complex AI clusters.

The UCIe block in VisualSim models the next-generation interconnect standard for chiplets, enabling multiple dies — processors, memory, accelerators, I/O, and custom IP — to work seamlessly as part of one system.

As chiplet-based design becomes the future of SoC architecture, UCIe provides a standardized, high-bandwidth, low-latency fabric for integrating heterogeneous chiplets into a single package. This allows system architects to partition workloads, optimize power vs. performance trade-offs, and scale designs from edge devices to massive AI/data center platforms.

VisualSim’s UCIe library allows users to construct any form of chiplet-based system, validate power and performance targets, and size the architecture to ensure that throughput, latency, and reliability requirements are met before tape-out.
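For architecture sizing, a quick raw-bandwidth estimate is often the first step. The per-lane rate (32 GT/s) and module widths (16 lanes for a standard package, 64 for an advanced package) used below are commonly quoted UCIe 1.x figures; treat them as illustrative assumptions, not VisualSim outputs:

```python
def raw_bandwidth_gbps(lanes: int, rate_gtps: float) -> float:
    """Raw per-direction bandwidth of one UCIe module in GB/s.

    One transfer moves one bit per lane, so raw Gb/s = lanes * GT/s,
    divided by 8 to express the result in GB/s (no protocol overhead).
    """
    return lanes * rate_gtps / 8

advanced = raw_bandwidth_gbps(lanes=64, rate_gtps=32)  # advanced package module
standard = raw_bandwidth_gbps(lanes=16, rate_gtps=32)  # standard package module
print(advanced, standard)  # 256.0 64.0
```

Simulation then refines this ceiling with protocol overhead, buffering, and retry effects, which is where the gap between raw and delivered throughput shows up.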

Overview

  • Configurable Buffers – TX/RX buffers with adjustable depth.
  • Error Handling – Retry mechanisms and error-checking logic.
  • Flow Control – Prevents congestion between chiplets.
  • Selective Acknowledgment – Reliable packet delivery.
  • Performance Parameters – Read request size, link speed, timeout control.
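The buffering and flow-control behaviour above can be sketched as a toy credit-based queue. The class name, credit granularity, and sizes are illustrative only, not VisualSim internals:

```python
from collections import deque

class UcieTxQueue:
    """Toy credit-based flow control between two chiplets: the sender may
    transmit only while the receiver advertises free buffer slots (an
    analogue of Buffer_Size_Bytes); acknowledgments return credits."""

    def __init__(self, credits: int):
        self.credits = credits   # free RX buffer slots advertised
        self.inflight = deque()  # sent but not yet acknowledged

    def send(self, pkt) -> bool:
        if self.credits == 0:
            return False         # back-pressure: packet waits in the TX buffer
        self.credits -= 1
        self.inflight.append(pkt)
        return True

    def ack(self) -> None:
        """An acknowledgment retires the oldest in-flight packet
        and returns one credit to the sender."""
        self.inflight.popleft()
        self.credits += 1
```

With two credits, a third send is refused until an acknowledgment returns a credit; this back-pressure is the congestion prevention the flow-control bullet describes.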

Supported Standards

  • UCIe 1.0 – Initial release (2022).
  • UCIe 1.1 – Enhancements for interoperability (2023).
  • UCIe 3.0 – Forward support for future specification revisions as the consortium releases them.

Key Parameters

  • UCIe_Switch_Name – Interconnect topology.
  • Package_Type – 2.5D, 3D stacking, or advanced packaging.
  • Buffer_Size_Bytes – TX/RX buffer depth.
  • BER – Bit error rate.
  • NumOfRetry – Retry count for error handling.
  • Timeout – Transaction timeout.
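A hypothetical parameter set mirroring these fields might look as follows; the values are illustrative, not VisualSim defaults. The last two lines show the kind of sanity check a BER figure enables: the probability that a 256-bit flit (an assumed transfer unit) contains at least one bit error:

```python
ucie_params = {
    "UCIe_Switch_Name": "UCIe_Switch_1",  # interconnect topology instance
    "Package_Type": "Advanced_2.5D",      # 2.5D, 3D stacking, or advanced packaging
    "Buffer_Size_Bytes": 4096,            # TX/RX buffer depth
    "BER": 1e-15,                         # bit error rate
    "NumOfRetry": 3,                      # retries before the transaction fails
    "Timeout": 1e-6,                      # transaction timeout, in seconds
}

# With independent bit errors, a flit is clean only if every bit survives.
flit_bits = 256
p_flit_error = 1 - (1 - ucie_params["BER"]) ** flit_bits
```

Estimates like this help size NumOfRetry and Timeout so the retry budget comfortably covers the expected error rate.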

Application

  • AI/ML Infrastructure – Scale-out TPU/GPU/accelerator chiplets in AI training clusters.
  • Data Centers – Disaggregated CPUs, memory, storage, and accelerators.
  • Automotive SoCs – Safety-critical ECU consolidation via chiplets.
  • Edge Computing – Compact, power-efficient multi-chiplet devices.
  • Aerospace & Defense – Radiation-tolerant, modular computing platforms.

Integrations

  • Works with memory models (DDR, HBM, LPDDR, GDDR).
  • Integrates with processors (ARM, RISC-V, Power, DSPs).
  • Co-simulates with NoCs, PCIe, CXL, and Ethernet for full-system modeling.
  • Links with Task Graphs to simulate workload partitioning across chiplets.
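As a sketch of the task-graph link, the quantity a partitioning study minimizes is the traffic forced across the UCIe link by a given placement. The task names, edge weights, and two-chiplet placement below are made up for illustration:

```python
# Task graph: (producer, consumer) -> MB transferred per iteration.
edges = {
    ("decode", "attn"): 64,
    ("attn", "mlp"): 128,
    ("mlp", "out"): 64,
}

# Assignment of each task to a chiplet (0 or 1).
placement = {"decode": 0, "attn": 0, "mlp": 1, "out": 1}

# Only edges whose endpoints sit on different chiplets cross the UCIe link.
cross_die_mb = sum(mb for (a, b), mb in edges.items()
                   if placement[a] != placement[b])
print(cross_die_mb)  # 128
```

Sweeping placements against the link bandwidth and latency modeled by the UCIe block is exactly the partitioning trade-off the integration supports.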

Schedule a consultation with our experts
