Do you really need AI predictions for everything?

We hear a lot of chatter about AI-this and AI-that. Can we take a step back and ask whether, and why, we actually need AI? I am looking at this from the perspective of EDA tools for electronics and semiconductor design.

The constant challenge from users is simulation speed. First came the transition from gates to RTL to SystemC, then the move from simulation to prototyping and emulation. Of course, with Intel and AMD steadily increasing processor speeds and Samsung and Micron delivering faster, larger DDR DRAM, some of the pressure was deflected.

But the arrival of new technologies such as CXL, PCIe 6.0, HBM and UCIe has revived the need for broader test coverage with large workloads. LLM and ML workloads add complex topologies between GPUs and accesses to larger, faster memories over proprietary interconnects.

Because the number of tests that can be run on the full SoC or system is limited by the cost of EDA software, the limits of emulation and the limited availability of hardware, EDA companies are bending over backwards to offer an alternative way to generate statistics for the full suite of test cases. That alternative is AI for predictive analysis.

To be truly effective, predictive analysis requires many data points. Long simulation times mean that semiconductor and embedded system designers have only a limited number of data points, and this is even more true for power predictions. AI-based predictive analysis is only as good as the data points available for learning, and generating even a small set of data points takes a considerable amount of effort and intelligence.

A good alternative is system-level modeling. System-level modeling provides an abstraction at which scenarios that would take 5-6 days of runtime at RTL can be simulated in 3-4 minutes. Designers can then simulate the real scenario with the real use cases at an accuracy that is sufficient to make decisions.
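
To make the comparison concrete, here is a minimal sketch, in plain Python rather than VisualSim, of the same data transfer evaluated cycle by cycle versus as a single transaction-level expression. The clock frequency, bus width and function names are illustrative assumptions, not figures from the tool.

# A minimal sketch (not VisualSim code) contrasting a cycle-level and a
# transaction-level view of the same transfer. Clock and bus width are
# assumed values for illustration only.

CLOCK_HZ = 1_000_000_000        # assumed 1 GHz clock
BYTES_PER_CYCLE = 64            # assumed bus width per clock

def cycle_accurate_transfer(num_bytes: int) -> float:
    """Advance simulated time one clock at a time, as RTL simulation does."""
    time_s, transferred = 0.0, 0
    while transferred < num_bytes:
        transferred += BYTES_PER_CYCLE
        time_s += 1.0 / CLOCK_HZ        # one loop iteration per simulated clock
    return time_s

def transaction_level_transfer(num_bytes: int) -> float:
    """Advance simulated time in one step using an analytical latency model."""
    return num_bytes / (BYTES_PER_CYCLE * CLOCK_HZ)

# Both return the same simulated latency for a 1 MB burst, but the cycle-level
# version executes roughly 16,000 loop iterations while the transaction-level
# version evaluates one expression; that gap is where the runtime savings come from.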

Moreover, these system-level power models can leverage the data provided by PrimeTime PX (PTPX), network probes, spreadsheets and trace files. Rather than extrapolating decisions, the analysis can be made by simulating the model topology with the right configuration of data rates, resolution and other application attributes. The model can also include the software task graph and the definition of the hardware components.
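
As a rough illustration of that file integration, the sketch below, again plain Python rather than the tool itself, loads a per-state power table from a CSV (as might be exported from a spreadsheet or distilled from PTPX reports) and a simple text trace of device state changes. The column names, trace format and file names are assumptions made for the example.

import csv

def load_power_table(path: str) -> dict:
    """Read rows of 'device,state,power_mw' into a lookup table."""
    table = {}
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            table[(row["device"], row["state"])] = float(row["power_mw"])
    return table

def load_state_trace(path: str) -> list:
    """Read lines of '<time_us> <device> <state>' from a trace file."""
    events = []
    with open(path) as f:
        for line in f:
            time_us, device, state = line.split()
            events.append((float(time_us), device, state))
    return events

# power_table = load_power_table("soc_power_table.csv")    # hypothetical file
# state_trace = load_state_trace("workload_trace.txt")     # hypothetical file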

VisualSim Power Modeling (VPM) provides the file integration and the modeling components, or system-level IP, to assemble these models, run simulations and generate reports. Three levels of reports are generated: hierarchical domain power; instant and average power based on the current state of each device for the given task activity; and cumulative power for each individual IP block in the SoC or system.
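
The sketch below shows, in the same hedged spirit, how such reports can be derived from a state power table and a task-activity trace: instant power from the current state of each device, cumulative energy and average power per IP block, and a roll-up by power domain. The device names, states, power figures and domain assignments are invented for the example and are not VisualSim library values.

POWER_MW = {                                   # assumed per-state power values
    ("cpu", "active"): 350.0, ("cpu", "idle"): 40.0,
    ("dram", "read"): 120.0, ("dram", "selfrefresh"): 5.0,
}
DOMAIN = {"cpu": "compute", "dram": "memory"}  # assumed domain hierarchy

def power_report(state_trace, end_time_us):
    """state_trace: time-ordered (time_us, device, state) events per device."""
    energy_uj, last = {}, {}
    for t, dev, state in state_trace:
        if dev in last:                        # close the previous state interval
            t0, s0 = last[dev]
            energy_uj[dev] = energy_uj.get(dev, 0.0) + POWER_MW[(dev, s0)] * (t - t0) / 1e3
        last[dev] = (t, state)
    for dev, (t0, s0) in last.items():         # close the final interval
        energy_uj[dev] = energy_uj.get(dev, 0.0) + POWER_MW[(dev, s0)] * (end_time_us - t0) / 1e3

    instant_mw = {d: POWER_MW[(d, s)] for d, (_, s) in last.items()}
    average_mw = {d: e * 1e3 / end_time_us for d, e in energy_uj.items()}
    domain_mw = {}
    for dev, p in average_mw.items():          # hierarchical roll-up by domain
        domain_mw[DOMAIN[dev]] = domain_mw.get(DOMAIN[dev], 0.0) + p
    return instant_mw, average_mw, energy_uj, domain_mw

# Example with an invented 1 ms activity window:
# trace = [(0, "cpu", "active"), (0, "dram", "read"),
#          (600, "cpu", "idle"), (600, "dram", "selfrefresh")]
# power_report(trace, 1000)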

Secondary reports include heat, temperature and battery life cycle. The modeling library contains the state power table, power management table, battery model, power generators and components to define the behavior and the architecture. The system power model can handle concurrent tasks and multi-device dependencies.
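
For the battery report, a first-order estimate can be sketched as below; the capacity and voltage are assumed cell parameters, and a real battery model would also account for temperature, discharge rate and aging.

def battery_life_hours(average_power_mw: float,
                       capacity_mah: float = 4000.0,   # assumed cell capacity
                       voltage_v: float = 3.7) -> float:
    """Hours of operation at a constant average power draw."""
    energy_mwh = capacity_mah * voltage_v              # mAh x V = mWh
    return energy_mwh / average_power_mw

# Example: a 250 mW average draw on the assumed 4000 mAh / 3.7 V cell
# lasts roughly 59 hours.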

 

To learn more, check out https://www.mirabilisdesign.com/power-and-energy/ or send us a message at https://www.mirabilisdesign.com/contact/