Model. Simulate. WIN 🏆

Welcome to Model. Simulate. WIN, the premier competition for system designers, engineers, and semiconductor professionals. Hosted by Mirabilis Design, this competition offers a unique opportunity to gain hands-on, practical experience with our flagship product, VisualSim. Join us to showcase your skills, compete with peers, and win exciting prizes!

Our goal is to provide participants with hands-on experience using an industry-leading tool, VisualSim. The competition challenges you to model and simulate scenarios, complete tasks, and demonstrate your expertise.

Stand a chance to WIN exciting prizes

  Don’t Wait! You can register for the Competition Now!

How to Enter the Competition

1. Submit the Entry Form

Fill out the entry form to register for the competition.

2. Choose Your Experiment

Select one of three experiments to model and simulate using VisualSim.

3. Complete the Given Tasks

Perform the specified tasks and answer the questions based on the results of your experiment.

Important Dates

  • Submission Deadline: October 15th
  • Winners Announcement: October 28th

You can submit your entries via email: write to us at info@mirabilisdesign.com.

Judging Criteria

Participants will be judged based on:

  • Experiment Response: Accuracy and innovation in modeling and simulation.
  • Task Completion: Effectiveness and efficiency in completing assigned tasks and answering the questions.

Steps to Access VisualSim Models

Follow the steps mentioned below to gain access:

  • Step 1: Visit the location mentioned in each experiment to access the pre-built VisualSim model.
  • Step 2: Download OpenWebStart: https://openwebstart.com/download/
  • Step 3: Download Java 17: https://www.oracle.com/java/technologies/javase/jdk17-archive-downloads.html
  • Step 4: After installing OpenWebStart, open it, click JVM Manager, click Add Local, and select the directory containing the Java 17 folder.
  • Step 5: Click the red Launch button on the HTML page. An X.jnlp file will be downloaded to your computer.
  • Step 6: Double-click the downloaded file to get started.
  • Step 7: Accept the two Java security warnings by clicking Yes.

Follow the instructions for your chosen experiment.


How to Submit Your Experiment Results

After you submit the entry form, your registered email address will be used for further communication. Send your experiment response to info@mirabilisdesign.com, include your name, and be sure to mention “DAC 2024” in the subject line.


Frequently Asked Questions (FAQs)

Multi-Chiplet Interconnect Performance Analysis

You can access the UCIe interconnect configuration block by navigating to the simulation model located at VS_AR\demo\Bus_Std\UCle\Chiplet_UCIe_NOC_with_TG and opening the configuration settings for the UCIe interconnect.

Adjusting the buffer size for transmission is important as it impacts the efficiency of data transfers. A larger buffer size can accommodate larger data packets, reduce buffer overflows, and minimize data transfer delays, leading to improved performance.
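The effect described above can be sketched with a toy FIFO model. All numbers below (arrival and service probabilities, cycle counts) are illustrative assumptions, not VisualSim parameters or UCIe specifications:

```python
import random

def simulate(buffer_size, n_cycles=10_000, arrival_p=0.45, service_p=0.5, seed=1):
    """Toy FIFO link model: a packet arrives with probability arrival_p per
    cycle, and one queued packet is drained with probability service_p.
    Packets arriving to a full buffer are dropped (overflow)."""
    rng = random.Random(seed)
    queue = dropped = delivered = 0
    for _ in range(n_cycles):
        if rng.random() < arrival_p:            # new packet offered
            if queue < buffer_size:
                queue += 1
            else:
                dropped += 1                    # buffer overflow
        if queue and rng.random() < service_p:  # link drains one packet
            queue -= 1
            delivered += 1
    return dropped, delivered

for size in (2, 8, 32):
    drops, okay = simulate(size)
    print(f"buffer={size:>2}: delivered={okay}, dropped={drops}")
```

Under a near-saturated load like this, enlarging the buffer sharply reduces overflow drops, which is the efficiency gain the answer above refers to; the cost (not modeled here) is the extra area, power, and queuing latency of the larger buffer.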

Potential trade-offs include increased power consumption, higher thermal output, and increased design complexity. These changes might necessitate more advanced cooling solutions and sophisticated management mechanisms to maintain system stability.

RISC-V SIMD Experiment

Reducing the number of pipeline stages aims to streamline the instruction execution process, potentially reducing pipeline stalls and improving resource management, which can lead to more efficient processor performance.

To modify the pipeline stages, access the processor configuration block in the simulation model located at VS_AR\doc\Training_Material\Architecture\Processor\RISC_V\RISC_V_InOrder.xml and change the Number_of_Pipeline_Stages parameter from 4 to 3.

Reducing pipeline stages can increase context switching efficiency due to fewer stages to save and restore. However, it might also introduce a higher risk of hazards, requiring more sophisticated control logic to maintain performance.

Combining the decode and execute stages can lead to improved resource management by simplifying the pipeline control logic and potentially reducing instruction latency through these stages.

Potential trade-offs include an increased risk of data and control hazards, which might require more complex bypassing and forwarding mechanisms to avoid pipeline stalls. Additionally, the reduced flexibility in handling independent instructions could impact overall processor performance under certain workloads.
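The trade-off described in this section can be approximated with a back-of-envelope model. The logic delay, latch overhead, and hazard rate below are hypothetical numbers chosen for illustration; they are not VisualSim outputs or RISC-V measurements:

```python
def pipeline_metrics(stages, logic_ns=4.0, latch_ns=0.2,
                     hazard_rate=0.05, instructions=1_000_000):
    """Back-of-envelope in-order pipeline model.
    Cycle time = per-stage share of the logic delay plus latch overhead;
    each hazard flushes the pipeline, costing (stages - 1) extra cycles."""
    cycle_ns = logic_ns / stages + latch_ns
    cycles = instructions * (1 + hazard_rate * (stages - 1))
    return cycle_ns, cycles * cycle_ns  # (clock period ns, total runtime ns)

for s in (3, 4):
    period, runtime = pipeline_metrics(s)
    print(f"{s} stages: period={period:.2f} ns, runtime={runtime / 1e6:.2f} ms")
```

Fewer stages shorten the flush penalty per hazard but lengthen the clock period, so whether 3 stages beats 4 depends on the hazard rate and the logic/latch balance — exactly the workload-dependent trade-off noted above.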

DNN Model Mask R-CNN CPU Experiment

You can find the experiment setup and configuration files in the following directory: VisualSim/VS_AR/demo/DNN/DNN_Model_Mask_R_CNN_CPU

The base experiment uses 168 AI cores and assumes that 90% of the data is readily available in the SRAM cache.

Reducing the number of cores generally lowers power consumption but also decreases the FPS. Increasing data availability in SRAM reduces off-chip memory accesses, which can improve both FPS and power consumption.
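These two effects — core count setting the compute ceiling and SRAM hit rate setting the memory ceiling — can be sketched with a rough roofline-style estimate. Every constant below (MACs per frame, access counts, latencies) is a hypothetical placeholder, not a measured value from the Mask R-CNN model:

```python
def estimate_fps(cores, sram_hit, macs_per_frame=2.0e10,
                 macs_per_core_per_s=1.0e9, accesses_per_frame=2.0e7,
                 sram_ns=2.0, dram_ns=60.0):
    """Roofline-style sketch: frame time is the larger of the compute time
    (MACs spread over the cores) and the memory time (average access cost
    weighted by the SRAM hit rate)."""
    compute_s = macs_per_frame / (cores * macs_per_core_per_s)
    avg_access_ns = sram_hit * sram_ns + (1.0 - sram_hit) * dram_ns
    memory_s = accesses_per_frame * avg_access_ns * 1e-9
    return 1.0 / max(compute_s, memory_s)

for cores, hit in ((168, 0.90), (84, 0.90), (168, 0.99)):
    print(f"cores={cores:>3}, SRAM hit={hit:.0%}: {estimate_fps(cores, hit):.1f} FPS")
```

Halving the cores lowers FPS once the model becomes compute-bound, while raising the SRAM hit rate lifts the memory ceiling — mirroring the qualitative behavior described above.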

The primary bottlenecks often revolve around memory bandwidth limitations and the number of available MAC (Multiply-Accumulate) units. These factors can restrict how quickly data can be fetched and processed, affecting the overall speed and efficiency of the DNN model.

Dynamic allocation shines in situations where the workload varies. It can save power when demand is low and ramp up resources during peak demand, offering flexibility that static allocation lacks.
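A minimal sketch of why dynamic allocation saves energy under a varying workload, assuming an idealized allocator that tracks demand exactly and a hypothetical per-core power figure:

```python
def energy_joules(core_schedule, watts_per_core=0.25, interval_s=1.0):
    """Energy consumed over a sequence of fixed-length intervals,
    given the number of active cores in each interval."""
    return sum(cores * watts_per_core * interval_s for cores in core_schedule)

demand = [20, 20, 160, 160, 40, 20, 20, 20]  # cores needed per interval
static = [max(demand)] * len(demand)         # always provision for the peak
dynamic = demand                             # idealized: track demand exactly

print(f"static : {energy_joules(static):.0f} J")
print(f"dynamic: {energy_joules(dynamic):.0f} J")
```

Static allocation pays the peak cost in every interval, while dynamic allocation pays only for what the workload uses; real allocators sit between these bounds because of switching latency and power-gating overhead.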

The findings from this experiment are particularly relevant for deploying DNN models on devices with limited resources, such as edge devices or embedded systems. By dynamically adjusting resource allocation, we can potentially improve energy efficiency and performance, making DNNs more practical for real-time applications in various fields.

General FAQs 

Document all simulation parameters and modifications accurately, noting the impact of changes on key performance metrics such as data throughput and latency. Record observations, findings, and conclusions for each simulation run in a detailed report.

Use the TextDisplay block within the simulation software to monitor outputs and results. This tool will help you track real-time changes and analyze the performance impact of different configurations.

To analyze trade-offs, compare the performance metrics such as data throughput, latency, power consumption, and thermal output before and after each modification. Consider the balance between improved performance and potential drawbacks like increased power usage and design complexity.
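One simple way to tabulate such before/after comparisons is as percent changes per metric. The metric names and values below are made-up examples, not results from any of the experiments:

```python
def percent_change(before, after):
    """Percent change per metric between two runs; negative means a reduction."""
    return {k: 100.0 * (after[k] - before[k]) / before[k] for k in before}

baseline = {"throughput_GBps": 12.0, "latency_ns": 180.0, "power_W": 3.2}
modified = {"throughput_GBps": 15.0, "latency_ns": 150.0, "power_W": 3.8}

for metric, pct in percent_change(baseline, modified).items():
    print(f"{metric}: {pct:+.1f}%")
```

Laying out each modification this way makes the trade-off explicit: here a throughput gain and latency reduction come at the cost of higher power, which is the kind of balance the criterion above asks you to weigh.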

  Don’t Wait! You can register for the Competition Now!

For any questions or further information, please contact us at info@mirabilisdesign.com.