| min_chan_width | The minimum routable channel width | Medium\* |
+| crit_path_routed_wirelength | The routed wirelength at the relaxed channel width | Medium |
+| NoC_agg_bandwidth\** | The total link bandwidth utilized by all traffic flows | Low |
+| NoC_latency\** | The total time of traffic flow data transfer (summed over all traffic flows) | Low |
+| NoC_latency_constraints_cost\** | The total number of traffic flow latency constraints | Low |

\* By default, VPR attempts to find the minimum routable channel width; it then performs routing at a relaxed (e.g. 1.3x minimum) channel width. At minimum channel width routing congestion can distort the true timing/wirelength characteristics. Combined with the fact that most FPGA architectures are built with an abundance of routing, post-routing metrics are usually only evaluated at the relaxed channel width.

+\** NoC-related metrics are only reported when the --noc option is enabled.
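For context (not part of this diff), a minimal sketch of a VPR invocation that would produce the NoC metrics above. The architecture, circuit, and traffic-flows file names are placeholders, and the flags are assumed from VPR's standard NoC and routing options:

```shell
# Hypothetical example: route at a fixed (relaxed) channel width with NoC support enabled.
# Without --route_chan_width, VPR first searches for the minimum routable channel width.
$ vpr stratixiv_arch.timing.xml my_noc_design.blif \
      --noc on \
      --noc_flows_file my_noc_design_flows.xml \
      --route_chan_width 300
```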
Run-time/Memory Usage Metrics:
| Metric | Meaning | Sensitivity |
@@ -493,7 +497,7 @@ k6_frac_N10_frac_chain_mem32K_40nm.xml boundtop.v common 9f591f6-
The [Titan benchmarks](https://docs.verilogtorouting.org/en/latest/vtr/benchmarks/#titan-benchmarks) are a group of large benchmark circuits from a wide range of applications, which are compatible with the VTR project.
They are typically used as post-technology mapped netlists which have been pre-synthesized with Quartus.
+#NoC benchmarks are run as several different tasks. Therefore, QoR results should be gathered from multiple directories,
+#one for each task.
+$ head -5 tasks/noc_qor/large_complex_synthetic/latest/parse_results.txt
+$ head -5 tasks/noc_qor/large_simple_synthetic/latest/parse_results.txt
+$ head -5 tasks/noc_qor/small_complex_synthetic/latest/parse_results.txt
+$ head -5 tasks/noc_qor/small_simple_synthetic/latest/parse_results.txt
+$ head -5 tasks/noc_qor/MLP/latest/parse_results.txt
+```
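A convenience sketch (not part of this diff) for collecting the same QoR data from all five task directories in one pass; it assumes the run-directory layout shown above (tasks/noc_qor/&lt;task&gt;/latest/parse_results.txt):

```shell
# Print the first few result rows from every NoC QoR task directory.
for task in large_complex_synthetic large_simple_synthetic small_complex_synthetic small_simple_synthetic MLP; do
    echo "== ${task} =="
    head -5 "tasks/noc_qor/${task}/latest/parse_results.txt"
done
```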
### Example: Koios Benchmarks QoR Measurement
The [Koios benchmarks](https://github.com/verilog-to-routing/vtr-verilog-to-routing/tree/master/vtr_flow/benchmarks/verilog/koios) are a group of Deep Learning benchmark circuits distributed with the VTR project.
doc/src/vtr/benchmarks.rst (+14 -1 lines)
@@ -191,7 +191,20 @@ The SymbiFlow benchmarks can be downloaded and extracted by running the followin
cd $VTR_ROOT
make get_symbiflow_benchmarks

-Once downloaded and extracted, benchmarks are provided as post-synthesized eblif files under: ::
+Once downloaded and extracted, benchmarks are provided as post-synthesized blif files under: ::

$VTR_ROOT/vtr_flow/benchmarks/symbiflow

+.. _noc_benchmarks:
+
+NoC Benchmarks
+----------------
+NoC benchmarks are composed of synthetic and MLP benchmarks and target NoC-enhanced FPGA architectures. Synthetic
+benchmarks include a wide variety of traffic flow patterns and are divided into two groups: 1) simple and 2) complex
+benchmarks. As their names imply, simple benchmarks use very simple and small logic modules connected to NoC routers,
+while complex benchmarks implement more complicated functionalities like encryption. These benchmarks do not come from
+real application domains. On the other hand, MLP benchmarks include modules that perform matrix-vector multiplication
+and move data. Pre-synthesized netlists for the synthetic benchmarks are added to the VTR project, but MLP netlists should
+be downloaded separately.
+
+.. note:: The NoC MLP benchmarks are not included with the VTR release (due to their size). However, they can be downloaded and extracted by running ``make get_noc_mlp_benchmarks`` from the root of the VTR tree. They can also be `downloaded manually <https://www.eecg.utoronto.ca/~vaughn/titan/>`_.
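For reference (not part of this diff), fetching the MLP netlists follows the same pattern as the SymbiFlow example above; this sketch assumes a checked-out VTR tree with $VTR_ROOT pointing at its root:

```shell
# Download and extract the NoC MLP benchmark netlists (distributed separately due to their size).
$ cd $VTR_ROOT
$ make get_noc_mlp_benchmarks
```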
-- The above command will generate an output file in the run directory that contains all the place and route metrics. This is a txt file with a name which matches the
+- The above command will generate an output file in the run directory that contains all the place and route metrics. This is a txt file with a name which matches
the flows file provided. So for the command shown above the output file is 'mlp_1.txt'

Special benchmarks:
@@ -64,8 +66,13 @@ Running the benchmarks:
of the NoC routers needs to be locked. A
- To run a single instance of this benchmark, pass in the following command line parameter and its value to the command shown above:
+- All synthetic benchmarks can be run as VTR tasks. Example tasks are provided in vtr_flow/tasks/noc_qor
+- Instructions on how to run VTR tasks to measure QoR for NoC benchmarks are available in the VTR Developer Guide.

Expected run time:
- These benchmarks are quite large so the maximum expected run time for a single run is a few hours
- To speed up the run time with multiple VPR runs the thread count can be increased from 1. Set the thread count equal to the number of seeds for the fastest run time.
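A minimal sketch (not part of this diff) of running one of the example tasks with several worker threads; the task name, -j value, and script path are assumptions based on the standard VTR task runner:

```shell
# Run the MLP NoC QoR task with 4 parallel workers (e.g. one per seed).
$ cd $VTR_ROOT/vtr_flow
$ ./scripts/run_vtr_task.py noc_qor/MLP -j 4
```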
- The above command will generate an output file in the run directory that contains all the place and route metrics. This is a txt file with a name which matches the
-the flows file provided. So for the command shown above the outout file is 'complex_2_noc_1D_chain.txt'
+flows file provided. So for the command shown above the output file is 'complex_2_noc_1D_chain.txt'
+
+Running VTR tasks:
+- All synthetic benchmarks can be run as VTR tasks. Example tasks are provided in vtr_flow/tasks/noc_qor
+- Instructions on how to run VTR tasks to measure QoR for NoC benchmarks are available in the VTR Developer Guide.

Expected run time:
- These benchmarks are quite small so the maximum expected run time for a single run is ~30 minutes
vtr_flow/benchmarks/titan_blif/README.rst (+3 -3 lines)
@@ -1,10 +1,10 @@
-The `Titan <http://www.eecg.utoronto.ca/~vaughn/titan/>` benchmarks are distributed seperately from VTR due to their large size.
+The `Titan <http://www.eecg.utoronto.ca/~vaughn/titan/>` benchmarks are distributed separately from VTR due to their large size.

-The Titan repo is located under /home/kmurray/trees/titan on the U of T EECG network. Memebers of Vaughn Betz's research lab have read/write privileges.
+The Titan repo is located under /home/kmurray/trees/titan on the U of T EECG network. Members of Vaughn Betz's research lab have read/write privileges.

This repo is where the Titan flow is developed and where any changes to it should be made.

-In addition to the titan benchmarks, this repo contains scripts that are used ingeneration of the architecture description for Stratix IV.
+In addition to the titan benchmarks, this repo contains scripts that are used in generation of the architecture description for Stratix IV.

More specifically, they contain scripts that generate memory blocks & complex switch blocks.