Vectorization of Verilog Designs and its Effects on Verification and Synthesis
23 points by matt_d 4 days ago | 4 comments

taktoa 5 hours ago
Regarding synthesis, I think approaches like this often seem promising to software engineers but ignore the realities of physical design. Hierarchical physical design tends to be worse than flat PD because there are many variables to optimize (placement density, congestion, IR drop, thermal, parasitics, signal integrity, di/dt, ...), and even if you have some solution in mind that optimizes area for a highly regular block, that layout could be worse than a solution that intersperses lower-power cells throughout that regular logic to reduce hotspots. And since placement is not going to be regular in any real design, delay won't be either, and there is a technique called resynthesis that restructures the logic network based on exactly which paths are critical, which will essentially destroy whatever logic regularity existed.

The other thing is that high-level optimizations tend to be hard to come by in hardware. Most datapath hardware is not highly fixed-function; instead it consists of somewhat general blocks that contain a few domain-specific fused ops. So we either have hardware specifications that are natural language or RTL specifications that are too low level for meaningful design exploration. Newer RTL languages and high-level synthesis tools _also_ tend to be too low level for this kind of thing. It's a pretty challenging problem to design a formal specification language that is simultaneously high level enough and yet allows a compiler to do a good job of finding the optimal chip design. Approximate numerics are the most concrete example of this: there just aren't really any good algorithms for solving the problem of "what is the most efficient way to approximate this algorithm with N% precision", and that's not even counting the flexibility-vs-efficiency tradeoff, which requires something like human judgement, or the fact that in many domains it's hard to formulate an error metric that isn't either too conservative or too permissive.

reply
tasty_freeze 3 hours ago
I read the summary but not the paper, and it seems to have nothing to do with physical design. This is a means of making elaboration, compilation, and simulation of the design faster.

Say someone wrote this code:

    wire [31:0] a, b, c;
    genvar i;
    generate
        for (i = 0; i < 32; i = i + 1) begin : gen_and
            assign c[i] = a[i] & b[i];
        end
    endgenerate
it sounds like this paper is about recognizing that it could instead be written as something akin to this:

    wire [31:0] a, b, c;
    assign c = a & b;
Both will produce the exact same gates, but the latter form will compile and simulate faster.
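
To see why the second form simulates faster, here is a toy sketch in Python rather than Verilog (the function names and the event-per-bit cost model are my own illustration, not anything from the paper): a bit-blasted netlist makes the simulator evaluate 32 separate 1-bit ANDs per update, while the vectorized form is a single word-level machine operation.

    def and_bit_blasted(a, b, width=32):
        # One evaluation per bit, mimicking a simulator scheduling
        # 32 separate single-bit continuous assigns.
        c = 0
        for i in range(width):
            bit = ((a >> i) & 1) & ((b >> i) & 1)
            c |= bit << i
        return c

    def and_vectorized(a, b, width=32):
        # One evaluation for the whole vector: a single word-level AND,
        # masked to the declared width.
        return (a & b) & ((1 << width) - 1)

Both compute the same result; the vectorized version just does ~32x less work per update, which is roughly the win the paper is after in simulation.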
reply
taktoa 3 hours ago
Section 4.4 discusses the effect of the technique on Cadence Genus, which is a synthesis/PD tool. My point is that you have to flatten the design at some point, and most of the benefit of flattening it later (keeping/making things vectorized) is to enable higher-level transformations, which are mostly not effective.
reply
ebuyan 2 hours ago
What is the best vectorization for large documents (hundreds of pages)?
reply