1 comment

  • taktoa 1 hour ago
    Regarding synthesis, I think approaches like this often seem promising to software engineers but ignore the realities of physical design. Hierarchical physical design tends to be worse than flat PD because there are many variables to optimize (placement density, congestion, IR drop, thermal, parasitics, signal integrity, di/dt, ...), and even if you have some solution in mind that optimizes area for a highly regular block, that layout could be worse than one that intersperses lower-power cells throughout that regular logic to reduce hotspots. And since placement is not going to be regular in any real design, delay won't be either. There is also a technique called resynthesis, which restructures the logic network based on exactly which paths are critical, and which will essentially destroy whatever logic regularity existed.

    The other thing is that high-level optimizations tend to be hard to come by in hardware. Most datapath hardware is not highly fixed-function; instead it consists of somewhat general blocks that contain a few domain-specific fused ops. So we either have hardware specifications that are natural language, or RTL specifications that are too low level to do meaningful design exploration. Newer RTL languages and high-level synthesis tools _also_ tend to be too low level for this kind of thing; it's a pretty challenging problem to design a formal specification language that is simultaneously high level enough and yet allows a compiler to do a good job of finding the optimal chip design. Approximate numerics are the most concrete example of this: there just aren't really any good algorithms for solving the problem of "what is the most efficient way to approximate this algorithm with N% precision", and that's not even counting the flexibility-vs-efficiency tradeoff, which requires something like human judgement, or the fact that in many domains it's hard to formulate an error metric that isn't either too conservative or too permissive.
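
    The error-metric point can be made concrete with a toy sketch. Everything below is invented for illustration (the function, the interval, the degree-3 polynomial, the bit-width search): approximate exp(x) on [0, 1] in fixed point and ask for the cheapest datapath meeting an error bound.

```python
import math

# Toy example (not a real tool or flow): approximate exp(x) on [0, 1]
# with a degree-3 Taylor polynomial evaluated in fixed point, and ask
# "what is the fewest fractional bits that meets a given error bound?"

def quantize(v, frac_bits):
    """Round v to a fixed-point grid with frac_bits fractional bits."""
    scale = 1 << frac_bits
    return round(v * scale) / scale

def exp_poly(x):
    """Degree-3 Taylor series for exp(x) around 0."""
    return 1.0 + x + x * x / 2.0 + x * x * x / 6.0

def worst_error(frac_bits, relative):
    """Max error of the quantized approximation, sampled on [0, 1]."""
    worst = 0.0
    for i in range(1001):
        x = i / 1000.0
        approx = quantize(exp_poly(quantize(x, frac_bits)), frac_bits)
        err = abs(approx - math.exp(x))
        if relative:
            err /= math.exp(x)
        worst = max(worst, err)
    return worst

def cheapest_bits(tolerance, relative):
    """Fewest fractional bits meeting the bound, or None if impossible."""
    for bits in range(1, 32):
        if worst_error(bits, relative) <= tolerance:
            return bits
    return None
```

    Even in this tiny setting the metric decides the answer: a 2% relative-error bound is achievable at some bit width, while a 2% absolute-error bound is impossible at any width, because the polynomial's own truncation error near x = 1 dominates no matter how many bits you spend. The search didn't change, only the metric did. And the real problem also lets you vary the polynomial degree, the argument reduction, the rounding mode, and so on, which is where the search space blows up.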