This issue was moved to a discussion.
You can continue the conversation there. Go to discussion →
Faster timing estimates #46
As seen in the linked discussion, there are routes to faster timing feedback than running the full synthesis+PnR tools - thanks @bartokon. This likely involves estimation/modeling of some sort. Currently there is still some work needed to make use of models like those described in RapidWright:
https://www.rapidwright.io/docs/Introduction.html#what-is-rapidwright
Hiho, here are the times for the Artix implementation of mypipeline. (Yeah, I left my PC running for more than 8 hours of synthesis xD.)
And this is for Yosys+nextpnr on the LFE5UM5G-85F-8BG756C (this graph looks so broken...).
Wow, interesting that the nextpnr version came back so ...unpredictable... That doesn't look like the nextpnr results I've gotten before...
I find this paper pretty interesting on how to do static timing estimations: "Timing Driven C-Slow Retiming on RTL". It has timing models for various primitives as a function of width. And in a related article the author states "the µLN indicator can be used for fast static timing estimations", see https://www.eetimes.com/introducing-c-slow-retiming-system-hyper-pipelining/2/
I'm sending a pull request with a function that estimates delays based on operations and bit sizes.
Let's continue here @suarezvictor. You asked about where to start: I encourage you to take the work in the direction you most want to work on. Feel free to experiment with adding things.
I think getting to 1) models of basic pipelined operations, and then 2) models of 'full netlist' timing, is generally what we want. The goal: instead of asking a synthesis+PnR tool for the timing of a pipelined operation (or an entire design), there should be some kind of faster modeling that can be done... So let's focus on the smaller initial goal from 1): just predicting the timing of pipelined basic operators... For that I need to explain how pipelining works right now...
I propose that for whatever work we select, you provide the data and I try to develop the models. In the case of pipelining a single binary operation (split into smaller widths), the data should specify the operation, all the intermediate and final widths, plus the delay (or maybe, as a start, just the input widths and number of stages).
Let's consider the basics first. You can say "pipeline an operation to N stages", e.g. pipeline a 32b adder to 2 stages (16b per stage). And these are the kinds of pipeline cache values @suarezvictor is looking for. However, the tool does not really use just a single 'stages' latency number to characterize the pipeline. Consider a design with three of these adders back to back in series.
If you ask to 'pipeline that to 2 stages', i.e. split it in half - where does the split happen?
The stage split would occur halfway through the chain. So pipelining is not ultimately done on the basic operations as 'pipeline to N stages', but instead as 'pipeline the op, breaking it into these pieces/slices'. So the model I think would be most useful is one built around those slices. Right now the tool describes the chunks/slices/locations of pipelining by giving each operation instance an associated list of numbers.
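As a sketch of that representation (names and shapes are my own, not the tool's actual internals): an operator instance carries a list of slice positions rather than a single stage count, and the even-split case is just one particular list:

```python
# Hypothetical sketch: describing *where* an operation is pipelined.
# Instead of a single "N stages" number, each operator instance carries
# a list of slice positions, expressed as fractions of the operation.

def even_slices(n_stages):
    """Slice positions for an evenly pipelined operator.

    For n_stages=2 this returns [0.5]: one register boundary halfway
    through the operation. For n_stages=4: [0.25, 0.5, 0.75].
    """
    return [i / n_stages for i in range(1, n_stages)]

# A 32-bit adder pipelined to 2 stages, 16 bits per stage:
adder_slices = even_slices(2)   # -> [0.5]
# The same adder split unevenly, e.g. a 24b stage followed by an 8b stage:
uneven_slices = [0.75]
```

An uneven split like `[0.75]` is exactly the "odd example" case discussed below.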
It would not be hard to configure the tool to collect more data by doing a bunch of runs.
And then you could work with that data. Maybe the even stage slicing is all that's needed... But consider an odd example:
Do we really need to model that case specifically? So maybe, for example:
If the biggest stage is 75%, then it should have the same timing as a design "split evenly" into stages of that largest size. So maybe the big TLDR here is that the timing model can take the form of: the largest slice determines the period.
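That TLDR can be sketched as code (my reading of the idea, not the tool's actual model): the period of a sliced operator is set by its largest slice, so any uneven split maps onto a real-valued "equivalent even stage count" of 1/biggest_fraction:

```python
def equivalent_even_stages(slices):
    """Map a (possibly uneven) slice list onto a real-valued even stage count.

    Assumption from the discussion: timing is dominated by the largest
    slice. A split whose biggest stage is 75% of the operation should
    time like an even split into 1/0.75 ~ 1.33 stages.
    """
    bounds = [0.0] + sorted(slices) + [1.0]
    biggest = max(b - a for a, b in zip(bounds, bounds[1:]))
    return 1.0 / biggest

# An even 2-stage split: equivalent stage count is exactly 2.
# An uneven [0.75] split: equivalent stage count is ~1.33.
```

This is where the "real numbered num stages" mentioned later in the thread comes from.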
The data is available in the repo path_delay_cache for a few different FPGA parts. Any more data needs to be manually collected by running the synthesis+PnR tools to build the cache+models. That is something I can help you set up and run (e.g. write an adder function, tell it to pipeline to 0,1,2,3 etc. clock cycles, collect latency->fmax data, repeat for other operators, script it to repeat for all the FPGA parts, repeat for all the widths, etc.). I personally don't have time to create and maintain these models, so I am leaving it up to whoever wants to contribute.
My proposal is that you throw all the data at the model, and the model can then simplify the data before estimation. In the current state, for example, when you specify the widths of two operands, I simplify by just taking the greater one.
In regards to who will maintain the models, I'm inclined to do it. I'll ask you for the data. You can use whatever you already have, or try to create more. Let's agree on the interfaces ;)
Yeah, I will help create a cache of pipelined operators that can be used to do this.
But making the model is up to you.
I can live with those real-numbered "num stages". It seems I prefer it as a fraction of 1, let's say 1/N (N = number of stages).
OK, as discussed there will now be cached files of period values for pipelined built-in functions. The paths encode the tool, part, operation, and slicing - meaning that, using Vivado synthesis estimates for a 100t part, a plus operation sliced a certain way was seen at a certain period. Note that this simply records that 'a plus operation sliced that way was seen in a design working at frequency=F (period=P)'. It is not recording specific measurements of the specific operator, but instead a ~guess based on the design the operator exists in.
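To make the shape of such a cache concrete, here is a hypothetical sketch (the keys, file format, and numeric values are all invented for illustration - the real cache lives in the repo's path_delay_cache): each record says "this op, sliced this way, was observed working at period P", and a lookup can fall back to the nearest cached width:

```python
# Hypothetical in-memory view of a pipeline period cache.
# Keys and period values are illustrative placeholders only.
path_delay_cache = {
    # (part, op, width, slices) -> observed period in ns
    ("xc7a100t", "BIN_OP_PLUS", 32, (0.5,)): 6.7,
    ("xc7a100t", "BIN_OP_PLUS", 32, ()): 10.2,
}

def estimate_period(part, op, width, slices):
    """Return a cached period, falling back to the nearest cached width."""
    key = (part, op, width, tuple(slices))
    if key in path_delay_cache:
        return path_delay_cache[key]
    # Nearest-width fallback among entries for the same part/op/slicing.
    candidates = [
        (abs(w - width), period)
        for (p, o, w, s), period in path_delay_cache.items()
        if p == part and o == op and s == tuple(slices)
    ]
    return min(candidates)[1] if candidates else None
```

Note the fallback mirrors the "~guess" caveat above: a nearby width's period is only a rough stand-in.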
Issue #48 will also very much be interested in that data as it builds up.
@suarezvictor, the first data points for the pipeline period cache have been uploaded. That cache is what results from building the full RT 1080p design targeting 148MHz... so in theory, if we can model this cache, we could get to the final RT pipeline faster - have to think about how best to use the models...
Seems like good progress.
It will increase in reliability over time as more data points are used to update the cache (it likely isn't very accurate to start). Or if more specific tests are intentionally done (e.g. a design of just a single BIN_OP_MINUS_int22_t_int22_t set to
Let's assume it's useful data.
I may not understand the data. When we have, let's say, a 22+22 bit add, and you register a 0.5 factor, does it mean you made a 2-stage pipeline of 11 bits each? If you used 11 bits, we'll need that number. We need to know what the synthesizer was asked and what it returned. Maybe if you can illustrate an example with VHDL code it would be even clearer.
How do we round up if the operands don't have the same size?
Generally, for reference, these questions about exactly what is seen at the lowest level might be getting towards 'what LUTs is this?' #45
Also @suarezvictor, I encourage you, if you want good data for a specific operator, to write a little demo function of just that operator and do a sweep.
I'll use the algorithm of taking the widest operand and rounding upwards with the factor, until we can do it in a cleaner way (i.e. just saving the actual operand sizes).
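That interim algorithm is small enough to write down directly (a sketch of the rule as stated, with the function name invented here): take the widest operand, scale by the slice factor, round up:

```python
import math

def stage_width(op_widths, factor):
    """Per-stage width estimate: widest operand scaled by the slice
    fraction, rounded up.

    E.g. a 22+22 bit add with factor 0.5 -> ceil(22 * 0.5) = 11 bits
    per stage; a 23+8 bit add with factor 0.5 -> ceil(11.5) = 12 bits.
    """
    return math.ceil(max(op_widths) * factor)
```

This answers the rounding question above for mismatched operand sizes, at the cost of ignoring the narrower operand entirely.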
Re: stuff like building our own ~nextpnr - or just a timing estimate tool, etc. Very cool to see ECP5 delays documented here from
I don't get it... 5 years ago?
I was noting that in that
You say before PnR?
In theory, yeah - but it's almost recreating PnR in difficulty - it's still a big project to undertake. I was just noting that I now know where the raw timing data lives, giving some more scope to the problem.
Discussed in #42
Originally posted by bartokon November 9, 2021
Hi,
I'm wondering if it is possible to create a fake part with fake timings, so the whole place and route could take seconds (cached synthesis P&R?). That part should have unlimited resources, and each path delay could be, for example, 1.5*logic_delay. The logic delay could be an average across different FPGA families. All the latency-finding algorithms could then iterate much faster...
For example, generate a netlist from the pipeline-parsed code, then generate fake timing paths depending on logic depth and wire length.
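A minimal sketch of that fake-part idea (the constants here are illustrative stand-ins, not measured values): skip real place and route entirely and charge each netlist path a delay proportional to its logic depth:

```python
# Sketch of the "fake part" proposal: unlimited resources, and every
# path costs 1.5 * logic_delay per logic level. Constants are invented
# placeholders, not real device data.

AVG_LUT_DELAY_NS = 0.5   # assumed average logic delay per LUT level
FAKE_PART_FACTOR = 1.5   # per the proposal: path delay = 1.5 * logic delay

def fake_path_delay(logic_depth):
    """Estimated delay (ns) for a path with the given number of logic levels."""
    return FAKE_PART_FACTOR * logic_depth * AVG_LUT_DELAY_NS

def fake_fmax_mhz(max_logic_depth):
    """Estimated achievable clock (MHz) from the deepest path in the netlist."""
    return 1000.0 / fake_path_delay(max_logic_depth)
```

A wire-length term could be added to `fake_path_delay` later, as the proposal suggests, once the netlist carries any placement-like information.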