Hi everyone,
I have a question about the different results of specifying clock skew vs. specifying clock latency in SDC.
As pointed out by Rysc in his popular manual, the TimeQuest User Guide, and also in other posts (for example, http://www.alteraforum.com/forum/sho..._clock_latency), set_clock_latency serves the same purpose as the clock skew we fold into set_input_delay/set_output_delay constraints. My understanding is that the two are equivalent for timing analysis: if we apply them to the same timing netlist after compilation, TimeQuest reports the same STA result.
But I find they have different effects when used as timing constraints to compile a design.
In my design, the clock signal from an on-board oscillator goes to both an ADC and the FPGA, and the output data from the ADC go to the FPGA. The clock is estimated to arrive at the FPGA 0.5 ns later than it arrives at the ADC.
So, taking the clock at the ADC as the reference point and creating a virtual clock, I can constrain the input delay from the ADC to the FPGA with either of the following two methods:
Method (1):
set ADCLK_skew 0.5
set_input_delay -clock ADCLK_virt [expr $DATA_delay - $ADCLK_skew + $Tco_ADC]
Method (2):
set_clock_latency -source 0.5 [get_ports {ADCLK}] ;# ADCLK is the clock input port of the FPGA
set_input_delay -clock ADCLK_virt [expr $DATA_delay + $Tco_ADC]
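For context, here is a minimal SDC sketch of the two alternatives. The clock period, the data port group adc_data[*], and the values of DATA_delay and Tco_ADC are placeholders I made up for illustration; the real values would come from the oscillator, the ADC datasheet, and the board trace delays:

# Common: the clock at the ADC (reference point, virtual) and at the FPGA clock pin
create_clock -name ADCLK_virt -period 10.000
create_clock -name ADCLK -period 10.000 [get_ports {ADCLK}]

set DATA_delay 1.0   ;# board trace delay of the ADC data lines (assumed)
set Tco_ADC    3.0   ;# clock-to-output delay of the ADC (assumed)

# Method (1): fold the 0.5 ns board clock skew into the input delay
set ADCLK_skew 0.5
set_input_delay -clock ADCLK_virt [expr $DATA_delay - $ADCLK_skew + $Tco_ADC] [get_ports {adc_data[*]}]

# Method (2): model the 0.5 ns as source latency on the FPGA clock port
set_clock_latency -source 0.5 [get_ports {ADCLK}]
set_input_delay -clock ADCLK_virt [expr $DATA_delay + $Tco_ADC] [get_ports {adc_data[*]}]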
I attached the TimeQuest results of the two timing netlists generated by these two methods.
As expected, in the STA report for the timing netlist generated by method (2), both the input data delay and the clock source latency are 0.5 ns larger than in method (1), and they cancel out in the setup slack calculation.
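To make the cancellation concrete, with illustrative numbers (not taken from the attached reports): assume a 10 ns capture edge, 2.0 ns clock network delay, 0.1 ns register Tsu, DATA_delay = 1.0 and Tco_ADC = 3.0.

Method (1): data arrival = 0 + (1.0 - 0.5 + 3.0) = 3.5; data required = 10 + 2.0 - 0.1 = 11.9; slack = 8.4
Method (2): data arrival = 0 + (1.0 + 3.0) = 4.0; data required = 10 + 0.5 + 2.0 - 0.1 = 12.4; slack = 8.4

The extra 0.5 ns shows up in both the arrival and the required time, so the slack is unchanged, as long as both methods are analyzed on the same placed-and-routed netlist.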
But method (1) reports a better timing result, with a larger setup slack (2.135). The full path details show that some of the logic is placed in different locations under the two methods (FF1_X1_Y8_N21 vs. FF_X2_Y13_N9), which causes the differences in the data arrival and data required path timing.
So I think these two methods do drive the compiler differently when used as timing constraints for a design, and they are not 100% equivalent.
Can anybody give a further explanation?
Thank you!