I know there is a lot of discussion on this board regarding timing constraints, but I couldn't find this topic well addressed.
I have a system where my FPGA interfaces to an ADC over a SPI-like master interface, and I am struggling a bit with how to properly constrain this I/O. For the purposes of this discussion, assume the interface consists of SCLK (out), CS (out), MOSI (out), and MISO (in). I believe the CS and MOSI outputs should be treated as source-synchronous signals with respect to SCLK for setup and hold analysis. What I'm less sure about is how the MISO input should be treated: I know I need to create a virtual clock to launch this signal, with the internal SCLK acting as my latch clock.
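In case it helps frame the question, my understanding of the textbook pattern is roughly the following (all names here are placeholders, not from my actual design, and the delay numbers just echo the ADC tco values from my SDC below):

# Sketch of my understanding only -- names and values are placeholders.
# The clock that physically leaves the FPGA on the SCLK pin:
create_generated_clock -name sclk_out -source [get_ports {clk_in}] [get_ports {sclk}]
# A virtual clock representing SCLK as the ADC sees it (no target = virtual).
# 30.518 ns is one period at 32.768 MHz:
create_clock -name sclk_virt -period 30.518
# MISO launched by the virtual clock, then captured internally:
set_input_delay -clock sclk_virt -max 25.0 [get_ports {miso}]
set_input_delay -clock sclk_virt -min 10.0 [get_ports {miso}]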
To further complicate things, my setup contains several buffer stages (for reasons I won't go into here) that can introduce skew between the signals.
I have attached a diagram of my basic setup.
(Attached: SPI Data Path.jpg)
This is what I have so far in my SDC:
# base clocks
create_clock -name data_clk -period "32.768 MHz"; # main clock
create_clock -name data_io_clk -period "32.768 MHz"; # latch clock for MOSI and CS
create_clock -name data_io_clk_ext -period "32.768 MHz"; # virtual clock for launch of MISO
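# NOTE: as written, none of these create_clock commands has a target object,
# which (as I understand it) makes all three of them virtual clocks. I suspect
# data_clk and data_io_clk should target an actual port or PLL output, but I'm
# not sure what the right form of that is for my setup.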
derive_pll_clocks
derive_clock_uncertainty
#------------------------------------------------------------------------------
# Component timing parameters (all values in ns)
#------------------------------------------------------------------------------
# ISO_BUF
set iso_buf_tsk 6.5
# CLK_BUF
set clk_buf_tsk 0.05
set clk_buf_tpd_min 0.8
set clk_buf_tpd_max 2.0
# DAT_BUF
set dat_buf_tsk 1.0
set dat_buf_tpd_min 1.5
set dat_buf_tpd_max 5.1
# ADC
set adc_tsu 5.0
set adc_th 5.0
set adc_cto_min 10.0
set adc_cto_max 25.0
#------------------------------------------------------------------------------
#------------------------------------------------------------------------------
# FPGA <-> ADC SPI Interface Constraints
#------------------------------------------------------------------------------
set adc_in_min [expr {$adc_cto_min}]
set adc_in_max [expr {$adc_cto_max}]
set adc_out_min [expr {-($clk_buf_tsk + $iso_buf_tsk + ($clk_buf_tpd_max - $dat_buf_tpd_min) + $adc_th)}]
set adc_out_max [expr {$clk_buf_tsk + $iso_buf_tsk + ($dat_buf_tpd_max - $clk_buf_tpd_min) + $adc_tsu}]
set_input_delay -min -clock { data_io_clk_ext } $adc_in_min [get_ports adc_miso\[*\]]
set_input_delay -max -clock { data_io_clk_ext } $adc_in_max [get_ports adc_miso\[*\]]
set_output_delay -min -clock { data_io_clk } $adc_out_min [get_ports { adc_cs \
adc_mosi\[*\]}]
set_output_delay -max -clock { data_io_clk } $adc_out_max [get_ports { adc_cs \
adc_mosi\[*\]}]
#------------------------------------------------------------------------------
My basic strategy here was to analyze these signals for skew only, since board delays should be essentially negligible given that the signals are routed together as a group. So my main concern is that I meet the tsu and th of the ADC, and that MISO meets the tsu and th of the FPGA. The overall tpd will be more than one clock cycle, but I can handle that in logic using delay states.
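(As an aside, my understanding is that the alternative to delay states would be telling the analyzer about the extra cycle with a multicycle exception, something like the sketch below -- the multiplier is a guess, not a value from my design:)

# Guess at a multicycle exception for the slow round trip; the setup
# multiplier (2) would depend on the actual round-trip delay in SCLK periods.
set_multicycle_path -setup -end -from [get_clocks data_io_clk_ext] 2
set_multicycle_path -hold -end -from [get_clocks data_io_clk_ext] 1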
I am assuming I need to somehow account for the skew that the buffers introduce on the clock path relative to the MISO signal, but I am having trouble understanding what that would look like.
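My best guess is that it would fold into the MISO input delay values, something like the sketch below (this assumes SCLK reaches the ADC through CLK_BUF and MISO returns through DAT_BUF, per my diagram -- I'm not certain whether the skew terms belong in here as well):

# Speculative version of the MISO input delays with buffer delays folded in:
# max arrival = slowest clock path + slowest ADC tco + slowest return path
# min arrival = fastest clock path + fastest ADC tco + fastest return path
set adc_in_max [expr {$clk_buf_tpd_max + $adc_cto_max + $dat_buf_tpd_max}]
set adc_in_min [expr {$clk_buf_tpd_min + $adc_cto_min + $dat_buf_tpd_min}]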
Also, for my outputs: when I compile this and look at it in TimeQuest, the tco of the FPGA is ~10 ns, but when I actually run this configuration on hardware the tco looks more like 5 ns. To absorb the potential skew introduced by the system, my minimum tco needs to be greater than 12.05 ns (the magnitude of adc_out_min) and my maximum tco must be less than 15.85 ns (adc_out_max). Does anyone have an idea what I am doing wrong here and what I should change to achieve this?
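For reference, here is where those two numbers come from, using the values in my SDC above:

# |adc_out_min| = clk_buf_tsk + iso_buf_tsk + (clk_buf_tpd_max - dat_buf_tpd_min) + adc_th
#               = 0.05 + 6.5 + (2.0 - 1.5) + 5.0 = 12.05 ns
# adc_out_max   = clk_buf_tsk + iso_buf_tsk + (dat_buf_tpd_max - clk_buf_tpd_min) + adc_tsu
#               = 0.05 + 6.5 + (5.1 - 0.8) + 5.0 = 15.85 ns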
Thanks in advance!