Working on updating the MHZ100Q project, and one significant design issue is I/O delay. To review, the Xilinx Spartan-3A FPGA generates a 100 MHz clock using an internal DCM block, and this clock drives an external 100 MHz A/D converter. The total delay through the FPGA output buffer, the A/D converter's clock-to-output path, and the FPGA input buffer adds up to 9 to 16 ns, while the clock cycle time is only 10 ns. So aligning the clock and data at the input latch is tricky.
Per the A/D data sheet, the value on the output pins is stable for about 5 ns (minimum), and the FPGA data sheet says the latches need about 1 ns to capture the value (setup plus hold time around the clock edge). So we have about a 4 ns window in which everything will work right.
It's hard to measure the actual offset at the internal latch input of the FPGA, so for initial setup I would like to be able to adjust the phasing of the A/D clock relative to the internal latch clock over the full 10 ns range.
The first option is to use the built-in input delays. The *.ucf file supports the per-pin constraints IFD_DELAY_VALUE and IBUF_DELAY_VALUE, which put a selectable amount of delay between the input pin and the logic (the former applies when using the latch built into the I/O block, and the latter applies when the I/O block is used only as a buffer). The total adjustment range is about 2 ns, which might be enough.
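For reference, a rough sketch of what those entries might look like in the *.ucf file. The net names and tap values here are placeholders, not the actual MHZ100Q pin assignments; the legal values and the delay per tap are spelled out in the Xilinx constraints guide and the Spartan-3A data sheet.

# Placeholder sketch -- net names and tap values are hypothetical.
# IFD_DELAY_VALUE applies when the IOB input flip-flop captures the data;
# IBUF_DELAY_VALUE applies when the IOB is used only as a buffer.
NET "adc_data<0>" IFD_DELAY_VALUE = 4;
NET "adc_data<1>" IFD_DELAY_VALUE = 4;
# ... one entry per A/D data pin
NET "adc_otr"     IBUF_DELAY_VALUE = 4;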
The next step up in complexity is to use the Digital Clock Manager (DCM) to adjust the phase of the generated clock. The DCM can produce essentially any clock phase, but for this application I think I can just use its four quadrature outputs. That gives me adjustment points at 0, 2.5, 5, and 7.5 ns, and combined with IFD_DELAY_VALUE I can get within 0.5 ns of any required timing offset.
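A rough Verilog sketch of that arrangement follows. The module and net names are hypothetical, and it assumes the 100 MHz clock is already available at the DCM input (from the oscillator or an earlier DCM stage); a DCM_SP provides the four quadrature phases through BUFGs, the internal latches stay on the 0-degree clock, and a build-time parameter picks which phase goes to the A/D converter.

// Sketch only -- module and net names are not from the MHZ100Q sources.
// A DCM_SP gives four quadrature phases of the 100 MHz clock; PHASE_SEL
// picks which phase drives the A/D converter, giving the 0 / 2.5 / 5 /
// 7.5 ns coarse steps described above.
module adc_clk_phase #(
    parameter PHASE_SEL = 0          // 0, 1, 2, 3 -> 0, 2.5, 5, 7.5 ns
) (
    input  wire clk100_in,           // 100 MHz clock (assumed already available)
    input  wire rst,
    output wire clk_latch,           // clock for the internal capture latches
    output wire clk_adc,             // phase-adjusted clock for the A/D converter
    output wire locked
);
    wire clk0, clk90, clk180, clk270;
    wire clk0_buf, clk90_buf, clk180_buf, clk270_buf;

    DCM_SP #(
        .CLKIN_PERIOD (10.0),        // 100 MHz input
        .CLK_FEEDBACK ("1X")
    ) dcm_phase (
        .CLKIN    (clk100_in),
        .CLKFB    (clk0_buf),        // feedback from the buffered CLK0
        .RST      (rst),
        .PSEN     (1'b0),            // dynamic phase shift unused
        .PSINCDEC (1'b0),
        .PSCLK    (1'b0),
        .DSSEN    (1'b0),
        .CLK0     (clk0),
        .CLK90    (clk90),
        .CLK180   (clk180),
        .CLK270   (clk270),
        .LOCKED   (locked)
    );

    BUFG bufg0   (.I(clk0),   .O(clk0_buf));
    BUFG bufg90  (.I(clk90),  .O(clk90_buf));
    BUFG bufg180 (.I(clk180), .O(clk180_buf));
    BUFG bufg270 (.I(clk270), .O(clk270_buf));

    // Internal logic and the input latches stay on the 0-degree clock.
    assign clk_latch = clk0_buf;

    // Build-time selection of the A/D clock phase.
    assign clk_adc = (PHASE_SEL == 1) ? clk90_buf  :
                     (PHASE_SEL == 2) ? clk180_buf :
                     (PHASE_SEL == 3) ? clk270_buf : clk0_buf;
endmodule

Picking the phase with a parameter keeps the clocking static; each trial setting is just a rebuild. In the real design the selected phase would typically be forwarded to the A/D clock pin through a DDR output register rather than driven straight off a global buffer.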