Hi Don,

This device was released long ago, so it is hard to find many clues... The designer believes the concern was this: if the CONV edge moves away from the CLK rising edge, then for a fixed CONV period the internal number of CLK cycles per integration can fluctuate by +/-1. Every CLK edge contributes a small disturbance, so if the number of edges changes from one integration to the next, you end up with small sample-to-sample differences.

Also, I seem to remember that on newer devices like the DDC264 this requirement was actually not respected on the EVM: depending on power-up/FPGA start-up, the CONV edge could line up with either a rising or a falling edge of CLK. When that happened, there was no noticeable difference in performance. Still, those are newer devices, and the effect described above could be a non-issue on them while still mattering for the DDC118, so we can't say 100%.

My feeling is that I doubt the customer will see a difference between using 12 ns and being within the 10 ns spec, but the only way for them to know is to try it out...
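To make the +/-1 effect concrete, here is a quick back-of-the-envelope sketch (my own illustration, not TI code; the 10 MHz CLK and the CONV period below are made-up values, not DDC118 datasheet numbers). It counts how many CLK rising edges land inside each CONV window when CONV is not locked to a CLK rising edge, so its edges drift through the CLK phase:

```python
# Illustrative only: made-up CLK/CONV numbers, not DDC118 datasheet values.
# Times are kept in integer picoseconds to avoid floating-point edge cases.
T_CLK_PS = 100_000         # assumed 10 MHz CLK -> 100 ns period
T_CONV_PS = 1_000_530_000  # assumed CONV period = 10005.3 CLK cycles (not an
                           # integer multiple, so CONV drifts relative to CLK)

def ceil_div(a: int, b: int) -> int:
    """Ceiling division for non-negative integers."""
    return -(-a // b)

def clk_edges_in_window(start_ps: int, period_ps: int = T_CONV_PS) -> int:
    """Count CLK rising edges (at t = n * T_CLK_PS) in [start, start + period)."""
    first = ceil_div(start_ps, T_CLK_PS)
    last = ceil_div(start_ps + period_ps, T_CLK_PS)
    return last - first

counts = [clk_edges_in_window(i * T_CONV_PS) for i in range(12)]
print(counts)
# -> [10006, 10005, 10005, 10006, 10005, 10005, 10006, 10005, 10005, 10005, 10006, 10005]
# The per-integration CLK count wobbles by +/-1; each extra or missing edge
# adds or removes one small disturbance, giving sample-to-sample differences.
```

If CONV were instead retimed so that every edge lands on a CLK rising edge and the period is an exact number of CLK cycles, the count would be identical on every integration.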
Regards,
Edu