Makesh,

1. The PGA is a programmable gain amplifier. It allows for the measurement of smaller signals so that the resolution is better. For a measurement using a 2.5V reference:
PGA=1: Full-scale range = ±2.5V, LSB size ≈ 0.3uV
PGA=128: Full-scale range = ±19.53mV, LSB size ≈ 2.3nV
Note that the negative full-scale range is differential and just refers to an input where AINN is higher than AINP (for example, AINN=3.5V and AINP=1.5V means that the input is -2V). There will be some noise dependent on the gain and decimation ratio, which you can calculate from the typical characteristic curves in Figures 1 through 7.

2. As the ADS1218 powers up, there are internal registers that are read at startup; these are programmed at our final test to trim out errors in the device. Therefore, at startup there will be some extra supply current needed as these registers are loaded and set.

3. The maximum output data rate of 1kHz refers to the output data of the ADC readings. It is more appropriate to call this the number of samples per second of output data. The fastest digital output from the DOUT pin is different. This is based on the maximum speed at which DOUT can be clocked by SCLK. You can find this information in the Timing Specification Table, listed as the minimum SCLK period. For this device the minimum SCLK period is 4 tosc periods. In this case, the ADS1218 running at 2.4576MHz has a minimum SCLK period of 1.627us, or a maximum SCLK rate of about 614kHz.

4. I believe that you should only run a SELFCAL and SELFGCAL at PGA=1. However, I think you can run a SELFOCAL at any PGA. SELFOCAL is basically a measurement of the offset of the ADC, which should scale with the PGA gain. Once the offset is measured with SELFOCAL, that offset is subtracted from future measurements. However, anything that involves a gain calibration expects a positive full-scale input at the time of calibration. If there is gain (PGA > 1), I don't think the self-calibration commands involving gain will work.

a.
Correct, if the PGA=128, the self calibration won't work.

b. At PGA=128 and a reference of 2.5V, the measurement will be in the range of ±19.53mV. Voltages larger than this will appear as a positive full-scale reading of 7FFFFFh (or a negative full-scale reading of 800000h for negative over-voltages).

c. The self calibration is used to remove offset and gain error within the device. The offset calibration removes any offset that appears from the input multiplexer/PGA/ADC. The gain calibration removes any gain error that the ADC sees from the input channel and the comparison with the reference input channel. For system calibrations, imagine that you have an external amplifier. This will have its own gain and offset error. You can use the system calibrations to calibrate out those errors. For the system offset calibration, you would short the input of your external amplifier to use as the ADC's measured offset. Then for the system gain calibration, you would apply what your system considers the full-scale measurement. Note that for the system calibrations, you should first ensure that the gain calibration is moderately close, then perform the system offset calibration, and then perform the gain calibration. If the gain error is extremely large at the start, the offset calibration will be off as well. If you have a smaller gain error to start, then the offset calibration will be much more accurate.

5. I mentioned this in 1), but the analog input pins should not see a negative voltage. If AVDD=5V and GND=0, then the AIN pins must be between 0V and 5V. If any pin goes outside this range by more than 0.3V (outside -0.3V and 5.3V), there may be damage to the device. As a differential input for the ADC, the negative measurements come when AINN is higher than AINP.

6. I don't think we have any software tools for this device. There had been an ADS1218EVM, but it was obsoleted many years ago.

7. The ADS1218 is a delta-sigma (or oversampling) type of ADC.
That means that the ADC uses many samples of the input to produce one output ADC data word that you can read. The ratio of the number of input samples it takes to create one ADC data word is known as the oversampling ratio (or decimation ratio). Most of the definitions are given at the end of the ADS1218 datasheet, but I'll summarize them here:

fosc is the oscillator clock frequency. The typical is 2.4576MHz, with a maximum of 5MHz.
fmod is the modulator frequency. Generally this is the frequency at which the input is sampled. This frequency is fmod=fosc/128 or fosc/256, depending on the SPEED bit in the configuration register. However, for higher gains, the input is sampled faster than fmod.
fdata is the output data rate. This is the rate at which the ADC puts out a measurement reading.
The decimation ratio is the ratio between fmod and fdata. I would note that there may be

b. The buffer is just a unity-gain buffer and is used to increase the input impedance of the ADC (reducing the loading on the signal source); the downside to the buffer is that it limits the input range for the analog inputs. This is listed in the electrical characteristics table in the datasheet.

Hopefully this answers your questions about the ADS1218. Out of curiosity, what are you measuring and how did you settle on this device? This is an older device, and while it is fine for use and is still a popular device, there are other devices with better specifications and more features than this one. If you do settle on this device for your system, feel free to post a schematic for review. There are always plenty of details to consider in constructing a system with a precision ADC, and it's best to have the schematic reviewed.

Joseph Wu
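As a quick numeric sketch of the full-scale ranges and LSB sizes discussed in 1) and 4b) above (assuming the ADS1218's 24-bit output code and a 2.5V reference; the helper name is my own, not from any TI library):

```python
# Full-scale range and LSB size for a 24-bit delta-sigma ADC such as the
# ADS1218, given a reference voltage and PGA gain (illustrative sketch).

def fsr_and_lsb(vref=2.5, pga=1, bits=24):
    """Return (full-scale range in volts, LSB size in volts)."""
    fsr = vref / pga                # input range is +/-fsr (differential)
    lsb = (2 * fsr) / (1 << bits)  # total span divided by 2^bits codes
    return fsr, lsb

for pga in (1, 128):
    fsr, lsb = fsr_and_lsb(pga=pga)
    print(f"PGA={pga:3d}: FSR = +/-{fsr * 1e3:.2f} mV, LSB = {lsb * 1e9:.2f} nV")
```

For PGA=1 this gives ±2.5V with an LSB of roughly 0.3uV; for PGA=128, ±19.53mV with an LSB of roughly 2.3nV.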
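The SCLK arithmetic from 3) above can be sketched the same way (the 4 x tosc minimum comes from the timing table; the function name is mine):

```python
# Minimum SCLK period and maximum SCLK rate from the oscillator frequency,
# using the 4 x tosc minimum SCLK period from the ADS1218 timing table.

def min_sclk_period(fosc_hz, tosc_periods=4):
    """Return the minimum SCLK period in seconds."""
    return tosc_periods / fosc_hz

fosc = 2.4576e6                       # typical oscillator frequency, Hz
t_sclk = min_sclk_period(fosc)
print(f"min SCLK period = {t_sclk * 1e6:.3f} us")    # ~1.627 us
print(f"max SCLK rate   = {1 / t_sclk / 1e3:.1f} kHz")  # ~614.4 kHz
```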
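The clock relationships in 7) can be sketched as follows (SPEED-bit divider and decimation ratio as described above; the decimation value below is an illustrative example, not a register setting):

```python
# Relationship between fosc, fmod, fdata, and the decimation ratio for a
# delta-sigma ADC like the ADS1218 (sketch; example numbers only).

def fmod_hz(fosc_hz, speed_bit=0):
    """Modulator frequency: SPEED bit selects fosc/128 or fosc/256."""
    return fosc_hz / (256 if speed_bit else 128)

def fdata_hz(fosc_hz, decimation, speed_bit=0):
    """Output data rate: one output word per 'decimation' modulator samples."""
    return fmod_hz(fosc_hz, speed_bit) / decimation

fosc = 2.4576e6
print(f"fmod  = {fmod_hz(fosc):.0f} Hz")                     # 19200 Hz, SPEED=0
print(f"fdata = {fdata_hz(fosc, decimation=1920):.1f} Hz")   # example: 10 SPS
```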