Re: [PATCH 2/2] iio: adc: Add driver for TI ADS1100 and ADS1000 chips

From: Mike Looijmans
Date: Sun Feb 26 2023 - 03:26:22 EST



Met vriendelijke groet / kind regards,

Mike Looijmans
System Expert


TOPIC Embedded Products B.V.
Materiaalweg 4, 5681 RJ Best
The Netherlands

T: +31 (0) 499 33 69 69
E: mike.looijmans@xxxxxxxxxxxxxxxxx
W: www.topic.nl

Please consider the environment before printing this e-mail
On 25-02-2023 18:01, Jonathan Cameron wrote:
On Sat, 25 Feb 2023 12:03:05 +0100
Mike Looijmans <mike.looijmans@xxxxxxxx> wrote:

Comments below - mailserver has a top-post fetish and will inject a
signature here somewhere...

No further comment from me means "agree, will implement in v2"...
One 'nice to have' when replying where you have chunks like that
is to just crop them out so it's easier to spot the interesting bits.

I've done that below.

On 18-02-2023 17:48, Jonathan Cameron wrote:
On Fri, 17 Feb 2023 10:31:28 +0100
Mike Looijmans <mike.looijmans@xxxxxxxx> wrote:
The ADS1100 is a 16-bit ADC (at 8 samples per second).
The ADS1000 is similar, but has a fixed data rate.

Signed-off-by: Mike Looijmans <mike.looijmans@xxxxxxxx>
Hi Mike,

I probably overlapped a lot in here with what Andy said, so just ignore
anything you've already fixed.

Biggest thing is HARDWAREGAIN.
That is very rarely used with ADCs. It exists to support cases where
the gain is not directly coupled to the thing being measured (light
sensitivity of a time of flight sensor for example). Userspace expects
to use SCALE to both control amplifier gains and to multiply with
the _raw value to get a value in the real world units.

It's a bit fiddly to do the computation, but typically at start up time
you work out what the combination of each PGA gain and the reference
voltage means for the scales available and stash that somewhere to use later.
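For illustration only, a rough sketch of what that start-up computation could
look like (field names such as reg_vdd and scale_avail are made up here, not
taken from the patch):

static const int ads1100_gains[] = { 1, 2, 4, 8 };

/* Sketch: precompute the available scales once at probe time. */
static int ads1100_calc_scales(struct ads1100_data *data)
{
	int i, vref_uv;

	/* The ADS1100 uses its supply as the reference voltage. */
	vref_uv = regulator_get_voltage(data->reg_vdd);
	if (vref_uv < 0)
		return vref_uv;

	/*
	 * scale = Vref / (gain * 2^15), stored as (mV, shift) pairs
	 * suitable for IIO_VAL_FRACTIONAL_LOG2.
	 */
	for (i = 0; i < ARRAY_SIZE(ads1100_gains); i++) {
		data->scale_avail[2 * i] = vref_uv / 1000 / ads1100_gains[i];
		data->scale_avail[2 * i + 1] = 15;
	}

	return 0;
}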
A complicating factor with this ADC is that the final gain value depends
on the sampling rate as well as on the power supply voltage, which would
lead to the "available" list being different after changing the sample
rate and confusing userspace. If a userspace app read the available
sample rates and gains and then proceeded to set them, the results would
be odd, as the order in which they were set would matter: setting the
gain after setting the sample rate would likely fail with EINVAL because
the selected gain is no longer applicable.

To me it seemed more logical to provide the analog amplification and
sample rate as separate, unrelated values.
It may be logical, but it isn't how the IIO ABI has ever worked and it doesn't
extend to more complex cases. It's in general true that a PGA will result
in a change to the scale that userspace needs to apply. There are devices
where it doesn't. There are lots of things that at first glance 'could'
affect scale but often do so in complex, non-linear ways that userspace
simply can't know about - hence if we are pushing calculations into userspace
we need it to just be (_raw + _offset) * _scale.
Note that there is some wiggle room with how raw "raw" is.
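As a worked example (numbers invented for illustration): with
in_voltage_raw = 16384, in_voltage_offset = 0 and in_voltage_scale =
0.0625, userspace computes (16384 + 0) * 0.0625 = 1024 mV, without
needing to know anything about the PGA setting or the sample rate.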

There are two solutions that have been taken in drivers.
1) Accept that the above software flow is broken, as any ABI write in IIO
is allowed to affect other ABI elements. This is less than ideal though.
2) Let the scale free float. So the attempt is to keep as close as possible
to the set value as other things change (i.e. the sampling frequency).
The scale_avail reflects current settings of everything else, and indeed
changes with other ABI writes (this is quite common), but the interface is
made more intuitive by matching as closely as possible. Thus if you change the
sampling rate and the scale changes then you attempt to modify the PGA
to keep it approximately the same. Obviously it clamps at end points
but nothing we can do about that.
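Purely as an illustration of option 2 (ads1100_scale_uv() and
ads1100_set_gain() are hypothetical helpers, not from the patch):

/*
 * Sketch: after a sample-rate change, pick the PGA setting whose
 * resulting scale is closest to the scale that was in use before.
 */
static void ads1100_refloat_scale(struct ads1100_data *data, int old_scale_uv)
{
	int i, best = 0, best_err = INT_MAX;

	for (i = 0; i < ARRAY_SIZE(ads1100_gains); i++) {
		int err = abs(ads1100_scale_uv(data, i) - old_scale_uv);

		if (err < best_err) {
			best_err = err;
			best = i;
		}
	}

	ads1100_set_gain(data, best);	/* hypothetical register update */
}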

However, having said that, I don't 'think' we need either here...
A common thing to do for scale vs sampling rate (which is typically
either oversampling based, or based on another SAR cycle) is to just shift
the raw data to incorporate it - a common sensor design is to justify it
so that the unused bits are the LSBs - so it may be a case of just not
shifting it to compensate for the data-rate change. That's not true here
if I read the datasheet correctly, but a simple

	sysfs raw read value == raw_value << (16 - bits for data rate)

should fix that.
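As a sketch of what that might look like in the read path
(ads1100_data_bits() and ads1100_get_adc_result() are hypothetical
helpers; the former would return 12/14/15/16 for the 128/32/16/8 SPS
settings):

/*
 * Sketch: left-justify the result so the per-LSB scale no longer
 * depends on the selected data rate.
 */
static int ads1100_read_raw_voltage(struct ads1100_data *data, int *val)
{
	s16 raw;
	int ret;

	ret = ads1100_get_adc_result(data, &raw);	/* hypothetical I2C read */
	if (ret < 0)
		return ret;

	*val = raw << (16 - ads1100_data_bits(data));

	return IIO_VAL_INT;
}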

I agree, a bit of shifting (pun intended) is by far the better solution.

Interestingly, the datasheet argues they deliberately right-aligned and
sign-extended the result to allow oversampling without shifts, though the
counter-argument is that they made everyone who wants a real scale apply
a shift. I guess it depends on the expected use case.

My guess is that the chip internally always runs at 128 Hz and simply adds
the sampled values together in a 16-bit register for the lower sampling
rates. Someone came up with that datasheet text later on.

+
+static const struct iio_chan_spec ads1100_channel = {
+ .type = IIO_VOLTAGE,
+ .differential = 0,
+ .indexed = 0,
+ .info_mask_separate = BIT(IIO_CHAN_INFO_RAW),
+ .info_mask_shared_by_all =
+ BIT(IIO_CHAN_INFO_SCALE) |
+ BIT(IIO_CHAN_INFO_HARDWAREGAIN) |
+ BIT(IIO_CHAN_INFO_SAMP_FREQ),
+ .info_mask_shared_by_all_available =
+ BIT(IIO_CHAN_INFO_HARDWAREGAIN) |
Hardware gain is normally only used for cases where the gain is not
directly related to scale. Things like the gain on a signal that is
being used for measuring time of flight. Here you should probably
just be using SCALE.
In this case, SCALE depends on SAMP_FREQ as well as GAIN, which will be
very confusing to userspace.

Hoping for some leeway here...
Sorry no. Though I think applying the shift as mentioned above deals
with your data rate dependent scaling and makes this all a lot easier.

True.
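
For what it's worth, the channel description above might then end up
looking roughly like this (HARDWAREGAIN dropped in favour of SCALE; just
a sketch, not tested):

static const struct iio_chan_spec ads1100_channel = {
	.type = IIO_VOLTAGE,
	.info_mask_separate = BIT(IIO_CHAN_INFO_RAW),
	.info_mask_shared_by_all =
		BIT(IIO_CHAN_INFO_SCALE) |
		BIT(IIO_CHAN_INFO_SAMP_FREQ),
	.info_mask_shared_by_all_available =
		BIT(IIO_CHAN_INFO_SCALE) |
		BIT(IIO_CHAN_INFO_SAMP_FREQ),
};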

...
+ iio_device_unregister(indio_dev);
+
+ ads1100_set_conv_mode(data, ADS1100_SINGLESHOT);
+
+ pm_runtime_disable(&client->dev);
I'd expect runtime pm to be disabled before messing with the conv mode.
With a little care you can use devm_pm_runtime_enable()
Setting the conv mode involves I2C traffic. After runtime_disable, the
power supply to the chip may have been disabled, leading to
communication errors on the I2C bus. Hence I thought it appropriate to
write the config register before turning off power.
pm_runtime_disable() disables the runtime management of the power, not
the power itself. The conv mode write is something you need to do after
turning off the management (thus avoiding any races).

Ah. But does that mean I'd need to disable the Vdd power supply here as well?
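
To make the ordering concrete, a sketch of the remove path as I understand
the suggestion (reg_vdd is a made-up regulator handle; whether Vdd actually
needs to be dropped here is exactly the open question above):

static void ads1100_remove(struct i2c_client *client)
{
	struct iio_dev *indio_dev = i2c_get_clientdata(client);
	struct ads1100_data *data = iio_priv(indio_dev);

	iio_device_unregister(indio_dev);

	/* Stop runtime-PM management first; the supply is still on here. */
	pm_runtime_disable(&client->dev);
	pm_runtime_set_suspended(&client->dev);

	/* Safe to talk to the chip now that nothing can suspend it. */
	ads1100_set_conv_mode(data, ADS1100_SINGLESHOT);

	regulator_disable(data->reg_vdd);
}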

--
Mike Looijmans