
I am trying to get data from an SPI ADC on Linux v5.10 (i.MX8MP, no PREEMPT_RT patch).

First, I used shell commands and spi-tools to interact with the chip and find the configuration that suits my needs.

Example of command line:

printf '\x00\x00\x00\x00' | spi-pipe --device /dev/spidev1.0 --speed 1000000 --blocksize=2 --number=2 | hexdump -C

The next step was to create a proper Linux driver to be more efficient. Surprisingly, spying on the SPI bus gave me results I can't explain.

This is the kind of frame I get using the command line / spi-pipe (spidev driver). The behavior is very consistent: the whole SPI exchange lasts about 200 µs, which is perfect for my needs.

[scope capture]

This is the kind of frame I get using my custom driver. It is very inconsistent: most of the time there are one or more additional ~300 µs delays before chip select is released.

[scope capture]

[scope capture]

In rare cases (roughly every 10th sample), I get one with no delay:

[scope capture]

This is the code I'm using in my custom driver to transfer SPI data:

static int max11122_read_raw(struct iio_dev *indio_dev,
                             struct iio_chan_spec const *chan,
                             int *val, int *val2, long mask)
{
    struct max11122_state *st = iio_priv(indio_dev);
    struct spi_transfer xfer[] = {
        {
            .tx_buf = st->tx_buf,
            .rx_buf = st->rx_buf,
            .len = 2,
            .cs_change = 1,
            .delay_usecs = 1,
            .cs_change_delay = {
                .value = 1,
                .unit = SPI_DELAY_UNIT_USECS
            },
            .word_delay = {
                .value = 1,
                .unit = SPI_DELAY_UNIT_USECS
            }
        },
        {
            .tx_buf = &st->tx_buf[2],
            .rx_buf = &st->rx_buf[2],
            .len = 2,
            .delay_usecs = 1,
            .word_delay = {
                .value = 1,
                .unit = SPI_DELAY_UNIT_USECS
            }
        }
    };
    int ret;

    mutex_lock(&st->data_lock);

    memset(st->tx_buf, 0, sizeof(st->tx_buf));
    memset(st->rx_buf, 0, sizeof(st->rx_buf));

    toggle_cnvst_pin(st);

    ret = spi_sync_transfer(st->spi_dev, xfer, ARRAY_SIZE(xfer));
    if (ret < 0) {
        dev_err(&st->spi_dev->dev, "failed to get conversion data\n");
        mutex_unlock(&st->data_lock);
        return ret;
    }

    /* the ADC shifts its result out MSB-first, so convert from big-endian */
    *val = be32_to_cpu(*(__be32 *)st->rx_buf);
    *val2 = 1000000;

    mutex_unlock(&st->data_lock);

    return IIO_VAL_INT;
}

I have tried tweaking delay_usecs, word_delay and cs_change_delay a lot, but these seem to act as minimum values, so leaving them at 0 apparently doesn't help.

Why do I get this behavior? What could be the reason(s) for these additional 300 µs delays?
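One way I could try to narrow down where the delay is introduced is the kernel's built-in SPI trace events (assuming tracefs is enabled in this kernel config), which timestamp each message and transfer as it moves through the SPI core:

```shell
# mount tracefs if it is not already (path may differ per distro)
mount -t tracefs nodev /sys/kernel/tracing 2>/dev/null
cd /sys/kernel/tracing
echo 1 > events/spi/enable
echo 1 > tracing_on
# trigger a few ADC reads, then inspect the timestamps:
grep spi trace
```

Gaps between `spi_message_submit` and `spi_transfer_start` would point at message-queue/worker scheduling rather than at the transfer parameters themselves.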

  • What prevents you from (writing and) using in-kernel driver for the ADC? Note `spidev` exists **solely for debugging** purposes and must not be used otherwise. – 0andriy May 17 '23 at 23:07
  • Yes, agreed: the current driver in the Linux master branch is obsolete. You get an error message saying it uses an old API. Plus it's quite minimalist, so I reworked a driver on my side to suit my needs. – Martin May 19 '23 at 12:03
  • Cool! So, you can now upstream your changes that everybody else will benefit from your excellent job. – 0andriy May 21 '23 at 13:58

0 Answers