
I have created a file under /proc to read a string, but instead of using the seq_file implementation I am able to read it using the ssize_t (*read) method in file_operations. Is this approach wrong?

Could anyone please explain in what situations I am better off using the seq_file implementation?

    ssize_t my_proc_read(struct file *file, char __user *buffer, size_t count, loff_t *offset)
    {
      printk(KERN_INFO "loff_t *offset = %lld\n", *offset);
      printk(KERN_INFO "count = %lu\n", count);

      /* comment below two lines and check what happens. If we don't return
         0 it means there is still some data and read is called repeatedly. */
      if((int)*offset >= ARRAY_LEN)
        return 0;
      *offset += ARRAY_LEN;

      return sprintf(buffer, "%s\n", param);
    }

Also, in the output I always get count to be 131072. Why is that?

[ 7317.855146] loff_t *offset = 131073
[ 7317.855149] count = 131072
anukalp
  • And **how do you create the file** with this `.read` function? Also, accessing `__user` buffers *directly* (which `sprintf` does) is incorrect: you should use `copy_to_user`/`copy_from_user`. – Tsyvarev Jan 14 '16 at 21:16
  • This file is created under /proc using the proc_create function from linux/proc_fs.h – anukalp Jan 15 '16 at 03:20
  • Hmm, and how do you access this file from user space? By using `cat`, or some handwritten C program? – Tsyvarev Jan 15 '16 at 09:00

1 Answer


seq_file is useful for files whose content is not stored anywhere but is generated on the fly when the user needs it. The content can be generated from a single format string (like %s\n in the question) or by combining chunks of the same or different types.

When using the seq_file functionality, the code writer doesn't have to worry about the size of the buffer to read into, the current file offset, or accessing __user data with copy_to_user(). Instead, they concentrate on generating the file's content, as if it were a stream of unlimited size. Everything else is handled by the seq_file machinery automatically.

E.g., the given example can be implemented using seq_file as follows:

#include <linux/fs.h>
#include <linux/module.h>
#include <linux/seq_file.h>

int param_show(struct seq_file *m, void *v)
{
    (void)v; /* Unused */
    seq_printf(m, "%s\n", param); /* Just generate the content of the "file" */
    return 0;
}

int my_proc_open(struct inode *inode, struct file *filp)
{
    return single_open(filp, &param_show, NULL);
}

const struct file_operations my_proc_ops = {
    .owner = THIS_MODULE,
    .open = &my_proc_open,
    .read = &seq_read,
    .llseek = &seq_lseek,
    .release = &single_release /* Frees what single_open() allocated */
};
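
If the file's content is combined from several chunks (e.g., one line per element of an array), the full seq_operations iterator can be used instead of single_open. A minimal sketch, assuming a hypothetical array my_values (none of these names come from the question):

#include <linux/kernel.h>
#include <linux/seq_file.h>

static int my_values[] = { 10, 20, 30 };

static void *my_seq_start(struct seq_file *m, loff_t *pos)
{
    /* Return the element at *pos, or NULL when iteration is finished */
    return ((size_t)*pos < ARRAY_SIZE(my_values)) ? &my_values[*pos] : NULL;
}

static void *my_seq_next(struct seq_file *m, void *v, loff_t *pos)
{
    ++*pos;
    return ((size_t)*pos < ARRAY_SIZE(my_values)) ? &my_values[*pos] : NULL;
}

static void my_seq_stop(struct seq_file *m, void *v)
{
    /* Nothing to release in this sketch */
}

static int my_seq_show(struct seq_file *m, void *v)
{
    seq_printf(m, "%d\n", *(int *)v); /* One chunk of the file's content */
    return 0;
}

static const struct seq_operations my_seq_ops = {
    .start = my_seq_start,
    .next  = my_seq_next,
    .stop  = my_seq_stop,
    .show  = my_seq_show,
};

static int my_iter_open(struct inode *inode, struct file *filp)
{
    return seq_open(filp, &my_seq_ops);
}

/* In the file_operations use .open = my_iter_open, .read = seq_read,
   .llseek = seq_lseek and .release = seq_release (not single_release). */

single_open() is essentially a shortcut for the common case where a single .show callback emits the whole content at once.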

For comparison, the same read functionality implemented directly:

#include <linux/fs.h>
#include <linux/string.h>
#include <linux/uaccess.h>

ssize_t my_proc_read(struct file *file, char __user *buffer, size_t count, loff_t *offset)
{
    ssize_t ret;
    size_t param_len = strlen(param);

    /* Write to the buffer everything except the terminating '\n' */
    ret = simple_read_from_buffer(buffer, count, offset, param, param_len);
    /* But handling the additional '\n' is much more involved */
    if (ret >= 0 && ret < (ssize_t)count && *offset == param_len) {
        char ch = '\n';
        int err = put_user(ch, buffer + ret); /* Try to append '\n' */

        if (!err) {
            /* Success */
            ++ret;
            ++(*offset);
        } else if (!ret) {
            /* Failure and nothing had been read before */
            ret = err;
        }
    }

    return ret;
}

const struct file_operations my_proc_ops = {
    .owner = THIS_MODULE,
    .read = &my_proc_read
};

As one can see, reading the ready-made content of param is a one-liner (using the simple_read_from_buffer helper), but the additional \n, which has to be generated, makes the implementation much more complex.
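
In both variants the entry itself is registered in the same way. A minimal sketch using proc_create (which, according to the comments, is what the question already does); the entry name "my_param" and the module boilerplate are assumptions for illustration:

#include <linux/module.h>
#include <linux/proc_fs.h>

static struct proc_dir_entry *my_proc_entry;

static int __init my_proc_init(void)
{
    /* 0444: world-readable; NULL parent places the entry directly under /proc */
    my_proc_entry = proc_create("my_param", 0444, NULL, &my_proc_ops);
    return my_proc_entry ? 0 : -ENOMEM;
}

static void __exit my_proc_exit(void)
{
    proc_remove(my_proc_entry);
}

module_init(my_proc_init);
module_exit(my_proc_exit);
MODULE_LICENSE("GPL");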

The disadvantage of seq_file is performance: the content generated in the .show function is not cached, so every subsequent read() syscall requires .show to be called again. Also, an internal buffer is used for generating the file's content, and this buffer has to be allocated on the heap.

But in most cases, files generated on the fly are small, read rarely, and/or not performance-critical, so seq_file is suitable for such files in almost all cases.
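
As for the count value in the question: the count argument of the .read callback is simply the buffer size the reading program passes to read(2), so a large value like 131072 just reflects the block size the reader (e.g. cat) requests. A small user-space test makes this visible (a sketch; the path /proc/my_param matches the registration sketch above):

/* User-space code, not kernel code. The count seen in the kernel .read
 * callback equals sizeof(buf) here. */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    char buf[64]; /* Deliberately small, so .read sees count = 64 */
    ssize_t n;
    int fd = open("/proc/my_param", O_RDONLY);

    if (fd < 0) {
        perror("open");
        return 1;
    }

    while ((n = read(fd, buf, sizeof(buf))) > 0)
        fwrite(buf, 1, (size_t)n, stdout);

    close(fd);
    return 0;
}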

Tsyvarev