You could imagine a case where the overhead of running interrupt handlers (invalidating your caches, setting up an interrupt stack to run on) exceeds the cost of actually doing the read or write, in which case I guess polling would be faster.
However, while SSDs are fast compared to spinning disks, they are still much slower than memory. An SSD takes anywhere from tens of microseconds to milliseconds to complete each I/O, whereas interrupt setup and teardown is all in-memory work that probably costs at most, say, 100-1000 cycles (~100 ns to 1 µs).
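To put rough (and entirely assumed) numbers on that: at a 1 GHz clock, 1000 cycles of interrupt overhead against a 50 µs SSD I/O works out to about 2%. A throwaway check of the arithmetic:

```c
#include <stdio.h>

int main(void) {
    double cycles   = 1000.0;  /* assumed worst-case interrupt setup/teardown */
    double clock_hz = 1e9;     /* assumed 1 GHz clock, for easy math */
    double ssd_us   = 50.0;    /* assumed mid-range SSD I/O latency */

    double overhead_us = cycles / clock_hz * 1e6;  /* 1000 cycles -> 1 us */
    printf("interrupt overhead: %.1f us = %.1f%% of a %.0f us SSD I/O\n",
           overhead_us, 100.0 * overhead_us / ssd_us, ssd_us);
    return 0;
}
```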
The main benefit of using interrupts instead of polling is that the idle cost of interrupts is much lower: you don't have to keep scheduling your I/O threads to poll for more data while there is none available. Interrupts also mean I/O is handled immediately, so if a user types a key, there won't be a pause before the letter appears on the screen while the I/O thread waits to be scheduled. Balancing these issues is a mess: inserting arbitrary stalls into your polling loop makes polling less resource-intensive at the expense of even slower response times (see the sketch below). These are probably the primary reasons nobody uses polling in kernel I/O designs.
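Here is a minimal sketch of that stall trade-off; device_ready and POLL_INTERVAL_US are hypothetical stand-ins for illustration, not a real kernel API:

```c
#include <stdbool.h>
#include <stdio.h>
#include <unistd.h>

/* Stand-in for a memory-mapped device status bit; real hardware would
 * flip this. Pre-set to true here just so the demo terminates. */
static volatile bool device_ready = true;

#define POLL_INTERVAL_US 100  /* the "arbitrary stall": smaller = snappier, hungrier */

static void wait_for_device_polling(void) {
    while (!device_ready)
        usleep(POLL_INTERVAL_US);  /* remove the stall and this is a pure busy-wait */
}

int main(void) {
    wait_for_device_polling();  /* worst case adds ~POLL_INTERVAL_US of latency */
    puts("device ready, doing the I/O");
    return 0;
}
```

At 100 µs between checks this burns almost no CPU but can add up to 100 µs of latency to every I/O; dropping the stall gives minimal latency at the price of a core spinning at 100%.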
From a user process's perspective (using software interrupt mechanisms such as Unix signals), either approach can make sense, since polling usually means a blocking syscall such as read() or select() rather than, say, spinning on a memory-mapped status variable or issuing device instructions the way the kernel version of polling might. Letting the OS do this work for us via system calls doesn't hurt performance: the userland thread's cache gets invalidated no more and no less by signal delivery than by any other syscall. However, this is pretty OS-dependent, so profiling is your best bet for figuring out which codepath is faster.
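For example, both userland flavors look like this with select(): a blocking wait (from the process's point of view, the kernel wakes us up, interrupt-style) versus a zero-timeout check that a loop would turn into true polling. This is just a sketch against stdin; real code would check every return value:

```c
#include <stdio.h>
#include <sys/select.h>
#include <unistd.h>

int main(void) {
    fd_set fds;
    FD_ZERO(&fds);
    FD_SET(STDIN_FILENO, &fds);

    /* Blocking wait: the process sleeps in the kernel until stdin is readable. */
    select(STDIN_FILENO + 1, &fds, NULL, NULL, NULL);

    /* Non-blocking check: a zero timeout returns immediately, so wrapping
     * this in a loop is genuine userland polling. (select() modifies the
     * fd_set, hence the re-initialization.) */
    struct timeval zero = {0, 0};
    FD_ZERO(&fds);
    FD_SET(STDIN_FILENO, &fds);
    int ready = select(STDIN_FILENO + 1, &fds, NULL, NULL, &zero);
    printf("stdin %s readable\n", ready > 0 ? "is" : "is not");
    return 0;
}
```

Run it and type a line: the blocking call returns once input arrives, and the zero-timeout variant then reports whether anything is still buffered.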