The law is very clear here: in almost all cases you are not responsible for caching partial or complete files or their metadata.
"distributed p2p storage and sharing"
No, it works more like the BitTorrent protocol and less like in the TV series "Silicon Valley". You share a file, and whoever wants it needs to find its hash to download it. The differences are that in BitTorrent, .torrent files were mostly preferred over magnet hashes until around 2015, while in IPFS hashes are the only way, and that the system is global: a collision-resistant cryptographic hash function is used, so collisions are practically impossible (at least not in a billion years), which means IPFS can check hashes over THE WHOLE network and avoid storing duplicate chunks of the data from which the folder/file structure is reconstructed.
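To illustrate the dedup point, here is a minimal sketch in Python: plain SHA-256 over fixed-size chunks. Real IPFS wraps hashes into multihash CIDs and links chunks in a Merkle DAG, so this shows only the core principle, not the actual format:

```python
import hashlib

CHUNK_SIZE = 256 * 1024  # IPFS's default chunker also uses 256 KiB pieces

def chunk_hashes(data: bytes) -> list[str]:
    """Split data into fixed-size chunks and hash each one with SHA-256,
    the same hash family real IPFS CIDs are built on."""
    return [
        hashlib.sha256(data[i:i + CHUNK_SIZE]).hexdigest()
        for i in range(0, len(data), CHUNK_SIZE)
    ]

# Two "files" that share their first chunk: the shared chunk hashes to the
# same digest, so a content-addressed store keeps that chunk only once.
common = b"A" * CHUNK_SIZE
file_a = common + b"tail of file A"
file_b = common + b"tail of file B"

hashes_a = chunk_hashes(file_a)
hashes_b = chunk_hashes(file_b)
print(hashes_a[0] == hashes_b[0])  # True  -> duplicate chunk, stored once
print(hashes_a[1] == hashes_b[1])  # False -> the tails differ
```

Because the address is derived from the content, two nodes that chunk the same data independently produce the same hashes, and that is exactly what makes network-wide dedup possible.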
The point here is that, just like in BitTorrent, you store no files you did not request. And just like in BitTorrent, you can do IPFS Bitswap stuff to accelerate the swarms; that is what cloudflare-ipfs.com does, as do ipfs.infura.io and others (?). Similar things exist in BitTorrent too, in particular clients that automatically attach to updated torrents that share the same hashes for file pieces. That is very cool, but in IPFS it is done automatically. There are also servers that propagate the .torrent file (a.k.a. magnet metadata) given just the magnet hash. I believe DHT crawlers like BTDigg or https://btdb.eu/ also play some role, though not much of course. You can set up your own crawler (BTDigg is open source) that does precisely that: share torrent metadata, which requires almost no resources. (You can even set up your own bootstrap supernode to create your OWN separate DHT.) It is very cool to do, as a lot of stuff can be found there. As I understand it, IPFS also does this by default, i.e. it stores some metadata to help data propagation. You can further read this (and see the gateway sketch at the end of this post):
https://discuss.ipfs.io/t/ipfs-propagation/4301
https://discuss.ipfs.io/t/how-fast-do-ipns-changes-propagate/311
https://docs.ipfs.io/concepts/bitswap/
There is also this: https://collab.ipfscluster.io/
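To make the gateway point above concrete, here is a minimal sketch of fetching the same content by hash from two independent public gateways. The CID is the often-cited "hello world" example from IPFS tutorials, and the gateway URLs are assumed to still be online; substitute your own if not:

```python
import urllib.request

# Often-cited "hello world" example CID from IPFS tutorials (an assumption
# here; any CID you know works, since gateways resolve purely by hash).
CID = "QmT78zSuBmuS4z925WZfrqQ1qHaJ56DQaTfyMUF7F8ff5o"

# Independent public gateways: each fetches the same bytes from the swarm
# via Bitswap and serves them over plain HTTP.
GATEWAYS = [
    "https://ipfs.io/ipfs/",
    "https://cloudflare-ipfs.com/ipfs/",
]

for gw in GATEWAYS:
    try:
        with urllib.request.urlopen(gw + CID, timeout=10) as resp:
            print(gw, "->", resp.read()[:40])
    except OSError as exc:
        print(gw, "failed:", exc)
```

Whichever gateway answers, the bytes are verifiably the same, because the address is the hash of the content itself.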