I actually use such an approach in one of my programs.
Note, however, that I have a custom-made database and database engine for this.
I really doubt this could be done with common database engines without making some changes to them. Why?
For this to work, you need to modify the record preparation routines. What are record preparation routines?
Record preparation routines are a set of functions which read data from multiple database tables (following the relational connections between them) and then join the results into a single structured record, which is finally sent to the client.
It is these routines that control how, and from where, the data is read.
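To make the idea concrete, here is a minimal sketch of what such a record preparation routine might look like, with the tables modeled as plain Python dicts. All table and field names here (orders, customers, items, and so on) are hypothetical, not from my actual program:

```python
def prepare_record(order_id, orders, customers, items):
    """Read related rows from several tables and join them into one
    structured record that is sent to the client as a single unit."""
    order = orders[order_id]
    # Follow the relational connection from the order to its customer.
    customer = customers[order["customer_id"]]
    # Collect all item rows that belong to this order.
    lines = [it for it in items if it["order_id"] == order_id]
    return {
        "id": order_id,
        "date": order["date"],
        "customer": {"id": customer["id"], "name": customer["name"]},
        "items": lines,
    }

# Example data and usage:
orders = {1: {"customer_id": 7, "date": "2023-05-01"}}
customers = {7: {"id": 7, "name": "Alice"}}
items = [{"order_id": 1, "product": "widget", "qty": 3}]

record = prepare_record(1, orders, customers, items)
```

Because all reads go through one routine like this, you can change where a piece of data comes from (a table row, a file, a compressed blob) without the client ever noticing.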
Why have I decided to implement this?
Well, that was many years ago, when I only owned an old Win98-based computer. Because Win98 does not support the NTFS file system, only FAT32, I was limited to a 4 GB file size, which could quickly be exceeded when saving a bunch of files directly into the database, which is normally stored in a single file (or, in some databases, several files, one per table).
Because today's file systems no longer have this limitation, there is no need for such a solution any more.
But I still decided to keep this functionality. Why?
Most of the files that I store are text files. So, to reduce the space requirements on the database server even further, I went ahead and implemented word-based compression.
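The basic idea behind word-based compression can be sketched in a few lines: build a dictionary of the unique words in the text and store the text itself as a list of dictionary indices. This is only an illustration of the principle, not my actual implementation (which, among other things, would also need to preserve whitespace and punctuation exactly):

```python
def compress_words(text):
    """Build a dictionary of unique words and encode the text as a
    list of indices into that dictionary."""
    dictionary = []
    index_of = {}
    codes = []
    for word in text.split():
        if word not in index_of:
            index_of[word] = len(dictionary)
            dictionary.append(word)
        codes.append(index_of[word])
    return dictionary, codes

def decompress_words(dictionary, codes):
    """Rebuild the text from the dictionary and the index stream."""
    return " ".join(dictionary[c] for c in codes)

# Repeated words like "the" are stored once in the dictionary and
# referenced by a small index each time they occur.
dictionary, codes = compress_words("the cat sat on the mat the end")
```

The saving comes from the repetition: natural-language text reuses the same words constantly, so the index stream plus a single copy of each word is much smaller than the raw text.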
So all files are stored compressed, but when a client requests a record that contains one of these files, the file is decompressed and sent as plain data to the client (to maintain backward compatibility with older client software).
In newer client software I transfer these files in compressed form, along with the word dictionary needed for decompression, so the files are now decompressed on the client side.
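The server-side branching this requires is simple; a rough sketch, assuming the word-dictionary scheme described above and a flag telling the server whether the client can decompress (the function and field names are hypothetical):

```python
def serve_file(dictionary, codes, client_supports_compression):
    """Return the file in whichever form the client understands:
    dictionary + codes for newer clients, expanded text for older ones."""
    if client_supports_compression:
        # Newer clients receive the compressed stream and the
        # dictionary, and decompress on their side.
        return {"dictionary": dictionary, "codes": codes}
    # Older clients receive the text fully expanded on the server.
    return {"text": " ".join(dictionary[c] for c in codes)}

# Hypothetical stored form of a small text file:
dictionary = ["hello", "world"]
codes = [0, 1, 0]

old = serve_file(dictionary, codes, client_supports_compression=False)
new = serve_file(dictionary, codes, client_supports_compression=True)
```

The payoff of the newer path is bandwidth: the compressed stream plus the dictionary is usually much smaller than the expanded text, and the server no longer spends CPU time decompressing on every request.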
As for the non-text files that I also store, I'm planning to implement a custom file server, so these won't even be transmitted through the database engine itself, as that is causing some slowdowns, especially with larger files.