In a word: Pain and suffering...
If you understand the file format used by tar, you could build some tools to help with what you are doing, though there are complications that may or may not be relevant to your particular file. The tar format uses 512-byte header blocks that specify, among other things, the name and length of the file that follows. You could use this information to build up a list of the offset of each entry within the tar file, then traverse the archive backwards, truncating the tar file as you extract each entry.
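To make that concrete, here is a minimal Python sketch of the offset-building pass. It assumes a plain ustar archive (the function name and piece of path handling are mine, not a standard tool), and it deliberately ignores the complications discussed below:

```python
def tar_entry_offsets(path):
    """Build a list of (offset, name, size) for each entry in a tar archive.

    Minimal sketch: handles plain ustar headers only -- no GNU long-name
    entries, sparse files, or base-256 size encoding for huge entries.
    """
    offsets = []
    with open(path, "rb") as f:
        pos = 0
        while True:
            header = f.read(512)
            if len(header) < 512 or header == b"\x00" * 512:
                break  # end of archive (tar ends with two all-zero blocks)
            name = header[0:100].rstrip(b"\x00").decode("utf-8", "replace")
            # size field: 12 bytes of octal ASCII starting at offset 124
            size = int(header[124:136].rstrip(b"\x00 ") or b"0", 8)
            offsets.append((pos, name, size))
            # entry data is padded out to a multiple of 512 bytes
            f.seek(((size + 511) // 512) * 512, 1)
            pos = f.tell()
    return offsets
```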
However, there are some sequencing issues you would have to deal with. GNU tar, for example, can create "fake" entries for files with long file names, storing the extra information that can't fit in the 512-byte header. You may also need to be careful about directory entries, which can specify permissions that would not allow you to create files inside the directory if you extract the directory entry before its contents.
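For reference, those "fake" entries can be recognized by the typeflag byte at offset 156 of the header; a small, hedged addition to the sketch above:

```python
GNU_LONGNAME = b"L"   # entry whose data block holds the real file name
GNU_LONGLINK = b"K"   # same idea, for a long link target
DIRECTORY    = b"5"

def typeflag(header: bytes) -> bytes:
    """Return the tar typeflag byte (offset 156 of the header)."""
    return header[156:157]

# An "L" entry describes the entry that immediately follows it, so the
# pair must be kept together if you slice the archive up by offset.
```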
The Python programming language, among others, includes a nice library (the tarfile module) for handling tar files.
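For instance, something like this lists every entry along with its offsets, and tarfile copes with the GNU long-name quirks for you. Note that offset and offset_data are CPython implementation attributes rather than documented API, and "bigfile.tar" is just a placeholder name:

```python
import tarfile

with tarfile.open("bigfile.tar") as tf:
    for member in tf:
        # offset = start of the header block, offset_data = start of the
        # entry's data bytes
        print(member.offset, member.offset_data, member.name, member.size)
```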
However, another option would be to split the large tar file up into many smaller files, irrespective of the tar format: split off the end, truncate the source file, and repeat until, instead of a single 130GB file, you have 130 1GB files. Getting these split/truncate steps right may be a little tricky, but it could be done with the "dd" and "truncate" commands.
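As a rough illustration, here is the same split-and-truncate idea in Python (the piece size, the naming scheme, and the function name are all arbitrary choices of mine, and the operation is destructive, so rehearse it on a small copy first):

```python
import os

CHUNK = 1 << 30  # 1 GiB per piece -- an arbitrary choice

def split_from_end(path):
    """Destructively split `path` into CHUNK-sized pieces, tail first.

    Each piece is copied off the end of the source, which is then
    truncated, so free space never has to hold more than one extra piece.
    """
    size = os.path.getsize(path)
    n_pieces = (size + CHUNK - 1) // CHUNK
    with open(path, "r+b") as src:
        for i in range(n_pieces - 1, -1, -1):
            start = i * CHUNK
            src.seek(start)
            data = src.read()   # reads one piece into memory; the last may be short
            with open(f"{path}.{i:04d}", "wb") as piece:
                piece.write(data)
            src.truncate(start)  # shrink the source in place
```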
Then it would be an easy matter to make a script that would "cat" the first file, delete the first file, "cat" the second file, delete the second file, and so on. Pipe that script's output to "tar x", and the pieces of the source file are deleted as the archive extracts.
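A sketch of that feed-and-delete loop, again in Python; the piece-name pattern assumes the naming scheme from the splitter above:

```python
import glob
import os
import subprocess

def feed_pieces_to_tar(pattern):
    """Stream the pieces into `tar x` in order, deleting each as it goes."""
    tar = subprocess.Popen(["tar", "x"], stdin=subprocess.PIPE)
    for piece in sorted(glob.glob(pattern)):
        with open(piece, "rb") as f:
            while True:
                buf = f.read(1 << 20)
                if not buf:
                    break
                tar.stdin.write(buf)
        # every byte has already been handed to the pipe, so the piece
        # can go now, freeing its space before tar has finished
        os.remove(piece)
    tar.stdin.close()
    if tar.wait() != 0:
        raise RuntimeError("tar exited with an error")

feed_pieces_to_tar("bigfile.tar.*")  # matches the splitter's piece names
```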
Of course, these are all destructive operations, so you basically get one shot to do them right.
The easiest option would be if you have somewhere to copy the 130GB file and extract it from there: an external USB hard drive, say, or another machine, extracting over an SSH tunnel.