A DataFrame and a table are different things in Spark.
A DataFrame is an immutable, distributed collection of data.
A table has metadata (kept in a catalog, such as the Hive metastore) that points to the physical location from which the data is read.
When you convert a Spark DataFrame to a table, you physically write the data to storage, which could be HDFS, S3, an Azure container, etc. Once the data is saved as a table, you can read it from anywhere, e.g. from a different Spark job or through any other workflow.
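For example, here is a minimal PySpark sketch (the table name `people` and the source path are hypothetical) that saves a DataFrame as a table and reads it back through the catalog:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("df-vs-table").getOrCreate()

df = spark.read.json("/data/people.json")  # hypothetical source path

# saveAsTable() physically writes the data to the warehouse location
# (HDFS, S3, an Azure container, etc.) and registers it in the catalog.
df.write.mode("overwrite").saveAsTable("people")

# Any later job with access to the same catalog can read it back:
people = spark.table("people")
people.show()
```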
A DataFrame, on the other hand, is only valid within the specific Spark session in which you created it; once you close that session, you can no longer read the DataFrame or access its values. A DataFrame has no dedicated memory location or physical path where it gets saved. It is just a representation of the data you read from some source.
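Here is a sketch of that session-scoped behavior (names are assumed for illustration): the DataFrame is just a handle inside one session and disappears with it, whereas a saved table would still be readable afterwards:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("session-scope").getOrCreate()

df = spark.range(5)  # a DataFrame: no physical path of its own
df.show()            # computed on demand, never persisted anywhere

spark.stop()  # after this, df can no longer be used; a new session
              # cannot see it, while a table saved via saveAsTable()
              # would still be available through the catalog
```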