You are not writing about any other transformations, so I am assuming you want a very simple job that performs only this one join.
You are not asking about file3, so I assume you are going to broadcast it with a hint, and that is a good direction.
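A minimal sketch of that job, assuming the files are Parquet and the join column is called "id" (the paths and the column name are assumptions, not taken from your question). This needs a running SparkSession, so it is illustrative rather than runnable standalone:

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import broadcast

spark = SparkSession.builder.appName("single-join-job").getOrCreate()

file1 = spark.read.parquet("/data/file1")  # the bigger dataset
file2 = spark.read.parquet("/data/file2")
file3 = spark.read.parquet("/data/file3")  # small enough to broadcast

# file1 JOIN file2 will typically go through a sort-merge join
# (shuffle on "id"); the broadcast() hint keeps file3 out of the
# shuffle entirely by shipping it to every executor.
result = file1.join(file2, "id").join(broadcast(file3), "id")
```

Spark also broadcasts automatically below `spark.sql.autoBroadcastJoinThreshold` (10 MB by default), but the explicit hint makes the intent clear regardless of statistics.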
If you are not doing anything before this join, I am not sure repartitioning file1/file2 is worth it: most probably they will be joined with an SMJ (sort-merge join), which already shuffles both datasets on the column from the join condition. The output DataFrame from that join will have a number of partitions equal to spark.sql.shuffle.partitions, so you may try to tune that parameter instead (it will also affect every other shuffle, so keep in mind my assumption from the first line).
You may try to size this parameter to the bigger dataset (file1) so that partitions come out at around 100-200 MB each. I think this blog post is worth reading: Medium blog post
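The sizing above is simple arithmetic: divide the size of the bigger dataset by the target partition size. A sketch, assuming file1 is about 30 GB on disk (the size is a made-up example; the 100-200 MB target is from the text):

```python
def shuffle_partitions(dataset_size_mb: int, target_partition_mb: int = 150) -> int:
    """Partition count giving roughly target_partition_mb per partition."""
    # Ceiling division so partitions never exceed the target size.
    return max(1, -(-dataset_size_mb // target_partition_mb))

file1_size_mb = 30 * 1024  # hypothetical 30 GB input
n = shuffle_partitions(file1_size_mb)
print(n)  # 205 partitions of ~150 MB each

# You would then set, before the join:
# spark.conf.set("spark.sql.shuffle.partitions", n)
```

Note this uses the on-disk (compressed) size as a rough proxy; the shuffled size can differ, so treat the result as a starting point and check the actual partition sizes in the Spark UI.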