Every ingestion parameter is passed as an option, as in the following code snippets:
CSV
Dataset<Row> df = spark.read().format("csv")
.option("header", "true")
.load("data/books.csv");
Source: https://github.com/jgperrin/net.jgp.books.spark.ch01/blob/master/src/main/java/net/jgp/books/spark/ch01/lab100_csv_to_dataframe/CsvToDataframeApp.java
JDBC
Dataset<Row> df = spark.read()
.option("url", "jdbc:mysql://localhost:3306/sakila")
.option("dbtable", "actor")
.option("user", "root")
.option("password", "Spark<3Java")
.option("useSSL", "false")
.option("serverTimezone", "EST")
.format("jdbc")
.load();
Source: https://github.com/jgperrin/net.jgp.books.spark.ch08/blob/master/src/main/java/net/jgp/books/spark/ch08/lab100_mysql_ingestion/MySQLToDatasetWithOptionsApp.java
You could read all of this information from a configuration file. Each read also produces a distinct dataframe, so you could keep an array, a map, or a list of those dataframes once they are ingested, as in the sketch below.
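Here is a minimal sketch of that idea, assuming a hypothetical properties file named ingestion.properties with hypothetical keys (csv.header, jdbc.url, jdbc.dbtable, jdbc.user, jdbc.password); the class name ConfigDrivenIngestionApp is also an assumption, not taken from the source repositories. It reads the options from the file, performs the same two reads as above, and keeps each resulting dataframe in a map keyed by a logical name.

import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.util.HashMap;
import java.util.Map;
import java.util.Properties;

import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

public class ConfigDrivenIngestionApp {

  public static void main(String[] args) throws IOException {
    SparkSession spark = SparkSession.builder()
        .appName("Config-driven ingestion")
        .master("local[*]")
        .getOrCreate();

    // Hypothetical configuration file; the file name and keys are assumptions.
    // Example content:
    //   csv.header=true
    //   jdbc.url=jdbc:mysql://localhost:3306/sakila
    //   jdbc.dbtable=actor
    //   jdbc.user=root
    //   jdbc.password=Spark<3Java
    Properties props = new Properties();
    try (InputStream in = new FileInputStream("ingestion.properties")) {
      props.load(in);
    }

    // Keep each ingested dataframe in a map, keyed by a name of your choice.
    Map<String, Dataset<Row>> dataframes = new HashMap<>();

    // CSV read, with its option taken from the configuration file.
    Dataset<Row> booksDf = spark.read().format("csv")
        .option("header", props.getProperty("csv.header", "true"))
        .load("data/books.csv");
    dataframes.put("books", booksDf);

    // JDBC read, with connection details taken from the configuration file.
    Dataset<Row> actorDf = spark.read().format("jdbc")
        .option("url", props.getProperty("jdbc.url"))
        .option("dbtable", props.getProperty("jdbc.dbtable"))
        .option("user", props.getProperty("jdbc.user"))
        .option("password", props.getProperty("jdbc.password"))
        .load();
    dataframes.put("actor", actorDf);

    // Downstream code can now look up any ingested dataframe by name.
    dataframes.forEach((name, df) ->
        System.out.println(name + ": " + df.count() + " rows"));
  }
}

Keying the map by a logical name lets the rest of the application retrieve a dataframe without caring whether it originally came from a CSV file or a JDBC source.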