You can rebuild Livy by passing the spark-3.0 profile to Maven to create a custom build for Spark 3, for example:
git clone https://github.com/apache/incubator-livy.git && \
cd incubator-livy && \
mvn clean package -B -V -e \
-Pspark-3.0 \
-Pthriftserver \
-DskipTests \
-DskipITs \
-Dmaven.javadoc.skip=true
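If the build succeeds, the runnable distribution should land under assembly/target; the exact artifact name depends on the Livy version you build, so the wildcard below is an assumption:

# unpack the binary assembly produced by the build (name varies with the Livy version)
unzip assembly/target/apache-livy-*-bin.zip -d /opt/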
This profile is defined in pom.xml; by default it builds against Spark 3.0.0. You can change it to use a different Spark version (see the example after the profile below).
<profile>
  <id>spark-3.0</id>
  <activation>
    <property>
      <name>spark-3.0</name>
    </property>
  </activation>
  <properties>
    <spark.scala-2.12.version>3.0.0</spark.scala-2.12.version>
    <spark.scala-2.11.version>2.4.5</spark.scala-2.11.version>
    <spark.version>${spark.scala-2.11.version}</spark.version>
    <netty.spark-2.12.version>4.1.47.Final</netty.spark-2.12.version>
    <netty.spark-2.11.version>4.1.47.Final</netty.spark-2.11.version>
    <netty.version>${netty.spark-2.11.version}</netty.version>
    <java.version>1.8</java.version>
    <py4j.version>0.10.9</py4j.version>
    <json4s.spark-2.11.version>3.5.3</json4s.spark-2.11.version>
    <json4s.spark-2.12.version>3.6.6</json4s.spark-2.12.version>
    <json4s.version>${json4s.spark-2.11.version}</json4s.version>
    <spark.bin.download.url>https://archive.apache.org/dist/spark/spark-3.0.0/spark-3.0.0-bin-hadoop2.7.tgz</spark.bin.download.url>
    <spark.bin.name>spark-3.0.0-bin-hadoop2.7</spark.bin.name>
  </properties>
</profile>
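To try a different Spark 3 release, the safest route is to edit these properties in pom.xml. Alternatively, assuming Maven's usual rule that command-line -D properties take precedence over properties set in the pom (not verified against every Livy version), you could override them directly in the build command, e.g. for Spark 3.0.2:

mvn clean package -B -V -e \
-Pspark-3.0 \
-Pthriftserver \
-DskipTests \
-DskipITs \
-Dmaven.javadoc.skip=true \
-Dspark.scala-2.12.version=3.0.2 \
-Dspark.bin.download.url=https://archive.apache.org/dist/spark/spark-3.0.2/spark-3.0.2-bin-hadoop2.7.tgz \
-Dspark.bin.name=spark-3.0.2-bin-hadoop2.7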
As far as I know, Livy supports Spark 3.0.x, but it's worth testing with 3.1.1 as well, so please let us know how it goes :)
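To run the custom build against Spark 3, point it at your Spark 3 installation before starting the server; the paths below are just examples, adjust them to wherever you unpacked Livy and Spark:

# example paths; replace with your own Livy and Spark 3 locations
export SPARK_HOME=/opt/spark-3.0.0-bin-hadoop2.7
export LIVY_HOME=/opt/apache-livy-<version>-bin
$LIVY_HOME/bin/livy-server start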