I have a Spark on Dataproc Serverless use case that requires reading and writing tables in Iceberg format on GCS.
Reading through the documentation, I realized that I cannot use the Hadoop table catalog because GCS does not support atomic rename:
"A Hadoop catalog doesn’t need to connect to a Hive MetaStore, but can only be used with HDFS or similar file systems that support atomic rename." Ref
On the other hand, the official Dataproc Metastore documentation seems to say that it supports Iceberg tables with both the Hive catalog and the Hadoop tables catalog: here. Hence the question: am I safe using a Dataproc Metastore with the Hadoop table catalog?
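For context, here is a minimal sketch of the two catalog setups I'm weighing (the catalog names, the bucket and the metastore thrift URI are placeholders, not my real environment; on Dataproc Serverless I would normally pass these as batch properties rather than in code):

```python
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("iceberg-on-gcs")
    .config("spark.sql.extensions",
            "org.apache.iceberg.spark.extensions.IcebergSparkSessionExtensions")
    # Option A: Hadoop catalog with the warehouse directly on GCS
    # (the setup the Iceberg docs warn about, since GCS has no atomic rename)
    .config("spark.sql.catalog.hadoop_cat", "org.apache.iceberg.spark.SparkCatalog")
    .config("spark.sql.catalog.hadoop_cat.type", "hadoop")
    .config("spark.sql.catalog.hadoop_cat.warehouse", "gs://my-bucket/warehouse")
    # Option B: Hive catalog backed by a Dataproc Metastore service
    .config("spark.sql.catalog.hive_cat", "org.apache.iceberg.spark.SparkCatalog")
    .config("spark.sql.catalog.hive_cat.type", "hive")
    .config("spark.sql.catalog.hive_cat.uri", "thrift://METASTORE_HOST:9083")
    .config("spark.sql.catalog.hive_cat.warehouse", "gs://my-bucket/warehouse")
    .getOrCreate()
)

# Intended usage: plain Iceberg DDL/DML against whichever catalog is safe
spark.sql("CREATE NAMESPACE IF NOT EXISTS hive_cat.db")
spark.sql(
    "CREATE TABLE IF NOT EXISTS hive_cat.db.events (id BIGINT, ts TIMESTAMP) USING iceberg"
)
```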