
I am trying to find an integration that lets me use the Iceberg table format on ADLS / Azure Data Lake to perform CRUD operations. Is it possible to use it on Azure without a separate computation engine like Spark? I think AWS S3 supports this use case. Any thoughts?

Manfred Moser
John

2 Answers


Spark can use Iceberg with the ABFS connector, HDFS, and even local files. You just need the classpath and authentication set up correctly.
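
For example, a minimal PySpark sketch of that setup, assuming the iceberg-spark-runtime and hadoop-azure jars are already on the classpath; the catalog name, storage account, container, and key below are placeholders:

```python
from pyspark.sql import SparkSession

# Sketch only: "adls" (catalog name), "mycontainer", "myaccount", and the
# account key are placeholders.
spark = (
    SparkSession.builder
    .appName("iceberg-on-adls")
    # Enable Iceberg's SQL extensions (needed for UPDATE/DELETE).
    .config("spark.sql.extensions",
            "org.apache.iceberg.spark.extensions.IcebergSparkSessionExtensions")
    # Register a Hadoop-style Iceberg catalog with its warehouse on ADLS Gen2.
    .config("spark.sql.catalog.adls", "org.apache.iceberg.spark.SparkCatalog")
    .config("spark.sql.catalog.adls.type", "hadoop")
    .config("spark.sql.catalog.adls.warehouse",
            "abfs://mycontainer@myaccount.dfs.core.windows.net/warehouse")
    # ABFS authentication; a shared account key is shown, OAuth/SAS also work.
    .config("spark.hadoop.fs.azure.account.key.myaccount.dfs.core.windows.net",
            "<storage-account-key>")
    .getOrCreate()
)

# Basic CRUD through Spark SQL.
spark.sql("CREATE TABLE IF NOT EXISTS adls.db.events (id BIGINT, msg STRING) USING iceberg")
spark.sql("INSERT INTO adls.db.events VALUES (1, 'hello')")
spark.sql("UPDATE adls.db.events SET msg = 'hi' WHERE id = 1")
spark.sql("DELETE FROM adls.db.events WHERE id = 1")
```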

stevel

A bit late to the party, but Starburst Galaxy deploys Trino in any Azure region and has a Great Lakes connector that supports Hive (Parquet, ORC, CSV, etc.), Delta Lake, and Iceberg: https://blog.starburst.io/introducing-great-lakes-connectivity-for-starburst-galaxy
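
To sketch what that looks like from a client, here is a rough example using the trino Python package; the host, credentials, catalog, and schema are placeholders, and the type = 'ICEBERG' table property is my reading of the Great Lakes docs, so verify it for your version:

```python
import trino

# Hypothetical endpoint, credentials, catalog, and schema; substitute your own.
conn = trino.dbapi.connect(
    host="example.trino.galaxy.starburst.io",
    port=443,
    user="user@example.com",
    http_scheme="https",
    auth=trino.auth.BasicAuthentication("user@example.com", "<password>"),
    catalog="adls_lake",   # a Great Lakes catalog pointed at ADLS
    schema="demo",
)
cur = conn.cursor()

# With Great Lakes connectivity the table format is chosen per table;
# the type property selects Iceberg (check the Starburst docs).
cur.execute(
    "CREATE TABLE IF NOT EXISTS events (id BIGINT, msg VARCHAR) "
    "WITH (type = 'ICEBERG')"
)
cur.execute("INSERT INTO events VALUES (1, 'hello')")
cur.execute("SELECT * FROM events")
print(cur.fetchall())
```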

Tom N
  • As it’s currently written, your answer is unclear. Please [edit] to add additional details that will help others understand how this addresses the question asked. You can find more information on how to write good answers [in the help center](/help/how-to-answer). – Community May 18 '22 at 12:27