
I am using ScalikeJDBC to fetch a large table, convert the data to JSON, and then call a web service with 50 JSON objects (rows) at a time. This is my code:

```scala
import scalikejdbc._

val rows = sql"SELECT * FROM bigtable"
val jsons = DB.readOnly { implicit session =>
  rows.map { row =>
    // build a JSON object for each row
  }.toList().apply()
}
jsons.grouped(50).foreach { batch =>
  // send 50 objects at once to an HTTP server
}
```

This works, but unfortunately the intermediate list is huge and consumes a lot of memory. I am looking for a way to iterate over the result set in a "lazy" fashion, similar to `foreach`, except that I want to iterate over batches of 50 rows. Is that possible with ScalikeJDBC?


I solved the memory issues by filling and clearing a mutable list instead of using `grouped`, but I am still looking for a better solution.

stholzm
1 Answer


Try specifying `fetchSize`; it hints to the JDBC driver how many rows to fetch per round trip, so the driver does not have to buffer the entire result set at once.

See also: http://scalikejdbc.org/documentation/operations.html#setting-jdbc-fetchsize
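
Combined with ScalikeJDBC's `foreach`, this lets you stream through the table without building the intermediate list at all. A minimal sketch, where `toJson` and `send` are placeholders for your row-to-JSON conversion and HTTP call:

```scala
import scalikejdbc._
import scala.collection.mutable.ListBuffer

DB.readOnly { implicit session =>
  val buffer = new ListBuffer[String]()   // holds at most 50 JSON objects (typed as String here)
  sql"SELECT * FROM bigtable".fetchSize(50).foreach { rs =>
    buffer += toJson(rs)                  // toJson: placeholder row-to-JSON conversion
    if (buffer.size == 50) {
      send(buffer.toList)                 // send: placeholder HTTP call
      buffer.clear()
    }
  }
  if (buffer.nonEmpty) send(buffer.toList) // flush the final partial batch
}
```

This keeps at most 50 rows in memory at a time, plus whatever the driver buffers per fetch.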

Kazuhiro Sera
  • Sorry, my problem is not the JDBC driver hogging memory, but the intermediate list which I use for `grouped`. – stholzm Jun 18 '16 at 10:33