
I have a file where each line is a JSON object. I'd like to convert the file to a JSON array.

The file looks something like this:

{"address":"email1@foo.bar.com", "topic":"Some topic."}
{"address":"email2@foo.bar.com", "topic":"Another topic."}
{"address":"email3@foo.bar.com", "topic":"Yet another topic."}

I'm using bash and jq.

I tried

jq --slurp --raw-input 'split("\n")[:-1]' my_file

But that just treats each line as a string, creating a JSON array of strings:

[
  "{\"address\":\"email1@foo.bar.com\", \"topic\":\"Some topic.\"}",
  "{\"address\":\"email2@foo.bar.com\", \"topic\":\"Another topic.\"}",
  "{\"address\":\"email3@foo.bar.com\", \"topic\":\"Yet another topic.\"}"
]
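(For completeness: the attempted filter can be salvaged by parsing each of those strings back into JSON with fromjson. A sketch, assuming the sample file above is saved as my_file:)

```shell
# Recreate the sample input from the question (file name my_file as above).
cat >my_file <<'EOF'
{"address":"email1@foo.bar.com", "topic":"Some topic."}
{"address":"email2@foo.bar.com", "topic":"Another topic."}
{"address":"email3@foo.bar.com", "topic":"Yet another topic."}
EOF

# split("\n")[:-1] yields an array of raw line strings, as in the question;
# map(fromjson) then parses each string into a JSON object.
jq --slurp --raw-input 'split("\n")[:-1] | map(fromjson)' my_file
```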

I'd like to have:

[
  {"address":"email1@foo.bar.com", "topic":"Some topic."},
  {"address":"email2@foo.bar.com", "topic":"Another topic."},
  {"address":"email3@foo.bar.com", "topic":"Yet another topic."}
]
peak

2 Answers

jq -n '[inputs]' <in.jsonl >out.json

...or, as suggested by @borrible:

jq --slurp . <in.jsonl >out.json
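A quick demonstration on the sample data (the file names in.jsonl and out.json are just placeholders from the commands above):

```shell
cat >in.jsonl <<'EOF'
{"address":"email1@foo.bar.com", "topic":"Some topic."}
{"address":"email2@foo.bar.com", "topic":"Another topic."}
{"address":"email3@foo.bar.com", "topic":"Yet another topic."}
EOF

# -n suppresses jq's normal per-input loop; `inputs` then streams every JSON
# value from stdin, and [...] collects them into a single array.
jq -n '[inputs]' <in.jsonl >out.json

# --slurp reads all inputs into an array before applying the filter `.`,
# producing the same result.
jq --slurp . <in.jsonl | cmp -s - out.json && echo identical
```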
Charles Duffy

For the task at hand, using jq's "slurp" option or [inputs] means holding the entire file's contents in memory at once, which for large files is a potentially huge waste of resources.

A trivial but efficient solution can be implemented in awk as follows:

awk 'BEGIN {print "[";} NF==0{next;} n=="" {print;n++;next;} {print ","; print;} END {print "]"}'

An equivalent efficient solution in jq is possible using foreach and inputs, and is left as an exercise.
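One way to sketch that exercise (not necessarily what the author had in mind): emit the brackets and commas as raw strings, and let foreach count the inputs so that a comma precedes every object after the first.

```shell
printf '%s\n' \
  '{"address":"email1@foo.bar.com", "topic":"Some topic."}' \
  '{"address":"email2@foo.bar.com", "topic":"Another topic."}' |
jq -nr '
  "[",
  (foreach inputs as $o (0; . + 1;
     (if . > 1 then "," else empty end),   # comma before all but the first
     ($o | tojson))),                      # the object itself, compact
  "]"'
```

Unlike --slurp, this keeps only one object in memory at a time; the output is valid JSON, just streamed one element per line rather than pretty-printed.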

peak