
I am looking for tools to convert JSON to the Turtle format.

For example:

{
    "name": "Bart Simpson",
    "age": "11"
}

to something like:

@base <http://example.com/people> .
@prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix people: <http://example.com/people.rdf> .

<http://example.com/people_1> people:name "Bart Simpson" ; 
             people:age "11" .

For me, the challenge is identifying the right steps to perform the conversion. It seems I need to define a vocabulary first, like http://example.com/people.rdf, but it's unclear to me how to define one.

I am also looking for tools that can perform this JSON-to-Turtle conversion given such a vocabulary.

I might have misunderstood the concept of linked data here. Please let me know if this question doesn't make sense.

FewKey

2 Answers


With JARQL you can run SPARQL CONSTRUCT queries over JSON files and thus create RDF in any serialization you like (Turtle, RDF/XML, etc.).
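As a sketch (not tested against this exact input): JARQL parses the JSON and exposes its keys as triples in its jarql: namespace, so a CONSTRUCT query along these lines could produce the Turtle from the question. The people: prefix and the subject IRI are the hypothetical ones from the question, not anything JARQL provides.

```sparql
PREFIX jarql: <http://jarql.com/>
PREFIX people: <http://example.com/people.rdf#>

CONSTRUCT {
  <http://example.com/people_1> people:name ?name ;
                                people:age ?age .
}
WHERE {
  # JARQL binds the parsed JSON keys as jarql: predicates
  ?person jarql:name ?name ;
          jarql:age ?age .
}
```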

Reto Gmür
Thanks! Though JARQL focuses on querying JSON rather than plain format conversion, it did point me to a conversion tool, JSON2RDF (https://github.com/AtomGraph/JSON2RDF). – FewKey Mar 04 '21 at 11:41

It is useful to consider how you intend to use the tool. As suggested in the comments, you should take a look first at JSON-LD, i.e. JSON for Linked Data. This is the primary way of connecting JSON and RDF, suitable if you want to publish RDF-compatible data but want to preserve the main JSON structure for some reason. You don't need to use Turtle, as most tools should be able to process JSON-LD just fine.

All you need is to add a @context property which describes how the keys and values map to RDF vocabularies. If you have a service that communicates regularly in JSON, there is no need for anything else.
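For the example above, that could look like the following JSON-LD document. The people.rdf# IRIs and the @id are placeholders in the spirit of the question; in practice you would point the keys at whatever vocabulary you choose.

```json
{
  "@context": {
    "name": "http://example.com/people.rdf#name",
    "age": "http://example.com/people.rdf#age"
  },
  "@id": "http://example.com/people_1",
  "name": "Bart Simpson",
  "age": "11"
}
```

Any JSON-LD-aware processor (Apache Jena, rdflib, the online JSON-LD Playground, etc.) can parse this directly and serialize the resulting triples as Turtle.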

Of course there are other tools, such as direct-mapping or Tarql-like converters. They are useful if you have a single large dataset that you simply want to convert once and be done with, but I don't think it's worth incorporating them into your pipeline when a @context is all you need.

Also, in RDF you never have to define a vocabulary in order to use it. You may need a definition for some consumers or for reasoning, and it is generally useful, but you can add it later (much like you can publish XML data without linking to a DTD/schema).
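If you do later want to define the vocabulary, a minimal RDFS sketch is enough. This uses hash-style IRIs derived from the question's example namespace; the labels and comments are illustrative.

```turtle
@prefix rdf:  <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix people: <http://example.com/people.rdf#> .

people:name a rdf:Property ;
    rdfs:label "name" ;
    rdfs:comment "The name of a person." .

people:age a rdf:Property ;
    rdfs:label "age" ;
    rdfs:comment "The age of a person." .
```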


There is also SPARQL Anything, which uses its own pseudo-vocabulary to encode any keyed or indexed collection. For your example, you would get:

@prefix xyz: <http://sparql.xyz/facade-x/data/> .
@prefix fx: <http://sparql.xyz/facade-x/ns/> .
[ a fx:root ;
  xyz:name "Bart Simpson" ;
  xyz:age "11"
] .

The JSON properties translate directly to the xyz: namespace.
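A sketch of how the conversion could then be driven with a SPARQL Anything CONSTRUCT query, reading the JSON through its x-sparql-anything: SERVICE. The people: namespace, the subject IRI, and the people.json location are assumptions carried over from the question's example.

```sparql
PREFIX xyz: <http://sparql.xyz/facade-x/data/>
PREFIX fx:  <http://sparql.xyz/facade-x/ns/>
PREFIX people: <http://example.com/people.rdf#>

CONSTRUCT {
  <http://example.com/people_1> people:name ?name ;
                                people:age ?age .
}
WHERE {
  # SPARQL Anything reads the JSON file and exposes it as Facade-X triples
  SERVICE <x-sparql-anything:location=people.json> {
    ?root a fx:root ;
          xyz:name ?name ;
          xyz:age ?age .
  }
}
```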

IS4