What we use at the company (and at some other projects I've been involved in) is Fluentd. You can find a well-maintained Jenkins plugin here or here.
The main feature is that you can send any valid JSON, which can be analyzed later with your own tooling or something like the ELK stack (you can find a lot of HOWTO articles), or some other stack.
Why Fluentd? I think the author's GitHub page states it clearly: "Our choice was to use Fluentd as it looks mature enough to handle a lot of data and it supports a lot of destination endpoints (DBs, file, http, etc)." The data outputs could be anything you want: Hadoop, Mongo, S3, AWS/Azure services, or any existing database (https://www.fluentd.org/dataoutputs), plus a lot of visualization options.
You might ask, "why not push it directly to the database?" The answer is that you would need to push it in a specific format, and that data won't be easy to reshape later. Say you've built your own tool but later realize you could use Kibana instead; you'd then need to migrate everything over. If you use Fluentd, the only thing you need to think about is how to format the JSON, which is much simpler than designing a database schema. Fluentd is also highly scalable and supports robust failover.
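To make the "just format the JSON" part concrete, here's a minimal sketch of shaping one test result as a flat JSON record and posting it to Fluentd's HTTP input (the `http` source plugin, which conventionally listens on port 9880). The field names (`suite`, `case`, `status`, `duration_ms`) and the tag `ci.test` are hypothetical; use whatever your analysis stack expects.

```python
import json
import urllib.request

def build_test_event(suite, case, status, duration_ms):
    """Shape one test result as a flat JSON record.
    Field names here are hypothetical -- pick whatever
    your downstream queries/dashboards expect."""
    return {
        "suite": suite,
        "case": case,
        "status": status,
        "duration_ms": duration_ms,
    }

def send_to_fluentd(event, tag="ci.test", host="localhost", port=9880):
    """POST the event to Fluentd's in_http input.
    Assumes a <source> @type http block is enabled in fluent.conf;
    the path component becomes the Fluentd tag."""
    req = urllib.request.Request(
        url="http://{}:{}/{}".format(host, port, tag),
        data=json.dumps(event).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status

event = build_test_event("login", "test_happy_path", "passed", 142)
print(json.dumps(event))
# send_to_fluentd(event)  # uncomment once a Fluentd http source is running
```

From there, Fluentd's `<match>` section decides where the record ends up (Elasticsearch, S3, Mongo, a plain file, ...), without the CI job knowing or caring.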
One more potential issue is converting the XML you mentioned you have as test output to JSON. First of all, a lot of test runners support JSON output natively. Secondly, there's a common convention for generating JSON out of XML, and I'd suggest something like the "xml2json" tool, which does a pretty good job:
| XML | JSON |
| --- | --- |
| `<e/>` | `"e": null` |
| `<e>text</e>` | `"e": "text"` |
| `<e name="value"/>` | `"e": {"@name": "value"}` |
| `<e name="value">text</e>` | `"e": {"@name": "value", "#text": "text"}` |
| `<e><a>text</a><b>text</b></e>` | `"e": {"a": "text", "b": "text"}` |
| `<e><a>text</a><a>text</a></e>` | `"e": {"a": ["text", "text"]}` |
| `<e>text<a>text</a></e>` | `"e": {"#text": "text", "a": "text"}` |