Here is what my row looks like:
{"id": x,
 "data": [
   {
     "someId": 1,
     "url": "https://twitter.com/HillaryClinton?ref_src=twsrc%5Egoogle%7Ctwcamp%5Eserp%7Ctwgr%5Eauthor"
   },
   {
     "someId": 2,
     "url": "http://nymag.com/daily/intelligencer/2016/05/hillary-clinton-candidacy.html"
   }
 ]}
I created a secondary index on data.url, so retrieving the document is easy, but how do I most efficiently update just that specific nested object?
I might be adding new keys to it or just updating existing ones (newField and anotherField in the example below).
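Since url lives inside the data array, the index is a multi index; in ReQL I created it with something along the lines of r.table("KeyO").index_create("url", r.row["data"]["url"], multi=True). The keys such an index emits per document can be simulated in plain Python (index_keys is an illustrative name, not part of the driver):

```python
def index_keys(doc):
    # Mirrors r.row["data"]["url"] on an array: collect the url of
    # every element, so each url becomes a separate index key.
    return [elem["url"] for elem in doc["data"]]

doc = {
    "id": 1,
    "data": [
        {"someId": 1, "url": "https://twitter.com/HillaryClinton?ref_src=twsrc%5Egoogle%7Ctwcamp%5Eserp%7Ctwgr%5Eauthor"},
        {"someId": 2, "url": "http://nymag.com/daily/intelligencer/2016/05/hillary-clinton-candidacy.html"},
    ],
}
keys = index_keys(doc)
```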
The end result should look like:
"data": [
  {
    "someId": 1,
    "url": "https://twitter.com/HillaryClinton?ref_src=twsrc%5Egoogle%7Ctwcamp%5Eserp%7Ctwgr%5Eauthor",
    "newField": 5,
    "anotherField": 12
  }
  ...
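For clarity, the per-element behavior I want (merge extra fields into just the element whose url matches, leaving the others untouched) can be simulated in plain Python; update_matching_element, target_url, and new_fields below are illustrative names, not driver API:

```python
def update_matching_element(doc, target_url, new_fields):
    """Return a copy of doc where every element of doc['data'] whose
    'url' equals target_url is merged with new_fields; all other
    elements are returned unchanged."""
    return {
        **doc,
        "data": [
            {**elem, **new_fields} if elem.get("url") == target_url else elem
            for elem in doc["data"]
        ],
    }

doc = {
    "id": 1,
    "data": [
        {"someId": 1, "url": "https://twitter.com/HillaryClinton?ref_src=twsrc%5Egoogle%7Ctwcamp%5Eserp%7Ctwgr%5Eauthor"},
        {"someId": 2, "url": "http://nymag.com/daily/intelligencer/2016/05/hillary-clinton-candidacy.html"},
    ],
}
result = update_matching_element(
    doc,
    "https://twitter.com/HillaryClinton?ref_src=twsrc%5Egoogle%7Ctwcamp%5Eserp%7Ctwgr%5Eauthor",
    {"newField": 5, "anotherField": 12},
)
```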
Edit: I made it work like this (Python):
a = r.db("lovli").table("KeyO").get_all(
    "https://www.hillaryclinton.com/", index="url"
).update(
    lambda doc: {
        "data": doc["data"].map(
            lambda singleData: r.branch(
                singleData["url"] == "https://www.hillaryclinton.com/",
                singleData.merge({"status_tweet": 3, "pda": 1}),
                singleData
            )
        )
    }
).run(conn)
Can this be improved? Also, I am going to be updating a lot of URLs at the same time... Is there any way to further improve performance by doing this in bulk?
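The bulk shape I have in mind (an assumption on my part, not yet verified against the server) is to pass all the keys to get_all in one query and merge per-url fields from a lookup object, instead of running one query per url. The plain-Python equivalent of that lookup, with updates as a hypothetical url-to-fields dict:

```python
def bulk_update(docs, updates):
    """For each doc, merge updates[url] into every element of
    doc['data'] whose url appears in updates; elements with no
    pending update are left unchanged. One pass over all docs,
    the bulk analogue of one query per url."""
    return [
        {
            **doc,
            "data": [
                {**elem, **updates.get(elem.get("url"), {})}
                for elem in doc["data"]
            ],
        }
        for doc in docs
    ]

docs = [
    {"id": 1, "data": [
        {"someId": 1, "url": "https://www.hillaryclinton.com/"},
        {"someId": 2, "url": "http://nymag.com/daily/intelligencer/2016/05/hillary-clinton-candidacy.html"},
    ]},
]
updates = {"https://www.hillaryclinton.com/": {"status_tweet": 3, "pda": 1}}
result = bulk_update(docs, updates)
```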