
I've read through the TinkerPop documentation, but I don't see (or I missed) a way to atomically increment a property on a vertex.

I'd like to do something like adding a document to a folder and atomically updating a property to cache the counts:

g.V('1234').as('folder')
 //how? .property(single, 'documentCount', documentCount++)
 //how? .property(single, 'iNodeCount', iNodeCount++)
 .addV('iNode').as('document')
 .property(single, 'type', 'document')
 .addE('contains').from('folder').to('document')

and then I could also cache a folder count:

g.V('1234').as('folder')
 //how? .property(single, 'folderCount', folderCount++)
 //how? .property(single, 'iNodeCount', iNodeCount++)
 .addV('iNode').as('childFolder')
 .property(single, 'type', 'folder')
 .addE('contains').from('folder').to('childFolder')

This would help avoid running count() traversals whenever the counts are needed.

Is this possible?

Adam

1 Answer


You can implement such a thing with the sack() step - here's an example:

gremlin> g = TinkerGraph.open().traversal()
==>graphtraversalsource[tinkergraph[vertices:0 edges:0], standard]
gremlin> v = g.addV('folder').property('documentCount',0).next()
==>v[0]
gremlin> g.V(v).sack(assign).by('documentCount').sack(sum).by(constant(1)).property('documentCount', sack())
==>v[0]
gremlin> g.V(v).sack(assign).by('documentCount').sack(sum).by(constant(1)).property('documentCount', sack())
==>v[0]
gremlin> g.V(v).elementMap()
==>[id:0,label:folder,documentCount:2]
gremlin> g.V(v).sack(assign).by('documentCount').sack(sum).by(constant(1)).property('documentCount', sack())
==>v[0]
gremlin> g.V(v).elementMap()
==>[id:0,label:folder,documentCount:3]
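
To connect this back to the original traversal, the same pattern could be chained ahead of the addV()/addE() steps. This is a rough sketch, not tested against Neptune, and it assumes the single cardinality token is in scope and that both counter properties already exist on the folder vertex (otherwise the by() modulator filters out the traverser):

//increment documentCount and iNodeCount on the folder, then add the document
g.V('1234').as('folder')
 .sack(assign).by('documentCount').sack(sum).by(constant(1))
 .property(single, 'documentCount', sack())
 .sack(assign).by('iNodeCount').sack(sum).by(constant(1))
 .property(single, 'iNodeCount', sack())
 .addV('iNode').as('document')
 .property(single, 'type', 'document')
 .addE('contains').from('folder').to('document')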
stephen mallette
  • Stephen, this worked like a charm. Is this guaranteed atomic? Would I have to worry about concurrency? – Adam Sep 01 '22 at 17:10
  • Well, this is the Gremlin answer, but the behavior will be implementation specific. I see you tagged your question with Neptune, so I assume you're using that. I've not tested this exactly, but using the pattern described in the Neptune docs should give you a lock on the property key you are replacing and thus prevent concurrent transactions: https://docs.aws.amazon.com/neptune/latest/userguide/transactions-examples.html#transactions-examples-replace – stephen mallette Sep 01 '22 at 17:24
  • Hmm, okay, so something like: g.V(folderID).sack(assign).by('documentCount').sack(sum).by(constant(1)).property(single, 'documentCount', sack()).as('folder').addV('document').as('document').addE('contains').from('folder').to('document') Per the docs, Neptune considers this a mutation query because it contains addV and addE, so it runs with REPEATABLE READ. Sounds like that would ensure that what is read by sack(assign).by('documentCount') is safe. Does that all sound accurate? – Adam Sep 01 '22 at 17:52
  • Actually, it says that "Neptune provides the strong guarantee that neither NON-REPEATABLE nor PHANTOM reads can happen", but the mutation query runs with READ COMMITTED, not REPEATABLE READ. This makes me think it's not guaranteed that the calculated documentCount would be accurate. – Adam Sep 01 '22 at 17:57
  • After thinking this through and reading it through, I believe that READ COMMITTED is enough to make this safe. From the docs: "In other words, when a range of the index has been read by a mutation transaction, there is a strong guarantee that this range will not be modified by any concurrent transactions until the end of the reading transaction. This guarantees that no non-repeatable reads will occur." – Adam Sep 02 '22 at 01:47