
I've been struggling with this for some time now. Apologies: I changed the query name to getDevicesReadings for this question, but I have actually been using getAllUserDevices (sorry for any confusion).

type Device {
   id: String
   device: String!
}

type Reading {
   device: String
   time: Int
}

type PaginatedDevices {
   devices: [Device]
   readings: [Reading]
   nextToken: String
}

type Query {
   getDevicesReadings(nextToken: String, count: Int): PaginatedDevices
}

Then I have a resolver on the query getDevicesReadings, which works fine and returns all the devices a user has. So far so good:

{
"version": "2017-02-28",
"operation": "Query",
"query" : {
  "expression": "id = :id",
    "expressionValues" : {
      ":id" : { "S" : "${context.identity.username}" }
    }
}
#if( ${context.arguments.count} )
    ,"limit": ${context.arguments.count}
#end
#if( ${context.arguments.nextToken} )
    ,"nextToken": "${context.arguments.nextToken}"
#end
}

Now I want to return all the readings those devices have, based on the source result, so I have a resolver on getDevicesReadings/readings:

#set($ids = [])
#foreach($id in ${ctx.source.devices})
  #set($map = {})
  $util.qr($map.put("device", $util.dynamodb.toString($id.device)))
  $util.qr($ids.add($map))
#end

{
"version" : "2018-05-29",
"operation" : "BatchGetItem",
 "tables" : {
    "readings": {
        "keys": $util.toJson($ids),
        "consistentRead": true
    }
  }
}

With a response mapping like so:

$utils.toJson($context.result.data.readings)

I run this query:

query getShit{
  getDevicesReadings{
    devices{
      device
     }
    readings{
      device
      time
    }
  }
}

This returns the following result:

{
  "data": {
    "getAllUserDevices": {
     "devices": [
       {
         "device": "123"
       },
       {
         "device": "a935eeb8-a0d0-11e8-a020-7c67a28eda41"
       }
     ],
     "readings": [
       null,
       null
     ]
   }
 }
}

(screenshot: the readings table schema in the DynamoDB console)

As you can see in the image, the primary partition key on the readings table is device. Looking at the logs, I see the following:

(screenshot: CloudWatch log entry for the BatchGetItem call)

Sorry if you can't read the log; it basically says that there are unprocessedKeys,

and the following error message

"message": "The provided key element does not match the schema (Service: AmazonDynamoDBv2; Status Code: 400; Error Code: ValidationException; Request ID: 0H21LJE234CH1GO7A705VNQTJVVV4KQNSO5AEMVJF66Q9ASUAAJG)",

I'm guessing my mapping somehow isn't quite correct and I'm passing in the wrong keys?

Any help greatly appreciated

hounded

3 Answers


No, you can absolutely use batch resolvers when you have a primary sort key. The error in your example is that you were not providing the primary sort key to the resolver.

This code needs to provide a "time" as well as a "device", because you need both to fully specify the primary key.

#set($ids = [])
#foreach($id in ${ctx.source.devices})
  #set($map = {})
  $util.qr($map.put("device", $util.dynamodb.toString($id.device)))
  $util.qr($ids.add($map))
#end

You should have something like this:

#set($ids = [])
#foreach($id in ${ctx.source.devices})
  #set($map = {})
  # The table's primary key is made up of "device" AND "time"
  $util.qr($map.put("device", $util.dynamodb.toString($id.device)))
  $util.qr($map.put("time", $util.dynamodb.toString($id.time)))
  $util.qr($ids.add($map))
#end

If you want to get many records that share the same "device" value but that have different "time" values, you need to use a DynamoDB Query operation, not a batch get.
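For completeness, a request mapping template for such a Query might look like the sketch below. This assumes you move the resolver onto a per-device field (for example, a `readings` field added to the `Device` type, which is not in your original schema), since a single Query call covers one partition key value:

```
{
    "version" : "2017-02-28",
    "operation" : "Query",
    "query" : {
        "expression" : "device = :device",
        "expressionValues" : {
            ":device" : { "S" : "${context.source.device}" }
        }
    }
}
```

AppSync would then run one Query per device as it resolves each item in the devices list.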

mparis

You're correct: the request mapping template you provided doesn't match the primary key on the readings table. A BatchGetItem expects keys to be full primary keys, but you are only passing the hash key.

For the BatchGetItem call to succeed you must pass both hash and sort key, so in this case, both device and time attributes.

Maybe a Query on the readings table would be more appropriate?

Tinou
  • Thanks for the answer, both you and mparis have put me right. I'm going to give the answer to mparis as he threw in an example. Once again thanks guys! Hey, extra props if you can tell me if there is any advantage of having an AppSync VTL resolver as opposed to a Lambda resolver – hounded Aug 29 '18 at 20:58
  • Hi Tinou, you see how I loop through then extend the list in my lambda function, how would I do that with VTL in a query? I will ask as a separate question – hounded Aug 29 '18 at 22:10
  • no worries, same team! If you use a Lambda resolver, you introduce an intermediate step to query your DynamoDB table. Your request lifecycle will be AppSync -> Lambda -> DynamoDB vs AppSync -> DynamoDB if you use a DynamoDB resolver within AppSync. That means the overall latency of your GraphQL query will increase. For more complicated workflows, using a Lambda resolver should give you more control. – Tinou Aug 29 '18 at 22:13
  • Awesome, thanks Tinou, really appreciate the time taken to answer. I have asked a question here https://stackoverflow.com/questions/52086862/aws-appsync-query-resolver Is what I'm asking possible? – hounded Aug 29 '18 at 22:23

So you can't have a batch resolver when you have a primary sort key?!

So the answer was to create a Lambda function and tack that on as my resolver:

import boto3
from boto3.dynamodb.conditions import Key

def lambda_handler(event, context):
    # Create the resource and table reference once, outside the loop
    dynamodb = boto3.resource('dynamodb')
    readings = dynamodb.Table('readings')

    items = []
    for device in event['source']['devices']:
        response = readings.query(
            KeyConditionExpression=Key('device').eq(device['device'])
        )
        items.extend(response['Items'])
    return items
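One caveat with the Lambda approach: a single `Table.query` call returns at most 1 MB of data, so a device with many readings can come back truncated. A sketch of a helper that follows `LastEvaluatedKey` until all pages are fetched (the helper name is my own, not part of the original function):

```python
def query_all_pages(table, **query_kwargs):
    """Call table.query repeatedly, following LastEvaluatedKey,
    and return the items from every page."""
    items = []
    while True:
        response = table.query(**query_kwargs)
        items.extend(response['Items'])
        last_key = response.get('LastEvaluatedKey')
        if last_key is None:
            return items
        # Resume the next query where the previous page stopped
        query_kwargs['ExclusiveStartKey'] = last_key
```

Inside the handler you would call it per device, e.g. `query_all_pages(readings, KeyConditionExpression=Key('device').eq(device['device']))`, instead of a single `readings.query(...)`.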
hounded