A brief background: I have a serverless MySQL database on AWS RDS with a table of 70 columns and 7,000+ rows. Many of the columns are defined as VARCHAR(2000) or BIGINT.
I'm trying to return all the rows from this table through an AWS Lambda API. Below is the handler function I'm using:
import json
import traceback
# Note: the MySQL connection `conn` and the DecimalEncoder class are created at
# module level, outside this snippet.

def handler(event, context):
    try:
        # Data to be returned
        dataReturn = []
        # Query the database
        with conn.cursor() as cur:
            cur.execute("SELECT <names of all 70 columns> FROM <table_name>")
            row_headers = [x[0] for x in cur.description]  # extract the column names
            rv = cur.fetchall()
            json_data = []
            for result in rv:
                json_data.append(dict(zip(row_headers, result)))
            # print(json.dumps(json_data))
            conn.commit()
        return {
            'statusCode': 200,
            'headers': {
                "Access-Control-Allow-Origin": "*",
                "Access-Control-Allow-Methods": "GET, POST, PUT, DELETE, OPTIONS"
            },
            'body': json.dumps(json_data, cls=DecimalEncoder)
        }
    except Exception:
        print(traceback.format_exc())
        return {
            'statusCode': 502,
            'headers': {
                "Access-Control-Allow-Origin": "*",
                "Access-Control-Allow-Methods": "GET, POST, PUT, DELETE, OPTIONS"
            },
            'body': json.dumps("Server Error Occurred", cls=DecimalEncoder)
        }
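For reference, DecimalEncoder is essentially the standard json.JSONEncoder subclass that converts Decimal values to floats; a rough sketch of what I have (the exact implementation in my code may differ slightly):

import json
from decimal import Decimal

class DecimalEncoder(json.JSONEncoder):
    """Encoder so that Decimal values coming back from MySQL can be serialized to JSON."""
    def default(self, o):
        if isinstance(o, Decimal):
            # Convert Decimal to float for the JSON output
            return float(o)
        return super().default(o)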
However, when I run this, it gives me a 502 server error, with the following traceback in the Lambda logs:
Traceback (most recent call last):
  File "/var/task/getInstrumentsDetails.py", line 55, in handler
    print(json.dumps(json_data))
  File "/var/lang/lib/python3.7/json/__init__.py", line 231, in dumps
    return _default_encoder.encode(obj)
  File "/var/lang/lib/python3.7/json/encoder.py", line 199, in encode
    chunks = self.iterencode(o, _one_shot=True)
  File "/var/lang/lib/python3.7/json/encoder.py", line 257, in iterencode
    return _iterencode(o, 0)
  File "/var/lang/lib/python3.7/json/encoder.py", line 179, in default
    raise TypeError(f'Object of type {o.__class__.__name__} '
TypeError: Object of type Decimal is not JSON serializable
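For completeness, the same TypeError can be reproduced with plain json outside Lambda and RDS (this is just to show where the exception type comes from; I'm not sure whether it is the actual cause of the 502):

import json
from decimal import Decimal

# json.dumps has no default handling for Decimal, so this raises:
#   TypeError: Object of type Decimal is not JSON serializable
json.dumps({"price": Decimal("10.50")})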
As a side note, I am able to return 24 columns from this table using:
SELECT <names_of_24_columns> FROM <table_name>
but it fails as soon as I try to return more than 24 columns.
Is this an issue related to the amount of data that AWS Lambda can read from RDS or return in its response? Please suggest a suitable fix for this problem.
Thank you