I'm trying to read a Parquet file using S3 Select, but I'm running into issues when the data contains missing values: S3 Select simply skips missing values in its output, which makes the results impossible to parse. A reproducible example with Python and boto3:

import pandas as pd
import numpy as np
import boto3

session = boto3.session.Session()
s3 = session.client('s3')

df = pd.DataFrame({'A': [1.0, 2.0, 3.0], 'B': [5, np.nan, 7]})
df['C'] = np.nan
print(df)

# Prints:
#        A    B   C
#   0  1.0  5.0 NaN
#   1  2.0  NaN NaN
#   2  3.0  7.0 NaN

bucket = 'your-test-bucket'
key = 'temp/s3_select/df.parquet'
df.to_parquet(f's3://{bucket}/{key}')  # writing to an s3:// path requires pyarrow (or fastparquet) and s3fs

r = s3.select_object_content(
    Bucket=bucket,
    Key=key,
    ExpressionType='SQL',
    Expression='select s.A, s.B, s.C from s3object s',
    InputSerialization = {'Parquet': {}},
    OutputSerialization = {'CSV': {}},
)

records = []
for event in r['Payload']:
    if 'Records' in event:
        records.append(event['Records']['Payload'].decode('utf-8'))
print(records[0])

# Prints:
#    1.0,5.0
#    2.0
#    3.0,7.0

i.e., all missing values are simply skipped, so the rows no longer have a consistent number of fields.

Is there a way to get a result with missing values appropriately encoded?


1 Answer


This was indeed a bug in S3 Select, and as of May 9th, 2019, it has been fixed. The code above now produces:

1.0,5.0,
2.0,,
3.0,7.0,
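
With the fix in place, the empty fields can be read back as NaN. A minimal sketch of parsing the streamed records with pandas (the column names are passed explicitly here, since the CSV output serialization does not include a header row):

import io
import pandas as pd

# Concatenate the streamed Records payloads into one CSV string,
# then parse it; empty fields are interpreted as NaN.
csv_text = ''.join(records)
result = pd.read_csv(io.StringIO(csv_text), names=['A', 'B', 'C'])
print(result)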