When reading Impala data using the PyHive library and pandas.read_sql, I get this error: UnicodeDecodeError: 'utf-8' codec can't decode byte 0xe2 in position 3071: unexpected end of data

The likely reason for this error is that some of the stored data is corrupted (not valid UTF-8).

How can I switch to a different encoding (or handle the invalid bytes) so that I can still get the data into a DataFrame?
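For context on what the error means: 0xe2 is the first byte of a three-byte UTF-8 sequence (curly quotes and dashes start with it), and "unexpected end of data" means the sequence is cut off. A minimal sketch of how Python's decode error handlers deal with such bytes (the byte string here is a made-up example, not your actual data):

```python
# Valid UTF-8 text followed by a truncated multi-byte sequence,
# mimicking the corrupted data behind the UnicodeDecodeError.
truncated = b"caf\xc3\xa9 \xe2"

# The default ('strict') handler raises UnicodeDecodeError:
try:
    truncated.decode("utf-8")
except UnicodeDecodeError as e:
    print(e.reason)  # unexpected end of data

# 'ignore' silently drops the invalid bytes:
print(truncated.decode("utf-8", "ignore"))   # café

# 'replace' substitutes U+FFFD so you can see where data was lost:
print(truncated.decode("utf-8", "replace"))  # café �
```

Which handler to use depends on whether you want to detect the corruption ('replace') or just discard it ('ignore').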

Shankar Pandala

1 Answer

A workaround is the following:

1) Retrieve the data chunk by chunk via a PyHive cursor.

2) Preprocess each chunk: decode the bytes, discarding invalid sequences.

3) Append the chunk to a final DataFrame.


from pyhive import hive
import pandas as pd

# cursor to the database.
cursor = hive.Connection(host=HOST, port=PORT, username=USERNAME).cursor()

# execute the query on the database side.
cursor.execute("SELECT id, message FROM table")

# result dataframe, empty for now.
df = pd.DataFrame(columns=['id', 'message'])

while True:
    # fetch 10k rows (as tuples).
    rows = cursor.fetchmany(10000)

    # if no more rows to retrieve, we stop.
    if not rows:
        break

    # Preprocessing: decode here, ignoring invalid byte sequences.
    # (row_id avoids shadowing the built-in id.)
    rows = [(row_id, message.decode('utf-8', 'ignore')) for row_id, message in rows]

    # put result in a temporary dataframe
    df_tmp = pd.DataFrame(rows, columns=['id', 'message'])

    # append the temporary dataframe to the result;
    # ignore_index avoids duplicate index labels across chunks.
    df = pd.concat([df, df_tmp], ignore_index=True)

# df now contains all rows, with messages decoded.

Ahmed Elsafty