You can start with psycopg2 to make your connection to Redshift. This is completely separate from any Tableau functionality. After you have connected with psycopg2, you can run your SQL and use pandas to write the results out to a CSV file. Then you can use the tableauhyperapi sample's run_create_hyper_file_from_csv() function to create the actual .hyper file. After that, use tableauserverclient to publish to your server or Tableau Online.
Redshift to pandas (CSV):
import psycopg2
import pandas as pd
REDSHIFT_USER = ''
REDSHIFT_PASSWORD = ''
REDSHIFT_HOST = ''
REDSHIFT_PORT = ''
REDSHIFT_DATABASE = ''
cnxn = psycopg2.connect(f"postgresql://{REDSHIFT_USER}:
{REDSHIFT_PASSWORD}@{REDSHIFT_HOST}:{REDSHIFT_PORT}/{REDSHIFT_DATABASE}")
sql = ''' <full sql statement> '''
df = pd.read_sql(sql, cnxn)
cnxn.close()
#df.head()
df.to_csv('yourFile.csv', index=False, encoding='utf-8', mode='w')
tableauhyperapi (CSV to .hyper): I use a version of this exact sample for creating a .hyper file from a CSV: https://github.com/tableau/hyper-api-samples/blob/main/Tableau-Supported/Python/create_hyper_file_from_csv.py
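Here is a trimmed-down sketch of what that sample does, so you can see the shape of it. The table columns, file names, and schema here are placeholders you would swap for the columns actually in yourFile.csv:

import pathlib
from tableauhyperapi import (HyperProcess, Telemetry, Connection, CreateMode,
                             TableDefinition, SqlType, TableName, escape_string_literal)

# Placeholder schema -- replace these columns with the ones in your CSV
extract_table = TableDefinition(
    table_name=TableName("Extract", "Extract"),
    columns=[
        TableDefinition.Column("order_id", SqlType.text()),
        TableDefinition.Column("amount", SqlType.double()),
    ],
)

def run_create_hyper_file_from_csv():
    # Start a local Hyper process and create (or replace) the .hyper file
    with HyperProcess(telemetry=Telemetry.DO_NOT_SEND_USAGE_DATA_TO_TABLEAU) as hyper:
        with Connection(endpoint=hyper.endpoint,
                        database=pathlib.Path("yourFile.hyper"),
                        create_mode=CreateMode.CREATE_AND_REPLACE) as connection:
            connection.catalog.create_schema(schema=extract_table.table_name.schema_name)
            connection.catalog.create_table(table_definition=extract_table)
            # Bulk-load the CSV with Hyper's COPY command (the fast ingest path)
            count = connection.execute_command(
                f"COPY {extract_table.table_name} FROM {escape_string_literal('yourFile.csv')} "
                "WITH (format csv, NULL 'NULL', delimiter ',', header)"
            )
            print(f"Loaded {count} rows into {extract_table.table_name}.")

if __name__ == "__main__":
    run_create_hyper_file_from_csv()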
tableauserverclient for publishing: https://tableau.github.io/server-client-python/docs/api-ref#workbooks
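Since the output is a .hyper file, I publish it as a data source; the workbooks endpoint in that link works the same way if you package it into a workbook instead. A minimal sketch, assuming a personal access token and a project named "Default" (the token, site, server URL, and project name are all placeholders):

import tableauserverclient as TSC

# Placeholder credentials/site/server -- substitute your own values
tableau_auth = TSC.PersonalAccessTokenAuth("my-token-name", "my-token-secret",
                                           site_id="mysite")
server = TSC.Server("https://10ax.online.tableau.com", use_server_version=True)

with server.auth.sign_in(tableau_auth):
    # Look up the target project by name (assumed "Default" here)
    all_projects, _ = server.projects.get()
    project = next(p for p in all_projects if p.name == "Default")

    # Publish (or overwrite) the .hyper as a published data source
    datasource_item = TSC.DatasourceItem(project_id=project.id, name="yourFile")
    datasource_item = server.datasources.publish(
        datasource_item, "yourFile.hyper", mode=TSC.Server.PublishMode.Overwrite
    )
    print(f"Published data source id: {datasource_item.id}")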
This can all then be scheduled to run and publish without using any Tableau backgrounder resources. If you run into any issues with the hyperapi or serverclient setup, let me know.