
I have 5 tables in an Oracle database, and I need to create similar structures for them in AWS Redshift. I am using cx_Oracle to connect to Oracle and dump the DDL to a CSV file, but changing that DDL datatype by datatype in Python so that it runs in Redshift is turning out to be a very tedious process.
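For context, here is a minimal sketch of the extraction step, assuming DBMS_METADATA.GET_DDL can be called through cx_Oracle (the connection string and table names below are placeholders, not my actual setup):

    # Minimal sketch: dump the CREATE TABLE DDL for a few tables with cx_Oracle.
    # The connection string and table names are placeholders.
    import cx_Oracle

    TABLES = ["TABLE_1", "TABLE_2"]  # placeholder names for the 5 tables

    conn = cx_Oracle.connect("user/password@host:1521/service")
    cur = conn.cursor()

    with open("oracle_ddl.sql", "w") as out:
        for table in TABLES:
            # DBMS_METADATA.GET_DDL returns the CREATE TABLE statement as a CLOB
            cur.execute(
                "SELECT DBMS_METADATA.GET_DDL('TABLE', :tab) FROM dual",
                tab=table)
            ddl = cur.fetchone()[0].read()  # LOB -> str
            out.write(ddl + ";\n\n")

    cur.close()
    conn.close()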

Is there an easy way to do this in Python? Is there a library or function that handles this seamlessly?
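For illustration, a table-driven version of the datatype rewriting I am doing by hand would look roughly like the sketch below; the Oracle-to-Redshift mappings in it are rough assumptions and would need to be checked case by case:

    # Sketch: a rule table for datatype rewriting instead of chained elif statements.
    # The target Redshift types below are illustrative assumptions only.
    import re

    TYPE_RULES = [
        (re.compile(r"VARCHAR2\((\d+)(?:\s+(?:BYTE|CHAR))?\)", re.I), r"VARCHAR(\1)"),
        (re.compile(r"NUMBER\((\d+),\s*(\d+)\)", re.I), r"DECIMAL(\1,\2)"),
        (re.compile(r"NUMBER\((\d+)\)", re.I), r"DECIMAL(\1,0)"),
        (re.compile(r"\bNUMBER\b", re.I), "DOUBLE PRECISION"),
        (re.compile(r"\bCLOB\b", re.I), "VARCHAR(65535)"),
        (re.compile(r"\bDATE\b", re.I), "TIMESTAMP"),
    ]

    def oracle_ddl_to_redshift(ddl):
        # Apply each rule in order; later rules only see what earlier ones left.
        for pattern, replacement in TYPE_RULES:
            ddl = pattern.sub(replacement, ddl)
        return ddl

    print(oracle_ddl_to_redshift(
        'CREATE TABLE "HR"."EMP" ("NAME" VARCHAR2(100 BYTE), "SAL" NUMBER(10,2))'))
    # -> CREATE TABLE "HR"."EMP" ("NAME" VARCHAR(100), "SAL" DECIMAL(10,2))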

PS: I tried to use the AWS Schema Conversion Tool (SCT) for this. The tables got created in Redshift, but with a glitch: every datatype's length got doubled in Redshift.

For example, varchar(100) in Oracle became varchar(200) in Redshift. Has anyone faced a similar issue with SCT before?

Sourav Gupta
  • I was able to create a DDL for Redshift by writing elif statements to change the datatypes, but I was looking for a simpler approach. Any help is appreciated. – Sourav Gupta Jun 14 '18 at 14:22

1 Answer


The cx_OracleTools project and specifically the DescribeObject tool within that project have the ability to extract the DDL from an Oracle database. You may be able to use that.

Anthony Tuininga
  • I am able to extract the Oracle DDL. I am looking for a way to change that DDL to a DDL which can execute in Redshift. Redshift uses different datatype names. Is there any library or package which can do the necessary DDL changes? As of now, I have written custom if-else statements to create a Redshift DDL from an Oracle DDL. – Sourav Gupta Jun 14 '18 at 15:55
  • Well, you can use the underlying code for cx_OracleTools and adjust the bits and pieces of the metadata (such as data types) before writing the metadata out -- that may be of use since you can get at the bits and pieces without having to parse. Other than that I have no knowledge of Redshift to aid you in this! – Anthony Tuininga Jun 15 '18 at 14:29
  • Yes... that sounds like an option. Will try and get back. – Sourav Gupta Jun 18 '18 at 17:22