
I want to import data in the form of CSV files into a table [using Oracle SQL Developer]. I have about a hundred such files, and each has about 50 columns.

From the SQL*Loader wiki (http://www.orafaq.com/wiki/SQL*Loader_FAQ):

 load data
 infile 'c:\data\mydata.csv'
 into table emp
 fields terminated by "," optionally enclosed by '"'          
 ( empno, empname, sal, deptno )  -- these are the column headers

What I don't want to do is list all the column headers. I just want all the entries in the CSV file to be assigned to the columns of the table in the order in which they appear.

Moreover, after all this, I want to automate it for all 100 files.

prateek
    It would probably take you longer to write something than to do it manually. How about just copy and paste the header from your CSV file into your code, all you need to do is add commas. – Limey Jun 11 '12 at 13:17
  • IF you want to opt out of assigning field names, then there MUST be the same number of fields in every record in the file, as there are fields in the table. What OS are you using? UNIX or a unix tool kit can do this with minimal coding effort. – jim mcnamara Jun 11 '12 at 19:51
  • @jimmcnamara Yes, there are the same number of fields in each record of the file as in the table. I am using Windows. – prateek Jun 12 '12 at 05:50
  • In cases like these (small number of rows and I assume it's a one-off) I use excel. Open the CSV file, save as an xls and then for each row build up an insert statement, referencing the data in the columns. End each line with a semicolon, cut and paste into your favourite database tool then just hit the run button. You can automate the 100 files with a bit of vb script. Altogether I find it's usually simpler than using block importers. – Dave Richardson Jun 16 '12 at 12:35
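The insert-statement approach from the last comment can be sketched in Python rather than Excel/VBScript. This is a minimal sketch, not a robust loader: the table name, file path, and the assumption that every value can be loaded as a quoted string literal are all placeholders.

```python
import csv
import glob

def csv_to_inserts(path, table):
    """Generate one INSERT statement per CSV row; row 1 is assumed to be the header."""
    with open(path, newline="") as f:
        reader = csv.reader(f)
        columns = next(reader)  # header row supplies the column names
        col_list = ", ".join(columns)
        for row in reader:
            # naive quoting: every value becomes a string literal, ' doubled for SQL
            values = ", ".join("'" + v.replace("'", "''") + "'" for v in row)
            yield f"INSERT INTO {table} ({col_list}) VALUES ({values});"

# Automating the 100 files (hypothetical directory):
# for path in glob.glob(r"c:\data\*.csv"):
#     for stmt in csv_to_inserts(path, "emp"):
#         print(stmt)
```

Paste the generated statements into SQL Developer and run them; for large files this will be much slower than SQL*Loader.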

3 Answers


You should write down the columns (and optionally their types) in order to assign the values in your CSV file to each column. You have to do this because the script does not know the order of the columns in the table in your Oracle database.

After you write the columns in the order they appear in your CSV files, you can automate the script for all of your files by typing:

infile *.csv
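If listing 50 columns by hand is the objection, the column list can be generated from each file's header row instead. The following is a hedged sketch, not part of SQL*Loader itself: the directory, the convention that each file is named after its target table, and the fixed field types are all assumptions.

```python
import csv
import glob
import os

CONTROL_TEMPLATE = """load data
infile '{path}'
into table {table}
fields terminated by "," optionally enclosed by '"'
({columns})
"""

def make_ctl(path):
    """Write a SQL*Loader control file next to the CSV, using its header as the column list."""
    with open(path, newline="") as f:
        header = next(csv.reader(f))  # assumes row 1 holds the column names
    table = os.path.splitext(os.path.basename(path))[0]  # assumed: file name == table name
    ctl_path = os.path.splitext(path)[0] + ".ctl"
    with open(ctl_path, "w") as out:
        out.write(CONTROL_TEMPLATE.format(path=path, table=table,
                                          columns=", ".join(header)))
    return ctl_path

# One control file per CSV; then run each with:
#   sqlldr user/pass control=<file>.ctl
# for path in glob.glob(r"c:\data\*.csv"):
#     make_ctl(path)
```

With a `.ctl` per file, a simple batch loop over `sqlldr` covers all 100 files.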
iwita

You can try the Oracle CSV loader. It automatically creates the table and the control file based on the CSV content, then loads the CSV into an Oracle table using SQL*Loader.

Ari

An alternative to sqlldr that does what you are looking for is the LOAD command in SQLcl. It simply matches the header row of the CSV to the table's columns and loads the file. However, it is not as performant and does not offer as much control as sqlldr.

LOAD [schema.]table_name[@db_link] file_name

Here's the full help for it.

sql klrice/klrice
...
KLRICE@xe>help load
LOAD
-----

Loads a comma separated value (csv) file into a table.
The first row of the file must be a header row.  The columns in the header row must match the columns defined on the table.

The columns must be delimited by a comma and may optionally be enclosed in double quotes.
Lines can be terminated with standard line terminators for windows, unix or mac.
File must be encoded UTF8.

The load is processed with 50 rows per batch.
If AUTOCOMMIT is set in SQLCL, a commit is done every 10 batches.
The load is terminated if more than 50 errors are found.

LOAD [schema.]table_name[@db_link] file_name
KLRICE@xe>

Here's an example from a git repo I have at https://github.com/krisrice/maxmind-oracledb:

SQL> drop table geo_lite_asn;

Table GEO_LITE_ASN dropped.

SQL> create table geo_lite_asn (
  2     "network" varchar2(32),
  3     "autonomous_system_number" number,
  4     "autonomous_system_organization" varchar2(200))
  5  /

Table GEO_LITE_ASN created.

SQL> load geo_lite_asn GeoLite2-ASN-CSV_20180130/GeoLite2-ASN-Blocks-IPv4.csv
--Number of rows processed: 397,040
--Number of rows in error: 0
0 - SUCCESS: Load processed without errors
SQL> 
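To cover the 100-file case with SQLcl, one option is to generate a script of LOAD commands and run it once. A sketch under stated assumptions: the directory is hypothetical, and each CSV is assumed to be named after its target table.

```python
import glob
import os

def build_load_script(csv_dir, out_path="load_all.sql"):
    """Emit one SQLcl LOAD command per CSV, assuming each file is named after its table."""
    with open(out_path, "w") as out:
        for path in sorted(glob.glob(os.path.join(csv_dir, "*.csv"))):
            table = os.path.splitext(os.path.basename(path))[0]
            out.write(f"load {table} {path}\n")
    return out_path

# Generate the script, then from SQLcl run:  @load_all.sql
# build_load_script(r"c:\data")
```

Because SQLcl reads the header row itself, no control files or column lists are needed, at the cost of sqlldr's tuning options.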
Kris Rice