UPDATE:
Right now I am trying to read around 120M rows into an array. The reason I'm not doing this through UTL_FILE is that on our production server it requires access as the oracle user to write the flat file and move it into our desired directory. Adding to the problem, a lot of text manipulation is still needed after extraction, which I think is a job for Perl.
What I want to do right now is write my own implementation of Iterator::DBI and whatever dependencies it may need; to be clear though, not necessarily all of them (just some methods of DBI, then idb_rows... just enough to get this running, since I cannot install modules).
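To show the core idea I'm after (since I can't install the module itself): an iterator can be just a closure that hands back the next value on each call and `undef` when exhausted. A minimal sketch; `make_iterator` is my own name, not part of Iterator::DBI:

```perl
use strict;
use warnings;

# A closure-based iterator: returns the next element per call,
# and undef (empty list) once the items run out.
sub make_iterator {
    my (@items) = @_;
    my $i = 0;
    return sub {
        return if $i >= @items;    # exhausted: return undef
        return $items[ $i++ ];
    };
}

my $next = make_iterator(1, 2, 3);
while (defined(my $val = $next->())) {
    print "$val\n";    # consumes one value at a time
}
```

The point is that the caller pulls rows one at a time instead of receiving the whole list, which is what should keep 120M rows out of memory once the source is a database handle rather than an in-memory array.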
Original Question:
Good day. I'm relatively new to Perl programming. A week ago I started receiving out-of-memory messages in Perl again. I was able to get around this before by switching to 64-bit Perl.
I just found yesterday that the kernel of my production machine does not allow me to use more than 4 GB of memory (on my other production server I am able to load large amounts of data into memory).
Here are my specific restrictions on my production server:
- I am not allowed to install new Perl modules
- I am allowed, in a way, to install them locally, but I am having trouble doing so
What I intend to do now is to recreate this module: Iterator::DBI
I have no background in iterators. For the longest time I have developed database extractions and ETL processes through the function below. This is the first time I've encountered an out-of-memory error in the year and a half I've been using it.
sub myDBI
{
    my ($filename) = @_;

    # Read the SQL statement from the trigger file
    unless (open(FILE, '<', $filename))
    {
        Logger("[ ERR ] unable to open $filename\n");
        die;
    }
    my $query = do { local $/; <FILE> };
    close(FILE);

    # Run the query through sqlplus; note that the entire
    # result set is slurped into @result in one go
    my @result = `sqlplus -S $SETTINGS{USER_NAME}/$SETTINGS{PASSWORD}\@$SETTINGS{DB_NAME} <<END
SET HEADING OFF
SET FEEDBACK OFF
SET SERVEROUTPUT ON
SET LINES 5000
SET COLSEP "||"
$query
/
END
`;

    return @result;
}