We are spooling data from an Oracle table into a CSV file on a Linux server, using the code below:
sqlplus -s Schema_User/Schema_Password@DB_Service <<EOF
set termout off
set heading off
set underline "_"
set pagesize 0 embedded on
set linesize 32000
set LONG 50000
set colsep '"|"'
set feedback off
spool ./Temp/ABC.csv
Select COLUMN1,COLUMN2 from ABC;
spool off
EOF
The free space on the Linux server (df -h) is 500 GB.
Generation of the spool file ABC.csv terminates once it reaches 500 GB, and we suspect its final size would be much larger than that.
Please suggest a way to compress the data during the spool process.
Should we first create a compressed temporary table from the Oracle table and then spool from it?
CREATE TABLE XXX COMPRESS FOR QUERY AS SELECT * FROM ABC;
I have modified my code as follows and it is working fine now (note the FIFO path must match the spool path exactly, so both use ./Temp/ABC.csv):
# create a named pipe so gzip can compress the stream as sqlplus writes it
mknod ./Temp/ABC.csv p
nohup gzip -c < ./Temp/ABC.csv > ../Files/ABC.gz &
sqlplus -s Schema_User/Schema_Password@DB_Service <<EOF
set termout off
set heading off
set underline "_"
set pagesize 0 embedded on
set linesize 32000
set LONG 50000
set colsep '"|"'
set feedback off
set trimspool on
spool ./Temp/ABC.csv
Select COLUMN1,COLUMN2 from ABC;
spool off
EOF
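The named-pipe pattern above can be demonstrated end to end without a database: a plain shell producer stands in for the sqlplus spool. This is a minimal sketch; the /tmp paths and the seq producer are illustrative stand-ins, not part of the original setup.

```shell
#!/bin/sh
# Create a named pipe (FIFO): writers block until a reader attaches,
# so no uncompressed data ever lands on disk.
rm -f /tmp/demo_pipe.csv
mkfifo /tmp/demo_pipe.csv

# Background compressor drains the pipe into a .gz file.
gzip -c < /tmp/demo_pipe.csv > /tmp/demo_out.gz &
gzip_pid=$!

# Stand-in for the sqlplus spool: any process that writes to the pipe.
seq 1 100000 > /tmp/demo_pipe.csv

# Wait for gzip to flush and exit before touching the .gz file.
wait "$gzip_pid"

# Verify the round trip: the last row survives compression intact.
last_row=$(gunzip -c /tmp/demo_out.gz | tail -1)
echo "$last_row"
rm -f /tmp/demo_pipe.csv
```

The wait step matters in the real script too: reading ../Files/ABC.gz before the background gzip has exited can yield a truncated archive.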
Let's say your script containing the sqlplus invocation is called script.sh. Isn't
script.sh | gzip > data.csv.gz
what you're looking for?
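A minimal sketch of that approach, with a stub function standing in for script.sh (the stub, its rows, and the output path are illustrative only): in the real script you would drop the spool/spool off lines so the rows go to stdout.

```shell
#!/bin/sh
# Stand-in for ./script.sh: a real version would run the sqlplus heredoc
# without any spool commands, letting the rows flow to stdout.
script() {
    printf 'VAL1"|"VAL2\nVAL3"|"VAL4\n'
}

# Compress the stream as it is produced; nothing uncompressed hits disk.
script | gzip > /tmp/data.csv.gz

# Inspect the result.
gunzip -c /tmp/data.csv.gz
```

Compared with the named-pipe version, this needs no FIFO and no background job, since the shell pipeline itself connects producer and compressor.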