I am pushing a pandas DataFrame into a Redshift table and getting the following error:
cur.execute("INSERT INTO sir_main VALUES " + str(args_str))
psycopg2.ProgrammingError: Statement is too large. Statement Size: 58034743 bytes. Maximum Allowed: 16777216 bytes
It halts the execution. Is there any way to configure this limit when pushing to the database?
If you are loading more than a few hundred rows, you should save the DataFrame as a flat file to S3 and load it into Redshift using COPY: https://docs.aws.amazon.com/redshift/latest/dg/r_COPY.html
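For illustration, a minimal sketch of that workflow with pandas, boto3, and psycopg2 might look like the following. The bucket name, key, table name, IAM role ARN, and connection details are all placeholders; the cluster is assumed to have a role attached that can read from the bucket.

import io

import boto3
import pandas as pd
import psycopg2


def copy_dataframe_to_redshift(df: pd.DataFrame, conn, bucket: str, key: str,
                               table: str, iam_role: str) -> None:
    # Write the DataFrame to an in-memory CSV (no index, no header row).
    buf = io.StringIO()
    df.to_csv(buf, index=False, header=False)

    # Upload the CSV to S3.
    s3 = boto3.client("s3")
    s3.put_object(Bucket=bucket, Key=key, Body=buf.getvalue())

    # Issue a COPY so Redshift loads the file server-side instead of
    # pushing one huge INSERT statement through the client connection.
    with conn.cursor() as cur:
        cur.execute(
            f"COPY {table} FROM 's3://{bucket}/{key}' "
            f"IAM_ROLE '{iam_role}' "
            "FORMAT AS CSV"
        )
    conn.commit()


# Usage (all identifiers below are placeholders):
# conn = psycopg2.connect(host="...", dbname="...", user="...",
#                         password="...", port=5439)
# copy_dataframe_to_redshift(df, conn, "my-bucket", "staging/sir_main.csv",
#                            "sir_main",
#                            "arn:aws:iam::123456789012:role/my-redshift-role")

This sidesteps the statement-size limit entirely, because the data travels to Redshift via S3 rather than inside the SQL text, and COPY also loads large files considerably faster than row-by-row inserts.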