My first question here!
I'm learning Spark and so far it's awesome. I'm currently writing some DataFrames to Oracle using DF.write.mode("append").jdbc.
Now I need to truncate the table, since I don't want to append. If I use "overwrite" mode, it drops the table and creates a new one, but then I'd have to re-GRANT users access to it. Not good.
Can I do something like an Oracle TRUNCATE using Spark SQL? Open to suggestions! Thanks for your time.
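For context, this is roughly what my write looks like today (the URL, table name and credentials below are just placeholders for my real ones):

import java.util.Properties

// Placeholder connection details -- not my real environment
val url = "jdbc:oracle:thin:@//dbhost:1521/ORCLPDB1"
val prop = new Properties()
prop.setProperty("user", "my_user")
prop.setProperty("password", "my_password")
prop.setProperty("driver", "oracle.jdbc.OracleDriver")

// What I do today: append rows to the existing Oracle table
spark.range(10).write.mode("append").jdbc(url, "MY_SCHEMA.MY_TABLE", prop)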
There is an option that makes Spark truncate the target Oracle table instead of dropping it. You can find the syntax in https://github.com/apache/spark/pull/14086
spark.range(10).write.mode("overwrite").option("truncate", true).jdbc(url, "table_with_index", prop)
Depending on the versions of Spark, Oracle and the JDBC driver, there are other parameters you can use to make the truncate cascade, as described in https://spark.apache.org/docs/latest/sql-data-sources-jdbc.html
In my experience this works on some DB engines and depends a lot on the JDBC dialect you use, because not all of them support truncation.
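Putting it together, a minimal sketch for Oracle could look like the following (url, table name and prop are placeholders; the cascadeTruncate option only takes effect if your Spark version and the Oracle dialect support it):

import java.util.Properties

// Placeholder connection details -- same shape as the snippet above
val url = "jdbc:oracle:thin:@//dbhost:1521/ORCLPDB1"
val prop = new Properties()
prop.setProperty("user", "my_user")
prop.setProperty("password", "my_password")
prop.setProperty("driver", "oracle.jdbc.OracleDriver")

// Overwrite with truncate: Spark issues TRUNCATE TABLE instead of DROP/CREATE,
// so existing grants and other table metadata are preserved.
spark.range(10).write
  .mode("overwrite")
  .option("truncate", "true")
  // Optional, newer Spark versions only: TRUNCATE TABLE ... CASCADE
  .option("cascadeTruncate", "true")
  .jdbc(url, "MY_SCHEMA.MY_TABLE", prop)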
Hope this helps