Right now I have one test dataset with 1 partition, and inside that partition there are 2 parquet files.
If I read the data as:
val df = spark.read.format("delta").load("./test1510/table@v1")
then I get the latest data with 10,000 rows, and if I read:
val df = spark.read.format("delta").load("./test1510/table@v0")
then I get 612 rows. Now my question is: how can I view only the new rows that were added in version 1, i.e. the 10,000 - 612 = 9,388 rows?
In short, at each version I just want to view which data changed. In the delta log I can see the JSON files, and inside those JSON files I can see that a separate parquet file is created at each version, but how can I view this in code?
I am using Spark with Scala
You don't even need to go down to the parquet file level. You can simply use a SQL query to achieve this:
%sql
SELECT * FROM test_delta VERSION AS OF 2
MINUS
SELECT * FROM test_delta VERSION AS OF 1
The query above will give you the rows newly added in version 2 that were not in version 1.
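If the table is not registered in the metastore and you only have a path, as in your case, the same query can be issued from Scala via spark.sql using Delta's path syntax. This is a sketch, and assumes a Delta/Spark combination recent enough to support VERSION AS OF in SQL:

// Sketch: the MINUS query above, run from Scala against the path-based table.
// Assumes this Delta/Spark version supports VERSION AS OF in SQL.
val newRows = spark.sql(
  """SELECT * FROM delta.`./test1510/table` VERSION AS OF 1
    |MINUS
    |SELECT * FROM delta.`./test1510/table` VERSION AS OF 0""".stripMargin)
newRows.show()

MINUS is Spark SQL's synonym for EXCEPT DISTINCT, so this matches the except call below.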
In your case you can do the following (note the order: except returns the rows in the first DataFrame that are not in the second, so the newer version goes first):
val df1 = spark.read.format("delta").load("./test1510/table@v1") // version 1 (10,000 rows)
val df2 = spark.read.format("delta").load("./test1510/table@v0") // version 0 (612 rows)
display(df1.except(df2)) // the 9,388 rows added in version 1
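If you do want to see which parquet files each version wrote, the _delta_log JSON files can also be read in code. A minimal sketch, assuming the standard Delta log layout, where commit N is recorded in a zero-padded file such as 00000000000000000001.json whose "add" actions list the files introduced by that version:

// Minimal sketch: list the parquet files that commit 1 added to the table.
// Assumes the standard _delta_log layout described above.
val commit1 = spark.read.json("./test1510/table/_delta_log/00000000000000000001.json")
val addedFiles = commit1.where("add IS NOT NULL").select("add.path")
addedFiles.show(false)

You could then load those files directly with spark.read.parquet to inspect exactly the data written in that commit, but for a row-level diff the except / MINUS approach above is simpler.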