 

How to view specific changes in data at a particular version in Delta Lake

Right now I have one test dataset which has 1 partition, and inside that partition there are 2 parquet files.

If I read data as:

val df = spark.read.format("delta").load("./test1510/table@v1")

Then I get the latest data with 10,000 rows, and if I read:

val df = spark.read.format("delta").load("./test1510/table@v0")

Then I get 612 rows. Now my question is: how can I view only those new rows which were added in version 1, which is 10,000 - 612 = 9,388 rows?

In short, at each version I just want to view which data changed. In the delta log I am able to see JSON files, and inside those JSON files I can see that a separate parquet file is created at each version, but how can I view it in code?

I am using Spark with Scala

asked Sep 02 '25 by Shashank Sharma

1 Answer

You don't even need to go down to the parquet file level. You can simply use a SQL query to achieve this.

%sql
SELECT * FROM test_delta VERSION AS OF 2
MINUS
SELECT * FROM test_delta VERSION AS OF 1

The above query will give you the newly added rows in version 2 which were not in version 1.
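
If you are not in a notebook that supports the %sql magic, the same query can be run from Scala through spark.sql. This is a minimal sketch assuming the Delta table is registered as test_delta (the table name used above) and that your Delta/Spark version supports the VERSION AS OF SQL syntax:

// Assumes a table named test_delta is registered in the catalog
val newRows = spark.sql(
  """SELECT * FROM test_delta VERSION AS OF 2
    |MINUS
    |SELECT * FROM test_delta VERSION AS OF 1""".stripMargin)
newRows.show()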

In your case you can do the following:

val df1 = spark.read.format("delta").load("./test1510/table@v1")
val df2 = spark.read.format("delta").load("./test1510/table@v0")
// rows present in version 1 but not in version 0, i.e. the 9,388 newly added rows
display(df1.except(df2))
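
If you prefer not to use the path@v syntax, Delta Lake also exposes a versionAsOf reader option. A minimal sketch, assuming the same ./test1510/table path as above and using show() where the Databricks display() helper is not available:

// Load two snapshots of the same table via the versionAsOf option
val v1 = spark.read.format("delta").option("versionAsOf", 1).load("./test1510/table")
val v0 = spark.read.format("delta").option("versionAsOf", 0).load("./test1510/table")
// rows added between version 0 and version 1
v1.except(v0).show()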
answered Sep 05 '25 by Gaurang Shah