 

Count empty values in dataframe column in Spark (Scala)

I'm trying to count empty values in a DataFrame column like this:

df.filter((df(colname) === null) || (df(colname) === "")).count()

Here colname holds the name of the column. This works fine if the column type is string, but if the column type is integer and there are some nulls, this code always returns 0. Why is that, and how can I change it so it works?
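For reference, a minimal reproduction of the integer case might look like this (the column name num and the local SparkSession are illustrative assumptions, not from my actual job):

import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder()
  .master("local[*]")
  .appName("null-count-repro")
  .getOrCreate()
import spark.implicits._

// An integer column containing nulls: Option[Int] encodes None as SQL NULL
val df = Seq(Some(1), None, Some(3), None).toDF("num")

// Prints 0 even though the column contains two nulls
println(df.filter((df("num") === null) || (df("num") === "")).count())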

asked by sergeda


1 Answer

As mentioned in the question, df.filter((df(colname) === null) || (df(colname) === "")).count() works for String columns, but testing shows that nulls are not handled: under SQL's three-valued logic, comparing a column to null with === evaluates to null rather than true, so rows containing null never pass the filter.
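A quick way to see this, reusing the illustrative df from the reproduction above:

// `=== null` evaluates to NULL for every row, so nothing matches
df.filter(df("num") === null).count()   // 0

// `isNull` is the null-safe test and finds both null rows
df.filter(df("num").isNull).count()     // 2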

@Psidom's answer handles both null and empty strings, but it does not handle NaN.

Adding a check with .isNaN should handle all three cases:

df.filter(df(colName).isNull || df(colName) === "" || df(colName).isNaN).count()
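For example, with a double column, since only floating-point columns can actually hold NaN (the sample data and names here are illustrative), and the same local SparkSession setup as in the question's reproduction:

// One ordinary value, one null, one NaN
val values = Seq(Some(1.0), None, Some(Double.NaN)).toDF("value")
val colName = "value"

val emptyCount =
  values.filter(values(colName).isNull || values(colName) === "" || values(colName).isNaN).count()

println(emptyCount)  // 2: the null row and the NaN row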
answered by Ramesh Maharjan


