
PySpark dataframe.limit is slow

I am working with a large dataset, but I only want to experiment with a small part of it. Each operation takes a long time, so I want to look at just the head or a limit of the dataframe.

So, for example, I call a UDF (user-defined function) to add a column, but I only care to do so on the first, say, 10 rows.

from pyspark.sql import functions as F
from pyspark.sql.types import IntegerType
sum_cols = F.udf(lambda x: x[0] + x[1], IntegerType())
df_with_sum = df.limit(10).withColumn('C', sum_cols(F.array('A', 'B')))

However, this still takes just as long as it would if I did not use limit.

asked Aug 31 '25 by eran


2 Answers

If you only need to work with the first 10 rows, I think it is better to create a new df with limit and cache it:

df2 = df.limit(10).cache()
df_with_sum = df2.withColumn('C',sum_cols(F.array('A','B')))
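
A minimal, self-contained sketch of this limit-then-cache approach. The column names A and B and the UDF come from the question; the SparkSession setup, the toy dataframe, and the explicit count() to materialize the cache are illustrative assumptions, not part of the original answer.

from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.types import IntegerType

spark = SparkSession.builder.master("local[*]").getOrCreate()

# toy dataframe standing in for the large dataset (illustrative only)
df = spark.createDataFrame([(i, i * 2) for i in range(1000)], ['A', 'B'])

sum_cols = F.udf(lambda x: x[0] + x[1], IntegerType())

# take the 10 rows once, cache them, and run later operations on the cached subset
df2 = df.limit(10).cache()
df2.count()  # an action forces the cache to materialize
df_with_sum = df2.withColumn('C', sum_cols(F.array('A', 'B')))
df_with_sum.show()

This way the limit is evaluated once, and subsequent transformations operate on the small cached dataframe instead of re-scanning the large one.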
answered Sep 02 '25 by Ali Yesilli


limit will first try to get the required data from a single partition. If it does not get all of the data from that one partition, it will then fetch the remaining data from the next partition.

So please check how many partitions you have by using df.rdd.getNumPartitions()

To verify this, I would suggest you first coalesce your df to one partition and then do a limit. You will see that this time limit is faster, as it is reading data from a single partition; see the sketch below.
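
A short sketch of that check, assuming the same df as in the question; the variable name df_one is just illustrative.

# number of partitions in the original dataframe
print(df.rdd.getNumPartitions())

# collapse everything into a single partition, then limit;
# limit no longer has to visit partition after partition
df_one = df.coalesce(1)
print(df_one.rdd.getNumPartitions())  # 1
df_one.limit(10).show()
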

answered Sep 02 '25 by Chandan Ray