
Is DataFrame.toPandas always on the driver node or on worker nodes?

Imagine you are loading a large dataset via the SparkContext and Hive, so the dataset is distributed across your Spark cluster. For instance, observations (values + timestamps) for thousands of variables.

Now you would use some map/reduce methods or aggregations to organize/analyze your data, for instance grouping by variable name.

Once grouped, you could get all observations (values) for each variable as a time-series DataFrame. If you now use DataFrame.toPandas:

def myFunction(data_frame):
   data_frame.toPandas()

df = sc.load....
df.groupBy('var_name').mapValues(_.toDF).map(myFunction)
  1. is this converted to a Pandas DataFrame (per variable) on each worker node, or
  2. are Pandas DataFrames always on the driver node, so that the data is transferred from the worker nodes to the driver?
asked Jan 25 '26 by Matthias


1 Answer

There is nothing special about a Pandas DataFrame in this context.

  • If a DataFrame is created by calling the toPandas method on a pyspark.sql.dataframe.DataFrame, this collects the data and creates a local Python object on the driver.
  • If a pandas.core.frame.DataFrame is created inside an executor process (for example in mapPartitions), you simply get an RDD[pandas.core.frame.DataFrame]. There is no distinction between Pandas objects and, say, a tuple (see the sketch after this list).
  • Finally, the pseudocode in your example couldn't work because you cannot create (in a sensible way) a Spark DataFrame (I assume that is what you mean by _.toDF) inside an executor thread.
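
To make the first two points concrete, here is a minimal, self-contained sketch (a local SparkSession plus made-up column names var_name, timestamp, value and a three-row example dataset, none of which come from the question): toPandas materializes everything on the driver, while pandas frames built inside an RDD transformation never leave the executors.

from pyspark.sql import SparkSession
import pandas as pd

spark = SparkSession.builder.master("local[2]").appName("toPandas-demo").getOrCreate()

# Illustrative stand-in for "observations for thousands of variables".
sdf = spark.createDataFrame(
    [("temp", 1, 20.5), ("temp", 2, 21.0), ("pressure", 1, 101.3)],
    ["var_name", "timestamp", "value"],
)

# Case 1: toPandas() collects every row to the driver and builds one
# local pandas DataFrame there.
driver_pdf = sdf.toPandas()

# Case 2: pandas DataFrames built inside an RDD transformation live only
# in the executor processes; Spark treats them like any other Python object.
columns = ["var_name", "timestamp", "value"]

def to_pandas_per_variable(pair):
    var_name, rows = pair
    pdf = pd.DataFrame(list(rows), columns=columns)  # exists only on the executor
    return var_name, len(pdf)  # ship back something small, not the frame itself

per_variable = (
    sdf.rdd
       .map(lambda row: (row["var_name"], tuple(row)))
       .groupByKey()
       .map(to_pandas_per_variable)
)

print(per_variable.collect())  # only the small (name, row_count) pairs reach the driver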
answered Jan 26 '26 by zero323


