Error when importing UDF from module -> SparkContext should only be created and accessed on the driver

I am having trouble with the SparkContext. Here's my project structure:

dependencies/
    spark.py
etl.py
shared/
    tools.py

In dependencies/spark.py I have a function that creates the Spark session:

# dependencies/spark.py

from pyspark.sql import SparkSession

def get_or_create_session(app_name, master="local[*]"):
    spark_builder = SparkSession.builder.master(master).appName(app_name)
    session = spark_builder.getOrCreate()
    return session

In etl.py I have my main(), where I import a function defined in shared/tools.py that uses a pandas UDF.

# etl.py

from dependencies.spark import get_or_create_session
from shared.tools import cleanup_pob_column

def main():
    spark = get_or_create_session(app_name="my_app")
    data = get_data(input_file)
    transformed_data = transform_data(data)
    transformed_data.printSchema()
    transformed_data.show(truncate=False)

def get_data(input_file):
    ...
    return data

def transform_data(data):
    return (
        data
        .transform(cleanup_pob_column)
    )

if __name__ == "__main__":
    main()

# shared/tools.py

import pandas as pd
import pyspark.sql.functions as F

def extract_iso(x):
    ...  # derive iso_string from x
    return iso_string

@F.pandas_udf("string")
def cleanup_geo_column_udf(col: pd.Series) -> pd.Series:
    return col.apply(lambda x: extract_iso(x=x))

def cleanup_pob_column(df):
    return df.withColumn("pob_cln", cleanup_geo_column_udf(F.col("place_of_birth")))

Now I am stuck in an error loop that I do not understand.

If at the top of shared/tools.py I don't get the session (that is, if I OMIT the code below):

from dependencies.spark import get_or_create_session
spark = get_or_create_session(app_name="my_app")

I get this error (which seems to be caused by the fact that the context is None):

Caused by: org.apache.spark.api.python.PythonException: Traceback (most recent call last):
  File "/Users/gsimeone/PycharmProjects/assignment/shared/geographic_tools.py", line 39, in <module>
    def cleanup_geo_column_udf(col: pd.Series) -> pd.Series:
  File "/Users/gsimeone/PycharmProjects/sayaritest/sayari_test/lib/python3.8/site-packages/pyspark/python/lib/pyspark.zip/pyspark/sql/pandas/functions.py", line 450, in _create_pandas_udf
    return _create_udf(f, returnType, evalType)
  File "/Users/gsimeone/PycharmProjects/sayaritest/sayari_test/lib/python3.8/site-packages/pyspark/python/lib/pyspark.zip/pyspark/sql/udf.py", line 74, in _create_udf
    return udf_obj._wrapped()
  File "/Users/gsimeone/PycharmProjects/sayaritest/sayari_test/lib/python3.8/site-packages/pyspark/python/lib/pyspark.zip/pyspark/sql/udf.py", line 286, in _wrapped
    wrapper.returnType = self.returnType  # type: ignore[attr-defined]
  File "/Users/gsimeone/PycharmProjects/sayaritest/sayari_test/lib/python3.8/site-packages/pyspark/python/lib/pyspark.zip/pyspark/sql/udf.py", line 134, in returnType
    self._returnType_placeholder = _parse_datatype_string(self._returnType)
  File "/Users/gsimeone/PycharmProjects/sayaritest/sayari_test/lib/python3.8/site-packages/pyspark/python/lib/pyspark.zip/pyspark/sql/types.py", line 1010, in _parse_datatype_string
    assert sc is not None
AssertionError

But if I DO include the snippet above, I get another error:

Caused by: org.apache.spark.api.python.PythonException: Traceback (most recent call last):
  File "/Users/gsimeone/PycharmProjects/assignment/shared/geographic_tools.py", line 15, in <module>
    spark = get_or_create_session(app_name=config.get("app_name"))
  File "/Users/gsimeone/PycharmProjects/assignment/dependencies/spark.py", line 22, in get_or_create_session
    session = spark_builder.getOrCreate()
  File "/Users/gsimeone/PycharmProjects/sayaritest/sayari_test/lib/python3.8/site-packages/pyspark/python/lib/pyspark.zip/pyspark/sql/session.py", line 277, in getOrCreate
    return session
  File "/Users/gsimeone/PycharmProjects/sayaritest/sayari_test/lib/python3.8/site-packages/pyspark/python/lib/pyspark.zip/pyspark/context.py", line 485, in getOrCreate
    return SparkContext._active_spark_context
  File "/Users/gsimeone/PycharmProjects/sayaritest/sayari_test/lib/python3.8/site-packages/pyspark/python/lib/pyspark.zip/pyspark/context.py", line 186, in __init__
    SparkContext._assert_on_driver()
  File "/Users/gsimeone/PycharmProjects/sayaritest/sayari_test/lib/python3.8/site-packages/pyspark/python/lib/pyspark.zip/pyspark/context.py", line 1533, in _assert_on_driver
    raise RuntimeError("SparkContext should only be created and accessed on the driver.")
RuntimeError: SparkContext should only be created and accessed on the driver.

Help?

UPDATE:

If I take the entire content of shared/tools.py and paste it into etl.py, the app runs with no problem.

asked Oct 16 '25 by Tytire Recubans

1 Answer

I had a similar issue, and it was resolved by changing the UDF's return data type from its string form to a Spark built-in data type. That is, change:

@F.pandas_udf("string")

to

from pyspark.sql.types import StringType
@F.pandas_udf(StringType())
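
For context, here is a minimal sketch of shared/tools.py with that change applied, assuming pandas is imported as pd and pyspark.sql.functions as F, as in the question (extract_iso is left elided):

# shared/tools.py -- sketch with the UDF return type as a DataType instance

import pandas as pd
import pyspark.sql.functions as F
from pyspark.sql.types import StringType

def extract_iso(x):
    ...  # derive iso_string from x (elided in the question)
    return iso_string

# Passing StringType() instead of the string "string" skips
# _parse_datatype_string, which asserts on an active SparkContext
# (the `assert sc is not None` in the first traceback). The module
# can then be imported on workers without creating a session.
@F.pandas_udf(StringType())
def cleanup_geo_column_udf(col: pd.Series) -> pd.Series:
    return col.apply(lambda x: extract_iso(x=x))

def cleanup_pob_column(df):
    return df.withColumn("pob_cln", cleanup_geo_column_udf(F.col("place_of_birth")))

With this change, the module-level get_or_create_session call in shared/tools.py should no longer be needed, so the SparkContext is only ever touched on the driver.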
answered Oct 18 '25 by Harry
