
Read Array Of Jsons From File to Spark Dataframe

I have a gzipped JSON file that contains an array of JSON objects, something like this:

[{"Product":{"id"1,"image":"/img.jpg"},"Color":"black"},{"Product":{"id"2,"image":"/img1.jpg"},"Color":"green"}.....]

I know this is not the ideal format to read into Scala; however, there is no alternative but to process the feed in this form.

I have tried:

spark.read.json("file-path") 

which seems to take a long time (it processes very quickly for data in the MB range, but takes much longer for GBs of data), probably because Spark is not able to split the file and distribute it across the other executors.
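
A quick check of the partition count after the read seems consistent with this (a minimal check; the file path is a placeholder):

val df = spark.read.json("/data/products.json.gz")
// A single gzipped file cannot be split, so everything lands in one partition.
println(df.rdd.getNumPartitions)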

I wanted to see if there is any way to preprocess this data and load it into the Spark context as a DataFrame.

The functionality I want seems similar to: Create pandas dataframe from json objects. But I wanted to see if there is a Scala alternative that could do something similar and convert the data to a Spark RDD / DataFrame.

Dipayan asked Sep 17 '25 19:09

1 Answer

You can read the gzipped file using spark.read.text("gzip-file-path"). Since Spark's APIs are built on top of the HDFS API, Spark can read the gzip file and decompress it as it reads.
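
For example, something like this should work (a minimal sketch; the path is a placeholder and a SparkSession named spark is assumed):

// Spark picks the gzip codec from the .gz extension and decompresses
// the content transparently while reading it as text.
val raw = spark.read.text("/data/products.json.gz")
raw.show(1, truncate = false)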

https://github.com/mesos/spark/blob/baa30fcd99aec83b1b704d7918be6bb78b45fbb5/core/src/main/scala/spark/SparkContext.scala#L239

However, gzip is non-splittable, so Spark creates an RDD with a single partition. Hence, reading large gzip files with Spark does not make much sense.
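
If you do read the gzip file directly, one partial workaround is to repartition right after the read so that at least the downstream stages run in parallel (a sketch; the partition count of 64 is an arbitrary example):

// Decompression and JSON parsing still happen in a single task,
// but later transformations are spread over 64 partitions.
val df = spark.read.json("/data/products.json.gz").repartition(64)
println(df.rdd.getNumPartitions)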

You may decompress the gzip file and read the decompressed files to get the most out of the distributed processing architecture.
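
For instance, assuming the archive was decompressed beforehand (e.g. with gunzip products.json.gz; the names are placeholders), the plain-text file is splittable and can be read with multiple tasks, provided the JSON records are not all on a single enormous line:

val df = spark.read.json("/data/products.json")
// Nested fields from the structure shown in the question.
df.select("Color", "Product.id", "Product.image").show(5, truncate = false)
println(df.rdd.getNumPartitions)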

wandermonk answered Sep 20 '25 10:09