I have a dataframe that contains the following:
movieId / movieName / genre
1 / example1 / action|thriller|romance
2 / example2 / fantastic|action
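For reference, here is a minimal snippet that recreates this input dataframe (assuming an existing SparkSession named spark and the column names shown above):

# sample data: genres are stored as a single pipe-delimited string per movie
data = [
    (1, "example1", "action|thriller|romance"),
    (2, "example2", "fantastic|action"),
]
df = spark.createDataFrame(data, ["movieId", "movieName", "genre"])
df.show(truncate=False)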
I would like to obtain a second dataframe (derived from the first one) that contains the following:
movieId / movieName / genre
1 / example1 / action
1 / example1 / thriller
1 / example1 / romance
2 / example2 / fantastic
2 / example2 / action
How can I do this using PySpark?
Use the split function, which returns an array, then apply the explode function to that array.
Example:
df.show(10,False)
#+-------+---------+-----------------------+
#|movieid|moviename|genre |
#+-------+---------+-----------------------+
#|1 |example1 |action|thriller|romance|
#+-------+---------+-----------------------+
from pyspark.sql.functions import col, explode, split

# split turns the pipe-delimited string into an array of genres;
# explode then produces one row per array element
df.withColumnRenamed("genre", "genre1").\
    withColumn("genre", explode(split(col("genre1"), '\\|'))).\
    drop("genre1").\
    show()
#+-------+---------+--------+
#|movieid|moviename| genre|
#+-------+---------+--------+
#| 1| example1| action|
#| 1| example1|thriller|
#| 1| example1| romance|
#+-------+---------+--------+
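If you prefer SQL-style expressions, the same split-plus-explode can also be written with selectExpr; this is just a sketch assuming the lowercase column names from the example output above:

# same transformation, written as SQL expressions;
# '[|]' is a character class matching the literal pipe delimiter, avoiding regex escaping
df.selectExpr("movieid", "moviename", "explode(split(genre, '[|]')) as genre").show()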