How to flatten Array of Strings into multiple rows of a dataframe in Spark 2.2.0?
Input Row ["foo", "bar"]
val inputDS = Seq("""["foo", "bar"]""").toDF
inputDS.printSchema()
root
|-- value: string (nullable = true)
Input Dataset inputDS
inputDS.show(false)
value
-----
["foo", "bar"]
Expected output dataset outputDS
value
-----
"foo"
"bar"
I tried the explode function as below, but it didn't work
inputDS.select(explode(from_json(col("value"), ArrayType(StringType))))
and I get the following error
org.apache.spark.sql.AnalysisException: cannot resolve 'jsontostructs(`value`)' due to data type mismatch: Input schema string must be a struct or an array of structs
Also tried the following
inputDS.select(explode(col("value")))
And I get the following error
org.apache.spark.sql.AnalysisException: cannot resolve 'explode(`value`)' due to data type mismatch: input to function explode should be array or map type, not StringType
The exception is thrown by:
from_json(col("value"), ArrayType(StringType))
not by explode. Specifically:
Input schema array must be a struct or an array of structs.
In other words, in Spark 2.2.0 from_json does not accept a plain array of atomic types such as array<string>.
As a workaround you can strip the brackets and split the string yourself:
inputDS.selectExpr(
  "split(substring(value, 2, length(value) - 2), ',\\s+') as value")
and then explode the resulting array column.
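To see what that split/substring expression produces before exploding, here is a plain-Scala sketch of the same string manipulation applied to the question's row (no Spark session needed; note that SQL substring is 1-indexed while Scala's is 0-indexed):

```scala
// The question's input row, as a plain string
val value = """["foo", "bar"]"""

// Mirror of the SQL substring(value, 2, length(value) - 2):
// drop the leading '[' and trailing ']'
val inner = value.substring(1, value.length - 1) // "foo", "bar"

// Mirror of the SQL split(..., ',\\s+'): split on a comma
// followed by whitespace
val parts = inner.split(",\\s+").toSeq
```

After the equivalent selectExpr, explode(col("value")) yields one row per element. Note the double quotes are still attached to each element ("foo" rather than foo); if you need them removed, you could additionally strip them, e.g. with regexp_replace.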