I have a DataFrame containing 3 columns:
| str1   | array_of_str1 | array_of_str2 |
+--------+---------------+---------------+
| John   | [Size, Color] | [M, Black]    |
| Tom    | [Size, Color] | [L, White]    |
| Matteo | [Size, Color] | [M, Red]      |
I want to add a column that combines these 3 columns into an array of structs:
| str1   | array_of_str1 | array_of_str2 | concat_result                             |
+--------+---------------+---------------+-------------------------------------------+
| John   | [Size, Color] | [M, Black]    | [[John, Size, M], [John, Color, Black]]   |
| Tom    | [Size, Color] | [L, White]    | [[Tom, Size, L], [Tom, Color, White]]     |
| Matteo | [Size, Color] | [M, Red]      | [[Matteo, Size, M], [Matteo, Color, Red]] |
If the number of elements in the arrays is fixed, it is quite straightforward using the array and struct functions. Here is a bit of code in Scala.
import org.apache.spark.sql.functions.{array, col, struct}

val result = df
  .withColumn("concat_result", array((0 to 1).map(i => struct(
    col("str1"),                      // repeat the scalar column in every struct
    col("array_of_str1").getItem(i),  // i-th attribute name
    col("array_of_str2").getItem(i)   // i-th attribute value
  )): _*))
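A note on the Scala version: (0 to 1) enumerates the two array indices, the map call produces a Seq[Column] of structs, and the : _* ascription expands that sequence so array receives its elements as varargs.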
And in Python, since you were asking about PySpark:
import pyspark.sql.functions as F

df.withColumn("concat_result", F.array(*[
    F.struct(
        F.col("str1"),                      # repeat the scalar column in every struct
        F.col("array_of_str1").getItem(i),  # i-th attribute name
        F.col("array_of_str2").getItem(i))  # i-th attribute value
    for i in range(2)]))                    # the arrays have a fixed length of 2
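If you want to check this end to end, here is a minimal, self-contained sketch: the SparkSession setup is standard boilerplate, and the sample rows are rebuilt from the question's first table so the snippet above can run as-is.

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Sample rows copied from the question's first table
df = spark.createDataFrame(
    [("John",   ["Size", "Color"], ["M", "Black"]),
     ("Tom",    ["Size", "Color"], ["L", "White"]),
     ("Matteo", ["Size", "Color"], ["M", "Red"])],
    ["str1", "array_of_str1", "array_of_str2"])

Chaining .show(truncate=False) onto the withColumn call above reproduces the expected concat_result column.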
And you get the following schema:
root
 |-- str1: string (nullable = true)
 |-- array_of_str1: array (nullable = true)
 |    |-- element: string (containsNull = true)
 |-- array_of_str2: array (nullable = true)
 |    |-- element: string (containsNull = true)
 |-- concat_result: array (nullable = false)
 |    |-- element: struct (containsNull = false)
 |    |    |-- str1: string (nullable = true)
 |    |    |-- col2: string (nullable = true)
 |    |    |-- col3: string (nullable = true)
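One caveat: this approach hard-codes the array length. If the number of elements varies from row to row, a possible alternative, assuming Spark 2.4+ where the transform higher-order function accepts an index-aware lambda, is a sketch like the following (untested against your data):

import pyspark.sql.functions as F

# transform walks array_of_str1, binding each element to x and its index to i;
# array_of_str2[i] then picks the matching element of the second array.
df.withColumn(
    "concat_result",
    F.expr("transform(array_of_str1, (x, i) -> struct(str1, x, array_of_str2[i]))"))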