
How to select all columns except 2 of them from a large table on pyspark sql?

When joining two tables, I would like to select all columns except two of them from a large table with many columns, using PySpark SQL on Databricks.

My pyspark sql:

 %sql
 set hive.support.quoted.identifiers=none;
 select a.*, '(?!(b.year|b.month)$).+'
 from MY_TABLE_A as a
 left join 
      MY_TABLE_B as b
 on a.year = b.year and a.month = b.month 

I followed Hive: select all columns excluding two and Hive: How to select all but one column?

but it does not work for me: all columns still appear in the result. I would like to remove the duplicated columns (year and month) from the result.

asked Sep 01 '25 by user3448011

2 Answers

As of Databricks Runtime 9.0, you can use the * except() syntax like this:

df = spark.sql("select a.* except(col1, col2, col3) from my_table_a...")

or, if you are just using %sql as in your example:

select a.* except(col1, col2, col3) from my_table_a...
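Applied to the join in the question, a minimal sketch (assuming the table and column names from the question, and Databricks Runtime 9.0 or later) could keep everything from a while dropping the duplicated join columns coming from b:

 %sql
 -- hedged sketch: b.* except(...) drops only the listed columns of b
 select a.*, b.* except (year, month)
 from MY_TABLE_A as a
 left join
      MY_TABLE_B as b
 on a.year = b.year and a.month = b.month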
answered Sep 04 '25 by David Maddox

set hive.support.quoted.identifiers=none is not supported in Spark.

Instead, in Spark, set spark.sql.parser.quotedRegexColumnNames=true to get the same behavior as in Hive.

Example:

df=spark.createDataFrame([(1,2,3,4)],['id','a','b','c'])
df.createOrReplaceTempView("tmp")
spark.sql("SET spark.sql.parser.quotedRegexColumnNames=true")

#select all columns except a and b
spark.sql("select `(a|b)?+.+` from tmp").show()
#+---+---+
#| id|  c|
#+---+---+
#|  1|  4|
#+---+---+
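Applied to the join from the question, a hedged sketch (assuming the same table names, and that spark.sql.parser.quotedRegexColumnNames is already set to true as above) could select every column of b except the join keys with a qualified backtick-quoted regex:

 %sql
 -- the regex matches every column of b except year and month
 select a.*, b.`(year|month)?+.+`
 from MY_TABLE_A as a
 left join
      MY_TABLE_B as b
 on a.year = b.year and a.month = b.month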
answered Sep 04 '25 by notNull