PySpark: Can saveAsNewAPIHadoopDataset() be used as bulk loading to HBase?

We currently import data into HBase tables via Spark RDDs (PySpark) using saveAsNewAPIHadoopDataset().

Does this function use the HBase bulk-loading feature via MapReduce? In other words, is saveAsNewAPIHadoopDataset(), which writes directly to HBase, equivalent to using saveAsNewAPIHadoopFile() to write HFiles to HDFS and then invoking org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles to load them into HBase?

Here is an example snippet of our HBase loading routine:

from socket import gethostname  # 'config' is a ConfigParser loaded elsewhere

# Job configuration: TableOutputFormat writes each record as a Put
# directly into the target table.
conf = {"hbase.zookeeper.quorum": config.get(gethostname(), 'HBaseQuorum'),
        "zookeeper.znode.parent": config.get(gethostname(), 'ZKznode'),
        "hbase.mapred.outputtable": table_name,
        "mapreduce.outputformat.class": "org.apache.hadoop.hbase.mapreduce.TableOutputFormat",
        "mapreduce.job.output.key.class": "org.apache.hadoop.hbase.io.ImmutableBytesWritable",
        "mapreduce.job.output.value.class": "org.apache.hadoop.io.Writable"}

# Converters from the Spark examples jar that turn Python strings into
# HBase writables.
keyConv = "org.apache.spark.examples.pythonconverters.StringToImmutableBytesWritableConverter"
valueConv = "org.apache.spark.examples.pythonconverters.StringListToPutConverter"

spark_rdd.saveAsNewAPIHadoopDataset(conf=conf, keyConverter=keyConv, valueConverter=valueConv)
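
With these converters, each RDD record is a (rowkey, [row, column family, qualifier, value]) pair of strings, one cell per record. A minimal sketch of matching input (the row and column names here are made up):

spark_rdd = sc.parallelize([
    ('row1', ['row1', 'cf1', 'col1', 'value1']),
    ('row2', ['row2', 'cf1', 'col1', 'value2'])])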
asked Jan 29 '26 by kentt
1 Answer

Not exactly. RDD.saveAsNewAPIHadoopDataset and RDD.saveAsNewAPIHadoopFile do almost the same thing: both hand each record to whatever OutputFormat the job is configured with. Their APIs differ only slightly, in that saveAsNewAPIHadoopFile takes the output path and the key, value, and output-format classes as arguments, while saveAsNewAPIHadoopDataset expects all of that in the configuration. Each offers a different 'mechanism vs. policy' choice. In your snippet the configured OutputFormat is TableOutputFormat, which sends ordinary Puts through HBase's normal write path rather than writing HFiles, so it is not the bulk-load mechanism.
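
For comparison, below is a minimal sketch of the explicit HFile route the question mentions, reusing the conf-building helpers from the snippet above. It is a sketch, not a drop-in recipe: the Spark examples jar only ships Put converters, so the StringListToKeyValueConverter named here is hypothetical (you would have to write a converter that emits KeyValue objects), and HFileOutputFormat2 requires records sorted by row key.

from socket import gethostname

keyConv = "org.apache.spark.examples.pythonconverters.StringToImmutableBytesWritableConverter"

# Step 1: write HFiles to an HDFS staging directory instead of to the table.
hfile_conf = {"hbase.zookeeper.quorum": config.get(gethostname(), 'HBaseQuorum'),
              "zookeeper.znode.parent": config.get(gethostname(), 'ZKznode')}

spark_rdd.sortByKey().saveAsNewAPIHadoopFile(
    "/tmp/hfiles_staging",                                  # staging path (made up)
    "org.apache.hadoop.hbase.mapreduce.HFileOutputFormat2",
    keyClass="org.apache.hadoop.hbase.io.ImmutableBytesWritable",
    valueClass="org.apache.hadoop.hbase.KeyValue",
    keyConverter=keyConv,
    valueConverter="my.converters.StringListToKeyValueConverter",  # hypothetical
    conf=hfile_conf)

Step 2, from a shell, hands the staged files to the region servers:

hbase org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles /tmp/hfiles_staging table_name

Only this second step moves files straight into HBase's storage without touching the write path, and it is the part saveAsNewAPIHadoopDataset never performs. On the Java side, HFileOutputFormat2.configureIncrementalLoad would normally set up partitioning to match the table's regions; reproducing that from PySpark is the hard part of the HFile route.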

answered Jan 30 '26 by Brandon Bradley