Q: Loading a large HBase table into a Spark RDD takes a long time

I am trying to load a large HBase table into a Spark RDD to run a Spark SQL query on the entity. For an entity with about 6 million rows, it takes about 35 seconds to load it into the RDD. Is that expected? Is there any way to shorten the loading process? I have picked up some tips from http://hbase.apache.org/book/perf.reading.html to speed things up, e.g., scan.setCaching(cacheSize) and only adding the necessary attributes/columns to the scan. I am just wondering if there are other ways to improve the speed?

Here is the code snippet:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.mapreduce.TableInputFormat;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.JavaSparkContext;

SparkConf sparkConf = new SparkConf().setMaster("spark://url").setAppName("SparkSQLTest");
JavaSparkContext jsc = new JavaSparkContext(sparkConf);

// Point the HBase client at the cluster and name the table to scan
Configuration hbase_conf = HBaseConfiguration.create();
hbase_conf.set("hbase.zookeeper.quorum", "url");
hbase_conf.set("hbase.regionserver.port", "60020");
hbase_conf.set("hbase.master", "url");
hbase_conf.set(TableInputFormat.INPUT_TABLE, entityName);

// Restrict the scan to the needed columns and batch rows per RPC via scanner caching
Scan scan = new Scan();
scan.addColumn(Bytes.toBytes("MetaInfo"), Bytes.toBytes("col1"));
scan.addColumn(Bytes.toBytes("MetaInfo"), Bytes.toBytes("col2"));
scan.addColumn(Bytes.toBytes("MetaInfo"), Bytes.toBytes("col3"));
scan.setCaching(this.cacheSize);
hbase_conf.set(TableInputFormat.SCAN, convertScanToString(scan));

// Load the table as an RDD of (row key, Result) pairs and count the rows
JavaPairRDD<ImmutableBytesWritable, Result> hBaseRDD =
        jsc.newAPIHadoopRDD(hbase_conf, TableInputFormat.class,
                ImmutableBytesWritable.class, Result.class);
logger.info("count is " + hBaseRDD.cache().count());

Answer 1:

Depending on your cluster size and the size of the rows (columns and column families, and how your regions are split), it may vary - but that doesn't sound unreasonable. Consider how many rows per second that is: roughly 6 million rows in 35 seconds works out to about 170,000 rows per second :)
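
Not part of the original answer, but one quick way to see how much scan parallelism you actually get: TableInputFormat produces one Spark partition per HBase region, so the RDD's partition count tells you how many regions (and therefore concurrent scan tasks) the load is spread across. A small sketch, reusing the hBaseRDD and logger from the question:

// One partition per HBase region when reading through TableInputFormat;
// a low count here means few regions and therefore little parallelism.
int regionCount = hBaseRDD.partitions().size();
logger.info("scan parallelism (partitions/regions): " + regionCount);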

hbase  apache-spark  apache-spark-sql