When Spark queries a main/sub-table (a main collection with attached sub-collections), and the sub-collection matching the query condition does not exist or has not been attached via attachCL, the query fails with an error.
For example:
select count(1) from st_refund where date='201621' ;
Error: org.apache.spark.sql.catalyst.errors.package$TreeNodeException: execute, tree:
Exchange SinglePartition
+- *HashAggregate(keys=[], functions=[partial_count(1)], output=[count#26757L])
   +- *Project
      +- *Filter (isnotnull(date#9973) && (date#9973 = 201621))
         +- *Scan SequoiadbRelation(com.sequoiadb.spark.SequoiadbConfigBuilder$$anon$1@cb27aa0,Some(StructType(StructField(id,StringType,true), StructField(applyRefundSum,StringType,true), StructField(status,StringType,true), StructField(orderId,StringType,true), StructField(created,StringType,true), StructField(good_return_time,StringType,true), StructField(buyerId,StringType,true), StructField(shop_id,StringType,true), StructField(shop_type,StringType,true), StructField(etl_data_source,StringType,true), StructField(etluuid,StringType,true), StructField(date,StringType,true)))) default.st_refund[date#9973] PushedFilters: [IsNotNull(date), EqualTo(date,201621)], ReadSchema: struct<> (state=,code=0)
However, the same query runs without error in PostgreSQL. Suggestion: if the matching sub-collection is empty or has not been attached, the query should simply return 0 rows instead of raising an error (see the sketch below).
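To illustrate the expected fallback, here is a minimal Scala sketch. The helper names (scanOrEmpty, subTableExists, doScan) are hypothetical and are not the SequoiaDB connector's actual API; the point is only that a missing sub-collection should yield an empty result with the same schema, so count(1) returns 0.

import org.apache.spark.sql.{DataFrame, Row, SparkSession}
import org.apache.spark.sql.types.StructType

// Hypothetical fallback: if the sub-collection backing the queried
// partition is missing or not attached, return an empty DataFrame
// with the relation's schema instead of throwing.
def scanOrEmpty(spark: SparkSession,
                schema: StructType,
                subTableExists: Boolean,
                doScan: () => DataFrame): DataFrame = {
  if (subTableExists) {
    doScan() // normal path: sub-collection exists and is attached
  } else {
    // expected behavior: 0 rows, same schema, no error
    spark.createDataFrame(spark.sparkContext.emptyRDD[Row], schema)
  }
}

With this fallback, an aggregate such as select count(1) over the missing partition would return 0 rather than a TreeNodeException, matching the PostgreSQL behavior described above.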
Thanks.