A simple Hive query fails in BigInsights 4.2 when Hive is configured to use the Spark execution engine.

Example query and error:

export SPARK_HOME=/usr/iop/4.1.0.0/spark
# or export SPARK_HOME=/usr/iop/4.2.0.0/spark
hive << EOF
set hive.execution.engine=spark;
set spark.master=local;
set spark.eventLog.enabled=true;
set spark.eventLog.dir=/tmp/sparkEventLog;
set spark.executor.memory=512m;
set spark.serializer=org.apache.spark.serializer.KryoSerializer;
use mydb;
insert into tab1 values (10);
select count(*) from tab1;
EOF
...
Query ID = hive_20161202113142_56ee0465-f2cd-40c9-996b-c9a02e760852
Total jobs = 1
Launching Job 1 out of 1
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapreduce.job.reduces=<number>
Failed to execute spark task, with exception 'org.apache.hadoop.hive.ql.metadata.HiveException(Failed to create spark client.)'
FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.spark.SparkTask
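The "Failed to create spark client" message is only a wrapper; the underlying cause (for example, a missing Spark assembly jar or an unreachable Spark master) usually lands in hive.log as a nested "Caused by:" line. A minimal sketch for digging it out, with the log path as an assumption (Hive writes to /tmp/&lt;user&gt;/hive.log by default, but hive.log.dir may have been overridden in hive-log4j.properties):

```shell
# Sketch: surface the nested exception behind the generic SparkTask failure.
# HIVE_LOG is an assumption -- Hive logs to /tmp/<user>/hive.log by default,
# but the location may have been changed in hive-log4j.properties.

show_spark_client_errors() {
    # Print each failure line plus the five lines that follow it,
    # where the "Caused by:" line with the real root cause appears.
    grep -A5 "Failed to create spark client" "$1"
}

HIVE_LOG="/tmp/$(whoami)/hive.log"
[ -f "$HIVE_LOG" ] && show_spark_client_errors "$HIVE_LOG" || true
```

Whatever exception shows up there is far more actionable than the return-code-1 line the CLI prints.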

I am experiencing the same error on IOP 4.2.5, where Hive on Spark is a technology preview, but only with certain types of queries, such as the one below.

hive> select location, count(*) from drivers group by location;
Query ID = admin_20170607135738_138223c2-1a7e-466b-aae2-8e4d1e42c375
Total jobs = 1
Launching Job 1 out of 1
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapreduce.job.reduces=<number>
Failed to execute spark task, with exception 'org.apache.hadoop.hive.ql.metadata.HiveException(Failed to create spark client.)'
FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.spark.SparkTask
I have followed the instructions outlined here: https://www.ibm.com/support/knowledgecenter/SSPT3X_4.3.0/com.ibm.swg.im.infosphere.biginsights.admin.doc/doc/admin_hive_on_spark.html. I would appreciate some advice or assistance on how to resolve this issue.
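One sanity check worth running before digging further: the upstream Hive on Spark documentation requires a Spark assembly jar to be visible on Hive's classpath, and a missing or unlinked jar is a common cause of "Failed to create spark client". A sketch of that check, where the IOP paths are assumptions taken from the layout shown earlier rather than confirmed locations:

```shell
# Sanity check -- paths are assumptions based on the IOP layout above.
# Hive on Spark needs a spark-assembly jar visible on Hive's classpath.

find_assembly() {
    # $1: directory to search; prints matching jar paths, or nothing.
    ls "$1"/spark-assembly-*.jar 2>/dev/null || true
}

SPARK_HOME="${SPARK_HOME:-/usr/iop/4.2.0.0/spark}"
# No output from either line means the assembly jar is missing there.
find_assembly "$SPARK_HOME/lib"
find_assembly "/usr/iop/4.2.0.0/hive/lib"
```

If the jar is present under SPARK_HOME but not reachable from Hive, linking it into Hive's lib directory (per the setup guide) is the usual remedy.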