[Spark Shell on AWS K8s Cluster]: Is there more documentation regarding how to run spark-shell on k8s cluster?

Hello,

I am Yuqi from Teradata Tokyo. Sorry to disturb you, but I have a problem using Spark 2.4's client-mode feature on a Kubernetes cluster, and I would like to ask whether there is a solution.

The problem is that when I try to run spark-shell on a Kubernetes v1.11.3 cluster in an AWS environment, I cannot successfully run a StatefulSet using the Docker image built from Spark 2.4. The error message is shown below. The version I am using is Spark v2.4.0-rc3.

This e-mail is from Teradata Corporation and may contain information that is confidential or proprietary. If you are not the intended recipient, do not read, copy
or distribute the e-mail or any attachments. Instead, please notify the sender and delete the e-mail and any attachments. Thank you.
Please consider the environment before printing.


There are still some errors, shown in the attached picture local-spark-shell-error.png, but it seems spark-shell can be started this way, as shown in the attached picture local-spark-shell.png. I guess that with this command the Spark driver runs on my local machine, but I don't know how to create the driver pod on the k8s cluster. So I would like to ask whether there is any more detailed documentation on how to use Spark 2.4's client mode on k8s.
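For context, in Spark 2.4 a spark-shell against Kubernetes can only run in client mode, with the driver living wherever the shell is started. A minimal hedged sketch of such an invocation (the API-server URL, image name, service account, and ports below are placeholders, not values from this thread):

```shell
# Hypothetical sketch of spark-shell in client mode against Kubernetes.
# All host names, images, and ports are placeholders.
./bin/spark-shell \
  --master k8s://https://<api-server-host>:6443 \
  --deploy-mode client \
  --conf spark.executor.instances=2 \
  --conf spark.kubernetes.container.image=<your-spark-2.4-image> \
  --conf spark.kubernetes.authenticate.driver.serviceAccountName=spark \
  --conf spark.driver.host=<address-executors-can-reach> \
  --conf spark.driver.port=7077
```

To have the driver itself live as a pod on the cluster, the shell would be started from inside a pod, with a headless service making that pod's address resolvable to the executors.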

Re: [Spark Shell on AWS K8s Cluster]: Is there more documentation regarding how to run spark-shell on k8s cluster?

I haven't tried Glue or EMK, but I guess it integrates Kubernetes on AWS instances?

I can set up the k8s cluster on AWS, but my problem is that I don't know how to run spark-shell on Kubernetes…

Since Spark only supports client mode on k8s from version 2.4, which is not officially released yet, I would like to ask whether there is more detailed documentation on how to run spark-shell on a k8s cluster.

Re: [Spark Shell on AWS K8s Cluster]: Is there more documentation regarding how to run spark-shell on k8s cluster?

Yuqi,

Your error seems unrelated to the headless service config, which you nonetheless need to enable. For the Spark 2.4 RC to work in client mode, you need to create a headless service whose name matches your driver pod name exactly. We have had this running for a while now, using a Jupyter kernel as the driver client.
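As a hedged illustration of that advice (the names, labels, and ports below are placeholders, not values from this thread), a headless service matching a driver pod might be created like so:

```shell
# Hypothetical sketch: a headless service (clusterIP: None) whose name
# matches the driver pod name, so executors can resolve the driver by DNS.
# The pod name, label, and ports are placeholders.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: spark-driver-pod          # must match the driver pod's name exactly
spec:
  clusterIP: None                 # headless: DNS returns the pod IP directly
  selector:
    spark-driver: spark-driver-pod     # label applied to the driver pod
  ports:
    - name: driver-rpc
      port: 7077
    - name: blockmanager
      port: 7078
EOF
```

The driver would then set spark.driver.host to the service's DNS name and pin spark.driver.port and spark.blockManager.port to the ports exposed above.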

Re: [Spark Shell on AWS K8s Cluster]: Is there more documentation regarding how to run spark-shell on k8s cluster?

Hi Li,

Thank you for your reply.

Do you mean running a Jupyter client on the k8s cluster to use Spark 2.4? Actually, I am also trying to set up JupyterHub on k8s to use Spark; that's why I would like to know how to run Spark client mode on a k8s cluster. If there is any related documentation on how to set up Jupyter on k8s to use Spark, could you please share it with me?

Re: [Spark Shell on AWS K8s Cluster]: Is there more documentation regarding how to run spark-shell on k8s cluster?

Hi Yuqi,

Yes, we are running Jupyter Gateway and kernels on k8s and using Spark 2.4's client mode to launch pyspark. In client mode your driver runs on the same pod where your kernel runs.

I am planning to write a blog post on this at some future date. Did you create the headless service that reflects the driver pod name? That's one of the critical pieces we automated in our custom code that makes client mode work.
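Sketching the setup described above in hedged form (all names, images, and ports are placeholders, not values from this thread): a kernel pod, named to match its headless service, could launch pyspark in client mode roughly like this:

```shell
# Hypothetical sketch: pyspark in client mode from inside a kernel pod,
# with spark.driver.host pointing at the headless service that matches
# the pod name. All names, images, and ports are placeholders.
./bin/pyspark \
  --master k8s://https://kubernetes.default.svc \
  --deploy-mode client \
  --conf spark.kubernetes.container.image=<your-spark-2.4-image> \
  --conf spark.kubernetes.namespace=<namespace> \
  --conf spark.driver.host=<driver-pod-name>.<namespace>.svc.cluster.local \
  --conf spark.driver.port=7077 \
  --conf spark.driver.bindAddress=0.0.0.0
```

Binding the driver to 0.0.0.0 while advertising the service DNS name lets executors connect across pods even though the pod's own hostname differs from the advertised address.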

Re: [Spark Shell on AWS K8s Cluster]: Is there more documentation regarding how to run spark-shell on k8s cluster?

Hi Holden,

Thank you very much for your reply and your tutorial video.

I watched your video and have a question about the Spark driver pod. In your tutorial, are you running the driver pod on your local machine? I saw that your tutorial sets "spark.driver.host" to "10.142.0.2" and "spark.driver.port" to "7778"; could you share how you found the host and port for your Spark driver?

Previously, I tried Spark client mode on my local machine by using the command

And by checking the Spark UI, I saw that the value of spark.driver.host was "192.168.1.104" and spark.driver.port was 50331. When I tried again with "spark.driver.host" set to "192.168.1.104" and "spark.driver.port" set to 50331, spark-shell started successfully on the Kubernetes cluster. The complete command looks like this:

I didn't set these two values, so I wonder where they come from. Do you have any idea about this?
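As a hedged general note on where those values come from: when unset, Spark derives spark.driver.host from the local machine's hostname/IP and spark.driver.port from a random ephemeral port chosen at startup, which is why the UI shows values that were never configured. Pinning them explicitly might look like the following sketch (the master URL and image are placeholders; the host and port echo the values observed in the Spark UI above):

```shell
# Hypothetical sketch: pinning the driver endpoint instead of letting
# Spark pick defaults (host = local hostname/IP, port = random ephemeral).
# Master URL and image are placeholders.
./bin/spark-shell \
  --master k8s://https://<api-server-host>:6443 \
  --conf spark.kubernetes.container.image=<your-spark-2.4-image> \
  --conf spark.driver.host=192.168.1.104 \
  --conf spark.driver.port=50331
```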

Another question: after I set up spark-shell on the k8s cluster, when I tried running a Spark job I received an error like "Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources". I allocated sufficient resources for the executors, so I guess it might be related to failed communication between the Spark driver and the executors, but I am not sure of the exact cause. I attached a screenshot of the detailed error message and the log of the executor pod; could you please take a look and see whether you know the cause?
