Comments

First check access to the high port on the node itself: first using loopback, then the 10.106.224.107 IP. This may help narrow down whether the issue is a firewall problem. Remember that only the high port is accessible with NodePort services. The nc command can be helpful to troubleshoot, such as: nc 10.106.224.107 30619
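For example (30619 is the NodePort from the example above; -z probes the port without sending data, -v prints the result):

```shell
# Probe the NodePort via loopback first...
nc -zv 127.0.0.1 30619
# ...then via the node's private IP:
nc -zv 10.106.224.107 30619
```

If the first succeeds and the second fails, something between the interfaces (typically a firewall rule) is in the way.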

If the nc command works using 127.0.0.1 but not the 10. IP, it is probably a firewall issue. If you run a web server on the host node, can you access that server via the public IP?
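A quick way to run that web-server test, as a sketch (port 8000 is arbitrary; assumes nothing else is listening there):

```shell
# Stand up a throwaway web server on the node, listening on all interfaces:
python3 -m http.server 8000 --bind 0.0.0.0 &
SERVER_PID=$!
sleep 1
# From the node itself this should succeed:
curl -s -o /dev/null -w '%{http_code}\n' http://127.0.0.1:8000/
# Then, from a machine outside, substitute the node's public IP:
#   curl http://<public-ip>:8000/
kill "$SERVER_PID"
```

If the loopback request works but the same request against the public IP does not, the block is outside the web server itself.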

Can you please list the steps to fix the firewall issue? All my services are working inside the cluster, but when I try to test from outside, I get no response. Even my Kubernetes dashboard is not running because of this. It would be a great help.

The iptables command is where I would start, something like sudo iptables -vL, to see if there are any rules which would drop or reject the expected traffic. A LOG target early in the rules can be helpful: you can see the packet enter, then find which rule is dropping, rejecting, or perhaps even sending the packet to an unexpected place.
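As a sketch, the listing plus an early LOG rule might look like this (the NodePort 30619 and the log prefix are just examples):

```shell
# List rules with packet/byte counters to spot DROP or REJECT targets:
sudo iptables -vL
# Insert a LOG rule at position 1 of INPUT so matching packets are logged
# before any later rule can drop them:
sudo iptables -I INPUT 1 -p tcp --dport 30619 -j LOG --log-prefix "NODEPORT-IN: "
# Watch the kernel log while retrying the request from outside:
sudo dmesg -w | grep NODEPORT-IN
```

Remember to remove the LOG rule (sudo iptables -D INPUT 1) when you are done, or it will keep filling the kernel log.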

Chances are the packet is dropped on INPUT rather than OUTPUT, so one easy way to check is to add, as the first rule, an ACCEPT for all traffic from the sending host, something like sudo iptables -I INPUT -s <ip of sending node here> -j ACCEPT. Then try the curl, wget, or other HTTP commands again and see if it works. You can learn more about iptables here: https://www.howtogeek.com/177621/the-beginners-guide-to-iptables-the-linux-firewall/
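Put together, the test could look like the sketch below (keep the <ip of sending node here> placeholder for the client's address; the port is the example NodePort):

```shell
# Temporarily accept everything from the client, to rule INPUT in or out:
sudo iptables -I INPUT 1 -s <ip of sending node here> -j ACCEPT
# Retry the request from that client, e.g.:
curl http://10.106.224.107:30619/
# Remove the test rule afterwards (it was inserted at position 1):
sudo iptables -D INPUT 1
```

If the request only succeeds while the ACCEPT rule is in place, a later INPUT rule is dropping the traffic.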

Did it work using loopback/127.0.0.1 but not the exterior IP address? Are you using VirtualBox or a cloud provider for your instances, like AWS or GCE?

Hi, I also use GCE for the labs in this course. I remember having a similar issue with the access from outside the cluster, and after a little bit of google-shooting I realized it was a GCE firewall issue. I added another firewall rule and after that I was able to complete the exercise. Hope this helps.

Did you attempt to access the port using the loopback IP address? As Chris pointed out, GCE, which runs SDN on our behalf, has a firewall by default. One thing you may try is to log into the console and add a firewall rule to GCE that opens up all ports, from all source IP addresses. If the curl traffic works once you add the rule, you know it was the cause.

From your example, 104.196.99.153 is your public IP address, which indicates traffic navigates through the Google SDN to reach your node. The firewall would not be inside the node, but inside Google's network.

For simplicity, I created a rule to allow all TCP traffic (rather than allowing a specific port), and only then was I able to access from the outside on [nodeIP]:[nodePort]; I verified access on all running nodes.
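For reference, the same allow-all rule can be created with the gcloud CLI; the rule name here is just an example:

```shell
# Allow all TCP traffic from any source to instances in the current project
# (fine for a throwaway lab project, far too permissive for anything real):
gcloud compute firewall-rules create lfs258-allow-all \
    --allow tcp --source-ranges 0.0.0.0/0
```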

-Chris

PS: The new rule I created was in the same project where my nodes were (in my case a custom project created only for the purpose of lfs258 where I run all the master/worker nodes)

If you have added an allow-all rule to GCE and still cannot gain access to your Pod, perhaps the block is not in GCE.

Like Chris, I had opened all ports from all source IPs. To test which rules are actually necessary, I returned to the GCE firewall page and added only the port exposed by the service, 30494 in my case. When I removed the all-traffic firewall rule, my request for the page timed out. When I added only tcp:30494, I was able to see the Welcome to nginx! page again. From this testing, the only necessary rule is for the particular port being exposed, which will change each time you run the kubectl expose command. Please note that it took about a minute for the rule to take effect after selecting the save button in the GCE console.
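With the gcloud CLI, tightening things down to just the exposed NodePort might look like the sketch below (the rule names are examples; 30494 was my service's NodePort):

```shell
# Allow only the NodePort exposed by the Service:
gcloud compute firewall-rules create lfs258-nodeport \
    --allow tcp:30494 --source-ranges 0.0.0.0/0
# Remove the broad allow-all test rule once the narrow rule works:
gcloud compute firewall-rules delete lfs258-allow-all --quiet
```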

Could there be a corporate firewall or proxy blocking the high ports?