1 answer

I don't think you can do this purely with boto3, so unless you have a requirement to use it, use the metadata service instead. The AWS docs would tell you to invoke the metadata service from within the container and parse the JSON response for the public IP. Documentation here. Note that ECS "classic" uses a different metadata endpoint when the ecs-agent version is < 1.17.0.

Something like this should work from inside a container in Fargate:

import requests

try:
    response = requests.get('http://169.254.170.2/v2/metadata').json()
    for container in response.get('Containers', []):
        for network in container.get('Networks', []):
            for ipv4address in network.get('IPv4Addresses', []):
                print(ipv4address)  # or do something else
except requests.exceptions.RequestException:
    # parse the smoldering remains of the response object
    pass
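
As an aside, on newer platforms (ECS agent >= 1.21.0, or Fargate platform version >= 1.3.0) the agent injects the metadata base URL through the ECS_CONTAINER_METADATA_URI environment variable rather than a fixed address. A minimal sketch of the same lookup against that v3 endpoint, assuming the variable is set:

import os
import requests

# Set by the ECS agent when the v3 metadata endpoint is available
base = os.environ['ECS_CONTAINER_METADATA_URI']
task = requests.get(base + '/task').json()  # same Containers/Networks shape as v2
for container in task.get('Containers', []):
    for network in container.get('Networks', []):
        for ipv4address in network.get('IPv4Addresses', []):
            print(ipv4address)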

See also questions close to this topic

I've written code to read in certain data from a CSV file, perform a PCA analysis on the data with the sklearn library, and then plot the resulting data as a heatmap. The code doesn't show any errors when run, but it also outputs no graph, just a line saying AxesSubplot(0.125,0.11;0.62x0.77).

I'm wondering if Visual Studio is unable to display plots like this, and if so, what would be a better IDE for me to use for this project. If not, can anyone see a problem that would prevent this code from displaying a heatmap? I'm copying the relevant code below.
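
The code in question was not included, but a line like AxesSubplot(0.125,0.11;0.62x0.77) is what appears when the Axes object's repr is printed rather than the figure being rendered, which usually means plt.show() was never called. A minimal sketch of that shape, assuming seaborn's heatmap; the data and PCA details are placeholders:

import matplotlib.pyplot as plt
import numpy as np
import seaborn as sns
from sklearn.decomposition import PCA

data = np.random.rand(20, 5)          # placeholder for the CSV data
components = PCA(n_components=2).fit_transform(data)
sns.heatmap(components)               # returns an Axes; its repr is "AxesSubplot(...)"
plt.show()                            # without this, no window opens outside notebooks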

This website seems to be written with jQuery (AJAX). I would like to scrape all pages' tables. When I inspect the 1, 2, 3, 4 page tags, they do not have a specific href link. Besides, clicking on them does not produce a clear pattern of GET requests, so I find it hard to use Python urllib to send a GET request for each page.
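
Pagination like this usually fires an XHR that can be found in the browser's network tab and replayed directly. A hedged sketch; the endpoint URL and the page parameter are hypothetical placeholders for whatever the network tab actually shows:

import requests

# Hypothetical endpoint and parameter name; copy the real ones from the
# browser's network tab (filter by XHR while clicking page 2, 3, ...)
for page in range(1, 5):
    resp = requests.post('https://example.com/api/table', data={'page': page})
    resp.raise_for_status()
    print(resp.json())  # or resp.text if the server returns HTML fragments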

After stumbling onto this link, which states "All requests made through the SDK are asynchronous," I was wondering if that applies only to the JavaScript SDK. Does it also apply to the Python AWS SDK, boto3?
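
For what it's worth, boto3 calls block until AWS responds, so any concurrency has to come from the caller. A minimal sketch using a thread pool; the bucket names are placeholders:

import boto3
from concurrent.futures import ThreadPoolExecutor

s3 = boto3.client('s3')

def check(bucket):
    # Each call blocks its own worker thread until AWS responds
    return s3.head_bucket(Bucket=bucket)

with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(check, ['bucket-a', 'bucket-b']))  # placeholder names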

I trained and deployed a model using Amazon SageMaker, and I want to make real-time predictions by invoking the deployed endpoint. I found R Shiny to be an easy way to create a quick interactive user interface web app. I used the reticulate R package to communicate with Python. First, I saved a Python function that invokes the endpoint and gets predictions for the feature inputs passed to it. Then in R, in server.R, I source the Python function and pass an R object to it to get predictions. My Python function is shown below:
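
The function itself is not reproduced here, but the usual shape is a boto3 call to the SageMaker runtime. A hedged sketch, with the endpoint name and CSV payload format as placeholder assumptions:

import boto3

def predict(features):
    # 'my-endpoint' and the CSV content type are placeholders; match them
    # to the deployed endpoint and the format the model expects
    client = boto3.client('sagemaker-runtime')
    response = client.invoke_endpoint(
        EndpointName='my-endpoint',
        ContentType='text/csv',
        Body=','.join(str(f) for f in features),
    )
    return response['Body'].read().decode('utf-8')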

My app works fine on my local machine, but I want to deploy it to the cloud. I created a Shiny server on AWS EC2 and installed all the required packages. However, when I open the Shiny app in the browser, I get the error ModuleNotFoundError: No module named 'boto3'. Yet when I open an R server on the same instance and call the same Python script from an R notebook, it works fine and gives predictions. Why is Shiny Server not finding boto3? As you can see in the server part of the code, I am telling it which Python to use, and I have confirmed that boto3 is installed in that Python environment. If I run the app from the R server on the same instance, I get "NameError: name 'ssl' is not defined".
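
A quick way to see which interpreter Shiny Server is actually handing to reticulate is to source a tiny diagnostic script from the app and log its output; a minimal sketch (source it the same way as the existing prediction script):

import sys

# If this prints a different interpreter than the one where boto3 is
# installed, reticulate under Shiny Server is picking up another Python
print(sys.executable)
print(sys.path)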

I am currently using pynsist to package up a Python application so that it can be installed on any Windows machine.

Is there any way I can keep certain variables or values hidden from the user? Currently the user can inspect and edit each Python file in the application. This is fine for 99% of my application, but if, for example, I wanted the user to have access to files in a private S3 bucket of mine, would there be a way to connect using boto3 without exposing the access key and secret access key to the user?
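
One common pattern is to keep the credentials on a backend you control and hand the packaged app short-lived pre-signed URLs instead, so the keys never ship with the installer. A minimal sketch of the server side, with the bucket and key as placeholders:

import boto3

# Runs on a machine you control, where credentials can live safely;
# the desktop app only ever receives the time-limited URL
s3 = boto3.client('s3')
url = s3.generate_presigned_url(
    'get_object',
    Params={'Bucket': 'my-private-bucket', 'Key': 'some/file.txt'},  # placeholders
    ExpiresIn=3600,  # URL validity in seconds
)
print(url)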

I am trying to get metric data from my containers running in the AWS ECS service. I already have metric data for my ECS service and ECS cluster, which is readily available from CloudWatch. However, capturing metrics from the ECS cluster containers has been a challenge with Grafana.

What I have done so far:
1. Once I realized that Grafana needs to have inputs for ContainerID, ContainerName, and taskID to display the metric for a given container, I started putting together variables for each to build the lists.

I believed this would return lists of all running containers for that metric. When building the variables, each one produced a good list in the "preview" of values. However, combining these lists in a single query does not seem to work. Thoughts?

FYI, I have manually plugged in the containerName, ID, and taskID of a known running container, and Grafana is able to display metric data... it just seems unable to read the list from the variables.

My guess is that there is some dimension_values query string for this and I'm not doing it correctly. Something like this makes sense to me but doesn't work in Grafana.
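
For reference, the Grafana CloudWatch data source does ship a dimension_values() variable query. A hedged sketch of its shape; the region, namespace, metric, and dimension key below are illustrative guesses, not values verified against this dashboard:

    dimension_values(us-east-1, ECS/ContainerInsights, CpuUtilized, TaskId)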

I have set up Amazon ECS using Fargate, and the task definition contains two containers, one listening on port 9090 and the other on port 8080. After creating a service and running the task, the logs show that both services are up and running. Port mapping is also done in the container configuration of the task definition.

The security group used on the task's network interface also allows both ports (also tested by opening all ports).

But I can only access the service running on port 8080, not the one on 9090!

Is there anything I am missing in the configuration? Any thoughts about what to check?
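
For comparison, a minimal sketch of how the two port mappings typically look in the task definition (container names are placeholders). Note that with the awsvpc network mode Fargate uses, hostPort must be omitted or equal containerPort, and both containers share the task's ENI, so the security group has to allow each port:

"containerDefinitions": [
  { "name": "app-8080", "portMappings": [ { "containerPort": 8080, "protocol": "tcp" } ] },
  { "name": "app-9090", "portMappings": [ { "containerPort": 9090, "protocol": "tcp" } ] }
]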