Please read this in detail. Answer the 11 questions below fully and clearly, and do not say anything else: we've been getting a lot of spam and need your response to be as concise as possible.

Responses that violate this request will be ignored.

We need a service written in Python to crawl through URLs on two web domains, extracting data found in JSON objects in the default page source.

This needs to crawl thousands of instances of around 8 unique web pages.

Data will be saved in a DB with about 5 tables of about 10 columns each.

This needs to be completed each day. A scheduler should start an EC2 instance and the code should begin executing. When the crawl is finished for the day, the EC2 instance should be terminated.

Also, if the IP address ever gets blocked by the website being crawled, that EC2 instance should be shut down and a new one started (with a unique IP address).

All required data is held in the page source, accessible with a simple curl or GET of a URL. No clicking is necessary for this web-scraping project.
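Since all the required data sits in the raw page source, the extraction step can be sketched as below. The `window.__DATA__ = {...};` embedding pattern and the variable name are assumptions for illustration; the real pages may expose their JSON differently (e.g. in a `<script type="application/ld+json">` tag):

```python
import json
import re


def extract_embedded_json(html: str, var_name: str = "__DATA__") -> dict:
    """Pull the JSON object assigned to `window.<var_name>` out of page HTML.

    The `window.__DATA__ = {...};` pattern is an assumption -- the real
    pages may embed their JSON under a different name or script tag.
    """
    match = re.search(r"window\.%s\s*=\s*" % re.escape(var_name), html)
    if match is None:
        raise ValueError("no embedded JSON found")
    # raw_decode parses one JSON value starting at the given offset and
    # correctly handles nested braces, unlike a naive regex capture.
    obj, _end = json.JSONDecoder().raw_decode(html, match.end())
    return obj
```

Fetching the HTML itself would just be `urllib.request.urlopen(url).read().decode()` or `requests.get(url).text`, matching the "simple curl or GET" requirement above.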

QUESTIONS - YOU MUST ANSWER ALL. Please number your answers for clarity.

1. We need to use the AWS serverless SQL-based DB. What is it called?

2. How would you start the EC2 instances automatically each day?

3. How would you terminate the EC2 instances when the crawl is completed?

4. Visit [login to view URL] -- the name, location, date, price, and age limit for this event can all be found in a single JSON object in the HTML returned by this URL. What is this JSON value? Copy and paste this entire JSON object in your response.

5. How would you programmatically extract this JSON from the URL?

6. How would you programmatically extract this JSON from the URL if there were multiple similar JSON objects on the page?
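Question 6 (several similar JSON objects on one page) is commonly handled by decoding every candidate object and filtering on the fields you need. A sketch, assuming the event object carries the name/location/date/price keys described in question 4:

```python
import json


def find_json_objects(html: str) -> list[dict]:
    """Scan HTML for every decodable top-level JSON object.

    Brute-force sketch: attempt a decode at every '{' and keep the ones
    that parse. Real pages may warrant an HTML parser (e.g. BeautifulSoup)
    to narrow the search to <script> tags first.
    """
    decoder = json.JSONDecoder()
    objects = []
    idx = html.find("{")
    while idx != -1:
        try:
            obj, end = decoder.raw_decode(html, idx)
        except json.JSONDecodeError:
            idx = html.find("{", idx + 1)
            continue
        objects.append(obj)
        idx = html.find("{", end)  # skip past the object just decoded
    return objects


def pick_event(objects: list[dict]) -> dict:
    """Disambiguate similar objects by requiring the fields from the brief."""
    required = {"name", "location", "date", "price"}
    for obj in objects:
        if required <= obj.keys():
            return obj
    raise ValueError("no object with the required fields")
```

The key-set filter is one simple disambiguation strategy; matching on the surrounding script tag's `id` or variable name would work too if the page provides one.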

13 freelancers bid an average of $659 on this job

Hi there,
My team and I can deliver your tasks with great quality.
We are focused on web development and have created many beautiful sites, mostly in Python. We like to use Laravel as a REST API and Vue.js as an SPA for new apps…

Hi, how are you?
I am very interested in your project and I have read your description carefully.
I can answer you.
As you can see from my profile, I have plenty of experience with Linux, scraping, crawling, etc.,
but I wa…

Hi There,
a. We can develop the Python program you want us to code for you.
b. Please check our replies to the questions you have asked.
1. Aurora DB
2. Using Lambda
3. Lambda
4, 5, 6. Using Python with Django
7. d…

1. We need to use the AWS serverless SQL-based DB. What is it called?
You want to use AWS Lambda and the RDS service with Node.js/Python; we can use the Serverless Framework for this and have great experience with it.
2. How would you start…

1) Aurora DB
2) By scheduling a Lambda function we can start the EC2 instance each day
3) A cron job running the Python script on instance start will do the crawling, and once it completes it will shut down the instance
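The Lambda-starts-EC2 approach this bid describes can be sketched as below. The instance ID is a placeholder and the EC2 client is injected so the logic can be shown without live AWS credentials; in a real Lambda it would be `boto3.client("ec2")`, invoked by a daily EventBridge (CloudWatch Events) cron rule:

```python
def start_crawler(ec2_client, instance_id: str) -> str:
    """Core of a Lambda handler that starts the stopped crawler instance.

    `ec2_client` would be boto3.client("ec2") inside a real Lambda; it is
    passed in here so the function can be exercised with a stub. The
    instance ID is a placeholder for the crawler's real instance.
    """
    resp = ec2_client.start_instances(InstanceIds=[instance_id])
    # start_instances reports the transition; "pending" means it is booting.
    return resp["StartingInstances"][0]["CurrentState"]["Name"]
```

A user-data script or systemd unit on the instance would then launch the crawler on boot, which matches the cron-on-start idea in point 3.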

Hi,
I am Manish with HybridSkill. We have a team with expertise in highly specialized technical training and infrastructure management services.
Using our expertise in niche technologies, for instance, public and p…

Hi there,
I am a talented Scrapy programmer.
I can build the crawler to fetch thousands of instances of around 8 unique web pages.
1. We need to use the AWS serverless SQL-based DB. What is it called?
Amazon Aurora. You…

Hello,
Here are the answers to your questions.
1. Amazon Aurora
2. We can use AWS Instance Scheduler for this
3. For this, the instance can be started using the "--instance-initiated-shutdown-behavior terminate" flag, using…
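The `--instance-initiated-shutdown-behavior terminate` flag this bid mentions is a real AWS CLI option; its boto3 equivalent is the `InstanceInitiatedShutdownBehavior` parameter to `run_instances`. A minimal sketch (the AMI ID is a placeholder and the client is injected for testability):

```python
def launch_crawler_instance(ec2_client, ami_id: str,
                            instance_type: str = "t3.micro") -> str:
    """Launch an EC2 instance that terminates itself on OS shutdown.

    `ec2_client` would be boto3.client("ec2") in production; `ami_id` is a
    placeholder for the crawler's real image. With the shutdown behavior
    set to "terminate", the crawl script can simply run `shutdown -h now`
    when it finishes and the instance is torn down with no further
    orchestration -- and relaunching after an IP block yields a fresh
    public IP on the new instance.
    """
    response = ec2_client.run_instances(
        ImageId=ami_id,
        InstanceType=instance_type,
        MinCount=1,
        MaxCount=1,
        InstanceInitiatedShutdownBehavior="terminate",
    )
    return response["Instances"][0]["InstanceId"]
```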

Nice to meet you.
I am an Amazon Cloud Architect for web infrastructure serving 90 million page impressions and 12 TB of Internet traffic per month. The AWS services I use are EC2, ELB, MySQL RDS, VPC, CloudFront, Elas…