Using GScraper Links in GSA-SER

I'm using GScraper and GSA-SER, running into some problems, and have a few questions I hope you all can answer:

Basically, it's to do with using GScraper for scraping and GSA-SER for posting. From reading around the various forums, I believe most people use GScraper and their proxies for scraping, and obviously GSA-SER for link building.

Firstly, when using GScraper, what are the best ways to maximise the LPM? I've had it go as high as 70K LPM, but mostly it stays around 5K to 6K, and I can't figure out the optimal settings to keep it at 65K. I'm running GScraper on a dedicated VPS (the Wizard plan from Solid SEO Host) with the GScraper subscription proxies, and all that VPS does is scrape. I've varied the thread count between 500 and 1,500 and it's the same thing... the LPM jumps up and down but mostly sits at 5K, which is really annoying.

My next question is about using the scraped links in GSA-SER. I should say that what I do is extract the footprints from GSA into GScraper, so it only scrapes links that GSA can successfully post to. Once the scraping is finished, there are two options people take from here: they either trim to the root domain and remove duplicates, or they just remove the duplicate URLs.
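For illustration, the difference between those two options can be sketched in a few lines of Python. The URLs here are made up, and GScraper handles this internally; this just shows what each choice keeps and throws away:

```python
from urllib.parse import urlparse

# Hypothetical scraped list (duplicates included, as GScraper would output)
urls = [
    "http://example.com/blog/post-1",
    "http://example.com/blog/post-2",
    "http://example.com/blog/post-1",
    "http://another-site.org/forum/thread-42",
]

# Option 1: remove duplicate URLs only -- inner pages are preserved
unique_urls = list(dict.fromkeys(urls))  # 3 entries, both example.com posts kept

# Option 2: trim every URL to its root domain, then remove duplicates
root_domains = list(dict.fromkeys(
    f"{urlparse(u).scheme}://{urlparse(u).netloc}/" for u in urls
))  # 2 entries, one per domain
```

Option 1 keeps the niche-relevant inner pages; option 2 collapses everything to one line per site, which is why it produces a much smaller (but less targeted) list.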

This is what I can't get my head around. Most people seem to be trimming to the root domain, running PR checks, and then adding the list into GSA. But isn't the whole point of GScraper to scrape URLs, filter them, and then PR check them? I mean, just because the root domain has PR doesn't mean the inner pages (URLs) will too. A URL will also help with relevancy if you are scraping with niche-related keywords. What is your advice/opinion here? Is it better to trim to domain for general lists and use URLs for specific niches? Or better to use root domains, period, or URLs, period?

Next, regardless of whether I use URLs or root domains, importing these links into GSA-SER is proving a bit challenging. To import them, I go to Options > Advanced > Tools > Import URLs (identify platform and sort in) > From File, which imports the merged and filtered links. The thing is, no matter whether I use URLs or root domains, it discards most of the links. I'm confused: I used the GSA footprints to scrape, so how can most of the links not be usable?

I think that's pretty much the big stumbling block for me. Again, if you could help me out here, it would provide a ton of clarity.

It happens to me all the time. After scraping over 3 million URLs with GScraper, I removed duplicate domains (since they were mostly social network sites and forums, there was no reason to remove only duplicate URLs) and ended up with a list of 220K unique domains. From those 220K domains, GSA posted to no more than 200-300 of them. That means the GSA footprints from their own software are not the best, and we need better footprints.

3 million URLs is nothing; I scrape 100-200 million URLs (depending on the footprints) per 24 hours, and for easy footprints I hit 250+ million per 24 hours. Of course, these URLs are not unique. Your problem could be the proxies as well as the footprints, but even with good footprints, slow proxies won't let you scrape much.


I didn't say I scraped for a month to get 3 million URLs; I did it in 1-2 hours. And yes, those URLs are unique, since I removed duplicate domains. The problem is with the footprints.

