Choose one version of PHP, say 7.3. It doesn't matter whether it's Windows, Linux, or Mac, so choose the system you want and try to debug that.

The "crawl status is no longer updating" message shows up when you are on the Manage Crawls activity and then don't do anything for more than 20 minutes or so. If you did a page refresh, it would go away and you could see if anything had been crawled. In any case, if crawling were occurring, you would see it within a minute or two; you don't need to wait twenty minutes.

My suggestion for debugging is to get as simple a test crawl as possible working. That is, under Manage Crawls - Options, make sure the Server Channel is 0, Crawl Order is Page Importance, Max Depth is no limit, Repeat Type is No Repeat, and robots.txt is Always Follow. Make sure there are no sites listed under Disallowed Sites. Under Seed Sites, put a single site that you know should be crawlable, for example, https://www.yahoo.com/ . Don't use a site like Facebook, because their robots.txt forbids all crawlers other than a few, like Google, that are paying them. Save your settings.

Then look at the Page Options activity and make sure that all the Page File Types are checked; if not, check them.

Go to the Manage Machines activity. For a simple crawl, you should have only one machine with a Queue Server and some Fetchers. Turn on the Queue Server and one Fetcher. Make sure you see log messages showing that each is running.

Next, go back to Manage Crawls and start the crawl. Then go back to Manage Machines and look at the logs for the Queue Server and the running Fetcher; you should see some changes indicating the crawl has started. Go back to Manage Crawls, and you should see that it says a few pages have been crawled. If this doesn't happen within maybe two minutes, something isn't working; post your Queue Server and Fetcher logs here, and I will look at them.
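If you want to convince yourself why a seed site like Facebook will never work, you can check its robots.txt rules yourself. Here's a small sketch using Python's standard-library parser (not Yioop's own code); the sample rules below are illustrative of the pattern Facebook uses, not the live file:

```python
import urllib.robotparser

# Illustrative robots.txt in the spirit of facebook.com's: a few named
# crawlers get (partial) access, every other user agent is disallowed.
sample = """
User-agent: Googlebot
Disallow: /ajax/

User-agent: *
Disallow: /
"""

parser = urllib.robotparser.RobotFileParser()
parser.parse(sample.splitlines())

# A whitelisted crawler may fetch the front page; an unknown one may not.
print(parser.can_fetch("Googlebot", "https://www.facebook.com/"))  # True
print(parser.can_fetch("YioopBot", "https://www.facebook.com/"))   # False
```

Since Yioop (with robots.txt set to Always Follow) respects these rules, a seed site whose robots.txt disallows `*` will yield zero crawled pages even when everything else is configured correctly.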

Best,

Chris


Good afternoon. I did it again exactly as you said; all the settings are at the defaults. As you wrote above, after launching, the page loads and you cannot go anywhere else; that is, everything hangs. 20-30 minutes later I close all the windows, and then after some time the site starts working again. Sometimes I have to stop the Apache server and start it again so that the site is available. Meanwhile, under Previous Crawls it says:
No Previous Crawls ...
and the logs are empty; there is nothing in them.


((resource:Screenshot_20190702-213502.png|Resource Description for Screenshot_20190702-213502.png))
Is this normal?
In the menu, everything is marked in red, and the only thing you can click is the button to turn things on; when I click it, nothing happens, the page reloads, and everything is still red.

I turned on the machines after switching to HTTPS and changing the server address (the original was http://site.net\/ -- yes, with a double slash; I changed it to https://site.net/ , then the machines turned on and indexing started). BUT there are no search results (Search Index is turned on), the crawler doesn't turn off after indexing finishes, and if I STOP the crawler, it stops, but only goes into a sleep mode.


Again, I still can't tell what you did from your description. Did you go to the Manage Machines activity? What machines were listed there, with what URLs? For a machine listed there, did you turn on a Queue Server and a Fetcher before you started to crawl? When you started a crawl under Manage Crawls, how many URLs did you crawl before you stopped it? After stopping the crawl, once it appeared under Previous Crawls, what exactly was written there? After you set it as the default crawl, try a simple search in a different browser, just to avoid any weird settings you might previously have made in your current browser. For example, does a search on something like "site:all" work?

Best,

Chris

Last Edited: 04/07/2019
