Quick How Tos

Indexing settings

Indexing rules are directly responsible for the quality of your search
results and can be set under the Indexing Control section. Whenever you edit these settings, you
need to re-index your site
for the changes to be applied. We also update your index automatically; the re-crawl frequency depends on your plan.

What is indexing and how does the Site Search 360 crawler work?

By default, the Indexing Behavior is set to "Crawl the root URLs periodically". This
means that the Site Search 360 crawler goes to your Root URL(s) - typically the homepage - and
then follows outgoing links that point to other pages within your site.

Indexing means adding
the pages that were discovered by the crawler to a search index which is unique for
every site
ID (Site Search 360 account). A search index is a pool of pages and documents that are used
to generate search results when users enter a query into a search box on your site. The Index Status
table allows you to look up any URL and check if it's indexed.

Tip: If you notice that some
search results are missing, the first thing to check is whether the missing URL is indexed. You
can also try re-indexing it and see if it triggers any errors.

General principles:

The crawler does NOT go to external websites such as Facebook, Twitter, or LinkedIn, but you can turn on the "Crawl Subdomains" setting under the Crawler Settings if you'd like to include pages from your subdomains, for example from https://blog.domain.com if your start URL is https://domain.com.

The crawler always checks whether there are any rules preventing link discovery. These rules can be set for all search engines (e.g. with the robots meta tag) or applied only to your internal search results, in which case you need to specify them under your Site Search 360 Crawler Settings. Check out how.

If you are blocking access for certain IPs but want the Site Search 360 crawler to have access to
your site, please whitelist the following IP addresses in your firewall:

88.99.218.202

88.99.149.30

88.99.162.232

88.99.29.101

158.69.116.43

139.99.121.235

You can also look at the User Agent in the HTTP header. Our crawler identifies itself with this
user agent string:

How do I index and search over multiple sites?

Suppose you run several sites and want to index the content from all of them into one index and provide a search that finds content on all of those pages.

This can be easily achieved by using one of the following three methods.

Multiple Root URLs

Just let the crawler index multiple sites by providing multiple start URLs in your Crawler Settings. All the other settings, e.g. white- and blacklisting, will be the same for all the sites. You probably want to create some Content Groups to visually separate results from different sites or site sections.

Sitemap

Create a sitemap that contains URLs from all sites that you want to index or submit multiple
sitemaps, one per line. In this case our crawler only picks up
the links that are present in your sitemap(s).

To set up sitemap indexing:

Go to your Crawler
Settings, switch the Indexing Behavior to "Index URLs found
in the sitemap(s) periodically".

Provide the link to your sitemap (or a list of them, one per line) under the Sitemap
URL(s) field which will open below. For example, https://mysite.com/sitemap.xml.

Press "Check Sitemap" to make sure your sitemaps are valid and the crawler can work with
them.

Remember to save your changes and re-index your site. Emptying the index should not be necessary; however, you can press "Empty Entire Index" before re-indexing your site if you want to rebuild the search index from scratch.

Note: The sitemap XML file should be formatted correctly for the crawler to
process it. Check these guidelines.
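As a reference, a minimal valid sitemap file follows the standard sitemap protocol and looks like this (the URLs are illustrative):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>https://mysite.com/</loc>
  </url>
  <url>
    <loc>https://othersite.com/page</loc>
  </url>
</urlset>
```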

API

You can also add pages from any of your sites via the API using your API key. You can
either index by URL or send a JSON object with the indexable contents.
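A sketch of the second option (sending indexable contents): the endpoint URL and field names below are hypothetical placeholders, not the documented API schema, so consult the API documentation for the real format.

```javascript
// Hypothetical payload for index-by-content; the field names are assumptions.
const payload = {
  url: 'https://mysite.com/products/example',
  title: 'Example Product',
  content: 'Indexable text content of the page.'
};

// The request itself would be an authenticated POST, for example:
// fetch('https://api.example.com/index?apiKey=YOUR_API_KEY', {  // hypothetical endpoint
//   method: 'POST',
//   headers: { 'Content-Type': 'application/json' },
//   body: JSON.stringify(payload)
// });
```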

How to index secure or password-protected content?

If you have private content that you’d like to include in the search results, you need to
authenticate our crawler so we’re able to access the secure pages. There are a few options to
add search to password-protected websites:

Go to your Crawler
Settings > Advanced Settings. If you use HTTP Basic Authentication, simply fill
out a username and a password. If you have a custom login page, use the Custom Login Screen
Settings instead (read on below for the instructions). You can also set a cookie to
authenticate our crawler.

Whitelist our crawler's IP addresses (listed above under General principles) for it to access all pages without login.

How to index content behind a custom login page

Provide the login form XPath: on your login page, right-click the login form element, press
Inspect, and find its id in the markup. For example, you can see something like:
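For instance, the markup might look like this (an illustrative WordPress-style form; your form's id and action will differ):

```html
<form id="loginform" action="/wp-login.php" method="post">
  ...
</form>
```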

So you'd take id="loginform" and address it with the following XPath:
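Given that id, the XPath would be:

```
//form[@id="loginform"]
```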

Define the authentication parameter names and map them with the credentials for the crawler
to access the content. Let's find out what parameter name is used for your login field
first. Right-click the field and press Inspect. For example, you'll have:
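For instance (illustrative WordPress-style markup):

```html
<input type="text" name="log" id="user_login" />
```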

So you’d take log and use it as Parameter Name. The login (username, email,
etc.) would be the Parameter Value. Click Add and repeat the same process for the
password field.

Some login screens have a single field, usually for the password (e.g. in Weebly), in which
case you'd only need one parameter name-value pair.

Save and go to the Index
Control section where you can test your setup on a single URL and re-index the
entire site to add the password-protected pages to your search results.

How to index JavaScript content?

The Site Search 360 crawler can index content that is dynamically loaded via
JavaScript. To enable JS crawling, activate the respective toggle under the Crawler
Settings, and re-index
your
site.

Note: it takes more time and more resources to index JavaScript-rendered content. If there are no search results, or some important information is missing unless you activate the JavaScript Crawling feature, make sure to add it to your Custom Plan options to be able to use it after your trial period expires.

How do I avoid duplicate indexed content?

If you find duplicate content in your index, there are a few options under Crawler
Settings to help you resolve that. For your changes to take effect you'll need to
re-index your site. We recommend clearing the index (Index
Control -> Empty Entire Index) first to make sure the duplicates are removed.

Canonical Tags

Canonical tags are a great strategy to avoid duplicate results not only for your internal ssite
earch, but also
in Google and other global search engines. Learn more about the changes required on your
side.

So let's assume you have 3 distinct URLs but the content is exactly the same:
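For instance (illustrative URLs that only differ by query parameters):

```
https://mysite.com/page
https://mysite.com/page?utm_source=newsletter
https://mysite.com/page?session=12345
```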

Ignore URL parameters

Even though these URLs refer to the same page, they are different for the crawler and would
appear as separate entries in the index. You can avoid that by
removing URL parameters that have no influence over the content of the page. To do so, go to Crawler
Settings
and turn ON "Ignore Query Parameters".

Note: The setting cannot be applied safely if you use query parameters:

For pagination (?p=1, ?p=2 or ?page=1, ?page=2 etc.)

Not only as a sorting method (?category=news) but to identify pages (?id=1, ?id=2, etc.)

In these cases ignoring all query parameters might prevent our crawler from picking up relevant
pages and documents. We can recommend 2 strategies instead:

Submitting a sitemap with 'clean' URLs and switching from root URL crawling to sitemap indexing, which is generally a better and faster indexing method.

Remove Trailing Slashes

How to control what's indexed and shown in search results (exclude or include specific pages)?

Global search engine rules

When you don't want Google to pick up specific pages or your entire site (e.g. when it's still in
development), you might already be
using the noindex robots meta tag:
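The standard tag, placed in the page's <head>, looks like this:

```html
<meta name="robots" content="noindex">
```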

Note: the noindex robots meta tag can usually be set with a simple checkbox
setting, e.g. in WordPress it's called Search Engine Visibility and can be found under
Settings > Reading in the WP admin panel. This setting essentially prevents all
crawlers, including Site Search 360, from indexing your
pages. When a page isn't indexed, it is missing from search results.

If you want to keep your site pages hidden from Google while allowing Site Search 360 to index
them,
simply turn on the Ignore Robots Meta Tag toggle under your Site Search 360 Crawler Settings.

If it's the other way round, i.e. you want to keep the pages for Google but remove them from your
on-site search results, use the blacklisting or no-indexing fields under the Crawler Settings
(check the Site Search 360-specific indexing rules below).

Alternatively, you could add a meta tag to the selected pages
and use ss360 instead of robots:
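Following the instruction above, the tag would look like this (assuming the same content syntax as the standard robots tag):

```html
<meta name="ss360" content="noindex">
```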

Important! Make sure you're not blocking the same pages in your robots.txt file. When a page is blocked from crawling through robots.txt, your noindex tag won't be found by our crawler, which means that if other pages link to the no-indexed page, it will still appear in search results.

Site Search 360-specific indexing rules

If you want to get Site Search 360 to show or ignore specific pages, use the options described
below. You can find them under Indexing Control > Crawler Settings
(scroll down the page).

General tips:

URL and XPath patterns are interpreted as regular expressions, so it's better to put a backslash (\) before special characters such as []\^$.|?*+(){}.
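For instance, to match a URL containing a literal question mark, you would escape it (a hypothetical pattern):

```
/search\?q=
```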

Important! When you modify Crawler Settings, remember to go to the
Index
Control section
and press "Re-index Entire Site" for the changes to take effect.

Whitelisting is also useful for multi-language websites. Depending on your URL structure, you could, for example, limit the search to French pages only:
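For instance, if your French pages live under https://website.com/fr/, the whitelist pattern could simply be:

```
/fr/
```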

Note: make sure that your root URL matches your
whitelisting pattern
(e.g.
https://website.com/blog/ or https://website.com/fr/). If the root URL
itself doesn't contain the whitelist pattern, it will be
blacklisted -> nothing can be indexed -> no search results.

Blacklist URL patterns:

Tell the crawler to completely ignore specific areas of your site.

For example, you want our crawler to ignore certain files or skip an entire section
of your website. Go ahead and put one pattern per line here:
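As an illustration (hypothetical patterns - adjust them to your own URL structure):

```
/wp-admin/
/tag/
\.xml$
```

Each line is one regular expression; any URL matching one of them is skipped entirely.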

Note: blacklisting takes priority over whitelisting. If there's a
conflict in your settings, the whitelisted patterns will be ignored.

No-index URL patterns:

This setting is the same as noindex,follow robots
meta tag: the crawler follows the page and all the outgoing links but doesn't include
the no-indexed
page in the results. It is different from blacklisting where the crawler fully
ignores the page without checking it for other "useful" links.

For example, URLs that are important for the search may be linked from pages you want to exclude (e.g. your homepage, product listings, FAQ pages). Add those pages as no-index patterns:
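An illustrative pattern (matching the example URL discussed in the note that follows):

```
/specific-url-to-ignore/$
```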

Note the $ sign: it indicates where the matching pattern should stop. In this case, URLs linked from the excluded page, such as /specific-url-to-ignore/product1, will still be followed, indexed, and shown in search results.

No-index XPaths:

Sometimes you need to no-index pages that do not share any specific URL patterns. Instead
of adding them one by one to no-index URL patterns (see above), check if you can
no-index
them based on a specific CSS class or id.

For example, you have category or product listing pages that you wish to hide from search results. If those pages have a distinct element which isn't used elsewhere, e.g. a unique CSS class or id, you can target it with a No-Index XPath.
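A hypothetical illustration: if your listing pages wrapped their content in <div class="category-listing">, the No-Index XPath could be:

```
//div[contains(@class,"category-listing")]
```

Both the class name and the element are made up here - inspect your own pages to find a suitable marker.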

In this case the crawler would still follow all linking out URLs so your product pages
get indexed and shown in the results. Learn how to use XPaths
or reach out to us if you need any help.

Note: using a lot of No-index URL patterns or No-Index XPaths slows down the indexing process as the crawler needs to scan every page and check it against all the indexing rules. If you're sure that a page with all its outgoing links can be safely excluded from indexing, use Blacklist URL patterns instead.

What are XPaths?

First: you can find a detailed post on how to use XPaths with Site Search 360 under Working with XPaths.

XPaths are expressions that allow you to identify elements on your web page. For example, the XPath //img selects all images. If you are not used to XPath expressions but rather know CSS selectors, you can look at a very simple conversion table here.

Here is a list of potentially useful XPaths that you could modify and use for your purposes:

//h1

Selects all your <h1> elements.

//div[@id="main"]

Selects the div element with the id "main": <div id="main"></div>. This can be useful if you want to only index text within a certain element of the page and avoid indexing text from footers and sidebars.

//p[contains(@class,"notes")]

Selects the p elements that have a class called "notes": <p class="something notes whatever">.

//img[contains(@class,"main-image")]//@src

Selects the src attribute of all image elements that have a class called "main-image": <img class="main-image" src="image.jpg" />. This path can be used if you want to tell the crawler which image to index for your page.

If you're using Chrome, the "XPath
Helper" extension is useful for finding the right XPaths to your elements. Follow these
steps to use the extension:

After installing the extension, go to your web page that you want to analyze and press
CTRL+SHIFT+X to open the XPath Helper.

Now hover your mouse over the element you want to find the XPath for, e.g. an image on your page, and press the Shift key. The XPath that points to this element is now shown in the black XPath Helper box.

In most cases, you do not need the entire XPath, so you can shorten it. That sometimes requires some testing, but a good indicator is an element with an id in the XPath: you can remove everything before that element and start the XPath with two forward slashes.
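A hypothetical illustration: if the XPath Helper shows

```
/html/body[@class='page']/div[@id='main']/div[2]/img
```

you can shorten it to

```
//div[@id='main']/div[2]/img
```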

Copy your XPath into the control panel and test it there to see whether the crawler can use it to find the exact content.

What are Content Groups and how to use them?

Content Groups are a quick
way to divide your search results into large categories, e.g. Products, News,
PDFs,
Reviews. We usually recommend
using no more than 5-7 groups. For more facets, you might want to implement Filters instead. Content groups make your
search result navigation easier and the results themselves - when combined with Data Points - much more
informative.

Let's say you have recipes and kitchen appliance reviews on your site. So you can show
Recipes and Reviews as content groups. If a user types
"crock pot", the results are now grouped into Recipes that you make in a crock pot and
Reviews of crock pots sold on your site. Additionally, you can show "calories" for the
recipes and
"price" for the product reviews as data
points directly in the search result snippets.

How to segment your results into content groups

You can assign pages and documents to content groups based on:

URL patterns, e.g. /recipes/ and /reviews/ for the Recipes and Reviews categories
respectively. Another common example would be \.pdf, if
you want to group all your PDF files.

XPath patterns matching any DOM element on your site pages, e.g.
//div[@id='reviews']. You could even point
to a specific text snippet, e.g. //div[@class='recipe-types' and text()='Vegetarian']

Content groups (except the All results view) are mutually exclusive, i.e. if a page or a document is assigned to a group, it can't belong to any other group. If a page does not match any of your groups, it is put in the "Other" (= uncategorized results) category. You can rename or hide the Other content group by using the parameters contentGroups.otherName and contentGroups.ignoreOther in your ss360Config on your site.

Note: starting from version 13 of our search script, we display content groups as tabs by default, with the first tab showing All Results, which combines matches from all other groups. The related settings can be found and adjusted under the layout.navigation parameters:

How to show and hide specific content groups

Sometimes you need to limit your search results to a particular content group which can be done
with the include or exclude parameters (either one is enough). For example, you
want to show Products and Reviews and ignore all other results. There are two ways
of setting this up:

You can adjust your ss360Config code like this:
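A sketch of such a configuration - the nesting of include under contentGroups is an assumption here, so double-check it against the configuration documentation:

```javascript
var ss360Config = {
  siteId: 'my-site-id', // placeholder - use your own site ID
  contentGroups: {
    // only results from these groups will be shown
    include: ['Products', 'Reviews']
  }
};
```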

Or you can add data-ss360-include or data-ss360-exclude attributes to your search box
HTML markup:
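For example, using the group names from above (the id searchBox is a placeholder - use your own search box markup):

```html
<input type="search" id="searchBox" data-ss360-include="['Products', 'Reviews']">
```

The attribute value is a JSON array of your content group names.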

This is especially useful if you have multiple search boxes and you want to
restrict one of them to search a specific area of your site (here's an example). The data-ss360-include and data-ss360-exclude attributes apply both to search
results and search suggestions. If you want to show different groups in the suggestion
dropdown, you can apply data-ss360-include-suggestion and data-ss360-exclude-suggestion attributes
respectively. To include all possible results and suggestions, leave it empty: data-ss360-include="[]".

Note: when there are several groups to include or exclude, make sure they are formatted as a JSON Array, separated by commas, e.g. ['Group 1', 'Group 2', 'Group 3']. The group names should match the names given in the Content Groups section.

Remember to empty and
re-index your site to apply the changes. You can test if a page is correctly
assigned to a group by re-indexing a single URL first. For more details, check out our
articles on Content Groups and Data Points.

How do I index filters if I'm not using the API?

Our crawler can also index filter settings from the pages themselves. Just write the JSON array in an invisible element on your page and give it the id ss360IndexFilters. For example, if you want a page to get a certain set of filters, you could add a content block like this to your page:
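A sketch of what such a block could look like - note that the exact JSON structure expected by the crawler is an assumption here; only the element id ss360IndexFilters and the fid#1-style references come from this article:

```html
<div id="ss360IndexFilters" style="display:none">
  [{"key": "fid#1", "values": ["Red", "Blue"]}]
</div>
```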

NOTE: Filters have to be defined in the Control Panel and referenced
with the generated Filter ID, e.g. fid#1, fid#2, etc.

How do I fix Client Error 499?

When indexing your site pages, we need to send HTTP requests to your web server. Every request
received by the server gets a 3-digit status code in response. You can check these response
codes in your Index Status table.

Client errors are the result of HTTP requests sent by a user client (i.e. a web browser or other
HTTP client). Client Error 499 means that the client closed the
connection before the server could answer the request (often because of a timeout) so the
requested page or document could not be loaded. Re-indexing specific URLs or your entire site
would usually help in this case.

This error can also occur when our crawlers are denied access to your site content by
Cloudflare. Please make sure to whitelist our crawler IPs at Cloudflare (under
Firewall > Tools):

Here's the list of IPs used by our crawlers:

88.99.218.202

88.99.149.30

88.99.162.232

88.99.29.101

158.69.116.43

139.99.121.235

Note: Cloudflare can be set up as part of your CMS (Content Management System, e.g. WordPress, Magento, Drupal, etc.). If you're not sure, please check with your CMS provider and ask them to whitelist the Site Search 360 crawler IPs for you.

Search settings

How do I adjust how "fuzzy" the search is?

To adjust how precisely your search results should match the search queries, go to Search
Settings. You can control the relevance of your search suggestions (autocomplete,
search-as-you-type results) with Search Suggestion Fuzziness and the relevance of your
search results (triggered by Enter/search button) with the Search Fuzziness. There are currently six fuzziness levels to choose from:

Level 0 means all searched keywords should be present in the results (AND logic
between the terms) and results should be a 100% match.

Level 1 means all searched keywords should be present in the results (AND logic
between the terms) but the matching percentage is a bit more forgiving (>=90%).

Level 2 means either one of the searched keywords should be present in the results (OR logic
between the terms) but it has to be a >=90% match.

Level 3 also uses the OR Boolean logic between the query terms but the matching conditions
are more lenient (>=50%)

Level 4: OR logic and even more lenient matches (>=40%)

Level 5: all results at least slightly related to the search query should be shown.

The optimal level differs from site to site as it depends on the type of your site content (product descriptions? blog articles? research papers?) and on what your users search for the most (product numbers and SKUs? longtail search phrases? - the Search Result Click table on your Dashboard can help with that). Regardless of the level, the best matching results are always shown on top, so it really
depends on how many results you want to show even if they don't fully match the search
query. Please note that the difference between the levels is often unnoticeable for
single-word terms, e.g. "mittens", "payment" but becomes apparent when you test multiple-word
queries, e.g. "women ski mittens" or "monthly or yearly payment".

Tip: if you want the autocomplete suggestions to search for matches through the entire
content of your pages and documents (vs the titles only), make sure to have the Suggestions
only on titles toggle OFF.

To choose the best fuzziness level for your site, we recommend searching your site from the
search box on top of your control panel, then changing the fuzziness in the Search
Settings section, and running the same search queries again. All control panel queries
are
NOT tracked and NOT counted towards your search limit quotas. Also, the caching is disabled so
there is no delay in reflecting your changes.

How do I change what search result snippet is shown?

You can control where the text shown in the search results is coming from on the Search settings page.

You can choose from:

Content around the search query (selected by default).

First sentences of the indexed content.

A specific area of the website that you determine via XPath under Crawler Settings.

No snippet at all (titles and URLs will still be shown).

If you want to use Meta Descriptions in your snippets, select the option "Use content
behind the Search Snippet XPath" and save. By default, we already tell the crawler to index meta
descriptions with the following XPath: //meta[@name="description"]/@content

If you set a different Search Snippet XPath, you need to run a full re-index. When you
add or update your meta descriptions, you also need to manually re-index your site or wait until
the crawler picks up the changes on the next scheduled re-crawl.

How do I change what search suggestions are shown?

Once your website content is properly indexed (see How to control what's
indexed?), you can change the behavior and the
precision of your search suggestions without initiating a re-index. Just go to the Search Settings
where you can:

Choose the degree of Search Suggestion Fuzziness, i.e. whether you want your suggestions
to
strictly match the search query or you'd like to allow more flexibility. More information on fuzziness levels.

Restrict suggestions to only derive from page titles.

Restrict your suggestions to a specific content group

You can add data-ss360-include-suggest or data-ss360-exclude-suggest attributes to your
search box HTML markup. For example, you have a content group
named 'Products' and you want the autocomplete to only suggest products, nothing else.
You could then add the following to your search input code:
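Using the 'Products' group from the example above, the search input could look like this (the id searchBox is a placeholder):

```html
<input type="search" id="searchBox" data-ss360-include-suggest="['Products']">
```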

Or, to ignore the uncategorized results, i.e. the 'Other' group, you would use:

If the same restrictions should apply to full search results as well, use the data-ss360-include or data-ss360-exclude attributes instead. More information on content groups.

When is the setting "Suggestions only on titles" useful? For example, you have a series of
products called "Travelr." When your customers type travel in your search box, you may
only want to prompt them with the "Travelr" product pages and ignore all the other pages that
mention, let's say, travel budgeting.

Tip: By default, the first h1 header on your page (//h1) is taken as a title
that comes up in search suggestions and results. However, you can point our crawler to any part
of a page by adjusting the Title XPath under the Crawler settings.
Here's how to work with
XPaths. When editing the XPaths, remember to run a
re-index for the changes to take effect.

How to show popular query suggestions

You can enable the Suggest popular searches feature under Search Settings
(available for paid plans). All search queries are anonymously tracked and counted and the
data is updated every 3 days. The tracking works even while the feature is disabled so once you
turn it on, you can test popular suggestions immediately, unless there is not enough search data
(e.g. for new accounts). See it in action by typing, e.g. curry on our demo
site.

Popular searches are shown when they fully or partially match the query typed into your search box. They appear on top of your regular suggested results. Here are the defaults that you can change by adjusting the layout.navigation parameters in your configuration code:

Tip: check how many suggestions you already show (default max: 6, can be modified with the
suggestions.num parameter) and how many popular searches you
want to add (default max: 3) and make sure the total number of entries (default: 9) fits above
the screen breakpoint.

Can I boost certain pages?

Imagine that more than one page is relevant to a certain query but you'd like one of them to
always be ranked a bit higher. There are a few ways to boost and give higher search rankings to
a specific type of your search results while decreasing the importance of the others.

For example, let us assume that your users give "likes" to your articles or products. You can
create a "Popularity" data point and boost the pages that have more upvotes. To do so:

Create a data point and tell the crawler where to find the information about the number of
likes on the page. You can source this information from a meta tag or even a hidden
HTML-element on your pages. Use XPaths to point the crawler to the right element:
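A hypothetical sketch: if each page exposed its like count in a meta tag such as <meta name="likes" content="128">, the data point XPath could be:

```
//meta[@name="likes"]/@content
```

The meta tag name is made up here - point the XPath at wherever your pages actually store the value.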

You will see 4 checkbox options for every data point you create:

Show - lets you choose whether this data point is visible or hidden from your audience.

Single - check it to only extract one value per page, recommended for boosting.

Boost - check it to be able to use the data point for boosting.

Sort - checking this option will trigger another toggle setting where you'd choose between ASC (ascending) or DESC (descending) order of results. This also automatically adds a dropdown filter to your search results, enabling your searchers to sort results by this value, thus overriding the default search result relevance.

Now go to Search Settings and set Page boosting to "Use numeric data
point" with the corresponding data point name.

Another use case: to boost the pages that are newer, use a timestamp or the year as a data
point.

Boost by using URL patterns:

This essentially works as "levels". Even though you can set anything between 1 and
100, it is advisable to always use levels of 10 (20, 30 etc.)

Example: you can boost your /products/ by 90, /category/ by 30,
and /blog/ by 10: the higher the boosting score is, the more priority is given to
the respective results.

How exactly does it work? Let's say you are boosting /blog/ by 10 and
/products/
by 30. You type in a search query and you see that matching results under
/products/ come up above /blog/ results, even if a blog post is a very
good match to your query (e.g. has the exact same title). So boosting happens more or less
independently of how well the query matches, although the query relevance does still play a
role, especially for smaller boosting levels such as 1, 2 etc.

You can also downrank or "penalize" certain results by setting the value to anything between 0 and 1. This is something to play around with as there are no linear dependencies; it is a logarithmic function.

Note: with URL boosting, a re-index from scratch (empty the index first) is necessary.

Learn each document's importance automatically

How can I add promotions or custom content to the search results?

You can add your custom HTML content anywhere in the search results for any query you like. For
example, if you want your users to see a banner promotion when they search for food, you
would follow this process:

Type the query for which you would like to add your custom content, e.g. food.

Decide whether the query must match exactly, only contain the term, or even match a regular
expression.

Choose the tab "Order Results" and press "Add Custom Result".

Edit the newly created custom search result by writing any HTML you want the user to see.

Don't forget to save your mapping. You can edit or delete that mapping later.

If you have an XML file with your Google Custom Search promotions, you won't have to
rebuild them for Site Search 360. Use the import function at the bottom of the Query Mapping section.

How to use all the Query Mapping features:

Query Mappings allow you to customize and display desired results for specific queries. This is a powerful tool that guides your site visitors in their search journey. Please refer to this detailed post on query mappings or read this short overview. With query mappings you can:

Order results: just click to create a new mapping, enter your query, and
drag and drop the best results to the top. You can also hide (by clicking on the red cross)
some results from showing for this query.

Rewrite a query: when a certain query already triggers correct search result rankings and you'd like a similar query or a synonym to bring up the same exact results, e.g. you have a product called "Travelr" and you want to display the same results for "Traveler", you can rewrite Traveler to Travelr.

Redirect to URL: when you want certain keywords to directly open a dedicated landing or promo page and save your customers some clicks, e.g. redirect contact or email to your contact information page.

Every time you create a mapping, you have to choose the Matching Type. This defines how
your visitor's query compares to your mapped query. Let's say you want to customize search
results for update:

Match means that it should be a 100% match, so when someone types exactly update, they will get the results that you have specified for update. The lowercase/uppercase difference is ignored.

Phrase matches if your query is part of a phrase, e.g. with this setting software
update or last update download would bring up the results specified for
update.

Contains matches when the searcher's query contains your query: e.g. updates
would work the same as update in this case. NOTE: keep in mind that, for example,
prevention
would bring matching results for event because it is contained in prEVENTion.

Regular Expression matches when your expression is found in the search query. This is
the most precise way but you need to be fluent in regex.

With the regexp matching type you can map several search terms at once by separating your
terms with the pipe (vertical line) symbol: termA|termB|termC

If you want your customized queries to be featured in search suggestions (search-as-you-type
results), switch the "Suggest this Query" toggle to YES. Now if someone types in a part of your
query, it will be
shown above other suggestions. The feature is available for the Holmes and Batman plans.

Query Mappings (unlike the Dictionary entries) DO NOT require a re-index; they are applied on the fly. You can test them immediately after saving by typing the mapped keyword into the search box on top of your Control Panel.

How do I prevent logging and tracking for certain users?

You might have your own team using your website's search often and not want these searches to
skew your logs. You can simply set a cookie in the browser for those users, which prevents
logging of their queries. To do so, open your browser console (F12 in Chrome and Firefox)
and run document.cookie = "ss360-tracking=0; expires=Sun, 14 Jun
2020 10:28:31 GMT; path=/";. Of course, you can change the path and expiration date
depending on your needs.

You can also block IPs from within the Control Panel under IP Blacklisting if the
cookie approach does not work for you.

Note: when you test your search by using the search bar in your Control Panel,
these test queries are not logged either.

What search operators can I use with Site Search 360?

Search operators are special characters that you can add to your query in order to refine the
search results. We currently support the two most frequently used operators (based on our analysis of 10 million queries):

Quotes to search by phrase: put your query in quotes (" ")
for an exact match. Example: "bill gates". In this case no pages or documents that
mention something like "you have to pay this bill to open the gates" would
come up.

Negation to exclude a term. Put a minus (-) right before the word that you
want to remove from search results. Example: bill -gates. In this case all pages or
documents mentioning bill and NOT mentioning gates would be found.

Implementation options

How do I switch from search results in a layer to embedded results?

When the search is triggered, Site Search 360 allows you to show results in a layover (default) or
to embed the results seamlessly into your page. To embed the results,
adjust the results parameter of your ss360Config:

results: {'contentBlock':'CSS-SELECTOR'} where CSS-SELECTOR is
one or a comma-separated list
of CSS selectors to DOM elements where the SERP (search result page) content should be embedded.
For example, if
your main content block is <div id="main">
and that is where search results should appear you would write results: {'contentBlock':'div#main'}
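Putting it together, a minimal sketch of such a configuration (the site ID and selectors are placeholders for your own values):

```javascript
// Minimal ss360Config with embedded results (placeholder values).
var ss360Config = {
  siteId: "mysite.com",                  // your Site Search 360 site ID
  searchBox: { selector: "#searchBox" }, // the input field to watch
  results: { contentBlock: "div#main" }  // embed the SERP into <div id="main">
};
```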

How to show embedded results in a new page?

If you choose to embed the search results, by default they will be embedded in the same page
where the
search is triggered. That is fast and avoids reloading the site. However, if you have a certain
search result page that you want to use instead, you can adjust your ss360Config object as follows:

You would have to replace /search.html with the path to your search result page and
CSS-SELECTOR
with a selector pointing to the area of the page where the search results should be
embedded.
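A hedged sketch of what that could look like, assuming a results.url parameter as described above (check the current parameter reference for the exact name):

```javascript
// Redirect searches to a dedicated result page (placeholder values).
var ss360Config = {
  siteId: "mysite.com",
  searchBox: { selector: "#searchBox" },
  results: {
    url: "/search.html",            // page that hosts the search results
    contentBlock: "#searchResults"  // where on that page to embed them
  }
};
```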

How do I implement pagination?

To put it shortly: you shouldn't (read
here why). Site Search 360 offers a "load more" button out of the box. To use it, just
adjust the results parameter in your ss360Config
object.

If you still want to implement pagination you can use the API with offset
and limit parameters.
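For illustration, a paginated request could be built like this; the endpoint address below is a placeholder, so consult the API documentation for the real URL:

```javascript
// Build a paginated search API request (endpoint is a placeholder).
var site = "mysite.com";
var query = "update";
var apiUrl = "https://api.example.com/search" +
  "?site=" + encodeURIComponent(site) +
  "&query=" + encodeURIComponent(query) +
  "&offset=10" +  // skip the first 10 results
  "&limit=10";    // return the next 10

// In the browser: fetch(apiUrl).then(r => r.json()).then(console.log);
```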

You can now also allow your users to infinitely scroll down the search results by setting results.infiniteScroll to true.
This replaces the 'Show more results' button and is only available when you don't have any
content groups set up or if your content group navigation is tabbed.
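A minimal sketch of the setting (placeholder site ID):

```javascript
// Replace the "Show more results" button with infinite scrolling.
var ss360Config = {
  siteId: "mysite.com",
  results: { infiniteScroll: true }
};
```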

Can I use multiple search boxes on one page?

Yes, you just have to adjust the searchBox.selector in your
ss360Config to point to all of your search fields. Usually, you
would give your search input fields a CSS class, e.g. class="ss360SearchBox".
Then the selector in your ss360Config object would look like this:
searchBox.selector: '.ss360SearchBox'.
See a demo site example with multiple
search boxes on one page.
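For example, with two inputs sharing that class, one selector covers both (the markup is illustrative):

```html
<!-- Two search fields picked up by the same selector -->
<input type="search" class="ss360SearchBox" placeholder="Search...">
<input type="search" class="ss360SearchBox" placeholder="Search again...">
```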

If you have set up content
groups, you can restrict every search box to only trigger results from one or a few selected
groups by
adding data-ss360-include or data-ss360-exclude attributes to your search input markup.
For example:
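A sketch of such markup, assuming content groups named "Blog" and "Products" exist in your account:

```html
<!-- Limit each box to specific content groups (example group names) -->
<input type="search" class="ss360SearchBox" data-ss360-include="Blog">
<input type="search" class="ss360SearchBox" data-ss360-exclude="Products">
```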

How can I show more information in the search suggestions?

You can choose to show static or dynamic data in your search suggestions next to each result. For
example, let's assume you have some products and you want to show their category and price
directly in the search suggestions.

Create the necessary data
points, for example, 'Category' and 'Price'.

Reference them in your ss360Config under the
suggestions.dataPoints parameter:
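A hedged sketch of that configuration, assuming data points named 'Category' and 'Price' have been created as in the step above:

```javascript
// Show the "Category" and "Price" data points in search suggestions.
var ss360Config = {
  siteId: "mysite.com",
  suggestions: {
    dataPoints: ["Category", "Price"] // data point names to display
  }
};
```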

Why are search suggestions different from search results?

Search suggestions (= autocomplete, or search-as-you-type results) are
generated differently from full search results which are rendered after you hit Enter
or the search button. That is because when you type, it's impossible to be sure whether you are
done typing or not, so the engine has to search _within_ words. When you submit a search query,
this indicates that you have typed in everything you wanted to type.

For example, if you type hot, showing search suggestions for hotel would make total
sense, but once you press Enter, it becomes clear that you want to find pages with the word
hot and not hotel-related pages. Unlike search results, search suggestions are NOT
counted against your monthly search volume.

To show the user that there're more results available, you can display a
call-to-action button at the end of your search suggestions, e.g. View All Results. To
add a CTA, use the suggestions.viewAllLabel in your ss360Config
code.
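A minimal sketch (the label text is an arbitrary example):

```javascript
// Add a call-to-action at the end of the suggestion list.
var ss360Config = {
  siteId: "mysite.com",
  suggestions: { viewAllLabel: "View All Results" }
};
```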

There is also a setting allowing you to trigger instant search results after every typed
character (and skip search suggestions altogether):
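A sketch of that setting, assuming it lives under suggestions as triggersSearch (verify the exact location in the parameter reference):

```javascript
// Run a full search after every keystroke instead of showing suggestions.
var ss360Config = {
  siteId: "mysite.com",
  suggestions: { triggersSearch: true }
};
```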

Important! With the triggersSearch setting set
to true, every unfinished
query will be counted against your plan's search quota, whereas our
default suggestions do not take
away from your search volume.

How to add a "View Product", "Add to Cart", or "Read more" CTA button to my search results?

Call-to-action (CTA) buttons encourage users to take further action and can increase your search
result click-through rate. To add one, use the results.cta
parameter and customize it using the
following settings:
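A hedged sketch of a CTA configuration; the property names are assumptions based on the description, so check the advanced parameter list for the exact settings:

```javascript
// Add a "Read more" button to every result (illustrative properties).
var ss360Config = {
  siteId: "mysite.com",
  results: {
    cta: {
      text: "Read more" // button label shown on each result
    }
  }
};
```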

It is also possible to add custom callbacks and set up different call-to-action elements for
different Content Groups. Check the advanced
parameter list.

If you use our Lightspeed
plugin, you can choose whether clicking on the CTA should directly
add the product to cart or simply open the product page. Manual configuration is not required,
simply click the respective checkboxes under CTA in the Plugin Config section:

How to change the font, colors, styles of suggestions and results? How to override the default CSS?

The easiest way is to just change the themeColor in your ss360Config, like so: themeColor:
'#00aa35'. If you want to have more influence on the design, keep reading.

First, you can use our Search Designer to not only change the colors but
the entire search result layout if you'd like. The corresponding code will be auto-generated for
you at the bottom of the Designer page. If you want to make more specific changes, you can add
some inline CSS by modifying the style.additionalCss
parameter.

By default, Site Search 360 brings its own stylesheet where we use, for example, font-family: sans-serif;.
You can simply deactivate it by editing the style parameter
of the ss360Config object and setting defaultCss: false.
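Combining the options from this answer, a sketch (the color and the CSS selector are made-up examples):

```javascript
// Custom styling: theme color, no default stylesheet, extra inline CSS.
var ss360Config = {
  siteId: "mysite.com",
  themeColor: "#00aa35", // quick way to recolor the default design
  style: {
    defaultCss: false,   // drop the built-in stylesheet entirely
    additionalCss: ".my-search-suggestion { font-family: inherit; }"
  }
};
```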

Here's a copy of the CSS brought by the Site Search 360 plugin so you know what can be styled
(take out the .min for a
readable version):

Can I use Site Search 360 with WordPress?

Definitely! As long as you can add our JavaScript code to your site, the search will work with
any CMS like Joomla, Magento, HubSpot, Drupal, etc. But
for an even easier integration with WordPress, we have developed a plugin.

Simply go to the Plugins section in your WP admin and look for Site
Search 360 (by SEMKNOX GmbH):

Can I use Site Search 360 with Cloudflare?

Yes, you can! We have a Cloudflare app that you can simply enable in your
Cloudflare account. There are fewer configuration options than if you choose to insert the
JavaScript by yourself but the search integration is even faster via the app.

Can I use Site Search 360 with Weebly?

Yes, we have developed a special app for Weebly so that you can easily add a search box and
customize your search result page within the Weebly interface. You simply need to connect the Site
Search 360 Weebly app to your site and drag and drop the app elements onto your pages. You
can refer to our Weebly
integration guide for a step-by-step walkthrough.

When you connect the app, we automatically open a Site Search 360 trial account for you and start
indexing your site's content. In order to check what URLs are indexed, remove unnecessary links,
add quick filters (Content Groups
such as "Blog", "Products", etc.) and Query Mappings, you'll need
to log in to your Control Panel.

Can I show the search in Google's Sitelinks Searchbox?

Absolutely, and you should. It allows your users to quickly search your website without landing
on it. Here's how it looks:

Pay attention to the search query parameter in your ss360Config
object: it should have the same name as in the target URL. By default (v8 and up of our
script) you have searchQueryParamName: 'ss360Query'.
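The Sitelinks Searchbox is enabled via standard schema.org markup on your homepage; a typical form (adjust the domain and the query parameter name to match your setup):

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "WebSite",
  "url": "https://www.site.com/",
  "potentialAction": {
    "@type": "SearchAction",
    "target": "https://www.site.com/search/?q={search_term_string}",
    "query-input": "required name=search_term_string"
  }
}
</script>
```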

To test your configuration, replace "search_term_string" with a test query and open that URL in a
web browser. For example, if your website is site.com and you want to test the query "pizza",
you would browse to https://www.site.com/search/?q=pizza

How do I track the search with Google Analytics or Google Tag Manager?

The Site Search 360 JavaScript allows you to set up external tracking for Google Analytics and
Google Tag Manager. All you have to do is configure the tracking object in your ss360Config: set the provider that
you want, and optionally react to tracking events using the searchCallback.

This tracking will add ?ss360Query={query} to the search
result pages, which can be viewed and analyzed in Google Analytics. To see the search terms in
Google Analytics, you have to specify the URL parameter for the query (ss360Query): https://support.google.com/analytics/answer/1012264?hl=en. Please
note that you need at least v7 of the script.
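A hedged sketch of such a configuration; the provider value and the callback placement are assumptions, so check the parameter reference:

```javascript
// External tracking with Google Analytics (illustrative values).
var ss360Config = {
  siteId: "mysite.com",
  tracking: {
    providers: ["GA"] // or "GTM" for Google Tag Manager
  },
  searchCallback: function (query) {
    console.log("tracked search:", query); // optional hook for tracking events
  }
};
```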

How to integrate Site Search 360 with Google Tag Manager?

Head over to Google Tag Manager, log in to your account, and add a New Tag.

In the Tag Configuration select Custom HTML as tag type.

If you're using v13 of our search script, add the code snippet below to
your tag. Note that you need to replace 'mysiteid.com' with your site ID (which can
be found under Profile) and
specify other parameters (searchBox, results, etc.) of your
ss360Config
(the easiest way to generate them is to use our Search
Designer). Everything else, starting from var
e=, is ready for a copy-paste:

In the opensearch.xml template, replace:

https://samplesite.com/favicon.ico with a link to an icon (e.g. your favicon)

https://samplesite.com with a link to your webpage (where the search
results are located, or your homepage if you are using the layover layout)

ss360Query with the name of your search query parameter (if you've adjusted
the default one)

And finally, you'll need to reference the uploaded file in your HTML templates (e.g. on your
homepage, or on your search result page). To do this, just add the following link
tag to the <head></head> part of the page (and update the content of
the href attribute if the opensearch.xml isn't placed in the root directory,
plus replace the TITLE with the name of your website):
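For reference, an OpenSearch description is conventionally referenced with a <link> element of the following form (adjust href and TITLE):

```html
<link rel="search"
      type="application/opensearchdescription+xml"
      href="/opensearch.xml"
      title="TITLE">
```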

We're an agency, how can we administer multiple accounts?

For users who want to administer multiple accounts, there is a simple solution. Just follow these
steps:

Create an account using the normal sign
up form. This will be your master account; you could, for example, use your
agency's domain, just don't use a customer's name here.

Log into that account and go to Managed Accounts.
Here you can add your customers' sites. These will be fully functional accounts, but they
will be attached to your master account. That means you can log into them with the
credentials from your master account and jump between them easily.

You can also manage permissions on the Team page by inviting your
clients and colleagues to join your account and showing/hiding specific sections of the
Control Panel to a particular user.

What can I do in the Control Panel?

Watch this brief overview of the Control Panel to get a better understanding of what is
possible: