Elastic web crawler known issues

The Elastic web crawler has the following known issues:
- The crawler does not crawl pure JavaScript single-page applications (SPAs).
  We recommend looking at dynamic rendering to help the crawler properly index your JavaScript websites. Another option is to serve a static HTML version of your JavaScript website, using a solution such as Prerender.
- The crawler does not support dynamic content.
  The crawler does not execute JavaScript; it only pulls text from HTML elements.
- The crawler does not support form-based authentication.
  The crawler currently supports only basic authentication and authentication headers (e.g. bearer tokens).
- URLs may be indexed despite having duplicate content and a canonical URL setting.
  Canonical URL link tags are embedded within the HTML source of pages that duplicate the content of other pages. Refer to Duplicate document handling for details. The crawler identifies duplicate content by hashing the content of default deduplication fields derived from the page. These fields are defined by the configuration setting `connector.crawler.extraction.default_deduplication_fields`. The web crawler then checks your index for an existing document with the same content hash. Users have reported cases where pages with canonical link tags were still indexed because their content hashes differed, even though the content appeared identical upon inspection.
  Use the following workaround: you can manage which fields the web crawler uses to create the content hash. If your pages all define canonical URLs, you can safely change your deduplication fields setting to include only the `url` field. Otherwise, you may need more fields to help check for duplicates. By default, the web crawler checks the `body_content`, `headings`, `links`, `meta_description`, `meta_keywords`, and `title` fields.
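  To illustrate why pages that look identical can still produce different hashes, here is a minimal Python sketch of hashing a page over a set of deduplication fields. The hashing scheme and helper names are illustrative assumptions, not the crawler's actual implementation; only the default field names come from the documentation above.

  ```python
  import hashlib

  # Default deduplication field names from the docs; the hashing scheme
  # below is an illustrative assumption, not the crawler's real algorithm.
  DEDUPLICATION_FIELDS = [
      "body_content", "headings", "links",
      "meta_description", "meta_keywords", "title",
  ]

  def content_hash(doc, fields=None):
      """Hash the given fields of a crawled page into one content hash."""
      fields = fields or DEDUPLICATION_FIELDS
      parts = [str(doc.get(f, "")) for f in fields]
      return hashlib.sha256("\n".join(parts).encode("utf-8")).hexdigest()

  # Two pages that differ only in URL hash identically under the defaults,
  # so the second would be treated as a duplicate:
  page_a = {"url": "https://example.com/a", "title": "T", "body_content": "same"}
  page_b = {"url": "https://example.com/b", "title": "T", "body_content": "same"}
  assert content_hash(page_a) == content_hash(page_b)

  # Restricting the deduplication fields to only "url" makes every page unique:
  assert content_hash(page_a, ["url"]) != content_hash(page_b, ["url"])
  ```

  The point of the sketch: any field whose extracted value differs between two pages (for example, relative vs. absolute links) changes the hash, which is how visually identical pages can still hash differently.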
- Custom scheduling might break when upgrading from version 8.6 or earlier.
  If you encounter the error `'custom_schedule_triggered': undefined method 'each' for nil:NilClass (NoMethodError)`, it means the custom scheduling feature migration failed. You can use the following manual workaround:

    POST /.elastic-connectors/_update/<connector-id>
    {
      "doc": {
        "custom_scheduling": {}
      }
    }

  This error can appear on Connectors or Crawlers that aren't the cause of the issue. If the error continues, try running the above command for every document in the `.elastic-connectors` index.
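  If you need to apply the workaround to many documents, it may help to script the request generation. The sketch below builds the same update request as shown above for each connector id; the ids and the helper name are hypothetical, and sending the requests to your cluster is left to your preferred client.

  ```python
  import json

  def custom_scheduling_fix(connector_id):
      """Build the documented workaround request for one .elastic-connectors doc."""
      path = f"/.elastic-connectors/_update/{connector_id}"
      body = {"doc": {"custom_scheduling": {}}}
      return "POST " + path, json.dumps(body)

  # Hypothetical connector ids; in practice, list the documents in the
  # .elastic-connectors index and use their _id values.
  for cid in ["connector-1", "connector-2"]:
      method_path, payload = custom_scheduling_fix(cid)
      print(method_path, payload)
  ```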
- The web crawler ignores uppercase `noindex` tags.
  Make sure these tags are lowercase.
- Updates to the default `connector.crawler.http.user_agent` setting are not applied.
  A workaround is to remove the `connector` prefix and update the `crawler.http_agent` setting in your Enterprise Search configuration file.
- The web crawler uses a non-deterministic method to determine thread pool size, which can lead to unexpected behavior.
  This can be worked around by overriding the `crawler.workers.pool_size.limit` value in the `elasticsearch.yml` file.
- Entry points should not have leading spaces.
  Whitespace is not stripped from entry points, so leading spaces will be included in the URL, leading to errors.
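  Since the crawler does not strip whitespace itself, trimming entry points before configuring them avoids the malformed URLs. A one-line guard (illustrative, not part of the crawler):

  ```python
  def clean_entry_points(points):
      """Strip surrounding whitespace so it never ends up in the crawled URL."""
      return [p.strip() for p in points]

  # " /docs" would otherwise be joined into the URL with its leading space:
  assert clean_entry_points([" /docs", "/blog ", "/news"]) == ["/docs", "/blog", "/news"]
  ```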
- The crawler does not support UTF-16LE encoding.
  The crawler does not support UTF-16 little-endian encoding. A workaround is to encode your files in a supported encoding such as UTF-8.
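  A simple way to apply the workaround is to transcode affected files before they are served. This sketch (an assumption about your setup, not crawler functionality) converts UTF-16LE bytes to UTF-8:

  ```python
  def utf16le_to_utf8(data: bytes) -> bytes:
      """Re-encode UTF-16LE content as UTF-8, which the crawler supports."""
      return data.decode("utf-16-le").encode("utf-8")

  original = "héllo wörld".encode("utf-16-le")  # unsupported by the crawler
  converted = utf16le_to_utf8(original)
  assert converted.decode("utf-8") == "héllo wörld"
  ```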
- Upgrading Enterprise Search from versions `<=8.5.x` to versions `>=8.9.x` will fail if Connectors or the Elastic web crawler were used.
  In 8.5, Connectors and the Elastic web crawler created records in `.elastic-connectors-sync-jobs` with a pre-GA structure that is incompatible with the GA document schema. As a result, a migration added in 8.9.0 will fail with the error:

    Status: [Failed] Applying Actastic Migration: <ChangeSchedulingAndAddJobTypeAndSyncCursorToConnectors version=20230526133312>: Error = NoMethodError: undefined method `[]=' for nil:NilClass

  Additionally, after this error appears 20 times, a second error will begin to appear:

    Status: [Failed] Applying Actastic Migration: <ChangeSchedulingAndAddJobTypeAndSyncCursorToConnectors version=20230526133312>: Error = Swiftype::ES::UnknownEsError: [1:2807] failed to parse: Limit of mapping depth [20] has been exceeded due to object field [scheduling.full.full.full.full.full.full.full.full.full.full.full.full.full.full.full.full.full.full.access_control]

  To work around these issues and move forward with upgrading, remove the `.elastic-connectors*` indices:

    DELETE .elastic-connectors*