Elastic SharePoint Online connector reference
Looking for the SharePoint Server connector? See SharePoint Server reference.
The Elastic SharePoint Online connector is a connector for Microsoft SharePoint Online.
This connector is written in Python using the Elastic connector framework.
View the source code for this connector (branch 8.17, compatible with Elastic 8.17).
Elastic managed connector reference
Availability and prerequisites
This connector is available as a managed connector in Elastic versions 8.9.0 and later. To use this connector natively in Elastic Cloud, satisfy all managed connector requirements.
This connector requires a subscription. View the requirements for this feature under the Elastic Search section of the Elastic Stack subscriptions page.
Usage
To use this connector as a managed connector, see Elastic managed connectors.
For additional operations, see Connectors UI in Kibana.
SharePoint prerequisites
Create SharePoint OAuth app
Before you can configure the connector, you must create an OAuth App in the SharePoint Online platform.
Your connector will authenticate to SharePoint as the registered OAuth application/client.
You’ll collect values (client ID, tenant ID, and client secret) during this process that you’ll need for the configuration step in Kibana.
To get started, first log in to SharePoint Online and access your administrative dashboard. Ensure you are logged in as the Azure Portal service account.
Follow these steps:
- Sign in to https://portal.azure.com/ and click on Azure Active Directory.
- Locate App Registrations and click New Registration.
- Give your app a name - like "Search".
- Leave the Redirect URIs blank for now.
- Register the application.
- Find and keep the Application (client) ID and Directory (tenant) ID handy.
- Locate the Secret by navigating to Client credentials: Certificates & Secrets.
- Select New client secret.
- Pick a name for your client secret. Select an expiration date. (At this expiration date, you will need to generate a new secret and update your connector configuration.)
- Save the client secret Secret ID before leaving this screen.
- Save the client secret Value before leaving this screen.
- Set up the permissions the OAuth App will request from the Azure Portal service account.
- Navigate to API Permissions and click Add Permission.
- Add application permissions until the list looks like the following:
Graph API:
- Sites.Selected
- Files.Read.All
- Group.Read.All
- User.Read.All
SharePoint:
- Sites.Selected
If the Comma-separated list of sites configuration is set to *, or if a user enables the Enumerate all sites toggle, the connector requires the Sites.Read.All permission.
- Grant admin consent, using the Grant Admin Consent link from the permissions screen.
- Save the tenant name (i.e. the domain name) of the Azure platform.
The connector requires application permissions. It does not support delegated permissions (scopes).
SharePoint permissions
Refer to the following documentation for setting SharePoint permissions.
- To set DisableCustomAppAuthentication to false, connect to SharePoint using PowerShell and run:
set-spotenant -DisableCustomAppAuthentication $false
- To assign full permissions to the tenant in SharePoint Online, go to the tenant URL in your browser. The URL follows this pattern: https://<office_365_admin_tenant_URL>/_layouts/15/appinv.aspx. This loads the SharePoint admin center page.
- In the App ID box, enter the application ID that you recorded earlier, and then click Lookup. The application name will appear in the Title box.
- In the App Domain box, type <tenant_name>.onmicrosoft.com.
- In the App’s Permission Request XML box, type the following XML string:
<AppPermissionRequests AllowAppOnlyPolicy="true">
  <AppPermissionRequest Scope="http://sharepoint/content/tenant" Right="FullControl" />
  <AppPermissionRequest Scope="http://sharepoint/social/tenant" Right="Read" />
</AppPermissionRequests>
Compatibility
This connector is compatible with SharePoint Online.
Configuration
Use the following configuration fields to set up the connector:
- Tenant ID
- The tenant ID for the Azure account hosting the SharePoint Online instance.
- Tenant Name
- The tenant name for the Azure account hosting the SharePoint Online instance.
- Client ID
- The client ID to authenticate with SharePoint Online.
- Secret value
- The secret value to authenticate with SharePoint Online.
- Comma-separated list of sites
- List of site collection names or paths to fetch from SharePoint. When enumerating all sites, these values should be the names of the sites. Use * to include all available sites. Examples:
  - collection1
  - collection1,sub-collection
  - *
When not enumerating all sites, these values should be the paths (the URL after /sites/) of the sites. Examples:
  - collection1
  - collection1,collection1/sub-collection
- Enumerate all sites?
- If enabled, the full list of all sites will be fetched from the API, in bulk, and will be filtered down to match the configured list of site names. If disabled, each path in the configured list of site paths will be fetched individually from the API. When disabled, * is not a valid configuration for Comma-separated list of sites. Enabling this configuration is most useful when syncing a large number of sites (more than total/200), because at high volumes it is more efficient to fetch sites in bulk. When syncing fewer sites, disabling this configuration can improve performance, because at low volumes it is more efficient to fetch only the sites that you need.
- Fetch sub-sites of configured sites?
- Whether sub-sites of the configured site(s) should be automatically fetched. This option is only available when not enumerating all sites (see above).
- Enable document level security
- Toggle to enable document level security (DLS). When enabled, full and incremental syncs will fetch access control lists for each document and store them in the _allow_access_control field. Access control syncs will fetch users' access control lists and store them in a separate index. Once enabled, the following granular permissions toggles will be available:
  - Fetch drive item permissions: Enable this option to fetch drive item specific permissions.
  - Fetch unique page permissions: Enable this option to fetch unique page permissions. If this setting is disabled, a page will inherit permissions from its parent site.
  - Fetch unique list permissions: Enable this option to fetch unique list permissions. If this setting is disabled, a list will inherit permissions from its parent site.
  - Fetch unique list item permissions: Enable this option to fetch unique list item permissions. If this setting is disabled, a list item will inherit permissions from its parent site.
If left empty, the default value true will be used for these granular permissions toggles. Note that these settings may increase sync times.
Documents and syncs
The connector syncs the following SharePoint object types:
- Sites (and subsites)
- Lists
- List items and attachment content
- Document libraries and attachment content (including web pages)
- Content from files bigger than 10 MB won’t be extracted. (Self-managed connectors can use the self-managed local extraction service to handle larger binary files.)
- Permissions are not synced by default. Enable document-level security (DLS) to sync permissions.
Making SharePoint Site Pages Web Part content searchable
If you’re using Web Parts on SharePoint Site Pages and want to make this content searchable, you’ll need to consult the official documentation.
We recommend setting isHtmlString to True for all Web Parts that need to be searchable.
Limitations
- The connector does not currently sync content from Teams-connected sites.
Sync rules
Basic sync rules are identical for all connectors and are available by default. For more information read Types of sync rule.
Advanced sync rules
A full sync is required for advanced sync rules to take effect.
The following section describes advanced sync rules for this connector. Advanced sync rules are defined through a source-specific DSL JSON snippet.
Advanced rules for the SharePoint Online connector enable you to avoid extracting and syncing older data that might no longer be relevant for search.
Example:
{ "skipExtractingDriveItemsOlderThan": 60 }
This rule skips content extraction for any drive items (files in document libraries) that have not been modified for 60 days or more.
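The effect of this rule can be sketched in Python (an illustration of the semantics only, not the connector's actual implementation; the function name is hypothetical):

```python
from datetime import datetime, timedelta, timezone

def should_extract_content(last_modified: datetime, skip_older_than_days: int) -> bool:
    # Mirrors skipExtractingDriveItemsOlderThan: drive items that have not
    # been modified within the window are skipped during content extraction.
    cutoff = datetime.now(timezone.utc) - timedelta(days=skip_older_than_days)
    return last_modified > cutoff

# A file modified yesterday still gets its content extracted;
# one untouched for 90 days does not.
recent = datetime.now(timezone.utc) - timedelta(days=1)
stale = datetime.now(timezone.utc) - timedelta(days=90)
```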
Limitations of sync rules with incremental syncs
Changing sync rules after SharePoint Online content has already been indexed can lead to unexpected results when using incremental syncs.
Incremental syncs apply updates from the third-party system, but do not modify existing documents in the index.
To avoid these issues, run a full sync after changing sync rules (basic or advanced).
Let’s take a look at several examples where incremental syncs might lead to inconsistent data on your index.
Example: Restrictive basic sync rule added after a full sync
Imagine your SharePoint Online drive contains the following drive items:
/Documents/Report.doc
/Documents/Spreadsheet.xls
/Presentations/Q4-2020-Report.pdf
/Presentations/Q4-2020-Report-Data.xls
/Personal/Documents/Sales.xls
After a sync, all these drive items will be stored in your Elasticsearch index. Let’s add a basic sync rule, filtering files by their path:
Exclude WHERE path CONTAINS "Documents"
This filtering rule will exclude all files with "Documents" in their path, leaving only the files in the /Presentations directory:
/Presentations/Q4-2020-Report.pdf
/Presentations/Q4-2020-Report-Data.xls
If no files were changed, an incremental sync will not receive information about changes from SharePoint Online and won’t be able to delete any files, leaving the index in the same state it was in before the sync.
After a full sync, the index will be updated and files that are excluded by sync rules will be removed.
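For illustration, the exclusion rule above can be modeled in Python over the example listing (a sketch of the filtering semantics, not connector code; the helper name is hypothetical):

```python
drive_item_paths = [
    "/Documents/Report.doc",
    "/Documents/Spreadsheet.xls",
    "/Presentations/Q4-2020-Report.pdf",
    "/Presentations/Q4-2020-Report-Data.xls",
    "/Personal/Documents/Sales.xls",
]

def exclude_where_path_contains(paths, needle):
    # Exclude WHERE path CONTAINS "<needle>": drop every path that
    # contains the substring anywhere, including in sub-folders.
    return [p for p in paths if needle not in p]

remaining = exclude_where_path_contains(drive_item_paths, "Documents")
# remaining -> only the two files under /Presentations
```

Note that /Personal/Documents/Sales.xls is also excluded, because CONTAINS matches the substring anywhere in the path.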
Example: Restrictive basic sync rules removed after a full sync
Imagine that your SharePoint Online drive has the following drive items:
/Documents/Report.doc
/Documents/Spreadsheet.xls
/Presentations/Q4-2020-Report.pdf
/Presentations/Q4-2020-Report-Data.xls
/Personal/Documents/Sales.xls
Before doing a sync, we add a restrictive basic filtering rule:
Exclude WHERE path CONTAINS "Documents"
After a full sync, the index will contain only the files in the /Presentations directory:
/Presentations/Q4-2020-Report.pdf
/Presentations/Q4-2020-Report-Data.xls
Afterwards, we can remove the filtering rule and run an incremental sync. If no changes happened to the files, the incremental sync will not mirror these changes in the Elasticsearch index, because SharePoint Online will not report any changes to the items. Only a full sync will include the items previously ignored by the sync rule.
Example: Advanced sync rules edge case
Advanced sync rules can be applied to limit which documents will have content extracted. For example, it’s possible to set a rule so that documents older than 180 days won’t have content extracted.
However, there is an edge case. Imagine a document that is 179 days old and its content is extracted and indexed into Elasticsearch. After 2 days, this document will be 181 days old. Since this document was already ingested it will not be modified. Therefore, the content will not be removed from the index, following an incremental sync.
In this situation, if you want older documents to be removed, you will need to clean the index up manually. For example, you can manually run an Elasticsearch query that removes drive item content older than 180 days:
resp = client.update_by_query(
    index="INDEX_NAME",
    conflicts="proceed",
    query={
        "bool": {
            "filter": [
                {"match": {"object_type": "drive_item"}},
                {"exists": {"field": "file"}},
                {"range": {"lastModifiedDateTime": {"lte": "now-180d"}}},
            ]
        }
    },
    script={
        "source": "ctx._source.body = ''",
        "lang": "painless",
    },
)
print(resp)
const response = await client.updateByQuery({
  index: "INDEX_NAME",
  conflicts: "proceed",
  query: {
    bool: {
      filter: [
        { match: { object_type: "drive_item" } },
        { exists: { field: "file" } },
        { range: { lastModifiedDateTime: { lte: "now-180d" } } },
      ],
    },
  },
  script: {
    source: "ctx._source.body = ''",
    lang: "painless",
  },
});
console.log(response);
POST INDEX_NAME/_update_by_query?conflicts=proceed
{
  "query": {
    "bool": {
      "filter": [
        { "match": { "object_type": "drive_item" } },
        { "exists": { "field": "file" } },
        { "range": { "lastModifiedDateTime": { "lte": "now-180d" } } }
      ]
    }
  },
  "script": {
    "source": "ctx._source.body = ''",
    "lang": "painless"
  }
}
Document-level security
Document-level security (DLS) enables you to restrict access to documents based on a user’s permissions. This feature is available by default for this connector.
Refer to configuration on this page for how to enable DLS for this connector.
Refer to DLS in Search Applications to learn how to ingest data from SharePoint Online with DLS enabled, when building a search application.
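To illustrate how the _allow_access_control field is typically used at query time, here is a minimal sketch that wraps a user's query in a filter on that field (the identity strings and helper function are hypothetical; in practice, filtering is applied for you when you follow the DLS in Search Applications guide):

```python
def build_dls_filtered_query(user_identities, user_query):
    # Restrict matches to documents whose access control list
    # overlaps the querying user's identities.
    return {
        "bool": {
            "must": [user_query],
            "filter": [{"terms": {"_allow_access_control": user_identities}}],
        }
    }

query = build_dls_filtered_query(
    ["user_id:0001", "group:sales"],  # hypothetical identity strings
    {"match": {"body": "quarterly report"}},
)
```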
Content extraction
Default content extraction
The default content extraction service is powered by the Enterprise Search default ingest pipeline. (See Ingest pipelines for Search indices.)
See Content extraction.
Local content extraction (for large files)
The SharePoint Online self-managed connector supports large file content extraction (>100 MB). This requires:
- A self-managed deployment of the Elastic Text Extraction Service.
- Text extraction to be disabled in the default ingest pipeline settings.
Refer to local content extraction for more information.
Known issues
- Documents failing to sync due to SharePoint file and folder limits
SharePoint has limits on the number of files and folders that can be synced. You might encounter an error like the following written to the body of documents that failed to sync:
The file size exceeds the allowed limit. CorrelationId: fdb36977-7cb8-4739-992f-49878ada6686, UTC DateTime: 4/21/2022 11:24:22 PM
Refer to SharePoint documentation for more information about these limits.
- Syncing a large number of files
The connector will fail to download files from folders that contain more than 5000 files. The List View Threshold (default 5000) is a limit that prevents operations with a high performance impact on the SharePoint Online environment.
Workaround: Reduce batch size to avoid this issue.
- Syncing large files
SharePoint has file size limits, but these are configurable.
Workaround: Increase the file size limit. Refer to SharePoint documentation for more information.
- Deleted documents counter is not updated during incremental syncs
If the configuration Enumerate All Sites? is enabled, incremental syncs may not behave as expected. Drive item documents that were deleted between incremental syncs may not be detected as deleted.
Workaround: Disable Enumerate All Sites?, and configure full site paths for all desired sites.
- Refer to Known issues for a list of known issues for all connectors.
Troubleshooting
See Troubleshooting.
Security
See Security.
Self-managed connector
Availability and prerequisites
This connector is available as a self-managed connector. To use this connector as a self-managed connector, satisfy all self-managed connector requirements.
This connector requires a subscription. View the requirements for this feature under the Elastic Search section of the Elastic Stack subscriptions page.
Usage
To use this connector as a self-managed connector, see Self-managed connectors. For additional operations, see Connectors UI in Kibana.
SharePoint prerequisites
Create SharePoint OAuth app
Before you can configure the connector, you must create an OAuth App in the SharePoint Online platform.
Your connector will authenticate to SharePoint as the registered OAuth application/client.
You’ll collect values (client ID, tenant ID, and client secret) during this process that you’ll need for the configuration step in Kibana.
To get started, first log in to SharePoint Online and access your administrative dashboard. Ensure you are logged in as the Azure Portal service account.
Follow these steps:
- Sign in to https://portal.azure.com/ and click on Azure Active Directory.
- Locate App Registrations and click New Registration.
- Give your app a name - like "Search".
- Leave the Redirect URIs blank for now.
- Register the application.
- Find and keep the Application (client) ID and Directory (tenant) ID handy.
- Locate the Secret by navigating to Client credentials: Certificates & Secrets.
- Select New client secret.
- Pick a name for your client secret. Select an expiration date. (At this expiration date, you will need to generate a new secret and update your connector configuration.)
- Save the client secret Secret ID before leaving this screen.
- Save the client secret Value before leaving this screen.
- Set up the permissions the OAuth App will request from the Azure Portal service account.
- Navigate to API Permissions and click Add Permission.
- Add application permissions until the list looks like the following:
Graph API:
- Sites.Selected
- Files.Read.All
- Group.Read.All
- User.Read.All
SharePoint:
- Sites.Selected
If the Comma-separated list of sites configuration is set to *, or if a user enables the Enumerate all sites toggle, the connector requires the Sites.Read.All permission.
- Grant admin consent, using the Grant Admin Consent link from the permissions screen.
- Save the tenant name (i.e. the domain name) of the Azure platform.
The connector requires application permissions. It does not support delegated permissions (scopes).
SharePoint permissions
Refer to the following documentation for setting SharePoint permissions.
- To set DisableCustomAppAuthentication to false, connect to SharePoint using PowerShell and run:
set-spotenant -DisableCustomAppAuthentication $false
- To assign full permissions to the tenant in SharePoint Online, go to the tenant URL in your browser. The URL follows this pattern: https://<office_365_admin_tenant_URL>/_layouts/15/appinv.aspx. This loads the SharePoint admin center page.
- In the App ID box, enter the application ID that you recorded earlier, and then click Lookup. The application name will appear in the Title box.
- In the App Domain box, type <tenant_name>.onmicrosoft.com.
- In the App’s Permission Request XML box, type the following XML string:
<AppPermissionRequests AllowAppOnlyPolicy="true">
  <AppPermissionRequest Scope="http://sharepoint/content/tenant" Right="FullControl" />
  <AppPermissionRequest Scope="http://sharepoint/social/tenant" Right="Read" />
</AppPermissionRequests>
Compatibility
This connector is compatible with SharePoint Online.
Configuration
When using the self-managed connector workflow, initially these fields will use the default configuration set in the connector source code.
These are set in the get_default_configuration function definition.
These configurable fields will be rendered with their respective labels in the Kibana UI. Once connected, you’ll be able to update these values in Kibana.
Use the following configuration fields to set up the connector:
- tenant_id
- The tenant ID for the Azure account hosting the SharePoint Online instance.
- tenant_name
- The tenant name for the Azure account hosting the SharePoint Online instance.
- client_id
- The client ID to authenticate with SharePoint Online.
- secret_value
- The secret value to authenticate with SharePoint Online.
- site_collections
- List of site collection names or paths to fetch from SharePoint. When enumerating all sites, these values should be the names of the sites. Use * to include all available sites. Examples:
  - collection1
  - collection1,sub-collection
  - *
When not enumerating all sites, these values should be the paths (the URL after /sites/) of the sites. Examples:
  - collection1
  - collection1,collection1/sub-collection
- enumerate_all_sites
- If enabled, the full list of all sites will be fetched from the API, in bulk, and will be filtered down to match the configured list of site names. If disabled, each path in the configured list of site paths will be fetched individually from the API. When disabled, * is not a valid configuration for Comma-separated list of sites. Enabling this configuration is most useful when syncing a large number of sites (more than total/200), because at high volumes it is more efficient to fetch sites in bulk. When syncing fewer sites, disabling this configuration can improve performance, because at low volumes it is more efficient to fetch only the sites that you need.
- fetch_subsites
- Whether sub-sites of the configured site(s) should be automatically fetched. This option is only available when not enumerating all sites (see above).
- use_text_extraction_service
- Toggle to enable the local text extraction service for documents. Requires a separate deployment of the Elastic Text Extraction Service. Requires that ingest pipeline settings disable text extraction. Default value is False.
- use_document_level_security
- Toggle to enable document level security (DLS). When enabled, full and incremental syncs will fetch access control lists for each document and store them in the _allow_access_control field. Access control syncs will fetch users' access control lists and store them in a separate index. Once enabled, the following granular permissions toggles will be available:
  - Fetch drive item permissions: Enable this option to fetch drive item specific permissions.
  - Fetch unique page permissions: Enable this option to fetch unique page permissions. If this setting is disabled, a page will inherit permissions from its parent site.
  - Fetch unique list permissions: Enable this option to fetch unique list permissions. If this setting is disabled, a list will inherit permissions from its parent site.
  - Fetch unique list item permissions: Enable this option to fetch unique list item permissions. If this setting is disabled, a list item will inherit permissions from its parent site.
If left empty, the default value true will be used for these granular permissions toggles. Note that these settings may increase sync times.
Deployment using Docker
You can deploy the SharePoint Online connector as a self-managed connector using Docker. Follow these instructions.
Step 1: Download sample configuration file
Download the sample configuration file. You can either download it manually or run the following command:
curl https://raw.githubusercontent.com/elastic/connectors/main/config.yml.example --output ~/connectors-config/config.yml
Remember to update the --output argument value if your directory name is different, or if you want to use a different config file name.
Step 2: Update the configuration file for your self-managed connector
Update the configuration file with the following settings to match your environment:
- elasticsearch.host
- elasticsearch.api_key
- connectors
If you’re running the connector service against a Dockerized version of Elasticsearch and Kibana, your config file will look like this:
# When connecting to your cloud deployment you should edit the host value
elasticsearch.host: http://host.docker.internal:9200
elasticsearch.api_key: <ELASTICSEARCH_API_KEY>

connectors:
  - connector_id: <CONNECTOR_ID_FROM_KIBANA>
    service_type: sharepoint_online
    api_key: <CONNECTOR_API_KEY_FROM_KIBANA> # Optional. If not provided, the connector will use the elasticsearch.api_key instead
Using the elasticsearch.api_key is the recommended authentication method. However, you can also use elasticsearch.username and elasticsearch.password to authenticate with your Elasticsearch instance.
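The fallback behavior noted in the sample config's comment can be sketched as follows (a simplification for illustration, not the connector service's actual code; the helper name is hypothetical):

```python
def resolve_connector_api_key(connector_cfg: dict, elasticsearch_cfg: dict) -> str:
    # A connector-level api_key takes precedence; if it is absent,
    # the service falls back to elasticsearch.api_key.
    return connector_cfg.get("api_key") or elasticsearch_cfg["api_key"]

es_cfg = {"api_key": "ES_LEVEL_KEY"}
fallback = resolve_connector_api_key({"connector_id": "abc"}, es_cfg)
explicit = resolve_connector_api_key({"api_key": "CONNECTOR_KEY"}, es_cfg)
```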
Note: You can change other default configurations by uncommenting specific settings in the configuration file and modifying their values.
Step 3: Run the Docker image
Run the Docker image with the Connector Service using the following command:
docker run \
  -v ~/connectors-config:/config \
  --network "elastic" \
  --tty \
  --rm \
  docker.elastic.co/enterprise-search/elastic-connectors:8.17.0.0 \
  /app/bin/elastic-ingest \
  -c /config/config.yml
Refer to DOCKER.md in the elastic/connectors repo for more details.
Find all available Docker images in the official registry.
We also have a quickstart self-managed option using Docker Compose, so you can spin up all required services at once: Elasticsearch, Kibana, and the connectors service.
Refer to this README in the elastic/connectors repo for more information.
Documents and syncs
The connector syncs the following SharePoint object types:
- Sites (and subsites)
- Lists
- List items and attachment content
- Document libraries and attachment content (including web pages)
Making SharePoint Site Pages Web Part content searchable
If you’re using Web Parts on SharePoint Site Pages and want to make this content searchable, you’ll need to consult the official documentation.
We recommend setting isHtmlString to True for all Web Parts that need to be searchable.
- Content from files bigger than 10 MB won’t be extracted by default. Use the self-managed local extraction service to handle larger binary files.
- Permissions are not synced by default. Enable document-level security (DLS) to sync permissions.
Limitations
- The connector does not currently sync content from Teams-connected sites.
Sync rules
Basic sync rules are identical for all connectors and are available by default. For more information read Types of sync rule.
Advanced sync rules
A full sync is required for advanced sync rules to take effect.
The following section describes advanced sync rules for this connector. Advanced sync rules are defined through a source-specific DSL JSON snippet.
Advanced rules for the SharePoint Online connector enable you to avoid extracting and syncing older data that might no longer be relevant for search.
Example:
{ "skipExtractingDriveItemsOlderThan": 60 }
This rule skips content extraction for any drive items (files in document libraries) that have not been modified for 60 days or more.
Limitations of sync rules with incremental syncs
Changing sync rules after SharePoint Online content has already been indexed can lead to unexpected results when using incremental syncs.
Incremental syncs apply updates from the third-party system, but do not modify existing documents in the index.
To avoid these issues, run a full sync after changing sync rules (basic or advanced).
Let’s take a look at several examples where incremental syncs might lead to inconsistent data on your index.
Example: Restrictive basic sync rule added after a full sync
Imagine your SharePoint Online drive contains the following drive items:
/Documents/Report.doc
/Documents/Spreadsheet.xls
/Presentations/Q4-2020-Report.pdf
/Presentations/Q4-2020-Report-Data.xls
/Personal/Documents/Sales.xls
After a sync, all these drive items will be stored in your Elasticsearch index. Let’s add a basic sync rule, filtering files by their path:
Exclude WHERE path CONTAINS "Documents"
This filtering rule will exclude all files with "Documents" in their path, leaving only the files in the /Presentations directory:
/Presentations/Q4-2020-Report.pdf
/Presentations/Q4-2020-Report-Data.xls
If no files were changed, an incremental sync will not receive information about changes from SharePoint Online and won’t be able to delete any files, leaving the index in the same state it was in before the sync.
After a full sync, the index will be updated and files that are excluded by sync rules will be removed.
Example: Restrictive basic sync rules removed after a full sync
Imagine that your SharePoint Online drive has the following drive items:
/Documents/Report.doc
/Documents/Spreadsheet.xls
/Presentations/Q4-2020-Report.pdf
/Presentations/Q4-2020-Report-Data.xls
/Personal/Documents/Sales.xls
Before doing a sync, we add a restrictive basic filtering rule:
Exclude WHERE path CONTAINS "Documents"
After a full sync, the index will contain only the files in the /Presentations directory:
/Presentations/Q4-2020-Report.pdf
/Presentations/Q4-2020-Report-Data.xls
Afterwards, we can remove the filtering rule and run an incremental sync. If no changes happened to the files, the incremental sync will not mirror these changes in the Elasticsearch index, because SharePoint Online will not report any changes to the items. Only a full sync will include the items previously ignored by the sync rule.
Example: Advanced sync rules edge case
Advanced sync rules can be applied to limit which documents will have content extracted. For example, it’s possible to set a rule so that documents older than 180 days won’t have content extracted.
However, there is an edge case. Imagine a document that is 179 days old and its content is extracted and indexed into Elasticsearch. After 2 days, this document will be 181 days old. Since this document was already ingested it will not be modified. Therefore, the content will not be removed from the index, following an incremental sync.
In this situation, if you want older documents to be removed, you will need to clean the index up manually. For example, you can manually run an Elasticsearch query that removes drive item content older than 180 days:
resp = client.update_by_query(
    index="INDEX_NAME",
    conflicts="proceed",
    query={
        "bool": {
            "filter": [
                {"match": {"object_type": "drive_item"}},
                {"exists": {"field": "file"}},
                {"range": {"lastModifiedDateTime": {"lte": "now-180d"}}},
            ]
        }
    },
    script={
        "source": "ctx._source.body = ''",
        "lang": "painless",
    },
)
print(resp)
const response = await client.updateByQuery({
  index: "INDEX_NAME",
  conflicts: "proceed",
  query: {
    bool: {
      filter: [
        { match: { object_type: "drive_item" } },
        { exists: { field: "file" } },
        { range: { lastModifiedDateTime: { lte: "now-180d" } } },
      ],
    },
  },
  script: {
    source: "ctx._source.body = ''",
    lang: "painless",
  },
});
console.log(response);
POST INDEX_NAME/_update_by_query?conflicts=proceed
{
  "query": {
    "bool": {
      "filter": [
        { "match": { "object_type": "drive_item" } },
        { "exists": { "field": "file" } },
        { "range": { "lastModifiedDateTime": { "lte": "now-180d" } } }
      ]
    }
  },
  "script": {
    "source": "ctx._source.body = ''",
    "lang": "painless"
  }
}
Document-level security
Document-level security (DLS) enables you to restrict access to documents based on a user’s permissions. This feature is available by default for this connector.
Refer to configuration on this page for how to enable DLS for this connector.
Refer to DLS in Search Applications to learn how to ingest data from SharePoint Online with DLS enabled, when building a search application.
Content extraction
Default content extraction
The default content extraction service is powered by the Enterprise Search default ingest pipeline. (See Ingest pipelines for Search indices.)
See Content extraction.
Local content extraction (for large files)
The SharePoint Online self-managed connector supports large file content extraction (>100 MB). This requires:
- A self-managed deployment of the Elastic Text Extraction Service.
- Text extraction to be disabled in the default ingest pipeline settings.
Refer to local content extraction for more information.
End-to-end testing
The connector framework enables operators to run functional tests against a real data source. Refer to Connector testing for more details.
To perform E2E testing for the SharePoint Online connector, run the following command:
$ make ftest NAME=sharepoint_online
For faster tests, add the DATA_SIZE=small flag:
make ftest NAME=sharepoint_online DATA_SIZE=small
Known issues
- Documents failing to sync due to SharePoint file and folder limits
SharePoint has limits on the number of files and folders that can be synced. You might encounter an error like the following written to the body of documents that failed to sync:
The file size exceeds the allowed limit. CorrelationId: fdb36977-7cb8-4739-992f-49878ada6686, UTC DateTime: 4/21/2022 11:24:22 PM
Refer to SharePoint documentation for more information about these limits.
- Syncing a large number of files
The connector will fail to download files from folders that contain more than 5000 files. The List View Threshold (default 5000) is a limit that prevents operations with a high performance impact on the SharePoint Online environment.
Workaround: Reduce batch size to avoid this issue.
- Syncing large files
SharePoint has file size limits, but these are configurable.
Workaround: Increase the file size limit. Refer to SharePoint documentation for more information.
- Deleted documents counter is not updated during incremental syncs
If the configuration Enumerate All Sites? is enabled, incremental syncs may not behave as expected. Drive item documents that were deleted between incremental syncs may not be detected as deleted.
Workaround: Disable Enumerate All Sites?, and configure full site paths for all desired sites.
- Refer to Known issues for a list of known issues for all connectors.
Troubleshooting
See Troubleshooting.
Security
See Security.