Elastic OneDrive connector reference
The Elastic OneDrive connector is written in Python using the Elastic connector framework.
View the source code for this connector (branch 8.x, compatible with Elastic 8.17).
Elastic managed connector reference
Availability and prerequisites
This connector is available as a managed connector as of Elastic version 8.11.0.
To use this connector natively in Elastic Cloud, satisfy all managed connector requirements.
Create a OneDrive connector
Use the UI
To create a new OneDrive connector:
- In the Kibana UI, navigate to the Search → Content → Connectors page from the main menu, or use the global search field.
- Follow the instructions to create a new native OneDrive connector.
For additional operations, see Connectors UI in Kibana.
Use the API
You can use the Elasticsearch Create connector API to create a new native OneDrive connector.
For example:
resp = client.connector.put(
    connector_id="my-onedrive-connector",
    index_name="my-elasticsearch-index",
    name="Content synced from OneDrive",
    service_type="onedrive",
    is_native=True,
)
print(resp)
PUT _connector/my-onedrive-connector
{
  "index_name": "my-elasticsearch-index",
  "name": "Content synced from OneDrive",
  "service_type": "onedrive",
  "is_native": true
}
You’ll also need to create an API key for the connector to use.
The user needs the manage_api_key, manage_connector, and write_connector_secrets cluster privileges to generate API keys programmatically.
To create an API key for the connector:
- Run the following command, replacing values where indicated. Note the id and encoded return values from the response:

resp = client.security.create_api_key(
    name="my-connector-api-key",
    role_descriptors={
        "my-connector-connector-role": {
            "cluster": ["monitor", "manage_connector"],
            "indices": [
                {
                    "names": [
                        "my-index_name",
                        ".search-acl-filter-my-index_name",
                        ".elastic-connectors*",
                    ],
                    "privileges": ["all"],
                    "allow_restricted_indices": False,
                }
            ],
        }
    },
)
print(resp)
const response = await client.security.createApiKey({
  name: "my-connector-api-key",
  role_descriptors: {
    "my-connector-connector-role": {
      cluster: ["monitor", "manage_connector"],
      indices: [
        {
          names: [
            "my-index_name",
            ".search-acl-filter-my-index_name",
            ".elastic-connectors*",
          ],
          privileges: ["all"],
          allow_restricted_indices: false,
        },
      ],
    },
  },
});
console.log(response);
POST /_security/api_key
{
  "name": "my-connector-api-key",
  "role_descriptors": {
    "my-connector-connector-role": {
      "cluster": ["monitor", "manage_connector"],
      "indices": [
        {
          "names": [
            "my-index_name",
            ".search-acl-filter-my-index_name",
            ".elastic-connectors*"
          ],
          "privileges": ["all"],
          "allow_restricted_indices": false
        }
      ]
    }
  }
}
- Use the encoded value to store a connector secret, and note the id return value from this response:

resp = client.connector.secret_post(
    body={"value": "encoded_api_key"},
)
print(resp)
const response = await client.connector.secretPost({
  body: {
    value: "encoded_api_key",
  },
});
console.log(response);
POST _connector/_secret
{
  "value": "encoded_api_key"
}
- Use the API key id and the connector secret id to update the connector:

resp = client.connector.update_api_key_id(
    connector_id="my_connector_id",
    api_key_id="API key_id",
    api_key_secret_id="secret_id",
)
print(resp)
const response = await client.connector.updateApiKeyId({
  connector_id: "my_connector_id",
  api_key_id: "API key_id",
  api_key_secret_id: "secret_id",
});
console.log(response);
PUT /_connector/my_connector_id/_api_key_id
{
  "api_key_id": "API key_id",
  "api_key_secret_id": "secret_id"
}
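Taken together, the three steps above can be scripted. The sketch below is a hypothetical helper, not part of the connector framework; it assumes an already-configured Python Elasticsearch client and omits error handling:

```python
def provision_connector_api_key(client, connector_id: str, index_name: str) -> str:
    """Create an API key, store it as a connector secret, and attach both to the connector."""
    # Step 1: create the API key with the role described above.
    key = client.security.create_api_key(
        name=f"{connector_id}-api-key",
        role_descriptors={
            f"{connector_id}-role": {
                "cluster": ["monitor", "manage_connector"],
                "indices": [
                    {
                        "names": [
                            index_name,
                            f".search-acl-filter-{index_name}",
                            ".elastic-connectors*",
                        ],
                        "privileges": ["all"],
                        "allow_restricted_indices": False,
                    }
                ],
            }
        },
    )
    # Step 2: store the encoded credential as a connector secret.
    secret = client.connector.secret_post(body={"value": key["encoded"]})
    # Step 3: point the connector at the API key id and the secret id.
    client.connector.update_api_key_id(
        connector_id=connector_id,
        api_key_id=key["id"],
        api_key_secret_id=secret["id"],
    )
    return key["id"]
```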
Refer to the Elasticsearch API documentation for details of all available Connector APIs.
Usage
To use this connector natively in Elastic Cloud, see Elastic managed connectors.
For additional operations, see Connectors UI in Kibana.
Connecting to OneDrive
To connect to OneDrive you need to create an Azure Active Directory application and service principal that can access resources.
Follow these steps:
- Go to the Azure portal and sign in with your Azure account.
- Navigate to the Azure Active Directory service.
- Select App registrations from the left-hand menu.
- Click on the New registration button to register a new application.
- Provide a name for your app, and optionally select the supported account types (e.g., single tenant, multi-tenant).
- Click on the Register button to create the app registration.
- After the registration is complete, you will be redirected to the app’s overview page. Take note of the Application (client) ID value, as you’ll need it later.
- Scroll down to the API permissions section and click on the Add a permission button.
- In the Request API permissions pane, select Microsoft Graph as the API.
- Choose Application permissions and select the following permissions under the Application tab: User.Read.All, File.Read.All
- Click the Add permissions button to add the selected permissions to your app. Finally, click the Grant admin consent button to grant the required permissions to the app. This step requires administrative privileges. NOTE: If you are not an admin, you need to ask an admin to grant consent via their Azure Portal.
- Click the Certificates & Secrets tab, go to Client secrets, generate a new client secret, and note the string shown in the Value column.
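Once the registration is complete, you can sanity-check the client ID, client secret, and tenant ID by requesting a Microsoft Graph token with the standard OAuth2 client credentials flow. This is a minimal illustrative sketch, not part of the connector; the helper names are made up:

```python
import json
import urllib.parse
import urllib.request


def build_token_request(tenant_id: str, client_id: str, client_secret: str):
    """Build the OAuth2 client-credentials token request for Microsoft Graph."""
    url = f"https://login.microsoftonline.com/{tenant_id}/oauth2/v2.0/token"
    data = urllib.parse.urlencode({
        "grant_type": "client_credentials",
        "client_id": client_id,
        "client_secret": client_secret,
        # .default requests the application permissions granted above.
        "scope": "https://graph.microsoft.com/.default",
    }).encode()
    return url, data


def fetch_token(tenant_id: str, client_id: str, client_secret: str) -> str:
    """POST the token request and return the bearer token (requires network access)."""
    url, data = build_token_request(tenant_id, client_id, client_secret)
    with urllib.request.urlopen(urllib.request.Request(url, data=data)) as resp:
        return json.load(resp)["access_token"]
```

If the app registration or admin consent is incomplete, the token endpoint returns an error body describing the missing piece, which makes this a quick way to debug the Azure side before configuring the connector.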
Configuration
The following configuration fields are required:
- Azure application Client ID
  Unique identifier for your Azure application, found on the app’s overview page. Example: ab123453-12a2-100a-1123-93fd09d67394
- Azure application Client Secret
  String value that the application uses to prove its identity when requesting a token, available under the Certificates & Secrets tab of your Azure application menu. Example: eyav1~12aBadIg6SL-STDfg102eBfCGkbKBq_Ddyu
- Azure application Tenant ID
  Unique identifier of your Azure Active Directory instance. Example: 123a1b23-12a3-45b6-7c8d-fc931cfb448d
- Enable document level security
  Toggle to enable document level security. When enabled:
  - Full syncs will fetch access control lists for each document and store them in the _allow_access_control field.
  - Access control syncs will fetch users' access control lists and store them in a separate index.
Enabling DLS for your connector will cause a significant performance degradation, as the API calls to the data source required for this functionality are rate limited. This impacts the speed at which your content can be retrieved.
Content Extraction
Refer to Content extraction for more details.
Documents and syncs
The connector syncs the following objects and entities:
- Files (includes metadata such as file name, path, size, content, etc.)
- Folders
Note:
- Content from files bigger than 10 MB won’t be extracted. (Self-managed connectors can use the self-managed local extraction service to handle larger binary files.)
- Permissions are not synced by default. You must first enable DLS. Otherwise, all documents indexed to an Elastic deployment will be visible to all users with access to that Elastic deployment.
Sync types
Full syncs are supported by default for all connectors.
This connector also supports incremental syncs.
Document level security
Document level security (DLS) enables you to restrict access to documents based on a user’s permissions. This feature is available by default for the OneDrive connector. See Configuration for how to enable DLS for this connector.
Refer to document level security for more details about this feature.
Refer to DLS in Search Applications to learn how to ingest data with DLS enabled, when building a search application.
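When DLS is enabled, each document carries the identities allowed to see it in the _allow_access_control field, so queries issued on behalf of a user are typically filtered on that field. The sketch below is illustrative only; the identity values are hypothetical, and in practice the filter values come from the access control index:

```python
def dls_filtered_query(user_query: dict, identities: list) -> dict:
    """Wrap a query so only documents visible to the given identities can match."""
    return {
        "bool": {
            "must": user_query,
            # Documents match when any stored ACL entry overlaps the user's identities.
            "filter": {"terms": {"_allow_access_control": identities}},
        }
    }


query = dls_filtered_query(
    {"match": {"content": "quarterly report"}},
    ["user1-domain@onmicrosoft.com"],
)
```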
Sync rules
Basic sync rules are identical for all connectors and are available by default. For more information read Types of sync rule.
Advanced sync rules
This connector supports advanced sync rules for remote filtering. These rules cover complex query-and-filter scenarios that cannot be expressed with basic sync rules. Advanced sync rules are defined through a source-specific DSL JSON snippet.
A full sync is required for advanced sync rules to take effect.
Here are a few examples of advanced sync rules for this connector.
This rule skips indexing for files with .xlsx and .docx extensions. All other files and folders will be indexed.

[
  {
    "skipFilesWithExtensions": [".xlsx", ".docx"]
  }
]
This rule indexes files and folders owned by user1-domain@onmicrosoft.com and user2-domain@onmicrosoft.com, but excludes files with the .py extension.

[
  {
    "owners": ["user1-domain@onmicrosoft.com", "user2-domain@onmicrosoft.com"],
    "skipFilesWithExtensions": [".py"]
  }
]
This rule indexes only the files and folders directly inside the root folder, excluding any .md files.

[
  {
    "skipFilesWithExtensions": [".md"],
    "parentPathPattern": "/drive/root:"
  }
]
This rule indexes files and folders owned by user1-domain@onmicrosoft.com and user3-domain@onmicrosoft.com that are directly inside the abc folder, which is a subfolder of any folder under the hello directory in the root. Files with extensions .pdf and .py are excluded.

[
  {
    "owners": ["user1-domain@onmicrosoft.com", "user3-domain@onmicrosoft.com"],
    "skipFilesWithExtensions": [".pdf", ".py"],
    "parentPathPattern": "/drive/root:/hello/**/abc"
  }
]
This example contains two rules. The first rule indexes all files and folders owned by user1-domain@onmicrosoft.com and user2-domain@onmicrosoft.com. The second rule indexes files for all other users, but skips files with a .py extension.

[
  {
    "owners": ["user1-domain@onmicrosoft.com", "user2-domain@onmicrosoft.com"]
  },
  {
    "skipFilesWithExtensions": [".py"]
  }
]
This example contains two rules. The first rule indexes all files owned by user1-domain@onmicrosoft.com and user2-domain@onmicrosoft.com, excluding .md files. The second rule indexes files and folders recursively inside the abc folder.

[
  {
    "owners": ["user1-domain@onmicrosoft.com", "user2-domain@onmicrosoft.com"],
    "skipFilesWithExtensions": [".md"]
  },
  {
    "parentPathPattern": "/drive/root:/abc/**"
  }
]
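To reason about how these rules combine, it can help to prototype the matching semantics locally. The sketch below is a rough approximation of the behavior described in the examples above, not the connector's actual implementation; fnmatch stands in for the ** pattern matching, and the function names are made up:

```python
from fnmatch import fnmatch


def rule_matches(rule: dict, owner: str, filename: str, parent_path: str) -> bool:
    """Approximate whether one advanced sync rule admits a file."""
    if "owners" in rule and owner not in rule["owners"]:
        return False
    if "parentPathPattern" in rule and not fnmatch(parent_path, rule["parentPathPattern"]):
        return False
    skipped = tuple(rule.get("skipFilesWithExtensions", []))
    if skipped and filename.endswith(skipped):
        return False
    return True


def is_indexed(rules: list, owner: str, filename: str, parent_path: str) -> bool:
    """A document is indexed if any rule in the list admits it."""
    return any(rule_matches(r, owner, filename, parent_path) for r in rules)
```

For instance, the two-rule example above admits notes.txt for any owner (via the second rule) but rejects script.py for owners outside the first rule's list.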
Content Extraction
See Content extraction.
Known issues
- Enabling document-level security impacts performance.
Enabling DLS for your connector will cause a significant performance degradation, as the API calls to the data source required for this functionality are rate limited. This impacts the speed at which your content can be retrieved.
Refer to Known issues for a list of known issues for all connectors.
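Clients that call the same rate-limited Microsoft Graph APIs directly typically deal with throttling by backing off and retrying. The helper below is a generic illustration of that pattern; the retry count, base delay, and cap are arbitrary assumptions, not connector settings:

```python
import random


def backoff_delays(retries: int = 5, base: float = 1.0, cap: float = 60.0):
    """Yield capped exponential backoff delays with jitter, in seconds."""
    for attempt in range(retries):
        delay = min(cap, base * (2 ** attempt))
        # Jitter spreads retries out so concurrent clients don't retry in lockstep.
        yield delay * random.uniform(0.5, 1.0)
```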
Troubleshooting
See Troubleshooting.
Security
See Security.
Self-managed connector
Availability and prerequisites
This connector is available as a self-managed connector.
This self-managed connector is compatible with Elastic versions 8.10.0+.
To use this connector, satisfy all self-managed connector requirements.
Create a OneDrive connector
Use the UI
To create a new OneDrive connector:
- In the Kibana UI, navigate to the Search → Content → Connectors page from the main menu, or use the global search field.
- Follow the instructions to create a new OneDrive self-managed connector.
Use the API
You can use the Elasticsearch Create connector API to create a new self-managed OneDrive connector.
For example:
resp = client.connector.put(
    connector_id="my-onedrive-connector",
    index_name="my-elasticsearch-index",
    name="Content synced from OneDrive",
    service_type="onedrive",
)
print(resp)
PUT _connector/my-onedrive-connector
{
  "index_name": "my-elasticsearch-index",
  "name": "Content synced from OneDrive",
  "service_type": "onedrive"
}
You’ll also need to create an API key for the connector to use.
The user needs the manage_api_key, manage_connector, and write_connector_secrets cluster privileges to generate API keys programmatically.
To create an API key for the connector:
- Run the following command, replacing values where indicated. Note the encoded return value from the response:

resp = client.security.create_api_key(
    name="connector_name-connector-api-key",
    role_descriptors={
        "connector_name-connector-role": {
            "cluster": ["monitor", "manage_connector"],
            "indices": [
                {
                    "names": [
                        "index_name",
                        ".search-acl-filter-index_name",
                        ".elastic-connectors*",
                    ],
                    "privileges": ["all"],
                    "allow_restricted_indices": False,
                }
            ],
        }
    },
)
print(resp)
const response = await client.security.createApiKey({
  name: "connector_name-connector-api-key",
  role_descriptors: {
    "connector_name-connector-role": {
      cluster: ["monitor", "manage_connector"],
      indices: [
        {
          names: [
            "index_name",
            ".search-acl-filter-index_name",
            ".elastic-connectors*",
          ],
          privileges: ["all"],
          allow_restricted_indices: false,
        },
      ],
    },
  },
});
console.log(response);
POST /_security/api_key
{
  "name": "connector_name-connector-api-key",
  "role_descriptors": {
    "connector_name-connector-role": {
      "cluster": ["monitor", "manage_connector"],
      "indices": [
        {
          "names": [
            "index_name",
            ".search-acl-filter-index_name",
            ".elastic-connectors*"
          ],
          "privileges": ["all"],
          "allow_restricted_indices": false
        }
      ]
    }
  }
}
- Update your config.yml file with the API key encoded value.
Refer to the Elasticsearch API documentation for details of all available Connector APIs.
Usage
For additional operations, see Connectors UI in Kibana.
Connecting to OneDrive
To connect to OneDrive you need to create an Azure Active Directory application and service principal that can access resources.
Follow these steps:
- Go to the Azure portal and sign in with your Azure account.
- Navigate to the Azure Active Directory service.
- Select App registrations from the left-hand menu.
- Click on the New registration button to register a new application.
- Provide a name for your app, and optionally select the supported account types (e.g., single tenant, multi-tenant).
- Click on the Register button to create the app registration.
- After the registration is complete, you will be redirected to the app’s overview page. Take note of the Application (client) ID value, as you’ll need it later.
- Scroll down to the API permissions section and click on the Add a permission button.
- In the Request API permissions pane, select Microsoft Graph as the API.
- Choose Application permissions and select the following permissions under the Application tab: User.Read.All, File.Read.All
- Click the Add permissions button to add the selected permissions to your app. Finally, click the Grant admin consent button to grant the required permissions to the app. This step requires administrative privileges. NOTE: If you are not an admin, you need to ask an admin to grant consent via their Azure Portal.
- Click the Certificates & Secrets tab, go to Client secrets, generate a new client secret, and note the string shown in the Value column.
Deployment using Docker
Self-managed connectors are run on your own infrastructure.
You can deploy the OneDrive connector as a self-managed connector using Docker. Follow these instructions.
Step 1: Download sample configuration file
Download the sample configuration file. You can either download it manually or run the following command:
curl https://raw.githubusercontent.com/elastic/connectors/main/config.yml.example --output ~/connectors-config/config.yml
Remember to update the --output argument value if your directory name is different, or if you want to use a different config file name.
Step 2: Update the configuration file for your self-managed connector
Update the configuration file with the following settings to match your environment:
- elasticsearch.host
- elasticsearch.api_key
- connectors
If you’re running the connector service against a Dockerized version of Elasticsearch and Kibana, your config file will look like this:
# When connecting to your cloud deployment you should edit the host value
elasticsearch.host: http://host.docker.internal:9200
elasticsearch.api_key: <ELASTICSEARCH_API_KEY>

connectors:
  - connector_id: <CONNECTOR_ID_FROM_KIBANA>
    service_type: onedrive
    api_key: <CONNECTOR_API_KEY_FROM_KIBANA> # Optional. If not provided, the connector will use the elasticsearch.api_key instead
Using the elasticsearch.api_key is the recommended authentication method. However, you can also use elasticsearch.username and elasticsearch.password to authenticate with your Elasticsearch instance.
Note: You can change other default configurations by simply uncommenting specific settings in the configuration file and modifying their values.
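Before starting the service, you can sanity-check that the settings above are present. The sketch below is a hypothetical helper, not part of the connector framework; it validates an already-parsed mapping, treating the dotted names as flat keys as in the snippet above, and omits the YAML parsing step itself:

```python
def validate_connector_config(config: dict) -> list:
    """Return a list of human-readable problems; an empty list means the config looks usable."""
    problems = []
    if not config.get("elasticsearch.host"):
        problems.append("elasticsearch.host is missing")
    # Either an API key or username/password credentials must be present.
    if not config.get("elasticsearch.api_key") and not (
        config.get("elasticsearch.username") and config.get("elasticsearch.password")
    ):
        problems.append("set elasticsearch.api_key or elasticsearch.username/password")
    connectors = config.get("connectors") or []
    if not connectors:
        problems.append("no connectors configured")
    for i, connector in enumerate(connectors):
        for key in ("connector_id", "service_type"):
            if not connector.get(key):
                problems.append(f"connectors[{i}] is missing {key}")
    return problems
```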
Step 3: Run the Docker image
Run the Docker image with the Connector Service using the following command:
docker run \
  -v ~/connectors-config:/config \
  --network "elastic" \
  --tty \
  --rm \
  docker.elastic.co/enterprise-search/elastic-connectors:8.17.0.0 \
  /app/bin/elastic-ingest \
  -c /config/config.yml
Refer to DOCKER.md in the elastic/connectors repo for more details.
Find all available Docker images in the official registry.
We also have a quickstart self-managed option using Docker Compose, so you can spin up all required services at once: Elasticsearch, Kibana, and the connectors service.
Refer to this README in the elastic/connectors repo for more information.
Configuration
The following configuration fields are required:
- client_id
  Azure application Client ID, the unique identifier for your Azure application, found on the app’s overview page. Example: ab123453-12a2-100a-1123-93fd09d67394
- client_secret
  Azure application Client Secret, the string value that the application uses to prove its identity when requesting a token, available under the Certificates & Secrets tab of your Azure application menu. Example: eyav1~12aBadIg6SL-STDfg102eBfCGkbKBq_Ddyu
- tenant_id
  Azure application Tenant ID, the unique identifier of your Azure Active Directory instance. Example: 123a1b23-12a3-45b6-7c8d-fc931cfb448d
- retry_count
  The number of retry attempts after a failed request to OneDrive. Default value is 3.
- use_document_level_security
  Toggle to enable document level security. When enabled:
  - Full syncs will fetch access control lists for each document and store them in the _allow_access_control field.
  - Access control syncs will fetch users' access control lists and store them in a separate index.
  Enabling DLS for your connector will cause a significant performance degradation, as the API calls to the data source required for this functionality are rate limited. This impacts the speed at which your content can be retrieved.
- use_text_extraction_service
  Requires a separate deployment of the Elastic Text Extraction Service. Requires that ingest pipeline settings disable text extraction. Default value is False.
Content Extraction
Refer to Content extraction for more details.
Documents and syncs
The connector syncs the following objects and entities:
- Files (includes metadata such as file name, path, size, content, etc.)
- Folders
Note:
- Content from files bigger than 10 MB won’t be extracted by default. You can use the self-managed local extraction service to handle larger binary files.
- Permissions are not synced by default. You must first enable DLS. Otherwise, all documents indexed to an Elastic deployment will be visible to all users with access to that Elastic deployment.
Sync types
Full syncs are supported by default for all connectors.
This connector also supports incremental syncs.
Document level security
Document level security (DLS) enables you to restrict access to documents based on a user’s permissions. This feature is available by default for the OneDrive connector. See Configuration for how to enable DLS for this connector.
Refer to document level security for more details about this feature.
Refer to DLS in Search Applications to learn how to ingest data with DLS enabled, when building a search application.
Sync rules
Basic sync rules are identical for all connectors and are available by default. For more information read Types of sync rule.
Advanced sync rules
This connector supports advanced sync rules for remote filtering. These rules cover complex query-and-filter scenarios that cannot be expressed with basic sync rules. Advanced sync rules are defined through a source-specific DSL JSON snippet.
A full sync is required for advanced sync rules to take effect.
Here are a few examples of advanced sync rules for this connector.
This rule skips indexing for files with .xlsx and .docx extensions. All other files and folders will be indexed.

[
  {
    "skipFilesWithExtensions": [".xlsx", ".docx"]
  }
]
This rule indexes files and folders owned by user1-domain@onmicrosoft.com and user2-domain@onmicrosoft.com, but excludes files with the .py extension.

[
  {
    "owners": ["user1-domain@onmicrosoft.com", "user2-domain@onmicrosoft.com"],
    "skipFilesWithExtensions": [".py"]
  }
]
This rule indexes only the files and folders directly inside the root folder, excluding any .md files.

[
  {
    "skipFilesWithExtensions": [".md"],
    "parentPathPattern": "/drive/root:"
  }
]
This rule indexes files and folders owned by user1-domain@onmicrosoft.com and user3-domain@onmicrosoft.com that are directly inside the abc folder, which is a subfolder of any folder under the hello directory in the root. Files with extensions .pdf and .py are excluded.

[
  {
    "owners": ["user1-domain@onmicrosoft.com", "user3-domain@onmicrosoft.com"],
    "skipFilesWithExtensions": [".pdf", ".py"],
    "parentPathPattern": "/drive/root:/hello/**/abc"
  }
]
This example contains two rules. The first rule indexes all files and folders owned by user1-domain@onmicrosoft.com and user2-domain@onmicrosoft.com. The second rule indexes files for all other users, but skips files with a .py extension.

[
  {
    "owners": ["user1-domain@onmicrosoft.com", "user2-domain@onmicrosoft.com"]
  },
  {
    "skipFilesWithExtensions": [".py"]
  }
]
This example contains two rules. The first rule indexes all files owned by user1-domain@onmicrosoft.com and user2-domain@onmicrosoft.com, excluding .md files. The second rule indexes files and folders recursively inside the abc folder.

[
  {
    "owners": ["user1-domain@onmicrosoft.com", "user2-domain@onmicrosoft.com"],
    "skipFilesWithExtensions": [".md"]
  },
  {
    "parentPathPattern": "/drive/root:/abc/**"
  }
]
Content Extraction
See Content extraction.
Self-managed connector operations
End-to-end testing
The connector framework enables operators to run functional tests against a real data source. Refer to Connector testing for more details.
To perform E2E testing for the OneDrive connector, run the following command:
$ make ftest NAME=onedrive
For faster tests, add the DATA_SIZE=small flag:
make ftest NAME=onedrive DATA_SIZE=small
Known issues
- Enabling document-level security impacts performance.
Enabling DLS for your connector will cause a significant performance degradation, as the API calls to the data source required for this functionality are rate limited. This impacts the speed at which your content can be retrieved.
Refer to Known issues for a list of known issues for all connectors.
Troubleshooting
See Troubleshooting.
Security
See Security.