Elastic OneDrive connector reference

The Elastic OneDrive connector syncs content from Microsoft OneDrive. This connector is written in Python using the Elastic connector framework.

View the source code for this connector (branch 8.16, compatible with Elastic 8.16).

Elastic managed connector reference

Availability and prerequisites

This connector is available as a managed connector as of Elastic version 8.11.0.

To use this connector natively in Elastic Cloud, satisfy all managed connector requirements.

Create a OneDrive connector

Use the UI

To create a new OneDrive connector:

  1. Navigate to the Search → Connectors page in the Kibana UI.
  2. Follow the instructions to create a new native OneDrive connector.

For additional operations, see Connectors UI in Kibana.

Use the API

You can use the Elasticsearch Create connector API to create a new native OneDrive connector.

For example:

PUT _connector/my-onedrive-connector
{
  "index_name": "my-elasticsearch-index",
  "name": "Content synced from OneDrive",
  "service_type": "onedrive",
  "is_native": true
}
You’ll also need to create an API key for the connector to use.

The user needs the manage_api_key, manage_connector, and write_connector_secrets cluster privileges to generate API keys programmatically.

To create an API key for the connector:

  1. Run the following command, replacing values where indicated. Note the id and encoded return values from the response:

    POST /_security/api_key
    {
      "name": "my-connector-api-key",
      "role_descriptors": {
        "my-connector-connector-role": {
          "cluster": [
            "monitor",
            "manage_connector"
          ],
          "indices": [
            {
              "names": [
                "my-index_name",
                ".search-acl-filter-my-index_name",
                ".elastic-connectors*"
              ],
              "privileges": [
                "all"
              ],
              "allow_restricted_indices": false
            }
          ]
        }
      }
    }
  2. Use the encoded value to store a connector secret, and note the id return value from this response:

    POST _connector/_secret
    {
      "value": "encoded_api_key"
    }
  3. Use the API key id and the connector secret id to update the connector:

    PUT /_connector/my_connector_id/_api_key_id
    {
      "api_key_id": "<api_key_id>",
      "api_key_secret_id": "<secret_id>"
    }
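The request bodies used in the three steps above can be assembled programmatically. The following Python sketch builds them as plain dictionaries (helper names such as `api_key_request` are hypothetical, chosen for illustration; nothing here talks to a cluster):

```python
def api_key_request(connector_name: str, index_name: str) -> dict:
    """Body for POST /_security/api_key (step 1)."""
    return {
        "name": f"{connector_name}-api-key",
        "role_descriptors": {
            f"{connector_name}-connector-role": {
                "cluster": ["monitor", "manage_connector"],
                "indices": [
                    {
                        "names": [
                            index_name,
                            f".search-acl-filter-{index_name}",
                            ".elastic-connectors*",
                        ],
                        "privileges": ["all"],
                        "allow_restricted_indices": False,
                    }
                ],
            }
        },
    }

def secret_request(encoded_api_key: str) -> dict:
    """Body for POST _connector/_secret (step 2)."""
    return {"value": encoded_api_key}

def api_key_id_request(api_key_id: str, secret_id: str) -> dict:
    """Body for PUT /_connector/<connector_id>/_api_key_id (step 3)."""
    return {"api_key_id": api_key_id, "api_key_secret_id": secret_id}
```

Send each body with your HTTP client of choice against the endpoints shown in the steps above.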

Refer to the Elasticsearch API documentation for details of all available Connector APIs.

Usage

To use this connector natively in Elastic Cloud, see Elastic managed connectors.

For additional operations, see Connectors UI in Kibana.

Connecting to OneDrive

To connect to OneDrive you need to create an Azure Active Directory application and service principal that can access resources.

Follow these steps:

  1. Go to the Azure portal and sign in with your Azure account.
  2. Navigate to the Azure Active Directory service.
  3. Select App registrations from the left-hand menu.
  4. Click on the New registration button to register a new application.
  5. Provide a name for your app, and optionally select the supported account types (e.g., single tenant, multi-tenant).
  6. Click on the Register button to create the app registration.
  7. After the registration is complete, you will be redirected to the app’s overview page. Take note of the Application (client) ID value, as you’ll need it later.
  8. Scroll down to the API permissions section and click on the Add a permission button.
  9. In the Request API permissions pane, select Microsoft Graph as the API.
  10. Choose Application permissions and, under the Application tab, select the following permissions: User.Read.All, Files.Read.All
  11. Click on the Add permissions button to add the selected permissions to your app. Finally, click on the Grant admin consent button to grant the required permissions to the app. This step requires administrative privileges. NOTE: If you are not an admin, you need to ask an admin to grant consent via the Azure portal.
  12. Click the Certificates & Secrets tab, go to Client secrets, generate a new client secret, and note the string shown in the Value column.
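Once the app registration exists, the connector authenticates to Microsoft Graph using the OAuth 2.0 client credentials flow with the Client ID, Client Secret, and Tenant ID collected above. The connector performs this handshake for you; the sketch below only illustrates the token request involved (`token_request` is a hypothetical helper):

```python
from urllib.parse import urlencode

GRAPH_SCOPE = "https://graph.microsoft.com/.default"

def token_request(tenant_id: str, client_id: str, client_secret: str):
    """Build the client-credentials token request for the Microsoft
    identity platform v2.0 endpoint. Returns (url, form_body)."""
    url = f"https://login.microsoftonline.com/{tenant_id}/oauth2/v2.0/token"
    body = urlencode({
        "grant_type": "client_credentials",
        "client_id": client_id,
        "client_secret": client_secret,
        "scope": GRAPH_SCOPE,
    })
    return url, body
```

POSTing the returned body to the returned URL with Content-Type application/x-www-form-urlencoded yields a JSON response containing an access_token for Graph calls.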
Configuration

The following configuration fields are required:

Azure application Client ID

Unique identifier for your Azure Application, found on the app’s overview page. Example:

  • ab123453-12a2-100a-1123-93fd09d67394
Azure application Client Secret

String value that the application uses to prove its identity when requesting a token, available under the Certificates & Secrets tab of your Azure application menu. Example:

  • eyav1~12aBadIg6SL-STDfg102eBfCGkbKBq_Ddyu
Azure application Tenant ID

Unique identifier of your Azure Active Directory instance. Example:

  • 123a1b23-12a3-45b6-7c8d-fc931cfb448d
Enable document level security

Toggle to enable document level security. When enabled:

  • Full syncs will fetch access control lists for each document and store them in the _allow_access_control field.
  • Access control syncs will fetch users' access control lists and store them in a separate index.

Enabling DLS for your connector will cause a significant performance degradation, as the API calls to the data source required for this functionality are rate limited. This impacts the speed at which your content can be retrieved.

Content Extraction

Refer to Content extraction for more details.

Documents and syncs

The connector syncs the following objects and entities:

  • Files

    • Includes metadata such as file name, path, size, content, etc.
  • Folders
  • Content from files bigger than 10 MB won’t be extracted. (Self-managed connectors can use the self-managed local extraction service to handle larger binary files.)
  • Permissions are not synced by default. You must first enable DLS. Otherwise, all documents indexed to an Elastic deployment will be visible to all users with access to that deployment.
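The 10 MB rule above amounts to a one-line predicate. This is a sketch of the policy for illustration, not the connector's actual code (`should_extract_content` is a hypothetical name):

```python
TEN_MB = 10 * 1024 * 1024

def should_extract_content(file_size_bytes: int, local_extraction: bool = False) -> bool:
    """Mirror the 10 MB rule: content beyond the limit is skipped unless
    a local extraction service is available to handle large binaries."""
    return file_size_bytes <= TEN_MB or local_extraction
```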
Sync types

Full syncs are supported by default for all connectors.

This connector also supports incremental syncs.

Document level security

Document level security (DLS) enables you to restrict access to documents based on a user’s permissions. This feature is available by default for the OneDrive connector. See Configuration for how to enable DLS for this connector.

Refer to document level security for more details about this feature.

Refer to DLS in Search Applications to learn how to ingest data with DLS enabled, when building a search application.

Sync rules

Basic sync rules are identical for all connectors and are available by default. For more information read Types of sync rule.

Advanced sync rules

This connector supports advanced sync rules for remote filtering. These rules cover complex query-and-filter scenarios that cannot be expressed with basic sync rules. Advanced sync rules are defined through a source-specific DSL JSON snippet.

A full sync is required for advanced sync rules to take effect.

Here are a few examples of advanced sync rules for this connector.

Example 1

This rule skips indexing for files with .xlsx and .docx extensions. All other files and folders will be indexed.

[
  {
    "skipFilesWithExtensions": [".xlsx" , ".docx"]
  }
]

Example 2

This rule focuses on indexing files and folders owned by user1-domain@onmicrosoft.com and user2-domain@onmicrosoft.com but excludes files with .py extension.

[
  {
    "owners": ["user1-domain@onmicrosoft.com", "user2-domain@onmicrosoft.com"],
    "skipFilesWithExtensions": [".py"]
  }
]

Example 3

This rule indexes only the files and folders directly inside the root folder, excluding any .md files.

[
  {
    "skipFilesWithExtensions": [".md"],
    "parentPathPattern": "/drive/root:"
  }
]

Example 4

This rule indexes files and folders owned by user1-domain@onmicrosoft.com and user3-domain@onmicrosoft.com that are directly inside the abc folder, which is a subfolder of any folder under the hello directory in the root. Files with extensions .pdf and .py are excluded.

[
  {
    "owners": ["user1-domain@onmicrosoft.com", "user3-domain@onmicrosoft.com"],
    "skipFilesWithExtensions": [".pdf", ".py"],
    "parentPathPattern": "/drive/root:/hello/**/abc"
  }
]

Example 5

This example contains two rules. The first rule indexes all files and folders owned by user1-domain@onmicrosoft.com and user2-domain@onmicrosoft.com. The second rule indexes files for all other users, but skips files with a .py extension.

[
  {
    "owners": ["user1-domain@onmicrosoft.com", "user2-domain@onmicrosoft.com"]
  },
  {
    "skipFilesWithExtensions": [".py"]
  }
]

Example 6

This example contains two rules. The first rule indexes all files owned by user1-domain@onmicrosoft.com and user2-domain@onmicrosoft.com, excluding .md files. The second rule indexes files and folders recursively inside the abc folder.

[
  {
    "owners": ["user1-domain@onmicrosoft.com", "user2-domain@onmicrosoft.com"],
    "skipFilesWithExtensions": [".md"]
  },
  {
    "parentPathPattern": "/drive/root:/abc/**"
  }
]
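To build intuition for how these rules combine, the sketch below evaluates them locally against simple document dictionaries. This is a rough approximation for illustration only: the real filtering is applied remotely at the OneDrive source, and the handling of ** path patterns here is simplified.

```python
from fnmatch import fnmatch

def matches_rule(doc: dict, rule: dict) -> bool:
    """Hypothetical local check of one advanced sync rule against a
    document with 'name', 'owner', and 'parent_path' fields."""
    skip = rule.get("skipFilesWithExtensions", [])
    if any(doc["name"].endswith(ext) for ext in skip):
        return False
    owners = rule.get("owners")
    if owners and doc["owner"] not in owners:
        return False
    pattern = rule.get("parentPathPattern")
    # Approximate '**' with fnmatch's '*', which also crosses '/' boundaries.
    if pattern and not fnmatch(doc["parent_path"], pattern.replace("**", "*")):
        return False
    return True

def should_index(doc: dict, rules: list) -> bool:
    # As in Example 5, a document is indexed if any rule admits it.
    return any(matches_rule(doc, r) for r in rules)
```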
Content Extraction

See Content extraction.

Known issues
  • Enabling document-level security impacts performance.

    Enabling DLS for your connector will cause a significant performance degradation, as the API calls to the data source required for this functionality are rate limited. This impacts the speed at which your content can be retrieved.

Refer to Known issues for a list of known issues for all connectors.

Troubleshooting

See Troubleshooting.

Security

See Security.

Self-managed connector

Availability and prerequisites

This connector is available as a self-managed connector.

This self-managed connector is compatible with Elastic versions 8.10.0+.

To use this connector, satisfy all self-managed connector requirements.

Create a OneDrive connector

Use the UI

To create a new OneDrive connector:

  1. Navigate to the Search → Connectors page in the Kibana UI.
  2. Follow the instructions to create a new OneDrive self-managed connector.

Use the API

You can use the Elasticsearch Create connector API to create a new OneDrive self-managed connector.

For example:

PUT _connector/my-onedrive-connector
{
  "index_name": "my-elasticsearch-index",
  "name": "Content synced from OneDrive",
  "service_type": "onedrive"
}
You’ll also need to create an API key for the connector to use.

The user needs the manage_api_key, manage_connector, and write_connector_secrets cluster privileges to generate API keys programmatically.

To create an API key for the connector:

  1. Run the following command, replacing values where indicated. Note the encoded return value from the response:

    POST /_security/api_key
    {
      "name": "connector_name-connector-api-key",
      "role_descriptors": {
        "connector_name-connector-role": {
          "cluster": [
            "monitor",
            "manage_connector"
          ],
          "indices": [
            {
              "names": [
                "index_name",
                ".search-acl-filter-index_name",
                ".elastic-connectors*"
              ],
              "privileges": [
                "all"
              ],
              "allow_restricted_indices": false
            }
          ]
        }
      }
    }
  2. Update your config.yml file with the encoded API key value.

Refer to the Elasticsearch API documentation for details of all available Connector APIs.

Usage

For additional operations, see Connectors UI in Kibana.

Connecting to OneDrive

To connect to OneDrive you need to create an Azure Active Directory application and service principal that can access resources.

Follow these steps:

  1. Go to the Azure portal and sign in with your Azure account.
  2. Navigate to the Azure Active Directory service.
  3. Select App registrations from the left-hand menu.
  4. Click on the New registration button to register a new application.
  5. Provide a name for your app, and optionally select the supported account types (e.g., single tenant, multi-tenant).
  6. Click on the Register button to create the app registration.
  7. After the registration is complete, you will be redirected to the app’s overview page. Take note of the Application (client) ID value, as you’ll need it later.
  8. Scroll down to the API permissions section and click on the Add a permission button.
  9. In the Request API permissions pane, select Microsoft Graph as the API.
  10. Choose Application permissions and, under the Application tab, select the following permissions: User.Read.All, Files.Read.All
  11. Click on the Add permissions button to add the selected permissions to your app. Finally, click on the Grant admin consent button to grant the required permissions to the app. This step requires administrative privileges. NOTE: If you are not an admin, you need to ask an admin to grant consent via the Azure portal.
  12. Click the Certificates & Secrets tab, go to Client secrets, generate a new client secret, and note the string shown in the Value column.
Deployment using Docker

Self-managed connectors are run on your own infrastructure.

You can deploy the OneDrive connector as a self-managed connector using Docker. Follow these instructions.

Step 1: Download sample configuration file

Download the sample configuration file. You can either download it manually or run the following command:

curl https://raw.githubusercontent.com/elastic/connectors/main/config.yml.example --output ~/connectors-config/config.yml

Remember to update the --output argument value if your directory name is different or if you want to use a different config file name.

Step 2: Update the configuration file for your self-managed connector

Update the configuration file with the following settings to match your environment:

  • elasticsearch.host
  • elasticsearch.api_key
  • connectors

If you’re running the connector service against a Dockerized version of Elasticsearch and Kibana, your config file will look like this:

# When connecting to your cloud deployment you should edit the host value
elasticsearch.host: http://host.docker.internal:9200
elasticsearch.api_key: <ELASTICSEARCH_API_KEY>

connectors:
  -
    connector_id: <CONNECTOR_ID_FROM_KIBANA>
    service_type: onedrive
    api_key: <CONNECTOR_API_KEY_FROM_KIBANA> # Optional. If not provided, the connector will use the elasticsearch.api_key instead

Using the elasticsearch.api_key is the recommended authentication method. However, you can also use elasticsearch.username and elasticsearch.password to authenticate with your Elasticsearch instance.

Note: You can change other default configurations by uncommenting specific settings in the configuration file and modifying their values.
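A quick sanity check for the three required settings can be scripted before starting the service. This sketch assumes the config has been parsed into a flat dictionary of dotted keys, as they appear in config.yml (`missing_settings` is a hypothetical helper):

```python
REQUIRED_SETTINGS = ["elasticsearch.host", "elasticsearch.api_key", "connectors"]

def missing_settings(config: dict) -> list:
    """Return the required settings that are absent from the parsed config.
    A real deployment would load config.yml with a YAML parser first."""
    return [key for key in REQUIRED_SETTINGS if key not in config]
```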

Step 3: Run the Docker image

Run the Docker image with the Connector Service using the following command:

docker run \
-v ~/connectors-config:/config \
--network "elastic" \
--tty \
--rm \
docker.elastic.co/enterprise-search/elastic-connectors:8.16.0.0 \
/app/bin/elastic-ingest \
-c /config/config.yml

Refer to DOCKER.md in the elastic/connectors repo for more details.

Find all available Docker images in the official registry.

We also have a quickstart self-managed option using Docker Compose, so you can spin up all required services at once: Elasticsearch, Kibana, and the connectors service. Refer to this README in the elastic/connectors repo for more information.

Configuration

The following configuration fields are required:

client_id

Azure application Client ID, unique identifier for your Azure Application, found on the app’s overview page. Example:

  • ab123453-12a2-100a-1123-93fd09d67394
client_secret

Azure application Client Secret, string value that the application uses to prove its identity when requesting a token. Available under the Certificates & Secrets tab of your Azure application menu. Example:

  • eyav1~12aBadIg6SL-STDfg102eBfCGkbKBq_Ddyu
tenant_id

Azure application Tenant ID: unique identifier of your Azure Active Directory instance. Example:

  • 123a1b23-12a3-45b6-7c8d-fc931cfb448d
retry_count
The number of retry attempts after a failed request to OneDrive. Default value is 3.
use_document_level_security

Toggle to enable document level security. When enabled:

  • Full syncs will fetch access control lists for each document and store them in the _allow_access_control field.
  • Access control syncs will fetch users' access control lists and store them in a separate index.

    Enabling DLS for your connector will cause a significant performance degradation, as the API calls to the data source required for this functionality are rate limited. This impacts the speed at which your content can be retrieved.

use_text_extraction_service
Requires a separate deployment of the Elastic Text Extraction Service. Requires that ingest pipeline settings disable text extraction. Default value is False.
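The retry_count setting above reflects a common retry-with-backoff pattern. The following sketch shows the general idea, not the connector's actual implementation (`with_retries` is a hypothetical helper):

```python
import time

def with_retries(call, retry_count: int = 3, base_delay: float = 1.0):
    """Retry a failing call up to retry_count times, doubling the delay
    between attempts; re-raise once the attempts are exhausted."""
    for attempt in range(retry_count + 1):
        try:
            return call()
        except Exception:
            if attempt == retry_count:
                raise
            time.sleep(base_delay * (2 ** attempt))
```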
Content Extraction

Refer to Content extraction for more details.

Documents and syncs

The connector syncs the following objects and entities:

  • Files

    • Includes metadata such as file name, path, size, content, etc.
  • Folders
  • Content from files bigger than 10 MB won’t be extracted by default. You can use the self-managed local extraction service to handle larger binary files.
  • Permissions are not synced by default. You must first enable DLS. Otherwise, all documents indexed to an Elastic deployment will be visible to all users with access to that deployment.
Sync types

Full syncs are supported by default for all connectors.

This connector also supports incremental syncs.

Document level security

Document level security (DLS) enables you to restrict access to documents based on a user’s permissions. This feature is available by default for the OneDrive connector. See Configuration for how to enable DLS for this connector.

Refer to document level security for more details about this feature.

Refer to DLS in Search Applications to learn how to ingest data with DLS enabled, when building a search application.

Sync rules

Basic sync rules are identical for all connectors and are available by default. For more information read Types of sync rule.

Advanced sync rules

This connector supports advanced sync rules for remote filtering. These rules cover complex query-and-filter scenarios that cannot be expressed with basic sync rules. Advanced sync rules are defined through a source-specific DSL JSON snippet.

A full sync is required for advanced sync rules to take effect.

Here are a few examples of advanced sync rules for this connector.

Example 1

This rule skips indexing for files with .xlsx and .docx extensions. All other files and folders will be indexed.

[
  {
    "skipFilesWithExtensions": [".xlsx" , ".docx"]
  }
]

Example 2

This rule focuses on indexing files and folders owned by user1-domain@onmicrosoft.com and user2-domain@onmicrosoft.com but excludes files with .py extension.

[
  {
    "owners": ["user1-domain@onmicrosoft.com", "user2-domain@onmicrosoft.com"],
    "skipFilesWithExtensions": [".py"]
  }
]

Example 3

This rule indexes only the files and folders directly inside the root folder, excluding any .md files.

[
  {
    "skipFilesWithExtensions": [".md"],
    "parentPathPattern": "/drive/root:"
  }
]

Example 4

This rule indexes files and folders owned by user1-domain@onmicrosoft.com and user3-domain@onmicrosoft.com that are directly inside the abc folder, which is a subfolder of any folder under the hello directory in the root. Files with extensions .pdf and .py are excluded.

[
  {
    "owners": ["user1-domain@onmicrosoft.com", "user3-domain@onmicrosoft.com"],
    "skipFilesWithExtensions": [".pdf", ".py"],
    "parentPathPattern": "/drive/root:/hello/**/abc"
  }
]

Example 5

This example contains two rules. The first rule indexes all files and folders owned by user1-domain@onmicrosoft.com and user2-domain@onmicrosoft.com. The second rule indexes files for all other users, but skips files with a .py extension.

[
  {
    "owners": ["user1-domain@onmicrosoft.com", "user2-domain@onmicrosoft.com"]
  },
  {
    "skipFilesWithExtensions": [".py"]
  }
]

Example 6

This example contains two rules. The first rule indexes all files owned by user1-domain@onmicrosoft.com and user2-domain@onmicrosoft.com, excluding .md files. The second rule indexes files and folders recursively inside the abc folder.

[
  {
    "owners": ["user1-domain@onmicrosoft.com", "user2-domain@onmicrosoft.com"],
    "skipFilesWithExtensions": [".md"]
  },
  {
    "parentPathPattern": "/drive/root:/abc/**"
  }
]
Content Extraction

See Content extraction.

Self-managed connector operations
End-to-end testing

The connector framework enables operators to run functional tests against a real data source. Refer to Connector testing for more details.

To perform E2E testing for the OneDrive connector, run the following command:

$ make ftest NAME=onedrive

For faster tests, add the DATA_SIZE=small flag:

make ftest NAME=onedrive DATA_SIZE=small
Known issues
  • Enabling document-level security impacts performance.

    Enabling DLS for your connector will cause a significant performance degradation, as the API calls to the data source required for this functionality are rate limited. This impacts the speed at which your content can be retrieved.

Refer to Known issues for a list of known issues for all connectors.

Troubleshooting

See Troubleshooting.

Security

See Security.