CI Metrics

In addition to running our tests, CI collects metrics about the Kibana build. These metrics are sent to an external service to track changes over time and to give PR authors insight into the impact of their changes.

Metric types

Bundle size

These metrics help contributors know how they are impacting the size of the bundles Kibana creates, and help make sure that Kibana loads as fast as possible.

page load bundle size

The size of the entry file produced for each bundle/plugin. This file is loaded on every page load, so it should be as small as possible. To reduce this metric, you can put any code that isn’t necessary on every page load behind an async import() (see the first sketch after this list).

Code that a plugin shares statically with other plugins counts toward that plugin’s page load bundle size. This includes exports from the public/index.ts file and any file referenced by the extraPublicDirs manifest property.

async chunks size
An "async chunk" is created for the files imported by each async import() statement. This metric tracks the sum size of these chunks, in bytes, broken down by plugin/bundle id. You can think of this as the amount of code users will have to download if they access all the components/applications within a bundle.
miscellaneous assets size
A "miscellaneous asset" is anything that isn’t an async chunk or entry chunk, often images. This metric tracks the sum size of these assets, in bytes, broken down by plugin/bundle id.
@kbn/optimizer bundle module count
The number of separate modules included in each bundle/plugin. This is the best indicator we have of how long a specific bundle will take to be built by the @kbn/optimizer, so we report it to help people notice when they’ve imported a module which might include a surprising number of sub-modules (see the second sketch below).
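
For example, here is a minimal sketch of moving code behind an async import() to shrink the page load bundle; the file and function names are hypothetical:

// public/application.ts of a hypothetical plugin. A static import like
// the commented one below would land heavy_editor in the page load
// bundle, paid for on every page load:
//
//   import { renderHeavyEditor } from './heavy_editor';

export async function mountApp(element: HTMLElement) {
  // An async import() splits heavy_editor into an async chunk that is
  // only downloaded when the app is actually mounted.
  const { renderHeavyEditor } = await import('./heavy_editor');
  renderHeavyEditor(element);
}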
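
Similarly, for the module count metric, a single broad import can drag in a surprising number of sub-modules; a small illustration (the lodash usage is purely illustrative):

// Before: importing from the lodash entry point pulls hundreds of
// sub-modules into the bundle, inflating the module count:
//
//   import { debounce } from 'lodash';

// After: importing just the sub-module you need keeps the module count
// (and therefore @kbn/optimizer build time) down.
import debounce from 'lodash/debounce';

export const debouncedLog = debounce((msg: string) => console.log(msg), 100);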

Distributable size

The size of the Kibana distributable is an essential metric: it affects not only how long the archive takes to download, but also how long it takes to extract once downloaded.

There are several metrics that we don’t report on PRs because gzip compression produces different file sizes even when given the same input, so these metrics would regularly show changes even when PR authors hadn’t made any relevant changes.

All metrics are collected from the tar.gz archive produced for the linux platform.

distributable file count
The number of files included in the default distributable.
distributable size
The size, in bytes, of the default distributable. (not reported on PRs)
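
If you want a rough local view of these numbers, you can inspect a built archive directly; a sketch assuming you already have a linux tar.gz distributable on disk (the file name is illustrative):

# count the files in the archive (distributable file count)
tar -tzf kibana-x.y.z-linux-x86_64.tar.gz | wc -l
# print the archive size in bytes (distributable size)
stat -c %s kibana-x.y.z-linux-x86_64.tar.gz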

Saved Object field counts

Elasticsearch limits the number of fields in an index to 1000 by default, and we want to avoid raising that limit.

Saved Objects .kibana field count
The number of saved object fields broken down by saved object type.
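
Each registered saved object type contributes the fields declared in its mappings to this count. As a hedged sketch of where those fields come from (the type name and properties are illustrative; core is the CoreSetup argument of a plugin’s setup() method):

// server/plugin.ts of a hypothetical plugin
public setup(core: CoreSetup) {
  // Every property declared in `mappings` becomes a field in the
  // .kibana index, counting toward the 1000-field default limit.
  core.savedObjects.registerType({
    name: 'my_visualization',
    hidden: false,
    namespaceType: 'single',
    mappings: {
      properties: {
        title: { type: 'text' },
        description: { type: 'text' },
        createdAt: { type: 'date' },
      },
    },
  });
}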

Adding new metrics

You can report new metrics by using the CiStatsReporter class provided by the @kbn/dev-utils package. This class is automatically configured on CI, and its methods are no-ops when running outside of CI. For more details, check out the CiStatsReporter readme.
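
As a rough sketch of what reporting a custom metric might look like (this assumes the fromEnv() factory and metrics() method; the readme is authoritative, and the group/id/value here are purely illustrative):

import { ToolingLog, CiStatsReporter } from '@kbn/dev-utils';

(async () => {
  const log = new ToolingLog({ level: 'info', writeTo: process.stdout });
  // On CI the reporter picks up its config from environment variables;
  // outside of CI its methods are no-ops.
  const reporter = CiStatsReporter.fromEnv(log);

  await reporter.metrics([
    { group: 'my custom metric group', id: 'myPlugin', value: 123 },
  ]);
})();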

Resolving page load bundle size overages

In order to prevent the page load bundles from growing unexpectedly large, we limit the page load asset size metric for each plugin. When a PR increases this metric beyond the limit defined for that plugin in limits.yml, a failed commit status is set and the PR author needs to decide how to resolve the issue before the PR can be merged.

In most cases the limit should be high enough that PRs shouldn’t trigger overages, but when they do, make sure it’s clear what is causing the overage by trying the following:

  1. Run the optimizer locally with the --profile flag to produce webpack stats.json files for each bundle, which can be inspected using a number of different online tools. Focus on the chunk named {pluginId}.plugin.js; the *.chunk.js chunks make up the async chunks size metric, which is currently unlimited, and moving code into them is the main way we reduce the size of page load chunks.

    node scripts/build_kibana_platform_plugins --focus {pluginId} --profile
    # builds and creates {pluginDir}/target/public/stats.json files for {pluginId} and any plugin it depends on
  2. You might want to create stats for the upstream branch of your PR as well, and then compare the two side by side in Webpack visualizer (using two browser tabs) to spot where the size difference is.
  3. For relatively small changes you might be able to better understand the problem by sticking the stats.json files from the two branches into Beyond Compare.
  4. If the number of changes in Beyond Compare is too large, you can reduce the stats.json file to just a sorted list of module ids using jq:

    jq -r '.modules[].id' {pluginDir}/target/public/stats.json | sort > moduleids.txt

    Produce a moduleids.txt file for both your branch and master and then pop them into Beyond Compare to get a very specific view of what’s new.

  5. As a last resort, you might want to try comparing the bundle source directly. It’s usually best to do this using the production source so that you’re inspecting the actual change in bytes that CI is seeing. After building the distributable version of your bundle, run it through prettier and then drop it into Beyond Compare along with the chunk from upstream:

    node scripts/build_kibana_platform_plugins --focus {pluginId} --dist
    npm install -g prettier
    prettier -w {pluginDir}/target/public/{pluginId}.plugin.js
    # repeat these steps for upstream and then compare the two {pluginId}.plugin.js files in Beyond Compare
  6. If all else fails reach out to Operations for help.

Once you’ve identified the files which were added to the build, you likely just need to stick them behind an async import() as described in Plugin performance.

In the case that the bundle size is not being bloated by anything obvious but is still larger than the limit, you can raise the limit in your PR. Do this either by editing the limits.yml file manually or by running the following to have the limit updated to the current size + 15kb:

node scripts/build_kibana_platform_plugins --focus {pluginId} --update-limits

This command has to run the optimizer in distributable mode, so it will take a lot longer and spawn one worker for each CPU on your machine.
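
If you edit limits.yml by hand instead, each entry maps a plugin id to its page load asset size limit in bytes. A rough sketch of the shape (the ids and values are illustrative):

pageLoadAssetSize:
  myPlugin: 150000
  anotherPlugin: 80000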

Changes to the limits.yml file will trigger a review from the Operations team, who will attempt to verify that the size increase is justified. If you have findings from the steps above that you can share, that would be very helpful!

Validating page load bundle size limits

While you’re trying to track down changes which will improve the bundle size, try running the following command locally:

node scripts/build_kibana_platform_plugins --dist --watch --focus {pluginId}

This will build the front-end bundles for your plugin and only those plugins it depends on. Whenever you make changes, the bundles are rebuilt, and you can inspect the metrics of that build in the target/public/metrics.json file within your plugin. This file is updated as you save changes to the source and should help you determine whether your changes lower the page load asset size enough.
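
For example, to pull the page load numbers out of that file with jq, assuming metrics.json is an array of { group, id, value } objects matching the metric types above:

jq '.[] | select(.group == "page load bundle size")' {pluginDir}/target/public/metrics.json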

If you only want to run the build once you can run:

node scripts/build_kibana_platform_plugins --validate-limits --focus {pluginId}

This command needs to apply production optimizations to get the right sizes, which means the optimizer will take significantly longer to run and, on most developer machines, will consume all of your machine’s resources for 20 minutes or more. If you’d like to multi-task while this is running, you might need to limit the number of workers using the --max-workers flag.
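
For example (the worker count is illustrative; pick whatever leaves you the headroom you need):

node scripts/build_kibana_platform_plugins --validate-limits --focus {pluginId} --max-workers 2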