Python script that collects Cobalt Strike memory data generated by security events from an Elasticsearch cluster, extracts the CS beacon configuration, and writes the data back into Elasticsearch.
Overview
This tool provides a Python module and command line tool that will search Elastic Endpoint alert data for detections of Cobalt Strike and the extracted memory data. When present, this tool will extract the implant configuration using the cobaltstrike-config-extractor. The information is then normalized into an ECS-formatted JSON document and indexed into an Elasticsearch cluster or output to the terminal as JSON.
For help creating Fleet policies to collect and analyze Cobalt Strike beacons in the Elastic Stack, check out our blog posts detailing this.
Getting Started
Docker
The recommended and easiest way to get going is to use Docker. From the directory this README is in, you can build a local container.
docker build . -t cobalt-strike-extractor
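As a quick sanity check that the image built correctly, you can print the tool's help text. This assumes the image entrypoint is the extractor CLI, which is how the later examples pass flags to it:
docker run -ti --rm cobalt-strike-extractor:latest --help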
Next, make a copy of config.reference.yml named config.local.yml and edit it for your environment. A minimal config looks like the example below. The input and output can point to the same cluster, but you can optionally push the results to a different cluster for analysis.
## Using an Elastic Cloud instance (this is a randomly generated example)
input.elasticsearch:
  enabled: True
  cloud.id: security-cluster:dXMtd2VzdDEuZ2NwLmNsb3VkLmVzLmlvJGU0MWU1YTc3YmRjNzY2OTY0MDg2NjIzNDA5NzFjNjFkJDdlYjRlYTJkMzJkMTgzYTRiMmJkMjlkNTNjODhjMjQ4
  cloud.auth: elastic:<PASSWORD>

## Default output will use localhost:9200, see reference config
output.elasticsearch:
  enabled: True
  username: elastic
  password: <PASSWORD>
Now, run the container, passing in our local configuration. The docker -v flag mounts the local config file into the container; the trailing -v flag passed to the tool adds informational messages to the log output. Here, it tells us how many documents were successfully parsed and written.
docker run -ti --rm -v "$(pwd)/config.local.yml:/config.yml" \
cobalt-strike-extractor:latest -c /config.yml -v
Output:
[2022-01-10T21:33:31.493][INFO] Setting up input/output
[2022-01-10T21:33:31.493][INFO] Connecting to Elasticsearch for input
[2022-01-10T21:33:31.493][INFO] Successfully connected to Elasticsearch for input
[2022-01-10T21:33:31.834][INFO] Connecting to Elasticsearch for output
[2022-01-10T21:33:31.835][INFO] Successfully connected to Elasticsearch for output
[2022-01-10T21:33:33.030][WARNING] Could not parse source as PE file (DOS Header magic not found.)
[2022-01-10T21:33:33.078][WARNING] CobaltStrike Beacon config not found:
[2022-01-10T21:33:33.093][WARNING] Could not parse source as PE file (DOS Header magic not found.)
[2022-01-10T21:33:33.096][WARNING] CobaltStrike Beacon config not found:
[2022-01-10T21:33:33.097][WARNING] Could not parse source as PE file (DOS Header magic not found.)
[2022-01-10T21:33:33.097][WARNING] CobaltStrike Beacon config not found:
[2022-01-10T21:33:33.097][WARNING] Could not parse source as PE file (DOS Header magic not found.)
[2022-01-10T21:33:33.098][WARNING] CobaltStrike Beacon config not found:
[2022-01-10T21:33:33.186][WARNING] Could not parse source as PE file (DOS Header magic not found.)
[2022-01-10T21:33:33.191][WARNING] CobaltStrike Beacon config not found:
[2022-01-10T21:33:33.461][WARNING] Could not parse source as PE file (DOS Header magic not found.)
[2022-01-10T21:33:33.516][WARNING] CobaltStrike Beacon config not found:
[2022-01-10T21:33:33.927][INFO] Wrote 2 docs to Elasticsearch
The [WARNING] messages here are to be expected. These are simply source documents that didn’t contain the configuration information.
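If you would rather not see these warnings, you can add the -q flag (also used in the pipe example further below) to quiet the output:
docker run -ti --rm -v "$(pwd)/config.local.yml:/config.yml" \
cobalt-strike-extractor:latest -c /config.yml -q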
Filter by time
To limit the search by time frame, you can add the --since argument, which takes either an ISO-formatted datetime string or an Elastic date math expression. For example, to limit the search to the last 30 days, you can do the following.
docker run -ti --rm -v "$(pwd)/config.local.yml:/config.yml" \
cobalt-strike-extractor:latest --since "now-30d/d" -c /config.yml
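An explicit ISO-formatted timestamp works the same way. For example, to only search events from the start of 2022 onward (the date here is purely illustrative):
docker run -ti --rm -v "$(pwd)/config.local.yml:/config.yml" \
cobalt-strike-extractor:latest --since "2022-01-01T00:00:00Z" -c /config.yml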
Pipe output to other tools
Lastly, you can pipe the output to other commands, such as jq to do local analysis. You can also override the configuration file values using environment variables.
docker run -i --rm -a stdin -a stdout -a stderr \
-v "$(pwd)/config.local.yml:/config.yml" \
-e "OUTPUT_ELASTICSEARCH_ENABLED=False" \
-e "OUTPUT_CONSOLE_ENABLED=True" cobalt-strike-extractor:latest -c /config.yml -q | jq '.cobaltstrike.server.hostname'
In the example above, we disabled the Elasticsearch output and enabled the console output using environment variables. We also quieted the output with the -q flag (hiding the warnings). Then, we used jq to pull out just the “hostname” value of the configuration.
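Building on that, you can chain in other standard shell tools. For example, piping the same command through sort gives a deduplicated list of beacon C2 hostnames (jq's -r flag prints raw strings instead of quoted JSON):
docker run -i --rm -a stdin -a stdout -a stderr \
-v "$(pwd)/config.local.yml:/config.yml" \
-e "OUTPUT_ELASTICSEARCH_ENABLED=False" \
-e "OUTPUT_CONSOLE_ENABLED=True" cobalt-strike-extractor:latest -c /config.yml -q \
| jq -r '.cobaltstrike.server.hostname' | sort -u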
Running it Locally
As mentioned above, Docker is the recommended approach to running this project; however, you can also run it locally. This project uses Poetry to manage dependencies, testing, and metadata. If you already have Poetry installed, you can simply run the following commands from this directory. This will set up a virtual environment, install the dependencies, activate the virtual environment, and run the console script.
poetry lock
poetry install
poetry shell
cobalt-strike-extractor --help
Once that works, you can do the same sort of things as mentioned in the Docker instructions above.
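For example, from inside the Poetry shell you can run the same kind of query as the Docker examples above; the flags are the same, only the config path changes:
cobalt-strike-extractor -c config.local.yml --since "now-30d/d" -v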