Backup Elasticsearch data

Elasticsearch provides many ways to back up your data; see the official Elastic documentation for the full range of options.

The simplest method is to keep a local copy of the Elasticsearch data in a directory on your STINGAR server. You can mount a remote storage container directly or copy the data to local files. To copy the data to local files, you must first configure the elasticsearch container with a yml file.
First, stop the running containers and modify the docker-compose.yml file to add the following volume to the elasticsearch service:

- ./elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml
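In context, and assuming the Elasticsearch service in docker-compose.yml is named elasticsearch (adjust the name and the existing entries to match your file), the service's volumes section would look roughly like this:

```yaml
services:
  elasticsearch:
    # ...existing image/environment settings unchanged...
    volumes:
      # existing data volume(s) stay as they are
      - ./elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml
      # local directory used later as the snapshot repository
      - ./backup:/usr/share/elasticsearch/backup
```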

Then create a local file named elasticsearch.yml with the following contents:

---
cluster.name: "elasticsearch"
network.host: 0.0.0.0
#discovery.zen.minimum_master_nodes: 1
xpack.security.enabled: false
path.repo: ["/usr/share/elasticsearch/backup"]

Next, restart the containers. You can then make a backup of your Elasticsearch data to the local disk.

Backup steps

  1. Browse to the Kibana snapshot page
    (/kibana/app/management/data/snapshot_restore/repositories)
    and click "Register a Repository".
  2. Create a backup repository of type "Shared file system", name it "backup", and click "Next".
  3. Set the repository location to /usr/share/elasticsearch/backup.
  4. Click "Register" to create the backup repository, then verify the repository (right menu).
  5. Verify the repository.
    If you see any verification status errors, double-check the permissions on the local ./backup directory on your server. Note that a directory needs the execute bit to be traversed, so chmod 666 is not sufficient; use chmod 777 backup (or chown the directory to the user the elasticsearch container runs as).
  6. Creating a snapshot
    To back up the data, you will need to create a "snapshot" of the Elasticsearch indices and their corresponding data content. The first step in creating a snapshot is to create a policy (click the "Create a policy" button).
  7. Creating a Policy
    There are several steps to creating a new policy; in this simple example we will create a policy that takes a snapshot within the next hour.
    First, name the policy (e.g. "my_policy").
    Next, name the snapshot (e.g. "snapshot_now").
    Next, select the frequency (e.g. "hourly").
    Next, select the minute the snapshot should be created (e.g. "00").
    The rest of the settings can remain at their defaults.

  8. Confirming the snapshot saved
    Once the hour passes the "00" minute, the snapshot is created automatically; when it completes (typically within a few seconds, up to a minute), its details will appear in the snapshots list.
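The repository and policy from the steps above can also be created from the Kibana Dev Tools console (or with curl against the Elasticsearch API) instead of the UI. This is a sketch using the same example names; the cron schedule "0 0 * * * ?" fires at minute 00 of every hour, and the date-math suffix in the snapshot name keeps snapshot names unique:

```
PUT _snapshot/backup
{
  "type": "fs",
  "settings": {
    "location": "/usr/share/elasticsearch/backup"
  }
}

PUT _slm/policy/my_policy
{
  "schedule": "0 0 * * * ?",
  "name": "<snapshot_now-{now/d}>",
  "repository": "backup"
}
```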

You may now make a copy of this data on the local filesystem from a terminal window, using standard Linux commands (tar cvf archive.tar backup && gzip archive.tar).
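For example, run from the directory containing the backup folder (the archive name is arbitrary):

```shell
# Archive the snapshot repository directory into a single compressed file.
mkdir -p backup                 # no-op if the directory already exists
tar czf archive.tar.gz backup
```

This single command is equivalent to the two-step tar cvf / gzip form above.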

Restore Elasticsearch data steps

  1. If you are migrating to a different host VM, copy the backup files you created (see above) into the local ./backup directory that is mounted into the elasticsearch container. (Note: if you gzipped the backup archive, unzip it first, e.g. gunzip archive.tar.gz, then tar xvf archive.tar; you should end up with a new local directory called backup.)
  2. Browse to the Kibana snapshot page
    (/kibana/app/management/data/snapshot_restore/snapshots),
    click on the snapshot, and then click the Restore button to restore the data from the snapshot. Once complete, confirm that the attack data history is present in the indices.
  3. Finally, you may edit docker-compose.yml to remove the elasticsearch.yml config file mount and restart the containers.
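On the new host, the unpacking in restore step 1 looks like this (assuming the compressed archive from the backup section; the guards simply skip a step whose input file is absent):

```shell
# Unpack the transferred archive to recreate the local backup/ directory
# that docker-compose mounts into the elasticsearch container.
if [ -f archive.tar.gz ]; then
  gunzip archive.tar.gz          # produces archive.tar
fi
if [ -f archive.tar ]; then
  tar xvf archive.tar            # extracts the backup/ directory
fi
```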