
So you want to OpenSearch


OpenSearch

OpenSearch has real potential to disrupt the world of ELK deployments. As of October 2021 OpenSearch has reached version 1.1.0, and it’s possible that Amazon is gearing up for a 2.0 release to push ahead with new features.

The biggest reasons I’m choosing it for my use-case at work are:

  • Less/no monitoring of usage-habits via X-Pack compared to Elasticsearch
  • Anomaly-Detection/Notebooks via Dashboards
  • More ‘killer features’ are locked behind X-Pack and/or the premium/enterprise bundles, and we’ve no funding to support those

As of OpenSearch 1.1 the documentation on how to get a minimal cluster up and running in containers for experimenting is still rather rough. This is a summary of the instructions I had to run; they may or may not apply to everyone.

To set up a minimal OpenSearch cluster you just need docker-compose, 5 containers, a pile of SSL certs and some nginx goodness.


Full & Minimal Docker-Compose Configuration:

A good starting point is the minimal docker-compose.yml shown below:

docker-compose.yml:
version: '3'
services:

  # OpenSearch Dashboards (the Kibana fork)
  dashboards:
    image: "opensearchproject/opensearch-dashboards:${DASHBOARD_VERSION}"
    container_name: dashboards
    hostname: 'dashboards'
    ulimits:
      memlock:
        soft: -1
        hard: -1
      nofile:
        soft: 65536
        hard: 65536
    volumes:
      - "${STORAGE_PATH}/configs/dashboards.yml:/usr/share/opensearch-dashboards/config/opensearch_dashboards.yml:ro"
      - "${STORAGE_PATH}/security/os_keys/root/root-ca.pem:/usr/share/opensearch-dashboards/config/root-ca.pem:ro"
    networks:
      - os-net
    restart: always

  # Single OpenSearch node (the Elasticsearch fork)
  os01:
    image: "opensearchproject/opensearch:${OS_VERSION}"
    container_name: os01
    hostname: 'os01'
    environment:
      - node.name=os01
      - cluster.name=os-cluster
      - cluster.initial_master_nodes=os01
      - bootstrap.memory_lock=true
      - "OPENSEARCH_JAVA_OPTS=-Xms512m -Xmx512m"
      - "DISABLE_INSTALL_DEMO_CONFIG=true"
    ulimits:
      memlock:
        soft: -1
        hard: -1
      nofile:
        soft: 65536
        hard: 65536
    volumes:
      - "${STORAGE_PATH}/volumes/os_data/01:/usr/share/opensearch/data:rw"
      - "${STORAGE_PATH}/volumes/snapshots:/usr/share/opensearch/snapshots:rw"
      - "${STORAGE_PATH}/configs/opensearch_os01.yml:/usr/share/opensearch/config/opensearch.yml:ro"
      - "${STORAGE_PATH}/security/internal_users.yml:/usr/share/opensearch/plugins/opensearch-security/securityconfig/internal_users.yml:rw"
      - "${STORAGE_PATH}/security/os_keys/root/root-ca.pem:/usr/share/opensearch/config/root-ca.pem:ro"
      - "${STORAGE_PATH}/security/os_keys/os01/os01.pem:/usr/share/opensearch/config/node.pem:ro"
      - "${STORAGE_PATH}/security/os_keys/os01/os01-key.pem:/usr/share/opensearch/config/node-key.pem:ro"
      - "${STORAGE_PATH}/security/os_keys/os01/os01.pem:/usr/share/opensearch/config/esnode.pem:ro"
      - "${STORAGE_PATH}/security/os_keys/os01/os01-key.pem:/usr/share/opensearch/config/esnode-key.pem:ro"
      - "${STORAGE_PATH}/security/os_keys/admin/admin.pem:/usr/share/opensearch/config/admin.pem:ro"
      - "${STORAGE_PATH}/security/os_keys/admin/admin-key.pem:/usr/share/opensearch/config/admin-key.pem:ro"
      - "${STORAGE_PATH}/security/os_keys/admin/admin.pem:/usr/share/opensearch/config/kirk.pem:ro"
      - "${STORAGE_PATH}/security/os_keys/admin/admin-key.pem:/usr/share/opensearch/config/kirk-key.pem:ro"
      - "${STORAGE_PATH}/logs/os01:/usr/share/opensearch/logs:rw"
    networks:
      - os-net
    restart: always
    stop_grace_period: 60m

  # Elasticvue: a web GUI for inspecting the cluster
  gui:
    image: "cars10/elasticvue:${GUI_VERSION}"
    container_name: os-gui
    hostname: 'os-gui'
    ulimits:
      memlock:
        soft: -1
        hard: -1
      nofile:
        soft: 65536
        hard: 65536
    networks:
      - os-net
    restart: always

  # Reverse proxy managing all external access to the cluster
  nginx-proxy:
    image: nginx:latest
    container_name: nginx-proxy
    hostname: 'nginx'
    ulimits:
      memlock:
        soft: -1
        hard: -1
      nofile:
        soft: 65536
        hard: 65536
    volumes:
      - "${STORAGE_PATH}/configs/nginx/etc/nginx/conf.d/default.conf:/etc/nginx/conf.d/default.conf:ro"
      - "${STORAGE_PATH}/security/os_keys/root/root-ca.pem:/etc/nginx/certs/neeps.ph.ed.ac.uk/root.pem:ro"
      - "${STORAGE_PATH}/security/os_keys/nginx/nginx.pem:/etc/nginx/certs/neeps.ph.ed.ac.uk/cert.pem:ro"
      - "${STORAGE_PATH}/security/os_keys/nginx/nginx-key.pem:/etc/nginx/certs/neeps.ph.ed.ac.uk/key.pem:ro"
    ports:
      - 80:80
      - 443:443
      - 8080:8080
      - 9200:9200
      - 9600:9600
    networks:
      - os-net
    restart: always
    stop_grace_period: 60m

  # Poor-man's DNS: writes container names into the host's /etc/hosts
  hostmaster:
    image: iamluc/docker-hostmanager:latest
    container_name: hostmaster
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - /etc/hosts:/hosts:rw
    networks:
      - os-net
    restart: always
    stop_grace_period: 1s


networks:
  os-net:
    driver: bridge
    name: os-net
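
With the configs and certs in place (all covered below), bringing the stack up is the usual docker-compose routine; the logs of os01 are the first place to look if the security plugin is unhappy:

docker-compose up -d
docker-compose ps
docker-compose logs -f os01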

 

So let’s break this down. We have 5 containers, all on the same bridge network “os-net”:

  • opensearch (elasticsearch)
  • opensearch-dashboards (kibana)
  • gui (elasticsearch-web-gui)
  • hostmaster (manages the host’s /etc/hosts so containers can be reached by name; poor-man’s DNS)
  • nginx (A proxy for managing access)

Here “${STORAGE_PATH}” is the root-level path for everything that we want to persist on disk.
Underneath this folder we have our “docker-compose.yml” and “.env” files (a sketch of the latter follows this list) and several paths containing all the important data:

  • volumes (rw):
    This folder contains all of the cluster data from the opensearch (or elasticsearch) nodes.
  • configs (ro):
    This folder contains all of the config files needed to launch the services correctly.
  • security (ro):
    This folder contains all of the keys/certificates needed to support full https and ssl encryption internally and externally.
  • logs (rw):
    This is where the various containers will write additional logging that isn’t handled by the docker logging service.
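
A minimal sketch of the matching “.env” file (the version pins and the storage path below are illustrative assumptions, not necessarily the values you should run):

.env:
# Illustrative values only; pin the versions you actually want to deploy
OS_VERSION=1.1.0
DASHBOARD_VERSION=1.1.0
GUI_VERSION=latest
STORAGE_PATH=/opt/opensearch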

OpenSearch Components:

These are the opensearch and opensearch-dashboards containers.

The main 2 containers for opensearch are obviously the first 2; the logic behind the remaining pieces is to make life easier. For example, we have elasticvue, which is an excellent tool for debugging the state and contents of the cluster.

opensearch.yml:
cluster.name: "os-cluster"
network.host: 0.0.0.0

logger.level: INFO

plugins.security.ssl.transport.pemcert_filepath: node.pem
plugins.security.ssl.transport.pemkey_filepath: node-key.pem
plugins.security.ssl.transport.pemtrustedcas_filepath: root-ca.pem
plugins.security.ssl.transport.enforce_hostname_verification: false
plugins.security.ssl.http.enabled: true
plugins.security.ssl.http.pemcert_filepath: node.pem
plugins.security.ssl.http.pemkey_filepath: node-key.pem
plugins.security.ssl.http.pemtrustedcas_filepath: root-ca.pem
plugins.security.authcz.admin_dn:
  - CN=HOSTNAME,OU=Admin,O=MY-O,L=Edinburgh,ST=Britain,C=GB
plugins.security.nodes_dn:
  - 'CN=os*,OU=MY-OU,O=MY-O,L=Edinburgh,ST=Britain,C=GB'
plugins.security.allow_default_init_securityindex: true

plugins.security.audit.type: internal_opensearch
plugins.security.enable_snapshot_restore_privilege: true
plugins.security.check_snapshot_restore_write_privileges: true
plugins.security.restapi.roles_enabled: ["all_access", "security_rest_api_access"]
plugins.security.system_indices.enabled: true
plugins.security.system_indices.indices: [".opendistro-alerting-config", ".opendistro-alerting-alert*", ".opendistro-anomaly-results*", ".opendistro-anomaly-detector*", ".opendistro-anomaly-checkpoints", ".opendistro-anomaly-detection-state", ".opendistro-reports-*", ".opendistro-notifications-*", ".opendistro-notebooks", ".opendistro-asynchronous-search-response*"]
#node.max_local_storage_nodes: 3

indices.queries.cache.size: "10%"
indices.recovery.max_bytes_per_sec: "256m"
indices.recovery.max_concurrent_file_chunks: 2
indices.recovery.max_concurrent_operations: 2
indices.memory.index_buffer_size: "10%"
indices.requests.cache.size: "10%"
path.repo: ["/usr/share/opensearch/snapshots"]

This configuration creates a single opensearch node named “os01”.
Setting this up required generating a consistent set of SSL certs in order to support HTTPS within the cluster.

The instructions in OpenSearch’s own docs https://opensearch.org/docs/security-plugin/configuration/generate-certificates/ are more than good enough to explain this.
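
Very roughly, the process those docs walk through looks like the sketch below. Treat it as illustrative rather than a drop-in script; the DN fields simply mirror the ones used in the configs in this post:

# Root CA used to sign everything else
openssl genrsa -out root-ca-key.pem 2048
openssl req -new -x509 -sha256 -key root-ca-key.pem \
  -subj "/C=GB/ST=Britain/L=Edinburgh/O=MY-O/OU=MY-OU/CN=root-ca" \
  -out root-ca.pem -days 730

# Node cert for os01. Note: no spaces anywhere in the DN!
openssl genrsa -out os01-key-temp.pem 2048
# The security plugin expects PKCS#8 keys
openssl pkcs8 -inform PEM -outform PEM -in os01-key-temp.pem \
  -topk8 -nocrypt -v1 PBE-SHA1-3DES -out os01-key.pem
openssl req -new -key os01-key.pem \
  -subj "/C=GB/ST=Britain/L=Edinburgh/O=MY-O/OU=MY-OU/CN=os01" \
  -out os01.csr
openssl x509 -req -in os01.csr -CA root-ca.pem -CAkey root-ca-key.pem \
  -CAcreateserial -sha256 -out os01.pem -days 730

The admin cert is generated the same way, with OU=Admin in its subject so that it matches plugins.security.authcz.admin_dn above.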

Caveats; the lessons I learnt when trying to vary from this were:

  • Certificates
    Java and the OpenSearch cluster don’t play nice with certs that have spaces in their DN.
    Yes, I’m having to write this in 2021, and yes, the error you get when you hit this is a full stack trace from Java’s HTTPS stack.
  • Plugins
    Although not overly clearly documented, the OpenSearch stack has been modified to move almost all authentication into the security plugin. This has several advantages, including support for real authentication tooling like LDAP, but for now I’m sticking with simple user/pass authentication for a basic setup.
  • Logging
    Don’t be fooled: turning logging up generates a lot of noise. Turning it up would only help debug an actual coding error or stack trace.

internal_users.yml:

I won’t spend too long describing how I set this up. Basically I copied the existing file from the opensearch docker container and used the hash.sh tool, also shipped in the container, to create a new default password hash for the admin account (sketched below).

This file is important for setting up the default passwords when the cluster is first initialised from scratch, as it seeds the first set of passwords. Once the system is up, I recommend changing all passwords on the “live” cluster, which then stores the current passwords inside the cluster itself.
The mount can probably be removed from the docker-compose.yml after the cluster has been set up, but I’ve yet to try this.
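
For reference, creating a new hash looks something like this, and the admin entry in internal_users.yml ends up roughly as below (the password is obviously an example and the hash value is a placeholder for the tool’s output, which sits alongside the _meta block already in the copied file):

docker exec -it os01 plugins/opensearch-security/tools/hash.sh -p 'MyNewAdminPassword'

admin:
  hash: "$2y$12$REPLACE_WITH_OUTPUT_OF_HASH_SH"
  reserved: true
  backend_roles:
    - "admin"
  description: "Cluster admin user"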

The OpenSearch Dashboards configuration is, in this case, much simpler. The service itself has to trust the self-signed HTTPS certs of the opensearch cluster. However, for simplicity’s sake, I’m adding external HTTPS support via nginx rather than directly in the Dashboards configuration itself.

opensearch_dashboards.yml:
server.port: 5601

server.host: 0.0.0.0

server.name: "My-OpenDashboard"

opensearch.hosts: ["https://os01:9200"]
opensearch.username: "kibana_user"
opensearch.password: "PASSWORD"

opensearch.ssl.certificateAuthorities: [ "/usr/share/opensearch-dashboards/config/root-ca.pem" ]
opensearch.ssl.verificationMode: full
opensearch_security.cookie.secure: false

opensearch_security.multitenancy.enabled: true

OS-GUI:

This is provided by the “cars10/elasticvue” container. Think of it as an Excel equivalent, or a database viewer.

This container should be self-evident to set up and use: just point it at a secure and trusted OS head-node that is also reachable from your browser (here that’s https://HOSTNAME:9200 via the nginx proxy below).

From this you get a nice UI summary of the various host and node states, together with the indices and the ability to peek down into the general data structure.


nginx Proxy:

My nginx configuration lives in default.conf. It exists simply to manage external access to the various components of this docker cluster, which otherwise sit on an internal bridge within your host system.

default.conf:
server {
  listen 80 default_server;
  server_name _;
  return 301 https://$host$request_uri;
}

server {
  listen 8080;
  server_name HOSTNAME;
  client_max_body_size 1024m;
  location / {
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header Authorization $http_authorization;
    proxy_pass_header Authorization;
    proxy_pass http://os-gui:8080;
  }
}

server {
  listen 9600;
  server_name HOSTNAME;
  client_max_body_size 1024m;
  location / {
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header Authorization $http_authorization;
    proxy_pass_header Authorization;
    proxy_pass http://os01:9600;
  }
}

server {
  listen 9200 ssl;
  server_name HOSTNAME;
  ssl_certificate /etc/nginx/certs/HOSTNAME/cert.pem;
  ssl_certificate_key /etc/nginx/certs/HOSTNAME/key.pem;
  ssl_protocols       TLSv1 TLSv1.1 TLSv1.2;
  ssl_ciphers         HIGH:!aNULL:!MD5;
  ssl_trusted_certificate /etc/nginx/certs/HOSTNAME/root.pem;
  ssl_stapling on;
  ssl_stapling_verify on;
  client_max_body_size 1024m;
  location / {
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header Authorization $http_authorization;
    proxy_pass_header Authorization;
    proxy_pass https://os01:9200;
  }
}

server {
  listen 443 ssl;
  server_name HOSTNAME;
  ssl_certificate /etc/nginx/certs/HOSTNAME/cert.pem;
  ssl_certificate_key /etc/nginx/certs/HOSTNAME/key.pem;
  ssl_protocols       TLSv1 TLSv1.1 TLSv1.2;
  ssl_ciphers         HIGH:!aNULL:!MD5;
  ssl_trusted_certificate /etc/nginx/certs/HOSTNAME/root.pem;
  ssl_stapling on;
  ssl_stapling_verify on;
  client_max_body_size 1024m;
  location / {
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header Authorization $http_authorization;
    proxy_pass_header Authorization;
    proxy_pass http://dashboards:5601;
  }
}

This example shows how nginx can be used to control external access to the internal cluster with whatever SSL settings are most applicable.


Results:

The end result of this is a 1-node opensearch cluster accessible via HTTPS on :9200, with its performance monitoring on :9600.
The opensearch-dashboards service is also accessible via the normal HTTPS endpoint (:443) on your system.
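
A quick way to sanity-check it all (assuming the admin credentials you set in internal_users.yml, and that the nginx cert matches your hostname; add -k to skip verification otherwise):

curl -u admin:PASSWORD --cacert root-ca.pem 'https://HOSTNAME:9200/_cluster/health?pretty'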

With dark-mode enabled you’ll be greeted with something like the Dashboards home screen after logging in with your browser 🙂

At some point in the future I’ll document how to expand this, how I’m using opensearch/dashboards, and the various tools I’m plugging in.