
Getting Started

Prerequisites

  • Ansible 2.18 or later
  • Target hosts running Debian 11/12/13, Ubuntu 22.04/24.04, or Rocky Linux/RHEL 8/9/10
  • SSH access to target hosts with root or sudo privileges
  • Minimum 4 GB RAM per Elasticsearch node (8 GB recommended)
  • Python 3 on all target hosts (with python3-apt on Debian/Ubuntu)

Install the collection

ansible-galaxy collection install oddly.elasticstack

Or add it to your requirements.yml and install with ansible-galaxy collection install -r requirements.yml:

collections:
  - name: oddly.elasticstack

Single-node deployment (simplest)

This deploys everything on one host — useful for development and testing.

inventory.yml:

all:
  children:
    elasticsearch:
      hosts:
        elastic1:
          ansible_host: 192.168.1.10

playbook.yml:

- hosts: all
  vars:
    elasticstack_release: 9
    elasticstack_full_stack: false
    elasticsearch_heap: 2
  roles:
    - oddly.elasticstack.repos
    - oddly.elasticstack.elasticsearch

Run the playbook:

ansible-playbook -i inventory.yml playbook.yml

After the run completes, Elasticsearch will be listening on https://localhost:9200 with security enabled. The initial passwords are stored in /usr/share/elasticsearch/initial_passwords on the host.

Multi-node full-stack deployment

inventory.yml:

all:
  children:
    elasticsearch:
      hosts:
        es1: { ansible_host: 10.0.1.10 }
        es2: { ansible_host: 10.0.1.11 }
        es3: { ansible_host: 10.0.1.12 }
    kibana:
      hosts:
        kb1: { ansible_host: 10.0.1.20 }
    logstash:
      hosts:
        ls1: { ansible_host: 10.0.1.30 }
    beats:
      hosts:
        app1: { ansible_host: 10.0.1.40 }
        app2: { ansible_host: 10.0.1.41 }

group_vars/all.yml:

elasticstack_release: 9
elasticstack_full_stack: true
elasticstack_security: true

playbook.yml:

- hosts: all
  roles:
    - oddly.elasticstack.repos
    - oddly.elasticstack.elasticsearch
    - oddly.elasticstack.kibana
    - oddly.elasticstack.logstash
    - oddly.elasticstack.beats

Each role only acts on hosts in its matching group. Elasticsearch nodes form a cluster, Kibana connects to ES, Logstash creates its writer user in ES and opens port 5044 for Beats, and Beats ships logs to Logstash. TLS certificates are automatically generated and distributed.
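Per-group overrides belong in group_vars. As a sketch, the Elasticsearch nodes could get a larger heap through the elasticsearch_heap variable from the single-node example (the 8 GB figure follows the recommendation in the prerequisites; tune it to your hardware):

```yaml
# group_vars/elasticsearch.yml (applies only to hosts in the elasticsearch group)
elasticsearch_heap: 8   # JVM heap in GB; the single-node example used 2
```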

Disabling security

Warning

Only disable security for isolated development environments. Never run without security on networks accessible to untrusted users.

For internal networks or development environments where TLS is not needed:

elasticstack_security: false
elasticsearch_security: false
beats_security: false

This disables TLS, authentication, and HTTPS across all roles.
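Putting it together, a minimal single-node development playbook with security off might look like this (a sketch reusing the variables from the single-node example above):

```yaml
- hosts: all
  vars:
    elasticstack_release: 9
    elasticstack_full_stack: false
    elasticstack_security: false
    elasticsearch_security: false
    elasticsearch_heap: 2
  roles:
    - oddly.elasticstack.repos
    - oddly.elasticstack.elasticsearch
```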

Using a package mirror

If your hosts can't reach artifacts.elastic.co, point the repo at a local mirror:

elasticstack_repo_base_url: "https://elastic-cache.internal.example.com"

Or set the ELASTICSTACK_REPO_BASE_URL environment variable.
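For instance, exporting the variable in the current shell before running the playbook (reusing the example mirror URL from above):

```shell
# Point the repo setup at the internal mirror for this shell session
export ELASTICSTACK_REPO_BASE_URL="https://elastic-cache.internal.example.com"

# The variable is now visible to ansible-playbook and its child processes
echo "$ELASTICSTACK_REPO_BASE_URL"
```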

Configuring Beats inputs

The Beats role supports several input types. Here's a more complete example:

# Filebeat with multiple log inputs
beats_filebeat: true
beats_filebeat_output: logstash
beats_filebeat_log_inputs:
  syslog:
    name: syslog
    paths:
      - /var/log/syslog
      - /var/log/messages
  nginx:
    name: nginx
    paths:
      - /var/log/nginx/access.log
      - /var/log/nginx/error.log
    fields:
      app: nginx

# Syslog TCP/UDP listeners
beats_filebeat_syslog_tcp: true
beats_filebeat_syslog_tcp_port: 5514
beats_filebeat_syslog_tcp_fields:
  source_protocol: tcp

beats_filebeat_syslog_udp: true
beats_filebeat_syslog_udp_port: 5515
beats_filebeat_syslog_udp_fields:
  source_protocol: udp

# Journald input
beats_filebeat_journald: true

# Disk-backed queue for reliability
beats_queue_type: disk
beats_queue_disk_max_size: 2GB

# Metricbeat for system metrics
beats_metricbeat: true
beats_metricbeat_modules:
  - system

Custom Logstash pipelines

For simple cases, use inline filters:

logstash_filters: |
  grok {
    match => { "message" => "%{SYSLOGLINE}" }
  }

For complex pipelines, use filter files:

logstash_filter_files:
  - files/logstash/10-syslog.conf
  - files/logstash/20-nginx.conf
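Each file holds ordinary Logstash filter configuration. As an illustration only (the role's exact wrapping convention may differ), a hypothetical files/logstash/20-nginx.conf could contain a grok filter using the stock COMBINEDAPACHELOG pattern, which also matches default nginx access logs:

```
filter {
  grok {
    match => { "message" => "%{COMBINEDAPACHELOG}" }
  }
}
```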

Or take full control with a custom pipeline:

logstash_custom_pipeline: |
  input {
    beats { port => 5044 }
  }
  filter {
    grok { match => { "message" => "%{SYSLOGLINE}" } }
  }
  output {
    elasticsearch {
      hosts => ["https://es1:9200"]
      index => "logs-%{+YYYY.MM.dd}"
    }
  }

Upgrading from 8.x to 9.x

Set the target version and run the playbook:

elasticstack_release: 9
elasticstack_version: "9.0.2"

The Elasticsearch role detects the version mismatch and performs a rolling upgrade automatically — one node at a time with shard allocation management. All other roles simply upgrade their packages.

Important

All nodes must be running 8.19.x before upgrading to 9.x. The role enforces this: if any node is on an older 8.x version, the play fails immediately with an upgrade path violation error. This matches Elastic's official upgrade requirements.
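Given that requirement, a two-step variable change is the safest route. The patch levels below are illustrative; use the newest available 8.19.x release:

```yaml
# Step 1: bring every node to the latest 8.19 patch release first
elasticstack_release: 8
elasticstack_version: "8.19.4"   # illustrative; pick the current 8.19.x

# Step 2, in a later run: move to 9.x as shown above
# elasticstack_release: 9
# elasticstack_version: "9.0.2"
```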

Certificate renewal

Certificates are checked on every run and renewed automatically when they approach expiry (default: 30 days before). To force renewal, use tags:

# Renew all certificates
ansible-playbook -i inventory.yml playbook.yml --tags certificates

# Renew only Elasticsearch certificates
ansible-playbook -i inventory.yml playbook.yml --tags renew_es_cert

# Renew only the CA (triggers renewal of all dependent certs)
ansible-playbook -i inventory.yml playbook.yml --tags renew_ca

Next steps