Podman and Graylog

Graylog container with MongoDB and Elasticsearch


---
# tasks file for podman-graylog-role
#- name: run containers
#  hosts: localhost
- name: Run Mongo DB
  containers.podman.podman_container:
    name: mongodb
    image: mongo:4.0
    state: started
    ip: 10.88.0.38
    ports:
     - 27017:27017
    env:
     BIND_IP: "0.0.0.0"
    volume:
      - /data/mongo_data:/data/db
      - /data/mongo_data/conf:/etc/mongo
    network:
      - graylog
- name: Run ES
  containers.podman.podman_container:
    name: ES
    image: docker.elastic.co/elasticsearch/elasticsearch-oss:7.10.2
    state: started
    ip: 10.88.0.39
    ports:
     - 9200:9200
    volume:
      - /data/es_data:/usr/share/elasticsearch/data
      - /data/es_data/config/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml
    network:
      - graylog

- name: Run Graylog
  containers.podman.podman_container:
    name: graylog
    image: graylog/graylog:4.0.1
    state: started
    ports:
     - 9000:9000
     - 5044:5044
    env:
       GRAYLOG_ELASTICSEARCH_VERSION: "7"
    user: 1100:1100
    volume:
     - /data/graylog/config:/usr/share/graylog/data/config
     - /data/graylog/config/node-id:/usr/share/graylog/data/config/node-id 
     - /data/graylog:/usr/share/graylog/data
     - /data/graylog/journal:/usr/share/graylog/data/journal
    network:
     - graylog
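The tasks above assume the graylog podman network and the host data directories already exist. A minimal preparation sketch, run as root on the container host (DATA_ROOT is an illustrative variable used here only so the tree can be staged elsewhere; set DATA_ROOT= empty for the real /data paths):

```shell
#!/bin/sh
# Preparation sketch for the tasks above.
# DATA_ROOT is an illustrative variable: set it to the empty string on the real
# host so the paths match the volume mounts exactly; it defaults to a scratch dir.
DATA_ROOT="${DATA_ROOT-$(mktemp -d)}"

# Create the podman network the three containers join (skipped if podman is absent).
if command -v podman >/dev/null 2>&1; then
    podman network create graylog 2>/dev/null || true
fi

# Host directories referenced by the volume mounts in the tasks above.
mkdir -p "$DATA_ROOT/data/mongo_data/conf" \
         "$DATA_ROOT/data/es_data/config" \
         "$DATA_ROOT/data/graylog/config" \
         "$DATA_ROOT/data/graylog/journal"
echo "data tree created under '$DATA_ROOT/data'"
```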


Elasticsearch config file (elasticsearch.yml)


discovery.type: single-node
network.host: 0.0.0.0
http.port: 9200

MongoDB config file

 
net:
  port: 27017
  bindIp: 0.0.0.0
 

Graylog config file



############################
# GRAYLOG CONFIGURATION FILE
############################

# If you are running more than one instance of Graylog server you have to select one of these
# instances as master. The master will perform some periodical tasks that non-masters won't perform.
is_master = true

# The auto-generated node ID will be stored in this file and read after restarts. It is a good idea
# to use an absolute file path here if you are starting Graylog server from init scripts or similar.
node_id_file = /usr/share/graylog/data/config/node-id

password_secret = replacethiswithyourownsecret!

# The default root user is named 'admin'
root_username = admin

# Default password: admin
# CHANGE THIS!
root_password_sha2 = 8c6976e5b5410415bde908bd4dee15dfb167a9c873fc4bb8a81f6f2ab448a918

# The email address of the root user.
# Default is empty
#root_email = ""

# The time zone setting of the root user. See http://www.joda.org/joda-time/timezones.html for a list of valid time zones.
# Default is UTC
root_timezone = UTC

# Set plugin directory here (relative or absolute)
plugin_dir = /usr/share/graylog/plugin


# Default: 127.0.0.1:9000
http_bind_address = 0.0.0.0:9000
#http_bind_address = [2001:db8::1]:9000


# Default: http://$http_bind_address/
http_publish_uri = http://127.0.0.1:9000/


#http_external_uri =

elasticsearch_hosts = http://10.88.0.39:9200

# Maximum amount of time to wait for successful connection to Elasticsearch HTTP port.
#
# Default: 10 Seconds
elasticsearch_connect_timeout = 100s


# Disable checking the version of Elasticsearch for being compatible with this Graylog release.
# WARNING: Using Graylog with unsupported and untested versions of Elasticsearch may lead to data loss!
elasticsearch_disable_version_check = true


# Do you want to allow searches with leading wildcards? This can be extremely resource hungry and should only
# be enabled with care. See also: http://docs.graylog.org/en/2.1/pages/queries.html
allow_leading_wildcard_searches = false

# Do you want to allow searches to be highlighted? Depending on the size of your messages this can be memory hungry and
# should only be enabled after making sure your Elasticsearch cluster has enough memory.
allow_highlighting = false

# Batch size for the Elasticsearch output. Each outputbuffer processor manages
# its own batch ("outputbuffer_processors" variable).
output_batch_size = 500
output_flush_interval = 1
output_fault_count_threshold = 5
output_fault_penalty_seconds = 30

processbuffer_processors = 5
outputbuffer_processors = 3

# Default: 5000
#outputbuffer_processor_keep_alive_time = 5000

# The number of threads to keep in the pool, even if they are idle, unless allowCoreThreadTimeOut is set
# Default: 3
#outputbuffer_processor_threads_core_pool_size = 3

# The maximum number of threads to allow in the pool
# Default: 30
#outputbuffer_processor_threads_max_pool_size = 30

# UDP receive buffer size for all message inputs (e. g. SyslogUDPInput).
#udp_recvbuffer_sizes = 1048576

# Wait strategy describing how buffer processors wait on a cursor sequence. (default: sleeping)
# Possible types:
#  - yielding
#     Compromise between performance and CPU usage.
#  - sleeping
#     Compromise between performance and CPU usage. Latency spikes can occur after quiet periods.
#  - blocking
#     High throughput, low latency, higher CPU usage.
#  - busy_spinning
#     Avoids syscalls which could introduce latency jitter. Best when threads can be bound to specific CPU cores.
processor_wait_strategy = blocking

# Size of internal ring buffers. Raise this if raising outputbuffer_processors does not help anymore.
# For optimum performance your LogMessage objects in the ring buffer should fit in your CPU L3 cache.
# Must be a power of 2. (512, 1024, 2048, ...)
ring_size = 65536

inputbuffer_ring_size = 65536
inputbuffer_processors = 2
inputbuffer_wait_strategy = blocking

# Enable the disk based message journal.
message_journal_enabled = true


message_journal_dir = /usr/share/graylog/data/journal



# How many seconds to wait between marking node as DEAD for possible load balancers and starting the actual
# shutdown process. Set to 0 if you have no status checking load balancers in front.
lb_recognition_period_seconds = 3

# Journal usage percentage that triggers requesting throttling for this server node from load balancers. The feature is
# disabled if not set.
#lb_throttle_threshold_percentage = 95



# MongoDB connection string
# See https://docs.mongodb.com/manual/reference/connection-string/ for details
#mongodb_uri = mongodb://mongo/graylog
mongodb_uri = mongodb://10.88.0.38:27017/graylog

# Authenticate against the MongoDB server
#mongodb_uri = mongodb://grayloguser:secret@127.0.0.1:27017/graylog

# Use a replica set instead of a single host
#mongodb_uri = mongodb://grayloguser:secret@mongo:27017,mongo:27018,mongo:27019/graylog

# Increase this value according to the maximum connections your MongoDB server can handle from a single client
# if you encounter MongoDB connection problems.
mongodb_max_connections = 1000

# Number of threads allowed to be blocked by MongoDB connections multiplier. Default: 5
# If mongodb_max_connections is 100, and mongodb_threads_allowed_to_block_multiplier is 5,
# then 500 threads can block. More than that and an exception will be thrown.
# http://api.mongodb.com/java/current/com/mongodb/MongoOptions.html#threadsAllowedToBlockForConnectionMultiplier
mongodb_threads_allowed_to_block_multiplier = 5
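The root_password_sha2 value in the config above is simply the SHA-256 hash of the default password admin. To use your own password, generate the hash the same way:

```shell
# Generate the value for root_password_sha2 (shown here for the default password "admin").
printf '%s' admin | sha256sum | cut -d' ' -f1
# prints 8c6976e5b5410415bde908bd4dee15dfb167a9c873fc4bb8a81f6f2ab448a918
```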
For any permission errors, change ownership of your Graylog data directory to the container user with chown -R 1100:1100 (your graylog data directory). The node-id file contains a single UUID, for example 6033137e-d56b-47fc-9762-cd699c11a5a9.
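If the node-id file does not exist yet, you can pre-create it with a random UUID. A sketch, where NODE_ID_FILE is an illustrative variable (on the host the real path is /data/graylog/config/node-id, as mounted in the tasks above):

```shell
#!/bin/sh
# Pre-create the Graylog node-id file with a random UUID (sketch).
# NODE_ID_FILE is an illustrative variable; it defaults to a scratch path here,
# but on the host you would point it at /data/graylog/config/node-id.
NODE_ID_FILE="${NODE_ID_FILE:-$(mktemp -u)}"
mkdir -p "$(dirname "$NODE_ID_FILE")"
cat /proc/sys/kernel/random/uuid > "$NODE_ID_FILE"
# The container runs Graylog as UID 1100, so the file must be readable by it
# (on the host: chown -R 1100:1100 /data/graylog).
chmod 644 "$NODE_ID_FILE"
cat "$NODE_ID_FILE"
```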


Send data to Graylog using Fluentd, Graylog Sidecar, and Filebeat

Once logged in to Graylog, click on System in the top nav, then click on Inputs in the left navigation bar. (Or simply go to http://localhost:9000/system/inputs.)

Then, from the dropdown, choose GELF UDP and click Launch new input, which opens a modal dialog. Select the node and fill in the title, then click Save.
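Once the GELF UDP input is running on its default port 12201, you can send a hand-built test message straight to it. A sketch that uses bash's /dev/udp device; the host and port are assumptions matching the input created above:

```shell
# Minimal GELF payload: version, host, and short_message are the required fields.
payload='{"version":"1.1","host":"test-host","short_message":"hello from shell"}'

# /dev/udp is a bash feature; the send is skipped under other shells.
if [ -n "$BASH_VERSION" ]; then
    # Assumption: Graylog on 127.0.0.1 with the GELF UDP input on port 12201.
    printf '%s' "$payload" > /dev/udp/127.0.0.1/12201 && echo "sent"
fi
```

The message should appear in the input's Show received messages view a moment later.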

Install td-agent for your Linux distribution; in our example it is CentOS.

Then, install the out_gelf plugin to send data to Graylog. Currently, the GELF plugin is not available on RubyGems, so we need to download the plugin file and place it in /etc/td-agent/plugin. We also need to gem-install GELF's Ruby client:

$ wget https://raw.githubusercontent.com/emsearcy/fluent-plugin-gelf/master/lib/fluent/plugin/out_gelf.rb
$ sudo mv out_gelf.rb /etc/td-agent/plugin
$ sudo /usr/sbin/td-agent-gem install gelf
Configure /etc/td-agent/td-agent.conf as follows:

<source>
  @type syslog
  port 5140
  tag graylog2
</source>

<match graylog2.**>
  @type gelf
  host 127.0.0.1
  port 12201
  <buffer>
    flush_interval 5s
  </buffer>
</match>

Open /etc/rsyslog.conf and add the following line to the file: *.* @127.0.0.1:5140

Finally, restart rsyslog and Fluentd with the following commands:

$ sudo systemctl restart rsyslog
$ sudo systemctl restart td-agent

Graylog Sidecar

Install the Graylog Sidecar repository configuration and Graylog Sidecar itself with the following commands:

$ sudo rpm -Uvh (location of rpm file)
$ sudo yum install graylog-sidecar

Edit the configuration and activate the Sidecar as a system service.

Add the URL of your Graylog server and the API token (create the token in the Sidecars section of the Graylog web interface):

$ vi /etc/graylog/sidecar/sidecar.yml
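The two settings to change in sidecar.yml are the server URL and the API token. A sketch, where the URL and token are placeholders for your own values:

```yaml
# /etc/graylog/sidecar/sidecar.yml (placeholder values)
server_url: "http://127.0.0.1:9000/api/"
server_api_token: "<token created in the Sidecars section of the web interface>"
```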

$ sudo graylog-sidecar -service install

$ sudo systemctl start graylog-sidecar

Filebeat Install

curl -L -O https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-oss-7.8.1-x86_64.rpm

sudo rpm -vi filebeat-oss-7.8.1-x86_64.rpm
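To ship logs, point Filebeat at a Beats input on the Graylog server (port 5044, as published by the Graylog container above). A minimal filebeat.yml sketch; the host and log paths are placeholders:

```yaml
# /etc/filebeat/filebeat.yml (sketch; host and paths are placeholders)
filebeat.inputs:
  - type: log
    paths:
      - /var/log/messages
output.logstash:
  hosts: ["127.0.0.1:5044"]
```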