Log Aggregators — Splunk, ELK, Graylog, and Loki

Modern log aggregation platforms provide centralized storage, search, correlation, and visualization of logs from multiple sources. MideyeServer integrates with these platforms through file-based log collection — a lightweight agent tails the log files and forwards entries to the aggregation backend.

Supported platforms:

  • Splunk — Enterprise log management and SIEM
  • Elasticsearch (ELK Stack) — Open-source search and analytics
  • Graylog — Open-source log management
  • Grafana Loki — Cloud-native log aggregation

Before configuring collection agents, identify your log file paths. See Overview → Log file locations for the full platform reference.

Platform              Log Directory
Debian / Ubuntu       /opt/mideyeserver6/log/
RHEL / Rocky          /opt/mideyeserver6/log/
Windows               C:\Program Files (x86)\Mideye Server 6\log\
Docker                /home/mideye/log/

Both mideyeserver.log and mideyeserver.error are available in these directories.


The Splunk Universal Forwarder is a lightweight agent for forwarding logs to Splunk Enterprise or Splunk Cloud.

Download and install from Splunk Downloads.

Terminal window
wget -O splunkforwarder.tgz '<download-url>'
tar xvzf splunkforwarder.tgz -C /opt
/opt/splunkforwarder/bin/splunk start --accept-license
  1. Add MideyeServer log files as inputs

    Terminal window
    /opt/splunkforwarder/bin/splunk add monitor /opt/mideyeserver6/log/mideyeserver.log \
    -sourcetype mideyeserver:log \
    -index main
    /opt/splunkforwarder/bin/splunk add monitor /opt/mideyeserver6/log/mideyeserver.error \
    -sourcetype mideyeserver:error \
    -index main

    Or edit inputs.conf directly:

    /opt/splunkforwarder/etc/system/local/inputs.conf
    [monitor:///opt/mideyeserver6/log/mideyeserver.log]
    disabled = false
    sourcetype = mideyeserver:log
    index = main

    [monitor:///opt/mideyeserver6/log/mideyeserver.error]
    disabled = false
    sourcetype = mideyeserver:error
    index = main
  2. Configure forwarding destination

    Terminal window
    /opt/splunkforwarder/bin/splunk add forward-server splunk.example.com:9997

    Or edit:

    /opt/splunkforwarder/etc/system/local/outputs.conf
    [tcpout]
    defaultGroup = default-autolb-group

    [tcpout:default-autolb-group]
    server = splunk.example.com:9997
  3. Restart Universal Forwarder

    Terminal window
    /opt/splunkforwarder/bin/splunk restart
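The inputs.conf stanzas from step 1 are plain INI-style sections, so they can be sanity-checked offline before a restart. A minimal sketch using Python's configparser, with the file content inlined rather than read from disk:

```python
import configparser

# inputs.conf content inlined for the check; on a real host it lives at
# /opt/splunkforwarder/etc/system/local/inputs.conf
INPUTS_CONF = """
[monitor:///opt/mideyeserver6/log/mideyeserver.log]
disabled = false
sourcetype = mideyeserver:log
index = main

[monitor:///opt/mideyeserver6/log/mideyeserver.error]
disabled = false
sourcetype = mideyeserver:error
index = main
"""

parser = configparser.ConfigParser()
parser.read_string(INPUTS_CONF)

# Print each monitor stanza with its parsed key/value pairs
for section in parser.sections():
    print(section, "->", dict(parser[section]))
```

The same script pointed at the real file (parser.read(path)) catches typos such as a missing bracket or duplicated stanza before the forwarder is restarted.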

On the Splunk search head (or indexer), create a custom field extraction to parse MideyeServer’s log format; the TIME_* settings apply at index time, while the EXTRACT- fields are applied at search time:

props.conf
[mideyeserver:log]
SHOULD_LINEMERGE = false
TIME_PREFIX = ^
TIME_FORMAT = %Y-%m-%d %H:%M:%S.%3N%z
MAX_TIMESTAMP_LOOKAHEAD = 32
EXTRACT-level = ^\d{4}-\d{2}-\d{2}\s\d{2}:\d{2}:\d{2}\.\d{3}\+\d{2}:\d{2}\s(?<level>\w+)
EXTRACT-thread = \[(?<thread>[^\]]+)\]
EXTRACT-logger = \]\s(?<logger>\w+):
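The EXTRACT patterns can be tested offline against a sample log line. A minimal Python sketch; note that Python spells named groups (?P&lt;name&gt;...) where Splunk accepts (?&lt;name&gt;...), the patterns are otherwise identical:

```python
import re

# Sample line in MideyeServer's default log format
SAMPLE = ("2026-02-25 14:32:15.847+00:00 INFO [main] "
          "MideyeServerApp: Application 'MideyeServer' is running!")

# The three EXTRACT patterns from props.conf, with Python group syntax
level  = re.search(r"^\d{4}-\d{2}-\d{2}\s\d{2}:\d{2}:\d{2}\.\d{3}\+\d{2}:\d{2}\s(?P<level>\w+)", SAMPLE)
thread = re.search(r"\[(?P<thread>[^\]]+)\]", SAMPLE)
logger = re.search(r"\]\s(?P<logger>\w+):", SAMPLE)

print(level.group("level"), thread.group("thread"), logger.group("logger"))
# prints: INFO main MideyeServerApp
```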

Filebeat is a lightweight shipper for forwarding logs to Elasticsearch or Logstash.

Terminal window
curl -L -O https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-8.x.x-amd64.deb
sudo dpkg -i filebeat-8.x.x-amd64.deb

Edit filebeat.yml:

/etc/filebeat/filebeat.yml
filebeat.inputs:
  # MideyeServer main log
  - type: log
    enabled: true
    paths:
      - /opt/mideyeserver6/log/mideyeserver.log
    fields:
      application: mideyeserver
      log_type: application
    fields_under_root: true
    # Multiline configuration for stack traces
    multiline.type: pattern
    multiline.pattern: '^\d{4}-\d{2}-\d{2}'
    multiline.negate: true
    multiline.match: after

  # MideyeServer error log
  - type: log
    enabled: true
    paths:
      - /opt/mideyeserver6/log/mideyeserver.error
    fields:
      application: mideyeserver
      log_type: error
    fields_under_root: true

# Output to Elasticsearch
output.elasticsearch:
  hosts: ["elasticsearch.example.com:9200"]
  index: "mideyeserver-%{+yyyy.MM.dd}"
  # Optional: authentication
  username: "filebeat"
  password: "changeme"
  # Optional: TLS
  ssl.enabled: true
  ssl.certificate_authorities: ["/etc/filebeat/ca.crt"]

# Index template settings (required when using a custom index)
setup.ilm.enabled: false
setup.template.name: "mideyeserver"
setup.template.pattern: "mideyeserver-*"
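The multiline settings above (negate: true, match: after) mean that any line not starting with a date is appended to the preceding event, which keeps Java stack traces in a single document. A minimal Python sketch of that grouping logic, using hypothetical sample lines:

```python
import re

# Filebeat's multiline.pattern from the config above
PATTERN = re.compile(r"^\d{4}-\d{2}-\d{2}")

# Hypothetical input: one error with a stack trace, then a normal line
lines = [
    "2026-02-25 14:32:15.847+00:00 ERROR [main] MideyeServerApp: boom",
    "java.lang.IllegalStateException: boom",
    "\tat com.mideye.Example.run(Example.java:42)",
    "2026-02-25 14:32:16.001+00:00 INFO [main] MideyeServerApp: recovered",
]

events = []
for line in lines:
    if PATTERN.match(line) or not events:
        events.append(line)           # line starts a new event
    else:
        events[-1] += "\n" + line     # continuation: fold into previous event

print(len(events))  # prints: 2
```

The stack trace folds into the first event, so Elasticsearch receives two documents instead of four.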

If using Logstash for parsing, configure Filebeat to output to Logstash:

/etc/filebeat/filebeat.yml
output.logstash:
  hosts: ["logstash.example.com:5044"]

And create a Logstash pipeline:

/etc/logstash/conf.d/mideyeserver.conf
input {
  beats {
    port => 5044
  }
}

filter {
  if [application] == "mideyeserver" {
    grok {
      match => {
        "message" => "%{TIMESTAMP_ISO8601:timestamp} %{LOGLEVEL:level} \[%{DATA:thread}\] %{DATA:logger}: %{GREEDYDATA:log_message}"
      }
    }
    date {
      # ZZ matches the colon-separated offset (+00:00) in the timestamp
      match => ["timestamp", "yyyy-MM-dd HH:mm:ss.SSSZZ"]
      target => "@timestamp"
    }
  }
}

output {
  elasticsearch {
    hosts => ["elasticsearch.example.com:9200"]
    index => "mideyeserver-%{+YYYY.MM.dd}"
  }
}
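The date filter's format string can be verified against a sample timestamp with Python's strptime, whose %z accepts a colon-separated offset on Python 3.7+:

```python
from datetime import datetime, timezone

# Sample timestamp in MideyeServer's log format; %f accepts the
# 3-digit milliseconds, %z the +00:00 offset
ts = datetime.strptime("2026-02-25 14:32:15.847+00:00", "%Y-%m-%d %H:%M:%S.%f%z")
print(ts.astimezone(timezone.utc).isoformat())
```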
Enable and start Filebeat:

Terminal window
sudo systemctl enable filebeat
sudo systemctl start filebeat
sudo systemctl status filebeat

Graylog can receive logs from Filebeat via the Beats input plugin.

  1. Create Beats Input

    • Navigate to System → Inputs
    • Select Beats and click Launch new input
    • Configure:
      • Title: MideyeServer Logs
      • Bind address: 0.0.0.0
      • Port: 5044
    • Click Save
  2. Create Extractors (Optional)

    Create extractors to parse the log format:

    • Navigate to System → Inputs → MideyeServer Logs → Manage extractors
    • Click Get started
    • Use Grok pattern:
      %{TIMESTAMP_ISO8601:timestamp} %{LOGLEVEL:level} \[%{DATA:thread}\] %{DATA:logger}: %{GREEDYDATA:message}
  3. Point Filebeat at the Graylog input

    Graylog's Beats input speaks the same protocol as Logstash, so Filebeat is configured with an output.logstash section:

    /etc/filebeat/filebeat.yml
    filebeat.inputs:
      - type: log
        enabled: true
        paths:
          - /opt/mideyeserver6/log/mideyeserver.log
        fields:
          application: mideyeserver

    output.logstash:
      hosts: ["graylog.example.com:5044"]
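Before restarting Filebeat it can help to confirm that the Beats input is reachable. A minimal sketch with Python's socket module; beats_port_open is a hypothetical helper name, and the hostname is the placeholder used above:

```python
import socket

def beats_port_open(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port can be established."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example against the placeholder host from the configuration above:
# beats_port_open("graylog.example.com", 5044)
```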

Promtail is the agent for shipping logs to Grafana Loki.

Terminal window
docker run -d --name promtail \
  -v /var/log:/var/log \
  -v /opt/mideyeserver6/log:/mideyeserver/log \
  -v /etc/promtail:/etc/promtail \
  grafana/promtail:latest \
  -config.file=/etc/promtail/config.yml

Note that the host log directory is mounted at /mideyeserver/log inside the container, so the __path__ entries in the scrape configuration must use the container path:

/etc/promtail/config.yml
server:
  http_listen_port: 9080
  grpc_listen_port: 0

positions:
  filename: /tmp/positions.yaml

clients:
  - url: http://loki.example.com:3100/loki/api/v1/push

scrape_configs:
  - job_name: mideyeserver
    static_configs:
      - targets:
          - localhost
        labels:
          application: mideyeserver
          host: mideyeserver01
          __path__: /mideyeserver/log/mideyeserver.log

  - job_name: mideyeserver-error
    static_configs:
      - targets:
          - localhost
        labels:
          application: mideyeserver
          log_type: error
          host: mideyeserver01
          __path__: /mideyeserver/log/mideyeserver.error

Query MideyeServer logs in Grafana:

# All MideyeServer logs
{application="mideyeserver"}
# Error logs only
{application="mideyeserver", log_type="error"}
# Filter by log level
{application="mideyeserver"} |= "ERROR"
# Count errors per minute
count_over_time({application="mideyeserver"} |= "ERROR" [1m])
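The same LogQL queries can be issued programmatically against Loki's /loki/api/v1/query_range endpoint. A minimal sketch that only builds the request URL (the hostname is the placeholder from the Promtail configuration):

```python
from urllib.parse import urlencode

# Placeholder host from the Promtail client configuration above
base = "http://loki.example.com:3100/loki/api/v1/query_range"
params = {"query": '{application="mideyeserver"} |= "ERROR"', "limit": 100}

# urlencode percent-escapes the LogQL braces, quotes, and pipe
url = f"{base}?{urlencode(params)}"
print(url)
```

Sending the request (for example with urllib.request or curl) returns a JSON body whose data.result array holds the matching log streams.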

For advanced use cases, you can configure MideyeServer to output JSON-formatted logs using logstash-logback-encoder.

Add the encoder dependency to MideyeServer’s classpath (contact Mideye Support for custom builds).

Replace the pattern encoder with JSON encoder in logback.xml:

logback.xml
<appender name="FILE" class="ch.qos.logback.core.rolling.RollingFileAppender">
  <!-- Replace <encoder> with JSON encoder -->
  <encoder class="net.logstash.logback.encoder.LogstashEncoder">
    <includeContext>true</includeContext>
    <includeMdc>true</includeMdc>
    <includeStructuredArguments>true</includeStructuredArguments>
    <fieldNames>
      <timestamp>@timestamp</timestamp>
      <message>message</message>
      <logger>logger_name</logger>
      <thread>thread_name</thread>
      <level>level</level>
      <levelValue>[ignore]</levelValue>
    </fieldNames>
  </encoder>
  <!-- ... rest of configuration ... -->
</appender>

This outputs logs in JSON format:

{
  "@timestamp": "2026-02-25T14:32:15.847Z",
  "level": "INFO",
  "thread_name": "main",
  "logger_name": "com.mideye.mideyeserver.MideyeServerApp",
  "message": "Application 'MideyeServer' is running!",
  "host": "mideyeserver01"
}
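Downstream consumers can then parse each line as a self-describing JSON object, with no Grok pattern required. A minimal Python sketch using the example event above:

```python
import json

# One JSON-encoded log line, as emitted by the LogstashEncoder example
line = ('{"@timestamp": "2026-02-25T14:32:15.847Z", "level": "INFO", '
        '"thread_name": "main", '
        '"logger_name": "com.mideye.mideyeserver.MideyeServerApp", '
        '"message": "Application \'MideyeServer\' is running!", '
        '"host": "mideyeserver01"}')

event = json.loads(line)
print(event["level"], event["logger_name"])
```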

Use this Grok pattern to parse MideyeServer’s default log format:

%{TIMESTAMP_ISO8601:timestamp} %{LOGLEVEL:level}\s+\[%{DATA:thread}\] %{DATA:logger}: %{GREEDYDATA:message}

Example match:

2026-02-25 14:32:15.847+00:00 INFO [main] MideyeServerApp: Application 'MideyeServer' is running!

Extracted fields:

  • timestamp: 2026-02-25 14:32:15.847+00:00
  • level: INFO
  • thread: main
  • logger: MideyeServerApp
  • message: Application 'MideyeServer' is running!
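The Grok pattern translates to an ordinary regular expression. A minimal Python sketch, with TIMESTAMP_ISO8601 and LOGLEVEL replaced by simplified sub-patterns that are sufficient for this log format:

```python
import re

# Simplified equivalent of the Grok pattern above
PATTERN = re.compile(
    r"(?P<timestamp>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d{3}[+-]\d{2}:\d{2}) "
    r"(?P<level>[A-Z]+)\s+\[(?P<thread>[^\]]+)\] (?P<logger>[^:]+): (?P<message>.*)"
)

line = ("2026-02-25 14:32:15.847+00:00 INFO [main] "
        "MideyeServerApp: Application 'MideyeServer' is running!")

fields = PATTERN.match(line).groupdict()
print(fields)
```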

Check file permissions:

Terminal window
ls -l /opt/mideyeserver6/log/
# Agent user needs read access

Check agent logs:

Terminal window
# Filebeat
sudo journalctl -u filebeat -f
# Promtail
sudo journalctl -u promtail -f

Verify file path:

Terminal window
# Test with tail
tail -f /opt/mideyeserver6/log/mideyeserver.log

Check network connectivity:

Terminal window
# Elasticsearch
curl -X GET "elasticsearch.example.com:9200"
# Graylog Beats input
telnet graylog.example.com 5044
# Loki
curl http://loki.example.com:3100/ready

Check agent status:

Terminal window
# Filebeat test output
sudo filebeat test output
# Filebeat test config
sudo filebeat test config

Reduce log verbosity in MideyeServer (see Log Levels).

Limit multiline processing in Filebeat:

multiline.max_lines: 500
multiline.timeout: 5s

Tune Filebeat harvester settings to close inactive files and scan less often:

close_inactive: 5m
scan_frequency: 30s

  • Overview: Log file locations and paths per platform
  • Syslog: Alternative to log aggregators for syslog-based collection
  • Log Levels: Configure verbosity before forwarding
  • Log Rotation: Configure rotation to prevent disk space issues