
Dropshell LogClient Template
An auto-configuring Filebeat agent that collects Docker container logs via the Docker API and system logs, shipping them to a centralized logging server with minimal configuration.
Overview
This template deploys a lightweight Filebeat agent that:
- Uses Docker API to collect logs from all containers (regardless of logging driver)
- Allows containers to use any Docker logging driver (json-file, local, journald, etc.)
- Collects system logs (syslog, auth logs, kernel logs)
- Ships logs to a centralized ELK stack (logserver)
- Requires minimal configuration - just the server address and credentials
- Handles connection failures with local buffering
- Auto-reconnects when the server becomes available
Features
Docker API Log Collection
- Direct API access: Reads logs via Docker API, not from files
- Driver independent: Works with any Docker logging driver (local, json-file, journald)
- Automatic discovery: Finds all running containers dynamically
- Container metadata: Enriches logs with container names, images, labels
- Real-time streaming: Gets logs as they're generated
- Multi-line handling: Properly handles stack traces and multi-line logs
- JSON parsing: Automatically parses JSON-formatted logs
- Label-based filtering: Can include/exclude containers based on labels
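As a sketch of how this can look in a Filebeat configuration, the docker autodiscover provider watches the Docker API for container lifecycle events and attaches a log input to each discovered container (paths and options here are illustrative, not necessarily the template's shipped config):

```yaml
# Illustrative sketch - the template generates its own filebeat.yml.
filebeat.autodiscover:
  providers:
    - type: docker            # watch the Docker API for container start/stop events
      hints.enabled: true     # honour per-container co.elastic.logs/* label hints
      templates:
        - config:
            - type: container # collect the discovered container's output
              paths:
                - /var/lib/docker/containers/${data.docker.container.id}/*.log
```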
System Log Collection
- /var/log/syslog or /var/log/messages: System events
- /var/log/auth.log or /var/log/secure: Authentication events
- /var/log/kern.log: Kernel messages
- journald: SystemD journal (if available)
- Custom paths: Configurable additional log paths
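A minimal sketch of the corresponding Filebeat inputs, assuming the host's /var/log is mounted into the agent container at the same path:

```yaml
filebeat.inputs:
  - type: filestream
    id: system-logs           # filestream inputs need a unique id
    paths:
      - /var/log/syslog       # Debian/Ubuntu (use /var/log/messages on RHEL)
      - /var/log/auth.log     # Debian/Ubuntu (use /var/log/secure on RHEL)
      - /var/log/kern.log     # kernel messages
```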
Reliability Features
- Local buffering: Stores logs locally when server is unreachable
- Automatic retry: Reconnects automatically with exponential backoff
- Compression: Compresses logs before sending to save bandwidth
- Secure transmission: Optional TLS/SSL encryption
- Backpressure handling: Slows down when server is overwhelmed
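These behaviours map onto standard Filebeat settings; a hedged sketch (the values are examples, not the template's defaults):

```yaml
queue.mem:
  events: 4096                # local buffer while the server is unreachable
output.logstash:
  hosts: ["logserver.example.com:5044"]
  compression_level: 3        # gzip level 0-9, trades CPU for bandwidth
  backoff.init: 1s            # first retry delay after a failed connection
  backoff.max: 60s            # cap for the exponential backoff
```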
Architecture
How It Works
- Filebeat runs as a container with Docker socket access
- Uses Docker API to stream logs from all containers
- Monitors Docker API for container lifecycle events
- Automatically starts collecting logs from new containers
- Reads host system logs from mounted volumes
- Ships all logs to configured Logstash/Elasticsearch endpoint
- Maintains connection state and buffering information
Log Flow
Docker Containers → Docker API ─────┐
                                    ├─→ Filebeat → Logstash → Elasticsearch → Kibana
System Logs (mounted volumes) ──────┘
Why Docker API Instead of Log Files?
- Logging driver flexibility: Containers can use local, json-file, journald, or any other driver
- No log file management: No need to worry about log rotation or file paths
- Better performance: Direct streaming without file I/O overhead
- Consistent access: Same method regardless of storage backend
- Real-time streaming: Get logs immediately as they're generated
- Simplified permissions: Only need Docker socket access
Minimum Configuration
The template requires minimal configuration - server address and authentication:
# Required - Server connection
LOGSERVER_HOST=192.168.1.100
LOGSERVER_PORT=5044
# Required - Authentication (choose one method)
AUTH_MODE=mtls # Options: mtls, apikey, basic
# For mTLS authentication
CLIENT_CERT_PATH=/certs/client.crt
CLIENT_KEY_PATH=/certs/client.key
CA_CERT_PATH=/certs/ca.crt
# For API key authentication
API_KEY=your-api-key-here
# For basic auth (not recommended)
USERNAME=filebeat
PASSWORD=changeme
Configuration Options
Environment Variables (service.env)
# REQUIRED: Log server connection
LOGSERVER_HOST=logserver.example.com
LOGSERVER_PORT=5044
# REQUIRED: Authentication method
AUTH_MODE=mtls # mtls, apikey, or basic
# mTLS Authentication (if AUTH_MODE=mtls)
CLIENT_CERT_PATH=/certs/${HOSTNAME}.crt
CLIENT_KEY_PATH=/certs/${HOSTNAME}.key
CA_CERT_PATH=/certs/ca.crt
SSL_VERIFICATION_MODE=full
# API Key Authentication (if AUTH_MODE=apikey)
API_KEY="" # Will be provided by logserver admin
# Basic Authentication (if AUTH_MODE=basic)
USERNAME=filebeat
PASSWORD=changeme
# Optional: Performance tuning
BULK_MAX_SIZE=2048 # Maximum batch size
WORKER_THREADS=1 # Number of worker threads
QUEUE_SIZE=4096 # Internal queue size
MAX_BACKOFF=60s # Maximum retry backoff
# Optional: Filtering
EXCLUDE_CONTAINERS="" # Comma-separated container names to exclude
INCLUDE_CONTAINERS="" # Only include these containers (if set)
EXCLUDE_LABELS="" # Exclude containers with these labels
INCLUDE_LABELS="" # Only include containers with these labels
# Optional: Additional log paths
CUSTOM_LOG_PATHS="" # Comma-separated additional paths to monitor
# Optional: Resource limits
MAX_CPU=50 # Maximum CPU usage percentage
MAX_MEMORY=200MB # Maximum memory usage
Collected Log Types
Docker Container Logs (via Docker API)
- stdout/stderr: All container output regardless of logging driver
- Container metadata: Name, ID, image, labels
- Docker events: Start, stop, die, kill events
- Health check results: If configured
- Works with all logging drivers: local, json-file, journald, syslog, etc.
System Logs
- System messages: Service starts/stops, errors
- Authentication: SSH logins, sudo usage
- Kernel messages: Hardware events, driver messages
- Package management: apt/yum operations
- Cron jobs: Scheduled task execution
Log Enrichment
Logs are automatically enriched with:
- Hostname: Source host identification
- Timestamp: Precise event time with timezone
- Log level: Parsed from log content when possible
- Container info: For Docker logs
- Process info: PID, command for system logs
- File path: Source log file
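In Filebeat terms, enrichment like this is typically done with processors; a sketch:

```yaml
processors:
  - add_host_metadata: ~        # hostname, OS, IP addresses
  - add_docker_metadata: ~      # container name, image, labels (via the Docker socket)
  - decode_json_fields:         # parse JSON-formatted log lines
      fields: ["message"]
      target: ""
      overwrite_keys: false
```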
Resource Requirements
Minimum
- CPU: 0.5 cores
- RAM: 128MB
- Storage: 1GB (for buffer)
Typical Usage
- CPU: 1-5% of one core
- RAM: 150-200MB
- Network: Varies with log volume
- Storage: Depends on buffer size
Installation
Prerequisites
- A running logserver (ELK stack)
- Network connectivity to logserver
- Docker installed on host
- Authentication credentials from logserver admin
Setup Authentication
For mTLS (Recommended):
# Get client certificate from logserver admin
# They will run: dropshell exec logserver /scripts/generate-client-cert.sh $(hostname)
# Copy the generated certificate files to this client
mkdir -p /etc/dropshell/certs
# Copy ca.crt, client.crt, and client.key to /etc/dropshell/certs/
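The copied certificates then plug into Filebeat's output like this (a sketch; it assumes the host directory /etc/dropshell/certs is mounted into the agent container at /certs):

```yaml
output.logstash:
  hosts: ["${LOGSERVER_HOST}:${LOGSERVER_PORT}"]
  ssl.certificate_authorities: ["/certs/ca.crt"]
  ssl.certificate: "/certs/client.crt"
  ssl.key: "/certs/client.key"
  ssl.verification_mode: full   # verify the server certificate and hostname
```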
For API Key:
# Get API key from logserver admin
# They will run: dropshell exec logserver /scripts/generate-api-key.sh $(hostname)
# Add the API key to service.env
Deploy
# Configure authentication in service.env
dropshell install logclient
Monitoring
Check Status
dropshell status logclient
View Filebeat Logs
dropshell logs logclient
Verify Connectivity
# Check if logs are being shipped
docker exec logclient-filebeat filebeat test output
Monitor Metrics
# View Filebeat statistics
docker exec logclient-filebeat curl -s http://localhost:5066/stats
Troubleshooting
No Logs Appearing on Server
- Check connectivity: telnet $LOGSERVER_HOST $LOGSERVER_PORT
- Verify Filebeat is running: dropshell status logclient
- Check the Filebeat logs: dropshell logs logclient | tail -50
- Test the configuration: docker exec logclient-filebeat filebeat test config
High CPU Usage
- Reduce worker threads in service.env
- Increase bulk_max_size to send larger batches
- Add exclude filters for noisy containers
Missing Container Logs
- Verify Docker socket is mounted
- Check container isn't in exclude list
- Ensure Filebeat has permissions to Docker socket
- Verify container is actually producing output
- Check if container uses a supported logging driver
Buffer Full Errors
- Increase queue_size in service.env
- Check network connectivity to server
- Verify server isn't overwhelmed
Security Considerations
- Authentication:
  - Always use mTLS or API keys in production
  - Never use basic auth except for testing
  - Store credentials securely
  - Rotate certificates/keys regularly
- Docker Socket Access:
  - Requires Docker socket access to read logs via the API
  - Understand the security implications of socket access
  - Consider read-only socket access if possible
- Network Security:
  - All connections are TLS encrypted
  - Verify server certificates
  - Configure firewall rules appropriately
  - Use private networks when possible
- Data Protection:
  - Logs may contain sensitive data
  - Filter sensitive information before shipping
  - Exclude containers with sensitive data if needed
- Resource Limits:
  - Set CPU and memory limits
  - Monitor resource usage
  - Prevent resource exhaustion attacks
Performance Tuning
For High-Volume Environments
# Increase workers and batch size
WORKER_THREADS=4
BULK_MAX_SIZE=4096
QUEUE_SIZE=8192
For Low-Resource Hosts
# Reduce resource usage
WORKER_THREADS=1
BULK_MAX_SIZE=512
MAX_MEMORY=100MB
MAX_CPU=25
Network Optimization
# Enable compression (CPU vs bandwidth tradeoff)
COMPRESSION_LEVEL=3 # 0-9, higher = more compression
Integration with LogServer
This template is designed to work seamlessly with the logserver template:
- Deploy logserver first
- Note the logserver's IP/hostname
- Configure logclient with server address
- Logs automatically start flowing
Maintenance
Regular Tasks
- Monitor buffer usage
- Check for connection errors
- Review excluded/included containers
- Update Filebeat version
Log Rotation
Filebeat handles log rotation automatically:
- Detects renamed/rotated files
- Continues reading from correct position
- Cleans up old file handles
Advanced Configuration
Custom Filebeat Configuration
Create a custom filebeat.yml in the config directory for advanced scenarios:
- Custom processors
- Additional inputs
- Complex filtering rules
- Alternative outputs (Filebeat runs one output at a time)
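For example, a custom filtering rule might drop noisy events before they leave the host (a sketch; the image name is hypothetical):

```yaml
processors:
  - drop_event:                   # discard matching events entirely
      when:
        contains:
          container.image.name: "healthcheck"   # hypothetical noisy image
```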
Docker Labels for Control
Control log collection per container:
# In docker-compose.yml
services:
  myapp:
    logging:
      driver: local  # Can use any driver - Filebeat reads via the Docker API
      options:
        max-size: "10m"
        max-file: "3"
    labels:
      - "filebeat.enable=false"  # Exclude this container
      - "filebeat.multiline.pattern=^\\["  # Custom multiline pattern
Logging Driver Compatibility
The Docker API input works with all Docker logging drivers:
- local: Recommended for production (efficient, no file access needed)
- json-file: Traditional default driver
- journald: SystemD journal integration
- syslog: Forward to syslog
- none: Disables logging (Filebeat won't collect)
You can use the local driver for better performance, since Filebeat does not need to manage the container log files.