docs: Add 6 and update 11 files
All checks were successful
Test and Publish Templates / test-and-publish (push) Successful in 44s
389
logclient/DOCUMENTATION.md
Normal file
@@ -0,0 +1,389 @@
# Dropshell LogClient Template

An auto-configuring Filebeat agent that collects Docker container logs via the Docker API, along with host system logs, and ships them to a centralized logging server with minimal configuration.

## Overview

This template deploys a lightweight Filebeat agent that:

- Uses the Docker API to collect logs from all containers (regardless of logging driver)
- Allows containers to use any Docker logging driver (json-file, local, journald, etc.)
- Collects system logs (syslog, auth logs, kernel logs)
- Ships logs to a centralized ELK stack (logserver)
- Requires minimal configuration - just the server address and credentials
- Handles connection failures with local buffering
- Auto-reconnects when the server becomes available

## Features

### Docker API Log Collection
- **Direct API access**: Reads logs via the Docker API, not from files
- **Driver independent**: Works with any Docker logging driver (local, json-file, journald)
- **Automatic discovery**: Finds all running containers dynamically
- **Container metadata**: Enriches logs with container names, images, and labels
- **Real-time streaming**: Receives logs as they are generated
- **Multi-line handling**: Correctly reassembles stack traces and other multi-line logs
- **JSON parsing**: Automatically parses JSON-formatted logs
- **Label-based filtering**: Can include or exclude containers based on labels
### System Log Collection
- **/var/log/syslog** or **/var/log/messages**: System events
- **/var/log/auth.log** or **/var/log/secure**: Authentication events
- **/var/log/kern.log**: Kernel messages
- **journald**: systemd journal (if available)
- **Custom paths**: Configurable additional log paths

### Reliability Features
- **Local buffering**: Stores logs locally when the server is unreachable
- **Automatic retry**: Reconnects automatically with exponential backoff
- **Compression**: Compresses logs before sending to save bandwidth
- **Secure transmission**: Optional TLS/SSL encryption
- **Backpressure handling**: Slows down when the server is overwhelmed

## Architecture

### How It Works
1. Filebeat runs as a container with Docker socket access (see the sketch after this list)
2. It uses the Docker API to stream logs from all containers
3. It monitors the Docker API for container lifecycle events
4. It automatically starts collecting logs from new containers
5. It reads host system logs from mounted volumes
6. It ships all logs to the configured Logstash/Elasticsearch endpoint
7. It maintains connection state and buffering information
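
For orientation, the wiring boils down to a read-only Docker socket mount plus a read-only `/var/log` mount. This is an illustrative sketch only - the image tag and container name are assumptions, and the template's own start script does the equivalent:

```bash
# Minimal sketch of how the agent container is wired up
docker run -d --name logclient-filebeat \
  -v /var/run/docker.sock:/var/run/docker.sock:ro \
  -v /var/log:/var/log:ro \
  -v "$PWD/config/filebeat.yml:/usr/share/filebeat/filebeat.yml:ro" \
  docker.elastic.co/beats/filebeat:7.17.9
```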

### Log Flow
```
Docker Containers → Docker API ↘
                                 Filebeat → Logstash → Elasticsearch → Kibana
System Logs (mounted volumes)  ↗
```

### Why Docker API Instead of Log Files?
- **Logging driver flexibility**: Containers can use `local`, `json-file`, `journald`, or any other driver
- **No log file management**: No need to worry about log rotation or file paths
- **Better performance**: Direct streaming without file I/O overhead
- **Consistent access**: Same method regardless of storage backend
- **Real-time streaming**: Logs arrive immediately as they're generated
- **Simplified permissions**: Only Docker socket access is needed (see the example below)
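
To see what this looks like in practice, you can stream a container's output straight from the Docker Engine API yourself - this is essentially the feed Filebeat consumes (`<container-id>` is a placeholder; substitute an ID from `docker ps`):

```bash
# Stream one container's stdout/stderr over the Docker socket - no log files
# involved. For non-TTY containers the raw stream includes Docker's 8-byte
# frame headers, but the idea is visible either way.
curl -s --unix-socket /var/run/docker.sock \
  "http://localhost/containers/<container-id>/logs?stdout=true&stderr=true&follow=true&timestamps=true"
```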

## Minimum Configuration

The template requires only the server address and authentication details:

```bash
# Required - Server connection
LOGSERVER_HOST=192.168.1.100
LOGSERVER_PORT=5044

# Required - Authentication (choose one method)
AUTH_MODE=mtls                      # Options: mtls, apikey, basic

# For mTLS authentication
CLIENT_CERT_PATH=/certs/client.crt
CLIENT_KEY_PATH=/certs/client.key
CA_CERT_PATH=/certs/ca.crt

# For API key authentication
API_KEY=your-api-key-here

# For basic auth (not recommended)
USERNAME=filebeat
PASSWORD=changeme
```

## Configuration Options

### Environment Variables (service.env)

```bash
# REQUIRED: Log server connection
LOGSERVER_HOST=logserver.example.com
LOGSERVER_PORT=5044

# REQUIRED: Authentication method
AUTH_MODE=mtls              # mtls, apikey, or basic

# mTLS Authentication (if AUTH_MODE=mtls)
CLIENT_CERT_PATH=/certs/${HOSTNAME}.crt
CLIENT_KEY_PATH=/certs/${HOSTNAME}.key
CA_CERT_PATH=/certs/ca.crt
SSL_VERIFICATION_MODE=full

# API Key Authentication (if AUTH_MODE=apikey)
API_KEY=""                  # Provided by the logserver admin

# Basic Authentication (if AUTH_MODE=basic)
USERNAME=filebeat
PASSWORD=changeme

# Optional: Performance tuning
BULK_MAX_SIZE=2048          # Maximum batch size
WORKER_THREADS=1            # Number of worker threads
QUEUE_SIZE=4096             # Internal queue size
MAX_BACKOFF=60s             # Maximum retry backoff

# Optional: Filtering
EXCLUDE_CONTAINERS=""       # Comma-separated container names to exclude
INCLUDE_CONTAINERS=""       # Only include these containers (if set)
EXCLUDE_LABELS=""           # Exclude containers with these labels
INCLUDE_LABELS=""           # Only include containers with these labels

# Optional: Additional log paths
CUSTOM_LOG_PATHS=""         # Comma-separated additional paths to monitor

# Optional: Resource limits
MAX_CPU=50                  # Maximum CPU usage percentage
MAX_MEMORY=200MB            # Maximum memory usage
```

## Collected Log Types

### Docker Container Logs (via Docker API)
- **stdout/stderr**: All container output, regardless of logging driver
- **Container metadata**: Name, ID, image, labels
- **Docker events**: Start, stop, die, kill events
- **Health check results**: If configured
- **Works with all logging drivers**: local, json-file, journald, syslog, etc.

### System Logs
- **System messages**: Service starts/stops, errors
- **Authentication**: SSH logins, sudo usage
- **Kernel messages**: Hardware events, driver messages
- **Package management**: apt/yum operations
- **Cron jobs**: Scheduled task execution

## Log Enrichment

Logs are automatically enriched with:
- **Hostname**: Source host identification
- **Timestamp**: Precise event time with timezone
- **Log level**: Parsed from log content when possible
- **Container info**: For Docker logs
- **Process info**: PID and command for system logs
- **File path**: Source log file

## Resource Requirements

### Minimum
- CPU: 0.5 cores
- RAM: 128MB
- Storage: 1GB (for buffer)

### Typical Usage
- CPU: 1-5% of one core
- RAM: 150-200MB
- Network: Varies with log volume
- Storage: Depends on buffer size

## Installation

### Prerequisites
1. A running logserver (ELK stack)
2. Network connectivity to the logserver
3. Docker installed on the host
4. Authentication credentials from the logserver admin

### Setup Authentication

#### For mTLS (Recommended):
```bash
# Get a client certificate from the logserver admin.
# They will run: dropshell exec logserver /scripts/generate-client-cert.sh $(hostname)
# Then copy the generated certificate files to this client:
mkdir -p /etc/dropshell/certs
# Copy ca.crt, client.crt, and client.key to /etc/dropshell/certs/
```
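
If the generated files live on the server host and you have SSH access, the copy step might look like this (the source path is an assumption - use wherever the admin placed the files):

```bash
# Fetch the generated files over SSH; the remote path is illustrative
scp admin@logserver:/path/to/certs/{ca.crt,client.crt,client.key} /etc/dropshell/certs/
chmod 600 /etc/dropshell/certs/client.key   # keep the private key private
```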

#### For API Key:
```bash
# Get an API key from the logserver admin.
# They will run: dropshell exec logserver /scripts/generate-api-key.sh $(hostname)
# Add the API key to service.env
```

### Deploy
```bash
# Configure authentication in service.env, then:
dropshell install logclient
```

## Monitoring

### Check Status
```bash
dropshell status logclient
```

### View Filebeat Logs
```bash
dropshell logs logclient
```

### Verify Connectivity
```bash
# Check that logs are being shipped
docker exec logclient-filebeat filebeat test output
```

### Monitor Metrics
```bash
# View Filebeat statistics
docker exec logclient-filebeat curl -s http://localhost:5066/stats
```
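
The stats endpoint returns a large JSON document; piping it through `jq` (assuming it is installed on the host) narrows it to the shipping counters that matter most:

```bash
# acked = events the server confirmed; dropped/failed indicate trouble
docker exec logclient-filebeat curl -s http://localhost:5066/stats \
  | jq '.libbeat.output.events'
```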

## Troubleshooting

### No Logs Appearing on Server

1. **Check connectivity**
   ```bash
   telnet $LOGSERVER_HOST $LOGSERVER_PORT
   ```

2. **Verify Filebeat is running**
   ```bash
   dropshell status logclient
   ```

3. **Check Filebeat logs**
   ```bash
   dropshell logs logclient | tail -50
   ```

4. **Test configuration**
   ```bash
   docker exec logclient-filebeat filebeat test config
   ```

### High CPU Usage

1. Reduce worker threads in service.env
2. Increase BULK_MAX_SIZE to send larger batches
3. Add exclude filters for noisy containers

### Missing Container Logs

1. Verify the Docker socket is mounted (see the sketch after this list)
2. Check the container isn't in the exclude list
3. Ensure Filebeat has permission to access the Docker socket
4. Verify the container is actually producing output
5. Check whether the container uses a supported logging driver
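
Two quick host-side checks cover most of these cases at once:

```bash
# Is the socket present and readable?
ls -l /var/run/docker.sock

# Which logging driver is the container using? ("none" means no logs at all;
# <container> is a placeholder for a name or ID from `docker ps`)
docker inspect -f '{{.HostConfig.LogConfig.Type}}' <container>
```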

### Buffer Full Errors

1. Increase QUEUE_SIZE in service.env
2. Check network connectivity to the server
3. Verify the server isn't overwhelmed

## Security Considerations

1. **Authentication**:
   - Always use mTLS or API keys in production
   - Never use basic auth except for testing
   - Store credentials securely
   - Rotate certificates/keys regularly

2. **Docker Socket Access**:
   - Requires Docker socket access to read logs via the API
   - Understand the security implications of socket access
   - Consider read-only socket access if possible

3. **Network Security**:
   - All connections are TLS encrypted
   - Verify server certificates
   - Configure firewall rules appropriately
   - Use private networks when possible

4. **Data Protection**:
   - Logs may contain sensitive data
   - Filter sensitive information before shipping
   - Exclude containers with sensitive data if needed

5. **Resource Limits**:
   - Set CPU and memory limits
   - Monitor resource usage
   - Prevent resource exhaustion attacks

## Performance Tuning

### For High-Volume Environments
```bash
# Increase workers and batch size
WORKER_THREADS=4
BULK_MAX_SIZE=4096
QUEUE_SIZE=8192
```

### For Low-Resource Hosts
```bash
# Reduce resource usage
WORKER_THREADS=1
BULK_MAX_SIZE=512
MAX_MEMORY=100MB
MAX_CPU=25
```

### Network Optimization
```bash
# Enable compression (CPU vs. bandwidth tradeoff)
COMPRESSION_LEVEL=3   # 0-9, higher = more compression
```

## Integration with LogServer

This template is designed to work seamlessly with the `logserver` template:

1. Deploy the logserver first
2. Note the logserver's IP/hostname
3. Configure the logclient with the server address
4. Logs automatically start flowing

## Maintenance

### Regular Tasks
- Monitor buffer usage
- Check for connection errors
- Review excluded/included containers
- Update the Filebeat version

### Log Rotation
Filebeat handles log rotation automatically:
- Detects renamed/rotated files
- Continues reading from the correct position
- Cleans up old file handles

## Advanced Configuration

### Custom Filebeat Configuration
Create a custom `filebeat.yml` in the config directory for advanced scenarios:
- Custom processors
- Additional inputs
- Complex filtering rules
- Multiple outputs

### Docker Labels for Control
Control log collection per container:
```yaml
# In docker-compose.yml
services:
  myapp:
    logging:
      driver: local   # Can use any driver - Filebeat reads via the API
      options:
        max-size: "10m"
        max-file: "3"
    labels:
      - "filebeat.enable=false"            # Exclude this container
      - "filebeat.multiline.pattern=^\\["  # Custom multiline pattern
```

### Logging Driver Compatibility
The Docker API input works with all Docker logging drivers:
- **local**: Recommended for production (efficient, no file access needed)
- **json-file**: Traditional default driver
- **journald**: systemd journal integration
- **syslog**: Forwards to syslog
- **none**: Disables logging (Filebeat won't collect anything)

You can use the `local` driver for better performance, since Filebeat doesn't need to read files.
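
To adopt `local` host-wide rather than per service, you could set it as the Docker daemon default. A sketch (note this overwrites any existing `/etc/docker/daemon.json` - merge by hand if you already have one, and schedule the Docker restart):

```bash
# Make `local` the default logging driver for all new containers
sudo tee /etc/docker/daemon.json > /dev/null <<'EOF'
{
  "log-driver": "local",
  "log-opts": { "max-size": "10m", "max-file": "3" }
}
EOF
sudo systemctl restart docker
```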

@@ -1,389 +1,34 @@
# LogClient

Ships Docker container and system logs to LogServer using Filebeat.

## Quick Start

1. **Get API Key**
   - Ask the LogServer admin to run `./generate-api-key.sh`
   - They'll provide your API key

2. **Configure**

   Edit `config/service.env`:
   ```bash
   LOGSERVER_HOST=<server-ip>
   LOGSERVER_PORT=5044
   API_KEY=<your-api-key>
   ```

3. **Install**
   ```bash
   dropshell install logclient
   ```

## What It Does
- Collects all Docker container logs via the Docker API
- Collects system logs (/var/log)
- Ships them to the central LogServer
- Works with any Docker logging driver

## Requirements
- Docker socket access
- Network connection to LogServer port 5044

See [DOCUMENTATION.md](DOCUMENTATION.md) for full details.
@@ -36,7 +36,8 @@ _remove_container "$CONTAINER_NAME" || true
 # Generate Filebeat configuration
 echo "Generating Filebeat configuration..."
-bash ./scripts/generate-config.sh || _die "Failed to generate configuration"
+SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
+bash "$SCRIPT_DIR/scripts/generate-config.sh" || _die "Failed to generate configuration"

 # Start the new container
 bash ./start.sh || _die "Failed to start Filebeat"
112
logclient/scripts/generate-config.sh
Executable file
@@ -0,0 +1,112 @@
#!/bin/bash

# Generate Filebeat configuration from template.
# Writes a filebeat.yml with the configured authentication. The heredoc is
# quoted, so ${VAR} references pass through literally for Filebeat to resolve
# from the environment at runtime.

CONFIG_DIR="${CONFIG_VOLUME:-${CONFIG_PATH:-./config}}"

# Ensure the config directory exists
mkdir -p "$CONFIG_DIR"

# Generate the filebeat.yml configuration
cat > "$CONFIG_DIR/filebeat.yml" << 'EOF'
# Filebeat Configuration for LogClient
# Generated by generate-config.sh

# ======================== Docker Input Configuration =========================
# Use the Docker input to collect logs via the Docker API
filebeat.inputs:
- type: docker
  enabled: true
  # Collect from all containers
  containers.ids:
    - '*'
  # Collect both stdout and stderr
  containers.stream: all
  # Combine partial log lines
  combine_partial: true
  # Add Docker metadata
  processors:
    - add_docker_metadata:
        host: "unix:///var/run/docker.sock"

# ======================== System Logs Configuration ==========================
- type: log
  enabled: true
  paths:
    - /var/log/syslog
    - /var/log/messages
  exclude_lines: ['^#']
  fields:
    log_type: syslog

- type: log
  enabled: true
  paths:
    - /var/log/auth.log
    - /var/log/secure
  exclude_lines: ['^#']
  fields:
    log_type: auth

# ======================== Processors Configuration ===========================
processors:
  - add_host_metadata:
      when.not.contains:
        tags: forwarded

# ======================== Output Configuration ===============================
output.logstash:
  hosts: ["${LOGSERVER_HOST}:${LOGSERVER_PORT}"]
  # SSL/TLS configuration
  ssl.enabled: true
  ssl.verification_mode: none  # Set to full in production with proper certs

  # API key authentication
  api_key: "${API_KEY}"

  # Performance settings
  bulk_max_size: ${BULK_MAX_SIZE:-2048}
  worker: ${WORKER_THREADS:-1}
  compression_level: 3

  # Retry configuration
  max_retries: 3
  backoff.init: 1s
  backoff.max: ${MAX_BACKOFF:-60s}

# ======================== Queue Configuration ================================
queue.mem:
  events: ${QUEUE_SIZE:-4096}
  flush.min_events: 512
  flush.timeout: 5s

# ======================== Logging Configuration ==============================
logging.level: info
logging.to_files: true
logging.files:
  path: /usr/share/filebeat/data/logs
  name: filebeat
  keepfiles: 3
  permissions: 0600

# ======================== Monitoring ==========================================
monitoring.enabled: false
http.enabled: true
http.host: 0.0.0.0
http.port: 5066

# ======================== File Permissions ====================================
# Set strict permissions (disabled for Docker)
# filebeat.config.modules.path: ${path.config}/modules.d/*.yml
EOF

echo "Filebeat configuration generated at: $CONFIG_DIR/filebeat.yml"

# Warn if required environment variables are missing (Filebeat expands them
# at runtime, so the file still generates, but shipping will fail)
if [ -z "$LOGSERVER_HOST" ] || [ -z "$LOGSERVER_PORT" ] || [ -z "$API_KEY" ]; then
    echo "WARNING: Required environment variables not set"
    echo "  LOGSERVER_HOST: ${LOGSERVER_HOST:-NOT SET}"
    echo "  LOGSERVER_PORT: ${LOGSERVER_PORT:-NOT SET}"
    echo "  API_KEY: ${API_KEY:+SET}"
fi
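
A quick way to exercise the script is to point it at a scratch directory via `CONFIG_PATH` (which the script already honors) and inspect the result; the values below are throwaway placeholders:

```bash
# Generate into /tmp and eyeball the output
CONFIG_PATH=/tmp/logclient-config LOGSERVER_HOST=192.168.1.100 \
  LOGSERVER_PORT=5044 API_KEY=test bash ./scripts/generate-config.sh
cat /tmp/logclient-config/filebeat.yml
```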
279
logserver/DOCUMENTATION.md
Normal file
@@ -0,0 +1,279 @@
# Dropshell LogServer Template

A comprehensive centralized logging solution using the ELK Stack (Elasticsearch, Logstash, Kibana) for receiving, processing, and visualizing logs from multiple hosts.

## Overview

This template deploys a full-featured ELK stack that:
- Receives logs from multiple sources via the Beats protocol
- Stores and indexes logs in Elasticsearch
- Provides powerful search and visualization through Kibana
- Supports automatic log parsing and enrichment
- Handles Docker container logs and system logs from clients

## Architecture

### Components

1. **Elasticsearch** (7.17.x)
   - Distributed search and analytics engine
   - Stores and indexes all log data
   - Provides fast full-text search capabilities
   - Single-node configuration for simplicity (can be scaled)

2. **Logstash** (7.17.x)
   - Log processing pipeline
   - Receives logs from Filebeat clients
   - Parses and enriches log data
   - Routes logs to the appropriate Elasticsearch indices

3. **Kibana** (7.17.x)
   - Web UI for log exploration and visualization
   - Creates dashboards and alerts
   - Real-time log streaming
   - Advanced search queries
## Features

### Minimum Configuration Design
- Auto-discovery of log formats
- Pre-configured dashboards for common services
- Automatic index lifecycle management
- Built-in parsing for Docker and syslog formats
- Zero-configuration client connectivity

### Log Processing
- Automatic timestamp extraction
- Docker metadata enrichment (container name, image, labels)
- Syslog parsing with severity levels
- JSON log support
- Multi-line log handling (stack traces, etc.)
- Grok pattern matching for common formats

### Security & Performance
- **Mutual TLS (mTLS)** authentication for client connections
- **API key authentication** as an alternative to certificates
- **Per-client authentication** with unique keys/certificates
- **SSL/TLS encryption** for all client connections
- **Basic authentication** for Kibana web access
- **IP whitelisting** for additional security
- Index lifecycle management for storage optimization
- Automatic cleanup of old logs
- Resource limits to prevent overconsumption

## Port Configuration

- **5601**: Kibana Web UI (HTTP/HTTPS with authentication)
- **9200**: Elasticsearch REST API (HTTP) - internal only
- **5044**: Logstash Beats input (TCP/TLS) - authenticated client connections
- **514**: Syslog input (UDP/TCP) - optional, unauthenticated
- **24224**: Fluentd forward input - optional Docker logging driver
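
If the host runs a firewall, only the client-facing ports need to be reachable from outside; a ufw sketch (ufw itself is an assumption - adjust sources and tooling to your network):

```bash
sudo ufw allow 5601/tcp   # Kibana UI - ideally restricted to admin IPs
sudo ufw allow 5044/tcp   # Beats ingestion from log clients
# Leave 9200 closed externally; Elasticsearch stays internal-only
```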

## Storage Requirements

- **Minimum**: 10GB for basic operation
- **Recommended**: 50GB+ depending on log volume
- **Log Retention**: Default 30 days (configurable)

## Client Authentication

### Authentication Methods

1. **Mutual TLS (mTLS) - Recommended**
   - Each client gets a unique certificate signed by the server's CA
   - Strongest security, with mutual authentication
   - Automatic certificate validation

2. **API Keys**
   - Each client gets a unique API key
   - Simpler to manage than certificates
   - Good for environments where certificate management is difficult

3. **Basic Auth (Not Recommended)**
   - Shared username/password
   - Least secure; only for testing

### Client Configuration

Clients using the `logclient` template will:
1. Authenticate using the provided credentials (cert/key or API key)
2. Establish an encrypted TLS connection
3. Ship all Docker container logs
4. Ship system logs (syslog, auth, kernel)
5. Maintain the connection with automatic reconnection
6. Buffer logs locally during network outages

## Dashboard Features

### Pre-configured Dashboards
- **System Overview**: Overall health and log volume metrics
- **Docker Containers**: Container-specific logs and metrics
- **Error Analysis**: Aggregated error logs from all sources
- **Security Events**: Authentication and access logs
- **Application Logs**: Parsed application-specific logs

### Search Capabilities
- Full-text search across all logs
- Filter by time range, host, container, severity
- Save and share search queries
- Export search results

## Resource Requirements

### Minimum
- CPU: 2 cores
- RAM: 4GB
- Storage: 10GB

### Recommended
- CPU: 4+ cores
- RAM: 8GB+
- Storage: 50GB+ SSD

## Configuration Options

### Environment Variables (service.env)

```bash
# Elasticsearch settings
ES_HEAP_SIZE=2g
ES_MAX_MAP_COUNT=262144

# Logstash settings
LS_HEAP_SIZE=1g
LS_PIPELINE_WORKERS=2

# Kibana settings
KIBANA_PASSWORD=changeme
KIBANA_BASE_PATH=/

# Log retention
LOG_RETENTION_DAYS=30
LOG_MAX_SIZE_GB=50

# Authentication mode
AUTH_MODE=mtls              # Options: mtls, apikey, basic
ENABLE_TLS=true

# mTLS settings (if AUTH_MODE=mtls)
CA_CERT_PATH=/certs/ca.crt
SERVER_CERT_PATH=/certs/server.crt
SERVER_KEY_PATH=/certs/server.key
CLIENT_CERT_REQUIRED=true

# API key settings (if AUTH_MODE=apikey)
API_KEYS_PATH=/config/api-keys.yml

# Network security
ALLOWED_IPS=""              # Comma-separated list; empty = all
```

## Usage

### Installation
```bash
dropshell install logserver
```

### Generate Client Credentials

#### For mTLS Authentication:
```bash
# Generate a client certificate for a new host
dropshell exec logserver /scripts/generate-client-cert.sh hostname
# This creates hostname.crt and hostname.key files
```

#### For API Key Authentication:
```bash
# Generate an API key for a new client
dropshell exec logserver /scripts/generate-api-key.sh hostname
# Returns an API key to configure in the client
```

### Access Kibana

Navigate to `https://<server-ip>:5601` in your browser.

Default credentials:
- Username: `elastic`
- Password: `changeme` (change in service.env)
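
Before logging in, a quick health probe from the server host confirms Elasticsearch itself is up (9200 is internal-only, so run it locally; substitute your actual password):

```bash
curl -s -u elastic:changeme "http://localhost:9200/_cluster/health?pretty"
```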

### View Logs
```bash
dropshell logs logserver
```

### Backup
```bash
dropshell backup logserver
```

## Troubleshooting

### Common Issues

1. **Elasticsearch failing to start**
   - Check vm.max_map_count: `sysctl vm.max_map_count` (should be 262144+; see the sketch after this list)
   - Verify sufficient memory is available

2. **No logs appearing in Kibana**
   - Check that Logstash is receiving data: port 5044 should be open
   - Verify client connectivity
   - Check index patterns in Kibana

3. **High memory usage**
   - Adjust heap sizes in service.env
   - Configure index lifecycle management
   - Reduce the retention period
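
For issue 1, the kernel setting can be applied immediately and persisted across reboots like so (the file name under /etc/sysctl.d is just a convention):

```bash
echo 'vm.max_map_count=262144' | sudo tee /etc/sysctl.d/99-elasticsearch.conf
sudo sysctl --system   # reload, applying the new value now
```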

## Integration

This template is designed to work seamlessly with the `logclient` template. Simply:
1. Deploy this logserver
2. Deploy logclient on each host you want to monitor
3. Configure each logclient with the logserver address
4. Logs will automatically start flowing

## Security Considerations

1. **Authentication Setup**
   - Use mTLS for production environments
   - Generate unique credentials for each client
   - Rotate certificates/keys regularly
   - Store credentials securely

2. **Network Security**
   - Always use TLS encryption for client connections
   - Configure IP whitelisting when possible
   - Use firewall rules to restrict access
   - Consider VPNs or private networks

3. **Access Control**
   - Change the default Kibana password immediately
   - Create read-only users for viewing logs
   - Implement role-based access control (RBAC)
   - Audit access logs regularly

4. **Data Protection**
   - Take regular backups of Elasticsearch indices
   - Encrypt data at rest (optional)
   - Monitor disk usage to prevent data loss
   - Implement log retention policies

## Maintenance

### Daily Tasks
- Monitor disk usage
- Check for failed log shipments
- Review error dashboards

### Weekly Tasks
- Verify all clients are reporting
- Check index health
- Review and optimize slow queries

### Monthly Tasks
- Update ELK stack components
- Archive old indices
- Review retention policies
- Tune performance based on usage patterns
@@ -1,279 +1,43 @@
|
|||||||
# Dropshell LogServer Template
|
# LogServer
|
||||||
|
|
||||||
A comprehensive centralized logging solution using the ELK Stack (Elasticsearch, Logstash, Kibana) for receiving, processing, and visualizing logs from multiple hosts.
|
Centralized logging with ELK Stack (Elasticsearch, Logstash, Kibana).
|
||||||
|
|
||||||
## Overview
|
## Quick Start
|
||||||
|
|
||||||
This template deploys a full-featured ELK stack that:
|
|
||||||
- Receives logs from multiple sources via Beats protocol
|
|
||||||
- Stores and indexes logs in Elasticsearch
|
|
||||||
- Provides powerful search and visualization through Kibana
|
|
||||||
- Supports automatic log parsing and enrichment
|
|
||||||
- Handles Docker container logs and system logs from clients
|
|
||||||
|
|
||||||
## Architecture
|
|
||||||
|
|
||||||
### Components
|
|
||||||
|
|
||||||
1. **Elasticsearch** (7.17.x)
|
|
||||||
- Distributed search and analytics engine
|
|
||||||
- Stores and indexes all log data
|
|
||||||
- Provides fast full-text search capabilities
|
|
||||||
- Single-node configuration for simplicity (can be scaled)
|
|
||||||
|
|
||||||
2. **Logstash** (7.17.x)
|
|
||||||
- Log processing pipeline
|
|
||||||
- Receives logs from Filebeat clients
|
|
||||||
- Parses and enriches log data
|
|
||||||
- Routes logs to appropriate Elasticsearch indices
|
|
||||||
|
|
||||||
3. **Kibana** (7.17.x)
|
|
||||||
- Web UI for log exploration and visualization
|
|
||||||
- Create dashboards and alerts
|
|
||||||
- Real-time log streaming
|
|
||||||
- Advanced search queries
|
|
||||||
|
|
||||||
## Features
|
|
||||||
|
|
||||||
### Minimum Configuration Design
|
|
||||||
- Auto-discovery of log formats
|
|
||||||
- Pre-configured dashboards for common services
|
|
||||||
- Automatic index lifecycle management
|
|
||||||
- Built-in parsing for Docker and syslog formats
|
|
||||||
- Zero-configuration client connectivity
|
|
||||||
|
|
||||||
### Log Processing
|
|
||||||
- Automatic timestamp extraction
|
|
||||||
- Docker metadata enrichment (container name, image, labels)
|
|
||||||
- Syslog parsing with severity levels
|
|
||||||
- JSON log support
|
|
||||||
- Multi-line log handling (stacktraces, etc.)
|
|
||||||
- Grok pattern matching for common formats
|
|
||||||
|
|
||||||
### Security & Performance
|
|
||||||
- **Mutual TLS (mTLS)** authentication for client connections
|
|
||||||
- **API key authentication** as an alternative to certificates
|
|
||||||
- **Per-client authentication** with unique keys/certificates
|
|
||||||
- **SSL/TLS encryption** for all client connections
|
|
||||||
- **Basic authentication** for Kibana web access
|
|
||||||
- **IP whitelisting** for additional security
|
|
||||||
- Index lifecycle management for storage optimization
|
|
||||||
- Automatic old log cleanup
|
|
||||||
- Resource limits to prevent overconsumption
|
|
||||||
|
|
||||||
## Port Configuration
|
|
||||||
|
|
||||||
- **5601**: Kibana Web UI (HTTP/HTTPS with authentication)
|
|
||||||
- **9200**: Elasticsearch REST API (HTTP) - internal only
|
|
||||||
- **5044**: Logstash Beats input (TCP/TLS) - authenticated client connections
|
|
||||||
- **514**: Syslog input (UDP/TCP) - optional, unauthenticated
|
|
||||||
- **24224**: Fluentd forward input - optional Docker logging driver
|
|
||||||
|
|
||||||
## Storage Requirements
|
|
||||||
|
|
||||||
- **Minimum**: 10GB for basic operation
|
|
||||||
- **Recommended**: 50GB+ depending on log volume
|
|
||||||
- **Log Retention**: Default 30 days (configurable)
|
|
||||||
|
|
||||||
## Client Authentication
|
|
||||||
|
|
||||||
### Authentication Methods
|
|
||||||
|
|
||||||
1. **Mutual TLS (mTLS) - Recommended**
|
|
||||||
- Each client gets a unique certificate signed by the server's CA
|
|
||||||
- Strongest security with mutual authentication
|
|
||||||
- Automatic certificate validation
|
|
||||||
|
|
||||||
2. **API Keys**
|
|
||||||
- Each client gets a unique API key
|
|
||||||
- Simpler to manage than certificates
|
|
||||||
- Good for environments where certificate management is difficult
|
|
||||||
|
|
||||||
3. **Basic Auth (Not Recommended)**
|
|
||||||
- Shared username/password
|
|
||||||
- Least secure, only for testing
|
|
||||||
|
|
||||||
### Client Configuration
|
|
||||||
|
|
||||||
Clients using the `logclient` template will:
|
|
||||||
1. Authenticate using provided credentials (cert/key or API key)
|
|
||||||
2. Establish encrypted TLS connection
|
|
||||||
3. Ship all Docker container logs
|
|
||||||
4. Ship system logs (syslog, auth, kernel)
|
|
||||||
5. Maintain connection with automatic reconnection
|
|
||||||
6. Buffer logs locally during network outages
|
|
||||||
|
|
||||||
## Dashboard Features
|
|
||||||
|
|
||||||
### Pre-configured Dashboards
|
|
||||||
- **System Overview**: Overall health and log volume metrics
|
|
||||||
- **Docker Containers**: Container-specific logs and metrics
|
|
||||||
- **Error Analysis**: Aggregated error logs from all sources
|
|
||||||
- **Security Events**: Authentication and access logs
|
|
||||||
- **Application Logs**: Parsed application-specific logs
|
|
||||||
|
|
||||||
### Search Capabilities
|
|
||||||
- Full-text search across all logs
|
|
||||||
- Filter by time range, host, container, severity
|
|
||||||
- Save and share search queries
|
|
||||||
- Export search results
|
|
||||||
|
|
||||||
## Resource Requirements
|
|
||||||
|
|
||||||
### Minimum
|
|
||||||
- CPU: 2 cores
|
|
||||||
- RAM: 4GB
|
|
||||||
- Storage: 10GB
|
|
||||||
|
|
||||||
### Recommended
|
|
||||||
- CPU: 4+ cores
|
|
||||||
- RAM: 8GB+
|
|
||||||
- Storage: 50GB+ SSD
|
|
||||||
|
|
||||||
## Configuration Options
|
|
||||||
|
|
||||||
### Environment Variables (service.env)
|
|
||||||
|
|
||||||
|
1. **System Setup**
|
||||||
```bash
|
```bash
|
||||||
# Elasticsearch settings
|
sudo sysctl -w vm.max_map_count=262144
|
||||||
ES_HEAP_SIZE=2g
|
|
||||||
ES_MAX_MAP_COUNT=262144
|
|
||||||
|
|
||||||
# Logstash settings
|
|
||||||
LS_HEAP_SIZE=1g
|
|
||||||
LS_PIPELINE_WORKERS=2
|
|
||||||
|
|
||||||
# Kibana settings
|
|
||||||
KIBANA_PASSWORD=changeme
|
|
||||||
KIBANA_BASE_PATH=/
|
|
||||||
|
|
||||||
# Log retention
|
|
||||||
LOG_RETENTION_DAYS=30
|
|
||||||
LOG_MAX_SIZE_GB=50
|
|
||||||
|
|
||||||
# Authentication Mode
|
|
||||||
AUTH_MODE=mtls # Options: mtls, apikey, basic
|
|
||||||
ENABLE_TLS=true
|
|
||||||
|
|
||||||
# mTLS Settings (if AUTH_MODE=mtls)
|
|
||||||
CA_CERT_PATH=/certs/ca.crt
|
|
||||||
SERVER_CERT_PATH=/certs/server.crt
|
|
||||||
SERVER_KEY_PATH=/certs/server.key
|
|
||||||
CLIENT_CERT_REQUIRED=true
|
|
||||||
|
|
||||||
# API Key Settings (if AUTH_MODE=apikey)
|
|
||||||
API_KEYS_PATH=/config/api-keys.yml
|
|
||||||
|
|
||||||
# Network Security
|
|
||||||
ALLOWED_IPS="" # Comma-separated list, empty = all
|
|
||||||
```
|
```
|
||||||
|
|
||||||
## Usage
|
2. **Configure**
|
||||||
|
Edit `config/service.env`:
|
||||||
|
- Set `SERVER_PUBLICBASEURL` to your actual server URL
|
||||||
|
- Change `ELASTIC_PASSWORD` from default
|
||||||
|
|
||||||
### Installation
|
3. **Install**
|
||||||
```bash
|
```bash
|
||||||
dropshell install logserver
|
dropshell install logserver
|
||||||
```
|
```
|
||||||
|
|
||||||
### Generate Client Credentials
|
4. **Generate Client Keys**
|
||||||
|
|
||||||
#### For mTLS Authentication:
|
|
||||||
```bash
|
```bash
|
||||||
# Generate client certificate for a new host
|
./generate-api-key.sh
|
||||||
dropshell exec logserver /scripts/generate-client-cert.sh hostname
|
# Enter hostname when prompted
|
||||||
# This creates hostname.crt and hostname.key files
|
# Copy the generated config to clients
|
||||||
```
|
```
|
||||||
|
|
||||||
#### For API Key Authentication:
|
5. **Access Kibana**
|
||||||
```bash
|
- URL: `http://<server-ip>:5601`
|
||||||
# Generate API key for a new client
|
- User: `elastic`
|
||||||
dropshell exec logserver /scripts/generate-api-key.sh hostname
|
- Password: Set in `service.env` (ELASTIC_PASSWORD)
|
||||||
# Returns an API key to configure in the client
|
|
||||||
```
|
|
||||||
|
|
||||||
### Access Kibana
|
## Ports
|
||||||
Navigate to `https://<server-ip>:5601` in your browser.
|
- `5601` - Kibana Web UI
|
||||||
|
- `5044` - Log ingestion (Filebeat)
|
||||||
|
|
||||||
Default credentials:
|
## Files
|
||||||
- Username: `elastic`
|
- `config/service.env` - Configuration
|
||||||
- Password: `changeme` (change in service.env)
|
- `config/api-keys.yml` - Client API keys
|
||||||
|
- `generate-api-key.sh` - Add new clients
|
||||||
|
|
||||||
### View Logs
|
See [DOCUMENTATION.md](DOCUMENTATION.md) for full details.
|
||||||
```bash
|
|
||||||
dropshell logs logserver
|
|
||||||
```
|
|
||||||
|
|
||||||
### Backup
|
|
||||||
```bash
|
|
||||||
dropshell backup logserver
|
|
||||||
```
|
|
||||||
|
|
||||||
## Troubleshooting
|
|
||||||
|
|
||||||
### Common Issues
|
|
||||||
|
|
||||||
1. **Elasticsearch failing to start**
|
|
||||||
- Check vm.max_map_count: `sysctl vm.max_map_count` (should be 262144+)
|
|
||||||
- Verify sufficient memory available
|
|
||||||
|
|
||||||
2. **No logs appearing in Kibana**
|
|
||||||
- Check Logstash is receiving data: port 5044 should be open
|
|
||||||
- Verify client connectivity
|
|
||||||
- Check index patterns in Kibana
|
|
||||||
|
|
||||||
3. **High memory usage**
|
|
||||||
- Adjust heap sizes in service.env
|
|
||||||
- Configure index lifecycle management
|
|
||||||
- Reduce retention period
|
|
||||||
|
|
## Integration

This template is designed to work seamlessly with the `logclient` template. Simply:

1. Deploy this logserver
2. Deploy logclient on each host you want to monitor
3. Configure logclient with the logserver address
4. Logs will automatically start flowing
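The client side of that flow looks roughly like this; the exact variable names live in the logclient template, so treat them as placeholders:

```bash
# On each monitored host (sketch)
dropshell install logclient
# Then point the client at this server in its service.env, e.g.:
#   LOGSERVER_HOST=logserver.example.com
#   LOGSERVER_PORT=5044
dropshell restart logclient
```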
## Security Considerations

1. **Authentication Setup**
   - Use mTLS for production environments
   - Generate unique credentials for each client
   - Rotate certificates/keys regularly
   - Store credentials securely

2. **Network Security**
   - Always use TLS encryption for client connections
   - Configure IP whitelisting when possible
   - Use firewall rules to restrict access
   - Consider VPN or private networks

3. **Access Control**
   - Change the default Kibana password immediately
   - Create read-only users for viewing logs
   - Implement role-based access control (RBAC)
   - Audit access logs regularly
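   For the read-only users, Elasticsearch's security API plus the built-in `viewer` role usually suffices. A sketch, with placeholder username and password:

   ```bash
   # Sketch: create a read-only user for browsing logs in Kibana
   curl -s -u elastic:changeme -X POST 'http://127.0.0.1:9200/_security/user/logviewer' \
       -H 'Content-Type: application/json' \
       -d '{"password": "a-strong-password", "roles": ["viewer"]}'
   ```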
4. **Data Protection**
   - Take regular backups of Elasticsearch indices
   - Encrypt data at rest (optional)
   - Monitor disk usage to prevent data loss
   - Implement log retention policies
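   Index backups are typically snapshots. A hedged sketch assuming a filesystem repository (the location must first be whitelisted via `path.repo` in the Elasticsearch config, which this template does not do out of the box):

   ```bash
   # Sketch: register a snapshot repository, then snapshot all indices
   curl -s -u elastic:changeme -X PUT 'http://127.0.0.1:9200/_snapshot/backups' \
       -H 'Content-Type: application/json' \
       -d '{"type": "fs", "settings": {"location": "/usr/share/elasticsearch/backup"}}'

   curl -s -u elastic:changeme -X PUT "http://127.0.0.1:9200/_snapshot/backups/snap-$(date +%Y%m%d)"
   ```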
## Maintenance

### Daily Tasks
- Monitor disk usage
- Check for failed log shipments
- Review error dashboards

### Weekly Tasks
- Verify all clients are reporting
- Check index health
- Review and optimize slow queries
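Two read-only calls cover most of the weekly index-health check:

```bash
# Cluster health at a glance
curl -s -u elastic:changeme 'http://127.0.0.1:9200/_cluster/health?pretty'
# Per-index health and size
curl -s -u elastic:changeme 'http://127.0.0.1:9200/_cat/indices?v&h=index,health,docs.count,store.size'
```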
### Monthly Tasks
- Update ELK stack components
- Archive old indices
- Review retention policies
- Performance tuning based on usage patterns
logserver/config/api-keys.yml
@@ -3,3 +3,4 @@
 # Generated by generate-api-key.sh
 
 api_keys:
+  video: a7798c63c2ac439b5ba20f3bf8bf27b5361231cdcbdc4fc9d7af715308fdf707
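The 64-hex-character keys match what `openssl rand` produces, so a key can be minted by hand if needed (a sketch of what generate-api-key.sh presumably does internally):

```bash
# Generate a 64-character hex API key like the one above
openssl rand -hex 32
```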
94
logserver/config/logstash.conf
Normal file
@@ -0,0 +1,94 @@
# Logstash Configuration for LogServer
# Handles Beats input with API key authentication

input {
  # Beats input for Filebeat clients
  beats {
    port => 5044
    ssl => false  # Set to true for production with proper certificates

    # API key authentication handled via filter below
  }

  # Optional: Syslog input for direct syslog shipping
  tcp {
    port => 514
    type => "syslog"
  }

  udp {
    port => 514
    type => "syslog"
  }
}

filter {
  # Note: API key validation would go here in production
  # For now, accepting all connections for simplicity
  # TODO: Implement proper API key validation

  # Parse Docker logs
  if [docker] {
    # Docker metadata is already parsed by Filebeat
    mutate {
      add_field => {
        "container_name" => "%{[docker][container][name]}"
        "container_id" => "%{[docker][container][id]}"
        "container_image" => "%{[docker][container][image]}"
      }
    }
  }

  # Parse syslog (the double space in "MMM  d" matches single-digit days)
  if [type] == "syslog" {
    grok {
      match => {
        "message" => "%{SYSLOGLINE}"
      }
    }
    date {
      match => [ "timestamp", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
  }

  # Parse JSON logs if they exist
  if [message] =~ /^\{.*\}$/ {
    json {
      source => "message"
      target => "json_message"
    }
  }

  # Add timestamp if not present
  if ![timestamp] {
    mutate {
      add_field => { "timestamp" => "%{@timestamp}" }
    }
  }

  # Clean up metadata
  mutate {
    remove_field => [ "@version", "beat", "offset", "prospector" ]
  }
}

output {
  # Send to Elasticsearch with authentication
  elasticsearch {
    hosts => ["elasticsearch:9200"]
    user => "elastic"
    password => "${ELASTIC_PASSWORD:changeme}"

    # Use different indices based on input type
    index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"

    # Manage index templates
    manage_template => true
    template_overwrite => true
  }

  # Optional: Debug output (comment out in production)
  # stdout {
  #   codec => rubydebug
  # }
}
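With the syslog inputs above, a one-line smoke test from any host confirms the pipeline end to end (replace the server address):

```bash
# Send a test syslog line at the TCP input on port 514
echo "<13>$(date '+%b %d %H:%M:%S') testhost myapp: hello from logserver" |
    nc -w1 logserver.example.com 514
```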
logserver/config/service.env
@@ -16,14 +16,18 @@ LS_PIPELINE_WORKERS=2
 
 # Kibana settings
 KIBANA_VERSION=7.17.23
-KIBANA_PASSWORD=changeme
-KIBANA_BASE_PATH=/
+
+# Authentication (IMPORTANT: Change this!)
+ELASTIC_PASSWORD=changeme   # Password for 'elastic' user in Kibana/Elasticsearch
 
 # Ports
 KIBANA_PORT=5601
 LOGSTASH_BEATS_PORT=5044
 LOGSTASH_SYSLOG_PORT=514
 
+# Server configuration
+SERVER_PUBLICBASEURL=http://localhost:5601   # Change to your server's actual URL
+
 # Log retention
 LOG_RETENTION_DAYS=30
 LOG_MAX_SIZE_GB=50
43
logserver/config/validate_api_key.rb
Normal file
@@ -0,0 +1,43 @@
# Ruby script for Logstash to validate API keys
# This is a simplified validation - in production, use proper authentication

require 'yaml'

def register(params)
  @api_keys_file = params["api_keys_file"]
end

def filter(event)
  # Get the API key from the event
  api_key = event.get("[api_key]") || event.get("[@metadata][api_key]")

  # If no API key, pass through (for backwards compatibility)
  # In production, you should reject events without valid keys
  if api_key.nil? || api_key.empty?
    # For now, allow events without API keys
    # event.cancel # Uncomment to require API keys
    return [event]
  end

  # Load API keys from file
  begin
    if File.exist?(@api_keys_file)
      config = YAML.load_file(@api_keys_file)
      valid_keys = config['api_keys'].values if config && config['api_keys']

      # Check if the provided key is valid
      if valid_keys && valid_keys.include?(api_key)
        # Valid key - let the event through
        event.set("[@metadata][authenticated]", true)
      else
        # Invalid key - drop the event
        event.cancel
      end
    end
  rescue => e
    # Log error but don't crash
    event.set("[@metadata][auth_error]", e.message)
  end

  return [event]
end
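Wiring this into the pipeline would take a `ruby { path => "..." script_params => { "api_keys_file" => "..." } }` block in the filter section; that wiring is not part of this commit, so treat it as a sketch. Either way, the config can be syntax-checked before a restart:

```bash
# Validate the pipeline config inside the running Logstash container
# (--path.data avoids clashing with the live instance's data directory)
docker compose exec logstash bin/logstash -t \
    -f /usr/share/logstash/config/logstash.conf --path.data /tmp/lstest
```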
81
logserver/docker-compose.yml
Normal file
@@ -0,0 +1,81 @@
version: '3.8'

services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:${ES_VERSION:-7.17.23}
    container_name: ${CONTAINER_NAME}_elasticsearch
    environment:
      - discovery.type=single-node
      - "ES_JAVA_OPTS=-Xms${ES_HEAP_SIZE:-2g} -Xmx${ES_HEAP_SIZE:-2g}"
      - xpack.security.enabled=true
      - xpack.security.authc.api_key.enabled=true
      - ELASTIC_PASSWORD=${ELASTIC_PASSWORD:-${KIBANA_PASSWORD:-changeme}}
      - xpack.monitoring.enabled=false
      - cluster.routing.allocation.disk.threshold_enabled=false
    volumes:
      - elasticsearch_data:/usr/share/elasticsearch/data
    ports:
      - "127.0.0.1:9200:9200"
    networks:
      - elk
    restart: unless-stopped
    ulimits:
      memlock:
        soft: -1
        hard: -1
      nofile:
        soft: 65536
        hard: 65536

  logstash:
    image: docker.elastic.co/logstash/logstash:${LS_VERSION:-7.17.23}
    container_name: ${CONTAINER_NAME}_logstash
    environment:
      - "LS_JAVA_OPTS=-Xms${LS_HEAP_SIZE:-1g} -Xmx${LS_HEAP_SIZE:-1g}"
      - "xpack.monitoring.enabled=false"
      - ELASTIC_PASSWORD=${ELASTIC_PASSWORD:-${KIBANA_PASSWORD:-changeme}}
    command: logstash -f /usr/share/logstash/config/logstash.conf
    volumes:
      - ${CONFIG_PATH}:/usr/share/logstash/config:ro
      - logstash_data:/usr/share/logstash/data
    ports:
      - "${LOGSTASH_BEATS_PORT:-5044}:5044"
      - "${LOGSTASH_SYSLOG_PORT:-514}:514/tcp"
      - "${LOGSTASH_SYSLOG_PORT:-514}:514/udp"
    networks:
      - elk
    depends_on:
      - elasticsearch
    restart: unless-stopped

  kibana:
    image: docker.elastic.co/kibana/kibana:${KIBANA_VERSION:-7.17.23}
    container_name: ${CONTAINER_NAME}_kibana
    environment:
      - ELASTICSEARCH_HOSTS=http://elasticsearch:9200
      - ELASTICSEARCH_USERNAME=elastic
      - ELASTICSEARCH_PASSWORD=${ELASTIC_PASSWORD:-${KIBANA_PASSWORD:-changeme}}
      - XPACK_SECURITY_ENABLED=true
      - NODE_OPTIONS=--openssl-legacy-provider
      - SERVER_PUBLICBASEURL=${SERVER_PUBLICBASEURL:-http://localhost:5601}
    volumes:
      - kibana_data:/usr/share/kibana/data
    ports:
      - "${KIBANA_PORT:-5601}:5601"
    networks:
      - elk
    depends_on:
      - elasticsearch
    restart: unless-stopped

networks:
  elk:
    driver: bridge

volumes:
  elasticsearch_data:
    name: ${CONTAINER_NAME}_elasticsearch_data
  logstash_data:
    name: ${CONTAINER_NAME}_logstash_data
  kibana_data:
    name: ${CONTAINER_NAME}_kibana_data
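A quick smoke test once the stack is up (run from the service directory; the password comes from `service.env`):

```bash
# All three containers should be running
docker compose ps
# Elasticsearch answers on the loopback-only port
curl -s -u elastic:changeme 'http://127.0.0.1:9200/_cluster/health?pretty' | grep '"status"'
```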
logserver/generate-api-key.sh
@@ -3,7 +3,45 @@
 # Interactive API Key Generation Script for LogServer
 # This script generates secure API keys and adds them to api-keys.yml
 
-API_KEYS_FILE="${CONFIG_PATH:-./config}/api-keys.yml"
+# Determine where to put the api-keys.yml file
+determine_api_keys_location() {
+    # 1. If api-keys.yml already exists in current folder, use it
+    if [ -f "./api-keys.yml" ]; then
+        echo "./api-keys.yml"
+        return 0
+    fi
+
+    # 2. If service.env exists in current folder, put keys here
+    if [ -f "./service.env" ]; then
+        echo "./api-keys.yml"
+        return 0
+    fi
+
+    # 3. If config folder exists, put keys there
+    if [ -d "./config" ]; then
+        echo "./config/api-keys.yml"
+        return 0
+    fi
+
+    # No valid location found
+    return 1
+}
+
+# Try to determine location
+if API_KEYS_FILE=$(determine_api_keys_location); then
+    : # Location found, continue
+else
+    echo -e "${RED}Error: Cannot determine where to place api-keys.yml${NC}"
+    echo ""
+    echo "This script must be run from one of these locations:"
+    echo "  1. A deployed service directory (contains service.env)"
+    echo "  2. The logserver template directory (contains config/ folder)"
+    echo "  3. A directory with existing api-keys.yml file"
+    echo ""
+    echo "Current directory: $(pwd)"
+    echo "Contents: $(ls -la 2>/dev/null | head -5)"
+    exit 1
+fi
+
 # Colors for output
 RED='\033[0;31m'
@@ -19,12 +57,21 @@ generate_key() {
 # Initialize api-keys.yml if it doesn't exist
 init_api_keys_file() {
     if [ ! -f "$API_KEYS_FILE" ]; then
+        # Create directory if needed
+        local dir=$(dirname "$API_KEYS_FILE")
+        if [ ! -d "$dir" ]; then
+            mkdir -p "$dir"
+            echo -e "${GREEN}Created directory: $dir${NC}"
+        fi
+
         echo "# API Keys for LogServer Authentication" > "$API_KEYS_FILE"
         echo "# Format: hostname:api_key" >> "$API_KEYS_FILE"
         echo "# Generated by generate-api-key.sh" >> "$API_KEYS_FILE"
         echo "" >> "$API_KEYS_FILE"
         echo "api_keys:" >> "$API_KEYS_FILE"
-        echo -e "${GREEN}Created new api-keys.yml file${NC}"
+        echo -e "${GREEN}Created new api-keys.yml file at: $API_KEYS_FILE${NC}"
+    else
+        echo -e "${GREEN}Using existing api-keys.yml at: $API_KEYS_FILE${NC}"
     fi
 }
@@ -112,5 +159,14 @@ echo ""
 echo "To view all keys: cat $API_KEYS_FILE"
 echo "To revoke a key: Edit $API_KEYS_FILE and remove the line"
 echo ""
-echo -e "${YELLOW}Remember to restart logserver after adding keys:${NC}"
-echo "  dropshell restart logserver"
+# Show location-specific restart instructions
+if [[ "$API_KEYS_FILE" == "./api-keys.yml" ]] && [ -f "./service.env" ]; then
+    # We're in a deployed service directory
+    echo -e "${YELLOW}Remember to restart the service to apply changes:${NC}"
+    echo "  dropshell restart logserver"
+else
+    # We're in the template directory
+    echo -e "${YELLOW}Note: Deploy this template to use these keys:${NC}"
+    echo "  dropshell install logserver"
+fi
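Putting the location logic together, a typical key-generation session from a deployed service directory (the path is illustrative):

```bash
cd ~/.dropshell/services/logserver   # any directory containing service.env works
./generate-api-key.sh                # prompts for a hostname, appends to ./api-keys.yml
dropshell restart logserver          # pick up the new key
```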
logserver/install.sh
@@ -7,7 +7,7 @@ _check_required_env_vars "CONTAINER_NAME" "ES_VERSION" "LS_VERSION" "KIBANA_VERS
 
 # Check Docker and Docker Compose are available
 _check_docker_installed || _die "Docker test failed"
-which docker-compose >/dev/null 2>&1 || _die "docker-compose is not installed"
+docker compose version >/dev/null 2>&1 || _die "Docker Compose is not installed (requires Docker Compose V2)"
 
 # Check vm.max_map_count for Elasticsearch
 current_max_map_count=$(sysctl -n vm.max_map_count 2>/dev/null || echo 0)
@@ -23,7 +23,7 @@ fi
 bash ./stop.sh || true
 
 # Remove old containers
-docker-compose down --remove-orphans 2>/dev/null || true
+docker compose down --remove-orphans 2>/dev/null || true
 
 # Pull the Docker images
 echo "Pulling ELK stack images..."
@@ -31,17 +31,30 @@ docker pull docker.elastic.co/elasticsearch/elasticsearch:${ES_VERSION} || _die
 docker pull docker.elastic.co/logstash/logstash:${LS_VERSION} || _die "Failed to pull Logstash"
 docker pull docker.elastic.co/kibana/kibana:${KIBANA_VERSION} || _die "Failed to pull Kibana"
 
+# Ensure config directory exists
+mkdir -p "${CONFIG_PATH}"
+
 # Initialize API keys file if it doesn't exist
 if [ ! -f "${CONFIG_PATH}/api-keys.yml" ]; then
     echo "No API keys configured yet."
     echo "Run ./generate-api-key.sh to add client keys"
-    mkdir -p "${CONFIG_PATH}"
     echo "api_keys:" > "${CONFIG_PATH}/api-keys.yml"
 fi
 
+# Copy Logstash configuration if it doesn't exist
+if [ ! -f "${CONFIG_PATH}/logstash.conf" ]; then
+    SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
+    if [ -f "$SCRIPT_DIR/config/logstash.conf" ]; then
+        cp "$SCRIPT_DIR/config/logstash.conf" "${CONFIG_PATH}/logstash.conf"
+        echo "Copied Logstash configuration to ${CONFIG_PATH}"
+    else
+        echo "WARNING: logstash.conf not found in template"
+    fi
+fi
+
 # Start the ELK stack
 echo "Starting ELK stack..."
-docker-compose up -d --build || _die "Failed to start ELK stack"
+docker compose up -d --build || _die "Failed to start ELK stack"
 
 # Wait for services to be ready
 echo "Waiting for services to start..."
@@ -52,9 +65,15 @@ bash ./status.sh || _die "Services failed to start properly"
 
 echo "Installation of ${CONTAINER_NAME} complete"
 echo ""
-echo "Kibana UI: http://$(hostname -I | awk '{print $1}'):${KIBANA_PORT}"
+echo "========================================="
+echo "Kibana UI: ${SERVER_PUBLICBASEURL:-http://$(hostname -I | awk '{print $1}'):${KIBANA_PORT}}"
 echo "Username: elastic"
-echo "Password: ${KIBANA_PASSWORD}"
+echo "Password: ${ELASTIC_PASSWORD:-changeme}"
+echo "========================================="
+echo ""
+echo "IMPORTANT: Update service.env with:"
+echo "  - Your actual server IP/domain in SERVER_PUBLICBASEURL"
+echo "  - A secure password in ELASTIC_PASSWORD"
 echo ""
 echo "Logstash listening on port ${LOGSTASH_BEATS_PORT} for Filebeat clients"
 echo ""
logserver/start.sh
@@ -3,14 +3,14 @@ source "${AGENT_PATH}/common.sh"
 _check_required_env_vars "CONTAINER_NAME"
 
 echo "Starting ELK stack..."
-docker-compose up -d || _die "Failed to start ELK stack"
+docker compose up -d || _die "Failed to start ELK stack"
 
 # Wait for services to be ready
 echo "Waiting for services to start..."
 sleep 5
 
 # Check if services are running
-if docker-compose ps | grep -q "Up"; then
+if docker compose ps | grep -q "Up"; then
     echo "ELK stack started successfully"
 else
     _die "Failed to start ELK stack services"
logserver/status.sh
@@ -2,16 +2,16 @@
 source "${AGENT_PATH}/common.sh"
 _check_required_env_vars "CONTAINER_NAME"
 
-# Check if docker-compose services exist and are running
-if ! docker-compose ps 2>/dev/null | grep -q "${CONTAINER_NAME}"; then
+# Check if docker compose services exist and are running
+if ! docker compose ps 2>/dev/null | grep -q "${CONTAINER_NAME}"; then
     echo "Unknown"
     exit 0
 fi
 
 # Check individual service status
-elasticsearch_status=$(docker-compose ps elasticsearch 2>/dev/null | grep -c "Up")
-logstash_status=$(docker-compose ps logstash 2>/dev/null | grep -c "Up")
-kibana_status=$(docker-compose ps kibana 2>/dev/null | grep -c "Up")
+elasticsearch_status=$(docker compose ps elasticsearch 2>/dev/null | grep -c "Up")
+logstash_status=$(docker compose ps logstash 2>/dev/null | grep -c "Up")
+kibana_status=$(docker compose ps kibana 2>/dev/null | grep -c "Up")
 
 if [ "$elasticsearch_status" -eq 1 ] && [ "$logstash_status" -eq 1 ] && [ "$kibana_status" -eq 1 ]; then
     echo "Running"
logserver/stop.sh
@@ -3,6 +3,6 @@ source "${AGENT_PATH}/common.sh"
 _check_required_env_vars "CONTAINER_NAME"
 
 echo "Stopping ELK stack..."
-docker-compose stop || true
+docker compose stop || true
 
 echo "ELK stack stopped"
logserver/uninstall.sh
@@ -6,7 +6,7 @@ _check_required_env_vars "CONTAINER_NAME"
 bash ./stop.sh || _die "Failed to stop containers"
 
 # Remove the containers
-docker-compose down --remove-orphans || _die "Failed to remove containers"
+docker compose down --remove-orphans || _die "Failed to remove containers"
 
 # CRITICAL: Never remove data volumes in uninstall.sh!
 # Data volumes must be preserved for potential reinstallation