docs: Add 6 files and update 11

Your Name
2025-09-20 10:04:42 +12:00
parent 9045ee5def
commit 70585358b8
17 changed files with 1147 additions and 659 deletions

logserver/DOCUMENTATION.md Normal file
View File

@@ -0,0 +1,279 @@
# Dropshell LogServer Template
A comprehensive centralized logging solution using the ELK Stack (Elasticsearch, Logstash, Kibana) for receiving, processing, and visualizing logs from multiple hosts.
## Overview
This template deploys a full-featured ELK stack that:
- Receives logs from multiple sources via Beats protocol
- Stores and indexes logs in Elasticsearch
- Provides powerful search and visualization through Kibana
- Supports automatic log parsing and enrichment
- Handles Docker container logs and system logs from clients
## Architecture
### Components
1. **Elasticsearch** (7.17.x)
- Distributed search and analytics engine
- Stores and indexes all log data
- Provides fast full-text search capabilities
- Single-node configuration for simplicity (can be scaled)
2. **Logstash** (7.17.x)
- Log processing pipeline
- Receives logs from Filebeat clients
- Parses and enriches log data
- Routes logs to appropriate Elasticsearch indices
3. **Kibana** (7.17.x)
- Web UI for log exploration and visualization
- Create dashboards and alerts
- Real-time log streaming
- Advanced search queries
## Features
### Minimal Configuration Design
- Auto-discovery of log formats
- Pre-configured dashboards for common services
- Automatic index lifecycle management
- Built-in parsing for Docker and syslog formats
- Zero-configuration client connectivity
### Log Processing
- Automatic timestamp extraction
- Docker metadata enrichment (container name, image, labels)
- Syslog parsing with severity levels
- JSON log support
- Multi-line log handling (stacktraces, etc.)
- Grok pattern matching for common formats
### Security & Performance
- **Mutual TLS (mTLS)** authentication for client connections
- **API key authentication** as an alternative to certificates
- **Per-client authentication** with unique keys/certificates
- **SSL/TLS encryption** for all client connections
- **Basic authentication** for Kibana web access
- **IP whitelisting** for additional security
- Index lifecycle management for storage optimization (example after this list)
- Automatic old log cleanup
- Resource limits to prevent overconsumption
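The index lifecycle management mentioned above is driven by standard Elasticsearch ILM policies. As a minimal sketch (the policy name, retention, and credentials here are illustrative, not shipped with the template), a 30-day delete policy can be registered via the REST API:
```bash
# Hypothetical ILM policy: delete indices older than 30 days.
# Run on the server itself, since port 9200 is bound to localhost.
curl -u elastic:changeme -X PUT "http://127.0.0.1:9200/_ilm/policy/logs-30d" \
  -H 'Content-Type: application/json' \
  -d '{
    "policy": {
      "phases": {
        "delete": { "min_age": "30d", "actions": { "delete": {} } }
      }
    }
  }'
```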
## Port Configuration
- **5601**: Kibana Web UI (HTTP/HTTPS with authentication)
- **9200**: Elasticsearch REST API (HTTP) - internal only
- **5044**: Logstash Beats input (TCP/TLS) - authenticated client connections
- **514**: Syslog input (UDP/TCP) - optional, unauthenticated
- **24224**: Fluentd forward input - optional Docker logging driver
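Of the ports above, only 5601 and 5044 normally need to be reachable from other hosts. A host firewall setup might look like the following sketch (ufw is assumed; the client subnet is a placeholder):
```bash
sudo ufw allow 5601/tcp                                      # Kibana web UI
sudo ufw allow from 10.0.0.0/24 to any port 5044 proto tcp   # Filebeat clients
# Port 9200 needs no rule: docker-compose binds it to 127.0.0.1 only.
```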
## Storage Requirements
- **Minimum**: 10GB for basic operation
- **Recommended**: 50GB+ depending on log volume
- **Log Retention**: Default 30 days (configurable)
## Client Authentication
### Authentication Methods
1. **Mutual TLS (mTLS) - Recommended**
- Each client gets a unique certificate signed by the server's CA
- Strongest security with mutual authentication
- Automatic certificate validation
2. **API Keys**
- Each client gets a unique API key
- Simpler to manage than certificates
- Good for environments where certificate management is difficult
3. **Basic Auth (Not Recommended)**
- Shared username/password
- Least secure, only for testing
### Client Configuration
Clients using the `logclient` template will:
1. Authenticate using provided credentials (cert/key or API key)
2. Establish an encrypted TLS connection (see the verification sketch after this list)
3. Ship all Docker container logs
4. Ship system logs (syslog, auth, kernel)
5. Maintain connection with automatic reconnection
6. Buffer logs locally during network outages
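To verify from a client host that this connection can be established (the hostname below is a placeholder):
```bash
# Check the Beats port is reachable
nc -zv logserver.example.com 5044
# If TLS is enabled, inspect the certificate the server presents
openssl s_client -connect logserver.example.com:5044 </dev/null 2>/dev/null \
  | openssl x509 -noout -subject -dates
```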
## Dashboard Features
### Pre-configured Dashboards
- **System Overview**: Overall health and log volume metrics
- **Docker Containers**: Container-specific logs and metrics
- **Error Analysis**: Aggregated error logs from all sources
- **Security Events**: Authentication and access logs
- **Application Logs**: Parsed application-specific logs
### Search Capabilities
- Full-text search across all logs
- Filter by time range, host, container, severity
- Save and share search queries
- Export search results
## Resource Requirements
### Minimum
- CPU: 2 cores
- RAM: 4GB
- Storage: 10GB
### Recommended
- CPU: 4+ cores
- RAM: 8GB+
- Storage: 50GB+ SSD
## Configuration Options
### Environment Variables (service.env)
```bash
# Elasticsearch settings
ES_HEAP_SIZE=2g
ES_MAX_MAP_COUNT=262144
# Logstash settings
LS_HEAP_SIZE=1g
LS_PIPELINE_WORKERS=2
# Kibana settings
KIBANA_PASSWORD=changeme
KIBANA_BASE_PATH=/
# Log retention
LOG_RETENTION_DAYS=30
LOG_MAX_SIZE_GB=50
# Authentication Mode
AUTH_MODE=mtls # Options: mtls, apikey, basic
ENABLE_TLS=true
# mTLS Settings (if AUTH_MODE=mtls)
CA_CERT_PATH=/certs/ca.crt
SERVER_CERT_PATH=/certs/server.crt
SERVER_KEY_PATH=/certs/server.key
CLIENT_CERT_REQUIRED=true
# API Key Settings (if AUTH_MODE=apikey)
API_KEYS_PATH=/config/api-keys.yml
# Network Security
ALLOWED_IPS="" # Comma-separated list, empty = all
```
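Several of these values are secrets. One way to generate strong random values before editing `service.env` (a sketch, not part of the template):
```bash
openssl rand -hex 16   # e.g. for KIBANA_PASSWORD
openssl rand -hex 32   # a client API key in the 64-hex-character format used by api-keys.yml
```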
## Usage
### Installation
```bash
dropshell install logserver
```
### Generate Client Credentials
#### For mTLS Authentication:
```bash
# Generate client certificate for a new host
dropshell exec logserver /scripts/generate-client-cert.sh hostname
# This creates hostname.crt and hostname.key files
```
#### For API Key Authentication:
```bash
# Generate API key for a new client
dropshell exec logserver /scripts/generate-api-key.sh hostname
# Returns an API key to configure in the client
```
### Access Kibana
Navigate to `http://<server-ip>:5601` in your browser (or `https://` if you have put TLS in front of Kibana).
Default credentials:
- Username: `elastic`
- Password: `changeme` (change in service.env)
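Credentials can also be verified directly against Elasticsearch from the server itself (port 9200 is bound to localhost):
```bash
# Returns cluster health JSON if the password is accepted
curl -u elastic:changeme "http://127.0.0.1:9200/_cluster/health?pretty"
```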
### View Logs
```bash
dropshell logs logserver
```
### Backup
```bash
dropshell backup logserver
```
## Troubleshooting
### Common Issues
1. **Elasticsearch failing to start**
- Check vm.max_map_count: `sysctl vm.max_map_count` (should be 262144+); see the sketch after this list
- Verify sufficient memory available
2. **No logs appearing in Kibana**
- Check Logstash is receiving data: port 5044 should be open
- Verify client connectivity
- Check index patterns in Kibana
3. **High memory usage**
- Adjust heap sizes in service.env
- Configure index lifecycle management
- Reduce retention period
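For item 1, the kernel setting can be applied immediately and persisted across reboots; for item 2, the listening port can be checked directly (a sketch):
```bash
# Apply now, then persist
sudo sysctl -w vm.max_map_count=262144
echo 'vm.max_map_count=262144' | sudo tee /etc/sysctl.d/99-elasticsearch.conf

# Confirm Logstash is listening for Beats clients
ss -tlnp | grep 5044
```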
## Integration
This template is designed to work seamlessly with the `logclient` template. Simply:
1. Deploy this logserver
2. Deploy logclient on each host you want to monitor
3. Configure logclient with the logserver address
4. Logs will automatically start flowing
## Security Considerations
1. **Authentication Setup**
- Use mTLS for production environments
- Generate unique credentials for each client
- Rotate certificates/keys regularly
- Store credentials securely
2. **Network Security**
- Always use TLS encryption for client connections
- Configure IP whitelisting when possible
- Use firewall rules to restrict access
- Consider VPN or private networks
3. **Access Control**
- Change default Kibana password immediately
- Create read-only users for viewing logs
- Implement role-based access control (RBAC)
- Audit access logs regularly
4. **Data Protection**
- Regular backups of Elasticsearch indices (snapshot sketch after this list)
- Encrypt data at rest (optional)
- Monitor disk usage to prevent data loss
- Implement log retention policies
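For the index backups above, Elasticsearch's snapshot API is the standard mechanism. A minimal sketch, assuming a filesystem repository at `/backups` that has been mounted into the container and listed in `path.repo`:
```bash
# Register a filesystem snapshot repository (names/paths are illustrative)
curl -u elastic:changeme -X PUT "http://127.0.0.1:9200/_snapshot/local_backup" \
  -H 'Content-Type: application/json' \
  -d '{ "type": "fs", "settings": { "location": "/backups" } }'

# Snapshot all indices
curl -u elastic:changeme -X PUT \
  "http://127.0.0.1:9200/_snapshot/local_backup/snap-$(date +%Y%m%d)?wait_for_completion=true"
```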
## Maintenance
### Daily Tasks
- Monitor disk usage
- Check for failed log shipments
- Review error dashboards
### Weekly Tasks
- Verify all clients are reporting
- Check index health (command sketch below)
- Review and optimize slow queries
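The index-health check can be scripted rather than done in Kibana (a sketch):
```bash
curl -u elastic:changeme "http://127.0.0.1:9200/_cat/indices?v&s=index"   # per-index health and size
curl -u elastic:changeme "http://127.0.0.1:9200/_cat/health?v"            # overall cluster status
```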
### Monthly Tasks
- Update ELK stack components
- Archive old indices
- Review retention policies
- Performance tuning based on usage patterns

View File

@@ -1,279 +1,43 @@
# LogServer
Centralized logging with ELK Stack (Elasticsearch, Logstash, Kibana).
## Quick Start
1. **System Setup**
```bash
sudo sysctl -w vm.max_map_count=262144
```
2. **Configure**
Edit `config/service.env`:
- Set `SERVER_PUBLICBASEURL` to your actual server URL
- Change `ELASTIC_PASSWORD` from default
3. **Install**
```bash
dropshell install logserver
```
4. **Generate Client Keys**
```bash
./generate-api-key.sh
# Enter hostname when prompted
# Copy the generated config to clients
```
5. **Access Kibana**
- URL: `http://<server-ip>:5601`
- User: `elastic`
- Password: Set in `service.env` (ELASTIC_PASSWORD)
## Ports
- `5601` - Kibana Web UI
- `5044` - Log ingestion (Filebeat)
## Files
- `config/service.env` - Configuration
- `config/api-keys.yml` - Client API keys
- `generate-api-key.sh` - Add new clients
See [DOCUMENTATION.md](DOCUMENTATION.md) for full details.

View File

@@ -3,3 +3,4 @@
# Generated by generate-api-key.sh
api_keys:
video: a7798c63c2ac439b5ba20f3bf8bf27b5361231cdcbdc4fc9d7af715308fdf707

View File

@@ -0,0 +1,94 @@
# Logstash Configuration for LogServer
# Handles Beats input with API key authentication
input {
  # Beats input for Filebeat clients
  beats {
    port => 5044
    ssl => false  # Set to true for production with proper certificates
    # API key authentication handled via filter below
  }

  # Optional: Syslog input for direct syslog shipping
  tcp {
    port => 514
    type => "syslog"
  }
  udp {
    port => 514
    type => "syslog"
  }
}

filter {
  # Note: API key validation would go here in production
  # For now, accepting all connections for simplicity
  # TODO: Implement proper API key validation

  # Parse Docker logs
  if [docker] {
    # Docker metadata is already parsed by Filebeat
    mutate {
      add_field => {
        "container_name" => "%{[docker][container][name]}"
        "container_id" => "%{[docker][container][id]}"
        "container_image" => "%{[docker][container][image]}"
      }
    }
  }

  # Parse syslog (note the double space in "MMM  d" to match single-digit days)
  if [type] == "syslog" {
    grok {
      match => {
        "message" => "%{SYSLOGLINE}"
      }
    }
    date {
      match => [ "timestamp", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
  }

  # Parse JSON logs if they exist
  if [message] =~ /^\{.*\}$/ {
    json {
      source => "message"
      target => "json_message"
    }
  }

  # Add timestamp if not present
  if ![timestamp] {
    mutate {
      add_field => { "timestamp" => "%{@timestamp}" }
    }
  }

  # Clean up metadata
  mutate {
    remove_field => [ "@version", "beat", "offset", "prospector" ]
  }
}

output {
  # Send to Elasticsearch with authentication
  elasticsearch {
    hosts => ["elasticsearch:9200"]
    user => "elastic"
    password => "${ELASTIC_PASSWORD:changeme}"
    # Use different indices based on input type
    index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
    # Manage index templates
    manage_template => true
    template_overwrite => true
  }

  # Optional: Debug output (uncomment for troubleshooting; leave disabled in production)
  # stdout {
  #   codec => rubydebug
  # }
}
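# Smoke test (illustrative; assumes this server is reachable as logserver.example.com):
#   logger -n logserver.example.com -P 514 -d "logserver pipeline smoke test"
#   echo "<134>$(date '+%b %d %H:%M:%S') testhost myapp: hello from nc" | nc logserver.example.com 514
# The syslog inputs above are unauthenticated, so this needs no API key.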

View File

@@ -16,14 +16,18 @@ LS_PIPELINE_WORKERS=2
# Kibana settings
KIBANA_VERSION=7.17.23
KIBANA_PASSWORD=changeme
KIBANA_BASE_PATH=/
# Authentication (IMPORTANT: Change this!)
ELASTIC_PASSWORD=changeme # Password for 'elastic' user in Kibana/Elasticsearch
# Ports
KIBANA_PORT=5601
LOGSTASH_BEATS_PORT=5044
LOGSTASH_SYSLOG_PORT=514
# Server configuration
SERVER_PUBLICBASEURL=http://localhost:5601 # Change to your server's actual URL
# Log retention
LOG_RETENTION_DAYS=30
LOG_MAX_SIZE_GB=50

View File

@@ -0,0 +1,43 @@
# Ruby script for Logstash to validate API keys
# This is a simplified validation - in production, use proper authentication
require 'yaml'

def register(params)
  @api_keys_file = params["api_keys_file"]
end

def filter(event)
  # Get the API key from the event
  api_key = event.get("[api_key]") || event.get("[@metadata][api_key]")

  # If no API key, pass through (for backwards compatibility)
  # In production, you should reject events without valid keys
  if api_key.nil? || api_key.empty?
    # For now, allow events without API keys
    # event.cancel # Uncomment to require API keys
    return [event]
  end

  # Load API keys from file
  begin
    if File.exist?(@api_keys_file)
      config = YAML.load_file(@api_keys_file)
      valid_keys = config['api_keys'].values if config && config['api_keys']

      # Check if the provided key is valid
      if valid_keys && valid_keys.include?(api_key)
        # Valid key - let the event through
        event.set("[@metadata][authenticated]", true)
      else
        # Invalid key - drop the event
        event.cancel
      end
    end
  rescue => e
    # Log error but don't crash
    event.set("[@metadata][auth_error]", e.message)
  end

  return [event]
end

View File

@@ -0,0 +1,81 @@
version: '3.8'

services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:${ES_VERSION:-7.17.23}
    container_name: ${CONTAINER_NAME}_elasticsearch
    environment:
      - discovery.type=single-node
      - "ES_JAVA_OPTS=-Xms${ES_HEAP_SIZE:-2g} -Xmx${ES_HEAP_SIZE:-2g}"
      - xpack.security.enabled=true
      - xpack.security.authc.api_key.enabled=true
      - ELASTIC_PASSWORD=${ELASTIC_PASSWORD:-${KIBANA_PASSWORD:-changeme}}
      - xpack.monitoring.enabled=false
      - cluster.routing.allocation.disk.threshold_enabled=false
    volumes:
      - elasticsearch_data:/usr/share/elasticsearch/data
    ports:
      - "127.0.0.1:9200:9200"
    networks:
      - elk
    restart: unless-stopped
    ulimits:
      memlock:
        soft: -1
        hard: -1
      nofile:
        soft: 65536
        hard: 65536

  logstash:
    image: docker.elastic.co/logstash/logstash:${LS_VERSION:-7.17.23}
    container_name: ${CONTAINER_NAME}_logstash
    environment:
      - "LS_JAVA_OPTS=-Xms${LS_HEAP_SIZE:-1g} -Xmx${LS_HEAP_SIZE:-1g}"
      - "xpack.monitoring.enabled=false"
      - ELASTIC_PASSWORD=${ELASTIC_PASSWORD:-${KIBANA_PASSWORD:-changeme}}
    command: logstash -f /usr/share/logstash/config/logstash.conf
    volumes:
      - ${CONFIG_PATH}:/usr/share/logstash/config:ro
      - logstash_data:/usr/share/logstash/data
    ports:
      - "${LOGSTASH_BEATS_PORT:-5044}:5044"
      - "${LOGSTASH_SYSLOG_PORT:-514}:514/tcp"
      - "${LOGSTASH_SYSLOG_PORT:-514}:514/udp"
    networks:
      - elk
    depends_on:
      - elasticsearch
    restart: unless-stopped

  kibana:
    image: docker.elastic.co/kibana/kibana:${KIBANA_VERSION:-7.17.23}
    container_name: ${CONTAINER_NAME}_kibana
    environment:
      - ELASTICSEARCH_HOSTS=http://elasticsearch:9200
      - ELASTICSEARCH_USERNAME=elastic
      - ELASTICSEARCH_PASSWORD=${ELASTIC_PASSWORD:-${KIBANA_PASSWORD:-changeme}}
      - XPACK_SECURITY_ENABLED=true
      - NODE_OPTIONS=--openssl-legacy-provider
      - SERVER_PUBLICBASEURL=${SERVER_PUBLICBASEURL:-http://localhost:5601}
    volumes:
      - kibana_data:/usr/share/kibana/data
    ports:
      - "${KIBANA_PORT:-5601}:5601"
    networks:
      - elk
    depends_on:
      - elasticsearch
    restart: unless-stopped

networks:
  elk:
    driver: bridge

volumes:
  elasticsearch_data:
    name: ${CONTAINER_NAME}_elasticsearch_data
  logstash_data:
    name: ${CONTAINER_NAME}_logstash_data
  kibana_data:
    name: ${CONTAINER_NAME}_kibana_data

View File

@@ -3,7 +3,45 @@
# Interactive API Key Generation Script for LogServer
# This script generates secure API keys and adds them to api-keys.yml
API_KEYS_FILE="${CONFIG_PATH:-./config}/api-keys.yml"
# Determine where to put the api-keys.yml file
determine_api_keys_location() {
    # 1. If api-keys.yml already exists in current folder, use it
    if [ -f "./api-keys.yml" ]; then
        echo "./api-keys.yml"
        return 0
    fi
    # 2. If service.env exists in current folder, put keys here
    if [ -f "./service.env" ]; then
        echo "./api-keys.yml"
        return 0
    fi
    # 3. If config folder exists, put keys there
    if [ -d "./config" ]; then
        echo "./config/api-keys.yml"
        return 0
    fi
    # No valid location found
    return 1
}

# Try to determine location
# (Note: ${RED}/${NC} are defined further down in this script, so the error
# below prints uncolored unless the color definitions are moved above this block.)
if API_KEYS_FILE=$(determine_api_keys_location); then
    : # Location found, continue
else
    echo -e "${RED}Error: Cannot determine where to place api-keys.yml${NC}"
    echo ""
    echo "This script must be run from one of these locations:"
    echo "  1. A deployed service directory (contains service.env)"
    echo "  2. The logserver template directory (contains config/ folder)"
    echo "  3. A directory with existing api-keys.yml file"
    echo ""
    echo "Current directory: $(pwd)"
    echo "Contents: $(ls -la 2>/dev/null | head -5)"
    exit 1
fi
# Colors for output
RED='\033[0;31m'
@@ -19,12 +57,21 @@ generate_key() {
# Initialize api-keys.yml if it doesn't exist
init_api_keys_file() {
    if [ ! -f "$API_KEYS_FILE" ]; then
        # Create directory if needed
        local dir=$(dirname "$API_KEYS_FILE")
        if [ ! -d "$dir" ]; then
            mkdir -p "$dir"
            echo -e "${GREEN}Created directory: $dir${NC}"
        fi
        echo "# API Keys for LogServer Authentication" > "$API_KEYS_FILE"
        echo "# Format: hostname:api_key" >> "$API_KEYS_FILE"
        echo "# Generated by generate-api-key.sh" >> "$API_KEYS_FILE"
        echo "" >> "$API_KEYS_FILE"
        echo "api_keys:" >> "$API_KEYS_FILE"
        echo -e "${GREEN}Created new api-keys.yml file${NC}"
        echo -e "${GREEN}Created new api-keys.yml file at: $API_KEYS_FILE${NC}"
    else
        echo -e "${GREEN}Using existing api-keys.yml at: $API_KEYS_FILE${NC}"
    fi
}
@@ -112,5 +159,14 @@ echo ""
echo "To view all keys: cat $API_KEYS_FILE"
echo "To revoke a key: Edit $API_KEYS_FILE and remove the line"
echo ""
echo -e "${YELLOW}Remember to restart logserver after adding keys:${NC}"
echo " dropshell restart logserver"
# Show location-specific restart instructions
if [[ "$API_KEYS_FILE" == "./api-keys.yml" ]] && [ -f "./service.env" ]; then
    # We're in a deployed service directory
    echo -e "${YELLOW}Remember to restart the service to apply changes:${NC}"
    echo "  dropshell restart logserver"
else
    # We're in the template directory
    echo -e "${YELLOW}Note: Deploy this template to use these keys:${NC}"
    echo "  dropshell install logserver"
fi

View File

@@ -7,7 +7,7 @@ _check_required_env_vars "CONTAINER_NAME" "ES_VERSION" "LS_VERSION" "KIBANA_VERS
# Check Docker and Docker Compose are available
_check_docker_installed || _die "Docker test failed"
which docker-compose >/dev/null 2>&1 || _die "docker-compose is not installed"
docker compose version >/dev/null 2>&1 || _die "Docker Compose is not installed (requires Docker Compose V2)"
# Check vm.max_map_count for Elasticsearch
current_max_map_count=$(sysctl -n vm.max_map_count 2>/dev/null || echo 0)
@@ -23,7 +23,7 @@ fi
bash ./stop.sh || true
# Remove old containers
docker-compose down --remove-orphans 2>/dev/null || true
docker compose down --remove-orphans 2>/dev/null || true
# Pull the Docker images
echo "Pulling ELK stack images..."
@@ -31,17 +31,30 @@ docker pull docker.elastic.co/elasticsearch/elasticsearch:${ES_VERSION} || _die
docker pull docker.elastic.co/logstash/logstash:${LS_VERSION} || _die "Failed to pull Logstash"
docker pull docker.elastic.co/kibana/kibana:${KIBANA_VERSION} || _die "Failed to pull Kibana"
# Ensure config directory exists
mkdir -p "${CONFIG_PATH}"
# Initialize API keys file if it doesn't exist
if [ ! -f "${CONFIG_PATH}/api-keys.yml" ]; then
echo "No API keys configured yet."
echo "Run ./generate-api-key.sh to add client keys"
mkdir -p "${CONFIG_PATH}"
echo "api_keys:" > "${CONFIG_PATH}/api-keys.yml"
fi
# Copy Logstash configuration if it doesn't exist
if [ ! -f "${CONFIG_PATH}/logstash.conf" ]; then
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
if [ -f "$SCRIPT_DIR/config/logstash.conf" ]; then
cp "$SCRIPT_DIR/config/logstash.conf" "${CONFIG_PATH}/logstash.conf"
echo "Copied Logstash configuration to ${CONFIG_PATH}"
else
echo "WARNING: logstash.conf not found in template"
fi
fi
# Start the ELK stack
echo "Starting ELK stack..."
docker-compose up -d --build || _die "Failed to start ELK stack"
docker compose up -d --build || _die "Failed to start ELK stack"
# Wait for services to be ready
echo "Waiting for services to start..."
@@ -52,9 +65,15 @@ bash ./status.sh || _die "Services failed to start properly"
echo "Installation of ${CONTAINER_NAME} complete"
echo ""
echo "Kibana UI: http://$(hostname -I | awk '{print $1}'):${KIBANA_PORT}"
echo "========================================="
echo "Kibana UI: ${SERVER_PUBLICBASEURL:-http://$(hostname -I | awk '{print $1}'):${KIBANA_PORT}}"
echo "Username: elastic"
echo "Password: ${KIBANA_PASSWORD}"
echo "Password: ${ELASTIC_PASSWORD:-changeme}"
echo "========================================="
echo ""
echo "IMPORTANT: Update service.env with:"
echo " - Your actual server IP/domain in SERVER_PUBLICBASEURL"
echo " - A secure password in ELASTIC_PASSWORD"
echo ""
echo "Logstash listening on port ${LOGSTASH_BEATS_PORT} for Filebeat clients"
echo ""

View File

@@ -3,14 +3,14 @@ source "${AGENT_PATH}/common.sh"
_check_required_env_vars "CONTAINER_NAME"
echo "Starting ELK stack..."
docker-compose up -d || _die "Failed to start ELK stack"
docker compose up -d || _die "Failed to start ELK stack"
# Wait for services to be ready
echo "Waiting for services to start..."
sleep 5
# Check if services are running
if docker-compose ps | grep -q "Up"; then
if docker compose ps | grep -q "Up"; then
echo "ELK stack started successfully"
else
_die "Failed to start ELK stack services"

View File

@@ -2,16 +2,16 @@
source "${AGENT_PATH}/common.sh"
_check_required_env_vars "CONTAINER_NAME"
# Check if docker-compose services exist and are running
if ! docker-compose ps 2>/dev/null | grep -q "${CONTAINER_NAME}"; then
# Check if docker compose services exist and are running
if ! docker compose ps 2>/dev/null | grep -q "${CONTAINER_NAME}"; then
echo "Unknown"
exit 0
fi
# Check individual service status
elasticsearch_status=$(docker-compose ps elasticsearch 2>/dev/null | grep -c "Up")
logstash_status=$(docker-compose ps logstash 2>/dev/null | grep -c "Up")
kibana_status=$(docker-compose ps kibana 2>/dev/null | grep -c "Up")
elasticsearch_status=$(docker compose ps elasticsearch 2>/dev/null | grep -c "Up")
logstash_status=$(docker compose ps logstash 2>/dev/null | grep -c "Up")
kibana_status=$(docker compose ps kibana 2>/dev/null | grep -c "Up")
if [ "$elasticsearch_status" -eq 1 ] && [ "$logstash_status" -eq 1 ] && [ "$kibana_status" -eq 1 ]; then
echo "Running"

View File

@@ -3,6 +3,6 @@ source "${AGENT_PATH}/common.sh"
_check_required_env_vars "CONTAINER_NAME"
echo "Stopping ELK stack..."
docker-compose stop || true
docker compose stop || true
echo "ELK stack stopped"

View File

@@ -6,7 +6,7 @@ _check_required_env_vars "CONTAINER_NAME"
bash ./stop.sh || _die "Failed to stop containers"
# Remove the containers
docker-compose down --remove-orphans || _die "Failed to remove containers"
docker compose down --remove-orphans || _die "Failed to remove containers"
# CRITICAL: Never remove data volumes in uninstall.sh!
# Data volumes must be preserved for potential reinstallation
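# If data removal is ever required, it is a separate, deliberate step, e.g.:
#   docker volume ls | grep "${CONTAINER_NAME}"
#   docker volume rm "${CONTAINER_NAME}_elasticsearch_data" \
#     "${CONTAINER_NAME}_logstash_data" "${CONTAINER_NAME}_kibana_data"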