new logging systems

All checks were successful: Test and Publish Templates / test-and-publish (push), successful in 40s

logserver/README.md (new file, +279 lines)
# Dropshell LogServer Template

A comprehensive centralized logging solution using the ELK Stack (Elasticsearch, Logstash, Kibana) for receiving, processing, and visualizing logs from multiple hosts.

## Overview

This template deploys a full-featured ELK stack that:

- Receives logs from multiple sources via the Beats protocol
- Stores and indexes logs in Elasticsearch
- Provides powerful search and visualization through Kibana
- Supports automatic log parsing and enrichment
- Handles Docker container logs and system logs from clients
## Architecture

### Components

1. **Elasticsearch** (7.17.x)
   - Distributed search and analytics engine
   - Stores and indexes all log data
   - Provides fast full-text search capabilities
   - Single-node configuration for simplicity (can be scaled)

2. **Logstash** (7.17.x)
   - Log processing pipeline
   - Receives logs from Filebeat clients
   - Parses and enriches log data
   - Routes logs to appropriate Elasticsearch indices

3. **Kibana** (7.17.x)
   - Web UI for log exploration and visualization
   - Dashboard and alert creation
   - Real-time log streaming
   - Advanced search queries
## Features

### Minimum Configuration Design
- Auto-discovery of log formats
- Pre-configured dashboards for common services
- Automatic index lifecycle management
- Built-in parsing for Docker and syslog formats
- Zero-configuration client connectivity

### Log Processing
- Automatic timestamp extraction
- Docker metadata enrichment (container name, image, labels)
- Syslog parsing with severity levels
- JSON log support
- Multi-line log handling (stack traces, etc.)
- Grok pattern matching for common formats
### Security & Performance
- **Mutual TLS (mTLS)** authentication for client connections
- **API key authentication** as an alternative to certificates
- **Per-client authentication** with unique keys/certificates
- **SSL/TLS encryption** for all client connections
- **Basic authentication** for Kibana web access
- **IP whitelisting** for additional security
- Index lifecycle management for storage optimization
- Automatic old log cleanup
- Resource limits to prevent overconsumption
## Port Configuration

- **5601**: Kibana Web UI (HTTP/HTTPS with authentication)
- **9200**: Elasticsearch REST API (HTTP) - internal only
- **5044**: Logstash Beats input (TCP/TLS) - authenticated client connections
- **514**: Syslog input (UDP/TCP) - optional, unauthenticated
- **24224**: Fluentd forward input - optional Docker logging driver

## Storage Requirements

- **Minimum**: 10GB for basic operation
- **Recommended**: 50GB+ depending on log volume
- **Log Retention**: Default 30 days (configurable)
## Client Authentication

### Authentication Methods

1. **Mutual TLS (mTLS) - Recommended**
   - Each client gets a unique certificate signed by the server's CA
   - Strongest security with mutual authentication
   - Automatic certificate validation

2. **API Keys**
   - Each client gets a unique API key
   - Simpler to manage than certificates
   - Good for environments where certificate management is difficult

3. **Basic Auth (Not Recommended)**
   - Shared username/password
   - Least secure, only for testing

### Client Configuration

Clients using the `logclient` template will:

1. Authenticate using provided credentials (cert/key or API key)
2. Establish an encrypted TLS connection
3. Ship all Docker container logs
4. Ship system logs (syslog, auth, kernel)
5. Maintain the connection with automatic reconnection
6. Buffer logs locally during network outages
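The client side of the steps above boils down to a small Filebeat configuration. A minimal sketch is shown below; the hostname, certificate paths, and file name are illustrative, not part of this template:

```bash
# Sketch of a minimal Filebeat config a logclient host might ship with.
# All paths and the server hostname are illustrative examples.
workdir=$(mktemp -d)
cat > "${workdir}/filebeat.yml" <<'EOF'
filebeat.inputs:
  # Docker container logs (JSON log driver files)
  - type: container
    paths:
      - /var/lib/docker/containers/*/*.log
  # System logs
  - type: log
    paths:
      - /var/log/syslog
      - /var/log/auth.log

output.logstash:
  hosts: ["logserver.example.com:5044"]
  ssl.certificate_authorities: ["/certs/ca.crt"]
  ssl.certificate: "/certs/host1.crt"   # per-client certificate (mTLS)
  ssl.key: "/certs/host1.key"
EOF
echo "wrote ${workdir}/filebeat.yml"
```

With `ssl.certificate`/`ssl.key` set, Filebeat presents its client certificate during the TLS handshake, which is what makes the mTLS mode work end to end.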
## Dashboard Features

### Pre-configured Dashboards
- **System Overview**: Overall health and log volume metrics
- **Docker Containers**: Container-specific logs and metrics
- **Error Analysis**: Aggregated error logs from all sources
- **Security Events**: Authentication and access logs
- **Application Logs**: Parsed application-specific logs

### Search Capabilities
- Full-text search across all logs
- Filter by time range, host, container, severity
- Save and share search queries
- Export search results
## Resource Requirements

### Minimum
- CPU: 2 cores
- RAM: 4GB
- Storage: 10GB

### Recommended
- CPU: 4+ cores
- RAM: 8GB+
- Storage: 50GB+ SSD
## Configuration Options

### Environment Variables (service.env)

```bash
# Elasticsearch settings
ES_HEAP_SIZE=2g
ES_MAX_MAP_COUNT=262144

# Logstash settings
LS_HEAP_SIZE=1g
LS_PIPELINE_WORKERS=2

# Kibana settings
KIBANA_PASSWORD=changeme
KIBANA_BASE_PATH=/

# Log retention
LOG_RETENTION_DAYS=30
LOG_MAX_SIZE_GB=50

# Authentication Mode
AUTH_MODE=mtls  # Options: mtls, apikey, basic
ENABLE_TLS=true

# mTLS Settings (if AUTH_MODE=mtls)
CA_CERT_PATH=/certs/ca.crt
SERVER_CERT_PATH=/certs/server.crt
SERVER_KEY_PATH=/certs/server.key
CLIENT_CERT_REQUIRED=true

# API Key Settings (if AUTH_MODE=apikey)
API_KEYS_PATH=/config/api-keys.yml

# Network Security
ALLOWED_IPS=""  # Comma-separated list, empty = all
```
## Usage

### Installation

```bash
dropshell install logserver
```

### Generate Client Credentials

#### For mTLS Authentication:

```bash
# Generate a client certificate for a new host
dropshell exec logserver /scripts/generate-client-cert.sh hostname

# This creates hostname.crt and hostname.key files
```
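The `generate-client-cert.sh` helper is not shown in this commit; under the hood, certificate generation would look roughly like the following openssl sketch (the CA name, client name `host1`, and validity periods are illustrative):

```bash
# Hypothetical sketch of per-client certificate generation for mTLS.
# Names and validity periods are illustrative.
set -e
dir=$(mktemp -d); cd "$dir"

# One-time CA (on the real server this already exists)
openssl genrsa -out ca.key 2048 2>/dev/null
openssl req -x509 -new -key ca.key -subj "/CN=logserver-ca" -days 365 -out ca.crt

# Per-client key + CSR, signed by the CA
openssl genrsa -out host1.key 2048 2>/dev/null
openssl req -new -key host1.key -subj "/CN=host1" -out host1.csr
openssl x509 -req -in host1.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
    -days 365 -out host1.crt 2>/dev/null

# Confirm the client cert chains back to the CA
openssl verify -CAfile ca.crt host1.crt
```

The resulting `host1.crt`/`host1.key` pair is what the client presents on port 5044, and Logstash validates it against the same `ca.crt`.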
#### For API Key Authentication:

```bash
# Generate an API key for a new client
dropshell exec logserver /scripts/generate-api-key.sh hostname

# Returns an API key to configure in the client
```
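A `generate-api-key.sh` implementation can be as simple as a random token plus a registry entry. A minimal sketch, assuming an `api-keys.yml` mapping of hostname to key (the layout is illustrative, not a documented format):

```bash
# Hypothetical sketch of API-key generation; the api-keys.yml layout
# (hostname: "key") is an illustrative assumption.
keyfile=$(mktemp)
client="host1"

# 32 random bytes as 64 hex characters
api_key=$(openssl rand -hex 32)

cat >> "$keyfile" <<EOF
${client}: "${api_key}"
EOF

echo "API key for ${client}: ${api_key}"
```

Keys generated this way should be stored encrypted on the server and transmitted to clients out of band.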
### Access Kibana

Navigate to `https://<server-ip>:5601` in your browser.

Default credentials:
- Username: `elastic`
- Password: `changeme` (change in service.env)

### View Logs

```bash
dropshell logs logserver
```

### Backup

```bash
dropshell backup logserver
```
## Troubleshooting

### Common Issues

1. **Elasticsearch failing to start**
   - Check vm.max_map_count: `sysctl vm.max_map_count` (should be 262144+)
   - Verify sufficient memory is available

2. **No logs appearing in Kibana**
   - Check that Logstash is receiving data: port 5044 should be open
   - Verify client connectivity
   - Check index patterns in Kibana

3. **High memory usage**
   - Adjust heap sizes in service.env
   - Configure index lifecycle management
   - Reduce the retention period
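When checking client connectivity, a quick reachability test against the Beats port needs no extra tooling; bash's `/dev/tcp` is enough. The host and port below are examples:

```bash
# Quick TCP reachability check using bash's /dev/tcp redirection.
# Host/port arguments are examples; run from the client host.
check_port() {
    local host=$1 port=$2
    if timeout 3 bash -c ">/dev/tcp/${host}/${port}" 2>/dev/null; then
        echo "${host}:${port} reachable"
    else
        echo "${host}:${port} NOT reachable"
    fi
}

check_port 127.0.0.1 1   # port 1 is almost always closed
```

Note this only confirms the TCP listener; a TLS handshake failure (wrong CA, expired client cert) can still block log flow even when the port is reachable.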
## Integration

This template is designed to work seamlessly with the `logclient` template. Simply:

1. Deploy this logserver
2. Deploy logclient on each host you want to monitor
3. Configure logclient with the logserver address
4. Logs will automatically start flowing
## Security Considerations

1. **Authentication Setup**
   - Use mTLS for production environments
   - Generate unique credentials for each client
   - Rotate certificates/keys regularly
   - Store credentials securely

2. **Network Security**
   - Always use TLS encryption for client connections
   - Configure IP whitelisting when possible
   - Use firewall rules to restrict access
   - Consider VPN or private networks

3. **Access Control**
   - Change the default Kibana password immediately
   - Create read-only users for viewing logs
   - Implement role-based access control (RBAC)
   - Audit access logs regularly

4. **Data Protection**
   - Regular backups of Elasticsearch indices
   - Encrypt data at rest (optional)
   - Monitor disk usage to prevent data loss
   - Implement log retention policies
## Maintenance

### Daily Tasks
- Monitor disk usage
- Check for failed log shipments
- Review error dashboards

### Weekly Tasks
- Verify all clients are reporting
- Check index health
- Review and optimize slow queries

### Monthly Tasks
- Update ELK stack components
- Archive old indices
- Review retention policies
- Performance tuning based on usage patterns
logserver/TODO.md (new file, +244 lines)
# LogServer Template - Implementation TODO

## Phase 1: Core Infrastructure (Priority 1)

### Configuration Files
- [ ] Create `config/.template_info.env` with template metadata
- [ ] Create `config/service.env` with user-configurable settings
- [ ] Define all required environment variables (ports, passwords, heap sizes)
- [ ] Set appropriate default values for a zero-config experience

### Docker Compose Setup
- [ ] Create `docker-compose.yml` with ELK stack services
- [ ] Configure Elasticsearch single-node setup
- [ ] Configure Logstash with Beats input pipeline
- [ ] Configure Kibana with Elasticsearch connection
- [ ] Set up proper networking between services
- [ ] Define named volumes for data persistence
- [ ] Configure health checks for each service
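The Docker Compose checklist above can be sketched as a minimal compose file. Service names, the healthcheck command, and version defaults here are illustrative assumptions, not the final template:

```bash
# Minimal sketch of the docker-compose.yml this checklist describes
# (single-node ES). Version defaults mirror service.env; details are
# illustrative.
cd "$(mktemp -d)"
cat > docker-compose.yml <<'EOF'
version: "3"
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:${ES_VERSION:-7.17.23}
    environment:
      - discovery.type=single-node
      - "ES_JAVA_OPTS=-Xms${ES_HEAP_SIZE:-2g} -Xmx${ES_HEAP_SIZE:-2g}"
    volumes:
      - elasticsearch_data:/usr/share/elasticsearch/data
    healthcheck:
      test: ["CMD-SHELL", "curl -sf http://localhost:9200/_cluster/health || exit 1"]
      interval: 30s
  logstash:
    image: docker.elastic.co/logstash/logstash:${LS_VERSION:-7.17.23}
    ports:
      - "5044:5044"
    depends_on:
      - elasticsearch
  kibana:
    image: docker.elastic.co/kibana/kibana:${KIBANA_VERSION:-7.17.23}
    environment:
      - ELASTICSEARCH_HOSTS=http://elasticsearch:9200
    ports:
      - "5601:5601"
    depends_on:
      - elasticsearch
volumes:
  elasticsearch_data:
EOF
echo "wrote docker-compose.yml"
```

The named `elasticsearch_data` volume is what lets `uninstall.sh` remove containers while preserving data.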
### Required Scripts
- [ ] Implement `install.sh` - Pull images, create volumes, start services
- [ ] Implement `uninstall.sh` - Stop and remove containers (preserve volumes!)
- [ ] Implement `start.sh` - Start all ELK services with docker-compose
- [ ] Implement `stop.sh` - Gracefully stop all services
- [ ] Implement `status.sh` - Check health of all three services

## Phase 2: Logstash Configuration (Priority 1)

### Input Configuration
- [ ] Configure Beats input on port 5044 with TLS/SSL
- [ ] Set up mutual TLS (mTLS) authentication
- [ ] Configure client certificate validation
- [ ] Add API key authentication option
- [ ] Implement IP whitelisting
- [ ] Add Syslog input on port 514 (UDP/TCP) - unauthenticated
- [ ] Add Docker Fluentd input on port 24224 (optional)
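A sketch of the Beats input with mTLS that this checklist describes. Certificate paths mirror `service.env`; the exact pipeline file layout is an assumption:

```bash
# Sketch of a Logstash Beats input pipeline with mTLS enforced via
# ssl_verify_mode => "force_peer". Paths mirror service.env.
cd "$(mktemp -d)"
cat > beats-input.conf <<'EOF'
input {
  beats {
    port => 5044
    ssl => true
    ssl_certificate => "/certs/server.crt"
    ssl_key => "/certs/server.key"
    # mTLS: require and verify a client certificate signed by our CA
    ssl_certificate_authorities => ["/certs/ca.crt"]
    ssl_verify_mode => "force_peer"
  }
  # Optional, unauthenticated syslog input
  syslog {
    port => 514
  }
}
EOF
echo "wrote beats-input.conf"
```

With `force_peer`, Logstash rejects any connection that does not present a valid client certificate, which is what makes per-client revocation (via a CRL) meaningful.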
### Filter Pipeline
- [ ] Create Docker log parser (extract container metadata)
- [ ] Create Syslog parser (RFC3164 and RFC5424)
- [ ] Add JSON parser for structured logs
- [ ] Implement multiline pattern for stack traces
- [ ] Add timestamp extraction and normalization
- [ ] Create field enrichment (add host metadata)
- [ ] Implement conditional routing based on log type
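The filter checklist above might start from something like this sketch: JSON detection, syslog grok, and timestamp normalization (field names and failure tags are illustrative):

```bash
# Sketch of the filter pipeline described above: JSON detection,
# RFC3164-style syslog grok, and date normalization. Illustrative only.
cd "$(mktemp -d)"
cat > filters.conf <<'EOF'
filter {
  # Structured JSON logs: parse the message body in place
  if [message] =~ /^\s*\{/ {
    json { source => "message" }
  }
  # Classic syslog lines
  grok {
    match => { "message" => "%{SYSLOGLINE}" }
    tag_on_failure => ["_not_syslog"]
  }
  # Normalize the extracted timestamp into @timestamp
  date {
    match => [ "timestamp", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss", "ISO8601" ]
  }
}
EOF
echo "wrote filters.conf"
```

Multiline stack-trace handling is typically done on the Filebeat side in 7.x rather than in this filter block, which is worth keeping in mind for the multiline checklist item.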
### Output Configuration
- [ ] Configure Elasticsearch output with index patterns
- [ ] Set up index templates for different log types
- [ ] Configure index lifecycle management (ILM)

## Phase 3: Elasticsearch Setup (Priority 1)

### System Configuration
- [ ] Set appropriate heap size defaults (ES_HEAP_SIZE)
- [ ] Configure vm.max_map_count requirement check
- [ ] Set up single-node discovery settings
- [ ] Configure data persistence volume
- [ ] Set up index templates for:
  - [ ] Docker logs (docker-*)
  - [ ] System logs (syslog-*)
  - [ ] Application logs (app-*)
  - [ ] Error logs (errors-*)

### Index Management
- [ ] Configure ILM policies for log rotation
- [ ] Set retention period (default 30 days)
- [ ] Configure max index size limits
- [ ] Set up automatic cleanup of old indices
- [ ] Create snapshot repository configuration
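The rotation-plus-retention items above map onto a single ILM policy. A sketch of such a policy, with illustrative rollover thresholds; it would be installed via a `PUT` to the `_ilm/policy/<name>` Elasticsearch API (shown as a comment, not executed here):

```bash
# Sketch of an ILM policy implementing 30-day retention with rollover.
# Thresholds (10gb / 1d) are illustrative defaults.
cd "$(mktemp -d)"
cat > logs-ilm-policy.json <<'EOF'
{
  "policy": {
    "phases": {
      "hot": {
        "actions": {
          "rollover": { "max_size": "10gb", "max_age": "1d" }
        }
      },
      "delete": {
        "min_age": "30d",
        "actions": { "delete": {} }
      }
    }
  }
}
EOF
# Install against a running cluster (example):
# curl -X PUT "localhost:9200/_ilm/policy/logs-retention" \
#   -H 'Content-Type: application/json' -d @logs-ilm-policy.json
echo "wrote logs-ilm-policy.json"
```

Indices enter the `delete` phase 30 days after rollover, which is how `LOG_RETENTION_DAYS=30` from service.env would be enforced.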
## Phase 4: Kibana Configuration (Priority 2)

### Initial Setup
- [ ] Configure Kibana with Elasticsearch URL
- [ ] Set up basic authentication
- [ ] Configure server base path
- [ ] Set appropriate memory limits

### Pre-built Dashboards
- [ ] Create System Overview dashboard
- [ ] Create Docker Containers dashboard
- [ ] Create Error Analysis dashboard
- [ ] Create Security Events dashboard
- [ ] Create Host Metrics dashboard

### Saved Searches
- [ ] Error logs across all sources
- [ ] Authentication events
- [ ] Container lifecycle events
- [ ] Slow queries/performance issues
- [ ] Critical system events

### Index Patterns
- [ ] Configure docker-* pattern
- [ ] Configure syslog-* pattern
- [ ] Configure app-* pattern
- [ ] Configure filebeat-* pattern

## Phase 5: Optional Scripts (Priority 2)

### Operational Scripts
- [ ] Implement `logs.sh` - Show logs from all ELK services
- [ ] Implement `backup.sh` - Snapshot Elasticsearch indices
- [ ] Implement `restore.sh` - Restore from snapshots
- [ ] Implement `destroy.sh` - Complete removal including volumes
- [ ] Implement `ports.sh` - Display all exposed ports
- [ ] Implement `ssh.sh` - Shell into a specific container

### Helper Scripts
- [ ] Create `_volumes.sh` for volume management helpers
- [ ] Add health check script for all services
- [ ] Create performance tuning script
- [ ] Add certificate generation script for SSL

## Phase 6: Security Features (Priority 1 - CRITICAL)

### Certificate Authority Setup
- [ ] Create CA certificate and key for signing client certs
- [ ] Generate server certificate for Logstash
- [ ] Create certificate generation script for clients
- [ ] Set up certificate storage structure
- [ ] Implement certificate rotation mechanism
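The first two CA items can be sketched with openssl. This is a hypothetical shape for `generate-ca.sh`/`generate-server-cert.sh` (CN, SAN, and validity values are illustrative):

```bash
# Hypothetical sketch of CA + server certificate generation.
# CN/SAN names and validity periods are illustrative.
set -e
cd "$(mktemp -d)"

# CA key + self-signed CA certificate
openssl genrsa -out ca.key 4096 2>/dev/null
openssl req -x509 -new -key ca.key -subj "/CN=logserver-ca" -days 3650 -out ca.crt

# Server key + CSR, signed by the CA with a SAN for the logserver hostname
openssl genrsa -out server.key 2048 2>/dev/null
openssl req -new -key server.key -subj "/CN=logserver" -out server.csr
openssl x509 -req -in server.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
    -days 825 -out server.crt \
    -extfile <(printf "subjectAltName=DNS:logserver") 2>/dev/null

# Confirm the server cert chains back to the CA
openssl verify -CAfile ca.crt server.crt
```

Clients are then given `ca.crt` to verify the server, while the server keeps `ca.key` private for signing per-client certificates.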
### mTLS Authentication
- [ ] Configure Logstash for mutual TLS
- [ ] Set up client certificate validation
- [ ] Create client certificate generation script
- [ ] Implement certificate revocation list (CRL)
- [ ] Add certificate expiry monitoring

### API Key Authentication
- [ ] Create API key generation script
- [ ] Configure Logstash to accept API keys
- [ ] Implement API key storage (encrypted)
- [ ] Add API key rotation mechanism
- [ ] Create API key revocation process

### Network Security
- [ ] Implement IP whitelisting in Logstash
- [ ] Configure firewall rules
- [ ] Set up rate limiting
- [ ] Add connection throttling
- [ ] Implement DDoS protection

### Kibana Security
- [ ] Configure Kibana HTTPS
- [ ] Set up basic authentication
- [ ] Create user management scripts
- [ ] Implement session management
- [ ] Add audit logging

## Phase 7: Performance & Optimization (Priority 3)

### Resource Management
- [ ] Configure CPU limits for each service
- [ ] Set memory limits appropriately
- [ ] Add swap handling configuration
- [ ] Configure JVM options files
- [ ] Add performance monitoring

### Optimization
- [ ] Configure pipeline workers
- [ ] Set batch sizes for optimal throughput
- [ ] Configure queue sizes
- [ ] Add caching configuration
- [ ] Optimize index refresh intervals

## Phase 8: Testing & Documentation (Priority 3)

### Testing
- [ ] Test installation process
- [ ] Test uninstall (verify volume preservation)
- [ ] Test log ingestion from a sample client
- [ ] Test all dashboard functionality
- [ ] Test backup and restore procedures
- [ ] Load test with high log volume
- [ ] Test failover and recovery

### Documentation
- [ ] Create README.txt for dropshell format
- [ ] Document all configuration options
- [ ] Add troubleshooting guide
- [ ] Create quick start guide
- [ ] Document upgrade procedures
- [ ] Add performance tuning guide

## Phase 9: Integration Testing (Priority 3)

### With LogClient
- [ ] Test automatic discovery
- [ ] Verify log flow from client to server
- [ ] Test reconnection scenarios
- [ ] Verify all log types are parsed correctly
- [ ] Test SSL communication
- [ ] Measure end-to-end latency

### Compatibility Testing
- [ ] Test with different Docker versions
- [ ] Test on various Linux distributions
- [ ] Verify with different log formats
- [ ] Test with high-volume producers
- [ ] Validate resource usage

## Phase 10: Production Readiness (Priority 4)

### Monitoring & Alerting
- [ ] Add Elasticsearch monitoring
- [ ] Configure disk space alerts
- [ ] Set up index health monitoring
- [ ] Add performance metrics collection
- [ ] Create alert rules in Kibana

### Maintenance Features
- [ ] Add automatic update check
- [ ] Create maintenance mode
- [ ] Add data export functionality
- [ ] Create migration scripts
- [ ] Add configuration validation

## Notes

### Design Principles
1. **Minimum configuration**: Should work with just `dropshell install logserver`
2. **Data safety**: Never delete volumes in uninstall.sh
3. **Non-interactive**: All scripts must run without user input
4. **Idempotent**: Scripts can be run multiple times safely
5. **Clear feedback**: Provide clear status and error messages

### Dependencies
- Docker and Docker Compose
- Sufficient system resources (4GB+ RAM recommended)
- Network connectivity for clients
- Persistent storage for logs

### Testing Checklist
- [ ] All required scripts present and executable
- [ ] Template validates with dropshell test-template
- [ ] Services start and connect properly
- [ ] Logs flow from client to Kibana
- [ ] Data persists across container restarts
- [ ] Uninstall preserves data volumes
- [ ] Resource limits are enforced
- [ ] Error handling works correctly
logserver/config/.template_info.env (new file, +17 lines)
# Template identifier - MUST match the directory name
TEMPLATE=logserver

# Requirements
REQUIRES_HOST_ROOT=false       # No root access on host needed
REQUIRES_DOCKER=true           # Docker is required
REQUIRES_DOCKER_ROOT=false     # Docker root privileges not specifically needed

# Docker compose used for ELK stack
USES_DOCKER_COMPOSE=true

# Volume definitions for persistence
DATA_VOLUME="${CONTAINER_NAME}_elasticsearch_data"
LOGSTASH_VOLUME="${CONTAINER_NAME}_logstash_data"
KIBANA_VOLUME="${CONTAINER_NAME}_kibana_data"
CERTS_VOLUME="${CONTAINER_NAME}_certs"
CONFIG_VOLUME="${CONTAINER_NAME}_config"
logserver/config/service.env (new file, +46 lines)
# Service identification
CONTAINER_NAME=logserver

# Elasticsearch settings
ES_VERSION=7.17.23
ES_HEAP_SIZE=2g
ES_MAX_MAP_COUNT=262144

# Logstash settings
LS_VERSION=7.17.23
LS_HEAP_SIZE=1g
LS_PIPELINE_WORKERS=2

# Kibana settings
KIBANA_VERSION=7.17.23
KIBANA_PASSWORD=changeme
KIBANA_BASE_PATH=/

# Ports
KIBANA_PORT=5601
LOGSTASH_BEATS_PORT=5044
LOGSTASH_SYSLOG_PORT=514

# Log retention
LOG_RETENTION_DAYS=30
LOG_MAX_SIZE_GB=50

# Authentication Mode
AUTH_MODE=mtls  # Options: mtls, apikey, basic
ENABLE_TLS=true

# mTLS Settings (if AUTH_MODE=mtls)
CA_CERT_PATH=/certs/ca.crt
SERVER_CERT_PATH=/certs/server.crt
SERVER_KEY_PATH=/certs/server.key
CLIENT_CERT_REQUIRED=true

# API Key Settings (if AUTH_MODE=apikey)
API_KEYS_PATH=/config/api-keys.yml

# Network Security
ALLOWED_IPS=""  # Comma-separated list, empty = all

# Resource limits
MAX_CPU_PERCENT=80
MAX_MEMORY=4GB
logserver/install.sh (new executable file, +62 lines)
#!/bin/bash
source "${AGENT_PATH}/common.sh"
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"

# Check required environment variables
_check_required_env_vars "CONTAINER_NAME" "ES_VERSION" "LS_VERSION" "KIBANA_VERSION"

# Check Docker and Docker Compose are available
_check_docker_installed || _die "Docker test failed"
command -v docker-compose >/dev/null 2>&1 || _die "docker-compose is not installed"

# Check vm.max_map_count for Elasticsearch
current_max_map_count=$(sysctl -n vm.max_map_count 2>/dev/null || echo 0)
if [ "$current_max_map_count" -lt 262144 ]; then
    echo "WARNING: vm.max_map_count is too low ($current_max_map_count)"
    echo "Elasticsearch requires at least 262144"
    echo "Please run: sudo sysctl -w vm.max_map_count=262144"
    echo "And add vm.max_map_count=262144 to /etc/sysctl.conf to persist across reboots"
    _die "System configuration needs adjustment"
fi

# Stop any existing containers
bash ./stop.sh || true

# Remove old containers
docker-compose down --remove-orphans 2>/dev/null || true

# Pull the Docker images
echo "Pulling ELK stack images..."
docker pull "docker.elastic.co/elasticsearch/elasticsearch:${ES_VERSION}" || _die "Failed to pull Elasticsearch"
docker pull "docker.elastic.co/logstash/logstash:${LS_VERSION}" || _die "Failed to pull Logstash"
docker pull "docker.elastic.co/kibana/kibana:${KIBANA_VERSION}" || _die "Failed to pull Kibana"

# Generate certificates if using mTLS
if [ "$AUTH_MODE" = "mtls" ]; then
    bash ./scripts/generate-ca.sh || _die "Failed to generate CA certificate"
    bash ./scripts/generate-server-cert.sh || _die "Failed to generate server certificate"
fi

# Start the ELK stack
echo "Starting ELK stack..."
docker-compose up -d --build || _die "Failed to start ELK stack"

# Wait for services to be ready
echo "Waiting for services to start..."
sleep 10

# Check status
bash ./status.sh || _die "Services failed to start properly"

echo "Installation of ${CONTAINER_NAME} complete"
echo ""
echo "Kibana UI: http://$(hostname -I | awk '{print $1}'):${KIBANA_PORT}"
echo "Username: elastic"
echo "Password: ${KIBANA_PASSWORD}"
echo ""
echo "Logstash listening on port ${LOGSTASH_BEATS_PORT} for Filebeat clients"
if [ "$AUTH_MODE" = "mtls" ]; then
    echo "Authentication: mTLS (generate client certs with ./scripts/generate-client-cert.sh)"
elif [ "$AUTH_MODE" = "apikey" ]; then
    echo "Authentication: API Keys (generate with ./scripts/generate-api-key.sh)"
fi
logserver/start.sh (new executable file, +17 lines)
#!/bin/bash
source "${AGENT_PATH}/common.sh"
_check_required_env_vars "CONTAINER_NAME"

echo "Starting ELK stack..."
docker-compose up -d || _die "Failed to start ELK stack"

# Wait for services to be ready
echo "Waiting for services to start..."
sleep 5

# Check if services are running
if docker-compose ps | grep -q "Up"; then
    echo "ELK stack started successfully"
else
    _die "Failed to start ELK stack services"
fi
logserver/status.sh (new executable file, +22 lines)
#!/bin/bash
source "${AGENT_PATH}/common.sh"
_check_required_env_vars "CONTAINER_NAME"

# Check if docker-compose services exist and are running
if ! docker-compose ps 2>/dev/null | grep -q "${CONTAINER_NAME}"; then
    echo "Unknown"
    exit 0
fi

# Count "Up" lines per service (a restarting container may match more than once)
elasticsearch_status=$(docker-compose ps elasticsearch 2>/dev/null | grep -c "Up")
logstash_status=$(docker-compose ps logstash 2>/dev/null | grep -c "Up")
kibana_status=$(docker-compose ps kibana 2>/dev/null | grep -c "Up")

if [ "$elasticsearch_status" -ge 1 ] && [ "$logstash_status" -ge 1 ] && [ "$kibana_status" -ge 1 ]; then
    echo "Running"
elif [ "$elasticsearch_status" -eq 0 ] && [ "$logstash_status" -eq 0 ] && [ "$kibana_status" -eq 0 ]; then
    echo "Stopped"
else
    echo "Error"
fi
logserver/stop.sh (new executable file, +8 lines)
#!/bin/bash
source "${AGENT_PATH}/common.sh"
_check_required_env_vars "CONTAINER_NAME"

echo "Stopping ELK stack..."
docker-compose stop || true

echo "ELK stack stopped"
logserver/uninstall.sh (new executable file, +16 lines)
#!/bin/bash
source "${AGENT_PATH}/common.sh"
_check_required_env_vars "CONTAINER_NAME"

# Stop the containers
bash ./stop.sh || _die "Failed to stop containers"

# Remove the containers
docker-compose down --remove-orphans || _die "Failed to remove containers"

# CRITICAL: Never remove data volumes in uninstall.sh!
# Data volumes must be preserved for potential reinstallation.
# Only destroy.sh should remove volumes, and it must be explicit.

echo "Uninstallation of ${CONTAINER_NAME} complete"
echo "Note: Data volumes have been preserved. To remove all data, use destroy.sh"