docs: Add 16, update 2 and remove 2 files

This commit is contained in:
j
2026-01-26 21:17:15 +13:00
parent eebd3efcf3
commit 70dab12114
20 changed files with 411 additions and 541 deletions

CLAUDE.md

@@ -1,226 +0,0 @@
# Dropshell Template Development Guide
## Overview
Dropshell templates are service deployment configurations that allow users to easily install and manage Docker-based services on remote servers. Each template provides a standardized interface for service lifecycle management.
## Template Architecture
### Directory Structure
```
template-name/
├── config/
│ └── service.env # Service configuration variables
├── install.sh # Installation script (REQUIRED)
├── uninstall.sh # Uninstallation script (REQUIRED)
├── start.sh # Start service script (REQUIRED)
├── stop.sh # Stop service script (REQUIRED)
├── status.sh # Check service status (REQUIRED)
├── logs.sh # View service logs (optional)
├── backup.sh # Backup service data (optional)
├── restore.sh # Restore service data (optional)
├── destroy.sh # Complete removal including data (optional)
├── ports.sh # Display exposed ports (optional)
├── ssh.sh # SSH into container (optional)
└── _volumes.sh # Volume helper functions (optional)
```
## Required Scripts
### 1. install.sh
- Pull Docker images
- Create necessary volumes/directories
- Verify configuration files exist
- Stop and remove existing containers (if any)
- Start the service
- Must source `${AGENT_PATH}/common.sh`
### 2. uninstall.sh
- Stop the running container
- Remove the container
- Optionally clean up volumes (usually not)
- Must source `${AGENT_PATH}/common.sh`
### 3. start.sh
- Define Docker run command with all parameters
- Use `_create_and_start_container` helper function
- Verify container is running
- Must source `${AGENT_PATH}/common.sh`
### 4. stop.sh
- Stop the running container gracefully
- Use `_stop_container` helper function
- Must source `${AGENT_PATH}/common.sh`
### 5. status.sh
- Check if container exists and is running
- Display container status information
- Return appropriate exit codes
- Must source `${AGENT_PATH}/common.sh`
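Taken together, a minimal `install.sh` following this pattern might look like the sketch below. This is illustrative only: it assumes the image and container variables from the sample `service.env` shown next, and relies on the helper functions documented under Common Functions.
```bash
#!/bin/bash
source "${AGENT_PATH}/common.sh"

_check_required_env_vars "CONTAINER_NAME" "IMAGE_REGISTRY" "IMAGE_REPO" "IMAGE_TAG"
_check_docker_installed || _die "Docker is not available"

# Pull the image so failures surface early
docker pull "${IMAGE_REGISTRY}/${IMAGE_REPO}:${IMAGE_TAG}" || _die "Failed to pull image"

# Remove any existing container, then (re)start the service
if _is_container_exists "${CONTAINER_NAME}"; then
    _remove_container "${CONTAINER_NAME}"
fi
bash ./start.sh || _die "Failed to start ${CONTAINER_NAME}"

echo "Installation of ${CONTAINER_NAME} complete"
```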
## Configuration (service.env)
The `config/service.env` file contains service-specific variables:
```bash
# Service identification
CONTAINER_NAME=service-name
IMAGE_REGISTRY=docker.io
IMAGE_REPO=vendor/image
IMAGE_TAG=latest
# Volumes (if using Docker volumes)
DATA_VOLUME=${CONTAINER_NAME}_data
CONFIG_VOLUME=${CONTAINER_NAME}_config
# Directories (if using host paths)
DATA_PATH=${SERVICE_PATH}/data
CONFIG_PATH=${SERVICE_PATH}/config
# Service-specific settings
PORT=8080
ENABLE_FEATURE=true
```
## Common Functions (from common.sh)
Available helper functions:
- `_die "message"` - Print error and exit
- `_check_docker_installed` - Verify Docker availability
- `_check_required_env_vars "VAR1" "VAR2"` - Validate environment
- `_create_and_start_container "$cmd" "$name"` - Start container
- `_is_container_exists "$name"` - Check if container exists
- `_is_container_running "$name"` - Check if running
- `_stop_container "$name"` - Stop container
- `_remove_container "$name"` - Remove container
- `_create_folder "$path"` - Create directory with permissions
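As an illustration of how these helpers fit together, a `start.sh` might be sketched as follows. The container port `8080`, the `/data` mount point, and the use of `PORT`/`DATA_VOLUME` from the sample `service.env` above are placeholders, not part of any particular template.
```bash
#!/bin/bash
source "${AGENT_PATH}/common.sh"

_check_required_env_vars "CONTAINER_NAME" "IMAGE_REGISTRY" "IMAGE_REPO" "IMAGE_TAG" "PORT" "DATA_VOLUME"

# Assemble the full docker run command, then hand it to the helper
DOCKER_RUN_CMD="docker run -d \
    --name ${CONTAINER_NAME} \
    --restart unless-stopped \
    -p ${PORT}:8080 \
    -v ${DATA_VOLUME}:/data \
    ${IMAGE_REGISTRY}/${IMAGE_REPO}:${IMAGE_TAG}"

_create_and_start_container "$DOCKER_RUN_CMD" "${CONTAINER_NAME}" || _die "Failed to start container"
_is_container_running "${CONTAINER_NAME}" || _die "${CONTAINER_NAME} is not running"

echo "${CONTAINER_NAME} started"
```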
## Best Practices
### 1. User Permissions
- Templates run as the `dropshell` user (non-root)
- User is in the `docker` group
- Avoid using `sudo` in scripts
- Set appropriate file permissions (usually 777 for shared volumes)
### 2. Container Management
- Always use `--restart unless-stopped` for reliability
- Name containers consistently using `$CONTAINER_NAME`
- Use official Docker images when possible
- Pin specific versions rather than using `:latest` in production
### 3. Data Persistence
- Use Docker volumes or host directories for persistent data
- Separate config from data volumes
- Document backup/restore procedures
- Never delete data in uninstall.sh (only in destroy.sh)
### 4. Network Configuration
- Expose ports using `-p HOST:CONTAINER`
- Document all exposed ports
- Consider using Docker networks for multi-container setups
### 5. Error Handling
- Use `set -e` at the start of scripts (optional; common.sh handles most error cases)
- Check command success with proper error messages
- Use `_die` for fatal errors
- Provide meaningful feedback to users
### 6. Environment Variables
Available from dropshell:
- `${AGENT_PATH}` - Path to agent scripts (contains common.sh)
- `${SERVICE_PATH}` - Path to service directory on server
- `${CONFIG_PATH}` - Path to service config directory
- All variables from service.env
## Creating a New Template
### Step 1: Create Template Structure
```bash
mkdir template-name
mkdir template-name/config
```
### Step 2: Create service.env
Define all configuration variables with sensible defaults.
### Step 3: Implement Required Scripts
Start with the five required scripts, following the patterns from existing templates.
### Step 4: Test Locally
```bash
./test.sh # Run validation tests
./test_template.sh template-name # Integration test if dropshell installed
```
### Step 5: Add to versions.json
```json
{
  "template-name": "1.0.0"
}
```
### Step 6: Document
Create a README.txt explaining:
- What the service does
- Configuration options
- Default ports and paths
- Any special requirements
## Template Examples
### Simple Service (watchtower)
- Single container
- Minimal configuration
- No exposed ports
- Docker socket access
### Web Service (caddy)
- HTTP/HTTPS ports
- Config files and static content
- Multiple volumes
- SSL certificate handling
### Complex Service (gitea-runner)
- Multiple configuration files
- Docker-in-Docker capability
- Registration process
- Cleanup procedures
## Testing Checklist
- [ ] All required scripts present and executable
- [ ] service.env contains necessary variables
- [ ] Scripts source common.sh correctly
- [ ] Container starts and stops properly
- [ ] Status script returns correct information
- [ ] Uninstall removes container but preserves data
- [ ] Install is idempotent (can run multiple times)
- [ ] No hardcoded paths (use environment variables)
- [ ] Error messages are clear and helpful
- [ ] Works without root/sudo access
## Version Management
Follow semantic versioning:
- **MAJOR**: Breaking changes to configuration or behavior
- **MINOR**: New features, backwards compatible
- **PATCH**: Bug fixes, backwards compatible
Update version with:
```bash
./bump-version.sh template-name patch|minor|major
```
## Publishing
Templates are automatically published when pushed to the main branch:
1. CI runs tests
2. Detects changed templates
3. Publishes to templates.dropshell.app
4. Tags with version numbers
Manual publishing:
```bash
export SOS_WRITE_TOKEN=your-token
./publish.sh template-name
```


@@ -1,309 +0,0 @@
# Development Guide
This guide covers the development workflow for dropshell templates, including testing, versioning, and publishing.
## Repository Structure
```
dropshell-templates/
├── .gitea/workflows/ # CI/CD workflows
├── caddy/ # Web server template
├── gitea-runner-docker/ # CI runner template
├── simple-object-server/ # Object storage template
├── squashkiwi/ # Squashkiwi service template
├── static-website/ # Static site hosting template
├── watchtower/ # Container auto-updater template
├── versions.json # Template version tracking
├── test.sh # Template validation script
├── publish.sh # Template publishing script
├── bump-version.sh # Version management script
├── detect-changes.sh # Change detection script
└── test_template.sh # Integration test script
```
## Template Structure
Each template must have:
- `install.sh` - Installation script
- `uninstall.sh` - Uninstallation script
- `start.sh` - Service start script
- `stop.sh` - Service stop script
- `status.sh` - Service status script
- `config/service.env` - Service configuration
Optional scripts:
- `logs.sh` - View service logs
- `backup.sh` - Backup service data
- `restore.sh` - Restore service data
- `ports.sh` - Display service ports
- `ssh.sh` - SSH into service container
- `destroy.sh` - Completely remove service
## Testing
### Local Testing
Run validation tests for all templates:
```bash
./test.sh
```
This checks:
- Required scripts exist
- Scripts are executable
- Config directory exists
- service.env file exists
- Shell script syntax is valid
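For reference, the kind of validation these checks imply can be sketched roughly as below; the actual `test.sh` in this repository may be structured differently.
```bash
#!/bin/bash
# Rough sketch of per-template validation; not the repository's actual test.sh.
REQUIRED_SCRIPTS="install.sh uninstall.sh start.sh stop.sh status.sh"
FAILED=0

for template in */; do
    name="${template%/}"
    [ -d "${name}/config" ] || { echo "FAIL ${name}: missing config directory"; FAILED=1; }
    [ -f "${name}/config/service.env" ] || { echo "FAIL ${name}: missing config/service.env"; FAILED=1; }
    for script in $REQUIRED_SCRIPTS; do
        path="${name}/${script}"
        if [ ! -x "$path" ]; then
            echo "FAIL ${name}: ${script} missing or not executable"; FAILED=1
        elif ! bash -n "$path"; then
            echo "FAIL ${name}: syntax error in ${script}"; FAILED=1
        fi
    done
done

exit $FAILED
```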
### Integration Testing
If you have `ds` (dropshell) installed locally:
```bash
./test_template.sh caddy
```
This performs a full integration test:
- Creates a test service
- Installs the template
- Starts/stops the service
- Backs up and restores
- Destroys the service
## Version Management
Each template has independent semantic versioning tracked in `versions.json`.
### Version Format
Versions follow semantic versioning: `MAJOR.MINOR.PATCH`
- **MAJOR**: Breaking changes
- **MINOR**: New features, backwards compatible
- **PATCH**: Bug fixes, backwards compatible
### Bumping Versions
#### Single Template
```bash
# Bump patch version (1.0.0 -> 1.0.1)
./bump-version.sh caddy patch
# Bump minor version (1.0.0 -> 1.1.0)
./bump-version.sh caddy minor
# Bump major version (1.0.0 -> 2.0.0)
./bump-version.sh caddy major
# Set specific version
./bump-version.sh caddy 2.5.3
```
#### All Templates
```bash
# Bump patch for all templates
./bump-version.sh --all patch
# Set all templates to specific version
./bump-version.sh --all 2.0.0
```
### Version Workflow
1. **Make changes** to template(s)
2. **Test changes** locally:
```bash
./test.sh
# Optional: ./test_template.sh <template-name>
```
3. **Bump version** for changed templates:
```bash
# For bug fixes
./bump-version.sh caddy patch
# For new features
./bump-version.sh caddy minor
# For breaking changes
./bump-version.sh caddy major
```
4. **Commit changes** including `versions.json`:
```bash
git add .
git commit -m "feat(caddy): add custom domain support"
```
5. **Push to main** - CI automatically publishes changed templates
## Publishing
### Automatic Publishing (CI/CD)
When you push to the main branch:
1. CI runs tests on all templates
2. Detects which templates changed since last version update
3. Publishes only changed templates to templates.dropshell.app
4. Each template is tagged with:
- `:latest` - Always points to newest version
- `:1.0.0` - Specific version
- `:v1` - Major version only
### Manual Publishing
```bash
# Set environment variable
export SOS_WRITE_TOKEN=your-token-here
# Publish only changed templates (default)
./publish.sh
# Publish all templates
./publish.sh --all
# Publish specific templates
./publish.sh caddy watchtower
# Explicitly publish only changed
./publish.sh --changed-only
```
## Change Detection
The system automatically detects which templates have changed:
```bash
# List changed templates since last version update
./detect-changes.sh
# Output as JSON
./detect-changes.sh --json
```
Changes are detected by comparing against the last git tag that modified `versions.json`.
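Assuming the reference point is the most recent commit that touched `versions.json`, the underlying idea could be expressed with plain git roughly as follows (the real `detect-changes.sh` may work differently):
```bash
#!/bin/bash
# Illustrative only; not the repository's actual detect-changes.sh.
# Find the most recent commit that modified versions.json.
LAST_VERSION_COMMIT=$(git log -1 --format=%H -- versions.json)

# List top-level template directories touched since that commit.
git diff --name-only "${LAST_VERSION_COMMIT}" HEAD \
    | awk -F/ 'NF > 1 {print $1}' \
    | sort -u
```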
## CI/CD Configuration
The Gitea Actions workflow (`.gitea/workflows/test-and-publish.yaml`):
- Triggers on pushes to main and pull requests
- Runs tests for all templates
- Publishes only changed templates (main branch only)
- Requires `SOS_WRITE_TOKEN` secret in repository settings
### Setting up CI/CD
1. Go to your Gitea repository settings
2. Add a new secret named `SOS_WRITE_TOKEN`
3. Set the value to your Simple Object Server write token
4. Push to main branch to trigger the workflow
## Development Workflow Example
Here's a complete example of updating the caddy template:
```bash
# 1. Make changes to caddy template
vim caddy/config/Caddyfile
# 2. Test your changes
./test.sh
# 3. Run integration test (if ds is installed)
./test_template.sh caddy
# 4. Bump version (patch for bug fix)
./bump-version.sh caddy patch
# 5. Review version change
cat versions.json
# 6. Commit your changes
git add .
git commit -m "fix(caddy): correct reverse proxy configuration"
# 7. Push to trigger automatic publishing
git push origin main
```
The CI will:
- Run tests
- Detect that only caddy changed
- Publish caddy with new version (e.g., caddy:1.0.1, caddy:latest, caddy:v1)
## Best Practices
1. **Always test locally** before pushing
2. **Use semantic versioning** appropriately:
- Breaking changes = major bump
- New features = minor bump
- Bug fixes = patch bump
3. **Write clear commit messages** that explain the changes
4. **Update version before pushing** to main
5. **Document breaking changes** in commit messages
6. **Keep templates simple** and focused on single services
7. **Use environment variables** in service.env for configuration
8. **Test both install and uninstall** procedures
## Troubleshooting
### Tests failing locally
- Ensure all scripts are executable: `chmod +x template-name/*.sh`
- Check script syntax: `bash -n template-name/script.sh`
- Verify config directory exists
- Ensure service.env is present
### Publishing fails
- Verify `SOS_WRITE_TOKEN` is set
- Check that `sos` tool is installed
- Ensure versions.json is valid JSON
- Verify network connectivity to templates.dropshell.app
### Change detection not working
- Ensure git history is available: `git fetch --all`
- Check that versions.json was committed
- Verify detect-changes.sh is executable
## Template Guidelines
### Required Files
Every template must include:
- Core scripts: install.sh, uninstall.sh, start.sh, stop.sh, status.sh
- Configuration: config/service.env
### Script Requirements
- Use `#!/bin/bash` shebang
- Set `set -e` for error handling
- Use absolute paths where possible
- Handle missing dependencies gracefully
- Provide meaningful error messages
- Clean up resources on uninstall
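A minimal skeleton that satisfies these requirements might look like this (a sketch, assuming the common.sh helpers described elsewhere in this repository):
```bash
#!/bin/bash
set -e                                   # stop on the first failing command
source "${AGENT_PATH}/common.sh"

_check_required_env_vars "CONTAINER_NAME"

# Fail early with a clear message if a dependency is missing
command -v docker >/dev/null 2>&1 || _die "Docker is required but not installed"

# ... service-specific logic here ...
```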
### Environment Variables
Define in config/service.env:
- Service-specific configuration
- Port mappings
- Volume mounts
- Resource limits
- Feature flags
### Docker Integration
Most templates use Docker:
- Use official images when possible
- Pin specific versions (avoid :latest in production)
- Map configuration via volumes
- Use docker-compose for complex setups
- Handle container lifecycle properly
## Contributing
1. Fork the repository
2. Create a feature branch
3. Make your changes
4. Add/update tests if needed
5. Bump versions appropriately
6. Submit a pull request
Pull requests should:
- Pass all tests
- Include version bumps for changed templates
- Have clear descriptions
- Follow existing code style
- Include any necessary documentation updates

graylog/_volumes.sh Executable file

@@ -0,0 +1,10 @@
#!/bin/bash
# Define volume items for graylog containers
# These are used across backup, restore, create, and destroy operations
# Docker Compose creates volumes with project name prefix: {project}_{volume_name}
get_graylog_volumes() {
    echo "volume:mongodb_data:${CONTAINER_NAME}_mongodb_data"
    echo "volume:opensearch_data:${CONTAINER_NAME}_opensearch_data"
    echo "volume:graylog_data:${CONTAINER_NAME}_graylog_data"
}

graylog/backup.sh Executable file

@@ -0,0 +1,21 @@
#!/bin/bash
# shellcheck disable=SC1091
source "${AGENT_PATH}/common.sh"
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
source "${SCRIPT_DIR}/_volumes.sh"
_check_required_env_vars "CONTAINER_NAME"
# BACKUP SCRIPT
# Creates a backup of all Graylog data volumes
# Stop containers before backup
docker compose -p "${CONTAINER_NAME}" stop || _die "Failed to stop Graylog stack"
# Backup all volumes
# shellcheck disable=SC2046
backup_items $(get_graylog_volumes) || _die "Failed to create backup"
# Restart containers
docker compose -p "${CONTAINER_NAME}" start || _die "Failed to restart Graylog stack"
echo "Backup created successfully"


@@ -0,0 +1,28 @@
# Graylog Configuration
CONTAINER_NAME=graylog
# Server settings (REQUIRED by dropshell)
SSH_USER="root"
# Ports
WEB_PORT=9000 # Graylog web UI
GELF_UDP_PORT=12201 # GELF UDP input
GELF_TCP_PORT=12202 # GELF TCP input
SYSLOG_UDP_PORT=1514 # Syslog UDP input
SYSLOG_TCP_PORT=1515 # Syslog TCP input
BEATS_PORT=5044 # Beats input
# Graylog secrets (CHANGE THESE!)
# Generate a new secret with: pwgen -N 1 -s 96
GRAYLOG_PASSWORD_SECRET="somepasswordpepper"
# Admin password (plain text - converted to SHA256 during install)
GRAYLOG_ROOT_PASSWORD="admin"
# Graylog settings
GRAYLOG_HTTP_EXTERNAL_URI="http://localhost:9000/"
GRAYLOG_TIMEZONE="UTC"
# OpenSearch/Elasticsearch settings
OPENSEARCH_JAVA_OPTS="-Xms1g -Xmx1g"
# MongoDB settings (no authentication by default for internal use)

graylog/destroy.sh Executable file

@@ -0,0 +1,23 @@
#!/bin/bash
# shellcheck disable=SC1091
source "${AGENT_PATH}/common.sh"
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
source "${SCRIPT_DIR}/_volumes.sh"
_check_required_env_vars "CONTAINER_NAME"
# DESTROY SCRIPT
# Completely removes the service AND all data
# WARNING: This is irreversible!
echo "WARNING: This will PERMANENTLY DELETE all data for ${CONTAINER_NAME}"
echo "This includes all logs, configurations, dashboards, and indexes!"
./uninstall.sh
# Remove docker compose volumes
docker compose -p "${CONTAINER_NAME}" down -v 2>/dev/null || true
# shellcheck disable=SC2046
destroy_items $(get_graylog_volumes) || _die "Failed to destroy docker volumes"
echo "Destroyed ${CONTAINER_NAME} and all data."


@@ -0,0 +1,78 @@
services:
  # MongoDB - stores Graylog configuration and metadata
  mongodb:
    image: mongo:6.0
    container_name: ${CONTAINER_NAME}_mongodb
    volumes:
      - mongodb_data:/data/db
    restart: unless-stopped
    networks:
      - graylog-net

  # OpenSearch - stores and indexes log data
  opensearch:
    image: opensearchproject/opensearch:2
    container_name: ${CONTAINER_NAME}_opensearch
    environment:
      - "OPENSEARCH_JAVA_OPTS=${OPENSEARCH_JAVA_OPTS:--Xms1g -Xmx1g}"
      - "bootstrap.memory_lock=true"
      - "discovery.type=single-node"
      - "action.auto_create_index=false"
      - "plugins.security.disabled=true"
      - "DISABLE_INSTALL_DEMO_CONFIG=true"
    ulimits:
      memlock:
        soft: -1
        hard: -1
      nofile:
        soft: 65536
        hard: 65536
    volumes:
      - opensearch_data:/usr/share/opensearch/data
    restart: unless-stopped
    networks:
      - graylog-net

  # Graylog - the main log management application
  graylog:
    image: graylog/graylog:6.1
    container_name: ${CONTAINER_NAME}
    environment:
      - GRAYLOG_PASSWORD_SECRET=${GRAYLOG_PASSWORD_SECRET:-somepasswordpepper}
      - GRAYLOG_ROOT_PASSWORD_SHA2=${GRAYLOG_ROOT_PASSWORD_SHA2:-8c6976e5b5410415bde908bd4dee15dfb167a9c873fc4bb8a81f6f2ab448a918}
      - GRAYLOG_HTTP_EXTERNAL_URI=${GRAYLOG_HTTP_EXTERNAL_URI:-http://localhost:9000/}
      - GRAYLOG_HTTP_BIND_ADDRESS=0.0.0.0:9000
      - GRAYLOG_ELASTICSEARCH_HOSTS=http://opensearch:9200
      - GRAYLOG_MONGODB_URI=mongodb://mongodb:27017/graylog
      - GRAYLOG_TIMEZONE=${GRAYLOG_TIMEZONE:-UTC}
    entrypoint: /usr/bin/tini -- wait-for-it opensearch:9200 -- /docker-entrypoint.sh
    volumes:
      - graylog_data:/usr/share/graylog/data
    restart: unless-stopped
    depends_on:
      - mongodb
      - opensearch
    ports:
      # Graylog web interface and REST API
      - "${WEB_PORT:-9000}:9000"
      # GELF UDP
      - "${GELF_UDP_PORT:-12201}:12201/udp"
      # GELF TCP
      - "${GELF_TCP_PORT:-12202}:12202"
      # Syslog UDP
      - "${SYSLOG_UDP_PORT:-1514}:1514/udp"
      # Syslog TCP
      - "${SYSLOG_TCP_PORT:-1515}:1515"
      # Beats
      - "${BEATS_PORT:-5044}:5044"
    networks:
      - graylog-net

networks:
  graylog-net:
    driver: bridge

volumes:
  mongodb_data:
  opensearch_data:
  graylog_data:

graylog/install.sh Executable file

@@ -0,0 +1,69 @@
#!/bin/bash
source "${AGENT_PATH}/common.sh"
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
_check_required_env_vars "CONTAINER_NAME" "GRAYLOG_PASSWORD_SECRET" "GRAYLOG_ROOT_PASSWORD"
# Convert plain text password to SHA256 for Graylog
export GRAYLOG_ROOT_PASSWORD_SHA2=$(echo -n "${GRAYLOG_ROOT_PASSWORD}" | sha256sum | cut -d' ' -f1)
# Check Docker
_check_docker_installed || _die "Docker test failed"
docker compose version >/dev/null 2>&1 || _die "Docker Compose V2 is required"
# Check vm.max_map_count for OpenSearch
CURRENT_MAP_COUNT=$(cat /proc/sys/vm/max_map_count 2>/dev/null || echo "0")
if [ "$CURRENT_MAP_COUNT" -lt 262144 ]; then
echo "WARNING: vm.max_map_count is $CURRENT_MAP_COUNT (should be at least 262144)"
echo "OpenSearch may fail to start. To fix, run:"
echo " sudo sysctl -w vm.max_map_count=262144"
echo " echo 'vm.max_map_count=262144' | sudo tee -a /etc/sysctl.conf"
fi
# Stop any existing containers
bash ./stop.sh 2>/dev/null || true
# Start the stack
echo "Starting Graylog..."
docker compose -p "${CONTAINER_NAME}" up -d || _die "Failed to start Graylog stack"
# Wait for Graylog to be ready
echo -n "Waiting for Graylog to start (this may take a few minutes)..."
MAX_WAIT=180
WAITED=0
while [ $WAITED -lt $MAX_WAIT ]; do
    if curl -s "http://localhost:${WEB_PORT:-9000}/api/system/lbstatus" 2>/dev/null | grep -q "ALIVE"; then
        echo " Ready!"
        break
    fi
    echo -n "."
    sleep 5
    WAITED=$((WAITED + 5))
done
if [ $WAITED -ge $MAX_WAIT ]; then
    echo ""
    echo "WARNING: Graylog may still be starting. Check logs with: dropshell logs graylog"
fi
echo ""
echo "========================================="
echo "Graylog Installed!"
echo "========================================="
echo ""
echo "Web UI: http://$(hostname -I | awk '{print $1}'):${WEB_PORT:-9000}"
echo "Login: admin / ${GRAYLOG_ROOT_PASSWORD}"
echo ""
echo "INPUT PORTS:"
echo " GELF UDP: ${GELF_UDP_PORT:-12201}"
echo " GELF TCP: ${GELF_TCP_PORT:-12202}"
echo " Syslog UDP: ${SYSLOG_UDP_PORT:-1514}"
echo " Syslog TCP: ${SYSLOG_TCP_PORT:-1515}"
echo " Beats: ${BEATS_PORT:-5044}"
echo ""
echo "IMPORTANT: Configure inputs in the Graylog web UI:"
echo " System -> Inputs -> Select input type -> Launch"
echo ""
echo "SECURITY: Change GRAYLOG_PASSWORD_SECRET and"
echo "GRAYLOG_ROOT_PASSWORD in service.env!"
echo "========================================="

graylog/logs.sh Executable file

@@ -0,0 +1,11 @@
#!/bin/bash
source "${AGENT_PATH}/common.sh"
_check_required_env_vars "CONTAINER_NAME"
# LOGS SCRIPT
# Shows the container logs
echo "Graylog logs:"
_grey_start
docker compose -p "${CONTAINER_NAME}" logs "$@"
_grey_end

graylog/ports.sh Executable file

@@ -0,0 +1,13 @@
#!/bin/bash
source "${AGENT_PATH}/common.sh"
_check_required_env_vars "WEB_PORT" "GELF_UDP_PORT" "GELF_TCP_PORT" "SYSLOG_UDP_PORT" "SYSLOG_TCP_PORT" "BEATS_PORT"
# PORTS SCRIPT
# Lists the exposed ports
echo "${WEB_PORT:-9000}"
echo "${GELF_UDP_PORT:-12201}"
echo "${GELF_TCP_PORT:-12202}"
echo "${SYSLOG_UDP_PORT:-1514}"
echo "${SYSLOG_TCP_PORT:-1515}"
echo "${BEATS_PORT:-5044}"

graylog/restore.sh Executable file

@@ -0,0 +1,21 @@
#!/bin/bash
# shellcheck disable=SC1091
source "${AGENT_PATH}/common.sh"
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
source "${SCRIPT_DIR}/_volumes.sh"
_check_required_env_vars "CONTAINER_NAME"
# RESTORE SCRIPT
# Restores Graylog data from a backup
# Uninstall containers before restore
./uninstall.sh || _die "Failed to uninstall service before restore"
# Restore data from backup file
# shellcheck disable=SC2046
restore_items $(get_graylog_volumes) || _die "Failed to restore data from backup file"
# Reinstall service
./install.sh || _die "Failed to reinstall service after restore"
echo "Restore complete! Graylog is running again."

graylog/ssh.sh Executable file

@@ -0,0 +1,8 @@
#!/bin/bash
source "${AGENT_PATH}/common.sh"
_check_required_env_vars "CONTAINER_NAME"
# SSH SCRIPT
# Opens a shell inside the main Graylog container
docker exec -it "${CONTAINER_NAME}" /bin/bash

graylog/start.sh Executable file

@@ -0,0 +1,15 @@
#!/bin/bash
source "${AGENT_PATH}/common.sh"
_check_required_env_vars "CONTAINER_NAME" "GRAYLOG_ROOT_PASSWORD"
# START SCRIPT
# The start script is required for all templates.
# It is used to start the service on the server.
# Convert plain text password to SHA256 for Graylog
export GRAYLOG_ROOT_PASSWORD_SHA2=$(echo -n "${GRAYLOG_ROOT_PASSWORD}" | sha256sum | cut -d' ' -f1)
docker compose -p "${CONTAINER_NAME}" up -d || _die "Failed to start Graylog stack"
echo "Graylog stack started"
echo "Access Graylog at http://localhost:${WEB_PORT:-9000}"

graylog/status.sh Executable file

@@ -0,0 +1,43 @@
#!/bin/bash
source "${AGENT_PATH}/common.sh"
_check_required_env_vars "CONTAINER_NAME"
# STATUS SCRIPT
# The status script is REQUIRED.
# It is used to return the status of the service.
# Must output exactly one of: Running, Stopped, Error, Unknown
# Check if main graylog container exists
if ! docker ps -a --format "{{.Names}}" | grep -q "^${CONTAINER_NAME}$"; then
    echo "Unknown"
    exit 0
fi
# Check all container states
GRAYLOG_STATE=$(docker inspect -f '{{.State.Status}}' "$CONTAINER_NAME" 2>/dev/null)
MONGODB_STATE=$(docker inspect -f '{{.State.Status}}' "${CONTAINER_NAME}_mongodb" 2>/dev/null)
OPENSEARCH_STATE=$(docker inspect -f '{{.State.Status}}' "${CONTAINER_NAME}_opensearch" 2>/dev/null)
# All must be running for "Running" status
if [ "$GRAYLOG_STATE" = "running" ] && [ "$MONGODB_STATE" = "running" ] && [ "$OPENSEARCH_STATE" = "running" ]; then
echo "Running"
exit 0
fi
# Any stopped means "Stopped"
if [ "$GRAYLOG_STATE" = "exited" ] || [ "$GRAYLOG_STATE" = "stopped" ] || \
[ "$MONGODB_STATE" = "exited" ] || [ "$MONGODB_STATE" = "stopped" ] || \
[ "$OPENSEARCH_STATE" = "exited" ] || [ "$OPENSEARCH_STATE" = "stopped" ]; then
echo "Stopped"
exit 0
fi
# Any restarting or paused means "Error"
if [ "$GRAYLOG_STATE" = "restarting" ] || [ "$GRAYLOG_STATE" = "paused" ] || \
[ "$MONGODB_STATE" = "restarting" ] || [ "$MONGODB_STATE" = "paused" ] || \
[ "$OPENSEARCH_STATE" = "restarting" ] || [ "$OPENSEARCH_STATE" = "paused" ]; then
echo "Error"
exit 0
fi
echo "Unknown"

graylog/stop.sh Executable file

@@ -0,0 +1,11 @@
#!/bin/bash
source "${AGENT_PATH}/common.sh"
_check_required_env_vars "CONTAINER_NAME"
# STOP SCRIPT
# The stop script is required for all templates.
# It is used to stop the service on the server.
docker compose -p "${CONTAINER_NAME}" stop || _die "Failed to stop Graylog stack"
echo "Graylog stack stopped"

graylog/template_info.env Normal file

@@ -0,0 +1,22 @@
# DO NOT EDIT THIS FILE FOR YOUR SERVICE!
# This file is replaced from the template whenever there is an update.
# Edit the service.env file to make changes.
# Template metadata
TEMPLATE=graylog
TEMPLATE_VERSION="1.0.0"
TEMPLATE_DESCRIPTION="Graylog log management platform with OpenSearch and MongoDB. Enterprise-grade centralized log collection, analysis, and alerting."
TEMPLATE_AUTHOR="Dropshell"
TEMPLATE_LICENSE="MIT"
TEMPLATE_HOMEPAGE="https://github.com/dropshell/templates"
TEMPLATE_TAGS="logging,monitoring,graylog,opensearch,mongodb,siem"
TEMPLATE_REQUIRES="docker,docker-compose"
TEMPLATE_CONFLICTS=""
TEMPLATE_MIN_MEMORY="4096"
TEMPLATE_MIN_DISK="10000"
TEMPLATE_CATEGORY="monitoring"
# System requirements
REQUIRES_HOST_ROOT=false
REQUIRES_DOCKER=true
REQUIRES_DOCKER_ROOT=false

graylog/uninstall.sh Executable file

@@ -0,0 +1,22 @@
#!/bin/bash
source "${AGENT_PATH}/common.sh"
_check_required_env_vars "CONTAINER_NAME"
# UNINSTALL SCRIPT
# The uninstall script is required for all templates.
# It is used to uninstall the service from the server.
# IMPORTANT: This script MUST preserve data volumes!
# Stop and remove containers
docker compose -p "${CONTAINER_NAME}" down 2>/dev/null || true
# Verify containers are removed
for suffix in "" "_mongodb" "_opensearch"; do
    container="${CONTAINER_NAME}${suffix}"
    if docker ps -a --format "{{.Names}}" | grep -q "^${container}$"; then
        docker rm -f "$container" 2>/dev/null || true
    fi
done
echo "Uninstallation of ${CONTAINER_NAME} complete."
echo "Data volumes preserved. To remove all data, use destroy.sh"

logserver/logs.sh Normal file

@@ -0,0 +1,7 @@
#!/bin/bash
source "${AGENT_PATH}/common.sh"
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
_check_required_env_vars "CONTAINER_NAME"
cd "$SCRIPT_DIR" || _die "Failed to change to script directory"
docker compose logs "$@"


@@ -16,6 +16,8 @@ services:
      timeout: 5s
      retries: 5
      start_period: 30s
    networks:
      - shlink-net
  shlink:
    image: ${IMAGE_REGISTRY}/${IMAGE_REPO}:${IMAGE_TAG}
@@ -37,3 +39,9 @@ services:
      DB_PORT: 3306
    ports:
      - "${HTTP_PORT}:8080"
    networks:
      - shlink-net

networks:
  shlink-net:
    name: ${CONTAINER_NAME}_net


@@ -3,10 +3,5 @@ source "${AGENT_PATH}/common.sh"
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
_check_required_env_vars "CONTAINER_NAME"
# Export variables for docker compose
export CONTAINER_NAME DATA_PATH HTTP_PORT DEFAULT_DOMAIN IS_HTTPS_ENABLED GEOLITE_LICENSE_KEY INITIAL_API_KEY
export DB_NAME DB_USER DB_PASSWORD DB_ROOT_PASSWORD
export IMAGE_REGISTRY IMAGE_REPO IMAGE_TAG DB_IMAGE_REGISTRY DB_IMAGE_REPO DB_IMAGE_TAG
cd "$SCRIPT_DIR" || _die "Failed to change to script directory"
docker compose logs --tail=100 -f
docker compose logs "$@"