# Benchmarks
One of the motivations for starting development on Spage was to achieve better performance than Ansible. This document showcases the performance differences between Spage and Ansible across realistic scenarios.
## What We're Benchmarking
Our benchmarks compare six different execution methods across multiple types of realistic playbooks:
### Execution Methods
- Ansible - Traditional Ansible playbook execution using `ansible-playbook`
- Bash - Equivalent native bash scripts performing the same operations (the baseline for optimal performance)
- Spage Run - Direct execution using the `spage run` command (interprets playbooks at runtime)
- Spage Temporal - Spage execution using the Temporal workflow engine (`SPAGE_EXECUTOR=temporal`)
- Go Generated - Running generated Go code with `spage generate` and `go run generated_tasks.go`
- Compiled Binary - Pre-compiled binary execution with `spage generate`, `go build`, and `./generated_tasks`. Note that generation and compilation are excluded from the benchmark timings; only the execution of the `generated_tasks` binary is measured.
### Playbook Types
- File Operations - Tasks involving file creation, modification, and management
- Command Operations - Shell command execution and system interactions
- Jinja Templating - Complex template rendering and variable substitution
## Methodology
Each benchmark:
- Runs identical tasks across all execution methods
- Measures wall-clock execution time (end-to-end performance)
- Includes realistic workloads that mirror common automation scenarios
- Averages results across multiple iterations for statistical accuracy
- Uses Go's built-in benchmarking framework for precise measurements
- Includes bash scripts as a performance baseline representing optimal native execution
The benchmarks are run on the same hardware with identical system conditions to ensure fair comparison. Bash scripts provide a theoretical performance ceiling since they represent the most direct way to execute the same operations without any abstraction layers.
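The actual harness uses Go's `testing` framework, but the wall-clock averaging can be sketched in a few lines of standalone bash. Here `true` stands in for whichever execution method is being measured:

```shell
#!/usr/bin/env bash
# Minimal timing sketch: run a command several times and report the
# average wall-clock duration in milliseconds. Not the real harness.
cmd="true"
iterations=5
total_ns=0
for _ in $(seq "$iterations"); do
  start=$(date +%s%N)          # nanosecond timestamp (GNU date)
  $cmd
  end=$(date +%s%N)
  total_ns=$(( total_ns + end - start ))
done
avg_ms=$(( total_ns / iterations / 1000000 ))
echo "avg_ms=${avg_ms}"
```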
## Benchmarks
### File operations

This benchmark tests file system operations, including creating files, setting permissions, managing symlinks, and directory operations. It exercises the `file`, `copy`, `stat`, and `assert` modules with various file states and permission changes.

Playbook: `file_playbook.yaml`

Bash equivalent: `file_script.sh`
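For a sense of what the bash baseline does, here is a minimal sketch of the same kind of file operations. The directory and filenames are hypothetical, not taken from `file_script.sh`:

```shell
# Sketch of file-operation tasks: create, chmod, symlink, verify.
dir=$(mktemp -d)
printf 'example\n' > "$dir/app.conf"    # file creation (file/copy modules)
chmod 0644 "$dir/app.conf"              # permission change
ln -s "$dir/app.conf" "$dir/app.link"   # symlink management
[ -L "$dir/app.link" ] && status=ok     # stat/assert-style verification
echo "$status"
```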
#### Performance Comparison

| Method | Duration (ms) | Factor vs Ansible |
|---|---|---|
| Ansible | 3098 | 1.0x |
| Spage Temporal | 1338 | 2.3x |
| Go Generated | 149 | 20.7x |
| Spage Run | 34 | 91.1x |
| Compiled Binary | 23 | 134.6x |
| Bash | 20 | 154.9x |
### Command operations

This benchmark focuses on shell command execution using the `command` module. It tests direct command execution, argument handling, and loop operations with commands like `touch`, `mkdir`, and `echo`. This represents typical system administration tasks.

Playbook: `command_playbook.yaml`

Bash equivalent: `command_script.sh`
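The bash baseline for this benchmark boils down to loops over simple commands. A minimal sketch, with an illustrative working directory rather than the paths used by `command_script.sh`:

```shell
# Sketch of the command-operation workload: mkdir, touch, echo in a loop.
workdir=$(mktemp -d)
mkdir -p "$workdir/subdir"
for i in 1 2 3; do
  touch "$workdir/subdir/file_$i"
done
echo "created $(ls "$workdir/subdir" | wc -l) files"
```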
#### Performance Comparison

| Method | Duration (ms) | Factor vs Ansible |
|---|---|---|
| Ansible | 1468 | 1.0x |
| Spage Temporal | 746 | 1.9x |
| Go Generated | 149 | 9.8x |
| Spage Run | 26 | 56.4x |
| Compiled Binary | 16 | 91.7x |
| Bash | 7 | 209.7x |
### Jinja operations

This benchmark evaluates Jinja templating performance, including variable substitution, list operations, concatenation, and filter usage. It tests the `set_fact`, `debug`, and `assert` modules with complex data structures and template expressions, representing configuration management scenarios.

Playbook: `jinja_tests_playbook.yaml`

Bash equivalent: `jinja_script.sh`
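As a rough bash analogue of what the templating tasks compute (the playbook itself uses Jinja expressions; the variable names below are made up for illustration):

```shell
# Bash analogue of Jinja-style operations: substitution, concatenation,
# and an upper-case "filter"-like transform.
name="world"
greeting="hello, ${name}"                   # variable substitution
list_a="one two"
list_b="three"
combined="${list_a} ${list_b}"              # list concatenation
upper=$(printf '%s' "$greeting" | tr 'a-z' 'A-Z')   # akin to "| upper"
echo "$upper"
```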
#### Performance Comparison

| Method | Duration (ms) | Factor vs Ansible |
|---|---|---|
| Ansible | 685 | 1.0x |
| Spage Temporal | 426 | 1.6x |
| Go Generated | 156 | 4.3x |
| Spage Run | 17 | 40.2x |
| Compiled Binary | 7 | 97.8x |
| Bash | 1 | 685.0x |