Benchmarks

One of the reasons Spage development was started was to achieve better performance than Ansible. This document showcases the performance differences between Spage and Ansible in realistic scenarios.

What We're Benchmarking

Our benchmarks compare six different execution methods across multiple types of realistic playbooks:

Execution Methods

  1. Ansible - Traditional Ansible playbook execution using ansible-playbook
  2. Bash - Equivalent native bash scripts performing the same operations (baseline for optimal performance)
  3. Spage Run - Direct execution using the spage run command (interprets playbooks at runtime)
  4. Spage Temporal - Spage execution using the Temporal workflow engine (SPAGE_EXECUTOR=temporal)
  5. Go Generated - Running generated Go code with spage generate and go run generated_tasks.go
  6. Compiled Binary - Pre-compiled binary execution with spage generate, go build, and ./generated_tasks. Generation and compilation are excluded from the timings; only the execution of the generated_tasks binary is measured. (Example invocations for all six methods are sketched below.)
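
For reference, the six methods are invoked roughly as follows. This is an illustrative sketch: the playbook name and generated-file paths are placeholders, and the exact arguments may differ from what the benchmark harness uses.

```bash
# 1. Ansible: traditional playbook execution
ansible-playbook playbook.yaml

# 2. Bash: equivalent native script (performance baseline)
./script.sh

# 3. Spage Run: interpret the playbook at runtime
spage run playbook.yaml

# 4. Spage Temporal: same as above, routed through the Temporal engine
SPAGE_EXECUTOR=temporal spage run playbook.yaml

# 5. Go Generated: generate Go code, then compile-and-run in one step
spage generate playbook.yaml
go run generated_tasks.go

# 6. Compiled Binary: build once up front; only the final execution is timed
spage generate playbook.yaml
go build -o generated_tasks generated_tasks.go
./generated_tasks
```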

Playbook Types

  • File Operations - Tasks involving file creation, modification, and management
  • Command Operations - Shell command execution and system interactions
  • Jinja Templating - Complex template rendering and variable substitution

Methodology

Each benchmark:

  • Runs identical tasks across all execution methods
  • Measures wall-clock execution time (end-to-end performance)
  • Includes realistic workloads that mirror common automation scenarios
  • Averages results across multiple iterations for statistical accuracy
  • Uses Go's built-in benchmarking framework for precise measurements
  • Includes bash scripts as a performance baseline representing optimal native execution

The benchmarks are run on the same hardware under identical system conditions to ensure a fair comparison. Bash scripts provide a practical performance ceiling, since they are the most direct way to execute the same operations without any abstraction layers.
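
The actual measurements come from Go's built-in benchmarking framework (go test -bench), but conceptually each data point is the wall-clock time of one complete end-to-end invocation. You can approximate the same measurement from a shell; a rough sketch, assuming a file_playbook.yaml in the current directory:

```bash
# Rough shell approximation of what the Go harness measures:
# the total wall-clock time of a full run, repeated and averaged.
for i in 1 2 3 4 5; do
  time ansible-playbook file_playbook.yaml > /dev/null
done
```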

Results

File operations

This benchmark tests file system operations including creating files, setting permissions, managing symlinks, and directory operations. It exercises the file, copy, stat, and assert modules with various file states and permission changes.

Playbook: file_playbook.yaml
Bash equivalent: file_script.sh
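
As an illustration of the workload, the native bash form of these operations looks roughly like the sketch below. The paths are hypothetical, and this is not the actual file_script.sh:

```bash
#!/usr/bin/env bash
set -euo pipefail

# Directory operations
mkdir -p /tmp/spage_bench/subdir

# File creation and permission changes (mirrors the file and copy modules)
touch /tmp/spage_bench/example.txt
chmod 0644 /tmp/spage_bench/example.txt

# Symlink management
ln -sf /tmp/spage_bench/example.txt /tmp/spage_bench/example_link

# State checks (mirrors the stat and assert modules)
[ -L /tmp/spage_bench/example_link ] || { echo "assert failed" >&2; exit 1; }
```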

Performance Comparison

Method            Duration (ms)   Speedup vs Ansible
Spage Temporal    1338            2.3x
Ansible           3098            1.0x
Spage Run         34              91.1x
Compiled Binary   23              134.6x
Bash              20              154.9x
Go Generated      149             20.7x

Command operations

This benchmark focuses on shell command execution using the command module. It tests direct command execution, argument handling, and loop operations with commands like touch, mkdir, and echo. This represents typical system administration tasks.

Playbook: command_playbook.yaml
Bash equivalent: command_script.sh
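
The bash-equivalent workload looks roughly like this sketch; the paths and item names are hypothetical, and the actual command_script.sh may differ in detail:

```bash
#!/usr/bin/env bash
set -euo pipefail

# Direct command execution with argument handling
mkdir -p /tmp/spage_cmd_bench
echo "starting command benchmark"

# Loop operations over a list of items, as in the playbook's loops
for name in one two three; do
  touch "/tmp/spage_cmd_bench/${name}.txt"
done
```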

Performance Comparison

Method            Duration (ms)   Speedup vs Ansible
Spage Temporal    746             1.9x
Ansible           1468            1.0x
Spage Run         26              56.4x
Compiled Binary   16              91.7x
Bash              7               209.7x
Go Generated      149             9.8x

Jinja operations

This benchmark evaluates Jinja templating performance, including variable substitution, list operations, concatenation, and filter usage. It tests the set_fact, debug, and assert modules with complex data structures and template expressions, representing configuration management scenarios.

Playbook: jinja_tests_playbook.yaml
Bash equivalent: jinja_script.sh
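
Since bash has no Jinja engine, the bash equivalent performs the same logical operations with shell variables and arrays. A sketch of that shape, with purely illustrative values:

```bash
#!/usr/bin/env bash
set -euo pipefail

# Variable substitution and concatenation (bash analogue of set_fact + templating)
greeting="hello"
name="world"
message="${greeting}, ${name}"

# List operations (bash analogue of Jinja list filters)
items=(alpha beta gamma)
joined=$(IFS=,; echo "${items[*]}")  # like the join filter
count=${#items[@]}                   # like the length filter

# Assertions (mirrors the assert module)
[ "$message" = "hello, world" ] || exit 1
[ "$count" -eq 3 ] || exit 1

# Output (mirrors the debug module)
echo "$message / $joined / $count"
```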

Performance Comparison

Method            Duration (ms)   Speedup vs Ansible
Spage Temporal    426             1.6x
Ansible           685             1.0x
Spage Run         17              40.2x
Compiled Binary   7               97.8x
Bash              1               685.0x
Go Generated      156             4.3x