Scaling Infrastructure

Comprehensive guide to scaling PteroCA and Pterodactyl infrastructure for high availability and performance

Scaling your hosting infrastructure is crucial for ensuring high availability, performance, and flexibility of services. This guide covers both PteroCA panel scaling and Pterodactyl node expansion to handle growing customer bases and server counts.

Overview

Scaling infrastructure in the PteroCA ecosystem involves two main components:

  1. PteroCA Panel - The client-facing billing and management interface

  2. Pterodactyl Panel & Nodes - The game server management platform

Each component has different scaling strategies and requirements. This guide provides detailed instructions for scaling from a single-server setup to a distributed, high-availability architecture.


PteroCA Architecture

Understanding PteroCA's architecture is essential for effective scaling decisions.

System Components

Application Layer:

  • Web Server: Nginx serving PHP-FPM (PHP 8.2+)

  • Application: Symfony 7 framework with Doctrine ORM

  • Background Jobs: Symfony Messenger for async task processing

Data Layer:

  • Database: MySQL 8.0 or MariaDB 10.2+ (UTF8MB4 charset)

  • Session Storage: File-based (default) or Redis (recommended for scaling)

  • Cache: Filesystem (default), Redis or APCu (recommended for scaling)

  • Queue: Database-backed (default), Redis or RabbitMQ (recommended for scaling)

Static Assets:

  • Uploads: User files, logos, favicons

  • Compiled Assets: CSS, JavaScript (Bootstrap 5, EasyAdmin theme)

  • Theme Assets: Custom theme files

How PteroCA Works

PteroCA serves as a client-facing interface that interacts with your Pterodactyl panel through its API. It maintains essential metadata about purchased servers—such as specifications, billing details, and user associations—within its own database. This information is crucial for managing user accounts, billing, and service provisioning.

However, the actual server configurations, deployments, and runtime management are handled by Pterodactyl. All operational aspects, including server creation, resource allocation, and lifecycle management, occur within the Pterodactyl environment. PteroCA does not directly manage or store the operational data of the game servers themselves.

This separation ensures that while PteroCA manages the business and user interaction layer, Pterodactyl remains responsible for the technical management of game servers.

Data Flow

  1. User Request → Nginx → PHP-FPM → PteroCA Application

  2. Server Purchase → Database (order data) → Queue (async tasks) → Pterodactyl API (server creation)

  3. Session Data → Session Storage (files or Redis)

  4. Cache Queries → Cache Layer (filesystem, Redis, or APCu)

  5. Background Jobs → Queue System → Job Workers → External Services (email, API calls)


Single Instance Deployment

For small to medium deployments (up to ~500 customers), a single-instance setup is sufficient.

Small Deployment (50-200 customers):

  • CPU: 4 cores

  • RAM: 8GB

  • Disk: 50GB SSD

  • Network: 100 Mbps

Medium Deployment (200-500 customers):

  • CPU: 8 cores

  • RAM: 16GB

  • Disk: 100GB SSD

  • Network: 1 Gbps

Configuration for Single Instance

Environment Variables (.env):
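A minimal sketch using standard Symfony variable names; the exact keys shipped with PteroCA may differ, so treat these values as examples only:

```env
# .env.local (example values, not defaults)
APP_ENV=prod
APP_DEBUG=0
APP_SECRET=replace-with-a-random-32-character-string
DATABASE_URL="mysql://pteroca:password@127.0.0.1:3306/pteroca?serverVersion=8.0&charset=utf8mb4"
MESSENGER_TRANSPORT_DSN=doctrine://default
```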

PHP-FPM Configuration (www.conf):
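A starting point for a 4-core / 8 GB single-instance host; tune against real memory usage rather than copying these numbers verbatim:

```ini
; /etc/php/8.2/fpm/pool.d/www.conf (excerpt)
pm = dynamic
pm.max_children = 50
pm.start_servers = 10
pm.min_spare_servers = 5
pm.max_spare_servers = 15
pm.max_requests = 500
```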

OPcache Settings (php.ini):
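Typical production OPcache settings:

```ini
; php.ini
opcache.enable=1
opcache.memory_consumption=256
opcache.interned_strings_buffer=16
opcache.max_accelerated_files=20000
opcache.validate_timestamps=0   ; set to 1 if you deploy without reloading PHP-FPM
```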

Nginx Configuration:
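A sketch of a standard Symfony front-controller server block; the domain and paths are examples:

```nginx
# /etc/nginx/sites-available/pteroca.conf
server {
    listen 80;
    server_name billing.example.com;
    root /var/www/pteroca/public;

    client_max_body_size 20m;

    location / {
        try_files $uri /index.php$is_args$args;
    }

    location ~ ^/index\.php(/|$) {
        fastcgi_pass unix:/run/php/php8.2-fpm.sock;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $realpath_root$fastcgi_script_name;
        fastcgi_param DOCUMENT_ROOT $realpath_root;
        internal;
    }

    # Deny access to any other PHP file
    location ~ \.php$ {
        return 404;
    }
}
```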

Maintenance Tasks

Daily:

  • Monitor disk space (logs, cache, uploads)

  • Check error logs for issues

  • Verify backup completion

Weekly:

  • Review application performance

  • Check database size and optimization needs

  • Update software packages

Monthly:

  • Analyze growth trends

  • Plan capacity increases

  • Review security patches


Horizontal Scaling (Multiple Instances)

For larger deployments (500+ customers) or high-availability requirements, distribute the load across multiple instances.

Infrastructure Requirements

Minimum HA Setup:

  • Web Instances: 2+ application servers

  • Load Balancer: 1 (HAProxy or Nginx)

  • Database: 1 master + 1 replica (read scaling)

  • Redis: 1 instance (sessions, cache, queues)

  • Shared Storage: NFS or S3-compatible for uploads

Recommended HA Setup:

  • Web Instances: 3+ application servers

  • Load Balancer: 2 (active-passive or active-active)

  • Database: 1 master + 2 replicas

  • Redis: 3 instances (Sentinel or Cluster mode)

  • Shared Storage: Distributed filesystem or object storage

Required Configuration Changes

1. Session Storage (Redis)

Install Redis:
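On Ubuntu/Debian (the PHP extension package name assumes the sury.org PHP repository):

```bash
sudo apt update
sudo apt install -y redis-server php8.2-redis
sudo systemctl enable --now redis-server
```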

PteroCA Configuration (config/packages/framework.yaml):
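A minimal sketch; Symfony can build a Redis session handler directly from a DSN, and the `REDIS_URL` variable name is an assumption:

```yaml
# config/packages/framework.yaml (excerpt)
framework:
    session:
        handler_id: '%env(REDIS_URL)%'
        cookie_secure: auto
        cookie_samesite: lax
```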

Environment Variables:
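```env
# .env.local
REDIS_URL=redis://127.0.0.1:6379
```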

2. Cache Backend (Redis)

Cache Configuration (config/packages/cache.yaml):
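A minimal sketch using Symfony's built-in Redis cache adapter:

```yaml
# config/packages/cache.yaml (excerpt)
framework:
    cache:
        app: cache.adapter.redis
        default_redis_provider: '%env(REDIS_URL)%'
```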

Environment Variables:
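The same `REDIS_URL` defined for sessions can be reused; shown again for completeness:

```env
# .env.local
REDIS_URL=redis://127.0.0.1:6379
```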

3. Message Queue (Redis)

Messenger Configuration (config/packages/messenger.yaml):
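A minimal sketch; the transport name is an assumption and should match whatever PteroCA's messenger configuration already defines:

```yaml
# config/packages/messenger.yaml (excerpt)
framework:
    messenger:
        transports:
            async: '%env(MESSENGER_TRANSPORT_DSN)%'
```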

Environment Variables:
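```env
# .env.local -- Redis Streams transport ("messages" is an example stream name)
MESSENGER_TRANSPORT_DSN=redis://127.0.0.1:6379/messages
```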

4. Shared File Storage

Option A: NFS Mount
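A sketch for each web instance; the NFS server IP, export path, and upload directory are examples:

```bash
sudo apt install -y nfs-common
sudo mkdir -p /var/www/pteroca/public/uploads
sudo mount -t nfs 10.0.0.10:/exports/pteroca-uploads /var/www/pteroca/public/uploads

# Persist across reboots
echo '10.0.0.10:/exports/pteroca-uploads /var/www/pteroca/public/uploads nfs defaults,_netdev 0 0' \
  | sudo tee -a /etc/fstab
```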

Option B: S3-Compatible Storage. Install an S3 adapter (via a Composer package) and configure VichUploader to store uploads in the S3 backend.

5. Load Balancer Configuration

HAProxy Example (haproxy.cfg):
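A sketch with two backends and cookie-based session affinity; IPs and the certificate path are examples:

```
# /etc/haproxy/haproxy.cfg (excerpt)
frontend pteroca_https
    bind *:443 ssl crt /etc/haproxy/certs/pteroca.pem
    mode http
    default_backend pteroca_web

backend pteroca_web
    mode http
    balance roundrobin
    option httpchk GET /
    cookie SERVERID insert indirect nocache
    server web1 10.0.0.11:80 check cookie web1
    server web2 10.0.0.12:80 check cookie web2
```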

Nginx Load Balancer Example:
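An equivalent sketch with nginx as the load balancer (IPs and certificate paths are examples):

```nginx
upstream pteroca_backend {
    least_conn;
    server 10.0.0.11:80 max_fails=3 fail_timeout=30s;
    server 10.0.0.12:80 max_fails=3 fail_timeout=30s;
}

server {
    listen 443 ssl;
    server_name billing.example.com;
    ssl_certificate     /etc/letsencrypt/live/billing.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/billing.example.com/privkey.pem;

    location / {
        proxy_pass http://pteroca_backend;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
```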

Deployment Workflow

  1. Deploy Code to All Instances:

  2. Run Migrations (One Instance Only):

  3. Restart Services:

  4. Verify Health:
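A sketch of the four steps as shell commands, assuming a git-based deployment under /var/www/pteroca and Ubuntu-style service names; adapt paths, branches, and unit names to your environment:

```bash
# 1. Deploy code to all instances
cd /var/www/pteroca
git pull origin main
composer install --no-dev --optimize-autoloader

# 2. Run migrations (one instance only)
php bin/console doctrine:migrations:migrate --no-interaction

# 3. Restart services
php bin/console cache:clear
sudo systemctl reload php8.2-fpm nginx
sudo systemctl restart pteroca-messenger-worker   # hypothetical worker unit name

# 4. Verify health
curl -I https://billing.example.com/
```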


Database Scaling

Read Replicas

For read-heavy workloads, configure MySQL read replicas.

Master-Replica Replication Setup:

On Master (my.cnf):
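A minimal sketch using MySQL 8.0 option names (MariaDB uses `expire_logs_days` instead of `binlog_expire_logs_seconds`):

```ini
# Primary server
[mysqld]
server-id     = 1
log_bin       = /var/log/mysql/mysql-bin.log
binlog_format = ROW
binlog_expire_logs_seconds = 604800   # keep 7 days of binary logs
```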

On Replica (my.cnf):
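```ini
# Replica server
[mysqld]
server-id = 2
relay_log = /var/log/mysql/mysql-relay-bin.log
read_only = ON
```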

Doctrine Configuration (config/packages/doctrine.yaml):
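A sketch of DoctrineBundle's read-replica support; the replica host, credentials, and environment variable names are assumptions:

```yaml
# config/packages/doctrine.yaml (excerpt)
doctrine:
    dbal:
        url: '%env(resolve:DATABASE_URL)%'   # primary, used for writes
        replicas:
            replica1:
                host: 10.0.0.21
                port: 3306
                dbname: pteroca
                user: pteroca_ro
                password: '%env(DATABASE_REPLICA_PASSWORD)%'
```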

Connection Pooling

Use ProxySQL or MaxScale for connection pooling:

ProxySQL Configuration:
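A sketch of the ProxySQL admin commands for a writer hostgroup (10) and a reader hostgroup (20); hostnames and credentials are examples:

```sql
-- Connect to the ProxySQL admin interface (default: mysql -u admin -p -h 127.0.0.1 -P 6032)
INSERT INTO mysql_servers (hostgroup_id, hostname, port) VALUES (10, '10.0.0.20', 3306); -- primary
INSERT INTO mysql_servers (hostgroup_id, hostname, port) VALUES (20, '10.0.0.21', 3306); -- replica
INSERT INTO mysql_users (username, password, default_hostgroup) VALUES ('pteroca', 'secret', 10);
INSERT INTO mysql_query_rules (rule_id, active, match_digest, destination_hostgroup, apply)
VALUES (1, 1, '^SELECT', 20, 1);

LOAD MYSQL SERVERS TO RUNTIME;     SAVE MYSQL SERVERS TO DISK;
LOAD MYSQL USERS TO RUNTIME;       SAVE MYSQL USERS TO DISK;
LOAD MYSQL QUERY RULES TO RUNTIME; SAVE MYSQL QUERY RULES TO DISK;
```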

PteroCA Connection:
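Point the application at ProxySQL's SQL port (6033 by default) instead of MySQL directly; host and credentials are examples:

```env
# .env.local
DATABASE_URL="mysql://pteroca:secret@10.0.0.30:6033/pteroca?serverVersion=8.0&charset=utf8mb4"
```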

Database Optimization

Regular Maintenance:
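Run statistics and optimization passes off-peak; the database name is an example:

```bash
mysqlcheck --analyze --all-databases -u root -p
mysqlcheck --optimize pteroca -u root -p
```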

Index Optimization:
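A sketch for spotting unused indexes and checking query plans; the table and column names are illustrative, not PteroCA's actual schema:

```sql
-- Indexes that have never been used (MySQL sys schema)
SELECT * FROM sys.schema_unused_indexes WHERE object_schema = 'pteroca';

-- Inspect a slow query's plan before adding an index
EXPLAIN SELECT * FROM server WHERE user_id = 42;
```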


Caching Strategies

Redis Caching

Redis Installation:

Redis Configuration (/etc/redis/redis.conf):
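Key settings for a dedicated cache/session/queue instance:

```conf
# /etc/redis/redis.conf (excerpt)
bind 127.0.0.1 10.0.0.5        # localhost plus the private network only
requirepass change-me
maxmemory 2gb
maxmemory-policy allkeys-lru   # use noeviction if this instance also holds sessions and queue messages
appendonly yes                 # persistence, so queued messages survive a restart
```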

PteroCA Cache Configuration:

APCu (Alternative for Single Instance)

Installation:
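On Ubuntu/Debian (package name assumes the sury.org PHP repository; `pecl install apcu` is the alternative):

```bash
sudo apt install -y php8.2-apcu
sudo systemctl reload php8.2-fpm
```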

Configuration (php.ini):
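```ini
apc.enabled=1
apc.shm_size=128M
apc.enable_cli=0
```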

PteroCA Configuration:
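Switch the Symfony application cache to the APCu adapter:

```yaml
# config/packages/cache.yaml (excerpt)
framework:
    cache:
        app: cache.adapter.apcu
```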

Cache Invalidation

Manual Cache Clear:
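```bash
php bin/console cache:clear                 # Symfony application cache
php bin/console cache:pool:clear cache.app  # application cache pool only
redis-cli -a <password> FLUSHDB             # only if the Redis DB is dedicated to PteroCA's cache
```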

Programmatic Cache Invalidation: settings are cached for 24 hours via SettingService; after a setting is updated, the cache is invalidated automatically.


Performance Optimization

PHP Configuration

Production Settings (php.ini):
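```ini
memory_limit = 512M
max_execution_time = 60
upload_max_filesize = 20M
post_max_size = 20M
display_errors = Off
log_errors = On
expose_php = Off
realpath_cache_size = 4096k
realpath_cache_ttl = 600
```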

PHP-FPM Tuning

For 8GB RAM Server (www.conf):
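Illustrative values, assuming roughly 80 MB per PHP-FPM worker; verify against the calculation below this section:

```ini
pm = dynamic
pm.max_children = 60
pm.start_servers = 15
pm.min_spare_servers = 10
pm.max_spare_servers = 20
pm.max_requests = 500
```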

For 16GB RAM Server:
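```ini
pm = dynamic
pm.max_children = 120
pm.start_servers = 30
pm.min_spare_servers = 20
pm.max_spare_servers = 40
pm.max_requests = 500
```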

Calculate max_children:
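The usual rule of thumb: pm.max_children = (RAM available to PHP-FPM) / (average worker size). For example, an 8 GB host that reserves about 3 GB for the OS, MySQL, and Redis, with workers averaging 80 MB, gives (8192 MB − 3072 MB) / 80 MB ≈ 64. Measure your real per-worker size rather than guessing:

```bash
# Average resident size of running PHP-FPM workers, in MB (process name assumes Debian/Ubuntu)
ps -o rss= -C php-fpm8.2 | awk '{sum+=$1; n++} END {print int(sum/n/1024) " MB"}'
```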

Nginx Optimization
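Commonly tuned directives for a PHP application behind nginx:

```nginx
# nginx.conf (excerpt)
worker_processes auto;

events {
    worker_connections 4096;
}

http {
    sendfile on;
    tcp_nopush on;
    keepalive_timeout 30;
    gzip on;
    gzip_types text/css application/javascript application/json image/svg+xml;
    fastcgi_buffers 16 16k;
    fastcgi_buffer_size 32k;
}
```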

Database Performance

MySQL Configuration (my.cnf):
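Illustrative values for a dedicated database host; size the buffer pool to 50-70% of available RAM:

```ini
[mysqld]
innodb_buffer_pool_size        = 4G
innodb_log_file_size           = 512M
innodb_flush_method            = O_DIRECT
innodb_flush_log_at_trx_commit = 1   # 2 trades a little durability for write throughput
max_connections                = 300
table_open_cache               = 4000
tmp_table_size                 = 64M
max_heap_table_size            = 64M
```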


Scaling Pterodactyl

Single Panel Architecture

It's recommended to maintain a single instance of the Pterodactyl panel that manages all nodes. This architecture simplifies management and allows centralized monitoring of all servers.

Why Single Panel?

  • Centralized user management

  • Unified server inventory

  • Simplified API integration with PteroCA

  • Consistent configuration across all nodes

Adding Nodes

To increase server capacity, add new nodes to the existing panel instance.

Step 1: Create a Location

In the Pterodactyl panel:

  1. Navigate to Admin → Locations

  2. Click Create New

  3. Configure location:

    • Short Code: us-east (identifier)

    • Description: "US East Coast Data Center"

  4. Click Create Location

Location Purposes:

  • Group nodes by physical location

  • Geographic distribution for latency reduction

  • Organizational separation (production vs. staging)

Step 2: Add a New Node

  1. Navigate to Admin → Nodes

  2. Click Create New

  3. Fill in required information:

Basic Settings:

  • Name: us-east-node-01

  • Description: "Production node in Virginia"

  • Location: Select created location

  • FQDN: us-east-node-01.example.com (must resolve via DNS)

  • Communicate Over SSL: Recommended (Yes)

  • Behind Proxy: Yes (if using CDN/proxy)

Configuration:

  • Memory: Total RAM available (MB) - e.g., 32768 for 32GB

  • Memory Over-Allocation: Percentage to over-allocate (e.g., 0 for none, 10 for 10% more)

  • Disk Space: Total disk available (MB) - e.g., 500000 for ~488GB

  • Disk Over-Allocation: Percentage to over-allocate

Ports:

  • Daemon Port: 8080 (Wings communication)

  • Daemon SFTP Port: 2022 (SFTP access for users)

Advanced:

  • Daemon Server File Directory: /var/lib/pterodactyl/volumes

  • Maintenance Mode: Disabled (enable to prevent new servers)

  4. Click Create Node

  5. Copy the configuration displayed after creation

Step 3: Install Wings

On the new node server:

Install Docker:
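```bash
curl -sSL https://get.docker.com/ | CHANNEL=stable bash
sudo systemctl enable --now docker
```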

Install Wings:
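This assumes an x86_64 host; use the `wings_linux_arm64` asset on ARM, and check the official Pterodactyl documentation for the current release URL:

```bash
sudo mkdir -p /etc/pterodactyl
sudo curl -L -o /usr/local/bin/wings \
  "https://github.com/pterodactyl/wings/releases/latest/download/wings_linux_amd64"
sudo chmod u+x /usr/local/bin/wings
```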

Create Systemd Service (/etc/systemd/system/wings.service):
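A unit along the lines of the one suggested in the Pterodactyl documentation:

```ini
[Unit]
Description=Pterodactyl Wings Daemon
After=docker.service
Requires=docker.service
PartOf=docker.service

[Service]
User=root
WorkingDirectory=/etc/pterodactyl
LimitNOFILE=4096
PIDFile=/var/run/wings/daemon.pid
ExecStart=/usr/local/bin/wings
Restart=on-failure
StartLimitInterval=180
StartLimitBurst=30
RestartSec=5s

[Install]
WantedBy=multi-user.target
```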

Enable and Start:
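```bash
sudo systemctl daemon-reload
sudo systemctl enable --now wings
```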

Step 4: Configure Wings

  1. Create configuration file (/etc/pterodactyl/config.yml):

    • Paste the configuration copied from Pterodactyl panel

    • Or download via API:
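The panel's node Configuration tab provides an auto-deploy command; copy it exactly as shown there, since the flags below are illustrative and may differ between Wings versions:

```bash
cd /etc/pterodactyl
sudo wings configure --panel-url https://panel.example.com --token <api-token> --node <node-id>
```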

  2. Verify configuration:

Should contain:
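The file is generated by the panel and should not be hand-edited; a trimmed sketch of the fields to sanity-check (exact layout varies by Wings version):

```yaml
# /etc/pterodactyl/config.yml (excerpt)
uuid: <node-uuid>
token_id: <token-id>
token: <token>
api:
  host: 0.0.0.0
  port: 8080
  ssl:
    enabled: true
system:
  data: /var/lib/pterodactyl/volumes
  sftp:
    bind_port: 2022
remote: 'https://panel.example.com'
```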

  3. Start Wings:
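```bash
sudo systemctl restart wings
# or run in the foreground with verbose output for a first-time check:
sudo wings --debug
```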

  4. Check Wings logs:

Look for successful connection to panel:
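The exact log wording varies by version; rather than matching a specific line, confirm there are no TLS or authentication errors and that the node turns green in the panel:

```bash
sudo journalctl -u wings -f
# Wings also writes to /var/log/pterodactyl/wings.log
```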

Step 5: Verify Node Status

  1. Return to Pterodactyl Admin → Nodes

  2. Find your new node

  3. Check status indicator (should be green/online)

  4. Click node name to view details

  5. Check System Information shows live stats (CPU, RAM, Disk)

Allocations

After adding a node, configure allocations (IP addresses and ports) for servers:

  1. Navigate to node in Pterodactyl Admin

  2. Click Allocation tab

  3. Assign New Allocations:

    • IP Address: Node's public IP (or specific IPs)

    • IP Alias: Optional display name

    • Ports: Range of ports (e.g., 25565-25665 for 100 ports)

  4. Click Submit

Allocation Best Practices:

  • Allocate ports in blocks (e.g., 25565-25665 for Minecraft)

  • Use different port ranges for different game types

  • Reserve some allocations for future use

  • Document allocation schema for organization

Node Resource Management

Resource Limits:

  • RAM: Total physical RAM minus OS overhead (recommend 10-20% reserve)

  • Disk: Total disk space minus OS/system usage

  • CPU: Percentage-based allocation (100% = 1 core)

Over-Allocation:

  • Memory Over-Allocation: Allocate more memory than physically available (risky)

  • Disk Over-Allocation: Allocate more disk than physically available (risky)

  • Recommendation: Start with 0% over-allocation, increase only if needed

Example for 32GB RAM, 500GB Disk Node:

  • Memory: 28672 MB (28GB, leaving 4GB for OS)

  • Disk: 450000 MB (~439GB, leaving 50GB for OS/Docker)

  • Over-Allocation: 0% initially

Horizontal Scaling Strategy

Small Deployment (50-200 servers):

  • 1-2 nodes

  • Basic hardware (4 cores, 16GB RAM, 500GB SSD)

Medium Deployment (200-1000 servers):

  • 3-5 nodes

  • Mid-range hardware (8 cores, 32GB RAM, 1TB SSD)

Large Deployment (1000+ servers):

  • 5-10+ nodes

  • High-end hardware (16+ cores, 64GB+ RAM, 2TB+ SSD)

  • Geographic distribution for latency

  • Redundancy for high availability


Monitoring and Health Checks

PteroCA Monitoring

Built-in Health Checks:

  • Plugin Health: Runs every 6 hours (checks integrity, dependencies, configuration)

  • Cron Tasks: Scheduled tasks execute via pteroca:cron:schedule

Application Monitoring:

Check Application Status:
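A quick sketch, assuming the default install path and Debian-style service names:

```bash
systemctl status nginx php8.2-fpm
php /var/www/pteroca/bin/console about          # Symfony environment summary
tail -n 50 /var/www/pteroca/var/log/prod.log    # recent application errors
```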

Database Monitoring:
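```sql
SHOW GLOBAL STATUS LIKE 'Threads_connected';
SHOW GLOBAL STATUS LIKE 'Slow_queries';

-- Database size per schema, in MB
SELECT table_schema, ROUND(SUM(data_length + index_length) / 1024 / 1024, 1) AS size_mb
FROM information_schema.tables
GROUP BY table_schema;
```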

Queue Monitoring:
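```bash
php bin/console messenger:stats          # pending messages per transport
php bin/console messenger:failed:show    # messages parked in the failure transport
```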

Pterodactyl Monitoring

Node Status:

  1. Navigate to Admin → Nodes

  2. Check status indicators (green = online, red = offline)

  3. Click node to view detailed metrics

Wings Monitoring:
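```bash
systemctl status wings
sudo journalctl -u wings --since "1 hour ago"
docker stats --no-stream        # per-container (per game server) CPU and RAM usage
df -h /var/lib/pterodactyl      # volume disk usage
```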

Server Metrics:

  • CPU usage per server

  • Memory usage per server

  • Disk usage per server

  • Network I/O statistics

External Monitoring Tools

Recommended Tools:

Uptime Monitoring:

  • UptimeRobot

  • Pingdom

  • StatusCake

Application Performance:

  • New Relic APM

  • DataDog

  • Sentry (error tracking)

Infrastructure Monitoring:

  • Prometheus + Grafana

  • Netdata

  • Zabbix

Log Aggregation:

  • ELK Stack (Elasticsearch, Logstash, Kibana)

  • Graylog

  • Loki + Grafana


Backup Strategies

PteroCA Backups

Database Backups:
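A minimal sketch; the database name, credentials, and backup path are examples:

```bash
mysqldump --single-transaction --routines --triggers \
  -u pteroca -p pteroca | gzip > /backups/pteroca-db-$(date +%F).sql.gz
```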

Application Backups:
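Back up at least the environment file and user-uploaded content; the paths below are examples:

```bash
tar -czf /backups/pteroca-files-$(date +%F).tar.gz \
  /var/www/pteroca/.env \
  /var/www/pteroca/public/uploads
```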

Automated Backup with Cron:
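A sketch using a hypothetical wrapper script that runs the two commands above:

```
# /etc/cron.d/pteroca-backup -- nightly at 03:15
15 3 * * * root /usr/local/bin/pteroca-backup.sh >> /var/log/pteroca-backup.log 2>&1
```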

Pterodactyl Backups

Panel Database:
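The database is typically named `panel` on a default Pterodactyl install; adjust if yours differs:

```bash
mysqldump --single-transaction -u pterodactyl -p panel \
  | gzip > /backups/pterodactyl-panel-$(date +%F).sql.gz
```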

Server Backups: Pterodactyl includes built-in server backup functionality:

  • Configure backup limits per server

  • Backups stored on node or remote S3-compatible storage

  • Users can create/restore backups via panel


Security at Scale

Network Security

Firewall Rules (UFW Example):
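Example rules for a Wings node; the game port range is illustrative and SSH should be restricted to admin IPs where possible:

```bash
sudo ufw default deny incoming
sudo ufw allow 22/tcp             # SSH
sudo ufw allow 80,443/tcp         # HTTP/HTTPS
sudo ufw allow 8080/tcp           # Wings daemon
sudo ufw allow 2022/tcp           # Wings SFTP
sudo ufw allow 25565:25665/tcp    # example game server port range
sudo ufw enable
```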

Internal Network:

  • Use private network for backend communication (database, Redis, etc.)

  • Restrict public access to application and load balancer only

  • Implement VPN for administrative access

Application Security

SSL/TLS:

  • Use Let's Encrypt for free SSL certificates

  • Configure HSTS (HTTP Strict Transport Security)

  • Disable weak ciphers and protocols

Nginx SSL Configuration:
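```nginx
ssl_protocols TLSv1.2 TLSv1.3;
ssl_prefer_server_ciphers on;
ssl_ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384;
ssl_session_cache shared:SSL:10m;
ssl_session_timeout 1d;
add_header Strict-Transport-Security "max-age=63072000; includeSubDomains" always;
```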

Rate Limiting:
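A sketch that throttles the login endpoint; the path is an example and should match your actual route:

```nginx
# http {} context
limit_req_zone $binary_remote_addr zone=pteroca_login:10m rate=5r/m;

# server {} context
location = /login {
    limit_req zone=pteroca_login burst=5 nodelay;
    try_files $uri /index.php$is_args$args;
}
```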

Database Security

Secure MySQL:
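Run the hardening script, bind MySQL to the private interface (`bind-address` in my.cnf), and give the application an account scoped to its own database and subnet; names and the subnet are examples:

```sql
-- After running: sudo mysql_secure_installation
CREATE USER 'pteroca'@'10.0.0.%' IDENTIFIED BY 'strong-password';
GRANT ALL PRIVILEGES ON pteroca.* TO 'pteroca'@'10.0.0.%';
FLUSH PRIVILEGES;
```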

Backup Encryption:
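```bash
# Symmetric encryption with GnuPG (prompts for a passphrase)
gpg --symmetric --cipher-algo AES256 /backups/pteroca-db.sql.gz
# Decrypt later with:
gpg --output pteroca-db.sql.gz --decrypt pteroca-db.sql.gz.gpg
```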


API Rate Limits

If PteroCA puts heavy load on the Pterodactyl API, raise Pterodactyl's request limits to avoid HTTP 429 (rate limit) errors:

Pterodactyl .env Configuration:
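Values are requests per minute; the variable names below are those used by Pterodactyl 1.x, so verify them against your panel version and clear the config cache (`php artisan config:clear`) after changing them:

```env
# Pterodactyl panel .env -- example "medium deployment" values
APP_API_APPLICATION_RATELIMIT=480
APP_API_CLIENT_RATELIMIT=1440
```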

Recommendations:

  • Small Deployment: Default limits (240/720) sufficient

  • Medium Deployment: Increase to 480/1440

  • Large Deployment: Increase to 1200/2880 or higher

Monitor API Usage:
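A rough sketch; log paths are examples and the awk expression assumes the default combined access log format:

```bash
# Count rate-limited (HTTP 429) responses on the panel
awk '$9 == 429' /var/log/nginx/pterodactyl.app-access.log | wc -l

# Watch recent panel application errors
tail -f /var/www/pterodactyl/storage/logs/laravel-$(date +%F).log
```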


Troubleshooting at Scale

High Load Issues

Symptoms:

  • Slow page load times

  • Timeouts

  • 502/504 gateway errors

Diagnosis:
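```bash
uptime                                              # load averages
top -o %CPU                                         # which processes are hot
sudo systemctl status php8.2-fpm
sudo grep "max_children" /var/log/php8.2-fpm.log    # pool exhaustion warnings
sudo tail -n 50 /var/log/nginx/error.log            # causes of 502/504 responses
```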

Solutions:

  • Increase PHP-FPM pm.max_children

  • Add more web instances (horizontal scaling)

  • Optimize database queries

  • Implement caching (Redis)

  • Enable OPcache if not already enabled

Session Issues

Symptoms:

  • Users logged out randomly

  • Session data lost

  • Login loops

Causes:

  • File-based sessions with multiple instances

  • Session directory permissions

  • Session garbage collection issues

Solutions:

  • Migrate to Redis session storage

  • Ensure session directory is writable

  • Configure session affinity on load balancer

Database Connection Issues

Symptoms:

  • "Too many connections" errors

  • Slow query execution

  • Connection timeouts

Diagnosis:
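```sql
SHOW GLOBAL STATUS LIKE 'Threads_connected';
SHOW GLOBAL STATUS LIKE 'Max_used_connections';
SHOW VARIABLES LIKE 'max_connections';
SHOW FULL PROCESSLIST;   -- look for long-running or idle connections
```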

Solutions:

  • Increase max_connections in MySQL

  • Implement connection pooling (ProxySQL)

  • Optimize long-running queries

  • Add read replicas for read-heavy operations

Queue Processing Issues

Symptoms:

  • Emails not sending

  • Background tasks not completing

  • Queue messages pile up

Diagnosis:
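```bash
php bin/console messenger:stats             # how many messages are waiting per transport
php bin/console messenger:failed:show       # failed messages and their error causes
systemctl status pteroca-messenger-worker   # hypothetical worker unit name; use yours
```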

Solutions:

  • Increase worker count

  • Switch to Redis-backed queue

  • Check external service availability (SMTP, etc.)

  • Retry failed messages: php bin/console messenger:failed:retry


Best Practices

Capacity Planning

Monitor Growth:

  • Track user registrations per month

  • Track server creations per month

  • Monitor resource usage trends

  • Plan capacity 3-6 months ahead

Scaling Triggers:

  • CPU usage consistently >70%

  • RAM usage consistently >80%

  • Disk space <20% remaining

  • Response times >1 second average

Deployment Strategy

Blue-Green Deployment:

  1. Deploy new version to "blue" environment

  2. Test thoroughly

  3. Switch load balancer to blue

  4. Keep green as rollback option

Rolling Updates:

  1. Update one instance at a time

  2. Verify health before next instance

  3. Minimal downtime

Canary Releases:

  1. Deploy to subset of instances

  2. Monitor for issues

  3. Gradually roll out to all instances

Documentation

Maintain Runbooks:

  • Deployment procedures

  • Rollback procedures

  • Incident response plans

  • Configuration management

Infrastructure as Code:

  • Use Ansible, Terraform, or similar

  • Version control infrastructure configs

  • Automate provisioning and scaling

Testing

Load Testing:
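```bash
# Quick baseline with ApacheBench: 1,000 requests, 50 concurrent
ab -n 1000 -c 50 https://billing.example.com/

# Longer, scripted scenario with k6 (script path is an example)
k6 run --vus 50 --duration 2m loadtest.js
```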

Performance Testing:

  • Test database queries under load

  • Test API response times

  • Test queue processing speed

  • Test backup and restore procedures


Conclusion

Scaling PteroCA and Pterodactyl infrastructure requires careful planning and implementation. Start with a single-instance deployment and scale horizontally as your business grows. Monitor performance metrics, implement caching and queue systems, and ensure high availability through redundancy and load balancing.

Key Takeaways:

  • Single instance sufficient for up to 500 customers

  • Redis essential for horizontal scaling

  • Shared storage required for multiple instances

  • Pterodactyl scales by adding nodes, not panel instances

  • Monitor, backup, and document everything


