- CPU: 1 vCPU
- RAM: 1 GB (2 GB recommended)
- Disk: 10 GB for the application, plus whatever your database needs
- Software: Docker and Docker Compose, or Python 3.11+, PostgreSQL 14+, and Redis 6+
CueAPI is lightweight. A small VPS (e.g., 2 vCPU / 2 GB RAM) can handle thousands of cues comfortably.
Yes. The default docker-compose.yml runs the web server, poller, PostgreSQL, and Redis on a single machine. This is fine for small to medium workloads (up to ~10,000 cues). For higher scale or high availability, separate PostgreSQL and Redis onto dedicated instances.
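When you do split services out, it can look like the following compose override, which points the web server and poller at external hosts. The environment variable names (`DATABASE_URL`, `REDIS_URL`) and service names are assumptions for illustration; match them to the shipped docker-compose.yml:

```yaml
# Sketch: run only web + poller locally, with Postgres and Redis on
# dedicated instances. Variable and service names are illustrative.
services:
  web:
    environment:
      DATABASE_URL: postgres://cueapi:secret@db.internal:5432/cueapi
      REDIS_URL: redis://redis.internal:6379/0
  poller:
    environment:
      DATABASE_URL: postgres://cueapi:secret@db.internal:5432/cueapi
      REDIS_URL: redis://redis.internal:6379/0
```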
Account-level cue limits are stored in the database. To increase the limit for a specific account, update the accounts table directly:
```sql
UPDATE accounts SET max_cues = 1000 WHERE id = 'usr_...';
```

There is no hard upper bound enforced by the system beyond what your database can handle.
The poller is responsible for checking which cues need to fire. If it crashes:
- No new executions will be created until the poller restarts.
- Already-queued webhook deliveries and worker executions continue processing normally.
- When the poller restarts, it catches up on any cues that should have fired during the downtime.
Run the poller under a process supervisor (Docker restart policy, systemd, etc.) so it restarts automatically.
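As a sketch of the Docker restart policy mentioned above (the service name `poller` is an assumption; use the name from the shipped compose file):

```yaml
# Restart the poller automatically if it crashes.
services:
  poller:
    restart: unless-stopped
```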
CueAPI uses a transactional outbox pattern: when a cue fires, the execution record and its outbox entry are written in the same database transaction, and the outbox processor then delivers the webhook. This means:
- At-least-once delivery. If a webhook delivery fails or the outbox processor crashes mid-delivery, the execution will be retried. Your endpoint may receive the same payload more than once.
- Not exactly-once. Design your webhook handlers to be idempotent. Use the execution ID in the payload to deduplicate.
Failed deliveries are retried with exponential backoff.
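A minimal sketch of the idempotency advice above, deduplicating on the execution ID. The payload field name `execution_id` and the `processed` table are assumptions for illustration, not CueAPI's documented schema:

```python
import sqlite3

# In-memory store of already-processed execution IDs. In production this
# would be a table in your own database, sharing a transaction with your
# side effects.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE processed (execution_id TEXT PRIMARY KEY)")

def handle_webhook(payload: dict) -> bool:
    """Process a delivery once; return False if it was a duplicate."""
    execution_id = payload["execution_id"]
    try:
        # The INSERT acts as an atomic "claim" on this execution ID;
        # a duplicate delivery hits the primary-key constraint instead.
        conn.execute(
            "INSERT INTO processed (execution_id) VALUES (?)", (execution_id,)
        )
        conn.commit()
    except sqlite3.IntegrityError:
        return False  # already handled; at-least-once delivery retried it
    # ... real side effects go here ...
    return True

handle_webhook({"execution_id": "exe_123"})  # first delivery: processed
handle_webhook({"execution_id": "exe_123"})  # redelivery: ignored
```

Claiming the ID inside the same transaction as your side effects is what makes the handler safe under redelivery.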
No. CueAPI relies on PostgreSQL-specific features (advisory locks, FOR UPDATE SKIP LOCKED, interval arithmetic). MySQL is not supported and there are no plans to add support.
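To illustrate the kind of PostgreSQL-specific construct involved, this is the general `FOR UPDATE SKIP LOCKED` claim pattern (table and column names here are hypothetical, not CueAPI's actual schema):

```sql
-- Each poller instance claims a batch of due cues without blocking
-- concurrent pollers; rows locked by a peer are simply skipped.
SELECT id FROM cues
WHERE next_fire_at <= now()
ORDER BY next_fire_at
FOR UPDATE SKIP LOCKED
LIMIT 100;
```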
- Health check: `GET /health` returns `{"status": "ok"}` if the web process is alive.
- Status: `GET /status` returns database and Redis connectivity, poller heartbeat, and queue depth.
- Logs: CueAPI logs to stdout in structured format. Ship logs to your preferred aggregator (Datadog, Grafana Loki, ELK, etc.).
- Key things to watch: `pending_outbox` count, `stale_executions` count, poller heartbeat age, webhook failure rate.
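A sketch of how an external check might turn those signals into alerts. The field names (`poller_heartbeat_at`, `pending_outbox`) and thresholds are assumptions for illustration, not the documented `/status` schema:

```python
def status_alerts(status: dict, now: float,
                  max_heartbeat_age: float = 60.0,
                  max_pending_outbox: int = 1000) -> list[str]:
    """Return a list of alert messages for a parsed /status response."""
    alerts = []
    # A stale heartbeat means the poller has stopped checking cues.
    if now - status["poller_heartbeat_at"] > max_heartbeat_age:
        alerts.append("poller heartbeat is stale")
    # A growing outbox means webhook deliveries are falling behind.
    if status["pending_outbox"] > max_pending_outbox:
        alerts.append("outbox backlog is growing")
    return alerts
```

Wire this into whatever scheduler your aggregator or monitoring stack provides.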
See Production for details.
No. Redis is required for the outbox queue, distributed locking, and poller coordination. Without Redis, the poller and webhook delivery system will not function. Redis does not need to be large -- a few hundred MB is sufficient for most workloads.
- Read the release notes for any breaking changes.
- Run database migrations: `alembic upgrade head`
- Deploy the new containers.
For zero-downtime upgrades, run migrations first, then do a rolling update of your containers. See Production for the full procedure.
Migrations 011 through 014 are SaaS-specific (billing, usage tiers, multi-tenant features) and are not included in the open-source core. The gap is intentional. Alembic tracks migrations by revision ID, not by filename number, so the gap has no effect on alembic upgrade head. Your database will apply 001-010, then 015-016 in order. Do not create your own 011-014 migrations -- use sequential numbers starting after the highest existing migration (e.g., 017, 018).
- GitHub Issues: github.com/cueapi/cueapi-core/issues -- for bug reports and feature requests.
- Discussions: Use GitHub Discussions for questions and community help.
- Documentation: Start with the Quickstart, then read Configuration and Production.