relayfile

Turn any API into a filesystem that AI agents can read and write.

Agents don't call APIs. They read and write files. Relayfile handles webhooks, auth, and writeback so agents never need to know about the services behind the files.

# Agent reads a GitHub PR
cat /relayfile/github/repos/acme/api/pulls/42/metadata.json

# Agent writes a review
echo '{"body": "LGTM!", "event": "APPROVE"}' \
  > /relayfile/github/repos/acme/api/pulls/42/reviews/review.json

# Done. The review is posted to GitHub. The agent didn't authenticate or call any API.

How It Works

External Services          relayfile             Your Agents
─────────────────     ─────────────────     ─────────────────
                       ┌──────────────┐
  GitHub ──webhook──▶  │              │     cat /github/...
  Slack  ──webhook──▶  │  Virtual     │◀──  echo '...' > /slack/...
  Notion ──webhook──▶  │  Filesystem  │     ls /notion/...
  Linear ──webhook──▶  │              │
                       └──────┬───────┘
                              │
                        Adapters map paths
                        Providers handle auth
                        relayauth scopes access
  1. Webhooks arrive from GitHub, Slack, Notion, etc.
  2. Adapters normalize the payload and map it to a VFS path
  3. Providers handle OAuth tokens and API proxying
  4. Agents read and write files — that's their entire integration
  5. When an agent writes to a writeback path, the adapter posts the change back to the source API
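Step 2 can be sketched in a few lines. This is a hypothetical illustration of what an adapter does, not the real adapter API: it takes a GitHub `pull_request` webhook payload and produces the VFS path and file content an agent would later `cat`. The path layout mirrors the examples above; all names are illustrative.

```typescript
interface VfsWrite {
  path: string;    // where the file appears in the virtual filesystem
  content: string; // JSON document the agent will read
}

// Normalize a GitHub pull_request webhook into a VFS write.
// The payload shape follows GitHub's webhook format.
function mapPullRequestEvent(payload: {
  repository: { full_name: string };
  pull_request: { number: number; title: string; state: string };
}): VfsWrite {
  const repo = payload.repository.full_name; // e.g. "acme/api"
  const num = payload.pull_request.number;
  return {
    path: `/github/repos/${repo}/pulls/${num}/metadata.json`,
    content: JSON.stringify({
      title: payload.pull_request.title,
      state: payload.pull_request.state,
    }),
  };
}
```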

Quick Start (Docker)

cd docker && docker compose up --build

This starts relayfile on :9090 and relayauth on :9091, seeds a ws_demo workspace with sample files, and prints a dev token. See docker/README.md for details.

Getting Started

Relayfile Cloud — everything managed. Sign up, get a token, connect your services from the dashboard.

Self-hosted:

# 1. Start the server
RELAYFILE_JWT_SECRET=my-secret go run ./cmd/relayfile

# 2. Generate a token
export RELAYFILE_AGENT_NAME=compose-agent
SIGNING_KEY=my-secret ./scripts/generate-dev-token.sh

# 3. Mount a workspace
relayfile mount my-workspace ./files --token $TOKEN

Development tokens should include workspace_id, agent_name, and aud: ["relayfile"].
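As a rough sketch of what `scripts/generate-dev-token.sh` produces, the following mints an HS256 JWT carrying the `workspace_id`, `agent_name`, and `aud` claims named above, using only `node:crypto`. The claim names come from the text; the secret and ids are made-up placeholders, and the real script may differ.

```typescript
import { createHmac } from "node:crypto";

function b64url(data: string): string {
  return Buffer.from(data).toString("base64url");
}

// Assemble header.payload.signature by hand; HS256 = HMAC-SHA256
// over the first two base64url segments.
function signDevToken(secret: string, claims: object): string {
  const header = b64url(JSON.stringify({ alg: "HS256", typ: "JWT" }));
  const payload = b64url(JSON.stringify(claims));
  const sig = createHmac("sha256", secret)
    .update(`${header}.${payload}`)
    .digest("base64url");
  return `${header}.${payload}.${sig}`;
}

const token = signDevToken("my-secret", {
  workspace_id: "ws_demo",       // placeholder workspace
  agent_name: "compose-agent",
  aud: ["relayfile"],
});
```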

Authentication & Permissions

Tokens are scoped JWTs issued by relayauth. The VFS paths are the permission boundaries — you control exactly what each agent can see and do:

# Read-only access to specific Notion pages
RELAYAUTH_SCOPES_JSON='["relayfile:fs:read:/notion/pages/product-roadmap/*", "relayfile:fs:read:/notion/pages/eng-specs/*"]'

# Code review agent: read PRs, write only reviews
RELAYAUTH_SCOPES_JSON='["relayfile:fs:read:/github/repos/acme/api/pulls/*", "relayfile:fs:write:/github/repos/acme/api/pulls/*/reviews/*"]'

# Support agent: read + reply in Slack support channel only
RELAYAUTH_SCOPES_JSON='["relayfile:fs:read:/slack/channels/support/*", "relayfile:fs:write:/slack/channels/support/messages/*"]'

# Observer: see everything, change nothing
RELAYAUTH_SCOPES_JSON='["relayfile:fs:read:*"]'

Scope format: plane:resource:action:path — supports wildcards and path prefixes.
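A scope check against that format could look like the sketch below. It assumes `*` matches any span of the path (so both trailing `/*` prefixes and mid-path wildcards like `pulls/*/reviews/*` work); relayauth's actual matching rules may be stricter.

```typescript
function escapeRegExp(s: string): string {
  return s.replace(/[.*+?^${}()|[\]\\]/g, "\\$&");
}

// Does `scope` (plane:resource:action:path) allow this request?
function scopeAllows(
  scope: string,
  plane: string,
  resource: string,
  action: string,
  path: string,
): boolean {
  const [sPlane, sResource, sAction, ...rest] = scope.split(":");
  const sPath = rest.join(":"); // keep any colons inside the path part
  if (sPlane !== plane || sResource !== resource || sAction !== action) return false;
  // Treat each * as a glob over the path (assumption).
  const pattern = "^" + sPath.split("*").map(escapeRegExp).join(".*") + "$";
  return new RegExp(pattern).test(path);
}
```

With the examples above, the code-review agent's write scope admits `/github/repos/acme/api/pulls/42/reviews/review.json` but not the PR's `metadata.json`.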

Ecosystem

Repo                  What it does
relayfile             Go server + TypeScript/Python SDKs
relayauth             Token issuance + scoped permissions
relayfile-adapters    GitHub, GitLab, Slack, Teams, Linear, Notion adapters
relayfile-providers   Nango, Composio, Pipedream, Clerk, Supabase, n8n providers
cloud                 Cloudflare Workers deployment (managed service)

SDK

npm install @relayfile/sdk    # TypeScript
pip install relayfile         # Python

import { RelayFileClient } from "@relayfile/sdk";

const client = new RelayFileClient({ token: process.env.RELAYFILE_TOKEN! });

// Read
const file = await client.getFile("ws_123", "/github/repos/acme/api/pulls/42/metadata.json");

// Write (triggers writeback to GitHub automatically)
await client.putFile("ws_123", "/github/repos/acme/api/pulls/42/reviews/review.json", {
  content: JSON.stringify({ body: "LGTM!", event: "APPROVE" }),
});

// List
const tree = await client.listFiles("ws_123", "/github/repos/acme/api/pulls/");

API

Filesystem:

  • GET /v1/workspaces/{id}/fs/tree — list files
  • GET /v1/workspaces/{id}/fs/file — read file
  • PUT /v1/workspaces/{id}/fs/file — write file
  • DELETE /v1/workspaces/{id}/fs/file — delete file
  • GET /v1/workspaces/{id}/fs/query — structured metadata query
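The file endpoints above can also be hit with plain `fetch`. This sketch only builds the request URL; the base URL and workspace id are placeholders, and the query-parameter name `path` is an assumption — check the OpenAPI spec for the real parameter.

```typescript
// Build the URL for GET/PUT/DELETE /v1/workspaces/{id}/fs/file.
function fileUrl(base: string, ws: string, path: string): string {
  return `${base}/v1/workspaces/${ws}/fs/file?path=${encodeURIComponent(path)}`;
}

// Usage (not executed here; needs a running server and a token):
// const res = await fetch(
//   fileUrl("https://api.relayfile.dev", "ws_123",
//     "/github/repos/acme/api/pulls/42/metadata.json"),
//   { headers: { Authorization: `Bearer ${process.env.RELAYFILE_TOKEN}` } },
// );
```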

Webhooks & Writeback:

  • POST /v1/workspaces/{id}/webhooks/ingest — receive webhooks
  • GET /v1/workspaces/{id}/writeback/pending — pending writebacks
  • POST /v1/workspaces/{id}/writeback/{wbId}/ack — acknowledge writeback
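The pending/ack pair suggests a drain loop like the one below: list pending writebacks, deliver each change to the source API, then ack it. The transport functions are injected so the shape is self-contained; the `Writeback` fields are assumptions, not the actual response schema.

```typescript
type Writeback = { id: string; path: string; content: string };

// Deliver every pending writeback in order, acking each on success.
// listPending ~ GET  /v1/workspaces/{id}/writeback/pending
// ack         ~ POST /v1/workspaces/{id}/writeback/{wbId}/ack
async function drainWritebacks(
  listPending: () => Promise<Writeback[]>,
  deliver: (wb: Writeback) => Promise<void>,
  ack: (id: string) => Promise<void>,
): Promise<number> {
  const pending = await listPending();
  for (const wb of pending) {
    await deliver(wb); // e.g. post the review to GitHub
    await ack(wb.id);  // only after delivery succeeds
  }
  return pending.length;
}
```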

Operations:

  • GET /v1/workspaces/{id}/ops — operation log
  • POST /v1/workspaces/{id}/ops/{opId}/replay — replay failed operation

Full spec: openapi/relayfile-v1.openapi.yaml

Self-Hosted

# In-memory (development)
go run ./cmd/relayfile

# Durable local (persisted to disk)
RELAYFILE_BACKEND_PROFILE=durable-local RELAYFILE_DATA_DIR=.data go run ./cmd/relayfile

# Production (Postgres)
RELAYFILE_BACKEND_PROFILE=production RELAYFILE_PRODUCTION_DSN=postgres://localhost/relayfile go run ./cmd/relayfile

Docker Compose:

cp compose.env.example .env
docker compose up --build -d

Mount

Mount a workspace as a local directory:

# FUSE mount (real-time)
relayfile mount ws_123 ./files --token $TOKEN

# Polling sync (no FUSE required)
go run ./cmd/relayfile-mount --workspace ws_123 --local-dir ./files --interval 2s --token $TOKEN

CLI Inspection

Inspect the remote VFS directly without mounting:

relayfile login --server https://api.relayfile.dev --token "$RELAYFILE_TOKEN"
relayfile workspace use ws_123
relayfile tree /github --depth 5
relayfile read /github/repos/acme/api/pulls/42/metadata.json
relayfile tree /github --depth 5 --json
relayfile read /external/blob.bin --output blob.bin

You can also set RELAYFILE_WORKSPACE=ws_123 instead of storing a default.

Open the hosted observer for the active workspace:

relayfile observer
relayfile observer ws_123 --no-open

Changelogs

Each publishable package keeps its own CHANGELOG.md.

Process — landed in every PR, finalized at release:

  1. PRs that touch a package add an entry under its ## [Unreleased] section (Keep a Changelog format: Added / Changed / Deprecated / Removed / Fixed / Security). Include the PR number as a link reference at the bottom of the file.
  2. At release, the Publish Package workflow runs scripts/finalize-changelogs.mjs, which renames [Unreleased] to [x.y.z] - YYYY-MM-DD, opens a fresh empty [Unreleased] above, and rewrites the compare-link references. Prereleases skip this step so their entries accumulate until the final release.
  3. Packages without user-visible changes in a given release leave [Unreleased] as _No unreleased changes._. The finalizer rewrites the dated section's body to _No user-visible changes in this release._ so the release heading still reads naturally.
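The core of step 2 can be sketched as a single text transform: rename `[Unreleased]` to the dated release section and open a fresh empty `[Unreleased]` above it. This is an illustration of the idea, not the actual `scripts/finalize-changelogs.mjs` (which also rewrites the compare-link references).

```typescript
// Finalize one Keep-a-Changelog file for a release.
function finalizeChangelog(text: string, version: string, date: string): string {
  // String.replace with a string pattern touches only the first match,
  // which is the topmost [Unreleased] heading.
  return text.replace(
    "## [Unreleased]",
    `## [Unreleased]\n\n## [${version}] - ${date}`,
  );
}
```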

License

MIT

About

Queue-first virtual filesystem-over-REST that ingests noisy external webhooks, projects a durable file tree, and performs conflict-safe writeback with retries, dead-lettering, and replay.
