Personal Automations App

Overview
Personal Automations is a full-stack platform I built to act as a controlled layer between AI agents and real-world systems.
As I started working with OpenClaw agents, I quickly realized that giving agents direct access to things like Gmail or external APIs did not feel right. It was too open-ended, hard to reason about, and difficult to audit if something went wrong.
Instead of letting agents operate freely, I built a system that sits in the middle. It owns all integrations, processes incoming data, and creates structured jobs that agents can safely fetch and execute. Agents do not touch raw services. They work through a controlled interface.
This project has grown into a daily-use system and the most complex backend I have built, combining real time AI interaction, background processing, and a strong focus on safety and observability.
Core Idea
The core idea is simple: agents should request work, not perform arbitrary actions.
They should not hold credentials, make direct API calls, or operate outside defined boundaries. The platform defines what is allowed, executes it, and records what happened.
That constraint shapes the entire system. It turns agents from something unpredictable into something structured and reliable.
Pipeline System
At the center of the platform is a pipeline framework.
Each automation is defined as a typed pipeline with a clear input and output. Pipelines are registered at startup and stored in a central registry, which allows them to be discovered and executed from anywhere in the system.
The same pipeline can be triggered from:
- API requests
- WebSocket chat commands
- scheduled jobs
- external events
Every execution is recorded with input, output, status, and metadata. This creates a full audit trail and makes it easy to understand how the system behaves over time.
Pipelines are the only way work gets done. Nothing reaches directly into external systems. Everything flows through these controlled units of logic.
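A minimal sketch of what such a registry might look like. The names here (`register_pipeline`, `run_pipeline`, `Execution`) are illustrative assumptions, not the project's real API; the point is that every pipeline is registered once and every execution is recorded with input, output, and status.

```python
from dataclasses import dataclass
from typing import Callable, Dict, Optional

# Hypothetical central registry: pipelines register themselves at startup
# and can be looked up by name from anywhere in the system.
_REGISTRY: Dict[str, Callable[[dict], dict]] = {}

def register_pipeline(name: str):
    """Decorator that adds a pipeline function to the registry."""
    def decorator(fn: Callable[[dict], dict]):
        _REGISTRY[name] = fn
        return fn
    return decorator

@dataclass
class Execution:
    """One recorded run: the audit-trail row described above."""
    pipeline: str
    input: dict
    output: Optional[dict] = None
    status: str = "pending"

def run_pipeline(name: str, payload: dict) -> Execution:
    """Execute a registered pipeline and record what happened."""
    record = Execution(pipeline=name, input=payload)
    try:
        record.output = _REGISTRY[name](payload)
        record.status = "succeeded"
    except Exception:
        record.status = "failed"
    return record

@register_pipeline("echo")
def echo(payload: dict) -> dict:
    return {"echoed": payload}
```

Because every entry point (API, chat, scheduler, external event) goes through the same `run_pipeline` call, the audit trail comes for free.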
OpenClaw Integration and Job Model
Personal Automations is designed to work alongside OpenClaw.
Instead of giving OpenClaw agents direct access to integrations, this platform prepares the work for them. It connects to external services, processes data, and creates structured jobs that represent specific tasks.
Agents then fetch these jobs on a cron schedule, execute them within defined constraints, and return results to the system.
This creates a clear separation of responsibilities:
- Personal Automations handles integrations, permissions, and orchestration
- OpenClaw handles agent execution
That separation makes the system easier to reason about and much safer to operate.
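The job model can be sketched roughly as below. Field names and the `JobQueue` interface are assumptions for illustration; the real schema lives in the platform's database.

```python
import uuid
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Job:
    """A structured unit of work prepared for an agent."""
    task: str
    payload: dict
    id: str = field(default_factory=lambda: uuid.uuid4().hex)
    status: str = "pending"
    result: Optional[dict] = None

class JobQueue:
    """Platform-side store that agents poll instead of touching integrations."""

    def __init__(self):
        self._jobs: dict = {}

    def create(self, task: str, payload: dict) -> Job:
        job = Job(task=task, payload=payload)
        self._jobs[job.id] = job
        return job

    def fetch_pending(self) -> list:
        """What an agent calls on its cron tick: claim all pending jobs."""
        pending = [j for j in self._jobs.values() if j.status == "pending"]
        for job in pending:
            job.status = "claimed"
        return pending

    def report(self, job_id: str, result: dict) -> None:
        """Agent returns results; the platform records them."""
        job = self._jobs[job_id]
        job.result = result
        job.status = "done"
```

The agent never sees a Gmail token or an API client; it only sees jobs and reports results.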
Area-Scoped AI Agents
The platform also includes AI assistants organized by domain.
Each assistant is scoped to a specific area and only has access to the pipelines and tools it needs. This keeps interactions focused and prevents unintended behavior.
Access control is enforced at the tool level. Even if an agent is prompted incorrectly, it cannot call pipelines outside its allowed scope.
There is also a general assistant with broader access, but the default pattern is constrained and domain-specific.
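Tool-level enforcement might look like the following sketch. The scope names and `ScopeError` type are hypothetical; the idea is that the check happens in code, below the prompt, so a misdirected agent simply cannot reach an off-scope pipeline.

```python
class ScopeError(Exception):
    """Raised when an assistant calls a pipeline outside its area."""

class ScopedToolset:
    """Each assistant gets one of these with only its allowed pipelines."""

    def __init__(self, area: str, allowed: set):
        self.area = area
        self.allowed = allowed

    def call(self, pipeline: str, payload: dict) -> dict:
        if pipeline not in self.allowed:
            # Enforced regardless of what the model was prompted to do.
            raise ScopeError(f"{self.area} assistant may not call {pipeline}")
        # In the real system this would dispatch into the pipeline registry.
        return {"pipeline": pipeline, "payload": payload}

# Illustrative scoping: the email assistant sees only email pipelines.
email_tools = ScopedToolset("email", {"triage_inbox", "archive_message"})
```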
Real-Time Chat System
The system includes a WebSocket-based chat interface where AI responses stream back in real time.
Instead of waiting for a full response, the frontend receives a sequence of events as the model works. This includes partial text, tool calls, and tool results.
You can see when the assistant decides to use a tool, what arguments it builds, and what result it gets before the final response is complete.
All conversations are stored with full history and tool execution details, making the chat interface both interactive and traceable.
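The event sequence can be sketched as a generator of typed JSON events; the real system pushes each one over the WebSocket as it occurs. The event type names (`text_delta`, `tool_call`, `tool_result`, `done`) are illustrative assumptions.

```python
import json
from typing import Iterator

def stream_response() -> Iterator[str]:
    """Emit a sequence of events instead of one final message.

    Hypothetical event shapes; in production each would be sent to the
    frontend over the WebSocket the moment it is produced.
    """
    events = [
        {"type": "text_delta", "text": "Checking your inbox"},
        {"type": "tool_call", "name": "triage_inbox", "args": {"limit": 10}},
        {"type": "tool_result", "name": "triage_inbox",
         "result": {"action_needed": 2}},
        {"type": "text_delta", "text": "You have 2 emails that need action."},
        {"type": "done"},
    ]
    for event in events:
        yield json.dumps(event)
```

The frontend renders partial text as it arrives and shows tool calls and results inline, which is what makes the tool-use process visible before the final response completes.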
Email Triage System
One of the main pipelines is an automated email triage system.
It connects to Gmail and processes messages incrementally, classifying them into categories like action needed, finance, notifications, newsletters, and others.
The system uses a hybrid approach:
- heuristics for fast, obvious cases
- AI classification for ambiguous ones
Each decision includes a confidence score. Automated actions are only applied when confidence is high, and all changes are logged with enough detail to support auditing and undo functionality.
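The hybrid flow can be sketched as: try heuristics, fall back to the model, and only auto-apply above a confidence threshold. The specific rules, the stubbed model call, and the 0.8 threshold are illustrative assumptions.

```python
from typing import Optional, Tuple

CONFIDENCE_THRESHOLD = 0.8  # assumed value for illustration

def heuristic_classify(subject: str, sender: str) -> Optional[Tuple[str, float]]:
    """Fast rules for the obvious cases; None means 'ambiguous'."""
    if "unsubscribe" in subject.lower():
        return ("newsletter", 0.95)
    if sender.endswith("@bank.example.com"):
        return ("finance", 0.9)
    return None

def ai_classify(subject: str, sender: str) -> Tuple[str, float]:
    """Stand-in for the model call the real system makes for ambiguous mail."""
    return ("action_needed", 0.6)

def triage(subject: str, sender: str) -> dict:
    category, confidence = (
        heuristic_classify(subject, sender) or ai_classify(subject, sender)
    )
    return {
        "category": category,
        "confidence": confidence,
        # Automated actions only fire when confidence clears the threshold;
        # everything else is left for manual review.
        "auto_apply": confidence >= CONFIDENCE_THRESHOLD,
    }
```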
Database-Backed Scheduling
The platform includes a scheduling system backed by the database.
Users can define tasks using cron expressions, timezones, pipeline names, and input parameters. These schedules are picked up by worker processes without requiring code changes.
After each run, the system calculates the next execution time and updates the schedule.
This same mechanism drives agent workflows by determining when jobs should be created and when agents should return to fetch new work.
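A simplified sketch of the claim-and-advance cycle. The real system stores cron expressions and would compute the next run with a cron library (such as croniter); a plain interval stands in here so the sketch stays self-contained.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

@dataclass
class Schedule:
    """Simplified stand-in for a database schedule row."""
    pipeline: str
    interval: timedelta  # the real row holds a cron expression and timezone
    next_run_at: datetime
    params: dict

def due_schedules(schedules, now: datetime):
    """What a worker does each tick: pick rows whose next_run_at has passed."""
    return [s for s in schedules if s.next_run_at <= now]

def mark_ran(schedule: Schedule, now: datetime) -> None:
    """After a run, advance next_run_at so the row is picked up again later."""
    schedule.next_run_at = now + schedule.interval
```

Because schedules live in the database, adding or changing a recurring task is a data change, not a deploy.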
Architecture
The backend follows a layered architecture:
- routes handle requests and authentication
- services contain business logic
- repositories handle data access
- the database stores persistent state
Repositories flush changes but do not commit them. This allows multiple operations to share a single transaction and keeps data consistent.
The system is fully async and built with FastAPI, PostgreSQL, Redis, and background workers.
Security Model
Security is built into the system from the start.
- agents never receive raw credentials
- tokens are encrypted at rest
- access tokens are short-lived and rotated
- actions are restricted through pipelines and scoped tools
The system is designed around containment. Agents can be useful, but only within clearly defined boundaries.
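Short-lived token rotation can be sketched as follows. The 15-minute TTL and the `get_token` helper are illustrative assumptions; in the real system the stored tokens are also encrypted at rest.

```python
import secrets
from datetime import datetime, timedelta, timezone
from typing import Optional

ACCESS_TOKEN_TTL = timedelta(minutes=15)  # assumed lifetime for illustration

class AccessToken:
    """A token that is only valid for a short window after issuance."""

    def __init__(self, now: datetime):
        self.value = secrets.token_urlsafe(32)
        self.expires_at = now + ACCESS_TOKEN_TTL

    def is_valid(self, now: datetime) -> bool:
        return now < self.expires_at

def get_token(current: Optional[AccessToken], now: datetime) -> AccessToken:
    """Return the current token, minting a fresh one once it has expired.

    Agents only ever see these short-lived values, never the underlying
    credentials for the external service.
    """
    if current is None or not current.is_valid(now):
        return AccessToken(now)
    return current
```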
Frontend and Developer Experience
The frontend is built with Next.js and uses Zustand for state management.
Custom hooks handle API communication and WebSocket behavior, keeping the UI modular and easier to extend.
The full system runs locally with Docker Compose, including the app, database, Redis, worker, and scheduler. A custom CLI framework handles tasks like seeding data and managing users, which improved the development workflow significantly.
Observability and CI
Because the system coordinates multiple moving parts, observability is critical.
The application includes tracing across API requests, database queries, cache operations, and agent execution. Errors are tracked in production, and request IDs allow activity to be traced across the system.
The CI pipeline runs on every push and includes:
- linting
- type checking
- tests
- security checks
- Docker builds
Development Challenges
Defining safe boundaries for agents
The biggest challenge was deciding what agents should be allowed to do. The pipeline system emerged as a way to give them power without making the system unpredictable.
Balancing flexibility and control
Pipelines needed to support different use cases while still remaining structured and safe. Finding that balance required iteration.
Real-time streaming complexity
Streaming partial responses and tool calls through WebSocket required careful coordination between backend and frontend.
Classification tradeoffs
Combining heuristics and AI for classification required tuning to get the right balance of speed, cost, and accuracy.
This project reflects how I approach building AI systems. Not just making them capable, but making them controlled, observable, and usable in real workflows.