The stack that powers modern services businesses

Background
Copilot helps professional service businesses manage clients and communication in one place. Messaging, file sharing, contracts, and billing all happen in a single, unified experience that makes life easier for businesses and provides a seamless experience for their clients. At the core of this is a customizable client portal that businesses can tweak to fit their needs and even extend with apps and integrations.
On the surface, a client portal might seem pretty simple, but building a platform that works across industries and scales from solo operators to large teams means we need a system that's flexible, easy to update, and able to handle a growing number of features.
Here’s a look at how our system works:
Web Server
Framework: Express/NodeJS
Infrastructure: Elastic Beanstalk
The web app is served to the browser through an Express web server, which delivers a server-side-rendered app shell for a single-page application (SPA). Once loaded, the app is hydrated on the client, making the initial experience feel much faster and more responsive.
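As a rough sketch of this setup (not our production code; the paths and markup are hypothetical, and a static shell stands in for the server-rendered one), an Express server that returns an app shell for the SPA might look like this:

```typescript
// Sketch of an Express server returning an app shell that the SPA hydrates on the client.
// Paths and markup are illustrative, not the production implementation.
import express from "express";

const app = express();
const port = Number(process.env.PORT) || 3000;

// Static assets (the client bundle) are served directly.
app.use("/static", express.static("build/static"));

// Every other route returns the app shell; the SPA takes over in the browser.
app.get("*", (_req, res) => {
  res.send(`<!doctype html>
<html>
  <head><title>Portal</title></head>
  <body>
    <div id="root"></div>
    <script src="/static/app.js"></script>
  </body>
</html>`);
});

app.listen(port);
```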
We run Express on AWS Elastic Beanstalk, which takes care of provisioning servers, load balancing, and scaling. Beanstalk is great because it abstracts away a lot of the complexity of managing server instances, letting us focus on building features instead of worrying about infrastructure. We decided early on to adopt a largely serverless architecture, but for the web server we intentionally chose not to use something like Lambda because of concerns about cold-start delays and payload and build-size limitations.
Auth
Framework: Cognito, Lambda
We use AWS Cognito to handle authentication, including sign-in, sign-up, MFA, third-party logins, and token validation. At its core, Cognito manages a User Pool, which acts as the directory for all users.
The big advantage of Cognito is that it provides a lot of functionality out of the box, so we don't have to worry about managing passwords, handling security, or building authentication flows. It also integrates directly with other AWS services, making things like access control and identity federation much simpler and more secure. Cognito can also trigger Lambda functions when certain authentication events happen (like login or password resets), which we use to automate workflows around user management and build custom notifications.
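For example, a Cognito post-confirmation trigger hands the new user's attributes to a Lambda function. A minimal sketch of such a handler (in TypeScript; the follow-up action is a hypothetical placeholder) could look like this:

```typescript
// Sketch of a Cognito post-confirmation trigger handler.
// The follow-up action is a hypothetical placeholder for an internal workflow.
import type { PostConfirmationTriggerHandler } from "aws-lambda";

export const handler: PostConfirmationTriggerHandler = async (event) => {
  const { sub, email } = event.request.userAttributes;

  // e.g. enqueue a welcome notification or provision portal resources for the new user
  console.log(`New user confirmed: ${sub} (${email})`);

  // Cognito expects the (possibly modified) event to be returned.
  return event;
};
```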
We also found Cognito to be an incredibly cost-effective platform, and we really noticed the savings as we crossed 100K registered users.
Web Client
Framework: React/Typescript
Libraries: AmplifyJS, Redux Toolkit Query (RTK Query), Tailwind, Zod, Formik/Yup, AgGrid, universal-router. Deprecating: MaterialUI, Vanilla Redux
The frontend is built with React and TypeScript, handling everything from rendering data to managing user interactions. React gives us a flexible way to build UI components, while TypeScript helps catch errors early and makes refactoring easier.
For data fetching, we use AmplifyJS, which is basically a wrapper around Axios with some extra utilities for handling things like S3 uploads and Cognito authentication. Requests are managed with RTK Query, which simplifies global state management and caching. Instead of manually handling API calls and storing results, we just use hooks that keep everything up-to-date automatically.
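As a rough sketch of this pattern (the endpoint, types, and base query here are hypothetical, and fetchBaseQuery stands in for our Amplify-based fetcher), an RTK Query API slice and its auto-generated hook might look like this:

```typescript
// Sketch of an RTK Query API slice; the endpoint and Client type are hypothetical.
import { createApi, fetchBaseQuery } from "@reduxjs/toolkit/query/react";

interface Client {
  id: string;
  name: string;
}

export const portalApi = createApi({
  reducerPath: "portalApi",
  baseQuery: fetchBaseQuery({ baseUrl: "/api" }),
  endpoints: (builder) => ({
    listClients: builder.query<Client[], void>({
      query: () => "clients",
    }),
  }),
});

// Auto-generated hook: components call useListClientsQuery() and get
// cached, automatically refreshed data without manual fetch logic.
export const { useListClientsQuery } = portalApi;
```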
The UI started out with MaterialUI, which provided solid pre-built components and styling utilities. But as our design language evolved, we moved to Tailwind and custom components to get more flexibility and consistency.
For interactions, we use Formik/Yup to handle forms, AgGrid for complex tables, and universal-router for routing. These libraries make it easy to manage validation, work with large datasets, and keep navigation smooth.
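To illustrate the Formik/Yup combination (the form and fields are hypothetical, not one of our real forms):

```typescript
// Sketch of a form using Formik with a Yup validation schema; the fields are illustrative.
import React from "react";
import { useFormik } from "formik";
import * as Yup from "yup";

const schema = Yup.object({
  email: Yup.string().email("Invalid email").required("Required"),
});

export function InviteClientForm({ onInvite }: { onInvite: (email: string) => void }) {
  const formik = useFormik({
    initialValues: { email: "" },
    validationSchema: schema,
    onSubmit: (values) => onInvite(values.email),
  });

  return (
    <form onSubmit={formik.handleSubmit}>
      <input name="email" value={formik.values.email} onChange={formik.handleChange} />
      {formik.errors.email && <span>{formik.errors.email}</span>}
      <button type="submit">Invite</button>
    </form>
  );
}
```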
API Gateway
Infrastructure: API Gateway, Cognito, Lambda
All API requests go through AWS API Gateway, which acts as the front door to our backend. It enforces authentication by checking tokens with Cognito and then routes valid requests to the right backend service.
Instead of running a traditional backend server, we use AWS Lambda, which means our APIs scale automatically and we only pay based on our usage. This lets us effortlessly handle traffic spikes and takes the burden off the engineering team of thinking about provisioning additional resources.
One key detail: our web app isn’t served through API Gateway. This way frontend traffic (loading the app itself) and API requests scale separately, preventing unnecessary bottlenecks.
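When API Gateway uses a Cognito User Pool authorizer, the validated token's claims are passed to the backend on the request context. A minimal sketch of a handler reading those claims (shown in TypeScript for brevity and assuming a REST API with Lambda proxy integration; our actual service is the Go API described below):

```typescript
// Sketch of a Lambda handler behind API Gateway with a Cognito User Pool authorizer.
// Assumes a REST API with Lambda proxy integration; the claims layout varies by setup.
import type { APIGatewayProxyHandler } from "aws-lambda";

export const handler: APIGatewayProxyHandler = async (event) => {
  // API Gateway has already validated the JWT; the claims arrive on the request context.
  const claims = event.requestContext.authorizer?.claims ?? {};
  const userId = claims.sub;

  if (!userId) {
    return { statusCode: 401, body: JSON.stringify({ message: "Unauthorized" }) };
  }

  return {
    statusCode: 200,
    body: JSON.stringify({ message: `Hello ${userId}` }),
  };
};
```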
API Service
Framework: Golang/Gin
Infrastructure: Lambda, CloudFormation
The API service handles all the backend logic for incoming requests from the app: data access, business rules, and core functionality. It's written in Golang using Gin, which makes it lightweight and fast.
Golang is great for performance, especially in a serverless environment where its faster execution and lower cold start times (compared to JavaScript) help reduce latency and costs. Gin is a lightweight framework that makes handling API requests efficient while keeping the code clean and easy to maintain.
All our infrastructure is defined using CloudFormation, so every change is versioned and reviewed like code. This makes deployments more predictable and keeps our architecture well-documented.
Database
Infrastructure: DynamoDB
We store all application data in DynamoDB, a fully managed NoSQL database. Instead of using multiple tables, we follow a Single Table Design, meaning everything is structured within one table with well-defined partition keys. This is probably one of the most distinctive parts of our stack, so it's worth taking a moment to dive deeper here.
Why Single Table and NoSQL?
- Fast queries with consistent performance: Data access patterns are carefully designed so queries hit the exact partitions they need, avoiding complex joins and keeping read operations efficient. Unlike relational databases, where complex queries can slow down as data grows, DynamoDB is optimized for key-based lookups and queries on indices. It enforces strict query patterns and limits, ensuring performance remains consistent no matter how large the table gets.
- Automatic scalability: DynamoDB handles massive amounts of data across multiple partitions. With a thoughtful schema we ensure that reads and writes are distributed efficiently, reducing hot partitions and helping the database scale seamlessly.
- Simplified schema evolution: Since data models are flexible, adding new attributes or relationships doesn’t require schema migrations like in traditional SQL databases.
Trade-offs
- Steeper learning curve: Unlike traditional SQL, where you can just normalize data and join tables dynamically, Dynamo requires upfront planning. You need to design access patterns carefully before setting up the schema, which can be challenging for teams unfamiliar with DynamoDB.
- Harder debugging and maintenance: With all entities stored in one table, it’s not as visually intuitive as a relational database with separate, well-labeled tables. Developers have to understand the partitioning and sorting logic to effectively navigate the data.
- Complexity in queries and limited flexibility: Every query has to be planned, and changes in query requirements may require restructuring data or adding secondary indexes.
- No built-in constraints or referential integrity: Unlike relational databases, DynamoDB doesn’t enforce foreign keys, unique constraints, or cascading deletes. It’s up to the application logic to maintain data integrity, which adds complexity when dealing with relationships like invoices linked to clients or clients linked to companies.
- No managed migrations: In SQL databases, schema migrations are structured and often come with tools to apply, roll back, and version changes. In DynamoDB, making schema changes like adding a new field across all records requires custom migration logic and batch updates, which can be costly and time-consuming.
Why This Works for Us
Despite the trade-offs, DynamoDB makes sense for us because our access patterns are predictable—most queries involve fetching user-specific data like clients, contracts, invoices, and files, all of which can be efficiently retrieved using predefined partition keys. Given the predictability in our patterns, the flexibility that a NoSQL database offers lets us move really quickly to build new features.
By designing our schema carefully and leveraging secondary indexes when needed, we avoid many of the downsides while getting all the benefits of a high-performance, low-maintenance NoSQL database.
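To make the single-table idea concrete, here is a hedged sketch (in TypeScript with the AWS SDK v3 DocumentClient; the table name, key shapes, and attributes are hypothetical, not our actual schema) of storing an invoice under its client's partition and fetching all of a client's invoices with one key-based query:

```typescript
// Sketch of single-table access with the AWS SDK v3 DocumentClient.
// Table name, key shapes, and attributes are hypothetical.
import { DynamoDBClient } from "@aws-sdk/client-dynamodb";
import { DynamoDBDocumentClient, PutCommand, QueryCommand } from "@aws-sdk/lib-dynamodb";

const ddb = DynamoDBDocumentClient.from(new DynamoDBClient({}));
const TABLE = "portal"; // a single table holding every entity type

// Store an invoice under its client's partition so one query fetches all of a client's invoices.
export async function putInvoice(clientId: string, invoiceId: string, amount: number) {
  await ddb.send(
    new PutCommand({
      TableName: TABLE,
      Item: {
        PK: `CLIENT#${clientId}`,   // partition key groups related entities
        SK: `INVOICE#${invoiceId}`, // sort key distinguishes entity types within the partition
        amount,
      },
    })
  );
}

// Fetch all invoices for a client with a single key-based query (no joins).
export async function listInvoices(clientId: string) {
  const result = await ddb.send(
    new QueryCommand({
      TableName: TABLE,
      KeyConditionExpression: "PK = :pk AND begins_with(SK, :sk)",
      ExpressionAttributeValues: { ":pk": `CLIENT#${clientId}`, ":sk": "INVOICE#" },
    })
  );
  return result.Items ?? [];
}
```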
DB Event Stream
Infrastructure: DynamoDB Stream, Lambda
Any time data in DynamoDB changes (inserts, updates, deletes), an event stream captures those changes in real time. These events are processed by Lambda functions, which handle things like notifications, WebSocket messages, background processing, and interactions with slower third-party services.
This event-driven setup means we don't have to poll the database to detect changes or explicitly implement async jobs in our API logic.
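A minimal sketch of a stream consumer (in TypeScript; the follow-up action is a hypothetical placeholder) might look like this:

```typescript
// Sketch of a Lambda consuming a DynamoDB stream; the follow-up action is a placeholder.
import type { DynamoDBStreamHandler } from "aws-lambda";

export const handler: DynamoDBStreamHandler = async (event) => {
  for (const record of event.Records) {
    // Each record describes a single insert, modify, or remove.
    if (record.eventName === "INSERT" && record.dynamodb?.NewImage) {
      const newImage = record.dynamodb.NewImage;
      // e.g. fan out a notification or push a WebSocket message for the new item
      console.log("New item written:", JSON.stringify(newImage));
    }
  }
};
```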
Backend Services
Infrastructure: Lambda, SQS, SNS, Simple Email Service (SES)
The backend is made up of multiple services, all running as serverless functions that get triggered by different events. In addition to the core API and database services, we have:
- Notification Service – Handles in-app notifications and sends transactional emails through AWS SES. It listens for events in an SQS queue, ensuring messages are processed asynchronously without blocking other operations.
- Portal Event Service – Manages real-time updates and webhooks. It listens to an SNS topic, processes events, and pushes messages via WebSockets or third-party integrations. This keeps everything in sync without requiring constant polling.
Going fully serverless for these services means we never have to worry about fine-tuning infrastructure, and we only pay for what we use. It also reduces maintenance overhead, letting us focus on improving the product instead of managing servers.
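To illustrate the Notification Service pattern described above (the queue payload shape and sender address are hypothetical), an SQS-triggered Lambda that sends a transactional email through SES could look roughly like this:

```typescript
// Sketch of an SQS-triggered Lambda that sends transactional email via SES.
// The message payload shape and sender address are hypothetical.
import type { SQSHandler } from "aws-lambda";
import { SESClient, SendEmailCommand } from "@aws-sdk/client-ses";

const ses = new SESClient({});

export const handler: SQSHandler = async (event) => {
  for (const record of event.Records) {
    // Each queue message carries one notification to deliver.
    const { to, subject, text } = JSON.parse(record.body);

    await ses.send(
      new SendEmailCommand({
        Source: "notifications@example.com",
        Destination: { ToAddresses: [to] },
        Message: {
          Subject: { Data: subject },
          Body: { Text: { Data: text } },
        },
      })
    );
  }
};
```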
Deploying and Testing
Infrastructure: CodePipeline, GitHub Actions
We use GitHub Actions to handle continuous integration (CI) whenever a pull request (PR) is created. This runs multiple jobs, with the main ones being TypeScript builds, web unit tests, and Go integration tests to ensure everything is working as expected.
Once a PR is reviewed, approved, and merged, it’s automatically deployed to the staging environment through CodePipeline, which takes care of deploying the web server (Elastic Beanstalk), Lambda functions, and any infrastructure changes defined in code. Since our infrastructure is managed as code, the staging environment mirrors production, making it easy to keep both in sync.
After staging deployments, we run automated UI tests that click through the app to validate everything end-to-end before pushing changes to production. We deploy to production daily, and this process also runs through CodePipeline, ensuring a smooth, reliable, and repeatable deployment workflow.
Looking Ahead
There’s still a lot to build in the core product, and we’re just getting started. Right now, we’re heavily focused on improving our commerce features to help businesses get paid faster and more seamlessly. At the same time, we’re expanding our platform capabilities, making Copilot even more customizable and extendable so businesses can tailor the experience to their exact needs.
One of our core beliefs is that every business, no matter the size, deserves great tools, and their clients deserve a high-quality, modern experience. To make that a reality, we need to keep scaling while staying nimble — continuing to ship fast, improve our infrastructure, and push the boundaries of what a client portal can do.
If this tech stack and problem space intrigue you, we’d love for you to join us. There’s no shortage of challenges to solve, and we’re looking for people who are excited to build something ambitious that empowers businesses everywhere.