The New Bottleneck
Who doesn't want faster builds and test runs? Especially now, with agents producing code at an unprecedented pace. What was once a problem exclusive to large organizations with massive codebases and sprawling teams is getting democratized: build performance is becoming a bottleneck much earlier in the lifecycle of a codebase than it used to.

Think about it: a solo developer or a small team can now generate and iterate on code at a rate that would have required a much larger engineering organization just a few years ago. Agents don't get tired. They don't take coffee breaks. They can refactor entire modules, write comprehensive test suites, and scaffold new features in minutes. But all of that generated code still needs to compile. It still needs to be tested. And if your build system can't keep up, you've just shifted the bottleneck from code production to code validation. The agent waits. You wait. Everyone waits.

This creates an awkward mismatch with how caching solutions are sold today. Most providers fall into one of two camps: enterprise-first offerings with the classic "contact sales" button and a heavy procurement process that many developers are hesitant to engage with, or solutions tightly coupled to specific CI environments, where providers compete on mounted volumes and ephemeral storage. Both approaches are limiting. As a developer, you want your builds and workflows to be fast whether they're running locally, on CI, or in a cloud development environment. And you want that without having to talk to a sales team or lock yourself into a single vendor's ecosystem.

There's also a third category worth mentioning: solutions that require you to fundamentally restructure your project or adopt an entirely new build system. These can deliver impressive results, but the migration cost is often prohibitive. Most teams don't have the luxury of pausing feature development for months to overhaul their build infrastructure. They need something that meets them where they are.

Google famously invested years and countless engineering hours into Blaze (later open-sourced as Bazel) precisely because build performance at scale is existential. As their engineering documentation notes: "A long time ago, Google built its software using large, generated Makefiles. These led to slow and unreliable builds, which began to interfere with our developers' productivity and the company's agility." (Software Engineering at Google, Chapter 18). But not every company has Google's resources to build custom infrastructure.

The fundamental challenge here is physics, and it constrains you from both directions. On one side, the ceiling of any build system is the number of processors available, and in virtualized environments, multiple tenants often compete for the same compute resources. This means performance can be inconsistent and underwhelming compared to what you experience on your local machine. CI runners spin up, warm their caches, do their work, and disappear. The next run starts cold again. It's a cycle that wastes enormous amounts of compute, and by extension, time and money.

On the other side, the alternative to raw compute is the network: fetch prebuilt artifacts instead of rebuilding them. But physics constrains you here too. Data can only travel so fast: the speed of light through fiber, routing hops, TLS handshakes, bandwidth limits. Network latency is a fundamental barrier, and if you're not careful, the time spent downloading cached artifacts can exceed the time it would have taken to just build locally and wait for compute resources to become available.
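To make that trade-off concrete, here's a back-of-the-envelope sketch in Swift. Every name and number in it is made up for illustration (this is not how Tuist or any particular tool decides); the point is simply that a fetch only pays off when the estimated transfer time beats the estimated local rebuild by a comfortable margin, and both estimates are noisy.

```swift
/// A hypothetical description of a cached build artifact.
struct CachedArtifact {
    let sizeInBytes: Double           // compressed size of the artifact
    let estimatedBuildSeconds: Double // how long a local rebuild typically takes
}

/// Rough network conditions, measured or assumed.
struct NetworkProfile {
    let roundTripSeconds: Double // latency to the nearest cache node
    let bytesPerSecond: Double   // observed download throughput
}

/// Decide whether fetching a cached artifact is likely to beat rebuilding it.
/// The 0.8 factor is an arbitrary safety margin: only fetch when the download
/// is clearly cheaper, because the estimates on both sides are noisy.
func shouldFetch(_ artifact: CachedArtifact, over network: NetworkProfile) -> Bool {
    let transferSeconds = network.roundTripSeconds
        + artifact.sizeInBytes / network.bytesPerSecond
    return transferSeconds < artifact.estimatedBuildSeconds * 0.8
}

// Example: a 40 MB framework that takes ~90 s to compile locally,
// fetched over a link with 60 ms of latency and ~25 MB/s of throughput.
let artifact = CachedArtifact(sizeInBytes: 40_000_000, estimatedBuildSeconds: 90)
let network = NetworkProfile(roundTripSeconds: 0.06, bytesPerSecond: 25_000_000)
print(shouldFetch(artifact, over: network)) // true: ~1.7 s of transfer vs 90 s of compilation
```

With those made-up numbers, fetching wins by a wide margin; shrink the local build time to a few seconds or cut the bandwidth by an order of magnitude, and the balance flips. That sensitivity is exactly why a cache has to be aware of where it is relative to the machines it serves.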
You're caught between two physical constraints: the scarcity of processors and the latency of networks. The art of build optimization is navigating the space between them.

If you're fortunate enough to work with a toolchain that supports fast incremental builds, like Go, the compute constraint is less painful. The compiler is fast, the dependency model is simple, and rebuilds are typically measured in seconds. But that's not the reality for many of us. Take Rust or Swift, for instance: a small change can trigger minutes of recompilation for sources that have already been compiled elsewhere, and a tweak to a low-level module can cascade through your dependency graph, invalidating caches and forcing rebuilds across dozens of targets. Minutes that add up across your team, across your CI runs, across your day. Multiply that by the number of pull requests you merge in a week, and you start to understand why large organizations staff entire platform teams just to keep builds manageable. What a waste of time and resources.

This is where remote caching becomes appealing despite its constraints. If someone on your team, or your CI system, or even a developer at another company using the same open-source dependency, has already compiled a particular module with the same inputs, why compile it again? Just download the result. The promise is compelling: turn minutes of compilation into seconds of network transfer. But making that promise real requires overcoming the latency problem. It's not enough to have a cache. You need a cache that's fast enough, distributed enough, and smart enough to know when fetching makes sense and when building locally is the better choice. You need infrastructure that understands the geography of your team and the topology of your build graph. You need artifacts stored close to where they're needed, served with minimal round trips, and compressed in ways that maximize throughput without sacrificing decompression speed.
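To ground the "same inputs, same artifact" idea, here's a minimal sketch of what a content-addressed lookup could look like, written in Swift with Apple's CryptoKit. The `BuildInputs` and `RemoteCache` types and the `resolve` helper are hypothetical illustrations, not Tuist's API: the essence is that everything that can influence the output gets hashed into a fingerprint, and that fingerprint becomes a cache key any machine can look up before compiling.

```swift
import Foundation
import CryptoKit

/// Everything that can change the output of compiling a target. A real system
/// would also cover details such as the compiler version and target platform.
struct BuildInputs {
    let sourceFiles: [Data]              // contents of the target's source files
    let compilerFlags: [String]          // flags passed to the compiler
    let dependencyFingerprints: [String] // fingerprints of upstream targets
}

/// Derive a stable fingerprint: identical inputs always hash to the same key,
/// so any machine that compiled the same thing can share the result.
func fingerprint(of inputs: BuildInputs) -> String {
    var hasher = SHA256()
    for file in inputs.sourceFiles { hasher.update(data: file) }
    for flag in inputs.compilerFlags { hasher.update(data: Data(flag.utf8)) }
    for dependency in inputs.dependencyFingerprints { hasher.update(data: Data(dependency.utf8)) }
    return hasher.finalize().map { String(format: "%02x", $0) }.joined()
}

/// A hypothetical remote cache interface.
protocol RemoteCache {
    func artifact(for key: String) async throws -> Data?
    func store(_ artifact: Data, for key: String) async throws
}

/// Fetch the artifact if anyone has built these inputs before; otherwise
/// build locally and publish the result for the next machine.
func resolve(_ inputs: BuildInputs,
             cache: RemoteCache,
             buildLocally: () async throws -> Data) async throws -> Data {
    let key = fingerprint(of: inputs)
    if let cached = try await cache.artifact(for: key) {
        return cached // someone else already paid the compilation cost
    }
    let artifact = try await buildLocally()
    try await cache.store(artifact, for: key)
    return artifact
}
```

Because the dependency fingerprints feed into the hash, a change deep in the graph naturally invalidates everything above it, which is the cascading-rebuild behavior described earlier; a shared cache just ensures each invalidated target is only ever compiled once, somewhere.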
What if we could overcome these physical limitations of the network in a way that isn't coupled to any particular environment? What if the opt-in process were as simple as creating an account on Vercel and deploying your app there? What if the same caching layer worked seamlessly across your laptop, your CI pipeline, and your cloud development environment, intelligently routing requests to the nearest edge location and making smart decisions about what to fetch versus what to build? Would you say no to that?

And what if it were complemented with the right interfaces for your agents to help you understand and optimize your builds over time? AI can do more than just write code. It can analyze build graphs, identify bottlenecks, and suggest structural improvements that make your compilations more effective. It can spot that one target that gets rebuilt constantly because of an overly broad dependency declaration. It can recommend where to draw module boundaries for better parallelization. It can help you understand why a particular build took longer than expected and what you can do about it. The combination of fast infrastructure and intelligent tooling is more powerful than either one alone.

There's a real opportunity here, and we want to explore it at Tuist. We're taking the first steps to build the infrastructure that makes this possible. We have a strong foundation of technologies and ideas, and we're wrapping everything in a developer experience that democratizes access to faster builds and tests.

I don't believe this kind of investment should be exclusive to large enterprises. The companies that can afford dedicated platform teams and custom build infrastructure already have solutions, even if those solutions are expensive to build and maintain. It's everyone else who gets left behind: the startups moving fast and accumulating build debt, the indie developers whose side projects have grown into real products, the small teams who just want to ship without watching progress bars. More and more of them will want their agents to be productive at the tasks they're assigned, and slow builds are a tax on that productivity.

When I think about the shape this could take, I can't help but think of Supabase, which lets you spin up a database wherever you want with the click of a button. Or Vercel, where you don't think about where your functions run; you just trust that they'll execute close to where requests originate. Or Cloudflare, which turned the complexity of global edge infrastructure into something any developer can use without thinking about regions or replication. These platforms are used by indie developers and large companies alike. People pay for the developer experience as much as for the underlying infrastructure. They pay for not having to think about the hard problems so they can focus on their actual product.

Tuist can provide developer productivity as a service, built on an open foundation of technologies and with the developer experience we've always strived to create since we wrote the first line of code in that Xcode project generator. We've spent years understanding the intricacies of build systems, the pain points of mobile development, and the patterns that help teams scale. Now we want to take that knowledge and make it accessible to everyone, not just the teams who can afford to figure it out themselves. The goal hasn't changed. We just have a clearer picture of what's possible, and we're excited to build it.