Latest posts from Pedro Piñera
-
Ideas and Expression
Dec 03 ⎯ The other day I had an idea for how we could make agents more effective when working with Xcode's toolchain, and I went on an agentic journey to validate it and write about it in a blog post. These kinds of explorations play a role in generating ideas for how we could push the boundaries of Apple development, and they also serve as a marketing tool. Developers enjoy in-depth blog posts over shallow, heavily SEO-optimized ones (e.g. "5 things you didn't know about Xcode"). But I'm not going to talk about the content of that blog post here. Instead, I'd like to share how LLMs, and in particular Claude Code, helped me throughout the process. Months ago, I was very much on the fence about using AI tools to write and publish content on the Internet. You might have heard the term "AI slop," and for some time, I thought anything produced with AI was slop. But at some point, I realized that the slop content was being produced by the same people who were already producing slop before, just at a slower pace or with at least some personality. I can't pinpoint exactly what changed my mind, perhaps reading some blog posts that had been written with AI, where the author had placed the focus on the ideas they wanted to share and used AI as a tool to communicate them the way they wanted. I believe it was a blog post by Peter Steinberger. So I started doing the same. I separated the ideas that I wanted to share from how they'd get shared, and oh god, the whole process became so much fun: not just writing blog posts, but also coding, coming up with social publications, or even preparing presentations with tools like sli.dev, which let you write slides in Markdown. When there are ideas in my mind, the first thing I need help with is structuring them and making the connections.
In the case of Tuist's blog post, the idea was to take the build system's structured data, persist it, and expose a queryable interface for agents to consume. I knew what we needed was in Apple's open source codebase, but the amount of code there is so vast that it would have taken me days or weeks to build the overview I'd have needed to do the exploration. And here's the thing: without AI, I wouldn't have even thought about it in the first place. My time is very limited, so I can't afford to do that, but AI changes that, and suddenly allows me to learn and validate ideas in minutes. Slowly, I built a good picture of the contract between Xcode and the build system, and started brainstorming what value we could derive from that data to help teams optimize their builds and project setup. This was so much fun. I had ideas like contention points for parallelization, or slowest compilation branches that approximate how long the compilation of the whole app is going to take. And the agent went beyond that, making other suggestions and proposing a CLI interface to query that data from an SQLite database where it would be persisted. In a few iterations, I had a working executable that I could run against Wikipedia's build. It worked and analyzed the data. And as we were both finding out about the possibilities of the idea, the AI would capture them in the blog post. And here's where how ideas are expressed comes into play. I'm not a native English speaker, so my usage of the language is fairly basic. Leaning on a tool to express the ideas is a real unlock for someone like me, for whom it takes 2x or 3x the effort to make the English sound like the Spanish version I'd write. If anything doesn't sound right, or sounds flat and without personality, I just iterate on it. And this brings me back to what I said before.
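To make the "queryable build data" idea concrete, here's a minimal sketch of what such an interface could look like. The schema and task names below are hypothetical, purely for illustration; the actual schema from the exploration isn't reproduced here:

```python
import sqlite3

# Hypothetical schema: one row per build task, as a build system could
# persist it after a build. Names and durations are made up.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE tasks (id INTEGER PRIMARY KEY, name TEXT, "
    "duration_ms INTEGER, depends_on INTEGER REFERENCES tasks(id))"
)
conn.executemany(
    "INSERT INTO tasks (id, name, duration_ms, depends_on) VALUES (?, ?, ?, ?)",
    [
        (1, "CompileSwiftModuleA", 4200, None),
        (2, "CompileSwiftModuleB", 1800, 1),
        (3, "LinkApp", 900, 2),
    ],
)

def slowest_tasks(conn, limit=2):
    """Return the slowest tasks: a first approximation of where a build spends its time."""
    rows = conn.execute(
        "SELECT name, duration_ms FROM tasks ORDER BY duration_ms DESC LIMIT ?",
        (limit,),
    )
    return list(rows)

print(slowest_tasks(conn))
# → [('CompileSwiftModuleA', 4200), ('CompileSwiftModuleB', 1800)]
```

A CLI on top of a database like this is what lets an agent ask questions like "which compilation branch dominates the build?" without re-running anything.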
Whoever doesn't care about how things are expressed will just take the output, package it, and put it on the internet. It's the same with code. How many people have you seen who don't look at the code anymore? They are probably the ones who didn't care much about the craft before agents existed. It's the same thing here. And for me it's not black or white in terms of doing everything with AI. I'm developing an intuitive sense of when I'd like to write something with AI. That blog post on Tuist? It felt like a natural choice. I treated it as an exploration whose output I knew would be a blog post and a tool. This one? I'm writing it myself, letting the words flow, not paying too much attention to the grammar, and focusing on capturing the messiness of ideas currently in my head. I'm doing so with the peace of mind that AI will do a review. I don't know about you, but I find it crazy how enabling the technology is, and how my perception of it keeps changing. I read the Tuist blog post multiple times before publishing it, and there were pieces in it that blew my mind because I couldn't have come up with them myself. So as I said before, the people who want to produce slop, whether it's code or content, will do that with or without AI. And the ones who deeply care about the ideas they want to put out there, and the craftsmanship that makes them possible, will treat the technology as a tool of empowerment, one that helps us convey those ideas in the best way possible and prototype more things by eliminating a good chunk of the friction.
- ai
- content
-
Reviewing your agents' work locally with Guck
Nov 22 ⎯ The other day, while talking with Marek about how we use agentic coding tools in our daily workflow, I realized that the pattern of opening a draft pull request just to use GitHub's PR review interface might be more common among developers than I initially thought. This practice of leveraging GitHub's review UI to systematically review work, leave comments, and mark files as viewed or not viewed has become second nature to many of us. I particularly appreciate GitHub's user interface and the way it incorporates features that transform the review experience into something resembling a checklist. This checklist-like approach is especially handy because it allows me to clearly flag whether a piece of work has already been reviewed by me, preventing duplicate effort and ensuring nothing slips through the cracks. This conversation sparked an idea: what if we could bring this same powerful reviewing experience into a local development environment where it works completely offline? Better yet, what if it could be integrated as an MCP server that plugs directly into agentic coding tools like Claude Code or Codex, enabling these tools to read and resolve comments left during the review process? That's exactly how Guck came to life. The name Guck is derived from the German verb "gucken," which means to take a look or to have a glance at something. This etymology perfectly captures what Guck is all about: it's a tool designed for both you and AI agents to collaboratively review the work that has been done and iterate on it together in a seamless manner. Guck is a command-line interface tool that you install on your system and configure in your shell environment. Once properly configured, it intelligently starts and stops a server automatically as you navigate into or out of directories that contain a Git repository. This automatic behavior means you don't have to remember to manually start or stop anything. 
When you're working inside one of these directories, you can simply run the guck command, and it will open a web interface in your browser where you can begin reviewing your work. The graphical user interface supports marking individual files as viewed (similar to GitHub's functionality), and it also allows you to leave detailed comments on specific lines of code. These comments can then be read and acted upon by AI agents using the MCP protocol, creating a true collaborative workflow between human and machine. The state of your reviews, including which files you've marked as viewed and all the comments you've left, is persisted across Guck sessions on your system. This persistence is associated with the specific repository, branch, commit, and even the exact line number where comments were made, ensuring that your review context is never lost. Pretty cool, isn't it? Guck's UI is very similar to GitHub's review UI: you see a list of files and the changes in them, and you can leave comments and mark files as viewed or not. The project is open-source and released under the MIT license, which means you're free to use it, modify it, and adapt it to fit your specific needs. I'd love for you to give it a shot and see how it fits into your development workflow. If you find it useful or have ideas on how to make it even better, I'd genuinely appreciate your contributions. Whether you want to open a pull request with improvements, report bugs, suggest new features, or simply share your thoughts through GitHub issues, all feedback is welcome. The beauty of open-source is that we can build and improve tools like this together as a community, making them more robust and useful for everyone who relies on them.
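As a rough illustration of the persistence model described above (this is not Guck's actual implementation, just a sketch of the idea), review state can be thought of as comments and "viewed" flags keyed by repository, branch, commit, file, and line:

```python
from dataclasses import dataclass, field

# Sketch only: Guck's real storage format isn't shown here.
@dataclass(frozen=True)
class ReviewKey:
    repo: str
    branch: str
    commit: str
    path: str
    line: int

@dataclass
class ReviewStore:
    comments: dict = field(default_factory=dict)  # ReviewKey -> list[str]
    viewed: set = field(default_factory=set)      # (repo, branch, commit, path)

    def add_comment(self, key: ReviewKey, text: str) -> None:
        self.comments.setdefault(key, []).append(text)

    def mark_viewed(self, repo: str, branch: str, commit: str, path: str) -> None:
        self.viewed.add((repo, branch, commit, path))

store = ReviewStore()
key = ReviewKey("guck", "main", "abc123", "src/server.rs", 42)
store.add_comment(key, "Consider handling the error case here.")
store.mark_viewed("guck", "main", "abc123", "src/server.rs")
print(len(store.comments[key]))  # → 1
```

Keying state this way is what makes comments survive across sessions without ever attaching to the wrong commit or line.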
-
Rethinking Developer Infrastructure
Nov 12 ⎯ In some of my coding time at Tuist, I've been tinkering with what a solution with a great developer experience would look like for optimizing workflows (build, test, and custom automation), a need that's becoming more and more pressing as AI agents write more code to support development. Traditionally, teams resort to bumping the environment spec—for example, with beefier CI environments and developers' laptops—which places the focus on the container while disregarding the content, likely because of a short-term mindset. It's easier to just throw money at hardware and kick the can down the road, but sooner or later you find yourself in a position where you can no longer ignore the content. We see this at Tuist with Xcode projects. I experienced it at Shopify when they decided to transition from native mobile development to React Native, and most recently I've seen them start to invest in building developer tooling with Nix at its core. While I was there, they also spun up a test infrastructure team to be selective about which tests to run and to mitigate the flakiness that caused slow retries and prevented them from shipping faster. Strategically, prioritizing solving the problem in such a generic manner is a bad idea for the business, which is why we're considering stepping into ecosystems closer to ours as our next steps—like Android and infrastructure—without which we can't build the level of performance that we want our users to have. However, and this is a bit of my personality, I like playing with what a different future might look like, thinking about a potentially better future and building a path backwards to the present. As part of this exercise, there are a few patterns we've noticed, a few ideas about how the space might evolve, and how that developer experience might look. All this experimentation is happening at Fabrik, which is a materialization of my experiments.
The experiments there gravitate around the following ideas:

Key Observations

- All build systems will have caching capabilities, and alignment on the details of the contract is unlikely.
- Companies that adopted Bazel might consider switching back to the native toolchain, reducing their maintenance costs significantly and helping advance the official toolchains instead. Bazel remains ahead in terms of capabilities, but it's an ecosystem fork, which helps with innovation but makes it inaccessible for many organizations.
- Build systems' caching contracts will leave many details out, like how to secure the interfacing with the systems necessary to distribute the cache, or the attribution of interactions to users or projects. These details, left out by build systems, will be crucial for companies that care deeply about security or want deeper insights, like cache effectiveness per environment. Companies providing cache as a service that go this extra mile will be solving the same problems for every build system, duplicating efforts unnecessarily.
- If caching capabilities materialize, then the network latency and bandwidth between the machine and the cache become more important than the machine spec itself, since a warm enough cache will turn most build operations into I/O operations. This is an interesting future for us because it means we don't need to deploy machines ourselves and can instead lean on big players like AWS that have figured out how to colocate instances and ensure the routing between them is as fast as it can be.
- The CI runners space is already getting crowded, and it'll get even more crowded since it's a replicable business model, which will push some players to move vertically and aim at providing new solutions—likely caching and potentially other build- or test-related solutions.

This is where things get interesting, and where I want to talk more about Tuist.
The other day I was reading an update where someone mentioned partnering with AWS to give a workshop on how to use EC2 machines as runners, and I couldn't help but think about how much of this reality is already happening. Developers don't like it when service and technology are the same thing, or are strongly dependent on each other, like Next.js & Vercel, which for many years had internal designs that gave Vercel an advantage over competitors. It's a fine line to walk: if you walk it well, you can have Vite (commodity) and Vite+ (service); otherwise, you end up as a company struggling to put its proprietary tech into the hands of users. Forks are costly. Bazel is a fork. Dagger is a fork. Forks bring innovative ideas, but it's a huge risk to build a business on one because many companies will see the investment as a liability from the get-go, so you have to convince them the investment is worth making. Tuist's generated project is a fork of the Xcode project format, which we'll hopefully be able to move away from at some point; but at the end of the day, you get a standard Xcode project, so it was easy for us to reassure organizations that this is not a liability. Any investment that we make has to be incrementally adoptable, and we should embrace what people use every day, flaws included, even if we disagree with the design—like YAML for CI pipelines, or the fact that runners as a service is a trend you'd better align with.

Tuist's Strengths

One of the things Tuist does great, and which I'm very proud of, is its community, which took years of nurturing and building, along with our openness and our taste for great developer experience. Since its inception, Tuist has gravitated around the Apple ecosystem, mostly reverse-engineering Apple's tools and accidentally building a moat that helped us get the business off the ground.
The Fabrik Experiment

So I created a repo, Fabrik, and started playing with the idea of separating automation tools like scripts, test runners, and build systems from the infrastructure that gives them extra capabilities, in the same way shipping containers separated the goods being delivered from the means of transport. Initially, I focused on caching, but as I kept coding with the help of agents, it became more and more obvious that such a technology should span build systems, test runners, and custom automation (e.g., scripts), and give teams visibility into how those tools are performing, along with optimizations to ensure they do the most efficient job possible. This is where the infrastructure would come in, but more on this later. A technology that, like shipping containers, would solve the fragmentation—something I don't expect to change, since every tool lives within its own ecosystem, and while there is some alignment, it's not as widespread as many would like it to be.

Authentication and Developer Experience

The first idea I played with was: can we move from a "one token for everyone" model to a system that requires me to authenticate to access the cache and activates the right configuration, no matter which build system I'm using? It turns out it's possible, and a local proxy with a mapping from project directory to port, plus a hook or command to activate the server, can lead to a pretty great developer experience. Your project just requires a fabrik.toml file, and voilà—you'll be able to just cd into a directory and interact with your project with caching enabled for you, magically. I played with Turborepo, Bazel, Xcode, Metro, and Gradle (I have more on the list) and got all of them working. Some can only be configured via configuration files or arguments passed at invocation time, but I'm thinking about starting conversations in those repositories to change that.
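As an illustration of how small such a per-project configuration could be, here is a sketch of a fabrik.toml. The keys are hypothetical, since Fabrik's actual configuration format isn't shown here:

```toml
# Hypothetical fabrik.toml, illustrative only
[project]
name = "my-app"

[cache]
backend = "https://cache.example.com"

[tools]
# Tools whose cache configuration gets activated when you cd into the project
gradle = true
xcodebuild = true
turbo = true
```

The point of the design is that the file declares intent, and the directory-to-port proxy plus the shell hook do the rest when you enter the directory.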
It's likely that those projects never thought of this as an opportunity to improve the developer experience. As I worked through the build systems, I noticed how all the APIs were semantically very similar, but the conventions differed, which made me think about how cool it would be to unify the contract beyond the local layer so that one doesn't have to implement the contract of every build system. This naturally led me to fabrik exec as a wrapper to activate the environment on the fly, instead of doing it with shell hooks when entering or leaving directories. Thanks, Mise, for those awesome ideas. You continue to be amazing!

Caching Scripts

From there, I paused and started looking at Nx (speaking of companies that mix technology and service), and once again learning from Mise, I thought: why not bring caching to scripts? I couldn't help but think about those GitHub Actions cache steps whose artifacts can't escape CI. It didn't feel right. If CI has run npm install and resolved dependencies, can't I just skip the resolution phase and pull them from very fast storage? This is the premise of Nx, which went from a task orchestrator for the JS ecosystem to a task orchestrator for any platform (likely to match the expectations that came with their investment). Can this be simpler? It turns out it can! I'll throw in an example here that speaks for itself. That's it: your installation of dependencies is cached—on CI, locally, and in agentic environments. Instead of creating a new configuration file to declare the caching attributes of existing scripts, you can just use annotations in the script, and Fabrik will take care of the rest. It's crazy that this implementation took me just a few minutes with the help of agents and replicates a big portion of the value that Nx provides.

Build System Integration

Things started to get interesting.
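Picking up the script-caching idea from the section above: an annotated script could look something like this sketch. The annotation syntax is entirely hypothetical, since the original example isn't reproduced here:

```sh
#!/usr/bin/env sh
# Hypothetical annotation: declare the inputs that invalidate the cache
# and the outputs to restore on a hit, then let Fabrik skip the command.
# fabrik:cache inputs=package.json,package-lock.json outputs=node_modules
npm install
```

The appeal of comment annotations over a separate config file is that the caching declaration lives next to the command it describes.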
What if Fabrik could also be a library that build systems could directly integrate, either by importing it via a foreign C interface or by directly invoking a CLI for interacting with the cache? Suddenly tools could bring caching capabilities without having to reinvent the interfacing with the infrastructure, which would be designed to accommodate multiple backends. So there I went and coded a C interface, which is distributed in every release as a static library that build systems can link. We could eventually make Tuist's module caching and selective testing plug into it and delegate a big chunk of the responsibilities to Fabrik.

Authentication Flow

As next steps, I want to explore what authentication could look like in this world. Early thoughts led me to the OAuth2 Authorization Code flow with PKCE, so that the user can be taken through a web-based authentication workflow, after which an access and secret pair is persisted and managed by Fabrik automatically, ensuring that refreshes happen while accounting for the concurrent nature of the requests. The caching layers needed for low latency and high bandwidth can then validate the token without having to go to the server, which would carry a high latency penalty—precisely what we want to avoid. Thanks to this, we can add attribution data as well as environment data to the interface of those layers, so that any service using the technology can augment data such as hits or misses with things like which environment the data came from or at which layer a hit resolves. Also, if a person leaves the organization, even if they took the access and secret keys with them, they won't be able to renew the session once it expires.

Two Key Directions

From there, there are two directions that I'd like to explore.

1. Layering of Caching

The first is the layering of caching. Build systems with a CAS need the cache to have the lowest latency and highest bandwidth possible to get results.
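The PKCE part of the authentication flow mentioned above boils down to very little code. Here is a minimal sketch of the verifier/challenge pair as defined by RFC 7636 (the S256 method); everything around it, like how Fabrik would store and refresh tokens, is out of scope:

```python
import base64
import hashlib
import secrets

def make_verifier() -> str:
    # RFC 7636 recommends 43-128 characters of unreserved characters;
    # 32 random bytes base64url-encoded without padding yields 43.
    return base64.urlsafe_b64encode(secrets.token_bytes(32)).rstrip(b"=").decode()

def make_challenge(verifier: str) -> str:
    # S256 method: BASE64URL(SHA256(verifier)), without padding.
    digest = hashlib.sha256(verifier.encode("ascii")).digest()
    return base64.urlsafe_b64encode(digest).rstrip(b"=").decode()

verifier = make_verifier()
challenge = make_challenge(verifier)

# The authorization server stores the challenge sent with the authorization
# request; at token-exchange time it recomputes it from the presented
# verifier and compares, so an intercepted authorization code is useless.
assert make_challenge(verifier) == challenge
print(len(verifier))  # → 43
```

The property that matters for the CLI use case is that the verifier never leaves the machine that started the flow, which fits a tool that opens a browser and waits for the redirect.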
Unlike running the operations locally, which has CPU cores as a ceiling, the network allows more parallelism, but if latency and bandwidth are not great, then build times might end up slower or not as good—especially in build systems like Xcode's that have not yet been optimized for remote caching. This is where the companies that provide CI, or runners for CI, have a moat, because they can colocate runners and cache machines. Hence the strong appetite for solving caching that we're seeing from them. We're going to take a different approach—one that works no matter where your builds are running from: locally, on CI, or from agentic environments. This requires two things: a technology (Fabrik) that's designed for this, and managed infrastructure that can run the technology closer to where it's needed. Conceptually, this is not new. Supabase can manage a Postgres database where you need it, replicas included, and Vercel can run your functions close to where they are needed. There's a caveat with the Vercel model that many don't realize: any win from reducing latency by running on the edge usually gets diminished when you have to go all the way to another region to hit a database. Think of Fabrik managed by Tuist as cache on the edge with a nice developer experience, where you can choose where you need the caches and, why not, deploy some in offices or enable P2P sharing of binaries within the office's local network. Some companies do something like this for large enterprises, but it's not productized. It requires huge investment and a lot of engineers dedicated to a single tenant for weeks to get it up and running. Deploying a Fabrik cluster should be as easy as clicking a button on a UI, and plugging into it should be as easy as installing the CLI and logging in. This is where we need to be.

2. Testing and Telemetry

The second area I'd like to explore is testing and telemetry.
In this case, the low-latency and high-bandwidth design requirement is not there, but there are some common needs, like test selection and flakiness tooling, that are once again solved over and over, either in-house or by third-party vendors. How much of that can be separated from the vendor, who could then focus on the UI layer that interfaces with the solution? No idea at this point. I even wonder if Fabrik should just be about caching, but I can't tell until I explore further. Telemetry is also something teams need. In the same way you can monitor what's going on in your remote servers, I think you should be able to do the same with build and test runs and custom scripts. This can range from the simplest form—capturing the standard output and error of those environments, which is enough to run some diagnostics—to something more structured using real-time data as build graphs are processed by the build system. In this area, I think Dagger Cloud has done an amazing job providing a visualization tool for the execution of their DAG, and we should learn from it. Again, I'm not sure if this is a Fabrik thing, but I'll explore and see. Fabrik might act as a standardization layer and maybe plug itself into build systems and test runners at runtime so that the information is streamed to the server live.

[Screenshot: Dagger's web-based UI showing how a Dagger DAG is executed.]

Looking Forward

I haven't been this excited in a long time. Shaping Fabrik reminds me of the early days of framing Tuist. The more I dive into other build systems and bring in the best DX from elsewhere, the more confident I feel that the technology-and-service split is the way to go, and that we need to step into infrastructure to make the DX that we want to provide a reality.
While our main customers will remain large enterprises, we want to make Tuist feel more like Supabase—something you can set up yourself if you're just getting started and that grows with you all the way up to global development teams with agents helping from their coding environments.
-
Staying Human with AI
Nov 11 ⎯ First of all, I have to say I'm excited to start publishing on Fika, a tool by a friend of mine, Pau, who believes a different Internet is possible and is proving it with tools like Fika, which feel so unique in a tech world that gravitates towards pattern homogeneity. I'm currently using Ghost, of which I'm a happy subscriber, but I'd rather support friends, especially when they are trying to build something different. I thought I'd write a first blog post talking about how I use AI these days, something that keeps changing and has little to do with how I used it months ago. For context, I'm a builder turned founder and now the CEO of Tuist, a productivity toolchain for app developers, and we are a small company of 4 people. Because of that, my responsibilities span several domains: doing sales calls, writing content for marketing, doing some support, fixing small issues, and so on… Because I find strong joy in building, I also allocate some time every day to do some building that brings me joy and balances the other tasks and responsibilities that are less fun to navigate.

Coding

My coding these days divides into two categories. On the one hand, I tackle small issues or bugs as I notice them arise. I try not to take on big projects because the frequent context switching the company's state requires would put me in someone else's critical path or drag a project on forever. Also, my ability to go deep into a particular problem is very limited due to the context switching, and this is where agentic coding tools come in quite handy. First, I use tools like Codex & Claude Code to go deep into understanding a particular issue or bug, and have a conversation about what the best solution might look like. I treat them as pairing partners, and we focus initially on understanding the trade-offs of all possible solutions.
This is a lot of fun, because they do the exercise of going deep into the codebase and the problem, and then I can leverage my expertise (or gain some) to guide them to the best path. That work usually results in a small PR, which I usually double-check, but at times I miss details because of a bit of ADHD (I'm already thinking about the next priority). The second piece of coding work that I do is more creative and long-term. I enjoy thinking about how Tuist could evolve 1, 2, and 5 years from now, so I prototype ideas. This is the most fun part, and it reminds me a lot of the beginning of Tuist. It's a blank canvas. When working on the day-to-day responsibilities and priorities, the work is very constrained. The immediate priority is reaching profitability within the Apple ecosystem, where we have our moat today, while respecting the mental models that people have built around what you are, even though in your mind you have ideas for what it could become to unlock new business opportunities. This is frustrating, and I hope to talk about it in future blog posts. And wow, AI is so amazing for these explorations. I like to read about what's happening around us and in other ecosystems, and I capture the ideas that I believe are good and the ones I believe are bad, then have very interesting conversations with LLMs and agentic coding tools about them, and we shape a solution together. When I'm doing this, the programming language is an implementation detail. I know which language would be the most suitable for the solution, so I suggest it early, but once the agent starts coding, I just focus on the architectural, DX, and testing patterns that I believe are worth embracing, and don't care too much about the style of the code. I check the tests, though, to ensure they capture well what I expect the software to do.
Oh, and I forgot to mention that I also use coding agents to learn about new codebases or contribute fixes to them without having to spend days gathering enough context to make a contribution. It'd be more accurate to say that I do care about the styling; it's just that I believe this is something toolchains have finally solved, so I delegate that task to tools like cargo fmt or mix format.

Sales

One of my responsibilities consists of turning prospects into customers. This involves having frequent conversations with customers, initially over email and eventually over Slack, which we usually connect through. In those first emails, I want to make sure the message I'm trying to send lands well on the receiving end and takes into account the cultural aspects that are part of the sales process. For that, I draft an email myself and then lean on LLMs to help me iterate on it grammatically, ensuring the message is well captured and will evoke an action on the other end. I believe a well-written message can make a huge difference. I like to write those myself because when they are written by an LLM, they don't have my personality, and that's something I don't like. If you're wondering: I copy the email into the LLM's desktop app, ChatGPT or Claude, and then copy the iterated message back into the field. I'm not using those agentic experiences built into browsers. Perhaps I will at some point? I got so used to the back and forth that the pattern solidified, and now I don't see much value in changing it. I quite like the jump between apps. I wish more CRM tools used AI to assist in the sales process. We are excited about operate.so, but it's still a work in progress. We are currently using Attio, but I think they are missing the opportunity to rethink the solution with AI at its core. Sure, they've sprinkled it here and there, but the needs that I have, which I think AI would solve very nicely, remain unaddressed.
Some examples:

- Reminders about when it's a good time to do a follow-up, and which action items remain unaddressed.
- Important notes that one should know before jumping on the next call with a lead.
- Automatic reminders when it's a good time to start talking about renewals.
- Identifying the different roles in the conversations happening across all the channels: emails, calls, Slack.

Based on what we've seen, I think CRM tools will get there, and I couldn't be happier. I feel sales is an area where a lot could be automated with AI. I believe a human needs to be in the loop, and in our case that needs to be the founders, because we bring the technical credibility, but everything else can be automated. I'm seeing a lot of sales tools that automate outbound emails, or demos, or sales calls, but I don't know… it gives me "Black Mirror" vibes, and I have my doubts this will work with developers. We are humans, and commerce is human too, especially in B2B deals like the ones we are doing.

Content

A lot of Tuist's marketing is done through content, where we share not only product updates but also how we see our space and the vision we have for it. I wish I did more of this, but as I mentioned earlier, content writing, like deep coding sessions, suffers because my work these days entails a lot of context switching. Most of my blog posts start in iA Writer on my iPhone; some are even completed there, and then I do an editing pass on desktop and publish. Other times I just use iA Writer as a place to jot down ideas, and later on, when I have a quiet moment, I sit down and turn those ideas into a blog post. Where do I use AI, you might ask? I do so when reviewing the grammar of my blog posts. I used to use Grammarly for that, but I've replaced it with LLMs now.
I'm very clear about not changing the style or the personality, focusing just on the typos I might have introduced; since I'm not a native English speaker, there are many tiny things that I miss without noticing. I tried to write some blog posts with AI, doing a bunch of iterations, but I ended up giving up because it didn't sound like me. I really enjoy when there's personality in the content someone publishes, like in Pau's posts, and I don't want that to be lost to the productivity mindset that AI tools bring to the table.

Research

When there's something I want to learn about, I go and ask LLMs. This has almost completely replaced Google for me; I only reach for Google when I know exactly what I'm looking for, which is often not the case in my day-to-day job, where I've heard of a technology or a concept and want the LLM to expand on it and share some references with me so that I can keep digging if I need to. I tried using tools like Obsidian or Logseq to keep track of the ideas I learn and connect them, but I don't feel the need for those tools anymore. Sure, I'm dependent on some LLM having that interconnection of ideas, which is not something I control, and I'm also very reliant on my brain, but so far it's been working fine for me. Jumping between iA Writer, LLMs, and sometimes paper is a setup I feel very comfortable with. In fact, I have to admit I enjoy a bit of chaos when doing this type of work, because I believe chaos and creativity go hand in hand. When I try to eliminate the chaos with structure, for example by trying to find the right Logseq structure to connect notes, things start to feel less creative and fun.

Reading

I enjoy reading the blog posts that are worth reading, and you can usually tell which ones those are after a few sentences.
However, there are others where you can tell right away that they're the "it could have been a meeting" kind of post, so when I come across those, I just go straight to the LLM and ask if there's anything meaty in them worth learning from. Oftentimes the core of the blog post is as shallow as its first lines, but in other cases there might be something substantial hidden in it, so I just get the summary and move on. I try not to do this a lot, because my brain, which tries to optimize where possible, will gravitate towards doing the same with every piece of content, as if the function of every piece of content were for you to derive something meaningful out of it, and not simply the joy of reading. I'd say my usage of AI is very basic compared to what I'm seeing out there, but I'm happy with it. While it's a productivity tool that helps me focus on the things that bring me the most joy and automate some of the ones that don't, I don't treat it as a tool that should automate everything, including writing, or writing some code here and there, because then everything would revolve around productivity, making everything that I do feel miserable in some way. I don't want everything in my life to be optimized towards productivity. I want things to be done for the joy of doing them, and that's the reason why I don't get obsessed with trying to put AI everywhere.
- ai
- founder