A website.

So, why did I not use AI for this website?

Truth be told, I could've made this entire thing with Claude, or Cursor, or any of the (at this point) hundreds of agentic programming services.

However, for a personal website, I think it's important to keep as much of a personal touch as possible — I want each and every visitor of this site to be able to intuit as much of my personality as possible, and I want to demonstrate that I actually have one, and that I'm capable of writing a few paragraphs without needing ChatGPT. To say it explicitly: I didn't write (or revise) any of this with AI (yes, I really do use this many em dashes in my day-to-day correspondence — I find it helps to map out the way my brain structures my thoughts as closely as possible, in conjunction with colons, semi-colons, and parentheticals). In any case, I try to write how I talk — hopefully not to the detriment of your ability to parse what I'm saying. Simply put, I think authenticity is what matters most for a personal website. If you want bullet points instead, read my resume.

What about AI usage in {insert professional context here}?

I am extremely aware of the fact that software developers are in a productivity arms race (and indeed, have been since before I was born). If I don't use AI, there are 50,000 20-somethings out there who will, and the truth of the matter is that companies care about sheer code output a significant amount (even if it's to the detriment of code quality and the ability to debug or even explain the code that came out of the magic sycophant box). With this in mind, I would be remiss not to mention the fact that I have used Agentic Coding™ tools in my professional workflow, and in fact use them to this day.

When you're working at a startup, getting tasks done as soon as possible (within acceptable margins of quality) is simply a necessity, and working on those smaller teams tends to mean that you have to have a greater code output on an individual level than if you were working in a larger team. For the sake of employability, yes, I am comfortable with using agentic workflows to speed up task delivery. I use Claude regularly to ship as fast as reasonably possible, from implementation to debugging (but, it is worth mentioning, never during ideation/task shaping). I still write a lot of my code, regularly have to rewrite/fix what Claude puts out, and never assume that Claude has written something flawless. However, there are many, many, many concerns that I harbour over the future of AI and its role in the world, so I try to limit my usage as much as is feasible given my occupation. TL;DR: I'm concerned about the effects of AI, but I recognize it's necessary to be competitive as a SWE. Given that this website is a personal project more than anything else, I see no need to use AI.

Why would I make AI summarize a book or the meaning behind a painting when I can luxuriate in the process instead and find fulfilment in it?

Does this mean you hate AI? What about {use case that is beneficial to humanity}?

No, I don't hate AI. I think conversations around the impact/usage of AI need to have nuance (for example, are we specifically talking about generative AI? If so, are we talking about LLMs specifically? When talking to a non-technical person about AI, do they understand the difference between Skynet and a glorified statistical regression model? Does the person you're having a conversation with legitimately believe LLMs are becoming better at reasoning, or just at being an LLM?), and I freely acknowledge that there are some uses of AI that are incredibly beneficial to humanity (like in medical imaging, for example, although there are definitely still pitfalls to consider there). I just think that we should avoid buying into the hype (and, depending on who you are, the doom) of the "X Industry is Over" headlines, as those articles generally imply a level of fidelity that is perpetually "down the stack". I think that we as software engineers will experience the law of diminishing returns firsthand with regard to the increasing capabilities of AI, and as such I think any adoption of AI must be done with this in mind.


As a post-scriptum: even Anthropic has noticed a drop in code comprehension between AI and non-AI users. Granted, this is a tiny, tiny study (n=52), but if even they are upfront about the loss of capability that results from the regular use of AI, I feel safe bringing up my previously mentioned concerns without worrying about coming off as a Luddite.

Thanks for visiting <3