On the recent Episode 678 of ATP, Marco, John, and Casey shared a handful of thoughts about Artificial Intelligence. I really enjoyed their discussion and wanted to collect some of the links from their show notes here, adding a few thoughts of my own.

From the blog post We mourn our craft, by Nolan Lawson:

The worst fact about these tools is that they work. They can write code better than you or I can, and if you don’t believe me, wait six months.

Nolan Lawson (nolanlawson.com)

At the end of last year, I was ready to buy into the “AI hype train will crash in 2026” line of thinking. Now, I haven’t quite swung a complete 180, but I’ve certainly changed my opinion from these things being glorified chatbots to agentic AI coding solutions being powerful tools.

I really do feel for those who love writing code, though. Some of my favorite iOS apps are from indie developers who just outright love the craft of writing code. AI code generation like Codex and Claude Code is coming for those folks in a major way. A year from now, I believe things will simply look a lot different.

Steve Troughton-Smith proved how powerful these tools are by building three apps in a week.

He was porting existing code and he is an experienced developer, but for the $20/month OpenAI subscription price… this is amazing stuff.

An interesting study from Anthropic on How AI assistance impacts the formation of coding skills had this tidbit:

Major LLM services also provide learning modes (e.g., Claude Code Learning and Explanatory mode or ChatGPT Study Mode) designed to foster understanding.

How AI assistance impacts the formation of coding skills

I did not know this was a thing, but it’s very nifty, and it makes a lot of sense that it exists. Most new technologies come with problems that can seem terrible, yet can often be overcome using that same technology or others contemporary with it. Treating these problems as opportunities, rather than disqualifying the technology outright because of its foibles, can be key to incorporating the tech into our world without causing the worst outcomes many fear.

Also of note: how participants in this study interacted with AI had a meaningful impact on the outcome of the task. Just like anything else, whether phones, cars, or ultra-processed foods, how we as humans interact with these things is a big part of the story of how they affect our lives and society as a whole, for better or worse.

The Atlassian study has some funny stats in it, such as “Leaders see AI gains within their own functions but not within others”:

  • 82% of marketing executives say marketers get a lot of value from AI; only 26% of HR and 20% of Technology leaders agree.
  • 50% of HR executives say HR employees significantly benefit from working with AI; only 11% of Technology and 5% of Marketing leaders say the same.

‘AI is great for my team, but those other organizations have no clue how to use it!’

From the same study, I love this insight: “Build a connected, company-wide knowledge base”

If I had a nickel for every time I had a question at work but didn’t know who to ask… I wouldn’t have to work there anymore. I’d have enough nickels to retire. Knowing who to ask is often more important than knowing what questions to ask in the first place. And knowledge is power, so let’s democratize access to that power to really amp up our workforce.

The idea of Comprehension Debt is powerful:

Comprehension debt arises when developers rely heavily on LLM-generated code without fully understanding its underlying logic, structure, or potential pitfalls. This scenario can lead to several challenges, including difficulties in debugging, maintenance issues, and a lack of ownership over the codebase. As applications grow in complexity, this debt can snowball, creating a ticking time bomb that can jeopardize project timelines and team morale.

Comprehension Debt (shekhar14.medium.com)

And I bet these AI companies are banking on exactly this. Bring in AI Agents, have them write most of your code so your existing staff no longer understands it, and now you can’t NOT pay OpenAI and Anthropic and Google those monthly bills for their AI Agents because those agents are the only ones who can reliably update your codebase!

It will be a huge risk for companies to become heavily dependent on this stuff. You might not have enough humans left in five years who understand how your product works if the AI companies turn the screws and try to flip on the money train. Those short-term savings you get by having an AI Agent replace a junior dev will be wiped out by your new monthly AI bill.

Anyway, reading all of this stuff gave me a lot to think about. My views on AI have shifted from “this is silly” to “this is cool and useful.” Slop is still a thing and probably always will be, just like spam email is a thing the Internet brought us, but we’re not saying to burn down the ’net because we have to deal with spam.

The intellectual property theft is still the original sin AI will have to come to terms with some day. The massive use of power, and the horrible things that will happen in building enough electricity generation capacity to feed the AI beast, is a major concern. The transformation that will come to the economy as these tools displace human work gives me pause.

But darn if AI isn’t just an exciting tool right now.