A recent blog post titled ‘I miss thinking hard’ has sparked extensive debate among engineers, revealing something engineering leaders should find uncomfortable: a growing divide between those who believe AI coding tools change what programmers think about and those who believe they reduce thinking altogether. For organisations building technical capability for the long term, this is not a philosophical dispute. It is a workforce development question with material consequences.

The original post, by a physicist turned software engineer, frames the tension through two internal drives: the Builder, who craves shipping and pragmatism, and the Thinker, who needs prolonged mental struggle with difficult problems. AI tools, the author argues, satisfy the Builder while starving the Thinker. The pragmatic 70% solution arrives so quickly that resisting it feels irrational, even when the author suspects manual work would produce better results.

What makes this worth examining is not the personal reflection but the response it generated. Hundreds of experienced engineers weighed in, and their disagreements expose a fault line that matters for anyone responsible for technical teams.

Two cognitive modes, one job title

The most striking pattern in the discussion is how differently engineers describe what ‘thinking hard’ means to them. One camp reports thinking harder than ever with AI tools, focusing on architecture, system design, and orchestration while the model handles implementation. They describe a shift upward in abstraction, spending cognitive effort on what to build rather than how to build it.

The other camp describes something being lost that the first group does not appear to miss. They point to the sustained, single-threaded concentration required to hold a complex problem in mind for days, the kind of thinking where solutions emerge in the shower or before sleep because the subconscious has been processing continuously. This mode, they argue, cannot survive the constant context-switching that AI-assisted work demands.

Both groups are describing real experiences. The disagreement is not about facts but about which cognitive mode matters more. And this is where engineering leaders need to pay attention, because the answer almost certainly depends on the engineer’s experience level and the organisation’s time horizon.

The hidden capability pipeline problem

Several comments in the discussion identified what may be the most consequential dynamic at play. One engineer compared AI-assisted coding to working with junior developers who change daily, requiring constant supervision but never accumulating institutional knowledge. Another noted that the thinking required to write code often provided feedback that improved higher-level design decisions, a loop that breaks when implementation is delegated.

The implication is uncomfortable but straightforward: the cognitive work that AI tools eliminate may be precisely the work that builds the judgment required to use AI tools effectively. Senior engineers who spent years developing intuition through manual implementation can orchestrate AI confidently because they recognise when output is wrong. Engineers who skip that apprenticeship may lack the foundation to evaluate what they are reviewing.

This is not a hypothetical concern. Multiple commenters reported observing skill atrophy among experienced colleagues and shallow understanding among those who adopted AI tools early in their careers. One described senior engineers producing meeting documents and technology recommendations that appeared copy-pasted from language models, lacking any original analysis.

If these observations generalise, organisations face a capability pipeline problem that productivity metrics will not reveal. The engineers shipping fastest today may be depleting the skills required to ship well tomorrow.

What the Linus Torvalds comparison gets wrong

A popular framing in the discussion compared AI-assisted engineers to Linus Torvalds in his current role: reviewing and merging code rather than writing it. The analogy is appealing but misleading in a way that matters.

Torvalds can review kernel patches effectively because he wrote the original kernel and has maintained it ever since. His decades of implementation work built the judgment that makes his review valuable. The question for engineering leaders is whether review-first workflows can produce engineers with comparable judgment, or whether they produce something more like technical project managers who can coordinate work they could not perform themselves.

The honest answer is that nobody knows yet. AI-assisted development at scale is too new to have produced a generation of senior engineers trained primarily through orchestration rather than implementation. Organisations making workforce decisions today are placing bets on an outcome that will not be visible for years.

What this means for engineering organisations

The productivity gains from AI coding tools are real and immediate. The capability risks are speculative and deferred. This asymmetry creates obvious incentive problems for organisations under pressure to ship.

Three questions can help engineering leaders think through the trade-offs for their specific context.

First, what is the experience distribution of the team? For senior engineers with deep implementation backgrounds, AI tools likely amplify existing capability. For junior engineers still building foundational skills, the same tools may substitute for learning that would otherwise compound over a career. The same deployment strategy probably should not apply to both groups.

Second, what does the organisation’s technical debt trajectory look like? Multiple commenters noted that AI tools make it easy to generate code that works initially but accumulates hidden problems. Organisations already struggling with maintenance burden may find AI acceleration makes the underlying condition worse, not better.
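To make that concern concrete, consider a hypothetical snippet of the kind these commenters describe: code that passes a happy-path test immediately but hides problems that only surface later. The example is illustrative, not taken from the original post or its comments.

```python
import json

def load_user_settings(path, defaults={}):  # mutable default argument: one dict shared across every call
    """Load settings from a JSON file, falling back to defaults."""
    try:
        with open(path) as f:
            defaults.update(json.load(f))  # mutates the shared dict in place
    except Exception:
        pass  # swallows every failure: missing file, malformed JSON, permission errors
    return defaults
```

Both defects are invisible in a demo. The mutable default means one caller's settings silently leak into every later call, and the bare exception handler hides corrupt input until state has drifted far enough to matter. A reviewer with implementation experience spots both in seconds; the worry in the discussion is what happens when reviewers without that experience do not.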

Third, what is the organisation’s hiring strategy for the next decade? If the assumption is that AI will continue improving and human implementation skills will become irrelevant, optimising for AI fluency makes sense. If the assumption is that human judgment will remain essential for complex systems, then preserving the conditions that develop that judgment matters, even at some productivity cost.

The question engineering leaders should be asking

The debate over whether AI makes engineers think more or less is ultimately the wrong frame. The more useful question is whether the thinking that AI-assisted work encourages builds the same capabilities as the thinking it replaces.

For experienced engineers, the answer may be that it does not matter. They have already accumulated the judgment that makes AI tools useful. For engineers early in their careers, and for organisations that need those engineers to become senior contributors, the answer is less clear and considerably more consequential.

Productivity metrics will not surface this risk. It will only become visible when organisations discover they have shipped quickly for years while quietly depleting the capability required to ship well.
