
Let’s talk about LLMs

Everybody seems to agree we’re in the middle of something, though what, exactly, seems to be up for debate. It might be an unprecedented revolution in productivity and capabilities, perhaps even the precursor to a technological “singularity” beyond which it’s impossible to guess what the world might look like. It might be just another vaporware hype cycle that will blow over. It might be a dot-com-style bubble that will lead to a big crash but still leave us with something useful (the way the dot-com bubble drove mass adoption of the web). It might be none of those things.

Many thousands of words have already been spent arguing variations of these positions. So of course today I’m going to throw a few thousand more words at it, because that’s what blogs are for. At least all the ones you’ll read here were written by me (and you can pry my em-dashes from my cold, dead hands).

Terminology, and picking a lane

But first, a couple quick notes:

I’m going to be using the terms “LLM” and “LLMs” almost exclusively in this post, because I think the precision is useful. “AI” is a vague and overloaded term, and it’s too easy to get bogged down in equivocations and debates about what exactly someone means by “AI”. And virtually everything that’s contentious right now about programming and “AI” is really traceable specifically to the advent of large language models. I suppose a slightly higher level of precision might come from saying “GPT” instead, but OpenAI keeps trying to claim that one as their own exclusive term, which is a different sort of unwelcome baggage. So “LLMs” it is.

And when I talk about “LLM coding”, I mean use of an LLM to generate code in some programming language. I use this as an umbrella term for all such usage, whether done under human supervision or not, whether used as the sole producer of code (with no human-generated code at all) or not, etc.

I’m also going to try to limit my comments here to things directly related to technology and to programming as a profession, because that’s what I know (I have a degree in philosophy, so I’m qualified to comment on some other aspects of LLMs, but I’m deliberately staying away from them in this post because I find a lot of those debates tedious and literally sophomoric, as in reminding me of things I was reading and discussing when I was a sophomore).

If you’re using an LLM in some other field, well, I probably don’t know that field well enough to usefully comment on it. Having seen some truly hot takes from people who didn’t follow this principle, I’ve thought several times that we really need some sort of cute portmanteau of “LLM” and “Gell-Mann Amnesia” for the way a lot of LLM-related discourse seems to be people expecting LLMs to take over every job and field except their own.

No silver bullet

A few years ago I wrote about Fred Brooks’ No Silver Bullet, and said I think it may have been the best thing Brooks ever wrote. If you’ve never read No Silver Bullet, I strongly recommend you do so, and I recommend you read the whole thing for yourself (rather than just a summary of it).

No Silver Bullet was published at a time when computing hardware was advancing at an incredible rate, but our ability to build software was not even close to keeping up. And so Brooks made a bold prediction about software:

There is no single development, in either technology or management technique, which by itself promises even a single order-of-magnitude improvement within a decade in productivity, in reliability, in simplicity.

To support this he looked at sources of difficulty in software development, and assigned them to two broad categories (emphasis as in the original):

Following Aristotle, I divide them into essence—the difficulties inherent in the nature of the software—and accidents—those difficulties that today attend its production but that are not inherent.

A classic example is memory management: some programming languages require the programmer to manually allocate, keep track of, and free memory, which is a source of difficulty. And this is accidental difficulty, because there’s nothing which inherently requires it; plenty of other programming languages have automatic memory management.
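Python is one of those languages with automatic memory management, so it can't show the manual version directly, but the same accidental/essential split shows up with any manually managed resource. A sketch, using file handles in place of memory (the file path here is my own invented example):

```python
import os
import tempfile

path = os.path.join(tempfile.gettempdir(), "nsb_example.txt")

# Accidental difficulty: the programmer must remember to release the
# resource on every path out of the code, or it leaks.
f = open(path, "w")
try:
    f.write("hello")
finally:
    f.close()  # forget this and the handle leaks

# The same accident removed by a language construct: the "with" block
# closes the file automatically, even if an exception is raised.
with open(path, "w") as f:
    f.write("hello")
```

Nothing essential about the program changed between the two versions; only the bookkeeping the language forced on the programmer did.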

But other sources of difficulty are different, and seem to be inherent to software development itself. Here’s one of the ways Brooks summarizes it (emphasis matches what’s in my copy of No Silver Bullet):

The essence of a software entity is a construct of interlocking concepts: data sets, relationships among data items, algorithms, and invocations of functions. This essence is abstract, in that the conceptual construct is the same under many different representations. It is nonetheless highly precise and richly detailed.

I believe the hard part of building software to be the specification, design, and testing of this conceptual construct, not the labor of representing it and testing the fidelity of the representation. We still make syntax errors, to be sure; but they are fuzz compared to the conceptual errors in most systems.

If this is true, building software will always be hard. There is inherently no silver bullet.

And to drive the point home, he also explains the diminishing returns of only addressing accidental difficulty:

How much of what software engineers now do is still devoted to the accidental, as opposed to the essential? Unless it is more than 9/10 of all effort, shrinking all the accidental activities to zero time will not give an order of magnitude improvement.

This is a straightforward mathematical argument. If its two empirical premises—that the accidental/essential distinction is real, and that the accidental difficulty remaining today does not represent 90%+ of the total—are true, then the conclusion ruling out an order-of-magnitude gain from reducing accidental difficulty follows automatically.

I think most programmers believe the first premise, at least implicitly, and once the first premise is accepted it becomes very difficult to argue against the second. In fact, I’d personally go further than the minimum required for Brooks’ argument. His math holds up as long as accidental difficulty doesn’t reach that 90%+ mark, since anything lower makes a 10x improvement from eliminating accidental difficulty impossible. But I suspect accidental difficulty, today, is a vastly smaller proportion of the total than that. In a lot of mature domains of programming I’d be surprised if there’s even a doubling of productivity still available from a complete elimination of remaining accidental difficulty.
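Brooks' arithmetic can be checked directly; it has the same shape as Amdahl's law. A minimal sketch (the function name is mine, not Brooks'):

```python
from fractions import Fraction

def max_speedup(accidental_fraction):
    """Speedup available from shrinking accidental difficulty to zero,
    when it makes up the given fraction of total effort."""
    return 1 / (1 - accidental_fraction)

# An order-of-magnitude (10x) gain requires accidental difficulty
# to be at least 9/10 of all effort:
assert max_speedup(Fraction(9, 10)) == 10

# If accidental difficulty is only half the total, completely
# eliminating it yields just a 2x improvement:
assert max_speedup(Fraction(1, 2)) == 2
```

If my suspicion above is right and remaining accidental difficulty is well under half the total in mature domains, the available gain is correspondingly smaller still.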

There’s also a section in No Silver Bullet about potential “hopes for the silver” which addresses “AI”, though what Brooks considered to be “AI” (and there is a tangent about clarifying exactly what the term means) was significantly different from what’s promoted today as “AI”. The most apt comparison to LLMs in No Silver Bullet is actually not the discussion of “AI”, it’s the discussion of automatic programming, which has meant a lot of different things over the years, but was defined by Brooks at the time as “the generation of a program for solving a problem from a statement of the problem specifications”. That’s pretty much the task for which LLMs are currently promoted to programmers.

But Brooks quotes David Parnas on the topic: “automatic programming always has been a euphemism for programming with a higher-level language than was presently available to the programmer.” And Brooks did not believe higher-level languages on their own could be a silver bullet. As he put it in a discussion of the Ada language:

It is, after all, just another high-level language, and the biggest payoff from such languages came from the first transition, up from the accidental complexities of the machine into the more abstract statement of step-by-step solutions. Once those accidents have been removed, the remaining ones are smaller, and the payoff from their removal will surely be less.

Many people are currently promoting LLMs as a revolutionary step forward for software development, but are doing so based almost exclusively on claims about LLMs’ ability to generate code at high speed. The No Silver Bullet argument poses a problem for these claims, since it sets a limit on how much we can gain from merely generating code more quickly.

In chapter 2 of The Mythical Man-Month, Brooks suggested as a scheduling guideline that five-sixths (83%) of time on a “software task” would be spent on things other than coding, which puts a pretty low cap on productivity gains from speeding up just the coding. And even if we assume LLMs reduce coding time to zero, and go with the more generous No Silver Bullet formulation which merely predicts no order-of-magnitude gain from a single development, that’s still less than the gain Brooks himself believed could come from hiring good human programmers. From chapter 3 of The Mythical Man-Month:

Programming managers have long recognized wide productivity variations between good programmers and poor ones. But the actual measured magnitudes have astounded all of us. In one of their studies, Sackman, Erikson, and Grant were measuring performances of a group of experienced programmers. Within just this group the ratios between best and worst performances averaged about 10:1 on productivity measurements and an amazing 5:1 on program speed and space measurements!

(although I’m personally skeptical of the “10x programmer” concept, the software industry overall does seem to accept it as true)
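The five-sixths guideline puts a hard number on the coding-speed cap. Even granting LLMs the most generous possible assumption, that they reduce coding time to literally zero, the ceiling works out to:

```python
from fractions import Fraction

# Brooks' scheduling guideline: coding is ~1/6 of a software task;
# the other 5/6 (planning, testing, debugging, etc.) is untouched.
coding_share = Fraction(1, 6)

# Reducing coding time to zero bounds the overall speedup:
speedup_cap = 1 / (1 - coding_share)
assert speedup_cap == Fraction(6, 5)  # a 1.2x ceiling, nowhere near 10x
```

A 1.2x ceiling is a far cry from the 10:1 spread Brooks reports between individual programmers.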

Anecdote time: much of what I’ve done over my career as a professional programmer is building database-backed web applications and services, and I don’t see much of a gain from LLMs. I suppose it looks impressive, if you’re not familiar with this field of programming, to auto-generate the skeleton of an entire application and the basic create/retrieve/update/delete HTTP handlers from no more than a description of the data you want to work with. But that capability predates LLMs: Rails’ scaffolding, for example, could do it twenty years ago.

And not just raw code generation, but also the abstractions available to work with, have progressed to the point where I basically never feel like the raw speed of production of code is holding me back. Just as Fred Brooks would have predicted, the majority of my time is spent elsewhere: talking to people who want new software (or who want existing software to be changed); finding out what it is they want and need; coming up with an initial specification; breaking it down into appropriately-sized pieces for programmers (maybe me, maybe someone else) to work on; testing the first prototype and getting feedback; preparing the next iteration; reviewing or asking for review, etc. I haven’t personally tracked whether it matches Brooks’ five-sixths estimate, but I wouldn’t be at all surprised if it did.

Given all that, just having an LLM churn out code faster than I would have myself is not going to offer me an order of magnitude improvement, or anything like it. Or as a recent popular blog post by the CEO of Tailscale put it:

AI’s direct impact on this problem is minimal. Okay, so Claude can code it in 3 minutes instead of 30? That’s super, Claude, great work.

Now you either get to spend 27 minutes reviewing the code yourself in a back-and-forth loop with the AI (this is actually kinda fun); or you save 27 minutes and submit unverified code to the code reviewer, who will still take 5 hours like before, but who will now be mad that you’re making them read the slop that you were too lazy to read yourself. Little of value was gained.

More simply: throwing more patches into the review queue, when the review queue still drains at the same rate as before, is not a recipe for increased velocity. Real software development involves not just a review queue but all the other steps and processes I outlined above, and more, and having an LLM generate code more quickly does not increase the speed or capacity of all those other things.

So as someone who accepts Brooks’ argument in No Silver Bullet, I am committed to believe on theoretical grounds that LLMs cannot offer “even a single order-of-magnitude improvement … in productivity, in reliability, in simplicity”. And my own experience matches up with that prediction.

Practice makes (im)perfect

But enough theory. What about the actual, empirical reality of LLM coding?


Every fan of LLMs for coding has an anecdote about their revolutionary qualities, but the non-anecdotal data points we have are a lot more mixed. For example, several times now I’ve been linked to and asked to read the DORA report on the “State of AI-assisted Software Development”. And initially it certainly seems like it’s declaring the effects of LLMs are settled, in favor of the LLMs. From its executive summary (page 3):

[T]he central question for technology leaders is no longer if they should adopt AI, but how to realize its value.

And elsewhere it makes claims like (page 34) “AI is the new normal in software development”.

But then, going back to the executive summary, things start sounding less uniformly positive:

The research reveals a critical truth: AI’s primary role in software development is that of an amplifier. It magnifies the strengths of high-performing organizations and the dysfunctions of struggling ones.

And then (still on page 3):

The greatest returns on AI investment come not from the tools themselves, but from a strategic focus on the underlying organizational system: the quality of the internal platform, the clarity of workflows, and the alignment of teams. Without this foundation, AI creates localized pockets of productivity that are often lost to downstream chaos.

Continuing on to page 4:

AI adoption now improves software delivery throughput, a key shift from last year. However, it still increases delivery instability. This suggests that while teams are adapting for speed, their underlying systems have not yet evolved to safely manage AI-accelerated development.

“Delivery instability” is defined (page 13) in terms of two factors:

  • Change fail rate: “The ratio of deployments that require immediate intervention following a deployment.”
  • Rework rate: “The ratio of deployments that are unplanned but happen as a result of an incident in production.”
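Both metrics are just ratios over a deployment log. As a sketch (the record format and field names here are my own invention for illustration, not DORA's):

```python
# Hypothetical deployment records; field names are illustrative.
deployments = [
    {"needed_immediate_fix": False, "incident_rework": False},
    {"needed_immediate_fix": True,  "incident_rework": False},
    {"needed_immediate_fix": False, "incident_rework": True},
    {"needed_immediate_fix": False, "incident_rework": False},
]

def change_fail_rate(deps):
    # Share of deployments requiring immediate intervention afterward.
    return sum(d["needed_immediate_fix"] for d in deps) / len(deps)

def rework_rate(deps):
    # Share of deployments that were unplanned responses to a
    # production incident.
    return sum(d["incident_rework"] for d in deps) / len(deps)

assert change_fail_rate(deployments) == 0.25
assert rework_rate(deployments) == 0.25
```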

Later parts of the report get into more detail on this. Page 38 charts the increase in delivery instability, for example. And elsewhere in the section containing that chart, there’s a discussion of whether increases in throughput (defined by DORA as a combination of lead time for changes, deployment frequency, and failed deployment recovery time) are enough to offset or otherwise make up for this increase in instability (page 41, emphasis added by me):

Some might argue that instability is an acceptable trade-off for the gains in development throughput that AI-assisted development enables.

The reasoning is that the volume and speed of AI-assisted delivery could blunt the detrimental effects of instability, perhaps by enabling such rapid bug fixes and updates that the negative impact on the end-user is minimized.

However, when we look beyond pure software delivery metrics, this argument does not hold up. To assess this claim, we checked whether AI adoption weakens the harms of instability on our outcomes which have been hurt historically by instability.

We found no evidence of such a moderating effect. On the contrary, instability still has significant detrimental effects on crucial outcomes like product performance and burnout, which can ultimately negate any perceived gains in throughput.

And the chart on page 38 appears to show the increase in instability as quite a bit larger than the increase in throughput, in any case.

Curiously, that chart also claims a significant increase in “code quality”, and other parts of the report (page 30, for example) claim a significant increase in “productivity”, alongside the significant increase in delivery instability, which seems like it ought to be a contradiction. As far as I can tell, DORA’s source for both “productivity” and “code quality” is perceived impact as self-reported by survey respondents. Other studies and reports have designed less subjective and more quantitative ways to measure these things. For example, this much-discussed study on adoption of the Cursor LLM coding tool used the results of static analysis of the code to measure quality and complexity. And self-reported productivity impacts, in particular, ought to be a deeply suspect measure. From (to pick one relevant example) the METR early-2025 study (emphasis added by me):

This gap between perception and reality is striking: developers expected AI to speed them up by 24%, and even after experiencing the slowdown, they still believed AI had sped them up by 20%.

LLM coding advocates have often criticized this particular study’s finding of slower development for being based on older generations of LLMs (more on that argument in a bit), but as far as I’m aware nobody’s been able to seriously rebut the finding that developers are not very effective at self-estimating their productivity. So to see DORA relying on self-estimated productivity is disappointing.

The DORA report goes on to provide a seven-part “AI capabilities model” for organizations (begins on page 49), which consists of recommendations like: strong version control practices, working in small batches, quality internal platforms, user-centric focus… all of which feel like they should be table stakes for any successful organization regardless of whether they also happen to be using LLMs.

Suppose, for sake of a silly example, that someone told you a new technology is revolutionizing surgery, but the gains are not uniformly distributed, and the best overall outcomes are seen in surgical teams where in addition to using the new thing, team members also wash their hands prior to operating. That’s not as extreme a comparison as it might sound: the sorts of practices recommended for maximizing LLM-related gains in the DORA report, and in many other similar whitepapers and reports and studies, are or ought to be as fundamental to software development as hand-washing is to surgery. The Joel Test was recommending quite a few of these practices a quarter-century ago, the Agile Manifesto implied several of them, and even back then they weren’t really new; if you dig into the literature on effective software development you can find variations of much of the DORA advice going all the way back to the 1970s and even earlier.

For a more recent data point, I’ve seen a lot of people talking about and linking me to CircleCI’s 2026 “State of Software Delivery” which, like the DORA report, claims an uneven distribution of benefits from LLM adoption, and even says (page 8) “the majority of teams saw little to no increase in overall throughput”. The CircleCI report also raises a worrying point that echoes the increase in “delivery instability” seen in the DORA report (CircleCI executive summary, page 3):

Key stability indicators show that AI-driven changes are breaking more often and taking teams longer to fix, making validation and integration the primary bottleneck.

CircleCI further reports (page 11) that, year-over-year, they see a 13% increase in recovery time for a broken main branch, and a 25% increase for broken feature branches. And (page 12) they also say failures are increasing:

[S]uccess rates on the main branch fell to their lowest level in over 5 years, to 70.8%. In other words, attempts at merging changes into production code bases now fail 30% of the time.

For comparison, their own recommended benchmark of success for main branches is 90%.

The cost of these increasing failures and the increasing time to resolve them is quantified (emphasis matches the report, page 14):

For a team pushing 5 changes to the main branch per day, going from a 90% success rate to 70% is the difference between one showstopping breakage every two days to 1.5 every single day (a 3x increase).

At just 60 minutes recovery time per failure, you’re looking at an additional 250 hours in debugging and blocked deployments every year. And that’s at a relatively modest scale. Teams pushing 500 changes per day would lose the equivalent of 12 full-time engineers.
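The report's arithmetic checks out. A quick reproduction (the ~250 working days per year is an assumption on my part, needed to match the report's 250-hour figure):

```python
changes_per_day = 5
fail_rate_old, fail_rate_new = 0.10, 0.30  # 90% -> 70% success rate

failures_old = changes_per_day * fail_rate_old  # 0.5/day: one every two days
failures_new = changes_per_day * fail_rate_new  # 1.5 every single day
assert failures_new / failures_old == 3.0       # the report's "3x increase"

# At 60 minutes of recovery per failure, over ~250 working days:
extra_hours_per_year = (failures_new - failures_old) * 1.0 * 250
assert extra_hours_per_year == 250.0
```

Scaling the same arithmetic to 500 changes per day gives 25,000 extra hours per year, which at a conventional ~2,000-hour engineer-year is where the "12 full-time engineers" figure comes from.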

The usual response to reports like these is to claim they’re based on people using older LLMs, and the models coming out now are the truly revolutionary ones, which won’t have any of those problems. For example, this is the main argument that’s been leveled against the METR study I mentioned above. But that argument was flimsy to begin with (since it’s rarely accompanied by the kind of evidence needed to back up the claim), and its repeated usage is self-discrediting: if the people claiming “this time is the world-changing revolutionary leap, for sure” were wrong all the prior times they said that (as they have to have been, since if any prior time had actually been the revolutionary leap they wouldn’t need to say this time will be), why should anyone believe them this time?

Also, I’ve read a lot of studies and reports on LLM coding, and these sorts of findings—uneven or inconsistent impact, quality/stability declines, etc.—seem to be remarkably stable, across large numbers of teams using a variety of different models and different versions of those models, over an extended period of time (DORA does have a bit of a messy situation with contradictory claims that “code quality” is increasing while “delivery instability” is increasing even more, but as noted above that seems to be a methodological problem). The two I’ve quoted most extensively in this post (the DORA and CircleCI reports) were chosen specifically because they’re often recommended to me by advocates of LLM coding, and seem to be reasonably pro-LLM in their stances.

The other expected response to these findings is a claim that it’s not necessarily older models but older workflows which have been obsoleted, that the state of the art is no longer to just prompt an LLM and accept its output directly, but rather involves one LLM (or LLM-powered agent) generating code while one or more layers of “adversarial” ones review and fix up the code and also review each other’s reviews and responses and fixes, thus introducing a mechanism by which the LLM(s) will automatically improve the quality of the output.

I’m unaware of rigorous studies on these approaches (yet), but several well-publicized early examples do not inspire confidence. I’ll pick on Cloudflare here since they’ve been prominent advocates for using LLMs in this fashion. In their LLM rebuild of Next.js:

We wired up AI agents for code review too. When a PR was opened, an agent reviewed it. When review comments came back, another agent addressed them. The feedback loop was mostly automated.

But their public release of it, vetted through this process and, apparently, some amount of human review on top, was initially unable to run even the basic default Next.js application, and also was apparently riddled with security issues. From one disclosure post (emphasis added by me):

AI is now very good at getting a system to the point where it looks complete.

One specific problem cited was that the LLM rebuild simply did not pull in all the original tests, and therefore could miss security-critical cases those tests were checking. From the same disclosure post:

The process was feature-first: decide which viNext features existed, then port the corresponding Next.js tests. That is a sensible way to move quickly. It gives you broad happy-path coverage.

But it does not guarantee that you bring over the ugly regression tests, missing-export cases, and fail-open behavior checks that mature frameworks accumulate over years.

So middleware could look “covered” while the one test that proves it fails safely never made it over.

For example, Next.js has a dedicated test directory (test/e2e/app-dir/proxy-missing-export/) that validates what happens when middleware files lack required exports. That test was never ported because middleware was already considered “covered” by other tests.

On the whole, that post is somewhat optimistic, but considering that the Next.js rebuild was carried out by presumably knowledgeable people who presumably were following good modern practices and prompting good modern LLMs to perform a type of task those LLMs are supposed to be extremely good at—a language and framework well-represented in training data, well-documented, with a large existing test suite written in the target language to assist automated verification—I have a hard time being that optimistic.

And though I haven’t personally read through the recent alleged leak of the Claude Code source, I’ve read some commentary and analysis from people who have, and again it seems like a team that should be as well-positioned as anyone to take maximum advantage of the allegedly revolutionary capabilities of LLM coding isn’t managing to do so.

So the consistent theme here, in the studies and reports and in more recent public examples, is that being able to generate code much more quickly than before, even in 2026 with modern LLMs and modern practices, is still no guarantee of being able to deliver software much more quickly than before. As the CircleCI report puts it (page 3):

The data points to a clear conclusion: success in the AI era is no longer determined by how fast code can be written. The decisive factor is the ability to validate, integrate, and recover at scale.

And if that sounds like the kind of thing Fred Brooks used to say, that’s because it is the kind of thing Fred Brooks used to say. Raw speed of generating code is not and was not the bottleneck in software development, and speeding that up or even reducing the time to generate code to effectively zero does not have the effect of making all the other parts of software development go away or go faster.

So at this point it seems clear to me that in practice as well as in theory LLM coding does not represent a silver bullet, and it seems highly unlikely to transform into one at any point in the near future.

On being left behind

When expressing skepticism about LLM coding, a common response is that not adopting it, or even just delaying slightly in adopting it, will inevitably result in being “left behind”, or even stronger effects (for example, words like “obliterated” have been used, more than once, by acquaintances of mine who really ought to know better). LLMs are the future, it’s going to happen whether you like it or not, so get with the program before it’s too late!

I said I’ll stick to the technical mode here, but I’ll just mention in passing that the “it’s going to happen whether you like it or not” framing is something I’ve encountered a lot and found to be pretty disturbing and off-putting, and not at all conducive to changing my mind. And milder forms like “It’s undeniable that…” are rhetorically suspect. The burden of proof ought to be on the person making the claim that LLMs truly are revolutionary, but framing like this tries to implicitly shift that burden and is a rare example of literally begging the question: it assumes as given the conclusion (LLMs are in fact revolutionary) that it needs to prove.

Meanwhile, I see two possible outcomes:

  1. The skeptical position wins. LLM coding tools do not achieve revolutionary silver-bullet status. Perhaps they become another tool in the toolbox, like TDD or pair programming, where some people and companies are really into them. Perhaps they become just another feature of IDEs, providing functionality like boilerplate generators to bootstrap a new project (if your favorite library/framework doesn’t provide its own bootstrap anyway).
  2. The skeptical position loses. LLM coding tools do achieve true revolutionary silver-bullet status or beyond (consistently delivering one or more orders of magnitude improvement in software development productivity), and truly become a mandatory part of every working programmer’s tools and workflows, taking over all or nearly all generation of code.

In the first case, delayed adoption has no downside unless someone happens to be working at one of the companies that decide to mandate LLM use. And they can always pick it up at that point, if they don’t mind or if they don’t feel like looking for a new job.

As to the second case: based on what I’ve argued above about the status and prospects of LLMs up to now, I obviously think that continuing the type of progress in models and practices that’s been seen to date does not offer any viable path to a silver bullet. Which means a truly revolutionary breakthrough will have to be something sufficiently different from the current state of the art that it will necessarily invalidate many (or perhaps even all) prior LLM-based workflows in addition to invalidating non-LLM-based workflows.

And even if that doesn’t result in a completely clean-slate starting point with everyone equal—even if experience with older LLM workflows is still an advantage in the post-silver-bullet world—I don’t think it can ever be the sort of insurmountable advantage it’s often assumed to be. For one thing, even with vastly higher average productivity, there likely would not be sufficient people with sufficient pre-existing LLM experience to fill the vastly expanded demand for software that would result (this is why a lot of LLM advocates, across many fields, spend so much time talking about the Jevons paradox). For another, any true silver-bullet breakthrough would have to attack and reduce the essential difficulty of building software, rather than the accidental difficulty. Let us return once again to Brooks:

I believe the hard part of building software to be the specification, design, and testing of this conceptual construct, not the labor of representing it and testing the fidelity of the representation.

Much of the skill required of human LLM users today consists of exactly this: specifying and designing the software as a “conceptual construct”, albeit in specific ways that can be placed into an LLM’s context window in order to have it generate code. In any true silver-bullet world, much or all of that skillset would have to be rendered obsolete, which significantly reduces the penalty for late adoption if and when the silver bullet is finally achieved.

Power to the people?

Aside from impact on professional programmers and professional software-development teams, another claim often made in favor of LLM coding is that it will democratize access to software development. With LLM coding tools, people who aren’t experienced professional programmers can produce software that solves problems they face in their day-to-day jobs and lives. Surely that’s a huge societal benefit, right? And it’s tons of fun, too!

Setting aside that the New York Times piece linked above was written by someone who is an experienced professional, I’m not convinced of this use case either.

Mostly I think this is a situation where you can’t have it both ways. It seems to be widely agreed among advocates of LLM coding that it’s a skill which requires significant understanding, practice, and experience before one is able to produce consistent useful results (this is the basis of the “adopt now or be left behind” claim dealt with in the previous section); strong prior knowledge of how to design and build good software is also generally recommended or assumed. But that’s very much at odds with the democratized-software claim: that someone with no prior programming knowledge or experience will simply pick up an LLM, ask it in plain non-technical natural language to build something, and receive a sufficiently functional result.

I think the most likely result is that a non-technical user will receive something that’s obviously not fit for purpose, since they won’t have the necessary knowledge to prompt the LLM effectively. They won’t know how to set up directories of Markdown files containing instructions and skill definitions and architectural information for their problem. They won’t have practice at writing technical specifications (whether for other humans or for LLMs) to describe what they want in sufficient detail. They won’t know how to design and architect good software. They won’t know how to orchestrate multiple LLMs or LLM-powered agents to adversarially review each other. In short, they won’t have any of the skills that are supposed to be vital for successful LLM coding use.

There’s also the possibility that “natural” human language alone will never be sufficient to specify programs, even to much more advanced LLMs or other future “AI” systems, due to inherent ambiguity and lack of precision. In that case, some type of specialized formal language for specifying programs would always be necessary. Edsger W. Dijkstra, for example, took this position and famously derided what he called “the foolishness of ‘natural language programming’”, which is worth reading for some classic Dijkstra-isms like:

When all is said and told, the “naturalness” with which we use our native tongues boils down to the ease with which we can use them for making statements the nonsense of which is not obvious.

Another possible outcome for LLM coding by non-programmers is the often-mentioned analogy to 3D printing. That technology was also hyped as a great democratizer that would let anyone design and make anything, but it never delivered on that promise; at the individual level, it became a niche hobby for the small number of enthusiasts willing and able to put in the time, money, and effort to get moderately good at it.

But the nightmare result is that non-programmer LLM users will receive something that seems to work, and only reveals its shortcomings much later on. Given how often I see it argued that LLMs will democratize coding and write utility programs for people working in fields where privacy and confidentiality are both vital and legally mandated, I’m terrified by that potential failure mode. And I think one of the worst possible things that could happen for advocates of LLM adoption is to have the news full of stories of well-meaning non-technical people who had their lives ruined by, say, accidentally enabling a data breach with their LLM-coded helper programs, or even “just” turning loose a subtly-incorrect financial model on their business. So even if I were an advocate of LLM coding, I’d be very wary of pushing it to non-programmers.

But ultimately, the only situation in which LLMs could meaningfully democratize access to software development is one where they achieve a true silver bullet, by significantly reducing or removing essential difficulty from the software development process. And as noted above, LLM advocates seem to believe that even in the silver-bullet situation there would still be such a gap between those with pre-existing LLM usage skills and those without, that those without could never meaningfully catch up. Although I happen to disagree with that belief, it remains the case that advocates can’t have it both ways: either LLM coding will be an exclusive club for those who built up the necessary skills, XOR it will be a great democratizer and do away with the need for those skills.

Takeaways

I’m already over 6,000 words into this post, and though I could easily write many more, I should probably wrap it up.

If I had to summarize my position on LLM coding in one sentence, it would be “Please go read No Silver Bullet”. I think Brooks’ argument there is both theoretically correct and validated by empirical results, and sets some pretty strong limits on the impact LLM coding, or any other tool or technique which solely or primarily attacks accidental difficulty, can have.

Of course, limits on what we can do or gain aren’t necessarily the end of the world. Many of the foundations of computer science, from On Computable Numbers to Rice’s theorem and beyond, place inflexible limits on what we can do, but we still write software nonetheless, and we still work to advance the state of our art. So the No Silver Bullet argument is not the same as arguing that LLMs are necessarily useless, or that no gains can possibly be realized from them. But it is an argument that any gains we do realize are likely going to be incremental and evolutionary, rather than the world-changing revolution many people seem to be expecting.

Correspondingly, I think there is not a huge downside, right now, to slow or delayed adoption of LLM coding. Very few organizations have the strong fundamentals needed to absorb even a relatively moderate, incremental increase in the amount of code they generate, which I suspect is why so many studies and reports find mixed results and lots of broken CI pipelines. Not only is there no silver bullet, there especially is no quick or magical gain to be had from rushing to adopt LLM coding without first working on those fundamentals. In fact, the evidence we have says you’re more likely to hurt than help your productivity by doing so.

I also don’t think LLMs are going to meaningfully democratize coding any time soon; even if they become indispensable tools for programmers, they are likely to continue requiring users to “think like a programmer” when specifying and prompting. We would be much better served by teaching many more people how to think rigorously and reason about abstractions (and they would be much better served, too) than we would by just plopping them as-is in front of LLMs.

As for what you should be doing instead of rushing to adopt LLM coding out of fear that you’ll be left behind: I think you should be listening to what all those whitepapers and reports and studies are actually telling you, and working on fundamentals. You should be adopting and perfecting solid foundational software development practices like version control, comprehensive test suites, continuous integration, meaningful documentation, fast feedback cycles, iterative development, focus on users, small batches of work… things that have been known and proven for decades, but are still far too rare in actual real-world software shops.

If the skeptical position is wrong and it turns out LLMs truly become indispensable coding tools in the long term, well, the available literature says you’ll be set up to take the greatest possible advantage of them. And if it turns out they don’t, you’ll still be in much better shape than you were, and you’ll have an advantage over everyone who chased after wild promises of huge productivity gains by ordering their teams to just chew through tokens and generate code without working on fundamentals, and who likely wrecked their development processes by doing so.

Or as Fred Brooks put it:

The first step toward the management of disease was replacement of demon theories and humours theories by the germ theory. That very step, the beginning of hope, in itself dashed all hopes of magical solutions. It told workers that progress would be made stepwise, at great effort, and that a persistent, unremitting care would have to be paid to a discipline of cleanliness. So it is with software engineering today.

Comment from jgbishop (Raleigh, NC, 19 hours ago):
What a refreshing take on all of this!

Trump Warns Iran To Accept His Ultimatum Or Face Wrath Of Next Ultimatum


WASHINGTON—Threatening to continue issuing threats if the Islamic Republic did not quickly agree to his demands, President Donald Trump warned Iran on Monday to accept his ultimatum or face the wrath of his next ultimatum. “Lay down your weapons now or I will have no choice but to ask you to lay down your weapons later,” the commander in chief wrote on Truth Social, adding that the Iranian regime only had two more days to consider his terms before he would give them eight more days to consider his terms. “Mark my words, this is your last chance before your next last chance. If you do not act immediately, I won’t hesitate to wait even longer. You may think I’m bluffing, but believe me when I say you will feel the full weight of my social media posts.” At press time, Trump urged Iran not to try his patience because they would find it much, much greater than they expected.

The post Trump Warns Iran To Accept His Ultimatum Or Face Wrath Of Next Ultimatum appeared first on The Onion.

Comment from jgbishop (Raleigh, NC, 3 days ago):
Isn't The Onion supposed to be satire, and not actual fact?

The Argyle Sweater - 2026-04-05

Comment from jgbishop (Raleigh, NC, 5 days ago):
Hahaha!

Vulnerability Research Is Cooked


Thomas Ptacek's take on the sudden and enormous impact the latest frontier models are having on the field of vulnerability research.

Within the next few months, coding agents will drastically alter both the practice and the economics of exploit development. Frontier model improvement won’t be a slow burn, but rather a step function. Substantial amounts of high-impact vulnerability research (maybe even most of it) will happen simply by pointing an agent at a source tree and typing “find me zero days”.

Why are agents so good at this? A combination of baked-in knowledge, pattern matching ability and brute force:

You can't design a better problem for an LLM agent than exploitation research.

Before you feed it a single token of context, a frontier LLM already encodes supernatural amounts of correlation across vast bodies of source code. Is the Linux KVM hypervisor connected to the hrtimer subsystem, workqueue, or perf_event? The model knows.

Also baked into those model weights: the complete library of documented "bug classes" on which all exploit development builds: stale pointers, integer mishandling, type confusion, allocator grooming, and all the known ways of promoting a wild write to a controlled 64-bit read/write in Firefox.

Vulnerabilities are found by pattern-matching bug classes and constraint-solving for reachability and exploitability. Precisely the implicit search problems that LLMs are most gifted at solving. Exploit outcomes are straightforwardly testable success/failure trials. An agent never gets bored and will search forever if you tell it to.

The article was partly inspired by this episode of the Security Cryptography Whatever podcast, where David Adrian, Deirdre Connolly, and Thomas interviewed Anthropic's Nicholas Carlini for 1 hour 16 minutes.

I just started a new tag here for ai-security-research - it's up to 11 posts already.

Tags: security, thomas-ptacek, careers, ai, generative-ai, llms, nicholas-carlini, ai-ethics, ai-security-research

Comment from jgbishop (Raleigh, NC, 6 days ago):
This is one wild ride; it will be interesting to see what happens in the next few months.

Comment from GaryBIshop (6 days ago):
or terrifying. I'm very glad I'm no longer responsible for production systems.

Web Feeds: Turn any website into an RSS feed


Not every website has an RSS feed. Some never did. Some had one years ago and quietly removed it. And some sites have content that updates regularly but was never structured as a feed in the first place: job boards, product listings, event calendars, changelog pages. Until now, if a site didn’t offer RSS, you were out of luck.

Web Feeds is a new feature that creates RSS feeds from any website. Point it at a URL, and NewsBlur analyzes the page structure, identifies the repeating content patterns, and generates extraction rules that turn the page into a live feed. It works on news sites, blogs, job boards, product pages, or really anything with a list of items that changes over time.

This is a huge feature and has been requested for years. I’m so thrilled to finally be able to offer it in a way that I feel comfortable with. Other solutions included having you select story titles on a re-hosted version of the page, but that was clumsy and error-prone. This way, we use LLMs to figure out what the story titles are likely to be, present the variations to you, and then let you decide what’s right. So much better!

How it works

Open the Add + Discover Sites page and click the Web Feed tab. Paste a URL and click Analyze. NewsBlur fetches the page, strips out navigation and boilerplate, and analyzes the HTML structure. Within a few seconds, you’ll see multiple extraction variants, each representing a different content pattern found on the page.

Progress updates stream in real-time while the analysis runs. NewsBlur typically finds 3-5 different extraction patterns on a page. The first variant is usually the main content (article list, blog posts, product grid), but sometimes the page has multiple distinct sections worth subscribing to. Each variant shows a label, a description of what it captures, and a preview of 3 extracted stories so you can see exactly what you’d get.

Select the variant that matches what you want to follow, pick a folder, and subscribe. NewsBlur will re-fetch and re-extract the page on a regular schedule, just like any other feed.

Story hints

Sometimes the initial best guess isn’t what you’re looking for. Maybe the page has a blog section and a job listings section, and you want the jobs. Click the Refine button and type a hint like “I’m looking for the job postings.” NewsBlur re-analyzes the page with your hint in mind and reorders the variants to prioritize what you described.

What gets extracted

For each story, NewsBlur extracts whatever it can find: title, link, content snippet, image, author, and date. Not every field will be available on every site, and that’s fine. At minimum you’ll get titles and links. The extraction uses XPath expressions, which means it’s precise and consistent across page refreshes as long as the site’s HTML structure stays the same.
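As a rough illustration of why path-based extraction stays consistent across refreshes, here is a minimal sketch in Python. The page snippet and path expression are made up for the example, and Python’s ElementTree only supports a limited XPath subset; NewsBlur’s actual server-side extraction rules are not shown here.

```python
import xml.etree.ElementTree as ET

# A hypothetical page snippet standing in for a fetched site.
PAGE = """\
<html><body>
  <div class="post"><a href="/a">First story</a></div>
  <div class="post"><a href="/b">Second story</a></div>
</body></html>"""

# One path expression per extraction rule; a real extractor
# would use full XPath rather than ElementTree's subset.
STORY_PATH = ".//div[@class='post']/a"

tree = ET.fromstring(PAGE)
stories = [(a.text, a.get("href")) for a in tree.findall(STORY_PATH)]
print(stories)  # [('First story', '/a'), ('Second story', '/b')]
```

As long as the site keeps the same HTML structure, re-running the same expressions against a fresh fetch yields the same fields; a redesign breaks the match, which is the failure mode described under “When things change” below.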

When things change

Websites redesign. HTML structures shift. When NewsBlur detects that the extraction rules have stopped working (after 3 consecutive failures), the feed is flagged as needing re-analysis. You’ll see a feed exception indicator, and you can re-analyze the page with one click to generate updated extraction rules.

Use cases

Some examples of sites that work well with Web Feeds:

  • Company blogs without RSS — Many corporate blogs dropped their RSS feeds years ago. Web Feeds brings them back.
  • Job boards — Track new postings on a company’s careers page.
  • Government sites — Follow press releases, meeting agendas, or public notices.
  • Changelog pages — Monitor when a tool or service ships updates.
  • Event listings — Keep tabs on upcoming concerts, conferences, or local events.
  • Product pages — Watch for new arrivals or restocks on stores that don’t offer feeds.

Availability

Web Feeds are available to Premium Archive and Premium Pro subscribers. The ongoing feed fetching and extraction runs on NewsBlur’s servers like any other feed.

If you have feedback or ideas for improvements, please share them on the NewsBlur forum.

Comment from jgbishop (Raleigh, NC, 28 days ago):
NewsBlur keeps getting better!

Comment from satadru (New York, NY, 17 days ago):
Nice!

Comment from samuel (San Francisco, 27 days ago):
One of the best new features ever. I say that but just wait until I launch the Daily Briefing and story clustering, both coming sooooooon... also I just finished AI prompt classifiers for text and for images, so that's also coming. Hoo boy, lots of good stuff. And Android redesign is nearly complete!

Comment from chrismorgan (27 days ago):
The feature makes sense, but… could you please give it a different name? https://en.wikipedia.org/wiki/Web_feed

Comment from samuel (27 days ago):
Web feed is a superset of RSS feed, so it seems quite appropriate

Comment from chrismorgan (27 days ago):
This is specifically a feature to let you subscribe to sources that *don’t have* a web feed. The name “Web Feed” is accordingly very confusing.

Comment from digitalink2008 (28 days ago):
Samuel you absolute BAMF! This is an amazing feature!

Restoring a Sun SPARCstation IPX Part 1: PSU and Nvram


[Image: Sun SPARCstation IPX]

Repairing a dead power supply and replacing the NVRAM in a vintage UNIX workstation.

If you worked in computing in the early 90s, studied computer science around then, or just had a keen interest in computers, the chances are that Sun Microsystems was a familiar name and their workstations were highly coveted. At this time PCs were fast becoming the standard desktop computer and pretty much all looked the same, while if you worked in a creative industry you might have been lucky enough to use a Mac — but even then, the differences were not so profound.

[Image: Detail from a Sun SPARCstation 5 ad, circa 1994.]

However, there was a breed of computer that really stood out from the rest and not only looked far more exotic, but had a clear lead in terms of performance and capabilities — the UNIX workstation. While most people were running Windows 3.x and being frustrated by its limited resources, lack of true multitasking and all-too-frequent instability, UNIX workstations provided a far more advanced, feature-rich and robust environment. This included proper multitasking and typically more RAM, a more powerful CPU, advanced graphics and higher-performance disks. Of course, this all came at a cost, and a UNIX workstation could easily have a price tag 10x that of a typical PC.

There were numerous competing UNIX vendors and many had their own niche in which they excelled, such as Silicon Graphics, who, as the name suggests, specialised in powerful graphics workstations, which proved popular in applications such as 3D visualisation, and film and TV.

Sun Microsystems hardware found many different uses, but was notably popular in the nascent Internet industry, with “headless” (no VDU etc.) versions of their workstations and also larger server chassis configurations often being put to use as DNS, FTP, e-mail and web servers.

As the first UNIX computer that I ever used, I have a particular fondness for the Sun SPARCstation IPX, a compact “lunchbox” form factor machine that packed a lot of power — at that time — into a small space, with an attractive industrial design.

Hardware specs

[Image: SPARCstation IPX]

  • 40MHz 32-bit SPARC CPU + FPU (Sun 4/50)
  • Up to 64MB RAM as standard
  • Sun Turbo GX colour framebuffer
  • 3.5” floppy drive
  • SCSI drive
  • 10M Ethernet (uses an external transceiver)
  • Audio input and output
  • Serial port
  • 2x SBus expansion slots

SPARC is a RISC instruction set architecture (ISA) that was developed by Sun and Fujitsu, first released in 1987 and which continues to be developed today by Fujitsu. Earlier Sun computers were based on the Motorola 68K processor and the first SPARC workstation was introduced in 1989.

The processor clock speed may seem incredibly low to us now and at the time there would have been Intel CPUs clocked at similar speeds. However, the SPARC CPU was designed with UNIX in mind and the IPX featured an MMU with 8x hardware contexts, which benefited multitasking.

The price upon introduction in 1991 was around $15,000 for a SPARCstation IPX complete with 19” CRT monitor, keyboard and mouse. Hence use tended to be limited to specialist applications and the one that I used around this time was employed in mobile data network conformance testing.

The SPARCstation IPX that is the subject of this blog post has a HDD of a few hundred megabytes installed and 32MB of RAM fitted. I’ve owned it for some years and it did previously work, but at some point the power supply developed a fault.

Failed electrolytic capacitors

[Image: power supply PCB]

Out-of-spec and out-and-out failed electrolytic capacitors are a common issue with ageing electronics, and it appears that, out of all the Sun SPARCstation models, this is particularly an issue with the IPX and IPC power supplies. This suggests that those originally fitted may have been of poor quality or from a bad batch, or else that something in the design of the PSU has caused them to fail.

The SPARCstation IPC and IPX share the same power supply and this web page gives a description of replacing the electrolytics in a PSU from an IPC. Interestingly, the author notes elsewhere that they had two IPCs with dead PSUs, whereas one fitted in an IPX worked fine. However, the IPC is a slightly earlier model, so it could simply be that these were older, or perhaps it was pure chance. In any case, I have two IPXs, both of which suffered from dead power supplies.

All the electrolytic capacitors were replaced, apart from the physically large line-side filter capacitor with the rubber cap on top. The RS parts used are:

There were originally both 25V and 50V rated 47uF capacitors fitted, but for the sake of convenience, only 50V rated parts were ordered and these were fitted in place of both.

[Image: leaked electrolytic capacitors]

As can be seen above, the original capacitors had leaked electrolyte and this is never a good sign.

[Image: corrosion on the underside of the PCB]

This had led to what looks like corrosion on the underside of the board.

[Image: replacement capacitors fitted]

Once the caps had been replaced, the power supply was reassembled and fitted back into the chassis.

[Image: testing the power supply]

Following which, power could be applied and it was the moment of truth.

Unfortunately, the SPARCstation didn’t spring back into life and there was just a slight kick of the PSU fan upon powering on. At this point the PSU was removed and the soldering was re-checked for shorts etc., but nothing looked obviously out of place, apart from the aforementioned corrosion.

Given that replacing the caps didn’t take too long and there was a 2nd dead IPX, the decision was made to proceed to re-cap its power supply. This was in a much better state, without electrolyte on the top of the PCB and with no corrosion on the underside.

[Image: the second, cleaner PSU]

Unfortunately, when removing the old caps a section of pad/trace lifted, but this was nothing too serious and was easily dealt with when fitting the new capacitor.

[Image: lifted pad/trace]

Once re-capped and reassembled, this was fitted into the first SPARCstation, power was applied and thankfully it then sprang into life.

The priority was to get one system working and at some point I’ll return to the first PSU and attempt to track down the fault, which should be easier with a second working power supply, since measurements can be made at various points on the PCB with both and then compared.

Now on to resolve the next issue.

Replacing the NVRAM

[Image: NVRAM battery warning on power-up]

Similar to how a PC has a BIOS for initialising the hardware and loading an O/S, Sun machines have OpenBoot PROM (OBP) firmware, which like a BIOS can also be used to make configuration changes — such as setting which device to boot from by default — and these are persisted in non-volatile RAM (NVRAM).

However, whereas PCs typically have a rechargeable battery or removable coin cell that is used to retain configuration settings and maintain the time-of-day while powered down, Sun hardware instead uses a TIMEKEEPER device which integrates SRAM, RTC and a battery. Hence when the battery fails, the entire device must be replaced.

Above we can see the error displayed on power-up, as the OBP determines that the NVRAM chip battery is dead. Unfortunately, this is also used to store the Ethernet MAC address and its Sun hostid, a unique ID that is used for things such as locking licensed software to a specific machine.

The venerable Sun NVRAM/hostid FAQ can be consulted to find details of the replacement part, which is specified for the IPX as being M48T02 (310-9655). Once this is fitted we then need to program the MAC address and hostid. However, if we don’t have a record of these we will have to make them up, and just need to make sure that the MAC address doesn’t clash with that of another device on our network. The above linked FAQ has details of the hostid format, which has a prefix specific to the machine model, while the MAC address prefix is that assigned to Sun Microsystems.

Prior to reprogramming the IDPROM, we first need to reset to NVRAM defaults and disable the diag option, by entering at the OBP “ok” prompt:

set-defaults
setenv diag-switch? false

The OBP monitor is actually a Forth interpreter and the mkp command can be used to reprogram the IDPROM section of the NVRAM. Handily, the FAQ has an example based on SPARCstation IPX hardware and for this a hostid of 57c0ffee is used, with a MAC address of 08:00:20:c0:ff:ee.

1 0 mkp
real-machine-type 1 mkp
8 2 mkp
0 3 mkp
20 4 mkp
c0 5 mkp
ff 6 mkp
ee 7 mkp
0 8 mkp
0 9 mkp
0 a mkp
0 b mkp
c0 c mkp
ff d mkp
ee e mkp
0 f 0 do i idprom@ xor loop f mkp

The first command sets byte 0, which holds the format/version number, to 01. Bytes 2-7 hold the MAC address. Bytes c-e store the hostid. Byte f is used to hold a checksum and this is computed and programmed using the final command shown above. For detailed information, see the FAQ.
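As a sanity check, the checksum can be worked out by hand. A small Python sketch using the example values above (byte 1 is assumed to be 0x57 for the IPX, per the 57c0ffee hostid):

```python
# IDPROM bytes 0x0-0xe from the example above: format 0x01, machine
# type 0x57 (assumed for the IPX, per the 57c0ffee hostid), MAC
# 08:00:20:c0:ff:ee, unused bytes zeroed, hostid suffix c0:ff:ee.
idprom = [0x01, 0x57,
          0x08, 0x00, 0x20, 0xc0, 0xff, 0xee,  # MAC address (bytes 2-7)
          0x00, 0x00, 0x00, 0x00,              # bytes 8-b
          0xc0, 0xff, 0xee]                    # hostid (bytes c-e)

# Byte f is the XOR of bytes 0 through e, as computed on the machine
# by the Forth loop "0 f 0 do i idprom@ xor loop f mkp".
checksum = 0
for b in idprom:
    checksum ^= b
print(f"{checksum:02x}")  # prints 7e
```

If different values are programmed, the checksum changes too, which is why the final Forth command computes it from the live IDPROM contents rather than using a fixed value.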

At this point we can enter reset at the prompt.

[Image: power-up after programming the IDPROM]

Now we find that we have a MAC address and hostid configured, but sadly there is still an error warning that NVRAM needs replacing, despite configuration changes persisting when powered down. It seems that this is not uncommon and there are various discussions online regarding this, with one thread suggesting that the issue may be resolved by fitting an M48T12 part instead.

[Image: replacement NVRAM fitted]

The downside to this error persisting is that the machine will not automatically boot, so in an effort to resolve this an M48T12 part (829-4073) was ordered and the IDPROM programming repeated. Unfortunately, this gave the same results and so more research is required, but at least the SPARCstation will keep time and it won’t be necessary to program the MAC address and hostid every time it’s powered on; not automatically booting is a minor annoyance to put up with.

First boot

[Image: SunOS booting]

To boot from the internal disk we simply enter at the OBP prompt:

boot disk

Following which Solaris 7 (SunOS 5.7) was loaded.

[Image: Ethernet address in the system logs]

After a short while it was then possible to log in as root. From the logs it appears as though the machine was last booted in the year 2000 — or at least it thought that was the year! It was also possible to search the logs to see what the Ethernet address was when it was previously booted, which is useful as we could then go back to the OBP monitor and reprogram the IDPROM with this address. Of course, it is always possible that this wasn’t the original MAC address and it had been set to some random value when the original one had previously been lost.

Next steps

Now that we have a working machine, it would be nice to see if we can clean up the enclosure and remove some of the marks and, if possible, the yellowing of the plastic. Further research is also required to see if the NVRAM error can be cleared. Finally, Solaris 7 was released in late 1998 and is perhaps a little modern for a SPARCstation IPX, hence it would seem appropriate to load a Solaris 1.x release from around 1991-1994, closer to when the system was introduced.

Part 2 is available now

  — Andrew Back


Comment from jgbishop (Raleigh, NC, 29 days ago):
An old article, but still a neat read.