In his excellent piece How I Use “AI”, Nicholas Carlini writes:

I don’t think that “AI” models (by which I mean: large language models) are over-hyped.

Yes, it’s true that any new technology will attract the grifters. And it is definitely true that many companies like to say they’re “Using AI” in the same way they previously said they were powered by “The Blockchain”. […] It’s also the case we may be in a bubble. The internet was a bubble that burst in 2000, but the Internet applications we now have are what was previously the stuff of literal science fiction.

But the reason I think that the recent advances we’ve made aren’t just hype is that, over the past year, I have spent at least a few hours every week interacting with various large language models, and have been consistently impressed by their ability to solve increasingly difficult tasks I give them.

[…]

So in this post, I just want to try and ground the conversation.

With 50 detailed examples, Nicholas illustrates how LLMs have aided him with deep technical challenges, from learning new programming languages to taming the complexity of modern GPU development. He repeatedly demonstrates how LLMs can be both immensely useful and comically flawed.

Nicholas conveys a broad, balanced perspective that resonates strongly with me. Is AI a bubble? Sure; there’s plenty of malinvestment. Is AI over-hyped? Sure; there are those who claim it’s about to replace countless jobs, achieve sentience, or even take over the world. Can AI be harmful? Sure; bias and energy usage are two quite different and troubling considerations. But is AI useless? No, demonstrably not.