Took a four-day trip to Iceland. What a beautiful place. A quick photo summary:

Photos: Seljalandsfoss Waterfall, Snowfall Peninsula, Kirkjufellsfoss Waterfall, and the aurora in Iceland.

I knew that Captain Grace Hopper was an early pioneer in computer programming who just so happened to document the first-ever computer bug — a literal moth!

But I’d never seen a video of her before.

Yesterday, the NSA declassified a lecture Hopper gave in 1982 at the age of 75.

It’s astonishingly prescient. She likens that moment to the days just after Ford introduced the Model T and changed the face of the country forever:

I can remember when Riverside Drive in New York City, along the Hudson River, was a dirt road. And on Sunday afternoons, as a family, we would go out on the drive and watch all the beautiful horses and carriages go by. In a whole afternoon, there might be one car.

Whether you recognize it or not, the Model Ts of the computer industry are here. We’ve been through the preliminaries of the industry. We are now at the beginnings of what will be the largest industry in the United States.

But with the Model T came unintended consequences; Hopper foresaw the same for the computer age:

I’m quite worried about something.

When we built all those roads, and the shopping centers, and all the other things, and provided for automobile transportation… we forgot something. We forgot transportation as a whole. We only looked at the automobile. Because of that, when we need them again, the beds of the railroads are falling apart. […] If we want to move our tanks from the center of the country to the ports to ship them overseas, there are no flat cars left. […] The truth of the matter is, we’ve done a lousy job of managing transportation as a whole.

Now as we come to the world of the microcomputer, I think we’re facing the same possibility. I’m afraid we will continue to buy pieces of hardware and then put programs on them, when what we should be doing is looking at the underlying thing, which is the total flow of information through any organization, activity, or company. We should be looking at the information flow and then selecting the computers to implement that flow.

In his excellent piece How I Use “AI”, Nicholas Carlini writes:

I don’t think that “AI” models (by which I mean: large language models) are over-hyped.

Yes, it’s true that any new technology will attract the grifters. And it is definitely true that many companies like to say they’re “Using AI” in the same way they previously said they were powered by “The Blockchain”. […] It’s also the case we may be in a bubble. The internet was a bubble that burst in 2000, but the Internet applications we now have are what was previously the stuff of literal science fiction.

But the reason I think that the recent advances we’ve made aren’t just hype is that, over the past year, I have spent at least a few hours every week interacting with various large language models, and have been consistently impressed by their ability to solve increasingly difficult tasks I give them.

[…]

So in this post, I just want to try and ground the conversation.

With 50 detailed examples, Nicholas illustrates how LLMs have aided him in deep technical challenges, including learning new programming languages, tackling the complexity of modern GPU development, and more. He repeatedly demonstrates how LLMs can be both immensely useful and comically flawed.

Nicholas conveys a broad, balanced perspective that resonates strongly with me. Is AI a bubble? Sure; there’s plenty of malinvestment. Is AI over-hyped? Sure; there are those who claim it’s about to replace countless jobs, achieve sentience, or even take over the world. Can AI be harmful? Sure; bias and energy usage are two quite different and troubling considerations. But is AI useless? No, demonstrably not.

French TV weather reports apparently include two new climate change graphics:

“We see it as the weather being a still image, and the climate being the film in which this image is featured,” explains Audrey Cerdan, climate editor-in-chief at France Télévisions. “If you just see the still image, but you don’t show the whole movie, you’re not going to understand the still picture.”

The first graphic shows projected global temperature rise, in Celsius, to 8 decimal places:

People watched in real time as the counter ticked over from 1.18749863 Celsius above the pre-industrial level to 1.18749864 C. Now, it’s ticking past 1.2.

The second graphic, climate stripes, shows annual temperature rise at a glance. Here are the global stripes for 1850 through 2023:

Global climate stripes for 1850-2023

I admire the simplicity of this visualization. I suppose it can be attacked both for its choice of color scale and for its choice of baseline average (1960 through 2010, somewhat arbitrarily), but for a TV audience those details seem much less important than the intuition it conveys.
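
If you’re curious how little it takes, here’s a rough sketch of the idea in Python with matplotlib: one colored bar per year, with the color driven by that year’s temperature anomaly. The anomaly values below are made up for illustration; the real stripes use observed annual means relative to a baseline average, and this is my own toy version rather than the broadcast graphic.

```python
# A toy rendering of "climate stripes": one colored bar per year,
# colored by that year's temperature anomaly relative to a baseline.
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.colors import Normalize

years = np.arange(1850, 2024)  # 1850 through 2023, one stripe per year

# Made-up anomalies for illustration only (a warming trend plus noise);
# a real chart would use observed annual mean anomalies.
rng = np.random.default_rng(0)
anomalies = np.linspace(-0.4, 1.2, len(years)) + rng.normal(0.0, 0.1, len(years))

norm = Normalize(vmin=-1.5, vmax=1.5)    # symmetric scale around the baseline
colors = plt.cm.RdBu_r(norm(anomalies))  # blue = cooler than baseline, red = warmer

fig, ax = plt.subplots(figsize=(10, 2))
ax.bar(years, np.ones_like(years), width=1.0, color=colors)
ax.set_axis_off()                        # no axes or labels: the stripes are the whole story
fig.savefig("climate_stripes.png", bbox_inches="tight")
```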

I wonder what further opportunities there are to raise awareness of climate change through this combination of mass media and simple data visualization.

Molly White, introducing her new project Follow the Crypto:

This website provides a real-time lens into the cryptocurrency industry’s efforts to influence 2024 elections in the United States.

In addition to shedding much needed light on crypto lobbyist spending, Molly’s project is also open source and can theoretically be targeted at unrelated industries. It’d be fun to see someone build a similar dashboard for fossil fuel influence.

Beyond that: I’ve spent a good chunk of 2024 focused on the upcoming presidential election and have done quite a bit of analysis of FEC data; I still learned a few things by reading the code.

Paul Graham, writing about “The Right Kind of Stubborn”:

The reason the persistent and the obstinate seem similar is that they’re both hard to stop. But they’re hard to stop in different senses. The persistent are like boats whose engines can’t be throttled back. The obstinate are like boats whose rudders can’t be turned.

That feels like a useful analogy.

When I’m at “startup” events in the Seattle region, I tend to — unfairly, I imagine — reduce the stories I’m told to two axes: “commitment to a goal” and “commitment to an implementation”. The entrepreneurs I admire most tend to be highly committed to a clearly articulable goal but only lightly committed to its implementation: for them, implementations are simply hypotheses to test as the business is built.

From Maggie Appleton’s conference talk on Home-Cooked Software:

For the last ~year I’ve been keeping a close eye on how language models’ capabilities meaningfully change the speed, ease, and accessibility of software development. The slightly bold theory I put forward in this talk is that we’re on the verge of a golden age of local, home-cooked software and a new kind of developer – what I’ve called the barefoot developer.

Like everything on Maggie’s site, it’s worth a read.

For my part, I share Maggie’s hopefulness that LLMs will help make software development more accessible to a wider audience. I also appreciate her attempt to broaden the definition of “local-first software” away from its technical roots. I want a flourishing world of community software built by and for the communities it serves.

(An aside: “local-first” has always implied “sync engine” to me. And sync engines nearly always end up being complex, hard-to-build systems. That’s one reason why I’m skeptical that we’ll see local-first software architectures take off the way the community seems to hope.)

From Glyph:

The history of AI goes in cycles, each of which looks at least a little bit like this…

A++, no notes. Glyph does a great job framing the current AI “moment” in its historical context.

Well, okay, one note. From the final sentence:

What I can tell you is that computers cannot think

Not yet, in any case.

Farcaster, the Web3 social network, just raised $150M with a $1B valuation on the back of (checks notes) 80k active users.

Like most things blockchain, Farcaster seems to me to be a comical Rube Goldberg machine. Ethereum is too slow and transactions too expensive to support a Twitter-like social network, so Farcaster uses it primarily for identity. An amusing consequence is that users must pay to create Farcaster accounts. The network itself is built from a series of blockchain-decoupled “hubs” and a protocol to send data between them; as of this writing, users are effectively charged for their use, too.

Farcaster’s design is motivated by co-founder Varun Srinivasan’s older post on “sufficient” decentralization for social networks. For my part, I find Varun’s ideas intriguing but flawed. Much of Farcaster’s goofy complexity seems to flow from the idea that a social network’s name registry must be decentralized and that boring old domain names are, for some reason, an unacceptable place to start.

In any case, Srinivasan and co-founder Dan Romero must be telling one hell of a compelling story to reach a $1B valuation; more power to them! It’s just hard for me to see what can possibly be so compelling in the face of vastly more successful — and sensibly blockchain-free — decentralized approaches like ActivityPub, ATProto, and even Nostr.

I built a simple fake SMTP server that logs received emails and packaged it as a Docker container. It can be handy in development, typically as part of a docker-compose.yml of some kind, when the service you’re working on requires SMTP and doesn’t provide an easy escape hatch (like Rails’ ActionMailer interceptors or Django’s console email backend). The SMTP RFC is vast; at the moment, smtp-logger supports only the tiniest bits necessary to make my basic use cases work. But hopefully it’s useful to someone else and easy to extend.
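
To give a flavor of what “fake SMTP server that logs received emails” means in practice, here’s a minimal sketch using the aiosmtpd library. It’s illustrative only: the port is an arbitrary choice, and this isn’t necessarily how smtp-logger itself is implemented.

```python
# Minimal sketch: an SMTP "sink" that accepts every message and logs it.
# Uses aiosmtpd; not necessarily how smtp-logger itself works internally.
from aiosmtpd.controller import Controller


class LoggingHandler:
    async def handle_DATA(self, server, session, envelope):
        # Log the envelope and the raw message body instead of delivering it.
        print(f"From: {envelope.mail_from}")
        print(f"To:   {', '.join(envelope.rcpt_tos)}")
        print(envelope.content.decode("utf-8", errors="replace"))
        return "250 Message accepted for delivery"


if __name__ == "__main__":
    # Port 1025 is an arbitrary unprivileged choice for local development.
    controller = Controller(LoggingHandler(), hostname="0.0.0.0", port=1025)
    controller.start()
    input("Logging SMTP server on :1025; press Enter to stop.\n")
    controller.stop()
```

Point your service’s SMTP settings at localhost:1025 and outgoing mail shows up in the logs instead of anyone’s inbox.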

Just ran across a nuanced older Cory Doctorow article asking what kind of bubble AI is:

Of course AI is a bubble. It has all the hallmarks of a classic tech bubble. Pick up a rental car at SFO and drive in either direction on the 101 – north to San Francisco, south to Palo Alto – and every single billboard is advertising some kind of AI company. Every business plan has the word “AI” in it, even if the business itself has no AI in it.

Tech bubbles come in two varieties: The ones that leave something behind, and the ones that leave nothing behind.

Doctorow argues — and I agree — that unlike, say, the crypto bubble, AI is likely to leave value behind when it pops.

He also makes an interesting argument that, because there are so many low-stakes low-dollar uses for AI, smaller self-hosted models may be the long-term winners:

There will be little models – Hugging Face, Llama, etc – that run on commodity hardware. The people who are learning to “prompt engineer” these “toy models” have gotten far more out of them than even their makers imagined possible. They will continue to eke out new marginal gains from these little models, possibly enough to satisfy most of those low-stakes, low-dollar applications.

I disagree with the article on a couple points, however. First, Doctorow worries about the origins of small models:

These little models were spun out of big models, and without stupid bubble money and/or a viable business case, those big models won’t survive the bubble and be available to make more capable little models.

The past several months have seen advances in low-cost training that make me think this probably won’t be an issue.

Second:

The universe of low-stakes, high-dollar applications for AI is so small that I can’t think of anything that belongs in it.

Triple-A game titles and other high-end entertainment strike me as straightforward examples. And, with Microsoft integrating large models across their entire Office suite, perhaps another answer is “all those low-stakes Excel spreadsheets and Word docs that nevertheless do something useful for business somewhere”.

There’s a lot more to Doctorow’s article; it’s worth a read in full.

Count More

If you’re a student in the United States, you can register to vote either in your home state or your school state.

If one of those happens to be a battleground state in the 2024 presidential election, your vote effectively “counts more” there.

That’s why we built Count More, a tiny website to help students choose:

Screen shot of countmore.us

It’s maybe the (technically) simplest project we’ve worked on at Front Seat but, with the right campaign behind it, we hope it will meaningfully impact the 2024 presidential election.

Angie Wang in the New Yorker: 'Is My Toddler a Stochastic Parrot?'

Angie Wang’s sketch in the New Yorker, “Is My Toddler a Stochastic Parrot?”, is a lovely meditation on language, AI, and life:

The world is racing to develop ever more sophisticated large language models while a small language model unfurls itself in my home.

Seriously, just find a large screen and read it. It’s such a delight.