Decompression

It was unexpectedly sunny in Seattle today. The trails and parks were full. Perhaps we were all decompressing.

Even with the ample sun and fresh air, it was impossible not to dwell:

We can say it was about the economy. We can say it was (sigh) about immigrants taking jobs and importing crime. We can say it was about misogyny and racism.

It was about these things, to some degree.

But I think these explanations also miss the mark. As I write, the GOP looks poised to sweep it all: the popular vote, both houses of Congress, everything.

I see this election as a resounding affirmative vote for Trump and Trumpism. I see it as a strident repudiation of much that came before, the political order of America since at least the Reagan administration. I see it as a statement that our norms and our mores and our rule of law no longer matter except to a minority of us.

America is changed. Mending it will require the work not of another election, but of an entire generation.

If you’re a U.S. citizen: vote for Kamala Harris.

I don’t think I need to say more. Just vote.

PEP 750: Template Strings proposes a new addition to the Python programming language that generalizes f-strings. Unlike an f-string, which evaluates directly to str, a t-string evaluates to a new type, Template, that provides access to the static string parts and the interpolated values before they are combined:

name = "World"
template: Template = t"Hello {name}"
assert template.args[0] == "Hello "
assert template.args[1].value == "World"

This opens the door to a variety of new use cases. The proposal is currently in the draft stage and open for feedback.
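
One obvious use case is safe escaping. As a sketch (hypothetical code, not from the PEP: it assumes the draft API above, where Template.args holds static strings and interpolation objects carrying a .value attribute), an HTML helper could escape interpolated values while leaving the surrounding markup alone:

from html import escape

def safe_html(template: Template) -> str:
    # Static string parts pass through untouched; interpolated
    # values are escaped before everything is joined back up.
    parts = []
    for arg in template.args:
        if isinstance(arg, str):
            parts.append(arg)
        else:
            parts.append(escape(str(arg.value)))
    return "".join(parts)

user_input = "<script>alert('hi')</script>"
safe_html(t"<p>{user_input}</p>")
# -> "<p>&lt;script&gt;alert(&#x27;hi&#x27;)&lt;/script&gt;</p>"

F-strings can't support this pattern: by the time an f-string is a value, the interpolations have already been mixed into the string.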

Took a four-day trip to Iceland. What a beautiful place. A quick photo summary:

[Photos: Seljalandsfoss Waterfall, the Snæfellsnes Peninsula, Kirkjufellsfoss Waterfall, and the aurora]

I knew that Captain Grace Hopper was a pioneer of computer programming who famously documented the first literal computer bug: a moth, taped into a logbook!

But I’d never seen a video of her before.

Yesterday, the NSA declassified a lecture Hopper gave in 1982 at the age of 75.

It’s astonishingly prescient. She likens that moment to the days just after Ford introduced the Model T and changed the face of the country forever:

I can remember when Riverside Drive in New York City, along the Hudson River, was a dirt road. And on Sunday afternoons, as a family, we would go out on the drive and watch all the beautiful horses and carriages go by. In a whole afternoon, there might be one car.

Whether you recognize it or not, the Model Ts of the computer industry are here. We’ve been through the preliminaries of the industry. We are now at the beginnings of what will be the largest industry in the United States.

But with the Model T came unintended consequences; Hopper foresaw the same for the computer age:

I’m quite worried about something.

When we built all those roads, and the shopping centers, and all the other things, and provided for automobile transportation… we forgot something. We forgot transportation as a whole. We only looked at the automobile. Because of that, when we need them again, the beds of the railroads are falling apart. […] If we want to move our tanks from the center of the country to the ports to ship them overseas, there are no flat cars left. […] The truth of the matter is, we’ve done a lousy job of managing transportation as a whole.

Now as we come to the world of the microcomputer, I think we’re facing the same possibility. I’m afraid we will continue to buy pieces of hardware and then put programs on them, when what we should be doing is looking at the underlying thing, which is the total flow of information through any organization, activity, or company. We should be looking at the information flow and then selecting the computers to implement that flow.

In his excellent piece How I Use “AI”, Nicholas Carlini writes:

I don’t think that “AI” models (by which I mean: large language models) are over-hyped.

Yes, it’s true that any new technology will attract the grifters. And it is definitely true that many companies like to say they’re “Using AI” in the same way they previously said they were powered by “The Blockchain”. […] It’s also the case we may be in a bubble. The internet was a bubble that burst in 2000, but the Internet applications we now have are what was previously the stuff of literal science fiction.

But the reason I think that the recent advances we’ve made aren’t just hype is that, over the past year, I have spent at least a few hours every week interacting with various large language models, and have been consistently impressed by their ability to solve increasingly difficult tasks I give them.

[…]

So in this post, I just want to try and ground the conversation.

With 50 detailed examples, Nicholas illustrates how LLMs have aided him in deep technical challenges, including learning new programming languages, tackling the complexity of modern GPU development, and more. He repeatedly demonstrates how LLMs can be both immensely useful and comically flawed.

Nicholas conveys a broad, balanced perspective that resonates strongly with me. Is AI a bubble? Sure; there's plenty of malinvestment. Is AI over-hyped? Sure; there are those who claim it's about to replace countless jobs, achieve sentience, or even take over the world. Can AI be harmful? Sure; bias and energy usage are two quite different and troubling considerations. But is AI useless? No, demonstrably not.

French TV weather reports apparently include two new climate change graphics:

“We see it as the weather being a still image, and the climate being the film in which this image is featured,” explains Audrey Cerdan, climate editor-in-chief at France Télévisions. “If you just see the still image, but you don’t show the whole movie, you’re not going to understand the still picture.”

The first graphic shows the estimated global temperature rise above pre-industrial levels, in Celsius, to 8 decimal places:

People watched in real time as the counter ticked over from 1.18749863 Celsius above the pre-industrial level to 1.18749864 C. Now, it’s ticking past 1.2.

The second graphic, climate stripes, shows annual temperature rise at a glance. Here are the global stripes for 1850 through 2023:

Global climate stripes for 1850-2023

I admire the simplicity of this visualization. I suppose it can be attacked both for its choice of color scale and for its choice of baseline average (1960 through 2010, somewhat arbitrarily), but for a TV audience those details seem much less important than the gut impression it conveys.
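
The rendering itself is simple enough to sketch in a few lines. Here's a hypothetical matplotlib version, with synthetic anomaly data standing in for the real observational series: one full-height bar per year, colored by that year's deviation.

import numpy as np
import matplotlib.pyplot as plt

# Synthetic annual anomalies for 1850-2023; a stand-in, not real data.
rng = np.random.default_rng(0)
years = np.arange(1850, 2024)
anomalies = np.linspace(-0.4, 1.2, years.size) + rng.normal(0, 0.1, years.size)

# Normalize anomalies to [0, 1] and map them onto a blue-to-red scale.
colors = plt.cm.RdBu_r((anomalies - anomalies.min()) / np.ptp(anomalies))

# One full-height bar per year, with every axis and label stripped away.
fig, ax = plt.subplots(figsize=(10, 2))
ax.bar(years, np.ones(years.size), width=1.0, color=colors)
ax.set_axis_off()
plt.show()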

I wonder what further opportunities there are to raise awareness of climate change through this combination of mass media and simple data visualization.

Molly White, introducing her new project Follow the Crypto:

This website provides a real-time lens into the cryptocurrency industry’s efforts to influence 2024 elections in the United States.

In addition to shedding much-needed light on the crypto industry's political spending, Molly's project is open source and could, in principle, be pointed at other industries. It'd be fun to see someone build a similar dashboard for fossil fuel influence.

Beyond that: I’ve spent a good chunk of 2024 focused on the upcoming Presidential election and have done quite a bit of analysis of FEC data; I still learned a few things by reading the code.
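
If you'd like to poke at the same data, the FEC publishes it through the public openFEC API. A minimal sketch (the endpoint, parameters, and field names below reflect my understanding of the v1 API; DEMO_KEY allows light, rate-limited use):

import requests

API = "https://api.open.fec.gov/v1"

def search_committees(query: str, api_key: str = "DEMO_KEY") -> list[dict]:
    # Full-text search over committees registered with the FEC.
    resp = requests.get(
        f"{API}/committees/",
        params={"q": query, "api_key": api_key, "per_page": 20},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["results"]

for committee in search_committees("crypto"):
    print(committee["committee_id"], committee["name"])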

Paul Graham, writing about “The Right Kind of Stubborn”:

The reason the persistent and the obstinate seem similar is that they’re both hard to stop. But they’re hard to stop in different senses. The persistent are like boats whose engines can’t be throttled back. The obstinate are like boats whose rudders can’t be turned.

That feels like a useful analogy.

When I’m at “startup” events in the Seattle region, I tend to — unfairly, I imagine — reduce the stories I’m told to two axes: “commitment to a goal” and “commitment to an implementation”. The entrepreneurs I admire most tend to be highly committed to a clearly articulable goal but only lightly committed to its implementation: for them, implementations are simply hypotheses to test as the business is built.

From Maggie Appleton’s conference talk on Home-Cooked Software:

For the last ~year I've been keeping a close eye on how language model capabilities meaningfully change the speed, ease, and accessibility of software development. The slightly bold theory I put forward in this talk is that we're on the verge of a golden age of local, home-cooked software and a new kind of developer – what I've called the barefoot developer.

Like everything on Maggie’s site, it’s worth a read.

For my part, I share Maggie’s hopefulness that LLMs will help make software development more accessible to a wider audience. I also appreciate her attempt to broaden the definition of “local-first software” away from its technical roots. I want a flourishing world of community software built by and for the communities it serves.

(An aside: "local-first" has always implied "sync engine" to me. And sync engines nearly always end up being complex, hard-to-build systems. That's one reason why I'm skeptical that we'll see local-first software architectures take off the way the community seems to hope.)

From Glyph:

The history of AI goes in cycles, each of which looks at least a little bit like this…

A++, no notes. Glyph does a great job framing the current AI “moment” in its historical context.

Well, okay, one note. From the final sentence:

What I can tell you is that computers cannot think

Not yet, in any case.