Lit. 🤞
If you’re a U.S. citizen: vote for Kamala Harris.
I don’t think I need to say more. Just vote.
PEP 750: Template Strings proposes a new addition to the Python programming language that generalizes f-strings. Unlike f-strings, t-strings evaluate to a new type, Template, that provides access to the string and its interpolated values before they are combined:
name = "World"
template: Template = t"Hello {name}"
assert template.args[0] == "Hello "
assert template.args[1].value == "World"
This opens the door to a variety of new use cases. The proposal is currently in the draft stage and open for feedback.
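One of those use cases is context-aware escaping. Since t-strings aren't implemented in any shipping Python yet, here's a sketch using hand-built stand-in classes that approximate the draft's Template shape (the names `Template`, `Interpolation`, and `args` follow the draft; everything else is illustrative):

```python
from dataclasses import dataclass

# Stand-ins approximating the draft PEP 750 types; t-strings don't exist
# yet, so the args tuple is constructed by hand below.
@dataclass
class Interpolation:
    value: object

@dataclass
class Template:
    args: tuple  # alternating literal strings and Interpolation objects

def html_escape(template: Template) -> str:
    # Escape only the interpolated values, leaving literal markup intact --
    # the kind of processing f-strings can't support, since they evaluate
    # straight to str.
    parts = []
    for arg in template.args:
        if isinstance(arg, Interpolation):
            parts.append(str(arg.value).replace("&", "&amp;").replace("<", "&lt;"))
        else:
            parts.append(arg)
    return "".join(parts)

# Roughly what t"<p>{user_input}</p>" might evaluate to under the draft:
user_input = "<script>alert(1)</script>"
template = Template(args=("<p>", Interpolation(user_input), "</p>"))
print(html_escape(template))  # <p>&lt;script>alert(1)&lt;/script></p>
```

Because the template is inspected before the pieces are joined, a library can decide how each interpolated value should be treated, something a plain f-string makes impossible.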
How to Monetize a Blog seems a bit bland at first. I’m guessing most visitors quickly bounce away.
Don’t. Keep reading. Becōme.
And that’s how you fix your 47-year-old computer from 15 billion miles away!
Fascinating talk by Bruce Waggoner, a mission assurance manager at JPL, about how Voyager 1 was repaired after it effectively stopped sending all useful data back to Earth in late 2023.
I knew that Captain Grace Hopper was an early pioneer in computer programming who just so happened to discover and document the first ever computer bug — a literal moth!
But I’d never seen a video of her before.
Yesterday, the NSA declassified a lecture Hopper gave in 1982 at the age of 75.
It’s astonishingly prescient. She likens that moment to the days just after Ford introduced the Model T and changed the face of the country forever:
I can remember when Riverside Drive in New York City, along the Hudson River, was a dirt road. And on Sunday afternoons, as a family, we would go out on the drive and watch all the beautiful horses and carriages go by. In a whole afternoon, there might be one car.
…
Whether you recognize it or not, the Model Ts of the computer industry are here. We’ve been through the preliminaries of the industry. We are now at the beginnings of what will be the largest industry in the United States.
But with the Model T came unintended consequences; Hopper foresaw the same for the computer age:
I’m quite worried about something.
When we built all those roads, and the shopping centers, and all the other things, and provided for automobile transportation… we forgot something. We forgot transportation as a whole. We only looked at the automobile. Because of that, when we need them again, the beds of the railroads are falling apart. […] If we want to move our tanks from the center of the country to the ports to ship them overseas, there are no flat cars left. […] The truth of the matter is, we’ve done a lousy job of managing transportation as a whole.
Now as we come to the world of the microcomputer, I think we’re facing the same possibility. I’m afraid we will continue to buy pieces of hardware and then put programs on them, when what we should be doing is looking at the underlying thing, which is the total flow of information through any organization, activity, or company. We should be looking at the information flow and then selecting the computers to implement that flow.
In his excellent piece How I Use “AI”, Nicholas Carlini writes:
I don’t think that “AI” models (by which I mean: large language models) are over-hyped.
Yes, it’s true that any new technology will attract the grifters. And it is definitely true that many companies like to say they’re “Using AI” in the same way they previously said they were powered by “The Blockchain”. […] It’s also the case we may be in a bubble. The internet was a bubble that burst in 2000, but the Internet applications we now have are what was previously the stuff of literal science fiction.
But the reason I think that the recent advances we’ve made aren’t just hype is that, over the past year, I have spent at least a few hours every week interacting with various large language models, and have been consistently impressed by their ability to solve increasingly difficult tasks I give them.
[…]
So in this post, I just want to try and ground the conversation.
With 50 detailed examples, Nicholas illustrates how LLMs have aided him in deep technical challenges, including learning new programming languages, tackling the complexity of modern GPU development, and more. He repeatedly demonstrates how LLMs can be both immensely useful and comically flawed.
Nicholas conveys a broad, balanced perspective that resonates strongly with me. Is AI a bubble? Sure; there’s plenty of malinvestment. Is AI over-hyped? Sure; there are those who claim it’s about to replace countless jobs, achieve sentience, or even take over the world. Can AI be harmful? Sure; bias and energy usage are two quite different and troubling considerations. But is AI useless? No, demonstrably not.
French TV weather reports apparently include two new climate change graphics:
“We see it as the weather being a still image, and the climate being the film in which this image is featured,” explains Audrey Cerdan, climate editor-in-chief at France Télévisions. “If you just see the still image, but you don’t show the whole movie, you’re not going to understand the still picture.”
The first graphic shows projected global temperature rise, in Celsius, to 8 decimal places:
People watched in real time as the counter ticked over from 1.18749863 Celsius above the pre-industrial level to 1.18749864 C. Now, it’s ticking past 1.2.
The second graphic, climate stripes, shows annual temperature rise at a glance. Here are the global stripes for 1850 through 2023:
I admire the simplicity of this visualization. I suppose it can be attacked both for its choice of color scale and for its choice of baseline average (1960 through 2010, somewhat arbitrarily) but, for a TV audience, those details seem much less important than the instinct conveyed.
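The underlying idea is simple enough to sketch in a few lines: each year's temperature anomaly (relative to a baseline average) maps to one colored stripe, blue below the baseline and red above it. The anomaly values and color ramp below are illustrative, not the actual Met Office data or palette:

```python
def anomaly_to_rgb(anomaly: float, scale: float = 1.5) -> tuple:
    """Map a temperature anomaly in degrees C to an RGB stripe color:
    white at the baseline, saturating to blue (cool) or red (warm)."""
    t = max(-1.0, min(1.0, anomaly / scale))  # clamp to [-1, 1]
    if t >= 0:
        # Warmer than baseline: white -> red
        return (255, int(255 * (1 - t)), int(255 * (1 - t)))
    t = -t
    # Cooler than baseline: white -> blue
    return (int(255 * (1 - t)), int(255 * (1 - t)), 255)

# Illustrative anomalies for a handful of years (not real data):
years = {1850: -0.4, 1950: -0.2, 2000: 0.4, 2023: 1.2}
stripes = {year: anomaly_to_rgb(a) for year, a in years.items()}
```

Rendering one such rectangle per year, with no axes or labels, is the whole visualization; that's exactly why it works on television.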
I wonder what further opportunities there are to raise awareness of climate change through this combination of mass media and simple data visualization?
Molly White, introducing her new project Follow the Crypto:
This website provides a real-time lens into the cryptocurrency industry’s efforts to influence 2024 elections in the United States.
In addition to shedding much-needed light on crypto lobbyist spending, Molly’s project is also open source and can theoretically be targeted at unrelated industries. It’d be fun to see someone build a similar dashboard for fossil fuel influence.
Beyond that: I’ve spent a good chunk of 2024 focused on the upcoming Presidential election and have done quite a bit of analysis of FEC data; I still learned a few things by reading the code.
Paul Graham, writing about “The Right Kind of Stubborn”:
The reason the persistent and the obstinate seem similar is that they’re both hard to stop. But they’re hard to stop in different senses. The persistent are like boats whose engines can’t be throttled back. The obstinate are like boats whose rudders can’t be turned.
That feels like a useful analogy.
When I’m at “startup” events in the Seattle region, I tend to — unfairly, I imagine — reduce the stories I’m told to two axes: “commitment to a goal” and “commitment to an implementation”. The entrepreneurs I admire most tend to be highly committed to a clearly articulable goal but only lightly committed to its implementation: for them, implementations are simply hypotheses to test as the business is built.
From Maggie Appleton’s conference talk on Home-Cooked Software:
For the last ~year I’ve been keeping a close eye on how language models’ capabilities meaningfully change the speed, ease, and accessibility of software development. The slightly bold theory I put forward in this talk is that we’re on the verge of a golden age of local, home-cooked software and a new kind of developer – what I’ve called the barefoot developer.
Like everything on Maggie’s site, it’s worth a read.
For my part, I share Maggie’s hopefulness that LLMs will help make software development more accessible to a wider audience. I also appreciate her attempt to broaden the definition of “local-first software” away from its technical roots. I want a flourishing world of community software built by and for the communities it serves.
(An aside: “local-first” has always implied “sync engine” to me. And sync engines nearly always end up being complex hard-to-build systems. That’s one reason why I’m skeptical that we’ll see local-first software architectures take off the way the community seems to hope.)
From Glyph:
The history of AI goes in cycles, each of which looks at least a little bit like this…
A++, no notes. Glyph does a great job framing the current AI “moment” in its historical context.
Well, okay, one note. From the final sentence:
What I can tell you is that computers cannot think
Not yet, in any case.
Farcaster, the Web3 social network, just raised $150M with a $1B valuation on the back of (checks notes) 80k active users.
Like most things blockchain, Farcaster seems to me to be a comical Rube Goldberg machine. Ethereum is too slow and its transactions too expensive to support a Twitter-like social network, so Farcaster uses it primarily for identity. An amusing consequence is that users must pay to create Farcaster accounts. The network itself is built from a series of blockchain-decoupled “hubs” and a protocol to send data between them; as of this writing, users are effectively charged for their use, too.
Farcaster’s design is motivated by co-founder Varun Srinivasan’s older post on “sufficient” decentralization for social networks. For my part, I find Varun’s ideas intriguing but flawed. Much of Farcaster’s goofy complexity seems to flow from the idea that a social network’s name registry must be decentralized and that boring old domain names are, for some reason, an unacceptable place to start.
In any case, Srinivasan and co-founder Dan Romero must be telling one hell of a compelling story to reach a $1B valuation; more power to them! It’s just hard for me to see what can possibly be so compelling in the face of vastly more successful — and sensibly blockchain-free — decentralized approaches like ActivityPub, ATProto, and even Nostr.
I built a simple fake SMTP server that logs received emails and packaged it as a Docker container. This can be handy in development, typically as part of a docker-compose.yml of some kind, if the service you’re working on requires SMTP and doesn’t provide an easy escape hatch (like Rails’ ActionMailer interceptors or Django’s console email backend). The SMTP RFC is vast; at the moment, smtp-logger only supports the tiniest bits necessary to make my basic use cases work. But hopefully it’s useful to someone else and easily extended.
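To give a sense of how small "the tiniest bits" can be: the following is not the actual smtp-logger code, just a toy line-oriented state machine showing roughly how few SMTP verbs a log-only server needs to handle. Class and response strings are illustrative:

```python
class MiniSMTP:
    """Toy SMTP responder: accepts a message and appends it to a log.
    Handles only HELO/EHLO, MAIL, RCPT, DATA, and QUIT."""

    def __init__(self, log: list):
        self.log = log
        self.data_mode = False
        self.buffer = []

    def feed(self, line: str):
        """Process one client line; return the server's reply (or None
        while accumulating message body lines)."""
        if self.data_mode:
            if line == ".":  # lone dot terminates the DATA body
                self.log.append("\n".join(self.buffer))
                self.buffer = []
                self.data_mode = False
                return "250 OK"
            self.buffer.append(line)
            return None
        verb = line.split(" ", 1)[0].upper()
        if verb in ("HELO", "EHLO"):
            return "250 Hello"
        if verb in ("MAIL", "RCPT"):
            return "250 OK"  # accept any sender/recipient; we only log
        if verb == "DATA":
            self.data_mode = True
            return "354 End data with <CRLF>.<CRLF>"
        if verb == "QUIT":
            return "221 Bye"
        return "502 Command not implemented"
```

Wrap something like this in a socket server and a Dockerfile and you have the whole idea; everything else the RFC describes (authentication, extensions, proper error handling) can be added as needed.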