From Maggie Appleton’s conference talk on Home-Cooked Software:

For the last ~year I’ve been keeping a close eye on how language model capabilities meaningfully change the speed, ease, and accessibility of software development. The slightly bold theory I put forward in this talk is that we’re on the verge of a golden age of local, home-cooked software and a new kind of developer – what I’ve called the barefoot developer.

Like everything on Maggie’s site, it’s worth a read.

For my part, I share Maggie’s hopefulness that LLMs will help make software development more accessible to a wider audience. I also appreciate her attempt to broaden the definition of “local-first software” away from its technical roots. I want a flourishing world of community software built by and for the communities it serves.

(An aside: “local-first” has always implied “sync engine” to me. And sync engines nearly always end up being complex hard-to-build systems. That’s one reason why I’m skeptical that we’ll see local-first software architectures take off the way the community seems to hope.)

From Glyph:

The history of AI goes in cycles, each of which looks at least a little bit like this…

A++, no notes. Glyph does a great job framing the current AI “moment” in its historical context.

Well, okay, one note. From the final sentence:

What I can tell you is that computers cannot think

Not yet, in any case.

Farcaster, the Web3 social network, just raised $150M with a $1B valuation on the back of (checks notes) 80k active users.

Like most things blockchain, Farcaster seems to me to be a comical Rube Goldberg machine. Ethereum is too slow and its transactions too expensive to support a Twitter-like social network, so Farcaster uses it primarily for identity. An amusing consequence is that users must pay to create Farcaster accounts. The network itself is built from a series of blockchain-decoupled “hubs” and a protocol to send data between them; as of this writing, users are effectively charged for their use of the hubs, too.

Farcaster’s design is motivated by co-founder Varun Srinivasan’s older post on “sufficient” decentralization for social networks. For my part, I find Varun’s ideas intriguing but flawed. Much of Farcaster’s goofy complexity seems to flow from the idea that a social network’s name registry must be decentralized and that boring old domain names are, for some reason, an unacceptable place to start.

In any case, Srinivasan and co-founder Dan Romero must be telling one hell of a compelling story to reach a $1B valuation; more power to them! It’s just hard for me to see what can possibly be so compelling in the face of vastly more successful — and sensibly blockchain-free — decentralized approaches like ActivityPub, ATProto, and even Nostr.

I built a simple fake SMTP server that logs received emails and packaged it as a Docker container. This can be handy in development, typically as part of a docker-compose.yml of some kind, if the service you’re working on requires SMTP and doesn’t provide an easy escape hatch (like Rails’ ActionMailer interceptors or Django’s console email backend). The SMTP spec (RFC 5321) is vast; at the moment, smtp-logger only supports the tiniest bits necessary to make my basic use cases work. But hopefully it’s useful to someone else and easily extended.
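For the curious, the core of a server like this fits in a few dozen lines. This is a minimal sketch, not smtp-logger itself: a standard-library Python SMTP listener that speaks just enough of the protocol (EHLO/HELO, MAIL, RCPT, DATA, QUIT) to accept messages and log them instead of delivering anything.

```python
import asyncio
import logging

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(message)s")
log = logging.getLogger("smtp-logger")

# Messages accepted so far, as (sender, recipients, body) tuples.
RECEIVED = []

async def handle_client(reader, writer):
    """Speak just enough SMTP to accept messages and log them."""
    writer.write(b"220 smtp-logger ready\r\n")
    await writer.drain()
    mail_from, rcpt_tos = None, []
    while True:
        line = await reader.readline()
        if not line:
            break
        cmd = line.decode("utf-8", "replace").strip()
        verb = cmd.split(" ", 1)[0].upper()
        if verb in ("HELO", "EHLO"):
            writer.write(b"250 smtp-logger\r\n")
        elif verb == "MAIL":
            mail_from = cmd.partition(":")[2].strip()
            writer.write(b"250 OK\r\n")
        elif verb == "RCPT":
            rcpt_tos.append(cmd.partition(":")[2].strip())
            writer.write(b"250 OK\r\n")
        elif verb == "DATA":
            writer.write(b"354 End data with <CR><LF>.<CR><LF>\r\n")
            await writer.drain()
            body_lines = []
            while True:
                data_line = await reader.readline()
                if data_line in (b".\r\n", b".\n", b""):
                    break
                body_lines.append(data_line.decode("utf-8", "replace"))
            body = "".join(body_lines)
            RECEIVED.append((mail_from, rcpt_tos, body))
            log.info("message from %s to %s:\n%s", mail_from, rcpt_tos, body)
            mail_from, rcpt_tos = None, []
            writer.write(b"250 OK\r\n")
        elif verb == "QUIT":
            writer.write(b"221 Bye\r\n")
            await writer.drain()
            break
        else:
            writer.write(b"250 OK\r\n")  # blindly accept anything unimplemented
        await writer.drain()
    writer.close()

async def serve(host="0.0.0.0", port=8025):
    server = await asyncio.start_server(handle_client, host, port)
    async with server:
        await server.serve_forever()

if __name__ == "__main__":
    asyncio.run(serve())
```

In a docker-compose.yml you’d run something like this alongside your app and point the app’s SMTP host and port at it, then read outgoing mail from the container logs rather than a real mailbox.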

Just ran across a nuanced older Cory Doctorow article asking what kind of bubble AI is:

Of course AI is a bubble. It has all the hallmarks of a classic tech bubble. Pick up a rental car at SFO and drive in either direction on the 101 – north to San Francisco, south to Palo Alto – and every single billboard is advertising some kind of AI company. Every business plan has the word “AI” in it, even if the business itself has no AI in it.

Tech bubbles come in two varieties: The ones that leave something behind, and the ones that leave nothing behind.

Doctorow argues — and I agree — that unlike, say, the crypto bubble, AI is likely to leave value behind when it pops.

He also makes an interesting argument that, because there are so many low-stakes low-dollar uses for AI, smaller self-hosted models may be the long-term winners:

There will be little models – Hugging Face, Llama, etc – that run on commodity hardware. The people who are learning to “prompt engineer” these “toy models” have gotten far more out of them than even their makers imagined possible. They will continue to eke out new marginal gains from these little models, possibly enough to satisfy most of those low-stakes, low-dollar ap­plications.

I disagree with the article on a couple points, however. First, Doctorow worries about the origins of small models:

These little models were spun out of big models, and without stupid bubble money and/or a viable business case, those big models won’t survive the bubble and be available to make more capable little models.

The past several months have seen advances in low-cost training that make me think this probably won’t be an issue.

Second:

The universe of low-stakes, high-dollar applications for AI is so small that I can’t think of anything that belongs in it.

Triple-A game titles and other high-end entertainment strike me as straightforward examples. And, with Microsoft integrating large models across their entire Office suite, perhaps another answer is “all those low-stakes Excel spreadsheets and Word docs that nevertheless do something useful for business somewhere”.

There’s a lot more to Doctorow’s article; it’s worth a read in full.

Count More

If you’re a student in the United States, you can register to vote either in your home state or your school state.

If one of those happens to be a battleground state in the 2024 presidential election, your vote effectively “counts more” there.

That’s why we built Count More, a tiny website to help students choose:

Screen shot of countmore.us

It’s maybe the (technically) simplest project we’ve worked on at Front Seat but, with the right campaign behind it, we hope it will meaningfully impact the 2024 presidential election.

Angie Wang in the New Yorker: 'Is My Toddler a Stochastic Parrot?'

Angie Wang’s sketch in the New Yorker, “Is My Toddler a Stochastic Parrot?”, is a lovely meditation on language, AI, and life:

The world is racing to develop ever more sophisticated large language models while a small language model unfurls itself in my home.

Seriously, just find a large screen and read it. It’s such a delight.

OpenAI: Consumer & Cloud

Sam Altman, partway into his understated keynote at the first-ever OpenAI DevDay:

Even though this is a developer conference, we can’t resist making some improvements to ChatGPT.

Before this moment, I only fuzzily understood that OpenAI operates in two separate but related markets. They have:

  1. A consumer product called ChatGPT. This includes mobile apps, websites, secondary models like DALL·E 3, and integrated tools like the web browser and code interpreter. ChatGPT has 100M weekly active users; its paid tier costs $20/month. Despite Monday being “DevDay”, OpenAI launched major new consumer-facing features. Most notably, it now allows anyone to build custom “GPTs”.

  2. A growing suite of cloud services. Before OpenAI DevDay, I might have referred to this as the “OpenAI API” — a thin interface to OpenAI’s foundation models. Now, this seems too simplistic as the API quickly evolves into multiple value-add (and margin-add!) services like the Assistants API. This is similar to how AWS adds value (and margin) on top of its few core compute and storage services. OpenAI’s cloud, with 2M registered developers, undoubtedly generates significant revenue.

Finding oneself in two very different product and market segments — suddenly and at scale — is no small feat! Yes, these markets utilize the same underlying technologies. However, their pricing models, sales strategies, and value propositions are quite different and are likely to diverge over time. This isn’t exactly unheard of in the tech industry, but it’s a significant undertaking for a company that’s been around for less than a decade.

Sam Altman, with charming understatement:

About a year ago […] we shipped ChatGPT as a “low-key research preview”… and that went pretty well.

Very few consumer services reach ChatGPT’s scale. In under a year, OpenAI accidentally defined a new consumer product category and became its heavyweight. I don’t think we need to look any further to find the future of consumer chatbots. Especially with the introduction of custom GPTs, it’s hard to see much room for creating differentiated consumer-facing chat services. Instead, developers will need to bring their unique data assets and external compute capabilities to the ChatGPT interface. The Canva demo is a compelling example: Canva is a decacorn but the expectation is that users will still start in ChatGPT.

As for the growth of a new kind of cloud, focused not on virtual compute and object storage but instead on foundation models, we should expect to see many more value-add services in the future. The logic behind this year’s launches seems straightforward: OpenAI simply observed the emerging architectural patterns behind LLM applications and implemented versions that are better and easier to use, readily available from the same vendor that provides the LLM itself. Like AWS, OpenAI is the heavyweight in this emerging cloud segment. While there will always be room for smaller players along competitive axes like model customization and privacy, and I hope we see plenty of innovation there, the default place to start building services that need LLMs will probably be OpenAI for the foreseeable future.

Mike Hoye, writing on Mastodon:

People go to Stack Overflow because the docs and error messages are garbage. TLDR exists because the docs and error messages are garbage. People ask ChatGPT for help because the docs and error messages are garbage. We are going to lose a generation of competence and turn programming into call-and-response glyph-engine supplicancy because we let a personality cult that formed around the PDP-11 in the 1970s convince us that it was pure and good that docs and error messages are garbage.

Mike takes a look at what can go wrong when writing a one-line “Hello World” program in C. It’s a darkly comic example of the slapstick violence that developers inflict on one another.

It’s not just error messages and documentation. Today’s tools and frameworks overflow with violence. Its omnipresence inures us to it; we cast blame anywhere but where we should. All developers suffer for it but new developers suffer disproportionately more.

Anil Dash gave a delightful keynote address at last week’s Oh, the Humanity! conference.

I checked and was not surprised to learn that Anil and I are very close in age. We were kids when personal computers were new and we were probably both in middle or high school when the web was born. We seem to have similar perspectives on the why of the open web: why building a more open web is — in addition to being fun — an important public good.

Anil’s framing is very personal, though, and I found it very moving. The full talk is available on YouTube and feels like it deserves more than its current ~650 views.

Otis Health

Today we launched our newest PSL spinout, Otis Health.

In its first iteration, Otis offers a free discount pharmacy card to the underserved 1099 worker market. A shocking percentage of contract workers have no insurance whatsoever; saving even a handful of dollars on medications can make a meaningful difference. Otis can often save much more than that.

Otis Health: as easy as 1-2-3

I’m excited about Otis on three fronts. First, it brings a modern design and user experience to a market that until today has sorely lacked it. Second, Otis’ discount card is free to use; all the economic magic happens behind the scenes, in the form of complex contracts between retail chains, benefits managers, distributors, and manufacturers. Finally, Otis has its eyes firmly fixed on several adjacent services that we think will also meaningfully improve the lives of 1099 workers in the future.

Oh, and one more (the most important!) reason to be excited: the team. It’s been a blast working with Aaron, Luke, Sanford, Sharon, and Steve to get this thing out the door.

Craig Hockenberry, writing on his long-lived personal blog:

Well, it happened.

We knew it was coming.

A prick pulled the plug.

Over the weekend, Tweetbot, Twitterrific, and every other popular third-party Twitter client was unceremoniously banned. It’s a stupid petty move on Twitter’s part, executed in an impressively stupid petty way. I imagine it’s the final nail in the coffin for several high-profile Twitter hangers-on.

Most of the people I follow, though? They’re long gone.