Published

Inexactly benchmarking Eleventy vs Astro build times

The Eames Institute team is checking out some frameworks and static site generators for a project, and I wanted to see how Eleventy and Astro compare in terms of build time.

Zach Leatherman’s 2022 article “Which generator builds Markdown the fastest?” is probably the most thorough resource I’ve come across along these lines, and I’d recommend checking that out if you want to do some serious comparisons.

But I was curious about a “real world” test in 2024, so I decided to do some inexact benchmarking using a Markdown export of this blog. A caveat up front: I’m much more familiar with Eleventy than I am with Astro, which will likely be apparent when I get to the incremental build tests later in this post.

For Eleventy I used eleventy-netlify-boilerplate with zero modifications, and for Astro I used the blog template as described in their docs with some small modifications along the lines of this wordpress-to-astro repo to get categories and tags working. I didn’t want to use wordpress-to-astro directly since it was last updated two years ago, but it is a good reference point.

My blog has 770 posts which were exported to 770 Markdown files. With a paginated feed, categories, and tags, the total number of built pages is around 2550.*

Based on an average taken from 10 builds, Astro took 10.07 seconds and Eleventy took 4.29 seconds to build.

Incremental builds can speed things up significantly, since only the content relevant to the modified files is rebuilt.

Eleventy has supported incremental builds since December 2022 (I believe!), but they aren’t yet supported on a CI server. There is an open issue for it which looks like it has traction.

To test incremental builds, I added and removed the same single tag on the same post 10 times.** Based on an average taken from 10 incremental builds, Eleventy took 2.17 seconds, skipping 777 files. I would have expected it to skip more, but this might have to do with not being able to incrementally build paginated data.

I wanted to test the same content change in Astro… but it isn’t clear to me that there is an apples-to-apples comparison. Astro introduced an experimental Incremental Content Caching feature in v4.0 (I’m not sure if this is supported on CI servers). When I added experimental.contentCollectionCache to the config, there was no difference in basic build times when I made a content change. I’m not sure if this is because having all of my content in Markdown makes the caching a moot point, or if it’s something else. If anyone has further context on how best to test Incremental Content Caching in Astro, I’d love to know.
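For reference, opting in looked something like this in astro.config.mjs (as of Astro 4.x; the feature is experimental and the flag may well change or disappear):

```javascript
// astro.config.mjs -- enabling the experimental content cache (Astro 4.x)
import { defineConfig } from 'astro/config';

export default defineConfig({
  experimental: {
    contentCollectionCache: true,
  },
});
```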

For what it’s worth, running astro dev is extremely quick, just 125ms before it’s ready.

I’d be curious to do a similar benchmark using WordPress’s REST API but am not sure I’ll have the time… Will update here if I do.


* I give a rough number because the Eleventy boilerplate and Astro template generate a few additional pages, but the page total difference is in the single digits so I didn’t waste time evening them up perfectly.

** For my own future reference in case I do further tests: Add and remove the tag hello from this post.


“‘AI’ is pretty much just shorthand for mediocre”

Just read through “You sound like a bot” by Adi Robertson in The Verge. I hadn’t really put my finger on the right word for my feelings about AI until reading that article, but that’s it: it feels very mediocre.

If you want to get a rough overview of how the average frontend engineer might feel about a JavaScript framework, ChatGPT is useful enough. If you’re willing to ignore the questionable origins of the training data in use, Midjourney can be useful for rapid image generation for an early storyboard.

But as of right now, the output always feels meh, a “yeah, ok.” It never really surprises you with a unique perspective or an unexpected visual language. That vibe is only becoming stronger as AI developers continue to sand off the “rough” edges of their products.

Maybe that will change. As Robertson says, “Maybe the schism between artists and AI developers will resolve, and we’ll see more tools that amplify human idiosyncrasy instead of offering a lowest-common-denominator replacement for it.”

That’s not happening any time soon. One reason is that artists have been given about 1,000 reasons to distrust AI, and I think that it is only widespread artistic use and input that could actually lead to that sort of breakthrough.

Another reason: spewing mediocrity is a pretty strong sweet spot for AI. AI is useful as a summarizer so long as you take the response with a grain of salt and follow up on sources. Case in point: Elicit seems pretty cool! Listen to this ShopTalk Show episode with Maggie Appleton for more.

Anyways, maybe we’ll eventually get to the point where AI has that human “spark”, who knows. Maybe it’ll happen next month and I’ll eat my words. Until then, as most of the content we experience online becomes more grey and sludgy, the personal will become far more valuable.

In Anil Dash’s article “The Internet Is About to Get Weird Again” for Rolling Stone late last year, he says that “the human web, the one made by regular people, is resurgent”. He places a lot of emphasis on the breakdown of the content silos we’ve relied on for so many years, which definitely seems like the major catalyst for the shift. But AI’s growing mediocrity will be the force that drives it home and really makes the human web stick.

(Related side point: clearly I need to read Filterworld by Kyle Chayka.)


Edit 21 Feb 2024: Maybe I should eat my words sooner? OpenAI just came out with Sora. Which is impressive! But… IDK, it still feels meh somehow? Maybe it’s just because it’s still early days, we’ll see.


Bell ringing as abstraction, exercise, and communion

Can’t remember how we got talking about it, but another member of the Brooklyn Conservatory Chorale told me that she’s very into English Change Ringing.

I thought I hadn’t heard of it before, but in fact I have heard it many times, since I lived over there for 10 years. Listen to an example from St Paul’s on YouTube. I didn’t know it had a name; I guess I always assumed it was sort of random.

If you listen closely you can start to recognize patterns. And if you live in the US, you might realize why this sound feels somewhat historical: it’s not something we hear frequently even in places with lots of churches. It is somewhat-to-very rare in the US depending upon where you live (see map of North American bell towers).

I started poking around online. For a concise description of English Change Ringing, you can’t beat the one on the New York Trinity Ringers website. Would love to go hear their bells some time.

But for a wonderfully in-depth presentation, it’s worth reading the article “Campanologomania” by Katherine Hunt published in issue 53 of Cabinet magazine in spring 2014.

(Incidentally, how have I never come across Cabinet before? “We believe that curiosity is the very basis of ethics insofar as a deeper understanding of our social and material cultures encourages us both to be better custodians of the world and at the same time allows us to imagine it otherwise.” Spot on. I hope they’re not done for… The last issue was winter ‘21 / spring ‘22, and the last event was in late 2020 as far as I can tell.)

In the article, Hunt traces English Change Ringing from its origins as almost a drunken group pastime on idle bells, to a sort of obsession among folks – men, really – of many classes, to something that was seen as somewhat lowly due to the physical exertion it required, to the qualities it shares with modern twelve-tone music and the invention of the dumbbell (quite literally a dumb bell).

It’s hard to describe how physically in-tune the bell ringers must be to achieve the many permutations in a multi-hour peal. Hunt says:

While change ringers must understand the shape of the particular method they are ringing, they do not follow written notation for each and every change. Nor do they memorize the individual changes. Rather, the practice relies on the ringers internalizing the patterns of the method, perhaps by looking at notation that shorthands the whole method, showing only the key moments at which the permutations change course in order to exhaust all the possible orders. Ringers know principally by doing: they anticipate when two bells will have to swap places in the following round, and they feel their way as a group through the ringing of all the orders of the rows. Change ringing’s linguistic potential may have been exploited by Stedman and Mundy, but in the bell-tower it is a sweaty, communal, and profoundly corporeal activity.
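The “all the possible orders” Hunt describes is permutation generation by adjacent swaps. The simplest scheme, plain hunt, is easy to sketch; this little illustration is mine, not from the article:

```javascript
// Plain hunt: each change swaps adjacent pairs of bells, alternating
// which pairs swap. On n bells the sequence returns to "rounds"
// (1 2 ... n) after 2n changes.
function plainHunt(n) {
  let row = Array.from({ length: n }, (_, i) => i + 1); // rounds
  const rows = [row.slice()];
  for (let change = 0; change < 2 * n; change++) {
    const start = change % 2 === 0 ? 0 : 1; // alternate swap positions
    row = row.slice();
    for (let i = start; i + 1 < n; i += 2) {
      [row[i], row[i + 1]] = [row[i + 1], row[i]];
    }
    rows.push(row.slice());
  }
  return rows;
}
```

On three bells this visits all 3! = 6 rows; on more bells, plain hunt only touches 2n of the n! possible rows, which is part of why full methods and multi-hour peals get so much more elaborate.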

That reliance on communality reminds me of many Musarc performances, though those are of course much more contemporary and experimental (and choral, not bells!).

Anyways, clearly there is something very attractive about this to me… The trouble is the meeting lengths and frequency; it would be really tough to get involved at this point in my life. Maybe something for when I’m 50+.

***

Side note: I was about to post a link to Outhwaites of Hawes, a traditional ropemaking business that started before 1840. The building is their workshop and also effectively houses a museum. It was lovely to walk through there and see the rope being made, including the incredible ropes required for change ringing. But sadly, it looks like they closed almost exactly a year ago.


Some thoughts about making a Donald Judd-esque table

Most of the NYC crew from the Eames Institute took a little field trip to 101 Spring Street yesterday. There was a lot I found beautiful, and a few things that gave me pause.

But one of the things I most enjoyed inspecting was Donald Judd’s big 14-seater whitewood table in the kitchen / dining space on the second floor. Clearly well-loved, and slightly more rough-and-ready than some of his other furniture. It was good fun to have a close look at the dining chairs too, though I’m more interested in the form there. They don’t look too comfortable.

This is a very broad overview of some points to consider if I ever want to make a Judd-esque table.



Color contrast tools to check against APCA

EL introduced me to contrast.tools recently. It uses the Accessible Perceptual Contrast Algorithm (APCA) to check the accessibility of your text based on the desired colors and the font weight + size. But importantly, it also provides a lookup table to verify how you should (possibly, probably) interpret things.

I think that APCA is being floated as the new contrast algorithm for WCAG 3.0? But I’d need to look into it more to be sure. Apparently the APCA Readability Criterion (ARC) might become a new standard for visual contrast.
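Out of curiosity I tried to follow the math. This is a sketch of the APCA contrast calculation as I understand the published 0.0.98G constants; treat contrast.tools and the official apca-w3 package as the source of truth, not this:

```javascript
// Relative luminance from 8-bit sRGB, per the APCA coefficients.
function srgbToY([r, g, b]) {
  const lin = (v) => (v / 255) ** 2.4;
  return 0.2126729 * lin(r) + 0.7151522 * lin(g) + 0.072175 * lin(b);
}

// Lightness contrast Lc between text and background colors.
function apcaContrast(textRgb, bgRgb) {
  // Soft clamp very dark values toward black.
  const clamp = (y) => (y < 0.022 ? y + (0.022 - y) ** 1.414 : y);
  const yTxt = clamp(srgbToY(textRgb));
  const yBg = clamp(srgbToY(bgRgb));
  let sapc;
  if (yBg > yTxt) {
    // dark text on a light background (positive polarity)
    sapc = (yBg ** 0.56 - yTxt ** 0.57) * 1.14;
    sapc = sapc < 0.1 ? 0 : sapc - 0.027;
  } else {
    // light text on a dark background (negative polarity)
    sapc = (yBg ** 0.65 - yTxt ** 0.62) * 1.14;
    sapc = sapc > -0.1 ? 0 : sapc + 0.027;
  }
  return sapc * 100; // Lc, roughly -108 to 106
}
```

Black text on a white background lands around Lc 106 here, and roughly −108 reversed, which is close to the reference values I’ve seen quoted.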

Side note: I kind of wish we could get away from acronyms-within-acronyms-within-acronyms in the accessibility standards world…