Reading List #52

Hello

This week was kind of special, as I was alone with the kids for four days for the first time. While this was challenging and exhausting at times, we also had a great time and enjoyed it very much. My work week therefore felt even shorter than usual, but it was still pretty productive.

We are in the middle of migrating all of our Git repositories from Bitbucket to GitHub, which I very much enjoy using. While doing so, we are also working through a huge backlog of ideas for picu – some of them years old – and turning them into GitHub Issues. That is a lot of work, but it feels way more actionable and closer to the code than the dusty corners of Asana, where they lived before.

On to the (few) things I’ve been reading / watching this past week.

Frontend Development

⚖️ Typographic widows on the web

Richard Rutter shows how we will soon be able to put an end to typographic widows on the web by using text-wrap: balance; in CSS – at least for titles, captions and any other text that runs no more than a few lines.

Clagnut – An end to typographic widows on the web
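
For reference, a minimal sketch of what that could look like in a stylesheet – the selectors are just placeholder assumptions, the relevant part is the single text-wrap declaration:

/* Balance line lengths in headings and captions so no single word
   is left dangling on its own line. */
h1,
h2,
figcaption {
  text-wrap: balance;
}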


WordPress

📦 10up Block Components

I’m still taking baby steps when it comes to building custom blocks, and I don’t feel at home yet with all the build steps and React code involved. This Block Components library by 10up sounds like a good thing to bookmark.

GitHub – 10up Block Components

🔎 WP Plugin: Wayfinder – Easily select nested blocks

This plugin lets you display block titles, classes and outlines on hover. Looking at it, my first reaction was about the same as Jamie Maddens’: “Why isn’t this part of core Gutenberg?”

WP.org Plugins – Wayfinder


Other

🤖 Max Tegmark: The Case for Halting AI Development

The AI hype is going strong at the moment, and news of new tools and developments arrives daily. As interesting and impressive as those tools and techniques can be, there are also some real dangers inherent in their development. Max Tegmark is a cosmologist and machine learning researcher at MIT and the president of the Future of Life Institute, so he knows a thing or two about all this. The institute published an open letter that you might have heard about, with the goal of pausing the training of models more powerful than GPT-4 for six months, which thousands of people, including some big-name scientists and CEOs, signed. I really liked watching Lex Fridman’s interview with him, where he shared his thoughts on AI safety, the concept of “Moloch” and what he thinks would be the right steps forward. I highly recommend watching this if you are interested in AI and its possible implications.

Lex Fridman Podcast #371 – Max Tegmark


✌️

Made with ❤️ in Switzerland