Arq restore notes

About a month ago, my personal laptop died, requiring the logic board to be replaced. All the data on the previous incarnation of the machine was irrecoverably lost, requiring me to restore from backup. Unless my memory deceives me, this was the second time I've restored a backup made with Arq1.

This time, things didn't go quite as smoothly, but overall, it still worked out ok. I'm going to put some notes here about the road bumps that I ran into while restoring, for reference by my Future Self™. Given that "past road bumps are no guarantee of future road bumps", I'm dumping this in a blog article rather than a wiki post, as I consider these to be more "moment in time" observations than an enduring record of what to do in the future.

A SuperDuper! backup is a full disk copy, and pretty close to being something that you can just restore and then run with. Arq, on the other hand, only backs up my home folder, which means I need to reinstall a bunch of apps after doing the restore. Additionally, there appear to be quite a few settings (particularly permission-related settings) that don't survive across a backup/restore cycle, and these need to be set up manually afterwards.

First steps:

  • Install Arq, so I can initiate the restore.
  • Install 1Password, so I can get the credentials necessary for the restore.
  • Actually start the restore; after many hours, Arq used all the memory on the system, requiring me to start again2.
  • Sign in to iCloud, because apparently thanks to Apple "magic" none of my photos were actually on my device, or at least, they couldn't be restored by Arq; after signing in, photos started syncing and I eventually got access to them all again.

Next up, set the hostname, because otherwise Fig won't know what machine it is running on, and my machine will only get a lowest-common-denominator set-up as opposed to the specific one I want for this machine:

scutil --set HostName latina
scutil --set ComputerName 'MacBook Pro'
scutil --set LocalHostName latina

Next, running git in the terminal triggered a command line tools download and install, after which I could finally run install under ~/code/wincent/. That died with:

$HOME/code/wincent/vendor/n/bin/n: fork: Resource temporarily unavailable

Running bin/n --cleanup is supposed to delete all versions, but I ended up having to rm -r vendor/node/n/versions to actually get n working, at which point I could run install again.
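
As an aside, the "fork: Resource temporarily unavailable" error generally means the per-user process limit was exhausted; the current soft limit is visible via the ulimit builtin (on macOS, the system-wide ceilings are the kern.maxproc and kern.maxprocperuid sysctls):

```shell
# Print the maximum number of simultaneous user processes for this
# shell; fork() starts failing with EAGAIN once this is exhausted.
ulimit -u
```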

You have to run vendor/homebrew/install.sh in order for this to work, and additionally you have to open a new window after installing in order to have the Homebrew directory appear in your $PATH (merely running hash -r isn't going to cut it).
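
One workaround worth noting: the line that Homebrew's installer suggests appending to ~/.zprofile can also be eval'd directly in the current shell to pick up the Homebrew directory without opening a new window (this assumes the default Apple Silicon prefix of /opt/homebrew):

```shell
# Puts Homebrew's bin directory (and related variables) into the
# current session's environment:
eval "$(/opt/homebrew/bin/brew shellenv)"
```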

Homebrew helpfully points out the following:

stderr: 'Error: homebrew/bundle was deprecated. This tap is now empty and all its contents were either deleted or migrated.\n',

So I deleted that, and it then said:

'Error: No available formula with the name "bun". Did you mean bup, buf or run?\n'

Turns out that you need fully-qualified (tap + brew) formula and cask names, as noted in Homebrew/brew#21416, so I fixed that.
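
For illustration, the fully-qualified form in a Brewfile looks something like this (assuming bun comes from the oven-sh/bun tap, which is where Bun's official formula lives):

```ruby
# Brewfile: qualify tap formulae as "user/tap/name"
tap "oven-sh/bun"
brew "oven-sh/bun/bun"   # previously just: brew "bun"
```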

There are a number of apps that you have to open or twiddle in order to get things working, even though Homebrew installs them:

  • Karabiner-Elements (have to open it and grant privileges).
  • Hammerspoon (have to open once, let it prompt for permission, set to load at login, hit Ctrl-Opt-Command-F2 to set two-monitor layout and F4 to set horizontal display arrangement).
  • Raycast (somehow it couldn't read its settings, so I had to import a backup, set Command-Space in the System Settings to not open Spotlight, grant accessibility access for snippets to work, and the same for 1Password while I was at it, and set it to launch at login).
  • Before using tmux, had to go to System Settings -> Keyboard -> Shortcuts -> Input Sources and turn off these trouble-makers:
    • Select the previous input source (ctrl-space)
    • Select next source in input menu (ctrl-option-space)
  • Kitty (have to open it and set it to stay in the Dock)
  • Godspeed (open it, set it to stay in dock, launch at login)
  • terminal-notifier: run a command like terminal-notifier -title hi -message there so it appears in System Settings; then you can allow notifications from it.
  • Resilio Sync (set to launch at login)
  • Orion (installed via cask, but have to open and set to default browser in System Settings)
  • CleanShot X (open and set to start at login, grant accessibility settings, turn off conflicting shortcuts in System Settings)
  • iStat Menus (launch and grant permissions)

And general house-keeping:

  • Remove cruft from dock.
  • Dock settings: turn hiding on (I thought I had a default for that; maybe all I needed was to log out and in again for it to take effect)
  • System settings: under the lock screen settings, require password immediately.

For some reason, I couldn't get ScreenFlow to make use of my microphone (it would claim I needed to grant it access to the microphone, but it didn't actually ask for access, and there was no way to force it in System Settings). In the end, I blew away the settings and data, after which it finally prompted:

rm -rf \
  ~/Library/Preferences/net.telestream.screenflow10.plist \
  ~/Library/Preferences/WSG985FR47.net.telestream.screenflowhelper.plist \
  ~/Library/Containers/net.telestream.screenflow10 \
  ~/Library/Group\ Containers/WSG985FR47.net.telestream.screenflow10 \
  ~/Library/Application\ Support/ScreenFlow*

The above is what I've discovered so far over the course of several days. Hopefully that's the end of it!

  1. The other backup tool that has saved my hide in the past is SuperDuper!, but on this occasion I didn't have access to my physical (SuperDuper!) backup, so restoring from the cloud (Arq) was my only option.

  2. Restarting is a bit annoying, because I use Glacier storage for my backups, meaning that you can't just start downloading the data from the cloud; instead, you request that it be made available and then wait 5 hours before actually beginning the download. Downloading from Glacier also hurts the wallet a bit, to the tune of about a hundred bucks for all the retrieval costs associated with the repeated attempts.

Dreamscape

tl;dr:

  • The bizarre otherworldliness of dreams makes them seem foreign, almost as though they came from some weird place "out there", not of our own making.
  • This in turn makes recurring dreams puzzling, especially when they are separated by long intervals (sometimes years); if dreams come from "out there", where the heck is that and why can it store these recurring patterns and places stably over time?
  • These false intuitions are dispelled once we realize that our brains are all about storing patterns, and the same mechanisms that allow us to form waking memories and mental models are the ones that provide us with a stable pool of patterns from which we build recurring dreams.
  • This simple fact is occluded and obscured by the apparent forgetting that happens on waking.
  • This all seems bleeding obvious in retrospect, and it makes me wonder why I even needed to write it down.

Over a period of many years now I've had a series of recurring dreams, or at least, recurring themes within dreams. When you're in the dream world, it seems detailed and real, yet at the same time unreal or surreal because of the way in which improbable or impossible things occur. You find places and people morphing from one into another in a way that seems to escape your notice in the moment while still registering enough for you to remark on it later. The laws of physics are defied. Rules of causality are suspended. Events are reordered and incompatible facts are juxtaposed. These bizarre ensembles of characters, interactions, and locations are so unexpected, so novel — even if they are stitched out of a patchwork of people and things you know or can imagine — that it is almost like they're being delivered to you from the outside, by a Christopher Nolan-esque cinematic auteur of unbounded and inimitable creativity.

The thing that has struck me about these recurrences is just how complicated the dream world seems to be. Your brain appears to synthesize these fantastical locations, not just randomly — like a procedural generator would create terrain in an open-world video game — but in a way that has permanence, because you can find yourself back in those same places months or years later.

At first this ability to recall these impossible, intricate places puzzled me, but I have a theory now about what makes them come back. I'm not going to get into how dreams get put together, because I honestly have no idea, but I want to explain how it is that these recurring patterns are more or less durably stored and made available for reuse over long periods.

So, let's start from the basics. How do brains work? Let me give you my horribly imprecise understanding of this, hopefully one that is just vague enough to be compatible with how things actually work in there. Brains are massive networks of interconnected neurons. Synapses fire. Signals are transmitted between neurons. There are activation thresholds that dictate whether signals get through. Importantly for the purposes of this discussion, brains are not merely deciding machines that control the systems and actions in the organism that they inhabit; they are dynamic, evolving, self-modifying receptors and recorders of information and patterns. That's how memories are made: the act of "recording" a memory is in some sense a rewiring of the brain in such a way as to capture a pattern and make it available for later retrieval.

There is a curious difference between memory and memorization. We form memories effortlessly all the time as experiences wash over us. In contrast, memorization is an intentional, effortful act aimed at creating a memory for later recall. Perhaps frustratingly, this latter activity can be quite hard. But the automatic formation of memories is as easy as breathing; in fact, it's something we can't help but do. This is not to say that our memory is infallible. Details may escape us, memories may fade (become harder or impossible to access), facts may be switched. But in the absence of dementia or other pathologies, the brain is a marvelously flexible, capacious, and impressively reliable self-updating reservoir of information.

A particular kind of memory is the "mental model". We typically use that term to describe things that are a little more abstract than a vanilla memory. For example, we might refer to a mental model of "how the economy works", or "what Alice thinks about Bob". But we also have spatial models of the environment around us — the neighborhood we live in, for example — and these are much more akin to memories, in the sense that we acquire them and build them up automatically, without even intending to, by the mere fact of experiencing the environment around us.

And this memory formation, this rewiring of the brain, becomes more and more accurate, detailed, and durable as we repeatedly move through that environment, in a sense "wearing in" the connections in a way that makes them stronger and more complete. (As an aside, this is the power of positive thinking too — and the harm of negative thought patterns — because when we repeatedly activate the same neural pathways we "burn them in" in a way that makes it all the more likely that we'll fall into following the same pathways again in the future.) After a while, your mental model of the area you live in becomes so detailed and comprehensive that you can name and visualize countless details about it, large and small, involving distances, textures, smells, relationships, and all manner of patterns and symbols.

I think that this is what's happening when you visit places in dreams. Your brain is conjuring up these fantastic places, making a pastiche of experiences, patterns, places, and symbols — whether lived, or perhaps experienced in the fictional worlds of movies, TV shows, video games, comics, and magazines — by some mechanism that I haven't even begun to understand. But regardless of the source of this material and its combinations, it winds up leaving an impression on your brain. It leaves a mark, not quite the same as a waking memory, but a nearby cousin to it. Something is recorded in your brain somewhere. Neurons are rewired. Patterns are inscribed. Symbols are persisted.

The fact that we usually have trouble remembering our dreams when we wake up obscures all the pattern-recording that's actually going on behind the scenes. I don't know the mechanism for this amnesia-on-waking either, but I feel sure that it is only a surface-level illusion. The reason we revisit places and relive patterns in recurring dreams is that we accumulate a corpus of stored material, a set of "mental models", in our brains that's very close to the kinds of structures we use to store other memories. The dreamscape only seems otherworldly; actually, it's made of the same stuff from which we build our internal representations of the real world. And now that I've written all that out, I'm rather embarrassed that it took me this long to figure out.

Thoughts on AI — 2026 edition

It's been a couple of years since I wrote about AI1 and things have been changing fast. In 2024, AI had started to impinge on my day-to-day work, but 2025 felt like the year in which it changed things dramatically. While I would still characterize a lot of my work with AI as consisting of "arguing with LLMs", there were definitely times when they produced results of acceptable quality after a few rounds of revision, feedback, and adjustment, and the net time and effort required felt like a relative win compared to doing it all myself.

New tools

The biggest change came in the form of the arrival of Claude Code. Instead2 of just chatting with an LLM (mostly using my Shellbot fork), I could now delegate to it as an agent without having to abandon my beloved $EDITOR3. What began as experimentation (figuring out what this thing can do) has since turned into an integral part of my workflow: even for changes I could very quickly carry out myself, I will instead turn to Claude and ask it to make the change, unless it is utterly trivial (ie. the threshold for cutting over to Claude is the point at which it can manipulate the text faster than I can manually).

New capabilities

2025 brought customization mechanisms, Model Context Protocol4, subagents, skills, and custom slash commands among other things. From my point of view, these all have the same goal, namely, equipping agents with:

  1. Specialized knowledge that enables them to obtain the information they need; and:
  2. The means to carry out necessary actions in service of an objective; while simultaneously:
  3. Not overflowing the context window with garbage which obscures things and prevents the agent from producing a correct result.

Collectively, these are probably more important and useful than improvements to the models themselves. Speaking of which...

New models

2025 brought model sycophancy to the forefront, and Claude was no exception. Around mid-year, Claude's "You're absolutely right!" was ringing in the ears of users across the world in an almost continuous chorus. Thankfully, it seems to have subsided a bit now.

I didn't follow the whole model benchmarking question very closely, and am in general only interested in how well the models improve my experience in my daily work. Overall, subjectively, I'd say that the models improved significantly over the last year, but as I said before, I believe that it's the tooling around the models that had the greater impact.

Use cases

In my last post, I said that LLMs were good for "low-stakes stuff like Bash and Zsh scripts for local development", "React components", "Dream interpretation", and "Writing tests". In 2025, I used them for a lot more than that. I used them for fixing bugs, adding features, working across multiple languages and services, and for explaining foreign codebases to me.

Where they shine:

  • Places where there are a lot of guard rails in place to provide them with clear feedback about success (eg. working in a strongly typed language like Rust, or in an environment where you can perform automated verification in the form of linting or tests).
  • Places where you may not even know where to start, but where their ability to quickly search large corpora and repos can rapidly turn up leads for you to follow.

Places where they still leave much to be desired:

  • Things where the non-determinism of their output means that you can't trust the quality of their results. For example, say you have a change that you want to make uniformly across a few hundred call sites in a repo. Your first instinct might be to say, "This is a repetitious change, one that should be amenable to automation, and if the LLM can be given clear instructions that allow it to do it correctly in one place, then it should be able to do it quickly and correctly in all of them". Sadly, this could not be further from the truth. LLMs are inherently non-deterministic, and that means there's always a random chance that they'll do something different on the 19th, 77th or 82nd time. You will have to check every single modification they make, and you may be far better off getting the LLM to create a separate, deterministic tool to carry out the work. And if you want to throw caution to the wind and have the LLM make all the changes for you anyway, you're probably better off firing off the agent in a loop, with a clean context for every iteration and a clearly stated mechanism for verifying the correctness of the change, than expecting a single agent to carry out any significant amount of work serially.
  • Anything that can't be trivially described with a minimum of context. This is a conclusion that I've recently come to. In the past, I thought that bigger context windows would endow the models with the ability to solve fuzzier problems, the kinds that humans are particularly good at (with their ability to take into account disparate sources of information scattered across time and place). But my experience with even relatively small amounts of context (ie. far less than 200K tokens) is that models can easily "overlook" salient information that's "buried" in it, even when it's not that large. Failure modes include things like telling the model to look at a series of commits, and then observing how it "forgets" something critical in the first of the series; it proposes a change that looks like it only attended to the most recent tokens in its context window, and often ends up contradicting itself, or reimplementing a decision that it previously reverted. My suspicion is that when we have models with 10-million-token context windows, we'll still get the best results when we distill everything we want them to "know" down into the first few thousand tokens.
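
The "loop with a clean context per iteration" idea from the first point above can be sketched roughly as follows; run_agent and verify are placeholders (a real version would invoke your agent CLI with instructions, and run your actual tests or linter):

```shell
#!/bin/sh
# One fresh agent invocation per call site, each followed by a
# deterministic verification step; both functions below are stand-ins.
run_agent() { printf 'patched %s\n' "$1" > "$1"; }  # simulate the agent's edit
verify()    { grep -q '^patched' "$1"; }            # simulate tests/linting

for site in a.txt b.txt c.txt; do
  run_agent "$site"   # fresh invocation == clean context window
  verify "$site" || { echo "verification failed: $site" >&2; exit 1; }
done
echo "all call sites verified"
```

The important part is the shape: a per-item invocation plus a mechanical check, rather than one long-running agent accumulating context across hundreds of edits.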

Job security

In 2024 I said that I wasn't worried about AI taking my job in the near term, but that things could change quickly, and I advised to "judiciously use AI to get your job done faster". In 2026, AI has clearly gotten to the point where it is making real waves in tech workplaces. Not only is AI making it possible for people to ship more code faster than before, there is also considerable business pressure to make use of it in the name of maximizing productivity. Unfortunately, the signal here is very noisy: our corporate overlords can mandate the use of these tools and monitor their use, but I don't think we have reliable evidence yet on how much of this is unalloyed value, and how much of it is technical debt, latent regressions, and noise masquerading as productivity.

Now more than ever it seems important to not only use the machines to deliver useful work, but also to focus on the places where I as a human can still deliver value where a mere next-token-predictor cannot. The pressure on both of those fronts is only going to increase. I'd say that my feeling of "precariousness" is quite a bit stronger now than it was two years ago, and I'm not looking forward to seeing that trend continue, although I feel that it surely must.

In terms of job satisfaction, I've observed an inverse correlation: the more my job consists of me prompting the AI to do things for me, the less intrinsically satisfied I feel. This was one of the reasons why I had so looked forward to Advent of Code in December; I was itching to do some significant work with my own two hands. I look now towards the future with some dread, but also with a determination to not go gentle into that good night: no matter what happens, I want to commit to finding things to take authentic pride in beyond "how I got a swarm of agents running in parallel to implement some set of inscrutable artifacts".

The impact on the world more generally

So far, I've been talking about how AI has affected my job. But "Gen AI", in particular, is having the expected effects on the wider world. Deep fakes, AI slop, and bot activity more generally are flooding YouTube, Twitter, and anywhere content is shared5. It seems that we're already well on the way into a "post-truth" world, where our ability to distinguish fact from falsehood has been devastatingly damaged, with no prospect of putting the genie back in the bottle, given the inevitably increasing capabilities of AI systems to produce this stuff at ever higher levels of quality.

One can hold out, clinging to reliable information sources, but in the end it seems unavoidable that these will be islands of truth surrounded by oceans of worthless, endlessly self-referential fabrication. I shudder to imagine what this looks like when you fast-forward a hundred years, or even ten...

  1. I wrote that piece in March 2024, so just a couple months shy of two years ago, to be precise.

  2. Maybe I should say "as well as" rather than "instead" because I still do chat with the LLM a fair bit when I want to ask it general questions about something in the world; but when doing almost anything related to coding, I almost exclusively do that via Claude Code.

  3. Technically, I am "abandoning" it in the sense of switching focus to another tmux pane, but Neovim continues running and I can dip in and out of it whenever I want.

  4. MCP nominally arrived in 2024, but as it required folks to actually build MCP servers, I think it's fair to say that it "arrived" in a tangible way in 2025.

  5. I almost wrote "where humans share content", but that's already appallingly misleading.
