Advent of Code 2025
This year, Advent of Code ("AoC") was only 12 days instead of 25. Last year I solved all 25 days (both parts) and that felt pretty good. This year, I also solved both parts of all the days, but given the overall puzzle count was lower, I’m not finishing with quite the same sense of achievement as last time.
I have mixed feelings about that. I’m simultaneously grateful that this year wasn’t a gruelling ordeal and disappointed that I didn’t feel stretched by it. Given how important AI has become in my working life as a software engineer, I had been looking forward to AoC as something that would force me to use my brain to create code solutions instead of using it to engineer prompts.
In a "normal" AoC year, the puzzles on the first days are relatively trivial, and the difficulty slowly ramps up from there. Around the middle of the month, you might find a day that is unexpectedly difficult or unexpectedly easy, but the general progression is clear. Towards the end of the month there are generally some "epic" problems that feel really satisfying to solve.
This year, folks were wondering what would happen given the shorter schedule. Would the difficulty ramp up more quickly? Would the final days be just as hard as the final days of prior years?
It’s really hard to give an objective answer to that, so I’ll give a subjective one instead. My impression is that no, the final days weren’t as hard. One of the reasons the 25-day schedule feels so tough is precisely because it’s like a marathon: if you’re going to beat 25 two-part puzzles of increasing difficulty, you have to bring a sustained level of intensity and commitment to carry you through to the end. With a mere 12-day schedule, it’s more like a sprint.
This year, there were two or three days that I found to be disappointing:
- On day 9, I felt pretty good about my part 1 solution, which used bitwise operations to great effect (the code was simple and the solution ran fast), but part 2 ended up being too hard (for me) to solve without using a linear programming library…

I initially represented each problem as a multivariate linear Diophantine equation and was able to solve the simple cases (eg. `ax + by = n`) using the Extended Euclidean Algorithm, and then I started extending it to trivariate and larger cases (eg. `ax + by + cz + … = n`) before discovering that, while I could solve the equation, minimizing the coefficients was going to require a bunch of linear programming tricks that were beyond my reach. You’re supposed to solve the puzzle relatively quickly (ideally, the same day it is published), and coding up and debugging the kind of linear solver I would need would have taken days, or maybe even weeks. So, I reached for an existing library and used it to find minimal solutions to a system of linear equations.
- Day 11 (part 2) felt a bit unsatisfying in a different way. It was a graph traversal problem where a naïve DFS approach would involve some ungodly number of operations. You had to count the number of paths in a Directed Acyclic Graph starting from a given node, passing through two other specified nodes — call them `a` and `b` — in any order, and ending at another.

As I said, a simple DFS searching from `start` to `finish` would take forever, so I tried it out on some of the subproblems: `start` to `a`; `a` to `b`; `b` to `a`; `a` to `finish`; and `b` to `finish`. I also reversed the edges in the graph on the chance that finding the paths in the backwards direction was any easier (it was). Armed with this information it was clear: there were no paths from `b` to `a`, so all I had to worry about was solving the three subproblems: `start` to `a`, `a` to `b`, and `b` to `finish`. Solving any of the equivalent problems in the reversed graph would be valid too (ie. `a` to `start`, `b` to `a`, or `finish` to `b`). Once I had the path counts for each of the three segments, I just had to multiply them together.

So, I set about solving the subproblems, and when I found one of them stubbornly resisting my DFS, my first thought was that switching to a BFS might avoid a lot of time spent going deep into unproductive pathways. As a shortcut, I figured I would test this idea first by coding a depth limit into my DFS, before going to the trouble of writing a BFS. And as I progressively increased the depth, the answer stabilized, eventually settling on just over 7.5 million paths for that last segment I needed to solve.
Multiplying all this out gave me the right answer of 553,204,221,431,080 paths. Part of me felt like I should have been happy to narrow it down from "zillions" to a correct answer of some 553 trillion paths, but it didn’t feel like that at all. It felt more like I’d carried out a computer-assisted calculation by hand instead of constructing a general algorithm to solve the problem. I think if I’d written it as a BFS in the first place instead of a DFS, I would have felt better, although I still would have had to hard-code a semi-arbitrary depth limit in there somewhere…
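For what it’s worth, the segment-multiplication idea generalizes without any depth limit: in a DAG, the number of paths from a node to a target is the sum of the counts from its successors, which a memoized DFS computes in linear time. A sketch on a made-up toy graph (the node names echo the ones above, but the edges are invented):

```python
from functools import lru_cache

# Hypothetical toy DAG standing in for the much larger puzzle graph;
# the edges are invented for illustration.
edges: dict[str, list[str]] = {
    "start": ["a", "p"],
    "p": ["a"],
    "a": ["b", "q"],
    "q": ["b"],
    "b": ["finish"],
    "finish": [],
}

@lru_cache(maxsize=None)
def count_paths(u: str, target: str) -> int:
    """Count distinct paths from u to target in a DAG via memoized DFS."""
    if u == target:
        return 1
    return sum(count_paths(v, target) for v in edges.get(u, []))

# With no paths from b back to a, every start-to-finish path through both
# nodes factors into three independent segments, so the counts multiply:
total = (
    count_paths("start", "a")
    * count_paths("a", "b")
    * count_paths("b", "finish")
)
assert total == 4  # (2 ways to a) * (2 ways to b) * (1 way to finish)
```

The memoization is what makes this tractable: each (node, target) pair is computed once, so the exponential blow-up of the naïve search never happens.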
- Day 12 was much worse. On the surface, the puzzle looked like a set of large combinatorial problems with exponential complexity, any one of which would have made a brute-force approach unfathomably expensive. I jotted down a bunch of heuristics for pruning the search space, but none of them promised to be remotely enough. I decided some research was in order. I skimmed a bunch of articles on packing irregular objects and came to the conclusion that my best bet was going to be simulated annealing; partly because it’s a good tool for approximating a global optimum even in an enormous search space, and partly because it’s easy to understand and I’ve used it before. There were 1000 problems in the puzzle input, but I didn’t even need to find global optima for them all; I just had to find packing configurations that were good enough to get the job done. Surely (subtle foreshadowing), many or most of the problems would be solvable using this approach.
All of that without writing any code. When I finally sat down to write some, I was delighted to see the simulated annealing algorithm find solutions good enough to pack the sample problems. I ran it on the actual puzzle input and was even more delighted to see it work on the vastly larger problem set too.
But I felt uneasy. Surely, it shouldn’t have been that easy, should it? I dropped into the channel at work and saw that only one person had shared a solution. I looked at it, wondering if they’d used something stochastic like I did, and was appalled to see that they didn’t do any packing at all. Yes, that’s right: the puzzle input doesn’t actually require you to rotate or fit shapes together at all; rather, you can count the number of shapes, compute how much space they’d need if they were all 3-by-3 squares, and see if the provided space is enough.
Good lord. One of my shower thoughts based on eye-balling the puzzle input had been that, of the thousand problems, there was probably some subset that was trivially and obviously packable into its corresponding space, and another subset that was trivially and obviously not packable. Maybe I’d be able to eliminate a few hundred of the former kind and a few hundred more of the latter, leaving me only having to solve the remaining few hundred that were on the borderline.
Well, it turns out that all the problems could be trivially classified and counted like that, and there was no need for my fancy simulated annealing algorithm at all. I kind of felt deceived, because the sample input wasn’t trivially classifiable in that way. My initial satisfaction at coding up the algorithm and seeing it work was replaced with a feeling of regret for not having followed up on my shower thought sooner.
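The trivial classification amounts to a single area comparison: nine cells per shape versus the cells available. A minimal sketch, assuming a simplified problem representation of my own (the real puzzle format is richer than a shape count and a rectangle):

```python
# Sketch of the trivial check described above: treat every shape as if it
# filled a 3x3 bounding box and compare the total against the region's area.
# The function signature is my own simplification of the puzzle format.

def trivially_fits(num_shapes: int, width: int, height: int) -> bool:
    """True if the shapes fit even under the worst-case 3x3 assumption."""
    return num_shapes * 3 * 3 <= width * height

# A 5x10 region (50 cells) holds five 3x3 shapes (45 cells) but not six (54).
assert trivially_fits(5, 5, 10)
assert not trivially_fits(6, 5, 10)
```

No annealing, no rotation, no packing at all; just a multiply and a comparison per problem, which is precisely what made the stochastic machinery feel like overkill in hindsight.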