That was 2025
January 01, 2026

2025: hard year, good year.
Work was… pretty crazy. Too much to go into here, but at Google I/O my team launched Veo 3 and a product (built with our friends in Google Labs) around it, Flow; later in the year Nano Banana, our latest and greatest image model, shipped (then its Pro variant); and around these we opened up the Music AI Sandbox (a suite of tools for pro musicians) and put out some interesting APIs, open source music experiments, and a model. We also started working more closely with professional filmmakers, premiering ANCESTRA, the first of three short films we’re making with Darren Aronofsky, at Tribeca. Generative AI is an increasingly contentious space, generative media also, and I’m proud of both the progress we’ve made, and the contrasts we’ve drawn between our work and that of our competitors. I look at ANCESTRA, or the delightful animations of Christian Haas, or SuperHero Sam (a couple of Sunday mornings with B), and I’m excited by the clay that the creators of today and tomorrow will have to work with.
All this definitely stretched me in new directions, and naturally that wasn’t always comfortable. It was the hardest year at Google so far - the pace in AI is intense, and in most ways I like that, but at points I noticed sleep and health start to fray in ways they hadn’t since the harder days of running Future Platforms, nearly 20 years ago. I took a month off in November to decompress a little before moving to a new role - still at DeepMind, still around generative media.
When time allows I’ve been noodling with some ideas - fairly abstract, noncommercial stuff around creating a robot which exhibits curiosity, and using this as a jumping-off point to explore simpler forms of intelligence, inspired by what I took from A Brief History of Intelligence last year. My GitHub history tells the story of when I’ve managed to find time for that.
It was a solid year for karate. I didn’t train as frequently as I’d like - averaging 1-2 classes a week - though this average is thrown off by travel (between work travel and a summer in the UK, I was out of town for about 3 months of the year). I had another fun weekend in Petaluma with Rick Hotton, whose teaching style I continue to enjoy and whose body mechanics I continue to marvel at. At the end of October I traveled to Okinawa with a group from our dojo, and retook my shodan grading (see 2023), this time passing - to much relief. I’m still helping with the Saturday morning kids classes (which I continue to find extremely rewarding), and B has continued to train, herself moving up the ranks and towards being one of the more senior kids, including a couple of gradings this year and taking part in our annual demo at the Cherry Blossom Festival - #prouddad of course.
Work took me to New York and Los Angeles frequently, London and Zurich a little. Kate and I spent 6 weeks of the summer camped out just off Worthing sea front, giving B some grandparent time. In retrospect it was too long - I spent the weekdays in London, but even there, being 8 hours distant from most of my colleagues for so long didn’t work so well - next year I’ll make a much shorter trip myself. Kate and I made it out to Iceland for a few days while in Europe, which was phenomenal - in particular the boat tour of the glacier lagoon was an experience I’m carrying with me.
Outside of these trips, I spent quite a few odd days hiking around Marin and Point Reyes (which I have inwardly declared to be My Happy Place), continuing to gasp at the beauty of the Californian countryside, and made it up to Tahoe for a few weekends of skiing with B and friends.
We enjoyed a few sets of visitors this year, including an old friend from university days having her first trip to the USA - which gave me an excuse to wander around this beautiful city, and chance to see it all a little fresher.
I read a little less this year personally, but managed to keep The San Francisco Cognitive Science Reading Group alive. We met 8 times this year, mostly discussing books, each typically split over a couple of sessions. This was OK, but next year I want us to get more into papers, for more breadth and depth.
- Active Inference (Parr, Pezzulo and Friston) - explaining Karl Friston’s ideas uniting sensing and action. Friston has kept coming up in other readings, and we’ve enjoyed grappling with him.
- The Self Evidencing Brain
- The Book of Minds (Jakob Hohwy) - mostly a literature review, notably for kicking off a discussion of “can crabs play chess?”
- Where Minds Come From (Michael Levin) - a video rather than a book, but we wanted to dip into Levin’s work after he came up in Book of Minds
- Biological Underpinnings for lifelong learning machines (Kudithipudi et al) - read for the second time, because neither I nor other members of the group noticed we’d done it before… perhaps a commentary on our own cognitive faculties. That said, comparing my notes, it’s striking how different the details I picked out were this time around.
- What Is Intelligence? (Blaise Aguera y Arcas) - by popular vote (though I spent 5 years working in Blaise’s research group a while back, so would’ve read it anyway). Covers a lot of ground; I think I took more from the earlier sections (around his work with artificial life) than the later, more philosophical ones.
We also moved venue at the end of the year to Frontier Towers on Market - somewhere I want to spend more time next year, an enjoyably shabby and enormously energetic co-working space / art + AI + robotics + bio-hacking lab. I’ve attended one other event there - the Wetware meetup - and had a couple of tours. Biology is not my thing, but the whole place feels special; perhaps I’ve spent too much time in Google offices, though I remember loving the more back-to-basics atmosphere in Android’s Building 44 too… so maybe this is a personal aesthetic.
Personal reading from this year - a bit less than normal, reflecting my working schedule and the tendency towards books at the reading group:
- Don’t sleep, there are snakes (Daniel Everett) - a missionary linguist takes his family to live with the Amazonian Pirahã people, to study their (unusually simple) language, translate the New Testament and convert them to Christianity… losing his own faith (in both Christ and Chomsky) along the way. Really really enjoyed this.
- Service Model (Adrian Tchaikovsky) - amusing dystopian science fiction about a lost and effusively logical robot’s wanderings through a rotting world, after the death of its master.
- Fully Automated Luxury Communism (Aaron Bastani) - I felt bad for not having already read this, but found it mixed. Opens with a set of clichéd attacks on capitalism, goes through an amazing midsection plotting the economic effect of abundance across a few sectors (energy, food, minerals, health), then 180s into a closing admonishment to enact left-wing staples (local credit unions). I’d love to understand more deeply the future they envisage - the technological and political seem quite separate visions. E.g. what are the cooperatives that workers share ownership of, in a world of abundance where prices drop to zero?
- The Genius Myth - enjoyable wander through the presentation, celebration and reality of historical smartypants by the always-insightful Helen Lewis. Good zinger count (“Great discoveries create the conditions for their own under-appreciation”), finishes with a couple of good admonishments (stop thinking of genius as a transferable skill, apply the label to acts not people)
- Shop Class As Soulcraft (Matthew Crawford; the British edition I read carries the much better title The Case For Working With Your Hands) - really well written philosophical tract on our relationship to manual creation and consumerism, by an author who ran a think tank before leaving to work in a motorcycle repair shop, with no regrets. Mashes insecurity-buttons for this carpenter’s son who lives 90% massaging bits not atoms.
- What Art Does (Bette Adriaanse and Brian Eno) - really good, fun, short book exploring what art is, what it is for, and how to engage with it. Read it myself, and have since used it as a bedtime story with B, who enjoyed it enough to keep asking for it.
- Abundance (Ezra Klein, Derek Thompson) - TLDR we’ve made it deliberately hard to do things which require a lot of coordination, for good reasons (climate impact, getting a handle on corruption etc) - but in doing so, have implicitly created a vetocracy and should, to paraphrase Graeber, choose otherwise. I loved this one, though it plays to many of my preconceptions (“let’s science our way out of this”).
- The Great Automatic Grammatizator and other stories (Roald Dahl) - I’ve only read the main story, but it’s astonishingly prescient for 1953, when it was written: a frustrated writer invents a machine which encodes the rules of grammar and plot and can create novels. Themes he lays out are ones we’re grappling with now: can a machine be creative; unequal access to technology; AI slop (in the story, the machine is biased towards mediocrity); safety issues; engagement hacking; even a hint of AI take-off.
- Station Eleven (Emily St. John Mandel) - an aging actor falls dead on-stage during a performance of King Lear, and in the days following, a plague (“Georgian Flu”) kills almost everyone, leaving survivors wandering America or setting up encampments.
I failed to finish How to Do Nothing - the jokes write themselves.
I think 2026 will be about enshrining better habits, whether they be about preserving attention, health, boundaries or sociality.
2024 in the rear-view
January 20, 2025

2024 was mostly a work year. I described work as “exhilarating and exhausting” a year ago, and those words remain a pretty good description. While our group in GDM came together mid-2023 with the first merger of Google Brain and DeepMind, it felt like we started firing on all cylinders this year. We released continuously throughout 2024, but there were a couple of days where many strands of work came together. First was Google I/O in May, where we announced our video model Veo, plus a new (state-of-the-art) Imagen 3, and talked about Music AI Sandbox, a set of AI tools built with and for musicians. Then in December we followed up with Veo 2 (which we believe is the best video model out there) and made it available to a broader set of folks, plus an improved Imagen. I’ve seen some incredible works created using Veo, and I’m excited to see what gets made with it this year.
Between these we put image, video and music generation in the hands of YouTube users and Cloud customers, shipped a novel and quite fun live-music creation tool (MusicFX DJ), and more (notably, our researchers contributed the incredibly realistic dialogs which power the podcasts NotebookLM makes). And of course all this only happens thanks to teams of researchers coming together, gelling, and quietly making the advances which power all this - launches are often the end result of years of effort, and build on a ton of past work.
This was also the year I ended up managing full-time; I’d gathered a few reports by the end of 2023, but my team grew a little, and matured a lot (myself included) through 2024. I bumped up to Director and will spend the next year working out what that means.
Generative media is an interesting space: challenging from many different angles, often controversial, and with a ton of promise - watching film-makers pick up on Veo 2 has been particularly exciting over the last few months. I remain excited and grateful to be working here, on this, now. During the summer I bumped into an old friend, who knew me as a teenager and reminded me that I had been enthusiastic about AI way back then - nearly 40 years ago now - and going through some old papers, I found cuttings from the New Scientist sent to me by my uncle and aunt in 1992.
Most things took a back seat to work this year. In particular Strava’s end-of-year report enthusiastically congratulated me on doing about 2/3 as much exercise in 2024 as in 2023. Karate in particular dropped off a lot, and I didn’t manage to make it back to Japan to retake my shodan - definitely one for 2025. Bright points here were a couple of courses in Petaluma with Rick Hotton, a really interesting teacher (and phenomenal example of body mechanics) who’s cross-trained in karate and aikido, and whose style of teaching reminds me fondly of Tom Helsby from Airenjuku Brighton. I also continue to find Saturday mornings, where I help teach the kids’ class at Zanshin dojo, extremely rewarding.
An old friend, someone I was once very close to, died suddenly and shockingly in April; an extremely sad story, the one bright point of which was reconnecting with a couple of faces from my early 20s.
Another old friend visited us early Summer; Pride and July 4th make fantastic book-ends for a trip to San Francisco.
We spent the rest of summer in Brighton (OK, Worthing) again, near to family and friends. A big group of family gathered in July for Dad’s 80th, which was lovely; and I traveled up to Warwickshire to see a beloved aunt, and thereon to Yorkshire for Mr Burt and another old friend, around beautiful Hebden Bridge.
Album of the year was The Head Hurts but the Heart Knows the Truth by Headache - reminiscent of Blue Jam, dream-like ambient background noise with a middle-aged British guy bemoaning his lost youth in a self-obsessed fashion. No sniggering at the back, please. Song of the year was Ministry by Karen O and Danger Mouse.
I continued to shepherd the San Francisco Cognitive Science Reading Group, mostly monthly; we went bi-monthly towards the end of the year as work got busy, I hope to return to monthly in 2025. We read:
- “A Free Energy Principle for the Brain” by Karl Friston, a recurrent name in our various meetings who we finally started to grapple with. Introduces his ideas (tldr organisms act to minimize “surprise” between what they sense and what they expect by either updating their models of the world or acting on it to bring it in line with their expectations), but like every explanation I’ve read of them, gets technical quickly and is hard work in general.
- “A Brief History of Intelligence” by Max Bennett. By far the best book of the year - interesting in its narrative (tracking the places in evolution where pressures forced the development of specific cognitive capabilities) and incredibly well-written. One member of our group, who spent time at MIT CSAIL in the 70s, judged it the best book he had ever read. So thought-provoking that we spent 4 sessions working through it. I cannot recommend it highly enough.
- “Language is primarily a tool for communication rather than thought” (Ev Fedorenko et al.). Enjoyable, persuasive; felt like the settling of a long-running debate rather than the introduction of novel ideas - but that’s useful too.
- “Vehicles” by Valentino Braitenberg, a delightful book structured as a sequence of thought experiments around simple organisms which, as they get more complex through the book, exhibit more complex behaviours that suggest psychology.
I started 29 books, finished 22 of them, 3 are still in progress. The rest I gave up on, either consciously or otherwise. Outside those above, some highlights:
- Blindsight, by Peter Watts - excellent hard science fiction, intelligence without consciousness as a key character, horrifically unsentimental in parts with a large set of academic references at the back.
- Living on Earth, by Peter Godfrey-Smith. Takes the perspective of the earth as a system, with intelligence as a cause of action rather than a goal in itself (a change of posture from his previous books). Lost me in the middle, but ends with some interesting meditations on the ethics of intervening in nature, and the observation that what distinguishes us from other hominids is our capacity for tolerance of one another - perhaps not how it feels from the news.
- Wicked and Weird by Buck 65, a hallucinatory biography from a man whose imagination is better than his memory. No idea how much is true, but once I stopped worrying about that, I enjoyed it.
- Less is More by Jason Hickel. I went into this one, an argument for degrowth, sceptical - someone whose opinion I value had read it and was debating me while grounded in it, so I wanted to understand better. I found some superficial aspects easy to agree with (growth can’t go on forever, waste in the economy is bad, socialized healthcare is good) but came away unconvinced. I’ve been near enough consumer electronics to know that planned obsolescence isn’t planned; he doesn’t acknowledge any environmental progress (London smog, the hole in the ozone layer); his passion for, and connection of everything with, animism just bewildered me. One part which did surprise me, and worried me, was his observation that growth in clean energy is being outpaced by overall energy usage. (I checked with OurWorldInData and yup, it is).
- Hillbilly Elegy by JD Vance. The pre-Trump JD Vance came across pretty well, I thought. It’s a well-told tale of growing up in poverty, the difficulties of getting out, how hard it is to climb into a new culture, how the lack of a family hurt him and his extended family helped.
- And Finally, by Henry Marsh, author of the incredible Do No Harm (tldr a neurosurgeon discusses his successes and failures). In this one, also autobiographical, Marsh transitions from doctor to patient as he’s diagnosed with cancer, calling into question his own bed-side manner with patients past. “Empathy, like exercise, is hard work and it is normal and natural to avoid it”
Energy Efficiency drives Predictive Coding in Neural Networks
December 29, 2023

I don’t remember how I came across it, but this is one of the most exciting papers I’ve read recently. The authors train a neural network that tries to identify the next in a sequence of MNIST samples, presented in digit order. The interesting part is that when they include a proxy for energy usage in the loss function (i.e. train it to be more energy-efficient), the resulting network seems to exhibit the characteristics of predictive coding: some units seem to be responsible for predictions, others for encoding prediction error.
Why is this exciting?
- It proposes a plausible mechanism by which predictive coding would arise in practice.
- It provides an existence proof (well, two actually: one for MNIST and one for CIFAR images).
- It connects artificial neural networks to theses around predictive coding from Andy Clark and others.
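The core training tweak - penalizing energy alongside prediction error - has roughly this shape. A minimal sketch, assuming the energy proxy is an L1 penalty on unit activations; the paper’s exact formulation (and its weighting) may well differ:

```python
import numpy as np

def task_loss(pred, target):
    # prediction error on the next frame (mean squared error)
    return float(np.mean((pred - target) ** 2))

def energy_penalty(activations, weight=1e-3):
    # crude proxy for metabolic cost: L1 norm of all unit activations.
    # (my assumption for illustration - not necessarily the paper's proxy)
    return weight * sum(float(np.abs(a).sum()) for a in activations)

def total_loss(pred, target, activations, weight=1e-3):
    # combined objective: predict well, but do it cheaply
    return task_loss(pred, target) + energy_penalty(activations, weight)
```

The interesting result is that minimizing the second term isn’t neutral: it pushes the network towards a division of labour (prediction units vs. error units) that predictive coding theories describe.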
I grabbed the source code, tried to run it to replicate, and hit some issues (runtime errors etc), so have forked the repo to fix these, and also added support for the MPS backend (i.e. some acceleration on a Mac M1) which sped things up significantly - see my fork here.
But lots of directions to go from here:
- I’d like to reimplement this in a framework like JAX, both to simplify it a little and to check I really understand it (and JAX)
- Does this approach work for more complex network architectures? For other tasks?
In the spirit of making it all run faster, I tried implementing early stopping (i.e. if you notice loss doesn’t keep falling, bail - on the basis you’ve found a local minimum). Interestingly, it seemed that if I stopped too early (e.g. after just 5-10 epochs of loss not dropping) my results weren’t as good - i.e. the training process needs to really plug away at this fruitlessly for a while before it gets anywhere.
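For reference, the patience-based early stopping I tried looks roughly like this - the class name and defaults are my own illustration, not from the paper’s repo:

```python
class EarlyStopper:
    """Stop training when loss hasn't improved for `patience` epochs."""

    def __init__(self, patience=50, min_delta=0.0):
        self.patience = patience    # epochs to tolerate without improvement
        self.min_delta = min_delta  # how much better counts as "improved"
        self.best = float("inf")
        self.stale = 0

    def should_stop(self, loss):
        if loss < self.best - self.min_delta:
            self.best = loss
            self.stale = 0
        else:
            self.stale += 1
        return self.stale >= self.patience
```

With this structure, the observation above amounts to: a small `patience` (5-10) cuts training off before it has plugged away long enough for the interesting structure to emerge, so it needs to be set generously.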
Ending 2023
December 28, 2023

2023 brought enough disruption and disappointment that I struggle to look back on it with much warmth. At the same time, I’ve many reasons to be grateful it wasn’t worse.
The year started with my beloved manager leaving Google, and then the layoffs hitting. I found the latter shocking, partly because I’d never worked in a company which had gone through them before, but also because there seemed plenty of room for improvement in how they were done. I woke up one morning to find colleagues - including some loyal, talented, productive long-time employees who I felt epitomized Google culture - had effectively been vanished overnight. I’m still bothered by this; while I can understand that it might be necessary, the way it was done seemed to fall short of the high standards Google sets for itself elsewhere. I chatted to a friend who works at Facebook, and what they described of their own layoffs seemed more compassionate. It doesn’t feel good to be beaten by Zuck on empathy.
A few slightly rudderless months followed; my responsibilities were a little diffuse and I lost a little passion for a new mission my group had taken up. In May an opportunity came up to move when the Google Brain/DeepMind merger occurred, and I shuffled into Google DeepMind, where I’m working now. It’s been interesting to shift sideways into a slightly different culture, and I’m finding it often refreshing, sometimes exhilarating, occasionally exhausting - and on balance am very happy to be there. I work in our generative media group, which is an interesting place to be right now, and have taken on more management responsibilities, something I’ve been wanting to do for a little while. Much of my work this year was around music, which enjoys both particular complexities and opportunities. I remain grateful to be working at the coal-face of AI, this decade.
Life outside work was similarly chaotic. Works on our home are about to enter their second year, and we’ve, depressingly, had the “architect and contractor pointing fingers of blame at each other” stage. I am hopeful that this will all conclude in the first quarter of 2024 and we can forget about it. More positively, our roof is now festooned with solar panels and a huge battery sits on our garage wall, which together seem to more than cover our energy needs for 9 months of the year.
I continued practicing karate in 2023. In July a group from our dojo traveled to Okinawa for the 4-yearly gasshuku. It was conveniently held just as a typhoon drifted across the islands, twice, blocking entry for some friends, delaying our exit as the airport closed down, and shutting down training - including, memorably, a third of the way into my shodan grading. I was not passed. When given the opportunity to conclude it via Zoom a few weeks later, I declined (on the basis that these things ought to be done in person), and am stubbornly resolved to return to Okinawa to do it again. I’ve enjoyed continuing to help out with the kids’ classes on Saturday morning, and find it quite fulfilling to see their concentration and coordination improve over the months. Under 7s are also surprisingly bloodthirsty. Our daughter also returned to class this year (having declared, age 4, that she was “done with karate”) and is both enjoying herself and becoming more physically confident.
Generally, exercise was OK: I’ll have run about 600km (vs a target of 750km, about the same as last year), done 120h of karate (vs a target of 150, up significantly from 2022), and cycled 3400km (vs a target of 3000km, again slightly up from 2022 - cycling to work most days really helps). Around that, a few lovely walks around Tahoe, Marin, Point Reyes, and Sussex, and some skiing with friends and their daughters. No significant injuries this year - something to be grateful for as I turned 50.
We spent another wonderful, glorious summer camped out in Worthing near Kate’s parents - our daughter getting grandparent time and being ferried around other friends and relatives, and I working in London during the weeks and seeing old friends at weekends. It was particularly fun to catch up with university friends from 30 years previously, back in Reading - and having fun in Reading is quite an achievement.
I traveled to Israel for the first time, and probably (given events) the last time in a while. Tel Aviv is a beautiful city - it has a wonderful faded Mediterranean feel to it, and was more lively and upbeat than I expected. I happened to be visiting during the pro-democracy demonstrations, wandered through their beginnings on the way to work, and watched them from the nearby Google office. The situation there is horrific, and talking to friends and colleagues based there brings it closer, in some ways, than the front pages of newspapers can.
I’ll write separately about the cognitive science reading group I set up in February; outside of that, it was an OK year for reading:
- A Computer Called Leo by Georgina Ferry; a birthday gift from Toby. tl;dr Britain invents an early computer to run tea rooms, then fails to capitalize on it and sells it to ICL.
- The Three Body Problem trilogy by Cixin Liu; wonderful, I read the first and devoured the following two, then watched the Tencent adaptation.
- The Night Ocean, by Paul LaFarge; a wander through nested narratives and frequent fabrications based around a Lovecraftian scenius, with Asimov and Burroughs hovering at the edges.
- Werner Herzog, a Guide for the Perplexed; a biography of the great man (who I got to see speak in San Francisco this year), spread over decades of conversation, the spirit shining through them all.
- The Ballad of Halo Jones. Alan Moore knows the score; not read since I was a child, and so much had gone over my head back then. “What did she want? Everything. Where did she go? Out.”
- Aurora, by Kim Stanley Robinson. Solid science fiction about the folly of interstellar colonization and sacrifice.
- All You Need to Know About the Music Business, by Donald Passman. Great insider view (Passman is a music industry lawyer) of the byzantine history and practices of the music industry, including the changes wrought by streaming. Recommended by a colleague.
- Do/Interesting, by Russell Davies. A slim gateway drug to Zettelkasten (notice stuff, collect it by writing it down, go through it), but with the goal of creativity rather than the (more common) scientific efficiency.
- A Visit from the Goon Squad and The Candy House, both by Jennifer Egan. Enjoyable, character-driven narratives about music industry workers and has-beens, bleeding into near-term science fiction, especially in the latter. Great fun, some beautiful set pieces. Who among us hasn’t tried screaming at strangers to force them into authenticity?
- The Planiverse, by A K Dewdney. When I was about 8 years old, a family friend and maths professor showed me this book, which posited a 2-dimensional world. This year I tracked it down, read it and enjoyed it.
- The Mountain In The Sea, by Ray Nayler. Christmas present from Kate; tl;dr what if octopuses got smart? A fun romp, but not so subtle - definite shades of Garth Marenghi. Maybe I’m a bit too close to/interested in the source material.
Started and unfinished:
- The Idea Factory by Jon Gertner. A history of Bell Labs. I was looking at models for past research organizations. Now I’ve joined a new one, I’m less interested.
- Musicophilia, by Oliver Sacks. Great exploration of mental illness and music, I’m working through it still.
- The Come Up. Christmas present from James, an oral history of hip hop. Really enjoying it but only just started.
- You Have a Choice. Manual for self-pitying ingrates like myself, trapped in enjoyable demanding jobs yet still incapable of being happy. May never finish this one.
Things I’m thinking about for next year: improving my diet, retaking shodan, working from home a bit more, finding time for side-projects, Neuromatch Academy, spending the summer or more in the UK, and writing more here…
San Francisco Cognitive Science Reading Group, 2023
December 27, 2023

So back in February I set up a local reading group for cognitive science. I’d wanted to learn more about the less… silicon… aspects of intelligence for a while, and figured getting together with a group of like-minded people was a good route to that. And I was surprised at how dead the existing meetups were, post-COVID. So I started one.
So far, so good. We’ve tended to gather a group of 4-5 each month, mostly face-to-face (we’ve had a couple of dial-ins). We alternate reading books and papers, to give us more time for the former. Our group is mostly the same each month, with maybe one or two new faces - we’ve not been great at retaining new arrivals. Conversations frequently get back to large language models, a topic I like to avoid given my day job, and how hard it is to keep track of what’s public and what’s not, when your entire working life is spent in this stuff. Karl Friston also shows up, metaphorically, most months. We need to spend some time with him, metaphorically, in 2024.
- March: Being You: A New Science of Consciousness by Anil Seth. A physicalist (and more importantly, Sussex prof) presents an agnostic view of consciousness, emotion, free will, AI. Comforting confession that no one understands Friston on Free Energy, and I enjoyed the explanation of Integrated Information Theory and the takedown of those free-will experiments showing brain intention to act can be measured before intention is conscious.
- April: The neural architecture of language - Integrative modeling converges on predictive processing. The researchers pass sentences through language models (up to GPT-1) and people (having them read sentences while fMRI’d). They found they could build predictors both ways, from hidden layers of each to the other - suggesting strong correlations, especially on next-word prediction… suggesting it’s a core capability and there are parallels between how artificial neural networks and the brain do it.
- May: Mind in Motion: How Action Shapes Thought, by Barbara Tversky. I had trouble with this one, frequently finding myself more confused after reading the author’s explanations.
- June: Building Machines That Learn and Think Like People. Dates from pre-transformer era, so a bit dated. Lots of desire for intuitive physics abilities and intuitive psychology; emphasis on compositionality as a core cognitive ability (which I can imagine, but they present as necessary)
- July: we took July off, as I was in the UK.
- August: The Experience Machine, another Sussex connection, Andy Clark’s latest. He takes the predictive processing theories laid out in previous works a bit further, talking explicitly about precision weighting - an attention mechanism - and connecting sensing and acting more overtly. A few chapters then go into implications of the theory elsewhere - with focus on psychiatry, medicine, and broader societal issues.
- September: Whatever next? Predictive brains, situated agents and the future of cognitive science. A 2013 paper from Andy Clark - interesting to have read this after the book, I had a real sense of the core ideas being worked through here. One detail I enjoyed: his prediction (on p12) that each level of processing requires distinct units for representations and error. This is exactly what they found in /energy-efficiency/.
- October: Seven and a half lessons about the brain, by Lisa Feldman Barrett. Her name had come up in past discussions and readings so many times that it felt necessary. I was a bit lost in this one, and wondered if something was lost or added in translation between her academic work and this (more pop science) book. She starts by taking down the notion of the triune (lizard/mammal/human) brain model, wanders through brains-as-networks and pruning/tuning (with the mother/daughter relationship presented as an example, which I found plausible but not convincing), then through social/cultural implications and to the idea of brains creating reality.
- November: Biological Underpinnings for lifelong learning machines - a literature review of all the ways biology might inspire and address issues with systems that need lifelong learning (i.e. those beyond today’s common training/inference split). Broad, interesting - their reference to memories being maintained even while brain tissue is remodelled must rule out some mechanisms for storage? But their metaphor of forgetting as “memory locations being overwritten” seemed a bit too silicon-like… There’s a theme of “replay of training data” throughout (random activations in the hippocampus used for rehearsal/generative replay). Working out what needs to be stored for replay seems important.
- December: Models of the Mind: How Physics, Engineering and Mathematics Have Shaped Our Understanding of the Brain by Grace Lindsay. I was surprised how much I enjoyed this exploration of maths in biology - when I started it, it felt like another pop-neuroscience introduction but I found a lot of material which I’d not encountered elsewhere. Starts from the basics of neuron firing, zooming out to network assemblies, into the visual network, to information-encoding and then to the motor system - finally into real-world interactions, Bayesian predictions, reinforcement learning and ending with grand unified theories (e.g. Integrated Information Theory, Thousand Brains, etc., of which the author is quite openly sceptical). I was pleased to hear Pandemonium Architectures called out - their messiness still feels both biologically plausible, and underexplored given our leap to the clean abstractions of matrix multiplication.
A fun year, I think. Towards the end I was definitely losing energy a bit (mostly thanks to the day job) but reading back I’m happy with the ground we covered.