Epistemic status: Just a confusion I once had, and how I eventually resolved it to my satisfaction.
In ordinary differential equations, separability is a deductive rule stating that whenever you have a differential equation of the form

$$\frac{dy}{dx} = f(x)\,g(y),$$

you can then reason that

$$\frac{dy}{g(y)} = f(x)\,dx,$$

and then that

$$\int \frac{dy}{g(y)} = \int f(x)\,dx.$$
From the very first time I saw that, I was immediately put off by that middle equation. What the hell does an expression like $f(x)\,dx$ (by itself) even mean? Until I saw this, I had figured that, apart from their weird notation, differentiation and integration were just plain-old multivariate functions. I had made sense of their notation by just ignoring it, basically. And from that point of view, the above deduction is just nonsensical.
I also remember not getting good clarificatory answers about this at the time! I mostly recall being told to just ignore the middle equation and take the whole conditional on faith, as something that has been separately proven.
Eventually, I learned that there was this idea in math called differential forms, which gives a precise-and-everywhere-valid interpretation to the stand-alone expression $f(x)\,dx$. But you don't quite need that machinery to resolve the thing above that bothered me.
Did you know that "calculus" is an abridgment of the original term "the infinitesimal calculus"? "The rules for soundly manipulating infinitesimal quantities," basically. I did not know this when I first encountered this separability thing. There's a whole saga, maybe even the main story in mathematics, about why that interpretation and corresponding terminology fell out of favor.
The basic infinitesimal calculus idea (which is only sometimes, not always, a valid interpretation of the symbols) is that $dy$ and $dx$ are themselves infinitesimally small quantities, so that $\frac{dy}{dx}$ is a literal ratio and $\int f(x)\,dx$ is a literal sum of infinitely many infinitesimal terms $f(x)\,dx$.
(I very vividly remember the moment when I discovered that the integral sign was just a stylized "S", for "sum"!) Now you cannot everywhere use the above separability reasoning on the strength of the infinitesimal interpretation. Again, it's not an everywhere-valid interpretation!
Once you're using an everywhere-valid interpretation, though, any way of giving $dy$ and $dx$ their own independent meanings as symbols, the separability deduction just falls out! If two things are equal, you can multiply both by any mathematical object and get a true equation; it doesn't matter what kind of mathematical object $dx$ is. If two things are equal, you can apply the same operation to both and get a true equation; it doesn't matter what the integration (summation) operation $\int$ amounts to, precisely.
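To make that concrete, here's the deduction run on a standard textbook instance (just an illustrative example of mine, not one from the original discussion), treating $dy$ and $dx$ as objects in their own right and taking $y \neq 0$:

$$\frac{dy}{dx} = x\,y \;\Longrightarrow\; \frac{dy}{y} = x\,dx \;\Longrightarrow\; \int \frac{dy}{y} = \int x\,dx \;\Longrightarrow\; \ln\lvert y\rvert = \frac{x^2}{2} + C.$$

Every step is either "divide or multiply both sides by the same object" or "apply the same operation, $\int$, to both sides," which is why any interpretation that gives $dy$ and $dx$ standalone meanings validates the rule.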
I sampled hundreds of short context snippets from openwebtext and measured ablation effects averaged over those forward passes. Averaged over those hundreds of passes, I didn't see any real signal in the logit effects, just a layer of noise due to the ablations.
More could definitely be done on this front. I just tried something relatively quickly that fit inside of GPU memory and wanted to report it here.
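Roughly, a sweep like the one described above could look like the sketch below; `model`, `sae`, and `snippets` and their methods are hypothetical stand-ins, not the actual code used.

```python
# Minimal sketch of an ablation sweep over sampled snippets (assumptions:
# `model` exposes hooks at the autoencoder's layer, `sae` is a trained sparse
# autoencoder, `snippets` is a list of tokenized openwebtext contexts).
import torch

@torch.no_grad()
def mean_ablation_effect(model, sae, snippets, dim: int) -> torch.Tensor:
    """Average change in final-position logits from zeroing one SAE dimension."""
    deltas = []
    for tokens in snippets:                               # tokens: (1, seq_len) ids
        acts = model.activations_at_sae_layer(tokens)     # assumed hook
        feats = sae.encode(acts)

        clean_logits = model.continue_from_sae_layer(sae.decode(feats), tokens)

        ablated = feats.clone()
        ablated[..., dim] = 0.0                           # ablate the chosen dimension
        ablated_logits = model.continue_from_sae_layer(sae.decode(ablated), tokens)

        deltas.append((ablated_logits - clean_logits)[0, -1])
    return torch.stack(deltas).mean(dim=0)                # average over snippets
```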
Could you hotlink the boxes in the diagrams to that, or add the resulting content as hover text over those areas, or something? This might be hard to do on LW: I suspect some JavaScript would be required for this sort of thing, but perhaps a library exists for it?
My workaround was to have the dimension links laid out below each figure.
My current "print to flat .png" approach wouldn't support hyperlinks, and I don't think LW supports .svg images.
That line was indeed quite poorly phrased. It now reads:
At the bottom of the box, blue or red token boxes show the tokens most promoted (blue) and most suppressed (red) by that dimension.
That is, you're right. The interpretability data for an autoencoder dimension comes from seeing which token probabilities are most promoted and most suppressed when that dimension is ablated, relative to leaving its activation value alone. Those are ablation-effect signs, so the promotion-effect signs implied and plotted in the figure are their negations.
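In code terms, the sign convention amounts to something like this (hypothetical tensor names, a sketch of the convention rather than the actual plotting code):

```python
# Logits from a clean forward pass and from a pass with the autoencoder
# dimension ablated (its activation zeroed).
ablation_effect = ablated_logits - clean_logits    # what the experiment measures
promotion_effect = -ablation_effect                # what the figure reports
# A token whose logit rises when the dimension is ablated was being
# suppressed by that dimension, and vice versa.
```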
The main thing I got out of reading Bostrom's Deep Utopia is a better appreciation of this "meaning of life" thing. I had never really understood what people meant by this, and always just rounded it off to people using lofty words for their given projects in life.
The book's premise is that, after the aligned singularity, the robots will not just be better at doing all your work but also be better at doing all your leisure for you. E.g., you'd never study for fun in posthuman utopia, because you could instead just ask the local benevolent god to painlessly, seamlessly put all that wisdom in your head. In that regime, studying with books and problems for the purpose of learning and accomplishment is just masochism. If you're into learning, just ask! And similarly for any psychological state you're thinking of working towards.
So, in that regime, it's effortless to get a hedonically optimal world, without any unendorsed suffering and with all the happiness anyone could want. Those things can just be put into everyone and everything's heads directly—again, by the local benevolent-god authority. The only challenging values to satisfy are those that deal with being practically useful. If you think it's important to be the first to discover a major theorem or be the individual who counterfactually helped someone, living in a posthuman utopia could make things harder in these respects, not easier. The robots can always leave you a preserve of unexplored math or unresolved evil... but this defeats the purpose of those values. It's not practical benevolence if you had to ask for the danger to be left in place; it's not a pioneering scientific discovery if the AI had to carefully avoid spoiling it for you.
Meaning is supposed to be one of these values: not a purely hedonic value, and not a value dealing only in your psychological states. A further value about the objective state of the world and your place in relation to it, wherein you do something practically significant by your lights. If that last bit can be construed as something having to do with your local patch of posthuman culture, then there can be plenty of meaning in the postinstrumental utopia! If that last bit is inextricably about your global, counterfactual practical importance by your lights, then you'll have to live with all your "localistic" values satisfied but meaning mostly absent.
It helps to see this meaning thing if you frame it alongside all the other objectivistic "stretch goal" values you might have. Above and beyond your hedonic values, you might also think it good for you and others to have objectively interesting lives, accomplished and fulfilled lives, and consumingly purposeful lives. Meaning is one of these values, where above and beyond the joyful, rich experiences of posthuman life, you also want to play a significant practical role in the world. We might or might not be able to have lots of objective meaning in the AI utopia, depending on how objectivistic meaningfulness by your lights ends up being.
Considerations that in today's world are rightly dismissed as frivolous may well, once more pressing problems have been resolved, emerge as increasingly important [remaining] lodestars... We could and should then allow ourselves to become sensitized to fainter, subtler, less tangible and less determinate moral and quasi-moral demands, aesthetic impingings, and meaning-related desirables. Such recalibration will, I believe, enable us to discern a lush normative structure in the new realm that we will find ourselves in—revealing a universe iridescent with values that are insensible to us in our current numb and stupefied condition (pp. 318-9).
I believe I and others here probably have a lot to learn from Chris, and arguments of the form "Chris confidently believes false thing X" are not really a crux for me about this.
Would you kindly explain this? Because you think some of his world-models independently throw out great predictions, even if other models of his are dead wrong?
I agree that stronger, more nuanced interpretability techniques should tell you more. But when you see something like
25132 ▁vs, ▁differently, ▁compared, ▁greater, all, ▁per
25134 ▁I, ▁My, I, ▁personally
isn't it pretty obvious what those two autoencoder neurons were each doing?
No, towards an $L_0$ value. $L_1$ is the training proxy for that, though.
An excerpt from Richard Hamming's The Art of Doing Science and Engineering with some relevance to AI coding:
—Hamming (1996), pp. 45–8