Richard_Kennaway

Computer scientist, applied mathematician. Based in the eastern part of England.

Fan of control theory in general and Perceptual Control Theory in particular. Everyone should know about these, whatever attitude to them they may eventually reach. These, together with consciousness of abstraction, dissolve a great many confusions.

I created the Insanity Wolf Sanity Test. There it is; work out for yourself what it means.

Change ringer since 2022. It teaches the grasping of abstract patterns, memory, thinking with your body, thinking on your feet, fixing problems and moving on, always looking to the future and letting go of both the errors and the successes of the past.

As of May 2025, I have yet to have a use for LLMs. (If this date is more than six months old, feel free to remind me to update it.)

Comments

The other much harder path would require designing systems where those in power are monitored first and most strictly, under rules they can’t tamper with.

Who monitors the monitors? Who decides the decision rules?

The whole thing is made of anthropomorphism. The AI is imagined as a valuable person that we in our paranoia seek to enslave. But suppose it isn't? The post's answer to that is here:

We’re stuck with uncertainty [about AIs being conscious]. Anyone telling you they know the answer is almost certainly mistaken. What does one do? Consider the possibilities:

  1. AIs as they currently are cannot be phenomenally conscious, and are no different from any other tool
  2. AIs can be and/or already are phenomenally conscious, and have all the same natural rights and moral patienthood as anyone else

If one assumes that #1 is true, and they are right, no big deal. If they are wrong though… that’s bad. Extremely bad.

If one assumes that #2 is true, and they are wrong, well again, no big deal.

To which I reply: if one assumes that #2 is true, and they are wrong, it is likely the end of the world.

ETA: Actually if it's superintelligent it's doom however you slice it.

Is this Claude making a bid for AI rule over humanity?

My argument is roughly that religions uniquely provide a source of meaning, community, and life guidance not available elsewhere

In my limited experience of religion (brought up Church of Scotland but never took it seriously; attended Buddhist meditation classes on and off over a period of about 6 years; have attended a few Christian wedding ceremonies), I have never witnessed these things. At a church service there are Bible readings, hymns, and a sermon, and … that’s it. Whatever community there is around it is at most just people using the regular services as a Schelling point to meet everyone else.

This surely varies from place to place, but that is what I have seen.

The simple act of sitting together in silence for an hour bonds the people who do it, because they understand the level of commitment it takes not to get up and leave.

“I did this difficult thing, therefore it must have been worth it” is standard cultish anti-epistemology.

Any art style becomes generic if repeated enough. Having now seen a bunch of Ghiblified images, I find them as emptily cloying as a Thomas Kinkade. When you can just push the button to get another, and another, as Kinkade did without a computer, it renders the thing meaningless. What are such images for, when all the specificity of the scene they are supposed to be illustrating has been scrubbed off?

Have some Bouguereau! Long dead and safely out of copyright.

"A traditional American barn dance, in the style of William-Adolphe Bouguereau."

I once ran as fast as I could to catch a train, despite knowing that I was already too late.

I arrived on the platform a few minutes after the departure time. This was my expectation, but not my intention. My intention was to catch the train.

I got on the train with seconds to spare, because, unknown to me, it had been delayed a few minutes in leaving.

I had an intention to catch the train, no expectation of doing so, and succeeded in catching the train. How does this fit your conflation of intention and expectation?

If two variables are made to take the same values, that does not make them the same variable. If they were the same variable, it would be unnecessary to do anything to cause them to have the same value.

Despite all this, the purpose of the post was to give a clear definition of emergence that doesn't fall into Yudkowsky's strawman

I am taking Eliezer's word for it that he has encountered people seriously using the word "emergence" in the way that he criticises, that it is not a straw man. The sources I found bolster that view. (In the Steven Johnson interview I took him to be using "magical" figuratively; but as various people have pointed out, if someone finds the world weird, it is they who are weird, for expecting the world to carry on behaving just like the tiny fragment of it they have seen so far.) To respond by inventing a different concept and calling it by the same name does not bear on the matter.

This is an error I see people making over and over. They disagree with some criticism of an idea, but instead of arguing against that criticism, they come up with a different idea and use the same name for it. This leaves the criticism still standing. Witness the contortions people go through to defend a named theory of causal reasoning (CDT, EDT, etc.) by changing the theory and keeping the same name. That is not a defence of the theory, but a different theory. That different theory may be a useful new development! But that is what it is, not a defence of the original theory.

My guess is that they're for the higher-ups at the company itself, a glossy corporate brochure in front of the real website. Urgh. The gym I use has a similar sort of website, and it's really troublesome to go to it with a definite question like "when is the 50m pool open this week?" and extract an answer.

Particularly when your selection process is biased (through correlations between rhetoric and substance) towards finding precisely the stupidest of stuff among it.

I did not have to scrape any barrels to find those examples. They were all from the first page of Google hits. Steven Johnson's book is not new-age woo. Of my First/Second/Third examples, the first is a respectable academic text, and only the third is outright woo. So I do not think that "the power of emergence" is optimised for finding nonsense (it was the only phrase I tried), nor am I depending on the vastness of the internet to find these. Certainly, my search was biased — or less tendentiously, intended — to find the sort of thing that might have been Eliezer's target: not new-age woo, but the things respectable enough to be worth his noticing at all. Lacking the actual sources he had in mind, what I found does look like the right sort of thing. I notice that the Steven Johnson book predates his posting.

As I said above, the following exchange remains illustrative of this:

It's worth quoting a little further:

anonymous: The apparent disagreement here comes from different understandings of the word "non-superconductivity".

By "non-superconductivity", Yudkowsky means (non-super)conductivity, i.e. any sort of conductivity that is not superconductivity. This is indeed emergent, since conductivity does not exist at the level of quantum field.

By "non-superconductivity", Perplexed means non-(superconductivity), i.e. anything that is not superconductivity. This is not emergent as Perplexed explained.

In your footnote you say:

As an aside, it's reflective of what Eliezer does these days as well: almost exclusively arguing with know-nothing e/acc's on Twitter where he can comfortably defeat their half-baked arguments, instead of engaging with more serious anti-doom arguments by more accomplished and reasonable proponents.

Selection effect again? I don't look at Twitter, but I did notice that Eliezer recently gave a three-hour interview and wrote a book on the subject.
