Daniel Lemire's blog

Sentience is indescribable

16 thoughts on “Sentience is indescribable”

  1. I don’t remember in what book (I Am a Strange Loop, maybe?) Douglas Hofstadter talked about the size of souls, another word he used for sentience.

  2. Interesting, Daniel. As it so happens, I’m writing a paper called “A Theory of Ethics for Sentient Machines”. Part of my answer to your question “why should they deserve special consideration” has to do with the actor’s intentions. In other words, it isn’t so much that one’s ethical stance towards sentient machines is determined by *their* sentience as it is by *your intentions* towards them.

  3. @Paul

    So a rock is sentient, but a crystal is not? The solution space to the quintic polynomials is sentient, but the quartic is not?

    I obviously don’t claim to have defined what sentience is. (I am throwing a conjecture out there.) This being said… neither a rock, nor a crystal, nor a solution space is a system, so I would reject them.

    the human brain’s capacity for understanding seems like an arbitrary yardstick.

    It is not arbitrary because I am the observer.

    Sentience is not, I conjecture, some absolute property. Some things appear sentient to us because we cannot wrap our heads around them.

  4. @Paul

    I agree that my point of view is less interesting in the sense that it constitutes a demystification: there is no magical spark of consciousness.

  5. @Paul

    But I personally suspect there’s something incorporating internal memory, self-modification and complexity going on with sentience.

    I agree.

  6. Paul says:

    So a rock is sentient, but a crystal is not? The solution space to the quintic polynomials is sentient, but the quartic is not? What would that mean, that a mathematical statement has sentience?

    At the very least, it doesn’t seem to me that complexity is sufficient for sentience. I’m not convinced it’s necessary either: the human brain’s capacity for understanding seems like an arbitrary yardstick.

  7. Paul says:

    I obviously don’t claim to have defined what sentience is. (I am throwing a conjecture out there.)

    Understood. As you said, these topics may forever escape us. I offer counter-examples as an attempt to better see what the boundaries of this conjecture may be.

    Neither a rock, nor a crystal, nor a solution space is a system, so I would reject them.

    Which raises an interesting sub-question: what is a system? A rock can be broken into dissimilar, interconnected constituent components, and we can define inputs and outputs on it in terms of forces and reactions. The spam filter starts when an external force initiates the program, and it returns a result you could predict. We could define the input to the rock as a sharp blow with a hammer, and the output as the particular cracking pattern. With a perfect crystal we could predict the crack up to certain quantum limits; with the rock we couldn’t. This is a bit more realistic, I suppose, than positing the sentience of a rock: does the process of a rock breaking cause any “sensations” for the rock, or perhaps more properly for the universe, in a way we’d recognize as sentient?

    Sentience is not, I conjecture, some absolute property. Some things appear sentient to us because we cannot wrap our heads around them.

    Which is an interesting definition to consider. Instead of “does that object have subjective experiences?”, it becomes “does my subjective experience suggest that object also has subjectivity?” I agree that your question is far more practical and definable. But it’s also a less interesting question. In a sense, you’re searching for candidates for a subjective existence, but not addressing the more usual definition of sentience: “OK, the spam filter could have a spark of consciousness. But does it? Or am I just projecting my own subjective experience because I don’t understand the unconscious rules being followed?”

  8. Paul says:

    @Daniel

    I agree that my point of view is less interesting in the sense that it constitutes a demystification: there is no magical spark of consciousness.

    And yet, isn’t there? I just sipped my coffee and had an experience of “taste”. I feel “pressure” as I type this post. I’m not convinced time exists; I’m not convinced my sense of self is divided from other senses of self; I’m not convinced of free will. But I am convinced that “taste”, as something emergent above and beyond the shuttling of chemicals and electricity around a biological computer, exists. And that may or may not also occur in the chemical and electrical patterns of a mosquito.

    If a sufficiently advanced algorithm analyzed a drop of coffee, would it too have a sensation of “taste”? Or would it blindly shuttle 1s and 0s around, with no subjective, sentient experience?

    Returning to one of your original conjectures, this may just not be amenable to any sort of analysis or definition. But I personally suspect there’s something incorporating internal memory, self-modification and complexity going on with sentience.

  9. @Rafael

    Making computers that can pass as human beings is within our grasp. I agree.

    But once you have made such a computer, will you know what consciousness is?

  10. Rafael says:

    If I spend a week with a program and have the same sort of meaningful talks and emotions you’d have with a friend, then who am I to say that it’s not sentient?

    In my view, sentience, which I guess you call sapience, is just a measure of how close something is to a human. If it is exactly like a human, then we call it sapient. Complexity, limits, and the like don’t matter.

    I’m not sure what you think is forever going to escape us. If I make a sentient computer, then I will have understood what it takes to be sentient, and other people will easily figure out that I have built a sentient computer.

  11. Joe says:

    Ha, I was half expecting the “illegible” link would be a photo of your attempt to predict your wife’s actions.

    Good post; I like this idea of relative sentience. It may not be as useful as an absolute measure, but it can certainly help us, the observers, get a better understanding of it.

  12. Sounds like you would agree with Daniel Dennett in Consciousness Explained. If you want a contrasting point of view, see David Chalmers’ The Conscious Mind.

  13. @Vellino

    Thanks for the references. Indeed, I agree with Dennett, it seems.

  14. The Douglas Hofstadter book referenced in the first comment is “I Am a Strange Loop”, and it has a very deep and interesting perspective on this issue. Worth reading.

    Paul.

  15. Venkat says:

    Fascinating conjecture here. There’s one major counter-argument: the illegibility and high entropy may be illusory. It may be low algorithmic (Kolmogorov) information masquerading as high Shannon information. That is, a forest, a cat, or your wife (and you) might all be low complexity in the sense that pi is low complexity. At least Wolfram and the other digital-physics people (like Seth Lloyd) seem to believe it. Maybe we’re all Automaton #29 with different initial conditions or something. An even more intriguing thought is recursive self-description: somewhere in the expansion of pi, is there a digit string that is also a description of an algorithm to generate pi?

    If so, then your brain could possibly be described by a string that is far smaller than the extensive form of the brain itself, and the brain could contain its own compact description and possibly truly understand itself.
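
    To make the self-description idea concrete, here is a minimal sketch (a toy illustration only, saying nothing about brains): a quine, a program that prints an exact copy of itself, shows that a system can carry its own complete, compact description without being larger than itself.

    ```python
    # A quine-style demonstration: the two statements below print
    # an exact copy of themselves, a minimal instance of a system
    # containing its own compact description.
    s = 's = %r\nprint(s %% s)'
    print(s % s)
    ```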

    So I’d rephrase your conjecture to include the counter-argument in the generalized either/or form: what’s the true Kolmogorov complexity of the universe (from quarks to quasars and everything in between, including forests, cats, people…)? And is it increasing or decreasing?

    The Shannon:Kolmogorov ratio is in a sense a measure of the sentience level of a universe. If it is 1, the universe is maximally intelligent and entropic. This is one reason some people appear to like the idea that the 2nd law of thermodynamics can be interpreted as the universe evolving into an omniscient entity, a.k.a. God. Asimov has a story based on this premise.
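
    As a crude, computable illustration (true Kolmogorov complexity is uncomputable, so a general-purpose compressor like zlib serves as a rough stand-in here, and the helper below is purely illustrative): bytes generated by a simple rule compress dramatically better than genuine noise, even though both can look equally illegible at a glance.

    ```python
    # Compressed size is an upper bound on Kolmogorov complexity, so the
    # compression ratio crudely probes the gap between apparent
    # (Shannon-style) randomness and true algorithmic content.
    import os
    import zlib

    def compression_ratio(data: bytes) -> float:
        """Compressed size divided by original size (lower = more structure)."""
        return len(zlib.compress(data, 9)) / len(data)

    # Deterministic, rule-generated bytes: low algorithmic complexity,
    # even though they may look random at a glance.
    rule_generated = bytes((i * i) % 251 for i in range(100_000))

    # Genuinely random bytes: essentially incompressible.
    random_bytes = os.urandom(100_000)

    print(f"rule-generated: {compression_ratio(rule_generated):.3f}")  # well below 0.1
    print(f"random:         {compression_ratio(random_bytes):.3f}")    # close to 1.0
    ```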

    Alternatively (and this is the form in which I am considering the question) is the information capacity of the universe fully utilized? Underutilized information capacity shows up in our universe as symmetries. Some are obvious symmetries, others are deep symmetries. Find the symmetries of a forest and you’ll find out if it has as much information as you think it does.

    I am working on a very related topic… the illegibility/symmetry/information potential of “moves” rather than objects (i.e. I am asking your questions, but not about “noun” entities like cats and forests, but “verb” entities like a punch or a journey or a business decision).

    Anyway, apologies for riffing very metaphysically here.

  16. Angelo Pesce says:

    I guess I agree to a certain degree. We do tend to define as conscious those systems that escape our efforts to rationalize them. This is fairly obvious looking at our history, and even at the history of our religions.

    But I agree with other commenters here when they observe that not every system that currently escapes our ability to describe it is marked as conscious.

    Now, I suspect part of this is because we define consciousness in a very anthropocentric way: we really look for systems with a communication ability that we can understand. For example, I’m fairly persuaded that if we looked at the behavior of the Earth’s ecosystem from the right perspective, we would start noticing intelligent reactions that we miss only because we are not looking at the right granularity.

    Still, it’s undeniable that there are complex systems that we don’t grasp and that nevertheless appear very mechanical to us.

    Also, even if we were to accept the notion that the complexity of a system relative to another is what defines consciousness, and dismiss all the exceptions as conscious systems that we fail to recognize as such, I feel this is still a non-answer. It is the kind of answer that creates more questions, questions that are effectively the “meat” of the problem; in the end it gives us very little information.

    Some such questions would be:
    – What is the property by which a given system perceives another system as conscious? In other words, what is the complexity differential required for consciousness?

    – More importantly, we define ourselves as conscious; we are self-aware. Is this just because we cannot explain ourselves within our own symbolic reasoning system? And if so, is it out of ignorance, or is there an inherent property that makes systems like us able to reason about themselves yet incapable of comprehending their own inner workings? Can consciousness be cracked? Could we in the future explain ourselves in a way that makes us perceive ourselves as mechanical beings?

    And those, I feel, are really the questions about consciousness. If we had answers to them, we could take a system and categorize it as self-aware or not, or understand whether a system capable of reasoning about another system will perceive that second system as conscious.

    Now, the title of the post seems to hint at the idea that we can’t answer these questions, because if we could, we could then describe sentience, which we can’t. Indeed, as I wrote, I agree that if we could, we would then categorize ourselves as mechanical. The problem is to prove that we can’t (and thus that we will always perceive ourselves as conscious)!