WarmWash 19 hours ago [-]
This article is dripping with the same kind of cringey techno-engineering naivete you find in Hollywood movies. The author is totally lost in the sauce of complex surface-level analyses mixed with romantic ideals of human exceptionalism, and completely blind to the deeper abstractions and common undergirding systems that an expertise in computation would reveal (systems which have no care for emotional concepts).
The takeaway seems to be "Only meat brains can be conscious because I can feel it and computers aren't made of meat". Which is basically the plot line of every human/robot movie for the last 80 years.
adamzwasserman 16 hours ago [-]
The interesting version of the argument isn't about substrate: it's about motivation.
Present the trolley problem to GPT-4 and it gives you a philosophy survey answer.
Present it to a human and their palms sweat. The gap isn't computation, it's that humans are value-making machines shaped by millions of years of selection pressure.
Pollan lands on the wrong argument (biology vs. silicon) when the real one is: where do the values come from, and can they emerge without a reproductive lineage that stakes survival on getting them right?
judahmeek 6 hours ago [-]
I refer to this as moral grounding.
I'm not sure I would call it a requirement for consciousness, but knowing that most beings with general intelligence (humans) have a form of it similar to my own does make it easier to sleep at night.
ath3nd 11 hours ago [-]
[dead]
orbital-decay 19 hours ago [-]
>The idea that the same consciousness algorithm can be run on a variety of different substrates makes no sense when the substrate in question—a brain—is continually being physically reconfigured by whatever information (or “algorithm of consciousness”) is run on it. Brains are simply not interchangeable, neither with computers nor with other brains.
This is kind of self-contradictory. Then humans aren't conscious? Or each has their own consciousness? Then why not the machine? Not sure what point is being made here. Yes, the states of a human brain and a transformer are absolutely incompatible (humans at least share a common architecture), which is why any attempt to map a model's "emotions" to humans', and the entire model welfare concept, are pretty dubious. That doesn't prove there's no (or can never be) consciousness in it, though.
That's the most coherent argument from the entire article. It criticizes the Butlin report in particular and extrapolates that to "never", while ignoring modern takes on that (e.g. interpretability studies showing vague similarity of both on a level deeper than just the language) and any possible future evidence.
In a sense the title is right, nobody ever formally defined consciousness, so you and I and anyone else are free to make almost any argument and spin any narrative according to our beliefs and it will be true! Ill-defined terms and baseless solipsism are the main problems with all these discussions. Good thing that in practice they matter as much as the question whether a submarine swims.
adamzwasserman 16 hours ago [-]
The substrate argument is the wrong hill for Pollan to die on. The stronger version isn't "meat vs. silicon" — it's that brains are value-making machines operating under evolutionary pressure, and no current AI architecture has anything analogous to that. You can simulate the outputs of valuation without having the mechanism. The question isn't whether consciousness can exist in another substrate, it's whether you can get there without the thing that actually drives human cognition: spontaneous assignment of moral and survival value with no prior programming.
waffletower 10 hours ago [-]
AI is an extension and acceleration of so-called "evolutionary pressure". But so far AI models lack both agency and consciousness, and do not "experience" this pressure, though they are entirely defined by it. They can also explain this relationship to you.
sdwr 16 hours ago [-]
It's a fine argument, just rough around the edges:
Human brains use redundancy and the physical independence of neurons to build new pathways over time.
Current LLMs have no redundancy and brittle weights. Their technology and architecture fundamentally prevent them from learning.
I think our understanding of consciousness is developing as we build new edge cases. We have a machine that understands and reacts, but can't learn, grow, or "be" over time in a meaningful way.
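The redundancy point can be made concrete with a toy sketch (plain Python, not a real neural network): a "redundant" predictor spreads a computation across many small units, while a "brittle" one concentrates it in a single weight. Knocking out one unit hurts the first far less.

```python
# Toy illustration only: the same function y = 10 * x computed two ways,
# then one unit is "lesioned" (zeroed) in each.

def redundant_predict(x, units):
    # Each unit contributes a share of the answer; the sum is the output.
    return sum(w * x for w in units)

redundant = [1.0] * 10          # ten small units of weight 1.0 -> 10 * x
brittle   = [10.0]              # one big unit of weight 10.0  -> 10 * x

# Both compute the same function before damage.
assert redundant_predict(2, redundant) == redundant_predict(2, brittle) == 20.0

# Lesion one unit in each model by setting its weight to zero.
lesioned_redundant = [0.0] + redundant[1:]   # loses 10% of the output
lesioned_brittle   = [0.0]                   # loses everything

print(redundant_predict(2, lesioned_redundant))  # 18.0 -> graceful degradation
print(redundant_predict(2, lesioned_brittle))    # 0.0  -> total failure
```

This is only an analogy for the comment's claim, not a model of real transformer weights, where redundancy is partial rather than absent.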
orbital-decay 15 hours ago [-]
But how is neuroplasticity relevant to consciousness, whatever it is?
>Their technology and architecture fundamentally prevents them from learning.
No? There's in-context learning, which is actual learning: it's sample-efficient, and the result can be stored for a learning pipeline. Yes, it's ludicrously crude and underpowered compared to neuroplasticity, but that's a separate question; there's nothing fundamental about this.
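A minimal sketch of what "stored for a learning pipeline" can mean: the "learned" behavior lives in the prompt rather than in the weights, so the prompt itself is a persistable artifact. The model call is omitted here; any chat or completions API would consume the resulting string. The prompt format below is a common convention, not any particular vendor's API.

```python
# Few-shot in-context learning, reduced to its data-plumbing skeleton:
# pack labeled demonstrations plus a new query into one prompt string.

def build_few_shot_prompt(examples, query):
    """Format (input, output) pairs and a query as a few-shot prompt."""
    lines = [f"Input: {x}\nOutput: {y}" for x, y in examples]
    lines.append(f"Input: {query}\nOutput:")
    return "\n\n".join(lines)

# Three demonstrations are enough to specify a string-reversal "skill".
examples = [("abc", "cba"), ("hello", "olleh"), ("42", "24")]
prompt = build_few_shot_prompt(examples, "world")

print(prompt.count("Input:"))   # 4 -> three demonstrations plus the query
# The prompt can be persisted and reused later, like a trained artifact:
# open("reversal_skill.txt", "w").write(prompt)
```

The crudeness the comment concedes is visible here: nothing is consolidated into weights, and the "learning" vanishes unless the prompt is carried along.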
mono442 20 hours ago [-]
We don't really know what consciousness really is, and I think it is premature to dismiss the possibility of replicating the behavior with a mathematical model.
Merrill 16 hours ago [-]
Consciousness is an emergent phenomenon arising from the ability to fantasize: to think about things that don't exist. In particular, it is fantasy about the "self".
AI is getting close.
superxpro12 18 hours ago [-]
All in all, humans are just reeeeeeeeeallly complicated if-statements that violate all rational bounds of cyclomatic complexity.
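For what it's worth, "cyclomatic complexity" has a concrete definition: roughly one plus the number of decision points in a piece of code. A rough sketch using Python's `ast` module (real tools such as radon count a few more node types):

```python
import ast

def cyclomatic_complexity(source):
    """Rough cyclomatic complexity: 1 + number of decision points.

    Counts if/for/while/ternary plus boolean short-circuits; this is a
    simplified approximation, not a drop-in replacement for a real tool.
    """
    tree = ast.parse(source)
    decisions = 0
    for node in ast.walk(tree):
        if isinstance(node, (ast.If, ast.For, ast.While, ast.IfExp)):
            decisions += 1
        elif isinstance(node, ast.BoolOp):
            # "a and b and c" adds two decision points, not one.
            decisions += len(node.values) - 1
    return 1 + decisions

code = """
def reflex(stimulus):
    if stimulus == "snake":
        return "jump"
    elif stimulus == "cake":
        return "approach"
    return "ignore"
"""
print(cyclomatic_complexity(code))  # 3: the if, the elif, plus one
```

By this measure the comment's quip holds up: any brain-scale "if-statement" would have a complexity score far beyond what any linter would tolerate.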
jhickok 17 hours ago [-]
I don't have the data to demonstrate that this is incorrect, and that's because we lack a fundamental model of how brains operate. Brains probably compute under an expansive definition of "computer", but the claim that the brain is a classical computer is sorely underdetermined by the evidence.
fortyseven 20 hours ago [-]
Hell, I don't even know for sure if I'm "conscious". When I really stop and think hard about it, the process of speaking or typing, word by word (even this!), is built on past experience. If you smack me on the head hard enough and give me amnesia, there goes all that memory, and suddenly I can't talk about the things that I could before. I would struggle and need to be exposed to new information (looking at it, reading up, being told about it, etc.) to be able to discuss it further. For me, that idea suggests there's a process that's not entirely different from a large language model. Not the same. But it definitely makes me wonder if I have more in common with them on some level, and whether there's as much to the human mind as we think. For humans, maybe we're not much more than the sum of our components.
adamzwasserman 16 hours ago [-]
The commonality breaks down at value assignment. You hear an unexpected sound and have a threat/delight assessment in 170ms. Faster than Google serves a first byte. You do this with virtually no data.
An LLM doesn't assign value to anything; it predicts tokens. The interesting question isn't whether we share a process with LLMs; it's whether the things that make your decisions matter to you (moral weight, spontaneous motivation) can emerge from a system that has no survival stake in its own outputs. I wrote about this a few years ago as "the consciousness gap": https://hackernoon.com/ai-and-the-consciousness-gap-lr4k3yg8
waffletower 16 hours ago [-]
The article conflates morality with consciousness. While consciousness may be a prerequisite for empathy, morality does not require empathy, and consciousness does not guarantee empathy either.
big-chungus4 20 hours ago [-]
If that's the case, it wouldn't be unethical to torture humans simulated on a computer down to their wave function or whatever the smallest thing is, which seems sus.
tim-tday 15 hours ago [-]
Get a consensus definition of consciousness and we’ll talk.
Throaway1982 18 hours ago [-]
Only people who can't let go of religious arguments about the special nature of humanity think we can't build conscious machines.
BoxOfRain 17 hours ago [-]
I disagree, my problem with claims of machine consciousness is that they are effectively unfalsifiable without both an adequate theory of consciousness and a way of measuring it empirically. We don't have these, so in my opinion while this is a question we may answer in the future, we definitely lack the theories and tools to make particularly credible claims at the moment. Neither pessimism nor boosterism is warranted yet in my view.
I suspect the space of forms consciousness can take is enormous, and it likely can exist in many forms other than the one we usually experience. I wouldn't rule out machine consciousness as a possibility, but without an adequate theory of consciousness it's just not something I think we can claim is possible or impossible yet with much credibility. That's not a religious argument, if anything it's the argument of an agnostic.
mlyle 13 hours ago [-]
The problem with any claim of consciousness is that it is unfalsifiable.
But it seems to be pretty hard to come up with a coherent claim of meat-consciousness that really excludes the possibility of machine-consciousness without some kind of really motivated reasoning.
scarmig 16 hours ago [-]
I agree that all our theories of consciousness are deeply inadequate. And if it were purely a scientific question, I'd be fine holding off on it. But consciousness plays a huge role in most theories of ethics, and agnosticism with a negative prior will inevitably lead to unethical actions if there are any beings that exist outside our "is it human?" heuristic.
Other adult humans? Babies? Fetuses? Brain dead patients? Severe Alzheimer's? Higher apes? Mammals? Vertebrates? Jellyfish? Trees? Organic aliens? Inorganic aliens? A pile of dirt?
Without a good theory of consciousness, we can't answer yes or no for any of them. And yet we don't have a good theory of consciousness and still want to make ethical decisions. What do? We have to rely on gestures toward a theory of consciousness and make decisions based on it, despite its flaws.
keiferski 16 hours ago [-]
I spent a couple of months last year writing an essay about consciousness for the Berggruen essay contest. Ultimately they ended up picking a guy who had already written books about the topic, alas…
Anyway, I plan on posting it online somewhere eventually, but HN seems like a good place to throw the introduction out there.
The basic argument I have is that consciousness is a red herring, a concept that was relevant historically but is increasingly routed around by cybernetic systems that aren’t interested in interior states.
Here’s the intro. If you find this interesting, please let me know!
MacGuffin. Whodunit. Smoking gun. Fall guy. The detective fiction genre is an underappreciated source of terminology for unsolved problems, useful not only for criminal mysteries but also for unanswered questions in philosophy and science. One such term is the red herring: an apparently useful thing that, upon further inspection, is actually a distraction from solving the main mystery at hand.
The concept of consciousness may be such a red herring. It has occupied the minds of philosophers for centuries and increasingly frames debates around AI, animal rights, and medical ethics, among other issues. And yet, even as consciousness is rhetorically dominant, in practice it is increasingly ignored and routed around in real-world situations. When rights are bestowed and resources allocated, the mechanism by which these are done is increasingly uninterested in interior consciousness.
This is not because the problem of consciousness has been solved, or because a revolutionary new theory has novel insights. Rather, it is the natural consequence of cybernetic systems concerned only with output, not internal states or abstract ideals.
What is needed, then, is a genealogy of the concept of consciousness, in the manner of Nietzsche, Foucault, or Charles Taylor. Not a new theory of consciousness, but a story of how the concept developed and came to underlie significant legal, moral, and philosophical systems, and how that foundation is rapidly fading away.
What this genealogy reveals is not merely the history of a single concept or the changing of societal systems, but a deeper human shift: the erosion of interiority itself and the triumph of the external. In simpler terms: a new, largely exterior idea of the self is forming, while at the same time, it is becoming more difficult to conceive of an interior-focused one.
This essay will trace the history of the concept of consciousness, show how it is being routed around by output-focused systems, then ask what effect this has on human life, and how to address it.
avmich 19 hours ago [-]
Seems like a lot of points for questions, not sure where to start :). The author should be familiar with FPGAs in relation to the hardware vs. software distinction. Really, a bit more non-humanities education might clarify some things.
Somebody with another background might take on commenting the article, so instead of short comments here we might have a coherent picture.
manjuc 19 hours ago [-]
Very interesting.
I explored a related angle on how AI challenges our assumptions about self and awareness: https://www.immaculateconstellation.info/why-ai-challenges-u...
Don't the machines and cymeks purge and subjugate humanity? [1] That's a bit different than "being too smart", innit?

1: https://dune.fandom.com/wiki/Butlerian_Jihad
everdrive 18 hours ago [-]
I've always felt that the different aspects of the mind have been very loosely defined. We've lacked the science to really define them specifically. ("What is consciousness?" remains a philosophical question, which is a strong cue that we don't understand the science of the question yet.) And until recently, we've lacked a lot of basis for comparison. (Animal intelligence and consciousness _should_ have been a basis for comparison, but I think for cultural reasons we've been quite late to make peace with that fact.)
In any case, intelligence, consciousness, sapience, ego, etc. will probably need more strict fact-based definitions before we can agree on whether or not artificial consciousness can exist.
My personal theory is that consciousness is a specific biological adaptation, and that it exists primarily to manage the care of young and to manage status & relationships in kin groups. A theory of mind can benefit the care of young, which is a good argument for why only mammals and birds (two classes of animals which do a lot of caring for young) appear either to have a prefrontal cortex (mammals) or to have developed something which performs the same functions (birds). In my opinion, consciousness as people experience it is also necessary for developing a theory of mind for other people, which is beneficial for understanding status & hierarchy in a group, and for cultivating and maintaining status.
This is partially why you can be a mystery to yourself; the same skills you'd use to try to understand someone else must actually be used to understand yourself. eg: "was I secretly jealous when I cut down my coworker?" Why don't you just know with 100% certainty? I'd argue that it's because the maintenance of ego does not require this certainty, because ego is tacked onto an already developed brain and lacks perfect insight into the brain's processes. I'd also argue this is why there can be such a gap between who someone believes themselves to be, and who they actually are. You're maintaining a personal identity which ties directly to status. It's not super relevant whether you're consistent over time or 100% internally consistent. You must meet the threshold to maintain your status, but really no more is needed.
It's also why you talk to yourself in inane ways. You're walking through your house and you finally find your lost car keys. "I found them!" you might say to yourself. But who are you telling? Certainly "you" already know. I'd argue that the "you" in your head is an abstract identity that you have imperfect access to -- just the same as you have imperfect access to, and knowledge of, other people. Your mind builds a model of your own mind using the same tools it uses to build a model of other people's minds. You have _more_ information about your own mind, but you certainly do not have omniscience about your own mind. The models are always imperfect.
I could go on, but I'd also argue this is sort of the basis for religion. Just like we see faces in the clouds, we try to find a theory of mind in places where it doesn't actually exist. (eg: "We must have upset an ego out there, and that's why it's not raining.") I also think it's why people have moral intuitions but not mathematical intuitions. Or why moral intuitions fail at scale. (eg: Peter Singer's famous child drowning in a small pond thought experiment.)
vvoid 17 hours ago [-]
> "I found them!" you might say to yourself. But who are you telling?
I don't, personally, have this internal monologue. My interior world is a roiling foam of images, feelings and intuitions, memories and imagined possibilities that slosh around solid concepts and facts like boulders in the surf. I have no trouble thinking of words when I need to but I must first conjure up an audience or sit down to journal.
Before these kinds of interpretive posts, I thought the idea of talking to one's self was just a metaphor.
I would expect LLMs to develop some similar non-verbal structure deep within their black boxes, but I know from my own experience that there's more to cogitation than language.
Will today's LLMs achieve consciousness? Almost certainly not.
Will AI as a general concept ever achieve human level cognition and sentience? Depends on your definition of "ever".
Anyone who tries to feed you a line about "never" doesn't understand what they're talking about. On almost any topic.
AI as a concept is never going away and if we keep working the problem, we will eventually achieve a sentient AI. There's nothing magical about meat, there's only things that we don't understand.
To assert that only a human meat brain can be conscious is to assert that only humans can be conscious. That excludes alien life for one, and a large fraction of terrestrial life. One can argue quite successfully that many terrestrial species are conscious and aware. Elephants, great apes, whales, dolphins, octopi, pigs, corvids.
If octopuses are conscious (and I have good reason to believe they are), why is it so ridiculous to think that a hunk of silicon can do it?
Humans really are not special. We're just animals like any other. Our brains are not cosmically blessed and unique. There is no magic.
casey2 19 hours ago [-]
Consciousness, or at least sentience, is just the meta-system that allows one to mesh multiple sensory inputs (all of which are generated by the brain itself, maybe using some real information).
Whether AI needs consciousness is a totally separate question. LLMs are the great Chinese Room; I'd say they have unconscious understanding. The distinction is like C vs. Lisp: similarly meaningless, though it may become meaningful in a constrained self-learning robotics context.
AI will never need to be conscious, AI isn't a moth flying to an open flame, but people will try anyway
grantcas 15 hours ago [-]
[dead]
in-silico 22 hours ago [-]
TLDR: the author does not believe in computational functionalism. He instead seems to make up a mental image of how a neural network might work on a computer and uses that representation instead.