The issue with Aaronson's response ( https://scottaaronson.blog/?p=1799 ) is that it comes from the perspective of first accepting Chalmers' "Hard Problem of consciousness". The "Hard Problem of consciousness", despite the name, is actually a statement of position. Briefly, it states that:
a) We have experiences, like being hungry, tasting a strawberry, seeing blue, etc.
b) It's possible to imagine a being which located food when hungry, ate when necessary, used colours to navigate the world, etc, but did not have these conscious experiences. To put it another way, what we've learnt so far about how the brain works gives us great insight into how we would eat, navigate, etc, but does not give us any insight into either how or why we would have conscious experiences.
c) Therefore conscious experiences are not explicable by physical brain processes.
This is a belief arising from an appeal to intuition and does not present a testable or falsifiable proposition.
Integrated Information Theory (of which I am not a vigorous proponent) posits that the experience of consciousness is related to the level of "integration" of a system. However, if you come to this while believing in the "Hard Problem", IIT cannot possibly be true, because it relates consciousness to physical properties of the system, such as connectivity, while the "Hard Problem" defines consciousness as something which does not arise from any physical property of the system.
Analemma_ 671 days ago [-]
Aaronson's point later in the post that it is possible to construct a function which has an arbitrarily high phi but is nonetheless obviously not conscious -- which wrecks IIT completely, at least in its current formulation -- does not depend on anything to do with the hard problem of consciousness.
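For a flavor of that construction (a from-memory sketch, so the details here are mine, not necessarily Aaronson's exact setup): one of his examples is a system whose entire dynamics is multiplication by a fixed Vandermonde matrix over a finite field -- basically a Reed-Solomon-style encoder. In Python:

    import numpy as np

    P = 211  # a prime modulus; arithmetic is over GF(P)

    def vandermonde_step(state):
        # One "time step": multiply the state by a fixed n x n Vandermonde
        # matrix mod P. The encoding smears every input symbol across all
        # outputs, Reed-Solomon style, so no bipartition of the system
        # factorizes the dynamics -- which, Aaronson argues, forces phi
        # to grow with n even though this is "just" an encoder.
        n = len(state)
        V = np.array([[pow(x, j, P) for j in range(n)]
                      for x in range(1, n + 1)])
        return (V @ state) % P

    state = np.arange(16)  # a toy 16-symbol state
    print(vandermonde_step(state))

Nobody thinks a rack of Reed-Solomon encoders becomes more conscious as you add symbols, which is exactly the reductio.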
This argument has played out before, so I'll just link to that discussion, quoting from https://www.scottaaronson.com/response-p1.pdf :

> There’s two responses to this. The easiest response is to say that φ is merely necessary for C —problem solved. GT’s response would be to challenge your intuition for things being unconscious. Here’s a historical analogy; imagine when the Kelvin temperature scale was introduced. Here Kelvin was saying that just about everything has heat in it. In fact, even the coldest thing you’ve touched actually has substantial heat in it! Think of IIT as attempting to put a Kelvin-scale on our notions of C. I find this “Kelvin scale for C” analogy makes the panpsychism much more palatable.
Then, Scott's response to that:
> Suppose, again, that I told you that physicists since Kelvin had gotten the definition of temperature all wrong, and that I had a new, better definition. And, when I built a Scott-thermometer that measures true temperatures, it delivered the shocking result that boiling water is actually colder than ice. You’d probably tell me where to shove my Scott-thermometer. But wait: how do you know that I’m not the Copernicus of heat, and that future generations won’t celebrate my breakthrough while scoffing at your small-mindedness?
Ok, pretty dismissive, but this doesn't actually address what the first quote mentions. Maybe phi's necessary but not sufficient for consciousness; that would still be pretty interesting. (I.e. maybe the pesky function does have high phi, but phi is not itself consciousness, because Consciousness(system) = phi(system) + Corrective_factor).
Or suppose we add an axiom like "feedback loops that affect future trajectory" to exclude such a pesky static function definition, or better still suppose IIT moves towards something that generally accounts for more subtle dynamics as well as structure. I can't help but think of the relationship between Euclidean/non-Euclidean geometry here, especially when "obviousness" comes up in these discussions. It seems productive to play with adding/discarding axioms and looking for richness/consistency. And isn't lots of modern physics about exploring model parameter-space ( https://www.quantamagazine.org/using-the-bootstrap-physicist... ) to zero in on "the" model given "a" model? Like 1D-Ising won't phase-transition; maybe current 1D-IIT won't cut it but has some fruitful generalization.
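(Footnote on "1D-Ising won't phase-transition", since it's doing real work in the analogy: the standard transfer-matrix calculation, with coupling J and field h, is

    H = -J \sum_i s_i s_{i+1} - h \sum_i s_i ,   s_i \in {-1, +1}

    T = ( e^{\beta(J+h)}    e^{-\beta J}    )
        ( e^{-\beta J}      e^{\beta(J-h)} )

    Z_N = Tr T^N = \lambda_+^N + \lambda_-^N

    \lambda_\pm = e^{\beta J} \cosh(\beta h)
                  \pm \sqrt{ e^{2\beta J} \sinh^2(\beta h) + e^{-2\beta J} }

    f = -(1/\beta) \lim_{N \to \infty} (1/N) \ln Z_N = -(1/\beta) \ln \lambda_+

\lambda_+ is positive and analytic in \beta and h for every T > 0, so the free energy has no finite-temperature singularity: no phase transition except in the T -> 0 limit. In 2D the analogous calculation (Onsager) does produce one, which is the kind of "fruitful generalization" being hoped for.)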
mannykannot 670 days ago [-]
From the perspective that minds are very complex things (which is not a very controversial proposition), it would not be at all surprising to find that human-like minds unavoidably have a high φ. Unless the measure can differentiate between minds and other complex things that score as highly, however, this would not be particularly interesting and it is hard to see how it would get us noticeably closer to understanding minds. Thus, Aaronson's identification of structures capable of arbitrarily high φ, yet not displaying higher-level consciousness (such as being self-aware and having a theory of mind) is indeed a significant problem for IIT.
If proponents of IIT object that it is only an assumption that these systems are not conscious, they should be pleased that Aaronson has given them a way to empirically evaluate their thesis!
bawolff 670 days ago [-]
> Maybe phi's necessary but not sufficient for consciousness; that would still be pretty interesting.
Would it be interesting? Why?
Coming up with conditions that are necessary but not sufficient is pretty easy. Sure, they are interesting if the conditions are non-intuitive and you've actually proven they are necessary. I'm not sure either criterion is met here.
photonthug 670 days ago [-]
I mean... doesn't your logic here advocate throwing away every "lower-bound" type of result in math, suggest that it's boring/trivial to answer the smallest LLM that speaks English ( https://arxiv.org/abs/2305.07759 ), etc?
bawolff 670 days ago [-]
No, just that a lower bound in and of itself isn't interesting without further justification as to why. Most non-tight lower bounds are trivial.
> suggest that it's boring/trivial to answer the smallest LLM that speaks English ( https://arxiv.org/abs/2305.07759 ), etc?

That's kind of the opposite question. There's a difference between asking "What is the smallest LLM that can speak English?" and "What size must an LLM at least be in order to speak English?". The former is an interesting question. The latter is trivial, as it has an answer of "1". I believe the parent was talking about the latter type of lower bound, not the former.
freehorse 670 days ago [-]
That would classify any current or shortly-foreseeable AI system as non-conscious (or very-low-conscious), though, while under other theories of consciousness it is not clear what the verdict would be (because they make extremely vague predictions and admit quite contradictory models).
goatlover 671 days ago [-]
I don't think you need b) to make the argument work. You just need to point out that a) isn't present in physical theories, except as labels or correlations. The zombie argument is b), which is just one of several thought experiments to illustrate the argument being made, but it's not necessary to make the argument work. Chalmers, Nagel, McGinn and Block have all made arguments that don't rely on b).
Nagel states it most clearly: science is the view from nowhere. The world doesn't feel like, taste like, look like anything on its own, because those are creature-based sensations which depend on the kind of sensory organs and nervous systems an animal has.
wzdd 671 days ago [-]
The position still seems to boil down to "Reducing experience to labels or correlations doesn't feel right to me", which actually dovetails nicely with your Nagel quote, since you can't rely on your intuition when attempting to understand a system from the inside.
goatlover 671 days ago [-]
The fact that it feels like something at all is enough of a rebuttal to reducing experience to labels or correlations. That's not an intuition, it's just a statement of empirical fact, since empiricism relies on phenomenal observations.
wzdd 670 days ago [-]
The fact that it feels like something at all could simply be a description of what it's like for these correlations to arise inside a complex system. It's not a rebuttal, it's just a restatement of the position.
jhedwards 671 days ago [-]
I think the answer to a) and b) is actually quite obvious: we are not an automaton that simply eats when the need arises. The experience of hunger decouples our behavior from the need to eat, and we can plan according to the strength of our hunger relative to other needs.
We are not an automaton that simply eats a strawberry because it is edible. We are decision making organisms that can adjust the composition of our diet based on the chemical properties of the food. We can presume that there is an evolutionary advantage to being able to taste and therefore select from multiple dietary options.
It is clear that the conscious experiences described are extremely subtle forms of information that allow us to plan and make decisions based on what they tell us, rather than simply react blindly to the world. I think it is pretty obvious that this is a massive advantage, and it is also more in line with my experience as a conscious being.
Blahah 670 days ago [-]
Plants, bacteria, and fungi can do equivalent planning and decision making about nutrient intake. What we experience as consciousness is indistinguishable from evolved fecundity preservation in a complex and dynamic environment, based on the conditions you highlight.
omniglottal 670 days ago [-]
Looks like a hyperbolic flaw in reasoning. The "Hard Problem", as described, seems more to define consciousness as something which does not arise exclusively from (known) physical properties of the system.
In what reality would the phenomenon of consciousness be defined without any underlying physical property? It seems that study would aptly be called "metaphysics".
That said, I'm still keen to see more/better discussion of IIT, and/or more modern extensions. IIT is certainly quantitative, and arguably elegant, despite its problems. So it puzzles me how eager some people are to just junk it rather than repair it.
mrtranscendence 671 days ago [-]
I rarely see IIT characterized as complete junk. Personally, I don't see much value in it as I don't think it answers, or even grapples with, the hard problem of consciousness. Tell me how integrated information gives rise to subjective experience and maybe I'll start buying what they're selling.
photonthug 670 days ago [-]
I think what they are selling is not answers, but more like a non-metaphysical platform to propose answers within, i.e. a decent start at a scientific framework. Frameworks shouldn't be confused with answers, although they might represent some way of getting closer to answers. Besides IIT, is there an alternative scientific/quantified framework that could even try to grapple with your question?
For purposes of comparison, here's another approach that's super interesting ( https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4168033/ ). Both frameworks look at system structure/dynamics and then attempt to deduce/quantify some aspect of them that (hopefully) gives us some insight into "Mind" or "Consciousness". But to me, grounding such inquiries directly in neuroscience seems awkward, because the brain only runs Mind. Uncovering a bunch of implementation details about that from fMRIs probably won't give us a lot of interesting insight at the right level. To an extent some implementation details are important and might shed light on the architecture/algorithm of Consciousness/Mind, but evolved systems are also going to be cluttered up with total hacks that evolution brute-forced, where the implementation is kinda arbitrary.
All of which is to say, IIT being "platform agnostic" and proposing stuff like a thermostat being more conscious than a rock, a dog more so than a thermostat, and a human more than a dog is not just a cool trick. Being able to approach this kind of intuitive problem in a somewhat rigorous way seems like a necessary requirement for a serious scientific theory of mind. I like the Ising and graph theory modeling approach, but the implementation details of {neuron-counting, fMRI, lesions on functional areas, etc} are probably a distraction if you're trying to understand mind rather than medicine.
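To make "somewhat rigorous" slightly concrete, here's a minimal toy of my own (not Tononi's phi, which needs the full transition-probability analysis; this just uses the Fiedler value of an interaction graph as a crude integration stand-in, assuming numpy and networkx):

    import networkx as nx
    import numpy as np

    def integration_proxy(g):
        # Second-smallest Laplacian eigenvalue (the Fiedler value): zero
        # when the system splits into causally independent parts, larger
        # the harder the interaction graph is to cut in two.
        lap = nx.laplacian_matrix(g).toarray().astype(float)
        return float(np.sort(np.linalg.eigvalsh(lap))[1])

    rock = nx.empty_graph(10)               # parts, but no interactions
    thermostat = nx.path_graph(2)           # one sensor-actuator link
    recurrent_net = nx.complete_graph(10)   # cartoon densely-coupled net

    for name, g in [("rock", rock), ("thermostat", thermostat),
                    ("recurrent net", recurrent_net)]:
        print(name, "->", round(integration_proxy(g), 2))
    # rock -> 0.0, thermostat -> 2.0, recurrent net -> 10.0

The ordering is baked in by construction, of course; the point is only that "rock < thermostat < brain-ish" intuitions can be attached to a number at all, which is the move IIT makes far more carefully with phi.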
rvcdbn 671 days ago [-]
You could ask the same kind of question about General Relativity. Sure, mass/energy causes the curvature of spacetime, but tell me how it does or your theory is worthless. Even without the "how", GR still makes testable predictions that are borne out by experiment. And this is the shape of all our physical theories. I think this is the kind of theory IIT is trying to be.
mrtranscendence 670 days ago [-]
Well, there's a reason it's called "the hard problem of consciousness" and not "just some incidental observation about models of consciousness". When evaluating general relativity it's not important to ask how mass/energy causes the curvature of spacetime; that's not the point of general relativity. But a theory of consciousness that doesn't explain the most intriguing, important component of consciousness doesn't explain what's important to me.
feoren 670 days ago [-]
I was nodding along to everything until they got to the 4th and 5th axioms:
> Integration: ... seeing a blue book is irreducible to seeing a book without the color blue, plus the color blue without the book
What? Why is this taken as a given? This is not at all evident. You could show someone a blue book and then ask them to imagine that same book, but with a red cover instead, and many people would feel perfectly comfortable doing this, having some experience close to actually seeing that red book. If that doesn't straight-up prove this axiom wrong (I personally think it does), then it at least shows that this axiom is not clearly "evident".
> Exclusion: my experience flows at a particular speed—each experience encompassing say a hundred milliseconds or so
Have these people never been bored? Have they never heard of how "time slows down" during a car crash? This is not only non-evident, it's another one that seems patently false if you actually talk to any human beings about their own experiences.
transfer92 670 days ago [-]
I'm either not smart enough to understand it, or I'm too smart to be bamboozled by it.
(Physics A.B. from Harvard & PhD. from UC Berkeley, FWIW)
akhayam 670 days ago [-]
As someone who has dabbled in information theory (the real one), I am just as confused as you.
What I have observed in the past decade is that calling things “information theory of something” makes it somehow more palatable for a broader audience.
https://en.wikipedia.org/wiki/Object-oriented_ontology
https://en.wikipedia.org/wiki/Speculative_realism

I don't think IIT is a correct theory or even on the right track. Nevertheless I fail to see how it's "silly". It's no sillier than any other putative scientific theory of consciousness -- less silly, since IIT actually makes a testable prediction.
DontchaKnowit 671 days ago [-]
I can't even find anything resembling an explanation of what "speculative realism" posits in that wiki article.
What is it and why is it silly?
Der_Einzige 671 days ago [-]
"Thus, all object relations, human and nonhuman, are said to exist on an equal ontological footing with one another"
They are ideas from people who don't like anthropocentrism, which Integrated Information Theory is also opposed to.
It's worth noting that all of the people who believe in any of this are philosophical wingcucks like Nick Land.
DontchaKnowit 669 days ago [-]
I understood basically 0 of this.
WFHRenaissance 671 days ago [-]
LOL Chill with the Land hate.
Der_Einzige 671 days ago [-]
He's openly fascist and a charlatan. I shouldn't be surprised that people here like him.
Also no surprise that people influenced by him, e.g. Mark Fisher, killed themselves.
DontchaKnowit 669 days ago [-]
okay, I must say - I just read his Wikipedia page and saw a line saying he was explicitly racist - skimmed all the sources and there was zero evidence of racism.
Then I read his article "Hyper-Racism". It is not racist, as far as I can tell.
Can you hook me up with something damning about him?
cause to me, he just seems like a mega autistic edge-lord philosopher who has some fairly prescient views about what the future is going to look like.
(and Steve Bannon happens to like him, so people call him racist)
EDIT: sorry, misread your comment - you said fascist, not racist. Anyway, I'm going to keep my above comment because I'm interested.
Der_Einzige 669 days ago [-]
Read his essay "Dark Enlightenment". Again, he is openly fascist, even if he says he's not. It's a meme for actual fascists to claim they aren't.
"Land disputes the similarity between his ideas and fascism, claiming that "Fascism is a mass anti-capitalist movement,"[4] whereas he prefers that "[capitalist] corporate power should become the organizing force in society."
Okay Nick, you're an idiot who never actually looked at how fascist societies, like fascist Italy, were organized. They were "corporatist" and ruthlessly capitalistic, and even Hitler only included the "socialism" part of National Socialism to appease the masses. Hitler and his regime were not socialist, and had they been, they never would have secured the support of big business.

See the fascist section of these articles if you don't believe me:

https://en.wikipedia.org/wiki/Corporatism
https://en.wikipedia.org/wiki/Mefo_bills
https://en.wikipedia.org/wiki/Secret_Meeting_of_20_February_...
https://en.wikipedia.org/wiki/Freundeskreis_der_Wirtschaft
https://en.wikipedia.org/wiki/Industrielleneingabe
I don't see what makes them silly. Metaphysics is hard, and speculative realism is a proposed answer to modern transcendental idealism stemming from Kant, where the worry is that we can't say objective things about the world independent of human thought. Things like dinosaurs existing before humans evolved are seen as correlated to our experiences with fossils in the ground, and not as an objective truth about the universe. Speculative realism is a way around that while respecting the philosophical arguments of the Kantians.
mannykannot 670 days ago [-]
Our mental experiences seem to be subjective, so what prospect is there of making any objective statement about the world if correlations are inadequate for the purpose?
goatlover 670 days ago [-]
I think the argument is that it can't just be correlations because then we get stuck in the framework of the world looks as if things were going on before humans existed without being able to say that's true. Realists want to be able to say there are fossils in the ground because dinosaurs really did exist before us and not just that it appears that way to humans. Thus the speculative part of how to define reality in a way that isn't just correlated to our experiences.
cpsempek 671 days ago [-]
How does déjà vu, that is, the re-experiencing of an experience, fit into this theory? It appears that the Information axiom fails to be Essential when one considers déjà vu.
n4r9 670 days ago [-]
I don't see that deja vu poses a problem. Deja vu is simply a feeling of familiarity associated with an experience. It doesn't involve actually having an experience more than once.
cpsempek 669 days ago [-]
Possibly - but I could just claim that for me déjà vu is exactly when two experiences are not differentiable from one another. And if someone claims to have experienced this moment before, identically, how would I refute their claim? It's their experience, after all.
n4r9 669 days ago [-]
When I experience deja vu, I can never place the time or location that I actually had a previous similar experience. Nor can I predict in advance any occurrence, despite it seeming so inevitable when it does occur. It is simply a dreamlike quality of familiarity. Having done some reading about it, this seems to be the typical state of affairs.
Given this, the simplest and most compelling explanation is that the brain has somehow short-circuited itself into invoking a sense of familiarity. Otherwise I'd expect, at least some people, some of the time, to be able to either link the two experiences or make predictions. The claim that exactly the same experience has actually occurred twice is the more remarkable, both in terms of evidence and plausible mechanism.
Whilst I do appreciate that experiences are highly subjective beasts, it doesn't feel like this really challenges the linked article. Besides, if one is so wedded to the subjectivity of experience, they might as well just say that nothing can objectively explain consciousness because it's subjective, end of.