Golds
« Reply #40 on: October 28, 2024, 08:15:29 AM »
Oh yeah, I think they should just use nuclear power for a lot of things if the climate issue is so dire. Seems simple.
Microsoft is reactivating mothballed nuclear reactors at Three Mile Island in Pennsylvania, to power the giant datacenters full of frontier AI models. I think this is a good idea, as again, no carbon emissions. Sadly, as an intern in this military structure, it is not as much fun. But that big paycheck makes it all worthwhile.
michaelplzno
« Reply #41 on: October 28, 2024, 09:00:41 AM »
@jsnake The reason I disagreed was principled: consciousness isn't well defined enough to say that it is non-computational. The Turing test was designed for exactly that kind of analysis. And the ones who think AI can't do what humans do are more likely the religious ones, not the ones who say the machines are just as good as a person.
J-Snake
« Reply #42 on: October 28, 2024, 11:50:19 AM »
You are confusing two distinct concepts here. These two statements do not relate to each other:

1. Consciousness is non-computational.
2. Relating consciousness to AI is purely a matter of religious belief.
There is no connection between these two statements. But the second statement is undeniably true. Any doubts about the second statement stem from arbitrary assumptions, as there is no inherent relationship between AI and consciousness by design.
Similarly, the Turing Test is not designed to analyze consciousness. The real question is whether consciousness is necessary for acting intelligently.
michaelplzno
« Reply #43 on: October 28, 2024, 05:24:34 PM »
1. Consciousness is non-computational. -> this directly implies that humans (which are conscious) have some kind of soul or other non-empirical mechanism by which they make decisions, like quantum fluctuations in their brain or something. Something that cannot be measured in a traditional computational way. Most of these concepts would be considered to be religious in nature.

2. Relating consciousness to AI is purely a matter of religious belief. -> only in the way that any belief is inherently religious, because to believe something without any kind of empirical backing is an act of faith. Sometimes, even when the data supports a conclusion, faith is needed to bridge the gaps, as a sort of scaffolding that is part of human thought.

But premise 1 connects to premise 2 through a third premise:

3. If consciousness is non-computational, it is impossible for AI to be conscious. -> That is, non-computational systems have more capabilities than computational ones. So if you are saying that the concept of human thought is beyond computation, then it cannot be that AI has human thought. But in doing that, you have said that there is a religious component to human thought, because it is essentially not bound by math.
In computer science there is the term "oracle," used for a black box that can solve the halting problem (the undecidable problem of whether a given program will ever terminate). Even a computer with such an "oracle" component inside it (perhaps the human brain possesses one, why not) would still have things it could not compute, which creates a hierarchy of computation that extends even into a world where some kind of magic/religion/soul/god is handing you answers out of nowhere.
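Roughly, the classic argument looks like this; a minimal sketch where would_halt is a made-up stand-in for the impossible black box (the stub only exists so the sketch compiles), not any real function:

    /* Hypothetical black box: returns 1 if the given program eventually halts,
       0 if it loops forever. No correct implementation can exist; the stub
       body is only a placeholder. */
    static int would_halt(const char *program) {
        (void)program;
        return 1;   /* placeholder answer */
    }

    /* Diagonal program: ask the box about this very program, then do the
       opposite of whatever it predicted. */
    int main(void) {
        if (would_halt("this_program.c")) {
            for (;;) { }   /* predicted to halt -> loop forever instead */
        }
        return 0;          /* predicted to loop -> halt immediately */
    }

Whatever would_halt answers about this program is wrong, so no such function can exist. And if you hand the machine a genuine oracle for halting, the same diagonal trick applied to oracle-equipped programs gives a new halting problem that even the oracle can't answer, which is the hierarchy I mean.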
To believe AI is conscious is of course religious, because we don't have an empirical definition of what consciousness is, and to believe that ... well ... any belief has at least some religious backing, because that is the nature of belief without any kind of empiricism to measure it.

The Turing test is relevant because it gives us a way of testing how conscious a computer is in an empirical way: by running an interview and having a conversation without knowing whether a human or a computer is behind the wheel. We would need more such information, and better tests, to know what is and is not conscious, as opposed to just guessing through blind faith that "hey, this thing seems conscious, why not?"
J-Snake
« Reply #44 on: October 28, 2024, 09:24:47 PM »
> 2. Relating consciousness to AI is purely a matter of religious belief. -> only in the way that any belief is inherently religious

In my explanation, a religious belief is based on arbitrary assumptions. A sound belief is based on strong indication and profound experience. So not every belief is religious.

> But premise 1 connects to premise 2 through a third premise:

There is still no connection, because these are two orthogonal statements. Statement 2 is true independently of the truth value of statement 1. If statement 1 is true, then statement 2 is true. If statement 1 is not true, statement 2 is still true. It is important not to get confused here.

> In computer science there is the term "oracle," used for a black box that can solve the halting problem...

It only means there is no repeating pattern, and it's orthogonal to my statements about consciousness.

> The Turing test is relevant because it gives us a way of testing how conscious a computer is in an empirical way:

No need to relate consciousness here again if you want to be scientific. The only thing that you are actually testing for real is whether consciousness is necessary for acting intelligently. This is a completely different question that has no relation to consciousness.
michaelplzno
« Reply #45 on: October 29, 2024, 02:46:08 AM »
Not sure why we are drilling into semantics here, but you seem interested in that, so whatever.

> In my explanation, a religious belief is based on arbitrary assumptions. A sound belief is based on strong indication and profound experience. So not every belief is religious.

In ontology, we can classify beliefs into categories like irrational, rational, and based on evidence. I believe 2+2=4, but you wouldn't say that. You would say I *know* that 2+2=4. That is both that I believe it, and that there is a theoretical backing that explains the veracity of the claim. In my view, belief is still the domain of religion, in that some people don't believe 2+2=4, for irrational reasons, no matter how much you explain the theory. People's belief structures are based on some kind of "magic" rather than factual empiricism.

> There is still no connection, because these are two orthogonal statements.

Even in geometry, two orthogonal lines share at least one point (in most cases). Similarly, both statements are about consciousness, so they are at least thematically connected. 1) My car is red. 2) My car is a great car. These statements are orthogonal, but anyone who knows basic logic knows that we can infer a third premise: 3) There are great cars that are red. You can say "well, the third statement is orthogonal." I guess? I'm not sure what orthogonal means here.

> It is important not to get confused here.

Not sure why any of this is important at all. Even the most popular threads ever here, like Minecraft's announcement thread ... are they "important," would you say?

> It only means there is no repeating pattern

A computer may loop forever without repeating a pattern.

> No need to relate consciousness here again if you want to be scientific.

My entire point is that consciousness should be more scientifically scrutinized. When we get into beliefs, there are the 2+2=4 beliefs (scientific), and then there are beliefs like "There are aliens who live on Alpha Centauri who want me to do my taxes." 2 + 2 = 4 is something we can check through empiricism, math, and just general scientific knowledge. The tax aliens are something we cannot really check or know for certain, because there is currently no way to see the Alpha Centauri star system's planets, or to know its inhabitants' wishes re: taxes. So the tax aliens are a religious thing. I'm trying to say that consciousness shouldn't be the domain of religion; empirical tests like the Turing Test should be able to analyze it better and give us more 2 + 2 = 4 (empirical) kinds of data on the concept, rather than more religious space-tax-alien kinds of belief.
J-Snake
« Reply #46 on: October 29, 2024, 08:02:06 AM »
> A computer may loop forever without repeating a pattern.

Only by external influence or infinite resources. But this does not invalidate my statement.

> People's belief structures are based on some kind of "magic" rather than factual empiricism.

Religious beliefs are not inherently spiritual; they are often rooted in a single source of authority, such as 'God says so,' without additional justification. Claiming that AI will become conscious simply because it surpasses the brain's complexity involves the same level of magical thinking, with the magic attributed to complexity itself. Replace the word 'religion' with 'arbitrary assumption' for clarity.

Your understanding of orthogonality misses the mark. The essence of orthogonality is that the state of one dimension or statement does not determine or require the state of another. For instance, there can be great cars that are red, but great cars aren't necessarily red. Similarly, while conscious, intelligent beings exist, consciousness is not necessarily a prerequisite for intelligence. Consistent success in the Turing Test would demonstrate that consciousness is not a requirement for intelligent behavior. This is the logical conclusion if we understand orthogonality correctly.
michaelplzno
« Reply #47 on: October 29, 2024, 09:32:52 AM »
> Only by external influence or infinite resources. But this does not invalidate my statement.

No, the way computers work, as in a Turing machine, a program will either run forever or halt. This is a theoretical question, like whether 2 + 2 = 4 or 2 + 2 = 5: there is an answer. You do not need infinite resources or external factors to loop forever while never repeating a pattern. For example, the following code:

    #include <stdio.h>

    int main(void) {
        /* Intended as an endless loop (read it with unbounded integers in mind). */
        for (int i = 1; i > 0; i++) {
            for (int j = 0; j < i; j++) {
                putchar('A');   /* line i prints i copies of 'A' */
            }
            putchar('\n');
        }
        return 0;
    }

will print:

    A
    AA
    AAA
    AAAA
    AAAAA
    AAAAAA
    ...

We know this will loop forever, and we know that it never repeats a line, as each line is longer than the last. We can know this without running the code or even compiling it.

> orthogonal

Not really sure I understand how you are using this here. If you are trying to create some kind of semantic difference between intelligence and consciousness, I'm sort of only half following it. Being intelligent and being conscious are totally different things? Like a dog is conscious but not intelligent, a genius human indie game dev is conscious and intelligent, but AI is intelligent but not conscious, or something? Not sure how this invalidates anything I've said either.

> Arbitrary Assumption

Fine, though I wouldn't belittle such beliefs, in the way I wouldn't belittle "imaginary numbers" in math. They can provide real solutions when plugged into different formulas.
J-Snake
« Reply #48 on: October 29, 2024, 02:47:35 PM »
> we know that it never repeats a line, as each line is longer than the last.

The pattern here is in the successive increment of each line. Your understanding of orthogonality and patterns lacks necessary abstraction.

> Not sure how this invalidates anything I've said either.

Orthogonality shows that your association of consciousness with the Turing Test is rooted in arbitrary assumptions, for example.
michaelplzno
« Reply #49 on: October 29, 2024, 02:54:56 PM »
> > we know that it never repeats a line, as each line is longer than the last.
>
> The pattern here is in the successive increment of each line. Your understanding of orthogonality and patterns lacks necessary abstraction.

If the program calculated the digits of pi, would that be a better example of an infinite sequence that does not repeat? I thought the code of that example would be a bit dense.
michaelplzno
« Reply #50 on: October 29, 2024, 02:58:52 PM »
> Orthogonality shows that your association of consciousness with the Turing Test is rooted in arbitrary assumptions, for example.

To me it seems like the conclusion you yourself are drawing here comes from orthogonality, in that it sits at a right angle to the rest of the reasoning you use, lmao: https://www.merriam-webster.com/dictionary/orthogonal
J-Snake
« Reply #51 on: October 29, 2024, 04:04:40 PM »
> If the program calculated the digits of pi, would that be a better example of an infinite sequence that does not repeat? I thought the code of that example would be a bit dense.

Any example without a recurring pattern would just confirm what I already said. To expand on the concept of patterns in an abstract sense, imagine a real computer with finite resources, like your laptop, rather than a theoretical Turing machine. A running program in this context essentially acts as a deterministic state machine with an astronomical but still finite number of states. This means that any given program either terminates or eventually repeats a pattern, as the limited states will inevitably recur at some point. You can move to a higher order of thinking and consider patterns in the order and frequency of each recurring state itself. With finite resources, the absence of any patterns at this level would only be possible through external influence.

As a "comp sci" guy, I thought you understood the abstract concept of orthogonality. In any case, I already elaborated on what it means.
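To make the pigeonhole point concrete, here is a toy sketch; the state count and the step rule are arbitrary stand-ins for "one step of a program on a machine with finite memory", not a model of any real program:

    #include <stdio.h>

    #define NUM_STATES 10000   /* toy finite state space */

    /* Arbitrary deterministic transition rule. */
    static int step(int s) {
        return (s * 37 + 11) % NUM_STATES;
    }

    int main(void) {
        static char seen[NUM_STATES];   /* which states have occurred so far */
        int s = 1;                      /* initial state */
        for (int t = 0; ; t++) {
            if (seen[s]) {              /* pigeonhole: some state recurred */
                printf("state %d repeated after %d steps -> the run cycles\n", s, t);
                return 0;
            }
            seen[s] = 1;
            s = step(s);
        }
    }

A state has to recur within NUM_STATES + 1 steps, and because the rule is deterministic, everything after that point repeats forever. The only escapes are external input (new states injected from outside) or unbounded memory, which is exactly the contrast with the idealized infinite-tape machine.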
michaelplzno
« Reply #52 on: October 29, 2024, 04:35:21 PM »
> The essence of orthogonality is that the state of one dimension or statement does not determine or require the state of another. For instance, there can be great cars that are red, but great cars aren't necessarily red. Similarly, while conscious, intelligent beings exist, consciousness is not necessarily a prerequisite for intelligence.

It sounds like you are describing an "independence assumption," that the logical axioms are independent of one another. I've never heard what you are describing called "orthogonality," which is (in my dictionary) more of a geometric thing about looking along a different spatial axis. Though I was once the talk of my CS department for lousing up some independence assumptions in the probability distributions of the game of bingo, due to a poorly worded paper I read.

If you are just trying to get at the ephemeral nature of consciousness being independent of intelligence, there is a semantic difference, but you still aren't offering much in the way of definitions of either consciousness or intelligence, and to say that the Turing Test only measures intelligence and not consciousness is something I don't agree with, though that may be ... arbitrary assumptions? I'm pretty sure I have more reasoning than just that, though. IQ tests (Intelligence Quotient) measure intelligence, and computers are better suited to such pattern tests; the whole point of the Turing Test is not to measure intelligence but consciousness, that is, can the test subject pass as an actual human. It's not a perfect test, but I use it as an example of something that is designed to measure consciousness rather than intelligence. Perhaps in your infinite wisdom you could design a better consciousness test, jsnake.

> imagine a real computer with finite resources

Eventually a real computer will run out of memory, yes, though the amount of complexity one can reach with a modern machine is astronomical. Even with such a finite model of computation, with limited memory instead of an infinite tape, there would be incomputable things... I think ... (I have not written a proof) that the question of whether a computer of such a design will loop (reach the same state twice) or terminate (end computation or run out of memory) is also not computable. As an attempt at proof, let's say ORACLE(X) takes a piece of code and tells whether it will loop or terminate; then you could run ORACLE(X) on your own code, and if it says you will loop, just terminate, otherwise start a short loop that never terminates. This program will do the opposite of what it is supposed to. This assumes that ORACLE can also run within the limited memory model as well? It gets fuzzy there.
J-Snake
« Reply #53 on: October 29, 2024, 05:12:58 PM »
> but you still aren't offering much in the way of definitions of either consciousness or intelligence

I don't need to prove anything here. You're the one asserting the possible existence of a 'red unicorn,' metaphorically speaking. I'm simply pointing out that this claim is based on a random assumption, and explaining why. But I am actually always open to sound speculation on why AI and consciousness might be related.

> the whole point of the Turing Test is not to measure intelligence but consciousness, that is, can the test subject pass as an actual human.

Pretty sure a capable mind like Turing understood the limits of what the Turing Test actually demonstrated and sidestepped the question of consciousness entirely. Claiming 'planes pass as birds because they can fly' involves the same level of misguided reasoning.

> Even with such a finite model of computation, with limited memory instead of an infinite tape, there would be incomputable things...

Even with an infinite tape, there are incomputable things. This is not a secret.
michaelplzno
« Reply #54 on: October 29, 2024, 06:43:22 PM »
> sidestepped the question of consciousness entirely

You don't have to prove anything, and neither did Turing (besides his numerous computer-related proofs). I'm just saying it would be a better conversation if you were to posit some definition of what consciousness is rather than be defensive. To me, I believe you need some non-computational element to achieve consciousness, but as I've said, I cannot prove that; I guess that's a red unicorn. You, on the other hand, won't even offer a definition of the word. Ok... I guess that's the smart move? Wouldn't want to make a mistake, because then you'd look bad.
J-Snake
« Reply #55 on: October 29, 2024, 09:32:06 PM »
A speculative conversation about consciousness and AI is not a justification for misguided reasoning. Providing a definition for consciousness is just as random or nonsensical as relating consciousness to the Turing Test in some way. So you won't hear any definitions from me about that. It's also a thread about AI, and everything AI-related can be discussed to full effect without the notion of consciousness, as AI is not based on or influenced by any notion of consciousness.

Having said that, it is not that hard to say some speculative things about consciousness, some of them backed by deep meditative experience inaccessible to traditional science. For example, that it is not the result of a computation. But there are still levels to consciousness, from being fully aware to being barely aware. Consciousness is also not a by-product of a process, but part of the cognitive feedback loop itself. This is something AI cannot have by design; otherwise it would not be a well-defined machine controlled by input data.
michaelplzno
« Reply #56 on: October 30, 2024, 06:30:09 AM »
> random or nonsensical

Don't want to be random or nonsensical, that's the path to fun and we don't want that!

> there are still levels to consciousness

In my own experience, I find that being too aware prevents me from getting stuff done. But thank you for explaining some of your beliefs on the subject.
J-Snake
« Reply #57 on: October 30, 2024, 08:16:19 AM »
> Don't want to be random or nonsensical, that's the path to fun and we don't want that!

You can be random when you want fun. You cannot be random when making an argument. Your approach encourages mental entropy, which won't lead to deeper insights.
michaelplzno
« Reply #58 on: November 01, 2024, 09:56:05 PM »
> You can be random when you want fun. You cannot be random when making an argument.

Obviously, you have never seen an American politician speak.
Golds
« Reply #59 on: November 02, 2024, 11:59:14 AM »
> 1. Consciousness is non-computational. -> this directly implies that humans (which are conscious) have some kind of soul or other non-empirical mechanism by which they make decisions, like quantum fluctuations in their brain or something. Something that cannot be measured in a traditional computational way. Most of these concepts would be considered to be religious in nature.
They were religious in nature, because they had to be. Nietzsche (while he was still sane) had no Einstein or Niels Bohr, or even the father of Eels frontman Mark Oliver Everett, to translate the necessary concepts into language for the common man. We rely on science's hard-fought-for tools all the time now. Maybe there is a soul, and maybe it has something to do with the quantum realm. I bet it will also be found to have something to do with the next layer of machinery underneath that, once we understand it, and the machinery below and above and all around that, like an infinite-dimensional fractal. These questions are probably always going to remain religious, because we definitionally can't understand Everything.