• 0 Posts
  • 26 Comments
Joined 2 months ago
Cake day: July 7th, 2024

  • Schrödinger was not “rejecting” quantum mechanics, he was rejecting people treating things described in a superposition of states as literally existing in “two places at once.” And Schrödinger’s argument still holds up perfectly. What you are doing is equating a very dubious philosophical take on quantum mechanics with quantum mechanics itself, as if anyone who does not adhere to this dubious philosophical take is “denying quantum mechanics.” But this was not what Schrödinger was doing at all.

    What you say here is a popular opinion, but it doesn’t hold up to any scrutiny, which is what Schrödinger was trying to show. Quantum mechanics is a statistical theory where probability amplitudes are complex-valued, so things can have a -100% chance of occurring, or even a 100i% chance of occurring. You interpret what these amplitudes mean in physical reality based on how far they are from zero (the further from zero, the more probable), but the negative signs allow things to cancel out in ways that cannot occur in normal probability theory. These cancellations are known as interference effects, and they are the hallmark of quantum mechanics.
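    To make that difference concrete, here is a minimal sketch in Python (my own illustration, not from the original comment) of how amplitudes can cancel where classical probabilities cannot:

    ```python
    import numpy as np

    # Two paths with equal weight. Classically each contributes a
    # probability of 0.5; quantum mechanically each contributes an
    # *amplitude* whose squared magnitude is 0.5, and amplitudes
    # can be negative.
    amp_path1 = 1 / np.sqrt(2)
    amp_path2 = -1 / np.sqrt(2)  # a "negative chance," impossible classically

    # Classical probabilities add directly: 0.5 + 0.5 = 1.
    print(abs(amp_path1) ** 2 + abs(amp_path2) ** 2)  # 1.0

    # Quantum amplitudes add *before* squaring, so equal and opposite
    # contributions cancel: destructive interference.
    print(abs(amp_path1 + amp_path2) ** 2)  # 0.0
    ```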

    Because quantum probabilities have this difference, some people have wondered if maybe they are not probabilities at all but describe some sort of physical entity. If you believe this, then when you describe a particle as having a 50% probability of being here and a 50% probability of being there, then this is not just a statistical prediction but there must be some sort of “smeared out” entity that is both here and there simultaneously. Schrödinger showed that believing this leads to nonsense as you could trivially set up a chain reaction that scales up the effect of a single particle in a superposition of states to eventually affect a big system, forcing you to describe the big system, like a cat, in a superposition of states. If you believe particles really are “smeared out” here and there simultaneously, then you have to believe cats can be both “smeared out” here and there simultaneously.

    Ironically, it was Schrödinger himself who spawned this way of thinking. Quantum mechanics was originally formulated without superposition in what is known as matrix mechanics. Matrix mechanics is complete, meaning it makes all the same predictions as traditional quantum mechanics. It is a mathematically equivalent theory. Yet, what is different about it is that it does not include any sort of continuous evolution of a quantum state. It only describes discrete observables and how they change when they undergo discrete interactions.

    Schrödinger did not like this on philosophical grounds due to the lack of continuity. There were discrete “gaps” between interactions. He criticized it, saying “I do not believe that the electron hops about like a flea,” and came up with his famous wave equation as a replacement. This wave equation describes a list of probability amplitudes evolving like a wave in between interactions, and it makes the same predictions as matrix mechanics. People then use the wave equation to argue that the particle literally becomes smeared out like a wave in between interactions.

    However, Schrödinger later abandoned this point of view because it leads to nonsense. He pointed out in one of his books that while his wave equation gets rid of the gaps in between interactions, it introduces a new gap between the wave and the particle: the moment you measure the wave, it “jumps” into being a particle randomly, which is sometimes called the “collapse of the wave function.” This made even less sense because suddenly there is a special role for measurement. Take the cat example. Why doesn’t the cat’s observation of this wave cause it to “collapse,” but the person’s observation does? There is no special role for “measurement” in quantum mechanics, so it is unclear how to even answer this within the framework of quantum mechanics.

    Schrödinger was thus arguing to go back to the position of treating quantum mechanics as a theory of discrete interactions. There are just “gaps” between interactions we cannot fill. The probability distribution does not represent a literal physical entity; it is just a predictive tool, a list of probabilities assigned to predict the outcome of an experiment. If we say a particle has a 50% chance of being here and a 50% chance of being there, that is just a prediction of where it will be if we were to measure it, and it shouldn’t be interpreted as the particle being literally smeared out between here and there at the same time.

    There is no reason you have to actually believe particles can be smeared out between here and there at the same time. This is a philosophical interpretation which, if you believe it, carries an enormous number of problems, such as the one Schrödinger pointed out, which ultimately gets to the heart of the measurement problem. But there are even larger problems. Wigner also pointed out a paradox whereby two observers would assign different probability distributions to the same system. If these are merely probabilities, this isn’t a problem. If I flip a coin and look at the outcome and it’s heads, I would say it has a 100% chance of being heads because I saw it as heads, but if I asked you and covered it up so you did not see it, you would assign a 50% probability to it being heads or tails. If you believe the wave function represents a physical entity, then you could set up something similar in quantum mechanics whereby two different observers would describe two different waves, and so the physical shape of the wave would have to differ based on the observer.

    There are a lot more problems as well. A probability distribution scales up in terms of its dimensions exponentially. With a single bit, there are two possible outcomes, 0 and 1. With two bits, there are four possible outcomes, 00, 01, 10, and 11. With three bits, eight outcomes. With four bits, sixteen outcomes. If we assign a probability amplitude to each possible outcome, then the number of degrees of freedom grows exponentially the more bits we have under consideration.
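    As a quick illustration (a sketch of my own, not from the original comment), you can watch the amplitude count double with each added bit:

    ```python
    # One probability amplitude per possible outcome, so 2**n in total.
    for n_bits in [1, 2, 3, 4, 10, 50]:
        print(f"{n_bits} bits -> {2 ** n_bits} amplitudes")
    # 50 bits already requires ~10^15 amplitudes: the degrees of freedom
    # grow exponentially, as described above.
    ```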

    This is also true in quantum mechanics for the wave function, since it is again basically a list of probability amplitudes. If we treat the wave function as representing a physical wave, then this wave would not exist in our four-dimensional spacetime, but instead in an infinite-dimensional space known as a Hilbert space. If you want to believe the universe is actually physically made up of infinite-dimensional waves, have at ya. But personally, I find it much easier to just treat a probability distribution as, well, a probability distribution.


  • What is it then? If you say it’s a wave, well, that wave is in Hilbert space, which is infinite-dimensional, not in spacetime, which is four-dimensional, so what does it mean to say the wave is “going through” the slit if it doesn’t exist in spacetime? Personally, I think all the confusion around QM stems from trying to objectify a probability distribution, which is what people do when they claim it turns into a literal wave.

    To be honest, I think it’s cheating. People are used to physics being continuous, but in quantum mechanics it is discrete. Schrödinger showed that if you take any operator and compute a derivative, you can “fill in the gaps” in between interactions, but this is purely metaphysical. You never see these “in between” gaps. It’s just a nice little mathematical trick and nothing more. Even Schrödinger later abandoned this idea and admitted that trying to fill in the gaps between interactions just leads to confusion, in his books Nature and the Greeks and Science and Humanism.

    What’s even more problematic about this viewpoint is that Schrödinger’s wave equation is a result of a very particular mathematical formalism. It is not actually needed to make correct predictions. Heisenberg had developed what is known as matrix mechanics, whereby you evolve the observables themselves rather than the state vector. Every time there is an interaction, you apply a discrete change to the observables. You always get the right statistical predictions, and yet you don’t need the wave function at all.

    The wave function is purely a result of a particular mathematical formalism and there is no reason to assign it ontological reality. Even then, if you have ever worked with quantum mechanics, it is quite apparent that the wave function is just a function for picking probability amplitudes from a state vector, and the state vector is merely a list of, well, probability amplitudes. Quantum mechanics is probabilistic so we assign things a list of probabilities. Treating a list of probabilities as if it has ontological existence doesn’t even make any sense, and it baffles me that it is so popular for people to do so.

    This is why Hilbert space is infinite-dimensional. If I have a single qubit, there are two possible outcomes, 0 and 1. If I have two qubits, there are four possible outcomes, 00, 01, 10, and 11. If I have three qubits, there are eight possible outcomes, 000, 001, 010, 011, 100, 101, 110, and 111. If I assigned a probability amplitude to each event occurring, then the degrees of freedom would grow exponentially as I included more qubits in my system. The number of degrees of freedom is unbounded.
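    To put rough numbers on that growth (a back-of-the-envelope sketch of my own, assuming 16 bytes per complex amplitude), consider the memory a full state vector would need:

    ```python
    # Memory needed to store a full state vector of n qubits, at one
    # complex128 number (16 bytes) per amplitude.
    for n in [10, 30, 50]:
        gigabytes = (2 ** n) * 16 / 1e9
        print(f"{n} qubits -> {gigabytes:.3g} GB")
    # 30 qubits -> ~17 GB; 50 qubits -> ~18 million GB
    ```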

    This is exactly how Hilbert space works. Interpreting this as a physical infinite-dimensional space through which waves really propagate just makes absolutely no sense!


  • It is weird that you start by criticizing the idea that our physical theories are descriptions of reality and then end by criticizing the Copenhagen interpretation, since that is the Copenhagen interpretation, which says that physics is not about describing nature but about describing what we can say about nature. It doesn’t make claims about underlying ontological reality but specifically says we cannot make those claims from physics, and thus it treats the maths in a more utilitarian fashion.

    The only interpretation of quantum mechanics that actually tries to interpret it at face value as a theory of the natural world is relational quantum mechanics, which isn’t that popular, as most people dislike the notion of reality being relative all the way down. Almost all philosophers in academia define objective reality in terms of something being absolute and point-of-view independent, so most academics struggle to comprehend what it even means to say that reality is relative all the way down, and thus interpreting quantum mechanics as a theory of nature at face value is actually very unpopular.

    All other interpretations either: (1) treat quantum mechanics as incomplete and therefore something needs to be added to it in order to complete it, such as hidden variables in the case of pilot wave theory or superdeterminism, or a universal psi with some underlying mathematics from which to derive the Born rule in the Many Worlds Interpretation, or (2) avoid saying anything about physical reality at all, such as Copenhagen or QBism.

    Since you talk about “free will,” I suppose you are talking about superdeterminism? Superdeterminism works by pointing out that at the Big Bang, everything was localized to a single place, and thus locally causally connected, so all apparent nonlocality could be explained if the correlations between things were all established at the Big Bang. The problem with this point of view, however, is that it only works if you know the initial configuration of all particles in the universe and have a supercomputer powerful enough to trace them forward to the modern day.

    Without it, you cannot actually predict any of these correlations ahead of time. You have to just assume that the particles “know” how to correlate to one another at a distance even though you cannot account for how this happens. Mathematically, this would be the same as a nonlocal hidden variable theory. While you might have a nice underlying philosophical story to go along with it as to how it isn’t truly nonlocal, the maths would still run into contradictions with special relativity. You would find it difficult to construe the maths in such a way that the hidden variables would be Lorentz invariant.

    Superdeterministic models thus struggle to ever get off the ground. They all exist only as toy models. None of them can reproduce all the predictions of quantum field theory, which requires more than just accounting for quantum mechanics; it requires doing so in a way that is also compatible with special relativity.


  • It is only continuous because it is random, so prior to making a measurement, you describe it in terms of a probability distribution called the state vector. The bits 0 and 1 are discrete, but if I said it was random and asked you to describe it, you would assign it a probability between 0 and 1, and thus it suddenly becomes continuous. (Although, in quantum mechanics, probability amplitudes are complex-valued.) The continuous nature of it is really something epistemic and not ontological. We only observe qubits as either 0 or 1, with discrete values, never anything in between the two.
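    A minimal NumPy sketch (my own, not from the original comment) of that distinction between continuous amplitudes and discrete outcomes:

    ```python
    import numpy as np

    # The state vector: a list of complex-valued probability amplitudes,
    # one for |0> and one for |1>. These are continuous quantities.
    psi = np.array([1j / np.sqrt(2), -1 / np.sqrt(2)])

    # Born rule: outcome probabilities are the squared magnitudes.
    probs = np.abs(psi) ** 2
    print(probs)  # [0.5 0.5]

    # But every simulated measurement still returns a discrete 0 or 1;
    # nothing in between is ever observed.
    print(np.random.choice([0, 1], size=10, p=probs))
    ```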




  • Why are you isolating a single algorithm? There are tons of them that speed up various aspects of linear algebra, not just that single one, and there have been many improvements to these algorithms since they were first introduced. There is a lot more in the literature than just in the popular consciousness.

    The point is not that it will speed up every major calculation, but these are calculations that could be made use of, and there will likely even be more similar algorithms discovered if quantum computers are more commonplace. There is a whole branch of research called quantum machine learning that is centered solely around figuring out how to make use of these algorithms to provide performance benefits for machine learning algorithms.

    If they would offer speed benefits, then why wouldn’t you want to have the chip that offers the speed benefits in your phone? Of course, in practical terms, we likely will not have this due to the difficulty and expense of quantum chips, and the fact that they currently have to be cooled to near absolute zero. But your argument suggests that even if consumers somehow had access to technology in their phones that offered performance benefits to their software, they wouldn’t want it.

    That just makes no sense to me. The issue is not that quantum computers could not offer performance benefits in theory. The issue is more about whether or not the theory can be implemented in practical engineering terms, as well as a cost-to-performance ratio. The engineering would have to be good enough to both bring the price down and make the performance benefits high enough to make it worth it.

    It is the same with GPUs. A GPU can only speed up certain problems, and it would thus be even more inefficient to try and force every calculation through the GPU. You have libraries that only call the GPU when it is needed for certain calculations. This ends up offering major performance benefits and if the price of the GPU is low enough and the performance benefits high enough to match what the consumers want, they will buy it. We also have separate AI chips now as well which are making their way into some phones. While there’s no reason at the current moment to believe we will see quantum technology shrunk small and cheap enough to show up in consumer phones, if hypothetically that was the case, I don’t see why consumers wouldn’t want it.

    I am sure clever software developers would figure out how to make use of them if they were available like that. They likely will not be available like that any time in the near future, if ever, but assuming they are, there would probably be a lot of interesting use cases for them that have not even been thought of yet. They will likely remain something largely used by businesses but in my view it will be mostly because of practical concerns. The benefits of them won’t outweigh the cost anytime soon.


  • Uh… one of those algorithms in your list is literally for speeding up linear algebra. Do you think that just because it sounds technical it’s “businessy”? All modern technology is technical; that’s what technology is. It would be like someone saying, “GPUs would be useless to regular people because all they mainly do is speed up matrix multiplication. Who cares about that except for businesses?” Many of these algorithms offer potential speedups for linear algebra operations. That is the basis of both graphics and AI. One of the algorithms in that list is even for machine learning, and there are various algorithms in the literature for potentially speeding up matrix multiplication. It’s huge for regular consumers… assuming the technology could ever progress to reach regular consumers.


  • A person who would state they fully understand quantum mechanics is the last person I would trust to have any understanding of it.

    I find this sentiment can lead to devolving into quantum woo and mysticism. If you think anyone trying to tell you quantum mechanics can be made sense of rationally must be wrong, then you are implicitly suggesting that quantum mechanics is something that cannot be made sense of, and it then logically follows that people who speak in ways that make no sense, and who have no expertise in the subject and so do not even claim to make sense, become the more reliable sources.

    It’s really a sentiment I am not a fan of. When we encounter difficult problems that seem mysterious to us, we should treat the mystery as an opportunity to learn. It is very enjoyable, in my view, to read all the different views people put forward to try and make sense of quantum mechanics, to understand it, and then to contemplate what they have to offer. To me, the joy of a mystery is not to revel in the mystery but to search for solutions to it, and I will say the academic literature is filled with pretty good accounts of QM these days. It’s been around for a century, and a lot of the ideas are very developed.

    I also would not take the game Outer Wilds that seriously. It plays into the myth that quantum effects depend upon whether or not you are “looking,” which is simply not the case. You end up with very bizarre and misleading results from this, for example, in the part where you land on the quantum moon and have to look at the picture of it to keep it from disappearing when your vision is obscured by fog. This makes no sense in light of real physics, because the fog is still part of the moon and your ship is still interacting with the fog, so there is no reason it should hop to somewhere else.

    Now, quantum science isn’t exactly philosophy. I’ve always been interested in philosophy, but it’s by studying quantum mechanics, inspired by that game, that I learned about the mechanic of emergent properties. I think it was in a video about the double-slit experiment.

    The double-slit experiment is a great example of something often misunderstood as evidence that observation plays some fundamental role in quantum mechanics. Yes, if you observe the path the particle takes through the slits, the interference pattern disappears. Yet you can also trivially prove, in a few lines of calculation, that if the particle interacts with a single other particle as it passes through the two slits, this also destroys the interference effects.

    You model this by computing what is called a density matrix for the particle going through the two slits together with the particle it interacts with. You then do what is called a partial trace, whereby you “trace out” the particle it interacts with, giving you a reduced density matrix of only the particle that passes through the two slits. You find that, as a result of interacting with another particle, its coherence terms reduce to zero, i.e. it decoheres and thus loses the ability to interfere with itself.
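    For the curious, here is a minimal NumPy sketch of that calculation (my own toy version, not from the original comment), with a single “which-slit” qubit and one environment particle:

    ```python
    import numpy as np

    # The slit qubit starts in an equal superposition of |left> and |right>.
    psi_slit = np.array([1, 1]) / np.sqrt(2)

    # Alone, it is fully coherent: the off-diagonal terms are 0.5.
    print(np.outer(psi_slit, psi_slit.conj()))  # [[0.5, 0.5], [0.5, 0.5]]

    # Now one other particle interacts with it and records the path,
    # producing the entangled state (|L>|0> + |R>|1>)/sqrt(2).
    psi_joint = (np.kron([1, 0], [1, 0]) + np.kron([0, 1], [0, 1])) / np.sqrt(2)
    rho_joint = np.outer(psi_joint, psi_joint.conj())

    # Partial trace over the second particle: reshape the 4x4 density
    # matrix into indices (a, b, a', b') and sum over b = b'.
    rho_reduced = rho_joint.reshape(2, 2, 2, 2).trace(axis1=1, axis2=3)
    print(rho_reduced)  # [[0.5, 0.0], [0.0, 0.5]] -- coherences gone
    ```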

    If a single particle interaction can do this, then it is not surprising that interacting with a whole measuring device can do this. It has nothing to do with humans looking at it.

    At that point I did not yet know that emergence was already a known topic in philosophy, not just quantum science, because I still tried to avoid external influences, but it really was the breakthrough I needed, and I have gained many new insights from this knowledge since.

    Eh, you should be reading books and papers in the literature if you are serious about this topic. I agree that a lot of philosophy out there is bad so sometimes external influences can be negative, but the solution to that shouldn’t be to entirely avoid reading anything at all, but to dig through the trash to find the hidden gems.

    My views when it comes to philosophy are pretty fringe, as most academics believe the human brain can transcend reality, a notion I reject, and I find most philosophy falls right into place once you reject it. However, because my views are a bit fringe, I do find most philosophical literature out there unhelpful, but I don’t avoid engaging with it entirely. I have found plenty of philosophers and physicists who have significantly helped develop my views, such as Jocelyn Benoist, Carlo Rovelli, Francois-Igor Pris, and Alexander Bogdanov.


  • This is why many philosophers came to criticize metaphysical logic in the 1800s, viewing it as dealing with absolutes when reality does not actually exist in absolutes, and stating that we need some other logical system which could deal with the “fuzziness” of reality more accurately. That was the origin of the notion of dialectical logic from philosophers like Hegel and Engels, which caught on with some popularity in the east but was then mostly forgotten in the west outside of some fringe sections of academia. Even long prior to Bell’s theorem, the physicist Dmitry Blokhintsev, who adhered to this dialectical materialist mode of thought, wrote a whole book on quantum mechanics in whose first part he discusses the need to abandon the false illusion of the rigidity and concreteness of reality, showing how this is an illusion even in the classical sciences, where everything has uncertainty, all predictions eventually break down, and it is never actually possible to fully separate something from its environment. These kinds of views heavily influenced the contemporary physicist Carlo Rovelli as well.


  • And as any modern physicist will tell you: most of reality is indeed invisible to us. Most of the universe is seemingly comprised of an unknown substance, and filled with an unknown energy.

    How can we possibly know this unless it was made through an observation?

    Most of the universe that we can see more directly follows rules that are unintuitive and uses processes we can’t see. Not only can’t we see them, our own physics tells us it is literally impossible to measure all of them consistently.

    That’s a hidden variable theory: presuming that systems really have all these values and we just can’t measure them all consistently due to some sort of practical limitation, but still believing that they’re there. Hidden variable theories aren’t compatible with the known laws of physics. The values of the observables which become indefinite simply cease to have existence at all; it is not that they are there and we can’t observe them.

    But subjective consciousness and qualia fit nowhere in our modern model of physics.

    How so? What is “consciousness”? Why do you think objects of qualia are special over any other kind of object?

    I don’t think it’s impossible to explain consciousness.

    You haven’t even established what it is you’re trying to explain or why you think there is some difficulty to explain it.

    We don’t even fully understand what the question is really asking. It sidesteps our current model of physics.

    So, you don’t even know what you’re asking but you’re sure that it’s not compatible with the currently known laws of physics?

    I don’t subscribe to Nagel’s belief that it is impossible to solve, but I do understand how the points he raises are legitimate points that illustrate how consciousness does not fit into our current scientific model of the universe.

    But how?! You are just repeating the claim over and over again when the point of my comment is that the claim itself is not justified. You have not established why there is a “hard problem” at all but just continually repeat that there is.

    If I had to choose anyone I’d say my thoughts on the subject are closest to Roger Penrose’s line of thinking, with a dash of David Chalmers.

    Meaningless.

    I think if anyone doesn’t see why consciousness is “hard” then there are two possibilities: 1) they haven’t understood the question and its scientific ramifications 2) they’re not conscious.

    You literally do not understand the topic at hand based on your own words. Not only can you not actually explain why you think there is a “hard problem” at all, but you said yourself you don’t even know what question you’re asking with this problem. Turning around and claiming that everyone who doesn’t agree with you is just some ignoramus who doesn’t understand is comically ridiculous, as is further implying that people who don’t agree with you may not even be conscious.

    Seriously, that’s just f’d up. What the hell is wrong with you? Maybe you are so convinced of this bizarre notion you can’t even explain yourself because you dehumanize everyone who disagrees with you and never take into consideration other ideas.


  • Reading books on natural philosophy. By that I mean, not the mathematics of the physics itself, but what the mathematics actually tells us about the natural world, how to interpret it and think about it on a more philosophical level. Not a topic I really talk to many people irl about, because most people don’t even know what the philosophical problems around this topic are. I mean, I’d need a whole whiteboard just to walk someone through Bell’s theorem to even give them an explanation of why it is interesting in the first place. There is too much of a barrier to entry for casual conversation.

    You would think that since natural philosophy involves physics it would not be niche, because there are a lot of physicists, but most don’t care about the topic either. If you can plug in the numbers and get the right predictions, then surely that’s sufficient, right? Who cares about what the mathematics actually means? It’s a fair mindset to have, perfectly understandable and valid, but it’s not mine, so I just read tons and tons of books and papers on a topic hardly anyone cares about. It is very interesting to read, for example, the Einstein-Bohr debates, or Schrödinger trying to salvage continuity by viewing a loss of continuity as a breakdown of the classical notion of causality, or some of the contemporary discussions on the subject, such as Carlo Rovelli’s relational quantum mechanics or Francois-Igor Pris’ contextual realist interpretation. Things like that.

    It doesn’t even seem to be that popular of a topic among philosophers, because most don’t want to take the time to learn the math behind something like Bell’s theorem (it’s honestly not that hard, just a bit of linear algebra). So as a topic it’s pretty niche but I have a weird autistic obsession over it for some reason. Reading books and papers on these debates contributes nothing at all practically beneficial to my life and there isn’t a single person I know outside of online contacts who even knows wtf I’m talking about but I still find it fascinating for some reason.
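    For anyone curious what that “bit of linear algebra” looks like, here is a minimal NumPy sketch (my own toy version, not from the original comment) of the CHSH form of Bell’s theorem, where the quantum prediction violates the classical bound of 2:

    ```python
    import numpy as np

    # Pauli matrices for spin measurements in the X-Z plane.
    X = np.array([[0, 1], [1, 0]], dtype=complex)
    Z = np.array([[1, 0], [0, -1]], dtype=complex)

    def meas(theta):
        """Spin observable along angle theta in the X-Z plane."""
        return np.cos(theta) * Z + np.sin(theta) * X

    # Singlet state (|01> - |10>)/sqrt(2).
    psi = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)

    def E(a, b):
        """Correlation <psi| A(a) (x) B(b) |psi>."""
        return np.real(psi.conj() @ np.kron(meas(a), meas(b)) @ psi)

    # Standard CHSH measurement angles.
    a0, a1 = 0, np.pi / 2
    b0, b1 = np.pi / 4, 3 * np.pi / 4

    S = E(a0, b0) - E(a0, b1) + E(a1, b0) + E(a1, b1)
    print(abs(S))  # ~2.828 = 2*sqrt(2); any local hidden variable
                   # model is bounded by |S| <= 2
    ```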


  • Why do you think consciousness remains known as the “hard problem”, and is still considered a contentious mystery to modern science, if your simplistic ideas can so easily explain it?

    You people really need to stop pretending that because one guy published a paper calling it the “hard problem,” it’s somehow a deep, impossible-to-solve scientific question. It’s just intellectual dishonesty, trying to paint it as if it’s equivalent to solving the problem of making nuclear fusion work or something.

    It’s not. And yes, philosophy is full of idiots who never justify any of their premises. David Chalmers, in the paper where he calls it the “hard problem,” quotes Thomas Nagel’s paper as “proof” that experience is something subjective, and then just goes forward with his argument as if it’s “proven.” But Nagel’s paper is complete garbage, and so nothing Chalmers argues beyond that holds any water; it is just something a lot of philosophers blindly accept even though it is nonsensical.

    Nagel claims that the physical sciences don’t incorporate point-of-view, and that therefore point-of-view must be a unique property of mammals, and that experience is point-of-view dependent, so experience too must come from mammals, and therefore science has to explain the origin of experience.

    But his paper was wildly outdated when he wrote it. By then, general relativity had already been around for decades, and it is a heavily point-of-view-dependent theory: there is no absolute space or time; their properties depend upon your point of view. Relational quantum mechanics also interprets quantum mechanics in a way that gets rid of all the weirdness and makes it incredibly intuitive and simple, with the single assumption that the properties of particles depend upon point of view, not much differently than general relativity treats space and time, so there is no absolute state of a system anymore.

    Both general relativity and relational quantum mechanics not only treat reality as point-of-view dependent but tie themselves back directly to experience: they tell you what you actually expect to observe in measurements. In quantum mechanics these are literally called observables, entities identifiable by their experiential properties.

    Nagel is just an example of an armchair philosopher who does not engage with the sciences, so he thinks they are all still Newtonian, with some sort of absolute world independent of point of view. If the natural world is point-of-view dependent all the way down, then none of Nagel’s arguments follow. There is no reason to believe point-of-view is unique to mammals, there is further no reason to think the point-of-view dependence of experience makes it inherently mammalian, and thus there is no reason to call experience “subjective.”

    I prefer the term “context” rather than “point-of-view,” as it is clearer what it means, but it means the same thing. The physical world is just point-of-view dependent all the way down, or that is to say, context-dependent. We just so happen to be objects and thus, like any other object, exist in a particular context, and thus experience reality from that context. Our experiences are not created by our brains; experience is just objective reality from the context we occupy. What our brain does is think about and reflect upon experience (reality). It formulates experience into concepts like “red,” “tree,” “atom,” etc. But it does not create experience.

    The entire “hard” problem rests on a faulty premise drawn from science that was already outdated when it was written.

    If experience just is reality from a particular context then it makes no sense to ask to “derive” it as Chalmers and Nagel have done. You cannot derive reality, you describe it. Reality just is what it is, it just exists. Humans describe reality with their scientific theories, but their theories cannot create reality. That doesn’t even make sense. All modern “theories of consciousness” are just nonsense as they all are based on the false premise that experience is not reality but some illusion created by the mammalian brain and that “true” reality is some invisible metaphysical entity that lies beyond all possible experience, and thus they demand we somehow need a scientific theory to show how this invisible reality gives rise to the visible realm of experience. The premise is just silly. Reality is not invisible. That is the nonsensical point of view.


  • There shouldn’t be a distinction between quantum and non-quantum objects. That’s the mystery. Why can’t large objects exhibit quantum properties?

    What makes quantum mechanics distinct from classical mechanics is the fact that not only are there interference effects, but statistically correlated systems (i.e. “entangled”) can seem to interfere with one another in a way that cannot be explained classically, at least not without superluminal communication, or introducing something else strange like the existence of negative probabilities.

    If it wasn’t for these kinds of interference effects, then we could just chalk up quantum randomness to classical randomness, i.e. it would just be the same as any old form of statistical mechanics. The randomness itself isn’t really that much of a defining feature of quantum mechanics.

    The reason I say all this is because we actually do know why there is a distinction between quantum and non-quantum objects and why large objects do not exhibit quantum properties. It is a mixture of two factors. First, larger systems like big molecules have smaller wavelengths, so interference with other molecules becomes harder and harder to detect. Second, there is decoherence. Even for small particles, if they interact with a ton of other particles and you average over these interactions, you will find that the interference terms (the “coherences” in the density matrix) converge to zero, i.e. when you inject noise into a system its average behavior converges to a classical probability distribution.
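    On the first factor, a rough back-of-the-envelope comparison (my own illustrative numbers, not from the original comment) using the de Broglie wavelength λ = h/(mv):

    ```python
    h = 6.626e-34  # Planck's constant, J*s

    # An electron at ~1% the speed of light.
    m_e, v_e = 9.109e-31, 3e6
    print(h / (m_e * v_e))  # ~2.4e-10 m, comparable to atomic spacing

    # A 1-microgram dust grain drifting at 1 mm/s.
    m_d, v_d = 1e-9, 1e-3
    print(h / (m_d * v_d))  # ~6.6e-22 m, hopelessly small to resolve
    ```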

    Hence, we already know why there is a seeming “transition” from quantum to classical. This doesn’t get rid of the fact that it is still statistical in nature: it doesn’t give you a reason why, for a particle that has a 50% chance of being over there and a 50% chance of being over here, when you measure it and find it is over here, it wasn’t over there. Decoherence doesn’t tell you why you actually get the results you do from a measurement; it’s still fundamentally random (which bothers people for some reason?).

    But it is well-understood how quantum probabilities converge to classical probabilities. There have even been studies that have reversed the process of decoherence.


  • That’s actually not quite accurate, although that is how it is commonly interpreted. The reason it is not accurate is that Bell’s theorem simply doesn’t show there are no hidden variables, and indeed Bell himself states very clearly what the theorem proves in the conclusion of his paper.

    In a theory in which parameters are added to quantum mechanics to determine the results of individual measurements, without changing the statistical predictions, there must be a mechanism whereby the setting of one measuring device can influence the reading of another instrument, however remote. Moreover, the signal involved must propagate instantaneously, so that such a theory could not be Lorentz invariant.[1]

    In other words, you can have hidden variables, but those hidden variables would not be Lorentz invariant. What is Lorentz invariance? Well, to be “invariant” basically means to be absolute, that is to say, unchanging based on reference frame. The term Lorentz here refers to Lorentz transformations under Minkowski space, i.e. the four-dimensional spacetime described by special relativity.

    This implies you can actually have hidden variables under one of two conditions:

    1. Those hidden variables are invariant under some other framework that is not special relativity. The signals involved would have to travel faster than light, contradicting special relativity, so you would need to replace it with some other framework.
    2. Those hidden variables are variant, meaning they do indeed change based on reference frame. This would allow local hidden variable theories, and thus even allow current quantum mechanics to be interpreted as a statistical theory in a more classical sense, as it even evades the PBR theorem.[2]

    The first view is unpopular because special relativity is the basis of quantum field theory, and thus contradicting it would mean contradicting one of our best theories of nature. There has been some fringe research into figuring out ways to reformulate special relativity to make it compatible with invariant hidden variables,[3] but given that quantum mechanics has been around for over a century and nobody has figured this out, I wouldn’t get your hopes up.

    The second view is unpopular because it can be shown to violate a more subtle intuition we all tend to have, but is taken for granted so much I’m not sure if there’s even a name for it. The intuition is that not only should there be no mathematical contradictions within a single given reference frame so that an observer will never see the laws of physics break down, but that there should additionally be no contradictions when all possible reference frames are considered simultaneously.

    It is not physically possible to observe all reference frames simultaneously, and thus one can argue that such an assumption should be abandoned because it is metaphysical and not something you can ever observe in practice.[4] Note that inconsistency between all reference frames considered simultaneously does not mean observers will disagree over the facts, because if one observer asks another for information about a measurement result, they are still acquiring information about that result from their own reference frame, just indirectly, and thus they would never run into a disagreement in practice.

    However, people still tend to find this notion of simultaneous consistency too intuitive to abandon, so the second view remains unpopular, and most physicists choose to just interpret quantum mechanics as if there are no hidden variables at all. You can argue that rejecting #1 is enforced by the evidence, but rejecting #2 is more of a philosophical position, so ultimately the view that there are no hidden variables is not “proven” outright, but proven only if you accept certain philosophical assumptions.

    There is actually a second way to restore local hidden variables which I did not go into detail on here, which is superdeterminism. Superdeterminism basically argues that if you had not just a theory which describes how particles behave now, but a more holistic theory that includes the entire initial state of the universe going back to the Big Bang and traces out how all particles evolved to the state they are in now, you could place restrictions on how that system develops such that it would always reproduce the correlations we see, even with hidden variables that are indeed Lorentz invariant.

    The obvious problem, though, is that it would never actually be possible to have such a theory. We cannot know the complete initial configuration of all particles in the universe, so it’s not obvious how you would derive the correlations between particles beforehand. You would instead have to just assume they already “know” how to be correlated, which makes such theories equivalent to nonlocal hidden variable theories, and thus it is not entirely clear how they could be made Lorentz invariant. I’m not sure anyone has ever put forward a complete model in this framework either; it’s the same issue as with nonlocal hidden variable theories.


  • Use IBM’s cloud quantum computers to learn a bit. You can find YouTube videos that explain how to do the calculations, and then you can just play around making algorithms on their systems and verifying that you can do the calculations correctly. With that knowledge alone you can then begin to step through a lot of the famous experiments that purport to show the strangeness of quantum mechanics, like Bell’s theorem, the “bomb tester” thought experiment, the GHZ experiment, quantum teleportation, etc., as most of the famous ones can be implemented on a quantum computer, and you can get an understanding of why they are interesting.
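    As a starting point, a Bell-pair circuit of the kind you would run on those systems might look like this in Qiskit (a sketch assuming a recent Qiskit with the qiskit-aer simulator installed; on IBM’s cloud you would submit the same circuit to real hardware instead):

    ```python
    from qiskit import QuantumCircuit
    from qiskit_aer import AerSimulator

    # Prepare an entangled Bell pair: Hadamard on qubit 0, then CNOT.
    qc = QuantumCircuit(2, 2)
    qc.h(0)
    qc.cx(0, 1)
    qc.measure([0, 1], [0, 1])

    # Simulate locally and tally the measurement outcomes.
    counts = AerSimulator().run(qc, shots=1000).result().get_counts()
    print(counts)  # roughly half '00' and half '11', never '01' or '10'
    ```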



  • The subjective experience of consciousness is directly observable, and definitely real, no?

    Experience is definitely real, but there is no such thing as “subjective experience.” It is not logically possible to say there is “subjective experience” without inherently entailing that there is some sort of “objective experience,” in the same way that saying something is “inside of” something makes no sense unless there is an “outside of” it. Without implicitly entailing some sort of “objective experience” then the qualifier “subjective” would become meaningless.

    If you associate “experience” with “minds,” then you’d be implying there is some sort of objective mind, i.e. a cosmic consciousness of some sorts. Which, you can believe that, but at that point you’ve just embraced objective idealism. The very usage of the term “subjective experience” that is supposedly inherently irreducible to non-minds inherently entails objective idealism, there is no way out of it once you’ve accepted that premise.

    The conflation of experience with “subjectivity” is largely done because we all experience the world in a way unique to us, so we conclude experience is “subjective.” But a lot of things can be experienced differently by different observers. Two observers, for example, can measure the same object to have different velocities, not because velocity is subjective, but because they occupy different frames of reference. In other words, the notion that something being unique to us proves it is “subjective” is a non sequitur; there can be other reasons for it being unique to us, namely that nature is context-dependent.

    Reality itself depends upon where you are standing in it, how you are looking at it, everything in your surroundings, etc, how everything relates to everything else from a particular reference frame. So, naturally, two observers occupying different contexts will perceive the world differently, not because their perception is “subjective,” but in spite of it. We experience the world as it exists independent of our observation of it, but not independent of the context of our observation. Experience itself is not subjective, although what we take experience to be might be subjective.

    We can misinterpret things, for example: we can come to falsely believe we experienced some particular thing, and later it turns out we actually perceived something else, and thus we were mistaken in our initial interpretation, which we later replace with a new one. However, at no point did it become false that there was experience. Reality can never be true or false; it always just is what it is. The notion that there is some sort of “explanatory gap” between what humans experience and some sort of cosmic experience is just an invented problem. There is no gap, because what we experience is indeed reality, independent of conscious observers being there to interpret it, but absolutely dependent upon the context under which it is observed.

    Again, I’d recommend reading Jocelyn Benoist’s Toward a Contextual Realism. All this is explained in detail, and any possible rebuttal you’re thinking of has already been addressed. People are often afraid of treating experience as real because they operate on this Kantian “phenomenal-noumenal” paradigm (inherently implied by the usage of “subjective experience”) and then think that if they admit this unobservable “noumenon” is a meaningless construct, they have to default to only accepting the “phenomenon,” i.e. that there’s only “subjective experience” and we’re all “trapped in our minds,” so to speak. But the whole point of contextual realism is to point out that this fear is unfounded because both the phenomenal and noumenal categories are problematic and both have to be discarded: experience is not “phenomenal,” as “phenomenal” means “appearance of reality,” and experience is not the appearance of reality; it is reality.

    You only enter into subjectivity, again, when you take reality to be something, when you begin assigning labels to it, when you begin to invent abstract categories and names to try and categorize what you are experiencing. (Although the overwhelming majority of abstract categories you use were not created by you but are social constructs, part of what Wittgenstein called the “language game.”)

    (I don’t think adding some metaphysical element does much of anything, and Penrose still doesn’t really explain it, just provides a potential mechanism for it in the brain. It’s still a real “thing”, unexplained by current physics though.)

    We don’t need more metaphysical elements, we need fewer. We need to stop presuming things that have no reason to be presumed, then presuming other things to fix contradictions created by those false presumptions. We need to discard the bad assumptions that led to the contradiction in the first place (discard the phenomenal-noumenal distinction entirely, not just one or the other).

    Also, to your other point, I believe everything is just an evolving wave function.

    This is basically the Many Worlds Interpretation. I don’t really buy it because we can’t observe wave functions, so if the entire universe is made of wave functions… how does that explain what we observe? You end up with an explanatory gap between what we observe and the mathematical description.

    The whole point of science is to explain the reality which we observe, which is synonymous with experience, since again, experience just is reality. That’s what science is supposed to do: explain experiential reality, so we have to tie it back to experience, to what Bell called “local beables,” things we can actually point to and identify in our observations.

    The biggest issue with MWI is that there is simply no way to tie it back to what we actually observe because it contains nothing observable. There is an explanatory gap between the world of waves in Hilbert space and what we actually observe in reality.

    The Copenhagen interpretation is just how the many worlds universe appears to behave to a conscious observer.

    What you’ve basically done is just wrapped up the difficult question of how the invisible world of waves in Hilbert space converts itself to the visible world of particles in spacetime by just saying “oh it has something to do with our consciousness.” I mean, sure, if you find that to be satisfactory, I personally don’t.



  • There 100% are…

    If you choose to believe so, like I said I don’t really care. Is a quantum computer conscious? I think it’s a bit irrelevant whether or not they exist. I will concede they do for the sake of discussion.

    Penrose thinks they’re responsible for consciousness.

    Yeah, and as I said, Penrose was wrong, not because the measurement problem isn’t the cause of consciousness, but because there is no measurement problem nor a “hard problem.” Penrose falls for the same logical fallacies I pointed out and so comes to believe there are two problems where none actually exist; then, because both problems originate from the same logical fallacies, he notices they are similar and thinks “solving” one is necessary for “solving” the other, when neither problem actually existed in the first place.

    Because we also don’t know what makes anesthesia stop consciousness. And anesthesia stops consciousness and stops the quantum process.

    You’d need to define more specifically what you mean by “consciousness” and “quantum process.” We don’t remember things that occur when we’re under anesthesia, so are we saying memory is consciousness?

    Now, the math isn’t clean. I forget which way it leans, but I think it’s that consciousness kicks out a little before the quantum action is fully inhibited? It’s been a minute, and this shit isn’t simple.

    Sure, it’s not simple, because the notion of “consciousness” as used in philosophy is a very vague and slippery word with hundreds of different meanings depending on the context. This makes it seem “mysterious,” as its meaning can shift from context to context, making it difficult to pin down what is even being talked about.

    Yet, if you pin it down, if you are actually specific about what you mean, then you don’t run into any confusion. The “hard problem of consciousness” is not even a “problem” as a “problem” implies you want to solve it, and most philosophers who advocate for it like David Chalmers, well, advocate for it. They spend their whole career arguing in favor of its existence and then using it as a basis for their own dualistic philosophy. It is thus a hard axiom of consciousness and not a hard problem. I simply disagree with the axioms.

    Penrose is an odd case because he accepts the axioms and then carries that same thinking into QM where the same contradiction re-emerges but actually thinks it is somehow solvable. What is a “measurement” if not an “observation,” and what is an “observation” if not an “experience”? The same “measurement problem” is just a reflection of the very same “hard problem” about the supposed “phenomenality” of experience and the explanatory gap between what we actually experience and what supposedly exists beyond it.

    It’s the quantum wave function collapse that’s important.

    Why should I believe there is a physical collapse? This requires you to, again, posit that there physically exists something that lies beyond all possibilities of us ever observing it (paralleling Kant’s “noumenon”) which suddenly transforms itself into something we can actually observe the moment we try to look at it (paralleling Kant’s “phenomenon”). This clearly introduces an explanatory gap as to how this process occurs, which is the basis of the measurement problem in the first place.

    There is no reason to posit a physical “collapse” or even that there exists at all a realm of waves floating about in Hilbert space. These are unnecessary metaphysical assumptions that are purely philosophical and contribute nothing but confusion to an understanding of the mathematics of the theory. Again, just like Chalmers’ so-called “hard problem,” Penrose is inventing a problem to solve which we have no reason to believe is even a problem in the first place: nothing about quantum theory demands that you believe particles really turn into invisible waves in Hilbert space when you aren’t looking at them and suddenly turn back into visible particles in spacetime when you do look at them.

    That’s entirely metaphysical and arbitrary to believe in.

    There’s no spinning out where multiple things happen, there is only one thing. After wave collapse is when you look in the box and see if the cat’s dead. In a sense it’s the literal “observer effect” happening in our head. And that is probably what consciousness is.

    There is only an “observer effect” if you believe the cat literally did turn into a wave and you perturbed that wave by looking at it, causing it to “collapse” like a house of cards. What did the cat see from its perspective? How did it feel for the cat to turn into a wave? The whole point of Schrödinger’s cat thought experiment was that Schrödinger was trying to argue against believing particles really turn into waves, because then you’d have to believe unreasonable things like cats turning into waves.

    All of this is entirely metaphysical; there are no observations that can confirm this interpretation. You can only justify the claim that cats literally turn into waves when you don’t look at them, and that there is a physical collapse of that wave when you do look at them, on purely philosophical grounds. It is not demanded by the theory at all. You choose to believe it purely on philosophical grounds, which then leads you to think there is some “problem” with the theory that needs to be “solved,” but it is purely metaphysical.

    There is no actual contradiction between theory and evidence/observation, only contradiction between people’s metaphysical assumptions that they refuse to question for some reason and what they a priori think the theory should be, rather than just rethinking their assumptions.

    That’s how science works. Most won’t know who Penrose is till he’s dead.

    I’d hardly consider what Penrose is doing to be “science” at all. All these physical “theories of consciousness” that purport not to just be explaining intelligence or self-awareness or things like that, but more specifically claim to be solving Chalmers’ hard axiom of consciousness (that humans possess some immaterial invisible substance that is somehow attached to the brain but is not the brain itself), are all pseudoscience, because they are beginning with an unreasonable axiom which we have no scientific reason at all to take seriously and then trying to use science to “solve” it.

    It is no different than claiming to use science to try and answer the question of why humans have souls. Any “scientific” approach you use to try and answer that question is inherently pseudoscience, because the axiomatic premise itself is flawed: it would be trying to solve a problem it never established is even a problem to be solved in the first place.