
> Consider that changes in gravity affect cognition.

That is literally what it does. It slows down cognition. In the exact same way that it slows down clocks. The "clock in your brain" and the "watch in your hand" are both operating on the same physical and chemical principles. It's all just atoms and electromagnetic interactions at the lowest levels.

> Likewise, if the brain has any quantum mechanisms

Again, this is a pop-science misunderstanding of all of physics, not just Quantum Mechanics (QM).

The rules of QM either apply to everything or nothing. The laws of the universe do not begin or end at the edge of the laboratory bench top. Similarly, there are no special rules that apply to just the human brain.

The rules that govern galaxies, particle beams, atomic clocks, mechanical watches, and human observers are the same. They all tick along at the same rate. One second per second, locally at least.



> The rules of QM either apply to everything or nothing. The laws of the universe do not begin or end at the edge of the laboratory bench top. Similarly, there are no special rules that apply to just the human brain.

Well, for now we don't really know how the rules of QM apply to macroscopic objects. The exact physical interpretation of wave-function collapse (if any) is still a matter of speculation, with all options still on the table: maybe there is no collapse (e.g. the many-worlds interpretation); maybe collapse happens on interaction with a large enough system (measurement, per the Copenhagen interpretation); maybe collapse is a physical process that happens at precise scales (Roger Penrose seems to believe something like this); or maybe the wave function is a physical wave (pilot-wave theory) and there is no collapse for a different reason.


> Well, for now we don't really know how the rules of QM apply to macroscopic objects

Sure we do, "we" just refuse to acknowledge this, keeping a 100-year-old debate alive for no good reason.

Macroscopic objects follow microscopic rules. That's that. There's no further debate. There can be none.

I can't even begin to describe how absurd it is to argue anything else. It's like... saying with a straight face that software doesn't "really" follow the rules of boolean algebra if it has enough lines of code. That somehow once a program gets "big enough", it can transcend truth tables and somehow go analog or something.

It's like a mathematician saying that really big equations, the kind that span several pages, stop following the rules of algebra.

Get it? It's just... insane. The rules of the universe are the rules for all things in it. They apply to everything, at all scales, at all times.

If they don't, they're not the rules!


That's the point. They are not the rules.

QM and GR are approximations. The real rules are unknown to us. It just happens that "in the small", QM seems to be a good approximation that is consistent with experimental results. And "in the large" the same holds for GR. Neither of them works for everything though.

That just means we need to find a better approximation. That's not insane.

Bits are not an approximation of software "in the small". They are the real building block. We know that because we made them: first we made them, then we observed how they behave. QM, by contrast, is a theory created by observing first. Physics is a natural science trying to understand what we observe. Math and CS are not: the objects they care about are conceived by us first and observed second.


> Get it? It's just... insane. The rules of the universe are the rules for all things in it. They apply to everything, at all scales, at all times.

This is nowhere near as certain as you make it out to be. Take Conway's Life. With 4 simple "rules of the universe", you can create a series which to the best of our knowledge cannot be predicted from those rules. Clearly this series is a "thing" within the simulated universe, and clearly it emerges from the universal rules, but the rules don't give us any insight about it! The only known way to "predict" it is to let it run: in other words, it's irreducibly complex.
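
For concreteness, those rules fit in a few lines. Here's a minimal Python sketch (the set-of-live-cells representation and the glider used to exercise it are my own illustrative choices, not anything canonical):

    # Minimal sketch of Conway's Life: the four rules and nothing else.
    from collections import Counter

    def step(live):
        """Advance a set of live (x, y) cells by one generation."""
        # Each live cell contributes +1 to the count of its 8 neighbours.
        counts = Counter(
            (x + dx, y + dy)
            for x, y in live
            for dx in (-1, 0, 1)
            for dy in (-1, 0, 1)
            if (dx, dy) != (0, 0)
        )
        # Birth: a dead cell with exactly 3 live neighbours comes alive.
        # Survival: a live cell with 2 or 3 live neighbours stays alive.
        # Everything else dies (underpopulation or overcrowding).
        return {c for c, n in counts.items() if n == 3 or (n == 2 and c in live)}

    # The only known general way to learn generation N: run all N steps.
    state = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}  # a glider
    for _ in range(100):
        state = step(state)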

Causality can look different at different scales. It's well established that simple low-level rules can generate extreme if not irreducible complexity. This doesn't mean the bottom-level rules don't apply everywhere; it just means their descriptive/predictive utility is not necessarily preserved when you zoom out. Is a prime-finding algorithm best described by the mechanism of the computational substrate? Any number of substrates could suffice. We can best predict its output by thinking at a higher level of abstraction.


> you can create a series which to the best of our knowledge cannot be predicted from those rules

You can absolutely compute Conway's Game of Life state in reverse, and you can compute it forward as well (after all, it is a game). That's prediction.

> It's well established that simple low-level rules can generate [...] irreducible complexity.

Simple rules can create a very complex emergent system, but that doesn't mean it cannot still be reduced to its component rules. That's what makes them rules and not just guidelines.


> You can absolutely compute Conway's Game of Life state in reverse, and you can compute it forward as well (after all, it is a game). That's prediction.

Notice I said irreducible, not irreversible. Yes, you can reverse the computation; but there's no known way to "shortcut" the forward process to predict the outcome of the series I mentioned any faster than simply letting it run. Letting it run is not prediction in the sense I mean here. By "predict" I mean foretell in advance what the system will do without needing to let it run.

> Simple rules can create a very complex emergent system, but that doesn't mean it cannot still be reduced to its component rules. That's what makes them rules and not just guidelines.

I agree, with a minor caveat about language: you can describe a system which may display complex emergent behavior in terms of its underlying rules, but doing so is not guaranteed to give you useful information about (i.e., allow you to "predict", in the sense described above) the emergent behaviors. In contexts like these, to "reduce" typically means to describe in terms of a lower-level formalism while preserving predictive ability, AFAIK. In other words, the low-level formalism provides full information about the behavior of the entire system, such that you don't need to observe the system to know what it will do. In the Life example this is not the case.


When people do prediction, they also simulate system states given known priors and behavioral characteristics. The distinction you're making is, in my opinion, not valuable (or at least, you've not demonstrated its value here).


Sure, and the point of simulation in the first place is often to understand how the system in question will behave, in advance. The distinction I meant to make is precisely that some processes may not be computable in this way; that they cannot be simulated any faster than the real thing, in other words. That is what is meant by computational irreducibility, AFAIK. Determining which systems this is true of is very valuable, imo.


> It's like a mathematician saying that really big equations, the kind that span several pages, stop following the rules of algebra.

Incidentally, Greg Egan wrote 2 short stories on basically that premise: Luminous and Dark Integers.


Of course he did! 8)


Have you ever observed an interference pattern for tennis balls? Have you ever been unable to place a stationary object in space because you knew its velocity?

Furthermore, leaving aside direct observations, which could perhaps be waved away with discussions of probabilities, we have one big problem: there is no gravity in QM, and we have no idea how to account for it, or for curved spacetime, within the theory.

So for now, we have one working model for the macroscopic world (general relativity) and one for the microscopic world (QM), but the two are mathematically incompatible, they can't be simultaneously true, and we have not yet found an experiment which contradicts either of them.


Interference patterns have been observed for buckyballs, and superposition has been observed for MEMS springs consisting of many thousands of atoms. Electromagnetic interference effects can occur with radiation that has kilometer-long wavelengths. Similarly, quantum encryption and key exchange have been performed over many kilometers.
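
The scale gap is easy to put numbers on with the de Broglie wavelength, lambda = h / (m * v). A back-of-the-envelope sketch in Python (the masses and speeds are rough illustrative guesses, not measured values):

    # Back-of-the-envelope de Broglie wavelengths: lambda = h / (m * v).
    h = 6.626e-34  # Planck's constant, J*s

    def de_broglie(mass_kg, speed_m_s):
        return h / (mass_kg * speed_m_s)

    # C60 buckyball (~1.2e-24 kg) at a typical beam speed:
    print(de_broglie(1.2e-24, 200))  # ~2.8e-12 m, around atomic length scales
    # Tennis ball (57 g) on a fast serve:
    print(de_broglie(0.057, 50))     # ~2.3e-34 m, far below anything measurable

Same rules for both; the tennis ball's wavelength is just some twenty-odd orders of magnitude too small to produce visible fringes.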

> there is no gravity in QM

The weakness(es) of any one particular theory doesn't in any way disprove that macroscopic objects follow the same rules that microscopic objects do.

Just because MySQL is bad doesn't mean that the relational model is false, or that databases are pointless.

Just because the current theories of GR and QM aren't easily extended to all regimes doesn't mean that there is some sort of hard boundary where the rules change. Our theories of the world don't affect how it works. The boundaries of our theories are not boundaries of the world.

You can't fall off the edge of the world because the maps only go so far...


> Just because the current theories of GR and QM aren't easily extended to all regimes doesn't mean that there is some sort of hard boundary where the rules change. Our theories of the world don't affect how it works. The boundaries of our theories are not boundaries of the world.

Yes, absolutely agreed. My point was simply that we don't know how QM extends to macroscopic objects, not that there must be some hard boundary (though we can't exclude the possibility that there exists some hard boundary at some level of energy, just as we know that the Standard Model doesn't describe matter at certain high energies).

Until we have some unification of GR and QM, we can't say for sure that QM describes the macroscopic world, just as we can't say that GR describes the behaviors of particles. Most likely we will at some point find such a model, and find out exactly how QM applies to large systems - perhaps with some limits to uncertainty, similarly to how c acts as a limit to speeds.


> just as we can't say that GR describes the behaviors of particles

Sure it does - gravitational lensing, etc.


I guess more precisely, GR can't predict the interactions between particles, only the interaction of particles with gravitational fields. But GR can't predict the behavior of two colliding electrons - it will make predictions similar to those of classical physics, not those of the Schrödinger equation.


> It's like... saying with a straight face that software doesn't "really" follow the rules of boolean algebra if it has enough lines of code.

As a programmer, I wholly endorse this notion.


> I can't even begin to describe how absurd it is to argue anything else. It's like... saying with a straight face that software doesn't "really" follow the rules of boolean algebra if it has enough lines of code. That somehow once a program gets "big enough", it can transcend truth tables and somehow go analog or something.

It literally does sometimes - just ask anyone who's programmed software designed to be resistant to bit flips caused by cosmic radiation.


If large programs are vulnerable to cosmic radiation-induced bit flips, so are small programs, all the way down to individual machine instructions. The point is that the rules of the system are consistent across scales.


> The rules that govern galaxies, particle beams, atomic clocks, mechanical watches, and human observers are the same.

And we do not know what those rules are. We have two widely accepted guesses, quantum mechanics and general relativity, each of which has been incredibly successful in its predictive power.

The problem is, we do not know how to make these theories consistent with each other. Either relativity is wrong on quantum scales, or QM is wrong on relativistic scales. Probably both.


> That is literally what it does. It slows down cognition.

Yes, but I'm talking about actual changes in the cognitive output. Look at all the physiological changes astronauts undergo. Those changes extend to their cognition, even if the effect of general relativity is small.

> Again, this is a pop-science misunderstanding of all of physics, not just Quantum Mechanics (QM).

There's an entire field called quantum biology. I'm not advocating that the brain has quantum magic (in fact, last I checked, the theory was pretty much completely discounted). Rather, on the off chance that it does, your example breaks down, because the elements that comprise the brain's computation would not necessarily be subject to the same relativistic effects, given the potentially vast distances between some of those elements.

Or, without QM at all, consider that there's an extremely minute difference in relativistic effects even across the few inches that span your skull. By that alone, 1.0 is not 1.0.
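
To put a rough number on "extremely minute" (a weak-field estimate, and the ~10 cm head height is an assumed figure):

    # Gravitational time dilation across a skull, weak-field estimate:
    # fractional rate difference ~= g * dh / c**2
    g = 9.81      # m/s^2, surface gravity
    c = 2.998e8   # m/s
    dh = 0.1      # m, assumed top-to-bottom height of a head

    print(g * dh / c**2)  # ~1.1e-17, i.e. parts in 10^17

Real, but utterly negligible next to the noise of any chemical process.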

Not trying to be pedantic here, merely food for thought.


I'm saying that GR and QM apply to all things, not that the brain "isn't quantum". It follows QM rules in the ordinary sense that electrons orbit atoms in the brain the same way as they do in other well-established contexts. Similarly, the many-worlds interpretation holds that the non-locality experiments make perfect sense if you include the human brain in the experiment and stop excluding it. That's simply a statement that we are not "gods", somehow standing apart from the Universe and observing it from the outside. We're in it, and interacting with it, the same as everything else.

The only really interesting QM "stuff" that seems to be going on is that some enzymes have incredibly high efficiencies, even when bathed in dilute reactants. There are hints that this may be some sort of inherently quantum mechanical process that cannot be understood classically. But this is a tenuous hypothesis at best, there's no hard evidence yet, let alone a good theory.


This is opening up all new avenues of thought in my head! So, theoretically: if we tested an astronaut on a simple mental recall test on Earth and timed them, then had them repeat the test on a trip to the Moon but timed them from Earth, the astronaut outside Earth's gravity well should be observed (from Earth) completing the test faster; but if the test were timed on the spacecraft, the results would be the same as when they completed it on Earth.


Correct, but the effect would be too small to measure like that (human performance at tests is noisy).

If you made the difference in gravity more extreme, the difference in measurement would be trivial to notice, but we don't have a way to achieve that in practice.
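
For scale, a rough sketch of the gravitational part of the effect (velocity time dilation is ignored, and the 10-minute test length is an arbitrary illustrative figure). A clock taken far outside Earth's gravity well runs fast relative to the surface by at most roughly GM / (R * c^2):

    # Upper bound on the gravitational effect, Earth surface vs. deep space:
    G = 6.674e-11  # m^3 kg^-1 s^-2, gravitational constant
    M = 5.972e24   # kg, Earth's mass
    R = 6.371e6    # m, Earth's radius
    c = 2.998e8    # m/s

    fractional = G * M / (R * c**2)
    print(fractional)        # ~7e-10
    print(fractional * 600)  # ~4e-7 s over a 10-minute test

A few hundred nanoseconds over ten minutes: easily visible to an atomic clock, invisible in human reaction-time noise.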



