He was an outstanding player and had an impact on the game like no other player before him. His peak performance coincided with the mass-media surge of the late 1960s. One nitpick: he did not carry Brazil to three World Cup victories, because in 1962 he played only one full match. He got injured during the second match and was sidelined for the rest of the tournament. Still, winning two World Cup finals is a feat only a dozen players have achieved (incidentally, two Italians and ten Brazilians).
I wouldn't say Einstein was wrong. It is a sad story that he never got to see the work of Nobel laureate Julian Schwinger on Quantum Field Theory, or the contributions of other Nobel laureates like Frank Wilczek. Switching from a particle-centric theory (like Quantum Mechanics) to a field-centric theory makes all the QM paradoxes disappear, and problems like locality, the double-slit experiment, etc., become trivial. What we have been calling particles are instead oscillators in fields. The Schrödinger equation describes a wave function, but Quantum Mechanics has been using that wave function to represent a probability distribution instead of an actual wave. Why is that? Because of the focus on particles. If everything has to be a particle, then of course we have to use the wave function as a probability distribution. But why force that view? We have known how to describe fields since the days of Faraday and Maxwell, yet after Copenhagen all we want to do is force wave-describing partial differential equations into a probabilistic model full of paradoxes.
What most physicists refer to as "particle" is very different from what lay people understand by that term. If you ask physicists at the LHC about particles, they will explain what I've already mentioned, because what I'm saying is far from being revolutionary.
You can count individual quanta of any kind (photons, electrons, etc.), and you can measure their quantum collapse. But that does not mean they are localized "particles" the way Dirac liked to think about them.
Of course you can do that; if we couldn't detect state collapses, they wouldn't be a staple of quantum mechanics. If you measure a rotating wave function over and over, it won't rotate, since it keeps collapsing back into the same state; if you let it be, it can rotate into another state and give you a different measurement result.
This works because the collapse probability isn't linear in the rotation angle: for small rotations, the chance of projecting onto a different state is quadratic in the angle, so each measurement almost always yields the original state. You can also use this technique to rotate a state by making many measurements while slowly changing the measurement axis, so each measurement produces a small rotation.
Edit: But you are right that we can't see the history of state collapses. They are definitely required for our current theories to work, though, as you get the wrong experimental results without them in the theory.
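The "measure it over and over and it won't rotate" effect described above is the quantum Zeno effect, and it can be checked numerically. A minimal sketch, with illustrative units I've chosen (rotation rate, total time): a two-state system rotates from |0> to |1> in total time T, and each measurement projects it back onto |0> with probability cos²(θ), where θ is the angle accumulated since the last measurement.

```python
import math

omega = math.pi / 2   # rotation rate: the state fully reaches |1> at T = 1
T = 1.0

def survival_probability(n_measurements):
    # Probability that every one of n equally spaced measurements finds
    # the system still in |0>. Each interval rotates the state by theta,
    # and the collapse leaves it in |0> with probability cos^2(theta).
    # For small angles 1 - cos^2(theta) ~ theta^2, which is why frequent
    # measurement freezes the rotation.
    theta = omega * T / n_measurements
    return math.cos(theta) ** (2 * n_measurements)

# One measurement at the end: the state has fully rotated, survival ~ 0.
# A hundred measurements: survival > 97% -- the rotation is frozen.
```

The quadratic-vs-linear distinction is exactly the point: halving the interval between measurements quarters the per-measurement collapse probability, so the product of survival probabilities tends to 1 as measurements become more frequent.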
>> Of course you can do that, if we couldn't detect state collapses then they wouldn't be a staple of quantum mechanics.
No. If you shoot pairs of entangled particles in opposite directions, someone receiving one stream can choose whether or not to take measurements, thereby collapsing (or not collapsing) the wave function of the particles going in the other direction. If you could tell the difference between a particle with a collapsed wave function and one without, this could be used for FTL communication. The bottom line is we can't tell whether a wave function is "collapsed" or not. It's not a real event.
> if you let it be it can rotate into another state giving you another measurement result.
But then, following your prior reasoning, that's just another collapse. So if the only way to measure is to collapse, then pmkahler is right: there is no way to discern a collapsed wave function from a non-collapsed one.
> But then following your prior reasoning, that's just another collapse.
But it isn't random: if we know how fast it rotates, then the second time we measure it we can get a close-to-exact result.
The most famous experiment for this is the double-slit experiment. Normally, when you fire particles through, you get an interference pattern on the other side, since the particles pass through like a wave. Measuring at one of the slits collapses the wave function and therefore destroys the interference pattern, so the particles mostly just travel straight and create a distribution of hits as if they had passed through a single slit.
Edit: You can look at the picture on Wikipedia showing the difference between single and double slit; just measuring at one of the slits will even cause particles passing through the other slit to go much straighter.
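The difference between the two cases can be sketched numerically. Assuming two idealized point slits and arbitrary illustrative values for the wavelength and slit separation: without a which-path measurement we add the two path amplitudes; with one, the wave function has collapsed and we add probabilities instead.

```python
import cmath, math

wavelength = 1.0   # illustrative units
slit_sep = 5.0     # distance between the two slits

def intensity(sin_theta, which_path_measured):
    # Phase difference between the two paths at viewing angle theta.
    phase = 2 * math.pi * slit_sep * sin_theta / wavelength
    amp1, amp2 = 1.0, cmath.exp(1j * phase)
    if which_path_measured:
        # Collapsed: probabilities add -> flat pattern, no fringes.
        return abs(amp1) ** 2 + abs(amp2) ** 2
    # Coherent: amplitudes add -> interference fringes.
    return abs(amp1 + amp2) ** 2

# intensity(0.0, False) -> 4.0 (bright fringe), intensity(0.1, False) -> ~0
# (dark fringe), while intensity(..., True) is always 2.0.
```

The fringes come entirely from the cross term in |amp1 + amp2|²; measuring at a slit kills exactly that term, which matches the flat single-slit-like distribution described above.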
(Disclaimer: I'm not a quantum physicist and the following is not the mainstream opinion)
Yet, it's just twisting fields together.
The right picture to have is fermions being something like knots on a rope: in a portion of space, either you have a knot or you don't. But the knot can be more or less tight, can be moving, and can have various shapes.
When the fields are not coupled, i.e. when particles are far away, the only stable solutions have discrete quantities. These are the conserved quantities preserved by the field evolution. Typically they are the quadratic values that a symplectic integrator conserves locally.
When particles get closer, they can continuously exchange some of these quantities between their fields, but as in a game of musical chairs, as soon as the particles move away from each other, they must have taken a seat and settled into one of their discrete values.
QFT, or quantum physics generally, is like keeping track of the number of knots on the ropes and modeling the probabilities of how these counts evolve upon collisions. But if you keep track of the rope's shape (i.e., the field phases), you can more precisely predict where the knots are.
The catch-22 is that the rope shape is not observable (in a similar fashion to how you can't observe the seed of a random number generator), so you can't make better predictions using the rope model than you could with quantum mechanics.
But the "answer" to this catch-22 is that even though with rope mechanics you can't compute the probabilities any faster than QM would (as marginalization isn't fast), you can simulate, with the same computational complexity as a classical system, a universe that behaves according to the probabilities of QM (convergence in law).
The case for particles is quantization. We've never seen half a photon or half an electron. This dates back to the ultraviolet catastrophe. If it were all about waves, that would be easy; we reluctantly acknowledge particles because Nature has forced us to.
Quanta are fundamental to Quantum Field Theory, so they can't be the dividing factor. I would say we are biased to think in terms of particles because our brains and senses have evolved to perceive macro objects as having a precise location and definite boundaries, so we tend to project that macro structure onto everything we want to describe.
But they still aren't waves. Waves can be split; quantum particle-waves can't. This is a fundamental difference and makes them neither waves nor particles.
It is equally wrong to view them as classical particles as it is to view them as classical waves. A classical particle with a probability wave is more accurate than either, but still not fully accurate. However, I don't think there is any better likeness than that; to explain better you'd need to teach the math and equations behind quantum mechanics, which usually takes years.
Can this probability field simply be the magnetic field? An electron has a magnetic field that travels at the speed of light while the electron crawls behind, and by the time it enters one of the slits, its magnetic field has already formed the interference pattern that will guide the electron further.
Not all waves can be split, e.g. solitons, which are so named because of their particle-like nature and can be observed in optics, water, chains of oscillators, and more.
I think that is the crux of the issue: we have waves with discrete energy. We can call them particles, but they are very different from the traditional image people have of what a particle is.
You can easily split solitons in water: just cut one in the middle and both halves will continue to live on, as the water displacement field is still there.
On the other hand, try to grab a part of the electron cloud around an atom and you will either get the entire electron or nothing. No matter what you do, you can't separate one part of the cloud from another; they are always connected. Trying to grab the electron will either remove all of the wave outside your grasp, or all of the part you tried to grab. No classical system behaves like this.
But quantum field theory doesn't replace probability distributions on particles with fields; it replaces them with probability distributions on fields.
> Switching from a particle-centric theory (like Quantum Mechanics) to a field-centric theory makes all the QM paradoxes disappear, and problems like locality, the double-slit experiment, etc., become trivial.
I feel the same way. Do you know of any references that describe the actual experiments that seemingly reveal the paradoxes, from a quantum field theory perspective? I would appreciate it if you could share them. Thanks!
I take it you're a MWI fan then? Isn't the answer for why we "force wave-describing partial differential equations into a probabilistic model" because in our reality, when we look at an electron we observe something that looks like a particle and not a wave?
I'm not a fan of the Many-Worlds Interpretation :-)
As for the electron, it is an oscillator described by a wave function, quantized, without locality. Here is an image of the wave function interpreted as a probability density:
The Quantum Mechanics interpretation is that the electron is a particle in an indeterminate location, and the plot describes the probability of where the electron can be found. The Quantum Field Theory interpretation is that what we see is a field in an excited, quantized state. By looking at those plots, we can see a quantized field vibrating. If we send it through a double slit, it will behave like a wave. If instead we think of it as a single, indivisible particle, then we need to explain how it passes through two different slits at the same time. Thinking of it as a quantized oscillator dissolves the paradox.
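To make the "wave function interpreted as a probability density" concrete, here is a small worked sketch of a standard textbook case, the hydrogen 1s state, in atomic units (an illustrative choice of units on my part). The same plotted shape is what the two interpretations read differently.

```python
import math

a0 = 1.0  # Bohr radius in atomic units (illustrative choice)

def radial_probability(r):
    # P(r) = 4*pi*r^2 * |psi_1s(r)|^2, with psi_1s = exp(-r/a0)/sqrt(pi*a0^3).
    # QM reads this as "probability of finding the particle at radius r";
    # QFT reads the same profile as the shape of a quantized field excitation.
    psi = math.exp(-r / a0) / math.sqrt(math.pi * a0 ** 3)
    return 4 * math.pi * r ** 2 * psi ** 2

# Scan a grid: the radial density peaks exactly at the Bohr radius r = a0.
grid = [i / 1000 for i in range(1, 5001)]
peak_r = max(grid, key=radial_probability)
```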
Makes sense -- but if you're saying "the electron travels through both slits at the same time because it is a wave", then why can't we detect that wave simultaneously at both slits?
At that point of measurement/detection we HAVE to start talking about probabilities, not just waves, right?
What does it mean for a wave to quantize? That is not something I (as a mathematician) am familiar with. It feels like something that bears a lot of explanation.
I would hazard a guess that the explanation makes it decently reasonable to call this process 'a particle'.
Certainly, to me it feels like saying 'it is just a wave' doesn't describe it because this quantization is a special thing.
I guess it means that the governing equation has a solution space which is somehow discrete. I would like to know if there's a more precise definition than that!
not a physicist, but afaik MWI doesn't work like that.
iirc, particles are actually excitations in a quantum field. the more particle-y an electron looks - the closer you bound its position - the more waves are needed to constructively/destructively interfere to make a peak there.
it's like a Fourier transform - if you want a perfect square wave you need infinite sine waves. in this analogy that's momentum space expanding out.
also, like, you're not really seeing individual electrons. you're seeing macroscopic phenomena, like your sensor or photomultiplier tube or whatever. you're seeing the interaction, not the particle. understanding that as your lab equipment, retina and brain entering the state space caused by resolving a wave to a spike makes more sense to me than some decoherence mechanism.
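the Fourier-transform analogy above can be checked numerically: the narrower a wave packet is in position, the wider its spectrum of plane-wave components has to be. a toy sketch with a hand-rolled DFT (grid size and packet widths are arbitrary choices of mine):

```python
import cmath, math

def gaussian(n, sigma):
    # a wave packet of width sigma, centered on an n-point grid
    return [math.exp(-((i - n // 2) ** 2) / (2 * sigma ** 2)) for i in range(n)]

def spectral_width(signal):
    # naive O(n^2) DFT -- fine for a small demonstration grid
    n = len(signal)
    power = []
    for k in range(n):
        s = sum(signal[j] * cmath.exp(-2j * math.pi * k * j / n)
                for j in range(n))
        power.append(abs(s) ** 2)
    # rms width of the power spectrum over symmetric frequencies
    freqs = [k if k < n // 2 else k - n for k in range(n)]
    total = sum(power)
    return math.sqrt(sum(f * f * p for f, p in zip(freqs, power)) / total)

# a tightly bound position (sigma = 2) needs a wide band of waves;
# a loose packet (sigma = 8) needs only a narrow band
```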
My experience is quite different: I feel extremely productive working in the terminal, and what is described in the article as "The frozen world" is for me a virtue of the UNIX philosophy: the structure is not embedded in the data, but the programs can project a structure on it. That allows programs to be filters, and text to be the only format for exchanging information. What the article describes as "The nightmare that is composition iteration" is for me a very pleasant experience: the terminal provides a very fast feedback loop where I can iteratively examine the output of a command and refine the filters I apply. As a result, I am able to prototype commands quickly and accurately.
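To make the "programs as filters" point concrete, here is a toy sketch in Python of the same composition style: each stage consumes lines of text and produces lines of text, so stages can be swapped and refined one at a time, just like commands in a pipeline. The stage names mirror the familiar UNIX tools; the log lines are made up.

```python
def grep(pattern, lines):
    # keep only lines containing the pattern, like `grep -F`
    return (line for line in lines if pattern in line)

def sort(lines):
    # sorting materializes its input, just as the real `sort` must
    return iter(sorted(lines))

def uniq(lines):
    # drop consecutive duplicates, like `uniq` (hence the sort first)
    previous = object()
    for line in lines:
        if line != previous:
            yield line
        previous = line

log = ["GET /a", "GET /b", "POST /a", "GET /a"]
result = list(uniq(sort(grep("GET", log))))
# -> ["GET /a", "GET /b"]
```

Because no structure is embedded in the data, each stage only has to agree on "lines of text", which is what makes the fast examine-and-refine loop possible.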
> the terminal provides a very fast feedback loop where I can iteratively examine the output of a command and refine the filters I apply. As a result, I am able to prototype commands quickly and accurately.
That's what any REPL does. And it works great with typed data. Sometimes even with graphical output.
> the terminal provides a very fast feedback loop where I can iteratively examine the output of a command and refine the filters I apply
An IDE would be used to develop those commands, more than compose them.
> text to be the only format for exchanging information
For this to work with non-text data, you end up using the file system as a buffer and passing file paths between commands. In many contexts, this isn't performant enough.