Arcadian Functor

occasional meanderings in physics' brave new world

Name: Marni D. Sheppeard
Location: New Zealand

Saturday, April 14, 2007

Gravity Probe B

If you are still busy sipping champagne after the MiniBooNE results, grab your hat and get ready for the next installment in the 007 Year of Physics. Gravity Probe B results will be discussed today by C. W. Francis Everitt from Stanford. For the Kiwis, that's late tonight.

As the abstract states, NASA's Gravity Probe B is designed to test two consequences of Einstein's theory, namely (1) the predicted 6.6 arc sec per year geodetic effect due to the motion of the gyroscope through the curved spacetime around the Earth, and (2) the predicted 0.041 arc sec per year frame dragging effect due to the rotating Earth.
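For the curious, both numbers can be reproduced from the standard weak-field formulas. Below is a quick sketch in Python; the ~642 km polar orbit altitude and Earth's moment-of-inertia factor of ~0.33 are assumed inputs (not from the abstract), and the frame-dragging expression is the polar-orbit average.

```python
# Rough check of the two Gravity Probe B predictions.
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8          # speed of light, m/s
M = 5.972e24         # Earth mass, kg
R = 6.371e6          # Earth radius, m
w = 7.292e-5         # Earth rotation rate, rad/s
r = R + 642e3        # orbit radius (assumed ~642 km altitude), m
v = math.sqrt(G * M / r)                 # circular orbital speed

# Geodetic (de Sitter) precession rate: (3/2) G M v / (c^2 r^2)
geodetic = 1.5 * G * M * v / (c**2 * r**2)

# Frame dragging (Lense-Thirring), averaged over a polar orbit: G J / (2 c^2 r^3)
J = 0.3307 * M * R**2 * w                # Earth's spin angular momentum (I ~ 0.33 M R^2)
frame_dragging = G * J / (2 * c**2 * r**3)

year, to_arcsec = 3.156e7, 206265.0      # seconds per year, rad -> arcsec
print(geodetic * year * to_arcsec)       # ~6.6 arcsec/yr
print(frame_dragging * year * to_arcsec) # ~0.041 arcsec/yr
```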

Garth at PF has a nice summary of various theories and their predictions.

10 Comments:

Blogger Matti Pitkänen said...

Dear Kea,

I must confess that I did not even know what Gravity Probe B is supposed to do. Thanks for the links! This is what I learned.

Matti

April 14, 2007 7:59 PM  
Blogger nige said...

This comment has been removed by the author.

April 15, 2007 3:13 AM  
Blogger nige said...

Thanks for these links! This nice experimental test will probably go wrong because the error bars will be too big to rule anything out, or whatever. If the experimental results look crazy, people will dismiss them rather than throw general relativity away; as a last resort an epicycle will be added to make general relativity agree (just as the CC was modified to make the mainstream general relativity framework fit the facts in 1998).

This post reminds me of a clip on YouTube showing Feynman in November 1964 giving his Character of Physical Law lectures at Cornell (these lectures were filmed by the BBC, which broadcast them on BBC2 TV in 1965):

"In general we look for a new law by the following process. First we guess it. Don't laugh... Then we compute the consequences of the guess to see what it would imply. Then we compare the computation result to nature: compare it directly to experiment to see if it works. If it disagrees with experiment: it's wrong. In that simple statement is the key to science. It doesn't any difference how beautiful your guess is..."

- http://www.youtube.com/watch?v=ozF5Cwbt6RY

I haven't seen the full lectures. Someone should put those lecture films on the internet in their entirety. They have been published in book form, but the actual film looks far more fun, particularly as it catches the audience's reactions. Feynman gives a nice discussion of the LeSage problem in those lectures, and it would be nice to get a clip of him discussing that!

General relativity is right at a deep level and doesn't in general even need testing for all predictions, simply because it's just a mathematical description of accelerations in terms of spacetime curvature, with a correction for conservation of mass-energy. You don't keep on testing E=mc^2 for different values of m, so why keep testing general relativity? Far better to work on trying to understand the quantum gravity behind general relativity, or even to do more research into known anomalies such as the Pioneer anomaly.

General relativity may need corrections for quantum effects, just as it needed a major correction for the conservation of mass-energy in November 1915 before the field equation was satisfactory.

The major advance in general relativity (beyond the use of the tensor framework, which dates back to 1901, when developed by Ricci and Tullio Levi-Civita) is a correction for energy conservation.

Einstein started by saying that curvature, described by the Ricci tensor R_ab, should be proportional to the stress-energy tensor T_ab which generates the field.

This failed, because T_ab doesn't have zero divergence where zero divergence is needed "in order to satisfy local conservation of mass-energy".

The zero divergence criterion just specifies that you need as many field lines going inward from the source as going outward from the source. You can't violate the conservation of mass-energy, so the total divergence is zero.

Similarly, the total divergence of magnetic field from a magnet is always zero, because you have as many field lines going outward from one pole as going inward toward the other pole, hence div.B = 0.

The components of T_ab (energy density, energy flux, pressure, momentum density, and momentum flux) don't obey mass-energy conservation because of the gamma factor's role in contracting the volume.

For simplicity if we just take the energy density component, T_00, and neglect the other 15 components of T_ab, we have

T_00 = Rho*(u_0)*(u_0)

= energy density (J/m^3) * gamma^2

where gamma = [1 - (v^2)/(c^2)]^(-1/2)

Hence, T_00 will increase towards infinity as v tends toward c. This violates the conservation of mass-energy if R_ab ~ T_ab, because radiation going at light velocity would experience infinite curvature effects!

This means that the energy density you observe depends on your velocity, because the faster you travel the more contraction you get and the higher the apparent energy density. Obviously this is a contradiction, so Einstein and Hilbert were forced to modify the simple idea that (by analogy to Poisson's classical field equation) R_ab ~ T_ab, in order to make the divergence of the source of curvature always equal to zero.
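Here is a minimal numerical illustration of that velocity dependence (Python; the rest-frame energy density is an arbitrary assumed value):

```python
# T_00 scales with the Lorentz factor squared, diverging as v -> c.
import math

c = 2.998e8        # speed of light, m/s
rho = 1.0          # rest-frame energy density, arbitrary units

for v in (0.0, 0.5 * c, 0.9 * c, 0.99 * c, 0.999 * c):
    gamma = 1.0 / math.sqrt(1.0 - (v / c)**2)
    print(f"v = {v / c:5.3f}c   T_00 = rho*gamma^2 = {rho * gamma**2:12.3f}")
# The unbounded growth as v -> c is the divergence problem with the
# naive proposal R_ab ~ T_ab described above.
```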

This was done by subtracting (1/2)*(g_ab)*T from T_ab, because T_ab - (1/2)*(g_ab)*T always has zero divergence.

T is the trace of T_ab, i.e., just the sum of scalars: the energy density T_00 plus the pressure terms T_11, T_22 and T_33 ("these four components making T are just the diagonal - scalar - terms in the matrix for T_ab").

The reason for this choice is stated to be that T_ab - (1/2)*(g_ab)*T gives zero divergence "due to Bianchi's identity", which is a bit mathematically abstract, but obviously what you are doing physically by subtracting (1/2)*(g_ab)*T is just removing from T_ab the part that was giving it a finite divergence.

Hence the corrected R_ab ~ T_ab - (1/2)*(g_ab)*T ["which is equivalent to the usual convenient way the field equation is written, R_ab - (1/2)*(g_ab)*R = T_ab"].
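The trace-reversal step is easy to sanity-check numerically. Here is a sketch using a diagonal perfect-fluid T_ab in a flat metric of signature (+,-,-,-); the density and pressure values are arbitrary assumptions, and the trace is taken with the metric (T = g^{ab} T_ab), which differs by signs from the naive diagonal sum quoted above:

```python
# Check that S_ab = T_ab - (1/2) g_ab T is the "trace reversal" of T_ab.
import numpy as np

g = np.diag([1.0, -1.0, -1.0, -1.0])   # flat metric, signature (+,-,-,-)
g_inv = np.linalg.inv(g)               # equals g for this metric

rho, p = 2.0, 0.5                      # assumed density and pressure
T_ab = np.diag([rho, p, p, p])         # perfect fluid at rest, covariant form

T = np.einsum('ab,ab->', g_inv, T_ab)  # trace: rho - 3p
S_ab = T_ab - 0.5 * g * T

print(T)                                  # 0.5
print(np.einsum('ab,ab->', g_inv, S_ab))  # -0.5: the trace has flipped sign
```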

Notice that since T_00 is equal to its own trace T, you see that

T_00 - (1/2)(g_ab)T

= T - (1/2)(g_ab)T

= T(1 - 0.5g_ab)

Hence, the massive modification introduced to complete general relativity in November 1915 by Einstein and Hilbert amounts to just subtracting a fraction of the stress-energy tensor.

The tensor g_ab [which equals (ds^2)/{(dx^a)*(dx^b)}] depends on gamma, so it simply falls from 1 to 0 as the velocity increases from v = 0 to v = c, hence:

T_00 - (1/2)(g_ab)T = T(1 - 0.5g_ab) = T where g_ab = 0 (velocity of v = c) and

T_00 - (1/2)(g_ab)T = T(1 - 0.5g_ab) = (1/2)T where g_ab = 1 (velocity v = 0)

Hence for a simple gravity source T_00, you get curvature R_ab ~ (1/2)T in the case of low velocities (v ~ 0), but for a light wave you get R_ab ~ T, i.e., there is exactly twice as much gravitational acceleration acting at light speed as there is at low speed. This is clearly why light gets deflected in general relativity by twice the amount predicted by Newtonian gravitational deflection (a = MG/r^2 where M is sun's mass).
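Putting numbers to that factor of two, for light grazing the sun (a quick sketch with standard solar values):

```python
# Newtonian vs general-relativistic deflection of starlight grazing the sun.
G = 6.674e-11        # m^3 kg^-1 s^-2
c = 2.998e8          # m/s
M = 1.989e30         # solar mass, kg
b = 6.963e8          # impact parameter ~ solar radius, m

newtonian = 2 * G * M / (c**2 * b)   # classical light deflection
einstein = 4 * G * M / (c**2 * b)    # GR value: exactly twice Newtonian

to_arcsec = 206265.0
print(newtonian * to_arcsec)         # ~0.87 arcsec
print(einstein * to_arcsec)          # ~1.75 arcsec, the 1919 eclipse result
```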

I think it is really sad that no great effort is made to explain general relativity simply in a mathematical way (if you take away the maths, you really do lose the physics).

Feynman had a nice explanation of curvature in his 1963 Lectures on Physics: gravitation contracts (shrinks) Earth's radius by (1/3)GM/c^2 = 1.5 mm, but this contraction doesn't affect transverse lines running perpendicular to the radial gravitational field lines, so the circumference of the Earth isn't contracted at all! Hence Pi would increase slightly if there are only 3 dimensions: circumference/diameter of the Earth (assumed spherical) = [1 + 2.3*10^{-10}]*Pi.
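Both of Feynman's numbers are quick to verify with standard Earth parameters:

```python
# Feynman's "excess radius": gravitation shrinks Earth's radius by GM/(3c^2)
# while leaving the circumference untouched.
G = 6.674e-11        # m^3 kg^-1 s^-2
c = 2.998e8          # m/s
M = 5.972e24         # Earth mass, kg
R = 6.371e6          # Earth radius, m

delta_R = G * M / (3 * c**2)
print(delta_R * 1e3)   # ~1.5 (mm)
print(delta_R / R)     # ~2.3e-10: the fractional excess of
                       # circumference/diameter over pi
```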

This distortion of geometry - presumably just a simple physical effect of exchange radiation compressing masses in the radial direction only (in some final theory that includes quantum gravity properly) - explains why there is spacetime curvature. It's a shame that general relativity has become controversial just because it's been badly explained using false arguments (like balls rolling together on a rubber water bed, which is a false two-dimensional analogy - and if you correct it by making it three-dimensional, with a surrounding fluid pushing objects together where they shield one another, you get censored out, because most people don't want accurate analogies, just myths).

(Sorry for the length of this comment by the way and feel free to delete it. I was trying to clarify why general relativity doesn't need testing.)

April 15, 2007 3:15 AM  
Blogger Kea said...

Sunday: it appears there are no definitive results yet, at least going by the gossip at PF.

April 15, 2007 11:49 AM  
Blogger Matti Pitkänen said...

A comment to Nige,

I find it difficult to understand why general relativity would not need testing! I would also continue testing E=mc^2! The geodesic hypothesis, which you seem to be talking about, dictates the coupling of a test particle to the gravitational field but is only a small part of GR.

Quite generally, General Coordinate Invariance is something precisely defined, but even the metric description of gravity guaranteeing general coordinate invariance allows alternatives.

An example is my own approach, in which the metric of the space-time surface is induced from an 8-D imbedding space. This places strong constraints on the metric, implying among other things that globally imbeddable cosmologies have sub-critical mass density.

The Kerr metric, regarded as a description of the exterior of a rotating star, is as such very probably not imbeddable in the 8-D imbedding space of TGD. This need not mean that the metric derived from the Maxwellian picture applying at the post-Newtonian limit would not work in the approximation considered, but the tests, in particular the test of the rotation of a gyroscope in the gravito-magnetic field, are highly interesting. Of course, if the test fails, one can stay in the GRT framework and claim that the space around the star fails to be describable by the Kerr metric for some reason, but this is definitely less appealing than the simplest description.

The precise meaning of Equivalence Principle is far from clear and varies from theory to theory. Equivalence Principle involves many aspects.

One aspect of EP is the notion of local Lorentz invariance, with full Lorentz invariance being lost. This leads to the loss of global conservation laws. Actually it is even worse: the notion of energy-momentum is lost completely. In TGD, Lorentz and Poincare invariances are global invariances of the 8-D imbedding space, so that the energy-momentum tensor is replaced by a collection of currents, and inertial and gravitational energy-momenta are well defined.

A second aspect of EP is the identification of gravitational and inertial masses: something questionable on the basis of the above. In TGD, inertial momentum is identified as the temporal average of the non-conserved gravitational four-momentum over the relevant p-adic length scale: this gives rise to particle massivation.

Best,
Matti

April 15, 2007 3:25 PM  
Blogger nige said...

This comment has been removed by the author.

April 15, 2007 9:55 PM  
Blogger nige said...

This comment has been removed by the author.

April 15, 2007 10:21 PM  
Blogger nige said...

Matti, thank you very much for your response. On the issue of tests for science, if a formula is purely based on facts, it's not speculative and my argument is that it doesn't need testing in that case. There are two ways to do science:

* Newton's approach: "Hypotheses non fingo" [I frame no hypotheses].

* Feynman's dictum: guess and test.

The key ideas in the framework of general relativity are solid empirical science: gravitation, the equivalence principle of inertial and gravitational acceleration (which seems pretty solid to me - although Dr Mario Rabinowitz writes somewhere about some small discrepancies, there's no statistically significant experimental refutation of the equivalence principle, and it has a lot of evidence behind it), spacetime (which has evidence from electromagnetism), the conservation of mass-energy, etc.

All these are solid. So the field equation of general relativity - which is key to making the well tested, unambiguous predictions (unlike the anthropic selection from the landscape of solutions it gives for cosmology, which is a selection made to fit observations, depending on how much "dark energy" you assume is powering the cosmological constant, and how much dark matter is around that can't be detected in a lab for some mysterious reason) - is really based on solid experimental facts.

It's as pointless to keep testing a formula based on solid facts - within the range of the solid assumptions on which it is based - as it is to keep testing, say, Pythagoras' theorem for different sizes of triangle. It's never going to fail (in Euclidean geometry, ie flat space), because the inputs to the derivation of the equation are all solid facts.

Einstein and Hilbert in 1915 were using Newton's no-hypotheses (no speculations) approach, so the basic field equation is based on solid fact. You can't disprove it, because the maths has physical correspondence to things already known. The fact that it predicts other things, like the deflection of starlight passing the sun being twice the amount predicted by Newton's law, is a bonus, and produces popular media circus attention if hyped up.

The basic field equation of general relativity isn't being tested because it might be wrong. It's only being tested for psychological reasons and publicity, and because of Popper's false idea that theories must forever remain falsifiable (ie, uncertain, speculative, guesswork).

The failure of Popper's scheme is that it doesn't allow for proofs of laws which are based on solid experimental facts.

Take Archimedes' proof of the law of buoyancy in On Floating Bodies. The water is X metres deep, and the pressure in the water under a floating body is the same as that at the same height above the seabed whether or not a boat is above it. Hence, the weight of water displaced by the boat must be exactly equal to the weight of the boat, so that the pressure is unaffected whether or not a boat is floating above a fixed submerged point.
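The pressure-balance argument can be made concrete with a toy calculation (the boat mass and water density here are assumed values):

```python
# Archimedes: pressure at a fixed depth is unchanged by a floating body,
# so the displaced water's weight must equal the boat's weight.
rho_water = 1000.0   # kg/m^3
g = 9.81             # m/s^2
m_boat = 5000.0      # kg, assumed boat mass

V_displaced = m_boat / rho_water         # volume the hull must displace
print(V_displaced)                       # 5.0 m^3
print(rho_water * V_displaced * g)       # weight of displaced water, N
print(m_boat * g)                        # weight of boat, N: identical
```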

This law is not falsifiable. Nor are other empirically-based laws. The whole idea of Popper that you can falsify a solidly empirically based scientific theory is just wrong. The failures of epicycles, phlogiston, caloric, vortex atoms, and aether are due to the fact that those "theories" were not based on solid facts, but upon guesses. String theory is also a guess, but not a Feynman-type guess (string theory is really just postmodern ***t in the sense that it can't be tested, so it's not even a Popper-type ever-falsifiable speculative theory; it's far worse than that: it's "not even wrong" to begin with).

Similarly, Einstein's original failure with the cosmological constant was a guess. He guessed that the universe is static and infinite without a shred of evidence (based on popular opinion and the "so many people can't all be wrong" fallacy). Actually, from Olbers' paradox, Einstein should have realised that the big bang is the correct theory.

The big bang idea goes right back to Erasmus Darwin in 1791 and Edgar Allan Poe in 1848, and was basically a fix to Olbers' paradox: if the universe is infinite, static and not expanding, the light from the infinite number of stars in all directions would make the entire sky as bright as the sun. The fact that the sun is close to us and gives a higher inverse-square-law intensity than a distant star is balanced by the fact that the number of stars covering any given solid angle of sky grows as the square of their distance. (The correct resolution of Olbers' paradox is not, contrary to popular accounts, the limited size of the universe in the big bang scenario, but the redshift of distant stars in the big bang: with increasing distance we look back in time, and in the absence of redshift we'd see extremely intense radiation from the high-density early universe at great distances.)
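The shell-counting argument is worth making explicit: in an infinite, static, transparent universe, each spherical shell of stars of equal thickness contributes the same flux, so the total diverges. A sketch (uniform star density and luminosity assumed, arbitrary units):

```python
# Olbers' paradox: a shell at radius r of thickness dr contributes
# n * 4*pi*r^2 * dr * L / (4*pi*r^2) = n * L * dr, independent of r.
n = 1.0e-3   # star number density, arbitrary units
L = 1.0      # luminosity per star, arbitrary units
dr = 1.0     # shell thickness

total = 0.0
for k in range(1, 11):
    total += n * L * dr          # the r^2 factors cancel exactly
    print(f"shells out to r = {k * dr:4.1f}: total flux = {total:.4f}")
# The sum grows without bound: an infinite static universe would have a
# sky as bright as a stellar surface, absent redshift.
```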

Erasmus Darwin wrote in his 1791 book 'The Botanic Garden':

‘It may be objected that if the stars had been projected from a Chaos by explosions, they must have returned again into it from the known laws of gravitation; this however would not happen, if the whole Chaos, like grains of gunpowder, was exploded at the same time, and dispersed through infinite space at once, or in quick succession, in every possible direction.’

So there was no excuse for Einstein in 1916 to go with popular prejudice and ignore Olbers' paradox, ignore Darwin, and ignore Poe. What was Einstein thinking? Perhaps he assumed the infinite eternal universe because he wanted to discredit 'fiat lux' and thought he was safe from experimental refutation in such an assumption.

So Einstein in 1916 introduced a cosmological constant that produces an antigravity force which increases with distance. At small distances, say within a galaxy, the cosmological constant is completely trivial because its effects are so small. But at the average distance of separation between galaxies, Einstein made the cosmological constant take the right value so that its repulsion would exactly cancel out the gravitational attraction of galaxies.

He thought this would keep the infinite universe stable, without continued aggregation of galaxies over time. As is now known, he was experimentally refuted over the cosmological constant by Hubble's observations of redshift increasing with distance - a uniform redshift of the entire spectrum of light caused by recession, not the result of scattering of light by dust (which would be a frequency-dependent redshift) or "tired light" nonsense.

However, the Hubble disproof is not the substantive one to me. Einstein was wrong because he built the cosmological constant extension on prejudice rather than facts, he ignored the evidence of Olbers' paradox, and in particular his model of the universe is unstable. His cosmological constant fix obviously suffered from the drawback that galaxies are not all spaced at the same distance apart, and his attempt to produce stability in an infinite, eternal universe failed physically because it was not a stable solution. Once one galaxy is slightly closer to another than the average distance, the cosmological constant can't hold them apart, so they'll eventually combine, and that will set off more aggregation.

The modern application of the cosmological constant (to prevent the long-range gravitational deceleration of the universe from occurring, since no deceleration is present in the redshift data of distant supernovae etc) is now suspect experimentally, because the "dark energy" appears to be "evolving" with spacetime. But it's not this experimental (or rather observational) failure of the mainstream Lambda-Cold Dark Matter model of cosmology which makes it pseudoscience. The problem is that the model is not based on science in the first place. There's no reason to assume that gravity should slow the galaxies at great distances. Instead,

"... the flat universe is just not decelerating, it isn’t really accelerating..."

The reason it isn't decelerating is that gravity, contraction, and inertia are ultimately down to some type of gauge boson exchange radiation causing forces, and when this exchange radiation passes between masses receding over vast distances, it gets redshifted, so its energy drops according to Planck's law E = hf. That's one simple reason why general relativity - which doesn't include quantum gravity with this redshift of gauge bosons - falsely predicts a gravitational deceleration that wasn't seen.
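The uncontroversial piece of that claim - that any quantum redshifted by recession arrives with less energy - is just E = hf with the received frequency reduced by a factor of 1 + z. A minimal sketch (the emitted frequency is an arbitrary assumed value; whether redshifted exchange radiation actually weakens gravity is the commenter's conjecture, not established physics):

```python
# Redshift lowers quantum energy: E = h*f and f_received = f_emitted/(1+z).
h = 6.626e-34        # Planck's constant, J s
f_emitted = 1.0e15   # Hz, assumed emission frequency

for z in (0.0, 0.5, 1.0, 3.0, 7.0):
    E = h * f_emitted / (1.0 + z)
    print(f"z = {z:3.1f}   received quantum energy = {E:.3e} J")
```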

The mainstream response to the anomaly - adding an epicycle (dark energy, a small positive CC) - is just what you'd expect from mathematicians, who want to make the theory endlessly adjustable and non-falsifiable (like Ptolemy adding more epicycles to overcome errors).

Many thanks for the discussion you gave of issues with the equivalence principle. I can't grasp what the problem is with inertial and gravitational masses being equal, to within experimental error, to many decimals. To me it's a good solid fact. There are a lot of issues with Lorentz invariance anyway, so its general status as a universal assumption is in doubt, although it certainly holds on large scales. For example, any explanation of fine-graining in the vacuum to explain the UV cutoff physically is going to get rid of Lorentz invariance at the scale of the grain size, because that will be an absolute size. At least this is the argument Smolin and others make for "doubly special relativity", whereby Lorentz invariance only emerges on large scales. Also, from the classical electromagnetism perspective of Lorentz's original theory, Lorentz invariance can arise physically from contraction of a body in the direction of motion through a physically real field of force-causing radiation, or whatever the causative agent in quantum gravity is.

Many thanks again for the interesting argument. Best wishes, Nige

April 15, 2007 10:24 PM  
Blogger Matti Pitkänen said...

Dear Nige,

thank you for comments.

I agree with you in many respects. I think the differences between theories emerge when one begins to model simple systems like rotating stars. Here one encounters what Einstein regarded as the basic weakness of his theory: the side of his equations involving the energy-momentum tensor is an ad hoc construct. Assuming a gravitational vacuum, one gets rid of the problem in models of the star's exterior.

In the TGD framework, inertial vacua replace gravitational vacua as the first guess. There is however a huge variety of vacuum extremals, and an improved guess is stationarity, meaning that the gravitational four-momentum currents are conserved. This model should apply to the asymptotic states of stars. It means that Einstein's action for the induced metric determines the space-time surface. One obtains a generalization of the minimal surface equations with the metric replaced by the Einstein tensor. I have earlier constructed a model for the asymptotic state of a star using this guess: one prediction is that the star necessarily rotates.

I checked what the simplest possible vacuum extremal model for the magneto-gravitational field in the TGD framework would be.

A dipole field does not allow imbedding as a vacuum extremal. The field resulting from the simplest possible deformation of the Schwarzschild metric behaves as 1/r^3 but is a dipole field only at the gravito-magnetic equator. The gravito-magnetic flux flows along the z-axis, emanates radially from it, and flows along the spherical surface, so that the radial component is absent completely. Near the poles the field diverges as 1/sin(theta). This prediction is certainly testable, and a positive result would kill the simplest GRT model; one would then be forced to ask whether an energy-momentum tensor reproducing this kind of gravito-magnetic field can be justified naturally in the GRT context.

The magnetic parts of gauge fields necessarily have the same behavior too, and of course dominate the Lorentz force on charged particles. Helical orbits around the z-axis could relate to the jets associated with supernovas and to galactic jets.

Personally I regard dark matter as an attractive working hypothesis, but as something much more profound than an ad hoc particle with an exotic name. But this is of course an experimental question.


Matti

April 16, 2007 12:48 AM  
Anonymous Anonymous said...

Hello. Nige says: "The key ideas in the framework of general relativity are solid empirical science: gravitation, the equivalence principle... spacetime (which has evidence from electromagnetism), the conservation of mass-energy, etc."

Spacetime certainly isn't solid empirical science, Nige. We don't understand time well enough - motion through time is thought to be an illusion, but we can't explain it, and SR describes things that would look very odd without it. You have to assume two different levels, one at which motion through time happens, and another at which it doesn't. This is an incomplete picture, with many unanswered questions.

And what's more, spacetime leads directly to the idea that the future is already fixed and decided. A single point from SR leads specifically to this, concerning simultaneity at a distance. And yet quantum theory tells us very clearly that the future is undecided until it happens - events are random, and we can only get the probabilities.

The difference between a fixed future and an unfixed one is an enormous difference, not just a little tweaking required to either spacetime or quantum theory. The only reason you are capable of writing a post like yours is that these issues are swept under the carpet by people who derive a sense of security and self-esteem from established physics. For centuries there have been people who try to paper over the cracks (which kills science, or rather delays its progress), while the innovators do the opposite - they seek out the cracks in the edifice, and have the courage to look right into them.

Jonathan

July 15, 2007 10:33 PM  
