Setterfield and the variable speed of light model

Discussion in 'Creation vs. Evolution' started by Helen, Apr 3, 2003.

  1. Paul of Eugene New Member

    Joined:
    Oct 30, 2001
    Messages:
    2,782
    Likes Received:
    0
    OK, so Barry maps the 14 BILLION year history of the universe into 8000 annual earth orbits instead of 6000. In terms of the discrepancy between this assertion and the observations, there is practically no difference.

    You'd better have some editing done on that chart as well. Redshifts (z values) for galaxies and quasars are now measured out to 3 and 4. And that explanation of the meaning of the z scale is way off. Would you like some technical help in explaining that?

    I understand that. I have mentioned this so many times that I thought you would see it yourself: it is the appearance of movement under gravitational control that should seem to slow when the light that brings us that appearance slows on its way to us. In other words, the only way gravitationally controlled behavior can appear to move at normal speeds now, while light was moving faster then, is if the gravitationally controlled movements were ALSO faster then. But there goes your alternative timing factor! You don't want the absurdity of having the earth orbit billions of times after all, just because the orbiting process sped up in lockstep with the speed of light!

    Let me know if you'd like help in phrasing the meaning of the z scale in astronomy.
     
  2. Helen

    Joined:
    Aug 29, 2001
    Messages:
    11,703
    Likes Received:
    2
    Paul,

    The c/redshift chart could go all the way up to whatever number you like, but the basic curve is there and the origin remains the same.

    Regarding movement and c -- Barry has already responded to all that in the astronomy section, here. He has lectured in astronomy before, so no thank you, he doesn't need your help. Kind of you to offer, though.

    http://www.setterfield.org/AstronomicalDiscussion.htm

    Don't you think, again, it would be good to check his material before coming up with some of this stuff?
     
  3. Peter101 New Member

    Joined:
    Mar 2, 2003
    Messages:
    518
    Likes Received:
    0
    In a post just a few hours before this one, I posted Day's criticism of Barry's perfect correlation coefficient. Why is it that you will offer an explanation of almost anything but somehow not this?

    from the Administrator: this was edited to remove a personal slur towards another member. This is your last warning about this, Peter101. If you would like to discuss this please email the Administrators at science@baptistmail.com.

    [ April 30, 2003, 07:58 AM: Message edited by: Administrator ]
     
  4. Helen

    Joined:
    Aug 29, 2001
    Messages:
    11,703
    Likes Received:
    2
    Peter, as before stated, the figure you are referring to is regarding a trend. Please read Barry's specific reply regarding this. Thank you.

    It is on page three of this thread, with the posting date and time of April 28, 2003 12:27 PM
     
  5. Peter101 New Member

    Joined:
    Mar 2, 2003
    Messages:
    518
    Likes Received:
    0
    >>>>>Peter, as before stated, the figure you are referring to is regarding a trend. Please read Barry's specific reply regarding this. Thank you.

    It is on page three of this thread, with the posting date and time of April 28, 2003 12:27 PM<<<<<<<

    Helen, there is nothing on the page you refer to above that replies to the issue that I raised. If you are talking about Barry's comments on the Student's t test, that must be a comment on some later work, and not a response to Day's critique. Day's critique is the issue that I have been trying to get the two of you to address, and it does not involve Student's t test. I have been quite clear about precisely which issues Setterfield has not responded to, and it is frustrating after several days of discussion for you not to understand what I am talking about. Please read Day's critique on the Internet to be sure you understand what it is I am talking about. I don't begrudge you the right to refuse to answer Day's critique, but if you want to take that tack, please do so explicitly, so that we can end this matter. I won't let you claim to have responded to it when in fact you have not.
     
  6. Peter101 New Member

    Joined:
    Mar 2, 2003
    Messages:
    518
    Likes Received:
    0
    >>>>>>>>>>from the Administrator: this was edited to remove a personal slur towards another member. This is your last warning about this, Peter101. If you would like to discuss this please email the Administrators at science@baptistmail.com<<<<<

    I am not sure what triggered the annoyance of the moderator, but I will write to find out. I suspect that the moderator is influenced by an emotional attachment to creationism. Determining what is a slur and what is not is, after all, a subjective matter.
     
  7. Helen

    Joined:
    Aug 29, 2001
    Messages:
    11,703
    Likes Received:
    2
    OK, Peter, Barry took the time to look up his old work. If Day, or you, or anyone had actually read more of his work even from that time, any of you would have come across this from Ex Nihilo, Volume 5 Number 3, January 1983, in an article entitled "The Velocity of Light and the Age of the Universe (Part One b)":

    For a perfect fit to the data, r^2=1 and all points lie exactly on the curve. In the abbreviated Part 1 that appeared locally in Australia, the value of r^2 was quoted as 1 to nine figures. It was subsequently noticed that it had been obtained at an incorrect point on the computer programme, and a check gave the value as r^2=0.99+, which appeared in the International Edition.

    Preceding this in the article are the exact equations that were plugged into the computer to obtain this result.

    For the data they were able to get, and the computer program they were using, the results are right. However, please remember that the fewer data points you have, the more perfect the fit. This is one reason we have been saying over and over again that a lot of this work was published prematurely and under pressure from others. This is why Barry has requested that ONLY his work from 1987 onward be dealt with, as it was not until that report came out that all the data points had been collected and worked with.

    Now, can we PLEASE talk about his actual work and not material that has been so totally superseded in the years since the material Day chose to concentrate on came out? Day should have known better, anyway, than to pick on someone's first material when so much more recent material, with better data and a more complete set of analyses, was available.

    But if that is all Talk Origins can do, that's them. In the meantime, again, Barry has continued FAR past that early data and we would respectfully ask you and everyone else to read the papers and discussions on his website.

    Thank you.
     
  8. Peter101 New Member

    Joined:
    Mar 2, 2003
    Messages:
    518
    Likes Received:
    0
    Helen,

    After about 10 attempts on my part, you are at last answering the question that I asked, rather than something else. Thanks...I think.

    I suspect that Day's criticisms are still valid and that a correlation coefficient of even 0.99 is still in error. I notice that Barry's usual defense is that such-and-such a result came from the computer. Those who have some experience with computers know that this type of rationale means nothing.

    Also, it seems that Setterfield's original paper gave a correlation coefficient of 1 and not 0.99. But as Day points out, that was revised downward in later publications as other criticisms came in.

    >>>>>>>>Helen: Now, can we PLEASE talk about his actual work <<<<<<<<

    I thought that the 1981 paper, published in a creationist journal, was actual work.

    [ April 30, 2003, 05:27 PM: Message edited by: Peter101 ]
     
  9. Paul of Eugene New Member

    Joined:
    Oct 30, 2001
    Messages:
    2,782
    Likes Received:
    0
    It occurs to me that all my faithful readers out there might feel left out of the discussion. I've been referring to the "slowing" aspect Setterfield theory requires without, perhaps, taking the time to help a new reader understand just what I'm talking about. So for those who are new to the discussion, I'll try to explain my point in a way that makes it very clear.

    Consider the great galaxy in Andromeda. It is the very nearest full-sized galaxy, measured to be 2.8 million light years away. Setterfield theory would have it that light from that galaxy has traversed that distance within 8000 of our years, due to a highly increased speed of light in the past compared to now.

    If light had the same average speed from then to now, it would mean light traveled over 300 times faster than it does today (2.8 million light years in 8000 years is a factor of 350). But the theory has the speed changing along a continuous curve, so I will estimate that the light we now see the Andromeda Galaxy by left it traveling about 700 times faster than light travels today. The discussion that follows is not materially altered by any reasonable value consistent with Setterfield theory.

    Now let us consider a thought experiment. Suppose there was a planet identical to ours going around a sun identical to ours over there. I do not require that it have life; but suppose an angel went there back in time and broadcast a radio message we could receive here on earth. Let's further suppose that we turn on our radios and receive that message right now. That angel on that planet continues to broadcast his message for a whole year as measured by his local planetary orbit, which is a duplicate of the earth's orbit in our galaxy at that same time so long ago. Then he signs off. He timed it just right so that here, today, we pick up the beginning of his message.

    When the radio waves left the Andromeda Galaxy long ago - an unspecified time, whatever makes it right in Setterfield theory - they were traveling our estimated 700 times c. So we have a very long radio beam moving towards us, 700 light years long, that was created within a single year by the radio broadcast from that angel. Note that the year-long interval itself is not considered to be different; rather, the length of the continuous radio beam headed towards earth is what is different.

    The long radio beam continued on and slowed down as it came to us, and today we pick it up traveling at "only" the regular speed of light. But get this - it is still 700 light years long! How long will we have to wait before we hear the end of that year-long broadcast? 700 years, of course! The year will then appear to us to be slowed, stretched out by the slowing of the light on its way to us.
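
    For anyone who wants to check the arithmetic, here is a rough sketch in Python using the round numbers above. (The 700 figure is only my rough estimate, not a value taken from Setterfield's papers.)

[code]
# Rough arithmetic for the thought experiment above (round numbers only).
distance_ly = 2.8e6    # stated distance to the Andromeda galaxy, in light years
trip_years = 8000      # years allowed on the Setterfield timescale
print(distance_ly / trip_years)    # average speed needed: 350 times today's c

# Assumption: the light now arriving left Andromeda at roughly 700 times today's c.
speed_ratio = 700      # estimated past speed of light relative to the present speed
broadcast_years = 1    # the angel broadcasts for one local (dynamical) year

# While light moves at 700 c, a one-year broadcast stretches into a beam
# 700 light years long before the last of it has even been emitted.
beam_length_ly = speed_ratio * broadcast_years

# By the time the beam reaches us it has slowed to today's c (1 light year per year),
# so the whole beam takes 700 years to sweep past the earth.
wait_years = beam_length_ly / 1.0
print(beam_length_ly, "light years of beam;", wait_years, "years to hear it all")
[/code]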

    Extrasolar planets have already been observed in our own galaxy, detected by the "wobble" they cause in their host star, as seen in the Doppler shifting of the starlight. It is conceivable that someday the same technique will work for an individual star in the Andromeda galaxy. Shall we expect the orbits to be slower by a factor of 700, in accordance with Setterfield theory?

    Now there is no such angel and no such broadcast. But we do have substitutes: we have the actual measured rotation of the galaxy itself, for example, which is also an orbital motion, and we have the swelling and contracting of Cepheid variables, also a gravitationally controlled event, and these do NOT show any signs of the slowing Setterfield theory implies. We have observations of x-ray binary stars, and their orbits about each other fall into the expected time frames. This is what I'm talking about when I say astronomical observations disprove Setterfield's CDK theory.

    [ May 02, 2003, 02:25 PM: Message edited by: Paul of Eugene ]
     
  10. mdkluge Guest

    Helen writes:
    ROTFL! Setterfield's results were not only wrong for his data and program, but were so for ANY data or program. Setterfield's problem wasn't that his computer program gave him a patently wrong answer of a correlation coefficient of .9999999+; the problem is that Setterfield didn't IMMEDIATELY recognize the mistake. Instead he PUBLISHED what any reputable researcher would have immediately recognized as garbage. I do not exaggerate here. Whether mathematician, biologist, astronomer, physicist, or social scientist, it is doubtful that anyone in the whole world has ever failed to detect such an error to the extent of actually publishing it, EVER!

    Setterfield's blunder has a name in the vernacular--GIGO--Garbage In, Garbage Out. In Setterfield's case the garbage in wasn't his data, but rather the program he used and relied upon to analyze his data. A correlation coefficient of .99999999 was a red flag that something was wrong, VERY wrong. While it is better that he discovered his blunder after publication than not at all, it is inexcusable that he did not do so prior to publication. That the same man subsequently instructs us in statistics is just plain laughable. If I had made such a mistake in my youth I would be spending much time and effort mending my deservedly bad scientific and statistical reputation. It would be a high priority to convince scientific colleagues and skeptics that I am no longer the same statistician who published that string of nines 22 years ago, and I would seek to reassure them that I had learned from my blunder. I certainly wouldn't blame my old troubles on a computer program or data in front of competent colleagues who know better.

    Except for the trivial case of two data points for a linear fit (or three for a quadratic, etc.), it is not true that quality of fit has anything to do with the number of data points. That should be obvious and needs no explanation. If saying such things marks Setterfield's improvement in statistical technique since the string-o-nines incident, then I fear he has a very long way to go before criticizing others' statistics.

    What does this say about those "others" who, reading Setterfield's work and noting how, nimbly as a cat, he whipped out his string-o-nine tails, either didn't notice this elementary blunder, or else pressured him to publish anyway?

    THE PROBLEM AIN'T WITH THE DATA. IT'S WITH THE ANALYSIS.
     
  11. mdkluge Guest

    Paul of Eugene: I think you are on a good track for explaining clearly the absurdity of Setterfield's construction. I would like to get one thing clear, though. In your example you said that the angel broadcasts for one year. I assume that by one year you mean the period of time (whether "dynamic" or "atomic" in Setterfield's lexicon) taken to complete one revolution of his planet about its sun.
     
  12. mdkluge Guest

    Peter wrote:
    In fairness and accuracy, there are some criticisms that can be drawn against Aardasma's paper. As, I think, Alan Montgomery noted in the CARM forum, when you do a least squares fit you should be dealing with a homogeneous dataset. While Aardasma's weighting scheme does minimize the sum of the squared deviations normalized to each point's uncertainty squared (as given by the authors producing each point), one can readily agree with Setterfield that this might not produce a physically meaningful fit for datasets where the uncertainty is a strongly decreasing function of time. It is true that Aardasma's weighting means that old data have very little influence on the fitting parameters, and that effectively one just confines oneself to a relatively small and recent interval for one's analysis. In that sense it is correct to condemn Aardasma's weighted fit to the data.

    However, I interpret Aardasma's treatment another way. I find nothing in his paper actually defending or recommending the use of his weighted least squares fitting to determine whether, or how fast, measurements of the speed of light have decreased over time. Rather, Aardasma's method should be viewed as a sort of straw man. Its weightings are offered to fix an obvious defect in Setterfield's unweighted analysis. That Aardasma's weighted method is inadequate to the task is not the point. Aardasma's weighting does solve a problem of Setterfield's analysis, and if Aardasma's own method also fails for the inhomogeneous dataset he is studying, how much more does Setterfield's unweighted method fail for the same reason?
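
    For readers who want to see concretely what kind of weighting is being described, here is a minimal sketch in Python with toy numbers of my own (not Aardasma's or Setterfield's actual data): an unweighted straight-line fit treats every point equally, while weighting each squared deviation by 1/sigma^2 lets the precise recent points dominate the answer.

[code]
import numpy as np

# Toy data: year, measured "c" value, and quoted uncertainty sigma.
# Entirely made-up numbers, used only to show the mechanics of the two fits.
year  = np.array([1700.0, 1750.0, 1800.0, 1850.0, 1900.0, 1950.0])
c_obs = np.array([301000., 299500., 300200., 299820., 299795., 299792.5])  # km/s
sigma = np.array([5000.,   2000.,   500.,    100.,    10.,     0.5])       # km/s

def line_fit(x, y, w=None):
    """Least-squares line y = a + b*x; w are the weights (e.g. 1/sigma**2) or None."""
    if w is None:
        w = np.ones_like(x)
    xbar = np.sum(w * x) / np.sum(w)
    ybar = np.sum(w * y) / np.sum(w)
    b = np.sum(w * (x - xbar) * (y - ybar)) / np.sum(w * (x - xbar) ** 2)
    a = ybar - b * xbar
    return a, b

_, slope_unweighted = line_fit(year, c_obs)
_, slope_weighted   = line_fit(year, c_obs, w=1.0 / sigma**2)

print("unweighted slope (km/s per year):", slope_unweighted)
print("weighted   slope (km/s per year):", slope_weighted)
# The weighted slope is driven almost entirely by the last few precise points,
# which is exactly the behaviour being debated above.
[/code]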
     
  13. Helen

    Joined:
    Aug 29, 2001
    Messages:
    11,703
    Likes Received:
    2
    All Barry and I can do is laugh now. If you folks have to go back 20 years to an incomplete data set and start hollering about that material, and if Paul is consistently choosing to misunderstand what Barry is saying today, then Barry is doing fine!

    Have a good time. If nothing else, the fact that this is what you folks want to concentrate on tells me more about the validity of Barry's work than any of the support and compliments we get from well-wishers and supporters.

    We thank you.
     
  14. Paul of Eugene New Member

    Joined:
    Oct 30, 2001
    Messages:
    2,782
    Likes Received:
    0
    Yes, of course: the definition of a year. I believe Setterfield would term the orbiting of a planet around its sun "dynamic" rather than "atomic", and I would welcome any correction or discussion from Helen or Barry. It's the local planet for the angel, of course, a dynamical copy of earth at the same time in our galaxy. I'm taking the liberty of going back and editing the original post to make that point clear. Thank you for the clarification suggestion.

    [ May 02, 2003, 02:29 PM: Message edited by: Paul of Eugene ]
     
  15. Peter101 New Member

    Joined:
    Mar 2, 2003
    Messages:
    518
    Likes Received:
    0
    >>>However, please remember that the fewer data points you have, the more perfect the fit.<<<<<

    As Mark pointed out, the above idea is mistaken.

    But the whole idea of trying to draw a line with high accuracy through points of wildly varying uncertainty is a big mistake. The oldest methods give the speed of light with a much larger uncertainty than the results of the last century, which have very small uncertainties. In other words, the quality of the data was improving rapidly over those two or three centuries. It is perhaps not surprising that as the quality of the speed of light measurements improves, there is less and less "evidence" that light speed is slowing down. In other words, Setterfield's conclusion depends strongly on older measurements of low quality. So I would simply conclude that his results show nothing but the changes in the quality of the measurements. As my Ph.D. mentor used to say of scientists who try to extend their conclusions beyond the data, "He is pushing too hard." What that means is that one should not try to stake an important claim on small and statistically questionable effects. There is uncertainty in any measurement, and quite often we don't really know the size of that uncertainty. Even if Setterfield's statistical treatment is perfect, and we doubt that it is, one cannot know for sure that all sources of uncertainty have been included and accounted for in the points that contribute to Setterfield's line.
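
    To illustrate what I mean by statistically questionable, here is a toy simulation in Python (a sketch with made-up numbers, not the real measurement set): the "true" value is held constant, the old measurements are given large uncertainties and the recent ones small uncertainties, and an ordinary unweighted line is fitted each time. Sizable apparent "decay" slopes turn up in a substantial fraction of the runs from noise alone.

[code]
import numpy as np

# Constant true value plus noise whose size shrinks with time (all numbers made up).
rng = np.random.default_rng(0)
year   = np.array([1700., 1750., 1800., 1850., 1900., 1950.])
sigma  = np.array([5000., 2000.,  500.,  100.,   10.,   0.5])   # km/s
c_true = 299792.458                                             # held constant here

slopes = []
for _ in range(10000):
    c_obs = c_true + rng.normal(0.0, sigma)       # unbiased noise, per-point sigma
    slopes.append(np.polyfit(year, c_obs, 1)[0])  # ordinary (unweighted) fit slope

slopes = np.array(slopes)
print("spread of fitted slopes (km/s per year):", slopes.std())
print("fraction of runs with an apparent decay steeper than 5 km/s per year:",
      np.mean(slopes < -5.0))
[/code]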
     
  16. Paul of Eugene New Member

    Joined:
    Oct 30, 2001
    Messages:
    2,782
    Likes Received:
    0
    oops double post sorry
     
  17. mdkluge Guest

    Helen writes:
    [quote]All Barry and I can do is laugh now. If you folks have to go back 20 years to an incomplete data set and start hollering about that material,...[/quote]

    OK, let's discuss something more recent--like from yesterday:

    [quote]However, please remember that the fewer data points you have, the more perfect the fit.[/quote]

    Helen, you wrote that. Is this also the statistical opinion of Setterfield? Did he even read it, much less approve?

    Please, Helen, tell us that it isn't Setterfield's opinion, that it was something silly you said, that you take full responsibility for it, and that Setterfield himself has explained to you at length how fewer points does not generally or even usually lead to a higher absolute correlation.

    Pretty please...

    Pretty please with sugar on it...
     
  18. Helen

    Joined:
    Aug 29, 2001
    Messages:
    11,703
    Likes Received:
    2
    Mark, if you draw a line between two points, you have a perfect fit for those two points. The more points you have, the less likely you are to have a perfect fit for a line or a curve or any function you like. It was a simple logical statement. Please don't take it to be something it was not meant to be.
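
    If it helps, here is a trivial numerical illustration of that simple statement, a sketch in Python with made-up numbers: a straight line through any two points fits them exactly, while the best straight line through many scattered points does not.

[code]
import numpy as np

def r_squared(x, y, degree=1):
    """r^2 of a least-squares polynomial fit of the given degree."""
    coeffs = np.polyfit(x, y, degree)
    fit = np.polyval(coeffs, x)
    ss_res = np.sum((y - fit) ** 2)
    ss_tot = np.sum((y - np.mean(y)) ** 2)
    return 1.0 - ss_res / ss_tot

# Any two points: a straight line passes through them exactly, so r^2 = 1.
print(r_squared(np.array([1.0, 2.0]), np.array([3.0, 7.0])))   # prints 1.0 (to rounding)

# Many scattered points: the best straight line no longer fits them exactly.
x = np.linspace(0.0, 10.0, 30)
y = 2.0 * x + 1.0 + np.random.default_rng(1).normal(0.0, 1.0, x.size)
print(r_squared(x, y))                                         # less than 1
[/code]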
     
  19. mdkluge Guest

    Setterfield, in his article "ATOMIC QUANTUM STATES, LIGHT, AND THE REDSHIFT" at http://www.setterfield.org/quantumredshift.htm#isotropcandrel, actually writes down Maxwell’s Equations using genuine calculus. He discusses their application to his theory of time-varying c. This is good, since physics is mathematical and physicists are accustomed to communicating amongst themselves with mathematics, and Setterfield’s articles are notorious, if for nothing else, for their lack of substantial mathematics. The inclusion of Maxwell’s Equations, then, comes as a welcome surprise. Unfortunately he blunders in their use. It turns out that the time-dependence he introduces can be removed by a trivial time coordinate transformation, yielding the conventional, constant-c Maxwell’s Equations.

    I have previously noted that Setterfield’s introduction of a time-dependent permeability of free space lacks physical motivation. The permeability of free space, mu, is DEFINED to have the value of 4*pi*10^-7 MKS units. It enters physics through the proportionality parameter in Ampere’s law, which states that the magnetic field produced by a current is proportional to that current. We do not, however, measure magnetic fields except by the forces they exert on currents or moving charges; those forces are taken to be proportional to the magnetic fields exerting them. Thus we have two current-carrying wires exerting forces on each other proportional to the product of their currents. That is what is measurable.

    If, for some geometry of wires, k1 is the proportionality constant between the current in wire 1 and the magnetic field it produces at wire 2, and k2 is the proportionality constant between the force exerted on wire 2 and the product of that magnetic field and the current in wire 2, then all we can measure is k1*k2, the ratio of the force between the wires to the product of their currents; we cannot measure k1 or k2 separately. We cannot separately measure the magnetic field strength. We must adopt some arbitrary convention, and the convention adopted is to choose units of magnetic field strength so that B1 = mu * I1/(2*pi*r), where r is the distance from wire 1, mu is as defined above, I1 is the current in wire 1, and B1 is the magnetic field strength due to that current at a distance r from the wire.
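
    To make that point concrete, here is a toy numerical sketch in Python (my own illustration, with arbitrary numbers, not anything from Setterfield or Aardasma): however the proportionality is split between "current in wire 1 produces a field at wire 2" (k1) and "that field exerts a force on the current in wire 2" (k2), the measurable force per unit length comes out the same, because only the product k1*k2 enters.

[code]
import math

# Two long parallel wires a distance r apart, carrying currents I1 and I2.
mu = 4 * math.pi * 1e-7          # the defined value, MKS units
I1, I2, r = 2.0, 3.0, 0.05       # amperes, amperes, metres (arbitrary numbers)

def force_per_length(k1):
    """F/L with an arbitrary split: B2 = k1*I1/r, F/L = k2*B2*I2, and k1*k2 = mu/(2*pi)."""
    k2 = (mu / (2 * math.pi)) / k1
    B_at_wire_2 = k1 * I1 / r    # "field strength" in whatever units this split implies
    return k2 * B_at_wire_2 * I2

# The conventional split (k1 = mu/(2*pi), k2 = 1) and two other arbitrary splits
# all give the same measurable force per unit length.
for k1 in (mu / (2 * math.pi), 1.0, 12345.0):
    print(force_per_length(k1))
[/code]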

    Turning to Setterfield’s version of Maxwell’s Equations, let us see how the time-dependence leaves them. In Setterfield’s theory he gives Maxwell’s equations in vacuum as:
    mu div H = 0
    epsilon div E = 0
    curl E = - mu (d H/d t)
    curl H = epsilon (d E/d t).

    (I do not have a “curly dee” partial derivative sign, so I am using just a plain lower-case “d” here. Since I will not be dealing with total time derivatives (as, for example, in fluid mechanics) this should cause no confusion.)

    For arbitrary disparate time-dependences of mu and epsilon, these would indeed be mathematically different from the conventional Maxwell equations. However, Setterfield claims that in his theory both mu and epsilon are inversely proportional to c. That is, if mu_n, epsilon_n and c_n are, respectively, the values of mu, epsilon, and c(t) now, then:

    mu = mu_n*c_n/c(t)
    epsilon = epsilon_n*c_n/c(t).

    Now let us let
    T = integral from 0 to t with respect to t of (c(t)/c_n).
    Then dT/dt = c(t)/c_n.

    Then mu*(dH/dt) = mu_n*c_n/c(t)*(dH/dt)
    = mu_n*c_n/c(t)*(dH/dT)*(dT/dt)
    = mu_n*c_n/c(t)*(dH/dT)*c(t)/c_n
    = mu_n(dH/dT).

    Similarly
    epsilon*(dE/dt) = epsilon_n*(dE/dT).

    Setterfield’s Maxwell’s Equations become:
    mu_n div H = 0
    epsilon_n div E = 0
    curl E = - mu_n (d H/dT)
    curl H = epsilon_n (d E/dT).

    These are just the standard (constant-c) Maxwell Equations in vacuum! We could, with a trivial transformation of current density and charge density, extend this result to the case with free charges and currents. This we leave as an exercise for the reader.
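
    For anyone who would rather not push the chain rule through by hand, here is a quick symbolic check of the key cancellation (a sketch using Python and sympy; c(t) is left completely arbitrary):

[code]
import sympy as sp

# The cancellation behind the substitution above: with mu(t) = mu_n*c_n/c(t)
# and dT/dt = c(t)/c_n, the product mu(t)*(dT/dt) is the constant mu_n, so
# mu(t)*(dH/dt) = mu(t)*(dH/dT)*(dT/dt) = mu_n*(dH/dT).
t, mu_n, c_n = sp.symbols('t mu_n c_n', positive=True)
c = sp.Function('c')(t)          # c(t) left completely arbitrary

mu = mu_n * c_n / c              # the assumed time-dependent permeability
dT_dt = c / c_n                  # from T = integral of c(t)/c_n dt

print(sp.simplify(mu * dT_dt))   # prints mu_n
# The identical cancellation with epsilon(t) = epsilon_n*c_n/c(t) turns
# epsilon*(dE/dt) into epsilon_n*(dE/dT), which is the result used above.
[/code]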

    So now that we have recovered Maxwell’s Equations by a simple time-transformation, it is clear that T-time, rather than t-time, is the suitable time for solving electromagnetic problems. One does not even have to worry about conservation of energy or changing wave amplitudes. (Ironically this is the result that Setterfield tries to obtain via incorrect approximate methods: energy conservation depends critically upon whether the time-varying parameters, epsilon and mu, are inside or outside of the partial time derivatives. Setterfield moves them in and out of the time derivatives; but that is valid only when both the periods of the electromagnetic radiation being considered (as Setterfield correctly points out) and the total times being dealt with are short compared to c(t)/(dc(t)/dt). Since Setterfield’s theory attempts to deal with cosmological problems, the latter condition is not fulfilled, and his argument for energy conservation fails.)

    So where is the physics of this so-called c-variation? Clearly not within electrodynamics. It could be that we need to use t-time when we consider the interaction of charged particles with other fields, such as gravitational fields; but Setterfield has not even told us what the modifications to Newton’s Laws of motion are supposed to be under his theory, let alone how electromagnetically and gravitationally “charged” particles (particles with both electrical charge and mass) behave. Until he does this most basic thing, it is no exaggeration to say that his theory does not predict the time-dependence of anything at all.
     
  20. Helen

    Joined:
    Aug 29, 2001
    Messages:
    11,703
    Likes Received:
    2
    Mark, we understand that you are fighting to defend a convention that has been established regarding the definitions involved in such terms as 'permeability'. But what if the conventions are not accurate? What if they don't cover the reality as we are beginning to understand it, vs what was previously thought?

    You already know that permeability used to be defined in terms of the speed of light. Now the definition is different. Why?

    Why do conventions and definitions change?

    Why do you think that what we have now is the final truth?

    Please understand that Barry did not start out wanting or trying to defy any convention. He started out simply curious about some anomalies. This has led him to places that some have denied even exist.

    The definition of the speed of light, for instance, was changed to DEFINE it as a constant, DESPITE the earlier changing measurements, and despite even Birge's acceptance and charting of those changes.

    Thus permeability is also defined as a constant now, and not only that, its definition (for safety's sake?) has been separated from the speed of light. (Note, please, that a text such as Physics by S.G. Starling and A.J. Woodall, 1958, Longmans, Green and Co., London, p. 1262, was still defining permeability in terms of light speed.)

    There is no guarantee that you are right, I'm afraid. Barry is trying to follow the data, no matter where it leads. You will see a quote on the front of his webpage as follows:

    It is never good science to ignore anomalous data or to eliminate a conclusion because of some presupposition. Sir Henry Dale, one-time President of the Royal Society of London, made an important comment in his retirement speech: "Science should not tolerate any lapse of precision, or neglect any anomaly, but give Nature's answers to the world humbly and with courage." To do so may not place one in the mainstream of modern science, but at least we will be searching for truth and moving ahead rather than maintaining the scientific status quo.

    Barry said that and remains aware of it -- and of the consequences it leads to. Reactions like yours are common and are one of the consequences. That doesn't do anything to deny the data or the logical and mathematical consequences of what is unfolding, however.