Showing posts with label Prof. Matt Strassler.

Thursday, November 20, 2014

Naturalness 2014-Weizmann Institute of Science

Information about the event was blogged by Professor Matt Strassler in his post At the Naturalness 2014 Conference. See also his explanation of Naturalness and the Standard Model.

Information about the event itself:

The discovery of a Higgs boson, with a mass around 125 GeV, at the LHC is a great victory for the Standard Model (SM). With its minimal scalar sector of electroweak symmetry breaking, the SM at short distances, well below the proton radius, is a complete weakly coupled theory. Even though the SM cannot explain several experimental observations such as neutrino masses, the baryon asymmetry of the universe and the origin of dark matter, one cannot deduce an energy scale at which the SM would be forced to be extended (with the exceptions of the Planck scale and the Landau pole of the hypercharge force). See: Naturalness 2014
***

 
In physics, naturalness is the property that the free parameters or physical constants appearing in a physical theory should take relative values "of order 1". That is, a natural theory would have parameters with values like 2.34 rather than 234,000 or 0.000234. This is in contrast to current theories such as the Standard Model, where a number of parameters vary by many orders of magnitude and require extensive "fine-tuning" of their values in order for the theory to predict a universe like the one we live in.
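
As a rough illustration of the scale involved (my own arithmetic, using the measured Higgs mass and the Planck scale, not part of the quoted text): naturalness would lead one to expect dimensionless ratios of order 1, whereas

\[
\frac{m_H^2}{M_{\mathrm{Pl}}^2} \sim \frac{(125~\mathrm{GeV})^2}{(1.2\times 10^{19}~\mathrm{GeV})^2} \sim 10^{-34},
\]

so keeping the Higgs this light appears to require delicate cancellations among much larger quantum contributions, unless new physics intervenes near the weak scale.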

The requirement that satisfactory theories should be "natural" in this sense is a current of thought initiated around the 1960s in particle physics. It is an aesthetic criterion, not a physical one, that arises from the seeming non-naturalness of the standard model and the broader topics of the hierarchy problem, fine-tuning, and the anthropic principle.

It is not always compatible with Occam's razor, since many instances of "natural" theories have more parameters than "fine-tuned" theories such as the Standard Model.
 ***

  Now that naturalism has become an accepted component of philosophy, there has recently been interest in Kuhn's work in the light of developments in the relevant sciences, many of which provide corroboration for Kuhn's claim that science is driven by relations of perceived similarity and analogy. It may yet be that a characteristically Kuhnian thesis will play a prominent part in our understanding of science. See: http://plato.stanford.edu/entries/thomas-kuhn/
***

A non-technical discussion of the naturalness criterion and its implications for new physics searches at the LHC. To be published in the book "LHC Perspectives", edited by G. Kane and A. Pierce. See: Naturally Speaking: The Naturalness Criterion and Physics at the LHC
(PDF)

Tuesday, September 04, 2012

The Quantum Harmonic Oscillator

Quantum Harmonic Oscillator

 


Here are a series of written Blog entries by Matt Strassler from his Blog, Of Particular Significance.
  1. Ball on a Spring (Classical)
  2. Ball on a Spring (Quantum)
  3. Waves (Classical Form)
  4. Waves (Classical Equation of Motion)
  5. Waves (Quantum)
  6. Fields
  7. Particles are Quanta
  8. How fields and particles interact with each other
  9. How the Higgs Field Works



Given the preceding map by Professor Strassler, and according to what has been gained in the final views, this updating is required in order to proceed correctly with the views currently shared in science. So that lineage of thought is important to me.

Probability Distributions for the Quantum Oscillator
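
For reference, here are the standard textbook results such plots are based on (added for context, not taken from Strassler's posts): a quantum harmonic oscillator of mass m and angular frequency ω has the energy levels

\[
E_n = \hbar\omega\left(n + \tfrac{1}{2}\right), \qquad n = 0, 1, 2, \ldots
\]

and a ground-state probability density

\[
|\psi_0(x)|^2 = \sqrt{\frac{m\omega}{\pi\hbar}}\; e^{-m\omega x^2/\hbar},
\]

a Gaussian peaked at the centre, in contrast to the classical distribution, which piles up near the turning points; for large n the quantum distribution approaches the classical one.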



At the same time, one cannot be held back from looking further and seeing where theoretical views have been taken beyond the constraints applied to the scientific mind. :)





So what is the theory, then?

Pythagoras could be called the first known string theorist. Pythagoras, an excellent lyre player, figured out the first known string physics -- the harmonic relationship. Pythagoras realized that vibrating Lyre strings of equal tensions but different lengths would produce harmonious notes (i.e. middle C and high C) if the ratio of the lengths of the two strings were a whole number.

   Pythagoras discovered this by looking and listening. Today that information is more precisely encoded into mathematics, namely the wave equation for a string with a tension T and a mass per unit length m. If the string is described in coordinates as in the drawing below, where x is the distance along the string and y is the height of the string as it oscillates in time t, the motion obeys the equation below.
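
The equation image from the original page did not carry over; in standard form, the wave equation for such a string reads

\[
\frac{\partial^2 y}{\partial t^2} = \frac{T}{m}\,\frac{\partial^2 y}{\partial x^2},
\]

so disturbances travel along the string with speed v = √(T/m), and fixing the string at both ends selects the harmonic series of standing waves that Pythagoras heard.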


See: Official String Theory Web Site


Friday, January 13, 2012

The Smoking Gun

One string theorist even went so far as to conclude that a string theory calculation of Kaluza-Klein modes was the "smoking gun" that proved our theory was the same as the string theory that string theorists had already been studying. (Warped Passages: Unraveling the Mysteries of the Universe's Hidden Dimensions, by Lisa Randall, pg. 436, para. 4)

Putting this together with what is real in our reality is important as well. While I may have my own metaphysical development and model-building characteristics, it was important that I learn the scientific approach, so that I could see where I may have been wrong in my own development scenario. Wrong in my own intuitions.

Meanwhile I’m continuing to develop the Extra Dimensions series of articles, and I’ve now followed up my examples of extra dimensions with a next installment, a first discussion of what scientists would look for in trying to identify that our world actually has one or more extra dimensions. The new article describes one of the key clues that would indicate their presence. But this is far from the end of the story: I owe you more articles, explaining why extra dimensions would generate this clue, outlining how we try to search for this clue experimentally, and mentioning other possible clues that might arise. All in due course… See: The Smoking Gun for Extra Dimensions, by theoretical physicist Matt Strassler
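
For context (a standard result, not from either quoted source): if a particle of mass m can move in one extra dimension curled into a circle of radius R, its Kaluza-Klein modes form a tower of ever-heavier copies with masses

\[
m_n^2 = m^2 + \frac{n^2}{R^2}, \qquad n = 0, 1, 2, \ldots
\]

(in units with ħ = c = 1), so a regularly spaced family of heavy replicas of a known particle is the kind of "smoking gun" clue both quotes are pointing at.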

Some may not have been forced to question themselves about what it is that we have to ask of ourselves as we delve into the world of the sciences and philosophies: to ask ourselves whether we have always been dealing with the truth in getting to the heart of things.

A professor once asked what exactly it was that I wanted out of all of this, and to him I have to relay a dream that manifested because of his question.

In the dream I had been provided a forum for discussing my ideas, but when it came time to speak, despite my preparations, I felt lost as to where to begin. So it seems I have come to that point in time where I must "shit or get off the pot" as to what it is of importance that I wish to share.

Having given these subjects the number of years since 2001, one would have thought I had served my own internship, but alas I remain ever the student with no classification. Yet what matters is developing the concepts against what is real, in the push to experiment, so as to find what real-world examples are showing as attributes in the experimental processes as they unfold.

 In this example I’m going to map speed to the pitch of the note, length/position to the duration of the note, and number of turns/legs/puffs to the loudness of the note. See: How to make sound out of anything.
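
As a minimal sketch of that kind of mapping (my own toy example, not the code from the linked article; the data values and scalings are invented), one can synthesize one note per data point, with speed driving pitch, length driving duration, and a count driving loudness:

```python
import wave
import numpy as np

RATE = 44100  # audio samples per second

def note(freq_hz, dur_s, amp):
    """One sine-wave note as float samples in [-1, 1]."""
    t = np.linspace(0.0, dur_s, int(RATE * dur_s), endpoint=False)
    return amp * np.sin(2 * np.pi * freq_hz * t)

# Invented data points: (speed, length, count)
data = [(3.0, 1.2, 2), (7.5, 0.6, 5), (12.0, 0.3, 9)]

notes = []
for speed, length, count in data:
    freq = 220.0 + 40.0 * speed   # speed  -> pitch (Hz)
    dur = 0.2 + 0.1 * length      # length -> duration (s)
    amp = min(1.0, 0.1 * count)   # count  -> loudness
    notes.append(note(freq, dur, amp))

signal = np.concatenate(notes)
with wave.open("sonified.wav", "w") as w:
    w.setnchannels(1)
    w.setsampwidth(2)   # 16-bit PCM
    w.setframerate(RATE)
    w.writeframes((signal * 32767).astype(np.int16).tobytes())
```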


Who of us has the foresight to see how the process of the experiment was developed to share an idea about what it was that we wanted to discover of nature? To see into the minds of the developers as to why the equipment has been designed from the schematics of the theories to be tested, so as to discover what we may find in our model building.

Does all this prepare you to look at the universe differently?

 The Lagrange Points


In the above contour plot we see that L4 and L5 correspond to hilltops and L1, L2 and L3 correspond to saddles (i.e. points where the potential is curving up in one direction and down in the other). This suggests that satellites placed at the Lagrange points will have a tendency to wander off (try sitting a marble on top of a watermelon or on top of a real saddle and you get the idea). A detailed analysis (PDF link) confirms our expectations for L1, L2 and L3, but not for L4 and L5. When a satellite parked at L4 or L5 starts to roll off the hill it picks up speed. At this point the Coriolis force comes into play - the same force that causes hurricanes to spin up on the earth - and sends the satellite into a stable orbit around the Lagrange point. See: Space Travel and Propulsion Methods
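
For the record (a standard result, not part of the quoted passage), the contour plot being described is that of the effective potential in the frame co-rotating with the two large bodies,

\[
\Phi_{\mathrm{eff}}(\mathbf{r}) = -\frac{G M_1}{|\mathbf{r}-\mathbf{r}_1|} - \frac{G M_2}{|\mathbf{r}-\mathbf{r}_2|} - \tfrac{1}{2}\,\omega^2 r^2 ,
\]

and the five Lagrange points are the equilibria where the gradient of this potential vanishes. L4 and L5, although hilltops of the effective potential, are stabilized by the Coriolis force provided the mass ratio M1/M2 is larger than roughly 25, a condition the Sun-Earth and Earth-Moon systems easily satisfy.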

I have to say: who has not been touched, as if putting on a pair of rose-colored glasses, to see the Lagrangian world as if gravitons populated locations of influence? As if they were descriptive of overlapping nodes of sound, supporting some acoustical idea about levitation? Satellites that travel through space, or are held in position as our space station is.

 
Like different musical instruments, different types of stars produce different types of sound waves. Small stars produce a sound with a higher pitch than bigger stars, just like the 'piccolo' produces a higher sound than the cello
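
The scaling behind that analogy (a standard asteroseismology relation, added here for context) is that the characteristic spacing of a star's oscillation frequencies grows with the square root of its mean density,

\[
\Delta\nu \;\propto\; \sqrt{\bar{\rho}} \;\propto\; \sqrt{\frac{M}{R^3}},
\]

so a small, dense star "rings" at a higher pitch than a large, diffuse one.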

Thus it is, as one comes to see differently, that I look upon the world to discover what things we may not know of ourselves, what we had missed in understanding our own physical evolution: that it is more than the matter which we use and are made of?


This recording was produced by converting into audible sounds some of the radar echoes received by Huygens during the last few kilometres of its descent onto Titan. As the probe approaches the ground, both the pitch and intensity increase. Scientists will use intensity of the echoes to speculate about the nature of the surface. Radar echos from Titan's surface

Monday, December 19, 2011

Bayesian probability

practically nobody took very seriously the CDF claim.......Tommaso: I will claim based on the above that according to Prof. D'Agostini, Prof. Matt Strassler is "practically nobody", since he is not convinced.

An acoustical difference of opinion with regard to "Nobody"?


What is the probability of the observed acoustic data given each of the two possible phrases that might have been spoken?



Hmmmmm.......One is a shop keeper and one is a customer?
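
That remark is exactly the Bayesian point: the same acoustic likelihoods give different answers under different priors about who is speaking. A toy sketch (the phrases and numbers are invented purely for illustration):

```python
def posterior(likelihoods, priors):
    """Bayes' rule for competing phrases: P(phrase | data) is proportional to P(data | phrase) * P(phrase)."""
    unnorm = {p: likelihoods[p] * priors[p] for p in likelihoods}
    total = sum(unnorm.values())
    return {p: round(v / total, 3) for p, v in unnorm.items()}

# Invented acoustic likelihoods P(data | phrase): the two phrases sound nearly alike.
likelihoods = {"how much is it": 0.6, "how much isn't it": 0.4}

# Priors depend on the speaker's role.
customer_prior = {"how much is it": 0.9, "how much isn't it": 0.1}
shopkeeper_prior = {"how much is it": 0.3, "how much isn't it": 0.7}

print(posterior(likelihoods, customer_prior))    # the first phrase dominates (about 0.93)
print(posterior(likelihoods, shopkeeper_prior))  # the posterior flips toward the second (about 0.61)
```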

***

Bayesian probability is one of the different interpretations of the concept of probability and belongs to the category of evidential probabilities. The Bayesian interpretation of probability can be seen as an extension of logic that enables reasoning with propositions whose truth or falsity is uncertain. To evaluate the probability of a hypothesis, the Bayesian probabilist specifies some prior probability, which is then updated in the light of new, relevant data.[1]
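
The updating rule in question is Bayes' theorem: for a hypothesis H and data D,

\[
P(H \mid D) \;=\; \frac{P(D \mid H)\, P(H)}{P(D)},
\]

where P(H) is the prior, P(D | H) the likelihood, and P(H | D) the posterior.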

The Bayesian interpretation provides a standard set of procedures and formulae to perform this calculation. Bayesian probability interprets the concept of probability as " a probability p is an abstract concept, a quantity that we assign theoretically, for the purpose of representing a state of knowledge, or that we calculate from previously assigned probabilities,"[2] in contrast to interpreting it as a frequency or a "propensity" of some phenomenon.
The term "Bayesian" refers to the 18th century mathematician and theologian Thomas Bayes (1702–1761), who provided the first mathematical treatment of a non-trivial problem of Bayesian inference.[3] Nevertheless, it was the French mathematician Pierre-Simon Laplace (1749–1827) who pioneered and popularised what is now called Bayesian probability.[4]

Broadly speaking, there are two views on Bayesian probability that interpret the probability concept in different ways. According to the objectivist view, the rules of Bayesian statistics can be justified by requirements of rationality and consistency and interpreted as an extension of logic.[2][5] According to the subjectivist view, probability measures a "personal belief".[6] Many modern machine learning methods are based on objectivist Bayesian principles.[7] In the Bayesian view, a probability is assigned to a hypothesis, whereas under the frequentist view, a hypothesis is typically tested without being assigned a probability.


Bayesian methodology

In general, Bayesian methods are characterized by the use of probabilities to quantify uncertainty about hypotheses and parameters, the specification of a prior distribution, and the sequential updating of that prior by Bayes' formula as new, relevant data arrive.

Objective and subjective Bayesian probabilities

Broadly speaking, there are two views on Bayesian probability that interpret the 'probability' concept in different ways. For objectivists, probability objectively measures the plausibility of propositions, i.e. the probability of a proposition corresponds to a reasonable belief everyone (even a "robot") sharing the same knowledge should share in accordance with the rules of Bayesian statistics, which can be justified by requirements of rationality and consistency.[2][5] Requirements of rationality and consistency are also important for subjectivists, for which the probability corresponds to a 'personal belief'.[6] For subjectivists however, rationality and consistency constrain the probabilities a subject may have, but allow for substantial variation within those constraints. The objective and subjective variants of Bayesian probability differ mainly in their interpretation and construction of the prior probability.

History

The term Bayesian refers to Thomas Bayes (1702–1761), who proved a special case of what is now called Bayes' theorem in a paper titled "An Essay towards solving a Problem in the Doctrine of Chances".[8] In that special case, the prior and posterior distributions were Beta distributions and the data came from Bernoulli trials. It was Pierre-Simon Laplace (1749–1827) who introduced a general version of the theorem and used it to approach problems in celestial mechanics, medical statistics, reliability, and jurisprudence.[9] Early Bayesian inference, which used uniform priors following Laplace's principle of insufficient reason, was called "inverse probability" (because it infers backwards from observations to parameters, or from effects to causes).[10] After the 1920s, "inverse probability" was largely supplanted by a collection of methods that came to be called frequentist statistics.[10]
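
In that special case the update is fully explicit (a standard conjugate-prior result, added for concreteness): with a Beta(α, β) prior on the success probability θ and s successes observed in n Bernoulli trials, the posterior is

\[
\theta \mid \text{data} \;\sim\; \mathrm{Beta}(\alpha + s,\; \beta + n - s).
\]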

In the 20th century, the ideas of Laplace were further developed in two different directions, giving rise to objective and subjective currents in Bayesian practice. In the objectivist stream, the statistical analysis depends on only the model assumed and the data analysed.[11] No subjective decisions need to be involved. In contrast, "subjectivist" statisticians deny the possibility of fully objective analysis for the general case.
In the 1980s, there was a dramatic growth in research and applications of Bayesian methods, mostly attributed to the discovery of Markov chain Monte Carlo methods, which removed many of the computational problems, and an increasing interest in nonstandard, complex applications.[12] Despite the growth of Bayesian research, most undergraduate teaching is still based on frequentist statistics.[13] Nonetheless, Bayesian methods are widely accepted and used, such as in the fields of machine learning[7] and talent analytics.
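
A minimal sketch of the Markov chain Monte Carlo idea (illustrative only, not how production samplers are written): a random-walk Metropolis sampler drawing from the Beta posterior of the conjugate example above, so the answer can be checked exactly.

```python
import math
import random

def log_post(theta, s, n, alpha=1.0, beta=1.0):
    """Unnormalized log posterior for a Bernoulli rate theta with a Beta(alpha, beta) prior."""
    if not 0.0 < theta < 1.0:
        return float("-inf")
    return (alpha + s - 1) * math.log(theta) + (beta + n - s - 1) * math.log(1 - theta)

def metropolis(s, n, steps=50_000, step_size=0.1):
    """Random-walk Metropolis over theta in (0, 1)."""
    theta, samples = 0.5, []
    for _ in range(steps):
        proposal = theta + random.gauss(0.0, step_size)
        # Accept with probability min(1, posterior ratio), computed in log space.
        delta = log_post(proposal, s, n) - log_post(theta, s, n)
        if random.random() < math.exp(min(0.0, delta)):
            theta = proposal
        samples.append(theta)
    return samples

draws = metropolis(s=7, n=10)   # e.g. 7 successes in 10 trials, flat Beta(1, 1) prior
print(sum(draws) / len(draws))  # close to the exact Beta(8, 4) posterior mean, 8/12 = 0.667
```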

Justification of Bayesian probabilities

The use of Bayesian probabilities as the basis of Bayesian inference has been supported by several arguments, such as the Cox axioms, the Dutch book argument, arguments based on decision theory and de Finetti's theorem.

Axiomatic approach

Richard T. Cox showed that[5] Bayesian updating follows from several axioms, including two functional equations and a controversial hypothesis of differentiability. It is known that Cox's 1961 development (mainly copied by Jaynes) is non-rigorous, and in fact a counterexample has been found by Halpern.[14] The assumption of differentiability or even continuity is questionable since the Boolean algebra of statements may only be finite.[15] Other axiomatizations have been suggested by various authors to make the theory more rigorous.[15]

Dutch book approach

The Dutch book argument was proposed by de Finetti, and is based on betting. A Dutch book is made when a clever gambler places a set of bets that guarantee a profit, no matter what the outcome is of the bets. If a bookmaker follows the rules of the Bayesian calculus in the construction of his odds, a Dutch book cannot be made.
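
A concrete illustration (my own numbers, not from the quoted text): suppose a bookmaker offers even money on an event A (payout 2.0 per unit staked, implied probability 0.5) and 3-to-2 on not-A (payout 2.5 per unit staked, implied probability 0.4), so the implied probabilities sum to 0.9 rather than 1. A gambler who stakes 0.55 on A and 0.45 on not-A pays out 1.00 in total but collects 1.10 if A occurs and about 1.13 if it does not: a guaranteed profit, which is precisely a Dutch book. Odds derived from a coherent probability assignment, summing to 1, close off every such strategy.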

However, Ian Hacking noted that traditional Dutch book arguments did not specify Bayesian updating: they left open the possibility that non-Bayesian updating rules could avoid Dutch books. For example, Hacking writes[16] "And neither the Dutch book argument, nor any other in the personalist arsenal of proofs of the probability axioms, entails the dynamic assumption. Not one entails Bayesianism. So the personalist requires the dynamic assumption to be Bayesian. It is true that in consistency a personalist could abandon the Bayesian model of learning from experience. Salt could lose its savour."

In fact, there are non-Bayesian updating rules that also avoid Dutch books (as discussed in the literature on "probability kinematics" following the publication of Richard C. Jeffrey's rule). The additional hypotheses sufficient to (uniquely) specify Bayesian updating are substantial, complicated, and unsatisfactory.[17]

Decision theory approach

A decision-theoretic justification of the use of Bayesian inference (and hence of Bayesian probabilities) was given by Abraham Wald, who proved that every admissible statistical procedure is either a Bayesian procedure or a limit of Bayesian procedures.[18] Conversely, every Bayesian procedure is admissible.[19]

Personal probabilities and objective methods for constructing priors

Following the work on expected utility theory of Ramsey and von Neumann, decision-theorists have accounted for rational behavior using a probability distribution for the agent. Johann Pfanzagl completed the Theory of Games and Economic Behavior by providing an axiomatization of subjective probability and utility, a task left uncompleted by von Neumann and Oskar Morgenstern: their original theory supposed that all the agents had the same probability distribution, as a convenience.[20] Pfanzagl's axiomatization was endorsed by Oskar Morgenstern: "Von Neumann and I have anticipated" the question whether probabilities "might, perhaps more typically, be subjective and have stated specifically that in the latter case axioms could be found from which could derive the desired numerical utility together with a number for the probabilities (cf. p. 19 of The Theory of Games and Economic Behavior). We did not carry this out; it was demonstrated by Pfanzagl ... with all the necessary rigor".[21]

Ramsey and Savage noted that the individual agent's probability distribution could be objectively studied in experiments. The role of judgment and disagreement in science has been recognized since Aristotle and even more clearly with Francis Bacon. The objectivity of science lies not in the psychology of individual scientists, but in the process of science and especially in statistical methods, as noted by C. S. Peirce.[citation needed] Recall that the objective methods for falsifying propositions about personal probabilities have been used for a half century, as noted previously. Procedures for testing hypotheses about probabilities (using finite samples) are due to Ramsey (1931) and de Finetti (1931, 1937, 1964, 1970). Both Bruno de Finetti and Frank P. Ramsey acknowledge[citation needed] their debts to pragmatic philosophy, particularly (for Ramsey) to Charles S. Peirce.

The "Ramsey test" for evaluating probability distributions is implementable in theory, and has kept experimental psychologists occupied for a half century.[22] This work demonstrates that Bayesian-probability propositions can be falsified, and so meet an empirical criterion of Charles S. Peirce, whose work inspired Ramsey. (This falsifiability-criterion was popularized by Karl Popper.[23][24])

Modern work on the experimental evaluation of personal probabilities uses the randomization, blinding, and Boolean-decision procedures of the Peirce-Jastrow experiment.[25] Since individuals act according to different probability judgements, these agents' probabilities are "personal" (but amenable to objective study).
Personal probabilities are problematic for science and for some applications where decision-makers lack the knowledge or time to specify an informed probability-distribution (on which they are prepared to act). To meet the needs of science and of human limitations, Bayesian statisticians have developed "objective" methods for specifying prior probabilities.

Indeed, some Bayesians have argued the prior state of knowledge defines the (unique) prior probability-distribution for "regular" statistical problems; cf. well-posed problems. Finding the right method for constructing such "objective" priors (for appropriate classes of regular problems) has been the quest of statistical theorists from Laplace to John Maynard Keynes, Harold Jeffreys, and Edwin Thompson Jaynes. These theorists and their successors have suggested several methods for constructing "objective" priors, among them the principle of maximum entropy, transformation-group (invariance) arguments, and reference analysis.

 Each of these methods contributes useful priors for "regular" one-parameter problems, and each prior can handle some challenging statistical models (with "irregularity" or several parameters). Each of these methods has been useful in Bayesian practice. Indeed, methods for constructing "objective" (alternatively, "default" or "ignorance") priors have been developed by avowed subjective (or "personal") Bayesians like James Berger (Duke University) and José-Miguel Bernardo (Universitat de València), simply because such priors are needed for Bayesian practice, particularly in science.[26] The quest for "the universal method for constructing priors" continues to attract statistical theorists.[26]

Thus, the Bayesian statistician needs either to use informed priors (using relevant expertise or previous data) or to choose among the competing methods for constructing "objective" priors.
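
As one concrete example of such an "objective" construction (standard material, added for context), the Jeffreys prior is taken proportional to the square root of the Fisher information,

\[
\pi(\theta) \;\propto\; \sqrt{\det I(\theta)},
\]

which for a Bernoulli success probability θ gives π(θ) ∝ θ^{-1/2}(1-θ)^{-1/2}, i.e. a Beta(1/2, 1/2) prior; its invariance under reparameterization is one sense in which the construction is "objective".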


Notes

  1. ^ Paulos, John Allen. The Mathematics of Changing Your Mind, New York Times (US). August 5, 2011; retrieved 2011-08-06
  2. ^ a b c Jaynes, E.T. "Bayesian Methods: General Background." In Maximum-Entropy and Bayesian Methods in Applied Statistics, by J. H. Justice (ed.). Cambridge: Cambridge Univ. Press, 1986
  3. ^ Stigler, Stephen M. (1986) The History of Statistics. Harvard University Press. p. 131.
  4. ^ Stigler, Stephen M. (1986) The History of Statistics. Harvard University Press. pp. 97–98, 131.
  5. ^ a b c Cox, Richard T. Algebra of Probable Inference, The Johns Hopkins University Press, 2001
  6. ^ a b de Finetti, B. (1974) Theory of probability (2 vols.), J. Wiley & Sons, Inc., New York
  7. ^ a b Bishop, C.M. Pattern Recognition and Machine Learning. Springer, 2007
  8. ^ McGrayne, Sharon Bertsch. (2011). The Theory That Would Not Die, p. 10. at Google Books
  9. ^ Stigler, Stephen M. (1986) The History of Statistics. Harvard University Press. Chapter 3.
  10. ^ a b Fienberg, Stephen. E. (2006) When did Bayesian Inference become "Bayesian"? Bayesian Analysis, 1 (1), 1–40. See page 5.
  11. ^ Bernardo, J.M. (2005), Reference analysis, Handbook of statistics, 25, 17–90
  12. ^ Wolpert, R.L. (2004) A conversation with James O. Berger, Statistical science, 9, 205–218
  13. ^ Bernardo, José M. (2006) A Bayesian mathematical statistics primer. ICOTS-7
  14. ^ Halpern, J. A counterexample to theorems of Cox and Fine, Journal of Artificial Intelligence Research, 10: 67-85.
  15. ^ a b Dupré, Maurice J., Tipler, Frank T. New Axioms For Bayesian Probability, Bayesian Analysis (2009), Number 3, pp. 599-606
  16. ^ Hacking (1967, Section 3, page 316), Hacking (1988, page 124)
  17. ^ van Fraassen, B. (1989) Laws and Symmetry, Oxford University Press. ISBN 0198248601
  18. ^ Wald, Abraham. Statistical Decision Functions. Wiley 1950.
  19. ^ Bernardo, José M., Smith, Adrian F.M. Bayesian Theory. John Wiley 1994. ISBN 0-471-92416-4.
  20. ^ Pfanzagl (1967, 1968)
  21. ^ Morgenstern (1976, page 65)
  22. ^ Davidson et al. (1957)
  23. ^ "Karl Popper" in Stanford Encyclopedia of Philosophy
  24. ^ Popper, Karl. (2002) The Logic of Scientific Discovery 2nd Edition, Routledge ISBN 0415278430 (Reprint of 1959 translation of 1935 original) Page 57.
  25. ^ Peirce & Jastrow (1885)
  26. ^ a b Bernardo, J. M. (2005). Reference Analysis. Handbook of Statistics 25 (D. K. Dey and C. R. Rao eds). Amsterdam: Elsevier, 17-90
