Saturday, December 31, 2011

Happy New Year 2012

Happy New Year 2012 and All the Best in the New Year

Andrea Rossi's 'E-cat' nuclear reactor


Andrea Rossi's 'E-cat' nuclear reactor: a video FAQ

Now I am a layman with a keen interest in how our society can benefit from research and development.  Can you save me from being fooled? Can you save society from being fooled? 

As scientists, please weigh in across the blogosphere and demonstrate your opinion as to the viability of such generation. I know some of you are well equipped to judge whether such inventiveness can be assessed, given the jump to profitability through product development before the science has supported such claims of kilowatt production.

Imagine the cost reduction from products that could not only heat your home, but also cut your air-conditioning costs and save energy for the grid.


One Megawatt Heat Plant for Sale

"We gave the exclusive commercial license to Ampenergo, and only they can sell our plants." - Andrea Rossi (November 14, 2011) [Story]

***

The Physics of Why the E-Cat's Cold Fusion Claims Collapse
By Ethan Siegel

With other companies now trying to capitalize off of this speculative, unverified and highly dubious claim, it's time for the eCat's proponents to provide the provable, testable, reproducible science that can answer these straightforward physics objections. Independent verification is the cornerstone of all scientific investigation and experiment, it's how we weed out all sorts of errors from miscalibration to contamination, and how we protect ourselves from unscrupulous swindles. Given everything that we know, as others also demonstrate (thanks, Steven B. Krivit), it's time to set aside the mirage of Nickel + Hydrogen fusion and get back to work finding real solutions to our energy and environmental problems.

See Also:

Update:

Thanks to the scientists for following up. It is much appreciated.

Further Update to May 2013
Update, July 12


More On Rossi's E-Cat: Ericsson And Pomp Rebut "Independent" Test

Thursday, December 29, 2011

Computational Science


Discussion from UC-HiPACC on Vimeo.

Also See: Bolshoi Simulation: WMAP Explorer

 As Richard Feynman put it:[13]
"It always bothers me that, according to the laws as we understand them today, it takes a computing machine an infinite number of logical operations to figure out what goes on in no matter how tiny a region of space, and no matter how tiny a region of time. How can all that be going on in that tiny space? Why should it take an infinite amount of logic to figure out what one tiny piece of space/time is going to do? So I have often made the hypotheses that ultimately physics will not require a mathematical statement, that in the end the machinery will be revealed, and the laws will turn out to be simple, like the chequer board with all its apparent complexities".


Computational science (or scientific computing) is the field of study concerned with constructing mathematical models and quantitative analysis techniques and using computers to analyze and solve scientific problems.[1] In practical use, it is typically the application of computer simulation and other forms of computation to problems in various scientific disciplines.

The field is distinct from computer science (the study of computation, computers and information processing). It is also different from theory and experiment which are the traditional forms of science and engineering. The scientific computing approach is to gain understanding, mainly through the analysis of mathematical models implemented on computers.

Scientists and engineers develop computer programs, application software, that model systems being studied and run these programs with various sets of input parameters. Typically, these models require massive amounts of calculations (usually floating-point) and are often executed on supercomputers or distributed computing platforms.

Numerical analysis is an important underpinning for techniques used in computational science.


Applications of computational science

Problem domains for computational science/scientific computing include the areas outlined below.

Numerical simulations

Numerical simulations have different objectives depending on the nature of the task being simulated:
  • Reconstruct and understand known events (e.g., earthquakes, tsunamis and other natural disasters).
  • Predict future or unobserved situations (e.g., weather, sub-atomic particle behaviour).

     Model fitting and data analysis

    • Appropriately tune models or solve equations to reflect observations, subject to model constraints (e.g. oil exploration geophysics, computational linguistics).
    • Use graph theory to model networks, especially those connecting individuals, organizations, and websites.

     

    Computational optimization

    • Optimize known scenarios (e.g., technical and manufacturing processes, front-end engineering).

      

    Methods and algorithms

    Algorithms and mathematical methods used in computational science are varied. Commonly applied methods include numerical analysis, numerical linear algebra, numerical integration, Monte Carlo methods, and finite element analysis, among others.

    Programming languages commonly used for the more mathematical aspects of scientific computing applications include R, MATLAB, Mathematica,[2] Scilab, GNU Octave, COMSOL Multiphysics, Python with SciPy, and PDL. The more computationally intensive aspects of scientific computing will often utilize some variation of C or Fortran and optimized algebra libraries such as BLAS or LAPACK.
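    To make that concrete, here is a minimal sketch of my own (not part of the quoted text, assuming NumPy and SciPy are installed): a dense linear solve in Python that delegates the heavy lifting to the optimized BLAS/LAPACK routines mentioned above. Matrix size and values are arbitrary.

```python
# A rough illustrative sketch: solve a dense linear system A x = b with SciPy,
# which calls optimized LAPACK/BLAS routines under the hood.
import numpy as np
from scipy.linalg import solve

rng = np.random.default_rng(0)
A = rng.standard_normal((500, 500))   # coefficient matrix
b = rng.standard_normal(500)          # right-hand side

x = solve(A, b)                       # LAPACK-backed dense solve
print("residual norm:", np.linalg.norm(A @ x - b))
```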

    Computational science application programs often model real-world changing conditions, such as weather, air flow around a plane, automobile body distortions in a crash, the motion of stars in a galaxy, an explosive device, etc. Such programs might create a 'logical mesh' in computer memory where each item corresponds to an area in space and contains information about that space relevant to the model. For example, in weather models, each item might represent a square kilometer, with land elevation, current wind direction, humidity, temperature, pressure, etc. The program would calculate the likely next state based on the current state, in simulated time steps, solving equations that describe how the system operates, and then repeat the process to calculate the next state.
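    As an illustration of my own of the 'logical mesh' idea (not a weather code, just a toy heat-diffusion example with made-up constants), each grid cell holds one state value and the program repeatedly computes the next state from the current one in discrete time steps:

```python
# Toy 'logical mesh' simulation: each cell of a 2D grid holds a temperature,
# and an explicit finite-difference update advances the field one time step.
import numpy as np

nx, ny = 50, 50
T = np.zeros((nx, ny))         # temperature field (the mesh state)
T[20:30, 20:30] = 100.0        # a hot patch as the initial condition
alpha, dt, dx = 0.1, 0.1, 1.0  # diffusivity, time step, cell size (arbitrary)

for step in range(500):
    # discrete Laplacian of the interior cells
    lap = (T[2:, 1:-1] + T[:-2, 1:-1] + T[1:-1, 2:] + T[1:-1, :-2]
           - 4.0 * T[1:-1, 1:-1]) / dx**2
    T[1:-1, 1:-1] += alpha * dt * lap   # advance one simulated time step

print("max temperature after 500 steps:", T.max())
```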

    The term computational scientist is used to describe someone skilled in scientific computing. This person is usually a scientist, an engineer or an applied mathematician who applies high-performance computers in different ways to advance the state-of-the-art in their respective applied disciplines in physics, chemistry or engineering. Scientific computing has increasingly also impacted on other areas including economics, biology and medicine.

    Computational science is now commonly considered a third mode of science, complementing and adding to experimentation/observation and theory.[3] The essence of computational science is the numerical algorithm[4] and/or computational mathematics.[5] In fact, substantial effort in the computational sciences has been devoted to the development of algorithms, their efficient implementation in programming languages, and the validation of computational results. A collection of problems and solutions in computational science can be found in Steeb, Hardy, Hardy and Stoop, 2004.[6]

     Education

    Scientific computation is most often studied through an applied mathematics or computer science program, or within a standard mathematics, sciences, or engineering program. At some institutions a specialization in scientific computation can be earned as a "minor" within another program (which may be at varying levels). However, there are increasingly many bachelor's and master's programs in computational science. Some schools also offer the Ph.D. in computational science, computational engineering, computational science and engineering, or scientific computation.

    There are also programs in areas such as computational physics, computational chemistry, etc.


     References

    1. ^ National Center for Computational Science
    2. ^ Mathematica 6 Scientific Computing World, May 2007
    3. ^ Siam.org
    4. ^ Nonweiler T. R., 1986. Computational Mathematics: An Introduction to Numerical Approximation, John Wiley and Sons
    5. ^ Yang X. S., 2008. Introduction to Computational Mathematics, World Scientific Publishing
    6. ^ Steeb W.-H., Hardy Y., Hardy A. and Stoop R., 2004. Problems and Solutions in Scientific Computing with C++ and Java Simulations, World Scientific Publishing. ISBN 981-256-112-9

     


    Wednesday, December 28, 2011

    The mechanism that explains why our universe was born with 3 dimensions: a 40-year-old puzzle of superstring theory solved by supercomputer


    A group of three researchers from KEK, Shizuoka University and Osaka University has for the first time revealed the way our universe was born with 3 spatial dimensions from 10-dimensional superstring theory,[1] in which spacetime has 9 spatial directions and 1 temporal direction. This result was obtained by numerical simulation on a supercomputer. .....

    .....Furthermore, the establishment of a new method to analyze superstring theory using computers opens up the possibility of applying this theory to various problems. For instance, it should now be possible to provide a theoretical understanding of the inflation[5] that is believed to have taken place in the early universe, and also the accelerating expansion of the universe,[6] whose discovery earned the Nobel Prize in Physics this year. It is expected that superstring theory will develop further and play an important role in solving such puzzles in particle physics as the existence of the dark matter that is suggested by cosmological observations, and the Higgs particle, which is expected to be discovered by LHC experiments.[7] See: The mechanism that explains why our universe was born with 3 dimensions: a 40-year-old puzzle of superstring theory solved by supercomputer

    Tuesday, December 27, 2011

    The Nature of Reality

    The last major changes to the periodic table were made in the middle of the 20th century. Glenn Seaborg is given the credit for them. Starting with his discovery of plutonium in 1940, he discovered all the transuranic elements from 94 to 102. He reconfigured the periodic table by placing the actinide series below the lanthanide series. In 1951, Seaborg was awarded the Nobel Prize in Chemistry for his work. Element 106 has been named seaborgium (Sg) in his honor. - A BRIEF HISTORY OF THE DEVELOPMENT OF PERIODIC TABLE



    How do you attempt to describe it?

     
    Photo by Graham Challifour. Reproduced from Critchlow, 1979, p. 132.





    "I’m a Platonist — a follower of Plato — who believes that one didn’t invent these sorts of things, that one discovers them. In a sense, all these mathematical facts are right there waiting to be discovered."Harold Scott Macdonald (H. S. M.) Coxeter

    From my perspective, the Platonic solids were a first attempt at describing reality.

    The Body Canvas

    ***

    Over the holiday period I had but a moment to peruse the latest article by Matt Strassler. It's called "A New Particle at the LHC? Yes, But…" Also see the update: LHC: is χb(3P) a new particle?

    Holding these thoughts, I had a bit of time to think about how one might go about this other than the ways in which we have characterized the particles from energy collisions, the decay products of the energy involved. I understand what he is saying, so the following was sure to follow.



    Picture of the 1913 Bohr model of the atom showing the Balmer transition from n=3 to n=2. The electronic orbitals (shown as dashed black circles) are drawn to scale, with 1 inch = 1 Angstrom; note that the radius of the orbital increases quadratically with n. The electron is shown in blue, the nucleus in green, and the photon in red. The frequency ν of the photon can be determined from Planck's constant h and the change in energy ΔE between the two orbitals. For the 3-2 Balmer transition depicted here, the wavelength of the emitted photon is 656 nm.
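    As a quick check of my own on the numbers in that caption (a hedged sketch using standard physical constants), the Rydberg formula and E = hν reproduce the 656 nm wavelength for the 3→2 Balmer transition:

```python
# Balmer 3 -> 2 transition of hydrogen via the Rydberg formula and E = h*nu.
R_H = 1.0967758e7     # Rydberg constant for hydrogen, m^-1
h   = 6.62607015e-34  # Planck's constant, J*s
c   = 2.99792458e8    # speed of light, m/s

n_upper, n_lower = 3, 2
inv_lambda = R_H * (1.0 / n_lower**2 - 1.0 / n_upper**2)  # 1 / wavelength
wavelength = 1.0 / inv_lambda                             # metres
frequency  = c * inv_lambda                               # nu = c / lambda
delta_E    = h * frequency                                # photon energy, J

print(f"wavelength = {wavelength * 1e9:.1f} nm")                 # ~656 nm
print(f"frequency  = {frequency:.3e} Hz")
print(f"Delta E    = {delta_E / 1.602176634e-19:.2f} eV")        # ~1.89 eV
```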

    N category and the Hydrogen spectrum



    So many thoughts go through my mind, not just of the orbitals or Bohr's model, but of how we might have looked at newly created elements to have them classed in Mendeleev's table of elements: describing these elemental signatures so as to assign them a space "in between" those we have already mapped.

     Proceedings of Societies [Report on the Law of Octaves]

    Mr. JOHN A. R. NEWLANDS read a paper entitled "The Law of Octaves, and the Causes of Numerical Relations among the Atomic Weights."[41] The author claims the discovery of a law according to which the elements analogous in their properties exhibit peculiar relationships, similar to those subsisting in music between a note and its octave. Starting from the atomic weights on Cannizzarro's [sic] system, the author arranges the known elements in order of succession, beginning with the lowest atomic weight (hydrogen) and ending with thorium (=231.5); placing, however, nickel and cobalt, platinum and iridium, cerium and lanthanum, &c., in positions of absolute equality or in the same line. The fifty-six elements[42] so arranged are said to form the compass of eight octaves, and the author finds that chlorine, bromine, iodine, and fluorine are thus brought into the same line, or occupy corresponding places in his scale. Nitrogen and phosphorus, oxygen and sulphur, &c., are also considered as forming true octaves. The author's supposition will be exemplified in Table II., shown to the meeting, and here subjoined:--




     ***

    The shapes of the first five atomic orbitals: 1s, 2s, 2px, 2py, and 2pz. The colors show the wave function phase. These are graphs of ψ(x,y,z) functions which depend on the coordinates of one electron. To see the elongated shape of ψ(x,y,z)² functions that show probability density more directly, see the graphs of d-orbitals below.
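    A small sketch of my own to make the caption's point about probability density concrete: in atomic units the 1s wave function is ψ(r) = e^(−r)/√π, so |ψ|² is largest at the nucleus while the radial probability 4πr²|ψ|² peaks at one Bohr radius.

```python
# Hydrogen 1s wave function in atomic units (Bohr radius a0 = 1).
import numpy as np

def psi_1s(r):
    """Hydrogen 1s wave function, psi(r) = exp(-r) / sqrt(pi), in atomic units."""
    return np.exp(-r) / np.sqrt(np.pi)

r = np.linspace(0.001, 10.0, 2000)
density = psi_1s(r) ** 2                        # |psi|^2, largest as r -> 0
radial_density = 4.0 * np.pi * r**2 * density   # 4*pi*r^2*|psi|^2, peaks at r = a0

print("most probable radius (in a0):", r[np.argmax(radial_density)])  # ~1.0
```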

    Qualitative understanding of shapes

    The shapes of atomic orbitals can be understood qualitatively by considering the analogous case of standing waves on a circular drum.[19] To see the analogy, the mean vibrational displacement of each bit of drum membrane from the equilibrium point over many cycles (a measure of average drum membrane velocity and momentum at that point) must be considered relative to that point's distance from the center of the drum head. If this displacement is taken as being analogous to the probability of finding an electron at a given distance from the nucleus, then it will be seen that the many modes of the vibrating disk form patterns that trace the various shapes of atomic orbitals. The basic reason for this correspondence lies in the fact that the distribution of kinetic energy and momentum in a matter-wave is predictive of where the particle associated with the wave will be. That is, the probability of finding an electron at a given place is also a function of the electron's average momentum at that point, since high electron momentum at a given position tends to "localize" the electron in that position, via the properties of electron wave-packets (see the Heisenberg uncertainty principle for details of the mechanism).


    This relationship means that certain key features can be observed in both drum membrane modes and atomic orbitals. For example, in all of the modes analogous to s orbitals (the top row in the illustration), it can be seen that the very center of the drum membrane vibrates most strongly, corresponding to the antinode in all s orbitals in an atom. This antinode means the electron is most likely to be at the physical position of the nucleus (which it passes straight through without scattering or striking it), since it is moving (on average) most rapidly at that point, giving it maximal momentum.


    A mental "planetary orbit" picture closest to the behavior of electrons in s orbitals, all of which have no angular momentum, might perhaps be that of the path of an atomic-sized black hole, or some other imaginary particle which is able to fall with increasing velocity from space directly through the Earth, without stopping or being affected by any force but gravity, and in this way falls through the core and out the other side in a straight line, and off again into space, while slowing from the backwards gravitational tug. If such a particle were gravitationally bound to the Earth it would not escape, but would pursue a series of passes in which it always slowed at some maximal distance into space, but had its maximal velocity at the Earth's center (this "orbit" would have an orbital eccentricity of 1.0). If such a particle also had a wave nature, it would have the highest probability of being located where its velocity and momentum were highest, which would be at the Earth's core. In addition, rather than be confined to an infinitely narrow "orbit" which is a straight line, it would pass through the Earth from all directions, and not have a preferred one. Thus, a "long exposure" photograph of its motion over a very long period of time, would show a sphere.


    In order to be stopped, such a particle would need to interact with the Earth in some way other than gravity. In a similar way, all s electrons have a finite probability of being found inside the nucleus, and this allows s electrons to occasionally participate in strictly nuclear-electron interaction processes, such as electron capture and internal conversion.


    Below, a number of drum membrane vibration modes are shown. The analogous wave functions of the hydrogen atom are indicated. A correspondence can be considered where the wave functions of a vibrating drum head are for a two-coordinate system ψ(r,θ) and the wave functions for a vibrating sphere are three-coordinate ψ(r,θ,φ).






    s-type modes
    • Mode u01 (1s orbital)
    • Mode u02 (2s orbital)
    • Mode u03 (3s orbital)

    p-type modes
    • Mode u11 (2p orbital)
    • Mode u12 (3p orbital)
    • Mode u13 (4p orbital)

    d-type modes
    • Mode u21 (3d orbital)
    • Mode u22 (4d orbital)
    • Mode u23 (5d orbital)
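    A numerical aside of my own (assuming SciPy is available): the relative frequencies of the drum modes u_mn listed above are fixed by the zeros of the Bessel functions J_m, so the whole family can be tabulated in a few lines.

```python
# Relative frequencies of circular-drum modes u_mn: each mode frequency is
# proportional to the n-th zero of the Bessel function J_m.
import numpy as np
from scipy.special import jn_zeros

base = jn_zeros(0, 1)[0]                      # frequency scale of mode u01
for m, label in [(0, "s-type (u0n)"), (1, "p-type (u1n)"), (2, "d-type (u2n)")]:
    zeros = jn_zeros(m, 3)                    # first three zeros of J_m
    print(label, np.round(zeros / base, 3))   # frequencies relative to u01
```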

    ***



    So yes, understanding that if you had Einstein crossing the room, it becomes important to wonder about what gathering capabilities allow such elements to form, other than students. You are looking for something specific?


    ***
    String theory isn't just another quantum field theory, another particular finite list of elementary particles with some interactions. It's an intellectually and literally multi-dimensional reservoir of wisdom that has taught us many things of completely new kinds that we couldn't foresee. The Reference Frame: LHC: is χb(3P) a new particle?

    Tuesday, December 20, 2011

    Merry Christmas


    Also See: Google Gravity, Google Sphere, Askew

    Note: When you are on the Google Gravity page and the search box is at the bottom, type in Dialogos of Eide.

    Monday, December 19, 2011

    Bayesian probability

    "practically nobody took very seriously the CDF claim..." Tommaso: I will claim based on the above that according to Prof. D'Agostini, Prof. Matt Strassler is "practically nobody", since he is not convinced.

    An acoustical difference of opinion with regard to "Nobody"?


    What is the probability of the observed acoustic data given that each of the two possible phrases was spoken?



    Hmmmmm.......One is a shop keeper and one is a customer?
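    My own toy version of that question (all numbers invented for illustration): Bayes' rule weighs the likelihood of the acoustic data under each candidate phrase against a prior over which phrase a shopkeeper or a customer would be likely to say.

```python
# Toy posterior over two candidate phrases given observed acoustic data.
likelihood = {"phrase_A": 0.02, "phrase_B": 0.005}  # P(data | phrase), invented
prior      = {"phrase_A": 0.3,  "phrase_B": 0.7}    # P(phrase), invented

evidence  = sum(likelihood[p] * prior[p] for p in prior)          # P(data)
posterior = {p: likelihood[p] * prior[p] / evidence for p in prior}
print(posterior)   # e.g. {'phrase_A': 0.632, 'phrase_B': 0.368}
```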

    ***

    Bayesian probability is one of the different interpretations of the concept of probability and belongs to the category of evidential probabilities. The Bayesian interpretation of probability can be seen as an extension of logic that enables reasoning with propositions whose truth or falsity is uncertain. To evaluate the probability of a hypothesis, the Bayesian probabilist specifies some prior probability, which is then updated in the light of new, relevant data.[1]

    The Bayesian interpretation provides a standard set of procedures and formulae to perform this calculation. Bayesian probability interprets the concept of probability as " a probability p is an abstract concept, a quantity that we assign theoretically, for the purpose of representing a state of knowledge, or that we calculate from previously assigned probabilities,"[2] in contrast to interpreting it as a frequency or a "propensity" of some phenomenon.
    The term "Bayesian" refers to the 18th century mathematician and theologian Thomas Bayes (1702–1761), who provided the first mathematical treatment of a non-trivial problem of Bayesian inference.[3] Nevertheless, it was the French mathematician Pierre-Simon Laplace (1749–1827) who pioneered and popularised what is now called Bayesian probability.[4]

    Broadly speaking, there are two views on Bayesian probability that interpret the probability concept in different ways. According to the objectivist view, the rules of Bayesian statistics can be justified by requirements of rationality and consistency and interpreted as an extension of logic.[2][5] According to the subjectivist view, probability measures a "personal belief".[6] Many modern machine learning methods are based on objectivist Bayesian principles.[7] In the Bayesian view, a probability is assigned to a hypothesis, whereas under the frequentist view, a hypothesis is typically tested without being assigned a probability.


    Bayesian methodology

    In general, Bayesian methods are characterized by a common set of concepts and procedures: unknown quantities are treated probabilistically, a prior distribution is specified for them, and that prior is updated via Bayes' theorem as relevant data become available.

    Objective and subjective Bayesian probabilities

    Broadly speaking, there are two views on Bayesian probability that interpret the 'probability' concept in different ways. For objectivists, probability objectively measures the plausibility of propositions, i.e. the probability of a proposition corresponds to a reasonable belief everyone (even a "robot") sharing the same knowledge should share in accordance with the rules of Bayesian statistics, which can be justified by requirements of rationality and consistency.[2][5] Requirements of rationality and consistency are also important for subjectivists, for which the probability corresponds to a 'personal belief'.[6] For subjectivists however, rationality and consistency constrain the probabilities a subject may have, but allow for substantial variation within those constraints. The objective and subjective variants of Bayesian probability differ mainly in their interpretation and construction of the prior probability.

    History

    The term Bayesian refers to Thomas Bayes (1702–1761), who proved a special case of what is now called Bayes' theorem in a paper titled "An Essay towards solving a Problem in the Doctrine of Chances".[8] In that special case, the prior and posterior distributions were Beta distributions and the data came from Bernoulli trials. It was Pierre-Simon Laplace (1749–1827) who introduced a general version of the theorem and used it to approach problems in celestial mechanics, medical statistics, reliability, and jurisprudence.[9] Early Bayesian inference, which used uniform priors following Laplace's principle of insufficient reason, was called "inverse probability" (because it infers backwards from observations to parameters, or from effects to causes).[10] After the 1920s, "inverse probability" was largely supplanted by a collection of methods that came to be called frequentist statistics.[10]
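    A minimal sketch of that special case (my own illustration, with made-up data): with a Beta prior and Bernoulli trials, the posterior is again a Beta distribution, obtained by adding the observed successes and failures to the prior's parameters.

```python
# Conjugate Beta-Bernoulli update, the special case treated by Bayes.
alpha, beta = 1.0, 1.0          # Beta(1, 1) = uniform prior on the success rate
data = [1, 0, 1, 1, 0, 1, 1]    # hypothetical Bernoulli outcomes

alpha_post = alpha + sum(data)              # add number of successes
beta_post  = beta + len(data) - sum(data)   # add number of failures

posterior_mean = alpha_post / (alpha_post + beta_post)
print(f"posterior: Beta({alpha_post:.0f}, {beta_post:.0f}), mean = {posterior_mean:.3f}")
```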

    In the 20th century, the ideas of Laplace were further developed in two different directions, giving rise to objective and subjective currents in Bayesian practice. In the objectivist stream, the statistical analysis depends on only the model assumed and the data analysed.[11] No subjective decisions need to be involved. In contrast, "subjectivist" statisticians deny the possibility of fully objective analysis for the general case.
    In the 1980s, there was a dramatic growth in research and applications of Bayesian methods, mostly attributed to the discovery of Markov chain Monte Carlo methods, which removed many of the computational problems, and an increasing interest in nonstandard, complex applications.[12] Despite the growth of Bayesian research, most undergraduate teaching is still based on frequentist statistics.[13] Nonetheless, Bayesian methods are widely accepted and used, such as in the fields of machine learning[7] and talent analytics.

    Justification of Bayesian probabilities

    The use of Bayesian probabilities as the basis of Bayesian inference has been supported by several arguments, such as the Cox axioms, the Dutch book argument, arguments based on decision theory and de Finetti's theorem.

    Axiomatic approach

    Richard T. Cox showed that[5] Bayesian updating follows from several axioms, including two functional equations and a controversial hypothesis of differentiability. It is known that Cox's 1961 development (mainly copied by Jaynes) is non-rigorous, and in fact a counterexample has been found by Halpern.[14] The assumption of differentiability or even continuity is questionable since the Boolean algebra of statements may only be finite.[15] Other axiomatizations have been suggested by various authors to make the theory more rigorous.[15]

    Dutch book approach

    The Dutch book argument was proposed by de Finetti, and is based on betting. A Dutch book is made when a clever gambler places a set of bets that guarantee a profit, no matter what the outcome is of the bets. If a bookmaker follows the rules of the Bayesian calculus in the construction of his odds, a Dutch book cannot be made.
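    A small invented example of the idea: if a bookmaker's odds imply probabilities that sum to less than one, a gambler can back every outcome and lock in a profit no matter what happens.

```python
# A Dutch book against an incoherent bookmaker (odds are invented).
decimal_odds = {"A": 2.5, "B": 2.5, "C": 6.0}        # payout per unit stake
implied = {k: 1.0 / v for k, v in decimal_odds.items()}
print("implied probabilities sum to", round(sum(implied.values()), 3))  # 0.967 < 1

# Stake 1/odds on each outcome: whichever outcome occurs, exactly one bet pays 1.
total_stake = sum(implied.values())
guaranteed_profit = 1.0 - total_stake
print("guaranteed profit per unit payout:", round(guaranteed_profit, 3))
```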

    However, Ian Hacking noted that traditional Dutch book arguments did not specify Bayesian updating: they left open the possibility that non-Bayesian updating rules could avoid Dutch books. For example, Hacking writes[16] "And neither the Dutch book argument, nor any other in the personalist arsenal of proofs of the probability axioms, entails the dynamic assumption. Not one entails Bayesianism. So the personalist requires the dynamic assumption to be Bayesian. It is true that in consistency a personalist could abandon the Bayesian model of learning from experience. Salt could lose its savour."

    In fact, there are non-Bayesian updating rules that also avoid Dutch books (as discussed in the literature on "probability kinematics" following the publication of Richard C. Jeffrey's rule). The additional hypotheses sufficient to (uniquely) specify Bayesian updating are substantial, complicated, and unsatisfactory.[17]

    Decision theory approach

    A decision-theoretic justification of the use of Bayesian inference (and hence of Bayesian probabilities) was given by Abraham Wald, who proved that every admissible statistical procedure is either a Bayesian procedure or a limit of Bayesian procedures.[18] Conversely, every Bayesian procedure is admissible.[19]

    Personal probabilities and objective methods for constructing priors

    Following the work on expected utility theory of Ramsey and von Neumann, decision-theorists have accounted for rational behavior using a probability distribution for the agent. Johann Pfanzagl completed the Theory of Games and Economic Behavior by providing an axiomatization of subjective probability and utility, a task left uncompleted by von Neumann and Oskar Morgenstern: their original theory supposed that all the agents had the same probability distribution, as a convenience.[20] Pfanzagl's axiomatization was endorsed by Oskar Morgenstern: "Von Neumann and I have anticipated" the question whether probabilities "might, perhaps more typically, be subjective and have stated specifically that in the latter case axioms could be found from which could derive the desired numerical utility together with a number for the probabilities (cf. p. 19 of The Theory of Games and Economic Behavior). We did not carry this out; it was demonstrated by Pfanzagl ... with all the necessary rigor".[21]

    Ramsey and Savage noted that the individual agent's probability distribution could be objectively studied in experiments. The role of judgment and disagreement in science has been recognized since Aristotle and even more clearly with Francis Bacon. The objectivity of science lies not in the psychology of individual scientists, but in the process of science and especially in statistical methods, as noted by C. S. Peirce.[citation needed] Recall that the objective methods for falsifying propositions about personal probabilities have been used for a half century, as noted previously. Procedures for testing hypotheses about probabilities (using finite samples) are due to Ramsey (1931) and de Finetti (1931, 1937, 1964, 1970). Both Bruno de Finetti and Frank P. Ramsey acknowledge[citation needed] their debts to pragmatic philosophy, particularly (for Ramsey) to Charles S. Peirce.

    The "Ramsey test" for evaluating probability distributions is implementable in theory, and has kept experimental psychologists occupied for a half century.[22] This work demonstrates that Bayesian-probability propositions can be falsified, and so meet an empirical criterion of Charles S. Peirce, whose work inspired Ramsey. (This falsifiability-criterion was popularized by Karl Popper.[23][24])

    Modern work on the experimental evaluation of personal probabilities uses the randomization, blinding, and Boolean-decision procedures of the Peirce-Jastrow experiment.[25] Since individuals act according to different probability judgements, these agents' probabilities are "personal" (but amenable to objective study).
    Personal probabilities are problematic for science and for some applications where decision-makers lack the knowledge or time to specify an informed probability-distribution (on which they are prepared to act). To meet the needs of science and of human limitations, Bayesian statisticians have developed "objective" methods for specifying prior probabilities.

    Indeed, some Bayesians have argued the prior state of knowledge defines the (unique) prior probability-distribution for "regular" statistical problems; cf. well-posed problems. Finding the right method for constructing such "objective" priors (for appropriate classes of regular problems) has been the quest of statistical theorists from Laplace to John Maynard Keynes, Harold Jeffreys, and Edwin Thompson Jaynes. These theorists and their successors have suggested several methods for constructing "objective" priors, such as the principle of maximum entropy, the Jeffreys prior, and reference analysis.


     Each of these methods contributes useful priors for "regular" one-parameter problems, and each prior can handle some challenging statistical models (with "irregularity" or several parameters). Each of these methods has been useful in Bayesian practice. Indeed, methods for constructing "objective" (alternatively, "default" or "ignorance") priors have been developed by avowed subjective (or "personal") Bayesians like James Berger (Duke University) and José-Miguel Bernardo (Universitat de València), simply because such priors are needed for Bayesian practice, particularly in science.[26] The quest for "the universal method for constructing priors" continues to attract statistical theorists.[26]
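    A brief sketch of my own (hypothetical data, assuming SciPy) of one such "objective" choice: the Jeffreys prior for a Bernoulli success probability is Beta(1/2, 1/2), and its posterior can be compared with the one obtained from a uniform Beta(1, 1) prior on the same data.

```python
# Compare posteriors from a uniform prior and the Jeffreys prior for a
# Bernoulli success probability, using conjugate Beta updating.
from scipy.stats import beta

successes, failures = 7, 3   # hypothetical data

for name, (a0, b0) in {"uniform Beta(1,1)": (1.0, 1.0),
                       "Jeffreys Beta(1/2,1/2)": (0.5, 0.5)}.items():
    post = beta(a0 + successes, b0 + failures)
    print(f"{name}: posterior mean = {post.mean():.3f}")
```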

    Thus, the Bayesian statistician needs either to use informed priors (using relevant expertise or previous data) or to choose among the competing methods for constructing "objective" priors.


    Notes

    1. ^ Paulos, John Allen. The Mathematics of Changing Your Mind, New York Times (US). August 5, 2011; retrieved 2011-08-06
    2. ^ a b c Jaynes, E.T. "Bayesian Methods: General Background." In Maximum-Entropy and Bayesian Methods in Applied Statistics, by J. H. Justice (ed.). Cambridge: Cambridge Univ. Press, 1986
    3. ^ Stigler, Stephen M. (1986) The history of statistics. Harvard University press. pg 131.
    4. ^ Stigler, Stephen M. (1986) The history of statistics., Harvard University press. pg 97-98, pg 131.
    5. ^ a b c Cox, Richard T. Algebra of Probable Inference, The Johns Hopkins University Press, 2001
    6. ^ a b de Finetti, B. (1974) Theory of probability (2 vols.), J. Wiley & Sons, Inc., New York
    7. ^ a b Bishop, C.M. Pattern Recognition and Machine Learning. Springer, 2007
    8. ^ McGrayne, Sharon Bertsch. (2011). The Theory That Would Not Die, p. 10. at Google Books
    9. ^ Stigler, Stephen M. (1986) The history of statistics. Harvard University press. Chapter 3.
    10. ^ a b Fienberg, Stephen. E. (2006) When did Bayesian Inference become "Bayesian"? Bayesian Analysis, 1 (1), 1–40. See page 5.
    11. ^ Bernardo, J.M. (2005), Reference analysis, Handbook of statistics, 25, 17–90
    12. ^ Wolpert, R.L. (2004) A conversation with James O. Berger, Statistical science, 9, 205–218
    13. ^ Bernardo, José M. (2006) A Bayesian mathematical statistics primer. ICOTS-7
    14. ^ Halpern, J. A counterexample to theorems of Cox and Fine, Journal of Artificial Intelligence Research, 10: 67-85.
    15. ^ a b Dupré, Maurice J., Tipler, Frank T. New Axioms For Bayesian Probability, Bayesian Analysis (2009), Number 3, pp. 599-606
    16. ^ Hacking (1967, Section 3, page 316), Hacking (1988, page 124)
    17. ^ van Fraassen, B. (1989) Laws and Symmetry, Oxford University Press. ISBN 0198248601
    18. ^ Wald, Abraham. Statistical Decision Functions. Wiley 1950.
    19. ^ Bernardo, José M., Smith, Adrian F.M. Bayesian Theory. John Wiley 1994. ISBN 0-471-92416-4.
    20. ^ Pfanzagl (1967, 1968)
    21. ^ Morgenstern (1976, page 65)
    22. ^ Davidson et al. (1957)
    23. ^ "Karl Popper" in Stanford Encyclopedia of Philosophy
    24. ^ Popper, Karl. (2002) The Logic of Scientific Discovery 2nd Edition, Routledge ISBN 0415278430 (Reprint of 1959 translation of 1935 original) Page 57.
    25. ^ Peirce & Jastrow (1885)
    26. ^ a b Bernardo, J. M. (2005). Reference Analysis. Handbook of Statistics 25 (D. K. Dey and C. R. Rao eds). Amsterdam: Elsevier, 17-90
