Tuesday, November 23, 2010

The Synapse of the Wondering Mind

Click here for Penrose's Seminar

While trying to organize my thoughts about the title of this blog entry, it became apparent to me that the neurological transmission of electrical pulses is simply part of how the physical system functions in order to operate, while I am thinking of something much different.

It is the idea of our being receptive to something more than a signal transfer within the physical system of pathways established through repetitive use, but also the finding of that location, to receive. It is one where we can accept something into ourselves as information from another, as accepting information from around us. Is information energy?

***


Structure of a typical chemical synapse
In the nervous system, a synapse is a junction that permits a neuron to pass an electrical or chemical signal to another cell. The word "synapse" comes from "synaptein", which Sir Charles Scott Sherrington and colleagues coined from the Greek "syn-" ("together") and "haptein" ("to clasp").

Synapses are essential to neuronal function: neurons are cells that are specialized to pass signals to individual target cells, and synapses are the means by which they do so. At a synapse, the plasma membrane of the signal-passing neuron (the presynaptic neuron) comes into close apposition with the membrane of the target (postsynaptic) cell. Both the presynaptic and postsynaptic sites contain extensive arrays of molecular machinery that link the two membranes together and carry out the signaling process. In many synapses, the presynaptic part is located on an axon, but some presynaptic sites are located on a dendrite or soma.
There are two fundamentally different types of synapse:
  • In a chemical synapse, the presynaptic neuron releases a chemical called a neurotransmitter that binds to receptors located in the postsynaptic cell, usually embedded in the plasma membrane. Binding of the neurotransmitter to a receptor can affect the postsynaptic cell in a wide variety of ways.
  • In an electrical synapse, the presynaptic and postsynaptic cell membranes are connected by channels that are capable of passing electrical current, causing voltage changes in the presynaptic cell to induce voltage changes in the postsynaptic cell.
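Purely as an illustration of the chemical case described above (this is not from the quoted article), here is a minimal sketch of a presynaptic spike train driving a leaky integrate-and-fire postsynaptic cell through a decaying synaptic conductance; every parameter value is invented for the example.

```python
# Minimal sketch of a chemical synapse: presynaptic spikes release transmitter,
# opening an exponentially decaying conductance that nudges a leaky
# integrate-and-fire postsynaptic cell toward threshold. Illustrative values only.

dt = 1e-4                     # time step: 0.1 ms
steps = int(0.2 / dt)         # simulate 200 ms

tau_m, v_rest, v_thresh, v_reset = 20e-3, -70e-3, -54e-3, -70e-3   # membrane (s, V)
tau_syn, e_syn, g_peak = 5e-3, 0.0, 1.0                            # synapse

# Presynaptic spike times in seconds, converted to step indices
pre_spikes = {round(t / dt) for t in (0.02, 0.05, 0.052, 0.054)}

v, g = v_rest, 0.0
post_spikes = []
for i in range(steps):
    if i in pre_spikes:
        g += g_peak                      # neurotransmitter binds, channels open
    g -= dt * g / tau_syn                # channels close: conductance decays
    v += dt * (-(v - v_rest) - g * (v - e_syn)) / tau_m
    if v >= v_thresh:                    # postsynaptic cell fires and resets
        post_spikes.append(round(i * dt, 4))
        v = v_reset

print("postsynaptic spike times (s):", post_spikes)
```

Note that a single presynaptic pulse is not enough in this sketch; only the burst of closely spaced spikes drives the cell over threshold, which is the summation the paragraph above alludes to.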

***

The Einstein-Podolsky-Rosen Argument in Quantum Theory

First published Mon May 10, 2004; substantive revision Wed Aug 5, 2009

In the May 15, 1935 issue of Physical Review Albert Einstein co-authored a paper with his two postdoctoral research associates at the Institute for Advanced Study, Boris Podolsky and Nathan Rosen. The article was entitled “Can Quantum Mechanical Description of Physical Reality Be Considered Complete?” (Einstein et al. 1935). Generally referred to as “EPR”, this paper quickly became a centerpiece in the debate over the interpretation of the quantum theory, a debate that continues today. The paper features a striking case where two quantum systems interact in such a way as to link both their spatial coordinates in a certain direction and also their linear momenta (in the same direction). As a result of this “entanglement”, determining either position or momentum for one system would fix (respectively) the position or the momentum of the other. EPR use this case to argue that one cannot maintain both an intuitive condition of local action and the completeness of the quantum description by means of the wave function. This entry describes the argument of that 1935 paper, considers several different versions and reactions, and explores the ongoing significance of the issues they raise.

See Also: Historical Figures Lead Us to the Topic of Entanglement
While looking at Penrose's seminar (once you have clicked on the image), the idea presented itself to me that if one were to seek "a method by determination," I might express the color of gravity as an exchange in principle, as if by spooky action at a distance, a representative example of colorimetric expression.

Science and TA by Chris Boyd
Do we selectively ignore other models from artificial intelligence such as Zadeh's Fuzzy Logic? This is a logic used to model perception and used in newly designed "smart" cameras. Where standard logic must give a true or false value to every proposition, fuzzy logic assigns a certainty value between zero and one to each of the propositions, so that we say a statement is .7 true and .3 false. Is this theory selectively ignored to support our theories?
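As a rough sketch of what Zadeh-style fuzzy logic looks like in practice, truth values simply live in [0, 1] and combine with min, max, and 1 − x. The "brightness" membership function and the numbers below are my own invented example, not from any actual smart-camera design.

```python
# Zadeh-style fuzzy logic: a proposition can be, say, 0.7 true and 0.3 false.

def fuzzy_and(a: float, b: float) -> float:
    return min(a, b)          # Zadeh t-norm

def fuzzy_or(a: float, b: float) -> float:
    return max(a, b)          # Zadeh t-conorm

def fuzzy_not(a: float) -> float:
    return 1.0 - a

def is_bright(luminance: float) -> float:
    """Degree to which a pixel 'is bright', ramping from 0 at 0.3 to 1 at 0.7."""
    return min(1.0, max(0.0, (luminance - 0.3) / 0.4))

p = 0.7                        # "the scene is backlit" is 0.7 true, 0.3 false
q = is_bright(0.55)            # a partially bright pixel -> 0.625
print("p AND q =", fuzzy_and(p, q))
print("p OR  q =", fuzzy_or(p, q))
print("NOT p   =", fuzzy_not(p))
```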

Here fuzzy logic and TA have served in principle to show orders between "0 and 1" as potentials of connection between the source of exchange between those two individuals. I see "cryptography" as an example of this determination, as a defined state of reductionism through that exchange.

Stuart Kauffman raises his own philosophical ideas about such things in "Beyond Einstein and Schrodinger? The Quantum Mechanics of Closed Quantum Systems," ideas that lead further into his topic, but he has blocked my comments there, so I see no use in further participating and offering ideas for his efforts toward "data mining" with regard to his biological methods of determination.

I can say it has sparked further interest in my own assessment of "seeking to understand the color of gravity" as a method of determination, as a state of deductive orientation, that we might get from a self-evidential result of exchange, as a "cause of determination" as to our futures.

While I have cast this here as between two individuals, these thoughts also act as "an antenna" toward a universal question: "what one asks shall in some form be answered."

Not just a "blank slate," but one with something written on it. What design then predates physical expression, as if one could now define the human spirit and character as the soul in constant expression through materiality? An "evolution of spirit" then makes manifest our progressions, leading from one position to another.


***
See Also:

The Synapse is a Portal of the Thinking Mind

Thursday, November 18, 2010

QGP Research Advances

“We can say that the system definitely flows like a liquid,” says Harris.


One of the first lead-ion collisions in the LHC as recorded by the ATLAS experiment on November 8, 2010. Image courtesy CERN.

***
Scientists from the ALICE experiment at CERN’s Large Hadron Collider have publicly revealed the first measurements from the world’s highest energy heavy-ion collisions. In two papers posted today to the arXiv.org website, the collaboration describes two characteristics of the collisions: the number of particles produced from the most head-on collisions; and, for more glancing blows, the flow of the system of two colliding nuclei.
Both measurements serve to rule out some theories about how the universe behaves at its most fundamental, despite being based on a relatively small number of collisions collected in the first few days of LHC running with lead-ion beams.
In the first measurement, scientists counted the charged particles that were produced from a few thousand of the most central lead-ion collisions—those where the lead nuclei hit each other head-on. The result showed that about 18,000 particles are produced from collisions of lead ions, which is about 2.2 times more particles than produced in similar collisions of gold ions at Brookhaven National Laboratory’s Relativistic Heavy Ion Collider.
See: ALICE experiment announces first results from LHC’s lead-ion collisions

Wednesday, November 17, 2010

Entanglement is a key feature of the way complexity....

LET’S CALL IT PLECTICS Murray Gell-Mann
It is appropriate that plectics refers to entanglement or the lack thereof, since entanglement is a key feature of the way complexity arises out of simplicity, making our subject worth studying. For example, all of us human beings and all the objects with which we deal are essentially bundles of simple quarks and electrons. If each of those particles had to be in its own independent state, we could not exist and neither could the other objects. It is the entanglement of the states of the particles that is responsible for matter as we know it.
http://tuvalu.santafe.edu/~mgm/Site/Publications_files/MGM%20118.pdf

I have wanted to refer to this article and have done so in previous entries. As of today, those blog entries should carry this new link as the reference. I will be correcting them as the entries appear.

Sunday, November 14, 2010

Gravimetry

For the chemical analysis technique, see Gravimetric analysis.


Gravity map of the Southern Ocean around the Antarctic continent
Author-Hannes Grobe, AWI

This gravity field was computed from sea-surface height measurements collected by the US Navy GEOSAT altimeter between March, 1985, and January, 1990. The high density GEOSAT Geodetic Mission data that lie south of 30 deg. S were declassified by the Navy in May of 1992 and contribute most of the fine-scale gravity information.

The Antarctic continent itself is shaded in blue depending on the thickness of the ice sheet (blue shades in steps of 1000 m); light blue is shelf ice; gray lines are the major ice divides; pink spots are parts of the continent which are not covered by ice; gray areas have no data.
Gravimetry is the measurement of the strength of a gravitational field. Gravimetry may be used when either the magnitude of the gravitational field or the properties of matter responsible for its creation are of interest. The term gravimetry or gravimetric is also used in chemistry to define a class of analytical procedures, called gravimetric analysis, which rely upon weighing a sample of material.


Units of measurement

Gravity is usually measured in units of acceleration. In the SI system of units, the standard unit of acceleration is 1 metre per second squared (abbreviated as m/s²). Other units include the gal (sometimes known as a galileo, in either case with symbol Gal), which equals 1 centimetre per second squared, and the g (gn), equal to 9.80665 m/s². The value of gn approximately equals the acceleration due to gravity at the Earth's surface (although the actual acceleration g varies fractionally from place to place).
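For concreteness, a tiny sketch of the unit relationships just quoted (these conversion factors are exact by definition; the helper function is mine):

```python
# Relationships between the gravity units named above.
STANDARD_GRAVITY = 9.80665      # g_n in m/s^2
GAL = 1e-2                      # 1 Gal = 1 cm/s^2, expressed in m/s^2
MILLIGAL = 1e-3 * GAL           # 1 mGal = 1e-5 m/s^2
MICROGAL = 1e-6 * GAL           # 1 uGal = 1e-8 m/s^2

def to_mgal(acceleration_m_s2: float) -> float:
    return acceleration_m_s2 / MILLIGAL

print("g_n in Gal :", STANDARD_GRAVITY / GAL)        # ~980.665 Gal
print("g_n in mGal:", to_mgal(STANDARD_GRAVITY))     # ~980665 mGal
```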

How gravity is measured

An instrument used to measure gravity is known as a gravimeter, or gravitometer. Since general relativity regards the effects of gravity as indistinguishable from the effects of acceleration, gravimeters may be regarded as special purpose accelerometers. Many weighing scales may be regarded as simple gravimeters. In one common form, a spring is used to counteract the force of gravity pulling on an object. The change in length of the spring may be calibrated to the force required to balance the gravitational pull. The resulting measurement may be made in units of force (such as the newton), but is more commonly made in units of gals.

More sophisticated gravimeters are used when precise measurements are needed. When measuring the Earth's gravitational field, measurements are made to the precision of microgals to find density variations in the rocks making up the Earth. Several types of gravimeters exist for making these measurements, including some that are essentially refined versions of the spring scale described above. These measurements are used to define gravity anomalies.

Besides precision, stability is also an important property of a gravimeter, as it allows the monitoring of gravity changes. These changes can be the result of mass displacements inside the Earth, or of vertical movements of the Earth's crust on which measurements are being made: remember that gravity decreases by about 0.3 mGal for every metre of height. The study of gravity changes belongs to geodynamics.
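As a back-of-the-envelope sketch of that height sensitivity (using the 0.3 mGal per metre figure quoted above; the function name is mine, and the commonly quoted free-air gradient is closer to 0.3086 mGal/m):

```python
# Approximate change in measured gravity when the instrument is moved vertically.
FREE_AIR_GRADIENT_MGAL_PER_M = 0.3   # figure used in the text

def gravity_change_mgal(delta_height_m: float) -> float:
    """Positive delta_height_m = raising the gravimeter; gravity then decreases."""
    return -FREE_AIR_GRADIENT_MGAL_PER_M * delta_height_m

print(gravity_change_mgal(2.0))   # raising the instrument 2 m lowers g by ~0.6 mGal
```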

The majority of modern gravimeters use specially designed quartz zero-length springs to support the test mass. Zero-length springs do not follow Hooke's law; instead, they exert a force proportional to their length. The special property of these springs is that the natural resonant period of oscillation of the spring-mass system can be made very long, approaching a thousand seconds. This detunes the test mass from most local vibration and mechanical noise, increasing the sensitivity and utility of the gravimeter. The springs are quartz so that magnetic and electric fields do not affect measurements. The test mass is sealed in an air-tight container so that tiny changes of barometric pressure from blowing wind and other weather do not change the buoyancy of the test mass in air.

Spring gravimeters are, in practice, relative instruments which measure the difference in gravity between different locations. A relative instrument also requires calibration by comparing instrument readings taken at locations with known complete or absolute values of gravity. Absolute gravimeters provide such measurements by determining the gravitational acceleration of a test mass in vacuum. A test mass is allowed to fall freely inside a vacuum chamber and its position is measured with a laser interferometer and timed with an atomic clock. The laser wavelength is known to ±0.025 ppb and the clock is stable to ±0.03 ppb as well. Great care must be taken to minimize the effects of perturbing forces such as residual air resistance (even in vacuum) and magnetic forces. Such instruments are capable of an accuracy of a few parts per billion or 0.002 mGal and reference their measurement to atomic standards of length and time. Their primary use is for calibrating relative instruments, monitoring crustal deformation, and in geophysical studies requiring high accuracy and stability. However, absolute instruments are somewhat larger and significantly more expensive than relative spring gravimeters, and are thus relatively rare.
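A minimal sketch of the free-fall principle just described: fit the drop trajectory z(t) = z0 + v0·t + ½·g·t² to timed position samples and read g off the quadratic coefficient. The samples below are synthetic; a real instrument gets them from the laser interferometer and atomic clock.

```python
# Recovering g from a simulated free-fall drop, as an absolute gravimeter does.
import numpy as np

g_true = 9.8012345                         # pretend local gravity (m/s^2)
t = np.linspace(0.0, 0.2, 200)             # a 200 ms drop, 200 timing samples
z = 0.5 * g_true * t**2                    # distance fallen, starting from rest
z += np.random.normal(0.0, 1e-9, t.size)   # nanometre-level interferometer noise

coeffs = np.polyfit(t, z, 2)               # [0.5*g, v0, z0]
g_est = 2.0 * coeffs[0]
print(f"estimated g = {g_est:.7f} m/s^2")
```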

Gravimeters have been designed to mount in vehicles, including aircraft, ships and submarines. These special gravimeters isolate acceleration from the movement of the vehicle, and subtract it from measurements. The acceleration of the vehicles is often hundreds or thousands of times stronger than the changes being measured. A gravimeter (the Lunar Surface Gravimeter) was also deployed on the surface of the moon during the Apollo 17 mission, but did not work due to a design error. A second device (the Traverse Gravimeter Experiment) functioned as anticipated.

Microgravimetry

Microgravimetry is a rising and important branch developed on the foundation of classical gravimetry.

Microgravity investigations are carried out in order to solve various problems of engineering geology, mainly the location of voids and their monitoring. Very detailed measurements of high accuracy can indicate voids of any origin, provided the size and depth are large enough to produce a gravity effect stronger than the level of confidence of the relevant gravity signal.

History

The modern gravimeter was developed by Lucien LaCoste and Arnold Romberg in 1936.

They also invented most subsequent refinements, including the ship-mounted gravimeter, in 1965, temperature-resistant instruments for deep boreholes, and lightweight hand-carried instruments. Most of their designs remain in use (2005) with refinements in data collection and data processing.

See also

The Lunar Far Side: The Side Never Seen from Earth

Mass concentration (astronomy)

This figure shows the topography (top) and corresponding gravity (bottom) signal of Mare Smythii on the Moon. It nicely illustrates the term "mascon". Author: Martin Pauer

While the article is from Tuesday, June 22, 2010 9:00 PM, it still amazes me how we see the moon in the context of its coloring.
Topography, when seen in the context of landscape, and the way we measure aspects of the gravitational field, supply us with a more realistic interpretation of the globe, a more accurate picture of how that sphere (in isostatic equilibrium) looks.


Image Credit: NASA/Goddard
Ten Cool Things Seen in the First Year of LRO

Tidal forces between the moon and the Earth have slowed the moon's rotation so that one side of the moon always faces toward our planet. Though sometimes improperly referred to as the "dark side of the moon," it should correctly be referred to as the "far side of the moon" since it receives just as much sunlight as the side that faces us. The dark side of the moon should refer to whatever hemisphere isn't lit at a given time. Though several spacecraft have imaged the far side of the moon since then, LRO is providing new details about the entire half of the moon that is obscured from Earth. The lunar far side is rougher and has many more craters than the near side, so quite a few of the most fascinating lunar features are located there, including one of the largest known impact craters in the solar system, the South Pole-Aitken Basin. The image highlighted here shows the moon's topography from LRO's LOLA instrument with the highest elevations up above 20,000 feet in red and the lowest areas down below -20,000 feet in blue.

Learn More About Far side of the Moon

***
 Credit: NASA/Goddard/MIT/Brown

Figure 4: A lunar topographic map showing the Moon from the vantage point of the eastern limb. On the left side of the Moon seen in this view is part of the familiar part of the Moon observed from Earth (the eastern part of the nearside). In the middle left-most part of the globe is Mare Tranquillitatis (light blue) the site of the Apollo 11 landing, and above this an oval-appearing region (Mare Serenitatis; dark blue) the site of the Apollo 17 landing. Most of the dark blue areas are lunar maria, low lying regions composed of volcanic lava flows that formed after the heavily cratered lunar highlands (and are thus much less cratered). The topography is derived from over 2.4 billion shots made by the Lunar Orbiter Laser Altimeter (LOLA) instrument on board the NASA Lunar Reconnaissance Orbiter. The large near-circular basins show the effects of the early impacts on early planetary crusts in the inner solar system, including the Earth. 

***
 Author and Image Credit: Mark A. Wieczorek
Radial gravitational anomaly at the surface of the Moon as determined from the gravity model LP150Q. The contribution due to the rotational flattening has been removed for clarity, and positive anomalies correspond to an increase in magnitude of the gravitational acceleration. Data are presented in two Lambert azimuthal equal area projections.
The major characteristic of the Moon's gravitational field is the presence of mascons, which are large positive gravity anomalies associated with some of the giant impact basins. These anomalies greatly influence the orbit of spacecraft about the Moon, and an accurate gravitational model is necessary in the planning of both manned and unmanned missions. They were initially discovered by the analysis of Lunar Orbiter tracking data,[2] since navigation tests prior to the Apollo program experienced positioning errors much larger than mission specifications.

Wednesday, November 10, 2010

It's Neither World, not Nether

Netherworld is often used as a synonym for Underworld.

Okay, this may seem like a strange title, but believe me when I say how fascinating it is that such dynamics in meeting "each other" will allow something to "pop" right out of existence.

Underworld is a region in some religions and in mythologies which is thought to be under the surface of the earth.[1] It could be a place where the souls of the recently departed go, and, in some traditions, it is identified with Hell or the realm of death. In other traditions, however, such as animistic traditions, it could be seen as the place where life appears to have originated from (such as plant life, water, etc.) and a place to which life must return at life's end, with no negative undertones.

I mean, I am not quite sure how this post must materialize, to conclude "non-existence," until it is clear that such dynamics will allow such a thing to happen, that one could say, indeed, they have completed their journey.

Now, can I say that this is the process of the universe? I can't be sure. I know that in the "mediation process" for concluding the experience, such an experience has to come undone. Again, this is such a strange thing in my mind that I had to say that "I was the experience" until such a time that, going along with other things in sameness of dynamics, it was hard at first to see this dynamic in play as being apart from it. I could actually only say enough of this experience to conclude the realization of coming undone. Hmm...

To solidify this into understanding, I relived these things until I saw the last of the tension ebb away, to allow "a tension" to become undone. As if such tension "had to exist" until the very bubble that harbored and allowed all of the world of our expediency no longer supported such a viable option as that bubble.

I know this is not such a cute analogy, but to get to the essence of the story it has to be understood that underneath "this experience" is a dynamical revelation of sorts that hides the equation of such an experience?

You should know then that I see this very schematic of the world as having this nature to it, that we may describe reality as something closer to the definition of its very existence, and that such an attempt at describing nature was to get to the very end of what begins? Imagine arriving at the juxtaposition of such a point?

How are We to Contain Experience?

In mathematics, the Klein bottle ([klaɪ̯n]) is a non-orientable surface, informally, a surface (a two-dimensional manifold) with no identifiable "inner" and "outer" sides. Other related non-orientable objects include the Möbius strip and the real projective plane. Whereas a Möbius strip is a two-dimensional surface with boundary, a Klein bottle has no boundary. (For comparison, a sphere is an orientable surface with no boundary.)
By adding a fourth dimension to the three dimensional space, the self-intersection can be eliminated. Gently push a piece of the tube containing the intersection out of the original three dimensional space. A useful analogy is to consider a self-intersecting curve on the plane; self-intersections can be eliminated by lifting one strand off the plane.
This immersion is useful for visualizing many properties of the Klein bottle. For example, the Klein bottle has no boundary, where the surface stops abruptly, and it is non-orientable, as reflected in the one-sidedness of the immersion.
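For those who want to see the shape concretely, here is a hedged sketch of one standard immersion of the Klein bottle in ordinary 3-space (the so-called figure-8 form) as a parametric surface. The particular parametrization and the value of r are conventional choices, not anything taken from the quoted passage; in three dimensions the surface must still pass through itself, exactly as described above.

```python
# Figure-8 immersion of the Klein bottle: a parametric surface over u, v in [0, 2*pi).
import numpy as np

def klein_figure8(u, v, r=3.0):
    """Return (x, y, z) points of the figure-8 Klein bottle immersion."""
    common = r + np.cos(u / 2) * np.sin(v) - np.sin(u / 2) * np.sin(2 * v)
    x = common * np.cos(u)
    y = common * np.sin(u)
    z = np.sin(u / 2) * np.sin(v) + np.cos(u / 2) * np.sin(2 * v)
    return x, y, z

u, v = np.meshgrid(np.linspace(0, 2 * np.pi, 60), np.linspace(0, 2 * np.pi, 60))
x, y, z = klein_figure8(u, v)
print("surface grid shapes:", x.shape, y.shape, z.shape)   # feed these to any 3D plotter
```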

The geometry was revealing as I tried to encapsulate this point, so as to see where such a description fell away from all that we can contain of the world; that we can truly say we had indeed let go. To imagine, then, that one's grip on things became ever tighter, while wishing to let the strength of this, while becoming ever stronger, fall away.

"While Gassner was watching television, the natural motion of the Earth must have carried him through a small non-orientable pocket of the universe," said Boris Harkov, a mathematician at the Massachusetts Institute of Technology in Cambridge. "That's the only reasonable explanation."

One way to test the orientation of the universe is to hurl a right-handed glove into the air and see if it falls back to Earth as a left-handed glove--if it does, the universe must be non-orientable. Since Gassner's announcement, physicists have been carrying out such experiments, both outdoors and in Gassner's TV room, but so far all tests have come back negative. Still, many researchers are optimistic. "I'm confident that the glove will flip soon," said Chen Xiang, an experimental physicist at Brookhaven National Laboratory in New York. The Klarreich Occasionally


Ultimate realization that what is negative is a positive toward completion. That is how one might define the whole perspective of validation of no longer being negative?

As if one would realize that such a tension revealed in the Tao no longer existed in the picture, as a demonstration of the Tao now gone.
Now, such an object seemed part of the experience, as to the unfolding, yet in my inadequate understanding how could such a thing be taken down to such a point as to say it no longer existed? How can I say such a geometry was part of that process while I struggle to define such an action as falling away, or reducing it to such a point of nothing?

It's enough then that one sees "around that point," that the ultimate quest envisions such an "undoing," that we see where the relevance of such a tension can and should no longer exist?

The Experience Most Fitting then ?

As I relayed earlier, I experienced many things until I understood this undoing, so that such reasoning toward awareness of "what should be" was encapsulated in only one example. How shall I say it, then, that I understood all that befell me to dissolution, to show that such a demonstration was complete, and I would still be here? That such an equation of resistance could have been imparted not only in the equation, but in the telling of the experience too?

While I show by experience such an example, it should be taken that in this example I have changed the name of the person in order to protect our association. Shall I be so forthcoming that only the "object of relation" shall be the one thing identifiable, so as to know that this association is very real to me, and only to me, by that person's identification as an experience that is real? Aw... well, anyway, "more than one" for sure, as to the way in which I use that experience to demonstrate.

It all began as I noticed a tension in his voice, as he slipped into the realization of something that had happened to him earlier in the day. I was taken to a "good observation point" so that I might admit to seeing what he was seeing. As hard as I looked at first, I could not tell what he was so upset about, so I tried ever harder to see, until slowly I understood what he was pointing at, and why such a tension could exist in him and his voice, such that a rectification and adjustment was needed in order to make this right.

As I relay this situation, it was apparent at the time of such a demonstration, as an example, that this situation popped up for a reason: to demonstrate that to make it right had to be the undoing of what made it wrong, you see. To drive the point home, the realization was that such undoing had to rectify the situation where it began, so of course all actions were taken to get it fixed. Could it ever have been undone?

Well, as if I understood why such an experience came frothing to the surface of awareness, I thought to conclude this example with what I saw: it struck me that, "in turning" to back up, a hand imprint in oil was left on the back of the seat in order for the person to complete the job. A "new point of tension" was created by not washing their hands, or not covering pristine upholstery that had just been purchased.

All of this has to be undone in order for one to say that this experience has popped out of existence, you see?

That was how such a demonstration was shown to be reasonable in my mind, for such an equation to manifest such a description of that experience, that I could say it was reasonable to me and that I had understood.

Whether it was a good example rests with you, to be sure.

***
Physically, the effect can be interpreted as an object moving from the "false vacuum" (where φ = 0) to the more stable "true vacuum" (where φ = v). Gravitationally, it is similar to the more familiar case of moving from the hilltop to the valley. In the case of the Higgs field, the transformation is accompanied by a "phase change", which endows some of the particles with mass.
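A toy numerical sketch of that "hilltop to valley" picture, using the standard symmetry-breaking ("Mexican hat") form of the potential; the coefficients are illustrative only and not taken from the quoted sources.

```python
# Toy double-well potential V(phi) = lam/4 * (phi**2 - v**2)**2:
# phi = 0 is the symmetric point (the hilltop), phi = +/- v the true vacuum (the valley).
lam, v = 1.0, 1.0

def V(phi: float) -> float:
    return 0.25 * lam * (phi**2 - v**2) ** 2

print("V at the symmetric point phi = 0 (hilltop):", V(0.0))   # 0.25
print("V at the broken minimum  phi = v (valley) :", V(v))     # 0.0
```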

"Quantum Field Theory

Quantum Vacuum:

In classical physics, empty space is called the vacuum. The classical vacuum is utterly featureless. However, in quantum theory, the vacuum is a much more complex entity. The uncertainty principle allows virtual particles (each corresponding to a quantum field) to continually materialize out of the vacuum, propagate for a short time and then vanish. " http://universe-review.ca/R15-12-QFT.htm#vacuum

"The idea behind the Coleman-De Luccia instanton, discovered in 1987, is that the matter in the early universe is initially in a state known as a false vacuum. A false vacuum is a classically stable excited state which is quantum mechanically unstable." http://www.damtp.cam.ac.uk/research/gr/public/qg_qc.html

Monday, November 08, 2010

John Maynard Smith



John Maynard Smith
Born: 6 January 1920, London, England
Died: 19 April 2004 (aged 84), Lewes, East Sussex, England
Nationality: British
Fields: Evolutionary biology and genetics
Institutions: University of Sussex
Alma mater: University of Cambridge and University College London
Doctoral advisor: J.B.S. Haldane
Doctoral students: Andrew Pomiankowski, Sean Nee
Known for: Game theory, evolution of sex, signalling theory
Notable awards: Balzan Prize (1991), Copley Medal (1999), Kyoto Prize (2001), Linnean Society of London's Darwin-Wallace Medal (2008)
John Maynard Smith,[1] F.R.S. (6 January 1920 – 19 April 2004) was a British theoretical evolutionary biologist and geneticist. Originally an aeronautical engineer during the Second World War, he then took a second degree in genetics under the well-known biologist J.B.S. Haldane. Maynard Smith was instrumental in the application of game theory to evolution and theorized on other problems such as the evolution of sex and signalling theory.


Biography

Early years

John Maynard Smith was born in London, the son of a surgeon, but following his father's death in 1928 the family moved to Exmoor, where he became interested in natural history. Quite unhappy with the lack of formal science education at Eton College, Maynard Smith took it upon himself to develop an interest in Darwinian evolutionary theory and mathematics, after having read the work of old Etonian J.B.S. Haldane, whose books were in the school's library despite the bad reputation Haldane had at Eton for his communism.
On leaving school, Maynard Smith joined the Communist Party of Great Britain and started studying engineering at Trinity College Cambridge. When the Second World War broke out in 1939, he defied his party's line and volunteered for service. He was rejected, however, because of poor eyesight and was told to finish his engineering degree, which he did in 1941. He later quipped that "under the circumstances, my poor eyesight was a selective advantage—it stopped me getting shot". The year of his graduation, he married Sheila Matthew, and they later had two sons and one daughter (Tony, Carol, and Julian). Between 1942 and 1947 he applied his degree to military aircraft design.

Second degree

Maynard Smith then took a change of career, entering University College London (UCL) to study fruit fly genetics under Haldane. After graduating he became a lecturer in Zoology at UCL between 1952 and 1965, where he directed the Drosophila lab and conducted research on population genetics. He published a popular Penguin book, The Theory of Evolution, in 1958 (with subsequent editions in 1966, 1975, 1993).
He became gradually less attracted to communism and became a less active member, finally leaving the Party in 1956 like many other intellectuals, after the Soviet Union brutally suppressed the Hungarian Revolution (Haldane had left the party in 1950 after becoming similarly disillusioned).

University of Sussex

In 1962 he was one of the founding members of the University of Sussex and was a dean between 1965 and 1985. He subsequently became a professor emeritus. Prior to his death, the building housing much of Life Sciences at Sussex was renamed the John Maynard Smith Building in his honour.

Evolution and the Theory of Games

In 1973 Maynard Smith formalised a central concept in game theory called the evolutionarily stable strategy (ESS), based on a verbal argument by George R. Price. This area of research culminated in his 1982 book Evolution and the Theory of Games. The Hawk-Dove game is arguably his single most influential game theoretical model.
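For readers unfamiliar with it, here is a brief sketch of the Hawk-Dove game with its textbook payoffs (V = value of the contested resource, C = cost of an escalated fight); the specific numbers are illustrative, not Maynard Smith's own. When C > V, the evolutionarily stable strategy is a mixed one: play Hawk with probability V/C.

```python
# Hawk-Dove game and its mixed ESS, computed exactly with fractions.
from fractions import Fraction

V, C = Fraction(2), Fraction(6)

payoff = {
    ("H", "H"): (V - C) / 2,   # hawks escalate and risk injury
    ("H", "D"): V,             # hawk takes the resource from a dove
    ("D", "H"): Fraction(0),   # dove retreats
    ("D", "D"): V / 2,         # doves share
}

p = V / C                      # ESS frequency of playing Hawk
# At the ESS, Hawk and Dove earn the same expected payoff against the population:
expected_hawk = p * payoff[("H", "H")] + (1 - p) * payoff[("H", "D")]
expected_dove = p * payoff[("D", "H")] + (1 - p) * payoff[("D", "D")]
print("ESS hawk frequency:", p)                          # 1/3
print("payoffs at ESS:", expected_hawk, expected_dove)   # equal: 2/3 and 2/3
```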
He was elected a Fellow of the Royal Society in 1977. In 1986 he was awarded the Darwin Medal. He also developed and recovered from colon cancer.

Evolution of sex and other major transitions in evolution

Maynard Smith published a book entitled The Evolution of Sex which explored, in mathematical terms, the notion of the "two-fold cost of sex". During the late 1980s he also became interested in the other major evolutionary transitions with the biochemist Eörs Szathmáry. Together they wrote an influential 1995 book, The Major Transitions in Evolution, a seminal work which continues to contribute to ongoing issues in evolutionary biology.[2][3] A popular science version of the book, entitled The Origins of Life: From the birth of life to the origin of language, was published in 1999.
In 1991 he was awarded the Balzan Prize for Genetics and Evolution "For his powerful analysis of evolutionary theory and of the role of sexual reproduction as a critical factor in evolution and in the survival of species; for his mathematical models applying the theory of games to evolutionary problems" (motivation of the Balzan General Prize Committee). In 1995 he was awarded the Linnean Medal by The Linnean Society and in 1999 he was awarded the Crafoord Prize jointly with Ernst Mayr and George C. Williams. In 2001 he was awarded the Kyoto Prize.
In his honour, the European Society for Evolutionary Biology has an award for extraordinary young evolutionary biology researchers named The John Maynard Smith Prize.

Animal Signals

His final book, Animal Signals, co-authored with David Harper was published in 2003 on signalling theory.

Death

He died of lung cancer[4]—sitting in a high-backed chair, surrounded by books—at his home in Lewes, East Sussex on April 19, 2004, 122 years to the day after the death of Darwin. At his funeral, one of his grandchildren said, "he was very smart... and a jolly nice person". He was survived by his wife Sheila and their children.

Footnotes and References

  1. His surname was Maynard Smith, not Smith, nor was it hyphenated.
  2. Sterelny, Kim (2007). Dawkins vs. Gould: Survival of the Fittest. Cambridge, U.K.: Icon Books. ISBN 1-84046-780-0. Also ISBN 978-1-84046-780-2.
  3. Benton, Michael (2009). "Paleontology and the History of Life". In Michael Ruse & Joseph Travis. Evolution: The First Four Billion Years. Cambridge, Massachusetts: The Belknap Press of Harvard University Press. pp. 80–104. ISBN 978-0-674-03175-3.
  4. Obituary: John Maynard Smith 1920–2004.


Saturday, November 06, 2010

Colour of Gravity 3

Colour measurement

We know that colour is a psychophysical experience of an observer which changes from observer to observer and is therefore impossible to replicate absolutely. In order to quantify colour in meaningful terms we must be able to measure or represent the three attributes that together give a model of colour perception. i.e. light, object and the eye. All these attributes have been standardised by the CIE or Commission Internationale de l'Eclairage.

The colours of the clothes we wear and the textiles we use in our homes must be monitored to ensure that they are correct and consistent.

Colour measurement is therefore essential to put numbers to colour, in order to remove the need for physical samples and for subjective interpretation of results.
See: Colour measuring equipment
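As one concrete example of "putting numbers to colour", here is a sketch converting an sRGB value to CIE XYZ tristimulus values using the standard D65 sRGB matrix. This is a common colorimetric pipeline, not necessarily the specific equipment or workflow the article above refers to.

```python
# sRGB -> CIE XYZ: linearize the gamma-encoded channels, then apply the D65 matrix.

def srgb_to_xyz(r: float, g: float, b: float) -> tuple:
    """r, g, b in [0, 1] (sRGB). Returns CIE XYZ with white scaled to Y = 1."""
    def linearize(c: float) -> float:
        return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = linearize(r), linearize(g), linearize(b)
    x = 0.4124 * r + 0.3576 * g + 0.1805 * b
    y = 0.2126 * r + 0.7152 * g + 0.0722 * b
    z = 0.0193 * r + 0.1192 * g + 0.9505 * b
    return x, y, z

print(srgb_to_xyz(1.0, 1.0, 1.0))   # sRGB white: roughly (0.9505, 1.0000, 1.0890)
```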

***

A New Culture?





***

Colour Space and Colour Theory


So, having defined the "frame of reference," and having introduced the "Colour of gravity," I thought it important and consistent with the science to reveal how dynamically expressive any point within that reference frame can become. The history in association is also important.

  ***
 

See Also:

Cymatics and the Heart Song

We might object that the heart makes heart sounds and jiggles water in the pericardial sac. Stuart Kauffman

The Colour of Gravity2
The Colour of Gravity1

Thursday, November 04, 2010

It's Still An Elephant

A sensible reductionist perspective would be something like “objects are completely defined by the states of their components.” The dialogue uses elephants as examples of complex objects, so Rosenberg imagines that we know the state (position and momentum etc.) of every single particle in an elephant. Now we consider another collection of particles, far away, in exactly the same state as the ones in the elephant. Is there any sense in which that new collection is not precisely the same kind of elephant as the original?
Physicalist Anti-Reductionism

Most know the "general area" we are talking about, and since quantum gravity rests on a lot of minds, we have to see methods of materiality as the measure by which to express that reality?




The Six Men and the Elephant

So what are the ways in which modern-day theorists and scientists detest the insight that such designs are inherent in the very symmetrical views from which all symmetry-breaking phases can materialize? Do they?

So I raise the thought of there still being an elephant in the room :)


"If you constraint the idea of the elephant as a picture of the quantum gravity regime then it is highly likely one would seek to use that elephant in thought experiments to progress such thinking about possible methods to describing that determination within that given environment? How many methods?

One, and only one blind man's description in hand?:) It's still a elephant:)"

Wednesday, October 27, 2010

Ducks Know Game Theory








 

A Beautiful Math: John Nash, Game Theory, and the Modern Quest for a Code of Nature (2006)

The ducks, naturally, were delighted with this experiment, so they all rapidly paddled into position. But then Harper’s helpers began tossing the bread onto two separated patches of the pond. At one spot, the bread tosser dispensed one piece of bread every five seconds. The second was slower, tossing out the bread balls just once every 10 seconds.

Now, the burning scientific question was, if you’re a duck, what do you do? Do you swim to the spot in front of the fast tosser or the slow tosser? It’s not an easy question. When I ask people what they would do, I inevitably get a mix of answers (and some keep changing their mind as they think about it longer).
Perhaps (if you were a duck) your first thought would be to go for the guy throwing the bread the fastest. But all the other ducks might have the same idea. You’d get more bread for yourself if you switched to the other guy, right? But you’re probably not the only duck who would realize that. So the choice of the optimum strategy isn’t immediately obvious, even for people. To get the answer you have to calculate a Nash equilibrium.

After all, foraging for food is a lot like a game. In this case, the chunks of bread are the payoff. You want to get as much as you can. So do all the other ducks. As these were university ducks, they were no doubt aware that there is a Nash equilibrium point, an arrangement that gets every duck the most food possible when all the other ducks are also pursuing a maximum food-getting strategy.

Knowing (or observing) the rate of tosses, you can calculate the equilibrium point using Nash’s math. In this case the calculation is pretty simple: The ducks all get their best possible deal if one-third of them stand in front of the slow tosser and the other two-thirds stand in front of the fast tosser.
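A minimal sketch of that calculation: at equilibrium no duck gains by switching patches, which happens exactly when bread per duck is equal at both spots, so the flock splits in proportion to the tossing rates. The flock size below is only for the sanity check.

```python
# Nash equilibrium (ideal free distribution) split of the duck flock.
from fractions import Fraction

rate_fast = Fraction(1, 5)       # one piece every 5 seconds
rate_slow = Fraction(1, 10)      # one piece every 10 seconds

total = rate_fast + rate_slow
share_fast = rate_fast / total   # fraction of the flock at the fast tosser
share_slow = rate_slow / total

print("fast tosser:", share_fast)   # 2/3
print("slow tosser:", share_slow)   # 1/3

# Sanity check for a flock of 33 ducks: bread per duck per second at each patch is equal.
n = 33
print(float(rate_fast / (share_fast * n)), float(rate_slow / (share_slow * n)))
```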

And guess what? It took the ducks about a minute to figure that out. They split into two groups almost precisely the size that game theory predicted. Ducks know how to play game theory!