Monday, September 24, 2012

Moon Shots Program



When we think of the effort that was applied to the pursuit of landing on the Moon, I would also like to point toward the effort behind the initiative being put forward today.




The University of Texas MD Anderson Cancer Center announces the launch of the Moon Shots Program, an unprecedented effort to dramatically accelerate the pace of converting scientific discoveries into clinical advances that reduce cancer deaths. 
The program, initially targeting eight cancers, will bring together sizable multidisciplinary groups of MD Anderson researchers and clinicians to mount comprehensive attacks on acute myeloid leukemia/myelodysplastic syndrome, chronic lymphocytic leukemia, melanoma, lung cancer, prostate cancer, and triple-negative breast and ovarian cancers -- two cancers linked at the molecular level. 
The Moon Shots Program takes its inspiration from President John Kennedy's famous 1962 speech, made 50 years ago this month at Rice University, just a mile from the main MD Anderson campus. "We choose to go to the moon in this decade ... because that challenge is one that we are willing to accept, one we are unwilling to postpone, and one which we intend to win," Kennedy said. 
"Generations later, the Moon Shots Program signals our confidence that the path to curing cancer is in clearer sight than at any previous time in history," said Ronald A. DePinho, M.D., MD Anderson's president. 
Six moon shot teams, representing the eight cancers, were selected based on three rigorous criteria: the current state of scientific knowledge across the cancer continuum from prevention to survivorship; the strength and breadth of the assembled team; and the potential for measurable success in reducing cancer deaths.
With "intent to cure" as the shared purpose, each moon shot team will focus on personalized treatment; informed prediction and real-time assessment of the effect of those therapies; effective prevention and risk-management strategies; significant advances in diagnostics and early detection; and reduced treatment-related side effects that are detrimental to patients.

Sunday, September 23, 2012

Black Hole Thoughts are Spoken: Complementarity vs Firewall


Black Holes: Complementarity vs Firewalls

  • Subtitle: Strings 2012
  • Speaker: Raphael Bousso
  • Location: Ludwig-Maximilians-Universität München
  • Date: 27.07.2012 @ 16:04


Ahmed Almheiri, Donald Marolf, Joseph Polchinski, James Sully

We argue that the following three statements cannot all be true: (i) Hawking radiation is in a pure state, (ii) the information carried by the radiation is emitted from the region near the horizon, with low energy effective field theory valid beyond some microscopic distance from the horizon, and (iii) the infalling observer encounters nothing unusual at the horizon. Perhaps the most conservative resolution is that the infalling observer burns up at the horizon. Alternatives would seem to require novel dynamics that nevertheless cause notable violations of semiclassical physics at macroscopic distances from the horizon.


This lecture presents some particular thoughts that rang a bell for me, in terms of what was reported here earlier on Susskind's thought experiments about how an observer outside the black hole may interpret information gained through entanglement.

See: The elephant and the event horizon, 26 October 2006, by Amanda Gefter at New Scientist.

Also See: Where Susskind leaves off, Seth Lloyd begins

Various neutron interferometry experiments demonstrate the subtlety of the notions of duality and complementarity. By passing through the interferometer, the neutron appears to act as a wave. Yet upon passage, the neutron is subject to gravitation. As the neutron interferometer is rotated through Earth's gravitational field a phase change between the two arms of the interferometer can be observed, accompanied by a change in the constructive and destructive interference of the neutron waves on exit from the interferometer. Some interpretations claim that understanding the interference effect requires one to concede that a single neutron takes both paths through the interferometer at the same time; a single neutron would "be in two places at once", as it were. Since the two paths through a neutron interferometer can be as far as 5 cm to 15 cm apart, the effect is hardly microscopic. This is similar to traditional double-slit and mirror interferometer experiments where the slits (or mirrors) can be arbitrarily far apart. So, in interference and diffraction experiments, neutrons behave the same way as photons (or electrons) of corresponding wavelength. See: Complementarity (physics)
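For reference, the gravitationally induced phase shift measured in such (Colella-Overhauser-Werner-type) neutron interferometry experiments is usually quoted in the standard textbook form (my addition, not part of the excerpt above):

$$ \Delta\phi = \frac{2\pi\, m^2\, g\, A\, \lambda}{h^2}\,\sin\alpha $$

where m is the neutron mass, g the local gravitational acceleration, A the area enclosed by the two beam paths, λ the neutron de Broglie wavelength, and α the angle by which the interferometer plane is tilted out of the horizontal.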
 


Saturday, September 22, 2012

Mining Helium 3 On the Moon

Helium-3 (He-3, sometimes called tralphium[1]) is a light, non-radioactive isotope of helium with two protons and one neutron. It is rare on the Earth, and it is sought for use in nuclear fusion research. The abundance of helium-3 is thought to be greater on the Moon (embedded in the upper layer of regolith by the solar wind over billions of years), though still low in quantity (28 ppm of lunar regolith is helium-4 and from one ppb to 50 ppb is helium-3)[2][3], and the solar system's gas giants (left over from the original solar nebula).

 Materials on the Moon's surface contain helium-3 at concentrations on the order of between 1.4 and 15 ppb in sunlit areas,[41][42] and may contain concentrations as much as 50 ppb in permanently shadowed regions.[3] A number of people, starting with Gerald Kulcinski in 1986,[43] have proposed to explore the moon, mine lunar regolith and use the helium-3 for fusion. Recently, companies such as Planetary Resources have also stated an interest in mining helium-3 on the moon. Because of the low concentrations of helium-3, any mining equipment would need to process extremely large amounts of regolith (over 150 million tonnes of regolith to obtain one ton of helium-3),[44] and some proposals have suggested that helium-3 extraction be piggybacked onto a larger mining and development operation.
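As a rough sanity check on those numbers (my own back-of-the-envelope arithmetic, not part of the quoted source), one can compute how much regolith a given helium-3 concentration implies per tonne extracted:

```python
# How much regolith must be processed per tonne of helium-3
# at a given mass concentration in parts per billion (ppb)?

def regolith_per_tonne_he3(concentration_ppb):
    """Tonnes of regolith needed to yield one tonne of He-3."""
    return 1.0 / (concentration_ppb * 1e-9)

for ppb in (1.4, 6.7, 15.0, 50.0):
    print(f"{ppb:5.1f} ppb -> {regolith_per_tonne_he3(ppb):.3g} tonnes of regolith")

# At ~6.7 ppb this gives ~150 million tonnes per tonne of He-3, matching the
# figure quoted above; richer shadowed deposits (50 ppb) would need ~20 million.
```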

As Plato's Nightlight Mining Company, it is always of interest what proposals are put forward, and how such ventures might come to provide for life being lived on the moon: materials for construction there, and delivery to Earth.

It is also of interest that the long-range livability of conditions for human life could become the longer-term goal, with sustenance arriving as a by-product of the helium-3 mining venture itself.




Clementine color ratio composite image of Aristarchus Crater on the Moon. This 42 km diameter crater is located on the corner of the Aristarchus plateau, at 24 N, 47 W. Ejecta from the plateau is visible as the blue material at the upper left (northwest), while material excavated from the Oceanus Procellarum area is the reddish color to the lower right (southeast). The colors in this image can be used to ascertain compositional properties of the materials making up the deep strata of these two regions. (Clementine, USGS slide 11)

Precursors to research and development are the historical basis for where that research and development stands now, and part of it is an elemental assessment of what we can gain from the environment of the moon.

 The orbiter, known as LRO, separated from the Atlas V rocket carrying it and a companion mission, the Lunar Crater Observation and Sensing Satellite. The LCROSS handoff is expected to occur in about two hours and 10 minutes.




 The Moon Mineralogy Mapper (M3) is one of two instruments that NASA contributed to India's first mission to the Moon, Chandrayaan-1, launched October 22, 2008. The instrument is led by principal investigator Carle Pieters of Brown University, and managed by NASA's Jet Propulsion Laboratory.

So in any such venture we must be able to build with the materials found there to make life feasible, and there are some things that need to be done in terms of that construction.

Moon is a 2009 British science fiction drama film directed by Duncan Jones.[3] The film is about a man who experiences a personal crisis as he nears the end of a three-year solitary stint mining helium-3 on the far side of the Earth's moon.[4]
 There would need to be significant infrastructure in place before industrial scale production of lunarcrete could be possible.[2]

If I may skip ahead, one product of thinking about that construction is to see that fictional film productions help us see what needs to be done in NASA research to make it a viable project over the long run: a certain kind of prediction and thought process for engaging future possibilities.


Lunarcrete, also known as "Mooncrete", an idea first proposed by Larry A. Beyer of the University of Pittsburgh in 1985, is a hypothetical aggregate building material, similar to concrete, formed from lunar regolith, that would cut the construction costs of building on the Moon.[3]

We would need to take epoxies with us to the moon to help shield and seal the Mooncrete we would be able to produce for that building construction.


David Bennett, of the British Cement Association, argues that Lunarcrete has the following advantages as a construction material for lunar bases:[8]
  • Lunarcrete production would require less energy than lunar production of steel, aluminium, or brick.[8]
  • It is unaffected by temperature variations of +120°C to −150°C.[8]
  • It will absorb gamma rays.[8]
  • Material integrity is not affected by prolonged exposure to vacuum. Although free water will evaporate from the material, the water that is chemically bound as a result of the curing process will not.[8]
He observes, however, that Lunarcrete is not an airtight material, and to make it airtight would require the application of an epoxy coating to the interior of any Lunarcrete structure.[8]


Liquid scintillation counting is a standard laboratory method in the life-sciences for measuring radiation from beta-emitting nuclides. Scintillating materials are also used in differently constructed "counters" in many other fields.




Saturday, September 15, 2012

Noncommutative standard model




In theoretical particle physics, the non-commutative Standard Model, mainly due to the French mathematician Alain Connes, uses his noncommutative geometry to devise an extension of the Standard Model to include a modified form of general relativity. This unification implies a few constraints on the parameters of the Standard Model. Under an additional assumption, known as the "big desert" hypothesis, one of these constraints determines the mass of the Higgs boson to be around 170 GeV, comfortably within the range of the Large Hadron Collider. Recent Tevatron experiments exclude a Higgs mass of 158 to 175 GeV at the 95% confidence level.[1] However, the previously computed Higgs mass was found to have an error, and more recent calculations are in line with the measured Higgs mass. [2]

 

Background


Current physical theory features four elementary forces: the gravitational force, the electromagnetic force, the weak force, and the strong force. Gravity has an elegant and experimentally precise theory: Einstein's general relativity. It is based on Riemannian geometry and interprets the gravitational force as curvature of space-time. Its Lagrangian formulation requires only two empirical parameters, the gravitational constant and the cosmological constant.

The other three forces also have a Lagrangian theory, called the Standard Model. Its underlying idea is that they are mediated by the exchange of spin-1 particles, the so-called gauge bosons. The one responsible for electromagnetism is the photon. The weak force is mediated by the W and Z bosons; the strong force, by gluons. The gauge Lagrangian is much more complicated than the gravitational one: at present, it involves some 30 real parameters, a number that could increase. What is more, the gauge Lagrangian must also contain a spin 0 particle, the Higgs boson, to give mass to the spin 1/2 and spin 1 particles. This particle has yet to be observed, and if it is not detected at the Large Hadron Collider in Geneva, the consistency of the Standard Model is in doubt.

Alain Connes has generalized Bernhard Riemann's geometry to noncommutative geometry. It describes spaces with curvature and uncertainty. Historically, the first example of such a geometry is quantum mechanics, which introduced Heisenberg's uncertainty relation by turning the classical observables of position and momentum into noncommuting operators. Noncommutative geometry is still sufficiently similar to Riemannian geometry that Connes was able to rederive general relativity. In doing so, he obtained the gauge Lagrangian as a companion of the gravitational one, a truly geometric unification of all four fundamental interactions. Connes has thus devised a fully geometric formulation of the Standard Model, where all the parameters are geometric invariants of a noncommutative space. A result is that parameters like the electron mass are now analogous to purely mathematical constants like pi. In 1929 Weyl wrote Einstein that any unified theory would need to include the metric tensor, a gauge field, and a matter field. Einstein considered the Einstein-Maxwell-Dirac system by 1930. He probably didn't develop it because he was unable to geometricize it. It can now be geometricized as a non-commutative geometry.
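For the curious, the unification just described is usually packaged as Connes' spectral action principle. Stated schematically (my summary, rather than text from the article above), both Lagrangians come from the spectrum of a Dirac operator on the noncommutative space:

$$ S \;=\; \mathrm{Tr}\, f\!\left(\frac{D_A}{\Lambda}\right) \;+\; \frac{1}{2}\left\langle J\psi,\, D_A\,\psi \right\rangle $$

where f is a cutoff function, Λ an energy scale, D_A the Dirac operator twisted by the gauge and Higgs fields, ψ the matter fields, and J the real structure; expanding the trace in powers of Λ reproduces the Einstein-Hilbert term alongside the Standard Model Lagrangian.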

Notes

 

  1. The TEVNPH Working Group
  2. Resilience of the Spectral Standard Model

 




Friday, September 14, 2012

Computational Dilemma

Riemannian Geometry, also known as elliptical geometry, is the geometry of the surface of a sphere. It replaces Euclid's Parallel Postulate with, "Through any point in the plane, there exists no line parallel to a given line." A line in this geometry is a great circle. The sum of the angles of a triangle in Riemannian Geometry is > 180°.


Friedmann equation: what is ρ, the density?

What are the three models of geometry? k = −1, k = 0, k = +1.

Negative curvature corresponds to Ω < 1, where Ω is the ratio of the actual density to the critical density.

If we triangulate Omega for the universe we are in, Ω_m (mass) + Ω_Λ (vacuum), what position, geometrically, would our universe hold from the coordinates given? The basic understanding is of the evolution of Euclidean geometry toward a dynamical understanding, a continued expression of that geometry toward a non-Euclidean freedom within the context of the universe.
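To make the notes above concrete: the Friedmann equation relates the expansion rate H to the density and curvature,

$$ H^2 = \left(\frac{\dot a}{a}\right)^2 = \frac{8\pi G}{3}\,\rho \;-\; \frac{k c^2}{a^2}, \qquad \Omega = \frac{\rho}{\rho_c}, \quad \rho_c = \frac{3H^2}{8\pi G}, $$

and a minimal sketch of the triangulation asked about, with illustrative density values of my own choosing (close to the concordance model, not figures from this post), is:

```python
# Classify the geometry of a FRW universe from its density parameters.
# Friedmann equation: H^2 = (8*pi*G/3)*rho - k*c^2/a^2, Omega = rho/rho_c.

def geometry(omega_m, omega_lambda):
    """Return the curvature class from Omega_total = Omega_m + Omega_Lambda."""
    omega_total = omega_m + omega_lambda
    if abs(omega_total - 1.0) < 1e-3:
        return "k = 0 (flat, Euclidean)"
    return "k = +1 (closed, spherical)" if omega_total > 1 else "k = -1 (open, hyperbolic)"

# Illustrative (assumed) values:
print(geometry(omega_m=0.27, omega_lambda=0.73))  # -> k = 0 (flat, Euclidean)
print(geometry(omega_m=0.20, omega_lambda=0.20))  # -> k = -1 (open, hyperbolic)
```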






Maybe one should look for "a location" and then proceed from there?


    TWO UNIVERSES of different dimension and obeying disparate physical laws are rendered completely equivalent by the holographic principle. Theorists have demonstrated this principle mathematically for a specific type of five-dimensional spacetime ("anti–de Sitter") and its four-dimensional boundary. In effect, the 5-D universe is recorded like a hologram on the 4-D surface at its periphery. Superstring theory rules in the 5-D spacetime, but a so-called conformal field theory of point particles operates on the 4-D hologram. A black hole in the 5-D spacetime is equivalent to hot radiation on the hologram--for example, the hole and the radiation have the same entropy even though the physical origin of the entropy is completely different for each case. Although these two descriptions of the universe seem utterly unalike, no experiment could distinguish between them, even in principle. by Jacob D. Bekenstein




Consider any physical system, made of anything at all- let us call it, The Thing. We require only that The Thing can be enclosed within a finite boundary, which we shall call the Screen (Figure 39). We would like to know as much as possible about The Thing. But we cannot touch it directly-we are restricted to making measurements of it on The Screen. We may send any kind of radiation we like through The Screen, and record whatever changes result on The Screen. The Bekenstein bound says that there is a general limit to how many yes/no questions we can answer about The Thing by making observations through The Screen that surrounds it. The number must be less than one quarter the area of The Screen, in Planck units. What if we ask more questions? The principle tells us that either of two things must happen. Either the area of the screen will increase, as a result of doing an experiment that asks questions beyond the limit; or the experiments we do that go beyond the limit will erase, or invalidate, the answers to some of the previous questions. At no time can we know more about The Thing than the limit imposed by the area of the Screen. Pages 171 and 172 of Three Roads to Quantum Gravity, by Lee Smolin
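A quick numeric illustration of how large that limit actually is, using the standard Bekenstein-Hawking form S = A/4 in Planck units (my own arithmetic, not a figure from the book):

```python
import math

# Bekenstein bound: yes/no questions answerable through a screen of area A
# are at most A / (4 * l_p^2 * ln 2), i.e. entropy A/4 in Planck units.
PLANCK_LENGTH = 1.616e-35  # metres (rounded CODATA value)

def max_bits(area_m2):
    """Upper bound on the number of bits knowable through a screen of given area."""
    return area_m2 / (4 * PLANCK_LENGTH**2 * math.log(2))

print(f"1 m^2 screen: ~{max_bits(1.0):.2e} bits")  # ~1.4e69 yes/no questions
```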


    Holography encodes the information in a region of space onto a surface one dimension lower. It seems to be a property of gravity, as is shown by the fact that the area of the event horizon measures the number of internal states of a black hole. Holography would be a one-to-one correspondence between states in our four-dimensional world and states in higher dimensions. From a positivist viewpoint, one cannot distinguish which description is more fundamental. Page 198, The Universe in a Nutshell, by Stephen Hawking


The problem is that the further you go in terms of particle reductionism, the more you meet a problem with discreteness, in terms of "continuity of expression." I know what to call it, and it is of value in scientific investigation. This means the paradigmatic values one is governed by when using discreteness, in terms of, let us say, computational values, might suffer?

While one might think it would be easy to accept a foundational approach toward some computational view of reality, that view suffers under the plight of what exists out there in terms of information.

If such a view of computational validation works in terms of viewing "a second life," then how would you approach the resolvability of mathematical functions that exist in abstraction and are applied to the nature of our expressions? Why has computation not solved a mathematical hypothesis such as, let us say, Riemann's?


 Joel: I wonder if this is related to the issue of "non-computability" of the human mind, put forward by Roger Penrose. Is this why we humans can do mathematics whereas a computer cannot?

There are some interesting quotes in the following article that come very close to what is implied by that difference.

You raised a question that has always been a troubling one for me. On a general level, how could such views have been arrived at that would allow one to access such a mathematical world?

The idea being that to get to the truth, one had to turn inward and find the very roots of all thought in some geometrical form; the closer to that truth, the clearer the understanding and the schematics drawn in that form. Not all can say the search for such truth resides within. Why the need for such geometry in relativism? The Riemann hypothesis as a function of reality? Why has a computer not solved it?

My views were always general as to what we may have hoped to create in some kind of machine or mechanism. I just couldn't see the functionality of the human brain as 1's and 0's.

I might say it never occurred to me how deeply this has occupied Penrose's mind. Your question, and the related perspectives of the authors revealed in the following discourse, have raised a wide range of views that exemplify what is new to me in what you are asking.

Yet the real world has made major advancements in terms of digital physics and hyperphysics. Has any of this touched the nature of consciousness? This would then lead to Penrose's angle on what consciousness is capable of versus what a machine is capable of. That would be my guess.


Can one glean an understanding of what exists all around them without knowing how to look at what is available to us in terms of our observations? You have to be able to use "distance" in order to arrive at a conclusion about the current state of the geometry, and so to understand how such perceptions are a relevant characterization of the space and of what may drive the universe in its expression.

So there are many ongoing experiments that help to further question that perspective, test it, and validate it.

The problem is that at a certain length scale things break down. How can consciousness then be imparted to what is geometrically inherent in our expressions of the reality in which we live? Topology? Continuity of expression?




 Paul Gauguin: Where Do We Come From? What Are We? Where Are We Going?


"On the right (Where do we come from?), we see the baby, and three young women - those who are closest to that eternal mystery. In the center, Gauguin meditates on what we are. Here are two women, talking about destiny (or so he described them), a man looking puzzled and half-aggressive, and in the middle, a youth plucking the fruit of experience. This has nothing to do, I feel sure, with the Garden of Eden; it is humanity's innocent and natural desire to live and to search for more life. A child eats the fruit, overlooked by the remote presence of an idol - emblem of our need for the spiritual. There are women (one mysteriously curled up into a shell), and there are animals with whom we share the world: a goat, a cat, and kittens. In the final section (Where are we going?), a beautiful young woman broods, and an old woman prepares to die. Her pallor and gray hair tell us so, but the message is underscored by the presence of a strange white bird. I once described it as "a mutated puffin," and I do not think I can do better. It is Gauguin's symbol of the afterlife, of the unknown (just as the dog, on the far right, is his symbol of himself). 

One then ponders how such a universe is part of something much greater in expression, and how this continuity of expression is portrayed in our universe. How is such a balance struck to maintain this feature as a geometrical understanding?

You have to go outside the box. Cosmologists are limited by this perspective; others venture well beyond the constraints applied by them. About a beginning and an end, and all that lies in between: birth and death are set within the greater expression of such a universe, on and on.


 
See:

  1. What is Happening at the Singularity?
  2. Space and Time: Einstein and Beyond

Tuesday, September 11, 2012

Dark Energy Camera


Dark Energy Camera construction time lapse
A long-awaited device that will help unravel one of the universe’s most compelling mysteries gets ready to see first light. See: The Dark Energy Camera opens its eyes





Unlike the human eye, photographic film and digital cameras can stare at the sky for a long time and store more and more light. By replacing the human eye with cameras, astronomers can detect fainter and more distant objects.

Cameras used for optical astronomy are usually composed of an array of digital chips called charge-coupled devices (CCDs). CCDs convert light into electrons. Each chip is divided into millions of pixels. The electrons generated by the light that hits each pixel are converted to a digital value that a computer can store or display. 

In concept, these are the same devices that make up the heart of any home digital camera. However, unlike home cameras that are used to record images of things that are very bright, astronomical CCDs must be souped up in order to detect the tiny amount of light that reaches us from faint and/or distant objects. Much of the light from extremely distant galaxies and supernovae has been redshifted into long-wavelength red and infrared light, which conventional CCDs do not detect very well. See: Dark Energy Survey
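As a toy illustration of the photon-to-number chain described above (a minimal sketch; the quantum efficiency, gain, and noise values are assumptions of mine, and real astronomical CCD pipelines are far more involved):

```python
import random

# Toy CCD pixel: photons -> photoelectrons -> digital counts (ADU).
QUANTUM_EFFICIENCY = 0.9  # fraction of photons converted to electrons (assumed)
GAIN_E_PER_ADU = 4.0      # electrons per digital count (assumed)
READ_NOISE_E = 5.0        # rms readout noise in electrons (assumed)

def read_pixel(incident_photons):
    """Convert photons falling on one pixel during an exposure to a digital value."""
    electrons = sum(random.random() < QUANTUM_EFFICIENCY
                    for _ in range(incident_photons))
    electrons += random.gauss(0.0, READ_NOISE_E)        # readout noise
    return max(0, round(electrons / GAIN_E_PER_ADU))    # digitize

# Staring longer accumulates more light, so fainter sources rise out of the noise:
for exposure_s in (1, 10, 100):
    photons = 20 * exposure_s  # a faint source of ~20 photons/s (assumed)
    print(f"{exposure_s:4d} s exposure -> {read_pixel(photons)} ADU")
```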

Monday, September 10, 2012

Nova: Exploring Neutrino Mysteries


Neutrinos are a mystery to physicists. They exist in three different flavors and mass states and may be able to give hints about the origins of the matter-dominated universe. A new long-baseline experiment led by Fermilab called NOvA may provide some answers. There is a live feed of the first detector block being moved at http://www.fnal.gov/pub/webcams/nova_webcam/index.htm
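For context, the quantity long-baseline experiments like NOvA probe is the flavor-change probability; in the standard two-flavor approximation (my addition, not from the video) it reads:

$$ P(\nu_\mu \to \nu_e) \;\approx\; \sin^2(2\theta)\,\sin^2\!\left(\frac{1.27\,\Delta m^2\,[\mathrm{eV}^2]\; L\,[\mathrm{km}]}{E\,[\mathrm{GeV}]}\right) $$

where θ is the mixing angle, Δm² the difference of squared neutrino masses, L the baseline, and E the beam energy; the dependence on Δm² is what ties the flavor oscillations to the mass states mentioned above.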


Watch live streaming video from fermilab at livestream.com

Thursday, September 06, 2012

Duchamp's Fountain

Duchamp~ Artmaking is making the invisible, visible.
See: Marcel Duchamp's Fountain: Its History and Aesthetics in the Context of 1917, by William Camfield





For me, the extended understanding of Duchamp as an artist was always in the context of the cubists' revelation: an evolution toward Quantum Gravity displayed, in a Monte Carlo demonstration, as membranes.

 
Monte Carlo methods are a class of computational algorithms that rely on repeated random sampling to compute their results. Monte Carlo methods are often used in simulating physical and mathematical systems. Because of their reliance on repeated computation of random or pseudo-random numbers, these methods are most suited to calculation by a computer and tend to be used when it is unfeasible or impossible to compute an exact result with a deterministic algorithm.[1]
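A minimal sketch of the idea in the quote above, estimating π by repeated random sampling (the classic textbook illustration, not the membrane simulations the post alludes to):

```python
import random

def estimate_pi(samples):
    """Estimate pi from the fraction of random points in the unit square
    that land inside the quarter circle of radius 1."""
    inside = sum(random.random()**2 + random.random()**2 <= 1.0
                 for _ in range(samples))
    return 4.0 * inside / samples

# The estimate converges (on average) as the sample count grows:
for n in (1_000, 100_000, 1_000_000):
    print(f"{n:>9,} samples -> pi ~ {estimate_pi(n):.5f}")
```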

David Berenstein of Shores of the Dirac Sea wrote a blog entry entitled "Art From Math," which helped to point out a distinction that stimulated perspective on mathematical art demonstrations in regard to plot development.

Identifying artistic impressionism, for me, was to say that my view had been limited to one method only. Yet it presented the opportunity of expressing "a distinction of originality": science may regard such an expression as an "original," and yet it is reproducible by example. The plot development and resulting image demonstrated by David Berenstein were repeated by Luboš Motl's example.

This, to me, demonstrated the science behind repeatability through recognition of the algorithmic function: while seemingly unique in the sense of being "artistic," it seemed to me to be of value in science in its essence, not relegated to the blog alone. This differed from what I felt David was saying.

Revealing the subject of Duchamp's Fountain helped to further the understanding of David Berenstein's expression of artistic mathematical imaging by accident, seen as unique in science precisely because it was an accident: an accident in mathematical production.






Tuesday, September 04, 2012

The Quantum Harmonic Oscillator

Quantum Harmonic Oscillator

 


Here is a series of blog entries by Matt Strassler from his blog, Of Particular Significance.
  1. Ball on a Spring (Classical)
  2. Ball on a Spring (Quantum)
  3. Waves (Classical Form)
  4. Waves (Classical Equation of Motion)
  5. Waves (Quantum) 
  6. Fields
  7.  Particles are Quanta
  8.  How fields and particles interact with each other 
  9.  How the Higgs Field Works



Given the preceding map by Professor Strassler, what has been gained in final views requires this updating in order to proceed correctly with the views currently shared in science. So that lineage of thought is important to me.

Probability Distributions for the Quantum Oscillator
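A small sketch of those probability distributions, computed from the standard textbook eigenfunctions in dimensionless units with m = ω = ħ = 1 (my own illustration, not Strassler's material):

```python
import math

import numpy as np
from numpy.polynomial.hermite import hermval

def qho_probability(n, x):
    """|psi_n(x)|^2 for the harmonic oscillator, m = omega = hbar = 1:
    psi_n(x) = (2^n n! sqrt(pi))^(-1/2) * H_n(x) * exp(-x^2 / 2)."""
    coeffs = np.zeros(n + 1)
    coeffs[n] = 1.0  # select the nth (physicists') Hermite polynomial
    norm = 1.0 / math.sqrt(2**n * math.factorial(n) * math.sqrt(math.pi))
    psi = norm * hermval(x, coeffs) * np.exp(-x**2 / 2)
    return psi**2

x = np.linspace(-4, 4, 9)
for n in (0, 1, 2):
    print(f"n={n}:", np.round(qho_probability(n, x), 4))
# n = 0 peaks at the centre; higher n pushes probability toward the classical
# turning points, echoing the classical-quantum correspondence in the plots.
```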



At the same time, one cannot be held back from looking further and seeing where theoretical views have been taken beyond the constraints applied to the scientific mind. :)





So what is the theory, then?

Pythagoras could be called the first known string theorist. Pythagoras, an excellent lyre player, figured out the first known string physics -- the harmonic relationship. Pythagoras realized that vibrating lyre strings of equal tensions but different lengths would produce harmonious notes (i.e. middle C and high C) if the ratio of the lengths of the two strings were a whole number.

   Pythagoras discovered this by looking and listening. Today that information is more precisely encoded into mathematics, namely the wave equation for a string with a tension T and a mass per unit length m. If the string is described in coordinates as in the drawing below, where x is the distance along the string and y is the height of the string, as the string oscillates in time t, 
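The quoted passage breaks off where the source page displays the equation as an image. Written out (a standard result, my reconstruction rather than a copy from the site), the wave equation for a string of tension T and mass per unit length m is:

$$ \frac{\partial^2 y}{\partial t^2} = \frac{T}{m}\,\frac{\partial^2 y}{\partial x^2} $$

with wave speed v = √(T/m); a string of length L fixed at both ends then supports the frequencies f_n = (n/2L)√(T/m), which is where Pythagoras' whole-number harmonic ratios come from.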


See: Official String Theory Web Site


Moon Pictures

http://creativecommons.org/licenses/by-nd/3.0/

LPOD Photo Gallery


Motivation

During the late 1800s and well into the 1900s it seemed that every book that described the craters, mountains and other features of Earth's moon was titled The Moon. In my mind this came to stand for an encyclopedia-like series of descriptions of features on the lunar surface. In general, more recent books, especially those by professional scientists, describe the processes that formed and modified the Moon, and the surface features themselves are no longer described systematically. But for many lunar observers and others thinking about the Moon as a place, knowledge of individual features is important. See: The Moon Wiki
Labeled Moon

Monday, September 03, 2012

Space Weather Now




2012-09-03 15:14 UTC  G2 (Moderate) Geomagnetic Storm in Progress
G2 (Moderate) geomagnetic storming is ongoing now as a result of the coronal mass ejection (CME) arrival associated with the August 31st filament eruption. Continued geomagnetic storming is expected in the near term as the CME continues to affect Earth. Solar radiation storm levels continue to hover near the S1 (Minor) event threshold but should continue their slow decline toward background levels. Stay tuned for updates. See: Space Weather Prediction Center