Showing posts with label Muons. Show all posts

Monday, September 08, 2014

GEANT4

Geant4 is a toolkit for the simulation of the passage of particles through matter. Its areas of application include high energy, nuclear and accelerator physics, as well as studies in medical and space science. The two main reference papers for Geant4 are published in Nuclear Instruments and Methods in Physics Research A 506 (2003) 250-303, and IEEE Transactions on Nuclear Science 53 No. 1 (2006) 270-278. See: Geant4

See: Applications 

***

Geant4 [1][2] (for GEometry ANd Tracking) is a platform for "the simulation of the passage of particles through matter," using Monte Carlo methods. It is the successor of the GEANT series of software toolkits developed by CERN, and the first to use object-oriented programming (in C++). Its development, maintenance and user support are taken care of by the international Geant4 Collaboration. Application areas include high energy physics and nuclear experiments, medical, accelerator and space physics studies. The software is used by a number of research projects around the world.
The Geant4 software and source code is freely available from the project web site; until version 8.1 (released June 30, 2006), no specific software license for its use existed; Geant4 is now provided under the Geant4 Software License.

Muon Tomography (cont)

This is the case with an article on Muon Tomography, titled New Muon Detector Could Find Hidden Nukes. The article appeared a few days ago on Wired. It is centered on Lisa Grossman's interview with Marcus Hohlmann, a colleague from the Florida Institute of Technology. In a nutshell, the article explains how muon particles from cosmic rays can be used to detect heavy elements (as in nuclear fuel) hidden in transport containers. And what makes things sexier is that the technology used is a spin-off from particle physics experiments. See: Muon Tomography: Who Is Leading The Research?
See Also:

***

Beginning next year, two detectors (shown here in green) on either side of Fukushima Daiichi’s Unit 2 will record the path of muons (represented by the orange line) that have passed through the reactor. By determining how the muons scatter between the detectors, scientists will compile the first picture of the damaged reactor’s interior. See:
Particle physics to aid nuclear cleanup
***

The progression of muon tomography is an interesting subject in relation to what can be used to help us understand issues we face here on earth: situations that need new diagnostic approaches to extreme conditions, for example rock density, magma flows, or even nuclear reactors.

One has to learn to follow the "links that are dropped," which trace a thread of evolution. These help one to understand the progression of the technologies used to measure things in those extreme situations. Sensor-ability then takes on a new meaning when combined with current scientific research and understanding in particle physics.

Wednesday, November 20, 2013

Muon Detection


An image of the shadow of the Moon in muons as produced by the 700m subterranean Soudan 2 detector in the Soudan Mine in Minnesota. The shadow is the result of approximately 120 muons missing from a total of 33 million detected in Soudan 2 over its 10 years of operation. The cross denotes the actual location of the Moon. The shadow of the Moon is slightly offset from this location because cosmic rays are electrically charged particles and were slightly deflected by the Earth's magnetic field on their journey to the upper atmosphere. The shadow is produced due to the shielding effect the Moon has on galactic and cosmic rays, which stream in from all directions. The cosmic rays normally strike atoms high in the upper atmosphere, producing showers of muons and other short lived particles.
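For a feel of how a deficit that small can still be statistically decisive, here is a minimal Poisson sketch (the expected count in the Moon-sized window below is a hypothetical placeholder, not the Soudan 2 number):

```python
import math

expected_in_window = 1500.0   # hypothetical muons expected in the window with no Moon shadowing
deficit = 120.0               # missing muons, as quoted above

print(round(deficit / math.sqrt(expected_in_window), 1), "sigma (rough Poisson estimate)")
```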

Just an update here while looking at Sean Carroll's blog post, entitled "Scientists Confirm Existence of Moon." While we understand the need for confirmation of the existence of things, seeing how our perception is used in order to make such a statement is itself a statement about the measure of what is real.

 We report on the observation of a significant deficit of cosmic rays from the direction of the Moon with the IceCube detector. The study of this "Moon shadow" is used to characterize the angular resolution and absolute pointing capabilities of the detector. The detection is based on data taken in two periods before the completion of the detector: between April 2008 and May 2009, when IceCube operated in a partial configuration with 40 detector strings deployed in the South Pole ice, and between May 2009 and May 2010 when the detector operated with 59 strings. Using two independent analysis methods, the Moon shadow has been observed to high significance (> 6 sigma) in both detector configurations. The observed location of the shadow center is within 0.2 degrees of its expected position when geomagnetic deflection effects are taken into account. This measurement validates the directional reconstruction capabilities of IceCube. See: Observation of the cosmic-ray shadow of the Moon with IceCube,

So I have spent some time here looking at how this measure is used in terms of such clarifications, and this to me is an exciting offshoot of what particle research has done for us. The sky's the limit, then, as to our use of such a measure, as seen and understood in the post written by Sean Carroll.

Friday, March 01, 2013

Muon Tomography



The same nuclear reaction described above (i.e. hadron-hadron impacts to produce pion beams, which then quickly decay to muon beams over short distances) is used by particle physicists to produce muon beams, such as the beam used for the muon g − 2 experiment.
Can you relate the energy of Pierre Auger cosmic-ray collisions to the level of energy being produced at the LHC? How many cosmic events can be directly related? These correlations hold value for me as perceptions of relevance when it comes to what happens at point sources. Can we consistently say that point sources of this energetic value produce QGP emissions all the time, providing the place where faster-than-light entities are possibly created?
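A rough way to compare the two energy scales (my own back-of-the-envelope sketch, not from the post): a cosmic-ray proton of laboratory energy E striking a nucleon at rest has an equivalent centre-of-mass energy of roughly sqrt(2 E m_p c^2), which can be set against the LHC's design 14 TeV.

```python
import math

m_p = 0.938  # proton rest energy in GeV (approximate)

def cms_energy_fixed_target(e_lab_gev):
    """Equivalent centre-of-mass energy (GeV) for a proton hitting a nucleon at rest."""
    return math.sqrt(2.0 * e_lab_gev * m_p)

e_auger = 1e11  # ~10^20 eV, the scale of the highest-energy Auger events (assumed value)
print(cms_energy_fixed_target(e_auger) / 1e3, "TeV equivalent")  # roughly 430 TeV
print("LHC design centre-of-mass energy: 14 TeV")
```

On that comparison the rarest Auger events still probe collision energies well beyond the LHC's reach, which is part of why such correlations are interesting.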







You must know I have had a long journey in terms of putting the pieces together, and of course I find it interesting that we can make use of, and see into, nature in ways that we had not seen before. The understanding of reducible elements makes it necessary to understand the natural processes going on around us. To understand this, Tommaso Dorigo brings the imaging back for us all to look at, and possibly make use of, as a new way to measure radioactive metals hidden amongst a full load of metal materials in a dump truck.



He reminds us that what has been sitting in the open reawakens the process of seeing this decay chain as necessary to these technologies, an example of what happens around us being caught naturally. To me this is part of the tracking I have been doing, so as to see further than we had seen before.





It is really important that, while I see the environment around earth as playing an instrumental part in the backdrop measure of what we see as Cerenkov, to me it asks for evidence of things that go "faster than the speed of light." I know the motto here, so it is not necessary to correct me on that issue.


Fig. 1: Cerenkov radiation involves the nearly continuous emission of photons by a charged particle moving faster than the speed of light in its vicinity. The charged particle gradually radiates away its energy. Cohen-Glashow emission involves the occasional creation, near a speeding neutrino, of an electron-positron pair, in which the neutrino loses a large fraction of its energy in one step. See: Is the OPERA Speedy Neutrino Experiment Self-Contradictory?
 
In such an environment of earth I find it appealing that this process is unfolding, as we see the medium as allowing the transitions necessary as the after-effect of the collision process that is taking place. So when the OPERA experiment results were announced, I was of course interested in what they had to say. Suffice to say that such a thing as a loose wire did settle the issue once and for all.

Now of course it is important that where this collision process takes place at "the point in the environment with earth," such decimation reduced to a decay chain is necessarily seen. While we reproduce this naturally in our experimental processes, we are also recording it with AMS-02 outside our environment.




Blackholes

In a vacuum the motto above about faster than light stands true here, so it forces me to question: if such a case were possible, then what says that what we see in the event of information relayed toward black hole emission would not show something about the nature of the universe, connected in ways that we do not understand? This then again may be a redundant factor for consideration while we wait for AMS-02 to reveal their results.




Thus too, it is of relevance that particle reductionism has taken us to that place where we wonder about the interconnectedness of the cosmos in ways that we did not understand before. It is important for this consideration to have such a point deliver the effect of QGP recognition, so that such traverses of particle decay serve as a relevant distribution point for all that we see here on earth. While such emission would be quickly dispersed, it is a natural consequence that we can see things here on earth such as the muons presented by Tommaso.

A black hole is an object so massive that even light cannot escape from it. This requires the idea of a gravitational mass for a photon, which then allows the calculation of an escape energy for an object of that mass. When the escape energy is equal to the photon energy, the implication is that the object is a "black hole". 
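A minimal sketch of that escape-energy argument (my own illustration; the Newtonian "escape velocity equals c" reasoning happens to land on the same radius as the relativistic Schwarzschild result, r = 2GM/c^2):

```python
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8        # speed of light, m/s
M_sun = 1.989e30   # solar mass, kg

def critical_radius(mass_kg):
    """Radius at which the Newtonian escape speed reaches c; equals 2*G*M/c^2."""
    return 2.0 * G * mass_kg / c ** 2

print(critical_radius(M_sun), "m")  # ~2950 m: squeeze a solar mass inside ~3 km and light cannot escape
```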




See Also:


Friday, July 06, 2012

The Bolshoi simulation

A virtual world?

The more complex the database, the more accurate one's simulation becomes. The point is, though, that you have to capture scientific processes through calorimeter examinations, just as you do in the LHC.

So these backdrops are processes for identifying particles as they approach earth or are produced on earth. See Fermi and its capture of thunderstorms, and one might have asked how Fermi's picture-taking would have looked had they pointed it toward the Fukushima Daiichi nuclear disaster.

So the idea here is: how do you map particulates as a measure of natural processes? The virtual world lacks the depth of measure with which correlation can exist in the natural world? Why? Because it asks the designers of computation and memory to directly map the results of the experiments. So who designs the experiments to meet the data?

How did they know the energy range in which the Higgs boson would be detected?





The Bolshoi simulation is the most accurate cosmological simulation of the evolution of the large-scale structure of the universe yet made ("bolshoi" is the Russian word for "great" or "grand"). The first two of a series of research papers describing Bolshoi and its implications have been accepted for publication in the Astrophysical Journal. The first data release of Bolshoi outputs, including output from Bolshoi and also the BigBolshoi or MultiDark simulation of a volume 64 times bigger than Bolshoi, has just been made publicly available to the world's astronomers and astrophysicists. The starting point for Bolshoi was the best ground- and space-based observations, including NASA's long-running and highly successful WMAP Explorer mission that has been mapping the light of the Big Bang in the entire sky. One of the world's fastest supercomputers then calculated the evolution of a typical region of the universe a billion light years across.

The Bolshoi simulation took 6 million cpu hours to run on the Pleiades supercomputer—recently ranked as seventh fastest of the world's top 500 supercomputers—at NASA Ames Research Center. This visualization of dark matter is 1/1000 of the gigantic Bolshoi cosmological simulation, zooming in on a region centered on the dark matter halo of a very large cluster of galaxies. Credit: Chris Henze, NASA Ames Research Center. See: Introduction: The Bolshoi Simulation



Snapshot from the Bolshoi simulation at a red shift z=0 (meaning at the present time), showing filaments of dark matter along which galaxies are predicted to form.
CREDIT: Anatoly Klypin (New Mexico State University), Joel R. Primack (University of California, Santa Cruz), and Stefan Gottloeber (AIP, Germany).
 THREE “BOLSHOI” SUPERCOMPUTER SIMULATIONS OF THE EVOLUTION OF THE UNIVERSE ANNOUNCED BY AUTHORS FROM UNIVERSITY OF CALIFORNIA, NEW MEXICO STATE UNIVERSITY



Pleiades Supercomputer

 MOFFETT FIELD, Calif. – Scientists have generated the largest and most realistic cosmological simulations of the evolving universe to-date, thanks to NASA’s powerful Pleiades supercomputer. Using the "Bolshoi" simulation code, researchers hope to explain how galaxies and other very large structures in the universe changed since the Big Bang.

To complete the enormous Bolshoi simulation, which traces how the largest galaxies and galaxy structures in the universe were formed billions of years ago, astrophysicists at New Mexico State University in Las Cruces, New Mexico, and the University of California High-Performance Astrocomputing Center (UC-HIPACC), Santa Cruz, Calif. ran their code on Pleiades for 18 days, consuming millions of hours of computer time and generating enormous amounts of data. Pleiades is the seventh most powerful supercomputer in the world.

“NASA installs systems like Pleiades, that are able to run single jobs that span tens of thousands of processors, to facilitate scientific discovery,” said William Thigpen, systems and engineering branch chief in the NASA Advanced Supercomputing (NAS) Division at NASA's Ames Research Center.
See: NASA Supercomputer Enables Largest Cosmological Simulations
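As a quick consistency check on the figures quoted above (a back-of-the-envelope sketch, not from the press release): 6 million CPU-hours spread over 18 days of wall-clock time implies roughly 14,000 processor cores kept busy on average.

```python
cpu_hours = 6.0e6   # quoted total CPU time for the Bolshoi run
wall_days = 18      # quoted wall-clock time on Pleiades

cores = cpu_hours / (wall_days * 24)
print(round(cores), "cores busy on average")   # ~13,900
```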



See Also: Dark matter’s tendrils revealed

Tuesday, November 22, 2011

first tau-neutrino “appearing” out of several billions of billions of muon neutrinos

Layout of the CNGS beam line.
The OPERA neutrino experiment [1] at the underground Gran Sasso Laboratory (LNGS) was designed to perform the first detection of neutrino oscillations in direct appearance mode in the νμ→ντ channel, the signature being the identification of the τ− lepton created by its charged current (CC) interaction [2]. See: Measurement of the neutrino velocity with the OPERA detector in the CNGS beam

Computer reconstruction of the tau candidate event detected in the OPERA
experiment. The light blue track is the one likely induced by the decay of a tau lepton
produced by a tau-neutrino. See: The OPERA experiment

***

See Also:

Proton Collision ->Decay to Muons and Muon Neutrinos ->Tau Neutrino ->

Sunday, November 20, 2011

Energy Boost From Shock Front

Main Components of CNGS
A 400 GeV/c proton beam is extracted from the SPS in 10.5 microsecond short pulses of 2.4×10^13 protons per pulse. The proton beam is transported through the transfer line TT41 to the CNGS target T40. The target consists of a series of graphite rods, which are cooled by a recirculated helium flow. Secondary pions and kaons of positive charge produced in the target are focused into a parallel beam by a system of two pulsed magnetic lenses, called horn and reflector. A 1 km long evacuated decay pipe allows the pions and kaons to decay into their daughter particles - of interest here is mainly the decay into muon-neutrinos and muons. The remaining hadrons (protons, pions, kaons) are absorbed in an iron beam dump with a graphite core. The muons are monitored in two sets of detectors downstream of the dump. Further downstream, the muons are absorbed in the rock while the neutrinos continue their travel towards Gran Sasso.
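To put a number on the decay step described above (a standard two-body kinematics estimate, not taken from the CNGS documentation): for a relativistic pi+ -> mu+ + nu_mu decay, the forward-going neutrino carries at most a fixed fraction (1 - m_mu^2/m_pi^2), about 0.43, of the pion's energy.

```python
# Forward neutrino energy fraction in pi+ -> mu+ + nu_mu for a relativistic pion
# (standard two-body decay kinematics; masses are PDG-level approximations).
m_mu = 105.66   # MeV
m_pi = 139.57   # MeV

frac = 1.0 - (m_mu / m_pi) ** 2
print(frac)                    # ~0.43
print(frac * 40.0, "GeV")      # e.g. a 40 GeV pion can hand a forward neutrino ~17 GeV
```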
For me it has been an interesting journey in trying to understand the full context of an event in space sending information throughout the cosmos in ways that are not limited to the matter configurations that would affect signals of those events.

In astrophysics, the most widely discussed mechanism of particle acceleration is the first-order Fermi process operating at collisionless shocks. It is based on the idea that particles undergo stochastic elastic scatterings both upstream and downstream of the shock front. This causes particles to wander across the shock repeatedly. On each crossing, they receive an energy boost as a result of the relative motion of the upstream and downstream plasmas. At non-relativistic shocks, scattering causes particles to diffuse in space, and the mechanism, termed "diffusive shock acceleration," is widely thought to be responsible for the acceleration of cosmic rays in supernova remnants. At relativistic shocks, the transport process is not spatial diffusion, but the first-order Fermi mechanism operates nevertheless (for reviews, see Kirk & Duffy 1999; Hillas 2005). In fact, the first ab initio demonstrations of this process using particle-in-cell (PIC) simulations have recently been presented for the relativistic case (Spitkovsky 2008b; Martins et al. 2009; Sironi & Spitkovsky 2009).
Several factors, such as the lifetime of the shock front or its spatial extent, can limit the energy to which particles can be accelerated in this process. However, even in the absence of these, acceleration will ultimately cease when the radiative energy losses that are inevitably associated with the scattering process overwhelm the energy gains obtained upon crossing the shock. Exactly when this happens depends on the details of the scattering process. See: RADIATIVE SIGNATURES OF RELATIVISTIC SHOCKS
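To make the energy-boost-per-crossing idea concrete, here is a toy Monte Carlo of first-order Fermi acceleration (my own illustration, not the simulations cited in the paper): each cycle across the shock multiplies a particle's energy by a small factor, and each cycle carries some probability of escaping downstream. The competition between the two produces the familiar power-law spectrum.

```python
import random, math

gain = 0.1        # fractional energy gain per shock crossing cycle (assumed)
p_esc = 0.1       # probability of escaping downstream per cycle (assumed)
n_particles = 200000
E0 = 1.0

final_energies = []
for _ in range(n_particles):
    e = E0
    while random.random() > p_esc:   # keep cycling across the shock until escape
        e *= (1.0 + gain)
    final_energies.append(e)

# Analytic expectation: integral spectrum N(>E) ~ E**(-s) with s = -ln(1-p_esc)/ln(1+gain)
s = -math.log(1.0 - p_esc) / math.log(1.0 + gain)
frac_above = sum(e > 10.0 * E0 for e in final_energies) / n_particles
print("predicted slope s ~", round(s, 2))                       # ~1.1
print("fraction above 10*E0:", frac_above, "vs ~", round(10.0 ** (-s), 3))
```

The slope depends only on the gain per cycle and the escape probability, which is why shock acceleration so naturally yields power-law cosmic-ray spectra.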

So, regarding soliton expressions: an example does not seem to offer itself here in the blog beyond the familiar animation of the boat traveling down the channel, but for me this was the idea of the experimental processes unfolding at the LHC. The collision point creates shock waves/particle sprays as jets?


Soliton


Solitary wave in a laboratory wave channel.
In mathematics and physics, a soliton is a self-reinforcing solitary wave (a wave packet or pulse) that maintains its shape while it travels at constant speed. Solitons are caused by a cancellation of nonlinear and dispersive effects in the medium. (The term "dispersive effects" refers to a property of certain systems where the speed of the waves varies according to frequency.) Solitons arise as the solutions of a widespread class of weakly nonlinear dispersive partial differential equations describing physical systems. The soliton phenomenon was first described by John Scott Russell (1808–1882) who observed a solitary wave in the Union Canal in Scotland. He reproduced the phenomenon in a wave tank and named it the "Wave of Translation".
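For reference, the shape-preserving pulse John Scott Russell observed has a simple closed form in the Korteweg-de Vries (KdV) idealization; a minimal sketch (textbook formula, my choice of normalization):

```python
import math

def kdv_soliton(x, t, c=1.0, x0=0.0):
    """One-soliton solution u(x,t) = (c/2) * sech^2( sqrt(c)/2 * (x - c*t - x0) )
    of the KdV equation u_t + 6*u*u_x + u_xxx = 0."""
    arg = 0.5 * math.sqrt(c) * (x - c * t - x0)
    return 0.5 * c / math.cosh(arg) ** 2

print(kdv_soliton(0.0, 0.0))   # crest height c/2 at the peak
print(kdv_soliton(2.0, 2.0))   # same value after the crest has moved with speed c = 1
```

Taller solitons travel faster (speed c, height c/2), which is exactly the balance of nonlinear and dispersive effects the quote describes.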

So in a sense the shock front/horn, for me, in respect of Gran Sasso, is the idea that such a front becomes a dispersive element in the medium of the earth, so that densities in the earth give us a means by which relativistic interpretations can be assigned to density determinations in the earth. Yet there are things not held to this distinction, and they move on past such targets, which shows that cosmological considerations are just as relevant today as they are while we set up the experimental avenues toward identifying this relationship here on earth.

 For more than a decade, scientists have seen evidence that the three known types of neutrinos can morph into each other. Experiments have found that muon neutrinos disappear, with some of the best measurements provided by the MINOS experiment. Scientists think that a large fraction of these muon neutrinos transform into tau neutrinos, which so far have been very hard to detect, and they suspect that a tiny fraction transform into electron neutrinos. See: Fermilab experiment weighs in on neutrino mystery

When looking out at the universe, does such a perspective not hold relevance for those not looking past the real toward the abstract? To understand the distance measure of the binary star of Taylor and Hulse, such signals need to be understood in relation to what is transmitted out into the cosmos. How are we measuring that distance? Some who are even more abstractly gifted may see the waves generated in gravitational expression. So this becomes a means with which to ask: if the binary stars are getting closer, then how is this distance measured? You see?


Measurement of the neutrino velocity with the OPERA detector in the CNGS beam





Wednesday, October 12, 2011

Seeing Underlying Structures

There is a gap between "Proton Collision ->Decay to Muons and Muon Neutrinos ->Tau Neutrino ->[gap] tau lepton may travel some tens of microns before decaying back into neutrino and charged tracks." Use the case of relativistic muons?
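Taking up the "case of relativistic muons": the reason unstable particles leave measurable track lengths at all is time dilation. A minimal sketch (standard formula, PDG-level constants, not from the post):

```python
import math

c = 2.998e8         # m/s
tau_mu = 2.197e-6   # muon proper lifetime, s
m_mu = 105.7e6      # muon rest energy, eV

def mean_decay_length(E_eV):
    """Average distance gamma*beta*c*tau travelled before decay."""
    gamma = E_eV / m_mu
    beta = math.sqrt(1.0 - 1.0 / gamma ** 2)
    return gamma * beta * c * tau_mu

print(mean_decay_length(10e9) / 1e3, "km")   # a 10 GeV muon flies ~60 km on average
```

The same gamma*beta*c*tau logic, with a much shorter proper decay length (c*tau about 87 microns) for the tau lepton, is what sets the microscopic tau track lengths quoted above.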


An analysis of four Fermi-detected gamma-ray bursts (GRBs) is given that sets upper limits on the energy dependence of the speed and dispersion of light across the universe. The analysis focuses on photons recorded above 1 GeV for Fermi detected GRB 080916C, GRB 090510A, GRB 090902B, and GRB 090926A. Upper limits on time scales for statistically significant bunching of photon arrival times were found and cataloged. In particular, the most stringent limit was found for GRB 090510A at redshift z ≈ 0.897, for which Δt < 0.00136 sec, a limit driven by three separate photon bunchings. These photons occurred among the first seven super-GeV photons recorded for GRB 090510A and contain one pair with an energy difference of ΔE ≈ 23.5 GeV. The next most limiting burst was GRB 090902B at a redshift of z ≈ 1.822, for which Δt < 0.161 sec, a limit driven by several groups of photons, one pair of which had an energy difference ΔE ≈ 1.56 GeV. Resulting limits on the differential speed of light and Lorentz invariance were found for all of these GRBs independently. The strongest limit was for GRB 090510A with Δc/c < 6.09 x 10^-21. Given generic dispersion relations across the universe where the time delay is proportional to the photon energy to the first or second power, the most stringent limits on the dispersion strengths were k1 < 1.38 x 10^-5 sec Gpc^-1 GeV^-1 and k2 < 3.04 x 10^-7 sec Gpc^-1 GeV^-2 respectively. Such upper limits result in upper bounds on dispersive effects created, for example, by dark energy, dark matter or the spacetime foam of quantum gravity. Relating these dispersion constraints to loop quantum gravity energy scales specifically results in limits of M1c^2 > 7.43 x 10^21 GeV and M2c^2 > 7.13 x 10^11 GeV respectively. See: Limiting properties of light and the universe with high energy photons from Fermi-detected Gamma Ray Bursts
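To get a feel for the size of delay those dispersion bounds allow (a simple plug-in of the quoted k1 limit; the distance and photon energy below are hypothetical example values):

```python
k1_max = 1.38e-5   # sec per Gpc per GeV, the quoted first-order upper limit
D_gpc = 2.0        # assumed light-travel distance in Gpc (hypothetical)
E_gev = 30.0       # photon energy in GeV (hypothetical)

# If the delay scales as k1 * distance * energy, the allowed extra delay is at most:
print(k1_max * D_gpc * E_gev, "seconds")   # ~8e-4 s accumulated over billions of light years
```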


The point here is that the energetic disposition of flight times and the Fermi calorimetry results point toward GRB emission, and direct determination of GRB emission allocates the potential of underlying structure in the W and the electron-neutrino fields?

Fig. 3: An electron, as it travels, may become a more complex combination of disturbances in two or more fields. It occasionally is a mixture of disturbances in the photon and electron fields; more rarely it is a disturbance in the W and the electron-neutrino fields. See: Another Speed Bump for Superluminal Neutrinos, posted on October 11, 2011 at "Of Particular Significance"
***
What I find interesting is that Tamburini and Laveder do not stop at discussing the theoretical interpretation of the alleged superluminal motion, but put their hypothesis to the test by comparing known measurements of neutrino velocity on a graph, where the imaginary mass is computed from the momentum of neutrinos and the distance traveled in a dense medium. The data show a very linear behaviour, which may constitute an explanation of the Opera effect: See: Tamburini: Neutrinos Are Majorana Particles, Relativity Is OK


See Also:

Friday, October 07, 2011

Cohen-Glashow Argument

Bee: And for all I know you need a charge for Cherenkov radiation and neutrinos don't have one.



Fig. 1: Cerenkov radiation involves the nearly continuous emission of photons by a charged particle moving faster than the speed of light in its vicinity. The charged particle gradually radiates away its energy. Cohen-Glashow emission involves the occasional creation, near a speeding neutrino, of an electron-positron pair, in which the neutrino loses a large fraction of its energy in one step.


But these details almost don’t matter, because Cohen and Glashow then put another chunk of powerful evidence on the table. They point out that neutrinos have been observed, at two other experiments, SuperKamiokande and IceCube, 100 to 1000 times more energetic than the neutrinos in OPERA’s beam. These neutrinos come out of the earth having traveled many hundreds or thousands of kilometers across interior of the planet. The fact that these neutrinos did not lose most of their energy while traveling all that distance implies that they, too, did not undergo CG emission. In short, they must have traveled very close to, and conservatively no more than about fifteen parts per billion faster than, the speed of light in empty space. (The limit from IceCube data may be as good as ten parts per trillion!)See: Is the OPERA Speedy Neutrino Experiment Self-Contradictory?

Wednesday, October 05, 2011

Proton Collision ->Decay to Muons and Muon Neutrinos ->Tau Neutrino ->

.....tau lepton may travel some tens of microns before decaying back into neutrino and charged tracks




Before I comment on the result, let me give you a little background on the whole thing. Opera is a very innovative concept in neutrino detection. Its aim is to detect tau neutrino appearance in a beam of muon neutrinos. See: A Six-Sigma Signal Of Superluminal Neutrinos From Opera!

The OPERA result is based on the observation of over 15000 neutrino events measured at Gran Sasso, and appears to indicate that the neutrinos travel at a velocity 20 parts per million above the speed of light, nature’s cosmic speed limit. Given the potential far-reaching consequences of such a result, independent measurements are needed before the effect can either be refuted or firmly established. This is why the OPERA collaboration has decided to open the result to broader scrutiny. The collaboration’s result is available on the preprint server arxiv.org: http://arxiv.org/abs/1109.4897.

In order to perform this study, the OPERA Collaboration teamed up with experts in metrology from CERN and other institutions to perform a series of high precision measurements of the distance between the source and the detector, and of the neutrinos’ time of flight. The distance between the origin of the neutrino beam and OPERA was measured with an uncertainty of 20 cm over the 730 km travel path. The neutrinos’ time of flight was determined with an accuracy of less than 10 nanoseconds by using sophisticated instruments including advanced GPS systems and atomic clocks. The time response of all elements of the CNGS beam line and of the OPERA detector has also been measured with great precision.
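To see what those accuracies have to resolve (simple arithmetic on the figures quoted above, not an official OPERA number): 20 parts per million over a 730 km baseline corresponds to an early arrival of a few tens of nanoseconds.

```python
c = 299792458.0              # m/s
L = 730.0e3                  # m, quoted CERN-to-Gran Sasso baseline
t_light = L / c              # light travel time over the baseline
dt_early = t_light * 20e-6   # advance implied by "20 parts per million" faster

print(t_light * 1e3, "ms baseline light time")   # ~2.44 ms
print(dt_early * 1e9, "ns early arrival")        # ~49 ns, hence the need for ~10 ns timing and ~20 cm geodesy
```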

***

By classifying the neutrino interactions according to the type of neutrino involved (electron-neutrino or muon-neutrino) and counting their relative numbers as a function of the distance from their creation point, we conclude that the muon-neutrinos are "oscillating." See: STATEMENT: EVIDENCE FOR MASSIVE NEUTRINOS FOUND by Dave Casper

 ***
We present an analysis of atmospheric neutrino data from a 33.0 kiloton-year (535-day) exposure of the Super-Kamiokande detector. The data exhibit a zenith angle dependent deficit of muon neutrinos which is inconsistent with expectations based on calculations of the atmospheric neutrino flux. Experimental biases and uncertainties in the prediction of neutrino fluxes and cross sections are unable to explain our observation. See: Evidence for oscillation of atmospheric neutrinos



See:

Tuesday, October 04, 2011

Cherenkov radiation


Taking the formalisms of electromagnetic radiation and supposing a tachyon had an electric charge—as there is no reason to suppose a priori that tachyons must be either neutral or charged—then a charged tachyon must lose energy as Cherenkov radiation[15]—just as ordinary charged particles do when they exceed the local speed of light in a medium. A charged tachyon traveling in a vacuum therefore undergoes a constant proper time acceleration and, by necessity, its worldline forms a hyperbola in space-time. However, as we have seen, reducing a tachyon's energy increases its speed, so that the single hyperbola formed is of two oppositely charged tachyons with opposite momenta (same magnitude, opposite sign) which annihilate each other when they simultaneously reach infinite speed at the same place in space. (At infinite speed the two tachyons have no energy each and finite momentum of opposite direction, so no conservation laws are violated in their mutual annihilation. The time of annihilation is frame dependent.) Even an electrically neutral tachyon would be expected to lose energy via gravitational Cherenkov radiation, because it has a gravitational mass, and therefore increase in speed as it travels, as described above. See: Tachyon
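The "reducing a tachyon's energy increases its speed" statement follows from the standard tachyon energy-velocity relation, E = mu*c^2 / sqrt(v^2/c^2 - 1) for v > c; a small numerical sketch of that behaviour:

```python
import math

def tachyon_energy(v_over_c, mu_c2=1.0):
    """E = mu*c^2 / sqrt(v^2/c^2 - 1), valid only for v > c (mu in the same energy units)."""
    return mu_c2 / math.sqrt(v_over_c ** 2 - 1.0)

for v in (1.01, 2.0, 10.0, 1000.0):
    print(v, "->", round(tachyon_energy(v), 4))
# Energy diverges as v approaches c from above and falls toward zero as v -> infinity,
# which is the "annihilation at infinite speed with zero energy" picture in the quote.
```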
 ***
An early set of experiments with a facility called the solar neutrino telescope, measured the rate of neutrino emission from the sun at only one third of the expected flux. Often referred to as the Solar Neutrino Problem, this deficiency of neutrinos has been difficult to explain. Recent results from the Sudbury Neutrino Observatory suggest that a fraction of the electron neutrinos produced by the sun are transformed into muon neutrinos on the way to the earth. The observations at Sudbury are consistent with the solar models of neutrino flux assuming that this "neutrino oscillation" is responsible for observation of neutrinos other than electron neutrinos. See: Detection of Neutrinos

P.I. Chats: Faster-than-light neutrinos?

Measurements by GPS confirm that the neutrinos identified by the Super-Kamiokande detector were indeed produced on the east coast of Japan. The physicists therefore estimate that the results obtained point to a 99.3% probability that electron neutrino appearance was detected. See: Neutrino Oscillations Caught in the Act



The Gran Sasso National Laboratory (LNGS) is one of four INFN national laboratories.




PERIMETER INSTITUTE RECORDED SEMINAR ARCHIVE



PIRSA:11090135 (Flash Presentation, MP3, PDF)
P.I. Chats: Faster-than-light neutrinos?
Abstract: Can neutrinos really travel faster than light? Recently released experimental data from CERN suggests that they can. Join host Dr. Richard Epp and a panel of Perimeter Institute scientists in a live webinar to discuss this unexpected and puzzling experimental result, and some theoretical questions it might raise.
Date: 28/09/2011 - 12:15 pm
Thanks Phil 

***



Using the NuMI beam to search for electron neutrino appearance.

The NOνA Experiment (Fermilab E929) will construct a detector optimized for electron neutrino detection in the existing NuMI neutrino beam. The primary goal of the experiment is to search for evidence of muon to electron neutrino oscillations. This oscillation, if it occurs, holds the key to many of the unanswered questions in neutrino oscillation physics. In addition to providing a measurement of the last unknown mixing angle, θ13, this oscillation channel opens the possibility of seeing matter/anti-matter asymmetries in neutrinos and determination of the ordering of the neutrino mass states.See:The NOνA Experiment at Fermilab (E929)

***

Image from a neutrino detection experiment. (Credit: Image courtesy of Southern Methodist University)

Hunting Oscillation of Muon to Electron: Neutrino Data to Flow in 2010; NOvA Scientists Tune Design


Bee: And for all I know you need a charge for Cherenkov radiation and neutrinos don't have one.

Friday, September 23, 2011

Measurement of the neutrino velocity with the OPERA detector

New results from OPERA on neutrino properties, live from the Main Amphitheatre.

“This result comes as a complete surprise,” said OPERA spokesperson, Antonio Ereditato of the University of Bern. “After many months of studies and cross checks we have not found any instrumental effect that could explain the result of the measurement. While OPERA researchers will continue their studies, we are also looking forward to independent measurements to fully assess the nature of this observation.” 


 “When an experiment finds an apparently unbelievable result and can find no artefact of the measurement to account for it, it’s normal procedure to invite broader scrutiny, and this is exactly what the OPERA collaboration is doing, it’s good scientific practice,” said CERN Research Director Sergio Bertolucci. “If this measurement is confirmed, it might change our view of physics, but we need to be sure that there are no other, more mundane, explanations. That will require independent measurements.”See:OPERA experiment reports anomaly in flight time of neutrinos from CERN to Gran Sasso




Have we considered their mediums of expression, to know that we have witnessed Cerenkov radiation as a faster-than-light process in a medium, and to know the circumstances of such expressions, understood as backdrop measures of processes we are familiar with? Explain the history of particulate expressions from vast distances across our universe?

The OPERA Detector


This is something very different though, and the dialogue and thoughts shared will be very interesting, as we look at the evidence in a way that helps us to consider what is sound in its understanding, such as the speed of light.

See Also:

Wednesday, September 07, 2011

The Synaptic World of Experience and Knowledge



It is important that people realize that, as much as topological seasoning is added to the world by myself, I see ourselves intrinsically linked to the inductive/deductive process. It is as if the tail of each is linked, as an image of our inner and outer relation with the world continually exchanged. We are the central process in this, as if linking the past and the future together, as to the outcome in life. A self-eventual recognition of the arche and our place on it, as to the decision and acceptance of outcome according to our conclusions?
I think that Fig. 34.1 best expresses my position on this question, where each of three worlds, Platonic-mathematical, physical and mental, has its own kind of reality, and where each is (deeply and mysteriously) found in one that precedes it (the worlds taken cyclicly). I like to think that, in a sense the Platonic world may be the most primitive of the three, since mathematics is a kind of necessity, virtually conjuring its very existence through logic alone. Be that as it may, there is a further mystery, or paradox, of the cyclic aspect of these worlds, where each seems to be able to encompass the succeeding one in its entirety, while itself seeming to depend only upon a small part of its predecessor.” (Page 1028, The Road to Reality, Roger Penrose, Borzoi Book, Alfred A. Knopf, 2004)

For me, the visual helps to reinforce some of the understanding that is required of how, let's say, Sir Roger Penrose may look at the idea of "information transference," and how I may see this in individuals who are interacting with the world. I believe too that how the universe is formulated into the Cyclical Universe is to direct our attention to the facets of time attached to the ideas of how this is formulated within ourselves as well. These are the same correlations of the past, as well as the future, in our now, in our universe (our neighborhood) as well.

"If we can put everything together, we might have a model that reproduces everything we see in our detector."


Plato's problem is the term given by Noam Chomsky to the gap between knowledge and experience. It presents the question of how we account for our knowledge when environmental conditions seem to be an insufficient source of information. It is used in linguistics to refer to the "argument from poverty of the stimulus" (APS). In a more general sense, Plato’s Problem refers to the problem of explaining a "lack of input."
Solving Plato’s Problem involves explaining the gap between what one knows and the apparent lack of substantive input from experience (the environment). Plato's Problem is most clearly illustrated in the Meno dialogue, in which Socrates demonstrates that an uneducated boy nevertheless understands geometric principles.

The understanding here is that all knowledge exists in the universe and that we only have to awaken it within ourselves. This hasn't changed my view on the universal access to information that we can tap into. How is this accomplished?

This view I carry to the world of science and look for correspondences in experimental associations. I believe the answers we are looking for already exist.  It is just a matter of asking the right questions, as well as looking inside as to the truth of what we are looking at,  as a potential in the discourse of our existence as human beings. The role we are playing as components of this reality to better ourselves.

Wednesday, March 09, 2011

ICECUBE

For me, the idea of a backdrop measure is as if Thomas Young experimentally fires his photon gun: the collision points at the LHC provide dimensional references (flight paths) to events that are measured by comparison of the LHC to muon detection facilities, as with cosmic-ray collisions in the faster-than-light medium of ice, resulting in IceCube data. Cerenkov. Muon detection scenarios are useful tools for probing speeds through the earth and matter, for consideration anyway. Think of a volcano here, or looking through pyramids.
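On the Cerenkov point: light in ice travels at c/n, so a charged particle only has to beat c/n, not c, to radiate. A minimal sketch of the threshold and emission angle (the refractive index below, n of roughly 1.31 for ice, is my assumed value):

```python
import math

n_ice = 1.31   # assumed optical refractive index of ice
m_mu = 105.7   # muon rest energy, MeV

beta_threshold = 1.0 / n_ice   # Cherenkov emission requires v > c/n
gamma_threshold = 1.0 / math.sqrt(1.0 - beta_threshold ** 2)
print(round(gamma_threshold * m_mu), "MeV total muon energy at threshold")   # roughly 160 MeV

# For an ultra-relativistic particle (beta ~ 1) the light is emitted at cos(theta) = 1/n:
print(round(math.degrees(math.acos(1.0 / n_ice)), 1), "degrees Cherenkov angle in ice")
```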

That's the plan anyway right?
 
“IceCube: An instrument for neutrino astronomy,” by Francis Halzen and Spencer R. Klein
IceCube completed, University of Wisconsin press release
Ice Cube completed, Berkeley Lab press release
IceCube website

Sunday, December 12, 2010

The Compact Muon Solenoid......

Coordinates: 46°18′34″N 6°4′37″E / 46.30944°N 6.07694°E / 46.30944; 6.07694
Large Hadron Collider (LHC)
LHC.svg
LHC experiments
ATLAS A Toroidal LHC Apparatus
CMS Compact Muon Solenoid
LHCb LHC-beauty
ALICE A Large Ion Collider Experiment
TOTEM Total Cross Section, Elastic Scattering and Diffraction Dissociation
LHCf LHC-forward
MoEDAL Monopole and Exotics Detector At the LHC
LHC preaccelerators
p and Pb Linear accelerators for protons (Linac 2) and Lead (Linac 3)
(not marked) Proton Synchrotron Booster
PS Proton Synchrotron
SPS Super Proton Synchrotron

View of the CMS endcap through the barrel sections. The ladder to the lower right gives an impression of scale.
......(CMS) experiment is one of two large general-purpose particle physics detectors built on the proton-proton Large Hadron Collider (LHC) at CERN in Switzerland and France. Approximately 3,600 people from 183 scientific institutes, representing 38 countries form the CMS collaboration who built and now operate the detector.[1] It is located in an underground cavern at Cessy in France, just across the border from Geneva.

Contents

Background

Recent collider experiments such as the now-dismantled Large Electron-Positron Collider at CERN and the (as of 2010) still running Tevatron at Fermilab have provided remarkable insights into, and precision tests of the Standard Model of Particle Physics. However, a number of questions remain unanswered.

A principal concern is the lack of any direct evidence for the Higgs Boson, the particle resulting from the Higgs mechanism which provides an explanation for the masses of elementary particles. Other questions include uncertainties in the mathematical behaviour of the Standard Model at high energies, the lack of any particle physics explanation for dark matter and the reasons for the imbalance of matter and antimatter observed in the Universe.

The Large Hadron Collider and the associated experiments are designed to address a number of these questions.

Physics goals

The main goals of the experiment are:
The ATLAS experiment, at the other side of the LHC ring is designed with similar goals in mind, and the two experiments are designed to complement each other both to extend reach and to provide corroboration of findings.

Detector summary

CMS is designed as a general-purpose detector, capable of studying many aspects of proton collisions at 14 TeV, the center-of-mass energy of the LHC particle accelerator. It contains subsystems which are designed to measure the energy and momentum of photons, electrons, muons, and other products of the collisions. The innermost layer is a silicon-based tracker. Surrounding it is a scintillating crystal electromagnetic calorimeter, which is itself surrounded with a sampling calorimeter for hadrons. The tracker and the calorimetry are compact enough to fit inside the CMS solenoid which generates a powerful magnetic field of 3.8 T. Outside the magnet are the large muon detectors, which are inside the return yoke of the magnet.




The set up of the CMS. In the middle, under the so-called barrel there is a man for scale. (HCAL=hadron calorimeter, ECAL=electromagnetic calorimeter)

CMS by layers


A slice of the CMS detector.
For full technical details about the CMS detector, please see the Technical Design Report.

The interaction point

This is the point in the centre of the detector at which proton-proton collisions occur between the two counter-rotating beams of the LHC. At each end of the detector magnets focus the beams into the interaction point. At collision each beam has a radius of 17 μm and the crossing angle between the beams is 285 μrad.
At full design luminosity each of the two LHC beams will contain 2,808 bunches of 1.15×10^11 protons. The interval between crossings is 25 ns, although the number of collisions per second is only 31.6 million due to gaps in the beam as injector magnets are activated and deactivated.
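The 31.6 million figure can be checked against the bunch count (a quick sketch; the roughly 26.66 km ring circumference is my assumed value, not stated above):

```python
c = 299792458.0
circumference = 26659.0     # m, approximate LHC ring length (assumption)
f_rev = c / circumference   # revolution frequency, ~11.2 kHz
n_bunches = 2808            # design bunch count quoted above

print(n_bunches * f_rev / 1e6, "million crossings per second")   # ~31.6, vs 40 MHz if the ring were full
```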

At full luminosity each collision will produce an average of 20 proton-proton interactions. The collisions occur at a centre of mass energy of 14 TeV. It is worth noting that the actual interactions occur between quarks rather than protons, and so the actual energy involved in each collision will be lower, as determined by the parton distribution functions.

The first run, in September 2008, was expected to operate at a lower collision energy of 10 TeV, but this was prevented by the 19 September 2008 shutdown. When at this target level, the LHC will have a significantly reduced luminosity, due to both fewer proton bunches in each beam and fewer protons per bunch. The reduced bunch frequency does allow the crossing angle to be reduced to zero however, as bunches are far enough spaced to prevent secondary collisions in the experimental beampipe.

Layer 1 – The tracker


The silicon strip tracker of CMS.
Immediately around the interaction point the inner tracker serves to identify the tracks of individual particles and match them to the vertices from which they originated. The curvature of charged particle tracks in the magnetic field allows their charge and momentum to be measured.

The CMS silicon tracker consists of 13 layers in the central region and 14 layers in the endcaps. The innermost three layers (up to 11 cm radius) consist of 100×150 μm pixels, 66 million in total.
The next four layers (up to 55 cm radius) consist of 10 cm × 180 μm silicon strips, followed by the remaining six layers of 25 cm × 180 μm strips, out to a radius of 1.1 m. There are 9.6 million strip channels in total.
During full luminosity collisions the occupancy of the pixel layers per event is expected to be 0.1%, and 1–2% in the strip layers. The expected SLHC upgrade will increase the number of interactions to the point where over-occupancy may significantly reduce trackfinding effectiveness.

This part of the detector is the world's largest silicon detector. It has 205 m2 of silicon sensors (approximately the area of a tennis court) comprising 76 million channels.[2]

Layer 2 – The Electromagnetic Calorimeter

The Electromagnetic Calorimeter (ECAL) is designed to measure with high accuracy the energies of electrons and photons.

The ECAL is constructed from crystals of lead tungstate, PbWO4. This is an extremely dense but optically clear material, ideal for stopping high energy particles. It has a radiation length of X0 = 0.89 cm, and has a rapid light yield, with 80% of light yield within one crossing time (25 ns). This is balanced however by a relatively low light yield of 30 photons per MeV of incident energy.

The crystals used have a front size of 22 mm × 22 mm and a depth of 230 mm. They are set in a matrix of carbon fibre to keep them optically isolated, and backed by silicon avalanche photodiodes for readout. The barrel region consists of 61,200 crystals, with a further 7,324 in each of the endcaps.
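Those two numbers together say how thoroughly each crystal contains an electromagnetic shower (a simple ratio, nothing more):

```python
depth_cm = 23.0   # crystal depth quoted above (230 mm)
x0_cm = 0.89      # PbWO4 radiation length quoted above

print(round(depth_cm / x0_cm, 1), "radiation lengths")   # ~25.8 X0, enough to contain multi-GeV electromagnetic showers
```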

At the endcaps the ECAL inner surface is covered by the preshower subdetector, consisting of two layers of lead interleaved with two layers of silicon strip detectors. Its purpose is to aid in pion-photon discrimination.

Layer 3 – The Hadronic Calorimeter


Half of the Hadron Calorimeter
The purpose of the Hadronic Calorimeter (HCAL) is both to measure the energy of individual hadrons produced in each event, and to be as near to hermetic around the interaction region as possible to allow events with missing energy to be identified.

The HCAL consists of layers of dense material (brass or steel) interleaved with tiles of plastic scintillators, read out via wavelength-shifting fibres by hybrid photodiodes. This combination was determined to allow the maximum amount of absorbing material inside of the magnet coil.

The high pseudorapidity region (3.0 < | η | < 5.0) is instrumented by the Hadronic Forward detector. Located 11 m either side of the interaction point, this uses a slightly different technology of steel absorbers and quartz fibres for readout, designed to allow better separation of particles in the congested forward region.
The brass used in the endcaps of the HCAL used to be Russian artillery shells.[3]

Layer 4 – The magnet

Like most particle physics detectors, CMS has a large solenoid magnet. This allows the charge/mass ratio of particles to be determined from the curved track that they follow in the magnetic field. It is 13 m long and 6 m in diameter, and its refrigerated superconducting niobium-titanium coils were originally intended to produce a 4 T magnetic field. It was recently announced that the magnet will run at 3.8 T instead of the full design strength in order to maximize longevity.[4]

The inductance of the magnet is 14 H and the nominal current for 4 T is 19,500 A, giving a total stored energy of 2.66 GJ, equivalent to about half-a-tonne of TNT. There are dump circuits to safely dissipate this energy should the magnet quench. The circuit resistance (essentially just the cables from the power converter to the cryostat) has a value of 0.1 mΩ which leads to a circuit time constant of nearly 39 hours. This is the longest time constant of any circuit at CERN. The operating current for 3.8 T is 18,160 A, giving a stored energy of 2.3 GJ.
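The quoted energies and time constant follow directly from the inductance, current and resistance figures (E = ½ L I², τ = L/R); a quick check:

```python
L_h = 14.0           # inductance, henry
I_design = 19500.0   # A, nominal current for 4 T
I_run = 18160.0      # A, operating current for 3.8 T
R_ohm = 0.1e-3       # circuit resistance, ohm

print(0.5 * L_h * I_design ** 2 / 1e9, "GJ at 4 T")      # ~2.66
print(0.5 * L_h * I_run ** 2 / 1e9, "GJ at 3.8 T")       # ~2.31
print(L_h / R_ohm / 3600.0, "hours L/R time constant")   # ~38.9
```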

Layer 5 – The muon detectors and return yoke

To identify muons and measure their momenta, CMS uses three types of detector: drift tubes (DT), cathode strip chambers (CSC) and resistive plate chambers (RPC). The DTs are used for precise trajectory measurements in the central barrel region, while the CSCs are used in the end caps. The RPCs provide a fast signal when a muon passes through the muon detector, and are installed in both the barrel and the end caps.

Collecting and collating the data

Pattern recognition


Testing the data read-out electronics for the tracker.
New particles discovered in CMS will be typically unstable and rapidly transform into a cascade of lighter, more stable and better understood particles. Particles travelling through CMS leave behind characteristic patterns, or ‘signatures’, in the different layers, allowing them to be identified. The presence (or not) of any new particles can then be inferred.

Trigger system

To have a good chance of producing a rare particle, such as a Higgs boson, a very large number of collisions are required. Most collision events in the detector are "soft" and do not produce interesting effects. The amount of raw data from each crossing is approximately 1 MB, which at the 40 MHz crossing rate would result in 40 TB of data a second, an amount that the experiment cannot hope to store or even process properly. The trigger system reduces the rate of interesting events down to a manageable 100 per second.
To accomplish this, a series of "trigger" stages are employed. All the data from each crossing is held in buffers within the detector while a small amount of key information is used to perform a fast, approximate calculation to identify features of interest such as high energy jets, muons or missing energy. This "Level 1" calculation is completed in around 1 µs, and the event rate is reduced by a factor of about a thousand down to 50 kHz. All these calculations are done on fast, custom hardware using reprogrammable FPGAs.

If an event is passed by the Level 1 trigger all the data still buffered in the detector is sent over fibre-optic links to the "High Level" trigger, which is software (mainly written in C++) running on ordinary computer servers. The lower event rate in the High Level trigger allows time for much more detailed analysis of the event to be done than in the Level 1 trigger. The High Level trigger reduces the event rate by a further factor of about a thousand down to around 100 events per second. These are then stored on tape for future analysis.
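Putting the quoted rates and the roughly 1 MB event size together gives the data-rate budget the two trigger stages have to manage (simple multiplication of the figures above):

```python
event_size_mb = 1.0    # raw data per crossing, as quoted
crossing_rate = 40e6   # 40 MHz nominal crossing rate
l1_output = 50e3       # ~50 kHz after the Level 1 trigger, as quoted
hlt_output = 100.0     # ~100 events per second to tape, as quoted

print(event_size_mb * crossing_rate / 1e6, "TB/s before any trigger")       # 40
print(event_size_mb * l1_output / 1e3, "GB/s into the High Level trigger")  # 50
print(event_size_mb * hlt_output, "MB/s written to tape")                   # 100
```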

Data analysis

Data that has passed the triggering stages and been stored on tape is duplicated using the Grid to additional sites around the world for easier access and redundancy. Physicists are then able to use the Grid to access and run their analyses on the data.
Some possible analyses might be:
  • Looking at events with large amounts of apparently missing energy, which implies the presence of particles that have passed through the detector without leaving a signature, such as neutrinos.
  • Looking at the kinematics of pairs of particles produced by the decay of a parent, such as the Z boson decaying to a pair of electrons or the Higgs boson decaying to a pair of tau leptons or photons, to determine the properties and mass of the parent, as sketched below this list.
  • Looking at jets of particles to study the way the quarks in the collided protons have interacted.
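As a sketch of the pair-kinematics analysis mentioned above (generic four-vector arithmetic, not CMS analysis code):

```python
import math

def invariant_mass(p1, p2):
    """Invariant mass of a two-particle system; each p = (E, px, py, pz) in GeV."""
    E = p1[0] + p2[0]
    px, py, pz = (p1[i] + p2[i] for i in (1, 2, 3))
    return math.sqrt(max(E ** 2 - (px ** 2 + py ** 2 + pz ** 2), 0.0))

# Two back-to-back 45.6 GeV electrons (treated as massless) reconstruct to ~91 GeV,
# the Z boson mass, illustrating how a parent's mass peaks out of its decay products:
e_plus = (45.6, 0.0, 0.0, 45.6)
e_minus = (45.6, 0.0, 0.0, -45.6)
print(invariant_mass(e_plus, e_minus), "GeV")
```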

Milestones

1998 Construction of surface buildings for CMS begins.
2000 LEP shut down, construction of cavern begins.
2004 Cavern completed.
10 September 2008 First beam in CMS.
23 November 2009 First collisions in CMS.
30 March 2010 First 7 TeV collisions in CMS.

See also


References

  1. ^ [1]
  2. ^ CMS installs the world's largest silicon detector, CERN Courier, Feb 15, 2008
  3. ^ CMS HCAL history - CERN
  4. ^ http://iopscience.iop.org/1748-0221/5/03/T03021/pdf/1748-0221_5_03_T03021.pdf Precise mapping of the magnetic field in the CMS barrel yoke using cosmic rays

External links