Cafe neutrino meeting minutes (19/05/2011)
Present: C. Hansen, O. M. Hansen, N. Charitonidis, A. Blondel, G. Prior, E. Benedetto
- Discussion of the importance of T2K's 2.5 sigma theta13 paper
- News and Discussions (round table)
Discussion about the new theta13 values (2.5sigma):
Discussions around this plot with the added theta13 values.
Here are the slides from the presentation where these results were shown.
Alain: There are two main comments to be discussed based on this paper: the results and the consequences.
Only 2.5 sigma is an indication (at best), i.e. less than evidence.
On the other hand, there is a huge difference between finding 2.5 sigma in your data
just by chance and finding it by looking for specific values of theta13, where the cuts
were designed a long time ago and where data taking was stopped at a random point.
There is, however, one thing that is bothering: the events in Super-Kamiokande appear only at the front face
of the detector (where the beam comes in). This is worrying since the background might have a bigger probability
of occurring close to the edge of SK, since it is then closer to the rock (slide 54).
The good thing is that the energy distribution of the 6 candidate events looks perfect (slide 51).
The value of theta13 is right below the CHOOZ limit. MINOS is also expected to give new results soon.
It is important to point out that there is absolutely no way these T2K results could have been faked, for two reasons:
- the cuts were designed far in advance, and
- the earthquake stopped the data taking at a random point.
The T2K design is very sensitive and mature, and the flux is well known since we have the near detector, ND280;
the near-to-far ratio is well known thanks to the NA61 experiment.
What are the implications if theta13 is right where T2K now indicates?
One should not pay too much attention to the old sensitivity plots, since they assume old baselines
and are not optimized for large theta13.
The consequence of theta13 being large is that there are different ways of measuring CP violation.
This means that we might want to redesign the future neutrino oscillation experiments.
One example would be to study differences between the first and second oscillation maxima.
Since the fast atmospheric oscillation (nu_mu->nu_e) with small amplitude is oscillating on top of
the slow solar oscillation (nu_e->nu_mu) with large amplitude, one has to add both amplitudes and
square the sum to get the oscillation probability.
As a result, if theta13 is large, the first oscillation maximum has a small CP asymmetry,
but the second maximum has a large CP asymmetry (since by then the solar oscillation has grown).
So to compare the two oscillation maxima at one specific baseline one can use two different
energies: one energy optimized to put the first oscillation maximum at
the detector, and a second energy to put the second maximum at the detector.
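The two-amplitude argument above can be sketched numerically. This is a minimal vacuum-oscillation toy (illustrative mixing parameters of roughly the 2011 era, not T2K's analysis code or its actual fit values), using the standard approximation P = |A_atm + A_sol|^2; it shows the CP asymmetry coming out larger at the second maximum:

```python
# Hedged sketch: vacuum nu_mu -> nu_e appearance probability in the standard
# two-amplitude approximation, comparing the CP asymmetry at the first and
# second oscillation maxima. Parameter values are illustrative assumptions.
import math

th12, th23, th13 = 0.586, math.pi / 4, 0.15   # mixing angles in radians
dm21, dm31 = 7.6e-5, 2.4e-3                   # mass-squared splittings, eV^2

def prob(E, L, delta, antinu=False):
    """Approximate P(nu_mu -> nu_e) in vacuum; E in GeV, L in km."""
    D31 = 1.267 * dm31 * L / E   # fast atmospheric phase
    D21 = 1.267 * dm21 * L / E   # slow solar phase
    a_atm = math.sin(th23) * math.sin(2 * th13) * math.sin(D31)  # small amplitude
    a_sol = math.cos(th23) * math.sin(2 * th12) * math.sin(D21)  # grows with L/E
    s = -1.0 if antinu else 1.0  # delta -> -delta for antineutrinos
    # |A_atm + A_sol|^2: squares plus the CP-sensitive interference term
    return a_atm**2 + a_sol**2 + 2 * a_atm * a_sol * math.cos(D31 + s * delta)

def cp_asymmetry(E, L, delta=math.pi / 2):
    p, pbar = prob(E, L, delta), prob(E, L, delta, antinu=True)
    return (p - pbar) / (p + pbar)

L = 295.0                                    # km, a T2K-like baseline (assumption)
E1 = 1.267 * dm31 * L / (math.pi / 2)        # energy putting the 1st maximum at L
E2 = 1.267 * dm31 * L / (3 * math.pi / 2)    # energy putting the 2nd maximum at L
print("asymmetry at 1st max:", cp_asymmetry(E1, L))
print("asymmetry at 2nd max:", cp_asymmetry(E2, L))
```

At the second maximum the solar phase is three times larger, so the interference term is larger relative to the atmospheric term and the asymmetry grows, which is the point made above.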
Alain thinks that with a larger theta13 this would be possible with a not-too-far-future experiment.
This might even be a big boost for the NOvA experiment (under construction).
With a different baseline the CPV sensitivity plots will change.
For example, the black NF curve will go up for high theta13 if we have a
shorter baseline, due to a smaller matter uncertainty. So a large theta13 is not good for INO.
At NUFACT11 we will see many new CPV sensitivity plots, since there will be new results from MINOS,
new CHOOZ limits, and also global fits including these new T2K results.
From C. Hansen,
Amplitude detuning turned out not to help the bunch intensity limits for the Decay Ring,
due to emittance blow-up as soon as the octupoles stabilized the beam.
(Re)started studies of the dependence of collective effects in the DR on the pipe size.
This is important since we might need a double-bore Decay Ring.
From O. Hansen,
More studies of mapping the particle losses in the solenoids, to understand why the new magnet geometry works better.
Will finish these studies and write a big summary report.
Ole sees an increase in muon yield with energy, a bigger increase than was seen in a paper.
This needs to be studied since it is unexpected. One has to be careful which physics list
one uses in G4BeamLine.
From G. Prior,
There now exist two versions of the MARS code, one at CERN and one at BNL, which seem to give different results.
For some particles, like neutrinos, there are big differences between the new results.
The author claimed that this is within statistical errors.
Gersende wrote a paper explaining that the difference was too big to be explained by statistical errors.
The author explained that different compilers change the behaviour of the random number generator.
Gersende thinks this is a strong reason to stop using MARS,
and maybe to move completely to G4BeamLine, i.e. put the whole Neutrino Factory complex into G4BeamLine.
A son of a CERN staff member will work on the parameter lists.
Also, a summer student will work on a lattice for the muon cooling front end.
From E. Benedetto,
Gave a presentation at the Safety workshop at CERN.
There is lots of work to be done to follow the safety guidelines,
but Rob does not require full safety studies. The main effort should
go into costing, complemented by other parameters such as
physics performance and safety.
Writing reports and specifications for Argonne National Lab, because they
are working on a thin lithium curtain which we would like to use as a
target for the Production Ring (direct kinematics). We would gain in
ion production rates. There might be a problem, since the power, flow
and thickness (1 mm -> 1 cm) of the target have to be increased for the Beta Beam.
Working on space charge measurements in the PS. The injection energy into the PS planned
for the Beta Beam turned out not to be possible. So when that changes, Elena now has to
do MDs to study whether we are safe with respect to space charge. She is doing measurements in the PS
to figure out where the dangerous resonance lines are in the Qx-Qy tune diagram.
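The resonance-line mapping can be illustrated with a small sketch (not the actual PS MD analysis; the working point below is a made-up example, not the real PS tunes). A resonance line in the tune diagram satisfies m*Qx + n*Qy = p for integers m, n, p, with order |m| + |n|; the sketch lists the low-order lines passing close to a given working point:

```python
# Hedged sketch: find low-order resonance lines m*Qx + n*Qy = p (integer p)
# passing near a working point (Qx, Qy) in the tune diagram, as one would
# when mapping dangerous lines for space charge studies.
def nearby_resonances(Qx, Qy, max_order=4, tol=0.02):
    """Return (m, n, p, distance) for resonance lines within tol of (Qx, Qy)."""
    hits = []
    for m in range(0, max_order + 1):
        for n in range(-max_order, max_order + 1):
            order = abs(m) + abs(n)
            if order == 0 or order > max_order:
                continue
            if m == 0 and n <= 0:
                continue  # skip mirror duplicates of the same line
            p = round(m * Qx + n * Qy)  # nearest integer resonance
            # perpendicular distance from (Qx, Qy) to the line m*Qx + n*Qy = p
            dist = abs(m * Qx + n * Qy - p) / (m * m + n * n) ** 0.5
            if dist < tol:
                hits.append((m, n, p, dist))
    return hits

# Illustrative working point (assumption): Qy = 6.24 sits close to 4*Qy = 25
for m, n, p, d in nearby_resonances(6.21, 6.24):
    print(f"{m}*Qx + {n}*Qy = {p}   (distance {d:.3f})")
```

In practice the measurements go the other way, scanning the tunes and watching for losses, but the same line equation tells you which (m, n, p) to blame for a loss at a given (Qx, Qy).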
14th July 10am - next cafe-neutrino meeting (room tbc).
Minutes by C. Hansen