Comments on LHCb-PAPER-2012-019

Abstract
--------
- Add the final states that you look at (so K*0->Kpi and phi->KK).
- Some of us would probably find it more natural to present the inverse ratio: the statistical error on the ratio is limited by the Bs->phigamma yield, so the statistical error on the inverse ratio is more Gaussian.
- Remove "which is the most precise measurement to date". It is ok for the conclusion but does not work in the abstract: it immediately suggests that your Acp measurement is not the most precise and therefore of little interest. On the other hand, it would make sense to say that these measurements agree with expectations.

Page 1
- line 33: replace "bending power" by "field integral".
- line 33: replace 3 stations by 12 layers (or just remove it).

Page 2
- line 52: "filter the collisions that will be recorded" sounds odd. Since it is fairly obvious that the HLT is used to select events, we suggest removing the second part of this sentence.
- line 58: "IP chi2" is undefined (and not obvious, since you need to specify how the reference point is obtained).

Page 3
- line 87: Your phi and K* mass windows are quite narrow, which means that you miss a considerable part of the lineshape. (For the phi, this is probably close to 20%; see the illustrative estimate below.) You need to state what models are used for the lineshapes in the MC, because this is relevant when you compute the efficiency ratio. In the current paper you provide no evidence that you look at resonant KK and Kpi: all you measure is Kpigamma with Kpi around the K* mass, etc. It would be nice to show in the paper the mass distributions of the phi and K* (B background subtracted, e.g. with sPlots) and to compare those with simulation. We could not find these plots in the analysis note, where we think they are mandatory in any case. In the J/psiphi analysis we have seen a significant (~1 MeV) shift in the phi mass peak between data and MC. This would affect your evaluation of the selection efficiency. Have you looked at this?
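To back up the ~20% figure, here is a minimal back-of-the-envelope sketch (ours, not from the paper or the note). It assumes a +-10 MeV window around the phi mass purely for illustration; the actual window, a relativistic lineshape with a mass-dependent width, and the detector resolution will all change the exact number:

```python
# Minimal sketch (ours): fraction of a simple non-relativistic Breit-Wigner
# phi -> K+K- lineshape lost outside a narrow mass window. The +-10 MeV
# half-width is an assumption for illustration, not the value in the paper.
import math

M_PHI = 1019.46     # MeV, phi mass (PDG, approximate)
GAMMA = 4.27        # MeV, phi width (PDG, approximate)
HALF_WINDOW = 10.0  # MeV, assumed mass-window half-width

# For a non-relativistic Breit-Wigner the fraction inside m0 +- x is analytic:
frac_inside = (2 / math.pi) * math.atan(2 * HALF_WINDOW / GAMMA)
print(f"missed fraction: {1 - frac_inside:.1%}")  # ~13% for these inputs
# A relativistic Breit-Wigner (mass-dependent width), the KK threshold and the
# detector resolution all modify this, plausibly towards the ~20% quoted above.
```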
Page 5
- line 152: Do we understand correctly that the only reason you do a simultaneous fit is that you want to fix the Bd-Bs mass difference? If so, does this really gain you anything? It is not wrong, but given the large signals it seems a bit of an overkill. Did you take into account the correlation between the errors on the yields when you computed the ratio?

Page 7
- line 193: "to reproduce the signal distributions in data". Distributions of what?
- line 193: You do not need the sPlot technique to do this, and without further explanation it does not help to mention sPlot here. We propose to remove the sentence about the sPlot and to replace "signal" by "background subtracted" in "reproduce the signal distributions seen in data".

Page 8
- equ 4: remove the part of the equation with Gamma, because you do not measure Gamma.
- equ 4: replace B0 with "K+pi-gamma" etc., because you ignore possible opposite-sign corrections. (Somebody will draw you a diagram for B0->K-pi+gamma.)
- figure 2 and beyond: We are puzzled by your treatment of the CP asymmetry in the background. We think that there are two valid scenarios:
  - you fix the asymmetry in the background using some external source, and then assign a systematic for fixing it;
  - you float it (possibly with some constraint), in which case the uncertainty is accounted for in your statistical error.
  Judging from the description, you seem to do both. You float the background to get the result in line 225. Judging from the figure, you see relatively large asymmetries in the background, but judging from your statement in line 222 these are actually consistent with zero. Then you do more fits, fixing the background asymmetry to different values, to see how this affects the result, and you assign the average as a correction and the spread as an additional error. It is an interesting cross-check, but it is not clear what systematic you are probing: the ranges over which you vary the asymmetries may be totally incompatible with your data. Finally, this leads to a double counting of errors: if you fix the background asymmetry, your statistical error on the asymmetry will shrink, so there is overlap between the 0.7% in line 231 and the 1.7% in line 225. Note that the bias is probably just a consequence of the fact that you fit a non-zero asymmetry in the background while your background variations are centred around zero: since you claim no significant asymmetry in the background, the bias must be irrelevant! We suggest the following: strengthen your statement that the background asymmetries are compatible with zero. You could for example do this by performing a single fit in which they are all fixed to zero and looking at the change in likelihood and in the value of Araw. If you are comfortable with the result, you write in the paper: "The backgrounds in figure 2 exhibit non-zero asymmetries, which are nevertheless compatible with zero. As a cross-check we also extract Araw with a fit in which they are fixed to zero and find no significant change." Then remove entirely the discussion in lines 226 to 235 and the corresponding systematic.

Page 9
- line 241: "No dependence on the kinematics of the Kpi system has been found in its detection asymmetry." To most readers this will look odd, given that the K+/K- cross-section difference depends strongly on momentum. We suggest removing this sentence and replacing the previous one with "It was found that for Kpi pairs in the kinematic range relevant for our analysis the detection asymmetry is ...".
- line 250: Your explanation of how you extract eps(t) is incomplete: you say you use an sPlot, but the sPlot alone does not give you the efficiency unless you use something else. We suggest replacing this with: "The time acceptance eps(t) has been extracted from data using the decay time distribution of background-subtracted signal events and the known Bd lifetime." (or something equivalent; the sketch below illustrates the idea).
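To make the suggested wording concrete, a minimal sketch (ours, with toy inputs; the function and variable names are illustrative, not from the analysis): histogram the background-subtracted decay times and divide by the yield expected in each bin from a pure exponential with the known Bd lifetime.

```python
# Minimal sketch (ours, toy inputs) of the suggested eps(t) extraction: divide
# the background-subtracted decay-time histogram by the yield expected per bin
# from a pure exponential with the known Bd lifetime. Names are illustrative.
import numpy as np

TAU_BD = 1.519  # ps, known Bd lifetime (approximate world average)

def time_acceptance(t_sig, w_sig, edges):
    """eps(t) per bin, up to an overall constant: observed (background-subtracted,
    e.g. sWeighted) yield divided by the expected pure-exponential yield."""
    observed, _ = np.histogram(t_sig, bins=edges, weights=w_sig)
    # Integral of the exponential PDF over each bin: exp(-t_lo/tau) - exp(-t_hi/tau)
    expected = np.exp(-edges[:-1] / TAU_BD) - np.exp(-edges[1:] / TAU_BD)
    expected *= observed.sum() / expected.sum()  # normalise to the total yield
    return observed / expected

# Toy usage: an exponential with an artificial turn-on at low decay times
rng = np.random.default_rng(1)
t = rng.exponential(TAU_BD, 100_000)
t = t[rng.random(t.size) < 1.0 - np.exp(-t / 0.5)]  # fake low-t inefficiency
eps = time_acceptance(t, np.ones(t.size), np.linspace(0.0, 8.0, 21))
```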
Page 10
- line 262: We are a bit confused here. At first we thought that "Delta A_M" is the difference between the field-up and field-down results; but then its error should have been roughly a factor of two bigger than that on Araw (more like 3.4% than 0.2%). We had to read the paper more than once to understand that "Delta A_M" is actually the difference in Araw between two different ways of combining the field-up and field-down data, namely simply the difference between the results in lines 225 and 260. In fact, it is unclear what this number means: assuming that the number of signal decays per fb is not time dependent, the result in line 225 should already be lumi-weighted. As far as we understand, there is no reason that the difference between the results in lines 260 and 225 has anything to do with the magnet polarity. (It also seems odd to use the difference between two methods as a correction: why not simply use the second method?) We strongly suggest that the procedure to evaluate this systematic be changed. First, we find it mandatory to report the results for field-up and field-down data separately in this paper. Then, to combine those two results you can follow either of the following two approaches, but not both:
  1. You assume that there is an acceptance effect, but that it is exactly opposite in field-up and field-down data. In this case you use a straight average (not a lumi-weighted average) to cancel the systematic; see the sketch after these comments. Your uncertainty then comes from the fact that the recipe (exact cancellation) is possibly incomplete. To estimate how wrong you could be, you can take the up-down difference and multiply it by a scale factor that somehow encodes your belief in the cancellation. (See also the Ds production asymmetry analysis.)
  2. You assume that there is no effect and just combine the results with their statistical errors. (This will give you the result in line 225.)
- table 3: If you stick to using the numbers for the background model and the magnet polarity as corrections in table 3 (which we strongly discourage), then they should already appear in equation 6.
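To spell out the arithmetic behind approach 1 (notation ours, not the paper's): if a detector-induced asymmetry delta flips sign exactly with the magnet polarity, the straight average cancels it, while a lumi-weighted average leaves a residual bias proportional to the luminosity imbalance:

```latex
% Notation ours: A is the true raw asymmetry, \delta a polarity-odd detector bias.
\begin{align*}
  A_\uparrow = A + \delta, \qquad A_\downarrow = A - \delta
  &\;\Rightarrow\;
  \tfrac{1}{2}\,(A_\uparrow + A_\downarrow) = A
  \quad\text{(straight average: exact cancellation)}\\[4pt]
  \frac{L_\uparrow A_\uparrow + L_\downarrow A_\downarrow}{L_\uparrow + L_\downarrow}
  &= A + \delta\,\frac{L_\uparrow - L_\downarrow}{L_\uparrow + L_\downarrow}
  \quad\text{(lumi-weighted average: residual bias)}
\end{align*}
```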