Second Hard Process
- Process Selection
- Phase Space Cuts and Scales
- Cross-section calculation
- Event information
When you have selected a set of hard processes for hadron beams, the
multiparton interactions
framework can add further interactions to build up a realistic
underlying event. These further interactions can come from a wide
variety of processes, and will occasionally be quite hard. They
do represent a realistic random mix, however, which means one cannot
predetermine what will happen. Occasionally there may be cases
where one wants to specify also the second hard interaction rather
precisely. The options on this page allow you to do just that.
flag SecondHard:generate (default = off)
Generate two hard scatterings in a collision between hadron beams.
The hardest process can be any combination of internal processes
available in the normal process selection machinery, or external
input. You must further specify which set of processes to allow for
the second hard one; see the following.
parm SecondHard:maxPDFreweight (default = 2.; minimum = 1.; maximum = 10.)
The two scatterings are first generated as if independent, but then
a weight is evaluated that takes into account how the PDF of one is
modified by the presence of the other, averaged over the two possibilities.
This weight typically is a bit below unity, by momentum conservation,
but has a tail towards larger weights from correlations between partons.
The parameter sets how large such weights can be and still be
compensated by standard hit-and-miss. A large value implies
inefficiency in the generation, while a small one means some events
will be assigned a leftover weight above unity.
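As a minimal sketch, these are ordinary settings that can be set with
Pythia::readString on a Pythia object (here called pythia, as in the
standard example main programs); at least one process collection from
the next section must also be switched on, and the maxPDFreweight
value is purely illustrative:

    pythia.readString("SecondHard:generate = on");
    pythia.readString("SecondHard:maxPDFreweight = 4.");  // illustrative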
Process Selection
In principle the whole process
selection allowed for the first process could be repeated
for the second one. However, this would probably be overkill.
Therefore a more limited set of prepackaged process collections
is made available here, which can then be combined at will.
Since the description is almost completely symmetric between the
first and the second process, you always have the possibility
to pick one of the two processes according to the complete list
of possibilities.
Here is the list of allowed process collections, to be combined at will:
flag SecondHard:TwoJets (default = off)
Standard QCD 2 → 2 processes involving gluons and
d, u, s, c, b quarks.
flag SecondHard:PhotonAndJet (default = off)
A prompt photon recoiling against a quark or gluon jet.
flag SecondHard:TwoPhotons (default = off)
Two prompt photons recoiling against each other.
flag SecondHard:Charmonium (default = off)
Production of charmonium via colour singlet and colour octet channels.
flag SecondHard:Bottomonium (default = off)
Production of bottomonium via colour singlet and colour octet channels.
flag SecondHard:SingleGmZ (default = off)
Scattering q qbar → gamma^*/Z^0, with full interference
between the gamma^* and Z^0.
flag SecondHard:SingleW (default = off)
Scattering q qbar' → W^+-.
flag SecondHard:GmZAndJet (default = off)
Scattering q qbar → gamma^*/Z^0 g and
q g → gamma^*/Z^0 q.
flag SecondHard:WAndJet (default = off)
Scattering q qbar' → W^+- g and
q g → W^+- q'.
flag SecondHard:TopPair (default = off)
Production of a top pair, either via QCD processes or via an
intermediate gamma^*/Z^0 resonance.
flag SecondHard:SingleTop (default = off)
Production of a single top, either via a t- or
an s-channel W^+- resonance.
A further process collection comes with a warning flag:
flag SecondHard:TwoBJets (default = off)
The q qbar → b bbar and g g → b bbar processes.
These are already included in the TwoJets sample above,
so it would be double-counting to include both, but we assume there
may be cases where the b subsample will be of special interest.
This subsample does not include flavour-excitation or gluon-splitting
contributions to the b rate, however, so, depending
on the topology of interest, it may or may not be a good approximation.
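As an illustrative sketch of how these flags combine in a complete
main program (the choice of single W production as the first hard
process is arbitrary, and the event count is a token number):

    #include "Pythia8/Pythia.h"
    using namespace Pythia8;

    int main() {
      Pythia pythia;
      // First hard process: single W production, one arbitrary choice.
      pythia.readString("WeakSingleBoson:ffbar2W = on");
      // Second hard process: the QCD two-jet collection from above.
      pythia.readString("SecondHard:generate = on");
      pythia.readString("SecondHard:TwoJets = on");
      // A pTmin cut keeps the jet cross section reasonably small,
      // cf. the cross-section discussion below.
      pythia.readString("PhaseSpace:pTHatMin = 20.");
      pythia.init();
      for (int iEvent = 0; iEvent < 1000; ++iEvent)
        if (!pythia.next()) continue;
      pythia.stat();  // includes the combined cross-section bookkeeping
      return 0;
    }

Since these two process collections do not overlap, the cross-section
bookkeeping here follows case (1) described further down.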
Phase Space Cuts and Scales
By default, the second hard process obeys exactly the same selection
rules for phase space cuts and
couplings and scales
as the first one does. Specifically, a pTmin cut for
2 → 2 processes would apply to the first and the second hard
process alike, and roughly half of the time the second could be
generated with a larger pT than the first. (The exact fraction
depends on the relative shapes of the two cross sections.) That is,
"first" and "second" are only used as an administrative distinction
between the two, not as a physics ordering.
Optionally it is possible to pick the mass and pT
phase space cuts separately for
the second hard interaction. The main application presumably would
be to allow a second process that is softer than the first, but still
hard. But one is also free to make the second process harder than the
first, if desired. So long as the two pT (or mass) ranges
overlap, the ordering will not be the same in all events, however.
flag PhaseSpace:sameForSecond (default = on)
By default use the same cuts for a second hard process as for the
first. If off, then instead use the mass and pT cuts below, where
relevant. (The other cuts above still remain the same.)
parm PhaseSpace:mHatMinSecond (default = 4.; minimum = 0.)
The minimum invariant mass for a second interaction, if separate.
parm PhaseSpace:mHatMaxSecond (default = -1.)
The maximum invariant mass for a second interaction, if separate.
A value below mHatMin means there is no upper limit.
parm PhaseSpace:pTHatMinSecond (default = 0.; minimum = 0.)
The minimum pT for a second interaction, if separate.
parm PhaseSpace:pTHatMaxSecond (default = -1.)
The maximum pT for a second interaction, if separate.
A value below pTHatMin means there is no upper limit.
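As a small sketch of these settings (cut values purely illustrative),
a second process softer than the first, but still hard, could be set
up with:

    // First process required above 50 GeV; second one in 20-50 GeV.
    pythia.readString("PhaseSpace:pTHatMin = 50.");
    pythia.readString("PhaseSpace:sameForSecond = off");
    pythia.readString("PhaseSpace:pTHatMinSecond = 20.");
    pythia.readString("PhaseSpace:pTHatMaxSecond = 50.");

With these non-overlapping ranges the first process is always the
harder of the two.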
Cross-section calculation
As an introduction, a brief reminder of Poissonian statistics.
Assume a stochastic process in time, for now not necessarily a
high-energy physics one, where the probability for an event to occur
at any given time is independent of what happens at other times.
Then the probability for n events to occur in a finite
time interval is
P_n = <n>^n exp(-<n>) / n!
where <n> is the average number of events. If this
number is small we can approximate exp(-<n>) = 1,
so that P_1 = <n> and
P_2 = <n>^2 / 2 = P_1^2 / 2.
Now further assume that the events actually are of two different
kinds a and b, occurring independently of each
other, such that <n> = <n_a> + <n_b>.
It then follows that the probability of having one event of type
a (or b) and nothing else is
P_1a = <n_a> (or P_1b = <n_b>).
From
P_2 = (<n_a> + <n_b>)^2 / 2 = (P_1a + P_1b)^2 / 2 =
(P_1a^2 + 2 P_1a P_1b + P_1b^2) / 2
it is easy to read off that the probability to have exactly two
events of kind a and none of b is
P_2aa = P_1a^2 / 2 whereas that of having one a
and one b is P_2ab = P_1a P_1b. Note that the
former, with two identical events, contains a factor 1/2
while the latter, with two different ones, does not. If viewed
in a time-ordered sense, the difference is that the latter can be
obtained two ways, either first an a and then a b
or else first a b and then an a.
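As a quick numerical illustration (numbers invented for this example
only): with <n_a> = 0.01 and <n_b> = 0.02,

    P_{2aa} = \langle n_a \rangle^2 / 2 = 5 \times 10^{-5} , \qquad
    P_{2ab} = \langle n_a \rangle \langle n_b \rangle = 2 \times 10^{-4} ,

so the mixed combination is four times as likely: one factor of two
from <n_b> = 2 <n_a>, and one from the two possible time orderings.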
To translate this language into cross-sections for high-energy
events, we assume that interactions can occur at different pT
values independently of each other inside inelastic nondiffractive
(sometimes equated with "minbias") events. Then the above probabilities
translate into
P_n = sigma_n / sigma_ND where sigma_ND is the
total nondiffractive cross section. Again we want to assume that
exp(-<n>) is close to unity, i.e. that the total
hard cross section above pTmin is much smaller than
sigma_ND. The hard cross section is dominated by QCD
jet production, and a reasonable precaution is to require a
pTmin of at least 20 GeV at LHC energies.
(For 2 → 1 processes such as
q qbar → gamma^*/Z^0 (→ f fbar) one can instead make a
similar cut on mass.) Then the generic equation
P_2 = P_1^2 / 2 translates into
sigma_2/sigma_ND = (sigma_1 / sigma_ND)^2 / 2 or
sigma_2 = sigma_1^2 / (2 sigma_ND).
Again different processes a, b, c, ... contribute,
and by the same reasoning we obtain
sigma_2aa = sigma_1a^2 / (2 sigma_ND),
sigma_2ab = sigma_1a sigma_1b / sigma_ND,
and so on.
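For a ballpark illustration (invented numbers of a plausible order of
magnitude, not PYTHIA output): with sigma_ND = 60 mb and hard
cross sections sigma_1a = 0.6 mb and sigma_1b = 0.3 mb above the
chosen pTmin,

    \sigma_{2aa} = \frac{(0.6~\mathrm{mb})^2}{2 \times 60~\mathrm{mb}}
      = 3~\mu\mathrm{b} , \qquad
    \sigma_{2ab} = \frac{0.6 \times 0.3}{60}~\mathrm{mb}
      = 3~\mu\mathrm{b} ,

both comfortably small relative to sigma_ND, as the derivation
assumes.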
There is one important correction to this picture: not all collisions
occur under equal conditions. Some are more central in impact
parameter, others more peripheral. This leads to a further element of
variability: central collisions are likely to have more activity
than the average, peripheral ones less. Integrated over impact
parameter, standard cross sections are recovered, but correlations
are affected by a "trigger bias" effect: if you select for events
with a hard process you favour events at small impact parameter,
which have above-average activity, and therefore also an increased
chance of further interactions. (In PYTHIA this is the origin
of the "pedestal effect", i.e. that events with a hard interaction
have more underlying activity than the level found in minimum-bias
events.)
When you specify a matter overlap profile in the multiparton-interactions
scenario, such an enhancement/depletion factor f_impact is
chosen event-by-event and can be averaged during the course of the run.
As an example, the double Gaussian form used in Tune A gives
approximately <f_impact> = 2.5. In general, the more
uneven the matter distribution, the higher the <f_impact>.
The pT0 parameter value also has an impact, even if it is
less important over a realistic range of values; note that it implies
that <f_impact> is energy-dependent. The origin of this
effect is as follows. A lower pT0 implies more MPI activity
at all impact parameters, so that the nondiffractive cross section
sigma_ND increases, or equivalently the proton size does. But if
sigma_ND is fixed by data, then the input radius of the matter
overlap profile (not explicitly specified but implicitly adjusted at
initialization) has to be shrunk so that the output value can stay
constant. This means that the proton matter is more closely packed,
and therefore <f_impact> goes up.
The above equations therefore have to be modified to
sigma_2aa = <f_impact> sigma_1a^2 / (2 sigma_ND),
sigma_2ab = <f_impact> sigma_1a sigma_1b / sigma_ND.
Experimentalists often instead use the notation
sigma_2ab = sigma_1a sigma_1b / sigma_eff,
from which we see that PYTHIA "predicts"
sigma_eff = sigma_ND / <f_impact>.
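Continuing the numerical illustration above: with the Tune A value
<f_impact> = 2.5 and the invented sigma_ND = 60 mb, this would
correspond to

    \sigma_{\mathrm{eff}} = \frac{60~\mathrm{mb}}{2.5} = 24~\mathrm{mb} ,

and the illustrative sigma_2aa and sigma_2ab above would each be
enhanced from 3 to 7.5 μb.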
When the generation of multiparton interactions is switched off it is
not possible to calculate <f_impact> and therefore
it is set to unity.
When this recipe is to be applied to calculate
actual cross sections, it is useful to distinguish three cases,
depending on which sets of processes are selected for
the first and the second interaction.
(1) The processes a for the first interaction and
b for the second one have no overlap at all.
For instance, the first could be TwoJets and the
second TwoPhotons. In that case, the two interactions
can be selected independently, and cross sections tabulated
for each separate subprocess in the two above classes. At the
end of the run, the cross sections in a should be multiplied
by <f_impact> sigma_1b / sigma_ND to bring them to
the correct overall level, and those in b by
<f_impact> sigma_1a / sigma_ND.
(2) Exactly the same processes a are selected for the
first and second interaction. In that case it works as above,
with a = b, and it is only necessary to multiply by an
additional factor 1/2. A compensating factor of 2
is automatically obtained for picking two different subprocesses,
e.g. if TwoJets is selected for both interactions,
then the combination of the two subprocesses q qbar → g g
and g g → g g can trivially be obtained two ways.
(3) The list of subprocesses partly but not completely overlap.
For instance, the first process is allowed to contain a
or c and the second b or c, where
there is no overlap between a and b. Then,
when an independent selection for the first and second interaction
both pick one of the subprocesses in c, half of those
events have to be thrown, and the stored cross section reduced
accordingly. Considering the four possible combinations of first
and second process, this gives a reduced first-process cross section
sigma'_1 = sigma_1a + sigma_1c * (sigma_2b + sigma_2c/2) /
(sigma_2b + sigma_2c)
with the factor 1/2 for the sigma_1c sigma_2c term.
At the end of the day, this sigma'_1 should be multiplied
by the normalization factor
f_1norm = <f_impact> (sigma_2b + sigma_2c) / sigma_ND,
here without a factor 1/2 (or else it would have been
double-counted). This gives the correct
(sigma_2b + sigma_2c) * sigma'_1 = sigma_1a * sigma_2b
+ sigma_1a * sigma_2c + sigma_1c * sigma_2b + sigma_1c * sigma_2c/2.
The second interaction can be handled in exact analogy.
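A numerical cross-check of this bookkeeping, in arbitrary units: with
sigma_1a = sigma_1c = sigma_2b = sigma_2c = 1,

    \sigma'_1 = 1 + 1 \cdot \frac{1 + 1/2}{1 + 1} = 1.75 ,

so that (sigma_2b + sigma_2c) * sigma'_1 = 2 * 1.75 = 3.5, in
agreement with the four-term sum 1 + 1 + 1 + 1/2 above.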
For the considerations above it is assumed that the phase space cuts
are the same for the two processes. It is possible to set the mass and
transverse momentum cuts differently, however. This changes nothing
for processes that already are different. For two collisions of the
same type it is partly a matter of interpretation what is intended.
If we consider the case of the same process in two non-overlapping
phase space regions, most likely we want to consider them as
separate processes, in the sense that we expect a factor 2 relative
to Poissonian statistics from either of the two hardest processes
populating either of the two phase space regions. In total we are
therefore led to adopt the same strategy as in case (3) above:
only in the overlapping part of the two allowed phase space regions
could two processes be identical and thus appear with a 1/2 factor,
elsewhere the two processes are never identical and do not
include the 1/2 factor. We reiterate, however, that the case of
partly but not completely overlapping phase space regions for one and
the same process is tricky, and not to be used without prior
deliberation.
The listing obtained with pythia.stat()
already contains these correction factors, i.e. cross sections
are for the occurrence of two interactions of the specified kinds.
There is not a full tabulation of the matrix of all the possible
combinations of a specific first process together with a specific
second one (but the information is there for the user to do that,
if desired). Instead pythia.stat() shows this
matrix projected onto the set of processes and associated cross
sections for the first and the second interaction, respectively.
Up to statistical fluctuations, these two sections of the
pythia.stat() listing both add up to the same
total cross section for the event sample.
There is a further special feature to be noted for this listing,
and that is the difference between the number of "selected" events
and the number of "accepted" ones. Here is how that comes about.
Originally the first and second process are selected completely
independently. The generation (in)efficiency is reflected in the
different number of initially tried events for the first and second
process, leading to the same number of selected events. While
acceptable on their own, the combination of the two processes may
nonetheless be unacceptable. It may be that the two processes added
together use more energy-momentum than kinematically allowed, or,
even if not, are disfavoured when the PYTHIA approach to provide
correlated parton densities is applied. Alternatively, referring
to case (3) above, it may be because half of the events should
be thrown for identical processes. Taken together, it is these
effects that reduce the event number from "selected" to "accepted".
(A further reduction may occur if a user hook rejects some events.)
It is allowed to use external Les Houches Accord input for the
hardest process, and then pick an internal one for the second hardest.
In this case PYTHIA does not have access to your thinking concerning
the external process, and cannot know whether it overlaps with the
internal one or not. (External events q qbar' → e+ nu_e could
agree with the internal W ones, or be a W' resonance
in a BSM scenario, to give one example.) Therefore the combined cross
section is always based on the scenario (1) above. Corrections for
correlated parton densities are included also in this case, however.
That is, an external event that takes a large fraction of the incoming
beam momenta stands a fair chance of being rejected when it has to be
combined with another hard process. For this reason the "selected" and
"accepted" event numbers are likely to disagree.
In the cross section calculation above, the sigma'_1
cross sections are based on the number of accepted events, while
the f_1norm factor is evaluated based on the cross sections
for selected events. That way the suppression by correlations
between the two processes does not get to be double-counted.
The pythia.stat() listing contains two final
lines, indicating the summed cross sections sigma_1sum and
sigma_2sum for the first and second set of processes, at
the "selected" stage above, plus information on the sigma_ND
and <f_impact> used. The total cross section
generated is related to these by
<f_impact> * (sigma_1sum * sigma_2sum / sigma_ND) *
(n_accepted / n_selected)
with an additional factor of 1/2 for case (2) above.
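Schematically, reusing the invented numbers of the earlier
illustrations (sigma_1sum = 0.6 mb, sigma_2sum = 0.3 mb,
sigma_ND = 60 mb, <f_impact> = 2.5) and assuming that, say, 90%
of the selected events survive to the accepted stage, this would give

    \sigma = 2.5 \times \frac{0.6 \times 0.3}{60} \times 0.9~\mathrm{mb}
      = 6.75~\mu\mathrm{b} .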
The error quoted for the cross section of a process is a combination
in quadrature of the error on this process alone with the error on
the normalization factor, including the error on
<f_impact>. As always it is a purely statistical one
and of course hides considerably bigger systematic uncertainties.
Warning: the calculational machinery above has not (yet)
been implemented for the case that the two interactions are to be
associated with different impact-parameter profiles, as is the case
for MultipartonInteractions:bProfile = 4, i.e. when the
radius depends on the x value. Results for the double
cross section therefore cannot be trusted in this case.
Event information
Normally the process event record only contains the
hardest interaction, but in this case also the second hardest
is stored there. If both of them are 2 → 2 ones, the
first would be stored in lines 3 - 6 and the second in 7 - 10.
For both, status codes 21 - 29 would be used, as for a hardest
process. Any resonance decay chains would occur after the two
main processes, to allow normal parsing. The beams in lines 1 and 2
only appear in one copy. This structure is echoed in the
full event record.
Most of the properties accessible by the pythia.info
methods refer to the first process, whether that happens to be the
hardest or not. The code and pT scale of the second process
are accessible by info.codeMPI(1) and info.pTMPI(1), however.
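As a small sketch (assuming the pythia object of the earlier example,
inside its event loop):

    // After a successful pythia.next(): the second hard process is
    // bookkept as MPI number 1.
    int    code2 = pythia.info.codeMPI(1);  // its process code
    double pT2   = pythia.info.pTMPI(1);    // its pT scale
    // For two 2 -> 2 processes, the outgoing partons of the first sit
    // in process-record lines 5 and 6, those of the second in 9 and 10.
    cout << " second process code = " << code2
         << ", pT = " << pT2 << " GeV" << endl;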
The sigmaGen() and sigmaErr() methods provide
the cross section and its error for the event sample as a whole,
combining the information from the two hard processes as described
above. In particular, the former should be used to give the
weight of the generated event sample. The statistical error estimate
is somewhat cruder and gives a larger value than the
subprocess-by-subprocess one employed in pythia.stat(),
but this number is anyway less relevant, since systematic errors
are likely to dominate.
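A sketch of how these might be read out at the end of a run (same
assumed pythia object, after the event loop):

    // Cross section for the combined two-hard-process sample, and its
    // statistical error, both in mb; suitable as the sample weight.
    double sigma  = pythia.info.sigmaGen();
    double dSigma = pythia.info.sigmaErr();
    cout << " sigma = " << sigma << " +- " << dSigma << " mb" << endl;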