Multiparton Interactions
- Main variables
- Initialization speed-up
- Further variables
- Technical notes
The starting point for the multiparton interactions physics scenario in
PYTHIA is provided by [Sjo87]. Recent developments have
included a more careful study of flavour and colour correlations,
junction topologies and the relationship to beam remnants
[Sjo04], interleaving with initial-state radiation
[Sjo05], making use of transverse-momentum-ordered
initial- and final-state showers, with the extension to fully
interleaved evolution covered in [Cor10a]. A framework to
handle rescattering is described in [Cor09].
A big unsolved issue is how the colour of all these subsystems is
correlated. For sure there is a correlation coming from the colour
singlet nature of the incoming beams, but in addition final-state
colour rearrangements may change the picture. Indeed such extra
effects appear necessary to describe data, e.g. on
<pT>(n_ch). A simple implementation of colour
rearrangement is found as part of the
beam remnants description.
Main variables
Matching to hard process
The maximum pT to be allowed for multiparton interactions is
related to the nature of the hard process itself. It involves a
delicate balance between not double-counting and not leaving any
gaps in the coverage. The best procedure may depend on information
only the user has: how the events were generated and mixed (e.g. with
Les Houches Accord external input), and how they are intended to be
used. Therefore a few options are available, with a sensible default
behaviour.
mode
MultipartonInteractions:pTmaxMatch
(default = 0
; minimum = 0
; maximum = 3
)
Way in which the maximum scale for multiparton interactions is set
to match the scale of the hard process itself.
option
0 : (i) if the final state of the hard process
(not counting subsequent resonance decays) contains only quarks
(u, d, s, c, b), gluons and photons then pT_max
is chosen to be the factorization scale for internal processes
and the scale
value for Les Houches input;
(ii) if not, interactions are allowed to go all the way up
to the kinematical limit.
The reasoning is that processes of the former kind are generated by
the multiparton-interactions machinery and so would double-count hard
processes if allowed to overlap the same pT range,
while no such danger exists in the latter case.
option
1 : always use the factorization scale for an internal
process and the scale
value for Les Houches input,
i.e. the lower value. This should avoid double-counting, but
may leave out some interactions that ought to have been simulated.
option
2 : always allow multiparton interactions up to the
kinematical limit. This will simulate all possible event topologies,
but may lead to double-counting.
option
3 : as option 0, but for case (i) the pT_max
is chosen to be half the summed pT of the final-state
particles. This may be more relevant than option 0 for multiparton
final states, where the factorization scale could be associated with
the lowest pT of any parton, as appropriate for parton
showers, but not for MPI. The scale for no double-counting and no gaps
should rather be associated with the "core" 2 → 2 hard
process, which here is approximated by sum pT / 2.
Note: If a "second hard" process is present, the two
are analyzed separately, both for the default option 0 and for option 3. It is
enough that one of them consists only of quarks, gluons and photons
to restrict the pT range. The maximum for MPI is then set
by the hard interaction with the lowest scale.
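As an illustration of how the matching is steered in a run, here is a minimal
stand-alone sketch in the standard Pythia 8 main-program style; the beam
energy, process choice and phase-space cut are arbitrary examples, not
recommendations.

  // Minimal sketch of selecting the MPI matching option; the process
  // and cut values below are illustrative only.
  #include "Pythia8/Pythia.h"
  using namespace Pythia8;

  int main() {
    Pythia pythia;
    pythia.readString("Beams:eCM = 13000.");
    pythia.readString("HardQCD:all = on");
    pythia.readString("PhaseSpace:pTHatMin = 50.");
    // Default matching: MPI up to the factorization scale for a pure
    // QCD/photon final state (case (i)), else up to the kinematical limit.
    pythia.readString("MultipartonInteractions:pTmaxMatch = 0");
    pythia.init();
    for (int iEvent = 0; iEvent < 100; ++iEvent) pythia.next();
    return 0;
  }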
Cross-section parameters
The rate of interactions is determined by
parm
MultipartonInteractions:alphaSvalue
(default = 0.130
; minimum = 0.06
; maximum = 0.25
)
The value of alpha_strong at m_Z. Default value is
picked equal to the one used in CTEQ 5L.
The actual value is then regulated by the running to the pT^2 scale
at which it is evaluated.
mode
MultipartonInteractions:alphaSorder
(default = 1
; minimum = 0
; maximum = 3
)
The order at which alpha_strong runs at scales away from
m_Z.
option
0 : zeroth order, i.e. alpha_strong is kept
fixed.
option
1 : first order, which is the normal value.
option
2 : second order. Since other parts of the code do
not go to second order there is no strong reason to use this option,
but there is also nothing wrong with it.
option
3 : third order, with the same comment as for second
order. The expression in the 2006 RPP is used here.
QED interactions are regulated by the alpha_electromagnetic
value at the pT^2 scale of an interaction.
mode
MultipartonInteractions:alphaEMorder
(default = 1
; minimum = -1
; maximum = 1
)
The running of alpha_em used in hard processes.
option
1 : first-order running, constrained to agree with
StandardModel:alphaEMmZ
at the Z^0 mass.
option
0 : zeroth order, i.e. alpha_em is kept
fixed at its value at vanishing momentum transfer.
option
-1 : zeroth order, i.e. alpha_em is kept
fixed, but at StandardModel:alphaEMmZ
, i.e. its value
at the Z^0 mass.
Note that the choices of alpha_strong and alpha_em
made here override the ones implemented in the normal process machinery,
but only for the interactions generated by the
MultipartonInteractions
class.
In addition there is the possibility of a global rescaling of
cross sections (which could not easily be accommodated by a
changed alpha_strong, since alpha_strong runs)
parm
MultipartonInteractions:Kfactor
(default = 1.0
; minimum = 0.5
; maximum = 4.0
)
Multiply all cross sections by this fixed factor.
The processes used to generate multiparton interactions form a subset
of the standard library of hard processes. The input is slightly
different from the standard hard-process machinery, however,
since incoming flavours, the alpha_strong value and most
of the kinematics are already fixed when the process is called.
It is possible to regulate the set of processes that are included in the
multiparton-interactions framework.
mode
MultipartonInteractions:processLevel
(default = 3
; minimum = 0
; maximum = 3
)
Set of processes included in the machinery.
option
0 : only the simplest 2 → 2 QCD processes
between quarks and gluons, giving no new flavours, i.e. dominated by
t-channel gluon exchange.
option
1 : also 2 → 2 QCD processes giving new flavours
(including charm and bottom), i.e. proceeding through s-channel
gluon exchange.
option
2 : also 2 → 2 processes involving one or two
photons in the final state, s-channel gamma
boson exchange and t-channel gamma/Z^0/W^+-
boson exchange.
option
3 : also charmonium and bottomonium production, via
colour singlet and colour octet channels.
Cross-section regularization
There are two complementary ways of regularizing the small-pT
divergence, a sharp cutoff and a smooth dampening. These can be
combined as desired, but it makes sense to coordinate with how the
same issue is handled in spacelike
showers. Actually, by default, the parameters defined here are
used also for the spacelike showers, but this can be overridden.
Regularization of the divergence of the QCD cross section for
pT → 0 is obtained by a factor pT^4 / (pT0^2 + pT^2)^2,
and by using an alpha_s(pT0^2 + pT^2). An energy dependence
of the pT0 choice is introduced by two further parameters,
so that pT0Ref is the pT0 value for the reference
CM energy, pT0Ref = pT0(ecmRef).
Warning: if a large pT0 is picked for multiparton
interactions, such that the integrated interaction cross section is
below the nondiffractive inelastic one, this pT0 will
automatically be scaled down to cope.
The actual pT0 parameter used at a given CM energy scale,
ecmNow, is obtained from a power law or a logarithmic
parametrization. The former is the default for hadron beams and
the latter for photon-photon collisions.
mode
MultipartonInteractions:pT0parametrization
(default = 0
; minimum = 0
; maximum = 1
)
Choice of pT0 parametrization.
option
0 : Power law dependence on ecmNow:
pT0 = pT0(ecmNow) = pT0Ref * (ecmNow / ecmRef)^ecmPow
option
1 : Logarithmic dependence on ecmNow:
pT0 = pT0(ecmNow) = pT0Ref + ecmPow * log (ecmNow / ecmRef)
where pT0Ref, ecmRef and ecmPow are the
three parameters below. In case of photon-photon collisions the corresponding
parameters are set in Photoproduction.
parm
MultipartonInteractions:pT0Ref
(default = 2.28
; minimum = 0.5
; maximum = 10.0
)
The pT0Ref scale in the above formula.
Note: pT0Ref is one of the key parameters in a
complete PYTHIA tune. Its value is intimately tied to a number of other
choices, such as that of colour flow description, so unfortunately it is
difficult to give an independent meaning to pT0Ref.
parm
MultipartonInteractions:ecmRef
(default = 7000.0
; minimum = 1.
)
The ecmRef reference energy scale introduced above.
parm
MultipartonInteractions:ecmPow
(default = 0.215
; minimum = 0.0
; maximum = 0.5
)
The ecmPow energy rescaling pace introduced above.
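To make the energy rescaling concrete, the following stand-alone sketch
evaluates both parametrizations with the default hadron-beam parameters quoted
above; the 13 TeV test energy is an arbitrary example, and in a real
photon-photon run the logarithmic form would instead use the Photoproduction
parameters.

  // Sketch of the two pT0(ecmNow) parametrizations above, using the
  // default pT0Ref, ecmRef and ecmPow; the test energy is arbitrary.
  #include <cmath>
  #include <cstdio>

  int main() {
    double pT0Ref = 2.28, ecmRef = 7000., ecmPow = 0.215, ecmNow = 13000.;
    double pT0pow = pT0Ref * std::pow(ecmNow / ecmRef, ecmPow);      // option 0
    double pT0log = pT0Ref + ecmPow * std::log(ecmNow / ecmRef);     // option 1
    std::printf("pT0 at 13 TeV: power law = %.2f GeV, logarithmic = %.2f GeV\n",
                pT0pow, pT0log);   // roughly 2.60 and 2.41 GeV, respectively
    return 0;
  }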
Alternatively, or in combination, a sharp cut can be used.
parm
MultipartonInteractions:pTmin
(default = 0.2
; minimum = 0.1
; maximum = 10.0
)
Lower cutoff in pT, below which no further interactions
are allowed. Normally pT0 above would be used to provide
the main regularization of the cross section for pT → 0,
in which case pTmin is used mainly for technical reasons.
It is possible, however, to set pT0Ref = 0 and use
pTmin to provide a step-function regularization, or to
combine them in intermediate approaches. Currently pTmin
is taken to be energy-independent.
Gösta Gustafson has proposed (private communication, unpublished)
that the amount of screening, as encapsulated in the pT0
parameter, fluctuates from one event to the next. Specifically,
high-activity events are more likely to lead to interactions at large
pT scales, but the high activity simultaneously leads to a
larger screening of interactions at smaller pT. Such a scenario
can approximately be simulated by scaling up the pT0 by a
factor sqrt(n), where n is the number of interactions
considered so far, including the current one. That is, for the first
interaction the dampening factor is pT^4 / (pT0^2 + pT^2)^2,
for the second pT^4 / (2 pT0^2 + pT^2)^2, for the third
pT^4 / (3 pT0^2 + pT^2)^2, and so on. Optionally the scheme
may also be applied to ISR emissions. For simplicity the same
alpha_s(pT0^2 + pT^2) is used throughout. Note that, in this
scenario, the pT0 scale must be chosen lower than in the normal case
to begin with, since it is later increased back up. Also note that the
idea of this scenario is to offer an alternative to colour
reconnection for understanding the rise of <pT>(n_ch),
so that the amount of colour reconnection should be reduced.
mode
MultipartonInteractions:enhanceScreening
(default = 0
; minimum = 0
; maximum = 2
)
Choice to activate the above screening scenario, i.e. an increasing
effective pT0 for consecutive interactions.
option
0 : No activity-dependent screening, i.e. pT0
is fixed.
option
1 : The pT0 scale is increased as a function
of the number of MPI's, as explained above. ISR is not affected,
but note that, if SpaceShower:samePTasMPI
is on,
then MultipartonInteractions:pT0Ref
is used also for ISR,
which may or may not be desirable.
option
2 : Both MPI and ISR influence and are influenced by the
screening. That is, the dampening is reduced based on the total number
of MPI and ISR steps considered so far, including the current one.
This dampening is implemented both for MPI and for ISR emissions,
for the latter provided that SpaceShower:samePTasMPI
is on
(default).
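The activity-dependent dampening described above is fully specified by the
quoted factor, so it can be written out directly; the helper below is purely
illustrative and not a PYTHIA method.

  // Sketch of the screening factor for interaction number n (n = 1 for the
  // first, hardest interaction): pT^4 / (n pT0^2 + pT^2)^2, i.e. pT0 is
  // effectively scaled up by sqrt(n). Illustrative helper only.
  #include <cmath>

  double screeningFactor(double pT, double pT0, int n) {
    double pT2   = pT * pT;
    double denom = n * pT0 * pT0 + pT2;
    return pT2 * pT2 / (denom * denom);
  }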
Impact-parameter dependence
The choice of impact-parameter dependence is regulated by several
parameters. The ones listed here refer to nondiffractive topologies
only, while their equivalents for diffractive events are put in the
Diffraction description. Note that
there is currently no bProfile = 4
option for diffraction.
Other parameters are assumed to agree between diffractive and
nondiffractive topologies.
mode
MultipartonInteractions:bProfile
(default = 3
; minimum = 0
; maximum = 4
)
Choice of impact parameter profile for the incoming hadron beams.
option
0 : no impact parameter dependence at all.
option
1 : a simple Gaussian matter distribution;
no free parameters.
option
2 : a double Gaussian matter distribution,
with the two free parameters coreRadius and
coreFraction.
option
3 : an overlap function (i.e. the convolution of
the matter distributions of the two incoming hadrons) of the form
exp(- b^expPow), where expPow is a free
parameter.
option
4 : a Gaussian matter distribution with a width
that varies according to the selected x value of an interaction,
1. + a1 log (1 / x), where a1 is a free parameter.
Note that once b has been selected for the hard process,
it remains fixed for the remainder of the evolution. Also note that
the machinery for a
second hard process
is not adapted to calculate the impact-parameter enhancement factor
for this option.
parm
MultipartonInteractions:coreRadius
(default = 0.4
; minimum = 0.1
; maximum = 1.
)
When assuming a double Gaussian matter profile, bProfile = 2,
the inner core is assumed to have a radius that is a factor
coreRadius smaller than the rest.
parm
MultipartonInteractions:coreFraction
(default = 0.5
; minimum = 0.
; maximum = 1.
)
When assuming a double Gaussian matter profile, bProfile = 2,
the inner core is assumed to have a fraction coreFraction
of the matter content of the hadron.
parm
MultipartonInteractions:expPow
(default = 1.85
; minimum = 0.4
; maximum = 10.
)
When bProfile = 3 it gives the power of the assumed overlap
shape exp(- b^expPow). Since the convolution of two Gaussians
is another Gaussian, the overlap shape obtained for expPow = 2
is the same as that obtained for a Gaussian matter distribution,
bProfile = 1. The limit expPow → infinity
corresponds to no impact parameter dependence at all, bProfile = 0.
The overlap shape that would be obtained for an exponential matter
distribution cannot be represented exactly on this form, but a similar
variance (for fixed normalisation and average b) is obtained
for expPow = 1.4. The double Gaussian matter profiles that
are obtained for bProfile = 2 generally correspond to the range
between expPow = 1 and expPow = 2, with the former
representing scenarios with pronounced "hot spots" and consequently
very significant fluctuations (high variance). Note that for small
expPow the program becomes slow and unstable, so the min
limit must be respected.
parm
MultipartonInteractions:a1
(default = 0.15
; minimum = 0.
; maximum = 2.
)
When bProfile = 4, this gives the a1 constant in the
Gaussian width. When a1 = 0., this reduces back to the single
Gaussian case.
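The two less conventional profiles above are fully defined by the quoted
formulae; the helpers below just evaluate the bProfile = 3 overlap shape and
the bProfile = 4 width factor, with names chosen here for illustration and
normalizations left out.

  // Illustrative evaluation of the impact-parameter formulae quoted above.
  #include <cmath>

  // bProfile = 3: overlap of the two incoming hadrons ~ exp(-b^expPow).
  double overlapShape(double b, double expPow = 1.85) {
    return std::exp(-std::pow(b, expPow));
  }

  // bProfile = 4: Gaussian width factor 1 + a1 * log(1/x) for the selected
  // x value; a1 = 0 recovers the single fixed-width Gaussian.
  double gaussWidthFactor(double x, double a1 = 0.15) {
    return 1. + a1 * std::log(1. / x);
  }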
mode
MultipartonInteractions:bSelScale
(default = 1
; minimum = 1
; maximum = 3
)
The selection of impact parameter is related to the scale of the
hard process: the harder this scale is, the more central the collision.
In practice this centrality saturates quickly, however, and beyond
a scale of roughly 20 GeV very little changes. (The relevant
quantity is that the QCD jet cross section above the scale should be
a tiny fraction of the total cross section.) In 2 → 1 and
2 → 2 processes traditional scale choices work fine, but
ambiguities arise for higher multiplicities, in particular when the
scale is used for matching between the multiparton matrix elements
and parton showers. Then the event scale may
be chosen as that of a very low-pT parton, i.e. suggesting a
peripheral collision, while the much harder other partons instead
would favour a central collision. Therefore the default here is to
override whatever scale value has been read in from an LHEF, say.
Notice that the scale used here is decoupled from the maximum scale
for MPIs (MultipartonInteractions:pTmaxMatch
).
option
1 :
Use the mass for a 2 → 1 process.
For 2 → n, n > 1 processes order the particles in
falling mmT = m + mT and then let the scale be
(mmT_1 + mmT_2)/2 + mmT_3/3 + mmT_4/4 + ... + mmT_n/n.
This is constructed always to be above m1, and to assign
decreasing importance to softer particles that are less likely to
be associated with the hard process. (A small numerical sketch of this
scale choice is given after this list.)
option
2 : Use the scale
parameter of the event.
option
3 : use the same scale as chosen by the rules for
MultipartonInteractions:pTmaxMatch
.
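As announced above, here is a small numerical sketch of the option 1
event-scale definition; it only restates the quoted formula in code form.

  // Sketch of the bSelScale = 1 scale for a 2 -> n, n > 1 process:
  // order the final-state particles in falling mmT = m + mT and form
  // (mmT_1 + mmT_2)/2 + mmT_3/3 + ... + mmT_n/n. (For 2 -> 1 the mass
  // itself is used.) Illustrative code only.
  #include <algorithm>
  #include <functional>
  #include <vector>

  double bSelScaleOption1(std::vector<double> mmT) {
    std::sort(mmT.begin(), mmT.end(), std::greater<double>());
    double scale = 0.5 * (mmT[0] + mmT[1]);
    for (size_t i = 2; i < mmT.size(); ++i) scale += mmT[i] / double(i + 1);
    return scale;
  }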
Rescattering
It is possible that a parton may rescatter, i.e. undergo a further
interaction subsequent to the first one. The machinery to model this
kind of physics has only recently become fully operational
[Cor09], and is therefore not yet so well explored.
The rescattering framework has ties with other parts of the program,
notably with the beam remnants.
flag
MultipartonInteractions:allowRescatter
(default = off
)
Switch to allow rescattering of partons; on/off = true/false.
Note: the rescattering framework has not yet been implemented
for the MultipartonInteractions:bProfile = 4
option,
and can therefore not be switched on in that case.
Warning: use with caution, since the machinery is still not
so well tested.
flag
MultipartonInteractions:allowDoubleRescatter
(default = off
)
Switch to allow rescattering of partons, where both incoming partons
have already rescattered; on/off = true/false. Is only used if
MultipartonInteractions:allowRescatter
is switched on.
Warning: currently there is no complete implementation that
combines it with shower evolution, so you must use
PartonLevel:ISR = off
and PartonLevel:FSR = off
.
If not, a warning will be issued and double rescattering will not be
simulated. The rate also comes out to be much lower than for single
rescattering, so to first approximation it can be neglected.
mode
MultipartonInteractions:rescatterMode
(default = 0
; minimum = 0
; maximum = 4
)
Selection of which partons rescatter against unscattered partons
from the incoming beams A and B, based on their rapidity value
y in the collision rest frame. Here ySep is
shorthand for MultipartonInteractions:ySepRescatter
and
deltaY for MultipartonInteractions:deltaYRescatter
,
defined below. The description is symmetric between the two beams,
so only one case is described below.
option
0 : only scattered partons with y > 0
can collide with unscattered partons from beam B.
option
1 : only scattered partons with y > ySep
can collide with unscattered partons from beam B.
option
2 : the probability for a scattered parton to be considered
as a potential rescatterer against unscattered partons in beam B increases
linearly from zero at y = ySep - deltaY to unity at
y = ySep + deltaY.
option
3 : the probability for a scattered parton to be considered
as a potential rescatterer against unscattered partons in beam B increases
with y according to
(1/2) * (1 + tanh( (y - ySep) / deltaY)).
option
4 : all partons are potential rescatterers against both
beams.
parm
MultipartonInteractions:ySepRescatter
(default = 0.
)
used for some of the MultipartonInteractions:rescatterMode
options above, as the rapidity for which a scattered parton has a 50%
probability to be considered as a potential rescatterer.
A ySep > 0 generally implies that some central partons cannot
rescatter at all, while a ySep < 0 instead allows central
partons to scatter against either beam.
parm
MultipartonInteractions:deltaYRescatter
(default = 1.
; minimum = 0.1
)
used for some of the MultipartonInteractions:rescatterMode
options above, as the width of the rapidity transition region, where the
probability rises from zero to unity that a scattered parton is considered
as a potential rescatterer.
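The two smooth options are defined entirely by the formulae quoted above; this
small sketch evaluates the rescatterMode = 2 and 3 probabilities for a
scattered parton of rapidity y, with ySep and deltaY as just
defined (the function names are illustrative).

  // Sketch of the smooth rescattering-probability profiles quoted above.
  #include <algorithm>
  #include <cmath>

  // rescatterMode = 2: linear rise from 0 at y = ySep - deltaY
  // to 1 at y = ySep + deltaY.
  double rescatterProbMode2(double y, double ySep, double deltaY) {
    double p = (y - (ySep - deltaY)) / (2. * deltaY);
    return std::max(0., std::min(1., p));
  }

  // rescatterMode = 3: (1/2) * (1 + tanh((y - ySep) / deltaY)).
  double rescatterProbMode3(double y, double ySep, double deltaY) {
    return 0.5 * (1. + std::tanh((y - ySep) / deltaY));
  }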
Initialization speed-up
The initialization of the MPI framework is the most time-consuming part
of the whole initialization step. This is because the jet cross section
has to be integrated numerically for the given pT0 choice, and
shown to be larger than the total cross section of the relevant event
class (nondiffractive or diffractive). If not, the pT0 value has
to be reduced. Sudakov factors also need to be tabulated for usage during
the generation. If the collision energy is kept fixed and diffraction is
not simulated, then the time usage is still hardly noticeable, at or below
the order of a second. For diffraction, however, the diffractive masses
vary between events, which requires a setup in a grid of masses for
subsequent interpolation. This is similar for nondiffractive events, if the
collision energy is allowed to vary. In total, this can give up to a
hundred process+energy/mass points to consider, requiring tens of seconds.
It can become problematic if, in addition, several different Pythia objects
are required to handle many different beam-particle combinations, or if
Beams:allowIDAswitch
is used to handle several
beam-particle combinations inside a single Pythia object.
It is therefore possible to save the outcome of one initialization step
on file, and then read it back in for subsequent runs. This can save
significant time, notably for short test runs. A word of warning: all relevant
settings must be the same in the two runs. It is up to the user to
ensure that this is the case; it would be next-to-impossible to catch
any possible conflict. The most obvious aspects that must agree are
incoming beam particles and (maximum) collision energy, all MPI-related
settings and PDFs. Aspects that could be different without causing problems
include ISR and FSR, and all hadronization steps.
This trick also works for the initialization of several particles.
Specifically, when the Beams:allowIDAswitch
switch is
on, some 20 different incoming beam particles are initialized
in succession, and all of these can be saved and reused with the methods
below. Since we now need a few hundred seconds to initialize all of them,
the time savings can be considerable for repeated use.
mode
MultipartonInteractions:reuseInit
(default = 0
; minimum = 0
; maximum = 3
)
Action taken with respect to using MPI initialization data as above.
option
0 : current run is self-contained.
option
1 : MPI initialization is done as usual, but afterwards
the results of this initialization are saved on file.
option
2 : initialization data is read in from a file, saved
from a previous initialization, thereby saving time. If the file is not
found, initialization fails.
option
3 : as option 2, but if the file is not found, it will be
generated and saved after normal initialization.
word
MultipartonInteractions:initFile
(default = pp14000.mpi
)
The file name used to store or read MPI initialization data. It is up
to the user to pick a suitable name (including path if relevant) for the
case at hand.
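A typical usage is sketched below: the first run with these lines performs the
full MPI initialization and saves it, while any later run with identical
relevant settings reads the file back instead; the file name is an arbitrary
example.

  // Sketch of reusing MPI initialization data between runs; the settings
  // are those documented above, the file name is an arbitrary example.
  #include "Pythia8/Pythia.h"
  using namespace Pythia8;

  int main() {
    Pythia pythia;
    pythia.readString("Beams:eCM = 13000.");
    // Read the file if it exists, else initialize normally and save it.
    pythia.readString("MultipartonInteractions:reuseInit = 3");
    pythia.readString("MultipartonInteractions:initFile = pp13000.mpi");
    pythia.init();
    // ... event loop as usual ...
    return 0;
  }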
Another possible speed-up, in some contexts, is to symmetrize the
description between particles and antiparticles, so that e.g.
pi^+ p and pi^- p are handled identically in the
MPI initialization, and thus could use the same initialization file.
This is used for the Beams:allowIDAswitch = on
option,
to limit the number of cases that need to be covered, but it can also be used
in other contexts.
flag
MultipartonInteractions:setAntiSame
(default = off
)
The cross sections for particle-particle and particle-antiparticle
collisions are symmetrized at initialization. For the perturbative
cross section this is achieved by symmetrizing the PDFs of one
particle between quarks and antiquarks. For the nonperturbative
cross sections this is done by averaging two separate calls. If
either of the two particles is its own antiparticle the procedure
can be skipped. In the actual event generation the proper PDFs are
used for real parton-parton collisions, but combined with the
symmetrized Sudakov factor table and nondiffractive cross section.
Further variables
These should normally not be touched. Their only function is for
cross-checks.
mode
MultipartonInteractions:nQuarkIn
(default = 5
; minimum = 0
; maximum = 5
)
Number of allowed incoming quark flavours in the beams; a change
to 4 would thus exclude b and bbar as incoming
partons, etc.
mode
MultipartonInteractions:nSample
(default = 1000
; minimum = 100
)
The allowed pT range is split (unevenly) into 100 bins,
and in each of these the interaction cross section is evaluated in
nSample random phase space points. The full integral is used
at initialization, and the differential one during the run as a
"Sudakov form factor" for the choice of the hardest interaction.
A larger number implies increased accuracy of the calculations.
Technical notes
Relative to the articles mentioned above, not much has happened.
The main news is a technical one, that the phase space of the
2 → 2 (massless) QCD processes is now sampled in
dy_3 dy_4 dpT^2, where y_3 and y_4 are
the rapidities of the two produced partons. One can show that
(dx_1 / x_1) * (dx_2 / x_2) * d(tHat) = dy_3 * dy_4 * dpT^2
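For reference, the standard massless 2 → 2 kinematics behind this change of
variables reads, in the same notation,
x_1 = (pT / E_cm) * (e^y_3 + e^y_4),   x_2 = (pT / E_cm) * (e^-y_3 + e^-y_4),
sHat = x_1 * x_2 * E_cm^2,   tHat = - pT^2 * (1 + e^(y_4 - y_3)),
uHat = - pT^2 * (1 + e^(y_3 - y_4)),
and the Jacobian of the map (x_1, x_2, tHat) → (y_3, y_4, pT^2) then gives
the identity above.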
Furthermore, since cross sections are dominated by the "Rutherford"
one of t-channel gluon exchange, which is enhanced by a
factor of 9/4 for each incoming gluon, effective structure functions
are defined as
F(x, pT2) = (9/4) * xg(x, pT2) + sum_i xq_i(x, pT2)
With this technical shift of factors 9/4 from cross sections to parton
densities, a common upper estimate of
d(sigmaHat)/d(pT2) < pi * alpha_strong^2 / pT^4
is obtained.
In fact this estimate can be reduced by a factor of 1/2 for the
following reason: for any configuration (y_3, y_4, pT2) also
one with (y_4, y_3, pT2) lies in the phase space. Not both
of those can enjoy being enhanced by the tHat → 0
singularity of
d(sigmaHat) propto 1/tHat^2.
Or if they are, which is possible with identical partons like
q q → q q and g g → g g, each singularity comes
with half the strength. So, when integrating/averaging over the two
configurations, the estimated d(sigmaHat)/d(pT2) drops.
Actually, it drops even further, since the naive estimate above is
based on
(4/9) * (1 + (uHat/sHat)^2) < 8/9 < 1
The 8/9 value would be approached for tHat → 0, which
implies sHat >> pT2 and thus a heavy parton-distribution
penalty, while parton distributions are largest for
tHat = uHat = -sHat/2, where the above expression
evaluates to 5/9. A fudge factor is therefore introduced to go the
final step, so it can easily be modified when further non-Rutherford
processes are added, or should parton distributions change significantly.
At initialization, it is assumed that
d(sigma)/d(pT2) < d(sigmaHat)/d(pT2) * F(x_T, pT2) * F(x_T, pT2)
* (2 y_max(pT))^2
where the first factor is the upper estimate as above, the next two are
the parton density sum evaluated at y_3 = y_4 = 0, so that
x_1 = x_2 = x_T = 2 pT / E_cm, where the product is expected
to be maximal, and the final one is the phase space for
-y_max < y_{3,4} < y_max.
The right-hand side expression is scanned logarithmically in y,
and an N is determined such that it always lies below
N/pT^4.
To describe the dampening of the cross section at pT → 0 by
colour screening, the actual cross section is multiplied by a
regularization factor (pT^2 / (pT^2 + pT0^2))^2, and the
alpha_s is evaluated at a scale pT^2 + pT0^2,
where pT0 is a free parameter of the order of 2 - 4 GeV.
Since pT0 can be energy-dependent, an ansatz
pT0(ecm) = pT0Ref * (ecm/ecmRef)^ecmPow
is used, where ecm is the current CM frame energy,
ecmRef is an arbitrary reference energy where pT0Ref
is defined, and ecmPow gives the energy rescaling pace. For
technical reasons, also an absolute lower pT scale pTmin,
by default 0.2 GeV, is introduced. In principle, it is possible to
recover older scenarios with a sharp pT cutoff by setting
pT0 = 0 and letting pTmin be a larger number.
The above scanning strategy is then slightly modified: instead of
an upper estimate N/pT^4 one of the form
N/(pT^2 + r * pT0^2)^2 is used. At first glance, r = 1
would seem to be fixed by the form of the regularization procedure,
but this does not take into account the nontrivial dependence on
alpha_s, parton distributions and phase space. A better
Monte Carlo efficiency is obtained for r somewhat below unity,
and currently r = 0.25 is hardcoded.
In the generation a trial pT2 is then selected according to
d(Prob)/d(pT2) = (1/sigma_ND) * N/(pT^2 + r * pT0^2)^2 * ("Sudakov")
For the trial pT2, a y_3 and a y_4 are then
selected, and incoming flavours are picked according to the respective
F(x_i, pT2); the cross section is then evaluated for this
flavour combination. The ratio of trial/upper estimate gives the
probability of survival.
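Since the overestimate integrates analytically in pT2, the falling
sequence of trial scales can be generated with the usual veto-algorithm step;
the sketch below shows only that step, with the full PDF+ME evaluation
represented by a hypothetical user-supplied function dSigmaDpT2
(returning d(sigma)/d(pT2)), and the flavour and rapidity selection omitted.

  // Sketch of the trial-pT2 generation with the veto algorithm, using the
  // analytic overestimate d(Prob)/d(pT2) = (1/sigmaND) * N/(pT2 + r*pT02)^2.
  // dSigmaDpT2 stands in for the full PDF+ME evaluation of d(sigma)/d(pT2);
  // flavour and rapidity selection are omitted for brevity.
  #include <cmath>
  #include <functional>
  #include <random>

  double nextPT2(double pT2prev, double N, double sigmaND, double pT02,
                 double r, double pT2min,
                 std::function<double(double)> dSigmaDpT2, std::mt19937& rng) {
    std::uniform_real_distribution<double> flat(0., 1.);
    double c   = r * pT02;
    double pT2 = pT2prev;
    while (true) {
      // Solve "Sudakov"(pT2, previous pT2) = R for the next, lower trial scale.
      double R   = flat(rng);
      double inv = 1. / (pT2 + c) - (sigmaND / N) * std::log(R);
      pT2 = 1. / inv - c;
      if (pT2 < pT2min) return 0.;   // no further interaction above pTmin
      // Accept with probability (true cross section) / (overestimate);
      // the 1/sigmaND factor cancels in the ratio.
      double overEst = N / std::pow(pT2 + c, 2);
      if (flat(rng) < dSigmaDpT2(pT2) / overEst) return pT2;
    }
  }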
Actually, to profit from the factor 1/2 mentioned above, the cross
section for the combination with y_3 and y_4
interchanged is also tried, which corresponds to exchanging tHat
and uHat, and the average formed, while the final kinematics
is given by the relative importance of the two.
Furthermore, since large y values are disfavoured by dropping
PDFs, a factor
WT_y = (1 - (y_3/y_max)^2) * (1 - (y_4/y_max)^2)
is evaluated, and used as a survival probability before the more
time-consuming PDF+ME evaluation, with surviving events given a
compensating weight 1/WT_y.
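In code form this preselection is a standard hit-or-miss step with a
compensating weight; the helper below is purely illustrative.

  // Sketch of the rapidity preselection described above: accept a trial
  // (y_3, y_4) with probability WT_y before the costly PDF+ME evaluation,
  // and give surviving configurations the compensating weight 1/WT_y.
  #include <cmath>
  #include <random>

  bool preselectY(double y3, double y4, double yMax, double& weight,
                  std::mt19937& rng) {
    std::uniform_real_distribution<double> flat(0., 1.);
    double wtY = (1. - std::pow(y3 / yMax, 2)) * (1. - std::pow(y4 / yMax, 2));
    if (flat(rng) > wtY) return false;   // cheap rejection
    weight = 1. / wtY;                   // compensating weight
    return true;
  }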
An impact-parameter dependence is also allowed. Based on the hard
pT scale of the first interaction, an enhancement/depletion
factor is picked, which multiplies the rate of subsequent interactions.
Parton densities are rescaled and modified to take into account the
energy-momentum and flavours kicked out by already-considered
interactions.