1. Introduction | What is Sherpa
2. Getting started | A guide to getting started with Sherpa
3. Cross section | Sherpa’s total inclusive cross section
4. Command line options | Sherpa’s command line options
5. Input structure | How to specify parameters for a Sherpa run
6. Parameters | The complete list of parameters
7. ME-PS merging | The Sherpa method of merging matrix elements and parton showers
8. Tips and Tricks | Advanced usage tips
9. Customization | Extending Sherpa
10. Examples | Examples to illustrate some of Sherpa’s features
11. Getting help | What to do if you have questions about Sherpa
12. Authors | Authors of Sherpa
13. Copying | Your rights and freedoms
A. References | Bibliography
B. Index
Sherpa is a Monte Carlo event generator for the Simulation of High-Energy Reactions of PArticles in lepton-lepton, lepton-photon, photon-photon, lepton-hadron and hadron-hadron collisions. This document provides information to help users understand and apply Sherpa for their physics studies. The event generator is introduced in broad terms, and the installation and running of the program are outlined. The various options and parameters specifying the program are compiled, and their meanings are explained. This document does not aim to give a complete description of the physics content of Sherpa. To this end, the authors refer the reader to the original publication, [Gle08b].
1.1 Introduction to Sherpa | Intro
1.2 Basic structure | Descriptions of modules within Sherpa
Sherpa [Gle08b] is a Monte Carlo event generator that provides complete hadronic final states in simulations of high-energy particle collisions. The produced events may be passed into detector simulations used by the various experiments. The entire code has been written in C++, like its competitors Herwig++ [Bah08] and Pythia 8 [Sjo07].
Sherpa simulations can be performed for the following types of collisions: lepton-lepton, lepton-photon, photon-photon, lepton-hadron and hadron-hadron.
The list of physics processes that come with Sherpa covers particle production at tree level in the Standard Model and in models beyond the Standard Model: the complete set of Feynman rules for its Minimal Supersymmetric extension according to [Ros89],[Ros95] has been implemented, including general mixing matrices for inter-generational squark and slepton mixing. Among other interaction models, the ADD model of Large Extra Dimensions has been made available, too [Gle03a]. Furthermore, anomalous gauge couplings [Hag86], a model with an extended Higgs sector [Ded08], and a version of the Two-Higgs-Doublet Model are available. The Sherpa program owes this versatility to the inbuilt matrix-element generators, AMEGIC++ and Comix, and to its phase-space generator Phasic [Kra01], which automatically calculate and integrate tree-level amplitudes for the implemented models. This feature enables Sherpa to be used as a cross-section integrator and parton-level event generator as well. This aspect has been extensively tested, see e.g. [Gle03b],[Hag05].
As a second key feature, Sherpa provides an implementation of the merging approach of [Hoe09]. This algorithm yields improved descriptions of multijet production processes, which copiously appear at lepton-hadron colliders like HERA [Car09], or at hadron-hadron colliders like the Tevatron and the LHC [Kra04],[Kra05],[Gle05],[Hoe09a]. An older approach, implemented in previous versions of Sherpa and known as the CKKW technique [Cat01],[Kra02], has been compared in great detail in [Alw07] with other approaches, such as the MLM merging prescription [Man01] as implemented in Alpgen [Man02], Madevent [Ste94],[Mal02a], or Helac [Kan00],[Pap05], and the CKKW-L prescription [Lon01],[Lav05] of Ariadne [Lon92].
This manual contains all information necessary to get started with Sherpa as quickly as possible. By reading it, users should be able to set up the program according to their needs for studying various physics aspects. Therefore, all switches and options that have been provided are listed. It is explained how to use them, how Sherpa can be run in different modes and how the results and output of Sherpa can be interpreted. For external code that can be linked, corresponding references are given and users are encouraged to cite them accordingly.
On the other hand, the physics of Sherpa and its underlying structure and coding principles are not detailed in this manual. For this, readers are encouraged to refer to the original work of the authors. Also, whenever justified, Sherpa users are kindly asked to cite Sherpa's original publication [Gle08b]. Moreover, the authors strongly recommend studying the manuals and/or the many excellent publications by the other event generator authors on different aspects of event generation and physics at collider experiments.
This manual is organized as follows: in Basic structure the modular structure intrinsic to Sherpa is introduced. Getting started contains information about and instructions for the installation of the package. There is also a description of the steps that are needed to run Sherpa and generate events. The Input structure is then discussed, and the ways in which Sherpa can be steered are explained. All parameters and options are discussed in Parameters. Advanced Tips and Tricks are detailed, and some options for Customization are outlined for those more familiar with Sherpa. There is also a short description of the different Examples provided with Sherpa.
It should be stressed that the construction of a Monte Carlo program requires a number of implicit assumptions, approximations and simplifications of complicated situations. Potential bugs and other shortcomings of the authors may also be included. The results of event generators, independent of their quality, should therefore always be verified and cross-checked with results obtained by the programs of other authors.
The construction of the
Sherpa
program has been pursued in a modular
way. It fully reflects the paradigm of Monte Carlo event generation of
factorizing the simulation into well defined phases. Accordingly, each
module encapsulates a different aspect of event generation for
high-energy particle reactions. It resides within its own namespace
and is located in its own subdirectory of the same name. The main
module called SHERPA
steers the interplay of all modules – or
phases – and the actual generation of the events.
Altogether, the following modules are currently distributed with the
Sherpa
framework:
This is the toolbox for all other modules. Since the Sherpa framework does not rely on CLHEP etc., ATOOLS contains classes with mathematical tools like vectors and matrices, organization tools such as read-in or write-out devices, and physics tools like particle data or classes for the event record.
In this module some general methods for the evaluation of helicity amplitudes have been accumulated. They are used in AMEGIC++, the EXTRA_XS module, and the new matrix-element generator Comix. This module also contains helicity amplitudes for some generic matrix elements that are, e.g., used by HADRONS++. METOOLS also contains a simple library of tensor integrals which is used in the PHOTONS++ matrix-element corrections.
This module manages the treatment of the initial beam spectra for different colliders. The three options which are currently available include a monochromatic beam, which requires no extra treatment, photon emission in the Equivalent Photon Approximation (EPA) and - for the case of an electron collider - laser backscattering off the electrons, leading to photonic initial states.
The PDF module provides access to various parton density functions (PDFs) for the proton and the photon. In addition, it hosts an interface to the LHAPDF package, which makes a full wealth of PDFs available. An (analytical) electron structure function is supplied in the PDF module as well.
This module is responsible for setting up the physics model for a simulation run. It comprises the initialization of particle properties, basic physics parameters (coupling constants, mixing angles, etc.) and the set of available interaction vertices (Feynman rules). By now, there exist explicit implementations of the Standard Model (SM), its Minimal Supersymmetric extension (MSSM), the ADD model of large extra dimensions, and a comprehensive set of operators parametrizing anomalous triple and quartic electroweak gauge boson couplings.
In this module a (limited) collection of analytic expressions for simple 2->2 processes within the SM is provided, together with classes embedding them into the Sherpa framework. This also includes methods used for the definition of the starting conditions for parton-shower evolution, such as colour connections and the hard scale of the process. The classes for phase-space integration are not included here; they are located in the module Phasic, since they are needed by AMEGIC++ and Comix as well.
AMEGIC++ [Kra01] is Sherpa's original matrix-element generator. It employs the method of helicity amplitudes [Kle85],[Bal92] and works as a generator which generates generators: during the initialization run, the matrix elements for a given set of processes, as well as their specific phase-space mappings, are created by AMEGIC++. Corresponding C++ source code is written to disk and compiled by the user using the makelibs script. The produced libraries are linked to the main program automatically in the next run and used to calculate cross sections and to generate weighted or unweighted events. AMEGIC++ has been tested for multi-particle production in the Standard Model [Gle03b]. Its MSSM implementation has been validated in [Hag05].
Comix is a multi-leg tree-level matrix element generator, based on the color dressed Berends-Giele recursive relations [Duh06] . It employs a new algorithm to recursively compute phase-space weights and can be run in multithreaded mode to make better use of multicore processors and multiprocessor machines [Gle08] , see also [Gle08a] . The module is a useful supplement to older matrix element generators like AMEGIC++ in the high multiplicity regime. Due to the usage of colour sampling it is particularly suited for an interface with parton shower simulations and can hence be easily employed for the ME-PS merging within Sherpa. It is Sherpa’s default large multiplicity matrix element generator for the Standard Model.
All base classes dealing with the Monte Carlo phase-space integration are located in this module. For the evaluation of the initial-state (laser backscattering, initial-state radiation) and final-state integrals, the adaptive multi-channel method of [Kle94] ,[Ber94] is used by default together with a Vegas optimization [Lep80] of the single channels. In addition, final-state integration accomplished by Rambo [Kle85a] , Sarge [Dra00] and HAAG [Ham02] is supported.
This is the module hosting Sherpa’s default parton shower, which was published in [Sch07a] . The corresponding shower model was originally proposed in [Nag05] , [Nag06] . It relies on the factorisation of real-emission matrix elements in the CS subtraction framework [Cat96b] , [Cat02] . There exist four general types of CS dipole terms that capture the complete infrared singularity structure of next-to-leading order QCD amplitudes. In the large-N_C limit, the corresponding splitter and spectator partons are always adjacent in colour space. The dipole functions for the various cases, taken in four dimensions and averaged over spins, are used as shower splitting kernels.
AMISIC++ contains classes for the simulation of multiple parton interactions according to [Sjo87] . In Sherpa the treatment of multiple interactions has been extended by allowing for the simultaneous evolution of an independent parton shower in each of the subsequent (semi-)hard collisions. The beam–beam remnants are organized such that partons which are adjacent in colour space are also adjacent in momentum space. The corresponding classes for beam remnant handling reside in the PDF and SHERPA modules.
AHADIC++ is Sherpa's hadronization package for translating the partons (quarks and gluons) into primordial hadrons, to be further decayed in HADRONS++. The algorithm is based on the cluster fragmentation ideas presented in [Got82],[Got83],[Web83],[Got86] and implemented in the Herwig family of event generators. It should be noted, though, that the Sherpa version, based on [Win03], indeed differs from the original versions.
HADRONS++ is the module for simulating hadron and tau-lepton decays. The resulting decay products respect full spin correlations (if desired). Several matrix elements and form-factor models have been implemented, such as the Kühn-Santamaría model, form-factor parametrizations from Resonance Chiral Theory for the tau and form factors from heavy quark effective theory or light cone sum rules for hadron decays.
The PHOTONS++ module holds routines to add QED radiation to hadron and tau-lepton decays. This has been achieved by an implementation of the YFS algorithm [Yen61] . The structure of PHOTONS++ is such that the formalism can be extended to scattering processes and to a systematic improvement to higher orders of perturbation theory [Sch08] . The application of PHOTONS++ therefore accounts for corrections that usually are added by the application of PHOTOS [Bar93] to the final state.
Finally, SHERPA is the steering module that initializes, controls and evaluates the different phases during the entire process of event generation. All routines for the combination of truncated showers and matrix elements, which are independent of the specific matrix element and parton shower generator are found in this module.
The actual executable of the
Sherpa
generator can be found in the
subdirectory <prefix>/bin/
and is
called
Sherpa
. To run the program, input files have to be
provided in the current working directory or elsewhere by specifying
the corresponding path, see Input structure. All output files are then written to this
directory as well.
2.1 Installation | How to install Sherpa
2.2 Running Sherpa | How to run the event generator |
Sherpa is distributed as a tarred and gzipped file named
Sherpa-<version>.tar.gz
, and can be unpacked in the current
working directory with
|
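As a sketch, the unpacking step is a standard tar invocation. The version number below is a placeholder, and a dummy archive is created first so that the snippet runs on its own; for a real installation you would download the actual Sherpa-&lt;version&gt;.tar.gz instead.

```shell
# Create a stand-in tarball so this sketch is self-contained
# (placeholder version number; not an actual Sherpa release):
mkdir -p Sherpa-1.2.0 && touch Sherpa-1.2.0/README
tar -czf Sherpa-1.2.0.tar.gz Sherpa-1.2.0 && rm -r Sherpa-1.2.0

# The actual unpacking step described in the text:
tar -xzf Sherpa-1.2.0.tar.gz
ls Sherpa-1.2.0
```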
To guarantee successful
installation, the following tools should have been made available on
the system: make
, autoconf
, automake
and
libtool
. Furthermore, a C++ and FORTRAN compiler must be
provided. Compilation and installation proceed through the following commands
|
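A minimal sketch of the build, following the standard autotools convention; it must be run inside the unpacked source directory, and the prefix path here is purely illustrative:

```shell
./configure --prefix=$HOME/Sherpa   # --prefix is optional; defaults to the current working directory
make
make install
```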
If not specified differently, the directory structure after installation is organized as follows
$(prefix)/bin
Sherpa executable and scripts
$(prefix)/include
headers for process library compilation
$(prefix)/lib
basic libraries
$(prefix)/share
PDFs, Decaydata, fallback run cards
The installation directory $(prefix)
can be specified by using the
./configure --prefix /path/to/installation/target
directive and defaults
to the current working directory.
If Sherpa has to be moved to a different directory after the installation, one has to set the following environment variables for each run:
SHERPA_INCLUDE_PATH=$newprefix/include/SHERPA-MC
SHERPA_SHARE_PATH=$newprefix/share/SHERPA-MC
SHERPA_LIBRARY_PATH=$newprefix/lib/SHERPA-MC
LD_LIBRARY_PATH=$SHERPA_LIBRARY_PATH:$LD_LIBRARY_PATH
Sherpa can be interfaced with various external packages, e.g. HepMC, for event output, or LHAPDF, for PDFs. For this to work, the user has to pass the appropriate commands to the configure step. This is achieved as shown below:
|
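For instance, pointing Sherpa to external HepMC2 and LHAPDF installations might look as follows. The option names follow the Sherpa 1.x configure script and the paths are placeholders, so verify both against the output of ./configure --help:

```shell
./configure --enable-hepmc2=/path/to/HepMC2 --enable-lhapdf=/path/to/LHAPDF
```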
For a complete list of possible configuration options run ‘./configure --help’.
The Sherpa package has successfully been compiled, installed and tested on SuSE, RedHat / Scientific Linux and Debian / Ubuntu Linux systems using the GNU C++ compiler versions 3.2, 3.3, 3.4, 4.0, 4.1, 4.2, 4.3 and 4.4 as well as on Mac OS X 10 using the GNU C++ compiler version 4.0. In all cases the GNU FORTRAN compiler g77 or gfortran has been employed. Note that GCC version 2.96 is not supported.
If you have multiple compilers installed on your system, you can use shell environment variables to specify which of these are to be used. A list of the available variables is printed with
|
in the Sherpa top level directory and looking at the last lines. Depending on the shell you are using, you can set these variables e.g. with export (bash) or setenv (csh). Examples:
|
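A sketch in bash syntax; the variable names CXX, CC and FC are the usual autoconf compiler variables, so cross-check them against the configure output on your system:

```shell
# bash: select specific compilers before running ./configure
export CXX=g++
export CC=gcc
export FC=gfortran
# csh users would instead write, e.g.:  setenv CXX g++
```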
Installation on MacOS is supported at least in all Sherpa versions > 1.1.2. Before that, there might have been problems on the newer MacOS versions or architectures (10.5, Intel). The following issues have come up on Mac installations before, so please be aware of them:
If you want to run autoreconf
or (g)libtoolize, you have to make sure that you have a recent version
of GNU libtool (>=1.5.22 has been tested). Don't confuse this with
the native non-GNU libtool which is installed in /usr/bin/libtool
and is of no use! Also make sure that your autotools (autoconf >= 2.61,
automake >= 1.10 have been tested) are of recent versions. All this should
not be necessary, though, if you only run configure.
otool -L bin/Sherpa
The Sherpa
executable resides in the directory <prefix>/bin/
where <prefix>
denotes the path to the Sherpa installation
directory. The way a particular simulation will be accomplished is
defined by several parameters, which can all be listed in a
common file, or data card (Parameters can be
alternatively specified on the command line; more details are given
in Input structure).
This steering file is called Run.dat
and some example setups
(i.e. Run.dat
files) are distributed with the current version
of Sherpa. They can be found in the directory <prefix>/share/SHERPA-MC/Examples/
,
and descriptions of some of their key features can be found in the section
Examples.
Please note: It is not in general possible to reuse run cards from previous Sherpa versions. Often there are small changes in the parameter syntax of the run cards from one version to the next. These changes are documented in our manuals. In addition, always use the newer Hadron.dat and Decaydata directories (and reapply any changes which you might have applied to the old ones), see HADRONS++.
The very first step in running Sherpa
is therefore to adjust all parameters to the needs of the
desired simulation. The details for properly doing this are given in
Parameters. In this section, the focus is on the main
issues for a successful operation of Sherpa. This is illustrated by
discussing and referring to the parameter settings that come in the run card
./Examples/Tevatron_WJets/Run.dat
. This is a simple run card
created to show the basics of how to operate Sherpa. It should be
stressed that this run-card relies on many of Sherpa’s default settings,
and, as such, the user should understand those settings before using it to
look at physics. For more information on the settings and parameters in
Sherpa, see Parameters, and for more
examples see the Examples section.
Central to any Monte Carlo simulation is the choice of the hard
processes that initiate the events. These hard processes are
described by matrix elements. In Sherpa,
the selection of processes happens in the (processes)
part of the steering file.
Only a few
2->2
reactions have been hard-coded. They are available in the EXTRA_XS module.
The more usual way to compute matrix elements is to employ one of Sherpa’s
automatic tree-level generators, AMEGIC++ and Comix, see Basic structure.
If no matrix-element generator is selected, using the ME_SIGNAL_GENERATOR
tag, then Sherpa will use whichever generator is capable of calculating the
process, checking EXTRA_XS first, then Comix and then AMEGIC++. Therefore,
for some processes, several of the
options are used. In this example, EXTRA_XS calculates the 2->2 part of the
process, and Comix calculates the
2->3,4 parts.
To begin with the example, the Sherpa run has to be
started by changing into the <prefix>/share/SHERPA-MC/Examples/Tevatron_WJets/
directory and executing
|
The user may also run from an
arbitrary directory, employing
<prefix>/bin/Sherpa PATH=<prefix>/share/SHERPA-MC/Examples/Tevatron_WJets
. In the example, the
keyword PATH
is specified by an absolute path. It may also be
specified relative to the current working directory. If it is
omitted altogether, the current working directory is understood.
For good book-keeping, it is highly recommended to reserve different subdirectories for different simulations as is demonstrated with the example setups.
If AMEGIC++ is used, Sherpa requires an initialization run, where
libraries are written out, then the libraries must be compiled and linked by running
a makelibs
script in the working directory, and then Sherpa is
run again for the actual cross section integrations and event generation.
For an example of how to run Sherpa using AMEGIC++, see Running Sherpa with AMEGIC++.
If the Internal hard-coded cross sections or Comix are used, and AMEGIC++ is not, an initialization run is not needed, and Sherpa will calculate the cross sections and generate events during the first run.
As the cross sections are integrated, the
integration over phase space is optimized to arrive at an
efficient event generation.
Subsequently events are generated if EVENTS
was specified
either at the command line or added to the Run.dat
file in the
(run)
section.
The generated events are not stored into a file by default; for details on how to store the events see Event output formats. Note that the computational effort to go through this procedure of generating, compiling and integrating the matrix elements of the hard processes depends on the complexity of the parton-level final states. For low multiplicities ( 2->2,3,4 ), of course, it can be followed instantly.
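As a sketch, specifying the number of events in the run card rather than on the command line would sit in the (run) section; the value 10000 is arbitrary, and the section layout follows the Sherpa 1.x run-card conventions:

```
(run){
  EVENTS = 10000;
}(run);
```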
Usually more than one generation run is wanted. As long as the
parameters that affect the matrix-element integration are not changed,
it is advantageous to store the cross sections obtained during the
generation run for later use. This saves CPU time especially for large
final-state multiplicities of the matrix elements. To store the
integration results, a <result>
directory has to be created in
Tevatron_WJets
(Alternatively, the command line option
‘-g’ can be invoked, see Command line options). Then utilizing an
extended command line reading
|
a generation run can be started and the results of the integration
will be stored in <result>
, see RESULT_DIRECTORY. The next time this command line is
used, Sherpa will look for the integration results in <result>
and read them in. Of course, if corresponding parameters do change,
the cross sections have to be re-evaluated for a valid new generation
run. The new results have to be stored in a new directory or the
<result>
directory may be re-used once it has been emptied.
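The bookkeeping described above can be sketched as follows. The directory name Results and the paths are hypothetical, and the Sherpa invocation itself is shown only as a comment since it requires an installed Sherpa:

```shell
# Create a directory in which Sherpa caches its integration results:
mkdir -p Results
# A generation run reusing those results would then be started along
# the lines of (not executed here; paths are placeholders):
#   <prefix>/bin/Sherpa PATH=<prefix>/share/SHERPA-MC/Examples/Tevatron_WJets RESULT_DIRECTORY=Results
```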
Basically, most of the parameters listed in the (model)
, (me)
and (selector)
part of Run.dat
determine the calculation of cross sections.
Standard examples are changing the magnitude of couplings,
renormalization or factorization scales, changing the PDF or
centre-of-mass energy, or, applying different cuts at the parton
level. If unsure whether a re-integration is required, a simple
test is to remove the
RESULT_DIRECTORY
option from the run command and check
whether the new integration numbers (statistically) comply with the
stored ones.
One more remark (or maybe warning) concerning the validity of the
process libraries is in order here: it is absolutely mandatory to
generate new library files whenever the physics model is altered,
i.e. when particles are added or removed, such that new diagrams may
contribute or existing diagrams may no longer contribute to the same
final states. Also, when particle masses are switched on or off, new
library files must be generated (masses may, however, be changed
between non-zero values while keeping the same process libraries).
Old library files cannot account for such changes, since their
functional structure is fixed once generated. The best solution is to
create a new and separate setup directory. Otherwise the Process
and Result directories have to be erased:
|
In either case one has to start over with the whole initialization procedure to prepare for the generation of events again.
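A minimal sketch of the clean-up, assuming the default directory names Process and Result from the text:

```shell
# Remove stale process libraries and cached integration results so that
# the next initialization run regenerates them from scratch:
rm -rf Process Result
```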
The setup (or the Run.dat
file) provided in
./Examples/Tevatron_WJets/
can be considered as a standard
example to illustrate the generation of fully hadronized events in Sherpa.
Such events will include effects from parton showering,
hadronization into primary hadrons and their subsequent decays into
stable hadrons. Moreover, the example chosen here nicely demonstrates
how Sherpa is used in the context of merging matrix elements and
parton showers [Hoe09]
.
In addition to the aforementioned corrections, this simulation of
inclusive
W
production (with the
W
decaying into
electron and anti-electron-neutrino
) will then include higher-order jet corrections
at the tree level. As a result the transverse-momentum distribution of
the
W
boson as measured by the D0 and CDF collaborations at
Tevatron Run I can be well described, see also
[Kra04]
,[Kra05]
,[Gle05]
.
Before event generation, the initialization procedure as described in Process selection and initialization has to be completed. The matrix-element processes included in the setup are the following:
proton anti-proton -> parton parton -> electron anti-electron-neutrino + up to two partons
In the (processes)
part of the steering file this translates into
Process 93 93 -> 11 -12 93{2}
Order_EW 2;
CKKW sqr(30/E_CMS)
End process;
The physics model for
these processes is the Standard Model (‘SM’) which is the default
setting of the parameter MODEL
, in the (model)
part of
Run.dat
. Fixing the order of
electroweak couplings to ‘2’, matrix elements of all partonic
subprocesses for
W
production without any and with up to two extra QCD
parton emissions will be generated.
Proton–antiproton collisions are considered at
beam energies of 980 GeV; under the (beam)
part of the Run.dat
file, one therefore has BEAM_1=2212
, BEAM_2=-2212
and
BEAM_ENERGY_{1,2}=980.0
. The default PDF used by Sherpa is
CTEQ6L. Model parameters and couplings can be set in the
Run.dat
section (model)
, and the way couplings are treated
can be defined under the (me)
category. The QCD radiation matrix elements have to be
regularized to obtain meaningful cross sections. This is achieved by
specifying ‘CKKW sqr(30/E_CMS)’ in the (processes)
part of
Run.dat
. Simultaneously, this tag initiates the ME-PS merging procedure.
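Collected in run-card syntax, the beam settings described above would read roughly as follows; this is a sketch, with the section layout as in the Sherpa 1.x examples:

```
(beam){
  BEAM_1 = 2212;  BEAM_ENERGY_1 = 980.0;
  BEAM_2 = -2212; BEAM_ENERGY_2 = 980.0;
}(beam);
```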
To eventually obtain fully hadronized events, the FRAGMENTATION
tag has been left on its default setting 'Ahadic',
which will run Sherpa's cluster hadronization, and the tag
DECAYMODEL has its default setting
'Hadrons', which will run Sherpa's hadron decays.
Additionally, corrections owing to photon emissions are taken into
account.
To run this example set-up, use the
|
command as described in Running Sherpa. Sherpa displays some output as it runs. At the start of the run, Sherpa initializes the relevant model, and displays a table of particles, with their PDG codes and some properties. It also displays the Particle containers, and their contents. The other relevant parts of Sherpa are initialized, including the matrix element generator(s). The Sherpa output will look like:
Initialized the beams Monochromatic*Monochromatic
PDF set 'cteq6l' loaded from 'libCTEQ6Sherpa'.
PDF set 'cteq6l' loaded from 'libCTEQ6Sherpa'.
Initialized the ISR: (SF)*(SF)
Initialized the Beam_Remnant_Handler.
Initialized the Shower_Handler.
Initialized the Fragmentation_Handler.
+----------------------------------+
|                                  |
|    CCC   OOO   M   M  I  X   X   |
|   C     O   O  MM MM  I   X X    |
|   C     O   O  M M M  I    X     |
|   C     O   O  M   M  I   X X    |
|    CCC   OOO   M   M  I  X   X   |
|                                  |
+==================================+
|  Color dressed  Matrix Elements  |
|     http://comix.freacafe.de     |
|   please cite  JHEP12(2008)039   |
+----------------------------------+
Matrix_Element_Handler::BuildProcesses(): Looking for processes
...................... done ( 25856 kB, 0.34 s ).
Matrix_Element_Handler::InitializeProcesses(): Performing tests
...................... done ( 25856 kB, 0 s ).
Initialized the Matrix_Element_Handler for the hard processes.
Hadron_Decay_Map::Read: Initializing hadron decay tables. This may take some time.
Initialized the Hadron_Decay_Handler, Decay model = Hadrons
Initialized the Soft_Photon_Handler.
Then Sherpa will start to integrate the cross sections. The output will look like:
Process_Group::CalculateTotalXSec(): Calculate xs for '2_2__j__j__e-__nu_eb' (Comix)
Starting the calculation. Lean back and enjoy ... .
1049.87 pb +- ( 41.8506 pb = 3.98626 % ) 5000 ( 5003 -> 99.9 % )
full optimization:  ( 0 s elapsed / 13 s left / 13 s total )
...
The first line here displays the process which is being calculated. In this example, the integration is for the 2->2 process, parton, parton -> electron, neutrino. The matrix element generator used is displayed after the process. As the integration progresses, summary lines are displayed, like the one shown above. The current estimate of the cross section is displayed, along with its statistical error estimate. The number of phase space points calculated is displayed after this (‘5000’ in this example), and the efficiency is displayed after that. On the line below, the time elapsed is shown, and an estimate of the total time till the optimization is complete.
When the integration is complete, the output will look like:
...
985.27 pb +- ( 0.363629 pb = 0.0369065 % ) 300000 ( 300009 -> 99.9 % )
integration time:  ( 11 s elapsed / 0 s left / 11 s total )
985.206 pb +- ( 0.356621 pb = 0.0361976 % ) 310000 ( 310010 -> 99.9 % )
integration time:  ( 11 s elapsed / 0 s left / 11 s total )
2_2__j__j__e-__nu_eb : 985.206 pb +- ( 0.356621 pb = 0.0361976 % )  exp. eff: 6.13927 %
with the final cross section result and its statistical error displayed.
Sherpa will then move on to integrate the other processes specified in the run card.
When the integration is complete, the event generation will start. As the events are being generated, Sherpa will display a summary line stating how many events have been generated, and an estimate of how long it will take. When the event generation is complete, Sherpa’s output looks like:
...
Event 10000 ( 158 s elapsed / 0 s left ) -> ETA: Tue Jul 28 19:41
In Event_Handler::Finish : Summarizing the run may take some time.
+--------------------------------------------------+
|                                                  |
|  Total XS is 1956 pb +- ( 8.71129 pb = 0.44 % )  |
|                                                  |
+--------------------------------------------------+
A summary of the number of events generated is displayed, with the total cross section for the processes.
The generated events are not stored into a file by default; for details on how to store the events see Event output formats.
Sherpa has its own tree-level matrix-element generators called AMEGIC++ and Comix.
Furthermore, with the module PHASIC++, sophisticated and
robust tools for phase-space integration are provided. Sherpa can
therefore also be used as a cross-section integrator. Because
of the way Monte Carlo integration is accomplished, this immediately
allows for parton-level event generation. Taking the Tevatron_WJets
setup, users have to modify just a few settings in Run.dat
and
would arrive at a parton-level generation for the process gluon
down-quark -> electron, electron-antineutrino and up-quark, to name an
example. When, for instance, the
options “EVENTS=0 OUTPUT=2
” are added to the command line,
a pure cross-section integration for that process would be obtained
with the results plus integration errors written to the screen.
For the example, the (processes)
section alters to
Process : 21 1 -> 11 -12 2
Order_EW 2
End process
and, assuming a fresh start, the initialization procedure has
to be followed as before.
Picking the same collider environment as in the previous
example, there are only two more changes before the Run.dat
file is ready for the calculation of the hadronic cross section of the
process g d to e- nu_e-bar u at Tevatron Run I and subsequent
parton-level event generation with Sherpa.
These changes are SHOWER_GENERATOR=None,
to switch off parton showering, and FRAGMENTATION=Off,
to do the same for the hadronization effects.
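Collecting these pieces, a sketch of the relevant run-card fragment for such a parton-level run might look as follows. This is only an illustration; it assumes the switches are collected in the (run) section, although they may equally well be given on the command line:

```
(run){
  EVENTS = 0; OUTPUT = 2;
  SHOWER_GENERATOR = None;
  FRAGMENTATION = Off;
}(run)
```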
To determine the total cross section, in particular in the context of running CKKW merging with Sherpa, the final output of the event generation run should be used, e.g.
+-----------------------------------------------------+
|                                                     |
|  Total XS is 1612.17 pb +- ( 8.48908 pb = 0.52 % )  |
|                                                     |
+-----------------------------------------------------+
Note that the Monte Carlo error quoted for the total cross section is determined during event generation. It might therefore differ substantially from the errors quoted during the integration step.
In contrast to plain leading order results, Sherpa’s total cross section is composed of values from various leading order processes, namely those which are combined by applying the ME-PS merging, see ME-PS merging. In this context, it is important to note that
The exclusive higher order tree-level cross sections determined during the integration step are meaningless by themselves, only the inclusive cross section printed at the end of the event generation run is to be used.
In principle, this value has the same formal accuracy as a leading order result, but it might still differ by a significant amount. Depending on jet definitions, process etc., the merged cross section may be either larger or smaller than the leading order cross section.
Concerning a comparison with NLO calculations: It is known that for, e.g., inclusive Z production the NLO-LO K-factor is larger than one. In some setups the Sherpa cross section is smaller than the LO one, and therefore further from the NLO. Therefore, the Sherpa total cross section should not be thought of as an “improved leading order result”, which would suggest that it is always closer to the NLO than the LO cross section.
Sherpa total cross sections have leading order accuracy.
Broadly speaking, Sherpa’s ME-PS merging is adequate for capturing the information from (resummed) logarithmic corrections to the leading order (as is the parton shower). By contrast, NLO cross sections are typically dominated by finite terms, as they are often quite inclusive and there are no large logarithms in this case. Sherpa’s merging algorithm has no way to calculate these finite terms, and this is why Sherpa’s cross section is not a better approximation to the NLO cross section. On the other hand, shape observables (especially jet transverse momenta and the like) are typically dominated by logarithmic corrections. Where these are concerned, Sherpa can be expected to perform reasonably well.
The available command line options for Sherpa.
‘-f <file>’: Read input from file ‘<file>’.
‘-p <path>’: Read input file from path ‘<path>’.
‘-e <events>’: Set number of events to generate ‘<events>’, see EVENTS.
‘-r <results>’: Set the result directory to ‘<results>’, see RESULT_DIRECTORY.
‘-m <generators>’: Set the matrix element generator list to ‘<generators>’, see ME_SIGNAL_GENERATOR.
‘-w <mode>’: Set the event generation mode to ‘<mode>’, see EVENT_GENERATION_MODE.
‘-s <generator>’: Set the parton shower generator to ‘<generator>’, see SHOWER_GENERATOR.
‘-F <module>’: Set the fragmentation module to ‘<module>’, see Fragmentation.
‘-D <module>’: Set the decay module to ‘<module>’, see Fragmentation.
‘-a <analyses>’: Set the analysis handler list to ‘<analyses>’, see ANALYSIS.
‘-A <path>’: Set the analysis output path to ‘<path>’, see ANALYSIS_OUTPUT.
‘-O <level>’: Set general output level ‘<level>’, see OUTPUT.
‘-o <level>’: Set output level for event generation ‘<level>’, see OUTPUT.
‘-j <threads>’: Set number of threads ‘<threads>’, see Multi-threading.
‘-b’: Switch to non-batch mode, see BATCH_MODE.
‘-v’: Print versioning information.
‘-h’: Print a help message.
‘PARAMETER=value’: Set the value of a parameter, see Parameters.
‘TAG:=value’: Set the value of a tag, see Tags.
A Sherpa setup is steered by various parameters, associated with the different components of event generation.
These have to be specified in a run-card which by default is named “Run.dat” in the current working directory. If you want to use a different setup directory for your Sherpa run, you have to specify it on the command line as ‘-p <dir>’ or ‘PATH=<dir>’. To read parameters from a run-card with a different name, you may specify ‘-f <file>’ or ‘RUNDATA=<file>’.
Sherpa’s parameters are grouped according to the different aspects of event generation, e.g. the beam parameters in the group ‘(beam)’ and the fragmentation parameters in the group ‘(fragmentation)’. In the run-card this looks like:
(beam){
  BEAM_ENERGY_1 = 7000.
  ...
}(beam)
Each of these groups is described in detail in another chapter of this manual, see Parameters.
If such a section or file does not exist in the setup directory, a Sherpa-wide fallback mechanism is employed, searching the file in various locations in the following order (where $SHERPA_DAT_PATH is an optionally set environment variable):
All parameters can be overwritten on the command line, i.e. command-line input has the highest priority. The syntax is
<prefix>/bin/Sherpa KEYWORD1=value1 KEYWORD2=value2 ...
To change, e.g., the default number of events, the corresponding command line reads
<prefix>/bin/Sherpa EVENTS=10000
All over Sherpa, particles are defined by the particle code proposed by the PDG. These codes and the particle properties will be listed during each run with ‘OUTPUT=2’ for the elementary particles and ‘OUTPUT=4’ for the hadrons. In both cases, antiparticles are characterized by a minus sign in front of their code, e.g. a mu- has code ‘13’, while a mu+ has ‘-13’.
All quantities have to be specified in units of GeV and millimetres. The same units apply to all numbers in the event output (momenta, vertex positions). Scattering cross sections are quoted in picobarn in the output.
There are a few extra features for an easier handling of the parameter file(s), namely global tag replacement, see Tags, and algebra interpretation, see Interpreter.
5.1 Interpreter | How to use the internal interpreter | |
5.2 Tags | How to use tags |
Sherpa has a built-in interpreter for algebraic expressions, like ‘cos(5/180*M_PI)’.
This interpreter is employed when reading integer and floating point numbers from
input files, such that certain parameters can be written in a more convenient fashion.
For example it is possible to specify the factorisation scale as ‘sqr(91.188)’.
There are predefined tags to ease the handling:
M_PI: Ludolph’s number to a precision of 12 digits.
M_C: The speed of light in the vacuum.
E_CMS: The total centre of mass energy of the collision.
The expression syntax is in general C-like, except for the extra function ‘sqr’,
which gives the square of its argument. Operator precedence is the same as in C.
The interpreter can handle functions with an arbitrary list of parameters, such as
‘min’ and ‘max’.
The interpreter can be employed to construct arbitrary variables from four momenta,
like e.g. in the context of a parton level selector, see Selectors.
The corresponding functions are
Mass(v): The invariant mass of v in GeV.
Abs2(v): The invariant mass squared of v in GeV^2.
PPerp(v): The transverse momentum of v in GeV.
PPerp2(v): The transverse momentum squared of v in GeV^2.
MPerp(v): The transverse mass of v in GeV.
MPerp2(v): The transverse mass squared of v in GeV^2.
Theta(v): The polar angle of v in radians.
Eta(v): The pseudorapidity of v.
Phi(v): The azimuthal angle of v in radians.
Comp(v,i): The i’th component of the vector v.
PPerpR(v1,v2): The relative transverse momentum between v1 and v2 in GeV.
ThetaR(v1,v2): The relative angle between v1 and v2 in radians.
DEta(v1,v2): The rapidity difference between v1 and v2.
DPhi(v1,v2): The relative polar angle between v1 and v2 in radians.
Tag replacement in Sherpa is performed through the data reading routines, which means that it can be performed for virtually all inputs. Specifying a tag on the command line using the syntax ‘<Tag>:=<Value>’ will replace every occurrence of ‘<Tag>’ in all files during read-in. An example tag definition could read
<prefix>/bin/Sherpa QCUT:=20 NJET:=3
and then be used in the (me) and (processes) sections like
(me){
  RESULT_DIRECTORY = Result_QCUT/
}(me)
(processes){
  Process 93 93 -> 11 -11 93{NJET}
  Order_EW 2;
  CKKW sqr(QCUT/E_CMS)
  End process;
}(processes)
A Sherpa setup is steered by various parameters, associated with the different components of event generation. These are set in Sherpa’s run-card, see Input structure for more details. Tag replacements may be performed in all inputs, see Tags.
6.1 Run Parameters | List of general parameters | |
6.2 Beam Parameters | List of beam parameters | |
6.3 ISR Parameters | List of initial state radiation parameters | |
6.4 Model Parameters | List of interaction model parameters | |
6.5 Matrix Elements | Matrix element related settings | |
6.6 Processes | Syntax of the process setup | |
6.7 Selectors | Syntax of parton level cuts | |
6.8 Integration | List of integration parameters | |
6.9 Shower Parameters | List of shower parameters | |
6.10 MPI Parameters | List of multiple parton interaction parameters | |
6.11 Fragmentation | List of hadronization parameters | |
6.12 QED Corrections | List of QED correction parameters |
The following parameters describe general run information. They may be set in the (run)
section of the run-card, see Input structure.
6.1.1 EVENTS | Number of events to generate. | |
6.1.2 OUTPUT | Output level. | |
6.1.3 RANDOM_SEED | Seed for random number generator. | |
6.1.4 ANALYSIS | Switch internal analysis on or off. | |
6.1.5 ANALYSIS_OUTPUT | Directory for generated analysis histogram files. | |
6.1.6 TIMEOUT | Run time limitation. | |
6.1.7 BATCH_MODE | Batch mode settings. | |
6.1.8 SPIN_CORRELATIONS | Switch spin correlations on/off. | |
6.1.9 NUM_ACCURACY | Accuracy for gauge tests. | |
6.1.10 SHERPA_CPP_PATH | The C++ code generation path. | |
6.1.11 SHERPA_LIB_PATH | The runtime library path. | |
6.1.12 Event output formats | Event output in different formats. | |
6.1.13 Multi-threading | Multi-threaded integration with Sherpa. |
This parameter specifies the number of events to be generated.
It can alternatively be set on the command line through option
‘-e’, see Command line options.
This parameter specifies the output level (verbosity) of the program.
It can alternatively be set on the command line through option
‘-O’, see Command line options. A different output level can be
specified for the event generation step through ‘EVT_OUTPUT’
or command line option ‘-o’, see Command line options.
The value can be any sum of the following:
E.g. OUTPUT=3 would display information, events and errors.
SHERPA uses a random-number generator as described in [Florida State University Report FSU-SCRI-87-50]. The two independent integer-valued seeds are specified by the option “RANDOM_SEED=A B”. The seeds A and B may range from 0 to 31328 and from 0 to 30081, respectively. They can also directly be set using “RANDOM_SEED1=A” and “RANDOM_SEED2=B”. If RANDOM_SEED is not specified at all, or only by one integer number, the old random-number generator (SHERPA 1.0.6 and older) will be used.
Analysis routines can be switched on or off by setting the ANALYSIS flag. The default is no analysis, corresponding to option ‘0’. This parameter can also be specified on the command line using option ‘-a’, see Command line options.
The following analysis handlers are currently available
Sherpa’s internal analysis handler.
To use this option, the package must be configured with option ‘--enable-analysis’.
An output directory can be specified using ANALYSIS_OUTPUT.
The Rivet package, see Rivet Website.
To enable it, Rivet and HepMC have to be installed and Sherpa must be configured
as described in Rivet analyses.
Multiple options can be combined using a comma, e.g. ‘ANALYSIS=Internal,Rivet’.
Name of the directory for histogram files when using the internal analysis and name of the Aida file when using Rivet, see ANALYSIS. The directory / file will be created w.r.t. the working directory. The default value is ‘Analysis/’. This parameter can also be specified on the command line using option ‘-A’, see Command line options.
A run time limitation can be given in user CPU seconds through TIMEOUT. This option is of some relevance when running SHERPA on a batch system, since in many cases jobs are simply terminated; TIMEOUT allows one to interrupt a run, store all relevant information and restart it without any loss. This is particularly useful when carrying out long integrations. Alternatively, setting TIMEOUT to -1, the default, disables the run time limitation altogether.
Whether or not to run Sherpa in batch mode. The default is ‘1’, meaning Sherpa does not attempt to save runtime information when catching a signal or an exception. If, in contrast, option ‘0’ is used, Sherpa will store potential integration information and analysis results once the run is terminated abnormally.
Note that when running the code on a cluster or in a grid environment, BATCH_MODE should always be 1, and the command line option ‘-b’ should therefore not be used in this case, see Command line options.
The algorithm used to transfer spin-correlation information from AMEGIC++ to HADRONS++ is switched off (=0) by default. It can be switched on via SPIN_CORRELATIONS=1. Process libraries have to be re-created in this case.
The targeted numerical accuracy can be specified through NUM_ACCURACY, e.g. for comparing two numbers. It might have to be reduced if gauge tests fail for numerical reasons.
The path in which Sherpa will eventually store dynamically created C++ source code. If not specified otherwise, sets ‘SHERPA_LIB_PATH’ to ‘$SHERPA_CPP_PATH/Process/lib’.
The path in which Sherpa looks for dynamically linked libraries from previously created C++ source code, cf. SHERPA_CPP_PATH.
Sherpa provides the possibility to output events in its native and two other output formats: The HepEVT common block structure or the HepMC format. The authors of Sherpa assume that the user is sufficiently acquainted with these formats when selecting them.
If the events are to be written to file, the following parameters have to be specified:
SHERPA_OUTPUT = <filename>: Filename for output in Sherpa format.
HEPMC2_GENEVENT_OUTPUT = <filename>: Filename for output in HepMC::IO_GenEvent format.
HEPMC2_SHORT_OUTPUT = <filename>: Filename for output in HepMC::IO_GenEvent format. Only incoming beams and outgoing particles are stored; intermediate and decayed particles are not listed.
HEPEVT_OUTPUT = <filename>: Filename for output in HepEvt format.
ROOTNTUPLE_OUTPUT = <filename>: Filename for output in ROOT ntuple format for NLO event generation.
For details on the ntuple format, see Structure of ROOT NTuple Output. This output option is only available if Sherpa was linked to ROOT during installation by using the configure option --enable-root=/path/to/root.
With these keywords the root of the filename can be specified, i.e. HEPEVT_OUTPUT=<filename> will create files named <filename>.#.hepevt, where the hash mark numbers the files if events are split into multiple files.
The output can be further steered with the following options:
FILE_SIZE: Number of events per file (default: 1000).
EVT_FILE_PATH: Directory where the files will be stored.
OUTPUT_PRECISION: Steers the precision of all numbers written to file.
To write events directly to gzipped files instead of plain text, the option ‘--enable-gzip’ has to be specified during the installation.
There is also the option to change the format of the event output printed to
screen (if any) with the switch EVENT_MODE:
Sherpa: Blob list output (default)
HepMC: GenEvent print method
HepMC_Short: GenEvent print method of a shortened event record
HepEvt: HepEvt common block
Multi-threaded integration in Sherpa can be enabled using the configuration option ‘--enable-multithread’. Subsequently the computation of amplitudes for large groups of processes is split into a number of threads which is limited from above by the parameter ‘PG_THREADS’. This parameter can also be specified using the command line option ‘-j’, see Command line options. Additionally, matrix-element calculation and phase-space evaluation for a single process with Comix can be distributed to different threads according to [Gle08] . The number of threads is then specified using the parameters ‘COMIX_ME_THREADS’ and ‘COMIX_PS_THREADS’, respectively.
The setup of the colliding beams is covered by the (beam)
section of the steering file or the beam data file Beam.dat,
respectively, see Input structure. The mandatory settings to be made are
the beam particles, specified through BEAM_1 and BEAM_2, and their
energies, BEAM_ENERGY_1 and BEAM_ENERGY_2.
More options related to beamstrahlung and intrinsic transverse momentum can be found in the following subsections.
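For illustration, a minimal (beam) section for a symmetric proton-antiproton collider could be sketched as follows. The particle codes and the beam energy of 980 GeV are example values only, not taken from a setup in this manual:

```
(beam){
  BEAM_1 = 2212; BEAM_ENERGY_1 = 980.;
  BEAM_2 = -2212; BEAM_ENERGY_2 = 980.;
}(beam)
```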
6.2.1 Beam Spectra | Options related to beamstrahlung | |
6.2.2 Intrinsic Transverse Momentum | Options related to primordial transverse momentum |
If desired, you can also specify spectra for beamstrahlung through
BEAM_SPECTRUM_1 and BEAM_SPECTRUM_2. The possible values are
Monochromatic: The beam energy is unaltered and the beam particles remain unchanged. This is the default and corresponds to ordinary hadron-hadron or lepton-lepton collisions.
Laser_Backscattering: This can be used to describe the backscattering of a laser beam off initial leptons. The energy distribution of the emerging photon beams is modelled by the CompAZ parametrization, see [Zar02]. Note that this parametrization is valid only for the proposed TESLA photon collider, as various assumptions about the laser parameters and the initial lepton beam energy have been made.
Simple_Compton: This corresponds to simple light backscattering off the initial lepton beam and produces initial-state photons with a corresponding energy spectrum.
EPA: This enables the equivalent photon approximation for colliding protons, see [Arc08]. The resulting beam particles are photons that follow a dipole form factor parametrization, cf. [Bud74]. The authors would like to thank T. Pierzchala for his help in implementing and testing the corresponding code.
Spectrum_Reader: A user-defined spectrum is used to describe the energy spectrum of the assumed new beam particles. The name of the corresponding spectrum file needs to be given through the keywords SPECTRUM_FILE_1 and SPECTRUM_FILE_2.
The BEAM_SMIN
and BEAM_SMAX
parameters may be used to specify the
minimum/maximum fraction of cms energy
squared after Beamstrahlung. The reference value is the total centre
of mass energy squared of the collision, not the
centre of mass energy after eventual Beamstrahlung.
The parameter can be specified using the internal interpreter, see
Interpreter, e.g. as ‘BEAM_SMIN sqr(20/E_CMS)’.
K_PERP_MEAN_1: This parameter specifies the mean intrinsic transverse momentum for the first (left) beam in case of hadronic beams, such as protons. The default value for protons is 0.8 GeV.
K_PERP_MEAN_2: This parameter specifies the mean intrinsic transverse momentum for the second (right) beam in case of hadronic beams, such as protons. The default value for protons is 0.8 GeV.
K_PERP_SIGMA_1: This parameter specifies the width of the Gaussian distribution of intrinsic transverse momentum for the first (left) beam in case of hadronic beams, such as protons. The default value for protons is 0.8 GeV.
K_PERP_SIGMA_2: This parameter specifies the width of the Gaussian distribution of intrinsic transverse momentum for the second (right) beam in case of hadronic beams, such as protons. The default value for protons is 0.8 GeV.
If the option ‘BEAM_REMNANTS=0’ is specified, pure parton-level events are simulated, i.e. no beam remnants are generated. Accordingly, partons entering the hard scattering process do not acquire primordial transverse momentum.
The following parameters are used to steer the setup of beam substructure and
initial state radiation (ISR). They may be set in the
(isr)
section of the run-card, see Input structure.
BUNCH_1/BUNCH_2
Specify the PDG ID of the first (left) and second (right) bunch particle, i.e. the particle after eventual Beamstrahlung specified through the beam parameters, see Beam Parameters. By default these are taken to be identical to the parameters BEAM_1/BEAM_2, assuming the default beam spectrum Monochromatic. In case the Simple Compton or Laser Backscattering spectra are enabled, the bunch particles have to be set to 22, the PDG code of the photon.
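As an illustration, a photon-collider-type setup combining a laser backscattering spectrum with photon bunches might be sketched as follows. This assumes the spectrum value is spelled Laser_Backscattering; both keywords are those introduced above:

```
(beam){
  BEAM_SPECTRUM_1 = Laser_Backscattering;
  BEAM_SPECTRUM_2 = Laser_Backscattering;
}(beam)
(isr){
  BUNCH_1 = 22; BUNCH_2 = 22;
}(isr)
```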
ISR_SMIN/ISR_SMAX
These parameters specify the minimum/maximum fraction of cms energy squared after ISR. The reference value is the total centre of mass energy squared of the collision, not the centre of mass energy after eventual Beamstrahlung. The parameter can be specified using the internal interpreter, see Interpreter, e.g. as ‘ISR_SMIN=sqr(20/E_CMS)’.
Sherpa provides access to a variety of structure functions. They can be configured with the following parameters.
PDF_LIBRARY
Switches between different interfaces to PDFs. If the two colliding beams are of different type, e.g. protons and electrons or photons and electrons, it is possible to specify two different PDF libraries using ‘PDF_LIBRARY_1’ and ‘PDF_LIBRARY_2’. The following options are distributed with Sherpa:
LHAPDFSherpa
Use PDFs from LHAPDF [Wha05] . This is the default (and the only option available) if Sherpa has been compiled with support for LHAPDF, see Installation.
CTEQ6Sherpa
Built-in library for some PDF sets from the CTEQ collaboration, cf. [Nad08] . This is the default, if Sherpa has not been compiled with LHAPDF support.
MSTW08Sherpa
Built-in library for PDF sets from the MSTW group, cf. [Mar09] .
MRST04QEDSherpa
Built-in library for photon PDF sets from the MRST group, cf. [Mar04] .
MRST01LOSherpa
Built-in library for the 2001 leading-order PDF set from the MRST group, cf. [Mar01] .
MRST99Sherpa
Built-in library for the 1999 PDF sets from the MRST group, cf. [Mar99] .
GRVSherpa
Built-in library for the GRV photon PDF parametrization.
PDFESherpa
Built-in library for the electron structure function.
The perturbative order of the fine structure constant can be set using the
parameter ISR_E_ORDER (default: 1). The switch ISR_E_SCHEME allows one
to set the scheme for respecting non-leading terms. Possible options are 0
("mixed choice"), 1 ("eta choice"), or 2 ("beta choice", default).
Furthermore it is simple to build an external interface to an arbitrary PDF and load that dynamically in the Sherpa run. See External PDF for instructions.
PDF_SET
Specifies the PDF set for hadronic bunch particles. All sets available in the
chosen PDF_LIBRARY can be listed by running Sherpa with the parameter
SHOW_PDF_SETS=1, e.g.:
Sherpa PDF_LIBRARY=CTEQ6Sherpa SHOW_PDF_SETS=1
If the two colliding beams are of different type, e.g. protons and electrons or photons and electrons, it is possible to specify two different PDF sets using ‘PDF_SET_1’ and ‘PDF_SET_2’.
PDF_SET_VERSION
This parameter allows one to select a specific version (member) within the chosen PDF set. Specifying a negative value, e.g.
PDF_LIBRARY LHAPDFSherpa;
PDF_SET NNPDF12_100.LHgrid;
PDF_SET_VERSION -100;
results in Sherpa sampling all sets 1..100, which can be used to obtain the averaging required when employing PDFs from the NNPDF collaboration [Bal08] , [Bal09] .
The interaction model setup is covered by the (model)
section of
the steering file or the model data file Model.dat
, respectively.
The main switch here is called MODEL
and sets the model that Sherpa uses
throughout the simulation run. The default is ‘SM’, for
the Standard Model. For a complete list of available models,
run Sherpa with SHOW_MODEL_SYNTAX=1 on the command line.
This will display not only the available models, but also the
parameters for those models.
The chosen model also defines the list of particles and their default properties. With the following switches it is possible to change the properties of all fundamental particles:
MASS[<id>]
Sets the mass (in GeV) of the particle with PDG id ‘<id>’.
Masses of particles and corresponding anti-particles are always set
simultaneously.
MASSIVE[<id>]
Specifies whether the finite mass of particle with PDG id ‘<id>’ is to be considered in matrix-element calculations or not.
WIDTH[<id>]
Sets the width (in GeV) of the particle with PDG id ‘<id>’.
ACTIVE[<id>]
Enables/disables the particle with PDG id ‘<id>’.
STABLE[<id>]
Sets the particle with PDG id ‘<id>’ either stable or unstable according to the following options:
0: Particle and anti-particle are unstable
1: Particle and anti-particle are stable
2: Particle is stable, anti-particle is unstable
3: Particle is unstable, anti-particle is stable
At present this affects only the simulation of tau decays, dealt with by the HADRONS++ package. Decays of other particles have to be specified in the process setup (see Processes) and are not treated automatically.
Note: To set properties of hadrons, you can use the same switches (except for MASSIVE) in the fragmentation section, see Fragmentation.
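For instance, the properties of a fundamental particle could be adjusted with the switches above as follows. The numerical values are purely illustrative:

```
(model){
  MASS[6] = 173.2;
  WIDTH[6] = 1.5;
  STABLE[6] = 0;
  MASSIVE[15] = 1;
}(model)
```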
The SM inputs for the electroweak sector can be given in four different
schemes, that correspond to different choices of which SM physics
parameters are considered fixed and which are derived from the given
quantities. The input schemes are selected through the EW_SCHEME
parameter, whose default is ‘0’. The following options are provided:
‘0’: All EW parameters are explicitly given. Here the W, Z and Higgs masses are taken as inputs, and the parameters 1/ALPHAQED(0), SIN2THETAW, VEV and LAMBDA have to be specified as well. While 1/ALPHAQED(0) corresponds to the fine structure constant at zero momentum transfer, the parameters SIN2THETAW, VEV and LAMBDA specify the weak mixing angle, the Higgs field vacuum expectation value and the Higgs quartic coupling, respectively.
‘1’: All EW parameters are calculated from the W, Z and Higgs masses and 1/ALPHAQED(0).
‘2’: All EW parameters are calculated from 1/ALPHAQED(0), SIN2THETAW, VEV and the Higgs mass.
‘3’: This choice corresponds to the G_mu-scheme. The EW parameters are calculated from the weak gauge boson masses M_W, M_Z, the Higgs boson mass M_H and the Fermi constant GF.
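A sketch of a G_mu-scheme input could thus read as follows; the numerical values are illustrative examples, not recommendations:

```
(model){
  EW_SCHEME = 3;
  MASS[24] = 80.399;
  MASS[23] = 91.1876;
  MASS[25] = 120.;
  GF = 1.16637e-5;
}(model)
```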
To account for quark mixing the CKM matrix elements have to be assigned.
For this purpose the Wolfenstein parametrization [Wol83]
is
employed. The order of expansion in the lambda parameter is defined
through CKMORDER
, with default ‘0’ corresponding to a unit matrix.
The parameter convention for higher expansion terms reads:
CKMORDER = 1: The CABIBBO parameter has to be set; it parametrizes lambda and has the default value ‘0.2272’.
CKMORDER = 2: In addition, the value of A has to be set; its default is ‘0.818’.
CKMORDER = 3: At order lambda^3, ETA and RHO have to be specified. Their default values are ‘0.349’ and ‘0.227’, respectively.
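As an example, a full order-lambda^3 CKM setup, written out with the default values quoted above, would read:

```
(model){
  CKMORDER = 3;
  CABIBBO = 0.2272;
  A = 0.818;
  ETA = 0.349;
  RHO = 0.227;
}(model)
```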
The remaining parameter to fully specify the Standard Model
is the strong coupling constant at the Z-pole, given through
ALPHAS(MZ). Its default value is ‘0.118’. For
the two fine structure constants there is the option to provide
fixed values that can be used in calculations of matrix elements
in case running of the couplings is disabled. The two keywords
are 1/ALPHAQED(default) and ALPHAS(default). When using
a running strong coupling, the order of the perturbative expansion
can be set through ORDER_ALPHAS, where the default ‘0’
corresponds to one-loop running and ‘1’, ‘2’, ‘3’ to 2-, 3-, 4-loop
running, respectively.
To use the MSSM within Sherpa (cf. [Hag05] ),
the MODEL switch has to be set to ‘MSSM’. Further, the
parameter spectrum has to be fed in. To achieve this, files conforming to the
SUSY Les Houches Accord [Ska03] are used. The actual SLHA file
name has to be specified by SLHA_INPUT and the file has to reside in the current
run directory, i.e. PATH. From this file the full low-scale MSSM spectrum
is read, including sparticle masses, mixing angles etc. In addition,
the particles’ total widths are read from the input file. Note that
masses and widths set through the SLHA input take precedence over settings
made through MASS[<id>] and WIDTH[<id>].
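A minimal MSSM setup might thus be sketched as follows; the SLHA file name sps1a.slha is a hypothetical example:

```
(model){
  MODEL = MSSM;
  SLHA_INPUT = sps1a.slha;
}(model)
```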
In order to use the ADD model within Sherpa the switch MODEL = ADD
has to be
set. The parameters of the ADD model can be set as follows:
The variable N_ED specifies the number of extra dimensions. The value of
the Newtonian constant can be specified in natural units using the keyword
G_NEWTON. The size of the string scale M_S can be defined by the
parameter M_S. Setting the value of KK_CONVENTION allows one to change
between three widely used conventions for the definition of M_S and the way of
summing internal Kaluza-Klein propagators. The switch M_CUT restricts
the c.m. energy of the hard process to be below this specified scale.
The masses, widths, etc. of the additional particles can be set in the same way as
for the Standard Model particles using the MASS[<id>] and
WIDTH[<id>] keywords. The ids of the graviton and graviscalar are
39 and 40.
For details of the implementation, the reader is referred to [Gle03a] .
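Putting the switches together, an illustrative ADD setup could read as follows; all numerical values are examples only:

```
(model){
  MODEL = ADD;
  N_ED = 4;
  M_S = 2000.;
  M_CUT = 2000.;
}(model)
```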
Sherpa includes a number of effective Lagrangians describing anomalous gauge interactions:
The triple gauge coupling parameters are specified through
G1_GAMMA, KAPPA_GAMMA, LAMBDA_GAMMA,
G4_GAMMA, G5_GAMMA, KAPPAT_GAMMA, LAMBDAT_GAMMA,
G1_Z, KAPPA_Z, LAMBDA_Z,
G4_Z, G5_Z, KAPPAT_Z and LAMBDAT_Z.
As a default the Standard Model limit is used
(G1_GAMMA/Z = KAPPA_GAMMA/Z = 1, all others = 0).
The anomalous quartic gauge couplings are specified through ALPHA_4 and ALPHA_5.
The coupling parameters are specified through F4_GAMMA, F5_GAMMA,
H1_GAMMA, H2_GAMMA, H3_GAMMA, H4_GAMMA,
F4_Z, F5_Z, H1_Z, H2_Z, H3_Z and H4_Z,
all of which equal zero by default.
It should be noted that the most general anomalous coupling between three off-shell
neutral gauge bosons allows for more coupling terms [Gou00]
which are
not implemented in the current version.
Outside the on-shell limit of two of the vector bosons a symmetrized version
of the above vertex is used.
Due to the effective nature of the anomalous couplings, unitarity
might be violated for coupling parameters other than the SM values.
For very large momentum transfers, such as those probed at the LHC, this
will lead to unphysical results. As discussed in Ref. [Bau88],
this can be avoided by introducing form factors applied to the deviation
of the coupling parameters from their Standard Model values. The corresponding
switches are UNITARIZATION_SCALE and UNITARIZATION_N.
By default the form factor is switched off.
The THDM is incorporated as a subset of the MSSM Lagrangian. It is
defined as the extension of the SM by a second SU(2) doublet of
Higgs fields. Besides the particle content of the SM it contains
interactions of five physical Higgs bosons: a light and a heavy
scalar, a pseudo-scalar and two charged ones. Besides the SM inputs
the model is defined through the masses and widths of the Higgs
particles, MASS[PDG]
and WIDTH[PDG]
, where PDG = [25,35,36,37] for
h^0, H^0, A^0 and H^+, respectively. The inputs are complete, when
TAN(BETA), the ratio of the two Higgs vacuum expectation values,
and ALPHA, the Higgs mixing angle, are specified.
The model is invoked by specifying MODEL = THDM in the (model) section of the steering file or the model data file Model.dat, respectively.
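An illustrative THDM setup might look as follows; all masses, widths and mixing parameters are example values, and it is assumed here that TAN(BETA) and ALPHA are set as plain keywords as described above:

```
(model){
  MODEL = THDM;
  MASS[25] = 120.; WIDTH[25] = 0.01;
  MASS[35] = 300.; WIDTH[35] = 1.;
  MASS[36] = 250.; WIDTH[36] = 1.;
  MASS[37] = 280.; WIDTH[37] = 1.;
  TAN(BETA) = 10.;
  ALPHA = 0.1;
}(model)
```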
The EHC describes the effective coupling of gluons and photons to Higgs bosons
via a top-quark loop, and a W-boson loop in case of photons. This supplement
to the Standard Model can be invoked by specifying MODEL = SM+EHC
in
the (model)
section of the steering file or the model data file
Model.dat
, respectively.
The effective coupling of gluons to the Higgs boson, g_ggH, can be
calculated either for a finite top-quark mass or in the limit of
an infinitely heavy top using the switch FINITE_TOP_MASS=[1,0].
Similarly, the photon-photon-Higgs coupling, g_ppH, can be calculated
for finite top and/or W masses or in the infinite-mass limit using the
switches FINITE_TOP_MASS=[1,0] and FINITE_W_MASS=[1,0].
The default choice in both cases is the infinite-mass limit.
Either of these couplings can be switched off using the
DEACTIVATE_GGH=[1,0] and DEACTIVATE_PPH=[1,0] switches.
Both default to 0.
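For example, to enable the effective Higgs couplings with a finite top-quark mass in the gluon coupling, one might write:

```
(model){
  MODEL = SM+EHC;
  FINITE_TOP_MASS = 1;
  FINITE_W_MASS = 0;
}(model)
```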
The 4thGen model adds a fourth family of quarks and leptons to the
Standard Model. It is invoked by specifying MODEL = SM+4thGen in the
‘(model)’ section of the steering file or the model data file
‘Model.dat’, respectively.
The masses and widths of the additional particles are defined via the
usual MASS[PDG]
and WIDTH[PDG]
switches, where PDG = [7,8,17,18]
for the fourth generation down and up quarks, the charged lepton and the
neutrino, respectively. A general mixing is implemented for both
leptons and quarks, parametrised through three additional mixing
angles and two additional phases, as described in [Hou87a]
:
A_14
, A_24
, A_34
, PHI_2
and PHI_3
for quarks,
THETA_L14
, THETA_L24
, THETA_L34
, PHI_L2
and
PHI_L3
for leptons.
Both 4x4 mixing matrices extend their 3x3 Standard Model
counterparts: the CKM matrix for quarks and the unit matrix for leptons. Both
mixing matrices can be printed on screen with OUTPUT_MIXING = 1
.
By default, all additional particles are set unstable and have to be decayed into
Standard Model particles within the matrix element or set stable via
STABLE[PDG] = 1
.
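A sketch of a corresponding (model) section is given below; the masses, widths and the mixing angle are placeholder values only:

```
(model){
  MODEL = SM+4thGen
  MASS[7] = 400.
  WIDTH[7] = 1.
  MASS[8] = 450.
  WIDTH[8] = 1.
  MASS[17] = 200.
  WIDTH[17] = 0.5
  MASS[18] = 150.
  WIDTH[18] = 0.
  A_14 = 0.03
  OUTPUT_MIXING = 1
}(model)
```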
To use a model generated using the FeynRules package, cf. Refs. [Chr08] and [Chr09] , the MODEL switch has to be set to ‘FeynRules’ and ME_SIGNAL_GENERATOR has to be set to ‘Amegic’. Note that in order to obtain the FeynRules model output in a format readable by Sherpa, the FeynRules subroutine ‘WriteSHOutput[ L ]’ needs to be called for the desired model Lagrangian ‘L’. This results in a set of ASCII files that represent the considered model through its particle data, model parameters and interaction vertices. Note also that Sherpa/Amegic can only deal with Feynman rules in unitary gauge.
The FeynRules output files need to be copied to the current working directory or have to reside in the directory referred to by the PATH
variable, cf. Input structure. There exists an agreed default naming convention for the FeynRules output files to be read by Sherpa. However, the explicit names of the input files can be changed. They are referred to by the variables
FR_PARTICLES = <file name>
: File containing the particle data, default value Particle.dat
.
FR_IDENTFILE = <file name>
: File hosting declaration of all external model parameters, default value ident_card.dat
.
FR_PARAMCARD = <file name>
: List of numerical values of all elementary parameters, masses and decay widths, default param_card
.
FR_PARAMDEF = <file name>
: Input file where all derived parameters get defined, default value param_definition.dat
.
FR_INTERACTIONS = <file name>
: File where all interaction vertices are defined, default value Interactions.dat
.
For more details on the Sherpa interface to FeynRules please consult [Chr09] .
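For orientation, a minimal setup using the default file names quoted above could look like this, assuming the FeynRules output files reside in the working directory:

```
(model){
  MODEL = FeynRules
  FR_PARTICLES = Particle.dat
  FR_IDENTFILE = ident_card.dat
  FR_PARAMCARD = param_card
  FR_PARAMDEF = param_definition.dat
  FR_INTERACTIONS = Interactions.dat
}(model)

(me){
  ME_SIGNAL_GENERATOR = Amegic
}(me)
```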
The setup of matrix elements is covered by the ‘(me)’ section of the steering file or the ME data file ‘ME.dat’, respectively. There are no mandatory settings to be made.
The following parameters are used to steer the matrix element setup.
6.5.1 ME_SIGNAL_GENERATOR | The matrix element generator(s). | |
6.5.2 RESULT_DIRECTORY | The directory to store integration results. | |
6.5.3 EVENT_GENERATION_MODE | The event generation mode. | |
6.5.4 SCALES | How to compute scales. | |
6.5.5 COUPLINGS | How to evaluate couplings. | |
6.5.6 KFACTOR | Whether and how to apply a K-factor. | |
6.5.7 YUKAWA_MASSES | Running of Yukawa couplings | |
6.5.8 Dipole subtraction | Parameters for calculations with dipole subtraction. |
The list of matrix element generators to be employed during the run. When setting up hard processes from the ‘(processes)’ section of the input file (see Processes), Sherpa calls these generators in order to check whether either one is capable of generating the corresponding matrix element. This parameter can also be set on the command line using option ‘-m’, see Command line options.
The built-in generators are
Simple matrix element library, implementing a variety of 2->2 processes.
The AMEGIC++ generator, published in [Kra01] .
It is possible to employ an external matrix element generator within Sherpa. For advice on this topic please contact the authors, Authors.
This parameter specifies the name of the directory which is used by Sherpa to store integration results and phasespace mappings. The default is ‘Results/’. It can also be set using the command line parameter ‘-r’, see Command line options. The directory will be created automatically, unless the option ‘GENERATE_RESULT_DIRECTORY=0’ is specified. Its location is relative to a potentially specified input path, see Command line options.
This parameter specifies the event generation mode. It can also be set on the command line using option ‘-w’, see Command line options. The three possible options are ‘Weighted’ (shortcut ‘W’), ‘Unweighted’ (shortcut ‘U’) and ‘PartiallyUnweighted’ (shortcut ‘P’). For partially unweighted events, the weight is allowed to exceed a given maximum, which is lower than the true maximum weight. In such cases the event weight will exceed the otherwise constant value.
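Combining the parameters above, a typical (me) section might read as follows; the choice of generators and mode is an example, not a recommendation:

```
(me){
  ME_SIGNAL_GENERATOR = Amegic Comix
  RESULT_DIRECTORY = Results
  EVENT_GENERATION_MODE = Unweighted
}(me)
```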
This parameter specifies how to compute the renormalization and factorization scale and potential additional scales.
Sherpa provides several built-in scale schemes. The options which are currently available are
Scales are specified by additional parameters in a form which is understood by the internal interpreter, see Interpreter. If, for example the invariant mass of the lepton pair in Drell-Yan production is the desired scale, the corresponding setup reads
SCALES VAR{Abs2(p[2]+p[3])}
Note: the square of the desired scale must be given.
Renormalization and factorization scales can be chosen differently. For example in Drell-Yan + jet production one could set
SCALES VAR{Abs2(p[2]+p[3])}{MPerp2(p[2]+p[3])}
In this case the factorization scale must be specified first. More than two scales can be set as well to be subsequently used, e.g. by different couplings, see COUPLINGS.
If FastJet is enabled, this tag can be used to set a scale based on jet, rather than parton momenta. These jets can be defined in all possible ways allowed by FastJet with the arguments specified in the following way
SCALES FASTJET[A=antikt,PT=10,ET=0,R=0.4,M=1]{...}
Thereby, A defines the jet algorithm, R sets the cone size, and PT / ET define the minimum transverse momentum/energy of the jets. The additional argument M (default 1) sets the averaging mode. After jet finding, the jet momenta are ordered as in FastJet. Non-QCD partons are unaffected by this procedure, i.e. their momentum indices remain the same. The additional tags MU_22 .. MU_n2, with n the number of strongly interacting final state particles, hold the nodal values of the jet clustering and the transverse momenta of the final state jets in descending order.
The matrix element is clustered onto a core 2->2 configuration using a k_T-type algorithm with recombination into on-shell partons. Scales are defined as the minimum of the largest transverse momentum during clustering and the lowest invariant mass in the core process.
The matrix element is clustered onto a core 2->2 configuration using a
k_T-type algorithm with recombination into on-shell particles. Their
corresponding flavours are determined using run-time information from
the matrix element generator.
Scales are defined as the lowest invariant mass or negative virtuality
in the core process. For core interactions which are pure QCD processes
scales are set to the maximum transverse mass squared of the outgoing
particles.
This is the default scale scheme in Sherpa, since it is employed
for truncated shower merging, see ME-PS merging.
However, it might be subject to changes to enable further classes
of processes for merging in the future and should therefore be used with care.
Integration results might change slightly between different Sherpa
versions.
Occasionally, users might encounter the warning message
METS_Scale_Setter::CalculateScale(): No CSS history for '<process name>' in <percentage>% of calls. Set \hat{s}.
As long as the percentage quoted here is not too high, this does not pose a serious problem. The warning occurs when - based on the current colour configuration and matrix element information - no suitable clustering is found by the algorithm. In such cases the scale is set to the invariant mass of the partonic process.
It is possible to implement a dedicated scale scheme within Sherpa. For advice on this topic please contact the authors, Authors.
For next-to-leading order calculations it must be guaranteed that the scale is calculated separately for the real correction and the subtraction terms, such that within the subtraction procedure the same amount is subtracted and added back. Starting from version 1.2.2 this is the case for all scale setters in Sherpa. Also, the definition of the scale must be infrared safe w.r.t. the radiation of an extra parton. Infrared safe (for QCD-NLO calculations) are:
Not infrared safe are
Since the total number of partons is different for different pieces of the NLO calculation any explicit reference to a parton momentum will lead to an inconsistent result.
Simple scale variations can be done using the following parameters:
Note: Shower starting scales are not affected by this factor, change the ‘SCALES’ parameter to achieve that.
Note: This affects also the running coupling used in the parton shower evolution.
Within Sherpa, strong and electroweak couplings can be computed at any scale specified by a scale setter (cf. SCALES). The ‘COUPLINGS’ tag links the argument of a running coupling to one of the respective scales. This is better seen in an example. Assuming the following input
SCALES VAR{...}{PPerp2(p[2])}{Abs2(p[2]+p[3])} COUPLINGS Alpha_QCD 1, Alpha_QED 2
Sherpa will compute any strong couplings at scale one, i.e. ‘PPerp2(p[2])’ and electroweak couplings at scale two, i.e. ‘Abs2(p[2]+p[3])’. Note that counting starts at zero.
This parameter specifies how to evaluate potential K-factors in the hard process. This is equivalent to the ‘COUPLINGS’ specification of Sherpa versions prior to 1.2.2. Currently available options are
No reweighting
Couplings specified by an additional parameter in a form which is understood by the internal interpreter, see Interpreter. The tags Alpha_QCD and Alpha_QED serve as links to the built-in running coupling implementation.
If for example the process ‘g g -> h g’ in effective theory is computed, one could think of evaluating two powers of the strong coupling at the Higgs mass scale and one power at the transverse momentum squared of the gluon. Assuming the Higgs mass to be 120 GeV, the corresponding reweighting would read
SCALES VAR{...}{PPerp2(p[3])} COUPLINGS Alpha_QCD 1 KFACTOR VAR{sqr(Alpha_QCD(sqr(120))/Alpha_QCD(MU_12))}
As can be seen from this example, scales are referred to as MU_<i>2, where <i> is replaced with the appropriate number. Note that counting starts at zero.
It is possible to implement a dedicated K-factor scheme within Sherpa. For advice on this topic please contact the authors, Authors.
This parameter specifies whether the Yukawa couplings are evaluated using
running or fixed quark masses: YUKAWA_MASSES=Running
is the default since
version 1.2.2 while YUKAWA_MASSES=Fixed
was the default until 1.2.1.
This list of parameters can be used to optimize the performance when employing the Catani-Seymour dipole subtraction [Cat96b] as implemented in Amegic [Gle07] .
Specifies a dipole cutoff in the nonsingular region [Nag03] . Changing this parameter shifts contributions from the subtracted real correction piece (RS) to the piece including integrated dipole terms (I), while their sum remains constant. This parameter can be used to optimize the integration performance of the individual pieces. Also the average calculation time for the subtracted real correction is reduced with smaller choices of ‘DIPOLE_ALPHA’ due to the (on average) reduced number of contributing dipole terms. For most processes a reasonable choice is between 0.01 and 1 (default). See also Choosing DIPOLE_ALPHA.
Specifies the cutoff of real correction terms in the infrared region to avoid numerical problems with the subtraction. The default is 1.e-8.
Specifies the number of quark flavours that are produced from gluon splittings. This number must be at least the number of massless flavours (default). If this number is larger than the number of massless quarks the massive dipole subtraction [Cat02] is employed.
Specifies the kappa-parameter in the massive dipole subtraction formalism [Cat02] .
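For orientation, these settings could be collected in the (me) section as sketched below. Only DIPOLE_ALPHA is named explicitly in the text above; the remaining parameter names (DIPOLE_AMIN, DIPOLE_NF_GSPLIT, DIPOLE_KAPPA) are the conventional Sherpa names and should be checked against your version:

```
(me){
  DIPOLE_ALPHA = 0.01
  DIPOLE_AMIN = 1.e-8
  DIPOLE_NF_GSPLIT = 5
  DIPOLE_KAPPA = 0.667
}(me)
```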
The process setup is covered by the ‘(processes)’ section of the steering file or the process data file ‘Processes.dat’, respectively.
The following parameters are used to steer the process setup.
6.6.1 Process | The process setup start tag. | |
6.6.2 Decay | Tag to add an exclusive decay. | |
6.6.3 Onshell_Decay | Tag to add an exclusive on-shell decay. | |
6.6.4 Scales | Tag to set a process-specific scale. | |
6.6.5 Couplings | Tag to set process-specific couplings. | |
6.6.6 CKKW | Tag to setup multijet merging. | |
6.6.7 Selector_File | Tag to specify a specific selector file. | |
6.6.8 Order_EW | Tag to fix the electroweak order. | |
6.6.9 Max_Order_EW | Tag to fix the maximum electroweak order. | |
6.6.10 Order_QCD | Tag to fix the QCD order. | |
6.6.11 Max_Order_QCD | Tag to fix the maximum QCD order. | |
6.6.12 Min_N_Quarks | Tag to set the minimum number of quarks. | |
6.6.13 Max_N_Quarks | Tag to set the maximum number of quarks. | |
6.6.14 Min_N_TChannels | Tag to request a minimum number of t-channels. | |
6.6.15 Print_Graphs | Tag to enable writeout of Feynman graphs. | |
6.6.16 Integration_Error | Tag to set a specific integration error. | |
6.6.17 Max_Epsilon | Tag to set a specific epsilon for overweighting. | |
6.6.18 Enhance_Factor | Tag to set an enhance factor. | |
6.6.19 Enhance_Function | Tag to set an enhance function. | |
6.6.20 Enhance_Observable | Tag to set an enhance observable. | |
6.6.21 NLO_QCD_Part | Tag to setup QCD NLO processes. | |
6.6.22 NLO_EW_Part | Tag to setup electroweak NLO processes. | |
6.6.23 ME_Generator | Tag to specify the tree ME generator. | |
6.6.24 Loop_Generator | Tag to specify the loop ME generator. | |
6.6.25 End process | The process setup end tag. |
This tag starts the setup of a process or a set of processes with common properties. It must be followed by the specification of the (core) process itself. The setup is completed by the ‘End process’ tag, see End process. The initial and final state particles are specified by their PDG codes, or by particle containers, see Particle containers. Examples are
Sets up a Drell-Yan process group with light quarks in the initial state.
Sets up jet production in e+e- collisions with up to three additional jets.
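The corresponding process declarations could, for instance, read as follows, using the jet container 93 for the light partons (cf. Particle containers); these lines are reconstructed here for illustration:

```
Process 93 93 -> 11 -11
End process

Process 11 -11 -> 93 93 93{3}
End process
```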
The syntax for specifying processes is explained in the following sections:
6.6.1.1 PDG codes | ||
6.6.1.2 Particle containers | ||
6.6.1.3 Curly brackets |
Initial and final state particles are specified using their PDG codes (cf. PDG). A list of particles with their codes, and some of their properties, is printed at the start of each Sherpa run, when the OUTPUT is set at level ‘2’.
Sherpa contains a set of containers that collect particles with similar properties, namely
lepton (kf-code 90
),
neutrino (91
),
fermion (92
),
jet (93
),
quark (94
).
These containers hold all massless particles and anti-particles
of the denoted type and allow for a more efficient definition of
initial and final states to be considered. The jet container consists
of the gluon and all massless quarks (as set by MASS[..]=0.0
or
MASSIVE[..]=0
). A list of particle containers
is printed at the start of each Sherpa run, when the OUTPUT is set
at level ‘2’.
It is also possible to define a custom particle container using the keyword
PARTICLE_CONTAINER
either on the command line or in the (model)
section of the input file. The container must be given an unassigned particle
ID (kf-code) and its name and content must be specified. An example would be
the collection of all down-type quarks, which could be declared as
PARTICLE_CONTAINER 98 downs 1 3 5;
The curly bracket notation when specifying a process allows up to a certain number of jets to be included in the final state. This is easily seen from an example,
Sets up jet production in e+e- collisions. The matrix element final state may be 2, 3, 4 or 5 light partons or gluons.
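The corresponding process declaration, reconstructed here for illustration, would read:

```
Process 11 -11 -> 93 93 93{3}
```

The two mandatory jet-container entries yield the 2-parton final state, and the curly-bracket notation 93{3} adds up to three optional jets.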
Specifies the exclusive decay of a particle produced in the matrix element. The virtuality of the decaying particle is sampled according to a Breit-Wigner distribution. An example would be
Process 11 -11 -> 6[a] -6[b] Decay 6[a] -> 5 24[c] Decay -6[b] -> -5 -24[d] Decay 24[c] -> -13 14 Decay -24[d] -> 94 94
Specifies the exclusive decay of a particle produced in the matrix element. The decaying particle is on mass-shell, i.e. a strict narrow-width approximation is used. This tag can be specified alternatively as ‘DecayOS’. An example would be
Process 11 -11 -> 6[a] -6[b] DecayOS 6[a] -> 5 24[c] DecayOS -6[b] -> -5 -24[d] DecayOS 24[c] -> -13 14 DecayOS -24[d] -> 94 94
Sets a process-specific scale. For the corresponding syntax see SCALES.
Sets process-specific couplings. For the corresponding syntax see COUPLINGS.
Sets up multijet merging according to [Hoe09] . The additional argument specifies the separation cut in the form (Q_{cut}/E_{cms})^2. It can be given in any form which is understood by the internal interpreter, see Interpreter. Examples are
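A common choice, given here purely as an illustration with a merging cut of 20 GeV, would be:

```
Process 93 93 -> 11 -11 93{4}
CKKW sqr(20./E_CMS)
End process
```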
Sets a process-specific selector file name.
Sets a process-specific electroweak order. The given number is exclusive, i.e. only matrix elements with exactly the given order in the electroweak coupling are generated.
Note that for decay chains with Amegic this setting applies to the core process only, while with Comix it applies to the full process, see Decay and DecayOS.
Sets a process-specific maximum electroweak order. The given number is inclusive, i.e. matrix elements with up to the given order in the electroweak coupling are generated.
Note that for decay chains with Amegic this setting applies to the core process only, while with Comix it applies to the full process, see Decay and DecayOS.
Sets a process-specific QCD order. The given number is exclusive, i.e. only matrix elements with exactly the given order in the strong coupling are generated.
Note that for decay chains with Amegic this setting applies to the core process only, while with Comix it applies to the full process, see Decay and DecayOS.
Sets a process-specific maximum QCD order. The given number is inclusive, i.e. matrix elements with up to the given order in the strong coupling are generated.
Note that for decay chains with Amegic this setting applies to the core process only, while with Comix it applies to the full process, see Decay and DecayOS.
Limits the minimum number of quarks in the process to the given value.
Limits the maximum number of quarks in the process to the given value.
Limits the minimum number of t-channel propagators in the process to the given value.
Writes out Feynman graphs in LaTeX format. The diagram information is
stored in the directory Process/P<n>_<m>
, where n is the number
of incoming and m the number of outgoing particles of the
process it has generated.
After Sherpa has run, there will be a .tex
-file located in the
diagram information directory with the name <process>.tex
. This
has to be compiled by using latex <process>.tex
, which
produces a .mp
-file. Enter mpost *.mp
and
again latex <process>.tex
in order to produce the
.dvi
-file <process>.dvi
containing the Feynman diagrams.
Sets a process-specific relative integration error target.
For multijet processes, this parameter can be specified per final state multiplicity. An example would be
Process 93 93 -> 93 93 93{2} Integration_Error 0.02 {3,4}
Here, the integration error target is set to 2% for 2->3 and 2->4 processes.
Sets epsilon for maximum weight reduction. The key idea is to allow weights larger than the maximum during event generation, as long as the fraction of the cross section represented by corresponding events is at most the epsilon factor times the total cross section. In other words, the relative contribution of overweighted events to the inclusive cross section is at most epsilon.
Sets a process specific enhance factor.
For multijet processes, this parameter can be specified per final state multiplicity. An example would be
Process 93 93 -> 93 93 93{2} Enhance_Factor 4 {3} Enhance_Factor 16 {4}
Here, 3-jet processes are enhanced by a factor of 4, 4-jet processes by a factor of 16.
Sets a process specific enhance function.
This feature can only be used when generating weighted events.
For multijet processes, the parameter can be specified per final state multiplicity. An example would be
Process 93 93 -> 11 -11 93{1} Enhance_Function PPerp2(p[4]) {3}
Here, the 1-jet process is enhanced with the transverse momentum squared of the jet.
Note that the convergence of the Monte Carlo integration can be worse if enhance functions are employed and therefore the integration can take significantly longer. The reason is that the default phase space mapping, which is constructed according to diagrammatic information from hard matrix elements, is not suited for event generation including enhancement. It must first be adapted, which, depending on the enhance function and the final state multiplicity, can be an intricate task.
If Sherpa cannot achieve an integration error target due to the use of enhance functions, it might be appropriate to locally redefine this error target, see Integration_Error.
Allows for the specification of a ME-level observable in which the event generation should be flattened. Of course, this induces an appropriate weight for each event. This option is available for both weighted and unweighted event generation, but for the latter as mentioned above the weight stemming from the enhancement is introduced. For multijet processes, the parameter can be specified per final state multiplicity.
An example would be
Process 93 93 -> 11 -11 93{1} Enhance_Observable log10(PPerp(p[2]+p[3]))|1.0|3.0; {3}
Here, the 1-jet process is flattened with respect to the logarithmic transverse momentum of the lepton pair in the limits 1.0 (10 GeV) to 3.0 (1 TeV). For the calculation of the observable one can use any function available in the algebra interpreter (see Interpreter).
Note that the convergence of the Monte Carlo integration can be worse if enhance observables are employed and therefore the integration can take significantly longer. The reason is that the default phase space mapping, which is constructed according to diagrammatic information from hard matrix elements, is not suited for event generation including enhancement. It must first be adapted, which, depending on the enhance function and the final state multiplicity, can be an intricate task.
If Sherpa cannot achieve an integration error target due to the use of enhance functions, it might be appropriate to locally redefine this error target, see Integration_Error.
Specifies which pieces of a QCD NLO calculation are computed. Possible choices are
Different pieces can be combined in the process setup. Only pieces with the same number of final state particles and the same order in alpha_S can be treated as one process; otherwise they will automatically be split up.
Note that Sherpa includes only a very limited selection of one-loop corrections. For processes not included, external codes can be interfaced, see External one-loop ME.
Sets a process-specific nametag for the desired tree-ME generator, see ME_SIGNAL_GENERATOR.
Sets a process-specific nametag for the desired
loop-ME generator. The only Sherpa-native option is Internal
, with a few
hard-coded loop matrix elements, e.g. NLO_W.
Another source of loop matrix elements is BlackHat.
To use it, Sherpa has to be linked to BlackHat during installation by using the configure option
--enable-blackhat=/path/to/blackhat
.
Completes the setup of a process or a list of processes with common properties.
The setup of cuts at the matrix element level is covered by the ‘(selector)’ section of the steering file or the selector data file ‘Selector.dat’, respectively.
Sherpa provides the following selectors
6.7.1 One particle selectors | one particle selectors | |
6.7.2 Two particle selectors | two particle selectors | |
6.7.3 Jet finders | cuts on QCD partons | |
6.7.4 Universal selector | user-defined cuts | |
6.7.5 Minimum selector | cuts that are inclusive for several selectors | |
6.7.6 NLO selectors | selectors for NLO QCD calculations |
The selectors listed here implement cuts on the matrix element level, based on single particle kinematics. The corresponding syntax in ‘Selector.dat’ is
<keyword> <flavour code> <min value> <max value>
‘<min value>’ and ‘<max value>’ are floating point numbers, which can also be given in a form that is understood by the internal algebra interpreter, see Interpreter. The selectors act on all possible particles with the given flavour. Their respective keywords are
energy cut
transverse energy cut
transverse momentum cut
rapidity cut
pseudorapidity cut
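The keyword names themselves (PT, PseudoRapidity, etc.) follow the standard Sherpa conventions and should be checked against your installation; with them, a Selector.dat restricting lepton kinematics might read:

```
(selector){
  PT 90 20. E_CMS
  PseudoRapidity 90 -2.5 2.5
}(selector)
```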
The selectors listed here implement cuts on the matrix element level, based on two particle kinematics. The corresponding syntax in ‘Selector.dat’ is
<keyword> <flavour1 code> <flavour2 code> <min value> <max value>
‘<min value>’ and ‘<max value>’ are floating point numbers, which can also be given in a form that is understood by the internal algebra interpreter, see Interpreter. The selectors act on all possible particles with the given flavour. Their respective keywords are
invariant mass
angular separation (rad)
angular separation w.r.t. beam
(‘<flavour2 code>’ is 0 or 1, referring to beam 1 or 2)
pseudorapidity separation
azimuthal angle separation (rad)
R separation
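As an illustration (again, the keyword names Mass and DeltaR are the conventional Sherpa ones and should be verified for your version), a Z-window cut combined with a lepton-jet separation could read:

```
(selector){
  Mass 11 -11 66. 116.
  DeltaR 93 90 0.4 1000.
}(selector)
```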
There are three different types of jet finders
k_T-algorithm
cone-algorithm
k_T-type algorithm to select on a given number of jets
Their respective syntax is
JetFinder <ycut>[<ycut decay 1>[<ycut decay 11>...]...]... <D parameter> ConeFinder <min R> NJetFinder <n> <ptmin> <etmin> <D parameter> [<exponent>] [<eta max>]
For ‘JetFinder’, it is possible to give different values of ycut in individual subprocesses of a production-decay chain. The square brackets are then used to denote the decays. In case only one uniform set of ycut is to be used, the square brackets are left out.
‘<ycut>’, ‘<min R>’ and ‘<D parameter>’ are floating point numbers, which can also be given in a form that is understood by the internal algebra interpreter, see Interpreter.
The ‘NJetFinder’ allows one to select kinematic configurations with at least ‘<n>’ jets that satisfy both the ‘<ptmin>’ and the ‘<etmin>’ minimum requirements and that lie in the pseudorapidity region |eta|<‘<eta max>’. The ‘<exponent>’ selects between a kt-algorithm (1) and an anti-kt algorithm (-1).
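For example, requiring at least two anti-kt jets with p_T > 40 GeV and R=0.4 inside |eta|<4.5 could be written as follows (all values illustrative):

```
(selector){
  NJetFinder 2 40. 0. 0.4 -1 4.5
}(selector)
```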
The universal selector is intended to implement non-standard cuts on the matrix element level. Its syntax is
"<variable>" <kf1>,..,<kfn> <min1>,<max1>:..:<minn>,<maxn> [<order1>,...,<orderm>]
No additional white spaces are allowed.
The first word has to be double-quoted and contains the name of the variable to cut on. The keywords for available predefined <variable>s can be listed by running Sherpa with ‘SHOW_VARIABLE_SYNTAX=1’. Alternatively, an arbitrary cut variable can be constructed using the internal interpreter, see Interpreter. This is invoked with the command ‘Calc(...)’. In the formula specified there you have to use placeholders for the momenta of the particles: ‘p[0]’ ... ‘p[n]’ hold the momenta of the respective particles ‘kf1’ ... ‘kfn’. A list of available vector functions and operators can be found under Interpreter.
‘<kf1>,..,<kfn>’ specify the PDG codes of the particles the variable has to be calculated from. In case this choice is not unique in the final state, you have to specify multiple cut ranges (‘<min1>,<max1>:..:<minn>,<maxn>’) for all (combinations of) particles you want to cut on, separated by semicolons.
If no fourth argument is given, the order of cuts is determined internally, according to Sherpa’s process classification scheme. This then has to be matched if you want to have different cuts on certain different particles in the matrix element. To do this, you should put enough (for the possible number of combinations of your particles) arbitrary ranges at first and run Sherpa with debugging output for the universal selector: ‘Sherpa OUTPUT=2[Variable_Selector::Trigger|15]’. This will start to produce lots of output during integration, at which point you can interrupt the run (Ctrl-c). In the ‘Variable_Selector::Trigger(): {...}’ output you can see, which particle combinations have been found and which cut range your selector has held for them (vs. the arbitrary range you specified). From that you should get an idea, in which order the cuts have to be specified.
If the fourth argument is given, particles are ordered before the cut is applied. Possible orderings are ‘PT_UP’, ‘ET_UP’, ‘E_UP’ and ‘ETA_UP’, (increasing p_T, E_T, E, eta). They have to be specified for each of the particles, separated by commas.
Examples
"mT" 11,-12 50,E_CMS
"PT" 90 50.0,E_CMS [PT_UP]
"Calc(abs(Eta(p[0]))<1.1||(abs(Eta(p[0]))>1.5&&abs(Eta(p[0]))<2.5))" 11 1,1
Note the range 1,1 meaning true for bool operations.
"Calc(Eta(p[0])*Eta(p[1]))" 93,93 -100,0 [PT_UP,PT_UP]
"Calc(Mass(p[0]+p[1])<87.0||Mass(p[0]+p[1])>97.0)" 11,22 1,1
"m" 90,90 80,100:0,E_CMS:0,E_CMS:0,E_CMS:0,E_CMS:80,100
Here we use knowledge about the internal ordering to cut only on the correct lepton pairs.
This selector can combine several selectors, passing an event if at least one of them accepts it. It is mainly designed to generate more inclusive samples that, for instance, include several jet finders, such that tighter cuts can still be specified later. The syntax is
MinSelector { Selector 1 Selector 2 ... }
Phase-space cuts that are applied in next-to-leading order calculations must be defined in an infrared-safe way. Technically, a special treatment of the real (subtracted) correction is also required. Currently only the following selectors meet these requirements:
NJetFinder <n> <ptmin> <etmin> <D parameter> [<exponent>] [<eta max>]
(see Jet finders)
PTNLO <flavour code> <min value> <max value> RapidityNLO <flavour code> <min value> <max value> PseudoRapidityNLO <flavour code> <min value> <max value>
PT2NLO <flavour1 code> <flavour2 code> <min value> <max value> Mass <flavour1 code> <flavour2 code> <min value> <max value>
The integration setup is covered by the ‘(integration)’ section of the steering file or the integration data file ‘Integration.dat’, respectively.
The following parameters are used to steer the integration.
6.8.1 ERROR | The relative integration error | |
6.8.2 INTEGRATOR | The integrator type | |
6.8.3 VEGAS | Whether to enable Vegas | |
6.8.4 FINISH_OPTIMIZATION | Whether to fully optimise the Vegas grid | |
6.8.5 PSI_NMAX | Maximum number of points per process |
Specifies the relative integration error target.
Specifies the integrator. The possible integrator types depend on the matrix element generator. In general users should rely on the default value and otherwise seek the help of the authors, see Authors. However, there are a few generator independent choices which have been designed for specific processes and might be more efficient there:
VHAAG_RES_KF
specifies
the kf-code of the weak boson, the default is W (24
).
VHAAG_RES_D1
and VHAAG_RES_D2
define the
positions of the boson decay products within the internal
naming scheme, where 2
is the position of the first
outgoing particle. The defaults are VHAAG_RES_D1=2
and VHAAG_RES_D2=3
, which is the correct choice for
all processes where the decay products are the only final-state
particles that do not interact strongly.
Specifies whether or not to employ Vegas for adaptive integration. The two possible values are ‘On’ and ‘Off’, the default being ‘On’.
Specifies whether the full Vegas optimization is to be carried out. The two possible options are ‘On’ and ‘Off’, the default being ‘On’.
The maximum number of points before cuts to be generated during integration. This parameter acts on a process-by-process basis.
The shower setup is covered by the ‘(shower)’ section of the steering file or the shower data file ‘Shower.dat’, respectively.
The following parameters are used to steer the shower setup.
6.9.1 SHOWER_GENERATOR | Tag to set Sherpa’s shower generator. | |
6.9.2 CS Shower options | Options for Sherpa’s default shower. |
The only shower option currently available in Sherpa is ‘CSS’, and this is the default for this tag. See the module summaries in Basic structure for details about this shower.
Different shower modules are in principle supported, and more choices will be provided by Sherpa in the near future. To list all available shower modules, the tag SHOW_SHOWER_GENERATORS=1 can be specified on the command line.
Sherpa’s default shower module is based on [Sch07a] . A new ordering parameter for initial state splitters was introduced in [Hoe09] and a novel recoil strategy for initial state splittings was proposed in [Hoe09a] . While the ordering variable is fixed, the recoil strategy for dipoles with initial-state emitter and final-state spectator can be changed for systematics studies. Setting ‘CSS_KIN_SCHEME=0’ (default) corresponds to using the recoil scheme proposed in [Hoe09a] , while ‘CSS_KIN_SCHEME=1’ enables the original recoil strategy. Note that the latter is more suitable for parton evolution in deep-inelastic lepton nucleon scattering, see HERA_DIS [Car09] . The lower cutoff of the shower evolution can be set via ‘CSS_PT2MIN’. Note that this value is specified in GeV^2.
By default, only QCD splitting functions are enabled in the shower. If you also want to allow for photon splittings, you can enable them by using ‘CSS_EW_MODE=1’. Note that if you have leptons in your matrix-element final state, they are by default treated by a soft photon resummation as explained in QED Corrections. To avoid double counting, this has to be disabled as explained in that section.
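A ‘(shower)’ section combining the tags above could be sketched as follows; the CSS_PT2MIN value is an illustrative assumption, not a recommended setting:

```
(shower){
  SHOWER_GENERATOR = CSS
  CSS_KIN_SCHEME = 0
  CSS_PT2MIN = 1.0
  CSS_EW_MODE = 1
}(shower)
```

Remember that CSS_PT2MIN is given in GeV^2, and that enabling CSS_EW_MODE for events with matrix-element leptons requires disabling the soft photon resummation to avoid double counting.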
The multiple parton interaction (MPI) setup is covered by the ‘(mi)’ section of the steering file or the MPI data file ‘MI.dat’, respectively. The basic MPI model is described in [Sjo87], while Sherpa’s implementation details are discussed in [Ale05].
The following parameters are used to steer the MPI setup.
6.10.1 MI_HANDLER | The MPI handler | |
6.10.2 SCALE_MIN | The p_T cutoff | |
6.10.3 PROFILE_FUNCTION | The hadron profile function | |
6.10.4 PROFILE_PARAMETERS | The parameters of the hadron profile function | |
6.10.5 REFERENCE_SCALE | The reference scale | |
6.10.6 RESCALE_EXPONENT | The rescaling exponent |
Specifies the MPI handler. The two possible values at the moment are ‘None’ and ‘Amisic’.
Specifies the transverse momentum integration cutoff in GeV.
Specifies the hadron profile function. The possible values are ‘Exponential’, ‘Gaussian’ and ‘Double_Gaussian’. For the double gaussian profile, the relative core size and relative matter fraction can be set using PROFILE_PARAMETERS.
The potential parameters for hadron profile functions, see PROFILE_FUNCTION. For double gaussian profiles there are two parameters, corresponding to the relative core size and relative matter fraction.
Specifies the centre-of-mass energy at which the transverse momentum integration cutoff is used as is, see SCALE_MIN. This parameter should not be changed by the user. The default is ‘1800’, corresponding to Tevatron Run I energies.
Specifies the rescaling exponent for fixing the transverse momentum integration cutoff at centre-of-mass energies different from the reference scale, see SCALE_MIN, REFERENCE_SCALE.
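A hypothetical ‘(mi)’ section using the tags above might read as follows. All numbers are placeholders for illustration only, and the exact format of PROFILE_PARAMETERS should be checked against the distributed example run cards:

```
(mi){
  MI_HANDLER = Amisic
  SCALE_MIN = 2.5
  PROFILE_FUNCTION = Double_Gaussian
  PROFILE_PARAMETERS = 0.4 0.5
  REFERENCE_SCALE = 1800
  RESCALE_EXPONENT = 0.16
}(mi)
```

For the double-Gaussian profile, the two PROFILE_PARAMETERS entries correspond to the relative core size and the relative matter fraction.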
The hadronization setup is covered by the ‘(fragmentation)’ section of the steering file or the fragmentation data file ‘Fragmentation.dat’, respectively.
There are, broadly speaking, two options for how Sherpa handles the transition of quarks and gluons into primordial hadrons (hadronization) and their subsequent decay:
6.11.1 Hadronization parameters | General hadronization parameters | |
6.11.2 AHADIC++ | The fragmentation module, and its parameters. | |
6.11.3 HADRONS++ | The hadron decay module, and its parameters. |
The FRAGMENTATION parameter sets the fragmentation module to be employed during event generation; the default is ‘Ahadic’. This parameter steers whether fragmentation is switched on and whether it is performed by the internal cluster fragmentation module AHADIC++ (‘Ahadic’), by an interface to the corresponding routines of the Lund string hadronization of Pythia [Sjo03] (‘Lund’), or whether fragmentation is switched off completely (‘Off’).
The treatment of hadron and tau decays is specified by DECAYMODEL. Its allowed values are either the default choice ‘Hadrons’, which renders the HADRONS++ module responsible for performing the decays, or, as an alternative, ‘Lund’, which invokes the interface to Pythia. For the former option, the reader is referred to HADRONS++ for a more detailed discussion.
Please note that it is not advisable to use one fragmentation model in combination with another model for the hadron decays, since there is quite an intimate relation between the two. For instance, AHADIC++ knows and allows for many more primordial hadrons than Pythia does, which would obviously lead to problems if they were created in AHADIC++ and left to Pythia for their decays.
The Pythia routines have been made available through an interface to the corresponding Fortran code, with the option to steer Pythia parameters through Sherpa. The Lund parameters can be collected in a file set by the parameter LUND_FILE (default: Lund.dat), if ‘Lund’ is chosen for both fragmentation and hadron decays. In this mode, Sherpa is more or less ignorant about hadron decays, with the exception of tau decays, which were supplemented early in the development of the Sherpa event generator.
The coarse features of the string breakup in the Lund model are characterized by three parameters: Lund-a through PARJ(41), Lund-b through PARJ(42), and Lund-sigma through PARJ(21). If these parameters are not set in the data file, Sherpa employs the default values from Pythia.
More Pythia parameters can be added to the Lund.dat file using the Pythia nomenclature for the common blocks, e.g. MSTJ(11) for changing the fragmentation function for heavy quarks. Please note that particle decays cannot be disabled directly there; instead, the stable flag for the corresponding hadron needs to be used.
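To illustrate, choosing Pythia for both fragmentation and hadron decays and overriding a few Lund parameters could be sketched as follows; the parameter values are illustrative only:

```
(fragmentation){
  FRAGMENTATION = Lund
  DECAYMODEL = Lund
  LUND_FILE = Lund.dat
}(fragmentation)
```

with a corresponding Lund.dat containing, for example,

```
PARJ(41) = 0.30
PARJ(42) = 0.58
PARJ(21) = 0.36
MSTJ(11) = 4
```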
TODO.
HADRONS++ is the module within the Sherpa framework which is responsible for treating hadron and tau decays. It contains decay tables with branching ratios for approximately 2500 decay channels, of which many have their kinematics modelled according to a matrix element with corresponding form factors. Especially decays of the tau lepton and heavy mesons have form factor models similar to dedicated codes like Tauola [Jad93] and EvtGen [Lan01] .
Some general switches relating to hadron decays can be adjusted in the (fragmentation) section:
DECAYPATH
The path to the parameter files for the hadron and tau decays (default: Decaydata/). It is important to note that the path has to be given relative to the current working directory. If it does not exist, the default Decaydata directory ($prefix/share/SHERPA-MC/Decaydata) will be used.
Hadron properties, such as masses and widths, can be adjusted in the (fragmentation) section in full analogy to the settings for fundamental particles in the (model) section (cf. Model Parameters).
MASS_SMEARING = [0,1,2] (default: 1)
Determines whether particles entering the hadron decay phase should be put off-shell according to their mass distribution. Care is taken that no decay mode is suppressed by a potentially too low mass. While HADRONS++ determines this dynamically from the chosen decay channel, for Pythia as hadron decay handler its w-cut parameter is employed. Choosing option 2 instead of 1 will only set unstable (decayed) particles off-shell, but leave stable particles on-shell.
MAX_PROPER_LIFETIME = [mm]
Maximum proper lifetime (in mm) up to which particles are considered unstable. If specified, this will make long-lived particles stable, even if they are set unstable by default or by the user.
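A sketch of a ‘(fragmentation)’ section adjusting these general hadron decay switches; the lifetime value of 10 mm is an arbitrary illustration, not a recommendation:

```
(fragmentation){
  FRAGMENTATION = Ahadic
  DECAYMODEL = Hadrons
  DECAYPATH = Decaydata/
  MASS_SMEARING = 1
  MAX_PROPER_LIFETIME = 10.0
}(fragmentation)
```

With this setting, hadrons whose proper lifetime exceeds 10 mm would be treated as stable and left to the detector simulation.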
Many aspects of the above-mentioned “Decaydata” can be adjusted. There exist three levels of data files, which are explained in the following sections.
As with all other setup files, the user can either employ the default “Decaydata” in <prefix>/share/SHERPA-MC/Decaydata, or overwrite it (also selectively) by creating the appropriate files in the directory specified by DECAYPATH.
HadronDecays.dat consists of a table of particles that are to be decayed by HADRONS++. Note: even if decay tables exist for other particles, only those particles are decayed that are set unstable, either by default or in the model/fragmentation settings. The file has the following structure, where each line adds one decaying particle:
<kf-code> -> | <subdirectory>/ | <filename>.dat |
decaying particle | path to decay table | decay table file |
default names: | <particle>/ | Decays.dat |
It is possible to specify different decay tables for the particle (positive kf-code) and anti-particle (negative kf-code). If only one is specified, it will be used for both particle and anti-particle.
If more than one decay table is specified for the same kf-code, these tables will be used in the specified sequence during one event. The first matching particle appearing in the event is decayed according to the first table, and so on until the last table is reached, which will be used for the remaining particles of this kf-code.
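For illustration, a HadronDecays.dat with separate tables for the B+ (kf-code 521) and its anti-particle could look as follows; the subdirectory and the file name of the second table are assumed names, not shipped defaults:

```
521 -> B+/ Decays.dat
-521 -> B+/ Decays_anti.dat
```

If the second line were omitted, the first table would be used for both particle and anti-particle.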
Additionally, this file may contain the keyword CREATE_BOOKLET on a separate line, which will cause HADRONS++ to write a LaTeX document containing all decay tables.
The decay table contains information about the outgoing particles for each channel, its branching ratio and possibly the name of the file that stores parameters for that specific channel. If the latter is not specified, HADRONS++ will produce it and modify the decay table file accordingly.
In addition to the branching ratio, one may specify its error and its source. Every hadron is supposed to have its own decay table in its own subdirectory. The structure of a decay table is
{kf1,kf2,kf3,...} | BR(delta BR)[Origin] | <filename>.dat |
outgoing particles | branching ratio | decay channel file |
It should be stressed here that the branching ratio which is explicitly given for any individual channel in this file is always used regardless of any matrix-element value.
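As an illustration, a decay table entry for a semileptonic B+ channel with outgoing anti-D0, positron and neutrino (kf-codes -421, -11, 12) might read as follows. The branching ratio, its error, the origin label and the file name are all placeholders, not measured values; the distributed Decaydata files are the authoritative reference for the exact format:

```
{-421,-11,12} | 0.02(0.01)[Example] | D0_e_nu.dat |
```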
A decay channel file contains various information about that specific decay channel. There are different sections, some of which are optional:
<Options>
  AlwaysIntegrate = 0
  CPAsymmetryC = 0.0
  CPAsymmetryS = 0.0
</Options>
AlwaysIntegrate = [0,1]
For each decay channel, one needs an integration result for unweighting the kinematics (see below). This result is stored in the decay channel file, such that the integration does not need to be repeated for each run. The AlwaysIntegrate option allows one to bypass the stored integration result and do the integration nonetheless (same effect as deleting the integration result).
CPAsymmetryC/CPAsymmetryS
If one wants to include time-dependent CP asymmetries through interference between mixing and decay, one can set the coefficients of the cos and sin terms, respectively. HADRONS++ will then respect these asymmetries between particle and anti-particle in the choice of decay channels.
<Phasespace>
  1.0 MyIntegrator1
  0.5 MyIntegrator2
</Phasespace>
Specifies the phase-space mappings and their weights.
<ME>
  1.0 0.0 my_matrix_element[X,X,X,X,X,...]
  1.0 0.0 my_current1[X,X,...] my_current2[X,X,X,...]
</ME>
Specifies the matrix elements or currents used for the kinematics, their respective weights, and the order in which the particles (momenta) enter them. For more details, the reader is referred to [Kra10].
<my_matrix_element[X,X,X,X,X,...]>
  parameter1 = value1
  parameter2 = value2
  ...
</my_matrix_element[X,X,X,X,X,...]>
Each matrix element or current may have an additional section where one can specify needed parameters, e.g. which form factor model to choose. Each parameter has to be specified on a new line as shown above. Available parameters are listed in [Kra10]. Parameters not specified get a default value, which might not make sense in specific decay channels. Often needed parameters may also be specified in HadronConstants.dat, but they will be overwritten by channel-specific parameters, should these exist.
<Result>
  3.554e-11 6.956e-14 1.388e-09;
</Result>
These last three lines have quite an important meaning. If they are missing, HADRONS++ integrates this channel during the initialization and adds the result lines. If this section exists, though, and AlwaysIntegrate is off (the default value, see above), then HADRONS++ reads in the maximum for the kinematics unweighting. Consequently, if some parameters are changed (including the masses of incoming and outgoing particles), the maximum might change such that a new integration is needed in order to obtain correct kinematical distributions. There are two ways to enforce the integration: either by deleting the last three lines or by setting AlwaysIntegrate to 1. When a channel is re-integrated, HADRONS++ copies the old decay channel file into .<filename>.dat.old.
HadronConstants.dat may contain some globally needed parameters (e.g. for neutral meson mixing, see [Kra10]) and also fall-back values for all matrix-element parameters which one specifies in decay channel files. Here, the Interference_X = 1 switch enables rate asymmetries due to CP violation in the interference between mixing and decay (cf. Decay channel files), and setting Mixing_X = 1 enables explicit mixing in the event record according to the time evolution of the flavour states. By default, all mixing effects are turned off.
x_K = 0.946
y_K = -0.9965
qoverp2_K = 1.0
Interference_K = 0
Mixing_K = 0
x_D = 0.0
y_D = 0.0
qoverp2_D = 1.0
Interference_D = 0
Mixing_D = 0
x_B = 0.776
y_B = 0.0
qoverp2_B = 1.0
Interference_B = 1
Mixing_B = 0
x_B(s) = 30.0
y_B(s) = 0.155
qoverp2_B(s) = 1.0
Interference_B(s) = 0
Mixing_B(s) = 0
Spin correlations: the spin correlation algorithm is implemented, also for decays of taus produced in the signal process. It can be switched on through the keyword SPIN_CORRELATIONS in the (run) section, see Run Parameters for more details.
Adding new channels: if new channels are added to HADRONS++ (choosing isotropic decay kinematics), a new decay table must be defined and the corresponding hadron must be added to HadronDecays.dat. The decay table merely needs to consist of the outgoing particles and branching ratios, i.e. the last column (the one with the decay channel file name) can safely be dropped. When Sherpa is run, it will automatically produce the decay channel files and write their names into the decay table.
Some details on tau decays: tau decays are treated within the HADRONS++ framework, even though the tau is not a hadron. As for many hadron decays, the hadronic tau decays have form factor models implemented; for details the reader is referred to [Kra10].
Higher-order QED corrections are applied both to the hard interaction and, once they are formed, to each hadron’s subsequent decay. The Photons module [Sch08] is called in both cases for this task. It employs a YFS-type resummation [Yen61] of all infrared-singular terms to all orders and is equipped with complete first-order corrections for the most relevant cases (all other cases receive approximate real-emission corrections built up by Catani-Seymour splitting kernels).
6.12.1 General Switches | ||
6.12.2 QED Corrections to the Hard Interaction | ||
6.12.3 QED Corrections to Hadron Decays |
The relevant switches to steer the higher order QED corrections reside in the ‘(fragmentation)’ section of the steering file or the fragmentation data file ‘Fragmentation.dat’, respectively.
6.12.1.1 YFS_MODE | Mode of operation. | |
6.12.1.2 YFS_USE_ME | Use ME-corrections if possible. | |
6.12.1.3 YFS_IR_CUTOFF | Infrared threshold for real photon generation. |
The keyword YFS_MODE = [0,1,2] determines the mode of operation of Photons. YFS_MODE = 0 switches Photons off; consequently, neither the hard interaction nor any hadron decay will be corrected for soft or hard photon emission. YFS_MODE = 1 sets the mode to "soft only", meaning soft emissions will be treated correctly to all orders but no hard-emission corrections will be included. With YFS_MODE = 2 these hard-emission corrections will also be included up to first order in alpha_QED. This is the default setting.
The switch YFS_USE_ME = [0,1] tells Photons how to correct hard emissions to first order in alpha_QED. If YFS_USE_ME = 0, Photons will use collinearly approximated real-emission matrix elements; virtual-emission matrix elements of order alpha_QED are ignored. If, however, YFS_USE_ME = 1, exact real and/or virtual emission matrix elements are used wherever possible. These are presently available for V->FF, V->SS, S->FF, S->SS, S->Slnu, S->Vlnu type decays, Z->FF decays and leptonic tau and W decays. For all other decay types, general collinearly approximated matrix elements are used. In both approaches all hadrons are treated as point-like objects. The default setting is YFS_USE_ME = 1. This switch is only effective if YFS_MODE = 2.
YFS_IR_CUTOFF sets the infrared cut-off dividing the real-emission phase space into two regions, one containing the infrared divergence, the other the "hard" emissions. This cut-off is currently applied in the rest frame of the multipole of the respective decay. It also serves as a minimum photon energy in this frame for explicit photon generation for the event record. All photons with energy less than this cut-off are assumed to have negligible impact on the final-state momentum distributions. The default is YFS_IR_CUTOFF = 1E-3 (GeV). Of course, this switch is only effective if Photons is switched on, i.e. YFS_MODE = [1,2].
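Putting the three switches together, the default Photons setup corresponds to the following ‘(fragmentation)’ entries, repeated here merely for orientation:

```
(fragmentation){
  YFS_MODE = 2
  YFS_USE_ME = 1
  YFS_IR_CUTOFF = 1E-3
}(fragmentation)
```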
The switch to steer QED corrections to the hard scatter resides in the ’(me)’ section of the steering file or the matrix element data file ‘ME.dat’, respectively.
6.12.2.1 ME_QED | Mode of operation. |
ME_QED = On/Off turns the higher-order QED corrections to the matrix element on or off, respectively. The default is ‘On’. Switching QED corrections to the matrix element off has no effect on QED Corrections to Hadron Decays.
The QED corrections to the matrix element will only be applied to final-state particles that are not strongly interacting. If a resonant production subprocess for an unambiguous subset of all such particles is specified via the process declaration (cf. Processes), this can be taken into account and dedicated higher-order matrix elements can be used (if YFS_MODE = 2 and YFS_USE_ME = 1).
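For example, a minimal ‘(me)’ section switching these corrections off, e.g. when QED radiation is to be handled by an external tool, would read:

```
(me){
  ME_QED = Off
}(me)
```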
If the Photons module is switched on, all hadron decays are corrected for higher order QED effects.
For a large fraction of LHC final states, the application of reconstruction algorithms leads to the identification of several hard jets. A major task is to distinguish whether such events are signals for new physics or just manifestations of SM physics. The corresponding calculations therefore need to describe as accurately as possible both the full matrix element for the underlying hard process and the subsequent evolution and conversion of the hard partons into jets of hadrons. Several scales thus govern the development of the event, which makes it difficult to unambiguously disentangle the components belonging to the hard process from those of the hard-parton evolution. Given an n-jet event of well-separated partons, its jet structure is retained when a further collinear or soft parton is emitted. An additional hard, large-angle emission, however, gives rise to an extra jet, changing the n-parton final state into an n+1-parton one. The merging scheme has to define, on an event-by-event basis, which possibility is followed. Its primary goals are to avoid double counting, by preventing events from appearing twice (once for each possibility), as well as dead regions, by generating each configuration only once, via the appropriate path.
Various such merging schemes have been proposed. The currently most advanced treatment at tree level is detailed in [Hoe09]. It relies on a strict separation of the phase space for additional QCD radiation into a matrix-element and a parton-shower domain. Truncated showers are then needed to account for potential radiation in the parton-shower domain if radiation in the matrix-element domain has already occurred. This technique has been applied to the simulation of final states containing hard photons [Hoe09a] and has been extended to multi-scale processes where the leading order is dominated by very low scales [Car09]. A merging approach similar to [Hoe09] was presented in [Ham09] for the special case of angular-ordered parton showers. Several older approaches exist. The CKKW scheme, a procedure similar to the truncated-shower merging, was introduced in [Cat01]. Its extension to hadronic processes has been discussed in [Kra02], and the approach has been validated for several cases [Sch05], [Kra05a], [Kra04], [Kra05], [Gle05]. A reformulation of CKKW into a merging procedure in conjunction with a dipole shower (CKKW-L) has been presented in [Lon01]. The MLM scheme has been developed using a geometric analysis of the unconstrained radiation pattern in terms of cone jets to generate the inclusive samples [Man01], [Man06]. In a number of works, all these different algorithms have been implemented, in different variations and on different levels of sophistication, in conjunction with various matrix-element generators or in full-fledged event generators. Their respective results have been compared e.g. in [Hoc06], [Alw07]. Common to all schemes is that sequences of tree-level multi-leg matrix elements with increasing final-state multiplicity are merged with parton showers to yield a fully inclusive sample with no double counting. Their connection with truncated-shower merging is outlined in [Hoe09].
In Sherpa the merging of matrix elements and parton showers is accomplished as follows, cf. [Hoe09] :
Q_ij^2 = 2 p_i.p_j min_k { 2/(Cijk + Cjik) } |
where the minimum is taken over the colour-connected partons k (k different from i and j), and where, for final state partons i and j,
Cijk = p_i.p_k/((p_i+p_k).p_j) - m_i^2/(2 p_i.p_j) if j = g,   Cijk = 1 otherwise. |
The generation of inclusive event samples, i.e. the combination of matrix elements for different parton multiplicities with parton showers and hadronization, is fully automated within Sherpa. To obtain consistent results, certain parameters related to the matrix-element calculation and the parton showers have to be set accordingly. In the following, the basic parameter settings for generating “merged” samples are summarised, and potential pitfalls are pointed out.
The starting point is the definition of a basic core (lowest-order) process with respect to which the impact of additional QCD radiation shall be studied. As an illustrative example, consider Drell-Yan lepton-pair production in proton-proton collisions. The lowest-order process reads pp -> l lbar, mediated through Z/photon exchange. Additional QCD radiation will then manifest itself through additional QCD partons in the final state, i.e. pp -> l lbar + n jets with n=1,...,N. To initialise the calculation of all the different matrix elements (for pp -> l lbar + 0,1,...,N QCD partons) in a single generator run, besides selecting the basic core process, the maximal number N of additional final-state QCD partons has to be specified in the (processes) section of the steering file. For the above example, assuming N=3, this reads:
Process 93 93 -> 90 90 93{3}
Order_EW 2
N is given in the curly brackets belonging to the 93, the code for QCD partons. Note that it is mandatory to fix the order of electroweak couplings to the corresponding order of the basic core process, here pp -> l lbar or 93 93 -> 90 90, as only QCD corrections to this process can be considered, and further electroweak corrections are not treated by Sherpa’s ME-PS merging implementation.
The most important parameter to be specified when generating merged samples with Sherpa is the actual value of the jet resolution that separates the subsamples of different parton multiplicities, the merging scale.
The jet criterion is explained in The algorithm implemented in Sherpa. The separation cut, Q_cut, must be specified. It is set using the CKKW tag, usually in the form (Q_cut/E_cms)^2. For example, a valid setting reads
CKKW sqr(20/E_CMS)
and must be included in the process specification, before the End process line. More sophisticated and more flexible settings of Q_cut are possible, as exemplified in HERA_DIS.
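Combining the fragments shown in this section, a complete (processes) entry for the Drell-Yan example with up to three extra QCD partons and a 20 GeV merging cut could be sketched as follows:

```
(processes){
  Process 93 93 -> 90 90 93{3}
  Order_EW 2
  CKKW sqr(20/E_CMS)
  End process
}(processes)
```

The CKKW line sets (Q_cut/E_cms)^2 for a merging scale of Q_cut = 20 GeV at the given centre-of-mass energy.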
As mentioned before, all extra QCD parton radiation is regularised by the jet criterion. However, divergences of the basic core process, such as vanishing invariant masses of lepton pairs, need to be regularised by imposing additional cuts, see Selectors.
It should always be ensured that the parton showers are switched on.
Further remarks
Although the merging of different multiplicity matrix-element samples with parton showers attached is fully automated in Sherpa, some care has to be taken to ensure physically meaningful results. Some of the most prominent mistakes are listed here:
NJetFinder parameter, which applies its jet criterion to only the specified number of jets. See the Examples section for an example of its usage.
(selector) part of the steering file, or the merging scale set by CKKW. Furthermore, changing the centre-of-mass energy, the choice of PDF or the running of alpha_s requires the cross sections to be integrated, and possibly stored, afresh.
Finally, a few more useful comments related to Sherpa’s merging are stated below:
8.1 Bash completion | How to add bash completion for Sherpa parameters | |
8.2 Rivet analyses | How to analyse Sherpa events using Rivet | |
8.3 HZTool analyses | How to analyse Sherpa events using HZTool | |
8.4 MCFM interface | How to use the MCFM interface in NLO calculations | |
8.5 Debugging a crashing/stalled event | How to recover the random seed for an event that is hanging or crashing | |
8.6 Versioned installation | How to install multiple Sherpa versions in the same prefix. | |
8.7 NLO calculations | How to efficiently perform NLO calculations |
Sherpa will install a file named ‘$prefix/share/SHERPA-MC/sherpa-completion’ which contains tab completion functionality for the bash shell. You simply have to source it in your active shell session by running
. $prefix/share/SHERPA-MC/sherpa-completion
and you will be able to tab-complete any parameters on a Sherpa command line.
To permanently enable this feature in your bash shell, you’ll have to add the source command above to your ~/.bashrc.
Sherpa is equipped with an interface to the analysis tool Rivet. To enable it, Rivet and HepMC have to be installed (e.g. using the Rivet bootstrap script) and your Sherpa compilation has to be configured with the following options:
./configure --enable-hepmc2=/path/to/hepmc2 --enable-rivet=/path/to/rivet
(Note: Both paths are equal if you used the Rivet bootstrap script.)
To use the interface, specify the switch
Sherpa ANALYSIS=Rivet
and create an analysis section in Run.dat
that reads as follows:
(analysis){
  BEGIN_RIVET {
    -a D0_2008_S7662670 CDF_2007_S7057202 D0_2004_S5992206 CDF_2008_S7828950
  } END_RIVET
}(analysis)
The line starting with -a
specifies which Rivet analyses to run and the
histogram output file can be changed with the normal ANALYSIS_OUTPUT
switch.
You can also use rivet-mkhtml
(distributed with Rivet) to create
plot webpages from Rivet’s output files:
source /path/to/rivetenv.sh   # see below
rivet-mkhtml -o output/ file1.aida [file2.aida, ...]
firefox output/index.html &
If your Rivet installation is not in a standard location, the bootstrap script
should have created a rivetenv.sh
which you have to source before running
the rivet-mkhtml
script.
Sherpa is equipped with an interface to the analysis tool HZTool. To enable it, HZTool and CERNLIB have to be installed and your Sherpa compilation has to be configured with the following options:
./configure --enable-hztool=/path/to/hztool --enable-cernlib=/path/to/cernlib --enable-hepevtsize=4000
To use the interface, specify the switch
Sherpa ANALYSIS=HZTool
and create an analysis section in Run.dat
that reads as follows:
(analysis){
  BEGIN_HZTOOL {
    HISTO_NAME output.hbook;
    HZ_ENABLE hz00145 hz01073 hz02079 hz03160;
  } END_HZTOOL;
}(analysis)
The line starting with HZ_ENABLE
specifies which HZTool analyses to run.
The histogram output directory can be changed using the ANALYSIS_OUTPUT
switch, while HISTO_NAME
specifies the hbook output file.
Sherpa is equipped with an interface to the NLO library of MCFM for dedicated processes. To enable it, MCFM has to be installed and compiled into a single library, libMCFM.a, and your Sherpa compilation has to be configured with the following options:
./configure --enable-mcfm=/path/to/mcfm
To use the interface, specify
Loop_Generator MCFM;
in the process section of the run card and add it to the list of generators in ME_SIGNAL_GENERATOR. For an example, see LHC_HWW_POWHEG. Of course, MCFM’s process.DAT file has to be copied to the current run directory.
If an event crashes, Sherpa tries to obtain all the information needed to reproduce that event and writes it out into a directory named
Status__<date>_<time>
If you are a Sherpa user and want to report this crash to the Sherpa team, please attach a tarball of this directory to your email. This allows us to reproduce your crashed event and debug it.
To debug it yourself, you can follow these steps (Only do this if you are a Sherpa developer, or want to debug a problem in an addon library created by yourself):
cp Status__<date>_<time>/random.dat ./
Sherpa [...] STATUS_PATH=./
Sherpa will then read in your random seed from “./random.dat” and generate events from it.
Sherpa [...] OUTPUT=15 STATUS_PATH=./
If event generation seems to stall, you first have to find out the number of the current event. For that you would terminate the stalled Sherpa process (using Ctrl-c) and check in its final output for the number of generated events. Now you can request Sherpa to write out the random seed for the event before the stalled one:
Sherpa [...] EVENTS=[#events - 1] SAVE_STATUS=Status/
(Replace [#events - 1] using the number you figured out earlier)
The created status directory can either be sent to the Sherpa developers, or be used in the same steps as above to reproduce that event and debug it.
If you want to install different Sherpa versions into the same prefix (e.g. /usr/local), you have to enable versioning of the installed directories by using the configure option ‘--enable-versioning’. Optionally you can even pass an argument to this parameter of what you want the version tag to look like.
8.7.1 Choosing DIPOLE_ALPHA | ||
8.7.2 Integrating complicated Loop-ME | ||
8.7.3 Structure of HepMC Output | ||
8.7.4 Structure of ROOT NTuple Output |
A variation of the parameter DIPOLE_ALPHA (see Dipole subtraction) changes the contribution from the real (subtracted) piece (RS) and the integrated subtraction terms (I), keeping their sum constant. Varying this parameter provides a nice check of the consistency of the subtraction procedure and allows one to optimise the integration performance of the real correction. This piece has the most complicated momentum phase space and is often the most time-consuming part of the NLO calculation. The optimal choice depends on the specific setup and is best determined by trial.
Hints for finding a good value:
The larger DIPOLE_ALPHA is, the fewer dipole terms have to be calculated, and thus the less time the evaluation per phase-space point takes.
A too small value of DIPOLE_ALPHA leads to large cancellations between the RS and the I parts, and thus to large statistical errors.
For simple processes, the best choice is typically DIPOLE_ALPHA=1. The more complicated a process is, the smaller DIPOLE_ALPHA should be (e.g. with 5 partons the best choice is typically around 0.01).
A good choice is usually such that the RS piece is significantly positive but not much larger than the Born cross section.
For complicated processes the evaluation of one-loop matrix elements can be very time consuming, and the generation time of a fully optimised integration grid can become prohibitively long. Rather than using a poorly optimised grid, it is in this case more advisable to use a grid optimised with the Born matrix elements, since the distribution in phase space is rather similar.
This can be done as follows:
- First optimize the integration grid in an otherwise identical Born-level setup, using the option RESULT_OMIT_NLO_SUFFIX=1.
- Then run the loop piece with the same RESULT_DIRECTORY and also with RESULT_OMIT_NLO_SUFFIX=1, such that the Born-optimized grid is picked up.
Note: this will not work for the RS piece!
The generated events can be written out in the HepMC format to be passed to an independent analysis. For this purpose a shortened event structure is used, containing only a single vertex. Correlated real and subtraction events are labelled with the same event number, such that their possible cancellations can be taken into account properly.
To use this output option, Sherpa has to be compiled with HepMC support, cf. Installation. The switch HEPMC2_SHORT_OUTPUT=<filename> has to be used, cf. Event output formats.
Using this HepMC output format the internal Rivet interface (Rivet analyses) can be used to pass the events through Rivet. It has to be stressed, however, that Rivet currently cannot take the correlations between real and subtraction events into account properly. The Monte-Carlo error is thus overestimated. Nonetheless, the mean is unaffected.
As above, the Rivet interface has to be instructed to use the shortened HepMC event structure:
(analysis){ BEGIN_RIVET { USE_HEPMC_SHORT 1 -a ... } END_RIVET }(analysis)
The generated events can be stored in a ROOT NTuple file, see Event output formats. The internal ROOT tree has the following branches:
- id: Event ID to identify correlated real sub-events.
- nparticle: Number of outgoing partons.
- px, py, pz, E: Momentum components of the partons.
- kf: Parton PDG code.
- weight: Event weight, if sub-event is treated independently.
- weight2: Event weight, if correlated sub-events are treated as single event.
- me_wgt: ME weight (w/o PDF), corresponds to 'weight'.
- me_wgt2: ME weight (w/o PDF), corresponds to 'weight2'.
- id1: PDG code of incoming parton 1.
- id2: PDG code of incoming parton 2.
- fac_scale: Factorisation scale.
- ren_scale: Renormalisation scale.
- x1: Bjorken-x of incoming parton 1.
- x2: Bjorken-x of incoming parton 2.
- x1p: x' for I-piece of incoming parton 1.
- x2p: x' for I-piece of incoming parton 2.
- nuwgt: Number of additional ME weights for loops and integrated subtraction terms.
- usr_wgts: Additional ME weights for loops and integrated subtraction terms.
Real correction events and their counter-events from subtraction terms are highly correlated and exhibit large cancellations. Although a treatment of the sub-events as independent events leads to the correct cross section, the statistical error would be greatly overestimated. In order to obtain a realistic statistical error, sub-events belonging to the same event must be combined before being added to the total cross section or to a histogram bin of a differential cross section. Since in general each sub-event comes with its own set of four-momenta, the following treatment becomes necessary:
1. Add up the weights weight2 of all sub-events that go into the same histogram bin. These sums x_id are the quantities to enter the actual histogram.
2. To compute the statistical error, keep track for each bin of the sum over all x_id and the sum over all x_id^2. The cross section in the bin is given by <x> = 1/N \sum x_id, where N is the number of events (not sub-events). The 1-\sigma statistical error for the bin is \sqrt{ (<x^2>-<x>^2)/(N-1) }.
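The combination procedure above can be sketched in a few lines of standalone code. The SubEvent struct, bin indices and helper names below are invented for this illustration and are not part of the NTuple format; only the combination logic follows the two steps just described.

```cpp
#include <cmath>
#include <map>
#include <utility>
#include <vector>

// One sub-event entry: event id, histogram bin it falls into, and its
// weight2 (illustrative struct, not the actual NTuple record layout).
struct SubEvent { int id; int bin; double weight2; };

// Per-bin accumulators for \sum x_id and \sum x_id^2.
struct BinStats { double sum_x = 0.0, sum_x2 = 0.0; };

// Step 1: sum weight2 of correlated sub-events per (event id, bin),
// then enter each resulting x_id once into the statistics of its bin.
std::map<int, BinStats> combine(const std::vector<SubEvent> &subs)
{
  std::map<std::pair<int,int>, double> x_id; // (id, bin) -> summed weight2
  for (const auto &s : subs) x_id[{s.id, s.bin}] += s.weight2;
  std::map<int, BinStats> bins;
  for (const auto &kv : x_id) {
    bins[kv.first.second].sum_x  += kv.second;
    bins[kv.first.second].sum_x2 += kv.second * kv.second;
  }
  return bins;
}

// Step 2: cross section in a bin, <x> = 1/N \sum x_id (N = events, not sub-events).
double xsec(const BinStats &b, long N) { return b.sum_x / N; }

// 1-sigma statistical error: sqrt((<x^2> - <x>^2) / (N-1)).
double error(const BinStats &b, long N)
{
  const double mx = b.sum_x / N, mx2 = b.sum_x2 / N;
  return std::sqrt((mx2 - mx * mx) / (N - 1));
}
```

Note how the two sub-events of event 1 would partially cancel inside x_id before entering the bin statistics, which is exactly what keeps the error estimate realistic.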
Note: The main difference between weight and weight2 is that they refer to a different counting of events. While weight corresponds to each event entry (sub-event) counted separately, weight2 counts events as defined in step 1 of the above procedure. For NLO pieces other than the real correction, weight and weight2 are identical.
Born and real pieces:
Notation: f_a(x_a) = PDF 1 applied on parton a, F_b(x_b) = PDF 2 applied on parton b.
The total cross section weight is given by
weight = me_wgt f_a(x_a) F_b(x_b).
Loop piece and integrated subtraction terms:
The weights here have an explicit dependence on the renormalisation and factorisation scales.

To take care of the renormalisation scale dependence (other than via alpha_S) the weight w_0 is defined as

w_0 = me_wgt + usr_wgts[0] log((\mu_R^new)^2/(\mu_R^old)^2)
      + usr_wgts[1] 1/2 [log((\mu_R^new)^2/(\mu_R^old)^2)]^2.

To address the factorisation scale dependence the weights w_1,...,w_8 are given by

w_i = usr_wgts[i+1] + usr_wgts[i+9] log((\mu_F^new)^2/(\mu_F^old)^2).

The full cross section weight can be calculated as

weight = w_0 f_a(x_a) F_b(x_b)
       + (f_a^1 w_1 + f_a^2 w_2 + f_a^3 w_3 + f_a^4 w_4) F_b(x_b)
       + (F_b^1 w_5 + F_b^2 w_6 + F_b^3 w_7 + F_b^4 w_8) f_a(x_a),

where

f_a^1 = f_a(x_a) (a=quark), \sum_q f_q(x_a) (a=gluon),
f_a^2 = f_a(x_a/x'_a)/x'_a (a=quark), \sum_q f_q(x_a/x'_a)/x'_a (a=gluon),
f_a^3 = f_g(x_a),
f_a^4 = f_g(x_a/x'_a)/x'_a,

and the F_b^i are defined analogously in terms of PDF 2.
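As an illustration, the renormalisation-scale part of this reweighting, the weight w_0 above, can be coded up directly. The function below is a sketch of that formula only; me_wgt and usr_wgts are the NTuple branches described earlier, and the function name and argument order are chosen for this example.

```cpp
#include <cmath>

// w_0 = me_wgt + usr_wgts[0] * L + usr_wgts[1] * L^2 / 2,
// with L = log(muR_new^2 / muR_old^2): the renormalisation-scale
// dependence of the loop piece (other than via alpha_S).
double w0(double me_wgt, const double *usr_wgts,
          double muR_new, double muR_old)
{
  const double L = std::log((muR_new * muR_new) / (muR_old * muR_old));
  return me_wgt + usr_wgts[0] * L + 0.5 * usr_wgts[1] * L * L;
}
```

For an unchanged scale (muR_new = muR_old) the logarithm vanishes and w_0 reduces to me_wgt, as it must.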
Customizing Sherpa according to your needs.
Sherpa can be easily extended with user-defined tools. To this end, a corresponding class must be written, equipped with a getter function, and compiled into an external library which can be linked to Sherpa at runtime. Several specific examples are given in the following sections.
9.1 External RNG | How to add an external random number generator. | |
9.2 External PDF | How to add an external PDF. | |
9.3 Exotic physics | How to introduce your own models to Sherpa. Example: Z-prime. | |
9.4 External one-loop ME | How to interface external one-loop codes. | |
9.5 My own interface | How to make Sherpa send output to your framework. |
To use an external Random Number Generator (RNG) in Sherpa, you need to provide an interface to your RNG in an external dynamic library. This library is then loaded at runtime and Sherpa replaces the internal RNG with the one provided.
In this case Sherpa will not attempt to set, save, read or restore the state of the RNG.
The corresponding code for the RNG interface is
#include "ATOOLS/Math/Random.H"

using namespace ATOOLS;

class Example_RNG: public External_RNG {
public:
  double Get() {
    // your code goes here ...
  }
};// end of class Example_RNG

// this makes Example_RNG loadable in Sherpa
DECLARE_GETTER(Example_RNG_Getter,"Example_RNG",External_RNG,RNG_Key);

External_RNG *Example_RNG_Getter::operator()(const RNG_Key &arg) const
{ return new Example_RNG(); }

// this eventually prints a help message
void Example_RNG_Getter::PrintInfo(std::ostream &str,const size_t width) const
{ str<<"example RNG interface"; }
If the code is compiled into a library called libExampleRNG.so, then this library is loaded dynamically in Sherpa using the command ‘SHERPA_LDADD=ExampleRNG’ either on the command line or in ‘Run.dat’. If the library is bound at compile time, like e.g. in cmt, you may skip this step.
Finally Sherpa is instructed to retrieve the external RNG by specifying ‘EXTERNAL_RNG=Example_RNG’ on the command line or in ‘Run.dat’.
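A concrete Get() body could, for instance, wrap a C++11 standard-library engine. The sketch below is standalone: it omits the External_RNG base class and getter shown above, and the class name and seed are chosen for this example only.

```cpp
#include <random>

// Standalone sketch of a Get() implementation for the RNG interface:
// return uniform random numbers in [0,1) from a std::mt19937 engine.
class Example_RNG_Impl {
  std::mt19937 m_engine;                          // seeded for reproducibility
  std::uniform_real_distribution<double> m_dist;  // uniform on [0,1)
public:
  explicit Example_RNG_Impl(unsigned seed = 12345)
    : m_engine(seed), m_dist(0.0, 1.0) {}
  double Get() { return m_dist(m_engine); }
};
```

Since Sherpa will not save or restore the external generator's state, seeding (and any state handling) is entirely the responsibility of this class.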
To use an external PDF (not included in LHAPDF) in Sherpa, you need to provide an interface to your PDF in an external dynamic library. This library is then loaded at runtime, and the PDFs it provides become accessible within Sherpa.
The simplest C++ code to implement your interface looks as follows
#include "PDF/Main/PDF_Base.H"

using namespace PDF;

class Example_PDF: public PDF_Base {
public:
  void Calculate(double x,double Q2) {
    // calculate values x f_a(x,Q2) for all a
  }
  double GetXPDF(const ATOOLS::Flavour a) {
    // return x f_a(x,Q2)
  }
  virtual PDF_Base *GetCopy() { return new Example_PDF(); }
};// end of class Example_PDF

// this makes Example_PDF loadable in Sherpa
DECLARE_PDF_GETTER(Example_PDF_Getter);

PDF_Base *Example_PDF_Getter::operator()(const Parameter_Type &args) const
{ return new Example_PDF(); }

// this eventually prints a help message
void Example_PDF_Getter::PrintInfo(std::ostream &str,const size_t width) const
{ str<<"example PDF"; }

// this lets Sherpa initialize and unload the library
Example_PDF_Getter *p_get=NULL;

extern "C" void InitPDFLib()
{ p_get = new Example_PDF_Getter("ExamplePDF"); }

extern "C" void ExitPDFLib()
{ delete p_get; }
If the code is compiled into a library called libExamplePDFSherpa.so, then this library is loaded dynamically in Sherpa using ‘PDF_LIBRARY=ExamplePDFSherpa’ either on the command line, in ‘Run.dat’ or in ‘ISR.dat’. If the library is bound at compile time, like e.g. in cmt, you may skip this step. It is now possible to list all accessible PDF sets by specifying ‘SHOW_PDF_SETS=1’ on the command line.
Finally Sherpa is instructed to retrieve the external PDF by specifying ‘PDF_SET=ExamplePDF’ on the command line, in ‘Run.dat’ or in ‘ISR.dat’.
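To illustrate the division of labour between Calculate and GetXPDF, here is a standalone toy version of the two methods. The parton distribution used is a made-up toy form with no physical meaning, and flavours are labelled by plain PDG codes instead of ATOOLS::Flavour.

```cpp
#include <cmath>
#include <initializer_list>
#include <map>

// Toy illustration of the PDF interface calling pattern: Calculate
// caches x f_a(x,Q2) for all flavours at the given point, GetXPDF
// just looks the cached values up. The toy shape x^0.5 (1-x)^3 is
// invented for this example and is not a real PDF set.
class Toy_PDF {
  std::map<int, double> m_xfx; // PDG code -> cached x f_a(x,Q2)
public:
  void Calculate(double x, double Q2)
  {
    (void)Q2; // the toy form has no Q2 evolution
    const double shape = std::sqrt(x) * std::pow(1.0 - x, 3);
    for (int flav : {21, 1, -1, 2, -2})   // gluon and light (anti)quarks
      m_xfx[flav] = (flav == 21 ? 2.0 : 1.0) * shape;
  }
  double GetXPDF(int flav) const
  {
    auto it = m_xfx.find(flav);
    return it == m_xfx.end() ? 0.0 : it->second;
  }
};
```

The split mirrors how a real interface is typically called: one Calculate per phase-space point, followed by several cheap GetXPDF lookups.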
It is possible to add your own models to Sherpa in a straightforward way. To illustrate, a simple example has been included in the directory ./AddOns/ExampleModel
, showing how to add a Z-prime boson to the Standard Model.
The important features of this example include:
- The SM_Zprime.C file. This file contains the initialisation of the Z-prime boson. The properties of the Z-prime are set here, such as mass, width, electromagnetic charge, spin etc.
- The Interaction_Model_SM_Zprime.C file. This file contains the definition of the Z-prime boson's interactions. The right- and left-handed couplings to each of the fermions are set here.
- The Makefile. This shows how to compile the sources above into a shared library.
- SHERPA_LDADD = SMZprime in the (run) section of the run-card. This line tells Sherpa to load the extra libraries created from the *.C files above.
- MODEL = SM+Zprime in the (model) section of the run-card. This line tells Sherpa which model to use for the run.
- MASS[32] = 1000. and WIDTH[32] = 50. in the (model) section of the run-card. These lines show how you can overrule the choices made for the properties of the new particle in the SM_Zprime.C file. For more information on changing parameters in Sherpa, see Input structure and Parameters.
To use this model, create the libraries for Sherpa to use by running
make
in this directory. Then run Sherpa as normal:
../../bin/Sherpa
To implement your own model, copy these example files anywhere and modify them according to your needs.
Note: You don’t have to modify or recompile any part of Sherpa to use your
model. As long as the SHERPA_LDADD
parameter is specified as above,
Sherpa will pick up your model automatically.
Furthermore note: New physics models with an existing implementation in FeynRules, cf. [Chr08] and [Chr09] , can directly be invoked using Sherpa’s interface to FeynRules, see FeynRules model.
Sherpa includes only a very limited selection of one-loop matrix elements. To make full use of the implemented automated dipole subtraction it is possible to link external one-loop codes to Sherpa in order to perform full calculations at QCD next-to-leading order.
In general Sherpa can take care of any piece of the calculation except the one-loop matrix elements, i.e. the Born ME, the real correction, the real and integrated subtraction terms, as well as the phase-space integration and PDF weights for hadron collisions. Sherpa will provide sets of four-momenta and request, for a specific parton-level process, the helicity- and colour-summed one-loop matrix element (more specifically: the coefficients of the Laurent series in the dimensional regularisation parameter epsilon up to the order epsilon^0).
The directory ./AddOns/LH_OLE includes an example interface which follows the Binoth Les Houches interface proposal [Bin10a] of the 2009 Les Houches workshop. A sample setup for W+1jet production at LHC 14TeV is included.

The interface:
- Sherpa writes out an order file OLE_order.lh. The external one-loop code (OLE) should confirm these settings/requests by writing out a file OLE_contract.lh. For the syntax and details see the above proposal.
- For Sherpa the output/input of the order/contract file is handled in LH_OLE_Communicator.[CH].
- The actual interface is contained in LH_OLE_Interface.C. The parameters to be exchanged with the OLE are defined in the latter file via lhfile.AddParameter(...); and might require an update for a specific OLE or process.
- The two functions

void OLP_Start(const char * filename);
void OLP_EvalSubProcess(int,double*,double,double,double*);

which are defined and called in LH_OLE_Interface.C, must be specified. For the keywords and possible data fields passed with these functions see [Bin10a]. The function OLP_Start(...) is called once when Sherpa is starting. The function OLP_EvalSubProcess(...) will be called many times for different subprocesses and momentum configurations.
- The Makefile shows how to compile the sources above into a shared library.
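As a rough sketch of what the OLE side of this contract might look like, the stub below implements the two entry points with the signatures quoted above. The layout of the result array (1/eps^2 and 1/eps pole coefficients, finite part, Born value) is an assumption to be checked against [Bin10a], and the numbers filled in are placeholders, not a real one-loop computation.

```cpp
#include <cstdio>

// Minimal OLE-side stub of the two Les Houches interface entry points.
// All numerical content is a placeholder for illustration only.
extern "C" void OLP_Start(const char *filename)
{
  // here: read the order file, check the requested settings and
  // processes, and write out the corresponding contract file
  std::printf("OLE initialised from %s\n", filename);
}

extern "C" void OLP_EvalSubProcess(int label, double *momenta,
                                   double mu, double alpha_s,
                                   double *result)
{
  (void)label; (void)momenta; (void)mu; (void)alpha_s;
  result[0] = 0.0; // coefficient of 1/eps^2 (placeholder)
  result[1] = 0.0; // coefficient of 1/eps   (placeholder)
  result[2] = 1.0; // finite part            (placeholder)
  result[3] = 1.0; // Born value             (placeholder)
}
```

A real OLE would dispatch on the subprocess label assigned in the contract file and evaluate the loop amplitude at the given momenta and scales.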
The setup:
- The example is split into run cards for the Born piece Run_B.dat, the virtual piece Run_I.dat and the real correction piece Run_R.dat. While for a full NLO calculation all three must be employed (and their results combined), only the virtual piece requires the interface.
- SHERPA_LDADD = LHOLE in the (run) section of the run-card tells Sherpa to load the extra libraries.
- Loop_Generator LHOLE tells the code to use the interface for computing one-loop matrix elements.
- --enable-analysis must be included on the command line when Sherpa is configured, see ANALYSIS.
It is possible to pass Sherpa output to an external Fortran or C++ framework on the fly.
To illustrate this option, a simple, yet functional example is included in the directory
./AddOns/HEPEVTInterface
, showing how to fill the HEPEVT common from Sherpa output.
It also exemplifies how to retrieve the weight of weighted events and how to access information
about the total cross section of the event sample at the end of the run.
Note that only the event converter is included in the sources; you will still need to implement the calling function and an initialize and finalize method, see below. However, these are rather simple.
The important features of this example include:
- The calling code (cf. HEPEVT_Interface.C):

#include "SHERPA/Main/Sherpa.H"

class My_Sherpa {
private:
  SHERPA::Sherpa m_sherpa;
public:
  void init(int argc,char *argv[]) {
    // initialize the generator
    m_sherpa.InitializeTheRun(argc,argv);
    // set it up for event generation
    m_sherpa.InitializeTheEventHandler();
  }
  bool one_event() {
    // generate event and return status
    if (!m_sherpa.GenerateOneEvent()) return false;
    // now the HEPEVT common is filled and you can use it
    return true;
  }
  void finish() {
    // clean up and store total cross section
    m_sherpa.SummarizeRun();
  }
};// end of class My_Sherpa
- HEPEVT_Interface.C. This file defines a pseudo-analysis, which implements the conversion to HEPEVT.
- The Makefile. This shows how to compile the source into a shared library called libSherpaHEPEVT.so. The library can either be copied into the directory <prefix>/lib/SHERPA-MC, or it can be placed in the run path.
- SHERPA_LDADD = SherpaHEPEVT in the (run) section of the run-card. This line tells Sherpa to load the extra library created from the *.C file above.
- ANALYSIS = HEPEVT in the (run) section of the run-card. This line tells Sherpa to run the pseudo-analysis implementing your HEPEVT interface.
To use this interface, create the additional library for Sherpa by running
make SHERPA_PREFIX=/path/to/sherpa
in the directory AddOns/HEPEVTInterface. After copying the library, run Sherpa from your interface.
Note: You don’t have to modify or recompile any part of Sherpa to use this
interface. As long as the SHERPA_LDADD
parameter is specified as above,
Sherpa will pick up the HEPEVT converter automatically.
Some example set-ups are included in Sherpa, in the
<prefix>/share/SHERPA-MC/Examples/
directory. These may be useful to
new users to practice with, or as templates for creating your
own Sherpa run-cards. In this section, we will look at some
of the main features of these examples.
This example is for an LHC set-up, with a proton–proton collision at centre of mass energy 14TeV. The final state is an electron–positron pair and up to 4 partons from the matrix element.
Things to notice:
- The number of events to be generated is set in the (run) section. If this parameter is not set, 100 events will be generated.
- The incoming beams are defined in the (beam) section, using PDG codes, see PDG codes. The beam energy must also be specified in GeV.
- The process is specified in the (processes) section, using PDG codes again. ‘93’ is a particle container for light quarks and gluons, see Particle containers. The ‘4’ in curly brackets after the final state ‘93’ means that the matrix element will be generated with up to 4 extra partons in the final state, see Curly brackets.
This set-up is very similar to the one above, for LHC_ZJets, but with proton–anti-proton collisions at Tevatron energies.
Things to notice:
The Tevatron W+jets set-up is very similar to the Z+jets example. The only differences are in the final state leptons, and the invariant mass cuts on the lepton pairs.
Things to notice:
This example generates QCD events at the Tevatron, with 2, 3 or 4 final state partons in the matrix element.
Things to notice:
- The parameter Order_EW is set to ‘0’. This ensures that all final state jets are produced via the strong interaction.
- The NJetFinder selector is used to set a resolution criterion for the two jets of the core process. This is necessary because the ‘CKKW’ tag does not apply any cuts to the core process, but only to the extra-jet matrix elements, see ME-PS merging. This cut is applied only to the 2->2 process using the {2} specification, since the higher-order matrix elements should only be cut by the ME+PS separation criterion.
10.5.1 Single photon production | ||
10.5.2 Diphoton production |
We have studied prompt photon production in [Hoe09a] and this section serves as a practical guide to the features necessary for these studies. Its main emphasis lies on how to generate samples which include both the direct and fragmentation component and how to apply ME+PS merging.
Traditionally, the direct and fragmentation component are well separated in a parton-shower Monte-Carlo, e.g. for single photon production: The direct component is produced by using the 2->2 matrix element with a photon and a parton in the final state, and the fragmentation component can be generated by using the 2->2 matrix element with two partons in the final state. On top of the LO matrix elements the parton shower would then produce interleaved QCD+QED emissions, where the QED shower emissions from the dijet sample will create the fragmentation component.
Note that the generation of the fragmentation component in this way is very inefficient, because the shower will only very rarely produce hard isolated photons. To be able to compare to this method at all in [Hoe09a], we have implemented an enhancement of the QED splitting functions in the parton shower, which is of course corrected for by giving the events appropriate weights, cf. the appendix of that paper.
But the main feature of [Hoe09a] is the consistent treatment of photons in the context of ME+PS merging. This effectively means that one can split the fragmentation component into two parts by phase space slicing. Hard isolated photons are produced from the exact higher-order tree-level matrix element (e.g. pp -> photon + 2, 3, ... partons) while collinear photons are produced by the parton-shower, taking into account the correct resummation of the quark-photon singularities.
This is not only advisable to become less dependent on uncertain parton-shower approximations in the non-collinear region, but will also help to generate the fragmentation component more efficiently: If the phase space slicing criterion in the ME+PS merging is sufficiently similar (or loose) compared to the photon isolation criterion used in the analysis, one can expect that the second, painful, part of the fragmentation component, i.e. the one from the parton shower, becomes irrelevant for the analysis.
So in the following we describe how to generate single photons and diphotons making use of the default photon slicing criterion which looks like:
$\min(p_\perp^2(\gamma), p_\perp^2(\mathrm{parton}))\,(\Delta R(\gamma, \mathrm{parton})/D)^2 > Q_{\mathrm{cut}}^2$
where D=0.3 by default and Q_cut is the merging parameter specified in the CKKW line of the processes section. It might be sufficient to adapt these two parameters, e.g. in the following run cards set

CKKW sqr(10.0/E_CMS)|0.2

for Q_cut=10.0 and D=0.2.
If you notice that your photon isolation criterion is not sufficiently similar
(the shower sample does contribute to your analysis or your ME sample still
produces many photons which aren’t isolated according to your criterion) one
could still manually adapt the slicing criterion in Sherpa. Please contact us
for assistance in that case.
In the following sections we show and discuss run cards for single- and diphoton production. They have been separated into a matrix-element part (i.e. the direct and fragmentation-from-ME component) and a parton-shower part (i.e. fragmentation from the shower) as described above. In all analyses which we have compared to so far, we found the contribution of the second part negligible. Please note that these run cards will generate weighted events with ME enhancements in phase space regions which would otherwise not be filled very efficiently.
(run){
  EVENTS = 1000000
}(run)
(processes){
  Process 93 93 -> 22 93 93{2}
  Order_EW 1
  CKKW sqr(10.0/E_CMS)
  Selector_File *|(coresel){|}(coresel) {2};
  Enhance_Function PPerp2(p[2]) {2}
  Enhance_Function max(PPerp2(p[2]),PPerp2(p[3]),PPerp2(p[4])) {3}
  Enhance_Function max(PPerp2(p[2]),PPerp2(p[3]),PPerp2(p[4]),PPerp2(p[5])) {4}
  Enhance_Function max(PPerp2(p[2]),PPerp2(p[3]),PPerp2(p[4]),PPerp2(p[5]),PPerp2(p[6])) {5}
  Integration_Error 0.05 {4}
  End process
}(processes)
(coresel){
  NJetFinder 2 10.0 0.0 1.0
}(coresel)
(shower){
  CSS_EW_MODE = 1
}(shower)
(me){
  EVENT_GENERATION_MODE = Weighted
  ME_QED = Off
}(me)
(fragmentation){
  FRAGMENTATION = Off
  DECAYMODEL = Off
}(fragmentation)
(mi){
  MI_HANDLER = None
}(mi)
(beam){
  BEAM_1 = 2212
  BEAM_ENERGY_1 = 980.0
  BEAM_2 = -2212
  BEAM_ENERGY_2 = 980.0
}(beam)
- In the processes section, matrix elements for pp(bar) -> photon + 1, 2, 3 partons are requested. Only Feynman diagrams with exactly one electroweak coupling are allowed. The merging criterion is set to Q_cut=10.0 GeV.
- Enhance_Functions (cf. Enhance_Function) are introduced to produce more hard partons/photons than the steeply falling cross section would imply (appropriately weighted).
- Photon emissions in the parton shower are enabled via CSS_EW_MODE = 1 in the shower section.
- The me section enables weighted event generation and switches off the emission of additional soft photons from the hard scattering.
- The beam section specifies Tevatron Run 2 conditions in this example but can simply be changed.
- Fragmentation and multiple interactions are switched off (which will ideally be irrelevant for the analysis).
(run){ EVENTS = 1000000 }(run) (processes){ Process 93 93 -> 93 93 93{2} Order_EW 0 CKKW sqr(10.0/E_CMS) Selector_File *|(coresel){|}(coresel) {2}; Enhance_Function PPerp2(p[2]) {2} Enhance_Function max(PPerp2(p[2]),PPerp2(p[3]),PPerp2(p[4])) {3} Enhance_Function max(PPerp2(p[2]),PPerp2(p[3]),PPerp2(p[4]),PPerp2(p[5])) {4} Enhance_Function max(PPerp2(p[2]),PPerp2(p[3]),PPerp2(p[4]),PPerp2(p[5]),PPerp2(p[6])) {5} Integration_Error 0.05 {4} End process }(processes) (coresel){ NJetFinder 2 10.0 0.0 1.0 }(coresel) (shower){ CSS_EW_MODE = 1 }(shower) (me){ EVENT_GENERATION_MODE = Weighted ME_QED = Off }(me) (fragmentation){ FRAGMENTATION = Off DECAYMODEL = Off }(fragmentation) (mi){ MI_HANDLER = None }(mi) (beam){ BEAM_1 = 2212 BEAM_ENERGY_1 = 980.0 BEAM_2 = -2212 BEAM_ENERGY_2 = 980.0 }(beam)
Here only the differences with respect to above are explained:
- In the processes section, matrix elements for pp(bar) -> dijet + 0, 1, 2 partons are requested. Only Feynman diagrams without electroweak couplings are allowed (for efficiency reasons).
(run){
  EVENTS = 1000000
}(run)
(processes){
  Process 21 21 -> 22 22
  Scales VAR{Abs2(p[2]+p[3])}
  Loop_Generator gg_yy
  End process;
  Process 93 93 -> 22 22 93{2}
  Order_EW 2
  CKKW sqr(10.0/E_CMS)
  Selector_File *|(coresel){|}(coresel) {2};
  Enhance_Function PPerp2(p[2]) {2}
  Enhance_Function max(PPerp2(p[2]),PPerp2(p[3]),PPerp2(p[4])) {3}
  Enhance_Function max(PPerp2(p[2]),PPerp2(p[3]),PPerp2(p[4]),PPerp2(p[5])) {4}
  Enhance_Function max(PPerp2(p[2]),PPerp2(p[3]),PPerp2(p[4]),PPerp2(p[5]),PPerp2(p[6])) {5}
  Integration_Error 0.05 {4}
  End process
}(processes)
(coresel){
  NJetFinder 2 10.0 0.0 1.0
}(coresel)
(shower){
  CSS_EW_MODE = 1
}(shower)
(me){
  EVENT_GENERATION_MODE = Weighted
  ME_QED = Off
}(me)
(fragmentation){
  FRAGMENTATION = Off
  DECAYMODEL = Off
}(fragmentation)
(mi){
  MI_HANDLER = None
}(mi)
(beam){
  BEAM_1 = 2212
  BEAM_ENERGY_1 = 980.0
  BEAM_2 = -2212
  BEAM_ENERGY_2 = 980.0
}(beam)
Here only the differences with respect to the single photon example are explained:
- In the processes section, tree-level matrix elements for pp(bar) -> photon photon + 0, 1, 2 partons are requested. Only Feynman diagrams with exactly two electroweak couplings are allowed (for efficiency reasons).
- In addition, the loop-induced matrix element for the process gg -> photon photon is enabled.
- Fragmentation and multiple interactions are switched off (which will ideally be irrelevant for the analysis).
The shower can produce di-photon events either from single-photon events or from di-jet events by producing one or two photons respectively. Here these two contributions are generated in one run, but of course they could be separated as well.
(run){
  EVENTS = 1000000
}(run)
(processes){
  Process 93 93 -> 22 93 93{2}
  Order_EW 1
  CKKW sqr(10.0/E_CMS)
  Selector_File *|(coresel){|}(coresel) {2};
  Enhance_Function PPerp2(p[2]) {2}
  Enhance_Function max(PPerp2(p[2]),PPerp2(p[3]),PPerp2(p[4])) {3}
  Enhance_Function max(PPerp2(p[2]),PPerp2(p[3]),PPerp2(p[4]),PPerp2(p[5])) {4}
  Enhance_Function max(PPerp2(p[2]),PPerp2(p[3]),PPerp2(p[4]),PPerp2(p[5]),PPerp2(p[6])) {5}
  Integration_Error 0.05 {4}
  End process
  Process 93 93 -> 93 93 93{2}
  Order_EW 0
  CKKW sqr(10.0/E_CMS)
  Selector_File *|(coresel){|}(coresel) {2};
  Enhance_Function PPerp2(p[2]) {2}
  Enhance_Function max(PPerp2(p[2]),PPerp2(p[3]),PPerp2(p[4])) {3}
  Enhance_Function max(PPerp2(p[2]),PPerp2(p[3]),PPerp2(p[4]),PPerp2(p[5])) {4}
  Enhance_Function max(PPerp2(p[2]),PPerp2(p[3]),PPerp2(p[4]),PPerp2(p[5]),PPerp2(p[6])) {5}
  Integration_Error 0.05 {4}
  End process
}(processes)
(coresel){
  NJetFinder 2 10.0 0.0 1.0
}(coresel)
(shower){
  CSS_EW_MODE = 1
}(shower)
(me){
  EVENT_GENERATION_MODE = Weighted
  ME_QED = Off
}(me)
(fragmentation){
  FRAGMENTATION = Off
  DECAYMODEL = Off
}(fragmentation)
(mi){
  MI_HANDLER = None
}(mi)
(beam){
  BEAM_1 = 2212
  BEAM_ENERGY_1 = 980.0
  BEAM_2 = -2212
  BEAM_ENERGY_2 = 980.0
}(beam)
Here only the differences with respect to above are explained:
- In the processes section, matrix elements for pp(bar) -> dijet + 0, 1, 2 partons and pp(bar) -> photon + 1, 2, 3 partons are requested. Only Feynman diagrams with 0 or 1 electroweak couplings, respectively, are allowed (for efficiency reasons).
This example shows a LEP set up, with electrons and positrons colliding at a centre of mass energy of 91.25GeV. Two processes have been specified, one final state with two or three light quarks and gluons being produced, and one with a b b-bar pair and possibly an extra light parton.
Things to notice:
- In the (model) section of the run-card, parameters relating to the model can be set. In this example, the running of alpha_s is set to leading order and the value of alpha_s at the Z-mass is set.
This is an example of a setup for hadronic final states in deep-inelastic lepton-nucleon scattering at a centre-of-mass energy of 300 GeV. Corresponding measurements were carried out by the H1 and ZEUS collaborations at the HERA collider at DESY Hamburg.
Things to notice:
This set-up illustrates the possibility of specifying a particular decay chain for the particles in the signal process.
The process generated is the production of a Higgs boson in association with a top quark pair from two light partons in the initial state. The Higgs boson decays into a bottom-antibottom pair, while each top quark decays into (anti-)bottom quark and W-boson. The W+ boson in turn decays leptonically, and the W- boson decays to quarks.
Things to notice:
- The DecayOS tag is used to specify the decays. See DecayOS.
This set-up is very similar to the LHC_TTH example above. There are three notable differences. Firstly, the colliding beams are changed to proton anti-proton at Tevatron energies. Secondly, there is no Higgs boson produced in the core process, but there are two allowed decay channels of the top quark pair. Thirdly, Matrix-Element Parton-Shower merging is enabled using the CKKW tag.
The process generated is the production of a top pair from two light quarks, with possibly an extra final-state parton. Each top then decays into (anti-)bottom quark and W+(-) boson. One of the W-bosons then decays leptonically, and the other decays to quarks. This time we include processes with either of the two W-bosons decaying leptonically, not just the W-, as in the LHC_TTH example above.
Things to notice:
In this example, the underlying event has been switched on. The
parameters controlling the simulation of multiple interactions
can be set in the (mi)
section of the run card. For a
full list of the available settings, see MPI Parameters.
Things to notice:
This is a Tevatron set-up with two W bosons produced, and both decaying leptonically. Up to two jets are also included in the matrix element.
Things to notice:
- In the (model) section, the top quark and the Higgs boson have been turned off, using the ACTIVE[PDG] switches.
Things to notice:
- In the (beam) section of the data card, the beams are set as electrons by BEAM_{1,2}=11.
- The beam spectra BEAM_SPECTRUM_1=Laser_Backscattering and BEAM_SPECTRUM_2=Monochromatic have been specified.
This example shows a set up for the BaBar experiment at the PEPII collider, with electrons and positrons colliding at a centre of mass energy of 10.58 GeV. It also serves as an example for a setup with asymmetric beams. Two processes have been specified, the production of a B+ B- pair and the production of B0 B0bar pair. Hence, both processes will be mixed according to their respective cross sections into an inclusive sample.
Things to notice:
- The internal analysis package is enabled via ANALYSIS = Internal. To display the analysis package syntax, run Sherpa with SHOW_ANALYSIS_SYNTAX=1 on the command line.
This example shows a beyond the Standard Model set-up. In this example, a 4th generation of particles is included, as well as the Standard Model particles. This is one of Sherpa’s built-in BSM options, see Models available in Sherpa. For more information on using your own BSM models with Sherpa, see Exotic physics.
Things to notice:
- The BSM physics model is specified in the (model) section of the run card, using the parameter MODEL, see Model Parameters. The default is the SM.
- To list the parameters of the chosen model, add SHOW_MODEL_SYNTAX=1 on the command line when running Sherpa.
- The properties of the new particles, such as MASS[<id>] and WIDTH[<id>], may be set in the runcard, or on the command line, in exactly the same way as the SM particle properties. In the above example, only the fourth generation lepton masses and widths have been set. The corresponding parameters of the fourth generation quarks were left at their default values since they do not play a role in this process.
- To display the syntax of the internal analysis, run Sherpa with SHOW_ANALYSIS_SYNTAX=1 on the command line.
When Sherpa is run using the matrix element generator
AMEGIC++, it is necessary to run it twice. During the first run
(the initialization run) Feynman diagrams for the hard processes are constructed
and translated into helicity amplitudes. Furthermore suitable phase-space mappings are
produced. The amplitudes and corresponding integration channels are written to disk as
C++ sourcecode, placed in a subdirectory of LHC_4thGen
,
which is called Process
. The initialization run is started
using the standard Sherpa executable, as described in Running Sherpa.
The initialization run apparently
stops with an error message, which is nothing but the request to carry
out the compilation and linking procedure for the generated
matrix-element libraries. The makelibs
script, provided for this
purpose and created in the working directory, must be invoked by the user:
./makelibs
Afterwards Sherpa can be restarted using the same command as before. In this run (the generation run) the cross sections of the hard processes are evaluated. Simultaneously the integration over phase space is optimized to arrive at an efficient event generation.
Another built-in BSM model in Sherpa allows the inclusion of anomalous gauge couplings. An example with an anomalous Z-gamma-gamma coupling is given here.
Things to notice:
- The anomalous coupling model is enabled via the MODEL parameter, see Models available in Sherpa.
- The anomalous couplings themselves are specified in the (model) section. For more information on available parameters, and their meanings, run Sherpa with SHOW_MODEL_SYNTAX=1 on the command line, or see Anomalous Gauge Couplings.
- A graph-output option is set in the (processes) section. When Sherpa is run with this option enabled, the valid Feynman graphs for the process are drawn and stored in tex format in the specified directory. In this example, the graphs will be found in the Process directory after Sherpa is run.
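As an illustration, the model selection for such a setup might look as follows. This is a sketch only: the exact names of the anomalous coupling parameters should be taken from the SHOW_MODEL_SYNTAX=1 output or the Anomalous Gauge Couplings section, so they are indicated here merely by a placeholder line:

```
(model){
  MODEL = SM+AGC;
  % anomalous coupling strengths would be set here
}(model);
```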
This example demonstrates the usage of beam spectra based on the equivalent photon approximation (EPA) in Sherpa. The corresponding setup is discussed in detail in [Arc08] .
|
Things to notice:
- The EPA beam spectra are enabled via the BEAM_SPECTRUM_<i> parameter, see Beam Parameters.
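A sketch of how a (beam) section with EPA spectra might look. The beam ids, energies, and the value EPA for BEAM_SPECTRUM_<i> are illustrative assumptions and should be checked against Beam Parameters:

```
(beam){
  BEAM_1 = 2212; BEAM_ENERGY_1 = 3500.;
  BEAM_2 = 2212; BEAM_ENERGY_2 = 3500.;
  BEAM_SPECTRUM_1 = EPA;
  BEAM_SPECTRUM_2 = EPA;
}(beam);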
This example shows a set-up for the ADD model. This is one of Sherpa’s built-in BSM options. For more information on using your own BSM models with Sherpa, see Exotic physics.
|
Things to notice:
- The model is set in the (model) section of the run card, using the parameter MODEL, see Model Parameters. The default is the SM.
- For a list of the model's parameters and their meanings, specify SHOW_MODEL_SYNTAX=1 on the command line when running Sherpa.
- The properties of the graviton (<id> = 39) and the graviscalar (<id> = 40), such as e.g. MASS[<id>] and WIDTH[<id>], may be set in the run card, or on the command line, in exactly the same way as the SM particle properties.
- The number of extra dimensions has been set to N_ED = 2. Furthermore, the cut-off scale and the ADD scale M_S have been set equal to a rather low value (2 TeV).
- For the syntax of the built-in analysis, run Sherpa with SHOW_ANALYSIS_SYNTAX=1 on the command line.
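A sketch of the (model) section for this example, using the values quoted above (N_ED = 2, M_S = 2 TeV). The parameter name M_CUT for the cut-off scale is an assumption to be verified against the SHOW_MODEL_SYNTAX=1 output:

```
(model){
  MODEL = ADD;
  N_ED  = 2;
  M_S   = 2000.;
  M_CUT = 2000.;
}(model);
```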
This example shows a beyond the Standard Model set-up, namely a setup for the Minimal Supersymmetric Standard Model (MSSM). This is one of Sherpa’s built-in BSM options, see Models available in Sherpa. For more information on using your own BSM models with Sherpa, see Exotic physics.
|
Things to notice:
- The model is set in the (model) section of the run card, using the parameter MODEL, see Model Parameters. The default is the SM.
- The SUSY spectrum is read from a file in the SUSY Les Houches Accord format, specified via SLHA_INPUT, see Minimal Supersymmetric Standard Model.
- DecayOS is used to specify the decays. See DecayOS.
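A minimal sketch of the (model) section for an MSSM run, assuming an SLHA spectrum file; the file name spectrum.slha is a hypothetical placeholder:

```
(model){
  MODEL = MSSM;
  SLHA_INPUT = spectrum.slha;
}(model);
```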
This example computes the next-to-leading order W-production cross section at Tevatron Run II energies.
|
Things to notice:
- An Enhance_Factor has been added, see Processes.
- --enable-analysis must be included on the command line when Sherpa is configured, see ANALYSIS.
- The analysis level is set via LEVEL MENLO.
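A sketch of how an Enhance_Factor might be attached to a process in the (processes) section. The particle ids (93 for the jet container, 11 and -12 for the electron and anti-neutrino) and the enhancement value are illustrative assumptions, not the settings of this example:

```
(processes){
  Process 93 93 -> 11 -12;
  Enhance_Factor 10;
  End process;
}(processes);
```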
This setup implements the MENLOPS method, combining POWHEG-style NLO matrix-element parton-shower matching with CKKW-style multijet merging, cf. [Hoe10a] , [Ham10] . Its result, after switching on both the hadronisation and underlying event modules, can then be directly compared to experimental data.
|
Things to notice:
- An Enhance_Function has been added to raise statistics on hard jet emissions (cf. Enhance_Function).
This setup implements the MENLOPS method, combining POWHEG-style NLO matrix-element parton-shower matching with CKKW-style multijet merging, cf. [Hoe10a] , [Ham10] . Its result, after switching on both the hadronisation and underlying event modules, can then be directly compared to experimental data.
|
Things to notice:
This setup implements a matching of NLO matrix elements with the parton shower using the POWHEG method, cf. [Hoe10] , [Nas04] , [Fri07] . Its result, after switching on both the hadronisation and underlying event modules, can then be directly compared to experimental data.
|
Things to notice:
This setup implements a matching of NLO matrix elements with the parton shower using the POWHEG method, cf. [Hoe10] , [Nas04] , [Fri07] . Its result, after switching on both the hadronisation and underlying event modules, can then be directly compared to experimental data.
|
Things to notice:
This setup implements a matching of NLO matrix elements with the parton shower using the POWHEG method, cf. [Hoe10] , [Nas04] , [Fri07] . Its result, after switching on both the hadronisation and underlying event modules, can then be directly compared to experimental data.
|
Things to notice:
If Sherpa exits abnormally, first check the Sherpa output for hints about the reason for the program abort, and try to figure out what has gone wrong with the help of the Manual. Note that Sherpa throwing a ‘normal_exit’ exception does not imply any abnormal program termination! When using AMEGIC++, Sherpa will exit with the message:
New libraries created. Please compile. |
In this case, follow the instructions given in Running Sherpa with AMEGIC++.
If this does not help, contact the Sherpa team (see the Sherpa Team section of the website http://www.sherpa-mc.de), providing all information on your setup. Please include the
Status__<date of crash>
output produced before the program abort.
Sherpa was written by the Sherpa Team, see http://www.sherpa-mc.de.
Sherpa is free software. You can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation. You should have received a copy of the GNU General Public License along with the source for Sherpa; see the file COPYING. If not, write to the Free Software Foundation, 59 Temple Place, Suite 330, Boston, MA 02111-1307, USA.
Sherpa is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.
Sherpa was created during the Marie Curie RTN’s HEPTOOLS and MCnet. The MCnet Guidelines apply, see the file GUIDELINES and http://www.montecarlonet.org/index.php?p=Publications/Guidelines.
This document was generated by Frank Siegert on May 4, 2011 using texi2html 1.82.