
Sherpa 2.1.0 Manual


1. Introduction

Sherpa is a Monte Carlo event generator for the Simulation of High-Energy Reactions of PArticles in lepton-lepton, lepton-photon, photon-photon, lepton-hadron and hadron-hadron collisions. This document provides information to help users understand and apply Sherpa for their physics studies. The event generator is introduced in broad terms, and the installation and running of the program are outlined. The various options and parameters specifying the program are compiled, and their meanings are explained. This document does not aim at giving a complete description of the physics content of Sherpa. For that, the reader is referred to the original publication [Gle08b].


1.1 Introduction to Sherpa

Sherpa [Gle08b] is a Monte Carlo event generator that provides complete hadronic final states in simulations of high-energy particle collisions. The produced events may be passed into detector simulations used by the various experiments. The entire code has been written in C++, like its competitors Herwig++ [Bah08b] and Pythia 8 [Sjo07].

Sherpa simulations can be achieved for lepton-lepton, lepton-photon, photon-photon, lepton-hadron and hadron-hadron collisions.

The list of physics processes that come with Sherpa covers particle production at tree level in the Standard Model and in models beyond the Standard Model: The complete set of Feynman rules for the MSSM has been implemented according to [Ros89], [Ros95], including general mixing matrices for inter-generational squark and slepton mixing. Among other interaction models, the ADD model of Large Extra Dimensions is also available [Gle03a]. Furthermore, anomalous gauge couplings [Hag86], a model with an extended Higgs sector [Ded08], and a version of the Two-Higgs-Doublet Model are available. The Sherpa program owes this versatility to the inbuilt matrix-element generators, AMEGIC++ and Comix, and to its phase-space generator Phasic [Kra01], which automatically calculate and integrate tree-level amplitudes for the implemented models. This feature enables Sherpa to be used as a cross-section integrator and parton-level event generator as well. This aspect has been extensively tested, see e.g. [Gle03b], [Hag05].

As a second key feature of Sherpa the program provides an implementation of the merging approaches of [Hoe09] and [Geh12], [Hoe12a]. These algorithms yield improved descriptions of multijet production processes, which copiously appear at lepton-hadron colliders like HERA [Car09], or hadron-hadron colliders like the Tevatron and the LHC, [Kra04], [Kra05], [Gle05], [Hoe09a]. An older approach, implemented in previous versions of Sherpa and known as the CKKW technique [Cat01a], [Kra02], has been compared in great detail in [Alw07] with other approaches, such as the MLM merging prescription [Man01] as implemented in Alpgen [Man02], Madevent [Ste94], [Mal02a], or Helac [Kan00], [Pap05] and the CKKW-L prescription [Lon01], [Lav05] of Ariadne [Lon92].

This manual contains all information necessary to get started with Sherpa as quickly as possible. It lists options and switches of interest for steering the simulation of various physics aspects of the collision. It does not describe the physics simulated by Sherpa or the underlying structure of the program. Many external codes can be linked with Sherpa. This manual explains how to do this, but it does not contain a description of the external programs. You are encouraged to read their corresponding documentations, which are referenced in the text. If you use external programs with Sherpa, you are encouraged to cite them accordingly.

The MCnet Guidelines apply to Sherpa. You are kindly asked to cite [Gle08b] if you have used the program in your work.

The Sherpa authors strongly recommend the study of the manuals and many excellent publications on different aspects of event generation and physics at collider experiments written by other event generator authors.

This manual is organized as follows: in Basic structure the modular structure intrinsic to Sherpa is introduced. Getting started contains information about and instructions for the installation of the package. There is also a description of the steps that are needed to run Sherpa and generate events. The Input structure is then discussed, and the ways in which Sherpa can be steered are explained. All parameters and options are discussed in Parameters. Advanced Tips and tricks are detailed, and some options for Customization are outlined for those more familiar with Sherpa. There is also a short description of the different Examples provided with Sherpa.

The construction of Monte Carlo programs requires several assumptions, approximations and simplifications of complicated physics aspects. The results of event generators should therefore always be verified and cross-checked with results obtained by other programs, and they should be interpreted with care and common sense.


1.2 Basic structure

Sherpa is a modular program. This reflects the paradigm of Monte Carlo event generation, in which the full simulation is split into well-defined event phases, based on QCD factorization theorems. Accordingly, each module encapsulates a different aspect of event generation for high-energy particle reactions. It resides within its own namespace and is located in its own subdirectory of the same name. The main module, called SHERPA, steers the interplay of all modules – or phases – and the actual generation of the events. Altogether, the following modules are currently distributed with the Sherpa framework:

The actual executable of the Sherpa generator can be found in the subdirectory <prefix>/bin/ and is called Sherpa. To run the program, input files have to be provided in the current working directory or elsewhere by specifying the corresponding path, see Input structure. All output files are then written to this directory as well.


2. Getting started


2.1 Installation

Sherpa is distributed as a tarred and gzipped file named SHERPA-MC-2.1.0.tar.gz, and can be unpacked in the current working directory with

 
 tar -zxf SHERPA-MC-2.1.0.tar.gz

Alternatively, it can also be accessed via SVN through the location specified on the download page. In that case, before continuing, it is necessary to construct the build scripts by running autoreconf -i once after checking out the SVN working copy.

To guarantee successful installation, the following tools should be available on the system:

If SQLite is installed in a non-standard location, please specify the installation path using the option ‘--with-sqlite3=/path/to/sqlite’. If SQLite is not installed on your system, the Sherpa configure script provides the fallback option of installing it into the same directory as Sherpa itself. To do so, please run configure with the option ‘--with-sqlite3=install’. (This may not work if you are cross-compiling using ‘--host’; in that case, please install SQLite yourself and reconfigure using ‘--with-sqlite3=/path/to/sqlite’.)

Compilation and installation proceed through the following commands:

 
 ./configure

 make install

If not specified differently, the directory structure after installation is organized as follows:

$(prefix)/bin      Sherpa executable and scripts
$(prefix)/include  headers for process library compilation
$(prefix)/lib      basic libraries
$(prefix)/share    PDFs, Decaydata, fallback run cards

The installation directory $(prefix) can be specified with the ‘--prefix=/path/to/installation/target’ option of the configure script and defaults to the current working directory.

If Sherpa has to be moved to a different directory after the installation, one has to set the following environment variables for each run:

Sherpa can be interfaced with various external packages, e.g. HepMC for event output, or LHAPDF for PDFs. For this to work, the user has to pass the appropriate options to the configure step. This is achieved as shown below:

 
./configure --enable-hepmc2=/path/to/hepmc2 --enable-lhapdf=/path/to/lhapdf

Here, the paths have to point to the top level installation directories of the external packages, i.e. the ones containing the lib/, share/, ... subdirectories.

For a complete list of possible configuration options run ‘./configure --help’.

The Sherpa package has successfully been compiled, installed and tested on SuSE, RedHat / Scientific Linux and Debian / Ubuntu Linux systems using the GNU C++ compiler versions 3.2, 3.3, 3.4, and 4.x as well as on Mac OS X 10 using the GNU C++ compiler version 4.0. In all cases the GNU FORTRAN compiler g77 or gfortran has been employed.

If you have multiple compilers installed on your system, you can use shell environment variables to specify which of these are to be used. A list of the available variables is printed in the last lines of the output of

 
./configure --help

run in the Sherpa top level directory. Depending on the shell you are using, you can set these variables e.g. with export (bash) or setenv (csh). Examples:

 
export CXX=g++-3.4
export CC=gcc-3.4
export CPP=cpp-3.4

Installation on Cray XE6 / XK7

Sherpa has been installed successfully on Cray XE6 and Cray XK7. The following configure command should be used

 
./configure <your options> --enable-mpi --host=i686-pc-linux CC=CC CXX=CC FC='ftn -fPIC' LDFLAGS=-dynamic

Sherpa can then be run with

 
aprun -n <nofcores> <prefix>/bin/Sherpa -lrun.log

The modularity of the code requires setting the environment variable ‘CRAY_ROOTFS’, cf. the Cray system documentation.

Installation on IBM BlueGene/Q

Sherpa has been installed successfully on an IBM BlueGene/Q system. The following configure command should be used

 
./configure <your options> --enable-mpi --host=powerpc64-bgq-linux CC=mpic++ CXX=mpic++ FC='mpif90 -fPIC -funderscoring' LDFLAGS=-dynamic

Sherpa can then be run with

 
qsub -A <account> -n <nofcores> -t 60 --mode c16 <prefix>/bin/Sherpa -lrun.log

MacOS Installation

Since it is more complicated to set up the necessary compiler environment on a Mac we recommend using a package manager to install Sherpa and its dependencies. David Hall is hosting a repository for Homebrew packages at: http://davidchall.github.io/homebrew-hep/

In case you are compiling yourself, please be aware of the following issues which have come up on Mac installations before:


2.2 Running Sherpa

The Sherpa executable resides in the directory <prefix>/bin/, where <prefix> denotes the path to the Sherpa installation directory. The way a particular simulation will be accomplished is defined by several parameters, which can all be listed in a common file, or data card (parameters can alternatively be specified on the command line; more details are given in Input structure). This steering file is called Run.dat, and some example setups (i.e. Run.dat files) are distributed with the current version of Sherpa. They can be found in the directory <prefix>/share/SHERPA-MC/Examples/, and descriptions of some of their key features can be found in the section Examples.

Please note: It is not in general possible to reuse run cards from previous Sherpa versions. Often there are small changes in the parameter syntax of the run cards from one version to the next. These changes are documented in our manuals. In addition, always use the newer Hadron.dat and Decaydata directories (and reapply any changes which you might have applied to the old ones), see Hadron decays.

The very first step in running Sherpa is therefore to adjust all parameters to the needs of the desired simulation. The details for doing this properly are given in Parameters. In this section, the focus is on the main issues for a successful operation of Sherpa. This is illustrated by discussing and referring to the parameter settings that come in the run card ./Examples/V_plus_Jets/LHC_ZJets/Run.dat, cf. Z+jets production. This is a simple run card created to show the basics of how to operate Sherpa. It should be stressed that this run-card relies on many of Sherpa’s default settings, and, as such, you should understand those settings before using it to look at physics. For more information on the settings and parameters in Sherpa, see Parameters, and for more examples see the Examples section.


2.2.1 Process selection and initialization

Central to any Monte Carlo simulation is the choice of the hard processes that initiate the events. These hard processes are described by matrix elements. In Sherpa, the selection of processes happens in the (processes) part of the steering file. Only a few 2->2 reactions have been hard-coded. They are available in the EXTRA_XS module. The more usual way to compute matrix elements is to employ one of Sherpa’s automated tree-level generators, AMEGIC++ and Comix, see Basic structure. If no matrix-element generator is selected via the ME_SIGNAL_GENERATOR tag, Sherpa will use whichever generator is capable of calculating the process, checking Comix first, then AMEGIC++ and then EXTRA_XS. Therefore, in a single run, different processes may be handled by different generators. In this example, however, all processes will be calculated by Comix.
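
If a specific generator should be enforced rather than relying on this fallback order, it can be fixed explicitly with the ME_SIGNAL_GENERATOR tag in the (run) section of the steering file. For instance, the following line restricts Sherpa to Comix; the tag accepts a list of generators, which are then tried in the given order:

```
ME_SIGNAL_GENERATOR Comix;
```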

To begin with the example, the Sherpa run has to be started by changing into the <prefix>/share/SHERPA-MC/Examples/V_plus_Jets/LHC_ZJets/ directory and executing

 
<prefix>/bin/Sherpa 

You may also run from an arbitrary directory, employing <prefix>/bin/Sherpa PATH=<prefix>/share/SHERPA-MC/Examples/V_plus_Jets/LHC_ZJets. In the example, the keyword PATH is specified by an absolute path. It may also be specified relative to the current working directory. If it is not specified at all, the current working directory is assumed.

For good book-keeping, it is highly recommended to reserve different subdirectories for different simulations as is demonstrated with the example setups.

If AMEGIC++ is used, Sherpa requires an initialization run, where C++ source code is written to disk. This code must be compiled into dynamic libraries by the user by running the makelibs script in the working directory. Alternatively, if scons is installed, you may invoke <prefix>/bin/make2scons and run scons install. After this step Sherpa is run again for the actual cross section integrations and event generation. For more information on and examples of how to run Sherpa using AMEGIC++, see Running Sherpa with AMEGIC++.

If the internal hard-coded matrix elements or Comix are used, and AMEGIC++ is not, an initialization run is not needed, and Sherpa will calculate the cross sections and generate events during the first run.

As the cross sections are integrated, the integration over phase space is optimized to arrive at an efficient event generation. Subsequently events are generated if EVENTS was specified either on the command line or added to the (run) section in the Run.dat file.

The generated events are not stored into a file by default; for details on how to store the events see Event output formats. Note that the computational effort to go through this procedure of generating, compiling and integrating the matrix elements of the hard processes depends on the complexity of the parton-level final states. For low multiplicities (2->2,3,4), of course, these steps complete almost instantly.

Usually more than one generation run is wanted. As long as the parameters that affect the matrix-element integration are not changed, it is advantageous to store the cross sections obtained during the generation run for later use. This saves CPU time, especially for large final-state multiplicities of the matrix elements. By default, Sherpa stores these integration results in a directory called Results/. The name of the output directory can be customised via

 
<prefix>/bin/Sherpa RESULT_DIRECTORY=<result>/

see RESULT_DIRECTORY. The storage of the integration results can be prevented by either using

 
<prefix>/bin/Sherpa GENERATE_RESULT_DIRECTORY=0

or the command line option ‘-g’ can be invoked, see Command line options.

If physics parameters change, the cross sections have to be recomputed. The new results should either be stored in a new directory or the <result> directory may be re-used once it has been emptied. Parameters which require a recomputation are any parameters affecting the Model parameters, Matrix elements or Selectors. Standard examples are changing the magnitude of couplings, renormalisation or factorisation scales, changing the PDF or centre-of-mass energy, or, applying different cuts at the parton level. If unsure whether a recomputation is required, a simple test is to remove the RESULT_DIRECTORY option from the run command and check whether the new integration numbers (statistically) comply with the stored ones.
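
This compliance check can be made concrete: a fresh integration and a stored result are statistically compatible if they agree within their combined statistical errors. A minimal sketch of such a comparison (plain Python, not part of Sherpa; the two-sigma threshold is a conventional choice, not a Sherpa setting):

```python
import math

def compatible(xs1, err1, xs2, err2, n_sigma=2.0):
    """True if two cross-section results (value +- error, in pb)
    agree within n_sigma combined standard deviations."""
    combined = math.sqrt(err1 ** 2 + err2 ** 2)
    return abs(xs1 - xs2) <= n_sigma * combined

# stored result vs. a fresh integration of an unchanged setup
print(compatible(852.636, 0.331, 852.77, 0.337))
# a changed PDF, scale or cut typically shifts the result by many sigma
print(compatible(852.636, 0.331, 900.1, 0.35))
```

If the check fails, the cross sections have to be recomputed in a fresh (or emptied) result directory.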

A warning on the validity of the process libraries is in order here: it is absolutely mandatory to generate new library files whenever the physics model is altered, i.e. whenever particles are added or removed, such that new diagrams may contribute or existing diagrams no longer contribute to the same final states. New library files must also be generated when particle masses are switched on or off (masses may, however, be changed between non-zero values while keeping the same process libraries). The best recipe is to create a new and separate setup directory in such cases. Otherwise the Process and Results directories have to be erased:

 
rm -rf Process/ Results/

In either case one has to start over with the whole initialization procedure to prepare for the generation of events.


2.2.2 The example set-up: Z+Jets at the LHC

The setup file ( Run.dat ) provided in ./Examples/V_plus_Jets/LHC_ZJets/ can be considered as a standard example to illustrate the generation of fully hadronised events in Sherpa, cf. Z+jets production. Such events will include effects from parton showering, hadronisation into primary hadrons and their subsequent decays into stable hadrons. Moreover, the example chosen here nicely demonstrates how Sherpa is used in the context of merging matrix elements and parton showers [Hoe09]. In addition to the aforementioned corrections, this simulation of inclusive Drell-Yan production (electron-positron channel) will then include higher-order jet corrections at the tree level. As a result the transverse-momentum distribution of the Drell-Yan pair and the individual jet multiplicities as measured by the ATLAS and CMS collaborations at the LHC can be well described.

Before event generation, the initialization procedure as described in Process selection and initialization has to be completed. The matrix-element processes included in the setup are the following:

 
  proton proton -> parton parton -> electron positron + up to four partons

In the (processes) part of the steering file this translates into

  Process 93 93 -> 11 -11 93{4}
  Order_EW 2;
  CKKW sqr(30/E_CMS)
  End process;

The physics model for these processes is the Standard Model (‘SM’), which is the default setting of the parameter MODEL and is therefore not set explicitly. In the process declaration, ‘93’ denotes Sherpa’s jet particle container (the light quarks and the gluon), ‘11’ and ‘-11’ are the PDG codes of electron and positron, and ‘93{4}’ admits up to four additional jets. Fixing the order of electroweak couplings to ‘2’, matrix elements of all partonic subprocesses for Drell-Yan production without any and with up to four extra QCD parton emissions will be generated. Proton–proton collisions are considered at beam energies of 3.5 TeV. The default PDF used by Sherpa is CT10. Model parameters and couplings can be set in section (run) of Run.dat. Similarly, the way couplings are treated can be defined. As no options are set, the default parameters and scale-setting procedures are used.

The QCD radiation matrix elements have to be regularised to obtain meaningful cross sections. This is achieved by specifying ‘CKKW sqr(30/E_CMS)’ in the (processes) part of Run.dat. Simultaneously, this tag initiates the ME-PS merging procedure. To eventually obtain fully hadronised events, the FRAGMENTATION tag has been left on its default setting ‘Ahadic’ (and has therefore been omitted from the run card), which will run Sherpa’s cluster hadronisation, and the tag DECAYMODEL has its default setting ‘Hadrons’, which will run Sherpa’s hadron decays. Additionally, corrections owing to photon emissions are taken into account.

To run this example set-up, use the

 
<prefix>/bin/Sherpa 

command as described in Running Sherpa. Sherpa displays some output as it runs. At the start of the run, Sherpa initializes the relevant model and displays a table of particles, with their PDG codes and some properties. It also displays the Particle containers and their contents. The other relevant parts of Sherpa are initialized, including the matrix element generator(s). The Sherpa output will look like:

 
Initialized the beams Monochromatic*Monochromatic
PDF set 'ct10' loaded for beam 1 (P+).
PDF set 'ct10' loaded for beam 2 (P+).
Initialized the ISR: (SF)*(SF)
Initialize the Standard Model from  / Model.dat
One_Running_AlphaS::One_Running_AlphaS() {
  Setting \alpha_s according to PDF
  perturbative order 1
  \alpha_s(M_Z) = 0.118
}
One_Running_AlphaS::One_Running_AlphaS() {
  Setting \alpha_s according to PDF
  perturbative order 1
  \alpha_s(M_Z) = 0.118
}
Initialized the Soft_Collision_Handler.
Init shower for 1.
CS_Shower::CS_Shower(): Set core m_T mode 0
Shower::Shower(asfacs: IS = 0.73, FS = 1.38)
Init shower for 2.
CS_Shower::CS_Shower(): Set core m_T mode 0
Shower::Shower(asfacs: IS = 0.73, FS = 1.38)
Initialized the Shower_Handler.
+----------------------------------+
|                                  |
|      CCC  OOO  M   M I X   X     |
|     C    O   O MM MM I  X X      |
|     C    O   O M M M I   X       |
|     C    O   O M   M I  X X      |
|      CCC  OOO  M   M I X   X     |
|                                  |
+==================================+
|  Color dressed  Matrix Elements  |
|     http://comix.freacafe.de     |
|   please cite  JHEP12(2008)039   |
+----------------------------------+
Matrix_Element_Handler::BuildProcesses(): Looking for processes . done ( 23252 kB, 0s ).
Matrix_Element_Handler::InitializeProcesses(): Performing tests . done ( 23252 kB, 0s ).
Initialized the Matrix_Element_Handler for the hard processes.
Initialized the Beam_Remnant_Handler.
Hadron_Init::Init(): Initializing kf table for hadrons.
Initialized the Fragmentation_Handler.
Initialized the Soft_Photon_Handler.
Hadron_Decay_Map::Read:   Initializing HadronDecays.dat. This may take some time.
Initialized the Hadron_Decay_Handler, Decay model = Hadrons

Then Sherpa will start to integrate the cross sections. The output will look like:

 
Process_Group::CalculateTotalXSec(): Calculate xs for '2_2__j__j__e-__e+' (Comix)
Starting the calculation at 11:58:56. Lean back and enjoy ... .
822.035 pb +- ( 16.9011 pb = 2.05601 % ) 5000 ( 11437 -> 43.7 % )
full optimization:  ( 0s elapsed / 22s left ) [11:58:56]   
841.859 pb +- ( 11.6106 pb = 1.37916 % ) 10000 ( 18153 -> 74.4 % )
full optimization:  ( 0s elapsed / 21s left ) [11:58:57]   
...

The first line here displays the process which is being calculated. In this example, the integration is for the 2->2 process, parton parton -> electron positron. The matrix-element generator used is displayed after the process. As the integration progresses, summary lines are displayed, like the ones shown above. The current estimate of the cross section is displayed, along with its statistical error estimate. The number of phase-space points calculated is displayed after this (‘10000’ in this example), and the efficiency is displayed after that. On the line below, the time elapsed is shown, together with an estimate of the total time until the optimisation is complete. The system clock time is shown in square brackets.
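
These numbers follow the standard Monte Carlo estimate of an integral: the cross section is the mean of the sampled phase-space weights, and the quoted error is the standard deviation of that mean. A minimal sketch of this bookkeeping (plain Python, not Sherpa code; the weight sample is invented for illustration):

```python
import math
import random

def mc_estimate(weights):
    """Cross-section estimate (mean of the weights, in pb) and its
    statistical error (standard deviation of the mean)."""
    n = len(weights)
    mean = sum(weights) / n
    var = sum((w - mean) ** 2 for w in weights) / (n - 1)
    err = math.sqrt(var / n)
    rel = 100.0 * err / mean   # relative error in percent
    return mean, err, rel

random.seed(1)
weights = [random.expovariate(1.0 / 850.0) for _ in range(10000)]
xs, err, rel = mc_estimate(weights)
print(f"{xs:.3f} pb +- ( {err:.4f} pb = {rel:.5f} % )")
```

As the optimisation adapts the phase-space mapping, the weight distribution narrows, which is why the relative error in the summary lines decreases with the number of points.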

When the integration is complete, the output will look like:

 
...
852.77 pb +- ( 0.337249 pb = 0.0395475 % ) 300000 ( 313178 -> 98.8 % )
integration time:  ( 19s elapsed / 0s left ) [12:01:35]   
852.636 pb +- ( 0.330831 pb = 0.038801 % ) 310000 ( 323289 -> 98.8 % )
integration time:  ( 19s elapsed / 0s left ) [12:01:35]   
2_2__j__j__e-__e+ : 852.636 pb +- ( 0.330831 pb = 0.038801 % )  exp. eff: 13.4945 %
  reduce max for 2_2__j__j__e-__e+ to 0.607545 ( eps = 0.001 ) 

with the final cross section result and its statistical error displayed.

Sherpa will then move on to integrate the other processes specified in the run card.

When the integration is complete, the event generation will start. As the events are being generated, Sherpa will display a summary line stating how many events have been generated, and an estimate of how long it will take. When the event generation is complete, Sherpa’s output looks like:

 
...
Event 10000 ( 58 s total )                                         
In Event_Handler::Finish : Summarizing the run may take some time.
+----------------------------------------------------+
|                                                    |
|  Total XS is 900.147 pb +- ( 8.9259 pb = 0.99 % )  |
|                                                    |
+----------------------------------------------------+

A summary of the number of events generated is displayed, with the total cross section for the process.

The generated events are not stored into a file by default; for details on how to store the events see Event output formats.


2.2.3 Parton-level event generation with Sherpa

Sherpa has its own tree-level matrix-element generators called AMEGIC++ and Comix. Furthermore, with the module PHASIC++, sophisticated and robust tools for phase-space integration are provided. Sherpa can therefore also be used as a cross-section integrator. Because of the way Monte Carlo integration is accomplished, this immediately allows for parton-level event generation. Taking the LHC_ZJets setup, users have to modify just a few settings in Run.dat to arrive at a parton-level generation for, as an example, the process gluon down-quark to electron positron down-quark (g d -> e- e+ d). When, for instance, the options “EVENTS=0 OUTPUT=2” are added to the command line, a pure cross-section integration for that process is obtained, with the result plus integration error written to the screen.

For the example, the (processes) section alters to

  Process : 21 1 -> 11 -11 1
  Order_EW 2
  End process

and, assuming a fresh start, the initialization procedure has to be followed as before. Picking the same collider environment as in the previous example, only a few more changes are needed before the Run.dat file is ready for the calculation of the hadronic cross section of the process g d to e- e+ d at the LHC and subsequent parton-level event generation with Sherpa. These changes read SHOWER_GENERATOR=None, to switch off parton showering, FRAGMENTATION=Off, to switch off the hadronisation effects, MI_HANDLER=None, to switch off multiparton interactions, and ME_QED=Off, to switch off resummed QED corrections to the Z -> e- e+ decay. If, additionally, the non-perturbative intrinsic transverse momentum should not be taken into account, set BEAM_REMNANTS=0;.
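
Collected in one place, the modifications to the LHC_ZJets run card for a pure parton-level setup thus read (written here in the TAG=value form used in the text above):

```
SHOWER_GENERATOR=None
FRAGMENTATION=Off
MI_HANDLER=None
ME_QED=Off
BEAM_REMNANTS=0;
```

These settings can be added to the (run) section of Run.dat or appended to the Sherpa command line in the same form.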


2.2.4 Running Sherpa with AMEGIC++

When Sherpa is run using the matrix-element generator AMEGIC++, it is necessary to run it twice. During the first run (the initialization run) Feynman diagrams for the hard processes are constructed and translated into helicity amplitudes. Furthermore, suitable phase-space mappings are produced. The amplitudes and corresponding integration channels are written to disk as C++ source code, placed in a subdirectory called Process. The initialization run is started using the standard Sherpa executable, as described in Running Sherpa. The relevant command is

 
<prefix>/bin/Sherpa 

The initialization run stops with the message "New libraries created. Please compile.", which is nothing but the request to carry out the compilation and linking procedure for the generated matrix-element libraries. The makelibs script, provided for this purpose and created in the working directory, must be invoked by the user (see ./makelibs -h for help):

 
./makelibs

Note that the following tools have to be available for this step: autoconf, automake and libtool.

Alternatively, if scons is installed, you may invoke <prefix>/bin/make2scons and run scons install.

Afterwards Sherpa can be restarted using the same command as before. In this run (the generation run) the cross sections of the hard processes are evaluated. Simultaneously the integration over phase space is optimized to arrive at an efficient event generation.


3. ME-PS merging

For a large fraction of LHC final states, the application of reconstruction algorithms leads to the identification of several hard jets. A major task is to distinguish whether such events are signals for new physics or just manifestations of SM physics. Related calculations therefore need to describe as accurately as possible both the full matrix element for the underlying hard processes as well as the subsequent evolution and conversion of the hard partons into jets of hadrons. Several scales determine the evolution of the event. This makes it difficult to unambiguously disentangle the components which belong to the hard process from those of the parton evolution. Given an n-jet event of well separated partons, its jet structure is retained when emitting a further collinear or soft parton. An additional hard, large-angle emission however gives rise to an extra jet, changing the n to an n+1 final state. The merging scheme has to define, on an event-by-event basis, which route is followed. Its primary goals are to avoid double counting by preventing events from appearing twice, i.e. once for each possibility, as well as dead regions by generating each configuration only once and using the appropriate path.

Various such merging schemes have been proposed. The currently most advanced treatment at tree level is detailed in [Hoe09]. It relies on a strict separation of the phase space for additional QCD radiation into a matrix-element and a parton-shower domain. Truncated showers are then needed to account for potential radiation in the parton-shower domain, if radiation in the matrix-element domain has already occurred. This technique has been applied to the simulation of final states containing hard photons [Hoe09a] and has been extended to multi-scale processes where the leading order is dominated by very low scales [Car09]. A merging approach similar to [Hoe09] was presented in [Ham09a] for the special case of angular-ordered parton showers. Several older approaches exist. The CKKW scheme, a procedure similar to the truncated shower merging, was introduced in [Cat01a]. Its extension to hadronic processes has been discussed in [Kra02], and the approach has been validated for several cases [Sch05], [Kra05a], [Kra04], [Kra05], [Gle05]. A reformulation of CKKW to a merging procedure in conjunction with a dipole shower (CKKW-L) has been presented in [Lon01], improved in [Lon11] and [Lon12b] and extended to the one-loop case in [Lav08] and [Lon12a]. The MLM scheme has been developed using a geometric analysis of the unconstrained radiation pattern in terms of cone jets to generate the inclusive samples [Man01], [Man06]. In a number of works, all these different algorithms have been implemented in different variations on different levels of sophistication in conjunction with various matrix-element generators or already in full-fledged event generators. Their respective results have been compared e.g. in [Hoc06], [Alw07]. Common to all schemes is that sequences of tree-level multileg matrix elements with increasing final-state multiplicity are merged with parton showers to yield a fully inclusive sample with no double counting.
Their connection with truncated shower merging is outlined in [Hoe09].


3.1 The algorithm implemented in Sherpa

In Sherpa the merging of matrix elements and parton showers is accomplished as follows, cf. [Hoe09]:

  1. All cross sections sigma_k for processes with k=0,1,...,N extra partons are calculated with the constraint that the matrix-element final states pass the jet criteria. They are determined by the jet measure shown below, and the minimal distance is set by the actual merging scale Q_cut. The measure used for jet identification can be written as
     
      Q_ij^2 = 2 p_i.p_j min_{k != i,j} { 2/(C_ijk + C_jik) }
    

    where the minimum is taken over the colour-connected partons k (k different from i and j), and where, for final state partons i and j,

     
      C_ijk = p_i.p_k/((p_i+p_k).p_j) - m_i^2/(2 p_i.p_j)    if j is a gluon,
    
      C_ijk = 1                                              otherwise.
    
  2. Processes of fixed parton multiplicity are chosen with probability sigma_k/(sum of all sigma_k). The event’s hard process is picked from the list of partonic processes having the desired multiplicity and according to their particular cross-section contributions. All particle momenta are distributed respecting the correlations encoded in the matrix elements. Merged samples therefore fully include lepton-jet and jet–jet correlations up to N extra jets.
  3. The parton-shower equivalent of the final-state parton configuration of the matrix element is determined in order to perform the reweighting. Matrix elements are interpreted in the large N_c limit. The final state is clustered according to parton-shower branching probabilities and kinematics. The clustering is guided by physically allowed parton combinations, restricting the shower histories to those which correspond to valid Feynman diagrams. It is stopped after a 2->2 configuration (called core process) has been identified, or if two subsequent clusterings are unordered in terms of the shower evolution parameter.
  4. A scale for the core process is defined. This step can be customized using the keyword ‘CORE_SCALE’, as described in METS scale setting with multiparton core processes.
  5. The reweighting proceeds according to the reconstructed shower history. The event is accepted or rejected according to a kinematics-dependent weight, which corresponds to evaluating strong couplings in the parton shower scheme.
  6. The parton-shower evolution is started with suitably defined scales for intermediate and final-state particles.
  7. Intermediate partons undergo truncated shower evolution. This allows parton-shower emissions between the scales of one matrix element branching and the next. This leads to a situation where, due to additional partons originating from these branchings, the kinematics of the next matrix-element branching needs to be redefined. If for any reason (e.g. energy-momentum conservation) the matrix element branching cannot be reconstructed after a truncated shower branching, this shower branching is vetoed.
  8. In all circumstances parton-shower radiation is subject to the condition that no extra jet is produced. If any emission turns out to be harder than the separation cut Q_cut, the event is vetoed. This effectively implements a Sudakov rejection and reduces the individual inclusive cross sections to exclusive ones. The exception to this veto – called highest-multiplicity treatment – applies to matrix-element configurations with the maximal number N of extra partons. These cases require the parton shower to cover the phase space for more jets than those produced by the matrix elements. To obtain an inclusive N-jet prediction, the veto is therefore applied only to emissions at scales harder than the softest parton-shower scale, so that the shower can still produce emissions harder than the separation scale Q_cut. Of course, correlations involving the (N+1)-th jet are only approximately taken into account.
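
As an illustration, the jet measure from step 1 can be sketched in Python. The momenta and the set of colour-connected spectators are hypothetical inputs; this is a sketch of the formula, not Sherpa's implementation:

```python
def mdot(p, q):
    # Minkowski product p.q for p = (E, px, py, pz), metric (+,-,-,-)
    return p[0]*q[0] - p[1]*q[1] - p[2]*q[2] - p[3]*q[3]

def C_ijk(pi, pj, pk, j_is_gluon, m_i=0.0):
    # C_ijk from step 1: nontrivial only if parton j is a gluon
    if j_is_gluon:
        pik = tuple(a + b for a, b in zip(pi, pk))
        return mdot(pi, pk)/mdot(pik, pj) - m_i**2/(2.0*mdot(pi, pj))
    return 1.0

def Q2(pi, pj, spectators, i_is_gluon=False, j_is_gluon=False):
    # Q_ij^2 = 2 p_i.p_j * min over colour-connected k of 2/(C_ijk + C_jik)
    return 2.0*mdot(pi, pj)*min(
        2.0/(C_ijk(pi, pj, pk, j_is_gluon) + C_ijk(pj, pi, pk, i_is_gluon))
        for pk in spectators)
```

For two quarks both C factors reduce to 1, and Q_ij^2 collapses to twice the invariant dot product 2 p_i.p_j, independent of the spectators.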

3.2 Generation of merged samples

The generation of inclusive event samples, i.e. the combination of matrix elements for different parton multiplicities with parton showers and hadronization, is fully automated within Sherpa. To obtain consistent results, certain parameters related to the matrix-element calculation and the parton showers have to be set accordingly. In the following the basic parameter settings for generating “merged” samples are summarised, and potential pitfalls are pointed out.

  1. Process setup

    The starting point is the definition of a basic core (lowest-order) process with respect to which the impact of additional QCD radiation shall be studied. As an illustrative example, consider Drell–Yan lepton-pair production in proton–proton collisions. The lowest-order process reads pp -> l lbar, mediated by Z/photon exchange. Additional QCD radiation then manifests itself through additional QCD partons in the final state, i.e. pp -> l lbar + n jets with n=1,...,N. To initialise the calculation of all the different matrix elements (for pp -> l lbar + 0,1,...,N QCD partons) in a single generator run, besides selecting the basic core process, the maximal number N of additional final-state QCD partons has to be specified in the (processes) section of the steering file. For the above example, assuming N=3, this reads:

        Process 93 93 -> 90 90 93{3}
        Order_EW 2
    

    N is given in the curly brackets belonging to the 93, the code for QCD partons. Note that it is mandatory to fix the order of electroweak couplings to the corresponding order of the basic core process, here pp -> l lbar or 93 93 -> 90 90, as only QCD corrections to this process can be considered; further electroweak corrections are not treated by Sherpa’s ME-PS merging implementation.

  2. Setting the merging scale

    The most important parameter to be specified when generating merged samples with Sherpa is the actual value of the jet resolution that separates the subsamples of different parton multiplicities, the merging scale.

    The jet criterion is explained in The algorithm implemented in Sherpa. The separation cut, Q_cut, must be specified using the CKKW tag, usually in the form (Q_cut/E_CMS)^2. For example, a valid setting reads

        CKKW sqr(20/E_CMS)
    

    and must be included in the process specification, before the End process line. As mentioned before, all extra QCD parton radiation is regularised by satisfying the jet criterion. However, divergences of the basic core process, such as vanishing invariant masses of lepton pairs, need to be regularised by imposing additional cuts, see Selectors.

  3. Parton showering

    The parton shower must always be enabled.

Further remarks

Although the merging of matrix-element samples of different multiplicities with parton showers attached is automated, some care has to be taken to ensure physically meaningful results. Some of the most prominent mistakes are listed here:

A few more comments related to Sherpa’s merging:


4. Cross section

To determine the total cross section, in particular in the context of ME+PS merging with Sherpa, the final output of the event generation run should be used, e.g.

+-----------------------------------------------------+
|                                                     |
|  Total XS is 1612.17 pb +- ( 8.48908 pb = 0.52 % )  |
|                                                     |
+-----------------------------------------------------+

Note that the Monte Carlo error quoted for the total cross section is determined during event generation. It might therefore differ substantially from the errors quoted during the integration step, and it can be reduced simply by generating more events.
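
Because the statistical uncertainty of a Monte Carlo estimate falls like 1/sqrt(N), the effect of generating more events can be projected with a simple back-of-the-envelope calculation (generic statistics, not a Sherpa feature):

```python
import math

def projected_rel_error(rel_error, n_events, n_events_new):
    # Statistical MC error scales like 1/sqrt(number of events)
    return rel_error*math.sqrt(n_events/n_events_new)

# e.g. quadrupling the statistics halves a 0.52 % error to 0.26 %
```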

In contrast to plain fixed order results, Sherpa’s total cross section is composed of values from various fixed order processes, namely those which are combined by applying the ME+PS merging, see ME-PS merging. In this context, it is important to note that

The exclusive higher-order tree-level cross sections determined during the integration step are meaningless by themselves; only the inclusive cross section printed at the end of the event generation run is to be used.

This value has the same formal accuracy as a fixed order result for the inclusive reaction (e.g. pp->Z in Running Sherpa), but it might still differ by a significant amount, see [Hoe09] and [Lon12b] for details. Depending on jet definitions, process etc., the merged cross section may be either larger or smaller than the fixed order cross section.

Sherpa total cross sections have leading-order accuracy when the generator is run in LO merging mode; in NLO merging mode they have NLO accuracy.

Broadly speaking, Sherpa’s ME-PS merging is adequate for capturing the information from (resummed) logarithmic corrections to the leading order, as is the parton shower. In contrast, higher-order cross sections computed inclusively are dominated by finite terms, which the merging algorithm cannot predict; this is why Sherpa’s cross section is not a better approximation to higher-order cross sections. On the other hand, shape observables (especially jet transverse momenta and the like) are typically dominated by logarithmic corrections, and for such observables Sherpa can be expected to perform reasonably well.


5. Command line options

The following command line options are available for Sherpa:

-f <file>

Read input from file ‘<file>’.

-p <path>

Read input file from path ‘<path>’.

-L <path>

Set Sherpa library path to ‘<path>’, see SHERPA_CPP_PATH.

-e <events>

Set number of events to generate ‘<events>’, see EVENTS.

-t <type>

Set the event type to ‘<type>’, see EVENT_TYPE.

-r <results>

Set the result directory to ‘<results>’, see RESULT_DIRECTORY.

-R <seed>

Set the seed of the random number generator to ‘<seed>’, see RANDOM_SEED.

-m <generators>

Set the matrix element generator list to ‘<generators>’, see ME_SIGNAL_GENERATOR.

-w <mode>

Set the event generation mode to ‘<mode>’, see EVENT_GENERATION_MODE.

-s <generator>

Set the parton shower generator to ‘<generator>’, see SHOWER_GENERATOR.

-F <module>

Set the fragmentation module to ‘<module>’, see Fragmentation.

-D <module>

Set the hadron decay module to ‘<module>’, see Hadron decays.

-a <analyses>

Set the analysis handler list to ‘<analyses>’, see ANALYSIS.

-A <path>

Set the analysis output path to ‘<path>’, see ANALYSIS_OUTPUT.

-O <level>

Set general output level ‘<level>’, see OUTPUT.

-o <level>

Set output level for event generation ‘<level>’, see OUTPUT.

-l <logfile>

Set log file name ‘<logfile>’, see LOG_FILE.

-j <threads>

Set number of threads ‘<threads>’, see Multi-threading.

-g

Do not create result directory, see RESULT_DIRECTORY.

-b

Switch to non-batch mode, see BATCH_MODE.

-V

Print extended version information at startup.

-v, --version

Print versioning information.

-h, --help

Print a help message.

PARAMETER=VALUE

Set the value of a parameter, see Parameters.

TAG:=VALUE

Set the value of a tag, see Tags.


6. Input structure

A Sherpa setup is steered by various parameters, associated with the different components of event generation.

These have to be specified in a run-card which by default is named “Run.dat” in the current working directory. If you want to use a different setup directory for your Sherpa run, you have to specify it on the command line as ‘-p <dir>’ or ‘PATH=<dir>’. To read parameters from a run-card with a different name, you may specify ‘-f <file>’ or ‘RUNDATA=<file>’.

Sherpa’s parameters are grouped according to the different aspects of event generation, e.g. the beam parameters in the group ‘(beam)’ and the fragmentation parameters in the group ‘(fragmentation)’. In the run-card this looks like:

  (beam){
    BEAM_ENERGY_1 = 7000.
    ...
  }(beam)

Each of these groups is described in detail in another chapter of this manual, see Parameters.

If such a section or file does not exist in the setup directory, a Sherpa-wide fallback mechanism is employed, searching for the file in various locations in the following order (where $SHERPA_DAT_PATH is an optionally set environment variable):

All parameters can be overwritten on the command line, i.e. command-line input has the highest priority. The syntax is

  <prefix>/bin/Sherpa  KEYWORD1=value1 KEYWORD2=value2 ...

To change, e.g., the default number of events, the corresponding command line reads

  <prefix>/bin/Sherpa  EVENTS=10000

Throughout Sherpa, particles are identified by the particle codes proposed by the PDG. These codes and the particle properties are listed during each run with ‘OUTPUT=2’ for the elementary particles and ‘OUTPUT=4’ for the hadrons. In both cases, antiparticles are characterized by a minus sign in front of their code, e.g. a mu- has code ‘13’, while a mu+ has ‘-13’.

All quantities have to be specified in units of GeV and millimeters. The same units apply to all numbers in the event output (momenta, vertex positions). Scattering cross sections are quoted in picobarn in the output.

There are a few extra features for an easier handling of the parameter file(s), namely global tag replacement, see Tags, and algebra interpretation, see Interpreter.


6.1 Interpreter

Sherpa has a built-in interpreter for algebraic expressions, like ‘cos(5/180*M_PI)’. This interpreter is employed when reading integer and floating-point numbers from input files, such that certain parameters can be written in a more convenient fashion. For example, it is possible to specify the factorisation scale as ‘sqr(91.188)’.
There are predefined tags to simplify such expressions:

M_PI

Ludolph’s Number to a precision of 12 digits.

M_C

The speed of light in vacuum.

E_CMS

The total centre of mass energy of the collision.
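
How such expressions evaluate can be mimicked with a small Python sketch that binds the tags listed above (M_PI, E_CMS) and the extra function sqr; this is an illustration only, not Sherpa's interpreter:

```python
import math

def interpret(expr, e_cms):
    # Evaluate a C-like algebraic expression with a few predefined
    # tags and functions, mimicking the behaviour described above.
    env = {"M_PI": math.pi, "E_CMS": e_cms, "sqr": lambda x: x*x,
           "cos": math.cos, "sqrt": math.sqrt, "min": min, "max": max}
    return eval(expr, {"__builtins__": {}}, env)
```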

The expression syntax is in general C-like, except for the extra function ‘sqr’, which gives the square of its argument. Operator precedence is the same as in C. The interpreter can handle functions with an arbitrary list of parameters, such as ‘min’ and ‘max’.
The interpreter can also be employed to construct arbitrary variables from four-momenta, e.g. in the context of a parton-level selector, see Selectors. The corresponding functions are

Mass(v)

The invariant mass of v in GeV.

Abs2(v)

The invariant mass squared of v in GeV^2.

PPerp(v)

The transverse momentum of v in GeV.

PPerp2(v)

The transverse momentum squared of v in GeV^2.

MPerp(v)

The transverse mass of v in GeV.

MPerp2(v)

The transverse mass squared of v in GeV^2.

Theta(v)

The polar angle of v in radians.

Eta(v)

The pseudorapidity of v.

Y(v)

The rapidity of v.

Phi(v)

The azimuthal angle of v in radians.

Comp(v,i)

The i’th component of the vector v. i=0 is the energy/time component, i=1, 2, and 3 are the x, y, and z components.

PPerpR(v1,v2)

The relative transverse momentum between v1 and v2 in GeV.

ThetaR(v1,v2)

The relative angle between v1 and v2 in radians.

DEta(v1,v2)

The pseudo-rapidity difference between v1 and v2.

DY(v1,v2)

The rapidity difference between v1 and v2.

DPhi(v1,v2)

The relative azimuthal angle between v1 and v2 in radians.
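
The conventions behind several of these functions can be made explicit with reference definitions in Python, taking v = (E, px, py, pz) in GeV. This is a sketch of the formulas for illustration, not Sherpa's implementation:

```python
import math

def Abs2(v):
    # invariant mass squared: E^2 - px^2 - py^2 - pz^2
    return v[0]**2 - v[1]**2 - v[2]**2 - v[3]**2

def Mass(v):
    return math.sqrt(max(Abs2(v), 0.0))

def PPerp(v):
    return math.hypot(v[1], v[2])

def MPerp(v):
    # transverse mass squared: m^2 + pT^2 = E^2 - pz^2
    return math.sqrt(Abs2(v) + PPerp(v)**2)

def Eta(v):
    p = math.sqrt(v[1]**2 + v[2]**2 + v[3]**2)
    return 0.5*math.log((p + v[3])/(p - v[3]))

def Y(v):
    return 0.5*math.log((v[0] + v[3])/(v[0] - v[3]))

def DPhi(v1, v2):
    d = abs(math.atan2(v1[2], v1[1]) - math.atan2(v2[2], v2[1]))
    return min(d, 2.0*math.pi - d)
```

For massless vectors Eta and Y coincide, which provides a quick consistency check.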


6.2 Tags

Tag replacement in Sherpa is performed through the data reading routines, which means that it can be performed for virtually all inputs. Specifying a tag on the command line using the syntax ‘<Tag>:=<Value>’ will replace every occurrence of ‘<Tag>’ in all files during read-in. An example tag definition could read

  <prefix>/bin/Sherpa QCUT:=20 NJET:=3

and then be used in the (me) and (processes) sections like

  (me){
    RESULT_DIRECTORY = Result_QCUT/
  }(me)
  (processes){
    Process 93 93 -> 11 -11 93{NJET}
    Order_EW 2;
    CKKW sqr(QCUT/E_CMS)
    End process;
  }(processes)

7. Parameters

A Sherpa setup is steered by various parameters, associated with the different components of event generation. These are set in Sherpa’s run-card, see Input structure for more details. Tag replacements may be performed in all inputs, see Tags.


7.1 Run parameters

The following parameters describe general run information. They may be set in the (run) section of the run-card, see Input structure.


7.1.1 EVENTS

This parameter specifies the number of events to be generated.
It can alternatively be set on the command line through option ‘-e’, see Command line options.


7.1.2 EVENT_TYPE

This parameter specifies the kind of events to be generated. It can alternatively be set on the command line through option ‘-t’, see Command line options.

Alternatively there are two more specialised modes, namely:


7.1.3 TUNE

This parameter specifies which tune is to be used. Setting different tunes using this parameter ensures that consistent settings are employed. This affects mostly Multiple interactions and Intrinsic Transverse Momentum parameters. Possible values are:


7.1.4 OUTPUT

This parameter specifies the output level (verbosity) of the program.
It can alternatively be set on the command line through option ‘-O’, see Command line options. A different output level can be specified for the event generation step through ‘EVT_OUTPUT’ or command line option ‘-o’, see Command line options.

The value can be any sum of the following:

E.g. OUTPUT=3 would display information, events and errors.


7.1.5 LOG_FILE

This parameter specifies the log file. If set, the standard output from Sherpa is written to the specified file, but output from child processes is not redirected. This option is particularly useful to produce clean log files when running the code in MPI mode, see MPI parallelization. A file name can alternatively be specified on the command line through option ‘-l’, see Command line options.


7.1.6 RANDOM_SEED

Sherpa uses different random-number generators. The default is the Ran3 generator described in [ISBN-10:0521880688]. Alternatively, a combination of George Marsaglia’s KISS and SWB generators [Ann.Appl.Probab.1,3(1991)462] can be employed, see this website. The integer-valued seeds of the generators are specified by ‘RANDOM_SEED=A .. D’. They can also be set individually using ‘RANDOM_SEED1=A’ through ‘RANDOM_SEED4=D’. The Ran3 generator takes only one argument. This value can also be set using the command line option ‘-R’, see Command line options.


7.1.7 EVENT_SEED_MODE

The tag ‘EVENT_SEED_MODE’ can be used to enforce the same seeds in different runs of the generator. When set to 1, seed files are written to disk. These files are gzip compressed, if Sherpa was compiled with option ‘--enable-gzip’. When set to 2, existing random seed files are read and the seed is set to the next available value in the file before each event. When set to 3, Sherpa uses an internal bookkeeping mechanism to advance to the next predefined seed. No seed files are written out or read in.


7.1.8 ANALYSIS

Analysis routines can be switched on or off by setting the ANALYSIS flag. The default is no analysis, corresponding to option ‘0’. This parameter can also be specified on the command line using option ‘-a’, see Command line options.

The following analysis handlers are currently available

Internal

Sherpa’s internal analysis handler.
To use this option, the package must be configured with option ‘--enable-analysis’.
An output directory can be specified using ANALYSIS_OUTPUT.

Rivet

The Rivet package, see Rivet Website.
To enable it, Rivet and HepMC have to be installed and Sherpa must be configured as described in Rivet analyses.

HZTool

The HZTool package, see HZTool Website.
To enable it, HZTool and CERNLIB have to be installed and Sherpa must be configured as described in HZTool analyses.

Multiple options can be combined using a comma, e.g. ‘ANALYSIS=Internal,Rivet’.


7.1.9 ANALYSIS_OUTPUT

Name of the directory for histogram files when using the internal analysis, and name of the AIDA file when using Rivet, see ANALYSIS. The directory/file will be created relative to the working directory. The default value is ‘Analysis/’. This parameter can also be specified on the command line using option ‘-A’, see Command line options.


7.1.10 TIMEOUT

A run-time limitation can be imposed through TIMEOUT, given in user CPU seconds. This option is of some relevance when running Sherpa on a batch system: since in many cases jobs are simply terminated, this allows one to interrupt a run, store all relevant information and restart it without any loss. This is particularly useful when carrying out long integrations. The default setting, TIMEOUT=-1, means no run-time limitation at all.


7.1.11 RLIMIT_AS

A memory limitation can be imposed to prevent Sherpa from crashing the system it is running on, as it continues to build up matrix elements and loads additional libraries at run time. By default the maximum RAM of the system is determined and set as the memory limit. This can be changed by giving ‘RLIMIT_AS=<size>’, where the size is given in a format like 500 MB, 4 GB, or 10 %. The space between number and unit is mandatory. When running with MPI parallelization it might be necessary to divide the total maximum by the number of cores. This can be done by setting RLIMIT_BY_CPU=1.

Sherpa checks for memory leaks during integration and event generation. If the allocated memory after start of integration or event generation exceeds the parameter ‘MEMLEAK_WARNING_THRESHOLD’, a warning is printed. Like ‘RLIMIT_AS’, ‘MEMLEAK_WARNING_THRESHOLD’ can be set using units. However, no spaces are allowed between the number and the unit. The warning threshold defaults to 16MB.
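
The accepted size format (‘500 MB’, ‘4 GB’, ‘10 %’, with the mandatory space) can be parsed as in the following hypothetical Python helper; parse_size is an illustration of the format, not part of Sherpa:

```python
def parse_size(size, total_ram_bytes=None):
    # Split at the mandatory space between number and unit
    number, unit = size.split(" ")
    if unit == "%":
        # Percentage of the system's maximum RAM
        return int(total_ram_bytes*float(number)/100.0)
    factor = {"MB": 1 << 20, "GB": 1 << 30}[unit]
    return int(float(number)*factor)
```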


7.1.12 BATCH_MODE

Whether or not to run Sherpa in batch mode. The default is ‘1’, meaning Sherpa does not attempt to save runtime information when catching a signal or an exception. If instead option ‘0’ is used, Sherpa will store potential integration information and analysis results once the run is terminated abnormally. All possible settings are:

The settings are additive such that multiple settings can be employed at the same time.

Note that when running the code on a cluster or in a grid environment, BATCH_MODE should always contain setting 1 (i.e. BATCH_MODE=[1|3|5|7]).

The command line option ‘-b’ should therefore not be used in this case, see Command line options.


7.1.13 NUM_ACCURACY

The targeted numerical accuracy can be specified through NUM_ACCURACY, e.g. for comparing two numbers. This might have to be reduced if gauge tests fail for numerical reasons.


7.1.14 SHERPA_CPP_PATH

The path in which Sherpa will eventually store dynamically created C++ source code. If not specified otherwise, sets ‘SHERPA_LIB_PATH’ to ‘$SHERPA_CPP_PATH/Process/lib’. This value can also be set using the command line option ‘-L’, see Command line options.


7.1.15 SHERPA_LIB_PATH

The path in which Sherpa looks for dynamically linked libraries from previously created C++ source code, cf. SHERPA_CPP_PATH.


7.1.16 Event output formats

Sherpa provides the possibility to output events in various formats, e.g. the HepEVT common block structure or the HepMC format. The authors of Sherpa assume that the user is sufficiently acquainted with these formats when selecting them.

If the events are to be written to file, the parameter ‘EVENT_OUTPUT’ must be specified together with a file name. An example would be EVENT_OUTPUT=HepMC_GenEvent[MyFile], where MyFile stands for the desired file base name. The following formats are currently available:

HepMC_GenEvent

Generates output in HepMC::IO_GenEvent format. The HepMC::GenEvent::m_weights weight vector stores the following items: [0] event weight, [1] combined matrix element and phase space weight (missing only PDF information, thus directly suitable for PDF reweighting), [2] event weight normalisation (in case of unweighted events, event weights of ~ +/-1 can be obtained as (event weight)/(event weight normalisation)), and [3] number of trials.
The total cross section of the simulated event sample can be computed as the sum of event weights divided by the sum of the numbers of trials. This value must agree with the total cross section quoted by Sherpa at the end of the event generation run, and it can serve as a cross-check on the consistency of the HepMC event file. Note that Sherpa conforms to the Les Houches 2013 suggestion (http://phystev.in2p3.fr/wiki/2013:groups:tools:hepmc) of indicating interaction types through the GenVertex type-flag. Multiple event weights will be enabled as soon as a similar standard has been defined.
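
The cross-check described above can be written down in a few lines of Python, assuming the m_weights entries of each event have been read into a list; this is an illustration of the formula, not a HepMC API:

```python
def total_cross_section(weight_vectors):
    # weight_vectors holds one m_weights list per event:
    # [0] event weight, ..., [3] number of trials (cf. above)
    return sum(w[0] for w in weight_vectors)/sum(w[3] for w in weight_vectors)
```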

HepMC_Short

Generates output in HepMC::IO_GenEvent format; however, only incoming beams and outgoing particles are stored. Intermediate and decayed particles are not listed. The event weights are stored as described above.

Delphes_GenEvent

Generates output in Root format, which can be passed to Delphes for analyses. Input events are taken from the HepMC interface. Storage space can be reduced by up to 50% compared to gzip compressed HepMC. This output format is available only if Sherpa was configured and installed with options ‘--enable-root’ and ‘--enable-delphes=/path/to/delphes’.

Delphes_Short

Generates output in Root format, which can be passed to Delphes for analyses. Only incoming beams and outgoing particles are stored.

PGS

Generates output in StdHEP format, which can be passed to PGS for analyses. This output format is available only if Sherpa was configured and installed with options ‘--enable-hepevtsize=4000’ and ‘--enable-pgs=/path/to/pgs’. Please refer to the PGS documentation for how to pass StdHEP event files on to PGS. If you are using the LHC Olympics executable, you may run ‘./olympics --stdhep events.lhe <other options>’.

PGS_Weighted

Generates output in StdHEP format, which can be passed to PGS for analyses. Event weights in the HEPEV4 common block are stored in the event file.

HEPEVT

Generates output in HepEvt format.

LHEF

Generates output in Les Houches Event File format. This output format is intended for matrix-element configurations only. Since the format requires PDF information to be written out in the outdated PDFLIB/LHAGLUE enumeration format, this is handled automatically only if LHAPDF is used; otherwise the identification numbers have to be given explicitly via LHEF_PDF_NUMBER (LHEF_PDF_NUMBER_1 and LHEF_PDF_NUMBER_2 if the two beams carry different structure functions). This format currently outputs matrix-element information only; in particular, no information about the large-Nc colour flow is given, as the LHEF output format is not suited to communicate enough information for meaningful parton showering on top of multiparton final states.

Root

Generates output in ROOT ntuple format for NLO event generation only. For details on the ntuple format, see Structure of ROOT NTuple Output. This output option is available only if Sherpa was linked to ROOT during installation by using the configure option --enable-root=/path/to/root. ROOT ntuples can be read back into Sherpa and analyzed using the option ‘EVENT_INPUT’. This feature is described in Production of NTuples.

The output can be further customized using the following options:

FILE_SIZE

Number of events per file (default: 1000).

NTUPLE_SIZE

File size per NTuple file (default: unlimited).

EVT_FILE_PATH

Directory where the files will be stored.

OUTPUT_PRECISION

Steers the precision of all numbers written to file.

For all output formats except ROOT and Delphes, events can be written directly to gzipped files instead of plain text. The option ‘--enable-gzip’ must be given during installation to enable this feature.


7.1.17 MPI parallelization

MPI parallelization in Sherpa can be enabled using the configuration option ‘--enable-mpi’. Sherpa supports OpenMPI and MPICH2. For detailed instructions on how to run a parallel program, please refer to the documentation of your local cluster resources or one of the many excellent introductions on the internet. MPI parallelization is mainly intended to speed up the integration process, as event generation can be parallelized trivially by starting multiple instances of Sherpa with different random seeds, cf. RANDOM_SEED. However, both the internal analysis module and the Root NTuple writeout can be used with MPI. Note that these require substantial data transfer.


7.1.18 Multi-threading

Multi-threaded integration in Sherpa can be enabled using the configuration option ‘--enable-multithread’. Subsequently the computation of amplitudes for large groups of processes is split into a number of threads which is limited from above by the parameter ‘PG_THREADS’. This parameter can also be specified using the command line option ‘-j’, see Command line options. Additionally, matrix-element calculation and phase-space evaluation for a single process with Comix can be distributed to different threads according to [Gle08]. The number of threads is then specified using the parameters ‘COMIX_ME_THREADS’ and ‘COMIX_PS_THREADS’, respectively.


7.2 Beam parameters

The setup of the colliding beams is covered by the (beam) section of the steering file or the beam data file Beam.dat, respectively, see Input structure. The mandatory settings to be made are

More options related to beamstrahlung and intrinsic transverse momentum can be found in the following subsections.


7.2.1 Beam Spectra

If desired, you can also specify spectra for beamstrahlung through BEAM_SPECTRUM_1 and BEAM_SPECTRUM_2. Possible values are

Monochromatic

The beam energy is unaltered and the beam particles remain unchanged. That is the default and corresponds to ordinary hadron-hadron or lepton-lepton collisions.

Laser_Backscattering

This can be used to describe the backscattering of a laser beam off initial leptons. The energy distribution of the emerging photon beams is modelled by the CompAZ parametrization, see [Zar02]. Note that this parametrization is valid only for the proposed TESLA photon collider, as various assumptions about the laser parameters and the initial lepton beam energy have been made. See details below.

Simple_Compton

This corresponds to a simple light backscattering off the initial lepton beam and produces initial-state photons with a corresponding energy spectrum. See details below.

EPA

This enables the equivalent photon approximation for colliding protons, see [Arc08]. The resulting beam particles are photons that follow a dipole form factor parametrization, cf. [Bud74]. The authors would like to thank T. Pierzchala for his help in implementing and testing the corresponding code. See details below.

Spectrum_Reader

A user defined spectrum is used to describe the energy spectrum of the assumed new beam particles. The name of the corresponding spectrum file needs to be given through the keywords SPECTRUM_FILE_1 and SPECTRUM_FILE_2.

The BEAM_SMIN and BEAM_SMAX parameters may be used to specify the minimum/maximum fraction of cms energy squared after Beamstrahlung. The reference value is the total centre of mass energy squared of the collision, not the centre of mass energy after eventual Beamstrahlung.
The parameter can be specified using the internal interpreter, see Interpreter, e.g. as ‘BEAM_SMIN sqr(20/E_CMS)’.


7.2.1.1 Laser Backscattering

The energy distribution of the photon beams is modelled by the CompAZ parametrisation, see [Zar02], with various assumptions valid only for the proposed TESLA photon collider. The laser energies can be set by E_LASER_1/2 for the respective beam. P_LASER_1/2 sets their polarisations, defaulting to 0. LASER_MODE takes the values -1, 0, and 1, defaulting to 0. LASER_ANGLES and LASER_NONLINEARITY take the values On and Off, both defaulting to Off.


7.2.1.2 Simple Compton

This corresponds to a simple light backscattering off the initial lepton beam and produces initial-state photons with a corresponding energy spectrum. It is a special case of the above Laser Backscattering with LASER_MODE=-1.


7.2.1.3 EPA

The equivalent photon approximation, cf. [Arc08], [Bud74], has a few free parameters:

EPA_q2Max_1/2

Parameter of the EPA spectrum of the respective beam, defaults to 2. in units of GeV squared.

EPA_ptMin_1/2

Infrared regulator for the EPA spectrum. Given in GeV, the value must be between 0. and 1. for the approximation to hold. Defaults to 0., i.e. the spectrum has to be regulated by cuts on the observable, cf. Selectors.

EPA_Form_Factor_1/2

Form factor model to be used on the respective beam. The options are 0 (pointlike), 1 (homogeneously charged sphere), 2 (Gaussian-shaped nucleus), and 3 (homogeneously charged sphere, smoothed at low and high x). Applicable only to heavy-ion beams. Defaults to 0.

EPA_AlphaQED

Value of alphaQED to be used in the EPA. Defaults to 0.0072992701.
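As a sketch, an EPA photon beam off a proton might be configured as follows. The spectrum name EPA and the parameter placement are assumptions based on the keywords above; all numerical values are illustrative:

  BEAM_1 2212; BEAM_ENERGY_1 6500.;
  BEAM_SPECTRUM_1 EPA;
  EPA_q2Max_1 2.; EPA_ptMin_1 0.5;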


7.2.2 Intrinsic Transverse Momentum

K_PERP_MEAN_1

This parameter specifies the mean intrinsic transverse momentum for the first (left) beam in case of hadronic beams, such as protons.
The default value for protons is 0.8 GeV.

K_PERP_MEAN_2

This parameter specifies the mean intrinsic transverse momentum for the second (right) beam in case of hadronic beams, such as protons.
The default value for protons is 0.8 GeV.

K_PERP_SIGMA_1

This parameter specifies the width of the Gaussian distribution of intrinsic transverse momentum for the first (left) beam in case of hadronic beams, such as protons.
The default value for protons is 0.8 GeV.

K_PERP_SIGMA_2

This parameter specifies the width of the Gaussian distribution of intrinsic transverse momentum for the second (right) beam in case of hadronic beams, such as protons.
The default value for protons is 0.8 GeV.
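For instance, a sketch of setting the intrinsic transverse momentum parameters for both proton beams explicitly (the values simply restate the quoted defaults and are illustrative, not recommendations):

  K_PERP_MEAN_1 0.8;  K_PERP_SIGMA_1 0.8;
  K_PERP_MEAN_2 0.8;  K_PERP_SIGMA_2 0.8;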

If the option ‘BEAM_REMNANTS=0’ is specified, pure parton-level events are simulated, i.e. no beam remnants are generated. Accordingly, partons entering the hard scattering process do not acquire primordial transverse momentum.


7.3 ISR parameters

The following parameters are used to steer the setup of beam substructure and initial state radiation (ISR). They may be set in the (isr) section of the run-card, see Input structure.

BUNCH_1/BUNCH_2

Specify the PDG ID of the first (left) and second (right) bunch particle, i.e. the particle after eventual Beamstrahlung specified through the beam parameters, see Beam parameters. Per default these are taken to be identical to the parameters BEAM_1/BEAM_2, assuming the default beam spectrum is Monochromatic. In case the Simple Compton or Laser Backscattering spectra are enabled the bunch particles would have to be set to 22, the PDG code of the photon.

ISR_SMIN/ISR_SMAX

These parameters specify the minimum/maximum fraction of cms energy squared after ISR. The reference value is the total centre of mass energy squared of the collision, not the centre of mass energy after eventual Beamstrahlung.
The parameter can be specified using the internal interpreter, see Interpreter, e.g. as ‘ISR_SMIN=sqr(20/E_CMS)’.
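For example, a sketch of an (isr) section for a proton-proton setup with a restricted ISR phase space (the cut value is an illustrative assumption):

  BUNCH_1 2212; BUNCH_2 2212;
  ISR_SMIN sqr(20/E_CMS);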

Sherpa provides access to a variety of structure functions. They can be configured with the following parameters.

PDF_LIBRARY

Switches between different interfaces to PDFs. If the two colliding beams are of different type, e.g. protons and electrons or photons and electrons, it is possible to specify two different PDF libraries using PDF_LIBRARY_1 and PDF_LIBRARY_2. The following options are distributed with Sherpa:

LHAPDFSherpa

Use PDFs from LHAPDF [Wha05]. The interface is only available if Sherpa has been compiled with support for LHAPDF, see Installation.

CT12Sherpa

Built-in library for some PDF sets from the CTEQ collaboration, cf. [Gao13].

CT10Sherpa

Built-in library for some PDF sets from the CTEQ collaboration, cf. [Lai10]. This is the default, if Sherpa has not been compiled with LHAPDF support.

CTEQ6Sherpa

Built-in library for some PDF sets from the CTEQ collaboration, cf. [Nad08].

MSTW08Sherpa

Built-in library for PDF sets from the MSTW group, cf. [Mar09a].

MRST04QEDSherpa

Built-in library for photon PDF sets from the MRST group, cf. [Mar04].

MRST01LOSherpa

Built-in library for the 2001 leading-order PDF set from the MRST group, cf. [Mar01].

MRST99Sherpa

Built-in library for the 1999 PDF sets from the MRST group, cf. [Mar99].

GRVSherpa

Built-in library for the GRV photon PDF [Glu91a], [Glu91].

PDFESherpa

Built-in library for the electron structure function. The perturbative order of the fine structure constant can be set using the parameter ISR_E_ORDER (default: 1). The switch ISR_E_SCHEME allows setting the scheme for treating non-leading terms. Possible options are 0 ("mixed choice"), 1 ("eta choice"), or 2 ("beta choice", default).

None

No PDF. Fixed beam energy.


Furthermore it is simple to build an external interface to an arbitrary PDF and load that dynamically in the Sherpa run. See External PDF for instructions.

PDF_SET

Specifies the PDF set for hadronic bunch particles. All sets available in the chosen PDF_LIBRARY can be listed by running Sherpa with the parameter SHOW_PDF_SETS=1, e.g.:

  Sherpa PDF_LIBRARY=CTEQ6Sherpa SHOW_PDF_SETS=1

If the two colliding beams are of different type, e.g. protons and electrons or photons and electrons, it is possible to specify two different PDF sets using PDF_SET_1 and PDF_SET_2.
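To illustrate, a sketch of an electron-proton setup with a different PDF library per beam. The library names are taken from the list above, but this particular combination is an illustrative assumption:

  BEAM_1 2212; BEAM_2 11;
  PDF_LIBRARY_1 CT10Sherpa; PDF_LIBRARY_2 PDFESherpa;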

PDF_SET_VERSION

This parameter allows the selection of a specific version (member) within the chosen PDF set. Specifying a negative value, e.g.

  PDF_LIBRARY LHAPDFSherpa;
  PDF_SET NNPDF12_100.LHgrid; PDF_SET_VERSION -100;

results in Sherpa sampling all sets 1..100, which can be used to obtain the averaging required when employing PDFs from the NNPDF collaboration [Bal08], [Bal09].


7.4 Model parameters

The interaction model setup is covered by the (model) section of the steering file or the model data file Model.dat, respectively.

The main switch here is called MODEL and sets the model that Sherpa uses throughout the simulation run. The default is ‘SM’, for the Standard Model. For a complete list of available models, run Sherpa with SHOW_MODEL_SYNTAX=1 on the command line. This will display not only the available models, but also the parameters for those models.

The chosen model also defines the list of particles and their default properties. With the following switches it is possible to change the properties of all fundamental particles:

MASS[<id>]

Sets the mass (in GeV) of the particle with PDG id ‘<id>’.
Masses of particles and corresponding anti-particles are always set simultaneously.
For particles with Yukawa couplings, those are enabled/disabled consistent with the mass (taking into account the MASSIVE flag) by default, but that can be modified using the ‘YUKAWA[<id>]’ parameter. Note that by default the Yukawa couplings are treated as running, cf. YUKAWA_MASSES.

MASSIVE[<id>]

Specifies whether the finite mass of particle with PDG id ‘<id>’ is to be considered in matrix-element calculations or not.

WIDTH[<id>]

Sets the width (in GeV) of the particle with PDG id ‘<id>’.

ACTIVE[<id>]

Enables/disables the particle with PDG id ‘<id>’.

STABLE[<id>]

Sets the particle with PDG id ‘<id>’ either stable or unstable according to the following options:

0

Particle and anti-particle are unstable

1

Particle and anti-particle are stable

2

Particle is stable, anti-particle is unstable

3

Particle is unstable, anti-particle is stable

This option applies to decays of hadrons (cf. Hadron decays) as well as particles produced in the hard scattering (cf. Hard decays). For the latter, alternatively the decays can be specified explicitly in the process setup (see Processes) to avoid the narrow-width approximation.

PRIORITY[<id>]

Allows overwriting the default automatic flavour sorting in a process by specifying a priority for the given flavour. This way one can identify certain particles which are part of a container (e.g. massless b-quarks), such that their position can be used reliably in selectors and scale setters.

Note: To set properties of hadrons, you can use the same switches (except for MASSIVE) in the fragmentation section, see Hadronization.
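Putting the above switches together, a hedged example of a (model) section adjusting particle properties; all numerical values are illustrative assumptions:

  (model){
    MASS[6] 173.2; WIDTH[6] 1.48;
    MASSIVE[5] 1;
    STABLE[6] 0;
  }(model)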


7.4.1 Standard Model

The SM inputs for the electroweak sector can be given in four different schemes, corresponding to different choices of which SM physics parameters are considered fixed and which are derived from the given quantities. The input schemes are selected through the EW_SCHEME parameter, whose default is ‘1’. The following options are provided:

The electroweak coupling is by default not running. If its running has been enabled (cf. COUPLINGS), its value at zero momentum transfer can be specified as input via 1/ALPHAQED(0).

To account for quark mixing the CKM matrix elements have to be assigned. For this purpose the Wolfenstein parametrization [Wol83] is employed. The order of expansion in the lambda parameter is defined through CKMORDER, with default ‘0’ corresponding to a unit matrix. The parameter convention for higher expansion terms reads:

The remaining parameter to fully specify the Standard Model is the strong coupling constant at the Z-pole, given through ALPHAS(MZ). Its default value is ‘0.118’. If the setup at hand involves hadron collisions and thus PDFs, the value of the strong coupling constant is automatically set consistent with the PDF fit and cannot be changed by the user. If Sherpa is compiled with LHAPDF support, it is also possible to use the alphaS evolution provided in LHAPDF by specifying USE_PDF_ALPHAS=1. For the strong coupling there is also the option to provide a fixed value to be used in matrix-element calculations in case running of the coupling is disabled (cf. COUPLINGS). The keyword is ALPHAS(default). When using a running strong coupling, the order of the perturbative expansion can be set through ORDER_ALPHAS, where the default ‘0’ corresponds to one-loop running and 1,2,3 to 2,3,4-loop running, respectively.

If unstable particles (e.g. W/Z bosons) appear as intermediate propagators in the process, Sherpa uses the complex mass scheme to construct MEs in a gauge-invariant way. For full consistency with this scheme, by default the dependent EW parameters are also calculated from the complex masses (‘WIDTHSCHEME=CMS’), yielding complex values e.g. for the weak mixing angle. To keep the parameters real one can set ‘WIDTHSCHEME=Fixed’. This may spoil gauge invariance though.
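A minimal sketch of electroweak input settings in the (model) section; the scheme choice and numerical value are illustrative assumptions:

  (model){
    EW_SCHEME 1;
    1/ALPHAQED(0) 137.03599976;
    WIDTHSCHEME CMS;
  }(model)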


7.4.2 Minimal Supersymmetric Standard Model

To use the MSSM within Sherpa (cf. [Hag05]) the MODEL switch has to be set to ‘MSSM’. Further, the parameter spectrum has to be fed in. To achieve this, files conforming to the SUSY Les Houches Accord [Ska03] are used. The actual SLHA file name has to be specified by SLHA_INPUT and has to reside in the current run directory, i.e. PATH. From this file the full low-scale MSSM spectrum is read, including sparticle masses, mixing angles etc. In addition, the particles' total widths are read from the input file. Note that masses and widths set through the SLHA input take precedence over settings via MASS[<id>] and WIDTH[<id>].
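A hedged sketch of an MSSM setup; the SLHA file name is a placeholder for a user-provided spectrum file residing in the run directory:

  (model){
    MODEL MSSM;
    SLHA_INPUT my_spectrum.slha;
  }(model)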


7.4.3 ADD Model of Large Extra Dimensions

In order to use the ADD model within Sherpa the switch MODEL = ADD has to be set. The parameters of the ADD model can be set as follows:

The variable N_ED specifies the number of extra dimensions. The value of the Newtonian constant can be specified in natural units using the keyword G_NEWTON. The string scale M_S can be defined by the parameter M_S. Setting the value of KK_CONVENTION allows switching between three widely used conventions for the definition of M_S and the way of summing internal Kaluza-Klein propagators. The switch M_CUT restricts the c.m. energy of the hard process to be below the specified scale.

The masses, widths, etc. of both additional particles can be set in the same way as for the Standard Model particles using the MASS[<id>] and WIDTH[<id>] keywords. The ids of the graviton and graviscalar are 39 and 40, respectively.

For details of the implementation, the reader is referred to [Gle03a].
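A sketch of an ADD model setup using the parameters above; all numerical values are illustrative assumptions, not recommendations:

  (model){
    MODEL ADD;
    N_ED 4; M_S 2000.; KK_CONVENTION 1; M_CUT 2000.;
  }(model)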


7.4.4 Anomalous Gauge Couplings

Sherpa includes a number of effective Lagrangians describing anomalous gauge interactions:

Due to the effective nature of the anomalous couplings, unitarity might be violated for coupling parameters other than the SM values. For very large momentum transfers, such as those probed at the LHC, this will lead to unphysical results. As discussed in Ref. [Bau88] this can be avoided by introducing form factors applied to the deviation of the coupling parameters from their Standard Model values. The corresponding switches are UNITARIZATION_SCALE, UNITARIZATION_N and UNITARIZATION_M. Triple and quartic anomalous gauge couplings can be unitarised separately, using UNITARIZATION_SCALE3, UNITARIZATION_SCALE4, UNITARIZATION_N3, UNITARIZATION_N4, UNITARIZATION_M3, and UNITARIZATION_M4. The default values are UNITARIZATION_SCALE=UNITARIZATION_SCALE3=UNITARIZATION_SCALE4=1000. (in units of GeV), UNITARIZATION_N=UNITARIZATION_N3=UNITARIZATION_N4=2 and UNITARIZATION_M=UNITARIZATION_M3=UNITARIZATION_M4=1.


7.4.5 Two Higgs Doublet Model

The THDM is incorporated as a subset of the MSSM Lagrangian. It is defined as the extension of the SM by a second SU(2) doublet of Higgs fields. Besides the particle content of the SM it contains interactions of five physical Higgs bosons: a light and a heavy scalar, a pseudo-scalar and two charged ones. Besides the SM inputs the model is defined through the masses and widths of the Higgs particles, MASS[PDG] and WIDTH[PDG], where PDG = [25,35,36,37] for h^0, H^0, A^0 and H^+, respectively. The inputs are complete when TAN(BETA), the ratio of the two Higgs vacuum expectation values, and ALPHA, the Higgs mixing angle, are specified.

The model is invoked by specifying MODEL = THDM in the (model) section of the steering file or the model data file Model.dat, respectively.
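For illustration, a hedged THDM setup sketch; all numerical values are assumptions, not physics recommendations:

  (model){
    MODEL THDM;
    TAN(BETA) 10.; ALPHA -0.1;
    MASS[35] 300.; WIDTH[35] 1.0;
    MASS[36] 300.; MASS[37] 310.;
  }(model)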


7.4.6 Effective Higgs Couplings

The EHC describes the effective coupling of gluons and photons to Higgs bosons via a top-quark loop, and a W-boson loop in case of photons. This supplement to the Standard Model can be invoked by specifying MODEL = SM+EHC in the (model) section of the steering file or the model data file Model.dat, respectively.

The effective coupling of gluons to the Higgs boson, g_ggH, can be calculated either for a finite top-quark mass or in the limit of an infinitely heavy top using the switch FINITE_TOP_MASS=[1,0]. Similarly, the photon-photon-Higgs coupling, g_ppH, can be calculated for finite top and/or W masses or in the infinite-mass limit using the switches FINITE_TOP_MASS=[1,0] and FINITE_W_MASS=[1,0]. The default choice in either case is the infinite-mass limit. The scale at which the effective coupling is evaluated can be varied by setting EHC_SCALE2 to a different value.

Either one of these couplings can be switched off using the DEACTIVATE_GGH=[1,0] and DEACTIVATE_PPH=[1,0] switches. Both default to 0.
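A minimal sketch invoking the EHC model with a finite top-quark mass in the effective gluon-Higgs coupling:

  (model){
    MODEL SM+EHC;
    FINITE_TOP_MASS 1; FINITE_W_MASS 0;
  }(model)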


7.4.7 Fourth Generation

The 4thGen model adds a fourth family of quarks and leptons to the Standard Model. It is invoked by specifying MODEL = SM+4thGen in the ‘(model)’ section of the steering file or the model data file ‘Model.dat’, respectively.

The masses and widths of the additional particles are defined via the usual MASS[PDG] and WIDTH[PDG] switches, where PDG = [7,8,17,18] for the fourth generation down and up quarks, the charged lepton and the neutrino, respectively. A general mixing is implemented for both leptons and quarks, parametrised through three additional mixing angles and two additional phases, as described in [Hou87a]: A_14, A_24, A_34, PHI_2 and PHI_3 for quarks, THETA_L14, THETA_L24, THETA_L34, PHI_L2 and PHI_L3 for leptons. Both 4x4 mixing matrices expand upon their 3x3 Standard Model counterparts: the CKM matrix for quarks and the unit matrix for leptons. Both mixing matrices can be printed on screen with OUTPUT_MIXING = 1.

Per default, all particles are set unstable and have to be decayed into Standard Model particles within the matrix element or set stable via STABLE[PDG] = 1.


7.4.8 FeynRules model

To use a model generated using the FeynRules package, cf. Refs. [Chr08] and [Chr09], the MODEL switch has to be set to ‘FeynRules’ and ME_SIGNAL_GENERATOR has to be set to ‘Amegic’. Note that, in order to obtain the FeynRules model output in a format readable by Sherpa, the FeynRules subroutine ‘WriteSHOutput[L]’ needs to be called for the desired model Lagrangian ‘L’. This results in a set of ASCII files that represent the considered model through its particle data, model parameters and interaction vertices. Note also that Sherpa/Amegic can only deal with Feynman rules in unitary gauge.

The FeynRules output files need to be copied to the current working directory or have to reside in the directory referred to by the PATH variable, cf. Input structure. There exists an agreed default naming convention for the FeynRules output files to be read by Sherpa. However, the explicit names of the input files can be changed. They are referred to by the variables

For more details on the Sherpa interface to FeynRules please consult [Chr09].


7.5 Matrix elements

The setup of matrix elements is covered by the ‘(me)’ section of the steering file or the ME data file ‘ME.dat’, respectively. There are no mandatory settings to be made.

The following parameters are used to steer the matrix element setup.


7.5.1 ME_SIGNAL_GENERATOR

The list of matrix element generators to be employed during the run. When setting up hard processes from the ‘(processes)’ section of the input file (see Processes), Sherpa calls these generators in order to check whether either one is capable of generating the corresponding matrix element. This parameter can also be set on the command line using option ‘-m’, see Command line options.

The built-in generators are

Internal

Simple matrix element library, implementing a variety of 2->2 processes.

Amegic

The AMEGIC++ generator published under [Kra01]

Comix

The Comix generator published under [Gle08]

It is possible to employ an external matrix element generator within Sherpa. For advice on this topic please contact the authors, Authors.


7.5.2 RESULT_DIRECTORY

This parameter specifies the name of the directory which is used by Sherpa to store integration results and phasespace mappings. The default is ‘Results/’. It can also be set using the command line parameter ‘-r’, see Command line options. The directory will be created automatically, unless the option ‘GENERATE_RESULT_DIRECTORY=0’ is specified. Its location is relative to a potentially specified input path, see Command line options.


7.5.3 EVENT_GENERATION_MODE

This parameter specifies the event generation mode. It can also be set on the command line using option ‘-w’, see Command line options. The three possible options are ‘Weighted’ (shortcut ‘W’), ‘Unweighted’ (shortcut ‘U’) and ‘PartiallyUnweighted’ (shortcut ‘P’). For partially unweighted events, the weight is allowed to exceed a given maximum, which is lower than the true maximum weight. In such cases the event weight will exceed the otherwise constant value.
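For example, unweighted event generation could be requested in the ‘(me)’ section as follows:

  EVENT_GENERATION_MODE Unweighted;

Equivalently, one could use ‘Sherpa -w U’ on the command line.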


7.5.4 SCALES

This parameter specifies how to compute the renormalization and factorization scale and potential additional scales.

Sherpa provides several built-in scale setting schemes. For each scheme the scales are then set using expressions understood by the Interpreter. Each scale setter’s syntax is

SCALES <scale-setter>{<scale-definition>}

to define a single scale for both the factorisation and renormalisation scale. They can be set to different values using

SCALES <scale-setter>{<fac-scale-definition>}{<ren-scale-definition>}

In next-to-leading order parton shower matched calculations a third perturbative scale is present, the resummation or parton shower starting scale. It is set using the third argument

SCALES <scale-setter>{<fac-scale-definition>}{<ren-scale-definition>}{<res-scale-definition>}

Note: for all scales their squares have to be given. See Predefined scale tags for some predefined scale tags.

More than three scales can be set as well to be subsequently used, e.g. by different couplings, see COUPLINGS.


7.5.4.1 Scale setters

The scale setter options which are currently available are

VAR

The variable scale setter is the simplest scale setter available. Scales are simply specified by additional parameters in a form which is understood by the internal interpreter, see Interpreter. If, for example the invariant mass of the lepton pair in Drell-Yan production is the desired scale, the corresponding setup reads

SCALES VAR{Abs2(p[2]+p[3])}

Renormalization and factorization scales can be chosen differently. For example in Drell-Yan + jet production one could set

SCALES VAR{Abs2(p[2]+p[3])}{MPerp2(p[2]+p[3])}
FASTJET

If FastJet is enabled by including --enable-fastjet=/path/to/fastjet in the configure options, this scale setter can be used to set a scale based on jet-, rather than parton-momenta.

The final state parton configuration is first clustered using FastJet and the resulting jet momenta are then added back to the list of non-strongly-interacting particles. The numbering of momenta therefore stays effectively the same as in standard Sherpa, except that final state partons are replaced with jets, if applicable (a parton might not pass the jet criteria and get "lost"). In particular, the indices of the initial state partons and all EW particles are unaffected. Jet momenta can then be accessed as described in Predefined scale tags through the identifiers p[i], and the nodal values of the clustering sequence can be used through MU_n2. The syntax is

SCALES FASTJET[<jet-algo-parameter>]{<scale-definition>}

Therein the parameters of the jet algorithm to be used to define the jets are given as a comma separated list of

  • the jet algorithm A:kt,antikt,cambridge,siscone (default antikt)
  • phase space restrictions, i.e. PT:<min-pt>, ET:<min-et>, Eta:<max-eta>, Y:<max-rap> (otherwise unrestricted)
  • radial parameter R:<rad-param> (default 0.4)
  • f-parameter for Siscone f:<f-param> (default 0.75)
  • recombination scheme C:E,pt,pt2,Et,Et2,BIpt,BIpt2 (default E)
  • b-tagging mode B:0,1,2 (default 0) This parameter, if set different from its default 0, allows using b-tagged jets only, based on the parton-level constituents of the jets. There are two options: with B:1 both b and anti-b quarks are counted equally towards b-jets, while for B:2 they are added with a relative sign as constituents, i.e. a jet containing b and anti-b is not tagged.
  • scale setting mode M:0,1 (default 1) It is possible to specify multiple scale definition blocks, each enclosed in curly brackets. The scale setting mode parameter then determines how those are interpreted: in the M:0 case, they specify factorisation, renormalisation and resummation scale separately, in that order. In the M:1 case, the n given scales are used to calculate a mean scale such that alpha_s^n(mu_mean) = alpha_s(mu_1)...alpha_s(mu_n). This scale is then used as factorisation, renormalisation and resummation scale.

Consider the example of lepton pair production in association with jets. The following scale setter

SCALES FASTJET[A:kt,PT:10,R:0.4,M:0]{sqrt(PPerp2(p[4])*PPerp2(p[5]))}

reconstructs jets using the kt-algorithm with R=0.4 and a minimum transverse momentum of 10 GeV. The scale of all strong couplings is then set to the geometric mean of the hardest and second hardest jet. Note M:0.

Similarly, in processes with multiple strong couplings, their renormalisation scales can be set to different values, e.g.

SCALES FASTJET[A:kt,PT:10,R:0.4,M:1]{PPerp2(p[4])}{PPerp2(p[5])}

sets the scale of one strong coupling to the transverse momentum of the hardest jet, and the scale of the second strong coupling to the transverse momentum of second hardest jet. Note M:1 in this case.

The additional tags MU_22 .. MU_n2 (n=2..njet+1), hold the nodal values of the jet clustering in descending order.

Please note that currently this type of scale setting can only be done within the process block (Processes) and not within the (me) section.

METS

The matrix element is clustered onto a core 2->2 configuration using an inversion of the current parton shower, cf. SHOWER_GENERATOR, recombining (n+1) particles into n on-shell particles. Their corresponding flavours are determined using run-time information from the matrix element generator. It defines the three tags MU_F2, MU_R2 and MU_Q2 whose values are assigned through this clustering procedure. While MU_F2 and MU_Q2 are defined as the lowest invariant mass or negative virtuality in the core process (for core interactions which are pure QCD processes, the scales are set to the maximum transverse mass squared of the outgoing particles), MU_R2 is determined using this core scale and the individual clustering scales such that

  alpha_s(MU_R2)^{n+k} = alpha_s(core-scale)^k alpha_s(kt_1) ... alpha_s(kt_n)

where k is the order in the strong coupling of the core process, n is the number of clusterings, and kt_i are the relative transverse momenta at each clustering. The tags MU_F2, MU_R2 and MU_Q2 can then be used on equal footing with the tags of Predefined scale tags to define the final scale.

METS is the default scale scheme in Sherpa, since it is employed for truncated shower merging, see ME-PS merging, both at leading and next-to-leading order. Thus, Sherpa’s default is

SCALES METS{MU_F2}{MU_R2}{MU_Q2}

As the tags MU_F2, MU_R2 and MU_Q2 are predefined by the METS scale setter, they may be omitted, i.e.

SCALES METS

leads to an identical scale definition.

The METS scale setter comes in two variants: STRICT_METS and LOOSE_METS. While the former employs the exact inverse of the parton shower for the clustering procedure, and therefore is rather time consuming for multiparton final states, the latter is a simplified version and much faster. Giving METS as the scale setter results in using LOOSE_METS for the integration and STRICT_METS during event generation. Giving either STRICT_METS or LOOSE_METS as the scale setter results in using the respective one during both integration and event generation.

Clusterings onto 2->n (n>2) configurations are possible, see METS scale setting with multiparton core processes.

This scheme might be subject to changes to enable further classes of processes for merging in the future and should therefore be used with care. Integration results might change slightly between different Sherpa versions.

Occasionally, users might encounter the warning message

METS_Scale_Setter::CalculateScale(): No CSS history for '<process name>' in <percentage>% of calls. Set \hat{s}.

As long as the percentage quoted here is not too high, this does not pose a serious problem. The warning occurs when - based on the current colour configuration and matrix element information - no suitable clustering is found by the algorithm. In such cases the scale is set to the invariant mass of the partonic process.


7.5.4.2 Custom scale implementation

When the flexibility of the ‘VAR’ scale setter above is not sufficient, it is also possible to implement a completely custom scale scheme within Sherpa as C++ class plugin. For details please refer to the Customization section.


7.5.4.3 Predefined scale tags

There exist a few predefined tags to facilitate commonly used scale choices or easily implement a user defined scale.

p[n]

Access to the four momentum of the nth particle. The initial state particles carry n=0 and n=1, the final state momenta start from n=2. Their ordering is determined by Sherpa’s internal particle ordering and can be read e.g. from the process names displayed at run time. Please note that, when building jets out of the final state partons first, e.g. through the FASTJET scale setter, these parton momenta will be replaced by the jet momenta ordered in transverse momenta. For example the process u ub -> e- e+ G G will have the electron and the positron at positions p[2] and p[3] and the gluons at positions p[4] and p[5]. However, when finding jets first, the electrons will still be at p[2] and p[3] while the harder jet will be at p[4] and the softer one at p[5].

H_T2

Square of the scalar sum of the transverse momenta of all final state particles.

H_TY2[fac:<factor>,exp:<exponent>]

Square of the scalar sum of the transverse momenta of all final state particles weighted by their rapidity distance from the final state boost vector. It thus takes the form

  H_T^{(Y)} = sum_i pT_i exp [ fac |y-yboost|^exp ]

Typical values to use would be fac:0.3 and exp:1.
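For instance, this tag could be used with the variable scale setter as follows, taking the factor values suggested above:

SCALES VAR{H_TY2[fac:0.3,exp:1]}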

MU_F2, MU_R2, MU_Q2

Tags holding the values of the factorisation, renormalisation scale and resummation scale determined through backwards clustering in the METS scale setter.

MU_22, MU_32, ..., MU_n2

Tags holding the nodal values of the jet clustering in the FASTJET scale setter, cf. Scale setters.

All of those objects can be operated upon by any operator/function known to the Interpreter.


7.5.4.4 Scale schemes for NLO calculations

For next-to-leading order calculations it must be guaranteed that the scale is calculated separately for the real correction and the subtraction terms, such that within the subtraction procedure the same amount is subtracted and added back. Starting from version 1.2.2 this is the case for all scale setters in Sherpa. Also, the definition of the scale must be infrared safe w.r.t. the radiation of an extra parton. Infrared safe (for QCD-NLO calculations) are:

Not infrared safe are

Since the total number of partons is different for different pieces of the NLO calculation any explicit reference to a parton momentum will lead to an inconsistent result.


7.5.4.5 Simple scale variations

Simple scale variations can be done using the following parameters:


7.5.4.6 Scale variations in parton showered and merged samples

When performing scale variations within parton showered samples the naive FACTORIZATION_SCALE_FACTOR and RENORMALIZATION_SCALE_FACTOR cannot be employed because they alter the resummation behaviour of the parton shower and lead to an overestimate of the associated uncertainty. Instead, the scales in the fixed-order matrix element and the parton shower resummation should be varied separately. This can be done for the matrix element by introducing the prefactor into its scale definition, e.g.

SCALES VAR{0.25*H_T2}{0.25*H_T2}

for setting both the renormalisation and factorisation scales to H_T/2. The parton shower scale is then varied by setting CSS_SHOWER_SCALE2_FACTOR=<factor>. It redefines the reference value of the strong coupling constant at mZ by its numerical value at mZ*<factor>, leaving its running unchanged.

In merged samples the METS scale setter has to be used. The scales in the respective multijet matrix elements can then be varied via

SCALES METS{<muF-var-factor>*MU_F2}{<muR-var-factor>*MU_R2}

In NLO-merged (MEPSatNLO) samples proper counterterms compensating the change of scale at NLO have to be introduced via SP_NLOCT=1. Also, the resummation scale can be varied:

SCALES METS{<muF-var-factor>*MU_F2}{<muR-var-factor>*MU_R2}{<muQ-var-factor>*MU_Q2}

7.5.4.7 METS scale setting with multiparton core processes

The METS scale setter stops clustering when no combination is found that corresponds to a parton shower branching, or if two subsequent branchings are unordered in terms of the parton shower evolution parameter. The core scale of the remaining 2->n process then needs to be defined. This is done by specifying a core scale through

CORE_SCALE <core-scale-setter>{<core-fac-scale-definition>}{<core-ren-scale-definition>}{<core-res-scale-definition>}

As always, for scale setters which define MU_F2, MU_R2 and MU_Q2 the scale definition can be dropped. Possible core scale setters are

VAR

Variable core scale setter. Syntax is identical to variable scale setter.

QCD

QCD core scale setter. Scales are set to the harmonic mean of s, t and u. Only useful for 2->2 cores as an alternative to the usual core scale of the METS scale setter.

TTBar

Core scale setter for processes involving top quarks. Implementation details are described in Appendix C of [Hoe13].

An example for defining a custom core scale is given in Simulation of top quark pair production using MC@NLO methods.


7.5.5 COUPLING_SCHEME

The parameter COUPLING_SCHEME is used to enable the running of the gauge couplings. The default setting is COUPLING_SCHEME=Running_alpha_S, treating only the strong coupling as running. Both the strong and the QED coupling are considered running when setting COUPLING_SCHEME=Running. To have solely a running QED coupling, set COUPLING_SCHEME=Running_alpha_QED. If not considered running, the values specified by ALPHAS(default) and 1/ALPHAQED(default) are used, respectively.


7.5.6 COUPLINGS

Within Sherpa, strong and electroweak couplings can be computed at any scale specified by a scale setter (cf. SCALES). The ‘COUPLINGS’ tag links the argument of a running coupling to one of the respective scales. This is better seen in an example. Assuming the following input

SCALES    VAR{...}{PPerp2(p[2])}{Abs2(p[2]+p[3])}
COUPLINGS Alpha_QCD 1, Alpha_QED 2

Sherpa will compute any strong couplings at scale one, i.e. ‘PPerp2(p[2])’ and electroweak couplings at scale two, i.e. ‘Abs2(p[2]+p[3])’. Note that counting starts at zero.


7.5.7 KFACTOR

This parameter specifies how to evaluate potential K-factors in the hard process. This is equivalent to the ‘COUPLINGS’ specification of Sherpa versions prior to 1.2.2. Currently available options are

NO

No reweighting

VAR

The K-factor is specified by an additional expression in a form which is understood by the internal interpreter, see Interpreter. The tags Alpha_QCD and Alpha_QED serve as links to the built-in running coupling implementations.

If for example the process ‘g g -> h g’ in effective theory is computed, one could think of evaluating two powers of the strong coupling at the Higgs mass scale and one power at the transverse momentum squared of the gluon. Assuming the Higgs mass to be 120 GeV, the corresponding reweighting would read

SCALES    VAR{...}{PPerp2(p[3])}
COUPLINGS Alpha_QCD 1
KFACTOR   VAR{sqr(Alpha_QCD(sqr(120))/Alpha_QCD(MU_12))}

As can be seen from this example, scales are referred to as MU_<i>2, where <i> is replaced with the appropriate number. Note that counting starts at zero.

It is possible to implement a dedicated K-factor scheme within Sherpa. For advice on this topic please contact the authors, Authors.


7.5.8 YUKAWA_MASSES

This parameter specifies whether the Yukawa couplings are evaluated using running or fixed quark masses: YUKAWA_MASSES=Running is the default since version 1.2.2 while YUKAWA_MASSES=Fixed was the default until 1.2.1.


7.5.9 Dipole subtraction

This list of parameters can be used to optimize the performance when employing the Catani-Seymour dipole subtraction [Cat96b] as implemented in Amegic [Gle07].

`DIPOLE_ALPHA'

Specifies a dipole cutoff in the nonsingular region [Nag03]. Changing this parameter shifts contributions between the subtracted real correction piece (RS) and the piece including the integrated dipole terms (I), while their sum remains constant. This parameter can be used to optimize the integration performance of the individual pieces. The average calculation time for the subtracted real correction is also reduced with smaller choices of ‘DIPOLE_ALPHA’ due to the (on average) reduced number of contributing dipole terms. For most processes a reasonable choice is between 0.01 and 1 (default). See also Choosing DIPOLE_ALPHA.

`DIPOLE_AMIN'

Specifies the cutoff of real correction terms in the infrared region to avoid numerical problems with the subtraction. The default is 1.e-8.

`DIPOLE_NF_GSPLIT'

Specifies the number of quark flavours that are produced from gluon splittings. This number must be at least the number of massless flavours (default). If this number is larger than the number of massless quarks the massive dipole subtraction [Cat02] is employed.

`DIPOLE_KAPPA'

Specifies the kappa-parameter in the massive dipole subtraction formalism [Cat02].
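A (run)-section setup adjusting these parameters might look as follows (the numerical values are purely illustrative):

  DIPOLE_ALPHA=0.03
  DIPOLE_AMIN=1.e-8
  DIPOLE_NF_GSPLIT=5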


7.6 Processes

The process setup is covered by the ‘(processes)’ section of the steering file or the process data file ‘Processes.dat’, respectively.

The following parameters are used to steer the process setup.


7.6.1 Process

This tag starts the setup of a process or a set of processes with common properties. It must be followed by the specification of the (core) process itself. The setup is completed by the ‘End process’ tag, see End process. The initial and final state particles are specified by their PDG codes, or by particle containers, see Particle containers. Examples are

Process 93 93 -> 11 -11

Sets up a Drell-Yan process group with light quarks in the initial state.

Process 11 -11 -> 93 93 93{3}

Sets up jet production in e+e- collisions with up to three additional jets.

The syntax for specifying processes is explained in the following sections:


7.6.1.1 PDG codes

Initial and final state particles are specified using their PDG codes (cf. PDG). A list of particles with their codes, and some of their properties, is printed at the start of each Sherpa run, when the OUTPUT is set at level ‘2’.


7.6.1.2 Particle containers

Sherpa contains a set of containers that collect particles with similar properties, namely

These containers hold all massless particles and anti-particles of the denoted type and allow for a more efficient definition of initial and final states to be considered. The jet container consists of the gluon and all massless quarks (as set by MASS[..]=0.0 or MASSIVE[..]=0). A list of particle containers is printed at the start of each Sherpa run, when the OUTPUT is set at level ‘2’.

It is also possible to define a custom particle container using the keyword PARTICLE_CONTAINER, either on the command line or in the (run) section of the input file. The container must be given an unassigned particle ID (kf-code), a name (freely chosen by you), and its content must be specified. An example would be the collection of all down-type quarks, which could be declared as

  PARTICLE_CONTAINER 98 downs 1 -1 3 -3 5 -5;

Note that, if both particles and anti-particles are wanted, they have to be added explicitly.
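Once declared, such a container can be used in process specifications like the built-in ones. A hypothetical Drell-Yan setup restricted to down-type initial states could then read:

  Process 98 98 -> 11 -11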


7.6.1.3 Curly brackets

The curly bracket notation when specifying a process allows up to a certain number of jets to be included in the final state. This is easily seen from an example,

Process 11 -11 -> 93 93 93{3}

Sets up jet production in e+e- collisions. The matrix element final state may consist of 2, 3, 4 or 5 light partons (quarks or gluons).


7.6.2 Decay

Specifies the exclusive decay of a particle produced in the matrix element. The virtuality of the decaying particle is sampled according to a Breit-Wigner distribution. An example would be

Process 11 -11 -> 6[a] -6[b]
Decay 6[a] -> 5 24[c]
Decay -6[b] -> -5 -24[d]
Decay 24[c] -> -13 14
Decay -24[d] -> 94 94

7.6.3 DecayOS

Specifies the exclusive decay of a particle produced in the matrix element. The decaying particle is kept on its mass shell, i.e. a strict narrow-width approximation is used. An example would be

Process 11 -11 -> 6[a] -6[b]
DecayOS 6[a] -> 5 24[c]
DecayOS -6[b] -> -5 -24[d]
DecayOS 24[c] -> -13 14
DecayOS -24[d] -> 94 94

7.6.4 No_Decay

Removes all diagrams associated with the decay of the given flavours. This serves to avoid resonant contributions in processes like W-associated single-top production. Note that this method breaks gauge invariance! At the moment this flag can only be set for Comix. An example would be

Process 93 93 -> 6[a] -24[b] 93{1}
Decay 6[a] -> 5 24[c]
DecayOS 24[c] -> -13 14
DecayOS -24[b] -> 11 -12
No_Decay -6

7.6.5 Scales

Sets a process-specific scale. For the corresponding syntax see SCALES.


7.6.6 Couplings

Sets process-specific couplings. For the corresponding syntax see COUPLINGS.


7.6.7 CKKW

Sets up multijet merging according to [Hoe09]. The additional argument specifies the separation cut in the form (Q_{cut}/E_{cms})^2. It can be given in any form which is understood by the internal interpreter, see Interpreter. Examples are
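For instance, a merging cut of 20 GeV at a hadron collider is commonly specified as

  CKKW sqr(20/E_CMS)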


7.6.8 Selector_File

Sets a process-specific selector file name.


7.6.9 Order_EW

Sets a process-specific electroweak order. The given number is exclusive, i.e. only matrix elements with exactly the given order in the electroweak coupling are generated.
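For example, restricting a Drell-Yan setup to matrix elements with exactly two electroweak couplings could read (an illustrative sketch):

  Process 93 93 -> 11 -11
  Order_EW 2
  End process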

Note that for decay chains with Amegic this setting applies to the core process only, while with Comix it applies to the full process, see Decay and DecayOS.


7.6.10 Max_Order_EW

Sets a process-specific maximum electroweak order. The given number is inclusive, i.e. matrix elements with up to the given order in the electroweak coupling are generated.

Note that for decay chains with Amegic this setting applies to the core process only, while with Comix it applies to the full process, see Decay and DecayOS.


7.6.11 Order_QCD

Sets a process-specific QCD order. The given number is exclusive, i.e. only matrix elements with exactly the given order in the strong coupling are generated.

Note that for decay chains with Amegic this setting applies to the core process only, while with Comix it applies to the full process, see Decay and DecayOS.


7.6.12 Max_Order_QCD

Sets a process-specific maximum QCD order. The given number is inclusive, i.e. matrix elements with up to the given order in the strong coupling are generated.

Note that for decay chains with Amegic this setting applies to the core process only, while with Comix it applies to the full process, see Decay and DecayOS.


7.6.13 Min_N_Quarks

Limits the minimum number of quarks in the process to the given value.


7.6.14 Max_N_Quarks

Limits the maximum number of quarks in the process to the given value.


7.6.15 Min_N_TChannels

Limits the minimum number of t-channel propagators in the process to the given value.


7.6.16 Max_N_TChannels

Limits the maximum number of t-channel propagators in the process to the given value.


7.6.17 Print_Graphs

Writes out Feynman graphs in LaTeX format. The parameter specifies a directory name in which the diagram information is stored. This directory is created automatically by Sherpa. The LaTeX source files can be compiled using the command

  ./plot_graphs <graphs directory>

which creates an html page in the graphs directory that can be viewed in a web browser.


7.6.18 Integration_Error

Sets a process-specific relative integration error target.

For multijet processes, this parameter can be specified per final state multiplicity. An example would be

Process 93 93 -> 93 93 93{2}
Integration_Error 0.02 {3,4}

Here, the integration error target is set to 2% for 2->3 and 2->4 processes.


7.6.19 Max_Epsilon

Sets epsilon for maximum weight reduction. The key idea is to allow weights larger than the maximum during event generation, as long as the fraction of the cross section represented by corresponding events is at most the epsilon factor times the total cross section. In other words, the relative contribution of overweighted events to the inclusive cross section is at most epsilon.


7.6.20 Enhance_Factor

Sets a process specific enhance factor.

For multijet processes, this parameter can be specified per final state multiplicity. An example would be

Process 93 93 -> 93 93 93{2}
Enhance_Factor 4 {3}
Enhance_Factor 16 {4}

Here, 3-jet processes are enhanced by a factor of 4, 4-jet processes by a factor of 16.


7.6.21 RS_Enhance_Factor

Sets an enhance factor for the RS-piece of an MC@NLO process.

For multijet processes, this parameter can be specified per final state multiplicity. An example would be

Process 93 93 -> 90 91 93{3};
NLO_QCD_Mode MC@NLO {2,3};
RS_Enhance_Factor 10 {2};
RS_Enhance_Factor 20 {3};

Here, the RS-pieces of the MC@NLO subprocesses of the 2 particle final state processes are enhanced by a factor of 10, while those of the 3 particle final state processes are enhanced by a factor of 20.


7.6.22 Enhance_Function

Sets a process specific enhance function.

This feature can only be used when generating weighted events.

For multijet processes, the parameter can be specified per final state multiplicity. An example would be

Process 93 93 -> 11 -11 93{1}
Enhance_Function VAR{PPerp2(p[4])} {3}

Here, the 1-jet process is enhanced with the transverse momentum squared of the jet.

Note that the convergence of the Monte Carlo integration can be worse if enhance functions are employed and therefore the integration can take significantly longer. The reason is that the default phase space mapping, which is constructed according to diagrammatic information from hard matrix elements, is not suited for event generation including enhancement. It must first be adapted, which, depending on the enhance function and the final state multiplicity, can be an intricate task.

If Sherpa cannot achieve an integration error target due to the use of enhance functions, it might be appropriate to locally redefine this error target, see Integration_Error.


7.6.23 Enhance_Observable

Allows for the specification of an ME-level observable in which the event generation should be flattened. Of course, this induces an appropriate weight for each event. This option is available for both weighted and unweighted event generation; in the latter case the weight stemming from the enhancement is introduced as described above. For multijet processes, the parameter can be specified per final state multiplicity.

An example would be

Process 93 93 -> 11 -11 93{1}
Enhance_Observable VAR{log10(PPerp(p[2]+p[3]))}|1|3 {3}

Here, the 1-jet process is flattened with respect to the logarithmic transverse momentum of the lepton pair in the limits 1.0 (10 GeV) to 3.0 (1 TeV). For the calculation of the observable one can use any function available in the algebra interpreter (see Interpreter).

Note that the convergence of the Monte Carlo integration can be worse if enhance observables are employed and therefore the integration can take significantly longer. The reason is that the default phase space mapping, which is constructed according to diagrammatic information from hard matrix elements, is not suited for event generation including enhancement. It must first be adapted, which, depending on the enhance function and the final state multiplicity, can be an intricate task.

If Sherpa cannot achieve an integration error target due to the use of enhance functions, it might be appropriate to locally redefine this error target, see Integration_Error.


7.6.24 NLO_QCD_Mode

This setting specifies whether and in which mode a QCD NLO calculation should be performed. Possible values are:

The usual multiplicity identifiers apply to this switch as well. Note that this setting implies NLO_QCD_Part BVIRS for the relevant multiplicities. This can be overridden by setting NLO_QCD_Part explicitly in the case of fixed-order calculations.

Note that Sherpa includes only a very limited selection of one-loop corrections. For processes not included, external codes can be interfaced, see External one-loop ME.


7.6.25 NLO_QCD_Part

In case of fixed-order NLO calculations this switch specifies which pieces of a QCD NLO calculation are computed. Possible choices are

Different pieces can be combined in one process setup. Only pieces with the same number of final state particles and the same order in alpha_S can be treated as one process; otherwise they will automatically be split up.


7.6.26 NLO_EW_Mode

This setting specifies whether and in which mode an electroweak NLO calculation should be performed. Possible values are:


7.6.27 NLO_EW_Part

In case of fixed-order NLO calculations this switch specifies which pieces of an electroweak NLO calculation are computed. Possible choices are

Different pieces can be combined in one process setup. Only pieces with the same number of final state particles and the same order in alpha_QED can be treated as one process; otherwise they will automatically be split up.


7.6.28 Subdivide_Virtual

Allows splitting the virtual contribution to the total cross section into pieces. Currently supported options when running with BlackHat are ‘LeadingColor’ and ‘FullMinusLeadingColor’. For high-multiplicity calculations these settings allow adjusting the relative number of points in the sampling to reduce the overall computation time.


7.6.29 ME_Generator

Set a process specific nametag for the desired tree-ME generator, see ME_SIGNAL_GENERATOR.


7.6.30 Loop_Generator

Set a process specific nametag for the desired loop-ME generator. The only Sherpa-native option is Internal with a few hard coded loop matrix elements.


7.6.30.1 BlackHat Interface

Another source of loop matrix elements is BlackHat. To use it, Sherpa has to be linked to BlackHat during installation, using the configure option --enable-blackhat=/path/to/blackhat. The BlackHat settings file can be specified using ‘BH_SETTINGS_FILE’.


7.6.31 Integrator

Sets a process-specific integrator, see INTEGRATOR.


7.6.32 End process

Completes the setup of a process or a list of processes with common properties.


7.7 Selectors

The setup of cuts at the matrix element level is covered by the ‘(selector)’ section of the steering file or the selector data file ‘Selector.dat’, respectively.

Sherpa provides the following selectors


7.7.1 One particle selectors

The selectors listed here implement cuts on the matrix element level, based on single particle kinematics. The corresponding syntax in ‘Selector.dat’ is

<keyword> <flavour code> <min value> <max value>

‘<min value>’ and ‘<max value>’ are floating point numbers, which can also be given in a form that is understood by the internal algebra interpreter, see Interpreter. The selectors act on all possible particles with the given flavour. Their respective keywords are

Energy

energy cut

ET

transverse energy cut

PT

transverse momentum cut

Rapidity

rapidity cut

PseudoRapidity

pseudorapidity cut
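As an example, a ‘Selector.dat’ entry requiring electrons to have a transverse momentum between 20 GeV and the collider energy could read (illustrative values):

  PT 11 20.0 E_CMS

A corresponding line with flavour code -11 would be needed to also cut on positrons.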


7.7.2 Two particle selectors

The selectors listed here implement cuts on the matrix element level, based on two particle kinematics. The corresponding syntax in ‘Selector.dat’ is

<keyword> <flavour1 code> <flavour2 code> <min value> <max value>

‘<min value>’ and ‘<max value>’ are floating point numbers, which can also be given in a form that is understood by the internal algebra interpreter, see Interpreter. The selectors act on all possible particles with the given flavour. Their respective keywords are

Mass

invariant mass

Angle

angular separation (rad)

BeamAngle

angular separation w.r.t. beam

(‘<flavour2 code>’ is 0 or 1, referring to beam 1 or 2)

DeltaEta

pseudorapidity separation

DeltaY

rapidity separation

DeltaPhi

azimuthal angle separation (rad)

DeltaR

R separation
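For example, to restrict the invariant mass of electron-positron pairs to a window around the Z peak one could use (illustrative values):

  Mass 11 -11 66.0 116.0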


7.7.3 Decay selectors

The selectors listed here implement cuts on the matrix element level, based on particle decays, see Decay and DecayOS.

DecayMass

Invariant mass of a decaying particle. The syntax is

DecayMass <flavour code> <min value> <max value>
Decay

Any kinematic variable of a decaying particle. The syntax is

Decay(<expression>) <flavour code> <min value> <max value>

where <expression> is an expression handled by the internal interpreter, see Interpreter.

Decay2

Any kinematic variable of a pair of decaying particles. The syntax is

  Decay2(<expression>) <flavour1 code> <flavour2 code> <min value> <max value>

where <expression> is an expression handled by the internal interpreter, see Interpreter.

Particles are identified by flavour, i.e. the cut is applied on all decaying particles that match ‘<flavour code>’. ‘<min value>’ and ‘<max value>’ are floating point numbers, which can also be given in a format that is understood by the internal algebra interpreter, see Interpreter.


7.7.4 Jet finders

There are three different types of jet finders

JetFinder

k_T-algorithm

ConeFinder

cone-algorithm

NJetFinder

k_T-type algorithm to select on a given number of jets

Their respective syntax is

JetFinder  <ycut>[<ycut decay 1>[<ycut decay 11>...]...]... <D parameter>
ConeFinder <min R> 
NJetFinder <n> <ptmin> <etmin> <D parameter> [<exponent>] [<eta max>] [<mass max>]

For ‘JetFinder’, it is possible to give different values of ycut in individual subprocesses of a production-decay chain. The square brackets are then used to denote the decays. In case only one uniform set of ycut is to be used, the square brackets are left out.

‘<ycut>’, ‘<min R>’ and ‘<D parameter>’ are floating point numbers, which can also be given in a form that is understood by the internal algebra interpreter, see Interpreter.

The ‘NJetFinder’ allows selecting kinematic configurations with at least ‘<n>’ jets that satisfy both the ‘<ptmin>’ and ‘<etmin>’ minimum requirements and that lie in a pseudorapidity region |eta|<‘<eta max>’. The ‘<exponent>’ allows applying a kt-algorithm (1) or an anti-kt algorithm (-1). As only massless partons are clustered by default, ‘<mass max>’ allows also including partons with a mass up to the specified value. This is useful e.g. in calculations with massive b-quarks which should nonetheless satisfy jet criteria.
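For illustration, a selection of at least two anti-kt jets with transverse momentum above 20 GeV within |eta|<4.5 could be written as (values chosen for illustration only):

  NJetFinder 2 20.0 0.0 0.4 -1 4.5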


7.7.5 Universal selector

The universal selector is intended to implement non-standard cuts on the matrix element level. Its syntax is

"<variable>" <kf1>,..,<kfn> <min1>,<max1>:..:<minn>,<maxn> [<order1>,...,<orderm>]

No additional white spaces are allowed.

The first word has to be double-quoted and contains the name of the variable to cut on. The keywords for the available predefined <variable>s can be listed by running Sherpa with ‘SHOW_VARIABLE_SYNTAX=1’. Alternatively, an arbitrary cut variable can be constructed using the internal interpreter, see Interpreter. This is invoked with the command ‘Calc(...)’. In the formula specified there you have to use place holders for the momenta of the particles: ‘p[0]’ ... ‘p[n]’ hold the momenta of the respective particles ‘kf1’ ... ‘kfn’. A list of available vector functions and operators can be found under Interpreter.

‘<kf1>,..,<kfn>’ specify the PDG codes of the particles from which the variable is to be calculated. In case this choice is not unique in the final state, you have to specify multiple cut ranges (‘<min1>,<max1>:..:<minn>,<maxn>’) for all (combinations of) particles you want to cut on, separated by colons.

If no fourth argument is given, the order of cuts is determined internally, according to Sherpa’s process classification scheme. This order then has to be matched if you want to apply different cuts to certain different particles in the matrix element. To determine it, first put in enough arbitrary ranges (for the possible number of combinations of your particles) and run Sherpa with debugging output for the universal selector: ‘Sherpa OUTPUT=2[Variable_Selector::Trigger|15]’. This will produce copious output during integration, at which point you can interrupt the run (Ctrl-c). In the ‘Variable_Selector::Trigger(): {...}’ output you can see which particle combinations have been found and which cut range your selector has applied to them (vs. the arbitrary range you specified). From that you should get an idea in which order the cuts have to be specified.

If the fourth argument is given, the particles are ordered before the cuts are applied. Possible orderings are ‘PT_UP’, ‘ET_UP’, ‘E_UP’, ‘ETA_UP’ and ‘ETA_DOWN’ (increasing p_T, E_T, E, eta, and decreasing eta, respectively). They have to be specified for each of the particles, separated by commas.

Examples

Two-body transverse mass

"mT" 11,-12 50,E_CMS

Cut on the pT of only the hardest lepton in the event

"PT" 90 50.0,E_CMS [PT_UP]

Using bool operations to restrict eta of the electron to |eta| < 1.1 or 1.5 < |eta| < 2.5

"Calc(abs(Eta(p[0]))<1.1||(abs(Eta(p[0]))>1.5&&abs(Eta(p[0]))<2.5))" 11 1,1

Note the range 1,1 meaning true for bool operations.

Requesting opposite side tag jets in VBF would for example need a setup like this

"Calc(Eta(p[0])*Eta(p[1]))" 93,93 -100,0 [PT_UP,PT_UP]

Restricting electron+photon mass to be outside of [87.0,97.0]:

"Calc(Mass(p[0]+p[1])<87.0||Mass(p[0]+p[1])>97.0)" 11,22 1,1

In ‘Z[lepton lepton] Z[lepton lepton]’, cut on mass of lepton-pairs produced from Z’s:

"m" 90,90 80,100:0,E_CMS:0,E_CMS:0,E_CMS:0,E_CMS:80,100

Here we use knowledge about the internal ordering to cut only on the correct lepton pairs.


7.7.6 Minimum selector

This selector combines several selectors and passes an event if at least one of them passes it. It is mainly designed to generate more inclusive samples that, for instance, include several jet finders and that allow a tighter specification at a later stage. The syntax is

MinSelector {
  Selector 1
  Selector 2
  ...
} 

7.7.7 NLO selectors

Phase-space cuts that are applied on next-to-leading order calculations must be defined in an infrared-safe way. Technically, the real (subtracted) correction also requires a special treatment. Currently only the following selectors meet these requirements:

QCD parton cuts
NJetFinder <n> <ptmin> <etmin> <D parameter> [<exponent>] [<eta max>] [<mass max>]

(see Jet finders)

Cuts on not strongly interacting particles

One particle selectors

PTNLO <flavour code> <min value> <max value>
RapidityNLO <flavour code> <min value> <max value>
PseudoRapidityNLO <flavour code> <min value> <max value>

Two particle selectors

PT2NLO <flavour1 code> <flavour2 code> <min value> <max value>
Mass <flavour1 code> <flavour2 code> <min value> <max value>

Cuts on photons

IsolationCut 22 <dR> <exponent> <epsilon>

implements the Frixione isolation cone [Fri98].
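With commonly used parameter values this could read (illustrative values):

  IsolationCut 22 0.4 2 0.025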

The Minimum selector can be used if it is constructed from the other selectors mentioned in this section.

7.7.8 Fastjet selector

If FastJet is enabled, the momenta and nodal values of the jets found with FastJet can be used to calculate more elaborate selector criteria. The syntax of this selector is

FastjetSelector <expression> <algorithm> <n> <ptmin> <etmin> <dr> [<f(siscone)>=0.75] [<eta-max>] [<y-max>] [<bmode>]

wherein <algorithm> can take the values kt, antikt, cambridge or siscone. In the algebraic expression, MU_n2 (n=2..njet+1) denote the nodal values of the jets found and p[i] their momenta. For details see Scale setters. For example, in lepton pair production in association with jets

FastjetSelector Mass(p[4]+p[5])>100. antikt 2 40. 0. 0.5

selects all phase space points where two anti-kt jets with at least 40 GeV of transverse momentum and an invariant mass of at least 100 GeV are found. The expression must evaluate to a boolean value. The <bmode> parameter, if set to a value different from its default 0, restricts the jet finding to b-tagged jets, based on the parton-level constituents of the jets. There are two options: with <bmode>=1 both b and anti-b quarks count equally towards b-jets, while for <bmode>=2 they are added with a relative sign as constituents, i.e. a jet containing both a b and an anti-b quark is not tagged.


7.8 Integration

The integration setup is covered by the ‘(integration)’ section of the steering file or the integration data file ‘Integration.dat’, respectively.

The following parameters are used to steer the integration.


7.8.1 INTEGRATION_ERROR

Specifies the relative integration error target.


7.8.2 INTEGRATOR

Specifies the integrator. The possible integrator types depend on the matrix element generator. In general users should rely on the default value and otherwise seek the help of the authors, see Authors. Within AMEGIC++ the options ‘AMEGIC_INTEGRATOR’ and ‘AMEGIC_RS_INTEGRATOR’ can be used to steer the behaviour of the default integrator

In addition, a few ME-generator independent integrators have been implemented for specific processes:


7.8.3 VEGAS

Specifies whether or not to employ Vegas for adaptive integration. The two possible values are ‘On’ and ‘Off’, the default being ‘On’.


7.8.4 FINISH_OPTIMIZATION

Specifies whether the full Vegas optimization is to be carried out. The two possible options are ‘On’ and ‘Off’, the default being ‘On’.


7.8.5 PSI_NMAX

The maximum number of points before cuts to be generated during integration. This parameter acts on a process-by-process basis.


7.8.6 PSI_ITMIN

The minimum number of points used for every optimisation cycle. Please note that it might be increased automatically for complicated processes.


7.8.7 PSI_ITMAX

The maximum number of points used for every optimisation cycle. Please note that for complicated processes the number given might be insufficient for a meaningful optimisation.


7.9 Hard decays

The handler for decays of particles produced in the hard scattering process (e.g. W, Z, top, Higgs) can be enabled using the ‘HARD_DECAYS=1’ switch. Which (anti)particles should be treated as unstable is determined by the ‘STABLE[<id>]’ switch described in Model parameters.
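A minimal (run)-section sketch enabling the decay handler, with the W boson marked as unstable, could read (illustrative):

  HARD_DECAYS=1
  STABLE[24]=0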

This decay module can also be used on top of NLO matrix elements, but it does not include any NLO corrections in the decay matrix elements themselves.

Note that the decay handler is an afterburner at the event generation level. It does not affect the calculation and integration of the hard scattering matrix elements. The cross section is thus unaffected during integration, and the branching ratios (if any decay channels have been disabled) are only taken into account for the event weights and the cross section output at the end of event generation (unless disabled with the ‘HDH_BR_WEIGHTS’ option, cf. below). Furthermore, cuts and scale definitions are not affected by the decays and operate only on the inclusively produced particles before decays.


7.9.1 HDH_NO_DECAY

This option allows disabling an explicit list of decay channels. For example, to disable the hadronic decay channels of the W boson one would use:

HDH_NO_DECAY={24,2,-1}|{24,4,-3}|{-24,-2,1}|{-24,-4,3}

Note that the ordering of the decay products in each channel is important and has to be identical to the ordering in the decay table printed to screen. Multiple decay channels (also for different decaying particles and antiparticles) can be specified using the ‘|’ (pipe) symbol as separator. Spaces are not allowed anywhere in the list.


7.9.2 HDH_ONLY_DECAY

This option allows restricting the decay channels of a particle to the explicitly given list. For example, to allow only the bottom-decay mode of the Higgs boson one would use

HDH_ONLY_DECAY={25,5,-5}

Note that the ordering of the decay products in each channel is important and has to be identical to the ordering in the decay table printed to screen. Multiple decay channels (also for different decaying particles and antiparticles) can be specified using the ‘|’ (pipe) symbol as separator. Spaces are not allowed anywhere in the list.


7.9.3 HARD_SPIN_CORRELATIONS

By default, all decays are done in a factorised manner, i.e. there are no correlations between the production and decay matrix elements of an unstable particle. It is possible to enable spin correlations by specifying ‘HARD_SPIN_CORRELATIONS=1’, which might come with a small performance penalty in more complicated processes.


7.9.4 STORE_DECAY_RESULTS

The decay table and partial widths are calculated on-the-fly during the initialization phase of Sherpa from the given model and its particles and interaction vertices. To store these results in the Results/Decays directory, one has to specify ‘STORE_DECAY_RESULTS=1’.


7.9.5 DECAY_RESULT_DIRECTORY

Specifies the name of the directory where the decay results are to be stored. Defaults to the value of RESULT_DIRECTORY.


7.9.6 HDH_SET_WIDTHS

The decay handler computes LO partial and total decay widths and generates decays with the corresponding branching fractions, independently of the particle widths specified by ‘WIDTH[<id>]’. The latter are relevant only for the core process and should be set to zero for all unstable particles appearing in the core-process final state. This guarantees on-shellness and gauge invariance of the core process, and subsequent decays can be handled by the afterburner. In contrast, ‘WIDTH[<id>]’ should be set to the physical width when unstable particles appear (only) as intermediate states in the core process, i.e. when production and decay are handled as a full process or using Decay/DecayOS. In this case, the option ‘HDH_SET_WIDTHS=1’ permits overwriting the ‘WIDTH[<id>]’ values of unstable particles with the LO widths computed by the decay handler.


7.9.7 HDH_BR_WEIGHTS

By default (‘HDH_BR_WEIGHTS=1’), weights for events which involve a hard decay are multiplied with the corresponding branching ratios (if decay channels have been disabled). This also means that the total cross section at the end of the event generation run already includes the appropriate BR factors. If you want to disable that, e.g. because you want to multiply with your own modified BR, you can set the option ‘HDH_BR_WEIGHTS=0’.


7.9.8 HARD_MASS_SMEARING

If ‘HARD_MASS_SMEARING=1’ is specified, the kinematic mass of the unstable propagator is distributed according to a Breit-Wigner shape a posteriori. All matrix elements are still calculated in the narrow-width approximation with on-shell particles; only the kinematics are affected.


7.9.9 RESOLVE_DECAYS

There are different options for deciding when a 1->2 process should be replaced by the respective 1->3 processes built from its decaying daughter particles.

RESOLVE_DECAYS=Threshold

(default) Only when the sum of decay product masses exceeds the decayer mass.

RESOLVE_DECAYS=ByWidth

As soon as the sum of 1->3 partial widths exceeds the 1->2 partial width.

RESOLVE_DECAYS=None

No 1->3 decays are taken into account.


7.9.10 DECAY_TAU_HARD

By default, the tau lepton is decayed by the hadron decay module, Hadron decays, which includes not only the leptonic decay channels but also the hadronic modes. If ‘DECAY_TAU_HARD=1’ is specified, the tau lepton will be decayed in the hard decay handler, which only takes leptonic and partonic decay modes into account. Note that in this case the tau also needs to be set massive with ‘MASSIVE[15]=1’.
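A corresponding ‘(run)’ section might look like the following sketch, collecting the two tags just described:

  (run){
    DECAY_TAU_HARD = 1
    MASSIVE[15] = 1
  }(run)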


7.10 Parton showers

The shower setup is covered by the ‘(shower)’ section of the steering file or the shower data file ‘Shower.dat’, respectively.

The following parameters are used to steer the shower setup.


7.10.1 SHOWER_GENERATOR

The only shower option currently available in Sherpa is ‘CSS’, and this is the default for this tag. See the module summaries in Basic structure for details about this shower.

Different shower modules are in principle supported and more choices will be provided by Sherpa in the near future. To list all available shower modules, the tag SHOW_SHOWER_GENERATORS=1 can be specified on the command line.

SHOWER_GENERATOR=None switches parton showering off completely. However, even in the case of strict fixed-order calculations, this might not be the desired behaviour, as then, for example, neither the METS scale setter, cf. SCALES, nor Sudakov rejection weights can be employed. For a way to circumvent this when using the CS Shower, see CS Shower options.


7.10.2 JET_CRITERION

The only natively supported option in Sherpa is ‘CSS’, and this is also the default. The corresponding jet criterion is described in [Hoe09]. A custom jet criterion, tailored to a specific experimental analysis, can be supplied using Sherpa’s plugin mechanism.


7.10.3 MASSIVE_PS

This option instructs Sherpa to treat certain partons as massive in the shower even though they have been considered massless by the matrix element. The argument is a list of parton flavours, for example ‘MASSIVE_PS 4 5’ if both c- and b-quarks are to be treated as massive.


7.10.4 MASSLESS_PS

When hard decays are used, Sherpa treats all flavours as massive in the parton shower. This option instructs Sherpa to treat certain partons as massless in the shower nonetheless. The argument is a list of parton flavours, for example ‘MASSLESS_PS 1 2 3’, if u-, d- and s-quarks are to be treated as massless.


7.10.5 CS Shower options

Sherpa’s default shower module is based on [Sch07a]. A new ordering parameter for initial state splitters was introduced in [Hoe09] and a novel recoil strategy for initial state splittings was proposed in [Hoe09a]. While the ordering variable is fixed, the recoil strategy for dipoles with initial-state emitter and final-state spectator can be changed for systematics studies. Setting ‘CSS_KIN_SCHEME=0’ (default) corresponds to using the recoil scheme proposed in [Hoe09a], while ‘CSS_KIN_SCHEME=1’ enables the original recoil strategy. The lower cutoff of the shower evolution can be set via ‘CSS_FS_PT2MIN’ and ‘CSS_IS_PT2MIN’ for final and initial state shower, respectively. Note that this value is specified in GeV^2. Scale factors for the evaluation of the strong coupling in the parton shower are given by ‘CSS_FS_AS_FAC’ and ‘CSS_IS_AS_FAC’. They multiply the ordering parameter, which is given in units of GeV^2.

By default, only QCD splitting functions are enabled in the shower. If you also want to allow for photon splittings, you can enable them by using ‘CSS_EW_MODE=1’. Note that if you have leptons in your matrix-element final state, they are by default treated by a soft photon resummation, as explained in QED corrections. To avoid double counting, this has to be disabled as explained in that section.

The CS Shower can be forced not to emit any partons by setting ‘CSS_NOEM=1’. Sudakov rejection weights for merged samples are calculated nonetheless. Setting ‘CSS_MAXEM=<N>’, on the other hand, forces the CS Shower to truncate its evolution at the Nth emission. This setting, however, does not necessarily compute all Sudakov weights correctly. Both settings still allow the CS Shower to be used in the METS scale setter, cf. SCALES.

The evolution variable of the CS shower can be changed using ‘CSS_EVOLUTION_SCHEME’. Two options are currently implemented, corresponding to transverse momentum ordering (option 0) and modified transverse momentum ordering (option 1). The scale at which the strong coupling for gluon splitting into quarks is evaluated can be chosen with ‘CSS_SCALE_SCHEME’, where 0 corresponds to the ordering parameter and 1 corresponds to the invariant mass. Additionally, the CS shower allows disabling splittings at scales below the on-shell mass of heavy quarks. The upper limit for the corresponding heavy quark mass is set using ‘CSS_MASS_THRESHOLD’.
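As an illustration, a ‘(shower)’ section collecting some of the options above might look as follows. The numerical values are purely illustrative and not recommended settings:

  (shower){
    SHOWER_GENERATOR CSS
    CSS_KIN_SCHEME 1
    CSS_FS_PT2MIN 1.0
    CSS_IS_PT2MIN 4.0
    CSS_FS_AS_FAC 0.5
    CSS_EW_MODE 1
  }(shower)

Variations of CSS_KIN_SCHEME and the alpha_s scale factors like these are typically used for shower systematics studies.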


7.11 Multiple interactions

The multiple parton interaction (MPI) setup is covered by the ‘(mi)’ section of the steering file or the MPI data file ‘MI.dat’, respectively. The basic MPI model is described in [Sjo87], while Sherpa’s implementation details are discussed in [Ale05].

The following parameters are used to steer the MPI setup.


7.11.1 MI_HANDLER

Specifies the MPI handler. The two possible values at the moment are ‘None’ and ‘Amisic’.


7.11.2 SCALE_MIN

Specifies the transverse momentum integration cutoff in GeV.


7.11.3 PROFILE_FUNCTION

Specifies the hadron profile function. The possible values are ‘Exponential’, ‘Gaussian’ and ‘Double_Gaussian’. For the double gaussian profile, the relative core size and relative matter fraction can be set using PROFILE_PARAMETERS.


7.11.4 PROFILE_PARAMETERS

The parameters of the hadron profile functions, see PROFILE_FUNCTION. For the Double Gaussian profile there are two parameters, corresponding to the relative core size and relative matter fraction.


7.11.5 REFERENCE_SCALE

Specifies the centre-of-mass energy at which the transverse momentum integration cutoff is used as is, see SCALE_MIN. This parameter should not be changed by the user. The default is ‘1800’, corresponding to Tevatron Run I energies.


7.11.6 RESCALE_EXPONENT

Specifies the rescaling exponent for fixing the transverse momentum integration cutoff at centre-of-mass energies different from the reference scale, see SCALE_MIN, REFERENCE_SCALE.


7.11.7 SIGMA_ND_FACTOR

Specifies the factor to scale the non-diffractive cross section calculated in the MPI initialisation.
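Taken together, a ‘(mi)’ section using the tags above might look like the following sketch; the numerical values are purely illustrative and do not constitute a tune:

  (mi){
    MI_HANDLER = Amisic
    SCALE_MIN = 2.5
    PROFILE_FUNCTION = Double_Gaussian
    PROFILE_PARAMETERS = 0.5 0.66
    SIGMA_ND_FACTOR = 1.0
  }(mi)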


7.11.8 MI_RESULT_DIRECTORY

Specifies the name of the directory where the MPI grid is stored. The default comprises the beam particles, their energies and the PDF used. In its default value, this information safeguards against using unsuitable grids for the current calculation, assuming a standard TUNE has been used.


7.11.9 MI_RESULT_DIRECTORY_SUFFIX

Supplements the default directory name for the MPI grid with a suffix.


7.12 Hadronization

The hadronization setup is covered by the ‘(fragmentation)’ section of the steering file or the fragmentation data file ‘Fragmentation.dat’, respectively.

It covers the fragmentation of partons into primordial hadrons as well as the decays of unstable hadrons into stable final states.


7.12.1 Fragmentation


7.12.1.1 Fragmentation models

The FRAGMENTATION parameter sets the fragmentation module to be employed during event generation.


7.12.1.2 Hadron constituents

The constituent masses of the quarks and diquarks are given by

The diquark masses are composed of the quark masses and some additional parameters, with


7.12.1.3 Hadron multiplets

For the selection of hadrons emerging in such cluster transitions and decays, an overlap between the cluster flavour content and the flavour part of the hadronic wave function is formed. This may be further modified by production probabilities, organised by multiplet and given by the parameters

In addition, there are some enhancement and suppression factors applied to heavy baryons and meson singlets,

For the latter, Sherpa also allows redefining the mixing angles through parameters such as


7.12.1.4 Cluster transition to hadrons - flavour part

The phase space effects due to these masses govern to a large extent the flavour content of the non-perturbative gluon splittings at the end of the parton shower and in the decay of clusters. They are further modified by relative probabilities with respect to the production of up/down flavours through the parameters

The transition of clusters to hadrons is governed by the following considerations:


7.12.1.5 Cluster transition and decay weights

The probability for a cluster C to be transformed into a hadron H is given by a combination of weights, obtained from the overlap with the flavour part of the hadronic wave function, the relative weight of the corresponding multiplet and a kinematic weight taking into account the mass difference of cluster and hadron and the width of the latter. For the direct decay of a cluster into two hadrons, the overlaps with the wave functions of all hadrons, their respective multiplet suppression weights, the flavour weight for the creation of the new flavour q and a kinematic factor are relevant. Here, yet another tuning parameter enters,

which partially compensates phase space effects favouring light hadrons,


7.12.1.6 Cluster decays - kinematics

Cluster decays are generated by first emitting a non-perturbative “gluon” from one of the quarks, using a transverse momentum distribution as in the non-perturbative gluon decays, see below, and by then splitting this gluon into a quark–antiquark or anti-diquark–diquark pair, again with the same kinematics. In the first of these splittings, the emission of the gluon, the energy distribution of the gluon is given by the quark splitting function if this quark has been produced in the perturbative phase of the event. If, in contrast, the quark stems from a cluster decay, the energy of the gluon is selected according to a flat distribution.

In clusters decaying to hadrons, the transverse momentum is chosen according to a distribution given by an infrared-continued strong coupling and a term inversely proportional to the infrared-modified transverse momentum, constrained to be below a maximal transverse momentum. The respective tuning parameters are


7.12.1.7 Splitting kinematics

In each splitting, the kinematics is given by the transverse momentum, the energy splitting parameter and the azimuthal angle. The azimuthal angle is always selected according to a flat distribution, while the energy splitting parameter is chosen either according to the quark-to-gluon splitting function (if the quark is a leading quark, i.e. produced in the perturbative phase), according to the gluon-to-quark splitting function, or according to a flat distribution. The transverse momentum is given by the same distribution as in the cluster decays to hadrons.


7.12.2 Hadron decays

The treatment of hadron and tau decays is specified by DECAYMODEL. Its allowed values are either the default choice ‘Hadrons’, which renders the HADRONS++ module responsible for performing the decays, or the hadron decays can be disabled with the option ‘Off’.

HADRONS++ is the module within the Sherpa framework which is responsible for treating hadron and tau decays. It contains decay tables with branching ratios for approximately 2500 decay channels, of which many have their kinematics modelled according to a matrix element with corresponding form factors. Especially decays of the tau lepton and heavy mesons have form factor models similar to dedicated codes like Tauola [Jad93] and EvtGen [Lan01].

Some general switches which relate to hadron decays can be adjusted in the (fragmentation) section:

Many aspects of the above mentioned “Decaydata” can be adjusted. There exist three levels of data files, which are explained in the following sections. As with all other setup files, the user can either employ the default “Decaydata” in <prefix>/share/SHERPA-MC/Decaydata, or overwrite it (also selectively) by creating the appropriate files in the directory specified by DECAYPATH.


7.12.2.1 HadronDecays.dat

HadronDecays.dat consists of a table of particles that are to be decayed by HADRONS++. Note: even if decay tables exist for other particles, only those particles are decayed that have been set unstable, either by default or in the model/fragmentation settings. The file has the following structure, where each line adds one decaying particle:

  <kf-code>  ->  <subdirectory>/<filename>.dat

The first column contains the kf-code of the decaying particle, followed by the path to its decay table and the decay table file itself. The default file name is <particle>/Decays.dat.

It is possible to specify different decay tables for the particle (positive kf-code) and anti-particle (negative kf-code). If only one is specified, it will be used for both particle and anti-particle.

If more than one decay table is specified for the same kf-code, these tables will be used in the specified sequence during one event. The first matching particle appearing in the event is decayed according to the first table, and so on until the last table is reached, which will be used for the remaining particles of this kf-code.

Additionally, this file may contain the keyword CREATE_BOOKLET on a separate line, which will cause HADRONS++ to write a LaTeX document containing all decay tables.
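As a sketch, a HadronDecays.dat fragment might contain lines like the following; the kf-codes 511 and 521 are the PDG codes of the B0 and B+ mesons, and the subdirectory names are examples only:

  511 -> B0/Decays.dat
  521 -> B+/Decays.dat
  CREATE_BOOKLET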


7.12.2.2 Decay table files

The decay table contains, for each channel, information about the outgoing particles, the branching ratio and, optionally, the name of the file that stores parameters for that specific channel. If the latter is not specified, HADRONS++ will produce it and modify the decay table file accordingly.

Additionally to the branching ratio, one may specify the error associated with it, and its source. Every hadron is supposed to have its own decay table in its own subdirectory. The structure of a decay table is

  {kf1,kf2,kf3,...}      BR(delta BR)[Origin]      <filename>.dat
  outgoing particles     branching ratio           decay channel file

It should be stressed here that the branching ratio which is explicitly given for any individual channel in this file is always used regardless of any matrix-element value.
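For illustration, an entry for the decay B0 -> D- pi+ might read as follows. The kf-codes -411 and 211 denote the D- and pi+; the branching ratio, its error and the channel file name are placeholders, not measured values:

  {-411,211}      0.0025(0.0002)[PDG]      DPi.dat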


7.12.2.3 Decay channel files

A decay channel file contains various information about that specific decay channel. There are different sections, some of which are optional:


7.12.2.4 HadronConstants.dat

HadronConstants.dat may contain some globally needed parameters (e.g. for neutral meson mixing, see [Kra10]) and also fall-back values for all matrix-element parameters which one specifies in decay channel files. Here, the Interference_X = 1 switch would enable rate asymmetries due to CP violation in the interference between mixing and decay (cf. Decay channel files), and setting Mixing_X = 1 enables explicit mixing in the event record according to the time evolution of the flavour states. By default, all mixing effects are turned off.


7.12.2.5 Further remarks

Spin correlations: a spin correlation algorithm is implemented. It can be switched on through the keyword ‘SOFT_SPIN_CORRELATIONS=1’ in the (run) section.

If spin correlations for tau leptons produced in the hard scattering process are supposed to be taken into account, one needs to specify ‘HARD_SPIN_CORRELATIONS=1’ as well. If using AMEGIC++ as ME generator, note that the Process libraries have to be re-created if this is changed.

Adding new channels: if new channels are added to HADRONS++ (choosing isotropic decay kinematics), a new decay table must be defined and the corresponding hadron must be added to HadronDecays.dat. The decay table merely needs to consist of the outgoing particles and branching ratios, i.e. the last column (the one with the decay channel file name) can safely be dropped. When Sherpa is run, it will automatically produce the decay channel files and write their names into the decay table.

Some details on tau decays: $\tau$ decays are treated within the HADRONS++ framework, even though the $\tau$ is not a hadron. As for many hadron decays, the hadronic tau decays have form factor models implemented, for details the reader is referred to [Kra10].


7.13 QED corrections

Higher order QED corrections are applied both to the hard interaction and, upon their formation, to each hadron’s subsequent decay. The Photons [Sch08] module is called in both cases for this task. It employs a YFS-type resummation [Yen61] of all infrared singular terms to all orders and is equipped with complete first order corrections for the most relevant cases (all other ones receive approximate real emission corrections built up by Catani-Seymour splitting kernels).


7.13.1 General Switches

The relevant switches to steer the higher order QED corrections reside in the ‘(fragmentation)’ section of the steering file or the fragmentation data file ‘Fragmentation.dat’, respectively.


7.13.1.1 YFS_MODE

The keyword YFS_MODE = [0,1,2] determines the mode of operation of Photons. YFS_MODE = 0 switches Photons off. Consequently, neither the hard interaction nor any hadron decay will be corrected for soft or hard photon emission. YFS_MODE = 1 sets the mode to "soft only", meaning soft emissions will be treated correctly to all orders but no hard emission corrections will be included. With YFS_MODE = 2 these hard emission corrections will also be included up to first order in alpha_QED. This is the default setting.


7.13.1.2 YFS_USE_ME

The switch YFS_USE_ME = [0,1] tells Photons how to correct hard emissions to first order in alpha_QED. If YFS_USE_ME = 0, then Photons will use collinearly approximated real emission matrix elements. Virtual emission matrix elements of order alpha_QED are ignored. If, however, YFS_USE_ME=1, then exact real and/or virtual emission matrix elements are used wherever possible. These are presently available for V->FF, V->SS, S->FF, S->SS, S->Slnu, S->Vlnu type decays, Z->FF decays and leptonic tau and W decays. For all other decay types general collinearly approximated matrix elements are used. In both approaches all hadrons are treated as point-like objects. The default setting is YFS_USE_ME = 1. This switch is only effective if YFS_MODE = 2.


7.13.1.3 YFS_IR_CUTOFF

YFS_IR_CUTOFF sets the infrared cut-off dividing the real emission into two regions, one containing the infrared divergence, the other the "hard" emissions. This cut-off is currently applied in the rest frame of the multipole of the respective decay. It also serves as a minimum photon energy in this frame for explicit photon generation for the event record. All photons with energy below this cut-off are assumed to have a negligible impact on the final-state momentum distributions. The default is YFS_IR_CUTOFF = 1E-3 (GeV). Of course, this switch is only effective if Photons is switched on, i.e. YFS_MODE = [1,2].
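The three switches above are collected in the ‘(fragmentation)’ section; the following sketch simply spells out the default values explicitly (exact first-order corrections where available, 1 MeV infrared cut-off):

  (fragmentation){
    YFS_MODE = 2
    YFS_USE_ME = 1
    YFS_IR_CUTOFF = 1E-3
  }(fragmentation)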


7.13.2 QED Corrections to the Hard Interaction

The switch to steer QED corrections to the hard scatter resides in the ’(me)’ section of the steering file or the matrix element data file ‘ME.dat’, respectively.


7.13.2.1 ME_QED

ME_QED = On/Off turns the higher order QED corrections to the matrix element on or off, respectively. The default is ‘On’. Switching QED corrections to the matrix element off has no effect on QED Corrections to Hadron Decays. The QED corrections to the matrix element are only applied to final-state particles that are not strongly interacting. If a resonant production subprocess for an unambiguous subset of all such particles is specified via the process declaration (cf. Processes), this can be taken into account and dedicated higher order matrix elements can be used (if YFS_MODE = 2 and YFS_USE_ME = 1).


7.13.2.2 ME_QED_CLUSTERING

ME_QED_CLUSTERING = On/Off switches the phase space point dependent identification of possible resonances within the hard matrix element on or off, respectively. The default is ‘On’. Resonances are identified by recombining the electroweak final state of the matrix element into resonances that are allowed by the model. Competing resonances are identified by their on-shell-ness, i.e. the distance of the decay product’s invariant mass from the nominal resonance mass in units of the resonance width.


7.13.2.3 ME_QED_CLUSTERING_THRESHOLD

Sets the maximal distance of the decay product invariant mass from the nominal resonance mass in units of the resonance width in order for the resonance to be identified. The default is ‘ME_QED_CLUSTERING_THRESHOLD = 1’.
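A ‘(me)’ section steering these corrections might look as follows; the threshold of 10 is purely illustrative (the default is 1):

  (me){
    ME_QED = On
    ME_QED_CLUSTERING = On
    ME_QED_CLUSTERING_THRESHOLD = 10.0
  }(me)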


7.13.3 QED Corrections to Hadron Decays

If the Photons module is switched on, all hadron decays are corrected for higher order QED effects.


7.14 Minimum bias events

Minimum bias events are simulated through the Shrimps module in Sherpa.


7.14.1 Physics of Shrimps


7.14.1.1 Inclusive part of the model

Shrimps is based on the KMR model [Rys09], which is a multi-channel eikonal model. The incoming hadrons are written as a superposition of Good-Walker states, which are diffractive eigenstates that diagonalise the T-matrix. This allows the inclusion of low-mass diffractive excitation. Each combination of colliding Good-Walker states gives rise to a single-channel eikonal. The final eikonal is the superposition of the single-channel eikonals. The number of Good-Walker states is 2 in Shrimps (the original KMR model includes 3 states).

Each single-channel eikonal can be seen as the product of two parton densities, one from each of the colliding Good-Walker states. The evolution of the parton densities in rapidity due to extra emissions and absorption on either of the two hadrons is described by a set of coupled differential equations. The parameter Delta, which can be interpreted as the Pomeron intercept, is the probability for emitting an extra parton per unit of rapidity. The strength of absorptive corrections is quantified by the parameter lambda, which can also be seen as the triple-Pomeron coupling. A small region of size deltaY around the beams is excluded from the evolution due to the finite longitudinal size of the parton densities.

The boundary conditions for the parton densities are form factors, which have a dipole form characterised by the parameters Lambda2, beta_0^2, kappa and xi.

In this framework the eikonals and the cross sections for the different modes (elastic, inelastic, single- and double-diffractive) are calculated.


7.14.1.2 Exclusive part of the model

Inelastic events are generated by explicitly simulating the exchange and rescattering of gluon ladders. The number of primary ladders is given by a Poisson distribution whose parameter is the single-channel eikonal. The decomposition of the incoming hadrons into partons proceeds via suitably infra-red continued PDFs.

The emissions from the ladders are then generated in a Markov chain. The pseudo-Sudakov form factor contains several factors: an ordinary gluon emission term, a factor accounting for the Reggeisation of the gluons and a recombination weight taking absorptive corrections into account. The emission term has the perturbative form alpha_s(k_T^2)/k_T^2, which needs to be continued into the infra-red region. In the case of alpha_s the transition into the infra-red region happens at Q_as^2, while in the case of 1/k_T^2 the transition scale is generated dynamically, depends on the parton densities and is scaled by Q_0^2.

The propagators of the filled ladder can be either in a colour singlet or octet state, the probabilities are again given through the parton densities. The probability for a singlet can also be regulated by hand through the parameter Chi_S. A singlet propagator is the result of an implicit rescattering.

After all emissions have been generated and the colours assigned, further radiation is generated by the parton shower, which also resums the logarithms in 1/Q^2. The amount of radiation from the parton shower can be regulated with KT2_Factor, which multiplies the shower starting scale. After parton showering, partons emitted from the ladder or the parton shower are subject to explicit rescattering, i.e. they can exchange secondary ladders. The probability for the exchange of a rescattering ladder is characterised by RescProb. The probability for rescattering over a singlet propagator receives an extra factor RescProb1. After all ladder exchanges and rescatterings, but before hadronisation, colour can be re-arranged in the event. Finally, the event is hadronised using the standard Sherpa cluster hadronisation.


7.14.2 Parameters and settings

Below is a list of all relevant parameters to steer the Shrimps module.


7.14.2.1 Generating minimum bias events

To generate minimum bias events with Shrimps, EVENT_TYPE has to be set to MinimumBias and SOFT_COLLISIONS to Shrimps in the ’(run)’ section of the run card.
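A minimal ‘(run)’ section sketch for this setup; Shrimps_Mode is shown explicitly at its default value:

  (run){
    EVENT_TYPE = MinimumBias
    SOFT_COLLISIONS = Shrimps
    Shrimps_Mode = Inelastic
  }(run)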


7.14.2.2 Shrimps Mode

The setup of minimum bias events and other related simulations is covered by the ’(run)’ section of the run card. The exact choice is steered through the parameter Shrimps_Mode (default: Inelastic), which allows the following settings:


7.14.2.3 Parameters of the eikonals

The parameters of the differential equations for the parton densities are

The form factors are of the form
F_1/2(q_T) = beta_0^2 (1 +/- kappa) exp[-xi (1 +/- kappa)q_T^2/Lambda^2]/[1 + (1 +/- kappa)q_T^2/Lambda^2]^2
with the parameters


7.14.2.4 Parameters for event generation

The parameters related to the generation of inelastic events are


8. Tips and tricks


8.1 Bash completion

Sherpa will install a file named ‘$prefix/share/SHERPA-MC/sherpa-completion’ which contains tab completion functionality for the bash shell. You simply have to source it in your active shell session by running

  .  $prefix/share/SHERPA-MC/sherpa-completion

and you will be able to tab-complete any parameters on a Sherpa command line.

To permanently enable this feature in your bash shell, you’ll have to add the source command above to your ~/.bashrc.


8.2 Rivet analyses

Sherpa is equipped with an interface to the analysis tool Rivet. To enable it, Rivet and HepMC have to be installed (e.g. using the Rivet bootstrap script) and your Sherpa compilation has to be configured with the following options:

  ./configure --enable-hepmc2=/path/to/hepmc2 --enable-rivet=/path/to/rivet

(Note: Both paths are equal if you used the Rivet bootstrap script.)

To use the interface, specify the switch

  Sherpa ANALYSIS=Rivet

and create an analysis section in Run.dat that reads as follows:

  (analysis){
    BEGIN_RIVET {
      -a D0_2008_S7662670 CDF_2007_S7057202 D0_2004_S5992206 CDF_2008_S7828950
    } END_RIVET
  }(analysis)

The line starting with -a specifies which Rivet analyses to run and the histogram output file can be changed with the normal ANALYSIS_OUTPUT switch.

You can also use rivet-mkhtml (distributed with Rivet) to create plot webpages from Rivet’s output files:

  source /path/to/rivetenv.sh   # see below
  rivet-mkhtml -o output/ file1.aida [file2.aida, ...]
  firefox output/index.html &

If your Rivet installation is not in a standard location, the bootstrap script should have created a rivetenv.sh which you have to source before running the rivet-mkhtml script.


8.3 HZTool analyses

Sherpa is equipped with an interface to the analysis tool HZTool. To enable it, HZTool and CERNLIB have to be installed and your Sherpa compilation has to be configured with the following options:

  ./configure --enable-hztool=/path/to/hztool --enable-cernlib=/path/to/cernlib --enable-hepevtsize=4000

To use the interface, specify the switch

  Sherpa ANALYSIS=HZTool

and create an analysis section in Run.dat that reads as follows:

  (analysis){
    BEGIN_HZTOOL {
      HISTO_NAME output.hbook;
      HZ_ENABLE hz00145 hz01073 hz02079 hz03160;
    } END_HZTOOL;
  }(analysis)

The line starting with HZ_ENABLE specifies which HZTool analyses to run. The histogram output directory can be changed using the ANALYSIS_OUTPUT switch, while HISTO_NAME specifies the hbook output file.


8.4 MCFM interface

Sherpa is equipped with an interface to the NLO library of MCFM for dedicated processes. To enable it, MCFM has to be installed and compiled into a single library, libMCFM.a. To this end, an installation script is provided in AddOns/MCFM/install_mcfm.sh. Please note that, due to some process specific changes made by the installation script to the MCFM code, only a few selected processes of MCFM-6.3 are available through the interface.

Finally, your Sherpa compilation has to be configured with the following options:

  ./configure --enable-mcfm=/path/to/mcfm

To use the interface, specify

  Loop_Generator MCFM;

in the process section of the run card and add it to the list of generators in ME_SIGNAL_GENERATOR. Of course, MCFM’s process.DAT file has to be copied to the current run directory.


8.5 Debugging a crashing/stalled event


8.5.1 Crashing events

If an event crashes, Sherpa tries to obtain all the information needed to reproduce that event and writes it out into a directory named

  Status__<date>_<time>

If you are a Sherpa user and want to report this crash to the Sherpa team, please attach a tarball of this directory to your email. This allows us to reproduce your crashed event and debug it.

To debug it yourself, you can follow these steps (Only do this if you are a Sherpa developer, or want to debug a problem in an addon library created by yourself):


8.5.2 Stalled events

If event generation seems to stall, you first have to find out the number of the current event. For that you would terminate the stalled Sherpa process (using Ctrl-c) and check in its final output for the number of generated events. Now you can request Sherpa to write out the random seed for the event before the stalled one:

  Sherpa [...] EVENTS=[#events - 1] SAVE_STATUS=Status/

(Replace [#events - 1] using the number you figured out earlier)

The created status directory can either be sent to the Sherpa developers, or be used in the same steps as above to reproduce that event and debug it.


8.6 Versioned installation

If you want to install different Sherpa versions into the same prefix (e.g. /usr/local), you have to enable versioning of the installed directories by using the configure option ‘--enable-versioning’. Optionally you can even pass an argument to this parameter of what you want the version tag to look like.


8.7 NLO calculations


8.7.1 Choosing DIPOLE_ALPHA

A variation of the parameter DIPOLE_ALPHA (see Dipole subtraction) changes the contribution from the real (subtracted) piece (RS) and the integrated subtraction terms (I), keeping their sum constant. Varying this parameter provides a nice consistency check of the subtraction procedure and allows optimising the integration performance of the real correction. This piece has the most complicated momentum phase space and is often the most time consuming part of the NLO calculation. The optimal choice depends on the specific setup and can best be determined by trial.

Hints to find a good value:


8.7.2 Integrating complicated Loop-ME

For complicated processes the evaluation of one-loop matrix elements can be very time consuming. The generation time of a fully optimised integration grid can become prohibitively long. Rather than using a poorly optimised grid, it is in this case more advisable to use a grid optimised with either the Born matrix elements alone, or with the Born matrix elements and the finite part of the integrated subtraction terms, working under the assumption that the distributions in phase space are rather similar.

This can be done by one of the following methods:

  1. Employ a dummy virtual (requires no computing time, returns 0. as its finite result) to optimise the grid. This only works if V is not the only NLO_QCD_Part specified.
    1. During integration set the Loop_Generator to Internal and add USE_DUMMY_VIRTUAL=1 to your (run){...}(run) section. The grid will then be optimised to the phase space distribution of the sum of the Born matrix element and the finite part of the integrated subtraction term. Note: The cross section displayed during integration will also correspond to the sum of the Born matrix element and the finite part of the integrated subtraction term.
    2. During event generation reset Loop_Generator to your generator supplying the virtual correction. The events generated then carry the correct event weight.
  2. Suppress the evaluation of the virtual and/or the integrated subtraction terms. This only works if Amegic is used as the matrix element generator for the BVI pieces and V is not the only NLO_QCD_Part specified.
    1. During integration add NLO_BVI_MODE=<num> to your (run){...}(run) section. <num> takes the following values: 1-B, 2-I, and 4-V. The values are additive, i.e. 3-BI. Note: The cross section displayed during integration will match the parts selected by NLO_BVI_MODE.
    2. During event generation remove the switch again and the events will carry the correct weight.

Note: this will not work for the RS piece!
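As a schematic illustration of method 1, the integration step might use settings along these lines (ellipses stand for the remaining, unchanged settings; for event generation USE_DUMMY_VIRTUAL is removed again and Loop_Generator is reset to the generator supplying the virtual correction):

```
(run){
  ...
  USE_DUMMY_VIRTUAL 1;
}(run)

(processes){
  ...
  Loop_Generator Internal;
  ...
}(processes)
```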


8.7.3 Avoiding misbinning effects

Close to the infrared limit, the real emission matrix element and corresponding subtraction events exhibit large cancellations. If the (minor) kinematics difference of the events happens to cross a parton-level cut or analysis histogram bin boundary, then large spurious spikes can appear.

These can be smoothed to some extent by shifting the weight from the subtraction kinematics to the real-emission kinematics if the dipole measure alpha is below a given threshold. The fraction of the shifted weight is inversely proportional to the dipole measure, such that the final real-emission and subtraction weights are calculated as:

  w_r -> w_r + sum_i [1-x(alpha_i)] w_{s,i}
  foreach i: w_{s,i} -> x(alpha_i) w_{s,i}

with the function x(alpha)=(alpha/|alpha_0|)^n for alpha<alpha_0 and 1 otherwise.

The threshold can be set by the parameter ‘NLO_SMEAR_THRESHOLD=<alpha_0>’ and the functional form of alpha and thus interpretation of the threshold can be chosen by its sign (positive: relative dipole kT in GeV, negative: dipole alpha). In addition, the exponent n can be set by ‘NLO_SMEAR_POWER=<n>’.
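The reweighting above can be sketched in standalone Python (the function name and data layout are chosen for illustration only and are not part of Sherpa); note that the sum of the real-emission and subtraction weights is unchanged by the shift:

```python
def smear_nlo_weights(w_r, sub_events, alpha_0, n):
    """Shift subtraction weight onto the real-emission kinematics.

    w_r        : real-emission weight
    sub_events : list of (alpha_i, w_s_i) pairs
    alpha_0    : smearing threshold (its magnitude is used below)
    n          : exponent, as set via NLO_SMEAR_POWER
    """
    def x(alpha):
        # x(alpha) = (alpha/|alpha_0|)^n below the threshold, 1 otherwise
        return (alpha / abs(alpha_0)) ** n if alpha < abs(alpha_0) else 1.0

    new_subs = [x(a) * w for a, w in sub_events]
    new_w_r = w_r + sum((1.0 - x(a)) * w for a, w in sub_events)
    return new_w_r, new_subs
```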


8.7.4 Enforcing the renormalization scheme

Sherpa takes information about the renormalization scheme from the loop ME generator. The default scheme is MSbar, and this is assumed if no loop ME is provided, for example when integrated subtraction terms are computed by themselves. This can lead to inconsistencies when combining event samples, which may be avoided by setting ‘LOOP_ME_INIT=1’ in the (run) section of the input file.


8.7.5 Checking the pole cancellation

To check whether the poles of the dipole subtraction and the interfaced one-loop matrix element cancel phase-space point by phase-space point, CHECK_POLES=1 can be specified. The accuracy to which the poles have to cancel can be set via CHECK_POLES_THRESHOLD=<accu>. In the same way, the finite contributions of the infrared subtraction and the one-loop matrix element can be checked by setting CHECK_FINITE=1, and the Born matrix element via CHECK_BORN=1.


8.7.6 Structure of HepMC Output

The generated events can be written out in the HepMC format to be passed through an independent analysis. For this purpose a shortened event structure is used, containing only a single vertex. Correlated real and subtraction events are labeled with the same event number such that their possible cancellations can be taken into account properly.

To use this output option, Sherpa has to be compiled with HepMC support, cf. Installation. The setting EVENT_OUTPUT=HepMC_Short[<filename>] has to be used, cf. Event output formats.

Using this HepMC output format the internal Rivet interface (Rivet analyses) can be used to pass the events through Rivet. It has to be stressed, however, that Rivet currently cannot take the correlations between real and subtraction events into account properly. The Monte-Carlo error is thus overestimated. Nonetheless, the mean is unaffected.

As above, the Rivet interface has to be instructed to use the shortened HepMC event structure:

  (analysis){
    BEGIN_RIVET {
      USE_HEPMC_SHORT 1
      -a ...
    } END_RIVET
  }(analysis)

8.7.7 Structure of ROOT NTuple Output

The generated events can be stored in a ROOT NTuple file, see Event output formats. The internal ROOT Tree has the following Branches:

id

Event ID to identify correlated real sub-events.

nparticle

Number of outgoing partons.

E/px/py/pz

Momentum components of the partons.

kf

Parton PDG code.

weight

Event weight, if sub-event is treated independently.

weight2

Event weight, if correlated sub-events are treated as single event.

me_wgt

ME weight (w/o PDF), corresponds to ’weight’.

me_wgt2

ME weight (w/o PDF), corresponds to ’weight2’.

id1

PDG code of incoming parton 1.

id2

PDG code of incoming parton 2.

fac_scale

Factorisation scale.

ren_scale

Renormalisation scale.

x1

Bjorken-x of incoming parton 1.

x2

Bjorken-x of incoming parton 2.

x1p

x’ for I-piece of incoming parton 1.

x2p

x’ for I-piece of incoming parton 2.

nuwgt

Number of additional ME weights for loops and integrated subtraction terms.

usr_wgt[nuwgt]

Additional ME weights for loops and integrated subtraction terms.


8.7.7.1 Computing (differential) cross sections of real correction events with statistical errors

Real correction events and their counter-events from subtraction terms are highly correlated and exhibit large cancellations. Although a treatment of sub-events as independent events leads to the correct cross section, the statistical error would be greatly overestimated. In order to obtain a realistic statistical error, sub-events belonging to the same event must be combined before being added to the total cross section or to a histogram bin of a differential cross section. Since in general each sub-event comes with its own set of four-momenta, the following treatment becomes necessary:

  1. An event here refers to a full real correction event that may contain several sub-events. All entries with the same id belong to the same event. Step 2 has to be repeated for each event.
  2. Each sub-event must be checked separately whether it passes possible phase space cuts. Then for each observable add up weight2 of all sub-events that go into the same histogram bin. These sums x_id are the quantities to enter the actual histogram.
  3. To compute statistical errors each bin must store the sum over all x_id and the sum over all x_id^2. The cross section in the bin is given by <x> = 1/N \sum x_id, where N is the number of events (not sub-events). The 1-\sigma statistical error for the bin is \sqrt{ (<x^2>-<x>^2)/(N-1) }
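The procedure above can be sketched in standalone Python (the function name and input layout are illustrative; sub-events are assumed to be given as (event id, observable, weight2) tuples after cuts):

```python
import math
from collections import defaultdict

def bin_subevents(entries, nbins, lo, hi):
    """Histogram correlated NLO sub-events with statistical errors.

    entries: list of (event_id, observable, weight2) tuples, one per
    sub-event that passed the cuts.  Sub-events sharing an event_id are
    combined per bin before the error sums are accumulated.
    """
    width = (hi - lo) / nbins
    sums = defaultdict(float)   # maps (bin, event_id) -> x_id
    ids = set()
    for eid, obs, w2 in entries:
        ids.add(eid)
        if lo <= obs < hi:
            sums[(int((obs - lo) / width), eid)] += w2
    n = len(ids)                # number of events, not sub-events
    hist = []
    for b in range(nbins):
        xs = [v for (bb, _), v in sums.items() if bb == b]
        s1, s2 = sum(xs), sum(x * x for x in xs)
        mean = s1 / n           # events absent from the bin count as x_id = 0
        err = math.sqrt((s2 / n - mean ** 2) / (n - 1)) if n > 1 else 0.0
        hist.append((mean, err))
    return hist
```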

Note: The main difference between weight and weight2 is that they refer to a different counting of events. While weight corresponds to each event entry (sub-event) counted separately, weight2 counts events as defined in step 1 of the above procedure. For NLO pieces other than the real correction weight and weight2 are identical.


8.7.7.2 Computation of cross sections with new PDFs

Born and real pieces:

Notation:

f_a(x_a) = PDF 1 applied on parton a, F_b(x_b) = PDF 2 applied on parton b.

The total cross section weight is given by

weight = me_wgt f_a(x_a)F_b(x_b).

Loop piece and integrated subtraction terms:

The weights here have an explicit dependence on the renormalization and factorization scales.

To take care of the renormalization scale dependence (other than via alpha_S) the weight w_0 is defined as

w_0 = me_wgt + usr_wgts[0] log((\mu_R^new)^2/(\mu_R^old)^2) + usr_wgts[1] 1/2 [log((\mu_R^new)^2/(\mu_R^old)^2)]^2.

To address the factorization scale dependence the weights w_1,...,w_8 are given by

w_i = usr_wgts[i+1] + usr_wgts[i+9] log((\mu_F^new)^2/(\mu_F^old)^2).

The full cross section weight can be calculated as

weight = w_0 f_a(x_a)F_b(x_b) + (f_a^1 w_1 + f_a^2 w_2 + f_a^3 w_3 + f_a^4 w_4) F_b(x_b) + (F_b^1 w_5 + F_b^2 w_6 + F_b^3 w_7 + F_b^4 w_8) f_a(x_a)

where

f_a^1 = f_a(x_a) (a=quark), \sum_q f_q(x_a) (a=gluon); f_a^2 = f_a(x_a/x'_a)/x'_a (a=quark), \sum_q f_q(x_a/x'_a)/x'_a (a=gluon); f_a^3 = f_g(x_a); f_a^4 = f_g(x_a/x'_a)/x'_a. The F_b^i are defined analogously.
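The recipe above can be sketched in standalone Python (the function name and argument layout are illustrative; usr_wgts is assumed to hold the 18 additional weights indexed exactly as in the formulas above, and the PDF combinations f_a^i, F_b^i are precomputed by the caller):

```python
import math

def reweight_vi(me_wgt, usr_wgts, mur2_old, mur2_new, muf2_old, muf2_new,
                f_a, F_b, f_terms, F_terms):
    """Recompute the loop/I-piece weight for new scales and PDFs.

    usr_wgts : the 18 additional ME weights from the ntuple
    f_a, F_b : f_a(x_a) and F_b(x_b) with the new PDF set
    f_terms  : [f_a^1, f_a^2, f_a^3, f_a^4] as defined above
    F_terms  : [F_b^1, F_b^2, F_b^3, F_b^4]
    """
    lr = math.log(mur2_new / mur2_old)   # log((mu_R^new)^2/(mu_R^old)^2)
    lf = math.log(muf2_new / muf2_old)   # log((mu_F^new)^2/(mu_F^old)^2)
    w0 = me_wgt + usr_wgts[0] * lr + usr_wgts[1] * 0.5 * lr ** 2
    # w_i = usr_wgts[i+1] + usr_wgts[i+9] * lf, for i = 1..8
    w = [usr_wgts[i + 1] + usr_wgts[i + 9] * lf for i in range(1, 9)]
    return (w0 * f_a * F_b
            + sum(f_terms[i] * w[i] for i in range(4)) * F_b
            + sum(F_terms[i] * w[i + 4] for i in range(4)) * f_a)
```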

The scale dependence coefficients usr_wgts[0] and usr_wgts[1] are normally obtained from the finite part of the virtual correction by removing renormalization terms and universal terms from dipole subtraction. This may be undesirable, especially when the loop provider splits up the calculation of the virtual correction into several pieces, like leading and sub-leading color. In this case the loop provider should control the scale dependence coefficients, which can be enforced with the option ‘USR_WGT_MODE=0;’ in the (run) section of Sherpa’s input file.

The loop provider must support this option or the scale dependence coefficients will be invalid!


9. Customization

Customizing Sherpa according to your needs.

Sherpa can easily be extended with user-defined tools. To this end, a corresponding C++ class must be written and compiled into an external library:

  g++ -shared \
    -I`$SHERPA_PREFIX/bin/Sherpa-config --incdir` \
    `$SHERPA_PREFIX/bin/Sherpa-config --ldflags` \
    -o libMyCustomClass.so My_Custom_Class.C

This library can then be loaded in Sherpa at runtime with the switch SHERPA_LDADD, e.g.:

  SHERPA_LDADD=MyCustomClass

Several specific examples of features which can be extended in this way are listed in the following sections.


9.1 Exotic physics

It is possible to add your own models to Sherpa in a straightforward way. To illustrate, a simple example has been included in the directory Examples/Models/SM_ZPrime, showing how to add a Z-prime boson to the Standard Model.

The important features of this example include:

To use this model, create the libraries for Sherpa to use by running

 
make

in this directory. Then run Sherpa as normal:

 
../../../bin/Sherpa

To implement your own model, copy these example files anywhere and modify them according to your needs.

Note: You don’t have to modify or recompile any part of Sherpa to use your model. As long as the SHERPA_LDADD parameter is specified as above, Sherpa will pick up your model automatically.

Furthermore note: New physics models with an existing implementation in FeynRules, cf. [Chr08] and [Chr09], can directly be invoked using Sherpa’s interface to FeynRules, see FeynRules model.


9.2 Custom scale setter

You can write a custom calculator to set the factorisation, renormalisation and resummation scales. It has to be implemented as a C++ class which derives from the Scale_Setter_Base base class and implements only the constructor and the Calculate method.

Here is a snippet for a very simple one, which sets all three scales to the invariant mass of the two incoming partons.

#include "PHASIC++/Scales/Scale_Setter_Base.H"
#include "ATOOLS/Org/Message.H"

using namespace PHASIC;
using namespace ATOOLS;

namespace PHASIC {

  class Custom_Scale_Setter: public Scale_Setter_Base {
  protected:

  public:

    Custom_Scale_Setter(const Scale_Setter_Arguments &args) :
      Scale_Setter_Base(args)
    {
      m_scale.resize(3); // by default three scales: fac, ren, res
                         // but you can add more if you need for COUPLINGS
      SetCouplings(); // the default value of COUPLINGS is "Alpha_QCD 1", i.e.
                      // m_scale[1] is used for running alpha_s
                      // (counting starts at zero!)
    }

    double Calculate(const std::vector<ATOOLS::Vec4D> &p,
		     const size_t &mode)
    {
      double muF=(p[0]+p[1]).Abs2();
      double muR=(p[0]+p[1]).Abs2();
      double muQ=(p[0]+p[1]).Abs2();

      m_scale[stp::fac] = muF;
      m_scale[stp::ren] = muR;
      m_scale[stp::res] = muQ;

      // Switch on debugging output for this class with:
      // Sherpa "OUTPUT=2[Custom_Scale_Setter|15]"
      DEBUG_FUNC("Calculated scales:");
      DEBUG_VAR(m_scale[stp::fac]);
      DEBUG_VAR(m_scale[stp::ren]);
      DEBUG_VAR(m_scale[stp::res]);

      return m_scale[stp::fac];
    }

  };

}

// Some plugin magic to make it available for SCALES=CUSTOM
DECLARE_GETTER(Custom_Scale_Setter,"CUSTOM",
	       Scale_Setter_Base,Scale_Setter_Arguments);

Scale_Setter_Base *ATOOLS::Getter
<Scale_Setter_Base,Scale_Setter_Arguments,Custom_Scale_Setter>::
operator()(const Scale_Setter_Arguments &args) const
{
  return new Custom_Scale_Setter(args);
}

void ATOOLS::Getter<Scale_Setter_Base,Scale_Setter_Arguments,
		    Custom_Scale_Setter>::
PrintInfo(std::ostream &str,const size_t width) const
{ 
  str<<"Custom scale scheme";
}

If the code is compiled into a library called libCustomScale.so, then this library is loaded dynamically at runtime with the switch ‘SHERPA_LDADD=CustomScale’ either on the command line or in the run section, cf. Customization. This then allows one to use the custom scale like a built-in scale setter, by specifying ‘SCALES=CUSTOM’ (cf. SCALES).
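For instance, assuming the snippet above is saved in a file named Custom_Scale_Setter.C (the file name here is just an example), the library can be built with the same compiler invocation as shown in Customization:

```
g++ -shared \
  -I`$SHERPA_PREFIX/bin/Sherpa-config --incdir` \
  `$SHERPA_PREFIX/bin/Sherpa-config --ldflags` \
  -o libCustomScale.so Custom_Scale_Setter.C
```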


9.3 External one-loop ME

Sherpa includes only a very limited selection of one-loop matrix elements. To make full use of the implemented automated dipole subtraction it is possible to link external one-loop codes to Sherpa in order to perform full calculations at QCD next-to-leading order.

In general Sherpa can take care of any piece of the calculation except the one-loop matrix elements, i.e. the Born ME, the real correction, the real and integrated subtraction terms, as well as the phase-space integration and PDF weights for hadron collisions. Sherpa will provide sets of four-momenta and request, for a specific parton-level process, the helicity- and colour-summed one-loop matrix element (more specifically: the coefficients of the Laurent series in the dimensional-regularization parameter epsilon up to order epsilon^0).

An example setup for interfacing such an external one-loop code, following the Binoth Les Houches interface proposal [Bin10a] of the 2009 Les Houches workshop, is provided in Zbb production. To use the LH-OLE interface, Sherpa has to be configured with --enable-lhole.

The interface:

The setup (cf. example Zbb production):


9.4 External RNG

To use an external Random Number Generator (RNG) in Sherpa, you need to provide an interface to your RNG in an external dynamic library. This library is then loaded at runtime and Sherpa replaces the internal RNG with the one provided.

In this case Sherpa will not attempt to set, save, read or restore the state of the RNG.

The corresponding code for the RNG interface is

#include "ATOOLS/Math/Random.H"

using namespace ATOOLS;

class Example_RNG: public External_RNG {
public:
  double Get() 
  { 
    // your code goes here ... 
  }
};// end of class Example_RNG

// this makes Example_RNG loadable in Sherpa
DECLARE_GETTER(Example_RNG,"Example_RNG",External_RNG,RNG_Key);
External_RNG *ATOOLS::Getter<External_RNG,RNG_Key,Example_RNG>::operator()(const RNG_Key &) const
{ return new Example_RNG(); }
// this eventually prints a help message
void ATOOLS::Getter<External_RNG,RNG_Key,Example_RNG>::PrintInfo(std::ostream &str,const size_t) const
{ str<<"example RNG interface"; }

If the code is compiled into a library called libExampleRNG.so, then this library is loaded dynamically in Sherpa using the command ‘SHERPA_LDADD=ExampleRNG’ either on the command line or in ‘Run.dat’. If the library is bound at compile time, like e.g. in cmt, you may skip this step.

Finally Sherpa is instructed to retrieve the external RNG by specifying ‘EXTERNAL_RNG=Example_RNG’ on the command line or in ‘Run.dat’.


9.5 External PDF

To use an external PDF (not included in LHAPDF) in Sherpa, you need to provide an interface to your PDF in an external dynamic library. This library is then loaded at runtime and it is possible within Sherpa to access all PDFs included.

The simplest C++ code to implement your interface looks as follows

#include "PDF/Main/PDF_Base.H"

using namespace PDF;

class Example_PDF: public PDF_Base {
public:
  void Calculate(double x,double Q2)
  {
    // calculate values x f_a(x,Q2) for all a
  }
  double GetXPDF(const ATOOLS::Flavour a)
  {
    // return x f_a(x,Q2)
  }
  virtual PDF_Base *GetCopy()
  {
    return new Example_PDF();
  }
};// end of class Example_PDF

// this makes Example_PDF loadable in Sherpa
DECLARE_PDF_GETTER(Example_PDF_Getter);
PDF_Base *Example_PDF_Getter::operator()(const Parameter_Type &args) const
{ return new Example_PDF(); }
// this eventually prints a help message
void Example_PDF_Getter::PrintInfo
(std::ostream &str,const size_t width) const
{ str<<"example PDF"; }
// this lets Sherpa initialize and unload the library
Example_PDF_Getter *p_get=NULL;
extern "C" void InitPDFLib()
{ p_get = new Example_PDF_Getter("ExamplePDF"); }
extern "C" void ExitPDFLib() { delete p_get; }

If the code is compiled into a library called libExamplePDFSherpa.so, then this library is loaded dynamically in Sherpa using ‘PDF_LIBRARY=ExamplePDFSherpa’ either on the command line, in ‘Run.dat’ or in ‘ISR.dat’. If the library is bound at compile time, like e.g. in cmt, you may skip this step. It is now possible to list all accessible PDF sets by specifying ‘SHOW_PDF_SETS=1’ on the command line.

Finally Sherpa is instructed to retrieve the external PDF by specifying ‘PDF_SET=ExamplePDF’ on the command line, in ‘Run.dat’ or in ‘ISR.dat’.


9.6 Python Interface

Certain Sherpa classes and methods can be made available to the Python interpreter in the form of an extension module. This module can be loaded in Python and provides access to certain functionalities of the Sherpa event generator in Python. In order to build the module, Sherpa must be configured with the option --enable-pyext. Running make then invokes the automated interface generator SWIG [Bea03] to create the Sherpa module using the Python C/C++ API. SWIG version 1.3.x or later is required for a successful build. Problems might occur if more than one version of Python is present on the system, since automake currently doesn’t always handle multiple Python installations properly. A possible workaround is to temporarily uninstall one version of Python, configure and build Sherpa, and then reinstall the temporarily uninstalled version.

The following script is a minimal example that shows how to use the Sherpa module in Python. In order to load the Sherpa module, the location where it is installed must be added to the PYTHONPATH. There are several ways to do this; in this example the sys module is used. <sherpa-python-lib-dir> must be replaced by the actual installation directory of the Sherpa module. This is done automatically in the test scripts of Using the Python interface. The sys module also allows one to directly pass the command line arguments used to run the script to the initialization routine of Sherpa. The script can thus be executed using the normal command line options of Sherpa (see Command line options). Furthermore it illustrates how exceptions that Sherpa might throw can be taken care of. If a run card is present in the directory where the script is executed, the initialization of the generator causes Sherpa to compute the cross sections for the processes specified in the run card. See Computing matrix elements for individual phase space points for an example that shows how to use the Python interface to compute matrix elements, or Generate events using scripts to see how the interface can be used to generate events in Python.

Note that if you have compiled Sherpa with MPI support, you need to source the mpi4py module using from mpi4py import MPI.

  #!/usr/bin/python
  import sys
  sys.path.append('<sherpa-python-lib-dir>')
  import Sherpa

  # set up the generator
  Generator=Sherpa.Sherpa()

  try:
      # initialize the generator, pass command line arguments to initialization routine
      Generator.InitializeTheRun(len(sys.argv),sys.argv)

  # catch exceptions
  except Sherpa.Exception as exc:
      print exc

10. Examples

Some example set-ups are included in Sherpa, in the <prefix>/share/SHERPA-MC/Examples/ directory. These may be useful to new users to practice with, or as templates for creating your own Sherpa run-cards. In this section, we will look at some of the main features of these examples.


10.1 Vector boson + jets production


10.1.1 W+jets production

To change any of the following LHC examples to production at different collider energies or beam types, e.g. proton anti-proton at the Tevatron, simply change the beam settings, e.g. to

  BEAM_1  2212; BEAM_ENERGY_1 980;
  BEAM_2 -2212; BEAM_ENERGY_2 980;

This is an example setup for inclusive W production at hadron colliders. The inclusive process is calculated at next-to-leading order accuracy matched to the parton shower using the MC@NLO prescription detailed in [Hoe11]. The next few higher jet multiplicities, calculated at next-to-leading order as well, are merged into the inclusive sample using the MEPS@NLO method - an extension of the CKKW method to NLO - as described in [Hoe12a] and [Geh12]. Finally, even higher multiplicities, calculated at leading order, are merged on top of that. A few things to note are detailed below the example. The example can be converted to a simple MENLOPS setup by setting LJET:=2, or to an MEPS setup by setting LJET:=0, to study the effect of incorporating higher-order matrix elements. The number of additional LO jets can be varied through NJET. Similarly, the merging cut can be changed through QCUT.

 
 
(run){
  % general setting
  EVENTS 1M; ERROR 0.99;
  MASSIVE_PS 4 5;

  % scales, tags for scale variations
  SP_NLOCT 1; FSF:=1.; RSF:=1.; QSF:=1.;
  SCALES METS{FSF*MU_F2}{RSF*MU_R2}{QSF*MU_Q2};

  ## % extra tags for custom jet criterion
  ## SHERPA_LDADD MyJetCriterion;
  ## JET_CRITERION FASTJET[A:antikt,R:0.4,y:5];

  % tags for process setup
  NJET:=4; LJET:=2,3,4; QCUT:=20.;

  % me generator settings
  ME_SIGNAL_GENERATOR Comix Amegic LOOPGEN;
  EVENT_GENERATION_MODE Weighted;
  LOOPGEN:=BlackHat;

  % exclude tau from lepton container
  MASSIVE[15] 1;

  % collider setup
  BEAM_1 2212; BEAM_ENERGY_1 = 4000.;
  BEAM_2 2212; BEAM_ENERGY_2 = 4000.;
}(run)

(processes){
  Process 93 93 -> 90 91 93{NJET};
  Order_EW 2; CKKW sqr(QCUT/E_CMS);
  NLO_QCD_Mode MC@NLO {LJET};
  ME_Generator Amegic {LJET};
  Loop_Generator LOOPGEN {LJET};
  Integration_Error 0.02 {4};
  Integration_Error 0.02 {5};
  Integration_Error 0.05 {6};
  Integration_Error 0.08 {7};
  Integration_Error 0.10 {8};
  Scales LOOSE_METS{FSF*MU_F2}{RSF*MU_R2}{QSF*MU_Q2} {7,8};
  End process;
}(processes)

(selector){
  Mass 11 -12 5. E_CMS
  Mass 13 -14 5. E_CMS
  Mass -11 12 5. E_CMS
  Mass -13 14 5. E_CMS
}(selector)

Things to notice:

The jet criterion used to define the matrix element multiplicity in the context of multijet merging can be supplied by the user. As an example the source code file ./Examples/V_plus_Jets/LHC_WJets/My_JetCriterion.C provides such an alternative jet criterion. It can be compiled using SCons by executing scons in that directory (edit the SConstruct file accordingly). The newly created library is linked at run time using the SHERPA_LDADD flag. The new jet criterion is then invoked by JET_CRITERION.


10.1.2 Z+jets production

To change any of the following LHC examples to production at different collider energies or beam types, e.g. proton anti-proton at the Tevatron, simply change the beam settings, e.g. to

  BEAM_1  2212; BEAM_ENERGY_1 980;
  BEAM_2 -2212; BEAM_ENERGY_2 980;

This is an example setup for inclusive Z production at hadron colliders. The inclusive process is calculated at next-to-leading order accuracy matched to the parton shower using the MC@NLO prescription detailed in [Hoe11]. The next few higher jet multiplicities, calculated at next-to-leading order as well, are merged into the inclusive sample using the MEPS@NLO method - an extension of the CKKW method to NLO - as described in [Hoe12a] and [Geh12]. Finally, even higher multiplicities, calculated at leading order, are merged on top of that. A few things to note are detailed below the example. The example can be converted to a simple MENLOPS setup by setting LJET:=2, or to an MEPS setup by setting LJET:=0, to study the effect of incorporating higher-order matrix elements. The number of additional LO jets can be varied through NJET. Similarly, the merging cut can be changed through QCUT.

 
 
(run){
  % general setting
  EVENTS 1M; ERROR 0.99;
  MASSIVE_PS 4 5;

  % scales, tags for scale variations
  SP_NLOCT 1; FSF:=1.; RSF:=1.; QSF:=1.;
  SCALES METS{FSF*MU_F2}{RSF*MU_R2}{QSF*MU_Q2};

  % tags for process setup
  NJET:=4; LJET:=2,3,4; QCUT:=20.;

  % me generator settings
  ME_SIGNAL_GENERATOR Comix Amegic LOOPGEN;
  EVENT_GENERATION_MODE Weighted;
  LOOPGEN:=BlackHat;

  % exclude tau from lepton container
  MASSIVE[15] 1;

  % collider setup
  BEAM_1 2212; BEAM_ENERGY_1 = 4000.;
  BEAM_2 2212; BEAM_ENERGY_2 = 4000.;
}(run)

(processes){
  Process 93 93 -> 90 90 93{NJET};
  Order_EW 2; CKKW sqr(QCUT/E_CMS);
  NLO_QCD_Mode MC@NLO {LJET};
  ME_Generator Amegic {LJET};
  Loop_Generator LOOPGEN {LJET};
  Integration_Error 0.02 {4};
  Integration_Error 0.02 {5};
  Integration_Error 0.05 {6};
  Integration_Error 0.08 {7};
  Integration_Error 0.10 {8};
  Scales LOOSE_METS{FSF*MU_F2}{RSF*MU_R2}{QSF*MU_Q2} {7,8};
  End process;
}(processes)

(selector){
  Mass 11 -11 66 E_CMS
  Mass 13 -13 66 E_CMS
}(selector)

Things to notice:


10.1.3 W+bb production

 
 
(run){
  # generator parameters
  EVENTS 0; LGEN:=Wbb;
  ME_SIGNAL_GENERATOR Amegic LGEN;
  HARD_DECAYS 1; HARD_MASS_SMEARING 0;
  MASSIVE[5] 1; WIDTH[24] 0; STABLE[24] 0;
  HDH_ONLY_DECAY {24,12,-11};
  MI_HANDLER None;
  # physics parameters
  BEAM_1 2212; BEAM_ENERGY_1 7000;
  BEAM_2 2212; BEAM_ENERGY_2 7000;
  SCALES VAR{H_T2+sqr(80.419)};
  PDF_LIBRARY MSTW08Sherpa; PDF_SET mstw2008nlo_nf4;
  MASS[5] 4.75;# consistent with MSTW 2008 nf 4 set
}(run);

(processes){
  Process 93 93 -> 24 5 -5;
  NLO_QCD_Mode MC@NLO;
  NLO_QCD_Part BVIRS;
  Loop_Generator LGEN;
  Order_EW 1;
  End process;
  Process 93 93 -> -24 5 -5;
  NLO_QCD_Mode MC@NLO;
  NLO_QCD_Part BVIRS;
  Loop_Generator LGEN;
  Order_EW 1;
  End process;
}(processes);

(selector){
  FastjetFinder antikt 2 5 0 0.5 0.75 5 100 2;
}(selector);

Things to notice:


10.1.4 Zbb production

 
 
(run){
  # generator parameters
  EVENTS 0; LGEN:=LHOLE;
  ME_SIGNAL_GENERATOR Amegic LGEN;
  HARD_DECAYS 1; HARD_MASS_SMEARING 0;
  MASSIVE[5] 1; WIDTH[23] 0; STABLE[23] 0;
  HDH_ONLY_DECAY {23,11,-11}|{23,13,-13};
  MI_HANDLER None; FRAGMENTATION Off;
  # physics parameters
  BEAM_1 2212; BEAM_ENERGY_1 7000;
  BEAM_2 2212; BEAM_ENERGY_2 7000;
  SCALES VAR{H_T2+sqr(91.188)};
  PDF_LIBRARY MSTW08Sherpa; PDF_SET mstw2008nlo_nf4;
  MASS[5] 4.75;# consistent with MSTW 2008 nf 4 set
}(run);

(processes){
  Process 93 93 -> 23 5 -5;
  NLO_QCD_Mode MC@NLO;
  NLO_QCD_Part BVIRS;
  Loop_Generator LGEN;
  Order_EW 1;
  End process;
}(processes);

(selector){
  FastjetFinder antikt 2 5 0 0.5 0.75 5 100 2;
}(selector);

Things to notice:


10.2 Jet production


10.2.1 Jet production

To change any of the following LHC examples to production at the Tevatron simply change the beam settings to

  BEAM_1  2212; BEAM_ENERGY_1 980;
  BEAM_2 -2212; BEAM_ENERGY_2 980;

10.2.1.1 MC@NLO setup for dijet and inclusive jet production

This is an example setup for dijet and inclusive jet production at hadron colliders at next-to-leading order precision matched to the parton shower using the MC@NLO prescription detailed in [Hoe11] and [Hoe12b]. A few things to note are detailed below the example.

 
 
(run){
  % general settings
  EVENTS 1M;

  % tags and settings for scale definitions
  FSF:=1.; RSF:=1.; QSF:=1.;
  SCALES FASTJET[A:antikt,PT:J1CUT,ET:0,R:0.4,M:0]{FSF*0.0625*H_T2}{RSF*0.0625*H_T2}{QSF*0.25*PPerp2(p[3])}

  % tags and settings for ME-level cuts
  J1CUT:=20.; J2CUT:=10.;

  % tags and settings for ME generators
  LOOPGEN:=<my-loop-gen>;
  ME_SIGNAL_GENERATOR Amegic LOOPGEN;
  EVENT_GENERATION_MODE Weighted;
  RESULT_DIRECTORY res_jJ1CUT_jJ2CUT_ffFSF_rfRSF_qfQSF;

  % model parameters
  MODEL SM;

  % collider setup
  BEAM_1  2212; BEAM_ENERGY_1 3500.0;
  BEAM_2  2212; BEAM_ENERGY_2 3500.0;
}(run)

(processes){
  Process 93 93 -> 93 93;
  NLO_QCD_Mode MC@NLO;
  Loop_Generator LOOPGEN;
  Order_EW 0;
  End process;
}(processes)

(selector){
  FastjetFinder  antikt 2  J2CUT  0.0  0.4
  FastjetFinder  antikt 1  J1CUT  0.0  0.4
}(selector)

Things to notice:


10.2.1.2 MEPS setup for jet production

 
 
(run){
  BEAM_1 = 2212; BEAM_ENERGY_1 = 4000;
  BEAM_2 = 2212; BEAM_ENERGY_2 = 4000;
}(run)

(processes){
  Process 93 93 -> 93 93 93{3}
  Order_EW 0;
  CKKW sqr(20/E_CMS)
  Integration_Error 0.02;
  Selector_File *|(coresel){|}(coresel) {2};
  End process;
}(processes)

(coresel){
  NJetFinder  2  20.0  0.0  0.4  -1
}(coresel)

Things to notice:


10.2.2 Jets at lepton colliders

This section contains two setups to describe jet production at LEP I, either through multijet merging at leading order accuracy or at next-to-leading order accuracy.


10.2.2.1 MEPS setup for ee->jets

 
 
(run){
  % general settings
  EVENTS 5M; NJET:=3;
  % model parameters
  ALPHAS(MZ) 0.1188;
  ORDER_ALPHAS 1;
  MASSIVE_PS 4 5;
  % collider setup
  BEAM_1  11; BEAM_ENERGY_1 45.6;
  BEAM_2 -11; BEAM_ENERGY_2 45.6;
}(run)

(processes){
  Process 11 -11 -> 93 93 93{NJET};
  CKKW pow(10,-2.25);
  Order_EW 2;
  End process;
}(processes)

This example shows a LEP setup, with electrons and positrons colliding at a centre-of-mass energy of 91.2 GeV. Two processes have been specified: one final state with two or more light quarks and gluons being produced, and one with a b b-bar pair and possibly extra light partons. Four-b-quark production is also included for consistency's sake.

Things to notice:


10.2.2.2 MEPS@NLO setup for ee->jets

 
 
(run){
  % general settings
  EVENTS 5M; ERROR 0.1;

  % tags and settings for scale definitions
  SP_NLOCT 1; SCF:=1.0; FSF:=SCF; RSF:=SCF; QSF:=1.0;
  SCALES METS{FSF*MU_F2}{RSF*MU_R2}{QSF*MU_Q2};

  % tags for process setup
  LJET:=2,3,4; NJET:=3; YCUT:=2.0;
  LMJET:=2; NMJET:=3; YMCUT:=2.0;
  NMMJET:=1; YMMCUT:=2.0;
  EXCLUSIVE_CLUSTER_MODE 1;

  % shower settings
  CSS_KFACTOR_SCHEME 0;

  % tags and settings for ME generators
  LOOPGEN0:=Internal;
  LOOPGEN1:=<my-loop-gen-for-3j>;
  LOOPGEN2:=<my-loop-gen-for-4j>;
  LOOPMGEN:=Internal;
  ME_SIGNAL_GENERATOR Comix Amegic LOOPGEN0 LOOPGEN1 LOOPGEN2 LOOPMGEN;
  EVENT_GENERATION_MODE Weighted;
  INTEGRATOR 4;

  % model parameters
  MODEL SM;
  ALPHAS(MZ) 0.118;
  MASSIVE[5] 1;

  % collider setup
  BEAM_1  11; BEAM_ENERGY_1 45.6;
  BEAM_2 -11; BEAM_ENERGY_2 45.6;
}(run);

(processes){
  Process 11 -11 -> 93 93 93{NJET};
  Order_EW 2; CKKW pow(10,-YCUT);
  NLO_QCD_Mode MC@NLO {LJET};
  Loop_Generator LOOPGEN0 {2};
  Loop_Generator LOOPGEN1 {3};
  Loop_Generator LOOPGEN2 {4};
  ME_Generator Amegic {LJET};
  RS_Enhance_Factor 10;
  End process;
  %
  Process 11 -11 -> 5 -5 93{NMJET};
  Order_EW 2; CKKW pow(10,-YMCUT);
  NLO_QCD_Mode MC@NLO {LMJET};
  Loop_Generator LOOPMGEN {2};
  ME_Generator Amegic {LMJET};
  RS_Enhance_Factor 10;
  End process;
  %
  Process 11 -11 -> 5 5 -5 -5 93{NMMJET};
  Order_EW 2; CKKW pow(10,-YMMCUT);
  Cut_Core 1;
  End process;
}(processes);

This example expands upon the above setup, elevating its description of hard jet production to next-to-leading order.

Things to notice:


10.3 Higgs boson + jets production


10.3.1 H production in gluon fusion with interference effects

This is a setup for inclusive Higgs production through gluon fusion at hadron colliders. The inclusive process is calculated at next-to-leading order accuracy, including all interference effects between Higgs-boson production and the SM gg->yy background. The corresponding matrix elements are taken from [Ber02] and [Dix13].

 
 
(run){
  # generator parameters
  EVENTS 1M; LGEN:=Higgs;
  EVENT_GENERATION_MODE W;
  AMEGIC_ALLOW_MAPPING 0;
  ME_SIGNAL_GENERATOR Amegic LGEN;
  SCALES VAR{Abs2(p[2]+p[3])};
  # collider parameters
  BEAM_1 2212; BEAM_ENERGY_1 4000;
  BEAM_2 2212; BEAM_ENERGY_2 4000;
  # physics parameters
  YUKAWA[4] 1.42; YUKAWA[5] 4.8;
  YUKAWA[15] 1.777;
  EW_SCHEME 3;
}(run);

(processes){
  Process 93 93 -> 22 22;
  NLO_QCD_Mode 1; NLO_QCD_Part BVIRS;
  Order_EW 2; Enable_MHV 12;
  Loop_Generator LGEN;
  Integrator PS2;
  RS_Integrator PS3;
  End process;
}(processes);

(selector){
  HiggsFinder 40 30 2.5 100 150;
  IsolationCut 22 0.4 2 0.025;
}(selector);

Things to notice:

To compute only the interference contribution, as was done in [Dix13], one can set ‘HIGGS_INTERFERENCE_ONLY 1;’ in the (run){...}(run) section. By default, all partonic processes are included in this simulation; however, it is sensible to disable quark initial states at leading order. This is achieved by setting ‘HIGGS_INTERFERENCE_MODE 3;’ in the (run){...}(run) section.

One can also simulate the production of a massive spin-2 graviton in Sherpa using the same input card by setting ‘HIGGS_INTERFERENCE_SPIN 2;’ in the (run){...}(run) section. Only the massive-graviton case is implemented, specifically the scenario where k_q=k_g. NLO corrections are approximate, as the gg->X->yy and qq->X->yy loop amplitudes have not yet been computed.


10.3.2 H+jets production in gluon fusion

This is an example setup for inclusive Higgs production through gluon fusion at hadron colliders. The inclusive process is calculated at next-to-leading order accuracy, matched to the parton shower using the MC@NLO prescription detailed in [Hoe11]. The next few higher jet multiplicities, also calculated at next-to-leading order, are merged into the inclusive sample using the MEPS@NLO method - an extension of the CKKW method to NLO - as described in [Hoe12a] and [Geh12]. Finally, even higher multiplicities, calculated at leading order, are merged on top of that. A few things to note are detailed below the example. The example can be converted into a simple MENLOPS setup by setting LJET:=2, or into an MEPS setup by setting LJET:=0, to study the effect of incorporating higher-order matrix elements.
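Since LJET is an ordinary tag, the conversions mentioned above do not require editing the run card: tags can be overridden on the Sherpa command line, in the same way as the SCF tag in the NTuple reweighting example later in this manual. The file name Run.dat below is a placeholder for the actual run card:

```shell
# Convert the MEPS@NLO setup into an MENLOPS run (NLO only for the core process)
Sherpa -f Run.dat LJET:=2
# ... or into a pure MEPS run at leading order
Sherpa -f Run.dat LJET:=0
```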

 
 
(run){
  % general settings
  EVENTS 5M; ERROR 0.1;

  % tags and settings for scale definitions
  SP_NLOCT=1; FSF:=1.0; RSF:=1.0; QSF:=1.0;
  SCALES STRICT_METS{FSF*MU_F2}{RSF*MU_R2}{QSF*MU_Q2};

  % tags for process setup
  LJET:=1,2,3; NJET:=3; QCUT:=30.;

  % tags and settings for ME generators
  LOOPGEN0:=Internal;
  LOOPGEN1:=MCFM;
  ME_SIGNAL_GENERATOR Comix Amegic LOOPGEN0 LOOPGEN1;
  EVENT_GENERATION_MODE Weighted;
  RESULT_DIRECTORY Results.QCUT;

  % model parameters
  MODEL SM+EHC
  YUKAWA[5] 0; YUKAWA[15] 0;
  MASS[25] 125.; WIDTH[25] 0.; STABLE[25] 0;

  % collider setup
  BEAM_1 2212; BEAM_ENERGY_1 4000;
  BEAM_2 2212; BEAM_ENERGY_2 4000;  
}(run);

(processes){
  Process 93 93 -> 25 93{NJET};
  Order_EW 1; CKKW sqr(QCUT/E_CMS);
  NLO_QCD_Mode MC@NLO {LJET}; 
  Loop_Generator LOOPGEN0 {1,2};
  Loop_Generator LOOPGEN1 {3};
  ME_Generator Amegic {LJET};
  Enhance_Factor 16 {2}; 
  Enhance_Factor 128 {3,4};
  RS_Enhance_Factor 10 {2};
  RS_Enhance_Factor 20 {3};
  End process;
}(processes);
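Here the merging cut is given as ‘CKKW sqr(QCUT/E_CMS)’, i.e. the dimensionful merging scale QCUT divided by the collider energy, squared. A quick check of the resulting dimensionless cut for the values in this setup (QCUT:=30 GeV, two 4000 GeV beams):

```python
# CKKW cut in the H+jets setup: sqr(QCUT/E_CMS)
QCUT  = 30.0     # merging scale in GeV (tag QCUT in the run card)
E_CMS = 8000.0   # centre-of-mass energy for 2 x 4000 GeV beams
cut = (QCUT / E_CMS) ** 2
print(cut)       # 1.40625e-05
```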

Things to notice:


10.3.3 Associated t anti-t H production at the LHC

This set-up illustrates the interface to an external loop matrix element generator as well as the possibility of specifying hard decays for particles emerging from the hard interaction. The process generated is the production of a Higgs boson in association with a top quark pair from two light partons in the initial state. Each top quark decays into an (anti-)bottom quark and a W boson. The W bosons in turn decay to either quarks or leptons.

 
 
(run){
  # generator parameters
  EVENTS 0; LGEN:=TTH;
  ME_SIGNAL_GENERATOR Amegic LGEN;
  HARD_DECAYS 1; HARD_MASS_SMEARING 0;
  STABLE[6] 0; STABLE[24] 0;
  WIDTH[25] 0; WIDTH[6] 0; 
  MI_HANDLER None;
  # physics parameters
  BEAM_1 2212; BEAM_ENERGY_1 7000;
  BEAM_2 2212; BEAM_ENERGY_2 7000;
  SCALES VAR{sqr(175+125/2)};
  PDF_LIBRARY LHAPDFSherpa;
  PDF_SET MSTW2008lo68cl.LHgrid;
  USE_PDF_ALPHAS 1;
}(run);

(processes){
  Process 93 93 -> 25 6 -6;
  NLO_QCD_Mode 3; NLO_QCD_Part BVIRS;
  Loop_Generator LGEN;
  Order_EW 1;
  End process;
}(processes);

Things to notice:


10.4 Top quark (pair) + jets production


10.4.1 Simulation of top quark pair production using MC@NLO methods

 
 
(run){
  EVENTS 10000;
  LJET:=2; NJET:=0; QCUT:=20;
  # Generator parameters
  LGEN:=OpenLoops;
  ME_SIGNAL_GENERATOR Comix Amegic LGEN;
  # Physics parameters
  BEAM_1 2212; BEAM_ENERGY_1 4000;
  BEAM_2 2212; BEAM_ENERGY_2 4000;
  CORE_SCALE QCD;
  WIDTH[6] 0;
}(run);

(processes){
  Process 93 93 -> 6 -6 93{NJET};
  NLO_QCD_Mode MC@NLO;
  NLO_QCD_Part BVIRS {LJET};
  ME_Generator Amegic {LJET};
  Loop_Generator LGEN;
  CKKW sqr(QCUT/E_CMS);
  Order_EW 0;
  End process;
}(processes);

Things to notice:


10.4.2 Simulation of top quark pair production in association with jets using MEPS methods

 
 
(run){
  EVENTS 10000;
  NJET:=2; QCUT:=20;
  BEAM_1 2212; BEAM_ENERGY_1 4000;
  BEAM_2 2212; BEAM_ENERGY_2 4000;
  WIDTH[6] 0; STABLE[6] 0;
  WIDTH[24] 0; STABLE[24] 0;
  HARD_DECAYS 1;
  HARD_SPIN_CORRELATIONS 1;
  CORE_SCALE QCD;
}(run);

(processes){
  Process 93 93 -> 6 -6 93{NJET};
  Order_EW 0; CKKW sqr(QCUT/E_CMS);
  End process;
}(processes);

Things to notice:


10.4.3 Production of a top quark pair in association with a W-boson

 
 
(run){
  EVENTS=10000
  EVENT_GENERATION_MODE=Weighted

  BEAM_1=2212; BEAM_ENERGY_1=4000;
  BEAM_2=2212; BEAM_ENERGY_2=4000;

  SCF:=1.0; QF:=1.0
  LGEN:=OpenLoops

  ME_SIGNAL_GENERATOR=Comix Amegic LGEN
  SCALES=METS{SCF*MU_F2}{SCF*MU_R2}{QF*MU_Q2}

  HARD_DECAYS=On
  STABLE[6]=0; STABLE[24]=0
  WIDTH[6]=0; WIDTH[24]=0
  HDH_NO_DECAY={24,2,-1}|{24,4,-3}|{24,16,-15}
  HARD_SPIN_CORRELATIONS=1

  # technical parameters
  EXCLUSIVE_CLUSTER_MODE=1
  AMEGIC_DEFAULT_GAUGE=10
}(run);

(processes){
  Process 93 93 -> 6 -6 24;
  NLO_QCD_Mode MC@NLO;
  ME_Generator Amegic;
  Loop_Generator LGEN;
  Order_EW 1;
  End process;
}(processes);

Things to notice:


10.5 Fixed-order next-to-leading order calculations


10.5.1 Production of NTuples

Root NTuples are a convenient way to store the results of cumbersome fixed-order calculations in order to perform multiple analyses. This example shows how to generate such NTuples and reweight them in order to change the factorisation and renormalisation scales. Note that in order to use this setup, Sherpa must be configured with the option --enable-root=/path/to/root, see Event output formats. If Sherpa has not been configured with Rivet analysis support, please disable the analysis using ‘-a0’ on the command line, see Command line options.

When using NTuples, one needs to bear in mind that every calculation involving jets in the final state is exclusive in the sense that a lower cutoff on the jet transverse momenta must be imposed. It is therefore necessary to check whether the event sample stored in the NTuple is sufficiently inclusive before using it. Similar remarks apply when photons are present in the NLO calculation or when cuts on leptons have been applied at generation level to increase efficiency. Every NTuple should therefore be accompanied by appropriate documentation.

This example will generate NTuples for the process pp->lvj, where l is an electron or positron, and v is an electron (anti-)neutrino. We identify parton-level jets using the anti-k_T algorithm with R=0.4 [Cac08]. We require the transverse momentum of these jets to be larger than 20 GeV. No other cuts are applied at generation level.

 
 
(run){
  EVENTS 100k;
  EVENT_GENERATION_MODE W;
  LGEN:=BlackHat; ME_SIGNAL_GENERATOR Amegic LGEN;
  ### Analysis (please configure with --enable-rivet & --enable-hepmc2)
  ANALYSIS Rivet; ANALYSIS_OUTPUT Analysis/HTp/BVI/;
  ### NTuple output (please configure with '--enable-root')
  EVENT_OUTPUT Root[NTuple_B-like];

  BEAM_1 2212; BEAM_ENERGY_1 3500;
  BEAM_2 2212; BEAM_ENERGY_2 3500;
  SCF:=1; ### default scale factor
  SCALES VAR{SCF*sqr(sqrt(H_T2)-PPerp(p[2])-PPerp(p[3])+MPerp(p[2]+p[3]))};
  EW_SCHEME 0; WIDTH_SCHEME Fixed; # sin\theta_w -> 0.23
  DIPOLE_ALPHA 0.03;
  MASSIVE[13] 1; MASSIVE[15] 1;
}(run);
(processes){
  ### The Born piece
  Process 93 93 -> 90 91 93;
  NLO_QCD_Mode 1; NLO_QCD_Part B;
  Order_EW 2;
  End process;
  ### The virtual piece
  Process 93 93 -> 90 91 93;
  NLO_QCD_Mode 1; NLO_QCD_Part V;
  Loop_Generator LGEN;
  Order_EW 2;
  End process;
  ### The integrated subtraction piece
  Process 93 93 -> 90 91 93;
  NLO_QCD_Mode 1; NLO_QCD_Part I;
  Order_EW 2;
  End process;
}(processes);
(selector){
  FastjetFinder antikt 1 20 0 0.4;
}(selector);

(analysis){
  BEGIN_RIVET {
    -a ATLAS_2012_I1083318;
    USE_HEPMC_SHORT 1;
    IGNOREBEAMS 1;
  } END_RIVET;
}(analysis);

Things to notice:


10.5.1.1 NTuple production

Start Sherpa using the command line

  Sherpa -f Run.B-like.dat

Sherpa will first create source code for its matrix-element calculations. This process will stop with a message instructing you to compile. Do so by running

  ./makelibs -j4

Launch Sherpa again, using

  Sherpa -f Run.B-like.dat

Sherpa will then compute the Born, virtual and integrated subtraction contributions to the NLO cross section and generate events. These events are analyzed using the Rivet library and stored in a Root NTuple file called NTuple_B-like.root. We will use this NTuple later to compute an NLO uncertainty band.

The real-emission contribution to the NLO cross section, including subtraction terms, is computed using

  Sherpa -f Run.R-like.dat

Events are generated, analyzed by Rivet and stored in the Root NTuple file NTuple_R-like.root.

The two analyses of events with Born-like and real-emission-like kinematics need to be merged, which can be achieved using scripts like aidaadd. The result can then be plotted and displayed.


10.5.1.2 Usage of NTuples in Sherpa

Next we will compute the NLO uncertainty band using Sherpa. To this end, we make use of the Root NTuples generated in the previous steps. Note that the setup files for reweighting are almost identical to those for generating the NTuples. We have simply replaced ‘EVENT_OUTPUT’ by ‘EVENT_INPUT’.

First we re-evaluate the events with the scale increased by a factor 2:

  Sherpa -f Reweight.B-like.dat
  Sherpa -f Reweight.R-like.dat

Then we re-evaluate the events with the scale decreased by a factor 2:

  Sherpa -f Reweight.B-like.dat SCF:=0.25 -A Analysis/025HTp/BVI
  Sherpa -f Reweight.R-like.dat SCF:=0.25 -A Analysis/025HTp/RS

The two contributions can again be combined using aidaadd.
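Note that the SCF tag multiplies the squared scales MU_F2 and MU_R2 (see the SCALES line in the run card), so the linear scale factor is the square root of SCF: SCF:=0.25 corresponds to dividing the scales by two, and - assuming the Reweight files, which are not shown here, use SCF:=4.0 for the upward variation - that value corresponds to doubling them. A quick sanity check:

```python
import math

# SCF multiplies the squared scales, so the linear scale factor is sqrt(SCF).
# The value 4.0 for the upward variation is an assumption about the Reweight
# files, which are not reproduced in this example.
for SCF in (0.25, 4.0):
    print(SCF, math.sqrt(SCF))   # 0.25 -> 0.5 (mu/2), 4.0 -> 2.0 (2*mu)
```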


10.6 Soft QCD: Minimum Bias and Cross Sections


10.6.1 Calculation of inclusive cross sections

 
 
(run){
  OUTPUT              = 2
  EVENT_TYPE          = MinimumBias 
  SOFT_COLLISIONS     = Shrimps
  Shrimps_Mode        = Xsecs
}(run)

(beam){
  BEAM_1 =  2212; BEAM_ENERGY_1 = 450.;
  BEAM_2 =  2212; BEAM_ENERGY_2 = 450.;
}(beam)

(me){
  ME_SIGNAL_GENERATOR = None
}(me)

Things to notice:


10.6.2 Simulation of Minimum Bias events

 
 
(run){
  EVENTS              = 1000000
  OUTPUT              = 2
  EVENT_TYPE          = MinimumBias 
  SOFT_COLLISIONS     = Shrimps
  Shrimps_Mode        = Inelastic

  ANALYSIS            = Rivet

  ANALYSIS_OUTPUT     = Shrimps.7TeV
  MAX_PROPER_LIFETIME = 10.
}(run)

(beam){
  BEAM_1 =  2212; BEAM_ENERGY_1 = 3500.;
  BEAM_2 =  2212; BEAM_ENERGY_2 = 3500.;
}(beam)

(analysis){
  BEGIN_RIVET {
  -a DIFFMASS ATLAS_2010_S8918562 ATLAS_2010_S8894728 ATLAS_2011_S8994773 ATLAS_2012_I1084540 TOTEM_2012_DNDETA ATLAS_2011_I919017 CMS_2011_S8978280 CMS_2011_S9120041 CMS_2011_S9215166 #CMS_2010_S8547297 CMS_2010_S8656010 CMS_2011_S8884919 CMS_QCD_10_024
  } END_RIVET
}(analysis)

(me){
  ME_SIGNAL_GENERATOR = None
}(me)

Things to notice:


10.7 Setups for event production at B-factories


10.7.1 QCD continuum

Example setup for QCD continuum production at the Belle/KEK collider. Note that it does not include any hadronic resonances.

 
 
(run){
  % general settings
  EVENTS 5M;
  % model parameters
  ALPHAS(MZ) 0.1188;
  ORDER_ALPHAS 1;
  MASSIVE[4] 1;
  MASSIVE[5] 1;
  MASSIVE_PS 3 4 5;
  % collider setup
  BEAM_1  11; BEAM_ENERGY_1 7.;
  BEAM_2 -11; BEAM_ENERGY_2 4.;
}(run)

(processes){
  Process 11 -11 -> 93 93;
  Order_EW 2;
  End process;
  Process 11 -11 -> 4 -4;
  Order_EW 2;
  End process;
  Process 11 -11 -> 5 -5;
  Order_EW 2;
  End process;
}(processes)

Things to notice:


10.7.2 Signal process

Example setup for B-hadron pair production on the Y(4S) pole.
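With the asymmetric beam energies of 7 GeV and 4 GeV used below, the centre-of-mass energy in the massless-beam approximation is sqrt(s) = sqrt(4*E1*E2), which indeed sits on the Y(4S) resonance (m ~ 10.579 GeV):

```python
import math

E1, E2 = 7.0, 4.0                 # beam energies in GeV from the run card
sqrt_s = math.sqrt(4 * E1 * E2)   # massless-beam approximation
print(sqrt_s)                     # ~10.583 GeV, close to m(Y(4S))
```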

 
 
(run){
  % general settings
  EVENTS 5M;
  % model parameters
  ALPHAS(MZ) 0.1188;
  ORDER_ALPHAS 1;
  MASSIVE[4] 1;
  MASSIVE[5] 1;
  MASSIVE_PS 3 4 5;
  ME_SIGNAL_GENERATORS Internal;
  SCALES VAR{sqr(91.2)};
  % collider setup
  BEAM_1  11; BEAM_ENERGY_1 7.;
  BEAM_2 -11; BEAM_ENERGY_2 4.;
}(run)

(processes){
  #
  # electron positron -> Y(4S) -> B+ B-
  #
  Process 11 -11 -> 300553[a];
  Decay 300553[a] -> 521 -521;
  End process;
  #
  # electron positron -> Y(4S) -> B0 B0bar
  #
  Process 11 -11 -> 300553[a];
  Decay 300553[a] -> 511 -511;
  End process;
}(processes)

Things to notice:


10.7.3 Single hadron decay chains

This setup is not a collider setup, but a simulation of a hadronic decay chain.

 
 
(run){
  % general settings
  EVENTS 5M;
  EVENT_TYPE HadronDecay;

  % specify hadron to be decayed
  DECAYER 511;

  % initialise rest for Sherpa not to complain
  % model parameters
  ME_SIGNAL_GENERATORS Internal;
  SCALES VAR{sqr(91.2)};
  % collider setup
  BEAM_1  11; BEAM_ENERGY_1 7.;
  BEAM_2 -11; BEAM_ENERGY_2 4.;
}(run)

(processes){
  Process 11 -11 -> 13 -13;
  End process;
}(processes)

Things to notice:


10.8 Using the Python interface


10.8.1 Computing matrix elements for individual phase space points

Sherpa’s Python interface (see Python Interface) can be used to compute matrix elements for individual phase space points. For processes with coloured external particles this is so far only supported by AMEGIC++. Comix can be used, however, if all external particles are colourless.

All information about the incoming and outgoing flavours and momenta of a process is stored in a ’cluster amplitude’. For each incoming and outgoing particle, a ’leg’ must be added to the cluster amplitude using the ’CreateLegFromPyVec4D’ method. This method takes a ’Vec4D’ object representing the four-momentum of the corresponding particle as its first argument; the second argument specifies the flavour. Note that both momenta and flavours must be reversed for legs that correspond to incoming particles. A Flavour can be reversed by setting the second argument of its constructor to ’1’; the first argument is the PDG ID of the particle. Sherpa.Flavour(11,1) represents a positron, for example. Note that the value returned by the ’Differential’ method of the process needs to be multiplied by a factor of two times the centre-of-mass energy squared.
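For the 91.2 GeV e+e- configuration used in the script below (two 45.6 GeV beams), this normalisation factor of two times the centre-of-mass energy squared evaluates to:

```python
# Normalisation factor for the 'Differential' method: 2 * s,
# with s the squared centre-of-mass energy (two beams of 45.6 GeV each)
E_beam = 45.6
s = (2 * E_beam) ** 2    # (91.2 GeV)^2
factor = 2 * s
print(factor)            # ~1.663e4 GeV^2
```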

If AMEGIC++ is used as the matrix element generator, executing the script will result in AMEGIC++ writing out libraries and exiting. After compiling the libraries using ./makelibs, the script must be executed again to obtain the matrix element. On some systems, this might result in termination with an error of the form

Library_Loader::LoadLibrary(): ./Process/lib/libProc_P2_2_2_6_24_16_5_0.so: undefined symbol: _ZN6AMEGIC10Basic_Func1XEiii

If this is the case, the library libSherpaMain.so must be preloaded, which can be achieved on a Linux system by setting LD_PRELOAD accordingly:

export LD_PRELOAD=<prefix>/lib/SHERPA-MC/libSherpaMain.so

To prevent Sherpa’s initialization routine from integrating total cross sections, and thereby save time, one can pass the command line argument INIT_ONLY=2 when executing the script. Alternatively, this argument can be added in the script itself via sys.argv.append('INIT_ONLY=2').

 
 
#!/usr/bin/env python2
## from mpi4py import MPI
import sys
sys.path.append('@PYLIBDIR@')
import Sherpa

# Add this to the execution arguments to prevent Sherpa from starting to integrate the cross section
sys.argv.append('INIT_ONLY=2')

Generator=Sherpa.Sherpa()
try:
    Generator.InitializeTheRun(len(sys.argv),sys.argv)
    Process=Sherpa.MEProcess(Generator)

    # if MPI.COMM_WORLD.Get_rank()>0:
    #    exit(1)

    # Incoming flavors must be added first!
    Process.AddInFlav(11);
    Process.AddInFlav(-11);
    Process.AddOutFlav(1);
    Process.AddOutFlav(-1);
    Process.Initialize();

    if Process.HasColorIntegrator(): 
        Process.GenerateColorPoint();

    # First argument corresponds to particle index 
    # which in turn is determined by the order in which the 
    # particles were added
    Process.SetMomentum(0, 45.6,0.,0.,45.6)
    Process.SetMomentum(1, 45.6,0.,0.,-45.6)
    Process.SetMomentum(2, 45.6,0.,45.6,0.)
    Process.SetMomentum(3, 45.6,0.,-45.6,0.)

    print '\nSquared matrix element:'
    print Process.CSMatrixElement()
    print '\n'

except Sherpa.Exception as exc:
    exit(1)

10.8.2 Generate events using scripts

This example shows how to generate events with Sherpa using a Python wrapper script. For each event the weight, the number of trials and the particle information are printed to stdout. This script can be used as a basis for constructing interfaces to your own analysis routines.

 
 
#!/usr/bin/python2
## from mpi4py import MPI
import sys
sys.path.append('@PYLIBDIR@')
import Sherpa

Generator=Sherpa.Sherpa()
try:
    Generator.InitializeTheRun(len(sys.argv),sys.argv)
    Generator.InitializeTheEventHandler()
    for n in range(1,1+Generator.NumberOfEvents()):
        Generator.GenerateOneEvent()
        blobs=Generator.GetBlobList();
        print "Event",n,"{"
        ## print blobs
        print "  Weight ",blobs.GetFirst(1)["Weight"];
        print "  Trials ",blobs.GetFirst(1)["Trials"];
        for i in range(0,blobs.size()):
            print "  Blob",i,"{"
            ## print blobs[i];
            print "    Incoming particles"
            for j in range(0,blobs[i].NInP()):
                part=blobs[i].InPart(j)
                ## print part
                s=part.Stat()
                f=part.Flav()
                p=part.Momentum()
                print "     ",j,": ",s,f,p
            print "    Outgoing particles"
            for j in range(0,blobs[i].NOutP()):
                part=blobs[i].OutPart(j)
                ## print part
                s=part.Stat()
                f=part.Flav()
                p=part.Momentum()
                print "     ",j,": ",s,f,p
            print "  } Blob",i
        print "} Event",n
        if ((n%100)==0): print "  Event ",n
    Generator.SummarizeRun()
        
except Sherpa.Exception as exc:
    exit(1)

10.8.3 Generate events with MPI using scripts

This example shows how to generate events with Sherpa using a Python wrapper script and MPI. For each event the weight, the number of trials and the particle information are sent to the MPI master node and written into a single gzip’ed output file. Note that you need the mpi4py module to run this example. Sherpa must be configured and installed using ‘--enable-mpi’, see MPI parallelization.

 
 
#!/usr/bin/python2
from mpi4py import MPI
import sys
sys.path.append('@PYLIBDIR@')
import Sherpa
import gzip

class MyParticle:
    def __init__(self,p):
        self.kfc=p.Flav().Kfcode()
        if p.Flav().IsAnti(): self.kfc=-self.kfc
        self.E=p.Momentum()[0]
        self.px=p.Momentum()[1]
        self.py=p.Momentum()[2]
        self.pz=p.Momentum()[3]
    def __str__(self):
        return (str(self.kfc)+" "+str(self.E)+" "
                +str(self.px)+" "+str(self.py)+" "+str(self.pz))

Generator=Sherpa.Sherpa()
try:
    Generator.InitializeTheRun(len(sys.argv),sys.argv)
    Generator.InitializeTheEventHandler()
    comm=MPI.COMM_WORLD
    rank=comm.Get_rank()
    size=comm.Get_size()
    if rank==0:
        outfile=gzip.GzipFile("events.gz",'w')
        for n in range(1,1+Generator.NumberOfEvents()):
            for t in range(1,size):
                weight=comm.recv(source=t,tag=t)
                trials=comm.recv(source=t,tag=2*t)
                parts=comm.recv(source=t,tag=3*t)
                outfile.write("E "+str(weight)+" "+str(trials)+"\n")
                for p in parts:
                    outfile.write(str(p)+"\n")
            if (n%100)==0: print "  Event",n
        outfile.close()
    else:
        for n in range(1,1+Generator.NumberOfEvents()):
            Generator.GenerateOneEvent()
            blobs=Generator.GetBlobList();
            weight=blobs.GetFirst(1)["Weight"]
            trials=blobs.GetFirst(1)["Trials"]
            parts=[]
            for i in range(0,blobs.size()):
                for j in range(0,blobs[i].NOutP()):
                    part=blobs[i].OutPart(j)
                    if part.Stat()==1 and part.HasDecBlob()==0:
                        parts.append(MyParticle(part))
            comm.send(weight,dest=0,tag=rank)
            comm.send(trials,dest=0,tag=2*rank)
            comm.send(parts,dest=0,tag=3*rank)
    Generator.SummarizeRun()

except Sherpa.Exception as exc:
    exit(1)

11. Getting help

If Sherpa exits abnormally, first check the Sherpa output for hints on the reason for the program abort, and try to figure out what has gone wrong with the help of the Manual. Note that Sherpa throwing a ‘normal_exit’ exception does not imply abnormal program termination! When using AMEGIC++, Sherpa will exit with the message:

 
   New libraries created. Please compile.

In this case, follow the instructions given in Running Sherpa with AMEGIC++.

If this does not help, contact the Sherpa team (see the Sherpa Team section of the website http://www.sherpa-mc.de), providing all information on your setup. Please include

  1. A complete tarred and gzipped set of the ‘.dat’ files leading to the crash. Use the status recovery directory Status__<date of crash> produced before the program abort.
  2. The command line (including possible parameters) you used to start Sherpa.
  3. The installation log file, if available.

12. Authors

Sherpa was written by the Sherpa Team, see http://www.sherpa-mc.de.


13. Copying

Sherpa is free software. You can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation. You should have received a copy of the GNU General Public License along with the source for Sherpa; see the file COPYING. If not, write to the Free Software Foundation, 59 Temple Place, Suite 330, Boston, MA 02111-1307, USA.

Sherpa is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.

Sherpa was created during the Marie Curie RTN’s HEPTOOLS, MCnet and LHCphenonet. The MCnet Guidelines apply, see the file GUIDELINES and http://www.montecarlonet.org/index.php?p=Publications/Guidelines.


A. References


B. Index

Index Entry   Section

1
1/ALPHAQED(0)   7.4.1 Standard Model
1/ALPHAQED(default)   7.4.1 Standard Model

A
A   7.4.1 Standard Model
ACTIVE[<id>]   7.4 Model parameters
ALPHA   7.4.5 Two Higgs Doublet Model
ALPHAS(default)   7.4.1 Standard Model
ALPHAS(MZ)   7.4.1 Standard Model
ALPHA_4_G_4   7.4.4 Anomalous Gauge Couplings
ALPHA_5   7.4.4 Anomalous Gauge Couplings
ANALYSIS   7.1.8 ANALYSIS
ANALYSIS_OUTPUT   7.1.9 ANALYSIS_OUTPUT
ANALYSIS_OUTPUT   8.2 Rivet analyses
ANALYSIS_OUTPUT   8.3 HZTool analyses
A_14   7.4.7 Fourth Generation
A_24   7.4.7 Fourth Generation
A_34   7.4.7 Fourth Generation

B
BARYON_FRACTION   7.12.1.4 Cluster transition to hadrons - flavour part
BATCH_MODE   7.1.12 BATCH_MODE
BEAM_1   7.2 Beam parameters
BEAM_2   7.2 Beam parameters
BEAM_ENERGY_1   7.2 Beam parameters
BEAM_ENERGY_2   7.2 Beam parameters
BEAM_REMNANTS   7.2.2 Intrinsic Transverse Momentum
BEAM_SMAX   7.2.1 Beam Spectra
BEAM_SMIN   7.2.1 Beam Spectra
BEAM_SPECTRUM_1   7.2.1 Beam Spectra
BEAM_SPECTRUM_2   7.2.1 Beam Spectra
beta_0^2   7.14.2.3 Parameters of the eikonals
BH_SETTINGS_FILE   7.6.30.1 BlackHat Interface
BUNCH_1   7.3 ISR parameters
BUNCH_2   7.3 ISR parameters

C
CABIBBO   7.4.1 Standard Model
CHECK_BORN   8.7.5 Checking the pole cancellation
CHECK_FINITE   8.7.5 Checking the pole cancellation
CHECK_POLES   8.7.5 Checking the pole cancellation
CHECK_POLES_THRESHOLD   8.7.5 Checking the pole cancellation
Chi_S   7.14.2.4 Parameters for event generation
CKMORDER   7.4.1 Standard Model
COMIX_ME_THREADS   7.1.18 Multi-threading
COMIX_PS_THREADS   7.1.18 Multi-threading
CORE_SCALE   7.5.4.7 METS scale setting with multiparton core processes
COUPLINGS   7.5.6 COUPLINGS
COUPLING_SCHEME   7.5.5 COUPLING_SCHEME
CSS_EVOLUTION_SCHEME   7.10.5 CS Shower options
CSS_EW_MODE   7.10.5 CS Shower options
CSS_FS_AS_FAC   7.10.5 CS Shower options
CSS_FS_PT2MIN   7.10.5 CS Shower options
CSS_IS_AS_FAC   7.10.5 CS Shower options
CSS_IS_PT2MIN   7.10.5 CS Shower options
CSS_KIN_SCHEME   7.10.5 CS Shower options
CSS_MASS_THRESHOLD   7.10.5 CS Shower options
CSS_MAXEM   7.10.5 CS Shower options
CSS_NOEM   7.10.5 CS Shower options
CSS_SCALE_SCHEME   7.10.5 CS Shower options
CSS_SHOWER_SCALE2_FACTOR   7.5.4.6 Scale variations in parton showered and merged samples

D
DEACTIVATE_GGH   7.4.6 Effective Higgs Couplings
DEACTIVATE_PPH   7.4.6 Effective Higgs Couplings
DECAYMODEL   7.12.2 Hadron decays
DECAYPATH   7.12.2 Hadron decays
DECAY_OFFSET   7.12.1.4 Cluster transition to hadrons - flavour part
DECAY_RESULT_DIRECTORY   7.9.5 DECAY_RESULT_DIRECTORY
DECAY_TAU_HARD   7.9.10 DECAY_TAU_HARD
Delphes   7.1.16 Event output formats
Delta   7.14.2.3 Parameters of the eikonals
deltaY   7.14.2.3 Parameters of the eikonals
DIPOLE_ALPHA   7.5.9 Dipole subtraction
DIPOLE_AMIN   7.5.9 Dipole subtraction
DIPOLE_KAPPA   7.5.9 Dipole subtraction
DIPOLE_NF_GSPLIT   7.5.9 Dipole subtraction

E
EHC_SCALE2   7.4.6 Effective Higgs Couplings
EPA_AlphaQED   7.2.1.3 EPA
EPA_Form_Factor_1   7.2.1.3 EPA
EPA_Form_Factor_2   7.2.1.3 EPA
EPA_ptMin_1   7.2.1.3 EPA
EPA_ptMin_2   7.2.1.3 EPA
EPA_q2Max_1   7.2.1.3 EPA
EPA_q2Max_2   7.2.1.3 EPA
ETA   7.4.1 Standard Model
EVENTS   7.1.1 EVENTS
EVENT_DISPLAY_INTERVAL   7.1.12 BATCH_MODE
EVENT_GENERATION_MODE   7.5.3 EVENT_GENERATION_MODE
EVENT_INPUT   7.1.16 Event output formats
EVENT_OUTPUT   7.1.16 Event output formats
EVENT_TYPE   7.1.2 EVENT_TYPE
EVENT_TYPE   7.14.2.1 Generating minimum bias events
EVT_FILE_PATH   7.1.16 Event output formats
EVT_OUTPUT   7.1.4 OUTPUT
EW_SCHEME   7.4.1 Standard Model
E_LASER_   7.2.1.1 Laser Backscattering
E_LASER_   7.2.1.1 Laser Backscattering

F
F4_GAMMA   7.4.4 Anomalous Gauge Couplings
F4_Z   7.4.4 Anomalous Gauge Couplings
F5_GAMMA   7.4.4 Anomalous Gauge Couplings
F5_Z   7.4.4 Anomalous Gauge Couplings
FACTORIZATION_SCALE_FACTOR   7.5.4.5 Simple scale variations
FeynRules   7.4.8 FeynRules model
FILE_SIZE   7.1.16 Event output formats
FINISH_OPTIMIZATION   7.8.4 FINISH_OPTIMIZATION
FINITE_TOP_MASS   7.4.6 Effective Higgs Couplings
FINITE_W_MASS   7.4.6 Effective Higgs Couplings
FRAGMENTATION   7.12.1.1 Fragmentation models
FR_IDENTFILE   7.4.8 FeynRules model
FR_INTERACTIONS   7.4.8 FeynRules model
FR_PARAMCARD   7.4.8 FeynRules model
FR_PARAMDEF   7.4.8 FeynRules model
FR_PARTICLES   7.4.8 FeynRules model

G
G1_GAMMA   7.4.4 Anomalous Gauge Couplings
G1_Z   7.4.4 Anomalous Gauge Couplings
G4_GAMMA   7.4.4 Anomalous Gauge Couplings
G4_Z   7.4.4 Anomalous Gauge Couplings
G5_GAMMA   7.4.4 Anomalous Gauge Couplings
G5_Z   7.4.4 Anomalous Gauge Couplings
G_NEWTON   7.4.3 ADD Model of Large Extra Dimensions

H
H1_GAMMA   7.4.4 Anomalous Gauge Couplings
H1_Z   7.4.4 Anomalous Gauge Couplings
H2_GAMMA   7.4.4 Anomalous Gauge Couplings
H2_Z   7.4.4 Anomalous Gauge Couplings
H3_GAMMA   7.4.4 Anomalous Gauge Couplings
H3_Z   7.4.4 Anomalous Gauge Couplings
H4_GAMMA   7.4.4 Anomalous Gauge Couplings
H4_Z   7.4.4 Anomalous Gauge Couplings
HARD_DECAYS   7.9 Hard decays
HARD_MASS_SMEARING   7.9.8 HARD_MASS_SMEARING
HARD_SPIN_CORRELATIONS   7.9.3 HARD_SPIN_CORRELATIONS
HARD_SPIN_CORRELATIONS   7.12.2.5 Further remarks
HDH_BR_WEIGHTS   7.9.7 HDH_BR_WEIGHTS
HDH_NO_DECAY   7.9.1 HDH_NO_DECAY
HDH_ONLY_DECAY   7.9.2 HDH_ONLY_DECAY
HDH_SET_WIDTHS   7.9.6 HDH_SET_WIDTHS
HEAVY_BARYON_ENHANCEMEMT   7.12.1.3 Hadron multiplets
HEPEVT   7.1.16 Event output formats
HepMC_GenEvent   7.1.16 Event output formats
HepMC_Short   7.1.16 Event output formats
HIGGS_INTERFERENCE_MODE   10.3.1 H production in gluon fusion with interference effects
HIGGS_INTERFERENCE_ONLY   10.3.1 H production in gluon fusion with interference effects
HIGGS_INTERFERENCE_SPIN   10.3.1 H production in gluon fusion with interference effects

I
INTEGRATION_ERROR   7.8.1 INTEGRATION_ERROR
INTEGRATOR   7.8.2 INTEGRATOR
ISR_E_ORDER   7.3 ISR parameters
ISR_E_SCHEME   7.3 ISR parameters
ISR_SMAX   7.3 ISR parameters
ISR_SMIN   7.3 ISR parameters

K
kappa   7.14.2.3 Parameters of the eikonals
KAPPAT_GAMMA   7.4.4 Anomalous Gauge Couplings
KAPPAT_Z   7.4.4 Anomalous Gauge Couplings
KAPPA_GAMMA   7.4.4 Anomalous Gauge Couplings
KAPPA_Z   7.4.4 Anomalous Gauge Couplings
KFACTOR   7.5.7 KFACTOR
KK_CONVENTION   7.4.3 ADD Model of Large Extra Dimensions
KT2_Factor   7.14.2.4 Parameters for event generation
K_PERP_MEAN_1   7.2.2 Intrinsic Transverse Momentum
K_PERP_MEAN_2   7.2.2 Intrinsic Transverse Momentum
K_PERP_SIGMA_1   7.2.2 Intrinsic Transverse Momentum
K_PERP_SIGMA_2   7.2.2 Intrinsic Transverse Momentum

L
lambda   7.14.2.3 Parameters of the eikonals
LAMBDA   7.4.1 Standard Model
Lambda2   7.14.2.3 Parameters of the eikonals
LAMBDAT_GAMMA   7.4.4 Anomalous Gauge Couplings
LAMBDAT_Z   7.4.4 Anomalous Gauge Couplings
LAMBDA_GAMMA   7.4.4 Anomalous Gauge Couplings
LAMBDA_Z   7.4.4 Anomalous Gauge Couplings
LASER_ANGLES   7.2.1.1 Laser Backscattering
LASER_MODE   7.2.1.1 Laser Backscattering
LASER_NONLINEARITY   7.2.1.1 Laser Backscattering
LHEF   7.1.16 Event output formats
LHOLE_BOOST_TO_CMS   9.3 External one-loop ME
LHOLE_CONTRACTFILE   9.3 External one-loop ME
LHOLE_IR_REGULARISATION   9.3 External one-loop ME
LHOLE_OLP   9.3 External one-loop ME
LHOLE_ORDERFILE   9.3 External one-loop ME
LOG_FILE   7.1.5 LOG_FILE
Loop_Generator   8.4 MCFM interface
LOOP_ME_INIT   8.7.4 Enforcing the renormalization scheme

M
MassExponent_C->HH   7.12.1.5 Cluster transition and decay weights
MASSIVE[<id>]   7.4 Model parameters
MASS[<id>]   7.4 Model parameters
MASS[<id>]   7.12.2 Hadron decays
MAX_PROPER_LIFETIME   7.12.2 Hadron decays
MEMLEAK_WARNING_THRESHOLD   7.1.11 RLIMIT_AS
ME_QED   7.13.2.1 ME_QED
ME_QED_CLUSTERING   7.13.2.2 ME_QED_CLUSTERING
ME_QED_CLUSTERING_THRESHOLD   7.13.2.3 ME_QED_CLUSTERING_THRESHOLD
ME_SIGNAL_GENERATOR   7.5.1 ME_SIGNAL_GENERATOR
Mixing_0+   7.12.1.3 Hadron multiplets
Mixing_1-   7.12.1.3 Hadron multiplets
MI_HANDLER   7.11.1 MI_HANDLER
MI_RESULT_DIRECTORY   7.11.8 MI_RESULT_DIRECTORY
MI_RESULT_DIRECTORY_SUFFIX   7.11.9 MI_RESULT_DIRECTORY_SUFFIX
MODEL   7.4 Model parameters
MSTJ   7.12.1.1 Fragmentation models
MSTP   7.12.1.1 Fragmentation models
MSTU   7.12.1.1 Fragmentation models
MULTI_WEIGHT_L0R0_DELTA_3/2   7.12.1.3 Hadron multiplets
MULTI_WEIGHT_L0R0_N*_1/2   7.12.1.3 Hadron multiplets
MULTI_WEIGHT_L0R0_N_1/2   7.12.1.3 Hadron multiplets
MULTI_WEIGHT_L0R0_PSEUDOSCALARS   7.12.1.3 Hadron multiplets
MULTI_WEIGHT_L0R0_TENSORS2   7.12.1.3 Hadron multiplets
MULTI_WEIGHT_L0R0_TENSORS3   7.12.1.3 Hadron multiplets
MULTI_WEIGHT_L0R0_TENSORS4   7.12.1.3 Hadron multiplets
MULTI_WEIGHT_L0R0_VECTORS   7.12.1.3 Hadron multiplets
MULTI_WEIGHT_L0R1_AXIALVECTORS   7.12.1.3 Hadron multiplets
MULTI_WEIGHT_L0R1_SCALARS   7.12.1.3 Hadron multiplets
MULTI_WEIGHT_L1R0_AXIALVECTORS   7.12.1.3 Hadron multiplets
MULTI_WEIGHT_L1R0_DELTA*_3/2   7.12.1.3 Hadron multiplets
MULTI_WEIGHT_L1R0_N*_1/2   7.12.1.3 Hadron multiplets
MULTI_WEIGHT_L1R0_N*_3/2   7.12.1.3 Hadron multiplets
MULTI_WEIGHT_L1R0_SCALARS   7.12.1.3 Hadron multiplets
MULTI_WEIGHT_L1R0_TENSORS2   7.12.1.3 Hadron multiplets
MULTI_WEIGHT_L2R0_VECTORS   7.12.1.3 Hadron multiplets
MULTI_WEIGHT_L3R0_VECTORS   7.12.1.3 Hadron multiplets
M_BIND_0   7.12.1.2 Hadron constituents
M_BIND_1   7.12.1.2 Hadron constituents
M_BOTTOM   7.12.1.2 Hadron constituents
M_CHARM   7.12.1.2 Hadron constituents
M_CUT   7.4.3 ADD Model of Large Extra Dimensions
M_DIQUARK_OFFSET   7.12.1.2 Hadron constituents
M_S   7.4.3 ADD Model of Large Extra Dimensions
M_STRANGE   7.12.1.2 Hadron constituents
M_UP_DOWN   7.12.1.2 Hadron constituents

N
NUM_ACCURACY   7.1.13 NUM_ACCURACY
N_ED   7.4.3 ADD Model of Large Extra Dimensions

O
ORDER_ALPHAS   7.4.1 Standard Model
OUTPUT   7.1.4 OUTPUT
OUTPUT_MIXING   7.4.7 Fourth Generation
OUTPUT_PRECISION   7.1.16 Event output formats

P
PARJ   7.12.1.1 Fragmentation models
PARP   7.12.1.1 Fragmentation models
PARTICLE_CONTAINER   7.6.1.2 Particle containers
PARU   7.12.1.1 Fragmentation models
PATH   6. Input structure
PDF_LIBRARY   7.3 ISR parameters
PDF_LIBRARY_1   7.3 ISR parameters
PDF_LIBRARY_2   7.3 ISR parameters
PDF_SET   7.3 ISR parameters
PDF_SET_1   7.3 ISR parameters
PDF_SET_2   7.3 ISR parameters
PDF_SET_VERSION   7.3 ISR parameters
PG_THREADS   7.1.18 Multi-threading
PHI_2   7.4.7 Fourth Generation
PHI_3   7.4.7 Fourth Generation
PHI_L2   7.4.7 Fourth Generation
PHI_L3   7.4.7 Fourth Generation
PROFILE_FUNCTION   7.11.3 PROFILE_FUNCTION
PROFILE_PARAMETERS   7.11.4 PROFILE_PARAMETERS
PSI_ITMAX   7.8.7 PSI_ITMAX
PSI_ITMIN   7.8.6 PSI_ITMIN
PSI_NMAX   7.8.5 PSI_NMAX
PT^2_0   7.12.1.6 Cluster decays - kinematics
PT_MAX   7.12.1.6 Cluster decays - kinematics
P_LASER_   7.2.1.1 Laser Backscattering
P_LASER_   7.2.1.1 Laser Backscattering
P_{QQ_1}/P_{QQ_0}   7.12.1.4 Cluster transition to hadrons - flavour part
P_{QS}/P_{QQ}   7.12.1.4 Cluster transition to hadrons - flavour part
P_{SS}/P_{QQ}   7.12.1.4 Cluster transition to hadrons - flavour part

Q
Q_0^2   7.14.2.4 Parameters for event generation
Q_as^2   7.12.1.6 Cluster decays - kinematics
Q_as^2   7.14.2.4 Parameters for event generation

R
RANDOM_SEED   7.1.6 RANDOM_SEED
REFERENCE_SCALE   7.11.5 REFERENCE_SCALE
RENORMALIZATION_SCALE_FACTOR   7.5.4.5 Simple scale variations
RESCALE_EXPONENT   7.11.6 RESCALE_EXPONENT
RescProb   7.14.2.4 Parameters for event generation
RescProb1   7.14.2.4 Parameters for event generation
RESOLVE_DECAYS   7.9.9 RESOLVE_DECAYS
RESULT_DIRECTORY   7.5.2 RESULT_DIRECTORY
RHO   7.4.1 Standard Model
RLIMIT_AS   7.1.11 RLIMIT_AS
RLIMIT_BY_CPU7.1.11 RLIMIT_AS
Root7.1.16 Event output formats
RUNDATA6. Input structure

S
SCALES7.5.4 SCALES
SCALE_MIN7.11.2 SCALE_MIN
SHERPA_CPP_PATH7.1.14 SHERPA_CPP_PATH
SHERPA_LDADD9. Customization
SHERPA_LIB_PATH7.1.15 SHERPA_LIB_PATH
SHOW_MODEL_SYNTAX7.4 Model parameters
SHOW_PDF_SETS7.3 ISR parameters
SHOW_VARIABLE_SYNTAX7.7.5 Universal selector
Shrimps_Mode7.14.2.2 Shrimps Mode
SIGMA_ND_FACTOR7.11.7 SIGMA_ND_FACTOR
SIN2THETAW7.4.1 Standard Model
SINGLET_SUPPRESSION7.12.1.3 Hadron multiplets
SLHA_INPUT7.4.2 Minimal Supersymmetric Standard Model
SOFT_COLLISIONS7.14.2.1 Generating minimum bias events
SOFT_MASS_SMEARING7.12.2 Hadron decays
SOFT_SPIN_CORRELATIONS7.12.2.5 Further remarks
SPECTRUM_FILE_17.2.1 Beam Spectra
SPECTRUM_FILE_27.2.1 Beam Spectra
SP_NLOCT7.5.4.6 Scale variations in parton showered and merged samples
STABLE[<id>]7.4 Model parameters
STABLE[<id>]7.9 Hard decays
STABLE[<id>]7.12.2 Hadron decays
STORE_DECAY_RESULTS7.9.4 STORE_DECAY_RESULTS
STRANGE_FRACTION7.12.1.4 Cluster transition to hadrons - flavour part

T
TAN(BETA)7.4.5 Two Higgs Doublet Model
THETA_L147.4.7 Fourth Generation
THETA_L247.4.7 Fourth Generation
THETA_L347.4.7 Fourth Generation
TIMEOUT7.1.10 TIMEOUT
TRANSITION_OFFSET7.12.1.4 Cluster transition to hadrons - flavour part
TUNE7.1.3 TUNE

U
UNITARIZATION_M7.4.4 Anomalous Gauge Couplings
UNITARIZATION_M37.4.4 Anomalous Gauge Couplings
UNITARIZATION_M47.4.4 Anomalous Gauge Couplings
UNITARIZATION_N7.4.4 Anomalous Gauge Couplings
UNITARIZATION_N37.4.4 Anomalous Gauge Couplings
UNITARIZATION_N47.4.4 Anomalous Gauge Couplings
UNITARIZATION_SCALE7.4.4 Anomalous Gauge Couplings
UNITARIZATION_SCALE37.4.4 Anomalous Gauge Couplings
UNITARIZATION_SCALE47.4.4 Anomalous Gauge Couplings
USE_PDF_ALPHAS7.4.1 Standard Model
USR_WGT_MODE8.7.7 Structure of ROOT NTuple Output

V
VEGAS7.8.3 VEGAS
VEV7.4.1 Standard Model

W
WIDTH[<id>]7.4 Model parameters
WIDTH[<id>]7.9.6 HDH_SET_WIDTHS
WIDTH[<id>]7.12.2 Hadron decays
WIDTH_SCHEME7.4.1 Standard Model

X
xi7.14.2.3 Parameters of the eikonals

Y
YFS_IR_CUTOFF7.13.1.3 YFS_IR_CUTOFF
YFS_MODE7.13.1.1 YFS_MODE
YFS_USE_ME7.13.1.2 YFS_USE_ME
YUKAWA[<id>]7.4 Model parameters
YUKAWA_MASSES7.5.8 YUKAWA_MASSES

This document was generated by Stefan on February 27, 2014 using texi2html 1.82.