
Sherpa 2.0.beta2 Manual


1. Introduction

Sherpa is a Monte Carlo event generator for the Simulation of High-Energy Reactions of PArticles in lepton-lepton, lepton-photon, photon-photon, lepton-hadron and hadron-hadron collisions. This document provides information to help users understand and apply Sherpa for their physics studies. The event generator is introduced, in broad terms, and the installation and running of the program are outlined. The various options and parameters specifying the program are compiled, and their meanings are explained. This document does not aim at giving a complete description of the physics content of Sherpa . To this end, the authors refer the reader to the original publication, [Gle08b] .


1.1 Introduction to Sherpa

Sherpa [Gle08b] is a Monte Carlo event generator that provides complete hadronic final states in simulations of high-energy particle collisions. The produced events may be passed into detector simulations used by the various experiments. The entire code has been written in C++, like its competitors Herwig++ [Bah08b] and Pythia 8 [Sjo07] .

Sherpa simulations can be achieved for lepton-lepton, lepton-photon, photon-photon, lepton-hadron and hadron-hadron collisions.

The list of physics processes that come with Sherpa covers particle production at tree level in the Standard Model and in models beyond the Standard Model: The complete set of Feynman rules for its Minimal Supersymmetric extension according to [Ros89], [Ros95] has been implemented, including general mixing matrices for inter-generational squark and slepton mixing. Among other interaction models the ADD model of Large Extra Dimensions has been made available, too [Gle03a]. Furthermore, anomalous gauge couplings [Hag86], a model with an extended Higgs sector [Ded08], and a version of the Two-Higgs Doublet Model are available. The Sherpa program owes this versatility to the inbuilt matrix-element generators, AMEGIC++ and Comix, and to its phase-space generator Phasic [Kra01], which automatically calculate and integrate tree-level amplitudes for the implemented models. This feature enables Sherpa to be used as a cross-section integrator and parton-level event generator as well. This aspect has been extensively tested, see e.g. [Gle03b], [Hag05].

As a second key feature of Sherpa the program provides an implementation of the merging approach of [Hoe09] . This algorithm yields improved descriptions of multijet production processes, which copiously appear at lepton-hadron colliders like HERA [Car09] , or hadron-hadron colliders like the Tevatron and the LHC, [Kra04] , [Kra05] ,[Gle05] ,[Hoe09a] . An older approach, implemented in previous versions of Sherpa and known as the CKKW technique [Cat01a] ,[Kra02] , has been compared in great detail in [Alw07] with other approaches, such as the MLM merging prescription [Man01] as implemented in Alpgen [Man02] , Madevent [Ste94] ,[Mal02a] , or Helac [Kan00] ,[Pap05] and the CKKW-L prescription [Lon01] ,[Lav05] of Ariadne [Lon92] .

This manual contains all information necessary to get started with Sherpa as quickly as possible. By reading it, users should be able to set up the program according to their needs for studying various physics aspects. Therefore, all available switches and options are listed. It is explained how to use them, how Sherpa can be run in different modes and how the results and output of Sherpa can be interpreted. For external code that can be linked, corresponding references are given and users are encouraged to cite them accordingly.

On the other hand, the physics of Sherpa and its underlying structure and coding principles are not detailed in this manual. For this, readers are encouraged to refer to the original work of the authors. Also, whenever justified, Sherpa users are kindly asked to cite Sherpa’s original publication [Gle08b]. Moreover, the authors strongly recommend studying the manuals and the many excellent publications by the authors of other event generators on different aspects of event generation and physics at collider experiments.

This manual is organized as follows: in Basic structure the modular structure intrinsic to Sherpa is introduced. Getting started contains information about and instructions for the installation of the package. There is also a description of the steps that are needed to run Sherpa and generate events. The Input structure is then discussed, and the ways in which Sherpa can be steered are explained. All parameters and options are discussed in Parameters. Advanced Tips and Tricks are detailed, and some options for Customization are outlined for those more familiar with Sherpa. There is also a short description of the different Examples provided with Sherpa.

It should be stressed that the construction of a Monte Carlo program requires a number of implicit assumptions, approximations and simplifications of complicated situations. Potential bugs and other shortcomings of the authors may also be included. The results of event generators, independent of their quality, should therefore always be verified and cross-checked with results obtained by the programs of other authors.


1.2 Basic structure

The construction of the Sherpa program has been pursued in a modular way. It fully reflects the paradigm of Monte Carlo event generation of factorizing the simulation into well defined phases. Accordingly, each module encapsulates a different aspect of event generation for high-energy particle reactions. It resides within its own namespace and is located in its own subdirectory of the same name. The main module called SHERPA steers the interplay of all modules – or phases – and the actual generation of the events. Altogether, the following modules are currently distributed with the Sherpa framework:

The actual executable of the Sherpa generator can be found in the subdirectory <prefix>/bin/ and is called Sherpa . To run the program, input files have to be provided in the current working directory or elsewhere by specifying the corresponding path, see Input structure. All output files are then written to this directory as well.


2. Getting started


2.1 Installation

Sherpa is distributed as a tarred and gzipped file named Sherpa-<version>.tar.gz, and can be unpacked in the current working directory with

 
 tar -zxf Sherpa-<version>.tar.gz

To guarantee successful installation, the following tools should have been made available on the system: make, autoconf, automake and libtool. Furthermore, a C++ and FORTRAN compiler must be provided. Compilation and installation proceed through the following commands

 
 ./configure

 make install

If not specified differently, the directory structure after installation is organized as follows

$(prefix)/bin

Sherpa executable and scripts

$(prefix)/include

headers for process library compilation

$(prefix)/lib

basic libraries

$(prefix)/share

PDFs, Decaydata, fallback run cards

The installation directory $(prefix) can be specified by using the ./configure --prefix=/path/to/installation/target option and defaults to the current working directory.

If Sherpa has to be moved to a different directory after the installation, one has to set the following environment variables for each run:
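
As a rough sketch, assuming the usual Sherpa run-time lookup variables (SHERPA_INCLUDE_PATH, SHERPA_SHARE_PATH, SHERPA_LIBRARY_PATH) together with the system’s LD_LIBRARY_PATH, and with <newprefix> as a placeholder for the new location, a bash setup could read:

  export SHERPA_INCLUDE_PATH=<newprefix>/include/SHERPA-MC
  export SHERPA_SHARE_PATH=<newprefix>/share/SHERPA-MC
  export SHERPA_LIBRARY_PATH=<newprefix>/lib/SHERPA-MC
  export LD_LIBRARY_PATH=$SHERPA_LIBRARY_PATH:$LD_LIBRARY_PATH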

Sherpa can be interfaced with various external packages, e.g. HepMC, for event output, or LHAPDF, for PDFs. For this to work, the user has to pass the appropriate commands to the configure step. This is achieved as shown below:

 
./configure --enable-hepmc2=/path/to/hepmc2 --enable-lhapdf=/path/to/lhapdf

Here, the paths have to point to the top level installation directories of the external packages, i.e. the ones containing the lib/, share/, ... subdirectories.

For a complete list of possible configuration options run ‘./configure --help’.

The Sherpa package has successfully been compiled, installed and tested on SuSE, RedHat / Scientific Linux and Debian / Ubuntu Linux systems using the GNU C++ compiler versions 3.2, 3.3, 3.4, 4.0, 4.1, 4.2, 4.3 and 4.4 as well as on Mac OS X 10 using the GNU C++ compiler version 4.0. In all cases the GNU FORTRAN compiler g77 or gfortran has been employed. Note that GCC version 2.96 is not supported.

If you have multiple compilers installed on your system, you can use shell environment variables to specify which of these are to be used. A list of the available variables is printed with

 
./configure --help

in the Sherpa top level directory; the relevant variables are listed in the last lines of the output. Depending on the shell you are using, you can set these variables e.g. with export (bash) or setenv (csh). Examples:

 
export CXX=g++-3.4
export CC=gcc-3.4
export CPP=cpp-3.4

MacOS Installation

Installation on MacOS is supported at least in all Sherpa versions > 1.1.2. Before that, there might have been problems on the newer MacOS versions or architectures (10.5, Intel). The following issues have come up on Mac installations before, so please be aware of them:


2.2 Running Sherpa

The Sherpa executable resides in the directory <prefix>/bin/ where <prefix> denotes the path to the Sherpa installation directory. The way a particular simulation will be accomplished is defined by several parameters, which can all be listed in a common file, or data card (Parameters can be alternatively specified on the command line; more details are given in Input structure). This steering file is called Run.dat and some example setups (i.e. Run.dat files) are distributed with the current version of Sherpa. They can be found in the directory <prefix>/share/SHERPA-MC/Examples/, and descriptions of some of their key features can be found in the section Examples.

Please note: It is not in general possible to reuse run cards from previous Sherpa versions. Often there are small changes in the parameter syntax of the run cards from one version to the next. These changes are documented in our manuals. In addition, always use the newer Hadron.dat and Decaydata directories (and reapply any changes which you might have applied to the old ones), see Hadron decays.

The very first step in running Sherpa is therefore to adjust all parameters to the needs of the desired simulation. The details for properly doing this are given in Parameters. In this section, the focus is on the main issues for a successful operation of Sherpa. This is illustrated by discussing and referring to the parameter settings that come in the run card ./Examples/Tevatron_WJets/Run.dat. This is a simple run card created to show the basics of how to operate Sherpa. It should be stressed that this run-card relies on many of Sherpa’s default settings, and, as such, the user should understand those settings before using it to look at physics. For more information on the settings and parameters in Sherpa, see Parameters, and for more examples see the Examples section.


2.2.1 Process selection and initialization

Central to any Monte Carlo simulation is the choice of the hard processes that initiate the events. These hard processes are described by matrix elements. In Sherpa, the selection of processes happens in the (processes) part of the steering file. Only a few 2->2 reactions have been hard-coded. They are available in the EXTRA_XS module. The more usual way to compute matrix elements is to employ one of Sherpa’s automatic tree-level generators, AMEGIC++ and Comix, see Basic structure. If no matrix-element generator is selected, using the ME_SIGNAL_GENERATOR tag, then Sherpa will use whichever generator is capable of calculating the process, checking EXTRA_XS first, then Comix and then AMEGIC++. Therefore, for some processes, several of the options are used. In this example, EXTRA_XS calculates the 2->2 part of the process, and Comix calculates the 2->3,4 parts.

To begin with the example, the Sherpa run has to be started by changing into the <prefix>/share/SHERPA-MC/Examples/Tevatron_WJets/ directory and executing

 
<prefix>/bin/Sherpa 

The user may also run from an arbitrary directory, employing <prefix>/bin/Sherpa PATH=<prefix>/share/SHERPA-MC/Examples/Tevatron_WJets. In the example, the keyword PATH is specified by an absolute path. It may also be specified relative to the current working directory. If it is not specified at all, the current working directory is understood.

For good book-keeping, it is highly recommended to reserve different subdirectories for different simulations as is demonstrated with the example setups.

If AMEGIC++ is used, Sherpa requires an initialization run, where libraries are written out, then the libraries must be compiled and linked by running a makelibs script in the working directory, and then Sherpa is run again for the actual cross section integrations and event generation. For an example of how to run Sherpa using AMEGIC++, see Running Sherpa with AMEGIC++.

If the Internal hard-coded cross sections or Comix are used, and AMEGIC++ is not, an initialization run is not needed, and Sherpa will calculate the cross sections and generate events during the first run.

As the cross sections are integrated, the integration over phase space is optimized to arrive at an efficient event generation. Subsequently events are generated if EVENTS was specified either at the command line or added to the Run.dat file in the (run) section.
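
For example, to generate 10000 events one may either pass the setting on the command line or add it to the (run) section of Run.dat:

  <prefix>/bin/Sherpa EVENTS=10000

  (run){
    EVENTS = 10000
  }(run)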

The generated events are not stored into a file by default; for details on how to store the events see Event output formats. Note that the computational effort to go through this procedure of generating, compiling and integrating the matrix elements of the hard processes depends on the complexity of the parton-level final states. For low multiplicities (2->2,3,4), it completes almost instantly.

Usually more than one generation run is wanted. As long as the parameters that affect the matrix-element integration are not changed, it is advantageous to store the cross sections obtained during the generation run for later use. This saves CPU time especially for large final-state multiplicities of the matrix elements. To store the integration results, a <result> directory has to be created in Tevatron_WJets (Alternatively, the command line option ‘-g’ can be invoked, see Command line options). Then utilizing an extended command line reading

 
<prefix>/bin/Sherpa RESULT_DIRECTORY=<result>/

a generation run can be started and the results of the integration will be stored in <result>, see RESULT_DIRECTORY. The next time this command line is used, Sherpa will look for the integration results in <result> and read them in. Of course, if corresponding parameters do change, the cross sections have to be re-evaluated for a valid new generation run. The new results have to be stored in a new directory or the <result> directory may be re-used once it has been emptied. Basically, most of the parameters listed in the (model), (me) and (selector) part of Run.dat determine the calculation of cross sections. Standard examples are changing the magnitude of couplings, renormalization or factorization scales, changing the PDF or centre-of-mass energy, or applying different cuts at the parton level. If unsure whether a re-integration is required, a simple test is to remove the RESULT_DIRECTORY option from the run command and check whether the new integration numbers (statistically) comply with the stored ones.

One more remark (or maybe warning) concerning the validity of the process libraries is in order here: it is absolutely mandatory to generate new library files whenever the physics model is altered, i.e. particles are added or removed, such that new diagrams may contribute or existing diagrams no longer contribute to the same final states. Also, when particle masses are switched on or off, new library files must be generated (masses may, however, be changed between non-zero values while keeping the same process libraries). Old library files cannot account for such changes, since once generated their functional structure is fixed. The best thing is to create a new and separate setup directory. Otherwise the Process and Result directories have to be erased:

 
rm -rf Process/     and     rm -rf Result/

In either case one has to start over with the whole initialization procedure to prepare for the generation of events again.


2.2.2 The example set-up: W+Jets at Tevatron

The setup (or the Run.dat file) provided in ./Examples/Tevatron_WJets/ can be considered as a standard example to illustrate the generation of fully hadronized events in Sherpa. Such events will include effects from parton showering, hadronization into primary hadrons and their subsequent decays into stable hadrons. Moreover, the example chosen here nicely demonstrates how Sherpa is used in the context of merging matrix elements and parton showers [Hoe09] . In addition to the aforementioned corrections, this simulation of inclusive W production (with the W decaying into electron and anti-electron-neutrino ) will then include higher-order jet corrections at the tree level. As a result the transverse-momentum distribution of the W boson as measured by the D0 and CDF collaborations at Tevatron Run I can be well described, see also [Kra04] ,[Kra05] ,[Gle05] .

Before event generation, the initialization procedure as described in Process selection and initialization has to be completed. The matrix-element processes included in the setup are the following:

 
  proton anti-proton -> parton parton -> electron anti-electron-neutrino + up to two partons

In the (processes) part of the steering file this translates into

  Process 93 93 -> 11 -12 93{2}
  Order_EW 2;
  CKKW sqr(30/E_CMS)
  End process;

The physics model for these processes is the Standard Model (‘SM’), which is the default setting of the parameter MODEL, in the (model) part of Run.dat. Fixing the order of electroweak couplings to ‘2’, matrix elements of all partonic subprocesses for W production without any and with up to two extra QCD parton emissions will be generated. Proton–antiproton collisions are considered at beam energies of 980 GeV; under the (beam) part of the Run.dat file, one therefore has BEAM_1=2212, BEAM_2=-2212 and BEAM_ENERGY_{1,2}=980.0. The default PDF used by Sherpa is CTEQ66. Model parameters and couplings can be set in the Run.dat section (model), and the way couplings are treated can be defined under the (me) category. The QCD radiation matrix elements have to be regularized to obtain meaningful cross sections. This is achieved by specifying ‘CKKW sqr(30/E_CMS)’ in the (processes) part of Run.dat. Simultaneously, this tag initiates the ME-PS merging procedure. To eventually obtain fully hadronized events, the FRAGMENTATION tag has been left on its default setting ‘Ahadic’, which will run Sherpa’s cluster hadronization, and the tag DECAYMODEL has its default setting ‘Hadrons’, which will run Sherpa’s hadron decays. Additionally, corrections owing to photon emissions are taken into account.
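
Collected in run-card form, the beam settings just described would correspond to a (beam) section roughly like the following (a sketch only; the run card shipped with the example may arrange these settings differently):

  (beam){
    BEAM_1 = 2212
    BEAM_ENERGY_1 = 980.
    BEAM_2 = -2212
    BEAM_ENERGY_2 = 980.
  }(beam)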

To run this example set-up, use the

 
<prefix>/bin/Sherpa 

command as described in Running Sherpa. Sherpa displays some output as it runs. At the start of the run, Sherpa initializes the relevant model, and displays a table of particles, with their PDG codes and some properties. It also displays the Particle containers, and their contents. The other relevant parts of Sherpa are initialized, including the matrix element generator(s). The Sherpa output will look like:

 
Initialized the beams Monochromatic*Monochromatic
CTEQ6_Fortran_Interface::CTEQ6_Fortran_Interface(): Init member 400.
PDF set 'cteq6.6m' loaded from 'libCTEQ6Sherpa' for beam 1 (P+).
CTEQ6_Fortran_Interface::CTEQ6_Fortran_Interface(): Init member 400.
PDF set 'cteq6.6m' loaded from 'libCTEQ6Sherpa' for beam 2 (P+b).
Initialized the ISR: (SF)*(SF)
CTEQ6_Fortran_Interface::CTEQ6_Fortran_Interface(): Init member 400.
CTEQ6_Fortran_Interface::CTEQ6_Fortran_Interface(): Init member 400.
Initialize the Standard Model from  / Model.dat
Running_AlphaS::Running_AlphaS() {
  Setting \alpha_s according to PDF
  perturbative order 1
  \alpha_s(M_Z) = 0.118
}
Initialized the Beam_Remnant_Handler.
Initialized the Shower_Handler.
Initialized the Fragmentation_Handler.
+----------------------------------+
|                                  |
|      CCC  OOO  M   M I X   X     |
|     C    O   O MM MM I  X X      |
|     C    O   O M M M I   X       |
|     C    O   O M   M I  X X      |
|      CCC  OOO  M   M I X   X     |
|                                  |
+==================================+
|  Color dressed  Matrix Elements  |
|     http://comix.freacafe.de     |
|   please cite  JHEP12(2008)039   |
+----------------------------------+
Matrix_Element_Handler::BuildProcesses(): Looking for processes . done ( 23252 kB, 0s ).
Matrix_Element_Handler::InitializeProcesses(): Performing tests . done ( 23252 kB, 0s ).
Initialized the Matrix_Element_Handler for the hard processes.
Initialized the Soft_Photon_Handler.

Then Sherpa will start to integrate the cross sections. The output will look like:

 
Process_Group::CalculateTotalXSec(): Calculate xs for '2_2__j__j__e-__nu_eb' (Comix)
Starting the calculation. Lean back and enjoy ... .
1019.05 pb +- ( 52.4212 pb = 5.14411 % ) 5000 ( 5013 -> 99.7 % )
full optimization:  ( 0s elapsed / 15s left )
...

The first line here displays the process which is being calculated. In this example, the integration is for the 2->2 process, parton, parton -> electron, neutrino. The matrix element generator used is displayed after the process. As the integration progresses, summary lines are displayed, like the one shown above. The current estimate of the cross section is displayed, along with its statistical error estimate. The number of phase space points calculated is displayed after this (‘5000’ in this example), and the efficiency is displayed after that. On the line below, the time elapsed is shown, and an estimate of the total time till the optimization is complete.

When the integration is complete, the output will look like:

 
...
1098.97 pb +- ( 0.373022 pb = 0.0339428 % ) 300000 ( 300020 -> 100 % )
integration time:  ( 13s elapsed / 0s left )
1098.86 pb +- ( 0.366442 pb = 0.0333475 % ) 310000 ( 310020 -> 100 % )
integration time:  ( 13s elapsed / 0s left )
2_2__j__j__e-__nu_eb : 1098.86 pb +- ( 0.366442 pb = 0.0333475 % )  exp. eff: 27.3833 %
  reduce max for 2_2__j__j__e-__nu_eb to 0.518502 ( eps = 0.001 )

with the final cross section result and its statistical error displayed.

Sherpa will then move on to integrate the other processes specified in the run card.

When the integration is complete, the event generation will start. As the events are being generated, Sherpa will display a summary line stating how many events have been generated, and an estimate of how long it will take. When the event generation is complete, Sherpa’s output looks like:

 
...
  Event 10000 ( 122 s total )
In Event_Handler::Finish : Summarizing the run may take some time.
+-----------------------------------------------------+
|                                                     |
|  Total XS is 1139.05 pb +- ( 11.1111 pb = 0.97 % )  |
|                                                     |
+-----------------------------------------------------+

A summary of the number of events generated is displayed, with the total cross section for the processes.

The generated events are not stored into a file by default; for details on how to store the events see Event output formats.


2.2.3 Parton-level event generation with Sherpa

Sherpa has its own tree-level matrix-element generators called AMEGIC++ and Comix. Furthermore, with the module PHASIC++, sophisticated and robust tools for phase-space integration are provided. Therefore Sherpa can also be used as a cross-section integrator. Because of the way Monte Carlo integration is accomplished, this immediately allows for parton-level event generation. Taking the Tevatron_WJets setup as an example, users only have to modify a few settings in Run.dat to arrive at a parton-level generation for the process gluon down-quark to electron antineutrino and up-quark. When, for instance, the options “EVENTS=0 OUTPUT=2” are added to the command line, a pure cross-section integration for that process is obtained, with the results and integration errors written to the screen.

For the example, the (processes) section alters to

  Process 21 1 -> 11 -12 2
  Order_EW 2
  End process

and, under the assumption of starting afresh, the initialization procedure has to be followed as before. Picking the same collider environment as in the previous example, there are only two more changes before the Run.dat file is ready for the calculation of the hadronic cross section of the process g d to e- nu_e-bar u at Tevatron Run I and subsequent parton-level event generation with Sherpa. These changes read SHOWER_GENERATOR=None, to switch off parton showering, and FRAGMENTATION=Off, to switch off hadronization.
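
Putting the pieces together, a minimal sketch of such a parton-level run, with all changed settings simply passed on the command line (they may equally be placed in the respective sections of Run.dat), reads:

  <prefix>/bin/Sherpa EVENTS=0 OUTPUT=2 SHOWER_GENERATOR=None FRAGMENTATION=Off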


2.2.4 Running Sherpa with AMEGIC++

When Sherpa is run using the matrix element generator AMEGIC++, it is necessary to run it twice. During the first run (the initialization run) Feynman diagrams for the hard processes are constructed and translated into helicity amplitudes. Furthermore suitable phase-space mappings are produced. The amplitudes and corresponding integration channels are written to disk as C++ source code, placed in a subdirectory of LHC_4thGen, which is called Process. The initialization run is started using the standard Sherpa executable, as described in Running Sherpa. The relevant command is

 
<prefix>/bin/Sherpa 

The initialization run stops with the message "New libraries created. Please compile.", which is nothing but the request to carry out the compilation and linking procedure for the generated matrix-element libraries. The makelibs script, provided for this purpose and created in the working directory, must be invoked by the user (see ./makelibs -h for help):

 
./makelibs

Afterwards Sherpa can be restarted using the same command as before. In this run (the generation run) the cross sections of the hard processes are evaluated. Simultaneously the integration over phase space is optimized to arrive at an efficient event generation.

Using AMEGIC++ for MEPS@NLO

When using AMEGIC++ for an MEPS@NLO setup, e.g. the MEPS@NLO setup for pp->W+jets, the code for each multiplicity is created sequentially, one NLO multiplicity at a time. Thus, when merging e.g. W, W+jet and W+2jet at NLO, AMEGIC++ will first create the code for W production and ask for it to be compiled, then create the code for W+jet production and again ask for it to be compiled, and so on. When all code is created and compiled, integration and event generation proceed as usual.


3. Cross section

To determine the total cross section, in particular in the context of running CKKW merging with Sherpa, the final output of the event generation run should be used, e.g.

+-----------------------------------------------------+
|                                                     |
|  Total XS is 1612.17 pb +- ( 8.48908 pb = 0.52 % )  |
|                                                     |
+-----------------------------------------------------+

Note that the Monte Carlo error quoted for the total cross section is determined during event generation. It might therefore differ substantially from the errors quoted during the integration step.

In contrast to plain leading order results, Sherpa’s total cross section is composed of values from various leading order processes, namely those which are combined by applying the ME-PS merging, see ME-PS merging. In this context, it is important to note that

The exclusive higher-order tree-level cross sections determined during the integration step are meaningless by themselves; only the inclusive cross section printed at the end of the event generation run is to be used.

In principle, this value has the same formal accuracy as a leading order result, but it might still differ by a significant amount. Depending on jet definitions, process etc., the merged cross section may be either larger or smaller than the leading order cross section.

Concerning a comparison with NLO calculations: It is known that for, e.g., inclusive Z production the NLO-LO K-factor is larger than one. In some setups the Sherpa cross section is smaller than the LO one, and therefore further from the NLO. Therefore, the Sherpa total cross section should not be thought of as an “improved leading order result”, which would suggest that it is always closer to the NLO than the LO cross section.

Sherpa total cross sections have leading order accuracy.

Broadly speaking, Sherpa’s ME-PS merging is adequate for capturing the information from (resummed) logarithmic corrections to the leading order (as is the parton shower). On the contrary, NLO cross sections are typically dominated by finite terms, as they are often quite inclusive and there are no large logarithms in this case. Sherpa’s merging algorithm has no way to calculate these finite terms, and this is why Sherpa’s cross section is not a better approximation to the NLO cross section. On the other hand, shape observables (especially jet transverse momenta and the like) are typically dominated by logarithmic corrections. For such observables, Sherpa can be expected to perform reasonably well.


4. Command line options

The available command line options for Sherpa.

-f <file>

Read input from file ‘<file>’.

-p <path>

Read input file from path ‘<path>’.

-L <path>

Set Sherpa library path to ‘<path>’, see SHERPA_CPP_PATH.

-e <events>

Set number of events to generate ‘<events>’, see EVENTS.

-r <results>

Set the result directory to ‘<results>’, see RESULT_DIRECTORY.

-R <seed>

Set the seed of the random number generator to ‘<seed>’, see RANDOM_SEED.

-m <generators>

Set the matrix element generator list to ‘<generators>’, see ME_SIGNAL_GENERATOR.

-w <mode>

Set the event generation mode to ‘<mode>’, see EVENT_GENERATION_MODE.

-s <generator>

Set the parton shower generator to ‘<generator>’, see SHOWER_GENERATOR.

-F <module>

Set the fragmentation module to ‘<module>’, see Fragmentation.

-D <module>

Set the hadron decay module to ‘<module>’, see Hadron decays.

-a <analyses>

Set the analysis handler list to ‘<analyses>’, see ANALYSIS.

-A <path>

Set the analysis output path to ‘<path>’, see ANALYSIS_OUTPUT.

-O <level>

Set general output level ‘<level>’, see OUTPUT.

-o <level>

Set output level for event generation ‘<level>’, see OUTPUT.

-l <logfile>

Set log file name ‘<logfile>’, see LOG_FILE.

-j <threads>

Set number of threads ‘<threads>’, see Multi-threading.

-g

Do not create result directory, see RESULT_DIRECTORY.

-b

Switch to non-batch mode, see BATCH_MODE.

-V

Print extended version information at startup.

-v, --version

Print versioning information.

-h, --help

Print a help message.

PARAMETER=VALUE

Set the value of a parameter, see Parameters.

TAG:=VALUE

Set the value of a tag, see Tags.


5. Input structure

A Sherpa setup is steered by various parameters, associated with the different components of event generation.

These have to be specified in a run-card which by default is named “Run.dat” in the current working directory. If you want to use a different setup directory for your Sherpa run, you have to specify it on the command line as ‘-p <dir>’ or ‘PATH=<dir>’. To read parameters from a run-card with a different name, you may specify ‘-f <file>’ or ‘RUNDATA=<file>’.

Sherpa’s parameters are grouped according to the different aspects of event generation, e.g. the beam parameters in the group ‘(beam)’ and the fragmentation parameters in the group ‘(fragmentation)’. In the run-card this looks like:

  (beam){
    BEAM_ENERGY_1 = 7000.
    ...
  }(beam)

Each of these groups is described in detail in another chapter of this manual, see Parameters.

If such a section or file does not exist in the setup directory, a Sherpa-wide fallback mechanism is employed, searching the file in various locations in the following order (where $SHERPA_DAT_PATH is an optionally set environment variable):

All parameters can be overwritten on the command line, i.e. command-line input has the highest priority. The syntax is

  <prefix>/bin/Sherpa  KEYWORD1=value1 KEYWORD2=value2 ...

To change, e.g., the default number of events, the corresponding command line reads

  <prefix>/bin/Sherpa  EVENTS=10000

Throughout Sherpa, particles are identified by the particle codes proposed by the PDG. These codes and the particle properties will be listed during each run with ‘OUTPUT=2’ for the elementary particles and ‘OUTPUT=4’ for the hadrons. In both cases, antiparticles are characterized by a minus sign in front of their code, e.g. a mu- has code ‘13’, while a mu+ has ‘-13’.

All quantities have to be specified in units of GeV and millimeter. The same units apply to all numbers in the event output (momenta, vertex positions). Scattering cross sections are quoted in picobarn in the output.

There are a few extra features for an easier handling of the parameter file(s), namely global tag replacement, see Tags, and algebra interpretation, see Interpreter.


5.1 Interpreter

Sherpa has a built-in interpreter for algebraic expressions, like ‘cos(5/180*M_PI)’. This interpreter is employed when reading integer and floating point numbers from input files, such that certain parameters can be written in a more convenient fashion. For example it is possible to specify the factorisation scale as ‘sqr(91.188)’.
Several predefined tags exist to simplify the handling:

M_PI

Ludolph’s Number to a precision of 12 digits.

M_C

The speed of light in the vacuum.

E_CMS

The total centre of mass energy of the collision.

The expression syntax is in general C-like, except for the extra function ‘sqr’, which gives the square of its argument. Operator precedence is the same as in C. The interpreter can handle functions with an arbitrary list of parameters, such as ‘min’ and ‘max’.
The interpreter can be employed to construct arbitrary variables from four momenta, like e.g. in the context of a parton level selector, see Selectors. The corresponding functions are

Mass(v)

The invariant mass of v in GeV.

Abs2(v)

The invariant mass squared of v in GeV^2.

PPerp(v)

The transverse momentum of v in GeV.

PPerp2(v)

The transverse momentum squared of v in GeV^2.

MPerp(v)

The transverse mass of v in GeV.

MPerp2(v)

The transverse mass squared of v in GeV^2.

Theta(v)

The polar angle of v in radians.

Eta(v)

The pseudorapidity of v.

Phi(v)

The azimuthal angle of v in radians.

Comp(v,i)

The i’th component of the vector v. i=0 is the energy/time component, i=1, 2, and 3 are the x, y, and z components.

PPerpR(v1,v2)

The relative transverse momentum between v1 and v2 in GeV.

ThetaR(v1,v2)

The relative angle between v1 and v2 in radians.

DEta(v1,v2)

The pseudo-rapidity difference between v1 and v2.

DY(v1,v2)

The rapidity difference between v1 and v2.

DPhi(v1,v2)

The azimuthal angle difference between v1 and v2 in radians.


5.2 Tags

Tag replacement in Sherpa is performed through the data reading routines, which means that it can be performed for virtually all inputs. Specifying a tag on the command line using the syntax ‘<Tag>:=<Value>’ will replace every occurrence of ‘<Tag>’ in all files during read-in. An example tag definition could read

  <prefix>/bin/Sherpa QCUT:=20 NJET:=3

and then be used in the (me) and (processes) sections like

  (me){
    RESULT_DIRECTORY = Result_QCUT/
  }(me)
  (processes){
    Process 93 93 -> 11 -11 93{NJET}
    Order_EW 2;
    CKKW sqr(QCUT/E_CMS)
    End process;
  }(processes)

6. Parameters

A Sherpa setup is steered by various parameters, associated with the different components of event generation. These are set in Sherpa’s run-card, see Input structure for more details. Tag replacements may be performed in all inputs, see Tags.


6.1 Run Parameters

The following parameters describe general run information. They may be set in the (run) section of the run-card, see Input structure.


6.1.1 EVENTS

This parameter specifies the number of events to be generated.
It can alternatively be set on the command line through option ‘-e’, see Command line options.


6.1.2 OUTPUT

This parameter specifies the output level (verbosity) of the program.
It can alternatively be set on the command line through option ‘-O’, see Command line options. A different output level can be specified for the event generation step through ‘EVT_OUTPUT’ or command line option ‘-o’, see Command line options

The value can be any sum of the following:

E.g. OUTPUT=3 would display information, events and errors.


6.1.3 LOG_FILE

This parameter specifies the log file. If set, the standard output from Sherpa is written to the specified file, but output from child processes is not redirected. This option is particularly useful to produce clean log files when running the code in MPI mode, see MPI parallelization. A file name can alternatively be specified on the command line through option ‘-l’, see Command line options.


6.1.4 RANDOM_SEED

SHERPA uses a random-number generator as described in [Florida State University Report FSU-SCRI-87-50]. The two independent integer-valued seeds are specified by the option “RANDOM_SEED=A B”. The seeds A and B may range from 0 to 31328 and from 0 to 30081, respectively. They can also directly be set using “RANDOM_SEED1=A” and “RANDOM_SEED2=B”. If RANDOM_SEED is not specified at all or only by one integer number, the old random-number generator (SHERPA 1.0.6 and older) will be used. This value can also be set using the command line option ‘-R’, see Command line options.
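
As an illustration, a two-seed setting in the (run) section might read (the numbers themselves are arbitrary choices within the allowed ranges):

  (run){
    RANDOM_SEED = 12345 23456
  }(run)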


6.1.5 ANALYSIS

Analysis routines can be switched on or off by setting the ANALYSIS flag. The default is no analysis, corresponding to option ‘0’. This parameter can also be specified on the command line using option ‘-a’, see Command line options.

The following analysis handlers are currently available

Internal

Sherpa’s internal analysis handler.
To use this option, the package must be configured with option ‘--enable-analysis’.
An output directory can be specified using ANALYSIS_OUTPUT.

Rivet

The Rivet package, see Rivet Website.
To enable it, Rivet and HepMC have to be installed and Sherpa must be configured as described in Rivet analyses.

HZTool

The HZTool package, see HZTool Website.
To enable it, HZTool and CERNLIB have to be installed and Sherpa must be configured as described in HZTool analyses.

Multiple options can be combined using a comma, e.g. ‘ANALYSIS=Internal,Rivet’.


6.1.6 ANALYSIS_OUTPUT

Name of the directory for histogram files when using the internal analysis and name of the Aida file when using Rivet, see ANALYSIS. The directory / file will be created w.r.t. the working directory. The default value is ‘Analysis/’. This parameter can also be specified on the command line using option ‘-A’, see Command line options.


6.1.7 TIMEOUT

A run time limitation can be given in user CPU seconds through TIMEOUT. This option is of some relevance when running SHERPA on a batch system. Since in many cases jobs are just terminated, this allows one to interrupt a run, store all relevant information and restart it without any loss. This is particularly useful when carrying out long integrations. Alternatively, setting the TIMEOUT variable to -1, which is the default setting, translates into having no run time limitation at all. The unit is seconds.


6.1.8 BATCH_MODE

Whether or not to run Sherpa in batch mode. The default is ‘1’, meaning Sherpa does not attempt to save runtime information when catching a signal or an exception. On the contrary, if option ‘0’ is used, Sherpa will store potential integration information and analysis results, once the run is terminated abnormally. All possible settings are:

The settings are additive such that multiple settings can be employed at the same time.

Note that when running the code on a cluster or in a grid environment, BATCH_MODE should always contain setting 1 (i.e. BATCH_MODE=[1|3|5|7]).

The command line option ‘-b’ should therefore not be used in this case, see Command line options.


6.1.9 NUM_ACCURACY

The targeted numerical accuracy can be specified through NUM_ACCURACY, e.g. for comparing two numbers. This might have to be reduced if gauge tests fail for numerical reasons.


6.1.10 SHERPA_CPP_PATH

The path in which Sherpa will eventually store dynamically created C++ source code. If not specified otherwise, sets ‘SHERPA_LIB_PATH’ to ‘$SHERPA_CPP_PATH/Process/lib’. This value can also be set using the command line option ‘-L’, see Command line options.


6.1.11 SHERPA_LIB_PATH

The path in which Sherpa looks for dynamically linked libraries from previously created C++ source code, cf. SHERPA_CPP_PATH.


6.1.12 Event output formats

Sherpa provides the possibility to output events in various formats, e.g. the HepEVT common block structure or the HepMC format. The authors of Sherpa assume that the user is sufficiently acquainted with these formats when selecting them.

If the events are to be written to file, the parameter ‘EVENT_OUTPUT’ must be specified together with a file name. An example would be EVENT_OUTPUT=HepMC_GenEvent[MyFile], where MyFile stands for the desired file base name. The following formats are currently available:

HepMC_GenEvent

Generates output in HepMC::IO_GenEvent format. The HepMC::GenEvent::m_weights weight vector stores the following items: [0] event weight, [1] combined matrix element and phase space weight (missing only PDF information, thus directly suitable for PDF reweighting), [2] event weight normalisation (in case of unweighted events event weights of ~ +/-1 can be obtained by (event weight)/(event weight normalisation)), and [3] number of trials. The total cross section of the simulated event sample can be computed as the sum of event weights divided by the sum of the number of trials. This value must agree with the total cross section quoted by Sherpa at the end of the event generation run, and it can serve as a cross-check on the consistency of the HepMC event file.

HepMC_Short

Generates output in HepMC::IO_GenEvent format, however, only incoming beams and outgoing particles are stored. Intermediate and decayed particles are not listed. The event weights stored are the same as above.

Delphes_GenEvent

Generates output in Root format, which can be passed to Delphes for analyses. Input events are taken from the HepMC interface. Storage space can be reduced by up to 50% compared to gzip compressed HepMC. This output format is available only if Sherpa was configured and installed with options ‘--enable-root’ and ‘--enable-delphes=/path/to/delphes’.

Delphes_Short

Generates output in Root format, which can be passed to Delphes for analyses. Only incoming beams and outgoing particles are stored.

PGS

Generates output in StdHEP format, which can be passed to PGS for analyses. This output format is available only if Sherpa was configured and installed with options ‘--enable-hepevtsize=4000’ and ‘--enable-pgs=/path/to/pgs’. Please refer to the PGS documentation for how to pass StdHEP event files on to PGS. If you are using the LHC olympics executable, you may run ‘./olympics --stdhep events.lhe <other options>’.

PGS_Weighted

Generates output in StdHEP format, which can be passed to PGS for analyses. Event weights in the HEPEV4 common block are stored in the event file.

HEPEVT

Generates output in HepEvt format.

LHEF

Generates output in Les Houches Event File format. This output format is intended for output of matrix element configurations only. Since the format requires PDF information to be written out in the outdated PDFLIB/LHAGLUE enumeration format, this is only available automatically if LHAPDF is used; otherwise the identification numbers have to be given explicitly via LHEF_PDF_NUMBER (LHEF_PDF_NUMBER_1 and LHEF_PDF_NUMBER_2 if the two beams carry different structure functions). This format currently outputs matrix element information only; no information about the large-Nc colour flow is given, as the LHEF output format is not suited to communicate enough information for meaningful parton showering on top of multiparton final states.

Root

Generates output in ROOT ntuple format for NLO event generation only. For details on the ntuple format, see Structure of ROOT NTuple Output. This output option is only available if Sherpa was linked to ROOT during installation by using the configure option --enable-root=/path/to/root.

The output can be further customized using the following options:

FILE_SIZE

Number of events per file (default: 1000).

NTUPLE_SIZE

File size per NTuple file (default: unlimited).

EVT_FILE_PATH

Directory where the files will be stored.

OUTPUT_PRECISION

Steers the precision of all numbers written to file.
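
As a hypothetical sketch, such a customized event output in the (run) section could look as follows (the file name, directory and file size are illustrative values only):

  (run){
    EVENT_OUTPUT = HepMC_GenEvent[MyFile]
    EVT_FILE_PATH = MyEvents/
    FILE_SIZE = 5000
  }(run)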

To write events directly to gzipped files instead of plain text, the option ‘--enable-gzip’ has to be specified during the installation.


6.1.13 MPI parallelization

MPI parallelization in Sherpa can be enabled using the configuration option ‘--enable-mpi’. Sherpa supports OpenMPI and MPICH2. For detailed instructions on how to run a parallel program, please refer to the documentation of your local cluster resources or the many excellent introductions on the internet. MPI parallelization is mainly intended to speed up the integration process, as event generation can be parallelized trivially by starting multiple instances of Sherpa with different random seeds, cf. RANDOM_SEED. However, both the internal analysis module and the Root NTuple writeout can be used with MPI. Note that these require substantial data transfer. We recommend using them in MPI mode only if your local cluster has sufficient ethernet bandwidth.

When compiled with MPI support, Sherpa implements an automatic load balancing to make optimal use of potentially different types of cluster nodes. If you do not wish Sherpa to take control, this option can be disabled by setting ‘PSI_ADJUST_POINTS=0’.
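
How a parallel run is actually launched depends on the local MPI installation; with an OpenMPI-style launcher, for instance, a 16-process run with a clean per-run log file might be started roughly as:

  mpirun -n 16 <prefix>/bin/Sherpa -l sherpa.log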


6.1.14 Multi-threading

Multi-threaded integration in Sherpa can be enabled using the configuration option ‘--enable-multithread’. Subsequently the computation of amplitudes for large groups of processes is split into a number of threads which is limited from above by the parameter ‘PG_THREADS’. This parameter can also be specified using the command line option ‘-j’, see Command line options. Additionally, matrix-element calculation and phase-space evaluation for a single process with Comix can be distributed to different threads according to [Gle08] . The number of threads is then specified using the parameters ‘COMIX_ME_THREADS’ and ‘COMIX_PS_THREADS’, respectively.
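
For example, to allow up to four threads for these computations Sherpa could be started as follows (equivalently, the parameter PG_THREADS=4 could be set in the run card):

  <prefix>/bin/Sherpa -j 4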


6.2 Beam Parameters

The setup of the colliding beams is covered by the (beam) section of the steering file or the beam data file Beam.dat, respectively, see Input structure. The mandatory settings to be made are the beam particles, BEAM_1 and BEAM_2, and their energies, BEAM_ENERGY_1 and BEAM_ENERGY_2.

More options related to beamstrahlung and intrinsic transverse momentum can be found in the following subsections.


6.2.1 Beam Spectra

If desired, you can also specify spectra for beamstrahlung through BEAM_SPECTRUM_1 and BEAM_SPECTRUM_2; a short illustration is given at the end of this subsection. The possible values are:

Monochromatic

The beam energy is unaltered and the beam particles remain unchanged. That is the default and corresponds to ordinary hadron-hadron or lepton-lepton collisions.

Laser_Backscattering

This can be used to describe the backscattering of a laser beam off initial leptons. The energy distribution of the emerging photon beams is modelled by the CompAZ parametrization, see [Zar02]. Note that this parametrization is valid only for the proposed TESLA photon collider, as various assumptions about the laser parameters and the initial lepton beam energy have been made.

Simple_Compton

This corresponds to a simple light backscattering off the initial lepton beam and produces initial-state photons with a corresponding energy spectrum.

EPA

This enables the equivalent photon approximation for colliding protons, see [Arc08] . The resulting beam particles are photons that follow a dipole form factor parametrization, cf. [Bud74] . The authors would like to thank T. Pierzchala for his help in implementing and testing the corresponding code.

Spectrum_Reader

A user defined spectrum is used to describe the energy spectrum of the assumed new beam particles. The name of the corresponding spectrum file needs to be given through the keywords SPECTRUM_FILE_1 and SPECTRUM_FILE_2.

The BEAM_SMIN and BEAM_SMAX parameters may be used to specify the minimum/maximum fraction of cms energy squared after Beamstrahlung. The reference value is the total centre of mass energy squared of the collision, not the centre of mass energy after eventual Beamstrahlung.
The parameter can be specified using the internal interpreter, see Interpreter, e.g. as ‘BEAM_SMIN sqr(20/E_CMS)’.
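
As a sketch of how such a spectrum is enabled, a laser-backscattering setup off lepton beams could be configured along the following lines; the beam particles and energies are illustrative placeholders, and the bunch particles then have to be set to photons in the (isr) section, see ISR Parameters:

  (beam){
    BEAM_1 = 11
    BEAM_ENERGY_1 = 250.
    BEAM_2 = -11
    BEAM_ENERGY_2 = 250.
    BEAM_SPECTRUM_1 = Laser_Backscattering
    BEAM_SPECTRUM_2 = Laser_Backscattering
    BEAM_SMIN = sqr(20/E_CMS)
  }(beam)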


6.2.2 Intrinsic Transverse Momentum

K_PERP_MEAN_1

This parameter specifies the mean intrinsic transverse momentum for the first (left) beam in case of hadronic beams, such as protons.
The default value for protons is 0.8 GeV.

K_PERP_MEAN_2

This parameter specifies the mean intrinsic transverse momentum for the second (right) beam in case of hadronic beams, such as protons.
The default value for protons is 0.8 GeV.

K_PERP_SIGMA_1

This parameter specifies the width of the Gaussian distribution of intrinsic transverse momentum for the first (left) beam in case of hadronic beams, such as protons.
The default value for protons is 0.8 GeV.

K_PERP_SIGMA_2

This parameter specifies the width of the Gaussian distribution of intrinsic transverse momentum for the second (right) beam in case of hadronic beams, such as protons.
The default value for protons is 0.8 GeV.

If the option ‘BEAM_REMNANTS=0’ is specified, pure parton-level events are simulated, i.e. no beam remnants are generated. Accordingly, partons entering the hard scattering process do not acquire primordial transverse momentum.


6.3 ISR Parameters

The following parameters are used to steer the setup of beam substructure and initial state radiation (ISR). They may be set in the (isr) section of the run-card, see Input structure.

BUNCH_1/BUNCH_2

Specify the PDG ID of the first (left) and second (right) bunch particle, i.e. the particle after eventual Beamstrahlung specified through the beam parameters, see Beam Parameters. By default these are taken to be identical to the parameters BEAM_1/BEAM_2, assuming the default beam spectrum is Monochromatic. In case the Simple Compton or Laser Backscattering spectra are enabled, the bunch particles would have to be set to 22, the PDG code of the photon.

ISR_SMIN/ISR_SMAX

These parameters specify the minimum and maximum fraction of cms energy squared after ISR. The reference value is the total centre of mass energy squared of the collision, not the centre of mass energy after eventual Beamstrahlung.
The parameter can be specified using the internal interpreter, see Interpreter, e.g. as ‘ISR_SMIN=sqr(20/E_CMS)’.

Sherpa provides access to a variety of structure functions. They can be configured with the following parameters.

PDF_LIBRARY

Switches between different interfaces to PDFs. If the two colliding beams are of different type, e.g. protons and electrons or photons and electrons, it is possible to specify two different PDF libraries using ‘PDF_LIBRARY_1’ and ‘PDF_LIBRARY_2’. The following options are distributed with Sherpa:

LHAPDFSherpa

Use PDFs from LHAPDF [Wha05]. The interface is only available if Sherpa has been compiled with support for LHAPDF, see Installation.

CTEQ6Sherpa

Built-in library for some PDF sets from the CTEQ collaboration, cf. [Nad08] . This is the default, if Sherpa has not been compiled with LHAPDF support.

MSTW08Sherpa

Built-in library for PDF sets from the MSTW group, cf. [Mar09a] .

MRST04QEDSherpa

Built-in library for photon PDF sets from the MRST group, cf. [Mar04] .

MRST01LOSherpa

Built-in library for the 2001 leading-order PDF set from the MRST group, cf. [Mar01] .

MRST99Sherpa

Built-in library for the 1999 PDF sets from the MRST group, cf. [Mar99] .

GRVSherpa

Built-in library for the GRV photon PDF [Glu91a] , [Glu91]

PDFESherpa

Built-in library for the electron structure function. The perturbative order of the fine structure constant can be set using the parameter ISR_E_ORDER (default: 1). The switch ISR_E_SCHEME allows to set the scheme of respecting non-leading terms. Possible options are 0 ("mixed choice"), 1 ("eta choice"), or 2 ("beta choice", default).


Furthermore it is simple to build an external interface to an arbitrary PDF and load that dynamically in the Sherpa run. See External PDF for instructions.

PDF_SET

Specifies the PDF set for hadronic bunch particles. All sets available in the chosen PDF_LIBRARY can be listed by running Sherpa with the parameter SHOW_PDF_SETS=1, e.g.:

  Sherpa PDF_LIBRARY=CTEQ6Sherpa SHOW_PDF_SETS=1

If the two colliding beams are of different type, e.g. protons and electrons or photons and electrons, it is possible to specify two different PDF sets using ‘PDF_SET_1’ and ‘PDF_SET_2’.

PDF_SET_VERSION

This parameter allows one to select a specific version (member) within the chosen PDF set. Specifying a negative value, e.g.

  PDF_LIBRARY LHAPDFSherpa;
  PDF_SET NNPDF12_100.LHgrid; PDF_SET_VERSION -100;

results in Sherpa sampling all sets 1..100, which can be used to obtain the averaging required when employing PDFs from the NNPDF collaboration [Bal08], [Bal09].


6.4 Model Parameters

The interaction model setup is covered by the (model) section of the steering file or the model data file Model.dat, respectively.

The main switch here is called MODEL and sets the model that Sherpa uses throughout the simulation run. The default is ‘SM’, for the Standard Model. For a complete list of available models, run Sherpa with SHOW_MODEL_SYNTAX=1 on the command line. This will display not only the available models, but also the parameters for those models.

The chosen model also defines the list of particles and their default properties. With the following switches it is possible to change the properties of all fundamental particles; an illustrative snippet is given at the end of this list:

MASS[<id>]

Sets the mass (in GeV) of the particle with PDG id ‘<id>’.
Masses of particles and corresponding anti-particles are always set simultaneously.
For particles with Yukawa couplings, those are enabled/disabled consistent with the mass (taking into account the MASSIVE flag) by default, but that can be modified using the ‘YUKAWA[<id>]’ parameter. Note that by default the Yukawa couplings are treated as running, cf. YUKAWA_MASSES.

MASSIVE[<id>]

Specifies whether the finite mass of particle with PDG id ‘<id>’ is to be considered in matrix-element calculations or not.

WIDTH[<id>]

Sets the width (in GeV) of the particle with PDG id ‘<id>’.

ACTIVE[<id>]

Enables/disables the particle with PDG id ‘<id>’.

STABLE[<id>]

Sets the particle with PDG id ‘<id>’ either stable or unstable according to the following options:

0

Particle and anti-particle are unstable

1

Particle and anti-particle are stable

2

Particle is stable, anti-particle is unstable

3

Particle is unstable, anti-particle is stable

This option applies to decays of hadrons (cf. Hadron decays) as well as particles produced in the hard scattering (cf. Hard decays). For the latter, alternatively the decays can be specified explicitly in the process setup (see Processes) to avoid the narrow-width approximation.

PRIORITY[<id>]

Allows one to overwrite the default automatic flavour sorting in a process by specifying a priority for the given flavour. This way one can identify certain particles which are part of a container (e.g. massless b-quarks), such that their position can be used reliably in selectors and scale setters.

Note: To set properties of hadrons, you can use the same switches (except for MASSIVE) in the fragmentation section, see Hadronization.
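
As an illustrative sketch, such switches could be collected in the (model) section as shown below; the particle ids refer to the top quark (6) and the tau lepton (15), and the numerical values are examples rather than recommendations:

  (model){
    MASS[6] = 173.2
    WIDTH[6] = 1.5
    STABLE[6] = 0
    MASSIVE[15] = 1
  }(model)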


6.4.1 Standard Model

The SM inputs for the electroweak sector can be given in four different schemes, which correspond to different choices of which SM physics parameters are considered fixed and which are derived from the given quantities. The input schemes are selected through the EW_SCHEME parameter, whose default is ‘1’. The following options are provided:

The electroweak coupling is by default not running. If its running has been enabled (cf. COUPLINGS), one can specify its value at zero momentum transfer as input via 1/ALPHAQED(0).

To account for quark mixing the CKM matrix elements have to be assigned. For this purpose the Wolfenstein parametrization [Wol83] is employed. The order of expansion in the lambda parameter is defined through CKMORDER, with default ‘0’ corresponding to a unit matrix. The parameter convention for higher expansion terms reads:

The remaining parameter to fully specify the Standard Model is the strong coupling constant at the Z-pole, given through ALPHAS(MZ). Its default value is ‘0.118’. If the setup at hand involves hadron collisions and thus PDFs, the value of the strong coupling constant is automatically set consistent with the PDF fit and cannot be changed by the user. If Sherpa is compiled with LHAPDF support, it is also possible to use the alphaS evolution provided in LHAPDF by specifying USE_PDF_ALPHAS=1. For the strong coupling there is also the option to provide a fixed value to be used in matrix-element calculations in case the running of the coupling is disabled (cf. COUPLINGS); the corresponding keyword is ALPHAS(default). When using a running strong coupling, the order of the perturbative expansion can be set through ORDER_ALPHAS, where the default ‘0’ corresponds to one-loop running and 1,2,3 to two-, three- and four-loop running, respectively.
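
In setups where the strong coupling is not fixed by a PDF, e.g. leptonic collisions, the defaults described above would correspond to explicitly setting

ALPHAS(MZ) = 0.118
ORDER_ALPHAS = 0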

If unstable particles (e.g. W/Z bosons) appear as intermediate propagators in the process, Sherpa uses the complex mass scheme to construct MEs in a gauge-invariant way. For full consistency with this scheme, by default the dependent EW parameters are also calculated from the complex masses (‘WIDTHSCHEME=CMS’), yielding complex values e.g. for the weak mixing angle. To keep the parameters real one can set ‘WIDTHSCHEME=Fixed’. This may spoil gauge invariance though.


6.4.2 Minimal Supersymmetric Standard Model

To use the MSSM within Sherpa (cf. [Hag05] ) the MODEL switch has to be set to ‘MSSM’. Further, the parameter spectrum has to be provided. For this purpose files that conform to the SUSY-Les-Houches-Accord [Ska03] are used. The actual SLHA file name has to be specified by SLHA_INPUT and the file has to reside in the current run directory, i.e. PATH. From this file the full low-scale MSSM spectrum is read, including sparticle masses, mixing angles etc. In addition, the particles’ total widths are read from the input file. Note that masses and widths set through the SLHA input take precedence over the settings made via MASS[<id>] and WIDTH[<id>].
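
A minimal MSSM setup might thus look as follows, where the SLHA file name is merely a placeholder for the user’s spectrum file:

MODEL = MSSM
SLHA_INPUT = <slha-spectrum-file>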


6.4.3 ADD Model of Large Extra Dimensions

In order to use the ADD model within Sherpa the switch MODEL = ADD has to be set. The parameters of the ADD model can be set as follows:

The variable N_ED specifies the number of extra dimensions. The value of the Newtonian constant can be specified in natural units using the keyword G_NEWTON. The string scale M_S can be defined through the parameter M_S. Setting the value of KK_CONVENTION allows one to change between three widely used conventions for the definition of M_S and the way of summing internal Kaluza-Klein propagators. With the switch M_CUT the c.m. energy of the hard process is restricted to lie below this specified scale.

The masses, widths, etc. of both additional particles can be set in the same way as for the Standard Model particles using the MASS[<id>] and WIDTH[<id>] keywords. The ids of the graviton and graviscalar are 39 and 40, respectively.
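
A sketch of a corresponding ‘(model)’ section could read as follows; all numbers are purely illustrative:

MODEL = ADD
N_ED = 2
M_S = 2500.
M_CUT = 2500.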

For details of the implementation, the reader is referred to [Gle03a] .


6.4.4 Anomalous Gauge Couplings

Sherpa includes a number of effective Lagrangians describing anomalous gauge interactions:

Due to the effective nature of the anomalous couplings, unitarity might be violated for coupling parameters other than their SM values. For very large momentum transfers, such as those probed at the LHC, this will lead to unphysical results. As discussed in Ref. [Bau88] this can be avoided by introducing form factors applied to the deviation of the coupling parameters from their Standard Model values. The corresponding switches are UNITARIZATION_SCALE and UNITARIZATION_N. By default the form factor is switched off.


6.4.5 Two Higgs Doublet Model

The THDM is incorporated as a subset of the MSSM Lagrangian. It is defined as the extension of the SM by a second SU(2) doublet of Higgs fields. Besides the particle content of the SM it contains interactions of five physical Higgs bosons: a light and a heavy scalar, a pseudo-scalar and two charged ones. Besides the SM inputs the model is defined through the masses and widths of the Higgs particles, MASS[PDG] and WIDTH[PDG], where PDG = [25,35,36,37] for h^0, H^0, A^0 and H^+, respectively. The inputs are complete when TAN(BETA), the ratio of the two Higgs vacuum expectation values, and ALPHA, the Higgs mixing angle, are specified.

The model is invoked by specifying MODEL = THDM in the (model) section of the steering file or the model data file Model.dat, respectively.
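
A minimal sketch of such a setup, with purely illustrative numbers, might read:

MODEL = THDM
TAN(BETA) = 10.
ALPHA = 0.1
MASS[25] = 125.
MASS[36] = 400.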


6.4.6 Effective Higgs Couplings

The EHC describes the effective coupling of gluons and photons to Higgs bosons via a top-quark loop, and a W-boson loop in case of photons. This supplement to the Standard Model can be invoked by specifying MODEL = SM+EHC in the (model) section of the steering file or the model data file Model.dat, respectively.

The effective coupling of gluons to the Higgs boson, g_ggH, can be calculated either for a finite top-quark mass or in the limit of an infinitely heavy top using the switch FINITE_TOP_MASS=[1,0]. Similarly, the photon-photon-Higgs coupling, g_ppH, can be calculated for finite top and/or W masses or in the infinite mass limit using the switches FINITE_TOP_MASS=[1,0] and FINITE_W_MASS=[1,0]. The default choice is the infinite mass limit in either case. It can be varied by setting EHC_SCALE2 to a different value.

Either one of these couplings can be switched off using the DEACTIVATE_GGH=[1,0] and DEACTIVATE_PPH=[1,0] switches. Both default to 0.
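
As an example, the following settings invoke the effective coupling model with a finite top-quark mass in the gluon-gluon-Higgs coupling while the photon-photon-Higgs coupling is switched off entirely:

MODEL = SM+EHC
FINITE_TOP_MASS = 1
DEACTIVATE_PPH = 1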


6.4.7 Fourth Generation

The 4thGen model adds a fourth family of quarks and leptons to the Standard Model. It is invoked by specifying MODEL = SM+4thGen in the ’(model)’ section of the steering file or the model data file ‘Model.dat’, respectively.

The masses and widths of the additional particles are defined via the usual MASS[PDG] and WIDTH[PDG] switches, where PDG = [7,8,17,18] for the fourth generation down and up quarks, the charged lepton and the neutrino, respectively. A general mixing is implemented for both leptons and quarks, parametrised through three additional mixing angles and two additional phases, as described in [Hou87a] : A_14, A_24, A_34, PHI_2 and PHI_3 for quarks, THETA_L14, THETA_L24, THETA_L34, PHI_L2 and PHI_L3 for leptons. Both 4x4 mixing matrices expand upon their 3x3 Standard Model counterparts: the CKM matrix for quarks and the unit matrix for leptons. Both mixing matrices can be printed on screen with OUTPUT_MIXING = 1.

By default, all particles are set unstable and have to be decayed into Standard Model particles within the matrix element, or be set stable via STABLE[PDG] = 1.
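
A sketch of a fourth-generation setup, with purely illustrative mass values, might read:

MODEL = SM+4thGen
MASS[7] = 400.
MASS[8] = 450.
OUTPUT_MIXING = 1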


6.4.8 FeynRules model

To use a model generated using the FeynRules package, cf. Refs. [Chr08] and [Chr09] , the MODEL switch has to be set to ‘FeynRules’ and ME_SIGNAL_GENERATOR has to be set to ‘Amegic’. Note, in order to obtain the FeynRules model output in a format readable by Sherpa the FeynRules subroutine ’WriteSHOutput[ L ]’ needs to be called for the desired model Lagrangian ’L’. This results in a set of ASCII files that represent the considered model through its particle data, model parameters and interaction vertices. Note also that Sherpa/Amegic can only deal with Feynman rules in unitary gauge.

The FeynRules output files need to be copied to the current working directory or have to reside in the directory referred to by the PATH variable, cf. Input structure. There exists an agreed default naming convention for the FeynRules output files to be read by Sherpa. However, the explicit names of the input files can be changed. They are referred to by the variables

For more details on the Sherpa interface to FeynRules please consult [Chr09] .


6.5 Matrix Elements

The setup of matrix elements is covered by the ‘(me)’ section of the steering file or the ME data file ‘ME.dat’, respectively. There are no mandatory settings to be made.

The following parameters are used to steer the matrix element setup.


6.5.1 ME_SIGNAL_GENERATOR

The list of matrix element generators to be employed during the run. When setting up hard processes from the ‘(processes)’ section of the input file (see Processes), Sherpa calls these generators in order to check whether any of them is capable of generating the corresponding matrix element. This parameter can also be set on the command line using option ‘-m’, see Command line options.

The built-in generators are

Internal

Simple matrix element library, implementing a variety of 2->2 processes.

Amegic

The AMEGIC++ generator published under [Kra01]

Comix

The Comix generator published under [Gle08]

It is possible to employ an external matrix element generator within Sherpa. For advice on this topic please contact the authors, Authors.
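
The generators are typically given as a plain space-separated list, for instance

ME_SIGNAL_GENERATOR Comix Amegic

which makes both Comix and Amegic available, leaving it to Sherpa to pick a capable generator for each process.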


6.5.2 RESULT_DIRECTORY

This parameter specifies the name of the directory which is used by Sherpa to store integration results and phasespace mappings. The default is ‘Results/’. It can also be set using the command line parameter ‘-r’, see Command line options. The directory will be created automatically, unless the option ‘GENERATE_RESULT_DIRECTORY=0’ is specified. Its location is relative to a potentially specified input path, see Command line options.


6.5.3 EVENT_GENERATION_MODE

This parameter specifies the event generation mode. It can also be set on the command line using option ‘-w’, see Command line options. The three possible options are ‘Weighted’ (shortcut ‘W’), ‘Unweighted’ (shortcut ‘U’) and ‘PartiallyUnweighted’ (shortcut ‘P’). For partially unweighted events, the weight is allowed to exceed a given maximum, which is lower than the true maximum weight. In such cases the event weight will exceed the otherwise constant value.


6.5.4 SCALES

This parameter specifies how to compute the renormalization and factorization scale and potential additional scales.

Sherpa provides several built-in scale setting schemes. For each scheme the scales are then set using expressions understood by the Interpreter. Each scale setter’s syntax is

SCALES <scale-setter>{<scale-definition>}

to define a single scale for both the factorisation and renormalisation scale. They can be set to different values using

SCALES <scale-setter>{<fac-scale-definition>}{<ren-scale-definition>}

In next-to-leading order parton shower matched calculations a third perturbative scale is present, the resummation or parton shower starting scale. It is set using the third argument

SCALES <scale-setter>{<fac-scale-definition>}{<ren-scale-definition>}{<res-scale-definition>}

Note: for all scales their squares have to be given. See Predefined scale tags for some predefined scale tags.

More than three scales can be set as well to be subsequently used, e.g. by different couplings, see COUPLINGS.


6.5.4.1 Scale setters

The scale setter options which are currently available are

VAR

The variable scale setter is the simplest scale setter available. Scales are simply specified by additional parameters in a form which is understood by the internal interpreter, see Interpreter. If, for example, the invariant mass of the lepton pair in Drell-Yan production is the desired scale, the corresponding setup reads

SCALES VAR{Abs2(p[2]+p[3])}

Renormalization and factorization scales can be chosen differently. For example in Drell-Yan + jet production one could set

SCALES VAR{Abs2(p[2]+p[3])}{MPerp2(p[2]+p[3])}

FASTJET

If FastJet is enabled by including --enable-fastjet=/path/to/fastjet in the configure options, this scale setter can be used to set a scale based on jet-, rather than parton-momenta. The jets can be defined in all possible ways allowed by FastJet with the arguments given as follows

SCALES FASTJET[A:antikt,PT:20,ET:0,R:0.4,M:1,B:0]{...}

This scale setter first identifies jets in the final state which then form the objects to be used by the scale definition. For a detailed description see Scale setting using jets.

METS

The matrix element is clustered onto a core 2->2 configuration using an inversion of the current parton shower, cf. SHOWER_GENERATOR, recombining (n+1) particles into n on-shell particles. Their corresponding flavours are determined using run-time information from the matrix element generator. It defines the three tags MU_F2, MU_R2 and MU_Q2 whose values are assigned through this clustering procedure. While MU_F2 and MU_Q2 are defined as the lowest invariant mass or negative virtuality in the core process (for core interactions which are pure QCD processes the scales are set to the maximum transverse mass squared of the outgoing particles), MU_R2 is determined using this core scale and the individual clustering scales such that

  alpha_s(MU_R2)^{n+k} = alpha_s(core-scale)^k alpha_s(kt_1) ... alpha_s(kt_n)

where k is the order in the strong coupling of the core process, n is the number of clusterings, and kt_i are the relative transverse momenta at each clustering. The tags MU_F2, MU_R2 and MU_Q2 can then be used on equal footing with the tags of Predefined scale tags to define the final scale.

METS is the default scale scheme in Sherpa, since it is employed for truncated shower merging, see ME-PS merging, both at leading and next-to-leading order. Thus, Sherpa’s default is

SCALES METS{MU_F2}{MU_R2}{MU_Q2}

As the tags MU_F2, MU_R2 and MU_Q2 are predefined by the METS scale setter, they may be omitted, i.e.

SCALES METS

leads to an identical scale definition.

The METS scale setter comes in two variants: STRICT_METS and LOOSE_METS. While the former employs the exact inverse of the parton shower for the clustering procedure, and is therefore rather time consuming for multiparton final states, the latter is a simplified version and much faster. Giving METS as the scale setter results in using LOOSE_METS for the integration and STRICT_METS during event generation. Giving either STRICT_METS or LOOSE_METS as the scale setter results in using the respective one during both integration and event generation.

Clustering onto 2->n (n>2) configurations is possible, see METS scale setting with multiparton core processes.

This scheme might be subject to changes to enable further classes of processes for merging in the future and should therefore be used with care. Integration results might change slightly between different Sherpa versions.

Occasionally, users might encounter the warning message

METS_Scale_Setter::CalculateScale(): No CSS history for '<process name>' in <percentage>% of calls. Set \hat{s}.

As long as the percentage quoted here is not too high, this does not pose a serious problem. The warning occurs when - based on the current colour configuration and matrix element information - no suitable clustering is found by the algorithm. In such cases the scale is set to the invariant mass of the partonic process.

It is possible to implement a dedicated scale scheme within Sherpa. For advice on this topic please contact the authors, Authors.


6.5.4.2 Predefined scale tags

There exist a few predefined tags to facilitate commonly used scale choices or easily implement a user defined scale.

p[n]

Access to the four momentum of the nth particle. The initial state particles carry n=0 and n=1, the final state momenta start from n=2. Their ordering is determined by Sherpa’s internal particle ordering and can be read e.g. from the process names displayed at run time. Please note that when building jets out of the final state partons first, e.g. through the FASTJET scale setter, these parton momenta will be replaced by the jet momenta ordered in transverse momentum. For example the process u ub -> e- e+ G G will have the electron and the positron at positions p[2] and p[3] and the gluons on positions p[4] and p[5]. However, when finding jets first, the electrons will still be at p[2] and p[3] while the harder jet will be at p[4] and the softer one at p[5].

H_T2

Square of the scalar sum of the transverse momenta of all final state particles.

H_TY2[fac:<factor>,exp:<exponent>]

Square of the scalar sum of the transverse momenta of all final state particles weighted by their rapidity distance from the final state boost vector. It thus takes the form

  H_T^{(Y)} = sum_i pT_i exp [ fac |y-yboost|^exp ]

Typical values to use would be fac:0.3 and exp:1.

MU_F2, MU_R2, MU_Q2

Tags holding the values of the factorisation, renormalisation and resummation scales determined through backwards clustering in the METS scale setter.

MU_22, MU_32, ..., MU_n2

Tags holding the nodal values of the jet clustering in the FASTJET scale setter, cf. Scale setting using jets.

All of those objects can be operated upon by any operator/function known to the Interpreter.


6.5.4.3 Scale schemes for NLO calculations

For next-to-leading order calculations it must be guaranteed that the scale is calculated separately for the real correction and the subtraction terms, such that within the subtraction procedure the same amount is subtracted and added back. Starting from version 1.2.2 this is the case for all scale setters in Sherpa. Also, the definition of the scale must be infrared safe w.r.t. the radiation of an extra parton. Infrared safe (for QCD-NLO calculations) are:

Not infrared safe are

Since the total number of partons is different for different pieces of the NLO calculation any explicit reference to a parton momentum will lead to an inconsistent result.


6.5.4.4 Scale setting using jets

Sherpa allows scales to be defined based on jet rather than parton momenta. The final state parton configuration is first clustered using FastJet and the resulting jet momenta are then added back to the list of non-strongly interacting particles. The numbering of momenta therefore stays effectively the same as in standard Sherpa, except that final state partons are replaced with jets, if applicable (a parton might not pass the jet criteria and get "lost"). In particular, the indices of the initial state partons and all EW particles are unaffected. Jet momenta can then be accessed as described in Predefined scale tags through the identifiers p[i], and their nodal values can be used through MU_n2. The syntax is

SCALES FASTJET[<jet-algo-parameter>]{<scale-definition>}

Therein the parameters of the jet algorithm to be used to define the jets are given as a comma-separated list of

Consider the example of lepton pair production in association with jets. The following scale setter

SCALES FASTJET[A:kt,PT:10,R:0.4,M:0]{sqrt(PPerp2(p[4])*PPerp2(p[5]))}

reconstructs jets using the kt-algorithm with R=0.4 and a minimum transverse momentum of 10 GeV. The scale of all strong couplings is then set to the geometric mean of the transverse momenta of the two hardest jets. Note M:0.

Similarly, in processes with multiple strong couplings, their renormalisation scales can be set to different values, e.g.

SCALES FASTJET[A:kt,PT:10,R:0.4,M:1]{PPerp2(p[4])}{PPerp2(p[5])}

sets the scale of one strong coupling to the transverse momentum of the hardest jet, and the scale of the second strong coupling to the transverse momentum of the second-hardest jet. Note M:1 in this case.

The additional tags MU_22 .. MU_n2 (n=2..njet+1) hold the nodal values of the jet clustering in descending order.

The B parameter, if specified differently from its default 0, allows the use of b-tagged jets only, based on the parton-level constituents of the jets. There are two options: With B:1 both b and anti-b quarks are counted equally towards b-jets, while for B:2 they are added with a relative sign as constituents, i.e. a jet containing b and anti-b is not tagged.

Please note that currently this type of scale setting can only be done within the process block (Processes) and not within the (me) section.


6.5.4.5 Simple scale variations

Simple scale variations can be done using the following parameters:


6.5.4.6 Scale variations in parton showered and merged samples

When performing scale variations within parton showered samples the naive FACTORIZATION_SCALE_FACTOR and RENORMALIZATION_SCALE_FACTOR cannot be employed because they alter the resummation behaviour of the parton shower and lead to an overestimate of the associated uncertainty. Instead, the scales in the fixed-order matrix element and the parton shower resummation should be varied separately. This can be done for the matrix element by introducing the prefactor into its scale definition, e.g.

SCALES VAR{0.25*H_T2}{0.25*H_T2}

for setting both the renormalisation and factorisation scales to H_T/2. The parton shower scale is then varied by setting CSS_SHOWER_SCALE2_FACTOR=<factor>. It redefines the reference value of the strong coupling constant at mZ by its numerical value at mZ*<factor>, leaving its running unchanged.

In merged samples the METS scale setter has to be used. The scales in the respective multijet matrix elements can then be varied via

SCALES METS{<muF-var-factor>*MU_F2}{<muR-var-factor>*MU_R2}

In NLO-merged (MEPSatNLO) samples proper counterterms for compensating the change of scale at NLO have to be introduced via SP_NLOCT=1. Also, the resummation scale can be varied

SCALES METS{<muF-var-factor>*MU_F2}{<muR-var-factor>*MU_R2}{<muQ-var-factor>*MU_Q2}

6.5.4.7 METS scale setting with multiparton core processes

The METS scale setter can be instructed not to cluster to a 2->2 configuration but to stop at the minimal multiplicity present in the process setup. The core scale of the 2->n process then needs to be defined. This is done by specifying a core scale through

CORE_SCALE <core-scale-setter>{<core-fac-scale-definition>}{<core-ren-scale-definition>}{<core-res-scale-definition>}

As always, for scale setters which define MU_F2, MU_R2 and MU_Q2 the scale definition can be dropped. Possible core scale setters are

VAR

Variable core scale setter. Syntax is identical to variable scale setter.

QCD

QCD core scale setter. Scales are set to the harmonic mean of s, t and u. Only useful for 2->2 cores as an alternative to the usual core scale of the METS scale setter.

An example for defining a custom core scale is given in H+jets production in weak boson fusion or Simulation of top quark pair production using MC@NLO methods.


6.5.5 COUPLINGS

Within Sherpa, strong and electroweak couplings can be computed at any scale specified by a scale setter (cf. SCALES). The ‘COUPLINGS’ tag links the argument of a running coupling to one of the respective scales. This is better seen in an example. Assuming the following input

SCALES    VAR{...}{PPerp2(p[2])}{Abs2(p[2]+p[3])}
COUPLINGS Alpha_QCD 1, Alpha_QED 2

Sherpa will compute any strong couplings at scale one, i.e. ‘PPerp2(p[2])’ and electroweak couplings at scale two, i.e. ‘Abs2(p[2]+p[3])’. Note that counting starts at zero.


6.5.6 KFACTOR

This parameter specifies how to evaluate potential K-factors in the hard process. This is equivalent to the ‘COUPLINGS’ specification of Sherpa versions prior to 1.2.2. Currently available options are

NO

No reweighting

VAR

K-factor specified by an additional parameter in a form which is understood by the internal interpreter, see Interpreter. The tags Alpha_QCD and Alpha_QED serve as links to the built-in running coupling implementation.

If, for example, the process ‘g g -> h g’ is computed in the effective theory, one could think of evaluating two powers of the strong coupling at the Higgs mass scale and one power at the transverse momentum squared of the gluon. Assuming the Higgs mass to be 120 GeV, the corresponding reweighting would read

SCALES    VAR{...}{PPerp2(p[3])}
COUPLINGS Alpha_QCD 1
KFACTOR   VAR{sqr(Alpha_QCD(sqr(120))/Alpha_QCD(MU_12))}

As can be seen from this example, scales are referred to as MU_<i>2, where <i> is replaced with the appropriate number. Note that counting starts at zero.

It is possible to implement a dedicated K-factor scheme within Sherpa. For advice on this topic please contact the authors, Authors.


6.5.7 YUKAWA_MASSES

This parameter specifies whether the Yukawa couplings are evaluated using running or fixed quark masses: YUKAWA_MASSES=Running is the default since version 1.2.2 while YUKAWA_MASSES=Fixed was the default until 1.2.1.


6.5.8 Dipole subtraction

This list of parameters can be used to optimize the performance when employing the Catani-Seymour dipole subtraction [Cat96b] as implemented in Amegic [Gle07] .

`DIPOLE_ALPHA'

Specifies a dipole cutoff in the nonsingular region [Nag03] . Changing this parameter shifts contributions from the subtracted real correction piece (RS) to the piece including integrated dipole terms (I), while their sum remains constant. This parameter can be used to optimize the integration performance of the individual pieces. Also the average calculation time for the subtracted real correction is reduced with smaller choices of ‘DIPOLE_ALPHA’ due to the (on average) reduced number of contributing dipole terms. For most processes a reasonable choice is between 0.01 and 1 (default). See also Choosing DIPOLE_ALPHA

`DIPOLE_AMIN'

Specifies the cutoff of real correction terms in the infrared region to avoid numerical problems with the subtraction. The default is 1.e-8.

`DIPOLE_NF_GSPLIT'

Specifies the number of quark flavours that are produced from gluon splittings. This number must be at least the number of massless flavours (default). If this number is larger than the number of massless quarks the massive dipole subtraction [Cat02] is employed.

`DIPOLE_KAPPA'

Specifies the kappa-parameter in the massive dipole subtraction formalism [Cat02] .


6.6 Processes

The process setup is covered by the ‘(processes)’ section of the steering file or the process data file ‘Processes.dat’, respectively.

The following parameters are used to steer the process setup.


6.6.1 Process

This tag starts the setup of a process or a set of processes with common properties. It must be followed by the specification of the (core) process itself. The setup is completed by the ‘End process’ tag, see End process. The initial and final state particles are specified by their PDG codes, or by particle containers, see Particle containers. Examples are

Process 93 93 -> 11 -11

Sets up a Drell-Yan process group with light quarks in the initial state.

Process 11 -11 -> 93 93 93{3}

Sets up jet production in e+e- collisions with up to three additional jets.
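
Putting these elements together, a complete process block might look like the following sketch, where the 20 GeV merging cut is purely illustrative (cf. CKKW and End process):

Process 93 93 -> 11 -11 93{2}
CKKW sqr(20/E_CMS)
End process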

The syntax for specifying processes is explained in the following sections:


6.6.1.1 PDG codes

Initial and final state particles are specified using their PDG codes (cf. PDG). A list of particles with their codes, and some of their properties, is printed at the start of each Sherpa run, when the OUTPUT is set at level ‘2’.


6.6.1.2 Particle containers

Sherpa contains a set of containers that collect particles with similar properties, namely

These containers hold all massless particles and anti-particles of the denoted type and allow for a more efficient definition of initial and final states to be considered. The jet container consists of the gluon and all massless quarks (as set by MASS[..]=0.0 or MASSIVE[..]=0). A list of particle containers is printed at the start of each Sherpa run, when the OUTPUT is set at level ‘2’.

It is also possible to define a custom particle container using the keyword PARTICLE_CONTAINER either on the command line or in the (run) section of the input file. The container must be given an unassigned particle ID (kf-code), and its name (freely chosen by you) and content must be specified. An example would be the collection of all down-type quarks, which could be declared as

  PARTICLE_CONTAINER 98 downs 1 -1 3 -3 5 -5;

Note that anti-particles are not included automatically: if wanted, both particles and anti-particles have to be added explicitly.


6.6.1.3 Curly brackets

The curly bracket notation when specifying a process allows up to a certain number of jets to be included in the final state. This is easily seen from an example,

Process 11 -11 -> 93 93 93{3}

Sets up jet production in e+e- collisions. The matrix element final state may be 2, 3, 4 or 5 light quarks or gluons.


6.6.2 Decay

Specifies the exclusive decay of a particle produced in the matrix element. The virtuality of the decaying particle is sampled according to a Breit-Wigner distribution. An example would be

Process 11 -11 -> 6[a] -6[b]
Decay 6[a] -> 5 24[c]
Decay -6[b] -> -5 -24[d]
Decay 24[c] -> -13 14
Decay -24[d] -> 94 94

6.6.3 DecayOS

Specifies the exclusive decay of a particle produced in the matrix element. The decaying particle is on mass-shell, i.e. a strict narrow-width approximation is used. This tag can be specified alternatively as ‘DecayOS’. An example would be

Process 11 -11 -> 6[a] -6[b]
DecayOS 6[a] -> 5 24[c]
DecayOS -6[b] -> -5 -24[d]
DecayOS 24[c] -> -13 14
DecayOS -24[d] -> 94 94

6.6.4 No_Decay

Remove all diagrams associated with the decay of the given flavours. Serves to avoid resonant contributions in processes like W-associated single-top production. Note that this method breaks gauge invariance! At the moment this flag can only be set for Comix. An example would be

Process 93 93 -> 6[a] -24[b] 93{1}
Decay 6[a] -> 5 24[c]
DecayOS 24[c] -> -13 14
DecayOS -24[b] -> 11 -12
No_Decay -6

6.6.5 Scales

Sets a process-specific scale. For the corresponding syntax see SCALES.


6.6.6 Couplings

Sets process-specific couplings. For the corresponding syntax see COUPLINGS.


6.6.7 CKKW

Sets up multijet merging according to [Hoe09] . The additional argument specifies the separation cut in the form (Q_{cut}/E_{cms})^2. It can be given in any form which is understood by the internal interpreter, see Interpreter. Examples are


6.6.8 Selector_File

Sets a process-specific selector file name.


6.6.9 Order_EW

Sets a process-specific electroweak order. The given number is exclusive, i.e. only matrix elements with exactly the given order in the electroweak coupling are generated.

Note that for decay chains with Amegic this setting applies to the core process only, while with Comix it applies to the full process, see Decay and DecayOS.


6.6.10 Max_Order_EW

Sets a process-specific maximum electroweak order. The given number is inclusive, i.e. matrix elements with up to the given order in the electroweak coupling are generated.

Note that for decay chains with Amegic this setting applies to the core process only, while with Comix it applies to the full process, see Decay and DecayOS.


6.6.11 Order_QCD

Sets a process-specific QCD order. The given number is exclusive, i.e. only matrix elements with exactly the given order in the strong coupling are generated.

Note that for decay chains with Amegic this setting applies to the core process only, while with Comix it applies to the full process, see Decay and DecayOS.


6.6.12 Max_Order_QCD

Sets a process-specific maximum QCD order. The given number is inclusive, i.e. matrix elements with up to the given order in the strong coupling are generated.

Note that for decay chains with Amegic this setting applies to the core process only, while with Comix it applies to the full process, see Decay and DecayOS.


6.6.13 Min_N_Quarks

Limits the minimum number of quarks in the process to the given value.


6.6.14 Max_N_Quarks

Limits the maximum number of quarks in the process to the given value.


6.6.15 Min_N_TChannels

Limits the minimum number of t-channel propagators in the process to the given value.


6.6.16 Print_Graphs

Writes out Feynman graphs in LaTeX format. The parameter specifies the directory name in which the diagram information is stored. This directory is created automatically by Sherpa. After Sherpa has run, there will be a .tex-file located in the Comix/Amegic subdirectory of the diagram information directory specified with the name <process>.tex. This has to be compiled by using latex <process>.tex, which produces a .mp-file. Enter mpost *.mp and again latex <process>.tex in order to produce the .dvi-file <process>.dvi containing the Feynman diagrams.


6.6.17 Integration_Error

Sets a process-specific relative integration error target.

For multijet processes, this parameter can be specified per final state multiplicity. An example would be

Process 93 93 -> 93 93 93{2}
Integration_Error 0.02 {3,4}

Here, the integration error target is set to 2% for 2->3 and 2->4 processes.


6.6.18 Max_Epsilon

Sets epsilon for maximum weight reduction. The key idea is to allow weights larger than the maximum during event generation, as long as the fraction of the cross section represented by corresponding events is at most the epsilon factor times the total cross section. In other words, the relative contribution of overweighted events to the inclusive cross section is at most epsilon.


6.6.19 Enhance_Factor

Sets a process specific enhance factor.

For multijet processes, this parameter can be specified per final state multiplicity. An example would be

Process 93 93 -> 93 93 93{2}
Enhance_Factor 4 {3}
Enhance_Factor 16 {4}

Here, 3-jet processes are enhanced by a factor of 4, 4-jet processes by a factor of 16.


6.6.20 RS_Enhance_Factor

Sets an enhance factor for the RS-piece of an MC@NLO process.

For multijet processes, this parameter can be specified per final state multiplicity. An example would be

Process 93 93 -> 90 91 93{3};
NLO_QCD_Mode MC@NLO {2,3};
RS_Enhance_Factor 10 {2};
RS_Enhance_Factor 20 {3};

Here, the RS-pieces of the MC@NLO subprocesses of the 2 particle final state processes are enhanced by a factor of 10, while those of the 3 particle final state processes are enhanced by a factor of 20.


6.6.21 Enhance_Function

Sets a process specific enhance function.

This feature can only be used when generating weighted events.

For multijet processes, the parameter can be specified per final state multiplicity. An example would be

Process 93 93 -> 11 -11 93{1}
Enhance_Function VAR{PPerp2(p[4])} {3}

Here, the 1-jet process is enhanced with the transverse momentum squared of the jet.

Note that the convergence of the Monte Carlo integration can be worse if enhance functions are employed and therefore the integration can take significantly longer. The reason is that the default phase space mapping, which is constructed according to diagrammatic information from hard matrix elements, is not suited for event generation including enhancement. It must first be adapted, which, depending on the enhance function and the final state multiplicity, can be an intricate task.

If Sherpa cannot achieve an integration error target due to the use of enhance functions, it might be appropriate to locally redefine this error target, see Integration_Error.


6.6.22 Enhance_Observable

Allows for the specification of a ME-level observable in which the event generation should be flattened. Of course, this induces an appropriate weight for each event. This option is available for both weighted and unweighted event generation, but for the latter, as mentioned above, the weight stemming from the enhancement is introduced. For multijet processes, the parameter can be specified per final state multiplicity.

An example would be

Process 93 93 -> 11 -11 93{1}
Enhance_Observable VAR{log10(PPerp(p[2]+p[3]))}|1|3 {3}

Here, the 1-jet process is flattened with respect to the logarithmic transverse momentum of the lepton pair in the limits 1.0 (10 GeV) to 3.0 (1 TeV). For the calculation of the observable one can use any function available in the algebra interpreter (see Interpreter).

Note that the convergence of the Monte Carlo integration can be worse if enhance observables are employed and therefore the integration can take significantly longer. The reason is that the default phase space mapping, which is constructed according to diagrammatic information from hard matrix elements, is not suited for event generation including enhancement. It must first be adapted, which, depending on the enhance function and the final state multiplicity, can be an intricate task.

If Sherpa cannot achieve an integration error target due to the use of enhance functions, it might be appropriate to locally redefine this error target, see Integration_Error.


6.6.23 NLO_QCD_Mode

This setting specifies whether and in which mode a QCD NLO calculation should be performed. Possible values are:

The usual multiplicity identifiers apply to this switch as well. Note that this setting implies NLO_QCD_Part BVIRS for the relevant multiplicities. This can be overridden by setting NLO_QCD_Part explicitly in case of fixed-order calculations.

Note that Sherpa includes only a very limited selection of one-loop corrections. For processes not included, external codes can be interfaced, see External one-loop ME.


6.6.24 NLO_QCD_Part

In case of fixed-order NLO calculations this switch specifies which pieces of a QCD NLO calculation are computed. Possible choices are

Different pieces can be combined in one process setup. Only pieces with the same number of final state particles and the same order in alpha_S can be treated as one process, otherwise they will be automatically split up.


6.6.25 NLO_EW_Mode

This setting specifies whether and in which mode an electroweak NLO calculation should be performed. Possible values are:


6.6.26 NLO_EW_Part

In case of fixed-order NLO calculations this switch specifies which pieces of an electroweak NLO calculation are computed. Possible choices are

Different pieces can be combined in one process setup. Only pieces with the same number of final state particles and the same order in alpha_QED can be treated as one process, otherwise they will be automatically split up.


6.6.27 Subdivide_Virtual

Allows the virtual contribution to the total cross section to be split into pieces. Currently supported options when run with BlackHat are ‘LeadingColor’ and ‘FullMinusLeadingColor’. For high-multiplicity calculations these settings allow one to adjust the relative number of points in the sampling to reduce the overall computation time.


6.6.28 ME_Generator

Set a process specific nametag for the desired tree-ME generator, see ME_SIGNAL_GENERATOR.


6.6.29 Loop_Generator

Set a process specific nametag for the desired loop-ME generator. The only Sherpa-native option is Internal with a few hard coded loop matrix elements.


6.6.29.1 BlackHat Interface

Another source of loop matrix elements is BlackHat. To use it, Sherpa has to be linked to BlackHat during installation by using the configure option --enable-blackhat=/path/to/blackhat.


6.6.30 End process

Completes the setup of a process or a list of processes with common properties.


6.7 Selectors

The setup of cuts at the matrix element level is covered by the ‘(selector)’ section of the steering file or the selector data file ‘Selector.dat’, respectively.

Sherpa provides the following selectors


6.7.1 One particle selectors

The selectors listed here implement cuts on the matrix element level, based on single particle kinematics. The corresponding syntax in ‘Selector.dat’ is

<keyword> <flavour code> <min value> <max value>

‘<min value>’ and ‘<max value>’ are floating point numbers, which can also be given in a form that is understood by the internal algebra interpreter, see Interpreter. The selectors act on all possible particles with the given flavour. Their respective keywords are

Energy

energy cut

ET

transverse energy cut

PT

transverse momentum cut

Rapidity

rapidity cut

PseudoRapidity

pseudorapidity cut
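
For example, the line

PT 11 20.0 E_CMS

requires every electron in the matrix-element final state to carry a transverse momentum between 20 GeV and the centre-of-mass energy; the cut value is purely illustrative.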


6.7.2 Two particle selectors

The selectors listed here implement cuts on the matrix element level, based on two particle kinematics. The corresponding syntax in ‘Selector.dat’ is

<keyword> <flavour1 code> <flavour2 code> <min value> <max value>

‘<min value>’ and ‘<max value>’ are floating point numbers, which can also be given in a form that is understood by the internal algebra interpreter, see Interpreter. The selectors act on all possible particles with the given flavour. Their respective keywords are

Mass

invariant mass

Angle

angular separation (rad)

BeamAngle

angular separation w.r.t. beam

(‘<flavour2 code>’ is 0 or 1, referring to beam 1 or 2)

DeltaEta

pseudorapidity separation

DeltaY

rapidity separation

DeltaPhi

azimuthal angle separation (rad)

DeltaR

R separation
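
For example, with purely illustrative values, the line

Mass 11 -11 66.0 116.0

restricts the invariant mass of all electron-positron pairs to lie between 66 GeV and 116 GeV.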


6.7.3 Jet finders

There are three different types of jet finders

JetFinder

k_T-algorithm

ConeFinder

cone-algorithm

NJetFinder

k_T-type algorithm to select on a given number of jets

Their respective syntax is

JetFinder  <ycut>[<ycut decay 1>[<ycut decay 11>...]...]... <D parameter>
ConeFinder <min R> 
NJetFinder <n> <ptmin> <etmin> <D parameter> [<exponent>] [<eta max>] [<mass max>]

For ‘JetFinder’, it is possible to give different values of ycut in individual subprocesses of a production-decay chain. The square brackets are then used to denote the decays. In case only one uniform set of ycut is to be used, the square brackets are left out.

‘<ycut>’, ‘<min R>’ and ‘<D parameter>’ are floating point numbers, which can also be given in a form that is understood by the internal algebra interpreter, see Interpreter.

The ‘NJetFinder’ allows one to select kinematic configurations with at least ‘<n>’ jets that satisfy both the ‘<ptmin>’ and the ‘<etmin>’ minimum requirements and that lie in a pseudorapidity region |eta|<‘<eta max>’. The ‘<exponent>’ allows one to apply a kt-algorithm (1) or an anti-kt algorithm (-1). As only massless partons are clustered by default, the ‘<mass max>’ allows partons with a mass up to the specified value to be included in the clustering. This is useful e.g. in calculations with massive b-quarks which shall nonetheless satisfy jet criteria.
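
As an illustration, the selector line

NJetFinder 2 20.0 0.0 0.4 -1

requires at least two anti-kt jets (exponent -1) with a D parameter of 0.4 and a transverse momentum above 20 GeV; the numbers are purely illustrative.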


6.7.4 Universal selector

The universal selector is intended to implement non-standard cuts on the matrix element level. Its syntax is

"<variable>" <kf1>,..,<kfn> <min1>,<max1>:..:<minn>,<maxn> [<order1>,...,<orderm>]

No additional white spaces are allowed.

The first word has to be double-quoted, and contains the name of the variable to cut on. The keywords for available predefined <variable>s can be found by running Sherpa with ‘SHOW_VARIABLE_SYNTAX=1’. Alternatively, an arbitrary cut variable can be constructed using the internal interpreter, see Interpreter. This is invoked with the command ‘Calc(...)’. In the formula specified there you have to use place holders for the momenta of the particles: ‘p[0]’ ... ‘p[n]’ hold the momenta of the respective particles ‘kf1’ ... ‘kfn’. A list of available vector functions and operators can be found in Interpreter.

‘<kf1>,..,<kfn>’ specify the PDG codes of the particles the variable has to be calculated from. In case this choice is not unique in the final state, you have to specify multiple cut ranges (‘<min1>,<max1>:..:<minn>,<maxn>’) for all (combinations of) particles you want to cut on, separated by colons.

If no fourth argument is given, the order of cuts is determined internally, according to Sherpa’s process classification scheme. This then has to be matched if you want to have different cuts on certain different particles in the matrix element. To do this, you should first put enough arbitrary ranges (for the possible number of combinations of your particles) and run Sherpa with debugging output for the universal selector: ‘Sherpa OUTPUT=2[Variable_Selector::Trigger|15]’. This will start to produce lots of output during integration, at which point you can interrupt the run (Ctrl-c). In the ‘Variable_Selector::Trigger(): {...}’ output you can see which particle combinations have been found and which cut range your selector has held for them (vs. the arbitrary range you specified). From that you should get an idea in which order the cuts have to be specified.

If the fourth argument is given, particles are ordered before the cut is applied. Possible orderings are ‘PT_UP’, ‘ET_UP’, ‘E_UP’ and ‘ETA_UP’, (increasing p_T, E_T, E, eta). They have to be specified for each of the particles, separated by commas.

Examples

Two-body transverse mass

"mT" 11,-12 50,E_CMS

Cut on the pT of only the hardest lepton in the event

"PT" 90 50.0,E_CMS [PT_UP]

Using bool operations to restrict eta of the electron to |eta| < 1.1 or 1.5 < |eta| < 2.5

"Calc(abs(Eta(p[0]))<1.1||(abs(Eta(p[0]))>1.5&&abs(Eta(p[0]))<2.5))" 11 1,1

Note the range 1,1 meaning true for bool operations.

Requesting opposite side tag jets in VBF would for example need a setup like this

"Calc(Eta(p[0])*Eta(p[1]))" 93,93 -100,0 [PT_UP,PT_UP]

Restricting electron+photon mass to be outside of [87.0,97.0]:

"Calc(Mass(p[0]+p[1])<87.0||Mass(p[0]+p[1])>97.0)" 11,22 1,1

In ‘Z[lepton lepton] Z[lepton lepton]’, cut on mass of lepton-pairs produced from Z’s:

"m" 90,90 80,100:0,E_CMS:0,E_CMS:0,E_CMS:0,E_CMS:80,100

Here we use knowledge about the internal ordering to cut only on the correct lepton pairs.


6.7.5 Minimum selector

This selector can combine several selectors; an event is passed if at least one of them accepts it. It is mainly designed to generate more inclusive samples that, for instance, include several jet finders and allow a more specific selection at a later stage. The syntax is

MinSelector {
  Selector 1
  Selector 2
  ...
} 

6.7.6 NLO selectors

Phase-space cuts that are applied on next-to-leading order calculations must be defined in an infrared-safe way. Technically, a special treatment of the real (subtracted) correction is also required. Currently only the following selectors meet these requirements:

QCD parton cuts
NJetFinder <n> <ptmin> <etmin> <D parameter> [<exponent>] [<eta max>] [<mass max>]

(see Jet finders)

Cuts on not strongly interacting particles

One particle selectors

PTNLO <flavour code> <min value> <max value>
RapidityNLO <flavour code> <min value> <max value>
PseudoRapidityNLO <flavour code> <min value> <max value>

Two particle selectors

PT2NLO <flavour1 code> <flavour2 code> <min value> <max value>
Mass <flavour1 code> <flavour2 code> <min value> <max value>

Cuts on photons

IsolationCut 22 <dR> <exponent> <epsilon>

implements the Frixione isolation cone [Fri98] .

The Minimum selector can also be used in NLO calculations if it is constructed from other selectors mentioned in this section.

6.7.7 Fastjet selector

If FastJet is enabled, the momenta and nodal values of the jets found with Fastjet can be used to calculate more elaborate selector criteria. The syntax of this selector is

FastjetSelector <expression> <algorithm> <n> <ptmin> <etmin> <dr> [<f(siscone)>=0.75] [<eta-max>] [<y-max>] [<bmode>]

wherein algorithm can take the values kt,antikt,cambridge,siscone. In the algebraic expression MU_n2 (n=2..njet+1) signify the nodal values of the jets found and p[i] are their momenta. For details see Scale setting using jets. For example, in lepton pair production in association with jets

FastjetSelector Mass(p[4]+p[5])>100. antikt 2 40. 0. 0.5

selects all phase space points where two anti-kt jets with at least 40 GeV of transverse momentum and an invariant mass of at least 100 GeV are found. The expression must evaluate to a boolean value. The bmode parameter, if specified differently from its default 0, allows the use of b-tagged jets only, based on the parton-level constituents of the jets. There are two options: With <bmode>=1 both b and anti-b quarks are counted equally towards b-jets, while for <bmode>=2 they are added with a relative sign as constituents, i.e. a jet containing b and anti-b is not tagged.


6.8 Integration

The integration setup is covered by the ‘(integration)’ section of the steering file or the integration data file ‘Integration.dat’, respectively.

The following parameters are used to steer the integration.


6.8.1 ERROR

Specifies the relative integration error target.


6.8.2 INTEGRATOR

Specifies the integrator. The possible integrator types depend on the matrix element generator. In general users should rely on the default value and otherwise seek the help of the authors, see Authors. The options for AMEGIC++ are

In addition, however, there are a few generator independent choices which have been designed for specific processes and might be more efficient there:


6.8.3 RS_INTEGRATOR

In an NLO process, specifies the integrator for real-subtracted processes. It takes the same values as INTEGRATOR, but defaults to 7.


6.8.4 VEGAS

Specifies whether or not to employ Vegas for adaptive integration. The two possible values are ‘On’ and ‘Off’, the default being ‘On’.


6.8.5 FINISH_OPTIMIZATION

Specifies whether the full Vegas optimization is to be carried out. The two possible options are ‘On’ and ‘Off’, the default being ‘On’.


6.8.6 PSI_NMAX

The maximum number of points before cuts to be generated during integration. This parameter acts on a process-by-process basis.


6.9 Hard decays

The handler for decays of particles produced in the hard scattering process (e.g. W, Z, top, Higgs) can be enabled using the ‘HARD_DECAYS=1’ switch. Which particles should be treated as unstable is determined by the ‘STABLE[<id>]’ switch described in Model Parameters.

This decay module can also be used on top of NLO matrix elements, but it does not include any NLO corrections in the decay matrix elements themselves.

Note that the decay handler is an afterburner at the event generation level. It does not affect the calculation and integration of the hard scattering matrix elements. The cross section is thus unaffected during integration, and the branching ratios (if any decay channels have been disabled) are only taken into account for the event weights and cross section output at the end of event generation. Furthermore any cuts or scale definitions are not affected by decays and operate only on the inclusively produced particles before decays.
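
For example, the following settings enable the decay handler and declare top quarks and W bosons as unstable, such that they are decayed by this module:

HARD_DECAYS=1
STABLE[6]=0
STABLE[24]=0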


6.9.1 HDH_NO_DECAY

This option allows one to disable an explicit list of decay channels. For example, to disable the hadronic decay channels of the W boson one would use:

HDH_NO_DECAY={24,2,-1}|{24,4,-3}|{24,16,-15}

Note that the ordering of the decay products in each channel is important and has to be identical to the ordering in the decay table printed to screen. Multiple decay channels (also for different decaying particles) can be specified using the ‘|’ (pipe) symbol as separator. Spaces are not allowed anywhere in the list.


6.9.2 HDH_ONLY_DECAY

This option allows one to restrict the decay channels of a particle to the explicitly given list. For example, to allow only the bottom-decay mode of the Higgs one would use

HDH_ONLY_DECAY={25,5,-5}

Note that the ordering of the decay products in each channel is important and has to be identical to the ordering in the decay table printed to screen. Multiple decay channels (also for different decaying particles) can be specified using the ‘|’ (pipe) symbol as separator. Spaces are not allowed anywhere in the list.


6.9.3 HARD_SPIN_CORRELATIONS

By default, all decays are done in a factorised manner, i.e. there are no correlations between the production and decay matrix elements of an unstable particle. It is possible to enable spin correlations by specifying ‘HARD_SPIN_CORRELATIONS=1’, which might come with a small performance penalty in more complicated processes.


6.9.4 STORE_DECAY_RESULTS

The decay table and partial widths are calculated on-the-fly during the initialization phase of Sherpa from the given model and its particles and interaction vertices. To store these results in the Results/Decays directory, one has to specify ‘STORE_DECAY_RESULTS=1’.


6.9.5 HDH_SET_WIDTHS

The decay handler computes LO partial and total decay widths and generates decays with the corresponding branching fractions, independently of the particle widths specified by ‘WIDTH[<id>]’. The latter are relevant only for the core process and should be set to zero for all unstable particles appearing in the core-process final state. This guarantees on-shellness and gauge invariance of the core process, and subsequent decays can be handled by the afterburner.

In contrast, ‘WIDTH[<id>]’ should be set to the physical width when unstable particles appear (only) as intermediate states in the core process, i.e. when production and decay are handled as a full process or using Decay(OS). In this case, the option ‘HDH_SET_WIDTHS=1’ permits one to overwrite the ‘WIDTH[<id>]’ values of unstable particles with the LO widths computed by the decay handler.


6.9.6 HARD_MASS_SMEARING

If ‘HARD_MASS_SMEARING=1’ is specified, the kinematic mass of the unstable propagator is distributed according to a Breit-Wigner shape a posteriori. All matrix elements are still calculated in the narrow-width approximation with on-shell particles. Only the kinematics are affected.


6.9.7 RESOLVE_DECAYS

There are different options for deciding when a 1->2 process should be replaced by the respective 1->3 processes built from its decaying daughter particles.

RESOLVE_DECAYS=Threshold

(default) Only when the sum of decay product masses exceeds the decayer mass.

RESOLVE_DECAYS=ByWidth

As soon as the sum of 1->3 partial widths exceeds the 1->2 partial width.

RESOLVE_DECAYS=None

No 1->3 decays are taken into account.


6.9.8 DECAY_TAU_HARD

By default, the tau lepton is decayed by the hadron decay module, Hadron decays, which includes not only the leptonic decay channels but also the hadronic modes. If ‘DECAY_TAU_HARD=1’ is specified, the tau lepton will be decayed in the hard decay handler, which only takes leptonic and partonic decay modes into account. Note that in this case the tau also needs to be set massive with ‘MASSIVE[15]=1’.


6.10 Shower Parameters

The shower setup is covered by the ‘(shower)’ section of the steering file or the shower data file ‘Shower.dat’, respectively.

The following parameters are used to steer the shower setup.


6.10.1 SHOWER_GENERATOR

The only shower option currently available in Sherpa is ‘CSS’, and this is the default for this tag. See the module summaries in Basic structure for details about this shower.

Different shower modules are in principle supported and more choices will be provided by Sherpa in the near future. To list all available shower modules, the tag SHOW_SHOWER_GENERATORS=1 can be specified on the command line.

SHOWER_GENERATOR=None switches parton showering off completely. However, even in the case of strict fixed-order calculations, this might not be the desired behaviour as, for example, neither the METS scale setter, cf. SCALES, nor Sudakov rejection weights can then be employed. To circumvent this when using the CS Shower, see CS Shower options.


6.10.2 JET_CRITERION

The only natively supported option in Sherpa is ‘CSS’, and this is also the default. The corresponding jet criterion is described in [Hoe09] . A custom jet criterion, tailored to a specific experimental analysis, can be supplied using Sherpa’s plugin mechanism.


6.10.3 MASSIVE_PS

This option instructs Sherpa to treat certain partons as massive in the shower, even though they have been considered massless by the matrix element. The argument is a list of parton flavours, for example ‘MASSIVE_PS 4 5’, if both c- and b-quarks are to be treated as massive.


6.10.4 CS Shower options

Sherpa’s default shower module is based on [Sch07a] . A new ordering parameter for initial state splitters was introduced in [Hoe09] and a novel recoil strategy for initial state splittings was proposed in [Hoe09a] . While the ordering variable is fixed, the recoil strategy for dipoles with initial-state emitter and final-state spectator can be changed for systematics studies. Setting ‘CSS_KIN_SCHEME=0’ (default) corresponds to using the recoil scheme proposed in [Hoe09a] , while ‘CSS_KIN_SCHEME=1’ enables the original recoil strategy. The lower cutoff of the shower evolution can be set via ‘CSS_PT2MIN’. Note that this value is specified in GeV^2.

By default, only QCD splitting functions are enabled in the shower. If you also want to allow for photon splittings, you can enable them by using ‘CSS_EW_MODE=1’. Note that if you have leptons in your matrix-element final state, they are by default treated by a soft photon resummation as explained in QED Corrections. To avoid double counting, this has to be disabled as explained in that section.

The CS Shower can be forced not to emit any partons by setting ‘CSS_NOEM=1’. Sudakov rejection weights for merged samples are calculated nonetheless. Setting ‘CSS_MAXEM=<N>’, on the other hand, forces the CS Shower to truncate its evolution at the Nth emission. This setting, however, does not necessarily compute all Sudakov weights correctly. Both settings still allow the CS Shower to be used in the METS scale setter, cf. SCALES.


6.11 MPI Parameters

The multiple parton interaction (MPI) setup is covered by the ‘(mi)’ section of the steering file or the MPI data file ‘MI.dat’, respectively. The basic MPI model is described in [Sjo87] while Sherpa’s implementation details are discussed in [Ale05] .

The following parameters are used to steer the MPI setup.


6.11.1 MI_HANDLER

Specifies the MPI handler. The two possible values at the moment are ‘None’ and ‘Amisic’.


6.11.2 SCALE_MIN

Specifies the transverse momentum integration cutoff in GeV.


6.11.3 PROFILE_FUNCTION

Specifies the hadron profile function. The possible values are ‘Exponential’, ‘Gaussian’ and ‘Double_Gaussian’. For the double Gaussian profile, the relative core size and relative matter fraction can be set using PROFILE_PARAMETERS.


6.11.4 PROFILE_PARAMETERS

The parameters of the hadron profile functions, see PROFILE_FUNCTION. For the double Gaussian profile there are two parameters, corresponding to the relative core size and the relative matter fraction.


6.11.5 REFERENCE_SCALE

Specifies the centre-of-mass energy at which the transverse momentum integration cutoff is used as is, see SCALE_MIN. This parameter should not be changed by the user. The default is ‘1800’, corresponding to Tevatron Run I energies.


6.11.6 RESCALE_EXPONENT

Specifies the rescaling exponent for fixing the transverse momentum integration cutoff at centre-of-mass energies different from the reference scale, see SCALE_MIN, REFERENCE_SCALE.


6.11.7 SIGMA_ND_FACTOR

Specifies the factor to scale the non-diffractive cross section calculated in the MPI initialisation.
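For orientation, an ‘(mi)’ section using the parameters described above might look as follows; the numerical values are purely illustrative and not recommended defaults.

  (mi){
    MI_HANDLER         Amisic;
    SCALE_MIN          2.5;              % transverse momentum cutoff in GeV
    PROFILE_FUNCTION   Double_Gaussian;
    PROFILE_PARAMETERS 0.4 0.5;          % relative core size, relative matter fraction
    SIGMA_ND_FACTOR    1.0;              % rescaling of the non-diffractive cross section
  }(mi);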


6.12 Hadronization

The hadronization setup is covered by the ‘(fragmentation)’ section of the steering file or the fragmentation data file ‘Fragmentation.dat’, respectively.

It covers the fragmentation of partons into primordial hadrons as well as the decays of unstable hadrons into stable final states.


6.12.1 Fragmentation

The FRAGMENTATION parameter sets the fragmentation module to be employed during event generation. The default is ‘Ahadic’; alternatively, the hadronization can be disabled with the value ‘Off’.

TODO: Description of parameters and their tuned values.


6.12.2 Hadron decays

The treatment of hadron and tau decays is specified by DECAYMODEL. Its allowed values are the default choice ‘Hadrons’, which makes the HADRONS++ module responsible for performing the decays, and ‘Off’, which disables hadron decays.
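A minimal sketch of the corresponding ‘(fragmentation)’ settings, using only the two parameters introduced above, could read:

  (fragmentation){
    FRAGMENTATION Ahadic;   % default; use Off to disable hadronization
    DECAYMODEL    Hadrons;  % default; use Off to disable hadron decays
  }(fragmentation);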

HADRONS++ is the module within the Sherpa framework which is responsible for treating hadron and tau decays. It contains decay tables with branching ratios for approximately 2500 decay channels, of which many have their kinematics modelled according to a matrix element with corresponding form factors. Especially decays of the tau lepton and heavy mesons have form factor models similar to dedicated codes like Tauola [Jad93] and EvtGen [Lan01] .

Some general switches which relate to hadron decays can be adjusted in the (fragmentation) section:

Many aspects of the above mentioned “Decaydata” can be adjusted. There exist three levels of data files, which are explained in the following sections. As with all other setup files, the user can either employ the default “Decaydata” in <prefix>/share/SHERPA-MC/Decaydata, or overwrite it (also selectively) by creating the appropriate files in the directory specified by DECAYPATH.


6.12.2.1 HadronDecays.dat

HadronDecays.dat consists of a table of particles that are to be decayed by HADRONS++. Note: even if decay tables exist for other particles, only those particles which are set unstable, either by default or in the model/fragmentation settings, will be decayed. The file has the following structure, where each line adds one decaying particle:

  <kf-code>  ->  <subdirectory>/<filename>.dat

Here <kf-code> is the decaying particle, <subdirectory>/ is the path to its decay table and <filename>.dat is the decay table file. The default names are <particle>/Decays.dat.

It is possible to specify different decay tables for the particle (positive kf-code) and anti-particle (negative kf-code). If only one is specified, it will be used for both particle and anti-particle.

If more than one decay table is specified for the same kf-code, these tables will be used in the specified sequence during one event. The first matching particle appearing in the event is decayed according to the first table, and so on until the last table is reached, which will be used for the remaining particles of this kf-code.

Additionally, this file may contain the keyword CREATE_BOOKLET on a separate line, which will cause HADRONS++ to write a LaTeX document containing all decay tables.
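As an illustration, a hypothetical HadronDecays.dat line using the default file names could look like the following; the kf-code 511 refers to the B0 meson and the subdirectory name is a placeholder:

  511  ->  B0/Decays.dat

Since no entry with kf-code -511 is given, the same decay table would be used for the anti-B0 as well.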


6.12.2.2 Decay table files

The decay table contains, for each decay channel, the outgoing particles, the branching ratio and, optionally, the name of the file that stores the parameters for this specific channel. If the latter is not specified, HADRONS++ will produce it and modify the decay table file accordingly.

In addition to the branching ratio, one may specify its error and its source. Every hadron is supposed to have its own decay table in its own subdirectory. The structure of a decay table is

  {kf1,kf2,kf3,...}   BR(delta BR)[Origin]   <filename>.dat

where the first column lists the outgoing particles, the second the branching ratio together with its error (delta BR) and its origin, and the third the decay channel file.

It should be stressed here that the branching ratio which is explicitly given for any individual channel in this file is always used regardless of any matrix-element value.
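As an illustration, a hypothetical two-body entry in such a decay table might read as follows; the kf-codes, the numbers and the file name are placeholders only:

  {211,-211}  0.0425(0.0010)[PDG]  PiPi.dat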


6.12.2.3 Decay channel files

A decay channel file contains various information about that specific decay channel. There are different sections, some of which are optional:


6.12.2.4 HadronConstants.dat

HadronConstants.dat may contain some globally needed parameters (e.g. for neutral meson mixing, see [Kra10] ) and also fall-back values for all matrix-element parameters which one specifies in decay channel files. Here, the Interference_X = 1 switch would enable rate asymmetries due to CP violation in the interference between mixing and decay (cf. Decay channel files), and setting Mixing_X = 1 enables explicit mixing in the event record according to the time evolution of the flavour states. By default, all mixing effects are turned off.


6.12.2.5 Further remarks

Spin correlations: a spin correlation algorithm is implemented. It can be switched on through the keyword ‘SOFT_SPIN_CORRELATIONS=1’ in the (run) section.

If spin correlations for tau leptons produced in the hard scattering process are supposed to be taken into account, one needs to specify ‘HARD_SPIN_CORRELATIONS=1’ as well. If using AMEGIC++ as ME generator, note that the Process libraries have to be re-created if this is changed.

Adding new channels: if new channels are added to HADRONS++ (choosing isotropic decay kinematics), a new decay table must be defined and the corresponding hadron must be added to HadronDecays.dat. The decay table merely needs to consist of the outgoing particles and branching ratios, i.e. the last column (the one with the decay channel file name) can safely be dropped. When Sherpa is run, it will automatically produce the decay channel files and write their names into the decay table.

Some details on tau decays: $\tau$ decays are treated within the HADRONS++ framework, even though the $\tau$ is not a hadron. As for many hadron decays, the hadronic tau decays have form factor models implemented, for details the reader is referred to [Kra10] .


6.13 QED Corrections

Higher order QED corrections are effected both on the hard interaction and, upon their formation, on each hadron’s subsequent decay. The Photons [Sch08] module is called in both cases for this task. It employs a YFS-type resummation [Yen61] of all infrared singular terms to all orders and is equipped with complete first order corrections for the most relevant cases (all other ones receive approximate real emission corrections built up by Catani-Seymour splitting kernels).


6.13.1 General Switches

The relevant switches to steer the higher order QED corrections reside in the ‘(fragmentation)’ section of the steering file or the fragmentation data file ‘Fragmentation.dat’, respectively.


6.13.1.1 YFS_MODE

The keyword YFS_MODE = [0,1,2] determines the mode of operation of Photons. YFS_MODE = 0 switches Photons off. Consequently, neither the hard interaction nor any hadron decay will be corrected for soft or hard photon emission. YFS_MODE = 1 sets the mode to "soft only", meaning soft emissions will be treated correctly to all orders but no hard emission corrections will be included. With YFS_MODE = 2 these hard emission corrections will also be included up to first order in alpha_QED. This is the default setting.


6.13.1.2 YFS_USE_ME

The switch YFS_USE_ME = [0,1] tells Photons how to correct hard emissions to first order in alpha_QED. If YFS_USE_ME = 0, then Photons will use collinearly approximated real emission matrix elements. Virtual emission matrix elements of order alpha_QED are ignored. If, however, YFS_USE_ME=1, then exact real and/or virtual emission matrix elements are used wherever possible. These are presently available for V->FF, V->SS, S->FF, S->SS, S->Slnu, S->Vlnu type decays, Z->FF decays and leptonic tau and W decays. For all other decay types general collinearly approximated matrix elements are used. In both approaches all hadrons are treated as point-like objects. The default setting is YFS_USE_ME = 1. This switch is only effective if YFS_MODE = 2.


6.13.1.3 YFS_IR_CUTOFF

YFS_IR_CUTOFF sets the infrared cut-off dividing the real emission into two regions, one containing the infrared divergence, the other the "hard" emissions. This cut-off is currently applied in the rest frame of the multipole of the respective decay. It also serves as a minimum photon energy in this frame for explicit photon generation for the event record. In contrast, all photons with energy less than this cut-off are assumed to have a negligible impact on the final-state momentum distributions. The default is YFS_IR_CUTOFF = 1E-3 (GeV). Of course, this switch is only effective if Photons is switched on, i.e. YFS_MODE = [1,2].
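Collecting the defaults quoted above, the corresponding part of a ‘(fragmentation)’ section would read, for example:

  (fragmentation){
    YFS_MODE = 2;          % soft resummation plus exact O(alpha) corrections (default)
    YFS_USE_ME = 1;        % exact real/virtual MEs where available (default)
    YFS_IR_CUTOFF = 1E-3;  % infrared cutoff in GeV, defined in the multipole rest frame
  }(fragmentation);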


6.13.2 QED Corrections to the Hard Interaction

The switch to steer QED corrections to the hard scatter resides in the ‘(me)’ section of the steering file or the matrix element data file ‘ME.dat’, respectively.


6.13.2.1 ME_QED

ME_QED = On/Off turns the higher order QED corrections to the matrix element on or off, respectively. The default is ‘On’. Switching QED corrections to the matrix element off has no effect on QED Corrections to Hadron Decays. The QED corrections to the matrix element are only applied to final-state particles that are not strongly interacting. If a resonant production subprocess for an unambiguous subset of all such particles is specified via the process declaration (cf. Processes), this can be taken into account and dedicated higher order matrix elements can be used (if YFS_MODE = 2 and YFS_USE_ME = 1).


6.13.3 QED Corrections to Hadron Decays

If the Photons module is switched on, all hadron decays are corrected for higher order QED effects.


6.14 Minimum Bias Simulation

(SHRiMPS is yet to be documented)


7. ME-PS merging

For a large fraction of LHC final states, the application of reconstruction algorithms leads to the identification of several hard jets. A major task is then to distinguish whether such events are signals for new physics or just manifestations of SM physics. Related calculations therefore need to describe as accurately as possible both the full matrix element for the underlying hard process as well as the subsequent evolution and conversion of the hard partons into jets of hadrons. Several scales determine the development of the event, which makes it difficult to unambiguously disentangle those components that belong to the hard process from those of the hard-parton evolution. Given an n-jet event of well separated partons, its jet structure is retained when a further collinear or soft parton is emitted. An additional hard, large-angle emission, however, gives rise to an extra jet, changing the n-parton final state into an n+1-parton one. The merging scheme has to define, on an event-by-event basis, which possibility is followed. Its primary goals are therefore to avoid double counting, by preventing events from appearing twice, i.e. once for each possibility, as well as dead regions, by generating each configuration exactly once and using the appropriate path.

Various such merging schemes have been proposed. The currently most advanced treatment at tree level is detailed in [Hoe09] . It relies on a strict separation of the phase space for additional QCD radiation into a matrix-element and a parton-shower domain. Truncated showers are then needed to account for potential radiation in the parton-shower domain if radiation in the matrix-element domain has already occurred. This technique has been applied to the simulation of final states containing hard photons [Hoe09a] and has been extended to multi-scale processes where the leading order is dominated by very low scales [Car09] . A merging approach similar to [Hoe09] was presented in [Ham09a] for the special case of angular-ordered parton showers. Several older approaches exist. The CKKW scheme, a procedure similar to the truncated-shower merging, was introduced in [Cat01a] . Its extension to hadronic processes has been discussed in [Kra02] , and the approach has been validated for several cases [Sch05] , [Kra05a] , [Kra04] , [Kra05] , [Gle05] . A reformulation of CKKW into a merging procedure in conjunction with a dipole shower (CKKW-L) has been presented in [Lon01] . The MLM scheme has been developed using a geometric analysis of the unconstrained radiation pattern in terms of cone jets to generate the inclusive samples [Man01] ,[Man06] . In a number of works, all these different algorithms have been implemented in different variations and on different levels of sophistication, in conjunction with various matrix-element generators or directly in full-fledged event generators. Their respective results have been compared e.g. in [Hoc06] , [Alw07] . Common to all schemes is that sequences of tree-level multileg matrix elements with increasing final-state multiplicity are merged with parton showers to yield a fully inclusive sample with no double counting. Their connection with truncated-shower merging is outlined in [Hoe09] .


7.1 The algorithm implemented in Sherpa

In Sherpa the merging of matrix elements and parton showers is accomplished as follows, cf. [Hoe09] :

  1. All cross sections sigma_k for processes with k=0,1,...,N extra partons are calculated with the constraint that the matrix-element final states pass the jet criterion. The jets are identified by the measure given below, and the minimal distance is set by the merging scale Q_cut. (A numerical sketch of this measure is shown after this list.) The measure used for jet identification reads

       Q_ij^2 = 2 p_i.p_j  min_k { 2 / (C_ijk + C_jik) },

     where the minimum is taken over the colour-connected partons k (k different from i and j), and where, for final-state partons i and j,

       C_ijk = p_i.p_k / ((p_i+p_k).p_j) - m_i^2 / (2 p_i.p_j)   if j is a gluon,
       C_ijk = 1                                                 otherwise.

  2. Processes of fixed parton multiplicity are chosen with probability sigma_k/(sum of all sigma_k). The event’s hard process is picked from the list of partonic processes having the desired multiplicity and according to their particular cross-section contributions. All particle momenta are distributed respecting the correlations encoded in the matrix elements. Merged samples therefore fully include lepton-jet and jet–jet correlations up to N extra jets.
  3. The parton configuration of the matrix element has to be analysed to eventually accomplish the reweighting. Matrix elements are interpreted in the large N_c limit. The partons are clustered backwards according to the shower measure and inverted shower kinematics. The clustering is guided by physically allowed parton combinations, restricting the shower histories to those which correspond to a Feynman diagram. It is stopped after a 2->2 configuration (a core process) has been identified.
  4. The reweighting proceeds according to the reconstructed shower history. The event is accepted or rejected according to a kinematics-dependent weight, which corresponds to evaluating strong couplings in the parton shower scheme.
  5. The parton-shower evolution is started with suitably defined scales for intermediate and final-state particles.
  6. Intermediate partons undergo truncated shower evolution. This allows parton-shower emissions between the scales of one matrix element branching and the next. This leads to a situation where, due to additional partons originating from these branchings, the kinematics of the next matrix-element branching needs to be redefined. If for any reason (e.g. energy-momentum conservation) the matrix element branching cannot be reconstructed after a truncated shower branching, this shower branching must be vetoed.
  7. In all circumstances parton-shower radiation is subject to the condition that no extra jet is produced. If any emission turns out to be harder than the separation cut Q_cut, the event is vetoed. This effectively implements a Sudakov rejection and reduces the individual inclusive cross section to exclusive ones. The exception to this veto – called highest-multiplicity treatment – is for matrix-element configurations with the maximal number N of extra partons. These cases require the parton shower to cover the phase space for more jets than those produced by the matrix elements. To obtain an inclusive N-jet prediction, the veto therefore is on parton emissions at scales harder than the softest parton-shower scale, which can produce allowed emissions harder than the separation scale Q_cut. Of course, correlations including the N+1th jet are only approximately taken into account.
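To make the jet measure of step 1 concrete, the following stand-alone C++ sketch evaluates Q_ij^2 for massless final-state partons according to the formulas above. It is purely illustrative and not part of the Sherpa code; the four-momentum type, the flags marking gluons and the toy momenta are assumptions of this example.

#include <cmath>
#include <cstdio>
#include <vector>

// simple four-momentum (E, px, py, pz); not a Sherpa class
struct P4 { double E, px, py, pz; };

double Dot(const P4 &a, const P4 &b)
{ return a.E*b.E - a.px*b.px - a.py*b.py - a.pz*b.pz; }

// C_ijk for final-state partons, cf. the formula in step 1 (massless case, m_i = 0)
double C(const P4 &pi, const P4 &pj, const P4 &pk, bool j_is_gluon)
{
  if (!j_is_gluon) return 1.0;
  return Dot(pi,pk) / (Dot(pi,pj) + Dot(pk,pj));
}

// Q_ij^2 = 2 p_i.p_j min_k { 2 / (C_ijk + C_jik) },
// minimised over the colour-connected partons k (k != i,j)
double Q2(const P4 &pi, bool i_is_gluon, const P4 &pj, bool j_is_gluon,
          const std::vector<P4> &spectators)
{
  double min_term = 1e99;
  for (const P4 &pk : spectators) {
    double term = 2.0 / (C(pi,pj,pk,j_is_gluon) + C(pj,pi,pk,i_is_gluon));
    if (term < min_term) min_term = term;
  }
  return 2.0 * Dot(pi,pj) * min_term;
}

int main()
{
  // two final-state gluons and one colour-connected spectator (toy momenta in GeV)
  P4 pi = {50.0, 30.0, 20.0, 34.6}, pj = {40.0, -25.0, 10.0, -29.6};
  std::vector<P4> ks = { {60.0, -5.0, -30.0, 51.7} };
  std::printf("Q_ij = %.1f GeV\n", std::sqrt(Q2(pi, true, pj, true, ks)));
  return 0;
}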

7.2 Generation of merged samples

The generation of inclusive event samples, i.e. the combination of matrix elements for different parton multiplicities with parton showers and hadronization, is completely automatized within Sherpa. To obtain consistent results, certain parameters related to the matrix-element calculation and the parton showers have to be set accordingly. In the following the basic parameter settings for generating “merged” samples are summarised. Potential pitfalls are pointed out.

  1. Process setup

    The starting point is the definition of a basic core (lowest-order) process with respect to which the impact of additional QCD radiation shall be studied. As an illustrative example, consider Drell–Yan lepton-pair production in proton–proton collisions. The lowest-order process reads pp -> l\bar l, mediated through Z/photon exchange. Additional QCD radiation will then manifest itself through additional QCD partons in the final state, i.e. pp -> l\bar l + n jets with n=1,...,N. To initialise the calculation of all the different matrix elements (for pp -> l\bar l + 0,1,...,N QCD partons) in a single generator run, besides selecting the basic core process, the maximal number N of additional final-state QCD partons has to be specified in the (processes) section of the steering file. For the above example, assuming N=3, this reads:

        Process 93 93 -> 90 90 93{3}
        Order_EW 2
    

    N is given in the curly brackets belonging to the 93, the code for QCD partons. Note that it is mandatory to fix the order of electroweak couplings to the corresponding order of the basic core process, here pp -> l\bar l or 93 93 -> 90 90, as only QCD corrections to this process can be considered; further electroweak corrections are not treated by Sherpa’s ME-PS merging implementation.

  2. Setting the merging scale

    The most important parameter to be specified when generating merged samples with Sherpa is the actual value of the jet resolution that separates the subsamples of different parton multiplicities, the merging scale.

    The jet criterion is explained in The algorithm implemented in Sherpa. The separation cut, Q_cut, must be specified. It is set using the CKKW tag, usually in the form (Q_cut/E_CMS)^2. For example, a valid setting reads

        CKKW sqr(20/E_CMS)
    

    and must be included in the process specification, before the End process line. As mentioned before, all extra QCD parton radiation is regularised by satisfying the jet criterion. However, divergences of the basic core process, such as vanishing invariant masses of lepton pairs, need to be regularised by imposing additional cuts, see Selectors.

  3. Parton showering

    It should always be ensured that the parton showers are switched on.

Further remarks

Although the merging of different multiplicity matrix-element samples with parton showers attached is fully automatized in Sherpa, some care has to be taken to ensure physically meaningful results. Some of the most prominent mistakes are listed here:

Finally, a few more useful comments related to Sherpa’s merging are stated below:


8. Tips and Tricks


8.1 Bash completion

Sherpa will install a file named ‘$prefix/share/SHERPA-MC/sherpa-completion’ which contains tab completion functionality for the bash shell. You simply have to source it in your active shell session by running

  .  $prefix/share/SHERPA-MC/sherpa-completion

and you will be able to tab-complete any parameters on a Sherpa command line.

To permanently enable this feature in your bash shell, you’ll have to add the source command above to your ~/.bashrc.


8.2 Rivet analyses

Sherpa is equipped with an interface to the analysis tool Rivet. To enable it, Rivet and HepMC have to be installed (e.g. using the Rivet bootstrap script) and your Sherpa compilation has to be configured with the following options:

  ./configure --enable-hepmc2=/path/to/hepmc2 --enable-rivet=/path/to/rivet

(Note: Both paths are equal if you used the Rivet bootstrap script.)

To use the interface, specify the switch

  Sherpa ANALYSIS=Rivet

and create an analysis section in Run.dat that reads as follows:

  (analysis){
    BEGIN_RIVET {
      -a D0_2008_S7662670 CDF_2007_S7057202 D0_2004_S5992206 CDF_2008_S7828950
    } END_RIVET
  }(analysis)

The line starting with -a specifies which Rivet analyses to run and the histogram output file can be changed with the normal ANALYSIS_OUTPUT switch.

You can also use rivet-mkhtml (distributed with Rivet) to create plot webpages from Rivet’s output files:

  source /path/to/rivetenv.sh   # see below
  rivet-mkhtml -o output/ file1.aida [file2.aida, ...]
  firefox output/index.html &

If your Rivet installation is not in a standard location, the bootstrap script should have created a rivetenv.sh which you have to source before running the rivet-mkhtml script.


8.3 HZTool analyses

Sherpa is equipped with an interface to the analysis tool HZTool. To enable it, HZTool and CERNLIB have to be installed and your Sherpa compilation has to be configured with the following options:

  ./configure --enable-hztool=/path/to/hztool --enable-cernlib=/path/to/cernlib --enable-hepevtsize=4000

To use the interface, specify the switch

  Sherpa ANALYSIS=HZTool

and create an analysis section in Run.dat that reads as follows:

  (analysis){
    BEGIN_HZTOOL {
      HISTO_NAME output.hbook;
      HZ_ENABLE hz00145 hz01073 hz02079 hz03160;
    } END_HZTOOL;
  }(analysis)

The line starting with HZ_ENABLE specifies which HZTool analyses to run. The histogram output directory can be changed using the ANALYSIS_OUTPUT switch, while HISTO_NAME specifies the hbook output file.


8.4 MCFM interface

Sherpa is equipped with an interface to the NLO library of MCFM for dedicated processes. To enable it, MCFM has to be installed and compiled into a single library, libMCFM.a, and your Sherpa compilation has to be configured with the following options:

  ./configure --enable-mcfm=/path/to/mcfm

To use the interface, specify

  Loop_Generator MCFM;

in the process section of the run card and add it to the list of generators in ME_SIGNAL_GENERATOR. Of course, MCFM’s process.DAT file has to be copied to the current run directory.


8.5 Debugging a crashing/stalled event


8.5.1 Crashing events

If an event crashes, Sherpa tries to obtain all the information needed to reproduce that event and writes it out into a directory named

  Status__<date>_<time>

If you are a Sherpa user and want to report this crash to the Sherpa team, please attach a tarball of this directory to your email. This allows us to reproduce your crashed event and debug it.

To debug it yourself, you can follow these steps (Only do this if you are a Sherpa developer, or want to debug a problem in an addon library created by yourself):


8.5.2 Stalled events

If event generation seems to stall, you first have to find out the number of the current event. For that you would terminate the stalled Sherpa process (using Ctrl-c) and check in its final output for the number of generated events. Now you can request Sherpa to write out the random seed for the event before the stalled one:

  Sherpa [...] EVENTS=[#events - 1] SAVE_STATUS=Status/

(Replace [#events - 1] using the number you figured out earlier)

The created status directory can either be sent to the Sherpa developers, or be used in the same steps as above to reproduce that event and debug it.


8.6 Versioned installation

If you want to install different Sherpa versions into the same prefix (e.g. /usr/local), you have to enable versioning of the installed directories by using the configure option ‘--enable-versioning’. Optionally you can even pass an argument to this parameter of what you want the version tag to look like.


8.7 NLO calculations


8.7.1 Choosing DIPOLE_ALPHA

A variation of the parameter DIPOLE_ALPHA (see Dipole subtraction) changes the contribution from the real (subtracted) piece (RS) and the integrated subtraction terms (I), keeping their sum constant. Varying this parameter provides a useful check of the consistency of the subtraction procedure, and it allows one to optimise the integration performance of the real correction. This piece has the most complicated momentum phase space and is often the most time-consuming part of the NLO calculation. The optimal choice depends on the specific setup and can best be determined by trial.

Hints to find a good value:


8.7.2 Integrating complicated Loop-ME

For complicated processes the evaluation of one-loop matrix elements can be very time consuming. The generation time of a fully optimised integration grid can then become prohibitively long. Rather than using a poorly optimised grid, in this case it is more advisable to use a grid optimised with either the Born matrix elements alone, or with the Born matrix elements and the finite part of the integrated subtraction terms only, working under the assumption that the distributions in phase space are rather similar.

This can be done by one of the following methods:

  1. Employ a dummy virtual (requires no computing time, returns 0. as its finite result) to optimise the grid; a sketch of the corresponding settings is shown after this list. This only works if V is not the only NLO_QCD_Part specified.
    1. During integration set the Loop_Generator to Internal and add USE_DUMMY_VIRTUAL=1 to your (run){...}(run) section. The grid will then be optimised to the phase space distribution of the sum of the Born matrix element and the finite part of the integrated subtraction term. Note: The cross section displayed during integration will also correspond to the sum of the Born matrix element and the finite part of the integrated subtraction term.
    2. During event generation reset Loop_Generator to your generator supplying the virtual correction. The events generated then carry the correct event weight.
  2. Suppress the evaluation of the virtual and/or the integrated subtraction terms. This only works if Amegic is used as the matrix element generator for the BVI pieces and V is not the only NLO_QCD_Part specified.
    1. During integration add NLO_BVI_MODE=<num> to your (run){...}(run) section. <num> takes the following values: 1-B, 2-I, and 4-V. The values are additive, e.g. 3 corresponds to B+I. Note: The cross section displayed during integration will match the parts selected by NLO_BVI_MODE.
    2. During event generation remove the switch again and the events will carry the correct weight.

Note: this will not work for the RS piece!
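As a sketch of method 1, the integration step could use settings along the following lines; for the subsequent event generation USE_DUMMY_VIRTUAL is removed again and Loop_Generator is reset to the generator supplying the virtual correction. The surrounding settings are elided here.

  % integration run only
  (run){
    ...
    USE_DUMMY_VIRTUAL=1;
  }(run);
  (processes){
    ...
    Loop_Generator Internal;
    End process;
  }(processes);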


8.7.3 Structure of HepMC Output

The generated events can be written out in the HepMC format to be passed through an independent analysis. For this purpose a shortened event structure is used, containing only a single vertex. Correlated real and subtraction events are labeled with the same event number, such that their possible cancellations can be taken into account properly.

To use this output option, Sherpa has to be compiled with HepMC support, cf. Installation. The option HEPMC2_SHORT_OUTPUT=<filename> has to be used, cf. Event output formats.
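For instance, it could be set on the command line as sketched below; the file name is a placeholder.

  Sherpa [...] HEPMC2_SHORT_OUTPUT=myevents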

Using this HepMC output format the internal Rivet interface (Rivet analyses) can be used to pass the events through Rivet. It has to be stressed, however, that Rivet currently cannot take the correlations between real and subtraction events into account properly. The Monte-Carlo error is thus overestimated. Nonetheless, the mean is unaffected.

As above, the Rivet interface has to be instructed to use the shortened HepMC event structure:

  (analysis){
    BEGIN_RIVET {
      USE_HEPMC_SHORT 1
      -a ...
    } END_RIVET
  }(analysis)

8.7.4 Structure of ROOT NTuple Output

The generated events can be stored in a ROOT NTuple file, see Event output formats. The internal ROOT Tree has the following Branches:

id

Event ID to identify correlated real sub-events.

nparticle

Number of outgoing partons.

E/px/py/pz

Momentum components of the partons.

kf

Parton PDG code.

weight

Event weight, if sub-event is treated independently.

weight2

Event weight, if correlated sub-events are treated as single event.

me_wgt

ME weight (w/o PDF), corresponds to ’weight’.

me_wgt2

ME weight (w/o PDF), corresponds to ’weight2’.

id1

PDG code of incoming parton 1.

id2

PDG code of incoming parton 2.

fac_scale

Factorisation scale.

ren_scale

Renormalisation scale.

x1

Bjorken-x of incoming parton 1.

x2

Bjorken-x of incoming parton 2.

x1p

x’ for I-piece of incoming parton 1.

x2p

x’ for I-piece of incoming parton 2.

nuwgt

Number of additional ME weights for loops and integrated subtraction terms.

usr_wgt[nuwgt]

Additional ME weights for loops and integrated subtraction terms.


8.7.4.1 Computing (differential) cross sections of real correction events with statistical errors

Real correction events and their counter-events from subtraction terms are highly correlated and exhibit large cancellations. Although a treatment of sub-events as independent events leads to the correct cross section, the statistical error would be greatly overestimated. In order to get a realistic statistical error, sub-events belonging to the same event must be combined before being added to the total cross section or to a histogram bin of a differential cross section. Since in general each sub-event comes with its own set of four-momenta, the following treatment becomes necessary:

  1. An event here refers to a full real correction event that may contain several sub-events. All entries with the same id belong to the same event. Step 2 has to be repeated for each event.
  2. Each sub-event must be checked separately whether it passes possible phase space cuts. Then for each observable add up weight2 of all sub-events that go into the same histogram bin. These sums x_id are the quantities to enter the actual histogram.
  3. To compute statistical errors each bin must store the sum over all x_id and the sum over all x_id^2. The cross section in the bin is given by <x> = 1/N \sum x_id, where N is the number of events (not sub-events). The 1-\sigma statistical error for the bin is \sqrt{ (<x^2>-<x>^2)/(N-1) }

Note: The main difference between weight and weight2 is that they refer to a different counting of events. While weight corresponds to each event entry (sub-event) counted separately, weight2 counts events as defined in step 1 of the above procedure. For NLO pieces other than the real correction, weight and weight2 are identical. A minimal sketch of the combination procedure is given below.
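The following self-contained C++ sketch illustrates steps 1-3 for a single observable bin. It operates on a toy list of sub-events rather than on the actual ROOT tree, and the struct layout and the numbers are assumptions of this example; only the combination logic follows the procedure described above.

#include <cmath>
#include <cstdio>
#include <map>
#include <vector>

// one ROOT-tree entry (sub-event), reduced to what this sketch needs
struct SubEvent { int id; double obs; double weight2; bool passes_cuts; };

int main()
{
  // toy sample: two events (id 0 and 1), each with several correlated sub-events
  std::vector<SubEvent> entries = {
    {0, 35.2,  1.7e-3, true}, {0, 34.8, -1.5e-3, true}, {0, 80.1, 2.0e-4, false},
    {1, 36.0,  9.0e-4, true}, {1, 33.9, -2.0e-4, true},
  };

  // steps 1+2: sum weight2 of all sub-events of one event that fall into the bin
  double lo = 30.0, hi = 40.0;    // observable range of the example bin
  std::map<int,double> x_id;      // per-event contribution x_id to this bin
  for (const SubEvent &s : entries) {
    if (!s.passes_cuts) continue; // cuts are applied per sub-event
    if (s.obs >= lo && s.obs < hi) x_id[s.id] += s.weight2;
  }

  // step 3: accumulate sum(x_id) and sum(x_id^2); N counts events, not sub-events
  double sum_x = 0.0, sum_x2 = 0.0;
  std::size_t N = 2;
  for (const auto &p : x_id) { sum_x += p.second; sum_x2 += p.second*p.second; }

  double mean = sum_x / N;
  double err  = std::sqrt((sum_x2/N - mean*mean) / (N - 1));
  std::printf("bin content = %g, statistical error = %g\n", mean, err);
  return 0;
}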


8.7.4.2 Computation of cross sections with new PDFs

Born and real pieces:

Notation:

f_a(x_a) = PDF 1 applied on parton a,

F_b(x_b) = PDF 2 applied on parton b.

The total cross section weight is given by

weight = me_wgt f_a(x_a)F_b(x_b).

Loop piece and integrated subtraction terms:

The weights here have an explicit dependence on the renormalization and factorization scales.

To take care of the renormalization scale dependence (other than via alpha_S) the weight w_0 is defined as

w_0 = me_wgt + usr_wgts[0] log((\mu_R^new)^2/(\mu_R^old)^2) + usr_wgts[1] 1/2 [log((\mu_R^new)^2/(\mu_R^old)^2)]^2.

To address the factorization scale dependence the weights w_1,...,w_8 are given by

w_i = usr_wgts[i+1] + usr_wgts[i+9] log((\mu_F^new)^2/(\mu_F^old)^2).

The full cross section weight can be calculated as

weight = w_0 f_a(x_a)F_b(x_b) + (f_a^1 w_1 + f_a^2 w_2 + f_a^3 w_3 + f_a^4 w_4) F_b(x_b) + (F_b^1 w_5 + F_b^2 w_6 + F_b^3 w_7 + F_b^4 w_8) f_a(x_a)

where

f_a^1 = f_a(x_a) (a=quark), \sum_q f_q(x_a) (a=gluon),

f_a^2 = f_a(x_a/x'_a)/x'_a (a=quark), \sum_q f_q(x_a/x'_a)/x'_a (a=gluon),

f_a^3 = f_g(x_a),

f_a^4 = f_g(x_a/x'_a)/x'_a.
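Putting the formulas above together, a reweighting routine for the loop piece and the integrated subtraction terms might look like the following C++ sketch. The PDF evaluator passed in is a placeholder for whatever new PDF interface one uses (with flavour 0 denoting the gluon in this sketch), and all names here are assumptions of the example; only the combination of me_wgt, usr_wgts and the convolution factors follows the prescription given above.

#include <cmath>
#include <functional>
#include <vector>

// pdf(flavour, x) -> PDF value; flavour 0 denotes the gluon in this sketch
using PDF = std::function<double(int,double)>;

// helper computing the convolution factors f_a^1..f_a^4 defined above
static void ConvolutionFactors(const PDF &f, int a, double x, double xp,
                               const std::vector<int> &quarks, double fac[4])
{
  auto sum_q = [&](double xv) {
    double s = 0.0; for (int q : quarks) s += f(q, xv); return s;
  };
  fac[0] = (a != 0) ? f(a, x)       : sum_q(x);        // f_a^1
  fac[1] = (a != 0) ? f(a, x/xp)/xp : sum_q(x/xp)/xp;  // f_a^2
  fac[2] = f(0, x);                                    // f_a^3 = f_g(x_a)
  fac[3] = f(0, x/xp)/xp;                              // f_a^4
}

// weight of a loop / integrated-subtraction entry for a new PDF set and new scales;
// usr_wgts must hold the 18 additional ME weights described above
double NewWeight(double me_wgt, const std::vector<double> &usr_wgts,
                 const PDF &f1, const PDF &f2, int id1, int id2,
                 double x1, double x2, double x1p, double x2p,
                 double muR2_new, double muR2_old,
                 double muF2_new, double muF2_old,
                 const std::vector<int> &quarks)
{
  double lR = std::log(muR2_new/muR2_old), lF = std::log(muF2_new/muF2_old);
  double w0 = me_wgt + usr_wgts[0]*lR + usr_wgts[1]*0.5*lR*lR;
  double w[8];
  for (int i = 1; i <= 8; ++i) w[i-1] = usr_wgts[i+1] + usr_wgts[i+9]*lF;

  double fa[4], fb[4];
  ConvolutionFactors(f1, id1, x1, x1p, quarks, fa);
  ConvolutionFactors(f2, id2, x2, x2p, quarks, fb);

  double weight = w0 * f1(id1,x1) * f2(id2,x2);
  for (int i = 0; i < 4; ++i) weight += fa[i]*w[i]   * f2(id2,x2);
  for (int i = 0; i < 4; ++i) weight += fb[i]*w[i+4] * f1(id1,x1);
  return weight;
}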


9. Customization

Customizing Sherpa according to your needs.

Sherpa can easily be extended with user-defined tools. To this end, a corresponding class must be written, equipped with a corresponding getter function, and compiled into an external library which can be linked to Sherpa at runtime. The switch SHERPA_LDADD then has to be given the name of the library to be loaded, i.e.

  SHERPA_LDADD <libname>

to load lib<libname>.so. Several specific examples are listed in the following sections.


9.1 External RNG

To use an external Random Number Generator (RNG) in Sherpa, you need to provide an interface to your RNG in an external dynamic library. This library is then loaded at runtime and Sherpa replaces the internal RNG with the one provided.

In this case Sherpa will not attempt to set, save, read or restore the RNG.

The corresponding code for the RNG interface is

#include "ATOOLS/Math/Random.H"

using namespace ATOOLS;

class Example_RNG: public External_RNG {
public:
  double Get() 
  { 
    // your code goes here ... 
  }
};// end of class Example_RNG

// this makes Example_RNG loadable in Sherpa
DECLARE_GETTER(Example_RNG_Getter,"Example_RNG",External_RNG,RNG_Key);
External_RNG *Example_RNG_Getter::operator()(const RNG_Key &arg) const
{ return new Example_RNG(); }
// this eventually prints a help message
void Example_RNG_Getter::PrintInfo(std::ostream &str,const size_t width) const
{ str<<"example RNG interface"; }

If the code is compiled into a library called libExampleRNG.so, then this library is loaded dynamically in Sherpa using the command ‘SHERPA_LDADD=ExampleRNG’ either on the command line or in ‘Run.dat’. If the library is bound at compile time, like e.g. in cmt, you may skip this step.

Finally Sherpa is instructed to retrieve the external RNG by specifying ‘EXTERNAL_RNG=Example_RNG’ on the command line or in ‘Run.dat’.


9.2 External PDF

To use an external PDF (not included in LHAPDF) in Sherpa, you need to provide an interface to your PDF in an external dynamic library. This library is then loaded at runtime and it is possible within Sherpa to access all PDFs included.

The simplest C++ code to implement your interface looks as follows

#include "PDF/Main/PDF_Base.H"

using namespace PDF;

class Example_PDF: public PDF_Base {
public:
  void Calculate(double x,double Q2)
  {
    // calculate values x f_a(x,Q2) for all a
  }
  double GetXPDF(const ATOOLS::Flavour a)
  {
    // return x f_a(x,Q2)
  }
  virtual PDF_Base *GetCopy()
  {
    return new Example_PDF();
  }
};// end of class Example_PDF

// this makes Example_PDF loadable in Sherpa
DECLARE_PDF_GETTER(Example_PDF_Getter);
PDF_Base *Example_PDF_Getter::operator()(const Parameter_Type &args) const
{ return new Example_PDF(); }
// this eventually prints a help message
void Example_PDF_Getter::PrintInfo
(std::ostream &str,const size_t width) const
{ str<<"example PDF"; }
// this lets Sherpa initialize and unload the library
Example_PDF_Getter *p_get=NULL;
extern "C" void InitPDFLib()
{ p_get = new Example_PDF_Getter("ExamplePDF"); }
extern "C" void ExitPDFLib() { delete p_get; }

If the code is compiled into a library called libExamplePDFSherpa.so, then this library is loaded dynamically in Sherpa using ‘PDF_LIBRARY=ExamplePDFSherpa’ either on the command line, in ‘Run.dat’ or in ‘ISR.dat’. If the library is bound at compile time, like e.g. in cmt, you may skip this step. It is now possible to list all accessible PDF sets by specifying ‘SHOW_PDF_SETS=1’ on the command line.

Finally Sherpa is instructed to retrieve the external PDF by specifying ‘PDF_SET=ExamplePDF’ on the command line, in ‘Run.dat’ or in ‘ISR.dat’.


9.3 Exotic physics

It is possible to add your own models to Sherpa in a straightforward way. To illustrate, a simple example has been included in the directory Examples/Models/SM_ZPrime, showing how to add a Z-prime boson to the Standard Model.

The important features of this example include:

To use this model, create the libraries for Sherpa to use by running

 
make

in this directory. Then run Sherpa as normal:

 
../../../bin/Sherpa

To implement your own model, copy these example files anywhere and modify them according to your needs.

Note: You don’t have to modify or recompile any part of Sherpa to use your model. As long as the SHERPA_LDADD parameter is specified as above, Sherpa will pick up your model automatically.

Furthermore note: New physics models with an existing implementation in FeynRules, cf. [Chr08] and [Chr09] , can directly be invoked using Sherpa’s interface to FeynRules, see FeynRules model.


9.4 External one-loop ME

Sherpa includes only a very limited selection of one-loop matrix elements. To make full use of the implemented automated dipole subtraction it is possible to link external one-loop codes to Sherpa in order to perform full calculations at QCD next-to-leading order.

In general Sherpa can take care of any piece of the calculation except the one-loop matrix elements, i.e. the Born ME, the real correction, the real and integrated subtraction terms, as well as the phase space integration and PDF weights for hadron collisions. Sherpa will provide sets of four-momenta and request, for a specific parton-level process, the helicity- and colour-summed one-loop matrix element (more specifically: the coefficients of the Laurent series in the dimensional regularisation parameter epsilon up to the order epsilon^0).

An example setup for interfacing such an external one-loop code, following the Binoth Les Houches interface proposal [Bin10a] of the 2009 Les Houches workshop, is provided in MC@NLO setup for pp->Z+jet using the BLHA interface. To use the LH-OLE interface, Sherpa has to be configured with --enable-lhole.

The interface:

The setup (cf. example MC@NLO setup for pp->Z+jet using the BLHA interface):


9.5 My own interface

It is possible to pass Sherpa output to an external Fortran or C++ framework on the fly. To illustrate this option, a simple yet functional example is included in the directory ./AddOns/HEPEVT, showing how to fill the HEPEVT common block from Sherpa output. It also exemplifies how to retrieve the weight of weighted events and how to access information about the total cross section of the event sample at the end of the run.

Note that only the event converter is included in the sources; you will still need to implement the calling function as well as an initialize and a finalize method, see below. However, these are rather simple.

The important features of this example include:

To use this interface, create the additional library for Sherpa by running

 
make SHERPA_PREFIX=/path/to/sherpa

in the directory AddOns/HEPEVTInterface. After copying the library, run Sherpa from your interface.

Note: You don’t have to modify or recompile any part of Sherpa to use this interface. As long as the SHERPA_LDADD parameter is specified as above, Sherpa will pick up the HEPEVT converter automatically.


9.6 Python Interface

Certain Sherpa classes and methods can be made available to the Python interpreter in the form of an extension module. This module can be loaded in Python and provides access to certain functionalities of the Sherpa event generator from within Python. It was designed specifically for the computation of matrix elements in Python (Using the Python interface) and its features are currently limited to this purpose. In order to build the module, Sherpa must be configured with the option --enable-pyext. Running make then invokes the automated interface generator SWIG [Bea03] to create the Sherpa module using the Python C/C++ API. SWIG version 1.3.x or later is required for a successful build. Problems might occur if more than one version of Python is present on the system, since automake currently doesn’t always handle multiple Python installations properly. A possible workaround is to temporarily uninstall one version of Python, configure and build Sherpa, and then reinstall the removed Python version.

The following script is a minimal example that shows how to use the Sherpa module in Python. In order to load the Sherpa module, the location where it is installed must be added to the PYTHONPATH. There are several ways to do this; in this example the sys module is used. <sherpa-python-lib-dir> must be replaced by the actual installation directory of the Sherpa module. This is done automatically in the test scripts of Using the Python interface. The sys module also allows one to directly pass the command line arguments used to run the script to the initialization routine of Sherpa. The script can thus be executed using the normal command line options of Sherpa (see Command line options). Furthermore, it illustrates how exceptions that Sherpa might throw can be taken care of. If a run card is present in the directory where the script is executed, the initialization of the generator causes Sherpa to compute the cross sections for the processes specified in the run card. See Computing matrix elements for individual phase space points for an example that shows how to use the Python interface to compute matrix elements.

Note that if you have compiled Sherpa with MPI support, you need to source the mpi4py module using from mpi4py import MPI.

  #!/usr/bin/python
  import sys
  sys.path.append('<sherpa-python-lib-dir>')
  import Sherpa

  # set up the generator
  Generator=Sherpa.Sherpa()

  try:
      # initialize the generator, pass command line arguments to initialization routine
      Generator.InitializeTheRun(len(sys.argv),sys.argv)

  # catch exceptions
  except Sherpa.Exception as exc:
      print exc

10. Examples

Some example set-ups are included in Sherpa, in the <prefix>/share/SHERPA-MC/Examples/ directory. These may be useful to new users to practice with, or as templates for creating your own Sherpa run-cards. In this section, we will look at some of the main features of these examples.


10.1 Vector boson + jets production


10.1.1 W+jets production

To change any of the following LHC examples to production at the Tevatron simply change the beam settings to

  BEAM_1  2212; BEAM_ENERGY_1 980;
  BEAM_2 -2212; BEAM_ENERGY_2 980;

10.1.1.1 MC@NLO setup for pp->W+jet+X

This is an example setup for W+jet production at hadron colliders at next-to-leading order precision matched to the parton shower using the MC@NLO prescription detailed in [Hoe11] . A few things to note are detailed below the example. It can be straightforwardly modified to the production of a W boson in association with any number of jets by simply adjusting the process and scale definitions as well as the matrix-element cuts in the selector.

 
 
(run){
  % general settings
  EVENTS 2.5M;

  % tags and settings for scale definitions
  FSF:=1.0; RSF:=1.0; QSF:=1.0;
  SCALES VAR{FSF*MPerp2(p[2]+p[3])}{RSF*MPerp2(p[2]+p[3])}{QSF*PPerp2(p[4])};

  % tags and settings for ME generators
  LOOPGEN:=<my-loop-gen>;
  ME_SIGNAL_GENERATOR Amegic LOOPGEN;
  EVENT_GENERATION_MODE Weighted;
  RESULT_DIRECTORY Results.FSF.RSF.QSF

  % model parameters
  MODEL SM;
  MASSIVE[15] 1;

  % collider setup
  BEAM_1 2212; BEAM_ENERGY_1 3500;
  BEAM_2 2212; BEAM_ENERGY_2 3500;
}(run);

(processes){
  Process 93 93 -> 90 91 93;
  Order_EW 2;
  NLO_QCD_Mode MC@NLO;
  Loop_Generator LOOPGEN;
  End process;
}(processes);

(selector){
  Mass 90 91 2. E_CMS;
  NJetFinder 1 20. 0. 0.4 -1
}(selector);

Things to notice:


10.1.1.2 MEPS and MENLOPS setup for pp->W+jets

This is an example setup for inclusive W production at hadron colliders. The inclusive process is calculated at next-to-leading order accuracy matched to the parton shower using the MC@NLO prescription detailed in [Hoe11] . Higher jet multiplicities, calculated each at leading order, are merged into the inclusive sample using the MENLOPS methods described in [Geh12] , [Hoe10] and [Hoe09] . A few things to note are detailed below the example. The example can be converted into a simple MEPS-type leading order merging (CKKW [Cat01a] , [Kra02] and [Hoe09] ) example by setting LJET:=0. This setup provides the option to use a custom jet criterion (see JET_CRITERION), which is defined in My_JetCriterion.C, and which is based on jets identified by FastJet. The corresponding plugin is compiled using scons.

 
 
(run){
  % general settings
  EVENTS 2.5M; ERROR=0.1;

  % tags and settings for scale definitions
  FSF:=1.0; RSF:=1.0; QSF:=1.0;
  SCALES METS{FSF*MU_F2}{RSF*MU_R2}{QSF*MU_Q2};

  % tags for process setup
  LJET:=2; NJET:=4; QCUT:=20;

  ## % extra tags for custom jet criterion
  ## SHERPA_LDADD MyStuff;
  ## JET_CRITERION FASTJET[A:antikt,R:0.4,y:5];

  % tags and settings for ME generators
  LOOPGEN:=Internal;
  ME_SIGNAL_GENERATOR Comix Amegic LOOPGEN;
  EVENT_GENERATION_MODE Weighted;
  RESULT_DIRECTORY res.QCUT.FSF.RSF.QSF;

  % model parameters
  MODEL SM;
  MASSIVE[15] 1;

  % collider setup
  BEAM_1 2212; BEAM_ENERGY_1 3500;
  BEAM_2 2212; BEAM_ENERGY_2 3500;
}(run);

(processes){
  Process 93 93 -> 90 91 93{NJET};
  Order_EW 2; CKKW sqr(QCUT/E_CMS);
  NLO_QCD_Mode MC@NLO {LJET};
  Loop_Generator LOOPGEN {LJET};
  ME_Generator Amegic {LJET};
  Enhance_Factor 16 {3}; 
  Enhance_Factor 128 {4,5};
  End process;
}(processes);

(selector){
  Mass 90 91 2. E_CMS;
}(selector);

Things to notice:


10.1.1.3 MEPS@NLO setup for pp->W+jets

This is an example setup for inclusive W production at hadron colliders. The inclusive process is calculated at next-to-leading order accuracy matched to the parton shower using the MC@NLO prescription detailed in [Hoe11] . The next few higher jet multiplicities, calculated at next-to-leading order as well, are merged into the inclusive sample using the MEPS@NLO method - an extension of the CKKW method to NLO - as described in [Hoe12a] and [Geh12] . Finally, even higher multiplicities, calculated at leading order, are merged on top of that. A few things to note are detailed below the example. The example can be converted into a simple MENLOPS setup by setting LJET:=2, or an MEPS setup by setting LJET:=0, to study the effect of incorporating higher-order matrix elements.

 
 
(run){
  % general settings
  EVENTS 2.5M; ERROR=0.1;

  % tags and settings for scale definitions
  SP_NLOCT=1; FSF:=1.0; RSF:=1.0; QSF:=1.0;
  SCALES METS{FSF*MU_F2}{RSF*MU_R2}{QSF*MU_Q2};

  % tags for process setup
  LJET:=2,3,4; NJET:=4; QCUT:=30;
  EXCLUSIVE_CLUSTER_MODE 1;

  % shower settings for NLO
  CSS_KFACTOR_SCHEME 0;

  % tags and settings for ME generators
  LOOPGEN0:=Internal;
  LOOPGEN1:=<my-loop-gen-for-W1j>;
  LOOPGEN2:=<my-loop-gen-for-W2j>;
  ME_SIGNAL_GENERATOR Comix Amegic LOOPGEN0 LOOPGEN1 LOOPGEN2;
  EVENT_GENERATION_MODE Weighted;
  RESULT_DIRECTORY=res.NLO.QCUT.FSF.RSF.QSF;

  % model parameters
  MODEL SM;
  MASSIVE[15] 1;

  % collider setup
  BEAM_1 2212; BEAM_ENERGY_1 3500;
  BEAM_2 2212; BEAM_ENERGY_2 3500;
}(run);

(processes){
  Process 93 93 -> 90 91 93{NJET};
  Order_EW 2; CKKW sqr(QCUT/E_CMS);
  NLO_QCD_Mode MC@NLO {LJET};
  Loop_Generator LOOPGEN0 {2};
  Loop_Generator LOOPGEN1 {3};
  Loop_Generator LOOPGEN2 {4};
  ME_Generator Amegic {LJET};
  Enhance_Factor 16 {3}; 
  Enhance_Factor 64 {4};
  Enhance_Factor 128 {5,6};
  RS_Enhance_Factor 10 {3};
  RS_Enhance_Factor 20 {4};
  End process;
}(processes);

(selector){
  Mass 90 91 2. E_CMS;
}(selector);

Things to notice:


10.1.2 Z+jets production

To change any of the following LHC examples to production at the Tevatron simply change the beam settings to

  BEAM_1  2212; BEAM_ENERGY_1 980;
  BEAM_2 -2212; BEAM_ENERGY_2 980;

10.1.2.1 MC@NLO setup for pp->Z+jet using the BLHA interface

This is an example setup for Z+1jet production at hadron colliders at next-to-leading order precision matched to the parton shower using the MC@NLO prescription detailed in [Hoe11] . In the example given below the BLHA interface to the GoSam generator is used for the virtual matrix elements.

 
 
(run){
  % general settings
  EVENTS 2.5M;

  % tags and settings for scale definitions
  FSF:=1.0; RSF:=1.0; QSF:=1.0;
  
  % tags and settings for ME generators
  ME_SIGNAL_GENERATOR Amegic LHOLE;
  LHOLE_OLP GoSam
  SHERPA_LDADD golem_olp;
  LHOLE_CONTRACTFILE OLE_order.olc;
  EVENT_GENERATION_MODE Weighted;
  RESULT_DIRECTORY Results.FSF.RSF.QSF

  % model parameters
  MODEL SM;
  MASSIVE[15] 1;

  % collider setup
  BEAM_1 2212; BEAM_ENERGY_1 3500;
  BEAM_2 2212; BEAM_ENERGY_2 3500;
}(run);

(processes){
  Process 93 93 -> 11 -11 93;
  Order_EW 2;
  NLO_QCD_Mode MC@NLO;
  Loop_Generator LHOLE;
  Scales FASTJET[A=antikt,PT=20.,ET=0,R=0.4,M=0]{FSF*H_T2}{RSF*H_T2}{QSF*PPerp2(p[4])}
  RS_Enhance_Factor 10;
  End process;
}(processes);

(selector){
  Mass 11 -11 66. 116.
  NJetFinder 1 20. 0. 0.4 -1
}(selector);

Things to notice:


10.1.2.2 MEPS and MENLOPS setup for pp->Z+jets

This is an example setup for tree-level matrix elements merged with the parton shower [Hoe09] in a minimal setup. The process includes real emission matrix elements with up to 4 final-state QCD partons. Via CKKW sqr(30/E_CMS) the merging scale is fixed to 30 GeV.

 
 
(run){
  % general settings
  EVENTS 10000;

  % collider setup
  BEAM_1 = 2212; BEAM_ENERGY_1 = 3500;
  BEAM_2 = 2212; BEAM_ENERGY_2 = 3500;
}(run)

(processes){
  Process 93 93 -> 11 -11 93{4}
  Order_EW 2;
  CKKW sqr(30/E_CMS)
  Integration_Error 0.02 {6};
  End process;
}(processes)

(selector){
  Mass 11 -11 66 116
}(selector)

10.1.2.3 MEPS@NLO setup for pp->Z+jets

This is an example setup for inclusive Z production at hadron colliders. The inclusive process is calculated at next-to-leading order accuracy matched to the parton shower using the MC@NLO prescription detailed in [Hoe11] . The next few higher jet multiplicities, calculated at next-to-leading order as well, are merged into the inclusive sample using the MEPS@NLO method - an extension of the CKKW method to NLO - as described in [Hoe12a] and [Geh12] . Finally, even higher multiplicities, calculated at leading order, are merged on top of that. A few things to note are detailed below the example. The example can be converted into a simple MENLOPS setup by setting LJET:=2, or an MEPS setup by setting LJET:=0, to study the effect of incorporating higher-order matrix elements.

 
 
(run){
  % general settings
  EVENTS 2.5M; ERROR=0.1;

  % tags and settings for scale definitions
  SP_NLOCT=1; FSF:=1.0; RSF:=1.0; QSF:=1.0;
  SCALES METS{FSF*MU_F2}{RSF*MU_R2}{QSF*MU_Q2};

  % tags for process setup
  LJET:=2,3,4; NJET:=4; QCUT:=30;
  EXCLUSIVE_CLUSTER_MODE 1;

  % shower settings for NLO
  CSS_KFACTOR_SCHEME 0;

  % tags and settings for ME generators
  LOOPGEN0:=Internal;
  LOOPGEN1:=<my-loop-gen-for-Z1j>;
  LOOPGEN2:=<my-loop-gen-for-Z2j>;
  ME_SIGNAL_GENERATOR Comix Amegic LOOPGEN0 LOOPGEN1 LOOPGEN2;
  EVENT_GENERATION_MODE Weighted;
  RESULT_DIRECTORY=res.NLO.QCUT.FSF.RSF.QSF;

  % model parameters
  MODEL SM;
  MASSIVE[15] 1;

  % collider setup
  BEAM_1 2212; BEAM_ENERGY_1 3500;
  BEAM_2 2212; BEAM_ENERGY_2 3500;
}(run);

(processes){
  Process 93 93 -> 11 -11 93{NJET};
  Order_EW 2; CKKW sqr(QCUT/E_CMS);
  NLO_QCD_Mode MC@NLO {LJET};
  Loop_Generator LOOPGEN0 {2};
  Loop_Generator LOOPGEN1 {3};
  Loop_Generator LOOPGEN2 {4};
  ME_Generator Amegic {LJET};
  Enhance_Factor 16 {3}; 
  Enhance_Factor 64 {4};
  Enhance_Factor 128 {5,6};
  RS_Enhance_Factor 10 {3};
  RS_Enhance_Factor 20 {4};
  End process;
}(processes);

(selector){
  Mass 11 -11 66 116
}(selector);

Things to notice:


10.1.3 W+bb production

 
 
(run){
  # generator parameters
  EVENTS 0; LGEN:=Wbb;
  ME_SIGNAL_GENERATOR Amegic LGEN;
  HARD_DECAYS 1; HARD_MASS_SMEARING 0;
  MASSIVE[5] 1; WIDTH[24] 0; STABLE[24] 0;
  HDH_ONLY_DECAY {24,12,-11};
  MI_HANDLER None;
  # physics parameters
  BEAM_1 2212; BEAM_ENERGY_1 7000;
  BEAM_2 2212; BEAM_ENERGY_2 7000;
  SCALES VAR{H_T2+sqr(80.419)};
  PDF_LIBRARY MSTW08Sherpa; PDF_SET mstw2008nlo_nf4;
  MASS[5] 4.75;# consistent with MSTW 2008 nf 4 set
}(run);

(processes){
  Process 93 93 -> 24 5 -5;
  NLO_QCD_Mode 3; NLO_QCD_Part BVIRS;
  Loop_Generator LGEN;
  Order_EW 1;
  End process;
  Process 93 93 -> -24 5 -5;
  NLO_QCD_Mode 3; NLO_QCD_Part BVIRS;
  Loop_Generator LGEN;
  Order_EW 1;
  End process;
}(processes);

(selector){
  FastjetFinder antikt 2 5 0 0.5 0.75 5 100 2;
}(selector);

Things to notice:


10.2 Jet production


10.2.1 Jet production

To change any of the following LHC examples to production at the Tevatron simply change the beam settings to

  BEAM_1  2212; BEAM_ENERGY_1 980;
  BEAM_2 -2212; BEAM_ENERGY_2 980;

10.2.1.1 MC@NLO setup for dijet and inclusive jet production

This is an example setup for dijet and inclusive jet production at hadron colliders at next-to-leading order precision matched to the parton shower using the MC@NLO prescription detailed in [Hoe11] and [Hoe12b] . A few things to note are detailed below the example.

 
 
(run){
  % general settings
  EVENTS 1M;

  % tags and settings for scale definitions
  FSF:=1.; RSF:=1.; QSF:=1.;

  % tags and settings for ME-level cuts
  J1CUT:=20.; J2CUT:=10.;

  % tags and settings for ME generators
  LOOPGEN:=<my-loop-gen>;
  ME_SIGNAL_GENERATOR Amegic LOOPGEN;
  EVENT_GENERATION_MODE Weighted;
  RESULT_DIRECTORY res_jJ1CUT_jJ2CUT_ffFSF_rfRSF_qfQSF;

  % model parameters
  MODEL SM;

  % collider setup
  BEAM_1  2212; BEAM_ENERGY_1 3500.0;
  BEAM_2  2212; BEAM_ENERGY_2 3500.0;
}(run)

(processes){
  Process 93 93 -> 93 93;
  NLO_QCD_Mode MC@NLO;
  Loop_Generator LOOPGEN;
  Order_EW 0;
  Scales FASTJET[A=antikt,PT=J1CUT,ET=0,R=0.4,M=0]{FSF*0.0625*H_T2}{RSF*0.0625*H_T2}{QSF*0.25*PPerp2(p[3])}
  End process;
}(processes)

(selector){
  FastjetFinder  antikt 2  J2CUT  0.0  0.4
  FastjetFinder  antikt 1  J1CUT  0.0  0.4
}(selector)

Things to notice:


10.2.1.2 MEPS setup for jet production

 
 
(run){
  BEAM_1 = 2212; BEAM_ENERGY_1 = 4000;
  BEAM_2 = 2212; BEAM_ENERGY_2 = 4000;
}(run)

(processes){
  Process 93 93 -> 93 93 93{3}
  Order_EW 0;
  CKKW sqr(20/E_CMS)
  Integration_Error 0.02;
  Selector_File *|(coresel){|}(coresel) {2};
  End process;
}(processes)

(coresel){
  NJetFinder  2  20.0  0.0  0.4  -1
}(coresel)

Things to notice:


10.2.2 Jets at lepton colliders

 
 
(run){
  EVENTS = 10000
  FRAGMENTATION Off
  LOOPTAG:=Internal
  RESULT_DIRECTORY = Results_LOOPTAG
  GENERATE_RESULT_DIRECTORY = 1
}(run)

(beam){
  BEAM_1 =  11; BEAM_ENERGY_1 = 45.625;
  BEAM_2 = -11; BEAM_ENERGY_2 = 45.625;
}(beam)

(isr){
#  ISR_1 On; ISR_2 On; PDF_SET PDFe;
}(isr)

(processes){
  Process 11 -11 -> 93 93;
  Loop_Generator LOOPTAG;
  NLO_QCD_Part BVIRS;
  Order_EW 2;
  End process;
}(processes)

(model){
  ORDER_ALPHAS = 1
  ALPHAS(MZ) = 0.1188
}(model)

(me){
  ME_SIGNAL_GENERATOR = Internal Amegic
  EVENT_GENERATION_MODE = Weighted
  NLO_Mode = 3
  SCALES = VAR{sqr(91.25)}
}(me)

(analysis){
  BEGIN_RIVET {
    -a JADE_OPAL_2000_S4300807
  } END_RIVET
}(analysis)

This example shows a LEP setup, with electrons and positrons colliding at a centre-of-mass energy of 91.25 GeV. A single process is specified, the production of two light partons (quarks and gluons), calculated at next-to-leading order in QCD (cf. the NLO_Mode and NLO_QCD_Part settings).

Things to notice:


10.3 Higgs boson + jets production


10.3.1 H+jets production in gluon fusion

This is an example setup for inclusive Higgs production through gluon fusion at hadron colliders. The inclusive process is calculated at next-to-leading order accuracy matched to the parton shower using the MC@NLO prescription detailed in [Hoe11] . The next few higher jet multiplicities, calculated at next-to-leading order as well, are merged into the inclusive sample using the MEPS@NLO method - an extension of the CKKW method to NLO - as described in [Hoe12a] and [Geh12] . Finally, even higher multiplicities, calculated at leading order, are merged on top of that. A few things to note are detailed below the example. The example can be converted into a simple MENLOPS setup by setting LJET:=2, or an MEPS setup by setting LJET:=0, to study the effect of incorporating higher-order matrix elements.

 
 
(run){
  % general settings
  EVENTS 5M; ERROR 0.1;

  % tags and settings for scale definitions
  SP_NLOCT=1; FSF:=1.0; RSF:=1.0; QSF:=1.0;
  SCALES STRICT_METS{FSF*MU_F2}{RSF*MU_R2}{QSF*MU_Q2};

  % tags for process setup
  LJET:=1,2,3; NJET:=3; QCUT:=30.;
  EXCLUSIVE_CLUSTER_MODE 1;

  % shower settings for NLO
  CSS_FS_PT2MIN 2; CSS_IS_PT2MIN 2;
  CSS_FS_AS_FAC 1; CSS_IS_AS_FAC 1;

  % tags and settings for ME generators
  LOOPGEN0:=Internal;
  LOOPGEN1:=LHOLE;
  SHERPA_LDADD golem_olp;
  LHOLE_CONTRACTFILE OLE_order.olc;
  LHOLE_OLP GoSam;
  ME_SIGNAL_GENERATOR Comix Amegic LOOPGEN0 LOOPGEN1;
  EVENT_GENERATION_MODE Weighted;
  RESULT_DIRECTORY Results.QCUT;

  % model parameters
  MODEL SM+EHC
  YUKAWA[5] 0; YUKAWA[15] 0;
  MASS[25] 125.; WIDTH[25] 0.; STABLE[25] 0;

  % collider setup
  BEAM_1 2212; BEAM_ENERGY_1 4000;
  BEAM_2 2212; BEAM_ENERGY_2 4000;  
}(run);

(processes){
  Process 93 93 -> 25 93{NJET};
  Order_EW 1; CKKW sqr(QCUT/E_CMS);
  NLO_QCD_Mode MC@NLO {LJET}; 
  Loop_Generator LOOPGEN0 {1,2};
  Loop_Generator LOOPGEN1 {3};
  ME_Generator Amegic {LJET};
  Enhance_Factor 16 {2}; 
  Enhance_Factor 128 {3,4};
  RS_Enhance_Factor 10 {2};
  RS_Enhance_Factor 20 {3};
  End process;
}(processes);

Things to notice:


10.3.2 H+jets production in weak boson fusion

This is an example setup for inclusive Higgs production through weak boson fusion at hadron colliders. The inclusive process is calculated at leading-order accuracy and merged with up to one additional jet using the ME+PS prescription detailed in [Hoe09] . This example also shows how to implement and use a custom core scale, cf. METS scale setting with multiparton core processes.

 
 
(run){
  ###### event generation #####################
  EVENTS 10k;
  EVENT_GENERATION_MODE=Weighted
  MI_HANDLER Off;
  ###### generator parameters #################
  NJET:=1; QCUT:=20;
  ME_SIGNAL_GENERATOR=Comix;
  HARD_DECAYS 1; 
  MASS[25]=125.5;  WIDTH[25]=0.0;
  STABLE[25] 0; STABLE[23] 0; STABLE[24] 0;
  ERROR = 0.1;
  STORE_DECAY_RESULTS 1;
  ###### scale setting parameters #############
  SCF:=1.0; SCR:=1.0; QF:=1.0
  SCALES STRICT_METS{SCF*MU_F2}{SCR*MU_R2}{QF*MU_Q2};
  EXCLUSIVE_CLUSTER_MODE 1;
  CORE_SCALE WBF;
  ###### physics parameters ###################
  BEAM_1 2212; BEAM_ENERGY_1 7000;
  BEAM_2 2212; BEAM_ENERGY_2 7000;
}(run);

(processes){
  Process 93 93 -> 25 93 93 93{NJET};
  Order_EW 3;
  CKKW sqr(QCUT/E_CMS);
  End process;
}(processes);
(selector){
  NJetFinder 2 20. 0 0.4;
}(selector);

Things to notice:


10.3.3 Associated t anti-t H production at the LHC

This set-up illustrates the interface to an external loop matrix element generator as well as the possibility of specifying hard decays for particles emerging from the hard interaction. The process generated is the production of a Higgs boson in association with a top quark pair from two light partons in the initial state. Each top quark decays into an (anti-)bottom quark and a W boson. The W bosons in turn decay to either quarks or leptons.

 
 
(run){
  # generator parameters
  EVENTS 0; LGEN:=TTH;
  ME_SIGNAL_GENERATOR Amegic LGEN;
  HARD_DECAYS 1; HARD_MASS_SMEARING 0;
  STABLE[6] 0; STABLE[24] 0;
  WIDTH[25] 0; WIDTH[6] 0; 
  MI_HANDLER None;
  # physics parameters
  BEAM_1 2212; BEAM_ENERGY_1 7000;
  BEAM_2 2212; BEAM_ENERGY_2 7000;
  SCALES VAR{sqr(175+125/2)};
  PDF_LIBRARY LHAPDFSherpa;
  PDF_SET MSTW2008lo68cl.LHgrid;
  USE_PDF_ALPHAS 1;
}(run);

(processes){
  Process 93 93 -> 25 6 -6;
  NLO_QCD_Mode 3; NLO_QCD_Part BVIRS;
  Loop_Generator LGEN;
  Order_EW 1;
  End process;
}(processes);

Things to notice:


10.4 Top quark (pair) + jets production


10.4.1 Simulation of top quark pair production using MC@NLO methods

 
 
(run){
  EVENTS 10000;
  LJET:=2; NJET:=0; QCUT:=20;
  # Generator parameters
  LGEN:=LHOLE;
  # SHERPA_LDADD golem_olp;
  # LHOLE_CONTRACTFILE OLE_order.olc;
  LHOLE_OLP GoSam;
  ME_SIGNAL_GENERATOR Comix Amegic LGEN;
  INTEGRATOR 7; NLO_Mode 3;
  EXCLUSIVE_CLUSTER_MODE 1;
  CSS_KFACTOR_SCHEME 0;
  # Physics parameters
  BEAM_1 2212; BEAM_ENERGY_1 4000;
  BEAM_2 2212; BEAM_ENERGY_2 4000;
  CORE_SCALE QCD;
  WIDTH[6] 0;
}(run);

(processes){
  Process 93 93 -> 6 -6 93{NJET};
  NLO_QCD_Part BVIRS {LJET};
  ME_Generator Amegic {LJET};
  Loop_Generator LGEN;
  CKKW sqr(QCUT/E_CMS);
  Order_EW 0;
  End process;
}(processes);

Things to notice:


10.4.2 Simulation of top quark pair production in association with jets using MEPS methods

 
 
(run){
  EVENTS 10000;
  NJET:=2; QCUT:=20;
  STABLE[6] 0; STABLE[24] 0;
  HARD_DECAYS 1;
  HARD_SPIN_CORRELATIONS 1;
  CORE_SCALE QCD;
}(run);

(beam){
  BEAM_1 2212; BEAM_ENERGY_1 4000;
  BEAM_2 2212; BEAM_ENERGY_2 4000;
}(beam);

(processes){
  Process 93 93 -> 6 -6 93{NJET};
  Order_EW 0; CKKW sqr(QCUT/E_CMS);
  End process;
}(processes);

(mi){
  MI_HANDLER None; # None or Amisic
}(mi);

Things to notice:


10.4.3 Production of a top quark pair in association with a W-boson

 
 
(run){
  EVENTS=10000
  EVENT_GENERATION_MODE=Weighted

  BEAM_1=2212; BEAM_ENERGY_1=4000;
  BEAM_2=2212; BEAM_ENERGY_2=4000;

  LJET:=3
  SCF:=1.0
  QF:=1.0
  LGEN:=OpenLoops

  ME_SIGNAL_GENERATOR=Comix Amegic LGEN
  SCALES=METS{SCF*MU_F2}{SCF*MU_R2}{QF*MU_Q2}

  HARD_DECAYS=On
  STABLE[6]=0
  STABLE[24]=0
  WIDTH[6]=0
  WIDTH[24]=0
  HDH_NO_DECAY={24,2,-1}|{24,4,-3}|{24,16,-15}
  HARD_SPIN_CORRELATIONS=1
  HARD_MASS_SMEARING=0

  # technical parameters
  EXCLUSIVE_CLUSTER_MODE=1
  AMEGIC_DEFAULT_GAUGE=10
}(run);

(processes){
  Process 93 93 -> 6 -6 24;
  NLO_QCD_Mode MC@NLO {LJET};
  ME_Generator Amegic {LJET};
  Loop_Generator LGEN;
  Order_EW 1;
  End process;
}(processes);

Things to notice:


10.5 Fixed-order next-to-leading order calculations


10.5.1 Production of NTuples

Root NTuples are a convenient way to store the results of computationally expensive fixed-order calculations such that multiple analyses can be performed on them later. This example shows how to generate such NTuples and how to reweight them in order to change the factorisation and renormalisation scales. Note that in order to use this setup, Sherpa must be configured with the option --enable-root=/path/to/root, see Event output formats. If Sherpa has not been configured with Rivet analysis support, please disable the analysis using ‘-a0’ on the command line, see Command line options.
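As a rough sketch, a configuration providing the Root, HepMC and Rivet support used in this example might look as follows; all paths are placeholders and need to be adapted to the local installation:

  ./configure --enable-root=/path/to/root --enable-hepmc2=/path/to/hepmc2 --enable-rivet=/path/to/rivet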

When using NTuples, one needs to bear in mind that every calculation involving jets in the final state is exclusive in the sense that a lower cutoff on the jet transverse momenta must be imposed. It is therefore necessary to check whether the event sample stored in the NTuple is sufficiently inclusive before using it. Similar remarks apply when photons are present in the NLO calculation or when cuts on leptons have been applied at generation level to increase efficiency. Every NTuple should therefore be accompanied by appropriate documentation.

This example will generate NTuples for the process pp->lvj, where l is an electron or positron, and v is an electron (anti-)neutrino. We identify parton-level jets using the anti-k_T algorithm with R=0.4 [Cac08] . We require the transverse momentum of these jets to be larger than 20 GeV. No other cuts are applied at generation level.

 
 
(run){
  EVENTS 100k;
  EVENT_GENERATION_MODE W;
  LGEN:=BlackHat; ME_SIGNAL_GENERATOR Amegic LGEN;
  ### Analysis (please configure with --enable-rivet & --enable-hepmc2)
  ANALYSIS Rivet; ANALYSIS_OUTPUT Analysis/HTp/BVI/;
  ### NTuple output (please configure with '--enable-root')
  EVENT_OUTPUT Root[NTuple_B-like];

  BEAM_1 2212; BEAM_ENERGY_1 3500;
  BEAM_2 2212; BEAM_ENERGY_2 3500;
  SCF:=1; ### default scale factor
  SCALES VAR{SCF*sqr(sqrt(H_T2)-PPerp(p[2])-PPerp(p[3])+MPerp(p[2]+p[3]))};
  EW_SCHEME 0; WIDTH_SCHEME Fixed; # sin\theta_w -> 0.23
  DIPOLE_ALPHA 0.03;
  MASSIVE[13] 1; MASSIVE[15] 1;
}(run);
(processes){
  ### The Born piece
  Process 93 93 -> 90 91 93;
  NLO_QCD_Mode 1; NLO_QCD_Part B;
  Order_EW 2;
  End process;
  ### The virtual piece
  Process 93 93 -> 90 91 93;
  NLO_QCD_Mode 1; NLO_QCD_Part V;
  Loop_Generator LGEN;
  Order_EW 2;
  End process;
  ### The integrated subtraction piece
  Process 93 93 -> 90 91 93;
  NLO_QCD_Mode 1; NLO_QCD_Part I;
  Order_EW 2;
  End process;
}(processes);
(selector){
  FastjetFinder antikt 1 20 0 0.4;
}(selector);

(analysis){
  BEGIN_RIVET {
    -a ATLAS_2012_I1083318;
    USE_HEPMC_SHORT 1;
    IGNOREBEAMS 1;
  } END_RIVET;
}(analysis);

Things to notice:


10.5.1.1 NTuple production

Start Sherpa using the command line

  Sherpa -f Run.B-like.dat

Sherpa will first create source code for its matrix-element calculations. The run will stop with a message instructing you to compile these libraries. Do so by running

  ./makelibs -j4

Launch Sherpa again, using

  Sherpa -f Run.B-like.dat

Sherpa will then compute the Born, virtual and integrated subtraction contributions to the NLO cross section and generate events. These events are analyzed using the Rivet library and stored in a Root NTuple file called NTuple_B-like.root. We will use this NTuple later to compute an NLO uncertainty band.

The real-emission contribution, including subtraction terms, to the NLO cross section is computed using

  Sherpa -f Run.R-like.dat

Events are generated, analyzed by Rivet and stored in the Root NTuple file NTuple_R-like.root.

The two analyses of events with Born-like and real-emission-like kinematics need to be merged, which can be achieved using scripts like aidaadd. The result can then be plotted and displayed.


10.5.1.2 Usage of NTuples in Sherpa

Next we will compute the NLO uncertainty band using Sherpa. To this end, we make use of the Root NTuples generated in the previous steps.

First we re-evaluate the events with the scale increased by a factor 2:

  Sherpa -f Reweight.B-like.dat
  Sherpa -f Reweight.R-like.dat

Then we re-evaluate the events with the scale decreased by a factor of 2. Since the tag SCF multiplies the squared scale in the SCALES definition above, this corresponds to SCF:=0.25:

  Sherpa -f Reweight.B-like.dat SCF:=0.25 -A Analysis/025HTp/BVI
  Sherpa -f Reweight.R-like.dat SCF:=0.25 -A Analysis/025HTp/RS

The two contributions can again be combined using aidaadd.


10.6 Using the Python interface


10.6.1 Computing matrix elements for individual phase space points

Sherpa’s Python interface (see Python Interface) can be used to compute matrix elements for individual phase space points. For processes with coloured external particles this is so far only supported by AMEGIC++. Comix can, however, be used if all external particles are colourless.

All information about the incoming and outgoing flavours and momenta of a process is stored in a ‘cluster amplitude’. For each incoming and outgoing particle, a ‘leg’ must be added to the cluster amplitude using the ‘CreateLegFromPyVec4D’ method. This method takes a ‘Vec4D’ object, representing the four-momentum of the corresponding particle, as its first argument. The second argument specifies the particle’s flavour. Note that both momenta and flavours must be reversed for legs that correspond to incoming particles. A Flavour can be reversed by setting the second argument of its constructor to ‘1’; the first argument is the PDG ID of the particle. Sherpa.Flavour(11,1), for example, represents an anti-electron. Note that the value returned by the ‘Differential’ method of the process needs to be multiplied by a factor of two times the center of mass energy squared.

If AMEGIC++ is used as the matrix element generator, executing the script for the first time will result in AMEGIC++ writing out its process libraries and exiting. After compiling the libraries using ./makelibs, the script must be executed again in order to obtain the matrix element. On some systems, this second run might terminate with errors of the form

Library_Loader::LoadLibrary(): ./Process/lib/libProc_P2_2_2_6_24_16_5_0.so: undefined symbol: _ZN6AMEGIC10Basic_Func1XEiii

If this is the case, the library libSherpaMain.so must be preloaded, which can be achieved on a Linux system by setting LD_PRELOAD accordingly:

export LD_PRELOAD=<prefix>/lib/SHERPA-MC/libSherpaMain.so

To save time, Sherpa’s initialization routine can be prevented from integrating total cross sections by passing the command line argument INIT_ONLY=1 when executing the script. Alternatively, this argument can be added in the script itself via sys.argv.append('INIT_ONLY=1').

 
 
#!/usr/bin/python2
## from mpi4py import MPI
import sys
sys.path.append('@PYLIBDIR@')
import Sherpa
sys.argv.append('INIT_ONLY=1')

Generator=Sherpa.Sherpa()
try:
    Generator.InitializeTheRun(len(sys.argv),sys.argv)
    ## if MPI.COMM_WORLD.Get_rank()>0:
    ##    exit(1)
    
    Amp=Sherpa.Cluster_Amplitude.New()
    Amp.SetNIn(2)
    Amp.CreateLegFromPyVec4D(-Sherpa.Vec4D(45.6,0,0,45.6),Sherpa.Flavour(11,1))
    Amp.CreateLegFromPyVec4D(-Sherpa.Vec4D(45.6,0,0,-45.6),Sherpa.Flavour(11,0))
    Amp.CreateLegFromPyVec4D(Sherpa.Vec4D(45.6,0,45.6,0),Sherpa.Flavour(1,0))
    Amp.CreateLegFromPyVec4D(Sherpa.Vec4D(45.6,0,-45.6,0),Sherpa.Flavour(1,1))
    ME=Generator.GetInitHandler().GetMatrixElementHandler().GetWeight(Amp)
    print "*****************************"
    print "ME^2/s^2: ",ME
    print "*****************************"
        
except Sherpa.Exception as exc:
    exit(1)

10.6.2 Generate events using scripts

This example shows how to generate events with Sherpa using a Python wrapper script. For each event the weight, the number of trials and the particle information are printed to stdout. This script can be used as a basis for constructing interfaces to one's own analysis routines.

 
 
#!/usr/bin/python2
## from mpi4py import MPI
import sys
sys.path.append('@PYLIBDIR@')
import Sherpa

Generator=Sherpa.Sherpa()
try:
    Generator.InitializeTheRun(len(sys.argv),sys.argv)
    Generator.InitializeTheEventHandler()
    for n in range(1,1+Generator.NumberOfEvents()):
        Generator.GenerateOneEvent()
        blobs=Generator.GetBlobList();
        print "Event",n,"{"
        ## print blobs
        print "  Weight ",blobs.GetFirst(1)["Weight"];
        print "  Trials ",blobs.GetFirst(1)["Trials"];
        for i in range(0,blobs.size()):
            print "  Blob",i,"{"
            ## print blobs[i];
            print "    Incoming particles"
            for j in range(0,blobs[i].NInP()):
                part=blobs[i].InPart(j)
                ## print part
                s=part.Stat()
                f=part.Flav()
                p=part.Momentum()
                print "     ",j,": ",s,f,p
            print "    Outgoing particles"
            for j in range(0,blobs[i].NOutP()):
                part=blobs[i].OutPart(j)
                ## print part
                s=part.Stat()
                f=part.Flav()
                p=part.Momentum()
                print "     ",j,": ",s,f,p
            print "  } Blob",i
        print "} Event",n
        if ((n%100)==0): print "  Event ",n
    Generator.SummarizeRun()
        
except Sherpa.Exception as exc:
    exit(1)
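
Since the script forwards sys.argv to InitializeTheRun, run-time parameters can be appended on the command line just as for the Sherpa executable itself. As a sketch, assuming the wrapper has been saved under the placeholder name generate.py and made executable, a short test run could be started with

  ./generate.py EVENTS=100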

11. Getting help

If Sherpa exits abnormally, first check the Sherpa output for hints on the reason for the program abort, and try to figure out what has gone wrong with the help of the Manual. Note that Sherpa throwing a ‘normal_exit’ exception does not imply any abnormal program termination! When using AMEGIC++, Sherpa will exit with the message:

 
   New libraries created. Please compile.

In this case, follow the instructions given in Running Sherpa with AMEGIC++.
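In practice this amounts to compiling the newly generated process libraries and then restarting the run with the same command, for example (the run-card name is a placeholder):

  ./makelibs
  Sherpa -f Run.dat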

If this does not help, contact the Sherpa team (see the Sherpa Team section of the website http://www.sherpa-mc.de), providing all information on your setup. Please include

  1. A complete tarred and gzipped set of the ‘.dat’ files leading to the crash. Use the status recovery directory Status__<date of crash> produced before the program abort.
  2. The command line (including possible parameters) you used to start Sherpa.
  3. The installation log file, if available.

12. Authors

Sherpa was written by the Sherpa Team, see http://www.sherpa-mc.de.


13. Copying

Sherpa is free software. You can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation. You should have received a copy of the GNU General Public License along with the source for Sherpa; see the file COPYING. If not, write to the Free Software Foundation, 59 Temple Place, Suite 330, Boston, MA 02111-1307, USA.

Sherpa is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.

Sherpa was created during the Marie Curie RTNs HEPTOOLS, MCnet and LHCphenonet. The MCnet Guidelines apply, see the file GUIDELINES and http://www.montecarlonet.org/index.php?p=Publications/Guidelines.


A. References


B. Index

Index Entry: Section

1
1/ALPHAQED(0): 6.4.1 Standard Model
1/ALPHAQED(default): 6.4.1 Standard Model

A
A: 6.4.1 Standard Model
ACTIVE[<id>]: 6.4 Model Parameters
ALPHA: 6.4.5 Two Higgs Doublet Model
ALPHAS(default): 6.4.1 Standard Model
ALPHAS(MZ): 6.4.1 Standard Model
ALPHA_4_G_4: 6.4.4 Anomalous Gauge Couplings
ALPHA_5: 6.4.4 Anomalous Gauge Couplings
ANALYSIS: 6.1.5 ANALYSIS
ANALYSIS_OUTPUT: 6.1.6 ANALYSIS_OUTPUT
ANALYSIS_OUTPUT: 8.2 Rivet analyses
ANALYSIS_OUTPUT: 8.3 HZTool analyses
A_14: 6.4.7 Fourth Generation
A_24: 6.4.7 Fourth Generation
A_34: 6.4.7 Fourth Generation

B
BATCH_MODE: 6.1.8 BATCH_MODE
BEAM_1: 6.2 Beam Parameters
BEAM_2: 6.2 Beam Parameters
BEAM_ENERGY_1: 6.2 Beam Parameters
BEAM_ENERGY_2: 6.2 Beam Parameters
BEAM_REMNANTS: 6.2.2 Intrinsic Transverse Momentum
BEAM_SMAX: 6.2.1 Beam Spectra
BEAM_SMIN: 6.2.1 Beam Spectra
BEAM_SPECTRUM_1: 6.2.1 Beam Spectra
BEAM_SPECTRUM_2: 6.2.1 Beam Spectra
BUNCH_1: 6.3 ISR Parameters
BUNCH_2: 6.3 ISR Parameters

C
CABIBBO: 6.4.1 Standard Model
CKMORDER: 6.4.1 Standard Model
COMIX_ME_THREADS: 6.1.14 Multi-threading
COMIX_PS_THREADS: 6.1.14 Multi-threading
CORE_SCALE: 6.5.4.7 METS scale setting with multiparton core processes
COUPLINGS: 6.5.5 COUPLINGS
CSS_EW_MODE: 6.10.4 CS Shower options
CSS_KIN_SCHEME: 6.10.4 CS Shower options
CSS_MAXEM: 6.10.4 CS Shower options
CSS_NOEM: 6.10.4 CS Shower options
CSS_PT2MIN: 6.10.4 CS Shower options
CSS_SHOWER_SCALE2_FACTOR: 6.5.4.6 Scale variations in parton showered and merged samples

D
DEACTIVATE_GGH: 6.4.6 Effective Higgs Couplings
DEACTIVATE_PPH: 6.4.6 Effective Higgs Couplings
DECAYMODEL: 6.12.2 Hadron decays
DECAYPATH: 6.12.2 Hadron decays
DECAY_TAU_HARD: 6.9.8 DECAY_TAU_HARD
Delphes: 6.1.12 Event output formats

E
EHC_SCALE2: 6.4.6 Effective Higgs Couplings
ERROR: 6.8.1 ERROR
ETA: 6.4.1 Standard Model
EVENTS: 6.1.1 EVENTS
EVENT_GENERATION_MODE: 6.5.3 EVENT_GENERATION_MODE
EVENT_OUTPUT: 6.1.12 Event output formats
EVT_FILE_PATH: 6.1.12 Event output formats
EVT_OUTPUT: 6.1.2 OUTPUT
EW_SCHEME: 6.4.1 Standard Model

F
F4_GAMMA: 6.4.4 Anomalous Gauge Couplings
F4_Z: 6.4.4 Anomalous Gauge Couplings
F5_GAMMA: 6.4.4 Anomalous Gauge Couplings
F5_Z: 6.4.4 Anomalous Gauge Couplings
FACTORIZATION_SCALE_FACTOR: 6.5.4.5 Simple scale variations
FeynRules: 6.4.8 FeynRules model
FILE_SIZE: 6.1.12 Event output formats
FINISH_OPTIMIZATION: 6.8.5 FINISH_OPTIMIZATION
FINITE_TOP_MASS: 6.4.6 Effective Higgs Couplings
FINITE_W_MASS: 6.4.6 Effective Higgs Couplings
FRAGMENTATION: 6.12.1 Fragmentation
FR_IDENTFILE: 6.4.8 FeynRules model
FR_INTERACTIONS: 6.4.8 FeynRules model
FR_PARAMCARD: 6.4.8 FeynRules model
FR_PARAMDEF: 6.4.8 FeynRules model
FR_PARTICLES: 6.4.8 FeynRules model

G
G1_GAMMA: 6.4.4 Anomalous Gauge Couplings
G1_Z: 6.4.4 Anomalous Gauge Couplings
G4_GAMMA: 6.4.4 Anomalous Gauge Couplings
G4_Z: 6.4.4 Anomalous Gauge Couplings
G5_GAMMA: 6.4.4 Anomalous Gauge Couplings
G5_Z: 6.4.4 Anomalous Gauge Couplings
G_NEWTON: 6.4.3 ADD Model of Large Extra Dimensions

H
H1_GAMMA: 6.4.4 Anomalous Gauge Couplings
H1_Z: 6.4.4 Anomalous Gauge Couplings
H2_GAMMA: 6.4.4 Anomalous Gauge Couplings
H2_Z: 6.4.4 Anomalous Gauge Couplings
H3_GAMMA: 6.4.4 Anomalous Gauge Couplings
H3_Z: 6.4.4 Anomalous Gauge Couplings
H4_GAMMA: 6.4.4 Anomalous Gauge Couplings
H4_Z: 6.4.4 Anomalous Gauge Couplings
HARD_DECAYS: 6.9 Hard decays
HARD_MASS_SMEARING: 6.9.6 HARD_MASS_SMEARING
HARD_SPIN_CORRELATIONS: 6.9.3 HARD_SPIN_CORRELATIONS
HARD_SPIN_CORRELATIONS: 6.12.2.5 Further remarks
HDH_NO_DECAY: 6.9.1 HDH_NO_DECAY
HDH_ONLY_DECAY: 6.9.2 HDH_ONLY_DECAY
HDH_SET_WIDTHS: 6.9.5 HDH_SET_WIDTHS
HEPEVT: 6.1.12 Event output formats
HepMC_GenEvent: 6.1.12 Event output formats
HepMC_Short: 6.1.12 Event output formats

I
INTEGRATOR: 6.8.2 INTEGRATOR
ISR_E_ORDER: 6.3 ISR Parameters
ISR_E_SCHEME: 6.3 ISR Parameters
ISR_SMAX: 6.3 ISR Parameters
ISR_SMIN: 6.3 ISR Parameters

K
KAPPAT_GAMMA: 6.4.4 Anomalous Gauge Couplings
KAPPAT_Z: 6.4.4 Anomalous Gauge Couplings
KAPPA_GAMMA: 6.4.4 Anomalous Gauge Couplings
KAPPA_Z: 6.4.4 Anomalous Gauge Couplings
KFACTOR: 6.5.6 KFACTOR
KK_CONVENTION: 6.4.3 ADD Model of Large Extra Dimensions
K_PERP_MEAN_1: 6.2.2 Intrinsic Transverse Momentum
K_PERP_MEAN_2: 6.2.2 Intrinsic Transverse Momentum
K_PERP_SIGMA_1: 6.2.2 Intrinsic Transverse Momentum
K_PERP_SIGMA_2: 6.2.2 Intrinsic Transverse Momentum

L
LAMBDA: 6.4.1 Standard Model
LAMBDAT_GAMMA: 6.4.4 Anomalous Gauge Couplings
LAMBDAT_Z: 6.4.4 Anomalous Gauge Couplings
LAMBDA_GAMMA: 6.4.4 Anomalous Gauge Couplings
LAMBDA_Z: 6.4.4 Anomalous Gauge Couplings
LHEF: 6.1.12 Event output formats
LHOLE_BOOST_TO_CMS: 9.4 External one-loop ME
LHOLE_CONTRACTFILE: 9.4 External one-loop ME
LHOLE_IR_REGULARISATION: 9.4 External one-loop ME
LHOLE_OLP: 9.4 External one-loop ME
LHOLE_ORDERFILE: 9.4 External one-loop ME
LOG_FILE: 6.1.3 LOG_FILE
Loop_Generator: 8.4 MCFM interface

M
MASSIVE[<id>]: 6.4 Model Parameters
MASS[<id>]: 6.4 Model Parameters
MASS[<id>]: 6.12.2 Hadron decays
MAX_PROPER_LIFETIME: 6.12.2 Hadron decays
ME_QED: 6.13.2.1 ME_QED
ME_SIGNAL_GENERATOR: 6.5.1 ME_SIGNAL_GENERATOR
MI_HANDLER: 6.11.1 MI_HANDLER
MODEL: 6.4 Model Parameters
M_CUT: 6.4.3 ADD Model of Large Extra Dimensions
M_S: 6.4.3 ADD Model of Large Extra Dimensions

N
NUM_ACCURACY: 6.1.9 NUM_ACCURACY
N_ED: 6.4.3 ADD Model of Large Extra Dimensions

O
ORDER_ALPHAS: 6.4.1 Standard Model
OUTPUT: 6.1.2 OUTPUT
OUTPUT_MIXING: 6.4.7 Fourth Generation
OUTPUT_PRECISION: 6.1.12 Event output formats

P
PARTICLE_CONTAINER: 6.6.1.2 Particle containers
PATH: 5. Input structure
PDF_LIBRARY: 6.3 ISR Parameters
PDF_LIBRARY_1: 6.3 ISR Parameters
PDF_LIBRARY_2: 6.3 ISR Parameters
PDF_SET: 6.3 ISR Parameters
PDF_SET_1: 6.3 ISR Parameters
PDF_SET_2: 6.3 ISR Parameters
PDF_SET_VERSION: 6.3 ISR Parameters
PG_THREADS: 6.1.14 Multi-threading
PHI_2: 6.4.7 Fourth Generation
PHI_3: 6.4.7 Fourth Generation
PHI_L2: 6.4.7 Fourth Generation
PHI_L3: 6.4.7 Fourth Generation
PROFILE_FUNCTION: 6.11.3 PROFILE_FUNCTION
PROFILE_PARAMETERS: 6.11.4 PROFILE_PARAMETERS
PSI_ADJUST_POINTS: 6.1.13 MPI parallelization
PSI_NMAX: 6.8.6 PSI_NMAX

R
RANDOM_SEED: 6.1.4 RANDOM_SEED
RANDOM_SEED1: 6.1.4 RANDOM_SEED
RANDOM_SEED2: 6.1.4 RANDOM_SEED
REFERENCE_SCALE: 6.11.5 REFERENCE_SCALE
RENORMALIZATION_SCALE_FACTOR: 6.5.4.5 Simple scale variations
RESCALE_EXPONENT: 6.11.6 RESCALE_EXPONENT
RESOLVE_DECAYS: 6.9.7 RESOLVE_DECAYS
RESULT_DIRECTORY: 6.5.2 RESULT_DIRECTORY
RHO: 6.4.1 Standard Model
Root: 6.1.12 Event output formats
RS_INTEGRATOR: 6.8.3 RS_INTEGRATOR
RUNDATA: 5. Input structure

S
SCALES: 6.5.4 SCALES
SCALE_MIN: 6.11.2 SCALE_MIN
SHERPA_CPP_PATH: 6.1.10 SHERPA_CPP_PATH
SHERPA_LDADD: 9. Customization
SHERPA_LIB_PATH: 6.1.11 SHERPA_LIB_PATH
SHOW_MODEL_SYNTAX: 6.4 Model Parameters
SHOW_PDF_SETS: 6.3 ISR Parameters
SHOW_VARIABLE_SYNTAX: 6.7.4 Universal selector
SIGMA_ND_FACTOR: 6.11.7 SIGMA_ND_FACTOR
SIN2THETAW: 6.4.1 Standard Model
SLHA_INPUT: 6.4.2 Minimal Supersymmetric Standard Model
SOFT_MASS_SMEARING: 6.12.2 Hadron decays
SOFT_SPIN_CORRELATIONS: 6.12.2.5 Further remarks
SPECTRUM_FILE_1: 6.2.1 Beam Spectra
SPECTRUM_FILE_2: 6.2.1 Beam Spectra
SP_NLOCT: 6.5.4.6 Scale variations in parton showered and merged samples
STABLE[<id>]: 6.4 Model Parameters
STABLE[<id>]: 6.9 Hard decays
STABLE[<id>]: 6.12.2 Hadron decays
STORE_DECAY_RESULTS: 6.9.4 STORE_DECAY_RESULTS

T
TAN(BETA): 6.4.5 Two Higgs Doublet Model
THETA_L14: 6.4.7 Fourth Generation
THETA_L24: 6.4.7 Fourth Generation
THETA_L34: 6.4.7 Fourth Generation
TIMEOUT: 6.1.7 TIMEOUT

U
UNITARIZATION_N: 6.4.4 Anomalous Gauge Couplings
UNITARIZATION_SCALE: 6.4.4 Anomalous Gauge Couplings
USE_PDF_ALPHAS: 6.4.1 Standard Model

V
VEGAS: 6.8.4 VEGAS
VEV: 6.4.1 Standard Model

W
WIDTH[<id>]: 6.4 Model Parameters
WIDTH[<id>]: 6.9.5 HDH_SET_WIDTHS
WIDTH[<id>]: 6.12.2 Hadron decays
WIDTH_SCHEME: 6.4.1 Standard Model

Y
YFS_IR_CUTOFF: 6.13.1.3 YFS_IR_CUTOFF
YFS_MODE: 6.13.1.1 YFS_MODE
YFS_USE_ME: 6.13.1.2 YFS_USE_ME
YUKAWA[<id>]: 6.4 Model Parameters
YUKAWA_MASSES: 6.5.7 YUKAWA_MASSES




This document was generated by Frank Siegert on March 14, 2013 using texi2html 1.82.