
# Sherpa Manual Version 3.0.0

The Sherpa Team, see http://sherpa.hepforge.org

Permission is granted to copy, distribute and/or modify this document under the terms of the GNU General Public License.

# 1 Introduction

Sherpa is a Monte Carlo event generator for the Simulation of High-Energy Reactions of PArticles in lepton-lepton, lepton-photon, photon-photon, lepton-hadron and hadron-hadron collisions. This document provides information to help users understand and apply Sherpa for their physics studies. The event generator is introduced, in broad terms, and the installation and running of the program are outlined. The various options and parameters specifying the program are compiled, and their meanings are explained. This document does not aim at giving a complete description of the physics content of Sherpa; for this, the authors refer the reader to the original publication, [Gle08b].

## 1.1 Introduction to Sherpa

Sherpa [Gle08b] is a Monte Carlo event generator that provides complete hadronic final states in simulations of high-energy particle collisions. The produced events may be passed into detector simulations used by the various experiments. The entire code has been written in C++, like its competitors Herwig++ [Bah08b] and Pythia 8 [Sjo07].

Sherpa simulations can be achieved for the following types of collisions:

• for lepton–lepton collisions, as explored by the CERN LEP experiments,
• for lepton–photon collisions,
• for photon–photon collisions with both photons either resolved or unresolved,
• for deep-inelastic lepton-hadron scattering, as investigated by the HERA experiments at DESY, and,
• in particular, for hadronic interactions as studied at the Fermilab Tevatron or the CERN LHC.

The list of physics processes that can be simulated with Sherpa covers all reactions in the Standard Model. Other models can be implemented either using Sherpa’s own model syntax, or by using the generic interface [Hoe14c] to the UFO output [Deg11] of FeynRules [Chr08],[Chr09]. The Sherpa program owes this versatility to the two inbuilt matrix-element generators, AMEGIC++ and Comix, and to its phase-space generator Phasic [Kra01], which automatically calculate and integrate tree-level amplitudes for the implemented models. This feature enables Sherpa to be used as a cross-section integrator and parton-level event generator as well. This aspect has been extensively tested, see e.g. [Gle03b], [Hag05].

As a second key feature of Sherpa the program provides an implementation of the merging approaches of [Hoe09] and [Geh12], [Hoe12a]. These algorithms yield improved descriptions of multijet production processes, which copiously appear at lepton-hadron colliders like HERA [Car09], or hadron-hadron colliders like the Tevatron and the LHC, [Kra04], [Kra05], [Gle05], [Hoe09a]. An older approach, implemented in previous versions of Sherpa and known as the CKKW technique [Cat01a], [Kra02], has been compared in great detail in [Alw07] with other approaches, such as the MLM merging prescription [Man01] as implemented in Alpgen [Man02], Madevent [Ste94], [Mal02a], or Helac [Kan00], [Pap05] and the CKKW-L prescription [Lon01], [Lav05] of Ariadne [Lon92].

This manual contains all information necessary to get started with Sherpa as quickly as possible. It lists options and switches of interest for steering the simulation of various physics aspects of the collision. It does not describe the physics simulated by Sherpa or the underlying structure of the program. Many external codes can be linked with Sherpa. This manual explains how to do this, but it does not contain a description of the external programs. You are encouraged to read their corresponding documentations, which are referenced in the text. If you use external programs with Sherpa, you are encouraged to cite them accordingly.

The MCnet Guidelines apply to Sherpa. You are kindly asked to cite [Gle08b] if you have used the program in your work.

The Sherpa authors strongly recommend the study of the manuals and many excellent publications on different aspects of event generation and physics at collider experiments written by other event generator authors.

This manual is organized as follows: in Basic structure the modular structure intrinsic to Sherpa is introduced. Getting started contains information about and instructions for the installation of the package. There is also a description of the steps that are needed to run Sherpa and generate events. The Input structure is then discussed, and the ways in which Sherpa can be steered are explained. All parameters and options are discussed in Parameters. Advanced Tips and tricks are detailed, and some options for Customization are outlined for those more familiar with Sherpa. There is also a short description of the different Examples provided with Sherpa.

The construction of Monte Carlo programs requires several assumptions, approximations and simplifications of complicated physics aspects. The results of event generators should therefore always be verified and cross-checked with results obtained by other programs, and they should be interpreted with care and common sense.

## 1.2 Basic structure

Sherpa is a modular program. This reflects the paradigm of Monte Carlo event generation, in which the full simulation is split into well-defined event phases, based on QCD factorization theorems. Accordingly, each module encapsulates a different aspect of event generation for high-energy particle reactions. It resides within its own namespace and is located in its own subdirectory of the same name. The main module called SHERPA steers the interplay of all modules – or phases – and the actual generation of the events. Altogether, the following modules are currently distributed with the Sherpa framework:

• ATOOLS

This is the toolbox for all other modules. Since the Sherpa framework does not rely on CLHEP etc., the ATOOLS contain classes with mathematical tools like vectors and matrices, organization tools such as read-in or write-out devices, and physics tools like particle data or classes for the event record.

• METOOLS

In this module some general methods for the evaluation of helicity amplitudes have been accumulated. They are used in AMEGIC++, the EXTRA_XS module, and the matrix-element generator Comix. This module also contains helicity amplitudes for some generic matrix elements that are used, e.g., by HADRONS++. Further, METOOLS also contains a simple library of tensor integrals which are used in the PHOTONS++ matrix-element corrections.

• BEAM

This module manages the treatment of the initial beam spectra for different colliders. The three options which are currently available include a monochromatic beam, which requires no extra treatment, photon emission in the Equivalent Photon Approximation (EPA), and, for the case of an electron collider, laser backscattering off the electrons, leading to photonic initial states.

• PDF

The PDF module provides access to various parton density functions (PDFs) for the proton and the photon. In addition, it hosts an interface to the LHAPDF package, which makes a full wealth of PDFs available. An (analytical) electron structure function is supplied in the PDF module as well.

• MODEL

This module sets up the physics model for the simulation. It initializes particle properties, basic physics parameters (coupling constants, mixing angles, etc.) and the set of available interaction vertices (Feynman rules). By now, there exist explicit implementations of the Standard Model (SM), its Minimal Supersymmetric extension (MSSM), the ADD model of large extra dimensions, and a comprehensive set of operators parametrizing anomalous triple and quartic electroweak gauge boson couplings. An Interface to FeynRules is also available.

• EXTRA_XS

In this module a (limited) collection of analytic expressions for simple 2->2 processes within the SM is provided, together with classes embedding them into the Sherpa framework. This also includes methods used for the definition of the starting conditions for parton-shower evolution, such as colour connections and the hard scale of the process.

• AMEGIC++

AMEGIC++ [Kra01] is Sherpa’s original matrix-element generator. It employs the method of helicity amplitudes [Kle85], [Bal92] and works as a generator which generates generators: during the initialization run, the matrix elements for a given set of processes, as well as their specific phase-space mappings, are created by AMEGIC++. The corresponding C++ source code is written to disk and compiled by the user using the makelibs script or scons. The produced libraries are linked to the main program automatically in the next run and used to calculate cross sections and to generate weighted or unweighted events. AMEGIC++ has been tested for multi-particle production in the Standard Model [Gle03b]. Its MSSM implementation has been validated in [Hag05].

• COMIX

Comix is a multi-leg tree-level matrix element generator, based on the color dressed Berends-Giele recursive relations [Duh06]. It employs a new algorithm to recursively compute phase-space weights. The module is a useful supplement to older matrix element generators like AMEGIC++ in the high multiplicity regime. Due to the usage of colour sampling it is particularly suited for an interface with parton shower simulations and can hence be easily employed for the ME-PS merging within Sherpa. It is Sherpa’s default large multiplicity matrix element generator for the Standard Model.

• PHASIC++

All base classes dealing with the Monte Carlo phase-space integration are located in this module. For the evaluation of the initial-state (laser backscattering, initial-state radiation) and final-state integrals, the adaptive multi-channel method of [Kle94], [Ber94] is used by default together with a Vegas optimization [Lep80] of the single channels. In addition, final-state integration accomplished by Rambo [Kle85a], Sarge [Dra00] and HAAG [Ham02] is supported.

• CSSHOWER++

This is the module hosting Sherpa’s default parton shower, which was published in [Sch07a]. The corresponding shower model was originally proposed in [Nag05], [Nag06]. It relies on the factorisation of real-emission matrix elements in the CS subtraction framework [Cat96b], [Cat02]. There exist four general types of CS dipole terms that capture the complete infrared singularity structure of next-to-leading order QCD amplitudes. In the large-N_C limit, the corresponding splitter and spectator partons are always adjacent in colour space. The dipole functions for the various cases, taken in four dimensions and averaged over spins, are used as shower splitting kernels.

• DIRE

This is the module hosting Sherpa’s alternative parton shower [Hoe15]. In the Dire model, the ordering variable exhibits a symmetry in emitter and spectator momenta, such that the dipole-like picture of the evolution can be re-interpreted as a dipole picture in the soft limit. At the same time, the splitting functions are regularized in the soft anti-collinear region using partial fractioning of the soft eikonal in the Catani-Seymour approach [Cat96b], [Cat02]. They are then modified to satisfy the sum rules in the collinear limit. This leads to an invariant formulation of the parton-shower algorithm, which is in complete analogy to the standard DGLAP case, but generates the correct soft anomalous dimension at one-loop order.

• AMISIC++

AMISIC++ contains classes for the simulation of multiple parton interactions according to [Sjo87]. In Sherpa the treatment of multiple interactions has been extended by allowing for the simultaneous evolution of an independent parton shower in each of the subsequent (semi-)hard collisions. The beam–beam remnants are organized such that partons which are adjacent in colour space are also adjacent in momentum space. The corresponding classes for beam remnant handling reside in the PDF and SHERPA modules.

• AHADIC++

AHADIC++ is Sherpa’s hadronization package, for translating the partons (quarks and gluons) into primordial hadrons, to be further decayed in HADRONS++. The algorithm is based on the cluster fragmentation ideas presented in [Got82], [Got83], [Web83], [Got86] and implemented in the Herwig family of event generators. The actual Sherpa implementation, based on [Win03], differs from the original model in several respects.

• HADRONS++

HADRONS++ is the module for simulating hadron and tau-lepton decays. The resulting decay products respect full spin correlations (if desired). Several matrix elements and form-factor models have been implemented, such as the Kühn-Santamaría model, form-factor parametrizations from Resonance Chiral Theory for the tau and form factors from heavy quark effective theory or light cone sum rules for hadron decays.

• PHOTONS++

The PHOTONS++ module holds routines to add QED radiation to hadron and tau-lepton decays. This has been achieved by an implementation of the YFS algorithm [Yen61]. The structure of PHOTONS++ is such that the formalism can be extended to scattering processes and to a systematic improvement to higher orders of perturbation theory [Sch08]. The application of PHOTONS++ therefore accounts for corrections that usually are added by the application of PHOTOS [Bar93] to the final state.

• SHERPA

Finally, SHERPA is the steering module that initializes, controls and evaluates the different phases during the entire process of event generation. All routines for the combination of truncated showers and matrix elements, which are independent of the specific matrix element and parton shower generator are found in this module.

The actual executable of the Sherpa generator can be found in the subdirectory <prefix>/bin/ and is called Sherpa. To run the program, input files have to be provided in the current working directory or elsewhere by specifying the corresponding path, see Input structure. All output files are then written to this directory as well.

# 2 Getting started

## 2.1 Installation

Sherpa is distributed as a tarred and gzipped file named SHERPA-MC-3.0.0.tar.gz, and can be unpacked in the current working directory with

 tar -zxf SHERPA-MC-3.0.0.tar.gz


Alternatively, it can also be accessed via Git through the location specified on the download page. In that case, before continuing, it is necessary to construct the build scripts by running autoreconf -i once after cloning the Git repo.

To guarantee successful installation, the following tools should be available on the system:

• C++ and Fortran compilers (e.g. from the gcc suite)
• make
• SQLite 3 (including the -dev package if installed through a package manager)

If SQLite is installed in a non-standard location, please specify the installation path using option ‘--with-sqlite3=/path/to/sqlite’. If SQLite is not installed on your system, the Sherpa configure script provides the fallback option of installing it into the same directory as Sherpa itself. To do so, please run configure with option ‘--with-sqlite3=install’ (This may not work if you are cross-compiling using ‘--host’. In this case, please install SQLite by yourself and reconfigure using ‘--with-sqlite3=/path/to/sqlite’).

Compilation and installation proceed through the following commands:

 ./configure

 make install


If not specified differently, the directory structure after installation is organized as follows

• $(prefix)/bin: Sherpa executable and scripts
• $(prefix)/include: header files
• $(prefix)/lib: basic libraries
• $(prefix)/share: PDFs, Decaydata, fallback run cards

The installation directory $(prefix) can be specified by using the ./configure --prefix /path/to/installation/target directive and defaults to the current working directory. If Sherpa has to be moved to a different directory after the installation, one has to set the following environment variables for each run:

• SHERPA_INCLUDE_PATH=$newprefix/include/SHERPA-MC
• SHERPA_SHARE_PATH=$newprefix/share/SHERPA-MC
• SHERPA_LIBRARY_PATH=$newprefix/lib/SHERPA-MC
• LD_LIBRARY_PATH=$SHERPA_LIBRARY_PATH:$LD_LIBRARY_PATH
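For example, after moving the installation to a hypothetical /opt/sherpa, the variables would be set as follows in a bash-like shell (the path is purely illustrative):

```shell
# Hypothetical new installation prefix after moving Sherpa
newprefix=/opt/sherpa
export SHERPA_INCLUDE_PATH=$newprefix/include/SHERPA-MC
export SHERPA_SHARE_PATH=$newprefix/share/SHERPA-MC
export SHERPA_LIBRARY_PATH=$newprefix/lib/SHERPA-MC
# Prepend the library path so the dynamic linker finds the Sherpa libraries
export LD_LIBRARY_PATH=$SHERPA_LIBRARY_PATH:$LD_LIBRARY_PATH
```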

Sherpa can be interfaced with various external packages, e.g. HepMC, for event output, or LHAPDF, for PDFs. For this to work, the user has to pass the appropriate commands to the configure step. This is achieved as shown below:

./configure --enable-hepmc2=/path/to/hepmc2 --enable-lhapdf=/path/to/lhapdf


Here, the paths have to point to the top level installation directories of the external packages, i.e. the ones containing the lib/, share/, ... subdirectories.

For a complete list of possible configuration options run ‘./configure --help’.

The Sherpa package has successfully been compiled, installed and tested on SuSE, RedHat / Scientific Linux and Debian / Ubuntu Linux systems using the GNU C++ compiler versions 3.2, 3.3, 3.4, and 4.x as well as on Mac OS X 10 using the GNU C++ compiler version 4.0. In all cases the GNU FORTRAN compiler g77 or gfortran has been employed.

If you have multiple compilers installed on your system, you can use shell environment variables to specify which of these are to be used. A list of the available variables is printed with

./configure --help


run in the Sherpa top-level directory; the relevant variables are listed in the last lines of the output. Depending on the shell you are using, you can set these variables e.g. with export (bash) or setenv (csh). Examples:

export CXX=g++-3.4
export CC=gcc-3.4
export CPP=cpp-3.4


#### Installation on Cray XE6 / XK7

Sherpa has been installed successfully on Cray XE6 and Cray XK7. The following configure command should be used

./configure <your options> --enable-mpi --host=i686-pc-linux CC=CC CXX=CC FC='ftn -fPIC' LDFLAGS=-dynamic


Sherpa can then be run with

aprun -n <nofcores> <prefix>/bin/Sherpa -lrun.log


The modularity of the code requires setting the environment variable ‘CRAY_ROOTFS’, cf. the Cray system documentation.

#### Installation on IBM BlueGene/Q

Sherpa has been installed successfully on an IBM BlueGene/Q system. The following configure command should be used

./configure <your options> --enable-mpi --host=powerpc64-bgq-linux CC=mpic++ CXX=mpic++ FC='mpif90 -fPIC -funderscoring' LDFLAGS=-dynamic


Sherpa can then be run with

qsub -A <account> -n <nofcores> -t 60 --mode c16 <prefix>/bin/Sherpa -lrun.log


#### MacOS Installation

Since it is more complicated to set up the necessary compiler environment on a Mac we recommend using a package manager to install Sherpa and its dependencies. David Hall is hosting a repository for Homebrew packages at: http://davidchall.github.io/homebrew-hep/

In case you are compiling yourself, please be aware of the following issues which have come up on Mac installations before:

• On 10.4 and 10.5 only gfortran is supported, and you will have to install it e.g. from HPC
• If you want to reconfigure, i.e. run the command autoreconf or (g)libtoolize, you have to make sure that you have a recent version of GNU libtool (>=1.5.22 has been tested). Don’t confuse this with the native non-GNU libtool which is installed in /usr/bin/libtool and is of no use! Also make sure that your autotools (autoconf >= 2.61, automake >= 1.10 have been tested) are of recent versions. All this should not be necessary, though, if you only run configure.
• Make sure that you don’t have two versions of g++ and libstdc++ installed and being used inconsistently. This appeared e.g. when the gcc suite was installed through Fink to get gfortran. This caused Sherpa to use the native MacOS compilers but link the libstdc++ from Fink (which is located in /sw/lib). You can find out which libraries are used by Sherpa by running otool -L bin/Sherpa

## 2.2 Running Sherpa

The Sherpa executable resides in the directory <prefix>/bin/ where <prefix> denotes the path to the Sherpa installation directory. The way a particular simulation will be accomplished is defined by several parameters, which can all be listed in a common file, or data card (Parameters can be alternatively specified on the command line; more details are given in Input structure). This steering file is called Sherpa.yaml and some example setups (i.e. Sherpa.yaml files) are distributed with the current version of Sherpa. They can be found in the directory <prefix>/share/SHERPA-MC/Examples/, and descriptions of some of their key features can be found in the section Examples.

Please note: It is not in general possible to reuse steering files from previous Sherpa versions. Often there are small changes in the parameter syntax of the files from one version to the next. These changes are documented in our manuals. In addition, update any custom Decaydata directories you may have used (and reapply any changes which you might have applied to the old ones), see Hadron decays.

The very first step in running Sherpa is therefore to adjust all parameters to the needs of the desired simulation. The details for doing this properly are given in Parameters. In this section, the focus is on the main issues for a successful operation of Sherpa. This is illustrated by discussing and referring to the parameter settings that come in the example steering file ./Examples/V_plus_Jets/LHC_ZJets/Sherpa.yaml, cf. Z+jets production. This is a simple configuration created to show the basics of how to operate Sherpa. It should be stressed that this steering file relies on many of Sherpa’s default settings, and, as such, you should understand those settings before using it to look at physics. For more information on the settings and parameters in Sherpa, see Parameters, and for more examples see the Examples section.

### 2.2.1 Process selection and initialization

Central to any Monte Carlo simulation is the choice of the hard processes that initiate the events. These hard processes are described by matrix elements. In Sherpa, the selection of processes happens in the PROCESSES part of the steering file. Only a few 2->2 reactions have been hard-coded. They are available in the EXTRA_XS module. The more usual way to compute matrix elements is to employ one of Sherpa’s automated tree-level generators, AMEGIC++ and Comix, see Basic structure. If no matrix-element generator is selected, using the ME_GENERATORS tag, then Sherpa will use whichever generator is capable of calculating the process, checking Comix first, then AMEGIC++ and then EXTRA_XS. Therefore, for some processes, several of the options are used. In this example, however, all processes will be calculated by Comix.

To begin with the example, the Sherpa run has to be started by changing into the <prefix>/share/SHERPA-MC/Examples/V_plus_Jets/LHC_ZJets/ directory and executing

<prefix>/bin/Sherpa


You may also run from an arbitrary directory, employing <prefix>/bin/Sherpa --path=<prefix>/share/SHERPA-MC/Examples/V_plus_Jets/LHC_ZJets. In the example, an absolute path is passed to the optional argument --path. It may also be specified relative to the current working directory. If it is not specified at all, the current working directory is understood.

For good book-keeping, it is highly recommended to reserve different subdirectories for different simulations as is demonstrated with the example setups.

If AMEGIC++ is used, Sherpa requires an initialization run, where C++ source code is written to disk. This code must be compiled into dynamic libraries by the user by running the makelibs script in the working directory. Alternatively, if scons is installed, you may invoke <prefix>/bin/make2scons and run scons install. After this step Sherpa is run again for the actual cross section integrations and event generation. For more information on and examples of how to run Sherpa using AMEGIC++, see Running Sherpa with AMEGIC++.

If the internal hard-coded matrix elements or Comix are used, and AMEGIC++ is not, an initialization run is not needed, and Sherpa will calculate the cross sections and generate events during the first run.

As the cross sections are integrated, the integration over phase space is optimized to arrive at an efficient event generation. Subsequently, events are generated if a number of events is passed to the optional argument --events or set in the Sherpa.yaml file with the EVENTS parameter.
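As a minimal illustration, fixing the event number in the steering file amounts to a single line in Sherpa.yaml (the value is arbitrary):

```yaml
EVENTS: 10000
```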

The generated events are not stored into a file by default; for details on how to store the events see Event output formats. Note that the computational effort to go through this procedure of generating, compiling and integrating the matrix elements of the hard processes depends on the complexity of the parton-level final states. For low multiplicities (2->2, 3, 4), of course, it completes almost instantly.

Usually more than one generation run is wanted. As long as the parameters that affect the matrix-element integration are not changed, it is advantageous to store the cross sections obtained during the generation run for later use. This saves CPU time especially for large final-state multiplicities of the matrix elements. Per default, Sherpa stores these integration results in a directory called Results/. The name of the output directory can be customised via

<prefix>/bin/Sherpa -r <result>/


or with RESULT_DIRECTORY: <result>/ in the steering file, see RESULT_DIRECTORY. The storage of the integration results can be prevented by either using

<prefix>/bin/Sherpa -g


or by specifying GENERATE_RESULT_DIRECTORY: false in the steering file.

If physics parameters change, the cross sections have to be recomputed. The new results should either be stored in a new directory, or the <result> directory may be re-used once it has been emptied. Parameters which require a recomputation are any parameters affecting the Models, Matrix elements or Selectors. Standard examples are changing the magnitude of couplings, renormalisation or factorisation scales, changing the PDF or centre-of-mass energy, or applying different cuts at the parton level. If unsure whether a recomputation is required, a simple test is to temporarily use a different value for the RESULT_DIRECTORY option and check whether the new integration numbers (statistically) comply with the stored ones.
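The check just described can be performed by pointing the integration at a throwaway directory in the steering file; the directory name Results_test here is purely an illustration:

```yaml
RESULT_DIRECTORY: Results_test
```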

A warning on the validity of the process libraries is in order here: it is absolutely mandatory to generate new library files whenever the physics model is altered, i.e. particles are added or removed, such that new diagrams may contribute, or existing diagrams may no longer contribute, to the same final states. Also, when particle masses are switched on or off, new library files must be generated (however, masses may be changed between non-zero values keeping the same process libraries). The best recipe is to create a new and separate setup directory in such cases. Otherwise the Process and Results directories have to be erased:

rm -rf Process/ Results/


In either case one has to start over with the whole initialization procedure to prepare for the generation of events.

### 2.2.2 The example set-up: Z+Jets at the LHC

The setup file (Sherpa.yaml) provided in ./Examples/V_plus_Jets/LHC_ZJets/ can be considered as a standard example to illustrate the generation of fully hadronised events in Sherpa, cf. Z+jets production. Such events will include effects from parton showering, hadronisation into primary hadrons and their subsequent decays into stable hadrons. Moreover, the example chosen here nicely demonstrates how Sherpa is used in the context of merging matrix elements and parton showers [Hoe09]. In addition to the aforementioned corrections, this simulation of inclusive Drell-Yan production (electron-positron channel) will then include higher-order jet corrections at the tree level. As a result the transverse-momentum distribution of the Drell-Yan pair and the individual jet multiplicities as measured by the ATLAS and CMS collaborations at the LHC can be well described.

Before event generation, the initialization procedure as described in Process selection and initialization has to be completed. The matrix-element processes included in the setup are the following:

  proton proton -> parton parton -> electron positron + up to four partons


In the PROCESSES list of the steering file this translates into

  - Process: "93 93 -> 90 90 93{4}"
    Order: {QCD: Any, EW: 2}
    CKKW: 30


The physics model for these processes is the Standard Model (‘SM’) which is the default setting of the parameter MODEL and is therefore not set explicitly. Fixing the order of electroweak couplings to ‘2’, matrix elements of all partonic subprocesses for Drell-Yan production without any and with up to four extra QCD parton emissions will be generated. Proton–proton collisions are considered at beam energies of 3.5 TeV. The default PDF used by Sherpa is CT14. Model parameters and couplings can all be defined in the Sherpa.yaml file. Similarly, the way couplings are treated can be defined. As no options are set, the default parameters and scale setting procedures are used.
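For orientation, the beam setup described above could also be written explicitly in Sherpa.yaml. The fragment below is a sketch assuming Sherpa’s YAML beam parameters; 2212 is the PDG code for the proton, and energies are per beam in GeV. Compare with the example files shipped with Sherpa before relying on it:

```yaml
BEAMS: 2212          # PDG code of both incoming beams (proton)
BEAM_ENERGIES: 3500  # energy per beam in GeV
```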

The QCD radiation matrix elements have to be regularised to obtain meaningful cross sections. This is achieved by specifying CKKW: 30 when defining the process in Sherpa.yaml. Simultaneously, this tag initiates the ME-PS merging procedure. To eventually obtain fully hadronized events, the FRAGMENTATION setting has been left on its default value ‘Ahadic’ (and therefore been omitted from the steering file), which will run Sherpa’s cluster hadronisation, and the DECAYMODEL setting has its default value ‘Hadrons’, which will run Sherpa’s hadron decays. Additionally, corrections owing to photon emissions are taken into account.
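Written out explicitly, the defaults mentioned above would correspond to the following (redundant) lines in Sherpa.yaml; adding them changes nothing, since these are the default values:

```yaml
FRAGMENTATION: Ahadic   # Sherpa's cluster hadronisation (default)
DECAYMODEL: Hadrons     # Sherpa's hadron decays (default)
```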

To run this example set-up, use the

<prefix>/bin/Sherpa


command as described in Running Sherpa. Sherpa displays some output as it runs. At the start of the run, Sherpa initializes the relevant model, and displays a table of particles, with their PDG codes and some properties. It also displays the Particle containers, and their contents. The other relevant parts of Sherpa are initialized, including the matrix element generator(s). The Sherpa output will look like:

Welcome to Sherpa, <user name> on <host name>. Initialization of framework underway.
[...]
Random::SetSeed(): Seed set to 1234
[...]
Beam_Spectra_Handler :
type = Monochromatic*Monochromatic
for    P+  ((4000,0,0,4000))
and    P+  ((4000,0,0,-4000))
PDF set 'ct14nn' loaded for beam 1 (P+).
PDF set 'ct14nn' loaded for beam 2 (P+).
Initialized the ISR.
Standard_Model::FixEWParameters() {
Input scheme: 2
alpha(m_Z) scheme, input: 1/\alphaQED(m_Z), m_W, m_Z, m_h, widths
Ren. scheme:  2
alpha(m_Z)
Parameters:   sin^2(\theta_W) = 0.222928 - 0.00110708 i
vev              = 243.034 - 3.75492 i
}
Running_AlphaQED::PrintSummary() {
Setting \alpha according to EW scheme
1/\alpha(0)   = 128.802
1/\alpha(def) = 128.802
}
One_Running_AlphaS::PrintSummary() {
Setting \alpha_s according to PDF
perturbative order 2
\alpha_s(M_Z) = 0.118
}
[...]
Initialized the Fragmentation_Handler.
Initialized the Soft_Collision_Handler.
Initialized the Shower_Handler.
[...]
Matrix_Element_Handler::BuildProcesses(): Looking for processes .. done
Matrix_Element_Handler::InitializeProcesses(): Performing tests .. done
Matrix_Element_Handler::InitializeProcesses(): Initializing scales  done
Initialized the Matrix_Element_Handler for the hard processes.
Primordial_KPerp::Primordial_KPerp() {
scheme = 0
beam 1: P+, mean = 1.1, sigma = 0.914775
beam 2: P+, mean = 1.1, sigma = 0.914775
}
Initialized the Beam_Remnant_Handler.
[...]

Then Sherpa will start to integrate the cross sections. The output will look like:

Process_Group::CalculateTotalXSec(): Calculate xs for '2_2__j__j__e-__e+' (Comix)
Starting the calculation at 11:58:56. Lean back and enjoy ... .
822.035 pb +- ( 16.9011 pb = 2.05601 % ) 5000 ( 11437 -> 43.7 % )
full optimization:  ( 0s elapsed / 22s left ) [11:58:56]
841.859 pb +- ( 11.6106 pb = 1.37916 % ) 10000 ( 18153 -> 74.4 % )
full optimization:  ( 0s elapsed / 21s left ) [11:58:57]
...


The first line here displays the process which is being calculated. In this example, the integration is for the 2->2 process, parton, parton -> electron, positron. The matrix element generator used is displayed after the process. As the integration progresses, summary lines are displayed, like the one shown above. The current estimate of the cross section is displayed, along with its statistical error estimate. The number of phase space points calculated is displayed after this (‘10000’ in this example), and the efficiency is displayed after that. On the line below, the time elapsed is shown, and an estimate of the total time till the optimisation is complete. In square brackets is an output of the system clock.

When the integration is complete, the output will look like:

...
852.77 pb +- ( 0.337249 pb = 0.0395475 % ) 300000 ( 313178 -> 98.8 % )
integration time:  ( 19s elapsed / 0s left ) [12:01:35]
852.636 pb +- ( 0.330831 pb = 0.038801 % ) 310000 ( 323289 -> 98.8 % )
integration time:  ( 19s elapsed / 0s left ) [12:01:35]
2_2__j__j__e-__e+ : 852.636 pb +- ( 0.330831 pb = 0.038801 % )  exp. eff: 13.4945 %
reduce max for 2_2__j__j__e-__e+ to 0.607545 ( eps = 0.001 )


with the final cross section result and its statistical error displayed.

Sherpa will then move on to integrate the other processes specified in the run card.

When the integration is complete, the event generation will start. As the events are being generated, Sherpa will display a summary line stating how many events have been generated, and an estimate of how long it will take. When the event generation is complete, Sherpa’s output looks like:

Event 10000 ( 72 s total ) = 1.20418e+07 evts/day
In Event_Handler::Finish : Summarizing the run may take some time.
+----------------------------------------------------+
|                                                    |
|  Total XS is 900.147 pb +- ( 8.9259 pb = 0.99 % )  |
|                                                    |
+----------------------------------------------------+


A summary of the number of events generated is displayed, with the total cross section for the process.

The generated events are not stored into a file by default; for details on how to store the events see Event output formats.

### 2.2.3 Parton-level event generation with Sherpa

Sherpa has its own tree-level matrix-element generators, AMEGIC++ and Comix. Furthermore, the module PHASIC++ provides sophisticated and robust tools for phase-space integration. Sherpa can therefore be used as a cross-section integrator and, because of the way Monte Carlo integration is accomplished, this immediately allows for parton-level event generation. Starting from the LHC_ZJets setup, for example, users only have to modify a few settings in Sherpa.yaml to arrive at a parton-level generation of the process gluon down-quark to electron positron and down-quark. If, for instance, the options “EVENTS: 0” and “OUTPUT: 2” are added to the steering file, a pure cross-section integration for that process is obtained, with the results and integration errors written to the screen.

For the example, the process definition in PROCESSES simplifies to

- Process: 21 1 -> 11 -11 1
Order: {QCD: Any, EW: 2}


with all other settings in the process block removed. Assuming a fresh start, the initialization procedure has to be followed as before. Picking the same collider environment as in the previous example, only a few more changes are needed before the Sherpa.yaml file is ready for the calculation of the hadronic cross section of the process g d to e- e+ d at the LHC and subsequent parton-level event generation with Sherpa. These are SHOWER_GENERATOR: None, to switch off parton showering; FRAGMENTATION: None, to do the same for the hadronisation effects; MI_HANDLER: None, to switch off multiparton interactions; and ME_QED: {ENABLED: false}, to switch off resummed QED corrections to the Z -> e- e+ decay. If, additionally, the non-perturbative intrinsic transverse momentum is not to be taken into account, set BEAM_REMNANTS: false.

### 2.2.4 Multijet merged event generation with Sherpa

For a large fraction of LHC final states, the application of reconstruction algorithms leads to the identification of several hard jets. Calculations therefore need to describe as accurately as possible both the hard jet production as well as the subsequent evolution and the interplay of multiple such topologies. Several scales determine the evolution of the event.

Various such merging schemes have been proposed: [Cat01a], [Lon01], [Man01], [Kra02], [Man06], [Lav08], [Hoe09], [Ham09a], [Ham10], [Hoe10], [Lon11], [Hoe12a], [Geh12], [Lon12b], [Lon12a]. Comparisons of the older approaches can be found e.g. in [Hoc06], [Alw07]. The currently most advanced treatment at tree-level, detailed in [Hoe09], [Hoe09a], [Car09], is implemented in Sherpa.

How to set up a multijet merged calculation is detailed in most Examples, e.g. W+jets production, Z+jets production or Top quark (pair) + jets production.

### 2.2.5 Running Sherpa with AMEGIC++

When Sherpa is run using the matrix element generator AMEGIC++, it is necessary to run it twice. During the first run (the initialization run) Feynman diagrams for the hard processes are constructed and translated into helicity amplitudes. Furthermore suitable phase-space mappings are produced. The amplitudes and corresponding integration channels are written to disk as C++ source code, placed in a subdirectory called Process. The initialization run is started using the standard Sherpa executable, as described in Running Sherpa. The relevant command is

<prefix>/bin/Sherpa


The initialization run stops with the message "New libraries created. Please compile.", which is nothing but the request to carry out the compilation and linking procedure for the generated matrix-element libraries. The makelibs script, provided for this purpose and created in the working directory, must be invoked by the user (see ./makelibs -h for help):

./makelibs


Note that the following tools have to be available for this step: autoconf, automake and libtool.

Alternatively, if scons is installed, you may invoke <prefix>/bin/make2scons and run scons install. If scons was detected during the compilation of Sherpa, makelibs also uses scons by default (it can be forced to use autotools with ./makelibs -s).

Afterwards Sherpa can be restarted using the same command as before. In this run (the generation run) the cross sections of the hard processes are evaluated. Simultaneously the integration over phase space is optimized to arrive at an efficient event generation.

## 2.3 Cross section determination

To determine the total cross section, in particular in the context of multijet merging with Sherpa, the final output of the event generation run should be used, e.g.

+-----------------------------------------------------+
|                                                     |
|  Total XS is 1612.17 pb +- ( 8.48908 pb = 0.52 % )  |
|                                                     |
+-----------------------------------------------------+


Note that the Monte Carlo error quoted for the total cross section is determined during event generation. It, therefore, might differ substantially from the errors quoted during the integration step, and it can be reduced simply by generating more events.

In contrast to plain fixed order results, Sherpa’s total cross section in multijet merging setups (MEPS, MENLOPS, MEPS@NLO) is composed of values from various fixed order processes, namely those which are combined by applying the multijet merging, see Multijet merged event generation with Sherpa. In this context, it is important to note:

The higher-multiplicity tree-level cross sections determined during the integration step are meaningless by themselves; only the inclusive cross section printed at the end of the event generation run is to be used.

Sherpa total cross sections have leading-order accuracy when the generator is run in LO merging mode (MEPS); in NLO merging mode (MENLOPS, MEPS@NLO) they have NLO accuracy.

### 2.3.1 Differential cross sections from single events

To calculate the expectation value of an observable defined through a series of cuts and requirements, each event produced by Sherpa has to be evaluated to determine whether it meets the required criteria. The expectation value is then given by

<O> = 1/N_trial * \sum_i^n w_i(\Phi_i) O(\Phi_i) .


Therein, w_i(\Phi_i) is the weight of the event with phase-space configuration \Phi_i and O(\Phi_i) is the value of the observable at this point. N_trial = \sum_i^n n_trial,i is the sum of the numbers of trials n_trial,i of all events. A good cross-check is to reproduce the inclusive cross section as quoted by Sherpa (see above).

In case of unweighted events one might want to rescale the uniform event weight to unity using w_norm. The above equation then reads

<O> = w_norm/N_trial * \sum_i^n w_i(\Phi_i)/w_norm O(\Phi_i) .


wherein w_i(\Phi_i)/w_norm = 1, i.e. the sum simply counts how many events pass the observable’s selection criteria. If, however, PartiallyUnweighted event weights or Enhance_Factor or Enhance_Observable are used, this is no longer the case and the full form needs to be used.

All required quantities, w_i, w_norm and n_trial,i, accompany each event and are written e.g. into the HepMC output (cf. Event output formats).
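
As an illustration of the formulas above, the following Python sketch computes <O> from hypothetical per-event records (w_i, n_trial,i, O_i); the records are made up for illustration and not read from any Sherpa output:

```python
# Hypothetical per-event records: (w_i, n_trial_i, O_i, passes_cuts)
events = [
    (0.9, 3, 2.0, True),
    (1.1, 2, 4.0, True),
    (1.0, 5, 0.0, False),  # fails the selection: contributes only to N_trial
]

# N_trial is the sum of the numbers of trials of ALL events, ...
N_trial = sum(n for _, n, _, _ in events)
# ... while only events passing the selection contribute w_i * O_i
O_exp = sum(w * O for w, _, O, passed in events if passed) / N_trial
print(round(O_exp, 6))  # (0.9*2.0 + 1.1*4.0) / 10 = 0.62
```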

# 3 Command line options

The available command line options for Sherpa.

--run-data, -f <file>

Read settings from input file ‘<file>’.

--path, -p <path>

Read input file from path ‘<path>’.

--sherpa-lib-path, -L <path>

Set Sherpa library path to ‘<path>’, see SHERPA_CPP_PATH.

--events, -e <N_events>

Set number of events to generate ‘<N_events>’, see EVENTS.

--event-type, -t <event_type>

Set the event type to ‘<event_type>’, see EVENT_TYPE.

--result-directory, -r <path>

Set the result directory to ‘<path>’, see RESULT_DIRECTORY.

--random-seed, -R <seed>

Set the seed of the random number generator to ‘<seed>’, see RANDOM_SEED.

--me-generators, -m <generators>

Set the matrix element generator list to ‘<generators>’, see ME_GENERATORS. If you specify more than one generator, use the YAML sequence syntax, e.g. ‘-m '[Amegic, Comix]'’.

--mi-handler, -M <handler>

Set multiple interaction handler to ‘<handler>’, see MI_HANDLER.

--event-generation-mode, -w <mode>

Set the event generation mode to ‘<mode>’, see EVENT_GENERATION_MODE.

--shower-generator, -s <generator>

Set the parton shower generator to ‘<generator>’, see SHOWER_GENERATOR.

--fragmentation, -F <module>

Set the fragmentation module to ‘<module>’, see Fragmentation.

--decay, -D <module>

Set the decay module to ‘<module>’.

--analysis, -a <analyses>

Set the analysis handler list to ‘<analyses>’, see ANALYSIS. If you specify more than one analysis handler, use the YAML sequence syntax, e.g. ‘-a '[Rivet, Internal]'’.

--analysis-output, -A <path>

Set the analysis output path to ‘<path>’, see ANALYSIS_OUTPUT.

--output, -O <level>

Set general output level ‘<level>’, see OUTPUT.

--event-output, -o <level>

Set output level for event generation ‘<level>’, see OUTPUT.

--log-file, -l <logfile>

Set log file name ‘<logfile>’, see LOG_FILE.

--disable-result-directory-generation, -g

Do not create result directory, see RESULT_DIRECTORY.

--disable-batch-mode, -b

Switch to non-batch mode, see BATCH_MODE.

--print-version-info, -V

Print extended version information at startup.

--version, -v

Print versioning information.

--help, -h

Print a help message.

'PARAMETER: Value'

Set the value of a parameter, see Parameters.

'Tags: {TAG: Value}'

Set the value of a tag, see Tags.

# 4 Input structure

A Sherpa setup is steered by various parameters, associated with the different components of event generation.

These have to be specified in a configuration file which by default is named Sherpa.yaml in the current working directory. If you want to use a different setup directory for your Sherpa run, you have to specify it on the command line as -p <dir> or 'PATH: <dir>' (including the quotes). To read parameters from a configuration file with a different name, you may specify -f <file> or 'RUNDATA: <file>'.

Sherpa’s configuration files are written in the YAML format. Most settings are simply given as the setting’s name followed by its value, like this:

EVENTS: 100M
BEAMS: 2212
BEAM_ENERGIES: 7000
...


Others use a more nested structure:

HARD_DECAYS:
  Enabled: true
  Apply_Branching_Ratios: false


where Enabled and Apply_Branching_Ratios are sub-settings of the top-level HARD_DECAYS setting, which is denoted by indentation (here two additional spaces).

The different settings and their structure are described in detail in another chapter of this manual, see Parameters.

All parameters can be overwritten on the command line, i.e. command-line input has the highest priority. Each argument is parsed as a single YAML line. This usually means that you have to quote each argument:

  <prefix>/bin/Sherpa 'KEYWORD1: value1' 'KEYWORD2: value2' ...


Because each argument is parsed as YAML, you can also specify nested settings, e.g. to disable hard decays (even if it is enabled in the config file) you can write:

  <prefix>/bin/Sherpa 'HARD_DECAYS: {Enabled: false}'


Or you can specify the list of matrix-element generators writing:

  <prefix>/bin/Sherpa 'ME_GENERATORS: [Comix, Amegic]'


Throughout Sherpa, particles are identified by the particle codes proposed by the PDG. These codes and the particle properties will be listed during each run with OUTPUT: 2 for the elementary particles and OUTPUT: 4 for the hadrons. In both cases, antiparticles are characterized by a minus sign in front of their code, e.g. a mu- has code 13, while a mu+ has -13.

All quantities have to be specified in units of GeV and millimetre. The same units apply to all numbers in the event output (momenta, vertex positions). Scattering cross sections are given in picobarn in the output.

There are a few extra features for an easier handling of the parameter file(s), namely global tag replacement, see Tags, and algebra interpretation, see Interpreter.

## 4.1 Interpreter

Sherpa has a built-in interpreter for algebraic expressions, like ‘cos(5/180*M_PI)’. This interpreter is employed when reading integer and floating point numbers from input files, such that certain parameters can be written in a more convenient fashion. For example it is possible to specify the factorisation scale as ‘sqr(91.188)’.
There are predefined tags to simplify the handling:

M_PI

Ludolph’s number π, to a precision of 12 digits.

M_C

The speed of light in vacuum.

E_CMS

The total centre of mass energy of the collision.

The expression syntax is in general C-like, except for the extra function ‘sqr’, which gives the square of its argument. Operator precedence is the same as in C. The interpreter can handle functions with an arbitrary list of parameters, such as ‘min’ and ‘max’.
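
For cross-checking, the example expressions used above can be evaluated in any general-purpose language; a minimal Python equivalent, with sqr defined by hand, reads:

```python
import math

def sqr(x):
    # Sherpa's extra 'sqr' function: the square of its argument
    return x * x

# value of the interpreter expression 'cos(5/180*M_PI)'
print(math.cos(5 / 180 * math.pi))  # ~0.996
# value of 'sqr(91.188)', e.g. a factorisation scale in GeV^2
print(sqr(91.188))  # ~8315.25
```
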
The interpreter can also be employed to construct arbitrary variables from four-momenta, e.g. in the context of a parton-level selector, see Selectors. The corresponding functions are

Mass(v)

The invariant mass of v in GeV.

Abs2(v)

The invariant mass squared of v in GeV^2.

PPerp(v)

The transverse momentum of v in GeV.

PPerp2(v)

The transverse momentum squared of v in GeV^2.

MPerp(v)

The transverse mass of v in GeV.

MPerp2(v)

The transverse mass squared of v in GeV^2.

Theta(v)

The polar angle of v in radians.

Eta(v)

The pseudorapidity of v.

Y(v)

The rapidity of v.

Phi(v)

The azimuthal angle of v in radians.

Comp(v,i)

The i’th component of the vector v. i=0 is the energy/time component, i=1, 2, and 3 are the x, y, and z components.

PPerpR(v1,v2)

The relative transverse momentum between v1 and v2 in GeV.

ThetaR(v1,v2)

The relative angle between v1 and v2 in radians.

DEta(v1,v2)

The pseudo-rapidity difference between v1 and v2.

DY(v1,v2)

The rapidity difference between v1 and v2.

DPhi(v1,v2)

The azimuthal angle difference between v1 and v2 in radians.
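
As an illustration, a few of these functions can be sketched in Python for a four-momentum v = (E, px, py, pz) in GeV (an illustration of the definitions, not Sherpa's implementation):

```python
import math

def Abs2(v):
    E, px, py, pz = v
    return E * E - px * px - py * py - pz * pz  # invariant mass squared in GeV^2

def Mass(v):
    return math.sqrt(max(Abs2(v), 0.0))  # invariant mass in GeV

def PPerp(v):
    return math.hypot(v[1], v[2])  # transverse momentum in GeV

def Theta(v):
    return math.atan2(PPerp(v), v[3])  # polar angle in radians

def Eta(v):
    return -math.log(math.tan(Theta(v) / 2))  # pseudorapidity

def DPhi(v1, v2):
    # azimuthal angle difference, mapped into [0, pi]
    dphi = abs(math.atan2(v1[2], v1[1]) - math.atan2(v2[2], v2[1]))
    return min(dphi, 2 * math.pi - dphi)

v = (50.0, 30.0, 0.0, 40.0)  # a massless momentum: E^2 = px^2 + pz^2
print(Mass(v), PPerp(v))  # 0.0 30.0
```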

## 4.2 Tags

Tag replacement in Sherpa is performed through the data reading routines, which means that it can be performed for virtually all inputs. Specifying a tag on the command line or in the configuration file using the syntax TAGS: {<Tag>: <Value>} will replace every occurrence of @(<Tag>) in all files during read-in. An example tag definition could read

  <prefix>/bin/Sherpa 'TAGS: {QCUT: 20, NJET: 3}'


and then be used in the configuration file like:

  RESULT_DIRECTORY: Result_@(QCUT)
  PROCESSES:
  - Process: "93 93 -> 11 -11 93{@(NJET)}"
    Order: {QCD: Any, EW: 2}
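
The replacement mechanism itself amounts to plain text substitution during read-in; the following Python sketch (not Sherpa's implementation) mimics it:

```python
tags = {"QCUT": "20", "NJET": "3"}

def replace_tags(line, tags):
    # every occurrence of @(<Tag>) is replaced by its value during read-in
    for tag, value in tags.items():
        line = line.replace(f"@({tag})", value)
    return line

print(replace_tags("Result_@(QCUT)", tags))                   # Result_20
print(replace_tags("93 93 -> 11 -11 93{@(NJET)}", tags))      # 93 93 -> 11 -11 93{3}
```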

### 5.1.16 SHERPA_LIB_PATH

The path in which Sherpa looks for dynamically linked libraries from previously created C++ source code, cf. SHERPA_CPP_PATH.

### 5.1.17 Event output formats

Sherpa provides the possibility to output events in various formats, e.g. the HepEVT common block structure or the HepMC format. The authors of Sherpa assume that the user is sufficiently acquainted with these formats when selecting them.

If the events are to be written to file, the parameter ‘EVENT_OUTPUT’ must be specified together with a file name. An example would be EVENT_OUTPUT: HepMC_GenEvent[MyFile], where MyFile stands for the desired file base name. More than one output can also be specified:

EVENT_OUTPUT:
- HepMC_GenEvent[MyFile]
- Root[MyFile]


The following formats are currently available:

HepMC_GenEvent

Generates output in HepMC::IO_GenEvent format. The HepMC::GenEvent::m_weights weight vector stores the following items:

[0] event weight,
[1] combined matrix element and PDF weight (missing only the phase-space weight information, thus directly suitable for evaluating the matrix-element value of the given configuration),
[2] event weight normalisation (in the case of unweighted events, event weights of ~ +/-1 can be obtained as (event weight)/(event weight normalisation)), and
[3] number of trials.

The total cross section of the simulated event sample can be computed as the sum of event weights divided by the sum of the numbers of trials. This value must agree with the total cross section quoted by Sherpa at the end of the event generation run, and it can serve as a cross-check on the consistency of the HepMC event file. Note that Sherpa conforms to the Les Houches 2013 suggestion (http://phystev.in2p3.fr/wiki/2013:groups:tools:hepmc) of indicating interaction types through the GenVertex type-flag. Multiple event weights can also be enabled with HepMC versions >=2.06, cf. Scale and PDF variations. The following additional customisations can be used:

HEPMC_USE_NAMED_WEIGHTS: <false|true> Enable filling weights with an associated name. The nominal event weight has the key Weight. MEWeight, WeightNormalisation and NTrials provide additional information for each event as described above. Needs HepMC version >=2.06.

HEPMC_EXTENDED_WEIGHTS: <false|true> Write additional event weight information needed for a posteriori reweighting into the WeightContainer, cf. A posteriori scale and PDF variations using the HepMC GenEvent Output. Necessitates the use of HEPMC_USE_NAMED_WEIGHTS.

HEPMC_TREE_LIKE: <false|true> Force the event record to be strictly tree-like. Please note that this removes some information from the matrix-element-parton-shower interplay which would otherwise be stored.
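
The cross-check mentioned above (total cross section as the sum of event weights divided by the sum of the numbers of trials) can be sketched in Python, with made-up weight vectors standing in for those read from a HepMC file:

```python
# items per event: [0] weight, [1] ME*PDF weight, [2] normalisation, [3] trials
weight_vectors = [
    [450.0, 1.0, 450.0, 2.0],
    [460.0, 1.0, 460.0, 3.0],
]

# total cross section = sum of event weights / sum of the numbers of trials
xs = sum(w[0] for w in weight_vectors) / sum(w[3] for w in weight_vectors)
print(xs, "pb")  # (450 + 460) / (2 + 3) = 182.0 pb
```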

HepMC_Short

Generates output in HepMC::IO_GenEvent format; however, only incoming beams and outgoing particles are stored. Intermediate and decayed particles are not listed. The event weights stored are the same as above, and HEPMC_USE_NAMED_WEIGHTS and HEPMC_EXTENDED_WEIGHTS can be used for customisation.

HepMC3_GenEvent

Generates output using the HepMC3 library. The format of the output is set with the HEPMC3_IO_TYPE: <0|1|2|3|4> tag. The default value is 0 and corresponds to ASCII GenEvent. Other available options are 1: HepEvt, 2: ROOT file with every event written as an object of class GenEvent, 3: ROOT file with GenEvent objects written into a TTree. Otherwise similar to HepMC_GenEvent.

Delphes_GenEvent

Generates output in Root format, which can be passed to Delphes for analyses. Input events are taken from the HepMC interface. Storage space can be reduced by up to 50% compared to gzip compressed HepMC. This output format is available only if Sherpa was configured and installed with options ‘--enable-root’ and ‘--enable-delphes=/path/to/delphes’.

Delphes_Short

Generates output in Root format, which can be passed to Delphes for analyses. Only incoming beams and outgoing particles are stored.

PGS

Generates output in StdHEP format, which can be passed to PGS for analyses. This output format is available only if Sherpa was configured and installed with options ‘--enable-hepevtsize=4000’ and ‘--enable-pgs=/path/to/pgs’. Please refer to the PGS documentation for how to pass StdHEP event files on to PGS. If you are using the LHC olympics executable, you may run ‘./olympics --stdhep events.lhe <other options>’.

PGS_Weighted

Generates output in StdHEP format, which can be passed to PGS for analyses. Event weights in the HEPEV4 common block are stored in the event file.

HEPEVT

Generates output in HepEvt format.

LHEF

Generates output in the Les Houches Event File format. This output format is intended for matrix-element configurations only. Since the format requires PDF information to be written out in the outdated PDFLIB/LHAGLUE enumeration format, this is only done automatically if LHAPDF is used; otherwise the identification numbers have to be given explicitly via LHEF_PDF_NUMBER (LHEF_PDF_NUMBER_1 and LHEF_PDF_NUMBER_2 if the two beams carry different structure functions). This format currently outputs matrix-element information only; no information about the large-Nc colour flow is given, as the LHEF output format is not suited to communicate enough information for meaningful parton showering on top of multiparton final states.

Root

Generates output in ROOT ntuple format for NLO event generation only. For details on the ntuple format, see A posteriori scale and PDF variations using the ROOT NTuple Output. This output option is available only if Sherpa was linked to ROOT during installation by using the configure option --enable-root=/path/to/root. ROOT ntuples can be read back into Sherpa and analyzed using the option ‘EVENT_INPUT’. This feature is described in Production of NTuples.

The output can be further customized using the following options:

FILE_SIZE

Number of events per file (default: unlimited).

EVENT_FILE_PATH

Directory where the files will be stored.

EVENT_OUTPUT_PRECISION

Steers the precision of all numbers written to file (default: 12).

For all output formats except ROOT and Delphes, events can be written directly to gzipped files instead of plain text. The option ‘--enable-gzip’ must be given during installation to enable this feature.

### 5.1.18 Scale and PDF variations

Sherpa can compute alternative event weights for different scale and PDF choices on-the-fly, resulting in a set of alternative weights for each generated event. They can be invoked with the following syntax:

VARIATIONS:
- "<muR-fac-1>,<muF-fac-1>,<PDF-1>"
- "<muR-fac-2>,<muF-fac-2>,<PDF-2>"
...


The keyword VARIATIONS takes a list of variation factors for the nominal renormalisation and factorisation scales and an associated PDF set. If the scale factors are omitted, they default to 1. Any set present in any of the PDF library interfaces loaded through PDF_LIBRARY can be used. If no PDF set is given, it defaults to the nominal one. Specific PDF members can be specified by appending /<member-id> to the PDF set name. Enclosing the PDF set in square brackets expands to variations over all members of that set. This only works with LHAPDF6 sets or the internal default sets. Please note that scales are, as always in Sherpa, given in their quadratic form. Thus, a variation by a factor of 4 of the squared scale [GeV^2] means a variation by a factor of 2 of the scale itself [GeV]. Scale factors also support square-bracket expansion, e.g. [4] expands to 1/4, 1, 4. Enclosing both scale factors in square brackets expands to the 7-point scale variation:

VARIATIONS:
- "[4,4]"
# is equivalent to
VARIATIONS:
- 0.25,0.25
- 1,0.25
- 0.25,1
- 1,1
- 4,1
- 1,4
- 4,4
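
The expansion of "[4,4]" into the seven scale-factor pairs above can be sketched in Python (an illustration of the expansion rule, not Sherpa code):

```python
def seven_point(fac):
    # all (muR, muF) factor pairs from {1/fac, 1, fac} ...
    pairs = [(r, f) for r in (1 / fac, 1, fac) for f in (1 / fac, 1, fac)]
    # ... except the two where one scale goes up while the other goes down
    return [(r, f) for r, f in pairs if r == 1 or f == 1 or r == f]

# note: the factors apply to the squared scales, so a factor 4 on mu^2
# corresponds to a factor 2 on mu itself
print(seven_point(4))
```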


Thus, a complete variation using the PDF4LHC convention would read

VARIATIONS:
- "[4,4]"
- "[CT10nlo]"
- "[MMHT2014nlo68cl]"
- "[NNPDF30_nlo_as_0118]"


Please note, this syntax will create 7+53+51+101=212 additional weights for each event.

Note that the square bracket expansion includes trivial scale variations and the central PDF set. This can be disabled with VARIATIONS_INCLUDE_CV: false.

The additional event weights can then be written into the event output. However, this is currently only supported for HepMC_GenEvent and HepMC_Short with versions >=2.06 and HEPMC_USE_NAMED_WEIGHTS: true. The alternative event weights follow the Les Houches naming convention for such variations, i.e. they are named MUR<fac>_MUF<fac>_PDF<id>. When using Sherpa’s interface to Rivet (see Rivet analyses), separate instances of Rivet are instantiated, one for each alternative event weight in addition to the nominal one, each leading to its own set of histograms. They are again named using the MUR<fac>_MUF<fac>_PDF<id> convention.

The user must also be aware that the cross section of the event sample of course changes when using an alternative event weight as compared to the nominal one. Any histogramming therefore has to account for this and recompute the total cross section as the sum of weights divided by the number of trials, cf. Cross section determination.

The on-the-fly reweighting works for all event generation modes (weighted or (partially) unweighted) and all calculation types (LO, LOPS, NLO, NLOPS, MEPS@LO, MEPS@NLO and MENLOPS). The on-the-fly reweighting includes the parton shower. All shower emissions with a squared transverse momentum larger than 5 GeV^2 are reweighted. Softer emissions are excluded for reasons of numerical stability. This threshold can be modified using CSS_REWEIGHT_SCALE_CUTOFF. To include the ME-only variations along with the full variations in the HepMC/Rivet output, you can use HEPMC_INCLUDE_ME_ONLY_VARIATIONS: true and RIVET: { INCLUDE_HEPMC_ME_ONLY_VARIATIONS: true }, respectively.

### 5.1.19 MPI parallelization

MPI parallelization in Sherpa can be enabled using the configuration option ‘--enable-mpi’. Sherpa supports OpenMPI and MPICH2. For detailed instructions on how to run a parallel program, please refer to the documentation of your local cluster resources or one of the many excellent introductions on the internet. MPI parallelization is mainly intended to speed up the integration process, as event generation can be parallelized trivially by starting multiple instances of Sherpa with different random seeds, cf. RANDOM_SEED. However, both the internal analysis module and the Root NTuple writeout can be used with MPI. Note that these require substantial data transfer.

Please note that the process information contained in the Process directory for both Amegic and Comix needs to be generated without MPI parallelization first. Therefore, first run

Sherpa -f <run-card> INIT_ONLY=1


and, in the case of Amegic, compile the libraries. Then start your parallelized integration, e.g.

mpirun -n <n> Sherpa -f <run-card> -e 0


After the integration has finished, you can submit individual jobs to generate event samples (with a different random seed for each job). Upon completion, the results can be merged.

## 5.2 Beam parameters

Mandatory settings to set up the colliding particle beams are

• The initial beam particles, specified through ‘BEAMS’ and given by their PDG particle number. For (anti)protons and (positrons) electrons, for example, these are (-)2212 and (-)11, respectively; the code for photons is 22. If you provide a single particle number, both beams will consist of that particle type. If the beams consist of different particles, a list of two values has to be provided.
• The energies of both incoming beams, defined through ‘BEAM_ENERGIES’ and given in units of GeV. Again, a single value applies to both beams, whereas a list of two values has to be given when the two beams do not have the same energy.

Examples would be

# LHC
BEAMS: 2212
BEAM_ENERGIES: 7000

# HERA
BEAMS: [-11, 2212]
BEAM_ENERGIES: [27.5, 820]


More options related to beamstrahlung and intrinsic transverse momentum can be found in the following subsections.

### 5.2.1 Beam Spectra

If desired, you can also specify spectra for beamstrahlung through BEAM_SPECTRA. The possible values are

Monochromatic

The beam energy is unaltered and the beam particles remain unchanged. That is the default and corresponds to ordinary hadron-hadron or lepton-lepton collisions.

Laser_Backscattering

This can be used to describe the backscattering of a laser beam off initial leptons. The energy distribution of the emerging photon beams is modelled by the CompAZ parametrization, see [Zar02]. Note that this parametrization is valid only for the proposed TESLA photon collider, as various assumptions about the laser parameters and the initial lepton beam energy have been made. See details below.

Simple_Compton

This corresponds to a simple light backscattering off the initial lepton beam and produces initial-state photons with a corresponding energy spectrum. See details below.

EPA

This enables the equivalent photon approximation for colliding protons, see [Arc08]. The resulting beam particles are photons that follow a dipole form factor parametrization, cf. [Bud74]. The authors would like to thank T. Pierzchala for his help in implementing and testing the corresponding code. See details below.

Spectrum_Reader

A user-defined spectrum is used to describe the energy spectrum of the assumed new beam particles. The name of the corresponding spectrum file needs to be given through the keyword SPECTRUM_FILES.

The BEAM_SMIN and BEAM_SMAX parameters may be used to specify the minimum/maximum fraction of cms energy squared after Beamstrahlung. The reference value is the total centre-of-mass energy squared of the collision, not the centre-of-mass energy after eventual Beamstrahlung.
These parameters can be specified using the internal interpreter, see Interpreter, e.g. as ‘BEAM_SMIN: sqr(20/E_CMS)’.

#### 5.2.1.1 Laser Backscattering

The energy distribution of the photon beams is modelled by the CompAZ parametrisation, see [Zar02], with various assumptions valid only for the proposed TESLA photon collider. The laser energies can be set by E_LASER; P_LASER sets their polarisations, defaulting to 0. Both settings can either be set to a single value, applying to both beams, or to a list of two values, one for each beam. LASER_MODE takes the values -1, 0, and 1, defaulting to 0. LASER_ANGLES and LASER_NONLINEARITY can be set to true or false (default).

#### 5.2.1.2 Simple Compton

This corresponds to a simple light backscattering off the initial lepton beam and produces initial-state photons with a corresponding energy spectrum. It is a special case of the above Laser Backscattering with LASER_MODE: -1.

#### 5.2.1.3 EPA

The equivalent photon approximation, cf. [Arc08], [Bud74], has a few free parameters:

EPA_q2Max

Parameter of the EPA spectra of the two beams, defaults to 2. in units of GeV squared.

EPA_ptMin

Infrared regulator to the EPA beam spectra. Given in GeV, the value must be between 0. and 1. for the approximation to hold. Defaults to 0., i.e. the spectrum has to be regulated by cuts on the observable, cf. Selectors.

EPA_Form_Factor

Form factor model to be used on the beams. The options are 0 (pointlike), 1 (homogeneously charged sphere), 2 (Gaussian-shaped nucleus), and 3 (homogeneously charged sphere, smoothed at low and high x). Applicable only to heavy-ion beams. Defaults to 0.

EPA_AlphaQED

Value of alphaQED to be used in the EPA. Defaults to 0.0072992701.

EPA_q2Max, EPA_ptMin, EPA_Form_Factor can either be set to single values that are then applied to both beams, or to a list of two values, for the respective beams.

### 5.2.2 Intrinsic Transverse Momentum

The intrinsic transverse momentum of the colliding particles can be set by subsettings of the INTRINSIC_KPERP setting:

INTRINSIC_KPERP:
Parameter_1: <value_1>
Parameter_2: <value_2>
...


The possible parameters and their defaults are

ENABLED (default: true)

Setting this to false disables the intrinsic transverse momentum altogether.

SCHEME (default for protons: 0)

This parameter specifies the scheme to calculate the intrinsic transverse momentum of the beams in case of hadronic beams, such as protons.

MEAN (default: 1.1)

This parameter specifies the mean intrinsic transverse momentum in GeV for the beams in case of hadronic beams, such as protons.
If two values are provided, the intrinsic momenta means of the two beams are set to these two values, respectively.

SIGMA (default: 0.85)

This parameter specifies the width of the Gaussian distribution in GeV of the intrinsic transverse momenta for the beams in case of hadronic beams, such as protons.
If two values are provided, the intrinsic momenta widths of the two beams are set to these two values, respectively.

EXP (default: 0.55)

This parameter specifies the energy extrapolation exponent of the width of the Gaussian distribution of intrinsic transverse momentum for the beams in case of hadronic beams, such as protons.
If two values are provided, the exponents for each of the two beams are set to these two values, respectively.

EREF (default: 7000)

This parameter specifies the reference scale in GeV in the energy extrapolation of the width of the Gaussian distribution of intrinsic transverse momentum for the beams in case of hadronic beams, such as protons.
If two values are provided, the reference scales for each of the two beams are set to these two values, respectively.
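Collecting the parameters above with their stated defaults, an explicit configuration block reads:

```yaml
INTRINSIC_KPERP:
  ENABLED: true
  SCHEME: 0
  MEAN: 1.1      # GeV; a two-value list sets the beams individually
  SIGMA: 0.85    # GeV
  EXP: 0.55
  EREF: 7000     # GeV
```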

If the option ‘BEAM_REMNANTS: false’ is specified, pure parton-level events are simulated, i.e. no beam remnants are generated. Accordingly, partons entering the hard scattering process do not acquire primordial transverse momentum.

## 5.3 ISR parameters

The following parameters are used to steer the setup of beam substructure and initial state radiation (ISR).

BUNCHES

Specify the PDG ID of the first (left) and second (right) bunch particle (or both, if only one value is provided), i.e. the particle after any Beamstrahlung specified through the beam parameters, see Beam parameters. By default these are taken to be identical to the values set using BEAMS, assuming the default beam spectrum is Monochromatic. In case the Simple Compton or Laser Backscattering spectra are enabled, the bunch particles have to be set to 22, the PDG code of the photon.
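For example, in a photon-collider setup with backscattering off lepton beams, the bunch particles are photons (a sketch; the BEAMS values are illustrative):

```yaml
BEAMS: [11, -11]     # electron and positron beams
BUNCHES: [22, 22]    # backscattered photons enter the hard process
```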

ISR_SMIN/ISR_SMAX

These parameters specify the minimum and maximum fraction of cms energy squared after ISR. The reference value is the total centre-of-mass energy squared of the collision, not the centre-of-mass energy after any Beamstrahlung.
The parameters can be specified using the internal interpreter, see Interpreter, e.g. as ISR_SMIN: sqr(20/E_CMS).

Sherpa provides access to a variety of structure functions. They can be configured with the following parameters.

PDF_LIBRARY

This parameter takes the list of PDF interfaces to load. The following options are distributed with Sherpa:

LHAPDFSherpa

Use PDFs from LHAPDF [Wha05]. The interface is only available if Sherpa has been compiled with support for LHAPDF, see Installation.

CT14Sherpa

Built-in library for some PDF sets from the CTEQ collaboration, cf. [Dulat15]. This is the default.

CT12Sherpa

Built-in library for some PDF sets from the CTEQ collaboration, cf. [Gao13].

CT10Sherpa

Built-in library for some PDF sets from the CTEQ collaboration, cf. [Lai10].

CTEQ6Sherpa

Built-in library for some PDF sets from the CTEQ collaboration, cf. [Nad08].

NNPDF30NLO

Built-in library for PDF sets from the NNPDF group, cf. [Ball14].

MSTW08Sherpa

Built-in library for PDF sets from the MSTW group, cf. [Mar09a].

MRST04QEDSherpa

Built-in library for photon PDF sets from the MRST group, cf. [Mar04].

MRST01LOSherpa

Built-in library for the 2001 leading-order PDF set from the MRST group, cf. [Mar01].

MRST99Sherpa

Built-in library for the 1999 PDF sets from the MRST group, cf. [Mar99].

GRVSherpa

Built-in library for the GRV photon PDF [Glu91a], [Glu91]

PDFESherpa

Built-in library for the electron structure function. The perturbative order of the fine structure constant can be set using the parameter ISR_E_ORDER (default: 1). The switch ISR_E_SCHEME allows to set the scheme of respecting non-leading terms. Possible options are 0 ("mixed choice"), 1 ("eta choice"), or 2 ("beta choice", default).

None

No PDF. Fixed beam energy.

Furthermore it is simple to build an external interface to an arbitrary PDF and load that dynamically in the Sherpa run. See External PDF for instructions.

PDF_SET

Specifies the PDF set for hadronic bunch particles. All sets available in the chosen PDF_LIBRARY can be listed by running Sherpa with the parameter SHOW_PDF_SETS: 1, e.g.:

  Sherpa 'PDF_LIBRARY: CTEQ6Sherpa' 'SHOW_PDF_SETS: 1'


If the two colliding beams are of different type, e.g. protons and electrons or photons and electrons, it is possible to specify two different PDF sets by providing two values: ‘PDF_SET: [pdf1, pdf2]’. The special value Default can be used as a placeholder for letting Sherpa choose the appropriate PDF set (or none).

PDF_SET_VERSIONS

This parameter selects a specific version (member) within the chosen PDF set. It is possible to specify two different members for the two beams using ‘PDF_SET_VERSIONS: [version1, version2]’.
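A combined example of the three PDF settings discussed above; the set name is an illustrative LHAPDF set, not a recommendation:

```yaml
PDF_LIBRARY: LHAPDFSherpa
PDF_SET: CT14nlo          # example LHAPDF set name (assumption)
PDF_SET_VERSIONS: 0       # central member
```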

## 5.4 Models

The main switch MODEL sets the model that Sherpa uses throughout the simulation run. The default is ‘SM’, the built-in Standard Model implementation of Sherpa. For BSM simulations, Sherpa offers an option to use the Universal FeynRules Output Format (UFO) [Deg11].

Please note: AMEGIC can only be used for the built-in models (SM and HEFT). For anything else, please use Comix.

### 5.4.1 Built-in Models

#### 5.4.1.1 Standard Model

The SM inputs for the electroweak sector can be given in five different schemes that correspond to different choices of which SM physics parameters are considered fixed and which are derived from the given quantities. The input schemes are selected through the EW_SCHEME parameter, whose default is ‘1’. The following options are provided:

• Case 0:

all EW parameters are explicitly given. Here the W, Z and Higgs masses are taken as inputs, and the parameters 1/ALPHAQED(0), ALPHAQED_DEFAULT_SCALE (cf. below), SIN2THETAW (weak mixing angle), VEV (Higgs field vacuum expectation value) and LAMBDA (Higgs quartic coupling) have to be specified.

Note that this mode allows one to violate the tree-level relations between some of the parameters and might thus lead to gauge violations in some regions of phase space.

• Case 1:

all EW parameters are calculated from the W, Z and Higgs masses and the fine structure constant (taken from 1/ALPHAQED(0) evolved to ALPHAQED_DEFAULT_SCALE, cf. below) using tree-level relations.

• Case 2:

all EW parameters are calculated from the W, Z and Higgs masses and the fine structure constant (taken from 1/ALPHAQED(MZ)) using tree-level relations.

• Case 3:

this choice corresponds to the G_mu-scheme. The EW parameters are calculated out of the weak gauge boson masses M_W, M_Z, the Higgs boson mass M_H and the Fermi constant GF using tree-level relations.

• Case 4:

this choice corresponds to the scheme employed in the FeynRules/UFO setup. The EW parameters are calculated out of the Z boson mass M_Z, the Higgs boson mass M_H, the Fermi constant GF and the fine structure constant (taken from 1/ALPHAQED(0) evolved to ALPHAQED_DEFAULT_SCALE, cf. below) using tree-level relations. Note that the W boson mass is not an input parameter in this scheme.

The electroweak coupling is by default not running, unless its running has been enabled (cf. COUPLINGS). In EW schemes 0 and 1, the squared scale at which the fixed EW coupling is to be evaluated can be specified by ALPHAQED_DEFAULT_SCALE, which defaults to the Z mass squared (note that this scale has to be specified in GeV squared). By default 1/ALPHAQED(0): 137.03599976 and ALPHAQED_DEFAULT_SCALE: 8315.18 (=mZ^2), which means that the MEs are evaluated with a fixed value of alphaQED=1/128.802.
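As an example, the G_mu scheme (Case 3) could be configured as follows; the mass values are illustrative PDG-like numbers and the GF key follows the naming used in this section:

```yaml
EW_SCHEME: 3
GF: 1.16637e-5            # Fermi constant (illustrative value)
PARTICLE_DATA:
  23: {Mass: 91.1876}     # Z
  24: {Mass: 80.379}      # W
  25: {Mass: 125.0}       # Higgs
```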

To account for quark mixing the CKM matrix elements have to be assigned. For this purpose the Wolfenstein parametrization [Wol83] is employed. The order of expansion in the lambda parameter is defined through

CKM:
Order: <order>
# other CKM settings ...


The default for Order is ‘0’, corresponding to a unit matrix. The parameter convention for higher expansion terms reads:

• Order: 1, the Cabibbo subsetting has to be set; it parametrises lambda and has the default value ‘0.22537’.
• Order: 2, in addition the value of CKM_A has to be set; its default is ‘0.814’.
• Order: 3, the order-lambda^3 expansion; Eta and Rho have to be specified. Their default values are ‘0.353’ and ‘0.117’, respectively.
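A full order-lambda^3 setup with the default Wolfenstein parameters would thus read as below; whether the second parameter is keyed A or CKM_A inside the block should be checked against the generated defaults, A is assumed here:

```yaml
CKM:
  Order: 3
  Cabibbo: 0.22537
  A: 0.814         # the CKM_A parameter referred to above (key name assumed)
  Eta: 0.353
  Rho: 0.117
```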

The CKM matrix elements V_ij can also be read in using

CKM:
Matrix_Elements:
i,j: <V_ij>
# other CKM matrix elements ...
# other CKM settings ...


Complex values can be given by providing two values: <V_ij> -> [Re, Im]. Values not explicitly given are taken from the afore computed Wolfenstein parametrisation. Setting CKM: {Output: true} enables an output of the CKM matrix.

The remaining parameter to fully specify the Standard Model is the strong coupling constant at the Z-pole, given through ALPHAS(MZ). Its default value is ‘0.118’. If the setup at hand involves hadron collisions and thus PDFs, the value of the strong coupling constant is automatically set consistent with the PDF fit and cannot be changed by the user. If Sherpa is compiled with LHAPDF support, it is also possible to use the alphaS evolution provided in LHAPDF by specifying ALPHAS: {USE_PDF: 1}. The perturbative order of the running of the strong coupling can be set via ORDER_ALPHAS, where the default ‘0’ corresponds to one-loop running and 1,2,3 to 2,3,4-loops, respectively. If the setup at hand involves PDFs, this parameter is set consistent with the information provided by the PDF set.
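For instance, to take the alpha_s evolution from LHAPDF and request two-loop running where that choice is not fixed by a PDF set:

```yaml
ALPHAS:
  USE_PDF: 1       # use the alpha_s evolution provided by LHAPDF
ORDER_ALPHAS: 1    # two-loop running
```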

If unstable particles (e.g. W/Z bosons) appear as intermediate propagators in the process, Sherpa uses the complex mass scheme to construct MEs in a gauge-invariant way. For full consistency with this scheme, by default the dependent EW parameters are also calculated from the complex masses (‘WIDTH_SCHEME: CMS’), yielding complex values e.g. for the weak mixing angle. To keep the parameters real one can set ‘WIDTH_SCHEME: Fixed’. This may spoil gauge invariance though.

With the following switches it is possible to change the properties of all fundamental particles:

PARTICLE_DATA:
<id>:
<Property>: <value>
# other properties for this particle ...
# data for other particles


Here, <id> is the PDG ID of the particle for which one or more properties are to be modified. <Property> can be one of the following:

Mass

Sets the mass (in GeV) of the particle.
Masses of particles and corresponding anti-particles are always set simultaneously.
For particles with Yukawa couplings, those are enabled/disabled consistently with the mass (taking into account the ‘Massive’ parameter) by default, but this can be modified using the ‘Yukawa’ parameter. Note that by default the Yukawa couplings are treated as running, cf. YUKAWA_MASSES.

Massive

Specifies whether the finite mass of the particle is to be considered in matrix-element calculations or not. Can be ‘true’ or ‘false’.

Width

Sets the width (in GeV) of the particle.

Active

Enables/disables the particle with PDG id ‘<id>’. Can be ‘true’ or ‘false’.

Stable

Sets the particle either stable or unstable according to the following options:

0

Particle and anti-particle are unstable

1

Particle and anti-particle are stable

2

Particle is stable, anti-particle is unstable

3

Particle is unstable, anti-particle is stable

This option applies to decays of hadrons (cf. Hadron decays) as well as particles produced in the hard scattering (cf. Hard decays). For the latter, alternatively the decays can be specified explicitly in the process setup (see Processes) to avoid the narrow-width approximation.

Priority

Allows one to overwrite the default automatic flavour sorting in a process by specifying a priority for the given flavour. This way one can identify certain particles which are part of a container (e.g. massless b-quarks), such that their position can be used reliably in selectors and scale setters.

Note: PARTICLE_DATA can also be used to change the properties of hadrons; the same switches apply (except for Massive), see Hadronization.
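An illustrative block setting several of these properties at once; the numerical values are examples, not defaults:

```yaml
PARTICLE_DATA:
  6:
    Mass: 173.2     # top-quark mass in GeV (illustrative)
    Width: 1.32     # GeV
    Stable: 0       # top and anti-top decay
  5:
    Massive: true   # treat b quarks as massive in matrix elements
```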

#### 5.4.1.2 Effective Higgs Couplings

The HEFT describes the effective coupling of gluons and photons to Higgs bosons via a top-quark loop and, in the case of photons, a W-boson loop. This supplement to the Standard Model can be invoked by configuring MODEL: HEFT.

The effective coupling of gluons to the Higgs boson, g_ggH, can be calculated either for a finite top-quark mass or in the limit of an infinitely heavy top using the switch FINITE_TOP_MASS: true or FINITE_TOP_MASS: false, respectively. Similarly, the photon-photon-Higgs coupling, g_ppH, can be calculated both for finite top and/or W masses or in the infinite mass limit using the switches FINITE_TOP_MASS and FINITE_W_MASS. The default choice for both is the infinite mass limit in either case. Note that these switches affect only the calculation of the value of the effective coupling constants. Please refer to the example setup H+jets production in gluon fusion with finite top mass effects for information on how to include finite top quark mass effects on a differential level.

Either one of these couplings can be switched off using the DEACTIVATE_GGH: true and DEACTIVATE_PPH: true switches. Both default to false.
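A HEFT setup with finite-top-mass effects in the gluon coupling and the photon coupling switched off might look like:

```yaml
MODEL: HEFT
FINITE_TOP_MASS: true    # finite m_t in the effective ggH coupling
DEACTIVATE_PPH: true     # disable the photon-photon-Higgs coupling
```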

### 5.4.2 UFO Model Interface

To use a model generated by the FeynRules package [Chr08],[Chr09], the model must be made available to Sherpa by running

  <prefix>/bin/Sherpa-generate-model <path-to-ufo-model>


where <path-to-ufo-model> specifies the location of the directory where the UFO model can be found. UFO support must be enabled using the ‘--enable-ufo’ option of the configure script, as described in Installation. This requires Python version 2.6 or later and an installation of SCons.

The above command generates source code for the UFO model, compiles it, and installs the corresponding library, making it available for event generation. Python, SCons, and the UFO model directory are not required for event generation once the above command has finished. Note that the installation directory for the created library and the paths to Sherpa libraries and headers are predetermined automatically during the installation of Sherpa. If the Sherpa installation is moved afterwards, or if the user does not have the necessary permissions to install the new library in the predetermined location, these paths can be set manually. Please run

  <prefix>/bin/Sherpa-generate-model --help


for information on the relevant command line arguments.

An example configuration file will be written to the working directory while the model is generated with Sherpa-generate-model. This config file shows the syntax for the respective model parameters and can be used as a template. It is also possible to use an external parameter file by specifying the path to the file with the switch ‘UFO_PARAM_CARD’ in the configuration file or on the command line. Relative and absolute file paths are allowed. This option makes it possible to use native UFO parameter cards, as used e.g. by MadGraph.
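For example, with a purely illustrative file name and path:

```yaml
UFO_PARAM_CARD: Cards/param_card.dat   # relative or absolute path
```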

Note that the use of the SM ‘PARTICLE_DATA’ switches ‘Mass’, ‘Massive’, ‘Width’, and ‘Stable’ is discouraged when using UFO models as the UFO model completely defines all particle properties and their relation to the independent model parameters. These model parameters should be set using the standard UFO parameter syntax as shown in the example run card generated by the Sherpa-generate-model command.

For parts of the simulation other than the hard process (hadronization, underlying event, running of the SM couplings) Sherpa uses internal default values for the Standard Model fermion masses if they are massless in the UFO model. This is necessary for a meaningful simulation. In the hard process however, the UFO model masses are always respected.

For an example UFO setup, see Event generation in the MSSM using UFO. For more details on the Sherpa interface to FeynRules please consult [Chr09],[Hoe14c].

Please note that AMEGIC can only be used for the built-in models (SM and HEFT). The use of UFO models is only supported by Comix.

## 5.5 Matrix elements

The following parameters are used to steer the matrix element setup:

### 5.5.1 ME_GENERATORS

The list of matrix element generators to be employed during the run. When setting up hard processes, Sherpa calls these generators in order to check whether any of them is capable of generating the corresponding matrix element. This parameter can also be set on the command line using option ‘-m’, see Command line options.

The built-in generators are

Internal

Simple matrix element library, implementing a variety of 2->2 processes.

Amegic

The AMEGIC++ generator published under [Kra01]

Comix

The Comix generator published under [Gle08]
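To make both general-purpose generators available during a run, one would for example set:

```yaml
ME_GENERATORS: [Comix, Amegic]
```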

### 5.5.2 RESULT_DIRECTORY

This parameter specifies the name of the directory which is used by Sherpa to store integration results and phasespace mappings. The default is ‘Results/’. It can also be set using the command line parameter ‘-r’, see Command line options. The directory will be created automatically, unless the option ‘GENERATE_RESULT_DIRECTORY: false’ is specified. Its location is relative to a potentially specified input path, see Command line options.

### 5.5.3 EVENT_GENERATION_MODE

This parameter specifies the event generation mode. It can also be set on the command line using option ‘-w’, see Command line options. The three possible options are:

Weighted

(alias ‘W’) Weighted events.

Unweighted

(alias ‘U’) Events with constant weight, which have been unweighted against the maximum determined during phase space integration. In case of rare events with w > max the parton level event is repeated floor(w/max) times and the remainder is unweighted. While this leads to unity weights for all events it can be misleading since the statistical impact of a high-weight event is not accounted for. In the extreme case this can lead to a high-weight event looking like a significant bump in distributions (in particular after the effects of the parton shower).

PartiallyUnweighted

(alias ‘P’) Identical to ‘Unweighted’ events, but if the weight exceeds the maximum determined during the phase space integration, the event will carry a weight of w/max to correct for that. This is the recommended option to generate unweighted events and the default setting in Sherpa.

For ‘Unweighted’ and ‘PartiallyUnweighted’ events the user may set ‘OVERWEIGHT_THRESHOLD: <maxweight>’ to cap the maximal over-weight w/max taken into account.
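A typical configuration for unweighted-event generation with a cap on the over-weight would be as follows; the threshold value is illustrative:

```yaml
EVENT_GENERATION_MODE: PartiallyUnweighted
OVERWEIGHT_THRESHOLD: 10    # cap the maximal over-weight w/max
```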

### 5.5.4 SCALES

This parameter specifies how to compute the renormalization and factorization scale and potential additional scales.

Sherpa provides several built-in scale setting schemes. For each scheme the scales are then set using expressions understood by the Interpreter. Each scale setter’s syntax is

SCALES: <scale-setter>{<scale-definition>}


to define a single scale for both the factorisation and renormalisation scale. They can be set to different values using

SCALES: <scale-setter>{<fac-scale-definition>}{<ren-scale-definition>}


In parton shower matched/merged calculations a third perturbative scale is present, the resummation or parton shower starting scale. It can be set by the user in the third argument like

SCALES: <scale-setter>{<fac-scale-definition>}{<ren-scale-definition>}{<res-scale-definition>}


If the final state of your hard scattering process contains QCD partons, their kinematics fix the resummation scale for subsequent emissions (cf. the description of the ‘METS’ scale setter below). With the CS Shower, you can instead specify your own resummation scale also in such a case: Set CSS_RESPECT_Q2: true and use the third argument to specify your resummation scale as above.

Note: for all scales their squares have to be given. See Predefined scale tags for some predefined scale tags.

More than three scales can be set as well to be subsequently used, e.g. by different couplings, see COUPLINGS.

#### 5.5.4.1 Scale setters

The scale setter options which are currently available are

VAR

The variable scale setter is the simplest scale setter available. Scales are simply specified by additional parameters in a form which is understood by the internal interpreter, see Interpreter. If, for example the invariant mass of the lepton pair in Drell-Yan production is the desired scale, the corresponding setup reads

SCALES: VAR{Abs2(p[2]+p[3])}


Renormalization and factorization scales can be chosen differently. For example in Drell-Yan + jet production one could set

SCALES: VAR{Abs2(p[2]+p[3])}{MPerp2(p[2]+p[3])}

FASTJET

If FastJet is enabled by including --enable-fastjet=/path/to/fastjet in the configure options, this scale setter can be used to set a scale based on jet-, rather than parton-momenta.

The final state parton configuration is first clustered using FastJet and the resulting jet momenta are then added back to the list of non-strongly-interacting particles. The numbering of momenta therefore stays effectively the same as in standard Sherpa, except that final state partons are replaced with jets, if applicable (a parton might not pass the jet criteria and get "lost"). In particular, the indices of the initial state partons and all EW particles are unaffected. Jet momenta can then be accessed as described in Predefined scale tags through the identifiers p[i], and the nodal values of the clustering sequence can be used through MU_n2. The syntax is

SCALES: FASTJET[<jet-algo-parameter>]{<scale-definition>}


Therein the parameters of the jet algorithm to be used to define the jets are given as a comma separated list of

• the jet algorithm A:kt,antikt,cambridge,siscone (default antikt)
• phase space restrictions, i.e. PT:<min-pt>, ET:<min-et>, Eta:<max-eta>, Y:<max-rap> (otherwise unrestricted)
• radial parameter R:<rad-param> (default 0.4)
• f-parameter for Siscone f:<f-param> (default 0.75)
• recombination scheme C:E,pt,pt2,Et,Et2,BIpt,BIpt2 (default E)
• b-tagging mode B:0,1,2 (default 0) This parameter, if specified different from its default 0, allows to use b-tagged jets only, based on the parton-level constituents of the jets. There are two options: With B:1 both b and anti-b quarks are counted equally towards b-jets, while for B:2 they are added with a relative sign as constituents, i.e. a jet containing b and anti-b is not tagged.
• scale setting mode M:0,1 (default 1) It is possible to specify multiple scale definition blocks, each enclosed in curly brackets. The scale setting mode parameter then determines, how those are interpreted: In the M:0 case, they specify factorisation, renormalisation and resummation scale separately in that order. In the M:1 case, the n given scales are used to calculate a mean scale such that alpha_s^n(mu_mean)=alpha_s(mu_1)...alpha_s(mu_n) This scale is then used for factorisation, renormalisation and resummation scale.

Consider the example of lepton pair production in association with jets. The following scale setter

SCALES: FASTJET[A:kt,PT:10,R:0.4,M:0]{sqrt(PPerp2(p[4])*PPerp2(p[5]))}


reconstructs jets using the kt-algorithm with R=0.4 and a minimum transverse momentum of 10 GeV. The scale of all strong couplings is then set to the geometric mean of the hardest and second hardest jet. Note M:0.

Similarly, in processes with multiple strong couplings, their renormalisation scales can be set to different values, e.g.

SCALES: FASTJET[A:kt,PT:10,R:0.4,M:1]{PPerp2(p[4])}{PPerp2(p[5])}


sets the scale of one strong coupling to the transverse momentum of the hardest jet, and the scale of the second strong coupling to the transverse momentum of second hardest jet. Note M:1 in this case.

The additional tags MU_22 ... MU_n2 (n=2..njet+1) hold the nodal values of the jet clustering in descending order.

Please note that currently this type of scale setting can only be done within the process block (Processes) and not within the (me) section.

METS

The matrix element is clustered onto a core 2->2 configuration using an inversion of the current parton shower, cf. SHOWER_GENERATOR, recombining (n+1) particles into n on-shell particles. Their corresponding flavours are determined using run-time information from the matrix element generator. It defines the three tags MU_F2, MU_R2 and MU_Q2 whose values are assigned through this clustering procedure. While MU_F2 and MU_Q2 are defined as the lowest invariant mass or negative virtuality in the core process (for core interactions which are pure QCD processes the scales are set to the maximum transverse mass squared of the outgoing particles), MU_R2 is determined using this core scale and the individual clustering scales such that

  alpha_s(MU_R2)^{n+k} = alpha_s(core-scale)^k alpha_s(kt_1) ... alpha_s(kt_n)


where k is the order in the strong coupling of the core process, n is the number of clusterings, and kt_i are the relative transverse momenta at each clustering. The tags MU_F2, MU_R2 and MU_Q2 can then be used on equal footing with the tags of Predefined scale tags to define the final scale.

METS is the default scale scheme in Sherpa, since it is employed for truncated shower merging, see Multijet merged event generation with Sherpa, both at leading and next-to-leading order. Thus, Sherpa’s default is

SCALES: METS{MU_F2}{MU_R2}{MU_Q2}


As the tags MU_F2, MU_R2 and MU_Q2 are predefined by the METS scale setter, they may be omitted, i.e.

SCALES: METS


leads to an identical scale definition.

The METS scale setter comes in two variants: STRICT_METS and LOOSE_METS. While the former employs the exact inverse of the parton shower for the clustering procedure, and is therefore rather time consuming for multiparton final states, the latter is a simplified version and much faster. Giving METS as the scale setter results in using LOOSE_METS for the integration and STRICT_METS during event generation. Giving either STRICT_METS or LOOSE_METS as the scale setter results in using the respective one during both integration and event generation.

Clusterings onto 2->n (n>2) configurations is possible, see METS scale setting with multiparton core processes.

This scheme might be subject to changes to enable further classes of processes for merging in the future and should therefore be used with care. Integration results might change slightly between different Sherpa versions.

Occasionally, users might encounter the warning message

METS_Scale_Setter::CalculateScale(): No CSS history for '<process name>' in <percentage>% of calls. Set \hat{s}.


As long as the percentage quoted here is not too high, this does not pose a serious problem. The warning occurs when - based on the current colour configuration and matrix element information - no suitable clustering is found by the algorithm. In such cases the scale is set to the invariant mass of the partonic process.

#### 5.5.4.2 Custom scale implementation

When the flexibility of the ‘VAR’ scale setter above is not sufficient, it is also possible to implement a completely custom scale scheme within Sherpa as C++ class plugin. For details please refer to the Customization section.

#### 5.5.4.3 Predefined scale tags

There exist a few predefined tags to facilitate commonly used scale choices or easily implement a user defined scale.

p[n]

Access to the four momentum of the nth particle. The initial state particles carry n=0 and n=1, the final state momenta start from n=2. Their ordering is determined by Sherpa’s internal particle ordering and can be read e.g. from the process names displayed at run time. Please note that when jets are built out of the final state partons first, e.g. through the FASTJET scale setter, these parton momenta are replaced by the jet momenta, ordered in transverse momentum. For example the process u ub -> e- e+ G G will have the electron and the positron at positions p[2] and p[3] and the gluons at positions p[4] and p[5]. However, when finding jets first, the electrons will still be at p[2] and p[3] while the harder jet will be at p[4] and the softer one at p[5].

H_T2

Square of the scalar sum of the transverse momenta of all final state particles.

H_TM2

Square of the scalar sum of the transverse energies of all final state particles, i.e., contrary to H_T2, H_TM2 takes particle masses into account.

H_TY2(<factor>,<exponent>)

Square of the scalar sum of the transverse momenta of all final state particles weighted by their rapidity distance from the final state boost vector. It thus takes the form

  H_T^{(Y)} = sum_i pT_i exp [ fac |y-yboost|^exp ]


Typical values to use would be 0.3 and 1.
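With those typical values, the corresponding scale setter reads:

```yaml
SCALES: VAR{H_TY2(0.3,1)}
```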

H_Tp2

Scale setter for lepton-pair production in association with jets only, implements

  H_T' = sqrt(m_ll^2 + pT(ll)^2) + sum_i pT_i (i not l)

DH_Tp2(<recombination-method>,<dR>)

Implements a version of H_Tp2 which dresses the charged particles first. The parameter <recombination-method> can take the following values: Cone, kt, CA or antikt, while <dR> is the respective algorithm’s angular distance parameter.

MU_F2, MU_R2, MU_Q2

Tags holding the values of the factorisation, renormalisation scale and resummation scale determined through backwards clustering in the METS scale setter.

MU_22, MU_32, ..., MU_n2

Tags holding the nodal values of the jet clustering in the FASTJET scale setter, cf. Scale setters.

All of those objects can be operated upon by any operator/function known to the Interpreter.

#### 5.5.4.4 Scale schemes for NLO calculations

For next-to-leading order calculations it must be guaranteed that the scale is calculated separately for the real correction and the subtraction terms, such that within the subtraction procedure the same amount is subtracted and added back. Starting from version 1.2.2 this is the case for all scale setters in Sherpa. Also, the definition of the scale must be infrared safe w.r.t. the radiation of an extra parton. Infrared safe (for QCD-NLO calculations) are:

• any function of momenta of NOT strongly interacting particles
• sum of transverse quantities of all partons (e.g. H_T2)
• any quantity referring to jets, constructed by an IR safe jet algorithm, see below.

Not infrared safe are

• any function of momenta of specific partons
• for processes with hadrons in the initial state: any quantity that depends on parton momenta along the beam axis, including the momenta of the initial state partons themselves

Since the total number of partons is different for different pieces of the NLO calculation any explicit reference to a parton momentum will lead to an inconsistent result.

#### 5.5.4.5 Explicit scale variations

The factorisation and renormalisation scales in the fixed-order matrix elements can be varied separately simply by introducing a prefactor into the scale definition, e.g.

SCALES: VAR{0.25*H_T2}{0.25*H_T2}


for setting both the renormalisation and factorisation scales to H_T/2.

Similarly, the starting scale of the parton shower resummation in a ME+PS merged sample can be varied using the METS scale setter’s third argument like:

SCALES: METS{MU_F2}{MU_R2}{4.0*MU_Q2}


#### 5.5.4.6 METS scale setting with multiparton core processes

The METS scale setter stops clustering when no combination is found that corresponds to a parton shower branching, or if two subsequent branchings are unordered in terms of the parton shower evolution parameter. The core scale of the remaining 2->n process then needs to be defined. This is done by specifying a core scale through

CORE_SCALE: <core-scale-setter>{<core-fac-scale-definition>}{<core-ren-scale-definition>}{<core-res-scale-definition>}


As always, for scale setters which define MU_F2, MU_R2 and MU_Q2 the scale definition can be dropped. Possible core scale setters are

VAR

Variable core scale setter. Syntax is identical to variable scale setter.

QCD

QCD core scale setter. Scales are set to the harmonic mean of s, t and u. Only useful for 2->2 cores, as an alternative to the usual core scale of the METS scale setter.

TTBar

Core scale setter for processes involving top quarks. Implementation details are described in Appendix C of [Hoe13].

SingleTop

Core scale setter for single-top production in association with one jet. If the W is in the t-channel (s-channel), the squared scales are set to the Mandelstam variables t=2*p[0]*p[2] (s=2*p[0]*p[1]).
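As an illustration, a VAR core scale that fixes all three core scales to the invariant mass of the core process could be set up as follows (a sketch only; the concrete scale definition depends on the process at hand):

```yaml
# illustrative: use the core invariant mass for the factorisation,
# renormalisation and resummation core scales
CORE_SCALE: VAR{Abs2(p[0]+p[1])}{Abs2(p[0]+p[1])}{Abs2(p[0]+p[1])}
```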

### 5.5.5 COUPLINGS

Within Sherpa, strong and electroweak couplings can be computed at any scale specified by a scale setter (cf. SCALES). The ‘COUPLINGS’ tag links the argument of a running coupling to one of the respective scales. This is better seen in an example. Assuming the following input

SCALES: VAR{...}{PPerp2(p[2])}{Abs2(p[2]+p[3])}
COUPLINGS:
- "Alpha_QCD 1"
- "Alpha_QED 2"


Sherpa will compute any strong couplings at scale one, i.e. ‘PPerp2(p[2])’, and electroweak couplings at scale two, i.e. ‘Abs2(p[2]+p[3])’. Note that counting starts at zero.

### 5.5.6 KFACTOR

This parameter specifies how to evaluate potential K-factors in the hard process. This is equivalent to the ‘COUPLINGS’ specification of Sherpa versions prior to 1.2.2. Currently available options are

None

No reweighting

VAR

Couplings specified by an additional parameter in a form which is understood by the internal interpreter, see Interpreter. The tags Alpha_QCD and Alpha_QED serve as links to the built-in running coupling implementation.

If, for example, the process ‘g g -> h g’ is computed in the effective theory, one could think of evaluating two powers of the strong coupling at the Higgs mass scale and one power at the transverse momentum squared of the gluon. Assuming the Higgs mass to be 120 GeV, the corresponding reweighting would read

SCALES:    VAR{...}{PPerp2(p[3])}
COUPLINGS: "Alpha_QCD 1"
KFACTOR:   VAR{sqr(Alpha_QCD(sqr(120))/Alpha_QCD(MU_12))}


As can be seen from this example, scales are referred to as MU_<i>2, where <i> is replaced with the appropriate number. Note that counting starts at zero.

### 5.5.7 YUKAWA_MASSES

This parameter specifies whether the Yukawa couplings are evaluated using running or fixed quark masses: YUKAWA_MASSES: Running is the default since version 1.2.2 while YUKAWA_MASSES: Fixed was the default until 1.2.1.

### 5.5.8 Dipole subtraction

This list of parameters can be used to optimize the performance when employing the Catani-Seymour dipole subtraction [Cat96b] as implemented in Amegic [Gle07]. The dipole parameters are specified as subsettings to the DIPOLES setting, like this:

DIPOLES:
  ALPHA: <alpha>
  NF_GSPLIT: <nf>
  # other dipole settings ...


The following parameters can be customised:

ALPHA

Specifies a dipole cutoff in the nonsingular region [Nag03]. Changing this parameter shifts contributions from the subtracted real correction piece (RS) to the piece including integrated dipole terms (I), while their sum remains constant. This parameter can be used to optimize the integration performance of the individual pieces. Also the average calculation time for the subtracted real correction is reduced with smaller choices of ‘ALPHA’ due to the (on average) reduced number of contributing dipole terms. For most processes a reasonable choice is between 0.01 and 1 (default). See also Choosing DIPOLES ALPHA.

AMIN

Specifies the cutoff of real correction terms in the infrared region to avoid numerical problems with the subtraction. The default is 1.e-8.

NF_GSPLIT

Specifies the number of quark flavours that are produced from gluon splittings. This number must be at least the number of massless flavours (default). If this number is larger than the number of massless quarks the massive dipole subtraction [Cat02] is employed.

KAPPA

Specifies the kappa-parameter in the massive dipole subtraction formalism [Cat02]. The default is 2.0/3.0.
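Putting these options together, a dipole setup for a more complicated process might look as follows (the values shown are purely illustrative and should be adapted to the process at hand):

```yaml
DIPOLES:
  ALPHA: 0.03       # shift contributions between the RS and I pieces
  AMIN: 1.0e-8      # infrared cutoff of the real correction terms
  NF_GSPLIT: 5      # number of flavours produced from gluon splittings
  KAPPA: 0.6666667  # kappa parameter of the massive subtraction
```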

## 5.6 Processes

The process setup takes the following general form

PROCESSES:
# Process 1:
- Process: <process declaration>
  <parameter>: <value>
  <multiplicities-to-be-applied-to>:
    <parameter>: <value>
  ...
# Process 2:
- Process: <process declaration>
...


That is, PROCESSES is followed by a list of process definitions. Each process definition has one mandatory sub-setting, Process, which defines the initial- and final-state particles.

In addition, several optional parameters steer the process setup. Most of them can be grouped under a multiplicity key, which can either be a single integer or a range of integers, e.g. 2-4: { <settings-that-affect-only-multiplicities-2-to-4> }.
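For illustration, a sketch of a process definition using such a multiplicity key could look as follows (the process and parameter values are hypothetical; Integration_Error is one of the per-process steering parameters):

```yaml
PROCESSES:
- Process: "93 93 -> 11 -11 93{2}"
  # applied to all multiplicities
  Scales: VAR{Abs2(p[2]+p[3])}{Abs2(p[2]+p[3])}
  # applied only to the 3- and 4-particle final states
  3-4:
    Integration_Error: 0.02
```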

The following parameters are used to steer the process setup:

### 5.6.1 Process

This tag starts the setup of a process or a set of processes with common properties. It must be followed by the specification of the (core) process itself. The initial and final state particles are specified by their PDG codes, or by particle containers, see Particle containers. Examples are

Process: 93 93 -> 11 -11

Sets up a Drell-Yan process group with light quarks in the initial state.

Process: "11 -11 -> 93 93 93{3}"

Sets up jet production in e+e- collisions with up to three additional jets.

The syntax for specifying processes is explained in the following sections:

#### 5.6.1.1 PDG codes

Initial and final state particles are specified using their PDG codes (cf. PDG). A list of particles with their codes, and some of their properties, is printed at the start of each Sherpa run, when the OUTPUT is set at level ‘2’.

#### 5.6.1.2 Particle containers

Sherpa contains a set of containers that collect particles with similar properties, namely

• lepton (carrying number 90),
• neutrino (carrying number 91),
• fermion (carrying number 92),
• jet (carrying number 93),
• quark (carrying number 94).

These containers hold all massless particles and anti-particles of the denoted type and allow for a more efficient definition of initial and final states to be considered. The jet container consists of the gluon and all massless quarks, as set by

PARTICLE_DATA:
  <id>:
    Mass: 0
    # ... and/or ...
    Massive: false


A list of particle containers is printed at the start of each Sherpa run, when the OUTPUT is set at level ‘2’.

It is also possible to define a custom particle container using the keyword PARTICLE_CONTAINER. The container must be given an unassigned particle ID (kf-code), a name of your choosing, and its flavour content. An example would be the collection of all down-type quarks using the unassigned ID 98, which could be declared as

PARTICLE_CONTAINER:
  98:
    Name: downs
    Flavours: [1, -1, 3, -3, 5, -5]


Note that particles and anti-particles have to be added separately, if both are wanted.

#### 5.6.1.3 Parentheses

The parenthesis notation allows grouping a list of processes with different flavour content but similar structure. This is most useful in the context of simulations containing heavy quarks. In a setup with massive b-quarks, for example, the b-quark will not be part of the jets container. In order to include b-associated processes easily, the following can be used:

PARTICLE_DATA:
  5: {Massive: true}
PARTICLE_CONTAINER:
  98: {Name: B, Flavours: [5, -5]}
PROCESSES:
- Process: "11 -11 -> (93,98) (93,98)"
...


#### 5.6.1.4 Curly brackets

The curly-bracket notation allows a process specification to include up to a given number of additional jets in the final state. This is easily seen from an example,

Process: "11 -11 -> 93 93 93{3}"

Sets up jet production in e+e- collisions. The matrix-element final state may contain 2, 3, 4 or 5 light partons.

### 5.6.2 Decay

Specifies the exclusive decay of a particle produced in the matrix element. The virtuality of the decaying particle is sampled according to a Breit-Wigner distribution. In practice this amounts to selecting only those diagrams containing s-channels of the specified flavour, while the phase space is kept general. Consequently, all spin correlations are preserved. An example would be

Process: 11 -11 -> 6[a] -6[b]
Decay:     6[a] -> 5 24[c]
Decay:    -6[b] -> -5 -24[d]
Decay:    24[c] -> -13 14
Decay:   -24[d] -> 94 94


### 5.6.3 DecayOS

Specifies the exclusive decay of a particle produced in the matrix element. The decaying particle is on mass-shell, i.e. a strict narrow-width approximation is used. In practice this amounts to selecting only those diagrams containing s-channels of the specified flavour, and the phase space is factorised as well. Nonetheless, all spin correlations are preserved. An example would be

Process: 11 -11 -> 6[a] -6[b]
DecayOS:   6[a] -> 5 24[c]
DecayOS:  -6[b] -> -5 -24[d]
DecayOS:  24[c] -> -13 14
DecayOS: -24[d] -> 94 94


### 5.6.4 No_Decay

Removes all diagrams associated with the decay/s-channel of the given flavours. This serves to avoid resonant contributions in processes like W-associated single-top production. Note that this method breaks gauge invariance! At the moment this flag can only be set for Comix. An example would be

Process: "93 93 -> 6[a] -24[b] 93{1}"
Decay:     6[a] -> 5 24[c]
DecayOS:  24[c] -> -13 14
DecayOS: -24[b] -> 11 -12
No_Decay: -6


### 5.6.5 Scales

Sets a process-specific scale. For the corresponding syntax see SCALES.
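For example, a process-specific scale could be attached to a single process definition like this (an illustrative sketch; the scale definition is a placeholder):

```yaml
PROCESSES:
- Process: "93 93 -> 11 -11"
  # overrides the global SCALES setting for this process only
  Scales: VAR{Abs2(p[2]+p[3])}{Abs2(p[2]+p[3])}
```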

### 5.6.6 Couplings

Sets process-specific couplings. For the corresponding syntax see COUPLINGS.

### 5.6.7 CKKW

Sets up multijet merging according to [Hoe09]. The additional argument specifies the parton separation criterion ("merging cut") Q_{cut} in GeV. It can be given in any form which is understood by the internal interpreter, see Interpreter. Examples are

• Hadronic collider: CKKW: 20
• Leptonic collider: CKKW: pow(10,-2.5/2.0)*E_CMS
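A minimal merged setup at a hadron collider, combining the process declaration from above with the merging cut, might thus look like this (sketch):

```yaml
PROCESSES:
- Process: "93 93 -> 11 -11 93{2}"
  # multijet merging with a 20 GeV merging cut
  CKKW: 20
```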

.  $prefix/share/SHERPA-MC/sherpa-completion

and you will be able to tab-complete any parameters on a Sherpa command line. To permanently enable this feature in your bash shell, you’ll have to add the source command above to your ~/.bashrc.

## 6.2 Rivet analyses

Sherpa is equipped with an interface to the analysis tool Rivet. To enable it, Rivet and HepMC have to be installed (e.g. using the Rivet bootstrap script) and your Sherpa compilation has to be configured with the following options:

./configure --enable-hepmc2=/path/to/hepmc2 --enable-rivet=/path/to/rivet

(Note: Both paths are equal if you used the Rivet bootstrap script.)

To use the interface, you need to enable it using the ‘ANALYSIS’ option and to configure it using the ‘RIVET’ settings group as follows:

ANALYSIS: Rivet
RIVET:
  -a:
    - D0_2008_S7662670
    - CDF_2007_S7057202
    - D0_2004_S5992206
    - CDF_2008_S7828950

The -a list specifies which Rivet analyses to run, and the histogram output file can be changed with the normal ANALYSIS_OUTPUT switch.

You can also use rivet-mkhtml (distributed with Rivet) to create plot webpages from Rivet’s output files:

source /path/to/rivetenv.sh   # see below
rivet-mkhtml -o output/ file1.yoda [file2.yoda, ...]
firefox output/index.html &

If your Rivet installation is not in a standard location, the bootstrap script should have created a rivetenv.sh which you have to source before running the rivet-mkhtml script.

## 6.3 HZTool analyses

Sherpa is equipped with an interface to the analysis tool HZTool. To enable it, HZTool and CERNLIB have to be installed and your Sherpa compilation has to be configured with the following options:

./configure --enable-hztool=/path/to/hztool --enable-cernlib=/path/to/cernlib --enable-hepevtsize=4000

Note that an example CERNLIB installation bootstrap script is provided in AddOns/HZTool/start_cern_64bit. This script is only provided for convenience; we will not provide support if it is not working as expected.

To use the interface, enable it using the ‘ANALYSIS’ option and configure it using the ‘HZTOOL’ settings group:

ANALYSIS: HZTool
HZTOOL:
  HISTO_NAME: output.hbook
  HZ_ENABLE:
    - hz00145
    - hz01073
    - hz02079
    - hz03160

The HZ_ENABLE list specifies which HZTool analyses to run. The histogram output directory can be changed using the ANALYSIS_OUTPUT switch, while HZTOOL:HISTO_NAME specifies the hbook output file.

## 6.4 MCFM interface

Sherpa is equipped with an interface to the NLO library of MCFM for dedicated processes. To enable it, MCFM has to be installed and compiled into a single library, libMCFM.a. To this end, an installation script is provided in AddOns/MCFM/install_mcfm.sh. Please note that, due to some process-specific changes made by the installation script to the MCFM code, only a few selected processes of MCFM-6.3 are available through the interface.

Finally, your Sherpa compilation has to be configured with the following options:

./configure --enable-mcfm=/path/to/mcfm

To use the interface, specify

Loop_Generator: MCFM

in the process section of the run card and add it to the list of generators in ME_GENERATORS. Of course, MCFM’s process.DAT file has to be copied to the current run directory.

## 6.5 Debugging a crashing/stalled event

### 6.5.1 Crashing events

If an event crashes, Sherpa tries to obtain all the information needed to reproduce that event and writes it out into a directory named

Status__<date>_<time>

If you are a Sherpa user and want to report this crash to the Sherpa team, please attach a tarball of this directory to your email. This allows us to reproduce your crashed event and debug it.

To debug it yourself, you can follow these steps (only do this if you are a Sherpa developer, or want to debug a problem in an addon library created by yourself):

• Copy the random seed out of the status directory into your run path:

cp Status__<date>_<time>/random.dat ./

• Run your normal Sherpa command line with an additional parameter:

Sherpa [...] 'STATUS_PATH: ./'

Sherpa will then read in your random seed from “./random.dat” and generate events from it.

• Ideally, the first event will lead to the crash you saw earlier, and you can now turn on debugging output to find out more about the details of that event and test code changes to fix it:

Sherpa [...] --output 15 'STATUS_PATH: ./'

### 6.5.2 Stalled events

If event generation seems to stall, you first have to find out the number of the current event. For that you would terminate the stalled Sherpa process (using Ctrl-c) and check in its final output for the number of generated events. Now you can request Sherpa to write out the random seed for the event before the stalled one:

Sherpa [...] --events <#events - 1> 'SAVE_STATUS: Status/'

(Replace <#events - 1> using the number you figured out earlier.)

The created status directory can either be sent to the Sherpa developers, or be used in the same steps as above to reproduce that event and debug it.

## 6.6 Versioned installation

If you want to install different Sherpa versions into the same prefix (e.g. /usr/local), you have to enable versioning of the installed directories by using the configure option ‘--enable-versioning’. Optionally you can even pass an argument to this parameter of what you want the version tag to look like.

## 6.7 NLO calculations

### 6.7.1 Choosing DIPOLES ALPHA

A variation of the parameter DIPOLES:ALPHA (see Dipole subtraction) changes the contribution from the real (subtracted) piece (RS) and the integrated subtraction terms (I), keeping their sum constant. Varying this parameter provides a nice check of the consistency of the subtraction procedure, and it allows the integration performance of the real correction to be optimized. This piece has the most complicated momentum phase space and is often the most time-consuming part of the NLO calculation. The optimal choice depends on the specific setup and can best be determined by trial.

Hints to find a good value:

• The smaller DIPOLES:ALPHA is, the fewer dipole terms have to be calculated, and thus the less time the evaluation per phase-space point takes.
• Too small choices lead to large cancellations between the RS and the I parts and thus to large statistical errors.
• For very simple processes (with only a total of two partons in the initial and the final state of the Born process) the best choice is typically DIPOLES: {ALPHA: 1}. The more complicated a process is, the smaller DIPOLES:ALPHA should be (e.g. with 5 partons the best choice is typically around 0.01).
• A good choice is typically such that the cross section from the RS piece is significantly positive but not much larger than the Born cross section.

### 6.7.2 Integrating complicated loop MEs

For complicated processes the evaluation of one-loop matrix elements can be very time consuming. The generation time of a fully optimized integration grid can become prohibitively long. Rather than using a poorly optimized grid in this case, it is more advisable to use a grid optimized with either the Born matrix elements alone, or the Born matrix elements and the finite part of the integrated subtraction terms only, working under the assumption that the distributions in phase space are rather similar. This can be done by one of the following methods:

1. Employ a dummy virtual (requires no computing time, returns a finite value as its result) to optimise the grid. This only works if V is not the only NLO_QCD_Part specified.
   1. During integration set the Loop_Generator to Dummy and add Dummy to your list of ‘ME_GENERATORS’. The grid will then be optimised to the phase space distribution of the sum of the Born matrix element and the finite part of the integrated subtraction term, plus a finite value from Dummy. Note: The cross section displayed during integration will also correspond to these contributions.
   2. During event generation reset Loop_Generator to your generator supplying the virtual correction. The events generated then carry the correct event weight.
2. Suppress the evaluation of the virtual and/or the integrated subtraction terms. This only works if Amegic is used as the matrix element generator for the BVI pieces and V is not the only NLO_QCD_Part specified.
   1. During integration add AMEGIC: { NLO_BVI_MODE: <num> } to your configuration. <num> takes the following values: 1-B, 2-I, and 4-V. The values are additive, i.e. 3-BI. Note: The cross section displayed during integration will match the parts selected by NLO_BVI_MODE.
   2. During event generation remove the switch again and the events will carry the correct weight.

Note: this will not work for the RS piece!

### 6.7.3 Avoiding misbinning effects

Close to the infrared limit, the real emission matrix element and corresponding subtraction events exhibit large cancellations. If the (minor) kinematics difference of the events happens to cross a parton-level cut or analysis histogram bin boundary, large spurious spikes can appear. These can be smoothed to some extent by shifting the weight from the subtraction kinematics to the real-emission kinematics if the dipole measure alpha is below a given threshold. The fraction of the shifted weight is inversely proportional to the dipole measure, such that the final real-emission and subtraction weights are calculated as:

w_r -> w_r + sum_i [1-x(alpha_i)] w_{s,i}
foreach i: w_{s,i} -> x(alpha_i) w_{s,i}

with the function x(alpha)=(alpha/|alpha_0|)^n for alpha<alpha_0 and 1 otherwise. The threshold can be set by the parameter ‘NLO_SMEAR_THRESHOLD: <alpha_0>’ and the functional form of alpha, and thus the interpretation of the threshold, can be chosen by its sign (positive: relative dipole kT in GeV, negative: dipole alpha). In addition, the exponent n can be set by ‘NLO_SMEAR_POWER: <n>’.

### 6.7.4 Enforcing the renormalization scheme

Sherpa takes information about the renormalization scheme from the loop ME generator. The default scheme is MSbar, and this is assumed if no loop ME is provided, for example when integrated subtraction terms are computed by themselves. This can lead to inconsistencies when combining event samples, which may be avoided by setting ‘AMEGIC: { LOOP_ME_INIT: 1 }’.

### 6.7.5 Checking the pole cancellation

The following options are all sub-settings for ‘AMEGIC’ and can be specified as follows:

AMEGIC:
  <option>: <value>
  ...

To check whether the poles of the dipole subtraction and the interfaced one-loop matrix element cancel phase-space point by phase-space point, CHECK_POLES: 1 can be specified. In the same way, the finite contributions of the infrared subtraction and the one-loop matrix element can be checked by setting CHECK_FINITE: 1, and the Born matrix element via CHECK_BORN: 1. The accuracy to which the poles, finite parts and Born matrix elements are checked is set via CHECK_THRESHOLD: <accu>.

# 7 A posteriori scale variations

There are several ways to compute the effects of changing the scales and PDFs of any event produced by Sherpa. They can be computed explicitly, cf. Explicit scale variations, on-the-fly, cf. Scale and PDF variations (restricted to multiplicative factors), or reconstructed a posteriori. The latter method needs plenty of additional information in the event record and is (depending on the actual calculation) available in two formats:

## 7.1 A posteriori scale and PDF variations using the HepMC GenEvent Output

Events generated in a LO, LOPS, NLO, NLOPS, MEPS@LO, MEPS@NLO or MENLOPS calculation can be written out in the HepMC format including all information to carry out arbitrary scale variations a posteriori. For this feature HepMC of at least version 2.06 is necessary and both HEPMC_USE_NAMED_WEIGHTS: true and HEPMC_EXTENDED_WEIGHTS: true have to be enabled. Detailed instructions on how to use this information to construct the new event weight can be found here https://sherpa.hepforge.org/doc/ScaleVariations-Sherpa-2.2.0.pdf.
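For reference, the two HepMC switches mentioned above would appear in the run card as:

```yaml
# enable extended, named HepMC weights for a posteriori variations
HEPMC_USE_NAMED_WEIGHTS: true
HEPMC_EXTENDED_WEIGHTS: true
```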
## 7.2 A posteriori scale and PDF variations using the ROOT NTuple Output

Events generated at fixed-order LO and NLO can be stored in ROOT NTuples that allow arbitrary a posteriori scale and PDF variations, see Event output formats. An example for writing and reading in such ROOT NTuples can be found here: Production of NTuples. The internal ROOT Tree has the following branches:

• id: Event ID to identify correlated real sub-events.
• nparticle: Number of outgoing partons.
• E/px/py/pz: Momentum components of the partons.
• kf: Parton PDG code.
• weight: Event weight, if sub-event is treated independently.
• weight2: Event weight, if correlated sub-events are treated as single event.
• me_wgt: ME weight (w/o PDF), corresponds to ‘weight’.
• me_wgt2: ME weight (w/o PDF), corresponds to ‘weight2’.
• id1: PDG code of incoming parton 1.
• id2: PDG code of incoming parton 2.
• fac_scale: Factorisation scale.
• ren_scale: Renormalisation scale.
• x1: Bjorken-x of incoming parton 1.
• x2: Bjorken-x of incoming parton 2.
• x1p: x’ for I-piece of incoming parton 1.
• x2p: x’ for I-piece of incoming parton 2.
• nuwgt: Number of additional ME weights for loops and integrated subtraction terms.
• usr_wgt[nuwgt]: Additional ME weights for loops and integrated subtraction terms.

### 7.2.1 Computing (differential) cross sections of real correction events with statistical errors

Real correction events and their counter-events from subtraction terms are highly correlated and exhibit large cancellations. Although a treatment of sub-events as independent events leads to the correct cross section, the statistical error would be greatly overestimated. In order to get a realistic statistical error, sub-events belonging to the same event must be combined before being added to the total cross section or a histogram bin of a differential cross section. Since in general each sub-event comes with its own set of four-momenta, the following treatment becomes necessary:

1. An event here refers to a full real correction event that may contain several sub-events. All entries with the same id belong to the same event. Step 2 has to be repeated for each event.
2. Each sub-event must be checked separately whether it passes possible phase space cuts. Then for each observable add up weight2 of all sub-events that go into the same histogram bin. These sums x_id are the quantities to enter the actual histogram.
3. To compute statistical errors each bin must store the sum over all x_id and the sum over all x_id^2. The cross section in the bin is given by <x> = 1/N \sum x_id, where N is the number of events (not sub-events). The 1-\sigma statistical error for the bin is \sqrt{ (<x^2>-<x>^2)/(N-1) }.

Note: The main difference between weight and weight2 is that they refer to a different counting of events. While weight corresponds to each event entry (sub-event) counted separately, weight2 counts events as defined in step 1 of the above procedure. For NLO pieces other than the real correction, weight and weight2 are identical.

### 7.2.2 Computation of cross sections with new PDFs

Born and real pieces:

Notation: f_a(x_a) = PDF 1 applied on parton a, F_b(x_b) = PDF 2 applied on parton b. The total cross section weight is given by

weight = me_wgt f_a(x_a) F_b(x_b).

Loop piece and integrated subtraction terms:

The weights here have an explicit dependence on the renormalization and factorization scales. To take care of the renormalization scale dependence (other than via alpha_S) the weight w_0 is defined as

w_0 = me_wgt + usr_wgts[0] log((\mu_R^new)^2/(\mu_R^old)^2)
             + usr_wgts[1] 1/2 [log((\mu_R^new)^2/(\mu_R^old)^2)]^2.

To address the factorization scale dependence the weights w_1,...,w_8 are given by

w_i = usr_wgts[i+1] + usr_wgts[i+9] log((\mu_F^new)^2/(\mu_F^old)^2).

The full cross section weight can be calculated as

weight = w_0 f_a(x_a) F_b(x_b)
       + (f_a^1 w_1 + f_a^2 w_2 + f_a^3 w_3 + f_a^4 w_4) F_b(x_b)
       + (F_b^1 w_5 + F_b^2 w_6 + F_b^3 w_7 + F_b^4 w_8) f_a(x_a)

where

f_a^1 = f_a(x_a) (a=quark), \sum_q f_q(x_a) (a=gluon),
f_a^2 = f_a(x_a/x'_a)/x'_a (a=quark), \sum_q f_q(x_a/x'_a)/x'_a (a=gluon),
f_a^3 = f_g(x_a),
f_a^4 = f_g(x_a/x'_a)/x'_a.

The scale dependence coefficients usr_wgts[0] and usr_wgts[1] are normally obtained from the finite part of the virtual correction by removing renormalization terms and universal terms from dipole subtraction. This may be undesirable, especially when the loop provider splits up the calculation of the virtual correction into several pieces, like leading and sub-leading colour. In this case the loop provider should control the scale dependence coefficients, which can be enforced with the option ‘USR_WGT_MODE: false’.

Note: The loop provider must support this option or the scale dependence coefficients will be invalid!

# 8 Customization

Customizing Sherpa according to your needs.

Sherpa can be easily extended with certain user-defined tools. To this end, a corresponding C++ class must be written and compiled into an external library:

g++ -shared \
  -I`$SHERPA_PREFIX/bin/Sherpa-config --incdir` \
  `$SHERPA_PREFIX/bin/Sherpa-config --ldflags` \
  -o libMyCustomClass.so My_Custom_Class.C

This library can then be loaded in Sherpa at runtime with the option SHERPA_LDADD, e.g.:

SHERPA_LDADD:
- MyCustomClass

Several specific examples of features which can be extended in this way are listed in the following sections.

## 8.1 Exotic physics

It is possible to add your own models to Sherpa in a straightforward way. To illustrate, a simple example has been included in the directory Examples/Models/SM_ZPrime, showing how to add a Z-prime boson to the Standard Model. The important features of this example include:

• The SM_Zprime.C file. This file contains the initialisation of the Z-prime boson. The properties of the Z-prime are set here, such as mass, width, electromagnetic charge, spin etc.
• The Interaction_Model_SM_Zprime.C file. This file contains the definition of the Z-prime boson’s interactions. The right- and left-handed couplings to each of the fermions are set here.
• An example Makefile. This shows how to compile the sources above into a shared library.
• The line SHERPA_LDADD: SMZprime in the config file. This line tells Sherpa to load the extra libraries created from the *.C files above.
• The line MODEL: SMZprime in the config file. This line tells Sherpa which model to use for the run.
• The following lines in the config file:

  PARTICLE_DATA:
    32:
      Mass: 1000
      Width: 50

  These lines show how you can overrule the choices you made for the properties of the new particle in the SM_Zprime.C file. For more information on changing parameters in Sherpa, see Input structure and Parameters.
• The lines Zp_cpl_L: 0.3 and Zp_cpl_R: 0.6 in the config file set the couplings to left- and right-handed fermions.

To use this model, create the libraries for Sherpa to use by running

make

in this directory. Then run Sherpa as normal:

../../../bin/Sherpa

To implement your own model, copy these example files anywhere and modify them according to your needs.
Note: You don’t have to modify or recompile any part of Sherpa to use your model. As long as the SHERPA_LDADD parameter is specified as above, Sherpa will pick up your model automatically.

Furthermore note: New physics models with an existing implementation in FeynRules, cf. [Chr08] and [Chr09], can directly be invoked using Sherpa’s support for the UFO model format, see UFO Model Interface.

## 8.2 Custom scale setter

You can write a custom calculator to set the factorisation, renormalisation and resummation scales. It has to be implemented as a C++ class which derives from the Scale_Setter_Base base class and implements only the constructor and the Calculate method. Here is a snippet for a very simple one, which sets all three scales to the invariant mass of the two incoming partons.

#include "PHASIC++/Scales/Scale_Setter_Base.H"
#include "ATOOLS/Org/Message.H"

using namespace PHASIC;
using namespace ATOOLS;

namespace PHASIC {

  class Custom_Scale_Setter: public Scale_Setter_Base {
  public:

    Custom_Scale_Setter(const Scale_Setter_Arguments &args) :
      Scale_Setter_Base(args)
    {
      m_scale.resize(3); // by default three scales: fac, ren, res
                         // but you can add more if you need for COUPLINGS
      SetCouplings(); // the default value of COUPLINGS is "Alpha_QCD 1", i.e.
                      // m_scale[1] is used for running alpha_s
                      // (counting starts at zero!)
    }

    double Calculate(const std::vector<ATOOLS::Vec4D> &p,
                     const size_t &mode)
    {
      double muF=(p[0]+p[1]).Abs2();
      double muR=(p[0]+p[1]).Abs2();
      double muQ=(p[0]+p[1]).Abs2();

      m_scale[stp::fac] = muF;
      m_scale[stp::ren] = muR;
      m_scale[stp::res] = muQ;

      // Switch on debugging output for this class with:
      // Sherpa "OUTPUT=2[Custom_Scale_Setter|15]"
      DEBUG_FUNC("Calculated scales:");
      DEBUG_VAR(m_scale[stp::fac]);
      DEBUG_VAR(m_scale[stp::ren]);
      DEBUG_VAR(m_scale[stp::res]);

      return m_scale[stp::fac];
    }

  };

}

// Some plugin magic to make it available for SCALES=CUSTOM
DECLARE_GETTER(Custom_Scale_Setter,"CUSTOM",
               Scale_Setter_Base,Scale_Setter_Arguments);

Scale_Setter_Base *ATOOLS::Getter
<Scale_Setter_Base,Scale_Setter_Arguments,Custom_Scale_Setter>::
operator()(const Scale_Setter_Arguments &args) const
{
  return new Custom_Scale_Setter(args);
}

void ATOOLS::Getter<Scale_Setter_Base,Scale_Setter_Arguments,
                    Custom_Scale_Setter>::
PrintInfo(std::ostream &str,const size_t width) const
{
  str<<"Custom scale scheme";
}

If the code is compiled into a library called libCustomScale.so, then this library is loaded dynamically at runtime with the switch ‘SHERPA_LDADD: CustomScale’ either on the command line or in the run section, cf. Customization. This then allows the custom scale to be used like a built-in scale setter by specifying ‘SCALES: CUSTOM’ (cf. SCALES).

## 8.3 External one-loop ME

Sherpa includes only a very limited selection of one-loop matrix elements. To make full use of the implemented automated dipole subtraction, it is possible to link external one-loop codes to Sherpa in order to perform full calculations at QCD next-to-leading order. In general Sherpa can take care of any piece of the calculation except the one-loop matrix elements, i.e. the Born ME, the real correction, the real and integrated subtraction terms, as well as the phase space integration and PDF weights for hadron collisions.
Sherpa will provide sets of four-momenta and request for a specific parton level process the helicity and colour summed one-loop matrix element (more specific: the coefficients of the Laurent series in the dimensional regularization parameter epsilon up to the order epsilon^0). An example setup for interfacing such an external one-loop code, following the Binoth Les Houches interface proposal [Bin10a] of the 2009 Les Houches workshop, is provided in Zbb production. To use the LH-OLE interface, Sherpa has to be configured with --enable-lhole. The interface: • During an initialization run Sherpa stores setup information (schemes, model information etc.) and requests a list of parton-level one-loop processes that are needed for the NLO calculation. This information is stored in a file, by default called OLE_order.lh. The external one-loop code (OLE) should confirm these settings/requests and write out a file OLE_contract.lh. Both filenames can be customised using LHOLE_ORDERFILE: <order-file> and <LHOLE_CONTRACTFILE: <contract-file>. For the syntax of these files and more details see [Bin10a]. For Sherpa the output/input of the order/contract file is handled in LH_OLE_Communicator.[CH]. The actual interface is contained in LH_OLE_Interface.C. The parameters to be exchanged with the OLE are defined in the latter file via  lhfile.AddParameter(...);  and might require an update for specific OLE or processes. Per default, in addition to the standard options MatrixElementSquareType, CorrectionType, IRregularisation, AlphasPower, AlphaPower and OperationMode the masses and width of the W, Z and Higgs bosons and the top and bottom quarks are written out in free format, such that the respective OLE parameters can be easily synchronised. • At runtime the communication is performed via function calls. 
To allow Sherpa to call the external code, the functions

  void OLP_Start(const char * filename);
  void OLP_EvalSubProcess(int,double*,double,double,double*);

which are defined and called in LH_OLE_Interface.C, must be specified. For the keywords and possible data fields passed with these functions see [Bin10a]. The function OLP_Start(...) is called once when Sherpa is starting. The function OLP_EvalSubProcess(...) will be called many times for different subprocesses and momentum configurations.

The setup (cf. example Zbb production):

• The line Loop_Generator: LHOLE tells the code to use the interface for computing one-loop matrix elements.
• The switch SHERPA_LDADD has to be set to the appropriate library name (and path) of the one-loop generator.
• The IR regularisation scheme can be set via LHOLE_IR_REGULARISATION. Possible values are DRED (default) and CDR.
• By default, Sherpa generates phase-space points in the lab frame. If LHOLE_BOOST_TO_CMS: true is set, these phase-space points are boosted to the centre-of-mass system before they are passed to the OLE.
• The original BLHA interface does not allow for run-time parameter passing. While this is being discussed for an update of the accord, a workable solution is implemented for the use of GoSam and enabled through LHOLE_OLP: GoSam. The LHOLE_BOOST_TO_CMS option is then automatically active as well. This can, of course, be adapted for other one-loop programs if need be.
• Sherpa's internal analysis package can be used to generate a few histograms. For this, the option --enable-analysis must be included on the command line when Sherpa is configured, see ANALYSIS.

## 8.4 External RNG

To use an external Random Number Generator (RNG) in Sherpa, you need to provide an interface to your RNG in an external dynamic library. This library is then loaded at runtime and Sherpa replaces the internal RNG with the one provided.
In this case Sherpa will not attempt to set, save, read or restore the RNG. The corresponding code for the RNG interface is

#include "ATOOLS/Math/Random.H"

using namespace ATOOLS;

class Example_RNG: public External_RNG {
public:
  double Get() {
    // your code goes here ...
  }
};// end of class Example_RNG

// this makes Example_RNG loadable in Sherpa
DECLARE_GETTER(Example_RNG,"Example_RNG",External_RNG,RNG_Key);

External_RNG *ATOOLS::Getter<External_RNG,RNG_Key,Example_RNG>::
operator()(const RNG_Key &) const
{ return new Example_RNG(); }

// this eventually prints a help message
void ATOOLS::Getter<External_RNG,RNG_Key,Example_RNG>::
PrintInfo(std::ostream &str,const size_t) const
{ str<<"example RNG interface"; }

If the code is compiled into a library called libExampleRNG.so, then this library is loaded dynamically in Sherpa using the command ‘SHERPA_LDADD: ExampleRNG’, either on the command line or in ‘Sherpa.yaml’. If the library is bound at compile time, like e.g. in cmt, you may skip this step. Finally, Sherpa is instructed to retrieve the external RNG by specifying ‘EXTERNAL_RNG: Example_RNG’ on the command line or in ‘Sherpa.yaml’.

## 8.5 External PDF

To use an external PDF (not included in LHAPDF) in Sherpa, you need to provide an interface to your PDF in an external dynamic library. This library is then loaded at runtime, and all PDFs it provides become accessible within Sherpa.
The simplest C++ code to implement your interface looks as follows

#include "PDF/Main/PDF_Base.H"

using namespace PDF;

class Example_PDF: public PDF_Base {
public:
  void Calculate(double x,double Q2) {
    // calculate values x f_a(x,Q2) for all a
  }
  double GetXPDF(const ATOOLS::Flavour a) {
    // return x f_a(x,Q2)
  }
  virtual PDF_Base *GetCopy() { return new Example_PDF(); }
};// end of class Example_PDF

// this makes Example_PDF loadable in Sherpa
DECLARE_PDF_GETTER(Example_PDF_Getter);

PDF_Base *Example_PDF_Getter::operator()
(const Parameter_Type &args) const
{ return new Example_PDF(); }

// this eventually prints a help message
void Example_PDF_Getter::PrintInfo
(std::ostream &str,const size_t width) const
{ str<<"example PDF"; }

// this lets Sherpa initialize and unload the library
Example_PDF_Getter *p_get=NULL;

extern "C" void InitPDFLib()
{ p_get = new Example_PDF_Getter("ExamplePDF"); }

extern "C" void ExitPDFLib() { delete p_get; }

If the code is compiled into a library called libExamplePDFSherpa.so, then this library is loaded dynamically in Sherpa using ‘PDF_LIBRARY: ExamplePDFSherpa’, either on the command line or in ‘Sherpa.yaml’. If the library is bound at compile time, like e.g. in cmt, you may skip this step. It is now possible to list all accessible PDF sets by specifying ‘SHOW_PDF_SETS: 1’ on the command line. Finally, Sherpa is instructed to retrieve the external PDF by specifying ‘PDF_SET: ExamplePDF’ on the command line or in ‘Sherpa.yaml’.

## 8.6 Python Interface

Certain Sherpa classes and methods can be made available to the Python interpreter in the form of an extension module. This module can be loaded in Python and provides access to certain functionalities of the Sherpa event generator. It was designed specifically for the computation of matrix elements in Python (see Using the Python interface) and its features are currently limited to this purpose. In order to build the module, Sherpa must be configured with the option --enable-pyext. Running make then invokes the automated interface generator SWIG [Bea03] to create the Sherpa module using the Python C/C++ API. SWIG version 1.3.x or later is required for a successful build.

Problems might occur if more than one version of Python is present on the system, since automake currently does not always handle multiple Python installations properly. If you have multiple Python versions installed on your system, please set the PYTHON environment variable to the Python 2 executable via

  export PYTHON=<path-to-python2>

before executing the configure script. Alternatively, a possible workaround is to temporarily uninstall one version of Python, configure and build Sherpa, and then reinstall the previously removed version.

The following script is a minimal example that shows how to use the Sherpa module in Python. In order to load the Sherpa module, the location where it is installed must be added to the PYTHONPATH. There are several ways to do this; in this example the sys module is used. The sys module also makes it possible to pass the command-line arguments used to run the script directly to the initialization routine of Sherpa. The script can thus be executed using the normal command line options of Sherpa (see Command line options).
Furthermore it illustrates how exceptions that Sherpa might throw can be taken care of. If a run card is present in the directory where the script is executed, the initialization of the generator causes Sherpa to compute the cross sections for the processes specified in the run card. See Computing matrix elements for individual phase space points using the Python Interface for an example that shows how to use the Python interface to compute matrix elements, or Generate events using scripts to see how the interface can be used to generate events in Python. Note that if you have compiled Sherpa with MPI support, you need to import the mpi4py module using from mpi4py import MPI.

#!/usr/bin/python
import sys
sys.path.append('<sherpa-prefix>/lib/<your-python-version>/site-packages/')
import Sherpa

# set up the generator
Generator=Sherpa.Sherpa(len(sys.argv),sys.argv)

try:
    # initialize the generator, pass command line arguments to initialization routine
    Generator.InitializeTheRun()
# catch exceptions
except Sherpa.Exception as exc:
    print exc

# 9 Examples

Some example set-ups are included in Sherpa, in the <prefix>/share/SHERPA-MC/Examples/ directory. These may be useful to new users to practice with, or as templates for creating your own Sherpa run-cards. In this section, we will look at some of the main features of these examples.

## 9.1 Vector boson + jets production

To change any of the following LHC examples to production at different collider energies or beam types, e.g. proton anti-proton at the Tevatron, simply change the beam settings accordingly:

BEAMS: [2212, -2212]
BEAM_ENERGIES: 980

### 9.1.1 W+jets production

This is an example setup for inclusive W production at hadron colliders. The inclusive process is calculated at next-to-leading order accuracy matched to the parton shower using the MC@NLO prescription detailed in [Hoe11].
The next few higher jet multiplicities, calculated at next-to-leading order as well, are merged into the inclusive sample using the MEPS@NLO method, an extension of the CKKW method to NLO, as described in [Hoe12a] and [Geh12]. Finally, even higher multiplicities, calculated at leading order, are merged on top of that. A few more things to note are detailed below the example.

# set up two proton beams, each at 6.5 TeV
BEAMS: 2212
BEAM_ENERGIES: 6500

# matrix-element calculation
ME_GENERATORS:
- Comix
- Amegic
- OpenLoops

# optional: use a custom jet criterion
#SHERPA_LDADD: MyJetCriterion
#JET_CRITERION: FASTJET[A:antikt,R:0.4,y:5]

# exclude tau (15) from (massless) lepton container (90)
PARTICLE_DATA:
  15:
    Massive: 1

# pp -> W[lv]+jets
PROCESSES:
- Process: 93 93 -> 90 91 93{4}
  Order: {QCD: 0, EW: 2}
  CKKW: 20
  # set up MC@NLO final-state multiplicities
  2->2-4:
    NLO_Mode: MC@NLO
    NLO_Order: {QCD: 1, EW: 0}
    ME_Generator: Amegic
    RS_ME_Generator: Comix
    Loop_Generator: OpenLoops
  # make calculation of higher final-state multiplicities faster
  2->4-6:
    Integration_Error: 0.05

SELECTORS:
# Safety cuts to avoid PDF calls with muF < 1 GeV
- [Mass, 11, -12, 1.0, E_CMS]
- [Mass, 13, -14, 1.0, E_CMS]
- [Mass, -11, 12, 1.0, E_CMS]
- [Mass, -13, 14, 1.0, E_CMS]

Things to notice:

• The Order in the process definition in a multi-jet merged setup defines the order of the core process (here 93 93 -> 90 91, with two electroweak couplings). The additional strong couplings for multi-jet production are implicitly understood.
• The settings necessary for NLO accuracy are restricted to the 2->2,3,4 processes using the 2->2-4 key below the ‘# set up MC@NLO ...’ comment. The example can be converted into a simple MENLOPS setup by using 2->2 instead, or into an MEPS setup by removing these lines altogether. Thus one can study the effect of incorporating higher-order matrix elements.
• The number of additional LO jets can be varied by changing the integer within the curly braces in the Process definition, which gives the maximum number of additional partons in the matrix elements.
• OpenLoops is used here as the provider of the one-loop matrix elements for the respective multiplicities.
• Tau leptons are set massive in order to exclude them from the massless lepton container (90).
• As both Comix and Amegic are specified as matrix-element generators to be used, Amegic has to be selected for all MC@NLO multiplicities using ME_Generator: Amegic. Additionally, we specify RS_ME_Generator: Comix such that the subtracted real-emission part of the NLO matrix elements is calculated more efficiently with Comix instead of Amegic. This combination is currently the only one supported for NLO-matched/merged setups.

The jet criterion used to define the matrix-element multiplicity in the context of multijet merging can be supplied by the user. As an example, the source code file ./Examples/V_plus_Jets/LHC_WJets/My_JetCriterion.C provides such an alternative jet criterion. It can be compiled using SCons by executing scons in that directory (edit the SConstruct file accordingly). The newly created library is linked at run time using the SHERPA_LDADD flag. The new jet criterion is then invoked via JET_CRITERION.

### 9.1.2 Z+jets production

This is an example setup for inclusive Z production at hadron colliders. The inclusive process is calculated at next-to-leading order accuracy matched to the parton shower using the MC@NLO prescription detailed in [Hoe11]. The next few higher jet multiplicities, calculated at next-to-leading order as well, are merged into the inclusive sample using the MEPS@NLO method, an extension of the CKKW method to NLO, as described in [Hoe12a] and [Geh12]. Finally, even higher multiplicities, calculated at leading order, are merged on top of that.
A few things to note are detailed below the previous W+jets production example; they apply to this example as well.

# Sherpa configuration for Z+Jets production

# set up beams for LHC run 2
BEAMS: 2212
BEAM_ENERGIES: 6500

# matrix-element calculation
ME_GENERATORS:
- Comix
- Amegic
- OpenLoops

# pp -> Z[ee]+jets
PROCESSES:
- Process: 93 93 -> 11 -11 93{4}
  Order: {QCD: 0, EW: 2}
  CKKW: 20
  # set up NLO+PS final-state multiplicities
  2->2-4:
    NLO_Mode: MC@NLO
    NLO_Order: {QCD: 1, EW: 0}
    ME_Generator: Amegic
    RS_ME_Generator: Comix
    Loop_Generator: OpenLoops
  # make integration of higher final-state multiplicities faster
  2->4-6:
    Integration_Error: 0.05

SELECTORS:
- [Mass, 11, -11, 66, E_CMS]

### 9.1.3 W+bb production

This example is currently broken. Please contact the Authors for more information.

### 9.1.4 Zbb production

BEAMS: 2212
BEAM_ENERGIES: 6500

# general settings
EVENTS: 1M

# me generator settings
ME_GENERATORS: [Comix, Amegic, LHOLE]

HARD_DECAYS:
  Enabled: true
  Mass_Smearing: 0
  Channels:
    23 -> 11 -11: {Status: 2}
    23 -> 13 -13: {Status: 2}

PARTICLE_DATA:
  5:
    Massive: true
    Mass: 4.75  # consistent with MSTW 2008 nf 4 set
  23:
    Width: 0
    Stable: 0

MI_HANDLER: None
FRAGMENTATION: None

SCALES: VAR{H_T2+sqr(91.188)}

PDF_LIBRARY: MSTW08Sherpa
PDF_SET: mstw2008nlo_nf4

PROCESSES:
- Process: 93 93 -> 23 5 -5
  NLO_Mode: MC@NLO
  NLO_Order: {QCD: 1, EW: 0}
  ME_Generator: Amegic
  RS_ME_Generator: Comix
  Loop_Generator: LHOLE
  Order: {QCD: Any, EW: 1}

SELECTORS:
- FastjetFinder:
    Algorithm: antikt
    N: 2
    PTMin: 5.0
    DR: 0.5
    EtaMax: 5
    Nb: 2

Things to notice:

• The matrix elements are interfaced via the Binoth Les Houches interface proposal [Bin10a], [Ali13], cf. External one-loop ME.
• The Z-boson is stable in the hard matrix elements. It is decayed using the internal decay module, indicated by the settings ‘HARD_DECAYS: Enabled: true’ and ‘PARTICLE_DATA: 23: Stable: 0’.
• FastJet is used to regularize the hard cross section.
Note that Sherpa must be configured with the option ‘--enable-fastjet’, see Jet selectors. We require two b-jets, indicated by ‘Nb: 2’ at the end of the ‘FastjetFinder’ options.
• Four-flavour PDFs are used to comply with the calculational setup.

## 9.2 Jet production

### 9.2.1 Jet production

To change any of the following LHC examples to production at the Tevatron, simply change the beam settings to

BEAMS: [2212, -2212]
BEAM_ENERGIES: 980

#### 9.2.1.1 MC@NLO setup for dijet and inclusive jet production

This is an example setup for dijet and inclusive jet production at hadron colliders at next-to-leading order precision matched to the parton shower using the MC@NLO prescription detailed in [Hoe11] and [Hoe12b]. A few things to note are detailed below the example.

# collider setup
BEAMS: 2212
BEAM_ENERGIES: 6500.0

SCALES: FASTJET[A:antikt,PT:20.0,ET:0,R:0.4,M:0]{0.0625*H_T2}{0.0625*H_T2}{0.25*PPerp2(p[3])}

ME_GENERATORS: [Amegic, OpenLoops]

PROCESSES:
- Process: 93 93 -> 93 93
  NLO_Mode: MC@NLO
  NLO_Order: {QCD: 1, EW: 0}
  Loop_Generator: OpenLoops
  Order: {QCD: 2, EW: 0}

SELECTORS:
- FastjetFinder:
    Algorithm: antikt
    N: 2
    PTMin: 10.0
    ETMin: 0.0
    DR: 0.4
- FastjetFinder:
    Algorithm: antikt
    N: 1
    PTMin: 20.0
    ETMin: 0.0
    DR: 0.4

Things to notice:

• Asymmetric cuts are implemented (relevant to the RS-piece of an MC@NLO calculation) by requiring at least two jets with pT > 10 GeV, one of which has to have pT > 20 GeV.
• Both the factorisation and renormalisation scales are set to the above defined scale factors times a quarter of the scalar sum of the transverse momenta of all anti-kt jets (R = 0.4, pT > 20 GeV) found on the ME-level before any parton-shower emission. See SCALES for details on scale setters.
• The resummation scale, which sets the maximum scale of the additional emission to be resummed by the parton shower, is set to the above defined resummation scale factor times half of the transverse momentum of the softer of the two jets present at Born level.
• The external generator OpenLoops provides the one-loop matrix elements.
• The NLO_Mode is set to MC@NLO.

#### 9.2.1.2 MEPS setup for jet production

BEAMS: 2212
BEAM_ENERGIES: 6500

PROCESSES:
- Process: 93 93 -> 93 93 93{0}
  Order: {QCD: Any, EW: 0}
  CKKW: 20
  Integration_Error: 0.02

SELECTORS:
- NJetFinder:
    N: 2
    PTMin: 20.0
    ETMin: 0.0
    R: 0.4
    Exp: -1

Things to notice:

• Order is set to ‘{QCD: Any, EW: 0}’. This ensures that all final-state jets are produced via the strong interaction.
• An NJetFinder selector is used to set a resolution criterion for the two jets of the core process. This is necessary because the ‘CKKW’ tag does not apply any cuts to the core process, but only to the extra-jet matrix elements, see Multijet merged event generation with Sherpa.

### 9.2.2 Jets at lepton colliders

This section contains two setups describing jet production at LEP I, either through multijet merging at leading-order accuracy or at next-to-leading-order accuracy.

#### 9.2.2.1 MEPS setup for ee->jets

This example shows a LEP set-up, with electrons and positrons colliding at a centre-of-mass energy of 91.2 GeV.

BEAMS: [11, -11]
BEAM_ENERGIES: 45.6

ALPHAS(MZ): 0.1188
ORDER_ALPHAS: 1

PROCESSES:
- Process: 11 -11 -> 93 93 93{3}
  CKKW: pow(10,-2.25/2.00)*E_CMS
  Order: {QCD: Any, EW: 2}

Things to notice:

• The running of alpha_s is set to leading order, and the value of alpha_s at the Z-mass is set.
• Note that initial-state radiation is enabled by default. See ISR parameters on how to disable it if you want to evaluate the (unphysical) case where the energy of the incoming leptons is fixed.

#### 9.2.2.2 MEPS@NLO setup for ee->jets

This example expands upon the above setup, elevating its description of hard jet production to next-to-leading order.
# collider setup
BEAMS: [11, -11]
BEAM_ENERGIES: 45.6

TAGS:
  # tags for process setup
  YCUT: 2.0
  # tags for ME generators
  LOOPGEN0: Internal
  LOOPGEN1: <my-loop-gen-for-3j>
  LOOPGEN2: <my-loop-gen-for-4j>
  LOOPMGEN: <my-loop-gen-for-massive-2j>

# settings for ME generators
ME_GENERATORS:
- Comix
- Amegic
- $(LOOPGEN0)
- $(LOOPGEN1)
- $(LOOPGEN2)
- $(LOOPMGEN)
AMEGIC: {INTEGRATOR: 4}

# model parameters
MODEL: SM
ALPHAS(MZ): 0.118
PARTICLE_DATA: {5: {Massive: true}}

PROCESSES:
- Process: 11 -11 -> 93 93 93{3}
  CKKW: pow(10,-$(YCUT)/2.00)*E_CMS
  Order: {QCD: Any, EW: 2}
  RS_Enhance_Factor: 10
  2->2: { Loop_Generator: $(LOOPGEN0) }
  2->3: { Loop_Generator: $(LOOPGEN1) }
  2->4: { Loop_Generator: $(LOOPGEN2) }
  2->2-4:
    NLO_Mode: MC@NLO
    NLO_Order: {QCD: 1, EW: 0}
    ME_Generator: Amegic
    RS_ME_Generator: Comix
- Process: 11 -11 -> 5 -5 93{3}
  CKKW: pow(10,-$(YCUT)/2.00)*E_CMS
  Order: {QCD: Any, EW: 2}
  Loop_Generator: $(LOOPMGEN)
  RS_Enhance_Factor: 10
  2->2:
    NLO_Mode: MC@NLO
    NLO_Order: {QCD: 1, EW: 0}
    ME_Generator: Amegic
    RS_ME_Generator: Comix
- Process: 11 -11 -> 5 5 -5 -5 93{1}
  CKKW: pow(10,-$(YCUT)/2.00)*E_CMS
  Order: {QCD: Any, EW: 2}
  Cut_Core: 1


Things to notice:

• The b-quark mass has been enabled for the matrix element calculation (the default is massless), because it is not negligible at LEP energies.
• The b b-bar and b b b-bar b-bar processes are specified separately, because the ‘93’ particle container contains only partons set massless in the matrix element calculation, see Particle containers.
• Model parameters can be modified in the config file; in this example, the value of alpha_s at the Z mass is set.
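As a quick sanity check of the merging cut, the CKKW expression pow(10,-$(YCUT)/2.00)*E_CMS can be evaluated by hand; with YCUT: 2.0 at E_CMS = 91.2 GeV it corresponds to a merging scale of about 9.1 GeV. The snippet below is standalone Python that merely re-evaluates the run-card formula, it does not call Sherpa:

```python
# re-evaluate the run-card merging-scale formula: pow(10, -YCUT/2) * E_CMS
def merging_scale(ycut, e_cms):
    return 10.0 ** (-ycut / 2.0) * e_cms

# MEPS@NLO setup above: YCUT = 2.0 at LEP I, i.e. 10**-1 * 91.2 GeV
print(merging_scale(2.0, 91.2))

# LO MEPS setup of the previous section, with exponent -2.25/2.00
print(merging_scale(2.25, 91.2))
```

Lowering YCUT moves more of the event description into the matrix elements; raising it hands more over to the parton shower.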

## 9.3 Higgs boson + jets production

### 9.3.1 H production in gluon fusion with interference effects

This is a setup for inclusive Higgs production through gluon fusion at hadron colliders. The inclusive process is calculated at next-to-leading order accuracy, including all interference effects between Higgs-boson production and the SM gg->yy background. The corresponding matrix elements are taken from [Ber02] and [Dix13].

# collider parameters
BEAMS: 2212
BEAM_ENERGIES: 6500

# generator parameters
EVENTS: 1M
EVENT_GENERATION_MODE: Weighted
AMEGIC: {ALLOW_MAPPING: 0}
ME_GENERATORS: [Amegic, Higgs]
SCALES: VAR{Abs2(p[2]+p[3])}

# physics parameters
PARTICLE_DATA:
4:  {Yukawa: 1.42}
5:  {Yukawa: 4.92}
15: {Yukawa: 1.777}
EW_SCHEME: 3
RUN_MASS_BELOW_POLE: 1

PROCESSES:
- Process: 93 93 -> 22 22
NLO_Mode: Fixed_Order
Order: {QCD: Any, EW: 2}
NLO_Order: {QCD: 1, EW: 0}
Enable_MHV: 12
Loop_Generator: Higgs
Integrator: PS2
RS_Integrator: PS3

SELECTORS:
- HiggsFinder:
PT1: 40
PT2: 30
Eta: 2.5
MassRange: [100, 150]
- [IsolationCut, 22, 0.4, 2, 0.025]


Things to notice:

• This calculation is at fixed-order NLO.
• All scales, i.e. the factorisation, renormalisation and resummation scales are set to the invariant mass of the di-photon pair.
• Dedicated phase space generators are used by setting ‘Integrator: PS2’ and ‘RS_Integrator: PS3’, cf. Integrator.

To compute the interference contribution only, as was done in [Dix13], one can set ‘HIGGS_INTERFERENCE_ONLY: 1’. By default, all partonic processes are included in this simulation; however, it is sensible to disable quark initial states at leading order. This is achieved by setting ‘HIGGS_INTERFERENCE_MODE: 3’.

One can also simulate the production of a spin-2 massive graviton in Sherpa using the same input card by setting ‘HIGGS_INTERFERENCE_SPIN: 2’. Only the massive graviton case is implemented, specifically the scenario where k_q=k_g. NLO corrections are approximated, as the gg->X->yy and qq->X->yy loop amplitudes have not been computed so far.
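Collecting the interference-related switches discussed above into a single run-card fragment (values as quoted in the text):

```yaml
# compute only the interference contribution, as in [Dix13]
HIGGS_INTERFERENCE_ONLY: 1
# disable quark initial states at leading order
HIGGS_INTERFERENCE_MODE: 3
# optional: massive spin-2 graviton instead of the Higgs boson (k_q = k_g)
#HIGGS_INTERFERENCE_SPIN: 2
```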

### 9.3.2 H+jets production in gluon fusion

This is an example setup for inclusive Higgs production through gluon fusion at hadron colliders used in [Hoe14a]. The inclusive process is calculated at next-to-leading order accuracy matched to the parton shower using the MC@NLO prescription detailed in [Hoe11]. The next few higher jet multiplicities, calculated at next-to-leading order as well, are merged into the inclusive sample using the MEPS@NLO method – an extension of the CKKW method to NLO – as described in [Hoe12a] and [Geh12]. Finally, even higher multiplicities, calculated at leading order, are merged on top of that. A few things to note are detailed below the example.

# collider parameters
BEAMS: 2212
BEAM_ENERGIES: 6500

# settings for ME generators
ME_GENERATORS: [Comix, Amegic, Internal, MCFM]

# settings for hard decays
HARD_DECAYS:
Enabled: true
Channels:
25 -> 22 22: {Status: 2}
Apply_Branching_Ratios: false

# model parameters
MODEL: HEFT
PARTICLE_DATA:
25: {Mass: 125, Width: 0}

PROCESSES:
- Process: 93 93 -> 25 93{2}
Order: {QCD: Any, EW: 0, HEFT: 1}
CKKW: 30
2->1-2: { Loop_Generator: Internal }
2->3:   { Loop_Generator: MCFM }
2->1-3:
NLO_Mode: MC@NLO
NLO_Order: {QCD: 1, EW: 0}
ME_Generator: Amegic
RS_ME_Generator: Comix


Things to notice:

• The example can be converted into a simple MENLOPS setup by replacing 2->1-3 with 2->1, or into an MEPS setup with 2->0, to study the effect of incorporating higher-order matrix elements.
• Providers of the one-loop matrix elements for the respective multiplicities are set using Loop_Generator. For the two simplest cases Sherpa can provide it internally. Additionally, MCFM is interfaced for the H+2jet process, cf. MCFM interface.
• To enable the Higgs to decay to a pair of photons, for example, the hard decays are invoked. For details on the hard decay handling and how to enable specific decay modes see Hard decays.

### 9.3.3 H+jets production in gluon fusion with finite top mass effects

This example is similar to H+jets production in gluon fusion, but takes the finite top-quark mass into account as described in [Bu03] for all merged jet multiplicities. Mass effects in the virtual corrections are treated in an approximate way. For the tree-level contributions, including the real-emission corrections, no approximations are made concerning the mass effects.

# collider parameters
BEAMS: 2212
BEAM_ENERGIES: 6500

# settings for ME generators
ME_GENERATORS: [Amegic, Internal, OpenLoops]

# settings for hard decays
HARD_DECAYS:
Enabled: true
Channels:
25 -> 22 22: {Status: 2}
Apply_Branching_Ratios: false

# model parameters
MODEL: HEFT
PARTICLE_DATA:
25: {Mass: 125, Width: 0}

# finite top mass effects
KFACTOR: GGH
OL_IGNORE_MODEL: true
OL_PARAMETERS:
preset: 2
allowed_libs: pph2,pphj2,pphjj2
psp_tolerance: 1.0e-7

PROCESSES:
- Process: 93 93 -> 25 93{1}
Order: {QCD: 0, EW: 0, HEFT: 1}
CKKW: 30
Loop_Generator: Internal
2->1-2:
NLO_Mode: MC@NLO
NLO_Order: {QCD: 1, EW: 0}


Things to notice:

• One-loop matrix elements from OpenLoops [Cas11] are used in order to correct for top mass effects. Sherpa must therefore be compiled with OpenLoops support to run this example. Also, the OpenLoops process libraries listed in the run card must be installed.
• The maximum jet multiplicities that can be merged in this setup are limited by the availability of loop matrix elements used to correct for finite top mass effects.
• The comments in H+jets production in gluon fusion apply here as well.

### 9.3.4 H+jets production in associated production

This section collects example setups for Higgs boson production in association with vector bosons

#### 9.3.4.1 Higgs production in association with W bosons and jets

This is an example setup for Higgs boson production in association with a W boson and jets, as used in [Hoe14b]. It uses the MEPS@NLO method to merge pp->WH and pp->WHj at next-to-leading order accuracy and adds pp->WHjj at leading order. The Higgs boson is decayed to W-pairs and all W decay channels resulting in electrons or muons are accounted for, including those with intermediate taus.

# collider setup
BEAMS: 2212
BEAM_ENERGIES: 6500

ME_GENERATORS: [Comix, Amegic, OpenLoops]

# define custom particle container for easy process declaration
PARTICLE_CONTAINERS:
900: {Name: W, Flavours: [24, -24]}
901: {Name: lightflavs, Flavours: [1, -1, 2, -2, 3, -3, 4, -4, 21]}
NLO_CSS_DISALLOW_FLAVOUR: 5

# particle properties (ME widths need to be zero if external)
PARTICLE_DATA:
24: {Width: 0}
25: {Mass: 125.5, Width: 0}
15: {Stable: 0, Massive: true}

# hard decays setup, specify allowed decay channels, i.e.:
# h->Wenu, h->Wmunu, h->Wtaunu, W->enu, W->munu, W->taunu, tau->enunu, tau->mununu + cc
HARD_DECAYS:
Enabled: true
Channels:
25 -> 24 -12 11: {Status: 2}
25 -> 24 -14 13: {Status: 2}
25 -> 24 -16 15: {Status: 2}
25 -> -24 12 -11: {Status: 2}
25 -> -24 14 -13: {Status: 2}
25 -> -24 16 -15: {Status: 2}
24 -> 12 -11: {Status: 2}
24 -> 14 -13: {Status: 2}
24 -> 16 -15: {Status: 2}
-24 -> -12 11: {Status: 2}
-24 -> -14 13: {Status: 2}
-24 -> -16 15: {Status: 2}
15 -> 16 -12 11: {Status: 2}
15 -> 16 -14 13: {Status: 2}
-15 -> -16 12 -11: {Status: 2}
-15 -> -16 14 -13: {Status: 2}
Decay_Tau: 1
Apply_Branching_Ratios: 0

PROCESSES:
- Process: 901 901 -> 900 25 901{2}
Order: {QCD: 0, EW: 2}
CKKW: 30
2->2-3:
NLO_Mode: MC@NLO
NLO_Order: {QCD: 1, EW: 0}
ME_Generator: Amegic
RS_ME_Generator: Comix
Loop_Generator: OpenLoops


Things to notice:

• Two custom particle containers, cf. Particle containers, have been declared, facilitating the process declaration.
• As the bottom quark is treated as massless by default, a five-flavour calculation is performed. The particle container ensures, however, that no external bottom quarks are considered, in order to resolve the overlap with single-top and top-pair processes.
• OpenLoops [Cas11] is used as the provider of the one-loop matrix elements.
• To enable the decays of the Higgs, W boson and tau lepton the hard decay handler is invoked. For details on the hard decay handling and how to enable specific decay modes see Hard decays.

#### 9.3.4.2 Higgs production in association with Z bosons and jets

This is an example setup for Higgs boson production in association with a Z boson and jets, as used in [Hoe14b]. It uses the MEPS@NLO method to merge pp->ZH and pp->ZHj at next-to-leading order accuracy and adds pp->ZHjj at leading order. The Higgs boson is decayed to W-pairs. All W and Z bosons are allowed to decay into electrons, muons or tau leptons. The tau leptons are then allowed to decay into all possible partonic channels, leptonic and hadronic, to allow for all possible trilepton signatures, unavoidably producing two and four lepton events as well.

# collider setup
BEAMS: 2212
BEAM_ENERGIES: 6500

ME_GENERATORS: [Comix, Amegic, OpenLoops]

# define custom particle container for easy process declaration
PARTICLE_CONTAINERS:
901: {Name: lightflavs, Flavours: [1, -1, 2, -2, 3, -3, 4, -4, 21]}
NLO_CSS_DISALLOW_FLAVOUR: 5

# particle properties (ME widths need to be zero if external)
PARTICLE_DATA:
23: {Width: 0}
25: {Mass: 125.5, Width: 0}
15: {Stable: 0, Massive: true}

# hard decays setup, specify allowed decay channels
# h->Wenu, h->Wmunu, h->Wtaunu, W->enu, W->munu, W->taunu,
# Z->ee, Z->mumu, Z->tautau, tau->any + cc
HARD_DECAYS:
Enabled: true
Channels:
25 -> 24 -12 11: {Status: 2}
25 -> 24 -14 13: {Status: 2}
25 -> 24 -16 15: {Status: 2}
25 -> -24 12 -11: {Status: 2}
25 -> -24 14 -13: {Status: 2}
25 -> -24 16 -15: {Status: 2}
24 -> 12 -11: {Status: 2}
24 -> 14 -13: {Status: 2}
24 -> 16 -15: {Status: 2}
23 -> 15 -15: {Status: 2}
-24 -> -12 11: {Status: 2}
-24 -> -14 13: {Status: 2}
-24 -> -16 15: {Status: 2}
15 -> 16 -12 11: {Status: 2}
15 -> 16 -14 13: {Status: 2}
-15 -> -16 12 -11: {Status: 2}
-15 -> -16 14 -13: {Status: 2}
15 -> 16 -2 1: {Status: 2}
15 -> 16 -4 3: {Status: 2}
-15 -> -16 2 -1: {Status: 2}
-15 -> -16 4 -3: {Status: 2}
Decay_Tau: 1
Apply_Branching_Ratios: 0

PROCESSES:
- Process: 901 901 -> 23 25 901{2}
Order: {QCD: 0, EW: 2}
CKKW: 30
2->2-3:
NLO_Mode: MC@NLO
NLO_Order: {QCD: 1, EW: 0}
ME_Generator: Amegic
RS_ME_Generator: Comix
Loop_Generator: OpenLoops


Things to notice:

• A custom particle container, cf. Particle containers, has been declared, facilitating the process declaration.
• As the bottom quark is treated as massless by default, a five-flavour calculation is performed. The particle container ensures, however, that no external bottom quarks are considered, in order to resolve the overlap with single-top and top-pair processes.
• OpenLoops [Cas11] is used as the provider of the one-loop matrix elements.
• To enable the decays of the Higgs, W and Z bosons and tau lepton the hard decay handler is invoked. For details on the hard decay handling and how to enable specific decay modes see Hard decays.

#### 9.3.4.3 Higgs production in association with lepton pairs

This is an example setup for Higgs boson production in association with an electron-positron pair using the MC@NLO technique. The Higgs boson is decayed to b-quark pairs. Contrary to the previous examples this setup does not use on-shell intermediate vector bosons in its matrix element calculation.

BEAMS: 2212
BEAM_ENERGIES: 6500

ME_GENERATORS: [Comix, Amegic, OpenLoops]

SCALES: VAR{Abs2(p[2]+p[3]+p[4])}

PARTICLE_DATA:
5: {Massive: true}
15: {Massive: true}
25: {Stable: 0, Width: 0.0}

# hard decays setup, specify allowed decay channels h->bb
HARD_DECAYS:
Enabled: true
Channels:
25 -> 5 -5: {Status: 2}
Apply_Branching_Ratios: false

PROCESSES:
- Process: 93 93 -> 11 -11 25
Order: {QCD: 0, EW: 3}
NLO_Mode: MC@NLO
NLO_Order: {QCD: 1, EW: 0}
Loop_Generator: OpenLoops
ME_Generator: Amegic
RS_ME_Generator: Comix
Integration_Error: 0.1


Things to notice:

• The central scale is set to the invariant mass of the Higgs boson and the lepton pair.
• As the bottom quark is set to be treated massively, a four flavour calculation is performed.
• OpenLoops [Cas11] is used as the provider of the one-loop matrix elements.
• To enable the decays of the Higgs the hard decay handler is invoked. For details on the hard decay handling and how to enable specific decay modes see Hard decays.
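The scale choice ‘SCALES: VAR{Abs2(p[2]+p[3]+p[4])}’ in this setup is the squared invariant mass of the three outgoing particles. A minimal Python sketch of the ‘Abs2’ computation (plain Python with illustrative momenta, not the Sherpa interpreter):

```python
def abs2(p):
    # Squared invariant mass of a four-momentum (E, px, py, pz)
    E, px, py, pz = p
    return E * E - px * px - py * py - pz * pz

def add(*ps):
    # Component-wise sum of four-momenta
    return tuple(sum(c) for c in zip(*ps))

# Illustrative final-state momenta (GeV): e-, e+, Higgs
p2 = (50.0, 30.0, 0.0, 40.0)
p3 = (50.0, -30.0, 0.0, 40.0)
p4 = (150.0, 0.0, 0.0, -80.0)
mu2 = abs2(add(p2, p3, p4))  # central scale squared, here (250 GeV)^2
```

For these momenta the total momentum is (250, 0, 0, 0), so the central scale is 250 GeV.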

### 9.3.5 Associated t anti-t H production at the LHC

This set-up illustrates the interface to an external loop matrix element generator as well as the possibility of specifying hard decays for particles emerging from the hard interaction. The process generated is the production of a Higgs boson in association with a top quark pair from two light partons in the initial state. Each top quark decays into an (anti-)bottom quark and a W boson. The W bosons in turn decay to either quarks or leptons.

BEAMS: 2212
BEAM_ENERGIES: 6500

ME_GENERATORS: [Comix, Amegic, OpenLoops]
# to use a dedicated loop library you can run scons in this Example directory and replace OpenLoops with TTH throughout this run card

SCALES: VAR{sqr(175+125/2)}

PARTICLE_DATA:
5: {Yukawa: 4.92}
6: {Stable: 0, Width: 0.0}
24: {Stable: 0}
25: {Stable: 0, Width: 0.0}

# hard decays setup, specify allowed decay channels h->bb
HARD_DECAYS:
Enabled: true
Channels:
25 -> 5 -5: {Status: 2}
Apply_Branching_Ratios: false

PROCESSES:
- Process: 93 93 -> 25 6 -6
Order: {QCD: 2, EW: 1}
NLO_Mode: MC@NLO
NLO_Order: {QCD: 1, EW: 0}
Loop_Generator: OpenLoops
ME_Generator: Amegic
RS_ME_Generator: Comix
Integration_Error: 0.1


Things to notice:

• The virtual matrix elements can be interfaced either from OpenLoops or from [Rei01], [Rei01a], [Daw02], [Daw03]. In the latter case, the shared library necessary for running this setup is built using scons -f SConstruct-TTH.
• The top quarks are stable in the hard matrix elements. They are decayed using the internal decay module, indicated by the settings in the ‘HARD_DECAYS’ and ‘PARTICLE_DATA’ blocks.
• Widths of top and Higgs are set to 0 for the matrix element calculation. A kinematical Breit-Wigner distribution will be imposed a-posteriori in the decay module.
• The Yukawa coupling of the b-quark has been set to a non-zero value to allow the H->bb decay channel, despite keeping the b-quark massless for the five-flavour-scheme calculation.
• Higgs branching ratios are not included in the cross section (‘Apply_Branching_Ratios: false’), as they would be LO only and would not include loop-induced decays.

## 9.4 Top quark (pair) + jets production

### 9.4.1 Top quark pair production

# collider setup
BEAMS: 2212
BEAM_ENERGIES: 6500

# scales
EXCLUSIVE_CLUSTER_MODE: 1
CORE_SCALE: TTBar

# me generator settings
ME_GENERATORS: [Comix, Amegic, OpenLoops]

# decays
HARD_DECAYS:
Enabled: true
Channels:
24 -> 2 -1: {Status: 0}
24 -> 4 -3: {Status: 0}
-24 -> -2 1: {Status: 0}
-24 -> -4 3: {Status: 0}

# particle properties (width of external particles of the MEs must be zero)
PARTICLE_DATA:
6: {Width: 0}

PROCESSES:
- Process: 93 93 -> 6 -6 93{3}
Order: {QCD: 2, EW: 0}
CKKW: 20
2->2-3:
NLO_Mode: MC@NLO
NLO_Order: {QCD: 1, EW: 0}
ME_Generator: Amegic
RS_ME_Generator: Comix
Loop_Generator: OpenLoops
2->5-8:
Max_N_Quarks: 6
Integration_Error: 0.05


Things to notice:

• We use OpenLoops to compute the virtual corrections [Cas11].
• We match matrix elements and parton showers using the MC@NLO technique for massive particles, as described in [Hoe13].
• A non-default METS core scale setter is used, cf. METS scale setting with multiparton core processes
• We enable top decays through the internal decay module using ‘HARD_DECAYS:Enabled: true’.

### 9.4.2 Production of a top quark pair in association with a W-boson

# collider setup
BEAMS: 2212
BEAM_ENERGIES: 6500

# me generator settings
ME_GENERATORS: [Comix, Amegic, OpenLoops]

# settings for hard decays
HARD_DECAYS:
Enabled: true
Channels:
24 -> 2 -1: {Status: 0}
24 -> 4 -3: {Status: 0}
24 -> 16 -15: {Status: 0}

# model parameters
PARTICLE_DATA:
6: {Width: 0}
24: {Width: 0}

# technical parameters
EXCLUSIVE_CLUSTER_MODE: 1

PROCESSES:
- Process: 93 93 -> 6 -6 24
NLO_Mode: MC@NLO
NLO_Order: {QCD: 1, EW: 0}
Order: {QCD: 2, EW: 1}
ME_Generator: Amegic
RS_ME_Generator: Comix
Loop_Generator: OpenLoops


Things to notice:

• Hard decays are enabled through ‘HARD_DECAYS:Enabled: true’.
• Top quarks and W bosons are final states in the hard matrix elements, so their widths are set to zero using ‘Width: 0’ in their ‘PARTICLE_DATA’ settings.
• Certain decay channels are disabled using ‘Status: 0’ in the ‘Channels’ sub-settings of the ‘HARD_DECAYS’ setting.
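These status flags act as a simple filter on the decay table. A minimal sketch of the bookkeeping, assuming the convention used throughout these examples (0 disables a channel, 2 forces it; the channel list is illustrative, not Sherpa's internal decay table):

```python
# Decay-channel status codes as used in these examples:
# 0 disables a channel, non-zero keeps it active (2 forces it).
channels = {
    "24 -> 2 -1": 0,
    "24 -> 4 -3": 0,
    "24 -> 16 -15": 0,
    "24 -> 12 -11": 1,
    "24 -> 14 -13": 1,
}

def active_channels(channels):
    # Keep every channel whose status is non-zero.
    return sorted(c for c, s in channels.items() if s != 0)
```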

## 9.5 Single-top production in the s, t and tW channel

In this section, examples for single-top production in three different channels are described. For the channel definitions and a validation of these setups, see [Both17].

### 9.5.1 t-channel single-top production

# SHERPA run card for t-channel single top-quark production at MC@NLO
# and N_f = 5

# collider setup
BEAMS: 2212
BEAM_ENERGIES: 6500

# me generator settings
ME_GENERATORS: [Comix, Amegic, OpenLoops]

# scales
# CORESCALE SingleTop:
#   use Mandelstam \hat{t} for t-channel 2->2 core process
CORE_SCALE: SingleTop

HARD_DECAYS:
Enabled: true
Channels:
24 -> 2 -1: {Status: 0}
24 -> 4 -3: {Status: 0}
-24 -> -2 1: {Status: 0}
-24 -> -4 3: {Status: 0}

# choose EW Gmu input scheme
EW_SCHEME: 3

# required for using top-quark in ME
PARTICLE_DATA: { 6: {Width: 0} }

PROCESSES:
- Process: 93 93 -> 6 93
NLO_Mode: MC@NLO
NLO_Order: {QCD: 1, EW: 0}
Order: {QCD: 0, EW: 2}
ME_Generator: Amegic
RS_ME_Generator: Comix
Loop_Generator: OpenLoops
Min_N_TChannels: 1  # require t-channel W


Things to notice:

• We use OpenLoops to compute the virtual corrections [Cas11].
• We match matrix elements and parton showers using the MC@NLO technique for massive particles, as described in [Hoe13].
• A non-default METS core scale setter is used, cf. METS scale setting with multiparton core processes
• We enable top and W decays through the internal decay module using ‘HARD_DECAYS:Enabled: true’. The W is restricted to its leptonic decay channels.
• By setting ‘Min_N_TChannels: 1’, only t-channel diagrams are used for the calculation.

### 9.5.2 t-channel single-top production with N_f=4

# SHERPA run card for t-channel single top-quark production at MC@NLO
# and N_f = 4

# collider setup
BEAMS: 2212
BEAM_ENERGIES: 6500

# me generator settings
ME_GENERATORS: [Comix, Amegic, OpenLoops]

# scales
#   muR = transverse momentum of the bottom
#   muF = muQ = transverse momentum of the top
SCALES: VAR{MPerp2(p[2])}{MPerp2(p[3])}{MPerp2(p[2])}

HARD_DECAYS:
Enabled: true
Channels:
24 -> 2 -1: {Status: 0}
24 -> 4 -3: {Status: 0}
-24 -> -2 1: {Status: 0}
-24 -> -4 3: {Status: 0}

# choose EW Gmu input scheme
EW_SCHEME: 3

PARTICLE_DATA:
6: {Width: 0}  # required for using top-quark in ME
5: {Massive: true, Mass: 4.18}  # mass as in NNPDF30_nlo_as_0118_nf_4

# configure for N_f = 4
PDF_LIBRARY: LHAPDFSherpa
PDF_SET: NNPDF30_nlo_as_0118_nf_4
ALPHAS: {USE_PDF: 1}

PROCESSES:
- Process: 93 93 -> 6 -5 93
NLO_Mode: MC@NLO
NLO_Order: {QCD: 1, EW: 0}
Order: {QCD: 1, EW: 2}
ME_Generator: Amegic
RS_ME_Generator: Comix
Loop_Generator: OpenLoops
Min_N_TChannels: 1  # require t-channel W


Things to notice:

• Compared to t-channel single-top production, the bottom quark is treated as massive here, resulting in a four-flavour calculation with a matching N_f = 4 PDF set.
• See t-channel single-top production for more comments.

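The scale setter in this setup relies on ‘MPerp2’, the squared transverse mass m^2 + p_T^2 = E^2 - p_z^2 of a four-momentum. A minimal Python sketch of that quantity (plain Python, not the Sherpa interpreter):

```python
def mperp2(p):
    # Squared transverse mass of a four-momentum (E, px, py, pz):
    # m^2 + pT^2, which equals E^2 - pz^2
    E, px, py, pz = p
    return E * E - pz * pz

# A massless particle with pT = 3 and pz = 4 has E = 5,
# so its squared transverse mass is 9.
p = (5.0, 3.0, 0.0, 4.0)
mt2 = mperp2(p)
```
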
### 9.5.3 s-channel single-top production

# SHERPA run card for s-channel single top-quark production at MC@NLO
# and N_f = 5

# collider setup
BEAMS: 2212
BEAM_ENERGIES: 6500

# me generator settings
ME_GENERATORS: [Comix, Amegic, OpenLoops]

# scales
# CORESCALE SingleTop:
#   use Mandelstam \hat{s} for s-channel 2->2 core process
CORE_SCALE: SingleTop

HARD_DECAYS:
Enabled: true
Channels:
24 -> 2 -1: {Status: 0}
24 -> 4 -3: {Status: 0}
-24 -> -2 1: {Status: 0}
-24 -> -4 3: {Status: 0}

# choose EW Gmu input scheme
EW_SCHEME: 3

# required for using top-quark in ME
PARTICLE_DATA: { 6: {Width: 0} }

# there is no bottom in the initial-state in s-channel production
PARTICLE_CONTAINERS:
900: {Name: lj, Flavours: [1, -1, 2, -2, 3, -3, 4, -4, 21]}

PROCESSES:
- Process: 900 900 -> 6 93
NLO_Mode: MC@NLO
NLO_Order: {QCD: 1, EW: 0}
Order: {QCD: 0, EW: 2}
ME_Generator: Amegic
RS_ME_Generator: Comix
Loop_Generator: OpenLoops
Max_N_TChannels: 0  # require s-channel W


Things to notice:

• By excluding the bottom quark from the initial state at Born level using ‘PARTICLE_CONTAINERS’, and by setting ‘Max_N_TChannels: 0’, only s-channel diagrams are used for the calculation.
• See t-channel single-top production for more comments.

### 9.5.4 tW-channel single-top production

# SHERPA run card for tW-channel single top-quark production at MC@NLO
# and N_f = 5

# collider setup
BEAMS: 2212
BEAM_ENERGIES: 6500

# me generator settings
ME_GENERATORS: [Comix, Amegic, OpenLoops]

# scales
# mu = transverse momentum of the top
SCALES: VAR{MPerp2(p[3])}{MPerp2(p[3])}{MPerp2(p[3])}

HARD_DECAYS:
Enabled: true
Channels:
24 -> 2 -1: {Status: 0}
24 -> 4 -3: {Status: 0}
-24 -> -2 1: {Status: 0}
-24 -> -4 3: {Status: 0}

# choose EW Gmu input scheme
EW_SCHEME: 3

# required for using top-quark/W-boson in ME
PARTICLE_DATA:
6: {Width: 0}
24: {Width: 0}

PROCESSES:
- Process: 93 93 -> 6 -24
No_Decay: -6  # remove ttbar diagrams
NLO_Mode: MC@NLO
NLO_Order: {QCD: 1, EW: 0}
Order: {QCD: 1, EW: 1}
ME_Generator: Amegic
RS_ME_Generator: Comix
Loop_Generator: OpenLoops


Things to notice:

• By setting ‘No_Decay: -6’, the doubly-resonant ttbar diagrams are removed. Only the singly-resonant diagrams remain, as required by the definition of the channel.
• See t-channel single-top production for more comments.

## 9.6 Vector boson pairs + jets production

### 9.6.1 Dilepton, missing energy and jets production

# collider setup
BEAMS: 2212
BEAM_ENERGIES: 6500

# me generator settings
ME_GENERATORS: [Comix, Amegic, OpenLoops]
METS: { CLUSTER_MODE: 16 }

# define parton container without b-quarks to
# remove processes with top contributions
PARTICLE_CONTAINERS:
901: {Name: lightflavs, Flavours: [1, -1, 2, -2, 3, -3, 4, -4, 21]}
NLO_CSS_DISALLOW_FLAVOUR: 5

PROCESSES:
- Process: 901 901 -> 90 91 90 91 901{3}
Order: {QCD: 0, EW: 4}
CKKW: 30
2->4-5:
NLO_Mode: MC@NLO
NLO_Order: {QCD: 1, EW: 0}
ME_Generator: Amegic
RS_ME_Generator: Comix
Loop_Generator: OpenLoops
2->5-7:
Integration_Error: 0.05

SELECTORS:
- VariableSelector:
Variable: PT
Flavs: 90
Ranges: [[5.0, E_CMS], [5.0, E_CMS]]
Ordering: "[PT_UP]"
- [Mass, 11, -11, 10.0, E_CMS]
- [Mass, 13, -13, 10.0, E_CMS]
- [Mass, 15, -15, 10.0, E_CMS]


### 9.6.2 Dilepton, missing energy and jets production (gluon initiated)

# collider setup
BEAMS: 2212
BEAM_ENERGIES: 6500

# scales
CORE_SCALE: VAR{Abs2(p[2]+p[3]+p[4]+p[5])/4.0}

# me generator settings
ME_GENERATORS: [Amegic, OpenLoops]
AMEGIC: { ALLOW_MAPPING: 0 }
# the following phase space libraries have to be generated with the
# corresponding qq->llvv setup (RUNDATA=Sherpa.tree.yaml) first;
# they will appear in Process/Amegic/lib/libProc_fsrchannels*.so

PROCESSES:
- Process: 93 93 -> 90 90 91 91 93{1}
CKKW: $(QCUT)
Enable_MHV: 10
Loop_Generator: OpenLoops
2->4:
Order: {QCD: 2, EW: 4}
Integrator: fsrchannels4
2->5:
Order: {QCD: 3, EW: 4}
Integrator: fsrchannels5
Integration_Error: 0.02

SELECTORS:
- [Mass, 11, -11, 10.0, E_CMS]
- [Mass, 13, -13, 10.0, E_CMS]
- [Mass, 15, -15, 10.0, E_CMS]


### 9.6.3 Four lepton and jets production

# collider setup
BEAMS: 2212
BEAM_ENERGIES: 6500

# me generator settings
ME_GENERATORS: [Comix, Amegic, OpenLoops]
METS: { CLUSTER_MODE: 16 }

PROCESSES:
- Process: 93 93 -> 90 90 90 90 93{3}
Order: {QCD: 0, EW: 4}
CKKW: 30
2->4-5:
NLO_Mode: MC@NLO
NLO_Order: {QCD: 1, EW: 0}
ME_Generator: Amegic
RS_ME_Generator: Comix
Loop_Generator: OpenLoops
2->5-7:
Integration_Error: 0.05

SELECTORS:
- VariableSelector:
Variable: PT
Flavs: 90
Ranges: [[5.0, E_CMS], [5.0, E_CMS]]
Ordering: "[PT_UP]"
- [Mass, 11, -11, 10.0, E_CMS]
- [Mass, 13, -13, 10.0, E_CMS]
- [Mass, 15, -15, 10.0, E_CMS]


### 9.6.4 Four lepton and jets production (gluon initiated)

# collider setup
BEAMS: 2212
BEAM_ENERGIES: 6500

# scales
CORE_SCALE: VAR{Abs2(p[2]+p[3]+p[4]+p[5])/4.0}

# me generator settings
ME_GENERATORS: [Amegic, OpenLoops]
AMEGIC: { ALLOW_MAPPING: 0 }
# the following phase space libraries have to be generated with the
# corresponding qq->llll setup (RUNDATA=Sherpa.tree.yaml) first;
# they will appear in Process/Amegic/lib/libProc_fsrchannels*.so
SHERPA_LDADD: [Proc_fsrchannels4, Proc_fsrchannels5]

PROCESSES:
- Process: 93 93 -> 90 90 90 90 93{1}
CKKW: $(QCUT)
Enable_MHV: 10
Loop_Generator: OpenLoops
2->4:
Order: {QCD: 2, EW: 4}
Integrator: fsrchannels4
2->5:
Order: {QCD: 3, EW: 4}
Integrator: fsrchannels5
Integration_Error: 0.02

SELECTORS:
- [Mass, 11, -11, 10.0, E_CMS]
- [Mass, 13, -13, 10.0, E_CMS]
- [Mass, 15, -15, 10.0, E_CMS]


### 9.6.5 WZ production with jets production

# collider setup
BEAMS: 2212
BEAM_ENERGIES: 6500

# scales
CORE_SCALE: VAR{Abs2(p[2]+p[3])/4.0}

# me generator settings
ME_GENERATORS: [Comix, Amegic, OpenLoops]

HARD_DECAYS:
Enabled: true
Channels:
24 -> 2 -1: {Status: 2}
24 -> 4 -3: {Status: 2}
-24 -> -2 1: {Status: 2}
-24 -> -4 3: {Status: 2}
23 -> 12 -12: {Status: 2}
23 -> 14 -14: {Status: 2}
23 -> 16 -16: {Status: 2}

PARTICLE_DATA:
23: {Width: 0}
24: {Width: 0}

PROCESSES:
- Process: 93 93 -> 24 23 93{3}
Order: {QCD: 0, EW: 2}
CKKW: 30
2->2-3:
NLO_Mode: MC@NLO
NLO_Order: {QCD: 1, EW: 0}
ME_Generator: Amegic
RS_ME_Generator: Comix
Loop_Generator: OpenLoops
2->3-7:
Integration_Error: 0.05
- Process: 93 93 -> -24 23 93{3}
Order: {QCD: 0, EW: 2}
CKKW: 30
2->2-3:
NLO_Mode: MC@NLO
NLO_Order: {QCD: 1, EW: 0}
ME_Generator: Amegic
RS_ME_Generator: Comix
Loop_Generator: OpenLoops
2->3-7:
Integration_Error: 0.05


### 9.6.6 Same sign dilepton, missing energy and jets production

# collider setup
BEAMS: 2212
BEAM_ENERGIES: 6500

# choose EW Gmu input scheme
EW_SCHEME: 3

# tags for process setup
TAGS:
NJET: 1
QCUT: 30

# scales
CORE_SCALE: VAR{Abs2(p[2]+p[3]+p[4]+p[5])}
EXCLUSIVE_CLUSTER_MODE: 1

# solves problem with dipole QED modeling
ME_QED: { CLUSTERING_THRESHOLD: 10 }

# improve integration performance
PSI: { ITMIN: 25000 }
INTEGRATION_ERROR: 0.05

PROCESSES:
- Process: 93 93 -> 11 11 -12 -12 93 93 93{$(NJET)}
Order: {QCD: Any, EW: 6}
CKKW: $(QCUT)
- Process: 93 93 -> 13 13 -14 -14 93 93 93{$(NJET)}
Order: {QCD: Any, EW: 6}
CKKW: $(QCUT)
- Process: 93 93 -> 15 15 -16 -16 93 93 93{$(NJET)}
Order: {QCD: Any, EW: 6}
CKKW: $(QCUT)
- Process: 93 93 -> 11 13 -12 -14 93 93 93{$(NJET)}
Order: {QCD: Any, EW: 6}
CKKW: $(QCUT)
- Process: 93 93 -> 11 15 -12 -16 93 93 93{$(NJET)}
Order: {QCD: Any, EW: 6}
CKKW: $(QCUT)
- Process: 93 93 -> 13 15 -14 -16 93 93 93{$(NJET)}
Order: {QCD: Any, EW: 6}
CKKW: $(QCUT)
- Process: 93 93 -> -11 -11 12 12 93 93 93{$(NJET)}
Order: {QCD: Any, EW: 6}
CKKW: $(QCUT)
- Process: 93 93 -> -13 -13 14 14 93 93 93{$(NJET)}
Order: {QCD: Any, EW: 6}
CKKW: $(QCUT)
- Process: 93 93 -> -15 -15 16 16 93 93 93{$(NJET)}
Order: {QCD: Any, EW: 6}
CKKW: $(QCUT)
- Process: 93 93 -> -11 -13 12 14 93 93 93{$(NJET)}
Order: {QCD: Any, EW: 6}
CKKW: $(QCUT)
- Process: 93 93 -> -11 -15 12 16 93 93 93{$(NJET)}
Order: {QCD: Any, EW: 6}
CKKW: $(QCUT)
- Process: 93 93 -> -13 -15 14 16 93 93 93{$(NJET)}
Order: {QCD: Any, EW: 6}
CKKW: $(QCUT)

SELECTORS:
- [PT, 90, 5.0, E_CMS]
- NJetFinder:
N: 2
PTMin: 15.0
ETMin: 0.0
R: 0.4
Exp: -1
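The ‘$(NJET)’ and ‘$(QCUT)’ tags used above are plain text substitutions defined in the ‘TAGS’ block. A minimal Python sketch of this kind of tag expansion (an illustration of the idea, not Sherpa's actual implementation):

```python
import re

def expand_tags(text, tags):
    # Replace every occurrence of $(NAME) with the value of tag NAME.
    return re.sub(r'\$\((\w+)\)', lambda m: str(tags[m.group(1)]), text)

tags = {"NJET": 1, "QCUT": 30}
line = "Process: 93 93 -> 11 11 -12 -12 93 93 93{$(NJET)}"
expanded = expand_tags(line, tags)  # ...93{1}
```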


## 9.7 Event generation in the MSSM using UFO

This is an example for event generation in the MSSM using Sherpa’s UFO support. In the corresponding Example directory <prefix>/share/SHERPA-MC/Examples/UFO_MSSM/, you will find a directory MSSM that contains the UFO output for the MSSM (https://feynrules.irmp.ucl.ac.be/wiki/MSSM). To run the example, generate the model as described in UFO Model Interface by executing

cd <prefix>/share/SHERPA-MC/Examples/UFO_MSSM/
<prefix>/bin/Sherpa-generate-model MSSM


An example run card will be written to the working directory. Use this run card as a template to generate events.

## 9.8 Deep-inelastic scattering

### 9.8.1 DIS at HERA

This is an example of a setup for hadronic final states in deep-inelastic lepton-nucleon scattering at a centre-of-mass energy of 300 GeV. Corresponding measurements were carried out by the H1 and ZEUS collaborations at the HERA collider at DESY Hamburg.

# collider setup
BEAMS: [-11, 2212]
BEAM_ENERGIES: [27.5, 820]
PDF_SET: [None, Default]

# technical parameters
TAGS:
QCUT: 5
SDIS: 1.0
LGEN: BlackHat
ME_GENERATORS:
- Comix
- Amegic
- $(LGEN)
RESPECT_MASSIVE_FLAG: true
CSS_KIN_SCHEME: 1

# hadronization tune
PARJ:
- [21, 0.432]
- [41, 1.05]
- [42, 1.0]
- [47, 0.65]
MSTJ:
- [11, 5]
FRAGMENTATION: Lund
DECAYMODEL: Lund

PROCESSES:
- Process: -11 93 -> -11 93 93{4}
CKKW: $(QCUT)/sqrt(1.0+sqr($(QCUT)/$(SDIS))/Abs2(p[2]-p[0]))
Order: {QCD: Any, EW: 2}
Max_N_Quarks: 6
Loop_Generator: $(LGEN)
2->2-3:
NLO_Mode: MC@NLO
NLO_Order: {QCD: 1, EW: 0}
ME_Generator: Amegic
RS_ME_Generator: Comix
2->3:
PSI_ItMin: 25000
Integration_Error: 0.03

SELECTORS:
- [Q2, -11, -11, 4, 1e12]


Things to notice:

• The beams are asymmetric, with the positrons at an energy of 27.5 GeV, while the protons carry 820 GeV of energy.
• The multi-jet merging cut is set dynamically for each event, depending on the photon virtuality, see [Car09].
• There is a selector cut on the photon virtuality. This cut implements the experimental requirements for identifying the deep-inelastic scattering process.

## 9.9 Fixed-order next-to-leading order calculations

### 9.9.1 Production of NTuples

Root NTuples are a convenient way to store the results of cumbersome fixed-order calculations in order to perform multiple analyses. This example shows how to generate such NTuples and reweight them in order to change the factorisation and renormalisation scales. Note that in order to use this setup, Sherpa must be configured with the option --enable-root=/path/to/root, see Event output formats. If Sherpa has not been configured with Rivet analysis support, please disable the analysis using ‘-a0’ on the command line, see Command line options.

When using NTuples, one needs to bear in mind that every calculation involving jets in the final state is exclusive in the sense that a lower cut-off on the jet transverse momenta must be imposed. It is therefore necessary to check whether the event sample stored in the NTuple is sufficiently inclusive before using it. Similar remarks apply when photons are present in the NLO calculation or when cuts on leptons have been applied at generation level to increase efficiency. Every NTuple should therefore be accompanied by an appropriate documentation.

NTuple compression can be customised using the parameter ‘ROOTNTUPLE_COMPRESSION’, which is used to call TFile::SetCompressionSettings. For a detailed documentation of the available options, see http://root.cern.ch

This example will generate NTuples for the process pp->lvj, where l is an electron or positron, and v is an electron (anti-)neutrino. We identify parton-level jets using the anti-k_T algorithm with R=0.4 [Cac08]. We require the transverse momentum of these jets to be larger than 20 GeV. No other cuts are applied at generation level.

EVENTS: 100k
EVENT_GENERATION_MODE: Weighted
TAGS:
LGEN: BlackHat
ME_GENERATORS: [Amegic, $(LGEN)]
# Analysis (please configure with --enable-rivet & --enable-hepmc2)
ANALYSIS: Rivet
ANALYSIS_OUTPUT: Analysis/HTp/BVI/
# NTuple output (please configure with '--enable-root')
EVENT_OUTPUT: EDRoot[NTuple_B-like]
BEAMS: 2212
BEAM_ENERGIES: 3500
SCALES: VAR{sqr(sqrt(H_T2)-PPerp(p[2])-PPerp(p[3])+MPerp(p[2]+p[3]))/4}
EW_SCHEME: 0
WIDTH_SCHEME: Fixed  # sin\theta_w -> 0.23
DIPOLES: {ALPHA: 0.03}
PARTICLE_DATA:
13: {Massive: true}
15: {Massive: true}
PROCESSES:
# The Born piece
- Process: 93 93 -> 90 91 93
NLO_QCD_Mode: Fixed_Order
NLO_QCD_Part: B
Order: {QCD: Any, EW: 2}
# The virtual piece
- Process: 93 93 -> 90 91 93
NLO_QCD_Mode: Fixed_Order
NLO_QCD_Part: V
Loop_Generator: \$(LGEN)
Order: {QCD: Any, EW: 2}
# The integrated subtraction piece
- Process: 93 93 -> 90 91 93
NLO_QCD_Mode: Fixed_Order
NLO_QCD_Part: I
Order: {QCD: Any, EW: 2}
SELECTORS:
- FastjetFinder:
Algorithm: antikt
N: 1
PTMin: 20
ETMin: 0
DR: 0.4
RIVET:
-a: ATLAS_2012_I1083318
USE_HEPMC_SHORT: 1
IGNOREBEAMS: 1


Things to notice:

• NTuple production is enabled by ‘EVENT_OUTPUT: EDRoot[NTuple_B-like]’, see Event output formats.
• The scale used is defined as in [Ber09a].
• ‘EW_SCHEME: 0’ and ‘WIDTH_SCHEME: Fixed’ are used to set the value of the weak mixing angle to 0.23, consistent with EW precision measurements.
• ‘DIPOLES: {ALPHA: 0.03}’ is used to limit the active phase space of dipole subtractions.
• ‘13: {Massive: true}’ and ‘15: {Massive: true}’ are used to limit the number of active lepton flavours to electron and positron.
• The option ‘USE_HEPMC_SHORT: 1’ is used in the Rivet analysis section as the events produced by Sherpa are not at particle level.

#### 9.9.1.1 NTuple production

Start Sherpa using the command line

  Sherpa -f Sherpa.B-like.yaml


Sherpa will first create source code for its matrix-element calculations. This process will stop with a message instructing you to compile. Do so by running

  ./makelibs -j4


Launch Sherpa again, using

  Sherpa -f Sherpa.B-like.yaml


Sherpa will then compute the Born, virtual and integrated subtraction contributions to the NLO cross section and generate events. These events are analysed using the Rivet library and stored in a Root NTuple file called NTuple_B-like.root. We will use this NTuple later to compute an NLO uncertainty band.

The real-emission contribution, including subtraction terms, to the NLO cross section is computed using

  Sherpa -f Sherpa.R-like.yaml


Events are generated, analysed by Rivet and stored in the Root NTuple file NTuple_R-like.root.

The two analyses of events with Born-like and real-emission-like kinematics need to be merged, which can be achieved using scripts like yodamerge. The result can then be plotted and displayed.
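Conceptually, the merge adds the histograms of the two samples bin by bin, propagating the statistical uncertainties. A minimal sketch of such a merge (plain Python, not the actual yodamerge script):

```python
def merge_histograms(h1, h2):
    # Each histogram maps a bin identifier (e.g. its edges) to
    # (sum of weights, sum of squared weights).
    merged = dict(h1)
    for b, (w, w2) in h2.items():
        w0, w20 = merged.get(b, (0.0, 0.0))
        merged[b] = (w0 + w, w20 + w2)
    return merged

# Illustrative bins from the Born-like and real-emission-like analyses
born_like = {(0.0, 10.0): (1.2, 0.5), (10.0, 20.0): (0.8, 0.3)}
real_like = {(0.0, 10.0): (0.4, 0.2)}
combined = merge_histograms(born_like, real_like)
```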

#### 9.9.1.2 Usage of NTuples in Sherpa

Next we will compute the NLO uncertainty band using Sherpa. To this end, we make use of the Root NTuples generated in the previous steps. Note that the setup files for reweighting are almost identical to those for generating the NTuples. We have simply replaced ‘EVENT_OUTPUT’ by ‘EVENT_INPUT’.

We re-evaluate the events with the scale variation as defined in the Reweight configuration files:

  Sherpa -f Sherpa.Reweight.B-like.yaml
Sherpa -f Sherpa.Reweight.R-like.yaml


The contributions can again be combined using yodamerge.

### 9.9.2 MINLO

The following configuration file shows how to implement the MINLO procedure from [Ham12]. A few things to note are detailed below. MINLO can also be applied when reading NTuples, see Production of NTuples. In this case, the scale and K factor must be defined, see SCALES and KFACTOR.

BEAMS: 2212
BEAM_ENERGIES: 6500

EVENT_GENERATION_MODE: W
CORE_SCALE: VAR{Abs2(p[2]+p[3])+0.25*sqr(sqrt(H_T2)-PPerp(p[2])-PPerp(p[3])+PPerp(p[2]+p[3]))}

PROCESSES:
- Process: 93 93 -> 11 -12 93
Scales: MINLO
KFactor: MINLO
ME_Generator: Amegic
Loop_Generator: BlackHat
Order: {QCD: Any, EW: 2}

SELECTORS:
- [Mass, 11, -12, 2, E_CMS]
- FastjetFinder:
Algorithm: antikt
N: 1
PTMin: 1.0
ETMin: 1.0
DR: 0.4


Things to notice:

• The R parameter of the flavour-based kT clustering algorithm can be changed using MINLO:DELTA_R.
• The parameter MINLO:SUDAKOV_MODE defines whether to include power corrections stemming from the finite parts in the integral over branching probabilities. It defaults to 1 and can be disabled by setting MINLO: {SUDAKOV_MODE: 0}.
• The parameter MINLO:SUDAKOV_PRECISION defines the precision target for integration of the Sudakov exponent. It defaults to 1e-4.

## 9.10 Soft QCD: Minimum Bias and Cross Sections

### 9.10.1 Calculation of inclusive cross sections

Note that this example is not yet updated to the new YAML input format. Contact the Authors for more information.

(run){
OUTPUT              = 2
EVENT_TYPE          = MinimumBias
SOFT_COLLISIONS     = Shrimps
Shrimps_Mode        = Xsecs

deltaY    =  1.5;
Lambda2   =  1.7;
beta_0^2  =  20.0;
kappa     =  0.6;
xi        =  0.2;
lambda    =  0.3;
Delta     =  0.4;
}(run)

(beam){
BEAM_1 =  2212; BEAM_ENERGY_1 = 450.;
BEAM_2 =  2212; BEAM_ENERGY_2 = 450.;
}(beam)

(me){
ME_SIGNAL_GENERATOR = None
}(me)



Things to notice:

• Inclusive cross sections (total, inelastic, low-mass single-diffractive, low-mass double-diffractive, elastic) and the elastic slope are calculated for varying centre-of-mass energies in pp collisions
• The results are written to the file InclusiveQuantities/xsecs_total.dat and to the screen. The directory will automatically be created in the path from where Sherpa is run.
• The parameters of the model are not very well tuned.

### 9.10.2 Simulation of Minimum Bias events

Note that this example is not yet updated to the new YAML input format. Contact the Authors for more information.

(run){
EVENTS              = 50k
OUTPUT              = 2
EVENT_TYPE          = MinimumBias
SOFT_COLLISIONS     = Shrimps
Shrimps_Mode        = Inelastic

ANALYSIS            = Rivet

ANALYSIS_OUTPUT     = test6

ALPHAS(MZ) 0.118;
ORDER_ALPHAS 1;
CSS_FS_PT2MIN       1.00

deltaY    =  1.5;
Lambda2   =  1.376;
beta_0^2  =  18.76;
kappa     =  0.6;
xi        =  0.2;
lambda    =  0.2151;
Delta     =  0.3052;

Q_0^2           = 2.25;
Chi_S           = 1.0;
Shower_Min_KT2  = 4.0;
Diff_Factor     = 4.0;
KT2_Factor      = 4.0;
RescProb        = 2.0;
RescProb1       = 0.5;
Q_RC^2          = 0.9;
ReconnProb      = -25.;
Resc_KTMin      = off;

Misha           = 0
}(run)

(beam){
BEAM_1 =  2212; BEAM_ENERGY_1 = 3500.;
BEAM_2 =  2212; BEAM_ENERGY_2 = 3500.;
}(beam)

(analysis){
BEGIN_RIVET {
-a ATLAS_2010_S8918562 ATLAS_2010_S8894728 ATLAS_2011_S8994773 ATLAS_2012_I1084540 TOTEM_2012_DNDETA ATLAS_2011_I919017 CMS_2011_S8978280 CMS_2011_S9120041 CMS_2011_S9215166 CMS_2010_S8656010 CMS_2011_S8884919 CMS_QCD_10_024
} END_RIVET
}(analysis)

(me){
ME_SIGNAL_GENERATOR = None
}(me)



Things to notice:

• The SHRiMPS model is not properly tuned yet – all parameters are set to very natural values, such as for example 1.0 GeV for infrared parameters.
• Elastic scattering and low-mass diffraction are not included.
• A large number of Minimum Bias-type analyses is enabled.

## 9.11 Setups for event production at B-factories

### 9.11.1 QCD continuum

Example setup for QCD continuum production at the Belle/KEK collider. Please note that it does not include any hadronic resonances.

# collider setup
BEAMS:  [11, -11]
BEAM_ENERGIES: [7., 4.]

# model parameters
ALPHAS(MZ): 0.1188
ORDER_ALPHAS: 1
PARTICLE_DATA:
4: {Massive: 1}
5: {Massive: 1}
MASSIVE_PS: 3

PROCESSES:
- Process: 11 -11 -> 93 93
Order: {QCD: Any, EW: 2}
- Process: 11 -11 -> 4 -4
Order: {QCD: Any, EW: 2}
- Process: 11 -11 -> 5 -5
Order: {QCD: Any, EW: 2}


Things to notice:

• Asymmetric beam energies; photon ISR is switched on by default.
• Full mass effects of c and b quarks computed.
• Strong coupling constant value set to 0.1188 and two loop (NLO) running.
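As a quick cross-check, the asymmetric beam energies correspond to a centre-of-mass energy of sqrt(4 E1 E2) in the massless-beam approximation, which lands close to the Y(4S) mass of about 10.58 GeV:

```python
import math

# Beam energies as in the BEAM_ENERGIES setting above (GeV)
E1, E2 = 7.0, 4.0

# Centre-of-mass energy for head-on collisions of (nearly) massless beams
E_cms = math.sqrt(4.0 * E1 * E2)  # about 10.58 GeV
```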

### 9.11.2 Signal process

Example setup for B-hadron pair production on the Y(4S) pole.

# collider setup
BEAMS:  [11, -11]
BEAM_ENERGIES: [7., 4.]

# model parameters
ALPHAS(MZ): 0.1188
ORDER_ALPHAS: 1
PARTICLE_DATA:
4: {Massive: 1}
5: {Massive: 1}
MASSIVE_PS: 3
ME_GENERATORS: Internal
SCALES: VAR{sqr(91.2)}

PROCESSES:
#
# electron positron -> Y(4S) -> B+ B-
#
- Process: 11 -11 -> 300553[a]
Decay: "300553[a] -> 521 -521"
Order: {QCD: Any, EW: 2}
#
# electron positron -> Y(4S) -> B0 B0bar
#
- Process: 11 -11 -> 300553[a]
Decay: "300553[a] -> 511 -511"
Order: {QCD: Any, EW: 2}


Things to notice:

• Same setup as QCD continuum, except for process specification.
• Production of both B0 and B+ pairs, in due proportion.

### 9.11.3 Single hadron decay chains

This setup is not a collider setup, but a simulation of a hadronic decay chain.

# collider setup
BEAMS:  [11, -11]
BEAM_ENERGIES: [7., 4.]

# general settings

# specify hadron to be decayed
DECAYER: 511

# initialise rest for Sherpa not to complain
# model parameters
ME_GENERATORS: Internal
SCALES: VAR{sqr(91.2)}

PROCESSES:
- Process: 11 -11 -> 13 -13


Things to notice:

• EVENT_TYPE is set to HadronDecay.
• DECAYER specifies the hadron flavour initialising the decay chain.
• A place holder process is declared such that the Sherpa framework can be initialised. That process will not be used.

## 9.12 Calculating matrix element values for externally given configurations

### 9.12.1 Computing matrix elements for individual phase space points using the Python Interface

Sherpa’s Python interface (see Python Interface) can be used to compute matrix elements for individual phase space points. For this purpose the interface provides a designated class “MEProcess”, whose use is illustrated in the example script.

Please note that the process in the script must be compatible with the one specified in the Sherpa configuration file in the working directory. A random phase space point for the process of interest can be generated as shown in the example.

If AMEGIC++ is used as the matrix element generator, executing the script will result in AMEGIC++ writing out libraries and exiting. After compiling the libraries using ./makelibs, the script must be executed again in order to obtain the matrix element.

#!/usr/bin/env python2
import sys
sys.path.append('@PYLIBDIR@')
import Sherpa

# Add this to the execution arguments to prevent Sherpa from starting the cross section integration
sys.argv.append('INIT_ONLY: 2')

Generator=Sherpa.Sherpa(len(sys.argv),sys.argv)
try:
    Generator.InitializeTheRun()
    Process=Sherpa.MEProcess(Generator)

    # Incoming flavors must be added first!
    Process.Initialize();

    # The first argument corresponds to the particle index:
    # index 0 corresponds to the particle added first, index 1 to the particle added second, and so on...
    Process.SetMomentum(0, 45.6,0.,0.,45.6)
    Process.SetMomentum(1, 45.6,0.,0.,-45.6)
    Process.SetMomentum(2, 45.6,0.,45.6,0.)
    Process.SetMomentum(3, 45.6,0.,-45.6,0.)
    print '\nSquared ME: ', Process.CSMatrixElement()

    # Momentum setting via a list of floats
    Process.SetMomenta([[45.6,0.,0.,45.6],
                        [45.6,0.,0.,-45.6],
                        [45.6,0.,45.6,0.],
                        [45.6,0.,-45.6,0.]])
    print '\nSquared ME: ', Process.CSMatrixElement()

    # Random momenta
    E_cms = 500.0
    wgt = Process.TestPoint(E_cms)
    print '\nRandom test point '
    print 'Squared ME: ', Process.CSMatrixElement(), '\n'

except Sherpa.SherpaException as exc:
    print exc
    exit(1)


### 9.12.2 Computing matrix elements for individual phase space points using the C++ Interface

Matrix element values for user-defined phase space points can also be queried using a small C++ executable provided in Examples/API/ME2. It can be compiled using the provided Makefile. The test program is then run by typing (note: the LD_LIBRARY_PATH must be set to include <Sherpa-installation>/lib/SHERPA-MC)

./test <options>


where the usual options for Sherpa are passed. An example configuration file, giving both the process and the requested phase space points looks like

BEAMS: [11, -11]
BEAM_ENERGIES: 45.6

EVENTS: 0
INIT_ONLY: 2
PDF_LIBRARY: None

PROCESSES:
- Process: 11 -11 -> 2 -2 21 21 21 21

NUMBER_OF_POINTS: 4

MOMENTA:
- [
    [  11,  45.6,   0.0,   0.0,  45.6 ],
    [ -11,  45.6,   0.0,   0.0, -45.6 ],
    [  21,  10.0,   0.0,   0.0, -10.0, 1, 2 ],
    [  21,  10.0,   0.0,   0.0,  10.0, 2, 3 ],
    [  21,  10.0,  10.0,   0.0,   0.0, 3, 1 ],
    [  21,  10.0, -10.0,   0.0,   0.0, 1, 3 ],
    [   2,  25.6,   0.0,  25.6,   0.0, 3, 0 ],
    [  -2,  25.6,   0.0, -25.6,   0.0, 0, 1 ]
  ]
- [
    [  11,  45.6,   0.0,   0.0,  45.6 ],
    [ -11,  45.6,   0.0,   0.0, -45.6 ],
    [  21,  12.0,   0.0,   0.0, -12.0, 1, 2 ],
    [  21,  12.0,   0.0,   0.0,  12.0, 2, 3 ],
    [  21,  12.0,  12.0,   0.0,   0.0, 3, 1 ],
    [  21,  12.0, -12.0,   0.0,   0.0, 1, 3 ],
    [   2,  21.6,   0.0,  21.6,   0.0, 3, 0 ],
    [  -2,  21.6,   0.0, -21.6,   0.0, 0, 1 ]
  ]
- [
    [  11,  45.6,   0.0,   0.0,  45.6 ],
    [ -11,  45.6,   0.0,   0.0, -45.6 ],
    [  21,  14.0,   0.0,   0.0, -14.0, 1, 2 ],
    [  21,  14.0,   0.0,   0.0,  14.0, 2, 3 ],
    [  21,  14.0,  14.0,   0.0,   0.0, 3, 1 ],
    [  21,  14.0, -14.0,   0.0,   0.0, 1, 3 ],
    [   2,  17.6,   0.0,  17.6,   0.0, 3, 0 ],
    [  -2,  17.6,   0.0, -17.6,   0.0, 0, 1 ]
  ]
- [
    [  11,  45.6,   0.0,   0.0,  45.6 ],
    [ -11,  45.6,   0.0,   0.0, -45.6 ],
    [  21,  16.0,   0.0,   0.0, -16.0, 1, 2 ],
    [  21,  16.0,   0.0,   0.0,  16.0, 2, 3 ],
    [  21,  16.0,  16.0,   0.0,   0.0, 3, 1 ],
    [  21,  16.0, -16.0,   0.0,   0.0, 1, 3 ],
    [   2,  13.6,   0.0,  13.6,   0.0, 3, 0 ],
    [  -2,  13.6,   0.0, -13.6,   0.0, 0, 1 ]
  ]


Please note that both the process and the beam specifications need to be present for Sherpa to initialise properly. The momenta must be given in the ordering employed by Sherpa, which can be read off the process name printed on screen. Each entry follows the sequence

  [<pdg-id>, <E>, <px>, <py>, <pz>, <triplet-index>, <antitriplet-index>]


with the colour indices ranging from 1 to 3 for both the triplet and the antitriplet index in the colour-flow basis. The colour information is only needed if Comix is used for the calculation, as Comix then also gives the squared matrix element for this specific colour configuration. Otherwise, the last two arguments can be omitted. In either case, the colour-summed value is printed to screen.
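Before handing a phase-space point to Sherpa, it can be sanity-checked offline. The following sketch (plain Python, independent of Sherpa; `check_point` is a hypothetical helper) verifies four-momentum conservation and, where colour indices are given, that the triplet and antitriplet indices balance, using the first point of the example file above:

```python
import collections

# First phase-space point of the example file, in the MOMENTA format
# described above: [pdg-id, E, px, py, pz, triplet, antitriplet].
point = [
    [ 11, 45.6,   0.0,   0.0,  45.6],
    [-11, 45.6,   0.0,   0.0, -45.6],
    [ 21, 10.0,   0.0,   0.0, -10.0, 1, 2],
    [ 21, 10.0,   0.0,   0.0,  10.0, 2, 3],
    [ 21, 10.0,  10.0,   0.0,   0.0, 3, 1],
    [ 21, 10.0, -10.0,   0.0,   0.0, 1, 3],
    [  2, 25.6,   0.0,  25.6,   0.0, 3, 0],
    [ -2, 25.6,   0.0, -25.6,   0.0, 0, 1],
]

def check_point(entries, n_in=2, tol=1e-9):
    # Hypothetical helper: incoming momenta (first n_in entries) must
    # equal the sum of outgoing momenta, and every colour line opened
    # by a triplet index must be closed by a matching antitriplet.
    tot = [0.0] * 4
    trip, anti = collections.Counter(), collections.Counter()
    for i, e in enumerate(entries):
        sign = 1.0 if i < n_in else -1.0
        for mu in range(4):
            tot[mu] += sign * e[1 + mu]
        if len(e) == 7 and i >= n_in:
            if e[5]: trip[e[5]] += 1
            if e[6]: anti[e[6]] += 1
    assert all(abs(c) < tol for c in tot), "momentum not conserved"
    assert trip == anti, "unbalanced colour flow"
    return True
```

Running `check_point(point)` on the example succeeds; a mistyped momentum component or colour index raises an assertion instead of producing a silently wrong matrix element.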

## 9.13 Using the Python interface

### 9.13.1 Generate events using scripts

This example shows how to generate events with Sherpa using a Python wrapper script. For each event the weight, the number of trials and the particle information are printed to stdout. The script can be used as a basis for constructing interfaces to one's own analysis routines.

#!/usr/bin/python2
import sys
sys.path.append('@PYLIBDIR@')
import Sherpa

Generator=Sherpa.Sherpa(len(sys.argv),sys.argv)
try:
    Generator.InitializeTheRun()
    Generator.InitializeTheEventHandler()
    for n in range(1,1+Generator.NumberOfEvents()):
        Generator.GenerateOneEvent()
        blobs=Generator.GetBlobList()
        print "Event",n,"{"
        ## print blobs
        print "  Weight ",blobs.GetFirst(1)["Weight"]
        print "  Trials ",blobs.GetFirst(1)["Trials"]
        for i in range(0,blobs.size()):
            print "  Blob",i,"{"
            ## print blobs[i]
            print "    Incoming particles"
            for j in range(0,blobs[i].NInP()):
                part=blobs[i].InPart(j)
                ## print part
                s=part.Stat()
                f=part.Flav()
                p=part.Momentum()
                print "     ",j,": ",s,f,p
            print "    Outgoing particles"
            for j in range(0,blobs[i].NOutP()):
                part=blobs[i].OutPart(j)
                ## print part
                s=part.Stat()
                f=part.Flav()
                p=part.Momentum()
                print "     ",j,": ",s,f,p
            print "  } Blob",i
        print "} Event",n
        if ((n%100)==0): print "  Event ",n
    Generator.SummarizeRun()

except Sherpa.SherpaException as exc:
    print exc
    exit(1)

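The weight and trial numbers read out by the script can be used to estimate the total cross section of the generated sample: summing the event weights and dividing by the total number of trials yields the generated cross section (in Sherpa's conventions, in pb). A minimal, Sherpa-independent bookkeeping sketch (`CrossSectionEstimator` is a hypothetical helper, not part of the Sherpa API):

```python
import math

class CrossSectionEstimator:
    """Accumulate per-event (weight, trials) pairs, as read from the
    signal-process blob, and estimate the total cross section as
    sum(weights)/sum(trials), with the standard Monte Carlo error."""
    def __init__(self):
        self.sum_w = 0.0
        self.sum_w2 = 0.0
        self.sum_trials = 0.0
    def add(self, weight, trials):
        self.sum_w += weight
        self.sum_w2 += weight * weight
        self.sum_trials += trials
    def xs(self):
        # Cross-section estimate: mean weight per trial.
        return self.sum_w / self.sum_trials
    def xs_err(self):
        # Statistical error of the mean over all trials.
        n = self.sum_trials
        var = max(self.sum_w2 / n - (self.sum_w / n) ** 2, 0.0)
        return math.sqrt(var / n)
```

Inside the event loop of the script above, one would call `add(weight, trials)` with the values taken from `blobs.GetFirst(1)` and query `xs()` and `xs_err()` after the run.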

### 9.13.2 Generate events with MPI using scripts

This example shows how to generate events with Sherpa using a Python wrapper script and MPI. For each event the weight, the number of trials and the particle information are sent to the MPI master node and written into a single gzipped output file. Note that you need the mpi4py module to run this example. Sherpa must be configured and installed using ‘--enable-mpi’, see MPI parallelization.

#!/usr/bin/python2
import sys
sys.path.append('@PYLIBDIR@')
import Sherpa
import gzip
from mpi4py import MPI

class MyParticle:
    def __init__(self,p):
        self.kfc=p.Flav().Kfcode()
        if p.Flav().IsAnti(): self.kfc=-self.kfc
        self.E=p.Momentum()[0]
        self.px=p.Momentum()[1]
        self.py=p.Momentum()[2]
        self.pz=p.Momentum()[3]
    def __str__(self):
        return (str(self.kfc)+" "+str(self.E)+" "
                +str(self.px)+" "+str(self.py)+" "+str(self.pz))

Generator=Sherpa.Sherpa(len(sys.argv),sys.argv)
try:
    Generator.InitializeTheRun()
    Generator.InitializeTheEventHandler()
    comm=MPI.COMM_WORLD
    rank=comm.Get_rank()
    size=comm.Get_size()
    if rank==0:
        outfile=gzip.GzipFile("events.gz",'w')
        for n in range(1,1+Generator.NumberOfEvents()):
            for t in range(1,size):
                weight=comm.recv(source=t,tag=t)
                trials=comm.recv(source=t,tag=2*t)
                parts=comm.recv(source=t,tag=3*t)
                outfile.write("E "+str(weight)+" "+str(trials)+"\n")
                for p in parts:
                    outfile.write(str(p)+"\n")
            if (n%100)==0: print "  Event",n
        outfile.close()
    else:
        for n in range(1,1+Generator.NumberOfEvents()):
            Generator.GenerateOneEvent()
            blobs=Generator.GetBlobList()
            weight=blobs.GetFirst(1)["Weight"]
            trials=blobs.GetFirst(1)["Trials"]
            parts=[]
            for i in range(0,blobs.size()):
                for j in range(0,blobs[i].NOutP()):
                    part=blobs[i].OutPart(j)
                    if part.Stat()==1 and part.HasDecBlob()==0:
                        parts.append(MyParticle(part))
            comm.send(weight,dest=0,tag=rank)
            comm.send(trials,dest=0,tag=2*rank)
            comm.send(parts,dest=0,tag=3*rank)
    Generator.SummarizeRun()

except Sherpa.SherpaException as exc:
    print exc
    exit(1)


# 10 Getting help

If Sherpa exits abnormally, first check the Sherpa output for hints on the reason for the abort, and try to figure out what has gone wrong with the help of the Manual. Note that Sherpa throwing a normal_exit exception does not imply any abnormal program termination! When using AMEGIC++, Sherpa will exit with the message:

   New libraries created. Please compile.


In this case, follow the instructions given in Running Sherpa with AMEGIC++.

If this does not help, contact the Sherpa team (see the Sherpa Team section of the website sherpa.hepforge.org), providing all information on your setup. Please include

1. The complete ‘Sherpa.yaml’ configuration leading to the crash, tarred and gzipped. Use the status recovery directory Status__<date of crash> produced before the program abort.
2. The command line (including possible parameters) you used to start Sherpa.
3. The installation log file, if available.

# 11 Authors

Sherpa was written by the Sherpa Team, see sherpa.hepforge.org.

# 12 Copying

Sherpa is free software. You can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation. You should have received a copy of the GNU General Public License along with the source for Sherpa; see the file COPYING. If not, write to the Free Software Foundation, 59 Temple Place, Suite 330, Boston, MA 02111-1307, USA.

Sherpa is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.

Sherpa was created during the Marie Curie RTN’s HEPTOOLS, MCnet and LHCphenonet. The MCnet Guidelines apply, see the file GUIDELINES and http://www.montecarlonet.org/index.php?p=Publications/Guidelines.

# Appendix A References

• [Ale05] S. Alekhin and others, HERA and the LHC - A workshop on the implications of HERA for LHC physics: Proceedings Part A, hep-ph/0601012.

• [Ali13] S. Alioli and others, Update of the Binoth Les Houches Accord for a standard interface between Monte Carlo tools and one-loop programs, arXiv:1308.3462.

• [Alw07] J. Alwall and others, Comparative study of various algorithms for the merging of parton showers and matrix elements in hadronic collisions, Eur. Phys. J. C53 (2008) 473-500 [arXiv:0706.2569].

• [Arc08] J. Archibald and T. Gleisberg and S. Hoche and F. Krauss and M. Schonherr and S. Schumann and F. Siegert and J. C. Winter, Simulation of photon-photon interactions in hadron collisions with Sherpa, Nucl. Phys. 179 (2008) 218-225 .

• [Bah08b] M. Bahr and others, Herwig++ Physics and Manual, Eur. Phys. J. C58 (2008) 639-707 [arXiv:0803.0883].

• [Bal92] A. Ballestrero and E. Maina and S. Moretti, Heavy quarks and leptons at e^+e^- colliders, Nucl. Phys. B415 (1994) 265-292 [hep-ph/9212246].

• [Bar93] E. Barberio and Z. Was, PHOTOS - a universal Monte Carlo for QED radiative corrections: version 2.0, Comput. Phys. Commun. 79 (1994) 291-308.

• [Ber02] Z. Bern and L. J. Dixon and C. Schmidt, Isolating a light Higgs boson from the di-photon background at the LHC, Phys. Rev. D66 (2002) 074018 [hep-ph/0206194].

• [Ber09a] C. F. Berger and Z. Bern and L. J. Dixon and F. Febres-Cordero and D. Forde and T. Gleisberg and H. Ita and D. A. Kosower and D. Maitre, Next-to-leading order QCD predictions for W+3-Jet distributions at hadron colliders, Phys. Rev. D80 (2009) 074036 [arXiv:0907.1984].

• [Ber94] F. A. Berends and R. Pittau and R. Kleiss, All electroweak four-fermion processes in electron-positron collisions, Nucl. Phys. B424 (1994) 308 [hep-ph/9404313].

• [Bin10a] T. Binoth and others, A proposal for a standard interface between Monte Carlo tools and one-loop programs, Comput. Phys. Commun. 181 (2010) 1612-1622 [arXiv:1001.1307].

• [Bud74] V. M. Budnev and I. F. Ginzburg and G. V. Meledin and V. G. Serbo, The two photon particle production mechanism. Physical problems. Applications. Equivalent photon approximation, Phys. Rept. 15 (1974) 181-281 .

• [Cac08] M. Cacciari and G. P. Salam and G. Soyez, The Anti-k(t) jet clustering algorithm, JHEP 0804 (2008) 063 [arXiv:0802.1189].

• [Car09] T. Carli and T. Gehrmann and S. Hoche, Hadronic final states in deep-inelastic scattering with Sherpa, Eur. Phys. J. C67 (2010) 73 [arXiv:0912.3715].

• [Cas11] F. Cascioli and P. Maierhofer and S. Pozzorini, Scattering Amplitudes with Open Loops, Eur.Phys.J. C72 (2012) 1889 [arXiv:1111.5206].

• [Cat01a] S. Catani and F. Krauss and R. Kuhn and B. R. Webber, QCD matrix elements + parton showers, JHEP 11 (2001) 063 [hep-ph/0109231].

• [Cat02] S. Catani and S. Dittmaier and M. H. Seymour and Z. Trocsanyi, The dipole formalism for next-to-leading order QCD calculations with massive partons, Nucl. Phys. B627 (2002) 189-265 [hep-ph/0201036].

• [Cat96b] S. Catani and M. H. Seymour, A general algorithm for calculating jet cross sections in NLO QCD, Nucl. Phys. B485 (1997) 291-419 [hep-ph/9605323].

• [Chr08] N. D. Christensen and C. Duhr, FeynRules - Feynman rules made easy, Comput. Phys. Commun. 180 (2009) 1614-1641 [arXiv:0806.4194].

• [Chr09] N. D. Christensen and P. de Aquino and C. Degrande and C. Duhr and B. Fuks and M. Herquet and F. Maltoni and S. Schumann, A comprehensive approach to new physics simulations, Eur. Phys. J. C71 (2011) 1541 [arXiv:0906.2474].

• [Daw02] S. Dawson and L.H. Orr and L. Reina and D. Wackeroth, Associated top quark Higgs boson production at the LHC, Phys.Rev. D67 (2003) 071503 [hep-ph/0211438].

• [Daw03] S. Dawson and C. Jackson and L.H. Orr and L. Reina and D. Wackeroth, Associated Higgs production with top quarks at the large hadron collider: NLO QCD corrections, Phys.Rev. D68 (2003) 034022 [hep-ph/0305087].

• [Deg11] C. Degrande and C. Duhr and B. Fuks and D. Grellscheid and O. Mattelaer and T. Reiter, UFO - The Universal FeynRules Output, Comput. Phys. Commun. 138 (2012) 1201 [arXiv:1108.2040].

• [Dix13] L. J. Dixon and Y. Li, Bounding the Higgs Boson Width Through Interferometry, arXiv:1305.3854.

• [Dra00] P. D. Draggiotis and A. van Hameren and R. Kleiss, SARGE: An algorithm for generating QCD-antennas, Phys. Lett. B483 (2000) 124-130 [hep-ph/0004047].

• [Duh06] C. Duhr and S. Hoche and F. Maltoni, Color-dressed recursive relations for multi-parton amplitudes, JHEP 08 (2006) 062 [hep-ph/0607057].

• [Fie82a] R. D. Field and S. Wolfram, A QCD model for e^+e^- annihilation, Nucl. Phys. B213 (1983) 65 .

• [Fri98] S. Frixione, Isolated photons in perturbative QCD, Phys. Lett. B429 (1998) 369-374 [hep-ph/9801442].

• [Gao13] J. Gao and M. Guzzi and J. Huston and H. L. Lai and Z. Li and others, The CT10 NNLO Global Analysis of QCD, arXiv:1302.6246.

• [Geh12] T. Gehrmann and S. Hoche and F. Krauss and M. Schonherr and F. Siegert, NLO QCD matrix elements + parton showers in e^+e^- -> hadrons, JHEP 01 (2013) 144 [arXiv:1207.5031].

• [Gle03b] T. Gleisberg and F. Krauss and C. G. Papadopoulos and A. Schalicke and S. Schumann, Cross sections for multi-particle final states at a linear collider, Eur. Phys. J. C34 (2004) 173-180 [hep-ph/0311273].

• [Gle05] T. Gleisberg and F. Krauss and A. Schalicke and S. Schumann and J. C. Winter, Studying W^+ W^- production at the Fermilab Tevatron with Sherpa, Phys. Rev. D72 (2005) 034028 [hep-ph/0504032].

• [Gle07] T. Gleisberg and F. Krauss, Automating dipole subtraction for QCD NLO calculations, Eur. Phys. J. C53 (2008) 501-523 [arXiv:0709.2881].

• [Gle08] T. Gleisberg and S. Hoche, Comix, a new matrix element generator, JHEP 12 (2008) 039 [arXiv:0808.3674].

• [Gle08b] T. Gleisberg and S. Hoche and F. Krauss and M. Schonherr and S. Schumann and F. Siegert and J. Winter, Event generation with Sherpa 1.1, JHEP 02 (2009) 007 [arXiv:0811.4622].

• [Glu91] M. Gluck and E. Reya and A. Vogt, Parton structure of the photon beyond the leading order, Phys. Rev. D45 (1992) 3986-3994 .

• [Glu91a] M. Gluck and E. Reya and A. Vogt, Photonic parton distributions, Phys. Rev. D46 (1992) 1973-1979 .

• [Got82] T. D. Gottschalk, A realistic model for e^+e^- annihilation including parton bremsstrahlung effects, Nucl. Phys. B214 (1983) 201 .

• [Got83] T. D. Gottschalk, An improved description of hadronization in the QCD cluster model for e^+e^- annihilation, Nucl. Phys. B239 (1984) 349 .

• [Got86] T. D. Gottschalk and D. A. Morris, A new model for hadronization and e^+e^- annihilation, Nucl. Phys. B288 (1987) 729 .

• [Hag05] K. Hagiwara and others, Supersymmetry simulations with off-shell effects for the CERN LHC and an ILC, Phys. Rev. D73 (2006) 055005 [hep-ph/0512260].

• [Ham02] A. van Hameren and C. G. Papadopoulos, A hierarchical phase space generator for QCD antenna structures, Eur. Phys. J. C25 (2002) 563-574 [hep-ph/0204055].

• [Ham09a] K. Hamilton and P. Richardson and J. Tully, A modified CKKW matrix element merging approach to angular-ordered parton showers, JHEP 11 (2009) 038 [arXiv:0905.3072].

• [Ham10] K. Hamilton and P. Nason, Improving NLO-parton shower matched simulations with higher order matrix elements, JHEP 06 (2010) 039 [arXiv:1004.1764].

• [Ham12] K. Hamilton and P. Nason and G. Zanderighi, MINLO: Multi-scale improved NLO, arXiv:1206.3572.

• [Hoc06] S. Hoche and others, Matching Parton Showers and Matrix Elements, hep-ph/0602031.

• [Hoe09] S. Hoche and F. Krauss and S. Schumann and F. Siegert, QCD matrix elements and truncated showers, JHEP 05 (2009) 053 [arXiv:0903.1219].

• [Hoe09a] S. Hoche and S. Schumann and F. Siegert, Hard photon production and matrix-element parton-shower merging, Phys. Rev. D81 (2010) 034026 [arXiv:0912.3501].

• [Hoe10] S. Hoche and F. Krauss and M. Schonherr and F. Siegert, NLO matrix elements and truncated showers, JHEP 08 (2011) 123 [arXiv:1009.1127].

• [Hoe11] S. Hoche and F. Krauss and M. Schonherr and F. Siegert, A critical appraisal of NLO+PS matching methods, JHEP 09 (2012) 049 [arXiv:1111.1220].

• [Hoe12a] S. Hoche and F. Krauss and M. Schonherr and F. Siegert, QCD matrix elements + parton showers: The NLO case, JHEP 04 (2013) 027 [arXiv:1207.5030].

• [Hoe12b] S. Hoche and M. Schonherr, Uncertainties in NLO + parton shower matched simulations of inclusive jet and dijet production, Phys.Rev. D86 (2012) 094042 arXiv:1208.2815.

• [Hoe13] S. Hoeche and J. Huang and G. Luisoni and M. Schoenherr and J. Winter, Zero and one jet combined NLO analysis of the top quark forward-backward asymmetry, Phys.Rev. D88 (2013) 014040 arXiv:1306.2703.

• [Hoe14a] S. Hoeche and F. Krauss and M. Schoenherr, Uncertainties in MEPS@NLO calculations of h+jets, arXiv:1401.7971.

• [Hoe14b] S. Hoeche and F. Krauss and S. Pozzorini and M. Schoenherr and J. M. Thompson and K. C. Zapp, Triple vector boson production through Higgs-Strahlung with NLO multijet merging, arXiv:1403.7516.

• [Hoe14c] S. Hoeche and S. Kuttimalai and S. Schumann and F. Siegert, Beyond Standard Model calculations with Sherpa, Eur. Phys. J. C75 (2015) 135 [arXiv:1412.6478].

• [Hoe15] S. Hoeche and S. Prestel, The midpoint between dipole and parton showers, arXiv:1506.05057.

• [Jad93] S. Jadach and Z. Was and R. Decker and J. H. Kuhn, The tau decay library TAUOLA: Version 2.4, Comput. Phys. Commun. 76 (1993) 361-380 .

• [Kan00] A. Kanaki and C. G. Papadopoulos, HELAC: A package to compute electroweak helicity amplitudes, Comput. Phys. Commun. 132 (2000) 306-315 [hep-ph/0002082].

• [Kle85] R. Kleiss and W. J. Stirling, Spinor techniques for calculating p pbar -> W^±/Z^0 + jets, Nucl. Phys. B262 (1985) 235-262.

• [Kle85a] R. Kleiss and W. J. Stirling and S. D. Ellis, A new Monte Carlo treatment of multiparticle phase space at high energies, Comput. Phys. Commun. 40 (1986) 359 .

• [Kle94] R. Kleiss and R. Pittau, Weight optimization in multichannel Monte Carlo, Comput. Phys. Commun. 83 (1994) 141-146 [hep-ph/9405257].

• [Kra01] F. Krauss and R. Kuhn and G. Soff, AMEGIC++ 1.0: A Matrix Element Generator In C++, JHEP 02 (2002) 044 [hep-ph/0109036].

• [Kra02] F. Krauss, Matrix elements and parton showers in hadronic interactions, JHEP 0208 (2002) 015 [hep-ph/0205283].

• [Kra04] F. Krauss and A. Schalicke and S. Schumann and G. Soff, Simulating W/Z + jets production at the Tevatron, Phys. Rev. D70 (2004) 114009 [hep-ph/0409106].

• [Kra05] F. Krauss and A. Schalicke and S. Schumann and G. Soff, Simulating W/Z + jets production at the CERN LHC, Phys. Rev. D72 (2005) 054017 [hep-ph/0503280].

• [Kra10] F. Krauss and T. Laubrich and F. Siegert, Simulation of hadron decays in Sherpa.

• [Lai10] H. L. Lai and M. Guzzi and J. Huston and Z. Li and P. M. Nadolsky and others, New parton distributions for collider physics, Phys.Rev. D82 (2010) 074024 [arXiv:1007.2241].

• [Lan01] D. J. Lange, The EvtGen particle decay simulation package, Nucl. Instrum. Meth. A462 (2001) 152-155 .

• [Lav05] N. Lavesson and L. Lonnblad, W+jets matrix elements and the dipole cascade, JHEP 07 (2005) 054 [hep-ph/0503293].

• [Lav08] N. Lavesson and L. Lonnblad, Extending CKKW-merging to one-loop matrix elements, JHEP 12 (2008) 070 [arXiv:0811.2912].

• [Lep80] G. P. Lepage, VEGAS - An Adaptive Multi-dimensional Integration Program.

• [Lon01] L. Lonnblad, Correcting the colour-dipole cascade model with fixed order matrix elements, JHEP 05 (2002) 046 [hep-ph/0112284].

• [Lon11] L. Lonnblad and S. Prestel, Matching Tree-Level Matrix Elements with Interleaved Showers, JHEP 03 (2012) 019 [arXiv:1109.4829].

• [Lon12a] L. Lonnblad and S. Prestel, Merging Multi-leg NLO Matrix Elements with Parton Showers, arXiv:1211.7278.

• [Lon12b] L. Lonnblad and S. Prestel, Unitarising Matrix Element + Parton Shower merging, arXiv:1211.4827.

• [Lon92] L. Lonnblad, Ariadne version 4: A program for simulation of QCD cascades implementing the colour dipole model, Comput. Phys. Commun. 71 (1992) 15-31 .

• [Mal02a] F. Maltoni and T. Stelzer, MadEvent: automatic event generation with MadGraph, JHEP 02 (2003) 027 [hep-ph/0208156].

• [Man01] M. L. Mangano and M. Moretti and R. Pittau, Multijet matrix elements and shower evolution in hadronic collisions: W bbarb+n-jets as a case study, Nucl. Phys. B632 (2002) 343-362 [hep-ph/0108069].

• [Man02] M. L. Mangano and M. Moretti and F. Piccinini and R. Pittau and A. D. Polosa, ALPGEN a generator for hard multiparton processes in hadronic collisions, JHEP 07 (2003) 001 [hep-ph/0206293].

• [Man06] M. L. Mangano and M. Moretti and F. Piccinini and M. Treccani, Matching matrix elements and shower evolution for top-pair production in hadronic collisions, JHEP 01 (2007) 013 [hep-ph/0611129].

• [Mar01] A. D. Martin and R. G. Roberts and W. J. Stirling and R. S. Thorne, MRST2001: Partons and alpha_s from precise deep inelastic scattering and Tevatron jet data, Eur. Phys. J. C23 (2002) 73-87 [hep-ph/0110215].

• [Mar04] A. D. Martin and R. G. Roberts and W. J. Stirling and R. S. Thorne, Parton distributions incorporating QED contributions, Eur. Phys. J. C39 (2005) 155-161 [hep-ph/0411040].

• [Mar09a] A. D. Martin and W. J. Stirling and R. S. Thorne and G. Watt, Parton distributions for the LHC, Eur. Phys. J. C63 (2009) 189-295 [arXiv:0901.0002].

• [Mar87] G. Marchesini and B. R. Webber, Monte Carlo Simulation of General Hard Processes with Coherent QCD Radiation, Nucl. Phys. B310 (1988) 461 .

• [Mar99] A. D. Martin and R. G. Roberts and W. J. Stirling and R. S. Thorne, Parton distributions and the LHC: W and Z production, Eur. Phys. J. C14 (2000) 133-145 [hep-ph/9907231].

• [Nad08] P. M. Nadolsky and others, Implications of CTEQ global analysis for collider observables, Phys. Rev. D78 (2008) 013004 [arXiv:0802.0007].

• [Nag03] Z. Nagy, Next-to-leading order calculation of three-jet observables in hadron-hadron collisions, Phys. Rev. D68 (2003) 094002 [hep-ph/0307268].

• [Nag05] Z. Nagy and D. E. Soper, Matching parton showers to NLO computations, JHEP 10 (2005) 024 [hep-ph/0503053].

• [Nag06] Z. Nagy and D. E. Soper, A new parton shower algorithm: Shower evolution matching at leading and next-to-leading order level, hep-ph/0601021.

• [Pap05] C. G. Papadopoulos and M. Worek, Multi-parton cross sections at hadron colliders, Eur. Phys. J. C50 (2007) 843-856 [hep-ph/0512150].

• [Rei01] L. Reina and S. Dawson, Next-to-leading order results for t anti-t h production at the Tevatron, Phys.Rev.Lett. 87 (2001) 201804 [hep-ph/0107101].

• [Rei01a] L. Reina and S. Dawson and D. Wackeroth, QCD corrections to associated t anti-t h production at the Tevatron, Phys.Rev. D65 (2002) 053017 [hep-ph/0109066].

• [Rys09] M. G. Ryskin and A. D. Martin and V. A. Khoze, Soft processes at the LHC I: Multi-component model, Eur. Phys. J. C60 (2009) 249-264 [arXiv:0812.2407].

• [Sch07a] S. Schumann and F. Krauss, A parton shower algorithm based on Catani-Seymour dipole factorisation, JHEP 03 (2008) 038 [arXiv:0709.1027].

• [Sch08] M. Schonherr and F. Krauss, Soft photon radiation in particle decays in Sherpa, JHEP 12 (2008) 018 [arXiv:0810.5071].

• [Sjo06] T. Sjostrand and S. Mrenna and P. Skands, PYTHIA 6.4 physics and manual, JHEP 05 (2006) 026 [hep-ph/0603175].

• [Sjo07] T. Sjostrand and S. Mrenna and P. Skands, A brief introduction to PYTHIA 8.1, Comput. Phys. Commun. 178 (2008) 852-867 [arXiv:0710.3820].

• [Sjo87] T. Sjostrand and M. van Zijl, A multiple-interaction model for the event structure in hadron collisions, Phys. Rev. D36 (1987) 2019 .

• [Ste94] T. Stelzer and W. F. Long, Automatic generation of tree level helicity amplitudes, Comput. Phys. Commun. 81 (1994) 357-371 [hep-ph/9401258].

• [Ste10] I. W. Stewart and F. J. Tackmann and W. J. Waalewijn, N-Jettiness: An Inclusive Event Shape to Veto Jets, Phys. Rev. Lett. 105 (2010) 092002 [arXiv:1004.2489].

• [Web83] B. R. Webber, A QCD model for jet fragmentation including soft gluon interference, Nucl. Phys. B238 (1984) 492 .

• [Wha05] M. R. Whalley and D. Bourilkov and R. C. Group, The Les Houches Accord PDFs (LHAPDF) and LHAGLUE, hep-ph/0508110.

• [Win03] J. C. Winter and F. Krauss and G. Soff, A modified cluster-hadronisation model, Eur. Phys. J. C36 (2004) 381-395 [hep-ph/0311085].

• [Wol83] L. Wolfenstein, Parametrization of the Kobayashi-Maskawa Matrix, Phys. Rev. Lett. 51 (1983) 1945 .

• [Yen61] D. R. Yennie and S. C. Frautschi and H. Suura, The Infrared Divergence Phenomena and High-Energy Processes, Ann. Phys. 13 (1961) 379-452 .

• [Zar02] A. F. Zarnecki, CompAZ: Parametrization of the luminosity spectra for the photon collider, Acta Phys. Polon. B34 (2003) 2741-2758 [hep-ex/0207021].

• [Bea03] D. M. Beazley, Automated scientific software scripting with SWIG, Future Generation Computer Systems 19 (2003) 599-609 .

• [Bu03] M. Buschmann and others, Mass Effects in the Higgs-Gluon Coupling: Boosted vs Off-Shell Production, JHEP 1502 (2015) 038 [arXiv:1410.5806].

• [Both17] E. Bothmann and M. Schonherr and F. Krauss, Single top-quark production with Sherpa, [arXiv:1711.02568] .

• [Ball14] R. D. Ball and others, Parton distributions for the LHC Run II, [arXiv:1410.8849] .

• [Dulat15] S. Dulat and others, New parton distribution functions from a global analysis of quantum chromodynamics, [arXiv:1506.07443] .
