Trigger Validation Workshop - 26 September 2007
===============================================

Present:
=======

At CERN: John Baines, Andrea Coccaro, Denis Damazio, Bilge Demirkoz, Simon George, Ricardo Goncalo, Olga Igonkina, Valeria Perez Reale, Jiri Masik, Laura Silva, Giovanni Siragusa, Frank Winklmeier

On the phone: Teresa Fonseca Martin, Julie Kirk, Martha Losada, Allen Mincer, Chris Potter, Margherita Primavera, Diego Rodriguez, David Strom, Long Zhao

Minutes:
=======

The workshop comprised a discussion on the status of the validation effort and on future goals, as well as three tutorials on validation tools and code performance (meeting page: http://indico.cern.ch/conferenceDisplay.py?confId=21514).

The first tutorial was led by Frank and included some very useful advice on improving code speed in the use of STL containers and arithmetic functions (a small illustrative sketch appears below). It then described the use of code and memory profiling tools; PerfMon, Valgrind, Hephaestus, Callgrind and KCachegrind were covered. The wiki can be found at: https://twiki.cern.ch/twiki/bin/view/Atlas/TriggerPerfMonTutorial13028.

The second tutorial, led by Olya, covered everyday validation work using the available automatic tests in ATN and RTT and other validation infrastructure. This was a hands-on tutorial, in which everyone was asked to find a bug from the nightly tests and report it using Savannah. The validation how-to wiki can be found at: https://twiki.cern.ch/twiki/bin/view/Atlas/TriggerValidationHOWTO.

The third tutorial, led by Diego, described the use of standalone DCube. This included step-by-step instructions on how to analyse the output of monitoring histograms used in validation and compare them with a reference.

The discussion session, led by Ricardo, focussed on the current status of the validation activity and on future plans. This included a quick analysis of the experience gained from the 13.0.30 release and of what can be improved in future releases. Each slice was requested to provide one person to verify the status of the nightly releases using the automatic validation tests and report on this using the "Trigger Status in 13.0.x nightlies" wiki: https://twiki.cern.ch/twiki/bin/view/Atlas/TriggerRecipe130xntly. The shift rota can be found here: https://twiki.cern.ch/twiki/bin/view/Atlas/TriggerValidation#Who_is_on_validation_shift_this. It was agreed that the people on shift will require training and guidance. The first source of information when starting a shift week should be the validation how-to (https://twiki.cern.ch/twiki/bin/view/Atlas/TriggerValidationHOWTO). Any further questions can then be sent to the "Trigger Release Validation" Hypernews forum or to Ricardo.

It was discussed that we should try to improve the reporting of automatic test results: ideally, tests should depend only on the code being tested, so that their results are easy to interpret (it should be immediately clear whether a test succeeded or failed). It was noted that the summary page from Simon, which is now in the TriggerTest and TrigAnalysisTest packages, has contributed a lot to this and saved a lot of time and effort in the validation of 13.0.30. Other small changes can be very useful when they make the infrastructure more user-friendly, such as links to ease the navigation between automatic test pages. Everyone was encouraged to run quick tests by hand during code development, as explained here: https://twiki.cern.ch/twiki/bin/view/Atlas/TriggerValidationHOWTO#HOWTO_test_code_changes.
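For concreteness, the following is a minimal sketch, not code taken from Frank's slides, of the kind of STL and arithmetic micro-optimisation discussed in the first tutorial; the function and variable names are invented for illustration:

    #include <cstddef>
    #include <vector>

    // Illustrative only -- not from the tutorial. Shows two generic points
    // of the kind discussed: pre-allocating STL containers and avoiding
    // expensive arithmetic calls for small integer powers.
    std::vector<double> squaredPt(const std::vector<double>& pt)  // pass by
    {                                        // const reference, not by value
      std::vector<double> result;
      result.reserve(pt.size());      // reserve once to avoid reallocations
      for (std::size_t i = 0; i < pt.size(); ++i) {
        result.push_back(pt[i] * pt[i]);  // x*x instead of std::pow(x, 2.)
      }
      return result;
    }

Profiling with the tools covered in the tutorial (for example Valgrind's callgrind tool, viewed in KCachegrind) is the way to find out which changes of this kind actually matter for a given job.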
It was noted that the main problems to address at an earlier stage for the next release were connected with non-trigger-specific functionality, such as AOD writing. There should be a well-maintained set of tests to warn of possible problems as early as possible. In the future, the validation activity needs to cover the online integration, with a view to preparing for real data.

Actions from the discussion:

* Add coding advice from Frank to the HLT coding guidelines wiki: https://twiki.cern.ch/twiki/bin/view/Atlas/TriggerCodingGuidelines
* Everyone (especially Ricardo) to contribute to updating the validation how-to
* Ricardo to add a test to check the TrigChainMoniValidation results (TrigSteer), showing how many events were accepted/failed
* Ricardo to remove the OldConfig tests (the OldConfig python configuration scripts are to be removed soon)
* People on validation shift to check not only trigger tests but also some reconstruction tests from David Rousseau
* Ricardo to disable emails from NICOS on request from slice contact people
* Ricardo to reduce the number of emails sent to the "Trigger Validation Nightly Reports" Hypernews forum: only the summary should be sent there
* Performance comparisons to older releases should be more frequent and standardized (see the sketch after this list)
* Ricardo to talk to Marc-Andre Dufour about developing a regular measurement of trigger rates (this requires running over a very large background sample, and so cannot be done very frequently)
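As a footnote to the DCube tutorial and the action on standardized performance comparisons, the sketch below shows, purely as an illustration, the kind of reference comparison DCube automates: testing one monitoring histogram against its counterpart in a reference file using ROOT. The file names, histogram name and probability threshold are all invented for illustration; real validation work should go through DCube itself.

    #include <cstdio>
    #include "TFile.h"
    #include "TH1.h"

    // Minimal sketch, assuming ROOT: compare one monitoring histogram with
    // its counterpart in a reference file. All names here are hypothetical.
    int main()
    {
      TFile ref("reference.root");    // hypothetical reference output
      TFile cur("monitoring.root");   // hypothetical new monitoring output
      TH1* hRef = dynamic_cast<TH1*>(ref.Get("EtaDistribution"));
      TH1* hCur = dynamic_cast<TH1*>(cur.Get("EtaDistribution"));
      if (!hRef || !hCur) { std::printf("histogram not found\n"); return 1; }
      // Kolmogorov probability that the two distributions are compatible
      const double prob = hCur->KolmogorovTest(hRef);
      std::printf("Kolmogorov probability: %g\n", prob);
      return (prob > 0.05) ? 0 : 1;   // arbitrary pass/fail threshold
    }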