A Dataflow Meta-Computing Framework for Event Processing in the H1 Experiment

Online publication date: 2001-01-11
Linux-based networked PC clusters are replacing both the VME non-uniform direct memory access systems and the SMP shared-memory systems used previously for online event filtering and reconstruction. To allow optimal use of the distributed resources of PC clusters, an open software framework is presently being developed based on a dataflow paradigm for event processing. This framework allows the data of physics events and associated calibration data to be distributed from multiple input sources to multiple computers for processing, and the processed events to be subsequently collected at multiple outputs. The basis of the system is the event repository, essentially a first-in first-out event store which may be read and written in a manner similar to sequential file access. Events are stored in, and transferred between, repositories as suitably large sequences to enable high throughput. Multiple readers can read simultaneously from a single repository to receive event sequences, and multiple writers can insert event sequences into a repository; hence repositories are used for both event distribution and collection. To support synchronisation of the event flow, the repository implements barriers. A barrier must be written by all the writers of a repository before any reader can read the barrier. A reader must read a barrier before it may receive data from behind it. Only after all readers have read the barrier is the barrier removed from the repository. A barrier may also have attached data; in this way calibration data can be distributed to all processing units. The repositories are implemented as multi-threaded CORBA objects in C++, and CORBA is used for all data transfers. Job setup scripts are written in Python, and interactive status and histogram display is provided by a Java program. Jobs run under the PBS batch system, providing shared use of resources for online triggering, offline mass reprocessing and user analysis jobs.
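The repository semantics described above (a FIFO of event sequences, multiple writers and readers, and barriers with attached data) can be sketched in Python, the framework's own job-scripting language. This is a minimal single-process illustration only, not the actual implementation: the real repositories are multi-threaded CORBA objects in C++, and every name here (`EventRepository`, `write_barrier`, and so on) is invented for the example.

```python
import threading
from collections import deque

class Barrier:
    """Marker in the event stream; may carry attached data
    (e.g. calibration constants) delivered to every reader."""
    def __init__(self, data=None):
        self.data = data

class EventRepository:
    """In-process sketch of the repository semantics: a FIFO store of
    event sequences with barrier synchronisation.  A barrier becomes
    visible only once all writers have written it, and is removed only
    after all readers have read it."""

    def __init__(self, n_writers, n_readers):
        self.n_writers = n_writers
        self.n_readers = n_readers
        self.fifo = deque()    # ("events", [...]) or ("barrier", id, Barrier)
        self.pending = {}      # barrier id -> (writers still to write it, data)
        self.seen = {}         # barrier id -> set of readers that have read it
        self.cond = threading.Condition()

    def write_sequence(self, events):
        # Events travel as suitably large sequences for throughput.
        with self.cond:
            self.fifo.append(("events", list(events)))
            self.cond.notify_all()

    def write_barrier(self, barrier_id, data=None):
        # The barrier is queued only when the last of the writers writes it.
        with self.cond:
            remaining, stored = self.pending.get(barrier_id,
                                                 (self.n_writers, None))
            remaining -= 1
            stored = data if stored is None else stored
            if remaining == 0:
                self.pending.pop(barrier_id, None)
                self.fifo.append(("barrier", barrier_id, Barrier(stored)))
                self.cond.notify_all()
            else:
                self.pending[barrier_id] = (remaining, stored)

    def read(self, reader_id):
        """Return the next item for this reader.  An event sequence is
        consumed by exactly one reader; a barrier is handed to every
        reader and removed once the last reader has read it.  A reader
        cannot see past a barrier it has not yet read."""
        with self.cond:
            while True:
                for i, item in enumerate(self.fifo):
                    if item[0] == "barrier":
                        _, bid, barrier = item
                        readers = self.seen.setdefault(bid, set())
                        if reader_id in readers:
                            continue          # already read: may look behind it
                        readers.add(reader_id)
                        if len(readers) == self.n_readers:
                            del self.fifo[i]  # last reader removes the barrier
                            del self.seen[bid]
                        return ("barrier", bid, barrier.data)
                    del self.fifo[i]          # event sequence: consume it
                    return item
                self.cond.wait()              # nothing readable yet

# Two writers, two readers (single-threaded walk-through of the rules).
repo = EventRepository(n_writers=2, n_readers=2)
repo.write_sequence([1, 2])
repo.write_barrier("calib-1", data={"gain": 1.07})  # first writer
repo.write_barrier("calib-1")                       # second writer: queued
repo.write_sequence([3, 4])

first = repo.read("r1")       # ("events", [1, 2])
barrier_r1 = repo.read("r1")  # ("barrier", "calib-1", {"gain": 1.07})
behind = repo.read("r1")      # ("events", [3, 4]): r1 has passed the barrier
barrier_r2 = repo.read("r2")  # r2 must read the barrier first; now removed
```

The walk-through shows the two rules from the abstract: the barrier only appears after both writers have written it, and reader `r2` receives the barrier (with its calibration payload) before anything queued behind it, even though `r1` has already moved on.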