Planning and steering numerical experiments that involve many simulations are difficult tasks to automate. We describe how a simulation scheduling tool can help experimenters submit and revoke simulation jobs on the basis of the most up-to-date partial results and resource estimates. We show how ideas such as pre- and post-conditions, interrupt handling, rapid experiment schema creation, and sparse parameter cross-products can be used to make a generalisable and user-friendly scheduling toolset. We describe our prototype in the context of typical long-running computational experiments on a complex networks simulation problem.
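The sparse parameter cross-product idea can be illustrated with a small sketch (Python; the schema fields and function names here are our own illustrative choices, not the toolset's actual API): an experiment schema enumerates the full cross-product of parameter values, and a user-supplied predicate prunes combinations that are not worth simulating.

```python
import itertools

def sparse_cross_product(params, keep=lambda combo: True):
    """Enumerate the cross-product of parameter value lists,
    yielding only the combinations the predicate accepts."""
    names = sorted(params)
    for values in itertools.product(*(params[n] for n in names)):
        combo = dict(zip(names, values))
        if keep(combo):
            yield combo

# Hypothetical schema for a complex-networks experiment.
schema = {"nodes": [100, 1000, 10000], "p_rewire": [0.0, 0.1, 0.5]}

# Sparseness: skip the expensive large-graph, high-rewiring corner.
jobs = list(sparse_cross_product(
    schema, keep=lambda c: not (c["nodes"] == 10000 and c["p_rewire"] == 0.5)))
print(len(jobs))  # 8 of the 9 full cross-product combinations
```

In this sketch the predicate is the only thing a user writes to turn a dense experiment schema into a sparse one; a scheduling tool would then submit one job per surviving combination.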
We observe the spontaneous emergence of spatial tribes in an animat agent model where simple genetic inheritance is supported. Our predator-prey model simulates a flat world of animat agents which breed, move, eat and predate according to priorities encoded in their genotype. Initialising with a random mixture of all possible priority-list genotypes, we find not only that a small fraction of possible genotypes are favoured for survival, but also that distinct spatial patterns of different tribes emerge. We report on the emergent macroscopic features in our model and discuss their corresponding mapping to microscopic animat rules and genotypes. Even a simple gene-reordering mechanism gives rise to complex emergent behaviour.
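The kind of genotype described here can be sketched as follows (our own minimal reconstruction, not the model's actual code): each animat carries a priority list that is a permutation of its basic behaviours, and inheritance occasionally swaps two genes, which is all a gene-reordering mechanism needs.

```python
import random

BEHAVIOURS = ["breed", "move", "eat", "predate"]  # a genotype is a permutation

def random_genotype(rng):
    """Initial population: a random ordering of the behaviour priorities."""
    g = BEHAVIOURS[:]
    rng.shuffle(g)
    return g

def inherit(parent, rng, p_swap=0.05):
    """Child copies the parent's priority list; with small probability
    two genes exchange positions (the gene-reordering mechanism)."""
    child = parent[:]
    if rng.random() < p_swap:
        i, j = rng.sample(range(len(child)), 2)
        child[i], child[j] = child[j], child[i]
    return child

rng = random.Random(42)
parent = random_genotype(rng)
child = inherit(parent, rng)
assert sorted(child) == sorted(BEHAVIOURS)  # still a permutation of the same genes
```

With four behaviours there are only 4! = 24 possible genotypes, which is what makes it striking that selection concentrates the population onto a small fraction of them.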
Abstract

Parallel computer clusters now represent a mainstream resource of commodity compute cycles in many academic departments. We review our experiences over the past 7 years of building and experimenting with compute clusters and speculate about future trends. We focus on scheduling and utilisation and discuss data collected from a complete year of use of one system. We discuss scheduling inefficiencies and the lost or missing cycles that are inevitable when running a multi-user resource.

Keywords: scheduling and resource management; recent historical trends in cluster computing.

1 Introduction

Cluster computing has become a mainstream branch of parallel computing and many organisations report success building cluster systems. Compute clusters are now achieving significant penetration onto the Top 500 Supercomputer list [15]; in the current list, published in November 2004, clusters feature as 58% of the top 500 machines. In fact, five machines in the top 10 are clusters and another three machines are constellations (clusters of clusters). The clusters typically number between 2200 and 4096 processors, connected by Gigabit Ethernet or Myrinet technology. In the June 2004 list the number 3 machine in the world comprised 2200 commodity Apple Macintosh G5 [1] processors. Cluster computing remains an attractive route for cost-conscious university research groups in yielding systems well suited to throughput computing, and a number of quality scheduling software packages are available either freely or at low cost [4]. Those large computer vendors that remain after the big supercomputer vendor shake-out of the 1990s all offer cluster products. Typically these offerings will have well-integrated hardware, based around individual or shared-memory processor systems sold by the vendor as workstations or in other products. Certainly it has been our experience that these systems are easy to install and use, but nevertheless there is still a strong trend towards using very cheap PC systems and a do-it-yourself approach to hardware integration. The so-called Beowulf approach [14] and a collection of shelf-mounted PC boxes connected with off-the-shelf fast Ethernet (and more recently gigabit Ethernets) is still common in university departments. Our research group built and operated Beowulf clusters in Australia and Wales for simulation work [7, 8]. A team in our present department in New Zealand operates a successful Beowulf cluster for bioinformatics and more recently theoretical chemistry work [2]. We have also worked with ad-hoc clusters built from Apple workstations, Alpha workstations and closely integrated clusters built from Sun Microsystems components. We are now in a position to review work over the full lifecycle of Beowulf clusters and consider the perceived advantages and disadvantages in terms of science achieved for resource expended. In this paper we discuss the lifecycle of a Beowulf cluster (section 2) and some operational scheduling issues connected with processor utilisation achieved
We describe a web services system for managing scientific simulation metadata. We show how the use of an appropriate data model based on sets can help define operations to specify, filter, reduce or expand sets of simulation jobs, parameter values or even filenames. We describe a Java-based implementation of our ideas and discuss its implications for remotely managing and monitoring large-scale scientific simulations on parallel and cluster compute systems in a grid environment.
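The set-based data model can be sketched as follows (in Python rather than the Java of the actual implementation; the operation names are our own illustrative labels): each job is a metadata record, a job set is a collection of them, and specify/filter/expand/reduce become plain set-style operations over those records.

```python
# Each simulation job is a metadata record; a job set is just a list of them.
jobs = [
    {"id": f"run-{n}", "nodes": n, "status": s}
    for n, s in [(100, "done"), (1000, "done"), (10000, "queued")]
]

def filter_jobs(jobset, **criteria):
    """Select the subset of jobs whose metadata matches all criteria."""
    return [j for j in jobset if all(j.get(k) == v for k, v in criteria.items())]

def expand(jobset, key, values):
    """Expand each job into variants, one per value of a new parameter."""
    return [{**j, key: v} for j in jobset for v in values]

def reduce_to(jobset, key):
    """Reduce a job set to the set of distinct values of one metadata field."""
    return {j[key] for j in jobset}

done = filter_jobs(jobs, status="done")
print(reduce_to(done, "nodes"))  # {100, 1000}
```

Because every operation maps a job set to a job set (or a plain value set), they compose naturally, which is what makes the model convenient for remote management: a client can ship a short chain of set operations instead of enumerating jobs by hand.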
We report on summer student programmes and activities we have run, or contributed towards, for undergraduate students at universities in the UK, the USA, Australia and New Zealand over the last 16 years. We discuss the aims and objectives of such activities based around bootstrapping research projects, introducing undergraduates to the research process, and seeding new outreach and collaborative processes from research centres. Looking back, we are able to compare outcomes and different approaches in different research cultures in different national settings. We believe these activities are not so hard to initiate at any university and that they deliver some very worthwhile short-, medium- and long-term outcomes for computer science. We summarise our thoughts on such programmes and provide some advice on how to manage similar programmes.
Efficient, scalable remote access to data is a key aspect in wide area metacomputing environments. One of the limitations of current client-server computing models is their inability to create, retain and trade tokens which represent data or services on remote computers, along with the metadata to adequately describe the data or services. Most current client-server software systems require the user to submit all the data inputs that are needed for a remote operation, and after the operation is complete, all the resultant output data is returned to the originating client. Pipelining remote processes requires that data be retained at the remote site to achieve performance on high-latency wide area networks. We introduce the DISCWorld Remote Access Mechanism (DRAM), an integral component of our DISCWorld metacomputing environment, which provides the user and system with a scalable abstraction over remote data and the operations that are possible on the data. We present a formal notation for DRAMs and discuss the implementation and performance of DRAMs compared with traditional client-server systems.
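The token idea can be made concrete with a small sketch (entirely our own illustration, not the DRAM notation or DISCWorld's API): instead of shipping results back after every call, the server returns a lightweight token naming the remote data, and the client passes that token straight into the next operation, so intermediate results never cross the wide area network.

```python
import uuid

class RemoteStore:
    """Toy stand-in for a remote server that retains results between calls."""
    def __init__(self):
        self._data = {}

    def put(self, value):
        token = f"tok-{uuid.uuid4().hex[:8]}"
        self._data[token] = value
        return token

    def run(self, op, *tokens):
        # Resolve input tokens server-side; keep the result remote too.
        args = [self._data[t] for t in tokens]
        token = f"tok-{uuid.uuid4().hex[:8]}"
        self._data[token] = op(*args)
        return token  # only the token (plus metadata) returns to the client

    def fetch(self, token):
        return self._data[token]  # explicit transfer, only when the client asks

store = RemoteStore()
t1 = store.put([3, 1, 2])
t2 = store.run(sorted, t1)            # pipelined: the result stays remote
t3 = store.run(lambda xs: xs[-1], t2)
print(store.fetch(t3))  # 3
```

The contrast with the client-server model described above is that only `fetch` moves bulk data; the two `run` calls in between exchange nothing but short tokens, which is where the latency saving on wide area links comes from.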
Abstract

A microscopic agent formulation is an appealing approach from a simulation perspective for many complex systems involving cooperative behaviour. It is satisfying to construct a detailed localised model of contributing agents and to experiment with a collective to study emergent effects in the overall system without having to build in global heuristics that anticipate known solutions or behaviours. The collective world in which the agents transact their operations can take several different forms, the most general of which is an arbitrary graph. We describe our simulation framework engine for studying cooperative effects amongst agents on graph structures and report on some experiments on path-finder agents that are limited to localised knowledge and heuristics. We explore some consequences of graph connectivity and the interplay between short- and long-range agent spatial knowledge and present some preliminary results on autonomous exploration agents. We also describe ideas and issues for generalised simulation engines for interacting agents on graphs.

Keywords: agent; graph; simulation engine; simulation visualisation; path-finder experiments.
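A path-finder agent limited to localised knowledge can be sketched like this (our own minimal reconstruction in Python, not the framework engine itself): the agent only sees the graph out to a small hop radius, follows the local shortest path when the target is on that horizon, and otherwise wanders, which is where the interplay between short- and long-range spatial knowledge shows up.

```python
import random
from collections import deque

def local_view(graph, start, horizon):
    """BFS out to `horizon` hops: all the agent knows of the graph."""
    seen = {start: None}          # node -> predecessor within the view
    frontier = deque([(start, 0)])
    while frontier:
        node, d = frontier.popleft()
        if d == horizon:
            continue
        for nbr in graph[node]:
            if nbr not in seen:
                seen[nbr] = node
                frontier.append((nbr, d + 1))
    return seen

def step(graph, pos, target, horizon, rng):
    """If the target is visible, step along the local shortest path;
    otherwise wander to a random neighbour."""
    view = local_view(graph, pos, horizon)
    if target in view:
        node = target
        while view[node] != pos:  # walk predecessors back to our position
            node = view[node]
        return node
    return rng.choice(graph[pos])

# A ring of 6 nodes: 0-1-2-3-4-5-0, target 3 starts beyond a 2-hop horizon.
ring = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
pos, rng = 0, random.Random(1)
for _ in range(10):
    pos = step(ring, pos, 3, horizon=2, rng=rng)
    if pos == 3:
        break
print(pos)  # reaches node 3
```

On this ring the agent's first move is blind (node 3 is three hops away), but one step in either direction brings the target onto the 2-hop horizon and the remaining moves are deterministic; shrinking the horizon to 1 would leave the agent wandering much longer, illustrating the knowledge-range trade-off.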