CPC 2006


Funded by the Vicerreitoría de Investigación of the University of A Coruña and the Ministry of Education and Science of Spain


Dynamic Scheduling of Parallel Tasks in Multiprogrammed Parallel Processing Systems

Arun Kejariwal1, Alexandru Nicolau1 and Constantine D. Polychronopoulos2
1 Center for Embedded Computer Systems, University of California at Irvine, USA
2 Center of Supercomputing Research and Development, University of Illinois at Urbana-Champaign, USA

With the emergence of new programming paradigms such as cluster and grid computing, and with the development of large parallel systems such as BlueGene, there has been a shift from traditional multiprocessing to multiprogramming environments. In a typical multiprogrammed parallel system, several jobs (each consisting of a number of parallel tasks) may run at the same time. In such a scenario, processors are allocated either statically or dynamically to the different jobs; moreover, a processor may be taken away from a task of one job and reassigned to a task of another job. In this context, prior work in scheduling has addressed problems such as the effect of batch processing on multiprocessing systems, minimizing mean response time, and minimizing makespan. However, existing techniques do not account for the effect of systemic variations while scheduling the tasks of an individual application on a multiprogrammed parallel system, which can result in sub-optimal performance or in violated deadlines.

In this paper we present an approach for dynamic scheduling of parallel tasks on multiprogrammed multiprocessor systems. Specifically, we present a self-adapting scheduling technique that is responsive to systemic parameters such as the number of processors currently available. In addition, our technique captures the effect of application-level parameters such as the variability in parallelism and in workload amongst the different tasks. The key characteristic of our approach is the orthogonalization of job-level and task-level scheduling in a multiprogramming environment, which yields high performance from both the system and the user point of view. Furthermore, the unified approach helps achieve efficient processor utilization. Experimental results show the effectiveness of our technique.
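The abstract does not give the authors' algorithm, but the core idea of task-level scheduling that reacts to a changing processor allocation can be sketched with a standard greedy list scheduler that is simply re-run whenever the system grants or reclaims processors. The function name `schedule_tasks` and the example workloads below are illustrative assumptions, not the paper's method:

```python
import heapq

def schedule_tasks(task_workloads, num_processors):
    """Greedy list scheduling: assign each task (largest workload first)
    to the currently least-loaded processor. Returns the task-to-processor
    assignment and the resulting makespan."""
    # Min-heap of (current_load, processor_id)
    heap = [(0.0, p) for p in range(num_processors)]
    heapq.heapify(heap)
    assignment = {}
    for task, work in sorted(task_workloads.items(),
                             key=lambda kv: kv[1], reverse=True):
        load, proc = heapq.heappop(heap)
        assignment[task] = proc
        heapq.heappush(heap, (load + work, proc))
    makespan = max(load for load, _ in heap)
    return assignment, makespan

# One job's parallel tasks with uneven (hypothetical) workloads.
# In a multiprogrammed system the processor allocation varies over
# time, so the scheduler is re-invoked whenever it changes.
tasks = {"t0": 8.0, "t1": 5.0, "t2": 5.0, "t3": 3.0, "t4": 3.0}
for available in (4, 2):  # e.g. two processors reclaimed by another job
    _, makespan = schedule_tasks(tasks, available)
    print(available, makespan)
```

Re-running the mapping on each allocation change is what makes the schedule responsive to systemic variation; the job-level decision of how many processors each job receives remains a separate, orthogonal policy, mirroring the job-level/task-level separation described above.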

