Multilevel Feedback Queue Scheduling and Multiple-Processor Scheduling


Multilevel Feedback Queue Scheduling

  • Multilevel feedback queue scheduling allows a process to move between queues.
  • The idea is to separate processes with different CPU-burst characteristics. If a process uses too much CPU time, it will be moved to a lower-priority queue.
  • This scheme leaves I/O-bound and interactive processes in the higher-priority queues. Similarly, a process that waits too long in a lower-priority queue may be moved to a higher-priority queue. This form of aging prevents starvation.
  • For example, consider a multilevel feedback queue scheduler with three queues, numbered 0 to 2.
  • The scheduler first executes all processes in queue 0. Only when queue 0 is empty will it execute processes in queue 1. Similarly, processes in queue 2 will only be executed if queues 0 and 1 are empty.
  • A process that arrives for queue 1 will preempt a process in queue 2. A process in queue 1 will in turn be preempted by a process arriving for queue 0.
  • A process entering the ready queue is put in queue 0. A process in queue 0 is given a time quantum of 8 milliseconds. If it does not finish within this time, it is moved to the tail of queue 1. If queue 0 is empty, the process at the head of queue 1 is given a quantum of 16 milliseconds. If it does not complete, it is preempted and put into queue 2. Processes in queue 2 are run on an FCFS basis, but only when queues 0 and 1 are empty.
  • This scheduling algorithm gives highest priority to any process with a CPU burst of 8 milliseconds or less. Such a process will quickly get the CPU, finish its CPU burst, and go off to its next I/O burst. Processes that need more than 8 but less than 24 milliseconds are also served quickly, although with lower priority than shorter processes. Long processes automatically sink to queue 2 and are served in FCFS order with any CPU cycles left over from queues 0 and 1. (A simulation sketch of this three-queue behavior appears after the remaining bullet points.)
  • In general, a multilevel feedback queue scheduler is defined by the following parameters (a configuration sketch follows this list):

• The number of queues
• The scheduling algorithm for each queue
• The method used to determine when to upgrade a process to a higher-priority queue
• The method used to determine when to demote a process to a lower-priority queue
• The method used to determine which queue a process will enter when that process needs service
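
As a concrete illustration, the five parameters above can be captured in a small configuration object. The sketch below is not part of the original text: the field names, the Python representation, and the aging threshold are all assumptions made for the example, and the instance at the bottom encodes the three-queue scheduler described in this section.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class MLFQConfig:
    # 1. The number of queues.
    num_queues: int
    # 2. The scheduling algorithm for each queue: an integer is a
    #    round-robin quantum in milliseconds, None means FCFS.
    quantum_ms: List[Optional[int]]
    # 3. When to upgrade a process to a higher-priority queue
    #    (an aging rule; the threshold used below is purely illustrative).
    upgrade_after_wait_ms: int
    # 4. When to demote a process to a lower-priority queue
    #    (here: whenever it uses up its full quantum).
    demote_on_quantum_expiry: bool
    # 5. Which queue a process enters when it needs service.
    entry_queue: int

# The three-queue example from this section, expressed as one configuration.
example = MLFQConfig(
    num_queues=3,
    quantum_ms=[8, 16, None],    # queue 2 is FCFS
    upgrade_after_wait_ms=1000,  # illustrative aging threshold
    demote_on_quantum_expiry=True,
    entry_queue=0,
)
```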

  • The definition of a multilevel feedback queue scheduler makes it the most general CPU scheduling algorithm.
  • It can be configured to match a specific system under design.
  • Unfortunately, it also requires some means of selecting values for all the parameters to define the best scheduler.
  • Although a multilevel feedback queue is the most general scheme, it is also the most complex.
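
As a rough illustration of the behavior described above, the following sketch simulates the three-queue configuration (an 8 ms quantum in queue 0, 16 ms in queue 1, FCFS in queue 2). It is a simplified model, not a full implementation: it ignores I/O bursts, new arrivals, preemption between queues, and aging, and only shows how CPU bursts of different lengths end up in different queues.

```python
from collections import deque

QUANTA = [8, 16, None]  # quantum per queue in ms; None means FCFS (run to completion)

def run_mlfq(bursts):
    """bursts maps a process name to its remaining CPU burst in milliseconds."""
    queues = [deque(), deque(), deque()]
    for name in bursts:          # every newly arriving process enters queue 0
        queues[0].append(name)
    clock = 0
    while any(queues):
        # Pick a process from the highest-priority non-empty queue.
        level = next(i for i, q in enumerate(queues) if q)
        name = queues[level].popleft()
        quantum = QUANTA[level]
        slice_ms = bursts[name] if quantum is None else min(quantum, bursts[name])
        clock += slice_ms
        bursts[name] -= slice_ms
        print(f"t={clock:4d} ms: {name} ran {slice_ms:3d} ms in queue {level}")
        if bursts[name] > 0:
            # Quantum expired: demote one level (queue 2 keeps its processes).
            queues[min(level + 1, 2)].append(name)

# Illustrative bursts: a short, a medium, and a long CPU burst.
run_mlfq({"P1": 5, "P2": 30, "P3": 100})
```

With these illustrative bursts, P1 (5 ms) finishes within its first quantum in queue 0, P2 (30 ms) passes through queues 0 and 1 and finishes in queue 2, and P3 (100 ms) sinks to queue 2, matching the behavior described above.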

Multiple-Processor Scheduling

  • If multiple CPUs are available, the scheduling problem is correspondingly more complex. Even within a homogeneous multiprocessor, there are sometimes limitations on scheduling. Consider a system with an I/O device attached to the private bus of one processor: processes wishing to use that device must be scheduled to run on that processor; otherwise, the device would not be available.
  • If several identical processors are available, then load sharing can occur. It would be possible to provide a separate queue for each processor.
  • In this case, however, one processor could be idle, with an empty queue, while another processor was very busy.
  • To prevent this situation, we use a common ready queue. All processes go into one queue and are scheduled onto any available processor.
  • In such a scheme, one of two scheduling approaches may be used. In one approach, each processor is self-scheduling.
  • Each processor examines the common ready queue and selects a process to execute. If we have multiple processors trying to access and update a common data structure, each processor must be programmed very carefully.
  • We must ensure that two processors do not choose the same process and that processes are not lost from the queue; a locking sketch follows this list. The other approach avoids this problem by appointing one processor as scheduler for the other processors, thus creating a master-slave structure.
  • Some systems carry this structure one step further, by having all scheduling decisions, I/O processing, and other system activities handled by a single processor: the master server.
  • The other processors only execute user code. This asymmetric multiprocessing is far simpler than symmetric multiprocessing, because only one processor accesses the system data structures, alleviating the need for data sharing.
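
To illustrate the self-scheduling concern above, the sketch below guards a common ready queue with a lock so that no two processors dispatch the same process and no process is lost. Modeling each processor as a Python thread, and the names used, are assumptions made only for illustration; a real kernel would use its own run-queue structures and locking primitives.

```python
import threading
from collections import deque

ready_queue = deque(f"P{i}" for i in range(10))  # common ready queue
queue_lock = threading.Lock()                    # guards the ready queue

def processor(cpu_id):
    """Each processor self-schedules: it picks the next process itself."""
    while True:
        with queue_lock:             # no two CPUs can pop the same process
            if not ready_queue:
                return               # nothing left to run
            process = ready_queue.popleft()
        # "Run" the selected process outside the critical section.
        print(f"CPU {cpu_id} dispatched {process}")

cpus = [threading.Thread(target=processor, args=(i,)) for i in range(4)]
for t in cpus:
    t.start()
for t in cpus:
    t.join()
```

Holding the lock only while removing a process keeps the critical section short; the master-slave (asymmetric) alternative described above sidesteps the shared-structure problem entirely by letting a single processor make all scheduling decisions.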

Want to discuss real-time scheduling? Visit again…