Real-Time Scheduling – CPU Scheduling

Real-Time Scheduling

  • Real-time computing is divided into two types: hard and soft. Hard real-time systems are required to complete a critical task within a guaranteed amount of time.
  • Generally, a process is submitted along with a statement of the amount of time in which it needs to complete or perform I/O. The scheduler then either admits the process, guaranteeing that the process will complete on time, or rejects the request as impossible. This is known as resource reservation. Such a guarantee requires that the scheduler know exactly how long each type of operating-system function takes to perform, and therefore each operation must be guaranteed to take a maximum amount of time.
  • Such a guarantee is impossible in a system with secondary storage or virtual memory because these subsystems cause unavoidable and unforeseeable variation in the amount of time to execute a particular process.
  • Therefore, hard real-time systems are composed of special-purpose software running on hardware dedicated to their critical process, and lack the full functionality of modern computers and operating systems.
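The resource-reservation idea above amounts to an admission test: a new process is accepted only if every existing guarantee can still be met. As a rough illustration (not the specific scheme the text describes), a minimal utilization-based admission test for periodic tasks under earliest-deadline-first scheduling admits a task only while total CPU utilization stays at or below 1:

```python
def admits(tasks, candidate):
    """Utilization-based admission test for periodic tasks under EDF
    scheduling (an illustrative sketch, not the text's exact scheme).
    Each task is a (worst_case_exec_time, period) pair; the candidate
    is admitted only if total CPU utilization stays at or below 1.0."""
    total = sum(c / t for c, t in tasks + [candidate])
    return total <= 1.0

reserved = [(20, 100), (50, 200)]     # utilization 0.20 + 0.25 = 0.45
print(admits(reserved, (100, 250)))   # 0.45 + 0.40 = 0.85 -> True
print(admits(reserved, (150, 250)))   # 0.45 + 0.60 = 1.05 -> False
```

Note that this test presupposes known worst-case execution times, which is exactly the knowledge the text says is unattainable on hardware with secondary storage or virtual memory.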
  • Soft real-time computing is less restrictive.
  • It requires that critical processes receive priority over less fortunate ones.
  • Although adding soft real-time functionality to a time-sharing system may cause an unfair allocation of resources and may result in longer delays, or even starvation, for some processes, it is at least possible to achieve.
  • The result is a general-purpose system that can also support multimedia, high-speed interactive graphics, and a variety of tasks that would not function acceptably in an environment that does not support soft real-time computing.
  • Implementing soft real-time functionality requires careful design of the scheduler and related aspects of the operating system.
  • First, the system must have priority scheduling, and real-time processes must have the highest priority.
  • The priority of real-time processes must not degrade over time, even though the priority of non-real-time processes may.
  • Second, the dispatch latency must be small. The smaller the latency, the faster a real-time process can start executing once it is runnable.
  • It is relatively simple to ensure that the former property holds. However, ensuring the latter property is much more involved.
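The first property can be pictured with a toy ready queue in which real-time priorities are static while time-sharing priorities decay. The names and the lower-number-is-higher-priority convention below are illustrative assumptions, not a real scheduler API:

```python
class Proc:
    """A runnable process; a lower number means a higher priority
    (an arbitrary convention for this sketch)."""
    def __init__(self, name, priority, realtime=False):
        self.name, self.priority, self.realtime = name, priority, realtime

def age(ready):
    """Degrade only non-real-time priorities over time; real-time
    priorities stay fixed, as the text requires."""
    for p in ready:
        if not p.realtime:
            p.priority += 1

def pick(ready):
    """Dispatch the highest-priority (numerically lowest) process."""
    return min(ready, key=lambda p: p.priority)

ready = [Proc("audio", 0, realtime=True), Proc("editor", 10)]
age(ready)                # editor decays to 11; audio stays at 0
print(pick(ready).name)   # -> audio
```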
  • The problem is that many operating systems, including most versions of UNIX, are forced to wait for either a system call to complete or for an I/O block to take place before doing a context switch. The dispatch latency in such systems can be long, since some system calls are complex and some I/O devices are slow.
  • To keep dispatch latency low, we need to allow system calls to be preemptible.
  • There are several ways to achieve this goal.
  • One is to insert preemption points in long-duration system calls, which check to see whether a high-priority process needs to be run. If so, a context switch takes place and, when the high-priority process terminates, the interrupted process continues with the system call.

  • Preemption points can be placed at only "safe" locations in the kernel, that is, only where kernel data structures are not being modified. Even with preemption points, dispatch latency can be large, because only a few preemption points can be practically added to a kernel.
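A preemption point can be pictured as an explicit check inside a long-running operation. The simulation below uses hypothetical names (`need_resched`, `yield_to_high_priority`), not a real kernel interface:

```python
import threading

need_resched = threading.Event()  # set when a high-priority process becomes runnable

def yield_to_high_priority():
    # Placeholder for the context switch: the high-priority process would
    # run to completion here, after which the system call resumes.
    need_resched.clear()

def long_syscall(items):
    """A long-duration 'system call' with a preemption point after each
    unit of work, i.e. at points where no shared structure is mid-update."""
    results = []
    for item in items:
        results.append(item * 2)   # one safe, self-contained unit of work
        if need_resched.is_set():  # preemption point: safe to switch here
            yield_to_high_priority()
    return results

need_resched.set()              # a high-priority process arrives mid-call
print(long_syscall([1, 2, 3]))  # -> [2, 4, 6]; the call was preempted once
```

The dispatch-latency problem is visible in the sketch: a high-priority arrival waits, at worst, for one full unit of work before the next check, so sparse preemption points mean long waits.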
  • Another method for dealing with preemption is to make the entire kernel preemptible. To ensure correct operation, all kernel data structures must be protected through the use of various synchronization mechanisms.
  • With this method, the kernel can always be preemptible, because any kernel data being updated are protected from modification by the high-priority process. This is the method used in Solaris 2.
  • But what happens if the higher-priority process needs to read or modify kernel data that are currently being accessed by another, lower-priority process? The high-priority process would be left waiting for a lower-priority one to finish. This situation is known as priority inversion. In fact, there could be a chain of processes, all accessing resources that the high-priority process needs.
  • This problem can be solved via the priority-inheritance protocol, in which all these processes (the processes that are accessing resources that the high-priority process needs) inherit the high priority until they are done with the resource in question.
  • When they are finished, their priority reverts to its natural value.
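The protocol can be sketched with a toy lock. This is a deliberately simplified illustration (POSIX exposes the real mechanism through mutexes configured with `PTHREAD_PRIO_INHERIT`); here a larger number means a higher priority, and there is a single resource with no wait queue:

```python
class Proc:
    def __init__(self, name, priority):
        self.name = name
        self.base_priority = self.priority = priority  # 'natural' value

class PIMutex:
    """A lock whose owner inherits the priority of any higher-priority
    waiter, reverting on release (simplified sketch)."""
    def __init__(self):
        self.owner = None

    def acquire(self, proc):
        if self.owner is None:
            self.owner = proc
            return True                             # got the lock
        if proc.priority > self.owner.priority:
            self.owner.priority = proc.priority     # owner inherits waiter's priority
        return False                                # caller must block

    def release(self):
        self.owner.priority = self.owner.base_priority  # revert to natural value
        self.owner = None

low, high = Proc("low", 1), Proc("high", 9)
m = PIMutex()
m.acquire(low)       # low-priority process holds the resource
m.acquire(high)      # high blocks; low now runs at priority 9
print(low.priority)  # -> 9: inversion is bounded, low cannot be starved out
m.release()
print(low.priority)  # -> 1: priority reverts to its natural value
```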
  • In the figure (not reproduced here), we show the makeup of dispatch latency. The conflict phase of dispatch latency has two components:
  1. Preemption of any process running in the kernel
  2. Release by low-priority processes of resources needed by the high-priority process
  • The conflict phase is followed by the dispatch phase: context switching from the current process to the high-priority process.
  • As an example, in Solaris 2, the dispatch latency with preemption disabled is over 100 milliseconds. However, the dispatch latency with preemption enabled is usually reduced to 2 milliseconds.