Talk:Fixed-priority pre-emptive scheduling

Untitled

In my opinion it is wrong to claim the advantage of "making sure no task hogs the processor for any time longer than the time slice"; fixed priority means only the highest-priority thread will run.

I agree; "priority" implies that you want some tasks to run first. You have to realize that trying to squeeze too many tasks into a real-time system can get you in trouble. A lot of papers have been written about how to guarantee that all critical tasks will always complete on time, but it may be more useful to advise practitioners to aim for no more than about 50% real-time utilization.
A lot of designers try to cover up problems by leaving interrupts off. This leads to systems with potentially large interrupt latency and, as a consequence, high variance in that latency. Among other problems, this leads to systems that cannot guarantee effectively uniform sampling times, which violates one of the basic assumptions of most digital signal processing algorithms.
--AJim (talk) 17:47, 26 May 2012 (UTC)

Why are we talking about "real-time"? If the system has to allocate the CPU to a task for some time (since there are fewer CPUs than processes/tasks), it's more "time slicing" ... (http://wiki.riteme.site/wiki/Preemption_(computing)#Time_slice ) Zenkutsu (talk) 05:52, 20 March 2013 (UTC)

Perhaps I misunderstand you, but I think you may have taken too narrow a definition of "real time". I think of time slicing in situations where the processor allocation is rotated between some number of very long computations, each with run times much longer than a slice, so that they all appear to be making progress, all at a reduced rate. I think of real time as implying that a computation is being performed in synchronism with events in the real world outside the computer.

Consider what I think is a fairly common design: a digital simulation of an analog controller in an embedded system. It takes samples and provides updates frequently enough to satisfy the anti-aliasing criteria of the analog system. This "task", the servo simulation thread, might never end, but the "process" it supports runs in real time, performs each update on schedule, then waits to be triggered again for the next update. Just as with a priority interrupt system, the thread can be suspended temporarily to allow even more time-critical work to run, and, when it is time for it to perform an update, it can temporarily suspend any less time-critical work. The amount of time it gets is not determined by an independent "time slice" parameter; it is determined by the time it takes to complete the next update, so that it keeps in sync with the outside world (to which it appears to be an analog circuit). The lowest-priority task, or thread, is ultimately the one that gives up some of its allocated time to let this happen. Instead of hardware interrupts, we might have software events, semaphores say, that trigger the updates, although in this example they would be synchronized to a time source or external event of some sort.

By the way, I also think that most systems that employ time slicing only guarantee that a slice is the longest continuous time a computation will get, and that a computation generally also loses control whenever it needs to wait for an external event, such as receiving the next bucket of data from a file.
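
For concreteness, a rough sketch of the kind of update thread I mean, in C; the primitives named here are hypothetical stand-ins, not any particular API:

    /* Periodic real-time update thread: it never ends, but each pass runs
     * one update on schedule and then blocks until it is triggered again. */
    extern void wait_for_tick(void);    /* hypothetical: block until the next sample period */
    extern int  read_sample(void);      /* hypothetical: read the real-world input */
    extern int  compute_update(int x);  /* hypothetical: the control/DSP calculation */
    extern void write_output(int y);    /* hypothetical: drive the output */

    void servo_thread(void)
    {
        for (;;) {
            wait_for_tick();            /* suspend until it is time to update */
            write_output(compute_update(read_sample()));
        }
    }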
--AJim (talk) 20:57, 20 March 2013 (UTC)

advantages of this model

There are many good things about this approach left unsaid, especially when the tasks (threads) not only have fixed priorities but are also bounded in number. If the priorities are fixed, the number of tasks is likely to be fixed as well; this goes unsaid, but there are significant consequential advantages in that case.

Implementation, in this case, can be truly compact. I doubt any other approach can use less memory or less code.

When the number of tasks is bounded, they can be represented as members of a set by assigning them positions in a bit map, so only one bit is needed per task in the queue of tasks ready to run. If there are 7 or fewer tasks, the queue requires only one byte of storage. The sign bit of this queue should always be set and assigned to the idle task, which executes when no real task is ready to run. The idle task is typically just a jump-to-self loop (here: jump here) or else the console/control panel task.
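
A minimal sketch of such a one-byte ready queue in C; the bit assignments and names are only illustrative:

    #include <stdint.h>

    /* One-byte ready queue: one bit per task.  By convention (see below),
     * lower bit positions are higher priorities.  The sign bit is reserved
     * for the idle task and is always set, so the queue is never empty. */
    #define TASK0_BIT 0x01u          /* highest-priority application task */
    #define TASK1_BIT 0x02u
    /* ... up to seven application tasks in bits 0-6 ... */
    #define IDLE_BIT  0x80u          /* sign bit: the idle task */

    static uint8_t ready_queue = IDLE_BIT;   /* the idle task is always ready */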

Task priority can be equated to the bit position in the map. The convention that the right-most (least significant) bit has the highest priority allows selecting the highest-priority task by a simple, parallel operation, in fixed time. And-ing a number with its two's complement isolates the right-most set bit in a register, which, in this convention, represents the highest-priority task in the set.
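
A sketch of that selection step, using the illustrative names from the queue sketch above:

    /* Isolate the right-most (least significant) set bit: and-ing the queue
     * with its two's complement yields the bit of the highest-priority
     * ready task in constant time, e.g. highest_priority(0x86) == 0x02. */
    static uint8_t highest_priority(uint8_t queue)
    {
        return queue & (uint8_t)(0u - queue);   /* queue & -queue */
    }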

The discussion misses the point that tasks may not need to run to completion but may instead be cyclical; that is, they may be allowed to run until they block on I/O or on the need for a result from some other task. This is perhaps a minor point, but the blocking operation can be managed by a semaphore. In a real-time system with dedicated tasks, no time slicing may be needed.

A semaphore must maintain a resource counter and a queue of tasks waiting for the resource. If the resource count is negative, that is, if tasks are waiting for the resource controlled by the semaphore, then the count is bounded by the number of tasks. Because of this, the waiting set can be represented as a bit-map set containing bits for the waiting tasks, just like the ready queue. To minimize storage, the same register that is used as the semaphore resource count can be used as the semaphore waiting-task queue. The sign bit can be used to indicate the state of the register: if negative, it is a queue of waiting tasks; otherwise it is a number indicating the units of resource available. For 7 or fewer tasks this scheme only requires one byte per semaphore. The idle task is never moved from the ready queue, so there can be no confusion in the meaning of the sign bit.
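
One reading of that scheme, as a sketch of wait and signal reusing the illustrative names from the earlier sketches (IDLE_BIT, ready_queue, highest_priority); real code would also disable interrupts around these operations and perform the actual context switches:

    /* One-byte semaphore: if the sign bit is clear, the byte is the count of
     * available resource units; if it is set, the low seven bits are the set
     * of waiting tasks.  The idle task never waits, so the sign bit is
     * unambiguous. */
    static void sem_wait(uint8_t *sem, uint8_t task_bit)
    {
        if (*sem != 0 && !(*sem & IDLE_BIT)) {
            --*sem;                          /* a unit is available: take it */
        } else {
            *sem |= IDLE_BIT | task_bit;     /* flag as waiting set, add this task */
            ready_queue ^= task_bit;         /* caller leaves the ready queue */
            /* ...switch to highest_priority(ready_queue)... */
        }
    }

    static void sem_signal(uint8_t *sem)
    {
        if (*sem & IDLE_BIT) {               /* some task is waiting */
            uint8_t t = highest_priority((uint8_t)(*sem & ~IDLE_BIT));
            *sem ^= t;                       /* remove it from the waiting set */
            if (*sem == IDLE_BIT)
                *sem = 0;                    /* last waiter gone: count is 0 */
            ready_queue ^= t;                /* the waiter becomes ready again */
            /* ...pre-empt if t is higher priority than the caller... */
        } else {
            ++*sem;                          /* nobody waiting: return the unit */
        }
    }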

Transferring a task from one queue to another involves isolating the current highest-priority task bit, as described above, then xor-ing it into both queues. The bit representing any specific task is always in exactly one of the existing queues.
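
As a sketch, again with the illustrative names above, the transfer is just:

    /* Move the highest-priority task in `from` into `to`: isolate its bit,
     * then toggle that bit in both bytes.  Since every task bit is in
     * exactly one queue, the xors clear it in one and set it in the other. */
    static void transfer_highest(uint8_t *from, uint8_t *to)
    {
        uint8_t t = highest_priority(*from);
        *from ^= t;
        *to   ^= t;
    }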

The only other RAM storage required by the scheduler is a place to store a stack pointer for each suspended task. Of course the tasks themselves need stack space and other work space, but this does not count against the scheduler.
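
A sketch of that remaining state, with illustrative names and sizes:

    #define NTASKS 8                 /* seven application tasks plus idle */

    /* The scheduler's only other RAM: one saved stack pointer per task,
     * recorded when the task is suspended and reloaded when it resumes. */
    static void *saved_sp[NTASKS];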

Without going into the details, it should be clear that such a minimal system can be implemented in a small amount of code; experience with several simple microprocessors shows this is typically 100-200 bytes. And this scheme only requires a few bytes of RAM, as outlined above. I doubt any other scheduling approach can use less memory or less code. Also, small code usually results in fast execution and few bugs.

--AJim (talk) 21:14, 25 May 2011 (UTC)