Contemplation

Sunday, January 04, 2009

Synchronization, Delays and PreEmption in Linux

Preemptible kernel:
Assume you have a while (1) loop in a kernel thread. If you do not have a preemptible kernel, your CPU will freeze once that thread is scheduled in. But if you have a preemptible kernel, your system will keep working, though with reduced throughput.
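As a rough sketch (kernel-module code, not meant to compile standalone; device_has_work() and handle_work() are hypothetical helpers), a polling kernel thread that plays nicely with the scheduler explicitly yields inside its loop:

```c
#include <linux/kthread.h>
#include <linux/sched.h>

/* Illustrative kthread body: on a non-preemptible kernel an unbounded
 * while (1) loop would monopolise the CPU, so the loop yields itself. */
static int poll_thread(void *data)
{
	while (!kthread_should_stop()) {
		if (device_has_work())		/* hypothetical helper */
			handle_work();		/* hypothetical helper */
		cond_resched();	/* give the scheduler a chance to run others */
	}
	return 0;
}
```

With cond_resched() in the loop body, even a non-preemptible kernel gets a scheduling point on every iteration.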


Atomic Context
There are certain contexts from which you just cannot invoke the scheduler (the current process cannot give away the CPU) by sleeping (msleep()), waiting for a semaphore, or waiting for a completion; the only way to delay execution in these contexts is to busy wait. If you try to use any of these rescheduling mechanisms in an atomic context, you are sure to get the "BUG: scheduling while atomic" error.
The idea behind atomic contexts is that you are in the middle of a transaction which must execute atomically (a critical section). There is no way you can yield the processor to another process, lest it interfere with the atomic nature of the transaction.


How do you check whether you are in an atomic context or not? Use in_atomic().

Apart from interrupt handlers, when will I be in an atomic context? Sometimes, in the kernel drivers we write, we implement certain callback functions which are called by the upper-layer subsystems:

Eg. Assume that the network subsystem (the TCP/IP stack) has a packet to transmit. The physical-layer driver, e.g. an Ethernet driver or a USB network gadget driver, will initially register its packet-transmitting routine with the TCP/IP subsystem (net_device->hard_start_xmit).

Now when a packet becomes ready from the upper layers, our driver's transmit function is called. In this particular case, if you print the value of in_atomic(), it will be 1 - indicating that the network subsystem has called your driver's callback in atomic context. Basically what this means is that you cannot attempt to reschedule (delay, wait for a completion or a semaphore - all of these internally call schedule()) within this callback. The only way to wait, if you are very particular, is to busy wait:

unsigned long j1 = jiffies + delay;	/* delay expressed in jiffies */

while (time_before(jiffies, j1))
	cpu_relax();
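Putting it together, such a transmit callback might look roughly like the sketch below (kernel-module code for illustration only; my_hw_queue_packet() is a hypothetical helper, and the exact hook name and signature varied across kernel versions):

```c
#include <linux/netdevice.h>
#include <linux/skbuff.h>
#include <linux/hardirq.h>

/* Illustrative transmit callback, registered with the networking core
 * (in older kernels via net_device->hard_start_xmit). */
static int my_start_xmit(struct sk_buff *skb, struct net_device *dev)
{
	/* The stack calls us in atomic context: */
	printk(KERN_INFO "in_atomic() = %d\n", in_atomic() ? 1 : 0);

	/* So: no msleep(), no wait_for_completion(), no semaphores here.
	 * Hand the packet to the hardware and return. */
	my_hw_queue_packet(dev, skb);	/* hypothetical helper */
	return 0;
}
```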



Spin Locks
This is a synchronization mechanism in which no sleeping is involved. Just like a mutex, a spinlock is a lock with two states, but the difference here is this: suppose the first thread acquires the lock; now if a second thread also attempts to take the lock, it will busy wait (a mutex would have slept).

Spinlocks are, by their nature, intended for use on multiprocessor systems, although a uniprocessor workstation running a preemptive kernel behaves like SMP as far as concurrency is concerned.

If a nonpreemptive uniprocessor system ever went into a spin on a lock, it would spin forever; no other thread would ever be able to obtain the CPU to release the lock. For this reason, spinlock operations on uniprocessor systems without preemption enabled are optimized to do nothing, with the exception of the ones that change the IRQ masking status.

Spinlocks and Atomic Context

Imagine for a moment that your driver acquires a spinlock and goes about its business within its critical section. Somewhere in the middle, your driver loses the processor. Perhaps it has called a function (copy_from_user, say) that puts the process to sleep. Or, perhaps, kernel preemption kicks in, and a higher-priority process pushes your code aside. Your code is now holding a lock that it will not release any time in the foreseeable future. If some other thread tries to obtain the same lock, it will, in the best case, wait (spinning in the processor) for a very long time. In the worst case, the system could deadlock entirely.

Most readers would agree that this scenario is best avoided.
Therefore, the core rule that applies to spinlocks is that any code must, while holding a spinlock, be atomic. It cannot sleep; in fact, it cannot relinquish the processor for any reason except to
service interrupts (and sometimes not even then).
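That rule is easiest to see in code. A minimal sketch (kernel-module code for illustration; struct my_item and add_item() are made up) of a correctly held spinlock looks like this:

```c
#include <linux/spinlock.h>
#include <linux/list.h>

struct my_item {
	struct list_head node;
	int payload;
};

static DEFINE_SPINLOCK(my_lock);
static LIST_HEAD(my_list);

static void add_item(struct my_item *item)
{
	unsigned long flags;

	spin_lock_irqsave(&my_lock, flags);
	/* Critical section: must stay atomic - no copy_from_user(),
	 * no msleep(), no semaphores, nothing that can call schedule(). */
	list_add_tail(&item->node, &my_list);
	spin_unlock_irqrestore(&my_lock, flags);
}
```

The _irqsave variant also masks local interrupts, which matters if the same lock is ever taken from an interrupt handler.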

The kernel preemption case is handled by the spinlock code itself. Any time kernel code holds a spinlock, preemption is disabled on the relevant processor. Even uniprocessor systems must disable preemption in this way to avoid race conditions. That is why proper locking is required even if you never expect your code to run on a multiprocessor machine.

[ certain portions of this article are borrowed from LDD3 ]