
The uses of kernel threads

A novel approach to multitasking is to assign engine subsystems to run on separate threads. The processor is what actually runs a code stream, while the thread acts as an execution context for the processor.

A kernel thread is a kernel entity, like processes and interrupt handlers; it is the entity handled by the system scheduler. A kernel thread runs in the user-mode environment when executing user functions or library calls; it switches to the kernel-mode environment when executing system calls.

It stores the execution state of the code stream in three basic parts: the stack, which contains private data from execution; the registers, which are saved and restored when the thread is switched out and back in; and the thread control block (TCB).

When the OS schedules a new thread, it saves the registers of the current one and restores the registers of the new one. The new thread starts running exactly where it left off.

It is fairly common for kernel code to create lightweight processes – kernel threads – which perform a certain task asynchronously. To see these threads, run ps ax on a 2.6 kernel and note all of the processes in square brackets at the beginning of the listing. The code which sets up threads has tended to be reimplemented every time a new thread is needed, however, and certain subtle tasks are not always handled well. The current kernel also does not make it easy for the creator of a kernel thread to control that thread's behavior.

A kernel thread is created with the following line of code:

    struct task_struct *kthread_create(int (*threadfn)(void *data),
                                       void *data,
                                       const char *namefmt, ...);

Sometimes, a thread asks the kernel to do something that doesn’t require executing instructions, such as reading a block from the disk or receiving a packet from the network. These calls block inside the kernel.

The kernel starts the operation, puts the thread on a wait queue until the operation completes, and schedules a new thread to run. To the blocked thread, it is as if the call to schedule() only returns when the operation is complete: its stack and registers are maintained in the meantime.

    Because the kernel has full knowledge of all threads, the scheduler may decide to give more time to a process with a large number of threads than to a process with a small number of threads. Kernel-level threads are especially good for applications that frequently block.

Kernel-level threads are, however, comparatively slow and inefficient. For instance, thread operations can be hundreds of times slower than those of user-level threads.

    Since the kernel must manage and schedule threads as well as processes, it requires a full thread control block (TCB) for each thread to maintain information about it. As a result there is significant overhead and increased kernel complexity.

Kernel-level threads make concurrency much cheaper than processes, because there is much less state to allocate and initialize. However, for fine-grained concurrency, kernel-level threads still suffer from too much overhead: thread operations still require system calls. Ideally, we would like thread operations to be as fast as a procedure call. Kernel-level threads also have to be general enough to support the needs of all programmers, languages, runtimes, etc.