Operating System
Unit I
Process Management
- A program in execution is called a process.
- The execution of a process must progress in a sequential fashion.
- In order to complete its task, a process needs the computer's resources.
- A process can be defined as an entity which represents the basic unit of work to be implemented in the system.
- The operating system has to manage all the processes which require the same resource at the same time, in a convenient and efficient way.
Process Management involves:
- Scheduling processes and threads on the CPUs.
- Creating and deleting both user and system processes.
- Suspending and resuming processes.
- Providing mechanisms for process synchronization.
- Providing mechanisms for process communication.
Process Architecture
- A program becomes a process when it is loaded into the memory.
- A process in memory can be divided into 4 sections:
- Stack
- Heap
- Data &
- Text
Stack:
The process Stack holds temporary data such as
- Method/function parameters
- Return addresses
- Local variables
Heap:
- The Heap is memory allocated to a process dynamically during its run time.
Data:
- The Data section holds the global and static variables.
Text:
- The Text section consists of the program code; the current activity is represented by the value of the Program Counter.
Process States
A process state is the condition of the process at a specific instant of time. It also defines the current position of the process.
- The process, from its creation to completion, passes through various states. The minimum number of states is five.
- The process may be in one of the following states during execution.
1. New
A program that is about to be picked up by the OS and loaded into the main memory is called a new process.
2. Ready
Whenever a process is created, it directly enters the ready state, where it waits for the CPU to be assigned. The OS picks new processes from the secondary memory and puts them in the main memory.
The processes which are ready for execution and reside in the main memory are called ready-state processes. There can be many processes present in the ready state.
3. Running
One of the processes from the ready state will be chosen by the OS depending upon the scheduling algorithm. Hence, if we have only one CPU in our system, the number of running processes for a particular time will always be one. If we have n processors in the system then we can have n processes running simultaneously.
4. Block or wait
From the Running state, a process can make the transition to the block or wait state depending upon the scheduling algorithm or the intrinsic behavior of the process.
When a process waits for a certain resource to be assigned, or for input from the user, the OS moves this process to the block or wait state and assigns the CPU to other processes.
5. Completion or termination
When a process finishes its execution, it enters the termination state. All the context of the process (Process Control Block) is deleted and the process is terminated by the operating system.
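The five states and their legal transitions can be sketched as a small lookup table; the state and transition names below are illustrative labels, not OS-defined identifiers.

```python
# Allowed process-state transitions in the five-state model (sketch).
TRANSITIONS = {
    "new":        {"ready"},                           # admitted by the OS
    "ready":      {"running"},                         # dispatched by the scheduler
    "running":    {"ready", "waiting", "terminated"},  # preempted / blocks / exits
    "waiting":    {"ready"},                           # I/O or event completed
    "terminated": set(),                               # PCB deleted, no transitions
}

def can_move(src, dst):
    """Return True if the five-state model permits the transition src -> dst."""
    return dst in TRANSITIONS.get(src, set())

print(can_move("running", "waiting"))  # True: process blocks for I/O
print(can_move("waiting", "running"))  # False: it must pass through ready first
```

Note that a waiting process never goes straight back to running: when its I/O completes it rejoins the ready queue and competes for the CPU again.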
Process Control Block (PCB)
- Every process is represented in the operating system by a process control block, which is also called a task control block.
- The important components of a PCB are:
- Process state: A process can be new, ready, running, waiting, etc.
- Program counter: The program counter holds the address of the next instruction to be executed for that process.
- CPU registers: This component includes accumulators, index and general-purpose registers, and condition-code information.
- CPU scheduling information: This component includes the process priority, pointers to scheduling queues, and various other scheduling parameters.
- Accounting information: It includes the amount of CPU and real time used, job or process numbers, etc.
- Memory-management information: This includes the values of the base and limit registers and the page or segment tables, depending on the memory system used by the operating system.
- I/O status information: This block includes the list of open files, the list of I/O devices allocated to the process, etc.
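A PCB is essentially a record with one field per component listed above. The field names in this sketch are invented for illustration; a real kernel's PCB (e.g. `task_struct` in Linux) holds far more state.

```python
from dataclasses import dataclass, field

@dataclass
class PCB:
    """Toy process control block with one field per textbook component."""
    pid: int
    state: str = "new"                               # process state
    program_counter: int = 0                         # address of next instruction
    registers: dict = field(default_factory=dict)    # CPU registers
    priority: int = 0                                # CPU-scheduling information
    base: int = 0                                    # memory-management: base register
    limit: int = 0                                   # memory-management: limit register
    open_files: list = field(default_factory=list)   # I/O status information
    cpu_time_used: float = 0.0                       # accounting information

pcb = PCB(pid=1, priority=5)
print(pcb.state)  # new
```

Using `default_factory` gives each PCB its own register dictionary and file list, rather than sharing one mutable object across all instances.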
PROCESS SCHEDULING
- The act of determining which process in the ready state should be moved to the running state is known as Process Scheduling.
- Process scheduling aims to keep the CPU busy at all times and to deliver minimum response time for all programs.
- There are 2 types of scheduling:
1. Non-Pre-emptive Scheduling: the currently executing process gives up the CPU voluntarily.
2. Pre-emptive Scheduling: the operating system decides to favour another process, pre-empting the currently executing process.
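The difference can be seen in a toy simulation: under non-pre-emptive first-come-first-served (FCFS) each process keeps the CPU until it finishes, while pre-emptive round-robin takes the CPU back after a fixed quantum. The process names and burst times are made up.

```python
from collections import deque

def fcfs(bursts):
    """Non-pre-emptive: each process runs to completion before the next starts."""
    order = []
    for pid, burst in bursts:
        order.extend([pid] * burst)   # pid occupies 'burst' consecutive CPU ticks
    return order

def round_robin(bursts, quantum):
    """Pre-emptive: the OS reclaims the CPU after 'quantum' ticks."""
    ready = deque(bursts)
    order = []
    while ready:
        pid, remaining = ready.popleft()
        run = min(quantum, remaining)
        order.extend([pid] * run)
        if remaining > run:
            ready.append((pid, remaining - run))  # preempted: back to the ready queue
    return order

jobs = [("A", 3), ("B", 2)]
print(fcfs(jobs))             # ['A', 'A', 'A', 'B', 'B']
print(round_robin(jobs, 1))   # ['A', 'B', 'A', 'B', 'A']
```

With a quantum of 1 the two jobs interleave tick by tick, which shortens B's response time at the cost of extra switching.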
Scheduling Queues
- All processes entering the system are stored in the Job Queue.
- Processes in the Ready state are placed in the Ready Queue.
- Processes waiting for a device to become available are placed in Device Queues. There is a unique device queue for each I/O device.
A new process is initially put in the Ready queue, where it waits until it is selected for execution (or dispatched). Once the process is assigned to the CPU and is executing, one of the following events can occur:
- The process could issue an I/O request and then be placed in the I/O queue.
- The process could create a new subprocess and wait for its termination.
- The process could be removed forcibly from the CPU as a result of an interrupt and be put back in the ready queue.
A process continues this cycle until it
terminates, at which time it is removed from all queues and has its PCB and
resources deallocated.
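The queueing cycle above can be sketched with three plain queues; the event handling is, of course, vastly simpler than in a real kernel, and the process names are invented.

```python
from collections import deque

job_queue = deque(["P1", "P2", "P3"])  # all processes entering the system
ready_queue = deque()
device_queue = deque()                 # a real OS keeps one queue per I/O device

# The long-term scheduler admits jobs into main memory (the ready queue).
while job_queue:
    ready_queue.append(job_queue.popleft())

# Dispatch one process; suppose it then issues an I/O request.
running = ready_queue.popleft()
device_queue.append(running)           # blocked until the device completes

# When the I/O finishes, the process returns to the ready queue.
finished = device_queue.popleft()
ready_queue.append(finished)

print(list(ready_queue))  # ['P2', 'P3', 'P1']
```

P1 rejoins at the back of the ready queue, illustrating why a process that blocks for I/O must wait to be dispatched again.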
Types of Schedulers
There are three types of schedulers available:
- Long Term Scheduler
- Short Term Scheduler
- Medium Term Scheduler
Long Term Scheduler
The long-term scheduler runs less frequently. It decides which programs get into the job queue; from the job queue, it selects processes and loads them into memory for execution. The primary aim of the job scheduler is to maintain a good degree of multiprogramming. An optimal degree of multiprogramming means the average rate of process creation is equal to the average departure rate of processes from the execution memory.
Short Term Scheduler
- This is also known as the CPU Scheduler and runs very frequently. The primary aim of this scheduler is to enhance CPU performance and increase the process execution rate.
Medium Term Scheduler
- This scheduler removes processes from memory (and from active contention for the CPU), and thus reduces the degree of multiprogramming. At some later time, the process can be reintroduced into memory and its execution can be continued where it left off.
- This scheme is called swapping. The process is swapped out, and is later swapped in, by the medium-term scheduler.
Context Switch
- Switching the CPU to another process requires saving the state of the old process and loading the saved state of the new process. This task is known as a Context Switch.
- The context of a process is represented in the Process Control Block (PCB) of the process; it includes the values of the CPU registers, the process state and memory-management information. When a context switch occurs, the kernel saves the context of the old process in its PCB and loads the saved context of the new process scheduled to run.
- Context-switch time is pure overhead, because the system does no useful work while switching. Its speed varies from machine to machine, depending on the memory speed, the number of registers that must be copied, and the existence of special instructions (such as a single instruction to load or store all registers). Typical speeds range from 1 to 1000 microseconds.
- Context switching has become such a performance bottleneck that programmers use new structures (threads) to avoid it whenever and wherever possible.
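The save-and-restore step can be sketched as copying register values into the old process's PCB and reloading them from the new one; here a dictionary stands in for the real CPU register file, and the field names are illustrative.

```python
cpu = {"pc": 0, "acc": 0}   # stand-in for the real CPU registers

def context_switch(cpu, old_pcb, new_pcb):
    """Save the CPU context into old_pcb, then load new_pcb's saved context."""
    old_pcb["saved"] = dict(cpu)       # save the state of the old process
    old_pcb["state"] = "ready"
    cpu.update(new_pcb["saved"])       # load the saved state of the new process
    new_pcb["state"] = "running"

p1 = {"saved": {"pc": 0, "acc": 0}, "state": "running"}
p2 = {"saved": {"pc": 40, "acc": 7}, "state": "ready"}

cpu.update({"pc": 12, "acc": 3})       # P1 has been executing for a while
context_switch(cpu, p1, p2)            # pure overhead: no useful work done here

print(cpu)          # {'pc': 40, 'acc': 7}
print(p1["saved"])  # {'pc': 12, 'acc': 3}
```

After the switch the CPU holds exactly the values P2 was suspended with, and P1's progress is preserved in its PCB for a later dispatch.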
INTER PROCESS COMMUNICATION
- Inter-process communication is the mechanism provided by the operating system that allows processes to communicate with each other.
- This communication could involve a process letting another process know that some event has occurred, or the transferring of data from one process to another.
- Processes can communicate with each other through both:
o Shared Memory
o Message passing
1. Shared Memory:
- Shared memory is a region of memory shared between two or more processes, established in the address space of all the communicating processes.
- Access to this shared memory must be synchronized so that the processes are protected from each other.
- All POSIX systems, as well as Windows operating systems, use shared memory.
Example: Producer-Consumer problem
- There are two processes: the Producer and the Consumer.
- The Producer produces some items and the Consumer consumes those items.
- The two processes share a common space or memory location, known as a buffer, where the items produced by the Producer are stored and from which the Consumer consumes them if needed.
- There are two versions of this problem:
1. The first is known as the unbounded buffer problem, in which the Producer can keep on producing items and there is no limit on the size of the buffer.
2. The second is known as the bounded buffer problem, in which the Producer can produce up to a certain number of items before it starts waiting for the Consumer to consume them.
Solution:
- First, the Producer and the Consumer will share some common memory; then the Producer will start producing items.
- If the number of produced items equals the size of the buffer, the Producer will wait for them to be consumed by the Consumer.
- Similarly, the Consumer will first check for the availability of an item. If no item is available, the Consumer will wait for the Producer to produce it.
- If items are available, the Consumer will consume them.
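The bounded-buffer solution can be sketched with Python threads standing in for the two processes: `queue.Queue` provides a synchronized shared buffer, so the producer blocks when the buffer is full and the consumer blocks when it is empty. The buffer size and item count are arbitrary.

```python
import queue
import threading

buffer = queue.Queue(maxsize=3)   # bounded buffer shared by both "processes"
consumed = []

def producer():
    for item in range(5):
        buffer.put(item)          # blocks while the buffer is full

def consumer():
    for _ in range(5):
        consumed.append(buffer.get())  # blocks while the buffer is empty
        buffer.task_done()

t1 = threading.Thread(target=producer)
t2 = threading.Thread(target=consumer)
t1.start(); t2.start()
t1.join(); t2.join()

print(consumed)  # [0, 1, 2, 3, 4]
```

Real shared-memory IPC would use separate processes (e.g. `multiprocessing.shared_memory`) plus explicit synchronization; `queue.Queue` hides that locking, which is exactly the synchronization requirement noted above.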
2. Message queue:
- A message queue is a linked list of messages stored within the kernel. It is identified by a message queue identifier.
- This method offers communication between single or multiple processes with full-duplex capacity.
- Message queues are quite useful for inter-process communication and are used by most operating systems.
- If two processes p1 and p2 want to communicate with each other, they proceed as follows:
o Establish a communication link (if a link already exists, there is no need to establish it again).
o Start exchanging messages using basic primitives.
We need at least two primitives:
- send(message, destination) or send(message)
- receive(message, host) or receive(message)
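The two primitives can be sketched on top of one queue per destination; the kernel-side details (message queue identifiers, permissions, blocking policies) are omitted, and the process names are hypothetical.

```python
import queue

mailboxes = {}   # sketch: one kernel message queue per destination process

def send(message, destination):
    """Enqueue 'message' on the destination process's message queue."""
    mailboxes.setdefault(destination, queue.Queue()).put(message)

def receive(host):
    """Dequeue the next message sent to 'host' (blocks if none is waiting)."""
    return mailboxes.setdefault(host, queue.Queue()).get()

# p1 and p2 exchange messages through the kernel-managed queues.
send("hello from p1", destination="p2")
send("hi from p2", destination="p1")
print(receive("p2"))  # hello from p1
print(receive("p1"))  # hi from p2
```

Because each direction goes through its own queue, the exchange is full-duplex, matching the message-queue property described above.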
Direct Communication:
- In direct communication, a link is created or established between two communicating processes.
- For every pair of communicating processes, only one link can exist.
Indirect Communication
- Indirect communication can only be established when processes share a common mailbox, and each pair of these processes may share multiple communication links.
- These shared links can be unidirectional or bi-directional.
Need for Interprocess Communication
There are numerous reasons to use inter-process communication for sharing data. Some of the most important are given below:
- Modularity
- Computational speedup
- Privilege separation
- Convenience
- It helps processes to communicate with each other and synchronize their actions.