
Question 2: What is the role of interrupts in Operating Systems? Why are they
important for processing and CPU utilization?

Answer: Interrupts in Operating Systems are signals sent to the processor by
hardware or software to indicate that an event has occurred and requires
attention. The handling of the event is provided by an ISR (interrupt service
routine), and the processor checks for pending interrupts as part of the
Fetch-Decode-Execute cycle. Interrupts are very useful in computers and
especially important for CPU utilization because, rather than polling for input
changes, the CPU is 'interrupted' from its current task when an event occurs
(e.g. a button being pressed): it takes a snapshot of all the CPU registers at
that moment, executes the ISR (the interrupt handler), which determines what
the interrupt is requesting and services it, and once the ISR completes, the
CPU restores the saved state and resumes its original task. The importance of
this mechanism is that it allows interactive computing: interrupts let the end
user or the system interact with the OS in real time and enable process
control. They also matter for batch-oriented computing, since they provide the
ability to halt runaway programs/processes, such as infinite loops or unbounded
recursion, which are very expensive and wasteful.

Question 3: Assume the context of an Operating System that provides access to its
services/library through System Calls. (i) Why are these calls needed in such
Operating Systems? (ii) Illustrate your answer with an example and explain how
such calls function at a low level.

Answer: System Calls are the interface between processes and the kernel. They
are the way for user programs (running in user mode) to request services from
the Operating System, and they are used for Process Control, File Management,
Device Management, Information Management and Communication. In other words,
system calls let a user program ask the OS to do work on its behalf. In
general, systems provide a library or API that sits between normal programs and
the operating system: application developers usually do not invoke system
calls directly, but access them through an application programming interface
(API) whose functions invoke the actual system calls.

(i) System calls are needed in such operating systems for simplicity and
portability: you should not have to write a complex program just to open or
save a file on disk, or to print a document. They are also needed for
protection: you do not want user programs to be able to compromise parts of the
operating system, such as device drivers or other system components.

(ii) System calls are executed by kernel code, which is entered via interrupts
and exceptions. A system call usually proceeds as follows:

 1. Application calls the library wrapper function for the desired system call.

 2. The library function executes the system-call instruction.

 3. The kernel exception handler runs:

• creates a trap frame to save the application program state

• determines that this is a system-call exception

• determines which system call is being requested

• does the work for the requested system call

• restores the application program state from the trap frame

• returns from the exception

 4. The library wrapper function finishes and returns from its call.

 5. The application continues execution.

 

Question 4: The PCB consists of the PID, Process State
Information and Process Control Information. For each of the above, expand and
exemplify each section. What is each used for and what is contained within
those sections and why?

Answer: The main function of the PCB in an operating system is to store the
collection of information about each process. The PCB usually contains an ID
identifying the process, pointers to various locations in the program, the
contents of the registers, flag/switch states, upper/lower memory-bound
pointers for the process, the process priority, the list of open files, and the
I/O status required by the process.

These items fall into three common parts/categories, each holding particular
structures: Process Identification Data (PID), Process State Information and
Process Control Information.

Process Identification Data (PID) includes a unique identifier for the process,
usually an integer value, and, in a multiuser multitasking system, commonly
also identifiers for the parent process, the user, the user group, etc. The
process id is often used to cross-reference OS tables, e.g. to identify which
process is using which I/O devices or memory areas.

Process State Information holds the contents of the processor registers: the
general-purpose registers, the program counter, stack pointers, and the program
status word (condition codes, mode and interrupt bits). This is exactly what
must be saved when the process is interrupted and restored when it resumes, so
that the process can continue as if nothing had happened.

Process Control Information holds the additional data the OS needs to control
and coordinate the process: its scheduling state and priority, the event it may
be waiting on, memory-management information such as the upper/lower bound
pointers, inter-process communication information, privileges, and I/O status
including open files and allocated devices.

 

Question 5: (i) What is the difference between
multiprogramming and time sharing? (ii) Give examples and illustrate them
with respect to performance.

Answer: Multiprogramming is the allocation of more than one concurrent program
to a computer system and its resources. It keeps the CPU and I/O devices busy
by rapidly switching the CPU between several programs, allowing various users
to use them effectively. Time sharing is the sharing of computing resources
among several users at the same time: several terminals are attached to a
single dedicated server with its own CPU, and the commands a time-sharing
system executes have very short time spans. The CPU is therefore assigned to
each terminal for a short period, so a user at a terminal gets the feeling that
she has a CPU dedicated to her behind her terminal. The short period for which
a command executes on a time-sharing system is called a time slice or time
quantum.

The main difference between multiprogramming and time sharing is that
multiprogramming is about effective utilization of CPU time, by allowing
several programs to share the CPU, whereas time sharing is about sharing a
computing facility among several users who want to use it at the same time.
Each user on a time-sharing system gets her own terminal and the impression of
using the CPU alone. In practice, time-sharing systems use the concept of
multiprogramming to share CPU time between multiple users; this improves
efficiency because the CPU would otherwise sit idle while one user's job waits
for I/O or for the user to type.

Question 6: Consider the diagram below, in Figure 2. It
summarizes the transitions among the process states, including two suspended
states. The transitions are represented by arrows, and each arrow is identified
by a pair of states plus a general event. For instance, you may have a
transition such as (New, Ready, Admit). For each transition, provide an
explanation, as well as a list of conditions which result in a state
transition. As an example of a transition description, you may have the
following:

Transition (New, Ready, Admit): The process has just started, being
admitted by the OS. Not having any blocking event, it is ready to execute, so
it is placed in the Ready Queue.

Figure 2. Process state transition
diagram with suspend states

Question 7: What are the primary differences between a thread and a
process?

Answer: The primary differences
between a thread and a process are:

1.     A process is simply a program in execution, or the abstraction of a
running program. For example, a WordPad program being used to edit a document
is a process. Each process has its own code, data, address space and kernel
context (VM structures, descriptor table, etc.); the threads of a process share
that address space and code. Because each process runs in its own separate
address space, processes cannot interfere with one another's memory when
running at the same time.

 

2.     A thread is a part of a
program (process), a unit of execution that runs concurrently with other parts
of the program. For example, while using the WordPad program you can edit one
document and print another at the same time. This is only possible because
multiple threads execute at the same time within the WordPad process: one
thread handles editing the document while another handles printing. Because
they share the process, threads can communicate with each other using methods
like Java's wait(), notify() and notifyAll().

 

Question 8: (i) What are the basic differences between user-level threads and
kernel-level threads? (ii) What are the benefits and drawbacks of having
user-level threads? (iii) Explain the impact on a system of using pure
user-level threads.

Answer: (i) User-level threads are managed by a user-level library; the kernel
is not aware of them and schedules only the enclosing process. Kernel-level
threads, on the other hand, are managed by the OS, so thread operations (e.g.
scheduling) are implemented in kernel code.

(ii) User-level threads are typically fast: creating threads, switching between
threads and synchronizing threads needs only a procedure call. The lack of
cooperation between user-level threads and the kernel is a known disadvantage:
the kernel cannot favor a process that has many threads, and if any thread
blocks, the entire process blocks, so they are a good choice only for
non-blocking tasks. Kernel-level threads can favor thread-heavy processes, and
they can utilize multiprocessor systems by placing threads on different
processors or cores. They are a good choice for processes that block
frequently: if one thread blocks, it does not cause the entire process to
block. Kernel-level threads have disadvantages as well: they are slower than
user-level threads due to the management overhead, and they are not portable
because the implementation is operating-system dependent.

(iii) In a system with pure user-level threads, a multithreaded application
cannot take advantage of multiprocessing. This is because, with ULTs
(User-Level Threads), the kernel assigns one process to only one processor at a
time, so only a single thread within a process can execute at a time. In
effect, we have application-level multiprogramming within a single process.
While this multiprogramming can result in a significant speedup of the
application, some applications would benefit from the ability to execute
portions of code truly simultaneously. Also, many system calls are blocking: as
a result, when a ULT executes a system call, not only is that thread blocked,
but all of the threads within the process are blocked.

Question 9: What is the basic difference between a mutex and semaphore? A binary
semaphore is a semaphore that takes only the value 0 and 1. Would a binary
semaphore be a mutex?

Answer: Both mutexes and semaphores are kernel resources that provide
synchronization services (also called synchronization primitives), and both are
used to solve problems such as producer-consumer. A mutex provides mutual
exclusion: either the producer or the consumer can hold the key (the mutex) and
proceed with its work, so while the producer fills the buffer, the consumer
must wait, and vice versa. A semaphore, on the other hand, is a generalized
mutex, a process-synchronization tool that is usually an integer value. The
basic difference between a mutex and a semaphore is that a semaphore is a
signaling mechanism: the wait() and signal() operations performed on the
semaphore variable indicate whether a process is acquiring or releasing the
resource. A mutex, by contrast, is a locking mechanism: to acquire a resource,
a process must lock the mutex object, and when releasing the resource it must
unlock the mutex object.

A binary semaphore is essentially similar to a mutex, with one significant
difference: the principle of ownership. Ownership is the simple concept that
when a task locks (acquires) a mutex, only that task can unlock (release) it.
If a task tries to unlock a mutex it has not locked (and therefore does not
own), an error condition is encountered and, most importantly, the mutex is not
unlocked. If the mutual-exclusion object does not have ownership then,
regardless of what it is called, it is not a mutex; so a binary semaphore,
which any task may signal, is not a mutex.

Question 10: Consider the following processes, where all variables are global, shared,
and undefined at the start.

P1: a=2 b=4

P2: c=a+b

P3: b=a

(i) What are the variable dependencies? (ii) What is the race condition?
(iii) Using semaphores, write 3 procedures (one for each process) ensuring no
race condition is possible.

(i) P2 depends on the values of a and b written by P1; P3 depends on the value
of a written by P1; and P2 and P3 conflict on b, since P3 writes the b that P2
reads.

(ii) The race condition is on b (and on the initially undefined variables): if
P2 runs before P3, c = 2 + 4 = 6; if P3 runs first, b becomes 2 and c = 4; and
if P2 or P3 runs before P1, they read undefined values. The result depends
entirely on scheduling.

(iii) One solution is to force a fixed order, e.g. P1 then P2 then P3, using
two semaphores initialized to 0: P1 signals the first semaphore after its
assignments, P2 waits on it before computing c and then signals the second
semaphore, and P3 waits on the second before assigning b.

Question 11: A monitor will synchronize with a condition variable controlled by
signal(V) and wait(V). Consider a scheme where a single primitive waitUntil(P)
takes a Boolean predicate as its argument. For example, waitUntil(x<0) or waitUntil(a+b