Solution of Assignment Synopsis & Project Dissertation Report



Title Name Amity Solved Assignment MCA 4th Sem For Operating System
University AMITY
Service Type Assignment
Course MCA
Semester Semester-IV Course: MCA
Short Name or Subject Code Operating System
Commerce line item Type Semester-IV Course: MCA
Product Assignment of MCA Semester-IV (AMITY)

Solved Assignment

  Questions :-

Operating System and Data Storage

Note: the questions may not be in serial order; if any question does not match, email me about it.

Assignment A

1. What is an operating system? Discuss the functions of an operating system? Give a general architecture of the operating system. 

2. What are the cooperating processes and how they communicate? Explain by taking a suitable example.

3. Write a note on multi-processor scheduling.

4. What are the characteristics of a good scheduling algorithm? Compare the FCFS and RR algorithms, giving the pros and cons of each with an example.

5. Define deadlock. What are the necessary and sufficient conditions for deadlock to occur?

6. What is the need for Page Replacement? Discuss different page replacement techniques giving a performance comparison?

7. What is the need for memory management? How is the address binding of instructions and data to memory addresses carried out?

8. What is the purpose of the directory structure in a file system? Explain the various types of directory structure with suitable diagrams.

Assignment B

Case Study :

  1. What is the content of the Need matrix?
  2. Is the system in a safe state?
  3. If a request from process P1 arrives for (0, 4, 2, 0), can this request be granted immediately?

Assignment C

Question No: 1

___________ begins at the root and follows a path down to the specified file

  1. Relative path name
  2. Absolute path name
  3. Standalone name
  4. All of the above

Question No: 2

Process State is a part of

  1. Process Control block
  2. Encode
  3. File Allocation Table
  4. None of the above

Question No: 3

Virtual Memory is commonly implemented by __________.

  1. Segmentation
  2. Swapping
  3. Demand Paging
  4. None of the above 

    Question No: 4

A ___________ contains information about the file, including ownership, permissions, and location of the file contents.

  1. File Control Block (FCB)
  2. File
  3. Device drivers
  4. File system

Question No: 5

The operating system of a computer serves as a software interface between the user and the ________.

  1. Hardware
  2. Peripheral
  3. Memory
  4. Screen

Question No: 6

If the disk head is located initially at 32, find the number of disk moves required with FCFS if the disk queue of I/O block requests is 98, 37, 14, 124, 65, 67.

  1. 310
  2. 324
  3. 315
  4. 321

Question No: 7

Multiprogramming systems ________.

  1. Are easier to develop than single programming systems
  2. Execute each job faster
  3. Execute more jobs in the same time
  4. Are used only on large main frame computers

Question No: 8

Cryptography technique is used in ________.

  1. Polling
  2. Job Scheduling
  3. Protection
  4. File Management

Question No: 9

In the ___________ method of data transfer, the participation of the processor is eliminated during data transfer.

  1. Buffering
  2. Caching
  3. Direct Memory Access
  4. Indirect Memory Access

Question No: 10

A thread is a __________ process.

  1. Heavy Weight
  2. Multiprocess
  3. Inter Thread
  4. Light weight

Question No: 11

Which of the following is crucial time while accessing data on the disk?

  1. Seek time
  2. Rotational time
  3. Transmission time
  4. Waiting time

Question No: 12

In ______ OS, the response time is very critical

  1. Multitasking
  2. Batch
  3. Online
  4. Real-time

Question No: 13

Inter process communication can be done through __________.

  1. Mails
  2. Messages
  3. System calls
  4. Traps

Question No: 14

Identify the odd thing in the services of operating system.

  1. Accounting
  2. Protection
  3. Error detection and correction
  4. Dead lock handling

Question No: 15

Which of the following memory allocation schemes suffers from external fragmentation?

  1. Segmentation
  2. Pure demand paging
  3. Swapping
  4. Paging

Question No: 16

Distributed OS works on the ________ principle.

  1. File Foundation
  2. Single system image
  3. Multi system image
  4. Networking image

Question No: 17

Which file system does Windows 95 typically use?

  1. FAT16
  2. FAT32
  3. NTFS
  4. LMFS

Question No: 18

Real time systems are ________.

  1. Primarily used on mainframe computers
  2. Used for monitoring events as they occur
  3. Used for program development
  4. Used for real time interactive users

Question No: 19

The primary job of the operating system of a computer is to ________.

  1. Command Resources
  2. Manage Resources
  3. Provide Utilities
  4. Be user friendly

Question No: 20

Consider the two statements.

(i) In a network operating system, users access remote resources in the same manner as local resources.

(ii) In a distributed operating system, users can access remote resources either by logging into the appropriate remote machine or by transferring data from the remote machine to their own machine. Which of the statements is true?

  1. (i) true, (ii) false
  2. (ii) true, (i) false
  3. Both (i) and (ii) false
  4. Both (i) and (ii) true

Question No: 21

Information about a process is maintained in a _________.

  1. Stack
  2. Translation Lookaside Buffer
  3. Process Control Block
  4. Program Control Block

Question No: 22

Using the priority scheduling algorithm, find the average waiting time for the following set of processes, given in the format Process : Burst Time : Priority.

P1 : 10 : 3 ,

P2 : 1 : 1 ,

P3 : 2 : 4 ,

P4 : 1 : 5 ,

P5 : 5 : 2.

  1. 8 milliseconds
  2. 8.2 milliseconds
  3. 7.75 milliseconds
  4. 3 milliseconds

Question No: 23

The operating system manages ________.

  1. Memory
  2. Processor
  3. Disk and I/O devices
  4. All of the above

Question No: 24

The hardware mechanism that enables a device to notify the CPU is called _____.

  1. Polling
  2. Interrupt
  3. System Call
  4. None of the above

Question No: 25

Virtual memory is __________.

  1. An extremely large main memory
  2. An extremely large secondary memory
  3. An illusion of extremely large main memory
  4. A type of memory used in super computers

Question No: 26

A major problem with priority scheduling is _________.

  1. Definite blocking
  2. Starvation
  3. Low priority
  4. None of the above

Question No: 27

A routine is not loaded until it is called. All routines are kept on disk in a relocatable load format. The main program is loaded into memory and executed. This type of loading is called ________.

  1. Static loading
  2. Dynamic loading
  3. Dynamic linking
  4. Overlays

Question No: 28

Which of the following is not an advantage of multiprogramming?

  1. Increased throughput
  2. Shorter response time
  3. Decreased operating system overhead
  4. Ability to assign priorities to jobs

Question No: 29

_________ page replacement algorithm suffers from Belady's anomaly.

  1. LRU
  2. MRU
  3. FIFO
  4. LIFO

Question No: 30

Paging _________.

  1. Solves the memory fragmentation problem
  2. Allows modular programming
  3. Allows structured programming
  4. Avoids deadlock

Question No: 31

The term “Operating System” means ________.

  1. A set of programs which controls computer working
  2. The way a computer operator works
  3. Conversion of high-level language in to machine level language
  4. The way a floppy disk drive operates

Question No: 32

The collection of processes on the disk that is waiting to be brought into memory for execution forms the ___________

  1. Ready queue
  2. Device queue
  3. Input queue
  4. Priority queue

Question No: 33

Mutual exclusion

  1. if one process is in a critical region others are excluded
  2. prevents deadlock
  3. requires semaphores to implement
  4. is found only in the Windows NT operating system

Question No: 34

Which scheduler controls the degree of multiprogramming?

  1. Short term scheduler
  2. Long term scheduler
  3. Middle term scheduler
  4. None of the above

Question No: 35

In the memory-management technique called paging, physical memory is broken into fixed-sized blocks called ___________.

  1. Pages
  2. Frames
  3. Blocks
  4. Segments

Question No: 36

In the running state

  1. only the process which has control of the processor is found
  2. all the processes waiting for I/O to be completed are found
  3. all the processes waiting for the processor are found
  4. none of the above

Question No: 37

In a multi-threaded environment _______.

  1. Each thread is allocated with new memory from main memory.
  2. Main threads terminate after the termination of child threads.
  3. Every process can have only one thread.
  4. None of the above

Question No: 38

Which of the following statement is not true?

  1. Multiprogramming implies multitasking
  2. Multi-user does not imply multiprocessing
  3. Multitasking does not imply multiprocessing
  4. Multithreading implies multi-user

Question No: 39

In one of the deadlock-prevention methods, we impose a total ordering of all resource types and require that each process request resources in increasing order of enumeration. This violates the _______________ condition of deadlock.

  1. Mutual exclusion
  2. Hold and Wait
  3. Circular Wait
  4. No Preemption

Question No: 40

If all page frames are initially empty, and a process is allocated 3 page frames in real memory and references its pages in the order 1 2 3 2 4 5 2 3 2 4 1 and the page replacement is FIFO, the total number of page faults caused by the process will be __________.

  1. 10
  2. 7
  3. 8
  4. 9
  Answers :-



Assignment A

  1. What is an operating system? Discuss the functions of an operating system? Give a general architecture of the operating system.


The operating system is the core software component of your computer. It performs many functions and is, in very basic terms, an interface between your computer and the outside world. In the section about hardware, a computer is described as consisting of several component parts including your monitor, keyboard, mouse, and other parts. The operating system provides an interface to these parts using what is referred to as "drivers". This is why sometimes when you install a new printer or other piece of hardware, your system will ask you to install more software called a driver.


An operating system performs these services for applications:


In a multitasking operating system where multiple programs can be running at the same time, the operating system determines which applications should run in what order and how much time should be allowed for each application before giving another application a turn.

It manages the sharing of internal memory among multiple applications.

It handles input and output to and from attached hardware devices, such as hard disks, printers, and dial-up ports.

It sends messages to each application or interactive user (or to a system operator) about the status of operation and any errors that may have occurred.

It can offload the management of what are called batch jobs (for example, printing) so that the initiating application is freed from this work.

On computers that can provide parallel processing, an operating system can manage how to divide the program so that it runs on more than one processor at a time.
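These services are requested through system calls. As a small illustrative sketch (using Python's standard os module as a stand-in for the underlying system-call interface; the file name is made up for the example):

```python
import os
import tempfile

# A program never touches the process table or the disk directly; it
# asks the operating system through system-call wrappers such as `os`.
pid = os.getpid()                       # process management: my PID

with tempfile.TemporaryDirectory() as d:
    path = os.path.join(d, "note.txt")
    with open(path, "w") as f:          # file management: create/write
        f.write("hello")
    names = os.listdir(d)               # directory service: list files

print(pid > 0, names)  # True ['note.txt']
```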


 The computer architecture of a computing system defines its attributes as seen by the programs that are executed in that system, that is, the conceptual structure and functional behavior of the machine hardware. Then, the computer architect defines the functions to be executed in the hardware and the protocol to be used by the software in order to exploit such functions. Note that the architecture has nothing to do with the organization of the data flow, the logical design, the physical design, and the performance of any particular implementation in the hardware.

      Hence, by architecture we mean the order in which certain hardware processes are carried out by the OS; it has nothing to do with the logical software flow of the computer.

Simple view

An Operating System is the layer between the hardware and the software.


An Operating System is responsible for the following functions

  • Device management using device drivers
  • Process management using processes and threads
  • Inter-process communication
  • Memory management
  • File systems

In addition, all operating systems come with a set of standard utilities. The utilities allow common tasks to be performed such as

  • being able to start and stop processes
  • being able to organise the set of available applications
  • organise files into sets such as directories
  • view files and sets of files
  • edit files
  • rename, copy, delete files
  • communicate between processes


The kernel of an operating system is the part responsible for all other operations. When a computer boots up, it goes through some initialisation functions, such as checking memory. It then loads the kernel and switches control to it. The kernel then starts up all the processes needed to communicate with the user and the rest of the environment (e.g. the LAN).

The kernel is always loaded into memory, and kernel functions always run, handling processes, memory, files and devices.

The traditional structure of a kernel is a layered system, such as UNIX. In this, all layers are part of the kernel, and each layer can talk to only a few other layers. Application programs and utilities live above the kernel.

The UNIX kernel follows this layered structure.

Most operating systems being built now instead use a microkernel, which minimises the size of the kernel. Many traditional services are made into user-level services. Communication between services is often by an explicit message-passing mechanism.

The major micro-kernel Operating System is Mach. Many others use the concepts of Mach.

Some systems, such as Windows NT, use a mixed approach.






  2. What are cooperating processes and how do they communicate? Explain with a suitable example.



Inter process Communication

  • Processes within a system may be independent or cooperating
  • Cooperating process can affect or be affected by other processes, including sharing data
  • Reasons for cooperating processes:
  • Information sharing
  • Computation speedup
  • Modularity
  • Convenience
  • Cooperating processes need inter process communication (IPC)
  • Two models of IPC
  • Shared memory
  • Message passing

Communications Models

Cooperating Processes

  • Independent process cannot affect or be affected by the execution of another process
  • Cooperating process can affect or be affected by the execution of another process
  • Advantages of process cooperation
  • Information sharing
  • Computation speed-up
  • Modularity
  • Convenience



Inter process Communication (IPC) Facilities

A computer provides many resources that allow processes to communicate with each other. These resources are Inter Process Communication (IPC) facilities. The simplest is the file system on disk. This is a resource that all processes in the system share, and with appropriate file permissions, two otherwise unrelated processes can exchange data. There are three problems with using files to communicate:

  1. Processes can only communicate if they share the same file system. A network such as the Internet opens up possible connections between processes. The network socket is the IPC facility provided by the operating system to allow access for processes through a network.
  2. Because files reside on disk, access to them is VERY slow, so using files as a means of IPC is very slow and inefficient.
  3. Because files are so easily accessed, they make it very difficult to synchronize processes to ensure that their cooperation is not destructive. The Reader/Writer Problem frequently arises through the unintentional activities of processes sharing files.


It is because of the second and third problem that the Kernel provides alternative methods of IPC.

Consider the second problem above: processes share the disk (at least when given permission by the file's owner), but the disk is very slow to access. Why can't processes share main memory instead? They can, but much greater care is needed. Main memory is the primary source of the code and data of an executing process, so it is crucial that the operating system carefully control access to the memory resources of each process. The kernel does this by constructing virtual memory, which blinds a process to all but the memory allocated to it. This powerful protection makes it difficult to share memory between processes: each process sees only its own virtual address space. The operating system must step in if processes are to share data.


An alternative to sharing disk storage is to share main memory, but this requires careful control by the kernel. Each process that wants to use a shared memory segment must first request to be attached to that segment; the kernel then provides it with a virtual address for that segment. In this way multiple processes can share the same piece of main memory, although each process has a different virtual address for the segment. Any change made to the shared memory segment by one process is seen by all processes. The advantage of sharing a main memory segment is that reading and writing data is much easier and much quicker than exchanging data through a file. Still, synchronizing processes to ensure mutual cooperation remains a problem.
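A minimal sketch of attaching to a shared segment, using Python's multiprocessing.shared_memory module. For brevity both handles are opened in one process here, but the attach-by-name step is exactly what a second, unrelated process would do:

```python
from multiprocessing import shared_memory

# Create a 16-byte shared segment, then attach a second handle to it
# by name -- what an unrelated process would do after being told the
# segment's name.
seg = shared_memory.SharedMemory(create=True, size=16)
other = shared_memory.SharedMemory(name=seg.name)   # "second process"

seg.buf[:5] = b"hello"        # writer's view of the memory
msg = bytes(other.buf[:5])    # reader's view of the same memory

other.close()
seg.close()
seg.unlink()                  # free the segment
print(msg)  # b'hello'
```

Note that each handle may map the segment at a different virtual address, yet both see the same physical bytes, as described above.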


There are other forms of IPC the kernel provides that can solve the synchronization problem; these use the operating system itself to enforce synchronization. The operating system can provide its own memory to act as a buffer between processes: this is what it does with message queues. A message queue is a special buffer that the operating system maintains in its own memory. It allows the exchange of data and synchronizes reading and writing to ensure mutual cooperation. The drawback is that message queues are very limited in size and may be too restrictive for the communication needs of processes. An important alternative, which is really at the heart of UNIX, is the pipe. Pipes were implemented in UNIX from the very beginning and allow large quantities of data to pass between processes. Pipes can be unnamed, allowing only related processes to use them, or named, allowing unrelated processes to use them. The operating system allows only one process at a time to read from or write to a pipe, which provides careful synchronization of processes. Still, pipes are a resource with limitations:


  • Pipes are half-duplex: you can read from one end and write to the other, not both on the same end. This limits the kinds of exchanges between processes, and requires that processes agree on a protocol ahead of time if they want bidirectional communication. Imagine a telephone system that required you to use two phones--one for talking and one for listening--located in different rooms. Communicating would be difficult.
  • Pipes are not a buffer: they do not store data. They are a channel of communication that requires both parties to be present to communicate.
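An unnamed pipe can be sketched in a few lines of Python (shown within a single process for brevity; in practice the pipe is created before a fork so that parent and child each keep one end):

```python
import os

# An unnamed pipe: the kernel hands back a read end and a write end.
r, w = os.pipe()
os.write(w, b"ping")     # writer pushes bytes into the pipe
os.close(w)              # closing the write end signals EOF to readers

data = os.read(r, 1024)  # reader drains the pipe
os.close(r)
print(data)  # b'ping'
```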



3. Write a note on multi-processor scheduling.


Multiple-Processor Scheduling

  • When multiple processors are available, then the scheduling gets more complicated, because now there is more than one CPU which must be kept busy and in effective use at all times.
  • Load sharing revolves around balancing the load between multiple processors.
  • Multi-processor systems may be heterogeneous (different kinds of CPUs) or homogeneous (all the same kind of CPU). Even in the latter case there may be special scheduling constraints, such as devices connected via a private bus to only one of the CPUs. This discussion is restricted to homogeneous systems.

Approaches to Multiple-Processor Scheduling

  • One approach to multi-processor scheduling is asymmetric multiprocessing, in which one processor is the master, controlling all activities and running all kernel code, while the others run only user code. This approach is relatively simple, as there is no need to share critical system data.
  • Another approach is symmetric multiprocessing, SMP, where each processor schedules its own jobs, either from a common ready queue or from separate ready queues for each processor.
  • Virtually all modern OSes support SMP, including XP, Win 2000, Solaris, Linux, and Mac OSX.

Processor Affinity

  • Processors contain cache memory, which speeds up repeated accesses to the same memory locations.
  • If a process were to switch from one processor to another each time it got a time slice, the data in the cache ( for that process ) would have to be invalidated and re-loaded from main memory, thereby obviating the benefit of the cache.
  • Therefore SMP systems attempt to keep processes on the same processor, via processor affinity. Soft affinity occurs when the system attempts to keep processes on the same processor but makes no guarantees. Linux and some other OSes support hard affinity, in which a process specifies that it is not to be moved between processors.
  • Main memory architecture can also affect process affinity, if particular CPUs have faster access to memory on the same chip or board than to memory located elsewhere (Non-Uniform Memory Access, NUMA). If a process has an affinity for a particular CPU, it should preferentially be assigned memory storage in "local" fast-access areas.
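The trade-off between affinity and balance can be made concrete with a toy CPU-selection function (the policy and the imbalance threshold below are invented for illustration, not taken from any real scheduler):

```python
# Toy model of soft processor affinity: prefer the CPU a process last
# ran on (warm cache) unless the load imbalance is too large.
def pick_cpu(process, cpu_loads, last_cpu, threshold=1):
    """Return the CPU the process should run on next."""
    preferred = last_cpu.get(process)          # CPU it last ran on
    least = min(cpu_loads, key=cpu_loads.get)  # least-loaded CPU
    if preferred is not None and cpu_loads[preferred] - cpu_loads[least] <= threshold:
        return preferred   # soft affinity: keep the warm cache
    return least           # imbalance too large: migrate

# P1 last ran on CPU 0, but CPU 0 is much busier than CPU 1.
print(pick_cpu("P1", {0: 3, 1: 1}, {"P1": 0}))  # 1 (migrates)
print(pick_cpu("P1", {0: 2, 1: 1}, {"P1": 0}))  # 0 (stays put)
```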

Load Balancing

  • Obviously an important goal in a multiprocessor system is to balance the load between processors, so that one processor won't sit idle while another is overloaded.
  • Systems using a common ready queue are naturally self-balancing, and do not need any special handling. Most systems, however, maintain separate ready queues for each processor.
  • Balancing can be achieved through either push migration or pull migration:
    • Push migration involves a separate process that runs periodically (e.g. every 200 milliseconds) and moves processes from heavily loaded processors onto less loaded ones.
    • Pull migration involves idle processors taking processes from the ready queues of other processors.
    • Push and pull migration are not mutually exclusive.
  • Note that moving processes from processor to processor to achieve load balancing works against the principle of processor affinity, and if not carefully managed, the savings gained by balancing the system can be lost in rebuilding caches. One option is to only allow migration when imbalance surpasses a given threshold.
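Push migration with a threshold, as described above, can be sketched in a few lines (the queue representation and threshold are assumptions for illustration; real balancers also weigh priorities, affinity, and cache state):

```python
# Sketch of push migration: a periodic balancer moves processes from
# the busiest per-CPU ready queue to the idlest one whenever the
# imbalance exceeds a threshold.
def push_migrate(queues, threshold=2):
    busiest = max(queues, key=lambda c: len(queues[c]))
    idlest = min(queues, key=lambda c: len(queues[c]))
    while len(queues[busiest]) - len(queues[idlest]) > threshold:
        queues[idlest].append(queues[busiest].pop())  # migrate one task
    return queues

q = push_migrate({0: ["A", "B", "C", "D", "E"], 1: []})
print(len(q[0]), len(q[1]))  # 3 2
```

With a threshold of 2, the balancer stops once the queues differ by at most two tasks, avoiding needless cache-destroying migrations.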

Multicore Processors

  • Traditional SMP required multiple CPU chips to run multiple kernel threads concurrently.
  • Recent trends are to put multiple CPUs ( cores ) onto a single chip, which appear to the system as multiple processors.
  • Compute cycles can be blocked by the time needed to access memory whenever the needed data is not already present in the cache (cache misses). In Figure 5.10, as much as half of the CPU cycles are lost to memory stall.
  • By assigning multiple kernel threads to a single processor, memory stall can be avoided ( or reduced ) by running one thread on the processor while the other thread waits for memory.
  • A dual-threaded dual-core system has four logical processors available to the operating system. The UltraSPARC T1 CPU has 8 cores per chip and 4 hardware threads per core, for a total of 32 logical processors per chip.
  • There are two ways to multi-thread a processor:
    1. Coarse-grained multithreading switches between threads only when one thread blocks, say on a memory read. Context switching is similar to process switching, with considerable overhead.
    2. Fine-grained multithreading occurs on smaller regular intervals, say on the boundary of instruction cycles. However the architecture is designed to support thread switching, so the overhead is relatively minor.
  • Note that for a multi-threaded multi-core system, there are two levels of scheduling, at the kernel level:
    • The OS schedules which kernel thread(s) to assign to which logical processors, and when to make context switches using algorithms as described above.
    • On a lower level, the hardware schedules logical processors on each physical core using some other algorithm.
      • The UltraSPARC T1 uses a simple round-robin method to schedule the 4 logical processors ( kernel threads ) on each physical core.
      • The Intel Itanium is a dual-core chip which uses a 7-level priority scheme ( urgency ) to determine which thread to schedule when one of 5 different events occurs.

Virtualization and Scheduling

  • Virtualization adds another layer of complexity to scheduling.
  • Typically there is one host operating system operating on "real" processor(s) and a number of guest operating systems operating on virtual processors.
  • The Host OS creates some number of virtual processors and presents them to the guest OSes as if they were real processors.
  • The guest OSes don't realize their processors are virtual, and make scheduling decisions on the assumption of real processors.
  • As a result, interactive and especially real-time performance can be severely compromised on guest systems. The time-of-day clock will also frequently be off.




  4. What are the characteristics of a good scheduling algorithm? Compare the FCFS and RR algorithms, giving the pros and cons of each with an example.


Different CPU scheduling algorithms have different properties, and the choice of a particular algorithm may favour one class of processes over another. In choosing which algorithm to use in a particular situation, we must consider the properties of the various algorithms. Many criteria have been suggested for comparing CPU scheduling algorithms. Which characteristics are used for comparison can make a substantial difference in which algorithm is judged to be best. The criteria include the following:


CPU Utilization. We want to keep the CPU as busy as possible.

Throughput. If the CPU is busy executing processes, then work is being done. One measure of work is the number of processes that are completed per time unit, called throughput. For long processes, this rate may be one process per hour; for short transactions, it may be 10 processes per second.


Turnaround time. From the point of view of a particular process, the important criterion is how long it takes to execute that process. The interval from the time of submission of a process to the time of completion is the turnaround time. Turnaround time is the sum of the periods spent waiting to get into memory, waiting in the ready queue, executing on the CPU, and doing I/O.


Waiting time. The CPU scheduling algorithm does not affect the amount of time during which a process executes or does I/O; it affects only the amount of time that a process spends waiting in the ready queue. Waiting time is the sum of the periods spent waiting in the ready queue.


Response time. In an interactive system, turnaround time may not be the best criterion. Often, a process can produce some output fairly early and can continue computing new results while previous results are being output to the user. Thus, another measure is the time from the submission of a request until the first response is produced. This measure, called response time, is the time it takes to start responding, not the time it takes to output the response. The turnaround time is generally limited by the speed of the output device.


     It is desirable to maximize CPU utilization and throughput and to minimize turnaround time, waiting time, and response time. In most cases, we optimize the average measure. However, under some circumstances, it is desirable to optimize the minimum or maximum values rather than the average. For example, to guarantee that all users get good service, we may want to minimize the maximum response time. Investigators have suggested that, for interactive systems, it is more important to minimize the variance in the response time than to minimize the average response time. A system with reasonable and predictable response time may be considered more desirable than a system that is faster on the average but is highly variable. However, little work has been done on CPU-scheduling algorithms that minimize variance.


First Come First Served (FCFS) scheduling algorithm: This is a non-preemptive scheduling algorithm. Processes are kept in a queue and executed one by one; whenever a new process arrives, it is placed at the end of the queue. When the running process is blocked, the next process in the queue runs, and the blocked process (once it is ready again) is placed at the end of the queue. FCFS is usually used in batch systems. Its advantage is that it is very easy to implement; its disadvantage is that, since it is non-preemptive, CPU time can be wasted in some situations.


Round Robin (RR) scheduling algorithm: This is a preemptive scheduling algorithm in which every process is treated as equal. Each process is assigned a fixed time interval to run, known as the quantum. Processes wait in a queue; after a process has used its quantum, it is placed at the end of the queue and the next process in the queue is run. Round Robin is used on personal computers and servers.
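The waiting-time difference between the two policies can be checked with a short simulation (the burst times below are the classic textbook example; all processes are assumed to arrive at time 0):

```python
def fcfs_waiting(bursts):
    # Under FCFS, each process waits for the sum of the bursts of all
    # processes ahead of it (everyone arrives at time 0).
    waits, elapsed = [], 0
    for b in bursts:
        waits.append(elapsed)
        elapsed += b
    return waits

def rr_waiting(bursts, quantum):
    # Simulate round robin: a process runs for at most one quantum,
    # then rejoins the tail of the queue if work remains.
    remaining = list(bursts)
    finish = [0] * len(bursts)
    queue = list(range(len(bursts)))
    clock = 0
    while queue:
        i = queue.pop(0)
        run = min(quantum, remaining[i])
        clock += run
        remaining[i] -= run
        if remaining[i] > 0:
            queue.append(i)
        else:
            finish[i] = clock
    # waiting time = turnaround - burst; turnaround = finish time here
    return [finish[i] - bursts[i] for i in range(len(bursts))]

bursts = [24, 3, 3]
print(fcfs_waiting(bursts))   # [0, 24, 27] -> average 17
print(rr_waiting(bursts, 4))  # [6, 4, 7]   -> average ~5.67
```

With one long process ahead of two short ones, FCFS makes the short processes wait the full 24 units, while RR (quantum 4) cuts the average waiting time by roughly a factor of three, at the cost of more context switches.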


Scheduling Policy Comparison

Now that we have some tools and terminology, we can begin our analysis of each policy. Let’s jump in.

For each policy we are going to aim to answer 4 questions.

  1. Are all types of processes (long, short, I/O bound, processor bound) treated fairly?
  2. What are the effects on response time?
  3. Does the policy have high or low throughput?
  4. Is starvation possible, and if so, in what case?

I am using the same example and numbers from my first post. The diagrams from that initial post may still be a helpful reference here.

First-come-first-served (FCFS)

FCFS is a non-preemptive policy that selects and runs a process until completion based on FIFO.








[Table: Arrival Time, Service Time, and Finish Time for each process under FCFS; the numeric data did not survive in this copy.]
Are all types of processes treated fairly?

No, they are not. Longer processes fare better than shorter processes, especially when a short process arrives just after a long one. This can be observed in the NTAT data: a short process, process E, arrives just after a long process, process D, and process E's NTAT is 6.00, more than twice that of any other process.

Processor-bound processes are favored over I/O-bound processes.** Suppose we have a collection of processor-bound and I/O-bound processes. When a processor-bound process is running, all I/O-bound processes will have to wait. Some of these I/O-bound processes will be in a blocked state, but could move to the ready state even while the processor-bound process is running once they receive their required input or finish their output operation. When the processor-bound process finally finishes executing, the I/O-bound process will execute only to become quickly blocked by I/O operations again.

**I despise this terminology, but you will encounter it everywhere, so I’m sticking to convention. An I/O-bound process is one that will have to wait on I/O operations (e.g. a user inserting a disc, a user entering a password) and is therefore limited by the speed at which the I/O requests are handled. A processor-bound process is limited only by the speed of the processor—the process would go faster if the processor could go faster.

What are the effects on response time?

There will be high response times when there is a large variance in process execution times: long processes followed by short processes cause significant delays for those shorter processes.

Does the policy have high or low throughput?

In FCFS, throughput is not emphasized as it is in other policies. As a result, FCFS throughput depends only on the lengths of the processes in the system.

Is starvation possible, and if so, in what case?

No, starvation is not possible; every process will eventually be run.


Round Robin (RR)


Round Robin is a policy that uses pre-emption based on a clock interrupt at periodic intervals.








[Table: per-process Arrival Time, Service Time, and Finish Time under RR with q = 1 (not reproduced in this extract)]
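A Round Robin simulation that also handles arrival times is only slightly longer than the FCFS one. This sketch assumes the convention that a process arriving at the same instant another is pre-empted is queued ahead of the pre-empted one; the workload is the same assumed five-process example used for FCFS.

```python
from collections import deque

def rr(processes, quantum):
    """Round Robin with arrival times; returns finish time per process.
    processes: list of (name, arrival, service), sorted by arrival."""
    remaining = {n: s for n, _, s in processes}
    arrivals = deque(processes)
    queue = deque()
    clock = 0
    finish = {}
    while arrivals or queue:
        # admit everything that has arrived by now
        while arrivals and arrivals[0][1] <= clock:
            queue.append(arrivals.popleft()[0])
        if not queue:                      # CPU idle: jump to next arrival
            clock = arrivals[0][1]
            continue
        p = queue.popleft()
        run = min(quantum, remaining[p])
        clock += run
        remaining[p] -= run
        # processes arriving during this slice are queued ahead of p
        while arrivals and arrivals[0][1] <= clock:
            queue.append(arrivals.popleft()[0])
        if remaining[p] == 0:
            finish[p] = clock
        else:
            queue.append(p)                # pre-empted: back of the queue
    return finish

procs = [("A", 0, 3), ("B", 2, 6), ("C", 4, 4), ("D", 6, 5), ("E", 8, 2)]
print(rr(procs, quantum=1))   # short process E now finishes well before B and D
```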

Are all types of processes treated fairly?

Yes, all are treated fairly. However, there is an edge case that is fun to consider. It presents itself when I/O-bound processes typically run for less than a full time quantum before blocking for I/O. The I/O-bound processes cannot use their entire time slice, so processor-bound processes end up getting more execution time.

The solution? I’m not going to go into all the detail, but it’s called Virtual Round Robin (VRR). The main difference is that there are two queues instead of one: an auxiliary queue for processes that were blocked by I/O and a main queue for the processes that were pre-empted. The auxiliary queue gets priority over the main queue when a process needs to be selected. However, a process from the auxiliary queue doesn’t run for longer than the time quantum minus the time it already used in its last run.

Studies have shown that VRR is superior to general RR in terms of fairness.

What are the effects on response time?

The closer a process’s service time is to the time quantum, the fewer round robin cycles and delays it is going to experience. As a result, round robin provides good response time for short processes.

Does the policy have high or low throughput?

The most important thing with round robin is choosing the length of the time quantum. The smaller the time quantum, the greater amount of overhead required in order to switch between the processes. The more time we spend switching between processes, the less time we spend running processes. So throughput (the number of processes that are 100% completed per unit time) may be low if the selected time quantum is too small.

Rule of thumb: your time quantum should be slightly greater than the average needed running time; otherwise, most processes will require two time quanta. If your time quantum is too long, round robin reduces to FCFS.

Is starvation possible, and if so, in what case?

No, starvation is not possible, because all processes are serviced in turn and rotated through equally.



5. Define deadlock. What are the necessary and sufficient conditions for deadlock to occur?


A deadlock is a set of blocked processes, each holding a resource and waiting to acquire a resource held by another process in the set.


How to avoid Deadlocks

Deadlocks can be avoided by preventing at least one of the following four conditions, because all four must hold simultaneously for a deadlock to occur.

  1. Mutual Exclusion

Shared resources such as read-only files do not lead to deadlocks, but resources such as printers and tape drives require exclusive access by a single process.

  2. Hold and Wait

Processes must be prevented from holding one or more resources while simultaneously waiting for one or more others.

  3. No Pre-emption

Pre-empting resources already allocated to a process can, wherever possible, break this condition.

  4. Circular Wait

Circular wait can be avoided if we number all resources and require that processes request resources only in strictly increasing (or decreasing) order.


Handling Deadlock

The points above focus on preventing deadlocks. But what should be done once a deadlock has occurred? The following three strategies can be used to remove a deadlock after it occurs.

  1. Pre-emption

We can take a resource from one process and give it to another. This can resolve the deadlock, but it sometimes causes problems of its own.

  2. Rollback

In situations where deadlock is a real possibility, the system can periodically record the state of each process. When a deadlock occurs, everything is rolled back to the last checkpoint and restarted, with resources allocated differently so that the deadlock does not recur.

  3. Kill one or more processes

This is the simplest way, but it works.

Necessary Conditions


There are four conditions that are necessary to achieve deadlock:


  • Mutual Exclusion - At least one resource must be held in a non-sharable mode; if any other process requests this resource, then that process must wait for the resource to be released.
  • Hold and Wait - A process must be simultaneously holding at least one resource and waiting for at least one resource that is currently being held by some other process.
  • No pre-emption - Once a process is holding a resource (i.e., once its request has been granted), that resource cannot be taken away from that process until the process voluntarily releases it.
  • Circular Wait - A set of processes { P0, P1, P2, . . ., PN } must exist such that every P[i] is waiting for P[(i + 1) % (N + 1)]. (Note that this condition implies the hold-and-wait condition, but it is easier to deal with the conditions if the four are considered separately.)




6. What is the need for Page Replacement? Discuss different page replacement techniques giving a performance comparison.



Page replacement algorithms are the techniques by which an operating system decides which memory pages to swap out (write to disk) when a new page of memory needs to be allocated. Page replacement happens whenever a page fault occurs and no free page can be used for the allocation, either because no pages are available or because the number of free pages is lower than required.

When the page that was selected for replacement and paged out is referenced again, it has to be read in from disk, which requires waiting for I/O completion. This wait determines the quality of the page replacement algorithm: the less time spent waiting for page-ins, the better the algorithm. A page replacement algorithm uses the limited information about page accesses provided by the hardware to choose which pages to replace so as to minimize the total number of page misses, while balancing this against the storage and processor costs of the algorithm itself. There are many different page replacement algorithms. We evaluate an algorithm by running it on a particular string of memory references and computing the number of page faults.

Reference String

The string of memory references is called the reference string. Reference strings are generated artificially or by tracing a given system and recording the address of each memory reference. The latter choice produces a large volume of data, about which we note two things.

  • For a given page size we need to consider only the page number, not the entire address.
  • If we have a reference to a page p, then any immediately following references to page p will never cause a page fault. Page p will be in memory after the first reference; the immediately following references will not fault.
  • For example, consider the following sequence of addresses - 123,215,600,1234,76,96
  • If the page size is 100, the page-number sequence is 1, 2, 6, 12, 0, 0, which reduces to the reference string 1, 2, 6, 12, 0
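The two observations above amount to a small transformation: divide each address by the page size, then drop immediately repeated pages. A minimal sketch:

```python
def reference_string(addresses, page_size):
    """Map raw addresses to page numbers and drop immediate repeats,
    since a back-to-back reference to the same page cannot fault."""
    pages = [a // page_size for a in addresses]
    reduced = [pages[0]]
    for p in pages[1:]:
        if p != reduced[-1]:          # skip immediately repeated pages
            reduced.append(p)
    return reduced

# The example sequence from the text, with page size 100:
print(reference_string([123, 215, 600, 1234, 76, 96], 100))  # [1, 2, 6, 12, 0]
```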

First In First Out (FIFO) algorithm

  • Oldest page in main memory is the one which will be selected for replacement.
  • Easy to implement, keep a list, replace pages from the tail and add new pages at the head.

Optimal Page algorithm

  • An optimal page-replacement algorithm has the lowest page-fault rate of all algorithms. An optimal page-replacement algorithm exists, and has been called OPT or MIN.
  • Replace the page that will not be used for the longest period of time, based on when each page will next be used.

Least Recently Used (LRU) algorithm

  • Page which has not been used for the longest time in main memory is the one which will be selected for replacement.
  • Easy to implement: keep a list and replace pages by looking back in time.

Page Buffering algorithm

  • To let processes start quickly, keep a pool of free frames.
  • On a page fault, select a page to be replaced.
  • Write the new page into a frame from the free pool, mark the page table, and restart the process.
  • Then write the dirty page out to disk and place the frame holding the replaced page in the free pool.

Least frequently Used(LFU) algorithm

  • Page with the smallest count is the one which will be selected for replacement.
  • This algorithm suffers from the situation in which a page is used heavily during the initial phase of a process, but then is never used again.

Most frequently Used(MFU) algorithm

  • This algorithm is based on the argument that the page with the smallest count was probably just brought in and has yet to be used.
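The performance comparison the question asks for can be made concrete by counting page faults for FIFO, LRU, and OPT on the same reference string. This sketch uses a classic 20-reference example string with 3 frames (an assumption for illustration); fault counts include the initial loads.

```python
from collections import OrderedDict

def fifo_faults(refs, frames):
    """FIFO: evict the page that has been resident longest."""
    mem, order, faults = set(), [], 0
    for p in refs:
        if p not in mem:
            faults += 1
            if len(mem) == frames:
                mem.discard(order.pop(0))    # oldest loaded page
            mem.add(p)
            order.append(p)
    return faults

def lru_faults(refs, frames):
    """LRU: evict the page unused for the longest time."""
    mem, faults = OrderedDict(), 0           # insertion order tracks recency
    for p in refs:
        if p in mem:
            mem.move_to_end(p)               # refresh recency on a hit
        else:
            faults += 1
            if len(mem) == frames:
                mem.popitem(last=False)      # least recently used
            mem[p] = True
    return faults

def opt_faults(refs, frames):
    """OPT/MIN: evict the page whose next use is farthest in the future."""
    mem, faults = set(), 0
    for i, p in enumerate(refs):
        if p in mem:
            continue
        faults += 1
        if len(mem) == frames:
            nxt = {q: refs.index(q, i + 1) if q in refs[i + 1:] else float("inf")
                   for q in mem}
            mem.discard(max(nxt, key=nxt.get))
        mem.add(p)
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1]
print(fifo_faults(refs, 3), lru_faults(refs, 3), opt_faults(refs, 3))  # 15 12 9
```

As expected, OPT (which requires knowing the future) sets the lower bound, LRU approximates it well, and FIFO fares worst on this string.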



7. What is the requirement of Memory Management? How is the address binding of instructions and data to memory addresses carried out?


Memory management is the functionality of an operating system that handles or manages primary memory. It keeps track of each and every memory location, whether allocated to some process or free; checks how much memory is to be allocated to processes; decides which process will get memory and when; and tracks whenever memory is freed or unallocated, updating the status accordingly.

Memory management provides protection by using two registers, a base register and a limit register. The base register holds the smallest legal physical memory address and the limit register specifies the size of the range. For example, if the base register holds 300000 and the limit register holds 120900, then the program can legally access all addresses from 300000 through 420899.

Address binding of instructions and data to memory addresses can be done in the following ways:

  • Compile time -- When it is known at compile time where the process will reside, compile-time binding is used to generate absolute code.
  • Load time -- When it is not known at compile time where the process will reside in memory, the compiler generates re-locatable code.
  • Execution time -- If the process can be moved during its execution from one memory segment to another, then binding must be delayed until run time.

Dynamic Loading

In dynamic loading, a routine of a program is not loaded until it is called by the program. All routines are kept on disk in a re-locatable load format. The main program is loaded into memory and executed; other routines, methods, or modules are loaded on request. Dynamic loading makes better use of memory space, and unused routines are never loaded.

Dynamic Linking

Linking is the process of collecting and combining various modules of code and data into an executable file that can be loaded into memory and executed. The operating system can link system-level libraries to a program. When it combines the libraries at load time, the linking is called static linking, and when this linking is done at execution time, it is called dynamic linking.

In static linking, libraries are linked at compile time, so the program code size becomes bigger, whereas in dynamic linking libraries are linked at execution time, so the program code size remains smaller.

Logical versus Physical Address Space

An address generated by the CPU is a logical address, whereas an address actually available on the memory unit is a physical address. A logical address is also known as a virtual address.

Virtual and physical addresses are the same in compile-time and load-time address-binding schemes. Virtual and physical addresses differ in execution-time address-binding scheme.

The set of all logical addresses generated by a program is referred to as a logical address space. The set of all physical addresses corresponding to these logical addresses is referred to as a physical address space.

The run-time mapping from virtual to physical addresses is done by the memory management unit (MMU), which is a hardware device. The MMU uses the following mechanism to convert a virtual address to a physical address.

  • The value in the base register is added to every address generated by a user process, which is treated as an offset at the time it is sent to memory. For example, if the base register value is 10000, then an attempt by the user to use address location 100 will be dynamically relocated to location 10100.
  • The user program deals with virtual addresses; it never sees the real physical addresses.
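The base-register relocation described above, combined with the limit check from the protection section, fits in a few lines. The base and limit values are the illustrative ones used in the text.

```python
def translate(logical, base, limit):
    """Relocate a logical address through the base register, with a
    limit check for protection. Out-of-range addresses trap (here,
    raise an exception), as the MMU would signal an addressing error."""
    if logical >= limit:
        raise MemoryError(f"address {logical} outside limit {limit}")
    return base + logical

print(translate(100, 10000, 120900))   # relocated to 10100
```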


Swapping is a mechanism in which a process can be swapped temporarily out of main memory to a backing store, and then brought back into memory for continued execution.

The backing store is usually a hard disk drive or other secondary storage that is fast to access and large enough to accommodate copies of all memory images for all users. It must be capable of providing direct access to these memory images.

The major time-consuming part of swapping is transfer time, which is directly proportional to the amount of memory swapped. Assume the user process is 100KB in size and the backing store is a standard hard disk with a transfer rate of 1MB per second. The actual transfer of the 100KB process to or from memory will take

100KB / 1000KB per second

= 1/10 second

= 100 milliseconds

Memory Allocation

Main memory usually has two partitions:

  • Low memory -- the operating system resides in this memory.
  • High memory -- user processes are held in high memory.

The operating system uses the following memory allocation mechanisms.





Single-partition allocation

In this type of allocation, the relocation-register scheme is used to protect user processes from each other and from changes to operating-system code and data. The relocation register contains the value of the smallest physical address, whereas the limit register contains the range of logical addresses. Each logical address must be less than the limit register.


Multiple-partition allocation

In this type of allocation, main memory is divided into a number of fixed-size partitions, each of which may contain exactly one process. When a partition is free, a process is selected from the input queue and loaded into the free partition. When the process terminates, the partition becomes available for another process.


As processes are loaded into and removed from memory, the free memory space is broken into little pieces. After some time, processes cannot be allocated memory blocks because the blocks are too small, and the memory blocks remain unused. This problem is known as fragmentation.

Fragmentation is of two types:





External fragmentation

The total memory space is enough to satisfy a request or to hold a process, but it is not contiguous, so it cannot be used.


Internal fragmentation

The memory block assigned to a process is bigger than requested. Some portion of the block is left unused, as it cannot be used by another process.

External fragmentation can be reduced by compaction: shuffling memory contents to place all free memory together in one large block. To make compaction feasible, relocation should be dynamic.


External fragmentation is avoided by using the paging technique. Paging is a technique in which physical memory is broken into fixed-size blocks called frames and logical memory into blocks of the same size called pages (the size is a power of 2, between 512 bytes and 8192 bytes). When a process is to be executed, its corresponding pages are loaded into any available memory frames.

The logical address space of a process can be non-contiguous, and a process is allocated physical memory whenever free memory frames are available. The operating system keeps track of all free frames; it needs n free frames to run a program of size n pages.

The address generated by the CPU is divided into:

  • Page number (p) -- the page number is used as an index into a page table, which contains the base address of each page in physical memory.
  • Page offset (d) -- the page offset is combined with the base address to define the physical memory address.
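The (p, d) split and the page-table lookup can be shown in a short sketch. The 4KB page size and the page-table contents are assumptions for illustration; because the page size is a power of two, the split is just a division and a remainder.

```python
PAGE_SIZE = 4096              # assumed page size: 4KB, a power of two

def split_address(addr):
    """Split a logical address into (page number p, offset d)."""
    return addr // PAGE_SIZE, addr % PAGE_SIZE

def to_physical(addr, page_table):
    """Translate a logical address via a page table that maps
    page number -> frame number."""
    p, d = split_address(addr)
    return page_table[p] * PAGE_SIZE + d   # frame base + offset

# Hypothetical page table: page 0 -> frame 5, page 1 -> frame 2.
print(to_physical(4100, {0: 5, 1: 2}))     # page 1, offset 4 -> 8196
```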

The following figure shows the page table architecture.




Segmentation is a technique to break memory into logical pieces, where each piece represents a group of related information: for example, a data segment or code segment for each process, a data segment for the operating system, and so on. Segmentation can be implemented with or without paging.

Unlike pages, segments have varying sizes, which eliminates internal fragmentation. External fragmentation still exists, but to a lesser extent.

The address generated by the CPU is divided into:

  • Segment number (s) -- the segment number is used as an index into a segment table, which contains the base address of each segment in physical memory and the limit of the segment.
  • Segment offset (o) -- the segment offset is first checked against the limit and then combined with the base address to define the physical memory address.
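Segmented translation, with its limit check, can be sketched the same way. The segment table values here are hypothetical.

```python
# Hypothetical segment table: segment number -> (base, limit)
SEGMENTS = {0: (1400, 1000), 1: (6300, 400)}

def seg_translate(s, offset):
    """Check the offset against the segment limit, then add the base.
    An out-of-range offset traps (here, raises) as an addressing error."""
    base, limit = SEGMENTS[s]
    if offset >= limit:
        raise MemoryError(f"offset {offset} exceeds limit {limit} of segment {s}")
    return base + offset

print(seg_translate(1, 53))   # base 6300 + offset 53 = 6353
```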




8. What is the purpose of directory structure in a file system? Explain various types of directory structure with a suitable diagram.


In computing, a directory structure is the way an operating system's file system and its files are displayed to the user. Files are typically displayed in a hierarchical tree structure.


Directory Structures :

Information in a Device Directory





  • Current length
  • Maximum length
  • Date last accessed (for archival)
  • Date last updated (for dump)
  • Owner ID (who pays)
  • Protection information (discussed later)


Operations Performed on Directory


  • Search for a file
  • Create a file
  • Delete a file
  • List a directory
  • Rename a file
  • Traverse the file system


Organize the Directory (Logically) to Obtain


  • Efficiency – locating a file quickly.
  • Naming – convenient to users.

  1. Two users can have the same name for different files.
  2. The same file can have several different names.

  • Grouping – logical grouping of files by properties (e.g., all Java programs, all games, …)


Single-Level Directory



A single directory for all users.

  • Naming problem
  • Grouping problem


Two-Level Directory

Separate directory for each user.




  • Path name
  • Can have the same file name for different users
  • Efficient searching
  • No grouping capability


Tree-Structured Directories




Efficient searching

Grouping Capability

Current directory (working directory), e.g. /spell/mail/prog

  1. type list
  2. Absolute or relative path name
  3. Creating a new file is done in the current directory.
  4. Delete a file


Creating a new subdirectory is done in the current directory.



Acyclic-Graph Directories

Have shared subdirectories and files.




Two different names (aliasing)

If dict deletes list → dangling pointer.


  • Back pointers, so that we can delete all pointers; variable-size records are a problem.

  1. Back pointers using a daisy-chain organization.
  2. Entry-hold-count solution.


General Graph Directory




How do we guarantee no cycles?

  1. Allow only links to files, not subdirectories.
  2. Garbage collection.
  3. Every time a new link is added, use a cycle-detection algorithm to determine whether it is OK to add it.


Assignment B



  1. What is the content of the Matrix Need?





Need is computed as Max minus Allocation, entry by entry, for resources A, B, C, D:

      A  B  C  D
P0    0  0  0  0
P1    0  7  5  0
P2    1  0  0  2
P3    0  0  2  0
P4    0  6  4  2



  2. Is the system in a safe state?



Yes. Several sequences satisfy the safety requirement (e.g. P0, P2, P1, P3, P4).



  3. If a request from process P1 arrives for (0, 4, 2, 0), can this request be granted immediately?


Pretend that the allocation can be made: the Available vector is (1, 5, 2, 0), and it would change to (1, 1, 0, 0). The next step is to find a safe sequence of processes. Allocation for P1 becomes (1, 4, 2, 0) and Need for P1 becomes (0, 3, 3, 0). One possible safe sequence is P0, P2, P3, P1, P4, so the request can be granted immediately.
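The safety check used in both answers above is the banker's safety algorithm. A minimal sketch follows; note that the Allocation matrix is not reproduced in this excerpt, so the values below are assumed from the standard textbook version of this exercise (they are consistent with the given Need matrix and Available vector).

```python
def is_safe(available, allocation, need):
    """Banker's safety algorithm: return a safe sequence of process
    indices, or None if the state is unsafe."""
    work = list(available)
    finished = [False] * len(need)
    sequence = []
    while len(sequence) < len(need):
        progressed = False
        for i in range(len(need)):
            if not finished[i] and all(n <= w for n, w in zip(need[i], work)):
                # P_i can run to completion and release its allocation
                work = [w + a for w, a in zip(work, allocation[i])]
                finished[i] = True
                sequence.append(i)
                progressed = True
        if not progressed:
            return None          # no runnable process left: unsafe
    return sequence

# Assumed Allocation matrix (not shown in this excerpt); Need as given above.
allocation = [[0, 0, 1, 2], [1, 0, 0, 0], [1, 3, 5, 4], [0, 6, 3, 2], [0, 0, 1, 4]]
need       = [[0, 0, 0, 0], [0, 7, 5, 0], [1, 0, 0, 2], [0, 0, 2, 0], [0, 6, 4, 2]]
print(is_safe([1, 5, 2, 0], allocation, need))   # a safe sequence exists
```

Any sequence the function returns is safe; it need not match the particular sequence quoted in the answer, since a state can admit several safe orderings.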



Assignment C

Question No: 1

___________ begins at the root and follows a path down to the specified file

  1. Relative path name
  2. Absolute path name
  3. Standalone name
  4. All of the above

