Top Operating System Interview Questions and Answers


The operating system is a software program that enables computer hardware to communicate and operate with computer software. It is the most important part of a computer system; without it, a computer is just a box.



There are two main purposes of an operating system:

  1. It is designed to ensure that a computer system performs well by managing its computational activities.
  2. It provides an environment for the development and execution of programs.



The main types of operating systems are:

1. Batch operating systems

2. Distributed operating systems

3. Timesharing operating systems

4. Multi-programmed operating systems

5. Real-time operating systems



Real-time systems are used when rigid time requirements are placed on the operation of a processor. They have well-defined, fixed time constraints.



The basic functions of an operating system (OS) are essential for the smooth running of a computer. The OS manages the computer's memory, ensuring that each application gets the memory it needs to operate efficiently. It also handles processing tasks by managing the CPU, prioritizing tasks, and allocating resources to ensure that multiple applications can run simultaneously without conflicts. 


The functions of an operating system can be categorized into several key areas:


  • Memory Management: The OS manages the computer’s memory, ensuring that each application has enough memory to run and that different processes do not interfere with each other.
     
  • Process Management: The OS handles the execution of processes, scheduling them, and ensuring that each gets the necessary CPU time to function properly.
     
  • File System Management: It manages files on the disk, organizing them in a way that makes it easy for users and applications to find and use them.
     
  • Device Management: The OS controls and coordinates the use of hardware components like printers, disk drives, and monitors.
     
  • Security and Access Control: The OS protects data and resources by controlling access and ensuring that only authorized users and applications can access sensitive information.
     

By performing these functions, the operating system ensures the efficient operation of a computer, making it a critical component in any computing environment.



It is a useful, memory-saving technique for multiprogrammed time-sharing systems. A reentrant procedure is one in which multiple users can share a single copy of a program during the same period. Reentrancy has two key aspects: the program code cannot modify itself, and the local data for each user process must be stored separately. Thus, the permanent part is the code, and the temporary part is the pointer back to the calling program plus the local variables used by that program. Each execution instance is called an activation; it executes the code in the permanent part but has its own copy of local variables and parameters. The temporary part associated with each activation is the activation record, which is generally kept on the stack.


Note: A reentrant procedure can be interrupted and called by an interrupting program, and still execute correctly on returning to the procedure.
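As a minimal sketch (the function names here are illustrative, not from any standard library), the difference between non-reentrant and reentrant code can be shown like this:

```python
# Non-reentrant: relies on shared, module-level state. If one activation
# is interrupted and the function is entered again, both activations
# mutate the same list.
shared_buffer = []

def non_reentrant_append(x):
    shared_buffer.append(x)
    return shared_buffer

# Reentrant: uses only its parameters and local variables, so any number
# of concurrent activations can safely share this single copy of the code.
def reentrant_sum(values):
    total = 0                # local data: one copy per activation
    for v in values:
        total += v
    return total
```

The reentrant version corresponds to the "permanent part" (code) plus a per-activation "temporary part" (locals on the stack) described above.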



A socket is used to make a connection between two applications; the endpoints of the connection are called sockets.

A socket is a software construct that allows two applications to communicate over a network. It is one endpoint of a two-way communication link. The other endpoint is another socket. The two sockets are identified by their IP addresses and port numbers. When two sockets connect, they create a socket connection. This connection allows the two applications to exchange data.
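As a hedged sketch in Python, two sockets on the loopback interface form one connection, with each endpoint identified by an (IP address, port) pair; the echo logic is illustrative only:

```python
import socket
import threading

def run_echo_server(server_sock):
    conn, addr = server_sock.accept()   # addr = the client endpoint (IP, port)
    data = conn.recv(1024)
    conn.sendall(data)                  # echo the bytes back to the client
    conn.close()

# One endpoint: a listening socket bound to an (IP, port) pair.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))           # port 0: let the OS pick a free port
server.listen(1)
host, port = server.getsockname()

t = threading.Thread(target=run_echo_server, args=(server,))
t.start()

# The other endpoint: a client socket; connecting creates the socket connection.
client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect((host, port))
client.sendall(b"hello")
reply = client.recv(1024)
client.close()
t.join()
server.close()
```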



The kernel is the core and most important part of a computer operating system; it provides basic services for all other parts of the OS.


There are basically five types of Kernels as given below:

Monolithic Kernel: In this kernel architecture, all system services are packaged into a single module, which leads to poor maintainability and a huge kernel size.

MicroKernel: It follows a modular architecture. Maintainability is easier with this model because only the concerned module has to be altered and loaded for a given function. This model also keeps the ever-growing code size of the kernel in check.

Hybrid Kernel: It combines the monolithic and microkernel approaches, keeping some services in kernel space for performance while others run as separate modules (e.g., the kernels of Windows NT and macOS).

Nano Kernel: It provides only the bare minimum of hardware abstraction; almost all services run outside the kernel.

Exo Kernel: It exposes hardware resources directly to applications, leaving resource-management policies to application-level libraries.



  1. Manage Resources: Control hardware like CPU, memory, and storage.
  2. Run Programs: Handle multiple tasks simultaneously.
  3. Organize Files: Manage file storage and retrieval.
  4. Interact with Devices: Handle peripherals like keyboards and printers.
  5. Provide Security: Protect against threats and unauthorized access.
  6. Offer User Interface: Enable user interaction with the system.
  7. Support Networking: Facilitate communication in networked environments.
  8. Handle Errors: Manage and prevent system errors.
  9. Enable Updates: Allow for software enhancements and security patches.



Deadlock occurs in a system when two or more processes cannot proceed because each is waiting for a resource held by another process, which is also waiting for a resource held by another process in the cycle. This situation leads to a deadlock, where no progress can be made by any processes involved.

For example, consider two trains on a single-track railway line, each waiting for the other to move before they can proceed. If neither train moves, they are deadlocked. Similarly, in operating systems, deadlock happens when multiple processes hold resources and wait for others to release the necessary resources, creating a circular dependency that halts all progress.



The following are the four conditions:

  • Mutual Exclusion Condition: At least one resource must be held in a non-sharable mode.
  • Hold and Wait Condition: A process holds at least one resource while waiting for additional resources.
  • No Preemption Condition: Resources cannot be forcibly taken away from the processes using them.
  • Circular Wait Condition: The processes are arranged in a circular chain, with each process waiting for a resource held by the next process in the chain.



It doesn’t interact with the computer directly. An operator takes jobs with similar requirements and groups them into batches.

It is the responsibility of the operator to sort the jobs with similar needs. 


Advantages of Batch Operating System: 

* Multiple users can share the batch systems 

* Batch system’s idle time is significantly less 

* It is easy to manage extensive work repeatedly in batch systems 
 

Disadvantages of Batch Operating System:  

* Batch systems are hard to debug 

* If any job fails, the other jobs will have to wait for an unknown time.



In a time-sharing operating system, each user gets a share of CPU time on a single system, and each task is given some time to execute so that all tasks run smoothly. Hence, these systems are also known as multitasking systems.

The slice of time that each task gets to execute is called a quantum. After this time interval is over, the OS switches to the next task.


Advantages of a Time-Sharing Operating System:

* Each task gets an equal opportunity 

* CPU idle time can be reduced 

 

Disadvantages of Time-Sharing Operating System: 

* One must take care of the security and integrity of user programs and data



In a distributed operating system, various autonomous, interconnected computers communicate with each other over a shared communication network. Each independent system has its own memory unit and CPU, so these are also known as loosely coupled systems.

These systems’ processors may differ in size and function.

The primary benefit of these operating systems is that a user can access files or software that are not present on his own system but on another system connected to the same network, i.e., remote access is enabled within the connected devices.


Benefits of Distributed Operating System: 

* Failure of one will not affect the other network communication, as all systems are independent of each other.

* Since resources are being shared, computation is high-speed and durable.

* These systems are easily scalable as many systems can be easily added to the network.


Disadvantages of Distributed Operating System: 

* Failure of the main network will stop the entire communication



Real-Time Operating Systems are types of OSs that serve real-time systems.  

The time interval required to process and respond to inputs is minimal. This time interval is called the response time.

They are generally used when there are stringent time requirements, like missile systems, air traffic control systems, robots, etc. 


Benefits of Real-Time Operating System: 

* Maximum Consumption: Maximum utilization of devices and the system, thus more output from all the resources

* Error Reduction: These types of systems are designed to keep errors to a minimum.
 

Disadvantages of Real-Time Operating System: 

* Complex Algorithms: The algorithms are very complex and challenging for the designer to write.



1. Hard Real-Time Systems: 

These OSs are meant for applications where time constraints are stringent, and even the shortest possible delay is unacceptable.  

Example: These systems are built for life-saving functions, like automatic parachutes or airbags, which must be readily available in case of an accident. Virtual memory is seldom found in these systems.
 

2. Soft Real-Time Systems: 

These OSs are for applications where time-constraint is less strict.
 



 Different states of the process are:

  • New Process
     
  • Running Process
     
  • Waiting Process
     
  • Ready Process
     
  • Terminated Process



A thread is a single sequence stream within a process. Because threads have some of the properties of processes, they are sometimes called lightweight processes. Threads are a popular way to improve the application through parallelism. For example, in a browser, multiple tabs can be different threads. MS Word uses multiple threads, one thread to format the text, another thread to process inputs, etc. 
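The MS Word analogy can be sketched with Python threads (the task functions here are made-up stand-ins for "format text" and "process input"):

```python
import threading

results = {}  # each thread writes to its own key

def format_text(doc):
    # Stand-in for a formatting thread.
    results["formatted"] = doc.upper()

def count_words(doc):
    # Stand-in for an input-processing thread.
    results["words"] = len(doc.split())

doc = "operating systems manage resources"
threads = [
    threading.Thread(target=format_text, args=(doc,)),
    threading.Thread(target=count_words, args=(doc,)),
]
for t in threads:
    t.start()
for t in threads:
    t.join()   # wait for both lightweight processes to finish
```

Both threads share the process's address space (here, the `results` dictionary), which is exactly what makes threads lighter than separate processes.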



Thrashing is a situation when the performance of a computer degrades or collapses. Thrashing occurs when a system spends more time processing page faults than executing transactions. While processing page faults is necessary in order to appreciate the benefits of virtual memory, thrashing has a negative effect on the system. As the page fault rate increases, more transactions need processing from the paging device. The queue at the paging device increases, resulting in increased service time for a page fault.



A buffer is a memory area that stores data being transferred between two devices or between a device and an application.



Virtual memory creates an illusion that each user has one or more contiguous address spaces, each beginning at address zero. The sizes of such virtual address spaces are generally very high. The idea of virtual memory is to use disk space to extend the RAM. Running processes don’t need to care whether the memory is from RAM or disk. The illusion of such a large amount of memory is created by subdividing the virtual memory into smaller pieces, which can be loaded into physical memory whenever they are needed by a process. 



Banker’s algorithm is used to avoid deadlock; it is one of the deadlock-avoidance methods. It is named after the banking system, in which a bank never allocates its available cash in such a way that it can no longer satisfy the requirements of all of its customers.



As processes are loaded into and removed from memory, the free memory space is broken into small pieces. Fragmentation occurs when these free blocks are too small to satisfy any allocation request, so memory sits unused even though the total free space may be large. This kind of issue arises in dynamic memory-allocation systems.



There are two types of fragmentation:

1. Internal fragmentation: It occurs in systems that use fixed-size allocation units.

2. External fragmentation: It occurs in systems that use variable-size allocation units.




 A scheduling algorithm is a process used to improve efficiency by maximizing CPU utilization and minimizing the waiting time of tasks. It deals with the problem of deciding which of the outstanding requests is to be allocated resources. Its main aim is to reduce resource starvation and to ensure fairness among the parties utilizing the resources. In simple words, it is used to allocate resources among various competing tasks.


The different types of scheduling algorithms are as follows:

First Come First Serve (FCFS): The process that arrives first is served first.

Round Robin (RR): Each process is given a fixed quantum of CPU time in turn.

Shortest Job First (SJF): The process with the lowest execution time is given first preference.

Priority Scheduling (PS): Each process is assigned a priority value, and the process with the highest priority is scheduled first. In Linux, for example, the user-adjustable nice value ranges from -20 (highest priority) to 19 (lowest).
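As an illustrative sketch (assuming, for simplicity, that all processes arrive at time 0), FCFS and SJF waiting times can be computed like this:

```python
def fcfs_waiting_times(burst_times):
    """First Come First Serve: each process waits for all earlier bursts.
    Assumes all processes arrive at time 0 (a simplifying assumption)."""
    waits, elapsed = [], 0
    for burst in burst_times:
        waits.append(elapsed)   # time already spent in the ready queue
        elapsed += burst
    return waits

def sjf_waiting_times(burst_times):
    """Shortest Job First under the same arrival assumption: running the
    shortest outstanding burst first minimizes the average waiting time."""
    order = sorted(range(len(burst_times)), key=lambda i: burst_times[i])
    waits, elapsed = [0] * len(burst_times), 0
    for i in order:
        waits[i] = elapsed
        elapsed += burst_times[i]
    return waits
```

With bursts [24, 3, 3], FCFS makes the short jobs wait behind the long one, while SJF makes only the long job wait, illustrating why scheduling policy matters.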



A semaphore is a synchronization mechanism used to control access to shared resources in multi-threaded or multi-process systems. It maintains a count of available resources and provides two atomic operations: wait() and signal(). It can have a count greater than one, allowing it to control access to a finite pool of resources.

There are basically two atomic operations that are possible:

Wait()

Signal()


Types of Semaphores

There are two main types of semaphores:

Binary semaphore: A binary semaphore is a synchronization object that can only have two values: 0 and 1. It is used to signal the availability of a single resource, such as a shared memory location or a file.

Counting semaphore: A counting semaphore is a synchronization object that can have a value greater than 1. It is used to control access to a finite number of resources, such as a pool of database connections or a limited number of threads.
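A counting semaphore limiting concurrent access to a pool of three resources can be sketched with Python's `threading.Semaphore`, whose `acquire()`/`release()` play the roles of wait() and signal():

```python
import threading
import time

pool = threading.Semaphore(3)         # counting semaphore with 3 permits
lock = threading.Lock()               # protects the counters below
in_use = 0
peak = 0

def use_resource():
    global in_use, peak
    pool.acquire()                    # wait(): take a permit, block if none left
    with lock:
        in_use += 1
        peak = max(peak, in_use)      # record the highest concurrency seen
    time.sleep(0.01)                  # hold the "resource" briefly
    with lock:
        in_use -= 1
    pool.release()                    # signal(): return the permit

threads = [threading.Thread(target=use_resource) for _ in range(10)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Even with ten competing threads, the semaphore guarantees that `peak` never exceeds the pool size of three.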



When several threads (or processes) share data while running in parallel on different cores, changes made by one may be overwritten by changes made by another, resulting in inconsistent data. Such processes therefore need to be synchronized; managing system resources and processes to avoid this situation is known as process synchronization.


Different synchronization mechanisms are:

 Mutex

Semaphores

Monitors

Condition variables

Critical regions

Read/ Write locks
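As a minimal sketch of the first mechanism above, a mutex guarding a shared counter prevents the lost-update problem just described:

```python
import threading

counter = 0
mutex = threading.Lock()

def increment_many(n):
    global counter
    for _ in range(n):
        with mutex:            # only one thread at a time in the critical section
            counter += 1       # read-modify-write is now atomic w.r.t. other threads

threads = [threading.Thread(target=increment_many, args=(10_000,))
           for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Without the lock, the four threads' read-modify-write sequences could interleave and overwrite each other's updates; with it, the final count is always exactly 40,000.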



IPC, or Interprocess Communication, involves utilizing shared resources such as memory between processes or threads. Through IPC, the operating system facilitates communication among different processes. Its primary function is to exchange data between multiple threads within one or more programs or processes under the supervision of the OS.


Different IPC Mechanisms:

    • Pipes
    • Message Queuing
    • Semaphores
    • Sockets
    • Shared Memory
    • Signals
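A hedged sketch of the first mechanism, pipes: the parent sends text to a child Python process through its stdin pipe and reads the transformed result back through its stdout pipe (the uppercase transformation is illustrative only):

```python
import subprocess
import sys

# The child process reads from its end of the stdin pipe and writes
# the transformed data to its end of the stdout pipe.
child_code = "import sys; sys.stdout.write(sys.stdin.read().upper())"

result = subprocess.run(
    [sys.executable, "-c", child_code],  # spawn a separate process
    input="hello ipc",                   # written into the stdin pipe
    capture_output=True,                 # collect the stdout pipe's contents
    text=True,
)
reply = result.stdout                    # data received back from the child
```

The two processes share no memory; all data crosses the process boundary through the kernel-managed pipes, which is the essence of this IPC mechanism.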



Redundant Arrays of Independent Disks (RAID) is a technology that combines multiple physical hard drives into a single logical unit to improve data storage performance, reliability, and capacity. It uses techniques such as data striping (spreading data across multiple disks), mirroring (creating identical copies of data on separate disks), or parity (calculating and storing error-checking information) to achieve these benefits.

RAID is employed to enhance data security, system speed, storage capacity, and overall efficiency of data storage systems. It aims to ensure data redundancy, which helps minimize the risk of data loss in case of disk failure.


  • RAID 0: Striping without redundancy: This configuration is implemented to enhance the server's performance.
  • RAID 1: Reflecting and duplicating: Also called disk mirroring, this level is seen as the easiest method to achieve fault tolerance.
  • RAID 2: Memory-style error-correcting codes: This level employs Hamming-code parity as linear error correction.
  • RAID 3: Bit-interleaved Parity: This tier mandates a separate parity drive for storing data.
  • RAID 4: It stores all parity data on one specific drive.
  • RAID 5: Block-interleaved distributed Parity: This level offers superior performance compared to disk mirroring while also providing fault tolerance.
  • RAID 6 P+Q Redundancy: Typically, this level offers protection against up to two drive failures.



Paging is a memory-management scheme that permits the physical address space of a process to be non-contiguous; in other words, it eliminates the need for contiguous allocation of physical memory.


That is, memory spaces that physically lie at different locations in hardware can be logically viewed as contiguous.


Paging: It is generally a memory management technique that allows OS to retrieve processes from secondary storage into main memory. It is a non-contiguous allocation technique that divides each process in the form of pages. 
Segmentation: It is generally a memory-management technique that divides a process into modules and parts of different sizes. These parts and modules are known as segments, which can be allocated to the process.
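The virtual-to-physical mapping that paging performs can be sketched as a page-table lookup (the 4 KB page size here is an assumption chosen for illustration):

```python
PAGE_SIZE = 4096   # assumed page size in bytes; real systems vary

def translate(virtual_addr, page_table):
    """Split a virtual address into (page number, offset) and map the
    page number through the page table to obtain a physical address."""
    page_number = virtual_addr // PAGE_SIZE
    offset = virtual_addr % PAGE_SIZE
    frame = page_table[page_number]         # frame holding that page
    return frame * PAGE_SIZE + offset       # physical address
```

Because each page maps independently to any free frame, consecutive virtual pages can land in scattered physical frames, which is exactly the non-contiguity paging allows.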



Spooling (Simultaneous Peripheral Operations On-line) involves buffering data from various I/O jobs in a designated memory or hard disk accessible to I/O devices.


In a distributed environment, an operating system handles spooling by:

  • Managing the varying data access rates of I/O devices.
  • Maintaining a spooling buffer as a waiting area for data allows slower devices to catch up.
  • Facilitating parallel computation through spooling, enabling the computer to perform I/O operations concurrently, such as reading from a tape, writing to a disk, and printing to a printer while performing computational tasks.



If processes with higher burst times are at the front of the ready queue, then processes with lower burst times may get blocked, meaning they may never get the CPU if the job in execution has a very high burst time. This is called the convoy effect, and it can lead to starvation.


In starvation resources are continuously utilized by high priority processes. Problem of starvation can be resolved using Aging.

In Aging priority of long waiting processes is gradually increased.



Context switching is the process of saving the context of one process and loading the context of another. It is a cost-effective and time-saving measure executed by the CPU because it allows multiple processes to share a single CPU; therefore, it is considered an important part of a modern OS. The OS uses this technique to switch a process from one state to another, e.g., from the running state to the ready state. It also allows a single CPU to handle and control various processes or threads without the need for additional resources.



Below are the differences between multithreading vs multitasking in simple form.


Multithreading

Multiple threads run simultaneously within the same program or different parts of it.

The CPU alternates between various threads.

It represents a lightweight process.

It is a characteristic of the process.

Multithreading involves sharing computing resources among threads of a single process.


Multitasking

Multiple programs are executed at the same time.

The CPU alternates between different tasks and processes.

It represents a heavyweight process.

It is a characteristic of the OS.

Multitasking involves sharing computing resources (CPU, memory, devices, etc.) among processes.



When more than one processes access the same code segment that segment is known as the critical section. The critical section contains shared variables or resources which are needed to be synchronized to maintain the consistency of data variables. In simple terms, a critical section is a group of instructions/statements or regions of code that need to be executed atomically such as accessing a resource (file, input or output port, global data, etc.).



RAID operates transparently with the underlying system. This allows it to appear to the host system as a large single disk structured as a linear array of blocks. This seamless integration enables replacing older technologies with RAID without requiring extensive changes to existing code.

Key Evaluation Points for a RAID System:

  • Reliability: How many disk faults can the system withstand?
  • Availability: What proportion of total session time is the system operational (uptime)?
  • Performance: What is the responsiveness and throughput of the system?
  • Capacity: How much usable capacity is available to the user given N disks with B blocks each?



Swapping

Interactive User Request

Timing

Parent Process Request



Demand paging is a memory management technique used by operating systems to optimize the use of memory resources.

In demand paging, we only load the required pages of a process into the main memory instead of loading the entire process.

When we first load a process into memory, only the pages that are necessary for the initial execution of the program are loaded. As the program runs, we bring additional pages into memory on demand as needed. Hence, this allows the operating system to optimize memory usage. Additionally, it doesn’t have to load all the pages of a program into memory at once.


Demand paging allows the system to swap out pages that are not currently in use, freeing up memory for other processes. When a page that has been swapped out is needed again, the system can bring it back into memory. Therefore, the main motivation behind demand paging is to reduce the time taken for process initialization. Additionally, it also helps to reduce the memory requirements for a process.

 The following steps are generally followed:

Attempt to access the page.

If the page is valid (in memory) then continue processing instructions as normal.

If a page is invalid then a page-fault trap occurs.

Check if the memory reference is a valid reference to a location on secondary memory. If not, the process is terminated (illegal memory access). Otherwise, we have to page in the required page.

Schedule disk operation to read the desired page into main memory.

Restart the instruction that was interrupted by the operating-system trap.
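The steps above can be simulated with a small page-fault counter. FIFO replacement is chosen here purely for illustration; real systems use more sophisticated policies:

```python
def demand_page(reference_string, num_frames):
    """Count page faults under demand paging with FIFO replacement."""
    frames, faults = [], 0
    for page in reference_string:
        if page not in frames:          # page fault: page not in memory
            faults += 1
            if len(frames) == num_frames:
                frames.pop(0)           # evict the oldest page (FIFO)
            frames.append(page)         # page in from secondary storage
    return faults
```

With the classic reference string 1,2,3,4,1,2,5,1,2,3,4,5, this simulation shows that four frames can produce more faults than three under FIFO, an instance of Belady's anomaly.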



Dispatcher module gives control of the CPU to the process selected by the short-term scheduler; this involves:


1. Switching context

2. Switching to user mode

3. Jumping to the proper location in the user program to restart that program

4. Dispatch latency – time it takes for the dispatcher to stop one process and start another running.



Overlays is a programming method that divides a process into pieces so that only the instructions and data that are currently needed are kept in memory. It does not need any support from the OS. It allows programs bigger than physical memory to run by keeping only the data and instructions required at any given time in memory.



Some of the most widely used operating systems are given below:

MS-Windows

Ubuntu

macOS

Fedora

Solaris

FreeBSD

Chrome OS

CentOS

Debian

Android



Throughput – number of processes that complete their execution per time unit.

Turnaround time – amount of time to execute a particular process.

Waiting time – amount of time a process has been waiting in the ready queue.

Response time – amount of time it takes from when a request was submitted until the first response is produced, not output (for time-sharing environment).



For efficient management of tasks in a system, proper utilization of memory is necessary. Hence, memory management is one of the prime tasks performed by an operating system. The system is composed of two types of memory: physical and virtual memory.



(i) Physical Memory 

Physical memory is also known as RAM. All programs and data during execution, along with the kernel of the operating system, are stored in RAM. Elements stored in this memory are directly accessible by the processor. The addresses that belong to physical memory form the physical address space. It can be further divided into the user address space, where user data and programs are stored, and the kernel address space, where the kernel is stored.


(ii) Virtual Memory

Virtual memory is the capability of using the hard disk as additional memory when RAM does not have sufficient capacity to store the data or programs. Data and programs stored in RAM are also mapped to a virtual address space by the operating system. The operating system has a component called the virtual memory manager, which uses paging to map virtual addresses to physical addresses. If physical memory can accommodate the processes, virtual addresses are mapped directly to physical addresses; if it cannot hold all the processes together, the virtual memory manager allocates memory to processes one by one until all processes complete, using disk paging and demand paging.


Disk paging extends the computer’s physical memory (RAM) by reserving space on the hard disk, called the page file, which the system treats as an extension of RAM. When there is not enough memory available in RAM to allocate to a new process, the virtual memory manager moves data from RAM to the page file. Moving data to the page file frees up RAM, making room for the new process to complete its work.

Demand paging is key to using physical memory when a number of processes must run with a combined memory demand exceeding the available physical memory. This is achieved by dividing each process into smaller pieces (pages) that are loaded only when needed.


Memory Representation 

The amount of physical memory used by a process is called its Working Set. The Working Set of a process is composed of its Private working set and its Sharable working set, both of which are owned by the same process. The Private working set is the amount of physical memory in use for tasks dedicated to the process. The Sharable working set is the amount of physical memory in use by the process for tasks that can be shared with other processes.

Working Set of a process = Private Working Set + Sharable Working Set.


Commit memory for a process is the amount of memory reserved by the operating system, usually equal to the page-file size required by the process. This memory is not allocated until it is necessary to page out a process’s private working set from RAM to the page file. The virtual memory required by the system from the page file is hence equal to the sum of all processes’ commit memory. However, Windows allows the user to modify the page-file and virtual-memory size. During the boot-up process, the operating system creates two dynamic pools in physical memory for kernel components: the paged pool and the non-paged pool.

=> Paged pool – the physical memory allocated for kernel components that can be written to disk when not required.

=> Non-paged pool – the physical memory allocated for kernel components or objects that must always remain in physical memory.

=> Allocations to both these pools can be viewed in Task Manager.


