Operating System Midterm Preparation


Midterm Syllabus:
1. Operating System Basics
2. Types of Operating Systems
3. System Calls, System Programs, System Structures
4. Process Concept, Process State Diagram, Process Control Block
5. Operations on Processes, Cooperating Processes
6. Inter-Process Communication (IPC)
7. Communication in Client-Server Systems
8. Thread Concepts, Thread Types, Thread Control Block
9. Thread’s Design, Multithreading Models, Threading Issues
10. CPU Scheduling (FCFS Algorithms, SJF)
11. CPU Scheduling (SRTF, Round Robin)
12. CPU Scheduling (Priority Non-Preemptive, Priority Preemptive)
13. CPU Scheduling (Multi-level Queues, Feedback)
14. Thread Scheduling, Multiple Processor Scheduling, Real-Time Scheduling
15. Comparison of CPU Scheduling Algorithms
16. Process Synchronization, Race Conditions, Critical Section
17. Critical Section Issues, Critical Section Problem Algorithms, Bakery Algorithm

Theory (Summarized)


1. Operating System Basics

Introduction to the fundamental concepts of Operating Systems (OS), such as what an OS is, its role in managing hardware and software resources, and basic terms like processes, memory management, and file systems.

2. Types of Operating Systems

Explanation of various types of operating systems, such as single-user vs. multi-user, single-tasking vs. multitasking, batch processing, real-time, distributed, and network operating systems.

3. System Calls, System Programs, System Structures

System Calls: Interfaces between user applications and the operating system, allowing programs to request services (e.g., file manipulation, process management).
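As a small illustration, the C sketch below uses two POSIX system call wrappers, getpid() and write(), to request services from the kernel (printing the process ID is just a placeholder task):

```c
#include <unistd.h>   /* write(), getpid() */
#include <stdio.h>    /* snprintf() */

int main(void) {
    char buf[64];
    /* getpid() is a system call asking the kernel for this process's ID */
    int len = snprintf(buf, sizeof(buf), "My PID is %d\n", (int)getpid());
    /* write() is a system call asking the kernel to send bytes to stdout (fd 1) */
    write(STDOUT_FILENO, buf, (size_t)len);
    return 0;
}
```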

System Programs: Programs that provide utility functions, like compilers, editors, and file management tools.

System Structures: The internal design of an operating system, including kernel and user-space components.

4. Process Concept, Process State Diagram, Process Control Block

Process Concept: A process is a program in execution, with its own state, resources, and execution context.

Process State Diagram: A visual representation of the various states a process can be in, such as New, Ready, Running, Waiting, and Terminated.

Process Control Block (PCB): A data structure containing information about a process, like its state, program counter, and resources.
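A simplified, hypothetical PCB can be sketched in C as below; the field names are illustrative only (real kernels use much larger structures, e.g. Linux's task_struct):

```c
/* Illustrative Process Control Block; field names and sizes are hypothetical */
typedef enum { NEW, READY, RUNNING, WAITING, TERMINATED } proc_state_t;

typedef struct pcb {
    int            pid;              /* process identifier                */
    proc_state_t   state;            /* current state in the state diagram */
    unsigned long  program_counter;  /* address of the next instruction   */
    unsigned long  registers[16];    /* saved CPU register contents       */
    int            priority;         /* scheduling priority               */
    void          *page_table;       /* memory-management information     */
    int            open_files[16];   /* file descriptors in use           */
    struct pcb    *next;             /* link in the ready/waiting queue   */
} pcb_t;
```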


5. Operations on Processes, Cooperating Processes


Operations on Processes: Includes process creation, scheduling, synchronization, and termination.

Cooperating Processes: Processes that interact or share data with each other.
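Putting these ideas together, here is a minimal POSIX sketch of process creation and termination: the parent creates a child with fork(), the child replaces its image with execlp(), and the parent cooperates by waiting for the child (the echoed message is just a placeholder):

```c
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    pid_t pid = fork();                 /* create a child process */
    if (pid < 0) {
        perror("fork");
        exit(EXIT_FAILURE);
    } else if (pid == 0) {
        /* child: replace its image with another program */
        execlp("echo", "echo", "hello from the child", (char *)NULL);
        perror("execlp");               /* reached only if exec fails */
        exit(EXIT_FAILURE);
    } else {
        /* parent: wait for the child to terminate */
        int status;
        waitpid(pid, &status, 0);
        printf("child %d exited with status %d\n", (int)pid, WEXITSTATUS(status));
    }
    return 0;
}
```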


6. Inter-Process Communication (IPC)

Mechanisms allowing processes to communicate and synchronize with each other, such as message passing and shared memory.
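A minimal sketch of message passing between a parent and child through a POSIX pipe (shared memory via shm_open()/mmap() would be the other common mechanism):

```c
#include <stdio.h>
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    int fd[2];
    if (pipe(fd) == -1) { perror("pipe"); return 1; }

    pid_t pid = fork();
    if (pid == 0) {                       /* child: writer */
        close(fd[0]);                     /* close unused read end  */
        const char *msg = "hello via pipe";
        write(fd[1], msg, strlen(msg) + 1);
        close(fd[1]);
    } else {                              /* parent: reader */
        close(fd[1]);                     /* close unused write end */
        char buf[64];
        read(fd[0], buf, sizeof(buf));
        printf("parent received: %s\n", buf);
        close(fd[0]);
        wait(NULL);
    }
    return 0;
}
```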

7. Communication in Client-Server Systems

Explains how communication occurs in client-server models, where clients request services from servers over a network.
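As a hedged sketch, the client side of such an exchange over TCP sockets might look like the following (the server address 127.0.0.1, port 8080, and the "GET time" request are placeholder assumptions, not a real protocol):

```c
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void) {
    /* create a TCP socket */
    int sock = socket(AF_INET, SOCK_STREAM, 0);
    if (sock < 0) { perror("socket"); return 1; }

    /* placeholder server address: 127.0.0.1:8080 */
    struct sockaddr_in srv = {0};
    srv.sin_family = AF_INET;
    srv.sin_port   = htons(8080);
    inet_pton(AF_INET, "127.0.0.1", &srv.sin_addr);

    /* client requests a service from the server, then reads the reply */
    if (connect(sock, (struct sockaddr *)&srv, sizeof(srv)) == 0) {
        const char *req = "GET time\n";
        write(sock, req, strlen(req));
        char reply[128];
        ssize_t n = read(sock, reply, sizeof(reply) - 1);
        if (n > 0) { reply[n] = '\0'; printf("server said: %s\n", reply); }
    } else {
        perror("connect");
    }
    close(sock);
    return 0;
}
```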

8. Thread Concepts, Thread Types, Thread Control Block

Thread: A lightweight unit of execution within a process that shares the process's resources (address space, open files) with the other threads of that process.

Thread Types: User threads and kernel threads.

Thread Control Block (TCB): A data structure similar to the PCB, but for threads.
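A short sketch using POSIX threads (pthreads), where the system keeps a TCB-like record for each thread it creates:

```c
#include <pthread.h>
#include <stdio.h>

/* thread function: runs concurrently with main, sharing its address space */
static void *worker(void *arg) {
    int id = *(int *)arg;
    printf("worker %d running\n", id);
    return NULL;
}

int main(void) {
    pthread_t tid[2];
    int ids[2] = {1, 2};

    for (int i = 0; i < 2; i++)
        pthread_create(&tid[i], NULL, worker, &ids[i]);  /* spawn threads */

    for (int i = 0; i < 2; i++)
        pthread_join(tid[i], NULL);                      /* wait for them */

    printf("all threads finished\n");
    return 0;
}
```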

9. Thread’s Design, Multithreading Models, Threading Issues


Thread Design: How threads are created, managed, and scheduled.

Multithreading Models: User-level threads, kernel-level threads, and hybrid models.

Threading Issues: Includes problems like deadlocks, race conditions, and synchronization.


10. CPU Scheduling (FCFS Algorithms, SJF)


First-Come, First-Served (FCFS): A basic scheduling algorithm where processes are executed in the order they arrive.

Shortest Job First (SJF): A scheduling algorithm where the process with the smallest execution time is given priority.
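As an illustration, the small C sketch below computes the average waiting time for FCFS and for non-preemptive SJF over a hypothetical set of burst times (24, 3, 3), assuming all processes arrive at time 0:

```c
#include <stdio.h>
#include <stdlib.h>

/* average waiting time when jobs run back-to-back in the given order */
static double avg_waiting(const int *burst, int n) {
    double total_wait = 0.0;
    int elapsed = 0;
    for (int i = 0; i < n; i++) {
        total_wait += elapsed;   /* process i waits for everything before it */
        elapsed += burst[i];
    }
    return total_wait / n;
}

static int cmp_int(const void *a, const void *b) {
    return *(const int *)a - *(const int *)b;
}

int main(void) {
    /* hypothetical burst times, all arriving at t = 0 */
    int fcfs[] = {24, 3, 3};              /* FCFS: run in arrival order     */
    int sjf[]  = {24, 3, 3};              /* SJF: run shortest burst first  */
    int n = 3;

    qsort(sjf, n, sizeof(int), cmp_int);  /* SJF = FCFS on sorted bursts    */

    printf("FCFS average waiting time: %.2f\n", avg_waiting(fcfs, n));  /* 17.00 */
    printf("SJF  average waiting time: %.2f\n", avg_waiting(sjf, n));   /*  3.00 */
    return 0;
}
```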


11. CPU Scheduling (SRTF, Round Robin)

Shortest Remaining Time First (SRTF): A preemptive version of SJF, where the running process is preempted whenever a newly arrived process has a shorter remaining burst time.

Round Robin (RR): A time-sharing scheduling algorithm where each process gets a fixed time slice (quantum) before being preempted and moved to the back of the ready queue.
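A hedged sketch simulating Round Robin waiting times, assuming the same hypothetical bursts (24, 3, 3), arrival at time 0, and a quantum of 4:

```c
#include <stdio.h>

int main(void) {
    /* hypothetical bursts, all arriving at t = 0; the quantum is an assumption */
    int burst[] = {24, 3, 3};
    int n = 3, quantum = 4;

    int remaining[3], waiting[3] = {0};
    for (int i = 0; i < n; i++) remaining[i] = burst[i];

    int time = 0, done = 0;
    while (done < n) {
        for (int i = 0; i < n; i++) {
            if (remaining[i] == 0) continue;
            int slice = remaining[i] < quantum ? remaining[i] : quantum;
            time += slice;                      /* process i runs one slice */
            remaining[i] -= slice;
            if (remaining[i] == 0) {
                waiting[i] = time - burst[i];   /* completion - burst (arrival 0) */
                done++;
            }
        }
    }

    double total = 0;
    for (int i = 0; i < n; i++) total += waiting[i];
    printf("Round Robin average waiting time: %.2f\n", total / n);  /* 5.67 */
    return 0;
}
```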

12. CPU Scheduling (Priority Non-Preemptive, Priority Preemptive)

Priority Scheduling: Processes are assigned priorities, and the one with the highest priority is executed first. Non-preemptive means a running process keeps the CPU until its burst finishes; preemptive means a newly arrived higher-priority process can interrupt it.
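A brief sketch of non-preemptive priority scheduling with hypothetical bursts and priorities (lower number = higher priority, all processes assumed to arrive at time 0):

```c
#include <stdio.h>

int main(void) {
    /* hypothetical processes, all arriving at t = 0; lower number = higher priority */
    int burst[]    = {10, 1, 2, 1, 5};
    int priority[] = { 3, 1, 4, 5, 2};
    int order[]    = { 0, 1, 2, 3, 4};
    int n = 5;

    /* sort process indices by priority (simple exchange sort for clarity) */
    for (int i = 0; i < n; i++)
        for (int j = i + 1; j < n; j++)
            if (priority[order[j]] < priority[order[i]]) {
                int tmp = order[i]; order[i] = order[j]; order[j] = tmp;
            }

    /* run processes back-to-back in priority order, accumulating waiting times */
    int time = 0;
    double total_wait = 0;
    for (int i = 0; i < n; i++) {
        int p = order[i];
        printf("P%d (priority %d) starts at %d\n", p + 1, priority[p], time);
        total_wait += time;
        time += burst[p];
    }
    printf("average waiting time: %.2f\n", total_wait / n);  /* 8.20 */
    return 0;
}
```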

13. CPU Scheduling (Multi-level Queues, Feedback)

Multi-level Queues: Processes are divided into different queues based on priority or type, and scheduling happens according to each queue's policies.

Feedback: Multi-level feedback queues allow processes to move between different priority levels based on their behavior.


14. Thread Scheduling, Multiple Processor Scheduling, Real-Time Scheduling

Thread Scheduling: Managing the execution of threads within processes.

Multiple Processor Scheduling: Scheduling on systems with multiple processors (symmetric or asymmetric).

Real-Time Scheduling: Scheduling algorithms specifically for real-time systems, where tasks must meet specific deadlines.


15. Comparison of CPU Scheduling Algorithms

A comparison of the efficiency, fairness, and complexity of different CPU scheduling algorithms.


16. Process Synchronization, Race Conditions, Critical Section

Process Synchronization: Techniques to ensure that processes access shared resources without causing conflicts.

Race Conditions: Situations where the final result depends on the timing or interleaving of concurrent accesses to shared data, leading to unpredictable results.

Critical Section: A portion of code where a process accesses shared resources, which needs to be protected from concurrent access.
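A minimal sketch showing such a race condition being prevented with a pthread mutex that guards the critical section (the loop counts are arbitrary):

```c
#include <pthread.h>
#include <stdio.h>

static long counter = 0;                          /* shared resource        */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *increment(void *arg) {
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);                /* enter critical section */
        counter++;                                /* protected shared update */
        pthread_mutex_unlock(&lock);              /* exit critical section  */
    }
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, increment, NULL);
    pthread_create(&t2, NULL, increment, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    /* without the mutex, the final value would often be less than 200000 */
    printf("counter = %ld\n", counter);
    return 0;
}
```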


17. Critical Section Issues, Critical Section Problem Algorithms, Bakery Algorithm

Critical Section Problem: Designing entry and exit protocols so that no two processes are in their critical sections at the same time, while also guaranteeing progress and bounded waiting.

Bakery Algorithm: Lamport's solution to the critical section problem, using a ticket-numbering scheme (like customers taking numbers in a bakery) to control access.
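A sketch of the bakery algorithm for N threads in C, for illustration only; on real hardware the plain volatile variables would additionally need memory barriers or atomics to be correct:

```c
#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

#define N 4                         /* number of competing threads */

static volatile bool choosing[N];
static volatile int  number[N];
static int shared_counter = 0;      /* resource protected by the critical section */

static int max_ticket(void) {
    int m = 0;
    for (int i = 0; i < N; i++)
        if (number[i] > m) m = number[i];
    return m;
}

static void bakery_lock(int i) {
    choosing[i] = true;
    number[i] = 1 + max_ticket();   /* take a ticket higher than any seen so far */
    choosing[i] = false;
    for (int j = 0; j < N; j++) {
        while (choosing[j]) ;       /* wait while thread j is picking its ticket */
        /* wait while j holds a smaller ticket (ties broken by thread id) */
        while (number[j] != 0 &&
               (number[j] < number[i] ||
                (number[j] == number[i] && j < i))) ;
    }
}

static void bakery_unlock(int i) {
    number[i] = 0;                  /* give up the ticket */
}

static void *worker(void *arg) {
    int id = *(int *)arg;
    for (int k = 0; k < 1000; k++) {
        bakery_lock(id);
        shared_counter++;           /* critical section */
        bakery_unlock(id);
    }
    return NULL;
}

int main(void) {
    pthread_t t[N];
    int ids[N];
    for (int i = 0; i < N; i++) { ids[i] = i; pthread_create(&t[i], NULL, worker, &ids[i]); }
    for (int i = 0; i < N; i++) pthread_join(t[i], NULL);
    printf("shared_counter = %d (expected %d)\n", shared_counter, N * 1000);
    return 0;
}
```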


A-Z Slides:




Handwritten Numerical Solutions:



A-Z Diagrams:




