Unit - 1
INTRODUCTION - Definition And Types :
An operating system (OS) is a software program that manages computer hardware and software resources and provides common services for computer programs.
There are several types of operating systems, including:
Windows OS: This is a family of operating systems developed by Microsoft Corporation, which includes Windows 10, Windows 8, Windows 7, and earlier versions.
macOS: This is the operating system used on Apple's Macintosh computers; it was formerly known as Mac OS X.
Linux OS: This is a free and open-source operating system that is widely used in servers, supercomputers, and mobile devices, as well as on personal computers.
Unix OS: This is a multi-user, multi-tasking operating system that is widely used in servers and workstations, particularly in enterprise environments.
iOS: This is the operating system used on Apple's mobile devices, including iPhones and iPads.
Android OS: This is a mobile operating system developed by Google and used on a wide range of smartphones and tablets from various manufacturers.
Chrome OS: This is a Linux-based operating system designed by Google for use in Chromebooks, which are low-cost laptops designed primarily for web browsing and cloud computing.
Real-time OS: This is an operating system designed for applications that require a very fast and predictable response time, such as in industrial control systems, robotics, and military systems.
Embedded OS: This is an operating system designed for use in small devices with limited resources, such as embedded systems in cars, appliances, and medical devices.
STRUCTURE -
The structure of an operating system can vary depending on the type and design of the system, but generally, an operating system has several layers or components that work together to manage computer hardware and software resources. Here are some of the typical components that make up an operating system:
Kernel: This is the core component of the operating system that provides basic services for all other components. The kernel manages hardware resources, such as memory and CPU, and provides low-level interfaces for device drivers and system calls.
Device drivers: These are software components that control hardware devices, such as printers, keyboards, and disk drives. Device drivers communicate with the kernel to access hardware resources and provide services to applications.
System libraries: These are collections of pre-written software functions and routines that are used by applications and other system components. System libraries provide high-level interfaces to the operating system services, such as file management, networking, and security.
User interface: This is the component that allows users to interact with the operating system and applications. The user interface can be graphical, command-line, or a combination of both.
File system: This is the component that manages the organization and storage of files and directories on the computer's hard drive or other storage devices. The file system provides a hierarchical structure for files and folders and ensures that data is stored and retrieved efficiently and securely.
Process management: This is the component that manages the creation, execution, and termination of processes or programs. The process manager schedules processes for execution on the CPU and provides mechanisms for inter-process communication and synchronization.
Memory management: This is the component that manages the allocation and deallocation of memory resources for processes. The memory manager ensures that each process has sufficient memory to execute and prevents memory conflicts and leaks.
Components And Services -
Components and services of an operating system can be broadly classified into two categories:
- System components: These are the essential parts of the operating system that provide low-level functionality, such as hardware and resource management, process and memory management, and device drivers.
Some of the common system components of an operating system are:
Kernel: This is the core component of an operating system that manages the system's resources and provides a bridge between hardware and software.
Device drivers: These are software components that allow the operating system to communicate with hardware devices such as printers, keyboards, and disk drives.
File system: This is a component of the operating system that manages the storage and retrieval of files and directories.
System libraries: These are collections of pre-written software functions and routines that are used by applications and other system components.
User interface: This is the component of the operating system that allows users to interact with the system, either through a graphical user interface (GUI) or a command-line interface (CLI).
- System services: These are the higher-level services and utilities that run on top of the system components to provide additional functionality to users and applications.
Some of the common system services of an operating system are:
Networking services: These services provide connectivity and communication capabilities to the system, such as internet access, network file sharing, and remote access.
Security services: These services ensure the security and integrity of the system, such as virus protection, firewall protection, and user authentication.
System management services: These services allow system administrators to manage and monitor the system, such as system backups, system updates, and performance monitoring.
Application support services: These services provide support for the installation and execution of applications, such as application programming interfaces (APIs), database management systems (DBMS), and multimedia support.
Overall, the components and services of an operating system work together to provide a robust and reliable platform for users and applications to run on.
System Calls -
System calls are the interface between user-level applications and the operating system. They provide a way for applications to request services from the operating system, such as input/output operations, process management, memory management, file system operations, and network communication.
Here are some examples of common system calls in an operating system:
Process management system calls: These system calls allow applications to create, manage, and terminate processes. Examples include fork(), exec(), wait(), and exit().
File system system calls: These system calls allow applications to perform file-related operations such as creating, opening, reading, writing, and deleting files. Examples include open(), read(), write(), close(), and unlink().
Memory management system calls: These system calls allow applications to manage memory resources such as allocating and freeing memory. Examples include brk(), sbrk(), and mmap(); library functions such as malloc() and free() are built on top of these system calls rather than being system calls themselves.
Input/output system calls: These system calls allow applications to interact with input/output devices such as keyboards, mice, and printers. Examples include read(), write(), and ioctl().
Network system calls: These system calls allow applications to perform network-related operations such as sending and receiving data over a network. Examples include socket(), bind(), connect(), and send().
When an application makes a system call, it triggers a transition from user mode to kernel mode, where the operating system provides the requested service. After the service is completed, the operating system returns control to the application in user mode. System calls are an essential part of the operating system and are used by virtually all applications to perform various tasks.
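The file-related system calls listed above can be exercised directly through Python's os module, which exposes thin wrappers around the POSIX calls. A minimal sketch (assumes a Unix-like system; the /tmp path is illustrative):

```python
import os

path = "/tmp/syscall_demo.txt"  # illustrative temporary file

fd = os.open(path, os.O_CREAT | os.O_WRONLY | os.O_TRUNC)  # open() system call
os.write(fd, b"hello")                                     # write() system call
os.close(fd)                                               # close() system call

fd = os.open(path, os.O_RDONLY)
data = os.read(fd, 5)                                      # read() system call
os.close(fd)
os.unlink(path)                                            # unlink() system call
```

Each of these calls crosses from user mode into kernel mode, which is why buffered library interfaces (such as Python's built-in open()) exist to reduce the number of transitions.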
System Programs -
System programs are software programs that are included in an operating system to provide additional functionality to users and applications. These programs are designed to work closely with the operating system and provide services such as system administration, file management, debugging, and performance analysis. Here are some common system programs found in operating systems:
File management programs: These programs provide utilities for creating, copying, moving, and deleting files and directories. Examples include file managers, disk utilities, and backup software.
Text editors: These programs allow users to create and edit text-based documents such as source code, configuration files, and documentation. Examples include vi, nano, and emacs.
System administration programs: These programs provide tools for managing the system, such as user and group management, system backup and recovery, and system monitoring. Examples include task manager, performance monitor, and system configuration tools.
Debugging programs: These programs provide tools for finding and fixing errors in software code. Examples include debuggers, profilers, and error reporting tools.
Communication programs: These programs provide tools for communication between users, applications, and networks. Examples include email clients, instant messaging clients, and web browsers.
Security programs: These programs provide tools for protecting the system from unauthorized access, viruses, and other security threats. Examples include antivirus software, firewalls, and encryption tools.
Overall, system programs are an essential part of the operating system and provide users and applications with additional functionality and tools to improve productivity, security, and performance.
PROCESS MANAGEMENT
Process Concept -
In operating systems, a process is an instance of a running program. It is a unit of work that is scheduled and managed by the operating system's kernel. A process consists of a virtual address space, which contains the program code, data, and stack, and a set of system resources such as files, input/output devices, and network connections. Here are some key concepts related to processes in operating systems:
Process states: A process can be in one of several states, including running, waiting, ready, and terminated. The state of a process depends on its interaction with the system resources and the scheduling algorithm of the operating system.
Process creation: A process is created by a parent process or the operating system itself. The parent process typically creates a child process by using a system call such as fork(), which creates a copy of the parent process. The child process can then execute a different program or code from the parent process.
Process synchronization: Processes may need to communicate and synchronize with each other to share data and resources. Operating systems provide synchronization mechanisms such as semaphores, locks, and monitors to ensure that processes access shared resources in a coordinated manner.
Process scheduling: The operating system schedules processes to run on the CPU using a scheduling algorithm. The scheduling algorithm decides which process to run next based on factors such as process priority, CPU usage, and waiting time.
Process termination: A process can be terminated either by its own request or by the operating system. When a process terminates, it releases its resources back to the system, including its virtual address space, files, and system resources.
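The fork()/wait() flow described above can be sketched in Python (POSIX-only; os.fork is unavailable on Windows, and the exit status 7 is an arbitrary example):

```python
import os

pid = os.fork()  # duplicates the calling process
if pid == 0:
    # Child process: fork() returned 0 here; do work, then terminate
    os._exit(7)  # exit immediately with status 7
else:
    # Parent process: fork() returned the child's process ID
    _, status = os.waitpid(pid, 0)      # wait for the child to terminate
    exit_code = os.WEXITSTATUS(status)  # recover the child's exit status (7)
```

In a real program the child would typically call one of the exec() family to replace its image with a different program.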
Process scheduling -
Process scheduling is the process by which a computer operating system manages the allocation of resources, such as CPU time and memory, to various running processes. The goal of process scheduling is to optimize system performance by ensuring that all processes are executed fairly and efficiently.
The scheduling algorithm used by an operating system determines which process will be executed next, and for how long. This algorithm typically uses a combination of priority levels and time-sharing to allocate resources to processes. For example, a high-priority process may be given more CPU time than a low-priority process, or a process may be given a certain amount of CPU time before being preempted in favor of another process.
There are several common scheduling algorithms used in operating systems, including:
First-Come, First-Served (FCFS): This algorithm simply executes processes in the order in which they arrive in the ready queue.
Shortest Job First (SJF): This algorithm selects the process with the shortest expected execution time next, in order to minimize the average waiting time of all processes.
Priority Scheduling: This algorithm assigns a priority level to each process and executes the highest-priority process first.
Round Robin: This algorithm assigns a fixed time slice to each process, and each process is executed for the allotted time before being preempted in favor of the next process in the queue.
Multi-level Queue Scheduling: This algorithm divides the ready queue into multiple queues based on priority or process characteristics, and assigns each queue a different scheduling algorithm.
The choice of scheduling algorithm depends on the specific requirements of the system, including the types of processes being run, the expected workload, and the desired performance metrics.
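The FCFS algorithm above is simple enough to simulate in a few lines. A toy sketch (not a real scheduler; processes are listed in arrival order, times in arbitrary units):

```python
def fcfs_waiting_times(arrival, burst):
    """Per-process waiting time under First-Come, First-Served."""
    clock, waits = 0, []
    for a, b in zip(arrival, burst):
        clock = max(clock, a)    # CPU idles until the process arrives
        waits.append(clock - a)  # waiting time = start time - arrival time
        clock += b               # run the process to completion
    return waits

print(fcfs_waiting_times([0, 1, 2], [5, 3, 2]))  # [0, 4, 6]
```

The long first burst makes every later process wait, illustrating the "convoy effect" that motivates SJF and Round Robin.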
Cooperating Process -
In an operating system, cooperating processes refer to processes that work together to accomplish a common goal or task. These processes must communicate with each other and coordinate their actions to achieve their objectives efficiently.
To facilitate communication and synchronization between cooperating processes, the operating system provides various Inter-Process Communication (IPC) mechanisms. Some of the commonly used IPC mechanisms include:
Shared memory: In this mechanism, two or more processes share a common area of memory to exchange data. It is a fast and efficient way of IPC, but it requires careful coordination to avoid race conditions and ensure the consistency of shared data.
Message passing: In message passing, processes communicate by exchanging messages through a system-provided message queue or mailbox. It is a more secure IPC mechanism than shared memory, but it can be slower due to the overhead of sending and receiving messages.
Synchronization: Processes may use synchronization primitives like semaphores and mutexes to coordinate their access to shared resources such as shared memory or files.
Cooperating processes may follow different models, such as client-server or peer-to-peer. In the client-server model, one process (the server) provides services to other processes (the clients), while in the peer-to-peer model, all processes work as equals to accomplish a task.
However, cooperating processes may face issues such as deadlock and starvation, where processes are unable to proceed due to conflicting resource requirements. Therefore, process management in the operating system should ensure that the IPC mechanisms and synchronization primitives used are efficient and deadlock-free.
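The message-passing mechanism described above can be sketched with Python's multiprocessing module (assumes a Unix host; the fork start method is chosen explicitly so the example is self-contained):

```python
import multiprocessing as mp

def worker(q):
    # Child process: send a message back to the parent through the queue
    q.put("result from child")

ctx = mp.get_context("fork")   # POSIX-only fork start method (assumption)
q = ctx.Queue()                # OS-backed message queue
p = ctx.Process(target=worker, args=(q,))
p.start()
msg = q.get()                  # blocks until the child's message arrives
p.join()
```

Because the queue copies data between address spaces, no explicit locking is needed, at the cost of message-passing overhead.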
THREADS -
In simple language, threads are a way to perform multiple tasks concurrently within a process, enabling faster and more efficient execution. They can communicate and synchronize their actions through shared memory, but careful coordination is necessary to avoid conflicts and ensure correct behavior. The operating system provides features for thread management to ensure efficient use of system resources.
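The shared-memory coordination mentioned above can be sketched with Python's threading module. Without the lock, concurrent increments could be lost; with it, the result is deterministic:

```python
import threading

counter = 0
lock = threading.Lock()

def increment(n):
    global counter
    for _ in range(n):
        with lock:       # serialize access to the shared counter
            counter += 1

threads = [threading.Thread(target=increment, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # 40000
```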
INTERPROCESS COMMUNICATION -
Interprocess communication (IPC) is a method of exchanging data and messages between different processes running on the same computer or across a network. IPC enables processes to work together and coordinate their actions to perform complex tasks efficiently.
In simple language, IPC allows different programs or processes to communicate with each other, share data, and coordinate their actions. For example, one process may need to send a message to another process to request information or to update a shared resource.
There are several methods of IPC, including shared memory, message passing, and synchronization. Shared memory involves creating a region of memory that multiple processes can access and modify. Message passing involves sending messages between processes, typically using a mailbox or message queue. Synchronization involves coordinating the actions of multiple processes to ensure they do not interfere with each other, for example, by using semaphores or mutexes.
IPC is important in many areas of computing, including operating systems, distributed systems, and client-server applications. By enabling processes to communicate and coordinate their actions, IPC can improve system performance, reduce resource usage, and enhance the overall user experience.
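The shared-memory style of IPC described above can be sketched with multiprocessing.Value, a small region of memory visible to several processes (Unix-only sketch; the fork start method is an assumption):

```python
import multiprocessing as mp

def add(shared, amount):
    with shared.get_lock():    # synchronize access to the shared counter
        shared.value += amount

ctx = mp.get_context("fork")   # POSIX-only fork start method (assumption)
total = ctx.Value("i", 0)      # a C int living in shared memory
procs = [ctx.Process(target=add, args=(total, 5)) for _ in range(4)]
for p in procs:
    p.start()
for p in procs:
    p.join()

print(total.value)  # 20
```

Unlike the message-passing approach, the processes here touch the same bytes, so the lock around the update is essential.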
CPU SCHEDULING CRITERIA -
CPU scheduling criteria are the factors that are used by the operating system to decide which process should be given CPU time. The main criteria used in CPU scheduling are:
CPU Burst Time: This refers to the amount of CPU time a process requires to complete its execution. Under algorithms such as Shortest Job First, the shorter the burst time, the higher the priority of the process.
Arrival Time: This refers to the time at which a process enters the system. The earlier a process arrives, the higher its priority.
Priority: This refers to the importance of a process relative to other processes in the system. Processes with higher priority get more CPU time.
I/O Operations: This refers to the number of input/output operations a process requires. Processes that require more I/O operations may be given higher priority to minimize the waiting time for I/O.
Age: This refers to the amount of time a process has spent in the system. Processes that have been waiting for a long time may be given higher priority to prevent starvation.
Preemption: This refers to the ability of the operating system to stop the execution of a process and allocate CPU time to another process with higher priority.
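The Age criterion above is often implemented as "aging": a waiting process's effective priority improves over time so it cannot starve. A toy sketch (pick_next and age_boost are illustrative names, not a real API; lower numbers mean higher priority):

```python
def pick_next(processes, age_boost=1):
    """Select the next process under priority scheduling with aging.
    processes: list of (pid, base_priority, waiting_time)."""
    # Effective priority improves (decreases) as waiting time grows
    return min(processes, key=lambda p: p[1] - age_boost * p[2])[0]

# A long-waiting low-priority process overtakes a fresh high-priority one:
print(pick_next([(1, 2, 0), (2, 5, 10)]))               # 2
# With aging disabled, plain priority order wins:
print(pick_next([(1, 2, 0), (2, 5, 10)], age_boost=0))  # 1
```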
Scheduling Algorithm -
Scheduling algorithms are used by the operating system to determine which process to execute next based on various scheduling criteria. Here are some commonly used scheduling algorithms:
First-Come, First-Served (FCFS): This algorithm executes processes in the order they arrive in the ready queue. The process that arrives first gets executed first.
Shortest Job First (SJF): This algorithm executes the process with the shortest expected execution time first. It can minimize average waiting time and turnaround time but requires knowledge of the process execution time.
Round Robin (RR): This algorithm allocates a fixed time slice to each process in the ready queue, allowing all processes to get a fair share of CPU time. After a time slice expires, the process is preempted and put back into the ready queue.
Priority Scheduling: This algorithm assigns priorities to each process and executes the highest priority process first. It can be either preemptive or non-preemptive.
Multi-Level Feedback Queue Scheduling: This algorithm assigns processes to different priority levels based on their characteristics and allocates CPU time to each level based on a predetermined set of rules.
Guaranteed Scheduling: This algorithm ensures that each process is guaranteed a minimum amount of CPU time, regardless of its priority or burst time.
Lottery Scheduling: This algorithm assigns each process a number of lottery tickets based on its priority, burst time, or other characteristics. The operating system then randomly selects a winning ticket and executes the process associated with that ticket.
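The Round Robin algorithm above can be simulated with a simple queue. A toy model in which all processes arrive at time 0 (burst times in arbitrary units):

```python
from collections import deque

def round_robin(burst_times, quantum):
    """Return the order in which processes complete under Round Robin."""
    queue = deque(enumerate(burst_times))  # (pid, remaining time) pairs
    order = []
    while queue:
        pid, remaining = queue.popleft()
        if remaining <= quantum:
            order.append(pid)  # finishes within this time slice
        else:
            queue.append((pid, remaining - quantum))  # preempt and requeue
    return order

print(round_robin([5, 3, 8], quantum=4))  # [1, 0, 2]
```

The short job (pid 1) finishes first even though it arrived after pid 0, showing how RR bounds the waiting time of short processes.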
Multi-Process Scheduling -
Multiple process scheduling refers to the management of multiple processes by the operating system, which involves scheduling and allocating CPU time to multiple processes concurrently. The main goal of multiple process scheduling is to optimize the utilization of CPU resources and ensure that each process gets the necessary CPU time to complete its tasks.
The operating system maintains a list of processes that are waiting to be executed, known as the ready queue. When a CPU becomes available, the operating system selects a process from the ready queue and assigns it to the CPU for execution.
Real-time Scheduling -
Real-time scheduling is a type of scheduling used in real-time operating systems (RTOS) to meet the timing requirements of real-time applications. Real-time applications are those where the output is required within a guaranteed time interval. In such applications, missed deadlines can lead to severe consequences.
Real-time scheduling algorithms can be classified into two categories: static and dynamic.
Static algorithms are those where the scheduling decisions (priorities) are fixed at design time and remain unchanged at run-time. These algorithms are suitable for applications with fixed and predictable workload patterns. Examples of static (fixed-priority) scheduling algorithms include Rate Monotonic Scheduling (RMS) and Deadline Monotonic Scheduling.
Dynamic algorithms are those where the scheduling decisions are made at run-time based on the current system state. These algorithms are suitable for applications with dynamic and unpredictable workload patterns. Examples of dynamic scheduling algorithms include Earliest Deadline First (EDF) and Least Laxity First (LLF); protocols such as Priority Inheritance and Priority Ceiling are used alongside these algorithms to bound priority inversion.
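Rate Monotonic Scheduling admits a simple schedulability check, the Liu-Layland utilization bound. A sketch of this sufficient (but not necessary) test:

```python
def rms_schedulable(tasks):
    """Liu-Layland utilization bound test for Rate Monotonic Scheduling.
    tasks: list of (execution_time, period) pairs.
    Returns True if the task set is guaranteed schedulable under RMS."""
    n = len(tasks)
    utilization = sum(c / p for c, p in tasks)
    bound = n * (2 ** (1 / n) - 1)  # approaches ln 2 ~ 0.693 as n grows
    return utilization <= bound

print(rms_schedulable([(1, 4), (1, 8)]))  # True  (U = 0.375 <= 0.828)
print(rms_schedulable([(3, 4), (2, 5)]))  # False (U = 1.15  >  0.828)
```

A task set that fails this test may still be schedulable; an exact answer requires response-time analysis.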
The evaluation of real-time scheduling algorithms is typically done based on the following criteria:
Response Time: The time taken by the system to respond to an event or a request.
Deadline Miss Ratio: The ratio of the number of missed deadlines to the total number of deadlines.
Utilization: The percentage of time that the CPU is being used.
Overhead: The amount of time and resources consumed by the scheduler itself.
Fairness: The degree to which the scheduling algorithm allocates CPU time fairly among the processes.
Predictability: The degree to which the scheduling algorithm provides analyzable timing behavior, such as a bounded worst-case response time for each process.
Robustness: The ability of the scheduling algorithm to handle unexpected situations, such as system failures or resource conflicts.
The choice of scheduling algorithm depends on the specific requirements of the real-time application, including the response time, deadline, and workload characteristics. The evaluation of real-time scheduling algorithms is an ongoing research area, and new algorithms are constantly being proposed and evaluated based on these criteria.
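The Deadline Miss Ratio criterion above is straightforward to compute from an execution trace. A small sketch (the function name is illustrative):

```python
def deadline_miss_ratio(completion_times, deadlines):
    """Ratio of missed deadlines to total deadlines for a set of jobs."""
    missed = sum(1 for c, d in zip(completion_times, deadlines) if c > d)
    return missed / len(deadlines)

# One of three jobs finishes after its deadline:
print(deadline_miss_ratio([5, 12, 9], [6, 10, 9]))  # 0.333...
```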