Types of Operating System | OS Tutorials

YASH PAL, May 12, 2026

Operating systems have evolved over the years. Every operating system provides core features such as processor scheduling, memory management, I/O management, and file management. Based on how these features are provided and how users interact with the system, operating systems are broadly classified into the categories shown below:

Table of Contents

  • Types of Operating Systems
    • Serial Processing Operating System
      • Scheduling
      • Setup Time
    • Batch Operating System
      • SPOOLING (Simultaneous Peripheral Operations Online)
    • Multiprogramming Batched System
    • Time Sharing System
    • Parallel Systems
      • Symmetric and Asymmetric Multiprocessing
    • Distributed Systems
      • Reason for Building a Distributed System
    • Real-Time System

Types of Operating Systems

  1. Serial Processing System
  2. Batch Operating System
  3. Multi-Programmed Batch System
  4. Time Sharing System
  5. Parallel System
  6. Distributed System
  7. Real-Time System

Serial Processing Operating System

  • With the earliest computers (late 1940s to mid 1950s), the programmer interacted directly with the computer hardware; there was no operating system. These machines are called Bare Machines.
  • Programs in machine code were loaded via the input device (e.g., a card reader). If an error halted the program, the error condition was signaled.
  • If the program completed normally, the output appeared on the Printer.
  • These early systems presented two main problems.

Scheduling

  • Most installations used a hardcopy sign-up sheet to reserve computer time. A user might sign up for an hour but finish the job in about 45 minutes, wasting the remaining reserved processing time.
  • Also, the user might run into problems, not finish in the allotted time, and be forced to stop before resolving the problem.

Setup Time

  • A single program, called a job, could involve loading the compiler and the high-level language program (Source program) into the memory, saving the compiled program, and then loading and linking with common functions.
  • Each of these steps involved tapes or card decks. If an error occurred, the user had to go back to the beginning of the setup sequence.
  • Thus, lots of time was spent just setting up the program to run.

Batch Operating System

  • Early computers were very expensive and very large (physically), and therefore, it was important to maximize processor utilization. The wasted time due to scheduling and setup was unacceptable.
  • To improve utilization and to speed up processing, jobs with similar needs were batched together and run through the computer as a group.
  • Thus, the programmer would leave their programs with the operator. The operator would sort programs into batches with similar requirements, and as the computer became available, would run each batch.
Figure 1: Batch Operating System Diagram
  • The central idea behind the batch processing scheme was the use of a piece of software called the Monitor.
  • The main drawback of a batch system is the lack of interaction between the user and the job while that job is executing.
  • In this execution environment, the CPU is often idle because mechanical I/O devices are much slower than the CPU.
  • To overcome the problem of speed mismatch in batch operating systems, the concept of “Spooling” was introduced.
  • Improvements in technology and the introduction of disks have resulted in faster I/O devices. The introduction of disk technology allows the operating system to keep all jobs on a disk, rather than on a card reader.
  • With all jobs stored on disk and directly accessible, the operating system can perform job scheduling to accomplish its tasks efficiently.
  • Scheduling in a batch system is very simple: jobs are run in FCFS (First Come, First Served) order.
  • Memory Management is also very simple.
  • Since at most one program is in execution at any time, a batch system does not require any time-critical device management.
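The FCFS order described above can be sketched in a few lines. This is a minimal illustration, not an actual batch monitor; the burst times are hypothetical and time is measured in abstract units:

```python
# Minimal sketch of FCFS (First Come, First Served) batch scheduling.
# Jobs run strictly in arrival order; each job waits for all earlier jobs.

def fcfs(burst_times):
    """Return (waiting_times, turnaround_times) for jobs run in order."""
    waiting, turnaround = [], []
    clock = 0
    for burst in burst_times:
        waiting.append(clock)        # job waits until all earlier jobs finish
        clock += burst
        turnaround.append(clock)     # turnaround = waiting time + burst time
    return waiting, turnaround

if __name__ == "__main__":
    w, t = fcfs([24, 3, 3])
    print(w, t)   # [0, 24, 27] [24, 27, 30]
```

Note how a long first job (24 units) inflates the waiting time of every job behind it; this is exactly why batch systems maximize processor use but give poor response times.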

SPOOLING (Simultaneous Peripheral Operations Online)

  • Spooling is a technique that minimizes the problems caused by slow input/output devices and shares system resources so that processing completes efficiently.
  • Spooling essentially uses the disk as a very large buffer for reading as far ahead as possible on input devices and for storing output files until the output devices can accept them.
  • Figure 2 shows the concept of spooling.
Figure 2: Spooling Diagram
  • Spooling is used for processing data at remote sites. The remote processing is done at its own speed with no CPU intervention.
  • Spooling overlaps the I/O of one job with the computations of other jobs.
  • Spooling directly affects the performance of the system. It can keep both the CPU and the I/O devices working at much higher rates.
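As a toy illustration (not how a real spooler is implemented), the disk buffer can be modeled as a queue that decouples the fast CPU from a slow printer running in its own thread; all the names here are made up:

```python
# Toy spooling sketch: the "CPU" deposits output into a spool (a queue
# standing in for the disk buffer) and moves on immediately, while a
# separate thread feeds the slow printer at the printer's own pace.
import queue
import threading
import time

spool = queue.Queue()   # stands in for the disk buffer
printed = []

def printer():
    while True:
        job = spool.get()
        if job is None:         # sentinel: no more jobs
            break
        time.sleep(0.01)        # slow mechanical device
        printed.append(job)

t = threading.Thread(target=printer)
t.start()

# The CPU spools three outputs instantly instead of waiting on the printer.
for name in ["job1", "job2", "job3"]:
    spool.put(name)

spool.put(None)
t.join()
print(printed)   # ['job1', 'job2', 'job3']
```

The CPU's loop finishes almost immediately even though each "print" takes 10 ms, which is the overlap of I/O and computation that the bullet points describe.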

Multiprogramming Batched System

  • In a multiprogramming batched system, the operating system keeps several jobs in memory at a time, as shown in Figure 3. This set of jobs is a subset of the jobs kept in the job pool (several jobs that have been read, waiting on disk, ready to run).
  • The operating system picks one of the jobs in memory and begins to execute it. Eventually, the job may have to wait for some task, such as an I/O operation, to complete.
  • In a nonmultiprogrammed system, the CPU would sit idle. However, in a multiprogramming system, the operating system simply switches to and executes another job.
  • When that job needs to wait, the CPU is switched to another job, and so on. Eventually, the first job finishes waiting and gets the CPU back.
  • As long as at least one job is ready to execute, the CPU is never idle.
Figure 3: Multiprogramming Diagram
  • Multiprogramming is the first instance where the operating system must make decisions for the users.
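This switching behavior can be sketched in a toy simulation, under the simplifying assumption that a job's I/O always completes by the time the job reaches the front of the ready queue again; job names and burst lengths are invented:

```python
# Toy multiprogramming simulation: when the running job blocks for I/O,
# the CPU switches to the next ready job instead of sitting idle.
from collections import deque

def simulate(jobs):
    """jobs: list of (name, bursts), where bursts is a list of
    ('cpu', n) or ('io', n) tuples. Returns the per-tick CPU trace."""
    ready = deque(jobs)
    trace = []
    while ready:
        name, bursts = ready.popleft()
        kind, duration = bursts.pop(0)      # leading burst is CPU work
        trace.extend([name] * duration)     # the job computes
        if bursts and bursts[0][0] == "io":
            bursts.pop(0)                   # job blocks; its I/O overlaps
            if bursts:                      # with other jobs' computation
                ready.append((name, bursts))
    return trace

trace = simulate([("A", [("cpu", 2), ("io", 1), ("cpu", 1)]),
                  ("B", [("cpu", 3)])])
print(trace)   # ['A', 'A', 'B', 'B', 'B', 'A']
```

While job A waits for its I/O, the CPU runs job B; on a nonmultiprogrammed system those three ticks would simply be idle.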

Time Sharing System

  • Time sharing or Multitasking is a logical extension of multiprogramming. In this technique, processor time is shared among multiple users.
  • Multiple jobs are executed by the CPU switching between them, but the switches occur so frequently that the user may interact with each program while it is running.
  • Time-sharing systems were developed to provide interactive use of a computer system at a reasonable cost.
  • A Time shared operating system uses CPU scheduling (If several jobs are ready to run at the same time, the system must choose among them. Making this decision is CPU scheduling) and multiprogramming to provide each user with a small portion of a time-shared computer.
  • Both Batch Processing and Time Sharing use multiprogramming. The key differences are listed in Table 1 below:
                             Batch Multiprogramming                        Time Sharing
Principal Objective          Maximize processor use                        Minimize response time
Source of Directives to OS   Job Control Language commands with the job    Commands entered at the terminal
Table 1: Differences between Batch Multiprogramming and Time Sharing
  • One of the first time-sharing operating systems to be developed was the Compatible Time Sharing System (CTSS).
  • In CTSS, the system clock generated an interrupt every 0.2 seconds; at each clock interrupt, the OS regained control and could assign the processor to another user. This technique is known as Time Slicing.
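Time slicing can be sketched as a round-robin loop over the ready queue. CTSS used a quantum of roughly 0.2 seconds; here the quantum is an abstract unit of 2, and the job names and remaining times are invented:

```python
# Sketch of time slicing (round robin): each ready job receives one fixed
# quantum in turn, then goes to the back of the queue if work remains.
from collections import deque

def round_robin(jobs, quantum=2):
    """jobs: dict of name -> remaining time. Returns the execution order."""
    ready = deque(jobs.items())
    order = []
    while ready:
        name, remaining = ready.popleft()
        order.append(name)                   # job gets the processor
        remaining -= quantum
        if remaining > 0:
            ready.append((name, remaining))  # back of the ready queue
    return order

print(round_robin({"A": 4, "B": 3, "C": 1}))
# ['A', 'B', 'C', 'A', 'B']
```

Because no job can hold the processor longer than one quantum, every user sees a response within a bounded delay, which is the "minimize response time" objective from Table 1.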

Consider an example: assume that there are four interactive users with the following memory requirements.

  • JOB 1: 1500
  • JOB 2: 2000
  • JOB 3: 500
  • JOB 4: 1000

(a) Initially, the monitor loads JOB1 and transfers control to it.

Figure 4: JOB1

(b) Later, the monitor decides to transfer control to JOB2. Because JOB2 requires more memory than JOB1, JOB1 must be written out first, and then JOB2 can be loaded. It is shown in Figure 5.

Figure 5: JOB2

(c) Next JOB3 is loaded to be run. Because JOB3 is smaller than JOB2, a portion of JOB2 can remain in memory, reducing disk write time. It is shown in Figure 6.

Figure 6: JOB 2 & JOB 3

(d) Now the Monitor decides to transfer control back to JOB1. An additional portion of JOB2 must be written out, and JOB1 is loaded back into memory.

Figure 7: JOB 1 & JOB 2

(e) When JOB4 is loaded, part of JOB1 and a portion of JOB2 remain in memory and are retained.

Figure 8: JOB 4 & JOB 1 & JOB 2

(f) At this point, either JOB1 or JOB2 is activated, and only a partial load will be required. If JOB1 is to run next, then JOB4 and the resident portion of JOB2 will be written out, and the missing portions of JOB1 will be read in.

Figure 9: JOB1
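The bookkeeping in steps (a) through (f) can be sketched as a small function. The memory capacity and the eviction policy (oldest resident first, partial write-out allowed, as in steps (c) to (f)) are assumptions made for illustration:

```python
# Sketch of swapping bookkeeping: to load a job into a fixed-size memory,
# resident jobs are (partially) written out, oldest first, until it fits.
# The capacity and eviction policy are illustrative assumptions.

def load_job(resident, capacity, name, size):
    """resident: list of (name, units_in_memory), oldest first.
    Returns (resident, units_written_out) after loading the new job."""
    free = capacity - sum(sz for _, sz in resident)
    written_out = 0
    while free < size and resident:
        old_name, old_size = resident[0]
        evicted = min(old_size, size - free)   # write out only what is needed
        written_out += evicted
        free += evicted
        if evicted == old_size:
            resident.pop(0)                    # job fully swapped out
        else:
            resident[0] = (old_name, old_size - evicted)
    resident.append((name, size))
    return resident, written_out

# Like step (c): loading JOB3 (500) while JOB2 (2000) is resident in a
# hypothetical 2200-unit user area forces only part of JOB2 out.
mem, out = load_job([("JOB2", 2000)], 2200, "JOB3", 500)
print(mem, out)   # [('JOB2', 1700), ('JOB3', 500)] 300
```

Tracking how much must be written out makes the example's point concrete: keeping partial residents in memory reduces disk traffic on every switch.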

Parallel Systems

  • Early computers were mostly single-processor systems, that is, they had only one main CPU. Nowadays, however, multiprocessor systems are widely used. Such systems have more than one processor sharing resources (memory, I/O devices, buses, etc.) and are referred to as Tightly Coupled Systems.
  • Parallel Operating Systems are primarily concerned with managing the resources of parallel machines.
  • The advantage of building these systems is to increase the throughput (number of processes completed per unit time).
  • Another advantage of a multiprocessor system is increased reliability: the failure of one processor will not halt the system.

Symmetric and Asymmetric Multiprocessing

  • In symmetric multiprocessing, each processor runs an identical copy of the operating system, and these copies communicate with one another as needed. Most multiprocessor systems use symmetric multiprocessing.
  • In asymmetric multiprocessing, each processor is assigned a specific task. A master processor controls the system; the other processors either look to the master for instructions or have predefined tasks. The master processor schedules and allocates work to the slave processors.
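The master/worker relationship of asymmetric multiprocessing can be illustrated with a thread pool (a deliberate simplification: threads stand in for slave processors, and the function name is made up):

```python
# Toy sketch of the asymmetric (master/worker) pattern: the master hands
# out predefined tasks; workers never make scheduling decisions themselves.
from concurrent.futures import ThreadPoolExecutor

def predefined_task(n):
    return n * n   # the fixed task assigned by the master

# The "master" allocates work to two "slave" workers and collects results.
with ThreadPoolExecutor(max_workers=2) as master:
    results = list(master.map(predefined_task, [1, 2, 3, 4]))
print(results)   # [1, 4, 9, 16]
```

All scheduling decisions live in one place (the pool acting as master), which mirrors the single point of control, and the single point of failure, of an asymmetric design.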

Distributed Systems

  • In a distributed system, computation is divided among several processors. Each processor has its own local memory; the processors do not share memory or a clock, as they do in a tightly coupled system.
  • These systems are referred to as loosely coupled systems. The processors in these systems may vary in size and function.
  • They may include small microprocessors, workstations, minicomputers, etc. These processors are referred to by different names, such as nodes or sites.

Reasons for Building a Distributed System

  1. Resource Sharing – If a number of different nodes are connected, then a user at one node may be able to use resources available at another.
  2. Computation Speedup – If a particular computation can be partitioned into subcomputations that can run concurrently, a distributed system allows the computation to be distributed among the various sites.
  3. Reliability – Since the system includes several sites, if one site fails, the remaining sites can continue operating.
  4. Communication – When many sites are connected by a communication network, the processes at different sites have the opportunity to exchange information.

Real-Time System

  • The primary objective of a real-time system is to provide a quick event-response time and thus meet scheduling deadlines.
  • A real-time operating system (RTOS) is an operating system that guarantees a certain capability within a specified time constraint.
  • A real-time system is used when there are rigid time constraints on the operation of a processor or the flow of data.
  • Processing must be done within the defined constraints, or the system will fail.

Real-time systems can be classified into two categories:

  1. Hard Real Time System – This system guarantees that critical tasks must be completed on time. In this type of system, if any deadline is missed, something catastrophic will occur, or the system will fail. For example, Rocket Launching, Flight Control system.
  2. Soft Real-Time System – This system is less restrictive than hard real-time. In this, if certain deadlines are missed, nothing catastrophic will occur, but performance will be degraded. For example, a multimedia system, Video Conferencing.
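The difference between the two categories can be sketched as a deadline check. This is only an illustration of the policy, not a real RTOS mechanism; the function name and deadlines are invented:

```python
# Sketch of hard vs. soft deadline handling: a hard real-time task treats
# a missed deadline as outright failure; a soft one merely records the
# miss so quality of service can degrade gracefully.
import time

def run_with_deadline(task, deadline_s, hard=False):
    start = time.monotonic()
    result = task()
    elapsed = time.monotonic() - start
    missed = elapsed > deadline_s
    if missed and hard:
        raise RuntimeError(
            f"hard deadline missed ({elapsed:.3f}s > {deadline_s}s)")
    return result, missed

# A soft miss degrades service but the system keeps running:
value, missed = run_with_deadline(lambda: sum(range(1000)), deadline_s=1.0)
print(value, missed)   # 499500 False
```

A hard real-time call would instead raise on any miss; in a flight-control setting the equivalent of that exception is system failure, which is why hard systems are designed so the deadline is provably always met.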