Discuss the Essentials of Organization and Architecture.
Device management is the component of the operating system that controls hardware devices. A modern computer contains a wide range of devices, such as keyboards, USB drives, network devices, printers, mice, display adapters and disk drives. Managing these devices is so critical that a whole subsystem, the device management component, is devoted to it. To manage input/output (I/O) devices effectively, the device manager must do the following. Firstly, it must standardize device interfaces, which makes it easier to accommodate new devices in existing systems as they are developed. Secondly, it must be flexible enough to accommodate unforeseen devices with which the existing interfaces do not easily fit. Below is a discussion of how the device management component manages the operating system's I/O devices (McHoes & Flynn, 2017).
To clearly understand the management of I/O requests, one prerequisite should be considered: the components of the I/O hardware. I/O devices fall mainly into three categories: user interface, storage and communications (for example, networks). Devices communicate with the computer via ports, buses and adapters (Null & Lobur, 2006). Buses are the key components that facilitate communication within the internal system of a computer. They enforce rigid protocols for each type of message transferred across the system, and they control and resolve scheduling contention issues. There are different types of bus, namely the PCI bus, the expansion bus, the SCSI bus and the daisy-chain bus. The PCI bus serves high-speed, high-bandwidth devices that connect to the RAM and CPU, while the expansion bus handles low-bandwidth devices that use buffering (Hornberg, 2017).
I/O scheduling refers to the method by which the order of I/O operations is decided; it chiefly addresses the way memory disks are accessed. A disk rotates at a constant speed, and the disk head must be positioned over the correct track before it can read or write. The time taken to move the head into this position is known as the seek time. The head must then wait for the desired sector to rotate under it before reading can start, a delay known as the rotational delay. The seek time and the rotational delay together make up the disk access time, and one purpose of scheduling is to reduce this access time (Brucker, 2013).
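The relationship between seek time, rotational delay and access time described above can be sketched with a small calculation. The rotational speed and seek time below are illustrative values, not figures for any specific drive:

```python
# Estimate disk access time = seek time + average rotational delay.
# On average the desired sector is half a revolution away, so the
# average rotational delay is half the time of one full revolution.

def avg_rotational_delay_ms(rpm: float) -> float:
    """Average rotational delay in milliseconds for a disk spinning at `rpm`."""
    ms_per_revolution = 60_000 / rpm     # 60,000 ms per minute
    return ms_per_revolution / 2

def access_time_ms(seek_ms: float, rpm: float) -> float:
    """Disk access time: seek time plus average rotational delay."""
    return seek_ms + avg_rotational_delay_ms(rpm)

# For a 7200 RPM disk, one revolution takes ~8.33 ms, so the average
# rotational delay is ~4.17 ms.
print(round(avg_rotational_delay_ms(7200), 2))   # 4.17
print(round(access_time_ms(9.0, 7200), 2))       # 13.17 (with a 9 ms seek)
```

This makes concrete why scheduling matters: the mechanical delays dominate, so ordering requests to minimize head movement directly reduces total access time.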
Common purposes and goals of scheduling include improving the overall efficiency of the computer system, prioritizing certain I/O processes and requests, sharing the disk bandwidth among running processes, and making sure that certain requests are processed before a particular deadline is reached. Below are some of the common scheduling techniques employed by the device manager (Goodwin, 2013).
CPU scheduling decisions occur under different circumstances. When a process is waiting for an I/O request or for the termination of a child process, it is switched from the running state to the waiting state; scheduling that occurs only at such points is known as non-preemptive scheduling. In non-preemptive scheduling, once the CPU is allocated to a process, the process does not release the CPU until it terminates or switches to the waiting state. Microsoft Windows 3.1, for example, used this type of scheduling because no special hardware is required for it (Dale, 2015). In contrast, when an interrupt switches a process from the running state to the ready state, the scheduling is preemptive. Preemptive scheduling works on task priorities: a higher-priority task is run before any other task, which can cause running tasks to be interrupted so that computing resources go to the high-priority task. Moreover, when a process completes and is terminated, another process is moved from the waiting state to the ready state, and the scheduler determines the next course of action (Haldar & Aravind, 2010).
The first-come, first-served (FCFS) algorithm executes the first process to arrive, that is, the first process to request the CPU. It uses a queue data structure: a process enters the queue at the tail, and the scheduler picks processes from the head. A real-life analogy is a queue in a bank. The waiting time of the last process to arrive is therefore the sum of the burst times of the processes ahead of it. As fair as this technique is, it has shortcomings (Computergrad, 2012).
Firstly, the algorithm is non-preemptive, so process priority is not exercised. For example, suppose a low-priority process such as a daily backup routine is running when the system is about to crash, and an interrupt is raised to prevent the crash. The high-priority interrupt must wait for the low-priority backup routine to terminate first, and in this situation the system will eventually crash. Additionally, parallel resource utilization is not possible, which leads to a convoy effect that causes poor system performance. A convoy effect occurs when one process holds a resource for a long time and blocks other processes that need that resource only briefly. Lastly, the algorithm gives no optimal average waiting time, because different processes take different amounts of time to execute (Computergrad, 2012).
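The FCFS waiting-time calculation described above can be sketched briefly. The burst times are made-up values chosen to show the convoy effect, with one long job arriving first:

```python
# FCFS sketch, assuming all processes arrive at time 0.
# Each process waits for the sum of the burst times of those ahead of it.

def fcfs_waiting_times(bursts):
    """Return each process's waiting time under FCFS, in arrival order."""
    waits, elapsed = [], 0
    for burst in bursts:
        waits.append(elapsed)   # this process waits for everything before it
        elapsed += burst        # it then occupies the CPU for its burst
    return waits

bursts = [24, 3, 3]             # P1 (long) arrives first, then P2, P3
waits = fcfs_waiting_times(bursts)
print(waits)                    # [0, 24, 27]
print(sum(waits) / len(waits))  # average waiting time: 17.0
```

The two short jobs are stuck behind the 24-unit job, which is exactly the convoy effect: a different ordering of the same workload yields a much lower average wait.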
The shortest job first (SJF) algorithm works on the principle that the process with the shortest duration, or burst time, is executed first. To implement this algorithm, the processor must know the processes' burst times in advance, which in practice is not always the case. For the algorithm to be optimal, all the jobs to be scheduled must arrive for execution at the same time. The algorithm takes two approaches: non-preemptive shortest job first and preemptive shortest job first.
In the non-preemptive approach, the processes do not arrive at the same time, so they are not all available in the ready queue when scheduling begins. Therefore, when a process with a shorter burst time arrives later, it must wait for the current process to finish execution even if the running process has a longer burst time. This causes a starvation problem, where shorter jobs keep arriving in the queue but do not get the resources they require (Computergrad, 2012).
In the preemptive approach, processes are put into the ready queue as they arrive, and the process with the shortest remaining burst time is scheduled for execution; the current, longer-running process is preempted to release the resources.
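The non-preemptive SJF case can be sketched under the assumption, noted above, that all jobs arrive at time 0. The burst times are illustrative values:

```python
# Non-preemptive SJF sketch: all jobs arrive at time 0, so the scheduler
# simply runs them in order of increasing burst time.

def sjf_waiting_times(bursts):
    """Return each process's waiting time under SJF, in arrival order."""
    order = sorted(range(len(bursts)), key=lambda i: bursts[i])
    waits, elapsed = [0] * len(bursts), 0
    for i in order:             # visit jobs shortest-burst first
        waits[i] = elapsed      # job i waits for all shorter jobs before it
        elapsed += bursts[i]
    return waits

bursts = [24, 3, 3]             # same illustrative workload, P1 is long
waits = sjf_waiting_times(bursts)
print(waits)                    # [6, 0, 3]
print(sum(waits) / len(waits))  # average waiting time: 3.0
```

Running the short jobs first drops the average wait from 17 to 3 for this workload, which is why SJF is provably optimal for average waiting time when all jobs are available simultaneously.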
The device manager divides the management task, and each subtask is handled by a software component of the I/O system. Traffic control is one of these tasks: the I/O traffic controller constantly checks the status of all control units, devices and channels. The traffic controller carries out three main jobs. Firstly, it determines whether at least one path to the device is available. Secondly, when more than one path is available, it must decide which path is the optimal one to select. Thirdly, when all paths are busy, it must determine when a path will become available for use. These jobs grow more complex as the number of control units increases and as the number of paths through them increases. To perform these tasks effectively, the traffic controller maintains a database that stores the status and connection paths of every unit in the I/O system, categorized according to the blocks they belong to: the control unit block, the channel control block and the device control block (Chopra, 2009).
When there is an I/O request, the traffic controller chooses a free path and traces it back from the requested device. If no path is available, such as under heavy traffic, the process is kept in the queues linked to the requesting device, the corresponding channel and the corresponding control unit. This leads to multiple queues, each assigned a path. Later, when a path becomes available, the traffic controller selects the first process from the queue (Chopra, 2009).
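The queueing behaviour described above can be sketched with a toy model. This is a deliberate simplification: it assumes each path is simply free or busy and uses a single FIFO queue rather than per-device, per-channel and per-control-unit queues; the class and method names are hypothetical:

```python
from collections import deque

class TrafficController:
    """Toy I/O traffic controller: hand out free paths, queue blocked requests."""

    def __init__(self, paths):
        self.free = set(paths)       # paths currently available
        self.waiting = deque()       # requests queued while all paths are busy

    def request(self, device):
        """Try to allocate a path for `device`; queue the request if none free."""
        if self.free:
            return self.free.pop()   # any free path will do in this toy model
        self.waiting.append(device)  # all paths busy: wait in FIFO order
        return None

    def release(self, path):
        """A path became free: hand it to the first waiting request, if any."""
        if self.waiting:
            device = self.waiting.popleft()
            return device, path      # the queued request now owns this path
        self.free.add(path)          # nobody waiting: mark the path free
        return None
```

For example, with a single path, a second request is queued and is served the moment the first request releases the path, mirroring how the real traffic controller dispatches the head of a queue when a path frees up.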
Conclusion
In a nutshell, the device manager, a component of the operating system, facilitates all interaction between the user and the applications, and between the applications and the hardware. Most operating systems are constructed in a general manner so that hardware manufacturers need only customize their hardware to suit the operating system's specific requirements; this allows devices from many different manufacturers to run on the same operating system. All I/O requests are handled by a dedicated control subsystem that enhances performance, quality of delivery and conflict resolution between processes.
References
Brucker, P., 2013. Scheduling Algorithms. 4th ed. Berlin: Springer Science & Business Media.
Chopra, R., 2009. Operating System (A Practical App). New Delhi: S. Chand Publishing.
Computergrad, 2012. FCFS CPU Scheduling Algorithm. [Online]
Available at: https://operatingsystemfundas.blogspot.co.ke/2012/05/post5.html
[Accessed 20 April 2018].
Computergrad, 2012. SJF CPU Scheduling Algorithm. [Online]
Available at: https://operatingsystemfundas.blogspot.co.ke/2012/05/post6.html
[Accessed 20 April 2018].
Dale, N. B., 2015. Computer Science Illuminated. s.l.: Jones & Bartlett Publishers.
Goodwin, D., 2013. Operating Systems Lecture #4: I/O Management. In: Understanding Operating Systems. 4th ed. Bedfordshire: s.n.
Haldar, S. & Aravind, A. A., 2010. Operating Systems. Illustrated ed. London: Pearson.
Hornberg, A., 2017. Handbook of Machine and Computer Vision: The Guide for Developers and Users. New York: John Wiley & Sons.
McHoes, A. & Flynn, I. M., 2017. Understanding Operating Systems. Illustrated ed. Boston: Cengage Learning.
Null, L. & Lobur, J., 2006. The Essentials of Computer Organization and Architecture. Illustrated ed. Massachusetts: Jones & Bartlett Learning.