
Deez Nuts

October 22, 2017  •  2,039 Words (9 Pages)


...

When you start a process, it is new. After all appropriate structures are initialized, it becomes ready. At this point, it will probably be added to a list of processes ready to run (commonly called a runqueue). When the operating system decides that it is that process' turn, its state will be changed to running and it will get some time on the CPU.
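The lifecycle described above can be sketched as a small state machine. The states and transitions below are a simplified textbook model, not the implementation of any particular kernel:

```python
from enum import Enum, auto

class State(Enum):
    NEW = auto()
    READY = auto()
    RUNNING = auto()
    WAITING = auto()
    TERMINATED = auto()

# Legal transitions in the simplified process lifecycle.
TRANSITIONS = {
    State.NEW: {State.READY},
    State.READY: {State.RUNNING},
    State.RUNNING: {State.READY, State.WAITING, State.TERMINATED},
    State.WAITING: {State.READY},
    State.TERMINATED: set(),
}

class Process:
    def __init__(self, pid):
        self.pid = pid
        self.state = State.NEW        # a new process starts here

    def transition(self, new_state):
        if new_state not in TRANSITIONS[self.state]:
            raise ValueError(f"illegal transition {self.state} -> {new_state}")
        self.state = new_state

p = Process(pid=1)
p.transition(State.READY)    # structures initialized, placed on the runqueue
p.transition(State.RUNNING)  # the scheduler gives it CPU time
```

Encoding the transitions explicitly makes it easy to catch impossible moves, such as a process jumping straight from new to running.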

Scheduling: Each user has at least one separate program in memory. A program loaded into memory is called a process. When a process executes, it typically runs for only a short time before it finishes or needs to perform an I/O operation. Time sharing and multiprogramming require that several jobs be kept simultaneously in memory. If several jobs are ready to be brought into memory and there is not enough room for all of them, the system must choose among them; this is job scheduling. When the system selects a job from the job pool, it loads that job into memory for execution. Having many jobs in memory simultaneously requires memory management. If several jobs are ready to run at the same time, the system must also decide which job runs first; this is CPU scheduling.
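As a concrete illustration of CPU scheduling under time sharing, here is a minimal round-robin simulation. The process names, burst times, and quantum are invented for the example; a real scheduler also tracks priorities, I/O waits, and much more:

```python
from collections import deque

def round_robin(bursts, quantum):
    """Simulate round-robin CPU scheduling.

    bursts: dict mapping process name -> remaining CPU burst time.
    Returns the order in which processes finish.
    """
    runqueue = deque(bursts.items())
    finished = []
    while runqueue:
        name, remaining = runqueue.popleft()
        if remaining > quantum:
            # Quantum expires: preempt and move to the back of the runqueue.
            runqueue.append((name, remaining - quantum))
        else:
            finished.append(name)
    return finished

# Three ready jobs compete for the CPU with a quantum of 2 time units.
order = round_robin({"A": 5, "B": 2, "C": 4}, quantum=2)  # -> ["B", "C", "A"]
```

The short job B finishes first even though A was queued ahead of it, which is exactly the responsiveness time sharing aims for.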

Time Sharing/Events: The operating system must ensure a reasonable response time. This is accomplished through swapping, in which processes are moved between main memory and the disk. A time-sharing system must also provide a file system, which resides on a collection of disks. Modern operating systems are interrupt driven: the operating system sits quietly waiting for something to happen. Events are almost always signaled by an interrupt or a trap, a software-generated interrupt caused either by an error or by a specific request from a user program that an operating-system service be performed.

[pic 5]

The figure above shows the swapping of two processes using a disk as a backing store.

Process Management: A program does nothing unless its instructions are executed by a CPU. A program in execution is a process. A process needs certain resources, including CPU time, memory, files, and I/O devices to accomplish the task at hand. These resources are either given to the process when it is created or allocated to it while it is running. When the process terminates, the operating system will reclaim any reusable resources. The operating system is responsible for the following activities in process management: scheduling processes and threads on the CPUs, creating and deleting both user and system processes, suspending and resuming processes, and providing mechanisms for process synchronization and process communication.

[pic 6]

The picture above shows process management in a multi-tasking operating system.
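One classic mechanism behind process synchronization and communication is a bounded buffer shared between a producer and a consumer. The sketch below uses Python threads as stand-ins for processes; a real operating system would provide the same idea through pipes, message queues, or shared memory:

```python
import threading
import queue

# A bounded queue provides both synchronization (blocking put/get)
# and communication between the two cooperating threads.
buf = queue.Queue(maxsize=2)
results = []

def producer():
    for item in range(3):
        buf.put(item)   # blocks while the buffer is full
    buf.put(None)       # sentinel: no more items

def consumer():
    while (item := buf.get()) is not None:
        results.append(item)

t1 = threading.Thread(target=producer)
t2 = threading.Thread(target=consumer)
t1.start(); t2.start()
t1.join(); t2.join()   # results is now [0, 1, 2]
```

The blocking `put` and `get` calls do the synchronization implicitly, so neither side needs to busy-wait or poll.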

Memory Management: For a program to be executed, it must be mapped to absolute addresses and loaded into memory. As the program executes, it accesses program instructions and data from memory by generating these absolute addresses. Eventually, the program terminates, its memory space is declared available, and the next program can be loaded and executed. The operating system is responsible for the following activities in memory management: keeping track of which parts of memory are currently being used and who is using them, deciding which processes (or parts of processes) and data to move into and out of memory, and allocating and deallocating memory space as needed.

[pic 7]

The picture above shows an example of memory management in an operating system.
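Allocating and deallocating memory space can be illustrated with a first-fit placement sketch. First-fit is one common policy among several (best-fit, worst-fit), and the hole list and request size below are invented for the example:

```python
def first_fit(free_list, request):
    """Allocate `request` units from a list of (start, size) holes,
    using the first hole that is large enough (first-fit placement).
    Returns the base address of the allocation, or None on failure."""
    for i, (start, size) in enumerate(free_list):
        if size >= request:
            if size == request:
                free_list.pop(i)                        # hole consumed exactly
            else:
                free_list[i] = (start + request, size - request)
            return start
    return None                                         # no hole big enough

holes = [(0, 100), (200, 50), (300, 300)]
addr = first_fit(holes, 120)   # first two holes are too small
```

The allocation carves 120 units out of the third hole, leaving a smaller hole behind; the leftover fragments are what external fragmentation refers to.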

File-System Management: A file is a collection of related information defined by the creator. Files represent programs and data. The operating system implements the abstract concept of a file by managing mass-storage media and the devices that control them. Files are normally organized into directories to make them easier to use. The operating system is responsible for the following activities in connection with file management: creating and deleting files, creating and deleting directories to organize files, supporting primitives for manipulating files and directories, mapping files onto secondary storage, and backing up files on stable storage media.
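The hierarchical organization of files into directories can be modeled very simply with nested dictionaries, where a directory maps names to child entries and a file maps to its contents. The `create` and `lookup` functions below are illustrative primitives, not a real file-system API:

```python
# Model a directory tree as nested dicts: a directory is a dict,
# a file is a string holding its contents.
root = {}

def create(path, contents=None):
    """Create a directory (contents is None) or a file (contents is a str)."""
    *dirs, name = path.strip("/").split("/")
    node = root
    for d in dirs:
        node = node.setdefault(d, {})   # descend, creating directories as needed
    node[name] = {} if contents is None else contents

def lookup(path):
    """Walk the tree one path component at a time."""
    node = root
    for part in path.strip("/").split("/"):
        node = node[part]
    return node

create("home")                  # a directory
create("home/notes.txt", "hi")  # a file inside it
```

The operating system's real job is mapping this abstract tree onto blocks of secondary storage, but the naming structure users see is essentially this.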

Mass-Storage Management: Most programs are stored on a disk until they are loaded into memory; they then use the disk as both the source and destination of their processing. The operating system is responsible for the following activities in connection with disk management: free-space management, storage allocation, and disk scheduling.
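Disk scheduling can be illustrated with the shortest-seek-time-first (SSTF) policy, one of several classic algorithms (FCFS, SCAN, C-SCAN). The head position and cylinder request queue below are an invented example:

```python
def sstf(head, requests):
    """Shortest-seek-time-first disk scheduling: repeatedly service the
    pending cylinder request closest to the current head position."""
    pending, order = list(requests), []
    while pending:
        nearest = min(pending, key=lambda cyl: abs(cyl - head))
        pending.remove(nearest)
        order.append(nearest)
        head = nearest          # the head is now at the serviced cylinder
    return order

# Head starts at cylinder 50; eight requests are pending.
order = sstf(50, [98, 183, 37, 122, 14, 124, 65, 67])
```

SSTF reduces total head movement compared with servicing requests in arrival order, though it can starve requests far from the head.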

Caching: A cache is a smaller, faster storage system that holds data on a temporary basis. When we need a particular piece of information, we first check whether it is in the cache. If it is, we use the information directly from the cache; if not, we fetch the information from its source and place a copy in the cache. Because caches have limited size, cache management is an important design problem.

[pic 8]

The picture above gives us an example of caching in an operating system.
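The check-then-fetch behavior described above, plus eviction once the cache fills, can be sketched as a small LRU cache. LRU (least recently used) is one common replacement policy among several, and the `source` function here stands in for the slower storage level:

```python
from collections import OrderedDict

class LRUCache:
    """Check the cache first; on a miss, fetch from the slow source,
    copy the value into the cache, and evict the least recently used
    entry when the cache exceeds its capacity."""
    def __init__(self, capacity, source):
        self.capacity = capacity
        self.source = source            # callable: key -> value
        self.entries = OrderedDict()    # oldest entry first

    def get(self, key):
        if key in self.entries:
            self.entries.move_to_end(key)       # hit: now most recently used
            return self.entries[key]
        value = self.source(key)                # miss: go to the slow source
        self.entries[key] = value               # place a copy in the cache
        if len(self.entries) > self.capacity:
            self.entries.popitem(last=False)    # evict the LRU entry
        return value

cache = LRUCache(2, source=lambda key: key * 10)
cache.get(1); cache.get(2)   # two misses fill the cache
```

A third distinct key would now force the least recently used of the two entries out, which is the "limited size" design problem the text mentions.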

I/O Systems: One of the purposes of an operating system is to hide the peculiarities of specific hardware devices from the user. The I/O subsystem consists of several components: a general device-driver interface, drivers for specific hardware devices, and a memory-management component that includes buffering, caching, and spooling.

[pic 9]

The picture above gives us an example of the use of a system call to perform I/O.
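The point of a general device-driver interface is that device-independent kernel code calls the same `read`/`write` operations no matter which driver sits underneath. The toy drivers below are purely illustrative and not modeled on any real driver API:

```python
# Device-independent code calls read/write without knowing the driver.
class NullDevice:
    def read(self, n):
        return b"\x00" * n     # always supplies zero bytes
    def write(self, data):
        return len(data)       # discards everything

class LoopbackDevice:
    def __init__(self):
        self.buf = bytearray()
    def write(self, data):
        self.buf += data
        return len(data)
    def read(self, n):
        out = bytes(self.buf[:n])
        del self.buf[:n]
        return out

def copy(src, dst, n):
    """Device-independent I/O: works with any driver exposing read/write."""
    return dst.write(src.read(n))

dev = LoopbackDevice()
copy(NullDevice(), dev, 3)   # three zero bytes flow through the uniform interface
```

Because `copy` depends only on the shared interface, adding a new device type requires no change to the code that uses it, which is exactly how the OS hides hardware peculiarities.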

Linux provides a programming interface and user interface compatible with standard UNIX systems and can run a large number of UNIX applications. The core Linux operating system is entirely original, but it allows much existing UNIX software to run, resulting in an entire UNIX-compatible operating system free from proprietary code. Linux is a multiuser system, providing protection between processes and running multiple processes according to a time-sharing scheduler. Newly created processes can share selected parts of their execution environment with their parent processes, allowing multithreaded programming. The memory-management system uses page sharing and copy-on-write to minimize the duplication of data shared by different processes. Pages are loaded on demand when they are first referenced and are paged back out to backing store according to an LFU algorithm if physical memory needs to be reclaimed. To the user, the file system appears as a hierarchical directory tree that obeys UNIX semantics. Internally, Linux uses an abstraction layer to manage

...
