Thursday, July 30, 2015

Synchronization


Thread Creation, Manipulation and Synchronization



  • We first must postulate a thread creation and manipulation interface. Will use the one in Nachos:
    class Thread {
      public:
        Thread(char* debugName); 
        ~Thread();
        void Fork(void (*func)(int), int arg);
        void Yield();
        void Finish();
    };
    
  • The Thread constructor creates a new thread. It allocates a data structure with space for the thread control block (TCB).
  • To actually start the thread running, we must tell it which function to run when it starts. The Fork method gives it the function and a parameter to pass to that function.
  • What does Fork do? It first allocates a stack for the thread. It then sets up the TCB so that when the thread starts running, it will invoke the function and pass it the correct parameter. It then puts the thread on a run queue someplace. Fork then returns, and the thread that called Fork continues.
  • How does the OS set up the TCB so that the thread starts running at the function? First, it sets the stack pointer in the TCB to the new stack. Then, it sets the PC in the TCB to the first instruction of the function. Then, it sets the register in the TCB that holds the first parameter to the parameter. When the thread system restores the state from the TCB, the function will magically start to run. A rough sketch appears below.
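  • Here is a minimal sketch (in ordinary C++, not the actual Nachos code) of the Fork/TCB setup just described. The TCB fields, STACK_SIZE, and readyQueue are hypothetical names invented for illustration; a real thread package fills in machine-dependent register state and does the final context setup in assembly.
    #include <cstddef>
    #include <deque>

    const std::size_t STACK_SIZE = 8192;        // hypothetical per-thread stack size

    struct TCB {
      char*  stackPointer;      // top of the thread's freshly allocated stack
      void (*pc)(int);          // the "first instruction": the function to run
      int    arg0;              // register slot that will hold the first parameter
    };

    std::deque<TCB*> readyQueue;                // the run queue of runnable threads

    void Fork(TCB* tcb, void (*func)(int), int arg) {
      char* stack = new char[STACK_SIZE];       // 1. allocate a stack for the thread
      tcb->stackPointer = stack + STACK_SIZE;   // 2. stacks typically grow downward
      tcb->pc = func;                           // 3. the thread starts at the function
      tcb->arg0 = arg;                          // 4. pass the parameter in a register
      readyQueue.push_back(tcb);                // 5. put the thread on the run queue
    }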
  • The system maintains a queue of runnable threads. Whenever a processor becomes idle, the thread scheduler grabs a thread off of the run queue and runs the thread.
  • Conceptually, threads execute concurrently. This is the best way to reason about the behavior of threads. But in practice, the OS only has a finite number of processors, and it can't run all of the runnable threads at once. So, must multiplex the runnable threads on the finite number of processors.
  • Let's do a few thread examples. First example: two threads that increment a variable.
    int a = 0;
    void sum(int p) { 
      a++;
      printf("%d : a = %d\n", p, a);
    }
    void main() {
      Thread *t = new Thread("child");
      t->Fork(sum, 1);
      sum(0);
    }
    
  • The two calls to sum run concurrently. What are the possible results of the program? To understand this fully, we must break the sum subroutine up into its primitive components.
  • sum first reads the value of a into a register. It then increments the register, then stores the contents of the register back into a. It then reads the values of the control string, p, and a into the registers that it uses to pass arguments to the printf routine. It then calls printf, which prints out the data.
  • The best way to understand the instruction sequence is to look at the generated assembly language (cleaned up just a bit). You can have the compiler generate assembly code instead of object code by giving it the -S flag. It will put the generated assembly in a file with the same name as the .c or .cc file, but with a .s suffix.
            la      a, %r0
            ld      [%r0],%r1
            add     %r1,1,%r1
            st      %r1,[%r0]
    
            ld      [%r0], %o3 ! parameters are passed starting with %o0
            mov     %o0, %o1
            la      .L17, %o0
            call    printf
    
  • So when the two threads execute concurrently, the result depends on how the instructions interleave. What are the possible results?
    0 : 1                                      0 : 1
    1 : 2                                      1 : 1
    
    1 : 2                                      1 : 1
    0 : 1                                      0 : 1
    
    1 : 1                                      0 : 2
    0 : 2                                      1 : 2
     
    0 : 2                                      1 : 2
    1 : 1                                      0 : 2
    
    So the results are nondeterministic - you may get different results when you run the program more than once. So, it can be very difficult to reproduce bugs. Nondeterministic execution is one of the things that makes writing parallel programs much more difficult than writing serial programs.
  • Chances are, the programmer is not happy with all of the possible results listed above. Probably wanted the value of a to be 2 after both threads finish. To achieve this, must make the increment operation atomic. That is, must prevent the interleaving of the instructions in a way that would interfere with the additions.
  • Concept of atomic operation. An atomic operation is one that executes without any interference from other operations - in other words, it executes as one unit. Typically build complex atomic operations up out of sequences of primitive operations. In our case the primitive operations are the individual machine instructions.
  • More formally, if several atomic operations execute, the final result is guaranteed to be the same as if the operations executed in some serial order.
  • In our case above, build an increment operation up out of loads, stores and add machine instructions. Want the increment operation to be atomic.
  • Use synchronization operations to make code sequences atomic. First synchronization abstraction: semaphores. A semaphore is, conceptually, a counter that supports two atomic operations, P and V. Here is the Semaphore interface from Nachos:
    class Semaphore {
      public:
        Semaphore(char* debugName, int initialValue);       
        ~Semaphore();                                      
        void P();
        void V();
    };
    
  • Here is what the operations do:
    • Semaphore(name, count) : creates a semaphore and initializes the counter to count.
    • P() : Atomically waits until the counter is greater than 0, then decrements the counter and returns.
    • V() : Atomically increments the counter.
  • Here is how we can use the semaphore to make the sum example work:
    int a = 0;
    Semaphore *s;
    void sum(int p) {
      int t;
      s->P();
      a++;
      t = a;
      s->V();
      printf("%d : a = %d\n", p, t);
    }
    void main() {
      Thread *t = new Thread("child");
      s = new Semaphore("s", 1);
      t->Fork(sum, 1);
      sum(0);
    }
    
  • We are using semaphores here to implement a mutual exclusion mechanism. The idea behind mutual exclusion is that only one thread at a time should be allowed to do something. In this case, only one thread should access a. Use mutual exclusion to make operations atomic. The code that performs the atomic operation is called a critical section.
  • Semaphores do much more than mutual exclusion. They can also be used to synchronize producer/consumer programs. The idea is that the producer is generating data and the consumer is consuming data. So a Unix pipe has a producer and a consumer. You can also think of a person typing at a keyboard as a producer and the shell program reading the characters as a consumer.
  • Here is the synchronization problem: make sure that the consumer does not get ahead of the producer. But, we would like the producer to be able to produce without waiting for the consumer to consume. Can use semaphores to do this. Here is how it works:
    Semaphore *s;
    void consumer(int dummy) {
      while (1) { 
        s->P();
        // consume the next unit of data
      }
    }
    void producer(int dummy) {
      while (1) {
        // produce the next unit of data
        s->V();
      }
    }
    void main() {
      s = new Semaphore("s", 0);
      Thread *t = new Thread("consumer");
      t->Fork(consumer, 1);
      t = new Thread("producer");
      t->Fork(producer, 1);
    }
    
    In some sense the semaphore is an abstraction of the collection of produced-but-not-yet-consumed data: its count tracks how many units are available for the consumer.
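  • For concreteness, here is a compilable sketch of the same producer/consumer idea written in standard C++ (std::thread, std::mutex, std::condition_variable) instead of the Nachos API. The Semaphore class below only illustrates the P/V semantics described earlier - it is not how Nachos implements semaphores - and the queue, loop counts, and names are invented for this example.
    #include <condition_variable>
    #include <cstdio>
    #include <mutex>
    #include <queue>
    #include <thread>

    class Semaphore {
      public:
        explicit Semaphore(int initialValue) : count(initialValue) {}
        void P() {                        // wait until count > 0, then decrement
          std::unique_lock<std::mutex> lock(m);
          cv.wait(lock, [this] { return count > 0; });
          --count;
        }
        void V() {                        // increment the count and wake one waiter
          std::lock_guard<std::mutex> lock(m);
          ++count;
          cv.notify_one();
        }
      private:
        std::mutex m;
        std::condition_variable cv;
        int count;
    };

    Semaphore dataAvailable(0);   // counts units produced but not yet consumed
    std::mutex queueLock;         // protects the shared queue itself
    std::queue<int> data;         // the "collection of data" the semaphore abstracts

    void producer() {
      for (int i = 0; i < 10; i++) {
        {                                  // produce the next unit of data
          std::lock_guard<std::mutex> lock(queueLock);
          data.push(i);
        }
        dataAvailable.V();                 // tell the consumer one more unit exists
      }
    }

    void consumer() {
      for (int i = 0; i < 10; i++) {
        dataAvailable.P();                 // wait until at least one unit exists
        std::lock_guard<std::mutex> lock(queueLock);
        int unit = data.front();           // consume the next unit of data
        data.pop();
        std::printf("consumed %d\n", unit);
      }
    }

    int main() {
      std::thread c(consumer);
      std::thread p(producer);
      p.join();
      c.join();
      return 0;
    }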

Processes and Threads




  • A process is an execution stream in the context of a particular process state.
    • An execution stream is a sequence of instructions.
    • Process state determines the effect of the instructions. It usually includes (but is not restricted to):
      • Registers
      • Stack
      • Memory (global variables and dynamically allocated memory)
      • Open file tables
      • Signal management information
      Key concept: processes are separated - no process can directly affect the state of another process.
  • Process is a key OS abstraction that users see - the environment you interact with when you use a computer is built up out of processes.
    • The shell you type stuff into is a process.
    • When you execute a program you have just compiled, the OS generates a process to run the program.
    • Your WWW browser is a process.
  • Organizing system activities around processes has proved to be a useful way of separating out different activities into coherent units.
  • Two concepts: uniprogramming and multiprogramming.
    • Uniprogramming: only one process at a time. Typical example: DOS. Problem: users often wish to perform more than one activity at a time (load a remote file while editing a program, for example), and uniprogramming does not allow this. So DOS and other uniprogrammed systems put in things like memory-resident programs that are invoked asynchronously, but these still have separation problems. One key problem with DOS is that there is no memory protection - one program may write the memory of another program, causing weird bugs.
    • Multiprogramming: multiple processes at a time. Typical of Unix plus all currently envisioned new operating systems. Allows system to separate out activities cleanly.
  • Multiprogramming introduces the resource sharing problem - which processes get to use the physical resources of the machine when? One crucial resource: CPU. Standard solution is to use preemptive multitasking - OS runs one process for a while, then takes the CPU away from that process and lets another process run. Must save and restore process state. Key issue: fairness. Must ensure that all processes get their fair share of the CPU.
  • How does the OS implement the process abstraction? Uses a context switch to switch from running one process to running another process.
  • How does the machine implement a context switch? A processor has a limited amount of physical resources. For example, it has only one register set. But every process on the machine has its own set of registers. Solution: save and restore hardware state on a context switch. Save the state in a Process Control Block (PCB). What is in the PCB? It depends on the hardware.
    • Registers - almost all machines save registers in PCB.
    • Processor Status Word.
    • What about memory? Most machines allow memory from multiple processes to coexist in the physical memory of the machine. Some may require Memory Management Unit (MMU) changes on a context switch. But, some early personal computers switched all of a process's memory out to disk (!!!).
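  • As a rough illustration of the list above, a PCB might be declared along these lines. The fields and names here are hypothetical and machine-dependent; real kernels keep much more (scheduling state, open-file tables, signal information, and so on).
    #include <cstdint>

    // Hypothetical, simplified PCB; real layouts vary with the hardware.
    struct ProcessControlBlock {
      std::uint64_t registers[32];    // general-purpose register save area
      std::uint64_t programCounter;   // where to resume the process
      std::uint64_t statusWord;       // processor status word (mode, flags, ...)
      void*         pageTable;        // MMU state: the address space to restore
    };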
  • Operating Systems are fundamentally event-driven systems - they wait for an event to happen, respond appropriately to the event, then wait for the next event. Examples:
    • User hits a key. The keystroke is echoed on the screen.
    • A user program issues a system call to read a file. The operating system figures out which disk blocks to bring in, and generates a request to the disk controller to read the disk blocks into memory.
    • The disk controller finishes reading in the disk block and generates an interrupt. The OS moves the read data into the user program and restarts the user program.
    • A Mosaic or Netscape user asks for a URL to be retrieved. This eventually generates requests to the OS to send request packets out over the network to a remote WWW server. The OS sends the packets.
    • The response packets come back from the WWW server, interrupting the processor. The OS figures out which process should get the packets, then routes the packets to that process.
    • Time-slice timer goes off. The OS must save the state of the current process, choose another process to run, then give the CPU to that process.
  • When you build an event-driven system with several distinct serial activities, threads are a key structuring mechanism of the OS.
  • A thread is again an execution stream in the context of a thread state. The key difference between processes and threads is that multiple threads share parts of their state. Typically, multiple threads are allowed to read and write the same memory. (Recall that no process can directly access the memory of another process.) But, each thread still has its own registers. It also has its own stack, but other threads can read and write the stack memory.
  • What is in a thread control block? Typically just registers. You don't need to do anything to the MMU when switching threads, because all threads can access the same memory.
  • Typically, an OS will have a separate thread for each distinct activity. In particular, the OS will have a separate thread for each process, and that thread will perform OS activities on behalf of the process. In this case we say that each user process is backed by a kernel thread.
    • When a process issues a system call to read a file, the process's thread will take over, figure out which disk accesses to generate, and issue the low-level instructions required to start the transfer. It then suspends until the disk finishes reading in the data.
    • When a process starts up a remote TCP connection, its thread handles the low-level details of sending out network packets.
  • Having a separate thread for each activity allows the programmer to program the actions associated with that activity as a single serial stream of actions and events. Programmer does not have to deal with the complexity of interleaving multiple activities on the same thread.
  • Why allow threads to access same memory? Because inside OS, threads must coordinate their activities very closely.
    • If two processes issue read file system calls at close to the same time, must make sure that the OS serializes the disk requests appropriately.
    • When one process allocates memory, its thread must find some free memory and give it to the process. Must ensure that multiple threads allocate disjoint pieces of memory.
    Having threads share the same address space makes it much easier to coordinate activities - can build data structures that represent system state and have threads read and write data structures to figure out what to do when they need to process a request.
  • One complication that threads must deal with: asynchrony. Asynchronous events happen arbitrarily as the thread is executing, and may interfere with the thread's activities unless the programmer does something to limit the asynchrony. Examples:
    • An interrupt occurs, transferring control away from one thread to an interrupt handler.
    • A time-slice switch occurs, transferring control from one thread to another.
    • Two threads running on different processors read and write the same memory.
  • Asynchronous events, if not properly controlled, can lead to incorrect behavior. Examples:
    • Two threads need to issue disk requests. First thread starts to program disk controller (assume it is memory-mapped, and must issue multiple writes to specify a disk operation). In the meantime, the second thread runs on a different processor and also issues the memory-mapped writes to program the disk controller. The disk controller gets horribly confused and reads the wrong disk block.
    • Two threads need to write to the display. The first thread starts to build its request, but before it finishes a time-slice switch occurs and the second thread starts its request. The combination of the two threads issues a forbidden request sequence, and smoke starts pouring out of the display.
    • For accounting reasons the operating system keeps track of how much time is spent in each user program. It also keeps a running sum of the total amount of time spent in all user programs. Two threads increment their local counters for their processes, then concurrently increment the global counter. Their increments interfere, and the recorded total time spent in all user processes is less than the sum of the local times.
  • So, programmers need to coordinate the activities of the multiple threads so that these bad things don't happen. Key mechanism: synchronization operations. These operations allow threads to control the timing of their events relative to events in other threads. Appropriate use allows programmers to avoid problems like the ones outlined above.
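  • As a sketch of the accounting example above, here is how the global counter could be protected using the Semaphore interface from the synchronization notes earlier. The variable and function names are invented for illustration, and totalLock is assumed to have been created elsewhere with new Semaphore("total", 1).
    int perProcessTime[2] = {0, 0};   // per-process counters, each written by only one thread
    int totalUserTime = 0;            // global counter shared by all threads
    Semaphore *totalLock;             // assume: totalLock = new Semaphore("total", 1);

    void charge(int process, int ticks) {
      perProcessTime[process] += ticks;   // private to this thread - no interference
      totalLock->P();                     // enter the critical section
      totalUserTime += ticks;             // the read-modify-write is now atomic
      totalLock->V();                     // leave the critical section
    }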

Overview and History Of Operating System

Overview and History



  • What is an operating system? Hard to define precisely, because operating systems arose historically as people needed to solve problems associated with using computers.
  • Much of operating system history driven by relative cost factors of hardware and people. Hardware started out fantastically expensive relative to people and the relative cost has been decreasing ever since. Relative costs drive the goals of the operating system.
    • In the beginning: Expensive Hardware, Cheap People Goal: maximize hardware utilization.
    • Now: Cheap Hardware, Expensive People Goal: make it easy for people to use computer.
  • In the early days of computer use, computers were huge machines that were expensive to buy, run and maintain. The computer was used in single-user, interactive mode. Programmers interacted with the machine at a very low level - flicking console switches, dumping cards into the card reader, etc. The interface was basically the raw hardware.
    • Problem: The code to manipulate external I/O devices is very complex, and is a major source of programming difficulty.
    • Solution: Build a subroutine library (device drivers) to manage the interaction with the I/O devices. The library is loaded into the top of memory and stays there. This is the first example of something that would grow into an operating system.
  • Because the machine is so expensive, it is important to keep it busy.
    • Problem: computer idles while programmer sets things up. Poor utilization of huge investment.
    • Solution: Hire a specialized person to do setup. Faster than programmer, but still a lot slower than the machine.
    • Solution: Build a batch monitor. Store jobs on a disk (spooling), have the computer read them in one at a time and execute them. Big change in computer usage: debugging is now done offline from printouts and memory dumps. No more instant feedback.
    • Problem: At any given time, job is actively using either the CPU or an I/O device, and the rest of the machine is idle and therefore unutilized.
    • Solution: Allow the job to overlap computation and I/O. Buffering and interrupt handling added to subroutine library.
    • Problem: one job can't keep both CPU and I/O devices busy. (Have compute-bound jobs that tend to use only the CPU and I/O-bound jobs that tend to use only the I/O devices.) Get poor utilization either of CPU or I/O devices.
    • Solution: multiprogramming - several jobs share system. Dynamically switch from one job to another when the running job does I/O. Big issue: protection. Don't want one job to affect the results of another. Memory protection and relocation added to hardware, OS must manage new hardware functionality. OS starts to become a significant software system. OS also starts to take up significant resources on its own.
  • Phase shift: Computers become much cheaper. People costs become significant.
    • Issue: It becomes important to make computers easier to use and to improve the productivity of the people. One big productivity sink: having to wait for batch output (but is this really true?). So, it is important to run interactively. But computers are still so expensive that you can't buy one for every person. Solution: interactive timesharing.
    • Problem: Old batch schedulers were designed to run a job for as long as it was utilizing the CPU effectively (in practice, until it tried to do some I/O). But now, people need reasonable response time from the computer.
    • Solution: Preemptive scheduling.
    • Problem: People need to have their data and programs around while they use the computer.
    • Solution: Add file systems for quick access to data. Computer becomes a repository for data, and people don't have to use card decks or tapes to store their data.
    • Problem: The boss logs in and gets terrible response time because the machine is overloaded.
    • Solution: Prioritized scheduling. The boss gets more of the machine than the peons. But, CPU scheduling is just an example of resource allocation problems. The timeshared machine was full of limited resources (CPU time, disk space, physical memory space, etc.) and it became the responsibility of the OS to mediate the allocation of the resources. So, developed things like disk and physical memory quotas, etc.
    Overall, time sharing was a success. However, it was a limited success. In practical terms, every timeshared computer became overloaded and the response time dropped to annoying or unacceptable levels. Hard-core hackers compensated by working at night, and we developed a generation of pasty-looking, unhealthy insomniacs addicted to caffeine.
  • Computers become even cheaper. It becomes practical to give one computer to each user. Initial cost is very important in market. Minimal hardware (no networking or hard disk, very slow microprocessors and almost no memory) shipped with minimal OS (MS-DOS). Protection, security less of an issue. OS resource consumption becomes a big issue (computer only has 640K of memory). OS back to a shared subroutine library.
  • Hardware becomes cheaper and users more sophisticated. People need to share data and information with other people. Computers become more information transfer, manipulation and storage devices rather than machines that perform arithmetic operations. Networking becomes very important, and as sharing becomes an important part of the experience so does security. Operating systems become more sophisticated. Start putting back features present in the old time sharing systems (OS/2, Windows NT, even Unix).
  • Rise of the network. The Internet is a hugely popular phenomenon and drives new ways of thinking about computing. The operating system is no longer the interface to the lower-level machine - people structure systems to contain layers of middleware. So, a Java API or something similar may be the primary thing people need, not a set of system calls. In fact, what the operating system is may become irrelevant as long as it supports the right set of middleware.
  • Network computer. Concept of a box that gets all of its resources over the network. No local file system, just network interfaces to acquire all outside data. So have a slimmer version of OS.
  • In the future, computers will become physically small and portable. Operating systems will have to deal with issues like disconnected operation and mobility. People will also start using information with a pseudo-real-time component, like voice and video. Operating systems will have to adjust to deliver acceptable performance for these new forms of data.
  • What does a modern operating system do?
    • Provides Abstractions: Hardware has low-level physical resources with complicated, idiosyncratic interfaces. The OS provides abstractions that present clean interfaces. Goal: make the computer easier to use. Examples: Processes, Unbounded Memory, Files, Synchronization and Communication Mechanisms.
    • Provides a Standard Interface: Goal: portability. Unix runs on many very different computer systems. To a first approximation, programs can be ported across systems with little effort.
    • Mediates Resource Usage: Goal: allow multiple users to share resources fairly, efficiently, safely and securely. Examples:
      • Multiple processes share one processor (preemptable resource).
      • Multiple programs share one physical memory (preemptable resource).
      • Multiple users and files share one disk (non-preemptable resource).
      • Multiple programs share a given amount of disk and network bandwidth (preemptable resource).
    • Consumes Resources: Solaris takes up about 8 Mbytes of physical memory (or about $400).
  • Abstractions often work well - for example, timesharing, virtual memory and hierarchical and networked file systems. But, may break down if stressed. Timesharing gives poor performance if too many users run compute-intensive jobs. Virtual memory breaks down if working set is too large (thrashing), or if there are too many large processes (machine runs out of swap space). Abstractions often fail for performance reasons.
  • Abstractions also fail because they prevent programmer from controlling machine at desired level. Example: database systems often want to control movement of information between disk and physical memory, and the paging system can get in the way. More recently, existing OS schedulers fail to adequately support multimedia and parallel processing needs, causing poor performance.
  • Concurrency and asynchrony make operating systems very complicated pieces of software. Operating systems are fundamentally non-deterministic and event driven. Can be difficult to construct (hundreds of person-years of effort) and impossible to completely debug. Examples of concurrency and asynchrony:
    • I/O devices run concurrently with CPU, interrupting CPU when done.
    • On a multiprocessor multiple user processes execute in parallel.
    • Multiple workstations execute concurrently and communicate by sending messages over a network. Protocol processing takes place asynchronously.
    Operating systems are so large that no one person understands the whole system. An OS outlives any of its original builders.
  • The major problem facing computer science today is how to build large, reliable software systems. Operating systems are one of very few examples of existing large software systems, and by studying operating systems we may learn lessons applicable to the construction of larger systems.

INTRODUCTION TO OPERATING SYSTEM

Operating Systems

Introduction

An operating system (OS) is the software component of a computer system that is responsible for the management and coordination of activities and the sharing of the resources of the computer. The OS acts as a host for application programs that are run on the machine. As a host, one of the purposes of an OS is to handle the details of the operation of the hardware. This relieves application programs from having to manage these details and makes it easier to write applications. Almost all computers use an OS of some type.
OSs offer a number of services to application programs and users. Applications access these services through application programming interfaces (APIs) or system calls. By using these interfaces, the application can request a service from the OS, pass parameters, and receive the results of the operation. Users may also interact with the OS by typing commands or using a graphical user interface (GUI).

The Big 3

Common contemporary OSs include Microsoft Windows, Mac OS X, and Linux. Microsoft Windows has a significant majority of market share in the desktop and notebook computer markets, while the server and embedded device markets are split amongst several OSs.

Linux

Linux (also known as GNU/Linux) is one of the most prominent examples of free software and open-source development, which means that typically all underlying source code can be freely modified, used, and redistributed by anyone. The name “Linux” comes from the Linux kernel, started in 1991 by Linus Torvalds. The system’s utilities and libraries usually come from the GNU operating system (which is why it is also known as GNU/Linux).
Linux is predominantly known for its use in servers. It is also used as an operating system for a wide variety of computer hardware, including desktop computers, supercomputers, video game systems, and embedded devices such as mobile phones and routers.

Design

Linux is a modular Unix-like OS. It derives much of its basic design from principles established in Unix during the 1970s and 1980s. Linux uses a monolithic kernel, which handles process control, networking, and peripheral and file system access. Device drivers are integrated directly into the kernel. Much of Linux’s higher-level functionality is provided by separate projects which interface with the kernel. The GNU userland is an important part of most Linux systems, providing the shell and Unix tools which carry out many basic OS tasks. On top of the kernel, these tools form a complete Linux system with a GUI, usually running on the X Window System (X).
Linux can be controlled by one or more of a text-based command line interface (CLI), a GUI, or controls on the device itself (as on embedded machines). Desktop machines have three popular user interfaces (UIs): KDE, GNOME, and Xfce. These UIs run on top of X, which provides network transparency, enabling a graphical application running on one machine to be displayed and controlled from another (for example, running a program on your computer while a friend views and controls it from theirs). The window manager provides a means to control the placement and appearance of individual application windows, and interacts with the X Window System.
GNOME Screenshot
A Linux system usually provides a CLI of some sort through a shell. Linux distros for a server might use only a CLI. Most low-level Linux components use the CLI exclusively. The CLI is particularly suited for automation of repetitive or delayed tasks, and provides very simple inter-process communication. A graphical terminal is often used to access the CLI from a Linux desktop.
Bash Screenshot

Development

The primary difference between Linux and many other OSs is that the Linux kernel and other components are free and open source software. Free software projects, although developed in a collaborative fashion, are often produced independently of each other. A Linux distribution, commonly called a “distro”, is a project that manages a remote collection of Linux-based software, and facilitates installation of a Linux OS. Distros include system software and application software in the form of packages. A distribution is responsible for the default configuration of installed Linux systems, system security, and more generally integration of the different software packages into a coherent whole.
Linux is largely driven by its developer and user communities. Some vendors develop and fund their distros on a volunteer basis; others maintain a community version of their commercial distros. In many cities and regions, local associations known as Linux Users Groups (LUGs) promote Linux and free software. There are also many online communities that seek to provide support to Linux users and developers. Most distros have IRC chatrooms or newsgroups for communication. Online forums are another means for support, and most Linux distros host mailing lists as well.
Most Linux distros support dozens of programming languages. The most common collection of utilities for building both Linux applications and OS programs is found within the GNU toolchain, which includes the GNU Compiler Collection (GCC) and the GNU build system. GCC provides compilers for Ada, C, C++, Java, and Fortran. Most distros also include support for Perl, Ruby, Python, and other dynamic languages. The two main frameworks for developing graphical applications are those of GNOME and KDE.
Ubuntu CD

Uses

As well as those designed for general purpose use on desktops and servers, distros may be specialized for different purposes including: computer architecture support, embedded systems, stability, security, localization to a specific region or language, targeting of specific user groups, support for real-time applications, or commitment to a given desktop environment. Linux runs on a more diverse range of computer architectures than any other OS.
Although there is a lack of Linux ports for some Mac OS X and Microsoft Windows programs in domains such as desktop publishing and professional audio, applications roughly equivalent to those available for OS X and Windows are available for Linux. Most Linux distros have some sort of program for browsing through a list of free software applications that have already been tested and configured for the specific distro. Many free software titles that are popular on Windows are available for Linux in the same way, and a growing amount of proprietary software is being supported on Linux.
Historically, Linux has been used as a server OS and been very successful in that area due to its relative stability and long uptime. Linux is the cornerstone of the LAMP server-software combination (Linux, Apache, MySQL, Perl/PHP/Python) which has achieved popularity among developers, and which is one of the more common platforms for website hosting.

Windows

Windows (created by Microsoft) is the most dominant OS on the market today. The two most popular versions of Windows for the desktop are XP and Vista (Vista being the latest version). There is also a mobile version of Windows as well as a server version (the latest being Windows Server 2008). Windows is entirely proprietary and closed-source, which is much different from Linux licensing. Most of the popular manufacturers make their hardware compatible with Windows, which means Windows operates on almost all kinds of new hardware.

XP

The term “XP” stands for experience. Windows XP is the successor to both Windows 2000 Professional and Windows ME. Within XP there are 2 main editions: Home and Professional. The Professional version has additional features and is targeted at power users and business clients. There is also a Media Center version that has additional multimedia features enhancing the ability to record and watch TV shows, view DVD movies, and listen to music.
Windows XP features a task-based GUI. XP analyzes the performance impact of visual effects and uses this to determine whether to enable them, so as to prevent the new functionality from consuming excessive additional processing overhead. The different themes are controlled by the user through their preferences.
Windows XP Screenshot
Microsoft has released a set of service packs for Windows XP (currently there are 3) which fix problems and add features. Each service pack is a superset of all previous service packs and patches, so that only the latest service pack needs to be installed, and each also includes new revisions. Support for Windows XP Service Pack 2 will end on July 13, 2010 (6 years after its general availability).

Vista

Windows Vista contains many changes and new features compared to XP, including an updated GUI and visual style, improved searching features, new multimedia creation tools, and redesigned networking, audio, print, and display sub-systems. Vista also aims to increase the level of communication between machines on a home network, using peer-to-peer technology to simplify sharing files and digital media between computers and devices.
Windows Vista is intended to be a technology-based release, to provide a base for including advanced technologies, many of which are related to how the system functions and thus not readily visible to the user. An example is the complete restructuring of the architecture of the audio, print, display, and networking subsystems; while the results of this work are visible to software developers, end-users will only see what appear to be evolutionary changes in the UI.
Windows Vista Screenshot
Vista includes technologies which employ fast flash memory to improve system performance by caching commonly used programs and data. Other new technology utilizes machine learning techniques to analyze usage patterns, allowing Windows Vista to make intelligent decisions about what content should be present in system memory at any given time. As part of the redesign of the networking architecture, IPv6 has been fully incorporated into the OS and a number of performance improvements have been introduced, such as TCP window scaling. For graphics, there is a new Windows Display Driver Model and a major revision to Direct3D. At the core of the OS, many improvements have been made to the memory manager, process scheduler, and I/O scheduler.

Security

Windows is the OS most vulnerable to attacks. Security software is a must when you’re using Windows, which is much different from Linux and OS X. Windows has been criticized for its susceptibility to malware, viruses, trojan horses, and worms. Security issues are compounded by the fact that users of the Home edition, by default, receive an administrator account that provides unrestricted access to the underpinnings of the system. If the administrator’s account is broken into, there is no limit to the control that can be asserted over the compromised PC.
Windows has historically been a tempting target for virus creators because of its world market dominance. Security holes are often invisible until they are exploited, making preemptive action difficult. Microsoft has stated that the release of patches to fix security holes is often what causes the spread of exploits against those very same holes, as crackers figure out what problems the patches fix and then launch attacks against unpatched systems. It is recommended to have automatic updates turned on to prevent a system from being attacked by an unpatched bug.

OS X

OS X is the major operating system created by Apple Inc. Unlike its predecessor (referred to as Classic or OS 9), OS X is a UNIX-based operating system. Currently OS X is at version 10.5, with 10.5.3 being the latest major software update and plans for 10.6 having been announced. Apple has chosen to name each version of OS X after a large cat, with 10.0 being Cheetah, 10.1 Puma, 10.2 Jaguar, 10.3 Panther, 10.4 Tiger, 10.5 Leopard, and the unreleased 10.6 named Snow Leopard.
Apple also develops a server version of OS X that is very similar to the normal OS X, but is designed to work on Apple’s Xserve hardware. Some of the tools included with the server OS X are workgroup management and administration software that provide simplified access to common network services, including a mail transfer agent, a Samba server, an LDAP server, a domain name server, a graphical interface for distributed computing (which Apple calls Xgrid Admin), and others.

Description

OS X is a UNIX-based OS built on top of the XNU kernel, with standard Unix facilities available from the CLI. Apple has layered a number of components over this base, including its own GUI. The most notable features of its GUI are the Dock and the Finder.
