CS411/511. Operating Systems
Homework 4 - Solutions
Chapter 6: Review Questions 6.1, 6.4, 6.18
511 students only: 6.19
6.1.
- Busy waiting: A process is waiting for an event to occur, and it does so
by repeatedly executing instructions that test for the event, consuming
the CPU the whole time it waits.
- Other types of waiting: A process is waiting for an event to occur in some
waiting queue (e.g., I/O semaphore) and it does so without having the
CPU assigned to it.
- Busy waiting cannot be avoided altogether (see p. 170).
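The contrast can be sketched with two hypothetical Python threads (the names
and structure are mine, not the text's): one burns CPU cycles polling a shared
flag, while the other sleeps on a wait queue until it is signalled.

```python
import threading

flag = {"ready": False}    # condition the busy-waiter polls
event = threading.Event()  # wait queue the blocking waiter sleeps on
results = {}

def busy_waiter():
    # Busy waiting: the thread stays runnable and executes instructions
    # (the loop test) over and over until the condition becomes true.
    spins = 0
    while not flag["ready"]:
        spins += 1
    results["spins"] = spins

def blocking_waiter():
    # Non-busy waiting: the thread is placed on the event's wait queue
    # and holds no CPU until another thread signals the event.
    event.wait()
    results["woke"] = True

t1 = threading.Thread(target=busy_waiter)
t2 = threading.Thread(target=blocking_waiter)
t1.start(); t2.start()
flag["ready"] = True  # satisfy the polled condition
event.set()           # wake the thread sleeping on the queue
t1.join(); t2.join()
```

The busy waiter may spin thousands of times before seeing the flag; the
blocking waiter executes nothing at all between `wait()` and `set()`.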
6.4. Note that this is the same principle as Dekker's Algorithm, which we
studied in class. It was a slightly earlier version, and therefore is
more complicated than it really needs to be.
To prove that this algorithm is correct, we need to show that mutual
exclusion is preserved, the progress requirement is satisfied, and the
bounded-waiting requirement is met.
- To prove mutual exclusion, we note that each P enters its critical section
only if flag[j] != in-cs for every other process j. Since each process
is the only one that can set its own flag value, and since each
process sets its own flag to in-cs before examining the other
processes' flags, at least one of any two competing processes must
see that the other's flag is set, and will wait. Therefore, it is not
possible for two processes to be in their critical sections at the same time.
- To prove progress, we note that the turn value can only be modified when a
process is just about to enter its critical section, or has just
completed its critical section. The turn value remains constant
whenever there is no process executing/leaving its critical section.
If other processes want to enter their critical sections, one will
always be able to (the "first" one in the cyclic ordering [turn,
turn+1, ... n-1, 0, ... turn-1]).
- To prove bounded waiting, we note that whenever a process leaves its
critical section, it checks to see if any other processes want to
enter their critical sections. If so, it always designates which
one will go next -- the "first" one in the cyclic ordering. This
means that any process wanting to enter its critical section will
do so within n-1 turns.
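The cyclic hand-off behind the bounded-waiting argument can be sketched as a
toy Python model (the flag values and function name are my assumptions, not
the book's code):

```python
def next_turn(flags, turn):
    # Scan in the cyclic order [turn, turn+1, ..., n-1, 0, ..., turn-1]
    # and hand the turn to the first process that wants to enter its
    # critical section; if no process is waiting, the turn is unchanged.
    n = len(flags)
    for k in range(n):
        j = (turn + k) % n
        if flags[j] != "idle":
            return j
    return turn
```

A process leaving its critical section would run this scan starting at its
own index, so any waiting process is designated within n-1 hand-offs.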
6.18. Note that the authors are referring to "cost" very generally (as in
cost/benefit analysis), rather than specifically to "price."
- Volatile storage fails when there is a power failure. Cache, main memory,
and registers require a steady power source; when the system crashes
and this source is interrupted, the contents are lost. Access is very fast.
- Non-volatile storage retains its content despite power failures. For
example, disk and CDROM survive anything other than demagnetization
or hardware/head crashes (and less likely things like fire, immersion
in water, etc.). Access is much slower.
- Stable storage theoretically survives any type of failure. It can only
be approximated through techniques such as mirroring. Access is
certainly slower than volatile storage, and possibly slower than
non-volatile storage.
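Mirroring can be sketched as two ordered writes to independent replicas (a
toy in-memory model; the function names and dict representation are mine):

```python
def stable_write(replicas, block_no, data):
    # Write the replicas strictly in order. A crash between the two
    # writes leaves at least one complete, consistent copy of the block
    # for recovery to use.
    for r in replicas:
        r[block_no] = bytes(data)

def stable_read(replicas, block_no):
    # Read the first replica that holds the block, falling back to the
    # mirror if the primary copy was lost.
    for r in replicas:
        if block_no in r:
            return r[block_no]
    raise KeyError(block_no)
```

The extra write (and the recovery comparison a real system would do) is why
stable storage is the slowest tier of the three.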
6.19. If there is no checkpointing, the entire log must be searched after
a crash, and all transactions "redone" from the log. If checkpointing is
used, most of the log can be discarded. Since checkpoints are very expensive,
how often they should be taken depends upon how reliable the system is. The
more reliable the system, the less often a checkpoint should be taken.
When no failure occurs, the cost of checkpointing is "pure overhead"
and can degrade performance if checkpointing is done frequently.
When a system crash occurs, recovery time is proportional to the number
of transactions since the last checkpoint. Assuming that a disk crash
means the checkpoint file has been lost, recovery will involve a full rollback
and re-doing of all transactions in the log.
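The recovery-cost argument can be captured in a toy model (the log format and
function name are my assumptions):

```python
def transactions_to_redo(log, checkpoint=None):
    # Without a checkpoint the entire log must be replayed; with one,
    # only the records written after the checkpoint. Recovery time is
    # therefore proportional to the distance back to the last checkpoint.
    start = 0 if checkpoint is None else checkpoint + 1
    return log[start:]
```

Taking checkpoints more often shrinks this redo list but adds "pure overhead"
during normal operation, which is exactly the trade-off described above.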
Chapter 7: Review Questions 7.4, 7.6, 7.8, 7.13 (hint for 7.4(b): this is what drivers are
really supposed to do)
511 students only: 7.14
7.4. Note that each section of the street (including each intersection)
is considered to be a "resource."
To keep deadlocks from occurring, allow a vehicle to cross an intersection
only if it is assured that the vehicle will not have to stop at the
intersection. All four necessary conditions hold in a gridlock:
- Mutual exclusion: only one vehicle on a section of the street
- Hold and wait: each vehicle is occupying a section of the street and is
waiting to move to the next section
- No preemption: a section of a street that is occupied by a vehicle cannot
be taken away from the vehicle unless the car moves into the next section
- Circular wait: each vehicle is waiting for the next vehicle in front of
it to move
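Circular wait is just a cycle in the wait-for graph, so a gridlocked square
of four cars can be checked mechanically (a hypothetical Python sketch, not
from the text):

```python
def has_cycle(wait_for):
    # Detect a cycle in a wait-for graph given as {waiter: blocker}.
    # A cycle among the vehicles is exactly the circular-wait condition.
    for start in wait_for:
        seen = set()
        node = start
        while node in wait_for:
            if node in seen:
                return True
            seen.add(node)
            node = wait_for[node]
    return False
```

Four cars each blocking the next around an intersection form the cycle
N -> E -> S -> W -> N; removing any one edge (e.g., one car backs off, which
is preemption) breaks the deadlock.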
7.6.
- (a) anytime
- (b) only if MAX demand of each process does not exceed total number of
available resources, and the system remains in a safe state.
- (c) only if MAX demand of each process does not exceed total number of
available resources, and the system remains in a safe state.
- (d) anytime
- (e) anytime
- (f) anytime
7.8. Suppose the system is deadlocked. This implies that each process is
holding one resource and is waiting for one more. Since there are three
processes and four resources, one process must be able to obtain two resources.
This process requires no more resources, and therefore it will terminate,
releasing its resources. This in turn frees sufficient resources for the
other processes to complete, one at a time.
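The pigeonhole argument can be checked exhaustively (toy Python, assuming as
in the exercise that each process needs at most two units of a single
resource type):

```python
from itertools import product

def deadlock_possible(resources=4, procs=3, max_need=2):
    # A deadlock would require a state in which every process is waiting:
    # each holds fewer units than its maximum need, yet no unit is free.
    for alloc in product(range(max_need + 1), repeat=procs):
        if sum(alloc) > resources:
            continue  # not a reachable allocation
        all_waiting = all(a < max_need for a in alloc)
        no_units_free = sum(alloc) == resources
        if all_waiting and no_units_free:
            return True  # a stuck state exists
    return False
```

With three processes and four units no such state exists (three waiters hold
at most three units, so one unit is always free); with only three units, the
allocation (1, 1, 1) is deadlocked.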
7.13.
- (a) Since Need = Max - Alloc, the content of Need is as follows:
|    ||A B C D
|P0 ||0 0 0 0
|P1 ||0 7 5 0
|P2 ||1 0 0 2
|P3 ||0 0 2 0
|P4 ||0 6 4 2
- (b) Yes, the sequence [P0, P2, P1, P3, P4] satisfies the safety requirement.
- (c) Yes. The request does not exceed either Available or Max. Further,
the new system state, which is
| ||Alloc ||Max ||Need
|P0 ||0 0 1 2 ||0 0 1 2 ||0 0 0 0
|P1 ||1 4 2 0 ||1 7 5 0 ||0 3 3 0
|P2 ||1 3 5 4 ||2 3 5 6 ||1 0 0 2
|P3 ||0 6 3 2 ||0 6 5 2 ||0 0 2 0
|P4 ||0 0 1 4 ||0 6 5 6 ||0 6 4 2
is safe, as demonstrated by the sequence [P0, P2, P1, P3, P4].
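The safety checks in (b) and (c) are instances of the banker's safety
algorithm. A sketch, using the Allocation and Need matrices from (a) and
assuming the Available vector (1, 5, 2, 0) from the exercise's problem
statement (not reproduced above):

```python
def is_safe(avail, alloc, need):
    # Banker's safety algorithm: repeatedly find a process whose Need
    # fits within Work, pretend it runs to completion, and reclaim its
    # Allocation. The system is safe iff every process can finish.
    work = list(avail)
    finished = [False] * len(alloc)
    order = []
    progress = True
    while progress:
        progress = False
        for i, done in enumerate(finished):
            if not done and all(n <= w for n, w in zip(need[i], work)):
                work = [w + a for w, a in zip(work, alloc[i])]
                finished[i] = True
                order.append(i)
                progress = True
    return all(finished), order

alloc = [[0,0,1,2], [1,0,0,0], [1,3,5,4], [0,6,3,2], [0,0,1,4]]
need  = [[0,0,0,0], [0,7,5,0], [1,0,0,2], [0,0,2,0], [0,6,4,2]]
safe, order = is_safe([1, 5, 2, 0], alloc, need)
```

This greedy scan may report a different safe sequence than the one quoted
above; finding any one complete sequence is enough to prove the state safe.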
7.14.
- (a) Deadlock cannot occur, because preemption exists. That is, the
"no preemption" condition can never hold, so the four necessary
conditions cannot all be satisfied simultaneously.
- (b) Yes, starvation is possible. A process may never acquire all the
resources it needs if the resources it holds are continuously
preempted by a stream of competing requests.
Chapters 22-23: 511 students only: 22.6, 23.11
22.6.
- Advantages: The primary impact of disallowing paging of kernel memory is that the
non-preemptibility of the kernel is preserved. (Any process taking a page
fault, whether in kernel mode or in user mode, risks being preempted
while the required data is paged in from disk.) Because the kernel is
guaranteed not to be preempted, locking requirements to protect the integrity
of its primary data structures are also greatly simplified.
- Disadvantages: It imposes constraints on the amount of memory that the
kernel can use, since it is unreasonable to keep very large data
structures in non-pageable memory (that memory cannot be used for
anything else). Two disadvantages that stem from this are:
- The kernel must prune back many of its internal data structures manually,
since it can't rely on virtual memory mechanisms to keep physical
memory usage under control.
- It is infeasible to implement features requiring very large amounts of
memory in the kernel (such as the /tmp-filesystem, a fast
virtual-memory based filesystem found in some UNIX systems).
Note that the complexity of managing page faults while running kernel code
is not an issue. The Linux kernel code is already able to deal with page
faults (which can occur in a system call whose arguments reference user
memory, which may be paged out on disk).
23.11. A process in NT is limited to 2 GB address space for data. The
two-stage process allows the access of much larger datasets, by reserving
space in the process's address space first, and then committing the storage to
a memory-mapped file. An application can thus "window" through a large
database (by changing the committed section) without exceeding process quotas
or utilizing a huge amount of physical memory.
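The reserve-then-commit windowing is done on NT with VirtualAlloc
(MEM_RESERVE, then MEM_COMMIT), but the effect of viewing only a small
window of a huge dataset can be sketched portably with a file mapping
(Python; the function name is mine):

```python
import mmap

def read_window(path, offset, length):
    # Map only a window of the file rather than the whole thing; the
    # mapping offset must be a multiple of the allocation granularity,
    # so round down and index past the remainder inside the mapping.
    gran = mmap.ALLOCATIONGRANULARITY
    base = (offset // gran) * gran
    delta = offset - base
    with open(path, "rb") as f:
        with mmap.mmap(f.fileno(), delta + length,
                       access=mmap.ACCESS_READ, offset=base) as m:
            return m[delta:delta + length]
```

Moving the window (remapping a different offset) lets an application walk a
database far larger than its address-space quota, committing only the pages
currently in view.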