First, RAM Doubler allocates page frames to virtual addresses on a demand basis. If a program has been allocated a large block of memory, but only uses part of that block, RAM Doubler only allocates page frames for those pages that are actually referenced. Since classic Mac applications are written in terms of real memory, not virtual memory, they generally claim the largest block of memory they might later need, right up front.
Second, RAM Doubler uses compression. Less recently used pages are compressed in order to free up page frames to be allocated to active applications.
Third, RAM Doubler will use disk as backing storage, should the first two techniques fail to free enough page frames to allow support of a virtual address space twice as big as the physical address space.
Think about the extent to which different applications would be likely to bring these different virtual memory strategies into action. As the author of an application that you wanted to run under RAM Doubler, how could you improve the likelihood that your application would behave well? What about the internals of RAM Doubler? Given a function compress(a,b) that compresses one page from location a to location b in real memory, returning the number of bytes the page was compressed to, and a function expand(a,b) that reverses the compression of a page, returning the number of bytes of compressed data consumed, how would you handle page faults?
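One possible shape for a fault handler is sketched below. This is a toy model, not a definitive answer: zlib stands in for the compress/expand primitives so the sketch runs, pages are byte strings rather than physical frames, and all class and attribute names are illustrative.

```python
import zlib

PAGE_SIZE = 4096

class CompressedVM:
    def __init__(self, n_frames):
        self.free = n_frames    # count of free page frames
        self.frames = {}        # page id -> contents, for resident pages
        self.lru = []           # resident page ids, least recently used first
        self.store = {}         # page id -> compressed contents

    def fault(self, page):
        """Bring `page` into a frame, reclaiming one by compression if needed."""
        if self.free == 0:
            victim = self.lru.pop(0)    # compress the least recently used page
            self.store[victim] = zlib.compress(self.frames.pop(victim))
            self.free += 1
            # A real system would spill self.store to disk once it grew too
            # large -- the third technique described in the text.
        self.free -= 1
        if page in self.store:          # the page was compressed earlier
            self.frames[page] = zlib.decompress(self.store.pop(page))
        else:                           # first touch: demand-allocated, zero-filled
            self.frames[page] = bytes(PAGE_SIZE)
        self.lru.append(page)

    def touch(self, page):
        """Reference a page, faulting it in if it is not resident."""
        if page not in self.frames:
            self.fault(page)
        else:
            self.lru.remove(page)
            self.lru.append(page)       # refresh its LRU position
        return self.frames[page]
```

With two frames, touching pages 0, 1, then 2 compresses page 0; touching page 0 again decompresses it and compresses page 1 in its place.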
Think about basic questions first. How big is the page table for a process? How many page table entries fit in one page? Would you expect the page table to be stored in the MMU, in main memory, or in virtual memory? How would you expect unused portions of the page table to be handled? What is the role of segmenting in this system?
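The sizing questions yield to back-of-the-envelope arithmetic. The exercise does not fix the parameters, so the numbers below assume 32-bit virtual addresses, 4 KB pages, and 4-byte page table entries:

```python
ADDR_BITS = 32
PAGE_SIZE = 4 * 1024       # bytes per page (assumed)
PTE_SIZE = 4               # bytes per page table entry (assumed)

n_pages = 2**ADDR_BITS // PAGE_SIZE    # pages in one virtual address space
table_bytes = n_pages * PTE_SIZE       # size of a flat page table
ptes_per_page = PAGE_SIZE // PTE_SIZE  # page table entries that fit in one page

print(n_pages)        # 1048576 (2**20) virtual pages
print(table_bytes)    # 4194304 -- 4 MB per process, far too big for the MMU,
                      # so the table must live in main (or even virtual) memory
print(ptes_per_page)  # 1024 -- suggesting a two-level tree of 1024-entry
                      # tables, where unused subtrees are simply not allocated
```

Under these assumptions, the fact that exactly 1024 entries fit in a page is what makes a two-level (segmented) organization natural, with unused portions of the table left unallocated.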
Think about how you could use software to implement features such as those found on the CAP system. How would you represent capabilities for objects which are not pages (at least, not pages from the user's point of view)? How would you represent capabilities for address spaces? How would you represent capabilities for sealed objects? How would you detect and interpret (with software) attempted operations on these objects?
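One common software technique is to represent a capability as an opaque, unforgeable token indexing a table the user cannot touch. The sketch below illustrates this idea; all names are hypothetical, not taken from the CAP system itself:

```python
import secrets

# User code holds only an opaque token; the object (which could be a page,
# an address space descriptor, or any other kind of object), its rights,
# and any seal live in a table user code cannot read or forge entries for.
_cap_table = {}    # token -> (object, rights, seal)

def grant(obj, rights, seal=None):
    """Mint a capability for obj carrying the given rights, e.g. {'read'}."""
    token = secrets.token_hex(16)      # unguessable, hence unforgeable
    _cap_table[token] = (obj, frozenset(rights), seal)
    return token

def invoke(token, op):
    """Interpret an attempted operation on the object, entirely in software."""
    obj, rights, seal = _cap_table[token]
    if seal is not None:
        raise PermissionError("object is sealed; it must be unsealed first")
    if op not in rights:
        raise PermissionError("capability does not permit " + repr(op))
    return obj

def unseal(token, seal_key):
    """Whoever knows seal_key may recover an unsealed capability."""
    obj, rights, seal = _cap_table[token]
    if seal != seal_key:
        raise PermissionError("wrong seal")
    return grant(obj, rights)          # a fresh, unsealed capability
```

Detection here is simply that every operation must pass through invoke(), which checks the table; an attempted operation with the wrong rights, or on a sealed object, raises an error that the system can interpret.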
The obvious implementation for this system is to attach a semaphore to each database record, with the rule that a user must lock all records involved prior to a read-modify-write update. In this case, all code involved in performing the updates would belong to the user processes.
Think about the problem of avoiding or detecting deadlocks in this model. What deadlock detection model is applicable, if any? When a deadlock is detected, what action would be appropriate? If the set of records involved in an update does not depend on the contents of any of those records, does this simplify the problem?
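The last question points at a well-known simplification: if the record set is known before the update begins, acquiring the records' locks in a fixed global order (here, ascending record id) makes deadlock impossible, so no detection machinery is needed. A minimal sketch, with illustrative names and record ids:

```python
import threading

# One lock per database record, standing in for the per-record semaphores.
locks = {rid: threading.Lock() for rid in range(10)}

def update(record_ids, apply_change):
    """Perform a read-modify-write update over a known set of records."""
    ordered = sorted(record_ids)       # every process locks in the same order
    for rid in ordered:
        locks[rid].acquire()
    try:
        apply_change(ordered)          # all records locked: safe to modify
    finally:
        for rid in reversed(ordered):
            locks[rid].release()
```

When the record set does depend on record contents, a process cannot know its full lock set up front, and must fall back on detection (for example, a waits-for graph) plus rollback of a victim transaction.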
send( channel, buffer )
receive( channel, buffer )
wait( semaphore )
signal( semaphore )

The channels used for this communication are memoryless. When a process tries to receive a message, it is blocked until a message is delivered from a sender. When a process tries to send a message, it is blocked until the message is copied to a receiver. Thus, if two processes try to communicate, either the sender or the receiver will be blocked until the message can be transferred.
The semaphores are general semaphores with the usual semantics. Processes may share semaphores and channels, but may not share memory.
Think about the problem of constructing a bounded buffer FIFO queue abstraction using these primitives. Your solution will no doubt involve a queue-manager process: queue users send and receive buffers full of data to and from the queue through some channel(s). How would you use the semaphores to assure that no blocking occurs except when the queue abstraction demands blocking? Do you need to include some kind of operation code in the buffer, or can you use multiple channels to somehow encode the operation to be performed?
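One possible shape for such a solution is sketched below, with Python threads standing in for processes. The Channel class models the memoryless rendezvous channels; everything else is illustrative. This version uses a single request channel carrying an operation code, and the two semaphores guarantee that a client only contacts the manager when its request can be satisfied immediately, so no one blocks except when the queue is genuinely full or empty:

```python
import threading

class Channel:
    """A memoryless channel: send blocks until a receiver takes the message."""
    def __init__(self):
        self._item = None
        self._sent = threading.Semaphore(0)
        self._taken = threading.Semaphore(0)
        self._send_lock = threading.Lock()   # serialize concurrent senders

    def send(self, item):
        with self._send_lock:
            self._item = item
            self._sent.release()
            self._taken.acquire()            # rendezvous: wait for the receiver

    def receive(self):
        self._sent.acquire()
        item = self._item
        self._taken.release()
        return item

def queue_manager(req_ch, empty, full):
    """Owns the queue storage; clients share no memory with it."""
    fifo = []
    while True:
        op, payload = req_ch.receive()
        if op == "PUT":
            fifo.append(payload)
            full.release()              # signal(full): one more item available
        elif op == "GET":
            payload.send(fifo.pop(0))   # payload is the caller's reply channel
            empty.release()             # signal(empty): one more free slot
        elif op == "STOP":
            return

def put(req_ch, empty, item):
    empty.acquire()                 # wait(empty): blocks only if queue is full
    req_ch.send(("PUT", item))

def get(req_ch, full, reply_ch):
    full.acquire()                  # wait(full): blocks only if queue is empty
    req_ch.send(("GET", reply_ch))
    return reply_ch.receive()
```

A queue of capacity N is created with empty initialized to N and full to 0. Separate PUT and GET channels would remove the operation code, but then the manager would need a way to choose which channel to receive from; the single-channel design sidesteps that at the cost of tagging each request.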
Think about how the optimal sector size depends on the disk scheduling algorithm.