Consider a distributed system, with many machines scattered around a network. Some machines have disks; some have none. What are the alternative approaches to providing file service on such a system?
There are two separate questions here: first, how is the responsibility for disk I/O, file mapping, and directory management divided among the various clients and servers; and second, how is file service presented to users?
The answer to the first question may vary from a monolithic file server, where all details of file service, from directory management to disk I/O, are handled by the same server, to a fully distributed file service. The answer to the second question may range from servers that present a stateful, UNIX-like model of file I/O to those that support a stateless model of I/O.
A monolithic file server can be implemented easily by taking a conventional file system, such as that of UNIX or MS-DOS, and simply offering its services to remote users. This requires only a single server process that awaits requests for file service from remote users and then performs the indicated operations on the local file system.
The NFS system, developed by Sun Microsystems, is a slightly modified variant on this theme. Under NFS, each client machine has its own file system, with an intermediate layer inserted between the local file system and the client application code. This intermediate layer interprets an added data structure on each open file descriptor that indicates whether the file is local or remote. For local files, I/O requests are passed on to the local file system; for remote files, the NFS protocol is used to communicate with a remote server.
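The dispatching done by this intermediate layer can be sketched as follows. This is a minimal model, not actual NFS code: the class names, the in-memory "contents", and the stubbed-out remote protocol are all assumptions of the sketch; the point is only that each open descriptor records whether its file is local or remote, and every I/O request is routed accordingly.

```python
# Sketch of the NFS-style intermediate layer. All names are hypothetical.

class LocalFile:
    """Stand-in for an ordinary local file."""
    def __init__(self, path):
        self.data = bytearray(b"local contents")   # stand-in for real disk I/O
    def read(self, offset, count):
        return bytes(self.data[offset:offset + count])

class RemoteFile:
    """Stub for a file reached via an NFS-like protocol."""
    def __init__(self, server, path):
        self.server, self.path = server, path
    def read(self, offset, count):
        # A real system would send an NFS READ request to self.server here.
        return b"remote contents"[offset:offset + count]

class Dispatcher:
    """The intermediate layer: routes each request by descriptor type."""
    def __init__(self):
        self.descriptors = {}   # fd -> LocalFile or RemoteFile
        self.next_fd = 0
    def open(self, path, remote_server=None):
        f = RemoteFile(remote_server, path) if remote_server else LocalFile(path)
        fd = self.next_fd
        self.descriptors[fd] = f
        self.next_fd += 1
        return fd
    def read(self, fd, offset, count):
        # The application sees one interface; the layer picks the backend.
        return self.descriptors[fd].read(offset, count)
```

The application code above the layer is unchanged: it opens a descriptor and reads from it, never knowing which branch of the dispatch was taken.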
Furthermore, the NFS extensions to UNIX allow a remote server to be mounted in the same way as any other file system, by adding the tree of files supported by the remote server as a subtree of the local file hierarchy. In the extreme, diskless clients can be supported by mounting the remote server at the root of the local file system.
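The mount mechanism can be modeled as a table mapping path prefixes to servers, with name resolution finding the longest mounted prefix. This is a toy model under stated assumptions (a flat dictionary, purely string-based matching), not the real kernel mount machinery:

```python
# Toy model of NFS-style mounting: a mount table maps path prefixes to
# servers; resolving a path finds the longest matching mounted prefix.

mount_table = {}   # prefix -> server name (hypothetical)

def mount(prefix, server):
    mount_table[prefix] = server

def resolve(path):
    """Return (server, remainder) for the longest mounted prefix,
    or (None, path) if the file is local."""
    best = ""
    for prefix in mount_table:
        if path.startswith(prefix) and len(prefix) > len(best):
            best = prefix
    if best:
        return mount_table[best], path[len(best):]
    return None, path
```

For example, after `mount("/xyz", "machine-xyz")`, resolving "/xyz/x" yields the server "machine-xyz" and the remainder "/x", while a path under no mount point stays local.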
The disadvantages of this model of file service are many. First, files are pinned down to specific locations. This may be hidden from users by naming conventions, as is done with NFS, but the constraints are still there. The NFS mount instruction says, in effect, "mount the file system from machine xyz on my file hierarchy as /xyz". In this context, opening "/xyz/x" is not much different from explicitly saying "open x on file server xyz", except that the identity of the server is deduced from the structure of the file tree.
This model prevents such things as automatic migration of files from one server to another, for example, in order to balance the load on the system. It also prevents redundant file storage, for example, for the sake of fault tolerance or to improve the availability of popular files.
At the other extreme from a monolithic file server is the composite model where the job of providing file service is broken down into a number of servers:
Disk I/O servers must be located on machines with disks; they provide the most rudimentary service, accepting network requests to read or write sectors on disk. They impose no file organization on those sectors! They may provide a degree of device independence, for example, by masking some issues of sector, track, and cylinder structure, but if they do, they must typically give their clients enough information to take the physical device structure into account where it may impact performance.
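The interface of such a server is deliberately tiny. A minimal sketch, assuming an in-memory array of sectors in place of a real device and omitting the network transport:

```python
# Sketch of a disk I/O server: raw sector reads and writes, with no
# file structure imposed. The sector size and in-memory "disk" are
# assumptions of this sketch.

SECTOR_SIZE = 512

class DiskServer:
    def __init__(self, n_sectors):
        # Stand-in for the physical medium: every sector starts zeroed.
        self.sectors = [bytes(SECTOR_SIZE) for _ in range(n_sectors)]
    def read_sector(self, n):
        return self.sectors[n]
    def write_sector(self, n, data):
        assert len(data) == SECTOR_SIZE   # whole sectors only
        self.sectors[n] = bytes(data)
```

Note what is absent: there are no file names, no directories, and no notion of ownership; those belong to the layers above.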
File servers sit on top of disk I/O servers. A file server takes the raw storage of a disk I/O server and organizes it into files. The primary service it provides to its clients is mapping from virtual disk addresses, relative to particular files, to physical disk addresses, relative to particular disks.
File servers may use multiple disk servers, for example, to provide for fault tolerance by storing duplicate copies of files, or to provide for the storage of huge files by spreading them over multiple devices. In this model, a file server does not typically handle directories; at this level, files are typically named using some low level construct equivalent to UNIX I-nodes.
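The mapping service a file server provides can be sketched as a table per file, where the file is named by a low-level number (in the spirit of a UNIX I-node) and each file-relative sector maps to a (disk server, physical sector) pair. The `Disk` class below is a hypothetical stand-in for a disk I/O server; the layout representation is an assumption of the sketch:

```python
# Sketch of a file server layered on disk I/O servers: it maps virtual
# (file-relative) sector numbers to physical (disk, sector) addresses.
# A single file may span several disk servers.

SECTOR = 512

class Disk:
    """Stand-in for a disk I/O server (hypothetical)."""
    def __init__(self, n_sectors):
        self.sectors = [bytes(SECTOR)] * n_sectors
    def read_sector(self, i):
        return self.sectors[i]
    def write_sector(self, i, data):
        self.sectors[i] = bytes(data)

class FileServer:
    def __init__(self, disks):
        self.disks = disks
        self.layout = {}   # inode -> [(disk index, physical sector), ...]
    def create(self, inode, extents):
        self.layout[inode] = extents
    def read(self, inode, virtual_sector):
        d, s = self.layout[inode][virtual_sector]
        return self.disks[d].read_sector(s)
    def write(self, inode, virtual_sector, data):
        d, s = self.layout[inode][virtual_sector]
        self.disks[d].write_sector(s, data)
```

A file whose layout lists extents on two different disks is automatically spread over both devices, which is exactly the hook needed for huge files or duplicate copies.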
Directory servers arrange files, as delivered by various file servers, into directories. There is no inherent requirement that all files in a given directory reside on the same file server, and in fact, some directory servers take responsibility for load balancing by automatically moving files from overcrowded file servers to servers that have free space. Furthermore, a directory server may maintain multiple copies of a given file on multiple file servers, for example, in order to reduce contention for file and disk service for popular files.
Just as file servers must use some space on disk servers for storing the mapping information that describes where the sectors of each file are stored, the directory server must typically use some of the files on the file servers it manages to store directories.
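At its core, a directory server maintains a mapping from human-readable names to the (file server, I-node) pairs where each file lives; allowing several pairs per name is what makes replication possible. A minimal sketch, with the replica-selection policy (random choice) chosen purely for illustration:

```python
# Sketch of a directory server: names map to one or more
# (file server, I-node) replicas; lookup picks one replica,
# here at random, to spread load on popular files.

import random

class DirectoryServer:
    def __init__(self):
        self.entries = {}   # name -> list of (file server, inode)
    def add(self, name, server, inode):
        self.entries.setdefault(name, []).append((server, inode))
    def lookup(self, name):
        # Any replica will do; a real policy might prefer the
        # least-loaded or nearest file server.
        return random.choice(self.entries[name])
```

Because the entries are just data, moving a file from one file server to another reduces to copying it and rewriting the entry, which is the basis for the automatic load balancing mentioned above.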
Most programmers are used to dealing with file systems that maintain state information on behalf of users. Thus, for example, between the time a user opens a file and closes it, the file system is expected to maintain a record of the fact that that user has that file open. Furthermore, we expect this record to contain the current file position, so that the file behaves as a sequential file unless seek operations are done to change the sequence in which the data is read.
While programmers find it convenient for the file system to maintain state information, it poses some problems for the file system. If a user process terminates, the file system must learn of the termination in order to reclaim the storage occupied by the state information, and there must be provisions for allocating storage for this information in the first place.
In a centralized system, these problems are easily solved, but in a distributed system, they become more severe. How can we guarantee that a file server learns of the termination or failure of a client for which it maintains state information? Where do we store the state information for the large numbers of low-traffic clients we would expect in a very large distributed system?
These problems can be avoided by using stateless servers. In a stateless server, the client is given the job of storing all state information. When a client opens a file, the server returns, to the client, the complete state needed to access the file. The client then presents this state information with every read and write request.
The necessary state information can be quite small. A directory server, for example, might respond to an open request by returning the network address of a file server and the I-node number that server uses to identify the file. Each subsequent read and write request would be directed to that file server, with the I-node number and file position supplied as the necessary state information.
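The protocol just described can be sketched as follows. The names are illustrative assumptions; what matters is that the server holds no open-file table, no per-client record, and no current position, so every request arrives complete in itself:

```python
# Sketch of the stateless protocol: "open" is answered by handing the
# client the complete state it needs (file server, I-node number), and
# every read carries that handle plus an explicit file position.

class StatelessFileServer:
    def __init__(self, files):
        self.files = files   # inode -> file contents
    def read(self, inode, position, count):
        # Everything needed to satisfy the request arrives with it;
        # there is no per-client state to consult or to clean up.
        return self.files[inode][position:position + count]

def open_file(directory, name):
    """Directory server's reply to an open request: the complete
    state the client needs. Nothing is remembered afterward."""
    return directory[name]   # -> (file_server, inode)
```

If the client crashes, the server has nothing to reclaim; if the server crashes and restarts, the client's handle is still valid, which is precisely the robustness argument for statelessness.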
This raises new issues of network security, but otherwise it is quite straightforward to use. With a minimal layer of added local software, this scheme can even be made to look, to the user, just like a conventional sequential file model.
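That minimal local layer amounts to the client remembering the current file position itself and advancing it after each read. A sketch under the same illustrative assumptions as before (the `StubServer` here stands in for a stateless file server):

```python
# Sketch of the thin client-side layer that restores the familiar
# sequential model: the *client*, not the server, keeps the current
# file position as state.

class StubServer:
    """Minimal stand-in for a stateless file server (hypothetical)."""
    def __init__(self, files):
        self.files = files   # inode -> file contents
    def read(self, inode, position, count):
        return self.files[inode][position:position + count]

class SequentialFile:
    def __init__(self, server, inode):
        self.server, self.inode = server, inode
        self.position = 0                # the state lives on the client
    def read(self, count):
        data = self.server.read(self.inode, self.position, count)
        self.position += len(data)       # sequential behavior by default
        return data
    def seek(self, position):
        # Seeks change the sequence, exactly as in the stateful model,
        # but touch only client-local state.
        self.position = position
```

To the application, successive reads return successive portions of the file, just as with a conventional stateful file system; the server never learns that a "sequential" read is in progress.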