Support for MultiRPC exists both at the language level and at the runtime level. The runtime level support includes the MultiRPC routines themselves along with the associated library routines which perform argument packing and unpacking. The language level support consists mainly of the argument descriptor information supplied by RP2Gen for each subsystem. The client may choose to interface directly with the runtime MultiRPC system without taking advantage of the RP2Gen simplifications, but the discussion in the following sections assumes the existence of the RP2Gen interface except where explicitly noted otherwise.
The procedure for making a MultiRPC call is very similar to that for making an RPC2 call. The subsystem is designed and the specification is written into a <subsys>.rpc2 file (the specification format is described in Section @ref[RP2GenChapter]). RP2Gen is then invoked on the specification file, and it generates the standard server and client side interfaces as well as the MultiRPC argument descriptor structures and definitions for each server operation. The relevant descriptor pointers are made available to the client through the associated <subsys>.h file.
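As an illustrative sketch only (the subsystem name, opcode, and operation below are hypothetical, and the exact syntax is defined in Section @ref[RP2GenChapter]), a minimal specification file might look like:

```
Server Prefix "S";
Subsystem "example";

#define EXAMPLE_SUBSYSID 500

Square (IN RPC2_Integer X, OUT RPC2_Integer Y);
```

Running RP2Gen on such a file produces the client and server stubs together with the MultiRPC argument descriptors for each declared operation.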
Once the interface has been specified, the subsystem implementor is responsible for writing the server main loop and the procedures to perform the server operations. This implementation is completely independent of any considerations relating to MultiRPC; MultiRPC is completely transparent to the server side of a subsystem.
From the client's perspective, making a MultiRPC call is slightly different from the RPC2 case. Instead of the procedure-like client side interface supplied by the stub routines, the single library routine @MRPC(MakeMulti) is used to interface to @RPC2(MultiRPC). Using a single library routine yields a substantial space saving in the executable files, but requires some additional information from the client making the call (see Sections @ref[MakeMultiInfo] and @ref[MakeMulti]). The client is also responsible for supplying a handler routine for any server operation used in a MultiRPC call. This handler routine is called by RPC2 as each individual server response arrives; it both provides individual server return codes to the client and gives the client control over the continuation or termination of the MultiRPC call. The handler routine is discussed in greater detail in the following section, and its interface is described in Section @ref[HandleResultXXX].
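The actual handler interface is specified in Section @ref[HandleResultXXX]; as a self-contained sketch of the idea only (the handler name, parameters, and return-code constants here are assumptions for illustration, not the real RPC2 interface), a handler that records each reply and aborts on a connection error might look like:

```c
#include <stdio.h>

/* Illustrative return conventions (assumed, not the actual RPC2 values). */
#define HANDLER_CONTINUE  0   /* keep waiting for further server responses */
#define HANDLER_ABORT    -1   /* terminate the MultiRPC call immediately  */

/* Hypothetical handler: invoked once per connection, either with that
   server's return code or with a (negative) connection error code. */
long HandleResult(int HowMany, int WhichServer, long ServerRetCode)
{
    if (ServerRetCode < 0) {
        /* A connection error makes the whole call useless to this
           client, so give up early rather than wait out the timeout. */
        fprintf(stderr, "server %d of %d failed (code %ld), aborting\n",
                WhichServer, HowMany, ServerRetCode);
        return HANDLER_ABORT;
    }
    /* Record the reply, then keep collecting the remaining responses. */
    return HANDLER_CONTINUE;
}
```

In a real MultiRPC call the handler would also receive the unpacked server arguments for the current connection, as described below.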
The client handler routine is intended to give the client control and flexibility in handling the incoming server responses from the MultiRPC call. For each connection specified in a @RPC2(MultiRPC) call, the client handler is called either when a connection error is detected or when the server response for that connection arrives. This allows the client to examine the replies as they arrive, and provides the opportunity to perform incremental bookkeeping and analysis of the responses. The handler also has the ability to abort the MultiRPC call at any time. A more detailed discussion of the handler specifications can be found in Section @ref[HandleResultXXX].
Since a MultiRPC call could potentially last a long time, it is crucial to provide the client with some measure of control over the progress and termination of the call. With many server responses, there are many variables that the client might wish to monitor in order to evaluate the progress of the call. In particular, the server responses and return codes themselves have a significant effect on the client's perception of the progress of the call. To address these requirements, RPC2 periodically passes control to the client during execution of the MultiRPC call. A client-supplied routine, called as each server response arrives, provides access to complete current information about the status of the call; it also gives the client the ability to perform any incremental processing he considers necessary or useful. The client then indicates, via the handler return code, his decision either to continue accepting server responses or to terminate the MultiRPC call.
The value of client control over the progress of the MultiRPC call is best illustrated with some specific examples. Consider first the case of connection errors. If the client requires responses on all of the designated connections and one of them returns an error, then the final result of the MultiRPC call will be useless and the remainder of the processing time will have been wasted. With the client handler routine the client can notice the connection error and abort the call, or even use the handler routine as an opportunity to rebind to the failed site and make an RPC2 call on that connection.
Another example is the implementation of a replicated server. A useful way to deal with operation quorums (some subset of size n of the total number of replicated servers) is to send messages out to all or many of the available servers and abort the call as soon as the first n responses arrive. This has the advantage of supplying the fastest possible execution of the replicated call; furthermore, since the n members of the quorum need not be chosen explicitly, the call will rarely have to be repeated if one of the servers is busy or down.
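The quorum scheme above can be sketched as a self-contained fragment (the handler name, return conventions, and the driver loop simulating RPC2's response collection are all illustrative assumptions; the real interfaces are described in Sections @ref[MakeMulti] and @ref[HandleResultXXX]):

```c
/* Illustrative return conventions (assumed, not the actual RPC2 values). */
#define HANDLER_CONTINUE  0
#define HANDLER_ABORT    -1

static int QuorumSize;      /* n: successful responses needed          */
static int QuorumReceived;  /* successful responses counted so far      */

/* Hypothetical quorum handler: count successes, abort once n arrive. */
long QuorumHandler(int WhichServer, long ServerRetCode)
{
    if (ServerRetCode == 0 && ++QuorumReceived >= QuorumSize)
        return HANDLER_ABORT;   /* quorum reached: stop waiting        */
    return HANDLER_CONTINUE;    /* ignore failures and keep collecting */
}

/* Simulated response loop: in a real call RPC2 invokes the handler as
   each response arrives.  Returns how many responses were examined. */
int CollectResponses(int HowMany, const long Codes[], int n)
{
    QuorumSize = n;
    QuorumReceived = 0;
    for (int i = 0; i < HowMany; i++)
        if (QuorumHandler(i, Codes[i]) == HANDLER_ABORT)
            return i + 1;
    return HowMany;
}
```

Because the handler aborts as soon as the n-th success arrives, the slowest servers never delay the call.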
The handler receives full sets of arguments each time it is called, along with an index identifying the current connection. The types of the server arguments to the client handler are identical to the types in the original MakeMulti call: the argument list is in fact passed through RPC2 and returned to the handler. Any processing is permissible in the handler routine, although it should be noted that since @RPC2(MultiRPC) does not support enqueueing of server requests, any call made on a connection already active in a MultiRPC call will generate a return code of @RPC2(BUSY). Also, for lengthy blocking computations the same cautions with respect to lightweight processes apply as for RPC2.
It should also be noted that the use of the abort facility of the client handler carries some risks. These are discussed in more detail in Sections @ref[HandleResultXXX] and @ref[errorsXXX].
The flow of control in MultiRPC is much the same as for RPC2 except for the iterative calling of the client handler. The client initiates the MultiRPC call by calling the library routine @MRPC(MakeMulti). MakeMulti packs the client arguments into a request buffer, and calls @RPC2(MultiRPC) with the request buffer, some argument packing information, and a pointer to @MRPC(UnpackMulti), the library unpacking routine.
@RPC2(MultiRPC) sets up the processing environment, initializes the request packet headers for all the designated servers, and performs any necessary side effect initialization. It then calls an internal routine to transmit the request packets. This transmission routine does not return until either the client-supplied timeout expires or responses have been received from all of the designated servers. Once the request packets have been transmitted, the routine settles into a loop waiting for server responses to arrive. As each response arrives, some preliminary processing is performed, and any remaining side effect processing is completed. RPC2 then calls @MRPC(UnpackMulti) to unpack the response buffer into the client's original arguments. @MRPC(UnpackMulti) unpacks the buffer and calls the client handler routine with the current server's information. The client then performs whatever processing he wishes, and returns with his instructions to continue or terminate the call. If he wishes to continue, the internal loop runs until all the server responses have been received. Otherwise, the loop terminates and the transmission routine cleans up any loose ends caused by the termination.
Control then returns to @RPC2(MultiRPC), which checks the return code and returns to @MRPC(MakeMulti). MakeMulti simply passes the supplied return code back to the client as it returns.
Since side effects are completely determined by the @SE[Descriptor] and the connection, extending the side effect mechanism to MultiRPC requires nothing more than supplying a unique @SE[Descriptor] for each connection.
@RPC2(MultiRPC) is the RPC2 runtime routine responsible for setting up the internal state properly for sending the request packets to the specified servers. It is called via the RPC2 library routine @MRPC(MakeMulti). One of the arguments to MultiRPC is the ArgInfo structure. This structure is never examined by RPC2, but is simply passed through to UnpackMulti. If the RP2Gen interface is used, this argument is supplied by @MRPC(MakeMulti) and need not concern the client. If the RP2Gen interface is not used, it can point to any structure needed by the client's unpacking routine.
The UnpackMulti argument is also related to the RP2Gen interface. If the RP2Gen interface is used, this argument is automatically supplied by @MRPC(MakeMulti) and will point to the RPC2 library unpacking routine. If the RP2Gen interface is not used, the client is responsible for supplying a pointer to a routine matching the UnpackMulti specification (see Section @ref[UnpackMulti]).
@MRPC(MakeMulti) is the library routine which provides the parameter packing interface to @RPC2(MultiRPC). It takes the place of the individual client side stub routines generated by RP2Gen. In addition to the usual information supplied in an RPC2 call, it takes as arguments RP2Gen generated argument and operation descriptors, the number of servers to be called, and a pointer to a client supplied handler routine (see Section @ref[MakeMulti] for more detailed information). Using the argument descriptors, @MRPC(MakeMulti) packs the supplied server arguments into an RPC2 request buffer and creates a data structure containing call specific information and a pointer to the client handler routine. It then makes the MultiRPC call, and passes the final return code back to the client when the call terminates.
OUT and INOUT parameters must be supplied in the form of arrays of pointers to the appropriate argument types; the parameter interface specifications are discussed in Section XXX. The size of each array depends on the number of servers designated by the client. For INOUT parameters it is only necessary to fill in a value for the first element of the array, although storage must be properly allocated for all of the elements.
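The storage convention can be illustrated with a minimal sketch (the helper name and the use of a plain long argument type are hypothetical; the interface specification referenced above is authoritative). For a call to HowMany servers, an INOUT parameter is an array of HowMany pointers, each pointing at allocated storage, with only element zero required to carry the input value:

```c
#include <stdlib.h>

/* Allocate the per-server pointer array for one hypothetical integer
   INOUT parameter.  Only element 0 need hold the input value, but
   every element must point at valid storage for that server's reply. */
long **AllocInOut(int HowMany, long InitialValue)
{
    long **arg = malloc(HowMany * sizeof(long *));
    if (arg == NULL) return NULL;
    for (int i = 0; i < HowMany; i++) {
        arg[i] = malloc(sizeof(long));
        if (arg[i] == NULL) return NULL;  /* sketch: leaks on failure */
        *arg[i] = 0;
    }
    *arg[0] = InitialValue;  /* the value actually sent to the servers */
    return arg;
}
```

After the call completes, element i of the array holds the value returned by the i-th server.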