This section goes into a level of technical detail that is probably not necessary in order to configure and use Metaproxy. It is provided only for those who like to know how things work. You should feel free to skip on to the next section if this one doesn't seem like fun.
Hold on tight - this may get a little hairy.
In the general course of things, a Z39.50 Init request may carry
with it an otherInfo packet of type VAL_PROXY,
whose value indicates the address of a Z39.50 server to which the
ultimate connection is to be made. (This otherInfo packet is
supported by YAZ-based Z39.50 clients and servers, but has not yet
been ratified by the Maintenance Agency and so is not widely used
in non-Index Data software. We're working on it.)
The VAL_PROXY packet functions
analogously to the absoluteURI-style Request-URI used with the GET
method when a web browser asks a proxy to forward its request: see
the Request-URI section of the HTTP 1.1 specification.
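For comparison, here is what that HTTP analogy looks like on the wire. When a browser talks to a proxy, it sends the full absolute URI in the request line rather than just the path, so the proxy knows where to forward the request (the host name here is just an illustrative example):

```
GET http://example.com/records/123.html HTTP/1.1
Host: example.com
```

The VAL_PROXY otherInfo plays the same role for Z39.50: it tells Metaproxy where the request should ultimately go.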
Within Metaproxy, Search requests that are part of the same
session as an Init request that carries a
VAL_PROXY otherInfo are also annotated with the
same information. The role of the virt_db
filter is to rewrite this otherInfo packet dependent on the
virtual database that the client wants to search.
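As a sketch of what that looks like in practice (the database name and target address here are only examples), a virt_db filter that maps a single virtual database onto one back-end target might be configured like this:

```xml
<filter type="virt_db">
  <virtual>
    <!-- Clients searching the "loc" database... -->
    <database>loc</database>
    <!-- ...get their requests annotated with this target address. -->
    <target>z3950.loc.gov:7090/voyager</target>
  </virtual>
</filter>
```

When a Search request for the database `loc` passes through this filter, its VAL_PROXY otherInfo is rewritten to point at the specified target.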
When Metaproxy receives a Z39.50 Init request from a client, it doesn't immediately forward that request to the back-end server. Why not? Because it doesn't know which back-end server to forward it to until the client sends a Search request that specifies the database that it wants to search in. Instead, it just treasures the Init request up in its heart; and, later, the first time the client does a search on one of the specified virtual databases, a connection is forged to the appropriate server and the Init request is forwarded to it. If, later in the session, the same client searches in a different virtual database, then a connection is forged to the server that hosts it, and the same cached Init request is forwarded there, too.
All of this clever Init-delaying is done by the
frontend_net filter. The
virt_db filter knows nothing about it; in
fact, because the Init request that is received from the client
doesn't get forwarded until a Search request is received, the
virt_db filter (and the
z3950_client filter behind it) doesn't even get
invoked at Init time. The only thing that a
virt_db filter ever does is rewrite the
VAL_PROXY otherInfo in the requests that pass
through it.
It is possible for a virt_db filter to contain
multiple
<target>
elements. What does this mean? Only that the filter will add
multiple VAL_PROXY otherInfo packets to the
Search requests that pass through it. That's because the virtual
DB filter is dumb, and does exactly what it's told - no more, no
less.
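Concretely, a multi-target configuration might look like the following sketch (the target addresses are illustrative):

```xml
<filter type="virt_db">
  <virtual>
    <database>all</database>
    <!-- Two targets: the filter will dutifully attach two
         VAL_PROXY otherInfo packets, one per target. -->
    <target>z3950.loc.gov:7090/voyager</target>
    <target>z3950.example.org:210/marc</target>
  </virtual>
</filter>
```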
If a Search request with multiple VAL_PROXY
otherInfo packets reaches a z3950_client
filter, this is an error. That filter doesn't know how to deal
with multiple targets, so it will either just pick one and search
in it, or (better) fail with an error message.
The multi filter comes to the rescue! This is
the only filter that knows how to deal with multiple
VAL_PROXY otherInfo packets, and it does so by
making multiple copies of the entire Search request: one for each
VAL_PROXY. Each of these new copies is then
passed down through the remaining filters in the route. (The
copies are handled in parallel through the
spawning of new threads.) Since the copies each have only one
VAL_PROXY otherInfo, they can be handled by the
z3950_client filter, which happily deals with
each one individually. When the results of the individual
searches come back up to the multi filter, it
merges them into a single Search response, which is what
eventually makes it back to the client.
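Putting the pieces together, a route that supports multi-database searching would place the multi filter after virt_db and before z3950_client, so that each Search request is split into single-target copies before it reaches the back-end client filter. The following is a hedged sketch of such a route (port number, database name, and target addresses are invented for illustration):

```xml
<route id="start">
  <!-- Accepts client connections and caches the Init request. -->
  <filter type="frontend_net">
    <port>@:9000</port>
  </filter>
  <!-- Rewrites VAL_PROXY otherInfo; two targets here means two
       VAL_PROXY packets on each Search request. -->
  <filter type="virt_db">
    <virtual>
      <database>all</database>
      <target>z3950.loc.gov:7090/voyager</target>
      <target>z3950.example.org:210/marc</target>
    </virtual>
  </filter>
  <!-- Splits each multi-target Search into one copy per target
       and merges the responses. -->
  <filter type="multi"/>
  <!-- Handles each single-target copy individually. -->
  <filter type="z3950_client"/>
</route>
```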
![Diagram showing the progress of packages through the filters during a simple virtual-database search and a multi-database search](multi.png)
A picture is worth a thousand words (but only five hundred on 64-bit architectures)