Next: References
Up: Compact and Efficient Presentation
Previous: Size/Speed Trade-Off
The results presented in this paper support the following three key
insights:
- Marshalling code exhibits locality. For many applications,
a large fraction of the speedup achieved by fully optimizing the presentation conversion
code can be achieved by optimizing only a subset of the types in the interface
specification. The reason for this is that some of the types defined in an
interface specification are used far more frequently than others.
- Locality can be detected by static analysis. The
number of times that a particular type in an interface specification will be
used at run time can be determined by mapping the type reference graph of the
interface specification onto a Markov model. Used in conjunction with a set of
simple heuristics, solving the resulting system of linear equations yields the
frequency of each type in the interface with very good accuracy.
- The size-speed trade-off can be solved using a Knapsack
optimization model. The optimization problem to be solved is to select the
subset of types of an interface specification that should be optimized given a
constraint on the maximal size of the presentation conversion code. This can
be modeled as a well-known optimization problem (Knapsack problem), and thus
solved with standard optimization algorithms.
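The frequency estimation behind the second insight can be sketched as follows. The sketch below is illustrative, not the paper's implementation: the toy type graph, the function name, and the per-reference visit counts are all made up. For an acyclic graph, the expected visit counts can be computed by simple fixed-point iteration instead of an explicit equation solver.

```python
# Hypothetical sketch: estimate how often each type in an interface
# specification is visited at run time, by treating the type reference
# graph as a Markov chain with expected visit counts on its edges.
# The graph and all numbers below are invented for illustration.

def estimate_frequencies(edges, roots, iterations=200):
    """edges: {type: [(referenced_type, expected_visits_per_use), ...]}
    roots: types used directly as messages, each assumed used once.
    Returns the expected visit count of every type."""
    freq = {t: 0.0 for t in edges}
    for _ in range(iterations):
        nxt = {t: (1.0 if t in roots else 0.0) for t in edges}
        for t, refs in edges.items():
            for child, weight in refs:
                # each visit to t contributes `weight` visits to child
                nxt[child] += freq[t] * weight
        freq = nxt
    return freq

# Toy interface: a Message contains one Header and on average 3 Items;
# a Header holds one String, an Item holds two Strings.
graph = {
    "Message": [("Header", 1.0), ("Item", 3.0)],
    "Header":  [("String", 1.0)],
    "Item":    [("String", 2.0)],
    "String":  [],
}
freqs = estimate_frequencies(graph, roots={"Message"})
# String is visited 1*1 + 3*2 = 7 times per Message
```

In this toy graph the iteration converges after a few steps; cyclic type graphs would require either truncating the iteration or solving the linear system directly.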
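The third insight, selecting which types to optimize under a code-size budget, is a 0/1 knapsack problem and can be solved with a standard dynamic program. The sketch below is again illustrative: the type names, code sizes, and calls-saved figures are invented, with each type's profit taken to be its estimated usage frequency.

```python
# Hypothetical sketch of the size/speed trade-off as a 0/1 knapsack:
# each type is an item whose weight is the code-size increase of
# compiling its marshalling routine and whose profit is the number of
# interpreter calls saved.  All figures are made up for illustration.

def select_types(items, size_budget):
    """items: list of (type_name, code_size, calls_saved) tuples.
    Returns (max calls saved, chosen type names) within the budget."""
    best = {0: (0.0, [])}  # code size used -> (calls saved, chosen types)
    for name, size, saved in items:
        # iterate sizes in descending order so each item is used at most once
        for used in sorted(best, reverse=True):
            new_used = used + size
            if new_used > size_budget:
                continue
            cand = best[used][0] + saved
            if new_used not in best or cand > best[new_used][0]:
                best[new_used] = (cand, best[used][1] + [name])
    return max(best.values())

types = [("Message", 40, 1.0), ("Header", 25, 1.0),
         ("Item", 30, 3.0), ("String", 20, 7.0)]
saved, chosen = select_types(types, size_budget=60)
# optimizing Item and String fits in 50 units and saves 10 calls
```

The refinement suggested below, one variable per (type, operation) pair, fits the same model directly: each pair simply becomes a separate knapsack item.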
In summary, the work in this paper has shown that it is possible to automate
the size/speed trade-off for presentation conversion code, and has proposed
and evaluated a particular optimization method. An experimental evaluation of
this method on a set of benchmarks showed that by investing 25% of the code
size required by fully optimized code, 55% to 68% of the interpreter calls
could be eliminated. Increasing the code size investment to 50% of the
maximal code size resulted in saving 87% to 94% of all interpreter calls.
The approach presented in this paper is not limited to the particular interface
definition language used in the experiments (ASN.1). Since we start from a
general model, it is straightforward to apply the same optimization
approach in stub compilers for systems like CORBA, DCE or Java-RMI.
A potential limitation is the heuristic branch prediction approach. It
works well if the type reference graph of an application contains many
references to a small number of "central" data types. This is very likely in
complex end user-oriented distributed applications such as databases or EDI,
and these are precisely the applications where the size of presentation
conversion routines becomes problematic. As a fallback, Mavros allows
frequency values to be attached manually to type references.
The results can be further improved by refining the method used for estimating
the profit and cost of an optimization. For instance, the measurements
presented in this paper show that the performance of marshalling and
unmarshalling operations is not symmetric. One improvement would thus be to
introduce one optimization variable per (type, operation) pair
instead of a single optimization variable per type.
Philipp Hoschka
Sun Oct 26 21:35:40 MET 1997