
Transparency and Interpretability

Avenue provides both interpretability and transparency—two distinct but complementary properties.

Definitions

Interpretability refers to predictive models whose calculations are inherently understandable. For an interpretable model, you can see how the system works and what information it relies upon in a particular instance. The inner workings are accessible, providing clear information about the factors used and how they combine to produce results.

Transparency refers to sharing the underlying formula for the model. This enables independent researchers to evaluate the model and assess its accuracy.

Key Distinction

A model can be:

  • Transparent but not interpretable: The formula is available but too complex to understand (e.g., publishing a neural network's complete architecture and weights)
  • Interpretable but not transparent: Reasoning for individual predictions is provided without access to the full model (e.g., SHAP values explain predictions but don't disclose the underlying model)
  • Both transparent and interpretable: The full model is disclosed AND its calculations are understandable (e.g., factor tables)

Why Factor Tables Are Better Than SHAP Values

SHAP (SHapley Additive exPlanations) values are a popular post-hoc explanation method, but they have critical limitations:

SHAP provides local interpretability only:

  • SHAP explains one prediction at a time by assigning importance values to each feature for that specific instance
  • You must calculate SHAP values separately for every prediction you want to explain (see the sketch after this list)
  • There's no single formula showing how the model works overall
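
To make the per-instance workflow concrete, here is a minimal sketch using the open-source shap package with a scikit-learn gradient boosting model. The synthetic data, model, and variable names are illustrative assumptions, not anything from Avenue.

```python
# Minimal sketch of SHAP's per-instance workflow (illustrative only).
# Assumes the open-source `shap` and scikit-learn packages; the model and
# data are synthetic stand-ins.
import numpy as np
import shap
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor

X, y = make_regression(n_samples=500, n_features=4, random_state=0)
model = GradientBoostingRegressor(random_state=0).fit(X, y)

explainer = shap.TreeExplainer(model)
base = float(np.atleast_1d(explainer.expected_value)[0])

# Each prediction must be explained individually: one vector of per-feature
# attributions per row, with no single global formula produced.
shap_values = explainer.shap_values(X[:3])
for i, attributions in enumerate(shap_values):
    print(f"row {i}: base={base:.2f}, attributions={np.round(attributions, 2)}")
```

Explaining any new prediction requires running the SHAP computation again; the loop above yields explanations only for the rows it was given.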

Factor tables provide global transparency:

  • Factor tables show exactly how the model produces predictions for ALL possible inputs
  • The complete model logic is disclosed in a compact set of tables
  • Anyone can independently verify how any prediction is calculated, as the sketch after this list shows
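
For contrast, global transparency can be illustrated with a deliberately tiny, hypothetical factor table: the entire model is visible below, and any prediction can be reproduced with lookups and arithmetic. The factor names, values, and multiplicative combination rule are illustrative assumptions, not Avenue's actual tables.

```python
# A hypothetical two-factor rating table (illustrative values only).
# The whole "model" is visible here; nothing else is needed to reproduce
# any prediction. Whether factors combine additively or multiplicatively
# depends on the underlying model -- this sketch uses a multiplicative form.
BASE_RATE = 500.0

AGE_FACTOR = {          # driver age band -> factor
    "18-25": 1.60,
    "26-40": 1.00,
    "41-65": 0.90,
    "66+":   1.10,
}

TERRITORY_FACTOR = {    # rating territory -> factor
    "urban":    1.25,
    "suburban": 1.00,
    "rural":    0.85,
}

def predict(age_band: str, territory: str) -> float:
    """Premium = base rate x one factor per table (a lookup each)."""
    return BASE_RATE * AGE_FACTOR[age_band] * TERRITORY_FACTOR[territory]

print(predict("18-25", "urban"))   # 500 * 1.60 * 1.25 = 1000.0
```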

Additional SHAP limitations:

  • Explanations may not faithfully represent the actual model behavior
  • Attribution values can be inconsistent across instances
  • The underlying black-box model remains unchanged
  • Not suitable for regulatory filing—you can't submit SHAP explanations as your model

Avenue's Approach

Avenue achieves both transparency and interpretability while maintaining GBM-level predictive performance:

  1. Exact representation: The GBM IS the factor tables (mathematical equivalence, not approximation); a toy illustration follows this list
  2. Global transparency: Complete model disclosed in compact tables
  3. Interpretability: Predictions calculated through simple table lookups and arithmetic
  4. Performance: No information loss—predictions match the GBM exactly
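
To illustrate the exact-representation idea on the simplest possible case, the toy sketch below (an illustration under stated assumptions, not Avenue's algorithm) converts a scikit-learn GBM restricted to depth-1 stumps into one additive lookup table per feature and checks that the table lookups reproduce the GBM's prediction.

```python
# Toy illustration (NOT Avenue's algorithm): convert a depth-1 GBM into
# per-feature lookup tables and check the lookups reproduce gbm.predict.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor

X, y = make_regression(n_samples=400, n_features=3, random_state=0)
gbm = GradientBoostingRegressor(max_depth=1, n_estimators=200,
                                learning_rate=0.1, random_state=0).fit(X, y)

base = float(gbm.init_.predict(X[:1])[0])   # constant initial prediction
stumps = {}                                 # feature -> [(threshold, left, right)]
for tree in gbm.estimators_[:, 0]:
    t = tree.tree_
    if t.node_count == 1:                   # degenerate tree: constant only
        base += gbm.learning_rate * t.value[0][0][0]
        continue
    f = int(t.feature[0])
    stumps.setdefault(f, []).append((t.threshold[0],
                                     gbm.learning_rate * t.value[1][0][0],
                                     gbm.learning_rate * t.value[2][0][0]))

def build_table(rows):
    """Collapse one feature's stumps into bin edges plus one value per bin."""
    edges = np.sort(np.unique([thr for thr, _, _ in rows]))
    reps = np.concatenate(([edges[0] - 1.0],
                           (edges[:-1] + edges[1:]) / 2.0,
                           [edges[-1] + 1.0]))       # one representative per bin
    values = np.array([sum(l if rep <= thr else r for thr, l, r in rows)
                       for rep in reps])
    return edges, values

tables = {f: build_table(rows) for f, rows in stumps.items()}

def table_predict(x):
    """Base value plus one table lookup per feature."""
    total = base
    for f, (edges, values) in tables.items():
        total += values[np.searchsorted(edges, x[f])]
    return total

row = X[0]
print(np.isclose(table_predict(row), gbm.predict(row.reshape(1, -1))[0]))  # True
```

Because every leaf value lands in some table bin, the lookup sum is the ensemble itself rather than an approximation of it. Deeper trees depend on several features at once and would contribute to tables keyed on feature combinations instead of single features; that case is not covered by this sketch.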

This contrasts with approaches that either:

  • Provide interpretability without transparency (SHAP, LIME)
  • Provide both but with limited predictive power (traditional GLMs)
  • Achieve performance but sacrifice interpretability and transparency (standard GBMs)

Practical Benefits

Regulatory compliance: Factor tables meet requirements for transparent models that regulators can review and approve.

Operational benefits:

  • Debug unexpected predictions by examining factor contributions (see the sketch after this list)
  • Update specific factors based on market conditions
  • Validate model behavior against domain expertise
  • Enable knowledge transfer and model maintenance
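
As a sketch of the first point above, the per-factor contributions to a single prediction can be read straight off the tables. The table names and values below reuse the hypothetical numbers from the earlier sketch and are not Avenue's.

```python
# Hypothetical debugging aid: list each factor applied to one prediction
# so an unexpectedly high result can be traced to a specific table entry.
BASE_RATE = 500.0
FACTOR_TABLES = {
    "age_band":  {"18-25": 1.60, "26-40": 1.00, "41-65": 0.90, "66+": 1.10},
    "territory": {"urban": 1.25, "suburban": 1.00, "rural": 0.85},
}

def explain(risk: dict) -> float:
    """Print a running breakdown of the prediction, one factor at a time."""
    premium = BASE_RATE
    print(f"base rate: {BASE_RATE:.2f}")
    for name, table in FACTOR_TABLES.items():
        factor = table[risk[name]]
        premium *= factor
        print(f"{name}={risk[name]!r}: x{factor:.2f} -> {premium:.2f}")
    return premium

explain({"age_band": "18-25", "territory": "urban"})   # ends at 1000.00
```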

Trust: Stakeholders can verify exactly how the model works rather than relying on summary explanations.
