Presentation on Conditional Portfolio Optimization

Jun 6, 2024

Speaker Introduction

  • Name: Not specified
  • Role: Junior quantitative researcher at Arts and Teams
  • Education: Master's in Mathematics, University of Pittsburgh
  • Field: Transitioned from Mathematics to Quantitative Finance

Core Idea

  • Thesis: A novel method (Conditional Portfolio Optimization - CPO) addressing the limitations of traditional portfolio optimization models.

Key Quote

  • Quote:

    "Past performance is not indicative of future results"

Structure of Presentation

  1. Introduction
  2. Traditional Optimization Methods vs Machine Learning Methods
  3. Core Concept of CPO
  4. Methodology of CPO
  5. Bare-Bones Process
  6. Results
  7. Conclusion and Personal Remarks
  8. References

Introduction

Recap of Traditional Optimization Methods

  • Traditional methods involve mathematical models to maximize returns given a level of risk (e.g., Sharpe Ratio).
  • Methods range from naive to advanced (a brief sketch follows this list):
    • Equal Weighting: Simple, based on diversifying risk.
    • Risk Parity: Allocates risk equally among assets.
    • Mean-Variance (Markowitz): Balances expected returns against estimated covariance risk.
  • Major limitation: Do not account for market regime shifts (e.g., COVID-19 pandemic, war in Ukraine).
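
The three schemes above can be summarized in a few lines of code. The following is a minimal sketch on simulated daily returns, using an inverse-volatility variant of risk parity and an unconstrained Markowitz solution normalized to sum to one; these simplifications are illustrative, not the exact formulations used in the talk.

```python
# Minimal sketch of the three traditional schemes on simulated data.
# The return-generating assumptions and the unconstrained Markowitz solution
# (w proportional to inverse(covariance) @ mean) are illustrative choices.
import numpy as np

rng = np.random.default_rng(0)
returns = rng.normal(0.001, 0.01, size=(500, 4))  # 500 days, 4 hypothetical assets

# Equal Weighting: ignore all estimates and diversify uniformly.
n_assets = returns.shape[1]
w_equal = np.full(n_assets, 1.0 / n_assets)

# Risk Parity (simple inverse-volatility variant): lower-volatility assets
# receive proportionally more capital so each contributes similar risk.
inv_vol = 1.0 / returns.std(axis=0)
w_risk_parity = inv_vol / inv_vol.sum()

# Mean-Variance (Markowitz): trade expected return against covariance risk,
# here via the unconstrained solution normalized to sum to one.
mu = returns.mean(axis=0)
sigma = np.cov(returns, rowvar=False)
raw = np.linalg.solve(sigma, mu)
w_mean_variance = raw / raw.sum()

print(w_equal, w_risk_parity, w_mean_variance, sep="\n")
```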

Traditional Optimization Methods vs Machine Learning Methods

Traditional Methods Characteristics

  • Stable market assumptions
  • Normally distributed asset returns
  • Single-objective formulations (e.g., maximizing the Sharpe Ratio)

Machine Learning Methods Characteristics

  • Identify and adapt to different market regimes (toy sketch after this list)
  • Account for dynamic changes in asset returns and correlations
  • Potential for robust and adaptive portfolio allocation
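
As a toy illustration of the regime idea (not the model described in the talk), rolling return and volatility features can be clustered into a handful of regimes; KMeans and the two features below are assumptions chosen for brevity.

```python
# Toy regime identification: cluster rolling mean return and volatility into
# three regimes. KMeans and these two features are illustrative assumptions.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
daily_returns = rng.normal(0.0003, 0.01, size=2000)  # simulated index returns

window = 20  # 20-day rolling features
rolling_mean = np.convolve(daily_returns, np.ones(window) / window, mode="valid")
rolling_vol = np.array([daily_returns[i:i + window].std()
                        for i in range(len(daily_returns) - window + 1)])

features = np.column_stack([rolling_mean, rolling_vol])
regimes = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(features)
print(np.bincount(regimes))  # number of days assigned to each regime
```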

Core Concept of CPO

Two-Step Approach

  1. Regime Identification: Train a machine learning model on historical market data, using algorithms such as neural networks or gradient-boosted decision trees.
  2. Portfolio Optimization: Use the trained model to optimize portfolio allocation. The optimization aims to maximize a chosen objective function (e.g., future Sharpe Ratio); see the sketch after this list.
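
A minimal sketch of the two-step idea follows, assuming the model learns to map (market features, candidate weights) to a realized Sharpe ratio and that "optimization" means scoring sampled candidates with that model. The synthetic data, feature counts, and the GradientBoostingRegressor choice are placeholders, not the authors' implementation.

```python
# Hedged sketch of the two-step approach on synthetic data.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(2)
n_samples, n_market_features, n_assets = 2000, 5, 4

market_features = rng.normal(size=(n_samples, n_market_features))
candidate_weights = rng.dirichlet(np.ones(n_assets), size=n_samples)
realized_sharpe = rng.normal(size=n_samples)  # placeholder labels (e.g., from backtests)

# Step 1: learn the mapping (market features, weights) -> realized Sharpe ratio.
X = np.hstack([market_features, candidate_weights])
model = GradientBoostingRegressor().fit(X, realized_sharpe)

# Step 2: at decision time, score many candidate allocations under the current
# market features and keep the one with the highest predicted Sharpe ratio.
current = rng.normal(size=(1, n_market_features))
candidates = rng.dirichlet(np.ones(n_assets), size=1000)
scores = model.predict(
    np.hstack([np.repeat(current, len(candidates), axis=0), candidates])
)
best_weights = candidates[np.argmax(scores)]
print(best_weights, float(scores.max()))
```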

Methodology of CPO

  • Training Stage: Model trains on historical market features and portfolio control features.
  • Pre-Processing Stage: Refines historical data.
  • Intelligent Sampling: Described as crucial but left ambiguous; it reduces the number of candidate allocations that must be evaluated (an illustrative stand-in follows this list).
  • Inference Stage: Apply trained model with current market features to predict the objective function.
  • Optimization Stage: Use constraints to finalize optimal allocation.
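
The talk does not specify how intelligent sampling works, so the snippet below is only an illustrative stand-in: it enumerates long-only weight vectors on a coarse 10% grid, which shrinks the continuum of allocations to a few hundred candidates for the inference stage to score.

```python
# Illustrative stand-in for "intelligent sampling": enumerate long-only
# allocations on a coarse grid instead of searching a continuum. The actual
# CPO sampling scheme is not disclosed in the presentation.
from itertools import product

def grid_allocations(n_assets, step=0.1):
    """Yield all long-only weight vectors on a grid whose entries sum to 1."""
    levels = [round(i * step, 10) for i in range(int(round(1 / step)) + 1)]
    for combo in product(levels, repeat=n_assets):
        if abs(sum(combo) - 1.0) < 1e-9:
            yield combo

candidates = list(grid_allocations(n_assets=4, step=0.1))
print(len(candidates))  # 286 candidates at 10% granularity instead of a continuum
```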

Bare-Bones Process

Detailed Steps:

  1. Training Stage: Historical market and portfolio data are processed to train a machine learning model.
  2. Market Features and Control Features: Incorporated for training and prediction.
  3. Apply Constraints: Optimize based on the Sharpe ratio or another chosen objective function, subject to allocation constraints (see the sketch after this list).
  4. Predicted Outputs: Both Sharpe ratio predictions and portfolio allocations.
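
How constraints enter the pipeline is not detailed in the talk; one simple reading, sketched below, is that sampled candidates are filtered against box-style limits (maximum weight per asset, minimum cash) before being scored by the model. The specific limits and the cash column are hypothetical.

```python
# Hypothetical constraint step: filter sampled candidates against simple
# box-style limits before scoring them. The 40% per-asset cap, 5% minimum
# cash, and "last column is cash" convention are illustrative assumptions.
import numpy as np

def apply_constraints(candidates, max_weight=0.4, min_cash=0.0, cash_index=-1):
    """Keep only allocations that satisfy the per-asset cap and cash floor."""
    candidates = np.asarray(candidates)
    feasible = (candidates.max(axis=1) <= max_weight) & \
               (candidates[:, cash_index] >= min_cash)
    return candidates[feasible]

candidates = np.random.default_rng(3).dirichlet(np.ones(4), size=1000)
feasible = apply_constraints(candidates, max_weight=0.4, min_cash=0.05)
print(f"{len(feasible)} of {len(candidates)} candidates satisfy the constraints")
```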

Results

  • ETF Comparison: CPO outperforms naive and risk parity methods, especially under constraints.
  • Portfolio Value Graph: CPO excels during bull markets and adapts to bear markets (e.g., Ukraine war impact).
  • Cash Allocations: Higher in times of market downturns, demonstrating adaptive behavior.

Interesting Findings

  • Surprising Result: The Equal Weight allocation outperformed CPO in specific out-of-sample tests.
  • Constraint Significance: Constraints have an outsized impact on optimization results.
  • Machine Learning Application: Demonstrates benefits but also poses questions about the completeness and transparency of CPO.

Conclusion and Personal Remarks

  • Points to Remember about CPO:

    • Two-Step Process: Identification and optimization.
    • Mixed Results: Successes and unexpected outcomes.
  • Machine Learning Integration: A significant advancement, requiring more thorough testing and transparency.

  • Further Research: Needed for broader applicability and robustness.

  • Concerns: Lack of transparency, limited statistical back-testing.

  • Final Thoughts: The methodology is groundbreaking, but access to more detailed methods and implementations is needed to appreciate it fully.