Algorithm Overview
Read the summaries below to determine which of our matching algorithms is right for you.
Table of contents
- Dynamic Almost Matching Exactly (DAME)
- Fast Large-Scale Almost Matching Exactly (FLAME)
- Matching After Learning to Stretch (MALTS)
- Adaptive Hyper-Box Matching (AHB)
- Model-to-Match: Lasso Coefficient Matching (LCM)
Dynamic Almost Matching Exactly (DAME)
Fast Large-Scale Almost Matching Exactly (FLAME)
| | |
| --- | --- |
| Languages | R, Python |
| Input data | Categorical covariates; scales well to large datasets with millions of observations |
| Matching method | Uses bit-vector computations to match units based on a learned, weighted Hamming distance. FLAME successively drops irrelevant covariates to lessen the computational load while still maintaining enough covariates for high-quality conditional average treatment effect (CATE) estimation. |
| Paper | FLAME: A Fast Large-scale Almost Matching Exactly Approach to Causal Inference |
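To make the matching step concrete, here is a minimal sketch of the core idea behind FLAME-style exact matching on categorical covariates: units that agree on every covariate form a matched group, and the CATE for a group is the difference in mean outcomes between its treated and control units. This toy example uses pandas and made-up column names (`x1`, `x2`, `treated`, `outcome`); it omits FLAME's learned Hamming weights and its successive covariate-dropping step, and it is not the API of the actual R/Python packages.

```python
import pandas as pd

def exact_match_groups(df, covariates):
    """Group units that agree exactly on the given covariates;
    keep only groups containing both treated and control units."""
    matched = []
    for _, group in df.groupby(covariates):
        if group["treated"].nunique() == 2:  # both arms present
            matched.append(group)
    return matched

def cate_estimates(groups):
    """Within each matched group, CATE = mean treated outcome
    minus mean control outcome."""
    ests = []
    for g in groups:
        t = g.loc[g["treated"] == 1, "outcome"].mean()
        c = g.loc[g["treated"] == 0, "outcome"].mean()
        ests.append(t - c)
    return ests

# Toy data: two binary covariates, a treatment indicator, an outcome.
df = pd.DataFrame({
    "x1":      [0, 0, 1, 1, 0, 1],
    "x2":      [1, 1, 0, 0, 1, 0],
    "treated": [1, 0, 1, 0, 0, 1],
    "outcome": [3.0, 1.0, 5.0, 2.0, 1.5, 4.5],
})
groups = exact_match_groups(df, ["x1", "x2"])
print(cate_estimates(groups))  # [1.75, 2.75]
```

In the real algorithm, when too few units match exactly on all covariates, FLAME drops the covariate whose removal least degrades outcome prediction on a holdout set and repeats the grouping, which is what keeps CATE quality high as the match requirements loosen.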
Matching After Learning to Stretch (MALTS)
| | |
| --- | --- |
| Languages | Python |
| Input data | Continuous, categorical, or mixed (continuous and categorical) covariates |
| Matching method | Uses exact matching for discrete covariates and learned, generalized Mahalanobis distances for continuous covariates. Rather than a predetermined distance metric, covariates that contribute more to predicting the outcome receive higher weights. |
| Paper | MALTS: Matching After Learning to Stretch |
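To illustrate the "stretch" idea, here is a minimal sketch of matching under a weighted (diagonal) Mahalanobis-style distance: covariates with larger learned weights count more, so a treated unit matches the control that is close on the important covariates. The weights and data here are made up for illustration; MALTS actually learns the full stretch matrix from a training split, which this sketch skips.

```python
import numpy as np

def stretched_distance(a, b, weights):
    """Weighted Euclidean distance: a diagonal special case of a
    generalized Mahalanobis distance, sqrt(sum_j w_j * (a_j - b_j)^2)."""
    d = a - b
    return float(np.sqrt(np.sum(weights * d * d)))

def nearest_control(unit, controls, weights):
    """Index of the control unit closest to `unit` under the
    stretched metric."""
    dists = [stretched_distance(unit, c, weights) for c in controls]
    return int(np.argmin(dists))

# Toy example: the learned weight on x1 is 10x the weight on x2,
# so matches prioritize agreement on x1.
weights = np.array([10.0, 1.0])
treated_unit = np.array([1.0, 5.0])
controls = np.array([[1.1, 0.0],   # close on x1, far on x2
                     [3.0, 5.0]])  # far on x1, close on x2
print(nearest_control(treated_unit, controls, weights))  # 0
```

With unweighted Euclidean distance the second control would win (distance 2.0 vs. about 5.0), but the learned stretch makes disagreement on x1 expensive, flipping the match — which is exactly why outcome-informed weights change who gets matched to whom.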
Adaptive Hyper-Box Matching (AHB)
Model-to-Match: Lasso Coefficient Matching (LCM)