# AutoMLPipeline

| Documentation | Build Status | Help |
|---|---|---|
| [![][docs-dev-img]][docs-dev-url] [![][docs-stable-img]][docs-stable-url] | [![][gha-img]][gha-url] [![][codecov-img]][codecov-url] | [![][slack-img]][slack-url] [![][gitter-img]][gitter-url] |
AutoMLPipeline (AMLP) is a package that makes it trivial to create complex ML pipeline structures using simple expressions. It leverages the built-in macro programming features of Julia to symbolically process and manipulate pipeline expressions, making it easy to discover optimal structures for machine learning regression and classification.
To illustrate, here is a pipeline expression and evaluation of a typical machine learning workflow that extracts numerical features (`numf`) for `ica` (Independent Component Analysis) and `pca` (Principal Component Analysis) transformations, respectively, concatenated with the hot-bit encoding (`ohe`) of the categorical features (`catf`) of a given dataset for `rf` (Random Forest) modeling:
```julia
model = (catf |> ohe) + (numf |> pca) + (numf |> ica) |> rf
fit!(model, Xtrain, Ytrain)
prediction = transform!(model, Xtest)
score(:accuracy, prediction, Ytest)
crossvalidate(model, X, Y, "balanced_accuracy_score")
```
Just take note that `+` has higher priority than `|>`, so if you are not sure, enclose the operations inside parentheses:
```julia
### these two expressions are the same
a |> b + c; a |> (b + c)

### these two expressions are the same
a + b |> c; (a + b) |> c
```
Please read this AutoMLPipeline Paper for benchmark comparisons.
Recorded Video/Conference Presentations:
- 2023 JuliaCon (Wrapping Up Offline RL as Part of AutoMLPipeline Workflow)
- 2022 JuliaCon (Distributed AutoML Pipeline Search in PC/RasPi K8s Cluster)
- 2021 JuliaCon (Finding an Effective Strategy for AutoML Pipeline Optimization)
- 2021 PyData Ireland Meetup (Symbolic ML Pipeline Expression and Benchmarking)
- 2020 JuliaCon (AutoMLPipeline: A ToolBox for Building ML Pipelines)
Related Video/Conference Presentations:
- 2021 JuliaCon (Lale in Julia: A package for semi-automated data science)
- 2019 JuliaCon (TSML: Time Series Machine Learning Pipeline)
- 2021 OpenSource Guild in IBM (Overview of HPC and Data Science in Julia Programming with AutoML)
More examples can be found in the examples folder, including optimizing pipelines by multi-threading or distributed computing.
## Motivations
The typical workflow in machine learning classification or prediction requires some combination of the following preprocessing steps together with modeling:
- feature extraction (e.g. ica, pca, svd)
- feature transformation (e.g. normalization, scaling, ohe)
- feature selection (anova, correlation)
- modeling (rf, adaboost, xgboost, lm, svm, mlp)
Each step has several choices of functions to use together with their corresponding parameters. Optimizing the performance of the entire pipeline is a combinatorial search of the proper order and combination of preprocessing steps, optimization of their corresponding parameters, together with searching for the optimal model and its hyper-parameters.
Because of close dependencies among various steps, we can consider the entire process to be a pipeline optimization problem (POP). POP requires simultaneous optimization of pipeline structure and parameter adaptation of its elements. As a consequence, having an elegant way to express pipeline structure can help lessen the complexity in the management and analysis of the wide-array of choices of optimization routines.
Future work will target implementations of different pipeline optimization algorithms, ranging from evolutionary approaches and integer programming (discrete choices of POP elements) to tree/graph search and hyper-parameter search.
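To get a feel for the combinatorial nature of POP, even a tiny brute-force search over a few extractor/learner choices can be sketched with the operators shown later in this README (a sketch, not part of the package API; it assumes `X` and `Y` have been loaded as in the Sample Usage section below):

```julia
using AutoMLPipeline

# type-based selectors and categorical encoder (as in the Sample Usage section)
catf = CatFeatureSelector(); numf = NumFeatureSelector(); ohe = OneHotEncoder()

# candidate feature extractors and learners, by their scikit-learn names
extractors = ["PCA", "FastICA", "FactorAnalysis"]
learners   = ["RandomForestClassifier", "LinearSVC", "AdaBoostClassifier"]

results = []
for ex in extractors, lr in learners
    pipe = (catf |> ohe) + (numf |> skoperator(ex)) |> skoperator(lr)
    perf = crossvalidate(pipe, X, Y, "accuracy_score")
    push!(results, (extractor = ex, learner = lr, mean = perf.mean))
end
sort!(results, by = r -> r.mean, rev = true)  # best-performing structure first
```

Even this small grid evaluates nine pipeline structures; adding scalers, ensembles, and hyper-parameters grows the search space combinatorially.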
## Package Features
- Symbolic pipeline API for easy expression and high-level description of complex pipeline structures and processing workflows
- Common API wrappers for ML libraries including Scikit-learn, DecisionTree, etc.
- Easily extensible architecture by overloading just two main interfaces: `fit!` and `transform!`
- Meta-ensembles that allow composition of ensembles of ensembles (recursively, if needed) for robust prediction routines
- Categorical and numerical feature selectors for specialized preprocessing routines based on types
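The last two bullets combine naturally: the type-based selectors route columns to the right preprocessing, and the concatenated result feeds a meta-ensemble. A minimal sketch (assuming `X` and `Y` are loaded as in the Sample Usage section):

```julia
using AutoMLPipeline

catf = CatFeatureSelector()   # picks the categorical columns
numf = NumFeatureSelector()   # picks the numerical columns
ohe  = OneHotEncoder()

# categorical → one-hot encoding, numerical → standard scaling,
# concatenated and fed to the default voting meta-ensemble
pipe = (catf |> ohe) + (numf |> skoperator("StandardScaler")) |> VoteEnsemble()
crossvalidate(pipe, X, Y, "accuracy_score")
```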
## Installation
AutoMLPipeline is in the official Julia package registry. The latest release can be installed using Julia's package management, which is triggered by pressing `]` at the julia prompt:
```julia
julia> ]
pkg> update
pkg> add AutoMLPipeline
```
## Sample Usage
The steps below outline a typical way to preprocess and model any dataset.
#### 1. Load Data, Extract Input (X) and Target (Y)
```julia
# Make sure that the input features form a DataFrame
# and the target output is a 1-D vector.
using AutoMLPipeline

profbdata = getprofb()
X = profbdata[:, 2:end]
Y = profbdata[:, 1] |> Vector;
head(x) = first(x, 5)
head(profbdata)
```
```
5×7 DataFrame. Omitted printing of 1 columns
│ Row │ Home.Away │ Favorite_Points │ Underdog_Points │ Pointspread │ Favorite_Name │ Underdog_name │
│     │ String    │ Int64           │ Int64           │ Float64     │ String        │ String        │
├─────┼───────────┼─────────────────┼─────────────────┼─────────────┼───────────────┼───────────────┤
│ 1   │ away      │ 27              │ 24              │ 4.0         │ BUF           │ MIA           │
│ 2   │ at_home   │ 17              │ 14              │ 3.0         │ CHI           │ CIN           │
│ 3   │ away      │ 51              │ 0               │ 2.5         │ CLE           │ PIT           │
│ 4   │ at_home   │ 28              │ 0               │ 5.5         │ NO            │ DAL           │
│ 5   │ at_home   │ 38              │ 7               │ 5.5         │ MIN           │ HOU           │
```
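`getprofb()` ships with the package; to use your own data, the same shape is expected: a `DataFrame` of input features and a 1-D `Vector` target. A typical loading sketch (assuming the CSV.jl and DataFrames.jl packages are installed, and a hypothetical file `mydata.csv` whose first column is the target):

```julia
using CSV, DataFrames

df = CSV.File("mydata.csv") |> DataFrame
X  = df[:, 2:end]        # input features as a DataFrame
Y  = df[:, 1] |> Vector  # target as a 1-D Vector
```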
#### 2. Load Filters, Transformers, and Learners
```julia
using AutoMLPipeline

#### Decomposition
pca = skoperator("PCA")
fa  = skoperator("FactorAnalysis")
ica = skoperator("FastICA")

#### Scaler
rb   = skoperator("RobustScaler")
pt   = skoperator("PowerTransformer")
norm = skoperator("Normalizer")
mx   = skoperator("MinMaxScaler")
std  = skoperator("StandardScaler")

#### Categorical preprocessing
ohe = OneHotEncoder()

#### Column selector
catf = CatFeatureSelector()
numf = NumFeatureSelector()
disc = CatNumDiscriminator()

#### Learners
rf   = skoperator("RandomForestClassifier")
gb   = skoperator("GradientBoostingClassifier")
lsvc = skoperator("LinearSVC")
svc  = skoperator("SVC")
mlp  = skoperator("MLPClassifier")
ada  = skoperator("AdaBoostClassifier")
sgd  = skoperator("SGDClassifier")
skrf_reg = skoperator("RandomForestRegressor")
skgb_reg = skoperator("GradientBoostingRegressor")
jrf   = RandomForest()
tree  = PrunedTree()
vote  = VoteEnsemble()
stack = StackEnsemble()
best  = BestLearner()
```
Note: You can get a listing of available preprocessors and learners by invoking the function `skoperator()`.
#### 3. Filter categories and hot-encode them
```julia
pohe = catf |> ohe
tr = fit_transform!(pohe, X, Y)
head(tr)
```
```
5×56 DataFrame. Omitted printing of 47 columns
│ Row │ x1      │ x2      │ x3      │ x4      │ x5      │ x6      │ x7      │ x8      │ x9      │
│     │ Float64 │ Float64 │ Float64 │ Float64 │ Float64 │ Float64 │ Float64 │ Float64 │ Float64 │
├─────┼─────────┼─────────┼─────────┼─────────┼─────────┼─────────┼─────────┼─────────┼─────────┤
│ 1   │ 1.0     │ 0.0     │ 0.0     │ 0.0     │ 0.0     │ 0.0     │ 0.0     │ 0.0     │ 0.0     │
│ 2   │ 0.0     │ 1.0     │ 0.0     │ 0.0     │ 0.0     │ 0.0     │ 0.0     │ 0.0     │ 0.0     │
│ 3   │ 0.0     │ 0.0     │ 1.0     │ 0.0     │ 0.0     │ 0.0     │ 0.0     │ 0.0     │ 0.0     │
│ 4   │ 0.0     │ 0.0     │ 0.0     │ 1.0     │ 0.0     │ 0.0     │ 0.0     │ 0.0     │ 0.0     │
│ 5   │ 0.0     │ 0.0     │ 0.0     │ 0.0     │ 1.0     │ 0.0     │ 0.0     │ 0.0     │ 0.0     │
```
#### 4. Numerical Feature Extraction Example
4.1 Filter numeric features, compute ica and pca features, and combine both features
```julia
pdec = (numf |> pca) + (numf |> ica)
tr = fit_transform!(pdec, X, Y)
head(tr)
```
```
5×8 DataFrame
│ Row │ x1       │ x2       │ x3       │ x4       │ x1_1       │ x2_1       │ x3_1       │ x4_1       │
│     │ Float64  │ Float64  │ Float64  │ Float64  │ Float64    │ Float64    │ Float64    │ Float64    │
├─────┼──────────┼──────────┼──────────┼──────────┼────────────┼────────────┼────────────┼────────────┤
│ 1   │ 2.47477  │ 7.87074  │ -1.10495 │ 0.902431 │ 0.0168432  │ 0.00319873 │ -0.0467633 │ 0.026742   │
│ 2   │ -5.47113 │ -3.82946 │ -2.08342 │ 1.00524  │ -0.0327947 │ -0.0217808 │ -0.0451314 │ 0.00702006 │
│ 3   │ 30.4068  │ -10.8073 │ -6.12339 │ 0.883938 │ -0.0734292 │ 0.115776   │ -0.0425357 │ 0.0497831  │
│ 4   │ 8.18372  │ -15.507  │ -1.43203 │ 1.08255  │ -0.0656664 │ 0.0368666  │ -0.0457154 │ -0.0192752 │
│ 5   │ 16.6176  │ -6.68636 │ -1.66597 │ 0.978243 │ -0.0338749 │ 0.0643065  │ -0.0461703 │ 0.00671696 │
```
4.2 Filter numeric features, apply robust and power-transform scaling, perform ica and pca, respectively, and combine both
```julia
ppt = (numf |> rb |> ica) + (numf |> pt |> pca)
tr = fit_transform!(ppt, X, Y)
head(tr)
```
```
5×8 DataFrame
│ Row │ x1          │ x2          │ x3         │ x4        │ x1_1      │ x2_1     │ x3_1       │ x4_1      │
│     │ Float64     │ Float64     │ Float64    │ Float64   │ Float64   │ Float64  │ Float64    │ Float64   │
├─────┼─────────────┼─────────────┼────────────┼───────────┼───────────┼──────────┼────────────┼───────────┤
│ 1   │ -0.00308891 │ -0.0269009  │ -0.0166298 │ 0.0467559 │ -0.64552  │ 1.40289  │ -0.0284468 │ 0.111773  │
│ 2   │ 0.0217799   │ -0.00699717 │ 0.0329868  │ 0.0449952 │ -0.832404 │ 0.475629 │ -1.14881   │ -0.01702  │
│ 3   │ -0.115577   │ -0.0503802  │ 0.0736173  │ 0.0420466 │ 1.54491   │ 1.65258  │ -1.35967   │ -2.57866  │
│ 4   │ -0.0370057  │ 0.0190459   │ 0.065814   │ 0.0454864 │ 1.32065   │ 0.563565 │ -2.05839   │ -0.74898  │
│ 5   │ -0.0643088  │ -0.00711682 │ 0.0340452  │ 0.0459816 │ 1.1223    │ 1.45555  │ -0.88864   │ -0.776195 │
```
#### 5. A Pipeline for the Voting Ensemble Classification
```julia
# take all categorical columns and hot-bit encode each,
# concatenate them to the numerical features,
# and feed them to the voting ensemble
using AutoMLPipeline.Utils

pvote = (catf |> ohe) + (numf) |> vote
pred = fit_transform!(pvote, X, Y)
sc = score(:accuracy, pred, Y)
println(sc)
crossvalidate(pvote, X, Y, "accuracy_score")
```
```
fold: 1, 0.5373134328358209
fold: 2, 0.7014925373134329
fold: 3, 0.5294117647058824
fold: 4, 0.6716417910447762
fold: 5, 0.6716417910447762
fold: 6, 0.6119402985074627
fold: 7, 0.5074626865671642
fold: 8, 0.6323529411764706
fold: 9, 0.6268656716417911
fold: 10, 0.5671641791044776
errors: 0
(mean = 0.6057287093942055, std = 0.06724940684190235, folds = 10, errors = 0)
```
Note: `crossvalidate()` supports the following sklearn performance metrics for classification: `accuracy_score`, `balanced_accuracy_score`, `cohen_kappa_score`, `jaccard_score`, `matthews_corrcoef`, `hamming_loss`, `zero_one_loss`, `f1_score`, `precision_score`, …
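To compare several of these metrics on one pipeline, `crossvalidate` can simply be called in a loop. A sketch, reusing `pvote`, `X`, and `Y` from the example above (the metrics chosen here work directly with string class labels):

```julia
# evaluate the same pipeline under different scoring metrics
for metric in ["accuracy_score", "balanced_accuracy_score", "cohen_kappa_score"]
    res = crossvalidate(pvote, X, Y, metric)
    println(metric, ": ", res.mean, " ± ", res.std)
end
```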