When applying cross-validation to time series, care is needed to prevent data leakage and to obtain reliable performance estimates. This article introduces Monte Carlo cross-validation, an alternative to the popular TimeSeriesSplit method.

Cross-Validation for Time Series
TimeSeriesSplit is usually the method of choice for cross-validating time series data. Figure 1 illustrates how it operates: the available time series is split into several folds of equal size. Each fold is first used to test a model and is then added to the training data for retraining, except for the first fold, which is used only for training.

The main benefits of cross-validation with TimeSeriesSplit are:
- It preserves the order of observations. This matters in ordered datasets such as time series.
- It generates many splits. Several splits yield a more robust evaluation, which is especially important when the dataset is small.
The main drawback of TimeSeriesSplit is that the training sample size varies across folds. What does this mean?
Suppose the method is applied with the 5 folds shown in Figure 1. In the first iteration, only 20% of the available observations are used for training, while in the last iteration that figure is 80%. The initial iterations may therefore not be representative of the full time series, which can bias the performance estimates. The sketch below makes this concrete.
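As a quick illustration, here is a minimal sketch (assuming a hypothetical series of 100 observations) that prints the fold sizes produced by scikit-learn's TimeSeriesSplit; the training set grows with every fold while the test size stays fixed:

import numpy as np
from sklearn.model_selection import TimeSeriesSplit

X = np.arange(100).reshape(-1, 1)  # hypothetical series of 100 observations

tscv = TimeSeriesSplit(n_splits=5)
for i, (train_idx, test_idx) in enumerate(tscv.split(X)):
    # training sizes grow: 20, 36, 52, 68, 84; the test size stays at 16
    print(f'Fold {i}: train={len(train_idx)}, test={len(test_idx)}')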
So how can this issue be addressed?
Monte Carlo Cross-Validation
Monte Carlo cross-validation (MonteCarloCV) is a method that can be applied to time series. The idea is to sample a fixed-size window of data at different random starting points. Here is a visual description of this method:

Like TimeSeriesSplit, MonteCarloCV preserves the temporal order of observations, and it also repeats the estimation process several times.
MonteCarloCV differs from TimeSeriesSplit in two main ways:
- Training and validation sample sizes: with TimeSeriesSplit, the training set grows at each iteration. In MonteCarloCV, the size of the training set is fixed across iterations, which prevents training sets that are not representative of the whole data;
- Random folds: in MonteCarloCV, the validation origin is chosen at random. This origin marks the end of the training set and the start of validation. With TimeSeriesSplit, this point is deterministic: it is defined in advance by the iteration number. A rough sketch of this idea follows below.
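To make the random-origin idea concrete, here is a small, self-contained sketch (with hypothetical sizes: a series of length 100, a training window of 60 and a validation window of 10); each repetition draws a random origin and slices fixed-size windows around it:

import numpy as np

n, train_n, test_n = 100, 60, 10  # hypothetical window sizes
rng = np.random.default_rng(0)

# an origin can fall anywhere that leaves room for both windows
origins = rng.integers(train_n, n - test_n, size=5)
for origin in origins:
    train_idx = np.arange(origin - train_n, origin)  # fixed-size training window
    test_idx = np.arange(origin, origin + test_n)    # validation starts at the origin
    print(origin, (train_idx[0], train_idx[-1]), (test_idx[0], test_idx[-1]))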
MonteCarloCV was first used by Picard and Cook; see the references for details.
I studied MonteCarloCV in detail, including a comparison with other methods such as TimeSeriesSplit. MonteCarloCV produced better estimates, so I have been using it ever since. You can check the full study in reference [2].
Unfortunately, scikit-learn does not provide an implementation of MonteCarloCV, so we implement it ourselves:
from typing import List, Generator

import numpy as np
from sklearn.model_selection._split import _BaseKFold
from sklearn.utils.validation import indexable, _num_samples


class MonteCarloCV(_BaseKFold):

    def __init__(self,
                 n_splits: int,
                 train_size: float,
                 test_size: float,
                 gap: int = 0):
        """
        Monte Carlo Cross-Validation

        Holdout applied in multiple testing periods.
        The testing origin (time-step where testing begins) is randomly
        chosen according to a Monte Carlo simulation.

        :param n_splits: (int) Number of Monte Carlo repetitions in the procedure
        :param train_size: (float) Train size, as a ratio of the total length of the series
        :param test_size: (float) Test size, as a ratio of the total length of the series
        :param gap: (int) Number of samples to exclude from the end of each train set before the test set
        """
        self.n_splits = n_splits
        self.n_samples = -1
        self.gap = gap
        self.train_size = train_size
        self.test_size = test_size
        self.train_n_samples = 0
        self.test_n_samples = 0
        self.mc_origins = []

    def split(self, X, y=None, groups=None) -> Generator:
        """Generate indices to split data into training and test set.

        Parameters
        ----------
        X : array-like of shape (n_samples, n_features)
            Training data, where `n_samples` is the number of samples
            and `n_features` is the number of features.
        y : array-like of shape (n_samples,)
            Always ignored, exists for compatibility.
        groups : array-like of shape (n_samples,)
            Always ignored, exists for compatibility.

        Yields
        ------
        train : ndarray
            The training set indices for that split.
        test : ndarray
            The testing set indices for that split.
        """
        X, y, groups = indexable(X, y, groups)
        self.n_samples = _num_samples(X)

        self.train_n_samples = int(self.n_samples * self.train_size) - 1
        self.test_n_samples = int(self.n_samples * self.test_size) - 1

        # Make sure we have enough samples for the given split parameters
        if self.n_splits > self.n_samples:
            raise ValueError(
                f'Cannot have number of folds={self.n_splits} greater'
                f' than the number of samples={self.n_samples}.'
            )
        if self.train_n_samples - self.gap <= 0:
            raise ValueError(
                f'The gap={self.gap} is too big for number of training samples'
                f'={self.train_n_samples} with testing samples={self.test_n_samples}.'
            )

        indices = np.arange(self.n_samples)

        # Candidate origins: any point that leaves room for a full training
        # window before it and a full test window after it
        selection_range = np.arange(self.train_n_samples + 1,
                                    self.n_samples - self.test_n_samples - 1)

        self.mc_origins = np.random.choice(a=selection_range,
                                           size=self.n_splits,
                                           replace=True)

        for origin in self.mc_origins:
            # Exclude exactly `gap` observations from the end of the training window
            train_end = origin - self.gap
            train_start = origin - self.train_n_samples - 1
            test_end = origin + self.test_n_samples

            yield (
                indices[train_start:train_end],
                indices[origin:test_end],
            )

    def get_origins(self) -> List[int]:
        return list(self.mc_origins)
MonteCarloCV accepts four parameters:
- n_splits: the number of folds (Monte Carlo repetitions). A typical value is 10;
- train_size: the size of the training set in each iteration, as a ratio of the total length of the time series;
- test_size: like train_size, but for the validation set;
- gap: the number of observations separating the training and validation sets. As with TimeSeriesSplit, this parameter defaults to 0 (no gap).
The actual training and validation sizes in each iteration depend on the input data. I found that a 0.6/0.1 partition works well: in each iteration, 60% of the data is used for training and 10% of the observations for validation.
A Practical Example
Here is an example configuration:
from sklearn.datasets import make_regression

from src.mccv import MonteCarloCV  # the class implemented above, saved to src/mccv.py

X, y = make_regression(n_samples=120)

mccv = MonteCarloCV(n_splits=5,
                    train_size=0.6,
                    test_size=0.1,
                    gap=0)

for train_index, test_index in mccv.split(X):
    print("TRAIN:", train_index, "TEST:", test_index)
    X_train, X_test = X[train_index], X[test_index]
    y_train, y_test = y[train_index], y[test_index]
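As a quick sanity check (a hypothetical snippet reusing the MonteCarloCV class above), the training window should have the same length in every iteration, and a positive gap should leave exactly that many unused observations between the two sets:

import numpy as np

X_demo = np.arange(200).reshape(-1, 1)  # hypothetical series of 200 observations

cv = MonteCarloCV(n_splits=3, train_size=0.6, test_size=0.1, gap=5)
for train_idx, test_idx in cv.split(X_demo):
    # the training length is constant, and 5 observations separate train and test
    print(len(train_idx), test_idx[0] - train_idx[-1] - 1)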
The implementation is also compatible with scikit-learn. Here is how to combine it with GridSearchCV:
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import GridSearchCV
model = RandomForestRegressor()
param_search = {'n_estimators': [10, 100]}
gsearch = GridSearchCV(estimator=model, cv=mccv, param_grid=param_search)
gsearch.fit(X, y)
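The same object also works with other scikit-learn utilities that accept a cv argument; for instance, a minimal sketch with cross_val_score:

from sklearn.model_selection import cross_val_score

scores = cross_val_score(model, X, y, scoring='neg_mean_absolute_error', cv=mccv)
print(scores.mean())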
I hope you find MonteCarloCV useful!