“reg:linear” --linear regression
“reg:logistic” --logistic regression
“binary:logistic” --logistic regression for binary classification; outputs the predicted probability (not the class)
“binary:logitraw” --outputs the raw score before the logistic transformation
“count:poisson” --Poisson regression for count data; outputs the mean of the Poisson distribution
max_delta_step is set to 0.7 by default in Poisson regression (used to safeguard optimization)
“multi:softmax” --set XGBoost to do multiclass classification; you must also set num_class (the number of classes)
“multi:softprob” --outputs an ndata * nclass probability matrix
“rank:pairwise” --set XGBoost to do ranking by minimizing the pairwise loss
“reg:gamma” --gamma regression with log-link; the output is the mean of the gamma distribution. It might be useful, e.g., for modeling insurance claims severity, or for any outcome that might be gamma-distributed
“reg:tweedie” --Tweedie regression with log-link. It might be useful, e.g., for modeling total loss in insurance, or for any outcome that might be Tweedie-distributed.
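As a sketch of the difference between multi:softmax and multi:softprob (plain NumPy only, not a call into XGBoost; the margin values are made up for illustration): softprob returns the full ndata * nclass probability matrix, while softmax returns only the most likely class per row.

```python
import numpy as np

# Raw margin scores for 3 samples and 4 classes (made-up numbers).
margins = np.array([[2.0, 1.0, 0.1, -1.0],
                    [0.5, 3.0, 0.2, 0.0],
                    [-1.0, 0.0, 1.5, 1.0]])

# multi:softprob — a row-wise softmax gives an ndata * nclass probability matrix.
exp = np.exp(margins - margins.max(axis=1, keepdims=True))
softprob = exp / exp.sum(axis=1, keepdims=True)

# multi:softmax — only the argmax class of each row is returned.
softmax_class = softprob.argmax(axis=1)

print(softprob.shape)   # (3, 4)
print(softmax_class)    # [0 1 2]
```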
base_score [ default=0.5 ]
The initial prediction score of all instances (global bias).
For a sufficient number of iterations, changing this value will not have much effect.
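For logistic objectives, base_score enters the model as a constant initial margin via the log-odds transform, so the default 0.5 corresponds to a zero initial margin (a plain-NumPy sketch of the conversion, not XGBoost internals):

```python
import numpy as np

# base_score is the global bias: every instance starts from this prediction.
# For logistic objectives it becomes a constant margin log(p / (1 - p)).
def base_score_to_margin(p):
    return np.log(p / (1.0 - p))

print(base_score_to_margin(0.5))  # 0.0 — the default adds no bias
print(base_score_to_margin(0.9))  # a positive initial margin
```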
eval_metric [ default is chosen automatically according to the objective function ]
The available options are:
a. “rmse”: root mean squared error
b. “mae”: mean absolute error
c. “logloss”: negative log-likelihood
d. “error”: binary classification error rate
e. “error@t”: binary classification error rate, using t as the threshold instead of 0.5
f. “merror”: multiclass classification error rate, computed as #(wrong cases)/#(all cases)
g. “mlogloss”: multiclass log loss
h. “auc”: area under the ROC curve, for ranking evaluation
i. “ndcg”: Normalized Discounted Cumulative Gain
j. “map”: Mean Average Precision
k. “ndcg@n”, “map@n”: n can be assigned as an integer to cut off the top positions in the lists for evaluation
l. “ndcg-”, “map-”, “ndcg@n-”, “map@n-”: in XGBoost, NDCG and MAP evaluate the score of a list without any positive samples as 1. By adding “-” to the metric name, XGBoost will evaluate these scores as 0, to be consistent under some conditions.
m. “poisson-nloglik”: negative log-likelihood for Poisson regression
n. “gamma-nloglik”: negative log-likelihood for gamma regression
o. “gamma-deviance”: residual deviance for gamma regression
p. “tweedie-nloglik”: negative log-likelihood for Tweedie regression (at a specified value of the tweedie_variance_power parameter)
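A few of these metrics are simple enough to state directly. As a sketch (plain NumPy, written from the formulas above rather than XGBoost's internal code, with made-up labels and probabilities):

```python
import numpy as np

y = np.array([1, 0, 1, 1, 0], dtype=float)  # true labels
p = np.array([0.9, 0.2, 0.6, 0.4, 0.1])     # predicted probabilities

# "rmse": root mean squared error
rmse = np.sqrt(np.mean((p - y) ** 2))

# "logloss": negative log-likelihood
logloss = -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))

# "error@t": error rate with threshold t instead of 0.5
def error_at(t):
    return np.mean((p > t).astype(float) != y)

print(rmse, logloss, error_at(0.5), error_at(0.3))
```

Note how error@0.3 differs from error@0.5 here: the sample with p = 0.4 flips from a wrong prediction to a correct one when the threshold is lowered.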
seed [ default=0 ]
random number seed.
Raw usage example
#!/usr/bin/python
import numpy as np
import xgboost as xgb
###
# advanced: customized loss function
#
print('start running example of a customized objective function')
dtrain = xgb.DMatrix('../data/agaricus.txt.train')
dtest = xgb.DMatrix('../data/agaricus.txt.test')
# note: for customized objective function, we leave objective as default
# note: what we are getting is margin value in prediction
# you must know what you are doing
param = {'max_depth': 2, 'eta': 1, 'silent': 1}
watchlist = [(dtest, 'eval'), (dtrain, 'train')]
num_round = 2
# User-defined objective function: given the predictions, return the gradient
# and the second-order gradient (hessian). This is the logistic loss.
def logregobj(preds, dtrain):
    labels = dtrain.get_label()
    preds = 1.0 / (1.0 + np.exp(-preds))  # sigmoid of the margin
    grad = preds - labels
    hess = preds * (1.0 - preds)
    return grad, hess
# User-defined evaluation function: returns a pair (metric_name, result).
# NOTE: with a customized loss function, the prediction values are margins,
# which may make built-in evaluation metrics malfunction. For example, with
# logistic loss the prediction is the score before the logistic transformation,
# while the built-in error metric assumes input after that transformation.
# Keep this in mind; you may need a customized evaluation function as well.
def evalerror(preds, dtrain):
    labels = dtrain.get_label()
    # The metric name must not contain a colon (:) or a space.
    # Since preds are margins (before the logistic transformation), cut off at 0.
    return 'my-error', float(sum(labels != (preds > 0.0))) / len(labels)
# Training with a customized objective; step-by-step training is also possible
# (see the implementation of train in xgboost's Python source).
bst = xgb.train(param, dtrain, num_round, watchlist, obj=logregobj, feval=evalerror)
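The grad and hess returned by logregobj can be sanity-checked against finite differences of the logistic loss. The sketch below is standalone NumPy, independent of XGBoost, with an arbitrary margin and label chosen for illustration:

```python
import numpy as np

def logloss_margin(margin, label):
    # Logistic loss as a function of the raw margin.
    p = 1.0 / (1.0 + np.exp(-margin))
    return -(label * np.log(p) + (1 - label) * np.log(1 - p))

margin, label, eps = 0.7, 1.0, 1e-4

# Analytic gradient and hessian, as computed in logregobj above.
p = 1.0 / (1.0 + np.exp(-margin))
grad, hess = p - label, p * (1 - p)

# Finite-difference approximations of the first and second derivatives.
num_grad = (logloss_margin(margin + eps, label)
            - logloss_margin(margin - eps, label)) / (2 * eps)
num_hess = (logloss_margin(margin + eps, label)
            - 2 * logloss_margin(margin, label)
            + logloss_margin(margin - eps, label)) / eps ** 2

assert abs(grad - num_grad) < 1e-6
assert abs(hess - num_hess) < 1e-5
```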