I've submitted this on Stack Overflow, but I wanted to hear your thoughts as well because I'm not quite sure where I'm going wrong here. I'm new and a little lost, so hopefully this isn't against the sub rules (very sorry if it is!). Any help would be appreciated! Anyway, here it goes:
So I'm relatively new to the ML/AI game in Python, and I'm currently working on implementing a custom objective function for XGBoost.
My calculus is pretty rusty, so I've created a custom objective function with a gradient and Hessian that models the mean squared error function that is run as the default objective in XGBRegressor, just to make sure I'm doing all of this correctly. The problem is, the results of the model (the error outputs) are close but not identical for the most part (and way off for some points). I don't know what I'm doing wrong, or how that could even be possible if I'm computing things correctly. If you all could look at this and maybe provide insight into where I'm wrong, that would be awesome!
The original code without a custom function is:
import xgboost as xgb

reg = xgb.XGBRegressor(n_estimators=150, max_depth=2, objective="reg:squarederror", n_jobs=-1)
reg.fit(X_train, y_train)
y_pred_test = reg.predict(X_test)
and my custom objective function for MSE is as follows:
def gradient_se(y_true, y_pred):
    # Compute the gradient for squared error.
    return (-2 * y_true) + (2 * y_pred)

def hessian_se(y_true, y_pred):
    # Compute the Hessian for squared error (a constant 2).
    return 0 * (y_true + y_pred) + 2

def custom_se(y_true, y_pred):
    # Squared error objective. A simplified version of MSE used as
    # the objective function.
    grad = gradient_se(y_true, y_pred)
    hess = hessian_se(y_true, y_pred)
    return grad, hess
The documentation reference is here.
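In case it helps anyone spot the issue: here's the sanity check I can run on the math itself, independent of XGBoost, comparing the analytic gradient against a central finite difference of the squared-error loss (the gradient part checks out, which is why I'm confused about the model output):

```python
import numpy as np

def loss_se(y_true, y_pred):
    # Per-point squared-error loss.
    return (y_true - y_pred) ** 2

def gradient_se(y_true, y_pred):
    # Analytic derivative of loss_se with respect to y_pred.
    return -2 * y_true + 2 * y_pred

y_true = np.array([1.0, 2.0, 3.0])
y_pred = np.array([0.5, 2.5, 2.0])
eps = 1e-6

# Central finite difference: (L(p + eps) - L(p - eps)) / (2 * eps).
numeric = (loss_se(y_true, y_pred + eps) - loss_se(y_true, y_pred - eps)) / (2 * eps)
analytic = gradient_se(y_true, y_pred)
print(np.allclose(numeric, analytic, atol=1e-4))  # prints True
```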