GRU model with a ROC-AUC loss: can't use tflearn's loss in Keras

data-mining neural-networks keras recommender-systems loss-functions learning
2022-02-12 04:04:25

I am trying to use tflearn.objectives.roc_auc_score as the loss function for a GRU network in Keras, but I get the following error:

> ValueError: An operation has `None` for gradient. Please make sure
> that all of your ops have a gradient defined (i.e. are
> differentiable). Common ops without gradient: K.argmax, K.round,
> K.eval.

This is surprising, since the implementation is explicitly based on an approximation that should be differentiable.

For reference, here is the code from the tflearn GitHub repository:

import tensorflow as tf

def roc_auc_score(y_pred, y_true):
    """ ROC AUC Score.
    Approximates the Area Under Curve score, using approximation based on
    the Wilcoxon-Mann-Whitney U statistic.
    Yan, L., Dodier, R., Mozer, M. C., & Wolniewicz, R. (2003).
    Optimizing Classifier Performance via an Approximation to the Wilcoxon-Mann-Whitney Statistic.
    Measures overall performance for a full range of threshold levels.
    Arguments:
        y_pred: `Tensor`. Predicted values.
        y_true: `Tensor` . Targets (labels), a probability distribution.
    """
    with tf.name_scope("RocAucScore"):

        pos = tf.boolean_mask(y_pred, tf.cast(y_true, tf.bool))
        neg = tf.boolean_mask(y_pred, ~tf.cast(y_true, tf.bool))

        pos = tf.expand_dims(pos, 0)
        neg = tf.expand_dims(neg, 1)

        # original paper suggests performance is robust to exact parameter choice
        gamma = 0.2
        p     = 3

        difference = tf.zeros_like(pos * neg) + pos - neg - gamma

        masked = tf.boolean_mask(difference, difference < 0.0)

        return tf.reduce_sum(tf.pow(-masked, p))
2 Answers

This worked for me once I changed the function's signature to:

def roc_auc_score(y_true, y_pred)

instead of:

def roc_auc_score(y_pred, y_true)

I had defined it the wrong way around and it worked; when I "fixed" the order back, I got the same error as you. I don't know the logic of your code in detail, but if this works for you, go with it.
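The argument swap matters because Keras always calls a custom loss as loss(y_true, y_pred), in that order. A minimal sketch of the same tflearn surrogate with the Keras-compatible signature, assuming TensorFlow 2 in eager mode and a binary 0/1 label vector:

    import tensorflow as tf

    def roc_auc_score(y_true, y_pred):
        """Wilcoxon-Mann-Whitney AUC surrogate, with the argument
        order Keras uses when it invokes a loss: y_true first."""
        pos = tf.boolean_mask(y_pred, tf.cast(y_true, tf.bool))
        neg = tf.boolean_mask(y_pred, ~tf.cast(y_true, tf.bool))
        pos = tf.expand_dims(pos, 0)
        neg = tf.expand_dims(neg, 1)
        gamma, p = 0.2, 3  # values from the tflearn implementation
        # Pairwise differences pos_i - neg_j, penalised when a positive
        # does not beat a negative by at least the margin gamma.
        difference = tf.zeros_like(pos * neg) + pos - neg - gamma
        masked = tf.boolean_mask(difference, difference < 0.0)
        return tf.reduce_sum(tf.pow(-masked, p))

    # Loss is zero once every positive scores at least gamma above
    # every negative:
    loss = roc_auc_score(tf.constant([1.0, 0.0]), tf.constant([0.9, 0.1]))

You would then pass it directly, e.g. model.compile(optimizer="adam", loss=roc_auc_score).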

For me, at least, the problem was caused outside the function above: I was using tf.argmax to obtain y_pred, which is not differentiable (and was also incorrect). I replaced it with y_pred[:, 1], which solved the problem.
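The argmax-versus-slicing difference can be checked directly with a gradient tape; a minimal sketch, assuming two-class softmax outputs of shape (batch, 2) (the tensor here is illustrative):

    import tensorflow as tf

    # Hypothetical two-class softmax outputs, shape (batch, 2).
    probs = tf.Variable([[0.3, 0.7],
                         [0.8, 0.2]])

    with tf.GradientTape() as tape:
        # Differentiable: slicing keeps the graph connected to probs.
        y_pred = probs[:, 1]
        loss = tf.reduce_sum(y_pred)
    grad = tape.gradient(loss, probs)  # a well-defined gradient tensor

    with tf.GradientTape() as tape2:
        # Not differentiable: argmax returns integer indices, so the
        # gradient back to probs is None.
        hard = tf.argmax(probs, axis=1)  # -> [1, 0]
    no_grad = tape2.gradient(tf.cast(hard, tf.float32), probs)

The slice selects the predicted probability of the positive class while preserving the gradient path, which is exactly what the pairwise loss above needs.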