In the `loss.go` file:

```go
type lossFnOptions struct {
	ClassWeights []float64
	Reduction    int64 // 0: "None", 1: "mean", 2: "sum"
	IgnoreIndex  int64
	PosWeight    int64 // index of the weight attributed to positive class. Used in BCELoss
}
```
In the `BCELoss` function, `options.PosWeight` is used as follows:

```go
posWeight = ts.MustOfSlice([]int64{options.PosWeight})
```

The `posWeight` tensor is then passed to `MustBinaryCrossEntropyWithLogits`, which ultimately calls `AtgBinaryCrossEntropyWithLogits`. That function appears to place no restriction on the tensor's dtype, and PyTorch itself accepts a float tensor for `pos_weight` in `BCEWithLogitsLoss`.
Why do you force `PosWeight` to be an integer?
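For illustration, here is a sketch of what the option could look like with `PosWeight` typed as `float64`, so fractional weights (e.g. `0.3`) survive into the tensor. This is a hypothetical variant, not the library's current definition; the gotch tensor call is shown only as a comment:

```go
package main

import "fmt"

// Hypothetical variant of lossFnOptions with PosWeight as float64,
// matching the float tensor PyTorch accepts for pos_weight.
type lossFnOptions struct {
	ClassWeights []float64
	Reduction    int64 // 0: "None", 1: "mean", 2: "sum"
	IgnoreIndex  int64
	PosWeight    float64
}

func main() {
	opts := lossFnOptions{PosWeight: 0.3}
	// The tensor could then be built without truncation, e.g.:
	// posWeight = ts.MustOfSlice([]float64{opts.PosWeight})
	fmt.Println(opts.PosWeight)
}
```

With the current `int64` field, a weight like `0.3` cannot be expressed at all, since it is truncated before the tensor is ever constructed.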