
Softmax and Cross-Entropy Loss


Softmax:

Softmax turns a vector of raw scores into a probability distribution: each score is exponentiated and divided by the sum of all the exponentials, so every output is positive and the outputs sum to 1.

import torch

# softmax(x)_i = exp(x_i) / ∑_j exp(x_j)
def softmax(x):
    total = torch.sum(torch.exp(x))           # denominator: sum of exponentials
    return [torch.exp(i) / total for i in x]  # one probability per element

x = torch.tensor([1, 2, 3], dtype=torch.float32)
out = softmax(x)
print(out)

[tensor(0.0900), tensor(0.2447), tensor(0.6652)]
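
The hand-rolled version above exponentiates the raw scores directly, which overflows for large inputs (torch.exp(torch.tensor(1000.)) is already inf). A common remedy, sketched below as my own addition rather than part of the original post, is to subtract the maximum score before exponentiating; the factor exp(-max) cancels in the ratio, so the result is unchanged and matches PyTorch's built-in torch.softmax.

import torch

# Numerically stable softmax: shifting by max(x) leaves the ratio
# exp(x_i) / ∑_j exp(x_j) unchanged but keeps exp() from overflowing.
def stable_softmax(x):
    shifted = x - torch.max(x)      # largest entry becomes 0
    exps = torch.exp(shifted)
    return exps / torch.sum(exps)

x = torch.tensor([1, 2, 3], dtype=torch.float32)
print(stable_softmax(x))        # tensor([0.0900, 0.2447, 0.6652])
print(torch.softmax(x, dim=0))  # built-in, same values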

Cross-entropy loss function:

Cross entropy measures how far a predicted distribution q(x) is from the true distribution p(x): H(p, q) = -∑ p(x)·log(q(x)). With integer class labels, p(x) is one-hot, so the loss reduces to -log(q(true class)), averaged over the batch.

import torch
from torch import nn

# two samples of raw scores (logits), three classes
model_out = torch.tensor([[1, 2, 3],
                          [4, 5, 6]], requires_grad=True, dtype=torch.float32)

# class labels: sample 0 belongs to class 0, sample 1 to class 2
y = torch.tensor([0, 2])

# built-in cross-entropy loss, averaged over the batch
ce_mean = nn.CrossEntropyLoss(reduction='mean')
loss = ce_mean(model_out, y)
print(loss)
loss.backward()

# reproduce the same value by hand
# step 1: softmax -> q(x)
softmax = torch.softmax(model_out, dim=1)
print("q(x):", softmax)
# step 2: -log(q(x))
log_model_out = torch.log(softmax)
log_model_out_softmax = -log_model_out
print("-log(q(x))", log_model_out_softmax)
# step 3: cross entropy -∑(p(x)·log(q(x))); p(x) is one-hot, so only the
# true-class entries survive: (2.4076 + 0.4076) / 2
CEL_Sum = log_model_out_softmax[0][0] + log_model_out_softmax[1][2]
CEL_Mean = CEL_Sum / 2
print("crossentropy_mean:", CEL_Mean)
CEL_Mean.backward()  # note: this gradient accumulates on top of loss.backward()

tensor(1.4076, grad_fn=<NllLossBackward0>)
q(x): tensor([[0.0900, 0.2447, 0.6652],
        [0.0900, 0.2447, 0.6652]], grad_fn=<SoftmaxBackward0>)
-log(q(x)) tensor([[2.4076, 1.4076, 0.4076],
        [2.4076, 1.4076, 0.4076]], grad_fn=<NegBackward0>)
crossentropy_mean: tensor(1.4076, grad_fn=<DivBackward0>)
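
The manual steps reproduce the built-in value exactly, which reflects how nn.CrossEntropyLoss is defined: it is nn.LogSoftmax followed by nn.NLLLoss. A minimal cross-check, reusing the same model_out and y (my sketch, not part of the original post):

import torch
from torch import nn

model_out = torch.tensor([[1, 2, 3],
                          [4, 5, 6]], dtype=torch.float32)
y = torch.tensor([0, 2])

# CrossEntropyLoss == LogSoftmax + NLLLoss
log_softmax = nn.LogSoftmax(dim=1)
nll = nn.NLLLoss(reduction='mean')
print(nll(log_softmax(model_out), y))       # tensor(1.4076)
print(nn.CrossEntropyLoss()(model_out, y))  # tensor(1.4076), same value

One caveat with the walkthrough above: loss.backward() and CEL_Mean.backward() both accumulate into model_out.grad. The gradient itself also has a simple closed form: for mean-reduced cross entropy it is (softmax(logits) - one_hot(y)) / batch_size with respect to the logits. The quick check below (again my sketch) verifies this for a single backward pass.

import torch
import torch.nn.functional as F

model_out = torch.tensor([[1, 2, 3],
                          [4, 5, 6]], requires_grad=True, dtype=torch.float32)
y = torch.tensor([0, 2])

F.cross_entropy(model_out, y).backward()  # mean reduction by default

# analytic gradient: (softmax - one_hot) / batch_size
expected = (torch.softmax(model_out, dim=1) - F.one_hot(y, num_classes=3)) / 2
print(torch.allclose(model_out.grad, expected))  # True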

