In PyTorch, ReLU (Rectified Linear Unit) is the usual recommendation for the activation function in an MNIST handwritten-digit classifier. ReLU is cheap to compute, which speeds up training, and because its gradient is 1 for all positive inputs it helps mitigate the vanishing-gradient problem.
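To make the definition concrete, here is a minimal sketch (the tensor values are arbitrary, chosen only for illustration) showing that ReLU computes max(0, x), and that the module form `nn.ReLU()` and the functional form `F.relu` give the same result:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

x = torch.tensor([-2.0, -0.5, 0.0, 1.5, 3.0])

# ReLU zeroes out negative inputs and passes positive inputs through unchanged.
print(torch.relu(x))   # tensor([0., 0., 0., 1.5, 3.])
print(nn.ReLU()(x))    # module form: same result
print(F.relu(x))       # functional form: same result
```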
Below is a simple, self-contained example showing how to use ReLU in a PyTorch CNN for MNIST classification:
```python
import torch
import torch.nn as nn
import torch.optim as optim
from torchvision import datasets, transforms
from torch.utils.data import DataLoader

# Load MNIST and normalize with the dataset's standard mean/std.
transform = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize((0.1307,), (0.3081,)),
])
train_loader = DataLoader(
    datasets.MNIST("data", train=True, download=True, transform=transform),
    batch_size=64, shuffle=True)
test_loader = DataLoader(
    datasets.MNIST("data", train=False, download=True, transform=transform),
    batch_size=1000)

# Define a simple convolutional neural network (CNN) model.
class MNISTClassifier(nn.Module):
    def __init__(self):
        super(MNISTClassifier, self).__init__()
        self.conv1 = nn.Conv2d(1, 32, kernel_size=3, stride=1, padding=1)
        self.relu1 = nn.ReLU()
        self.maxpool1 = nn.MaxPool2d(kernel_size=2, stride=2)
        self.conv2 = nn.Conv2d(32, 64, kernel_size=3, stride=1, padding=1)
        self.relu2 = nn.ReLU()
        self.maxpool2 = nn.MaxPool2d(kernel_size=2, stride=2)
        # Two 2x2 poolings shrink the 28x28 input to 14x14 and then 7x7,
        # so the flattened feature vector has 64 * 7 * 7 elements.
        self.fc1 = nn.Linear(64 * 7 * 7, 128)
        self.relu3 = nn.ReLU()
        self.fc2 = nn.Linear(128, 10)

    def forward(self, x):
        x = self.conv1(x)
        x = self.relu1(x)
        x = self.maxpool1(x)
        x = self.conv2(x)
        x = self.relu2(x)
        x = self.maxpool2(x)
        x = x.view(-1, 64 * 7 * 7)  # flatten for the fully connected layers
        x = self.fc1(x)
        x = self.relu3(x)
        x = self.fc2(x)  # raw logits; CrossEntropyLoss applies softmax internally
        return x

# Create the model, loss function, and optimizer.
model = MNISTClassifier()
criterion = nn.CrossEntropyLoss()
optimizer = optim.Adam(model.parameters(), lr=0.001)

# Train the model.
model.train()
for epoch in range(10):
    for data, target in train_loader:
        optimizer.zero_grad()
        output = model(data)
        loss = criterion(output, target)
        loss.backward()
        optimizer.step()
    print(f"Epoch {epoch+1}, Loss: {loss.item():.4f}")

# Evaluate the model.
model.eval()
with torch.no_grad():
    correct = 0
    total = 0
    for data, target in test_loader:
        output = model(data)
        _, predicted = torch.max(output.data, 1)
        total += target.size(0)
        correct += (predicted == target).sum().item()
print(f"Accuracy: {100 * correct / total:.2f}%")
```
In this example we define a simple CNN that uses ReLU activations throughout. Note that ReLU outputs 0 for negative inputs and is the identity for positive ones, which helps keep gradients from vanishing as they flow through the network. The model is trained with the cross-entropy loss (nn.CrossEntropyLoss()) and the Adam optimizer (optim.Adam()).
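One detail worth calling out: nn.CrossEntropyLoss expects raw logits rather than probabilities, because it combines LogSoftmax and NLLLoss internally. A small sketch (with arbitrary example values) confirming the equivalence, which is why the model's forward() returns the fc2 output directly without a softmax:

```python
import torch
import torch.nn as nn

logits = torch.tensor([[2.0, -1.0, 0.5], [0.1, 3.0, -2.0]])  # arbitrary example logits
targets = torch.tensor([0, 1])                               # class indices

ce = nn.CrossEntropyLoss()(logits, targets)

# Equivalent: LogSoftmax followed by the negative log-likelihood loss.
nll = nn.NLLLoss()(nn.LogSoftmax(dim=1)(logits), targets)

print(torch.allclose(ce, nll))  # True
```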