
12 Hands-On Examples of Building Neural Networks with PyTorch

Through 12 hands-on examples, this article walks through how to build many kinds of neural network models with PyTorch; each example comes with complete code and an explanation.

Building neural networks with PyTorch is one of the most popular topics in machine learning, and PyTorch itself is widely liked for its ease of use and flexibility. This article works through 12 hands-on examples, building neural networks from scratch and gradually covering PyTorch's core concepts and more advanced techniques.

Case 1: A Simple Linear Regression Model

Goal: Build a simple linear regression model with PyTorch.

Code example:

import torch
import torch.nn as nn
import torch.optim as optim

# Define the data
X = torch.tensor([[1.0], [2.0], [3.0], [4.0]])
y = torch.tensor([[2.0], [4.0], [6.0], [8.0]])

# Define the model
class LinearRegressionModel(nn.Module):
    def __init__(self):
        super(LinearRegressionModel, self).__init__()
        self.linear = nn.Linear(1, 1)

    def forward(self, x):
        return self.linear(x)

model = LinearRegressionModel()

# Define the loss function and optimizer
criterion = nn.MSELoss()
optimizer = optim.SGD(model.parameters(), lr=0.01)

# Train the model
num_epochs = 1000
for epoch in range(num_epochs):
    model.train()
    optimizer.zero_grad()
    outputs = model(X)
    loss = criterion(outputs, y)
    loss.backward()
    optimizer.step()

# Test the model
model.eval()
with torch.no_grad():
    predicted = model(X)
    print(predicted)

Explanation:

  • nn.Linear(1, 1): defines a linear layer with one input feature and one output feature.
  • nn.MSELoss(): mean squared error loss.
  • optim.SGD(model.parameters(), lr=0.01): stochastic gradient descent optimizer with a learning rate of 0.01.
  • model.train() and model.eval(): switch the model between training mode and evaluation mode.
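
As a quick sanity check, you can inspect the learned parameters after training; since the training data follows y = 2x, the weight should converge toward 2 and the bias toward 0. The snippet below is a small illustrative addition that reuses the model trained above:

# Inspect the learned parameters and predict for a new input (assumes the loop above has run)
with torch.no_grad():
    print(f'weight ≈ {model.linear.weight.item():.3f}, bias ≈ {model.linear.bias.item():.3f}')
    print(model(torch.tensor([[5.0]])))  # should be close to 10.0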

Case 2: Logistic Regression Model

Goal: Build a logistic regression model with PyTorch.

Code example:

import torch
import torch.nn as nn
import torch.optim as optim

# Define the data
X = torch.tensor([[1.0, 2.0], [2.0, 3.0], [3.0, 1.0], [4.0, 3.0]])
y = torch.tensor([0, 0, 1, 1])

# Define the model
class LogisticRegressionModel(nn.Module):
    def __init__(self):
        super(LogisticRegressionModel, self).__init__()
        self.linear = nn.Linear(2, 1)
        self.sigmoid = nn.Sigmoid()

    def forward(self, x):
        out = self.linear(x)
        out = self.sigmoid(out)
        return out

model = LogisticRegressionModel()

# Define the loss function and optimizer
criterion = nn.BCELoss()
optimizer = optim.SGD(model.parameters(), lr=0.01)

# Train the model
num_epochs = 1000
for epoch in range(num_epochs):
    model.train()
    optimizer.zero_grad()
    outputs = model(X).squeeze()
    loss = criterion(outputs, y.float())
    loss.backward()
    optimizer.step()

# Test the model
model.eval()
with torch.no_grad():
    predicted = (model(X) > 0.5).float()
    print(predicted)

Explanation:

  • nn.Sigmoid(): sigmoid activation that maps the output to a probability between 0 and 1.
  • nn.BCELoss(): binary cross-entropy loss, suited to binary classification.
  • outputs.squeeze(): removes the singleton dimension from the output tensor so its shape matches the targets.
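
A common, more numerically stable variant is to drop the explicit Sigmoid and use nn.BCEWithLogitsLoss, which fuses the sigmoid and the binary cross-entropy into one operation. The sketch below is an alternative to the model above, not part of the original example, and reuses the same X and y:

# Alternative: output raw logits and let the loss apply the sigmoid internally
logit_model = nn.Linear(2, 1)
criterion = nn.BCEWithLogitsLoss()          # sigmoid + BCE in one numerically stable step
optimizer = optim.SGD(logit_model.parameters(), lr=0.01)

for epoch in range(1000):
    optimizer.zero_grad()
    loss = criterion(logit_model(X).squeeze(), y.float())
    loss.backward()
    optimizer.step()

with torch.no_grad():
    predicted = (torch.sigmoid(logit_model(X)) > 0.5).float()
    print(predicted)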

Case 3: Multilayer Perceptron (MLP)

Goal: Build a multilayer perceptron (MLP) model with PyTorch.

Code example:

import torch
import torch.nn as nn
import torch.optim as optim

# Define the data
X = torch.randn(100, 10)
y = torch.randint(0, 2, (100,))

# Define the model
class MLP(nn.Module):
    def __init__(self):
        super(MLP, self).__init__()
        self.fc1 = nn.Linear(10, 5)
        self.relu = nn.ReLU()
        self.fc2 = nn.Linear(5, 2)

    def forward(self, x):
        out = self.fc1(x)
        out = self.relu(out)
        out = self.fc2(out)
        # Return raw logits: nn.CrossEntropyLoss applies log-softmax internally
        return out

model = MLP()

# Define the loss function and optimizer
criterion = nn.CrossEntropyLoss()
optimizer = optim.Adam(model.parameters(), lr=0.01)

# Train the model
num_epochs = 100
for epoch in range(num_epochs):
    model.train()
    optimizer.zero_grad()
    outputs = model(X)
    loss = criterion(outputs, y)
    loss.backward()
    optimizer.step()

# Test the model
model.eval()
with torch.no_grad():
    predicted = torch.argmax(model(X), dim=1)
    print(predicted)

Explanation:

  • nn.ReLU(): ReLU activation, introduces non-linearity.
  • nn.CrossEntropyLoss(): cross-entropy loss for multi-class classification; it expects raw logits and applies log-softmax internally, so no Softmax layer is needed in the model.
  • torch.argmax(model(X), dim=1): picks the class index with the highest score for each sample.
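
To see why the model should output raw logits, the following small check (an illustrative addition, not from the original example) confirms that nn.CrossEntropyLoss is equivalent to log-softmax followed by nn.NLLLoss:

# CrossEntropyLoss == LogSoftmax + NLLLoss
logits = torch.randn(4, 2)                # 4 samples, 2 classes
targets = torch.tensor([0, 1, 1, 0])
ce = nn.CrossEntropyLoss()(logits, targets)
nll = nn.NLLLoss()(torch.log_softmax(logits, dim=1), targets)
print(torch.allclose(ce, nll))            # True: applying Softmax first would be redundant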

Case 4: Convolutional Neural Network (CNN)

Goal: Build a convolutional neural network (CNN) model with PyTorch.

Code example:

import torch
import torch.nn as nn
import torch.optim as optim

# Define the data
X = torch.randn(100, 1, 28, 28)
y = torch.randint(0, 10, (100,))

# Define the model
class CNN(nn.Module):
    def __init__(self):
        super(CNN, self).__init__()
        self.conv1 = nn.Conv2d(1, 16, kernel_size=3, stride=1, padding=1)
        self.pool = nn.MaxPool2d(kernel_size=2, stride=2, padding=0)
        self.fc1 = nn.Linear(16 * 14 * 14, 128)
        self.fc2 = nn.Linear(128, 10)
        self.relu = nn.ReLU()

    def forward(self, x):
        out = self.conv1(x)
        out = self.relu(out)
        out = self.pool(out)
        out = out.view(-1, 16 * 14 * 14)
        out = self.fc1(out)
        out = self.relu(out)
        out = self.fc2(out)
        return out

model = CNN()

# Define the loss function and optimizer
criterion = nn.CrossEntropyLoss()
optimizer = optim.Adam(model.parameters(), lr=0.01)

# Train the model
num_epochs = 10
for epoch in range(num_epochs):
    model.train()
    optimizer.zero_grad()
    outputs = model(X)
    loss = criterion(outputs, y)
    loss.backward()
    optimizer.step()

# Test the model
model.eval()
with torch.no_grad():
    predicted = torch.argmax(model(X), dim=1)
    print(predicted)

Explanation:

  • nn.Conv2d(1, 16, kernel_size=3, stride=1, padding=1): a convolutional layer with 1 input channel, 16 output channels, a 3x3 kernel, stride 1, and padding 1.
  • nn.MaxPool2d(kernel_size=2, stride=2, padding=0): max-pooling layer with a 2x2 window and stride 2.
  • out.view(-1, 16 * 14 * 14): flattens the pooled feature maps (16 channels of 14x14 after pooling the 28x28 input once) into a vector for the fully connected layers.
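
If you want to verify where 16 * 14 * 14 comes from, a quick shape trace (a small sketch added for clarity) shows how the 28x28 input moves through the convolution and pooling layers; the convolution output size follows floor((H + 2*padding - kernel) / stride) + 1:

# Trace tensor shapes through one conv + pool stage
with torch.no_grad():
    dummy = torch.randn(1, 1, 28, 28)
    conv = nn.Conv2d(1, 16, kernel_size=3, stride=1, padding=1)
    pool = nn.MaxPool2d(kernel_size=2, stride=2)
    print(conv(dummy).shape)         # torch.Size([1, 16, 28, 28]) - padding=1 keeps 28x28
    print(pool(conv(dummy)).shape)   # torch.Size([1, 16, 14, 14]) - flattened to 16*14*14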

Case 5: Recurrent Neural Network (RNN)

Goal: Build a recurrent neural network (RNN) model with PyTorch.

Code example:

import torch
import torch.nn as nn
import torch.optim as optim

# Define the data
X = torch.randn(100, 10, 50)  # (batch_size, sequence_length, input_size)
y = torch.randint(0, 10, (100,))

# Define the model
class RNN(nn.Module):
    def __init__(self, input_size, hidden_size, num_layers, num_classes):
        super(RNN, self).__init__()
        self.hidden_size = hidden_size
        self.num_layers = num_layers
        self.rnn = nn.RNN(input_size, hidden_size, num_layers, batch_first=True)
        self.fc = nn.Linear(hidden_size, num_classes)

    def forward(self, x):
        h0 = torch.zeros(self.num_layers, x.size(0), self.hidden_size).to(x.device)
        out, _ = self.rnn(x, h0)
        out = self.fc(out[:, -1, :])
        return out

model = RNN(input_size=50, hidden_size=128, num_layers=2, num_classes=10)

# Define the loss function and optimizer
criterion = nn.CrossEntropyLoss()
optimizer = optim.Adam(model.parameters(), lr=0.01)

# Train the model
num_epochs = 10
for epoch in range(num_epochs):
    model.train()
    optimizer.zero_grad()
    outputs = model(X)
    loss = criterion(outputs, y)
    loss.backward()
    optimizer.step()

# Test the model
model.eval()
with torch.no_grad():
    predicted = torch.argmax(model(X), dim=1)
    print(predicted)

Explanation:

  • nn.RNN(input_size, hidden_size, num_layers, batch_first=True): an RNN layer with input size 50, hidden size 128, and 2 layers; batch_first=True means the first dimension of the input is the batch size.
  • h0 = torch.zeros(self.num_layers, x.size(0), self.hidden_size).to(x.device): initializes the hidden state with zeros.
  • out[:, -1, :]: takes the output of the last time step.
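
To make the slice out[:, -1, :] concrete, the shape check below (an illustrative sketch that reuses the model and X defined above) prints what nn.RNN returns with batch_first=True:

with torch.no_grad():
    out, hn = model.rnn(X, torch.zeros(2, 100, 128))
    print(out.shape)            # torch.Size([100, 10, 128]) - hidden state at every time step
    print(hn.shape)             # torch.Size([2, 100, 128])  - final hidden state of each layer
    print(out[:, -1, :].shape)  # torch.Size([100, 128])     - last time step, fed to the classifier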

Case 6: Long Short-Term Memory Network (LSTM)

Goal: Build a long short-term memory (LSTM) model with PyTorch.

Code example:

import torch
import torch.nn as nn
import torch.optim as optim

# Define the data
X = torch.randn(100, 10, 50)  # (batch_size, sequence_length, input_size)
y = torch.randint(0, 10, (100,))

# Define the model
class LSTM(nn.Module):
    def __init__(self, input_size, hidden_size, num_layers, num_classes):
        super(LSTM, self).__init__()
        self.hidden_size = hidden_size
        self.num_layers = num_layers
        self.lstm = nn.LSTM(input_size, hidden_size, num_layers, batch_first=True)
        self.fc = nn.Linear(hidden_size, num_classes)

    def forward(self, x):
        h0 = torch.zeros(self.num_layers, x.size(0), self.hidden_size).to(x.device)
        c0 = torch.zeros(self.num_layers, x.size(0), self.hidden_size).to(x.device)
        out, _ = self.lstm(x, (h0, c0))
        out = self.fc(out[:, -1, :])
        return out

model = LSTM(input_size=50, hidden_size=128, num_layers=2, num_classes=10)

# Define the loss function and optimizer
criterion = nn.CrossEntropyLoss()
optimizer = optim.Adam(model.parameters(), lr=0.01)

# Train the model
num_epochs = 10
for epoch in range(num_epochs):
    model.train()
    optimizer.zero_grad()
    outputs = model(X)
    loss = criterion(outputs, y)
    loss.backward()
    optimizer.step()

# Test the model
model.eval()
with torch.no_grad():
    predicted = torch.argmax(model(X), dim=1)
    print(predicted)

Explanation:

  • nn.LSTM(input_size, hidden_size, num_layers, batch_first=True): an LSTM layer with input size 50, hidden size 128, and 2 layers; batch_first=True means the first dimension of the input is the batch size.
  • c0 = torch.zeros(self.num_layers, x.size(0), self.hidden_size).to(x.device): initializes the cell state with zeros.
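
Unlike the plain RNN, nn.LSTM returns both a hidden state and a cell state. The short check below (an illustrative sketch reusing the model and X above; passing no initial states defaults them to zeros) shows the tuple structure:

with torch.no_grad():
    out, (hn, cn) = model.lstm(X)
    print(out.shape)  # torch.Size([100, 10, 128])
    print(hn.shape)   # torch.Size([2, 100, 128]) - final hidden state per layer
    print(cn.shape)   # torch.Size([2, 100, 128]) - final cell state per layer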

Case 7: Gated Recurrent Unit (GRU)

Goal: Build a gated recurrent unit (GRU) model with PyTorch.

Code example:

import torch
import torch.nn as nn
import torch.optim as optim

# Define the data
X = torch.randn(100, 10, 50)  # (batch_size, sequence_length, input_size)
y = torch.randint(0, 10, (100,))

# Define the model
class GRU(nn.Module):
    def __init__(self, input_size, hidden_size, num_layers, num_classes):
        super(GRU, self).__init__()
        self.hidden_size = hidden_size
        self.num_layers = num_layers
        self.gru = nn.GRU(input_size, hidden_size, num_layers, batch_first=True)
        self.fc = nn.Linear(hidden_size, num_classes)

    def forward(self, x):
        h0 = torch.zeros(self.num_layers, x.size(0), self.hidden_size).to(x.device)
        out, _ = self.gru(x, h0)
        out = self.fc(out[:, -1, :])
        return out

model = GRU(input_size=50, hidden_size=128, num_layers=2, num_classes=10)

# Define the loss function and optimizer
criterion = nn.CrossEntropyLoss()
optimizer = optim.Adam(model.parameters(), lr=0.01)

# Train the model
num_epochs = 10
for epoch in range(num_epochs):
    model.train()
    optimizer.zero_grad()
    outputs = model(X)
    loss = criterion(outputs, y)
    loss.backward()
    optimizer.step()

# Test the model
model.eval()
with torch.no_grad():
    predicted = torch.argmax(model(X), dim=1)
    print(predicted)

Explanation:

  • nn.GRU(input_size, hidden_size, num_layers, batch_first=True): a GRU layer with input size 50, hidden size 128, and 2 layers; batch_first=True means the first dimension of the input is the batch size.
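
A GRU exposes the same interface as the RNN and LSTM above but uses three gates instead of the LSTM's four, so it is somewhat lighter. The quick comparison below is an illustrative sketch using the same hyperparameters as this case:

def count_params(m):
    return sum(p.numel() for p in m.parameters())

print(count_params(nn.LSTM(50, 128, 2, batch_first=True)))  # 4 gate sets per layer
print(count_params(nn.GRU(50, 128, 2, batch_first=True)))   # 3 gate sets per layer, roughly 25% fewer parameters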

Case 8: Residual Network (ResNet)

Goal: Build a residual network (ResNet) model with PyTorch.

Code example:

import torch
import torch.nn as nn
import torch.optim as optim

# Define the data
X = torch.randn(100, 3, 32, 32)
y = torch.randint(0, 10, (100,))

# Define a residual block
class ResidualBlock(nn.Module):
    def __init__(self, in_channels, out_channels, stride=1):
        super(ResidualBlock, self).__init__()
        self.conv1 = nn.Conv2d(in_channels, out_channels, kernel_size=3, stride=stride, padding=1)
        self.bn1 = nn.BatchNorm2d(out_channels)
        self.conv2 = nn.Conv2d(out_channels, out_channels, kernel_size=3, stride=1, padding=1)
        self.bn2 = nn.BatchNorm2d(out_channels)
        self.relu = nn.ReLU()
        self.shortcut = nn.Sequential()
        if stride != 1 or in_channels != out_channels:
            self.shortcut = nn.Sequential(
                nn.Conv2d(in_channels, out_channels, kernel_size=1, stride=stride),
                nn.BatchNorm2d(out_channels)
            )

    def forward(self, x):
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        out += self.shortcut(x)
        out = self.relu(out)
        return out

# Define the model
class ResNet(nn.Module):
    def __init__(self, block, num_blocks, num_classes=10):
        super(ResNet, self).__init__()
        self.in_channels = 64
        self.conv1 = nn.Conv2d(3, 64, kernel_size=3, stride=1, padding=1)
        self.bn1 = nn.BatchNorm2d(64)
        self.layer1 = self._make_layer(block, 64, num_blocks[0], stride=1)
        self.layer2 = self._make_layer(block, 128, num_blocks[1], stride=2)
        self.layer3 = self._make_layer(block, 256, num_blocks[2], stride=2)
        self.layer4 = self._make_layer(block, 512, num_blocks[3], stride=2)
        self.linear = nn.Linear(512, num_classes)
        self.relu = nn.ReLU()

    def _make_layer(self, block, out_channels, num_blocks, stride):
        strides = [stride] + [1] * (num_blocks - 1)
        layers = []
        for stride in strides:
            layers.append(block(self.in_channels, out_channels, stride))
            self.in_channels = out_channels
        return nn.Sequential(*layers)

    def forward(self, x):
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.layer1(out)
        out = self.layer2(out)
        out = self.layer3(out)
        out = self.layer4(out)
        out = nn.functional.avg_pool2d(out, 4)
        out = out.view(out.size(0), -1)
        out = self.linear(out)
        return out

model = ResNet(ResidualBlock, [2, 2, 2, 2])

# Define the loss function and optimizer
criterion = nn.CrossEntropyLoss()
optimizer = optim.Adam(model.parameters(), lr=0.01)

# Train the model
num_epochs = 10
for epoch in range(num_epochs):
    model.train()
    optimizer.zero_grad()
    outputs = model(X)
    loss = criterion(outputs, y)
    loss.backward()
    optimizer.step()

# Test the model
model.eval()
with torch.no_grad():
    predicted = torch.argmax(model(X), dim=1)
    print(predicted)

Explanation:

  • ResidualBlock: a residual block made of two convolutional layers plus a skip (shortcut) connection.
  • _make_layer: stacks several residual blocks into one stage of the network.
  • nn.functional.avg_pool2d(out, 4): average pooling with a 4x4 window; since the final feature map is 4x4, this acts as global average pooling.
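
To see how the spatial resolution and channel count change through the network (and why avg_pool2d(out, 4) acts globally here), the trace below is an added sketch that runs one dummy CIFAR-sized image through each stage of the model defined above:

with torch.no_grad():
    h = model.relu(model.bn1(model.conv1(torch.randn(1, 3, 32, 32))))
    for name in ['layer1', 'layer2', 'layer3', 'layer4']:
        h = getattr(model, name)(h)
        print(name, h.shape)
    # layer1 torch.Size([1, 64, 32, 32])
    # layer2 torch.Size([1, 128, 16, 16])
    # layer3 torch.Size([1, 256, 8, 8])
    # layer4 torch.Size([1, 512, 4, 4])  -> avg_pool2d(., 4) reduces this to 1x1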

Case 9: Convolutional Autoencoder

Goal: Build a convolutional autoencoder model with PyTorch.

Code example:

import torch
import torch.nn as nn
import torch.optim as optim

# Define the data
X = torch.randn(100, 1, 28, 28)

# Define the model
class ConvAutoencoder(nn.Module):
    def __init__(self):
        super(ConvAutoencoder, self).__init__()
        # Encoder
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, stride=2, padding=1),
            nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1),
            nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=7)
        )
        # Decoder
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, kernel_size=7),
            nn.ReLU(),
            nn.ConvTranspose2d(32, 16, kernel_size=3, stride=2, padding=1, output_padding=1),
            nn.ReLU(),
            nn.ConvTranspose2d(16, 1, kernel_size=3, stride=2, padding=1, output_padding=1),
            nn.Sigmoid()
        )

    def forward(self, x):
        encoded = self.encoder(x)
        decoded = self.decoder(encoded)
        return decoded

model = ConvAutoencoder()

# Define the loss function and optimizer
criterion = nn.MSELoss()
optimizer = optim.Adam(model.parameters(), lr=0.01)

# Train the model
num_epochs = 10
for epoch in range(num_epochs):
    model.train()
    optimizer.zero_grad()
    outputs = model(X)
    loss = criterion(outputs, X)
    loss.backward()
    optimizer.step()

# Test the model
model.eval()
with torch.no_grad():
    reconstructed = model(X)
    print(reconstructed)

Explanation:

  • nn.Conv2d and nn.ConvTranspose2d: convolutions for the encoder and transposed (de)convolutions for the decoder.
  • nn.Sigmoid(): constrains the decoder output to the range 0 to 1.
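
A quick look at the bottleneck (an illustrative check using the autoencoder above) confirms that each 28x28 image is compressed into a 64-channel 1x1 code before being decoded back to the original size:

with torch.no_grad():
    code = model.encoder(X[:1])
    print(code.shape)                 # torch.Size([1, 64, 1, 1]) - compressed representation
    print(model.decoder(code).shape)  # torch.Size([1, 1, 28, 28]) - reconstruction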

Case 10: Variational Autoencoder (VAE)

Goal: Build a variational autoencoder model with PyTorch.

Code example:

import torch
import torch.nn as nn
import torch.optim as optim

# Define the data
X = torch.randn(100, 1, 28, 28)

# Define the model
class VAE(nn.Module):
    def __init__(self, latent_dim):
        super(VAE, self).__init__()
        self.latent_dim = latent_dim
        # Encoder
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, stride=2, padding=1),
            nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1),
            nn.ReLU(),
            nn.Flatten(),
            nn.Linear(32 * 7 * 7, 256),
            nn.ReLU()
        )
        self.fc_mu = nn.Linear(256, latent_dim)
        self.fc_logvar = nn.Linear(256, latent_dim)
        # Decoder
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 256),
            nn.ReLU(),
            nn.Linear(256, 32 * 7 * 7),
            nn.ReLU(),
            nn.Unflatten(1, (32, 7, 7)),
            nn.ConvTranspose2d(32, 16, kernel_size=3, stride=2, padding=1, output_padding=1),
            nn.ReLU(),
            nn.ConvTranspose2d(16, 1, kernel_size=3, stride=2, padding=1, output_padding=1),
            nn.Sigmoid()
        )

    def encode(self, x):
        h = self.encoder(x)
        mu = self.fc_mu(h)
        logvar = self.fc_logvar(h)
        return mu, logvar

    def reparameterize(self, mu, logvar):
        std = torch.exp(0.5 * logvar)
        eps = torch.randn_like(std)
        return mu + eps * std

    def decode(self, z):
        return self.decoder(z)

    def forward(self, x):
        mu, logvar = self.encode(x)
        z = self.reparameterize(mu, logvar)
        reconstructed = self.decode(z)
        return reconstructed, mu, logvar

model = VAE(latent_dim=16)

# Define the loss function
def vae_loss(reconstructed, x, mu, logvar):
    # Sum-reduce the reconstruction term so it is on the same scale as the summed KL term
    recon_loss = nn.functional.mse_loss(reconstructed, x, reduction='sum')
    kl_loss = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon_loss + kl_loss

# Define the optimizer
optimizer = optim.Adam(model.parameters(), lr=0.01)

# Train the model
num_epochs = 10
for epoch in range(num_epochs):
    model.train()
    optimizer.zero_grad()
    reconstructed, mu, logvar = model(X)
    loss = vae_loss(reconstructed, X, mu, logvar)
    loss.backward()
    optimizer.step()

# Test the model
model.eval()
with torch.no_grad():
    reconstructed, _, _ = model(X)
    print(reconstructed)

Explanation:

  • self.fc_mu and self.fc_logvar: produce the mean and log-variance of the latent distribution.
  • reparameterize: the reparameterization trick, used to sample from the latent distribution while keeping the computation graph differentiable.
  • vae_loss: the VAE loss, the sum of a reconstruction term and a KL-divergence term.
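
Because the latent space is regularized toward a standard normal distribution, you can also generate new samples by decoding random latent vectors. This is a small added sketch, with latent_dim=16 as in the model above:

model.eval()
with torch.no_grad():
    z = torch.randn(8, 16)      # 8 latent vectors drawn from the prior N(0, I)
    samples = model.decode(z)   # decode them into images
    print(samples.shape)        # torch.Size([8, 1, 28, 28])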

Case 11: Generative Adversarial Network (GAN)

Goal: Build a generative adversarial network (GAN) model with PyTorch.

Code example:

import torch
import torch.nn as nn
import torch.optim as optim

# Define the data
X = torch.randn(100, 1, 28, 28)

# Define the generator
class Generator(nn.Module):
    def __init__(self, latent_dim):
        super(Generator, self).__init__()
        self.model = nn.Sequential(
            nn.Linear(latent_dim, 256),
            nn.ReLU(),
            nn.Linear(256, 512),
            nn.ReLU(),
            nn.Linear(512, 784),
            nn.Tanh()
        )

    def forward(self, z):
        img = self.model(z)
        img = img.view(img.size(0), 1, 28, 28)
        return img

# Define the discriminator
class Discriminator(nn.Module):
    def __init__(self):
        super(Discriminator, self).__init__()
        self.model = nn.Sequential(
            nn.Linear(784, 512),
            nn.LeakyReLU(0.2),
            nn.Linear(512, 256),
            nn.LeakyReLU(0.2),
            nn.Linear(256, 1),
            nn.Sigmoid()
        )

    def forward(self, img):
        img_flat = img.view(img.size(0), -1)
        validity = self.model(img_flat)
        return validity

# Instantiate the models
latent_dim = 100
generator = Generator(latent_dim)
discriminator = Discriminator()

# Define the loss function and optimizers
adversarial_loss = nn.BCELoss()
optimizer_G = optim.Adam(generator.parameters(), lr=0.0002)
optimizer_D = optim.Adam(discriminator.parameters(), lr=0.0002)

# Train the models
num_epochs = 10
for epoch in range(num_epochs):
    # Train the generator
    generator.train()
    optimizer_G.zero_grad()
    z = torch.randn(100, latent_dim)
    gen_imgs = generator(z)
    validity = discriminator(gen_imgs)
    g_loss = adversarial_loss(validity, torch.ones((100, 1)))
    g_loss.backward()
    optimizer_G.step()

    # Train the discriminator
    discriminator.train()
    optimizer_D.zero_grad()
    real_imgs = X
    real_validity = discriminator(real_imgs)
    real_loss = adversarial_loss(real_validity, torch.ones((100, 1)))
    fake_validity = discriminator(gen_imgs.detach())
    fake_loss = adversarial_loss(fake_validity, torch.zeros((100, 1)))
    d_loss = (real_loss + fake_loss) / 2
    d_loss.backward()
    optimizer_D.step()

# Generate new images
generator.eval()
with torch.no_grad():
    z = torch.randn(100, latent_dim)
    gen_imgs = generator(z)
    print(gen_imgs)

Explanation:

  • Generator: the generator network, which produces fake images.
  • Discriminator: the discriminator network, which judges whether an image is real.
  • adversarial_loss: binary cross-entropy loss used for both the generator and the discriminator.
  • gen_imgs.detach(): detaches the generated images so the discriminator update does not backpropagate into the generator.
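
After training, the generator's Tanh output lies in [-1, 1]; to actually look at the results you can rescale and save a grid of samples. The snippet below is an added sketch that assumes torchvision is installed (the file name is arbitrary):

from torchvision.utils import save_image

generator.eval()
with torch.no_grad():
    z = torch.randn(64, latent_dim)
    gen_imgs = generator(z)
    # normalize=True maps the [-1, 1] Tanh output back to [0, 1] before saving
    save_image(gen_imgs, 'gan_samples.png', nrow=8, normalize=True)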

Case 12: Sequence-to-Sequence Model (Seq2Seq)

Goal: Build a sequence-to-sequence (Seq2Seq) model with PyTorch.

Code example:

import torch
import torch.nn as nn
import torch.optim as optim

# Define the data
X = torch.randint(0, 10, (100, 10))  # (batch_size, sequence_length)
y = torch.randint(0, 10, (100, 10))

# Define the encoder
class Encoder(nn.Module):
    def __init__(self, input_size, hidden_size, num_layers):
        super(Encoder, self).__init__()
        self.hidden_size = hidden_size
        self.num_layers = num_layers
        self.embedding = nn.Embedding(input_size, hidden_size)
        self.gru = nn.GRU(hidden_size, hidden_size, num_layers, batch_first=True)

    def forward(self, x):
        embedded = self.embedding(x)
        outputs, hidden = self.gru(embedded)
        return outputs, hidden

# Define the decoder
class Decoder(nn.Module):
    def __init__(self, hidden_size, output_size, num_layers):
        super(Decoder, self).__init__()
        self.hidden_size = hidden_size
        self.num_layers = num_layers
        self.embedding = nn.Embedding(output_size, hidden_size)
        self.gru = nn.GRU(hidden_size, hidden_size, num_layers, batch_first=True)
        self.out = nn.Linear(hidden_size, output_size)
        self.softmax = nn.LogSoftmax(dim=2)

    def forward(self, x, hidden):
        embedded = self.embedding(x)
        output, hidden = self.gru(embedded, hidden)
        output = self.softmax(self.out(output))
        return output, hidden

# Define the seq2seq model
class Seq2Seq(nn.Module):
    def __init__(self, encoder, decoder):
        super(Seq2Seq, self).__init__()
        self.encoder = encoder
        self.decoder = decoder

    def forward(self, src, trg, teacher_forcing_ratio=0.5):
        batch_size = src.size(0)
        trg_len = trg.size(1)
        trg_vocab_size = self.decoder.out.out_features

        outputs = torch.zeros(batch_size, trg_len, trg_vocab_size).to(src.device)

        _, hidden = self.encoder(src)

        input = trg[:, 0].unsqueeze(1)  # SOS token

        for t in range(1, trg_len):
            output, hidden = self.decoder(input, hidden)
            outputs[:, t, :] = output.squeeze(1)
            teacher_force = torch.rand(1) < teacher_forcing_ratio
            top1 = output.argmax(2)
            input = trg[:, t].unsqueeze(1) if teacher_force else top1

        return outputs

# Instantiate the model
input_size = 10
hidden_size = 128
output_size = 10
num_layers = 2
encoder = Encoder(input_size, hidden_size, num_layers)
decoder = Decoder(hidden_size, output_size, num_layers)
model = Seq2Seq(encoder, decoder)

# Define the loss function and optimizer
criterion = nn.NLLLoss()
optimizer = optim.Adam(model.parameters(), lr=0.01)

# Train the model
num_epochs = 10
for epoch in range(num_epochs):
    model.train()
    optimizer.zero_grad()
    outputs = model(X, y)
    loss = criterion(outputs.view(-1, output_size), y.view(-1))
    loss.backward()
    optimizer.step()

# Test the model
model.eval()
with torch.no_grad():
    predicted = model(X, y, teacher_forcing_ratio=0).argmax(dim=2)
    print(predicted)

Explanation:

  • Encoder: encodes the input sequence into a hidden state.
  • Decoder: decodes the hidden state into the output sequence.
  • Seq2Seq: the sequence-to-sequence model that combines the encoder and decoder.
  • teacher_forcing_ratio: the probability of feeding the ground-truth token, rather than the model's own prediction, as the next decoder input during training.
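
At inference time there is usually no target sequence to pass in. A greedy decoding loop such as the hypothetical sketch below (built on the encoder/decoder above and assuming token id 0 serves as the start-of-sequence symbol) feeds each predicted token back in as the next input:

model.eval()
with torch.no_grad():
    src = X[:1]                                  # one source sequence
    _, hidden = model.encoder(src)
    token = torch.zeros(1, 1, dtype=torch.long)  # assumed SOS token id = 0
    decoded = []
    for _ in range(10):                          # decode up to 10 steps
        output, hidden = model.decoder(token, hidden)
        token = output.argmax(2)                 # greedy: pick the most likely next token
        decoded.append(token.item())
    print(decoded)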

Hands-On Example: Handwritten Digit Recognition

Goal: Build a convolutional neural network (CNN) with PyTorch and classify handwritten digits.

Code example:

import torch
import torch.nn as nn
import torch.optim as optim
import torchvision
import torchvision.transforms as transforms

# Define the datasets
transform = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize((0.5,), (0.5,))
])

trainset = torchvision.datasets.MNIST(root='./data', train=True, download=True, transform=transform)
trainloader = torch.utils.data.DataLoader(trainset, batch_size=100, shuffle=True, num_workers=2)

testset = torchvision.datasets.MNIST(root='./data', train=False, download=True, transform=transform)
testloader = torch.utils.data.DataLoader(testset, batch_size=100, shuffle=False, num_workers=2)

# Define the model
class CNN(nn.Module):
    def __init__(self):
        super(CNN, self).__init__()
        self.conv1 = nn.Conv2d(1, 16, kernel_size=3, stride=1, padding=1)
        self.pool = nn.MaxPool2d(kernel_size=2, stride=2, padding=0)
        self.fc1 = nn.Linear(16 * 14 * 14, 128)
        self.fc2 = nn.Linear(128, 10)
        self.relu = nn.ReLU()

    def forward(self, x):
        out = self.conv1(x)
        out = self.relu(out)
        out = self.pool(out)
        out = out.view(-1, 16 * 14 * 14)
        out = self.fc1(out)
        out = self.relu(out)
        out = self.fc2(out)
        return out

model = CNN()

# Define the loss function and optimizer
criterion = nn.CrossEntropyLoss()
optimizer = optim.Adam(model.parameters(), lr=0.01)

# Train the model
num_epochs = 10
for epoch in range(num_epochs):
    model.train()
    running_loss = 0.0
    for i, data in enumerate(trainloader, 0):
        inputs, labels = data
        optimizer.zero_grad()
        outputs = model(inputs)
        loss = criterion(outputs, labels)
        loss.backward()
        optimizer.step()
        running_loss += loss.item()
    print(f'Epoch {epoch + 1}, Loss: {running_loss / len(trainloader):.4f}')

# Test the model
model.eval()
correct = 0
total = 0
with torch.no_grad():
    for data in testloader:
        images, labels = data
        outputs = model(images)
        _, predicted = torch.max(outputs.data, 1)
        total += labels.size(0)
        correct += (predicted == labels).sum().item()

print(f'Accuracy on the test set: {100 * correct / total}%')

Explanation:

  • torchvision.datasets.MNIST: loads the MNIST dataset.
  • transforms.Compose: defines the preprocessing pipeline, converting images to tensors and normalizing them.
  • DataLoader: creates a data loader that serves the data in batches.
  • nn.MaxPool2d: max-pooling layer used for downsampling.
  • nn.Linear: fully connected layers used for classification.
  • torch.max(outputs.data, 1): returns the index of the largest logit for each sample, i.e. the predicted class.
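
Once the accuracy looks reasonable, you will usually want to persist the trained weights. The standard pattern below saves only the state dict and reloads it into a fresh model (the file name is chosen here just for illustration):

# Save the trained weights (state dict only, not the whole module)
torch.save(model.state_dict(), 'mnist_cnn.pth')

# Later: rebuild the architecture and load the weights back
restored = CNN()
restored.load_state_dict(torch.load('mnist_cnn.pth'))
restored.eval()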

Summary

Through 12 hands-on examples, this article has shown how to build a wide range of neural network models with PyTorch: linear regression, logistic regression, a multilayer perceptron, a convolutional neural network, a recurrent neural network, an LSTM, a GRU, a residual network, a convolutional autoencoder, a variational autoencoder, a generative adversarial network, and a sequence-to-sequence model. Each example includes complete code and an explanation to help you gradually master PyTorch's core concepts and more advanced techniques.

Editor: Zhao Ningning    Source: 手把手PythonAI編程