Below is a code example of using BERT for sentence classification:
First, install the required libraries:
pip install transformers
pip install torch
Next, import the required libraries:
import torch
from torch.utils.data import DataLoader, TensorDataset
from transformers import BertTokenizer, BertForSequenceClassification
Load the pretrained BERT model and tokenizer:
model = BertForSequenceClassification.from_pretrained('bert-base-uncased', num_labels=2)  # num_labels=2 for a binary classification task
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
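As a quick optional sanity check, you can look at how the tokenizer splits a sentence into WordPiece tokens (the sample sentence here is just an illustration):

print(tokenizer.tokenize('This is sentence 1'))
# should yield something like ['this', 'is', 'sentence', '1'] -- bert-base-uncased lowercases its input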
Define the dataset and data loader:
# Suppose you have a dataset of sentences and their corresponding labels
sentences = ['This is sentence 1', 'This is sentence 2', ...]
labels = [0, 1, ...]  # Labels are class indices in [0, num_labels); here 0 = negative, 1 = positive
# Encode the sentences with the tokenizer
encoded_inputs = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Create the dataset
dataset = TensorDataset(encoded_inputs['input_ids'], encoded_inputs['attention_mask'], torch.tensor(labels))
# Create the data loader; shuffling the training data each epoch is standard practice
dataloader = DataLoader(dataset, batch_size=16, shuffle=True)
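Before training, it can be useful to pull a single batch and confirm the tensor shapes; this is an optional check, not part of the pipeline above:

input_ids_batch, attention_mask_batch, labels_batch = next(iter(dataloader))
print(input_ids_batch.shape)       # (batch_size, padded_sequence_length)
print(attention_mask_batch.shape)  # same shape as input_ids
print(labels_batch.shape)          # (batch_size,)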
Define the training loop:
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
model.to(device)
model.train()
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
for epoch in range(3):
    for batch in dataloader:
        input_ids, attention_mask, targets = batch
        input_ids, attention_mask, targets = input_ids.to(device), attention_mask.to(device), targets.to(device)
        optimizer.zero_grad()
        # Passing labels makes the model compute the cross-entropy loss internally
        outputs = model(input_ids, attention_mask=attention_mask, labels=targets)
        loss = outputs.loss
        loss.backward()
        optimizer.step()
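After training, you will typically want to check accuracy on held-out data. Below is a minimal sketch, assuming a val_dataloader built the same way as dataloader above but from a separate labeled split (val_dataloader is hypothetical and not defined in this example):

model.eval()
correct, total = 0, 0
with torch.no_grad():
    for batch in val_dataloader:
        input_ids, attention_mask, targets = (t.to(device) for t in batch)
        logits = model(input_ids, attention_mask=attention_mask).logits
        predictions = torch.argmax(logits, dim=1)
        correct += (predictions == targets).sum().item()
        total += targets.size(0)
print(f'Validation accuracy: {correct / total:.4f}')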
Make predictions:
model.eval()
test_sentences = ['This is a test sentence', 'Another test sentence', ...]
encoded_inputs = tokenizer(test_sentences, padding=True, truncation=True, return_tensors='pt')
input_ids = encoded_inputs['input_ids'].to(device)
attention_mask = encoded_inputs['attention_mask'].to(device)
with torch.no_grad():
    outputs = model(input_ids, attention_mask=attention_mask)
    logits = outputs.logits
predicted_labels = torch.argmax(logits, dim=1).tolist()
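If you need confidence scores rather than hard labels, apply a softmax over the logits; each row is then a probability distribution over the two classes:

probabilities = torch.softmax(logits, dim=1)  # shape: (num_sentences, 2); each row sums to 1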
The code above demonstrates how to use BERT for a sentence classification task. You will need to adapt it to your own dataset and task requirements.
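If you want to reuse the fine-tuned model later, you can save it along with the tokenizer via the standard save_pretrained API (the directory path is just an example):

model.save_pretrained('./bert-sentence-classifier')
tokenizer.save_pretrained('./bert-sentence-classifier')
# Reload later with:
# model = BertForSequenceClassification.from_pretrained('./bert-sentence-classifier')
# tokenizer = BertTokenizer.from_pretrained('./bert-sentence-classifier')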