- Double-check the data-processing code and the model-evaluation code, and run training several times to see whether the model behaves consistently (a sketch covering this and the next point follows this list).
- Try training with different BERT checkpoints and hyperparameters, for example switching the pretrained model or tuning the learning rate.
- Try other text-classification algorithms, such as FastText or an SVM; a TF-IDF + SVM baseline sketch appears after the BERT example below.
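For the first two points, a common pattern is to repeat fine-tuning across several random seeds, checkpoints, and learning rates, then compare validation scores: low variance across seeds suggests the pipeline is reliable, and the mean scores show which checkpoint/learning-rate combination works best. A minimal sketch, assuming a hypothetical `train_and_evaluate(model_name, lr, seed)` helper that wraps a fine-tuning loop like the one below and returns validation accuracy (the helper, the checkpoint list, and the learning rates are illustrative, not from the original):

```python
import statistics

import torch

def reliability_sweep(train_and_evaluate):
    """Repeat fine-tuning over seeds/checkpoints/learning rates and
    summarize validation accuracy. `train_and_evaluate` is a hypothetical
    helper: train_and_evaluate(model_name, lr, seed) -> float accuracy."""
    results = {}
    for model_name in ('bert-base-uncased', 'bert-base-cased'):  # candidate checkpoints (assumption)
        for lr in (2e-5, 5e-5):  # typical BERT fine-tuning rates (assumption)
            scores = []
            for seed in (0, 1, 2):  # re-train with a different seed each time
                torch.manual_seed(seed)
                scores.append(train_and_evaluate(model_name, lr, seed))
            # Low stdev across seeds suggests the training pipeline is stable.
            results[(model_name, lr)] = (statistics.mean(scores), statistics.stdev(scores))
    return results
```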
Below is example code that uses the transformers library (pytorch-transformers has since been renamed transformers) to train a BERT model and classify new data:
```python
from transformers import BertForSequenceClassification, BertTokenizer
import torch

# Load the pretrained model and tokenizer (num_labels=2 for binary classification)
model = BertForSequenceClassification.from_pretrained('bert-base-uncased', num_labels=2)
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
# Load the training set: one "text<TAB>label" pair per line
with open('train.txt', 'r') as f:
    lines = [l.strip() for l in f if l.strip()]
texts = [l.split('\t')[0] for l in lines]
labels = [int(l.split('\t')[1]) for l in lines]
# Tokenize inputs
max_length = 128
inputs = tokenizer(texts, padding=True, truncation=True, max_length=max_length, return_tensors='pt')
labels = torch.tensor(labels)
# Set up the optimizer and data loader
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)  # a small LR is typical for BERT fine-tuning (assumed default)
train_dataset = torch.utils.data.TensorDataset(inputs['input_ids'], inputs['attention_mask'], labels)
train_loader = torch.utils.data.DataLoader(train_dataset, batch_size=16, shuffle=True)
# Fine-tune for a few epochs
epochs = 5
model.train()
for epoch in range(epochs):
    for batch in train_loader:
        optimizer.zero_grad()
        outputs = model(batch[0], attention_mask=batch[1], labels=batch[2])
        loss = outputs[0]  # the loss comes first when labels are passed
        loss.backward()
        optimizer.step()
# Load the test set: one text per line
with open('test.txt', 'r') as f:
    test_texts = [l.strip() for l in f if l.strip()]

# Tokenize the test inputs the same way as the training inputs
test_inputs = tokenizer(test_texts, padding=True, truncation=True, max_length=max_length,
                        return_tensors='pt')

# Classify the new data
model.eval()
with torch.no_grad():
    logits = model(test_inputs['input_ids'], attention_mask=test_inputs['attention_mask'])[0]
predictions = logits.argmax(dim=-1)
print(predictions.tolist())
```
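For the third point, a classical baseline such as TF-IDF features plus a linear SVM is quick to run and gives a sanity check on the data before investing more time in BERT. A minimal scikit-learn sketch (scikit-learn is an assumed extra dependency; the tab-separated `train.txt` format matches the BERT example above):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Reuse the tab-separated "text<TAB>label" format from train.txt above
with open('train.txt', 'r') as f:
    lines = [l.strip() for l in f if l.strip()]
texts = [l.split('\t')[0] for l in lines]
labels = [int(l.split('\t')[1]) for l in lines]

# TF-IDF features + linear SVM: a strong classical text-classification baseline
clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2), min_df=2), LinearSVC())
scores = cross_val_score(clf, texts, labels, cv=5)
print(f'5-fold accuracy: {scores.mean():.3f} +/- {scores.std():.3f}')
```

If this baseline already matches the BERT numbers, the bottleneck is more likely the data or the labels than the model choice.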