BERT (Bidirectional Encoder Representations from Transformers) is an open-source pre-trained neural network model. It can be applied to natural language processing tasks and achieves comparatively strong performance across a wide variety of them.
BERT attribution scores are a technique for measuring how important each token is to a BERT model's output. When the model predicts a probability for an input, the scores estimate how much each token contributed to that prediction.
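In the variant implemented below, sometimes called gradient × input, the score of token i is the dot product between that token's final-layer representation and the gradient of the loss with respect to it: score_i = sum_j (∂loss/∂h_ij) · h_ij, where h_ij is the j-th component of token i's output vector. To first order, the tokens with the largest scores are the ones the loss is most sensitive to.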
Below is example code that uses BERT attribution scores for token probability prediction:
import tensorflow as tf
import tensorflow_hub as hub
import numpy as np
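# The snippet calls two helpers that are not defined in the original post. The
# first, create_tokenizer_from_hub_module, is sketched here after Google's BERT
# TF-Hub example code; it assumes the bert-tensorflow package is installed.
from bert import tokenization

def create_tokenizer_from_hub_module(module):
    # Read the vocab file path and casing flag published by the module, then
    # build the matching WordPiece tokenizer.
    tokenization_info = module(signature="tokenization_info", as_dict=True)
    with tf.Session() as sess:
        vocab_file, do_lower_case = sess.run(
            [tokenization_info["vocab_file"], tokenization_info["do_lower_case"]])
    return tokenization.FullTokenizer(vocab_file=vocab_file,
                                      do_lower_case=do_lower_case)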
# Load the pre-trained BERT module from TensorFlow Hub and build its tokenizer.
bert_module = hub.Module("https://tfhub.dev/google/bert_uncased_L-12_H-768_A-12/1")
tokenizer = create_tokenizer_from_hub_module(bert_module)
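# The second helper, convert_text_to_token_ids, is likewise an assumption: a
# minimal sketch that adds the [CLS]/[SEP] markers BERT expects and maps the
# WordPiece tokens to vocabulary ids.
def convert_text_to_token_ids(text, tokenizer):
    tokens = ["[CLS]"] + tokenizer.tokenize(text) + ["[SEP]"]
    return tokenizer.convert_tokens_to_ids(tokens)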
text = "This is a sample input sentence."
# Tokenize the sentence and map it to vocabulary ids (no batch dimension yet).
input_ids = convert_text_to_token_ids(text, tokenizer)
# Graph inputs: token ids, attention mask, and segment ids, all shaped [batch, seq_len].
input_sequence = tf.placeholder(dtype=tf.int32, shape=[None, None])
input_mask = tf.placeholder(dtype=tf.int32, shape=[None, None])
segment_ids = tf.placeholder(dtype=tf.int32, shape=[None, None])
# Optional per-token baseline subtracted from the normalized scores.
delta = tf.placeholder(dtype=tf.float32, shape=[None, None])

# Run BERT. With as_dict=True the module returns a dict, not a tuple:
# sequence_output is [batch, seq_len, hidden], pooled_output is [batch, hidden].
bert_inputs = dict(input_ids=input_sequence, input_mask=input_mask, segment_ids=segment_ids)
bert_outputs = bert_module(bert_inputs, signature="tokens", as_dict=True)
pooled_output = bert_outputs["pooled_output"]
sequence_output = bert_outputs["sequence_output"]

# Binary classification head on the pooled [CLS] representation.
logits = tf.layers.dense(pooled_output, 2, activation=None, name="output_layer")
probabilities = tf.nn.softmax(logits, axis=-1)[:, 1]

# Cross-entropy loss against the positive class (label 1).
one_hot_labels = tf.one_hot([1], depth=2, dtype=tf.float32)
loss = tf.nn.softmax_cross_entropy_with_logits_v2(logits=logits, labels=one_hot_labels)

# Gradient x input attribution: back-propagate the loss to the token
# representations and sum over the hidden dimension for one score per token.
grad, = tf.gradients(loss, sequence_output)
token_scores = tf.reduce_sum(tf.multiply(grad, sequence_output), axis=2)
token_scores_normalized = tf.nn.softmax(token_scores, axis=1)
token_scores_normalized_delta = tf.subtract(token_scores_normalized, delta)
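The graph above only defines the attribution tensors; nothing has been executed yet. A minimal sketch of how one might run it, assuming a batch of one, an all-ones attention mask, all-zero segment ids, and a zero baseline for delta:

# Add a batch dimension and build the remaining inputs (assumptions noted above).
input_ids_batch = np.array([input_ids], dtype=np.int32)
mask_batch = np.ones_like(input_ids_batch)
segments_batch = np.zeros_like(input_ids_batch)
delta_batch = np.zeros(input_ids_batch.shape, dtype=np.float32)

with tf.Session() as sess:
    # TF-Hub modules require both initializers before they can be evaluated.
    sess.run([tf.global_variables_initializer(), tf.tables_initializer()])
    scores = sess.run(token_scores_normalized_delta,
                      feed_dict={input_sequence: input_ids_batch,
                                 input_mask: mask_batch,
                                 segment_ids: segments_batch,
                                 delta: delta_batch})
    # One normalized attribution score per token, [CLS] and [SEP] included.
    print(scores[0])

Note that the dense output layer is randomly initialized here, so the scores only become meaningful once that classification head has been trained or fine-tuned on the task of interest.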