In machine learning algorithms, the learning rate (step size) is the magnitude by which the weights are updated at each iteration. Choosing it well is critical: a rate that is too small makes convergence slow, while one that is too large causes oscillation or outright divergence. In practice, the learning rate is usually chosen by trial and error.
Below is an example that solves a linear regression problem with gradient descent, including the learning-rate setting:
```python
import numpy as np

# Generate data: the points lie exactly on the line y = 1.2*x + 1.3
x = np.array([1, 2, 3, 4, 5])
y = np.array([2.5, 3.7, 4.9, 6.1, 7.3])

# Initialize model parameters
alpha = 0.05  # learning rate (step size)
theta0 = 0    # intercept
theta1 = 0    # slope

# Mean-squared-error loss
def loss_function(theta0, theta1, x, y):
    y_pred = theta1 * x + theta0
    loss = np.mean(np.square(y_pred - y))
    return loss

# Gradient descent: both parameters are updated from the same predictions,
# i.e. a simultaneous update
def gradient_descent(theta0, theta1, x, y, alpha, epochs):
    n = len(x)
    for i in range(epochs):
        y_pred = theta1 * x + theta0
        theta1 = theta1 - alpha * 2 * np.dot(y_pred - y, x) / n  # update slope
        theta0 = theta0 - alpha * 2 * np.mean(y_pred - y)        # update intercept
        print('Epoch {}: loss={:.4f}, theta0={:.4f}, theta1={:.4f}'.format(
            i, loss_function(theta0, theta1, x, y), theta0, theta1))
    return theta0, theta1

theta0, theta1 = gradient_descent(theta0, theta1, x, y, alpha, epochs=100)
print('Final result: theta0={:.4f}, theta1={:.4f}'.format(theta0, theta1))
```
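Because the five data points lie exactly on the line y = 1.2x + 1.3, the loss above should head toward zero: the slope theta1 approaches 1.2 within a few dozen epochs, while the intercept theta0 converges toward 1.3 more slowly and may still be slightly short after 100 epochs.

To make the trial-and-error point from the opening paragraph concrete, here is a minimal sketch that reruns the same update rule at a few learning rates and compares the final loss. The three rates in the loop are arbitrary illustrative choices, not recommendations:

```python
import numpy as np

x = np.array([1, 2, 3, 4, 5])
y = np.array([2.5, 3.7, 4.9, 6.1, 7.3])

def run(alpha, epochs=100):
    """Run the same gradient-descent updates as above; return the final loss."""
    theta0, theta1 = 0.0, 0.0
    n = len(x)
    # silence overflow warnings in case a far-too-large alpha is tried
    with np.errstate(over='ignore', invalid='ignore'):
        for _ in range(epochs):
            y_pred = theta1 * x + theta0
            theta1 -= alpha * 2 * np.dot(y_pred - y, x) / n  # slope step
            theta0 -= alpha * 2 * np.mean(y_pred - y)        # intercept step
    return np.mean(np.square(theta1 * x + theta0 - y))

# Illustrative sweep: too small, reasonable, too large
for alpha in (0.001, 0.05, 0.1):
    print('alpha={}: final loss={}'.format(alpha, run(alpha)))
```

On this dataset, 0.001 barely reduces the loss in 100 epochs, 0.05 converges, and 0.1 makes the updates oscillate with growing amplitude until they blow up, matching the behavior described at the top of the post.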