Chapter 154: The Dunhuang Plan (2/2)
We first compute the cross-entropy cost over the training set:
```python
def compute_cost(X, y, w, b):
    m = len(y)
    A = sigmoid(np.dot(X, w) + b)  # predicted probabilities
    cost = -(1 / m) * np.sum(y * np.log(A) + (1 - y) * np.log(1 - A))
    return cost
```
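The `sigmoid` helper used above is not shown in this excerpt; a minimal, numerically stable sketch (the clipping bound is an assumption, chosen to keep `np.exp` from overflowing):

```python
import numpy as np

def sigmoid(z):
    # Clip the logits so np.exp never overflows for very negative inputs.
    z = np.clip(z, -500, 500)
    return 1.0 / (1.0 + np.exp(-z))
```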
```python
def compute_gradient(X, y, w, b):
    m = len(y)
    A = sigmoid(np.dot(X, w) + b)   # predicted probabilities
    dz = A - y                      # prediction error
    dw = (1 / m) * np.dot(X.T, dz)  # gradient w.r.t. weights
    db = (1 / m) * np.sum(dz)       # gradient w.r.t. bias
    return dw, db
```
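One way to sanity-check a gradient implementation like this is a finite-difference comparison against the cost function; the toy data, epsilon, and tolerance below are illustrative choices, not part of the original text:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def compute_cost(X, y, w, b):
    m = len(y)
    A = sigmoid(np.dot(X, w) + b)
    return -(1 / m) * np.sum(y * np.log(A) + (1 - y) * np.log(1 - A))

def compute_gradient(X, y, w, b):
    m = len(y)
    dz = sigmoid(np.dot(X, w) + b) - y
    return (1 / m) * np.dot(X.T, dz), (1 / m) * np.sum(dz)

# Small random problem for the check.
rng = np.random.default_rng(0)
X = rng.normal(size=(5, 3))
y = np.array([0., 1., 1., 0., 1.])
w, b = rng.normal(size=3), 0.1

dw, db = compute_gradient(X, y, w, b)
eps = 1e-6
for j in range(3):
    w_plus, w_minus = w.copy(), w.copy()
    w_plus[j] += eps
    w_minus[j] -= eps
    # Central difference approximation of dJ/dw_j.
    numeric = (compute_cost(X, y, w_plus, b) - compute_cost(X, y, w_minus, b)) / (2 * eps)
    assert abs(numeric - dw[j]) < 1e-5
```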
Next, we write a function to update the weights and bias:
```python
def update_parameters(w, b, dw, db, learning_rate):
    w = w - learning_rate * dw
    b = b - learning_rate * db
    return w, b
```
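A quick numeric check of the update rule (the values here are illustrative):

```python
import numpy as np

def update_parameters(w, b, dw, db, learning_rate):
    w = w - learning_rate * dw
    b = b - learning_rate * db
    return w, b

w, b = np.array([0.5, -0.2]), 0.1
dw, db = np.array([0.1, -0.4]), 0.05
w, b = update_parameters(w, b, dw, db, learning_rate=0.1)
# w is now [0.49, -0.16] and b is 0.095: each parameter moved
# against its gradient, scaled by the learning rate.
```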
Now we combine all of these steps into a single training function, with a configurable iteration count and learning rate:
```python
def train_logistic_regression(X, y, num_iterations=2000, learning_rate=0.5):
    dim = X.shape[1]
    w, b = initialize_with_zeros(dim)
    for i in range(num_iterations):
        dw, db = compute_gradient(X, y, w, b)
        w, b = update_parameters(w, b, dw, db, learning_rate)
        if i % 100 == 0:
            cost = compute_cost(X, y, w, b)
            print(f"Cost after iteration {i}: {cost}")
    return w, b
```
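Putting it together, here is an end-to-end run on a tiny linearly separable dataset. The helper definitions are repeated so the sketch is self-contained; `initialize_with_zeros` is an assumed implementation, since it is not shown in this excerpt:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def initialize_with_zeros(dim):
    # Zero weights and bias; fine for logistic regression's convex loss.
    return np.zeros(dim), 0.0

def compute_gradient(X, y, w, b):
    m = len(y)
    dz = sigmoid(np.dot(X, w) + b) - y
    return (1 / m) * np.dot(X.T, dz), (1 / m) * np.sum(dz)

def train_logistic_regression(X, y, num_iterations=2000, learning_rate=0.5):
    w, b = initialize_with_zeros(X.shape[1])
    for _ in range(num_iterations):
        dw, db = compute_gradient(X, y, w, b)
        w, b = w - learning_rate * dw, b - learning_rate * db
    return w, b

# Class 1 roughly where x0 + x1 >= 1.5, class 0 below.
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.], [2., 1.], [0., 2.]])
y = np.array([0., 0., 0., 1., 1., 1.])
w, b = train_logistic_regression(X, y)
preds = (sigmoid(np.dot(X, w) + b) >= 0.5).astype(float)
# On this separable toy set the trained model recovers all six labels.
```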