Batch gradient descent and stochastic gradient descent
I'm trying to implement logistic regression. I believe my batch gradient descent is correct, or at least it works well enough to give decent accuracy on the dataset I'm using. When I use stochastic gradient descent, however, I get really poor accuracy, so I'm not sure whether the problem is my learning rate, the number of epochs, or the code itself. I'm also wondering how I would add regularization to both of these: do I just add a variable lambda and multiply it by the learning rate, or is there more to it?
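Both functions assume numpy and a standard logistic sigmoid helper, roughly like this (shown only so the snippets are self-contained):

import numpy as np

def sigmoid(z):
    # standard logistic function, applied element-wise
    return 1.0 / (1.0 + np.exp(-z))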
BGD:
def batch_gradient(df, weights, bias, lr, epochs):
    X = df.values
    y = X[:, :1]   # first column holds the labels
    X = X[:, 1:]   # remaining columns are the features
    length = X.shape[0]
    for i in range(epochs):
        # predicted probabilities for the whole batch
        output = sigmoid(np.dot(weights, X.T) + bias)
        # gradients averaged over the batch
        weights_tmp = (1 / length) * np.dot(X.T, (output - y.T).T)
        bias_tmp = (1 / length) * np.sum(output - y.T)
        # gradient step
        weights -= lr * weights_tmp.T
        bias -= lr * bias_tmp
    return weights, bias
SGD:
def stochastic_gradient(df, weights, bias, lr, epochs):
    x_matrix = df.values
    for i in range(epochs):
        np.random.shuffle(x_matrix)
        # draw a single random row (with replacement) for this update
        x_instance = x_matrix[np.random.choice(x_matrix.shape[0], 1, replace=True)]
        y = x_instance[:, :1]   # label of the sampled row
        output = sigmoid(np.dot(weights, x_instance[:, 1:].T) + bias)
        # gradient from this single example
        weights_tmp = lr * np.dot(x_instance[:, 1:].T, output - y)
        weights = weights - weights_tmp.T
        bias -= lr * (output - y)
    return weights, bias
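For the regularization question, this is roughly what I had in mind for the batch version: an L2 penalty whose gradient (lam / length) * weights is added to the weight gradient, with the bias left unpenalized (lam is just a name I'm using here for the regularization strength). I'm not sure whether this is the right way to fold it in:

def batch_gradient_l2(df, weights, bias, lr, epochs, lam):
    X = df.values
    y = X[:, :1]
    X = X[:, 1:]
    length = X.shape[0]
    for i in range(epochs):
        output = sigmoid(np.dot(weights, X.T) + bias)
        # same averaged data gradient as before, plus the L2 term (lam / length) * w
        weights_tmp = (1 / length) * np.dot(X.T, (output - y.T).T) + (lam / length) * weights.T
        bias_tmp = (1 / length) * np.sum(output - y.T)
        weights -= lr * weights_tmp.T
        bias -= lr * bias_tmp
    return weights, bias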
python numpy pandas
asked 2 days ago by jj2593 (new contributor)
edited yesterday by Jamal♦