Batch gradient descent and stochastic gradient descent

I'm trying to implement logistic regression. I believe my batch gradient descent is correct, or at least it works well enough to give decent accuracy on the dataset I'm using. When I use stochastic gradient descent, though, I get really poor accuracy, and I'm not sure whether the problem is the learning rate, the number of epochs, or a bug in my code. I'm also wondering how to add regularization to both versions: do I just add a lambda variable and multiply it by the learning rate, or is there more to it?
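Both functions below assume numpy has been imported as np and that a sigmoid helper is defined elsewhere. For completeness, the helper I have in mind is the plain logistic function, roughly:

    import numpy as np

    def sigmoid(z):
        # standard logistic function; assumed to match the helper used in the code below
        return 1 / (1 + np.exp(-z))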



BGD:



    def batch_gradient(df, weights, bias, lr, epochs):
        X = df.values
        y = X[:, :1]
        X = X[:, 1:]
        length = X.shape[0]
        for i in range(epochs):
            output = sigmoid(np.dot(weights, X.T) + bias)
            weights_tmp = (1 / length) * np.dot(X.T, (output - y.T).T)
            bias_tmp = (1 / length) * np.sum(output - y.T)

            weights -= lr * weights_tmp.T
            bias -= lr * bias_tmp

        return weights, bias
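On the regularization question: as far as I can tell, for L2 (ridge) regularization the penalty strength is a separate hyperparameter from the learning rate. You add lam / length * weights to the weight gradient (the bias is usually left unpenalized), and the learning rate then scales the whole gradient as before. A minimal sketch of how the batch update might change, assuming a new hypothetical parameter lam:

    def batch_gradient_l2(df, weights, bias, lr, epochs, lam):
        # Same as batch_gradient above, plus an L2 penalty of strength lam on the weights.
        X = df.values
        y = X[:, :1]
        X = X[:, 1:]
        length = X.shape[0]
        for i in range(epochs):
            output = sigmoid(np.dot(weights, X.T) + bias)
            # data-term gradient plus the gradient of (lam / (2 * length)) * ||weights||^2
            weights_tmp = (1 / length) * np.dot(X.T, (output - y.T).T) + (lam / length) * weights.T
            bias_tmp = (1 / length) * np.sum(output - y.T)  # bias is not regularized
            weights -= lr * weights_tmp.T
            bias -= lr * bias_tmp
        return weights, bias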


SGD:



    def stochastic_gradient(df, weights, bias, lr, epochs):
        x_matrix = df.values
        for i in range(epochs):
            np.random.shuffle(x_matrix)
            x_instance = x_matrix[np.random.choice(x_matrix.shape[0], 1, replace=True)]
            y = x_instance[:, :1]

            output = sigmoid(np.dot(weights, x_instance[:, 1:].T) + bias)
            weights_tmp = lr * np.dot(x_instance[:, 1:].T, (output - y))

            weights = weights - weights_tmp.T
            bias -= lr * (output - y)

        return weights, bias
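One thing that may explain the poor SGD accuracy (this is a guess on my part): each epoch above shuffles the data but then updates on only a single randomly chosen example, so the weights receive just one update per epoch. The formulation I've usually seen sweeps over every shuffled example once per epoch, updating after each one. A rough sketch of that variant, using the same shapes and sigmoid helper as above:

    def stochastic_gradient_sweep(df, weights, bias, lr, epochs):
        # One pass over all shuffled examples per epoch, with an update after each example.
        x_matrix = df.values.copy()  # copy so shuffling does not reorder the caller's data
        for i in range(epochs):
            np.random.shuffle(x_matrix)
            for row in x_matrix:
                x_i = row[1:].reshape(1, -1)  # single example as a (1, n_features) row
                y_i = row[0]                  # label is in the first column, as above
                output = sigmoid(np.dot(weights, x_i.T) + bias)  # shape (1, 1)
                error = output - y_i                             # shape (1, 1)
                weights = weights - lr * np.dot(x_i.T, error).T
                bias = bias - lr * error.item()                  # keep bias a scalar
        return weights, bias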









Tags: python numpy pandas





