  1. optimization - Batch gradient descent versus stochastic gradient ...

    Additionally, batch gradient descent, given an annealed learning rate, will eventually find the minimum located in its basin of attraction. Stochastic gradient descent (SGD) computes the gradient using a …
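
    A minimal numpy sketch of the contrast described here: batch gradient descent averages the gradient over the whole dataset, while SGD uses one randomly drawn example per step, with an annealed step size. The least-squares objective and toy data are illustrative assumptions, not taken from the thread:

        import numpy as np

        rng = np.random.default_rng(0)
        X = rng.normal(size=(100, 3))            # toy design matrix (assumption)
        y = X @ np.array([1.0, -2.0, 0.5])       # toy targets
        w_batch, w_sgd, lr = np.zeros(3), np.zeros(3), 0.1

        for t in range(2000):
            # Batch GD: average the gradient over ALL samples.
            w_batch -= lr * X.T @ (X @ w_batch - y) / len(y)
            # SGD: gradient from ONE random example, annealed step size.
            i = rng.integers(len(y))
            w_sgd -= (lr / (1 + 0.01 * t)) * (X[i] @ w_sgd - y[i]) * X[i]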

  2. machine learning - why gradient descent when we can solve linear ...

    Aug 12, 2013 · What is the benefit of using gradient descent in the linear regression setting? It looks like we can solve the problem (finding theta0-n that minimizes the cost function) with an analytical method, so …
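
    For context, the analytical method the asker means is the normal equation; a hedged numpy sketch comparing it with iterative gradient descent (the toy data and step size are assumptions):

        import numpy as np

        X = np.random.randn(50, 4)     # toy features (assumption)
        y = np.random.randn(50)

        # Closed-form least squares: solve (X^T X) theta = X^T y.
        theta_exact = np.linalg.solve(X.T @ X, X.T @ y)

        # Gradient descent reaches the same solution iteratively, which
        # scales better when X^T X is too expensive to form or factor.
        theta = np.zeros(4)
        for _ in range(5000):
            theta -= 0.01 * X.T @ (X @ theta - y) / len(y)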

  3. Gradient Descent with constraints (lagrange multipliers)

    Since the gradient descent algorithm is designed to find local minima, it fails to converge when you give it a problem with constraints. There are typically three solutions: Use a numerical method which is …
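
    One standard workaround the answer alludes to is projected gradient descent: take an unconstrained step, then project back onto the feasible set. A sketch for a nonnegativity constraint (the objective, data, and step size are assumptions for illustration):

        import numpy as np

        def project(w):
            # Euclidean projection onto the feasible set {w : w >= 0}.
            return np.maximum(w, 0.0)

        X = np.random.randn(30, 2)     # toy data (assumption)
        y = np.random.randn(30)
        w = np.zeros(2)
        for _ in range(1000):
            grad = X.T @ (X @ w - y) / len(y)
            w = project(w - 0.05 * grad)   # step, then re-impose the constraint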

  4. How to define the termination condition for gradient descent?

    Actually, I wanted to ask how I can define the terminating condition for gradient descent. Can I stop it based upon the number of iterations, i.e. considering parameter values for, say, 100 …
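
    The usual answers combine an iteration cap with a tolerance on the gradient norm; a sketch of both stopping rules (the tolerance values and the quadratic test function are assumed placeholders):

        import numpy as np

        def gd(grad_fn, w, lr=0.1, max_iters=10_000, tol=1e-6):
            for _ in range(max_iters):          # rule 1: iteration budget
                g = grad_fn(w)
                if np.linalg.norm(g) < tol:     # rule 2: gradient near zero
                    break
                w = w - lr * g
            return w

        # Usage on f(w) = ||w - 3||^2, whose gradient is 2 (w - 3).
        w_opt = gd(lambda w: 2 * (w - 3.0), np.zeros(2))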

  5. machine learning - Gradient descent convergence How to decide ...

    Jun 25, 2013 · I learnt gradient descent through online resources (namely the Machine Learning course on Coursera). However, the information provided only said to repeat gradient descent until it converges. …
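
    A convergence test often suggested for this is to stop once the decrease in the cost between successive iterations falls below a small tolerance; a sketch (the threshold and the toy cost are assumed for illustration):

        import numpy as np

        def gd_until_converged(cost_fn, grad_fn, w, lr=0.1, tol=1e-8):
            prev = cost_fn(w)
            while True:
                w = w - lr * grad_fn(w)
                cur = cost_fn(w)
                if prev - cur < tol:   # cost has stopped decreasing
                    return w
                prev = cur

        w = gd_until_converged(lambda w: float((w - 3.0) @ (w - 3.0)),
                               lambda w: 2.0 * (w - 3.0),
                               np.zeros(2))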

  6. gradient descent using python and numpy - Stack Overflow

    Jul 22, 2013 · Below you can find my implementation of gradient descent for a linear regression problem. First, you calculate the gradient as X.T * (X * w - y) / N and update your current theta with this …
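
    A self-contained version of the update the question describes, with synthetic data standing in for the asker's (the true weights and noise level are assumptions):

        import numpy as np

        rng = np.random.default_rng(1)
        N = 200
        X = np.c_[np.ones(N), rng.normal(size=N)]   # bias column + one feature
        y = X @ np.array([2.0, -1.0]) + 0.1 * rng.normal(size=N)

        w = np.zeros(2)
        lr = 0.1
        for _ in range(2000):
            grad = X.T @ (X @ w - y) / N   # the gradient quoted above
            w -= lr * grad                 # update the current theta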

  7. python - How to Implement Full Batch Gradient Descent with Nesterov ...

    Mar 4, 2024 · I'm working on a machine learning project in PyTorch where I need to optimize a model using the full batch gradient descent method. The key requirement is that the optimizer should use all …
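
    In PyTorch the usual route is torch.optim.SGD with nesterov=True, fed the entire dataset in every step instead of mini-batches; a sketch with an assumed toy model and data:

        import torch

        X = torch.randn(500, 10)        # the FULL dataset (assumption)
        y = torch.randn(500, 1)
        model = torch.nn.Linear(10, 1)
        opt = torch.optim.SGD(model.parameters(), lr=0.01,
                              momentum=0.9, nesterov=True)

        for epoch in range(100):
            opt.zero_grad()
            # One optimizer step per pass over ALL samples = full-batch GD.
            loss = torch.nn.functional.mse_loss(model(X), y)
            loss.backward()
            opt.step()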

  8. Can someone explain to me the difference between a cost function and ...

    So in gradient descent, you follow the negative of the gradient to the point where the cost is a minimum. If someone is talking about gradient descent in a machine learning context, the cost function is …
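
    In short: the cost function J(w) is the quantity being measured, and gradient descent is the rule w <- w - lr * J'(w) that moves toward its minimum. A minimal sketch keeping the two separate (the quadratic J is an assumed example):

        def J(w):             # the cost function: what we evaluate
            return (w - 5.0) ** 2

        def grad_J(w):        # its derivative
            return 2.0 * (w - 5.0)

        w, lr = 0.0, 0.1
        for _ in range(100):  # gradient descent: how we move downhill
            w -= lr * grad_J(w)
        # w is now ~5.0, the minimizer of J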

  9. Why is Newton's method not widely used in machine learning?

    Dec 29, 2016 · Gradient descent minimizes a function using knowledge of its first derivative. Newton's method, a root-finding algorithm, minimizes a function using knowledge of its second derivative. That …
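
    The difference in update rules, sketched in one dimension: gradient descent steps x <- x - lr * f'(x), while Newton's method steps x <- x - f'(x) / f''(x). The example function f(x) = x**4 - 3*x and the step size are assumptions:

        def f_prime(x):
            return 4 * x**3 - 3      # derivative of x**4 - 3*x

        def f_second(x):
            return 12 * x**2         # second derivative

        x_gd = x_newton = 1.0
        for _ in range(50):
            x_gd -= 0.05 * f_prime(x_gd)                         # first-order
            x_newton -= f_prime(x_newton) / f_second(x_newton)   # second-order
        # Both approach the stationary point where f'(x) = 0 (~0.908);
        # Newton gets there in far fewer steps, at the cost of computing f''.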

  10. Why use gradient descent with neural networks?

    Jul 8, 2017 · When training a neural network using the back-propagation algorithm, the gradient descent method is used to determine the weight updates. My question is: Rather than using gradient descent …
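
    For reference, the pairing the question describes: back-propagation computes the gradients, then gradient descent applies them as weight updates. A minimal PyTorch sketch (the toy network and data are assumptions):

        import torch

        net = torch.nn.Sequential(torch.nn.Linear(4, 8), torch.nn.ReLU(),
                                  torch.nn.Linear(8, 1))
        x, y = torch.randn(32, 4), torch.randn(32, 1)

        loss = torch.nn.functional.mse_loss(net(x), y)
        loss.backward()                 # back-propagation: fills p.grad
        with torch.no_grad():
            for p in net.parameters():
                p -= 0.01 * p.grad      # gradient descent: the weight update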