The Conjugate Gradient Method is an optimization method based on the second-order Taylor approximation of a (cost) function depending on N parameters. The method must be able to evaluate the gradient of the function at any point in the N-dimensional parameter space. The user provides an initial parameter guess, and the method iteratively determines better approximations of the desired minimum. It does so by reducing the multi-dimensional problem to a sequence of line minimizations. Each line minimization is carried out by Brent's method, which finds the minimum of the (cost) function along a line whose starting point is the current parameters and whose direction is the so-called search direction. The search directions are determined either by the Fletcher-Reeves method (method 1) or the Polak-Ribière method (method 2). For these methods, each new search direction is constructed to be conjugate, with respect to the Hessian of the local quadratic model, to the previous search directions. It is also possible to choose the search direction equal to the negative of the gradient, which is known as Steepest Descent (method 3). In this case, exact line minimization makes each new search direction perpendicular to the gradient at the previous parameters. The iteration is performed until one of the stopping criteria is satisfied.
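To make the iteration concrete, the following is a minimal sketch of the scheme in Python. It assumes the caller supplies the cost function `f`, its gradient `grad_f`, and an initial guess `x0`; the name `cg_minimize` and the `method` numbering mirror the description above but are illustrative, not part of any specific library. The line minimization delegates to Brent's method via `scipy.optimize.minimize_scalar`, and the only stopping criteria shown are a gradient-norm tolerance and an iteration limit.

```python
# Sketch of nonlinear conjugate gradient, not a definitive implementation.
# Assumed interface: f(x) -> float, grad_f(x) -> ndarray, x0 an initial guess.
import numpy as np
from scipy.optimize import minimize_scalar  # Brent's method for line minimization

def cg_minimize(f, grad_f, x0, method=1, tol=1e-8, max_iter=1000):
    """method: 1 = Fletcher-Reeves, 2 = Polak-Ribiere, 3 = Steepest Descent."""
    x = np.asarray(x0, dtype=float)
    g = grad_f(x)
    d = -g                                        # first direction: negative gradient
    for _ in range(max_iter):
        # Minimize f along the current search direction with Brent's method.
        phi = lambda alpha: f(x + alpha * d)
        alpha = minimize_scalar(phi, method='brent').x
        x_new = x + alpha * d
        g_new = grad_f(x_new)
        if np.linalg.norm(g_new) < tol:           # stopping criterion on the gradient
            return x_new
        if method == 1:                           # Fletcher-Reeves update
            beta = (g_new @ g_new) / (g @ g)
        elif method == 2:                         # Polak-Ribiere update
            beta = (g_new @ (g_new - g)) / (g @ g)
        else:                                     # Steepest Descent: no memory of d
            beta = 0.0
        d = -g_new + beta * d                     # new (conjugate) search direction
        x, g = x_new, g_new
    return x

# Example: minimize the Rosenbrock function from a standard starting point.
f = lambda x: (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2
grad_f = lambda x: np.array([-2 * (1 - x[0]) - 400 * x[0] * (x[1] - x[0]**2),
                             200 * (x[1] - x[0]**2)])
print(cg_minimize(f, grad_f, [-1.2, 1.0], method=2))  # should approach [1, 1]
```

Setting `beta = 0` reduces the update to Steepest Descent, which shows why methods 1-3 fit in a single loop: they differ only in how much of the previous search direction is retained.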