Collinear gradients method (ColGM)[1] is an iterative method of directional search for a local extremum of a smooth multivariate function. It moves towards the extremum along a vector chosen so that the gradients at the current point and at an auxiliary point are collinear vectors. It is a first-order method (it uses only first derivatives) with a quadratic convergence rate. It can be applied to functions of high dimension with several local extrema. ColGM can be attributed to the truncated Newton family of methods.
For a smooth function, in a relatively large vicinity of the current point there exists an auxiliary point at which the gradient is collinear with the gradient at the current point. The direction from the current point through this auxiliary point is the direction to the extremum. This vector points to a maximum or a minimum depending on the position of the auxiliary point: it can lie in front of or behind the current point relative to the direction to the extremum (see the picture). Below, minimization is considered.
Angle brackets denote the inner product in Euclidean space. If the function is convex in the vicinity of the current point, the sign of this inner product distinguishes a front auxiliary point from a back one. In either case, step (1) is applied.
For a strictly convex quadratic function the ColGM step coincides with the Newton step

x_{k+1} = x_k − H⁻¹ ∇f(x_k),

where H is the Hessian matrix (Newton's method is a second-order method with a quadratic convergence rate). Such steps ensure the quadratic convergence rate of ColGM.
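The equivalence of the step to a Newton step on a quadratic can be checked numerically. The sketch below is illustrative (the matrix, vector, and starting point are invented for the demonstration, not taken from the source): on a strictly convex quadratic, one Newton step lands exactly on the minimizer.

```python
import numpy as np

# For a strictly convex quadratic f(x) = 0.5 x^T A x + b^T x, the Hessian is A
# and the Newton step x_+ = x - H^{-1} grad f(x) reaches the minimizer exactly.
# This is the step ColGM reproduces on quadratics; A, b, x0 are illustrative.
A = np.array([[4.0, 1.0],
              [1.0, 3.0]])          # symmetric positive definite
b = np.array([1.0, 2.0])

def grad(x):
    return A @ x + b               # gradient of the quadratic

x0 = np.array([10.0, -7.0])
x1 = x0 - np.linalg.solve(A, grad(x0))   # one Newton step

print(np.linalg.norm(grad(x1)))    # ~0: x1 is the exact minimizer
```

Since the gradient of a quadratic is linear, the residual after one such step is zero up to floating-point roundoff.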
In general, if the function has variable convexity and saddle points are possible, then the minimization direction should be checked via the angle between the search vector and the antigradient. If this angle shows that the vector points towards a maximum, then in (1) the vector should be taken with the opposite sign.
Collinearity of the gradients is estimated by the residual of their directions, which takes the form of a system of equations for finding a root:
(3)
where the sign is chosen so that the collinearity of co-directional and oppositely directed gradients is evaluated equally.
System (3) is solved iteratively (sub-iterations) by the conjugate gradient method, under the assumption that the system is linear within the chosen vicinity:
(4)
where the product of the Hessian matrix by a vector is found by numerical differentiation of the gradient:
(5)   H v ≈ (∇f(x + δ v) − ∇f(x)) / δ,

where v is the vector being multiplied and δ is a small positive number.
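The gradient-only Hessian-vector product of (5) can be sketched as follows; the function name, the test quadratic, and the default value of δ are illustrative assumptions, not from the source.

```python
import numpy as np

def hessian_vector_product(grad, x, v, delta=1e-6):
    """Approximate H(x) @ v using only gradient evaluations, via the
    forward difference (grad(x + delta*v) - grad(x)) / delta.
    Names and the choice of delta are illustrative."""
    return (grad(x + delta * v) - grad(x)) / delta

# Check on a quadratic, where H(x) @ v = A @ v is known exactly.
A = np.array([[2.0, 0.5],
              [0.5, 1.0]])
grad = lambda x: A @ x             # gradient of f(x) = 0.5 x^T A x
x = np.array([1.0, -2.0])
v = np.array([0.3, 0.7])
approx = hessian_vector_product(grad, x, v)
print(np.allclose(approx, A @ v, atol=1e-4))  # True
```

For a quadratic the gradient is linear, so the forward difference is exact up to roundoff; for general functions the error is of order δ, which is why δ must be small yet well above the machine epsilon.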
The initial approximation is set at 45° to all coordinate axes, with length equal to the radius of the vicinity:
(6)
The initial radius of the vicinity of the current point is set and then modified:
(7)
Here, the small positive number involved must be noticeably larger than the machine epsilon.
Sub-iterations terminate when at least one of the conditions is met:
The parameters are tuning constants: one pair of values is recommended for functions without saddle points, and another for "bypassing" saddle points.
The described algorithm allows us to find approximately collinear gradients from the system of equations (3). The resulting direction for ColGM step (1) is an approximate Newton direction (hence the relation to the truncated Newton method).
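A generic truncated Newton step illustrates the family ColGM belongs to: the Newton system is solved only approximately by conjugate gradients, with every Hessian-vector product taken from gradient differences. This is a sketch of the general idea, not ColGM itself (ColGM obtains its direction from the collinearity system (3) instead); all names, tolerances, and the test function are assumptions.

```python
import numpy as np

def truncated_newton_step(grad, x, cg_iters=10, delta=1e-6, tol=1e-8):
    """One truncated Newton step: solve H(x) p = -grad(x) approximately by
    conjugate gradients, with H(x) @ v from finite gradient differences.
    A generic sketch of the truncated Newton idea, not ColGM itself."""
    g = grad(x)
    hvp = lambda v: (grad(x + delta * v) - grad(x)) / delta
    p = np.zeros_like(x)
    r = -g                       # residual of H p = -g at p = 0
    d = r.copy()
    for _ in range(cg_iters):
        if np.linalg.norm(r) < tol:
            break                # system solved accurately enough
        Hd = hvp(d)
        alpha = (r @ r) / (d @ Hd)
        p = p + alpha * d
        r_new = r - alpha * Hd
        beta = (r_new @ r_new) / (r @ r)
        d = r_new + beta * d
        r = r_new
    return x + p

# On a convex quadratic the approximate Newton step nearly zeroes the gradient.
A = np.array([[3.0, 1.0],
              [1.0, 2.0]])
grad = lambda x: A @ x
x1 = truncated_newton_step(grad, np.array([5.0, -4.0]))
print(np.linalg.norm(grad(x1)) < 1e-3)  # True
```

Capping the inner conjugate-gradient loop ("truncation") is what keeps the cost per iteration close to a first-order method while retaining near-Newton directions.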
In the drawing, three black starting points are set. The gray dots are sub-iterations (shown as a dotted line, exaggerated for demonstration). For all starting points, one iteration and no more than two sub-iterations were required.
In the test with the given starting point, ColGM reached the extremum with an accuracy of 1% in 3 iterations and 754 evaluations of the function and its gradient. Other first-order methods: quasi-Newton BFGS (which works with matrices) required 66 iterations and 788 evaluations; conjugate gradients (Fletcher–Reeves) required 274 iterations and 2236 evaluations; Newton's finite-difference method required 1 iteration and 1001 evaluations. The second-order Newton's method required 1 iteration.
As the dimension increases, computational errors in the implementation of the collinearity condition (3) may grow markedly. Because of this, ColGM required more than one iteration in the considered example, in contrast to Newton's method.
The parameters are the same as above. The descent trajectory of ColGM coincides completely with that of Newton's method. In the drawing, the blue and red starting points are shown, and unit vectors of the gradients are drawn at each point.
ColGM is very economical in terms of the number of evaluations of the function and its gradient. Due to formula (2), it does not require expensive computation of the step multiplier by line search (for example, golden-section search).