
Overview of Function Minimization

The theoretical underpinnings of all refinement methods used in the latter stages of high-resolution refinement are the same. The analysis begins with a Taylor series expansion of the function being minimized about the current guess for the values of the parameters of the model ($\mathbf{x}_0$). The Taylor series expansion is

\begin{equation}
f(\mathbf{x}) = f(\mathbf{x}_0) + \nabla f(\mathbf{x}_0)^{\mathrm{T}}\,\Delta\mathbf{x} + \tfrac{1}{2}\,\Delta\mathbf{x}^{\mathrm{T}}\,\mathbf{N}\,\Delta\mathbf{x} + \text{higher order terms},
\end{equation}

where $\nabla f(\mathbf{x}_0)$ is the gradient of the function, $\mathbf{N}$ is the second-derivative or normal matrix, and $\Delta\mathbf{x}$ is the shift vector that takes $\mathbf{x}_0$ to $\mathbf{x}$. The higher-order terms are always assumed to be zero.
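As an illustration (not from the paper), the following Python sketch checks this truncated expansion numerically for an arbitrary smooth test function chosen here for the example: the error of the second-order approximation is governed by the neglected higher-order terms, so it shrinks rapidly as the shift vector gets smaller.

```python
import numpy as np

# Hypothetical test function for illustration only (not from the paper):
# f(x) = x0^4 + x1^4 + x0*x1
def f(x):
    return x[0]**4 + x[1]**4 + x[0] * x[1]

def grad(x):
    # Analytic gradient of f
    return np.array([4 * x[0]**3 + x[1], 4 * x[1]**3 + x[0]])

def normal_matrix(x):
    # Analytic second-derivative (Hessian) matrix of f
    return np.array([[12 * x[0]**2, 1.0],
                     [1.0, 12 * x[1]**2]])

x0 = np.array([1.0, 2.0])            # current guess
dx = np.array([0.01, -0.02])         # a small shift vector

# Second-order Taylor approximation of f(x0 + dx) about x0
taylor = f(x0) + grad(x0) @ dx + 0.5 * dx @ normal_matrix(x0) @ dx
exact = f(x0 + dx)
print(abs(exact - taylor))           # small: only cubic and higher terms remain
```

Halving `dx` reduces the discrepancy by roughly a factor of eight, confirming that the leading neglected term is cubic in the shift.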

To find the value of $\mathbf{x}$ where $f(\mathbf{x})$ is minimal, we take the derivative of Equation (1) with respect to $\mathbf{x}$ and solve for $\Delta\mathbf{x}$ when $\nabla f(\mathbf{x})$ is $\mathbf{0}$. The result is

\begin{equation}
\nabla f(\mathbf{x}) = \nabla f(\mathbf{x}_0) + \mathbf{N}\,\Delta\mathbf{x} = \mathbf{0},
\end{equation}

\begin{equation}
\Delta\mathbf{x} = -\mathbf{N}^{-1}\,\nabla f(\mathbf{x}_0).
\end{equation}

$\mathbf{x}$ defines the minimum in all cases where the higher-order terms are, in fact, zero and $\mathbf{N}$ is positive definite, which is always the case in this application.
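A minimal sketch of the resulting shift computation, using a small positive-definite matrix $\mathbf{N}$ invented for this example: for a purely quadratic function the higher-order terms vanish identically, so a single shift $\Delta\mathbf{x} = -\mathbf{N}^{-1}\,\nabla f(\mathbf{x}_0)$ lands exactly on the minimum.

```python
import numpy as np

# Illustrative quadratic (values chosen for this sketch, not from the paper):
# f(x) = 1/2 x^T N x - b^T x, with N symmetric positive definite
N = np.array([[4.0, 1.0],
              [1.0, 3.0]])
b = np.array([1.0, 2.0])

def grad(x):
    # Gradient of the quadratic: N x - b
    return N @ x - b

x0 = np.array([5.0, -5.0])            # current guess, far from the minimum
dx = -np.linalg.solve(N, grad(x0))    # shift: dx = -N^{-1} grad f(x0)
x1 = x0 + dx

print(np.linalg.norm(grad(x1)))       # ~0: the gradient vanishes after one step
```

In practice one solves the linear system rather than inverting $\mathbf{N}$ explicitly, which is cheaper and numerically better behaved.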



Dale Edwin Tronrud
Thu Nov 20 10:28:11 PST 1997