The values of the NNBP energies mostly determine the location and the height of the force rips: if these values are modified, the shape of the FDC changes dramatically. So if we consider the other parameters (trap stiffness, elastic properties of the ssDNA) to be known and fixed, the FDC depends only on the 10 NNBP energies ε_i (i = 1, …, 10) and on the free energy of the end loop.
The question that we want to address here is how to modify the values of ε_i in order to obtain a theoretical FDC that is as close as possible to the experimentally measured one. To do so, we perform a least-squares fit: we define an error function equal to the sum of the squared differences between the measured and the theoretical values of the FDC,

E(ε_1, …, ε_10, ε_loop) = Σ_i (f_i^exp − f_i^theo)^2 ,    (5.2)

where the sum runs over the points of the measured FDC.
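As an illustration, the least-squares error function can be sketched as follows (the function name and the way the FDCs are sampled are assumptions for this sketch, not the actual analysis code):

```python
import numpy as np

def squared_error(fdc_exp, fdc_theo):
    """Sum of squared differences between the experimental and the
    theoretical FDC, both sampled at the same trap positions."""
    fdc_exp = np.asarray(fdc_exp, dtype=float)
    fdc_theo = np.asarray(fdc_theo, dtype=float)
    return float(np.sum((fdc_exp - fdc_theo) ** 2))

# Hypothetical example: two force traces (pN) differing at one point by 0.5.
err = squared_error([14.8, 15.1, 16.0], [14.8, 15.1, 15.5])
```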
The theoretical FDC is calculated at equilibrium, which amounts to assuming that the bandwidth of our measurements is 0 Hz. The experimental data are filtered at a bandwidth of 1 Hz. If the data are filtered at higher frequencies (>1 Hz), the hopping between intermediate states is resolved and the experimental FDC does not compare well with the theoretical FDC at equilibrium. If the data are filtered at lower frequencies (<1 Hz), the force rips are over-smoothed and the hopping transitions are averaged out. Appendix K explains why the (filtered) experimental FDC can be compared with the theoretical FDC.
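The 1 Hz filtering can be pictured as a simple moving average whose window matches the target bandwidth (a minimal sketch; the acquisition rate below is made up, and the actual filter used in the experiments is not specified here):

```python
import numpy as np

def boxcar_filter(force, sampling_rate_hz, bandwidth_hz=1.0):
    """Low-pass a force trace by averaging over a window of
    sampling_rate / bandwidth samples (a crude boxcar filter)."""
    window = max(1, int(sampling_rate_hz / bandwidth_hz))
    kernel = np.ones(window) / window
    return np.convolve(force, kernel, mode="same")

# A constant force trace is left unchanged away from its edges.
smoothed = boxcar_filter(np.ones(500), sampling_rate_hz=100.0)
```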
The best values of the NNBP energies can be inferred by minimizing the mean squared error (i.e., the error function) between the experimental and theoretical FDCs. The complex dependence of the FDC on the NNBP energies and the large parameter space spanned by the 10 NNBP energies make the minimization a difficult optimization problem. In general, we have to minimize Eq. 5.2, which is an 11-dimensional function whose error landscape is unknown. There are several techniques to minimize the error function, but not all of them are equally efficient.
We have tested two standard methods: 1) a steepest-descent approach and 2) a Monte Carlo approach. The first one involves the numerical calculation of the first derivatives of the error function, which makes the method very straightforward but computationally expensive. Indeed, each step of the steepest-descent method moves the parameters in the direction that most rapidly decreases the error function. However, each step requires roughly 10 times more computation time due to the numerical evaluation of the derivatives. In contrast, the Monte Carlo method performs a random search for optimal solutions in parameter space, which turns out to be faster, although less accurate than the steepest-descent method (see Fig. 5.7). The minimization was therefore finally performed with the Monte Carlo method.
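The Monte Carlo search can be sketched as a Metropolis-style random walk in parameter space (a toy version: the quadratic error function and the "true" energies below are invented for illustration and are not the fitted NNBP values, and the step size and temperature are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)

def monte_carlo_minimize(error_fn, x0, n_steps=20000, step=0.05, temperature=0.001):
    """Metropolis-style random search: perturb one randomly chosen parameter,
    accept the move if it lowers the error, or with Boltzmann probability
    exp(-ΔE/T) otherwise; keep track of the best point visited."""
    x = np.array(x0, dtype=float)
    e = error_fn(x)
    best_x, best_e = x.copy(), e
    for _ in range(n_steps):
        trial = x.copy()
        i = rng.integers(len(x))
        trial[i] += rng.normal(0.0, step)
        e_trial = error_fn(trial)
        if e_trial < e or rng.random() < np.exp(-(e_trial - e) / temperature):
            x, e = trial, e_trial
            if e < best_e:
                best_x, best_e = x.copy(), e
    return best_x, best_e

# Toy example: recover 11 made-up "true" parameters (10 energies + loop term)
# from a quadratic error landscape, starting from zero.
true_eps = np.linspace(-1.5, -1.0, 11)
fit, final_err = monte_carlo_minimize(lambda eps: np.sum((eps - true_eps) ** 2),
                                      np.zeros(11))
```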
