The UQ method automatically converges to a correct solution, i.e. an accurate final Probability Density Function (PDF). However, one may be interested in the convergence of the process itself, which has two aspects. The first aspect is the selection of important increment functions, which is performed by our prediction scheme. The second aspect is our adaptive scheme, which optimally samples each increment function. Our adaptive scheme distinguishes between the convergence of the final model (global convergence) and the convergence of each increment function.
The first step in the approximation process is the selection of important increment functions, i.e. domains that play a crucial role in the approximation. Selecting only the important increment functions yields an accurate and cheap approximation. Once the prediction scheme has converged, a linear model is fitted to approximate the remaining residual. In other words, it considers only the increment functions that were neglected (not used) in the final model. This linear model is called the Linear model of residuals.
Our prediction scheme deselects increment functions based on their statistical influence, so that only the important increment functions are selected, i.e. those whose influence is higher than the user-defined residual. This reduces the number of required samples and thus speeds up the interpolation process.
Our code is designed to consider only increment functions that could be useful in the process. However, it can happen that the sum of increment functions with extremely low individual influence leads, in total, to a significant change in the final PDF. For example, suppose the user-set residual is 0.05 and the increment functions dF1.2.3, dF1.2.4, dF1.3.4 and dF1.2.3.4 each have an influence of around 0.0001. This is significantly lower than the user-defined residual, and adding these increment functions would cost an additional 54 samples, which is a very high price. Therefore, these increment functions are neglected, and their influence is captured by our linear residual model. We thus let the user decide whether these increment functions are needed: to include them, simply reduce the global residual and re-run the learning process. Keep in mind that this will require a larger number of samples.
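The selection logic described above can be sketched as a simple threshold filter. This is an illustrative sketch, not the UptimAI implementation; the function name and the influence values are assumptions.

```python
# Hypothetical sketch: keep only increment functions whose statistical
# influence exceeds the user-defined residual; neglected functions are
# later accounted for by the Linear model of residuals.

def select_increment_functions(influences, user_residual):
    """Split increment functions into selected and neglected sets.

    influences: dict mapping increment-function name -> influence estimate
    user_residual: user-defined global residual threshold
    """
    selected = {k: v for k, v in influences.items() if v > user_residual}
    neglected = {k: v for k, v in influences.items() if v <= user_residual}
    return selected, neglected

# Example mirroring the text: residual 0.05, four weak higher-order terms.
influences = {"dF1": 0.9, "dF1.2": 0.2,
              "dF1.2.3": 0.0001, "dF1.2.4": 0.0001,
              "dF1.3.4": 0.0001, "dF1.2.3.4": 0.0001}
selected, neglected = select_increment_functions(influences, 0.05)
```

Lowering `user_residual` (re-running the learning process with a smaller global residual) moves the weak terms into the selected set, at the cost of extra samples.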
The linear model is selected based on the assumption that the least complex model capable of performing statistical propagation is a linear one. In other words, any complex function can be approximated by a linear model if its influence is negligible. In this case, the linear model serves as an estimator of the residual: adding the results of the linear model to the final model estimates the influence of the neglected increment functions. One can consider the function converged if the change in the final distribution is negligible. NOTE: For non-normal distributions, a large variance of the linear model can still lead to a very small change in the final model. Therefore, it is suggested to visually check the final distribution.
A very important property of the linear model is its conservativeness: in reality, the true influence of the neglected increment functions will be lower than predicted by the linear model. One can therefore expect only smaller changes in the final model than the linear model predicts. Another property of the linear model is that it naturally converges to zero as all increment functions are used. Therefore, the residual of the model diminishes as more increment functions are included in the final model.
This type of convergence is commonly known and understood. In each iteration, once samples have been added to the domain and the final model has been built, the final model is checked for accuracy, i.e. whether the residuals of the expected value and of the variance are below a given threshold. The residual of the expected value reads:
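Assuming the residual is the relative change of the expected value between successive iterations (a plausible form consistent with the surrounding notation, not confirmed by the source), it can be written as:

```latex
\varepsilon_{\mu} \;=\; \frac{\left|\, {}_{p}\mu \;-\; {}_{p-1}\mu \,\right|}{\left|\, {}_{p}\mu \,\right|}
```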
and the residual of the variance reads:
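Under the same assumption of a relative iteration-to-iteration difference, the variance residual would take the form:

```latex
\varepsilon_{\sigma^{2}} \;=\; \frac{\left|\, {}_{p}\sigma^{2} \;-\; {}_{p-1}\sigma^{2} \,\right|}{\left|\, {}_{p}\sigma^{2} \,\right|}
```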
where mu represents the expected value, sigma2 represents the variance, and the subscript in front of the letter, p, denotes the iteration. The model is considered converged once both residuals are below the given threshold, i.e. fulfill the prescribed criteria. NOTE: In order to maximize the gain of each sample, most of the samples are used in the final model. The infamous over-fitting phenomenon is handled inside the developed code, and the handling scheme is part of the internal know-how of the UptimAI company.
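The global check described above can be sketched as follows. This is an illustrative sketch under the assumption of relative iteration-to-iteration residuals; it is not UptimAI's implementation.

```python
# Hypothetical sketch: the global model is converged when the relative
# change of BOTH the expected value (mu) and the variance (sigma2)
# between iterations p-1 and p falls below the threshold.

def global_converged(mu_prev, mu_curr, var_prev, var_curr, tol):
    res_mu = abs(mu_curr - mu_prev) / abs(mu_curr)
    res_var = abs(var_curr - var_prev) / abs(var_curr)
    return res_mu < tol and res_var < tol
```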
This type of convergence represents the convergence of each individual increment function, dF. It consists of two convergence approaches: normal convergence and logic convergence. The normal convergence for the expected value of a given increment function is defined as follows:
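A form consistent with the later remark that normal convergence "takes into account the behaviour of the model" (the normalization by the full-model statistic is our assumption) would be:

```latex
\varepsilon_{\mu_{k}}^{\mathrm{normal}} \;=\; \frac{\left|\, {}_{p}\mu_{k} \;-\; {}_{p-1}\mu_{k} \,\right|}{\left|\, {}_{p}\mu \,\right|}
```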
and the normal convergence for the variance of given increment function reads:
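Analogously for the variance, again normalized by the full-model statistic (an assumption consistent with the surrounding text):

```latex
\varepsilon_{\sigma^{2}_{k}}^{\mathrm{normal}} \;=\; \frac{\left|\, {}_{p}\sigma^{2}_{k} \;-\; {}_{p-1}\sigma^{2}_{k} \,\right|}{\left|\, {}_{p}\sigma^{2} \,\right|}
```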
where muk represents the expected value of the k-th increment function, sigma2k represents the variance of the k-th increment function, and the subscript in front of the letter, p, denotes the iteration.
The logic convergence for the increment function is defined in the following way:
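Since logic convergence "only observes the given increment function", a plausible form (the denominator is our assumption) normalizes by the increment function's own statistic:

```latex
\varepsilon_{\mu_{k}}^{\mathrm{logic}} \;=\; \frac{\left|\, {}_{p}\mu_{k} \;-\; {}_{p-1}\mu_{k} \,\right|}{\left|\, {}_{p}\mu_{k} \,\right|}
```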
and the logic convergence of the variance for a given increment function reads:
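Analogously for the variance, normalized by the increment function's own variance (again an assumption consistent with the text):

```latex
\varepsilon_{\sigma^{2}_{k}}^{\mathrm{logic}} \;=\; \frac{\left|\, {}_{p}\sigma^{2}_{k} \;-\; {}_{p-1}\sigma^{2}_{k} \,\right|}{\left|\, {}_{p}\sigma^{2}_{k} \,\right|}
```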
Notice the difference in the denominator between the normal and logic convergence. An increment function is considered converged if either the logic or the normal convergence criterion is reached; however, both residuals, i.e. for the expected value and for the variance, have to be below the given threshold.

The difference between normal and logic convergence is that normal convergence takes into account the behaviour of the whole model, while logic convergence observes only the given increment function. This helps to optimize the number of samples used for model creation and makes the process robust. For example, consider a problem where the final model is diverging. In that case, adding samples to the converging increment functions is pointless; the logic convergence process ensures that samples are not added to an already converged increment function (null improvement for the final model), and the algorithm focuses on the diverging increment functions. On the other hand, the normal convergence process ensures that samples are not added to non-influential increment functions. Consider a converging problem in which a given increment function has very low influence: adding samples to it would spend samples with little effect on accuracy, so the increment function does not need to be sampled further. The combination of these two convergence processes yields the optimal number of samples for the desired accuracy.
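The interplay of the two criteria can be sketched as below. This is an illustrative sketch under the assumption of relative-difference residuals where only the denominator distinguishes the two criteria; it is not the actual implementation.

```python
# Hypothetical sketch of the per-increment convergence check: the k-th
# increment function is converged if EITHER criterion holds, with both
# the mean and the variance residual below the threshold.

def increment_converged(mu_k_prev, mu_k_curr, mu_model,
                        var_k_prev, var_k_curr, var_model, tol):
    # normal convergence: change measured relative to the final model
    normal = (abs(mu_k_curr - mu_k_prev) / abs(mu_model) < tol and
              abs(var_k_curr - var_k_prev) / abs(var_model) < tol)
    # logic convergence: change measured relative to the increment itself
    logic = (abs(mu_k_curr - mu_k_prev) / abs(mu_k_curr) < tol and
             abs(var_k_curr - var_k_prev) / abs(var_k_curr) < tol)
    return normal or logic
```

A low-influence increment whose absolute change is tiny compared to the full model passes the normal criterion even though its relative (logic) change is large, so it is not sampled further.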
Fig. 1: The final model including Linear model of residuals
Fig. 2: The Linear model of residuals
The table under sets -> options -> Prediction refers to the influence of the residual model. One can see the influence of the residual model when it is added to the final model (see Fig. 1). To see both models added together, tick both boxes in options: Residual model and Original model. To see the influence of the Linear model of residuals alone (see Fig. 2), tick only the Residual model box. The graph is fully adjustable with options, where:
Fig. 3: Post process (Prediction model) – options
The table under sets -> options refers to the convergence process for the global model and for each increment function. Each increment function has its own convergence process, which can be displayed by selecting the desired increment function. Once the increment function is selected, the list of convergence schemes appears:
Fig. 4: Post process
The graph is fully adjustable with options, where:
Fig. 5: Post process – options
To store the selected results, select Save in File (upper left corner). This opens a file browser starting in the project folder. The code automatically selects the format in which the visualized results are stored.