2.5.11.1 Performance Analysis: Bad Data
All estimators presented in this chapter (with the exception of the WLS) are robust when dealing with bad data, i.e. a gross error does not significantly influence the estimated state.
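To make this robustness notion concrete, the following minimal sketch (not taken from the chapter) contrasts an ordinary least‐squares fit with a least‐absolute‐value (LAV) fit on a toy linear regression containing a single gross error. The data, the model, and the use of NumPy/SciPy are illustrative assumptions; they are not the power‐system measurement model of this case study.

```python
# Toy illustration: least squares (WLS with unit weights) vs. least
# absolute value (LAV) on a linear model y = a + b*t with one gross error.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 20)
y = 1.0 + 2.0 * t + rng.normal(scale=0.01, size=t.size)
y[5] += 1.0                                  # single gross measurement error

A = np.column_stack([np.ones_like(t), t])    # design matrix
m, n = A.shape

# Least squares estimate: minimize ||A x - y||_2
x_ls, *_ = np.linalg.lstsq(A, y, rcond=None)

# LAV estimate: minimize sum |A x - y| via the standard LP reformulation
#   min 1'u   subject to   -u <= A x - y <= u
c = np.concatenate([np.zeros(n), np.ones(m)])
A_ub = np.block([[A, -np.eye(m)], [-A, -np.eye(m)]])
b_ub = np.concatenate([y, -y])
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(None, None)] * (n + m))
x_lav = res.x[:n]

print("true parameters:", [1.0, 2.0])
print("least squares  :", np.round(x_ls, 3))   # noticeably pulled by the bad point
print("LAV            :", np.round(x_lav, 3))  # essentially unaffected
```

The LAV fit essentially ignores the corrupted point, which is the qualitative behavior the robust estimators of this chapter exhibit on the full state estimation problem.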
If the WLS estimator is used, then bad measurement detection and identification procedures must be employed after the estimation process in order to eliminate any bad data that may populate the measurement set. These detection and identification procedures are generally based on the χ² test and on normalized residuals, respectively (see [18]).
According to the traditional bad measurement detection procedure, if the objective function value at the estimated state is smaller than the chi‐square critical value χ²_{γ, 1−α} (where γ is the number of degrees of freedom and 1 − α is the confidence level), the measurement set is assumed to be error‐free. If not, a procedure is carried out to identify the erroneous measurements.
Therefore, if a bad measurement corrupts the measurement set but its magnitude is not large enough for the objective function value to reach χ²_{γ, 1−α}, the bad data detection procedure concludes that no error is present, and the identification procedure is not performed. In this case, the measurement set is corrupted by a bad measurement that the WLS estimation procedure cannot detect.
In the study below, two bad measurements are present in each measurement scenario. The magnitude of the corresponding errors is sufficiently small to keep the objective function value below χ²_{γ, 1−α}, i.e. the bad data detection procedure does not flag any error, and the bad measurements are therefore not removed from the measurement set.
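As a sketch of the detection test just described, the threshold χ²_{γ, 1−α} can be obtained from the inverse chi‐square distribution function, for instance via SciPy; the function and variable names below are illustrative and not taken from the chapter.

```python
# Sketch of the conventional chi-square bad data detection test.
from scipy.stats import chi2

def chi_square_detection(residuals, sigmas, n_states, alpha=0.05):
    """Return True if bad data is suspected at confidence level 1 - alpha.

    residuals : measurement residuals z - h(x_hat)
    sigmas    : measurement standard deviations
    n_states  : number of estimated state variables
    """
    # Objective function value at the estimated state
    J = sum((r / s) ** 2 for r, s in zip(residuals, sigmas))
    gamma = len(residuals) - n_states           # degrees of freedom
    threshold = chi2.ppf(1.0 - alpha, gamma)    # chi^2 critical value
    return J >= threshold                       # True -> run identification

# A gross error small enough to keep J below the threshold passes
# undetected, which is precisely the situation constructed in the study
# below.
```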
One hundred measurement scenarios have been considered, and Tables 2.16 and 2.17 provide the results concerning estimation accuracy and computational performance, respectively. Note that the format of these tables is similar to that of Tables 2.14 and 2.15.
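The timing statistics reported in Table 2.17 (minimum, mean, maximum, and standard deviation of the solution time over the 100 scenarios) can be gathered as in the following sketch, where `solve_state_estimation` is a hypothetical placeholder for any of the estimators compared in this chapter.

```python
# Illustrative aggregation of per-scenario solution times into the
# statistics reported in Table 2.17.
import time
import numpy as np

def timing_statistics(scenarios, solve_state_estimation):
    times = []
    for measurements in scenarios:
        start = time.perf_counter()
        solve_state_estimation(measurements)          # hypothetical solver call
        times.append(time.perf_counter() - start)
    times = np.asarray(times)
    return {
        "minimum (s)": times.min(),
        "mean (s)": times.mean(),
        "maximum (s)": times.max(),
        "std. dev. (s)": times.std(ddof=1),           # sample standard deviation
    }
```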
TABLE 2.16 Case study: estimation accuracy results with bad measurements.
| Method | (p.u.) | (p.u.) | (rad) | (rad) |
|---|---|---|---|---|
| WLS | 0.0019 | 0.0014 | 0.0017 | 0.0014 |
| LAV | 0.0019 | 0.0016 | 0.0019 | 0.0016 |
| QC | 0.0016 | 0.0012 | 0.0017 | 0.0014 |
| QL | 0.0016 | 0.0012 | 0.0017 | 0.0014 |
| LMS | 0.0103 | 0.0106 | 0.0053 | 0.0047 |
| LTS | 0.0050 | 0.0048 | 0.0025 | 0.0022 |
| LMR | 0.0018 | 0.0013 | 0.0017 | 0.0014 |
TABLE 2.17 Case study: computational performance results with bad measurements.
| Method | Minimum (s) | Mean (s) | Maximum (s) | Std. dev. (s) |
|---|---|---|---|---|
| WLS | 0.94 | 1.70 | 2.28 | 0.18 |
| LAV | 0.59 | 0.94 | 1.29 | 0.12 |
| QC | 0.22 | 0.31 | 0.45 | 0.05 |
| QL | 1.00 | 1.74 | 2.71 | 0.27 |
| LMS | 3.80 | 8.21 | 12.64 | 1.42 |
| LTS | 1.28 | 2.36 | 3.96 | 0.36 |
| LMR | 0.94 | 2.73 | 34.84 | 4.99 |
The following observations can be made about Tables 2.16 and 2.17:
1. As expected, the WLS approach does not provide the most accurate results. The estimates computed using the QC and QL techniques are more precise than those obtained with the conventional WLS method.
2. The QC and LAV approaches are the most efficient ones from the computational perspective, whereas the computational burden of the LMS technique is higher than that of any other procedure.