### linear-algebra

#### Matlab: How do I ensure that the covariance matrix is positive definite when using a Kalman filter

For complex-valued data, I am finding it hard to ensure that the covariance matrix is positive definite. Taking an example:

```matlab
P =
  10.0000 +10.0000i        0                  0
        0          10.0000 +10.0000i         0
        0                  0          10.0000 +10.0000i
```

I can check P for positive definiteness using the Cholesky factorization or the eigenvalues, as shown below.

(A)

```matlab
[R1,p1] = chol(P)
R1 =
     []
p1 =
     1
```

Since p1 > 0, P is not positive definite.

(B) Using eigenvalues: if the eigenvalues are positive, then P should be positive definite.

```matlab
[r p] = eig(P)
r =
     1     0     0
     0     1     0
     0     0     1
p =
  10.0000 +10.0000i        0                  0
        0          10.0000 +10.0000i         0
        0                  0          10.0000 +10.0000i
```

However, svd(P) gives all positive singular values!

Where am I going wrong, and what should I do to prevent the P matrix from becoming non-positive definite? At run time and in real-world scenarios it is very hard to ensure the positive definiteness of P. Is there a hack or a way out? Thank you very much.
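A small NumPy sketch (illustrative; the question itself is in MATLAB) of why the eigenvalue test and svd disagree for this P: the eigenvalues come out complex, while singular values are non-negative real numbers by definition, so svd is not a positive-definiteness test.

```python
import numpy as np

# The matrix from the question: (10+10i) on the diagonal.
P = (10.0 + 10.0j) * np.eye(3)

# The eigenvalues are complex (10+10i), so the test
# "all eigenvalues > 0" is not even well defined for this P.
print(np.linalg.eigvals(P))

# Singular values are always non-negative real numbers: here they are
# |10+10i| = sqrt(200) ~ 14.142. This is why svd(P) "looks positive"
# for any matrix, positive definite or not.
print(np.linalg.svd(P, compute_uv=False))
```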
**Checking positive definiteness of complex matrices:**

First of all, an answer to this question at math.stackexchange says that:

> A necessary and sufficient condition for a complex matrix A to be positive definite is that the Hermitian part
>
> A_H = 1/2·(A + A^H)
>
> is positive definite, where A^H denotes the conjugate transpose.

**Why the P matrix becomes non-positive definite:**

Then, on the question of why P loses its "positive definiteness", the usual culprit is floating-point representation/arithmetic. The standard Kalman filter algorithm can show numerical stability problems in some sensitive operations, such as taking the inverse of the matrix S when calculating the Kalman gain, or when applying optimizations such as the simplified expression for the error covariance in the update step, P+ = (I - K·H)·P-.

There are other sources of error, such as a buggy implementation or wrong data (e.g. defining process/measurement covariance matrices that are not positive definite themselves).

**How to avoid the problem:**

I will focus on the first source of error: numerical stability. There are a number of alternatives commonly used to make Kalman filters more stable and avoid the covariance matrix problem:

- Correct small errors in P on each iteration (formally not correct, but it works nicely). I have successfully used the simplistic P = 1/2·(P + P') with real matrices in the past, but there are more elaborate schemes.
- Use a square-root Kalman filter, or any other formulation that improves stability. Since they keep and update a kind of square root of P, the asymmetries are no longer a problem, and they reduce the positive-definiteness problem in general.
- Go to 64-bit floating-point arithmetic, which is much more stable than single precision.
- Improve the numerical conditioning by scaling some variables. Large magnitude differences between numbers worsen the inaccuracies, so if P mixes diagonal values on the order of 10^+3 with other values on the order of 10^-6, change the unit of the latter variable to micro-whatever to reduce the gap.

Using one or a combination of these techniques could work. However, I have not worked with Kalman filters in the complex domain, so let me know how it works for your case.
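The Hermitian-part criterion quoted above can be sketched in NumPy (illustrative; the original question uses MATLAB):

```python
import numpy as np

# The matrix from the question: (10+10i) on the diagonal.
P = (10.0 + 10.0j) * np.eye(3)

# Criterion: a complex A is positive definite (in the sense
# Re(x^H A x) > 0 for all x != 0) iff its Hermitian part
# A_H = (A + A^H)/2 is positive definite.
P_H = 0.5 * (P + P.conj().T)

# Here the Hermitian part is simply 10*I, so its (real) eigenvalues
# are all positive and the criterion is satisfied.
print(np.linalg.eigvalsh(P_H))  # [10. 10. 10.]
```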
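The first bullet, re-symmetrizing P on each iteration, is a one-liner. A minimal NumPy sketch with a hypothetical covariance that has drifted slightly asymmetric through round-off:

```python
import numpy as np

# Hypothetical real covariance with a tiny asymmetry, of the kind
# that accumulates through floating-point round-off in the updates.
P = np.array([[2.0, 0.5 + 1e-9],
              [0.5 - 1e-9, 1.0]])
print(np.array_equal(P, P.T))  # False: P is not exactly symmetric

# Enforce symmetry after each update; for complex-valued data use the
# conjugate transpose P.conj().T instead of P.T.
P = 0.5 * (P + P.T)
print(np.array_equal(P, P.T))  # True
```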
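The last bullet, rescaling a state variable to improve conditioning, amounts to a change of units x' = S·x, under which the covariance transforms as S·P·Sᵀ. A sketch with the hypothetical 10^+3 vs 10^-6 diagonal from the answer:

```python
import numpy as np

# Hypothetical covariance whose diagonal mixes very different scales.
P = np.diag([1.0e3, 1.0e-6])
print(np.linalg.cond(P))         # ~1e9: poorly conditioned

# Change the units of the second state variable (e.g. metres ->
# millimetres): x' = S x, so the covariance becomes S P S^T.
S = np.diag([1.0, 1.0e3])
P_scaled = S @ P @ S.T
print(np.linalg.cond(P_scaled))  # ~1e3: much better conditioned
```

Remember to apply the same scaling consistently to the state, the measurement model H, and the noise covariances.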
