Just finished a nice little MATLAB script that gives me the dominant eigenvalue (with an error of at most 10^-7) and eigenvector of a 3x3 matrix, using the power method.
However, it's giving me the correct eigenvalue, but the negative of the eigenvector (-b instead of b).
Here's my script
a=[9 5 1;8 -5 9;-9 -2 -2];
b=[1;1;1];
bnew=powermethod(a,b)
function bnew=powermethod(a,b)
bnew=[0;0;0]; c=0; itterations = 0;
while c==0
itterations = itterations + 1;
bnew=a*b/norm(a*b);
d=a*b; e=a*bnew;error = abs( (d(1)/b(1)) - (e(1)/bnew(1)) );
if error<10^-7
c=1;
end
b=bnew;
end
fprintf('\n Script took %d itterations to converge.\n', itterations);
fprintf('\n Eigenvalue is %d.\n', e(1)/bnew(1));
end
I know this is pushing it a bit, but thought someone might be able to see what's going on. Cheers.
iterations only has one t.
Good point, well made. Spelling is now fixed.
The maths is beyond me, but I'm pretty sure that the spelling of your variable names isn't the problem. (-:
I was pretty sure too, but gave it a go anyway.. 😉
So with the eigenvalue/vector equation, v being the eigenvalue, b being the eigenvector
Ab=vb
If you swap b and -b you get
A(-b)=v(-b)
-(Ab)=-(vb)
Ab=vb
So does that mean -b is still an eigenvector, of the same eigenvalue? Or am I being stupid?
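For what it's worth, a quick numerical check seems to back that up (just a sketch, using the matrix from the OP's script and an eigenpair pulled out of eig):
[code]
a = [9 5 1; 8 -5 9; -9 -2 -2];   % matrix from the OP's script
[V, D] = eig(a);                 % columns of V are eigenvectors, diag(D) the eigenvalues
b = V(:,1);  v = D(1,1);         % pick any eigenpair
norm(a*b - v*b)                  % ~0, so A*b = v*b
norm(a*(-b) - v*(-b))            % also ~0, so -b is an eigenvector for the same v
[/code]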
Dunno what language that is supposed to be. But, purely for curiosity:
function bnew=powermethod(a,b)
bnew=[0;0;0]; c=0; itterations = 0;
...to someone used to c plus plus you appear to 'declare' something called bnew initialised to the return from powermethod, then immediately replace the contents with an array of 3 zeros.
Yeah, to be fair that doesn't have to be there, it doesn't really do anything except just remind me it's a vector.
Dunno what language that is supposed to be.
MATLAB code is a slightly bizarre mash-up of C and FORTRAN, with native support for vector and matrix data types thrown in.
The first line...
[code]function bnew=powermethod(a,b)[/code]
... is actually the function signature for [code]powermethod()[/code] which takes two arguments [code]a,b[/code] and returns [code]bnew[/code]... the following code then implements it
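So, assuming the function is saved as powermethod.m somewhere on the MATLAB path, you'd call it from the command window with something like this (names as in the OP's script, the initial guess renamed b0 here just for clarity):
[code]
a = [9 5 1; 8 -5 9; -9 -2 -2];   % matrix from the OP's script
b0 = [1; 1; 1];                  % initial guess vector
bnew = powermethod(a, b0)        % returns the converged (normalised) eigenvector
[/code]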
I used to work for Mathworks by the way 🙂
elliptic - I used to work for Mathworks by the way
Having suffered 3 years of Matlab at uni, please tell me where you live so I can [s]hunt you down and kill you[/s] bring you some free cake.
i think you're missing this?
fprintf('\n Hello World')
no need to thank me
Matlab is ace.
to the op, cut+pasting your code gives an eigenvector:
-0.274121806207894
0.957476112991588
-0.089981822674594
which seems to be sort of correct as
a*ans
ans =
2.230302486412296
-7.790191418692434
0.732107675237060
which is parallel, except pointing in the opposite direction, like you said.
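You can see it directly by taking the componentwise ratio (rough check, reusing the matrix from the OP and the vector printed above):
[code]
a = [9 5 1; 8 -5 9; -9 -2 -2];
b = [-0.274121806207894; 0.957476112991588; -0.089981822674594];
(a*b) ./ b    % the same value (about -8.13) in every component: parallel, but pointing the other way
[/code]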
The reason is that you are using the wrong operator in this line:
error = abs( (d(1)/b(1)) - (e(1)/bnew(1)) );
it should be:
error = abs( (d(1)\b(1)) - (e(1)\bnew(1)) );
EDIT: actually that still seems wrong as a*ans is still pointing in the opposite direction....hmmmm
I'm not sure it is wrong, thinking about it more. I think B is still an eigenvector of A so long as A*B is parallel to B.
In fact, that's what the negative eigenvalue in this case (-8.13) tells you (i.e. A*B = (-8.13)*B).
Bit rusty here, but isn't the dominant eigenvalue the one with the largest absolute magnitude?
In which case, this can be either positive or negative, and in this case, the calculated value is negative.
As above, am not sure there is anything wrong.
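A quick way to check that in MATLAB (sketch only):
[code]
a = [9 5 1; 8 -5 9; -9 -2 -2];
lambda = eig(a);             % all three eigenvalues
[~, k] = max(abs(lambda));   % index of the largest-magnitude one
lambda(k)                    % the dominant eigenvalue, about -8.13 for this matrix
[/code]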
Correct funkynick, the eigenvalue is fine, it's the eigenvector that's negative.
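Since the dominant eigenvalue is negative here, the normalised iterate flips sign on every pass of the loop, so whether you end up with b or -b just depends on which iteration the tolerance is hit. If the sign matters to you, one common convention (purely a sketch, not part of the OP's code) is to flip the returned vector so that its largest-magnitude component is positive:
[code]
% Hypothetical sign-fixing step, applied to the vector returned by powermethod:
[~, k] = max(abs(bnew));       % index of the largest-magnitude component
bnew = bnew * sign(bnew(k));   % flip the whole vector so that component is positive
[/code]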
It's Zero.
Or Twenty Six.
Perhaps Twenty Eight....
