How to transform Matlab values to FPGA values

I have implemented a (-1,+1) box-constrained algorithm in an FPGA design. The algorithm solves the equation Rh + n = b in a telecommunication application. In my case, all elements of the NxN matrix R lie in the range [-1, 1], n is Gaussian noise, and h is an Nx1 vector. For example: R =

1.0000    0.5000    0.5000    0.5000
0.5000    1.0000         0         0
0.5000         0    1.0000    1.0000
0.5000         0    1.0000    1.0000

b =

-1.2672
-0.5291
 2.1703
 1.2486
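
For reference, here is a quick floating-point sanity check in Matlab. This is not the FPGA algorithm itself; the clipped pseudo-inverse solution is only my own rough reference for what a [-1, 1]-constrained h should look like (R in this example is singular, so I use pinv instead of backslash):

% Floating-point reference only, not the FPGA algorithm
R = [1.0  0.5  0.5  0.5;
     0.5  1.0  0    0;
     0.5  0    1.0  1.0;
     0.5  0    1.0  1.0];
b = [-1.2672; -0.5291; 2.1703; 1.2486];

% Minimum-norm least-squares solution of R*h = b, then clipped to [-1, 1]
h_ref = min(max(pinv(R)*b, -1), 1)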

R1 = round(2*R); b1 = round(2*b); It is easy to see that all the elements can be represented in 4 bits. But for transmission to the FPGA board, I sign-extend them to 8 bits. Then I write the matrix R1 and the vector b1 into the FPGA board.
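
Continuing with R and b from above, this is roughly what the quantization and sign extension look like on the Matlab side (the int8 cast is just my way of illustrating the 8-bit two's-complement words that get written to the board):

% Scale by 2 and round; every entry then fits in 4 signed bits
R1 = round(2*R);
b1 = round(2*b);

% Sign-extend to 8 bits by casting to signed 8-bit integers before
% writing them to the board
R1_fpga = int8(R1);
b1_fpga = int8(b1);

% View one value as an 8-bit two's-complement pattern, e.g. -3 -> 11111101
dec2bin(typecast(int8(-3), 'uint8'), 8)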

In the Matlab simulation, I used the constraint H = 2^0 as the biggest step size, then d = 2^-1, d = 2^-2, ... to approach the expected value of h. So my question is: should I use H = 00000001 to represent 1? Some people said that since I use 4 bits to represent the input data, I should use 2^4 to represent H. I don't agree with that, and now I am a little confused about it. Can anybody help me figure it out? I have tried my best to explain it clearly, and I have added a small illustration below. Thank you very much.
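
To make the question concrete, here is how the step sequence would look under the two scalings being discussed. The scale factors are only assumptions for illustration; which one is consistent with how R1, b1 and the accumulators are interpreted on the board is exactly what I am asking about:

% The step sequence from the Matlab simulation
steps = 2.^(0:-1:-3);      % H = 1, then 0.5, 0.25, 0.125

% If H is stored literally as 00000001 (scale factor 2^0), the halved
% steps collapse when forced to integers:
round(2^0 * steps)         % 1  1  0  0

% If everything is scaled by 2^4 (4 fractional bits), each step is an
% exact integer that still fits in 8 bits:
round(2^4 * steps)         % 16  8  4  2  -> 00010000, 00001000, ...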

Zhi

