Hi all,
I am still relatively new to low-level programming on microcontrollers, and am currently doing some work on an STM32F103RB using Forth.
To get to know the device, I am now trying out some code to switch the clock from the internal 8 MHz oscillator to an external clock at 48 MHz and at 72 MHz.
There are two things I do not really understand.
1/ What exactly is the purpose of the Flash latency? You need to set it to 0, 1 or 2 wait states depending on the clock speed. As far as I understand it, it defines the speed at which the flash memory of the device is accessed (read). Looking at some C code I found on the internet that steps the clock from 8 to 72 MHz, I noticed that it changes the clock FIRST and only THEN increases the latency from 0 wait states to 2. I would have expected the code to slow down flash access FIRST, before stepping up the clock speed.
2/ Page 791 of the STM32F10x reference manual says that the speed of a USART is Tx/Rx (baud) = f(ck) / (16 * USARTDIV). "USARTDIV is an unsigned fixed point number that is coded on the USART_BRR register."
However, the value in the BRR register is a 16-bit value, with the last 4 bits being the fraction of USARTDIV. There are some examples of how to calculate the USARTDIV value on pages 791 and 792. Now, I could be wrong, but it looks to me that, if you want a certain baudrate, you can simply ignore the "*16" part of the formula and the "fraction of USARTDIV". If you just say "Tx/Rx (baud) = f(ck) / USARTDIV" and treat USARTDIV as a plain 16-bit integer, you get exactly the same result.
Or am I missing something here? Is there a special reason why the "*16" and the "last 4 bits = fraction of USARTDIV" are needed?
Cheerio! Kr. Bonne.