I normally don't use dynamic allocation in embedded projects, but there are situations where avoiding it isn't possible.
I'm working on a project based on an LPC1768 Cortex-M3 MCU, which features 32 kB + 32 kB of RAM. I'm using lwip + mbedTLS to make a TLS connection to AWS IoT Core over Ethernet. After carefully sizing the various data blocks (incoming Ethernet packets, outgoing packets, and so on), it seems to work. However, I'm still using dynamic allocation, mainly for outgoing packets: when an application (MQTT, DNS client, ...) wants to transmit, lwip requests space from the heap.
lwip can be configured to use a set of memory pools of different sizes instead of dynamic allocation, but the pools are very difficult to size correctly. During execution I saw allocation requests of very different sizes: 5-10 bytes, 50-100 bytes, 800 bytes, 1500 bytes.
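For reference, this is roughly what the pool-based configuration looks like in lwip: enable `MEM_USE_POOLS` and `MEMP_USE_CUSTOM_POOLS` in `lwipopts.h`, then describe the pools in `lwippools.h`. The element counts and sizes below are only guesses based on the request sizes I observed, not tuned values:

```c
/* lwipopts.h: replace mem_malloc()'s heap with fixed-size pools */
#define MEM_USE_POOLS           1
#define MEMP_USE_CUSTOM_POOLS   1

/* lwippools.h: LWIP_MALLOC_MEMPOOL(number_of_elements, element_size) */
LWIP_MALLOC_MEMPOOL_START
LWIP_MALLOC_MEMPOOL(20, 16)     /* small requests: 5-10 bytes   */
LWIP_MALLOC_MEMPOOL(10, 128)    /* mid-size: 50-100 bytes       */
LWIP_MALLOC_MEMPOOL(8,  512)
LWIP_MALLOC_MEMPOOL(4,  1600)   /* full-size frames: ~1500 bytes */
LWIP_MALLOC_MEMPOOL_END
```

The difficulty is exactly here: if any pool is sized too small, allocations fail at runtime; if too large, the RAM is wasted even when the pool is idle.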
However, the behaviour of the allocator seems strange to me. I expected malloc() to always return the start of the heap when the heap is completely free. That's the case at the beginning, but after some allocations and deallocations, even when the heap is completely empty again (everything that was allocated has been freed), a new allocation doesn't return the start of the heap. How can this behaviour be explained?
I noticed my allocation pattern is very simple: after 1-5 allocations there are 1-5 deallocations, and I eventually end up with a completely free heap once all the outgoing packets have been serially shifted out and/or acked by the remote side.
For example:
- malloc(3)=addr1
- free(addr1) -> free heap
- malloc(5)=addr2
- malloc(60)=addr3
- free(addr2)
- free(addr3) -> free heap
- malloc(800)=addr4
- malloc(5)=addr5
- malloc(32)=addr6
- free(addr5)
- malloc(80)=addr7
- free(addr6)
- free(addr7)
- free(addr4) -> free heap
With such a pattern, I think I could simplify the allocation algorithm so that it always returns the start of the heap when the heap is completely free, avoiding the fragmentation that a more complex algorithm introduces.
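The idea could be sketched as a plain bump allocator with a live-allocation counter: free() never reuses individual holes, but when the counter drops to zero the bump pointer rewinds to the start of the heap. This is only a minimal sketch (the names `my_malloc`/`my_free` and the heap size are made up), not a drop-in replacement for lwip's `mem_malloc`:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

#define HEAP_SIZE 8192

static uint8_t heap[HEAP_SIZE];
static size_t  next_free   = 0;  /* bump pointer: offset of first unused byte */
static size_t  live_allocs = 0;  /* number of outstanding allocations */

/* Bump-allocate; never reuse freed holes individually. */
static void *my_malloc(size_t size)
{
    size = (size + 3u) & ~(size_t)3u;   /* keep 4-byte alignment for Cortex-M */
    if (next_free + size > HEAP_SIZE)
        return NULL;                     /* out of memory */
    void *p = &heap[next_free];
    next_free += size;
    live_allocs++;
    return p;
}

/* Free only decrements the counter; all space is reclaimed at once
   when the heap becomes completely empty. */
static void my_free(void *p)
{
    (void)p;
    assert(live_allocs > 0);
    if (--live_allocs == 0)
        next_free = 0;                   /* whole heap free: rewind to start */
}
```

With the traced pattern this is fragmentation-free by construction. The obvious downside is that a single long-lived allocation prevents the rewind forever, so it only works if the heap really does empty out completely on a regular basis, as in my trace.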