malloc() – just say no
September 21st, 2020 by Colin Walls
A topic that I find particularly interesting, and which is raised by many of the embedded software developers whom I meet, is dynamic memory allocation – grabbing chunks of memory as and when you need them. This seemingly simple and routine operation opens up a huge number of problems. These are not confined to embedded development – many desktop applications exhibit memory leaks that impact performance and can make regular system reboots necessary. However, it is the embedded development context that concerns me here.

Because I find this subject interesting, I cover it regularly in conference presentations and technical articles. Today, however, I want to take a slightly different perspective. I would normally outline three key reasons not to use the standard malloc(): its timing is not deterministic, an allocation request can fail at run time, and repeated allocation and freeing can fragment the available memory.
These are valid points, but they may not always be as important as they seem.
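For instance, the possibility of allocation failure only becomes a show-stopper if the application never checks for it. Here is a minimal sketch, assuming a hypothetical make_message() helper that is not from the article, of handling a failed request explicitly:

```c
/* A minimal sketch (illustrative only) of handling allocation
 * failure rather than assuming malloc() always succeeds.
 */
#include <stdlib.h>
#include <string.h>

#define MSG_SIZE 128u

char *make_message(const char *text)
{
    char *msg = malloc(MSG_SIZE);
    if (msg == NULL)
    {
        /* Application-specific recovery: log the event, retry
         * later, fall back to a static buffer, or raise an error. */
        return NULL;
    }
    strncpy(msg, text, MSG_SIZE - 1u);
    msg[MSG_SIZE - 1u] = '\0';
    return msg;
}
```

Whether such a recovery path is acceptable depends entirely on the application; the point is simply that failure need not be fatal if it is anticipated.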
However, malloc() does present another challenge: it is often rather slow. A real-time system is fundamentally predictable, but not necessarily fast. Many embedded systems do not need to be predictable to any great precision, but they do need to be speedy. So, finding a way to provide the functionality of malloc(), without the problems, is worth considering.

The main reason why malloc() is rather slow is that it provides a lot of functionality – allocating chunks of memory of variable size is somewhat complex. It turns out, however, that for many applications this functionality is not really needed, as the chunks of memory are all the same size [or one of a small number of known sizes]. It is a simple matter to write an allocation function for fixed-size blocks – this can be done using an array with usage flags or a linked list [the latter is often better]. The resulting code will inevitably be faster. It may even be deterministic, or could be made so if that is a requirement. Allocation failure can still occur, but it may be handled in a way that is appropriate to the specific application.
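To illustrate the linked-list approach, here is a minimal sketch of a fixed-size block allocator. The pool size, block size and the names pool_init(), block_alloc() and block_free() are illustrative assumptions, not taken from the article:

```c
/* A minimal sketch of a fixed-size block allocator: a single,
 * statically allocated pool of BLOCK_COUNT blocks of BLOCK_SIZE
 * bytes. Free blocks are chained into a singly linked list, so
 * allocation and release are simple constant-time operations.
 */
#include <stddef.h>
#include <stdint.h>

#define BLOCK_SIZE   32u               /* payload bytes per block  */
#define BLOCK_COUNT  64u               /* number of blocks in pool */

typedef union block
{
    union block *next;                 /* link while block is free */
    uint8_t payload[BLOCK_SIZE];       /* user data when allocated */
} block_t;

static block_t pool[BLOCK_COUNT];      /* the statically sized pool */
static block_t *free_list = NULL;      /* head of the free list     */

void pool_init(void)
{
    /* Chain every block onto the free list. */
    free_list = NULL;
    for (size_t i = 0; i < BLOCK_COUNT; i++)
    {
        pool[i].next = free_list;
        free_list = &pool[i];
    }
}

void *block_alloc(void)
{
    /* Pop the first free block; no searching, constant time. */
    block_t *blk = free_list;
    if (blk != NULL)
        free_list = blk->next;
    return blk;                        /* NULL if the pool is empty */
}

void block_free(void *ptr)
{
    /* Push the block back onto the free list; also constant time. */
    block_t *blk = ptr;
    blk->next = free_list;
    free_list = blk;
}
```

In use, pool_init() would be called once at start-up; each subsequent block_alloc() or block_free() is just a couple of pointer operations, so the timing is short and essentially constant. Note that, as written, this sketch is not reentrant – if blocks may be requested from multiple tasks or from interrupt context, the list operations would need to be protected.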