malloc() – just say no

 
September 21st, 2020 by Colin Walls

A topic that I find particularly interesting, which is raised by many embedded software developers whom I meet, is dynamic memory allocation – grabbing chunks of memory as and when you need them. This seemingly simple and routine operation opens up a huge number of problems. These are not confined to embedded development – many desktop applications exhibit memory leaks that impact performance and can make system reboots common.

However, I am concerned about the embedded development context …

Because I find this subject interesting, I cover it frequently at technical conferences and in articles. However, today I want to take a slightly different perspective.

I would normally outline three key reasons not to use the standard malloc():

  • Memory allocation may fail
  • The function is commonly not re-entrant [thread friendly]
  • It is not deterministic [predictable]

These are valid points, but may not always be as important as they seem:

  • The function does clearly indicate failure by returning a NULL pointer. It is really quite straightforward to check for this and take appropriate action [as shown in the sketch after this list].
  • It is quite likely that all memory handling is done within a single thread/task.
  • Not all embedded systems are real time, so determinism might not really be needed.
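
To illustrate the first point, here is a minimal sketch in C of checking the return value of malloc(); the buffer size and the error handling are hypothetical, chosen only for the example, and a real application would respond to failure in whatever way suits it:

#include <stdlib.h>
#include <stdio.h>

#define BUFFER_SIZE 256   /* hypothetical size for this example */

int main(void)
{
    char *buffer = malloc(BUFFER_SIZE);
    if (buffer == NULL)
    {
        /* allocation failed - take action appropriate to the application */
        fprintf(stderr, "out of memory\n");
        return 1;
    }

    /* ... use the buffer ... */

    free(buffer);
    return 0;
}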

However, malloc() does present another challenge: it is often rather slow. A real time system is fundamentally predictable, but not necessarily fast. Many embedded systems do not need to be predictable to any precision but do need to be speedy. So, finding a way to provide the functionality of malloc(), without the problems, is worth considering.

The main reason why malloc() is rather slow is that it is providing a lot of functionality – the allocation of chunks of memory of variable size is somewhat complex. However, it turns out that, for many applications, this functionality is really not needed, as the chunks of memory are all the same size [or a small number of known sizes]. It is a simple matter to write an allocation function for fixed-size blocks – this can be done using an array with usage flags or a linked list [the latter is often better]. The resulting code will inevitably be faster. It may even be deterministic or could be made so, if that is a requirement. Allocation failure can still occur but may be handled in an appropriate way for the specific application.
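
To make the idea concrete, here is a minimal sketch of a fixed-size block allocator built on a linked free list; the block size, block count and function names are hypothetical, chosen only for illustration:

#include <stddef.h>

#define BLOCK_SIZE   32     /* hypothetical fixed block size in bytes */
#define BLOCK_COUNT  16     /* hypothetical number of blocks in the pool */

/* while a block is free, it holds a pointer to the next free block */
typedef union block
{
    union block *next;
    unsigned char data[BLOCK_SIZE];
} block_t;

static block_t pool[BLOCK_COUNT];
static block_t *free_list = NULL;

/* link all blocks into the free list - call once at start-up */
void pool_init(void)
{
    for (int i = 0; i < BLOCK_COUNT - 1; i++)
        pool[i].next = &pool[i + 1];
    pool[BLOCK_COUNT - 1].next = NULL;
    free_list = &pool[0];
}

/* allocate one fixed-size block; returns NULL if the pool is exhausted */
void *pool_alloc(void)
{
    block_t *block = free_list;
    if (block != NULL)
        free_list = block->next;
    return block;
}

/* return a block to the pool */
void pool_free(void *p)
{
    block_t *block = p;
    block->next = free_list;
    free_list = block;
}

Both pool_alloc() and pool_free() just adjust a couple of pointers, so each call takes constant time, which is why this kind of allocator can readily be made deterministic. As written it is not re-entrant, which is acceptable if, as suggested above, all memory handling is confined to a single thread/task.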
