Embedded Software – Colin Walls

Colin Walls has over thirty years' experience in the electronics industry, largely dedicated to embedded software. A frequent presenter at conferences and seminars, and author of numerous technical articles and two books on embedded software, Colin is an embedded software technologist with Mentor Graphics.

Thanks for the memory
November 21st, 2014, by Colin Walls
The computer world is often accused of being mired in jargon, and I think that is a fair criticism. In some ways it gets worse when an everyday word is “hijacked” to take on a new meaning. A good example is “program”, which had several meanings before it was applied to software. Interestingly, in the UK we use the US spelling (“program”) to refer to software, but retain the English version (“programme”) for everything else.

Another re-purposed word is “memory”, which is interesting because it has acquired a number of meanings in a computing context. Historically, the term referred to the place where a program and data resided during execution – it still has this meaning. But it was also used to refer to bulk storage such as disk drives. Even today, when someone tells me how much memory their PC has, I have to make sure that they are not telling me about disk storage. For embedded systems, memory has always been a term with a number of meanings …

Memory is an important consideration in embedded systems. Most of the time, it is available in limited quantities and its configuration may be quite complex. An embedded developer is often very concerned about memory addresses and about ensuring that code and data are located in the right places. Hence, a capable linker is an essential tool for embedded software development.

Traditionally, the code and constant data were placed in PROM (Programmable Read Only Memory). These devices were programmed electrically and retained their data indefinitely. Most of them could be erased, ready for re-use, using an ultra-violet light source. Working data, which needed to be changed during program execution, was held in RAM (Random Access Memory). PROMs were cheap, sufficiently capacious and fast enough to allow code to be executed directly from them. RAM was faster, but more expensive, so it was typically provided in small amounts.

Some systems would need other kinds of memory, in addition to PROM and RAM. A typical example would be non-volatile RAM (NVRAM) – memory that could be changed while the program was running, but which retained its data after the power was shut off. This presented some interesting programming challenges – maybe I will discuss that another day. NVRAM was commonly implemented using regular RAM provided with a backup power supply (“battery-backed RAM”). Latterly, flash memory has become increasingly available as an option but, though cheaper and simpler from a hardware perspective, it is less flexible from the software point of view.

Nowadays, a common system configuration is a combination of flash memory for program and constant data storage, along with a large RAM area from which the code is executed. On start-up, some “boot” code copies the instructions from flash to RAM and branches to it to initialize the application. Flash is a little slow, so executing code directly from it is undesirable. RAM is relatively cheap now and much faster. This configuration works quite well, as flash may be fitted on board easily and programmed rapidly. During development, or even after deployment, new code versions can easily be loaded into flash.

There are, however, two downsides. First, the code in RAM is potentially subject to corruption. Steps may be taken to prevent this, but they are a further complexity. PROMs were impossible to corrupt, which was reassuring. Second, the copying process from flash to RAM introduces link-time complexity, which is quite readily overcome, and a start-up delay, which is more of an issue.
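By way of illustration (this is not from the article), here is a minimal sketch in C of that boot-time copy. It assumes a toolchain whose linker script exports symbols giving the load address of the program image in flash and its run address in RAM; the symbol names __code_load_start, __code_run_start and __code_run_end are hypothetical and every toolchain has its own conventions for them.

    #include <stdint.h>

    /* Symbols assumed to be defined in the linker script; the names are
     * illustrative only - real toolchains use their own conventions. */
    extern const uint32_t __code_load_start[];  /* load address in flash */
    extern uint32_t       __code_run_start[];   /* run address in RAM    */
    extern uint32_t       __code_run_end[];     /* end of the RAM region */

    extern int main(void);

    void boot(void)
    {
        const uint32_t *src = __code_load_start;
        uint32_t       *dst = __code_run_start;

        /* Copy the program image, word by word, from flash into RAM. */
        while (dst < __code_run_end)
            *dst++ = *src++;

        /* Branch to the application, which now executes from RAM. */
        (void)main();
    }

In practice the same pattern is used for the initialized-data section, and a further loop typically clears the uninitialized (BSS) region before the application is entered; it is this copying, on top of any flash wait states, that accounts for the start-up delay mentioned above.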
This start-up delay becomes increasingly significant as systems grow more complex and memory sizes increase.

In an ideal world, all the memory in an embedded system would be the same. It would be fast enough to be used for execution, re-writable on a byte-by-byte basis as required, write-protectable (in blocks) in a straightforward way, and would retain data through power cycles. There are technologies that hold out the promise of such a design – MRAM, for example – but they do not seem to have reached the mainstream yet.

How are your designs configured? Are you looking at MRAM etc.? Comments or email are very welcome.

One Response to “Thanks for the memory”
Hi, Colin; back in the day, I used BBRAM whenever possible, often constructing modules that dropped into the existing EPROM/flash sockets. A write-protect switch kept the code safe from corruption, and a wee bit of extra logic was used to detect write attempts in code space; this signal was either used to trigger a logic analyzer, or tied to an interrupt pin on the CPU – even though the code was safe, it was extremely useful to know if any bad behavior *tried* to occur!
The closest I’ve come to this scenario these days is FRAM-based MCUs, which (unfortunately) are few and far between… TI’s “Wolverine” MSP430 series sports a number of FRAM models that also have a Memory Protection Unit, which is very useful for trapping faults. I overwhelmingly prefer programming in Forth, so FRAM is a godsend; managing the Forth dictionary in flash is a huge PITA by comparison.
Nevertheless, the overwhelming majority of MCUs are flash-based, and I simply must bite the bullet wrt Forth housekeeping. Even so, a lot of MCUs now have Memory Protection Units, and it’s foolish not to take advantage of this important asset.
cheers, – vic