August 11, 2003
More on Object-Oriented Design
As an editor, it is always fun/informative to receive letters from readers. The article in the July 28th issue of EDA Weekly,
The Root of All Evil, precipitated a lengthy Letter to the Editor posted in the August 4th issue.
The August 4th Letter to the Editor then precipitated the two letters posted here.
I am grateful to Mark Jones for the initial conversation detailed in the July 28th issue, to Seth Goodman for his August 4th
Letter to the Editor, and to Sy Wong and Kevin Dewar for their additional contributions to the dialog on
design. These letters have been slightly edited, but are pretty much printed as received.
Letter No. 1 - August 8, 2003
Another 'Letter to the Editor' on the evil/wonderful nature of objects...
It's always stimulating to see reader responses to articles and interviews, and there was probably a lot in Mr. Jones'
article that might have prompted both agreement and disagreement, but the reply by Mr. Goodman seemed curiously overwrought,
although at least he didn't sit on the fence.
Mr. Goodman has picked on the area of Object-Oriented Design, as used in software engineering, to arrive at the
conclusion/remonstration that "we should not seriously consider any system that looks even remotely like OO design."
Phew! Not scared to be both prescriptive *and* blinkered.
The justification for this firm advice seems to be that OOD is an unmitigated disaster in the software world and shouldn't
really be tolerated there, let alone let loose into the hardware domain. However, whilst many people undoubtedly do consider
OO SW rather inefficient, and many more take issue with Microsoft's market influence, it seems unarguable that OO design has
actually met with widespread acceptance in industry. The people using it are intelligent professionals, many of them are
hugely experienced, and they would not have chosen it unless they thought it the best choice available (although not
necessarily an ideal choice of course).
I think that Mr. Goodman has exaggerated the actual inefficiency of real SW systems developed with OOD. Whilst there may be
some systems that have "several orders of magnitude" less performance than they could have, I don't believe that this level
of inefficiency (i.e. thousands of times) is typical of either mainstream OOD systems or any reasonably competently designed
ones. Certainly it is patently unfair to say that hardware improvements have merely allowed us to stand still as the
software bloated. A 2003 mainstream PC may cost about the same as (or, in real terms, less than) one from 1990, but I
don't think that many people would consider the two functionally equivalent to the user.
Additionally, Mr. Jones mentioned other examples of software using OOD and, as one of these was Linux, it couldn't be said
that OOD is entirely the result of Microsoft marketing. The Java language is another OO example that has clearly gained
widespread (or pervasive) acceptance without anyone yet being able to blame it upon academics (many of whom must be
surprised to learn of the apparent level of influence that they have).
We need to remember why abstraction is sought after in the first place, and I think there are two closely related reasons.
Firstly, the availability of higher abstraction levels allows a greater choice of trade-off points between design
effort/cost and system performance. When a software product can be developed in half the time/cost, it will sometimes be
perfectly acceptable to allow it to have half the performance. (I certainly don't notice if my phone takes 50ms or 100ms to
recognize an incoming call.)
Secondly, as systems get more complex, there comes a point at which it isn't even an option to use a low-abstraction
methodology, whatever the likely performance gain theoretically achievable, since it could never realistically be
completed. At this point (which some would argue has already been reached in many systems) it is instructive to compare an
OOD system's "low" performance against that of a system that can't be designed at all (performance = zero).
In chip design, we can similarly see that whatever loss of performance occurred as the industry went from transistor-level
full-custom optimization to standard-cell based schematic entry to HDL/synthesis, the difference is immaterial to somebody
designing a 10 million gate ASIC since they aren't going to use anything other than RTL, IP, and compiled blocks. As
gate-counts continue to rise then abstraction levels will have to rise with them and perhaps it is a little premature to
state that nothing can be learnt from abstraction techniques widely practiced in another important engineering discipline.
I also believe that the reason Hardware seems to contain fewer (product) bugs is not because the *design* methodologies are
inherently better than Software, but is a result of more effort being put into verification at an earlier point in the
product cycle (since multiple ASIC re-spins are ruinous). In terms of number of defects/KLOC, my experience has been that
there isn't much difference between hardware designers writing RTL and software designers writing high-level languages, at
the point of initial coding. Hardware designers may get these defects out more quickly, but this is mainly a result of
economic necessity, although the effort required to achieve this is rapidly getting out of control. One attribute of a good
design methodology (for software, hardware, or systems) is that it allows fewer defects into the design and makes the ones
that do get in easier to find and fix.
It is interesting that one of the major other approaches to enhancing hardware design efficiency is pervasive re-use of
IP. This approach is itself really a simple type of "object-based" design in which the more successful IP blocks will have
to be reasonably general and flexible - i.e. "inefficient" compared to fully optimized custom solutions.
Mr. Goodman mentions that other, more efficient, methods of abstraction do exist and I would be genuinely interested to hear
what he believes these are, and to have the opportunity to perhaps compare what they have to offer for chip design against
the ideas from Mr. Jones.
MCD Design Consulting
Letter No. 2 - August 5, 2003
A computer is a real object, which in turn is built up from a hierarchy of increasingly complex real objects, whether on a
PC board or inside an IC. When hardware is designed with software, the concept of an object is indistinguishable between
software and hardware, provided the design language for both [paradigms] is unified into one.
In his letter in the August 4th issue of EDA Weekly, Mr. Goodman mistakenly blames object-oriented programming when he says,
'The object-oriented approach is indeed incredibly wasteful of processor and memory resources.'
The real culprits are the increasingly bloated multi-tasking operating systems (MTOS) such as Windows or Linux
(a.k.a. UNIX). The computer science community was brought up on UNIX and C after Bell Labs gave the code to universities for
free. Were it not for the burden of supporting an MTOS, a processor like the Pentium, with its PCI bus, could be made for
almost nothing today. Why make processors faster and faster and memories bigger and bigger, only to chop them up for
multiple tasks?
Meanwhile, you can wake up the IC design community with an editorial citing the major mistake made by VHSIC in 1980 at Woods
Hole that declared the then DoD-mandated Ada language was inadequate for use as an HDL. That was at a time when a practical
Ada compiler was not available. Obviously, the Ada inadequacy conclusion was not based on factual tests. The resulting VHDL
is a hodge-podge of copied Ada constructs, plus unnecessary additions that are also convoluted. Now IC designers are slaves
to the IEEE HDLs and the necessary EDA tools.
Ironically, Ada was initiated by [Donald] Rumsfeld in 1975 to unify the many languages used by weapons systems. However, Ada
was buried by too many bureaucrats [who were not] actual software developers. The "parallel or concurrent" construct that
[gave] Ada a reputation for being too large and too complex was the same evil time-sharing concept of MTOS. It is impossible
to have parallel processing on a single-threaded computer, only pseudo or virtual concurrency. Most of the guts of a Pentium
go to support the MTOS, to make it go faster in order to cut up time for sharing between tasks.
Over the course of 10 years, I proved the 1980 VHSIC declaration wrong. Ada has a package construct that provides a clean
interface (the Specification) and an implementation (the Body). The Body can use Boolean expressions as a system-level
implementation or connected library cells; both should be logically identical. Syntactic and semantic errors are caught by
the compiler, which is itself validated by an official test set. Since Ada is a programming language, the designer can
ascertain functional correctness by writing a test program. And since the implementation is hidden from the test program,
switching between the Boolean implementation and the connected cells requires only relinking before running the test again.
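The Specification/Body separation and implementation swap described above can be sketched as follows. This is a Python analogue rather than actual Ada (the letter does not include code), and the half-adder and "cell" names are illustrative assumptions, but it shows the same idea: the test program depends only on the interface, so the behavioural (Boolean) body and the structural (connected-cells) body can be exchanged without touching the test.

```python
from abc import ABC, abstractmethod

# "Specification": the public interface, analogous to an Ada package spec.
class HalfAdder(ABC):
    @abstractmethod
    def add(self, a: bool, b: bool) -> tuple[bool, bool]:
        """Return (sum, carry)."""

# "Body" no. 1: behavioural model written as Boolean expressions.
class BooleanHalfAdder(HalfAdder):
    def add(self, a: bool, b: bool) -> tuple[bool, bool]:
        return (a != b, a and b)  # sum = a XOR b, carry = a AND b

# "Body" no. 2: structural model built from connected library "cells".
def xor_cell(a: bool, b: bool) -> bool:
    return (a or b) and not (a and b)

def and_cell(a: bool, b: bool) -> bool:
    return a and b

class CellHalfAdder(HalfAdder):
    def add(self, a: bool, b: bool) -> tuple[bool, bool]:
        return (xor_cell(a, b), and_cell(a, b))

# The "test program" sees only the Specification; swapping bodies is
# just a matter of which implementation gets linked in.
def exhaustive_test(dut: HalfAdder) -> bool:
    expected = {(False, False): (False, False),
                (False, True):  (True, False),
                (True, False):  (True, False),
                (True, True):   (False, True)}
    return all(dut.add(a, b) == out for (a, b), out in expected.items())
```

Both bodies pass the same exhaustive test, which is the sense in which the letter says the two implementations "should be logically identical".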
-- Peggy Aycinena, EDACafe.com Contributing Editor.