One direction is bottom-up, from concrete to abstract —
working up from the specific operations in the problem domain that you
know you will need to perform. For example, if one is designing
firmware for a disk drive, some of the bottom-level primitives might
be ‘seek head to physical block’, ‘read physical
block’, ‘write physical block’, ‘toggle drive
LED’, etc.
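To make the shape of that bottom layer concrete, here is a minimal
sketch of how such primitives might be declared in C. The names and
signatures are hypothetical, invented for illustration rather than
taken from any real firmware.

    /* Hypothetical bottom-level primitives for disk-drive firmware. */
    #include <stdint.h>
    #include <stddef.h>

    int  seek_physical_block(uint32_t cylinder, uint32_t head,
                             uint32_t sector);
    int  read_physical_block(uint32_t block, uint8_t *buf, size_t len);
    int  write_physical_block(uint32_t block, const uint8_t *buf,
                              size_t len);
    void toggle_drive_led(void);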
The other direction is top-down, abstract to concrete — from the
highest-level specification describing the project as a whole, or the
application logic, downwards to individual operations. Thus, if one is
designing software for a mass-storage controller that might drive
several different sorts of media, one might start with abstract
operations like ‘seek logical block’, ‘read logical
block’, ‘write logical block’, ‘toggle
activity indication’. These would differ from the similarly named
hardware-level operations above in that they're intended to be
generic across different kinds of physical devices.
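One common C idiom for making operations generic in this way is a
table of function pointers, one filled in per kind of physical
device. The declaration below is an assumption about how such an
interface might look, not code from any actual controller; each
driver would supply its own implementations, with hardware-level
primitives like the ones above plugged in underneath.

    /* Hypothetical abstract media interface. The top level calls
       through the struct and never knows which device it is on. */
    #include <stdint.h>
    #include <stddef.h>

    struct media_ops {
        int  (*seek_logical_block)(void *dev, uint64_t lblock);
        int  (*read_logical_block)(void *dev, uint64_t lblock,
                                   void *buf, size_t len);
        int  (*write_logical_block)(void *dev, uint64_t lblock,
                                    const void *buf, size_t len);
        void (*toggle_activity)(void *dev);
    };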
These two examples could be two ways of approaching design for
the same collection of hardware. Your choice, in cases like this, is
between two strategies: either abstract the hardware (so the objects
encapsulate the real things out there and the program is merely a
list of manipulations on those things), or organize around some
behavioral model (and then embed the actual hardware manipulations
that carry it out in the flow of the behavioral logic).
An analogous choice shows up in a lot of different contexts.
Suppose you're writing MIDI sequencer software. You could organize
that code around its top level (sequencing tracks) or around its
bottom level (switching patches or samples and driving wave
generators).
A very concrete way to think about this difference is to ask
whether the design is organized around its main event loop (which
tends to have the high-level application logic close to it) or around
a service library of all the operations that the main loop can invoke. A
designer working from the top down will start by thinking about the
program's main event loop, and plug in specific events later. A
designer working from the bottom up will start by thinking about
encapsulating specific tasks, and will glue them together into some
kind of coherent order later on.
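A toy sketch may make the contrast vivid. In the C fragment below
(all names hypothetical), the main event loop is where a top-down
designer would begin, while the service functions it dispatches to
are where a bottom-up designer would begin.

    #include <stdio.h>

    enum event { EV_OPEN, EV_SAVE, EV_QUIT };

    static enum event next_event(void) { return EV_QUIT; } /* stub */

    /* Service library: the operations the loop can invoke. */
    static void do_open(void) { puts("open"); }
    static void do_save(void) { puts("save"); }

    int main(void)
    {
        for (;;) {          /* main event loop; high-level logic */
            switch (next_event()) {
            case EV_OPEN: do_open(); break;
            case EV_SAVE: do_save(); break;
            case EV_QUIT: return 0;
            }
        }
    }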
For a larger example, consider a Web browser, which has to call
a large set of domain primitives to do its job. One group of these is
concerned with
establishing network connections, sending data over them, and
receiving responses. Another set is the operations of the GUI
toolkit the browser will use. Yet a third set might be concerned
with the mechanics of parsing retrieved HTML from text into a
document object tree.
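To illustrate the first group: one such networking primitive might
look roughly like the C function below, a thin wrapper over the
standard BSD sockets API. The wrapper name connect_to_host() is
hypothetical; the calls inside it are the real interface.

    #include <sys/types.h>
    #include <sys/socket.h>
    #include <netdb.h>
    #include <string.h>
    #include <unistd.h>

    /* Open a TCP connection to host:port; return a file descriptor,
       or -1 on failure. */
    int connect_to_host(const char *host, const char *port)
    {
        struct addrinfo hints, *res, *rp;
        int fd = -1;

        memset(&hints, 0, sizeof hints);
        hints.ai_family = AF_UNSPEC;     /* IPv4 or IPv6 */
        hints.ai_socktype = SOCK_STREAM; /* TCP */

        if (getaddrinfo(host, port, &hints, &res) != 0)
            return -1;
        for (rp = res; rp != NULL; rp = rp->ai_next) {
            fd = socket(rp->ai_family, rp->ai_socktype,
                        rp->ai_protocol);
            if (fd == -1)
                continue;
            if (connect(fd, rp->ai_addr, rp->ai_addrlen) == 0)
                break;                   /* connected */
            close(fd);
            fd = -1;
        }
        freeaddrinfo(res);
        return fd;
    }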
Which end of the stack you start with matters a lot, because the
layer at the other end is quite likely to be constrained by your
initial choices. In particular, if you program purely from the top
down, you may find yourself in the uncomfortable position that the
domain primitives your application logic wants don't match the ones
you can actually implement. On the other hand, if you program purely
from the bottom up, you may find yourself doing a lot of work that is
irrelevant to the application logic — or merely designing a pile
of bricks when you were trying to build a house.
Ever since the structured-programming controversies of the
1960s, novice programmers have generally been taught that the correct
approach is the top-down one: stepwise refinement, where you specify
what your program is to do at an abstract level and gradually fill in
the blanks of implementation until you have concrete working code.
Top-down tends to be good practice when three preconditions are true:
(a) you can specify in advance precisely what the program is to do,
(b) the specification is unlikely to change significantly during
implementation, and (c) you have a lot of freedom in choosing, at
a low level, how the program is to get that job done.
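In miniature, stepwise refinement looks something like the
hypothetical C skeleton below: the top level states precisely what
the program is to do, and the stubbed-out helpers are refined into
concrete code later.

    #include <stdio.h>

    /* Stubs to be refined during implementation. */
    static void load_document(const char *path) { (void)path; }
    static void apply_edits(void)               { }
    static void save_document(const char *path) { (void)path; }

    int main(void)
    {
        /* Abstract specification: read, transform, write. */
        load_document("in.txt");
        apply_edits();
        save_document("out.txt");
        return 0;
    }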
These conditions tend to be fulfilled most often in programs
relatively close to the user and high in the software stack —
applications programming. But even there those preconditions often
fail. You can't count on knowing the ‘right’ way for a word
processor or a drawing program to behave until the user
interface has had end-user testing. Purely top-down programming often
has the effect of overinvesting effort in code that has to be scrapped
and rebuilt because the interface doesn't pass a reality check.
In self-defense against this, programmers try to do both things
— express the abstract specification as top-down application logic,
and capture a lot of low-level domain primitives in functions or
libraries, so they can be reused when the high-level design changes.
Unix programmers inherit a tradition that is centered in systems
programming, where the low-level primitives are hardware-level
operations that are fixed in character and extremely important. They
therefore lean, by learned instinct, more toward bottom-up
programming.
Whether you're a systems programmer or not, bottom-up can also
look more attractive when you are programming in an exploratory way,
trying to get a grasp on hardware or software or real-world phenomena
you don't yet completely understand. Bottom-up programming gives you
time and room to refine a vague specification. Bottom-up also appeals to
programmers' natural human laziness — when you have to scrap and
rebuild code, you tend to have to throw away larger pieces if you're
working top-down than you do if you're working bottom-up.
Real code, therefore, tends to be programmed both top-down and
bottom-up. Often, top-down and bottom-up code will be part of the
same project. That's where ‘glue’ enters the
picture.