by Asim Jalis
The way object-oriented programs decouple and intertwine their parts is,
from certain perspectives, an optimal way to structure a system --
e.g. it makes efficient use of system resources.
But how does it fare with reuse? I see much more reuse (of my
own code) at the level of Unix command-line applications than at
the class level. My command-line applications live forever, while
my GUI applications are dead on arrival.
There are different boundaries at which a system can be split:
the object boundary, the process boundary, and the machine
boundary. Unix is biased towards multiple decoupled programs
running in separate processes, while the OO tendency is to put
everything into a single process.
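As a hypothetical sketch of what reuse across the process boundary
looks like, here is a small Python filter (the name wordfreq.py and
its behavior are my own invention, not something from the discussion
above) that reads text on stdin and writes word counts on stdout.
Because its only interface is a plain text stream, it can be composed
with programs it knows nothing about.

  #!/usr/bin/env python3
  # wordfreq.py -- a hypothetical Unix-style filter.
  # Reads text on stdin, writes "count<TAB>word" lines to stdout,
  # most frequent word first. It has no knowledge of whatever
  # sits on the other end of the pipe.
  import sys
  from collections import Counter

  def main() -> None:
      counts = Counter(word for line in sys.stdin for word in line.split())
      for word, count in counts.most_common():
          print(f"{count}\t{word}")

  if __name__ == "__main__":
      main()

Reuse then happens at the shell rather than inside any one program:
cat *.txt | ./wordfreq.py | head -10 chains it with cat and head,
neither of which it was written for. The only coupling is the text
stream crossing the process boundary.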
The question is whether the Unix approach tilts the balance
towards more reuse, and why.
I am deeply unmotivated to write code unless I can use it more
than once, yet I feel that I have been writing a lot of code that
gets used only once. This makes me wonder whether I would get more
reuse if I started breaking my applications down into smaller
pieces that each did precisely one thing well.