How The West Was Won

Whatever happened to good, old-fashioned basement hackers? Not the user of the sleek, modern, fast-and-furious Java or C# IDE (YRAINW!)... not the MATLAB rapid-prototyping whiz... not even the server-side-savvy, DCOM/CORBA/DCE-juggling, Ruby/Python/Perl/PHP/ASP/.NET master. I mean the vi or GNU Emacs power user, the guy (or gal) who was equally facile with C and 80x86 assembler, the one who wrote his or her own device drivers and built a pen plotter with a jerry-rigged solenoid. Not with the Hotline server or the Grokster client did they conquer, but with the WWIV and FidoNet BBS; not with "1337 h4x0r" tools, but with DEBUG and Phrack magazine.

My age is showink, innit? #-)

In nonlocal news, here is a sad and disturbing story from angiej, an American schoolteacher:

Where have all the Muslims gone? [updates here (25 Mar 2003) and here (26 Mar 2003)]

This link is courtesy of andrewwyld and mirabehn by way of ixwin. Pray that the internment mentality, such as it is, diminishes soon and that we can find ways to dissipate it for good.

--
Banazir

Comments

andrewwyld
Mar. 27th, 2003 02:25 am (UTC)
68k series assembler did age somewhat.

My Dad used to write in 68000 assembler because we had an Atari ST and compiled (or, worse yet, interpreted) languages just weren't fast enough for what he needed.  Now, compiler optimization is good enough that C will do the job pretty much as well as assembler and with much less pain, and computers like the ST are mostly a fond but distant memory.

I liked that computer.  I learned to program (FaST Basic, which is what we had) on it.

Ultimately, it comes down to the fact that it's cheaper for computer manufacturers to buy more memory and bigger chips than to wait the extra year for development ... and, of course, as systems get bigger, the tools you need to attack them must get bigger, too.
banazir
Mar. 30th, 2003 06:51 pm (UTC)
Innerspace and innertime...
... ahh, the joys of code bloat. ("Judge me by my size, do you?")

Think aboat it: code bloat

  • raises the visibility threshold of computer risks (viruses, intrusion)

  • makes previously single-developer projects so unwieldy that they can no longer be managed by one person

  • renders a nice Moore's Law application (18-month doubling time for CPU clock speed; 12 for RAM capacity; 9 for graphics hardware specs; and about 4-6 for network bandwidth) utterly futile; see the rough arithmetic sketch just after this list

  • lines the pockets of [your favorite Silicon Valley exec here]

  • gives us software engineering researchers more cannon fodder :-)
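
Purely as illustration, here is a tiny C sketch of the compounding those doubling times imply; the figures are the ones quoted in the bullet above (not measurements), the three-year horizon is an arbitrary choice, and the whole thing is back-of-the-envelope arithmetic rather than a claim about real hardware.

    /* Back-of-the-envelope compounding: growth factor = 2^(months / doubling time).
     * Doubling times are the ones quoted above; the horizon is arbitrary. */
    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
        const double months = 36.0;  /* an arbitrary three-year horizon */
        const struct { const char *what; double doubling_months; } trend[] = {
            { "CPU clock speed",    18.0 },
            { "RAM capacity",       12.0 },
            { "graphics hardware",   9.0 },
            { "network bandwidth",   5.0 },  /* midpoint of the 4-6 quoted range */
        };

        for (size_t i = 0; i < sizeof trend / sizeof trend[0]; ++i)
            printf("%-18s grows ~%.0fx in %.0f months\n",
                   trend[i].what,
                   pow(2.0, months / trend[i].doubling_months),
                   months);
        return 0;
    }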



I saw one PBS (Merkian "public broadcasting") special on the rise and fall of Netscape, and towards the time of the AOL buyout, after Marc left, there were programmers working at the company who were giving carefully reasoned arguments against formal methods and in favor of the doomed guy approach:

1. Run the error visualization system on the nightly build.
2. Sort the errors by programmer responsibility/jurisdiction.
3. Declare the programmer with the most errors to be the "doomed guy" or "doomed gal".
4. Go find him or her (out snowboarding or otherwise recreating).
5. Force the programmer to correct errors until no longer "doomed".
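
In the spirit of steps 2 and 3, a throwaway C sketch of how the nightly tally might pick its victim; the names and error counts are invented, and a real build system would of course supply them.

    /* Pick the "doomed" programmer: whoever owns the most errors in tonight's
     * build.  Data are invented for illustration. */
    #include <stdio.h>

    struct tally { const char *who; int errors; };

    int main(void)
    {
        struct tally nightly[] = {
            { "alice", 3 }, { "bob", 17 }, { "carol", 5 },
        };
        int n = (int)(sizeof nightly / sizeof nightly[0]);
        int doomed = 0;

        for (int i = 1; i < n; ++i)
            if (nightly[i].errors > nightly[doomed].errors)
                doomed = i;

        printf("doomed: %s (%d errors) -- go find them on the slopes\n",
               nightly[doomed].who, nightly[doomed].errors);
        return 0;
    }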

The end of Netscape-That-Was came while the late, great E. W. Dijkstra was still with us. To paraphrase one of his famous lectures:

We could, for instance, begin with cleaning up our language by no longer calling a bug "a bug" but by calling it an error. It is much more honest because it squarely puts the blame where it belongs, viz., with the programmer who made the error. The animistic metaphor of the bug that maliciously sneaked in while the programmer was not looking is intellectually dishonest, as it disguises that the error is the programmer's own creation. The nice thing of this simple change of vocabulary is that it has such a profound effect. While, before, a program with only one bug used to be "almost correct," afterwards a program with an error is just "wrong."

(Thanks to John Regehr for archiving this gem on his quotes page.)

--
Banazir
andrewwyld
Mar. 31st, 2003 02:27 am (UTC)
Good thoughts.  I'd add the following:

There are two kinds (and isn't it always two?) of bug.  There is the kind where something functions incorrectly but behaves gracefully, and the kind where something functions correctly but behaves disgracefully, typified in the first instance by a function which gives the wrong answer and in the second by a function that causes a memory leak or some such.
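
A minimal C illustration of the two kinds, with both functions invented for the sake of the example:

    #include <stdlib.h>
    #include <string.h>

    /* Kind 1: functions incorrectly, behaves gracefully.  The answer is
     * quietly off by one, but nothing else goes wrong. */
    int count_chars_wrong(const char *s)
    {
        return (int)strlen(s) + 1;   /* mistakenly counts the terminating NUL */
    }

    /* Kind 2: functions correctly, behaves disgracefully.  The answer is
     * right, but the scratch buffer is never freed. */
    int count_chars_leaky(const char *s)
    {
        char *copy = malloc(strlen(s) + 1);
        if (copy == NULL)
            return -1;
        strcpy(copy, s);
        return (int)strlen(copy);    /* correct result; 'copy' leaks */
    }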

Obviously, checking for the second is possible using automated testing, whereas checking for the first is not -- incorrect behaviour for one function may be correct behaviour for another.  Here we hit spec bugs.

Spec bugs introduce a third possible cause of error: two functions that each work and are well-defined, but where the definition of one is inherently incompatible with that of the other.  For example, one function may pass a pointer straight through to another, expecting that function to trap null pointers, while the second function may not expect any pointer passed to it to be null.  It is obviously wasteful and slow to check pointers for nullity twice, and checking them in many functions increases code size.  It is also wasteful to pass a null pointer down through many levels of hierarchy only to have it handed back, feeding it through the corresponding tiers of error checking.  This is, of course, still preferable to a crash, which is why everyone should program defensively; but less trivial incompatibilities between functions can also occur, and those cannot be avoided by "automatic" fixes like pointer checking.
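
A hypothetical C sketch of that particular mismatch; neither function comes from a real codebase, and the names are invented:

    #include <stddef.h>
    #include <string.h>

    /* Inner layer: its (unwritten) spec assumes 'name' is never NULL. */
    static size_t record_name_length(const char *name)
    {
        return strlen(name);              /* crashes if handed NULL */
    }

    /* Outer layer: forwards the pointer untouched, expecting the inner
     * layer to trap NULL.  Each assumption is sensible on its own;
     * together they are a spec bug. */
    size_t lookup_name_length(const char *name)
    {
        return record_name_length(name);
    }

    /* Defensive variant: one agreed-upon check at the boundary, so a NULL
     * never has to ripple down through every tier of the hierarchy. */
    size_t lookup_name_length_defensive(const char *name)
    {
        if (name == NULL)
            return 0;                     /* or report an error to the caller */
        return record_name_length(name);
    }

The defensive variant costs exactly one check at an agreed boundary, which is the sort of cross-function agreement a spec has to pin down.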

This means that pointing at a doomed programmer may be unfair, since his or her doom may have been sealed by a doomed project manager.

I had a larger point to make, but I've lost the pointer to it.
andrewwyld
Mar. 31st, 2003 02:35 am (UTC)
Ah yes.

I suspect that, with good design, errors like memory leaks and buffer overruns may die out quite soon -- pro dev tools are getting quite smart and can find these things for you, and of course the garbage collector concept from Java (if written well) would, in theory, eliminate memory leakage forever.  Of course some idiot would never drop any references and still end up consuming system resources at a ferocious rate, but at least that memory would be accessible, and therefore potentially useful.
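
For what it's worth, here is a small invented C analogue of both situations: the first block loses its last pointer, the classic leak that a checker such as Valgrind reports as "definitely lost" and that tooling can catch automatically; the second is parked in a global list that is never pruned, so it stays reachable, much like never-dropped references under a garbage collector: not leaked in the strict sense, but still consumed.

    #include <stdlib.h>

    static void *kept_forever[1024];      /* "references" we never drop */
    static int   kept_count;

    int main(void)
    {
        void *lost = malloc(64);
        lost = NULL;                      /* last pointer gone: a true leak */
        (void)lost;

        if (kept_count < 1024)
            kept_forever[kept_count++] = malloc(64);  /* reachable, never freed */
        return 0;
    }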

Buffer overruns were the cause of most of the security holes in Windows that I heard about while I was there, and are mostly the result of not dealing with bounds checking properly.  Again, this could be automated quite easily.  This is the logical extension of having a well-defined string library, stream library, socket library and so on -- more and more of the stuff that everyone needs is done by the machine, and since everyone needs it, you can allocate more resources to making these bits bug free.
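
A before-and-after sketch of the bounds-checking point; the buffer size and function names are invented:

    #include <stdio.h>
    #include <string.h>

    void greet_unsafely(const char *user_supplied)
    {
        char buf[16];
        strcpy(buf, user_supplied);       /* overruns buf past 15 characters */
        printf("hello, %s\n", buf);
    }

    void greet_safely(const char *user_supplied)
    {
        char buf[16];
        /* A bounded copy: the sort of check a well-defined string library
         * (or the tooling around it) can supply once, so that every
         * programmer does not have to remember it. */
        snprintf(buf, sizeof buf, "%s", user_supplied);
        printf("hello, %s\n", buf);
    }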

On a completely separate point, the entire purpose of the software industry is to line the pockets of Silicon Valley executives.  The entire purpose of software is something quite different.  This distinction is very important, and is a bit like that between selfish genes and (sometimes) co-operative humans ... execs will get away with whatever they can, and sometimes we who need software get something out of the deal, too.  To see any change in the model of how software is built, we have to make it fiscally optimal to build it that way ....