- Andresen (operating systems, wireless and distributed computing)
- Banerjee (programming languages, language-based security, type theory)
- Singh (distributed systems and networking)
- Wallentine (high-performance computing and parallel algorithms)
This course features topics ranging from ethics, history, and social issues to network security and language-based security techniques.
Today's introduction by Wallentine featured:
- some motivation: National Research Council articles on cybersecurity
- history: the Morris Worm and the establishment of CERT, Y2K, 9/11, the Patriot Act
- broad treatment of risks and costs
- common vulnerabilities: network-based attacks (port-scanning), social engineering
- subtopics of cybersecurity: authentication, covert channels, cryptology (encryption and information flow)
- general principles: publicity, response techniques, jurisprudence, synergy in defense
- general pitfalls: security through obscurity, "ignorance is bliss", the lay user
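The network-based attacks mentioned in the list above can be made concrete: a TCP connect scan is about the simplest port-scanning technique. Here is a minimal sketch - the `scan_ports` name and its parameters are my own invention, and naturally one should only probe hosts one administers:

```python
import socket

def scan_ports(host, ports, timeout=0.2):
    """Return the subset of `ports` that accept a TCP connection on `host`."""
    open_ports = []
    for port in ports:
        # connect_ex returns 0 on success instead of raising an exception,
        # which makes it convenient inside a scan loop
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:
                open_ports.append(port)
    return open_ports
```

A real scanner (nmap, say) uses half-open SYN scans and timing tricks, but the principle is the same: the attacker only needs cheap probes like this to map a target.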
The cost of security: dollars, hours, and intangibles
Speaking of costs, Virg quoted an axiom to the effect that "the defender must spend $1 more than the sum of all attackers' expenditures". I realize that "dollar" as a representative measure of effort is probably meant figuratively, but this struck my skeptical nerve a bit. To wit, it seems optimistic.
For one thing, the economy of scale for script kiddies can be disproportionate: most attackers reuse code heavily, can employ the target's own networks as a resource (e.g., to spread viruses), can use brute-force avenues of attack such as sneakernet and social engineering, and can distribute an attack. On the other hand, the defender can employ defense in depth, or isolate the "clean network" from the internet entirely (a brute-force method, but one that masaga reports his summer 2005 internship employer used).
I'll go out on a limb and suggest that it probably costs less in practice to develop a secure kernel according to a tight formal specification than to continually patch a huge monstrosity of bloatware with system updates. Hrm, whatever could I be talking about? ;-)
I'll also assert that though the principle that Virg outlined, that "one must close all vulnerabilities while the attackers need only find one", is true, it isn't always the case that dollars can be compared in linear proportion. What is the value of a "dollar" spent on certifying an incremental kernel source patch versus securing one, or a hundred, systems? An exploit has a certain negative utility if first found by crackers and a certain positive utility if first found by administrators, but depending upon the secrecy and sensitivity required for the system, it isn't necessarily a one-to-one tradeoff.
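The non-linear tradeoff can be sketched with a toy expected-value calculation. Every number below is invented for illustration; the point is only that the comparison between attacker and defender "dollars" need not be linear:

```python
# Toy expected-value model of the patch-vs-exploit tradeoff.
# All figures are hypothetical.
p_attacker_first = 0.3        # chance crackers find the hole first
loss_if_exploited = 500_000   # cleanup, downtime, and intangibles ($)
audit_cost = 20_000           # cost of the audit/certification that finds it ($)

expected_cost_no_audit = p_attacker_first * loss_if_exploited
expected_cost_with_audit = audit_cost  # the audit closes the hole first

# The audit pays off exactly when audit_cost < p_attacker_first * loss_if_exploited,
# a threshold that depends on the secrecy and sensitivity of the system,
# not on a one-to-one exchange of effort with the attacker.
```

With these made-up figures the audit is the clear win, but shift the probability or the sensitivity of the system and the ledger changes, which is exactly the point.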
Another topic we touched on during the hour was open source as a double-edged sword. As many others have pointed out in the past, and as I alluded to during the debate about open source with zengeneral a few months ago, one putative benefit of open source is that it gives you the only self-sufficient basis for trusting your software: the ability to verify it. The downside is that, in practice, it comes down to how much incentive there is for each activity: just because the potential exists in equal measure does not mean there is equal interest in verification, or auditing, versus malicious review of the source in order to discover vulnerabilities.
I wonder if there is any potential application of so-called discovery informatics to the activities of tiger teams - security consultants employed by companies to deliberately compromise the system and find vulnerabilities, then prescribe remedies. (I learned today that tiger teams are also called red teams, a term I had not heard before.)
What's wrong with testing?
By "self-sufficient", I mean that you aren't just doing black box (functional case-based) testing or even white box (structural) testing. You are really verifying the system formally rather than validating it. Operational equivalence and bisimulation are all well and good, but security is one area that affords as little uncertainty as possible. (Didn't think you'd hear that from a probabilist, did you? ;-))
What's wrong with testing? Well, nothing, when you can do it - but no matter how efficient and exhaustive your approach is, and no matter how phenomenally good your abstractions are, you ultimately run up against scalability issues. This makes for a curious balance of approaches in language-centric departments such as ours: on the one hand you have the formalists (Amtoft, Banerjee, Schmidt, Stoughton), who favor type systems, static analysis, and formal verification; on the other hand you have the "software engineering and languages" researchers (DeLoach, Dwyer, Hatcliff, Robby), who develop the spec tools and the model checkers, and have more effort vested in testing and dynamic analysis. The difference in our department, as opposed to those such as Illinois, Hopkins, or Yale, where this is a more marked split, is that everybody in the above lists is versed in denotational semantics and specification techniques. The most practical person I named above knows as much about specification logics, modal logics, and models of state (syntactic control of interference, etc.) as most pure semanticists among my former classmates (Trifonov, Lakshman, League, Beckman).
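The scalability wall is easy to quantify for concurrent systems: the number of distinct interleavings of n threads, each performing k atomic steps, is the multinomial (nk)!/(k!)^n, which outruns any test suite almost immediately. A quick back-of-the-envelope sketch:

```python
from math import factorial

def interleavings(threads, steps_each):
    """Count the distinct interleavings of `threads` sequential processes,
    each performing `steps_each` atomic steps: (n*k)! / (k!)^n."""
    n, k = threads, steps_each
    return factorial(n * k) // factorial(k) ** n

# Growth for 5 atomic steps per thread:
#   2 threads ->         252 interleavings
#   3 threads ->     756,756
#   4 threads -> 11,732,745,024 - already hopeless for exhaustive testing
for t in (2, 3, 4):
    print(t, interleavings(t, 5))
```

This is the state-explosion problem in miniature, and it's why the model checkers lean so hard on abstraction and partial-order reduction rather than brute enumeration.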
A digression: the Software Group at Illinois, early to late 1990s
Note that I say balance of approaches. There isn't really a line, whereas in chambana there was very much a line. If you remember how the new Digital Computer Lab was laid out, the whole catwalk that divided the "theorist wing" (Kamin, Reddy) from the "grunge wing" (Johnson, Padua, other compilers people) was very clearly demarcated. The line-crossers (Chien, Kale) were well-regarded on both sides, but I remember going to group meetings where they, and the grad students they could drag to the "clean C++" and "clean CHARM++" discussions, were the only overlapping participants. Sometimes I think back and wonder whether having them on different sides was by way of a demilitarized zone! But seriously - there were some folks in the middle in the Software Group. Agha, Harandi, and Jane Liu were laws unto themselves, though for different reasons. (Agha was more of a languages generalist, I guess; Harandi, one of my committee members, was rather formal but had students who ran the whole gamut, and was himself about half AI; and Jane Liu is a giant of real-time systems who could have straddled any categorization you or I might postulate.) Also, I'd be remiss if I didn't mention that even when we worked for the hardcore type theorists, Matt Beckman, Howard Huang, Jon Springer, and I shared an office with the grads of both the grunge professors and the formal freaks. I couldn't tell you exactly who was who: I remember Bill Harrison, Ian Chai, and a generation of Johnson's cadre passing through that room when I was there (c. 1993-1994).
I also couldn't tell you how it is now, at the Siebel Center. Is anyone reading this who's been there recently?
Well, gosh, it's late, and I've rambled for an hour. In any case, the cybersecurity course is an interesting and eclectic one. Being a "sampler", I tend to enjoy technical courses and seminars that are team-taught by a rotating group of faculty members. This should be interesting.
The MacMag Virus
One last thing: Today I cited the MacMag virus of March 1988 as a high-profile historical cybersecurity incident. The first recorded case of a virus being shipped with a shrink-wrapped commercial software application (Aldus FreeHand, which eventually became an Adobe property), the case gave its instigator, publisher Richard Brandow of the Montreal publication MacMag, quite a lot of negative publicity. Brandow was never sued or charged with a crime, because the payload of the virus was benign and because no laws at the time were deemed applicable. Times have changed, indeed.
The virus is mentioned in this Computer Virus Tutorial - in particular Robert Slade's history of computer viruses contains a chapter about it.