- Andresen (operating systems, wireless and distributed computing)
- Banerjee (programming languages, language-based security, type theory)
- Singh (distributed systems and networking)
- Wallentine (high-performance computing and parallel algorithms)
This course features topics ranging from ethics, history, and social issues to network security and language-based security techniques.
Today's introduction by Wallentine featured:
- some motivation: National Research Council articles on cybersecurity
- history: the Morris Worm and the establishment of CERT, Y2K, 9/11, the Patriot Act
- broad treatment of risks and costs
- common vulnerabilities: network-based attacks (port-scanning), social engineering
- subtopics of cybersecurity: authentication, covert channels, cryptology (encryption and information flow)
- general principles: publicity, response techniques, jurisprudence, synergy in defense
- general pitfalls: security through obscurity, "ignorance is bliss", the lay user
The cost of security: dollars, hours, and intangibles
Speaking of costs, Virg quoted an axiom to the effect that "the securer must spend $1 more than the sum of all attackers' expenditures". I realize that "dollar" as a representative measure of effort is probably meant figuratively, but this still struck my skeptical nerve: it seems optimistic.
For one thing, the economy of scale for script kiddies can be disproportionate: most attackers heavily reuse code, can employ the target's own networks as a resource (e.g., to spread viruses), can use brute-force avenues of attack such as sneakernet and social engineering, and can distribute an attack across many hosts. On the other hand, the defender can employ defense-in-depth, or isolate the "clean network" from the internet (a brute-force method, but one that works where connectivity can be sacrificed).
I'll go out on a limb and suggest that it probably costs less in practice to develop a secure kernel according to a tight formal specification than to continually patch a huge monstrosity of bloatware with system updates. Hrm, whatever could I be talking about? ;-)
I'll also assert that although the principle Virg outlined - that the defender must close all vulnerabilities while the attackers need only find one - is true, it doesn't follow that defense and attack dollars compare in linear proportion. What is the value of a "dollar" spent on certifying an incremental kernel source patch versus securing one, or a hundred, systems? An exploit has a certain negative utility if first found by crackers and a certain positive utility if first found by administrators, but depending upon the secrecy and sensitivity required for the system, it isn't necessarily a one-to-one tradeoff.
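To make that nonlinearity concrete, here is a back-of-the-envelope sketch. The model, the expected_loss function, and every number in it are hypothetical, invented purely for illustration; nothing here came from the lecture.

```python
# Toy model of the defender/attacker spending asymmetry discussed above.
# The functional form and all figures are hypothetical illustrations.

def expected_loss(defense_spend, attack_spend, damage):
    # Assume the chance the crackers find the exploit first decays with
    # the ratio of defense to attack spending (a made-up but monotone model).
    p_cracker_first = attack_spend / (attack_spend + defense_spend)
    return p_cracker_first * damage

DAMAGE = 1_000_000      # hypothetical cost of a successful exploit
ATTACK_SPEND = 50_000   # hypothetical total attacker expenditure

for defense in (10_000, 100_000, 1_000_000):
    loss = expected_loss(defense, ATTACK_SPEND, DAMAGE)
    print(f"defense ${defense:>9,}: expected loss ${loss:>9,.0f}")
```

Even in this crude model, outspending the attackers dollar-for-dollar doesn't drive the expected loss to zero, and each additional defense dollar buys less than the last; the "spend $1 more" axiom only makes sense as a figure of speech.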
Another topic we touched on during the hour was open source as a double-edged sword. As many others have pointed out in the past, and as I alluded to during the debate about open source with
I wonder if there is any potential application of so-called discovery informatics to the activities of tiger teams - security consultants employed by companies to deliberately compromise a system, find vulnerabilities, and then prescribe remedies. (I learned today that tiger teams are also called red teams, a term I had not heard before.)
What's wrong with testing?
By "self-sufficient", I mean that you aren't just doing black box (functional case-based) testing or even white box (structural) testing. You are really verifying the system formally rather than validating it. Operational equivalence and bisimulation are all well and good, but security is one area that affords as little uncertainty as possible. (Didn't think you'd hear that from a probabilist, did you? ;-))
What's wrong with testing? Well, nothing, when you can do it - but no matter how efficient and exhaustive your approach is, and no matter how phenomenally good your abstractions are, you ultimately run up against scalability issues. This makes for a curious balance of approaches in language-centric departments such as ours: on the one hand you have the formalists (Amtoft, Banerjee, Schmidt, Stoughton), who favor type systems, static analysis, and formal verification; on the other hand you have the "software engineering and languages" researchers (DeLoach, Dwyer, Hatcliff, Robby), who develop the specification tools and the model checkers, and who have more vested effort in testing and dynamic analysis. The difference between our department and those such as Illinois, Hopkins, or Yale, where the split is more marked, is that everybody in the above lists is versed in denotational semantics and specification techniques. The most practical person I named above knows as much about specification logics, modal logics, and models of state (syntactic control of interference, etc.) as most pure semanticists among my former classmates (Trifonov, Lakshman, League, Beckman).
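And since I invoked type systems as the formalists' tool of choice, here is a drastically simplified sketch in the spirit of Volpano-Smith security typing: reject any program that assigns High (secret) data to a Low (public) variable. The little language, the check function, and the variables are my own toy reconstruction, not any system from the course or from the department's tools.

```python
# Toy security-type checker over a two-point lattice Low <= High.
# Programs are straight-line assignments: (target, [source variables]).

HIGH, LOW = "High", "Low"

def lub(a, b):
    # Least upper bound in the lattice Low <= High.
    return HIGH if HIGH in (a, b) else LOW

def check(program, env):
    for target, sources in program:
        rhs = LOW
        for src in sources:
            rhs = lub(rhs, env[src])     # security level of the right-hand side
        if env[target] == LOW and rhs == HIGH:
            raise TypeError(f"illegal flow: High data assigned to Low variable {target!r}")
    return "well-typed: no explicit High-to-Low flows"

env = {"password": HIGH, "tmp": HIGH, "display": LOW}
ok  = [("tmp", ["password"])]                        # High into High: fine
bad = [("tmp", ["password"]), ("display", ["tmp"])]  # laundering the secret via tmp
print(check(ok, env))
print(check(bad, env))   # raises TypeError on the second assignment
```

A real security type system must also track implicit flows through control structure (branching on a secret); this toy catches only explicit assignments.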
A digression: the Software Group at Illinois, early to late 1990s
Note that I say balance of approaches. There isn't really a line here, whereas at Illinois in the 1990s the split was plainly visible.

I also couldn't tell you how it is now, at the Siebel Center. Is anyone reading this who's been there recently?
Well, gosh, it's late, and I've rambled for an hour. In any case, the cybersecurity course is an eclectic one. Being a "sampler", I tend to enjoy technical courses and seminars that are team-taught by a rotating group of faculty members. This should be interesting.
The MacMag Virus
One last thing: today I cited the MacMag virus of March 1988 as a high-profile historical cybersecurity incident. The first recorded case of a virus being shipped with a shrink-wrapped commercial software application (Aldus FreeHand; Aldus was later acquired by Adobe), the case gave its author, Richard Brandow, editor of the Montreal publication MacMag, quite a lot of negative publicity. Brandow was never sued or charged with a crime, because the payload of the virus was benign and because no laws at the time were deemed applicable. Times have changed, indeed.
The virus is mentioned in this Computer Virus Tutorial; in particular, Robert Slade's history of computer viruses contains a chapter about it.
--
Banazir