I have not had any instruction in formal methods, so I really didn't encounter specification languages in my coursework, but I have read papers on proving programs correct, including some classic papers by Hoare and Dijkstra, and I was familiar with Eiffel contracts.
At Harvard, I audited the first couple of lectures of a graduate-level computer science course in Software Engineering, but never encountered modeling and specification in those initial lectures. (By the way, this instructor, a guest lecturer from industry, following custom, recounted his prior interactions with Bill Gates. He did indeed have Bill as a student, and proudly proclaimed in his opening lecture that he had given him a B years ago, leaving us to form our own conclusions. Having earlier said that the Macintosh architecture was superior, his agenda was not in doubt. It didn't seem ethical for him to publish such confidential information, but then celebrities have fewer privacy rights.)
But, independently, I did arrive at a desire to express programs in a more mathematical way.
My first observation was that pseudo-code programs are actually real, executable programs. The typical obstacles are the use of potentially large sets as values and the use of search or minimization functions across those sets. The pseudocode for solving 2SAT above incorporates a nondeterministic choose operation and several set operations.
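Since the 2SAT pseudocode itself is not reproduced here, a minimal Python sketch illustrates the point that such pseudocode is directly executable: the nondeterministic choose over a (potentially large) set of truth assignments can be realized naively as exhaustive search. The clause encoding below is my own assumption, not the original pseudocode's.

```python
from itertools import product

def solve_2sat(clauses, variables):
    """Brute-force realization of a nondeterministic 'choose' over the set
    of all truth assignments. Each clause is a list of (variable, negated?)
    literal pairs; a literal holds when assignment[var] != negated."""
    for values in product([False, True], repeat=len(variables)):
        assignment = dict(zip(variables, values))
        if all(any(assignment[v] != neg for v, neg in clause)
               for clause in clauses):
            return assignment
    return None  # unsatisfiable

# (x1 or not x2) and (not x1 or x2)
clauses = [[("x1", False), ("x2", True)], [("x1", True), ("x2", False)]]
print(solve_2sat(clauses, ["x1", "x2"]))
```

Of course, the well-known efficient 2SAT algorithm runs in linear time; the point here is only that the declarative "choose a satisfying assignment" reading executes as written, just not efficiently.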
While pseudo-code programs lack the implementation detail needed to execute in the most efficient way, such information could actually be inferred automatically by a compiler or, when ambiguous time-space tradeoffs are involved, included explicitly by the programmer as hints or performance guarantees to the compiler (in the form of contracts or attributes). There are actually very few core algorithms, which can be permuted in many ways.
Over the past couple of years, I have become better acquainted with various specification languages, typically but not exclusively used in software verification. My interest in these languages rests in their similarities to the work I am doing with my own expression language (which I will henceforth refer to as "W"), which allows me to write programs in a more succinct mathematical style that nonetheless performs efficiently.
Many of these specification languages didn't actually exist during my college years, so I can't really fault my education. Joel, some time ago, talked at Yale about a software engineering course in Dynamic Logic, which turned him off from pursuing graduate school in Computer Science.
Last year, as part of my continual self-improvement, I proceeded through course material on software verification. I purchased and read Software Abstractions: Logic, Language, and Analysis, which presents the Alloy specification language, and also encountered the Z Notation Reference Manual online. The reading left me with the impression that, at least in some methodologies, systems are designed first as a model program, written in a specification language such as Z, which is checked either manually, with proof assistants, or with model-checking software, and then subsequently or concurrently ported into actual software. My self-study involved following a course syllabus, purchasing books on its reading list, and reviewing course notes, presentations, and other material; it lasted a few days. It made me more acquainted with the use of temporal logic in software verification--my prior exposure to temporal logic was in the context of natural language semantics--and also with the use of the mu calculus in model checking, the technology that won its pioneering researchers the ACM Turing Award last year.
Over the years, I have subscribed to and read papers from Microsoft Research, one of the few Microsoft feeds that I still read outside of MSDN blogs. Some papers refer to Horn-clause based languages, pointing to the use of Prolog-equivalent systems. Microsoft dedicates significant resources to formal methods and verification. I have come across a couple of its languages for specification: AsmL (Abstract State Machine Language) and TLA+ (Temporal Logic of Actions).
Microsoft's AsmL also shares many of my goals, except that it is based on operational semantics rather than denotational semantics.
AsmL is useful in any situation where you need a precise, non-ambiguous way to specify a computer system, either software or hardware. AsmL specifications are an ideal way for teams to communicate design decisions. Program managers, developers, and testers can all use an AsmL specification to achieve a single, unified understanding.
One of the greatest benefits of an AsmL specification is that you can execute it. That means it is useful before you commit yourself to coding the entire system. By exploring your design, you can answer the following questions: Does it do everything you intended it to? How do the features interact? Are there any unintended behaviors?
I was not initially fond of AsmL, but I do admire its resemblance to pseudocode. Below is an example specification for in-place sorting.
This program nondeterministically chooses two array items to swap until the array is sorted. The choose operation closely resembles the pseudocode earlier. The loop continues until a fixed point is reached, at which the next iteration of the loop produces no further state changes; technically, though, the choose statement will actually fail before that occurs. By withholding detail, the algorithm can, according to the AsmL manual, inclusively characterize both quicksort and bubblesort; however, unstable sorts like quicksort are arguably ruled out by the specification, since distinct items that compare "equivalent" may be reordered by them but never by the specification.
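Since the AsmL listing is not reproduced in this text, here is a Python analogue of the specification as described above--a sketch of my own, not the manual's code: repeatedly choose any out-of-order pair and swap it, stopping when the choose fails.

```python
import random

def nondet_sort(a):
    """Analogue of the AsmL sort spec: while some pair (i < j) with
    b[i] > b[j] exists, nondeterministically choose one and swap it.
    Each swap removes at least one inversion, so the loop terminates."""
    b = list(a)
    while True:
        candidates = [(i, j)
                      for i in range(len(b))
                      for j in range(i + 1, len(b))
                      if b[i] > b[j]]
        if not candidates:               # 'choose' fails: b is sorted
            return b
        i, j = random.choice(candidates)  # the nondeterministic choice
        b[i], b[j] = b[j], b[i]

print(nondet_sort([3, 1, 2]))  # → [1, 2, 3]
```

Whatever sequence of choices is made, the result is the same sorted array; only the path through the state space differs, which is exactly what lets one underspecified program cover several concrete sorting strategies.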
I discovered TLA+ through a report that Microsoft's Oslo modeling language was based on TLA+ (Temporal Logic of Actions), although I haven't picked up on the similarities in any of the published Oslo information thus far. Joe Duffy recently also wrote about an internal research talk on TLA+ ("Math is nature's way of letting you know how sloppy your writing is") and pointed to a TLA+ website containing more information. TLA+ seems a bit unfriendly, and while it shares with my current work the integration of modal operators, its design shows that there are multiple ways to express the same idea. Don Box mentioned that one of the TLA+ architects is working in the Oslo group, but I sincerely hope that TLA+ is not Oslo's future. It's clear to me that Oslo will eventually evolve into a richer programming model, potentially doing away with the code-behind XAML model.
My recent readings over the past few months included Software Engineering 1: Abstraction and Modelling and Software Engineering 2: Specification of Systems and Languages, which present the RAISE Specification Language, or RSL. I actually prefer this language overall, and it seems most similar to the "W" I have been developing for so long--and initially conceived in college.
One beneficial aspect of this exercise is seeing what operators I am missing. I noticed that RSL and W both include nondeterministic expressions. While I have a single nondeterministic choice operator, RSL includes two with fine distinctions, plus a related parallel operator. However, RSL does not have a Kleene star. I out-innovated RSL in generalizing the full set of regular expression operations (nondeterministic choice, Kleene star) over function composition rather than symbol concatenation. This allows me to represent and infer object invariants. I have only seen this done within the formal systems Propositional Dynamic Logic and Interval Temporal Logic.
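To make the idea concrete, here is a small sketch--my own illustration, not W's or RSL's actual syntax--of regular-expression operators lifted from symbol concatenation to function composition: sequencing plays the role of concatenation, and the Kleene star becomes "apply zero or more times," here interpreted as iteration to a fixed point.

```python
def seq(f, g):
    """Sequencing of functions: the analogue of concatenating two symbols."""
    return lambda x: g(f(x))

def star(f):
    """Kleene star over composition: apply f repeatedly, here until the
    result stops changing (a fixed point), i.e. f applied 'zero or more' times."""
    def repeat(x):
        while True:
            y = f(x)
            if y == x:
                return x
            x = y
    return repeat

# Example: repeated integer halving reaches the fixed point 0.
halve = lambda n: n // 2
print(star(halve)(40))  # → 0
```

Under this reading, a state that is preserved by `star(f)` for every method `f` of an object is exactly an invariant, which is the connection to representing and inferring object invariants mentioned above.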
I have also become acquainted with other efforts such as CASL (Common Algebraic Specification Language) and VDM. VDM was the subject of a classic paper; developed at IBM in the 1970s, it is quite possibly the first specification language, and there are now numerous tools for it, including one that generates executables directly from the specification. Fred Brooks was probably influenced by this language when writing his widely circulated essay "No Silver Bullet" (reprinted in The Mythical Man-Month), which states that specifications, among other things, will never be a software engineering panacea.
The goals and designs of the various specification languages diverge, but they help me to understand and refine the language W that I have built. My effort with W has been informed by exposure to their alternative evaluation models and yet common declarative presentation.