Usually, when I mention functional programming to other developers, they unconsciously cross their arms and legs, a sure sign of resistance. These developers are naturally suspicious of yet another new, better programming paradigm, so I have learned to keep my mouth shut. They don’t see the possibility of code becoming more declarative and reading exactly like a specification. My blog traffic diminished after I started talking about programming in a more functional way.
At the MVP Summit, I talked with a developer, a well-known functional programming advocate, about how I merge the notions of code and data.
I mentioned my static analysis tool, NStatic, and how I use an “expressions” data structure everywhere. It is like an immutable abstract syntax tree, but it is also a functional language in which I can evaluate any expression to its normal form. Expressions are used to represent such things as traditional algebraic expressions, code, natural language, and all sorts of other documents.
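To make the idea concrete, here is a minimal sketch of one uniform expression shape (a head symbol plus a tuple of children) representing both an algebraic expression and a document fragment. The `Expr` type and `size` function are my own illustration, not NStatic’s actual structures.

```python
from dataclasses import dataclass

# One immutable shape for everything: a head symbol plus child expressions.
@dataclass(frozen=True)
class Expr:
    head: str
    args: tuple = ()

# An algebraic expression: x + 1
algebra = Expr("+", (Expr("x"), Expr("1")))

# A document fragment: a paragraph containing two text runs
doc = Expr("para", (Expr("text", ("Hello, ",)), Expr("text", ("world",))))

# Because everything shares one shape, a single recursive walk serves all of it.
def size(e):
    return 1 + sum(size(a) for a in e.args if isinstance(a, Expr))

print(size(algebra), size(doc))  # 3 3
```

The payoff of the single shape is that every generic operation, whether counting nodes, pattern-matching, or evaluating to a normal form, only has to be written once.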
Document Operations & Transforms
In the case of a word processor that I am also developing, documents are represented as expressions. In addition, document operations are constructed as functions in my expression language, which are then applied to the document expression to return an entirely new expression.
The advantage of this approach is that I can support application-wide functionality by applying a transform to all document operations in my application. Transforms are functions that convert any function into a new function by recursively walking through an expression and applying changes based on pattern-matching (very much like a derivative or Fourier transform). Control-flow operators like lambdas and conditionals themselves have higher-order transformations that work with any function, and thus any transform.
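The following is a hedged sketch of the idea of a transform: given a local rewrite rule, produce a function that recursively walks any expression and applies the rule wherever it matches. The `Expr` type, `transform` combinator, and the logging rule are illustrative names of my own, not the post’s real code.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Expr:
    head: str
    args: tuple = ()

def transform(rule):
    """Lift a local rewrite rule into a whole-tree transform."""
    def walk(e):
        if not isinstance(e, Expr):
            return e
        # Rebuild the node bottom-up, then give the rule a chance to rewrite it.
        rebuilt = Expr(e.head, tuple(walk(a) for a in e.args))
        return rule(rebuilt) or rebuilt  # rule returns a new Expr, or None
    return walk

# Example rule: wrap every "delete" operation so it gets logged.
def log_deletes(e):
    if e.head == "delete":
        return Expr("logged", (e,))
    return None

# A compound document operation: insert some text, then delete a character.
op = Expr("seq", (Expr("insert", ("hi",)), Expr("delete", (3,))))
print(transform(log_deletes)(op))
```

Because the transform is a function from functions to functions, the same `log_deletes` rule could be applied once to every document operation in the application, which is the application-wide leverage described above.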
Transforms can only easily be implemented over code written in a functional style. Code in other paradigms would first need to be converted to a functional representation, as I have done with NStatic, and then converted back (or retained) after the transformation. A common refrain among FP enthusiasts is that functional code is easier to prove, but the real benefit is that functional code is easier for a computer to analyze and manipulate; that is, easier to transform.
Have you ever written an application in which the same transformation needed to be manually coded into a number of different classes? The transform is stored inside our heads and then mentally applied to each method.
Such transformations could not previously be automated, because functions were opaque and language mechanisms like virtual functions are too coarse. With transforms, new application-wide functionality can be written just once, not interleaved within code in several other unrelated classes. I can also easily integrate third-party functionality or externalize features this way, because the transformations are not burned in at compile time.
In Microsoft Secrets, Michael Cusumano wrote about Excel's "Am I Done Yet" list on page 319, a list of concerns that must be considered for every new application feature added:
Microsoft projects have also been compiling metric-based checklists to help determine feature and product completion. For example, Excel has a six-page list of criteria entitled "Am I Done Yet?" This groups completion criteria into twenty-six categories, such as menu commands, printing, interaction with the operating system, and application interoperability (see Table 5.4). A program manager, developer, or tester uses the checklist to help evaluate whether a feature is complete.
I wonder how many of the items in the entire "Am I Done Yet" list would just disappear if program features were written using transforms. Probably a lot. Transforms would make things like revision marking, selection tracking, and simultaneous editing much easier to write. The application developer could focus on writing the essential feature and rest assured that transforms would take care of much of the rest.
Code & Data and Lisp
The developer I spoke to at the summit then brought up that Lisp supports treating code as data, simply by quoting code and performing explicit evaluation with the “EVAL” operator. Having coded in Lisp in the late ‘80s, I was already familiar with the language.
The problem with Lisp is that it is not as tight and clean as it could be. Unfortunately, the way it mixes code and data is seen as the correct approach, and so people look no further at cleaner alternatives. Lisp still distinguishes between code and data: functions are opaque (their definitions are inaccessible), just as in imperative programming languages. Also, code stored as data can only be executed once all free symbols are removed. For example, (+ x 1 2) would evaluate to an undefined-symbol error or, if the symbol x were quoted, to a type-mismatch error, rather than to the correct result, (+ x 3).
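Here is a small sketch of the kind of evaluation I have in mind: reducing (+ x 1 2) to a normal form in the presence of a free symbol, instead of raising an error. The representation (plain tuples and strings) and the `simplify_add` name are my own illustration.

```python
def simplify_add(args):
    """Fold the numeric arguments of (+ ...) and keep free symbols intact."""
    total = sum(a for a in args if isinstance(a, int))
    syms = [a for a in args if not isinstance(a, int)]
    if not syms:
        return total  # fully ground: reduce to a plain number
    # Otherwise, keep the symbols and the folded constant (dropped if zero).
    return ("+", *syms, total) if total else ("+", *syms)

print(simplify_add(["x", 1, 2]))  # ('+', 'x', 3)
print(simplify_add([1, 2]))       # 3
```

Partial evaluation of this kind is routine in computer algebra systems; the point here is that it could just as well be the default behavior of an ordinary evaluator.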
However, if the standard Lisp functions could already operate on symbols, then both quoting and calling the EVAL function would become unnecessary. In addition, if execution proceeded lazily, Lisp macros would also become unnecessary, as every function would also be a macro.
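The macro claim can be sketched in miniature: if functions received their arguments unevaluated, control forms that normally require macros could be written as ordinary functions. Below, thunks stand in for lazy argument passing; the `unless` name and example are mine.

```python
def unless(cond, then):
    """A control form written as a plain function, thanks to deferred arguments.

    With true lazy evaluation, the lambdas below would be unnecessary:
    the arguments would simply arrive unevaluated.
    """
    if not cond():
        return then()
    return None

x = 0
result = unless(lambda: x > 0, lambda: "x is not positive")
print(result)  # x is not positive
```

A lazy language such as Haskell gets this for free: `unless` there is a library function, not a macro, because no argument is evaluated before it is needed.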