Sequentiality as the Essence of Parallelism

November 4, 2017

I recently thought of a nice way to structure a language for parallel programming around the concept of sequential composition.  Think of parallelism as the default—evaluate everything in parallel unless the semantics of the situation precludes it: sums are posterior to summands, but the summands can be evaluated simultaneously.  You need a way to express the necessary dependencies without introducing any spurious ones.

There’s a tool for that, called lax logic, introduced by Fairtlough and Mendler and elaborated by Davies and Pfenning, which I use extensively in PFPL.  The imperative language Modernized Algol is formulated in the lax style, distinguishing two modes, or levels, of syntax, the (pure) expressions and the (impure) commands.  The lax modality, which links the two layers, behaves roughly like a monad, but, all the hype notwithstanding, it is not the central player.  It’s the modes, not the modality, that matter.  (See the Commentary on PFPL for more.)

The lax modality is just the ticket for expressing parallelism.  Rather than separate expressions from commands, here we distinguish between values and computations.  The names are important, to avoid possible confusion.  Values are fully evaluated; they are not a source of parallelism.  (If values were called “pure”, it would be irresistible to think otherwise.)  Computations have yet to be evaluated; they engender parallelism by sequential composition.  What?  No, you didn’t nod off! Let me explain.

Parallelism is all about the join points.  If parallel execution is the default, then the job of the programmer is not to induce parallelism, but to harness it.  And you do that by saying, “this computation depends on these others.”  Absent that, there is nothing else to say, just go for it.  No sub-languages.  No program analysis.  No escaping the monad.  Just express the necessary dependencies, and you’re good to go.

So, what are the join points?  They are the elimination forms for two parallel modalities.  They generalize the sequential case to allow for statically and dynamically determined parallelism.  A value of parallel product type is a tuple of unevaluated computations, a kind of “lazy” tuple (but not that kind of laziness; here I just mean unevaluated components).  The elimination form evaluates all of the component computations in parallel, creates a value tuple from their values, and passes it to the body of the form.  Similarly, a value of parallel sequence type is a generator consisting of two values, a natural number n indicating its size, and a function determining the ith component computation for each 1 ≤ i ≤ n.  The elimination form activates all n component computations, binds their values to a value sequence, and passes it to the body of the form.
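To make the two join points concrete, here is a minimal sketch in OCaml (the names are mine, not PFPL’s, and “parallel” evaluation is simulated by ordinary sequential calls where a real implementation would schedule the components concurrently):

type 'a comp = unit -> 'a   (* an unevaluated computation *)

(* Join point for a parallel product: evaluate both components "in
   parallel", then pass the resulting value pair to the body. *)
let par_pair (c1 : 'a comp) (c2 : 'b comp) (body : 'a * 'b -> 'c comp) : 'c comp =
  fun () -> body (c1 (), c2 ()) ()

(* Join point for a parallel sequence: the generator gen gives the ith
   component computation; join all n of them into a value sequence. *)
let par_seq (n : int) (gen : int -> 'a comp) (body : 'a array -> 'b comp) : 'b comp =
  fun () -> body (Array.init n (fun i -> gen i ())) ()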

The join point effects a change of type, from encapsulated computations to evaluated values, neatly generalizing sequential composition from a unary to a multiway join.  If you’d like, the parallel products and parallel sequences are “generalized monads” that encapsulate not just one, but many, unevaluated computations.  But they are no more monads than they are in any other functional language: the categorial equational laws need not hold in the presence of, say, divergence, or exceptions.

The dynamics assigns costs to computations, not to values, whose cost of creation has already been paid.  The computation that just returns a value has unit work and span.  Primitive operations take unit work and span.  The sequential composition of a parallel product with n components induces span one more than the maximum span of the constituents, and induces work one more than the sum of their work.  The dynamics of sequential composition for parallel sequences is similar, with the “arity” being determined dynamically rather than statically.
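In symbols (my notation, not PFPL’s), writing w for work and s for span, the join of n component computations obeys the recurrences

$w(\mathrm{join}(c_1, \ldots, c_n)) = 1 + \sum_{i=1}^{n} w(c_i), \qquad s(\mathrm{join}(c_1, \ldots, c_n)) = 1 + \max_{1 \le i \le n} s(c_i).$

For parallel sequences the same recurrences apply, with n determined at run-time by the generator.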

Programming in this style means making the join points explicit.  If you don’t like that, you can easily define derived forms—and derived costs—for constructs that do it for you.    For example, a pair of computations might be rendered as activating a parallel pair of its components, then returning the resulting value pair.  And so on and so forth.  It’s no big deal.
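For instance, in terms of the hypothetical par_pair sketched earlier, the derived “parallel pair” computation is a one-liner, and its cost is exactly that of the join it abbreviates:

let both (c1 : 'a comp) (c2 : 'b comp) : ('a * 'b) comp =
  par_pair c1 c2 (fun v () -> v)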

En passant, the modal formulation of parallelism solves a nasty technical problem in a substitution-based cost semantics that does not make the modal distinction.  The issue is how to distinguish between the creation of a value and the many re-uses of it arising from substitution.  It’s not correct to charge again and again for creating the value each time you see it (this cost can be asymptotically significant), but you do have to charge for creating it somewhere (it’s not free, and it can matter).  And, anyway, how is one to account for the cost of assessing whether an expression is, in fact, a value?  The usual move is to use an environment semantics to manage sharing.  But you don’t have to: the modal framework solves the problem by distinguishing between a value per se; the computation that returns it fully created; and the computation that incrementally constructs it from its constituent parts.  It’s the old cons-vs-dotted-pair issue, neatly resolved.

Please see Section 10 of the Commentary on PFPL for a fuller account.  The main idea is to generalize the type of a single unevaluated computation, which arises in lax logic, to types of statically and dynamically determined numbers of unevaluated computations.  The bind operation becomes a join operation for these computations, turning a “lazy” tuple or sequence into an eager tuple or sequence.

Updates: word-smithing, added cite to Davies-Pfenning, replaced cite of course notes with reference to commentary.


Proofs by contradiction, versus contradiction proofs

March 4, 2017

It is well-known that constructivists renounce “proof by contradiction”, and that classicists scoff at the critique.  “Those constructivists,” the criticism goes, “want to rule out proofs by contradiction.  How absurd!  Look, Pythagoras showed that the square root of two is irrational by deriving a contradiction from the assumption that it is rational.  There is nothing wrong with this.  Ignore them!”

On examination that sort of critique fails, because a proof by contradiction is not a proof that derives a contradiction.  Pythagoras’s  proof is valid, one of the eternal gems of mathematics.  No one questions the validity of that argument, even if they question proof by contradiction.

Pythagoras’s Theorem expresses a negation: it is not the case that the square root of two can be expressed as the ratio of two integers.  Assume that it can be so represented.  A quick deduction shows that this is impossible.  So the assumption is false.  Done.  This is a direct proof of a negative assertion; it is not a “proof by contradiction”.

What, then, is a proof by contradiction?  It is the affirmation of a positive statement by refutation of its denial.  That is, it is a direct proof of the negation of a negated assertion, pressed into service as if it were a direct proof of the assertion itself, which it is not.  Anyone is free to ignore the distinction for the sake of convenience, as a philosophical issue, or as a sly use of “goto” in a proof, but the distinction nevertheless exists and is important.  Indeed, part of the beauty of constructive mathematics is that one can draw such distinctions.  Once drawn, such distinctions can be disregarded; once blurred, forever blurred, irrecoverably.
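The distinction is plainly visible in the types of a proof assistant.  Here is a minimal sketch in Lean 4: a proof of a negation is an ordinary constructive function, whereas a proof by contradiction proper requires a classical axiom.

-- A direct proof of a negation: ¬P is, by definition, P → False,
-- so refuting P is just a constructive function out of P.
example (P : Prop) (h : P → False) : ¬P := h

-- Proof by contradiction proper: passing from ¬¬P to P requires
-- the classical axiom; it has no constructive content.
example (P : Prop) (h : ¬¬P) : P := Classical.byContradiction h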

For the sake of explanation, let me rehearse a standard example of a genuine proof by contradiction.  The claim is that there exist irrationals a and b such that a to the b power is rational.  Here is an indirect proof, a true proof by contradiction.  Let us prove instead that it is impossible that every two irrationals a and b are such that a to the b power is irrational.  This is a negative statement, so of course one proves it by deriving a contradiction from assuming that which is negated.  Suppose, for a contradiction, that for every two irrationals a and b, the exponentiation a to the b power is irrational.  We know from Pythagoras that root two is irrational, so plug it in for both a and b, and conclude that root two to the root two power is irrational.  Now use the assumption again, taking a to be root two to the root two, and b to be root two.  Calculate a to the power of b; it is two, which is eminently rational.  Contradiction.

We have now proved that it is not the case that every pair of irrationals, when exponentiated, gives an irrational.  There is nothing questionable about this proof.  But it does not prove that there are two irrationals whose exponentiation is rational!  If you think it does, then I ask you, please name them for me.  That information is not in this proof (there are other proofs that do name them, but that is not relevant for my purposes).  You may, if you wish, disregard the distinction I am drawing; that is your prerogative, and neither I nor anyone has any problem with that.  But you cannot claim that it is a direct proof; it is rather an indirect proof, one that proceeds by refuting the negation of the intended assertion.

So why am I writing this?  Because I have learned, to my dismay, that in U.S. computer science departments (of all places!) students are being taught, erroneously, that any proof that derives a contradiction is a “proof by contradiction”.  It is not.  Any proof of a negation proceeds by deriving a contradiction from the assumption of what is negated.  A proof by contradiction is, contrarily, a proof of a positive by refutation of the negative.  This distinction is important, even if you want to “mod out” by it in your work, for it is only by drawing the distinction that one can even define the equivalence with which to quotient.

That’s my main point.  But for those who may not be familiar with the distinction between direct and indirect proof, let me take the opportunity to comment on why one might care to draw such a distinction.  It is a matter of honesty, of a sort: the information content of the foregoing indirect proof does not fulfill the expectation stated in the theorem.  It is a kind of boast, an overstatement, to claim otherwise.  Compare the original statement with the reformulation used in the proof.  The claim that it is not the case that every pair of irrationals exponentiates to an irrational is uncontroversial.  The proof proves it directly, and there is nothing particularly surprising about it.  One would even wonder why anyone would bother to state it.  Yet the supposedly equivalent claim stated at the outset appears much more fascinating, because most people cannot easily think up an example of two irrationals that exponentiate to a rational.  Nor does the proof provide one.  Once, when shown the indirect proof, a student of mine blurted out “oh, that’s so cheap.”  Precisely.

Why should you care?  Maybe you don’t, but there are nice benefits to keeping the distinction, because it demarcates the boundary between constructive proofs, which have a direct interpretation as functional programs, and classical proofs, which have only an indirect such interpretation (using continuations, to be precise, and giving up canonicity).  Speaking as a computer scientist, this distinction matters, and it’s not costly to maintain.  May I ask that you adhere to it?

Edit: rewrote final paragraph, sketchy and irrelevant, and improved prose throughout. Word-smithing, typos.


PCLSRING in Semantics

July 11, 2016

PCLSRING is an operating system kernel design technique introduced in ITS for managing interruptions of long-running synchronous system calls.  It was mentioned in an infamous diatribe by Dick Gabriel, and is described in loving detail by Alan Bawden in an article for the ages.

Discussions of PCLSRING usually center on fundamental questions of systems design.  Is the ITS approach better than the Unix approach?   Should the whole issue be avoided by using asynchronous system calls, as in VMS?  And weren’t the good old days better than the bad new days anyway?

Let’s set those things aside for now and instead consider what it is, rather than what it’s for or whether it’s needed.  The crux of the matter is this.  Suppose you’re working with a system such as Unix that has synchronous system calls for file I/O, and you initiate a “large” read of n bytes into memory starting at address a.  It takes a while to perform the transfer, during which time the process making the call may be interrupted for any number of reasons.  The question is, what to do about the process state captured at the moment of the interrupt?

For various reasons it doesn’t make sense to snapshot the process while it is running inside the kernel.  One solution is to simply stop the read “in the middle” and arrange that, when the process resumes, it returns from the system call indicating that some m ≤ n bytes have been read.  You’re supposed to check that m = n yourself anyway, and restart the call if not.  (This is the Unix solution.)  It is all too easy to neglect the check, and the situation is made all the worse because so few languages have sum types, which would make it impossible to neglect the deficient return.
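In code, the obligation looks like this.  Here is a sketch in OCaml (read_exactly is my own name; Unix.read is the real binding to the system call), looping until the requested count has actually arrived:

(* Unix.read may transfer fewer than len bytes; the caller must
   check the count and restart the call for the remainder. *)
let rec read_exactly (fd : Unix.file_descr) (buf : Bytes.t) (pos : int) (len : int) : unit =
  if len > 0 then begin
    let m = Unix.read fd buf pos len in
    if m = 0 then failwith "unexpected end of file"
    else read_exactly fd buf (pos + m) (len - m)
  end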

PCLSRING instead stops the system call in place, backs up the process PC to the system call, but with the parameters altered to read n-m bytes into location a+m, so that when the process resumes it simply makes a “fresh” system call to finish the read that was so rudely interrupted.  The one drawback, if it is one, is that your own parameters may get altered during the call, so you shouldn’t rely on them being anything in particular after it returns.  (This is all more easily visualized in assembly language, where the parameters are typically words that follow the system call itself in memory.)

While lecturing at this year’s OPLSS, it occurred to me that the dynamics of Modernized Algol in PFPL, which is given in Plotkin’s style, is essentially the same idea.  Consider the rule for executing an encapsulated command:

if m ↦ m′, then bnd(cmd(m); x.m″) ↦ bnd(cmd(m′); x.m″)

(I have suppressed the memory component of the state, which is altered as well.)  The expression cmd(m) encapsulates the command m.  The bnd command executes m and passes its result to another command, m″, via the variable x.  The above rule specifies that a step of execution of m results in a reconstruction of the entire bnd, albeit encapsulating m′, the intermediate result, instead of m.  It’s exactly PCLSRING!  Think of m as the kernel code for the read, think of cmd as the system call, and think of the bnd as the sequential composition of commands in an imperative language.  The kernel only makes partial progress executing m before being interrupted, leaving m′ remaining to be executed to complete the call.  The “pc” is backed up to the bnd, albeit modified with m′ as the new “system call” to be executed on the next transition.
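For the skeptical, the rule can be animated in a few lines of OCaml (a sketch; the constructors are my own, and I have elided the memory and the command/expression mode distinction):

type cmd =
  | Ret of int                 (* a command that has returned its value *)
  | Work of (unit -> cmd)      (* a command with kernel work remaining *)
  | Bnd of cmd * (int -> cmd)  (* sequential composition: bnd(cmd(m); x.m″) *)

let rec step : cmd -> cmd = function
  | Ret _ as c -> c                (* finished: nothing to do *)
  | Work k -> k ()                 (* take one step of m *)
  | Bnd (Ret v, k) -> k v          (* the join: pass the value to m″ *)
  | Bnd (m, k) -> Bnd (step m, k)  (* the PCLSRING move: rebuild the bnd
                                      around the intermediate result m′ *)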

I just love this sort of thing!  The next time someone asks “what the hell is PCLSRING?”, you now have the option of explaining it in one line, without any mention of operating systems.  It’s all a matter of semantics.


PFPL Commentary

June 3, 2016

I am building a web page devoted to the 2nd edition of Practical Foundations for Programming Languages, recently published by Cambridge University Press.  Besides errata, the web site features a commentary on the text explaining major design decisions and suggesting alternatives.  I also plan to include additional exercises and to make sample solutions available to faculty teaching from the book.

The purpose of the commentary is to provide the “back story” for the development, which is often only hinted at, or is written between the lines, in PFPL itself.  To emphasize enduring principles over passing fads, I have refrained from discussing particular languages in the book.  But this makes it difficult for many readers to see the relevance.  One purpose of the commentary is to clarify these connections by explaining why I said what I said.

As a starting point, I explain why I ignore the familiar concept of a “paradigm” in my account of languages.  The idea seems to have been inspired by Kuhn’s (in)famous book The Structure of Scientific Revolutions, and was perhaps a useful device at one time.  But by now the idea of a paradigm is just too vague to be useful, and there are many better ways to explain and systematize language structure.  And so I have avoided it.

I plan for the commentary to be a living document that I will revise and expand as the need arises.  I hope for it to provide some useful background for readers in general, and teachers in particular.  I wish for the standard undergraduate PL course to evolve from a superficial taxonomy of the weird animals in the language zoo to a systematic study of the general theory of computation.  Perhaps PFPL can contribute to effecting that change.

Update: As I had hoped, I have been making many new additions to the commentary, exposing alternatives, explaining decisions, and expanding on topics in PFPL.  There are also a few errors noted in the errata; so far, nothing major has come up.  (The sections on safety are safely sound.)


It Is What It Is (And Nothing Else)

February 22, 2016

A recent discussion of introductory computer science education led to the topic of teaching recursion.  I was surprised to learn that students are being taught that recursion requires understanding something called a “stack” that is nowhere in evidence in their code.  Few, if any, students master the concept, which is usually “covered” only briefly.  Worst, they are encouraged to believe that recursion is a mysterious bit of esoterica that is best ignored.

And thus is lost one of the most important and beautiful concepts in computing.

The discussion then moved on to the implementation of recursion in certain inexplicably popular languages for teaching programming.  As it turns out, the compilers mis-implement recursion, causing unwarranted space usage in common cases.  Recursion is dismissed as problematic and unimportant, and the compiler error is elevated to a “design principle” — to be serpentine is to do it wrong.

And thus is lost one of the most important and beautiful concepts in computing.

And yet, for all the stack-based resistance to the concept, recursion has nothing to do with a stack.  Teaching recursion does not need any mumbo-jumbo about “stacks”.  Implementing recursion does not require a “stack”.  The idea that the two concepts are related is simply mistaken.

What, then, is recursion?  It is nothing more than self-reference, the ability to name a computation for use within the computation itself.  Recursion is what it is, and nothing more.  No stacks, no tail calls, no proper or improper forms, no optimizations, just self-reference pure and simple.  Recursion is not tied to “procedures” or “functions” or “methods”; one can have self-referential values of all types.

Somehow these very simple facts, which date back to the early 1930’s, have been replaced by damaging myths that impede teaching and using recursion in programs.  It is both a conceptual and a practical loss.  For example, the most effective methods for expressing parallelism in programs rely heavily on recursive self-reference; much would be lost without it.  And the allegation that “real programmers don’t use recursion” is beyond absurd: the very concept of a digital computer is grounded in recursive self-reference (the cross-connection of gates to form a latch).  (Which, needless to say, does not involve a stack.)  Not only do real programmers use recursion, there could not even be programmers were it not for recursion.

I have no explanation for why this terrible misconception persists.  But I do know that when it comes to programming languages, attitude trumps reality every time.  Facts?  We don’t need no stinking facts around here, amigo.  You must be some kind of mathematician.

If all the textbooks are wrong, what is right?  How should one explain recursion?  It’s simple.  If you want to refer to yourself, you need to give yourself a name.  “I” will do, but so will any other name, by the miracle of α-conversion.  A computation is given a name using a fixed point (not fixpoint, dammit) operator:  fix x is e stands for the expression e named x for use within e.  Using it, the textbook example of the factorial function is written thus:

fix f is fun n : nat in case n {zero => 1 | succ(n') => n * f n'}.

Let us call this whole expression fact, for convenience.  If we wish to evaluate it, perhaps because we wish to apply it to an argument, its value is

fun n : nat in case n {zero => 1 | succ(n') => n * fact n'}.

The recursion has been unrolled one step ahead of execution.  If we reach fact again, as we will for a positive argument,  fact is evaluated again, in the same way, and the computation continues.  There are no stacks involved in this explanation.

Nor is there a stack involved in the implementation of fixed points.  It is only necessary to make sure that the named computation does indeed name itself.  This can be achieved by a number of means, including circular data structures (non-well-founded abstract syntax), but the most elegant method is by self-application.  Simply arrange that a self-referential computation has an implicit argument with which it refers to itself.  Any use of the computation unrolls the self-reference, ensuring that the invariant is maintained.  No storage allocation is required.
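Here is one rendering of that method, a sketch in OCaml (the names self, unfold, and fix are mine).  The recursive type ties the knot; no mutable storage, and certainly no stack, is involved in implementing fix itself:

type 'a self = Fold of ('a self -> 'a)

let unfold (Fold f) = f

(* fix f applies f to a self-applied copy of itself; each use of the
   result unrolls the self-reference by one step, as described above. *)
let fix (f : ('a -> 'b) -> 'a -> 'b) : 'a -> 'b =
  let g s x = f (unfold s s) x in
  g (Fold g)

(* The textbook factorial, written with fix: *)
let fact = fix (fun fact n -> if n = 0 then 1 else n * fact (n - 1))
(* fact 5 evaluates to 120 *)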

Consequently, a self-referential function such as

fix f is fun (n : nat, m:nat) in case n {zero => m | succ(n') => f (n',n*m)}

executes without needing any asymptotically significant space.  It is quite literally a loop, and no special arrangement is required to make sure that this is the case.  All that is required is to implement recursion properly (as self-reference), and you’re done.  There is no such thing as tail-call optimization.  It’s not a matter of optimization, but of proper implementation.  Calling it an optimization suggests it is optional, or unnecessary, or provided only as a favor, when it is more accurately described as a matter of getting it right.

So what, then, is the source of the confusion?  The problem seems to be a too-close association between compound expressions and recursive functions or procedures.  Consider the classic definition of factorial given earlier.  The body of the definition involves the expression

n * fact n'

where there is a pending multiplication to be accounted for.  Once the recursive call (to itself) completes, the multiplication can be carried out, and it is necessary to keep track of this pending obligation.  But this phenomenon has nothing whatsoever to do with recursion.  If you write

n * square n'

then it is equally necessary to record where the external call is to return its value.  In typical accounts of recursion, the two issues get confused, a regrettable tragedy of error.

Really, the need for a stack arises the moment one introduces compound expressions.  This can be explained in several ways, none of which needs pictures or diagrams or any discussion of frames or pointers or any extra-linguistic concepts whatsoever.  The best way, in my opinion, is to use Plotkin’s structural operational semantics, as described in my Practical Foundations for Programming Languages (Second Edition), published by Cambridge University Press.
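To illustrate the point without any extra-linguistic machinery, here is a sketch in OCaml of an abstract machine for multiplication expressions alone, with no recursion anywhere in the object language.  The stack of frames records exactly the pending obligations described above (the names are mine):

type exp = Num of int | Mul of exp * exp

type frame =
  | MulSnd of exp   (* the second argument, awaiting evaluation *)
  | MulFst of int   (* the first argument, already evaluated *)

(* Evaluate e under a stack of pending multiplications. *)
let rec eval (e : exp) (stack : frame list) : int =
  match e with
  | Num n -> return n stack
  | Mul (e1, e2) -> eval e1 (MulSnd e2 :: stack)

(* Return a value to the innermost pending obligation. *)
and return (v : int) (stack : frame list) : int =
  match stack with
  | [] -> v
  | MulSnd e2 :: rest -> eval e2 (MulFst v :: rest)
  | MulFst v1 :: rest -> return (v1 * v) rest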

There is no reason, nor any possibility, to avoid recursion in programming.  But folk wisdom would have it otherwise.  That’s just the trouble with folk wisdom, everyone knows it’s true, even when it’s not.

Update: Dan Piponi and Andreas Rossberg called attention to a pertinent point regarding stacks and recursion.  The conventional notion of a run-time stack records two distinct things, the control state of the program (such as subroutine return addresses, or, more abstractly, pending computations, or continuations), and the data state of the program (a term I just made up because I don’t know a better one, for managing multiple simultaneous activations of a given procedure or function).  Fortran (back in the day) didn’t permit multiple activations, meaning that at most one instance of a procedure can be in play at a given time.  One consequence is that α-equivalence can be neglected: the arguments of a procedure can be placed in a statically determined spot for the call.  As a member of the Algol-60 design committee Dijkstra argued, successfully, for admitting multiple procedure activations (and hence, with a little extra arrangement, recursive/self-referential procedures).  Doing so requires that α-equivalence be implemented properly; two activations of the same procedure cannot share the same argument locations.  The data stack implements α-equivalence using de Bruijn indices (stack slots); arguments are passed on the data stack using activation records in the now-classic manner invented by Dijkstra for the purpose.  It is not self-reference that gives rise to the need for a stack, but rather re-entrancy of procedures, which can arise in several ways, not just recursion.  Moreover, recursion does not always require re-entrancy—the so-called tail call optimization is just the observation that certain recursive procedures are not, in fact, re-entrant.  (Every looping construct illustrates this principle, albeit on an ad hoc basis, rather than as a general principle.)


The Power of Negative Thinking

January 27, 2015

Exception tracking is a well-known tar baby of type system design.  After all, if expressions can have two sorts of result, why shouldn’t the type say something about them both?  Languages such as CLU, FX, and Java, to name three, provide “throws” or “raises” clauses in the types of procedures that specify an upper bound on the exceptions that can occur when they are called.  It all seems natural and easy, but somehow the idea never really works very well.  One culprit is any form of higher-order programming, which is inherent in object-oriented and functional languages alike.  To handle the indirection requires more complex concepts, such as effect polymorphism, to make things work reasonably well.  Or untracked exceptions are used to avoid the need to track them.  Somehow such an appealing idea is rather stickier to realize than one might expect.  But why?

A piece of the puzzle was put into place by Xavier Leroy and François Pessaux in their paper on tracking uncaught exceptions.  Their idea was to use type-based methods to track uncaught exceptions, but to move the clever typing techniques required out of the programming language itself and into a separate analysis tool.  They make effective use of the powerful concept of row polymorphism introduced by Didier Rémy for typing records and variants in various dialects of Caml.  Moving exception tracking out of the language and into a verification tool is the decisive move, because it liberates the analyzer from any constraints that may be essential at the language level.

But why track uncaught exceptions?  That is, why track the exceptions that might be raised, rather than those that cannot be?  From a purely methodological viewpoint it seems more important to know that a certain code fragment cannot raise certain exceptions (such as the match exception in ML, which arises when a value matches no pattern in a case analysis).  In a closed world in which all of the possible exceptions are known, tracking positive information about which exceptions might be raised amounts to the same as tracking which exceptions cannot be raised, by simply subtracting the raised set from the entire set.  As long as the raised set is an upper bound on the exceptions that might be raised, the difference is a lower bound on the set of exceptions that cannot be raised.  Such conservative approximations are necessary because a non-trivial behavioral property of a program is always undecidable, and hence requires proof.  In practice this means that stronger invariants must be maintained than just the exception information, so that one may prove, for example, that the values passed to a pattern match are limited to those that actually do satisfy some clause of an inexhaustive match.

How realistic is the closed world assumption?  For it to hold seems to require a whole-program analysis, and is therefore non-modular, a risky premise in today’s world.  Even on a whole-program basis exceptions must be static in the sense that, even if they are scoped, they may in principle be declared globally, after suitable renaming to avoid collisions.  The global declarations collectively determine the whole “world” from which positive exception tracking information may be subtracted to obtain negative exception information.  But in languages that admit multiple instantiations of modules, such as ML functors, static exceptions are not sufficient (each instance should introduce a distinct exception).  Instead, static exceptions must be replaced by dynamic exceptions that are allocated at initialization time, or even run-time, to ensure that no collisions can occur among the instances.  At that point we have an open world of exceptions, one in which there are exceptions that may be raised, but which cannot be named in any form of type that seeks to provide an upper bound on the possible uncaught exceptions that may arise.
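OCaml makes the point directly, since its functors are generative with respect to exception declarations.  A sketch (the module names are mine):

module F (M : sig end) = struct
  exception E
  let fail () = raise E
end

module A = F (struct end)
module B = F (struct end)

let () =
  (* Each application of F allocates a fresh exception, so A.E and
     B.E are distinct classes, despite the single declaration. *)
  try B.fail () with
  | A.E -> print_endline "same exception"       (* not reached *)
  | B.E -> print_endline "distinct exceptions"  (* reached *)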

For example, consider the ML expression

let
  exception X
in
  raise X
end

If one were to use positive exception tracking, what would one say about the expression as a whole?  It can, in fact it does, raise the exception X, yet this fact is unspeakable outside of the scope of the declaration.  If a tracker does not account for this fact, it is unsound in the sense that the uncaught exceptions no longer provide an upper bound on what may be raised.  One maneuver, used in Java, for example, is to admit a class of untracked exceptions about which no static information is maintained.  This is useful, because it allows one to track those exceptions that can be tracked (by the Java type system) and to not track those that cannot.

In an open world (which includes Java, because exceptions are a form of object) positive exception tracking becomes infeasible because there is no way to name the exceptions that might be tracked.  In the above example the exception X is actually a bound variable bound to a reference to an exception constructor.  The name of the bound variable ought not matter, so it is not even clear what the exception raised should be called.  (It is amusing to see the messages generated by various ML compilers when reporting uncaught exceptions.  The information they provide is helpful, certainly, but is usually, strictly speaking, meaningless, involving identifiers that are not in scope.)

The methodological considerations mentioned earlier suggest a way around this difficulty.  Rather than attempt to track those exceptions that might be raised, instead track the exceptions that cannot be raised.  In the above example there is nothing to say about X not being raised, because it is being raised, so we’re off the hook there.  The “dual” example

let
  exception X
in
  2+2
end

illustrates the power of negative thinking.  The body of the let does not raise the exception bound to X, and this may be recorded in a type that makes sense within the scope of X.  The crucial point is that when exiting its scope it is sound to drop mention of this information in a type for the entire expression.  Information is lost, but the analysis is sound.  In contrast there is no way to drop positive information without losing soundness, as the first example shows.

One way to think about the situation is in terms of type refinements, which express properties of the behavior of expressions of a type.  To see this most clearly it is useful to separate the exception mechanism into two parts, the control part and the data part.  The control aspect is essentially just a formulation of error-passing style, in which every expression has either a normal return of a specified type, or an exceptional return of the type associated to all exceptions.  (Nick Benton and Andrew Kennedy nicely formulated this view of exceptions as an extension of the concept of a monad.)
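In code, the control aspect is just error-passing style.  A sketch in OCaml (outcome and bind are my names; exn stands in for the single type of all exceptions):

type 'a outcome = Return of 'a | Raise of exn

(* Sequencing in error-passing style, the monadic bind that Benton
   and Kennedy generalize: continue on a normal return, propagate
   on an exceptional one. *)
let bind (m : 'a outcome) (k : 'a -> 'b outcome) : 'b outcome =
  match m with
  | Return v -> k v
  | Raise e -> Raise e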

The data aspect is, for dynamic exceptions, the type of dynamically classified values, which is written clsfd in PFPL.  Think of it as an open-ended sum in which one can dynamically generate new classifiers (aka summands, injections, constructors, exceptions, channels, …) that carry a value of a specified type.  According to this view the exception X is bound to a dynamically-generated classifier carrying a value of unit type.  (Classifier allocation is a storage effect, so that the data aspect necessarily involves effects, whereas the control aspect may, and, for reasons of parallelism, should be taken as pure.)  Exception constructors are used to make values of type clsfd, which are passed to handlers that can deconstruct those values by pattern matching.
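OCaml’s extensible variant types model this open-ended sum directly; indeed its built-in exn is exactly such a type.  A sketch:

(* clsfd is an open sum: new classifiers may be added at any point,
   even dynamically (e.g., within functors or local modules). *)
type clsfd = ..

type clsfd += X of unit   (* a classifier carrying a value of unit type *)

let is_x (v : clsfd) : bool =
  match v with
  | X () -> true
  | _ -> false   (* the sum is open, so a catch-all clause is obligatory *)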

Type refinements come into play as a means of tracking the class of a classified value.  For the purposes of exception tracking, the crucial refinements of the type clsfd are the positive refinement, $\mathsf{a}\cdot$, and the negative refinement, $\overline{\mathsf{a}\cdot}$, which specify that a classified value is, or is not, of class a.  Positive exception tracking reduces to maintaining invariants expressed by a disjunction of positive refinements; negative exception tracking reduces to maintaining invariants expressed by a conjunction of negative refinements.  Revisiting the logic of exception tracking, the key is that the entailment

$\overline{\mathsf{a}_1\cdot} \wedge \cdots \wedge \overline{\mathsf{a}_n\cdot} \leq \overline{\mathsf{a}_1\cdot} \wedge \cdots \wedge \overline{\mathsf{a}_{n-1}\cdot}$

is valid, whereas the “entailment”

$\mathsf{a}_1\cdot \vee \cdots \vee \mathsf{a}_n\cdot \leq \mathsf{a}_1\cdot \vee \cdots \vee \mathsf{a}_{n-1}\cdot$

is not.  Thus, in the negative setting we may get ourselves out of the scope of an exception by weakening the refinement, an illustration of the power of negative thinking.


Structure and Efficiency of Computer Programs

September 28, 2014

For decades my colleague, Guy Blelloch, and I have promoted a grand synthesis of the two “theories” of computer science, combinatorial theory and logical theory.  It is only a small exaggeration to say that these two schools of thought work in isolation.  The combinatorial theorists concern themselves with efficiency, based on hypothetical translations of high-level algorithms to low-level machines, and have no useful theory of composition, the most important tool for developing large software systems.  Logical theorists concern themselves with composition, emphasizing the analysis of the properties of components of systems and how those components are combined; the heart of logic is a theory of composition (entailment).  But relatively scant attention is paid to efficiency, and, to a distressingly large extent, the situation is worsening, and not improving.

Guy and I have argued, through our separate and joint work, for the applicability of PL ideas to algorithms design, leading, for example, to the concept of adaptive programming that Umut Acar has pursued aggressively over the last dozen years.  And we have argued for the importance of cost analysis, for various measures of cost, at the level of the code that one actually writes, and not how it is compiled.  Last spring, prompted by discussions with Anindya Banerjee at NSF in the winter of 2014, I decided to write a position paper on the topic, outlining the scientific opportunities and challenges that would arise in an attempt to unify the two disparate theories of computing.  I circulated the first draft privately in May, and revised it in July to prepare for a conference call among algorithms and PL researchers (sponsored by NSF) to find common ground and isolate key technical challenges to achieving its goals.

There are serious obstacles to be overcome if a grand synthesis of the “two theories” is to be achieved.  The first step is to get the right people together to discuss the issues and to formulate a unified vision of what the core problems are, and what the promising directions for short- and long-term research might be.  The position paper is not a proposal for funding, but is rather a proposal for a meeting designed to bring together two largely (but not entirely) disparate communities.  In the summer of 2014 NSF hosted a three-hour conference call among a number of researchers in both areas with a view towards a workshop proposal in the near future.  Please keep an eye out for future developments.

I am grateful to Anindya Banerjee at NSF for initiating the discussion last winter that led to the paper and discussion, and I am grateful to Swarat Chaudhuri for his helpful comments on the proposal.

[Update: word smithing, corrections, updating, removed discussion of cost models for fuller treatment later, fixed incoherence after revision.]


Scotland: Vote No

September 15, 2014

So far I’ve ignored the back and forth on the Scottish referendum on secession from the United Kingdom, but this weekend I decided that it was past time for me to sort it out.  For those of you who don’t know me, I’ll mention that I lived for 3.5 years in Scotland quite some time ago, so I am not completely ignorant of the cultural and political issues that underlie the debate.  As a rule my political views are very much in line with those of the average Scot, solidly Labour Party back in the day when people like Derek Hatton and Ken Livingston and Roy Hattersley and Tony Benn defined what that meant.  Despite Tony Blair’s slimy “third way” nonsense, and his toadying up to Dick “Dick” Cheney’s sock puppet to help lie us into the Iraq war, Scotland in national politics remains solidly Labour; practically every Scottish seat is a Labour seat.

Although I used to be so up on British politics that I could read and enjoy Private Eye, it’s been a long while since I’ve paid more than scant attention to what’s been going on there, apart from noting that The Scotsman was one of the few sources of truth about the Iraq War back when it really mattered.  The Scots have spines.

I’m no historian, but I do have basic understanding of Scottish history, particularly as regards the English, and am very familiar with the Scottish concept of valor in glorious defeat.  I understand full well that practically every Scotsman harbors some resentment towards the English for centuries of injustices, including the highland clearances, and, more recently, the appropriation of the oil in Scottish territory for the scant benefit of the Scots themselves.  And I am well aware of the bravery and sacrifice that so many Scots made fighting against the Axis during World War II.

My home institution, Carnegie Mellon University, was founded by a Scotsman from Kirkaldy, just across the spectacular Forth Bridge from Edinburgh.  Carnegie was born into penury and died as the wealthiest man on earth, far wealthier relative to GDP than Gates by a wide margin.  Carnegie was extraordinary, but the Scots in general punch far above their weight class in all things, especially industrious self-reliance.

In short, I love Scotland, and consider it to be a second home.  (OK, the weather is appalling, but we’ll set that aside for the time being.)

Emotionally, I am deeply sympathetic to the Scottish independence movement.  I know full well how poorly the U.K. treats Scotland and its interests.  Politics in the UK revolves around the “home counties” in the south of England; the terminology tells you all you need to know.  One time while watching the weather report on the BBC, the national broadcasting network, the announcer said that there was some horrendous weather coming our way, but that “it’ll mostly be up in Scotland, though”.  Though.  Though.

But I urge all my Scottish friends to vote NO on the independence proposal.  It makes no sense whatsoever in its present form, and represents to me a huge scam being perpetrated by the SNP to seize power and impose policies that nearly every Scot, judging from their voting record over decades and decades, would oppose.  The whole movement seems driven by the powerful urge to finally stick it to the English and get their country back, and Salmond is exploiting that to the hilt.  Back when I lived in Scotland I looked into the SNP, because even then I had separatist sympathies, but when I did, it was obvious why they had so few backers.  They’re just Tories without the class structure, more akin to our Tea Party lunatics than to the British Conservatives, and steadfastly opposed to policies, such as well-funded public education, that nearly all Scots support, and determined to follow the post-cold war Slovakian model of slashing taxes on the wealthy in the hope of attracting business to the country.  Having not followed Scottish politics for so long, it is astonishing to me that the SNP has managed to gain a majority in the Scottish Parliament, while the voting pattern at the national level has not changed at all.  How did this happen?  From my position of ignorance of the last decade or so of politics in Scotland, it looks as though Salmond is a slick operator who has pulled off a colossal con by exploiting the nationalist tendencies that lie within every Scot.

But never mind Salmond, the main reason that Scots must vote NO on the referendum is that it proposes to keep the English pound as Scotland’s national currency!  This is such a preposterous idea that I can only suspect dishonesty and deceit, because no sane political leader of honest intent could ever voluntarily place his or her country’s economic future in the hands of another.  The Bank of England will, particularly after separation, have no interest whatsoever in the economic conditions in Scotland when determining its policies on the pound.  And the Bank of Scotland will have no ability to control its own currency, the prime means of maintaining economic balance between labor and capital.  The Scots will, in effect, be putting themselves on a gold standard, the stupidest possible monetary system, so that, in a crisis, they will have to buy or borrow pounds, at interest, in emergency conditions, to deal with, say, the failure of the Royal Bank of Scotland (but don’t worry, that sort of thing can never happen again).  And the Bank of Scotland will have no means of stimulating the economy in a demand slump other than borrowing pounds from somewhere outside the country, rendering themselves in debt beyond their means.  And this will become an excuse for dismantling the social system that has been so important to elevating the Scots from poverty to a decent standard of living within one or two generations.  Just look at the poor PIGS in the Euro-zone being pushed around by Germany, especially, to satisfy the conveniences of the German bankers, and to hell with the living, breathing souls in Greece or Spain or Ireland or Portugal, to name the canonical victims.

A country that does not control its own currency is not independent and cannot be independent.  It’s an illusion.  Just what Salmond’s true intentions are is not entirely clear to me, but on the basis of his monetary policies alone, I implore my Scottish friends to suppress the natural wish to make a statement of pride, and instead do the sensible thing.  The proposal to be voted on this week is not a spittle on the Heart of Midlothian; it is an irrevocable decision to place Scotland in an even worse position with respect to England than it is already in.

Listen to reason.  Vote NO on independence.


Summer of Programming Languages

July 6, 2014

Having just returned from the annual Oregon Programming Languages Summer School, at which I teach every year, I am once again impressed with the growth in the technical sophistication of the field and with its ability to attract brilliant young students whose enthusiasm and idealism are inspiring.  Eugene was, as ever, an ideal venue for the summer school, providing a gorgeous setting for work and relaxation.  I was particularly glad for the numerous chances to talk with students outside of the classroom, usually over beer, and I enjoyed, as usual, the superb cycling conditions in Eugene and the surrounding countryside.  Many students commented to me that the atmosphere at the summer school is wonderful, filled with people who are passionate about programming languages research, and suffused with a spirit of cooperation and sharing of ideas.

The school was started by Zena Ariola a dozen years ago; this year’s instance was organized by Greg Morrisett and Amal Ahmed in consultation with Zena.  As usual, the success of the school depended critically on the dedication of Jim Allen, who has been the de facto chief operating officer since its inception.  Without Jim, OPLSS could not exist.  His attention to detail and his engagement with the students are legendary.  Support from the National Science Foundation CISE Division, ACM SIGPLAN, Microsoft Research, Jane Street Capital, and BAE Systems was essential for providing an excellent venue, for supporting a roster of first-rate lecturers, and for supporting the participation of students who might otherwise not have been able to attend.  And, of course, an outstanding roster of lecturers donated their time to come to Eugene for a week to share their ideas with the students and their fellow lecturers.

The schedule of lectures is posted on the web site; all of the lectures were taped and are available on the web.  In addition many speakers provided course notes, software, and other backing materials that are also available online.  So even if you were not able to attend, you can still benefit from the summer school, and perhaps feel more motivated to come next summer.  Greg and I will be organizing next year, in consultation with Zena.  Applying the principle “don’t fix what isn’t broken”, we do not anticipate major changes, but there is always room for improvement and the need to freshen up the content every year.  For me the central idea of the summer school is the applicability of deep theory to everyday practice.  Long a dream held by researchers such as me, these connections become more “real” every year as the theoretical abstractions of yesterday become the concrete practices of today.  It’s breathtaking to see how far we’ve come from the days when I was a student just beginning to grasp the opportunities afforded by ideas from proof theory, type theory, and category theory (the Holy Trinity) for building beautiful software systems.  No longer the abstruse fantasies of mad (computer) scientists, these ideas are the very air we breathe in PL research.  Gone are the days of ad hoc language designs done in innocence of the foundations on which they rest.  Nowadays serious industrial-strength languages are emerging that are grounded in theory and informed by practice.

Two examples have arisen just this summer, Rust (from Mozilla) and Swift (from Apple), that exemplify the trend.  Although I have not had time to study them carefully, much less write serious code using them, it is evident from even a brief review of their web sites that these are serious languages that take account of the academic developments of the last couple of decades in formulating new language designs to address new classes of problems that have arisen in programming practice.  These languages are type safe, a basic criterion of sensibility, and feature sophisticated type systems that include ideas such as sum types, which have long been missing from commercial languages, or provided only in comically obtuse ways (such as objects).  The infamous null pointer mistakes have been eradicated, and the importance of pattern matching (in the sense of the ML family of languages) is finally being appreciated as the cure for Boolean blindness.  For once I can look at new industrial languages without an overwhelming sense of disappointment, but instead with optimism and enthusiasm that important ideas are finally, at long last, being recognized and adopted.  As has often been observed, it takes 25 years for an academic language idea to make it into industrial practice.  With Java it was simply the 1970’s idea of automatic storage management; with languages such as Rust and Swift we are seeing ideas from the 80’s and 90’s make their way into industrial practice.  It’s cause for celebration, and encouragement for those entering the field: the right ideas do win out in the end, one just has to have the courage to be irrelevant.

I hope to find the time to comment more meaningfully on the recent developments in practical programming languages, including Rust and Swift, but also languages such as Go and OCaml that are also making inroads into programming practice.  (The overwhelming success and future dominance of Haskell is self-evident.  Kudos!) But for now, let me say that the golden age of programming language research is here and now, and promises to continue indefinitely.

Update: word smithing.


Bellman on “Dynamic Programming”

April 21, 2014

Everyone who has studied algorithms has wondered “why the hell is Bellman’s memoization technique called dynamic programming?”.  I recently learned the answer from my colleague, Guy Blelloch, who dug up the explanation from Richard Bellman himself:

“I spent the Fall quarter (of 1950) at RAND. My first task was to find a name for multistage decision processes.

“An interesting question is, ‘Where did the name, dynamic programming, come from?’ The 1950s were not good years for mathematical research. We had a very interesting gentleman in Washington named Wilson. He was Secretary of Defense, and he actually had a pathological fear and hatred of the word, research. I’m not using the term lightly; I’m using it precisely. His face would suffuse, he would turn red, and he would get violent if people used the term, research, in his presence. You can imagine how he felt, then, about the term, mathematical. The RAND Corporation was employed by the Air Force, and the Air Force had Wilson as its boss, essentially. Hence, I felt I had to do something to shield Wilson and the Air Force from the fact that I was really doing mathematics inside the RAND Corporation. What title, what name, could I choose? In the first place I was interested in planning, in decision making, in thinking. But planning, is not a good word for various reasons. I decided therefore to use the word, ‘programming.’ I wanted to get across the idea that this was dynamic, this was multistage, this was time-varying—I thought, let’s kill two birds with one stone. Let’s take a word that has an absolutely precise meaning, namely dynamic, in the classical physical sense. It also has a very interesting property as an adjective, and that is it’s impossible to use the word, dynamic, in a pejorative sense. Try thinking of some combination that will possibly give it a pejorative meaning. It’s impossible. Thus, I thought dynamic programming was a good name. It was something not even a Congressman could object to. So I used it as an umbrella for my activities” (p. 159).

As with algorithms, so too with dynamic languages?

Update: why is it called “memoization” and not “memorization”?

Update: rewrite of the commentary.