
December 1, 2011

The Scala-Yammer note

This criticism of Scala-in-production is most astute. I wish we had more of these things to drive functional language development. I'm very pleased to see that the Yammer people took their use of Scala as far as they did, and hope that young startups continue to experiment with modern languages and that languages continue to meet such needs better and better every year.

July 31, 2010

Big Lambda




Illustration and first page of the story "Missing Out" by Leila Aboulela, in Granta no. 111. The protagonist is a young Sudanese postgrad math student in London.

March 7, 2010

What ARE anamorphisms

So I got sucked into editing the wikipedia page for anamorphisms, which was a mess: it didn't define the concept itself, except to say that it's a generalization of unfolds, and allude to category theory; it had duplicated code, but treated the duplicates as different, and was generally jargon-heavy and not very helpful.

But while fixing it, I got stuck on a puzzle: what exactly is an anamorphism?

Is an anamorphism a higher-order unfold operator? Or is it any function which is defined by partially applying such an operator? Or a function that is defined using a certain syntactic pattern, corresponding to an unfold operator? Or any function at all that could be so defined?

In the almighty "Programming with Bananas, Lenses, Envelopes and Barbed Wire," they seem to use "anamorphism" to refer to a function defined by a certain syntactic pattern. It's a question of interpretation, but they don't seem to define it as a higher-order function, exactly: the text posits some pre-existing operators and then the code sample, which takes just one argument (the "seed" value to unfold from), uses those posited operators.

It seems natural to me to use "anamorphism" to refer to the (unique) unfold operator for a given ADT, and likewise for "catamorphism" and fold. We still have the plural "anamorphisms," since we have many types with such operators. And, I'm not sure to what extent a function is intrinsically "an anamorphism" in Meijer et al.'s sense. Of course, it is intrinsic whether a function can or cannot be defined that way; but when we start using the whole bag of bananas, lenses, envelopes and barbed wire, I think we get multiple ways to define many functions, so it's less intrinsic.
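
For concreteness, here's a small Haskell sketch of the two readings for the list type (my own example, not code from the paper):

unfoldList :: (b -> Maybe (a, b)) -> b -> [a]
unfoldList step seed =
  case step seed of
    Nothing         -> []
    Just (x, seed') -> x : unfoldList step seed'

-- On one reading, unfoldList itself is "the anamorphism" for lists (it is
-- essentially Data.List.unfoldr); on another, a function defined by partially
-- applying it, such as this one, is "an anamorphism":
countdown :: Int -> [Int]
countdown = unfoldList (\n -> if n <= 0 then Nothing else Just (n, n - 1))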

I'd like to hear what others think: What are anamorphisms? Please comment!

March 1, 2010

Simply adding lambda won't make your rewrites diverge

I've been working on trying to extend a result from my thesis, having to do with strong normalization of a certain higher-order rewrite system, but its proof is very complicated, so I've been working on simplifying it.

Along the way I found a paper by Mitsuhiro Okada from 1989 which proves strong normalization for the combination of STLC with any first-order term-rewriting system. But this paper is hard to read because it has strange notations and concepts and says things like, "If a part s[*/x1, ..., */xn, */y] of t belongs to the cap then the part s[*/x1, ..., */xn, */y] also belongs to the cap." Say what?

I knew I could prove the same result using the Tait-Girard method (the same basic approach Okada takes) in a more straightforward way—at least to me. So I did, and here it is for future reference: "Simply adding lambda won't make your rewrites diverge."

November 3, 2009

Function calls are not stack frames

Tim Bray is spreading more misinformation about tail recursion. He describes it this way:

It looks like a subroutine call, but in the case where it occurs as the last thing in the routine, it magically, silently, and automatically gets turned into, now how did I put it? “A highly controlled and structured GOTO.”

A tail call is a subroutine call. An efficient implementation doesn't magically transform it into something else; if it doesn't create a stack frame for such a call, it's because one simply isn't needed.

The essential observation behind the efficient-tail-call implementation (not "optimization"—more on which in a moment) is as follows: For most programming languages, a stack frame is needed not for a subroutine call but only for an argument evaluation, that is, an evaluation whose result is temporary and needs further processing. Calls in the middle of a procedure are "argument" evaluations, because their results need further processing. It's really the temporary, non-final nature of the result that forces us to do the book-keeping that remembers where to come back to.
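
To make this concrete, here's a small Haskell sketch of my own (Haskell's laziness muddies the waters a little, hence the bang patterns):

{-# LANGUAGE BangPatterns #-}

-- sumTo makes its recursive call in tail position: nothing remains to be done
-- with the result, so no frame needs to be kept for the caller.
sumTo :: Integer -> Integer
sumTo n = go 0 n
  where
    go !acc 0 = acc
    go !acc k = go (acc + k) (k - 1)     -- tail call

-- sumUp's recursive call is an "argument" evaluation: its result feeds the
-- pending addition, so something must remember to come back and add k.
sumUp :: Integer -> Integer
sumUp 0 = 0
sumUp k = k + sumUp (k - 1)              -- not a tail call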

Another confusion is between semantics and cost model:

Call me old-fashioned, but it just seems wrong when the semantics of two identical function invocations vary wildly as a function of whether there’s an executable statement between the invocation and the function exit.

The semantics of the call doesn't change; the result and the side-effects are the same (that's what's usually meant by "semantics" anyway). The cost, on the other hand, might be quite different depending on whether a stack frame is needed.

Unfortunately, efficient tail recursion has often been described as a transparent "optimization," so that tail calls might or might not be space-efficient, and the programmer can't tell in advance.

Efficient tail calls (or space-consuming ones) really should be part of the official "cost model" of the language, something that goes along with the semantics, as a peer to the semantics, in fact. The cost model tells you how expensive you can expect things to be, but should be a little less binding than the semantics, since language implementors should have some freedom to do additional optimizations.

The idea that stack frames should correspond directly to the call structure is just odd. Maybe we want to know the call structure at runtime; in that case, we should capture that as debugging information, or as a reflection feature of the runtime system, but not as a core language design. Let the language implementation use the stack as efficiently as possible!

To summarize:

  • "Tail-call optimization" is a terrible misnomer.
  • The programmer should have assurance as to whether tail-calls have a cost.
  • Most languages have no reason to use up space on every function call; only calls whose result will be fed to another expression need to use space.

July 7, 2009

Thesis submitted

So I've been rather busy. The first bit of news is, I submitted a thesis. Warning: it has not been examined and may be full of errors, inaccuracies, omissions, or ill-advised conclusions.

The thesis turned out better than I'd hoped, in the sense that all four major pillars of the work were also accepted to conferences (the papers are "Links: Web Programming without Tiers," "The Essence of Form Abstraction," "A Located Lambda Calculus," and "The Script-writer's Dream: How to Write Great SQL in Your Own Language and Be Sure It Will Succeed").

The night I printed it, I was elated, walking to the printer to collect my final copy, knowing I had done it—created some programming language features, written a 180-page thesis, persevered. In a couple of months I'll defend it, warts and all, and then perhaps I'll be completely finished. But it's downhill from here. I think.

Update: The final version has now been submitted, and the above link points to the latest version.

June 12, 2009

Dept. of Wheel Reinvention

From the brief on Apple's new concurrency framework, Grand Central Dispatch:

Blocks are a simple extension to C (as well as Objective-C and C++) that make it easy for you to define self-contained units of work. A block in code is denoted by a caret at the beginning of a function. For example, you could declare a block and assign it to x by writing:

    x = ^{ printf("hello world\n"); }
This turns the variable x into a way of calling the function so that calling x( ); in the code would print the words hello world.

What’s really powerful about blocks is that they enable you to wrap much more complex functions—as well as their arguments and data—in a way that can be easily passed around in a program, much as a variable can be easily referenced and passed.

Underneath this is a diagram showing that a block is like a function with some associated data.

I think I've seen this idea before somewhere. :-)

Ah, but a closure by any other name would smell as sweet...
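
For comparison, a minimal sketch of the same idea in a language where it has had a name for decades (my own example, not Apple's):

main :: IO ()
main = do
  let greeting = "hello world"
      block = \() -> putStrLn greeting   -- a closure capturing `greeting`
  block ()                               -- "calling" it prints hello world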

March 9, 2009

Grust & Schreiber's Ferry: Database Queries in a Sensible Language

Last week we had some really cool guys come visit us from Germany: Torsten Grust and Tom Schreiber of Universität Tübingen. I was aware of Grust's work on functional database query languages from some years ago, but was surprised when he emailed us (the Links team) and let us know about their new work with Ferry [1], essentially a nice, first-order, functional programming language which they compile to bundles of SQL queries.

This overlaps a lot with what I've been doing with compiling Links to SQL. I had recently discovered a way of compiling expressions in a pure, non-recursive, higher-order fragment of Links to single SQL queries, given the condition that the source expression has the same type as an SQL query: a multiset of tuples of base type.

What Torsten and Tom have figured out how to do is to compile expressions of nested data type to a small set of queries having flat type, such that you can easily assemble the nested result from the flat queries. This is something I had thought about how to do and hadn't cracked, so I was very pleased when the Tübingen guys turned up with a way to do it. In fact Jan Van den Bussche had figured out a similar encoding (of nested to flat algebra) in 2001 [2], in a purely theoretical setting and with (unordered) sets as the only container type. Torsten and Tom can actually handle unordered bags and ordered lists, which is another big step forward.

Their work and my work overlap, but we each had things that the other didn't have. They had nested data structures and ordered collections (lists), while my compilation works for a higher-order language, where functions are first-class values. Also I'm working in the setting of an impure language with side-effects; we can statically identify database-compilable expressions, which must be pure, by using a type-and-effect analysis. The two systems seem to fit together very well, and we're looking at integrating them.

Torsten, Tom, Phil, Sam and I had a nice visit together in Edinburgh, where we pried into the details. Three cheers for collaborative visits!

[1] "Ferry: Database-Supported Program Execution." Torsten Grust, Manuel Mayr, Jan Rittinger, Tom Schreiber. Proc. 28th ACM SIGMOD Int'l Conference on Management of Data (SIGMOD 2009), June 2009. To appear.
[2] "Simulation of the nested relational algebra by the flat relational algebra, with an application to the complexity of evaluating powerset algebra expressions." Jan Van den Bussche. Theoretical Computer Science, 254(1-2). p. 363-377. 2001.

October 13, 2008

ECMAScript 4 & Apple's resistance

Because I'm out of the loop with my head in a thesis, I was surprised to hear about the dropping of tail calls in the plans for ECMAScript 4. There's a bit of interesting discussion on tail calls on that thread before it wanders off into general pure/impure flames. One gem is a Google Docs spreadsheet which shows the ECMA industrial members' votes on various proposals, including Apple's bleeding-red No column. That reads to me as an uncharacteristic resistance to innovation from Apple.

September 12, 2008

Formlets; Idioms, arrows and monads

[UPDATE: The broken link to "The Essence of Form Abstraction" has been fixed. Sorry!]

For the first time in three years of research, I'm really proud of a paper that I worked on with my research group: The Essence of Form Abstraction, by Ezra Cooper, Sam Lindley, Philip Wadler and Jeremy Yallop, to be published in APLAS 2008. The paper shows how you can create compositional HTML forms which package up their data in any desired type. It shows an OCaml implementation but we also use the idea in our web-centric language, Links.
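
To give a flavour of the idea, here's a stripped-down sketch in Haskell (my own, not the OCaml code from the paper; in particular, a real formlet library generates fresh field names rather than taking them as arguments): a formlet pairs a bit of HTML with a collector that turns the submitted form data into a typed value, and formlets compose applicatively.

import Control.Applicative

newtype Formlet a =
  Formlet ([String], [(String, String)] -> Maybe a)   -- (rendered HTML, collector)

instance Functor Formlet where
  fmap f (Formlet (html, collect)) = Formlet (html, fmap f . collect)

instance Applicative Formlet where
  pure x = Formlet ([], const (Just x))
  Formlet (h1, c1) <*> Formlet (h2, c2) =
    Formlet (h1 ++ h2, \env -> c1 env <*> c2 env)

-- A single text field, collected under the given (hypothetical) name.
textInput :: String -> Formlet String
textInput name = Formlet (["<input name=" ++ show name ++ ">"], lookup name)

-- Two fields composed into one typed value.
dateFormlet :: Formlet (Int, Int)       -- (month, day)
dateFormlet = (,) <$> (read <$> textInput "month") <*> (read <$> textInput "day")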

Though this is the first good group work that I've contributed to, the others have been doing some great work. In particular, Jeremy, Sam, and Phil did some nice work characterizing the expressive power of idioms, arrows, and monads. They first defined a metalanguage, like Moggi's monadic metalanguage, which can encompass all three notions, in The Arrow Calculus, by Sam Lindley, Philip Wadler and Jeremy Yallop (unarchived). Then they showed the hierarchy of the three, under which idioms are the loosest interface but therefore their language is the strictest to program with, and monads are the strictest interface, therefore their language is the most free, and arrows can be seen as lying in between. The work is described in Idioms are Oblivious, Arrows are Meticulous, Monads are Promiscuous (ibid.), MSFP 2008.

To summarize this, you can see the arrow calculus as a syntax like the do notation of Haskell:

do y <- f x
   z <- g y x
   return (h x y z)

Now to put the result crudely, monads allow expressions like the above, where the value bound at each step can be used in all future steps (this makes them "promiscuous"). The use of arrows restricts this so that the value bound at each step can be used only at the next step, as in:

do y <- f x
   z <- g y x
   return (j x z)  -- y cannot be used here

You can still use arrow operations to manually pass data forward through each step, but this is a nuisance. Hence Lindley, Wadler and Yallop coined the term "meticulous" to describe arrows.

The idiom laws restrict this further so that each value can be used only in a final return clause (the return clause is "oblivious"):

do y <- f x
   z <- k x        -- y cannot be used here
   return (h x y z)

An astute reader will note that return is not required in do syntax. What happens if there is no return clause? Then you can add one:

do y <- f x
   z <- k x
   q

is equivalent to

do y <- f x
   z <- k x
   result <- q
   return result

But either way the idiom laws require that q cannot depend on y or z.

This shows that the three notions progressively restrict (and fairly naturally) the ways that you can program with them. But, in a familiar appearance of contravariance, the notions are reverse-ordered in how restrictive they are to create. An idiom, being the most restrictive on its user, is least restrictive to its writer. Thus you can model some computational notions as idioms which you could not model as monads. This fact came in handy in our latest paper, where we used idioms to model form abstraction.
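
A standard example (not one from the paper) of a computational notion that is an idiom but not a monad is ZipList, whose application combines argument lists pointwise; there is no law-abiding bind compatible with that behaviour:

import Control.Applicative (ZipList(..))

-- Pointwise application: each argument list is consumed in lockstep.
zipped :: [Int]
zipped = getZipList ((+) <$> ZipList [1, 2, 3] <*> ZipList [10, 20, 30])
-- yields [11, 22, 33]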

I encourage you all to read about idioms, arrows and monads, as it's turning out to be a very nice theory.

August 27, 2008

Sensible SQL: The second typing for grouped columns

SQL keeps turning out to be more sensible than I thought it was.

If you're like me, you thought that GROUP BY clauses partitioned your columns into "grouping columns" and "non-grouping columns," that the latter MUST be processed with aggregate functions (count, sum, max, min, average), and that the former MUST NOT be processed that way. This would have been a problem for converting Peyton Jones and Wadler's Comprehensive Comprehensions to SQL, since that technique uses a nice uniform treatment of group-by clauses: they transform all the query's bound variables to list type for the remainder of the query. Example—here's a simple query that takes a table, groups it on values from column a, and returns pairs of the value from a and the sum of the list of corresponding column b values:

select a, sum(b) from t group by a

Fine and simple. And as you may know, the following produces an error:

select a, b from t group by a

Because we've grouped on a, we can't just select column b within each grouping because there are multiple column b values. The DBMS will insist that we use an aggregate function to convert this collection to a primitive-type value. Such are the restrictions of the relational model.

But Peyton Jones and Wadler's proposal simply treats all columns, when grouped, as lists. So

[(a, b) | (a, b) <- t, group by a]

is a perfectly legal expression of type [([A], [B])] (letting A and B be the raw column types of a and b, resp.). What they expect you to do, to make SQLizable expressions, is to apply some other function to reduce each column, a typical example being

[(the a, sum b) | (a, b) <- t, group by a]

which corresponds to the first SQL query above: it collects the values from a and matches these with the sums of corresponding values from b. But the above syntax allows you to do things like this:

[(sum a, sum b) | (a, b) <- t, group by a]

which seems to correspond to SQL that applies an aggregate function to a grouped column as well as an ungrouped column. "Whither SQL!?" I worried! Yet all was not lost. SQL treats grouped columns as being alternately bag-typed or primitive-typed, depending on the context. So the following is a valid query:

select sum(a), sum(b) from t group by a

Huzzah! This shows SQL to be in closer accordance with a sensible, uniform, orthogonal language such as Peyton Jones and Wadler's than I'd imagined. In other ways, too, I keep finding it easier and easier to make SQL queries out of difficult FP expressions by using little-known SQL features. Cheers, guys.
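
For reference, here's a plain-Haskell sketch of my own (not the paper's actual syntax) of what the grouped comprehension computes: after grouping on a, both columns become lists, and you reduce each one however you like.

import Data.Function (on)
import Data.List (groupBy, sortOn)

-- Group a table of pairs on its first column; both columns become lists.
groupOnFst :: Ord a => [(a, b)] -> [([a], [b])]
groupOnFst = map unzip . groupBy ((==) `on` fst) . sortOn fst

-- select a, sum(b) from t group by a
query1 :: [(String, Int)] -> [(String, Int)]
query1 t = [ (head as, sum bs) | (as, bs) <- groupOnFst t ]

-- select sum(a), sum(b) from t group by a
query2 :: [(Int, Int)] -> [(Int, Int)]
query2 t = [ (sum as, sum bs) | (as, bs) <- groupOnFst t ]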

August 22, 2008

NamedGZipStream, Covariance and Contravariance

Favorite MSDN article title of the month: NamedGZipStream, Covariance and Contravariance.

That's Microsoft, as always, blending the sublime and the ridiculous.

(Thanks also for the bogus etymology of "covariance": "In mathematics, covariance is a measure of the degree to which two variables move up or down together. The term was co-opted by the OO crowd to describe..." Way to overlook category theory, guys!)

April 24, 2008

null == undefined

(Salad with) Steve Jenson lays it down for you: the null vs. undefined nonsense in JavaScript. Better recognize.

April 14, 2008

Yhc : Haskell -> JavaScript

A note on compiling Haskell to JavaScript, and interfacing with the DOM, in Yhc.

April 13, 2008

A located lambda calculus

Oh right. Last week I submitted a paper to ICFP. Here it is: "A located lambda calculus," by Ezra elias kilty Cooper and Philip Wadler. The abstract:
Several recent language designs have offered a unified language for programming a distributed system; we call these “location-aware” languages. These languages provide constructs that allow the programmer to control the location (the choice of host, for example) where a piece of code should run, which can be useful for security or performance reasons. On the other hand, a central mantra of web engineering insists that web servers should be “stateless”: that no “session state” should be maintained on behalf of individual clients—that is, no state that pertains to the particular point of the interaction at which a client program resides. Thus far, most implementations of unified location-aware languages have ignored this precept, usually keeping a process for each client running on the server, or otherwise storing state information in memory. We show how to implement a location-aware language on top of the stateless-server model.

March 27, 2008

Bracha on Monkey Patching

Gilad Bracha just gave an excellent overview of "monkey patching," or adding methods to a class from the outside, and other possible solutions to the same problem, none of which satisfy.

January 19, 2008

MapReduce slides from Google

Google has published a series of tutorial presentations about MapReduce, the GFS, and distributed algorithms. I love this kind of stuff: the fruitful interaction of my two passions, programming languages and distributed computing.

The presentation shows functional programming in a good light; I have one quibble so far: The 3rd slide of the 2nd talk ("MapReduce Theory and Implementation") says, "Functional operations do not modify data structures: They always create new ones." And then, "Original data still exists in unmodified form." But this is an implementation detail; the compiler needn't keep the original data if it's not going to be used.

More helpful would be to note that, in a functional language, a variable's value never changes, thus making it easy to read the code. Whether input data stays around after an operation is a different issue, and one the compiler can easily handle better than humans, on average.

January 14, 2008

Classics

I was surprised to find that these classic papers are available online:

It's interesting to read Church's papers to see how he's approaching the project. Initially, he's trying to clarify the notion of a free variable in logic, or rather to chuck it out in favor of a clear discipline for binding variables. Because he's looking at foundational issues in logic, he's at great pains to identify his assumptions.

A minor historical fact: alpha- and beta-conversion are defined in these early Church papers, but not with those names; they're identified as I-conversion and II-conversion (there is a conversion rule III, which looks like the reverse of beta-conversion).

January 7, 2008

On Why Most Published Research Findings Are False

Here's a stimulating article: Why Most Published Research Findings Are False by John P. A. Ioannidis (PLoS Medicine, 2005). It focuses on research that aims to find statistical relationships in data, and asserts that most such relationships claimed in the literature are in fact false. Distilling the discussion, I find these compelling reasons why it would be so:

  • standard tests of "statistical significance" are taken as proof of a proposition,
  • there is bias in experimental design & interpretation,
  • researchers and journals prefer positive results,
  • experiments tend not to be independently reproduced.

This last point is particularly damning—few things are more essential to the scientific method than reproducible experiments, yet the article blithely says (and I readily believe) that most biomedical studies are not reproduced. In fact, the competitive publication cycle works against this: merely to confirm an existing result is not very publishable; to contradict an existing result may be publishable, but this means, as Ioannidis notes, that there can be an alternation of positive, then negative, then positive, then negative results on a particular question, as each team becomes interested in upsetting the last published result. Far from amassing independent evidence on a question, this is just another selection bias that works against the scientific process.

Interestingly, the article is wholly unscientific. Starting from unexamined assumptions, it works those assumptions through to conclusions. Along the way, it presents a bunch of formulas, which add the gloss of analysis to what is essentially a work of persuasive writing—but I don't buy the formulas, which include unobservable (and perhaps ill-defined) quantities such as "the ratio of true relationships to no-relationship pairs in a field" and "the false-negative error rate." (How amusing would it be if this article were a false research finding?) But methodology aside, I do believe it: that many, if not most, published research findings are false.

I'd be interested to see someone look at the issue in other kinds of fields—fields that aren't quantitative, for example. In the field of programming languages, people make a lot of claims that are justified by proofs. How often are these proofs actually correct, I wonder? And how often are the claims correct? Moreover, much PL research is not even "claim-based," as such. Many papers simply present a feature or technique and tout its virtues—and don't make falsifiable claims, at all. And often, this is the most valuable research: someone tried something and described the experience. We can learn from others' experience, even without proven claims.

How do we assess the value of a research field, such as that of programming languages? How do we know when we're doing a good job?

September 20, 2007

'raises' predicate (Haskell)

A Haskell predicate that tests whether an expression raises an error (for my future reference):

import Control.Exception

raises :: a -> IO Bool
raises expr = 
   Control.Exception.catch (
      return . (const False) =<< Control.Exception.evaluate expr
   ) (return . (const True))
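
(With later versions of base, catch needs the handler's exception type pinned down; here's a minimal sketch of the same predicate in that style, assuming a SomeException handler suffices:)

import Control.Exception

raises' :: a -> IO Bool
raises' expr = (evaluate expr >> return False) `catch` handler
  where
    -- Annotating the handler fixes the exception type for modern catch.
    handler :: SomeException -> IO Bool
    handler _ = return True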

September 18, 2007

Mixed feelings

"Poplar's users and potential users had mixed feelings about the syntax. Even aspects we consider successful were not universally appreciated. No one was ever sure what the precedence rules were or should be."
—James H. Morris, Eric Schmidt, Philip Wadler. Experience with an Applicative String Processing Language, POPL. 1980.
Har!

September 13, 2007

Logarithmic Slowdown of Functional Languages?

The Wikipedia page on "Functional Programming" has some assertions about efficiency that sound questionable. I'm not the only one--someone else tagged the item with "citation needed" and started a discussion.

It currently says, "Any imperative algorithm is expressible in these languages with a logarithmic slowdown in the worst case." David Hopwood notes in the discussion that "Imperative algorithms are expressible with no asymptotic slowdown," using some clever/complicated technique involving mutable arrays (citing a caml-list posting by one Tom Breuel).

All of this sounds a bit strange to me--as far as I know, there's no conclusive reason to think that functional languages must execute a particular algorithm more slowly than an imperative language. (Where does this "logarithmic slowdown" come from, anyway?) But I don't have a mastery of all the relevant literature. Can someone else jump in and straighten this out?

September 12, 2007

Foundational Papers in Programming Languages

Here's a rather good paper of the sort that I wish someone had shown me when I first started as a grad student: Conception, evolution, and application of functional programming languages, by Paul Hudak (1989). It gives a good history of how functional languages arose (from Lisp, ISWIM, KRC, and so on) and coalesced (into ML and Haskell and the like), and describes in some detail the salient features of (and issues within) functional programming, including pattern-matching, algebraic datatypes, memoization, nondeterminism, and of course, type-inference and polymorphism. It also includes a short bit on formal semantics, which might give a neophyte the flavor of such a formalism without getting too heavy. The paper is fairly long, but it covers a lot and is written in the old style of clear, readable prose with a minimum of specialized notation.

I found the paper from a link on Wikipedia.

Other papers that fall into this class ("classic papers I wish I'd seen at the beginning") include "The Next 700 Programming Languages" and "The Mechanical Evaluation of Expressions,"[1] both by Peter J. Landin. (I'll post more as I think of them.)

Three papers that I gladly did read at the beginning are "Definitional Interpreters for Higher-Order Programming Languages" (1972—the original typewritten version is better than the one typeset in TeX that you find sometimes), "The discoveries of continuations" (2005), and "Gedanken" (1970), all by John C. Reynolds. The latter introduces (?) the nice idea of an escape construct (now often called let/cc) and "references" as first-class values (or is that older?).

[1] Sadly no longer available online.

September 10, 2007

Abstraction and Variation

Fellow researchers may be interested in this fanciful article on the techniques, advantages, and supposed pitfalls of abstraction in software engineering.

August 5, 2007

Memoizing pages

Tim Bray, "On Being for the Web":

What do ETags do? They allow you, when you fetch a Web resource, to send a short string along with the request saying “this is the signature you sent me for the version I know about”; then the server can look and see if the signature is still good, and if so not bother sending the data redundantly. Which is potentially a big, big win.

But also potentially not. If you look at actual real-world servers that are maxed out, they’re mostly maxing out on computation or back-end database traffic, not on bandwidth to the Net. So the saving that would be really valuable would be if you didn’t have to rebuild the requested page. Unfortunately, both Rails and Django rebuild the requested page, then compute the signature, then check the ETag. Which is extra run-time work in order to conserve a resource (Web bandwidth) that probably isn’t the problem.

This is sharp. Maybe web frameworks (or <cough /> web-centric languages) should provide support for easy memoization.
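
A hedged sketch of what that support might look like (hypothetical names and types throughout, not any framework's real API): cache the rendered page keyed by whatever the ETag is derived from, so a still-valid signature can be answered without rebuilding the page.

import Data.IORef
import qualified Data.Map as Map

type ETag = String

-- Look up a page by its ETag; only run the (expensive) renderer on a miss.
memoPage :: IORef (Map.Map ETag String) -> ETag -> IO String -> IO String
memoPage cacheRef etag render = do
  cache <- readIORef cacheRef
  case Map.lookup etag cache of
    Just page -> return page          -- signature still good: skip the rebuild
    Nothing -> do
      page <- render                  -- rebuild once, then remember it
      modifyIORef cacheRef (Map.insert etag page)
      return page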

June 18, 2007

Missing Features of HTML

I've just been looking at this position paper, "Declarative Models for Ubiquitous Web Applications" [1]; amongst other things, it makes some criticisms of HTML that are in line with my goals for the Links project:
  • A declarative expression language notation is not specified in (X)HTML. As a result developers have to code dynamic aspects of the user interface by means of server side scripts or Javascript DOM [DOM] manipulation.
  • (X)HTML classic forms [Forms] are severely limited in terms of typing information. For example, It cannot be specified that an input element accepts a date, a number or a patterned text. Furthermore, due to limitations in the submission formats, applications have to convert everytime, typically at server side, strings to the correct type expected by the application logic.
  • Lack of data binding mechanisms. Data binding is a technique that allows to associate data between the model and user interface components in a declarative way. Developers typically write server side scripts or Javascript AJAX-based code to populate user interface components with data. As a result, in (X)HTML-based user interfaces there is no clear separation between the model, the view and the controller.
It looks like the authors are concerned more with the phenomenon of web apps deployed on mobile devices and inside other web apps than with "ordinary" browser–server apps. Who knows what will come of it.

[1] Fonseca, J. M. C., Prendes, I. M., Soriano, J., Hierro, J. J. Declarative Models for Ubiquitous Web Applications. April, 2007.

May 9, 2007

Alligator Eggs

Neat: It's lambda-calculus rendered understandable through colorful alligators.

It's the best thing since "To Dissect a Mockingbird."

It seems likely a kid could understand and enjoy playing with this system; I do wonder about the "color rule," though: I suspect young kids would find that tricky and not see the point—someone should do an experiment.

March 26, 2007

What "A Normal Form" is short for

"A Normal Form" is short for "Administrative Normal Form"

March 21, 2007

Published slides from Amazon talk 16 Mar 07

Herewith I'm making available the slides from the lunchtime talk I gave at Amazon's Edinburgh office last week (16 March 2007). Thanks again to Andrew Birkett for inviting me.

It might be hard to gather much information from these slides without the accompanying performance, but you might enjoy looking at the drawings or trying to run the code samples.

March 17, 2007

Links Talk at Amazon in Edinburgh

So yesterday I went down to Amazon's development office in Edinburgh, and gave a talk on Links, at the invitation of Andrew Birkett. The talk went well and I was quite pleased with the response and the sharp questions.

One of the biggest concerns that the Amazon people had, which we haven't tried to address in Links, is failure: how the language allows them to handle failure. In Amazon's systems, loads of interdependent services cooperate to display a single page, and any of them might go wrong; the developers described to me a wide variety of alternate responses that they might want to give, depending on the situation. For example, in some instances, a page feature might collapse down to nothing (disappearing from the page) when it fails. Other times, if a service doesn't respond, the front-end web software might use cached data to display a feature.

This came up in regard to Links because of the way we're making client-server (Ajax) calls simple RPC calls. The question there is, what happens if the remote call doesn't finish successfully, either because the server code couldn't get the right data, or because the network mechanisms themselves failed; how in Links can we handle that anomaly? Part of the answer might be an ordinary exception mechanism, which we can support fairly easily, but we should think more about how Links programmers deal with exceptional situations.

The crowd was very friendly and engaged with the talk quite deeply, I think. They fed me plenty of questions on particular points. Many of these had to do with various kinds of possible failure, as I mentioned; another theme was metaprogramming, prompted because they noticed a certain amount of repetition in my slides (e.g. (foo=foo, bar=bar) when constructing records; I do hope we can improve on that).

I gather they [present-day Amazon developers] do most of their work in Java, but they weren't thrown off when I started talking about continuations or any of the functional-programming idioms that we use in Links. There were a few self-confessed language aficionados in the crowd, but they weren't the only ones who didn't miss a beat at the functional idioms, the tuple types, or the unusual kinds of variable binding we use for our "formlets" (oh, reader—you don't know about our formlets—I'll have to post about that soon).

Between the talk itself and chatting with developers outside of it, I had a quite nice time. Thanks to Andrew and all of the others for listening so well & treating me nicely, even though I'm an academic!

UPDATE: Slides from the talk are available.

New Erlang Book

Interesting: Joe Armstrong is writing a new book about Erlang.

This is good because the old one was short on helpful material for beginners. I had the same problem when I started learning Erlang as I had with Perl: I was dead in the water trying to get a simple program to run, for hours or days—until, with Perl, I found Learning Perl. With Erlang it was just a tremendous amount of banging my head against the wall. The first hours of learning a new language, for me, are often spent with trivial frustrations, like figuring out what "main" is called, or how to include the standard library. Maybe that's because I usually don't start with examples, the way many people do. Rob Sayre says the book is good.

February 9, 2007

Whistle While You Work

A recent tech report from Berkeley surveys the state of parallel computing [via Tim Bray]—particularly with an eye to the importance and difficulty of coding for multicore processors (one of which is helping me author this article). The bulk of the article deals with hardware design but also discusses programming systems (that's my bailiwick) and application design issues.

I've not read the whole thing yet, but I've already found some surprises:

  • For measuring success, the authors promote "dwarfs"—something like patterns of algorithms—as opposed to conventional benchmarks on particular tasks. It's not clear why this is better for measuring purposes, but the dwarfs do convey nicely some areas of challenge (e.g. "linear algebra," "spectral methods," "Monte Carlo methods").
  • They list twelve recent changes to the conventional wisdom (from no-brainers like "power is free, transistors are expensive"—which has now inverted—to more surprising ones such as "Researchers [can no longer] demonstrate new ideas by building chips.")
  • Transactional memory, a hot topic in PL research, is given some significant airtime. The authors don't mince words over the conventional alternative, locking:
    These locking schemes are notoriously difficult to program, as the programmer has to remember to associate a lock with every critical data structure and to access only these locks using a deadlock-proof locking scheme. Locking schemes are inherently noncomposable and thus cannot form the basis of a general parallel programming model. Worse, these locking schemes are implemented using spin waits, which cause excessive coherence traffic and waste processor power.
    Transactional memory seems like a clear win, though it's not widely implemented and needs to get some experience behind it.
  • With regard to programming paradigms, they put a great emphasis on the role of human programmers in the process of making software:
    We believe that future successful programming models must be more human-centric. They will be tailored to the human process of productively architecting and efficiently implementing, debugging, and maintaining complex parallel applications on equally complex manycore hardware.
    This seems to me quite wise, although it's still unclear whether we can understand the cognitive (or psychological) processes in programming well enough to design for them.

Section 5.4 seems a haunting parable for language designers:

Programming languages, compilers, and architectures have often placed their bets on one style of parallel programming, usually forcing programmers to express all parallelism in that style. Now that we have a few decades of such experiments, we think that the conclusion is clear: some styles of parallelism have proven successful for some applications, and no style has proven best for all.

There seems to be a lot of wisdom collected here, which cuts across subfields of CS—as in a list of which "dwarfs" are commonplace in which application domains. This appears to be a quite useful reference for those of us who don't work in any particular application domain but make systems for programmers to use. I nudge you gently in the direction of this article.

January 23, 2007

Programming Graphics and Getting Eyeballs

The language called "Processing" is a Java-like language, designed for coding up lovely interactive graphics. It compiles to Java and it's easy to deploy your apps on the web for people to enjoy them. Many of the demos on the site are mouth-wateringly impressive—check them out before you read on.

What the Processing people have done really well is to think about all the primitives that you'll want when coding up graphics demos—so they have good ways of playing with color, controlling the opacity of a shape, getting nice bezier curves, rotating and skewing, plus fairly good type-drawing and 3D operations. They've also thought through the programmer startup cost, making it easy to sit down and start using the thing without much knowledge.

The area where Processing is weaker is in the ergonomics of development. It uses an imperative style where you have to build up values and images in sequential steps. It is clumsy, for example, to create a list of dots and then render each one to the screen; clumsier still to express how they change or move over time, since you need to write out the frame-by-frame changes, rather than writing it as a function of time or a system of equations. Many facets of the API are controlled using peculiar beginFoo() and endFoo() calls, which affect how the statements in between behave. Heaven help you if you call beginCamera() and leave off the endCamera() call. These drawbacks make errors likely and have a cost in debugging time and code-reading time.

There is an alternative way to make such demos, in the name of "functional reactive animation" (FRP). In the FRP style, you write the behavior of each thing and each shape as a function expressing how it changes over time. You have at your disposal the raw parameters of time, and various input values, such as mouse position and key presses, so you can make the shapes react to these things.

I'll describe briefly what FRP programs look like (if you already know, skip the next few paragraphs). In the imperative style, you forcibly update persistent variables and so to understand the behavior you need to look at all the ways those variables can be updated over time, which can be quite complex. By contrast, in the FRP style, the behavior of an object is completely defined by an expression, so you can understand that behavior just by studying that expression—a more compact way of appreciating the behavior.

For example, to define a dot that moves at a constant rate in a circle around a point (centerX, centerY), I might write it something like this:

dotPosn = (centerX, centerY) + (radius * (sin time), radius * (cos time))

(I'm positing an operation of summing two vectors—a simple thing to define in many languages, though a bit cumbersome in Java.) The above defines the moving center of the dot I want to draw; drawing it at each animation frame would require just an expression as simple as this:

myDot = circle(dotPosn, 1, redColor)

That is, draw a circle centered at the value determined by dotPosn, with radius 1, in the color red (defined elsewhere). Objects can also be defined in a mutually-recursive way, so that their behaviors depend on one another.
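
A hedged sketch of the style in Haskell (my own, not any particular FRP library's API): modelling a time-varying value as a function of time lets the circling dot be a single expression rather than a sequence of per-frame updates.

type Time = Double
type Behavior a = Time -> a

centerX, centerY, radius :: Double
centerX = 100; centerY = 100; radius = 30

-- The dot's position is defined once and for all, as a function of time.
dotPosn :: Behavior (Double, Double)
dotPosn t = (centerX + radius * sin t, centerY + radius * cos t)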

So FRP is an appealing paradigm for writing interactive graphical demos. But none of the existing implementations are as appealing as Processing. The original implementation, Fran, is a closed-source Windows-only application (How's that for a conversation killer?) and is no longer being actively developed. The newer FrTime (part of DrScheme) is a much more serious contender, as it is open-source and runs on all the major platforms. Yampa is another, which I haven't had the chance to dip into.

These are good efforts; but there are still some pieces missing. Processing apps can be deployed on the web very easily—which makes the pay-off for writing them so much higher. Writing a graphics demo and showing it to your friends at your next pizza party might be fun. Writing a graphics demo and slapping it on the web is a whole lot more fun. See Linerider and the great popularity it has had over the last few months.

Another problem is that most FRP implementations are very conservative: they tend to offer the minimum set of tools that makes it theoretically possible to implement anything in the space. Processing takes a different approach: it tries to offer the minimal toolbox that you're likely to reach for in the course of making a demo. So FrTime gives you accum-e, which can probably be used to express any discrete time-changing behavior imaginable; but Processing gives you shininess (for setting a solid object's surface properties) and strokeCap (for adjusting the shape of the ends of lines).

A great Summer of Code project, or Master's project, would be to compile FRP code from one of the above languages down to the JVM, so you can deploy it over the web. Another one: take FrTime or Yampa and beef it up with all these nice graphics operations, to add the kind of vocabulary that graphic designers and animators like to use. The latter project would be straightforward from a CS point of view, but it would require attention to detail and a good sense of visual aesthetics. The former is more of a CS research project, and might require some clever thinking because of the apparent mismatch between functional-style code and the JVM's computational model. But there are plenty of other languages that compile to the JVM, so it's not beyond the pale.

If I were just starting now on my PhD, I'd find a way to include this in my work, or I'd put off the PhD until I could get this done! Tinkering with graphics demos and slapping them up on the web is the kind of hacking I'd really like to be doing. As it is, I'll have to leave it to some young whippersnapper, alas.

January 10, 2007

Think in Closures?

I did a search for "plt scheme" and I got this terrific ad from Jane St Capital:

October 25, 2006

QuickCheck

Yesterday I had to prove that a certain algorithm (for a variation on lambda-lifting) was correct. I wasn't really sure that it was correct, so I took some time and coded it up in Haskell, then picked up enough knowledge of QuickCheck (the manual is more helpful than that raw interface file, but it uses HTML frames to crush your spirit) to write some randomized tests for the algorithm.

At first it caught some good bugs. Then it started succeeding on 100 random tests every time I ran it, so I feel reasonably certain that my algorithm is correct. I played around with the test-data generator to make sure it was generating terms that were deep enough and rich enough & to make sure that it never diverges by creating arbitrarily large terms. QuickCheck is reasonably nice this way.
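
As a sketch of the kind of generator tuning I mean (a made-up Term type, not the actual code linked below): QuickCheck's sized combinator caps the depth, so random terms come out rich but never unboundedly large.

import Test.QuickCheck

data Term = Var Int | Lam Term | App Term Term
  deriving (Show)

instance Arbitrary Term where
  arbitrary = sized genTerm
    where
      -- At size 0, only leaves; otherwise shrink the size at each recursion.
      genTerm 0 = Var <$> choose (0, 5)
      genTerm n = oneof
        [ Var <$> choose (0, 5)
        , Lam <$> genTerm (n - 1)
        , App <$> genTerm (n `div` 2) <*> genTerm (n `div` 2)
        ]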

So here's the Haskell code for my lambda-lifting algorithm, together with the QuickCheck tests (at the bottom). Note that there are some funny things in the syntax that aren't explained; they'll have to stay unexplained until I get done with the actual research I'm doing! Any day now...

September 25, 2006

Bloody Semicolons

Tim Bray, who is writing Ruby code for a comment-management system on his weblog:


The only downside is that I have to make a few adjustments to the current publishing engine, which is Perl, and to the client-side stuff, which is of course JavaScript. I keep putting : in front of constant strings. And forgetting parentheses. Especially empty parentheses. And semicolons. Especially semicolons. Bloody stupid useless semicolons.

Tim sounds like the ideal user for Links.

Oh, but Links has semicolons.

August 1, 2006

Oregon Summer School: Further Notes

I've been putting off writing up the rest of the lectures from the Oregon Summer School because I seem to have lost my trusty notebook where all the brilliant things were intoned. But I guess it's gone on long enough; the world must know what happened at Oregon Summer School 2006.

Some of the presenters I mentioned before talked on more than one topic. Michael Hicks, after talking about futures, went on to discuss some work on "dynamic software update" or DSU. Hicks has developed a set of techniques for dynamically loading a new version of an existing program, and switching it over while it runs, with zero downtime. This includes translating old data representations to new ones. As I listened to the talk, I kept thinking, "OK, sure, we're going to design our code in this highly-restricted way, and then as long as we're willing to bend over backwards to do some tricky retrofitting, then hypothetically we might be able to update the code without downtime."

I was wrong! Hicks and his team have actually taken three years' worth of updates to sshd and vsftpd (amounting to a couple dozen updates, including some whole-number revisions) and updated running instances with each successive version, all without crashing or taking the server down. I was quite astonished that these techniques could be applied to changes that have already been made in the wild. Of course, they had to write some code to translate in-memory data structures on the fly—but they didn't have to re-architect the application to make it fit. Everyone in the seminar room woke up when Hicks showed the slide showing all the versions, with their dates, that had been dynamically loaded into these servers.

I would be interested to see whether these DSU techniques turn out to be a good software-engineering tradeoff in the long run. Most of the time, just having an extra machine to handle load while you bounce individual servers to the new version is a cheap way to get the same result. And you still have the challenge of writing your updates so that they're compatible on the wire: you can update sshd's internal structures on the fly, but updating the protocol might be more challenging. Also, to be slightly critical, sshd and vsftpd together make a pretty constrained class of software: slow-changing servers that mainly wait for connections and spawn off processes to handle them. Would this work for a more sophisticated system like a fancy real-time game system, where the gamers are actively interacting through the system?

Matthew Flatt argued for programming-language features inspired by OS features. The case was reasonably compelling: an IDE like DrScheme needs to run user programs in a special bomb-proof box, so that user programs can't impinge on the workings of DrScheme itself. This extends to lots of issues: device ownership, memory consumption, non-termination. Flatt argued for an abstraction called a "custodian" that manages all those resources together; killing the custodian frees up all the resources it manages. At the same time, he wants to enable sharing of data between programs, as an OS might allow. This makes the memory-management problem much harder, of course, since you need a policy for determining which custodian is "charged" for a block of memory, when it's shared between many. Flatt outlined a policy, whose details I didn't get, which apparently works in his setting.

Sandhya Dwarkadas talked about transactional memory from the hardware point of view. Unfortunately, her talk was pitched in the vocabulary of computer architects, so I didn't understand any of it! At a high level, what I took away was that transactional memory might be easy for processor makers to provide, by taking advantage of the cache-coherency systems that are already being included in multiprocessor machines.

Jeff Foster talked about another system for statically detecting race conditions, like Flanagan's for Java, but this time for C code. It amounts to a kind of pointer alias analysis, and the details are very complicated. A question that wasn't raised, which just occurred to me: Why was alias analysis necessary in C but not in Java? I think the answer will be that the Java system may assume that most accesses to data members are from within the class definition (and thus are not by reference).

Shaz Qadeer had the true misfortune of presenting last, after we'd patiently sat through 48 hours of lectures. For myself, I know I didn't follow his (or Jeff Foster's) presentation as closely as most of the others. Someone has to go last, I guess. Qadeer's presentation was on model-checking concurrent software. Some of the material he presented was basic model-checking stuff (like "What is LTL?") but he quickly jumped ahead to cover fancy techniques for state-space reduction. I'm always surprised when speakers do that. If you assume that I don't know the basics, then why do you expect me to absorb those along with some advanced material in one lecture? If you want to focus on the advanced stuff, then why not just say, "This is for people who already know X," and just give a quick refresher for X? The advanced students were probably bored while us newbies asked questions about LTL, and us newbies got bored once our intuition had been outstripped and we couldn't follow the lecture closely anymore.

All in all, the quality of the presentations at the Summer School was quite high. I was surprised that I could follow about 40 of the 48 hours of lectures, and got something out of almost every one (the previous 48 seminars I'd attended didn't have half that hit rate).

We also had a great time: Jim Allen's nightly walks around Eugene were quite nice, and we always ended up at a pub (if you like beer, they have lots of good ones in Oregon [my favorite: the Black Butte Porter, not to everyone's taste]). I met loads of people there and really enjoyed it. To PhD students in the US and abroad, I'd suggest you go to the Oregon Summer School in future years.

July 20, 2006

Oregon Summer School: Early Lectures

I am having a great time at the Oregon Summer School for Language-based Techniques in Concurrent and Distributed Systems & learning loads of interesting things. I'll summarize a few of the earlier lectures, but there are lots more besides these.

Cormac Flanagan presented his technique for checking & inferring that Java programs observe a locking discipline that gives desired atomicity properties. A motivation for this is that Java's synchronized keyword allows you to protect a block with a lock; but it is up to you to make sure that all of the right locks are held--in general it can be hard to tell whether a piece of code is atomic with respect to the rest of the program. Flanagan's system allows you to annotate a method or a block with atomic; a static checker then infers whether it is truly atomic by virtue of the locks it holds (vis-à-vis other bits of code in the program). The analysis is somewhat conservative, in that it may reject programs that are actually correct, but the techniques seem to lead you to write the kind of lock-based code that is ordinarily used in practice; Flanagan's team has run the checker successfully on large bodies of threaded benchmark code, and has even found bugs in the Java standard library (e.g. with the append function on Strings). The biggest drawback to this work is that it still relies on locks, and deadlock can still occur.

Dan Grossman gave a nice survey of possible semantics for transactions. Here again, programmers would wrap blocks of code with an atomic keyword, but now we are proceeding from semantics to implementation, rather than the other way around. Some of the semantic questions surround the interaction of transactions with exceptions, the nesting of transactions, and the distinction between weak and strong atomicity [1]. Dan convinced me that when an exception escapes an atomic block, you should not roll back the transaction. One good reason for this (among many) is that it preserves "serial elision"[2]: if you erase the atomic keywords, you get a sequential program that behaves the same as the original program would behave in a sequential context.

Strong and weak atomicity are distinguished by how you treat reads and writes that are not protected by an atomic block. An interesting tidbit is that Haskell's STM system moots the distinction by cleanly separating transactional memory from non-transactional memory (they have different types). This means that the low-level implementation can provide only weak atomicity, but at the language level there is no risk of other threads altering transactional memory.
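
A small sketch of that separation (my own example, using GHC's stm library as it stands today): TVars can only be read and written inside STM, and an STM action only runs under atomically, so non-transactional code can't interleave with it.

import Control.Concurrent.STM

-- Both reads and writes have STM types, so the whole transfer is one transaction.
transfer :: TVar Int -> TVar Int -> Int -> STM ()
transfer from to n = do
  a <- readTVar from
  writeTVar from (a - n)
  b <- readTVar to
  writeTVar to (b + n)

-- From IO, the only way to run it is atomically:
runTransfer :: TVar Int -> TVar Int -> Int -> IO ()
runTransfer from to n = atomically (transfer from to n)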

Dan's thesis is that if you provide explicit threads and mutable shared memory, *then* transactions are a great solution to the problems that arise—but that it's not clear whether threads with shared memory are the best model of concurrency.

To contrast, we've had two presentations on alternate approaches to concurrency. Charles Leiserson presented Cilk, a conservative, faithful extension of C that offers declarative parallelism. Designed for big parallel algorithms (think FFT, matrix multiplication, etc.), this language allows you to tag subexpressions for evaluation in separate threads—the principal mode of communication is simply the return value of the subexpression. This model removes the chance of race conditions and deadlocks (although the full power of C is still available, so you can still shoot yourself in the foot). The language design seems reasonably elegant (just a few keywords are added) and it has the property they call "serial elision"; a drawback is that return values need to be used in specific ways (e.g., assigning directly to a local variable) and there are ways to use it unsafely (e.g., trying to use a spawned return value before the spawned thread has actually returned).

Leiserson also gave some very simple and interesting ways to analyze the parallelism of an algorithm, which gives you a good guideline to how much speedup you can expect as you add more processors. Essentially, you need to add up the total amount of work done as well as the length of the critical path (the longest dependency chain) and look at the ratio. I hope to post more about this another time.
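
Sketched from memory of the standard Cilk-style account (so my own gloss, not the lecture notes): call the total work T1 and the critical-path length T-infinity; their ratio bounds how many processors can usefully be thrown at the problem.

-- Parallelism: the most speedup you can hope for, e.g. 1000 / 10 = 100.
parallelism :: Double -> Double -> Double
parallelism work critPath = work / critPath

-- Lower bound on running time with p processors, whatever the scheduler:
-- you can't beat either an even split of the work or the critical path.
timeLowerBound :: Double -> Double -> Double -> Double
timeLowerBound work critPath p = max (work / p) critPath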

Matthew Flatt of PLT Scheme fame (currently at University of Utah) gave a really neat demo of the Concurrent ML primitives in Scheme, in realtime: he built up a program iteratively, running it each time, while we watched. This worked surprisingly well. At times, it was easy to get confused, but asking lots of questions was a strategy that allowed me to grasp the ideas. The concurrency primitives are a lot like pi-calculus, in that they allow synchronous rendezvous and a "choice" combinator. This sublanguage is pure in the way that Erlang is: locks are not needed. Of course, the synchrony means the primitives are hard to implement (efficiently and reliably) in a distributed setting.

Michael Hicks presented "futures," a language feature first implemented in MULTILISP; futures permit declarative concurrency like Cilk, but the language handles some of the necessary mechanics: futures automatically collect or wait on result values when they're needed, whereas Cilk requires the programmer to explicitly wait for the results (and dies if this is done incorrectly!).

[1] Weak atomicity requires only that transactions are atomic with respect to one another; strong atomicity requires that they be atomic even with respect to code outside atomic blocks. The latter is, of course, much harder to implement.

[2] So-called by the Cilk people.

June 29, 2006

On Semicolon Wars

The recent American Scientist article, "The Semicolon Wars," by Brian Hayes, is a reasonably good short introduction to the history and variety of programming languages. I'm pleased with the author's even-handedness in presenting opposing viewpoints without taking sides.

My only beef is that Hayes applies the outworn, yet still circulating, classification scheme which distinguishes four types of languages: imperative, functional, object-oriented, and logic. These qualities are more like independent axes for measuring programming languages than discrete categories. Most object-oriented languages are imperative, but some are functional (e.g. OCaml, CLOS and, I'd argue, Haskell). Logic programming has been screwed into Scheme (viz. Kanren, probably others). Some functional languages have imperative features (Common Lisp, SML), while some "imperative" languages have functional features (viz. higher-order primitives (map, grep, filter) in Perl/Python/Ruby).

This categorization scheme should be abandoned and the communities entrenched in their positions should cross borders and see what the others have to offer. All the cool languages are doing it.

Final note: The article partly exemplifies Wadler's Law, in that a whole page is given over to the discussion of syntax: semicolons, comments, and identifier conventions.

#haskell

Overheard on IRC channel #haskell:

• vincenz btw notes that #haskell is one of the greatest channels around, you ask questions in other channels and if you're not 100% accurate, they shoot you down instead of guessing what you really meant

June 19, 2006

STM in Pugs

Pugs (the Perl6 implementation in Haskell) has started to add STM, à la Haskell. Audrey says they're missing certain combinators, which sound to me like nearly all of the necessary ones, but still it sounds like they "get it." Audrey also hints that STM may become part of the standard concurrency model for Perl6 generally, and that some Summer of Code project is working on adding STM to Parrot.

I'll be pretty interested to see how all this pans out.

May 22, 2006

Web Continuations Considered Not So Harmful

Ian Griffiths mounts a useful attack on web continuations. He's thought this through, and I appreciate it. Critical thinking is a help to those of us working hard on research in the area.

Most of his objections don't hold for Links, though. Let me go through them one by one.

Abandoned Sessions

Griffiths says that code with web continuations will simply stop executing at some points, when the user walks away, and that will cause (a) debugging confusion and (b) problems with resource management. In Links, the latter is not a problem, since we store all the state of a continuation on the client.

The confusion that might result from having functions that just silently quit in the middle may indeed cause trouble. Griffiths offers that at least it's predictable where this will occur; but not necessarily. I may write a function one day that always completes, and I'd be happy with that. But then some function that I call may change so that it serves a page and doesn't return until the user does something. My own function will now have the same problem, and I may not understand why. This is an issue, but it may be possible to add some language feature that forces programmers to declare this kind of behavior, making them aware of the problem and catching it at compile time.

Thread Affinity

I don't know what Thread Affinity is, but it sounds like a problem that crops up in Java specifically. I know there's a lot of voodoo around how to treat threads in Java.

In Links we haven't decided exactly how to present threads on the server, but we don't expect any voodoo. Links should be purely functional except with respect to IO (files and databases will be mutable, of course). As a result, the identity of the thread that runs a piece of code shouldn't matter.

Web Farms

Griffiths worries that web requests can generally come in to any machine in a farm, but the freezing of continuations might not be so fluid. In Links, they are fluid: continuations are serialized completely to the client, so when the request comes in to a farm, it doesn't matter what machine it's assigned to.

Back Button, Branching

The issue here is that the user's behavior is not essentially linear, and thus a control construct that assumes a linear path for the user would be inappropriate. The particular problems are: (a) a user can go back, causing code to be executed any number of times, and (b) a user can branch in multiple directions.

In Java, of course, this is a major worry, since Java code tends to be quite dependent on mutable objects. In that context, it could be really hard to figure out why x = 0; y = f(x); x = x + 5 ended up with, say, 15 in x. Links greatly mitigates this problem by encouraging a functional style, so that the only possible value for x after that sequence would be 5. Links libraries will be written in a pure-functional style, so you won't be required to run into the above problems just by using basic libraries.

On the other hand, he has a point: user behavior on the web is not fundamentally linear, so why do we propose a linear control construct? This is a very good question, and it's one that we don't have a great answer to, yet.

My own thinking is that a linear page flow is occasionally the right abstraction, and it would be very pleasant if we could give you a super-simple way to write such a page flow. You'll never have a page that has only one way out—you'll always have a Cancel button or a Help link or something, so we've got to figure out the right way to handle the edge cases. There's research to be done, for sure.

Summary

To summarize, Griffiths makes some good points about problems with web continuations in Java and in general. With Links, we're creating a new language, which we hope will prove that certain language features make these problems much much easier. The Java community won't be able to go right out and implement these features, but it might influence future decisions, and perhaps whatever comes after Java will learn from these problems and from our solutions.

May 7, 2006

Pi-calculus solves consensus

Dolev, Dwork and Stockmeyer in 1987 examined the variations of asynchronous message-passing models where consensus is solvable [1]. They examined the axes of processor synchrony (can processors sleep indefinitely or is their sleep bounded?), network synchrony (are messages received in a bounded number of steps after being sent?) and message-order synchrony (are messages on a channel received in the order they're sent?). The result was that processor synchrony doesn't help (1-resilient consensus is unsolvable whether processors are synchronous or not) but either of the other two parameters alone makes the difference between no fault-tolerant consensus and n-resilient consensus (any number of process failures can be tolerated).

The (synchronous, or classic) pi-calculus is a "model of concurrency" where messages are rendezvous points: messages are received right when they're sent and thus in order. This implies that pi-calculus can solve n-resilient consensus.

In case you're not convinced, here's a pi-calculus expression that solves it*.

* For pedants: my solution is only (n–2)-resilient: it deadlocks if only one process is non-faulty.

UPDATE: C. Palamidessi had shown that the synchronous pi-calculus can solve consensus and that the asynchronous pi-calculus cannot [2]. Wischik and Wischik have a protocol for synchronous rendezvous [3], which suggests that pi-calculus rendezvous is implementable exactly given the right primitives, and probabilistically implementable in any case.

[1] Dolev, D., Dwork, C., and Stockmeyer, L. 1987. On the minimal synchronism needed for distributed consensus. J. ACM 34, 1 (Jan. 1987), 77-97.
[2] Palamidessi, C. 1997. Comparing the expressive power of the synchronous and the asynchronous π-calculus. In Proceedings of the 24th ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages (Paris, France, January 15-17, 1997). POPL '97. ACM Press, New York, NY, 256-265. DOI= http://doi.acm.org/10.1145/263699.263731
[3] Wischik, L. and Wischik, D. 2004. A Reliable Protocol for Synchronous Rendezvous. Technical Report UBLCS-2004-1, Dept. of Computer Science, Univ. of Bologna. Feb. 2004.

April 13, 2006

Concurrency Theory

Ironically, just a week after I posted a screed about the lack of purpose in process-calculus research, I found myself trying to write a semantic model for a distributed programming language.

My purpose was not to prove anything about computation, but just to explicate the language precisely.

I have some strange requirements. I want to show that my semantics are realistic for a certain kind of environment, having peculiar limitations (for example, threads exist in particular locations, which could fail, and not all primitives are available in all locations; some components of the system are single-threaded...), so I want to choose a system that is close to that environment. CCS, pi-calculus, and join calculus are not at all realistic for distributed computation of any sort, so I shied away from those.

Fortunately, not all process calculists are blind to the issues that arise in distributed computing. Andrew Gordon has a nice page on "Nominal Calculi for Security and Mobility," summarizing various attempts to resolve the unrealisms. He observes that "Synchronisation on global channels is the basic communication primitive of the pi calculus. Many authors note that this is an expensive primitive to implement in a distributed system."

Fournet, et al., motivate their "Calculus of Mobile Agents" [1] with this acknowledgement:

Suppose ... that we want to implement a concurrent calculus with CCS-like communication channels and with processes running on different physical sites. If we do not locate channels, we quickly face a global consensus problem for nearly every communication which uses the interconnection network.

Later, they introduce the distributed reflexive chemical machine (DRCHAM), an extension of CHAM, which offers located solutions—that is, bags of processes which are collected together and marked with a location—and limits the communication between locations. As a result, "The transport is deterministic, static, and point-to-point, and synchronization is only done locally on the receiving site during message treatment."

The mobile ambient calculus of Cardelli and Gordon [2] is similar to the DRCHAM model in that it provides named locations. I haven't read this work in detail yet so I can't compare the two.

The upshot is that there may be a calculus which is realistic enough to model the issues that our programming language is designed to solve, and hence it may be worthwhile modeling our language in that calculus. Using the classical process calculi would not have been fruitful, any more than modeling it on a regular old Turing machine.

[1] Cédric Fournet and Georges Gonthier and Jean-Jacques Lévy and Luc Maranget and Didier Rémy. A Calculus of Mobile Agents. Proceedings of the 7th International Conference on Concurrency Theory. Springer-Verlag, 1996.
[2] Luca Cardelli and Andrew D. Gordon. Mobile Ambients. Foundations of Software Science and Computation Structures: First International Conference. Springer-Verlag, 1998.

February 21, 2006

The Week in Web Languages

This was a hot week for web-language news.

Generators (yield and next statements) were implemented in Firefox's JavaScript a few days ago, in what looks like about two weeks of work by one guy. Cheers to Brendan Eich.

Tim Bray posted a round-up of arguments for and against PHP (mostly against). I was interested to read the Zend CEO's claim that half the websites of the world are powered by PHP. I'm disappointed that the Netcraft survey doesn't seem to be tracking that data.

Speaking as a language designer and as someone who briefly coded PHP for money, I'm pretty convinced that PHP is a bad language design. Yet, it's not entirely clear that language design matters.

Joe Armstrong, the Erlang doyen, announced Jaws, a web framework for Erlang, this week. It has some of the features of Links, such as easy calls from client to server. One difference I note right away is that finding a server method from the client is, in Jaws, a matter of tacking the right string onto a URL. Links treats the method definition as a name binding, and calls from client to server have to be to bound names. Not that mis-typing the function name is a great source of error amongst web programmers!

Finally, a bit off-topic, Greg Linden of Findory writes about Blogger losing data and asking its users to cut-and-paste their entries from the web into their posting interface. The irony is astounding, of course. Greg thinks the problem is lax data-management at Google. They might be lax, but this tells me that they don't care much about the Blogger product. It's free, after all.

February 15, 2006

3-body Problem in Haskell

So last week I implemented the n-body simulation in Erlang with message-passing, as an exploration of one-process-per-particle concurrency. Now I've got it up and running in Haskell, one version using STM and another using just MVars (individual atomic variables).

I found the shared-memory approach to be a more natural style than message-passing for this kind of physical model. My technique is simple: for each step in simulation time, there is a shared vector of the states of the particles. Each process reads the current time step's values in sequence (blocking on those particles that are not yet filled in); when it has read the whole thing, it performs its next-state computation on those states, then writes its own next-state into the t+1 shared state vector. I want to reiterate that this technique works only because the processes necessarily work synchronously through simulation time.
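
A minimal sketch of that scheme, with made-up names and a one-dimensional State for brevity (the real code is below the fold):

import Control.Concurrent.MVar

type State = (Double, Double)   -- position and velocity, one dimension only

-- One MVar per particle per time step; filling a slot wakes up
-- any thread blocked reading it.
type Slice = [MVar State]

-- What particle i does at each step: read everyone's state for time t
-- (readMVar blocks on slots not yet filled in), compute its own state
-- for t+1, and write it into the t+1 slice.
step :: Int -> (State -> [State] -> State) -> Slice -> Slice -> IO ()
step i next now later = do
  states <- mapM readMVar now
  putMVar (later !! i) (next (states !! i) states)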

Why MVars instead of STM? MVars implement a medium-power concurrency primitive—something like test-and-set—and that was plenty powerful enough to write this application without much fuss. Transactional memory is a significantly higher-level primitive, and its added flexibility wasn't necessary in this case. I'd like to make a loose argument that MVars are generally sufficient for the concurrency that's likely to be found in games.

The STM version works very similarly, but instead of just blocking on an unfilled particle-state, it retries. I believe that this is inefficient, because it's not necessary to repeat the reads that have already been performed at a given timestep: the variables in question are all single-assignment. Also, I suspect that the semantics of STM will lead to lots of retries, the threads tending to swarm around an unfilled variable and all of them retrying the whole transaction together. By contrast, the MVar implementation is efficient in the sense that when a state-variable is filled in, one waiting thread is woken, whose take/put action causes another waiting thread to be woken, etc.
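
For concreteness, the blocking read in the STM version looks roughly like this (a sketch, not the actual code below):

import Control.Concurrent.STM

-- Read one particle's state for the current step, or retry (block) if it
-- hasn't been filled in yet; the enclosing transaction is re-run whenever
-- one of the TVars it has read changes.
readSlot :: TVar (Maybe a) -> STM a
readSlot tv = do
  m <- readTVar tv
  case m of
    Nothing -> retry
    Just s  -> return s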

Here's the code, for reference.

Continue reading "3-body Problem in Haskell" »

February 10, 2006

3-body Problem in Erlang

UPDATE: The same problem in Haskell.

I've been learning Erlang, and trying to understand the pros and cons of its approach to concurrency, particularly for physical simulations that are computationally intensive and perhaps hard to parallelize. The press packets keep saying that Erlang makes it easy: if you want to model some thing that has n parts, just fork off n processes that each do the work of one part, communicating when they need to. This is by no means the only way to build a model of such a system, but it's what the Erlang evangelists tout. To start testing the wisdom of that, I made a simulation of a simple gravitational system. What follows are the tribulations of a neophyte, and there may well be better ways to do it. What's the upshot? I'm not sold on Erlang's ease of use for one-to-one modeling of systems.

The first conceptual problem I hit was the question of synchronization. In order for the sim to be a decent finite approximation to the continuous world of physics, we need to break it into discrete time steps, each of which depends on the previous time step (at least, that's the only way I know of to get a fair approximation). This means that each particle can't just crunch away at its own speed, working as fast as it can to calculate its position at various times. The particles could easily drift out of sync with one another, making the physical model inaccurate. So right at the start, Erlang's asynchronous model seems to be a disadvantage.

Another problem is that in this particular model, every process depends on every other. As such, it would be nice if each process could simply fetch the needed information about the others. In Erlang, that's not trivial, since it would require each particle acting as a server, responding to requests for its position data, and each one acting as a client, sending off requests and waiting for responses. That sounded overly complicated, so I chose a different approach: each particle broadcasts its position as soon as it has been calculated. This basically dispenses with one half of the client-server interaction; but it means that each process can receive messages that it doesn't immediately care about, so it has to manage that information.

I solved the synchronization problem by attaching a time stamp (in model time, not real time) to each position message. When it has enough information about time t, a process calculates its position for time t+1 and broadcasts a message that says, "My position at t+1 is (x, y)." The others capture this information, and when they know all they need to know about time t+1, they each calculate their position at time t+2, and so on ad nauseam.

As I said, a process can receive messages that it doesn't care about yet. In the code, you'll see that each process keeps track of a list of "current" positions of particles, and also a list of "future" positions. In this particular model, I know that a process can only receive data that's at most one time step ahead of it. That's because a process that is "at" time t (i.e., it has not yet computed its t+1 position) cannot receive information about others' t+2 positions, because those t+2 positions would depend on its own t+1 position. That makes the future-data management a little easier in this case. A different model might not have that constraint, and would require better management of the future data.

This model is particularly pernicious, though, since it has total mutual interdependence. I'm interested in how well Erlang would work for big game worlds; maybe in those worlds each object has a small neighborhood of other objects that can affect it. But I expect that the coding style would be the same within that neighborhood. What's more, if objects can move around and change neighborhoods, then there will be the issue of managing the set of objects in each process's neighborhood. Is this unnecessary overhead?

A final note about the paradigm: a lot of my work went into managing the messages, since they can be received at any time and in any order. The machine does a lot of work in handling the messages, too: at each time step there are n distinct messages, but the machine has to deliver n² of them. In this case, the particle positions are written by only one process, and they are read by all the others. A shared-memory approach might have an advantage here, since locking per se might not be needed.

At last, the code. To run it, start up erl and type c(atoms)., then atoms:start().

Continue reading "3-body Problem in Erlang" »

January 30, 2006

Multi-way Joins in Links

Multi-way joins are now working (transformed from simple list comprehensions, that is) in the Links interpreter. There were a number of subtleties in the rewriting code that prevented this from working.

Currently, my cohort Jeremy and I are not sure whether it's better to use a deep-searching algorithm, like the one we have now, or rather to use simple, local rewrite rules with a fixed depth. The advantage of simple rules would be the ease of showing correctness. But it might be hard to show that such rules produce optimal behavior—there is the possibility of hill-climbing, reaching a local optimum and thereby missing a global optimum. The deep-searching rewriter we have now is rather hard to keep track of, as it relies on simultaneous knowledge of several different kinds of AST nodes, whereas we suspect that a shallow-rule system could be built with only about one rule for each way of combining two node constructors.

To decide this question, I think we need to flesh out the shallow-rule system, and in parallel we need to pursue some kind of assertions about the deep-search system. I still haven't been able to find any tidy results about this in the literature. There are scads of papers by Libkin & Wong that sound related, but I haven't yet done a thorough search to see what's there.

Some things I discovered: Codd, 1970, A relational model for large shared data banks is the original paper for (flat) 'relational algebra' which is the formalism ours needs to compile into. Libkin & Wong's 1994 paper New Techniques for Studying Set Languages, Bag Languages and Aggregate Functions argues about the comparative power of various nested calculi, including those called BQL and NRCaggr.

January 23, 2006

Message-passing and Entanglement

Also today, I pondered some issues around concurrency. I had the realization that, although message-passing may be a useful language mechanism (as a sole concurrency mechanism) for structuring a lot of programs, it's probably not good for applications that have a lot of data interdependence and need strong atomicity properties. Here was my insight:

Suppose you're building software for a complex system—like a simulator, say—and there are discrete units in the system that seem like they are largely independent from one another. It may be attractive to implement these units as distinct processes (threads) in the way you structure the software. You can write a single process description that sends messages whenever the real, modelled object has an interaction with other objects. This might seem appealing, but it's not necessarily the right thing to do.

The critical thing is that if, as in Erlang, message-passing is the sole means of concurrency-control, then passing a message is the only kind of atomic operation. That means that if variables x and y are updated together in some atomic operation, they have to be managed by the same process. And if y and z are also part of some atomic operation, they too have to be managed by the same process.

So the natural division of your program into threads, mirroring the division of modelled objects, may not be correct, since there may be some operation which must be atomic, and must involve two or more "natural" objects. Furthermore, it's not hard to imagine most or all of the variables in a program becoming entangled in this way.

This is not to say message-passing is not useful, or that there aren't lots of programs that could profitably be written that way. Many applications, after all, don't require strict atomicity in every one of the operations that are in principle atomic. But I think some real safety-critical applications are going to have data that is so entangled that, if they had to use the message-passing approach, they'd lose out on the benefits of concurrency.

So I continue my search for other concurrency abstractions that the language can usefully support.

Term-Rewriting Combinators

Progress on the Links interpreter.

Jeremy checked in his completely-overhauled optimizer combinators. We now have ways of defining term-rewriters by combining rewriters, and we can compose a structurally-recursive function just by giving one case and specifying a strategy, such as "bottom-up" or "top-down." This approach is pretty clearly laid out in the papers of Limsoon Wong, but it took us a while to rewrite our existing optimizer functions this way and to find just the right combinators. Hopefully we'll have a chance to publish the specific techniques that we're finding useful and the pitfalls.
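
To give a flavor of the style, here is a tiny sketch of such combinators in Haskell, with generic names rather than our actual ones:

-- A rewriter either fires (Just a rewritten term) or declines (Nothing).
type Rewriter a = a -> Maybe a

-- Try one rewriter; if it declines, try the other.
orElse :: Rewriter a -> Rewriter a -> Rewriter a
orElse r s t = case r t of
                 Nothing -> s t
                 hit     -> hit

-- Build a whole-tree transformer from a single-node rewriter, given a
-- function that applies a transformer to a node's immediate children.
bottomUp :: ((a -> a) -> a -> a) -> Rewriter a -> a -> a
bottomUp onChildren r t =
  let t' = onChildren (bottomUp onChildren r) t   -- rewrite the children first
  in  case r t' of                                -- then try the node itself
        Just t'' -> t''
        Nothing  -> t'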

Now, we've cut our optimizer code from ~1200 lines to about 400, a rather good improvement. It's far more readable and approximately as correct! We also spent some time pairing to create unit tests, which was fruitful and makes me feel much more confident that these things will keep working in the future.

January 20, 2006

Languages and Multithreading

Some references on the state of mainstream programming languages for multithreading:
Software and the Concurrency Revolution (in ACM Queue) has some really good observations about what problems exist and why current techniques don't solve them, or don't solve them yet. The author argues that the problems need to be solved for imperative languages, not just for functional languages.

Tim Bray's On Threads describes the problems that exist in the Java Standard Libraries, with lots of redundant locking at different levels—and you can still have concurrency problems because it doesn't help the programmer address safety and deadlock at the application level. Until I read this I didn't realize the situation was so bad in that arena.

Some cutting-edge (avant-garde?) languages are marginally better at this stuff, but I agree with the first author that the problem should be solved with new abstractions that can be applied in any language; it's not sufficient to solve this just in functional languages, as there's a lot of code out there that's not going to be rewritten in another language anytime soon. Even the best functional approaches (I'm thinking of STM in Concurrent Haskell and Erlang) don't seem to have the problem locked down—Haskell's STM moots referential transparency; and Erlang's message-passing model hasn't been shown to be powerful enough for all applications. This is a pretty interesting research area and I want to learn more.

January 11, 2006

Browsers are [not] quick; more on functional GUIs

UPDATE: This was complete rot. My test code was buggy, which caused the flicker. I patched it up and my draggable list moves smoothly now. Jeremy reports that he once tried an app that would re-create an entire (small) DOM tree on every keypress, and it worked reasonably fast. How do browsers manage to redraw pages as smoothly as they do?

An observation: web browsers move very fast when executing calls that modify the DOM.

For example, suppose list is a list of items, of which x is one, and you delete the item, then immediately insert it in the same place:

y = x.nextSibling;
list.removeChild(x);
list.insertBefore(x, y);

This will typically cause the whole list to be redrawn in the intermediate state, so the list gets shorter for a fraction of a second before the element is reinserted.

However, if you delete an element and reinsert it in a different place, the browser seems to redraw it smoothly, as if the element just moved:

y = x.previousSibling;
list.removeChild(x);
list.insertBefore(x, y);

That seems to be a difference of behavior, which I'm at a loss to explain.

In any event, this seems to rule out the "whole component replacement" that I talked about before, in implementing a purely-functional web GUI.

How to work around this difficulty? I still like the scoped, conditionalized structure that I outlined in that post. I suppose we could still have an inner function return a complete component, and have the runtime system compare it with the existing state of the component, executing a minimal series of DOM instructions to make the two agree.

Another approach, which I'm preferring, is to have the inner function return a "delta"—an expression representing how the component should change, something like a list of DOM instructions. I wouldn't want to allow the instructions in such an object to depend on one another, so I'd like to cook up a set of "deltas" which would be mutually independent, if possible. This would amount to a new mini-language for manipulating DOM trees, one that had no notion of sequential execution.
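
To sketch what such a delta language might look like (purely hypothetical types; nothing here is implemented):

-- Each edit names its target by a stable path from the component's root,
-- so no edit depends on another having been applied first.
type Path = [Int]   -- child indices from the root

data Node  = Element String [(String, String)] [Node]
           | Text String

data Delta = SetText  Path String          -- replace a text node's contents
           | SetAttr  Path String String   -- set an attribute
           | Insert   Path Node            -- insert a new subtree at this path
           | Delete   Path                 -- remove the subtree at this path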

January 10, 2006

On Not Designing XML languages

Tim Bray discusses what he calls the "big five" XML languages and argues that whatever you're trying to do, you probably don't need to design a new one, particularly if your application falls anywhere near one of these. I'm inclined to agree.

Language design is hard work, and new languages almost never get any uptake (cf. Esperanto). The same seems to be true for spoken languages, programming languages, and data-description languages.

Tim says that "any non-trivial" XML language is going to have constraints that can't be checked with existing schema languages. That sounds like an interesting problem. Is there a list of such constraints somewhere?

January 6, 2006

Haskell Activity

I'm relatively new to the Haskell community; before diving in this fall, I'd thought that it was probably a really innovative language with lots of neat features, but also suffering from quirks that prevent it from being widely used except by language nerds.

Instead, I'm impressed how much activity there is in the Haskell community, making real libraries like modular database abstraction layers, a distributed source-control system, and apps like an mp3 player, a diagram editor, and even, apparently, a start at a first-person shooter using OpenGL.

I'm leaving out all of its applications in interpreting and compiling languages, since that's its forte. It's quite good at those things, but these wider applications should make it interesting to programmers in general, as opposed to language nerds alone.

January 3, 2006

Better Comprehension Through Memo(r)ization

I've been thinking about how to deal with these problems optimizing query-backed comprehensions.

A reasonably general example of the problem is as follows. I include a join condition for conversation's sake, but also keep in mind the possibility that the condition could be always-true.

for x <- s in
    let y = f(x) in
        for z <- t where x.id == z.id in
            (x, y, z)

First let's characterize the running time of this algorithm as it stands. Let s be the size of the table s, and let F(n) be the cost of running f on an input n. Also suppose that there is a cost q for each query, and that this is non-trivial since it involves marshalling data to and from an external process, which may be on another machine. Finally, let k be the average number of rows in t which match the join condition; i.e., the k such that the number of tuples in the result is sk. The above algorithm should run in time O(sF(n) + sk + sq).

Now how is it affected by optimization? We have a rewrite rule which would rewrite the above as

for (x, z) <- s * t where x.id == z.id in
    let y = f(x) in
        (x, y, z)

This performs fewer queries but evaluates f(x) more times. This version should take time like O(skF(n)). In lots of cases, this is worse.

It might be possible to estimate these costs, yielding a heuristic optimization rule that the compiler would employ.

But if we know that f is pure, we can memoize it, and that would give a much better optimization.

Supposing we can do the memoization lookups in some kind of O(log m) time, this would give a program that runs in time upper-bounded at O(sF(n) + sk + s log m); we've saved the O(sq) cost of doing multiple queries but we've incurred the cost of the lookups at O(s log m).
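
A minimal sketch of the memoization, threading a Data.Map through the loop (illustrative names only; this is not the Links optimizer):

import qualified Data.Map as Map

-- Pay for f only once per distinct x, while walking the joined rows.
memoFold :: Ord k => (k -> v) -> [(k, r)] -> [(k, v, r)]
memoFold f rows = go Map.empty rows
  where
    go _    []              = []
    go memo ((x, z) : rest) =
      case Map.lookup x memo of
        Just y  -> (x, y, z) : go memo rest
        Nothing -> let y = f x
                   in  (x, y, z) : go (Map.insert x y memo) rest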

Tricky Query Optimizations

Optimizing language-integrated queries is trickier than it looks.

Besides the structural problem I talked about yesterday, there are cases where a naive "optimization" can easily make the result run much slower than the direct version. This will happen whenever some time-consuming operation occurs between an inner and an outer comprehension. Suppose t is a table with loads of big numbers in the number column; consider this Links algorithm:

for x <- t in
    factors = factors_of(x.number);  # (costly)
    for y <- s in
        (y, factors)

A naive optimizer would say, "I should combine these two queries rather than re-running the inner query over and over again." It would yield an algorithm that works like this:

for (x, y) <- (t, s) in
    factors = factors_of(x.number);  # (costly)
    (y, factors)

But this algorithm might evaluate factors_of many more times than the original, since there may be more than one pair (x, y) for any given value of x. The optimization improved performance with respect to the database, but worsened the performance of the algorithm itself.

January 2, 2006

Nesting into SQL

It's perfectly plain in Buneman et al.'s "Comprehension Syntax", but I didn't realize it until now: Links's list/bag/set comprehensions don't correspond to SQL. There are simple expressions, using only comprehensions and no fancy language features, that cannot be compiled directly into SQL, because SQL's relations are always flat, whereas the structures our comprehensions build can be nested. To wit:

for x <- t in
  [(x.name, for y <- s where y.t_id = x.id in [y.id])]

This should give us a list of pairs, the first of which is the name attribute of a thing in t and the second of which is a list of corresponding id values taken from s. But SQL has no structure directly corresponding to a "list of tuples of lists."

Currently, the Links interpreter will simply decline to optimize these comprehensions, leading to a single query against the table t and as many queries against s as there are rows in t. This is abysmal; it's distinctly worse than coding in PHP where at least you can write one efficient query.

I see two plausible solutions; one is to modify Links to bring it into line with SQL's model. I'm not too fond of this; ruling out nested structures across the board would be unacceptable, and creating a sub-language for queries, which contained only flat relations, would diminish the Links goal of unifying the algorithmic language with the query language.

Alternatively, and much better, we could do the join, and then post-process the results to group them into the proper structure. In this case, the query would be

select t.name, s.id from s, t where s.t_id = t.id order by t.name

And the post-processing might be

fun grouper(row, accum)
{
    if accum <> [] && row.name == hd(accum).name 
    then [{name = row.name; id = [row.id]++hd(accum).id}]++tl(accum)
    else [{name = row.name; id = [row.id]}]++accum
}
fold(grouper, [], results)

This just takes the records with a common name field and folds them into a single record, with all the corresponding id fields listed therein. I'm relying on the ordering by t.name in order to do this in linear time, but a more general/robust solution could probably work in n log n time, by keeping an indexed structure to look up the record into which any given row should be folded.

It will take a bit of tinkering to come up with the grouper for a given nested comprehension, in general. Some thoughts toward that:

In SQL, only the so-called aggregate functions (max, average, sum, count) may be applied to the values being grouped together—essentially, only functions that yield an atomic datum. The operation we need might be equivalent to a "group by" whose operation is list/bag/set construction. Looking at it this way might allow us to easily apply algorithms and transformations that already exist for SQL's "group by." But how to recognize what columns are being grouped by and which ones are being grouped together?

Well, a condition that makes the example problematic is that we have nested queries, and there is a "bare" piece of the top query: a value from t is used outside of the inner query. I think this condition implies the sort of nested structure I'm talking about. Furthermore, it is a smoking gun as to how we should structure the groupings. The variable used in the outer comprehension is the one to group by; the inner comprehension, in itself, becomes the aggregation operation.

This needs to be refined, of course, but I think it is a lead. Essentially, the idea is to take a nested comprehension like this:

for x <- t in
  [(x.name, for y <- s where y.t_id = x.id in [y.id])]

and convert it to a faux-SQL query like so:

select t.name, makebag(s.id) from s, t
where s.t_id = t.id group by t.name

where makebag( ) is the "aggregate function" that constructs a bag from its arguments. (In fact, this should fall out as a comprehension over the grouped elements.) The faux query comes apart into a real query and a post-processing step:

select t.name, s.id from s, t where s.t_id = t.id

group_by [name] makebag results

I can't see writing a general post-processor within Links, since it would need something like reflection, to dissect an arbitrary row on given fields. Let's use a Haskell-like meta-language notation. Suppose we want to group by some list of fields g and that project g is a function that projects those fields out to form a new record. We also have an aggregation function, f. The needed post-processor should behave like this (though more efficient algorithms should be used):

group_by g f rows = map (onSecond f) (collate g rows)
    where collate g list = [let ys = whereEq (project g) x list in
                                (x, map (project g') ys)
                            | x <- uniq (map (project g) list)]
              where g' = complement g       -- fudging here a bit
          whereEq f x = filter (\y -> f y == x)
          onSecond f (x, y) = (x, f y)

I aim to set all this in a more gentle theoretical framework. I'd like to have a formal definition of the NRC, SQL, and our query-optimizable language; then we can give the translation between them, and then it's a simple matter of programming. Shouldn't be too hard.

December 22, 2005

Simplifying XML

Don't give up hope, Tim. A simplified/revised XML spec would make some of our lives much easier.

I'm working on a programming language for the web which integrates XML syntax into the language. This is dandy except that there are times when we need to look at the Doctype in order to be able to parse a program. In our language, for example, you can't use &nbsp; until you've indicated a Doctype of XHTML, and until our wee little parser has fetched and parsed that whole mess and made a big table of entities.

Also, DTD doesn't do much for us. The newer models for validating documents (e.g. Relax) fit better with the kind of validity that programming-language people are inclined to think about (namely, "regular trees").

We do integrate XML Namespaces, so having that pulled in probably wouldn't hurt, either.

December 1, 2005

Shared Memory

I've been thinking about concurrency.

I think the model of "message-passing with no destructive update" may be the ultimate solution. This is the Erlang approach, and it has a lot going for it.

When you disallow updating data, you can do one incredible, massive optimization: whenever process A sends a huge chunk of data to process B (on the same machine), you just send a pointer. You don't copy the data. You don't marshal it, you don't do nothing. You just give process B a copy of the pointer that A is using. Now, A may later "update" that data by constructing a very similar data structure and throwing away its old pointer. At that point, B is "out of date." If B needs to stay in sync, you should send the new data as well. Since this is cheap, there's no reason not to.

Whenever I've done concurrent programming, I've started out with a shared-memory model and ultimately found that it's too hard to manage the timing and notification issues. I've always abandoned it for message-passing sooner or later.

Do you have an application that absolutely requires mutable shared memory? Either for performance reasons, or just to achieve a solution to the problem?

I want to hear about it.

November 11, 2005

Functional Reactive Web GUIs

Here's a bit of syntax for doing a kind of "functional reactive web programming." This example aims to implement the draggable list problem.

fun dragList(origWidget) : mouseIsDown => DOM -> DOM {
    draggingElement = findHitElement(mouseDownPos);
    fun updateIndicator(widget) : mouseMoved => DOM -> DOM {
        reorderNodes origWidget draggingElement mousePos.y
    }
    finally {
        # updates the widget once the behavior's condition 
        # finally becomes false
        reorderNodes origWidget draggingElement mousePos.y
    }
}

This defines a "behavior" called dragList which could be applied to a node in a DOM tree. The dragList widget has a type mouseIsDown => DOM -> DOM. Here the small arrow -> denotes an ordinary function type (arg on the left, result on the right). What comes before the big arrow (=>) denotes a condition. This is something which must be true for the runtime system to invoke this function. So the type mouseIsDown => DOM -> DOM says, "if the mouse is down, you can call this function with a DOM tree and expect another DOM tree." The runtime system will repeatedly pass the current DOM of the relevant widget and replace it with the result of the function.

You might apply the behavior to an HTML element something like this:

<ul l:behavior=dragList>
  <li>Yorick</li>
  <li>Horatio</li>
  <li>Rosencrantz</li>
  <li>Guildenstern</li>
</ul>

The same behavior can be attached to many page components this way. Conceivably, more than one component's behavior could be active at a given time. But within a behavior, internal functions should be serialized. In this system, a behavior can only destructively affect the component on which it is invoked—it follows from this that concurrent behaviors on distinct components cannot interfere. It remains to be seen whether every desirable GUI experience can be implemented this way.

Now, diving into the body of the dragList behavior:

    draggingElement = findHitElement(mouseDownPos);

The dragList behavior begins by letting draggingElement refer to the element where the mouse is originally clicked; we'll use it later.

Next dragList gives an internal function which has its own condition; as long as the outer block is active, the inner block stands to be called.

    fun updateIndicator(widget) : mouseMoved => DOM -> DOM {
        reorderNodes origWidget draggingElement mousePos.y
    }

The inner function's condition only applies while the outer function's condition is true. So what this block expresses is that, while the outer condition is true, whenever the mouse moves, the runtime system should call updateIndicator. In order to implement the nested conditions, the compiler will write code to register and unregister the inner function with the appropriate event handlers whenever the outer function's condition is satisfied/falsified.

Finally, the use of the inner function, like that of the outer function, is to update the component. The runtime system passes in the current DOM for the component and replaces it with whatever the function returns. In this way, we model changes over time without destructive operations.

Now to progress to the next level of sophistication, we can observe that the type annotations are unnecessary and that the compiler should be able to derive both the argument types and also the conditions under which the function needs to be called. This works because there are special thunks which refer to environmental variables that can change. Omitting the type annotations from the original figure, we get:

fun dragList(origWidget) {
    draggingElement = findHitElement(mouseDownPos);
    if (not(childOf(origWidget, draggingElement)))
        return origWidget 
    fun updateIndicator(widget) {
        reorderNodes origWidget draggingElement mousePos.y
    }
    finally {
        # updates the widget once the behavior's condition 
        # finally becomes false
        reorderNodes origWidget draggingElement mousePos.y
    }
}

The compiler could still determine, making use of referential transparency, that updateIndicator will not return a distinct result unless the value of mousePos has recently changed. Thus it can infer the condition type of mouseMoved and behave as before, registering the function with the mouseMoved event at the JavaScript level. Similarly, the outer function, dragList, should be invoked whenever the mouseDownPos special thunk has changed its value. In fact, the value returned will not change as long as draggingElement is a null value—that is, as long as mouseDownPos lies outside the elements within this widget, and findHitElement returns some node which is not a part of this widget. (In fact this explicit test is ugly, and in a moment I'll look at ways of eliminating it.)

This can be seen as a declarative approach to web GUIs, because each function, inner or outer, is essentially just "giving the value" of the widget as a function of some environmental conditions. Besides the mouse button and position, other environmental variables could be used (keys down, wall-clock time, a changing signal from the server, etc.). These "special thunks" are analogous to what Hudak and Elliott call "signals" in the area of Functional Reactive Animation [1].

Now ideally, the compiler should determine when two inner functions of one behavior would compete—when their conditions overlap; then there would be a race to update the component, possibly with different values. This is a mistake and the programmer should be prompted to change the code so that only one value of the widget is implied by any given set of environmental changes. Perhaps this derivation could be by means of a condition type on the special thunks, and a typing rule which bubbles these conditions up to any containing expressions.

Lexical scoping is an important factor here, which needs to be pinned down. In this case the inner function was a function of origWidget, that is, the original structure of the component—rather than its own parameter, widget, which would give the current structure of the widget at the time of invocation. This is just a design choice for the behavior-designer; under other circumstances it may be more natural to use the current widget instead. Of course, the runtime system should take care not to destroy the lexically-scoped origWidget value while this behavior is active.

A lot needs to be pinned down. The condition types have been presented a bit too breezily. After all, mouseIsDown has a clear truth value at any moment; but mouseMoved describes an instantaneous change in a sort of continuous variable. Some care should be taken in defining these conditions. The question of how to handle an instantaneously-changing variable and its downstream effects has been looked at in functional reactive programming.

Also, a bit more expressivity would be useful. Above, we would prefer to describe the condition of the dragList function more finely as: "mouse is down and its original hit point is within me." This calls for some way to parameterize the condition type, e.g. "mouseIsDown && firstHit(insideMe)." It's not obvious that all the natural conditions and parameters will be easily and soundly expressible.

Finally, in talking to Phil about some of these ideas, he suggested that rather than operate on successive values for the view of the widget, the behavior functions should instead operate on a model of the widget; another set of viewing functions should be automatically invoked to transform a model into its realization in DOM form. I think this is a good idea and I need to explore this further.

[1] Elliott, C. and Hudak, P. 1997. Functional reactive animation. In Proceedings of the Second ACM SIGPLAN international Conference on Functional Programming (Amsterdam, The Netherlands, June 09 - 11, 1997). A. M. Berman, Ed. ICFP '97. ACM Press, New York, NY, 263-273.

November 7, 2005

OCaml Gripes

Gripes with OCaml:

  • Types that are declared in an interface file also need to be redeclared in the implementation file.
  • let is a stranger word than fun (as in SML) for declaring functions
  • OCamlMakefile doesn't detect dependencies if the interface file is not mentioned in SOURCES; "X and Y have inconsistent assumptions"
  • No built-in "compose" combinator, can't use o (as in SML) or . (as in Haskell)
  • No before form as in SML (very useful when debugging, checking return values)
  • a record type cannot be given directly within a variant constructor declaration (e.g. Foo of {field : string}).
  • Syntax errors are often unhelpful (Syntax error)
  • Never helpfully points out that you might've partially applied a function when you meant to fully apply it (Haskell does help in this way!).
  • Interactive shell is very difficult to use:
    • It is picky about terminating each expression with exactly two semicolons.
    • There are lots of special commands for loading modules (open FooModule is never enough) and they have different conventions: for example, some begin with # and require an exact filename. I never know what file needs to be #loaded.
  • The build system detects when a module's interface has changed, but forces the programmer to do something about it.
  • The built-in regexp interface is very clunky:
    • Str.quote needlessly returns a string instead of a regexp, requiring me to then call Str.regexp whenever I use it.
    • Str.string_match needlessly requires an integer argument designating the index at which I'd like to start matching. This should, of course, be an optional argument or a distinct form of the function (Perl seems to have gotten along okay, indeed taken over the world, without a silly index argument on regex matching).
  • The Map module for finite maps provides a map function for transforming the values in a map, but doesn't provide a way to transform the keys.

November 1, 2005

A Case for Reflection in HOT Languages

In a language for the web, one ought to be able to serialize data at will, it ought to be easy to do so, and one ought to be able to write various serializers that have different properties of speed and compactness. In typical statically-typed languages, doing this is rather involved, whereas it's quite easy and commonplace in dynamically-typed languages.

To that end, a HOT language should provide primitives which will accept a value of any type, and return information such as:

  • the value's specific type
  • the "kind" of the type, e.g.: basic type, record, variant, list, function
  • for a variant value: the constructor (as a first-class value), and its arguments
  • for a record value: the labels of the fields, and a way of getting their values
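
As a point of comparison, GHC's Data.Data generics already expose roughly this kind of interface; a minimal sketch (the User type is just an example):

{-# LANGUAGE DeriveDataTypeable #-}
import Data.Data

data User = User { name :: String, age :: Int }
  deriving (Show, Data, Typeable)

-- Inspect a value generically: its type, its constructor, and
-- (for records) its field labels.
describe :: Data a => a -> (String, String, [String])
describe v = ( dataTypeName (dataTypeOf v)   -- the value's specific type
             , showConstr c                  -- the constructor used
             , constrFields c )              -- record field labels, if any
  where c = toConstr v

main :: IO ()
main = print (describe (User "ada" 36))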

If the type system is preventing us from doing something which is perfectly safe, sane and desirable, then the type system should be changed.

I have some ideas how to type operations like the above; I'll work it out and get back to you.

October 31, 2005

Why We Need a Web Control Stack

Web applications almost universally have a notion of "authenticated user" which flavors the pages they serve up. Typically the logic to support this notion is built into frameworks, so that programmers don't have to deal with it on an action-by-action basis; they can just mark which actions are required to be authenticated, and presume that the user is authenticated and her credentials are handy when the handler is running.

There are at least two decent ways of writing the standard authentication check. One is to handle failure explicitly:

fun my_handler() {
    user = validate_user();
    if (!user) {
        return login_page(return => my_handler);
    }
    # from here on down we assume that "user" holds a valid user object
    if (!can_do(user, foo)) {       # can user perform the current action?
        return "You can't do this!"
    }
    do_foo(...);
}

But the intention is, you'd like to say "Show the user a login page and come back here when it's done." In the code above, you have to spell out where "here" is.

Ideally the programming system should know where to come back to. Here is a more attractive syntax:

fun my_handler() {
    user = validate_user();
    # from here on down we assume that "user" holds a valid user object
    if (!can_do(user, foo)) {       # can user perform the current action?
        return "You can't do this!"
    }
    do_foo(...);
}

To sketch an implementation, suppose that the validate_user routine has a way of returning a page to the browser, and embedding a token in that page which knows how to come back to the calling routine, preserving its context (call stack, lexical variables, etc.). Call this feature a "web stack": a call stack which is representable in a web page.

The first approach has the limitation that "my_handler" must be a top-level function and, in traditional web languages, you need to provide a table showing how to dispatch from the string "my_handler" to the function my_handler. Some languages/frameworks will automatically create a dispatch table that gives an external name to every top-level function, but this presents a security question, since you're implicitly exposing all functions to the network, leaving the programmer to remember to close them up. One way to patch that difficulty is to support decorators, which can be used to register a function with a dispatch table; this makes it easy to check which functions are externally-callable.

Still, these approaches require the programmer to be explicit about what "here" is when telling a subroutine "come back here." Links should support the second approach.

With the "web stack" example, there is still some risk of inadvertently exposing a program point that can be invoked from the network. As a bulwark against that, one can imagine statically checking and tainting such "holes." The programmer could then be required to declare that a function may open up an entry point, and any routines which call such a routine would also be tainted—forcing the programmer to acknowledge the "hole."

October 28, 2005

GHC & Extensible Records

From the GHC FAQ:

Does GHC implement any kind of extensible records?

No, extensible records are not implemented in GHC. Hugs implements TRex, one extensible record variant. The problem is that the record design space is large, and seems to lack local optima. And all reasonable variants break backward compatibility. As a result, nothing much happens.

I have a hunch that extensible records are going to be a condition of bringing functional programming to the masses. I need to learn why they are hard.

October 24, 2005

My Research

Before getting too much into the thick of things, I should say who I am and what I'm doing.

I'm working on a doctoral project to help make a new language for web development at the University of Edinburgh. The thrust of the project is to combine best practices from a number of domains and present them in one coherent language—essentially, we want to mitigate the so-called "impedance mismatch" between different technologies that make up a given web system.

For example, web applications typically are backed by a database, but the database comes with its own language, SQL, for extracting data; one typically has to write a lot of glue code to repackage the data that comes from the DBMS for use within the main language that the system is coded in. Then, in order to create a rich user experience, you typically have to code in JavaScript, and again you have to repackage your data to make it work with JavaScript & the DOM. All these multiple representations for the same data wastes a lot of web developers' time. We hope to bring all these technologies in under one linguistic umbrella, and provide a more convenient way to express a complete web system.

Along the way, we might come up with some ideas and approaches that are useful in other contexts. So if these issues are interesting to you, even if you don't expect to adopt a new programming language anytime soon, I hope you'll subscribe and follow along.

The Draggable List: a Benchmark Widget for Web Languages

Here's a nicely-defined problem that comes up repeatedly in web development, and serves as a simple example of some of the problems that come up in programming client-side web applications. We can measure web languages by how well they can express a solution to this.

The goal is to create a "draggable list." Here the user is shown a list of things—for example, to-do items—and has the opportunity to re-arrange them by dragging & dropping. While the user is dragging, the UI must indicate where the dragged item would be placed if the user let up the mouse at that moment. When the user lets go of the mouse button, the list should, of course, be permanently reordered on the page; afterwards the user can do more drag actions, or not.

The behavior should be abstracted from the concrete HTML/CSS representation so that the behavior is reusable with respect to a wide variety of visual treatments.

A side condition is that it should be easy to get the reordered list data *itself*, not by scraping the DOM tree. In this case it's easy, but the approach ought to extend directly to more complex widgets with more complex models.

How to visually indicate where the item would be dropped should be up to the coder. It's worth noting that there are generally two good ways of doing it. One way is to leave all the existing items in place but to display a solid line between the two elements where this one would be inserted. Only after the user lets go are any of the other elements moved on the screen. The other approach is to move around a representation of the dragged element—in this case, the other elements are displaced while you drag. This approach is taken in the best-known ready-made example of the Draggable List (if anyone knows of other good examples, please let me know).

Judging by the source of that implementation, JavaScript performs poorly on the Draggable List benchmark problem. Any other contenders?

UPDATE: Another implementation, very appealing to the user. But, appealing to the programmer?

October 23, 2005

Ruby DSLs & Juggling

Here's an interesting set of slides on using Ruby syntax to define DSLs within Ruby.

I was pleasantly surprised at the language for defining juggling patterns.

$IOCCC

"He [Larry Wall] won the International Obfuscated C Code Contest twice..." (State of the Onion)

Ah. Suddenly it's all clear to me.