Sunday, November 30, 2008

Non-standard inhabitants

The goal of this post is to demonstrate that I'm not crazy. One of my latest discoveries involves a function called "inhabitants", but that's impossible, or so say the textbooks, and I can't explain my discovery any further because I crumble under the mathematically rigorous objections. You know, the way researchers whose latest discovery involves a perpetual motion engine tend not to be taken very seriously.

So I took the time to read about those mathematically rigorous objections, and I'm happy to report that my conclusion is that I am not a crackpot. It's just a misunderstanding. A bit as if my perpetual motion engine was supposed to cycle more slowly at every turn, without ever stopping (hence the name) and without producing useful work. It may or may not be an interesting discovery, but it has nothing to do with what people usually call perpetual motion engines.

Now that I'm convinced of my own sanity, the hard part is to convince others. I'll practice with you, foreign and unknown readers, who are probably too far away to cause me trouble if I fail.


First off, I'd like to explain what people usually mean by "inhabitants". The type inhabitation problem is the task of determining, for any given type in a particular programming language, whether there exists an expression in that language which has that type. Such an expression is said to "inhabit" its type. It would be even more useful if an actual inhabitant could be returned as a witness, and that's precisely what Djinn does for Haskell's type system.
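For a concrete example (mine, not actual Djinn output), here is a type together with a term witnessing that it is inhabited:

    -- One inhabitant of (a -> b) -> (b -> c) -> a -> c is, essentially,
    -- function composition; this is the kind of witness a tool like
    -- Djinn can find.
    witness :: (a -> b) -> (b -> c) -> a -> c
    witness f g x = g (f x)

    -- A type like (a -> b), on the other hand, has no witness to offer
    -- (no terminating one, anyway; that distinction will matter later).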

Some people have wondered whether Djinn could be extended to enumerate all the inhabitants of a given type, instead of just one. Well, no. And here is a diagonalization argument to prove it.


First, it makes things much simpler if we cast everything in terms of integers. Types and expressions can both easily be encoded as integers, for example by interpreting their ASCII representations as base-256 numbers. I'll use the notation ⟨x⟩ to denote the integer representing the type or expression x.
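Here is one way such an encoding could look. This is only a sketch to make the notation concrete; the function names are mine, and any injective encoding would do. (A real implementation would use Integer rather than Int to avoid overflow, but I'll stick to Int to match the signatures below.)

    import Data.Char (chr, ord)

    -- Interpret an ASCII string as a base-256 number.
    encode :: String -> Int
    encode = foldl (\n c -> n * 256 + ord c) 0

    -- Recover the string from its number (ignoring leading zeros).
    decode :: Int -> String
    decode 0 = ""
    decode n = decode (n `div` 256) ++ [chr (n `mod` 256)]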

Even though our expressions now masquerade as integers, we should still be able to evaluate them.

    eval :: Int -> Int -> Int
    -- spec: ⟨f⟩ `eval` ⟨x⟩ == ⟨f x⟩

The above code is the specification for eval, not its implementation. It is an incomplete specification, because we don't care about the behaviour of eval when it is given integers which don't represent expressions (rendering the left-hand side meaningless) or when those expressions have incompatible types (rendering the right-hand side meaningless). In what follows, eval should be understood implicitly whenever the code appears to apply one integer to another.

Next, by contradiction, suppose we did have a version of Djinn which enumerated all of the inhabitants of a type.

    inhabitants :: Int -> Int -> Int
    -- spec: if (p :: T) then ∃i:Int. inhabitants ⟨T⟩ i == ⟨p⟩

Instead of returning a list of all the results, inhabitants takes an extra integer i as a second argument and returns the ith member of its enumeration. If there are fewer than i members in the enumeration, then we don't care what inhabitants returns.
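In other words, the second argument plays the role of an index into the enumeration. Given an inhabitants satisfying the specification, the more familiar list-of-all-results presentation could be recovered like this (a sketch, reusing the hypothetical inhabitants above):

    -- Sketch: the list of all enumerated expressions for a given type.
    inhabitantsList :: Int -> [Int]
    inhabitantsList t = map (inhabitants t) [0..]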

As with eval, this specification is incomplete. It is very incomplete. It doesn't even forbid inhabitants from returning expressions with the wrong type in addition to those with the right one. It only requires inhabitants not to miss any of the expressions which do have the requested type, because that's all I need in order to derive a contradiction. In fact, I only need to find a single type and a single expression of that type which inhabitants misses. The diagonal, for example.

    diag :: Int -> Int
    diag i = 1 + inhabitants ⟨Int -> Int⟩ i i

This time I did not merely give a specification, but an explicit definition.

The function diag has type Int -> Int, so the specification of inhabitants requires that inhabitants ⟨Int -> Int⟩ i == ⟨diag⟩ for some i. Both sides of this equation should evaluate to the same integer, ⟨diag⟩, and thus the expressions represented by those identical integers should also be the same. In particular, the two expressions should behave the same on all inputs, including input i itself. But on that very input, diag made sure to behave differently: it returns a value that is off by one from what the expression enumerated on the left-hand side returns. Thus, inhabitants ⟨Int -> Int⟩ must have missed diag, as needed.


Now that we know that implementing inhabitants correctly is impossible, what's wrong with the following attempt?

    inhabitants :: Int -> Int -> Int
    inhabitants _ p = p

Okay, so it's not a very serious attempt, but it does satisfy the specification. Remember how I said that I didn't need to forbid inhabitants from returning too many expressions? Well here I am, returning all expressions, all the time, regardless of the type requested. How could I miss any?

Let's walk through the argument again. There is some diag function which I'm supposed to miss. The specification requires inhabitants ⟨Int -> Int⟩ i == ⟨diag⟩ for some i. And with this implementation, this happens when i is equal to ⟨diag⟩. At this point, diag is supposed to stab us in the back by adding one!

    diag i      = 1 + inhabitants ⟨Int -> Int⟩ i i
    diag ⟨diag⟩ = 1 + inhabitants ⟨Int -> Int⟩ ⟨diag⟩ ⟨diag⟩
    diag ⟨diag⟩ = 1 + ⟨diag⟩ `eval` ⟨diag⟩
    diag ⟨diag⟩ = 1 + diag ⟨diag⟩
    diag ⟨diag⟩ = loop

The foolish diag, however, was stabbed by our infiltrated agents first. It turns out that calling inhabitants ⟨Int -> Int⟩ i caused diag i not to terminate! Adding one to ⊥ has no effect, and diag failed to distinguish its behaviour from the expression enumerated by inhabitants ⟨Int -> Int⟩ i, avoiding a contradiction.

What this tells us is that the original argument implicitly assumed that inhabitants only enumerated terminating expressions. In turn, this tells us that the type inhabitation problem implicitly requires the inhabitants to terminate. And with hindsight, it's rather obvious that they should. Otherwise, the type inhabitation problem would be trivial: every single type would be inhabited!

    fake_inhabitant :: a
    fake_inhabitant =
      let witness = fake_inhabitant
      in witness

And now, I can finally reveal what it is that makes my perpetual motion engine possible: the inhabitants function involved in my discovery enumerates expressions which, syntactically, have the correct type, but whether the enumerated expressions terminate or not is irrelevant to my discovery. The fact that I won't be able to produce useful work out of it doesn't mean it can't also be a nice theoretical device. But maybe I really should have called it a "perpetually slowing-down engine", to avoid confusion.


update: In other news, it is also impossible for Djinn to exist. But it's only impossible for versions of Djinn which cover strong enough type systems, like Haskell's. From this, I can conclude that Djinn covers a useful fragment of Haskell, and you can conclude that I don't use Djinn very often.

Friday, November 28, 2008

Axes of the lambda cube

Can't see the math behind the Greek? For reference, here are the appropriate Greek letters for each abstraction mechanism introduced along each axis of the lambda cube. For completeness, lambda abstraction is also included on its own, even though it appears in all axes.

Each cell begins with an example type, followed by an example term of that type. Either the term, the type, or both (...or neither!) will feature a new Greek abstraction letter, introducing a variable that can be used to the right of the period.

These new symbols extend the syntax of the calculus, in a way that is made precise in the last lines of the cell. Those lines extend the basic grammar using a functional notation which models variable introductions more precisely than a context-free grammar could.

Even though some cells introduce more than one new abstraction symbol, each cell only introduces one new kind of application. The syntax for application is always just juxtaposition; the syntactic types of the juxtaposed constructs should make it clear which application is meant. The last line of each cell extends the grammar to support this new kind of application, and that line is what makes the axis unique. (For readers who prefer Haskell to the raw calculi, a rough Haskell rendition of the axes follows the table.)

function types
(A → B) →
  A → B

λf:(A → B).
  λx:A. f x


(→) : type → type → type
λ : type → (term → term) → term
(•) : term → term → term

dependent types
(Πx:Nat. P x) →
  Πy:Nat. P y

λf:(Πx:Nat. P x).
  λy:Nat. f y


Π : type → (term → type) → type
(•) : type → term → type

polymorphic types
(∀A. Nat → A) →
  ∀B. Nat → B

λf:(∀A. Nat → A).
  ΛB. λy:Nat. f B y


∀ : (type → type) → type
Λ : (type → term) → term
(•) : term → type → term

parametric types
(A → B → (P : * → * → *) A B) →
  A → B → P A B

λf:(A → B → P A B).
  λx:A. λy:B. f x y


(→) : kind → kind → kind
(•) : ∀⟨type⟩:kind. (type → ⟨type⟩) → type → ⟨type⟩
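Here is the promised Haskell rendition, using Int in place of Nat. This is my own illustration, not part of the table above, and only three of the four axes translate directly; the dependent-types axis has no direct Haskell counterpart.

    {-# LANGUAGE RankNTypes #-}

    -- Function types: the term-level (→) axis.
    apply :: (a -> b) -> a -> b
    apply f x = f x

    -- Polymorphic types: the ∀ axis. The Λ abstraction and the
    -- term-to-type application are left implicit by Haskell.
    poly :: (forall a. Int -> a) -> forall b. Int -> b
    poly f = f

    -- Parametric types: p has kind * -> * -> *, and "p a b" is a
    -- type-to-type application.
    parametric :: (a -> b -> p a b) -> a -> b -> p a b
    parametric f x y = f x y

    -- Dependent types (the Π axis) cannot be written directly in Haskell.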

Thursday, November 27, 2008

Testing Chrome Overlays

Writing a Firefox extension involves writing a so-called "chrome overlay", which is an XML description of the interface changes you wish to make.

I'm in the process of learning how to write overlays. That means I make mistakes. Lots of them. They're completely normal stepping stones in the process of learning, and I'm totally prepared to spend lots of time doing nothing but making mistakes. But while I'm doing this, I'd rather not spend lots of time on non mistake-related tasks. Non-mistakes reduce the number of mistakes I can make in one day, and thus increase the number of days spent learning.

If, in addition, the non mistake-related task is repetitive and impossible to automate, then I'm likely to get angry. I'm really angry about reinstalling in-progress Firefox extensions right now. Or at least I was, before I found a way to avoid doing it.

My solution is to add two new buttons to my bookmarks toolbar: one for the chrome I want to overlay, and one which applies my overlay over it. Don't click those links; just look at the URLs to understand the trick, and then only if you're also trying to learn overlays. For everybody else, well... just wait a little bit. The extension I'm working on is going to be really interesting.

...if my attention is not caught by something else before I complete it, that is.

Sunday, November 02, 2008

I want to be a knowledge engineer!

The best thing about being a student is that your job is to learn. Except it isn't really a job at all, since I'm the one paying the university, not the other way around.

The sad thing about being a student is that it cannot last. Once you've done enough learning, you're expected to go out there into the industrial world and use that knowledge. My current plan is to work in the mornings, raising enough money to pay for my classes, I mean my autonomous learning (I'm a grad student), in the afternoons. I prefer this to filling out grant application forms.

But I've stumbled upon this fictional-sounding job title, "knowledge engineer", which sounds too good to be true. According to that article, a knowledge engineer acts as an intermediary between domain experts and programmers. So the domain expert does the hard job of finding and reading papers and arguing with other experts and performing experiments and keeping up with the latest research. This process apparently turns experts into brains floating in a vat of brains, stripped of their ability to communicate with the outside world.

Somewhere in this outside world lies a programmer, who is also a domain expert of sorts: an expert in program construction. Thus he lives in his own vat, separate from the domain expert's. The programmer vat is equipped with wheels, and it travels from vat to vat trying to simplify the jobs of the other brains. Many of them perform dull and automatable tasks every day without realizing that a computer could do it for them, freeing them to do more arguing and reading and experimenting. So the programmers offer to write them useful new tools, and the experts agree to explain all of the little details which the programmers might need, and the experts blabber on and on and on and the programmers can't take it anymore. I just need to know whether measuring an increased atmospheric pressure leads to predicting a higher or a lower temperature! Who cares about the details of the moving air masses and the process of cloud formation and the reason altitude messes with our measurement?

I do. I mean the knowledge engineer does; I just wish I was a knowledge engineer. When the programmer vat approaches the expert vat, the knowledge engineer's brain jumps from one vat to the other, and begins to ask questions. He does care about the process of cloud formation because he's naturally curious, and maybe it's not going to help with the problem at hand, or maybe clouds, pressure and temperature are all intimately linked in some complicated way which the knowledge engineer just can't wait to discover. Then he jumps back into the programmer vat and explains the parts which turned out to matter to the problem at hand, using words which make sense to the programmers, because deep down, he's one of them. The programmer vat is his own travelling home, the one with the cables floating everywhere and the squeaky mouse-like wheels. Domain experts just love the knowledge engineer, because he cares about their stuff, and programmers also love the knowledge engineer, because he speaks their language.

Anybody out there who wants to love me?