Buffalo proof [Feb. 6th, 2005|01:21 pm]
Ben FrantzDale
[music |Laguardia, Sleepover (116 Overture)]

Continued from my previous post, a proof that there are infinitely many buffalo sentences:

Since I can't diagram sentences easily, let's use this notation: subject[modifier].verb(direct object). For brevity, let B be the animal, and b be the verb.

So we have:

  1. .b() // Imperative: Hey you, bamboozle someone!
  2. B.b() // Declarative: There are bison who bamboozle.
  3. B.b(B) // There are bison who bamboozle bison.
  4. B[B.b(~)].b() // Bison whom other bison have bamboozled tend to bamboozle. (Let the ~ mean that the thing B.b(~) is modifying is the argument of b(). That is, people[people.know(~)] is equivalent to “People whom people know.”)
  5. B[B.b(~)].b(B) // Bison whom other bison have bamboozled tend to bamboozle bison.
  6. B[B[B.b(~)].b(~)].b() // etc.
  7. B[B.b(~)].b(B[B.b(~)]) // etc.
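For concreteness, here is a small Python sketch of this notation (my own encoding, not part of the original post): an NP is a B with an optional B[X.b(~)] modifier, and a sentence is an optional subject, the verb b, and an optional object. Rendering every B and b as “buffalo” reproduces the word counts above.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class NP:
    """B, optionally modified by a reduced relative clause B[X.b(~)],
    where X is the inner subject (itself an NP)."""
    modifier: Optional["NP"] = None

    def words(self) -> list:
        out = ["buffalo"]                  # the head B
        if self.modifier is not None:
            out += self.modifier.words()   # the inner subject X
            out.append("buffalo")          # the inner verb b
        return out

@dataclass
class Sentence:
    subject: Optional[NP] = None   # None for the imperative .b()
    obj: Optional[NP] = None

    def words(self) -> list:
        out = self.subject.words() if self.subject else []
        out.append("buffalo")              # the top-level verb b
        if self.obj is not None:
            out += self.obj.words()
        return out

# 1. .b()            2. B.b()            4. B[B.b(~)].b()
assert len(Sentence().words()) == 1
assert len(Sentence(NP()).words()) == 2
assert len(Sentence(NP(NP())).words()) == 4
# 5. B[B.b(~)].b(B)  6. B[B[B.b(~)].b(~)].b()
assert Sentence(NP(NP()), NP()).words() == ["buffalo"] * 5
assert len(Sentence(NP(NP(NP()))).words()) == 6
```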

Now there are three transformations I see in use here:

(i) . ⇒ B. // This is only useful once; it gets you out of imperative but isn't useful after that.
(ii) b() ⇒ b(B) // This is also only useful once, because only the top-level b has an empty argument; the others all act on their parent.
(iii) B ⇒ B[B.b(~)]

So (ii) adds one word and leaves you with a new B leaf; (iii) adds two words (a B and a b) and also leaves you with a new B leaf. As base cases, consider the sentences of length two and three above (#2 and #3). Both have B leaves. Applying (iii) to #2 gives #4. Applying (iii) to #3 gives #5. Applying (iii) to #4 gives #6. Because (iii) preserves B leaves, ∃ sentence of length n ⇒ ∃ sentence of length n+2 ∀ n ∈ { 2, 3, 4, … }. But by example ∃ sentences of lengths 1, 2, and 3.

∴ ∃ sentence of length n ∀ n ∈ { 1, 2, 3, 4, … }.
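The induction can also be checked mechanically. A sketch (the bookkeeping is mine) that treats each transformation purely as a change in word count — (i) once, (ii) at most once, (iii) any number of times:

```python
def reachable_lengths(limit: int) -> set:
    """Word counts reachable from the imperative .b() (length 1) via
    (i) once (+1 word), (ii) at most once (+1 word), and
    (iii) repeatedly (+2 words each time)."""
    lengths = {1, 2, 3}    # .b(), then B.b() via (i), then B.b(B) via (ii)
    frontier = {2, 3}
    while frontier:
        step = {n + 2 for n in frontier if n + 2 <= limit}  # apply (iii)
        frontier = step - lengths
        lengths |= step
    return {n for n in lengths if n <= limit}

# Every length from 1 up is reachable, as the proof claims.
assert reachable_lengths(50) == set(range(1, 51))
```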

However, because #3 has two B leaves, #5 could be either B[B.b(~)].b(B) or B.b(B[B.b(~)]), so the parsing is not unique. Hence, as if it's any surprise, sentences of length five and up can be ambiguous.
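A quick parse count bears this out. Counting only the declarative shapes used above (subject NP, verb, optional object NP; imperatives ignored — this encoding is mine), length five is the first length with more than one parse:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def np_parses(k: int) -> int:
    # NP -> B (one word)  |  B[NP.b(~)] (an NP wrapped in B ... b: two extra words)
    if k == 1:
        return 1
    if k < 3:
        return 0
    return np_parses(k - 2)

def sentence_parses(n: int) -> int:
    # Declarative only: subject NP + verb, with an optional object NP.
    no_object = np_parses(n - 1)
    with_object = sum(np_parses(a) * np_parses(n - 1 - a)
                      for a in range(1, n - 1))
    return no_object + with_object

# Lengths 2-4 parse uniquely; length 5 has the two parses noted above.
assert [sentence_parses(n) for n in range(2, 6)] == [1, 1, 1, 2]
```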

From: amoken
2005-02-06 09:41 pm (UTC)
Ben, you are a horrendous geek. I love it!!!
From: audaibnjad
2005-02-06 09:45 pm (UTC)
From: cubetime
2005-02-08 08:19 pm (UTC)
If I'm reading your notation correctly, the number 6 you have is congruent to:

"The cheese the rat the cat chased ate was moldy."

That's probably why you had trouble parsing it. This sentence is at the boundary of how many things most normal humans can hold in relation to each other in a single thought.
From: benfrantzdale
2005-02-09 04:50 am (UTC)
That sounds like the right parsing for number six. Now that I think about it, I think it's not a matter of how many things you can hold in relation to each other in a thought, but rather how many things you can hold in relation to each other on your stack. That is, “The cheese the rat ate was moldy – the rat the cat chased.” is fairly clear, because the recursion has been eliminated.

Hmm... It seems as though your brain can optimize recursive tail calls. That is, “The dog chased the car the man bought from his boss.” is far clearer than “The dog the man bought from his boss chased the car.” (which means something different, of course). I wonder how this is in other languages with different subject-object-verb orderings. It seems an object-subject-verb language might lend itself to elaborate verbs.

This might be related to why many people find functional languages difficult and why some code reads very clearly and other code looks Like. Many. Short. Sentences. Without. Any. Sense. Of. Flow.
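That stack picture can be made concrete with a toy model (entirely mine, not from the thread): push a subject when it appears, pop it when its verb arrives, and compare the peak depth for center-embedded versus right-branching orderings.

```python
def max_pending(tokens: list) -> int:
    """Peak number of subjects still waiting for their verbs -- a crude
    proxy for the working-memory (stack) load of reading the sentence."""
    depth = peak = 0
    for t in tokens:
        if t == "S":
            depth += 1             # a subject opens a pending clause
            peak = max(peak, depth)
        elif t == "V":
            depth -= 1             # its verb closes the innermost clause
    return peak

# Center-embedded: "The cheese / the rat / the cat / chased / ate / was moldy"
assert max_pending(["S", "S", "S", "V", "V", "V"]) == 3
# Right-branching: "The dog chased / the car / the man bought ..."
assert max_pending(["S", "V", "S", "V", "S", "V"]) == 1
```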
From: cubetime
2005-02-09 05:43 am (UTC)
Incidentally, "police" works as well. What other words are there that fit the model?