# Category Archives: Uncategorized

## Why is the derivative of a generating function meaningful?

A generating function is a formal power series where the sequence of coefficients is the object of interest. Usually the point of using them is that operations on the power series (like addition, multiplication, and differentiation) correspond to meaningful operations on your sequence of coefficients.

I’ve known about the gist of generating functions for a while, and I’d always thought that the fact that differentiation was meaningful was just a magical coincidence (for some reason, addition and multiplication being meaningful didn’t seem as surprising to me).

But recently, Nathan Linger pointed out to me that over in the functional programming community, they have what I think is a very satisfying answer to this question (he said he got it from sigfpe’s blog, but I’m not sure what post, maybe this one?).

### Combinatorial Species

I actually find the general concept of generating functions surprisingly slippery.  André Joyal’s notion of a combinatorial species makes things more concrete for me.  A combinatorial species $\mathbf{F}$ is simply a functor from $\mathbb{B}$ to itself, where $\mathbb{B}$ is the category of finite sets and bijections.  The idea is that, for a finite set $S$, $\mathbf{F}(S)$ should be considered as the set of all structures of a certain kind on $S$.

For example: $\mathbf{LinearOrder}$ is the functor taking a set $S$ to the set of all linear orders of elements of $S$, so $\mathbf{LinearOrder}(\{1, 2, 3\}) = \{1 < 2 < 3, 1 < 3 < 2, 2 < 1 < 3, 2 < 3 < 1, 3 < 1 < 2, 3 < 2 < 1\}$.  Another example is $\mathbf{Tree}$ which takes $S$ to the set of all trees on elements of $S$.

Each species $\mathbf{F}$ has an associated generating function $F(x)=a_0 + a_1 x + \frac{a_2}{2!} x^2 + \frac{a_3}{3!} x^3 + \cdots$ where $a_i = |\mathbf{F}(\{1, \ldots, i\})|$ (you could use any set of cardinality $i$ instead of $\{1, \ldots, i\}$).
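To make this concrete, here is a small brute-force check (a Python sketch I'm adding for illustration; it is not part of the species formalism itself) computing the coefficients $a_i$ for $\mathbf{LinearOrder}$ by direct enumeration:

```python
from itertools import permutations
from math import factorial

# a_i = |LinearOrder({1, ..., i})|: count the linear orders on an
# i-element set by enumerating permutations directly.
def count_linear_orders(i):
    return sum(1 for _ in permutations(range(i)))

coeffs = [count_linear_orders(i) for i in range(6)]
print(coeffs)
assert coeffs == [factorial(i) for i in range(6)]
```

Since $a_i = i!$, the generating function of $\mathbf{LinearOrder}$ is $\sum_i \frac{i!}{i!}x^i = \sum_i x^i$.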

It is now possible to make precise the fact that meaningful operations on species correspond to meaningful operations on generating functions.  For example, if you define addition on species by letting $(\mathbf{F}+\mathbf{G})(S)$ be the disjoint union of $\mathbf{F}(S)$ and $\mathbf{G}(S)$, then the generating function of the sum of two species is the sum of the generating functions.  Similarly, multiplication on species is defined by letting $(\mathbf{F}\cdot\mathbf{G})(S)$ be the set of pairs of an element of $\mathbf{F}(S_1)$ and an element of $\mathbf{G}(S_2)$ for some partition $S_1\sqcup S_2=S$, and it corresponds to multiplication of generating functions.
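Here is a quick numerical check of the product rule (again a Python sketch added for illustration, taking $\mathbf{F}=\mathbf{G}=\mathbf{LinearOrder}$): counting pairs of linear orders over all partitions $S_1\sqcup S_2 = S$ agrees with the coefficient rule for multiplying exponential generating functions, $c_n = \sum_k \binom{n}{k} a_k b_{n-k}$.

```python
from itertools import combinations, permutations
from math import comb, factorial

# (F.G)(S): pick a subset S1 of S (S2 is its complement), then a
# linear order on S1 and a linear order on S2.  Count directly.
def count_product(n):
    total = 0
    for k in range(n + 1):
        for s1 in combinations(range(n), k):
            s2 = [x for x in range(n) if x not in s1]
            total += (sum(1 for _ in permutations(s1))
                      * sum(1 for _ in permutations(s2)))
    return total

a = [factorial(i) for i in range(5)]  # coefficients of LinearOrder
# EGF multiplication convolves coefficients with binomial weights.
c = [sum(comb(n, k) * a[k] * a[n - k] for k in range(n + 1)) for n in range(5)]
assert [count_product(n) for n in range(5)] == c
print(c)
```

Both sides come out to $(n+1)!$, matching $\left(\frac{1}{1-x}\right)^2 = \sum_n (n+1)x^n$.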

As mentioned before, there’s also an operation on species which corresponds to differentiation of the generating function.  It corresponds to adding a “hole” to the structure.  That is, $\mathbf{F}'(S) = \mathbf{F}(S\sqcup\{\star\})$, and $\mathbf{F}'$ takes a bijection on $S$ to the output of $\mathbf{F}$ on the same bijection extended to fix $\star$.

This is a very powerful fact.  For example, a linear order with a hole is the same thing as the product of two linear orders (the one to the left of the hole and the one to the right of the hole).  If $F$ is the generating function of $\mathbf{LinearOrder}$, this gives us the equation $F' = F^2$.  Since we also know that $a_0$ should be 1, this gives us $F(x) = \frac{1}{1-x} = 1 + x + x^2 + x^3 + \cdots = 1 + x + \frac{2!}{2!}x^2 + \frac{3!}{3!}x^3 + \cdots$.  Thus, if we didn’t know it already, we can deduce that there are $n!$ linear orders on $n$ elements.
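The equation $F' = F^2$ can also be checked directly at the level of coefficients (a Python sketch added here for illustration): differentiating an exponential generating function shifts its coefficient sequence ($a_n \mapsto a_{n+1}$), while squaring convolves it with binomial weights, and with $a_n = n!$ the two sides agree.

```python
from math import comb, factorial

a = [factorial(n) for n in range(10)]  # coefficients of LinearOrder
for n in range(9):
    lhs = a[n + 1]  # coefficient of x^n/n! in F'
    rhs = sum(comb(n, k) * a[k] * a[n - k] for k in range(n + 1))  # same, in F^2
    assert lhs == rhs
print("F' = F^2 checked on coefficients up to n = 8")
```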

Furthermore, there is a notion of composition of species corresponding to composition of generating functions, where intuitively $(\mathbf{F}\circ\mathbf{G})(S)$ is a partition of $S$ together with a $\mathbf{G}$-structure on each element of the partition, and a $\mathbf{F}$-structure on the partition as a whole.

### So why is differentiation meaningful?

What is the connection between differentiation of species and ordinary differentiation?  Two main ways of approaching ordinary differentiation are through limits or through infinitesimals.  There don’t seem to be any limits around, so let’s focus on infinitesimals.

Although you can make infinitesimals precise in various ways, most people who think about calculus using infinitesimals do so in a non-rigorous way.  Here’s one common non-rigorous principle:

Let $\delta\neq 0$ be such that $\delta^2 = 0$.  Then for any (smooth, real-valued) $f$ and real $x$, $f(x + \delta) = f(x) + \delta f'(x)$.

Of course, there is no such $\delta$ in the standard definition of the real numbers.

Although this is non-rigorous, if we can translate the same non-rigorous principle over to species, that would give a good account of why differentiation shows up in generating functions.  The main insight is what the meaning of $\delta$ should be.  The condition that $\delta\neq 0$ for species simply means that there should be at least one $\delta$-structure on some set.  The condition that $\delta^2 = 0$ is subtler: it means that you can’t put a $\delta$-structure on two sets at the same time.  As in the case with real numbers, this is impossible, but the reasoning works anyway so we’ll go with it.

Now let’s think about what $f(x + \delta)$ means.  An $(x+\delta)$-structure on a set is either an $x$-structure or a $\delta$-structure.  By the definition of composition of species, an $f(x+\delta)$-structure on a set $S$ is a partition of $S$ together with an $(x+\delta)$-structure on each element of the partition and an $f$-structure on the partition as a whole.  This means that each element of the partition is given either an $x$-structure or a $\delta$-structure.

But, by the infinitesimal nature of $\delta$, at most one element of the partition can be given a $\delta$-structure.  That means there are two cases: either 0 elements of the partition have a $\delta$-structure or 1 does.  If 0 elements do, every element of the partition has an $x$-structure, and the species is $f(x)$.  On the other hand, if one does, we can describe the structure by saying which element has the $\delta$-structure (a hole in the $f$-structure) and what the $\delta$-structure was.  That’s $\delta f'(x)$.  Therefore $f(x + \delta) = f(x) + \delta f'(x)$!

Note that if we didn’t know what the formula for a species with a hole was, we could go through the preceding informal reasoning and deduce that it should correspond to the derivative!  I don’t know if others are convinced, but I find this quite satisfying.  To be totally clear, as mentioned before, this argument came from sigfpe’s blog and I don’t know if it has history before that.


## Complexity to Simplicity and Back Again

Generalizing a problem can make the solution simpler or more complicated, and it’s often hard to predict which beforehand. Here’s a mini-example of a puzzle and four generalizations which alternately make it simpler or more complicated.


## Generating Functions as Cardinality of Set Maps

There is a class of all cardinalities $\mathbf{Card}$, and it
has elements $0$, $1$ and operations $+$, $\cdot$, and so forth defined on it. Furthermore, there is a map
$\mathrm{card}\colon\mathbf{Set}\to\mathbf{Card}$ which takes
sets to cardinalities such that $\mathrm{card}(A\times B)=\mathrm{card}(A)\cdot\mathrm{card}(B)$ (and so on).

Ordinary generating functions can be thought of entirely analogously
with set maps $\mathbf{Set}\to\mathbf{Set}$ replacing sets:
There is a class $\mathbf{GenFunc}$ with elements $0$,
$1$, and operations $+$, $\cdot$. Furthermore,
there is a (partial) map $\mathrm{genFunc}\colon (\mathbf{Set}\to\mathbf{Set})\to\mathbf{GenFunc}$ such that $\mathrm{genFunc}(F\times G)=\mathrm{genFunc}(F)\cdot\mathrm{genFunc}(G)$ (and so on). Here, $F\times G$ is defined by $(F\times G)(S)=F(S)\times G(S)$. Other operations on set maps (like disjoint union) are similarly defined pointwise.

(This is probably obvious and trivial to anyone who actually works
with generating functions, but it only occurred to me recently, so I
thought I’d write a blog post about it.)

The class $\mathbf{GenFunc}$ is in fact a set, and is just the set of formal power series $\{\sum_{i\geq 0} a_i z^i\mid a_i\in\mathbb{N}\}$. The partial map $\mathrm{genFunc}$ takes $F$ to $\sum_{i\geq 0} a_i z^i$ just in case $F$ is “canonically isomorphic” (a notion I’ll leave slippery and undefined but that can be made precise) to the map $Z\mapsto \coprod_{i\geq 0} \{1,2,\ldots,a_i\}\times Z^i$, where $\coprod$ indicates disjoint union.
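For a finite $Z$, the canonical form makes the cardinality of $F(Z)$ exactly the power series evaluated at $|Z|$.  Here is an illustrative Python sketch with a made-up map of that shape, with $a_1 = 2$ and $a_3 = 1$ (so its generating function should be $2z + z^3$):

```python
from itertools import product

# A set map in canonical form Z -> ({1,2} x Z) ⊔ Z^3, tagged so the
# disjoint union really is disjoint.
def F(Z):
    Z = list(Z)
    part1 = [("i1", lbl, z) for lbl in (1, 2) for z in Z]  # {1,2} x Z
    part3 = [("i3",) + t for t in product(Z, repeat=3)]    # Z^3
    return part1 + part3

for n in range(6):
    assert len(F(range(n))) == 2 * n + n ** 3  # evaluate 2z + z^3 at z = |Z|
print("|F(Z)| agrees with 2z + z^3")
```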

That provides a semantics for ordinary generating functions. Furthermore, this semantics has a number of features beyond those of cardinality. For example, in addition to respecting $\coprod$ and $\times$, $\mathrm{genFunc}$ respects composition.

A similar semantics can be provided for exponential generating functions, but it takes a little more work. In particular, we have to single out $\lbrack0,1\rbrack$ as a distinguished set. Let $\mathbf{Meas}$ be the smallest set containing all measurable subsets of $\lbrack0,1\rbrack^n$ for any finite $n$ and which is closed under finite products, countable disjoint unions, and products with sets $\{1,\ldots,n\}$ for finite $n\geq 1$.

We can define the measure $\mu$ of all sets in $\mathbf{Meas}$ by extending Lebesgue measure in the obvious way (taking the product of a set with $\{1,\ldots,n\}$ will multiply the measure by $n$). Furthermore, notice that, by construction, every element of every set $M$ in $\mathbf{Meas}$ is a tuple which (after flattening) has all of its elements either natural numbers or elements of $\lbrack 0,1\rbrack$ and has at least one element of $\lbrack 0,1\rbrack$. Therefore, we can define a pre-ordering on $M$ by comparing the corresponding first elements that are in $\lbrack 0,1\rbrack$.

The point of all that is that, for $M\in\mathbf{Meas}$, we can form the set $M^n_{<}=\{\langle m_1,\ldots,m_n\rangle\mid m_1 < \cdots < m_n\}$ which will again be in $\mathbf{Meas}$ and its measure will be $\mu(M)^n/n!$. The corresponding statement with cardinality is not true since you have to worry about the case when elements in the tuple are equal ($\mathrm{card}(X^n_{<}) = \mathrm{card}(X)(\mathrm{card}(X)-1)\cdots(\mathrm{card}(X)-n+1)/n!$) but the set of tuples that have duplicates has measure 0, so by working with measure, we can get the equality we want.
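The finite analogue is easy to check by brute force (a Python sketch, not from the original post): for a finite totally ordered $X$, the strictly increasing $n$-tuples number $\binom{|X|}{n}$, the falling-factorial formula above, rather than $|X|^n/n!$.

```python
from itertools import product
from math import comb

# Count strictly increasing n-tuples over X = {0, ..., m-1}.
def count_increasing(m, n):
    return sum(1 for t in product(range(m), repeat=n)
               if all(a < b for a, b in zip(t, t[1:])))

for m in range(7):
    for n in range(4):
        # card(X^n_<) = |X|(|X|-1)...(|X|-n+1)/n! = C(|X|, n)
        assert count_increasing(m, n) == comb(m, n)
print("card(X^n_<) = C(|X|, n) verified for small cases")
```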

Finally, let $\mathbf{ExpGenFunc}$ be the set of formal power series $\{\sum_{i\geq 0} (a_i/i!)\, z^i\mid a_i\in\mathbb{N}\}$. The partial map $\mathrm{expGenFunc}$ takes $F$ to $\sum_{i\geq 0} (a_i/i!)\, z^i$ just in case $F$ is “canonically isomorphic” to the map $Z\mapsto \coprod_{i\geq 0} \{1,2,\ldots,a_i\}\times Z^i_{<}$ for all $Z$ in $\mathbf{Meas}$. Just as before, this map respects $+$, $\cdot$, composition, etc.

Note that the exponential generating functions are usually explained via labeled objects and some sort of relabeling operation. This approach weasels out of that by observing that the event that there was a label collision has probability 0, so you can just ignore it.


## Mathematica and Quantifier Elimination

In 1931, Alfred Tarski proved that the real ordered field $(\mathbb{R}, 0, 1, +,\times, <)$ allows quantifier elimination: i.e., every first-order formula is equivalent to one with no quantifiers.  This is implemented in Mathematica’s “Resolve” function.

The Resolve function is called like Resolve[formula,domain] where domain gives the domain for the quantifiers in formula.  Since we’ll always be working over $\mathbb{R}$ in this blog post, let’s set that to be the default at the start.

```mathematica
In[1]:= Unprotect[Resolve]; Resolve[expr_] := Resolve[expr, Reals]; Protect[Resolve];
```

Now let’s see what quantifier elimination lets you do!

(A couple of caveats first though: First, many of these algorithms are extremely inefficient. Second, I had some trouble exporting the Mathematica notebook, so I basically just copy-and-pasted the text. Apologies if it’s unreadable.)

## How many solutions?

Let’s start with just existential formulas.  By eliminating quantifiers from $\exists x\, \phi(x,a)$, we can tell what the conditions on $a$ are for there to be at least one solution $x$.  For example:

```mathematica
In[2]:= Resolve[Exists[x, x^2 + b x + c == 0]]
Out[2]= -b^2 + 4 c <= 0
```
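As a sanity check on that output (a Python sketch I'm adding; Resolve itself of course needs none): $x^2 + bx + c$ opens upward with minimum $c - b^2/4$ at the vertex $x = -b/2$, so a real root exists exactly when that minimum is $\leq 0$, i.e. when $-b^2 + 4c \leq 0$, and when the condition holds the quadratic formula exhibits a root.

```python
import math

def real_root(b, c):
    """Return a real root of x^2 + b x + c, or None if there is none."""
    disc = b * b - 4 * c
    if disc < 0:           # minimum c - b^2/4 is positive: no real root
        return None
    return (-b + math.sqrt(disc)) / 2

for b in range(-5, 6):
    for c in range(-5, 6):
        r = real_root(b, c)
        if -b * b + 4 * c <= 0:                   # Resolve's condition
            assert abs(r * r + b * r + c) < 1e-9  # the root really works
        else:
            assert r is None
print("Resolve's condition matches the quadratic formula on a grid")
```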

This just tells you that there’s a solution to the quadratic exactly when the discriminant $b^2 - 4c$ is non-negative.  Let’s turn this into a function:

```mathematica
In[3]:= atLeastOneSolution[formula_, variable_] := Resolve[Exists[variable, formula]]
```

Now we can verify that cubics always have solutions:

```mathematica
In[4]:= atLeastOneSolution[x^3 + b x^2 + c x + d == 0, x]
Out[4]= True
```

Now suppose we wanted to find when something has at least two solutions.  Just like resolving $\exists x\,\phi(x)$ told us when there was at least one, $\exists x_1,x_2\,(x_1\ne x_2\wedge \phi(x_1)\wedge\phi(x_2))$ will be true exactly when there are at least two.

This is just as easy to program as atLeastOneSolution was, except that when we create the variables $x_1$ and $x_2$ we have to be careful to avoid capture (what if one of those two already appeared in $\phi$?).  Mathematica provides a function called Unique where if you call Unique[], you’re guaranteed to get back a variable that’s never been used before.  With that we can define atLeastTwoSolutions correctly (edit: actually, this isn’t right if the passed-in variable is also bound in the passed-in formula):

```mathematica
In[5]:= atLeastTwoSolutions[formula_, v_] :=
  With[{s1 = Unique[], s2 = Unique[]},
    Resolve[Exists[{s1, s2},
      s1 != s2 && (formula /. v -> s1) && (formula /. v -> s2)]]]
```
We can check this by verifying that quadratics have two solutions when the discriminant is strictly positive:

```mathematica
In[6]:= atLeastTwoSolutions[x^2 + b x + c == 0, x]
Out[6]= -b^2 + 4 c < 0
```

Here’s the condition for the cubic to have at least two solutions:

```mathematica
In[7]:= atLeastTwoSolutions[x^3 + b x^2 + c x + d == 0, x]
Out[7]= c < b^2/3 && 1/27 (-2 b^3 + 9 b c) - 2/27 Sqrt[b^6 - 9 b^4 c + 27 b^2 c^2 - 27 c^3] <= d <= 1/27 (-2 b^3 + 9 b c) + 2/27 Sqrt[b^6 - 9 b^4 c + 27 b^2 c^2 - 27 c^3]
```

Note that (and I believe Resolve always does this) the condition $c < b^2/3$ given first is sufficient for the quantity under the later square root to be positive:

```mathematica
In[8]:= Resolve[ForAll[{b, c}, c < b^2/3 ⇒ b^6 - 9 b^4 c + 27 b^2 c^2 - 27 c^3 > 0]]
Out[8]= True
```
It’s clear that we can determine when there are at least $n$ solutions by a very similar trick: just resolve $\exists x_1,\ldots,x_n\,\bigl(\bigwedge_{i\ne j} x_i\ne x_j\bigr)\wedge\bigl(\bigwedge_i \phi(x_i)\bigr)$.

We’ll first write a helper function to produce the conjunction of inequalities we’ll need:

```mathematica
In[9]:= noneEqual[vars_] := And @@ Flatten[Table[If[s1 === s2, True, s1 != s2], {s1, vars}, {s2, vars}]]

In[10]:= noneEqual[{x, y, z}]
Out[10]= x != y && x != z && y != x && y != z && z != x && z != y
```

And now we’ll write atLeastNSolutions:

```mathematica
In[11]:= atLeastNSolutions[formula_, v_, n_] :=
  With[{sList = Array[Unique[] &, n]},
    Resolve[Exists[sList,
      noneEqual[sList] && (And @@ Table[formula /. v -> s, {s, sList}])]]]
```

Given atLeastNSolutions, we can easily write exactlyNSolutions:

```mathematica
In[12]:= exactlyNSolutions[formula_, v_, n_] :=
  BooleanConvert[atLeastNSolutions[formula, v, n] && ! atLeastNSolutions[formula, v, n + 1]]
```

I used BooleanConvert instead of Resolve since there won’t be any quantifiers left in the formula, so we just have to do Boolean simplifications.

```mathematica
In[13]:= exactlyNSolutions[x^3 + b x^2 + c x + d == 0, x, 2]
Out[13]= ! 1/27 (-2 b^3 + 9 b c) - 2/27 Sqrt[b^6 - 9 b^4 c + 27 b^2 c^2 - 27 c^3] < d < 1/27 (-2 b^3 + 9 b c) + 2/27 Sqrt[b^6 - 9 b^4 c + 27 b^2 c^2 - 27 c^3] && 1/27 (-2 b^3 + 9 b c) - 2/27 Sqrt[b^6 - 9 b^4 c + 27 b^2 c^2 - 27 c^3] <= d <= 1/27 (-2 b^3 + 9 b c) + 2/27 Sqrt[b^6 - 9 b^4 c + 27 b^2 c^2 - 27 c^3] && c < b^2/3

In[14]:= exactlyNSolutions[x^2 + b x + c == 0, x, 1]
Out[14]= -b^2 + 4 c <= 0 && -b^2 + 4 c >= 0
```

This last calculation shows that a quadratic has exactly one solution exactly when the discriminant is both nonnegative and nonpositive (as you can see, there is no guarantee that the formula will be in its simplest form).

We now have a way to test whether a formula with one free variable has $n$ solutions for specific values of $n$, since exactlyNSolutions will return either True or False if you quantify out the only variable.  For example:

```mathematica
In[15]:= p = x^4 - 3 x^3 + 1
Out[15]= 1 - 3 x^3 + x^4

In[16]:= Plot[Evaluate[p], {x, -3, 3}]
```

```mathematica
In[17]:= exactlyNSolutions[p == 0, x, 2]
Out[17]= True
```

It would be nice, however, to have a function which will just tell you how many solutions such a formula has.

In the single-variable polynomial case, we could just try exactlyNSolutions for $n=0,1,2,\ldots$ until we find the right $n$.  However, there might not be finitely many solutions if the formula involves inequalities or more than one variable (e.g., $x^2 + y^2 = 1$ has infinitely many solutions).

How can we tell if a formula has infinitely many solutions?  Well, the fact that $\mathbb{R}$ has quantifier elimination implies that $\{x\in\mathbb{R}\mid \phi(x)\}$ for $\phi$ with just $x$ free must be a finite union of points and open intervals (since the only atomic quantifier-free formulas are $t_1=t_2$ and $t_1<t_2$).  Therefore $\{x\in\mathbb{R}\mid\phi(x)\}$ is infinite iff it contains a non-empty open interval, i.e., iff $\exists a,b\,(a<b\wedge\forall x\,(a<x<b\Rightarrow\phi(x)))$.

```mathematica
In[18]:= infinitelyManySolutions[formula_, v_] :=
  With[{a = Unique[], b = Unique[]},
    Resolve[Exists[{a, b}, a < b && ForAll[v, a < v < b ⇒ formula]]]]
```

To test:

```mathematica
In[19]:= infinitelyManySolutions[Exists[y, x^2 + y^2 == 1], x]
Out[19]= True
```

Now we can write numberOfSolutions and be assured that it will always (theoretically) terminate for any formula with a single free variable:

```mathematica
In[20]:= numberOfSolutions[formula_, v_] :=
  If[infinitelyManySolutions[formula, v], Infinity,
    Block[{n = 0}, While[! exactlyNSolutions[formula, v, n], n++]; n]]
```

A few examples:

```mathematica
In[21]:= numberOfSolutions[p == 0, x]
Out[21]= 2

In[22]:= numberOfSolutions[p > x^2, x]
Out[22]= ∞

In[23]:= numberOfSolutions[p > x^6 + 5, x]
Out[23]= ∞

In[24]:= numberOfSolutions[p > x^6 + 6, x]
Out[24]= 0

In[26]:= Plot[{p, x^6 + 5, x^6 + 6}, {x, -1.6, -1}, PlotLegend -> {HoldForm[p], x^6 + 5, x^6 + 6}, LegendPosition -> {1, 0}, ImageSize -> Large]
```

Up to now, all our functions have taken single variables, but we can accommodate tuples of variables as well.  First, we’ll define the analogue of noneEqual to produce the formula asserting that none of the given tuples are equal (recall that two tuples are unequal iff some pair of corresponding components is unequal):

```mathematica
In[27]:= noTuplesEqual[tuples_] := And @@ Flatten[Table[If[t1 === t2, True, Or @@ MapThread[#1 != #2 &, {t1, t2}]], {t1, tuples}, {t2, tuples}]]

In[28]:= noTuplesEqual[{{x[1], y[1]}, {x[2], y[2]}}]
Out[28]= (x[1] != x[2] || y[1] != y[2]) && (x[2] != x[1] || y[2] != y[1])
```

Now we can add rules to our old function to deal with tuples of variables as well:

```mathematica
In[29]:= atLeastNSolutions[formula_, variables_List, n_] :=
  With[{sList = Array[Unique[] &, {n, Length[variables]}]},
    Resolve[Exists[Evaluate[Flatten[sList]],
      noTuplesEqual[sList] &&
        (And @@ Table[formula /. MapThread[Rule, {variables, tuple}], {tuple, sList}])]]];
```

We can extend infinitelyManySolutions by observing that a formula $\phi(x_1,\ldots,x_n)$ has infinitely many solutions iff some projection $\exists x_1,\ldots,x_{i-1},x_{i+1},\ldots,x_n \phi(x_1,\ldots,x_n)$ does.

```mathematica
In[30]:= infinitelyManySolutions[formula_, variables_List] :=
  Or @@ Table[
    infinitelyManySolutions[Exists[Select[variables, ! (# === v) &], formula], v],
    {v, variables}]

In[33]:= ContourPlot[{x^2 + y^3 - 2, x^2 + y^2/4 - 2}, {x, -3, 3}, {y, -3, 3}]
```

```mathematica
In[34]:= exactlyNSolutions[x^2 + y^3 == 2 && x^2 + y^2/4 == 2, {x, y}, 2]
Out[34]= False
```

(There are actually four solutions.  This example of a set of equations for which it’s difficult to tell how many solutions there are by graphing is from Stan Wagon.)

## Solving Polynomial Equations

In the last section, we saw how to use quantifier elimination to find out how many roots there are.  But how can you actually find the roots?

In a certain sense, you’ve already found them just when you identified how many there are!  To “find” a root in this sense, you just introduce a new symbol for it, and have some means for answering questions about its properties.  Given some property $\phi(x)$, if you want to determine whether it holds of the 6th root of some polynomial $p$ with 17 roots, then you just have to decide $\exists x_1,\ldots,x_{17}\,(x_1<\cdots<x_{17})\wedge\bigwedge_{i=1}^{17} p(x_i)=0\wedge\phi(x_6)$.

We can implement this with a function withSpecificRoot that takes a variable, the formula it’s supposed to be a solution to, which of the roots it’s a solution to, the total number of roots, and a formula in which you want to use this root:

```mathematica
In[35]:= withSpecificRoot[variable_, rootFormula_, whichRoot_, totalRoots_, formula_] :=
  With[{roots = Array[Unique[] &, totalRoots]},
    Resolve[Exists[Evaluate[roots~Join~{variable}],
      Less[Sequence @@ roots] && variable == roots[[whichRoot]] &&
        (And @@ Table[rootFormula /. variable -> root, {root, roots}]) && formula]]]
```

We can tell where various roots are with respect to already-known real numbers:
```mathematica
In[36]:= withSpecificRoot[x, x^2 - 3 == 0, 1, 2, x < 3]
Out[36]= True

In[37]:= withSpecificRoot[x, p == 0, 1, 2, x < 1]
Out[37]= True

In[38]:= withSpecificRoot[x, p == 0, 2, 2, x < 1]
Out[38]= False
```

We can also compute relationships between roots like $\sqrt{2}+\sqrt{3}=\sqrt{5+2\sqrt{6}}$:

```mathematica
In[39]:= withSpecificRoot[sqrt6, sqrt6^2 == 6, 2, 2,
  withSpecificRoot[lhs, lhs^2 == 5 + 2 sqrt6, 2, 2,
    withSpecificRoot[sqrt3, sqrt3^2 == 3, 2, 2,
      withSpecificRoot[sqrt2, sqrt2^2 == 2, 2, 2,
        lhs == sqrt3 + sqrt2]]]]
Out[39]= True
```
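A floating-point spot check of the same identity (a Python sketch, just as reassurance; the Resolve computation above is the real verification):

```python
import math

# (sqrt2 + sqrt3)^2 = 5 + 2*sqrt6, and both sides are positive,
# so the two expressions should agree numerically.
lhs = math.sqrt(2) + math.sqrt(3)
rhs = math.sqrt(5 + 2 * math.sqrt(6))
print(lhs, rhs)
assert abs(lhs - rhs) < 1e-12
```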

That’s all I have time for now, but I hope to write another blog post on the subject soon!


## A Logical Interpretation of Some Bits of Topology

Edit: These ideas are also discussed here and here (thanks to Qiaochu Yuan: I found out about those links by him linking back to this post).

Although topology is usually motivated as a study of spatial structures, you can interpret topological spaces as being a particular type of logic, and give a purely logical, non-spatial interpretation to a number of bits of topology.

This seems like one of those facts that was obvious to everyone else already, but I’ll write a quick blog post about it anyway.


## The Spectrum From Logic to Probability

Let $\Omega$ be the set of propositions considered by some rational logician (call her Sue).  Further, suppose that $\Omega$ is closed under the propositional connectives $\vee$, $\wedge$, $\neg$.  Here are two related but different preorders on $\Omega$:

1. $p\leq q$ if $p$ logically entails $q$.
2. $p \preceq q$ if Sue considers $q$ at least as likely to be true as $p$ is.

Let $\sim$ be the equivalence relation defined by $p \sim q$ iff $p \leq q \wedge q \leq p$ and let $\approx$ similarly be defined by $p \approx q$ iff $p\preceq q\wedge q\preceq p$.

Then we know what type of structure $\Omega/{\sim}$ is: since we’re assuming classical logic in this article, it’s a Boolean algebra.  What type of structure is $\Omega/{\approx}$?

We can at least come up with a couple of examples.  Since Sue is a perfect logician, it must be that if $p\leq q$, then $p\preceq q$.  If Sue is extremely conservative, she may decline to offer opinions about whether one proposition is more likely to be true than another except when she’s forced to by logic.  In this case, $\Omega/{\approx}$ is equal to $\Omega/{\sim}$ and therefore again a Boolean algebra.

In the other extreme, Sue may have opinions about every pair of propositions, making $\preceq$ a total ordering.  A principal example of this is where $\Omega/{\approx}$ is isomorphic to a subset of $[0,1]$ and Sue’s opinions about the propositions were generated by her assigning a probability $P(p)\in [0,1]$ to every proposition $p$.

What’s in between on the spectrum from logic to probability?  Are there totally ordered structures not isomorphic to $[0,1]$ or a subset? More ambitiously: every Boolean algebra has operations $\vee$, $\wedge$, $\neg$, while $[0,1]$ has operations ${+}$, $\times$, $(x\mapsto 1-x)$ which play similar roles in the computation of probabilities (note that $+$ is partial on $[0,1]$).  How are these related and does every structure on the spectrum from logic to probability have analogous operations?

These structures (i.e., structures of the form $\Omega/{\approx}$ for some acceptable $\preceq$ in a sense to be defined below) were called scales and defined and explored in a very nice paper by Michael Hardy.


## Topology and First-Order Modal Logic

The normal square root function can be considered to be multi-valued.

Let’s momentarily accept the heresy of saying that the square root of a negative number is $0$, so that our function will be total.

How can we represent the situation of this branching “function” topologically?