No, I am not talking about the kind of knowledge and information you should keep to yourself.

By discrete knowledge I mean ex post knowledge in the context of a discrete information structure. And by a discrete information structure I mean a partition of a probability space into events with positive probability.

I am going to describe the distinction between ex post knowledge and an ex ante information structure.

There is an underlying probability space \((\Omega, \mathcal{F}, P)\), where the elements of \(\Omega\) are states of the world, and the sets in \(\mathcal{F}\) are events.

Information is represented by a partition \(\Pi \subset \mathcal{F}\) of \(\Omega\) into events with positive probability, or by the sigma-algebra \(\sigma(\Pi)\) generated by \(\Pi\). Because the events in \(\Pi\) have positive probability, there can be at most countably many of them, which implies that \(\sigma(\Pi)\) simply consists of all unions of elements of \(\Pi\) plus the empty set. One can recover \(\Pi\) from \(\sigma(\Pi)\) by picking those nonempty sets in \(\sigma(\Pi)\) that are minimal with respect to inclusion.
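To make the recovery of \(\Pi\) from \(\sigma(\Pi)\) concrete, here is a minimal Python sketch with a toy four-state example of my own (the states and cells are illustrative assumptions, not from the text):

```python
from itertools import combinations

# Toy example: Omega = {1, 2, 3, 4}, partition cells {1, 2}, {3}, {4}.
Pi = [frozenset({1, 2}), frozenset({3}), frozenset({4})]

# sigma(Pi): all unions of cells of Pi, including the empty union.
sigma_Pi = {frozenset().union(*cells)
            for r in range(len(Pi) + 1)
            for cells in combinations(Pi, r)}

# Recover Pi from sigma(Pi): the nonempty sets that are minimal
# with respect to inclusion.
recovered = {A for A in sigma_Pi
             if A and not any(B and B < A for B in sigma_Pi)}

assert recovered == set(Pi)
assert len(sigma_Pi) == 2 ** len(Pi)  # every union of cells, plus the empty set
```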

In this discrete setup there is supposed to be some particular state \(\omega\) in \(\Omega\) which is *the true state* of the world.

An event *happens* if it contains the true state. Otherwise it does not happen. Only events that happen can possibly be known.

The idea that an agent with information partition \(\Pi\) and information sigma-algebra \(\sigma(\Pi)\) knows an event \(A\) if the true state is \(\omega\) can be formalized like this: say that \(\Pi\) *knows \(A\) at \(\omega\)* if there is some event \(D\) in \(\Pi\) with \(\omega \in D \subset A\), or equivalently, there is some event \(D\) in \(\sigma(\Pi)\) with \(\omega \in D \subset A\). Here and in what follows, the agent is identified with his or her information structure.
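The "knows at \(\omega\)" test is a one-line check over the cells of \(\Pi\). A minimal Python sketch, again with a four-state example of my own:

```python
# Toy example: partition cells {1, 2}, {3}, {4} of Omega = {1, 2, 3, 4}.
Pi = [frozenset({1, 2}), frozenset({3}), frozenset({4})]

def knows_at(Pi, A, omega):
    """Pi knows A at omega: some cell D of Pi has omega in D and D a subset of A."""
    return any(omega in D and D <= A for D in Pi)

A = frozenset({1, 2, 3})
assert knows_at(Pi, A, 1)                      # the cell {1, 2} lies inside A
assert not knows_at(Pi, frozenset({1, 3}), 1)  # no cell containing 1 fits in {1, 3}
```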

Define the event \(K(\Pi,A)\) that \(\Pi\) knows the event \(A\) as:

$$\begin{eqnarray*}
K(\Pi,A)
& = & \{ \omega \in \Omega : \Pi \mbox{ knows } A \mbox{ at } \omega \} \\
& = & \bigcup \{ D \in \Pi : D \subset A \}
\end{eqnarray*}$$

Since the set \(K(\Pi,A)\) is a union of cells in \(\Pi\), it is in \(\sigma(\Pi)\), and in particular, it is an event (it is in \(\mathcal{F}\)).

Since \(K(\Pi,A) \subset A\), the event \(A\) can only be known in states of the world where it actually happens.
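The operator is easy to compute on a finite example. A minimal Python sketch of my own (the states and cells are assumptions for illustration):

```python
# Toy example: partition cells {1, 2}, {3}, {4} of Omega = {1, 2, 3, 4}.
Pi = [frozenset({1, 2}), frozenset({3}), frozenset({4})]

def K(Pi, A):
    """K(Pi, A): the union of the cells D of Pi contained in A."""
    return frozenset().union(*(D for D in Pi if D <= A))

A = frozenset({1, 2, 3})
assert K(Pi, A) == frozenset({1, 2, 3})  # both {1, 2} and {3} fit inside A
assert K(Pi, frozenset({1, 3})) == frozenset({3})
assert K(Pi, A) <= A                     # known only where it happens
```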

An event \(A\) belongs to \(\sigma(\Pi)\) if and only if \(K(\Pi,A) = A\). So, \(\sigma(\Pi)\) consists of those events that will necessarily be known by \(\Pi\) if they happen.

If an event in \(\sigma(\Pi)\) does not happen, then its complement happens. Since the complement is in \(\sigma(\Pi)\), the fact that it happens will be known by \(\Pi\). In other words, if an event is in \(\sigma(\Pi)\), then \(\Pi\) will know whether it happens or not.
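The fixed-point characterization can be checked exhaustively on a small example. A minimal Python sketch of my own, enumerating every subset of a four-state \(\Omega\):

```python
from itertools import combinations

# Toy example: partition cells {1, 2}, {3}, {4} of Omega = {1, 2, 3, 4}.
Pi = [frozenset({1, 2}), frozenset({3}), frozenset({4})]
Omega = frozenset({1, 2, 3, 4})

def K(Pi, A):
    """Union of the cells D of Pi contained in A."""
    return frozenset().union(*(D for D in Pi if D <= A))

# sigma(Pi): all unions of cells, including the empty union.
sigma_Pi = {frozenset().union(*cells)
            for r in range(len(Pi) + 1)
            for cells in combinations(Pi, r)}

# A is in sigma(Pi) if and only if K(Pi, A) == A.
for r in range(len(Omega) + 1):
    for c in combinations(Omega, r):
        A = frozenset(c)
        assert (A in sigma_Pi) == (K(Pi, A) == A)
```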

Say that \(\Pi\) is *informed about* \(A\) if \(A\) belongs to \(\sigma(\Pi)\).

So this is the distinction between ex post knowledge and ex ante information:

- Knowledge depends on the true state of the world. The events that the agent knows are those that contain the cell in \(\Pi\) that contains the true state, or equivalently, those that contain an event from \(\sigma(\Pi)\) that contains the true state.
- Information is independent of the true state. The events that the agent is informed about are those in \(\sigma(\Pi)\), or equivalently, those that are unions of events from \(\Pi\). These are the events such that the agent will know whether they happen or not, no matter what the true state is.
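The "knows whether" property in the second bullet can also be checked directly: if \(A\) is a union of cells, then \(K(\Pi,A)\) and \(K(\Pi,\Omega \setminus A)\) together cover \(\Omega\). A minimal Python sketch of my own:

```python
# Toy example: partition cells {1, 2}, {3}, {4} of Omega = {1, 2, 3, 4}.
Pi = [frozenset({1, 2}), frozenset({3}), frozenset({4})]
Omega = frozenset({1, 2, 3, 4})

def K(Pi, A):
    """Union of the cells D of Pi contained in A."""
    return frozenset().union(*(D for D in Pi if D <= A))

A = frozenset({1, 2, 3})  # a union of cells, so Pi is informed about A
assert K(Pi, A) | K(Pi, Omega - A) == Omega   # Pi always knows whether A happens

B = frozenset({1, 3})     # not a union of cells
assert K(Pi, B) | K(Pi, Omega - B) != Omega   # at states 1 and 2 neither is known
```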

That was easy.

Next steps:

- Define what it means that somebody knows that somebody else knows something, and define the concept of common knowledge
- Define knowledge when the information structure is given by a general sigma-algebra that is not necessarily generated by a partition into events with positive probability

#### Reference

Lars Tyge Nielsen: “Common Knowledge, Communication, and Convergence of Beliefs,” *Mathematical Social Sciences* 8 (1984), 1–14.