FIRST AND SECOND TALK:
TITLE: General properties of Bayesian learning as statistical inference determined by conditional expectations I, II
ABSTRACT: Step by step we build up a general framework for studying the properties of general Bayesian learning, where “general Bayesian learning” means inferring a state from another state that is regarded as evidence, and where the inference consists in conditionalizing on the evidence using the conditional expectation determined by a reference probability measure representing the background subjective degrees of belief of the Bayesian Agent performing the inference. If every state is Bayes accessible from some other state defined on the same set of random variables, then the set of states is called weakly Bayes connected. It is shown that the set of states is not weakly Bayes connected if the probability space is standard. The set of states is called weakly Bayes connectable if, given any state, the probability space can be extended in such a way that the given state becomes Bayes accessible from some state in the extended space. It is shown that probability spaces are weakly Bayes connectable. Since conditioning via the theory of conditional expectations includes both Bayes’ rule and Jeffrey conditionalization as special cases, the results presented substantially generalize some results obtained earlier for Jeffrey conditionalization.
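To see concretely why Jeffrey conditionalization contains Bayes’ rule as a special case, here is a minimal sketch on a toy finite probability space (the four-atom space, the partition, and the numerical weights are illustrative assumptions, not taken from the talks): Jeffrey conditionalization reweights the cells of an evidence partition while keeping the conditional probabilities inside each cell fixed, and when the evidential weights put probability 1 on a single cell it collapses to ordinary Bayes-rule conditioning.

```python
from fractions import Fraction

# Illustrative prior over four atoms of a finite probability space.
prior = [Fraction(1, 8), Fraction(3, 8), Fraction(1, 4), Fraction(1, 4)]

def jeffrey(prior, partition, new_weights):
    """Jeffrey conditionalization: set the probability of each partition
    cell E_i to the evidential weight q_i, keeping conditional probabilities
    within each cell fixed: p'(a) = q_i * p(a) / p(E_i) for atoms a in E_i."""
    posterior = [None] * len(prior)
    for cell, q in zip(partition, new_weights):
        p_cell = sum(prior[a] for a in cell)
        for a in cell:
            posterior[a] = q * prior[a] / p_cell
    return posterior

partition = [(0, 1), (2, 3)]

# Uncertain evidence: the cells get new weights 3/4 and 1/4.
post = jeffrey(prior, partition, [Fraction(3, 4), Fraction(1, 4)])

# Certain evidence (weight 1 on the first cell): this is exactly
# Bayes-rule conditioning p(.|E) on the event E = {0, 1}.
bayes = jeffrey(prior, partition, [Fraction(1), Fraction(0)])
```

Here `bayes` equals `[1/4, 3/4, 0, 0]`, which is the Bayes-rule posterior p(·|E) for E = {0, 1}, while `post` is a genuinely Jeffrey-style posterior that assigns positive probability outside E.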
TITLE: How much can a Bayesian agent learn?
ABSTRACT: The Bayes Blind Spot of a Bayesian Agent is the set of probability measures on a Boolean algebra that are absolutely continuous with respect to the Agent's background probability measure (prior) on the algebra and that the Agent cannot learn by conditionalizing, no matter what (possibly uncertain) evidence he has about the elements of the Boolean algebra. We investigate the size of the Bayes Blind Spot in both the finite and the infinite case.
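A minimal finite illustration (my own sketch, not an example from the talk) of why unreachable measures arise at all: from a fixed prior on a finite Boolean algebra, strict Bayes-rule conditioning p(·|E) can produce only finitely many posteriors (one per event E of positive prior probability), whereas the probability measures absolutely continuous with respect to the prior form a continuum, so most of them cannot be reached this way. The three-atom space and its prior below are illustrative assumptions.

```python
from fractions import Fraction
from itertools import chain, combinations

# Illustrative prior over the three atoms of a finite Boolean algebra.
atoms = [0, 1, 2]
prior = {0: Fraction(1, 2), 1: Fraction(1, 3), 2: Fraction(1, 6)}

def bayes_posterior(prior, event):
    """Strict Bayes-rule conditioning p(.|E) on an event E of positive prior
    probability, returned as a tuple of atom probabilities."""
    p_event = sum(prior[a] for a in event)
    return tuple(prior[a] / p_event if a in event else Fraction(0)
                 for a in sorted(prior))

# Every event is a nonempty set of atoms; collect all distinct posteriors.
events = chain.from_iterable(combinations(atoms, r)
                             for r in range(1, len(atoms) + 1))
posteriors = {bayes_posterior(prior, set(e)) for e in events}

# At most 2^3 - 1 = 7 posteriors are reachable by Bayes' rule, while the
# measures absolutely continuous w.r.t. this prior form a continuum.
print(len(posteriors))
```

Jeffrey conditionalization enlarges the reachable set considerably, which is why quantifying the Blind Spot under general (possibly uncertain) evidence, as the talk does, is the substantive question.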