
Methods of Data Analysis
Working with probability distributions

Week 4

1 Motivation

One of the key problems in non-parametric data analysis is to create a good model of a generating probability distribution, assuming we are given as data a finite sample from that distribution.

Obviously this problem is ill-posed for continuous distributions: with finite data, there is no way to distinguish between (or exclude) distributions that are not restricted to be smooth. The question then becomes how to formulate the problem so that a finite data set can be used to choose the “best” distribution among those that are “sufficiently smooth”, where, intuitively, more data should allow us to consider less smooth distributions. A popular way of constructing such continuous distribution estimates is kernel density estimation (KDE). Another method is to describe the data with a parametric family of distributions that is sufficiently rich so that, in the limit of large data, it can describe an arbitrary distribution. A well-known example from this class is the Mixture of Gaussians. Another versatile framework for modeling both continuous and discrete distributions of potentially high dimensionality given a finite sample is the maximum entropy (ME) approach. Here, we look for the most random distribution (i.e., the one with maximum entropy) that exactly reproduces a chosen set of statistics which can reliably be estimated from data. The assumption of maximum entropy is a formal version of Occam’s razor: one chooses distributions of a particular form that contain a minimal amount of structure that is nevertheless sufficient to explain selected aspects of observations (constraints).

2 Goals

• Introduce kernel density estimation for 1D and 2D continuous data. Illustrate the optimal choice of kernel smoothing.

• Apply KDE to Sachs data.

• Introduce Mixtures of Gaussians.

• Introduce maximum entropy models. Give examples of several well-known maximum entropy distributions. Show the ME model for a multivariate Gaussian distribution and for a distribution over binary variables with pairwise constraints.

• Create ME models for subsets of the Sachs data.


3 Data

Single-cell measurements of network activity in a MAP signaling pathway of human T cells; for a primary reference on this dataset, consult [1]. Signaling nodes were immunostained in different color channels such that a FACS readout of intensity in each channel corresponds to the activity level of a given node. Each measurement therefore corresponds to an 11-dimensional readout for a single cell. Cells were imaged in 9 conditions, at 600 cells per condition. A condition was a particular combination of chemicals that either stimulate or repress various parts of the signaling network. In the original publication, the interaction diagram between the 11 nodes was reconstructed from this dataset. The nodes in the file are arranged as follows: RAF, MEK, PLCgamma, PIP2, PIP3, ERK, AKT, PKA, PKC, P38, JNK.

4 Quantities, definitions, details

Kernel density estimation (KDE). Consider a set of D-dimensional data vectors $x_t$, $t = 1, \ldots, T$, that we assume have been drawn from a continuous probability distribution $P(x)$ that we don’t have direct access to, but would like to model. Formally, one could construct an empirical approximation to the distribution, $\hat{P}(x) = T^{-1}\sum_{t=1}^{T}\delta(x - x_t)$, where the Dirac delta functions represent infinitely sharp peaks centered at the T sample points. It is clear that this is a very bad model for the distribution if we have a prior expectation that the distribution should be smooth. However, we can retain the ansatz and write $\hat{P}(x) = T^{-1}\sum_{t=1}^{T} K_\theta(x - x_t)$, where $K_\theta$ is a particular kernel function optionally parametrized by θ. A common choice is a Gaussian kernel, $K_C(x) = (2\pi)^{-D/2}(\det C)^{-1/2}\exp\left(-\tfrac{1}{2} x^T C^{-1} x\right)$, where C is the covariance matrix. Intuitively, each data point is now smeared into a ball whose size is set by C, and each single data point induces a nonzero probability across the whole domain of x. Choosing the variances of the Gaussian involves a tradeoff: too small a variance brings us towards non-smooth distributions (delta functions in the limit), and too large a variance will smooth over the interesting features in the distribution. How does one set the parameters θ of a kernel (and which kernel does one choose)? There is no universal answer, but there are (at least) two general approaches: (i) assuming the data is generated by some distribution (e.g., a Gaussian), one can compute the smoothing kernel parameter θ that minimizes some measure of error (e.g., the L2 norm between the true distribution and the one obtained from KDE, or the Kullback-Leibler divergence between the two distributions); (ii) one can use cross-validation to set the smoothing.

In cross-validation, which works well independently of assumptions about the generating distribution, the kernel parameters θ are chosen in the following way: (i) split the data into training and testing sets; (ii) use the training set to build KDE distribution models for a variety of parameters θ; (iii) for each value of θ, use the obtained model to estimate the likelihood of the testing data under that model; (iv) choose the value of θ that maximizes the likelihood on the test data.
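To make the recipe concrete, here is a minimal sketch of likelihood cross-validation for a 1D Gaussian KDE, written in Python with numpy/scipy; the synthetic data, the σ grid, and all variable names are illustrative choices, not part of the original notes.

    import numpy as np
    from scipy.stats import norm

    def kde_test_loglik(train, test, sigma):
        # P_hat(x) = (1/T) sum_t N(x; x_t, sigma^2), evaluated at every test point
        diffs = test[:, None] - train[None, :]            # shape (n_test, n_train)
        log_kernels = norm.logpdf(diffs, scale=sigma)     # log of each Gaussian bump
        log_p = np.logaddexp.reduce(log_kernels, axis=1) - np.log(train.size)
        return log_p.mean()                               # mean test log likelihood

    rng = np.random.default_rng(0)
    train = rng.normal(size=500)                          # stand-in for real training data
    test = rng.normal(size=100)                           # held-out points

    sigmas = np.arange(0.05, 1.55, 0.05)
    scores = [kde_test_loglik(train, test, s) for s in sigmas]
    print("cross-validated sigma:", sigmas[int(np.argmax(scores))])

The same scaffold generalizes to the 2D case by replacing the 1D Gaussian kernel with a product of Gaussians in the two coordinates.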

Mixtures of Gaussians. Another often-used approach to distribution modeling is the mixture of Gaussians model. Here, we look for a model of the form $P(x|\theta) = \sum_{i=1}^{M} w_i\, G(x; \mu_i, C_i)$, where the G are N-dimensional Gaussian distributions with means $\mu_i$ and covariance matrices $C_i$, and the model is written as a weighted average (with weights $w_i$ such that $\sum_{i=1}^{M} w_i = 1$) of M Gaussians. The parameters of the model $\theta = \{w_i, \mu_i, C_i\}$ (that is, $M - 1$ weights plus $M \times N$ parameters for the means and $M \times N(N+1)/2$ parameters for the covariances) can be estimated using maximum likelihood / Bayesian inference (next week), or a version of the expectation-maximization (EM) algorithm. An interesting feature is that the mixture model can be viewed as an instance of a hidden (latent) variable model: for every data point $x_t$, there is a hidden node $z_t$ (the “cause”), taking on integer values that specify from which mixture component the data point was drawn.

We will not use mixture models here, but it is instructive to discuss the similarities and differences to KDE, where the distribution is also a superposition of Gaussian functions. With mixtures, one decides on the number of Gaussians that best describes the data (e.g., by cross-validation), and often distributions can be described by just a few Gaussians (e.g., a two-peaked distribution might be well described by a two-Gaussian mixture); the point here is to estimate well the mixing weights and the (possibly high-dimensional) means and covariance parameters. In contrast, in KDE the number of Gaussians is given by the number of data points, their width is chosen by cross-validation (and usually symmetry is assumed), and each mean is fixed by a data point. For further information and an implementation of mixture-of-Gaussians inference in Matlab, see gmdistribution in the Statistics toolbox.
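For readers working in Python rather than Matlab, an analogous fit can be sketched with scikit-learn’s GaussianMixture class; the two-component toy data below is only an illustration and stands in for a real data matrix of shape (T, D).

    import numpy as np
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(1)
    # toy two-peaked data set (T samples, D = 1 here)
    data = np.concatenate([rng.normal(-2.0, 0.5, size=(300, 1)),
                           rng.normal(+2.0, 1.0, size=(300, 1))])

    # fit an M = 2 component mixture by expectation-maximization
    gmm = GaussianMixture(n_components=2, covariance_type="full").fit(data)
    print("mixing weights w_i:", gmm.weights_)
    print("means mu_i:", gmm.means_.ravel())
    print("mean log likelihood per sample:", gmm.score(data))

In practice the number of components M would itself be chosen by cross-validation, exactly as for the KDE bandwidth above.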

Maximum entropy framework. The groundwork for maximum entropy and its relation to statistical physics was laid by the work of ET Jaynes [2]. We start by choosing a set of M functions (“operators”) on the state of the system, $\hat{f}_\mu(x)$, $\mu = 1, \ldots, M$, and estimate the empirical averages of these operators over the data, i.e., $\langle \hat{f}_\mu \rangle_{\rm data} = T^{-1}\sum_{t=1}^{T} \hat{f}_\mu(x_t)$. These expectations are also known as “constraints.” Our model $\hat{P}_{(f_1,\ldots,f_M)}(x)$ (or $\hat{P}$ for short) will be constructed according to the following two conditions: (i) the expectation values of the M functions over the model distribution will be exactly equal to the constraints; (ii) the distribution will be maximally unstructured otherwise, or specifically, it will have maximum entropy. Note again that the maximum entropy framework doesn’t specify which constraints to use, but merely tells us what the form of the distribution should be given the chosen constraints. To derive the maximum entropy distribution, one forms a functional optimization problem:

$$
\mathcal{L} = -\sum_x \hat{P}(x)\,\log \hat{P}(x) \;-\; \Lambda\Big(\sum_x \hat{P}(x) - 1\Big) \;-\; \sum_\mu g_\mu\Big(\sum_x \hat{P}(x)\,f_\mu(x) - \langle f_\mu\rangle_{\rm data}\Big), \qquad (1)
$$

where the first term will maximize the entropy, the second (with conjugate Lagrange multiplier Λ) will enforce the normalization of the distribution, and the last sum will make sure that all the expectation values over the model distribution equal the empirical constraints. One solves $\delta\mathcal{L}/\delta\hat{P}(x) = 0$ to find:

$$
\hat{P}(x) = \frac{1}{Z}\,\exp\Big(\sum_\mu g_\mu f_\mu(x)\Big), \qquad (2)
$$

where Z is the normalization constant and the multipliers (“couplings”) need to be set such that

$$
\langle f_\mu(x)\rangle_{\hat{P}} \;\equiv\; \frac{\partial \log Z}{\partial g_\mu} \;=\; \langle f_\mu(x)\rangle_{\rm data}. \qquad (3)
$$

Note that this is a set of M nonlinear equations for $\{g_\mu\}$. It can be shown that for maximum entropy problems these equations have a solution, which (for all reasonable cases of non-degenerate constraints) is unique. Nevertheless, solving the problem by means of solving a set of nonlinear equations is often hard (especially because it requires the evaluation of the partition sum, or normalization constant, Z), except for the case of a small number of constraints and low-dimensional distributions.
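For a small, explicitly enumerable state space, Eq. (3) can indeed be solved directly with a generic root finder. The sketch below (Python with scipy; the toy state space, the constraint functions, and the target values are invented purely for illustration) forms the residual ⟨f_µ⟩ under the model minus ⟨f_µ⟩ from the data as a function of the couplings g_µ and solves for its zero.

    import numpy as np
    from scipy.optimize import root

    # toy setup: x takes integer values 0..5, and we constrain <x> and <x^2>
    states = np.arange(6)
    features = np.stack([states, states**2]).astype(float)   # f_mu(x), shape (M, n_states)
    target = np.array([2.0, 5.5])                             # <f_mu>_data (illustrative values)

    def residual(g):
        # model distribution P_hat(x) = exp(sum_mu g_mu f_mu(x)) / Z
        logw = g @ features
        p = np.exp(logw - logw.max())                         # subtract max for numerical stability
        p /= p.sum()
        return features @ p - target                          # <f_mu>_Phat - <f_mu>_data

    sol = root(residual, x0=np.zeros(2))
    print("couplings g_mu:", sol.x, " converged:", sol.success)

For high-dimensional problems this direct route becomes impractical precisely because evaluating Z (here a sum over only six states) is no longer cheap.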

Observe that Eqs. (2, 3) are identical in form to the Boltzmann distribution of statistical physics, with the partition function (normalizing constant) Z playing the role of the generating function for the observables. Indeed, maximum entropy can be viewed as inverse statistical mechanics. Specifically, the Boltzmann distribution of statistical physics is the maximum entropy distribution over microstates where the sole constraint is on the average energy (and thus $P(x) = Z^{-1}\exp(-\beta E(x))$, where $\beta^{-1} = k_B T$ is now seen as the Lagrange multiplier enforcing the constraint on the average energy). For physicists, this is precisely the canonical ensemble (fixing the mean energy).

Another approach to finding the couplings is to view their inference as maximum likelihood inference (see next week): our model is $\hat{P}(x|\{g_\mu\})$ and we find the set of $\{g_\mu\}$ that maximizes the log likelihood of the data, $\log L = \sum_t \log \hat{P}(x_t|\{g_\mu\})$. Gradient ascent on the log likelihood leads to the iterative learning scheme:

$$
g_\mu^{q+1} = g_\mu^{q} - \alpha\left(\langle f_\mu\rangle_{\hat{P}} - \langle f_\mu\rangle_{\rm data}\right), \qquad (4)
$$

where the learning rate α must slowly be decreased as the scheme iterates (q denotes the iteration index), and the $\{g_\mu\}$ converge to their correct values. Note that here one only needs expectation values over the model distribution $\hat{P}$, which can be obtained by Monte Carlo sampling for high-dimensional distributions, without the need to explicitly compute Z. Maxent problems can thus be viewed as a specific subset of maximum likelihood problems, in which the constraints can be reproduced exactly (not just in a best-fit manner), and where the form of the distribution is “derived” from the chosen constraints.
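A minimal sketch of the learning rule in Eq. (4), again in Python and for a system small enough that the model expectations can be computed by exact enumeration; in high dimensions this step would instead use Monte Carlo estimates. The toy feature set (single sites plus all pairs of three binary spins) and the target values are placeholders.

    import numpy as np
    from itertools import product

    def fit_maxent(F, target, steps=5000):
        # F: feature matrix f_mu(x) of shape (n_states, M); target: <f_mu>_data
        g = np.zeros(F.shape[1])
        for q in range(1, steps + 1):
            logw = F @ g
            p = np.exp(logw - logw.max())
            p /= p.sum()                            # model distribution P_hat over all states
            model_avg = p @ F                       # <f_mu>_Phat by exact enumeration
            alpha = q ** -0.5                       # slowly decreasing learning rate
            g -= alpha * (model_avg - target)       # Eq. (4)
        return g

    # three binary spins; features are sigma_i and sigma_i * sigma_j (Ising form)
    N = 3
    states = np.array(list(product([0, 1], repeat=N)), dtype=float)
    pairs = [(i, j) for i in range(N) for j in range(i + 1, N)]
    F = np.hstack([states,
                   np.stack([states[:, i] * states[:, j] for i, j in pairs], axis=1)])

    target = np.array([0.5, 0.4, 0.6, 0.20, 0.30, 0.25])   # illustrative <sigma_i>, <sigma_i sigma_j>
    g = fit_maxent(F, target)
    print("fields h_i and couplings J_ij:", np.round(g, 3))

The same scaffold applies to homework problem 5 below, with N = 11 and the constraints estimated from the binarized data.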

A particularly insightful construction in the maximum entropy framework is obtained when one builds a sequence of approximating distributions that are constrained by marginals of higher and higher order [3]. Consider for instance a distribution over binary variables, $P(\{\sigma_i\})$, where $\sigma_i \in \{0, 1\}$. Then, the first-order marginals are just $P(\sigma_i)$, which are fully specified by the mean value of each variable, $\langle\sigma_i\rangle$. One can construct the maxent distribution consistent with these means, which is the factorized (independent) distribution: $\hat{P}_{(\sigma_i)}(\{\sigma_i\}) = \prod_{i=1}^{N} P(\sigma_i)$. The next approximating distribution is the maxent distribution consistent with all second-order marginals, that is, the distributions $P(\sigma_i, \sigma_j)$ for every pair (i, j). This is equivalent to specifying all $\langle\sigma_i\rangle$ and, for all pairs, $C_{ij} = \langle\sigma_i\sigma_j\rangle - \langle\sigma_i\rangle\langle\sigma_j\rangle$. The maximum entropy distribution then must follow the ansatz $\hat{P}_{(\sigma_i,\sigma_i\sigma_j)}(\{\sigma_i\}) = Z^{-1}\exp\big(\sum_i h_i\sigma_i + \sum_{i<j} J_{ij}\sigma_i\sigma_j\big)$, i.e., the distribution has an Ising-model-like form. Constraining higher-order correlations results in exponential family distributions that contain terms with successively higher-order products in the energy function. Specifying higher and higher correlation orders stops when we constrain the N-th order marginal table (and thus the N-th order correlation), which is equal to the distribution itself, so that the model is exact. A sequence of maxent models constrained by progressively higher-order correlations thus generates a hierarchy that starts with the independent distribution and systematically captures interactions between more and more variables, until all the structure has been captured and the model is exact. Note that constructing even a pairwise model for binary variables from data is very powerful (this is called Boltzmann machine learning in the machine learning community) but numerically hard. On the other hand, often even capturing the lower-order interactions can explain very complex collective effects; recently, a number of papers built pairwise maxent models for real data sets that predicted the measured higher-order statistics of the data extremely well (note that this is a nontrivial statement for maxent models, which by construction are the most random models consistent with, say, pairwise observables).
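As a small companion to the hierarchy above, the empirical constraints of the pairwise model are just the sample means and pairwise averages of the binary variables; one possible way to compute them from a T × N binary data matrix is sketched below (Python; the random matrix is only a stand-in for real binarized data).

    import numpy as np

    def pairwise_constraints(S):
        # S: binary data matrix of shape (T, N)
        means = S.mean(axis=0)                        # <sigma_i>
        second = (S.T @ S) / S.shape[0]               # <sigma_i sigma_j> for all i, j
        iu = np.triu_indices(S.shape[1], k=1)
        return means, second[iu]                      # keep only pairs with i < j

    S = (np.random.default_rng(2).random((1200, 11)) > 0.5).astype(float)
    means, pair_avgs = pairwise_constraints(S)
    covs = pair_avgs - np.outer(means, means)[np.triu_indices(11, k=1)]   # C_ij as defined above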

Maximum entropy models also have an interesting interpretation in that they separate correlations (observable statistics; constraints) from the true underlying interactions (terms in the exponential model). Concretely, in the case of the binary pairwise maximum entropy model (Ising-like model), the correlations $C_{ij}$ are simple to estimate from data, but map in a potentially highly complicated way to the matrix of underlying interactions, $J_{ij}$. Maxent models can be viewed as an instance of an inverse problem: in (statistical) physics one often assumes the probabilistic model and the values of the interactions in the energy function, and then computes the values of the observables. The maxent approach starts with measured observables and infers what interactions should be put into the energy function to explain the measurements.

5 Study literature

• Notes on nonparametric statistics, including kernel density estimation http://www.ssc.wisc.edu/~bhansen/718/NonParametrics1.pdf

• Chapter 22 of MacKay’s book (and other chapters therein), for mixture models http://www.inference.phy.cam.ac.uk/itprnn/book.pdf

• A list of online resources for maximum entropy models http://homepages.inf.ed.ac.uk/lzhang10/maxent.html

6 Homework

1. Silverman’s rule of thumb for selecting the Gaussian kernel parameter σ is $\sigma \approx 1.06\,\Omega\, n^{-1/5}$. This assumes that the data is generated by an underlying Gaussian distribution with standard deviation Ω, and that we want to minimize the least-squares error between the real distribution and the KDE model. Let’s check whether the same or a similar scaling applies in the likelihood cross-validation scenario. Draw, e.g., n = 100, 200, 500, 1000, 2000 “training” points from a Gaussian with unit variance (Ω = 1), as well as 100 “testing” points, 500 times at each n. Set a range of σ to try out, e.g., σ = 0.05, 0.1, 0.2, 0.3, . . . , 1.5. For each random draw of points, construct a KDE model using the training points, and compute the log likelihood of that model on the testing points, for all values of σ. Average the log likelihood across the 500 random draws for each σ, and plot it for various n as a function of σ to see if the average log likelihood is maximized at a particular σ(n) for different data sizes n. To smoothly find the value of σ at which the cross-validated log likelihood is maximized, one approximate rule-of-thumb way is to first fit a parabola or similar low-order polynomial to the log likelihood as a function of σ (at every n), and then to compute the optimal σ exactly from the fitted coefficients. You can alternatively use other methods of smoothing to find the optimal σ, but please describe briefly what you did. Show σ as a function of n on a log-log plot, and compare with Silverman’s rule of thumb on the same plot.

2. Implement a 2D KDE for the Sachs data. First, visualize several pairwise cross-sections through the data (look in particular at node 4 vs node 5, and node 1 vs node 3). Also look at the data when you scatterplot the log activity levels of pairs of proteins against each other. Which representation (raw vs log) do you find more useful? Using cross-validation, determine the best σ to use for making a 2D KDE model of the log activities for both pairs of variables (1 vs 3, 4 vs 5); you will have to determine the best σ separately for the different pairs of variables. Using the KDE model, visualize the joint probability distribution of each pair by making a contour plot using the optimal value of σ, as well as five-fold too large and five-fold too small values. Concretely, you can make your kernels 2D Gaussians with a diagonal covariance that has widths $\sigma_x$, $\sigma_y$ in the two orthogonal directions. Finally, to examine the effects of smoothing, discretize the x and y axes into bins of size $\sigma_x \times \sigma_y$, and estimate an empirical distribution (by counting) over this discretized domain. Over the same domain, define another discrete probability distribution, which is equal to your continuous KDE model evaluated at the bin centers (and normalized). Compute and compare the entropies of both discrete distributions.

3. Derive: (i) the continuous maximum entropy distribution with a given mean $\langle x\rangle$ for $x \in [0, \infty)$ (i.e., what is the form of the distribution, and what is the value of the Lagrange multiplier conjugate to the constraint function in terms of $\langle x\rangle$); (ii) the continuous maximum entropy distribution with a given mean $\langle x\rangle$ and a given variance $\sigma_x^2$, and, in general, for the multivariate case, with mean $\langle x\rangle$ and covariance matrix C, where $x \in (-\infty, \infty)$. For the multivariate case, a well-argued educated guess is sufficient if you do not want to carry out multidimensional integrals (note: carrying out high-dimensional Gaussian integration is very applicable to many fields, so you are encouraged to learn about it; by using several tricks it is possible to compute essentially any moment without actually doing the integration).

4. Let’s try to infer interactions from the Sachs et al. data. First, try the naive way, by computing the correlation coefficient between every pair of signaling nodes across all 5400 samples. Plot the 11 × 11 matrix of correlation coefficients. If you were to threshold that matrix using different thresholds to find the underlying interactions, could you reproduce, even approximately, the network of known interactions as reported in Ref [1]? Second, try to construct a maximum entropy model that is consistent with the means and covariances of the activation levels (e.g., the pairwise model you derived in the previous exercise). If you take the resulting matrix of interactions (specifically, their absolute values) and threshold it, does it recover the known interactions better than the naive approach? (A minimal numerical starting point for this comparison is sketched after this problem set.) What happens if you repeat the same procedure only on the first 2 conditions (1200 samples), which correspond to the signaling network driven by its “natural” stimulation? A third possible approach, using Bayes network inference, was attempted in the paper by Sachs et al. – you are welcome to read about it in the original publication.

5. Maximum entropy for binary variables. Let’s try to do maximum entropy in the case where the signaling levels are binary. To this end, take again the first two conditions, and binarize each variable such that the discretized value is 0 for samples below the median and 1 for samples above the median. This should result in a binary data matrix of dimension 11 × 1200. To build a pairwise maximum entropy model for this data, you need to infer a vector of local fields $h_i$ (11 values) and a matrix of pairwise couplings $J_{ij}$ of dimension 11 × 11 (with 11 × 10/2 free parameters due to symmetry), in a model distribution of the form $\hat{P}_{(\sigma_i,\sigma_i\sigma_j)}(\{\sigma_i\}) = Z^{-1}\exp\big(\sum_i h_i\sigma_i + \sum_{i<j} J_{ij}\sigma_i\sigma_j\big)$, such that the mean values $\langle\sigma_i\rangle$ and all pairwise terms $\langle\sigma_i\sigma_j\rangle$ of the model exactly match the data. For 11 binary variables it is possible to enumerate all $2^{11}$ states, compute the normalization constant Z and the probability of each state, and thus evaluate the expectation values of the means and pairwise terms. Use gradient ascent learning (see the introduction; you can use any other method if you know how to implement it) to learn the parameter values for h and J, decreasing the learning rate α slowly (hint: two possibilities to try are e.g. $\alpha(q) = q^{-0.5}$ or $\alpha(q) = q^{-0.25}$). Iterate the learning until the relative error on the means and covariances is below 1%. How do the interactions $J_{ij}$ in this binarized case compare to the previous exercise where you looked at the continuous activation levels? If your code for all 11 nodes together runs too slowly, try a smaller set of nodes (this is always useful for debugging your algorithm anyway); you should definitely have no problems in terms of computational time for a system of size N = 5 or 6 nodes.
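A minimal numerical starting point for the comparison in problem 4 (referenced there): for continuous variables, the maximum entropy model consistent with means and covariances is a multivariate Gaussian, so with the quadratic exponent written as $\sum_{i<j} J_{ij} x_i x_j$ the pairwise couplings are carried by the off-diagonal entries of the inverse covariance (precision) matrix, $J_{ij} = -(C^{-1})_{ij}$. The Python sketch below contrasts this with the naive correlation matrix; the random array stands in for the real 5400 × 11 matrix of (log) activation levels, and the thresholding rule is an arbitrary illustrative choice.

    import numpy as np

    # placeholder for the real (5400, 11) matrix of (log) activation levels
    data = np.random.default_rng(3).normal(size=(5400, 11))

    corr = np.corrcoef(data, rowvar=False)             # naive pairwise correlation coefficients
    J = -np.linalg.inv(np.cov(data, rowvar=False))     # Gaussian maxent couplings (off-diagonal part)
    np.fill_diagonal(J, 0.0)

    def candidate_edges(M, frac=0.2):
        # keep the strongest off-diagonal entries (top 20% by absolute value) as putative edges
        A = np.abs(M.copy())
        np.fill_diagonal(A, 0.0)
        cut = np.quantile(A[np.triu_indices_from(A, k=1)], 1.0 - frac)
        return A >= cut

    edges_naive = candidate_edges(corr)
    edges_maxent = candidate_edges(J)

Both binary matrices can then be compared, edge by edge, against the known interaction diagram from Ref [1].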

References

[1] K Sachs, O Perez, D Pe’er, D Lauffenburger, G Nolan (2005) Causal protein-signaling networks derived from multiparameter single-cell data. Science 308: 532.

[2] ET Jaynes (1957) Information theory and statistical mechanics. Phys Rev 106: 620–630.

[3] E Schneidman, S Still, MJ Berry 2nd, W Bialek (2003) Network information and connected correlations. Phys Rev Lett 91: 238701.
