Jacob and Monod

The original gene regulation diagram?

[Figure: the gene regulation diagram from Jacob & Monod (1961)]

Jacob F, Monod J. Genetic regulatory mechanisms in the synthesis of proteins. J Mol Biol. 1961 Jun;3:318-56.


Binomial GLM for ratios of read counts

For certain sequencing experiments (e.g. methylation data), one might end up with the proportion of reads at a given location that satisfy some property (e.g. ‘is methylated’) and want to test whether this ratio is significantly associated with a given variable, x.

One way to proceed would be a linear regression of ratio ~ x. However, a nucleotide covered by 100 reads is not statistically equivalent to one covered by 2 reads: the binomial distribution becomes increasingly spiked at p*n as the number of reads n increases, so 100 covering reads give us more information about the underlying ratio than 2 covering reads.
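As a quick illustration (a sketch I've added, not from the original post), the spread of an observed ratio shrinks with coverage:

p <- 0.3
# spread of the observed ratio y/n at coverage n = 2 vs n = 100
sd(rbinom(1e4, size=2, prob=p)/2)     # approx sqrt(p*(1-p)/2)  = 0.32
sd(rbinom(1e4, size=100, prob=p)/100) # approx sqrt(p*(1-p)/100) = 0.046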

Here is a bit of code for using the glm() function in R with the binomial family, with weights giving the number of covering reads.


set.seed(1) # for reproducibility
n <- 100
# random Poisson number of observations (reads);
# pmax guards against zero coverage, which would give 0/0 in y/reads below
reads <- pmax(1,rpois(n,lambda=5))
reads
# make a normal predictor variable x with mean 0, sd 2
x <- rnorm(n,0,2)
# x will be negatively correlated with the target variable y
beta <- -1
# through a sigmoid curve mapping x*beta to probabilities in [0,1]
p <- exp(x*beta)/(1 + exp(x*beta))
# binomial distribution from the number of observations (reads)
y <- rbinom(n,prob=p,size=reads)

# plot the successes (y) over the total number of trials (reads)
# and order the x-axis by the predictor variable x
par(mfrow=c(2,1),mar=c(2,4.5,1,1))
o <- order(x)
plot(reads[o],type="h",lwd=2,ylab="reads",xlab="",xaxt="n")
points(y[o],col="red",type="h",lwd=2)
points(reads[o],type="p",pch=20)
points(y[o],col="red",type="p",pch=20)
par(mar=c(4.5,4.5,0,1))
# the relationship is clearer if we plot just the ratio
plot((y/reads)[o],type="h",col="red",ylab="ratio",xlab="rank of x")
points((y/reads)[o],type="p",col="red",pch=20)

# from help for glm():
# "For a binomial GLM prior weights are used to give 
# the number of trials when the response is the 
# proportion of successes"
fit <- glm(y/reads ~ x, weights=reads, family=binomial)
summary(fit)
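
For contrast (my addition, not in the original post), the naive linear regression mentioned above gives every ratio equal weight, so noisy low-coverage ratios count as much as well-covered ones:

fit.lm <- lm(y/reads ~ x)
summary(fit.lm)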

How wrong is the hypergeometric test with one random margin?

In biostats and bioinformatics, the hypergeometric distribution is often used to assign a probability of surprise to the amount of overlap between results and annotation, e.g.: the expression levels of 100 genes are changed by drug treatment, and 50 of those genes are annotated as relating to the immune system. The probability of surprise for such an overlap depends on the total number of genes examined in the analysis and the number of genes annotated as relating to the immune system.
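
In R this is the standard one-sided overlap test (a sketch; the totals are illustrative, since the example above does not specify them):

# suppose 20000 genes were examined and 1000 carry the annotation;
# probability of 50 or more annotated genes among the 100 changed
phyper(50 - 1, 1000, 20000 - 1000, 100, lower.tail=FALSE)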

However, a hypergeometric test is not perfect for this application, as it assumes the margins are fixed (“margins” meaning the sums along the sides of the contingency table, i.e. the number of changed genes and the number of immune system genes). While the annotation side might be considered fixed, the number of genes observed as changed is better considered a random variable, as it depends on the dataset.

What happens if one of the margins is a random variable? Here is a simple example showing how the null distribution of the number of genes in the intersection changes when one of the margins is allowed to vary by different amounts (a quick sketch of the simulation follows the figure).

  • consider 100 genes, 20 annotated for a given category
  • black/blue density is randomly taking 20 genes as changed
  • red density is flipping a 0.2 coin 100 times and taking this many genes as changed
  • green densities are variations on a censored negative binomial, which has more variance than the binomial in red

[Figure: null densities of the intersection size under the fixed margin (black/blue), binomial margin (red), and censored negative binomial margins (green)]
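
A minimal version of the fixed-vs-binomial comparison (my reconstruction, not the original simulation code):

N <- 100; K <- 20  # 100 genes, 20 annotated
sims <- 1e5
# fixed margin: exactly 20 genes changed, i.e. the hypergeometric null
fixed <- rhyper(sims, K, N - K, 20)
# random margin: the number of changed genes is itself Binomial(100, 0.2)
n.changed <- rbinom(sims, N, 0.2)
random <- rhyper(sims, K, N - K, n.changed)
# the random margin widens the null distribution of the overlap
var(fixed); var(random)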


Poisson regression

In trying to explain generalized linear models, I often say something like: GLMs are very similar to linear models but with different domains for the target y, e.g. positive numbers, outcomes in {0,1}, non-negative integers, etc. This explanation bypasses the more interesting point, though: after applying the link function, the optimization problem for fitting the coefficients is totally different.

[Figure: fitted lines from a linear regression of log counts vs a Poisson regression; panels with a high-count outlier at x=2 and a low-count outlier at x=3]

This can be seen by comparing the coefficients from a linear regression of log counts to those from a Poisson regression. In some cases the fitted lines are quite similar, but they diverge if you introduce outliers. A casual explanation is that the Poisson likelihood is thrown off more by high counts than by low counts: the high count pulls up the expected value at x=2 in the second plot, but the low count does not substantially pull down the expected value at x=3 in the third plot.
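
Here is a small sketch of the comparison (illustrative numbers, not the code behind the figure):

set.seed(1)
x <- rep(1:4, each=10)
y <- rpois(length(x), lambda=exp(0.5*x))
# on clean data the two slopes are similar
coef(lm(log(y + 1) ~ x))  # pseudocount to handle zero counts
coef(glm(y ~ x, family=poisson))
# a single large count at x=2 moves the Poisson fit much more
y[which(x == 2)[1]] <- 200
coef(lm(log(y + 1) ~ x))
coef(glm(y ~ x, family=poisson))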


Splitting data

The caret package has a nice function for splitting data into balanced subsets, though I don’t see why I don’t get 3 rows out of 10 in this example. The p argument is defined as “the percentage of data that goes to training”. (A guess at what is happening follows the example.)


d <- data.frame(x=rnorm(10), group=factor(c(1,1,1,2,2,2,3,3,3,3)))
d
            x group
1   1.0089900     1
2   0.4854706     1
3   1.7083259     1
4  -1.3362274     2
5   1.4905259     2
6   1.6451234     2
7   1.0361174     3
8   0.2369341     3
9  -2.0043264     3
10  1.4361718     3
library(caret)
d[createDataPartition(d$group, p=3/10)$Resample1,]
            x group
3   1.7083259     1
4  -1.3362274     2
8   0.2369341     3
10  1.4361718     3
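
My guess at the 4-rows result (worth checking against caret's source, but consistent with the output above): the sampling is done within each group, and the per-group count appears to be rounded up.

# group sizes are 3, 3, 4; with p = 3/10 this gives
# ceiling(0.9) + ceiling(0.9) + ceiling(1.2) = 1 + 1 + 2 = 4 rows
ceiling(3/10 * table(d$group))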

Points and line ranges

Two ways of plotting a grid of points and line ranges. I’m coming around to ggplot2. I recommend skimming the first few chapters of the ggplot2 book to understand what is going on, but it only takes about 30 minutes to learn enough to make basic plots.

m <- 10
k <- 3
d <- data.frame(x=factor(rep(1:k,m)), y=rnorm(m*k), z=rep(1:m,each=k))
d$ymax <- d$y + 1
d$ymin <- d$y - 1

# pretty simple
library(ggplot2)
p <- ggplot(d, aes(x=x, y=y, ymin=ymin, ymax=ymax))
p + geom_pointrange() + theme_bw() + facet_wrap(~ z)

# messy
par(mfrow=c(3,4), mar=c(3,3,2,1))
for (i in 1:m) {
  with(d[d$z == i,], {
    plot(as.numeric(x), y, main=i, xlim=c(0,k+1), ylim=c(-3,3), pch=20, xaxt="n")
    axis(1,at=1:k,1:k)
    segments(as.numeric(x),ymin,as.numeric(x),ymax)
  })  
}