# How to draw a probable outcome from a distribution?

To visualize the data, I'd like to draw a 'typical' outcome of an experiment. A plot of the 'typical' outcome would then have the average (or possibly mode) number of objects, say, 5.
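One concrete way to do this, sketched below under the assumption that the counts follow a Poisson distribution (the rate 5.3 is made up for illustration): take the "typical" outcome to be the mode, i.e. the count with the highest probability mass.

```python
import numpy as np
from scipy import stats

# Hypothetical model: number of objects per experiment ~ Poisson(rate).
rate = 5.3
dist = stats.poisson(rate)

# The 'typical' outcome as the mode: the count with the highest pmf.
support = np.arange(0, 30)
mode = support[np.argmax(dist.pmf(support))]
print(mode)  # -> 5, the most probable count for Poisson(5.3)
```

You could then plot a single panel containing `mode` objects, rather than a randomly drawn (and possibly atypical) realization.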

# Modeling response times

Not an expert, but maybe the ex-Gaussian (Gaussian plus exponential distribution)? Its pdf is the convolution of the two:

> In the framework of cognitive processes, this convolution can be seen as representing the overall distribution of RT [Response Time] resulting from two …
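A quick way to see the shape is to sample it directly: an ex-Gaussian draw is simply the sum of a Gaussian draw and an exponential draw. The parameter values below (mu, sigma, tau, in ms) are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical parameters: mu, sigma for the Gaussian part; tau for the exponential tail.
mu, sigma, tau = 300.0, 30.0, 100.0  # e.g. response times in ms

# An ex-Gaussian sample is a Gaussian draw plus an independent exponential draw.
n = 100_000
rt = rng.normal(mu, sigma, n) + rng.exponential(tau, n)

# The mean of the convolution is mu + tau; the right skew comes from the exponential tail.
print(rt.mean())  # close to 400
```

A histogram of `rt` shows the characteristic RT shape: roughly Gaussian rise, long exponential right tail.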

# Using mixed effects modelling to estimate and compare variability

Can I use mixed effects analysis to assess whether this within-person variability is, on average, different between the two groups?

```r
set.seed(1)
group_A_base_sd = 1
group_B_base_sd = 2
within_group_sd_of_sds = .
```
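Short of a full mixed model, here is a minimal simulation sketch in Python of the same idea: give each person their own true within-person SD around a group-specific base SD, compute each person's sample SD, and compare the group averages on the log scale. The group base SDs (1 and 2) come from the snippet above; the number of people, observations per person, and the sd-of-sds spread (0.2) are assumed, since that last value is truncated in the question.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

group_A_base_sd, group_B_base_sd = 1.0, 2.0  # from the question
n_people, n_obs = 50, 20                     # assumed

def person_sds(base_sd):
    # Each person's true SD varies around the group base SD (lognormal keeps it positive);
    # then estimate it from that person's own observations.
    true_sd = base_sd * rng.lognormal(0.0, 0.2, n_people)
    return np.array([rng.normal(0, s, n_obs).std(ddof=1) for s in true_sd])

sd_A = person_sds(group_A_base_sd)
sd_B = person_sds(group_B_base_sd)

# Compare average within-person variability between groups on the log scale.
t, p = stats.ttest_ind(np.log(sd_A), np.log(sd_B))
print(p)  # tiny: the groups' variabilities differ by a factor of 2
```

A mixed model adds to this by handling unbalanced data and partial pooling, but the two-stage summary above is often a useful sanity check.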

# Intraclass correlation and aggregation

Thus, my questions are:

What descriptive labels would you attach to different values of the intra-class correlation? That is, the aim is to relate values of the intra-class correlation to qualitative language such as: "When the intraclass correlation is greater than x, it suggests that the attitudes are modestly/moderately/strongly shared across team members."

# How to fit a negative binomial distribution in R while incorporating censoring

I need to fit $Y_{ij} \sim NegBin(m_{ij}, k)$, i.e. a negative binomial distribution, to count data.
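Although the question asks about R, the mechanics of censored maximum likelihood are easy to sketch in Python: uncensored observations contribute the pmf, right-censored ones contribute the survival function $P(Y \ge c)$. The parameter values and censoring point below are made up; note scipy parameterises the negative binomial as $(n, p)$ with $n = k$ and $p = k/(k+m)$.

```python
import numpy as np
from scipy import stats, optimize

rng = np.random.default_rng(2)

# Hypothetical data: NegBin with mean m = 10 and dispersion k = 5, right-censored at c.
m_true, k_true, c = 10.0, 5.0, 15
y = rng.negative_binomial(n=k_true, p=k_true / (k_true + m_true), size=500)
censored = y >= c
y_obs = np.where(censored, c, y)

def negloglik(params):
    log_m, log_k = params            # optimise on the log scale to keep m, k > 0
    m, k = np.exp(log_m), np.exp(log_k)
    p = k / (k + m)                  # scipy's (n, p) parameterisation
    ll_unc = stats.nbinom.logpmf(y_obs[~censored], k, p).sum()
    # Each censored point contributes log P(Y >= c) = logsf at c - 1.
    ll_cen = stats.nbinom.logsf(c - 1, k, p) * censored.sum()
    return -(ll_unc + ll_cen)

res = optimize.minimize(negloglik, x0=[np.log(5.0), np.log(1.0)], method="Nelder-Mead")
m_hat, k_hat = np.exp(res.x)
print(m_hat, k_hat)  # near the true values 10 and 5
```

The same likelihood can be written in R and passed to `optim`; the structure (pmf for observed, survival function for censored) is identical.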

# If the t-test and the ANOVA for two groups are equivalent, why aren't their assumptions equivalent?

I'm sure I've got this completely wrapped round my head, but I just can't figure it out. So why is it that the t-test is equivalent to ANOVA with two groups?
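The equivalence is easy to verify numerically: with two groups, the one-way ANOVA F statistic equals the square of the pooled-variance t statistic, and the p-values coincide. A quick check on simulated data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
a = rng.normal(0.0, 1.0, 30)
b = rng.normal(0.5, 1.0, 30)

# Equal-variance two-sample t-test vs one-way ANOVA on the same two groups.
t, p_t = stats.ttest_ind(a, b)   # pooled-variance t-test
F, p_F = stats.f_oneway(a, b)

print(t**2, F)   # identical: with two groups, F = t^2
print(p_t, p_F)  # and the p-values match
```

The apparent mismatch in stated assumptions usually comes from comparing ANOVA against the *Welch* (unequal-variance) t-test, which is not the one equivalent to standard ANOVA.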

# Appropriate normality tests for small samples

So far, I've been using the Shapiro-Wilk statistic in order to test normality assumptions in small samples. The fBasics package in R (part of Rmetrics) includes several normality tests, covering many of the popular frequentist tests -- Kolmogorov-Smirnov, Shapiro-Wilk, Jarque–Bera, and D'Agostino -- along with a wrapper for the normality tests in the nortest package -- Anderson–Darling, Cramer–von Mises, Lilliefors (Kolmogorov-Smirnov), Pearson chi–square, and Shapiro–Francia.
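For comparison, several of the same tests are available in scipy (the sample size 20 below is an arbitrary stand-in for "small"):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
x = rng.normal(size=20)  # a small sample

# Counterparts of a few tests mentioned above, as implemented in scipy.
res_sw = stats.shapiro(x)                # Shapiro-Wilk
res_ad = stats.anderson(x, dist="norm")  # Anderson-Darling
res_jb = stats.jarque_bera(x)            # Jarque-Bera
res_da = stats.normaltest(x)             # D'Agostino-Pearson

print(res_sw.pvalue, res_jb.pvalue, res_da.pvalue, res_ad.statistic)
```

Note that moment-based tests like Jarque–Bera have very low power at n = 20; Shapiro–Wilk is generally the better default for small samples.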

# Based on z-score, is it possible to compute confidence without looking at a z-table?

Is it possible to compute confidence without looking up a z-table? A z-table gives you values of the cumulative distribution function for the standard (i.e. mean-zero, unit-variance) normal distribution.
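Yes: the standard normal CDF can be computed directly from the error function, $\Phi(z) = \tfrac12\left(1 + \operatorname{erf}(z/\sqrt{2})\right)$, which is in every standard math library. A minimal sketch:

```python
import math

def phi(z):
    """Standard normal CDF, computed from the error function instead of a z-table."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# Two-sided confidence level for a given z-score:
z = 1.96
confidence = phi(z) - phi(-z)
print(confidence)  # about 0.95
```

This reproduces the familiar table values (e.g. $\Phi(1.96) \approx 0.975$) to full floating-point accuracy.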

# Comparing 2 independent non-central t statistics

The sample Sharpe ratio is the sample mean divided by the sample standard deviation. Up to a constant factor ($\sqrt{n}$, where $n$ is the number of observations), this is distributed as a (possibly non-central) $t$-statistic.
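This claim is easy to check by simulation: for i.i.d. normal returns with mean $\mu$ and sd $\sigma$, $\sqrt{n}$ times the sample Sharpe ratio follows a non-central $t$ distribution with $n-1$ degrees of freedom and non-centrality $\sqrt{n}\,\mu/\sigma$. The parameter values below are made up.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
n, mu, sigma = 10, 0.5, 1.0

# Simulate many samples; compute sqrt(n) * (sample Sharpe ratio) for each one.
sims = rng.normal(mu, sigma, size=(20000, n))
sharpe = sims.mean(axis=1) / sims.std(axis=1, ddof=1)
t_stats = np.sqrt(n) * sharpe

# Theory: non-central t with df = n - 1 and ncp = sqrt(n) * mu / sigma.
dist = stats.nct(df=n - 1, nc=np.sqrt(n) * mu / sigma)
print(t_stats.mean(), dist.mean())  # empirical and theoretical means agree
```

With the distribution pinned down, comparing two independent Sharpe ratios reduces to comparing two independent non-central $t$ variates.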

# Constructing smoothing splines with cross-validation

Can someone provide me with a book or online reference on how to construct smoothing splines with cross-validation? I would also appreciate an overview of whether this smoothing technique is a good one for smoothing data and whether there are any disadvantages of which a non-statistician needs to be aware.
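As a rough illustration of the mechanics (not a reference implementation), here is a sketch in Python that picks the smoothing parameter of `scipy.interpolate.UnivariateSpline` by k-fold cross-validation over a coarse grid; the data, grid, and fold count are all made up.

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

rng = np.random.default_rng(6)
x = np.sort(rng.uniform(0, 10, 100))
y = np.sin(x) + rng.normal(0, 0.3, 100)

def cv_error(s, k_folds=5):
    """k-fold cross-validation MSE for per-point smoothing level s."""
    folds = np.arange(len(x)) % k_folds
    err = 0.0
    for f in range(k_folds):
        train = folds != f
        # UnivariateSpline's s is a total residual budget, so scale by train size.
        spl = UnivariateSpline(x[train], y[train], s=s * train.sum())
        err += np.mean((spl(x[~train]) - y[~train]) ** 2)
    return err / k_folds

# Pick the smoothing level with the lowest CV error from a coarse grid.
grid = [0.01, 0.05, 0.1, 0.3, 1.0]
best_s = min(grid, key=cv_error)
print(best_s, cv_error(best_s))
```

In practice, generalized cross-validation (GCV) is the standard shortcut: it approximates leave-one-out CV without refitting, and is what R's `smooth.spline` uses by default.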

# Using information geometry to define distances and volumes…useful?

I came across a large body of literature which advocates using Fisher's Information metric as a natural local metric in the space of probability distributions and then integrating over it to define distances and volumes. But are these "integrated" quantities actually useful for anything?
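To make the construction concrete in the simplest case: for a Bernoulli$(p)$ model the Fisher information is $I(p) = 1/(p(1-p))$, and integrating $\sqrt{I(p)}\,dp$ along the parameter interval gives the Fisher–Rao distance, which has the closed form $2\,|\arcsin\sqrt{p_2} - \arcsin\sqrt{p_1}|$. A numerical check:

```python
import numpy as np
from scipy import integrate

# Fisher information for Bernoulli(p): I(p) = 1 / (p (1 - p)).
p1, p2 = 0.2, 0.7  # two arbitrary parameter values

# Integrate the line element sqrt(I(p)) dp along the parameter interval...
numeric, _ = integrate.quad(lambda p: 1.0 / np.sqrt(p * (1.0 - p)), p1, p2)

# ...and compare with the closed-form Fisher-Rao distance.
closed_form = 2.0 * abs(np.arcsin(np.sqrt(p2)) - np.arcsin(np.sqrt(p1)))
print(numeric, closed_form)  # the two agree
```

The arcsine form shows what the metric does: it stretches distances near $p = 0$ and $p = 1$, where a small change in $p$ is statistically easy to detect.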

# Do working statisticians care about the difference between frequentist and Bayesian inference?

It is a trade-off over whether the subjective element of the Bayesian approach (which is itself debatable; see e.g. …) is acceptable. I think Bayesian statistics come into play in two different contexts.

# Is there a way to remember the definitions of Type I and Type II Errors?


Since type two means "false negative" or sort of "false false", I remember it as the number of falses. If you believe such an argument:

Type I errors are of primary concern
Type II errors are of secondary concern

Note: I'm not endorsing this value judgement, but it does help me remember Type I from Type II.

# Survival Analysis tools in Python

I am wondering if there are any packages for Python that are capable of performing survival analysis. I have been using the survival package in R but would like to port my work to Python.
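Packages do exist (lifelines is one commonly cited option), but the basic Kaplan–Meier estimator is also simple enough to sketch in plain numpy if you want no dependencies. A minimal version, assuming right-censored data with no tied event times:

```python
import numpy as np

def kaplan_meier(times, events):
    """Minimal Kaplan-Meier estimator.

    events[i] is 1 for an observed event, 0 for right-censoring.
    Returns the event times and the survival probability after each event.
    """
    times, events = np.asarray(times, float), np.asarray(events, int)
    order = np.argsort(times)
    times, events = times[order], events[order]
    n_at_risk = len(times)
    surv, out_t, out_s = 1.0, [], []
    for t, d in zip(times, events):
        if d:  # an observed event shrinks the survival curve
            surv *= 1.0 - 1.0 / n_at_risk
            out_t.append(t)
            out_s.append(surv)
        n_at_risk -= 1  # censored subjects just leave the risk set
    return np.array(out_t), np.array(out_s)

t, s = kaplan_meier([1, 2, 3], [1, 1, 0])
print(s)  # 2/3 after the first event, then 2/3 * 1/2 = 1/3
```

For Cox regression, parametric models, and proper handling of ties, a dedicated package is still the right tool.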

# Express answers in terms of original units, in Box-Cox transformed data

So I can only make inferences about the difference (or the ratio) of the medians on the original scale of measurement. However, if we apply t-tools to Box-Cox transformed data, we will get inferences about the difference in means of the transformed data.
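The key fact behind this, illustrated for the log transform (Box-Cox with $\lambda = 0$): back-transforming the mean of the logs gives the geometric mean, which estimates the *median* of the original data when the transformed data are symmetric, not the mean. A quick demonstration on simulated lognormal data (parameter values made up):

```python
import numpy as np

rng = np.random.default_rng(7)

# Lognormal data: the log transform makes it exactly normal.
mu, sigma = 1.0, 0.5
x = rng.lognormal(mu, sigma, 100_000)

# Back-transforming the mean of the logs gives the geometric mean...
geo_mean = np.exp(np.log(x).mean())

# ...which matches the median (e^mu), while the raw mean is larger.
print(geo_mean, np.median(x), x.mean())
```

So a confidence interval for the difference of means on the log scale, once exponentiated, becomes a confidence interval for the *ratio of medians* on the original scale.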

# What are alternatives to broken axes?


(3) You can show the broken plot side-by-side with the same plot on unbroken axes.
(4) In the case of your bar chart example, choose a suitable (perhaps hugely stretched) vertical axis and provide a panning utility.
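One further standard alternative, not listed above: a logarithmic axis keeps both the small and the dominating bars readable on a single unbroken scale. A minimal matplotlib sketch with made-up data:

```python
import matplotlib
matplotlib.use("Agg")  # non-interactive backend
import matplotlib.pyplot as plt

# Hypothetical data with one bar dwarfing the rest.
values = [3, 5, 4, 1200]

fig, ax = plt.subplots()
ax.bar(range(len(values)), values)
ax.set_yscale("log")  # log scale instead of breaking the axis
fig.savefig("bars.png")
```

The trade-off: log scales distort ratios of bar heights visually, so they work best when readers are expected to compare orders of magnitude rather than absolute differences.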

# Shall I trust AIC (non-full model) or slope (full model)?

The purpose of running regressions of butterfly richness against 5 environmental variables is to show the importance ranking of the independent variables, mainly by AIC. In non-full models, they reveal that variable A tends to be more influential than the others by delta AIC.
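For reference, the delta-AIC comparison itself is mechanical; here is a minimal sketch in Python (the data-generating setup is invented: `x1` plays the role of the influential variable A, `x2` an irrelevant one). For Gaussian OLS, AIC reduces to $n\log(\mathrm{RSS}/n) + 2k$ up to constants, which cancel in differences.

```python
import numpy as np

rng = np.random.default_rng(8)
n = 200
x1, x2 = rng.normal(size=n), rng.normal(size=n)
y = 2.0 * x1 + rng.normal(size=n)  # hypothetical richness driven by variable A (x1)

def aic_ols(X, y):
    """AIC for an OLS fit (Gaussian likelihood, constants dropped): n*log(RSS/n) + 2k."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, rss, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = float(rss[0]) if len(rss) else float(np.sum((y - X @ beta) ** 2))
    return len(y) * np.log(rss / len(y)) + 2 * X.shape[1]

aic_A, aic_B = aic_ols(x1, y), aic_ols(x2, y)
print(aic_A - aic_B)  # strongly negative: the model with variable A fits far better
```

Whether AIC rankings from single-variable models should trump slopes from the full model is a separate question; the two can disagree when predictors are correlated.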

# Video/Audio online material for getting into Bayesian analysis and logistic-regressions

An "advanced" model would be a Monte Carlo simulation validated using R² tests. Currently, in my field, there is a lot of research using logistic and Bayesian analysis.

# Variation in PCA weights

I have weights of SNP variation (output through the Eigenstrat program) for each SNP for the three main PCs, e.g.:

```
SNPNam  PC1-wt  PC2-wt  PC3-wt
SNP_1   -1.
```
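For readers unfamiliar with what these weights are: they are the per-variable loadings of each principal component. A minimal sketch of how such a table arises via SVD-based PCA (the matrix dimensions and data are invented, standing in for a samples-by-SNPs genotype matrix):

```python
import numpy as np

rng = np.random.default_rng(9)
# Hypothetical genotype-like matrix: 50 samples x 8 SNPs.
X = rng.normal(size=(50, 8))
Xc = X - X.mean(axis=0)  # center each SNP column

# SVD-based PCA: rows of Vt are the PCs; transposing gives one row of
# weights per SNP, analogous to the per-SNP weights Eigenstrat reports.
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
weights = Vt[:3].T  # weight of each SNP on PC1..PC3

print(weights.shape)  # (8, 3): one row per SNP, one column per PC
```

Each column of `weights` is a unit vector, so the weights are only defined up to sign and scale; comparing their magnitudes across PCs requires weighting by the corresponding singular values.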

# I just installed the latest version of R. What packages should I obtain?

Duplicate thread: What R packages do you find most useful in your daily work? Are there any R packages that are just plain good to have, regardless of the type of work you are doing?