Let $X_1,X_2,\dotsc$ be defined jointly. I'm not entirely sure what this means, but I take it to mean they are all defined on the same probability space. Let $E[X_i]=0$ and $E[X_i^2]=1$ for all $i$. Show that $P(X_n\geq n \text{ infinitely often})=0$. Saying that an event occurs infinitely often is equivalent to taking the $\limsup$ of the events (at least I'm pretty sure that's the correct way to write it), which is defined as $$ \bigcap_{n=1}^\infty \bigcup_{k=n}^\infty (X_k \geq k). $$ Now, since each $X_i$ has mean $0$, it intuitively makes sense that the probability should go to $0$. My concern, though, is that I'm not using the other hypothesis and I can't really see how it fits in. Thanks!

Hint: According to the first Borel–Cantelli lemma, the limsup of the events $(X_n\geq n)$ has probability zero as soon as the series $(*)$ $\sum\limits_n\mathrm P(X_n\geqslant n)$ converges. Hence, if one shows that $(*)$ converges, the proof is complete.
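For completeness, the first Borel–Cantelli lemma can be stated as follows (with $A_n = [X_n\geqslant n]$ in the present setting):

```latex
% First Borel–Cantelli lemma: if the probabilities are summable,
% then almost surely only finitely many of the events A_n occur.
\[
  \sum_{n=1}^\infty \mathrm P(A_n) < \infty
  \quad\Longrightarrow\quad
  \mathrm P\Bigl(\limsup_{n\to\infty} A_n\Bigr)
  = \mathrm P\Bigl(\bigcap_{n=1}^\infty \bigcup_{k=n}^\infty A_k\Bigr)
  = 0.
\]
```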

How to show that $(*)$ converges? Luckily, the only hypotheses one is given concern the moments of $X_n$, hence one knows they must be used somehow. Since the hypotheses are that $\mathrm E(X_n)=0$ and $\mathrm E(X_n^2)=1$ for every $n$, the problem is to bound $\mathrm P(X\geqslant n)$ for **any** random variable $X$ such that $\mathrm E(X)=0$ and $\mathrm E(X^2)=1$. Any idea?

One might begin with the obvious inclusion $[X\geqslant n]\subseteq[|X-\mathrm E(X)|\geqslant n]$ and try to use one of the not-so-many inequalities one knows which allow one to bound $\mathrm P(|X-\mathrm E(X)|\geqslant n)$...
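Spelling out where this leads (the inequality in question is Chebyshev's), applied to a variable with $\mathrm E(X)=0$ and $\mathrm E(X^2)=1$:

```latex
% Chebyshev's inequality gives a summable bound on the tail:
\[
  \mathrm P(X \geqslant n)
  \;\leqslant\; \mathrm P\bigl(|X - \mathrm E(X)| \geqslant n\bigr)
  \;\leqslant\; \frac{\operatorname{Var}(X)}{n^2}
  \;=\; \frac{\mathrm E(X^2) - \mathrm E(X)^2}{n^2}
  \;=\; \frac{1}{n^2}.
\]
```

Since $\sum_n 1/n^2$ converges, so does $(*)$, which is exactly what was needed.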