Let $V$ be a vector space over $\mathbb R$. A symmetric bilinear pairing on $V$ is a linear map $a: V\otimes V \to \mathbb R$ satisfying $a(v\otimes w) = a(w\otimes v)$. Because $\mathbb R$ has characteristic different from two, I will freely confuse symmetric bilinear pairings with quadratic forms; if $v\in V$, I will write $av^2$ for $a(v\otimes v)$, and $av_1v_2$ for $a(v_1\otimes v_2)$. The pairing $a$ is positive (negative) definite on $V$ if $av^2$ is strictly positive (negative) whenever $v\neq 0$. The pairing $a$ is nondegenerate if the corresponding map $a: V\to V^\*: v \mapsto a(v\otimes-)$ has trivial kernel.

Two subspaces $V_1,V_2 \subseteq V$ are orthogonal if $av_1v_2 = 0$ for any $v_1\in V_1$ and any $v_2\in V_2$. If $V_1\subseteq V$, its orthogonal complement is the maximal subspace $V_2\subseteq V$ such that $V_1$ and $V_2$ are orthogonal (it always exists, and may intersect $V_1$). A subspace $V_+\subseteq V$ is positive (negative) if $a|\_{V_+ \otimes V_+}$ is positive- (negative-) definite. A subspace is maximally positive (negative) if it is positive (negative) and not contained in any strictly larger positive (negative) subspace. Maximally positive (negative) subspaces exist by Zorn's lemma.

Let $a$ be a nondegenerate symmetric bilinear pairing on $V$. If $V$ is finite-dimensional, then the following are true (e.g. by running Gram-Schmidt):

1. Let $V_+$ be a maximally positive subspace. Then its orthogonal complement $V_-$ is maximally negative.

2. $V = V_+ \oplus V_-$: that is, $V$ splits as the direct sum of a maximally positive subspace and its orthogonal complement.
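In finite dimensions, statement 1 is easy to check numerically. The sketch below (NumPy; the names `B`, `Vplus`, `Vminus` are my own illustration, not anything from the question) takes $V_+$ to be the span of the eigenvectors of a generic symmetric matrix with positive eigenvalues, and confirms that the span of the remaining eigenvectors is orthogonal to $V_+$ under the pairing and carries the negative part:

```python
import numpy as np

rng = np.random.default_rng(1)
M = rng.standard_normal((5, 5))
B = M + M.T                      # a generic symmetric matrix: the Gram matrix of a

lam, U = np.linalg.eigh(B)       # eigenvalues and orthonormal eigenvectors
Vplus = U[:, lam > 0]            # columns span a maximally positive subspace
Vminus = U[:, lam < 0]           # columns span its orthogonal complement

# Vplus and Vminus are orthogonal for the pairing B ...
print(np.allclose(Vplus.T @ B @ Vminus, 0))       # True
# ... and B restricted to Vminus is the diagonal of negative eigenvalues:
restr = Vminus.T @ B @ Vminus
print(np.allclose(restr, np.diag(lam[lam < 0])))  # True
```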

An important corollary of 1. is that any two maximally positive (negative) subspaces have the same dimension. We can therefore define the signature of the pairing $a$ as the pair $(\dim V_+,\dim V_-)$. When $a$ is degenerate, the signature is actually a triple; the third term is the dimension of the kernel of the map $a: V\to V^\*$. ($a$ induces a nondegenerate symmetric pairing on $V / \ker a$.)
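In finite dimensions the signature can be read off from the signs of the eigenvalues of any Gram matrix representing $a$ (Sylvester's law of inertia). A minimal NumPy sketch, with a hypothetical helper `signature` of my own naming:

```python
import numpy as np

def signature(A, tol=1e-10):
    """Signature (dim V_+, dim V_-, dim ker a) of a real symmetric
    matrix A, read off from its eigenvalue signs (Sylvester's law
    of inertia)."""
    eigs = np.linalg.eigvalsh(A)
    return (int(np.sum(eigs > tol)),
            int(np.sum(eigs < -tol)),
            int(np.sum(np.abs(eigs) <= tol)))

# A degenerate pairing: two positive directions, one negative, one null.
print(signature(np.diag([2.0, 1.0, -3.0, 0.0])))  # (2, 1, 1)
```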

My question is: are statements 1., 2. true when $V$ is infinite-dimensional? If not, what (topologico-analytical, say) conditions on $V$ assure that they are? (Or at least that 1. is; I don't really care about 2.)

Let's move on to the other part of your question: when can we ensure that the splitting exists? What you are asking for is that $V$ be the direct sum of two subspaces, $V_-$ and $V_+$. Of course, if you start with two abstract vector spaces, $V_-$ and $V_+$, both of which admit symmetric, positive definite bilinear forms, say $b_-$ and $b_+$ respectively, then you can construct an example by taking $V = V_- \oplus V_+$ and taking $-b_- + b_+$. This shows that you can get this situation to work with quite awful spaces, but the point is that all the awfulness of $V$ divides nicely into awfulness of $V_-$ plus awfulness of $V_+$.

Presumably, though, you are more interested in the case where you start with $V$ and the quadratic form. Maybe this quadratic form is fairly arbitrary (perhaps it varies in some space of quadratic forms). In this situation, you would want conditions on $V$ that guarantee that the splitting occurs without too much fuss.

Let's examine the question from the other end: suppose that $V = V_- \oplus V_+$. Then by changing the sign of the form on $V_-$, we obtain a positive-definite symmetric bilinear form on $V$. This usually goes by the name of an inner product, as we're over $\mathbb{R}$. So the problem reduces to finding complements of subspaces in inner product spaces. To guarantee that closed subspaces have complements, you want completeness. Then your bilinear form is related to the original inner product by the operator $2P_+ - I$, where $P_+$ is the orthogonal projection onto $V_+$.

So what you want is to be working with a Hilbert space and the space of self-adjoint square-roots of the identity.
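In finite dimensions this reduction can be made concrete. The sketch below (NumPy; the names `B`, `absB`, `J` are mine, not from the answer above) spectrally decomposes an indefinite symmetric matrix $B$ as $J\,|B|$, so that $|B|$ plays the role of the inner product and $J = 2P_+ - I$ is the self-adjoint square root of the identity:

```python
import numpy as np

# An indefinite symmetric bilinear form B on R^4 (generic, hence invertible).
rng = np.random.default_rng(0)
M = rng.standard_normal((4, 4))
B = M + M.T

# Spectral decomposition B = U diag(lam) U^T.
lam, U = np.linalg.eigh(B)
absB = U @ np.diag(np.abs(lam)) @ U.T   # positive definite: the new inner product
J = U @ np.diag(np.sign(lam)) @ U.T     # self-adjoint square root of the identity

print(np.allclose(J @ J, np.eye(4)))    # True
print(np.allclose(B, J @ absB))         # True: b(v, w) = <Jv, w> in the new inner product
Pplus = (np.eye(4) + J) / 2             # orthogonal projection onto the positive part
print(np.allclose(J, 2 * Pplus - np.eye(4)))  # True
```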

As I said, this isn't an "if and only if". But it is a simple condition that quite often holds. It can be further relaxed since it's enough that the inner product induced by the bilinear form and the original inner product be merely equivalent rather than equal, but I'll leave those details as an exercise.

Edit: From your other question related to this it seems as though you are particularly interested in the case where one of the factors is finite dimensional. In that case, the splitting always holds.

Let $V$ have (algebraic) basis $(v_n)_{n\in\mathbb{Z}}$. Define a symmetric pairing $a$ by $av_n^2=\operatorname{sign}(n)$ for all $n$ (note $\operatorname{sign}(0)=0$); further, $av_iv_j=0$ if $i\ne j$ and $0\notin\{i,j\}$; and finally, $av_0v_n=av_nv_0=-\operatorname{sign}(n)$.

Then the linear span $V_+$ of all $v_n$ with $n>0$ is maximal positive, the span $V_-$ of those with $n<0$ is maximal negative, and yet $v_0$ is not in the sum of the two.

To see the maximal positivity of $V_+$ it is helpful to note that $aw^2=0$ for any symmetric vector $w$ (i.e. one with $w_{-n}=w_n$ for all $n$, writing $w=\sum_n w_n v_n$). To see that $a$ is non-degenerate, note that if $awv_n=0$ for some $n\ne0$ then $w_n=w_0$; so if $awv_n=0$ for all $n\ne0$, all coordinates of $w$ are the same, and since $w$ has finite support this forces $w=0$.
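The bookkeeping above can be checked on a finite truncation (a NumPy sketch with indices $-N,\dots,N$; the setup is mine). Note that, in contrast to the infinite-dimensional pairing, every finite truncation is degenerate: the all-ones vector lies in its kernel, which is exactly the "all coordinates of $w$ are the same" conclusion before finite support is invoked.

```python
import numpy as np

N = 5
idx = np.arange(-N, N + 1)               # basis v_{-N}, ..., v_0, ..., v_N
A = np.zeros((2 * N + 1, 2 * N + 1))     # Gram matrix of the pairing a
for i, m in enumerate(idx):
    for j, n in enumerate(idx):
        if m == n:
            A[i, j] = np.sign(n)         # a v_n^2 = sign(n)
        elif m == 0:
            A[i, j] = -np.sign(n)        # a v_0 v_n = -sign(n)
        elif n == 0:
            A[i, j] = -np.sign(m)        # a v_m v_0 = -sign(m)

# A symmetric vector (w_{-n} = w_n) is null for a:
w = np.cos(idx.astype(float))            # even in n
print(abs(w @ A @ w) < 1e-12)            # True

# Every finite truncation is degenerate: the all-ones vector is in the kernel.
ones = np.ones(2 * N + 1)
print(np.linalg.norm(A @ ones))          # 0.0
```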

The 'correct' infinite-dimensional generalization of a finite-dimensional indefinite inner-product space is the notion of a Krein space:
http://eom.springer.de/K/k055840.htm
This more or less builds statement 1 into the definition. That's the way this issue has been solved: just assume what you need....

"just assume what you need...." I know that you say that half in jest, but I'm not interested in the abstract theory of Krein spaces, but rather in some particular application. And in my application, the vector space I actually have is most definitely not a Krein space; the positive part is not Hilbert, for example.
– Theo Johnson-Freyd, Dec 8 '09 at 19:45