Geometrically, this means that if $X \to S$ and $Y \to S$ are two quasi-compact morphisms of schemes with dense image, then the image of $X \times_S Y \to S$ is dense, too.

We can prove this as follows (using the axiom of choice several times): It suffices to show that the kernel of $R \to A \otimes_R B$ is contained in every minimal prime ideal $P \subseteq R$, since the nilradical is the intersection of the minimal primes. Since the kernel of $R \to A$ is contained in $P$, we have $A_P \neq 0$, and likewise $B_P \neq 0$. Choose prime ideals in these non-trivial rings. Their preimages are prime ideals $I \subseteq A$, $J \subseteq B$ such that $I \cap R = P$ (since $I \cap R \subseteq P$ and $P$ is minimal) and likewise $J \cap R = P$. The kernel of $R \to A \otimes_R B$ is contained in the kernel of $R \to Q(A/I) \otimes_{Q(R/P)} Q(B/J)$, which equals the kernel of $R \to Q(R/P)$, i.e. $P$, since $Q(R/P) \to Q(A/I) \otimes_{Q(R/P)} Q(B/J)$ is injective.
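For orientation, the hypotheses are really needed: with $R = \mathbf{Z}$, $A = \mathbf{Z}/2$, $B = \mathbf{Z}/3$, the kernels $2\mathbf{Z}$ and $3\mathbf{Z}$ contain non-nilpotent elements, and indeed $A \otimes_R B = 0$, so the kernel of $R \to A \otimes_R B$ is all of $\mathbf{Z}$. This rests on the standard isomorphism $\mathbf{Z}/a \otimes_{\mathbf{Z}} \mathbf{Z}/b \cong \mathbf{Z}/\gcd(a,b)$; a small sketch (editorial, not part of the original argument):

```python
from math import gcd

def tensor_cyclic(a: int, b: int) -> int:
    """Order of Z/a (x)_Z Z/b, using Z/a (x) Z/b = Z/gcd(a, b)."""
    return gcd(a, b)

# coprime orders: the tensor product collapses to the zero module
assert tensor_cyclic(2, 3) == 1   # Z/2 (x) Z/3 = 0
# non-coprime orders: the common part survives
assert tensor_cyclic(4, 6) == 2   # Z/4 (x) Z/6 = Z/2
```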

In fact, one can prove this statement in $\mathsf{ZF}$, i.e. the axiom of choice is not necessary. The trick is to use filtered colimits (or the explicit construction of the tensor product) to reduce to the case that $R$ is of finite type over $\mathbf{Z}$, hence countable, and to the case that $A,B$ are of finite type over $R$, hence countable too. In Kostas Hatzikiriakou, "Minimal Prime Ideals and Arithmetic Comprehension", it is proven (in a fragment of $\mathsf{ZF}$) that every non-trivial countable commutative ring has a minimal prime ideal. Moreover, $\mathsf{ZF}$ proves that vector spaces are flat: Again, using filtered colimits, it suffices to prove this for finitely generated vector spaces, but these are free and hence flat. This proves in particular that the tensor product of two non-trivial vector spaces is a non-trivial vector space. In particular, $Q(R/P) \to Q(A/I) \otimes_{Q(R/P)} Q(B/J)$ is injective.

The proof is still not very satisfying. We have a rather elementary statement about tensor products of algebras; do we really need prime ideals? Is it possible to simplify the proof further? Also notice that the proof above is not constructive, i.e., it uses the law of excluded middle.

Question. Is there a constructive proof of the statement? If yes, what does it look like?

Notice that we may assume that $R,A,B$ are reduced. In that case, the question becomes: If $R \to A$ and $R \to B$ are injective, why is $R \to A \otimes_R B$ injective, too? The special case that $R$ is a field was already discussed above, and this part was constructive, I guess.
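The field case boils down to the fact that the tensor product of non-trivial vector spaces is non-trivial: if $v \neq 0$ and $w \neq 0$, then $v \otimes w \neq 0$, since its coordinates are the products $v_i w_j$. Over $\mathbf{Q}$ this is visible concretely via Kronecker products of coordinate vectors (a numerical sketch, editorial):

```python
import numpy as np

v = np.array([1, 2, 0])   # nonzero vector in Q^3
w = np.array([0, 3])      # nonzero vector in Q^2

t = np.kron(v, w)         # coordinates of v (x) w in the tensor basis
# v (x) w has entries v_i * w_j, so it is nonzero whenever v and w are
assert v.any() and w.any() and t.any()
```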

I have recently asked a similar yet broader question. I am not sure how to handle minimal prime ideals here. There is just a very short chapter about minimal prime ideals in the book by Lombardi and Quitté on constructive commutative algebra, and they "only" apply this to give a constructive proof of Traverso-Swan's theorem characterizing seminormal rings. But I would prefer a constructive proof of the statement which is really down-to-earth and can be therefore understood without any prerequisites on constructive mathematics (like the proof for the case that $R$ is a field).

Added 2. Here is a first simplification: We may assume that $R$ is reduced. Let $P$ be a minimal prime ideal of $R$. Then $R_P$ is a reduced zero-dimensional local ring, i.e. a field. The $R_P$-algebras $A_P$ and $B_P$ are non-trivial, hence $R_P \to A_P \otimes_{R_P} B_P$ is injective. It follows that the kernel of $R \to A \otimes_R B$ is contained in the kernel of $R \to R_P$, which is $P$. Now, Lombardi and Quitté write in their book in section XIII.7:

"It is a fact that the use of minimal prime ideals in a proof of classical mathematics can in general be made innocuous (i.e. constructive) by using $A_{\mathrm{min}}$".

Here, $A_{\mathrm{min}}$ denotes a rather peculiar looking commutative ring constructed in Theorem 7.8. Does this mean that we can somehow prove that $R_{\mathrm{min}} \to A_{\mathrm{min}} \otimes_{R_{\mathrm{min}}} B_{\mathrm{min}}$ is injective? This would suffice since $R \to R_{\mathrm{min}}$ is injective when $R$ is reduced.

Added 3. Here is another proof, which seems to be more suitable for constructivization. We may assume that $A,B$ are of finite type over $R$ with $A \otimes_R B = 0$ and that $R$ is reduced, the goal is $R=0$. By generic freeness, there is some open dense subset $U \subseteq \mathrm{Spec}(R)$ such that $A_f$ is free over $R_f$ (hence, flat) for every $D(f) \subseteq U$. It follows that $A_f = A_f \otimes_{R_f} R_f$ injects into $A_f \otimes_{R_f} B_f = 0$, i.e. $A_f=0$. Since $R \to A$ is injective, this means $f=0$. This shows $U=\emptyset$, and therefore $R=0$. Theorem 2.45 in "Computational Methods in Commutative Algebra and Algebraic Geometry" by Wolmer Vasconcelos seems to be a constructive proof of Generic Flatness at least for Noetherian domains $R$. So we would have to generalize this to reduced commutative rings $R$, and avoid the usage of the prime spectrum.
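For reference, the classical statement invoked above (generic freeness, here in the domain case; the reduced-ring generalization is what would need a constructive treatment):

$$\text{If } R \text{ is a Noetherian domain and } M \text{ is a finitely generated } R\text{-module, then } M_f \text{ is a free } R_f\text{-module for some } f \in R \setminus \{0\}.$$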

Really nice question. Is it important that $A$ and $B$ be rings, or could this work for any $R$-modules $A$ and $B$ as well?
– darij grinberg, Sep 16 '16 at 18:17

@YCor: It would clearly be helpful for others if you repost your example as a comment. :)
– HeinrichD, Sep 16 '16 at 18:41


I gave the following (sorry for the typo): $R=\mathbb{Z}$, and the $R$-modules $M_1,M_2$, with trivial annihilator, with $M_1\otimes_R M_2=0$ (hence with non-nilpotent elements in its annihilator), namely, pick two infinite disjoint sets of primes $U_1,U_2$, and $M_i=\bigoplus_{p\in U_i}\mathbb{Z}/p\mathbb{Z}$.
– YCor, Sep 16 '16 at 21:13
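One can probe YCor's construction on finite truncations: componentwise, $\mathbf{Z}/p \otimes_{\mathbf{Z}} \mathbf{Z}/q = 0$ for distinct primes $p \neq q$, so $M_1 \otimes_R M_2 = 0$, while the annihilator of a finite partial sum $\bigoplus_{p \in S} \mathbf{Z}/p$ is $\operatorname{lcm}(S)\mathbf{Z}$, which grows without bound; hence the full infinite sums have trivial annihilator. A small sketch (editorial; the prime sets are illustrative stand-ins):

```python
from math import gcd, lcm

U1, U2 = [2, 5, 11], [3, 7, 13]   # finite stand-ins for the disjoint prime sets

# componentwise Z/p (x)_Z Z/q = Z/gcd(p, q) = 0 for p != q, so M1 (x) M2 = 0
assert all(gcd(p, q) == 1 for p in U1 for q in U2)

# the annihilator of the finite truncation ⊕_{p in S} Z/p is lcm(S)·Z;
# it grows with S, so the annihilator of the infinite sum is trivial
assert lcm(*U1) == 2 * 5 * 11
```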

Does "the kernel consists of nilpotent elements" mean (a) "every element in the kernel is nilpotent" or (b) "every nilpotent element is in the kernel" or (c) "an element is in the kernel iff it is nilpotent"? I am guessing (a), but please confirm.
– Andrej Bauer, Sep 18 '16 at 20:41

4 Answers

This answer provides a scheme for constructing a constructive proof, though I'm still working to actually extract the constructive proof explicitly, so please don't accept the answer just yet. (Update: See below.) We'll prove the following statement:

Statement. Let $R$ be a reduced ring and let $\alpha : R \to A$ and $\beta : R \to B$ be injective linear maps of $R$-modules, where $A$ is finitely generated. Then the map $R \to A \otimes_R B$, $r \mapsto r \cdot (\alpha(1) \otimes \beta(1))$, is injective.

The general case, with $A$ not necessarily being finitely generated, follows formally from this case, since $A$ is the directed union of its finitely generated submodules which contain the image of $\alpha$ and tensoring with $B$ commutes with colimits.

We'll prove this statement by working internal to the little Zariski topos of $R$, that is the topos of sheaves on $\operatorname{Spec}(R)$, as explained in these notes. In this topos $R$, $A$, and $B$ have mirror images $R^\sim$, $A^\sim$, and $B^\sim$ such that $R \to A \otimes_R B$ is injective if and only if $R^\sim \to A^\sim \otimes_{R^\sim} B^\sim$ is a monomorphism in the topos. In order to ultimately be able to extract a fully explicit, constructive, non-toposophic proof, the little Zariski topos needs to be defined in a constructively sensible way; but this is possible. I presume that the extracted proof will look convoluted at first, but it's possible that it could be simplified even to the point that one wonders why one didn't see it without the help of tools.

The point is that working internal to that topos simplifies the situation to the easiest case, namely that the base ring is a field, such that the proof is almost trivial. This is because the internal universe of the Zariski topos has the following peculiarities:

The ring $R^\sim$ is a field in the sense that $1 \neq 0$ and $\forall x {:} R^\sim. \neg(\text{$x$ invertible}) \Rightarrow x = 0$.

From this it follows that $\forall x{:}R^\sim. \neg\neg(x = 0) \Rightarrow x = 0$. This is a huge simplification, since it's much easier to verify doubly negated statements: In order to show that $\neg\neg\varphi \Rightarrow \neg\neg\psi$, it suffices to show that $\varphi \Rightarrow \neg\neg\psi$. Note that this is really a peculiarity of the Zariski topos. The analogous statement $\forall x \in R. \neg\neg(x = 0) \Rightarrow x = 0$ is in general not intuitionistically justified.
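Spelled out (a short derivation not in the original text): assume $\neg\neg(x = 0)$. If $x$ were invertible, then $\neg\neg(x = 0)$ would yield $\neg\neg(1 = 0)$, contradicting $1 \neq 0$; hence $\neg(\text{$x$ invertible})$, and the field axiom gives $x = 0$. In symbols:

$$\neg\neg(x = 0) \;\Longrightarrow\; \neg(\text{$x$ invertible}) \;\Longrightarrow\; x = 0.$$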

Any finitely generated module over $R^\sim$ is not not finite free. (There does not not exist a minimal generating family. The usual proof shows that such a family is linearly independent and therefore a basis.)

Without further ado, here is the internal proof. Let $r : R^\sim$ be such that $r \cdot (\alpha(1) \otimes \beta(1)) = 0$ in $A^\sim \otimes B^\sim$. We want to verify that $r = 0$, but it suffices to verify that $\neg\neg(r = 0)$. Therefore we may assume that $A^\sim$ is finite free. Let $(x_1,\ldots,x_n)$ be a basis. Write $\alpha(r) = \sum_i r_i x_i$. Since $A^\sim \otimes B^\sim \cong (B^\sim)^n$, it follows that $r_i \beta(1) = 0$ for all $i$. Since $\beta$ is injective, it follows that $r_i = 0$ for all $i$. Thus $\alpha(r) = 0$. Since $\alpha$ is injective, it follows that $r = 0$.

Update: Here is a fully explicit constructive proof, obtained by working with @HeinrichD in the comments to unravel the scheme sketched above. Unfortunately it's rather convoluted and not particularly memorable; I hope that it can be simplified.

Lemma 1. Let $R$ be a ring. Let $A$ be an $R$-module with generating family $(x_1,\ldots,x_n)$. Assume that the only $g \in R$ such that one of the $x_i$ is an $R[g^{-1}]$-linear combination of the others in $A[g^{-1}]$ is $g = 0$. Then $A$ is free with $(x_1,\ldots,x_n)$ as a basis.

Proof: Let $\sum_i r_i x_i = 0$. Let $i$ be arbitrary. In $A[r_i^{-1}]$, the generator $x_i$ is a linear combination of the others. By assumption it follows that $r_i = 0$.
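Over a field (which is the internal situation in the topos argument above), the content of Lemma 1 is the familiar fact that a minimal generating family is linearly independent, hence a basis of its span. A numerical sketch over $\mathbf{Q}$ (editorial, using rank computations):

```python
import numpy as np

def is_minimal_generating(vectors) -> bool:
    """True iff no vector lies in the span of the others (over Q):
    removing any vector must strictly drop the rank of the family."""
    M = np.array(vectors, dtype=float)
    r = np.linalg.matrix_rank(M)
    return all(np.linalg.matrix_rank(np.delete(M, i, axis=0)) < r
               for i in range(len(vectors)))

# a minimal generating family is independent, i.e. a basis of its span
assert is_minimal_generating([[1, 0, 0], [0, 1, 1]])
# a redundant family is not minimal
assert not is_minimal_generating([[1, 0, 0], [2, 0, 0]])
```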

Lemma 2. Let $R$ be a reduced ring. Let $A$ be a finitely generated $R$-module. Assume that the only $f \in R$ such that $A[f^{-1}]$ is a free $R[f^{-1}]$-module is $f = 0$. Then $R = 0$.

Proof: By induction on the length $n$ of a given generating family $(x_1,\ldots,x_n)$ of $A$. Note that we'll apply the induction
hypothesis not to the ring $R$, but to some localizations of $R$.

If $n = 0$, then $A = 0$. Thus we can finish by using the assumption for $f := 1$.

If $n \geq 1$, then we want to verify the assumptions of Lemma 1.
Thus let $g \in R$ be given such that one of the $x_i$ is an $R[g^{-1}]$-linear combination of the others in $A[g^{-1}]$. Therefore the $R[g^{-1}]$-module $A[g^{-1}]$ can be generated by $n-1$ elements. By the induction hypothesis (applied to the reduced ring $R[g^{-1}]$ and its module $A[g^{-1}]$, which are easily seen to satisfy the assumptions of the induction hypothesis) it follows that $R[g^{-1}] = 0$ (in this step the assumption enters for many different $f$'s). Therefore $g = 0$.

Thus, by Lemma 1, $A$ is free. We can finish by using the assumption for $f := 1$.

@HeinrichD: But in YCor's example there is no injection $\mathbb{Z} \to M_i$, right? It therefore is only a counterexample for the claim "If two modules over a reduced ring have annihilators whose elements are all nilpotent, then their tensor product has this property too", not to the more specific claim involving given maps.
– Ingo Blechschmidt, Sep 22 '16 at 16:17

I've deleted my silly comment. (Actually, it was exactly your statement which was claimed to be refuted by YCor in his first comment, but that comment was deleted soon afterwards. Obviously I still had it in mind.) So actually darij's first comment was right to the point!
– HeinrichD, Sep 22 '16 at 17:15


Very nice proof! I already found something similar, except that I wasn't able to transform the classical proof of Lemma 2 into a constructive one. This needs some practice, I guess. In your proof, $A[g^{-1}]=0$ is a typo; we have $R[g^{-1}]=0$. Intuitively, Lemma 1 says that every locally minimal generating system is a basis. Lemma 2 says that if a f.g. $R$-module is nowhere free, then $R=0$. From this we immediately get generic (i.e. not not) freeness: If a f.g. $R$-module is nowhere free for elements $g \in \langle f \rangle$, then $f=0$. (Just apply the Lemma to the localization.)
– HeinrichD, Sep 23 '16 at 6:42

A few more details on how this is a consequence of Lemma 6.4 would be nice (I don't see it immediately, and I don't have much time for exploring).
– darij grinberg, Sep 23 '16 at 6:58

Thank you! I like this approach, too. However, I think the proof is not correct yet. Lemma 6.4 uses a generating system $\{b_i\}$ of $B$ over $R$, and we may assume $1_B = b_0$ for some index $0$. Let $a_0 = \alpha(r)$ and $a_i = 0$ for $i \neq 0$, so that $\sum_i a_i \otimes b_i = 0$. Then, the Lemma gives a matrix $(r_{ij})$ over $R$ and elements $a'_i \in A$ such that $\sum_j r_{ij} a'_j = a_i$ for all $i$ and $\sum_i r_{ij} b_i = 0$ for all $j$. This implies $\sum_j r_{0j} a'_j = a_0 = \alpha(r)$ as in your answer, but we cannot derive $r_{0j} b_0 = 0$ from $\sum_i r_{ij} b_i = 0$!
– HeinrichD, Sep 23 '16 at 6:58


Shin's proof can be salvaged as follows. We assume that $R$ is reduced and induct over the length $n$ of a generating family $(a_1,\ldots,a_n)$ of $A$. The case $n=0$ is trivial. For $n\geq1$ write $\alpha(r)=\sum_ir_ia_i$. Lemma 6.4 yields elements $h_{ij}\in R$ and $c_j\in B$ such that $r_i\beta(1)=\sum_jh_{ij}c_j$ for all $i$ and $\sum_ih_{ij}a_i=0$ for all $j$. We can apply the induction hypothesis to each of the rings $R[h_{ij}^{-1}]$, showing that $r=0$ there. Thus $h_{ij}r=0\in R$ for all $i,j$. Thus $\beta(rr_i)=0$, therefore $rr_i=0$ for all $i$. Finally $\alpha(r^2)=0$, so $r=0$.
– Ingo Blechschmidt, Sep 23 '16 at 8:39


Shin's proof is very nice! I wonder whether the explicit proof given in my answer is actually "the same" as his. In my proof lots of universal quantifications appear, but not all of these might actually be needed.
– Ingo Blechschmidt, Sep 23 '16 at 8:43

@Ingo Very nice proof indeed. Thank you for the elaboration. I hope that this will be integrated in one of the answers so that readers will immediately find it. I also wonder if the proofs are the same. Obviously they are very similar, because in both proofs we basically localize to get rid of one generator and then induct on the number of generators. But I wonder why Lemma 6.4 is needed here, because in your proof we don't need anything like that. Perhaps we can even get rid of it! (Lemma 6.4 is essentially right exactness of the tensor product.)
– HeinrichD, Sep 23 '16 at 9:07

We have to prove the following: Let $k \to A$ and $k\to B$ be two injective morphisms of commutative reduced rings. If $A\otimes_k B=0$, then $k=0$.

1) This is true when $k$ is a discrete field or, more generally, a zero-dimensional reduced ring (von Neumann regular). Indeed, $k\to A$ is faithfully flat (VIII-6.2) and $k \to B$ is injective, so $A \to A \otimes_k B$ is injective; thus $A=0$, and hence $k=0$.

The notation $k_{\operatorname{min}}$ is defined as in Theorem XIII.7.8 of your book with Quitté, right? (By the way, I am wondering how it compares to the -- arguably nonconstructive -- definition of the density monad of the inclusion functor $\mathsf{Field} \to \mathsf{CRing}$ in Example 6.5.11 of Emily Riehl's Category theory in context, math.jhu.edu/~eriehl/context.pdf .)
– darij grinberg, Dec 14 '16 at 18:07

I do not know enough about constructive proofs, but here is a try. Since your proof above is trying to use prime ideals, I assume your rings are rings with 1 and 1 maps to 1 under homomorphisms.

If $R\to A\otimes_R B$ had a non-nilpotent element $s$ in the kernel, then when we localize at $s$, we get the zero homomorphism $R_s\to A_s\otimes _{R_s} B_s$. But the image of $1$ in $A_s, B_s$ cannot be zero, and since these are rings with identities, $A_s\otimes _{R_s} B_s$ is a ring with identity different from zero. This is a contradiction.

I am not certain that my argument is correct, but I am sure you will let me know.
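To make the localization step concrete (an editorial sketch, not part of the argument above): for a finite cyclic module $\mathbf{Z}/n$ over $\mathbf{Z}$, inverting $s$ kills exactly the part of $n$ supported on the primes dividing $s$, i.e. $(\mathbf{Z}/n)[s^{-1}] \cong \mathbf{Z}/m$ where $m$ is $n$ with all prime factors shared with $s$ stripped out. In particular $A_s$ can vanish even when $A \neq 0$, which is where the injectivity hypotheses must enter.

```python
from math import gcd

def localize_cyclic(n: int, s: int) -> int:
    """Order of (Z/n)[1/s]: strip from n every prime factor shared with s."""
    g = gcd(n, s)
    while g > 1:
        n //= g
        g = gcd(n, s)
    return n

assert localize_cyclic(12, 2) == 3   # (Z/12)[1/2] = Z/3
assert localize_cyclic(2, 2) == 1    # (Z/2)[1/2] = 0: localization can kill a module
assert localize_cyclic(9, 2) == 9    # no shared prime factors: unchanged
```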

Yes, all rings are unital by definition.
– HeinrichD, Sep 16 '16 at 20:39

Why is $A_s \otimes_{R_s} B_s \neq 0$? The tensor product of two non-trivial rings may be zero. A standard example is $\mathbf{Z}/2 \otimes_{\mathbf{Z}} \mathbf{Z}/3=0$. Obviously, somewhere in the proof the assumptions on $R \to A$ and $R \to B$ have to be used.
– HeinrichD, Sep 16 '16 at 20:40

I was already using the assumption in saying $A_s\neq0, B_s\neq 0$, but clearly I need to say something more.
– Mohan, Sep 16 '16 at 21:00


You are right. So the localization shows that the statement is equivalent to: If $R$ is a commutative reduced ring and $R \to A$, $R \to B$ are injective ring homomorphisms with $A \neq 0$ and $B \neq 0$, then $A \otimes_R B \neq 0$.
– HeinrichD, Sep 16 '16 at 21:03