I'm studying Atiyah's commutative algebra book, and I noticed that at the beginning of the book the author states, as one of the conditions for a map to be a homomorphism, that $f(1)=1$. I would like to know whether I can leave this condition out, i.e., whether $f(x+y)=f(x)+f(y)$ and $f(xy)=f(x)f(y)$ together imply $f(1)=1$.


We can have $f(1)=1$, but we need not have it.
–
user26857 Jan 5 '13 at 12:02

As YACP hints, your formulation of the question is probably not what you really mean: namely, whether the axioms $f(x+y)=f(x)+f(y)$ and $f(xy)=f(x)f(y)$ imply $f(1)=1$. The answer is "no", as akkk and Jyrki point out.
–
Georges Elencwajg Jan 5 '13 at 12:45

The condition $f(1)=1$ is usually included in the definition of a ring homomorphism, when the (IMHO more common) convention that rings must have units is adopted. You want to rule out mappings like
$$
x\mapsto\pmatrix {x&0\cr0&0\cr}
$$
from being a homomorphism from, say, the reals to $2\times 2$ real matrices.

Similar things might otherwise also happen with mappings between commutative rings (Atiyah's context). For example, the mapping $f:x\mapsto 3x$ from $\mathbb{Z}_6$ to itself does satisfy the conditions $f(a+b)=f(a)+f(b)$ and $f(ab)=f(a)f(b)$. The latter condition holds because $f(a)f(b)=9ab=3ab=f(ab)$ in $\mathbb{Z}_6$, which follows from the congruence $3^2\equiv 3\pmod 6$. Note, however, that $f(1)=3\neq 1$.
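The $\mathbb{Z}_6$ example above can be verified by brute force; here is a short sketch (my own check, not part of the original answer):

```python
# Verify that f(x) = 3x on Z/6Z preserves addition and multiplication,
# yet sends 1 to 3 rather than 1.
def f(x):
    return (3 * x) % 6

additive = all(f((a + b) % 6) == (f(a) + f(b)) % 6
               for a in range(6) for b in range(6))
multiplicative = all(f((a * b) % 6) == (f(a) * f(b)) % 6
                     for a in range(6) for b in range(6))

print(additive, multiplicative, f(1))  # True True 3
```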

More generally, if a commutative ring $R$ has an idempotent, i.e. an element $e$ satisfying the relation $e^2=e$, then the mapping $f:x\mapsto xe$ satisfies both conditions
$f(a+b)=f(a)+f(b)$ and $f(ab)=f(a)f(b)$ for all $a,b\in R$ (the second because $f(a)f(b)=abe^2=abe=f(ab)$), while $f(1)=e$, which need not equal $1$.
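To illustrate the idempotent construction concretely, one can enumerate the idempotents of $\mathbb{Z}_6$ and check each induced map $x\mapsto xe$ (a sketch of mine, with names of my own choosing):

```python
# Find all idempotents e in Z/6Z (elements with e*e = e mod 6),
# then check that x -> x*e preserves + and * for each of them.
n = 6
idempotents = [e for e in range(n) if (e * e) % n == e]
print(idempotents)  # [0, 1, 3, 4]

for e in idempotents:
    f = lambda x: (x * e) % n
    assert all(f((a + b) % n) == (f(a) + f(b)) % n
               for a in range(n) for b in range(n))
    assert all(f((a * b) % n) == (f(a) * f(b)) % n
               for a in range(n) for b in range(n))
```

Only $e=1$ gives a unital homomorphism; $e=0$, $e=3$, and $e=4$ give maps that satisfy both conditions but send $1$ elsewhere.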

As you have seen, there are several examples showing that the condition $f(1)=1$ does not follow from the other requirements of a ring homomorphism, not even when $f(1)\neq 0$.

Commenting a little bit on why we (or at least Atiyah) want to rule out mappings between rings that respect the binary operations but don't map 1 to 1: in several parts of algebra the emphasis is on modules over a ring, and there it is essential that 1 acts as the identity mapping on the module. We also want to be able to pull back the module structure as follows. If $M$ is an $S$-module and $f:R\rightarrow S$ is a ring homomorphism, we often want to turn $M$ into an $R$-module by the rule $r*m=f(r)m$. If we didn't know that $f(1_R)=1_S$, then we would need to worry whether multiplication by $1_R$ is the identity mapping on $M$.
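Here is a toy illustration of that failure (my own sketch, not from the answer): pull back scalars along the non-unital map $f:x\mapsto 3x$ on $\mathbb{Z}_6$, taking $M=\mathbb{Z}_6$ as a module over itself.

```python
# Pull back the module structure along f: Z/6Z -> Z/6Z, x -> 3x,
# via the rule r*m = f(r)*m. Since f(1) = 3 != 1, the element 1
# does NOT act as the identity on M.
def f(r):
    return (3 * r) % 6

def act(r, m):  # the pulled-back action r*m = f(r)m
    return (f(r) * m) % 6

print([act(1, m) for m in range(6)])  # [0, 3, 0, 3, 0, 3]
```

So under this "action" multiplication by 1 collapses $M$ onto $\{0,3\}$ instead of fixing it, which is exactly the pathology the axiom $f(1)=1$ rules out.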

There may be several other reasons. The above is the first that occurred to me, because it really is everywhere in applications of modules. I suppose there are contexts where this argument is not pressing, and there you can choose to work with a different definition.

Could you add a word on why we want to rule out such mappings?
–
akkkk Jan 5 '13 at 13:08

@akkkk Universal algebra: "variety"/"quasivariety". Logic/model theory: "model homomorphism", "free model", "set of Horn clauses". OK, I somehow get the impression that I'm unable to convey this information with a single word. I probably have to write a complete answer with full details.
–
Thomas Klimpel Jan 5 '13 at 14:47

But looking at it from a logical point of view, a homomorphism is a function that preserves the interpretation of the language. In the language of unital rings we have two constants, $0$ and $1$, and two operations, $+$ and $\cdot$. If we require that $f$ preserve the interpretations, then $f(1_R)=1_S$ is an actual requirement.

Write $a=f(1)$. Then $a=f(1\cdot 1)=f(1)^2=a^2$, so $a(a-1)=0$. If $a$ is not a zero divisor, then $a=0$ or $a-1=0$; since we assumed $a\neq 0$, we get $a=1$. Likewise, if some $f(s)$ is a unit in the ring $S$, then from $f(s)=f(s)f(1)$ you can simply cancel $f(s)$, so again $a=1$.
–
Ram Jan 6 '13 at 3:35

When discussing rings with unity (i.e. the category of unital rings, usually denoted Ring), authors usually require homomorphisms to preserve the unit element. When discussing rings that aren't necessarily unital (i.e. the category of "rngs", usually denoted Rng) authors usually don't require homomorphisms to preserve the unit element (should it exist).