Speech is the vocalized form of human communication. It is based upon the syntactic combination of lexical words and names drawn from very large vocabularies (usually about 10,000 different words). Each spoken word is created out of the phonetic combination of a limited set of vowel and consonant speech sound units. These vocabularies, the syntax that structures them, and their sets of speech sound units differ, giving rise to many thousands of mutually unintelligible human languages. Most human speakers are able to communicate in two or more of them,[1] hence being polyglots. The vocal abilities that enable humans to produce speech also provide humans with the ability to sing.

A gestural form of human communication exists for the deaf in the form of sign language. Speech in some cultures has become the basis of a written language, often one that differs in its vocabulary, syntax, and phonetics from its associated spoken one, a situation called diglossia. In addition to its use in communication, some psychologists such as Vygotsky have suggested that speech is used internally by mental processes to enhance and organize cognition, in the form of an interior monologue.

Speech perception refers to the processes by which humans are able to interpret and understand the sounds used in language. The study of speech perception is closely linked to the fields of phonetics and phonology in linguistics, and to cognitive psychology and perception in psychology. Research in speech perception seeks to understand how human listeners recognize speech sounds and use this information to understand spoken language. Speech research has applications in building computer systems that can recognize speech, as well as in improving speech recognition for hearing- and language-impaired listeners.[2]

Spoken vocalizations are quickly turned from sensory inputs into motor instructions needed for their immediate or delayed (in phonological memory) vocal imitation. This occurs independently of speech perception. This mapping plays a key role in enabling children to expand their spoken vocabulary and hence the ability of human language to transmit across generations.[3]

Speech is a complex activity; as a result, errors are often made in speech. Speech errors have been analyzed by scientists to understand the nature of the processes involved in the production of speech.

Two areas of the cerebral cortex are necessary for speech. Broca's area, named after its discoverer, the French neurologist Paul Broca (1824-1880), is in the frontal lobe, usually on the left, near the motor cortex controlling the muscles of the lips, jaws, soft palate, and vocal cords. When it is damaged by a stroke or injury, comprehension is unaffected but speech is slow and labored, and the sufferer talks in "telegramese". Wernicke's area, discovered in 1874 by the German neurologist Carl Wernicke (1848-1905), lies toward the back of the temporal lobe, again usually on the left, near the areas receiving auditory and visual information. Damage to it destroys comprehension: the sufferer speaks fluently but nonsensically. Some researchers have explored the connections between brain physiology, neuroscience, and other elements of physiology on the one hand and communication on the other. Communibiology, first proposed by Beatty and McCroskey, addresses these issues and presents a set of specific axioms about these phenomena.