pujangga malam

welcome to my blog

English Linguistics

Vowel
In phonetics, a vowel is a sound in spoken language, such as English ah! [ɑː] or oh! [oʊ], pronounced with an open vocal tract so that there is no build-up of air pressure at any point above the glottis. This contrasts with consonants, such as English sh! [ʃː], where there is a constriction or closure at some point along the vocal tract. A vowel is also understood to be syllabic: an equivalent open but non-syllabic sound is called a semivowel.

In all languages, vowels form the nucleus or peak of syllables, whereas consonants form the onset and (in languages which have them) coda. However, some languages also allow other sounds to form the nucleus of a syllable, such as the syllabic l in the English word table [ˈteɪ.bl̩] (the stroke under the l indicates that it is syllabic; the dot separates syllables), or the r in Serbian vrt [vr̩t] "garden".

There is a conflict between the phonetic definition of 'vowel' (a sound produced with no constriction in the vocal tract) and the phonological definition (a sound that forms the peak of a syllable). The approximants [j] and [w] illustrate this conflict: both are produced without much of a constriction in the vocal tract (so phonetically they seem to be vowel-like), but they occur on the edge of syllables, such as at the beginning of the English words 'yes' and 'wet' (which suggests that phonologically they are consonants). The American linguist Kenneth Pike suggested the terms 'vocoid' for a phonetic vowel and 'vowel' for a phonological vowel, so using this terminology, [j] and [w] are classified as vocoids but not vowels.

The word vowel comes from the Latin word vocalis, meaning "speaking", because in most languages words and thus speech are not possible without vowels. Vowel is commonly used to mean both vowel sounds and the written symbols that represent them.

Articulation
The articulatory features that distinguish different vowel sounds are said to determine the vowel's quality. Daniel Jones developed the cardinal vowel system to describe vowels in terms of the common features height (vertical dimension), backness (horizontal dimension) and roundedness (lip position). These three parameters are the ones plotted in the standard schematic IPA vowel diagram. There are, however, still more possible features of vowel quality, such as velum position (nasality), type of vocal fold vibration (phonation), and tongue root position.

Monophthong, Diphthong, Triphthong, and Semivowel

A vowel sound whose quality doesn't change over the duration of the vowel is called a monophthong. Monophthongs are sometimes called "pure" or "stable" vowels. A vowel sound that glides from one quality to another is called a diphthong, and a vowel sound that glides successively through three qualities is a triphthong.
All languages have monophthongs and many languages have diphthongs, but triphthongs or vowel sounds with even more target qualities are relatively rare cross-linguistically. English has all three types: the vowel sound in hit is a monophthong /ɪ/, the vowel sound in boy is in most dialects a diphthong /ɔɪ/, and the vowel sounds of flower, /aʊər/, form a triphthong or disyllable, depending on dialect.
In phonology, diphthongs and triphthongs are distinguished from sequences of monophthongs by whether the vowel sound may be analyzed into different phonemes or not. For example, the vowel sounds in a two-syllable pronunciation of the word flower (/ˈflaʊər/) phonetically form a disyllabic triphthong, but are phonologically a sequence of a diphthong (represented by the letters <ow>) and a monophthong (represented by the letters <er>). Some linguists use the terms diphthong and triphthong only in this phonemic sense.


Use of vowels in Language

The importance of vowels in distinguishing one word from another varies from language to language. The alphabets used to write the Semitic languages, such as the Hebrew alphabet and the Arabic alphabet, do not ordinarily mark all the vowels, since they are frequently unnecessary in identifying a word. These alphabets are technically called abjads. Although it is possible to construct simple English sentences that can be understood without written vowels (cn y rd ths?), extended passages of English lacking written vowels can be difficult to understand (consider dd, which could be any of add, aided, dad, dada, dead, deed, did, died, diode, dodo, dud, dude, eddie, iodide, or odd).

In most languages, vowels serve mainly to distinguish separate lexemes, rather than different inflectional forms of the same lexeme as they commonly do in the Semitic languages. For example, while English man becomes men in the plural, moon is an entirely different word, not another form of the same lexeme. Vowels are especially important to the structures of words in languages that have very few consonants, like Polynesian languages such as Maori and Hawaiian, and in languages whose inventories of vowels are larger than their inventories of consonants. Nearly all languages have at least three phonemic vowels, usually [i], [a], [u] as in Classical Arabic and Inuktitut (or [æ], [ɪ], [ʊ] as in Quechua), though Adyghe and many Sepik languages have a vertical vowel system of [ɨ], [ə], [a]. Very few languages have fewer, though some Arrernte, Circassian and Ndu languages have been argued to have just two, [ə] and [a], with [ɨ] being epenthetic.

The rarest vowels cataloged are [ɜ] (so far cataloged only in Paicĩ and English) and [ʊ̈] (Early Modern English and Russian).

It is not possible to say which language has the most vowels, since that depends on how they are counted. For example, long vowels, nasal vowels, and various phonations may or may not be counted separately; indeed, it may sometimes be unclear whether phonation belongs to the vowels or the consonants of a language. If such things are ignored and only vowels with dedicated IPA letters ('vowel qualities') are considered, then very few languages have more than ten. The Germanic languages have some of the largest inventories: Standard Swedish has seventeen contrasting simple vowels, nine long and eight short (/iː eː ɛː ɑː oː uː ʉ̟ː yː øː ɪ ɛ a ɔ ʊ ɵ ʏ œ/), while the Amstetten dialect of Bavarian has been reported to have thirteen long vowels: /iː yː eː øː ɛː œː æː ɶː aː ɒː ɔː oː uː/. The Mon-Khmer languages of Southeast Asia also have some large inventories, such as the eleven vowels of Vietnamese: /i e ɛ æ ɑ ʌ ɔ ɤ o ɯ u/. Wu has the largest vowel inventories among the Chinese varieties; the Jinhui dialect of Wu (金汇方言) has also been reported to have eleven vowels: ten normal vowels, /a e ɯ ɨ i ɞ ɵ ø u y/, plus the restricted /ɿ/.

One of the most common vowels is [a̠]; it is nearly universal for a language to have at least one open vowel, though most dialects of English have an [æ] and an [ɑ], and often an [ɒ] (all open vowels), but no central [a]. Some Tagalog and Cebuano speakers have [ɐ] rather than [a], and Dhangu Yolngu is described as having /ɪ ɐ ʊ/, without any peripheral vowels. [i] is also extremely common, though Quileute has [eː], [æː], [aː], [oː] without any close vowels, at least as they are pronounced when long. The third vowel of the Arabic-type three-vowel system, /u/, is considerably less common. A large fraction of the languages of North America happen to have a four-vowel system without /u/: /i, e, a, o/; Aztec is an example.

Consonants
In articulatory phonetics, a consonant is a speech sound that is articulated with complete or partial closure of the vocal tract. Examples are [p], pronounced with the lips; [t], pronounced with the front of the tongue; [k], pronounced with the back of the tongue; [h], pronounced in the throat; [f] and [s], which are noisy (fricatives); and [m] and [n], which have air flowing through the nose (nasals). Contrasting with consonants are vowels.

Since the number of consonants in the world's languages is much greater than the number of consonant letters in any one alphabet, linguists have devised systems such as the International Phonetic Alphabet (IPA) to assign a unique symbol to each attested consonant. In fact, the Latin alphabet, which is used to write English, has fewer consonant letters than English has consonant sounds, so digraphs like "ch", "sh", "th", and "zh" are used to extend the alphabet, and some letters and digraphs represent more than one consonant. For example, many speakers are not aware that the sound spelled "th" in "this" is a different consonant than the "th" sound in "thing". (In the IPA they are transcribed [ð] and [θ], respectively.)

The word consonant is also used to refer to a letter of an alphabet that denotes a consonant sound. Consonant letters in the English alphabet are B, C, D, F, G, H, J, K, L, M, N, P, Q, R, S, T, V, X, Z, and usually W and Y: the letter Y stands for the consonant [j] in "yoke" and for the vowel [ɪ] in "myth", for example, while W is almost always a consonant except in rare words (mostly loanwords from Welsh) like "crwth" and "cwm".

Each spoken consonant can be distinguished by several phonetic features:

* The manner of articulation is the way in which the consonant is articulated, such as nasal (through the nose), stop (complete obstruction of air), or approximant (vowel-like).
* The place of articulation is where in the vocal tract the obstruction of the consonant occurs, and which speech organs are involved. Places include bilabial (both lips), alveolar (tongue against the gum ridge), and velar (tongue against soft palate). Additionally, there may be a simultaneous narrowing at another place of articulation, such as palatalisation or pharyngealisation.
* The phonation of a consonant is how the vocal cords vibrate during the articulation. When the vocal cords vibrate fully, the consonant is called voiced; when they do not vibrate at all, it's voiceless.
* The voice onset time (VOT) indicates the timing of the phonation. Aspiration is a feature of VOT.
* The airstream mechanism is how the air moving through the vocal tract is powered. Most languages have exclusively pulmonic egressive consonants, which use the lungs and diaphragm, but ejectives, clicks and implosives use different mechanisms.
* The length is how long the obstruction of a consonant lasts. This feature is borderline distinctive in English, as in "wholly" [hoʊlli] vs. "holy" [hoʊli], but cases are limited to morpheme boundaries. Unrelated roots are differentiated in various languages such as Italian, Japanese and Finnish, with two length levels, "single" and "geminate". Estonian and some Sami languages have three phonemic lengths: short, geminate, and long geminate, although the distinction between the geminate and overlong geminate includes suprasegmental features.
* The articulatory force is how much muscular energy is involved. This has been proposed many times, but no distinction relying exclusively on force has ever been demonstrated.

All English consonants can be classified by a combination of these features, such as "voiceless alveolar stop consonant" [t]. In this case the airstream mechanism is omitted.
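As a rough illustration of how these features combine into labels like "voiceless alveolar stop", here is a minimal sketch in Python; the small feature table is an assumption chosen for illustration, not an exhaustive inventory of English consonants.

```python
# A small sketch of classifying consonants by voicing, place and manner.
# The feature values below are the standard ones for these consonants;
# the tuple representation is just one convenient, illustrative encoding.
CONSONANTS = {
    ("voiceless", "alveolar", "stop"):      "t",
    ("voiced",    "alveolar", "stop"):      "d",
    ("voiceless", "bilabial", "stop"):      "p",
    ("voiced",    "bilabial", "stop"):      "b",
    ("voiceless", "velar",    "stop"):      "k",
    ("voiceless", "alveolar", "fricative"): "s",
    ("voiced",    "alveolar", "nasal"):     "n",
}

def describe(symbol):
    """Return the feature description of an IPA consonant symbol."""
    for (voicing, place, manner), ipa in CONSONANTS.items():
        if ipa == symbol:
            return f"{voicing} {place} {manner}"
    return "unknown"

print(describe("t"))   # voiceless alveolar stop
```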

Some pairs of consonants like p::b, t::d are sometimes called fortis and lenis, but this is a phonological rather than phonetic distinction.  

Phonotactics  

Phonotactics (in Greek phone = voice and taktikos = something which may be arranged or ordered) is a branch of phonology that deals with restrictions in a language on the permissible combinations of phonemes. Phonotactics defines permissible syllable structure, consonant clusters, and vowel sequences by means of phonotactical constraints.

Phonotactic constraints are language specific. For example, in Japanese, consonant clusters like /st/ are not allowed, although they are in English. Similarly, the sounds /kn/ and /ɡn/ are not permitted at the beginning of a word in Modern English but are in German and Dutch, and were permitted in Old and Middle English.

Syllables have the following internal segmental structure:

* Onset (optional)
* Rime (obligatory, comprises Nucleus and Coda):
  - Nucleus (obligatory)
  - Coda (optional)

Both onset and coda may be empty, forming a vowel-only syllable, or alternatively, the nucleus can be occupied by a syllabic consonant.

The English syllable (and word) twelfths /twɛlfθs/ is divided into the onset /tw/, the nucleus /ɛ/, and the coda /lfθs/, and it can thus be described as CCVCCCC (C = consonant, V = vowel). On this basis it is possible to form rules for which phoneme classes may fill each position in the cluster. For instance, English allows at most three consonants in an onset, but among native words under standard accents, three-consonant onsets are limited to the following scheme:

/s/ + stop (or the nasal /m/) + approximant:

* /s/ + /m/ + /j/
* /s/ + /t/ + /j ɹ/
* /s/ + /p/ + /j ɹ l/
* /s/ + /k/ + /j ɹ l w/
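A minimal sketch of this constraint, assuming a simple tuple-based representation of syllables and encoding only the scheme listed above (the helper names and the transcription over IPA strings are illustrative assumptions):

```python
from collections import namedtuple

# Syllable structure as onset + nucleus + coda, with the "twelfths" example
# from above, and a check of three-consonant onsets against the /s/-scheme.
Syllable = namedtuple("Syllable", ["onset", "nucleus", "coda"])

twelfths = Syllable(onset=("t", "w"), nucleus="ɛ", coda=("l", "f", "θ", "s"))

# Which approximants may follow /s/ + each second consonant, per the list above.
THREE_CONSONANT_ONSETS = {
    "m": {"j"},
    "t": {"j", "ɹ"},
    "p": {"j", "ɹ", "l"},
    "k": {"j", "ɹ", "l", "w"},
}

def onset_is_permitted(onset):
    """Check an onset against the three-consonant scheme sketched above."""
    if len(onset) <= 2:
        return True                      # shorter onsets are not covered here
    if len(onset) == 3 and onset[0] == "s":
        second, third = onset[1], onset[2]
        return third in THREE_CONSONANT_ONSETS.get(second, set())
    return False

print(onset_is_permitted(twelfths.onset))      # True  (only two consonants)
print(onset_is_permitted(("s", "t", "ɹ")))     # True  (as in "strap")
print(onset_is_permitted(("b", "l", "j")))     # False (the *[blj] of "blue")
```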

This constraint can be observed in the pronunciation of the word blue: originally, the vowel of blue was identical to the vowel of cue, approximately [iw]. In most dialects of English, [iw] shifted to [juː]. Theoretically, this would produce *[bljuː]. The cluster [blj], however, infringes the constraint for three-consonantal onsets in English. Therefore, the pronunciation has been reduced to [bluː] by elision of the [j].

Other languages don't share the same constraint: compare Spanish pliegue [ˈpljeɣe] or French pluie [plɥi].

Most languages of the world syllabify CVCV and CVCCV sequences as /CV.CV/ and /CVC.CV/ or /CV.CCV/, with consonants preferentially acting as the onset of a syllable containing the following vowel. According to one view, English is unusual in this regard, in that stressed syllables attract following consonants, so that ˈCVCV and ˈCVCCV syllabify as /ˈCVC.V/ and /ˈCVCC.V/, as long as the consonant cluster CC is a possible syllable coda. In addition, according to this view, /r/ preferentially syllabifies with the preceding vowel even when both syllables are unstressed, so that CVrV occurs as /CVr.V/. However, many scholars do not agree with this view.

Syllable structure  



The syllable structure in English is (C)3V(C)5, with a near-maximal example being strengths (/ˈstrɛŋkθs/, although it can be pronounced /ˈstrɛŋθs/). Because of an extensive pattern of articulatory overlap, English speakers rarely produce an audible release in consonant clusters. This can lead to cross-articulations that seem very much like deletions or complete assimilations. For example, hundred pounds may sound like [hʌndɹɛb pʰaʊndz], but X-ray and electropalatographic studies demonstrate that inaudible and possibly weakened contacts may still be made, so that the second /d/ in hundred pounds does not entirely assimilate to a labial place of articulation; rather, the labial closure co-occurs with the alveolar one.

When a stressed syllable contains a pure vowel (rather than a diphthong), followed by a single consonant and then another vowel, as in holiday, many native speakers feel that the consonant belongs to the preceding stressed syllable, /ˈhɒl.ɨ.deɪ/. However, when the stressed vowel is a long vowel or diphthong, as in admiration or pekoe, speakers agree that the consonant belongs to the following syllable: /ˈæd.mɨ.ˈreɪ.ʃən/, /ˈpiː.koʊ/. Wells (1990) notes that consonants syllabify with the preceding rather than following vowel when the preceding vowel is the nucleus of a more salient syllable, with stressed syllables being the most salient, reduced syllables the least, and secondary stress / full unstressed vowels intermediate.

But there are lexical differences as well, frequently with compound words but not exclusively. For example, in dolphin and selfish, he argues that the stressed syllable ends in /lf/; in shellfish, the /f/ belongs with the following syllable: /ˈdɒlf.ɪn/, /ˈsɛlf.ɪʃ/ → [ˈdɒlfɨn], [ˈsɛlfɨʃ] vs /ˈʃɛl.fɪʃ/ → [ˈʃɛlˑfɪʃ], where the /l/ is a little longer and the /ɪ/ not reduced. Similarly, in toe-strap the /t/ is a full plosive, as usual in syllable onset, whereas in toast-rack the /t/ is in many dialects reduced to the unreleased allophone it takes in syllable codas, or even elided: /ˈtoʊ.stræp/, /ˈtoʊst.ræk/ → [ˈtʰoˑʊstɹæp], [ˈtoʊs(t̚)ɹʷæk]; likewise nitrate /ˈnaɪ.treɪt/ → [ˈnʌɪtɹ̥ʷeɪt] with a voiceless /r/, vs night-rate /ˈnaɪt.reɪt/ → [ˈnʌɪt̚ɹʷeɪt] with a voiced /r/.

Cues of syllable boundaries include aspiration of syllable onsets and (in the US) flapping of coda /t, d/ (a tease /ə.ˈtiːz/ → [əˈtʰiːz] vs. at ease /æt.ˈiːz/ → [æɾˈiːz]), epenthetic plosives like [t] in syllable codas (fence /ˈfɛns/ → [ˈfɛnts] but inside /ɪn.ˈsaɪd/ → [ɪnˈsaɪd]), and r-colored vowels when the /r/ is in the coda vs. labialization when it is in the onset (key-ring /ˈkiː.rɪŋ/ → [ˈkʰiːɹʷɪŋ] but fearing /ˈfiːr.ɪŋ/ → [ˈfɪəɹɪŋ]).



(Taken from Wikipedia and a summary of Peter Ladefoged's book "A Course in Phonetics")

Acoustic Phonetics
Acoustic phonetics is a subfield of phonetics which deals with acoustic aspects of speech sounds. Acoustic phonetics investigates properties like the mean squared amplitude of a waveform, its duration, its fundamental frequency, or other properties of its frequency spectrum, and the relationship of these properties to other branches of phonetics (e.g. articulatory or auditory phonetics), and to abstract linguistic concepts like phones, phrases, or utterances.

The study of acoustic phonetics was greatly enhanced in the late 19th century by the invention of the Edison phonograph. The phonograph allowed the speech signal to be recorded and then later processed and analyzed. By replaying the same speech signal from the phonograph several times, filtering it each time with a different band-pass filter, a spectrogram of the speech utterance could be built up. A series of papers by Ludimar Hermann published in Pflüger's Archiv in the last two decades of the 19th century investigated the spectral properties of vowels and consonants using the Edison phonograph, and it was in these papers that the term formant was first introduced. Hermann also played back vowel recordings made with the Edison phonograph at different speeds to distinguish between Willis' and Wheatstone's theories of vowel production.

Further advances in acoustic phonetics were made possible by the development of the telephone industry. (Incidentally, Alexander Graham Bell's father, Alexander Melville Bell, was a phonetician.) During World War II, work at the Bell Telephone Laboratories (which invented the spectrograph) greatly facilitated the systematic study of the spectral properties of periodic and aperiodic speech sounds, vocal tract resonances and vowel formants, voice quality, prosody, etc.

Human voice
The human voice consists of sound made by a human being using the vocal folds for talking, singing, laughing, crying, screaming, etc. The human voice is specifically that part of human sound production in which the vocal folds (vocal cords) are the primary sound source. Generally speaking, the mechanism for generating the human voice can be subdivided into three parts: the lungs, the vocal folds within the larynx, and the articulators. The lungs (the pump) must produce adequate airflow and air pressure to vibrate the vocal folds (this air pressure is the fuel of the voice). The vocal folds (vocal cords) are a vibrating valve that chops up the airflow from the lungs into audible pulses that form the laryngeal sound source. The muscles of the larynx adjust the length and tension of the vocal folds to ‘fine tune’ pitch and tone. The articulators (the parts of the vocal tract above the larynx, consisting of tongue, palate, cheek, lips, etc.) articulate and filter the sound emanating from the larynx and to some degree can interact with the laryngeal airflow to strengthen or weaken it as a sound source.


Morphology         
Morphology is the identification, analysis and description of the structure of words (words as units in the lexicon are the subject matter of lexicology). While words are generally accepted as being (with clitics) the smallest units of syntax, it is clear that in most (if not all) languages, words can be related to other words by rules. For example, English speakers recognize that the words dog, dogs, and dog catcher are closely related. English speakers recognize these relations from their tacit knowledge of the rules of word formation in English. They infer intuitively that dog is to dogs as cat is to cats; similarly, dog is to dog catcher as dish is to dishwasher (in one sense). The rules understood by the speaker reflect specific patterns (or regularities) in the way words are formed from smaller units and how those smaller units interact in speech. In this way, morphology is the branch of linguistics that studies patterns of word formation within and across languages, and attempts to formulate rules that model the knowledge of the speakers of those languages. 



Inflection vs. Word Formation
Given the notion of a lexeme, it is possible to distinguish two kinds of morphological rules. Some morphological rules relate to different forms of the same lexeme; while other rules relate to different lexemes. Rules of the first kind are called inflectional rules, while those of the second kind are called word formation. The English plural, as illustrated by dog and dogs, is an inflectional rule; compounds like dog catcher or dishwasher provide an example of a word formation rule. Informally, word formation rules form "new words" (that is, new lexemes), while inflection rules yield variant forms of the "same" word (lexeme).

There is a further distinction between two kinds of word formation: derivation and compounding. Compounding is a process of word formation that involves combining complete word forms into a single compound form; dog catcher is therefore a compound, because both dog and catcher are complete word forms in their own right before the compounding process has been applied, and are subsequently treated as one form. Derivation involves affixing bound (non-independent) forms to existing lexemes, whereby the addition of the affix derives a new lexeme. One example of derivation is clear in this case: the word independent is derived from the word dependent by prefixing it with the derivational prefix in-, while dependent itself is derived from the verb depend.

The distinction between inflection and word formation is not at all clear cut. There are many examples where linguists fail to agree whether a given rule is inflection or word formation. The next section will attempt to clarify this distinction.

Word formation, as we have said in the case of compounding, is a process where you combine two complete words, whereas with inflection you combine a suffix with a verb to change its form so that it agrees with the subject of the sentence. For example: in the present indefinite, we use ‘go’ with the subjects I/we/you/they and with plural nouns, whereas for third person singular pronouns (he/she/it) and singular nouns we use ‘goes’. So this ‘-es’ is an inflectional marker, used to make the verb agree with its subject. A further difference is that in word formation the resultant word may differ from its source word’s grammatical category, whereas in the process of inflection the word never changes its grammatical category.
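A rough sketch of the contrast in code, assuming a toy present_tense rule for regular verbs and a toy compound helper (both hypothetical and heavily simplified):

```python
# Inflection: a rule that yields another form of the SAME lexeme (go -> goes),
# chosen to agree with the subject.  Word formation (here, compounding): a
# rule that yields a NEW lexeme (dog + catcher -> dog catcher).

def present_tense(verb, subject):
    """Pick the inflected form of a regular verb to agree with its subject."""
    third_singular = {"he", "she", "it"}
    return verb + "es" if subject in third_singular else verb

def compound(word1, word2):
    """Form a new lexeme by compounding two complete words."""
    return word1 + " " + word2

print(present_tense("go", "she"))      # goes
print(present_tense("go", "they"))     # go
print(compound("dog", "catcher"))      # dog catcher
```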

Paradigms and Morphosyntax
A linguistic paradigm is the complete set of related word forms associated with a given lexeme. The familiar examples of paradigms are the conjugations of verbs, and the declensions of nouns. Accordingly, the word forms of a lexeme may be arranged conveniently into tables, by classifying them according to shared inflectional categories such as tense, aspect, mood, number, gender or case. For example, the personal pronouns in English can be organized into tables, using the categories of person (1st., 2nd., 3rd.), number (singular vs. plural), gender (masculine, feminine, neuter), and case (subjective, objective, and possessive). See English personal pronouns for the details.
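For instance, a paradigm can be encoded as a table keyed by shared inflectional categories; the sketch below lays out the present-tense forms of the verb BE by person and number (the dictionary layout is just one convenient, illustrative encoding):

```python
# The paradigm of present-tense BE, keyed by (person, number).
BE_PRESENT = {
    ("1st", "singular"): "am",
    ("2nd", "singular"): "are",
    ("3rd", "singular"): "is",
    ("1st", "plural"):   "are",
    ("2nd", "plural"):   "are",
    ("3rd", "plural"):   "are",
}

print(BE_PRESENT[("3rd", "singular")])   # is
```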

The inflectional categories used to group word forms into paradigms cannot be chosen arbitrarily; they must be categories that are relevant to stating the syntactic rules of the language. For example, person and number are categories that can be used to define paradigms in English, because English has grammatical agreement rules that require the verb in a sentence to appear in an inflectional form that matches the person and number of the subject. In other words, the syntactic rules of English care about the difference between dog and dogs, because the choice between these two forms determines which form of the verb is to be used. In contrast, however, no syntactic rule of English cares about the difference between dog and dog catcher, or dependent and independent. The first two are just nouns, and the second two just adjectives, and they generally behave like any other noun or adjective behaves.

An important difference between inflection and word formation is that inflected word forms of lexemes are organized into paradigms, which are defined by the requirements of syntactic rules, whereas the rules of word formation are not restricted by any corresponding requirements of syntax. Inflection is therefore said to be relevant to syntax, and word formation is not. The part of morphology that covers the relationship between syntax and morphology is called morphosyntax, and it concerns itself with inflection and paradigms, but not with word formation or compounding.

Affixation
An affix is a morpheme that is attached to a word stem to form a new word. Affixes may be derivational, like English -ness and pre-, or inflectional, like English plural -s and past tense -ed. They are bound morphemes by definition; prefixes and suffixes may be separable affixes. Affixation is, thus, the linguistic process speakers use to form new words (neologisms) by adding morphemes (affixes) at the beginning (prefixation), the middle (infixation) or the end (suffixation) of words.

Category of Affixation   

1. A prefix is an affix which is placed before the stem of a word. Particularly in the study of Semitic languages, a prefix is called a preformative, because it alters the form of the words to which it is affixed.

Examples of prefixes:

* unhappy : un is a negative or antonymic prefix.
* prefix, preview : pre is a prefix, with the sense of before
* redo, review : re is a prefix meaning again.

The word prefix is itself made up of the stem fix (meaning attach, in this case), and the prefix pre- (meaning "before"), both of which are derived from Latin roots.

2. An infix is an affix inserted inside a stem (an existing word). It contrasts with adfix, a rare term for an affix attached to the outside of a stem, such as a prefix or suffix. English has very few true infixes (as opposed to tmesis, see below), and those it does have are marginal. A few are heard in colloquial speech, and a couple more are found in technical terminology.

* The infix ‹iz› or ‹izn› is characteristic of hip-hop slang, for example hizouse for house and shiznit for shit. Infixes also occur in some language games.
* The ‹ma› infix, whose location in the word is described in Yu (2004), gives a word an ironic pseudo-sophistication, as in sophistimacated, saxomaphone, and edumacation.
* Chemical nomenclature includes the infixes ‹pe›, signifying complete hydrogenation (from piperidine), and ‹et› (from ethyl), signifying the ethyl radical C2H5. Thus from the existing word picoline is derived pipecoline, and from lutidine is derived lupetidine; from phenidine and xanthoxylin are derived phenetidine and xanthoxyletin.

3. Suffix. In linguistics, a suffix (also sometimes called a postfix or ending) is an affix which is placed after the stem of a word. Common examples are case endings, which indicate the grammatical case of nouns or adjectives, and verb endings, which form the conjugation of verbs. Particularly in the study of Semitic languages, a suffix is called an afformative, as it can alter the form of the words to which it is affixed. In Indo-European studies, a distinction is made between suffixes and endings (see Proto-Indo-European root).

Suffixes can carry grammatical information (inflectional suffixes), or lexical information (derivational suffixes). An inflectional suffix is sometimes called a desinence.

Some examples from English:

Girls, where the suffix -s marks the plural.
He makes, where the suffix -s marks the third person singular present tense.
It closed, where the suffix -ed marks the past tense.

Inflectional suffix
Inflection changes grammatical properties of a word within its syntactic category. In the example:

The weather forecaster said it would clear today, but it hasn't cleared at all.

the suffix -ed inflects the root-word clear to indicate past tense.

Some inflectional suffixes in present day English:

* -s third person singular present
* -ed past tense
* -ing progressive/continuous
* -en past participle
* -s plural
* -en plural (irregular)
* -er comparative
* -est superlative
* -n't negative

Derivational suffixes
In the example:

"The weather forecaster said it would be clear today, but I can't see clearly at all"

the suffix -ly modifies the root-word clear from an adjective into an adverb. Derivation can also form a semantically distinct word within the same syntactic category. In this example:

"The weather forecaster said it would be a clear day today, but I think it's more like clearish!"

the suffix -ish modifies the root-word clear, changing its meaning to "clear, but not very clear".
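A small sketch of this idea, using a hypothetical suffix table that records the source and target categories of a few derivational suffixes (drawn from the clear examples above, plus -ness):

```python
# Each derivational suffix maps words of one category to words of another
# category, or to a semantically distinct word of the same category.
SUFFIXES = {
    "-ly":   ("adjective", "adverb"),     # clear -> clearly
    "-ness": ("adjective", "noun"),       # clear -> clearness
    "-ish":  ("adjective", "adjective"),  # clear -> clearish (same category)
}

def derive(word, category, suffix):
    """Attach a derivational suffix and report the resulting category."""
    source, target = SUFFIXES[suffix]
    assert category == source, "suffix does not apply to this category"
    return word + suffix.lstrip("-"), target

print(derive("clear", "adjective", "-ly"))    # ('clearly', 'adverb')
print(derive("clear", "adjective", "-ish"))   # ('clearish', 'adjective')
```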

Some derivational suffixes in present day English:

* -ize/-ise
* -fy
* -ly
* -able/-ible
* -ful
* -ness
* -less
* -ism
* -ment
* -ist
* -al
* -ish

4. Confix. Words can be simple or complex. A simple word consists solely of a base or root word, which cannot be broken down into smaller units. A base carries the essential meaning of a word.

A word can also consist of a base with one or more affixes. There are three types of affix in Indonesian: prefixes, suffixes and circumfixes or confixes:

- a prefix is attached before the base,
- a suffix comes after the base and
- a circumfix or confix contains two parts, one occurring before the base and one after. Example: unhelpful, where un- is a prefix and -ful is a suffix.

However, not all base words can be combined with affixes, nor are they always consistent in their subsequent usage and meaning.

Morpheme in Morphology
In morpheme-based morphology, a morpheme is the smallest linguistic unit that has semantic meaning. In spoken language, morphemes are composed of phonemes (the smallest linguistically distinctive units of sound), and in written language morphemes are composed of graphemes (the smallest units of written language).
The concept morpheme differs from the concept word, as many morphemes cannot stand as words on their own. A morpheme is free if it can stand alone, or bound if it is used exclusively alongside a free morpheme. Its actual phonetic representation is the morph, with the different morphs representing the same morpheme being grouped as its allomorphs.

English example:

The word "unbreakable" has three morphemes: "un-", a bound morpheme; "break", a free morpheme; and "-able", a bound morpheme. "un-" is also a prefix, "-able" is a suffix. Both "un-" and "-able" are affixes.

The plural morpheme -s has the morph "-s", /s/, in cats (/kæts/), but "-es", /ɨz/, in dishes (/dɪʃɨz/), and even the voiced "-s", /z/, in dogs (/dɒgz/). These are allomorphs.
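The choice among these allomorphs can be sketched as a simple rule conditioned on the final sound of the stem; the sketch below works over spelling rather than phonemes and is deliberately simplified:

```python
# Simplified sound classes, stated over spelling for illustration only.
SIBILANT_ENDINGS = ("s", "z", "sh", "ch", "x")
VOICELESS_ENDINGS = ("p", "t", "k", "f", "th")

def plural_allomorph(word):
    """Choose /-ɨz/, /-s/ or /-z/ for a regular English noun (simplified)."""
    if word.endswith(SIBILANT_ENDINGS):
        return "/-ɨz/"
    if word.endswith(VOICELESS_ENDINGS):
        return "/-s/"
    return "/-z/"

print(plural_allomorph("dish"))   # /-ɨz/
print(plural_allomorph("cat"))    # /-s/
print(plural_allomorph("dog"))    # /-z/
```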

Types of Morpheme in English

* Free morphemes like town and dog can appear with other lexemes (as in town hall or dog house) or they can stand alone, i.e. "free".
* Bound morphemes like "un-" appear only together with other morphemes to form a lexeme. Bound morphemes in general tend to be prefixes and suffixes. Unproductive, non-affix morphemes that exist only in bound form are known as "cranberry" morphemes, from the "cran" in that very word.
* Derivational morphemes can be added to a word to create (derive) another word: the addition of "-ness" to "happy," for example, to give "happiness." They carry semantic information.
* Inflectional morphemes modify a word's tense, number, aspect, and so on, without deriving a new word or a word in a new grammatical category (as when the morpheme "dog" combines with the plural marker morpheme "-s" to become "dogs"). They carry grammatical information.
* Allomorphs are variants of a morpheme, e.g. the plural marker in English is sometimes realized as /-z/, /-s/ or /-ɨz/.

Explanation:
* In morphology, a bound morpheme is a morpheme that cannot stand alone as an independent word. A free morpheme is one which can stand alone.

Most English language affixes (prefixes and suffixes) are bound morphemes, e.g., -ment in "shipment", or pre- in "prefix".

Many roots are free morphemes, e.g., ship- in "shipment", while others are bound.
The morpheme ten- in "tenant" may seem free, since there is an English word "ten". However, its lexical meaning is derived from the Latin word tenere, "to hold", and this or a related meaning is not among the meanings of the English word "ten"; hence ten- is a bound morpheme in the word "tenant".

* In linguistics, derivation is used to form new words, as with happi-ness and un-happy from happy, or determination from determine. A contrast is intended with the process of inflection, which uses another kind of affix in order to form variants of the same word, as with determine/determine-s/determin-ing/determin-ed.
A derivational suffix usually applies to words of one syntactic category and changes them into words of another syntactic category. For example, the English derivational suffix -ly changes adjectives into adverbs (slow → slowly).

Some examples of English derivational suffixes:

* adjective-to-noun: -ness (slow → slowness)
* adjective-to-verb: -ise (modern → modernise) in British English or -ize (archaic → archaicize) in American English and Oxford spelling
* noun-to-adjective: -al (recreation → recreational)
* noun-to-verb: -fy (glory → glorify)
* verb-to-adjective: -able (drink → drinkable)
* verb-to-noun (abstract): -ance (deliver → deliverance)
* verb-to-noun (concrete): -er (write → writer)
Although derivational affixes do not necessarily modify the syntactic category, they modify the meaning of the base. In many cases, derivational affixes change both the syntactic category and the meaning: modern → modernize ("to make modern"). The modification of meaning is sometimes predictable: Adjective + -ness → "the state of being Adjective" (white → whiteness).
A prefix (write → re-write; lord → over-lord) will rarely change syntactic category in English. The derivational prefix un- applies to adjectives (healthy → unhealthy) and some verbs (do → undo), but rarely to nouns. A few exceptions are the prefixes en- and be-. En- (em- before labials) is usually used as a transitive marker on verbs, but can also be applied to adjectives and nouns to form a transitive verb: circle (verb) → encircle (verb); but rich (adj) → enrich (verb), large (adj) → enlarge (verb), rapture (noun) → enrapture (verb), slave (noun) → enslave (verb).
Note that derivational affixes are bound morphemes. In that respect, derivation differs from compounding, by which free morphemes are combined (lawsuit, Latin professor). It also differs from inflection in that inflection does not create new lexemes but new word forms (table → tables; open → opened).
Derivation may occur without any change of form, for example telephone (noun) and to telephone. This is known as conversion or zero derivation. Some linguists consider that when a word's syntactic category is changed without any change of form, a null morpheme is being affixed.

* In grammar, inflection or inflexion is the modification of a word to express different grammatical categories such as tense, mood, voice, aspect, person, number, gender and case. Conjugation is the inflection of verbs; declension is the inflection of nouns, adjectives and pronouns.

* An allomorph is a linguistics term for a variant form of a morpheme. The concept occurs when a unit of meaning can vary in sound (phonologically) without changing meaning. It is used in linguistics to explain the comprehension of variations in sound for a specific morpheme.
English has several morphemes that vary in sound but not in meaning. Examples include the past tense and the plural morphemes.
For example, in English, the past tense morpheme is -ed. It occurs in several allomorphs depending on its phonological environment, assimilating to the voicing of the previous segment or inserting a schwa when following an alveolar stop:
* as /əd/ or /ɪd/ in verbs whose stem ends with the alveolar stops /t/ or /d/, such as 'hunted' /hʌntəd/ or 'banded' /bændəd/
* as /t/ in verbs whose stem ends with voiceless phonemes other than /t/, such as 'fished' /fɪʃt/
* as /d/ in verbs whose stem ends voiced phonemes other than /d/, such as 'buzzed' /bʌzd/
Notice the "other than" restrictions above. This is a common fact about allomorphy: if the allomorphy conditions are ordered from most restrictive (in this case, after an alveolar stop) to least restrictive, then the first matching case usually "wins". Thus, the above conditions could be re-written as follows:

Inflection
In grammar, inflection or inflexion is the modification of a word to express different grammatical categories such as tense, mood, voice, aspect, person, number, gender and case. Conjugation is the inflection of verbs; declension is the inflection of nouns, adjectives and pronouns.

Inflection can be overt or covert within the same language. An overt inflection expresses grammatical category with an explicitly stated affix. Latin ducam, "I will lead", includes an explicit affix, -am, expressing person (first), number (singular) and future tense. This is an overt inflection. In the English equivalent, the verb "lead" is not marked for either person or number, and is marked for tense only in opposition to "led"; that is, not specifically for the future. The whole clause, however, achieves all the specific grammatical categories by use of other words. This is covert inflection, or periphrasis. The process typically distinguishes lexical items (such as lexemes) from functional ones (such as affixes, clitics, particles and morphemes in general) and has functional items acting as markers on lexical ones.

Lexical items that do not respond to overt inflection are invariant or uninflected; for example, "will" is an invariant item: it never takes an affix or changes form to signify a different grammatical category. Its category can only be determined by its context. Uninflected words do not need to be lemmatized in linguistic descriptions or in language computing. On the other hand, inflectional paradigms, or lists of inflected forms of typical words (such as sing, sang, sung, sings, singing, singer, singers, song, songs, songstress, songstresses in English) need to be analyzed according to criteria for uncovering the underlying lexical stem (here s*ng-); that is, the accompanying functional items (-i-, -a-, -u-, -s, -ing, -er, -o-, -stress, -es) and the functional categories of which they are markers need to be distinguished to adequately describe the language.

Constraining the cross-referencing of inflection in a sentence is known as concord or agreement. For example, in "the choir sings", "choir" and "sings" are constrained to the singular number; if one is singular, they both must be.

Languages that have some degree of overt inflection are inflected languages. The latter can be highly inflected, such as Latin, or weakly inflected, such as English, depending on the degree of overt inflection.

Example in English
In English many nouns are inflected for number with the inflectional plural affix -s (as in "dog" → "dog-s"), and most English verbs are inflected for tense with the inflectional past tense affix -ed (as in "call" → "call-ed"). English also inflects verbs by affixation to mark the third person singular in the present tense (with -s), and the present participle (with -ing). English short adjectives are inflected to mark comparative and superlative forms (with -er and -est respectively). In addition, English also shows inflection by ablaut (mostly in verbs) and umlaut (mostly in nouns), as well as long-short vowel alternation. For example:

* Write, wrote, written (marking by ablaut variation, and also suffixing in the participle)
* Sing, sang, sung (ablaut)
* Foot, feet (marking by umlaut variation)
* Mouse, mice (umlaut)
* Child, children (ablaut, and also suffixing in the plural)
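A tiny sketch combining regular affixation with a few of the irregular (umlaut/ablaut) plurals listed above; the fallback "+s" rule is of course a simplification of English plural formation:

```python
# Irregular plurals are listed explicitly; everything else takes the -s affix.
IRREGULAR_PLURALS = {"foot": "feet", "mouse": "mice", "child": "children"}

def plural(noun):
    """Return the plural, using the irregular form if one is listed."""
    return IRREGULAR_PLURALS.get(noun, noun + "s")

print(plural("foot"))   # feet
print(plural("dog"))    # dogs
```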

Inflectional Morphology
Languages that add inflectional morphemes to words are sometimes called inflectional languages, which is a synonym for inflected languages. Morphemes may be added in several different ways:

* Affixation, or simply adding morphemes onto the word without changing the root,
* Reduplication, doubling all or part of a word to change its meaning,
* Alternation, exchanging one sound for another in the root (usually vowel sounds, as in the ablaut process found in Germanic strong verbs and the umlaut often found in nouns, among others).
* Suprasegmental variations, such as of stress, pitch or tone, where no sounds are added or changed but the intonation and relative strength of each sound is altered regularly. For an example, see Initial-stress-derived noun.

Affixing includes prefixing (adding before the base), and suffixing (adding after the base), as well as the much less common infixing (inside) and circumfixing (a combination of prefix and suffix).

Inflection is most typically realized by adding an inflectional morpheme (that is, affixation) to the base form (either the root or a stem). 


Lexeme
A lexeme is an abstract unit of morphological analysis in linguistics that roughly corresponds to a set of forms taken by a single word. For example, in the English language, run, runs, ran and running are forms of the same lexeme, conventionally written as RUN. A related concept is the lemma (or citation form), which is a particular form of a lexeme that is chosen by convention to represent a canonical form of a lexeme. Lemmas are used in dictionaries as the headwords, and other forms of a lexeme are often listed later in the entry if they are not common conjugations of that word.

A lexeme belongs to a particular syntactic category, has a certain meaning (semantic value), and in inflecting languages, has a corresponding inflectional paradigm; that is, a lexeme in many languages will have many different forms. For example, the lexeme RUN has a present third person singular form runs, a present non-third-person singular form run (which also functions as the past participle and non-finite form), a past form ran, and a present participle running. (It does not include runner, runners, runnable, etc.) The use of the forms of a lexeme is governed by rules of grammar; in the case of English verbs such as RUN, these include subject-verb agreement and compound tense rules, which determine which form of a verb can be used in a given sentence.
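One way to picture this is to encode a lexeme as a lemma plus its inflectional paradigm, as in the sketch below for RUN (the feature names are informal labels chosen for illustration, not a standard feature set):

```python
# The lexeme RUN: an abstract unit with a lemma and a set of inflected forms.
RUN = {
    "lemma": "run",
    "forms": {
        "present_3sg":        "runs",
        "present_non_3sg":    "run",
        "past":               "ran",
        "present_participle": "running",
    },
}

def inflect(lexeme, feature):
    """Look up the form of a lexeme for a given inflectional feature."""
    return lexeme["forms"][feature]

print(inflect(RUN, "past"))   # ran
```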

A lexicon consists of lexemes.

In many formal theories of language, lexemes have subcategorization frames to account for the number and types of complements they occur with in sentences and other syntactic structures.

The notion of a lexeme is very central to morphology, and thus, many other notions can be defined in terms of it. For example, the difference between inflection and derivation can be stated in terms of lexemes:

* Inflectional rules relate a lexeme to its forms.
* Derivational rules relate a lexeme to another lexeme.

Decomposition
Lexemes are often composed of smaller units with individual meaning called morphemes, according to root morpheme + derivational morphemes + desinence (not necessarily in this order), where:

* The root morpheme is the primary lexical unit of a word, which carries the most significant aspects of semantic content and cannot be reduced to smaller constituents.
* The derivational morphemes carry only derivational information.
* The desinence is composed of all inflectional morphemes, and carries only inflectional information.

The compound root morpheme + derivational morphemes is often called the stem. The decomposition stem + desinence can then be used to study inflection.
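A small worked example of this decomposition, using the hypothetical word modernizes (root modern, derivational morpheme -ize, desinence -s):

```python
# root + derivational morphemes = stem; stem + desinence = surface word form.
word = {
    "root": "modern",          # primary lexical unit
    "derivational": ["-ize"],  # carries derivational information only
    "desinence": "-s",         # carries inflectional information only
}

stem = word["root"] + "".join(m.lstrip("-") for m in word["derivational"])
surface = stem + word["desinence"].lstrip("-")
print(stem, surface)   # modernize modernizes
```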

Lexical items
Lexical items are single words or groups of words that form the basic elements of a language's lexicon. Examples are "cat", "traffic light", "take care of", "by-the-way", and "don't count your chickens before they hatch". Lexical items are those which can be generally understood to convey a single meaning, much as a lexeme, but are not limited to single words. Lexical items are like semes in that they are "natural units" translating between languages, or in learning a new language. In this last sense, it is sometimes said that language consists of grammaticalized lexis, and not lexicalized grammar.
The entire store of lexical items in a language is called its lexis.

Lexical chunks
Lexical items composed of more than one word are also sometimes called lexical chunks, gambits, lexical phrases, lexical units, lexicalized stems or speech formulae. The term polyword listemes is also sometimes used. Common types of lexical chunks include:

* Words, e.g., "cat", "tree".
* Phrasal verbs, such as "put off" or "get out".
* Polywords, e.g., "by the way", "inside out".
* Collocations, e.g., "motor vehicle", "absolutely convinced".
* Institutionalized utterances, e.g., "I'll get it", "We'll see", "That'll do", "If I were you", "Would you like a cup of coffee?"
* Idioms, e.g., "break a leg", "was one whale of a", "a bitter pill to swallow".
* Sentence frames and heads, e.g., "That is not as...as you think", "The problem was".
* Text frames, e.g., "In this paper we explore...; Firstly...; Secondly...; Finally ...".

An associated concept is that of noun-modifier semantic relations, wherein certain word pairings have a standard interpretation. For example, the phrase "cold virus" is generally understood to refer to the virus that causes a cold, rather than a virus that is cold.  

Syntax
In linguistics, syntax (from Ancient Greek σύνταξις "arrangement" from σύν syn, "together", and τάξις táxis, "an ordering") is the study of the principles and rules for constructing sentences in natural languages. In addition to referring to the discipline, the term syntax is also used to refer directly to the rules and principles that govern the sentence structure of any individual language, as in "the syntax of Modern Irish."


Modern research in syntax attempts to describe languages in terms of such rules. Many professionals in this discipline attempt to find general rules that apply to all natural languages. The term syntax is also sometimes used to refer to the rules governing the behavior of mathematical systems, such as logic, artificial formal languages, and computer programming languages.

Modern theory of Syntax
There are a number of theoretical approaches to the discipline of syntax. Many linguists see syntax as a branch of biology, since they conceive of syntax as the study of linguistic knowledge as embodied in the human mind. Others (e.g. Gerald Gazdar) take a more Platonistic view, since they regard syntax to be the study of an abstract formal system. Yet others (e.g. Joseph Greenberg) consider grammar a taxonomical device to reach broad generalizations across languages. Some of the major approaches to the discipline are listed below.

Generative Grammar
The hypothesis of generative grammar is that language is a structure of the human mind. The goal of generative grammar is to make a complete model of this inner language (known as i-language). This model could be used to describe all human language and to predict the grammaticality of any given utterance (that is, to predict whether the utterance would sound correct to native speakers of the language). This approach to language was pioneered by Noam Chomsky. Most generative theories (although not all of them) assume that syntax is based upon the constituent structure of sentences. Generative grammars are among the theories that focus primarily on the form of a sentence, rather than its communicative function.

Among the many generative theories of linguistics, the Chomskyan theories are:

* Transformational Grammar (TG) (Original theory of generative syntax laid out by Chomsky in Syntactic Structures in 1957)
* Government and binding theory (GB) (revised theory in the tradition of TG developed mainly by Chomsky in the 1970s and 1980s).
* The Minimalist Program (MP) (revised version of GB published by Chomsky in 1995)

Other theories that find their origin in the generative paradigm are:

* Generative semantics (now largely out of date)
* Relational grammar (RG) (now largely out of date)
* Arc Pair grammar
* Generalized phrase structure grammar (GPSG; now largely out of date)
* Head-driven phrase structure grammar (HPSG)
* Lexical-functional grammar (LFG)

Categorial Grammar
Categorial grammar is an approach that attributes the syntactic structure not to rules of grammar, but to the properties of the syntactic categories themselves. For example, rather than asserting that sentences are constructed by a rule that combines a noun phrase (NP) and a verb phrase (VP) (e.g. the phrase structure rule S → NP VP), in categorial grammar such principles are embedded in the category of the head word itself. So the syntactic category for an intransitive verb is a complex formula representing the fact that the verb acts as a functor which requires an NP as an input and produces a sentence-level structure as an output. This complex category is notated as (NP\S) instead of V. NP\S is read as "a category that searches to the left (indicated by \) for an NP (the element on the left) and outputs a sentence (the element on the right)". The category of a transitive verb is defined as an element that requires two NPs (its subject and its direct object) to form a sentence. This is notated as (NP/(NP\S)), which means "a category that searches to the right (indicated by /) for an NP (the object), and generates a function (equivalent to the VP) which is (NP\S), which in turn represents a function that searches to the left for an NP and produces a sentence".
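A minimal sketch of this idea, following the notation convention used in the paragraph above (the argument is written before the slash, "\" looks left and "/" looks right); the toy lexicon and helper names are assumptions for illustration only:

```python
# Categories: "NP\S" seeks an NP to its LEFT and yields S;
# "NP/(NP\S)" seeks an NP to its RIGHT and yields NP\S.
NP = "NP"
S = "S"
IV = (NP, "\\", S)    # intransitive verb: NP\S
TV = (NP, "/", IV)    # transitive verb:   NP/(NP\S)

LEXICON = {"John": NP, "Mary": NP, "sleeps": IV, "sees": TV}

def combine(left, right):
    """Try to combine two adjacent categories; return the result or None."""
    # Backward application: left is the argument that right (= X\Y) looks for.
    if isinstance(right, tuple) and right[1] == "\\" and right[0] == left:
        return right[2]
    # Forward application: right is the argument that left (= X/Y) looks for.
    if isinstance(left, tuple) and left[1] == "/" and left[0] == right:
        return left[2]
    return None

def parse(words):
    """Repeatedly combine adjacent categories until one remains (or fail)."""
    cats = [LEXICON[w] for w in words]
    while len(cats) > 1:
        for i in range(len(cats) - 1):
            result = combine(cats[i], cats[i + 1])
            if result is not None:
                cats[i:i + 2] = [result]
                break
        else:
            return None
    return cats[0]

print(parse(["John", "sleeps"]))        # S
print(parse(["John", "sees", "Mary"]))  # S
```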

Tree-adjoining grammar is a categorial grammar that adds in partial tree structures to the categories.



Phrase Structure Rules
Phrase-structure rules are a way to describe a given language's syntax. They are used to break a natural language sentence down into its constituent parts (also known as syntactic categories), namely phrasal categories and lexical categories (i.e. parts of speech). Phrasal categories include the noun phrase, verb phrase, and prepositional phrase; lexical categories include noun, verb, adjective, adverb, and many others. Phrase structure rules were commonly used in transformational-generative grammar (TGG), although they were not an invention of TGG; rather, early versions of TGG added to phrase structure rules, the most obvious addition being transformations (see the discussion of transformational grammar above for an overview of the development of TGG). A grammar which uses phrase structure rules is called a phrase structure grammar, except in computer science, where it is known simply as a grammar, usually a context-free one.

Definition

Phrase structure rules are usually of the form A → B C, meaning that the constituent A is separated into the two subconstituents B and C.

Some example rules valid for English are:

S → NP VP
NP → Det N1
N1 → (AP) N1 (PP)

The first rule reads: an S consists of an NP followed by a VP; that is, a sentence consists of a noun phrase followed by a verb phrase. The second reads: a noun phrase consists of a determiner followed by an N1 (a nominal). The third allows a nominal to contain an optional adjective phrase and an optional prepositional phrase around a smaller nominal.

The constituent labels used above: S = sentence, NP = noun phrase, VP = verb phrase, Det = determiner, N1 = nominal, AP = adjective phrase, PP = prepositional phrase.
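As an informal illustration, rules of this kind can be written down and run with a chart parser. The sketch below uses NLTK (which must be installed separately) with a tiny invented lexicon; the grammar and the example sentence are hypothetical and only meant to show the rules at work.

```python
# Minimal sketch: the phrase structure rules above as an NLTK context-free grammar
# (assumes the nltk package is installed; lexicon entries are invented).
import nltk

grammar = nltk.CFG.fromstring("""
S  -> NP VP
NP -> Det N1
N1 -> AP N1 | N1 PP | N
VP -> V NP
AP -> A
PP -> P NP
Det -> 'the' | 'a'
N -> 'dog' | 'garden'
A -> 'old'
V -> 'saw'
P -> 'in'
""")

parser = nltk.ChartParser(grammar)
sentence = "the old dog saw a garden".split()
for tree in parser.parse(sentence):
    print(tree)   # prints the constituent structure licensed by the rules
```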

Associated with phrase structure rules is a famous example of a grammatically correct but semantically nonsensical sentence, "Colorless green ideas sleep furiously". The sentence was constructed by Noam Chomsky as an illustration that syntactically well-formed but semantically anomalous sentences are possible.

Some important linguists argue that the structure of a word and the structure of a sentence are akin, and therefore apply rules used in sentence syntax to word syntax, i.e. the structure of words. Rules of this kind are thus found not only in syntax but also in morphology; the rules we are concerned with here are the phrase-structure rules.

The general phrase-structure rule for compounding would be as follows:

Word → Word Word, i.e. a word consists of a word plus a word.



Phrase-structure rules of this kind provide the following syntactic information (illustrated in the short sketch after the list):

* there is a constituent which functions as the syntactic head
* the syntactic properties of the head determine the entire compound
* the head is on the right-hand side
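As a toy illustration of these three points, the sketch below assigns each compound the category of its right-hand constituent; the category labels and the small lexicon are hypothetical.

```python
# Minimal sketch: the right-hand constituent is the head, so it projects
# its category to the whole compound (hypothetical category labels).
LEXICON = {
    "black": "A", "board": "N",
    "freeze": "V", "dry": "V",
    "bird": "N", "house": "N",
}

def compound_category(left, right):
    """Word -> Word Word; the right-hand head determines the result."""
    return LEXICON[right]

print(compound_category("black", "board"))   # N: blackboard is a noun
print(compound_category("freeze", "dry"))    # V: freeze-dry is a verb
```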

Compounds with a verbal constituent

A Compound nouns with a verbal constituent

1 verb + noun / adjective


These transparent combinations can be analyzed in terms of predicate argument structure, e.g. scarecrow - to scare (verb) the crows (theme/patient), which will be done more extensively for the examples in C.1.

Examples:

(1) scarecrow

(2) pickpocket

(3) diehard


B Compound verbs

1 verb + verb
This type of compounding has an appositional character, because the two verbal elements are simply put together without any further dependency holding between them. These compound verbs usually signify a combination of actions which are closely connected or follow each other within a fraction of a second (cf. example (5)).

Examples:

(4) freeze-dry

(5) drop-kick

2 verb + preposition / preposition + verb


2.1 preposition + verb
In order to distinguish this way of compounding from the "pseudo-compound verbs" (C.2), it is necessary to refer to the semantic level of interpretation. As with the previous compounds, this type of compound verb, consisting of two independent morphemes, follows the determinant/determinatum relationship (cf. Marchand 1969, p. 96). The verb functions as the determinatum because, as the head of the compound, it refers to the action described by the compound. So the verb to outgrow (example (6)) can be treated as a variation of the verb to grow. The other constituent, the preposition, works as a determinant, which specifies the determinatum.

Examples:

(6) outgrow

(7) underestimate

(8) overhear

(9) offload

2.2 verb + preposition

Some linguists, like Francis Katamba, argue that phrasal verbs consisting of a verb and a preposition or adverbial particle must be regarded as compounds. In contrast to Katamba, Marchand lists them under the heading of phrases and treats them separately as lexicalized items, which means that they are regarded as entities which can only be modified as a whole. However, phrasal verbs are different from other compounds in that their constituents can be separated within a sentence, e.g. He took it over. Apart from that, these phrasal verbs are often nominalized, which forms another type of compound noun or adjective: [V + P]N.

Examples:

(10) a. to take off

b. take-off

(11) a. to take over

b. take-over

(12) a. to hand out

b. hand-out

C Special forms of compounding involving verbal constituents

1 Verbal compounds

Verbal compounds are sometimes also called synthetic or secondary compounds because they contain a nominal or adjectival head which is derived from a verb. The underlying structure of the compound can be interpreted in terms of predicate argument structure: the nonverbal constituent serves as an argument (e.g. agent, theme/patient, instrument) of the deverbal head. This kind of compounding is very productive because almost any active or passive phrase can be turned into a verbal compound.

1.1 Compound nouns


Examples:

(18) bookseller: a seller AGENT of books THEME

Simplified tree structure for (18): [N' [N book] [N [V sell <Ag,Th>] -er]]

(19) sheep-shearing: shearing VERB sheep THEME

1.2 Compound adjectives

Examples:

(20) God-fearing: fearing VERB God THEME

(21) hand-written: written VERB by hand INSTRUMENT


2 Pseudo-compound verbs
In contrast to the actual compound verbs described under B.2, the determinant/determinatum relationship does not apply to the so-called pseudo-compound verbs. In the case of the verb to spotlight, the whole compound does not have any determinatum at all, because the word means 'to turn spotlights on something/someone', but the idea of 'to turn something on someone' is not expressed in the actual word. As these verbs are derived from compound nouns or adjectives, the verbs themselves cannot be called compounds.


2.1 Conversion
Conversion means that a compound noun is taken over as a verb. This process can also be called zero-derivation and is, strictly speaking, not an example of compounding.

Examples:

(13) to spotlight [noun + noun]V = '(to turn) spotlights on ...'

(14) to blacklist [adjective + noun]V = '(to put someone) on a list of suspicious persons'


2.2 Back-formation
These "compound verbs" are derived from synthetic (=verbal) compounds by back-formation. These synthetic compounds, which are nouns or adjectives, are supposed to be based on a compound verb of the same kind. So the compound noun housekeeper is supposed to refer to the originally non-existing compound verb to housekeep. In fact, the process of derivation works the other way round. The ending for the agentive noun (-er) or the participle (-ing, -en/ed) is clipped off the verbal compound to form a pseudo-compound verb. However, these pseudo-compound verbs may act as models of analogous word-formations, which deserve the label "compound", e.g. the formation of to house-sit following the pattern of to baby-sit.

Examples:

(15) to housekeep: derived from housekeeper [noun + verb + er]N

(16) to baby-sit: derived from babysitter [noun + verb + er]N

(17) to sightsee: derived from sightseeing [noun + verb-ing]N

(18) to bottlefeed: derived from bottlefed [noun + verb-en]A

3 Phrases

It is questionable whether it is possible to call these phrases compounds or whether it is more suitable to treat them as lexicalized phrases. However, these entities consist of free morphemes which are put together to form a new word; in this respect, they fit a general definition of compounds. Especially in the case of the kinship term -in-law, it seems justified to think of a deliberate combination of elements, which is typical of compounding, because any kinship term (mother, father, brother, etc.) can be combined with -in-law to form a new left-headed compound.

Examples:

(22) do-it-yourself

(23) mother-in-law

(24) lady-in-waiting

(25) forget-me-not 


Semantics 
Semantics is the study of meaning, usually in language. The word "semantics" itself denotes a range of ideas, from the popular to the highly technical. It is often used in ordinary language to denote a problem of understanding that comes down to word selection or connotation. This problem of understanding has been the subject of many formal inquiries, over a long period of time. In linguistics, it is the study of the interpretation of signs or symbols as used by agents or communities within particular circumstances and contexts. Within this view, sounds, facial expressions, body language, and proxemics have semantic (meaningful) content, and each has several branches of study. In written language, such things as paragraph structure and punctuation have semantic content; in other forms of language, there is other semantic content.

The formal study of semantics intersects with many other fields of inquiry, including proxemics, lexicology, syntax, pragmatics, etymology and others, although semantics is a well-defined field in its own right, often with synthetic properties. In philosophy of language, semantics and reference are related fields. Further related fields include philology, communication, and semiotics. The formal study of semantics is therefore complex.

Semantics is sometimes contrasted with syntax, the study of the symbols of a language (without reference to their meaning), and pragmatics, the study of the relationships between the symbols of a language, their meaning, and the users of the language.

In linguistics, semantics is the subfield that is devoted to the study of meaning, as inherent at the levels of words, phrases, sentences, and larger units of discourse (referred to as texts). The basic area of study is the meaning of signs, and the study of relations between different linguistic units: homonymy, synonymy, antonymy, polysemy, paronyms, hypernymy, hyponymy, meronymy, metonymy, holonymy, exocentricity / endocentricity, linguistic compounds. A key concern is how meaning attaches to larger chunks of text, possibly as a result of the composition from smaller units of meaning. Traditionally, semantics has included the study of sense and denotative reference, truth conditions, argument structure, thematic roles, discourse analysis, and the linkage of all of these to syntax.



Formal semanticists are concerned with the modeling of meaning in terms of the semantics of logic. Thus the sentence John loves a bagel can be broken down into its constituents (signs), of which the unit loves may serve as both syntactic and semantic head.

Semantics in Psychology
In psychology, semantic memory is memory for meaning – in other words, the aspect of memory that preserves only the gist, the general significance, of remembered experience – while episodic memory is memory for the ephemeral details – the individual features, or the unique particulars of experience. The meaning of a word is measured by the company it keeps, i.e. the relationships among words themselves in a semantic network. In a network created by people analyzing their understanding of words (such as WordNet) the links and decomposition structures of the network are few in number and kind, and include "part of", "kind of", and similar links. In automated ontologies the links are computed vectors without explicit meaning. Various automated technologies are being developed to compute the meaning of words: latent semantic indexing and support vector machines as well as natural language processing, neural networks and predicate calculus techniques.
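To give one concrete picture of the "computed vectors" mentioned above, the sketch below represents words as small vectors and measures their relatedness by cosine similarity. The three-dimensional vectors are invented for illustration; real systems (e.g. latent semantic indexing) derive high-dimensional vectors from corpora.

```python
# Minimal sketch: word relatedness as cosine similarity between vectors.
# The vectors below are invented toy values, not learned from any corpus.
import math

vectors = {
    "cat":   [0.9, 0.1, 0.0],
    "dog":   [0.8, 0.2, 0.1],
    "piano": [0.0, 0.1, 0.9],
}

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

print(cosine(vectors["cat"], vectors["dog"]))    # high: related meanings
print(cosine(vectors["cat"], vectors["piano"]))  # low: unrelated meanings
```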



Semantic relations
1. Homonymy
In linguistics, a homonym is, in the strict sense, one of a group of words that share the same spelling and the same pronunciation but have different meanings (in other words, are both homographs and homophones), usually as a result of the two words having different origins. The state of being a homonym is called homonymy. Examples of pairs of homonyms are stalk (part of a plant) and stalk (follow/harass a person), and left (opposite of right) and left (past tense of leave). 
In a looser non-technical sense, the term "homonym" can be used to refer to words that share the same spelling irrespective of pronunciation, or share the same pronunciation irrespective of spelling – in other words, they are homographs or homophones. In this sense, pairs such as row (propel with oars) and row (argument), and read (peruse) and reed (waterside plant), would also be homonyms.
A distinction may be made between "true" homonyms, which are unrelated in origin, such as skate (glide on ice) and skate (the fish), and polysemous homonyms, or polysemes, which have a shared origin, such as mouth (of a river) and mouth (of an animal).
Several similar linguistic concepts are related to homonymy. These include the following (a short classification sketch follows this list):

* Homographs (literally "same writing") are usually defined as words that share the same spelling, regardless of how they are pronounced. If they are pronounced the same then they are also homophones (and homonyms) – for example, bark (the sound of a dog) and bark (the skin of a tree). If they are pronounced differently then they are also heteronyms – for example, bow (the front of a ship) and bow (a type of knot).

* Homophones (literally "same sound") are usually defined as words that share the same pronunciation, regardless of how they are spelled. If they are spelled the same then they are also homographs (and homonyms); if they are spelled differently then they are also heterographs (literally "different writing"). Homographic examples include rose (flower) and rose (past tense of rise). Heterographic examples include to, too, two, and there, their, they’re.

* Heteronyms (literally "different name") are the subset of homographs (words that share the same spelling) that have different pronunciations (and meanings). That is, they are homographs which are not homophones. Such words include desert (to abandon) and desert (arid region); row (to argue or an argument) and row (as in to row a boat or a row of seats). Note that the latter meaning also constitutes a pair of homophones. Heteronyms are also sometimes called heterophones (literally "different sound").

* Polysemes are words with the same spelling and distinct but related meanings. The distinction between polysemy and homonymy is often subtle and subjective, and not all sources consider polysemous words to be homonyms. Words such as mouth, meaning either the orifice on one's face, or the opening of a cave or river, are polysemous and may or may not be considered homonyms.
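The distinctions above can be summarised as a small decision procedure over spelling and pronunciation. The sketch below is only illustrative; the pronunciations are simplified hand-written transcriptions, not taken from any dictionary.

```python
# Minimal sketch of the definitions above: classify a pair of word forms by
# whether their spelling and pronunciation match (transcriptions are simplified).
def classify(spelling1, pron1, spelling2, pron2):
    same_spelling = spelling1 == spelling2
    same_sound = pron1 == pron2
    labels = []
    if same_spelling:
        labels.append("homographs")
    if same_sound:
        labels.append("homophones")
    if same_spelling and same_sound:
        labels.append("homonyms (strict sense)")
    if same_spelling and not same_sound:
        labels.append("heteronyms")
    if same_sound and not same_spelling:
        labels.append("heterographs")
    return labels or ["distinct forms"]

print(classify("bark", "bark", "bark", "bark"))   # homographs, homophones, homonyms
print(classify("bow", "boh", "bow", "bau"))       # homographs, heteronyms
print(classify("to", "tuu", "too", "tuu"))        # homophones, heterographs
```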

Example:
A further example of a homonym, which is both a homophone and a homograph, is fluke. Fluke can mean:

* A fish, and a flatworm.
* The end parts of an anchor.
* The fins on a whale's tail.
* A stroke of luck.

All four are separate lexemes with separate etymologies, but share the one form, fluke.

Similarly, a river bank, a savings bank, a bank of switches, and a bank shot in pool share a common spelling and pronunciation, but differ in meaning.

The words bow and bough are interesting because there are two meanings associated with a single pronunciation and spelling (the weapon and the knot); there are two meanings with two different pronunciations (the knot and the act of bending at the waist), and there are two distinct meanings sharing the same sound but different spellings (bow, the act of bending at the waist, and bough, the branch of a tree). In addition, it has several related but distinct meanings – a bent line is sometimes called a 'bowed' line, reflecting its similarity to the weapon. Thus, even according to the most restrictive definitions, various pairs of sounds and meanings of bow and bough are homonyms, homographs, homophones, heterophones, heterographs, and are polysemous.

* bow – a long wooden stick with horse hair that is used to play certain string instruments such as the violin
* bow – to bend forward at the waist in respect (e.g. "bow down")
* bow – the front of the ship (e.g. "bow and stern")
* bow – the weapon which shoots arrows (e.g. "bow and arrow")
* bow – a kind of tied ribbon (e.g. bow on a present, a bowtie)
* bow – to bend outward at the sides (e.g. a "bow-legged" cowboy)
* bough – a branch on a tree. (e.g. "when the bough breaks...")
* bō – a long staff, usually made of tapered hard wood or bamboo
* beau – a male paramour.

2. Synonymy
Synonyms are different words with identical or very similar meanings. Words that are synonyms are said to be synonymous, and the state of being a synonym is called synonymy. The word comes from Ancient Greek syn (σύν) ("with") and onoma (ὄνομα) ("name"). The words car and automobile are synonyms. Similarly, if we talk about a long time or an extended time, long and extended become synonyms. In the figurative sense, two words are often said to be synonymous if they have the same connotation:

"a widespread impression that … Hollywood was synonymous with immorality" (Doris Kearns Goodwin)

Synonyms can be any part of speech (e.g. nouns, verbs, adjectives, adverbs or prepositions), as long as both members of the pair are the same part of speech. More examples of English synonyms are:

* student and pupil (noun)
* petty crime and misdemeanor (noun)
* buy and purchase (verb)
* sick and ill (adjective)
* quickly and speedily (adverb)
* on and upon (preposition)

Note that synonyms are defined with respect to certain senses of words; for instance, pupil as the "aperture in the iris of the eye" is not synonymous with student. Similarly, he expired means the same as he died, yet my passport has expired cannot be replaced by my passport has died.

In English, many synonyms evolved from the parallel use, in the early medieval period, of Norman French (from Latin) and Old English (Anglo-Saxon) words, often with some words being used principally by the Saxon peasantry ("folk", "freedom", "bowman") and their synonyms by the Norman nobility ("people", "liberty", "archer").

Some lexicographers claim that no synonyms have exactly the same meaning (in all contexts or social levels of language) because etymology, orthography, phonic qualities, ambiguous meanings, usage, etc. make them unique. Different words that are similar in meaning usually differ for a reason: feline is more formal than cat; long and extended are only synonyms in one usage and not in others (for example, a long arm is not the same as an extended arm). Synonyms are also a source of euphemisms.

The purpose of a thesaurus is to offer the user a listing of similar or related words; these are often, but not always, synonyms.
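A thesaurus-style lookup of this kind can be sketched with WordNet through NLTK. The snippet assumes nltk is installed and its wordnet corpus has been downloaded; the exact synsets returned depend on the WordNet version.

```python
# Minimal sketch: looking up near-synonyms via WordNet
# (assumes nltk is installed and the 'wordnet' corpus is downloaded).
from nltk.corpus import wordnet as wn

for synset in wn.synsets("buy", pos=wn.VERB):
    # Each synset groups lemmas that are synonymous in one particular sense.
    print(synset.name(), synset.lemma_names())
# The first verb sense typically lists 'buy' together with 'purchase'.
```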

3. Antonymy
Antonyms, from the Greek anti ("opposite") and onoma ("name"), are gradable opposites. Gradable opposites lie at opposite ends of a continuous spectrum of meanings; examples are hot and cold, slow and fast, and fat and skinny. Words may have several different antonyms, depending on the meaning: both long and tall can be antonyms of short.

Though the word antonym was only coined by philologists in the 19th century, such relationships are a fundamental part of a language, in contrast to synonyms, which are a result of history and drawing of fine distinctions, or homonyms, which are mostly etymological accidents or coincidences.

Languages often have ways of creating antonyms as an easy extension of the lexicon. For example, English has the prefixes in- and un-, so unreal is the antonym of real and indocile is the antonym of docile.
The term antonym (and the related antonymy) has also been commonly used as a term that is synonymous with opposite; however, the term also has other more restricted meanings. One usage has antonym referring to both gradable opposites, such as long : short, and (non-gradable) complementary opposites, such as male : female, while opposites of the types up : down and precede : follow are excluded from the definition.


4. Polysemy
A polyseme is a word or phrase with multiple, related meanings. A word is judged to be polysemous if it has two or more senses whose meanings are related. Since the vague concept of relatedness is the test for polysemy, judgments of polysemy can be very difficult to make. Because applying pre-existing words to new situations is a natural process of language change, looking at a word's etymology is helpful in determining polysemy, but it is not the only solution: as words lose their etymological transparency, what once was a useful distinction of meaning may no longer be so. Some apparently unrelated words share a common historical origin, however, so etymology is not an infallible test for polysemy, and dictionary writers also often defer to speakers' intuitions to judge polysemy in cases where it contradicts etymology. English has many words which are polysemous. For example, the verb "to get" can mean "take" (I'll get the drinks), "become" (she got scared), "have" (I've got three dollars), "understand" (I get it), etc.

Example :

* Mole

1. a small burrowing mammal
2. consequently, there are several other entities called moles; although these refer to different things, their names derive from sense 1, e.g. a mole (a spy) "burrows" for information, hoping to go undetected.

* Bank

1. a financial institution
2. the building where a financial institution offers services
3. a synonym for 'rely upon' (e.g. "I'm your friend, you can bank on me"). It is different, but related, as it derives from the theme of security initiated by 1.

However: a river bank is a homonym to 1 and 2, as they do not share etymologies. It is a completely different meaning. River bed, though, is polysemous with the beds on which people sleep.

* Book

1. a bound collection of pages
2. a text reproduced and distributed (thus, someone who has read the same text on a computer has read the same book as someone who had the actual paper volume)

* Milk
1. the verb milk (e.g. "he's milking it for all he can get") derives from the process of obtaining milk.

* Wood

1. a piece of a tree
2. a geographical area with many trees

* Crane

1. a bird
2. a type of construction equipment


5. Paronymy
A paronym or paronyme in linguistics may refer to two different things:

1. A word that is related to another word and derives from the same root, e.g. a cognate word;
2. Words which are almost homonyms, but have slight differences in spelling or pronunciation and have different meanings.

Some paronyms are truly synonymous, but only under the rarest of conditions. They often lead to confusion. Examples of any type of paronym are:

* alternately and alternatively
* collision and collusion
* conjuncture and conjecture
* excise and exercise
* prolepsis and proslepsis
* continuous and contiguous
* farther (or farthest) and further (or furthest)
* affect and effect
* upmost and utmost
* deprecate and depreciate

6. Hyponymy or Hypernymy
In linguistics, a hyponym is a word or phrase whose semantic range is included within that of another word, its hypernym (sometimes spelled hyperonym outside of the natural language processing community). In simpler terms, a hyponym shares a type-of relationship with its hypernym. For example, scarlet, vermilion, carmine, and crimson are all hyponyms of red (their hypernym), which is, in turn, a hyponym of colour.

Computer science often terms this relationship an "is-a" relationship. For example, the phrase Red is a colour can be used to describe the hyponymic relationship between red and colour.

Hypernymy is the semantic relation in which one word is the hypernym of another. Hypernymy, the relation in which words stand when their extensions stand in the relation of class to subclass, should not be confused with holonymy, which is the relation in which words stand when the things that they denote stand in the relation of whole to part. A similar warning applies to hyponymy and meronymy.
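These "is-a" chains are exactly what WordNet's hypernym pointers record; the short NLTK sketch below walks up one such chain (same installation assumptions as the synonym sketch above, and the exact chain depends on the WordNet version).

```python
# Minimal sketch: walking up an "is-a" (hypernym) chain in WordNet.
from nltk.corpus import wordnet as wn

synset = wn.synset("dog.n.01")
while synset.hypernyms():
    parent = synset.hypernyms()[0]       # follow the first hypernym pointer
    print(f"{synset.name()} is a kind of {parent.name()}")
    synset = parent
# e.g. dog.n.01 -> canine.n.02 -> carnivore.n.01 -> ... -> entity.n.01
```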

7. Meronymy
Meronymy (from the Greek words meros = part and onoma = name) is a semantic relation used in linguistics. A meronym denotes a constituent part of, or a member of something. That is,

X is a meronym of Y if Xs are parts of Y(s), or
X is a meronym of Y if Xs are members of Y(s).

For example, 'finger' is a meronym of 'hand' because a finger is part of a hand. Similarly 'wheel' is a meronym of 'automobile'.

Meronymy is the opposite of holonymy. A closely related concept is that of mereology, which specifically deals with part/whole relations and is used in logic. It is formally expressed in terms of first-order logic.

A meronym means part of a whole. A word denoting a subset of what another word denotes is a hyponym.

In knowledge representation languages, meronymy is often expressed as "part-of".
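WordNet stores these part-whole pointers directly, so the relation can be queried in both directions. The sketch below uses the same NLTK/WordNet assumptions as above; the synsets mentioned in the comments are typical but may differ between WordNet versions.

```python
# Minimal sketch: part-whole ("part-of") relations in WordNet.
from nltk.corpus import wordnet as wn

tree = wn.synset("tree.n.01")
print(tree.part_meronyms())     # parts of a tree, e.g. trunk, limb, crown

finger = wn.synset("finger.n.01")
print(finger.part_holonyms())   # wholes a finger belongs to, e.g. hand
```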

8. Metonymy
Metonymy (pronounced /mɨˈtɒnɨmi/) is a figure of speech used in rhetoric in which a thing or concept is not called by its own name, but by the name of something intimately associated with that thing or concept. For instance, "London," as the capital of the United Kingdom, could be used as a metonym for its government.

The words "metonymy" and "metonym" come from the Greek: μετωνυμία, metōnymía, "a change of name", from μετά, metá, "after, beyond" and -ωνυμία, -ōnymía, a suffix used to name figures of speech, from ὄνῠμα, ónyma or ὄνομα, ónoma, "name". Metonymy may also be instructively contrasted with metaphor. Both figures involve the substitution of one term for another. In metaphor, this substitution is based on similarity, whereas, in metonymy, the substitution is based on contiguity.

Example:
* Word – original meaning: a unit of language; metonymic use: a conversation
* Sweat – original meaning: perspiration; metonymic use: hard work
* Tongue – original meaning: the oral muscle; metonymic use: a language or dialect
* The press – original meaning: the printing press; metonymic use: the news media


9. Holonymy
Holonymy (in Greek holon = whole and onoma = name) is a semantic relation. Holonymy defines the relationship between a term denoting the whole and a term denoting a part of, or a member of, the whole. That is,

'X' is a holonym of 'Y' if Ys are parts of Xs, or
'X' is a holonym of 'Y' if Ys are members of Xs.

For example, 'tree' is a holonym of 'bark', of 'trunk' and of 'limb.'

Holonymy is the opposite of meronymy.



Pragmatics   
Pragmatics is a subfield of linguistics which studies the ways in which context contributes to meaning. Pragmatics encompasses speech act theory, conversational implicature, talk in interaction and other approaches to language behavior in philosophy, sociology, and linguistics. It studies how the transmission of meaning depends not only on the linguistic knowledge (e.g. grammar, lexicon, etc.) of the speaker and listener, but also on the context of the utterance, knowledge about the status of those involved, the inferred intent of the speaker, and so on. In this respect, pragmatics explains how language users are able to overcome apparent ambiguity, since meaning relies on the manner, place, time, etc. of an utterance. The ability to understand another speaker's intended meaning is called pragmatic competence. An utterance describing pragmatic function is described as metapragmatic. Pragmatic awareness is regarded as one of the most challenging aspects of language learning, and comes only through experience.


Structural Ambiguity in Pragmatics

The sentence "You have a green light" is ambiguous. Without knowing the context, the identity of the speaker, and their intent, it is not possible to infer the meaning with confidence. For example:

* It could mean you are holding a green light bulb.
* Or that you have a green light to drive your car.
* Or it could be indicating that you can go ahead with the project.
* Or that your body has a green glow.

Similarly, the sentence "Sherlock saw the man with binoculars" could mean that Sherlock observed the man by using binoculars; or it could mean that Sherlock observed a man who was holding binoculars. The meaning of the sentence depends on an understanding of the context and the speaker's intent. As defined in linguistics, a sentence is an abstract entity — a string of words divorced from non-linguistic context — as opposed to an utterance, which is a concrete example of a speech act in a specific context. The cat sat on the mat is a sentence of English; if you say to your sister on Tuesday afternoon: "The cat sat on the mat", this is an example of an utterance. Thus, there is no such thing as a sentence with a single true meaning; it is underspecified (which cat sat on which mat?) and potentially ambiguous. The meaning of an utterance, on the other hand, is inferred based on linguistic knowledge and knowledge of the non-linguistic context of the utterance (which may or may not be sufficient to resolve ambiguity).  
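The binoculars example can be made concrete with a toy grammar in which the prepositional phrase may attach either to the verb phrase or to the noun phrase, yielding two distinct parse trees for the same string. The grammar below is invented for this illustration and again assumes NLTK is installed.

```python
# Toy grammar showing the structural ambiguity of
# "Sherlock saw the man with binoculars" (assumes nltk is installed).
import nltk

grammar = nltk.CFG.fromstring("""
S  -> NP VP
VP -> V NP | VP PP
NP -> 'Sherlock' | 'binoculars' | Det N | NP PP
PP -> P NP
Det -> 'the'
N  -> 'man'
V  -> 'saw'
P  -> 'with'
""")

parser = nltk.ChartParser(grammar)
sentence = "Sherlock saw the man with binoculars".split()
for tree in parser.parse(sentence):
    print(tree)
# Two trees: the PP modifies the VP (Sherlock used binoculars)
# or the NP (the man was holding binoculars).
```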

Area of Interest in Pragmatics

* The study of the speaker's meaning, not focusing on the phonetic or grammatical form of an utterance, but instead on what the speaker's intentions and beliefs are.

* The study of the meaning in context, and the influence that a given context can have on the message. It requires knowledge of the speaker's identities, and the place and time of the utterance.

* The study of implicatures, i.e. the things that are communicated even though they are not explicitly expressed.

* The study of relative distance, both social and physical, between speakers in order to understand what determines the choice of what is said and what is not said.

* The study of what is not meant, as opposed to the intended meaning, i.e. that which is unsaid and unintended, or unintentional.




Discourse Analysis

Discourse analysis (DA), or discourse studies, is a general term for a number of approaches to analyzing written, spoken or signed language use; as a branch of linguistics, it deals with both the study and the application of such approaches.

The objects of discourse analysis—discourse, writing, talk, conversation, communicative event, etc.—are variously defined in terms of coherent sequences of sentences, propositions, speech acts or turns-at-talk. Contrary to much of traditional linguistics, discourse analysts not only study language use 'beyond the sentence boundary', but also prefer to analyze 'naturally occurring' language use, and not invented examples. This is known as corpus linguistics; text linguistics is related.

Discourse analysis has been taken up in a variety of social science disciplines, including linguistics, sociology, anthropology, social work, cognitive psychology, social psychology, international relations, human geography, communication studies and translation studies, each of which is subject to its own assumptions, dimensions of analysis, and methodologies. Sociologist Harold Garfinkel was another influence on the discipline.


History

The term discourse analysis (DA) first came into general use following the publication of a series of papers by Zellig Harris beginning in 1952, reporting on work from which he had developed transformational grammar in the late 1930s. Formal equivalence relations among the sentences of a coherent discourse are made explicit by using sentence transformations to put the text in a canonical form. Words and sentences with equivalent information then appear in the same column of an array. This work progressed over the next four decades into a science of sublanguage analysis (Kittredge & Lehrberger 1982), culminating in a demonstration of the informational structures in texts of a sublanguage of science, that of immunology (Harris et al. 1989), and a fully articulated theory of linguistic informational content (Harris 1991). During this time, however, most linguists pursued a succession of elaborate theories of sentence-level syntax and semantics.


Although Harris had mentioned the analysis of whole discourses, he had not worked out a comprehensive model as of January 1952. A linguist working for the American Bible Society, James A. Lauriault/Loriot, needed to find answers to some fundamental errors in translating Quechua, in the Cuzco area of Peru. He took Harris's idea, recorded all of the legends and, after going over the meaning and placement of each word with a native speaker of Quechua, was able to form logical, mathematical rules that transcended the simple sentence structure. He then applied the process to another language of Eastern Peru, Shipibo. He taught the theory in Norman, Oklahoma, in the summers of 1956 and 1957 and entered the University of Pennsylvania in the interim year. He tried to publish a paper, Shipibo Paragraph Structure, but its publication was delayed until 1970 (Loriot & Hollenbach 1970). In the meantime, Dr. Kenneth L. Pike, a professor at the University of Michigan, Ann Arbor, taught the theory, and one of his students, Robert E. Longacre, was able to disseminate it in a dissertation.

Harris's methodology was developed into a system for the computer-aided analysis of natural language by a team led by Naomi Sager at NYU, which has been applied to a number of sublanguage domains, most notably to medical informatics. The software for the Medical Language Processor is publicly available on SourceForge.

In the late 1960s and 1970s, and without reference to this prior work, a variety of other approaches to a new cross-discipline of DA began to develop in most of the humanities and social sciences concurrently with, and related to, other disciplines, such as semiotics, psycholinguistics, sociolinguistics, and pragmatics. Many of these approaches, especially those influenced by the social sciences, favor a more dynamic study of oral talk-in-interaction.

Mention must also be made of conversation analysis, which was influenced by the sociologist Harold Garfinkel, the founder of ethnomethodology.

In Europe, Michel Foucault became one of the key theorists of the subject, especially of discourse, and wrote The Archaeology of Knowledge.


Topic of Interest
Topics of discourse analysis include:

* The various levels or dimensions of discourse, such as sounds (intonation, etc.), gestures, syntax, the lexicon, style, rhetoric, meanings, speech acts, moves, strategies, turns and other aspects of interaction
* Genres of discourse (various types of discourse in politics, the media, education, science, business, etc.)
* The relations between discourse and the emergence of syntactic structure
* The relations between text (discourse) and context
* The relations between discourse and power
* The relations between discourse and interaction
* The relations between discourse and cognition and memory




Semiotics
Semiotics, also called semiotic studies or semiology, is the study of sign processes (semiosis), or signification and communication, signs and symbols, and is usually divided into three branches:

* Semantics: Relation between signs and the things to which they refer; their denotata
* Syntactics: Relations among signs in formal structures
* Pragmatics: Relation between signs and their effects on those (people) who use them

Semiotics is frequently seen as having important anthropological dimensions; for example, Umberto Eco proposes that every cultural phenomenon can be studied as communication. However, some semioticians focus on the logical dimensions of the science. They examine areas belonging also to the natural sciences – such as how organisms make predictions about, and adapt to, their semiotic niche in the world (see semiosis). In general, semiotic theories take signs or sign systems as their object of study: the communication of information in living organisms is covered in biosemiotics or zoosemiosis.

Syntactics is the branch of semiotics that deals with the formal properties of signs and symbols. More precisely, syntactics deals with the "rules that govern how words are combined to form phrases and sentences." Charles Morris adds that semantics deals with the relation of signs to their designata and the objects which they may or do denote; and, pragmatics deals with the biotic aspects of semiosis, that is, with all the psychological, biological, and sociological phenomena which occur in the functioning of signs.

Semioticians classify signs or sign systems in relation to the way they are transmitted (see modality). This process of carrying meaning depends on the use of codes that may be the individual sounds or letters that humans use to form words, the body movements they make to show attitude or emotion, or even something as general as the clothes they wear. To coin a word to refer to a thing (see lexical words), the community must agree on a simple meaning (a denotative meaning) within their language. But that word can transmit that meaning only within the language's grammatical structures and codes (see syntax and semantics). Codes also represent the values of the culture, and are able to add new shades of connotation to every aspect of life.

To explain the relationship between semiotics and communication studies, communication is defined as the process of transferring data from a source to a receiver. Hence, communication theorists construct models based on codes, media, and contexts to explain the biology, psychology, and mechanics involved. Both disciplines also recognize that the technical process cannot be separated from the fact that the receiver must decode the data, i.e., be able to distinguish the data as salient and make meaning out of it. This implies that there is a necessary overlap between semiotics and communication. Indeed, many of the concepts are shared, although in each field the emphasis is different. In Messages and Meanings: An Introduction to Semiotics, Marcel Danesi (1994) suggested that semioticians' priorities were to study signification first and communication second. A more extreme view is offered by Jean-Jacques Nattiez (1987; trans. 1990: 16), who, as a musicologist, considered the theoretical study of communication irrelevant to his application of semiotics.

Semiotics differs from linguistics in that it generalizes the definition of a sign to encompass signs in any medium or sensory modality. Thus it broadens the range of sign systems and sign relations, and extends the definition of language in what amounts to its widest analogical or metaphorical sense. Peirce's definition of the term "semiotic" as the study of necessary features of signs also has the effect of distinguishing the discipline from linguistics as the study of contingent features that the world's languages happen to have acquired in the course of human evolution.

Perhaps more difficult is the distinction between semiotics and the philosophy of language. In a sense, the difference is a difference of traditions more than a difference of subjects. Different authors have called themselves "philosopher of language" or "semiotician". This difference does not match the separation between analytic and continental philosophy. On a closer look, there may be found some differences regarding subjects. Philosophy of language pays more attention to natural languages or to languages in general, while semiotics is deeply concerned about non-linguistic signification. Philosophy of language also bears a stronger connection to linguistics, while semiotics is closer to some of the humanities (including literary theory) and to cultural anthropology.

Semiosis or semeiosis is the process that forms meaning from any organism's apprehension of the world through signs.  



Sociolinguistics
Sociolinguistics is the study of the effect of any and all aspects of society, including cultural norms, expectations, and context, on the way language is used. Sociolinguistics differs from sociology of language in that the focus of sociolinguistics is the effect of the society on the language, while the latter's focus is on the language's effect on the society. Sociolinguistics overlaps to a considerable degree with pragmatics.

It also studies how language varieties differ between groups separated by certain social variables, e.g., ethnicity, religion, status, gender, level of education, age, etc., and how creation and adherence to these rules is used to categorize individuals in social or socioeconomic classes. As the usage of a language varies from place to place (dialect), language usage varies among social classes, and it is these sociolects that sociolinguistics studies.

The social aspects of language were in the modern sense first studied by Indian and Japanese linguists in the 1930s, and also by Gauchat in Switzerland in the early 1900s, but none received much attention in the West until much later. The study of the social motivation of language change, on the other hand, has its foundation in the wave model of the late 19th century. The first attested use of the term sociolinguistics was by Thomas Callan Hodson in the title of a 1939 paper. Sociolinguistics in the West first appeared in the 1960s and was pioneered by linguists such as William Labov in the US and Basil Bernstein in the UK.

Application of Sociolinguistics
For example, a sociolinguist might determine through study of social attitudes that a particular vernacular would not be considered appropriate language use in a business or professional setting. Sociolinguists might also study the grammar, phonetics, vocabulary, and other aspects of this sociolect much as dialectologists would study the same for a regional dialect.

The study of language variation is concerned with social constraints determining language in its contextual environment. Code-switching is the term given to the use of different varieties of language in different social situations.

William Labov is often regarded as the founder of the study of sociolinguistics. He is especially noted for introducing the quantitative study of language variation and change,[2] making the sociology of language into a scientific discipline.





Psycholinguistics
Psycholinguistics or psychology of language is the study of the psychological and neurobiological factors that enable humans to acquire, use, comprehend and produce language. Initial forays into psycholinguistics were largely philosophical ventures, due mainly to a lack of cohesive data on how the human brain functioned. Modern research makes use of biology, neuroscience, cognitive science, and information theory to study how the brain processes language. There are a number of subdisciplines; for example, as non-invasive techniques for studying the neurological workings of the brain become more and more widespread, neurolinguistics has become a field in its own right.

Psycholinguistics covers the cognitive processes that make it possible to generate a grammatical and meaningful sentence out of vocabulary and grammatical structures, as well as the processes that make it possible to understand utterances, words, text, etc. Developmental psycholinguistics studies children's ability to learn language.

Methodologies of Psycholinguistics

Much methodology in psycholinguistics takes the form of behavioral experiments incorporating a lexical decision task. In these types of studies, subjects are presented with some form of linguistic input and asked to perform a task (e.g. make a judgment, reproduce the stimulus, read a visually presented word aloud). Reaction times (usually on the order of milliseconds) and proportion of correct responses are the most often employed measures of performance. Such experiments often take advantage of priming effects, whereby a "priming" word or phrase appearing in the experiment can speed up the lexical decision for a related "target" word later.

Such tasks might include, for example, asking the subject to convert nouns into verbs; e.g., "book" suggests "to write," "water" suggests "to drink," and so on. Another experiment might present an active sentence such as "Bob threw the ball to Bill" and a passive equivalent, "The ball was thrown to Bill by Bob" and then ask the question, "Who threw the ball?" We might then conclude (as is the case) that active sentences are processed more easily (faster) than passive sentences. More interestingly, we might also find out (as is the case) that some people are unable to understand passive sentences; we might then make some tentative steps towards understanding certain types of language deficits (generally grouped under the broad term, aphasia).
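The kind of inference sketched in this paragraph can be illustrated with a few lines of analysis code. The reaction times below are fabricated purely for the example (they are not real data), and the sketch assumes the scipy package is available for the t-test.

```python
# Illustrative sketch only: comparing invented reaction times (in ms) for
# active vs. passive sentences; the numbers are made up, not experimental data.
from statistics import mean
from scipy import stats

active_rts = [412, 398, 455, 430, 401, 444, 420, 415]
passive_rts = [498, 512, 476, 530, 505, 488, 520, 495]

print("mean active :", mean(active_rts))
print("mean passive:", mean(passive_rts))

t, p = stats.ttest_ind(active_rts, passive_rts)
print(f"t = {t:.2f}, p = {p:.4f}")   # a small p-value would suggest actives are processed faster here
```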

Until the recent advent of non-invasive medical techniques, brain surgery was the preferred way for language researchers to discover how language works in the brain. For example, severing the corpus callosum (the bundle of nerves that connects the two hemispheres of the brain) was at one time a treatment for some forms of epilepsy. Researchers could then study the ways in which the comprehension and production of language were affected by such drastic surgery. Where an illness made brain surgery necessary, language researchers had an opportunity to pursue their research.

Newer, non-invasive techniques now include brain imaging by positron emission tomography (PET); functional magnetic resonance imaging (fMRI); event-related potentials (ERPs) in electroencephalography (EEG) and magnetoencephalography (MEG); and transcranial magnetic stimulation (TMS). Brain imaging techniques vary in their spatial and temporal resolutions (fMRI has a resolution of a few thousand neurons per pixel, and ERP has millisecond accuracy). Each type of methodology presents a set of advantages and disadvantages for studying a particular problem in psycholinguistics.

Computational modeling - e.g. the DRC model of reading and word recognition proposed by Coltheart and colleagues - is another methodology. It refers to the practice of setting up cognitive models in the form of executable computer programs. Such programs are useful because they require theorists to be explicit in their hypotheses and because they can be used to generate accurate predictions for theoretical models that are so complex that they render discursive analysis unreliable. One example of computational modeling is McClelland and Elman's TRACE model of speech perception.

More recently, eye tracking has been used to study online language processing. Beginning with Rayner (1978) the importance and informativity of eye-movements during reading was established. Tanenhaus et al. have performed a number of visual-world eye-tracking studies to study the cognitive processes related to spoken language. Since eye movements are closely linked to the current focus of attention, language processing can be studied by monitoring eye movements while a subject is presented with linguistic input. 





Neurolinguistics
Neurolinguistics is the study of the neural mechanisms in the human brain that control the comprehension, production, and acquisition of language. As an interdisciplinary field, neurolinguistics draws methodology and theory from fields such as neuroscience, linguistics, cognitive science, neurobiology, communication disorders, neuropsychology, and computer science. Researchers are drawn to the field from a variety of backgrounds, bringing along a variety of experimental techniques as well as widely varying theoretical perspectives. Much work in neurolinguistics is informed by models in psycholinguistics and theoretical linguistics, and is focused on investigating how the brain can implement the processes that theoretical and psycholinguistics propose are necessary in producing and comprehending language. Neurolinguists study the physiological mechanisms by which the brain processes information related to language, and evaluate linguistic and psycholinguistic theories, using aphasiology, brain imaging, electrophysiology, and computer modeling.


The History of Neurolinguistics

Neurolinguistics is historically rooted in the development in the 19th century of aphasiology, the study of linguistic deficits (aphasias) occurring as the result of brain damage. Aphasiology attempts to correlate structure to function by analyzing the effect of brain injuries on language processing. One of the first people to draw a connection between a particular brain area and language processing was Paul Broca, a French surgeon who conducted autopsies on numerous individuals who had speaking deficiencies, and found that most of them had brain damage (or lesions) on the left frontal lobe, in an area now known as Broca's area. Phrenologists had made the claim in the early 19th century that different brain regions carried out different functions and that language was mostly controlled by the frontal regions of the brain, but Broca's research was possibly the first to offer empirical evidence for such a relationship, and has been described as "epoch-making" and "pivotal" to the fields of neurolinguistics and cognitive science.

Later, Carl Wernicke, after whom Wernicke's area is named, proposed that different areas of the brain were specialized for different linguistic tasks, with Broca's area handling the motor production of speech, and Wernicke's area handling auditory speech comprehension. The work of Broca and Wernicke established the field of aphasiology and the idea that language can be studied through examining physical characteristics of the brain. Early work in aphasiology also benefited from the early twentieth-century work of Korbinian Brodmann, who "mapped" the surface of the brain, dividing it up into numbered areas based on each area's cytoarchitecture (cell structure) and function; these areas, known as Brodmann areas, are still widely used in neuroscience today.

The coining of the term "neurolinguistics" has been attributed to Harry Whitaker, who founded the Journal of Neurolinguistics in 1985.

Although aphasiology is the historical core of neurolinguistics, in recent years the field has broadened considerably, thanks in part to the emergence of new brain imaging technologies (such as PET and fMRI) and time-sensitive electrophysiological techniques (EEG and MEG), which can highlight patterns of brain activation as people engage in various language tasks; electrophysiological techniques, in particular, emerged as a viable method for the study of language in 1980 with the discovery of the N400, a brain response shown to be sensitive to semantic issues in language comprehension. The N400 was the first language-relevant brain response to be identified, and since its discovery EEG and MEG have become increasingly widely used for conducting language research.



Stylistics
Stylistics is the study of varieties of language whose properties position that language in context, and tries to establish principles capable of accounting for the particular choices made by individuals and social groups in their use of language. A variety, in this sense, is a situationally distinctive use of language. For example, the language of advertising, politics, religion, individual authors, etc., or the language of a period in time, all are used distinctively and belong in a particular situation. In other words, they all have ‘place’ or are said to use a particular 'style'.

Stylistics is a branch of linguistics, which deals with the study of varieties of language, its properties, principles behind choice, dialogue, accent, length, and register.

Stylistics also attempts to establish principles capable of explaining the particular choices made by individuals and social groups in their use of language, such as socialisation, the production and reception of meaning, critical discourse analysis and literary criticism.

Other features of stylistics include the use of dialogue, including regional accents and people’s dialects, descriptive language, the use of grammar, such as the active voice or passive voice, the distribution of sentence lengths, the use of particular language registers, etc.

The situation in which a type of language is found can usually be seen as appropriate or inappropriate to the style of language used. A personal love letter would probably not be a suitable location for the language of this article. However, within the language of a romantic correspondence there may be a relationship between the letter’s style and its context. It may be the author’s intention to include a particular word, phrase or sentence that not only conveys their sentiments of affection, but also reflects the unique environment of a lover’s romantic composition. Even so, by using so-called conventional and seemingly appropriate language within a specific context (apparently fitting words that correspond to the situation in which they appear) there exists the possibility that this language may lack exact meaning and fail to accurately convey the intended message from author to reader, thereby rendering such language obsolete precisely because of its conventionality. In addition, any writer wishing to convey their opinion in a variety of language that they feel is proper to its context could find themselves unwittingly conforming to a particular style, which then overshadows the content of their writing.  



Source: www.wikipedia.org
             (repost from Ary Holmes)