Using Corpora in Discourse Analysis by Paul Baker

Using Corpora in Discourse Analysis, Paul Baker (ebook)
ISBN: 0826477259
Pages: 206
Publisher: Continuum
Format: PDF


We used our word list to "compare different corpora, such as those that represent spoken versus written discourse, or American versus British English for example" (Adolphs, 40). @jankenb2 would replace discourse analysis with what Evan calls simulation. I need to be careful applying the above article's methods to my abstracts corpora, because these corpora are really multiple texts written by multiple authors, while the methodology linked above is designed to analyse self-contained texts. DICTION, for its part, can process a variety of English-language texts using a 10,000-word corpus and user-created custom dictionaries.

For my current question (what concepts occur most …): in other words, my question is an indirect way to test the cohesion of rhetorical studies' discourse about that discipline's defining term (Sinclair, 14). Concordance lines are useful, according to Adolphs in the book Introducing Electronic Text Analysis, for visualising the data: one search item is seen as a node, and its use can be easily portrayed in the concordance layout. DICTION also reports normative data for each of its forty scores, based on a 50,000-item sample of discourse.

The authors of a recent publication based on the British Academic Written English (BAWE) Corpus point out that the starting point of their research into student writing across disciplines was not the texts themselves but their disciplinary … Based on this analysis, they could design EAP teaching materials that would give students linguistic and stylistic strategies for becoming accepted members of their discourse community, and challenge them with case-based learning scenarios depending on the students' level.

Case interaction is the problem that a corpus is not a random sample and its cases are not independent of each other (they interact), yet we use statistical methods which assume that the corpus is random.
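Adolphs' point about concordance lines can be sketched in code: a minimal key-word-in-context (KWIC) concordancer that treats one search item as the node and prints its co-text on either side. This is an illustrative sketch, not the implementation of any tool mentioned above; the sample sentence and the `width` parameter are invented for the example.

```python
import re

def kwic(text, node, width=30):
    """Minimal KWIC concordancer: return each occurrence of the node
    word with `width` characters of co-text on either side, in the
    aligned layout typical of concordance displays."""
    lines = []
    for m in re.finditer(r"\b%s\b" % re.escape(node), text, re.IGNORECASE):
        left = text[max(0, m.start() - width):m.start()]
        right = text[m.end():m.end() + width]
        lines.append(f"{left:>{width}} [{m.group(0)}] {right:<{width}}")
    return lines

sample = ("Corpora represent spoken discourse and written discourse; "
          "discourse analysis compares corpora of each kind.")
for line in kwic(sample, "discourse"):
    print(line)
```

Each printed line centres the node in brackets, so repeated uses of the search item can be read vertically, as in Adolphs' concordance layout.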
DICTION is a computer-aided text analysis program for determining the tone of a verbal message; it searches a passage for five general features as well as thirty-five sub-features. Discourse analysis more broadly, by contrast, focuses on the qualitative analysis of discourse, seen as a concrete socio-historical formation characterised by particular ways of using language.
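A dictionary-based tone score of the kind DICTION computes can be sketched as follows: count hits from a feature dictionary in a passage, then locate that raw count against normative data. The toy "certainty" dictionary, the sample passage, and the normative mean and standard deviation here are all invented for illustration; DICTION's actual dictionaries and its 50,000-item norms are not reproduced.

```python
# Hedged sketch of dictionary-based scoring in the style of DICTION:
# the dictionary and the normative statistics below are hypothetical.

CERTAINTY_WORDS = {"all", "always", "must", "never", "every"}  # toy dictionary

def feature_score(text, dictionary):
    """Count tokens of the passage that appear in the feature dictionary."""
    tokens = text.lower().split()
    return sum(tok.strip(".,;") in dictionary for tok in tokens)

def z_score(raw, norm_mean, norm_sd):
    """How far the passage sits from the norm, in standard deviations."""
    return (raw - norm_mean) / norm_sd

passage = "Every analyst must always check all assumptions."
raw = feature_score(passage, CERTAINTY_WORDS)
print(raw, round(z_score(raw, norm_mean=2.0, norm_sd=1.5), 2))
```

Reporting each score as a deviation from a normative sample, rather than as a raw count, is what lets a tone score be compared across passages of different registers.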
