
A theory of presentation and its implications for the design of online technical documentation
©1997 Detlev Fischer, Coventry University, VIDe (Visual and Information Design) centre

2    Methodology

The theory of presentation is a grounded theory. I will first describe what led to the use of the grounded theory method. I will then describe grounded theory and at the same time illustrate and qualify it through examples taken from its application in this study. After the exposition of grounded theory I will discuss the link between theory development and design practice.

2.1    Choice of methodology

During the research programme, direction and methods of research changed significantly. The programme began as an investigation of time-based aspects of multimedia design, and changed to a field-based investigation into users' presentation of various resources in response to situated needs. While the initial aim of the research had been a general syntax of time-based multimedia, it eventually became the generation of a theory of presentation grounded in the evidence collected in the field and during evaluations.

There are several reasons for this turnaround and the eventual uptake of the grounded theory method.

(1) Contacts with people and literature in the initial phase of my research slowly began to instil the notion that design and theory should relate to an empirical situation, not take its direction from available technology or academic interest [1].

(2) At the same time, the contact with Rolls Royce made me aware that so far, my prototype design had been more defined by my own requirements than by any appreciation of users' situation or needs. Given the opportunity to spend time in the Service Engineering section, I now wanted to find out more about the conditions and dynamics of a real situation of use.

(3) Studying the literature, particularly a range of PhD theses, I realised the dominance of a research design that constrains the scope of research to what can be rendered valid in terms of the canon of experimental science. Most researchers focus on a particular narrowly defined question, such as ‘Do animated images of a heart lead to better post-test retention scores than still images?’[2]  The results may be valid for the chosen narrow validation context (mostly some controlled laboratory setting), but they have limited applicability for realistic settings. There are severe methodological problems with the assumption of ‘all things being equal’ and related attempts to neutralise unwanted context [3].

The isolation of the research problem pays tribute to the arrangement which locates science as a metaphysical arbitrator above and beyond its referent. The basic figure is the ideological construction of some parts of the world as constant and others as variable—a construction which in any particular experimental design is expressed in the choice of independent and dependent variables. The selection of constants assumes that some aspects of the system (or design, or theory) can be excluded from the research of the empirical dynamics. The choice of ‘usefully stable’ parameters can be traced to social and economic arrangements which are treated as given, but could be different, if changing them was thought to be a legitimate enterprise. The measurement of comfort of car seats, for example (made possible by statistically defining the average body size), takes the rigid anchoring of seats as an independent variable, although exploratory research shows that the possibility to change sitting posture is most important for comfort (cf. Jones 1992/1970). Increasing the scope of design, the problem of ‘comfort’ would itself appear in a different light (and reveal different perspectives for change) if it was generally formulated as ‘comfort of travel’, not ‘comfort of travelling seated in a private car’.

The definition of research problem, methodological framework and hypotheses at the outset rules out recursive improvements or shifts of method beyond weeding out flaws through pilot runs. For design, and by implication design research, flexibility must be part of the method, if only for the fact that the introduction of the designed object or system, even as prototype, affects the domain's processes in many ways which are difficult or impossible to predict.

Methods introduce new domains, such as evaluations, that have their peculiar patterns and protocols which are unlike those of the referent domain. Reflection and adaptation of the methods used does not remove this problem, but makes it visible.

(4) As I began to explore and learn about the field, there were growing piles of evidence such as field notes, interview notes or transcripts, and evaluation reports. I needed a method to systematise research and data analysis without typing results according to preconceived ideas. Grounded theory method struck me as the most integrated and, at the same time, open approach to help me make sense of the diverse evidence. It acted as a super-method or framework for a variety of subordinate research methods such as participant observation, receptive and semi-structured interviews, brainstorming, and evaluation methods such as think-aloud protocol, comparative trialling, and question-asking (cf. appendix V–Evaluation).

2.2    Grounded theory method

The grounded theory method originates in the work of Glaser and Strauss [4].  It is inductive in that it proceeds from empirical incidents to theoretical concepts, and at the same time deductive in that it applies these concepts in its coding and sampling of data. Although grounded theory is now nearly 30 years old, it is discussed and recommended in many recent books on social research and methodology [5]. While it may not be at the height of its popularity, there are recent examples of its application [6].

Grounded theory method is a recursive process that links theory generation to data collection. It is this aspect which puts it close to iterative design and formative evaluation methods [7].  Recently it has also been compared with faceted classification [8].  The basic overlapping operations of the grounded theory method are data collection, coding, memoing, and sorting. (A demonstration of the grounded theory process concerning the development of one important category can be found in appendix VI–Grounded theory applied.)

2.2.1    Data collection

Data collection employs a variety of qualitative methods such as observation, receptive interviews and document analysis to collect the grounding evidence [9].  In this study, evidence consists of field notes, interview transcripts, evaluators' and users' reports, and documents collected in the field. Secondary evidence consists of the documents generated during theory development, such as line-by-line micro-analyses of field notes and reports, and memos on codes or categories. A further source for comparison was the reviewed literature.

Grounded theory eschews survey methods and structured interviews since these filter data according to preconceived categories. The basic attitude is to approach the field with an open mind and with as few preconceived concepts and hypotheses as possible. ‘Existing preconceptions about the object of study should be treated as preliminary, to be overcome as research produces new incongruent information.’[10]  In this study, the initial research object ‘new techniques for multimedia systems’ was gradually replaced by ‘users' presentation of resources for solving domain problems’.

Theoretical sampling means that the grounded theory process recursively links data collection to data analysis and coding, which begins as soon as the first data become available. Analysis suggests other samples of data as potentially relevant, for example, other informants or settings, other collection methods, other times of collection, etc. In this study, the initial loose research contact with the Department of Visual Communication turned out to be inadequate for the study of resource use, so I managed to gain access and permanent desk space in the Service Engineering department. This department has a pivotal position for presenting emergent problems in that it mediates between field representatives, airlines, and many in-house departments. To give another example of theoretical sampling, the results of cinegram evaluations with non-engineering students suggested that it would be important to evaluate the prototype with engineering students. Equally, the comparative method used in stage 2 of the evaluation suggested a dialogical approach followed in stages 3 and 4 (cf. appendix V–Evaluation).

2.2.2    Coding

The grounded theory method suggests the development of theoretical concepts from data through coding. The coding of data such as field notes and interview transcripts poses questions such as ‘What does this incident indicate?’ (Glaser 1978 p57). Coding proceeds line-by-line to avoid missing out important aspects which might escape in the overview approach of reading ‘over-all the data somewhat quickly, which yields an impressionistic cluster of categories’ (op cit p58). While substantive codes relate to objects and events in the data, conceptual codes integrate these on a higher level of abstraction and ‘get the analyst off the empirical level’ (op cit p55). Open coding opens up the data by generating as many preliminary categories, properties and dimensions as possible. Constant comparison moving back and forth between different codes and between indicators in the data informs the subsumption of individual substantive codes under more general conceptual categories. The individual substantive codes now begin to indicate dimensions of the conceptual category.

In this study, initial codes from the open coding process such as ‘awareness of task artificiality’ (users performing according to what they assumed to be evaluators' expectations), ‘local matching’ (users copying text from document systems onto their question catalogue), ‘anchoring’ (users quoting the ATA reference instead of giving a substantive answer) or ‘document authority’ (users' inclination to refer to the document as authority, as in ‘there must be good reasons for this’) all contributed to the category ‘validation context’. In evaluations with lay novices the validation context is local and ungrounded, while in the service engineering domain, it is grounded in anticipated processes in the referent domain.
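The subsumption of substantive codes under a conceptual category, as described above, has the shape of a small data-structure exercise. The following Python fragment is purely illustrative: the incident descriptions and function names are hypothetical, and the study itself coded on paper rather than in software.

```python
# Illustrative sketch of open coding and constant comparison.
# Each incident in the data is tagged with a substantive code;
# comparison then subsumes these codes under a conceptual category,
# where each code indicates one dimension of that category.
# (Hypothetical tooling; all labels are examples, not study data.)

from collections import defaultdict

incidents = [
    ("user copies text onto question catalogue", "local matching"),
    ("user quotes ATA reference instead of answer", "anchoring"),
    ("user defers to the document as authority", "document authority"),
    ("user performs to evaluator's assumed expectations", "awareness of task artificiality"),
]

category = "validation context"
dimensions = defaultdict(list)
for incident, code in incidents:
    dimensions[code].append(incident)

def summarise(category, dimensions):
    # The substantive codes now indicate the dimensions of the category.
    return {category: sorted(dimensions)}
```

The point of the sketch is only the shape of the operation: many incident-level codes collapse into the dimensions of one conceptual category, which is what ‘gets the analyst off the empirical level’.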

2.2.3    Memoing

Memoing is interleaved with coding. ‘Memos are the theorizing write-up of ideas about codes and their relationships as they strike the analyst while coding’. (Glaser 1978 p83) The advice is to ‘stop and memo’ as coding sparks off ideas.

Memoing reflects the process of constant comparison across indicators and codes. It saturates dimensions of the main categories that have emerged through coding and constantly generates open questions for further coding and data collection.

In this study, one problem was memo management. It was difficult to keep track of the developing memos, particularly since some were word-processed while others were hand-written. Since memory of older memos faded and new ideas wanted to be recorded before they got lost, there was a lot of overlap. Similar connections were beginning to appear again and again in different memos saved under different descriptors. The process of writing drew in, or shifted to, other categories so the memo descriptor did not fit anymore, which led to splitting memos and to slightly different versions of the same memo under different descriptors.
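The memo-management problem described above is, at bottom, an indexing problem: a memo that drifts across categories needs to remain retrievable under several descriptors at once. The following is a minimal, hypothetical sketch of such an index; the study itself mixed hand-written and word-processed memos and had no such tooling.

```python
# Minimal sketch of a memo index that tolerates drift: a memo may be
# filed under several descriptors, so related memos resurface together
# instead of hiding behind a single, outdated label.
# (Hypothetical illustration; names and memo texts are examples.)

from collections import defaultdict

class MemoIndex:
    def __init__(self):
        self._by_descriptor = defaultdict(list)
        self._memos = {}

    def add(self, memo_id, text, descriptors):
        # File one memo under every descriptor that applies to it.
        self._memos[memo_id] = text
        for d in descriptors:
            self._by_descriptor[d].append(memo_id)

    def lookup(self, descriptor):
        # All memo texts filed under this descriptor, in order of filing.
        return [self._memos[m] for m in self._by_descriptor[descriptor]]

index = MemoIndex()
index.add(1, "schedules also cover flights and training", ["schedule", "presentation"])
index.add(2, "emergent presentation vs. schedule", ["presentation"])
```

Multiple filing does not remove the overlap between memos, but it makes the overlap visible in one place instead of scattering near-duplicates under different descriptors.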

Memoing develops the core category around which the other categories integrate. The core category has no transcendental prerogative; it simply integrates the theory according to the emergent perspective of investigation and thereby defines its cut-off points. However, the core category has earned its relevance through the grounding of the theory in the domain. ‘It must be central, i.e., related to as many other categories and their properties as possible…and account for a large portion of the variation in a pattern of behaviour’ (Glaser 1978 p95). It must also occur frequently, be completely variable, and ‘have a clear and grabbing implication for formal theory’ (ibid). Which exact descriptor for the core category is chosen may involve some arbitrariness on the part of the researcher. In this study, the core was first labelled ‘activity’, pitting ‘user activity’ against ‘document activity’. It was later re-labelled ‘presentation’, pitting ‘emergent presentation’ against ‘schedule’. While being broadly equivalent, the later descriptors allowed a wider scope. For example, ‘schedules’ were not confined to document activity, but included flight schedules, maintenance schedules or training schedules.

2.2.4    Sorting

The sorting of memos goes some way towards resolving the memo management problem. Grounded theory is not written according to a pre-conceived outline; instead, the outline emerges during the sorting process.

Sorting presents the theory by differentiating and segmenting it. It thereby forces comparison and clarification of codes both substantially and on the level of terminology as similar memos are brought close together in one section. Sorting sparks new memos on interrelationships between codes which are sorted into the emerging outline.

In this study, one problem was premature close-out of sorting in order to meet the draft deadline. The resulting redundancy and the lack of integration in the draft suggested taking up sorting once more. I sorted all printed or hand-written memos, including the draft, which I cut up. One problem was the existence of two materially distinct outlines: one in the paper-based sort, the other in the outline mode of the word processor. The former allows the inclusion of hand-written memos and quickly sketched conceptual diagrams; the latter allows quick rearrangement of sections within and between chapters, which means that the stages of sorting and writing tend to blur. At some point during writing the computer-based outline took over; it simply became too laborious to re-sort the paper-based outline only to secure a correspondence to the emerging word-processed text.

2.2.5    Writing

Writing turns the sort into a text. Sorting has created the outline of the theory which largely determines the order of chapters and sections within chapters. This order introduces concepts in a cumulative fashion and thereby minimises redundancy. Since writing is the stage within the grounded theory process which is most dependent on the style and personal predilections of the author, I do not intend to cover it in much detail [11].  Perhaps the most important aspect of writing is that it presents conceptual difficulties and overlaps in full scope. This can generate further memos and change the outline.

In this study, for example, writing suggested some important changes to the sorted outline. The category of confluence was first sorted as a property of navigation, although it equally relates to articulation. As the chapters on articulation and navigation were written, it became clear that confluence deserved a chapter in its own right. (The development of the concept of confluence is traced in appendix VI–Grounded theory applied.)

2.2.6    Synthesis

The protocol for PhD theses requires a ‘new contribution to knowledge’. The phrase implies knowledge as a growing store of facts to be augmented, refined, or falsified, but forbids questions as to the usefulness of the store itself. The novelty must be measurable in comparison to existing objects of scientific endeavour—an activity which implies a fundamental complicity with the structure and implicit ideology of these objects. Results will only be recognised as new knowledge if a double projection can conceive of them as old knowledge. The anchoring of the work in the scientific tradition is not so much an expression of learning and gratitude as the stochastic justification of past endeavours which finds its material expression in citation records.

By contrast, the grounded theory process anchors the emerging theory through the comparative analysis of all the data collected in a substantive domain. This works against the bias of any specialised ‘academic’ problem since the core process around which the theory aggregates only emerges through the analysis of the entire setting, albeit from a particular interested perspective. The substantive theory, then, contributes to knowledge through a synthesis based on grounded evidence. This synthesis allows the assessment, integration and modification of diverse analytical concepts generated by separate academic disciplines. Where the theory makes use of existing concepts, it often extends their scope and discovers important new relationships. In this study, for example, the concept of navigation described in chapter 6–Navigation extends beyond the immediate handling of one resource, and relates navigation to users' simultaneous articulation of new resources.

2.2.7    Scope for change

The grounded theory method explicitly acknowledges the hypothetical nature of the generated theory and its openness for change [12]. ‘A theory must be readily modifiable, based on ever-emerging notions from more data.’ (Glaser 1978 p4). The hypotheses of a theory are necessarily incomplete when validation is linked to the recursive changes of research process and researched processes.

In this study, changes came about as the study of field notes led to questions which revealed incorrect assumptions and prompted corrections and explanations by service engineers [13], or when users read evaluation protocols and indicated errors or misrepresentations. The cinegram evaluations also repeatedly revealed problematic features of design that were then changed and re-evaluated.

2.3    Field work

The field work contributed the bulk of evidence on which the theory of presentation rests. It began with a one-week attachment to the department of Visual Communication at Rolls Royce plc in Derby. The manager of Visual Communication then arranged many interviews with people in the departments of Technical Publications, CADDS Systems, Preliminary Design, Aircraft Projects, Customer Support, Sales Support, Customer Training, and the Technical Library.

At the end of the week I chose ETGs (Engineering Technical Graphics) as the starting point for my practical work since this document type seemed to have an integrating role within a voluminous and highly fragmented body of formal technical documentation (cf. appendix II–ATA-specified documents). Also, it seemed appropriate for a transformation into an animated on-line document system. The choice was informally endorsed by the manager of Visual Communication at Rolls Royce, and by advisers to the research project. The oil system of the recent Trent 700 engine was chosen as referent system since the respective ETG happened to be drawn at that time and provided opportunities for discussions of techniques and design decisions with its designers.

Every few weeks, I travelled to Derby and visited the designers at Visual Communication. This was a phase of apprenticeship [14] in the art of designing ETGs, with the aim of designing the cinegram. I could follow, participate in and benefit from discussions about design problems posed by currently drawn ETGs. My presence led to ‘receptive interviews’ [15].  However, involvement fell short of ‘going native’. My visits were infrequent. I was not commissioned to do any work, but merely tolerated as an observer. Also, there was a cultural and skill differential. The designers at Visual Communication were trained technical illustrators, while I was a trained film maker and hypermedia designer with no prior experience of the engineering context. They had years of professional experience in an engineering company, I had months of freelance multimedia design experience. They were English, I was German, although on the surface, there was no ‘language problem’.

The allocation of desk space in the department of Service Engineering brought an important qualitative change. Having a desk to sit and write at allowed immersion in the surroundings and observation of engineers' resource use, movements and encounters in the office space [16]. I could also overhear and jot down fragments of conversations, desk-to-desk and to customers on the phone. People were aware of my overhearing and watching their activities without being too bothered about it. I made it implicitly clear that I had no hidden agenda by asking engineers about things I had noticed but not understood.

The general pattern of field work consisted in periods of observation interleaved with interviews or conversations in which engineers often qualified my observations. Much of the discussion was initially on the level of the physical referent system, which enabled me to expand my technical knowledge and substantive competence. I then steered the discussion to the secondary level of resource use.

2.4    Design practice and theory

An important feature of the research design was the close link between theory development and design practice. The design practice (which includes the evaluation of artefacts with users) becomes a medium of theory development as much as the grounded theory process and, on another level, the logic and rhetoric of discourse.

However, it would be naïve to assume a direct transfer between theory and practice. The ambitious interventionist program of action research has given way to a more cautious view which assumes that the science system and the practice system have different referents and modes of operation (Moser 1995 pp70). In this view, the pivot of scientific discourse is truth, while that of the discourse of practice is utility (‘Brauchbarkeit’). The constructivist view of science as differential processing of truths, however, is itself ideological. The operational independence of the ‘science system’ is tied to an historical situation which assumes the segregation of the discourse of the observer from that of the observed. This situation has begun to change dramatically in recent years. Traditionally, publications are consumed without access to the context of their development. On such a basis, the ‘truth’ of science must be constructed immanently through the form of its own discursive modus: that it relates observations while accounting for its methods of observation, which means that it constitutes the very context in which it processes the truth value of its hypotheses. This view of truth disregards the fact that any scientific reading routinely goes beyond the narrow boundaries of the constructed context. The reader will not only draw comparisons with other publications in the same field, but compare the publication's context and scope with that of other available resources about its reference domain, such as personal experience and observation, hearsay, and various more or less reliable conjectures and inferences. ‘Truth’ is tied to this messy context. It is not a binary value of controlled and testable propositions, but rather, a complex sensation of evidence that is socially mediated.

The material conditions which seem to motivate the view of a self-referential, truth-processing science system are not a timeless fact, but historically determined. The internet publishing model has the potential to change the conditions of consumption of scientific publications, e.g., by documenting their referent domain and their social and discursive context. It is already commonplace that on-line publications include a pointer to the author. Social studies may include not only references to other publications, but on-line links into the researched domain. New forms of peer review may append many readers' comments to the on-line publication [17].  The potential of communicative validation beyond the observer perspective enshrined in the publication fundamentally alters the concept of truth.

This change is significant because a theory linked with practice is no longer exclusively bound by the characteristics of its medium, language. The Enlightenment had prepared the modelling of cognition according to ‘logos’, i.e., as a model structured (and constrained) by the discrete units of language. Descartes' third and fourth maxims of scientific method postulate discreteness and closure [18]. This conceptual dignity allowed language to go beyond the mere reflection of empirical objects. Its own objectivity, however, was modelled on the mechanical processes [19] at the beginning of industrialised production. Discrete objects that are the input and output of closed processes permeate the literature on cognition, and can still be found in textbooks today [20].  The weight of this tradition makes it difficult to perceive the world in terms of doeys rather than thingys [21]. Since the scientific argument usually processes its commodity—concepts—for extraneous persuasive reasons, i.e., in order to extract, like a profit, a competitive argument for ends which are not immediately coupled with its referents, it draws, so to speak, on the exchange-value rather than the use-value of concepts [22]. The latter can only be consumed when terms become a reflective moment of the very practice they describe.

Footnotes to chapter 2–Methodology

[1] The most important were Lindsay (various papers and personal communication); Jones (1992/1970); and Newman's talk at the BCS conference Multimedia Systems and Applications in 1993 (published as Newman 1995).

[2] This is the essence of the research question in Mayton (1990). Similar examples of quantitative research design in PhD theses can be found in Guri (1984), Hegarty (1988), Asoodeh (1993), Harrison (1993), Vullo (1993), and Gonzales (forth.).

[3] Pinnington (1991) analyses problems of this sort for evaluation design. Practitioners tend to favour ‘quick-and-dirty’ approaches with small samples (cf. Nielsen & Mack 1994; Tognazzini 1993).

[4] Cf. Glaser & Strauss (1967); Glaser (1978) and Strauss (1987).

[5] Cf. Strauss & Corbin (1990); Hammersley & Atkinson (1995); Moser (1995); Kelle (1994); Kleining (1994).

[6] At the time of its introduction the centre of grounded theory research was the medical field. Today it seems to lie somewhere between organisational research and information systems design.

[7] Cf. for example, Pinnington (1990), Brown (1994), Nielsen (1993), Usability evaluation (1994).

[8] Cf. Star (forth.).

[9] Methods such as participant observation (Waddington 1994), brainstorming (Jones 1992/1970), document analysis (Hodder 1994; Forster 1994), the think-aloud method (Tognazzini 1992; Nielsen & Mack 1994) or question-asking (Johnson & Briggs 1994) simply appear as subordinate methods contributing evidence for grounded theory development.

[10] This is my translation of rule 1 about the subject, the researcher, of Kleining's (1994 p23) four rules for qualitative social research. Kleining's position differs from that of Glaser (quoted in Kelle 1994, p335) who suggests that ‘… the analyst should just not know as he approaches the data, so he does not even have to waste time correcting his preconceptions’.

[11] The most useful recommendations for writing I have found in Glaser (1978).

[12] The proponents of business process re-engineering, a capitalist revolution confined to the level of organisation, emphasise the uselessness of a careful analysis of existing production processes since radical reengineering will wipe the slate clean for the design of more efficient processes (cf. Hammer & Stanton 1995). This indicates the temporal tolerances of validity and reliability: research might simply take too long to be of any relevance for a fast-changing reality. Changes from public to private funding are likely to align academic research to the time scales dictated by the accelerating economic system. Besides, this acceleration seems to favour methodologies such as grounded theory that recursively generate theory as soon as the first data are collected, and make an anachronism of those which require lengthy data collection before (statistical) analysis and evaluation can begin.

[13] For example, I had assumed that service engineers' attention to contradictions in accounts of operators implied a constant search for operators' errors which would help ward off liability claims. In commenting on the draft version of this study, one service engineer pointed out that ‘any reluctance we may have [to believe operators' accounts, D.F.] usually stems from inconsistencies in reports from different sources’ and only serves to exclude ‘the type of problem which is the easiest to understand and rectify’.

[14] Apprenticeship learning has been described by Lave & Wenger (quoted after Duguid & Brown 1992, p167) as ‘legitimate peripheral participation’ (LPP). According to this view ‘learning is a process of constructing an identity through joining (or developing) a "community-of-practice". Learning involves becoming an "insider"’ (op cit p168).

[15] The term ‘receptive interview’ is suggested by Kleining (1994 p123). In this form of interview, the interviewer is selected by the interviewee, who spontaneously produces unsolicited news of past or current events and problems related to the work.

[16] Cf. the study of in-house manual use by Mirel (1988).

[17] Cf. ‘Electronic refereeing’, The Economist, 22/6/96, p98

[18]‘…the third [maxim], to order my train of thought from the most elementary things to the cognition of the more complicated; the fourth, to account for everything [reasons, hypotheses, observations] completely and to comprehend everything to the extent that it is safe that nothing will be left out’. (Descartes, Discours de la méthode (1637). My translation after Kindlers Literatur Lexikon, 1974 p2707)

[19] Cf. Horkheimer (1970/1937 p18).

[20] Cf., for example, Smyth et al (1994) or Driscoll (1994).

[21] Lindsay (1995), personal communication.

[22] Cf. Marx’s (1970/1859 p27) fundamental distinction between the use-value and the exchange-value of commodities.

Last update: 16 February 2009