NOTES ON METHODS FOR THE STUDY OF WORK PRACTICES

(Unpublished Manuscript)

 

Brigitte Jordan

Last changes: May 2007

 

Note: an earlier version of this essay was written for participants of the Workplace Project at Xerox PARC, an interdisciplinary group of researchers who studied ground operations at a west coast metropolitan airport. It has since been used by project teams around the world for thinking through how the issues raised here apply to their own studies of “workscapes”: the integrated view of work practices and work settings.

 

1 PURPOSE AND ASSUMPTIONS

 

The intention of this document is to propose some topics and approaches for discussion by research groups engaged in the study of work practices.

 

In regard to questions of methodology, we proceed from the premise that the choice of methods for data collection and analysis should be determined by what “work” the data need to do for us, that is to say, what aspect of the phenomenon of interest they are to speak to. Given our multi-topical interests, we expect to use in our projects a rich array of methods from a variety of disciplines, some of which some of us have used in the past, others of which we expect to develop in the course of our work. At all times, we mean to remain accountable to the questions:

 

*          what are these data for?

 

*          are these the data we should collect for the purpose we have in mind (or are there others that might shed more light on the issues)?

 

*          what is the best way to display these data so they will speak to our partners in a powerful way?

 

In the work we are doing, we expect that our interests will require methods ranging from global ethnography to the micro-analysis of interaction between people and technologies of various kinds. One of the methodological challenges, then, is to determine the right mix of methods that will allow us to answer our research questions.

 

In the most general terms, our research questions are concerned with:

 

*           the social organization of work

 

*          the workplace as a learning environment

 

*          the temporal patterning of activity in the workplace

 

*          the relationship between physical space and the organization of activities

 

*          the flow of information within the organization and the consequent distribution of knowledge

 

*          the local design and redesign of artifacts, tools, technologies and practices

 

*          the role of artifacts and technologies in the conceptualization and execution of tasks

 

In what follows we will introduce some terminology for kinds of data, examine some of the issues around reliability, validity and sampling, and discuss some ways of collecting data in a systematic fashion.

 

2 KINDS OF DATA

 

In order for us to talk about what kinds of work our data will do for us, we want to have available a shared set of terms that describe categories of data. I’ll suggest a terminology below.

It is important to realize that the proposed categories are not mutually exclusive, that is to say, several may apply to a given data set. They are also not evaluative, that is to say, we realize that no one kind of data is intrinsically better than another; what is important is that they be appropriate for the research questions they are to speak to.

 

2.1 Quantitative and Qualitative Data

 

Quantitative data are data that have numbers attached to them; qualitative data are everything else. Note that it is often trivial to produce quantitative data by counting the occurrence of particular events, either directly in the field or through analysis of videotapes. The question of whether to go for quantitative or qualitative data should be decided by considering what work these data will do for us.

Quantitative data have two major uses:

1.        to describe some aspect of the world through numbers and graphs (descriptive statistics);

2.        to carry out formal hypothesis testing (inferential statistics).

 

For both purposes a thorough understanding of the phenomenon of interest is necessary before it makes sense to collect quantitative data. Otherwise, we don't know what we are describing, counting, or drawing inferences about.
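To make the two uses concrete, here is a minimal sketch in Python. The event being counted, the gate labels, and all the numbers are invented for illustration: descriptive statistics summarize the counts, and a simple permutation test stands in for formal hypothesis testing.

```python
import random
import statistics

# Hypothetical event counts: number of "walk-around inspections" observed
# per pre-departure period, at two invented gate areas.
gate_a = [3, 5, 4, 6, 5, 4, 7, 5]
gate_b = [2, 3, 2, 4, 3, 3, 2, 4]

# Descriptive statistics: describing some aspect of the world through numbers.
print("mean A:", statistics.mean(gate_a))
print("mean B:", statistics.mean(gate_b))
print("stdev A:", round(statistics.stdev(gate_a), 2))

# Inferential statistics: a permutation test of the hypothesis that the two
# gate areas have the same underlying rate of inspections.
def permutation_p_value(xs, ys, n_iter=10_000, seed=0):
    rng = random.Random(seed)
    observed = abs(statistics.mean(xs) - statistics.mean(ys))
    pooled = xs + ys
    hits = 0
    for _ in range(n_iter):
        rng.shuffle(pooled)
        diff = abs(statistics.mean(pooled[:len(xs)])
                   - statistics.mean(pooled[len(xs):]))
        if diff >= observed:
            hits += 1
    return hits / n_iter

print("p-value:", permutation_p_value(gate_a, gate_b))
```

The point of the sketch is the sequencing the text insists on: only after we understand what a "walk-around inspection" is, emically, does it make sense to count it, let alone test hypotheses about it.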

 

Qualitative data are all non-numeric data, such as responses to questions, stories, sketches, descriptive paragraphs written by a researcher, and the like. We particularly want to collect qualitative data when we are interested in multiple, ill-understood relationships in complex settings.

 

2.2 Semantic and Observational Data

 

Semantic (or elicited) data are data we get in response to questions that we ask as researchers, such as interview data, responses to questions asked on the fly, etc.

 

The distinction between

            *           what people say (something we can find out by asking questions), and

            *           what people, in fact, do (for which one ideally wants observational data)

is always important in organizational settings.

 

By the way, drawing attention to the say/do distinction is not to suggest that people lie or that you can't trust them, but rather to point out that the tellable and remarkable characteristics of their activities are something different from the activities themselves.

 

Observational data are data acquired through observation of the phenomenon of interest, through the eyes and ears of a researcher or other recording device. Thus fieldnotes, photographs, video tapes, counts (e.g. of persons, objects, behaviors) etc. constitute observational data.

 

One reason some of us are partial to videotaping is that it does not rely on the in situ sense-making of the researcher to the same extent as other methods. It also permits the generation of additional observational data off the tape, something that can be done from fieldnotes only to a negligible extent.

 

Note that a conversation on topic X that is overheard by the researcher would constitute observational data while the same information elicited through the researcher's question would constitute elicited data.

 

2.3 Emic and Etic Data

 

Emic data are data collected in categories relevant to the participants (members, informants, workers), in contrast to etic data which come from the perspective of the researcher. What the emic categories are in a given study population is an empirical question to the discovery of which much of our effort will be directed.

 

For example, one could imagine that we want to know what activities different types of persons engage in in an airport. Using our own etic categories for the airport (e.g. pilots, passengers, baggage) for observation and elicitation produces etic data.

 

However, once we've found out how persons are classified by people in the workplace, we can collect data about those categories, which are then emic data. For example, "ramp rat" and "blue goon" are emic categories used by airline personnel. One can easily imagine that data collected on, say, job performance, could look quite different if they were collected as emic or as etic data.

 

Note that one could think of etic data as the analyst's emic data.

 

A common misconception is that all data emanating directly from participants are emic data because they express the state of the world from the participants' perspective. But note that the crucial distinction lies in the category system which will be either native (emic) or researcher-generated (etic).

 

Whether we want to work with an emic or etic classification system depends on the uses to which we want to put our data. If we are seriously interested in how the tasks, activities, personnel and resources of the workplace are organized from the point of view of participants, we want to make very sure that we understand the emic system. On the other hand, our interest may lie in externally motivated categories that can be applied uniformly across systems without regard to local practices. We may, for example, be interested in "time of departure" (defined as the clock time at which the brakes are released before the plane pulls away from the gate). In that case we need to understand how people operationalize that locally and to what extent such a definition is shared across airports and airlines.

 

2.4 Documentary or Artifactual Data

 

are the terms we use to refer to the documents and artifacts generated or used in the workplace by participants, such as floor plans, input/output figures, a collection of baggage tags, internal memos, etc.

 

We will want to create an inventory of documents and an inventory of material culture of the workplace and understand how they function in the organization of work.

 

Documents (e.g. the paperwork required to get a plane pushed back) and other artifacts (e.g. sets of keys) shadow activities; they also shadow settings (e.g. the paperwork required at the opening and closing of an airport store).

 

2.5  Collateral Data

 

are data that come to us from outside the project that we nevertheless may find useful, such as projections for the national economy and how that influences the work place of interest; others' research results on the use of space; relevant literature; etc.

 

3 RELIABILITY AND VALIDITY

 

In all arenas, be they academic or corporate, making pronouncements about reliability and validity is a way of making judgments and claims about the quality, if not the relevance, of the research we have carried out.

 

Reliability is the degree to which a measuring instrument produces the same data on multiple occasions of use in the same setting. A thermometer that shows 100 degrees Celsius every time the water in my kettle comes to a boil produces reliable data.

 

A question (considered as a measurement device) produces reliable data if the answers to that question are pretty much the same no matter when or where you or somebody else might ask it of an informant. (Whether the informant is right or wrong in any “objective” manner is not relevant!)

 

For example, if the question “What do you do when you have the flu?” produces pretty much the same answer no matter whom we ask or who does the asking, then we are getting reliable data.
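When answers like these are later coded into categories (for instance off tape), reliability is commonly checked by having two coders code the same material independently and measuring their agreement. Here is a small illustration using Cohen's kappa, a standard statistic that corrects raw agreement for chance; the coders, labels, and episodes are all invented.

```python
from collections import Counter

# Two hypothetical coders independently label the same ten taped episodes
# with the answer category they heard.
coder_1 = ["rest", "rest", "medicate", "work", "rest",
           "medicate", "rest", "work", "rest", "medicate"]
coder_2 = ["rest", "rest", "medicate", "work", "medicate",
           "medicate", "rest", "work", "rest", "medicate"]

def cohens_kappa(a, b):
    """Agreement between two coders, corrected for chance agreement."""
    n = len(a)
    p_observed = sum(x == y for x, y in zip(a, b)) / n
    counts_a, counts_b = Counter(a), Counter(b)
    labels = set(a) | set(b)
    # Probability the coders agree by chance, given their label frequencies.
    p_chance = sum(counts_a[lab] * counts_b[lab] for lab in labels) / (n * n)
    return (p_observed - p_chance) / (1 - p_chance)

print(round(cohens_kappa(coder_1, coder_2), 2))  # → 0.84
```

Note that a high kappa tells us the coding scheme is replicable; it says nothing about whether the categories are valid, which is exactly the distinction the next paragraphs draw.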

 

                                                                                    Reliable data are replicable data.

 

Validity, on the other hand, refers to the degree to which we measure what we want to measure. If I use a thermometer to measure the weight of the water in my kettle, I may well get reliable (that is replicable) data but these data are not valid because I am not measuring what I think I am measuring.

 

A prerequisite for getting valid data from questions is that interrogator and respondent impute the same meaning to the question. If we ask questions about the flu and our respondents think about the common cold when they answer, we may well get reliable data which are, however, not valid: we are not measuring what we think we are measuring.

 

In general, the validity of answers is likely to be enhanced if the questions are asked in the setting they are about. So we would always prefer to elicit information about activities as they are actually being performed rather than in off-site, temporally displaced interviews.

 

Recently, researchers' preoccupation with reliability and validity has given way to concerns with the "robustness" of data, i.e. the degree to which they stand up to triangulation from a variety of standpoints and using a variety of methods. This is akin to our requirement for generalizability (see next section).

 

                                                Truth is to be found at the intersection of independent lies.

 

4 SAMPLING

 

How large a sample do we need before we can generalize? How many observations do we have to make? How many people do we have to interrogate, before we can be satisfied that we have discovered general patterns and not personally idiosyncratic or narrowly local behavior? This, again, is an empirical question, the answer to which depends on the complexity of the phenomenon under investigation, the internal variability of the data, and the degree of detail and certainty we require.

 

When we are making observations in the field, it is often the case that we see a particular event or action happening and wonder how typical it is. For example, in our observations of pre-departure activities at the airport, we saw the soon-to-depart pilot come down from the plane, walk around it in a sort of inspectionary manner, and go back up to the cockpit. Now, this could be a totally idiosyncratic action, i.e. this guy needed some fresh air. It could be an idiosyncratic but patterned action, i.e. this particular pilot always walks around the plane before taking off, for health reasons. And then again, we might have observed an activity that all pilots engage in, either because this is an unwritten rule within a community of practice or because this action is one that is prescribed by official rules and regulations. While in the first two instances we may have an interest in the pilot's action as an expression of idiosyncratic variability in behavior, we are primarily interested in the third kind, that is, the general case of pilots' behavior.

 

How do we determine if the present case is one or the other? There are two or three complementary approaches:

 

            1.            observe more cases;

            2.            ask experts in the field, i.e. pilots or ramp personnel;

            3.            read a manual (theoretically possible but not likely).

 

The big question, and one we would love to be able to respond to before we do our investigation, is: when will we have enough data to satisfy us and our corporate partners? This is a tough question.

Obviously, the longer we observe a given scene, the more people we talk to, and the more we ourselves learn to do the activities in question, the more confidence we have that we have enough data, that is to say, a sufficiently large sample. The problem is that, in most of our projects, there is always more to learn, another meeting to go to, another set of questions to ask.

 

Nevertheless, in most investigations, sooner or later there will come a point when we have found out "enough for the purposes at hand", where we begin to feel pretty confident that we have a handle on the situation. In general I would suggest that we have made enough observations and asked enough questions when there are no more surprises, when we are able to project, along with participants, what the next set of events in an action sequence is likely to be.
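This "no more surprises" criterion can be turned into a toy stopping rule: log the emic categories that are new in each field session and stop once several sessions in a row yield nothing new. A sketch follows; the session contents, category labels, and the rule's threshold are all invented for illustration.

```python
# Hypothetical log: emic categories newly heard in each successive
# field session at the airport.
sessions = [
    {"ramp rat", "blue goon", "pushback"},
    {"pushback", "bag runner"},
    {"ramp rat", "hot spot"},
    {"pushback"},
    {"blue goon"},
]

def sessions_until_saturation(sessions, k=2):
    """Return the 1-based index of the session at which k consecutive
    sessions have added no new category, or None if never saturated."""
    seen = set()
    quiet = 0  # consecutive sessions with no surprises
    for i, session in enumerate(sessions, start=1):
        new_categories = session - seen
        seen |= session
        quiet = 0 if new_categories else quiet + 1
        if quiet == k:
            return i
    return None

print(sessions_until_saturation(sessions))  # → 5
```

Such a rule is, of course, only a crude proxy for the richer criterion in the text: being able to project, along with participants, what the next set of events in an action sequence is likely to be.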

 

5 DATA COLLECTION

 

While in our work useful data are often collected "on the fly", it is also worthwhile to identify certain systematic ways of collecting data. These include:

 

5.1 The Person-Oriented Record (POR)

 

We are here interested in coming to understand what the working existence of a particular person is like for her or him: what is the sequence of activities in their daily round, what artifacts and technologies do they use, who do they interact with as required by procedure and informally as necessary, what are routinely encountered problems and rewards, what do they perceive as typical and exceptional in a particular day's work, and so on.

 

Note that we are actually not interested in the particular informant. What we are after is persons like her or him who do that kind of work; in other words, what we want to understand is the work practices, not the psychology or experience of the individual qua individual. Which types of persons we'll need Person-Oriented Records for depends on our assessment of relevant types of persons in the work setting. For an airport these might include passengers, flight personnel, store clerks, etc. One caution is to make sure we've got the emic category system right before making such decisions.

 

If there is, for example, some kind of “informal expert” in a work system, whether there exists an emic label or not, we had better do a POR on that person.

 

How to obtain a Person-Oriented Record:

·      a researcher follows the target person around on the daily round as a more or less silent observer while compiling time-line notes (see below); or,

·      the researcher follows the target person around but with an active attitude of apprenticeship.

 

The latter may provide deeper insights but is also more intrusive and leaves less time for observation and recording. Both activities should be accompanied by a running tape recorder. The compiled notes could then be used as an elicitation device to get the informant to comment on that day regarding its typicality, difficulty, special conditions, etc.

 

·      Ask the person to carry a recorder if we can't follow them around.

 

This is somewhat problematic. For one thing, 8-10 hours of a working day on tape is extremely time-consuming for the researcher to listen to. But it may be possible to do this for some segment of the day which is otherwise not accessible to us, e.g. some meetings.

 

·      Ask the person to do a time-line record for us.

 

This involves giving her or him a set of note cards, one for each time period, on which they are asked to describe their major activity during that period. Time periods could be half-hour chunks, or as fine-grained as five minutes, or they could be based on “significant events as they occur.” (The latter tends to be problematic unless the informant is an active part of the research team and understands its agenda in a deep way.) Where cell phones are ubiquitous, giving study participants a cell phone may be an obvious solution. This method has the advantage of immediate recording in contrast to later asking "tell me what your day was like," which relies on much more reprocessed and therefore questionable materials. However, it is important to realize that in many circumstances this may be too much to ask of people actively engaged in doing their work.
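As an illustration of what such a time-line record might look like once transcribed as data, here is a minimal sketch; the entry structure, field names, and activities are invented, not a prescribed format.

```python
from dataclasses import dataclass
from datetime import time

# One entry per half-hour chunk, as described above.
@dataclass
class TimeLineEntry:
    start: time
    end: time
    activity: str  # the informant's own description, verbatim

card = [
    TimeLineEntry(time(8, 0), time(8, 30), "opened store, counted register"),
    TimeLineEntry(time(8, 30), time(9, 0), "restocked shelves, one customer"),
]

def total_minutes(entries):
    """Minutes of the working day covered by the record so far."""
    return sum((e.end.hour * 60 + e.end.minute)
               - (e.start.hour * 60 + e.start.minute) for e in entries)

print(total_minutes(card))  # → 60
```

Keeping the informant's wording verbatim in the activity field preserves the emic character of the record; recoding into analyst categories can come later.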

 

5.2 The Object-Oriented Record (OOR)

 

Here we are interested in following the career of an object, artifact or document through the system. Again, our interest is not in the particular object but in its typicality, in the fact that it can inform us about the path of objects like it. Initially, we may want to look at a typical piece of baggage or a plane (as an object, though a plane could also be treated as a setting -- see below), but the determination of what set of objects is salient and should be followed must wait until a preliminary analysis of the workplace has been performed.

 

How to obtain an Object-Oriented Record:

·      Follow the object in any way possible, documenting its physical path, who interacts with it, and what kinds of modifications it undergoes.

 

·      Note particularly who has rights to touch, manipulate, impede, derail, modify and so on.

 

·      We may want to try to "instrument" an object, e.g. a piece of luggage, with a camera, tape recorder, or an “active badge” that allows tracing its course.

 

·      Experimentation: Objects salient in a particular cultural system always have jurisdictions attached to them, i.e. notions of who has rights and duties to manipulate them. These are sometimes expressed spatially (i.e. the place where the object is located is inaccessible to all but those entitled to employ it) but often the jurisdiction is displayed and enforced nonverbally. These are the things that are out-of-awareness for participants and are visible only in the breach. We may want to run a series of "natural experiments" where we arrange to systematically vary the taken-for-granted ground rules.

 

In an earlier study of patient/physician interaction, for example, once we had strong intuitions about the role of medical charts, we began to experiment with systematic violations of such rules as "the patient must not touch, read, or write in the record". Such experiments need to be run somewhere other than in the primary field site in order to avoid jeopardizing cooperative relationships.

 

5.3 The Setting-Oriented Record (SOR)

 

Here we want to understand what goes on in a particular setting, such as a plane or an operations room, throughout the daily, weekly, or other cycle. We are interested in the range and distribution of persons, objects (fixed and mobile) and activities in the particular setting under consideration.

 

Here is where videotaping is particularly useful, either with a single fixed camera or shooting with two cameras from different angles. Having a camera in place that can be remotely activated is one unobtrusive way of gathering data. On occasion, one may want to get 24-hour video records of particular settings. Participant-defined "hot spots" may be especially productive.

 

5.4 The Task-Oriented Record (TOR)

 

We are interested in the recurring, regular features of emically or etically defined tasks, such as opening up for business, handling a complaint, making out a ticket, dispatching a plane, etc.

 

The choice of tasks awaits our overview analysis of what the relevant tasks are for the various players. Note that tasks may be spatially fixed if the necessary technology is stationary (e.g. the ticket agent's job) or mobile, i.e. do-able in a variety of locations.

 

A major issue here is the grain-size of a task. We could think of a task as a complex system of activities such as “getting a plane out” or “making a sale.” But it could also be a much smaller-scale activity, such as “calling service.”

 

How to obtain a Task-Oriented Record:

Since we are talking here about activities that are temporally delineated and often spatially confined, videotaping may be an attractive method, in particular if it is complemented by elicitation of issues arising from tape analysis.

 

For complex coordinated but geographically distributed activities, distributed research teams or at least distributed recording technologies may be necessary. One might have recorders going in various locations to ascertain coordination of efforts.

 

•••

 

Let this suffice for a first tutorial. In the end, what is important to remember is that a good research design doesn't come from applying ready-made recipes. Rather, it comes from "navigating through a sea of contingencies" with one's eyes open and one's common sense engaged.

 

                                                                        In God we Trust

                                                                        All Others Need Data

 

(Sweatshirt design at the national meetings

of the American Statistical Association)