First Monday

Agile research by Michael Twidale and Preben Hansen



Abstract
In this paper we ask: “how might we take the ideas, the methods and the underlying philosophy behind agile software development and explore applying them in the context of doing research — even research that does not involve software development?” We look at some examples of agile research methods and think about how they might inspire the design of even better methods. We also try to address some potential criticisms of an approach that aims to minimize a need for Big Design Up Front by developing tighter iteration cycles, coupled with reflection and learning as part of a process for doing research.

Contents

Introduction
Brief overview of agile software development
Considering the similarities between a software development project and a research project
Certain kinds of agile for certain kinds of research
Some examples of agile research methods
Caveats and clarifications
Conclusion

 


 

Introduction

Most of us struggle when starting a new research project, even if we have considerable prior experience. It is a new topic and we are unsure about what to do, how to do it and what it all means. We may not have reflected much on our research process. Furthermore, the way that research is described in the literature can be rather disheartening. Those papers describe what seems to be a nice, clear, linear, logical, even inevitable progression through a series of stages. It seems like proper researchers carefully plan everything out in advance and then execute that plan. How very different from the mess, the bewilderment, the false starts, the dead ends, the reversions and changes that we make along the way. Are we just doing research wrong? If it feels like that to established researchers with decades of experience and a nice publication record, how much worse must it feel to a new researcher, such as a Ph.D. student? If they are lucky, they may have a good mentoring experience, effectively serving an apprenticeship with a wise and nurturing adviser in a supportive group of fellow researchers. Even so, it can be all too easy to feel like an imposter who must be doing it all wrong, because what you are doing is not at all like what you read about what others are doing.

In the light of these confusions, fears, doubts and mismatches between what you experience while doing research and what you think is the right and proper way, as alluded to in all the papers you read, we want to explore ideas around a term, or at least a provocative metaphor, of “agile research”. We want to ask the question: “how might we take the ideas, the methods and the underlying philosophy behind agile software development and explore how these might be applied in the context of doing research?” This paper is not about sharing a set of methods that we have developed, but more about provoking a discussion about the issue: What might agile research be like? How might it work? When might it be useful? When might it be problematic? Is it worth trying? Are people doing it already?

We are not claiming that this idea is wholly new. Many people have been using small scale rapid iterative methods within the research process for a long time. Rather we think that it can be useful to consider all these and other possible methods in the light of the successful deployment of agile software development processes, and to contrast them with more conventional research processes that rely more on careful advance planning. That is not to say that the latter methods are bad, just that other methods that might be characterized as more agile can be useful in particular circumstances.

We believe that it is worth exploring this idea as a way of addressing the problems that arise in trying to do a new research project, especially where an exploratory approach is useful. This could be in a domain that is new to the researcher, or where the domain is new in some way, such as involving new use contexts, new ways of interacting, new technologies, novel technology combinations, or new appropriations of existing technologies. We suspect this may be especially useful in helping new researchers such as PhD students get a better understanding of the research process in a less daunting manner. This work builds on prior thinking about how agile may be applied in university teaching and administration (Twidale and Nichols, 2013).

 

++++++++++

Brief overview of agile software development

Various challenges and frustrations with the software development process led to the development of a range of different techniques to address them. Some of these methods (which included Extreme Programming, Adaptive Software Development, Crystal, and Scrum) shared a certain set of characteristics. A group of people worked to try and articulate what they had in common — and how they were different from other practices. The resultant Agile Manifesto (Figure 1) was written in February 2001.

 

Figure 1: The Agile Manifesto (http://agilemanifesto.org/).

 

The Manifesto is clearly not a method in itself. Rather, it is more like a philosophy or a set of values. It aims to describe the features that software development methods in the ‘agile’ category have in common, in contrast to those that do not belong. It does that by noting four features that agile methods share in abundance and contrasting them with features that are much more visible in non-agile methods.

The Manifesto notes that agile methods are not completely different from other methods and that it is more a matter of emphasis of certain values (those on the left in the Manifesto) over others (those on the right). For agile advocates, certain approaches (those on the right in the Manifesto) may be (and indeed had been observed to be) carried to extremes, resulting in overly bureaucratic plan-driven (and non-agile) development. In addition to the Manifesto, 12 principles underlying it were also articulated (Figure 2):

 

Figure 2: Principles behind the Agile Manifesto (http://agilemanifesto.org/principles.html).

 

Since 2001 agile methods have been adopted by many software development teams around the world. This has been accompanied by a substantial literature of books and articles providing tutorials, new agile methods and refinements, applications in other contexts, and advice on how to introduce the ideas into organizations with established processes, where individuals and whole groups may be skeptical. There has been an accompanying research interest in agile, trying to understand whether it works, if it is in fact more efficient than other methods, and if so, how and why.

One review of the literature (Dingsøyr, et al., 2012) found 1,551 research papers from 63 countries on agile software development in Web of Science published between 2001 and 2010 (inclusive). There have been various general and systematic literature reviews (e.g., Brhel, et al., 2015; Ciupe, et al., 2017) and even one systematic literature review of systematic literature reviews of agile software development (Hoda, et al., 2017).

Origins of agile software development

Over many years, various processes were developed to try and address the difficulties of software development through very careful, precise specification, planning and documentation of what to do in advance. Only after this is the code written, tested and ultimately deployed in the customer’s setting. This very logical, rational process is sometimes called the Waterfall method (allegedly Royce, 1970 — but see below). Such an approach can seem eminently reasonable. It is in part inspired by processes that have proved to be highly effective in mass production and in construction. But the development of something as logical as software seems to be strangely resistant to excessively logical and rational methods that try to plan everything in advance and then to systematically execute each element in a logical order. This might be because our understanding of software development is underdeveloped. Or, as agile advocates often claim, it could be that there is something fundamentally different about software development.

Many of the problems encountered arise around the issue of requirements capture — determining what it is exactly that the client wants and needs: “Traditional approaches assumed that if we just tried hard enough, we could anticipate the complete set of requirements early and reduce cost by eliminating change” (Highsmith and Cockburn, 2001). Various document-centric methods were developed in reaction to unsuccessful software projects that resulted in dissatisfied clients, breakdowns in trust and communication and indeed lawsuits. If developers can show that they have delivered exactly what the clients asked for, by referring to a voluminous requirements document, then surely the client has no reason to complain or indeed to sue. The problem is that the client may not know exactly what they want, or what they want may change before the product is delivered. That is not because the client is confused or naïve, just that interactive software products are immensely difficult to think about — even for skilled software developers.

It should be noted that many software engineers regard the waterfall method as something of a straw man. Different software development methods are often explained and justified in contrast to this hypothetical waterfall method — including methods that are not agile. Indeed, as both agile advocates and critics note, Royce never used the term ‘waterfall’ in his highly cited paper. In the literature on agile methods, ‘waterfall’ is often used as a catch-all term for all non-agile methods that, although not as linear and rigid as an imaginary pure waterfall, are far less flexible and adaptive than agile. Another term used by agile advocates to describe non-agile methods is Big Design Up Front (BDUF). That is, most non-agile methods devote considerable time and effort to advance planning and design in order to minimize errors and rework, which become increasingly costly the later they are detected in the development work. Again, BDUF seems perfectly rational and wise. Except, agile advocates argue, that it just does not seem to work in real life, at least in certain settings. There seems to be too much uncertainty, too many unknowns and too much that can change to make BDUF as cost-effective as it might logically seem to be.

Agile methods seem to work by acknowledging human fallibilities — the difficulties that clients have in knowing what they want and articulating it, the difficulties that developers have in completely understanding those wants and needs, the errors that inevitably arise in software development, and everyone’s inability to predict future needs. The Manifesto proposes that the way to address all these problems is to focus on tight iteration loops and different kinds of rapid testing and evaluation. Particular methods vary in exactly how they achieve this, but they all focus on building and testing minimal versions of the desired product very quickly and then progressively adding more features over time. This is a somewhat organic kind of growth process (rather like a tree that starts off as a seedling) and unlike a typical construction project (where we do not begin with a tiny shed and then grow it into a mansion). The result is that at all stages the client has a product that at least does something; even if it does not do everything desired. Rather than trying to plan everything correctly in advance, the methods allow for much more rapid adjustment on the fly in the light of inevitable human error and externally changing circumstances.

Agile methods seem to be especially effective in novel design settings, where developers and clients may not be exactly sure what the best software solution is, or indeed what is really needed from the software in order to do the job. The focus on early delivery of working software (versions that successfully execute just a few of the features of the envisaged final product) allows for different kinds of testing and revision of the requirements, allowing for fast and flexible response to a rapidly changing world — or indeed participants’ rapidly changing understanding of the world.

Nevertheless, agile can seem a very alien way of working, and switching to agile is not a trivial matter. It feels good to have a clearly worked out plan to follow. It feels like good management to begin by working on such a detailed plan (the BDUF). Agile is not about an anarchic free-for-all. But it emphasizes that plans will inevitably have to change as circumstances dictate (Suchman, 1987), and so detailed upfront planning may not be the most efficient way of working. Rather, what is needed are ways to dynamically re-plan, but in a systematic manner.

Planning is one of the most difficult concepts for engineers and managers to re-examine. For those raised on the science of reductionism (reducing everything to its component parts) and the near-religious belief that careful planning followed by rigorous engineering execution produces the desired results (we are in control), the idea that there is no way to “do it right the first time” remains foreign. (Highsmith, 2002)

Agile is not so much a method as a metamethod

The substantial literature on agile methods can be rather challenging to read. It can seem slippery in what it actually advocates. This is in part because although it talks a lot about methods, it is really much more focused on how to design methods, and indeed how to create a setting where methods themselves are continually being redesigned and improved to meet the demands of local circumstances. Highsmith and Cockburn (2001) describe agile as using generative rules: “a minimum set of things you must do under all situations to generate appropriate practices for special situations.”

This abstraction is why we believe it can be applied to other settings such as research. It operates through a process of first articulating values that lead to principles and thence to the development of particular practices (Beck, 2005). Testing and review do not just apply to the outputs (the software produced), but also to these practices. The practices are systematically reviewed and refined as a team learns more about what it does, and how it can change its practices in order to do things better.

As an illustration of values informing method design, consider the first value in the Manifesto: “Individuals and interactions over processes and tools”.

Cockburn and Highsmith (2001) note: “It’s not that organizations that employ rigorous processes value people less than agile ones, it’s that they view people, and how to improve their performance differently. Rigorous processes are designed to standardize people to the organization, while agile processes are designed to capitalize on each individual and each team’s unique strengths.” This value in concert with the other three and the 12 principles leads to practices such as pair programming (two developers sitting side by side at a computer working on a single task) and an emphasis on informal communication and consensus-building; but also techniques to ensure that conversations and meetings do not go on forever, and decisions are made quickly. It also leads to approaches to how teams should be managed: “However, ‘politics trump people.’ Even good people can be kept from accomplishing the job by inadequate support” (Cockburn and Highsmith, 2001). A substantial part of an agile team-leader’s role is identifying and removing barriers to a team being able to do its job.

Although the agile approach criticizes an excessive focus on documentation, the processes developed do allow teams to track their progress and indeed their rate of progress (often termed ‘velocity’). Public displays, known as ‘information radiators’, enable a team to see how they are progressing in producing working software that accomplishes an increasing number of desired features. The aim is to work towards a constant sustainable velocity as teams learn to more accurately estimate the costs of developing each component of a project and can thereby reliably deliver working products while also being able to dynamically adjust requirements by re-prioritising the task list. This information on processes is obtained as a by-product of actually doing the work, rather than additionally documenting what is to be and what has been done. The process information allows teams to periodically reflect on their processes and to revise them to further increase productivity and minimize errors.
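To make the idea of velocity concrete, the bookkeeping behind an information radiator can be remarkably simple. The sketch below is our own illustration, not drawn from any particular agile tool, and all names and numbers in it are hypothetical: it computes a rolling velocity from completed story points and uses it to forecast how many iterations a remaining backlog might take.

```python
# Illustrative sketch of velocity tracking (hypothetical names and numbers).
# "Velocity" here is simply completed story points per sprint; a rolling
# average smooths sprint-to-sprint noise and supports a naive forecast of
# how many sprints the remaining backlog may take.
import math
from statistics import mean

def rolling_velocity(completed_points, window=3):
    """Average completed points over the last `window` sprints."""
    recent = completed_points[-window:]
    return mean(recent)

def sprints_remaining(backlog_points, completed_points, window=3):
    """Naive forecast: remaining backlog divided by recent velocity."""
    velocity = rolling_velocity(completed_points, window)
    return math.ceil(backlog_points / velocity)

# Points finished in each past sprint (hypothetical data).
completed = [18, 22, 20, 24]
print(rolling_velocity(completed))        # mean of last 3 sprints: 22
print(sprints_remaining(110, completed))  # 110 / 22 -> 5 sprints
```

The point is not the arithmetic but where the numbers come from: they fall out of work already done, so the team gets its forecast as a by-product rather than through additional documentation.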

 

++++++++++

Considering the similarities between a software development project and a research project

In both software development and research we may not be completely sure what the requirements are. It is certainly possible to have a nice, precisely articulated requirements document that will not change. But it does not always happen. It may be somewhat more common to have a nice, precisely articulated set of research questions that will not change. But that does not apply to all research. In particular, exploratory research may have considerably more uncertainty about where it will lead and what exactly needs to be done to make progress at any given point in time. Furthermore, where did that precisely articulated research question come from? How was it constructed? Did it emerge fully formed, or was it itself the result of an iterative design process? Just as a software development project can fall into delays, cost overruns and unexpected complexities, so can a research project, especially if you are doing something new or are new to doing research. In both software development and research we may wish that we had the foreknowledge to have specified everything perfectly at the outset through a BDUF, even as we concede that we lack the wisdom and prescience to ever do that. Failing prescience, it would be nice if we could catch problems earlier through tighter iterations. That is the starting point for considering agile methods.

Many grant proposals and thesis proposals are written in the style of the waterfall method, showing a nice clear progression of steps whereby the researcher will discover something radically new in a completely predictable, unsurprising way. The irony is palpable. For some proposal writers this is simply formulaic — of course nobody does what they say they will do, you just lie in order to get the grant money and then you figure out what you will really do. Then you flounder around and iterate, learning as you go. Finally you get some useful insights and write it up as if you could travel back in time and tell your previous self what you should have done in order to create a BDUF that would have correctly led to where you ended up. This may well work for established researchers, at the psychic cost of certain amounts of self-deception or cynicism, but it seems unnecessarily confusing and demoralizing for new researchers, who may constantly feel that they are doing research wrong. What they experience in the research process — the mess, the confusion, the constant revisions — is completely at odds with what they read that ‘proper’ researchers do in their smooth coasting down the BDUF waterfall.

Some research projects may actually involve software development, but we want to emphasize that agile research, although inspired by agile software development, does not itself require that software development be any component of the research itself. Nevertheless, there are various similarities between a software development project and a research project.

Acknowledging the differences between a software development project and a research project

There are of course also differences between software development and research. Most obviously one produces code (that, one hopes, provides functionality) and the other produces research papers (that, one hopes, provide insight). Agile software development methods were created to deal with some particularly difficult issues that can arise in writing code, which may not apply in writing a research paper. We noted above that there are certain parallels between a requirements specification and a research question. However, the origins of the two may be rather different. As Gabriel (2014) notes, most advocates of agile software development simply assume that there is in fact a customer for the software, so that the Agile Manifesto can advocate for “Customer collaboration over contract negotiation”. But who is the customer for a research project? A funding body? A journal? Society? Research project ‘customers’ are at best rather hands-off, so are typically not able or willing to give the regular feedback idealized in agile software development. Way, et al. (2009) note the challenges, even in research that is mostly about software development: “Because the customer of a research project tends toward the extremes, either as an intimately involved researcher or a remote, grant-funding entity, the customer orientation of most agile methodologies requires the most adaptation.”

A research team may certainly give each other feedback, as can a Ph.D. student’s supervisor or peer reviewers commenting on a submitted paper. Presenting research at conferences and workshops is another way of testing a research idea and gaining feedback. However, these venues typically do not enable as tight a feedback loop as we might want in an agile project, which can aspire to iterations in cycles of days and weeks, not just months and years. The extent to which research currently is, or should be in the future, an intensely collaborative process is a topic for another day. For now we can concede that software development is intensely collaborative, and that research, especially a Ph.D. student’s research, typically has some collaborative components even though it may not be as intensely collaborative.

Additionally, in agile software development it is assumed that the customer will supply the requirements of the software, although of course these may then be refined in tight collaboration between the customer and the software development team. But where do research questions come from? Except in very directed, often applied research, they are unlikely to come from the research ‘customer’. Rather, the research question may need to be uncovered or perhaps iteratively designed. This is a process and can itself be amenable to tighter iterative loops to minimize the problems of a BDUF research question.

Agile research methods may not be the panacea that agile software development methods often claim

One key difference between a software development project and a research project is around the relative appropriateness of non-agile BDUF approaches. Most advocates for agile methods claim that an agile approach is appropriate for nearly all software development projects. As noted, “waterfall” is as much a term of abuse as a classification; indeed, critics of some of the more inflated claims of agile consider “waterfall” to be a straw man, because no one truly uses a waterfall method — even if many projects are more waterfall-like than agile-like. By contrast, in our opinion there can be many kinds of research where a BDUF waterfall approach is entirely appropriate, maybe even desirable. Hypothesis testing studies, such as the classic double blind randomized controlled trial — the gold standard of medical research — are waterfall research methods that necessarily involve BDUF, and they seem to work very well indeed. We just want to make the case that there are some kinds of research where BDUF methods can lead to problems, that these problems have great similarities with certain software development problems, and that in software development, agile methods seem to be rather effective at dealing with those problems.

Agile research is worth investigating

There are similarities between at least some kinds of research and some kinds of software development. Unfortunately, there is not a straightforward mapping from the Agile Manifesto and the 12 principles to analogues in research. Nevertheless, these can be inspirational in challenging us to think about alternative approaches to doing research, particularly when our more conventional BDUF methods become problematic. Consequently, we want to ask:

Agile, learning and reflection

As might be expected, there have been various explorations of how to teach agile software development processes to CS students (e.g., Frydenberg, et al., 2017; Heggen and Myers, 2018; Lang, 2017). However, what may be of more interest for agile research is to consider how agile software development methods have continuous learning built into them. Agile software development methods can be seen as ways of improving various kinds of learning:

This learning occurs through making processes explicit, breaking them up into more manageable units, building in time for reflection and refactoring, and creating a different mindset to the crisis mode that can overtake many non-agile projects as they start to deviate from the BDUF (e.g., Razavian, et al., 2016). The agile technique of pair programming has a very clear side effect of helping a less experienced programmer learn from another, as well as the potential to enable more collaborative kinds of learning where both are learning by working together (Chau, et al., 2003). Nerur and Balijepally (2007) show similarities in trends in software development, architecture and strategic management from optimization and control towards learning and innovation using lightweight methods.

Chau, et al. (2003) note the importance of supporting continuous learning, aided by retrospectives where team members learn from each other and learn how to work more effectively together as a team. Learning via reflection is integral to agile (Razavian, et al., 2016), but it can be in danger of getting sidelined under real life pressures (Babb, et al., 2014). All this highly desirable learning needs active support to thrive. As Cunha, et al. (2017) point out: “When escaping the safety of plans, people will expose themselves to the possibility of mistakes. If the context is not friendly to an aesthetics of imperfection, then people will potentially be less inclined to improvise.” This leads to a need for agile improvisational leadership; something we may also need in the world of research management and mentoring.

It is at least reasonable to hope that research should have a strong component of learning in it too. The need for learning is particularly acute in two research scenarios: 1) when an established researcher ventures into a new area, and 2) when a novice researcher such as a Ph.D. student undertakes research. In both cases, there are many unknowns, and so a BDUF approach to research is very likely to miss something.

One of the critiques of waterfall methods is that problems are detected too late in the process (such as at deployment) when rectification is extremely expensive and some decisions may simply be locked in. The same can happen with a research project. By contrast, agile methods aim to catch problems early and thereby create a safe microworld (Edwards, 1995) for learning.

 

++++++++++

Certain kinds of agile for certain kinds of research

As we have noted we are by no means the first to look at the potential interactions between agile software development and research. There are a number of papers describing various explorations of the idea, typically inspired by a particular need of a particular kind of research. Many of these involve research where programming is a key component of the research work, including digital humanities (Tabak, 2017), infrastructure for eScience (Procter, et al., 2011) and cybersecurity (Dark, et al., 2015). Programming for research projects has been noted to proceed in rather different ways to other software development settings: “In our experience, research projects that have a software development component tend to be managed in an ad hoc manner. This extemporized approach arises of necessity due to the discontinuous nature of an academic researcher’s daily schedule, but also for lack of a properly designed and well suited management process.” (Way, et al., 2009).

In the context of cybersecurity research, Dark, et al. (2015) note that “Traditional, long-term research often involves extensive requirements definitions, comprehensive proposals, competitive awards, distributed organizational structures, complex funding protocols, and long-term performance that can extend for years.” However, that approach is problematic for research in cybersecurity: “The traditional research infrastructure was never intended for this level of fast engagement and immediate application, and is not well suited for these situations.” For them, agile applied research for cybersecurity “is organized around sponsors, who pose research questions to be answered, and researchers, who conduct the research and produce results” (Linger, et al., 2017), and so the customer-centric aspects of agile software development align very nicely with this kind of sponsor-centered research. Their approach is especially attuned to “iterative, short-term, accumulating increments that each produces actionable results. Increments focus on understanding the problem, and progress to solution strategies, and then to incremental solutions.” Additionally, Linger, et al. remind us that graduate education, even in a research lab, can focus too much on research as an individualistic endeavor, even though research is increasingly multidisciplinary and collaborative. An agile approach can encourage us to think about more collaborative ways of learning research skills.

The challenges of project management of research teams, especially of undergraduate researchers (Zhang, et al., 2017), have led various researchers to explore agile methods, especially Scrum. Frequent standup meetings for research group project management are a popular method that has been deployed in various settings (Hicks and Foster, 2010; Bezerra, et al., 2014; Broman, 2015; Hansen and Hansson, 2015; Hansen and Hansson, 2017).

Finally, in exploring the similarities and differences between an academic research project that involves software development and an industry software development project, Way, et al. (2009) attempt to adapt the Agile Manifesto and the 12 principles (Figures 1 and 2) to agile software research. The authors note that this is intended just as a first attempt, emphasizing this by calling it a “Penultimatum”. We find this approach very thought-provoking, although we want to go further, exploring the potential of agile methods for any research project regardless of whether it involves any software development.

 

++++++++++

Some examples of agile research methods

Inspired by the Agile Manifesto, we choose to categorize any approach or technique used in research that involves tight iterative cycles as an agile research method. This is a rather fuzzy and relative definition intended to be very broad, encompassing a range of methods that many researchers have used in many settings over many years. We don’t want to say it only applies to new methods that look very similar to agile software development methods. Tighter iterations mean iterations over the span of hours, days and weeks rather than months and years. As an example, Zhang, et al. (2017) describe the use of two-week research sprints to help undergraduates learn about the research process. Some examples might not fit in a two-week sprint, but they can serve as inspiration to see if it makes sense to develop even more agile variants.

With such a broad definition we include the following as examples of agile methods that can be used in research: quick and dirty ethnography, design probes, contextual inquiry, bodystorming and small scale user testing (e.g., Kuniavsky, 2010). Given our own research background, many of the methods we think of derive from development methods in human-computer interaction (HCI) and computer supported cooperative work (CSCW), particularly those where practitioners have had to repurpose larger methods for tight commercial development cycles. In that light, the IDEO methods card deck (IDEO, 2003) can be viewed as consisting of 51 agile methods that can help in the design process for the development of new and better systems and services. These development methods can also be repurposed to contribute to the research process — even if they do not necessarily count as normal ‘research methods’.

Given our defining feature of enabling a tighter iteration cycle, we can explore the development of additional agile research methods tailored to the needs of a particular discipline, line of inquiry or student by asking questions like: “What could you do to advance your understanding of the problem in the next week, the next day or even the next hour?” That might lead to a very rapid lightweight observation, or an observation of a proxy for the real item of interest, or quickly mining a literature for ideas and insights. Whatever the approach, there is a call to action, an expectation of the delivery of something (unlikely to be profound, but something), and then a process of reflection on what to do next. It is a form of classic problem decomposition: the problem of doing research is decomposed into more manageable substeps. These in turn are designed so that they can be rapidly iterated and learned from, rather than expected to be got right first time. They can include methods for doing all parts of the research, from scoping out a problem space, designing a research question and prototyping a pilot study to executing an evaluation, analyzing results, and writing and reporting them.

Agile methods can handle longer-term planning by employing different scales of analysis. Cohn (2006) developed the ‘planning onion’, which describes these scales and the kinds of planning that can happen at each one. This can be used to ensure that an agile approach does not simply lead to a series of disjointed, superficial activities.

Research and design: Design sprints

A design sprint can be seen as a quicker and smaller-scale version of the overall design thinking process, and can be run over several iterations. It usually starts with a meeting attended by a designer, by someone from the business side or a stakeholder, and sometimes also by someone from the development team (Knapp, et al., 2016). This is, to some degree, similar to the Ph.D. student and supervisor relationship. A design sprint is normally divided into five stages, although the naming of each stage may differ a little depending on who designs the sprint and for what purpose. A general approach involves the following concepts:

Understand, sketch, decide, prototype and validate

These stages have been used in several different frameworks for design sprints. Table 1 summarizes the stages and gives a general description of each, following the Interaction Design Foundation (Dam and Siang, 2018) and the general sprint design process.

 

Table 1: Stages, concepts and activities supporting progression in research.
Interaction Design Foundation | General sprint design process | General activities | Methods and techniques (examples)
Empathise (understanding the human needs involved) | Map or understand | Explore the research problem(s) from many angles: who are the users, what are their needs, and what is their context? | Contextual inquiry; cultural probes; customer experience audit; design ethnography; diary studies; focus groups; literature reviews; observations; stakeholder maps
Define (aggregate the information that has been created, analyse observations and synthesise them; re-frame and define the problem) | Sketch (diverge) | Brainstorm ideas and solutions; envision and ideate | Affinity diagrams; behavioural and cognitive mapping; card sorting; content analysis; mind mapping; personas
Ideate (creating many ideas in ideation sessions) | Decide (converge) | Select one good idea and visualize it | Design workshops; generative design; research through design; scenarios
Prototype (adopting a hands-on approach in prototyping) | Prototype | Materialize (build) the ideas sketched earlier, as scaled-down versions with the major functions and how to use them | Creative toolkits; experience prototyping; participatory design; generative design; simulation exercises
Test | Test or validate | Test prototypes with stakeholders and relevant (end-)users outside the research context; establish lessons learnt | Ergonomic analysis; experiments; eye tracking; focus groups; heuristic evaluation; Kano analysis; usability testing; web analytics
 | Next iteration |  | 

 

In the beginning you need to gain an empathic understanding of, and a map of, the problem you are trying to solve or explore. The information gathered and synthesized about the context, tasks, user needs and so on then feeds into the second stage, which involves defining and redefining the problem. So the empathize stage helps to define the problem to explore. The next step is to sketch and ideate. Ideation is a process of generating ideas, concepts and solutions through sketching, brainstorming or similar techniques. The ideate or decide stage involves creating many ideas in ideation sessions; one or several of these ideas could then be developed.

Even though these stages do not always run sequentially — they do not have to follow a specific order, can be repeated over several iterations and, importantly, can also occur in parallel — they identify in a systematic way the work to be carried out in smaller iterative research cycles. We can therefore see them as different modes that contribute to a research problem or research process. Each iteration may have one or more intellectual or practical spin-offs that can be used and reused in a given research process. More importantly, each iteration may create by-products, such as snippets of ideas, sketches and stories that were discarded or voted down in that iteration. Interestingly, these by-products could be re-used at a later stage or in the next iteration in the development of the research process. Each stage may then involve its own specific activities in connection to the overall research problem and context.

Requirements for using and applying methods

To apply these methods and techniques, some requirements need to be met. One is temporality: the methods should be easy to plan for and quick to perform. As a rule of thumb, using a set of methods and analysing its data should not take more than a week. Another is ease of application: the methods should be usable without large efforts in planning, while still allowing for understanding, ideation and visualization of a context, problem or study setting as part of the research process. Here we consider general methods, some of which have been used in both systems and user interface design. See Martin and Hanington (2012) and Muratovski (2016) for an overview of these various methods.

For example, for the planning and scoping phase of a project or design, we can use design briefs; brainstorming; card sorting to understand how people group and categorize information; competitive testing to know your competitors’ research or products; concept mapping; customer experience audit; focus groups; Kano analysis; literature reviews; research through design; stakeholder maps; territory maps; or triading.

For the exploration and synthesis stage, we may use methods like: making affinity diagrams; quick domain-based artifact analysis; behavioral mapping, which quickly but systematically documents location-based observation of human activities, objects etc.; card sorting; case studies; cognitive mapping; contextual inquiry; cultural probes; quick ethnography; diaries; directed storytelling; evidence-based research; experience sampling, collecting snapshots of behaviour, interactions, activities etc.; fly-on-the-wall observations; image boards; interviews; Kano analysis; mind mapping; participatory design; photo studies; picture cards; quick questionnaires; RITE (rapid iterative testing and evaluation); research through design; scenarios; scenario description swimlanes; shadowing; task analysis; or thematic networks.

For early prototyping and concept generation, we may use methods like: bodystorming; card sorting; collage; concept mapping; creative toolkits such as Lego; generative research; mental model diagrams; parallel prototyping; participatory design; personas; low-fi prototyping; RITE (rapid iterative testing and evaluation); research through design; role-playing; scenarios; scenario description swimlanes; simulation; stakeholder walkthroughs; storyboards; think-aloud; user journey maps; value opportunity analysis; weighted matrices; or Wizard-of-Oz.

Research through design

Traditional design processes, such as industrial, product, media or artistic design, may involve heavy and cumbersome processes with respect to time, human power, material, funding and other resources. Furthermore, these processes usually follow specific procedures and steps, which makes them somewhat slow in reacting to other processes and to changing patterns in user needs and markets. In agile software development, methods are based on evolutionary development involving rapid iterative cycles. In design, similar rapid prototyping can be used to create components that meet user needs more quickly. These incremental and iterative procedures enable more practical and pragmatic design cycles and dynamic feedback loops that are more responsive to changing situational and contextual features of our world.

Within the design community, research through design (RtD) has been used for 15–20 years as a term and concept to describe mainly practice-based inquiry that generates transferable knowledge (Durrant, et al., 2017). Research through design is not a formal methodological approach but rather a foundational concept for how to approach inquiry through the practice of design, involving a collection of applicable methods. The concept has been utilized and articulated in various contexts. In general, one purpose of research is the production of knowledge (an explicit output would be a research paper, a prototype or a set of written design rules): knowledge that a person can build upon but that other people can also use (Stappers and Giaccardi, 2017), and that can be of different types: detailed, generalized, abstract, fragmented, ideas and sketches. Within design fields such as human-computer interaction (HCI), the purpose of design is generally understood to be the creation of a certain solution to be applied in the world and used by humans, directly or indirectly. Sanders and Stappers (2008) have visualized the research process in a way that may also represent the small, iterative research sprints proposed as part of the concept of agile research methods.

 

Figure 3: Co-design process. Source: Sanders and Stappers (2008).

 

What the authors call the front end is a ‘pre-design’ stage that covers the many activities and the thinking that may take place in order to inform and inspire the exploration of a research or design question. Thus, this design process has similarities to an agile research process. Uncertainty and openness are characteristic features of this stage, which is often referred to as ‘fuzzy’; at this point it is seldom known what the precise outcome will be. In Figure 3, the process is depicted as several stages, such as design criteria, ideas, concept, prototype and product/service/artefact, and as it progresses, clarity is achieved.

 


Caveats and clarifications

Our aim in this paper is not to present a definitive solution, but rather to ask a question that we think can yield some interesting insights and provoke some creative ideas for things to try:

What might agile research look like?

We already know what Big Design Up Front, waterfall style research looks like. Unlike in the setting of software development, there are many situations when BDUF waterfall style research is perfectly fine, indeed desirable or the best thing to do. But not always.

It is quite clear that many people are already doing things that fit our definition of an agile research method. We think that by beginning to collect these examples we can share them more widely and be inspired to develop yet more methods that can address the problems that researchers, especially new researchers, can encounter with some aspect of the research process. For example, in writing this paper we noticed the problems that arise in developing a good research question (hardly a novel thought) and started to ask about methods that might help in the iterative refinement of a research question. After all, we don’t expect people to construct a novel interface ex nihilo; we have an array of design and evaluation methods to gradually iterate towards a good interface. What might be some analogues for gradually iterating towards a good research question?

In working towards agile research methods we draw inspiration from the history of agile software development. Just as early advocates of agile were careful to note a wide range of pre-existing methods that fitted their broad definition of agile, so we want to say the same about agile research methods. However, agile research methods are not simply pre-existing methods to be selected from; rather, they are examples to serve as inspiration for iterative refinement, combination and the development of whole new methods. Over time we will need to collect evidence about how well they work and why. They will need legitimation, just as agile software development methods needed evidence to be regarded as legitimate. In the early discussions of agile software development, much effort went into drawing very clear distinctions between agile and non-agile methods. As more organizations adopted agile, or were influenced by it, and as the literature on the methods and how they worked grew, the discussion gradually shifted to regarding all methods as lying on a continuum of agileness, with developers moving along that continuum as they tried different ways of working. We might expect reactions to agile research methods to follow a similar trajectory, with some people in some research traditions regarding them as invalid, pernicious and liable to seduce the naive, while others in other traditions may be bewildered by the fuss and be merely annoyed that people are going on about tight iteration loops that they have been using for years.

One risk with agile research methods is that it can all look too easy. Just go off for an afternoon, look at some stuff or build some stuff or test some stuff, and come back with what you find? Just repeat that 100 times and get a Ph.D.? Obviously not. But a naïve student may read about this and think that is all they have to do. We certainly want to make the research process easier and less daunting by taking what can seem to be a huge, indivisible blob of work and insight and breaking it down into more bite-sized chunks. We want to remove the daunting sense that you need to know everything in advance so you can do the BDUF in order to discover the thing you don’t know. We want to give students methods that help them uncover a good, informative and tractable research question, most likely through numerous iterations.

Just as agile software development is not just a load of little sprints, so agile research is not just a load of easy little activities. The larger processes, in the analogue of Cohn’s ‘planning onion’, still need structure, and students still need a lot of guidance. This is particularly the case in reflecting on what has been learned from a previous iteration and so deciding what to do in the next one. Reflection is always hard. However, the advantage of agile methods is that they give you something to reflect on very soon. They do not make problems go away; rather, they can reveal problems and confusions earlier, so that there is more time to figure out what to do about them. They enable opportunities for reflecting at all levels, including the research equivalent of refactoring, where you take the accumulation of what you have found so far, realize that the way it was discovered has created artifacts in your understanding of it, and see that it needs a conceptual tidying up.

There are related risks of legitimation. An agile approach may occasionally get to certain very insightful findings much more quickly than a BDUF approach. Gatekeepers such as reviewers may be hostile to such findings, not because they believe them to be incorrect, or even because there is insufficient evidence, but simply because they perceive the method as too ‘easy’. That can create a belief that it is not ‘fair’ for a finding to have emerged with what is seen as ‘insufficient’ effort. The ethos of the Protestant work ethic can pervade research at all levels, including funding and peer review. We will see if this concern plays out; for now it is just a hypothesis, with a few indicative anecdotes.

Another risk is that agile research can be seen as perpetuating, even encouraging, the concept of the ‘least publishable unit’ as an analogue of the minimum viable product. We acknowledge this risk, but would claim that the benefits of the approach seem to outweigh it. Also, just because you do a series of very small research-like activities to learn more about the research space does not mean you could or should publish every one of them; that is an entirely separate issue. Agile methods are all about generating a series of small results working towards a final deliverable. The intermediate results create an accountability: an ability to check on satisfactory progress, to verify that requirements are being met, to check if requirements have subtly evolved, and a resource for reflecting on progress and improving processes in the future. Should these intermediate stages be worthy of publication, then fine, but there is no need for them to be. Their purpose and main value in agile research is internal to the individual researcher or the team.

 


Conclusion

In this paper we have tried to explore a range of possibilities for agile research. We have done this by using agile software development as inspiration for the development and tailoring of various agile-like techniques that can be applied at all stages of the research process. That includes the initial scoping-out and learning activities as well as the later evaluative processes mostly covered in the literature on research methods. In exploring these ideas we came to believe that agile research methods have great potential for helping new researchers learn how to do research. We have a few informal indicators of this potential, along with growing evidence that many people are using agile methods in their own research practice. We do not want to imply that agile research methods are only about software development, or that they can work by simply mapping agile software development methods directly into a research setting, or that they can serve as an excuse to do fast, sloppy work to bang out yet more publications. Nevertheless, we think there is great promise in further exploring alternative, faster, lighter, more tightly iterative ways of doing exploratory research.

The agile approach is not just about selecting the pre-existing methods most appropriate to the task at hand. Rather it is a metamethod, encouraging the development, refinement, testing and iterative improvement of new and revised methods that are more agile, permitting iteration loops at varying and appropriate levels of granularity. To support research, we advocate looking at the methods we use in the doing of research (including all the things that we do, not just those that we traditionally designate as research methods) and asking questions. Questions like: How do these approaches currently get in the way? How can they be revised or replaced by other methods that allow for tighter, more productive incremental learning? How can we avoid (where appropriate) the necessity of BDUF approaches? Instead, what are some possible methods that help us move towards a very large final goal by a series of faster, lighter iterations that let us learn and improve as we go, rather than necessitating that we get it right first time?

In research it is highly unlikely we can get it right first time, no matter how hard we try or how smart we think we are. So let’s try to design methods that are a bit less hubristic and a bit kinder to our inevitable human fallibilities. This could help us all, both new researchers and established researchers exploring new areas.

 

About the authors

Michael Twidale is a professor in the School of Information Sciences at the University of Illinois at Urbana-Champaign.
E-mail: twidale [at] illinois [dot] edu

Preben Hansen is a professor in the Department of Computer and Systems Sciences at Stockholm University.
E-mail: preben [at] sv [dot] su [dot] se

 

References

J. Babb, R. Hoda and J. Nørbjerg, 2014. “Embedding reflection and learning into agile software development,” IEEE Software, volume 31, number 4, pp. 51–57.
doi: https://doi.org/10.1109/MS.2014.54, accessed 29 December 2018.

K. Beck with C. Andres, 2005. Extreme programming explained: Embrace change. Second edition. Boston, Mass.: Addison-Wesley.

D. R. Bezerra, A. C. Dias-Neto and R. da Silva Barreto, 2014. “ARDev: A methodology based on scrum principles to support research management on software technologies,” CASCON ’14: Proceedings of 24th Annual International Conference on Computer Science and Software Engineering, pp. 363–366.

M. Brhel, H. Meth, A. Maedche and K. Werder, 2015. “Exploring principles of user-centered agile software development: A literature review,” Information and Software Technology, volume 61, pp. 163–181.
doi: https://doi.org/10.1016/j.infsof.2015.01.004, accessed 29 December 2018.

D. Broman, 2015. “A process for student group supervision,” KTH Royal Institute of Technology, at http://www.bromans.com/publ/broman-2015-group-supervision.pdf, accessed 29 December 2018.

T. Chau, F. Maurer and G. Melnik, 2003. “Knowledge sharing: Agile methods vs. Tayloristic methods,” WETICE ’03: Proceedings of the Twelfth International Workshop on Enabling Technologies: Infrastructure for Collaborative Enterprises, pp. 302–307.

A. Ciupe, S. Meza, R. Ionescu and B. Orza, 2017. “Practical agile in higher education: A systematic mapping study,” 2017 XXVI International Conference on Information, Communication and Automation Technologies (ICAT).
doi: https://doi.org/10.1109/ICAT.2017.8171626, accessed 29 December 2018.

A. Cockburn and J. Highsmith, 2001. “Agile software development: The people factor,” Computer, volume 34, number 11, pp. 131–133.
doi: https://doi.org/10.1109/2.963450, accessed 29 December 2018.

M. Cohn, 2006. Agile estimating and planning. Upper Saddle River, N.J.: Prentice Hall.

M. P. e Cunha, L. Giustiniano, P. Neves and A. Rego, 2017. “Improvising agility: Organizations as structured-extemporaneous hybrids,” In: P. Boccardelli, M. Annosi, F. Brunetta and M. Magnusson (editors). Learning and innovation in hybrid organizations: Strategic and organizational insights. Cham, Switzerland: Palgrave Macmillan, pp. 231–254.
doi: https://doi.org/10.1007/978-3-319-62467-9_12, accessed 29 December 2018.

R. Dam and T. Siang, 2018. “5 stages in the design thinking process,” Interaction Design Foundation, at https://www.interaction-design.org/literature/article/5-stages-in-the-design-thinking-process, accessed 29 December 2018.

M. Dark, M. Bishop, R. Linger and L. Goldrich, 2015. “Realism in teaching cybersecurity research: The agile research process,” In: M. Bishop, N. Miloslavskaya and M. Theocharidou (editors). Information security education across the curriculum. IFIP Advances in Information and Communication Technology, volume 453. Cham, Switzerland: Springer, pp. 3–14.
doi: https://doi.org/10.1007/978-3-319-18500-2_1, accessed 29 December 2018.

T. Dingsøyr, S. Sridhar, V. Balijepally and N. B. Moe, 2012. “A decade of agile methodologies: Towards explaining agile software development,” Journal of Systems and Software, volume 85, number 6, pp. 1,213–1,221.
doi: https://doi.org/10.1016/j.jss.2012.02.033, accessed 29 December 2018.

A. C. Durrant, J. Vines, J. Wallace and J. S. R. Yee, 2017. “Research through design: Twenty-first century makers and materialities,” Journal of Design Issues, volume 33, number 3, pp. 3–10.
doi: https://doi.org/10.1162/DESI_a_00447, accessed 29 December 2018.

L. D. Edwards, 1995. “Microworlds as representations,” In: A. A. diSessa, C. Hoyles, R. Noss and L. D. Edwards (editors). Computers and exploratory learning. Berlin: Springer, pp. 127–154.
doi: https://doi.org/10.1007/978-3-642-57799-4_8, accessed 29 December 2018.

M. Frydenberg, D. J. Yates and J. S. Kukesh, 2017. “Sprint, then fly: Teaching agile methodologies with paper airplanes,” 2017 Proceedings of the EDSIG Conference, at http://proc.iscap.info/2017/pdf/4324.pdf, accessed 29 December 2018.

R. P. Gabriel, 2014. “I throw itching powder at tulips,” Onward! 2014: Proceedings of the 2014 ACM International Symposium on New Ideas, New Paradigms, and Reflections on Programming & Software, pp. 301–319.
doi: https://doi.org/10.1145/2661136.2661155, accessed 29 December 2018.

P. Hansen and H. Hansson, 2017. “Exploring student and supervisor interaction during the SciPro thesis process: Two use cases,” International Journal of Distance Education Technologies, volume 15, number 2, pp. 33–44.
doi: https://doi.org/10.4018/IJDET.2017040103, accessed 29 December 2018.

P. Hansen and H. Hansson, 2015. “Optimizing student and supervisor interaction during the SciPro thesis process — Concepts and design,” In: F. Li, R. Klamma, M. Laanpere, J. Zhang, B. Manjón and R. Lau (editors). Advances in Web-Based Learning — ICWL 2015. Lecture Notes in Computer Science, volume 9412. Cham, Switzerland: Springer, pp. 245–250.
doi: https://doi.org/10.1007/978-3-319-25515-6_23, accessed 29 December 2018.

S. Heggen and C. Myers, 2018. “Hiring millennial students as software engineers: A study in developing self-confidence and marketable skills,” SEEM ’18: Proceedings of the Second International Workshop on Software Engineering Education for Millennials, pp. 32–39.
doi: https://doi.org/10.1145/3194779.3194780, accessed 29 December 2018.

M. Hicks and J. S. Foster, 2010. “SCORE: Agile research group management,” Communications of the ACM, volume 53, number 10, pp. 30–31.
doi: https://doi.org/10.1145/1831407.1831421, accessed 29 December 2018.

J. Highsmith, 2002. “What is agile software development?” CrossTalk, volume 15, number 10, pp. 4–9.

J. Highsmith and A. Cockburn, 2001. “Agile software development: The business of innovation,” Computer, volume 34, number 9, pp. 120–127.
doi: https://doi.org/10.1109/2.947100, accessed 29 December 2018.

R. Hoda, N. Salleh, J. Grundy and H. M. Tee, 2017. “Systematic literature reviews in agile software development: A tertiary study,” Information and Software Technology, volume 85, pp. 60–70.
doi: https://doi.org/10.1016/j.infsof.2017.01.007, accessed 29 December 2018.

IDEO, 2003. IDEO method cards: 51 ways to inspire design. Palo Alto, Calif.: IDEO.

J. Knapp, J. Zeratsky and B. Kowitz, 2016. Sprint: How to solve big problems and test new ideas in just five days. New York: Simon & Schuster.

M. Kuniavsky, 2010. Smart things: Ubiquitous computing user experience design. Boston: Morgan Kaufmann.

G. Lang, 2017. “Agile learning: Sprinting through the semester,” Information Systems Education Journal, volume 15, number 3, pp. 14–21, and at https://files.eric.ed.gov/fulltext/EJ1140882.pdf, accessed 29 December 2018.

R. Linger, L. Goldrich, M. Bishop and M. Dark, 2017. “Agile applied research for cybersecurity: Creating authoritative, actionable knowledge when speed matters,” Proceedings of the 50th Hawaii International Conference on System Sciences.
doi: https://doi.org/10.24251/HICSS.2017.723, accessed 29 December 2018.

B. Martin and B. Hanington, 2012. Universal methods of design: 100 ways to research complex problems, develop innovative ideas, and design effective solutions. Beverly, Mass.: Rockport Publishers.

S. Nerur and V. Balijepally, 2007. “Theoretical reflections on agile development methodologies: The traditional goal of optimization and control is making way for learning and innovation,” Communications of the ACM, volume 50, number 3, pp. 79–83.
doi: https://doi.org/10.1145/1226736.1226739, accessed 29 December 2018.

R. Procter, M. Rouncefield, M. Poschen, Y. Lin and A. Voss, 2011. “Agile project management: A case study of a virtual research environment development project,” Computer Supported Cooperative Work (CSCW), volume 20, number 3, pp. 197–225.
doi: https://doi.org/10.1007/s10606-011-9137-z, accessed 29 December 2018.

M. Razavian, A. Tang, R. Capilla and P. Lago, 2016. “In two minds: How reflections influence software design thinking,” Journal of Software: Evolution and Process, volume 28, number 6, pp. 394–426.
doi: https://doi.org/10.1002/smr.1776, accessed 29 December 2018.

W. Royce, 1970. “Managing the development of large software systems,” Proceedings, IEEE WESCON (Western Electronic Show and Convention), pp. 328–338.

E. B.-N. Sanders and P. J. Stappers, 2008. “Co-creation and the new landscapes of design,” CoDesign, volume 4, number 1, pp. 5–18.
doi: https://doi.org/10.1080/15710880701875068, accessed 29 December 2018.

P. Stappers and E. Giaccardi, 2017. “Research through design,” In: M. Soegaard and R. Friis-Dam (editors). Encyclopedia of human-computer interaction. Second edition, at http://www.interaction-design.org/literature/book/the-encyclopedia-of-human-computer-interaction-2nd-ed/research-through-design, accessed 29 December 2018.

L. Suchman, 1987. Plans and situated actions: The problem of human-machine communication. New York: Cambridge University Press.

E. Tabak, 2017. “A hybrid model for managing DH projects,” DHQ: Digital Humanities Quarterly, volume 11, number 1, pp. 1–19, and at http://www.digitalhumanities.org/dhq/vol/11/1/000284/000284.html, accessed 29 December 2018.

M. Twidale and D. Nichols, 2013. “Agile methods for agile universities,” In: T. Besley and M. Peters (editors). Re-imagining the creative university for the 21st century. Rotterdam: Sense Publishers. pp. 27–48.

T. Way, S. Chandrasekhar and A. Murthy, 2009. “The agile research penultimatum,” SERP 2009: Proceedings of the 2009 International Conference on Software Engineering Research & Practice, pp. 530–536.

H. Zhang, M. Easterday, E. Gerber, D. Rees Lewis and L. Maliakal, 2017. “Agile research studios: Orchestrating communities of practice to advance research training,” CSCW ’17: Proceedings of the 2017 ACM Conference on Computer Supported Cooperative Work and Social Computing, pp. 220–232.
doi: https://doi.org/10.1145/2998181.2998199, accessed 29 December 2018.

 


Editorial history

Received 14 September 2018; accepted 30 December 2018.


“Agile research” by Michael Twidale and Preben Hansen is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

Agile research
by Michael Twidale and Preben Hansen.
First Monday, Volume 24, Number 1 - 7 January 2019
https://firstmonday.org/ojs/index.php/fm/article/download/9424/7718
doi: http://dx.doi.org/10.5210/fm.v24i1.9424