Brown, A. L. (1992). Design experiments: Theoretical and methodological challenges in creating complex interventions in classroom settings. The Journal of the Learning Sciences, 2(2), 141-178.
At first glance this article may seem somewhat strange, with the author listing roughly 25 self-citations; however, given the format of the article, this actually makes sense. It is written as a qualitative longitudinal review of the author's own work over about a 15-year period, with comparisons to the work of others in the field over a similar span. It is certainly a different approach to writing an article, but it reveals an interesting pattern of various ideas becoming popular among researchers and practitioners and then giving way to something else. Brown also follows the same method in reviewing her past research that she uses in her individual research articles – giving an overview of the entire study and its general trends, flavored with detailed information about an interesting case. Here, Brown shows us the general pattern across her studies and then gives specific examples, such as the biology class studying carnivores and herbivores.
The central point of the article seems to be that a study can still be useful even when it does not follow strict scientific convention. The author addresses some common criticisms of her work. When some of her results were discounted as a Hawthorne effect, Brown explains why they were not, and argues that even if they were, the study might still be valid and produce good results. The major issue Brown points out is that learning outcomes depend on many interacting factors, and the act of controlling for all of them while varying just one – combined with the fact that an observer is watching in the first place – makes the environment artificial. So is it better to be rigorously scientific and able to show the effect size of one particular variable in an artificial environment, or to simply interact within a natural environment where almost everything is outside the researcher's control? Neither approach on its own is fully generalizable or applicable to other situations.
In terms of how this relates to my interests, since this article is not about a specific intervention but rather about methods of conducting research, I can mainly take it as advice for planning my own studies. A purely scientific, sterile approach does not necessarily provide the best information. A messy, unstructured study could very well be a poor study, but it may also be useful if it provides real information about real-world environments. It is important to understand research practices well enough to protect against threats to internal and external validity – either to prevent them from arising or to defend your work against others who may claim it is subject to them.