I would like to reference my previous reflections on quantitative and qualitative studies, where the latter essentially challenged the methodology used in the former while studying the topic of how, when and why someone identifies as a gamer. I can see why using quantitative methods is attractive in this case, since you can calculate and test connections with numbers. However, something that in my opinion was lost in the quantitative study was a stronger emphasis on context. While the paper itself was fine with accepting the subjects' replies, since the authors were only interested in knowing who would willingly and openly identify as a gamer, actually understanding WHY this happens would be much more interesting. To do this properly you would need to go deeper and include qualitative methods, like contextual interviews or some kind of journal kept by the subjects, to be able to identify more specifically what could trigger the change.
While looking for examples of studies combining different research methods I ran across something called triangulation, which really caught my interest. At its core it means finding your way by looking at something from two (or more) points of view. This can be done by gathering two separate sets of empirical data, which might give your research a higher degree of validity by lowering the risk of inconsistencies. The same can be said about applying it to your methodology. Using different methods to look at the same problem is what I have argued for at seminars as a way to get a deeper understanding of the phenomenon you are studying, and now I have a word to attach to it.
One of the reasons why I feel so strongly about triangulation and always taking context into account is something I remember from reading organisational theory, namely the Hawthorne effect. The researchers set out to study how changes in the work environment of industrial workers, such as illumination and the length and timing of breaks, affected productivity. What was discovered was that a lot of the increase in productivity could be explained by the fact that these workers felt special at work for being the ones being observed, unlike their colleagues. Had the study been conducted purely quantitatively, without taking any contextual parameters into account except those specifically measured in each experiment, and without comparing the numbers to those of the average worker not in the study, the results might have been very skewed.
Aside from increasing validity, combining methods might be a necessity, for example when attempting innovation. I was really inspired by Anders Lundström’s lecture when he talked about research through design and the role of the prototype as a way of provoking discussions. In the beginning of a study you might have gathered data either quantitatively or qualitatively to gain some basic understanding of the topic, but be unable to draw any reliable conclusions. This is where the prototype comes in! The prototype might never be meant to simulate the final product, but rather be knowingly designed to test something specific that your previous data could not answer. By presenting the prototype to your sample you can gain new, helpful insights if you successfully emphasize the problem in your design. With the lessons learned you can gain new knowledge and proceed with your studies.
It is unfortunate that papers published in our field most often present only one type of method. While I understand that there is some kind of word limit to keep the journals possible to publish, I believe that explaining the entire process of testing different methods, designing the final tests, and the failed tests would be highly valuable. I’m a believer in the quote “you learn from your mistakes”, and also think this would be useful information to mention in a paper, since it shows that the researchers have explored the problem from different angles, which in my opinion could increase the validity of the knowledge gained but also be helpful for anyone who wants to delve deeper into the subject in the future. I was thinking about this when Ilias mentioned the paper he was one of the authors of, which he claims was done purely quantitatively. While this is true for the experiment, wouldn’t the design of the experiment still be part of the same study, and wouldn’t that make it partly qualitative? I’m sure people would argue against me on this point, but that’s the way I see it.
The last example I want to bring up is case studies. While I see them as a strategy rather than a method in itself, one that can utilize both qualitative and quantitative methods, I just love what I see as the controlled chaos in them. When “anything goes” you can combine, alter and change research methods between iterations to gain new information and understand the data you already have. There seems to be some kind of strict framework in place for how research should be performed, which helps filter out bad theory. While case studies should still make sure that the methods they use are implemented properly, I feel they give more opportunity for thinking outside the box, since you are allowed to go back and try again with some other method to improve your knowledge. This is especially helpful in cases where not much is known. In one specific paper I read during this course, the researchers triumphantly claim that they managed to see a connection in a field that still needs more exploration. To be perfectly honest, I was really disappointed by the fact that they never tried to delve deeper, since to me the results felt like educated guesses based on knowledge from similar fields without ever trying to explain anything. Had a case study been done they might have learned something really interesting.