Critical Review of Research #2: Using Peer Feedback to Enhance the Quality of Student Online Postings: An Exploratory Study

EDU800: Critical Review of Research #2
Written By Daniel Grigoletti
11/30/16

Article: Ertmer, P. A., Richardson, J. C., Belland, B., Camin, D., Connolly, P., Coulthard, G., . . . Mong, C. (2007). Using Peer Feedback to Enhance the Quality of Student Online Postings: An Exploratory Study. Journal of Computer-Mediated Communication, 12(2), 412-433. doi:10.1111/j.1083-6101.2007.00331.x

Problem

1. Identify the clarity with which this article states a specific problem to be explored.

The Ertmer article clearly defined the problem: whether using peer feedback as an instructional strategy may lead to better quality postings. The researchers examined how instructor-facilitated feedback helps enable rich learning environments, and, as the literature they referenced suggests, peer feedback in college courses, specifically in online discussions, could have an equal impact on student learning. The study sought to find out how students perceived giving and receiving peer feedback. The researchers posited that good discussion feedback in online coursework is essential to close the learning loop, and that since feedback is costly to instructors in terms of logistical burden and workload, peer feedback could alleviate a significant amount of the time and effort spent while enabling students to improve socio-cognitive engagement. The authors sought to determine how peer feedback can provide cognitive improvement to students. By replacing the instructor in a limited way, peer feedback could provide manifold value to the recipient, the deliverer of the feedback, and the instructor, because it introduces unique enhancements to the normal feedback process. The researchers concluded that timely and high-quality peer feedback has many benefits, though it was not perceived as being as important as the same type of feedback provided by the instructor. There were, however, many other social benefits to the students participating in the study: they had more opportunities to collaborate, were able to build intra-classroom relationships, and could share knowledge and opinions. However, some students were concerned that, because the actual instructor was not providing feedback, they were not getting the most value from the feedback.

2. Comment on the need for this study and its educational significance as it relates to this problem.

Studying feedback in educational environments is a useful endeavor because it seeks to understand the cognitive benefit to students of having their work analyzed, reviewed, and rated, and of having the results presented back to them for reflection. Feedback in online discussions extends and amplifies these ramifications by showing how one of the emerging and powerful course delivery mechanisms, the online course, can be integrated with virtual and asynchronous interaction from faculty and fellow students. Further, this study combines the need to study feedback in general, the use of feedback in online environments, and specifically the use of peer feedback in online environments. Since online courses require unprecedented self-direction and independent learning without the face-to-face presence of the instructor, fellow students can serve to extend learning in a powerful and economical fashion. Whereas a typical class of 30 may interact for only an hour or two in any given week in an onsite class, a hybrid or fully online course can enable 24×7 interaction through an LMS, giving students the ability to exchange ideas and share the responsibility for learning. This extends content exploration, supports knowledge creation, and presents unbounded reflective opportunities to learn. As a natural progression and complement to onsite models, emerging online delivery methods and courses need to meet the challenge students face in absorbing the extreme volumes of information that our technological world produces. The improved and increased interactions among students in online environments can be a powerful way to build courses for learning new technological content. Utilization of new literacies such as information literacy is especially important for the digital natives, or millennials, who comprise much of the student body in today's colleges. Also, since the typical instructor is logistically limited in giving high-quality personalized attention to every student, peer-based learning can go a long way toward alleviating the logistical challenges that educators face when teaching online.

3. Comment on whether the problem is “researchable?” That is, can it be investigated through the collection and analysis of data?

The problem of investigating the effects of peer feedback in online discussions on learning is very researchable, given the extensive availability of online course instances that deliver essentially the same set of courses available onsite. Since online threaded discussions are asynchronous and automatically "recorded," the data representing the discussion events can be readily collected and examined. The networked electronic communication tools employed in online courses include email, discussion threads, blogs, wikis, and synchronous chats, so the opportunities to collect qualitative data from any given LMS are plentiful. In addition, cloud-based tools, large storage capacities, and ready access to the data for assessment and analysis enable examination of both qualitative information and quantitative data such as frequency of postings. In this study, the researchers demonstrated that the qualitative data could also be examined using analysis software applied to material gathered through various data collection techniques. Armed with technological tools, learning management systems, persistent data collection, and external software, they were able to attack the problem comprehensively and establish a baseline for future research into online feedback, whether peer based or instructor based. Further, future research can address aspects of online courses that were not included in this study.
Theoretical Perspective and Literature Review

4. Critique the author’s conceptual framework.

The authors used a case study framework to investigate the learning impact of peer feedback versus instructor feedback in online courses. The environment they examined was a graduate-level course. They used a scoring rubric based on Bloom's taxonomy to examine participant responses and to determine whether high-quality feedback could be sustained across several discussion questions (DQs) during the semester. They were interested in how the quality of the postings changed over the course of the semester and whether higher levels of Bloom's taxonomy could be achieved, while remaining sensitive to the way the discussion questions were written in order to ensure consistency. They utilized a process in which students were informed of feedback and then interviewed about the results; it involved both giving and receiving peer feedback within an online course, from pre-course to post-course. They took a constructivist approach and hoped to see an increase in the quality of the responses, and they also wanted to gauge whether peer feedback was better or worse than instructor feedback. Since most of the previous research they referenced did not involve peer feedback in online courses, they were at a disadvantage in that they could not compare notes with similar studies. They acknowledged that additional research needed to be performed and that this study was exploratory in nature. The conceptual framework was based on a very specific type of feedback: it was not applied to assignments, tests, labs, or other work performed in an online course, but only to threaded discussions, and it focused on peer-to-peer feedback as opposed to traditional instructor feedback. The study was ambitious in this respect, since it sought to extend the knowledge of learning science in a relatively new medium, the online course, using the discussion thread as a proxy for face-to-face interaction. This narrow focus proved effective in ferreting out the positive effects of peer feedback. The study can further our understanding of the online modality and of how asynchronous interactions can help learners, though the contrast with onsite course interaction and peer feedback is asymmetrical because of the vast difference between the two environments.

5. How effectively does the author tie the study to relevant theory and prior research? Are all cited references relevant to the problem under investigation?

The authors of this study frequently cite prior research into feedback and its importance in educational environments. As an exploratory study, it adequately tied previous research on feedback in non-online settings to the current examination of online peer feedback. For example, the authors cited Liu, Lin, Chiu, and Yuan to reinforce the idea that peer feedback requires students to implement additional cognitive processes beyond just reading and writing, including questioning, comparing, suggesting modifications, and reflecting on how the work being rated compares to their own. The study also refers to McConnell's work on how collaborative peer assessment allows students to be less dependent on educators, giving them more autonomy and independence. This collaborative process gives the students doing the ratings alternatives for developing and increasing their own knowledge, learning, and skills in the subject area. The meaningful interaction and discourse between evaluators and the students receiving feedback gives value to both parties in the learning process; it leverages the power of teaching as a learning strategy by providing students opportunities to "micro teach" through evaluating and assessing peer discussion postings.

6. Does the literature review conclude with a brief summary of the literature and its implications for the problem investigated?

While the authors drew on many good resources and relevant literature, they did not include a comprehensive literature review, nor did the conclusion include a summary of the literature. Instead, they strategically placed literature references throughout the article. One implication of this approach for the problem under investigation is that comprehensive literature on peer feedback in courses with online discussions may simply not be available. Their approach to the literature review was not conventional, but they did sufficiently include relevant studies on peer feedback in other settings. The structure of the document was more focused on stating the problem and presenting the research results. They could have included more references to draw from, but the article was relatively short and focused on a very specific sub-area of providing feedback, namely that provided in online discussion forums.

7. Evaluate the clarity and appropriateness of the research questions or hypotheses.

The research questions in this study focused on the impact of peer feedback on the quality of online student postings, whether the quality of learning increased through the use of peer feedback, perceptions of the value of receiving peer feedback versus instructor feedback, and perceptions of the value of giving peer feedback. The research questions were clear and appropriate for establishing the study and for comparing and contrasting feedback from peers versus instructors in online courses. The discussion postings in an online course form an important basis for communication and learning, and the hypothesis was clearly written, resulting in analysis of the impact and quality of discussion postings. For peer feedback in online discussions to be most valuable, the researchers reiterated from previous research on feedback in general, specifically from Schwartz and White, that good feedback is prompt, timely, and thorough; provides ongoing formative and summative assessment; is constructive, supportive, and substantive; and should be specific, objective, and individual. Also, by citing Notar, Wilson, and Ross, they included the notion that feedback should be diagnostic and prescriptive, formative and iterative, and involve both peers and group assessment. Peer interaction in online courses provides an important interpersonal connection, gives students motivation to check and recheck their work since their peers are watching and assessing, and builds a sense of community and trust. The real learning lies in adjusting one's perspective to view how others respond to the question and then responding to the response; this discourse leads to deep learning since it drills down into new territory within the topic. Peer feedback also offloads some of the workload from the instructor by transferring the task of reviewing content to students. The article emphasized that providing feedback is one of the most time-consuming elements of teaching online, so sharing the responsibility of providing feedback with students has a twofold benefit: 1) reduction of workload for teachers, and more importantly, 2) opportunities for students to synthesize information at a high level, emulating the teacher role. When a student gives peer assessments, it opens up dialogue, and the recipient gains insight into their own learning. Online courses rely on quality design and interaction to be rich and valuable, but not everything can be planned, so the discussion thread provides a dynamic aspect to the course. Therefore, feedback in all forms is essential to make the course compelling, keep students engaged, and accelerate and amplify learning. Students are used to getting feedback from instructors, but getting it from peers layers the learning by having a non-expert examine responses, allowing the sharing of ideas and diverse perspectives and leading to a more collaborative learning environment rather than an instructor-centered model.

Research Design and Analysis

8. Critique the appropriateness and adequacy of the study’s design in relation to the research questions or hypotheses.

The design of the study utilized a sound research approach to learning about peer feedback in online discussions, providing multiple raters to evaluate the perceptions and effects that peer feedback delivered to participants. The hypothesis tested how peer feedback compared to instructor feedback in quality and whether it benefited learning outcomes. The study produced a wide variety of data to help judge the effectiveness of the feedback, while acknowledging the logistical problems of providing feedback and collecting information to assess its effectiveness. Both quantitative results and qualitative analysis of the responses via interviews provided valuable insight to the researchers. Data were collected through a variety of techniques, such as standardized pre- and post-interview protocols in which students were asked several research questions addressing discussion postings; these assessed the quality of interaction and provided data on the perceptions of both students and researchers regarding the value of giving and receiving peer feedback. The study applied learning theory, including Bloom's taxonomy, to help determine the depth of learning resulting from peer feedback, which appropriately addressed how deep the learning went with respect to higher-order processes such as analysis, synthesis, and evaluation.

9. Critique the adequacy of the study’s sampling methods (e.g., choice of participants) and their implications for generalizability.

The study used a number of discussion questions to measure the peer feedback process, contrasting it with instructor feedback using a paired sample t-test. However, due to a small sample size, the quantitative results provided only limited insight into the effectiveness of peer feedback for learning. The researchers were able to assess the relevance and impact of student feedback, but cross-referencing against teacher-only feedback in online courses was not present, nor was a qualitative assessment of the student-to-student peer feedback. The sampling in the study was adequate for generating knowledge about short-term perceptions of how peer feedback can be used as an alternative (but not a substitute) for instructor feedback, but it lacked information about how peer feedback affects learning outcomes for online students.
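To make the small-sample limitation concrete, the kind of paired sample t-test the study reports can be sketched as follows. The rating data below are invented purely for illustration (the study's actual ratings are not reproduced here), and the sketch assumes the SciPy library rather than whatever software the authors actually used:

```python
# A minimal sketch of a paired sample t-test on posting ratings,
# using hypothetical data (not the study's actual ratings).
from scipy import stats

# Hypothetical mean rubric ratings per student: postings scored under
# instructor feedback vs. postings scored under peer feedback.
instructor_fb = [3.2, 2.8, 3.5, 3.0, 2.9, 3.4, 3.1, 2.7, 3.3, 3.0]
peer_fb       = [3.4, 3.0, 3.3, 3.2, 3.1, 3.5, 3.0, 2.9, 3.6, 3.2]

t_stat, p_value = stats.ttest_rel(instructor_fb, peer_fb)
print(f"t = {t_stat:.3f}, p = {p_value:.3f}")

# With only ~10 pairs, statistical power is low, so a non-significant
# result is hard to interpret -- the small-sample limitation noted above.
```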

10. Critique the adequacy of the study’s procedures and materials (e.g., interventions, interview protocols, data collection procedures).

The researchers utilized various data collection instruments, including entry and exit survey questionnaires, scored ratings of weekly discussion question postings, and interviews. They applied rubrics and standardized the interview protocol, which added reliability, and analyzed data from both the primary group and subgroups. The consistency of the data sets and the variety of data collection procedures gave them the ability to rate the effects and impacts on student learning of giving and receiving peer feedback, and they concluded from the interviews that the students had a positive perception of the value of peer feedback. They also performed "triangulation" between the interview data and the ratings of the peer feedback, integrating quantitative and qualitative measurements and thereby strengthening the assessment of quality. They were able to recognize patterns in the interview data using qualitative analysis software called NUD*IST. They paid attention to validity, accuracy, and completeness of the data, looked for discrepancies, and used check-coding to establish inter-rater reliability while studying the peer feedback.
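As an illustration of what a check-coding step like this involves (this is not code from the article, and the coding categories are invented), inter-rater agreement between two coders might be computed as simple percent agreement, with Cohen's kappa as a common chance-corrected companion measure:

```python
# Illustrative inter-rater reliability check; the coding labels below
# are hypothetical, not the study's actual coding scheme.
from sklearn.metrics import cohen_kappa_score

rater_a = ["quality", "timeliness", "quality", "value", "quality", "value"]
rater_b = ["quality", "timeliness", "value",   "value", "quality", "value"]

# Simple percent agreement, the figure check-coding typically reports.
agreement = sum(a == b for a, b in zip(rater_a, rater_b)) / len(rater_a)

# Cohen's kappa corrects percent agreement for chance agreement.
kappa = cohen_kappa_score(rater_a, rater_b)
print(f"agreement = {agreement:.2f}, kappa = {kappa:.2f}")
```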

11. Critique the appropriateness and quality (e.g., reliability, validity) of the measures used.

Various data collection techniques were used in the study. Qualitative data collection was conducted at intervals during weeks 3-5 and weeks 7-13 and included standardized interviews to establish reliability. The interviews were conducted via phone and in person (lasting 20-30 minutes each) and were recorded and transcribed to ensure accuracy and completeness. They provided insights into participant perceptions of giving peer feedback and of various aspects of feedback, including quality, timeliness, and quantity. The researchers also collected specific feedback from students on the feedback process itself and measured their understanding by applying Bloom's taxonomy. They utilized tabular data to aggregate the sampled question responses.
Quantitative data collection included entry and exit survey questionnaires, the results of which were used to measure overall perceptions of giving and receiving peer feedback. Scores and ratings on discussion postings during the semester were correlated with the research questions using the same rubric that students had used. The researchers collected data from the peer ratings of discussion postings provided by various peers and applied rubrics to ensure that the measurement of posting quality was consistent. However, the student peer feedback data were sporadic because students were not required to score every peer posting, so the data set was incomplete. During the data collection process, the raters compared results, examined discrepancies, and collaborated on the results. They also made sure that timing was not a factor in scoring by removing posting dates and times from the documents. With regard to sampling reliability, the raters scored randomly selected discussion questions. The raters provided specific examples of student responses in the qualitative data collection, for example, measuring student feelings about Internet filtering, and enabled the students to elaborate on their responses.

12. Critique the adequacy of the study’s data analyses. For example: Have important statistical assumptions been met? Are the analyses appropriate for the study’s design? Are the analyses appropriate for the data collected?

In analyzing the comprehensive and adequate data they collected, the researchers utilized various statistical methods for measuring and studying the quantitative data. They compared their results to the assumptions stated in the research questions and to the results anticipated in their hypotheses, employing methodologies to analyze both the quantitative and qualitative data. The quantitative analysis included tallying results of pre-surveys in which students answered not only objective questions but also open-ended questions, in order to assess student perceptions. The researchers used a 5-level rating scale to measure agreement and disagreement, which they then analyzed using statistical means and other measurement instruments. They also conducted a post-survey in week 16, in which students rated the importance of peer and instructor feedback and commented on the value of both giving and receiving peer feedback, though they noted that not all of the surveys (12 of 15) were returned. A final survey was performed to verify the interview data. During analysis, to alleviate validity concerns, they triangulated interview question data with survey results after data collection was complete. They used a paired sample t-test to compare the average ratings of postings made before the use of peer feedback with those made under peer feedback. Reliability of the data was supported by using multiple interviewers and multiple evaluators to reduce bias, and check-coding was used to ensure inter-rater reliability. They also reported quantitative measurements, providing mean ratings regarding the timeliness, quality, and perceived importance of feedback.

Interpretation and Implications of Results

13. Critique the author's discussion of the methodological and/or conceptual limitations of the results.

Feedback, to be effective, should be of high quality and timely, and since students in online courses do not experience the physical interaction of onsite classes, the learners may struggle to feel social connections to classmates in virtual environments. Having students both give and receive peer feedback goes a long way toward personalizing interactions, since students must use critical thinking to analyze others' work and then absorb and process criticism from other students. Instructor feedback tends to prescribe an expected response, whereas peer feedback opens up dialogical interaction grounded in common experience. The student-to-student interaction is more socially oriented and involves co-construction of knowledge. This adds a group-oriented factor to threaded discussions, which are decidedly asynchronous communicative instruments; the peer-collaborative factor adds another valuable dimension to the activity and may help with cognitive processing of the content. Peer feedback can have drawbacks in that students may become anxious about giving and receiving feedback and concerned about its reliability. In addition, students may not be prepared for, or comfortable with, taking on the role of evaluator.

14. How consistent and comprehensive are the author’s conclusions with the reported results?

The researchers in this study drew from many relevant theorists with regard to the effectiveness of feedback, although many of the studies they cited pertained to face-to-face rather than online learning environments. They concluded that student-to-student feedback can be used effectively in place of instructor feedback. The important factors they stated and tested repeatedly were the timeliness, consistency, and quality, but not necessarily the quantity, of the feedback responses. The integrative data collection, using interviews as well as direct observation of feedback responses, provided a deeper understanding of the students' motivations and of how they internalized the learning opportunities into cognitive growth. The pre- and post-interview experience gave the students an opportunity to reflect on the process, and the researchers cross-referenced and corroborated the interview comments to determine student perceptions of the effectiveness of the feedback process. This reflection appeared to have a positive effect on learning effectiveness. The difficulties that arose were in assessing the qualitative aspects of student postings and in determining the reliability and validity of peer feedback. The results, presented in the form of survey and interview findings (including actual quotations from the respondents), coincided with the researchers' expectation that the feedback process would add value to the course experience. However, the authors conceded that, since this was an exploratory study, they were evaluating peer feedback rather than feedback in general. Even though peer interaction enables the sharing and comparing of information, they did not find better critical thinking and analysis as a result of peer feedback. The peer-to-peer feedback had value in that it enabled students to form basic feedback commentary and to co-construct knowledge with peers. It did provide better comprehension of the content through reflection and reinforcement of the lower levels of Bloom's taxonomy, but it did not prove to result in higher-level cognition, which face-to-face student interaction may be able to foster better.

15. How well did the author relate the results to the study’s theoretical base?

The study focused on online learners and a specific type of feedback: peer feedback in discussion threads. The authors tied this well to a number of theorists' analyses of the importance of timely, substantive, high-quality feedback in learning environments (Higgins, Hartley, and Skelton), of how feedback provides formative assessment (Nicol and Macfarlane-Dick), and of how it contributes to improved self-regulation (Robyler), better socio-cognitive engagement with the content (Vygotsky), and more efficient learning. By studying discussion respondents in a variety of ways and using both qualitative and quantitative data collection methodologies, the researchers strove to learn whether feedback from peers (students) strengthened or weakened learning, cognition, and the construction of meaning relative to interactions with instructors. In addition, the researchers scored discussion feedback using Bloom's taxonomy. By doing so, they could examine how peer feedback lent itself to the lower levels of Bloom's taxonomy, involving recall and comprehension, but also how the reinforcements from peers would affect application, analysis, and synthesis of the knowledge being discussed. They developed a question-response-feedback cycle in which they collected and delivered the feedback responses to the participants. The raters also collaborated with each other, comparing the question-response-feedback results and integrating them with interview results through triangulation. They found that higher-quality learning occurred with a combination of student-to-student and instructor feedback, concurring with the findings of the Ko and Rossen study that the learning process improves when students can cross-check their understanding. They also concurred with Mory that feedback is essential to the learning process.

16. In your view, what is the significance of the study, and what are its primary implications for theory, future research, and practice?

The study provided significant insight into how mixing the roles of student and teacher with respect to the provision of quality feedback, specifically peer-to-peer feedback, can enable students to learn and reflect on their thoughts beyond the feedback from instructors and beyond the immediate discussion questions and topics. The study's implications inform researchers about how peer feedback may aid educators in facilitating course tasks, developing alternative dialogues, disseminating information, and assessing performance in online courses. The theorists cited in the article qualified the need for good feedback as a catalyst for deep learning, concurring that prompt, timely, and thorough feedback is essential to improve learning and develop skills in communication and subject matter. The researchers also provided justifications for how good feedback in general leads to better retention and, in addition, how peer feedback can provide opportunities for social interaction integrated with knowledge construction and sharing. The study presented here is a good foundation for learning about the effect of peer feedback in online courses and can lead future researchers to delve deeper into the interactions enabled through embedded functions of the LMS. This type of study is very relevant and applicable to online courses in their current state, but the online course is evolving and will include richer interactions that may benefit greatly from various forms of feedback.

EDU800 Critical Review #1: Digital Game-Based Learning: Impact of instructions and feedback on motivation and learning effectiveness. By Erhel and Jamet

Erhel, S., & Jamet, E. (2013). Digital game-based learning: Impact of instructions and feedback on motivation and learning effectiveness. Computers & Education, 67, 156-167. doi:10.1016/j.compedu.2013.02.019


1. Identify the clarity with which this article states a specific problem to be explored.

The problems and challenges associated with the use of DGBL, or Digital Game-Based Learning, were clearly articulated in the article. Erhel and Jamet first explored how several learning theories apply to the study of the effects of using various types of instructions and feedback in conjunction with DGBL. They explained how the two experiments they conducted, outlined in the reading, would demonstrate whether DGBL with enhanced instructions would lead to better cognitive results, whether the instructions emphasized the learning factors or the entertainment factors with regard to motivation. They also explored, with the help of many relevant theorists, whether feedback in DGBL scenarios would promote better learning. This type of study is important to further the use of game-based learning: since software and hardware technology is now available to build powerful simulations, this type of research will go a long way toward enhancing the systems in place today.

2. Comment on the need for this study and its educational significance as it relates to this problem.

The authors effectively defined the problem in the context of how little research has been available to study virtual learning environments and determine how they affect motivation, engagement, and deep learning, whether in the form of a competitive game or a simulation. Digital game-based learning is an emerging capability for teaching and learning, and thus a relatively new approach; however, non-digital game-based learning has been studied, so there is a significant amount of research available to draw from. The authors built new knowledge of DGBL by performing these two experiments. By applying learning theory to game development for educational purposes, the content can become more compelling and valuable.

3. Comment on whether the problem is "researchable?" That is, can it be investigated through the collection and analysis of data?

The problems presented in this article, particularly determining how variations of DGBL can influence deep learning versus surface learning, whether specific or general instructions can affect learning with DGBL, and how different question types and feedback can aid memorization and comprehension, are definitely researchable. The authors demonstrated through their experiments that by establishing hypotheses, screening and selecting subjects via pre-testing, and maintaining control variables, DGBL can be studied, and the data gathered can be used to draw conclusions about how instructions and feedback enhance learning when using digital games or simulations. The study, while short, provided usable data which, though analyzed in a fairly rudimentary fashion, formed a basis for future exploration of DGBL's efficacy in learning as compared to other digital multimedia-based learning.

Theoretical Perspective and Literature Review

4. Critique the author's conceptual framework.

The authors drew from many scholarly articles to establish and justify the need for further research on this relatively new type of learning. Their conceptual framework began by defining digital games and how they can be used for both education and entertainment. They also contrasted learning in serious game environments (SGEs) and digital games with conventional media such as classroom learning, and explored how various scholars have reported that games can have a positive effect, or no effect, on learning and motivation. Their approach to using and analyzing digital games and simulations for learning established a way to frame the study of their effect on cognition and learning, in an area that was in need of such a framework.

5. How effectively does the author tie the study to relevant theory and prior research? Are all cited references relevant to the problem under investigation?

In their literature review, the authors effectively and frequently introduced references to relevant learning theories from theorists and researchers who have explored learning and motivation with new media (Liu), motivation and education from a self-determination perspective (Deci), learning from computer games (Tobias & Fletcher), health games (Lieberman), simulation and games (Vogel), and achievement goals in the classroom (Ames & Archer), among many others.

6. Does the literature review conclude with a brief summary of the literature and its implications for the problem investigated?

The literature review provided in this article, while drawing upon several other scholarly articles and theorists, only thoroughly explores and summarizes one of the two experimental hypotheses, namely how using instructions may improve the learning effectiveness of digital games. The other experiment, which explored the efficacy of using feedback in gamified digital education, was not explored until the experiment's conclusion was discussed. This omission of the theoretical background for the second experiment's hypothesis suggests that the authors should have focused on one experiment or the other in this article and performed the additional experiment in another study.

7. Evaluate the clarity and appropriateness of the research questions or hypotheses.

The authors established a compelling case for comparing digital learning games to conventional media, but they did not create a specific experiment that compared DGBL to conventional learning, and they intermingled the discussion of the experiments with the review of literature. While interesting and useful, the hypothesis that DGBL can be better than conventional learning was not resolved with proof either way. The authors did, however, point out that there are contradictory studies reporting both positive and neutral effects of DGBL.

The article appeared to assume that DGBL is superior to conventional learning, and only set out to test whether enhancing DGBL with instructions and feedback would lead to better learning than without them. The authors did provide results for the 1st experiment that tied back to the original hypothesis and extended it with the 2nd experiment, though they stated the hypothesis for the 2nd experiment only in the discussion concluding the study.

Research Design and Analysis

8. Critique the appropriateness and adequacy of the study's design in relation to the research questions or hypotheses.

While the research and experiments were based on the use of DGBL, the authors might have been better served by performing the first experiment more thoroughly, with more thoughtful ways of selecting the sample group, and then following up with a separate study exploring the feedback factor in DGBL. The overall design of experiment 1, regarding the use of instructions in DGBL, involved three phases, which showed that the authors wanted to home in on the issue at hand. All of the study participants were screened in the first phase, and those with too much prior knowledge, based upon the results of a pre-test, were disqualified from participating. The study measured avoidance versus approach in terms of simulations of people with one of four different disease presentations. The hypothesis, which sought to determine whether certain types of instructions (entertainment versus educational) aid cognition and learning of the subject matter, was addressed by the first experiment. The second experiment presented the hypothesis that KCR (knowledge of correct response) feedback in educational games can reduce redundant cognitive processes. As mentioned earlier, there was no literature review of the background theories regarding feedback, though the authors did provide some references, such as Cameron & Dwyer's work on the effects of online gaming on cognition. The article seemed to be testing what Cameron & Dwyer studied with regard to how different feedback types affect achievement of learning objectives.

9. Critique the adequacy of the study's sampling methods (e.g., choice of participants) and their implications for generalizability.

The article utilized a fairly sound way of selecting participants based upon demographic factors such as age, gender, and college attendance. For example, the authors first established generalized categories based upon age (i.e., young adults 18-26) and length of time in their college programs, and developed filters to exclude medical students. However, in the 2nd experiment, the gender breakdown included many more female participants (16 male and 28 female) than in the 1st experiment. This inconsistency suggested that the two experiments were not cohesively designed to work together: the 2nd experiment tested the addition of KCR to the first experiment, but consistency in the sampling methods and choice of participants was not maintained.

10. Critique the adequacy of the study's procedures and materials (e.g., interventions, interview protocols, data collection procedures).

The experiments utilized ASTRA, a multimedia learning environment. This simulated learning environment presented an avatar as a stand-in for a real instructor and presented the case studies of the disease presentations on a simulated television monitor. It was an adequate representation of the situation, though more a facsimile or model of a real-world instructor presenting on a screen; it may have provided the subjects with a simulated association with an actual teacher and the interactions therein. In addition, it provided learning and entertainment instructions for the student to review while viewing the simulation. The combination of full-motion animation with text enabled a richer cognitive environment than a screen with text alone. The methods involved pre-tests, a recall quiz, and questionnaires on knowledge gained after the simulation concluded. The 2nd experiment utilized mostly the same instrumentation and technology as the first, but interjected additional content in order to test whether KCR feedback promoted the learner's emotional investment. By providing popup windows with immediate feedback about the student's responses, the 2nd experiment tested whether better cognition, comprehension, and learning occurred as a result.

11. Critique the appropriateness and quality (e.g., reliability, validity) of the measures used.

The measures applied to the results of the mostly quantitative data were appropriate for this relatively simple experiment. For the first experiment, the authors provided data on the prior-knowledge pre-test, including mean and standard deviation analysis. In addition, they provided similar statistical analysis of the data collected on the recall quiz, the knowledge questionnaire, and the motivation questionnaire, and measured the intrinsic motivation of the subjects. However, the authors presented the second experiment's results differently. While they applied similar statistical analysis, such as mean scores on paraphrase-type versus inference-type questions, the results were presented in tabular form instead of the narrative form used in the first experiment. In addition, they utilized statistical measures such as ANOVA, standard deviations, and means to analyze the motivation questionnaires, presenting the results in tabular form, which reflected how the performance versus mastery goals compared and highlighted the differences between goal avoidance and goal approach.
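For readers unfamiliar with the reported statistics, the kind of one-way ANOVA the authors describe can be sketched as follows. The motivation scores below are invented for illustration, and the sketch assumes SciPy rather than the authors' actual analysis software:

```python
# Hedged sketch of a one-way ANOVA comparing motivation scores across
# the two instruction conditions; all data here are hypothetical.
import statistics
from scipy import stats

learning_instructions      = [4.1, 3.8, 4.4, 4.0, 3.9, 4.2]
entertainment_instructions = [3.5, 3.9, 3.4, 3.7, 3.6, 3.8]

f_stat, p_value = stats.f_oneway(learning_instructions,
                                 entertainment_instructions)
print(f"F = {f_stat:.3f}, p = {p_value:.3f}")

# Means and standard deviations, as typically reported alongside an ANOVA.
for name, group in [("learning", learning_instructions),
                    ("entertainment", entertainment_instructions)]:
    print(name, round(statistics.mean(group), 2),
          round(statistics.stdev(group), 2))
```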

12. N/A
13. Critique the author's discussion of the methodological and/or conceptual limitations of the results.

In the general discussion for the article, the authors summarized the results, outlined their use of learning science, and explained how their hypotheses were confirmed by the data they acquired through the experimentation. They pointed out that the 2nd experiment derived a value-added result by building on the first. They accomplished their goal of combining the 1st experiment's knowledge of how education or entertainment instructions contribute to learning in DGBL with the KCR feedback of the 2nd, but they may have employed too complex a methodology and process to arrive at their results. They subsequently acknowledged that the effects of the question types they utilized did not yield what they expected and noted that future studies may be needed.

14. How consistent and comprehensive are the author's conclusions with the reported results?

The 1st experiment yielded the conclusion that learning instructions were more effective than entertainment instructions at encouraging better comprehension, cognition, and learning, which was what the authors were looking for in their original hypothesis; this was not, however, comprehensively explored. With regard to the 2nd experiment, the authors viewed the results as confirming that feedback in DGBL provides deeper learning and cognitive processing. The authors also concluded that DGBL overall enhanced memorization, and that the study aligned with and was consistent with some of the other studies they cited, such as those on cognitive load theory.

15. How well did the author relate the results to the study's theoretical base?

The authors maintained their commitment to utilizing DGBL while finding ways to enhance its effectiveness: first by adding value to the experience through meta-instructional content, in the form of instructions provided to the student before the training module commenced. They also found that when feedback was provided during gameplay, the students had a more intense experience with regard to their cognitive results, memorization, and learning overall. The authors related these new findings about DGBL to their opening review of the literature on motivation and game-based learning theory.

16. In your view, what is the significance of the study, and what are its primary implications for theory, future research, and practice?

The significance of this research study is that it advances the knowledge base in learning science regarding the gamification of educational modules. As the authors admitted, further studies need to be performed in the future. However, the analysis of the effects of instructions, and of subsequent enhancement with various types of feedback, will enable game and simulation developers and designers to implement changes to their software based upon the practical results of this study.