EDU800 Week 14 Annotation

Dede, C. (2011). Developing a research agenda for educational games and simulations. In Computer games and instruction (pp. 233-250). Charlotte, NC: Information Age Publishing.

In his article, "Developing a Research Agenda for Educational Games and Simulations," Dede makes five fundamental assumptions about developing a research agenda for educational games and simulations. His first assumption is that the research agenda should involve the generation of usable knowledge when studying learning within games and simulations, in which many stakeholders collaboratively develop and create knowledge in a community orientation. These stakeholders include those who conduct the research, practice the material being studied, and establish policy, as well as specialized theorists such as constructivists, behaviorists, and cognitivists. Since the games and simulations examined in educational research vary in complexity, design, and applicability, it is better to have many eyes looking at the same things and brainstorming about usefulness, usability, and usage, which can then be applied to generating new knowledge in the research study. He argues that instead of the usual focused independent study driven by intellectual curiosity, in which scholars examine existing games and simulations and analyze their findings separately, a problem first needs to be defined in educational science. Then, as the stakeholders study simulations and games in that problem context, they can better find solutions and create usable knowledge, from a practical standpoint, to apply to the subject being studied. His second assumption about studying games and simulations involves collective research, as contrasted with rogue studies that come to conclusions in a somewhat isolated manner. In order to find solutions that attack the problem from as many angles as possible, researchers must deliberately and continually collaborate, creating portfolio knowledge that is distributed among many sub-contexts and perspectives on the larger problem.
This gives the research study substance and depth because of the synergies and catalysts that come from collaborative focus. Thirdly, he assumes that game and simulation studies should focus on what works, when, and for whom. Since there is no be-all and end-all solution for learning in educational games and simulations, each individual experiencing a game or simulation potentially has a different set of take-aways. So, by individualizing the study and applying the usability and efficacy of an instance of a game or simulation to each learner, a deeper understanding of what works for each person can be determined. If multiple games and/or simulations are included, each one may resonate differently with the people using them. He likens the variant ways that people do mundane things such as sleeping and eating, and more complex things such as bonding with others, to the ways that people perform other activities, especially in an environment that tries to approximate the real world. The real world affects the situated virtual world, so real-world knowledge is necessarily applicable to simulated worlds. To measure the learning efficacy of a given game or simulation, researchers must personalize the experience and take into account the complexities and preferences of the learner. The fourth and fifth assumptions Dede states concern the treatment effects considered when developing agendas for studying games and simulations. The treatment effects he is concerned about are how the different ways that studies are conducted will affect whether the knowledge generated can be applied in a general way to other research. Depending upon how a study is designed, implemented, and analyzed, it may not be as valuable and worthwhile as a different approach. So, by normalizing and standardizing the approach to studying games and simulations, there is less room for going down the wrong path and wasting time and money.
Some studies may be merely superficial if designed the wrong way. He notes the risk of studies being simply summative and lacking the depth of a well-run research study. Small flaws in study design and implementation could have large effects on the results. Lastly, Dede examines scalability, demonstrating it through a five-dimensional framework drawn from the River City multi-user virtual environment for middle school science. To scale a study, it must have depth of effectiveness, sustainability in design, spreadability in an economical way, a shift toward being generalized and applied, and evolvability as new information is learned.

By making his five assumptions about forming agendas for studying the educational technology of games and simulations, Dede, in one fell swoop, both focuses on how to properly study this mode of learning and expands the understanding that simulations should be treated with a myriad of objectives in mind. Since learning from different technologically enhanced media and modalities is not fully understood, a framework needs to be developed for each one. In the case of games and simulations, the guidelines/assumptions that Dede proposes give researchers a sound and sane way to approach studying something that tries to replicate the real or exaggerated world through computer-generated images, sounds, and scenarios.

The five assumptions that Dede integrates in this paper can be applied to researching any educational technology. By attacking one of the more complex types, games and simulations, he sets the standard for studying other types that may involve only components of simulations, such as animation, hypertext, audio, and video. He gives a usable framework by focusing on the five assumptions, giving future researchers a manageable way to start studying simulations. Since computer simulations and games are emergent and complex learning tools, researchers need a way to tackle their complexity through a divide-and-conquer approach. Dede segments the approach to studying how learning science can benefit from games and simulations, treating them more as a problem to be managed before being solved.

Critical Review of Research #2: Using Peer Feedback to Enhance the Quality of Student Online Postings: An Exploratory Study

EDU800: Critical Review of Research #2
Written By Daniel Grigoletti
11/30/16

Article: Ertmer, P. A., Richardson, J. C., Belland, B., Camin, D., Connolly, P., Coulthard, G., . . . Mong, C. (2007). Using Peer Feedback to Enhance the Quality of Student Online Postings: An Exploratory Study. Journal of Computer-Mediated Communication, 12(2), 412-433. doi:10.1111/j.1083-6101.2007.00331.x

Problem

1. Identify the clarity with which this article states a specific problem to be explored.

The Ertmer article clearly defined the problem: how using peer feedback as an instructional strategy may lead to better-quality postings. The researchers examined how instructor-facilitated feedback is valuable for enabling rich learning environments. As stated in the literature they referenced, peer feedback in college courses, specifically in online discussions, could have an equal impact on student learning. The study sought to find how students perceived giving and receiving peer feedback. The researchers posited that good discussion feedback in online coursework is essential to close the learning loop, and, since feedback is costly to instructors in terms of logistical burden and workload, that peer feedback could go a long way toward alleviating a significant amount of the time and effort spent, while enabling students to improve socio-cognitive engagement. The authors sought to determine how peer feedback can provide cognitive improvement to students. By replacing the instructor in a limited way, peer feedback could provide manifold value to the recipient, the deliverer of the feedback, and the instructor, by providing unique enhancements to the normal feedback process. They concluded that timely and high-quality peer feedback has many benefits, but that it was not as important as the same type of feedback provided by the instructor. There were, however, many other social benefits to the students participating in the study. They had more opportunities to collaborate and were able to build intra-classroom relationships and share knowledge and opinions. However, some students were concerned that because the actual instructor was not providing feedback, they were not getting the most value from the feedback.

2. Comment on the need for this study and its educational significance as it relates to this problem.

Studying feedback in educational environments is a useful endeavor because it seeks to understand the cognitive benefit to students of having their work analyzed, reviewed, and rated, and of getting the results presented back to them for reflection. Feedback in online discussions extends and amplifies the ramifications of feedback by showing how one of the emerging and powerful course delivery mechanisms, the online course, can be integrated with virtual and asynchronous interaction from faculty and fellow students. Further, this study combines the need to study feedback in general, the use of feedback in online environments, and specifically the use of peer feedback in online environments. Since online course pursuits require unprecedented self-direction and independent learning without the face-to-face presence of the instructor, the role of fellow students can prove to be a way to extend learning in a powerful and economical fashion. While a typical class of 30 may interact for only an hour or two in any given week in an onsite class, a hybrid or fully online course can enable 24×7 interaction through an LMS, giving students the ability to exchange ideas and share the responsibility for learning. This extends content exploration, provides for knowledge creation, and presents unbounded reflective opportunities to learn. As a natural progression from and complement to onsite models, emerging online delivery methods and courses need to meet the challenge students face in absorbing the extreme volumes of information in our technological world that need to be disseminated and learned. The improved and increased interactions between and among students in online environments can be a powerful way to build courses for learning new technological content. Utilization of new literacies such as information literacy is important, especially for the digital natives or millennials who comprise much of the student body within today's colleges.
Also, since the typical instructor is logistically limited in giving high-quality personalized attention to every student, peer-based learning can go a long way toward alleviating the logistical challenges that educators face when teaching online.

3. Comment on whether the problem is “researchable?” That is, can it be investigated through the collection and analysis of data?

The problem of investigating the effects of peer feedback in online discussions on learning is very researchable, given the extensive availability of online course instances that deliver essentially the same set of courses available onsite. Since online threaded discussions are asynchronous and automatically "recorded," the data representing the discussion events can be readily collected and examined. The networked electronic communication tools employed in online courses include emails, discussions, blogs, threads, wikis, and synchronous chats, so the opportunities to collect qualitative data from any given LMS are plentiful. In addition, cloud-based tools, large storage capacities, and the ability to access the data for assessment and analysis enable examination of both qualitative information and quantitative data such as frequency of postings. In this study, the researchers demonstrated that they could also examine the qualitative data using software that analyzes it through various data collection techniques. Armed with technological tools, learning management systems, persistent data collection, and external software, they were able to comprehensively attack the problem and establish a baseline for future research into online feedback, whether peer-based or instructor-based. Further, future research can address aspects of online courses that were not included in this study.
Theoretical Perspective and Literature Review

4. Critique the author’s conceptual framework.

The authors used a case study framework to investigate the learning impact of peer feedback versus instructor feedback in online courses. The environment they examined was a graduate-level course. They used a scoring rubric based on Bloom's taxonomy to examine participant responses and determine whether high-quality feedback could be sustained during the semester across several discussion questions (DQs). They were interested in seeing how the quality of the postings changed over the course of the semester. They wanted to see whether higher levels of Bloom's taxonomy could be achieved, but had to be sensitive to the way the discussion questions were written to ensure consistency. They utilized a process to inform students of feedback and then interviewed them on the results. It involved both giving and receiving peer feedback within an online course, from pre-course to post-course. They utilized a constructivist approach and hoped to see an increase in the quality of the responses. They also wanted to gauge whether peer feedback was better or worse than instructor feedback. Since most of the previous research they referenced did not involve peer feedback in online courses, they were at a disadvantage in that they could not compare notes with similar studies. They acknowledged that additional research needed to be performed and that this study was exploratory in nature. The conceptual framework of the study was based upon a very specific type of feedback: feedback was not applied to assignments, tests, labs, and other work performed in an online course, but was provided only on threaded discussions. Further, it focused on the nature of peer-to-peer feedback as opposed to traditional instructor feedback. The study was ambitious in this respect, since it sought to extend the knowledge of learning science in a relatively new medium, the online course, and with the proxy for face-to-face interaction, the discussion thread.
Because of this narrow examination, it proved effective at ferreting out the positive effects of peer feedback. The study can have the effect of furthering our understanding of the online modality and how asynchronous interactions can help learners. There is an asymmetrical contrast with onsite course interaction and peer feedback because of the vast difference between the two environments.

5. How effectively does the author tie the study to relevant theory and prior research? Are all cited references relevant to the problem under investigation?

The authors of this study frequently cite prior research into feedback and its importance in educational environments. As an exploratory study, it adequately tied the previous research on feedback in non-online settings to the current examination of online peer feedback. For example, they cited Liu, Lin, Chiu, and Yuan to reinforce the idea that peer feedback requires students to implement additional cognitive processes beyond just reading and writing, including questioning, comparing, suggesting modifications, and reflecting on how the work being rated compares to their own. The study also refers to McConnell's work on how collaborative peer assessment allows students to be less dependent on educators, giving them more autonomy and independence. This collaborative process gives the students doing the ratings alternatives for developing and increasing their own knowledge, learning, and skills in the subject area. This meaningful interaction and discourse between evaluators and the students receiving feedback gives value to both parties in the learning process. It leverages the power of teaching as a learning strategy by providing students opportunities to "micro teach" through evaluating and assessing peer discussion postings.

6. Does the literature review conclude with a brief summary of the literature and its implications for the problem investigated?

While the study drew on many good resources and references to relevant literature, the authors did not include a comprehensive literature review, nor did the conclusion include a summary of the literature. Instead, they strategically placed literature references throughout the article. One implication of this approach for the problem they were investigating is that comprehensive literature may simply not be available for peer feedback in courses with online discussions. Their approach to the literature review was not conventional, but they did sufficiently include relevant studies on peer feedback in other settings. The structure of the document was more focused on stating the problem and presenting the research results. They could have included more references to draw from, but the study was relatively short and focused on a very specific sub-area of providing feedback, namely that provided in online discussion forums.

7. Evaluate the clarity and appropriateness of the research questions or hypotheses.

The research questions provided in this study focused on the impact of peer feedback on the quality of online student postings, whether learning quality increased through the use of peer feedback, perceptions of the value of receiving peer feedback versus instructor feedback, and perceptions of the value of giving peer feedback. The research questions were clear and appropriate for establishing the study and comparing/contrasting feedback from peers versus instructors in online courses. The discussion postings in an online course form an important basis for communication and learning, and the hypothesis was clearly written, resulting in analysis of the impact and quality of discussion postings. For peer feedback in online discussions to be most valuable, the researchers reiterated from previous research on feedback in general, specifically from Schwartz and White, that good feedback is prompt, timely, and thorough; provides ongoing formative and summative assessment; is constructive, supportive, and substantive; and should be specific, objective, and individual. Also, by citing Notar, Wilson, and Ross, they included the notion that feedback should be diagnostic and prescriptive, formative and iterative, and involve both peer and group assessment. Peer interaction in online courses provides an important interpersonal connection, gives students motivation to check and recheck their work since their peers are watching and assessing, and builds a sense of community and trust. The real learning lies in adjusting one's perspective to see how others respond to the question, then responding to the response. This discourse leads to deep learning since it drills down into new territory of the topic. Peer feedback also has the effect of offloading some of the workload from the instructor by transferring the task of reviewing content to students.
The article emphasized that providing feedback is one of the most time-consuming elements of teaching online, so sharing the responsibility of providing feedback with students has a twofold benefit: 1) reduction of workload for teachers, and, more importantly, 2) giving students opportunities to synthesize information at a high level, emulating the teacher's role. When a student gives a peer assessment, it opens up dialogue, and the recipient is given insight into their own learning. Online courses rely on quality design and interaction to be rich and valuable, but not everything can be planned, so the discussion thread provides a dynamic aspect to the course. Therefore, feedback in all forms is essential to make the course compelling, keep students engaged, and accelerate and amplify learning. Students are used to getting feedback from instructors, but getting it from peers layers the learning by having a non-expert examine responses, allowing the sharing of ideas and diverse perspectives and leading to a more collaborative learning environment rather than a patriarchal model.

Research Design and Analysis

8. Critique the appropriateness and adequacy of the study’s design in relation to the research questions or hypotheses.

The design of the study utilized a sound research approach to learning about peer feedback in online discussions by providing multiple raters to evaluate the perceptions and effects of the peer feedback delivered to participants. The hypothesis tested how peer feedback compared to instructor feedback in quality and whether it benefited learning outcomes. The study provided a great variety of resulting data to help judge the effectiveness of the feedback; however, it acknowledged that there are logistical problems with providing feedback and collecting information to assess its effectiveness. The data included both quantitative results and qualitative analysis of responses via interviews, providing valuable insight to the researchers. Data were collected through a variety of research techniques, such as multiple, standardized pre- and post-interview protocols in which students were asked several research questions addressing discussion postings; these assessed the quality of interaction and provided data on the perceptions of both students and researchers regarding the value of giving and receiving peer feedback. The study applied learning theory, including Bloom's taxonomy, to help determine the depth of learning resulting from peer feedback, which appropriately addressed how deep the learning went with respect to higher-order skills such as analysis, synthesis, and evaluation.

9. Critique the adequacy of the study’s sampling methods (e.g., choice of participants) and their implications for generalizability.

The study used a number of discussion questions to measure the peer feedback process, contrasting it with instructor feedback using a paired-sample t-test. However, due to a small sample size, the quantitative results provided only limited insight into the effectiveness of peer feedback for learning. The researchers were able to assess the relevance and impact of student feedback, but cross-referencing against teacher-only feedback in online courses was not present, nor was a qualitative assessment of the student-to-student peer feedback. The specific sampling in the study was adequate to generate knowledge about short-term perceptions of how peer feedback can be used as an alternative to (but not a substitute for) instructor feedback, but it lacked information about how peer feedback can affect learning outcomes for online students.

10. Critique the adequacy of the study’s procedures and materials (e.g., interventions, interview protocols, data collection procedures).

The researchers utilized various data collection instruments, such as entry and exit survey questionnaires, scored ratings of weekly discussion question postings, interviews, and surveys. They applied rubrics and standardized the interview protocol, which added reliability, and analyzed data from both the primary group and subgroups. The consistency of the data sets and the variety of data collection procedures gave them the ability to rate the effects and impacts of giving and receiving peer feedback on student learning, and they concluded from the interviews that the students had a positive perception of the value of peer feedback. They also performed "triangulation" between the interview data and the ratings of the peer feedback. This integrated the quantitative and qualitative measurements, which had the effect of amplifying the assessment of quality. They were able to recognize patterns in the interview data using qualitative analysis software called NUD*IST. They paid attention to validity, accuracy, and completeness of the data, looked for discrepancies, and used check-coding to verify inter-rater reliability while studying the peer feedback.
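The check-coding the authors describe is essentially a check of inter-rater agreement. The article does not name the statistic used, but a chance-corrected coefficient such as Cohen's kappa illustrates the idea; the statistic choice and the rating data below are assumptions for illustration only, not taken from the study.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Agreement between two raters, corrected for chance agreement."""
    n = len(rater_a)
    # Observed agreement: fraction of items both raters coded identically.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected chance agreement, from each rater's marginal label frequencies.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    p_e = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical codes assigned by two raters to eight discussion postings
rater_1 = ["A", "A", "B", "B", "A", "B", "A", "A"]
rater_2 = ["A", "B", "B", "B", "A", "B", "A", "A"]
print(cohens_kappa(rater_1, rater_2))  # 0.75
```

A kappa near 1.0 indicates the raters' codings agree well beyond what chance alone would produce, which is the reassurance check-coding is meant to provide.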

11. Critique the appropriateness and quality (e.g., reliability, validity) of the measures used.

Various data collection techniques were used in the study. Qualitative data collection was conducted at intervals during weeks 3-5 and weeks 7-13, and included standardized interviews to establish reliability. The interviews were conducted via phone and in person (for a duration of 20-30 minutes), then recorded and transcribed to ensure accuracy and completeness. The interviews provided insights into participant perceptions about giving peer feedback and about various aspects of feedback, including quality, timeliness, and quantity. The researchers also collected specific feedback from students on the feedback process itself, and measured their understanding using Bloom's taxonomy. They utilized tabular data to aggregate the sampled question responses.
Quantitative data collection included entry and exit survey questionnaires, the results of which were used to measure overall perceptions of students giving and receiving peer feedback. Scores/ratings on discussion postings during the semester were correlated with the research questions using the same rubric the students had used. The researchers collected data from the peer ratings of discussion postings, provided by various peers, and applied rubrics to ensure that the measurement of posting quality was consistent. However, the student peer feedback data were sporadic, because students were not required to score every peer posting, so the data set was incomplete. During the data collection process, the raters compared results, examined discrepancies, and collaborated on the results. They also made sure that timing was not a factor in scoring by removing posting dates and times from the documents. With regard to sampling reliability, the raters scored randomly selected discussion questions. The raters provided specific examples of student responses in the qualitative data collection, for example, measuring student feelings about Internet filtering, and enabled the students to elaborate on their responses.

12. Critique the adequacy of the study’s data analyses. For example: Have important statistical assumptions been met? Are the analyses appropriate for the study’s design? Are the analyses appropriate for the data collected?

During the analysis of the comprehensive and adequate data they collected, the researchers utilized various statistical methods for measuring and studying the quantitative data. They compared their results to the assumptions stated in the research questions and to the results anticipated in their hypotheses. They employed methodologies to analyze both the quantitative and the qualitative data. The quantitative analysis included tallying results of pre-surveys in which the researchers gave the students opportunities to answer not only objective questions but also open-ended questions, in order to assess student perceptions. They used a 5-level rating scale to measure agreement/disagreement, which they then analyzed using statistical means and other measurement instruments. They also conducted a post-survey in week 16, in which students rated the importance of peer and instructor feedback and commented on the value of both giving and receiving peer feedback, though they noted that not all of the surveys (12/15) were returned. They also performed a final survey to verify the interview data. During analysis, to alleviate validity concerns, they triangulated interview question data with survey results after completion of the data collection. They used a paired-sample t-test to compare the average ratings obtained on postings during the use of peer feedback with those obtained prior to its use. Reliability of the data was ensured by using multiple interviewers and multiple evaluators to reduce bias. They also used check-coding to ensure inter-rater reliability. They utilized measurements of quantitative data, providing mean ratings regarding timeliness, quality, and perceptions of the importance of feedback.

Interpretation and Implications of Results

13. Critique the author’s discussion of the methodological and/or conceptual
limitations of the results.

Feedback, to be effective, should be of high quality and timely, and since students in online courses do not experience the physical interaction of onsite classes, the learners may struggle to feel social connections to classmates in virtual environments. Students can both give and receive peer feedback, which goes a long way toward personalizing interactions, since students must use critical thinking to analyze others' work and then absorb and process criticism from fellow students. Instructor feedback tends to prescribe an expected response, whereas peer feedback opens up dialogical interaction based on common experience. The student-to-student interaction is more socially oriented and involves co-construction of knowledge. This brings a more group-oriented factor to threaded discussions, which are decidedly asynchronous communicative instruments. By adding a peer-collaborative factor, it adds another valuable dimension to the activity and may help with cognitive processing of the content. Peer feedback can have drawbacks, in that students may become anxious about giving and receiving feedback and concerned about the reliability of the feedback. In addition, students may not be prepared for, or comfortable with, taking on the role of evaluator.

14. How consistent and comprehensive are the author’s conclusions with the reported results?

The researchers in this study drew from many relevant theorists with regard to the effectiveness of feedback. However, many of the studies they cited pertained to face-to-face rather than online learning environments. The researchers concluded that student-to-student feedback can be used effectively in place of instructor feedback. The important factors they stated and tested repeatedly were the timeliness, consistency, and quality, but not necessarily the quantity, of the feedback responses. The integrative data collection, using interviews as well as direct observation of feedback responses, provided a deeper understanding of the motivations of the students and how they internalized the learning opportunities into cognitive growth. The pre- and post-interview experience gave the students the opportunity to reflect on the process, and the researchers cross-referenced and corroborated the interview comments to determine students' perceptions of the effectiveness of the feedback process. This reflection appeared to have a positive effect on learning effectiveness. The difficulties that arose were in assessing the qualitative aspects of student postings and determining the reliability and validity of peer feedback. The results, presented in the form of survey and interview results (including actual quotations from the respondents), coincided with the researchers' expectation that the feedback process would add value to the course experience. However, the authors conceded that, since this was an exploratory study, they were evaluating peer feedback rather than feedback in general. Even though peer interaction enables sharing and comparing of information, they did not find better critical thinking and analysis as a result of peer feedback. The peer-to-peer feedback had value in that it enabled students to form basic feedback commentary and co-construct knowledge with peers.
It did provide better comprehension of the content through reflection and reinforcement of the lower levels of Bloom's Taxonomy, but did not prove to result in higher-level cognition, which face-to-face student interaction may be able to do better.

15. How well did the author relate the results to the study’s theoretical base?

The study focused on online learners and a specific type of feedback: peer feedback in discussion threads. The authors tied this well to a number of theorists' analyses (Higgins, Hartley, and Skelton) of the importance of timely, substantive, high-quality feedback in learning environments, and to how feedback provides formative assessment (Nicol and Macfarlane-Dick), which contributes to improved self-regulation (Robyler), better socio-cognitive engagement with the content (Vygotsky), and more efficient learning. By studying discussion respondents in a variety of ways and using both qualitative and quantitative data collection methodologies, the researchers were striving to learn whether feedback from peers (students) strengthened or weakened learning, cognition, and the construction of meaning relative to interactions with instructors. In addition, the researchers scored discussion feedback using Bloom's taxonomy. Doing so allowed them to examine how peer feedback lent itself to the lower levels of Bloom's taxonomy involving recall and comprehension, but also how reinforcement from peers affected application, analysis, and synthesis of the knowledge being discussed. They developed a process involving a question-response-feedback cycle, in which they collected and delivered the feedback responses to the participants. The raters also collaborated with each other, comparing the question-response-feedback results and integrating them with interview results through triangulation. They found that higher-quality learning occurred with a combination of student-to-student and instructor feedback, concurring with the findings of the Ko and Rossen study, which stated that the learning process is improved when students can cross-check their understanding. They also concurred with Mory that feedback is essential to the learning process.

16. In your view, what is the significance of the study, and what are its primary implications for theory, future research, and practice?

The study provided significant insight into how mixing the roles of student and teacher with respect to the provision of quality feedback, specifically peer-to-peer feedback, can enable students to learn and reflect on their thoughts beyond the feedback from instructors and beyond the immediate discussion questions/topics. The implications of the study were to inform researchers how peer feedback may aid educators in facilitating course tasks, developing alternative dialogues, disseminating information, and assessing performance in online courses. The theorists cited in the article qualified the need for good feedback as a catalyst for deep learning. They concurred that prompt, timely, and thorough feedback is essential to improve learning and develop skills in communication and subject matter. The researchers in this study also provided justifications for how good feedback in general leads to better retention and, in addition, how peer feedback can provide opportunities for social interaction integrated with knowledge construction and sharing. The study presented here is a good foundation for learning about the effect of peer feedback in online courses, and can lead future researchers to delve deeper into the interactions enabled through embedded functions of the LMS. This type of study is very relevant and applicable to online courses in their current state, but the online course is evolving and will include richer interactions that may benefit greatly from various forms of feedback.

EDU800 Week 13 Annotation

Robelia, B., Greenhow, C., & Burton, L. (2011). Environmental learning in online social networks: Adopting environmentally responsible behaviors. Environmental Education Research, 17(4), 553-575.

In this article, Robelia and colleagues discuss how an application within Facebook, called Hot Dish, can be integrated and used to help students learn about the environment. The study examined using Facebook in education to help those with like minds communicate and share information about a subject area, specifically environmental studies. Applications such as Hot Dish can be developed to integrate with the Facebook platform; programmers can build their own applications to leverage the Facebook environment, and Hot Dish is an example of this. First, the authors defined and discussed the SNS (social networking site) and then described how Hot Dish was integrated into Facebook. The key features of Facebook that Hot Dish leveraged are the ability to have unique profiles, to share connections with others who have common interests, and to access and communicate with this list of connections. Hot Dish specifically was designed to share information within the Facebook SNS about pro-environmental and climate change topics and activism. Since the SNS is a common way for young people to communicate, it can be layered upon for a more specific purpose. As an open-source application, Hot Dish has many authors and contributors, and can be extended and enhanced by a community of technical people and programmers to enrich it beyond a proprietary application. The study did a meta-analysis of studies in environmental subject matter, and showed how Facebook can be used to help learner communities study and share information about environmental science.

The application Hot Dish can work as a way to disseminate information about the environment, though a Facebook group may be adequate for most types of special-interest applications. Hot Dish enhances the community-building process that Facebook facilitates. Facebook applications, as well as other social media applications, can be built to extend the features and functions of the parent, or hosting, SNS. The applications can be customized to focus on such things as sub-community creation, creating deep and rich learning experiences, and providing areas to post, share, and showcase specific articles on the topic, in this case the pro-environmental movement. This particular application can help those involved in environmental studies to develop and create new knowledge and to share it through a powerful medium, particularly through social means. As a subject area such as environmental studies evolves, a software system like this can develop learners, help build skills, and qualify people for careers in environmental industries, as well as enable further research and study into this area of science. The Hot Dish system can be used to model ideas and practices for those who want to further their knowledge and ability to effect constructive improvements in environmental science. By creating rich learning environments like this, the meaningful engagement that students have with the subject matter increases.

The potential for building custom applications on top of SNSs can be very powerful, giving researchers and educators already-established foundations for their content and learning experiences. Since Facebook is so ubiquitous and universal, its already-built community basis has great value to educators for developing learning communities. It can be tied to many learning theories that researchers and educators are interested in, such as social learning theory, free-choice learning theory, and behavior-change theory. The open-source application on Facebook highlighted in the article is just one example of using the networks and infrastructure of established systems to serve new purposes. Other applications can be developed for subject areas of interest to learning science, such as government, medicine, business, museums, hobbies, and science education.

EDU800 Week 12 Annotation

Salmon, G., Nie, M., & Edirisingha, P. (2010). Developing a five-stage model of learning in Second Life. Educational Research, 52(2), 169-182.

The Salmon article discusses a study of teaching and learning using a five-stage model in the online interactions of users in Archaeology, Digital Photography, and Media and Communications within Second Life, an online multi-user virtual environment. The study focused on collaborative activities implemented over computer networks involving asynchronous communication, for use within blended learning environments. The subjects, all from the UK, ranged from undergraduate to postgraduate students involved in higher and professional education. Data was collected in a variety of ways, including text-based interviews conducted within Second Life chats, and then analyzed. The researchers used conferencing software called FirstClass and studied learning tasks performed in what they called MOOSE (MOdelling Of Second life Environment), gathering information primarily via asynchronous discussion forums. The five stages included looking at (1) how students prepare to access and take part in online learning in Second Life, including how to navigate and use the system, learning on their own; (2) beyond individual involvement, group work and the establishment of the students' unique identities in the simulated world, how they interact and socialize with others, and how they adjust to the world and cooperate with others to build value; (3) how the participants created, consumed, and shared information, performed various tasks, and tested the parameters and extents of the virtual environment; (4) how well the students succeeded in knowledge creation through activities building upon the previous stages as they performed and collaborated on various tasks and implemented processes involving higher-level thinking; and finally (5) how the students reflected on meeting their goals and how effectively they constructed knowledge from their experience of working together in the virtual 3D world, which led to growth both personally and for the group.
Each of these stages can be used for scaffolding purposes in the context of each of the three case studies. There were unique opportunities in each of the cases to examine how people learn in the SL (Second Life) environment. For example, Digital Photography involves developing artifacts, which can be used in interactions to stimulate the learning experiences. The participants naturally developed enhancements to their world utilizing the artifacts that they created. These became assets and gave the subjects opportunities to create alternate worlds, such as a Kalashi village, which they were able to navigate through and interact with.

Second Life is a unique simulation because it involves a 3D world that mimics the real world and offers enhancements that each user has the freedom to develop, both to their avatars and to the world in which they exist. There is great social and cultural engagement in the system, which can be very powerful in teaching and learning experiments and activities. It allows the students to experience exotic visual environments as well as hyperbolic social interactions with others, and enables them to live out fantasies that they would not be able to access in the real world, for example, teleporting to different parts of the world and confronting others with exaggerated renditions of themselves. This may have the effect of freeing the students to push the envelope of what they would be willing to interact with and discuss, thus contributing to a rich learning environment.

This article, and the studies it outlines, give an account of how systems like Second Life can be a powerful tool for researchers, providing a virtualized social environment in which to study learning. The created worlds that SL provides give the researcher ways to experiment very economically. The simulations can be created and destroyed, backed up, and tested repeatedly. Researchers can test different pathways that occur depending upon the sequences they implement. They can easily drop different subjects in and out to test new theories and hypotheses. By using a multi-stage model, the researchers can modularize the study to examine the outcomes, tweaking the variables in one of the stages to see how it affects the overall results. The 3D nature of Second Life also provides enhanced dimensionality beyond just the physical “look” of the environments. It evokes cognition in a versatile way that would otherwise require complex physical experimentation, with buildings, rooms, people, and many other resources. The value of scaffolding with a multi-stage model is that examining the state of learning at any stage, and looking at how it affects the next stage, can produce a deep understanding of how people learn and can contribute to learning science. The results gathered in a study like this can be applied outside of a virtual world and utilized to build learning environments. However, building richer learning environments, which involve physical avatars or robots to interact with and learn from, may be the next step in the evolution of a simulated world like Second Life. In its current state, it may seem like a novelty or a primitive environment when we look at it a few years from now.

EDU800 Week 11 Annotation

Ertmer, P. A., Richardson, J. C., Belland, B., Camin, D., Connolly, P., Coulthard, G., Mong, C. (2007). Using Peer Feedback to Enhance the Quality of Student Online Postings: An Exploratory Study. Journal of Computer-Mediated Communication, 12(2), 412-433. doi:10.1111/j.1083-6101.2007.00331.x

The Ertmer article studies how using peer feedback as an instructional strategy may increase the quality of postings. Feedback in threaded discussions of online courses is essential for enabling students to self-regulate their performance, confirm prior knowledge, and improve cognitive engagement. To be effective, feedback must be present, of high quality, and timely; and since students in online courses do not experience the physical interaction of onsite classes, they may struggle to feel social connections to classmates in virtual environments. Students can both give and receive peer feedback, which goes a long way toward personalizing interactions, since students must use critical thinking to analyze others' work and then absorb and process criticism from other students. Instructor feedback tends to prescribe an expected response, whereas peer feedback opens up dialogical interaction grounded in common experience. The student-to-student interaction is more socially oriented and involves co-construction of knowledge. This provides more of a group-oriented factor to threaded discussions, which are decidedly asynchronous communicative instruments. Adding a peer-collaborative factor adds another valuable dimension to the activity and may help with cognitive processing of the content. Peer feedback can have drawbacks in that students may become anxious about giving and receiving feedback and concerned about its reliability. In addition, students may not be prepared for, or comfortable with, taking on the role of an evaluator.

Discussion postings form an important basis for communication in online courses and can be judged on both quantity and quality. To be most valuable, they must be interactive, rather than having all students simply respond to the question with an “answer.” Peer interaction in online courses provides an important interpersonal dimension and gives students motivation to check and recheck their work, since their peers are watching and assessing, and it also builds a sense of community and trust. The real learning is adjusting one's perspective to view how others respond to the question, then responding to the response. This discourse leads to deep learning since it drills down into new territory of the topic. Peer feedback also has the effect of offloading some of the workload from the instructor by transferring the task of reviewing content to students. The article emphasized how providing feedback is one of the most time-consuming elements of teaching online, so sharing the responsibility with students has a twofold benefit: 1) reduction of workload for teachers, and more importantly, 2) giving students opportunities to synthesize information at a high level, emulating the teacher role. When a student gives peer assessments, it opens up dialogue, and the recipient is given insight into their own learning. Online courses rely on quality design and interaction to be rich and valuable, but it cannot all be planned, so the discussion thread provides a dynamic aspect to the course. Therefore, feedback in all forms is essential to make the course compelling, keep students engaged, and accelerate and amplify learning. Students are used to getting feedback from instructors, but getting it from peers layers the learning: having a non-expert examine responses allows sharing of ideas and diverse perspectives, and leads to a more collaborative learning environment rather than a patriarchal model.

Researchers can use the feedback loop and process to analyze the effectiveness of the communal nature of meaningful student interaction. The feedback process gives value not only to the recipient but also to the provider. By emulating and modeling teacher behavior, the provider takes on the role of teacher and receives a distinct learning opportunity, and hence gains greater insight into the course objectives while providing feedback. The article utilized a sound research approach to peer feedback and provided a great deal of results data to help judge the effectiveness of the feedback; however, it acknowledged that there are logistical problems with providing feedback and collecting information to assess its effectiveness. For example, comments included “My impressions are that it is very beneficial to learning. Peers are more often on the same level and may be able to explain things in a manner that makes more sense than the instructor.” Qualitative analysis of how responses to a teacher differ from responses to a student can provide valuable insight for the researcher. In this study, the researchers posed several research questions, which addressed postings in online courses, quality of interaction, and perceptions from both students and faculty on the value of giving peer feedback. The study applied Bloom's Taxonomy to ensure the quality of the data collection. It also used techniques such as multiple interviewers and a standardized interview protocol in order to reduce bias, and the researchers incorporated some of their own techniques to ensure high quality of the data. They concluded that quality peer feedback has many benefits, but it was not as important as instructor feedback. The students got to know each other better and shared opinions, but may have been concerned that the actual instructor was not providing feedback.

EDU800 Week 10 Annotation

Knobel, M., & Lankshear, C. (2014). Studying new literacies. Journal of Adolescent & Adult Literacy, 57(9), 1-5.

The article from Knobel & Lankshear explores the idea of “New Literacies,” which have emerged largely due to technology and the Internet. New literacies involve shared skills and knowledge for the current generation, who leverage ubiquitous technologies to learn, make meaning, and create knowledge in a dynamic and diverse environment. The article gives examples and studies of young students who are learning digitally via self-directed activities. It explores new literacies from the socio-cultural perspective and emphasizes how the new skills, knowledge, and tools play into social contexts. Tools and techniques include multiple media types such as video, images, podcasts, and hypertext/hypermedia. The practical application of digital literacy is enabled by technology, software, and new approaches, but competence in more abstract aspects of literacy, such as problem-solving, reasoning, critical thinking, and argument, plays an equally important role.

The new literacies discussed in the article build upon the old literacies and enhance the foundational knowledge that individuals need to survive and navigate the modern world. The new literacies involve shared digital know-how and skillsets extending “old” literacies. They unleash expression of diverse intelligences and natural talents, manifested as new languages, abstractions, and creations. They comprise digitally influenced knowledge and skillsets; the Internet and other new technologies require new skills and strategies to use them effectively. They also promote creativity and integrated contexts, and coincide with new pedagogies and diversity of thought. By understanding how today's students acquire, integrate, and synthesize knowledge utilizing such tools and techniques as micro-blogging, wikis, social networking, hypermedia, search engines, and gamification/game-playing, we can connect with today's learners by designing and developing pedagogies and systems that meet the needs of the new paradigm of learning. Besides the obvious emergence of digital literacy, we also see other new literacies acquired and possessed by learners that enable higher levels of criticality. However, simply translating the conventional literacies and rendering them digitally does not comprise new literacies.

Understanding new ways of learning and internalizing information to form new knowledge is essential for researchers. The technologies available to today's learners can extend collaboration from face-to-face activities to personal interaction with global scope. By identifying, analyzing, and studying new ways of creating and/or curating information, we as educators can advance our experience of and exposure to the new ways people learn, teach, and communicate, and contribute these ideas to the greater body of knowledge in learning science. New technologies which increase interactivity with the content and the ability to dynamically develop new knowledge can help researchers analyze emergent learning methodologies, as well as enable them to develop solutions to the challenges of learning in a world with exponential growth in content. The new literacies not only provide new frameworks for learning, but also provide practical tools and techniques for students to find the information they need, when they need it. The research in this article emphasized how young learners engage in both new and conventional literacies, utilizing technology, knowledge, and skills. The article also explains how teachers can reinforce the new literacies by providing cycles of feedback and mentoring and adapting to the dynamic digital classroom, which can lead to better assessment of learning and improvement of learning outcomes.

EDU800 Week 8 Supplemental Annotation

Means, B., Toyama, Y., Murphy, R., Bakia, M., & Jones, K. (2010). Evaluation of evidence-based practices in online learning: A meta-analysis and review of online learning studies. U.S. Department of Education, Office of Planning, Evaluation, and Policy Development, Policy and Program Studies Service.

This article, a study for the U.S. Department of Education, examined the state of K-12 online education by reviewing existing empirical studies of online and blended learning, comparing and contrasting them with face-to-face (FTF) instruction, measuring student outcomes, and utilizing rigorous research design. The authors, Means et al., looked at historical aspects and the evolution of eLearning. They found that the popularity of online learning stemmed from its flexibility, time-and-place advantages, cost-effectiveness, and ability to instruct larger groups of students efficiently. In the literature search, 50 independent effects suitable for meta-analysis were identified. They stated how online learning, as a subset of distance learning, utilizes newer technology in addition to traditional video- and TV-based education, which for the most part simply stood in for FTF. Since online education entails many web-based technologies, multimedia, collaboration tools, and other new techniques, it was sufficiently different from traditional distance learning. The research questions for the meta-analysis involved the effectiveness of online versus FTF instruction, whether supplementing FTF with online elements enhanced learning, the practices associated with effective online learning, and the conditions that influence the effectiveness of online learning. The authors did a comprehensive literature search and review on online learning and performed meta-analysis on the findings. The searches were limited to studies of online learning that utilized random-assignment or controlled quasi-experimental designs and that objectively measured student learning rather than, for example, teacher perceptions of learning or course quality.
Many of the studies they found examined the influence of media such as video on the learning experience and subsequent assessments of learning, with technology used for such things as asynchronous communication, synchronous technologies, and online testing. They referred to how online technologies can be used to expand and support learning communities (Bransford, Brown, and Cocking 1999; Riel and Polin 2004; Schwen and Hara 2004; Vrasidas and Glass 2004). They also found that learning was enhanced by online learning because of increased multimedia interactions, leading to better reflective analysis of the content. In addition, they concluded that the effectiveness of online learning may be different for K-12 students than for adult learners in undergraduate studies. They also considered such conditions as demographics, teacher qualifications, and accountability to government regulations when doing their analyses.

The study, in conducting a meta-analysis of online versus FTF instruction, gave some key insights into the state of research on online education, such as that there were not many published studies of online learning effectiveness for K-12 students. From the available research, the authors found that student performance online was slightly better than FTF and that success in learning outcomes was 20% higher for online students, but they acknowledged that the two types of education were considerably different in terms of the time students spent on tasks. Many of the studies did not try to normalize the study of online learning by drawing equivalents in pedagogical approaches, curricula, and time spent learning. They also found that comparing purely online with hybrid or blended modalities yielded similar learning outcomes. However, as with FTF, if we want to maximize the value of an online learning experience, there should be active learning components in the course. With the massive amount of research data that the authors were trying to aggregate, it appeared difficult for them to come to cohesive conclusions. The articles they examined varied widely, but the study does arrive at some consensus while raising more questions, such as whether online learning can replace FTF, which pedagogies can be transferred into online learning spaces, and to what degree courses should be balanced between asynchronous and synchronous activities. Online modalities enable a better way to transmit or broadcast information by making any computer a portal to the information provided by instructors. Replicability and the ability to deliver content efficiently are key advantages of online learning versus FTF.

This is a great big-picture study that can help students of learning science sift through and curate the studies of online versus FTF instruction and of how knowledge can be disseminated, acquired, created, and learned, through many empirical examinations of online scenarios. The study provides a conceptual framework for studying online learning. It gives us ways to build upon the mass of research, albeit mixed across various states of technology inclusion, so that we can anticipate future opportunities to design new learning environments. If we know the historical background, we can then decide how to implement such things as technology-mediated instruction and new types of synchronous methodologies and techniques, and how to enhance virtual environments to approach the experience and advantages of FTF learning.
EDU800 Week 8 Annotation

Hrastinski, S. (2009). A theory of online learning as online participation. Computers & Education, 52(1), 78–82.

The article by Hrastinski provides a theory of student participation in distance learning and CSCL (Computer-Supported Collaborative Learning) environments: how this participation actually drives learning in online education environments, and how the social aspects of participation have positive effects on achievement and learning by providing further learning opportunities outside of the virtual classroom. He also compares online learning to more traditional learning, and discusses how interaction and cooperation among students and teachers help improve learning in online settings. He provides a literature review on such things as constructivism, in which learners construct knowledge rather than simply receive knowledge objects transferred from teacher to learner. Participation activities in online courses also involve developing, establishing, and nurturing social relationships, dialogue and discourse, and the utilization of various tools, and they engage students in meaningful activities. However, online learners are usually physically isolated from other learners, the instructor, and the source of the content. He also provides data on a study by Morris, Finnegan, and Sz-Shyan that measured learning outcomes (i.e., perceived learning, grades on tests, quality of assignment completion) based upon such variables as the number of discussion posts and seconds spent viewing content pages and discussions. In addition, the author reflects on the types of interactions that learners have (i.e., learner to instructor, to content, and to other learners). He also refers to how Haythornthwaite and Kazmer found that support systems, such as family and colleagues, were important. Also, he ties the article to Wenger's definition of participation, involving a sense of and attachment to community, which becomes cyclical (Palloff and Pratt) because participation drives attachment to the community, and attachment to the community drives a higher likelihood of helping others.

The participation activities utilized in online learning contribute to student satisfaction and retention. When students take part directly, synchronously and asynchronously, in online learning activities, the participation duration is finite, so the student must then go off to individually integrate and synthesize when not online. This may appear to be a completely independent learning model, but students in online courses continue their learning outside the classroom through social interactions with other students and the instructor. Equipped with psychological tools (language, engaging activities) and physical tools (the Internet, hardware, software, the LMS), the student has opportunities to perform work such as reading and writing (doing), interact with others (talking), reflect on the content (thinking), make choices and judgements about the content and experience (feeling), and be part of a group socially (belonging). The social interactions generated among students in online courses through participation form interdependency and intimacy. This builds trust, shared values, and a sense of belonging.

This article is essential reading for researchers in learning science who want to understand the effects of various types of participation by students in online learning settings.  It gives some theoretical footing on the value of features such as discussion boards, interactive assignments, and online content with respect to constructing knowledge.  It also gives a rationale as to how the social aspects of online learning can provide reinforcement and strengthen the acquisition and retention of knowledge through interactions.  These interactions occur among students, instructors, and social support structures such as family members and work colleagues.  The article may also help researchers and teachers to develop, locate, and implement tools for online and distance learning.

EDU800 Critical Review #1: Digital Game-Based Learning: Impact of instructions and feedback on motivation and learning effectiveness. By Erhel and Jamet

Erhel, S., & Jamet, E. (2013). Digital game-based learning: Impact of instructions and feedback on motivation and learning effectiveness. Computers & Education, 67, 156-167. doi:10.1016/j.compedu.2013.02.019

  1. Identify the clarity with which this article states a specific problem to be explored.

The problems and challenges associated with the use of DGBL, or Digital Game-Based Learning, are clearly articulated in the article.  Erhel and Jamet first explored how several learning theories apply to the study of the effects of various types of instructions and feedback used in conjunction with DGBL.  They explained how the two experiments they conducted, which are outlined in the reading, would demonstrate whether DGBL with enhanced instructions leads to better cognitive results, whether the instructions emphasized the learning factors or the entertainment factors with regard to motivation.  They also examined, with the support of many relevant theorists, whether feedback in DGBL scenarios promotes better learning.  This type of study is important to furthering the use of game-based learning.  Since software and hardware technology is now available to build powerful simulations, this research will go a long way toward enhancing the systems in place today.

  1. Comment on the need for this study and its educational significance as it relates to this problem.

The authors effectively defined the problem in context: little research has been available that studies virtual learning environments and determines how they affect motivation, engagement, and deep learning, whether in the form of a competitive game or a simulation.  Since this is an emerging capability for teaching and learning, it is a relatively new approach; however, non-digital game-based learning has been studied, so there is a significant amount of research to draw from.  The authors built new knowledge of DGBL by performing these two experiments.  By applying learning theory to game development for educational purposes, the content can become more compelling and valuable.

  1. Comment on whether the problem is “researchable”? That is, can it be investigated through the collection and analysis of data?

The problems presented in this article are definitely researchable: determining how variations of DGBL can influence deep learning versus surface learning, whether specific or general instructions affect learning with DGBL, and how different question types and feedback can aid memorization and comprehension.  The authors demonstrated through their experiments that, by establishing hypotheses, screening and selecting subjects via pre-testing, and maintaining control variables, DGBL can be studied, and the data gathered can be used to draw conclusions about how instructions and feedback enhance learning in DGBL or simulations.  The study, while short, provided usable data which were analyzed in rudimentary fashion but formed a basis for future exploration of DGBL efficacy in learning, as compared to other digital multimedia-based learning.

Theoretical Perspective and Literature Review (about 3 pages)

  1. Critique the author’s conceptual framework.

The authors drew from many scholarly articles to establish and justify the need for further research on this relatively new type of learning.  Their conceptual framework involved first defining digital games and how they can be used for both education and entertainment.  They also contrasted learning in serious game environments (SGEs) and digital games with conventional media such as classroom learning, and explored how various scholars have reported that games can have a positive effect, or no effect, on learning and motivation.  Their approach to using and analyzing digital games and simulations for learning established a way to frame the study of their effects on cognition and learning, an area that was in need of such a framework.

  1. How effectively does the author tie the study to relevant theory and prior research? Are all cited references relevant to the problem under investigation?

In their literature review, the authors effectively and frequently introduced references to relevant learning theories from theorists and researchers: Liu on learning and motivation with new media, Deci on motivation and education from a self-determination perspective, Tobias and Fletcher on learning from computer games, health-games researchers such as Lieberman, Vogel's work on simulations and games, and Ames and Archer's work on achievement goals in the classroom, among many others.

  1. Does the literature review conclude with a brief summary of the literature and its implications for the problem investigated?

The literature review provided in this article, while drawing upon several other scholarly articles and theorists, only thoroughly explores and summarizes one of the two experimental hypotheses, namely how instructions may improve the learning effectiveness of digital games.  The other hypothesis, on the efficacy of feedback in gamified digital education, was not explored until the experiment's conclusion was discussed.  This omission of the theoretical background for the second experiment's hypothesis suggests that the authors should have focused on one question or the other in this article and performed the additional experiments in a separate study.

  1. Evaluate the clarity and appropriateness of the research questions or hypotheses.

The authors established a compelling case for comparing digital learning games to conventional media, but they did not create a specific experiment that compared DGBL to conventional learning.  They also intermingled the discussion of the experiments with the review of literature up front.  While interesting and useful, the hypothesis that DGBL can be better than conventional learning was not resolved with proof either way.  The authors did, however, point out that there are contradictory studies reporting both positive and neutral effects of DGBL.

The article appeared to assume that DGBL is superior to conventional learning, and only set out to test whether enhancing DGBL with instructions and feedback would lead to better learning than without them.  The authors did provide results for the first experiment that tied back to the original hypothesis, and extended it with the second experiment.  They stated the hypothesis for the second experiment in the discussion concluding the study.

Research Design and Analysis (about 3 pages)

  1. Critique the appropriateness and adequacy of the study’s design in relation to the research questions or hypotheses.

While the research and experiments were based on the use of DGBL, the authors may have been better off performing the first experiment more thoroughly, with more thoughtful ways to select the sample group, and then following up with a separate study exploring the feedback factor in DGBL.  The design of experiment 1, on the use of instructions in DGBL, involved three phases, which showed that the authors wanted to home in on the issue at hand.  All of the study participants were screened in the first phase, and those with too much prior knowledge, based on the results of a pre-test, were disqualified from participating.  The study measured avoidance versus approach using simulations of people with one of four different disease presentations.  The hypothesis of whether certain types of instructions (entertainment versus educational) aid cognition and learning of the subject matter was addressed by the first experiment.  The second experiment presented the hypothesis that KCR (knowledge of correct response) feedback in educational games can reduce redundant cognitive processes.  As mentioned earlier, there was no literature review of the background theories regarding feedback, although the authors did provide some references, such as Cameron and Dwyer's work on the effects of online gaming on cognition.  The article seemed to be testing what Cameron and Dwyer studied with regard to how different feedback types affect achievement of learning objectives.

  1. Critique the adequacy of the study’s sampling methods (e.g., choice of participants) and their implications for generalizability.

The article utilized a fairly sound way of selecting participants based upon demographic factors such as age, gender, and college attendance.  For example, the authors first established generalized categories based upon age (i.e., young adults 18-26) and length of time in their college programs, and developed filters to exclude medical students.  However, in the second experiment, the breakdown in terms of gender included many more female participants (16 male and 28 female) than in the first experiment.  This inconsistency suggests that the two experiments were not cohesively designed to work together: the second experiment tested the addition of KCR feedback to the first, but the authors did not maintain consistency in the sampling methods and choice of participants.

  1. Critique the adequacy of the study’s procedures and materials (e.g., interventions, interview protocols, data collection procedures).

The experiments utilized ASTRA, a multimedia learning environment.  This simulated learning environment presented an avatar as a stand-in for a real instructor and displayed the case studies of the disease presentations on a simulated television monitor.  It was an adequate representation of the situation, though more a facsimile or model of a real-world instructor presenting on a screen; it may have given the subjects a simulated association with an actual teacher and the interactions therein.  In addition, it provided learning and entertainment instructions for the student to review while viewing the simulation.  The combination of full-motion animation with text enabled a richer cognitive environment than a screen with text alone.  The methods involved pre-tests, a recall quiz, and questionnaires on knowledge gained after the simulation concluded.  The second experiment utilized mostly the same instrumentation and technology as the first, but interjected additional content in order to test whether KCR feedback promoted the learner's emotional investment.  By providing popup windows with immediate feedback on the student's responses, the second experiment tested whether better cognition, comprehension, and learning occurred as a result.

  1. Critique the appropriateness and quality (e.g., reliability, validity) of the measures used.

The measures applied to the mostly quantitative data were appropriate for this simple experiment.  For the first experiment, the authors provided data on the prior-knowledge pre-test, including means and standard deviations.  They provided similar statistical analysis of the data collected from the recall quiz, knowledge questionnaire, and motivation questionnaire, and measured the intrinsic motivation of the subjects.  However, the authors presented the second experiment's analysis differently: while they applied similar statistics, such as mean scores on paraphrase-type versus inference-type questions, the results were presented in tabular form rather than the narrative form used in the first experiment.  In addition, they utilized statistical measures such as ANOVA, standard deviations, and means for the motivation questionnaires and provided those results in tabular form, reflecting how performance goals compared with mastery goals and highlighting the differences between goal avoidance and goal approach.
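For readers unfamiliar with the statistics named above, the kind of group comparison the authors report (condition means, standard deviations, and a one-way ANOVA F ratio) can be sketched with entirely made-up scores; the numbers below are illustrative only and are not the study's actual data:

```python
from statistics import mean, stdev

# Hypothetical recall-quiz scores for two instruction conditions
# (illustrative numbers only; not data from Erhel & Jamet)
learning = [12, 14, 11, 15, 13, 14]
entertainment = [10, 11, 9, 12, 10, 11]

groups = [learning, entertainment]
grand = mean([x for g in groups for x in g])

# Between-group and within-group sums of squares
ss_between = sum(len(g) * (mean(g) - grand) ** 2 for g in groups)
ss_within = sum((x - mean(g)) ** 2 for g in groups for x in g)

df_between = len(groups) - 1
df_within = sum(len(g) for g in groups) - len(groups)

# One-way ANOVA F statistic: ratio of the mean squares
f_stat = (ss_between / df_between) / (ss_within / df_within)

print(f"M1={mean(learning):.2f} SD1={stdev(learning):.2f}")
print(f"M2={mean(entertainment):.2f} SD2={stdev(entertainment):.2f}")
print(f"F({df_between},{df_within}) = {f_stat:.2f}")
```

A large F indicates that the variation between condition means is large relative to the variation within conditions, which is the basis for the authors' claims that the instruction and feedback conditions differed.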

  1. Critique the author’s discussion of the methodological and/or conceptual limitations of the results.

In the general discussion, the authors summarized the results, outlined their use of learning science, and expressed how their hypotheses were confirmed by the data they acquired through experimentation.  They pointed out that the second experiment added value by building on the first.  They accomplished their goal of combining the first experiment's knowledge of how educational or entertainment instructions contribute to learning in DGBL with the KCR feedback of the second, but they may have adopted too complex a methodology and process to arrive at their results.  They subsequently acknowledged that the effects of the question types they utilized did not yield what they expected, and noted that future studies may be needed.

  1. How consistent and comprehensive are the author’s conclusions with the reported results?

The first experiment yielded the conclusion that learning instructions were more effective than entertainment instructions at encouraging better comprehension, cognition, and learning, which is what the authors were looking for in their original hypothesis.  This was not, however, comprehensively explored.  With regard to the second experiment, the authors interpreted the results as confirming that feedback in DGBL promotes deeper learning and cognitive processing.  They also concluded that DGBL overall enhanced memorization, and that the study aligned with some of the other studies they cited on topics such as cognitive load theory.

  1. How well did the author relate the results to the study’s theoretical base?

The authors maintained their commitment to DGBL while finding ways to enhance its effectiveness: adding value to the experience by providing meta-instructional content, in the form of instructions to the student, before the training module commenced.  They also found that when feedback was provided during gameplay, the students had a more intense experience in terms of their cognitive results, memorization, and learning overall.  The authors related these new findings about DGBL back to their opening review of the literature on motivation and game-based learning theory.

  1. In your view, what is the significance of the study, and what are its primary implications for theory, future research, and practice?

The significance of this research study is that it advances the knowledge base in learning science regarding the gamification of educational modules.  As the authors admitted, further studies are needed.  However, the analysis of the effects of instructions, and of their subsequent enhancement with various types of feedback, will enable game and simulation developers and designers to implement changes to their software based upon the practical results of this study.