Written feedback plays a key role in the acquisition of academic writing skills. Ideally, this feedback should include feed up, feed back and feed forward. However, written feedback alone is not enough to improve writing skills; students often struggle to interpret the feedback they receive and to enhance their writing accordingly. Several studies have suggested that dialogue about written feedback is essential to promote the development of these skills. Yet, evidence of the effectiveness of face-to-face dialogue remains inconclusive. To bring this evidence into focus, we conducted a literature review of face-to-face dialogue intervention studies. The emphasis was on the key elements of the interventions, their outcomes in terms of student perceptions and other indicators, and the methodological characteristics of the studies. Subsequently, we analysed each selected intervention for the presence of feed-up, feed-back and feed-forward information. Most interventions used all three feedback elements – notably assessment criteria, student feedback, and revision, respectively – and combined lecturer–student as well as student–student dialogue. Students generally perceived the interventions as beneficial; they appreciated criteria and exemplars because these clarified what was expected of them and how they would be assessed. With regard to student outcomes, most interventions positively affected performance. The literature review suggests that feedback dialogue shows promise as an intervention to improve academic writing skills, but it also calls for future research into why and under which specific conditions face-to-face dialogue is effective.
Seventy students participated in an experiment measuring the effects of either providing explanations or listening during small-group discussions on recall of related subject matter studied after the discussion. They watched a video of a small group discussing a problem. In the first experimental condition, the video was stopped at various points, enabling the participants to respond verbally to the discussion. In the second condition, participants listened to the same discussion without contributing. In the control condition, they listened to a discussion unrelated to the subject matter subsequently studied. After the discussion, all participants studied a text and answered questions testing their recall of information from this text. No immediate differences in recall were found. One month later, however, participants who had actively engaged in explaining remembered more from the text. The conclusion that actively providing explanations during a discussion positively affects long-term memory therefore appears justified.
The aim of this study was to investigate to what extent ratings of tutor performance remain stable in the long term. At many schools, teaching performance is assessed and these evaluations are consulted as part of the decision-making process for promotion, tenure, and salary. Since this information may have summative value, it is crucial that the reliability of the data be assessed. A previous study had shown that a single evaluation of a tutor is reliable when the responses of six students are used (interrater reliability). The present study focused on the stability of tutor evaluations over repeated occasions of evaluation. A generalizability study was conducted to estimate the number of occasions required to demonstrate stability. The study took place during three academic years (1992-93, 1993-94, and 1994-95) at the problem-based medical school of the University of Limburg (now Maastricht University). A total of 291 ratings were analyzed (97 tutors rated during three sequential tutoring occasions). Two types of scores were used: an aggregate score calculated from ratings of 13 items, and an overall judgment. The results indicate that when the scores are used to interpret the precision of individual scores, two evaluation occasions are needed for the overall judgment and four occasions for the aggregate score. If the tutor scores are consulted only to determine whether performance is above or below a cutoff score, a reliable decision can be made after a single occasion of evaluation. These results demonstrate that data collected over an extended period can be reliably used as part of the decision-making process for promotion, salary, and tenure.
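The logic behind estimating the required number of occasions can be illustrated with the standard generalizability argument, which parallels the Spearman-Brown prophecy formula: averaging ratings over more occasions shrinks the occasion-specific error variance relative to the between-tutor variance. The sketch below is purely illustrative; the variance components are hypothetical placeholders, not values reported in the study.

```python
# Illustrative sketch: how the reliability (generalizability) of a mean
# tutor rating grows with the number of evaluation occasions.

def g_coefficient(var_tutor: float, var_error: float, n_occasions: int) -> float:
    """Generalizability coefficient for a mean score over n occasions.

    Averaging over more occasions divides the occasion-specific error
    variance by n, so the coefficient rises toward 1 as n grows.
    """
    return var_tutor / (var_tutor + var_error / n_occasions)

# Hypothetical variance components (tutor = universe-score variance,
# error = occasion-specific error variance); not the study's estimates.
VAR_TUTOR, VAR_ERROR = 0.04, 0.12

for n in range(1, 7):
    print(f"{n} occasion(s): G = {g_coefficient(VAR_TUTOR, VAR_ERROR, n):.2f}")
```

With these placeholder values, the coefficient climbs steeply over the first few occasions and then flattens, which is why a small number of occasions can already suffice for a reliable decision.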
Student evaluation committees play a crucial role in internal quality assurance processes as representatives of the student body. However, the students on these committees sometimes experience difficulty in providing constructive and structured feedback to faculty in an environment characterised by a strong power differential between students and staff. This study aimed to evaluate a leadership and quality assurance training programme implemented for students involved in internal quality assurance. Furthermore, we explored how students give shape to their internal quality assurance role. Students from three health sciences programmes participated in a mixed methods study to evaluate the training and reflect on their internal quality assurance role. Overall, the students were very enthusiastic about the training. Qualitative data analysis indicated that in their internal quality assurance role, students: (1) harnessed the power of the entire student population; (2) tried to create focus and take charge; (3) searched for common ground with staff; and (4) found ways to deal with the power differential. Providing training for students with internal quality assurance roles is a valuable endeavour. Future research should investigate further ways to help students contribute to internal quality assurance processes in higher education.
PURPOSE: To test whether tutor expertise affects student performance under conditions of curricular materials that have low or high levels of structure and that are poorly or well matched to students' levels of prior knowledge. METHOD: The study was conducted in 1994-95 at the medical school of the University of Limburg. The data set used for analysis included 135 tutorial groups (with 10 to 12 students per group), 119 tutors (each running only one group per unit), and 15 units in four curriculum years. The analysis was conducted at the level of tutorial groups, since a tutor's level of expertise might differ across units. Tutors were asked to judge their levels of expertise related to the cases discussed, on the basis of which a distinction was made between expert and non-expert tutors. The degree of structure of the curricular materials and students' levels of prior knowledge were rated by the students. Using analyses of variance, students' scores on end-of-unit tests (each with about 150 true-false items) were compared for groups led by expert and non-expert tutors under conditions of low and high levels of structure and low and high levels of prior knowledge. RESULTS: No difference was found between the test scores of groups led by expert and non-expert tutors. The interaction effects between expertise and structure and between expertise and prior knowledge were also not statistically significant. CONCLUSION: The results suggest that expert tutors do not compensate for a lack of curricular structure or for students' lack of prior knowledge. This finding is inconsistent with that of a recent study in which expert tutors did compensate for lack of structure and lack of prior knowledge. The discrepancy may be explained by the much smaller range over which the structuredness of the curriculum and students' levels of prior knowledge varied in the present study compared with the previous one. An implication might be that faculty should put their efforts into designing structured curricula that are well matched to students' levels of prior knowledge rather than into selecting hyper-expert tutors.
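To make the analytic design concrete, the following is a minimal sketch of the kind of factorial analysis of variance described above, run on synthetic group-level data. All variable names, factor codings, and values are illustrative assumptions, not the study's actual data or results.

```python
# Sketch of a group-level ANOVA with expertise main effect and its
# interactions with structure and prior knowledge (synthetic data).
import numpy as np
import pandas as pd
from statsmodels.formula.api import ols
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(0)
n_groups = 135  # number of tutorial groups, as in the study

# Hypothetical group-level data; factor labels and score scale are invented.
df = pd.DataFrame({
    "expertise": rng.choice(["expert", "non_expert"], n_groups),
    "structure": rng.choice(["low", "high"], n_groups),
    "prior_knowledge": rng.choice(["low", "high"], n_groups),
    "test_score": rng.normal(loc=70, scale=8, size=n_groups),
})

# Main effect of tutor expertise plus its interactions with structure
# and prior knowledge, mirroring the comparisons in the abstract.
model = ols(
    "test_score ~ C(expertise) * C(structure)"
    " + C(expertise) * C(prior_knowledge)",
    data=df,
).fit()
print(anova_lm(model, typ=2))
```

A non-significant interaction term in such a table is what the abstract's conclusion rests on: expert tutors did not selectively boost performance where structure or prior knowledge was low.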
Developing competencies for interprofessional collaboration, including understanding other professionals' roles on interprofessional teams, is an essential component of medical education. This study explored resident physicians' perceptions of the clinical roles and responsibilities of physician assistants (PAs) and nurse practitioners (NPs) in the clinical learning environment. Using a constructivist grounded theory approach, we conducted semistructured interviews with 15 residents in one academic setting. Transcripts were analyzed using an iterative approach to inductive coding. Participants typically perceived PAs' and NPs' roles as being "like a resident," less commonly as independent clinicians, and rarely as collaborators. Barriers to understanding PA and NP roles and to perceiving them as collaborators included the lack of preparatory instruction about PAs and NPs, the hierarchical structure of medical education, and inadequate role modeling of interprofessional collaboration. This study suggests that barriers in the clinical learning environment and the structure of medical education itself may impede residents' learning about PAs and NPs and how to collaborate with them.
Many educational institutions use instructional approaches such as problem-based learning (PBL), in which collaborative learning plays an important role. There is little research, however, that describes which factors are responsible for the success of collaboration. The purpose of this study was twofold: to explore the cognitive interactions taking place between students in tutorial groups, and to examine whether an existing coding system is usable for analyzing these interactions. The focus was on elaborations and co-constructions, which are indicators of individual and collaborative knowledge construction in a group. Videotapes of three PBL sessions, in which tutorial groups of the Maastricht Medical School were discussing a problem, were transcribed. The results showed that cognitive interactions could be identified in the tutorial groups and that it was possible to analyze them. Co-constructions seemed easiest to identify in the transcripts.
The aim of this study was twofold. The first question concerned the way students make use of the learning issues they generate (as strict guidelines or as global guidelines) and whether this changes across years of training. The second question concerned the relationship between the way students make use of learning issues, the time spent on individual study, and achievement on two tests of knowledge. A questionnaire was developed containing seven items that measured to what extent students study strictly according to the student-generated learning issues and six items that measured to what extent students study beyond those learning issues. The questionnaire also contained one question in which students estimated the mean time spent on individual study. Achievement was measured by two types of knowledge tests: a block test assessing course content and a progress test assessing long-term functional knowledge. The study was conducted at the Medical School of Maastricht University, the Netherlands, among medical students from its problem-based curriculum (response rate 69%). During their first year, students studied strictly according to the content of the learning issues, whereas in later years they studied more according to their own learning needs and interests. In addition, students who tended to study beyond the generated learning issues spent more time on individual study and achieved better on both tests. Students in a problem-based curriculum thus seem to become better self-directed learners over the years of training.