This paper reports on the implications of different preferred learning styles for students' performance in the introductory programming sequence, and on work in progress on how to accommodate these different styles. Students were given a learning styles preference test, and their preferred learning styles were then compared to their performance on the exam and the practical programming part of the introductory programming module. There were significant differences in performance between groups of students. This result could lead to two possible conclusions. One is that some students' learning styles are better suited to learning programming than others. An alternative explanation is that our current methods of teaching advantage students with certain learning style preferences. We are currently testing this second assumption by providing students with a wider range of learning materials. We will then see whether student performance improves, using our current results as a baseline for comparison.
In this paper we present a phenomenographic analysis of computer science instructors' perceptions of student success. The factors instructors believe influence student success fell into five categories, relating to: 1) the subject being taught, 2) intrinsic characteristics of the student, 3) student background, 4) student attitudes and behaviour, and 5) instructor influence on student development. These categories provide insights not only into how instructors perceive students, but also into how they perceive their own roles in the learning process. We found significant overlap between these qualitative results, obtained through analysis of semi-structured interviews, and the vast body of quantitative research on factors predicting student success. Studying faculty rather than students provides an alternative way to examine these questions, and using qualitative methods may provide a richer understanding of student success factors.
This paper reports on an experiment in which first-year programming students were given explicit encouragement to use Object (Instance) diagrams when tracing code in multiple-choice questions. We conjectured that scaffolding this technique would help students understand the code better, and that they would then continue to draw their own diagrams in similar situations. This turned out not to be the case. Although students who draw diagrams generally do better in questions that test their understanding of code behaviour and object referencing, our intervention does not appear to have helped, and students who were exposed to it were no more likely to go on to use the technique themselves.
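To make the setting concrete, here is a minimal sketch in the style of such questions, constructed for illustration rather than taken from the study's materials. Drawing an object diagram for this fragment makes it obvious that two of the three variables refer to the same object, which is exactly the kind of insight the tracing questions test:

```java
// Hypothetical tracing question in the style described above.
// Question: what does this program print?
class Counter {
    private int value = 0;
    public void increment() { value++; }
    public int getValue() { return value; }
}

public class TraceExample {
    public static void main(String[] args) {
        Counter a = new Counter();
        Counter b = a;          // b is an alias for the same object as a
        Counter c = new Counter();
        a.increment();
        b.increment();
        c.increment();
        // An object diagram shows only two Counter objects exist:
        // a and b share one, c refers to the other.
        System.out.println(a.getValue() + " " + c.getValue()); // prints "2 1"
    }
}
```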
A study by an ITiCSE 2001 working group ("the McCracken Group") established that many students do not know how to program at the conclusion of their introductory courses. A popular explanation for this incapacity is that the students lack the ability to problem-solve: that is, to take a problem description, decompose it into sub-problems, implement them, then assemble the pieces into a complete solution. An alternative explanation is that many students have a fragile grasp of both basic programming principles and the ability to systematically carry out routine programming tasks, such as tracing (or "desk checking") code. This ITiCSE 2004 working group studied the alternative explanation by testing students from seven countries in two ways. First, students were tested on their ability to predict the outcome of executing a short piece of code. Second, students were tested on their ability, when given the desired function of a short piece of near-complete code, to select the correct completion of the code from a small set of possibilities. Many students were weak at these tasks, especially the latter, suggesting that such students have a fragile grasp of skills that are a prerequisite for problem-solving.
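As a sketch of the second task type, a constructed illustration rather than an item from the working group's actual instrument, students might be shown a near-complete method together with its intended function and asked to choose the line that completes it correctly:

```java
// Hypothetical "select the completion" item in the style described above.
// Desired function: return the largest value in a non-empty array.
public class CompletionExample {
    public static int max(int[] xs) {
        int result = xs[0];
        for (int i = 1; i < xs.length; i++) {
            // Candidate completions a student must choose between:
            //   (a) if (xs[i] > result) result = xs[i];   // correct
            //   (b) if (xs[i] > result) result = i;       // stores the index
            //   (c) if (xs[i] < result) result = xs[i];   // finds the minimum
            if (xs[i] > result) result = xs[i];
        }
        return result;
    }

    public static void main(String[] args) {
        System.out.println(max(new int[] {3, 9, 4})); // prints 9
    }
}
```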
This paper reports on the efforts of an ITiCSE 2007 working group with the aim of producing a publicly available, searchable, taggable, Web 2.0-style repository of short debugging videos. The repository may be accessed at http://debug.csi.muohio.edu/. The videos are aimed at novice Java programmers who may need help debugging when none is available (e.g., in the middle of the night before the homework is due), but the repository could also be used by instructors of introductory programming. Here we discuss our motivation in creating this repository and detail the process we followed and the products we produced.
Research has shown that most learning in the workplace takes place outside of formal training and, given the swiftly changing nature of the field, computer science graduates, more than most workers, need to be able to learn computing topics outside of organized classes.
A qualitative analysis of debugging strategies of novice Java programmers is presented. The study involved 21 CS2 students from seven universities in the U.S. and U.K. Subjects "warmed up" by coding a solution to a typical introductory problem. This was followed by an exercise debugging a syntactically correct version with logic errors. Many novices found and fixed bugs using strategies such as tracing, commenting out code, diagnostic print statements and methodical testing. Some competently used online resources and debuggers. Students also used pattern matching to detect errors in code that "just didn't look right". However, some used few strategies, applied them ineffectively, or engaged in other unproductive behaviors. This led to poor performance, frustration for some, and occasionally the introduction of new bugs. Pedagogical implications and suggestions for future research are discussed.
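As a constructed illustration of two of these strategies (not code from the study itself), a novice localizing a logic error with diagnostic print statements and a methodical test with a known answer might proceed as follows:

```java
// Hypothetical debugging session using two strategies named above:
// diagnostic print statements and methodical testing.
public class DebugExample {
    // Buggy: intended to sum an array, but the loop starts at index 1.
    public static int sum(int[] xs) {
        int total = 0;
        for (int i = 1; i < xs.length; i++) {  // bug: should start at 0
            // Diagnostic print to trace the loop's behaviour:
            System.out.println("i=" + i + " xs[i]=" + xs[i] + " total=" + total);
            total += xs[i];
        }
        return total;
    }

    public static void main(String[] args) {
        // Methodical test with a known answer: expected 6, but prints 5,
        // and the trace shows xs[0] is never visited.
        System.out.println(sum(new int[] {1, 2, 3}));
    }
}
```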
We report on a study of novice programmers' object-oriented class designs. These designs were analysed to discover what faults they displayed. The two most common faults were non-referenced classes (an inability to integrate them into the solution) and problems with attributes and class cohesion. The paper ends with some implications for teaching that may be indicated by the empirical results.
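A minimal hypothetical sketch of the most common fault, assuming a typical novice design rather than one drawn from the study: the novice identifies a class but never integrates it into the solution, duplicating its attributes elsewhere and weakening cohesion in the process:

```java
// Hypothetical illustration of a "non-referenced class" design fault.
// Address is identified as a concept but never integrated: Customer
// duplicates the address data as loose fields instead of holding an
// Address, so Address is never referenced anywhere in the design.
class Address {
    String street;
    String city;
}

class Customer {
    String name;
    String street;  // duplicates Address.street, hurting cohesion
    String city;    // duplicates Address.city
}

public class DesignExample {
    public static void main(String[] args) {
        Customer c = new Customer();
        c.name = "Ada";
        c.street = "1 Main St";
        c.city = "Springfield";
        // Address is defined above but never used: a non-referenced class.
        System.out.println(c.name + ", " + c.street + ", " + c.city);
    }
}
```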
This paper opens the classroom door to provide insight into factors that shape tertiary computer science teachers' experience of (and engagement with) student learning success and failure. This topic is explored through phenomenographic analysis of teacher narratives dealing with frustration and success in facilitating learning for their students.
Three themes related to learning are explored, each highlighting a different aspect of the learning situation: students, environment, and responsibility.
Using these themes as a focus reveals great diversity in how teachers experience student learning difficulties and in their approaches to resolving them.
The results provide computer science academics with a framework within which to discuss and contrast their values and assumptions and understand their implications for teaching practice.