A recent overhaul of the New Zealand digital technologies curriculum has changed the way students are taught to program before university. The connection between students' experiences with the updated curriculum and their perspectives on programming at university is pedagogically significant to educators. Semi-structured interviews were conducted with eight students enrolled in introductory programming courses at the University of Auckland, and a thematic analysis of their responses revealed a surprisingly diverse range of experiences and perspectives. The insights gained into the connection between learning to program at secondary and tertiary levels, and into the impact of the curriculum changes across schools, are informative to educators in both sectors.
To have real business impact within preclinical drug development, Enterprise ELNs (Electronic Laboratory Notebooks) must provide a secure, scalable and searchable data management backbone across all disciplines focused on the development of both small and large molecules, in both compliant and non-compliant environments. We discuss how, through more effective capture and reuse of data and knowledge, Enterprise ELNs improve inter- and intra-departmental collaboration and support quality initiatives such as ‘Quality by Design’.
PeerWise (PW) is an online tool that allows students in a course to collaborate and learn by creating, sharing, answering and discussing multiple-choice questions (MCQs). Previous studies of PW at the introductory level have shown that students in computing courses like it, and have reported statistically significant learning gains in courses taught by the investigators at different institutions. However, we recently conducted three quasi-experimental studies of PW use in upper-division computing courses in the U.S. and failed to replicate these positive results. In this paper we consider various factors that may impact the effectiveness of PW, including instructor engagement, usage requirements and subject-matter issues. We also report several positive results from other STEM courses at the same institution, discuss methodological issues pertaining to our recent studies, and propose approaches for further investigation.
Computing educators face significant challenges in providing timely support to students, especially in large class settings. Large language models (LLMs) have emerged recently and show great promise for providing on-demand help at a large scale, but there are concerns that students may over-rely on the outputs produced by these models. In this paper, we introduce CodeHelp, a novel LLM-powered tool designed with guardrails to provide on-demand assistance to programming students without directly revealing solutions. We detail the design of the tool, which incorporates a number of useful features for instructors, and elaborate on the pipeline of prompting strategies we use to ensure generated outputs are suitable for students. To evaluate CodeHelp, we deployed it in a first-year computer and data science course with 52 students and collected student interactions over a 12-week period. We examine students' usage patterns and perceptions of the tool, and we report reflections from the course instructor and a series of recommendations for classroom use. Our findings suggest that CodeHelp is well-received by students, who especially value its availability and help with resolving errors, and that for instructors it is easy to deploy and complements, rather than replaces, the support that they provide to students.
Large language models (LLMs) are revolutionizing the field of computing education with their powerful code-generating capabilities. Traditional pedagogical practices have focused on code writing tasks, but there is now a shift in importance towards reading, comprehending and evaluating LLM-generated code. Alongside this shift, an important new skill is emerging -- the ability to solve programming tasks by constructing good prompts for code-generating models. In this work we introduce a new type of programming exercise to hone this nascent skill: 'Prompt Problems'. Prompt Problems are designed to help students learn how to write effective prompts for AI code generators. A student solves a Prompt Problem by crafting a natural language prompt which, when provided as input to an LLM, outputs code that successfully solves a specified programming task. We also present a new web-based tool called Promptly which hosts a repository of Prompt Problems and supports the automated evaluation of prompt-generated code. We deploy Promptly in one CS1 and one CS2 course and describe our experiences, which include student perceptions of this new type of activity and their interactions with the tool. We find that students are enthusiastic about Prompt Problems, and appreciate how the problems engage their computational thinking skills and expose them to new programming constructs. We discuss ideas for the future development of new variations of Prompt Problems, and the need to carefully study their integration into classroom practice.
Technology integration in educational settings has led to the development of novel sensor-based tools that enable students to measure and interact with their environment. Although reports on the use of such tools are often positive, evaluations are typically conducted under controlled conditions and over short timeframes. There is a need for longitudinal data collected in realistic classroom settings. However, sustained and authentic classroom use requires technology platforms to be seen by teachers as both easy to use and of value. We describe the development of a sensor-based platform to support science teaching, which followed a 14-month user-centered design process. We share insights from this design and development approach, and report findings from a 6-month large-scale evaluation involving 35 schools and 1245 students. We share lessons learnt, including that technology integration is not an educational goal per se and that technology should be a transparent tool that enables students to achieve their learning goals.
Online learning environments eliminate geographical barriers and enable new forms of collaboration between students at large scale. Self-presentation within such environments affects how students interact with learning content and with each other. We explore how anonymity/identifiability in user profile design impacts student interactions in a large multicultural classroom across two geographical locations. After triangulating 150,000 online interactions with questionnaires and focus groups, we provide three major findings. First, being identifiable had a significant impact on how students accessed and rated content created by their peers. Second, when identifiable, cultural differences became more prominent, leading some students to avoid content created by classmates of certain nationalities. Finally, when students interacted with their real identities, there were significant and negative gender effects which were absent when students were anonymous. These findings contribute to our understanding of social dynamics within multicultural learning environments, and raise practical implications for tool design.
CG (Computer Graphics) is a popular field of CS (Computer Science), but many students find it difficult because it requires a broad set of skills spanning mathematics, programming, geometric reasoning, and creativity. Over the past few years, researchers have investigated ways to harness the power of GenAI (Generative Artificial Intelligence) to improve teaching. In CS, much of this research has focused on introductory computing. A recent study evaluating the performance of an LLM (Large Language Model), GPT-4 (text-only), on CG questions indicated poor performance and a reliance on detailed descriptions of image content, which often required considerable insight from the user to return reasonable results. So far, no studies have investigated the ability of LMMs (Large Multimodal Models), or multimodal LLMs, to solve CG questions, or how this ability can be used to improve teaching. In this study, we construct two datasets of CG questions requiring varying degrees of visual perception and geometric reasoning skills, and evaluate the current state-of-the-art LMM, GPT-4o, on both datasets. We find that although GPT-4o exhibits great potential for independently solving questions that contain visual information, major limitations remain in the accuracy and quality of the generated results. Despite these limitations, we propose several novel approaches for CG educators to incorporate GenAI into CG teaching. We hope that our guidelines will further encourage learning and engagement in CG classrooms.