The use of a Classroom Response System (CRS) was evaluated in two sections, A and B, of a large-lecture microbiology course. In Section B, the instructor used the CRS at the beginning of the class period, posing a question on content from the previous class; students could earn extra credit by answering correctly. Section A also began with an extra-credit CRS question, but CRS questions were additionally integrated into the lecture throughout the class period. We compared the two sections to determine whether augmenting lectures with this technology, rather than using the CRS only as a quizzing tool, increased student learning, confidence, and attendance, and improved the instructor's ability to respond to students' misconceptions. Student performance was compared using shared examination questions, categorized by how the content had been presented in class: all questions came from the instructors' common lecture content, some covered without CRS use and some for which Instructor A used both lecture and CRS questions. Although Section A students scored significantly better on both types of examination questions, there was no demonstrable difference in learning attributable to CRS question participation. However, student survey data showed that students in Section A expressed higher confidence in their learning and knowledge and reported interacting more with other students than did students in Section B. In addition, Instructor A recorded more modifications to lecture content and more student interaction in the course than did Instructor B.
Colleges and universities interact with multiple constituents or quality-monitoring groups that require programme-level assessment of student learning. These required assessments may be used to demonstrate accountability, programme improvement, or a combination of both. Such demands often force instructional faculty to choose between the competing interests of research in their discipline and research on student learning for assessment purposes. This article offers one approach for engineering departments that makes student learning research more meaningful for instructional faculty while delegating to the central administration those tasks the department does not have the time or resources to do effectively. An engineering programme is better able to ensure the ownership, development, and integrity of, and research into, its own curriculum if a centralized university improvement system presents unit-level quality management research to external market and accountability groups.
We have demonstrated how program-level student learning research can be designed to satisfy institutional expectations along with specialized and regional accreditation criteria without duplication of effort. A centralized university quality management system conserves faculty effort by organizing program-level learning research in patterns that satisfy multiple forms of evaluation criteria, such as continuous-improvement documentation, peer review of research planning quality, monitoring of planning currency, faculty participation in assessment, and sharing of learning assessment information among university community members and external constituents.
A new and innovative way to evaluate runway safety initiatives for airports is through interactive, real-time simulation. The National Aeronautics and Space Administration (NASA) operates an integrated suite of simulators that allows pilots and tower controllers to simultaneously try out ideas in the safety of virtual reality. In February 2003, the FAA conducted a demonstration at the NASA facilities of a concept to reduce runway crossings and enhance efficiency at Dallas/Fort Worth International Airport (DFW). DFW currently experiences about 1,700 runway crossings per day, which contribute to arrival and departure delays and to the potential for runway incursions. The proposed concept added new perimeter taxiways on the east and west sides of the airport. Using NASA's unique simulation capabilities, DFW controllers and commercial pilots provided expert feedback on the safety and operational implications by directly experiencing the proposed changes. Overall, the data collected from the participants and the simulators indicated that the concept, if implemented, would improve operations at DFW. Improvements were observed in many areas, including departure rates, taxi duration, runway crossings, and controller-pilot communications.
• Identified capabilities for FAM operations from human-in-the-loop (HITL) simulations
• Matched the identified capabilities to planned mid-term NextGen technologies
• ERAM, ARMS, TMU upgrades, NVS, and Data Comm identified as FAM-related technologies
• Matched comparisons indicate that limited FAM operations are possible with currently planned NextGen technologies
• Functions outlined in ERAM and ARMS are most critical in allowing more flexible airspace reconfigurations than is possible today
This chapter provides a glimpse of student affairs assessment at Colorado State University, including a specific example of assessment, tips for implementing assessment at your institution, and barriers encountered when implementing the process.