The increasing use of social media to share knowledge in medical education has raised concerns about the professionalism of online medical learners and physicians. However, there is little research on how professionals behave within open online discussions. In 2013, the Academic Life in Emergency Medicine website (ALiEM.com) launched a series of moderated online case discussions, which provided an opportunity to explore the relationship between anonymity and professionalism. Comments from 12 case discussions conducted over a one-year period were analyzed using modified scales of anonymity and professionalism derived from Kilner and Hoadley. Descriptive statistics and Spearman correlations were calculated for the professionalism score, anonymity score, and level of participation. No correlation was found between professionalism and anonymity scores (rho = -0.004, p = 0.97). However, the number of comments (rho = 0.35, p < 0.01) and the number of cases contributed to (rho = 0.26, p < 0.05) correlated positively with clear identification. Our results differ from the previous literature, the majority of which found anonymity to be associated with unprofessional behaviour. We believe this may be a result of the fostering of a professional environment through the use of a website with a positive reputation, the modelling of respectful behaviour by the moderators, the norms of the broader online community, and the pre-specified objectives for each discussion.
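The Spearman rank correlations reported above (e.g. rho = 0.35 between comment count and clear identification) can be illustrated with a minimal, standard-library-only sketch. This is not the study's actual analysis code (the authors presumably used a statistics package), and the classic closed-form formula below assumes no tied ranks:

```python
def spearman_rho(x, y):
    """Spearman rank correlation via the classic formula
    rho = 1 - 6 * sum(d_i^2) / (n * (n^2 - 1)).
    Assumes x and y contain no tied values (each value gets a unique rank)."""
    n = len(x)
    # Map each value to its 1-based rank within its own list.
    rank = lambda vals: {v: i + 1 for i, v in enumerate(sorted(vals))}
    rx, ry = rank(x), rank(y)
    # Sum of squared rank differences across paired observations.
    d2 = sum((rx[a] - ry[b]) ** 2 for a, b in zip(x, y))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))
```

A perfectly monotone increasing pairing yields rho = 1.0, a perfectly reversed one yields rho = -1.0, mirroring how values near zero (such as the -0.004 above) indicate no monotone association.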
<p>To many physicians and professionals, social media seems a risky business. However, recent literature has shown that engaging your stakeholders online can enhance your scholarly brand. In this article, we discuss the opportunities that social media presents to modern scholars. Using case studies, we highlight two success stories showing how scientists and scholars might use social media to enhance their careers. We also outline five key steps you can follow to build and manage your scholarly presence online.</p>
<p>Purpose</p> <p>To determine the responsibilities of journal social media editors (SMEs) and describe their goals, as well as the barriers and facilitators they encounter in the role.</p> <p>Method</p> <p>The authors identified SMEs using an informal listserv and snowball sampling. Participants were interviewed (June–July 2016) about their position, including responsibilities, goals, barriers and facilitators, and attitudes and perceptions about the role. Themes were identified through a thematic analysis and consensus-building approach. Descriptive data, including audience metrics and 2016 impact factors, were collected.</p> <p>Results</p> <p>Thirty SMEs were invited; 24 were interviewed (19 by phone and 5 via e-mail). SMEs generally had a track record in the social media community before being invited to become an SME; many had preexisting roles at their journal. Responsibilities varied considerably; some SMEs also served as decision editors. Many SMEs personally managed journal accounts, and many had support from non-physician journal staff. Consistently, SMEs focused on improving reader engagement by disseminating new journal publications on social media. The authors identified goals, resources, and sustainability as primary themes of SMEs’ perspectives on their positions. Editorial leadership support was identified as a key facilitator of their position at the journal. Challenges to sustainability included a lack of tangible resources and uncertainty surrounding, or lack of, academic credit for social media activities.</p> <p>Conclusions</p> <p>Many of the participating SMEs pioneered the use of social media as a platform for knowledge dissemination at their journals. While editorial boards are qualitatively supportive, SMEs are challenged by limited resources and a lack of academic credit for social media work.</p>
Objective Critics have raised concerns regarding the validity of maintenance of certification (MOC) programs. We sought to examine the quality of the randomized controlled trials (RCTs) selected for the lifelong learning and self-assessment (LLS) component of the American Board of Emergency Medicine (ABEM) MOC program. Methods We systematically reviewed the ABEM LLS reading lists from 2004 to 2017 to identify RCTs with dichotomous outcomes and superiority designs. A fragility index (FI) was calculated using Fisher's exact test for all statistically significant dichotomous outcomes. Bivariate correlation was performed to examine associations between the FI and RCT study characteristics. Each included study was evaluated with the Cochrane Collaboration risk-of-bias (ROB) tool. Results Thirteen superiority RCTs with dichotomous outcomes were included in the 2004–2017 LLS reading lists. Ten had a statistically significant outcome, and the majority were robust and at low ROB. The median trial size was 511 patients (interquartile range [IQR] = 251–1,517), and the median FI was 10 (IQR = 7–18); i.e., if 10 patients in the treatment arm had not had events, the results would not have been statistically significant. Conclusions The majority of RCTs included in the LLS are robust and at low ROB.
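The fragility index calculation described above can be sketched in standard-library Python. This is a simplified illustration, not the authors' code: it hand-rolls a two-sided Fisher's exact test and, matching the abstract's interpretation ("if 10 patients in the treatment arm had not had events"), flips events to non-events in the treatment arm until significance is lost. It assumes the treatment arm holds the larger event count, as in a trial where the treatment increases a beneficial outcome:

```python
from math import comb

def fisher_exact_p(a, b, c, d):
    """Two-sided Fisher's exact p-value for the 2x2 table [[a, b], [c, d]],
    summing all hypergeometric probabilities <= that of the observed table."""
    n = a + b + c + d
    row1, col1 = a + b, a + c
    def prob(x):
        return comb(col1, x) * comb(n - col1, row1 - x) / comb(n, row1)
    p_obs = prob(a)
    lo = max(0, row1 - (n - col1))
    hi = min(row1, col1)
    # Small tolerance guards against float round-off when comparing probabilities.
    return sum(prob(x) for x in range(lo, hi + 1) if prob(x) <= p_obs * (1 + 1e-9))

def fragility_index(events_t, n_t, events_c, n_c, alpha=0.05):
    """Count how many treatment-arm events must become non-events before
    the two-sided Fisher's exact p-value reaches alpha.
    Assumes events_t > events_c (flipping events shrinks the difference)."""
    flips, e = 0, events_t
    while fisher_exact_p(e, n_t - e, events_c, n_c - events_c) < alpha:
        if e == 0:
            break  # safety stop; shouldn't occur under the stated assumption
        e -= 1
        flips += 1
    return flips
```

An already non-significant comparison returns 0, while a strongly significant one (e.g. 30/100 vs. 10/100 events) returns a positive count; small counts indicate fragile results, which is why the median FI of 10 above supports the "robust" conclusion.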
<p>Background:</p> <p>In medical education, there is a growing global demand for Open Educational Resources (OERs). However, OER creators are challenged by a lack of uniform standards. In this guideline, the authors curated the literature on how to produce OERs for medical education, with practical guidance on the Do’s, Don’ts, and Don’t Knows of OER creation, in order to improve the impact and quality of OERs in medical education.</p> <p>Methods:</p> <p>We conducted a rapid literature review by searching the OVID MEDLINE, EMBASE, and Cochrane Central databases using the keywords “open educational resources” and “OER”. The search was supplemented by hand searching the identified articles’ references. We organized included articles by theme and extracted relevant content. Lastly, we developed recommendations via an iterative process of peer review and discussion: evidence-based best practices were designated Do’s and Don’ts, while gaps were designated Don’t Knows. We used a consensus process to quantify evidentiary strength.</p> <p>Results:</p> <p>The authors performed full-text analysis of 81 eligible studies. A total of 15 Do’s, Don’ts, and Don’t Knows were compiled and presented alongside the relevant evidence about OERs.</p> <p>Discussion:</p> <p>OERs can add value for medical educators and their learners, both as tools for expanding teaching opportunities and for promoting medical education scholarship. This summary should guide OER creators in producing high-quality resources and pursuing future research where best practices are lacking.</p>
Boyer's framework of scholarship was published before the significant growth in digital technology. As more digital products are produced by medical educators, determining their scholarly value is of increasing importance. This scoping systematic review developed a taxonomy of digital products and determined their fit within Boyer's framework of scholarship. We conducted a broad literature search for descriptions of digital products in the medical literature in July 2013 using Medline, EMBASE, ERIC, PsycINFO, and Google Scholar. A framework analysis categorized each product using Boyer's model of scholarship, while a thematic analysis defined a taxonomy of digital products. A total of 7,422 abstracts were found, and 524 met inclusion criteria. Digital products mapped primarily to the scholarship of teaching (85.4%), followed by integration (7.6%), application (5.5%), and discovery (1.5%). A taxonomy of 19 categories was defined; web-based or computer-assisted learning (41%) was described most frequently. We found that digital products are well described in the medical literature and fit into Boyer's framework of scholarship, and we propose a taxonomy of digital products that parallels traditional forms of the scholarship of teaching and learning. This research should inform the development of tools to examine the impact and quality of digital products.