The new Domestic Workers Dataset is the largest single set of surveys (n = 11,759) of domestic workers to date. Our analysis of this dataset reveals key features of the lives and work of this “hard-to-find” population in India—a country estimated to have the largest number of people living in forms of contemporary slavery (11 million). The data allow us to identify child labour, indicators of forced labour, and patterns of exploitation—including labour paid below the minimum wage—using bivariate analysis, factor analysis, and spatial analysis. The dataset also advances our understanding of how to measure labour exploitation and modern slavery by demonstrating the value of “found data” and of participatory and citizen science approaches.
This paper investigates how adjustments to deep learning architectures impact model performance in image classification. Small-scale experiments generate useful initial insights, although the trends observed do not always hold across the entire dataset. Filtering operations in the image processing pipeline prove crucial, with filtering applied before pre-processing yielding better results. The choice and ordering of layers, as well as filter placement, significantly impact model performance. This study provides valuable insights into optimizing deep learning models, with potential avenues for future research including collaborative platforms.
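To make the filter-placement question concrete, here is a minimal sketch showing that applying a smoothing filter before a downsampling pre-processing step does not produce the same network input as applying it afterwards. The Gaussian filter and the 4x downsample are assumptions standing in for whichever operations the study actually used.

```python
# Sketch only: filter placement in the pipeline changes what the model sees.
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(0)
image = rng.random((128, 128))          # stand-in for a real input image

def downsample(img, factor=4):
    """Crude pre-processing step: keep every `factor`-th pixel."""
    return img[::factor, ::factor]

# Ordering 1: filter the full-resolution image, then pre-process.
filtered_first = downsample(gaussian_filter(image, sigma=2.0))

# Ordering 2: pre-process first, then filter the reduced image.
filtered_last = gaussian_filter(downsample(image), sigma=2.0)

# The two orderings diverge, so the choice is a genuine design decision.
print(np.abs(filtered_first - filtered_last).mean())
```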
The COVID-19 pandemic led to unparalleled pressure on healthcare services. Improved healthcare planning in relation to diseases affecting the respiratory system has consequently become a key concern. We investigated the value of integrating sales of non-prescription medications commonly bought for managing respiratory symptoms to improve forecasting of weekly registered deaths from respiratory disease at local levels across England, using over 2 billion transactions logged by a UK high street retailer from March 2016 to March 2020. We report results from Model Class Reliance, a novel AI explainability tool for assessing variable importance, applied to the PADRUS model. PADRUS is a machine learning model optimised to predict registered deaths from respiratory disease in 314 local authority areas across England by integrating shopping sales data focused on purchases of non-prescription medications. We found strong evidence that models incorporating sales data significantly outperform models that use only variables traditionally associated with respiratory disease (e.g. sociodemographics and weather data). Accuracy gains are highest (increases in R² of between 0.09 and 0.11) in periods of maximum risk to the general public. The results demonstrate the potential of sales data for monitoring population health at a high level of geographic granularity.
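The PADRUS code itself is not reproduced here, but the core comparison—does adding sales features improve predictive accuracy, and how much does the model rely on them?—can be sketched with synthetic data. The feature names are hypothetical, and scikit-learn permutation importance is used as a simple single-model stand-in for Model Class Reliance, which reasons over a whole class of well-performing models.

```python
# Illustrative sketch, not the PADRUS implementation.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000

# Hypothetical predictors: traditional variables plus non-prescription sales.
sociodemographics = rng.normal(size=(n, 3))      # e.g. age profile, deprivation
weather = rng.normal(size=(n, 2))                # e.g. temperature, humidity
cough_medicine_sales = rng.gamma(2.0, 1.0, n)    # hypothetical sales signal

# Synthetic outcome: weekly respiratory deaths partly driven by the sales signal.
deaths = (sociodemographics[:, 0] + 0.5 * weather[:, 0]
          + 1.5 * cough_medicine_sales + rng.normal(scale=1.0, size=n))

X_base = np.column_stack([sociodemographics, weather])
X_full = np.column_stack([X_base, cough_medicine_sales])

def fit_and_score(X, y):
    """Fit a forest and return the model, held-out data, and test R^2."""
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
    return model, X_te, y_te, r2_score(y_te, model.predict(X_te))

_, _, _, r2_base = fit_and_score(X_base, deaths)
model, X_te, y_te, r2_full = fit_and_score(X_full, deaths)
print(f"R2 without sales: {r2_base:.2f}, with sales: {r2_full:.2f}")

# Permutation importance as a rough proxy for reliance on the sales feature.
imp = permutation_importance(model, X_te, y_te, n_repeats=10, random_state=0)
print("importance of sales feature:", imp.importances_mean[-1])
```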
In this work we investigate the effectiveness of different types of visibility models for use within location-based services. This article outlines the methodology and results of our experiments, which were designed to understand the accuracy and effects of model choices for mobile visibility querying. Harnessing a novel mobile media consumption and authoring application called Zapp, we extensively examine the accuracy of various digital surface representations used by a line-of-sight visibility algorithm, statistically assessing randomly sampled viewing sites across the 1 km² study area in relation to points of interest (POI) across the University of Nottingham campus. Testing was carried out on three different surface models derived from 0.5 m LiDAR data by visiting physical sites on each surface model; 14 random point-of-interest masks were viewed from between 10 and 16 different locations, totalling 190 data points. Each site was ground-truthed by determining whether a given POI could be seen by the user and could also be identified by the mobile device. Our experiments in a semi-urban area show that the choice of surface model has important implications for mobile applications that utilize visibility in geospatial query operations.
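For readers unfamiliar with line-of-sight querying over a gridded digital surface model (DSM), the sketch below shows the general technique: sample the sight line between the observer and a point of interest and check whether the surface rises above it anywhere along the way. This is an illustration only, not the Zapp implementation; the grid, observer height, and interpolation scheme are assumptions.

```python
# Minimal line-of-sight check over a raster DSM (illustrative sketch).
import numpy as np

def is_visible(dsm, observer_rc, target_rc, observer_height=1.6, samples=200):
    """Return True if the target cell is visible from the observer cell.

    dsm         : 2D array of surface elevations (e.g. derived from 0.5 m LiDAR).
    observer_rc : (row, col) of the viewing location.
    target_rc   : (row, col) of the point of interest.
    """
    r0, c0 = observer_rc
    r1, c1 = target_rc
    z0 = dsm[r0, c0] + observer_height           # eye level above the surface
    z1 = dsm[r1, c1]

    # Sample the sight line at evenly spaced fractions between the two cells.
    t = np.linspace(0.0, 1.0, samples)[1:-1]
    rows = np.round(r0 + t * (r1 - r0)).astype(int)
    cols = np.round(c0 + t * (c1 - c0)).astype(int)

    sight_line_z = z0 + t * (z1 - z0)            # elevation of the ray itself
    surface_z = dsm[rows, cols]                  # surface elevation beneath it
    return bool(np.all(sight_line_z > surface_z))

# Tiny worked example: a single "building" blocks the view across flat ground.
dsm = np.zeros((100, 100))
dsm[40:60, 40:60] = 20.0
print(is_visible(dsm, (10, 10), (90, 90)))   # False: blocked by the building
print(is_visible(dsm, (10, 90), (30, 90)))   # True: open ground
```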
From September 10 to 12, 2007, over 100 attendees convened in Manchester, England, travelling from all over Europe as well as the far climes of North America, Asia and Australia. Unlike many visitors to Manchester they weren't here to witness the city's much-heralded football team, but had gathered instead at the University of Manchester for the 18th International Conference on Hypertext and Hypermedia (Hypertext 2007). Here they would discuss recent innovations in hypertext, whose most famous form exists as the World Wide Web, and assess the challenges and opportunities in the latest groundbreaking research. Traditionally the success of the Hypertext conference series has been attributed to its immense diversity, and this year was no different, with papers divided into five varied tracks: Hypertext and the Person; Hypertext and Society; Practical Hypertext; Hypertext Culture and Communication; and Hypertext Models and Theory. The conference was a vibrant affair that featured 16 full papers and 7 short papers (with a 29% overall acceptance rate), posters, demos, keynotes, panels, Birds-of-a-Feather (BOFs) and social events. What characterized this year's conference most, however, was an underlying sense of reintegration, a rejoining of disparate trends in hypertext toward common goals, and with it a great deal of unity and camaraderie.
The World Wide Web is only feasible as a practical proposition because of the existence of hypermedia search engines. These search engines face a monumental challenge: they are routinely confronted with searching behaviour best characterised as unsophisticated and impatient. One popular explanation for poor querying technique is a lack of computer literacy. Individuals who work closely with Information Technology are frequently exposed to retrieval engines, giving them the opportunity to develop successful searching strategies. In the following paper, we examine this assumption: is there really a correlation between computer literacy and searching skill?
ZigZag is a unique hyperstructural paradigm designed by the hypertext pioneer Ted Nelson. It has piqued considerable interest in the hypertext community in recent years because of its aim of revolutionizing electronic access to information and knowledge bases. In ZigZag, information is stored in cells that are arranged into lists organized along unlimited numbers of intersecting sets of associations called dimensions. To this infrastructure a mechanism of transclusion is added, allowing the data stored in cells to span, and hence be utilized in, different contexts. Proponents of ZigZag claim that it is a flexible and universal structure for information representation, and yet the system has not been widely adopted, and full implementations are rarer still. In this paper we address the question of whether there are intrinsic theoretical reasons why this is the case.
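To make the cell-and-dimension idea more concrete, here is a toy sketch of a ZigZag-style structure in which each cell carries per-dimension links and content objects can be shared between cells to suggest transclusion. The names (Cell, connect, the d.* dimension labels) are illustrative and are not Nelson's design or API.

```python
# Toy ZigZag-like structure: cells linked along named dimensions (sketch only).

class Cell:
    def __init__(self, content):
        self.content = content      # shared content objects loosely model transclusion
        self.links = {}             # dimension name -> {"next": Cell, "prev": Cell}

    def connect(self, other, dimension):
        """Place `other` after this cell along the given dimension."""
        self.links.setdefault(dimension, {})["next"] = other
        other.links.setdefault(dimension, {})["prev"] = self

    def walk(self, dimension):
        """Yield cells along one dimension, starting from this cell."""
        cell = self
        while cell is not None:
            yield cell
            cell = cell.links.get(dimension, {}).get("next")

# A cell can sit on several intersecting dimensions at once.
alice = Cell("Alice")
bob = Cell("Bob")
accounts = Cell("Accounts")

alice.connect(bob, "d.name")             # a list of people along one dimension
alice.connect(accounts, "d.department")  # the same cell linked along another

print([c.content for c in alice.walk("d.name")])        # ['Alice', 'Bob']
print([c.content for c in alice.walk("d.department")])  # ['Alice', 'Accounts']

# Transclusion sketch: two cells sharing one underlying content object,
# so an edit made in one context is visible from the other.
shared = {"text": "quarterly report"}
original, reuse = Cell(shared), Cell(shared)
shared["text"] = "quarterly report (revised)"
print(reuse.content["text"])
```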
Following the recent publication of an article on the Paramedic Pathfinder in the Emergency Medicine Journal, James Goulding argues that rather than highlighting a step forward for the paramedic profession, it indicates that more rigorous research is needed before a change to current methods can be recommended.