This research compared the views of a sample of Scottish comprehensive-school pupils and a sample of students training to become secondary-school teachers in two Scottish Colleges of Education on two subjects: The Characteristics of a Good Teacher and The Purpose of School. Essays on the two topics were collected from a sample of pupils in three comprehensive schools in central Scotland. The essays were unitized into statements, and two category-systems were developed to code them. Statements on both topics were also obtained from a small sample of student-teachers to ensure that the universe of statements derived from the essays was truly exhaustive. The statements made most frequently by pupils were included in a two-part questionnaire; care was taken to ensure that the views of all the sub-groups of the pupil sample were properly represented. The questionnaire was administered to a sample of pupils and student-teachers. A method of completion was devised by the researcher to facilitate the selection of a small group of statements to be ranked from an initially large number of statements; this involved progressive stages of elimination, collapsing four lists of statements into two, and finally one. Six alternative forms of the questionnaire were constructed to avoid bias arising from the statements' order of presentation. The four lists in both sections of the questionnaire were also balanced by seeding statements across the lists according to their estimated appeal to respondents, and by distributing statements referring to particular areas (e.g. the teacher's discipline) equally. The results revealed major disagreement between students' and pupils' views on the purpose of school, but closer agreement on the characteristics of a good teacher.
Social network analysis (SNA) is a methodology for capturing, storing, visualizing and analyzing relational data; that is, data concerning relations between specified entities (e.g., individuals, organizations, nations) and patterns of connection within populations of such entities. As such it stands in contrast to most standard social scientific approaches, which typically focus upon the attributes of such entities (although attributes can be included in SNA). Interest in social relations, their properties and effects, stretches back to the origins of social science and indeed even further back to the earliest social philosophies, and the origins of SNA itself can be traced back at least as far as the 1930s. The perspective has enjoyed a huge boost in recent years, however, not least as advances in technology have increased the computing power routinely available to social scientists. There are a number of good software packages today which offer users a comprehensive range of powerful analytic routines. This review covers: (1) data collection, (2) network measures, (3) roles and positions, (4) ego networks, (5) statistical methods, (6) mixed methods, (7) social capital, (8) small worlds, (9) large, complex and multimode data, (10) visualization, and (11) cohesive subgroups/community detection.
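As a minimal illustration of the kind of analysis such packages support (the edge list, names and attribute values below are hypothetical, and the open-source networkx package is used only as an example), relational data can be stored as a graph, annotated with entity attributes, and summarised with standard network measures:

```python
import networkx as nx

# Relational data: ties between specified entities (here, individuals).
edges = [("Anna", "Ben"), ("Ben", "Chloe"), ("Chloe", "Anna"), ("Chloe", "Dev")]
G = nx.Graph(edges)

# Entity attributes can be stored alongside the relational data.
nx.set_node_attributes(
    G, {"Anna": "org1", "Ben": "org1", "Chloe": "org2", "Dev": "org2"}, "affiliation"
)

# A few standard whole-network and node-level measures.
print("density:", nx.density(G))
print("degree centrality:", nx.degree_centrality(G))
print("connected components:", nx.number_connected_components(G))
```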
Criminal organisations tend to be structured as networks in order to optimise flows of exchanged resources. The consolidation of these structures strengthens a criminal organisation’s capability of becoming increasingly efficient over time. Recently, researchers in the field of covert networks have begun to analyse the dynamics of these networks. Thus far, the models and methods used to analyse the temporal dynamics of covert networks have come with a number of limitations. Our approach to analysing temporal dynamics attempts to address some of these limitations. In this article, we extend the use of dynamic line-graphs to bipartite networks in order to incorporate time directly into the network, and we suggest an alternative way to visualise the evolution of actors’ participation in successive covert actions and events. The article contributes to research on dynamic networks by proposing a new approach for representing temporal dynamics in covert networks. After illustrating our method, we present examples of its use on real-world data for visualising network evolution over time.
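As a purely illustrative sketch of the general idea, and not the authors' exact construction, actor participation in timed events can be encoded as a bipartite actor-event graph, with each participation tie then treated as a node in a derived, line-graph-like structure whose directed links follow the same actor from one event to the next (the small example network and all names below are hypothetical):

```python
import networkx as nx

# Bipartite actor-event network; event nodes carry a time attribute.
B = nx.Graph()
actors = ["a1", "a2", "a3"]
events = [("e1", 1), ("e2", 2), ("e3", 3)]  # (event id, time step)
B.add_nodes_from(actors, bipartite=0)
B.add_nodes_from([(e, {"bipartite": 1, "time": t}) for e, t in events])
B.add_edges_from([("a1", "e1"), ("a2", "e1"), ("a2", "e2"), ("a3", "e2"), ("a2", "e3")])

# Each participation tie (actor, event) becomes a node; two ties are linked
# when the same actor takes part in two events, ordered by event time.
L = nx.DiGraph()
for actor in actors:
    ties = sorted(B.edges(actor), key=lambda tie: B.nodes[tie[1]]["time"])
    L.add_nodes_from(ties)
    for earlier, later in zip(ties, ties[1:]):
        L.add_edge(earlier, later)

print(list(L.edges()))  # e.g. (('a2', 'e1'), ('a2', 'e2')), (('a2', 'e2'), ('a2', 'e3'))
```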
Initial work on mapping CFD codes onto parallel systems focused upon software which employed structured meshes. Increasingly, many large-scale CFD codes are based upon unstructured meshes. One of the key problems when implementing such large-scale unstructured problems on a distributed-memory machine is how to partition the underlying computational domain efficiently. It is important that all processors are kept busy for as large a proportion of the time as possible and that the amount, level and frequency of communication is kept to a minimum.
Proposed techniques for solving the mapping problem have separated the solution into two distinct phases. The first phase partitions the computational domain into cohesive sub-regions. The second phase embeds these sub-regions onto the processors. However, it has been shown that performing these two operations in isolation can lead to poor mappings and significantly suboptimal communication times.
In this thesis we develop a technique which simultaneously takes account of the processor topology whilst identifying the cohesive sub-regions. Our approach is based on an unstructured mesh decomposition method that was originally developed by Sadayappan et al. [SER90] for a hypercube. This technique forms a basis for a method which enables decomposition onto an arbitrary number of processors on a specified processor network topology. Whilst partitioning the mesh, the optimisation method takes into account the processor topology by minimising the total interprocessor communication.
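A hedged sketch of the kind of objective such a topology-aware decomposition minimises is given below (the function and the toy example are illustrative assumptions, not the thesis implementation): every mesh edge cut by the partition is charged the hop distance between the two processors its elements are mapped to, so the cost reflects the processor network as well as the partition itself.

```python
import networkx as nx

def topology_aware_comm_cost(mesh, processor_graph, mapping):
    """Total communication cost of 'mapping' (mesh element -> processor id):
    each cut mesh edge is weighted by the hop distance between the processors
    holding its two elements, so the processor topology enters the objective."""
    hops = dict(nx.all_pairs_shortest_path_length(processor_graph))
    cost = 0
    for u, v in mesh.edges():
        pu, pv = mapping[u], mapping[v]
        if pu != pv:              # edge crosses a partition boundary
            cost += hops[pu][pv]  # charge the inter-processor distance
    return cost

# Toy example: a four-element path "mesh" mapped onto a three-processor ring.
mesh = nx.path_graph(4)      # elements 0-1-2-3
procs = nx.cycle_graph(3)    # processors 0, 1, 2 arranged in a ring
mapping = {0: 0, 1: 0, 2: 1, 3: 2}
print(topology_aware_comm_cost(mesh, procs, mapping))  # 2: edges (1,2) and (2,3) each cost one hop
```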
The problem with this technique is that it is not suitable for very large meshes, since the calculations often require prodigious amounts of processing power.
The problem can be overcome by creating clusters of the original elements and using these clusters to create a reduced network which is homomorphic to the original mesh. The technique can then be applied to this image network with comparative ease. The clusters are created using an efficient graph bisection method. The coarseness of the reduced mesh inevitably leads to a degradation of the solution. However, it is possible to refine the resultant partition to recapture some of the richness of the original mesh and hence achieve reasonable partitions.
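The coarsen-partition-refine idea can be sketched as follows (the pairing heuristic, the use of Kernighan-Lin bisection and all names are illustrative assumptions, not the thesis method): adjacent elements are contracted into clusters, the smaller image graph is bisected, the result is projected back onto the original elements, and a refinement pass on the full mesh recovers some of the detail lost by coarsening.

```python
import networkx as nx
from networkx.algorithms.community import kernighan_lin_bisection

def coarsen(mesh):
    """Greedily contract pairs of adjacent elements into clusters (a crude matching)."""
    cluster_of, clusters = {}, []
    for u, v in mesh.edges():
        if u not in cluster_of and v not in cluster_of:
            cluster_of[u] = cluster_of[v] = len(clusters)
            clusters.append({u, v})
    for u in mesh.nodes():  # any elements left unmatched become singleton clusters
        if u not in cluster_of:
            cluster_of[u] = len(clusters)
            clusters.append({u})
    reduced = nx.Graph()
    reduced.add_nodes_from(range(len(clusters)))
    reduced.add_edges_from(
        (cluster_of[u], cluster_of[v]) for u, v in mesh.edges() if cluster_of[u] != cluster_of[v]
    )
    return reduced, clusters

mesh = nx.grid_2d_graph(8, 8)            # toy stand-in for an unstructured mesh
reduced, clusters = coarsen(mesh)        # the smaller "image" network
A, B = kernighan_lin_bisection(reduced)  # bisect the coarse graph cheaply
# Project the coarse partition back onto the original elements ...
initial = [set().union(*(clusters[c] for c in A)), set().union(*(clusters[c] for c in B))]
# ... then refine on the full mesh to recapture detail lost by coarsening.
refined = kernighan_lin_bisection(mesh, partition=initial)
print(len(refined[0]), len(refined[1]))
```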
One of the issues to be addressed is the level of granularity that gives the best balance between computational efficiency and optimality of the solution. Some progress has been made towards answering this important question.
In this thesis, we show how the above technique can be effectively utilised in large-scale computations. Results include tests of the technique on large-scale meshes for complex flow domains.