Probabilistic networks, used as an adjunct or alternative to the logical models employed in artificial intelligence (AI) and decision support systems (DSS), offer a compact way to represent a distribution over a set of random variables. However, specifying a given network may require conditional probabilities that are simply unavailable. This paper presents a means for analyzing incompletely specified networks, and some general rules are derived by applying the method to simple networks. The use of the technique in MIS settings is illustrated.
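To make the compactness point concrete, here is a minimal sketch assuming a hypothetical three-node chain network with made-up probabilities; the unavailable conditional probability is carried as an interval rather than a point value, in the spirit of analyzing an incompletely specified network (this is an illustration, not the paper's actual method).

```python
# Minimal sketch (hypothetical network and numbers): a three-node chain A -> B -> C.
# The joint distribution factors as P(A,B,C) = P(A) * P(B|A) * P(C|B), so only
# 1 + 2 + 2 = 5 parameters are needed instead of the 7 of a full joint table.

P_A = {True: 0.3, False: 0.7}                        # P(A)
P_B_given_A = {True: 0.9, False: 0.2}                # P(B=True | A)
# Suppose P(C=True | B=True) is unavailable; bound it by an interval instead.
P_C_given_B = {True: (0.6, 0.8), False: (0.1, 0.1)}  # (lower, upper) bounds

def marginal_C_true(bound):
    """P(C=True) using the lower (bound=0) or upper (bound=1) interval endpoint."""
    total = 0.0
    for a, pa in P_A.items():
        for b_true in (True, False):
            pb = P_B_given_A[a] if b_true else 1.0 - P_B_given_A[a]
            pc = P_C_given_B[b_true][bound]
            total += pa * pb * pc
    return total

print("P(C=True) lies in [%.3f, %.3f]" % (marginal_C_true(0), marginal_C_true(1)))
```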
MEASURES USED TO PROTECT SUBJECTS in publicly distributed microdata files often have a significant negative impact on key analytic uses of the data. For example, it may be important to analyze subpopulations within a data file, such as racial minorities, yet these subjects may present the greatest disclosure risk because their records tend to stand out or be unique. Files or records that are linkable create another type of disclosure risk: common elements between two files can be used to link files with sensitive data to externally available files that disclose identity. Examples of disclosure limitation methods used to address these types of issues include blanking out data, coarsening response categories, or withholding data altogether. However, the very detail that creates the greatest risk also provides insight into the differences that are of greatest interest to analysts. Restricted-use agreements that provide unaltered versions of the data may not be available, or only selectively so. The public-use version of the data is very important because it is likely to be the only one to which most researchers, policy analysts, teaching faculty, and students will ever have access. Hence, it is the version from which much of the utility of the data is extracted, and it often effectively becomes the historical record of the data collection. This underscores how important it is that the disclosure review committee strike a good balance between protection and utility. In this paper we describe our disclosure review committee's (DRC) analysis and resulting data protection plans for two national studies and one administrative data system. Three distinct disclosure limitation methods were employed, taking key uses of the data into consideration, to protect respondents while still providing statistically accurate and highly useful public-use data. The techniques include data swapping, microaggregation, and suppression of detailed geographic data. We describe the characteristics of the data sets that led to the selection of these methods, provide measures of the statistical impact, and give details of the implementations so that others may also use them. We briefly discuss the composition of our DRC, highlighting what we believe to be the important disciplines and experience represented by the group.
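As one illustration of the techniques named above, the following sketch shows univariate microaggregation on hypothetical income values with an assumed group size k; it is not the committee's actual procedure, only the general idea of replacing values with small-group means so that no released value traces to fewer than k respondents.

```python
# Minimal sketch of univariate microaggregation (hypothetical data and k):
# records are sorted on a sensitive numeric attribute, grouped into blocks of
# at least k, and each value is replaced by its block mean.

def microaggregate(values, k=3):
    order = sorted(range(len(values)), key=lambda i: values[i])
    out = [0.0] * len(values)
    for start in range(0, len(order), k):
        block = order[start:start + k]
        if len(block) < k and start > 0:   # fold a short tail into the previous block
            block = order[start - k:]
        mean = sum(values[i] for i in block) / len(block)
        for i in block:
            out[i] = mean
    return out

incomes = [21000, 250000, 34000, 36000, 41000, 980000, 52000, 47000]
print(microaggregate(incomes, k=3))
```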
This paper describes recent attempts by the Coast Guard R&D Center to analyze the dynamics of typical Coast Guard buoys moored with synthetic line in shallow water, using an existing numerical model. Large-scale model wavetank tests have shown that the weight and drag of the synthetic line are negligible when compared to tension forces, so that catenary effects may be ignored. This allows the use of a single linear segment in the modelling of the mooring line, thus eliminating numerical convergence problems commonly experienced in time-domain simulation of shallow water dynamics. A unique system for measuring buoy position in wavetank tests is discussed. The procedure for abstracting buoy hydrodynamic coefficients from wavetank data is outlined, along with the current capabilities of the numerical model to estimate mooring line tensions and buoy attitudes in various wave conditions.
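The single-linear-segment idealization can be sketched as a taut elastic element; the parameters below (unstretched length, axial stiffness EA, and positions) are purely illustrative and are not values from the wavetank tests or the numerical model.

```python
# Minimal sketch of the single linear-segment idealization (hypothetical parameters):
# with line weight and drag neglected, the synthetic mooring is treated as a straight
# elastic element, and tension follows from the buoy-to-anchor distance and stiffness EA.

import math

def line_tension(buoy_pos, anchor_pos, unstretched_length, EA):
    """Tension (N) in a taut linear segment; zero when the line is slack."""
    dx, dz = buoy_pos[0] - anchor_pos[0], buoy_pos[1] - anchor_pos[1]
    stretched = math.hypot(dx, dz)
    strain = (stretched - unstretched_length) / unstretched_length
    return max(0.0, EA * strain)

# Example: 20 m of line (EA = 5e5 N) in 15 m of water, buoy offset 14 m horizontally.
print(line_tension((14.0, 15.0), (0.0, 0.0), 20.0, 5.0e5))
```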
The authors explore path analysis as a technique for reasoning with uncertainty. Path analysis is a statistical technique originally developed for the purpose of causal modeling. The authors indicate how path analysis may, under the proper interpretation, be used to model probabilistic relations among variables. It is shown how path analysis may be useful for belief propagation in expert systems and other areas.
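A minimal sketch of that interpretation, using synthetic data and hypothetical arc structure: after standardization, each endogenous variable is regressed on its parents, and the resulting standardized coefficients serve as the path coefficients on the arcs.

```python
# Minimal sketch (synthetic data, hypothetical model x -> z, x -> y <- z): in path
# analysis the arc weights are standardized regression coefficients, obtained by
# regressing each endogenous variable on its parents rather than by eliciting
# conditional probabilities.

import numpy as np

rng = np.random.default_rng(0)
n = 10_000
x = rng.standard_normal(n)                             # exogenous
z = 0.7 * x + rng.standard_normal(n) * 0.5             # x -> z
y = 0.4 * x + 0.5 * z + rng.standard_normal(n) * 0.6   # x -> y <- z

def standardize(v):
    return (v - v.mean()) / v.std()

x, z, y = map(standardize, (x, z, y))

# Path coefficient on the arc x -> z (simple standardized regression).
p_zx = np.linalg.lstsq(np.column_stack([x]), z, rcond=None)[0]

# Path coefficients on the arcs x -> y and z -> y (regression on both parents).
p_yx, p_yz = np.linalg.lstsq(np.column_stack([x, z]), y, rcond=None)[0]

print(f"x->z: {p_zx[0]:.2f}   x->y: {p_yx:.2f}   z->y: {p_yz:.2f}")
```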
Home-Delivered Meals provision is a volunteer-staffed activity for which little strategic planning is performed. We present and evaluate a Genetic Algorithm to solve the Home-Delivered Meals Location-Routing Problem. This planning model addresses facility location, allocation of demand to facilities, and design of delivery routes, while balancing efficiency and effectiveness considerations. Computational results on benchmark Location-Routing Problem instances for the single-objective version of the problem indicate improvements over some current methods. A case study on a larger data set shows how the method can produce trade-off curves that are very useful for decision making.
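The toy sketch below conveys the flavour of such a Genetic Algorithm on an invented instance; the chromosome encoding, operators, decoder, and all data are illustrative assumptions, not the algorithm evaluated in the paper.

```python
# Minimal GA sketch for a toy location-routing instance: a chromosome assigns each
# customer to a candidate facility, routes are decoded by nearest-neighbour within
# each facility, and fitness sums opening costs and route lengths.

import math, random

random.seed(1)
facilities = [(0.0, 0.0), (10.0, 10.0)]            # candidate sites (x, y)
open_cost = [30.0, 30.0]
customers = [(1, 2), (2, 8), (9, 9), (8, 3), (5, 5), (3, 7)]

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def route_length(depot, stops):
    """Nearest-neighbour tour from the depot through the assigned customers and back."""
    length, here, todo = 0.0, depot, list(stops)
    while todo:
        nxt = min(todo, key=lambda c: dist(here, c))
        length += dist(here, nxt)
        here = nxt
        todo.remove(nxt)
    return length + dist(here, depot)

def fitness(chrom):
    total = 0.0
    for f, site in enumerate(facilities):
        assigned = [customers[i] for i, g in enumerate(chrom) if g == f]
        if assigned:
            total += open_cost[f] + route_length(site, assigned)
    return total

def evolve(pop_size=30, generations=200, mutation_rate=0.1):
    pop = [[random.randrange(len(facilities)) for _ in customers] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)
        parents = pop[: pop_size // 2]                         # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, len(customers))
            child = a[:cut] + b[cut:]                          # one-point crossover
            if random.random() < mutation_rate:                # random-reset mutation
                child[random.randrange(len(customers))] = random.randrange(len(facilities))
            children.append(child)
        pop = parents + children
    return min(pop, key=fitness)

best = evolve()
print(best, round(fitness(best), 2))
```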
Formal mathematical programming algorithms have been developed for many scheduling problems of interest to the maritime community, but computational complexity often restricts their use to disappointingly small problems. An interactive system that uses a rule base and advice from the scheduler to formulate constraints for an integer program is discussed. The interaction results in fixing the values of many decision variables, thus considerably reducing the size of the decision space and, consequently, the computational burden. As a sample problem, the task of scheduling US Coast Guard Atlantic Area cutters is addressed. Some 30 ships are involved, with varying patrol lengths, maintenance and training requirements, and capabilities. The system illustrates the coupling of symbolic computing techniques with the high-level mathematical programming language GAMS.
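A minimal sketch of the variable-fixing idea, with invented ships, weeks, and rules: conclusions from the rule base and the scheduler's advice pin binary assignment variables before the integer program is handed to the solver, shrinking the space it must search. This is an assumed illustration, not the system's actual rule base or GAMS formulation.

```python
# Minimal sketch (hypothetical ships, weeks, and rules) of pre-fixing binary
# assignment variables x[ship, week] to 0 or 1 before formulating the integer program.

ships = ["cutter_A", "cutter_B", "cutter_C"]
weeks = list(range(1, 13))

# Rule base: maintenance windows forbid patrol; a firm commitment forces it.
maintenance = {"cutter_A": {3, 4}, "cutter_B": {7}}
committed = {("cutter_C", 1)}

fixed = {}
for s in ships:
    for w in weeks:
        if w in maintenance.get(s, set()):
            fixed[(s, w)] = 0            # cannot patrol while in maintenance
        elif (s, w) in committed:
            fixed[(s, w)] = 1            # patrol already promised

free = [(s, w) for s in ships for w in weeks if (s, w) not in fixed]
print(f"{len(fixed)} variables fixed by rules, {len(free)} left for the integer program")
```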
A number of quantitative and qualitative approaches to defeasible reasoning have appeared in the artificial intelligence literature over the past decade. However, probability-based methods have often languished because of the supposed complexity of representation in terms of joint distributions. More recently, probabilistic representations based on networks have been shown to be quite compact. In expert systems especially, the use of probability theory for defeasible reasoning is now becoming more widespread. This dissertation describes how the statistical causal modeling technique called path analysis can be used as a probabilistically correct method for defeasible reasoning. Path analysis also makes use of networks, but the quantification of arcs is done by regression rather than by the conditional probabilities typical of other probabilistic methods. It is shown that the regression coefficients of path analysis in fact form a compact and computationally efficient representation for probabilistic reasoning. In addition to deriving the relationship between an extended version of path analysis and probability propagation on networks, the new method is illustrated by treating a number of problems that have appeared in the defeasible reasoning literature. Finally, one of these so-called benchmark problems is analyzed in detail, and the approach is compared with a number of other well-known methods. It is shown that extended path analysis makes explicit the assumptions that all defeasible reasoning strategies must make but seldom announce.
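A small sketch, with hypothetical path coefficients, of why the representation is compact: under Wright's tracing rule, the model-implied correlation between two variables is the sum over connecting paths of the product of arc weights along each path, so a handful of regression coefficients stands in for a full joint distribution.

```python
# Minimal sketch (hypothetical, standardized path coefficients) of Wright's tracing
# rule for the model x -> z -> y with an additional direct arc x -> y.

p_zx, p_yz, p_yx = 0.8, 0.5, 0.3

r_xz = p_zx                       # single path x -> z
r_xy = p_yx + p_zx * p_yz         # direct arc plus the path through z
r_zy = p_yz + p_zx * p_yx         # direct arc plus the path z <- x -> y
print(r_xz, r_xy, r_zy)
```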
A simple problem in defeasible reasoning is analyzed using a number of methods from the literature. It is shown that any such method which arrives at a firm conclusion regarding this problem must necessarily import additional assumptions. The paper argues in favor of those methods which are candid in their assumptions, and which can support alternative decision-making rules.