An entry from the Cambridge Structural Database (CSD), the world's repository of small-molecule crystal structures. The entry contains experimental data from a crystal diffraction study. The deposited dataset for this entry is freely available from the CCDC and typically includes 3D coordinates, cell parameters, the space group, experimental conditions, and quality measures.
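As a concrete illustration of what such a deposit contains, the sketch below reads a downloaded CIF file with the open-source gemmi library and extracts the unit-cell parameters and space group. The file name is a placeholder, and the tags are the standard core-CIF names; this is only a minimal example, not a prescribed workflow.

```python
# Minimal sketch, assuming a locally downloaded CIF deposit and the gemmi
# library (pip install gemmi); "deposited_entry.cif" is a placeholder name.
import gemmi

doc = gemmi.cif.read("deposited_entry.cif")
block = doc.sole_block()

# Unit-cell parameters under their standard core-CIF tags.
cell = {tag: block.find_value(f"_cell_{tag}")
        for tag in ("length_a", "length_b", "length_c",
                    "angle_alpha", "angle_beta", "angle_gamma")}
space_group = block.find_value("_symmetry_space_group_name_H-M")
print(cell, space_group)
```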
A calligram is an arrangement of words or letters that creates a visual image, and a compact calligram fits a single word into a 2D shape. We introduce a fully automatic method for generating legible compact calligrams that balances conveying the input shape, legibility, and aesthetics. Our method has three key elements: a path generation step, which computes a global layout path suitable for embedding the input word; an alignment step, which places the letters so that letter protrusions align with shape protrusions while word legibility is maintained; and a final deformation step, which deforms the letters to fit the shape while balancing fit against letter legibility. As letter legibility is critical to the quality of compact calligrams, we conduct a large-scale crowd-sourced study on the impact of different letter deformations on legibility and use the results to train a letter legibility measure that guides the letter deformation. We show automatically generated calligrams on an extensive set of word-image combinations. The legibility and overall quality of the calligrams are evaluated and compared, via user studies, with those produced by human creators, including a professional artist, and by existing works.
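To make the layout idea concrete, here is a toy sketch (not the paper's algorithm) of the simplest version of the embedding step: letters are spaced at equal arc-length intervals along a given layout path and rotated to the local tangent. The alignment and deformation steps would then refine these initial poses.

```python
# Illustrative sketch: evenly spacing a word's letters along a polyline
# layout path, with each letter rotated to the local tangent direction.
# Path generation and letter deformation are assumed to happen elsewhere.
import math

def place_letters(path, word):
    """path: list of (x, y) points; returns (char, x, y, angle) per letter."""
    # Cumulative arc length along the polyline.
    seg = [math.dist(path[i], path[i + 1]) for i in range(len(path) - 1)]
    total = sum(seg)
    poses = []
    for k, ch in enumerate(word):
        # Target arc-length position of the k-th letter's anchor.
        s = (k + 0.5) * total / len(word)
        acc, i = 0.0, 0
        while i < len(seg) - 1 and acc + seg[i] < s:
            acc += seg[i]
            i += 1
        t = (s - acc) / seg[i]
        (x0, y0), (x1, y1) = path[i], path[i + 1]
        x, y = x0 + t * (x1 - x0), y0 + t * (y1 - y0)
        angle = math.atan2(y1 - y0, x1 - x0)  # local tangent orientation
        poses.append((ch, x, y, angle))
    return poses

print(place_letters([(0, 0), (2, 1), (4, 0)], "word"))
```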
We study the Maximum Subgraph problem in deep dependency parsing. We consider two restrictions on deep dependency graphs: (a) 1-endpoint-crossing and (b) pagenumber-2. Our main contribution is an exact algorithm that obtains maximum subgraphs satisfying both restrictions simultaneously in O(n^5) time. Moreover, ignoring one linguistically rare structure decreases the complexity to O(n^4). We also extend our quartic-time algorithm into a practical parser with a discriminative disambiguation model and evaluate its performance on four linguistic data sets used in semantic dependency parsing.
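For intuition about the second restriction, the sketch below checks the pagenumber-2 property of a given arc set under a fixed word order: two arcs conflict exactly when they cross, and a two-page embedding exists precisely when this crossing graph is bipartite. This is only a property check, not the O(n^5) maximum-subgraph algorithm itself.

```python
# Sketch: testing the pagenumber-2 restriction for a dependency graph with
# a fixed word order. Arcs (i, j) and (k, l) cross iff i < k < j < l; the
# graph fits on two pages iff the crossing graph is 2-colorable (bipartite).
from collections import deque

def crosses(a, b):
    (i, j), (k, l) = sorted(sorted(a)), sorted(sorted(b))
    if (i, j) > (k, l):
        (i, j), (k, l) = (k, l), (i, j)
    return i < k < j < l

def is_pagenumber_2(edges):
    adj = {e: [f for f in edges if f != e and crosses(e, f)] for e in edges}
    color = {}
    for e in edges:
        if e in color:
            continue
        color[e], queue = 0, deque([e])
        while queue:  # BFS 2-coloring: a color assigns each arc to a page
            u = queue.popleft()
            for v in adj[u]:
                if v not in color:
                    color[v] = 1 - color[u]
                    queue.append(v)
                elif color[v] == color[u]:
                    return False
        # fall through: this component is consistently 2-colored
    return True

print(is_pagenumber_2([(1, 3), (2, 4), (3, 5)]))  # True: alternate the pages
```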
One major goal of mesh parameterization is to minimize conformal distortion. Measured boundary parameterizations lower this distortion by leaving the boundary free and constraining it with the measured distances from a center vertex to all boundary vertices; they therefore depend strongly on the choice of the center vertex. In this paper, we introduce two methods for determining the center vertex automatically. Both can serve as necessary supplements to existing measured boundary methods, reducing the artifacts that commonly result from an ambiguous choice of the center vertex. In addition, we propose a simple and fast measured boundary parameterization method based on Poisson's equation. Our new approach generates less conformal distortion than fixed boundary methods and more regular domain boundaries than other measured boundary methods. Moreover, it offers a good tradeoff between computational cost and conformal distortion compared with the fast and robust angle-based flattening (ABF++).
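As a rough sketch of the "measured boundary" ingredient (the interior solve, e.g. the Poisson-based step, is assumed to happen elsewhere), the code below computes graph distances from a candidate center vertex over the mesh edges and uses them as radii for the free boundary. The function names and the uniform angular spacing are illustrative choices, not the paper's exact construction.

```python
# Illustrative sketch: geodesic-like graph distances from a center vertex
# drive the placement of the free boundary in the parameter domain.
import heapq, math

def graph_distances(n_vertices, edges, center):
    """Dijkstra over weighted mesh edges: edges = [(u, v, length), ...]."""
    adj = [[] for _ in range(n_vertices)]
    for u, v, w in edges:
        adj[u].append((v, w))
        adj[v].append((u, w))
    dist = [math.inf] * n_vertices
    dist[center] = 0.0
    heap = [(0.0, center)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist[u]:
            continue
        for v, w in adj[u]:
            if d + w < dist[v]:
                dist[v] = d + w
                heapq.heappush(heap, (d + w, v))
    return dist

def measured_boundary(dist, boundary):
    """Place boundary vertices (in loop order) at uniform angles, with the
    radius of each equal to its measured distance from the center."""
    return [(dist[v] * math.cos(2 * math.pi * k / len(boundary)),
             dist[v] * math.sin(2 * math.pi * k / len(boundary)))
            for k, v in enumerate(boundary)]
```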
In this work, we present a phenomenon-oriented comparative analysis of the two dominant approaches in English Resource Semantics (ERS) parsing: classic, knowledge-intensive models and neural, data-intensive models. To reflect state-of-the-art neural NLP technology, we introduce a factorization-based parser that produces Elementary Dependency Structures far more accurately than previous data-driven parsers. We conduct a suite of tests targeting different linguistic phenomena to analyze the grammatical competence of the parsers, and show that, despite comparable overall performance, knowledge- and data-intensive models produce different types of errors, in ways that can be explained by their theoretical properties. This analysis enables in-depth evaluation of several representative parsing techniques and points to new directions for parser development.
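For readers unfamiliar with the term, "factorization-based" means the graph is scored as a sum of local parts. The toy sketch below shows the simplest arc-factored case: every candidate arc is scored independently and, because a semantic dependency graph need not be a tree, all positively scored arcs are kept. The scorer here is a stand-in for a trained neural model, and a real parser would add structural constraints on top of this.

```python
# Toy sketch of arc-factored decoding for semantic dependency graphs:
# keep every independently scored arc whose score is positive.
def decode(tokens, score_arc):
    n = len(tokens)
    return [(h, d) for h in range(n) for d in range(n)
            if h != d and score_arc(h, d) > 0.0]

# Placeholder scorer: pretend the model likes short left-to-right arcs.
toy = lambda h, d: 1.0 if 0 < d - h <= 2 else -1.0
print(decode(["A", "similar", "technique", "is", "used"], toy))
```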
We propose a novel curvature-aware simplification technique for point-sampled geometry based on the locally optimal projection (LOP) operator. Our algorithm includes two new developments. First, a weight term related to the surface variation at each point is introduced into the classic LOP operator, producing output points with a spatially adaptive distribution. Second, to speed up convergence, we propose an initialization process based on geometry-aware stochastic sampling; owing to this initialization, the relaxation converges faster than when initialized by uniform sampling. Our simplification method possesses a number of distinguishing features: in particular, it is resilient to noise and outliers and offers intuitive control over the distribution of the simplified points. Finally, we show results of our approach on publicly available point cloud data and compare them with results obtained using previous methods. Our method outperforms these methods on raw scanned data.
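A minimal sketch of the kind of modification described is shown below: one attraction step of a weighted LOP, where a per-point weight makes high-variation regions attract more samples. The repulsion term, the outer iteration, and the computation of the surface-variation weights (e.g., from local PCA) are omitted; the kernel width follows the classic LOP choice, and all names are illustrative.

```python
# Simplified sketch of one weighted-LOP attraction step (repulsion omitted):
# each sample x moves toward a local weighted average of the input points P,
# with theta a compactly supported Gaussian kernel and sigma[j] a per-point
# surface-variation weight so curved regions attract more samples.
import numpy as np

def lop_step(X, P, sigma, h):
    """X: (m,3) current samples; P: (n,3) input cloud; sigma: (n,) weights;
    h: support radius of the kernel."""
    X_new = np.empty_like(X)
    for i, x in enumerate(X):
        r = np.linalg.norm(P - x, axis=1)
        theta = np.exp(-((r / (h / 4.0)) ** 2))  # classic LOP kernel
        w = theta * sigma                        # curvature-aware weighting
        X_new[i] = (w[:, None] * P).sum(0) / max(w.sum(), 1e-12)
    return X_new
```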
In the complicated situation of the current world order, national cultures face the threat of cultural globalization and, at the same time, a renewed historical opportunity. Whether the aim is to preserve the independence of a national spirit or to strengthen the centripetal force and cohesion of nation states, culture is an inescapable theme. A cultural concept takes a long period of social practice to develop from bud to maturity, and through that practice it acquires a value identity shared by all. A mature and stable cultural identity also serves as a value norm, embodying the shared value appeals and expectations of a nation. Marxism is itself a theory and method for understanding and transforming the world. Both understanding and transformation require subjects to undertake the mission, and the function of value norms is to designate the specific subjects who undertake it. From this point of view, the value-norm function of fine traditional Chinese culture helps anchor a specific subject: the subject who carries forward the cause of Marxism in China.
The word ‘style’ can be interpreted in so many different ways in so many different contexts. To provide a general analysis and understanding of styles is a highly challenging problem. We pose the open question ‘how to extract styles from geometric shapes?’ and address one instance of the problem. Specifically, we present an unsupervised algorithm for identifying curve styles in a set of shapes. In our setting, a curve style is explicitly represented by a mode of curve features appearing along the 2D silhouettes of the shapes in the set. Unlike previous attempts, we do not rely on any preconceived conceptual characterisations, for example, via specific shape descriptors, to define what is or is not a style. Our definition of styles is data-dependent; it depends on the input set, but we do not require computing a shape correspondence across the set. We provide an operational definition of curve styles which focuses on separating curve features that represent styles from curve features that are content revealing. To this end, we develop a novel formulation and associated algorithm for style-content separation. The analysis is based on a feature-shape association matrix (FSM) whose rows correspond to modes of curve features, whose columns correspond to shapes in the set, and whose entries express the extent to which a feature mode is present in a shape. We make several assumptions to drive style-content separation which only involve properties of, and relations between, rows of the FSM. Computationally, our algorithm only requires row-wise correlation analysis in the FSM and a heuristic solution of an instance of the set cover problem. Results are demonstrated on several data sets showing the identification of curve styles. We also develop and demonstrate several style-related applications including style exaggeration, removal, blending, and style transfer for 2D shape synthesis.
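To make the FSM analysis concrete, the toy sketch below treats feature modes (rows) that appear in nearly every shape as content, and greedily covers the shape set with the remaining rows, mirroring the heuristic set-cover step. All thresholds and names are illustrative, not the paper's.

```python
# Toy sketch of row-wise analysis on a feature-shape association matrix
# (FSM): rows supported by (almost) all shapes are treated as content;
# a greedy set cover picks candidate style rows whose supports together
# cover the shape set. Thresholds are illustrative placeholders.
import numpy as np

def style_rows(fsm, present=0.5, content_frac=0.9):
    n_shapes = fsm.shape[1]
    support = [set(np.flatnonzero(row > present)) for row in fsm]
    # Candidate styles: modes that are neither empty nor near-universal.
    candidates = [i for i, s in enumerate(support)
                  if s and len(s) < content_frac * n_shapes]
    uncovered, chosen = set(range(n_shapes)), []
    while uncovered and candidates:
        # Greedy step: the row covering the most still-uncovered shapes.
        best = max(candidates, key=lambda i: len(support[i] & uncovered))
        if not support[best] & uncovered:
            break
        chosen.append(best)
        uncovered -= support[best]
    return chosen
```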