We consider the problem of approximating functions on a general domain in one and two dimensions using piecewise polynomial interpolation. An error estimator is proposed that shows how to adaptively determine the interpolation degree. Numerical examples are given.
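A minimal sketch of such an adaptive strategy in one dimension, assuming a simple illustrative estimator (the interpolant's deviation from f at midpoints between Chebyshev nodes) rather than the paper's actual estimator:

```python
import math

def lagrange_eval(xs, ys, x):
    """Evaluate the Lagrange interpolant through (xs, ys) at x."""
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        term = yi
        for j, xj in enumerate(xs):
            if j != i:
                term *= (x - xj) / (xi - xj)
        total += term
    return total

def adaptive_degree(f, a, b, tol=1e-8, max_degree=30):
    """Raise the interpolation degree until the estimated error
    (sampled at midpoints between the nodes) drops below tol."""
    for p in range(1, max_degree + 1):
        # Chebyshev nodes of degree p on [a, b], to avoid Runge oscillation
        xs = [0.5 * (a + b) + 0.5 * (b - a) * math.cos(math.pi * k / p)
              for k in range(p + 1)]
        ys = [f(x) for x in xs]
        # error estimator: max deviation at midpoints between adjacent nodes
        mids = [0.5 * (xs[k] + xs[k + 1]) for k in range(p)]
        err = max(abs(lagrange_eval(xs, ys, m) - f(m)) for m in mids)
        if err < tol:
            return p, err
    return max_degree, err

degree, err = adaptive_degree(math.exp, 0.0, 1.0, tol=1e-10)
```

For a smooth function such as exp on [0, 1], the loop terminates at a modest degree; a production version would also subdivide the domain (the piecewise part) when raising the degree stalls.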
Designing feasible and effective architectures under diverse computational budgets, incurred by different applications/devices, is essential for deploying deep models in real-world applications. To achieve this, existing methods often perform an independent architecture search for each target budget, which is inefficient and unnecessary. More critically, these independent search processes cannot share their learned knowledge (i.e., the distribution of good architectures) with one another and thus often yield limited search results. To address these issues, we propose a Pareto-aware Neural Architecture Generator (PNAG) that only needs to be trained once and then dynamically produces the Pareto-optimal architecture for any given budget via inference. To train PNAG, we learn the whole Pareto frontier by jointly finding multiple Pareto-optimal architectures under diverse budgets. Such a joint search algorithm not only greatly reduces the overall search cost but also improves the search results. Extensive experiments on three hardware platforms (i.e., mobile device, CPU, and GPU) show the superiority of our method over existing methods.
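The notion of a Pareto frontier over (budget, accuracy) pairs can be illustrated with a small dominance filter (a generic sketch, not PNAG itself; the latency/accuracy values are hypothetical):

```python
def pareto_front(archs):
    """Return the architectures not dominated by any other.
    Each arch is (latency, accuracy); b dominates a if b is
    no slower and no less accurate, and differs from a."""
    front = []
    for a in archs:
        dominated = any(
            b != a and b[0] <= a[0] and b[1] >= a[1]
            for b in archs
        )
        if not dominated:
            front.append(a)
    return front

# hypothetical (latency_ms, top-1 accuracy) candidates
archs = [(10, 0.70), (20, 0.75), (15, 0.72), (20, 0.74), (30, 0.80), (25, 0.73)]
front = pareto_front(archs)
```

Given a deployment budget, the best feasible architecture is then simply the highest-accuracy member of the front whose latency fits the budget, which is what a budget-conditioned generator is trained to produce directly.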
With the growing number of users accessing the 5G network in the future, choosing the locations of 5G base stations to ensure effective network coverage of the service area, and thereby provide reliable communication and transmission services, is a key issue. We solve this site-planning problem by establishing an improved genetic algorithm model. Based on the circular coverage area of each base station, the model satisfies the coverage requirement while minimizing cost. First, the data are denoised according to traffic volume, yielding 35,915 weak-coverage points. The optimization objective is the total cost of building macro and micro base stations, subject to the constraint that more than 90% of the weak-coverage points are covered. In addition, a lethal factor is introduced in the selection step: individuals that do not satisfy the threshold conditions of the original base stations are deleted. Improved operators are adopted in the crossover and mutation steps, which speed up the algorithm while preserving population diversity and the accuracy of the results. The final solution establishes 689 macro base stations and 269 micro base stations, with a coverage rate of 93.51% and an optimal cost of 7159. Visualizing the results on a coordinate map clearly shows that most of the weak-coverage points are covered, demonstrating that the improved genetic algorithm achieves fast convergence and a good optimum, and verifying the effectiveness of the model.
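The lethal-factor selection step can be sketched as a generic GA fragment (the `coverage` and `cost` callables and the 90% threshold stand in for the paper's constraint handling; individual encodings are hypothetical):

```python
import random

def evolve_step(population, coverage, cost, min_coverage=0.90, seed=0):
    """One selection step with a lethal factor: individuals below the
    coverage threshold are deleted outright; survivors then compete
    on cost via binary tournament (lower cost wins)."""
    rng = random.Random(seed)
    survivors = [ind for ind in population if coverage(ind) >= min_coverage]
    if not survivors:
        return []
    # refill to the original population size from the survivor pool
    selected = []
    for _ in range(len(population)):
        a, b = rng.choice(survivors), rng.choice(survivors)
        selected.append(a if cost(a) <= cost(b) else b)
    return selected

# hypothetical individuals encoded as (coverage_rate, total_cost)
pop = [(0.95, 100), (0.80, 50), (0.92, 120)]
out = evolve_step(pop, coverage=lambda i: i[0], cost=lambda i: i[1])
```

Deleting infeasible individuals before selection, rather than merely penalizing them, keeps the search inside the feasible region at the price of reduced diversity, which the improved crossover and mutation operators then have to compensate for.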
Various geometric search operators have been developed to explore the behaviours of individuals in genetic programming (GP) for the sake of making the evolutionary process more effective. This work proposes two geometric search operators to fulfil the semantic requirements under the theoretical framework of geometric semantic GP for symbolic regression. The two operators approximate the target semantics gradually but effectively. The results show that the new geometric operators can not only lead to a notable benefit to the learning performance, but also improve the generalisation ability of GP. In addition, they also bring a significant improvement to Random Desired Operator, which is a state-of-the-art geometric semantic operator.
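A standard geometric semantic crossover of this kind can be written as a convex combination of the parent programs, so that the offspring's semantics (its output vector on the training inputs) lies on the segment between the parents' semantics. This is an illustrative fragment of the general framework, not the two operators proposed in this work:

```python
def gs_crossover(p1, p2, r):
    """Geometric semantic crossover: the offspring computes
    r*p1(x) + (1-r)*p2(x) for r in [0, 1], so its semantics lies
    on the line segment between the parents' semantics."""
    return lambda x: r * p1(x) + (1.0 - r) * p2(x)

def semantics(prog, inputs):
    """The semantics of a program: its outputs on the training inputs."""
    return [prog(x) for x in inputs]

child = gs_crossover(lambda x: x, lambda x: x * x, r=0.5)
```

Because the offspring's semantics is a convex combination, its distance to the target semantics can never exceed that of the worse parent, which is the geometric property that makes the search provably well-behaved.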
In recent years, the performance of hash indexes has been significantly improved by exploiting emerging persistent memory (PMem). However, the performance improvement of hash indexes mainly comes from exploiting the hardware features of PMem; only a few studies optimize the hash index itself to fully exploit the potential of PMem. Interestingly, many of these studies improve the write performance of hash indexes on PMem but disregard their read performance. Through extensive experimental evaluation, we find that the major reason for inefficient reads in hash indexes on PMem is the high overhead of hash collision processing. To address this, we propose a novel Efficient Extendible Perfect Hashing (EEPH) scheme on a PMem-DRAM hybrid data layout to improve the read performance of hash indexes. Specifically, we reduce the overhead of dynamic perfect hashing extension on PMem by combining it with extendible hashing. We then design a hybrid data layout to unlock the inherent read strength of perfect hashing (i.e., zero collisions). Last, we devise a complement move algorithm that efficiently guarantees zero collisions when data are moved on PMem. We compare EEPH with state-of-the-art hash indexes on PMem through comprehensive experiments on several real-world read-intensive and read-skewed workloads. The experimental results confirm the superiority of EEPH: it achieves up to 2.21× higher throughput and about one third of the 99th-percentile latency of state-of-the-art hash indexes.
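The zero-collision read property of perfect hashing can be illustrated with the classic two-level (FKS-style) construction: a first-level hash partitions keys into buckets, and each bucket gets a second-level table of quadratic size whose hash seed is searched until no key collides. This is a toy in-memory sketch; EEPH's extendible directory, PMem-DRAM hybrid layout, and complement move algorithm are not modeled here:

```python
import random

def build_perfect_hash(keys, seed=0):
    """FKS-style two-level perfect hashing: for each first-level
    bucket, search for a seed giving a collision-free second-level
    table of size len(bucket)**2."""
    rng = random.Random(seed)
    m = max(1, len(keys))
    buckets = [[] for _ in range(m)]
    for k in keys:
        buckets[hash(k) % m].append(k)
    tables = []
    for bucket in buckets:
        if not bucket:
            tables.append((0, []))
            continue
        size = len(bucket) ** 2
        while True:  # expected O(1) retries: size >= |bucket|^2
            s = rng.randrange(1 << 30)
            slots = [None] * size
            ok = True
            for k in bucket:
                i = hash((s, k)) % size
                if slots[i] is not None:
                    ok = False
                    break
                slots[i] = k
            if ok:
                tables.append((s, slots))
                break
    return m, tables

def lookup(ph, key):
    """Exactly one probe per level: there is no collision chain to walk."""
    m, tables = ph
    s, slots = tables[hash(key) % m]
    if not slots:
        return False
    return slots[hash((s, key)) % len(slots)] == key

ph = build_perfect_hash(["a", "b", "c", "d", "e"])
```

The read path touches a bounded number of locations regardless of load, which is exactly the strength a hybrid layout wants to expose on PMem; the hard part, which the complement move algorithm addresses, is preserving that guarantee while the structure grows.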
Cluster analysis in computational linguistics has seldom been concerned with the pragmatics level. Features of a corpus at the pragmatics level relate to specific situations, including backgrounds, titles, and habits. Improving the accuracy of clustering for conversations collected from international students at Tsinghua University therefore requires contextual features. Here, we collected four hundred conversations as a corpus and built a vector space model from them. Using the Oxford-Duden Dictionary and other methods, we modified the model and grouped the conversations into three clusters. We tested our hypothesis with a self-organizing map neural network. The results suggest that the modified model yields a better outcome.
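The underlying vector space model with cosine similarity can be sketched as follows (a minimal bag-of-words illustration; the tokens and vocabulary are hypothetical, and the pragmatic contextual features and dictionary-based modifications described above would be appended as additional dimensions):

```python
import math
from collections import Counter

def to_vector(tokens, vocab):
    """Bag-of-words vector over a fixed vocabulary (a minimal VSM)."""
    counts = Counter(tokens)
    return [counts[w] for w in vocab]

def cosine(u, v):
    """Cosine similarity between two document vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

vocab = ["hello", "teacher", "thanks"]
v1 = to_vector(["hello", "hello", "teacher"], vocab)
```

A clustering algorithm (here, a self-organizing map) then operates on these vectors; enriching them with contextual dimensions changes which conversations end up near one another.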
We study the task of automated house design, which aims to automatically generate 3D houses from user requirements. Automating this task is non-trivial due to the intrinsic complexity of house design: 1) understanding user requirements, since users can hardly provide high-quality requirements without professional knowledge; and 2) designing the house plan, which mainly concerns how to capture the effective information in the user requirements. To address these issues, we propose an automatic house design framework called auto-3D-house design (A3HD). Unlike previous works that treat the user requirements in an unstructured way (e.g., natural language), we carefully design a structured list that divides the requirements into three parts (i.e., layout, outline, and style), which cover the attributes of the rooms, the outline of the building, and the style of decoration, respectively. Following the practice of architects, we construct a bubble diagram (i.e., a graph) that captures the rooms' attributes and relations under the constraint of the outline. In addition, we represent each outline as a combination of points and their order, ensuring that outlines of arbitrary shape can be handled. We then propose a graph feature generation module (GFGM) to capture layout features from the bubble diagrams and an outline feature generation module (OFGM) for outline features. Finally, we render 3D houses according to the given style requirements with a rule-based method. Experiments on two benchmark datasets (i.e., RPLAN and T3HM) demonstrate the effectiveness of A3HD in terms of both quantitative and qualitative evaluation metrics.
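The bubble diagram and the point-ordered outline can be represented with simple data structures (an illustrative sketch; the room names, attributes, and coordinates below are hypothetical, not from the datasets):

```python
# A bubble diagram: rooms as attributed nodes, adjacency as edges;
# the outline as an ordered list of vertices (an arbitrary polygon).
rooms = {
    "living":  {"type": "living_room", "area": 30},
    "bed1":    {"type": "bedroom",     "area": 15},
    "kitchen": {"type": "kitchen",     "area": 10},
}
edges = [("living", "bed1"), ("living", "kitchen")]
outline = [(0, 0), (10, 0), (10, 8), (4, 8), (4, 5), (0, 5)]

def adjacency(rooms, edges):
    """Undirected adjacency of the bubble diagram."""
    adj = {r: set() for r in rooms}
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)
    return adj

def polygon_area(pts):
    """Shoelace formula over the ordered outline vertices; the
    ordering is what lets arbitrary (non-convex) shapes be encoded."""
    n = len(pts)
    s = sum(pts[i][0] * pts[(i + 1) % n][1]
            - pts[(i + 1) % n][0] * pts[i][1] for i in range(n))
    return abs(s) / 2.0

adj = adjacency(rooms, edges)
```

Feature generation modules would then consume the graph (node attributes plus adjacency) and the ordered point sequence respectively, which is why keeping the vertex order explicit matters.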