Cooperative Design for MIMO Radar-Communication Spectral Sharing System Based on Mutual Information Optimization
    Abstract:
This paper considers a novel cooperative design for a radar-communication spectral sharing system with a multiple-input multiple-output (MIMO) structure, based on mutual information optimization. We present a new spectral sharing framework that aims to maximize the radar mutual information. Specifically, the transmit sequences, comprising the radar waveform and the communication codebook, are designed jointly through mutual information optimization for both the radar and the communication system. An iterative procedure is then devised to maximize the radar mutual information under constraints on the transmit power, the constant-modulus requirement for the radar waveform, and the mutual information performance required of the communication system. The proposed approach uses alternating optimization built on the minorization-maximization (MM) framework, the Lagrange multiplier method, and the alternating direction method of multipliers (ADMM) to decompose the original design into two simpler subproblems, each admitting a closed-form solution. Simulation results demonstrate the advantages of the proposed schemes.
Keywords: maximization; transmitter power output
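The pipeline above combines MM, a Lagrange step, and ADMM. The sketch below is a much simpler stand-in, projected gradient ascent on a log-det mutual-information surrogate with a unit-modulus projection, meant only to make the shape of such a constrained waveform design concrete; the sizes N and L, the noise power sigma2, the covariance R_g, and the step size are illustrative assumptions, not the paper's settings.

```python
# Illustrative stand-in (NOT the paper's MM/Lagrange/ADMM procedure):
# projected gradient ascent on a log-det radar mutual-information
# surrogate, with each waveform entry projected back onto the unit-modulus
# circle to enforce the constant-modulus constraint.
import numpy as np

rng = np.random.default_rng(0)
N, L, sigma2 = 4, 32, 0.1            # antennas, code length, noise power
R_g = np.eye(N)                      # assumed target-response covariance

def project_constant_modulus(S):
    """Keep only the phase of each entry: |S[i, j]| = 1 afterwards."""
    return np.exp(1j * np.angle(S))

def radar_mi(S):
    """Gaussian-case surrogate: log det(I + S^H R_g S / sigma2)."""
    M = np.eye(L) + S.conj().T @ R_g @ S / sigma2
    return np.linalg.slogdet(M)[1]

S = project_constant_modulus(rng.standard_normal((N, L))
                             + 1j * rng.standard_normal((N, L)))
for _ in range(50):
    M = np.eye(L) + S.conj().T @ R_g @ S / sigma2
    grad = R_g @ S @ np.linalg.inv(M) / sigma2   # Wirtinger gradient w.r.t. conj(S)
    S = project_constant_modulus(S + 0.5 * grad) # ascent step + projection
print(f"surrogate radar MI after projected ascent: {radar_mi(S):.3f} nats")
```

Note that the constant-modulus projection also fixes the transmit power, so the power constraint is satisfied automatically in this toy setting.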
    An unsupervised learning procedure based on maximizing the mutual information between the outputs of two networks receiving different but statistically dependent inputs is analyzed [S. Becker and G. Hinton, Nature (London) 355, 161 (1992)]. For a generic data model, I show that in the large sample limit the structure in the data is recognized by mutual information maximization. For a more restricted model, where the networks are similar to perceptrons, I calculate the learning curves for zero-temperature Gibbs learning. These show that convergence can be rather slow, and a way of regularizing the procedure is considered.
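A minimal numerical sketch of the Becker-Hinton setting described above: two simple linear units receive different noisy views of a shared signal, and one unit's weights are adjusted to raise a histogram-estimated mutual information between the binarized outputs. The architecture, the plug-in estimator, and the random-search update are illustrative assumptions, not the analyzed model.

```python
# Two "perceptron-like" units see different noisy views of one underlying
# signal; we hill-climb one unit's weights to increase the mutual
# information between the binarized outputs.
import numpy as np

rng = np.random.default_rng(1)
D, n = 8, 5000
source = rng.standard_normal((n, 1)) @ rng.standard_normal((1, D))
x_a = source + 0.5 * rng.standard_normal((n, D))    # view for network A
x_b = source + 0.5 * rng.standard_normal((n, D))    # view for network B

def mi_binary(u, v):
    """Plug-in MI estimate (nats) between two binarized unit outputs."""
    a, b = u > 0, v > 0
    mi = 0.0
    for va in (False, True):
        for vb in (False, True):
            p_ab = np.mean((a == va) & (b == vb))
            if p_ab > 0:
                mi += p_ab * np.log(p_ab / (np.mean(a == va) * np.mean(b == vb)))
    return mi

w_a, w_b = rng.standard_normal(D), rng.standard_normal(D)
best = mi_binary(x_a @ w_a, x_b @ w_b)
for _ in range(200):                                  # crude random-search ascent
    cand = w_a + 0.1 * rng.standard_normal(D)
    score = mi_binary(x_a @ cand, x_b @ w_b)
    if score > best:
        w_a, best = cand, score
print(f"MI between unit outputs: {best:.3f} nats")
```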
Feature selection eliminates redundant features and keeps relevant ones; it can improve a machine learning algorithm's performance and accelerate computation. Among the various approaches, mutual information has attracted increasing attention because it is an effective criterion for measuring variable correlation. However, current work mainly focuses on maximizing feature relevancy with the class label while minimizing redundancy within the selected features. We argue that pursuing redundancy minimization is reasonable but not necessary, because some so-called redundant features still carry useful information that can improve performance. Moreover, mutual information calculation may distort the true relationship between two variables without a proper neighborhood partition; traditional methods usually split continuous variables into several intervals and may even ignore this influence. We theoretically prove how variable fluctuation negatively influences mutual information calculation. To remove these obstacles, we propose a full conditional mutual information maximization method (FCMIM) for feature selection, which considers feature relevancy in only two aspects. To obtain a better partition and eliminate the negative influence of attribute fluctuation, we propose an adaptive neighborhood partition algorithm (ANP) driven by feedback from the mutual information maximization algorithm; its backpropagation process searches for a proper neighborhood partition parameter. We compare our method with several mutual information methods on 17 benchmark datasets. FCMIM outperforms the other methods under different classifiers, and ANP improves the performance of nearly all the mutual information methods.
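As a concrete illustration of relevancy-driven selection in the spirit of FCMIM (the exact FCMIM scoring and the ANP partition step are not reproduced here), the sketch below discretizes each feature and ranks features by a plug-in estimate of I(X_j; Y); the bin count, data, and sizes are assumptions.

```python
# Rank features by estimated mutual information with the class label,
# after equal-width discretization of each continuous feature.
import numpy as np

def mi_discrete(x, y):
    """Plug-in mutual information (nats) between two discrete arrays."""
    mi = 0.0
    for xv in np.unique(x):
        for yv in np.unique(y):
            p_xy = np.mean((x == xv) & (y == yv))
            if p_xy > 0:
                mi += p_xy * np.log(p_xy / (np.mean(x == xv) * np.mean(y == yv)))
    return mi

def select_by_relevancy(X, y, k, bins=5):
    """Discretize each column into equal-width bins; keep top-k by MI with y."""
    scores = []
    for j in range(X.shape[1]):
        edges = np.histogram_bin_edges(X[:, j], bins)[1:-1]
        scores.append(mi_discrete(np.digitize(X[:, j], edges), y))
    return np.argsort(scores)[::-1][:k]

rng = np.random.default_rng(2)
X = rng.standard_normal((500, 10))
y = (X[:, 3] + 0.1 * rng.standard_normal(500) > 0).astype(int)  # feature 3 is informative
print(select_by_relevancy(X, y, k=3))   # feature 3 should rank first
```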
    Citations (1)
The codebook design, which determines the quality of the encoded images, is an important problem in the vector quantisation technique. The Linde–Buzo–Gray (LBG) technique is a widely used algorithm for codebook design. However, the LBG algorithm is very sensitive to the initial codebook and tends to get trapped in local minima. In this study, a high-quality initial codebook design method is proposed. The proposed method uses both the mean characteristic value and the variance characteristic value of the training vectors to divide them into groups, and codewords are then selected from each group to generate an initial codebook. The experimental results demonstrate that the proposed method produces a better initial codebook than the related methods.
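A hedged sketch of the overall flow: group training vectors by their mean and variance characteristics, take one representative per group as the initial codebook, and refine with standard LBG iterations. The grouping rule here is simplified from the paper's, and the dimensions and counts are illustrative.

```python
# Mean/variance-guided initial codebook, followed by standard LBG
# (nearest-codeword partition + centroid update) refinement.
import numpy as np

def initial_codebook(train, n_codes):
    """One representative per group of vectors sorted by (mean, variance)."""
    feats = np.stack([train.mean(axis=1), train.var(axis=1)], axis=1)
    order = np.lexsort((feats[:, 1], feats[:, 0]))   # sort by mean, then variance
    groups = np.array_split(order, n_codes)
    return np.stack([train[g].mean(axis=0) for g in groups])

def lbg(train, codebook, iters=20):
    for _ in range(iters):
        d = ((train[:, None, :] - codebook[None]) ** 2).sum(-1)
        assign = d.argmin(1)                          # nearest-codeword partition
        for k in range(len(codebook)):
            members = train[assign == k]
            if len(members):
                codebook[k] = members.mean(0)         # centroid update
    return codebook

rng = np.random.default_rng(3)
train = rng.standard_normal((1000, 16))               # 16-dim training vectors
cb = lbg(train, initial_codebook(train, 32))
print(cb.shape)                                       # (32, 16)
```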
    Citations (7)
The paper proposes a new information-theoretic method, called "self-organized mutual information maximization learning", to improve the generalization performance of multi-layered neural networks. In the method, the self-organizing map (SOM) is applied successively to supply knowledge to the subsequent multi-layered neural networks. In this process, the mutual information between input patterns and competitive neurons is forced to increase by changing the spread parameter. Although several methods to increase information in multi-layered neural networks have been proposed, the present paper is the first to confirm that mutual information plays an important role in learning in such networks and to show how the mutual information can be computed. The method was applied to the extended Senate data. The experiments examine whether mutual information is actually increased by the present method, because mutual information can seemingly be increased merely by changing the spread parameter. Experimental results show that even when the parameter responsible for changing mutual information was fixed, mutual information could still be increased. This means that the present method can organize neural networks so as to store information content on input patterns. In addition, generalization performance was much improved by this increase in mutual information.
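To make the tracked quantity concrete, the sketch below computes mutual information between input patterns and competitive neurons from normalized Gaussian competitive activations with a spread parameter sigma, one plausible reading of the quantity described above; the data, network size, and activation rule are assumptions.

```python
# MI between input patterns and competitive neurons, with firing
# probabilities p(j|s) proportional to exp(-||x_s - w_j||^2 / (2 sigma^2)).
import numpy as np

def competitive_mi(X, W, sigma):
    d2 = ((X[:, None, :] - W[None]) ** 2).sum(-1)
    logits = -d2 / (2 * sigma ** 2)
    p_j_s = np.exp(logits - logits.max(1, keepdims=True))
    p_j_s /= p_j_s.sum(1, keepdims=True)          # firing prob. per pattern
    p_j = p_j_s.mean(0)                           # marginal firing prob.
    return np.mean(np.sum(p_j_s * np.log(p_j_s / p_j + 1e-12), axis=1))

rng = np.random.default_rng(4)
X = rng.standard_normal((200, 10))                # input patterns
W = rng.standard_normal((6, 10))                  # competitive-neuron weights
for sigma in (2.0, 1.0, 0.5):                     # smaller spread typically -> higher MI
    print(sigma, competitive_mi(X, W, sigma))
```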
    Citations (3)
Vocabulary learning by children can be characterized by many biases. When encountering a new word, children, as well as adults, are biased toward assuming that it means something totally different from the words they already know. To the best of our knowledge, the first mathematical proof of the optimality of this bias is presented here. First, it is shown that this bias is a particular case of the maximization of mutual information between words and meanings. Second, the optimality is proven within a more general information-theoretic framework in which mutual information maximization competes with other information-theoretic principles. The bias is thus a prediction of modern information theory. The relationship between these information-theoretic principles and the principles of contrast and mutual exclusivity is also shown.
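For reference, the objective in the first step is the standard mutual information between words W and meanings M (a textbook identity; the paper's broader competing-principles framework is not reproduced here):

```latex
% Standard definition of the mutual information between words W and
% meanings M. Intuitively, attaching a new meaning to an unused word keeps
% the word-meaning mapping one-to-one, which increases I(W;M) more than
% overloading an already-known word would.
\[
  I(W;M) = \sum_{w}\sum_{m} p(w,m)\,\log\frac{p(w,m)}{p(w)\,p(m)}
\]
```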
    Citations (5)
This paper addresses the problem of coordinated target tracking in sensor networks. For a typical target tracking scene with nonlinear bearing-only measurements, we first investigate the mutual information between multiple sensors and the target state. To improve tracking performance, we then analyze the relative positions between the sensor agents and the tracked target, and derive the optimal sensor positions in the network by mutual information maximization. Simulation results are presented and discussed to demonstrate that the estimated target states can be improved by the proposed method.
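A hedged sketch of the placement criterion: for bearing-only measurements with Gaussian noise, information about the target position is summarized by the bearing Fisher information matrix, and under Gaussian assumptions mutual information grows with its log-determinant, so candidate sensor positions can be scored by that determinant. The geometry, noise level, and one-fixed-sensor scenario below are illustrative assumptions.

```python
# Score a second sensor's position by the log det of the bearing-only
# Fisher information about a fixed 2-D target position.
import numpy as np

def bearing_fim(sensors, target, sigma=0.05):
    """Fisher information of target (x, y) from bearing measurements."""
    J = np.zeros((2, 2))
    for s in sensors:
        d = target - s
        r2 = d @ d
        u = np.array([-d[1], d[0]]) / np.sqrt(r2)    # bearing-sensitivity direction
        J += np.outer(u, u) / (sigma ** 2 * r2)
    return J

target = np.array([0.0, 0.0])
fixed = [np.array([10.0, 0.0])]                      # one sensor already placed
angles = np.linspace(0, np.pi, 180)
scores = [np.linalg.slogdet(
              bearing_fim(fixed + [10 * np.array([np.cos(a), np.sin(a)])], target))[1]
          for a in angles]
best = angles[int(np.argmax(scores))]
print(np.degrees(best))                              # ≈ 90°: orthogonal bearings are best
```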
    Citations (0)
This paper proposes a medical image registration algorithm based on maximization of normalized mutual information. The algorithm uses an improved partial-volume distribution interpolation, which effectively overcomes the local-extremum problem common in image registration. It takes the normalized mutual information as the objective function and uses the Powell algorithm to search for its maximum, yielding the best registration parameters. Experimental results show that the proposed algorithm is simple and fast, and achieves good accuracy and robustness.
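A minimal sketch of the registration loop described above: normalized mutual information computed from a joint histogram, maximized over a 2-D translation with SciPy's Powell search. Real registration adds rotation and the paper's improved partial-volume interpolation; the synthetic image, bin count, and translation-only motion model here are assumptions.

```python
# Recover a known 2-D shift by maximizing normalized mutual information
# with Powell search over the translation parameters.
import numpy as np
from scipy.ndimage import shift as nd_shift
from scipy.optimize import minimize

def nmi(a, b, bins=32):
    """Normalized MI, (H(A) + H(B)) / H(A, B), from a joint histogram."""
    h, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    p = h / h.sum()
    px, py = p.sum(1), p.sum(0)
    hx = -np.sum(px[px > 0] * np.log(px[px > 0]))
    hy = -np.sum(py[py > 0] * np.log(py[py > 0]))
    hxy = -np.sum(p[p > 0] * np.log(p[p > 0]))
    return (hx + hy) / hxy

xx, yy = np.meshgrid(np.arange(64), np.arange(64))
fixed = np.sin(xx / 5.0) + np.cos(yy / 7.0)          # smooth test image
moving = nd_shift(fixed, (3.0, -2.0), order=1)       # ground-truth offset

res = minimize(lambda t: -nmi(fixed, nd_shift(moving, t, order=1)),
               x0=[0.0, 0.0], method="Powell")
print(res.x)                                         # ≈ (-3, 2): recovered shift
```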
    Citations (1)
The present paper aims to propose a new type of information-theoretic method to maximize mutual information between neurons. The importance of mutual information is well known in neural networks, but actually implementing mutual information maximization is a hard problem, so mutual information has not necessarily been used in neural networks and its application has been very limited. To overcome this shortcoming, we present here a greatly simplified version of mutual information maximization, obtained by supposing that mutual information is already maximized before learning. The method was applied to the wholesale data set and to the inference of defaulting credit-card holders. The experimental results show that mutual information between neurons could be increased and generalization performance could be improved. Moreover, the important features can be obtained by the present method even when the training data set is small, whereas logistic regression analysis could extract the important features only with a large training data set.
    Citations (9)
The present paper aims to propose a new type of information-theoretic method to maximize mutual information between inputs and outputs. The importance of mutual information in neural networks is well known, but actually implementing mutual information maximization has been quite difficult, and mutual information has not been used extensively in neural networks, so its applicability has been very limited. To overcome this shortcoming, we present mutual information maximization here in a very simplified manner, by supposing that mutual information is already maximized before learning, or at least at the beginning of learning. The method was applied to three data sets (the crab, wholesale, and human resources data sets) and examined in terms of generalization performance and connection weights. The results showed that, by disentangling the connection weights, maximizing mutual information made it possible to explicitly interpret the relations between inputs and outputs.
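As one hedged reading of the interpretability claim above: with layered connection weights in hand, a simple linear input-to-output relevance map is the product of the weight matrices. The sketch below shows that computation on random weights; it stands in for, and is not, the paper's analysis of disentangled weights.

```python
# Linear input-to-output relevance via the product of layer weights.
import numpy as np

rng = np.random.default_rng(6)
W1 = rng.standard_normal((10, 6))     # input -> hidden weights
W2 = rng.standard_normal((6, 2))      # hidden -> output weights
relevance = W1 @ W2                   # (input, output) linear relevance map
top = np.argsort(-np.abs(relevance[:, 0]))[:3]
print(top)                            # inputs most strongly tied to output 0
```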
    Citations (13)