    Frank-Wolfe Method Used for Solving Channel Capacity
    1 Citation · 0 References · 20 Related Papers
    Abstract:
    In general, a communication system consists of a source, a channel, and a receiver, where the channel is the medium over which information is transmitted and the channel capacity is its most important parameter. The computation of the capacity of some special channels is first introduced, and the Frank-Wolfe method is then used to solve the capacity problem for the general discrete memoryless channel; this approach is simpler than the iterative one.
    Keywords:
    Binary erasure channel
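The abstract above applies the Frank-Wolfe method to the capacity of a discrete memoryless channel, i.e., maximizing the mutual information I(X;Y) over input distributions on the probability simplex. The following is a minimal sketch of that idea (an illustrative implementation, not the paper's own code); the test channel, iteration count, and step-size rule are assumptions:

```python
import numpy as np

def mutual_information(p, W):
    """I(X;Y) in bits for input distribution p and channel matrix W[x, y]."""
    q = p @ W                                   # output distribution
    with np.errstate(divide="ignore", invalid="ignore"):
        t = np.where(W > 0, W * np.log2(W / q), 0.0)
    return float(np.sum(p[:, None] * t))

def fw_capacity(W, iters=5000):
    """Frank-Wolfe ascent of I(p) over the input probability simplex."""
    n = W.shape[0]
    p = np.full(n, 1.0 / n)                     # start from the uniform input
    for k in range(iters):
        q = p @ W
        # gradient component for input x is D(W(.|x) || q), up to an
        # additive constant that does not affect the argmax below
        with np.errstate(divide="ignore", invalid="ignore"):
            grad = np.where(W > 0, W * np.log2(W / q), 0.0).sum(axis=1)
        s = np.zeros(n)
        s[np.argmax(grad)] = 1.0                # best vertex of the simplex
        gamma = 2.0 / (k + 2.0)                 # standard diminishing step size
        p = (1.0 - gamma) * p + gamma * s
    return mutual_information(p, W), p

# Binary symmetric channel, crossover 0.1: capacity = 1 - H(0.1) ≈ 0.531 bits
W_bsc = np.array([[0.9, 0.1], [0.1, 0.9]])
C, p_opt = fw_capacity(W_bsc)
```

Each iteration linearizes I at the current input distribution; the linear subproblem over the simplex is solved by a single argmax, which is what makes the method simple. For the binary symmetric channel with crossover 0.1 the iterates approach the known capacity 1 - H(0.1) ≈ 0.531 bits.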
    The paper deals with the capacity problems for the general multiple-access channel, where the channel transition probabilities are arbitrary for every blocklength n. The approach used here, which is called the information-spectrum approach, is quite different from the standard typical-sequence and/or AEP techniques. The general formula for the capacity region for the general multiple-access channel is thus established. In order to examine the potentiality, we apply it to the mixed channel to obtain the formula for its capacity region. The case where input cost constraints are imposed is also considered.
    Information Theory
    Citations (51)
    It is observed that some communication situations fall into the large alphabet setting, in which the number of channel parameters and the number of channel uses are both large. To model such situations, we consider Discrete Memoryless Channels (DMCs) in which the input and output alphabet sizes increase along with the block length n. For known channels, we show that reliable communication at the sequence of channel capacities is possible if and only if the minimum between the square logarithms of the input and the output alphabet sizes grows sublinearly with n. For unknown channels with feedback, we show that universal channel coding can be supported if the input-output product alphabet size grows sublinearly with n.
    Citations (3)
    This paper presents both an information lossless coding scheme and a method to evaluate constrained capacity for channels with memory and unknown state. The fundamental idea is to decompose the original channel into a bank of memoryless sub-channels with partially known states, then successively decode these sub-channels. The receiver of each sub-channel consists of an optimal estimator followed by a memoryless channel decoder. The coding scheme translates the codes and decoders designed for memoryless channels with near-capacity performance to channels with memory. The results are applied to both finite state Markov channels and correlated flat fading channels.
    Binary erasure channel
    Dirty paper coding
    Channel state information
    Citations (2)
    The Gilbert-Elliott channel, a varying binary symmetric channel, with crossover probabilities determined by a binary-state Markov process, is treated. In general, such a channel has a memory that depends on the transition probabilities between the states. A method of calculating the capacity of this channel is introduced and applied to several examples, and the question of coding is addressed. In the conventional usage of varying channels, a code suitable for memoryless channels is used in conjunction with an interleaver, with the decoder considering the deinterleaved symbol stream as the output of a derived memoryless channel. The transmission rate is limited by the capacity of this memoryless channel, which is often considerably less than the capacity of the original channel. A decision-feedback decoding algorithm that completely recovers this capacity loss is introduced. It is shown that the performance of a system incorporating such an algorithm is determined by an equivalent genie-aided channel, the capacity of which equals that of the original channel. The calculated random coding exponent of the genie-aided channel indicates a considerable increase in the cutoff rate over that of the conventionally derived memoryless channel.
    Binary symmetric channel
    Binary erasure channel
    Citations (393)
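The Gilbert-Elliott channel described above is straightforward to simulate: a binary symmetric channel whose crossover probability is driven by a two-state Markov chain. A minimal sketch, with illustrative parameter values that are not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def gilbert_elliott(bits, p_gb=0.05, p_bg=0.2, eps_good=0.01, eps_bad=0.3):
    """Pass bits through a two-state Gilbert-Elliott channel.

    The crossover probability is eps_good in the 'good' state and eps_bad
    in the 'bad' state; the state follows a two-state Markov chain
    (good -> bad w.p. p_gb, bad -> good w.p. p_bg per symbol).
    All parameter values here are illustrative assumptions.
    """
    out = np.empty_like(bits)
    state = 0                                     # 0 = good, 1 = bad
    for i, b in enumerate(bits):
        eps = eps_bad if state else eps_good
        out[i] = b ^ int(rng.random() < eps)      # flip the bit w.p. eps
        if rng.random() < (p_bg if state else p_gb):
            state = 1 - state                     # Markov state transition
    return out

x = rng.integers(0, 2, size=100_000)
y = gilbert_elliott(x)
error_rate = float(np.mean(x != y))
# stationary bad-state probability 0.05/(0.05+0.2) = 0.2, so the long-run
# error rate is about 0.8 * 0.01 + 0.2 * 0.3 = 0.068
```

The burstiness of the errors (long quiet runs in the good state, dense errors in the bad state) is exactly the memory that the interleaving scheme in the abstract discards and the decision-feedback decoder recovers.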
    The capacity region of the interference channel in which one transmitter non-causally knows the message of the other, termed the cognitive interference channel, has remained open since its inception in 2005. A number of subtly differing achievable rate regions and outer bounds have been derived, some of which are tight under specific conditions. In this work we present a new unified inner bound for the discrete memoryless cognitive interference channel. We show explicitly how it encompasses all known discrete memoryless achievable rate regions as special cases. The presented achievable region was recently used in deriving the capacity region of the general deterministic cognitive interference channel, and thus also the linear high-SNR deterministic approximation of the Gaussian cognitive interference channel. The high-SNR deterministic approximation was then used to obtain the capacity of the Gaussian cognitive interference channel to within 1.87 bits.
    Zero-forcing precoding
    Co-channel interference
    Citations (16)
    We consider finite state channels where the state of the channel is its previous output. We refer to such channels as POST (Previous Output is the STate) channels. Our focus is on a simple binary POST channel, with binary inputs and outputs, where the state determines whether the channel behaves as a Z or an S channel (of equal capacities). We show that the non-feedback capacity equals the feedback capacity, despite the memory in the channel. The proof of this surprising result is based on showing that the induced output distribution, when maximizing the directed information in the presence of feedback, can also be achieved by an input distribution that is ignorant of the feedback. Indeed, we show that this is a necessary and sufficient condition for the feedback capacity to equal the non-feedback capacity for any finite state channel.
    Binary erasure channel
    Citations (10)
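The abstract above uses the fact that the Z and S channels have equal capacities. This is easy to check numerically: the S channel is the mirror image of the Z channel, and a grid search over the input distribution recovers the same maximum, matching the known closed-form Z-channel capacity. The crossover value eps = 0.3 and the grid resolution are assumptions of this sketch:

```python
import numpy as np

def mi(p1, W):
    """I(X;Y) in bits for P(X=1) = p1 and a 2x2 channel matrix W[x, y]."""
    p = np.array([1.0 - p1, p1])
    q = p @ W                                     # output distribution
    with np.errstate(divide="ignore", invalid="ignore"):
        t = np.where(W > 0, W * np.log2(W / q), 0.0)
    return float(np.sum(p[:, None] * t))

eps = 0.3
Z = np.array([[1.0, 0.0], [eps, 1.0 - eps]])      # Z-channel: only 1 -> 0 errors
S = Z[::-1, ::-1]                                 # S-channel: the mirror image

# brute-force the capacity by a grid search over the input distribution
grid = np.linspace(1e-6, 1.0 - 1e-6, 20001)
C_Z = max(mi(a, Z) for a in grid)
C_S = max(mi(a, S) for a in grid)

# known closed form for the Z-channel with crossover eps
C_closed = float(np.log2(1.0 + (1.0 - eps) * eps ** (eps / (1.0 - eps))))
```

Since mi(a, S) = mi(1 - a, Z) and the grid is symmetric about 1/2, the two maxima coincide, which is the equal-capacities property the POST channel construction relies on.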
    In this paper, we study the zero-error capacity for finite state channel with feedback when channel state information is known to both the transmitter and the receiver. We prove that the zero-error capacity in this case can be obtained through the solution of a dynamic programming problem. Exact answers are also given for certain cases.
    Channel state information
    Finite state
    Citations (0)
    Shannon showed that the capacity of a discrete memoryless channel cannot be increased by noiseless feedback. It has been conjectured that this should be true for a continuous memoryless channel, provided such a channel is appropriately defined. We precisely define such a channel from two mathematically different points of view and rigorously prove that its capacity cannot be increased by feedback.
    Shannon–Hartley theorem
    Binary erasure channel
    Citations (41)
    The capacity region for the discrete memoryless multiple-access channel without time synchronization at the transmitters and receivers is shown to be the same as the known capacity region for the ordinary multiple-access channel. The proof utilizes time sharing of two optimal codes for the ordinary multiple-access channel and uses maximum likelihood decoding over shifts of the hypothesized transmitter words.
    Channel access method
    Citations (89)
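A standard worked example of a multiple-access capacity region (chosen here for illustration, not taken from the paper) is the binary adder channel Y = X1 + X2. The channel is deterministic, so with independent uniform inputs the sum-rate bound reduces to the output entropy:

```python
import itertools
import math
from collections import Counter

# Binary adder MAC: Y = X1 + X2, a deterministic two-user channel.
# With independent uniform binary inputs, I(X1, X2; Y) = H(Y).
counts = Counter(x1 + x2 for x1, x2 in itertools.product([0, 1], repeat=2))
p_y = {y: c / 4 for y, c in counts.items()}       # P(Y=0,1,2) = 1/4, 1/2, 1/4
H_Y = -sum(p * math.log2(p) for p in p_y.values())
# region for this input choice: R1 <= 1, R2 <= 1, R1 + R2 <= H_Y = 1.5 bits
```

The pentagon bounded by R1 <= 1, R2 <= 1, and R1 + R2 <= 1.5 is the kind of region that, per the abstract above, remains achievable even without time synchronization between the transmitters.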
    The authors indicate the dependence between the inputs of the relay channel with one auxiliary random variable, as Cover, El-Gamal and Salehi have done for the multiple-access channel with arbitrarily correlated sources. Then, by considering broadcast and multiple-access sub-channels in the relay channel, the authors describe the essential role of the relay with special Markovity conditions on the auxiliary random variable and the channel inputs and outputs, and unify most of the known capacity theorems into one capacity theorem. This capacity theorem may be applicable to a more general class of relay channels, including at least the relay channels with known capacity.
    Citations (24)