Energy Consumption Minimization With Throughput Heterogeneity in Wireless-Powered Body Area Networks
Abstract:
In this article, we focus on a wireless-powered body area network in which the simultaneous wireless information and power transfer (SWIPT) technique is adopted. We consider two scenarios based on whether the sensor nodes (SNs) are equipped with batteries. For the first time, the energy consumption minimization with throughput heterogeneity (ECM-TH) problem is addressed for both scenarios. For the battery-free scenario, a low-complexity time allocation scheme is proposed; it solves the ECM-TH problem with a hybrid of gradient descent and bisection search. Consequently, compared with the interior-point method, our scheme has lower computational complexity for the same network energy consumption. For the battery-assisted scenario, the nonconvex ECM-TH problem is first transformed into a convex optimization problem by introducing auxiliary variables; a joint time and power allocation scheme based on the Lagrange dual subgradient method is then proposed to solve it. Compared with the battery-free scenario, both energy consumption and outage probability are reduced in the battery-assisted scenario. Moreover, we address a special case wherein the feasible set of the above ECM-TH problems may be empty owing to poor channel conditions or high throughput requirements of the SNs.
Keywords:
Subgradient method
Bisection method
Wireless Power Transfer
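The battery-free scheme above pairs gradient descent with a bisection search over the time-allocation variables. As a hedged illustration of the bisection component only, here is a minimal Python root-finder of the kind such schemes use; the function `g` standing in for a stationarity condition is a placeholder, not the paper's ECM-TH objective.

```python
# Minimal bisection search: finds the root of a sign-changing function on [lo, hi].

def bisect(f, lo, hi, tol=1e-9):
    """Return x in [lo, hi] with f(x) ~ 0, assuming f(lo) and f(hi) differ in sign."""
    assert f(lo) * f(hi) <= 0, "root must be bracketed"
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

if __name__ == "__main__":
    # Placeholder stationarity condition for a scalar time share t in (0, 1);
    # NOT the paper's actual ECM-TH objective.
    g = lambda t: 2.0 * t - 1.0 / (1.0 + t)
    print(f"optimal t ~ {bisect(g, 1e-6, 1.0 - 1e-6):.6f}")
```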
The subgradient extragradient method for solving the variational inequality (VI) problem, introduced by Censor et al. \cite{CGR}, replaces the second projection of the extragradient method, onto the feasible set of the VI, with a subgradient projection onto a constructible half-space. Since the method was introduced, many authors have proposed extensions and modifications with applications to various problems. In this paper, we introduce a modified subgradient extragradient method that improves the step size of its second step. Convergence of the proposed method is proved under standard and mild conditions, and preliminary numerical experiments illustrate the performance and advantage of this new subgradient extragradient variant.
Subgradient method
Projection method
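To make the construction in the abstract above concrete, here is a minimal sketch of the Censor-Gibali-Reich subgradient extragradient step: the first projection is onto the feasible set C, while the second is the closed-form projection onto a half-space. The toy affine operator, feasible set, and step size tau are illustrative assumptions, not from the cited work.

```python
import numpy as np

def subgradient_extragradient(F, proj_C, x0, tau, iters=200):
    """Censor-Gibali-Reich subgradient extragradient method for VI(F, C).
    The second projection of the extragradient method is replaced by a
    closed-form projection onto the half-space T_k supported at y."""
    x = x0.astype(float)
    for _ in range(iters):
        y = proj_C(x - tau * F(x))
        a = (x - tau * F(x)) - y        # outward normal of the half-space T_k at y
        z = x - tau * F(y)
        s = a @ (z - y)
        if s > 0 and a @ a > 0:         # project z onto T_k (closed form)
            z = z - (s / (a @ a)) * a
        x = z
    return x

if __name__ == "__main__":
    # Toy VI: F(x) = M x + q with M positive definite, C = nonnegative orthant.
    M = np.array([[2.0, 0.5], [0.5, 1.0]])
    q = np.array([-1.0, 1.0])
    F = lambda x: M @ x + q
    proj_C = lambda x: np.maximum(x, 0.0)
    print(subgradient_extragradient(F, proj_C, np.zeros(2), tau=0.2))  # ~ (0.5, 0)
```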
The distributed convex optimization problem is studied in this paper for any fixed, connected network with general constraints. To solve this optimization problem, a new type of continuous-time distributed subgradient optimization algorithm is proposed based on the Karush-Kuhn-Tucker conditions. Using tools from nonsmooth analysis and set-valued function theory, it is proved that the distributed convex optimization problem is solved on a network of agents equipped with the designed algorithm. When the objective function is convex but not strictly convex, the states of the agents associated with the optimal variables converge to an optimal solution of the problem; when the objective function is strictly convex, they converge to the unique optimal solution. Finally, simulations are performed to illustrate the theoretical analysis.
Subgradient method
Proper convex function
Conic optimization
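The paper above analyzes continuous-time dynamics built from the KKT conditions; as a rough discrete-time analogue only, the following sketch shows the consensus-plus-subgradient pattern that distributed subgradient methods share. The mixing matrix `W` and the local objectives are toy assumptions, not the paper's algorithm.

```python
import numpy as np

# Discrete-time analogue of distributed subgradient optimization: each agent i
# holds a local estimate x_i, averages with its neighbours, then steps along a
# subgradient of its private objective f_i. A simplified sketch only.

def distributed_subgradient(subgrads, W, x0, steps=1000, alpha0=2.0):
    x = x0.copy()                       # shape (n_agents, dim)
    for k in range(1, steps + 1):
        alpha = alpha0 / k              # diminishing step size
        x = W @ x                       # consensus (mixing) step
        for i, g in enumerate(subgrads):
            x[i] -= alpha * g(x[i])     # local subgradient step
    return x

if __name__ == "__main__":
    # Three agents, minimizing sum_i |x - c_i| (optimum: the median of c).
    c = np.array([1.0, 3.0, 10.0])
    subgrads = [lambda x, ci=ci: np.sign(x - ci) for ci in c]
    W = np.array([[0.5, 0.25, 0.25],
                  [0.25, 0.5, 0.25],
                  [0.25, 0.25, 0.5]])   # doubly stochastic mixing matrix
    x = distributed_subgradient(subgrads, W, np.zeros((3, 1)))
    print(x.ravel())                    # estimates cluster near the median, 3.0
```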
A robust two-stage stochastic convex programming model is proposed in this paper, in which the second stage is a quadratic program. A formula is obtained for the subdifferential of the recourse function under the assumption that linear partial information is observed for the probability distribution. A subgradient algorithm based on deflected subgradients and exponentially decaying step sizes is proposed to solve the robust stochastic convex programming problem. Convergence of the algorithm is proved, and its effectiveness is demonstrated by numerical examples.
Subgradient method
Second-order cone programming
Robust Optimization
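A hedged sketch of the update style the abstract describes, deflected subgradients with exponentially decaying step sizes, on a toy nonsmooth objective; the recourse-function subdifferential of the actual two-stage model is replaced here by a simple sign subgradient.

```python
import numpy as np

def deflected_subgradient(subgrad, x0, beta=0.5, alpha0=1.0, rho=0.97, iters=300):
    """Deflected subgradient descent: each direction mixes the fresh
    subgradient with the previous direction; steps decay exponentially."""
    x = x0.copy()
    d = np.zeros_like(x)
    for k in range(iters):
        g = subgrad(x)
        d = g + beta * d                # deflection: reuse the previous direction
        x = x - (alpha0 * rho**k) * d   # exponential-decay step size
    return x

if __name__ == "__main__":
    # Toy nonsmooth objective: f(x) = ||x - t||_1 with minimizer t = (1, -2).
    t = np.array([1.0, -2.0])
    print(deflected_subgradient(lambda x: np.sign(x - t), np.zeros(2)))
```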
Mathematical programming approaches, such as Lagrangian relaxation, have the advantage of computational efficiency when the optimization problems are decomposable. Lagrangian relaxation belongs to the class of primal-dual algorithms, and subgradient-based methods can be used to optimize the dual functions it produces. In this paper, three subgradient-based methods, the subgradient (SG), surrogate subgradient (SSG), and surrogate modified subgradient (SMSG) methods, are applied to a demonstrative nonlinear programming problem to assess their optimality performance and demonstrate their applicability to realistic problems.
Subgradient method
Lagrangian relaxation
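For orientation, here is the plain subgradient (SG) baseline from the comparison above, applied to a toy Lagrangian dual whose inner minimization is available in closed form; the problem instance is an assumption for illustration, not the paper's demonstrative example.

```python
import numpy as np

# SG ascent on the Lagrangian dual of: minimize x1^2 + x2^2 s.t. x1 + x2 >= 2.
# The Lagrangian minimizer is x_i = lam / 2, so each dual subgradient is cheap.

def dual_subgradient(steps=200):
    lam = 0.0
    for k in range(1, steps + 1):
        x = np.array([lam / 2.0, lam / 2.0])    # argmin_x of the Lagrangian
        g = 2.0 - (x[0] + x[1])                 # subgradient of the dual at lam
        lam = max(0.0, lam + (1.0 / k) * g)     # projected dual ascent, lam >= 0
    return lam, np.array([lam / 2.0, lam / 2.0])

if __name__ == "__main__":
    lam, x = dual_subgradient()
    print(f"lambda ~ {lam:.4f}, x ~ {x}")       # expect lambda ~ 2, x ~ (1, 1)
```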
Recently, some specific classes of nonsmooth and non-Lipschitz convex optimization problems were identified by Yu. Nesterov and H. Lu. We consider convex programming problems with similar smoothness conditions on the objective function and functional constraints. We introduce a new concept of an inexact model and propose analogues of switching subgradient schemes for convex programming problems with a relatively Lipschitz-continuous objective function and functional constraints. A class of online convex optimization problems is also considered. The proposed methods are optimal in the class of optimization problems with relatively Lipschitz-continuous objectives and functional constraints.
Subgradient method
Smoothness
Quasiconvex function
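A minimal sketch of a classical switching subgradient scheme of the kind the paper generalizes: take an objective step when the constraint holds to tolerance and a constraint step otherwise, then report an average of the productive iterates. The toy problem and the Euclidean (rather than relative) Lipschitz setting are simplifying assumptions.

```python
import numpy as np

def switching_subgradient(f_sub, g, g_sub, x0, eps=1e-3, iters=3000):
    """Polyak-style switching scheme: objective step when g(x) <= eps
    (productive), constraint step otherwise (non-productive)."""
    x = x0.copy()
    productive = []                     # iterates where the constraint held
    for k in range(1, iters + 1):
        if g(x) <= eps:
            productive.append(x.copy())
            d = f_sub(x)
        else:
            d = g_sub(x)
        x = x - d / (np.linalg.norm(d) * np.sqrt(k) + 1e-12)  # normalized 1/sqrt(k) step
    return np.mean(productive, axis=0)  # average of productive iterates

if __name__ == "__main__":
    # Toy: minimize |x1| + |x2| subject to x1 + x2 >= 1, i.e. g(x) = 1 - x1 - x2 <= 0.
    f_sub = lambda x: np.sign(x)
    g = lambda x: 1.0 - x[0] - x[1]
    g_sub = lambda x: np.array([-1.0, -1.0])
    print(switching_subgradient(f_sub, g, g_sub, np.zeros(2)))  # near the segment (1,0)-(0,1)
```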
The standard subgradient method is one of the most widely adopted algorithms for solving the dual problem arising in scheduling based on Lagrangian relaxation. In that method, all subproblems of the relaxed problem must be solved in order to obtain a subgradient direction. Here the incremental subgradient method is applied: the dual function is decomposed into a sum of component functions, and the subgradient iteration is performed incrementally along the subgradient of each component function, obtained by solving the corresponding subproblem. Simulation results show that the incremental subgradient method yields a significant improvement in computational efficiency over the standard subgradient method.
Subgradient method
Lagrangian relaxation
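The contrast drawn in the abstract, one cheap component step at a time instead of solving every subproblem per iteration, can be sketched as follows; the sum-of-absolute-values objective is a stand-in for the decomposed dual function.

```python
import numpy as np

def incremental_subgradient(component_subgrads, x0, steps=200, alpha0=1.0):
    """Incremental subgradient method: cycle through the component functions,
    stepping along each component's subgradient in turn."""
    x = x0.copy()
    for k in range(1, steps + 1):
        alpha = alpha0 / k                   # diminishing step size
        for g in component_subgrads:         # one cheap step per component
            x = x - alpha * g(x)
    return x

if __name__ == "__main__":
    # Toy sum of components: f(x) = sum_i |x - c_i|, minimized at the median.
    c = [0.0, 2.0, 3.0, 7.0, 8.0]
    subgrads = [lambda x, ci=ci: np.sign(x - ci) for ci in c]
    print(incremental_subgradient(subgrads, np.array([10.0])))  # near 3.0
```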
We study the convergence of the projected subgradient method for constrained convex optimization in a Hilbert space. Our goal is to obtain an ε-approximate solution of the problem in the presence of computational errors, where ε is a given positive number. The results that we obtain are important in practice because computations always introduce numerical errors.
Subgradient method
Proper convex function
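A finite-dimensional sketch of the setting above: both the subgradient and the projection are perturbed by a fixed error level, and the projected subgradient method is run anyway. The noise model and the toy problem are assumptions; the paper's analysis is in a general Hilbert space.

```python
import numpy as np

def noisy_projected_subgradient(subgrad, proj, x0, steps=2000, err=1e-3, seed=0):
    """Projected subgradient method under computational errors: both the
    subgradient oracle and the projection are inexact at level err."""
    rng = np.random.default_rng(seed)
    x = x0.copy()
    for k in range(1, steps + 1):
        g = subgrad(x) + err * rng.standard_normal(x.shape)             # inexact subgradient
        x = proj(x - g / np.sqrt(k)) + err * rng.standard_normal(x.shape)  # inexact projection
    return x

if __name__ == "__main__":
    # Toy: minimize ||x - t||_1 over the unit ball, with t outside the ball.
    t = np.array([2.0, 0.0])
    proj = lambda x: x / max(1.0, np.linalg.norm(x))
    print(noisy_projected_subgradient(lambda x: np.sign(x - t), proj, np.zeros(2)))  # ~ (1, 0)
```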
To address the zigzagging that occurs when nondifferentiable Lagrangian dual problems are solved by the subgradient algorithm, a subgradient algorithm based on fuzzy theory is presented. In this method, the search direction is obtained by combining all subgradient directions generated during the iteration process, weighted according to a simple membership function. The resulting direction uses the historical information appropriately and thereby significantly reduces zigzagging without much additional computation. The convergence of the algorithm is proved. The method is then applied to the traveling salesman problem, and the results show a significant improvement over the traditional subgradient algorithm.
Subgradient method
Lagrangian relaxation
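As a hedged reading of the direction rule described above, the following sketch blends all past subgradients with membership weights that decay geometrically with age, which damps zigzagging on an ill-conditioned toy objective; the geometric membership function is an illustrative choice, not necessarily the paper's.

```python
import numpy as np

def fuzzy_subgradient(subgrad, x0, steps=300, alpha0=1.0, rho=0.6):
    """Subgradient descent whose direction is a membership-weighted blend of
    all history subgradients (weight rho^age for the subgradient of age steps)."""
    x = x0.copy()
    history = []
    for k in range(1, steps + 1):
        history.append(subgrad(x))
        w = np.array([rho ** (len(history) - 1 - j) for j in range(len(history))])
        d = sum(wi * gi for wi, gi in zip(w, history)) / w.sum()  # blended direction
        x = x - (alpha0 / k) * d
    return x

if __name__ == "__main__":
    # Zigzag-prone toy: f(x) = |x1| + 10 * |x2|, minimized at the origin.
    subgrad = lambda x: np.array([np.sign(x[0]), 10.0 * np.sign(x[1])])
    print(fuzzy_subgradient(subgrad, np.array([3.0, 3.0])))  # ends near (0, 0)
```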
This paper considers a general convex constrained problem setting in which the functions are assumed to be neither differentiable nor Lipschitz continuous. Our motivation is to find a simple first-order method for solving a wide range of convex optimization problems with minimal requirements. We study the method of weighted dual averages (Nesterov, 2009) in this setting and prove that it is an optimal method.
Subgradient method
Quasiconvex function
Conic optimization
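A minimal weighted-dual-averages sketch in the spirit of Nesterov (2009): accumulate weighted subgradients in a dual vector z, then map back through a prox-function, here d(x) = ||x||^2 / 2 over the unit ball, so the argmin step is a scaled projection. Unit weights and the toy problem are simplifying assumptions, not the general setting of the paper.

```python
import numpy as np

def dual_averaging(subgrad, proj, dim, steps=3000, gamma=1.0):
    """Weighted dual averages with unit weights and prox d(x) = ||x||^2 / 2:
    x_{k+1} = argmin_{x in C} <z_k, x> + beta_k * d(x) = proj_C(-z_k / beta_k)."""
    z = np.zeros(dim)
    x = proj(np.zeros(dim))
    xs = np.zeros(dim)
    for k in range(1, steps + 1):
        z += subgrad(x)                 # weight lambda_k = 1
        beta = gamma * np.sqrt(k)
        x = proj(-z / beta)             # closed-form argmin for this prox and C
        xs += x
    return xs / steps                   # running average of the iterates

if __name__ == "__main__":
    # Toy: minimize ||x - t||_1 over the unit ball, t = (2, 0).
    t = np.array([2.0, 0.0])
    proj = lambda x: x / max(1.0, np.linalg.norm(x))
    print(dual_averaging(lambda x: np.sign(x - t), proj, dim=2))  # ~ (1, 0)
```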