Meta-material sensors have been regarded as a next-generation sensing technology for the battery-free Internet of Things (IoT) owing to their battery-free operation and improved sensing performance. A meta-material sensor functions as a backscatter tag whose reflection coefficient changes with the condition of the sensing target, such as temperature or gas concentration, allowing transceivers to perform sensing by analyzing the signals reflected from the sensor. Simultaneously, the sensors function as environmental scatterers, creating additional signal paths that enhance communication performance. Therefore, meta-material sensors potentially provide a new paradigm of Integrated Sensing and Communication (ISAC) for battery-free IoT systems. In this article, we first propose a Meta-Backscatter system that utilizes meta-material sensors to achieve diverse sensing functionalities and improved communication performance. We begin with an introduction to the meta-material sensor and then elaborate on the Meta-Backscatter system. Subsequently, we present optimization strategies for the meta-material sensors, transmitters, and receivers to strike a balance between sensing and communication. Furthermore, this article provides a case study of the system and examines its feasibility and trade-offs through simulation results. Finally, potential extensions of the system and their related research challenges are discussed.
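To make the sensing principle concrete, the following is a minimal sketch of a simplified narrowband backscatter model: the sensor's reflection coefficient shifts with temperature, and the receiver recovers the temperature from the backscattered component of a known pilot. The linear Gamma(temperature) response, the channel gains, and all names are illustrative assumptions, not the system described in the article.

```python
# Minimal sketch (illustrative assumptions only): a meta-material sensor whose
# reflection coefficient varies with temperature, and a receiver that estimates
# the temperature from the backscattered part of a known pilot signal.
import numpy as np

rng = np.random.default_rng(0)

def reflection_coefficient(temperature_c):
    # Assumed linear sensing response: Gamma shifts with the sensed temperature.
    return 0.3 + 0.005 * (temperature_c - 25.0)

def received_pilot(temperature_c, n_pilots=64, noise_std=0.01):
    x = np.ones(n_pilots)                  # known pilot symbols
    h_direct, h_tag = 1.0, 0.2             # direct-path and tag-path gains (assumed known)
    gamma = reflection_coefficient(temperature_c)
    noise = noise_std * rng.standard_normal(n_pilots)
    return (h_direct + h_tag * gamma) * x + noise

def estimate_temperature(y):
    # Invert the assumed model: average out noise, strip the direct path, undo Gamma(T).
    gamma_hat = (y.mean() - 1.0) / 0.2
    return 25.0 + (gamma_hat - 0.3) / 0.005

y = received_pilot(temperature_c=40.0)
print(f"estimated temperature: {estimate_temperature(y):.1f} C")
```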
In federated learning (FL), data owners "share" their local data in a privacy-preserving manner in order to build a federated model, which, in turn, can be used to generate revenue for the participants. However, business participants in FL might incur significant costs if several competitors join the same federation. Furthermore, training and commercializing the models takes time, resulting in delays before the federation accumulates enough budget to pay back the participants. These issues of cost and the temporary mismatch between contributions and rewards have not been addressed by existing payoff-sharing schemes. In this paper, we propose the Federated Learning Incentivizer (FLI) payoff-sharing scheme. The scheme dynamically divides a given budget in a context-aware manner among the data owners in a federation by jointly maximizing the collective utility while minimizing inequality among the data owners, in terms of both the payoff they receive and the time they wait to receive it. Extensive experimental comparisons with five state-of-the-art payoff-sharing schemes show that FLI is the most attractive to high-quality data owners and achieves the highest expected revenue for a data federation.
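The following toy sketch illustrates the general idea of dividing each round's budget so that payoffs track contributions while penalizing inequality and long waits. The priority rule, weights, and function names are illustrative assumptions and are not FLI's actual formulation.

```python
# Toy sketch (illustrative only, not FLI's exact scheme): divide the budget available in
# a round among data owners, favoring those with large contributions and those who have
# been owed payoffs for many rounds, to reduce payoff and waiting-time inequality.
import numpy as np

def divide_budget(contributions, owed, waited_rounds, budget, fairness_weight=0.5):
    """Return per-owner payouts for this round.

    contributions : each owner's (assumed known) contribution score
    owed          : contribution value not yet paid back to each owner
    waited_rounds : rounds each owner has waited since contributing
    """
    contributions = np.asarray(contributions, dtype=float)
    owed = np.asarray(owed, dtype=float)
    waited = np.asarray(waited_rounds, dtype=float)

    # Priority blends contribution size with accumulated "debt" and waiting time,
    # so owners who contributed early but were not yet paid are caught up first.
    priority = contributions + fairness_weight * owed * (1.0 + waited)
    weights = priority / priority.sum()
    return budget * weights

# Example: three owners, a small budget arriving after model commercialization.
payouts = divide_budget(contributions=[5.0, 2.0, 1.0],
                        owed=[3.0, 2.0, 1.0],
                        waited_rounds=[2, 1, 0],
                        budget=4.0)
print(np.round(payouts, 2))
```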
This paper studies robust Bayesian persuasion of a privately informed receiver when the sender has only limited knowledge about the receiver's private information. The sender is ambiguity averse and has a maxmin expected utility function. We show that when the sender faces full ambiguity, i.e., has no knowledge about the receiver's private information, full information disclosure is optimal; when the sender faces local ambiguity, i.e., believes that the receiver's private beliefs are all close to the common prior, then as the sender's uncertainty about the receiver's private information vanishes, the sender can do almost as well as when the receiver has no private information. We also fully characterize the sender's robust information disclosure rule under various kinds of ambiguity in an example with two states and two actions.
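A schematic statement of the sender's maxmin problem may help fix ideas; the notation below is ours and is only a sketch of the robust-persuasion setup described above, not the paper's exact formulation.

```latex
% Schematic maxmin objective (our notation): the sender chooses a signal \pi knowing only
% that the receiver's private belief lies in an ambiguity set \Delta, and evaluates \pi by
% its worst case over \Delta.
\[
  \max_{\pi \in \Pi} \; \min_{\mu \in \Delta} \;
  \mathbb{E}_{\pi,\mu}\!\left[ u_S\big(a^*(\mu, s),\, \omega\big) \right],
\]
% where \omega is the state, s the realized signal, a^*(\mu, s) the action of a receiver
% with private belief \mu after observing s, and u_S the sender's payoff. "Full ambiguity"
% corresponds to \Delta containing all beliefs; "local ambiguity" to \Delta shrinking
% toward the common prior.
```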
We propose a theory of reputation to explain how investors rationally respond to mutual fund star ratings. A fund's performance is determined by its information advantage, which can be acquired but decays stochastically. Investors form beliefs about whether the fund is informed based on its past performance. We refer to such beliefs as fund reputation, which determines fund flows. Although performance changes continuously, equilibrium fund reputation may take only discrete values and thus can be labeled with stars. Star upgrades therefore imply reputation jumps, leading to discrete increases in flows and expected performance, even though stars do not provide new information.
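As a stylized illustration of the belief-formation step, the discrete-time Bayes update below shows how reputation, as the posterior probability that the fund is informed, responds to realized performance; the paper's model is richer (continuous time, stochastic decay of the information advantage), and the notation here is ours.

```latex
% Stylized discrete-time illustration (our notation, not the paper's model): reputation is
% the posterior probability that the fund is informed, updated from realized performance r_t.
\[
  \pi_{t+1}
  \;=\;
  \frac{\pi_t \, f_I(r_t)}
       {\pi_t \, f_I(r_t) + (1 - \pi_t)\, f_U(r_t)},
\]
% where f_I and f_U are the performance densities of informed and uninformed funds.
% Flows increase in reputation, and star labels correspond to the discrete values that
% equilibrium reputation takes.
```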
Self-supervised learning (SSL) methods based on joint-embedding architectures have proven remarkably effective at capturing semantically rich representations with strong clustering properties, even in the absence of label supervision. Despite this, few of them have explored leveraging these untapped properties to improve themselves. In this paper, we provide evidence, through various metrics, that the encoder's output $encoding$ exhibits superior and more stable clustering properties compared to other components. Building on this insight, we propose a novel positive-feedback SSL method, termed Representation Soft Assignment (ReSA), which leverages the model's clustering properties to promote learning in a self-guided manner. Extensive experiments on standard SSL benchmarks show that models pretrained with ReSA outperform other state-of-the-art SSL methods by a significant margin. Finally, we analyze how ReSA facilitates better clustering properties, demonstrating that it effectively enhances clustering performance at both fine-grained and coarse-grained levels, shaping representations that are inherently more structured and semantically meaningful.
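The sketch below illustrates one way a soft-assignment, self-guided objective of this kind can be written: soft cluster assignments computed from one view's encodings serve as targets for the other view. The temperatures, prototype set, and stop-gradient placement are illustrative assumptions, not ReSA's exact objective.

```python
# Schematic sketch of a soft-assignment self-guidance loss (illustrative; the actual ReSA
# objective may differ). Sharper assignments from one view act as self-generated targets
# that the other view is trained to predict.
import torch
import torch.nn.functional as F

def soft_assign(encodings, prototypes, temperature):
    # Cosine-similarity logits over prototypes, turned into a soft cluster assignment.
    logits = F.normalize(encodings, dim=1) @ F.normalize(prototypes, dim=1).T
    return F.softmax(logits / temperature, dim=1)

def self_guided_loss(enc_view1, enc_view2, prototypes, t_target=0.05, t_pred=0.1):
    # Targets come from view 1 with a lower temperature and no gradient; view 2 predicts them
    # via a cross-entropy between the two soft assignments.
    with torch.no_grad():
        targets = soft_assign(enc_view1, prototypes, t_target)
    preds = soft_assign(enc_view2, prototypes, t_pred)
    return -(targets * preds.clamp_min(1e-8).log()).sum(dim=1).mean()

# Example with random tensors standing in for encoder outputs of two augmented views.
enc1, enc2 = torch.randn(256, 128), torch.randn(256, 128)
prototypes = torch.randn(1024, 128, requires_grad=True)
loss = self_guided_loss(enc1, enc2, prototypes)
loss.backward()
```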
We study the dominant separating equilibrium that maximizes the sender's payoff in quadratic signaling games, relaxing the common and restrictive belief-monotonicity assumption. We introduce a game characteristic called the discriminant and show that a linear incentive-compatible separating strategy exists if and only if the game has a non-negative discriminant. In such games, there exists a unique optimal incentive-compatible strategy that is continuous and differentiable, and we derive necessary and sufficient conditions for this strategy to be linear; we also fully characterize the dominant separating perfect Bayesian equilibrium and establish its existence and uniqueness. We apply these results to confirm the dominance of linear separating equilibria in some classic examples, and show that, in some other examples, there exist previously unknown non-linear dominant equilibria.
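For readers unfamiliar with the setup, the sketch below records a generic quadratic signaling game and the separating incentive-compatibility condition; the payoff parameterization and notation are our own illustrative choices, and the paper's discriminant is a condition on such quadratic coefficients rather than the specific expression given here.

```latex
% Schematic quadratic signaling game (our notation, illustrative parameterization): a sender
% of type \theta sends signal s, the receiver takes action a, and payoffs are quadratic, e.g.
\[
  u_S(a, s, \theta) = -(a - \theta - b)^2 - c\,(s - \theta)^2,
  \qquad
  u_R(a, \theta) = -(a - \theta)^2 .
\]
% In a separating equilibrium the receiver inverts the strategy \sigma and best-responds
% with action \hat\theta = \sigma^{-1}(s); incentive compatibility requires each type to
% prefer its own signal:
\[
  \theta \in \arg\max_{\hat\theta} \; u_S\big(\hat\theta,\, \sigma(\hat\theta),\, \theta\big).
\]
% The discriminant characterizes when a linear \sigma(\theta) = \alpha\theta + \beta can
% satisfy this condition.
```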
A desirable objective in self-supervised learning (SSL) is to avoid feature collapse. Whitening loss guarantees collapse avoidance by minimizing the distance between embeddings of positive pairs under the condition that the embeddings from the different views are whitened. In this paper, we propose a framework with an informative indicator for analyzing whitening loss, which provides a clue to demystifying several interesting phenomena as well as a pivot connecting it to other SSL methods. We reveal that batch whitening (BW) based methods do not impose whitening constraints on the embedding; they only require the embedding to be full-rank, and this full-rank constraint is also sufficient to avoid dimensional collapse. Based on our analysis, we propose channel whitening with random group partition (CW-RGP), which exploits the advantages of BW-based methods in preventing collapse while avoiding their disadvantage of requiring a large batch size. Experimental results on ImageNet classification and COCO object detection show that the proposed CW-RGP has promising potential for learning good representations. The code is available at https://github.com/winci-ai/CW-RGP.
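The rough sketch below shows the general whitening-loss recipe referred to above: embeddings are whitened within randomly partitioned groups of channels and positive pairs are then aligned. The grouping scheme, whitening axis, and hyperparameters here are illustrative assumptions rather than the CW-RGP algorithm; the linked repository contains the authors' implementation.

```python
# Rough sketch of a whitening loss (generic recipe under illustrative assumptions, not the
# exact CW-RGP algorithm): whiten each randomly chosen channel group per view, then minimize
# the MSE between the whitened embeddings of the two views.
import torch

def zca_whiten(z, eps=1e-4):
    # Center over the batch and apply ZCA whitening so the group's covariance is identity.
    z = z - z.mean(dim=0, keepdim=True)
    cov = z.T @ z / (z.shape[0] - 1)
    eigval, eigvec = torch.linalg.eigh(cov + eps * torch.eye(z.shape[1]))
    w = eigvec @ torch.diag(eigval.clamp_min(eps).rsqrt()) @ eigvec.T
    return z @ w

def whitening_loss(z1, z2, num_groups=4):
    # Randomly partition the feature channels into groups and align whitened positive pairs.
    perm = torch.randperm(z1.shape[1])
    loss = 0.0
    for group in perm.chunk(num_groups):
        loss = loss + ((zca_whiten(z1[:, group]) - zca_whiten(z2[:, group])) ** 2).mean()
    return loss / num_groups

# Example with random tensors standing in for the embeddings of two augmented views.
z1, z2 = torch.randn(256, 64), torch.randn(256, 64)
print(whitening_loss(z1, z2))
```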