The large-scale properties of our Universe are now known with a precision undreamt of a generation ago. Within the simple standard cosmological model, only six basic parameters are required. The usual parameter set includes quantities most directly probed by the cosmic microwave background, but these quantities are somewhat esoteric in nature. However, many more numbers can be derived that quantify various aspects of our Universe. Using constraints from the Planck satellite, in combination with other data sets, we explore several such quantities, highlighting some specific examples.
The Planck collaboration has measured the temperature and polarization of the cosmic microwave background well enough to determine the locations of eight peaks in the temperature (TT) power spectrum, five peaks in the polarization (EE) power spectrum, and twelve extrema in the cross (TE) power spectrum. The relative locations of these extrema give a striking, and beautiful, demonstration of what we expect from acoustic oscillations in the plasma, e.g., that EE peaks fall halfway between TT peaks. We expect this because the temperature map is predominantly sourced by temperature variations on the last-scattering surface, while the polarization map is predominantly sourced by gradients in the velocity field, and the harmonic oscillations have temperature and velocity 90 degrees out of phase. However, there are large differences between the extrema locations expected from simple analytic models and those obtained from full numerical calculations. Here we quantitatively trace the origin of these differences to gravitational potential transients, neutrino free-streaming, the breakdown of tight coupling, the shape of the primordial power spectrum, details of the geometric projection from three to two dimensions, and the thickness of the last-scattering surface. We also compare the peak locations determined from Planck measurements to expectations under the $\Lambda$CDM model. Taking into account how the peak locations were determined, we find them to be in agreement.
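For orientation, here is a minimal tight-coupling sketch of the analytic expectation referred to above, ignoring the driving, free-streaming, projection, and finite-thickness effects that the abstract lists as the origin of the differences. Writing $\Theta$ for the photon temperature monopole, $v_{\rm b}$ for the baryon velocity, $r_{\rm s}$ for the sound horizon, and $D_*$ for the comoving distance to last scattering:
\begin{align}
  \Theta(k) &\propto \cos(k r_{\rm s}), &
  v_{\rm b}(k) &\propto \sin(k r_{\rm s}), &
  \ell_A &\equiv \pi D_*/r_{\rm s}, \\
  \ell^{TT}_m &\simeq m\,\ell_A, &
  \ell^{EE}_m &\simeq \left(m+\tfrac{1}{2}\right)\ell_A, &
  \ell^{TE}_n &\simeq \left(\tfrac{n}{2}+\tfrac{1}{4}\right)\ell_A .
\end{align}
The physical effects listed above shift each extremum away from these idealized multipoles, and it is precisely this gap between analytic and numerical expectations that is being quantified.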
It is possible that the fundamental constants may not be constant at all. There is a generally accepted view that one can only meaningfully discuss variations of dimensionless quantities, such as the fine-structure constant $\alpha_{\rm e}\equiv e^2/4\pi\epsilon_0\hbar c$. However, constraints on the strength of gravity tend to focus on $G$ itself, which is problematic. We stress that $G$ needs to be multiplied by the square of a mass, so that one should, for example, be constraining $\alpha_{\rm g}\equiv G m_{\rm p}^2/\hbar c$, where $m_{\rm p}$ is the proton mass. Failure to focus on such dimensionless quantities makes the physical meaning of many published constraints on the variation of $G$ difficult to interpret. A thought experiment involving communication with observers in another universe about the values of physical constants may be useful for distinguishing what is genuinely measurable from what is merely part of our particular system of units.
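As a minimal numerical sketch of the point being made, the two dimensionless couplings can be evaluated from approximate CODATA SI values; the ratios carry no units, so their values are meaningful independently of the unit system, unlike a bare value of $G$.
```python
# Minimal sketch: the dimensionless couplings discussed above, evaluated
# from (approximate) CODATA SI values.  The numerical values of the ratios
# do not depend on the choice of units, unlike G itself.
from math import pi

hbar = 1.054571817e-34   # J s
c = 299792458.0          # m / s
e = 1.602176634e-19      # C
eps0 = 8.8541878128e-12  # F / m
G = 6.67430e-11          # m^3 / (kg s^2)
m_p = 1.67262192369e-27  # kg

alpha_e = e**2 / (4 * pi * eps0 * hbar * c)   # fine-structure constant, ~1/137
alpha_g = G * m_p**2 / (hbar * c)             # gravitational analogue, ~5.9e-39

print(f"alpha_e = {alpha_e:.6e}  (1/alpha_e = {1/alpha_e:.3f})")
print(f"alpha_g = {alpha_g:.6e}")
```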
The increasing precision of cosmological data provides us with an opportunity to test general relativity (GR) on the largest accessible scales. Parameterizing modified gravity models facilitates systematic testing of the predictions of GR and provides a framework for detecting possible deviations from it. Several different parameterizations have already been suggested, some linked to classifications of theories and others more empirically motivated. Here we describe a new approach that casts modifications to gravity in terms of two free functions of time and scale, which are directly linked to the field equations but also easy to confront with observational data. We compare our approach with other existing methods of parameterizing modified gravity, specifically the parameterized post-Friedmann approach and the older method using the parameter set $\{\mu,\gamma\}$. We explain the connection between our parameters and the physics that is most important for generating cosmic microwave background anisotropies. Some qualitative features of this new parameterization, and hence of modifications to the gravitational equations of motion, are illustrated in a toy model in which the two functions are simply assumed to be constant parameters.
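For context, one common convention for the older $\{\mu,\gamma\}$ parameterization mentioned above is, schematically (sign conventions and the choice of which potential appears differ between papers; $\Psi$ and $\Phi$ are the two scalar metric potentials and $\bar{\rho}\Delta$ the comoving density perturbation):
\begin{align}
  k^2 \Psi &= -4\pi G\, a^2\, \mu(a,k)\, \bar{\rho}\, \Delta, \\
  \Phi &= \gamma(a,k)\, \Psi,
\end{align}
with $\mu=\gamma=1$ recovering GR. The two free functions of time and scale introduced here play an analogous role, but are attached directly to the field equations.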
Some of the most obviously correct physical theories - namely string theory and the multiverse - make no testable predictions, leading many to question whether we should accept something as scientific if it is not refutable. However, some far-thinking physicists have proposed instead that we should give up on the notion of Falsifiability itself. We endorse this suggestion, but think it does not go nearly far enough. We believe that we should also dispense with other outdated ideas, such as Fidelity, Frugality, Factuality and other F words. And we quote a lot of famous people to support this view.
With the aim of making practical quantum chemistry simulation feasible on near-term quantum devices, we envision a hybrid quantum--classical framework for leveraging problem decomposition (PD) techniques in quantum chemistry. Specifically, we use PD techniques to decompose a target molecular system into smaller subsystems that require fewer computational resources. In our framework there are two levels of hybridization. At the first level, we use a classical algorithm to decompose a target molecule into subsystems, and a quantum algorithm to simulate the quantum nature of those subsystems. The second level lies within the quantum algorithm itself: we consider a quantum--classical variational algorithm that iterates between estimating expectation values on a quantum device and optimizing parameters on a classical device. We investigate three popular PD techniques for our hybrid approach: the fragment molecular-orbital (FMO) method, the divide-and-conquer (DC) technique, and density matrix embedding theory (DMET). We examine the efficacy of these techniques in correctly differentiating conformations of simple alkane molecules, considering the ratio between the number of qubits required after PD and for the full system, the mean absolute deviation, the Pearson correlation coefficient, and Spearman's rank correlation coefficient. Sampling error is introduced when expectation values are measured on the quantum device, so we also study how this error affects the predictive performance of the PD techniques. The present study is a first step towards enabling quantum chemistry simulations, on near-term quantum hardware, at scales close to the size of molecules relevant to industry.
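A hedged sketch of the two-level hybrid control flow described above is given below. The fragment splitter and the "expectation value" are toy placeholders, not the FMO/DC/DMET methods or a real quantum backend; only the structure mirrors the framework: classical decomposition first, then a variational quantum--classical loop per fragment.
```python
# Sketch of the two-level hybrid loop: classical decomposition (level 1),
# then a variational quantum--classical iteration per fragment (level 2).
# Both the splitter and the "expectation" below are illustrative stand-ins.
import numpy as np
from scipy.optimize import minimize


def decompose(molecule, fragment_size=2):
    """Level 1 (classical): split a chain-like 'molecule' into fragments."""
    atoms = molecule.split("-")
    return ["-".join(atoms[i:i + fragment_size])
            for i in range(0, len(atoms), fragment_size)]


def expectation(params, fragment):
    """Level 2 stand-in: a classical surrogate for the energy expectation
    value that a quantum device would estimate for this fragment's ansatz."""
    seed = sum(ord(ch) for ch in fragment)
    rng = np.random.default_rng(seed)
    h = rng.normal(size=len(params))          # toy 'Hamiltonian' coefficients
    return float(np.sum(h * np.cos(params)))  # toy parameterized energy


def vqe_fragment(fragment, n_params=4):
    """Variational loop: quantum expectation + classical parameter update."""
    x0 = np.zeros(n_params)
    result = minimize(expectation, x0, args=(fragment,), method="COBYLA")
    return result.fun


if __name__ == "__main__":
    molecule = "C-C-C-C-C-C"          # schematic alkane backbone
    fragments = decompose(molecule)
    energies = [vqe_fragment(f) for f in fragments]
    print("fragments:", fragments)
    print("total (toy) energy:", sum(energies))
```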
Household electricity consumption is difficult to model with conventional methods when environmental consciousness is taken into account. This paper presents a flexible framework based on an artificial neural network (ANN), specifically a multi-layer perceptron (MLP), together with conventional regression and design of experiments (DOE), for estimating household electricity consumption while incorporating environmental consciousness. Environmental consciousness is evaluated through a standard questionnaire. The DOE component is based on analysis of variance (ANOVA) and the Duncan multiple range test (DMRT). Actual data are compared with the ANN (MLP) and conventional regression models through ANOVA. The significance of this study lies in the integration of ANN, conventional regression, and DOE for flexible and improved modelling of household electricity consumption that incorporates environmental-consciousness indicators.
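A minimal sketch of the model comparison described above is given below, using synthetic data only: household features plus a made-up questionnaire-derived "environmental consciousness" score. It is not the study's data or exact pipeline, just an illustration of comparing an MLP against conventional regression on the same inputs.
```python
# Sketch: compare an ANN (MLP) with conventional regression for estimating
# household electricity consumption, on synthetic data that includes an
# illustrative environmental-consciousness score.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 500
household_size = rng.integers(1, 7, size=n)   # persons
floor_area = rng.uniform(40, 250, size=n)     # m^2
eco_score = rng.uniform(0, 1, size=n)         # questionnaire score in [0, 1]
X = np.column_stack([household_size, floor_area, eco_score])

# Synthetic monthly consumption (kWh): eco-conscious households use less.
y = (80 * household_size + 1.5 * floor_area - 120 * eco_score
     + rng.normal(0, 25, size=n))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

mlp = make_pipeline(StandardScaler(),
                    MLPRegressor(hidden_layer_sizes=(16, 8),
                                 max_iter=5000, random_state=0)).fit(X_tr, y_tr)
ols = LinearRegression().fit(X_tr, y_tr)

print("ANN (MLP) MAE:", mean_absolute_error(y_te, mlp.predict(X_te)))
print("Regression MAE:", mean_absolute_error(y_te, ols.predict(X_te)))
```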
The advent of new special-purpose hardware, such as FPGA- or ASIC-based annealers and quantum processors, has shown potential for solving certain families of complex combinatorial optimization problems more efficiently than conventional CPUs. We argue that, to address industrial optimization problems, a hybrid architecture of CPUs and non-CPU devices is inevitable. In this paper, we propose problem decomposition as an effective method for designing a hybrid CPU--non-CPU optimization solver. We introduce the algorithmic elements required to make problem decomposition a viable approach under real-world constraints such as communication time and the potentially higher cost of using non-CPU hardware. We then turn to the well-known maximum clique problem and propose a new method of decomposition for it. Our method enables us to solve the maximum clique problem on very large graphs using non-CPU hardware that is considerably smaller than the size of the graph. As an example, we show that the maximum clique problem on the com-Amazon graph, with 334,863 vertices and 925,872 edges, can be solved with a single call to a device that can embed a fully connected graph of at least 21 nodes, such as the D-Wave 2000Q. We also show that our proposed problem decomposition approach can improve the runtime of two of the best-known classical algorithms for large, sparse graphs, namely PMC and BBMCSP, by orders of magnitude. In light of our study, we believe that new non-CPU hardware that is small in size could become competitive with CPUs if it could either be mass-produced and highly parallelized, or provide high-quality solutions to specific, small-sized problems significantly faster than CPUs.
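The sketch below illustrates the general principle behind CPU-side decomposition for maximum clique, not the specific algorithm of this work: a cheap greedy lower bound lets low-degree vertices be peeled away, leaving small dense residual subproblems that a limited-size device (or a fast exact solver) could handle. The graph used is a random example for illustration only.
```python
# Illustration of degree-based pruning for maximum clique: vertices whose
# degree is below a known clique lower bound cannot belong to a larger
# clique, so the CPU can shrink the problem before calling a small solver.
import networkx as nx


def greedy_clique_lower_bound(G):
    """Grow a clique greedily from the highest-degree vertex."""
    if G.number_of_nodes() == 0:
        return set()
    start = max(G.degree, key=lambda kv: kv[1])[0]
    clique, candidates = {start}, set(G.neighbors(start))
    while candidates:
        v = max(candidates, key=G.degree)
        clique.add(v)
        candidates &= set(G.neighbors(v))
    return clique


def peel(G, lower_bound):
    """Repeatedly remove vertices whose degree rules them out of any clique
    larger than the current lower bound (a k-core style reduction)."""
    H = G.copy()
    while True:
        weak = [v for v, d in H.degree if d < lower_bound]
        if not weak:
            return H
        H.remove_nodes_from(weak)


if __name__ == "__main__":
    G = nx.powerlaw_cluster_graph(2000, 4, 0.3, seed=1)  # sparse example graph
    lb = len(greedy_clique_lower_bound(G))
    H = peel(G, lb)
    print(f"lower bound = {lb}; reduced from {G.number_of_nodes()} "
          f"to {H.number_of_nodes()} vertices")
```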