With technology scaling, there is a strong demand for smaller cell size, higher speed, and lower power in SRAMs. In addition, there are severe constraints on reliable read-and-write operation in the presence of increasing random variations that significantly degrade the noise margin. To understand these tradeoffs clearly and find a power-delay-optimal solution for scaled SRAM, sequential quadratic programming is applied to the optimization of the 6-T SRAM cell for the first time. We use analytical device models for the transistor currents and formulate all the cell-operation requirements as constraints in an optimization problem. Our results suggest that, for optimal SRAM cell design, neither the supply voltage (V_dd) nor the gate length (L_g) scales with technology, due to the need for an adequate noise margin amid leakage and threshold variability and the relatively low dynamic activity of SRAM. The cell area nevertheless continues to scale despite the nonscaling gate length, with only a 7% area overhead at the 22-nm technology node compared to simple scaling, at which point a 3-D structure is needed to continue the area-scaling trend. The suppression of gate leakage helps to reduce the power in ultralow-power SRAM, where subthreshold leakage is minimized at the cost of an increase in cell area.
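To make the formulation concrete, the minimal sketch below casts a 6-T SRAM sizing problem as a constrained optimization and hands it to an SQP solver (SciPy's SLSQP). The objective, the noise-margin surrogate, and the variable names (Vdd, Lg, Wn) are illustrative placeholders, not the analytical device models used in the paper.

```python
# Hypothetical sketch: 6-T SRAM sizing as a constrained optimization solved
# with sequential quadratic programming (SciPy's SLSQP). The device models
# below are toy analytical expressions, not the authors' models.
import numpy as np
from scipy.optimize import minimize

def power_delay(x):
    """Toy objective: product of a leakage-like term and a delay-like term."""
    vdd, lg, wn = x
    leakage = wn * np.exp(-lg / 10.0) * vdd            # grows as Lg shrinks
    delay = lg / (wn * max(vdd - 0.3, 1e-3) ** 2)      # alpha-power-law style delay
    return leakage * delay

def static_noise_margin(x):
    """Toy SNM surrogate: improves with Vdd, degrades with variability ~ 1/sqrt(W*L)."""
    vdd, lg, wn = x
    sigma_vt = 5.0 / np.sqrt(wn * lg)                  # random-variation proxy
    return 0.25 * vdd - 3.0 * sigma_vt

constraints = [
    # cell-operation requirement expressed as an inequality constraint (>= 0)
    {"type": "ineq", "fun": lambda x: static_noise_margin(x) - 0.05},
]
bounds = [(0.4, 1.2),     # Vdd in volts
          (22.0, 65.0),   # Lg in nm
          (40.0, 200.0)]  # access-transistor width in nm

x0 = np.array([1.0, 45.0, 100.0])
res = minimize(power_delay, x0, method="SLSQP", bounds=bounds, constraints=constraints)
print("optimal (Vdd, Lg, Wn):", res.x, "objective:", res.fun)
```

In the paper's setting, each read/write/retention requirement would appear as an additional constraint of the same form, which is what makes SQP a natural fit.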
In this paper, we address the problem of probe station selection for detecting failures in a network. Probe station nodes are the nodes instrumented with the functionality of sending probes and analyzing probe results. The placement of probe stations affects the diagnosis capability of the probes they send, and it also involves instrumentation overhead. It is therefore important to minimize the required number of probe stations without compromising the required diagnosis capability of the probes. We present an algorithm for probe station selection based on a reduction of the probe station selection problem to the Minimum Hitting Set problem. We also address several issues that arise while selecting probe stations, such as link failures and probe station failures, and present an experimental evaluation showing the effectiveness of the proposed approach.
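As an illustration of the reduction, the sketch below runs the standard greedy heuristic for Minimum Hitting Set over hypothetical failure scenarios; the scenario data and node names are made up, and the paper's own construction of the sets from probe paths is not reproduced here.

```python
# Greedy heuristic for Minimum Hitting Set, the problem the probe station
# selection is reduced to. Each inner list holds the candidate probe stations
# able to detect one failure scenario (illustrative data only).
def greedy_hitting_set(sets_to_hit):
    """Pick elements (candidate probe stations) until every set is hit."""
    chosen = set()
    remaining = [set(s) for s in sets_to_hit]
    while remaining:
        # count how many still-unhit sets each candidate element appears in
        counts = {}
        for s in remaining:
            for e in s:
                counts[e] = counts.get(e, 0) + 1
        best = max(counts, key=counts.get)   # element hitting the most sets
        chosen.add(best)
        remaining = [s for s in remaining if best not in s]
    return chosen

scenarios = [["n1", "n2"], ["n2", "n3"], ["n3", "n4"], ["n1", "n4"]]
print(greedy_hitting_set(scenarios))   # e.g. {'n2', 'n4'}
```

The greedy rule gives the usual logarithmic approximation guarantee; robustness to probe station failures can be modeled by requiring each scenario to be hit more than once.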
Silicon photonic interconnects offer a promising solution to the ever-growing demand for more efficient I/O bandwidth density. We report an ultralow-power 80 Gb/s arrayed silicon photonic transceiver for dense, large-bandwidth inter/intrachip interconnects. Low-parasitic microsolder-based hybrid bonding enables close integration of silicon photonic array devices optimized on a 130 nm silicon-on-insulator CMOS platform with CMOS very-large-scale-integration circuits optimized on a 40 nm silicon CMOS platform to achieve unprecedented energy efficiency. The hybrid CMOS transceiver consists of eight 10 Gb/s channels with a total power consumption below 6 mW/channel. The eight-channel wavelength-division-multiplexing transmitter array using cascaded tunable ring modulators demonstrates better than 100 fJ/bit energy efficiency for 10 Gb/s operation, excluding the laser power and tuning power, while the eight-channel receiver array using broadband Ge p-i-n waveguide detectors shows a sensitivity of better than -15 dBm for a bit error rate of 10^-12 at a data rate of 10 Gb/s with an energy efficiency of better than 500 fJ/bit.
We report ultra-low-power (690 fJ/bit) operation of an optical receiver consisting of a germanium-silicon waveguide detector intimately integrated with a receiver circuit and embedded in a clocked digital receiver. We show a wall-plug power efficiency of 690 µW/Gbps for the photonic receiver, made of a 130 nm SOI CMOS Ge waveguide detector integrated with a 90 nm Si CMOS receiver circuit. The hybrid CMOS photonic receiver achieved a sensitivity of -18.9 dBm at 5 Gbps for a BER of 10^-12. Enabled by a unique low-overhead bias refresh scheme, the receiver operates without the need for DC-balanced transmission. Small-signal measurements of the CMOS Ge waveguide detector showed a 3 dB bandwidth of 10 GHz at 1 V of reverse bias, indicating that further increases in transmission rate and reductions in energy per bit will be possible.
Digital "assist" circuits can improve the efficiency of traditionally analog circuit blocks, especially as technologies scale to the detriment of analog blocks. We apply some of these techniques to a 10 Gbps optical reciever, and demonstrate 395 fJ/b energy efficiency. Digital calibration blocks wrapped around a simple analog core enabled offset compensation, TIA biasing, and DLL re-timing, and cost negligible performance and power overhead. The assist circuits cost around 40% area overhead.
We present ultra-low-power silicon photonic transceivers, including a 320 fJ/bit reverse-biased ring modulator integrated with a CMOS driver, and a 690 fJ/bit record-low-power receiver with a sensitivity of -18.9 dBm at 5 Gbps for a bit error rate of 10^-12.
Plant diseases are unfavourable factors that cause a significant decrease in the quality and quantity of crops. Experienced biologists or farmers often inspect plants for disease with the naked eye, but this method is imprecise and can take a long time. In this study, we use artificial intelligence and computer vision techniques to design and develop an intelligent classification mechanism for leaf diseases. The paper follows two methodologies, and their simulation outcomes are compared for performance evaluation. In the first part, data augmentation is performed on the PlantVillage data set images (for apple, corn, potato, tomato, and rice plants), and their deep features are extracted using a convolutional neural network (CNN). These features are classified by a Bayesian-optimized support vector machine classifier, and the results are reported in terms of precision, sensitivity, F-score, and accuracy. The second part of the methodology starts with the preprocessing of the data set images; their texture and color features are extracted by histogram of oriented gradients (HoG), GLCM, and color moments. The three types of features, that is, color, texture, and deep features, are combined to form hybrid features. Binary particle swarm optimization is applied for the selection of these hybrid features, followed by classification with a random forest classifier to obtain the simulation results; the purpose of this algorithm is to obtain suitable output with the fewest features. A comparative analysis of both techniques is presented using the above-mentioned evaluation parameters. These methodologies can enable farmers worldwide to take early action to prevent their crops from becoming irreversibly damaged, thereby averting significant economic losses.
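The following minimal sketch illustrates the flavor of the second (hybrid-feature) pipeline: handcrafted colour and texture descriptors are fused into one vector and classified with a random forest. It uses scikit-image (>= 0.19, where the GLCM helpers are named graycomatrix/graycoprops) and scikit-learn as stand-ins, synthetic images instead of PlantVillage, and omits the CNN deep features, Bayesian-optimized SVM, and binary PSO selection steps.

```python
# Illustrative sketch (not the authors' code): fuse colour moments, GLCM texture
# properties, and a small HoG descriptor into a hybrid feature vector, then
# classify with a random forest. Images here are synthetic placeholders.
import numpy as np
from skimage.feature import hog, graycomatrix, graycoprops
from sklearn.ensemble import RandomForestClassifier

def color_moments(rgb):
    """Mean and standard deviation per channel (first two colour moments)."""
    return np.concatenate([rgb.mean(axis=(0, 1)), rgb.std(axis=(0, 1))])

def texture_features(gray_u8):
    """GLCM contrast/homogeneity plus a small HoG descriptor."""
    glcm = graycomatrix(gray_u8, distances=[1], angles=[0], levels=256,
                        symmetric=True, normed=True)
    g = [graycoprops(glcm, p)[0, 0] for p in ("contrast", "homogeneity")]
    h = hog(gray_u8, pixels_per_cell=(16, 16), cells_per_block=(1, 1))
    return np.concatenate([g, h])

def hybrid_features(rgb):
    gray_u8 = (rgb.mean(axis=2) * 255).astype(np.uint8)
    return np.concatenate([color_moments(rgb), texture_features(gray_u8)])

# Synthetic stand-in images (64x64 RGB in [0, 1]) and binary labels.
rng = np.random.default_rng(0)
X = np.array([hybrid_features(rng.random((64, 64, 3))) for _ in range(40)])
y = rng.integers(0, 2, size=40)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print("training accuracy:", clf.score(X, y))
```

In the paper's pipeline, a binary PSO mask would be applied to the hybrid vector before the random forest, keeping only the feature subset that maximizes classification performance.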
This brief note presents a case study of clocking links in multi-chip packages. A particular co-packaged multichip system design based on multi-Gbps silicon photonics global interconnect provides the context for our study of short-reach links, and we explore its design space. A preliminary exploration of phase noise suggests that the links should be clocked mesochronously, with an optically distributed full-rate clock and local phase adjustment at each receiver.
This paper studies how the presence of universities and hospitals influences local home prices and rents. We analyze the data at the ZIP code level and at the level of individual homes. Our ZIP code-level analysis uses median home price data from 13,105 ZIP codes over 21 years and rent data from 15,918 ZIP codes over 7 years to relate a ZIP code's appreciation, volatility, and vacancies to the size of the university or hospital within that ZIP code. Our home-level analysis uses data from 2,786,895 homes for sale and 267,486 homes for rent to study the impact of the distance to the nearest university or hospital on individual home prices. While our results generally agree with our expectation that larger, closer institutions yield higher prices, we also find some interesting results that challenge this expectation, such as positive correlations between volatility and university/hospital size in some ZIP codes, a positive correlation between rent and distance from a hospital for some homes, and weaker correlations of rent versus distance from a university compared to price versus distance.
Big Data refers to large volumes of data that are complex, have evolving relationships, and grow continuously from multiple, heterogeneous, and autonomous sources. Networking, data storage, and data collection capacity are developing ever faster. The data is termed Big Data due to its characteristics of Volume, Variety, Velocity, and Veracity. Most of this data is unstructured, semi-structured, and heterogeneous in nature. The volume and heterogeneity of the data, together with the speed at which it is generated, make it difficult for present computing infrastructure to manage. Because of this nature, traditional data management, warehousing, and analysis systems are not able to analyze it satisfactorily. To process Big Data, the HACE theorem is considered, which characterizes its features for processing. Apache Hadoop with HDFS is a software framework widely used for storing, managing, and analyzing Big Data, a challenging task as it involves large distributed file systems that must be fault tolerant, flexible, and scalable. Hadoop's MapReduce is widely used for the efficient processing of such large data sets on clusters. In this paper, various solutions are introduced through the MapReduce framework over the Hadoop Distributed File System (HDFS). MapReduce is a minimization technique that makes use of file indexing with mapping, sorting, shuffling, and finally reducing. MapReduce techniques are implemented for Big Data analysis using HDFS.
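As a concrete illustration of the map, sort/shuffle, and reduce stages, here is a minimal Hadoop Streaming style sketch in Python; it is the classic word-count example rather than the jobs described in the paper, and the jar and HDFS paths in the comment are placeholders.

```python
# Minimal Hadoop Streaming style sketch (classic word count, not the paper's
# specific jobs) showing the map -> sort/shuffle -> reduce flow over HDFS.
# A hypothetical invocation, with placeholder jar and HDFS paths:
#   hadoop jar hadoop-streaming.jar -files wc.py \
#       -mapper "python wc.py map" -reducer "python wc.py reduce" \
#       -input /hdfs/in -output /hdfs/out
import sys

def mapper():
    """Emit (word, 1) pairs; Hadoop sorts and shuffles them by key."""
    for line in sys.stdin:
        for word in line.split():
            print(f"{word}\t1")

def reducer():
    """Sum counts per key; input arrives grouped by word after the shuffle."""
    current, total = None, 0
    for line in sys.stdin:
        word, count = line.rsplit("\t", 1)
        if word != current:
            if current is not None:
                print(f"{current}\t{total}")
            current, total = word, 0
        total += int(count)
    if current is not None:
        print(f"{current}\t{total}")

if __name__ == "__main__":
    (mapper if sys.argv[1:] == ["map"] else reducer)()
```

The same two-function pattern carries over to the indexing and analysis jobs discussed in the paper: only the key-value pairs emitted by the mapper and the aggregation in the reducer change, while sorting and shuffling remain the framework's responsibility.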