An important characteristic that distinguishes wireless sensor networks (WSNs) from other distributed systems is their need for energy efficiency, because sensors have finite energy reserves. Since there is no fixed infrastructure or centralized management in a WSN, a connected dominating set (CDS) has been proposed as a virtual backbone. The CDS plays a major role in routing, broadcasting, coverage, and activity scheduling. To reduce the traffic during communication and prolong network lifetime, it is desirable to construct a minimum CDS (MCDS). The MCDS problem has been studied intensively in unit disk graphs (UDGs), in which all nodes have the same transmission range. In the real world, however, such networks do not necessarily consist of nodes with equal transmission ranges. In this paper, a new timer-based, energy-aware distributed algorithm for the MCDS problem in disk graphs with bidirectional links (DGB), in which nodes have different transmission ranges, is introduced; it has outstanding time and message complexity and a constant approximation ratio. Theoretical analysis and simulation results are also presented to verify our approach's efficiency.
Key words: Disk graphs, energy-aware, minimum connected dominating set, virtual backbone, wireless sensor network.
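For context on the problem setting, the following sketch checks whether a given node subset is a connected dominating set in a disk graph with bidirectional links; the link rule used here (an edge exists only when each endpoint lies within the other's transmission range) and the sample node coordinates are illustrative assumptions, not part of the paper's algorithm.

```python
import math
from itertools import combinations

def dgb_edges(nodes):
    """Bidirectional links in a disk graph: u and v are linked only if each
    lies within the other's transmission range. Node format: (x, y, range)."""
    edges = set()
    for (i, (xi, yi, ri)), (j, (xj, yj, rj)) in combinations(enumerate(nodes), 2):
        if math.dist((xi, yi), (xj, yj)) <= min(ri, rj):
            edges.add((i, j)); edges.add((j, i))
    return edges

def is_connected_dominating_set(nodes, cds):
    """Check the two CDS properties: every node is in or adjacent to the set,
    and the set induces a connected subgraph."""
    edges = dgb_edges(nodes)
    dominated = all(v in cds or any((u, v) in edges for u in cds)
                    for v in range(len(nodes)))
    # BFS restricted to CDS members to test induced connectivity
    start = next(iter(cds))
    seen, frontier = {start}, [start]
    while frontier:
        u = frontier.pop()
        for v in cds:
            if v not in seen and (u, v) in edges:
                seen.add(v); frontier.append(v)
    return dominated and seen == set(cds)

nodes = [(0, 0, 3), (2, 0, 3), (4, 0, 3), (2, 2, 2)]   # (x, y, range), illustrative
print(is_connected_dominating_set(nodes, {1}))          # True: node 1 dominates all
```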
Quality of service (QoS) is an important issue in the design and management of web service composition. QoS in web services consists of various non-functional factors, such as execution cost, execution time, availability, successful execution rate, and security. In recent years, the number of available web services has proliferated, and many of them offer the same functionality; functionally equivalent web services are distinguished by their quality parameters. Moreover, clients usually demand value-added services rather than those offered by single, isolated web services. Therefore, selecting, among numerous candidates, a composition plan of web services that satisfies client requirements has become a challenging and time-consuming problem. This paper proposes a new constraint-aware composition plan optimizer based on a genetic algorithm. The proposed method can efficiently find a composition plan that satisfies user constraints. The performance of the method is evaluated in a simulated environment.
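A minimal sketch of the kind of GA-based plan selection described above, assuming a toy QoS model (per-service cost and time, and a single response-time constraint handled by a penalty term); the candidate data, fitness function, and operators are illustrative and not the paper's exact formulation.

```python
import random

# Illustrative candidate services per abstract task: (cost, time, availability).
CANDIDATES = [
    [(5, 2, 0.99), (3, 4, 0.95), (8, 1, 0.98)],   # task 0
    [(2, 3, 0.97), (6, 2, 0.99)],                  # task 1
    [(4, 5, 0.96), (7, 3, 0.99), (3, 6, 0.94)],    # task 2
]
MAX_TIME = 10  # assumed user constraint on total execution time

def fitness(plan):
    cost = sum(CANDIDATES[t][i][0] for t, i in enumerate(plan))
    time = sum(CANDIDATES[t][i][1] for t, i in enumerate(plan))
    penalty = 100 * max(0, time - MAX_TIME)        # penalize constraint violation
    return cost + penalty                           # lower is better

def evolve(pop_size=20, generations=50, mutation_rate=0.2):
    pop = [[random.randrange(len(c)) for c in CANDIDATES] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)
        survivors = pop[: pop_size // 2]            # elitist selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, len(CANDIDATES))
            child = a[:cut] + b[cut:]               # one-point crossover
            if random.random() < mutation_rate:     # mutate one gene
                t = random.randrange(len(CANDIDATES))
                child[t] = random.randrange(len(CANDIDATES[t]))
            children.append(child)
        pop = survivors + children
    return min(pop, key=fitness)

print(evolve())  # best composition plan found (one service index per task)
```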
This paper presents MAXelerator, the first hardware accelerator for privacy-preserving machine learning (ML) on cloud servers. Cloud-based ML is being increasingly employed in various data-sensitive scenarios. While it enhances both the efficiency and the quality of the service, it also raises concerns about the privacy of users' data. We create a practical privacy-preserving solution for matrix-based ML on cloud servers. We show that for the majority of ML applications, the privacy-sensitive computation boils down to either matrix multiplication, which is a repetition of Multiply-Accumulate (MAC) operations, or the MAC itself. We design an FPGA architecture for privacy-preserving MAC to accelerate the ML computation based on the well-known Secure Function Evaluation protocol known as Yao's Garbled Circuit (GC). MAXelerator demonstrates up to 57× improvement in throughput per core compared to the fastest existing GC framework. We corroborate the effectiveness of the accelerator with real-world case studies in privacy-sensitive scenarios.
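To make the reduction concrete, the sketch below gives a plain-text (non-garbled) view of how matrix multiplication unfolds into repeated MAC operations, which is why accelerating one secure MAC covers most matrix-based ML workloads; it does not implement the GC protocol itself.

```python
import numpy as np

def mac(acc, a, b):
    """Single multiply-accumulate step: acc += a * b."""
    return acc + a * b

def matmul_via_mac(A, B):
    """Matrix product expressed purely as repeated MAC operations."""
    n, k = A.shape
    k2, m = B.shape
    assert k == k2
    C = np.zeros((n, m))
    for i in range(n):
        for j in range(m):
            acc = 0.0
            for t in range(k):
                acc = mac(acc, A[i, t], B[t, j])  # the unit a GC-based MAC core would secure
            C[i, j] = acc
    return C

A, B = np.random.rand(3, 4), np.random.rand(4, 2)
assert np.allclose(matmul_via_mac(A, B), A @ B)   # matches the direct product
```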
In this study, a thresholding method is proposed that, based on the histogram shape of each image, proposes the most appropriate technique among well-known and commonly used thresholding techniques. Thresholding, a common operation in image processing, is the selection of an intensity value that determines the border between the background and foreground of an image. After a suitable threshold is determined, the image can be converted to a binary image; from this binary image, which is very compact, information can be extracted for use in various scientific applications. From histogram tests of images, examination of criteria extracted from the histogram shape of each image, and empirical knowledge, we designed an expert system that proposes the suitable thresholding technique for an image based on the histogram of that image. We used modeling and matching to design this system: a model was produced from the histogram of the most suitable result for each thresholding technique and stored in a knowledge base. A system was then implemented that receives the input image and, after matching this image against the above models, proposes the technique whose model is closest to the histogram of the input image as the most appropriate thresholding method. Nowadays, thresholding is widely used as a preprocessing step in image processing, and many thresholding techniques and methods have been proposed. Although all these techniques are useful, the results vary from image to image, and sometimes a certain technique is more appropriate for an image than the rest. Based on experimentation and qualitative reasoning, which will be discussed later, we concluded which techniques are more appropriate for different histogram shapes. The purpose of the proposed method is to facilitate the selection of a suitable thresholding technique for an image. Furthermore, the proposed method was tested on many images, and the results showed that by using this method, choosing a suitable thresholding method can be largely automated.
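As an illustration of one classic histogram-based technique such an expert system could select, the sketch below computes Otsu's threshold directly from a 256-bin grayscale histogram and binarizes the image; the random placeholder image is an assumption for demonstration only.

```python
import numpy as np

def otsu_threshold(hist):
    """Otsu's method: pick the threshold maximizing between-class variance.
    `hist` is a 256-bin grayscale histogram."""
    probs = hist.astype(float) / hist.sum()
    levels = np.arange(256)
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w0, w1 = probs[:t].sum(), probs[t:].sum()
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (levels[:t] * probs[:t]).sum() / w0    # background mean
        mu1 = (levels[t:] * probs[t:]).sum() / w1    # foreground mean
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_t, best_var = t, var_between
    return best_t

def binarize(image, threshold):
    """Convert a grayscale image to a compact binary image."""
    return (image >= threshold).astype(np.uint8)

img = np.random.randint(0, 256, (64, 64))             # placeholder image
hist, _ = np.histogram(img, bins=256, range=(0, 256))
t = otsu_threshold(hist)
print(t, binarize(img, t).mean())                      # threshold and foreground ratio
```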
The binary decision diagram (BDD) is a data structure that has proved compact in representing and efficient in manipulating Boolean formulas. The use of BDDs in network reliability analysis has already been investigated by several researchers. In this paper, we show how an exact algorithm for network reliability can be improved and implemented efficiently using CUDD, the Colorado University Decision Diagram package.
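The exact computation rests on Shannon decomposition over edge variables, which a BDD encodes compactly; the sketch below applies the same factoring idea by explicit recursion on a small illustrative network rather than through CUDD, so it conveys only the principle.

```python
def two_terminal_reliability(edges, probs, s, t):
    """Exact s-t reliability by conditioning on each edge (Shannon expansion),
    the same decomposition a BDD over edge variables encodes compactly."""
    def connected(up_edges):
        # Reachability from s using only the edges assumed to be working.
        reach, frontier = {s}, [s]
        while frontier:
            u = frontier.pop()
            for a, b in up_edges:
                for x, y in ((a, b), (b, a)):
                    if x == u and y not in reach:
                        reach.add(y)
                        frontier.append(y)
        return t in reach

    def recurse(i, up_edges):
        if i == len(edges):
            return 1.0 if connected(up_edges) else 0.0
        e, p = edges[i], probs[i]
        return (p * recurse(i + 1, up_edges + [e])      # edge works
                + (1 - p) * recurse(i + 1, up_edges))   # edge fails

    return recurse(0, [])

# Illustrative 4-node bridge network with edge reliability 0.9.
edges = [(0, 1), (0, 2), (1, 2), (1, 3), (2, 3)]
probs = [0.9] * len(edges)
print(two_terminal_reliability(edges, probs, 0, 3))
```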
Systematic studies indicate a growing number of clinical studies that use mesenchymal stem cells (MSCs) for the treatment of cartilage lesions. The current experimental and preclinical study aims to comparatively evaluate the potential of MSCs from a variety of tissues for the treatment of cartilage defects in the rabbit knee, which has not previously been reported. In this experimental study, MSCs were isolated from the bone marrow (BMMSCs), adipose tissue (AMSCs), and ears (EMSCs) of rabbits and expanded under in vitro culture. The growth rate of the MSCs, their ability to differentiate into chondrocytes, and the formation of cartilage pellets were investigated by drawing growth curves and by real-time polymerase chain reaction (RT-PCR), respectively. Then, a critical cartilage defect was created on the articular cartilage (AC) of the rabbit distal femur, and MSCs in a collagen carrier were transplanted. The study groups were the control (defect only), sham (defect with scaffold), BMMSCs in the scaffold, EMSCs in the scaffold, and EMSCs in the scaffold with cartilage pellets. Histological and gene expression analyses were performed following transplantation. Based on our comparative in vitro investigation, AMSCs possessed the highest growth rate as well as the lowest chondrogenic differentiation potential. In this context, MSCs of the ear showed a significantly higher growth rate and cartilage differentiation potential than those of bone marrow tissue (P<0.05). According to our in vivo assessments, BMMSC- and EMSC-seeded scaffolds efficiently improved the cartilage defect 4 weeks post-transplantation, while no improvement was observed in the group containing the cartilage pellets. It seems that the ear contains MSCs that promote cartilage regeneration as much as the conventional MSCs from bone marrow. Considering the high proliferation rate and easy harvesting of MSCs from the ear, this finding could be of value for regenerative medicine.
In this research work, the impact of user behavior on search engine results is discussed. The aim is to improve search results, leading to higher user satisfaction. In other words, we are trying to present a personalized search engine for each user, based on his or her activity and search history. Our hypothesis is that the search history of each user within a specific time frame provides valuable information that can be used to offer customized and more efficient results to the user. In order to evaluate this hypothesis, we designed and implemented an experimental search engine as a web platform. This search engine measures the level of user satisfaction by taking this extra information into account. According to the experimental results, considering the user's behavior history has a significant effect on the quality of the search results, leading to greater user satisfaction.
This paper proposes ReBNet, an end-to-end framework for training reconfigurable binary neural networks in software and developing efficient accelerators for execution on FPGAs. Binary neural networks offer an intriguing opportunity for deploying large-scale deep learning models on resource-constrained devices. Binarization reduces the memory footprint and replaces the power-hungry matrix multiplication with lightweight XnorPopcount operations. However, binary networks suffer from degraded accuracy compared to their fixed-point counterparts. We show that the state-of-the-art methods for improving binary network accuracy significantly increase the implementation cost and complexity. To compensate for the degraded accuracy while adhering to the simplicity of binary networks, we devise the first reconfigurable scheme that can adjust the classification accuracy based on the application. Our approach improves the classification accuracy by representing features with multiple levels of residual binarization. Unlike previous methods, our approach does not exacerbate the area cost of the hardware accelerator. Instead, it provides a tradeoff between throughput and accuracy, while the area overhead of multi-level binarization is negligible.
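A rough numerical sketch of the two ideas mentioned above: residual binarization, which approximates a tensor as a sum of scaled sign tensors, and the XnorPopcount dot product over {-1,+1} vectors. The per-level scaling factors here are simply mean absolute residuals, whereas ReBNet learns them, so this is illustrative only.

```python
import numpy as np

def residual_binarize(x, levels=2):
    """Approximate x as sum_i gamma_i * sign(r_i), where r_i is the residual
    left after the previous levels. Scaling factors are assumed, not learned."""
    residual = x.copy()
    approx = np.zeros_like(x)
    for _ in range(levels):
        gamma = np.mean(np.abs(residual))
        b = np.sign(residual)
        b[b == 0] = 1
        approx += gamma * b
        residual -= gamma * b
    return approx

def xnor_popcount_dot(a_bits, b_bits):
    """Dot product of two {-1,+1} vectors via XNOR + popcount on a {0,1} encoding."""
    n = a_bits.size
    xnor = ~(a_bits ^ b_bits) & 1        # 1 where the signs agree
    return 2 * int(xnor.sum()) - n

x = np.random.randn(8)
print(np.linalg.norm(x - residual_binarize(x, 1)),    # 1-level approximation error
      np.linalg.norm(x - residual_binarize(x, 2)))    # 2-level error (typically smaller)

a, b = np.random.randn(16), np.random.randn(16)
a_bits, b_bits = (a > 0).astype(np.int64), (b > 0).astype(np.int64)
print(xnor_popcount_dot(a_bits, b_bits), int(np.sign(a) @ np.sign(b)))  # identical values
```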
In optimization, there is a high demand for high-performance algorithms that can explore the solution space efficiently and find the best solutions quickly. One approach to this goal is to use swarm intelligence algorithms, which employ a population of simple agents that communicate locally with one another and with their surroundings. In this paper, we propose a novel approach based on combining the characteristics of two algorithms: Cat Swarm Optimization (CSO) and the Shuffled Frog Leaping Algorithm (SFLA). The experimental results show that the convergence ratio of our hybrid SFLA-CSO algorithm is seven times higher than that of CSO and five times higher than that of the standard SFLA. The results also reveal that the hybrid method speeds up convergence significantly and reduces the error rate. We compared the proposed hybrid algorithm against the well-known algorithms PSO, ACO, ABC, GA, and SA; the results are valuable and promising.
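For reference, the sketch below shows a simplified version of the standard SFLA memeplex update (the worst frog leaps toward the memeplex best, otherwise it is re-seeded), which is the baseline behavior the proposed hybrid modifies; it is not the authors' SFLA-CSO rule, and the objective function and bounds are illustrative.

```python
import numpy as np

def sfla_memeplex_step(memeplex, objective, d_max=2.0):
    """One simplified SFLA improvement step on a memeplex (list of positions)."""
    fitness = np.array([objective(f) for f in memeplex])
    best, worst = memeplex[fitness.argmin()], memeplex[fitness.argmax()]
    step = np.clip(np.random.rand(*worst.shape) * (best - worst), -d_max, d_max)
    candidate = worst + step
    if objective(candidate) < objective(worst):          # accept only improvements
        memeplex[fitness.argmax()] = candidate
    else:                                                 # otherwise re-seed randomly
        memeplex[fitness.argmax()] = np.random.uniform(-5, 5, worst.shape)
    return memeplex

sphere = lambda x: float(np.sum(x ** 2))                  # illustrative objective
memeplex = [np.random.uniform(-5, 5, 2) for _ in range(5)]
for _ in range(100):
    memeplex = sfla_memeplex_step(memeplex, sphere)
print(min(sphere(f) for f in memeplex))                   # best objective value found
```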
This research work is concerned with the predictive ability of ensemble and single tree-based machine learning algorithms during the recession and prosperity periods of two companies listed on the Tehran Stock Exchange, in a big data context. In this regard, the main issue is that economic managers and the academic community require prediction models with higher accuracy and reduced execution time; moreover, predicting a company's recession in the stock market is highly significant. Machine learning algorithms must be able to appropriately predict the stock return sign during market downturn and boom days. Addressing this challenge will improve the quality of stock purchases and, subsequently, increase profitability. In this article, the proposed solution relies on tree-based machine learning algorithms in a big data context. The proposed solution exploits the decision tree algorithm, a traditional single tree-based learning algorithm. Furthermore, two modern ensemble tree-based learning algorithms, random forest and gradient boosted tree, have been utilized for predicting the stock return sign during recession and prosperity. These models were implemented using machine learning tools in the Python programming language and the PySpark library, which is designed for big data processing. The research data of this study are the share information of two companies listed on the Tehran Stock Exchange. The obtained results reveal that the applied ensemble learning algorithms performed better than the single learning algorithms. Additionally, adding 23 technical features to the initial data and subsequently applying the PCA feature-reduction method demonstrated the best performance among the evaluated configurations. Meanwhile, it was concluded that the initial data do not possess the proper resolution or generalizability, either during prosperity or recession.
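A minimal PySpark sketch of the kind of pipeline described above (feature assembly, PCA reduction, and a random forest classifier for the return sign); the column names, file path, PCA dimension, and tree count are assumptions rather than the study's actual configuration.

```python
from pyspark.sql import SparkSession
from pyspark.ml import Pipeline
from pyspark.ml.feature import VectorAssembler, PCA
from pyspark.ml.classification import RandomForestClassifier
from pyspark.ml.evaluation import MulticlassClassificationEvaluator

# Illustrative pipeline only: column names, path, and hyperparameters are assumed.
spark = SparkSession.builder.appName("stock-sign-prediction").getOrCreate()
df = spark.read.csv("stock_features.csv", header=True, inferSchema=True)

feature_cols = [c for c in df.columns if c not in ("date", "return_sign")]
assembler = VectorAssembler(inputCols=feature_cols, outputCol="raw_features")
pca = PCA(k=10, inputCol="raw_features", outputCol="features")    # feature reduction
rf = RandomForestClassifier(labelCol="return_sign", featuresCol="features",
                            numTrees=100)

pipeline = Pipeline(stages=[assembler, pca, rf])
train, test = df.randomSplit([0.8, 0.2], seed=42)
model = pipeline.fit(train)
predictions = model.transform(test)

evaluator = MulticlassClassificationEvaluator(labelCol="return_sign",
                                              predictionCol="prediction",
                                              metricName="accuracy")
print("test accuracy:", evaluator.evaluate(predictions))
```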