
Mental health disorders are among the leading causes of disease and long-term disability worldwide. The issue has a long and painful history of gradual de-stigmatization of patients, coinciding with the humanization of therapeutic approaches. What are the current trends in Russia, and in what ways is the situation similar to and different from that in Western countries? IQ.HSE provides an overview of the problem based on research carried out by Svetlana Kolpakova.

On September 5, Laurie Manchester, Associate Professor of History at Arizona State University, presented her paper on voluntary repatriation of Russians from China to the Soviet Union between 1935 and 1960. The presentation was part of the research seminar, ‘Boundaries of History’, held regularly by the Department of History at HSE University in St. Petersburg. HSE News Service spoke with Laurie Manchester about her research interests, collaborating with HSE faculty members, and the latest workshop.

Dr. Sabyasachi Tripathi, from Kolkata, India, is a new research fellow at HSE University. He will be working at the Laboratory for Science and Technology Studies of the Institute for Statistical Studies and Economics of Knowledge.

In this paper, we address energy-aware online scheduling of jobs with resource contention. We propose an optimization model and present a new approach to resource allocation with job consolidation that takes into account application types and heterogeneous workloads, which may include CPU-intensive, disk-intensive, I/O-intensive, memory-intensive, network-intensive, and other applications. When jobs of one type are allocated to the same resource, they may create a bottleneck and resource contention in the CPU, memory, disk, or network, which can degrade system performance and increase energy consumption. We focus on the energy characteristics of applications and show that an intelligent allocation strategy can further reduce energy consumption compared with traditional approaches. We propose heterogeneous job consolidation algorithms and validate them through a performance evaluation study using the CloudSim toolkit under different scenarios with real data. We analyze several scheduling algorithms depending on the type and amount of information they require.
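The type-aware placement idea can be sketched in miniature. The host set, job list, and tie-breaking rule below are illustrative stand-ins, not the paper's actual algorithm:

```python
from collections import defaultdict

# Hypothetical sketch of type-aware job placement: jobs stressing the same
# resource (CPU, disk, network, ...) are spread across hosts so that no single
# host accumulates jobs competing for one bottleneck resource.
jobs = [("j1", "cpu"), ("j2", "cpu"), ("j3", "disk"), ("j4", "net"), ("j5", "cpu")]
hosts = ["h1", "h2", "h3"]

placement = defaultdict(list)
for job, jtype in jobs:
    # pick the host with the fewest already-placed jobs of this contention type
    best = min(hosts, key=lambda h: sum(1 for _, t in placement[h] if t == jtype))
    placement[best].append((job, jtype))

for h in hosts:
    print(h, placement[h])
```

With three CPU-bound jobs and three hosts, each host ends up with at most one CPU-bound job, avoiding a CPU bottleneck; a real consolidation algorithm would additionally balance overall load and energy cost.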

Cloud security issues are important factors for data storage and processing. Apart from the existing security and reliability problems of traditional distributed computing, new problems arise, including attacks on virtual machines, attacks on synchronization keys, and so on. According to assessments by international experts in the field of cloud security, there are risks of cloud collusion under uncertain conditions. To mitigate this type of uncertainty and reduce the harm it can cause, we propose the AC-RRNS algorithm, based on modified threshold Asmuth–Bloom and Mignotte secret sharing schemes. We prove that the algorithm satisfies the formal definition of computational security: the probability of obtaining the secret is strictly bounded when the adversary coalition knows the secret shares but not the secret key, when it knows the secret key but not the shares, and when the secret key is unknown. The complexity of breaking the scheme is equal to that of a brute-force attack. We demonstrate that the proposed scheme ensures security under several types of attacks. We propose approaches for selecting the parameters of the AC-RRNS secret sharing scheme to optimize system behavior and the data redundancy of the encryption.
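The Asmuth–Bloom threshold scheme underlying AC-RRNS can be illustrated with a toy example. This is a minimal sketch under small assumed parameters (m0 = 7, moduli 11, 13, 17, 19, threshold k = 3), not the paper's actual construction:

```python
import random
from math import prod

def crt(residues, moduli):
    # Chinese Remainder Theorem: unique x mod prod(moduli) with x = r_i (mod m_i)
    M = prod(moduli)
    x = sum(r * (M // m) * pow(M // m, -1, m) for r, m in zip(residues, moduli))
    return x % M

def split(secret, m0, moduli, k):
    # Asmuth-Bloom: hide the secret as y = secret + a*m0, keeping y below the
    # product of the k smallest moduli so that any k shares determine y via CRT
    limit = prod(sorted(moduli)[:k])
    a = random.randrange((limit - secret) // m0)
    y = secret + a * m0
    return [(y % m, m) for m in moduli]

def reconstruct(shares, m0):
    residues, moduli = zip(*shares)
    return crt(residues, moduli) % m0

m0, moduli = 7, [11, 13, 17, 19]   # pairwise coprime; 7*17*19 < 11*13*17 holds
shares = split(5, m0, moduli, k=3)
print(reconstruct(shares[:3], m0))  # → 5: any 3 of the 4 shares recover the secret
```

Fewer than k shares leave y underdetermined modulo m0, which is what gives the scheme its threshold property; production parameters are, of course, far larger.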

Earth remote sensing imagery comes from satellites, unmanned aerial vehicles, airplanes, and other sources. National agencies, commercial companies, and individuals across the globe collect enormous amounts of such imagery daily. Array DBMSs are one of the prominent tools to manage and process large volumes of geospatial imagery. The core data model of an array DBMS is an N-dimensional array. Recently we presented a geospatial array DBMS, ChronosDB, which outperforms SciDB by up to 75× on average, and we are about to launch a cloud service running our DBMS. SciDB is the only freely available distributed array DBMS to date. Remote sensing imagery is traditionally stored in files of sophisticated formats, not in databases. Unlike SciDB, ChronosDB does not require importing files into an internal DBMS format and works with imagery "in situ": directly in their native file formats. This is one of the many virtues of ChronosDB. It already has certain aggregation capabilities, but this paper focuses on more advanced aggregation queries, which constitute a large portion of a typical workload applied to remote sensing imagery. We integrate the aggregation types into the data model, present the respective algorithms to perform aggregations in a distributed fashion, and thoroughly compare the performance of our technique with SciDB. We carried out experiments on real-world data on 8- and 16-node clusters in the Microsoft Azure cloud.
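The per-chunk aggregation style used by distributed array DBMSs can be sketched as follows. This is a hypothetical illustration of the general map/merge pattern, not ChronosDB's actual algorithm:

```python
import numpy as np

# Hypothetical sketch: aggregate a chunked 2-D array the way a distributed
# array DBMS might - each worker computes a partial (sum, count) per chunk,
# then a coordinator merges the partials into the global mean.
data = np.arange(16.0).reshape(4, 4)          # stand-in for a raster tile
chunks = [data[i:i+2, j:j+2]                  # split into 2x2 chunks
          for i in range(0, 4, 2) for j in range(0, 4, 2)]

partials = [(c.sum(), c.size) for c in chunks]                     # map phase
total = sum(s for s, _ in partials) / sum(n for _, n in partials)  # merge phase
print(total)  # → 7.5, the mean of 0..15
```

Because (sum, count) pairs merge associatively, the partials can be computed on different nodes in any order, which is what makes this aggregation distributable.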

Almost all of the technologies that are now part of the cloud paradigm existed before, but until recently the market had no offerings that brought these technologies together in a single commercially attractive solution. In the last decade, however, public cloud services appeared through which these technologies became, on the one hand, available to developers and, on the other, understandable to the business community. Yet many of the features that make cloud computing attractive may conflict with traditional models of information security.

Because cloud computing brings new challenges in the field of information security, it is imperative for organizations to control the process of information risk management in the cloud. In this article, based on the Common Vulnerability Scoring System (CVSS), which makes it possible to determine a qualitative indicator of an information system's exposure to vulnerabilities while taking environmental factors into account, we propose a risk assessment method for different types of cloud deployment environments.
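To make the scoring step concrete, here is a minimal sketch of the base-score formula from one widely used version of the standard, CVSS v3.1, for the unchanged-scope case. The metric weights are the specification's published constants; the environmental adjustments the article builds on are omitted here:

```python
import math

def base_score(av, ac, pr, ui, c, i, a):
    # CVSS v3.1 base score, unchanged scope (weights per the specification)
    iss = 1 - (1 - c) * (1 - i) * (1 - a)     # impact sub-score
    impact = 6.42 * iss
    exploitability = 8.22 * av * ac * pr * ui
    if impact <= 0:
        return 0.0
    # spec's "round up to one decimal place"
    return math.ceil(min(impact + exploitability, 10) * 10) / 10

# Vector CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H
print(base_score(av=0.85, ac=0.77, pr=0.85, ui=0.85, c=0.56, i=0.56, a=0.56))  # → 9.8
```

Environmental scoring then re-weights confidentiality, integrity, and availability requirements per deployment context, which is the lever the proposed method uses to differentiate cloud deployment models.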

In information risk management, determining the applicability of cloud services for an organization is impossible without understanding the context in which the organization operates and the consequences of the possible types of threats it may face as a result of its activities. This paper proposes a risk assessment approach for selecting the most appropriate configuration of a cloud computing environment from the point of view of security requirements. Applying risk assessment to different types of cloud deployment reveals the proportion of attacks that can be countered and makes it possible to correlate the amount of potential damage with the total cost of ownership of the organization's entire IT infrastructure.

Cloud computing has emerged as a new paradigm for on-demand access to a vast pool of computing resources that provides an alternative to using on-premises resources. This paper discusses the challenges related to using cloud computing infrastructures for scientific computing. An approach based on the Everest platform addressing these challenges is presented, along with a prototype integration of Everest with Google Compute Engine. The proposed integration enables Everest users to seamlessly provision and use cloud-based computing resources for running different types of workloads, including HPC and HTC applications. In contrast to other efforts, the presented approach also supports building and sharing domain-specific web services that automate execution of applications on dynamically provisioned cloud resources or hybrid infrastructures.

The use of cloud computing for interaction between the state and citizens speeds up information exchange, enables the delivery of state services, and reduces the cost of providing such interaction, but at the same time raises important questions about the reliability of the cloud provider and the security of the interaction. Cloud providers can be either public authorities or private organizations. If the cloud provider is a government agency, it can be assumed that all security requirements will be met. However, if the provider is a private party, we cannot be sure of security unless those requirements are mandatory. It should be noted that Russian legislation does not require the mandatory application of information security standards. In this regard, the security of information stored in the cloud, its legislative support, and the responsibility of providers offering cloud access services are highly significant for the use of this technology in Russia.

This paper reviews modern methods of data preparation, acquisition, and processing in projects based on the Internet of Things concept. The best practices are considered, including strategies and techniques of network interaction, modern methods of organizing computation, and ways of presenting and visualizing information for better user comprehension, as well as additional technical solutions potentially applicable to IoT-based developments. The selection and integration of these solutions into a single coordinated system of data collection and processing is carried out, and the scope of use and applicability of the proposed framework is investigated. Results of experiments conducted on National Instruments laboratory equipment are presented.

The proceedings of the 11th International Conference on Service-Oriented Computing (ICSOC 2013), held in Berlin, Germany, December 2–5, 2013, contain high-quality research papers that represent the latest results, ideas, and positions in the field of service-oriented computing. Since the first meeting more than ten years ago, ICSOC has grown to become the premier international forum for academics, industry researchers, and practitioners to share, report, and discuss their ground-breaking work. ICSOC 2013 continued this tradition, in particular focusing on emerging trends at the intersection of service-oriented computing, cloud computing, and big data.

Program autotuning is becoming an increasingly valuable tool for improving performance portability across diverse target architectures, exploring trade-offs between several criteria, or meeting quality-of-service requirements. Recent work on general autotuning frameworks has enabled rapid development of domain-specific autotuners reusing common libraries of parameter types and search techniques. In this work we explore the use of such frameworks to develop general-purpose online services for program autotuning using the Software as a Service model. Beyond the common benefits of this model, the proposed approach opens up a number of unique opportunities, such as collecting performance data and utilizing it to improve further runs, or enabling remote online autotuning. However, the proposed autotuning-as-a-service approach also brings several challenges, such as accessing target systems, dealing with measurement latency, and supporting execution of user-provided code. This paper presents the first step towards implementing the proposed approach and addressing these challenges. We describe an implementation of a generic autotuning service that can be used for tuning arbitrary programs on user-provided computing systems. The service is based on the OpenTuner autotuning framework and runs on the Everest platform, which enables rapid development of computational web services. In contrast to OpenTuner, the service does not require installation of the framework, allows users to avoid writing code, and supports efficient parallel execution of measurement tasks across multiple machines. The performance of the service is evaluated by using it to tune synthetic and real programs.
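The kind of search such a service performs can be sketched in miniature. The parameter space, cost model, and random-search strategy below are illustrative stand-ins, not OpenTuner's actual API:

```python
import random

random.seed(0)  # reproducible run for this sketch

# Illustrative parameter space and stand-in cost model; a real autotuner would
# compile the program under each configuration and time its execution.
space = {"block_size": [16, 32, 64, 128], "unroll": [1, 2, 4, 8]}

def measure(cfg):
    # pretend the program runs fastest at block_size=64, unroll=4
    return abs(cfg["block_size"] - 64) + abs(cfg["unroll"] - 4)

best_cfg, best_cost = None, float("inf")
for _ in range(200):                       # measurement budget
    cfg = {k: random.choice(v) for k, v in space.items()}
    cost = measure(cfg)
    if cost < best_cost:
        best_cfg, best_cost = cfg, cost

print(best_cfg, best_cost)
```

A service wrapping this loop would dispatch the `measure` calls as parallel tasks on user-provided machines and persist the (configuration, cost) pairs, which is what enables reusing performance data across runs.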

The International Science and Technology Conference "Modern Networking Technologies (MoNeTec): SDN&NFV Next Generation of Computational Infrastructure" was dedicated to Software-Defined Networking (SDN) and Network Function Virtualization (NFV). These technologies have emerged as the hottest networking trends of the past few years. The conference proceedings contain papers discussing a broad range of SDN and NFV topics.