ProActive Resource Manager

Control your on-premise and cloud infrastructure

IT Infrastructure & Operations management

The Resource Manager lets DevOps and system engineers define infrastructure policies and monitor computing resources. It is the central place to manage infrastructure and to control dynamic, policy-based provisioning of resources.

ProActive Resource Manager lets you manage heterogeneous computing resources: on-premise machines, clusters, clouds and edge devices. You can aggregate resources into hybrid infrastructures and easily manage them with this resource-agnostic solution. Resources of any origin are unified as ProActive Nodes (usually tied to the number of cores available on a compute host) and can be accessed transparently: desktop machines (Windows, Linux, macOS), all kinds of standalone server machines, cluster nodes managed by common batch schedulers (Slurm, LSF, SGE), and private or public clouds (Azure, AWS, Google Cloud, OpenStack, CloudStack, VMware, etc.).

The Resource Manager also lets you create and monitor nodes on edge devices, where data is processed locally before being collected and transferred to the main infrastructure or to clouds, with results pushed back to the edge.

Reduce infrastructure costs with a resource-aware solution

ProActive Resource Manager lets you control resource acquisition and allocation according to various resource management policies, such as load-based or time-based policies. For cloud or hybrid infrastructures, cloud computing power is unlocked only when you need it. With elastic scalability, virtual machines are deployed on demand, which saves money on VMs, and smart, fully configurable policies shut down unused virtual machines whenever possible.
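A load-based policy like the one described above can be sketched as follows. This is an illustrative Python sketch, not the actual ProActive policy API: the function name, the `load_factor` semantics, and the return convention are assumptions chosen for the example.

```python
def scaling_decision(pending_tasks: int, busy_nodes: int, total_nodes: int,
                     load_factor: int = 2) -> int:
    """Decide how many nodes to acquire (positive) or release (negative).

    load_factor: how many pending tasks one node is expected to absorb
    before an additional node is acquired (hypothetical parameter).
    """
    idle_nodes = total_nodes - busy_nodes
    # Ceiling division: nodes needed to cover the current backlog.
    needed = -(-pending_tasks // load_factor)
    if needed > idle_nodes:
        return needed - idle_nodes       # scale up: backlog exceeds idle capacity
    return -(idle_nodes - needed)        # scale down: release surplus idle nodes
```

For example, with 10 pending tasks, 3 busy nodes out of 5, and a load factor of 2, the policy would ask for 3 more nodes; with an empty queue it would release every idle node.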

ProActive Resource Manager gives you control and visibility over resource capacity consumption, so you can identify waste factors and set up smart cost control processes. A resource monitoring system helps you understand the consumption of computing resources, while smart elasticity policies fetch new resources when the load increases and release them when it decreases.

Optimizing resource consumption is key to reducing the overall TCO (total cost of ownership) and fully benefiting from the cloud opportunity.

Automate IT Operations with ProActive Nodes

ProActive Nodes in the Resource Manager provide a level of abstraction between the workload to execute and the computing resource. This makes it possible to execute tasks consistently without worrying about the underlying system.

To standardize the environment with appropriate libraries and ensure consistent execution on any computing resource, containerization is a perfect solution. ProActive Resource Manager is Docker-enabled and ensures that each task is executed within the appropriate environment on any resource.
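To illustrate the idea of containerized task execution, the sketch below builds the `docker run` command line that would wrap a task so it runs in the same environment on any node. This is purely illustrative: the helper name, image, and mount layout are assumptions for the example, not ProActive defaults.

```python
def dockerize(task_cmd: list[str], image: str, workdir: str = "/work") -> list[str]:
    """Wrap a task command line in `docker run` for a reproducible environment.

    Hypothetical helper: shows how a container standardizes the task's
    libraries and runtime regardless of the host it lands on.
    """
    return [
        "docker", "run", "--rm",
        "-v", f"{workdir}:{workdir}",  # share the task's working directory
        "-w", workdir,                 # execute inside that directory
        image,
    ] + task_cmd

# Example: the same training script would run identically on any node.
cmd = dockerize(["python", "train.py"], image="python:3.11-slim")
```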

For specific workloads, for instance CPU-intensive or memory-hungry ones, or to satisfy data location and security policies, a selection script feature lets users set up their own requirements in terms of system resources: target a specific host, request a GPU or a given amount of RAM, etc. Such resource allocation policies help execute workloads faster and in an optimized way.
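Conceptually, a selection script answers one question: does this node satisfy the task's requirements? The Python sketch below illustrates that decision; the property names (`gpu`, `ram_gb`, `host`) are hypothetical and not ProActive's actual node property keys or script API.

```python
def node_matches(node: dict, requirements: dict) -> bool:
    """Return True if a node satisfies every stated requirement.

    Illustrative stand-in for a selection script's accept/reject decision.
    """
    if requirements.get("gpu") and not node.get("gpu", False):
        return False                              # GPU required but absent
    if node.get("ram_gb", 0) < requirements.get("min_ram_gb", 0):
        return False                              # not enough memory
    host = requirements.get("host")
    if host and node.get("host") != host:
        return False                              # must run on a specific host
    return True
```

The Resource Manager would run such a check on each candidate node and schedule the task only on nodes that pass.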

Automatic Scaling and Cloud Resource Elasticity

ProActive Resource Manager automatically adjusts computing capacity up or down according to your resource needs. With vertical and horizontal scaling, you can ensure that the number of resources you are using is exactly what the system needs to perform at its best.

A configurable load factor lets you minimize cloud spending by deploying virtual machines only when needed. Min/max virtual machine thresholds ensure you never exceed your budget. A smart, fully configurable elastic policy shuts down unused virtual machines as soon as possible. It also helps prevent time-consuming re-deployments: the release of idle nodes can be delayed to avoid rapid scale-up and scale-down cycles.
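The idle-release delay described above is a hysteresis mechanism: a VM is released only after it has stayed idle longer than a configured delay, so a brief lull does not trigger a costly release-then-redeploy cycle. The sketch below illustrates this under assumed names; it is not the actual ProActive elastic policy implementation.

```python
class IdleReleasePolicy:
    """Illustrative min/max elastic policy with an idle-release delay."""

    def __init__(self, min_nodes: int, release_delay: float):
        self.min_nodes = min_nodes          # never shrink below this floor
        self.release_delay = release_delay  # seconds a node must stay idle
        self.idle_since = {}                # node id -> time it became idle

    def mark_idle(self, node: str, now: float) -> None:
        self.idle_since.setdefault(node, now)

    def mark_busy(self, node: str) -> None:
        self.idle_since.pop(node, None)     # new work cancels the countdown

    def nodes_to_release(self, total_nodes: int, now: float) -> list[str]:
        """Nodes idle past the delay, without going below min_nodes."""
        expired = [n for n, t in self.idle_since.items()
                   if now - t >= self.release_delay]
        releasable = max(total_nodes - self.min_nodes, 0)
        return expired[:releasable]
```

A node that becomes busy again before the delay expires is simply kept, which is what avoids the scale-up/scale-down oscillation.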

The Resource Manager lets you monitor key performance indicators of your computing nodes: remaining disk space, CPU usage, memory usage, network usage, disk I/O, the list of processes, etc. Based on relevant conditions, it seamlessly activates new resources (CPU, memory or VMs) during spikes, automatically or on demand, ensuring continuous quality of service. When demand decreases, it automatically releases resources to contain costs, taking application and workload specifics into account.
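The condition-based triggering described above amounts to comparing aggregated metrics against thresholds. The following minimal sketch shows the idea for CPU load; the function name and threshold values are illustrative, not Resource Manager configuration keys.

```python
def scaling_signal(cpu_loads: list[float], high: float = 0.80,
                   low: float = 0.20) -> str:
    """Decide a scaling action from per-node CPU loads in [0, 1].

    Illustrative threshold check: above `high` means a spike, below
    `low` means sustained underuse.
    """
    avg = sum(cpu_loads) / len(cpu_loads)
    if avg > high:
        return "scale-up"     # spike: acquire resources
    if avg < low:
        return "scale-down"   # low load: release resources
    return "hold"             # within the comfort band: do nothing
```

In practice the same pattern applies to memory, disk or network metrics, each with its own thresholds.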

The IaaS provider monitoring API included in the Resource Manager gives access to vendor-specific information for supervising and controlling the infrastructures you connect through ProActive connectors.