Expanding enCORE with CFD
HPCwire, Fri, 19 Jul 2013

British HPC systems integrator OCF has made noise with regard to high performance applications in the cloud, bringing online an 8,000-core HPC cluster called enCORE last year. This week, the company announced an expansion of enCORE's capabilities, bringing in the XFlow computational fluid dynamics (CFD) software.

Jerry Dixon, enCORE new business development manager at OCF, spoke with HPC in the Cloud about the challenges of bringing the CFD software to enCORE, and about the prospects for OCF and enCORE as they look to push deeper into manufacturing and other cloud-relevant HPC fields.

“We’re now starting to extend our horizons, to take on larger organizations with significantly bigger compute and storage requirements,” Dixon said.

According to Dixon, the project took off when he met Matt Hieatt, the commercial director of FlowHD, the company that sells XFlow in the UK and Ireland, at an event held at the Hartree Centre, where the enCORE cluster is housed. The two then paired with Dragon HPC, a remote visualization platform that aids in viewing large datasets, which is especially helpful in a manufacturing context. The collaboration has already seen movement in the automotive sector, according to Dixon.

“The engagement with Dragon HPC, pretty much a state of the art remote visualization company, together with the Hartree Centre, we operated to put this solution together, which is initially being used by a very significant automotive vendor,” Dixon said.

Usually, introducing software like XFlow into an 8,000-core cluster like enCORE carries a significant time cost. However, Dixon noted that Hieatt's experience working with HPC clusters made the transition relatively seamless. "Working with Matt and his team, we have pretty quickly got XFlow running successfully on the cluster," Dixon said.

What made FlowHD interested in working with enCORE was the cluster's ability to scale properly, even beyond 1,000 cores. Cloud environments, even ones designated for HPC workloads, do not always scale well, as communication latency between servers tends to grow as clusters sprawl. enCORE's servers, however, are fairly tightly integrated, as noted in an earlier HPC in the Cloud article by Tiffany Trader.

“Each of the 512 nodes sports Intel SandyBridge 8-core CPUs with either 36Gb or 128Gb RAM. This is true HPC-as-a-Service; the compute nodes are not virtualized. Service users can also access 48 GPU nodes outfitted with NVIDIA Tesla 2090 GPUs,” Trader wrote of enCORE in October of last year.

As a result, scaling happens much as it would on a typical HPC system (one not designated as a cloud, that is). "We have achieved very close to linear scalability, which is an important factor to [FlowHD]," Dixon said. "Matt and his team ran tests on about 1,000 cores, so the performance scale-up is as close to linear as you're likely to get. The performance from their point of view was more than acceptable."
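The "close to linear scalability" Dixon describes is conventionally quantified as speedup and parallel efficiency. Here is a minimal sketch in Python; the timing figures are illustrative assumptions only, not FlowHD's actual benchmark results:

```python
# Speedup S(n) = T(base)/T(n); parallel efficiency E(n) = S(n) / (n / base).
# An efficiency near 1.0 means scaling is close to linear.
def scaling_report(timings):
    """timings: dict mapping core count -> wall-clock seconds.
    The smallest core count serves as the baseline."""
    base_cores = min(timings)
    base_time = timings[base_cores]
    report = {}
    for cores, t in sorted(timings.items()):
        speedup = base_time / t
        ideal = cores / base_cores
        report[cores] = {"speedup": round(speedup, 2),
                         "efficiency": round(speedup / ideal, 2)}
    return report

# Hypothetical timings for a CFD run at three scales.
print(scaling_report({64: 1000.0, 256: 260.0, 1024: 70.0}))
```

With these made-up numbers, efficiency stays above roughly 0.89 at 1,024 cores, which is the kind of profile "as close to linear as you're likely to get" implies.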

Those performance gains come from preliminary work with XFlow on enCORE; Dixon mentioned that only four instances are currently provisioned. The plan is to scale that out to the planned extent using ScaleMP, with which tests have already taken place.

When a provider has made 8,000 cores available as HPC-as-a-Service, as OCF has, the challenges shift from virtualization concerns, such as those that exist in AWS HPC instances, to matters like ensuring security and what Dixon referred to as 'dynamic licensing.'

Interestingly, Dixon noted that, while security was of course still an issue, his clientele has started to accept the notion of computing in the cloud. “Security has always been an issue and always will be, but in the last few months talking to customers, I think it’s becoming less of an issue. There’s more of an acceptance of cloud technology, using remote infrastructure,” he said.

Licensing, on the other hand, is another issue entirely. According to Dixon, the limitation comes in the form of costs rising disproportionately to the number of CPU cores used. This limitation exists generally with regard to commercially available software like XFlow, created for usage by manufacturing vendors and the like.

“With commercial software, the limitation is very often the fact that the license is based on the number of CPU cores that you run on,” Dixon said. “If you want to run that on multiple cores, you run into significant costs to operate a license. In many cases, it makes it economically [unsustainable].”

Licensing agreements add to the list of obstacles a provider like OCF must clear, alongside the sheer volume of datasets that plagues nearly every HPC cloud provider. While some of those datasets are still transmitted by shipping physical hard drives through the mail, Dixon noted that the partnership with the visualization platform Dragon HPC eases the process of paring down one's data for submission to cloud systems such as enCORE.

The results so far on that front have been promising. "Typical engineering codes can produce large datasets; that's where the remote visualization comes in with Dragon HPC," Dixon said. "The performance we've seen from that has been really good."

Beyond CFD, OCF and enCORE look to incorporate more engineering and manufacturing codes into their system, from both the commercial and open source markets.

Warning to Cloud Adopters: Check the Fine Print
HPCwire, Mon, 25 Feb 2013

Central to the attraction of cloud computing is the "pay for only what you use" claim made by almost all service providers. But look a little deeper into the contracts and you will often find that they are not quite as attractive as they first appear.

Hidden away in the small print of many cloud service contracts is a range of charges that customers may not spot until it is too late; that is, until very large bills for data processing or data storage have made their eyes bulge. The problem is compounded because individual users, potentially quite junior ones, can rack up significant bills without any clear financial monitoring or control. So suppliers really need to be more transparent, and customers need to demand price clarity; otherwise these surprises will severely limit the growth of the cloud services business.

Here are some pricing examples:

Cloud pricing will often incur general pay-per-use costs, but a supplier may insist on a minimum charge of, say, two hours, even though the customer requires only 20 minutes of compute power.

Similarly, part hours are often rounded up for billing. For example, 2 hours and 5 minutes of usage would be charged as three hours.

Users may also find that attractive initial rates apply only up to a certain level of use, and beyond that premium rates kick in, escalating the overall cost.

Often missed by users in the small print are charges for bandwidth in / out, i.e. the transfer of data. Even if spotted, this is a cost that may be difficult to estimate and even harder to control. So be very aware of these charges.

Suppliers can charge for the number of users accessing the resource – with costs increasing in stages, e.g. 5 users or 10 users – similar to the traditional software licensing model that most are familiar with.

None of these charges is particularly unfair on its own. However, when the different pricing structures are combined without transparency, they create a very confusing picture for customers, who face the challenge of implementing controls over use of the service if they are to deliver the benefits defined in their business case for cloud.
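The combined effect of these structures is easiest to see in a worked example. The sketch below estimates a bill under hypothetical rates; the minimum-charge, tier, and bandwidth figures are assumptions for illustration, not any real provider's published schedule:

```python
import math

# Illustrative rates only -- real providers publish their own schedules.
MIN_BILLABLE_HOURS = 2        # minimum charge, even for a 20-minute job (assumed)
TIER1_LIMIT_HOURS = 100       # attractive initial rate applies only up to here (assumed)
TIER1_RATE = 1.00             # cost per hour within the initial tier (assumed)
TIER2_RATE = 1.50             # premium rate beyond the limit (assumed)
EGRESS_RATE_PER_GB = 0.12     # bandwidth-out charge per GB transferred (assumed)

def estimate_bill(usage_minutes, data_out_gb):
    # Part hours are rounded up: 2h05m bills as 3 hours,
    # and anything under the minimum bills at the minimum.
    hours = max(math.ceil(usage_minutes / 60), MIN_BILLABLE_HOURS)
    tier1 = min(hours, TIER1_LIMIT_HOURS) * TIER1_RATE
    tier2 = max(hours - TIER1_LIMIT_HOURS, 0) * TIER2_RATE
    egress = data_out_gb * EGRESS_RATE_PER_GB
    return tier1 + tier2 + egress

print(estimate_bill(20, 0))     # 20 minutes still bills the 2-hour minimum
print(estimate_bill(125, 0))    # 2h05m rounds up to 3 billable hours
print(estimate_bill(7200, 50))  # 120h: tier 1 + premium tier 2 + egress
```

Even this toy model shows how rounding, minimums, tiers, and egress interact to produce a bill that is hard to predict from the headline per-hour rate alone.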

On a related point, data duplication can also affect cloud storage costs and should be taken seriously. It is not uncommon for multiple versions of a file to exist that are mostly identical and are not necessarily required, but are costing the user for their storage. A 2012 study from Johannes Gutenberg University Mainz, the Barcelona Supercomputing Center and University of Hamburg, on high performance computing (HPC) data sets, found typically 20 to 30 percent of this online data could be removed by applying data de-duplication techniques, peaking up to 70 percent for some data sets.
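Deduplication of the kind applied in that study can be sketched as content-hashing fixed-size chunks: any chunk whose hash has been seen before is a duplicate and need not be stored again. A minimal illustration, with an assumed 4 KiB chunk size and synthetic data:

```python
import hashlib

def dedup_savings(blobs, chunk_size=4096):
    """blobs: iterable of bytes objects (stand-ins for stored files).
    Returns the fraction of stored bytes that are duplicate chunks."""
    seen = set()
    total = dup = 0
    for blob in blobs:
        for i in range(0, len(blob), chunk_size):
            chunk = blob[i:i + chunk_size]
            digest = hashlib.sha256(chunk).digest()
            total += len(chunk)
            if digest in seen:
                dup += len(chunk)
            else:
                seen.add(digest)
    return dup / total if total else 0.0

# Two sample "files": the second repeats the first half of the first,
# so a third of all stored bytes are duplicates.
a = b"".join(bytes([i]) * 4096 for i in range(4))  # four distinct 4 KiB chunks
b = a[:8192]                                       # shares the first two chunks
print(dedup_savings([a, b]))
```

Real deduplication systems are more elaborate (variable-size chunking, persistent hash indexes), but the measurement idea is the same as in the study cited above: hash the data, count what repeats.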

Tarnish

There is growing concern amongst IT and finance managers over this issue, and they are increasingly asking searching questions about the fee structures of cloud services. This is filtering down to users; many are hearing the message from employers to take a cautious approach and to be wary of anything carrying the 'cloud services' badge. Naturally we expect users to be cautious and certainly don't want to discourage that level of wariness.

As with everything, the devil is in the detail. Right now, users should look for a supplier that enables real-time self-monitoring of spend while they are using a cloud service, not when the bill drops onto the doormat. Companies must set in-house policies to control usage; it is very easy to get carried away on a cloud service. Users must check the fine print of the contract and ask questions if it is too confusing to understand; look for a supplier that is prepared to explain the billing process, not rush you into signing a contract. Perhaps most importantly, pick a supplier, partner up and let them guide you through the process; plug and play isn't always cheapest in the long run.

About the Author

Joining OCF in October 2010, Jerry Dixon is Compute-on-Demand business development manager responsible for OCF’s “enCORE” service. His role encompasses the recruitment of a network of academic and research server clusters to contribute server power to enCORE, such as the Science and Technology Facilities Council’s (STFC) Hartree Centre. He works with application software vendors, develops the market and provides expert advice and consultancy to customers. Jerry’s background is in managed IT services, having previously worked for Calyx UK and ServiceTec Limited.