NGS

Tuesday, 2 October 2012

We have started to issue certificates with the "new", more secure SHA2 algorithm (to be precise, SHA256). Basically, this means that the hashing algorithm which forms part of the signature is more resistant to attacks than the current SHA1 algorithm (which in turn is more secure than the older MD5).

But only to a lucky few, not to everybody. And even they get to keep their "traditional" SHA1 certificates alongside the SHA2 one if they wish.

Because the catch is that not everything supports SHA2. The large middleware providers have started worrying about supporting SHA2, but we can only really know whether it works by testing it.

So what's the problem? A digital signature is basically a one-way hash of something, encrypted with your private key: S=E(H(message)). To verify the signature, you re-hash the message, H(message), and also decrypt the signature with the public key (found in the signer's certificate): D(S)=D(E(H(message)))=H(message) - and you also check the validity of the certificate itself.

If someone has tampered with the message, H would (with extremely high probability) fail to yield the same result, hence invalidating the signature, as D(S) would no longer be the same as H(tamper_message).
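As a concrete illustration, here is a minimal sketch of that sign/verify round trip in Python, using textbook RSA with toy parameters (the key is absurdly small and the digest is reduced mod n, so this is purely illustrative - never do this for real):

```python
import hashlib

# Toy RSA key (far too small for real use; illustration only).
p, q = 61, 53
n = p * q                          # public modulus
e = 17                             # public exponent
d = pow(e, -1, (p - 1) * (q - 1))  # private exponent (Python 3.8+)

def H(message: bytes) -> int:
    # One-way hash of the message, reduced mod n to fit the toy key.
    return int.from_bytes(hashlib.sha256(message).digest(), "big") % n

def sign(message: bytes) -> int:
    # S = E(H(message)): "encrypt" the digest with the private key.
    return pow(H(message), d, n)

def verify(message: bytes, s: int) -> bool:
    # D(S) with the public key must match a freshly computed H(message).
    return pow(s, e, n) == H(message)

msg = b"pay Alice 10 pounds"
s = sign(msg)
print(verify(msg, s))   # True
# A tampered message yields a different digest, so verification fails
# (with overwhelming probability):
print(verify(b"pay Mallory 10000 pounds", s))
```

A real implementation would of course use a proper padded signature scheme from a cryptographic library rather than raw RSA on a truncated digest.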

However, if you could attack the hash function and find a tamper_message with the property that H(tamper_message)=H(message), then the signature is useless - and this is precisely the kind of problem people are worrying about today when H is SHA1 (history repeats itself; we went through the same thing with MD5 some years ago).

So we're now checking whether it works. So far, we have started with PKCS#10 requests from a few lucky individuals; I'll do some SPKACs tomorrow. If you want one to play with, send us a mail via the usual channels (e.g. email or the helpdesk).

Eventually, we will start issuing renewals with SHA2, but only once we're sure that they work with all the middleware out there... we are also taking the opportunity to test a few modernisations of the extensions in the certificates.

Thursday, 14 June 2012

The first seminar will take place next Wednesday (20th June) at 10.30am (BST) and will give an overview of how accounting is done on the grid, and what it is used for. It will cover the NGS accounting system at a high level and then go into more detail about the implementation of APEL, the accounting system for EGI, including the particular challenges involved and the plans for development.

The speaker will be Will Rogers from STFC Rutherford Appleton Laboratory who I'm sure would appreciate a good audience ready to ask lots of questions!

Please help spread the word about this event to any colleagues or organisations you think might be interested. A Facebook event page is available so please invite your colleagues and friends!

Thursday, 31 May 2012

Quite a lot, actually, is the answer!
The NGS will be hosting a second seminar series this summer; the three seminars focus on the services that we provide for the European Grid Infrastructure (EGI). As with last time, the seminar series will be held using EVO, allowing people from all over the world to participate and to quiz the presenters. The details for this series are:

20th June - Grid Accounting and APEL
This talk will give an overview of how accounting is done on the grid, and what it is used for. It will cover the NGS accounting system at a high level. It will then go into more detail about the implementation of APEL, the accounting system for EGI, including the particular challenges involved and the plans for development.

27th June - GOCDB, the NGS and EGI
This talk will cover a brief overview of the functionality provided by GOCDB, the official repository for storing and presenting EGI topology and resources information. The seminar will explain how it is used within the NGS, recent developments, useful features on the future roadmap and a chance to ask questions about the system. There will also be a short overview of how GOCDB is used in the context of the EGI project.

4th July - The Training Marketplace
The Training Marketplace is a one-stop shop for training developed by STFC and the EGI InSPIRE project. Here you will find information about classroom-based training courses and online training materials, including a repository containing thousands of resources. You can also search for PhD or MSc courses, or for resources for trainers such as a Training CA. The Training Marketplace also lets you advertise your own training, and a freely available gadget enables you to customise and embed the Training Marketplace in your own website. This seminar will talk you through the Training Marketplace and demonstrate how you can embed a customised version of a training calendar, map or repository in your own website.

If you are interested in attending any of these online seminars then please see the webpage for further details and how to join in online. There are also Facebook event pages available for each seminar to help you inform interested colleagues and invite them along.

Friday, 11 May 2012

Two free training events announced in the space of 2 weeks? Don't say we're not good to you!

In case you missed it, last week I announced that we are accepting applications for our "Using e-infrastructure for Research" summer school, which will be held in August. It's completely, totally and utterly fully funded, so there is absolutely no cost to the participants. More information can be found on our website.

Dr Pamela Greenwell, who is based at the University of Westminster, has organised a 3 day training event entitled "Biobytes", a molecular modelling event for bioscientists. Held at Westminster, it consists of breakout groups, seminars, demos and practical workshops.

For more information and details on how to register, see the event page on the NGS website. Don't delay as spaces are limited!

Thursday, 3 May 2012

Yes, it is that time of year again. I've spent this morning opening registration for the 2012 SeIUCCR e-infrastructure summer school. Why a whole morning, you may ask?

Well by the time you've double and triple checked the registration form, put the web page live, sorted out the Facebook event page, written the advertising blurb, put together the news bulletin containing the announcement and tidied up another 101 loose ends, it takes a while!

The summer school takes a similar format to last year's successful event. It will run from lunchtime Monday to lunchtime Thursday with a mix of presentations, hands-on and consultation sessions. It will cover cloud, grid and other e-infrastructures to ensure that attendees gain the widest possible knowledge of e-infrastructure in the UK.

The summer school is primarily aimed at UK engineering and physical science PhD students and postdocs, but researchers from other disciplines can also apply. The school is fully funded, including meals, accommodation and travel - all you have to do is tell us about a problem or issue in your research that could potentially be tackled by the application of e-infrastructure!

For more information and to apply for a place visit the event webpage.

Tuesday, 24 April 2012

Hopefully you'll have already seen the announcement on one of our many communication channels such as our website, Facebook page or Twitter feed but if not then read on.

Many of you will remember the changes we brought in in April 2011. Due to funding restrictions, we had to reduce the allocation of all users to a maximum of 2000 free CPU hours per year. You can read the original announcement on our website. Now that we are a year on, all NGS users can apply for another free 2000 CPU hours.

If you are looking for some proof-of-concept computing, a "sand pit" area for your PhD students, or to test concepts before purchasing more hours, then this is an ideal opportunity.

If you have any queries at all then don't hesitate to contact the NGS helpdesk.

Monday, 16 April 2012

It is worth pondering how scientific programming is different from other programming. Last year I gave an introductory talk on specialised languages used for science (in which I included Fortran but mainly covered R, APlus, and suchlike). How do you do "hello, world" in science? It has to be floating point, so I picked calculating the length of a vector.

Let's just digress for a second to do that. Say I want to calculate the length of a vector (v_i); I can then start with s=0 and loop over i, adding v_i², and finally take the square root of the sum:

my $s=0; foreach (@v) { $s += $_ * $_; } my $length = sqrt($s);

Or we can do it more functionally, creating a new vector of squares ("map"), the elements of which are then added together ("reduce"):

(sqrt (reduce #'+ (map 'list (lambda (x) (* x x)) v)))

... which is the origin of the MapReduce paradigm, but it has the disadvantage of creating a temporary copy (here a list) of the squares. But if you are computing the squares in parallel, with each task squaring its own entry (which you might if v is large), then you do need to keep the intermediate results anyway.
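For comparison, the same map/reduce shape can be sketched in Python (here functools.reduce plays the "reduce" role, and Python 3's map is lazy, so the temporary list never fully materialises unless you ask for it):

```python
from functools import reduce
from math import sqrt

v = [3.0, 4.0]

# "map" produces the squares; "reduce" folds them together with +.
squares = map(lambda x: x * x, v)
length = sqrt(reduce(lambda a, b: a + b, squares, 0.0))
print(length)  # 5.0
```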

Then there are questions of precision and suchlike, for which David Goldberg's paper is still one of the best introductions. This is in contrast to "normal" programming, where one should read Zen and the Art of Motorcycle Maintenance (but see also 10 papers).

We can then ask how science use of * is different from normal use of * (where * is anything). Do scientists use the cloud in a different way from non-scientists?

With this in mind, JISC and STFC co-organised a workshop on scientific computing in the cloud (and grids). Funded by EPSRC, and with about 75 registered participants and 15 speakers from the UK and beyond, it focused on the science use of cloud (and grid) resources. There were a number of discussions on cost effectiveness, cost models, and the true cost of doing science in clouds compared to your own (university's) resources. How careful should you be about putting your data "in the cloud"? (And here we are just talking about analysis of data, not long-term storage.) How do you convince sceptical users?

It seems that some of the lessons learned from the grid carry over to the cloud world: the use of gateways and portals is a useful way to get researchers started using the cloud, but then someone needs to build these things for the research communities - and they will in general be domain specific. And building these cannot just be a proof of principle; they have to be production ready and supported.

Of course e-scientists have scientific applications, specialised libraries, and repositories of libraries - and every e-science programmer should know their BLAS and LAPACK... on the other hand, the presence of gateways and portals brings hope to the "ordinary" researchers who want to make the most of the brave new world of the fourth paradigm but are not themselves programmers and choose (rightly) to focus on their science.

Science use of clouds may have learnt from science use of grids, but clouds also introduce new issues. We agreed at the workshop that it was worth pursuing the case studies. There was no single "pain point" for everyone, but everyone learnt from each other. Supporting scientific research in the clouds (and grids) is a research topic in its own right, bringing together computing, science, best practices, usability, security, performance, and more - and as long as we continue to share experiences, the researchers who use the infrastructure will benefit.

Welcome!

This is the blog for the UK National Grid Service (NGS) which aims to enable coherent electronic access for UK researchers to all computational and data based resources and facilities required to carry out their research, independent of resource or researcher location.