<div dir="ltr"><br><div class="gmail_extra"><br><br><div class="gmail_quote">On Tue, Nov 12, 2013 at 8:16 PM, Andrew Mather <span dir="ltr">&lt;<a href="mailto:mathera@gmail.com" target="_blank">mathera@gmail.com</a>&gt;</span> wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">Hi All<br>
<br>
<br>
I&#39;d like to modify one of our queues by using resources_min to enforce<br>
a minimum requirement for a specific single queue on our cluster. I&#39;d<br>
like to use this parameter to force all jobs sent to this queue to<br>
&#39;ask for&#39; 2 CPUs.<br>
<br></blockquote><div><br></div><div>Setting resources_min by itself doesn&#39;t handle this. resources_min and resources_max are used to filter which jobs a queue will accept (and to route jobs among queues), while resources_default supplies a value for jobs that don&#39;t request the resource at all.</div><div>
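For concreteness, a sketch of the qmgr commands involved; the queue name "projq" is a placeholder, and I'm assuming jobs request processors via ncpus (sites using the nodes=X:ppn=Y syntax may need a different resource, such as procct where the TORQUE version supports it):

```shell
# Sketch only: "projq" is a placeholder queue name; run on the pbs_server
# host as a TORQUE manager/operator. Assumes jobs request CPUs via ncpus.

# Reject submissions that explicitly request fewer than 2 processors:
qmgr -c "set queue projq resources_min.ncpus = 2"

# Supply 2 as the value for jobs that don't request ncpus at all:
qmgr -c "set queue projq resources_default.ncpus = 2"

# Verify the queue attributes:
qmgr -c "list queue projq"

# With only the default set, an explicit "-l ncpus=1" would still win;
# with the min also set, that submission is refused at qsub time.
```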
</div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
The thing I am not sure about is what will happen to jobs already<br>
queued and more particularly, currently running, if they&#39;ve requested<br>
only 1. Will the running ones be killed off for violating the minimum<br>
requirements and will the queued ones simply be held forever ?<br>
<br></blockquote><div><br></div><div>The running jobs will not be killed. I don&#39;t believe it will affect jobs that are already queued either, since these limits and defaults are applied at the time the job is queued.</div>
<div> </div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
Is it safe to do this while these jobs exist, or should I stop the<br>
queue and allow those jobs to drain before making this type of change<br>
? There&#39;s currently a thousand or so jobs queued or running via this<br>
queue, some of which are hundreds of hours into their 1500-hour<br>
walltime run, so I don&#39;t want to kill them off !<br>
<br></blockquote><div><br></div><div>Draining the queue first, as you describe, is certainly the safest option, but I don&#39;t think it&#39;s required.</div><div> </div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">

Also, once this change is made, would a specific request for 1 CPU in<br>
the submission script override this value ?<br>
<br></blockquote><div><br></div><div>If you only set resources_default, then yes: an explicit request for 1 CPU overrides the default. That&#39;s why you need the combination of resources_min and resources_default; the default covers jobs that don&#39;t ask, and the min rejects explicit requests below the threshold.</div><div> </div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">

The reason for the change is that this particular queue is currently<br>
sending a large number of small, CPU intensive jobs onto our nodes,<br>
which currently have hyperthreading enabled, which is causing the<br>
machines to bog down and performance drops right off. This is likely<br>
to be a long-term state of affairs due to the nature of some of the<br>
current projects using the cluster.<br>
<br>
In general, we get sufficient benefit from the hyperthreading that<br>
we&#39;d prefer to leave it on cluster-wide if we could.<br>
<br>
Since all the problem jobs are coming down one particular queue, I<br>
figured that if we could tweak the levers of this queue, we wouldn&#39;t<br>
need to mess with the rest, which on the whole is working fine.<br>
<br>
Thanks for any help you can provide and see you in Denver next week !<br>
<br></blockquote><div><br></div><div>If you want to force all jobs to request at least 2 CPUs, perhaps the easiest way to accomplish this is to 1) instruct all users to do so and 2) create a submit filter that outright rejects non-compliant jobs. You can also do it with resources_min and resources_default, but you need to remember to set the min in every queue, or undersized jobs will simply be routed to whichever queue you forgot to set it on. </div>
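As a rough illustration of option 2: a submit filter is a script named by the SUBMITFILTER directive in torque.cfg. qsub pipes the job script to the filter's stdin, the filter writes the (possibly modified) script to stdout, and a nonzero exit status makes qsub refuse the job. The sketch below assumes a queue named "smalljobs" and the nodes=X:ppn=Y request syntax; both are placeholders, so adjust the parsing to match how your users actually request CPUs:

```shell
#!/bin/sh
# Sketch of a submit filter: reject jobs bound for queue "smalljobs"
# (a placeholder name) that request fewer than 2 processors via ppn.
# qsub delivers the job script on stdin; we must echo it to stdout,
# and a nonzero exit status causes qsub to refuse the job.

filter_job() {
    script=$(cat)   # the job script arrives on stdin

    # Extract the queue and the ppn request from #PBS directives
    # (last occurrence wins, mirroring qsub's behavior).
    queue=$(printf '%s\n' "$script" | sed -n 's/^#PBS -q[ ]*//p' | tail -n 1)
    ppn=$(printf '%s\n' "$script" | sed -n 's/^#PBS .*ppn=\([0-9][0-9]*\).*/\1/p' | tail -n 1)

    if [ "$queue" = "smalljobs" ] && [ "${ppn:-1}" -lt 2 ]; then
        echo "queue $queue requires at least 2 CPUs (e.g. -l nodes=1:ppn=2)" >&2
        return 1
    fi

    printf '%s\n' "$script"   # pass the script through unchanged
}

# Installed as a real filter, the script would end with: filter_job "$@"
```

Note that the filter runs on the submission host for every qsub, so it should stay fast and fail safe; a job that passes the queue with "-q" on the qsub command line rather than in the script would need the filter to inspect its arguments as well.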
<div><br></div><div>David</div><div> </div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<br>
_______________________________________________<br>
torqueusers mailing list<br>
<a href="mailto:torqueusers@supercluster.org">torqueusers@supercluster.org</a><br>
<a href="http://www.supercluster.org/mailman/listinfo/torqueusers" target="_blank">http://www.supercluster.org/mailman/listinfo/torqueusers</a><br>
</blockquote></div><br><br clear="all"><div><br></div>-- <br><div>David Beer | Senior Software Engineer</div><div>Adaptive Computing</div>
</div></div>