From mark.baker at computer.org Wed Jun 12 10:33:01 2002
From: mark.baker at computer.org (mark.baker@computer.org)
Date: Tue Nov 9 01:14:19 2010
Subject: [Beowulf-announce] Two Research Studentships
References: <200206111816.g5BIG1O10539@blueraja.scyld.com>
Message-ID:
Sorry for any cross-postings... Please pass this message on to anyone
you know who may have an interest.
Regards
Mark
-------------------------------------------------------------
Two Research Studentships
Department: School of Computer Science
Institution: University of Portsmouth, the Distributed Systems Group
Location: Portsmouth
Salary: £6,950 - £8,124 p.a. bursary. These are internally funded posts, available for
three years from October 2002. The posts will remain open until filled by suitable
candidates, or until September 2002.
Job description
The School of Computer Science invites applications for two bursaries to be awarded
to exceptional students who wish to study full-time for a PhD within the Distributed
Systems Group. Each bursary will cover fees (EU students only) and a maintenance
component of £6,950 per annum (£8,124 if aged 25 or over) for up to three years.
The Distributed Systems Group (see http://homer.csm.port.ac.uk) is involved in
research, development and implementation in a range of areas and technologies
related to distributed and parallel computing. The exact research topic connected to
each PhD bursary is flexible, but should be related to the activities already being
investigated within the group. Currently the group is actively involved in the
following research areas: Cluster and Grid computing, Java-based middleware (such
as Jini and JXTA), resource monitoring and management, micro-kernels and
pluggable component frameworks, active networks, and performance modelling.
Prospective candidates should have, or expect to obtain, a first or upper second class
degree in computer science, or a mathematically related degree. A strong programming
background and research experience would be an advantage.
If you wish to discuss these opportunities, please contact Dr Mark Baker,
leader of the Distributed Systems Group,
http://homer.csm.port.ac.uk/
Contact
Please send a copy of your current curriculum vitae plus a letter of application giving
some indication of your proposed research and experience to Dr Mark Baker by email
to
The Gravitational Wave Detection group at the University of Wisconsin -
Milwaukee has postdoctoral positions open in grid/beowulf computing.
We operate a 300-node 300 Gflop Beowulf cluster with approximately
24 TB of disk space. Please see http://www.lsc-group.phys.uwm.edu for
information about the group and the cluster.
Please see
http://www.ligo.caltech.edu/LIGO_web/sidebar/fellowships.html#uwm1
for an on-line version of the advertisement below.
POSTDOCTORAL POSITIONS IN GRAVITATIONAL-WAVE SOURCE MODELING,
GRAVITATIONAL-WAVE DATA ANALYSIS, AND GRID COMPUTING
The Physics Department at the University of Wisconsin - Milwaukee
invites applications for postdoctoral research positions in
gravitational-wave source modeling, gravitational-wave data analysis,
and grid computing. One position is available immediately, and
one or more additional positions are expected to become available
between July 1, 2002 and Sept 1, 2002.
The Center for Gravitational Physics at UWM consists of four faculty
members (Bruce Allen, Patrick Brady, John Friedman, and Leonard
Parker), Visiting Assistant Professor Alan Wiseman, Staff Scientist
Scott Koranda, a number of postdocs (Jolien Creighton, Benjamin Owen,
and Koji Uryu) and several graduate students. An additional faculty
member is being actively recruited. The research interests of our
faculty include quantum and classical gravitation, relativistic
astrophysics, quantum field theory in curved spacetime and its
relation to cosmology and black hole physics, gravitational-wave
generation and detection, and cosmological large-scale structure.
Members of the Center for Gravitational Physics play an
important role in the LIGO Scientific Collaboration (LSC)
concentrating on data analysis for the LIGO-I experiment. See
http://www.lsc-group.phys.uwm.edu for details. We are also active
in the Grid Physics Network (GriPhyN) and International Virtual
Data Grid Laboratory (iVDGL) computing collaborations. See www.griphyn.org
and www.ivdgl.org for further details.
We seek PhD-level scientists with expertise in gravitational physics,
grid and high-performance scientific computing, data analysis
algorithm and code development, gravitational-wave data analysis,
or related topics.
The group at UWM has excellent large-scale computing facilities,
including a recently-completed 300-node Linux computing cluster. In
coming years, we will be serving as one of the LSC Tier-II computing
centers, supporting a variety of LSC computing activities.
Applicants should send a CV, publication list, and a brief statement
of their research interests to:
Joyce Miezen, LSC Postdoc Search Committee
joycem@csd.uwm.edu
Physics Department
University of Wisconsin-Milwaukee
Milwaukee, WI 53201
Fax: 414-229-5589
They should also arrange to have three letters of recommendation
sent to this address. Applications will be considered at any time,
provided positions remain open. As of June 12 2002, three positions
remain open.
UWM is an Equal Opportunity/Affirmative Action Employer.
From ostrander at sensors.com Wed Jun 12 16:53:57 2002
From: ostrander at sensors.com (Rob Ostrander)
Date: Tue Nov 9 01:14:19 2010
Subject: [Beowulf-announce] RTExpress v4.0.1 released
Message-ID: <4.3.2.7.2.20020612154456.00df9e20@mailserver>
ISI has released RTExpress version 4.0.1, offering compatibility with
MATLAB 6.1, performance enhancements, and significant improvements to the
RTExpress parallel development tools. RTExpress version 4.0.1 makes high
performance, parallel algorithm development from MATLAB scripts easier
than ever before. Read more in the announcement!
http://www.rtexpress.com
Rob Ostrander
Integrated Sensors Inc.
http://www.sensors.com
phone: 315-798-1377
fax: 315-798-8950
From ssy at prg.cpe.ku.ac.th Tue Jun 18 08:41:03 2002
From: ssy at prg.cpe.ku.ac.th (Somsak Sriprayoonsakul)
Date: Tue Nov 9 01:14:20 2010
Subject: [Beowulf-announce] SCE 1.5 Release Announcement
Message-ID: <001901c216be$2e2ec6d0$0f226c9e@yggdrasil>
The Parallel Research Group, Kasetsart University, is proud to announce the
public release of a new version of SCE, SCE 1.5, a truly integrated scalable
computing environment. SCE is distributed free of charge and includes source
code. SCE development is supported in part by AMD Far East, Inc., Kasetsart
University, and COMPAQ.
SCE 1.5 is available in two forms:
- A full distribution that can be used to build a new diskless cluster.
- A software package that can run on NPACI Rocks clusters and Red Hat
7.2/7.3-based clusters.
New features in SCE 1.5:
- Fast, automatic installation for diskless clusters
- Support for clusters built with NPACI Rocks 2.2.1 (diskful clusters)
- New AMATA technology, providing basic HA support out of the box
- Increased stability
- Built-in automatic dependency checking
- A new configuration-generation tool for building basic configuration files
- Improved performance and many bug fixes
- Built-in computing portal that links to the batch scheduler (SQMS)
SCE Features
- Powerful system management and monitoring tools
- Parallel Unix commands
- System health monitoring
- Web and X Window interfaces
- Powerful user-level cluster middleware
- Global process space
- Fast process creation
- Global signal and event service
- Rich set of APIs for developers
- Simple batch scheduling
- System statistics logging
SCE is available from http://www.opensce.org/
Bug reports can be submitted at http://prg.cpe.ku.ac.th/bug/. Questions or
comments can be directed to sce@prg.cpe.ku.ac.th.
Thank you for using SCE software!
What is SCE?
One of the problems with the wide adoption of clusters for mainstream high
performance computing is the difficulty of building and managing such
systems. There have been many efforts to solve this problem by building
fully automated, integrated software stacks from several well-known open
source packages.
The problem is that these packages were never designed to work together as
a truly integrated system. Drawing on the experience and tools gained from
building many clusters at our site, we decided to build an integrated
software tool that is easy for the cluster user community to use. This set
of software tools, called SCE (Scalable Computing Environment), consists of
a cluster builder tool, a system management tool (SCMS), scalable real-time
monitoring, web-based monitoring software (KCAP), parallel Unix commands,
and a batch scheduler. These tools run on top of our cluster middleware,
which provides cluster-wide process control and many other services. MPICH
is also included. All tools in SCE are designed to be truly integrated,
since all of them except MPI and PVM are built by our group. SCE also
provides more than 30 APIs for accessing system resource information,
controlling remote process execution, managing ensembles, and more. These
APIs and the interaction among software components allow users to extend
and enhance SCE in many ways. SCE is also designed to be very easy to use:
most of the installation and configuration is automated through GUI and Web
interfaces.
Why use SCE?
SCE makes managing a cluster easier than driving it with remote shell
commands from hand-hacked shell scripts. SCE also collects and displays
resource-usage statistics for later analysis. MPI users always get a
freshly updated host list, and all jobs are queued and scheduled onto
automatically discovered compute nodes.
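For illustration, the kind of ad-hoc remote-shell loop described above, which SCE's integrated tools aim to replace, might look like the following sketch. The hostnames and command are hypothetical examples, not part of SCE; a real script would invoke ssh directly, while this sketch only echoes the commands it would run.

```shell
#!/bin/sh
# Hand-hacked cluster management: run one command on every node via ssh.
# NODES and CMD are hypothetical examples.
NODES="node01 node02 node03 node04"
CMD="uptime"

for host in $NODES; do
    # A real script would run: ssh "$host" "$CMD"
    # Echoing the command keeps this sketch side-effect free.
    echo "ssh $host $CMD"
done
```

Every addition, removal, or rename of a node means editing the script by hand, and there is no job queueing, monitoring, or failure handling, which is the gap an integrated environment fills.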
How do I find out more about SCE?
There are many papers giving an overview of SCE and describing its
individual components in detail. All of them are available at
http://www.opensce.org/. Questions may be sent to sce@prg.cpe.ku.ac.th.
You can keep track of development by subscribing to the mailing list at
http://prg.cpe.ku.ac.th/mailman/sce.