
I would like to point out a somewhat glaring inaccuracy in the article linked in the parent post.

The article author claims:

"...Global File System (which is proprietary anyway, available from Redhat)..."

Except, GFS is NOT proprietary. Behold, the source code:

http://sources.redhat.com/cluster/gfs/

And by the way, as a first impression, I think Advogato sucks if only because there is no obvious way to contact the author, reply to the article to point out this inaccuracy, or find anyone at the site to contact about it.

Indeed, this is a very interesting development. With an LGPL license for DFS, it's time to give the DCE descendant of AFS another look.

But we have AFS, too, and although OpenAFS is not GPL-compatible, it's free software in a real sense, and more importantly, it has a living community of developers who've worked on the code stretching back into the 1980s.

I'm not as convinced now as I might have been 3 years ago that DCE is a better mousetrap than Rxgk is shaping up to be.

the crucial thing is to find, and quickly, the people who are depending on this code and who have been let down by ibm's end-of-lifecycle decision.

dfs is just far too good at what it offers to let it go piss down the toilet.

remember: dfs was the stuff that transarc got their teeth into _after_ they released afs, and so they had a few more years to hammer at an _already_ stunningly good bit of code.... where's me openafs mailing list alias gone?

it's a shame IBM stalled the release for four years, but they're interested in making money: if there were major contracts they were still pulling in, there was no reason for them to hand it all over on a plate.

remember, they would have _just_ finished adding LDAP to their DCE 3.0 internal proprietary version.

now, of course, this code is end-of-lifecycle as far as they are concerned, and a large number of companies and universities are in deep doodoo unless the open source community picks it up.

what goes around comes around: if DCE/RPC's profile is raised, it will hopefully stop people from reinventing solutions to problems that DCE solved _years_ ago and that certain kinds of software will still need.... not to mention that there are contracts and systems still in existence that mean DCE just ain't gonna diiieeeee:)

plus, Corba is object-oriented, and its "counterpart" is DCOM (which uses DCE/RPC underneath). a lot of people make this mistake - seen it about five times on slashdot in the past.

My first thought was to say "DCE/RPC under the LGPL! Wow! Would you mind telling us what the hell the thing is?"

But I figured I'd be socially productive, RTFA, and post an explanation myself.

The OSF Distributed Computing Environment (DCE) is an industry-standard, vendor-neutral set of distributed computing technologies. DCE is deployed in critical business environments by a large number of enterprises worldwide. It is a mature product with three major releases, and is the only middleware system with a comprehensive, integrated set of services.

have a quick read of the advogato article as well - it gives a few more details. some people have been working on or with this stuff for _twenty years_:)
we're so so incredibly privileged to have been granted this opportunity.

The Distributed Computing Environment (DCE) is a software system developed in the early 1990s by a consortium that included Apollo Computer (later part of Hewlett-Packard), IBM, Digital Equipment Corporation, and others. The DCE supplies a framework and toolkit for developing client/server applications. The framework includes a remote procedure call (RPC) mechanism, a naming (directory) service, an authentication service, and a distributed file system (DFS). DCE RPC was derived from an earlier RPC system called the Network Computing System (NCS) created at Apollo Computer. The naming service was derived from work done at DEC. DCE DFS was based on the Andrew file system (AFS), originally developed at Carnegie-Mellon University, and later extended by Transarc Corporation (which was later merged into IBM).
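Not DCE itself, but the client/server RPC model it standardized is easy to sketch. Below is a minimal illustration using Python's stdlib XML-RPC as a stand-in (a hypothetical modern substitute, not DCE's actual wire protocol): a server registers a procedure, and a client invokes it as if it were a local call.

```python
import threading
from xmlrpc.server import SimpleXMLRPCServer
from xmlrpc.client import ServerProxy

# Server side: bind to an ephemeral port and expose a procedure by name.
server = SimpleXMLRPCServer(("127.0.0.1", 0), logRequests=False)
port = server.server_address[1]
server.register_function(lambda a, b: a + b, "add")

t = threading.Thread(target=server.serve_forever, daemon=True)
t.start()

# Client side: the stub makes the remote call look like a local one --
# the core idea DCE RPC (and NCS before it) was built around.
client = ServerProxy(f"http://127.0.0.1:{port}")
result = client.add(2, 3)

server.shutdown()
```

In DCE the stubs would be generated from an IDL file and the binding found through the directory service; here the transport and naming are collapsed into a URL for brevity.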

I worked with the original NCS back on Apollos. Wow, what an elegant solution for the time. It makes me mad that Sun beat out Apollo for the workstation market, even though Apollos were generations more capable.

Until very recently with Macs, Apollo's distributed directory was unrivaled for ease of integration of new nodes into the network. Plug it in, and you're not only on the net, but you have the same account, device, and filesystem configuration as every other node on the network. Don't have a disk? Eh, we'll boot you over the network.

the lock-out you describe was done by _microsoft_ as part of their use of kerberos in "active directory": they used the "application specific" field in order to save on round-trips (and then extended their bloody SMB protocol in order to _add_ a couple. bastards).

DCE did a "proper" job by using the available fields of kerberos for the correct - documented - purpose.

the use of CDS being largely irrelevant was recognised by TOG in 1999: you need to pay IBM stacks of $$$ to get the code _but_ it was recommended that CDS be replaced with LDAP.


And now that it is open sourced, perhaps someone (or me, whatever:) can get around to fixing the screwy case issue with dce cell naming that prevents us from making a one-way trust setup between active directory and dce (having the ms kdc be trusted by the dce cell).

I used Motif yesterday, in fact. While certainly ugly and headache-prone, it does have some significant advantages. It's ubiquitous and available everywhere. It's fully documented. It has a stable API (unheard of among other high-level X11 toolkits). And it's much, much, much easier than using bare Xlib.

I wouldn't recommend it to most people, as it's still low level enough to bog you down in the UI instead of the backend. But it's hardly "abandonware".

This is a disturbing trend I've seen cropping up a few times lately, but it seems like all of their useful introductory documentation (at least what they refer to on their website) is available in book format that you have to pay money for. Is the code really open and free if you have to pay money to learn how to use it?

Short answer: yes.
Long answer: the code being Free means the code is Free. It is released under the LGPL. If you can't look at the code and figure it out, what does it really matter anyway? On top of this, if you are involved in a large project with many developers, chances are your organization will pay for it.
The API is well documented in more places than just their pay-per-book service.

In '93, I was making the big bucks at a defense contractor because I could tell them how/where to use DCE.
It is interesting to see the difference between the openness of the OSF and the openness of the open source movement [all that gnu software!] begin to blur.
I hope that exposure of the security code buried in DCE, especially where it uses kerberos, will help pollinate other open source projects with improved security features.

...and for those of you who are still wondering what TFA is about, note that just about every big system and OS vendor [uni-muenster.de] has its own version of DCE. It has been the foundation for a lot of securely networked applications.

I really hate to be an annoying terminology pedant -- but "all that gnu software" should really be called free software, not lumped together with the "open source movement". The free software movement was around first, after all, and IMHO has certainly earned the right to be called by its preferred name. There is a difference, and I think that both camps can see the benefit of using the appropriate terminology. The FSF obviously appreciates the distinction, and people who prefer the open source term should too.

It's been a while since I've looked at it, but wasn't DCE hijacked by the Evil Empire? It was put together by the OSF, now called the Open Group, and it seems bittersweet to have it released as free software now. If only they had had the foresight to open it from the start.

they didn't steal it, but from what i can gather they took the DCE 1.1 reference implementation (available under a BSD-like license before most people had even _heard_ of free software licenses!) which is basically "stubs"...... and then they integrated it with NetBIOS and SMB (inventing ncacn_np, which is DCE/RPC over NT's NamedPipes - heard of those? look up CreateNamedPipe on the MSDN:)... and then they added WINS as a resolver...... and then they added NTLMSSP authentication...... and then they created DCOM on top.

They didn't write it from scratch, however; they reverse engineered the DCE RPC. MS RPC is based upon it and, with a little hacking, will work with DCE RPC. They did this to avoid paying the full licence from the OSF. "Stealing" may be a bit of hyperbole, but it wasn't exactly innovative, either.


They reverse engineered the spec, I think you mean. Kinda like what the SAMBA group did when you think about it, and they get a ton of props around here...

DCOM is literally a reverse-engineered DCE-RPC, to the point where it is wire-compatible with it. DCE-RPC is an authenticated RPC which uses KerberosV for the authentication token, and since DCE puts group information into the EPAC (like MS did with their Kerberos) it also allows for group-based authorization at the RPC level.

Microsoft ripped out all the security (who is surprised?) and called it DCOM. Of course the IDL compilers are different, so they are not compatible at that level, but once compiled, a DCE RPC client/server can talk to a DCOM client/server, assuming you are not trying to use any of the security built into DCE-RPC.

none - the reference implementation was available almost right from the start - i _think_ - otherwise microsoft wouldn't have been able to get hold of it and use it for Windows NT 3.1.

FreeDCE, however, has _two_ security plugins: GSS-API (thanks to luke howard), and NTLMSSP (code from samba tng which i wrote, based on my and paul ashton's "welcome to the samba domain" work in august 1997)

Of course the IDL compilers are different, so they are not compatible at that level, but once compiled, a DCE RPC client/server can talk to a DCOM client/server, assuming you are not trying to use any of the security built into DCE-RPC.

That's not entirely true. DCOM is layered on DCE-RPC, yes, but actually activating and programming a remote DCOM object is a lot more work than just using the RPC APIs. DCOM adds a kind of "object-oriented"-ness to it, so you'd have to be able to understand OBJREFs and so on.

This [opengroup.org] touches on it. I used to have some proof of concept code that did this but I cannot find it:(

Basically do a regular old DCE-RPC call to a DCOM server and just do not use any of the DCE provided security or directory calls and it will work. (at least it did in the NT 4.0 days, I'm not 100% sure about today)

"Microsoft have felt the need to use an RPC mechanism, though they didn't want to write their own from scratch so it was suggested they use the best in the industry (already chosen by the OSF) - legend has it that they approached the OSF for DCE RPC but didn't want to pay the licence fees. What Microsoft _did_ do was to take the Application Environment Specification and a network sniffer and reverse engineer the DCE RPC. MS RPC is based upon, and, with a little application (Like OEC Enterra) will work with DCE RPC."

The EOSDIS/ECS project, http://eospso.gsfc.nasa.gov/ [nasa.gov], is a good place to start looking at the project I was on. It's currently the largest satellite data processing and science data repository on the face of the planet.:) (toot toot... there goes my own horn;))

Anyway... DCE was used to tie several servers together which are the core of the system. I found it very reliable and solid (and that was several years ago).

Microsoft's RPC framework [microsoft.com], which is built into Windows, is actually an implementation of DCE. While it's a long time since Microsoft used it directly, it's a nice platform for remote communication; it's a mature API that supports a wide variety of protocols (e.g., TCP, UDP, local pipes), authentication mechanisms, marshaling mechanisms, etc.

Microsoft's COM (also known as DCOM) sits on top of this RPC layer to implement a distributed component object model -- one of Microsoft's finest and most underrated inventions. It's also one of their most copied technologies -- KDE, GNOME, OpenOffice (UNO) and Mozilla (XPCOM) all implement very similar object models.

Of course, DCE RPC is also famous for the UUID [wikipedia.org] (aka GUID [wikipedia.org]) algorithm -- 128-bit identifiers whose uniqueness is effectively guaranteed as long as the generator can access a network card with a unique MAC address.
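For illustration, Python's stdlib `uuid` module (not the DCE code itself) generates these same time-based UUIDs: a 60-bit timestamp, a 14-bit clock sequence, and the 48-bit node (MAC) address are packed into 128 bits, so two UUIDs generated on the same machine share the node field but differ in the time fields.

```python
import uuid

# Two version-1 (time-based) UUIDs from the same machine.
a = uuid.uuid1()
b = uuid.uuid1()

assert a.version == 1    # the DCE/RFC 4122 time-based variant
assert a != b            # the time fields differ between calls...
assert a.node == b.node  # ...but both carry the same node (MAC) field
```

(If no usable MAC address is available, `uuid1` falls back to a random node value, which it caches for the life of the process, so the node comparison still holds.)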

I don't know the details, but I believe that at that time, Microsoft were still on IBM's team developing OS/2. Windows and OS/2 are very similar. Also, both SOM and COM were inspired by CORBA. And there are many differences; COM used DCE-RPC and added UUIDs to interfaces, for example, whereas SOM relied on simple names.

What these technologies had in common was that they implemented binary interface compatibility between components, in a way that seemed to be the wave of the future.

In theory, yes; that'd be the case if we were talking about something like a standard. In reality, there's only a single implementation of COM, which today includes the distributed object support; it's all DCOM now.


Not true. The product I work on has its own implementation of COM, but does not use DCOM at all. The standard parts of COM are well published in books (e.g. the Don Box COM book, the Microsoft ATL book, etc. - both excellent).

If you fancy tinkering around with operating system internals, it's hard to do better than OSKit.

This is very true; I'd go further and say if you want to experiment with OSes, want the result to be usable, but don't want to implement the boring but difficult to get right bits, you can't do better than OSkit. Check out Christopher Browne's Novel OS work page [cbbrowne.com] for leads to cool things.

I've never programmed in RPC directly, but I do know that it has been a horrible nightmare in terms of security for both the MS and UNIX platforms for many years.

You can't make an open-ended statement like that and not provide an explanation.

DCE/RPC (which is what MSRPC basically is) provides integrity and confidentiality using the session key. If you don't properly check input, then yes, you're going to have buffer overruns. If you want to program like that, use Java.
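The input-checking point is easy to make concrete. A sketch (in Python, and using a generic length-prefixed string rather than DCE's actual NDR encoding): an unmarshaler must validate the attacker-controlled length field against the buffer it actually received before slicing, which is exactly the check whose absence causes overruns in C servers.

```python
import struct

def unmarshal_string(buf: bytes) -> str:
    """Decode a length-prefixed string: a 4-byte big-endian length
    followed by that many bytes of UTF-8 payload (a generic wire
    layout, not DCE NDR specifically)."""
    if len(buf) < 4:
        raise ValueError("truncated header")
    (n,) = struct.unpack(">I", buf[:4])
    # The crucial check: trust the *buffer you have*, not the
    # attacker-supplied length field.
    if n > len(buf) - 4:
        raise ValueError("declared length exceeds payload")
    return buf[4:4 + n].decode("utf-8")
```

A C implementation that `memcpy`s `n` bytes without the bounds check is the classic RPC-server overrun; the session-key crypto protects the message in transit but does nothing about a malicious peer sending a hostile length.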

While there is technically a difference between the two protocols, if you are careful when you are developing software for either you can do both simultaneously. Basically, tying yourself to just COM objects can in the long term kill you as a developer.

Still, trying to create well-behaving COM objects is always tricky, and sometimes they can give you massive fits. In addition, the DCOM procedures provide massive, and I mean massive security holes if you don't watch the default configurations carefully.

Microsoft's COM (also known as DCOM) sits on top of this RPC layer to implement a distributed component object model -- one of Microsoft's finest and most underrated inventions. It's also one of their most copied technologies -- KDE, GNOME, OpenOffice (UNO) and Mozilla (XPCOM) all implement very similar object models.

COM and DCOM are not the same thing: COM is a local component model, DCOM is a distributed layer on top of that.

And, no, this is not "Microsoft's invention", it is Microsoft's adaptation of earlier component technology.

About 8 (?) years ago I was working on an architecture for a client server system - we had a mix of Unix and Microsoft servers and we wanted something that would tie them together so we could use the best that each had to offer.

I faced the same problem, but about 5 years ago. My solution was to use ONC (Sun) RPC instead of DCE. ONC RPC has been supported on Linux / Unix forever, and I found a port to Windows (from the original Sun code) that worked nicely.

That sounds impressive until you realise that you can simply use the MAC address instead.

The MAC address is a single unique identifier. A UUID is a space of unique identifiers -- it's a product of the MAC address, the clock, a random seed, etc. You generate UUIDs; you can't generate new MAC addresses.

DCE is the core middleware at PSU and has been for years. The access account you use for everything is a DCE principal (which ends up being KerberosV + some stuff).

The PASS filespace is DFS, which is the distributed filesystem component of DCE. Webmail and the Portal (webmail.psu.edu, portal.psu.edu) are built on top of the filesystem.

eLion is a client/server application that uses Smalltalk on the web front end and Natural/Adabas for the backend (running on an IBM zSeries mainframe). A custom, in-house developed DCE RPC middleware mechanism is used to get them to talk to each other. This lets us do dynamic load balancing without special hardware, adding and removing backend servers and automatically having them put into the locally managed "server pool" on each web server front end, and validating the calls on the backend via the kerberos credentials of both the web server and the user making the call. (can you guess what I did for the last 3 years?)
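The dynamic server-pool idea is simple to sketch. This is a toy illustration, not PSU's actual middleware (the class and server names are made up): each front end keeps a locally managed list of live backends and rotates across it, rebuilding the rotation whenever a backend is added or removed.

```python
import itertools

class ServerPool:
    """Toy sketch of a locally managed backend pool: round-robin
    over whichever servers are currently registered."""

    def __init__(self):
        self._servers = []
        self._cycle = iter(())

    def _rebuild(self):
        # Restart the rotation whenever membership changes.
        self._cycle = (itertools.cycle(self._servers)
                       if self._servers else iter(()))

    def add(self, host):
        self._servers.append(host)
        self._rebuild()

    def remove(self, host):
        self._servers.remove(host)
        self._rebuild()

    def pick(self):
        return next(self._cycle)

pool = ServerPool()
pool.add("backend-a")
pool.add("backend-b")
first, second, third = pool.pick(), pool.pick(), pool.pick()
pool.remove("backend-a")  # traffic now goes only to backend-b
```

The real system additionally authenticated each call with the Kerberos credentials of both the web server and the user, which a sketch like this leaves out.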

Now, IBM has end-of-lifed DCE, which screws us (and several National Labs, Merck, Cal Poly Tech, Buffalo U, Paine Webber, a handful of other universities, etc). PSU is migrating off of it to MIT KerberosV, LDAP, and a "yet to be determined filesystem" (probably OpenAFS, which is a 10-year step backward), and I have absolutely NO idea how we will replace the RPC.

Anyway, PSU people have been using DCE heavily for about a decade and many didn't even know it:) It really was/is a cool and powerful system. Its one major failing is the complexity and effort needed to set it up.

Pardon my cynicism, but does anyone else get the impression that the new End of Life announcement is framed in terms of "we are pleased to announce the open source release of..."?

i.e. Let's outsource support for this sucker! I mean, how excited am I supposed to get, in 2005, about a technology that allows me to marshal/unmarshal data and call remote procedures over the 'net? Isn't that already being done (a lot) by the various CORBA and RPC stuff already running on my Linux box?

With kerberos, pam, ldap and NFSv4, it seems like alternatives are available. And the 90% of enterprise computer users on Windows needing authentication and directory services are getting embraced by AD.

Plus, last time I remember using DCE/DFS about 7 or 8 years ago it was sloooooow.

The Open Group was formed by the merger of X/Open and the Open Software Foundation. The use of "open" in all those names predates the phrase "open source." The term it relates to is "open systems," which refers to standardized Unix systems, as opposed to mainframes.

Since the introduction of DCE, Microsoft have felt the need to use an RPC mechanism, though they didn't want to write their own from scratch, so it was suggested they use the best in the industry (already chosen by the OSF) - legend has it that they approached the OSF for DCE RPC but didn't want to pay the licence fees. What Microsoft _did_ do was to take the Application Environment Specification and a network sniffer and reverse engineer the DCE RPC. MS RPC is based upon it and, with a little application (like OEC Enterra), will work with DCE RPC.

The term it relates to is "open systems," which refers to standardized Unix systems, as opposed to mainframes.

This is completely wrong.

These days you can consider a mainframe open, because not only can it run GNU/Linux, but it also has a facility to run open systems programs, protocols, etc. Other non-Unix systems do so too, like Digital VMS.

Open systems are systems that implement open standards, that is, standards agreed upon by representative bodies like ISO, specifying interfaces, protocols, file formats, and so on.

Previously, the DCE source was only available under a traditional license. Making it available under a recognized open source license (LGPL) both increases the accessibility of DCE as an interoperability technology, and permits a broader community to work on the source to expand its features and keep it current.