Does anyone know of a way to get eclipse to NOT put the build output inside the project? This is one of those "there's no good reason to force people to do it but eclipse does anyway" problems which is rather serious, since serious SCM systems do not tend to like you checking in binaries inside the source folder. Our SCM doesn't even accept binaries to be checked in (there's absolutely no point or advantage in doing so).

Seems I'll have to say no-one's allowed to use eclipse here unless they manually copy every source file back and forth every time they edit anything. This means that people using eclipse will be slower than people using no IDE at all. Don't you just love software with stupid hard-coded behaviour preventing you from doing things when there was no need to prevent you in the first place?

More seriously, I think you're supposed to use Ant for that stuff. It should be pretty easy to whip yourself up a build script.

Do you mean that eclipse uses Ant under-the-hood to do its incremental build stuff? Personally, I don't *want* eclipse to build on my machine - ever; however, other people rather like the incremental build (especially when you have a LOT of source to build like we do - some fairly frequently-required builds can take upwards of a minute).

As it happens, ant is banned here, by me (except for personal usage) because of its horrific usability issues (XML as a scripting language? Now *that's* a good idea!) - and because all our builds are done centrally, automatically, in a REAL build-language (a fn-lang rather than an inter-application data-exchange text markup system [i.e. XML]), distributed across a build farm. Basically, someone would need to both volunteer to support EVERYONE with all ant-generated problems, and have to prove that it was somehow better than using the existing build-system (something which I suspect is impossible...).

Apart from a few quirks like this, I've been hooked on eclipse. The problems are just coming out now because I'm the first person to try to integrate it properly with our SCM (being the resident expert on the SCM) whereas before it was just being used separately, painfully.

Either you've been watching too much Stargate Atlantis, or it's time for that vacation you've been putting off. Had one back in June, myself. It's the only way I'm able to function under the stress.

Quote

As it happens, ant is banned here, by me (except for personal usage) because of its horrific usability issues (XML as a scripting language? Now *that's* a good idea!)

Being a little harsh, aren't we? Ant isn't that bad. As much as people like to call it a scripting language, it's really more of a build config file like Makefiles, but without the stupidities. (Whatd'ya mean no whitespace?!)

Not to mention that it's easy for everyone to use. And you can break the scripts out with the individual modules so that each group can do their work and test builds independently. It still takes a release manager to put everything together in the first place, but it's pretty effective once it's done.

Quote

- and because all our builds are done centrally, automatically, in a REAL build-language (a fn-lang rather than an inter-application data-exchange text markup system [i.e. XML]), distributed across a build farm.

No reason why you can't use Ant to do the same thing. But if a function language (O'Caml?) works for you, more power to you.

Quote

Being a little harsh, aren't we? Ant isn't that bad. As much as people like to call it a scripting language, it's really more of a build config file like Makefiles, but without the stupidities. (Whatd'ya mean no whitespace?!)

To put that into perspective, the documentation for our SCM opens with the quote:

"make was a really good project for a college student [to implement] 20 years ago." -- Tim Leonard, Intel Massachusetts Microprocessor Design Center

i.e. there is nothing in Make that is worth keeping. I've felt the same way pretty much since I was first forced to use Make by an employer.

If you take the stupidities out of Make, are you left with....anything at all?

If someone converted Ant to a MUCH more sensible configuration system then it would be bearable. Still not particularly *good* per se, IMHO. But much much better than nothing - and significantly better than make.

Quote

Not to mention that it's easy for everyone to use. And you can break the scripts out with the individual modules so that each group can do their work and test builds independently.

You mean you can branch? Now, you see, if you do it *properly*, your build system is integrated into your SCM, so that you have full branching, revision control, etc. applied to your whole build process.

Ours has:

SCM - manages your code AND your builds, all versioned

Automatic dependency detection - it knows what was actually used in each build, and what wasn't

User-independent builds - The user can't make or break a build with their personal environment or other settings. This also means there is NO setup per-user - every user just types "build" and it builds. Period.

Guaranteed repeatability of builds - no matter what happens, even if Sun deletes all known copies of that JVM minor build number, you still have perfect from-source compilation using the same tools as originally; and it takes 10 seconds to set up.

Shared cache of build results - your neighbour makes a change and builds, and automatically and instantaneously (unless you set it to manual update) you get the same pre-built code

All builds are incremental - i.e. like Eclipse, super-fast

Quote

Hmmm... lemme guess. Perforce?

Nah...Take P4, add lots of features, speed it up a little, make it a bit easier to use, take AWAY the GUI (yeah, I know - this is a BAD thing), simplify a few features, and ... remove the price tag.

Is there any reason you're checking your complete project into source control?

(after attempting to explain this three times, experimenting a bit, and fiddling, I finally discovered what I needed to do).

First, the solution:

Quote

You can store your source files in a subdirectory of your project and build to another.. e.g.

Yes, then symlink (within eclipse!) that source directory to the REAL source directory on the partition which is where the real source will be managed by the SCM.

If you try to just include the source directory directly, and have a sibling directory in eclipse where you output that is NOT inside the repository, then you will be forced to include ... the entire repository! (because it's a File-system, i.e. a tree, so the only way to have a sibling node not in the tree is to have it as a sibling or uncle/etc to the root of the tree...)
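Concretely, the arrangement might look something like this - a sketch only, with stand-in paths (the real repository mount point and project locations will obviously differ):

```shell
# Stand-in directories: $REPO plays the role of the SCM-managed
# partition, $PROJ the Eclipse project living outside it.
REPO=$(mktemp -d)/repo
PROJ=$(mktemp -d)/myproj
mkdir -p "$REPO/checkout/src" "$PROJ/bin"

# Symlink the real source directory into the project; Eclipse then
# treats it as an ordinary source folder.
ln -s "$REPO/checkout/src" "$PROJ/src"

# Anything saved under the project's src/ actually lands in the repo...
touch "$PROJ/src/Foo.java"
# ...while build output goes to $PROJ/bin, which the repository never sees.
```

Inside Eclipse the equivalent would be a linked source folder on the Java Build Path, with the default output folder pointed at bin/ - so no binaries ever land inside the repository tree.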

Why does the SCM do this? (I hear you ask)

It's a high-performance SCM. So, all the files you checkout and work on are on a partition of their own (i.e. they appear as directories on a partition, but are actually a direct window onto the repository itself - your files are being automatically archived whilst you save them, even before you checkin).

So, in order to checkin source, the checkin location MUST be on the partition. So, the parent directory of each checkin/checkout directory is ALSO part of the repository. All the way up until you hit the root of the FS (or the mount point, which is the same as saying "the root of the partition").

A couple of huge benefits:

1. I can checkout, make some changes, then ... my computer dies. Or I go to a different location leaving my PC behind. Either way, I sit down at a new computer, login, and ... I can pick up from where I left off, with zero downtime.

2. Checkin and checkout is near-instantaneous (allegedly; I've seen situations where it took a second or more - but I think that was due to linux's pessimistic NFS defaults)

3. Other people can access your changes *even before you check them in*. This is, obviously, not the way you normally work. However, on the few occasions when you really do want to do this, it's very handy.

Anyway, I think this works, but will have to wait until tomorrow and get someone else to try it out with me - see if we can both checkout and edit in eclipse OK without upsetting the SCM and without upsetting eclipse...

I'm a little worried about what happens when you checkin - note that the source directory *will be deleted* every time you checkin (for obvious reasons!) - will eclipse crash / have a baby when that happens? (all its source disappearing on each checkin?). I fear the answer is at least that every checkin/checkout will cost 1-5 minutes whilst it kills then reloads all the source from scratch?

I don't think I could stand to work with your SCM. I did check out the web page but when I saw that it only runs on linux (you of all people know the hell that is caused by trying to run linux) I stopped reading.

Delete the source when you check it in? I don't get it. Sounds just plain dumb.

You work with your source files always on a network, not the local machine? I see the benefit, and I also see the pain caused by disruptions to the network. If you had the files locally you could work when the network was down or simply not present.

It needs a special partition? Ugh.. I find that partitioning disks is mostly useless, and often quite a burden. The only use I've found is to isolate the OS from everything else; the rest of the time it fragments disk space on purpose with no gain. Subdirectories on a single partition work better in most cases. Of course this is a bigger problem on Windows where the stupid, archaic concept of drive letters makes things a mess.

I currently have to use SourceSafe.. which sucks in several ways, but its integration into Windows IDEs and Eclipse with the plugin makes things easy.

I'm hoping that subversion will mature and have decent tools and a simple one click install some day.

Sometimes, Blah^3, I think that you always manage to find the hard way to do things

Here's the rub: it was written by engineers, for engineers, with none of the open-source stereotypes (e.g. quality is extremely high, evangelism extremely low, design is extremely clever / well-architected) - which means there's no-one who knows it really well who also knows how to sell it as an idea.

And personally I've avoided so much of the worst badness of poor SCMs and poor build systems (by ceasing to use bad ones relatively soon after plumbing their first worst inadequacies) that I lack the library of horror-stories to compare to.

Quote

Delete the source when you check it in? I don't get it. Sounds just plain dumb.

Now you say it, I recall how it alarmed me when I first read it. "But I don't WANT my source disappearing off the hard disk! I know that in theory that's the mathematically correct approach (since it's no longer checked out, I shouldn't be interacting with it) but ... I don't like it when bits of my FS disappear on me!".

Very quickly it becomes second nature, and becomes one of those things where once you've got used to it you wonder how you could ever have been opposed to it - like with switching from text editors to an IDE (if you've used text editors a lot, the IDE often distresses people with the way it does lots of things *without asking you first* - you're not convinced that you necessarily want it to do all that ... or perhaps you're one of those that initially hates the myriad buttons, menus, etc, when you're used to just having a single plain fullscreen edit pane).

Quote

You work with your source files always on a network, not the local machine? I see the benefit, and I also see the pain caused by disruptions to the network.

When was the last time your LAN crashed? How much development will you be doing anyway if you lose all internet access, all email access, access to the bug-tracker, etc?

Quote

If you had the files locally you could work when the network was down or simply not present.

But the answer to this is incredibly simple - it's just that it was such an obvious aspect of vesta, one I'm so used to, that I forgot to even mention it.

A feature that all SCMs since 1995 (or before) should have had: effortless automatic replication between different servers.

So, if you ever want to work locally and know this in advance, you just install the server app on your PC, replicate the repository to your local server, and then can work from there. This is how laptop users work.

Quote

It needs a special partition? Ugh.. I find that partitioning disks is mostly useless, and often quite a burden ... Of course this is a bigger problem on Windows where stupid, archaic concept of drive letters makes things a mess.

I explained poorly, but ... this is all integrated and done automatically. On the client, because you are working on the server's FS you (obviously) need somehow to mount that FS, so you just have a mount point which lets you browse the entire repository as if it were a hard disk (this is a very cool feature - you can go into it and just modify it using all the standard linux commands available in your shell; you don't need special tools for everything, and your existing tools just think it's a normal partition of normal files).
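For instance, because the mounted repository is just a filesystem, everyday tools work on it unmodified. A small simulation of the idea, using a temporary directory in place of the real mount point (all names here are made up):

```shell
# $MNT stands in for the repository mount point; checked-in versions
# appear as plain numbered directories, so ls/grep/diff work directly.
MNT=$(mktemp -d)
mkdir -p "$MNT/myproj/1/src" "$MNT/myproj/2/src"
echo 'int x = 1;' > "$MNT/myproj/1/src/Foo.java"
echo 'int x = 2;' > "$MNT/myproj/2/src/Foo.java"

ls "$MNT/myproj"                                  # browse versions like directories
diff -r "$MNT/myproj/1" "$MNT/myproj/2" || true   # compare two versions
```

No special client tooling is needed for any of this - that is the whole point of exporting the repository as a mountable filesystem.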

On the server, the partition isn't necessarily a partition (although it's weakly recommended for performance reasons) - it can just be a normal directory, which the server app then exports in such a way that you can't tell the difference (save of course the free space!).

On Windows, it should on the client simply appear as just another extra drive and drive letter - however, I've never used MS's implementation of NFS clients so I'm not entirely sure what it would look like (I've only ever used the corporate clients like the Hummingbird stuff from years ago, in the days before MS provided free NFS).

Quote

Sometimes, Blah^3, I think that you always manage to find the hard way to do things

Vying for supremacy with any really good IDE, vesta is possibly the biggest improvement in turnaround time for the development process I've ever come across. It really has streamlined things. One of the nicest things is knowing that we can release a new version of the GrexEngine to different licensees (whoever needs it) each day - or even several times a day - and that when anyone finds a bug, no matter how long it takes them to raise it, they only have to email us a single file and we can instantaneously and fully reproduce the exact build, even modify the exact source and recompile it with the exact compiler etc.

The only problems we've had with vesta are:

- we were the first group to seriously use java with vesta, so I had to write the build-scripts for java. My first attempts were hundreds of lines long. Now I have it down to around 150 lines, including about 50% extensive comments, none of which ever needs editing any more - and since we're giving it away the only code that you need to run java in vesta now is:

... which compiles all your source with the 1.4.2 compiler, makes a jar file, and outputs it to the file "output.jar" - leaving no intermediate files (class files etc) anywhere. Just a nice, clean, single file. (Or of course you can also export the compiled source with one extra line, etc.)

- we broke a repository by not bothering to read the instructions properly when doing something dangerous (and it was something you'd only do if you'd screwed up on the original install, which we had also done )

- the debian and RPM packages were only created a few months ago, and there were a few bugs in them where we had to use the official workarounds whilst we waited for the next version to be released that fixed the bugs

- Sun's 1.4.2_03 JVM was broken (had a dependency it wasn't supposed to have on /proc) and wouldn't work in ANY chroot environment. They fixed that bug in 04, but until they did it was impossible to run 1.4.2 in vesta.

- eclipse 2.x is very authoritarian: each day I discover more things that it should let you do but doesn't (even basic stuff, like you cannot select a few lines of code and go "Source - Format": you have to select the entire class (or maybe entire methods?), and until you do it adds insult to injury by greying-out the option!)

Now you say it, I recall how it alarmed me when I first read it. "But I don't WANT my source disappearing off the hard disk! I know that in theory that's the mathematically correct approach (since it's no longer checked out, I shouldn't be interacting with it) but ... I don't like it when bits of my FS disappear on me!".

I disagree that having the source disappear is the "mathematically correct approach"... I never thought I would say this, but I think what SourceSafe does is the mathematically correct approach: unless you have the file checked out it is read-only; when you check it out it becomes writable.

Quote

When was the last time your LAN crashed?

Last week. It happens more often than you might expect.

Quote

How much development will you be doing anyway if you lose all internet access, all email access, access to the bug-tracker, etc?

If I have the source I can continue to do development for quite a while. In fact I have taken work up to the cottage which doesn't even have a telephone. I can get a few days work done before I hook back up.

Quote

A feature that all SCMs since 1995 (or before) should have had: effortless automatic replication between different servers.

So, if you ever want to work locally and know this in advance, you just install the server app on your PC, replicate the repository to your local server, and then can work from there. This is how laptop users work.

That is probably a good thing - if done correctly. When you introduce your modified version of the repository back to the main one, the merge must work well. That is where SourceSafe blows chunks - especially with branches. It will only merge changes from one branch to another once; after that, if you attempt to do the same merge after more changes, it will corrupt your source.

Quote

, so you just have a mount point which lets you browse the entire repository as if it were a hard disk (this is a very cool feature - you can go into it and just modify it using all the standard linux commands available in your shell; you don't need special tools for everything, and your existing tools just think it's a normal partition of normal files).

But it doesn't sound easy to move between that model and the laptop/duplicate repository model. Also if I want to work from home, everything would be horribly slow if I wanted to work using a VPN. I guess the solution there is to replicate the repository to the server on my home machine, then somehow replicate my changes back when I was done (that's the part that scares me until I see it working).

Quote

On Windows, it should on the client simply appear as just another extra drive and drive letter - however, I've never used MS's implementation of NFS clients so I'm not entirely sure what it would look like (I've only ever used the corporate clients like the Hummingbird stuff from years ago, in the days before MS provided free NFS).

One of these days someone should put a UI on the NFS configuration.. then I could use it. I remember looking into it just a couple months ago (I think you gave me a tip or two about it), but as always the config files didn't match the documentation I could find and it was just too ridiculous to bother with. (Will the unix world ever learn?)

Quote

- eclipse 2.x is very authoritarian

That's what I mean by doing things the hard way. Use Eclipse 3 - I've been using it since 3.0M3 and find it a big improvement on 2.x.

I disagree that having the source disappear is the "mathematically correct approach" ... unless you have the file checked out it is read-only, when you check it out it becomes writable.

Unfortunately, if you ever make any of those files writable (which the OS allows you to do) you now have to manually check whether you changed them accidentally.

On past projects, especially with CVS, I've seen people accidentally edit the wrong copy of their source (especially easy with advanced editors, and/or when you're tired, and don't notice that you're editing a copy you didn't mean to edit) and then lose the changes.

With the "source disappears when not checked out" method, you always know when you start editing not-checked-out source - because it's not there.

Whilst I agree that this may not be the most user-friendly of methods, it is correct: there is literally no valid need for you to have that source any more. You can't change it (well, you can (c.f. above) but - you mustn't!), and if you just want those files for read access then ... you already have the whole repository mounted as a directory anyway, and can just reference them directly.
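To illustrate that manual check: with the read-only model, about the best you can do after the fact is sweep for files someone has forced writable. A sketch (the file names and directory are made up for the demonstration):

```shell
# Simulate a checked-in working area: everything starts read-only.
WORK=$(mktemp -d)
touch "$WORK/A.java" "$WORK/B.java"
chmod a-w "$WORK"/*.java        # not checked out: all files read-only
chmod u+w "$WORK/B.java"        # someone forces one file writable

# The sweep: any file with a write bit set is a suspect that may have
# been edited behind the SCM's back - here, only B.java shows up.
find "$WORK" -type f -perm -u+w
```

The sweep only tells you which files *could* have been changed; you still have to diff each suspect against the repository by hand, which is exactly the manual checking being complained about.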

Quote

Last week. It happens more often than you might expect.

How?!?! Back in 1995, when I was at ICL, the LAN used to crash once a week on average - usually on a Friday, when everyone went surfing for porn (early days of the net; they were unprepared for the amount of b/w usage).

When I was at uni, the core router crashed approximately twice a year because they were too cheap-ass to put it in a cool room - and in the worst heatwaves it would overheat and shutdown.

But...apart from that, the LAN has always worked everywhere I've worked, and at home too - networking equipment these days is very reliable (even the flakey stuff: my home LAN is built on hubs and switches from ebay, some with dents in, that have run for years without fail). The uni situation was extremely embarrassing for them - someone not spending a few thousand pounds cost the uni probably 5 times that much each year until they finally persuaded someone to pay for the necessary equipment.

Quote

That is probably a good thing - if done correctly. When you introduce your modified version of the repository back to the main one, the merge must work well. That is where SourceSafe blows chunks - especially with branches.

This is where you're happy that vesta has been used for 10 years by the DEC/Compaq and Intel processor-development teams. Chances of it having bugs like that are spectacularly low (although c.f. my earlier comments about being written "by engineers...for engineers").

Quote

But it doesn't sound easy to move between that model and the laptop/duplicate repository model.

It's effortless - but you have to learn to appreciate replication. I find it extremely sad that people don't, and I suspect it's MS's fault...replication has been a bog-standard process for decades, but one that MS never understood nor really used fully (let alone learnt how to implement properly!).

Ditto, it's sad that POP3 is still the main email system (naive, antiquated, poor), when IMAP has been around for decades, using replication, and hence works so much better.

Quote

Also if I want to work from home, everything would be horribly slow if I wanted to work using a VPN. I guess the solution there is to replicate the repository to the server on my home machine, then somehow replicate my changes back when I was done (that's the part that scares me until I see it working ).

Exactly! You go:

1. Double click on ISP logo
2. Connect to internet
3. Start off replication with the server
4. Make a cup of tea
5. Disconnect
6. ...Do your work...
7. At the end of the day, do steps 1-5 again.

However, I'm not aggressively using the vesta replication at the moment, so I can't vouch for its ease of use. I know that others are doing so very happily, and that my minor usage of it has been effortless so far (so long as you configure things correctly, of course).

Quote

That's what I mean by doing things the hard way. Use Eclipse 3 - I've been using it since 3.0M3 and find it a big improvement on 2.x.

The Linux kernel cannot be recompiled if you screw up the set of libraries and apps you install.

Linux powers-that-be decree that everyone should compile a kernel regularly, and for many problems the only solution involves compiling a new kernel.

Hence, if you screw up your installed apps and libs, you may lose the ability to use your OS sometime in the future. This happened to me. It is the most mind-bogglingly stupid piece of system architecture I've come across, but you have to live with it.

So, now that I have Debian's aptitude, which effortlessly and accurately manages all installation and uninstallation, and - unlike RPM - seems almost impossible to break, I'm not risking losing my machine by installing ANYTHING that didn't come in a deb file.

And, of course, I couldn't even install 3.0x on my old OS because of bugs in SWT. So, 2.x it is!

Apparently, the person who packages eclipse for debian is expecting to get version 3 available "in the first few weeks of august", so I probably don't have long to wait (thank god).

Unfortunately, if you ever make any of those files writable (which the OS allows you to do) you now have to manually check whether you changed them accidentally.

I want the ability to control the files. I must actively change the write protection otherwise they are protected as much as they should be, without crippling my ability to access them in an emergency.

Quote

How?!?!

We run Windows. In this last screw-up (one of many that have happened in the last two months), a machine running a test broke into the SoftIce kernel debugger, suspending all threads, including the kernel/drivers. This seemed to leave the network card in a hosed state that took down everything connected to the same switch, and eventually most of the net.

Quote

Linux...

There you go doing things the hard way

Eclipse is nicely packaged in a single subdirectory... no screwing around with your system. Unzip it, run it, it works.

Why? What's the point? If you're afraid you might lose net connectivity, then run a local server and replicate periodically. You don't even need to work on the local server - work on the remote server, just remember to replicate once a day, or however up-to-date you need to be in an emergency. If it happens, it's a matter of 10 seconds to tell vesta to mount the local server instead of the remote one - because you're mounting filesystems, nothing else will notice the change (all the files will appear in the same place, of course, even though in fact they're coming from a different server).

Quote

We run Windows.

...and he accuses ME of doing things the hard way? LOL

Quote

This seemed to leave the network card in a hosed state that took down everything connected to the same switch, and eventually most of the net.

Sounds like you need an extra LAN for "dodgy test machines" (*) and some form of bridge with enough intelligence to cut off that LAN when it dies. Although I'm mighty impressed that it crossed the switch - what was it doing, DoS broadcasting?

(*) i.e. any unattended (non-desktop) machine which has a chance to do stuff like this. Nothing wrong with the machine, just that you're using it to do stuff such that this becomes a risk. Like the standard scheme of having 3 versions of your server farm: "test", "development", "live" - all of them identical, but all sufficiently separated that screw-ups on one don't harm the others.

Quote

Eclipse is nicely packaged in a single subdirectory... no screwing around with your system. Unzip it, run it, it works.

Hmm. But when it refused to install on Mandrake because of the SWT errors, it was apparently inside an "installer" IIRC the stack trace...

Why? What's the point? If you're afraid you might lose net connectivity, then run a local server and replicate periodically

It just seems like such a heavyweight workaround for the more obvious solution of not taking away the files I want to work with. I mean I can't even look at the source of files that aren't checked out to find out if they are in fact the files I might want to edit.

Quote

...and he accuses ME of doing things the hard way? LOL

Quote

Sounds like you need an extra LAN for "dodgy test machines" (*) and some form of bridge with enough intelligence to cut off that LAN when it dies. Although I'm mighty impressed that it crossed the switch - what was it doing, DoS broadcasting?

Again too much of a heavyweight solution. It causes frequent inconveniences to avoid a relatively rare situation on one machine. We are constantly needing to move data between these machines. Now that we know that SoftIce can cause that problem we can just disconnect that one machine when it's in SoftIce.

Quote

Hmm. But when it refused to install on Mandrake because of the SWT errors, it was apparently inside an "installer" IIRC the stack trace...

Not sure what it was doing; all I know is I download a ZIP from the Eclipse servers, extract it, and ./eclipse/eclipse fires it up. I suppose the installer is attempting to integrate it into the launch menu structure and all that jazz?

I don't waste time trying to "optimize" my processes to that kind of extent - I've got better things to do than worry about whether my SCM is the best available, or whether there is something that's 10% better.

But yes, if your SCM works such that your entire source directory blips in and out of existence faster than a politician's ethics, then you'll need to find a special IDE that understands this. You can't expect to use one tool that does weird stuff that no one in their right mind would think of doing, and then complain when other tools that are written to cater for the rest of reality don't like it.

That SCM sounds interesting technically, but if it uses such sociopathic concepts, I suspect you might be better off with CVS.

FYI, it seems that eclipse is perfectly happy with disappearing source folders. Maybe this is simply because we've been lucky with timing so far, but we've done plenty of checkouts/checkins and seen no issues. All that happens is eclipse occasionally pops up:

File X has been changed by the OS. Would you like me to reload the new version?

...because the checkout resets the file timestamp, of course. I'm hoping that eclipse 3.x has a setting to make this automatic instead of asking you each time, but other than that no problems. Yet.

java-gaming.org is not responsible for the content posted by its members, including references to external websites, and other references that may or may not have a relation with our primarily gaming and game production oriented community. Inquiries and complaints can be sent via email to the info-account of the company managing the website of java-gaming.org.