I am amazed at how slow MSVC is. Maybe I'm spoiled by clang, but compiling a codebase that takes 10 seconds for a full rebuild in Xcode takes over a minute on my Windows box--and the Windows box has similar disk I/O, more RAM, and a faster CPU.

And now I have to go back and manually (manually--really? in 2013?) rebuild debug versions of all my libraries, because otherwise I have linker mismatches, then put the libraries in a sane place so I don't later have to go hunting for them. Isn't this so very productive? Seriously, this is bad, and the people who perpetuate it should apologize. Xcode and OS X manage not to force me to play "rebuild everything twelve times"--that Windows can't be bothered to do likewise is embarrassing. (I'm sure there's a reason for it, but it's hard to describe how little I care about their technical debt.)

I am amazed at how slow MSVC is. Maybe I'm spoiled by clang, but compiling a codebase that takes 10 seconds for a full rebuild in Xcode takes over a minute on my Windows box--and the Windows box has similar disk I/O, more RAM, and a faster CPU.

And now I have to go back and manually (manually--really? in 2013?) rebuild debug versions of all my libraries because otherwise I have linker mismatches

There's something really odd here. If your build files are set up properly this shouldn't ever happen. The build file should specify the proper library dependencies for your configuration (including debug/release parameters, etc.) and you should be good to go.

I don't think so? I have exactly the stuff in the Win32 Application template, and I don't see that in the command line output.

Metasyntactic wrote:

There's something really odd here. If your build files are set up properly this shouldn't ever happen. The build file should specify the proper library dependencies for your configuration (including debug/release parameters, etc.) and you should be good to go.

I have looked into this, and maybe I'm missing something, but as far as I can tell that's not very practical. I have stuff like Boost, where you don't check anything out--you download and build a tarball (and I am definitely not chucking Boost into my source tree). So as far as I can tell, I can't set up "proper library dependencies," because my dependencies end up as a bunch of .lib files as artifacts, not MSBuild-friendly vcxprojs. Others are vcxprojs, but they come from sources I can't make into git submodules, so putting them in my solution as dependencies (which, of course, works fine) seems very questionable to me, because when I check the project out elsewhere it's going to lose the vcxproj and break. This leaves me with a boatload of .lib files and "oh, hey, this stupid project has /MT not /MD, enjoy your linker errors!" unless I go manually fix them--and then fix them again on the next machine I work on, because herp derp stupid library defaults.
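For what it's worth, the one mitigation I know of for the /MT-vs-/MD mess is a shared MSBuild property sheet. This is only a sketch (the file name is invented), but importing something like it into each vcxproj you control pins every project to the DLL runtime per configuration:

```xml
<!-- runtime.props (hypothetical name): add to each project via the
     Property Manager, or with an <Import Project="runtime.props" /> line. -->
<Project xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <ItemDefinitionGroup Condition="'$(Configuration)'=='Debug'">
    <ClCompile>
      <!-- /MDd: debug DLL runtime for every Debug build -->
      <RuntimeLibrary>MultiThreadedDebugDLL</RuntimeLibrary>
    </ClCompile>
  </ItemDefinitionGroup>
  <ItemDefinitionGroup Condition="'$(Configuration)'=='Release'">
    <ClCompile>
      <!-- /MD: release DLL runtime for every Release build -->
      <RuntimeLibrary>MultiThreadedDLL</RuntimeLibrary>
    </ClCompile>
  </ItemDefinitionGroup>
</Project>
```

It doesn't help with prebuilt .lib files from elsewhere, of course--those still have to be built with matching flags.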

I feel like you're a victim of your platform with that statement. All (literally-literally all) of my deps are a brew or apt-get away on either OS X or Ubuntu--and for Homebrew at least it's not like versioning is an issue, Homebrew works off a git repo of scripts. I find it incredible that you would suggest that there isn't a better way than "maintain personal copies of everything" in the Windows environment--in the sense that you are an extremely sharp dude and it strains my credulity that that would ever be best-practice.

Fair enough, I didn't realize you weren't still on that stack. Maybe that approach is great for enterprise development. It's a ton of extra work for no good reason for what I'm doing compared to literally everyone else, though, and it's going to force me to rework my build process that already works great on four other platforms (because consistency matters) to drag around a bag on the side because--well, I don't know why. "Because we could," I guess.

Spending another three days getting a bunch of dependencies in order is doing wonders for my motivation on this shit--I don't have a lot of time for personal projects, spending so much of it on something so stupid sucks so hard. The Good Enough Brigade is awfully, awfully demoralizing.

And while CMake isn't good on OS X, it at least turns out readable makefiles. If something breaks in the MSBuild horrorshow it's giving you, don't make any plans.

I have had CMake produce a workable VS project on Windows once. At all other times it seems you need a very specific version of Visual Studio. The C++ experience makes me sad, but I need it for work, so what are you gonna do?

I suspect a lot of the CMake issues are probably in the CMakeLists.txt stuff and not something endemic to CMake, but when the easiest way to get a DLL out of LLVM is via bash and a configure script, it makes me sad.

I actually find CMake to be mostly not gigantically bad for generating VS projects. It generates a ton of stupid vcxproj files that correspond to its targets, but they're ignorable, and the end project seems (in every case I've ever tried) to work fine. Writing one, however, as Meltdown will state, is like unlocking a box of footguns.

It's more my other deps, which don't have vcxprojs at all and no way to get them, that's making me soberly consider quitting programming and working on a goddamn farm instead of putting myself through this just to make some textured quads dance on the screen.
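For reference, the happy path when a project does have CMake support is genuinely small; a minimal sketch (project and file names are invented):

```cmake
# CMakeLists.txt -- minimal sketch; "demo" and main.cpp are placeholders.
cmake_minimum_required(VERSION 2.8)
project(demo CXX)
add_executable(demo main.cpp)

# Generate the VS solution out-of-tree so the vcxproj clutter stays in build/:
#   mkdir build && cd build
#   cmake -G "Visual Studio 11" ..     (pick the generator matching your VS)
#   cmake --build . --config Release
```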

I actually find CMake to be mostly not gigantically bad for generating VS projects. It generates a ton of stupid vcxproj files that correspond to its targets, but they're ignorable, and the end project seems (in every case I've ever tried) to work fine.

I'm usually stuck on Express, being the pauper I am, so that may play into my experiences.

Quote:

Writing one, however, as Meltdown will state, is like unlocking a box of footguns.

Yeah. I have no issue with using a configure script or cmake thing, so long as I don't have to write them. I don't want to write m4 macros or cmake's YADSL, thank you very much.

That said, I use build scripts (like .sh or .bat files) for as long as I can get away with (can't be bothered making portable IDE projects on personal stuff), so I'm probably not the guy to ask.

I feel like you're a victim of your platform with that statement. All (literally-literally all) of my deps are a brew or apt-get away on either OS X or Ubuntu--and for Homebrew at least it's not like versioning is an issue, Homebrew works off a git repo of scripts.

Which is great until you have to distribute your software, and then neither gets you very much unless your only targets are that version of Ubuntu or OS X with Homebrew.

Install VMware on Linux sometime and note how they take the entire stack with them. Since Windows runs a lot of consumer software that has to install and run out of the box, it's not surprising the practices are oriented in that fashion.

Quote:

it strains my credulity that that would ever be best-practice.

It is the best practice for 3rd-party OOB software on every platform, doubly so for C++. You have to bring the runtime with you on Linux too, or package for each distribution, or let the distributions package and distribute your software.

Doing some exploration of fancy MCollective stuff for work. Millions of possibilities. The downside is that I need to learn some Ruby, and I don't want to (I might have to learn Perl soon, too... that's more than enough).

Can anyone here who has used Darcs tell me if it's worth using over other distributed VCSes? At a glance it seems cool, and it lets you include some but not all changes to a file in a commit, which I really like, but it seems awfully developed-in-my-parents'-garageish.

Can anyone here who has used Darcs tell me if it's worth using over other distributed VCSes? At a glance it seems cool, and lets you include some but not all changes to a file in a commit, which I really like, but it seems awfully developed-in-my-parents'-garageish

Git can also do the same thing, actually. Not sure about Mercurial.

But... I would have to say no to using darcs. I haven't really used it seriously, but darcs couldn't even import a codebase of roughly 500,000 LOC with some small binaries; it fails with an out-of-memory error on my machine with 4 GB of RAM. What they recommend is importing in pieces, but even so, I find that impractical, as 1) it still eats memory and 2) for the project I'm testing with darcs, it's very tedious.

Also, from what I remember, darcs still has the problem of exponential merges; although it's not as bad as in 1.x, it's still very slow.

I feel like you're a victim of your platform with that statement. All (literally-literally all) of my deps are a brew or apt-get away on either OS X or Ubuntu--and for Homebrew at least it's not like versioning is an issue, Homebrew works off a git repo of scripts.

Which is great until you have to distribute your software, and then neither gets you very much unless your only targets are that version of Ubuntu or OS X with Homebrew.

Apt-get and Homebrew both install static libraries. I have not yet exhaustively tested Linux and may never because I don't care about it, but I don't use any dylibs on OS X except those in core frameworks, and it works fine on out-of-the-box Macs.

Can anyone here who has used Darcs tell me if it's worth using over other distributed VCSes? At a glance it seems cool, and lets you include some but not all changes to a file in a commit, which I really like, but it seems awfully developed-in-my-parents'-garageish

The particular feature you call out is also available in git, with `git add -p`. Besides that, I've heard nothing about darcs in years, since Git, Mercurial, Bazaar, and a few other DVCSes started to target the mainstream. I honestly didn't realize the project was still alive.

That said, if you want to learn about their way of doing things, it may well be worthwhile. It definitely came at DVCS from a very different perspective than what's popular now, which are mostly based on the principles first seen in monotone.

I feel like you're a victim of your platform with that statement. All (literally-literally all) of my deps are a brew or apt-get away on either OS X or Ubuntu--and for Homebrew at least it's not like versioning is an issue, Homebrew works off a git repo of scripts.

Which is great until you have to distribute your software, and then neither gets you very much unless your only targets are that version of Ubuntu or OS X with Homebrew.

Install VMware on Linux sometime and note how they take the entire stack with them. Since Windows runs a lot of consumer software that has to install and run out of the box, it's not surprising the practices are oriented in that fashion.

Quote:

it strains my credulity that that would ever be best-practice.

It is the best practice for 3rd-party OOB software on every platform, doubly so for C++. You have to bring the runtime with you on Linux too, or package for each distribution, or let the distributions package and distribute your software.

In our case, we're producing packages for our supported variants of *nix (and we're open source, so it could be packaged anywhere; we've just selected a few to package ourselves). That means we can't carry around our own versions of external dependencies--we need to integrate with the distro's packaged versions so that, at runtime, we don't end up with conflicts (because, again, we're producing distro packages and end up installed in /usr/bin and /usr/lib).

There's not a good equivalent to this workflow on Windows. I've been tempted to roll my own (along the lines of Homebrew or BSD Ports), but have never gotten up the initiative to do it.

The particular feature you call out is also available in git, with `git add -p`.

Ah, cool, thanks.

MT5 wrote:

But... I would have to say no to using darcs. I haven't really used it seriously, but darcs couldn't even import a codebase of roughly 500,000 LOC with some small binaries; it fails with an out-of-memory error on my machine with 4 GB of RAM. What they recommend is importing in pieces, but even so, I find that impractical, as 1) it still eats memory and 2) for the project I'm testing with darcs, it's very tedious.

Also, from what I remember, darcs still has the problem of exponential merges; although it's not as bad as in 1.x, it's still very slow.

I'm aware of the performance issues, but I doubt they'd be relevant to me, because the projects I'd use it with probably wouldn't exceed 1,000 lines of code.

MilleniX wrote:

That said, if you want to learn about their way of doing things, it may well be worthwhile. It definitely came at DVCS from a very different perspective than what's popular now, which are mostly based on the principles first seen in monotone.

Yes, I suppose there's value in learning it regardless of whether it's useful for any particular project, but there's so much out there to learn that it's hard to justify doing so in this particular case.

Actually, Debian and Ubuntu do not; static libraries are shipped in the separate development packages. Applications packaged as part of those operating systems (all Linux distributions, really) should be built against shared libraries.

Static libraries are fundamentally broken on Linux and UNIX generally, as they don't mesh well with the dynamic linker. Unfortunately, GLIBC itself will call dlopen(3) under specific situations (primarily NSS, but there are others), so you're playing with a loaded gun unless your program is ANSI C.

C++ is even worse because of runtime features that need cross-library support. In fact, static linking is essentially broken in some GCC releases; it's simply not a priority for them at all.

Full static builds are only possible if you build everything yourself and never, ever call dlopen(3) in any way, shape, or form. That's possible, but it's also pretty hard to build interesting programs this way.

Far easier to either distribute as source, build for each distribution, or carry everything with you.

Vagrant came up a page or so back, and I was hoping to ping the hive mind for best practices on a new setup. We're distributing a Ruby on Rails app to a wide variety of team members (developers, designers, UX engineers, and developer contractors). I'd like to simplify the Rails startup process within the Vagrant machine so it doesn't require SSHing into the VM every time the user wants to start the local server. I know this can be done through VM-local settings or through Vagrant plugins, but I'm not sure what an experienced Vagrant maintainer would do in that situation.

Static libraries are fundamentally broken on Linux and UNIX generally, as they don't mesh well with the dynamic linker. Unfortunately, GLIBC itself will call dlopen(3) under specific situations (primarily NSS, but there are others), so you're playing with a loaded gun unless your program is ANSI C.

Didn't realize that, thank you. More reason to ignore it for now, I guess; targeting it is lowest-priority.

Vagrant came up a page or so back, and I was hoping to ping the hive mind for best practices on a new setup. We're distributing a Ruby on Rails app to a wide variety of team members (developers, designers, UX engineers, and developer contractors). I'd like to simplify the Rails startup process within the Vagrant machine so it doesn't require SSHing into the VM every time the user wants to start the local server. I know this can be done through VM-local settings or through Vagrant plugins, but I'm not sure what an experienced Vagrant maintainer would do in that situation.

Anyone have any wisdom to share?

I'm no Vagrant expert, and especially not a Rails expert, but wouldn't it be worth running the real web server you use in production? Packaging VMs means you only have to set things up once, and you get a setup very similar to production (thus reducing any possible "it worked on my dev box" issues). Or is the local server especially convenient (auto-reload?).
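To make that concrete, here's a hedged Vagrantfile sketch--the box name, port, and start command are all assumptions--that forwards the Rails port and starts the server during provisioning, so `vagrant up` is the only command anyone has to run:

```ruby
# Vagrantfile -- sketch only; adjust box, port, and start command to taste.
Vagrant.configure("2") do |config|
  config.vm.box = "precise64"  # assumed base box
  # Expose Rails on the host so the browser never needs the VM's IP:
  config.vm.network :forwarded_port, guest: 3000, host: 3000
  # Start the server as part of provisioning (re-run with `vagrant up --provision`
  # or `vagrant provision` after the first boot):
  config.vm.provision :shell, inline:
    "cd /vagrant && bundle exec rails server -d -b 0.0.0.0"
end
```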

I've noticed that some Ruby on Rails components (Unicorn and RVM come to mind) don't seem to be designed to be started automatically via (for example) a chkconfig-enabled /etc/init.d script.

I have a consultant-written application that isn't started automatically on boot if the host gets restarted. Instead, you have to issue 'cap deploy:start' from another machine (set up as a dev/deploy environment) as a specific user.

This seems wrong. But the consultant seems to find nothing wrong with this setup.

Is this common to RoR or just those components? Not familiar with RoR and not sure if this is considered normal.

I've noticed that some Ruby on Rails components (Unicorn and RVM come to mind) don't seem to be designed to be started automatically via (for example) a chkconfig-enabled /etc/init.d script.

I have a consultant-written application that isn't started automatically on boot if the host gets restarted. Instead, you have to issue 'cap deploy:start' from another machine (set up as a dev/deploy environment) as a specific user.

This seems wrong. But the consultant seems to find nothing wrong with this setup.

Is this common to RoR or just those components? Not familiar with RoR and not sure if this is considered normal.

Unless the application is totally unimportant, that's not just wrong, it's Wrong!(tm). I have no idea if that's a RoR thing or what, but seems like there has to be a better way...

I've noticed that some Ruby on Rails components (Unicorn and RVM come to mind) don't seem to be designed to be started automatically via (for example) a chkconfig-enabled /etc/init.d script.

I have a consultant-written application that isn't started automatically on boot if the host gets restarted. Instead, you have to issue 'cap deploy:start' from another machine (set up as a dev/deploy environment) as a specific user.

This seems wrong. But the consultant seems to find nothing wrong with this setup.

Is this common to RoR or just those components? Not familiar with RoR and not sure if this is considered normal.

Unless the application is totally unimportant, that's not just wrong, it's Wrong!(tm). I have no idea if that's a RoR thing or what, but seems like there has to be a better way...

It's a rewrite of a production webapp (Perl to Ruby) that is the major money maker for a division of the company, so having it down for a day because of the Rube Goldberg startup process would be very bad. We've got monitoring set up on the server and its processes, so we shouldn't see more than a few hours of downtime, but the setup is still wrong in my opinion.
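For the record, the usual cure is a boring init script so the box comes back up on its own. A hedged chkconfig-style sketch for a Unicorn-served app--the paths, user, and config file name are all guesses that would need to match the actual deploy:

```shell
#!/bin/sh
# /etc/init.d/myapp -- hypothetical sketch; paths and user are assumptions.
# chkconfig: 345 85 15
# description: Unicorn server for myapp
APP_ROOT=/var/www/myapp/current       # assumed Capistrano-style deploy path
PID=$APP_ROOT/tmp/pids/unicorn.pid

case "$1" in
  start)
    # Run as the deploy user, daemonized (-D), with the app's unicorn config:
    su - deploy -c "cd $APP_ROOT && bundle exec unicorn -c config/unicorn.rb -D"
    ;;
  stop)
    # QUIT asks Unicorn for a graceful shutdown:
    [ -f "$PID" ] && kill -QUIT "$(cat "$PID")"
    ;;
  restart)
    "$0" stop; sleep 2; "$0" start
    ;;
  *)
    echo "Usage: $0 {start|stop|restart}"; exit 1
    ;;
esac
```

RVM complicates this a bit (the init environment won't have RVM's shell setup), which is probably why the consultant punted; wrapping the start command in an RVM-aware login shell is the usual workaround.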