If I’m going to indulge this pipe-dream of Linux-using, then it’s time to stop fussing around in Minecraft and work on something serious. It’s time to see if I can use Linux to program. If I can’t do that, then I ought to walk away now before I get too comfortable.

Going by the comments yesterday, it seems like Eclipse is the go-to IDE for coders on Linux. (IDE means “Integrated Development Environment”, and is to coding what a word processor is to writing.) I open the Software manager and install it. It seems to work fine, except…

For a quick test, I input the classic Hello World program to find that Eclipse can’t find <stdio.h>. This is very strange. I expected some confusion and growing pains in moving to Linux, although I didn’t expect them quite this soon or quite this simple. For completeness, I try the C++ variant of Hello World, and discover that it doesn’t know what to make of <iostream>.

This is such a basic, fundamental failure that I don’t know where to begin. Imagine if word processors would only let you use a word if it was in the dictionary. Now imagine a word processor that came without any dictionary. That’s what we have here. This is a C development environment that doesn’t know C.

Is this a problem with Eclipse? A problem with my Linux install? A problem with how I set up this project? I don’t know, and so I don’t know where to look for answers. In bemused frustration (yes that’s possible) I turn to Twitter. My tweeps suggest that code::blocks is a good IDE to use. So I install that.

Installing code::blocks also manages to install whatever files were missing and preventing Eclipse from working properly. This is the first sign that we’re not in Kansas anymore, Toto. In Windows, programs tend to keep to themselves. If I install Visual Studio, it will install <stdio.h>, <iostream>, and the thousands of other files that a compiler needs. If I then install (say) Qt Creator, it will need its very own copy of those files – even if the files are identical, down to the last character. There are situations where Windows programs share, but generally not stuff like this. Eclipse and code::blocks are obviously drawing from the same pool of files.

So now I have two IDEs working. I have decided to keep them both for now, and pit them against one another. The one that confuses / irritates / hassles me the least will be the winner and become my IDE of choice. The other one will… actually, no. I don’t dare uninstall the loser, lest it take those shared files with it to oblivion. So the loser will remain installed and I’ll just have to remember to make a point of not clicking on it. That’ll teach it.

Well, I’ve compiled Hello World, which is satisfying but not productive. If we’re going to write real code then we’ll need real tools. We might as well solve the most difficult problem first: Window management and portability. Based on the suggestions yesterday, I’ve decided to build a little test application on GTK. It’s popular to the point of ubiquity and is available on both Linux and Windows. Anything I write that uses GTK ought to be able to compile on either platform.

When I want to use some code, the Windows tradition is to go to the website, download it, and then cuss at the install scripts in outraged bafflement until you give up and try something else. I assume that’s also how we do things here on Linux? I download GTK from the website and begin squinting at the ./configure scripts, trying to figure out how they should work, why they aren’t working now, and what I need to do to get them from here to there. GTK depends on no less than five other projects, although none of those projects seems to depend on anything. So our dependencies are broad but shallow. This is better than narrow but deep, but not nearly as good as “no dependencies”, which never happens because LOL GET A REAL PROGRAMING LANGUAG YOU FOSIL.
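
For the curious, the idea behind those scripts is simpler than the scripts themselves: ./configure inspects your system and writes a Makefile, then make does the building. Here is a toy stand-in (not real GTK, just a fake two-line configure script) that shows the shape of the dance:

```shell
# A miniature of the autotools workflow: "configure" generates a Makefile,
# then make runs the build. This stand-in script is purely illustrative.
cat > configure <<'EOF'
#!/bin/sh
printf 'all:\n\techo built > out\n' > Makefile
echo "configure: ok"
EOF
chmod +x configure

./configure   # writes the Makefile
make          # runs the default target, which creates the file "out"
cat out       # prints "built"
```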

Sigh.

After about 45 minutes of frustration I haven’t made any headway. Even if I get this configuration script to work, I don’t understand where I’m supposed to put these files or how to tell the IDE where to find them. This is because there are only two types of programming tutorials on the net:

Buttons are useful! Here is how to identify a button, and how to use your finger to press one!

Here is how to deal with a flame-out in an F/A-18 Hornet while on approach to a Nimitz class carrier. This tutorial is particularly focused on daytime scenarios. If this happens in nighttime flying conditions or you’re operating in sub-tropical weather, please consult a more appropriate guide.

There is no bridge between baby steps and ninjutsu. This problem is particularly bad in the C/C++ world where every development environment seems to have a slightly different (or radically different) solution, so it’s not possible for someone to write a sensible, general-purpose guide.

Eventually I discover that this business of downloading and configuring stuff manually is only for obscure packages or hardcore devs with exotic requirements. For most people, you can get stuff like this from the Software Manager. It works a bit like Windows Update, except it downloads all kinds of different software, themes, updates, tools, packages, and other nuggets of digital treasure. You can also do this using the terminal, which is how I discovered this feature, but I sense I could really make a mess doing things that way. In one forum I find the example terminal command:

sudo apt-get install gtk2.0+-

Breaking this down for the curious:

sudo stands for “Super User DO”. It means you want to execute this command with root privileges, which are required for installing new stuff.

apt-get stands for “Advanced Packaging Tool get”. The apt is used for downloading and installing new components, making sure they end up in the right places, and making sure the system knows where they are. Here you’re telling the apt to get something.

install means you want to install this thing. I’m sure this is obvious to you but explaining it allows me to feel superior and smart even though I have no idea what I’m doing.

gtk2.0+- means it should download the GTK package, version 2.0. I don’t know what the plus-minus stands for on the end.

The problem here is that GTK is now on version 3.6. Do I have it on my system already? Which version? If not, which version is best to get? Will any particular version cause conflicts with things I already have installed? (It shouldn’t, of course. We’re downloading source files, not executables.) Still, I don’t know how this install might impact Eclipse or code::blocks, or what else might go wrong. In programming, what you don’t know can actually hurt you and is very likely to do so.

So I fire up the software manager and try to figure out if I have GTK, what version I have, and if I don’t have it, how to get it. This is very, very hard because GTK is used by a lot of programs. For example, the interface of GIMP is built on GTK. (Photoshop for Open Source hippies.) So in the software manager, the description is something like “GIMP using GTK library” or whatever. When I search for GTK packages, I get hundreds of results, wading through them takes forever, and I’m not even sure if I’m on the right track.

It’s like trying to buy ingredients in an unfamiliar grocery store: you know your recipe needs chocolate, but not what the store calls it. What about chocolate bars? Is that what you need? It kind of looks like it. Maybe you’re supposed to melt it down. But it’s got sugar in it, and your recipe calls for sugar. Hmmm.

Of course, the phrase you’re looking for is “BAKING CHOCOLATE”. If you know this then the search is stupid easy and if you don’t then the search is impossible. In my case, the phrase I was looking for is “libgtk”. It turns out that terminal command above was incorrect. (Or old. Or both.) By narrowing my search to libgtk I’m able to find the components for building GTK applications. There are many, and I seem to have everything except the core files. The version available in software manager is 3.0 and the latest is 3.6, but I’m betting (hoping!) that the 3.0 version is just fine and was chosen because it was the version with the least potential conflicts.
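
In case it helps the next person: the trick that finally tames those hundreds of results is filtering for the -dev suffix. Here’s a simulation of that filtering step; the real command on Mint/Ubuntu would be something like apt-cache search libgtk piped into grep, and the package descriptions below are illustrative:

```shell
# Simulated apt-cache output: grep for "-dev" to keep only the
# development packages among hundreds of GTK-related hits.
hits=$(printf '%s\n' \
  'libgtk-3-0 - GTK+ graphical user interface library' \
  'gimp - The GNU Image Manipulation Program' \
  'libgtk-3-dev - development files for the GTK+ library' \
  | grep -- '-dev')
echo "$hits"
```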

So I install it.

Now what?

I try #including <gtk.h> in both Eclipse and code::blocks, which naturally doesn’t work. Do I need to make it <gtk/gtk.h>? Or perhaps <gtk3-0/gtk/gtk.h>? Or maybe <gtk3-0/gtk.h>? I find examples and variants of all of these in searches, none of them work, and I have no idea how to do this amazingly simple, straightforward thing. I don’t know where these files are on my computer, how to find them, or how to tell my IDE how to find them. (I search the file system for gtk.h, and a long search turns up bupkis.)

This is pretty demoralizing. In the time I’ve sunk into this so far I could have installed Windows 7 and Visual Studio and just returned to work. I wouldn’t mind if I felt like I was making progress, but I really didn’t expect to spend two and a half hours trying to write my first line of code. The problem with situations like this is that you don’t know how deep the rabbit hole goes. I could solve this problem thirty seconds from now. Or maybe it will take another hour. Maybe when I solve this problem there will be another one that will require a similar investment of time. Maybe I’ll get stuck on a new compile problem. Or a linker problem. Or an execution problem. There’s no limit to how many things can go wrong or how long it can take to fix each one.

It’s entirely possible I’ll slam away at this problem, fix it, and run into another problem that’s baffling and insurmountable. Even if you’ve got a good friend who is really patient and knows all about this sort of thing, answers to these sorts of questions often begin with, “Well, depending on your setup…”

It’s like a game of chance where you keep betting time, and if you win all you get is the chance to make another wager. How much time do you put in before you conclude this isn’t going to pay off?

I’m going to go play Minecraft until I get enough patience to continue.

From the Archives:

And once you’ve got it all set up, you’ll happily work on it for 6 months – 2 years. Then the machine will die / bork / reinstall, and you will have forgotten how you set it up the last time. By the time you’ve learnt how to install everything for the 4th or 5th time (2-10 years later) you will learn how Linux works under-the-hood sufficiently to do it the right way the first time.

This is the trick of Linux, alas: only by countless re-installs and headaches do you get to the point where you understand the ‘language’ of Linux well enough to know that ‘libX’ is the compiled library and ‘libX-dev’ contains the development files you need to create code based on ‘libX’.

Every O/S has its own idiosyncrasies, but Linux was designed for devs, by devs, and their headspace is not where Joe/Jane Q Public’s is. I recommend forswearing any form of ‘Linux Mint’ or ‘friendly’ distros if you want to get into Linux beyond the casual user paradigm. Gentoo is a pig of a distro that requires ground-up building, but the errors and grief you encounter over several hours will lay bare the mysteries of Linux (it is relatively well documented, in all fairness). For me though, having made it work (around kernel 2.4 / 2.6), I had a revelation: this was ‘a personally satisfying but hollow victory’, because while most things can be made to work on Linux, it all takes more time than I lose through Windows crashes.

Win7 + Visual Studio is the premier development environment for a reason though: for all the flaws and weirdness of Windows (the registry says hi, BTW), most things just work. Choose your poison, basically; neither is ‘better’ than the other, but with your inexperience of Linux and familiarity with Visual Studio I’d stick with Win7, or accept that productivity will be out the window for 3-4 weeks while you get up to speed w/Linux + New IDE.

The attitude that “everyone should just use” is exactly opposite to the whole concept of open-source, which is “everyone should be able to modify and/or replace anything with whatever they want to” and this is what newbies should be taught, rather than someone’s arbitrary personal preference (and personally I find aptitude just complicates many common tasks, like sorting out reverse dependencies etc).

I could also argue that beginners might want to look at “Synaptic Package Manager” first which is point&click and installs by default on the “sensible newbie distribution” (Ubuntu), but the fact is any Debian-derived system has apt-get/apt-cache installed by default.

Ps. Don’t know about Mint, but Ubuntu has this cool meta-package “build-essential” which installs all the “usual stuff” you would want for programming purposes, including make, autotools, gcc, basic headers.. etc.

Yup. I didn’t do much programming in linux during the brief experiences with it that I had, but the general tone of events was pretty similar – a complete mess of iconography and terminal commands, with online “help” being about as useful as a rancid pile of prechewed pineapples.

I gave up.. not long after, in fact, because I simply believe in the following: The OS is a tool, not an end. I should not have to spend ages to figure out what/how the OS works; it should just work and allow me to easily do the things that I actually want to do in it. Unless I am working on developing, tweaking or changing an OS, it should not be a problem in the way of doing stuff. Linux was, all the time. And… if it takes 2 years and 4-5 reinstalls to properly get how it works, then.. wtf. Do you realize how ridiculous that sounds?

I agree with what you’re saying, but my experience with linux has been the exact opposite. On windows, installing a new library entails looking for the files on the internet, downloading them, putting them in a place that your compiler can see/telling your compiler where they are, discovering that you’re missing ObscureDependency.dll, going looking for that, and so on. Shamus frequently bitches about how much of a hassle this can be in his programming posts, and I agree with him. (Obviously, it’s not always that bad. Most libraries you’ll need are quite well documented and tell you exactly where to get their dependencies. But varying degrees of this problem do happen.)

That’s me pointing out the best-case scenario on linux and the worst-case scenario on Windows, of course. And if you’ve had the opposite experience, there’s absolutely no reason you should switch to linux if you’re happy with windows.

This is my experience too. I can’t get anything done in Windows (or MacOS, for that matter) because you have to go hunting across the web for obscure libraries and manually check that the versions match and do all the stuff Shamus described that an experienced Linux user could have told him to skip. I just type aptitude install and it’s where it needs to be, handily in my include path.

Unfortunately, I’ve never tried Mint (stuck with Debian, and now know what I’m doing with it) so I’m not certain what the problem Shamus is seeing is. However, it’s probably that he should install libgtk3-dev. Packages with headers for actually writing software are always named -dev.

And… if it takes 2 years and 4-5 reinstalls to properly get how it works, then.. wtf. Do you realize how ridiculous that sounds?

The only reason this sounds crazy is because you don’t remember doing the exact same thing in Windows.

Or perhaps because the FIRST time you do it (whichever system that is) doesn’t feel as painful as doing it a second time.

But yes, Windows has the same issues. Along with HORRIBLE memory management (and that’s the new and improved version) and the ever-loving registry, which is the most insane and stupid thing in the history of computing.

How is this a fundamental problem of Linux? It’s actually a lot better for development than Windows once you learn it. The problem is that everyone is accustomed to Windows and doesn’t want to learn the new system, which isn’t the fault of Linux.

If you were to start off with Linux instead of Windows, then the commands would make a ton more sense than looking for the appropriate installer or DLL on Windows.

Your first mistake was trying to use Eclipse, which is complete garbage. It utterly baffles me why people reach for that turd first when they look for an IDE. I’ve had success with Netbeans for C++. It’s certainly vastly improved over Eclipse for Java anyway.

And comments like this are the next piece of the fun – you’ll find them all over. I used to work at a very fast moving software development company filled with super-bright MIT grad types. Their preferred IDE was Eclipse. Yet I’m sure some of them would refer to NetBeans as a turd just as easily. Next, let’s talk about which is better: emacs or vi!

Please don’t open that particular can of worms, two of my best friends constantly argue over this.

@ken: Regardless of what your personal preference might be, I don’t think you’re going to end up helping Shamus by telling him “Go back to start, do not collect $20,000, and try this new direction from scratch.” He’s bound to run into the exact same problems getting your IDE of choice installed.

The only thing about editors is this: Avoid those that use “modes”, such as emacs. They require a lot of arcane knowledge to do basic things, like “writing text” or “saving a file”. If you’ve grown up with them, it’s not an issue, but for a newcomer, there’s really nothing to gain. You spend an inordinate amount of time to figure out the basics, and then you are just on equal footing with everyone else. It’s like teaching yourself Spanish so you can read the Spanish translation of a French novel instead of its English translation.

IDEs that I have used:

Borland Delphi: Easy to use, good GUI editor, very limited libraries.
Netbeans for Java: It was okay.
Eclipse for Java: It was okay and crashed less than Netbeans.
VS2008: It was decent for C++, but the configuration is insanely complex. Also, you really need VisualAssist to be productive. Great debugger.
VS2010: Decent for C++, different from 2008 but the configuration is just as complex and you still need VA. Incredibly great debugger.
Flexbuilder/FlashBuilder (plugin for Eclipse): Also decent.

See where this is going? They are all decent, and none of them are truly great or truly awful. As long as you don’t use Notepad and gcc, you’re fine.

Last but not least: Use an IDE that is made for the language you want to use. It’s much harder to use Eclipse with C# than it is to use Visual Studio with C#, because the closer the IDE is integrated with the language, the stronger its features are.

I’d like to try and correct your first paragraph without turning this into a holy war.

Mode-based editors certainly look arcane, but they’re really, really not; you just need a good tutor. Luckily vim comes with one: open a terminal and run vimtutor. It doesn’t take very long and it explains the basics in an interactive way very well.

Saying that saving a file in vim is arcane is like saying pressing Ctrl+S is arcane. To save, you type “:w” and press enter. The colon is a leader, telling Vim to expect a command, ‘w’ is for write. To save and exit, type “:wq” – leader, write, quit.

Anyone who is capable of typing should be able to pick vim up very quickly; over time you add tricks to your repertoire. I’ve gone from a working knowledge of four commands (ESC, i, :w, :q – normal mode, insert mode, write, quit) to tens of commands with millions of permutations and I never really noticed. I’d want to do something simple, so I’d Google it (“delete a line in vim”) and find the solution (type ‘dd’). After doing it a couple of times you don’t tend to forget them, as they’re always quite concise.

I didn’t grow up with vim, my early editors were Visual Studio (6.0), Gedit, Notepad++ and similar. I picked up vim for my job because it’s available on almost all the servers I work with and it seemed like a worthwhile skill to have, instead I found a programming environment I love working with.

It’s certainly not for everyone, but it is definitely not a complex skill to pick up, and if you use Linux it’s well worth giving vimtutor a try just to see if it interests you.

Actually, you just confirmed what Kdansky said. There is nothing wrong with emacs or vim, but the fact is, you have to learn how to use them, and there are plenty of other editors that you do not need to learn for, thus making learning to use mode-based editors mainly a waste of time.

This hasn’t stopped me from doing it anyway, since I liked the magic, but unless you are going to be working with servers, there is really very little reason to learn vim (and there are alternatives for servers as well, but I tend to hate them).

Also, if you do not use vim on a daily basis to actually program in, you are probably not going to have much success remembering more than the basic commands (I don’t anyway).

Being truly proficient in any editor requires a significant time commitment. Some programmers never bother to become truly proficient. Watching them work is painful. If you’re going to spend the time, I would recommend going with something as cross-platform as possible. It turns out that the selection of editors matching “cross-platform” and “suitable for programming” is pretty small.

I think the point is, you had to learn “control-s” too, but now that you know it you don’t even think of it as a skill or as knowledge. It is, though. It’s just that you’ve gotten past that hurdle where you don’t know what the hell is going on because you’ve mastered its domain knowledge.

Vim and Emacs are no different, it’s just a very different domain than what most of us have grown up with (on Windows).

(Edit: let’s not forget there are plenty of editors that use the Windows-style approach.)

“making learning to use mode-based editors mainly a waste of time.”
This is complete nonsense. vim definitely has a steeper learning curve than the usual GUI text editors, but once you get past that learning curve, you can be significantly more productive in it because it’s possible to do absolutely everything without your hands leaving the home keys. It’s similar in theory to the command-line: it’s counterintuitive, and has a steep learning curve, but it’s capable of being much, much easier and more powerful than doing things through your typical GUI.

The reason I love vim is simple.
I will never ever get better/faster at using notepad++ than I am now.
I am already better and faster in vim than I am in notepad++. I get better and faster still every time I use it.

Arguing that vim is worse because you have to learn it is like arguing that Starcraft is worse than Farmville because you had to learn it. Depth has value, as does accessibility – they’re completely different kinds of value, and worth different amounts to different people.

People can be hyper-productive in these old-school editors because they were designed to be used without a mouse.

But let’s be honest: for those coming from a Windows or Mac background, where many shortcut commands have become de-facto standards and lots of work has gone into making usage patterns obvious, they’re a bit like trying to use an application written by an alien.

Home, Shift-End, Delete (using delete line as an example) is pretty universal at this point. Is that as terse as vi? No, but there’s a huge advantage to that single command set working almost everywhere.

I’ve been using Linux for around 10 years now, and frequently editing files over SSH where I can’t use GUI*.

I still have never bothered to learn how to use vim properly.

I tried emacs in terminal mode and quit in frustration after a few attempts.

Pico/nano are significantly simpler to use, as they list many of the key combinations for common commands at the bottom of the terminal, rather than expecting the user to remember them all.

And here we are with the Linux holy wars again! Everybody thinks that their favourite tool is the best one. The good side to this is that there tend to be so many options that you are likely to find one that meets your style; the bad thing is trying to find that one in the list of applications that all basically do the same thing.

*Well, if you want to get technical you can forward an X session over SSH and then start a GUI program on the server and have it output its display to the machine you are connecting from. I wouldn’t recommend it though, because over most connections it runs like a dog. With no legs.

Ah Vim/Emacs wars, the Star Wars/Star Trek row for coders. For me I hate them both and ran away fast as soon as I found Pico. Pico = MS-DOS Edit if you remember that gem from the DOS days and all the same commands work if all you want is to hack a few lines into a .conf it’s the bees’ knees.

+1 for Nano for simple terminal based edits. It takes time to find an IDE that fits. I had the perfect one for me on Windows but had to leave it behind when I moved to Linux. Still searching years later.

When kicking off holy wars, please get your facts right. Emacs does not have modes. vi(m) has a modal interface and while I personally find it abhorrent, I have many friends who are very productive in it. Far more productive than I could be in an IDE like Visual Studio, for that matter. Meanwhile, I’m perfectly happy with Emacs, which has menus (yes, the kind you can click with a mouse) for people who want them. Emacs also has great gdb integration, which is essential. However, I don’t know many people outside my office who write C (no, not C++, that’s a different language) for embedded systems for a living. So I don’t advise people on which editor would be best suited for them.

Actually Vim can do menus as well. I think they are even enabled by default for the GVim variant (IIRC you can get them working in terminal too), though most normal people turn them off pretty soon (they just take space, and Vim’s built-in help is faster than searching the menus).

I write a lot of code (for Windows) in MSVC, but on Linux I would never consider an IDE. I’ve also used Vim on Windows and it made me realize one important thing: it’s not just the editor itself, but how it interplays with the rest of the system that makes it so powerful. With Vim on Windows, I’m certainly less productive than in MSVC, but on Linux, with the rest of the system supporting the editor, it’s a totally different situation.

That’s actually why I wanted to comment in the first place: on Linux you don’t necessarily need an “IDE” if you have a powerful enough editor, because the system itself can provide the rest of the functionality that you would normally need an IDE for. You can work on the command line and just do the editing itself in the editor (or more commonly, you edit in the editor, and shell out commands to the command line when you need something outside the editor). This DOES actually make you more productive once you learn to work that way.

So my hypothesis is that all the Linux (or Unix) IDEs suck (and they do), because their target audience is those people that come to Linux expecting an experience similar to Windows (or maybe OSX?), while the “real Unix programmers” (and I emphasize I’m NOT implying any level of skill; rather I simply mean those not trying to make it work like Windows) have no interest whatsoever in these tools; they are perfectly happy with super-flexible editors (whether Vim or Emacs) that interact seamlessly with the rest of the system.

The unfortunate part is: these editors are not something you learn overnight (well, I’ve taught the basics of Vim to several people in about an hour on average, but it would take longer learning from a written document; I think it took myself a few days originally). Nobody claims they are something you just pick up, and few people use them as they ship either. You are pretty much expected to customize them to fit your workflow; I for one hate working with stock Vi/Vim unless all I have to do is edit some config file.

But that’s the thing: if you want (vi-like) modal Emacs, go for it. If you want mode-less Vim, sure (actually it ships with an optional “easy” config that does roughly that, pointless as it is). While Emacs would be the more flexible of the two (most importantly it has much better slave-process support), even Vim can be scripted to do things way beyond your average IDE.. and let’s face it: it’s fairly easy compared to learning programming.

What you probably need to install is the libgtk2.0-dev package. Development files (headers, etc.) are always in a separate package to the binary shared libraries (think DLLs on Windows). These are usually the same name as the binary package with -dev appended to the end.

If you want to see what files were installed by a package you can use the dpkg command, like this: dpkg --listfiles package_name

This is of course only for distributions that use Debian packages (Debian, Mint, Ubuntu, etc.).

My supervisor pointed out the exact same thing when I had problems with my undergraduate thesis. I eventually just ended up downloading every library, development or otherwise, through the software manager. While perhaps a tad overkill, this enabled me to get everything running.

Also, if you want to be able to uninstall Eclipse or code::blocks in the future without worrying about your package system thinking you don’t need all kinds of development stuff, install “build-essential”.

Also, since you’re demoralized, can I try moralizing you? Is that even a thing?

This isn’t a rabbit hole like you fear. Once you get all the development libraries installed, you’re pretty much there. It might seem like the solutions to your current problems are a bunch of random incantations, and that any time in the future, you’ll have to do it again with different random incantations. I went through the same confusion and frustration moving from Windows to Linux development.

Guess what? In this case, it actually gets better! You’re learning a fairly consistent system (Linux development and the Debian package system) that actually does tend to Just Work. Doing it the first time is a pain, though.

The world needs a “Linux development for Windows developers” guide. Maybe you’re writing it. ;)

Most software can be accessed through the GUI, and you don’t need the commandline at all (I virtually never use it). This has the added advantage that the OS will usually keep track of dependencies and resolve them automatically. It should also allow you to remove eclipse without breaking code::blocks or vice versa.

For most software, there’s:
the thing itself, [thing]
the library (DLL for Windowsers), lib[thing]
the development files (headers and such), usually called (lib)[thing]-dev
There are also debug versions, and language files [thing]-lang
If there are different versions of a library where the api has changed and you might need more than one of them, there’s usually a number added to the name: lib[thing]3
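
A tiny sketch of that naming scheme, using GTK as the worked example (exact package names vary by distro and release, so treat these as illustrative):

```shell
# Derive the conventional Debian-style package names for a library "gtk-3".
thing="gtk-3"
lib="lib${thing}-0"     # the runtime shared library package
dev="lib${thing}-dev"   # headers and files needed to compile against it
echo "$lib $dev"        # libgtk-3-0 libgtk-3-dev
```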

When compiling a project you can decide to do that with shared libraries (dynamic linking) which will require the user to have those libraries installed, or link them statically, i.e. include them in the finished executable. The latter is good for reducing dependencies, the former is good if the libraries may contain routines that could be system-specific (… I bet you knew this bit…)

All of this works a bit differently than in Windows. The problem introduced by the software management of Linux is that you may be missing libraries for some software (though the repositories are usually consistent), but the advantage is that updating one library will improve performance of all software using it. Often you find different software providing different user interfaces for the same libraries. Like libdcRAW (for converting raw photos to jpeg or whatnot). There’s a vast amount of programs using this library, and each time Dave Coffin adds support for a new camera, it will be available in all of them.
Having software composed of such modular parts also makes it much easier to replace one piece with another, or to build a new piece of software out of the remains of other, possibly orphaned projects.

Having said all this: I’m not a fan of Apper or the thing that Mint seems to be using. These are pretty simple interfaces for end-users. Yast2, the software manager for SUSE distros, gives you a customized search for -dev packages, all packages that require [x], provide [x], are called [x] or have [x] in their description, or any combination of those. I think with Ubuntu or Mint, you’ll have to use the command line for that.
On the other hand, most “simple” end users will never need that type of functionality, and not confusing them with complex UIs is one of the selling points of Ubuntu & Co. I think that was no bad idea: the added ways of customizing software repos, and the more fine-grained control over which version of what is installed and from which repository, also make it easier to break dependencies – something I have not yet seen happen on Ubuntu.

AFAIR (my information may be imperfectly remembered, or up to two releases behind current), Ubuntu has moved to a user-friendly mess: the Ubuntu Software Centre includes pay software and, while you can search for “technical packages”, it is by default a bit of a power user’s nightmare. The pre-installed alternative is aptitude on the command line. You can install anything you want for yourself, but by default you use the USC store or you use aptitude.

Mint also has the clean, App Store-style software manager seen above, but the default installation includes both command-line aptitude and a GUI in Synaptic (the awesome package manager from Debian, with all the customisation and searches you could want). That suits those of us who were raised as Windows power users and so don’t associate “power user” with a fetishistic love of the command line and editing bare text config files. Not that that’s a bad way to do things, but a WIMP GUI has been with us for the entire history of home computers and often makes for a more discoverable interface – so why shouldn’t power users have graphical interfaces too?

Personally I’m very happy with Mint and decisions like keeping Synaptic as a default installed option make me think the distro is being steered in the direction that makes it right for me. Of course this does mean you have aptitude on the command line (both command line and terminal GUI), Synaptic GUI, Software Manager GUI, and the Software Update all playing with your package list so it is another area where the new user has a bit of a learning curve to understand why there are so many things that kinda do the same job.

GTK is painful to work with if you do not grok the GObject object system and the UNIX way of building software projects. I suggest you try Qt 4.7+ and use QML for the UI.

Do you really need to code in C++? Why don’t you try C#? MonoDevelop is much closer to Visual Studio due to ease of writing tooling for C#. If you do not want to throw away your C++ code, you can bundle it in a dll & call it directly from C# (see Pinvoke.Net).

Oh and speaking of fossils check out Fossil Scm for your personal projects.

I recommend using aptitude instead of apt-get. It has a search function and a text-mode UI.
sudo apt-get install aptitude
sudo aptitude install libgtk-3-dev #In ubuntu 12.04, it’s version 3.4.2. Not the latest and greatest, but it’ll do.

Let’s see, where to start. I don’t do much GTK programming so I can’t really help you with specifics. I *can* tell you that if you are looking for the GTK header files, on Debian (Mint is derived from Debian, as I recall) you will want to install the appropriate -dev package for GTK. It will be named something like libgtk-dev with a version number. The bare library package contains the runtime files you need to use GTK applications; the -dev package contains the files you need to compile against.

I can tell you this: Linux is worth it. Once you have mastered the learning curve — which is admittedly difficult — it will be much more rewarding than Windows as a development platform. What you are doing is learning a whole new way of doing things, and once you see how it works and fits together, as a programmer you will understand that this way of doing things *makes sense*. It will violate your learned expectations from Windows, but it will make much more sense to you once you see the logic.

Just to give you an example from your problems. Under Windows, you install VS and it installs all the dependencies it needs. That’s how Windows does it — one huge download, lots of wasted disk space on file duplicates, possibly the same application installed in each user’s space, and so on. Under Linux, distributions make an effort to package applications so that resources are shared rather than duplicated. It’s more efficient, results in smaller downloads, and makes it easier to collaborate on development.

I haven’t used mint, but in Debian Linux, there is a step in the install process that asks you what application suites to install; stuff like “web server” “file server” “development environment”… had you selected that last item, you would probably have had all the files you needed already installed by default. Eclipse can’t be packaged as depending on stdio and the rest because it is a multi-language IDE; you might have been installing it only for Java, for example.

Again, stick with it. There *is* a learning curve, but you will get past it, and once you do, everything just makes that much more sense.

I cannot agree more. I went from Windows development to Linux four or five years ago and while it can be frustrating initially, I’m so very glad I persevered.

Although there is a learning curve, the pain usually drops off early on. There’s the initial culture shock of an utterly alien workflow, but once you’ve picked up a few things, learning the rest is a lot less frustrating.

I did a bit of gtk related dev a long time ago, but didn’t start from scratch.
Anyway, here are a few tips you get by reading basic apt tutorials and stuff. It will require a lot of commitment to learn a new environment. I won’t switch to Windows overnight for the same reason…

And you get the documentation for your version in file:///usr/share/doc/libgtk-3-doc/gtk3/index.html (also available online http://developer.gnome.org/gtk3/ ).
It looks quite well done, to me. But it won’t explain how to handle your IDE.

1) You also need the -dev package.
In most Linux distributions, there are two packages for every library: libfoo, which contains the compiled shared library (libfoo.so), and libfoo-dev, which contains the include files (foo.h or somesuch) and statically linkable binaries (libfoo.a, libfoo.la).

2) Likewise, the C include files should be inside a package like “libc6-dev”, the C++ ones in something like “libstdc++6-4.2-dev”.

3) Searching packages:
On a terminal, you can normally search for packages with “apt-cache search foo”. If you have aptitude installed, you can also do “aptitude search foo”, to look for package names containing “foo” and “aptitude search ~dfoo” for package descriptions containing the same.

4) Searching files:
A quick way to search for files is, on a terminal, “locate foo”, which uses a database that’s periodically updated (once every day or so, using a cronjob) – provided you have locate installed, which most distributions do by default.
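If the locate database hasn't been built yet, plain find does the same job, just slower. A self-contained sketch (the directory layout below is fabricated for illustration; on a real system you'd point it at /usr/include):

```shell
# Fake a header tree and search it the way you'd search /usr/include:
mkdir -p demo/include/gtk-3.0/gtk
touch demo/include/gtk-3.0/gtk/gtk.h
find demo -name 'gtk.h'    # prints: demo/include/gtk-3.0/gtk/gtk.h
# On a real system:
#   locate gtk.h                       # fast, uses the periodic database
#   find /usr/include -name 'gtk.h'    # slower, but always current
```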

5) Using the configure scripts is normally quite easy:
./configure (resolve all errors it throws)
make
sudo make install

If a configure script doesn’t yet exist, but only configure.ac, you may need to run an ./autogen.sh script first. Or autoreconf.
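That decision can be written down as a tiny script. The stub configure below exists only to make the sketch self-contained; in a real source tree the project ships its own (or autogen.sh / autoreconf generates it), and the directory name is made up:

```shell
# Stand-in for an unpacked source tree:
mkdir -p libfoo-src
printf '#!/bin/sh\necho configured\n' > libfoo-src/configure
chmod +x libfoo-src/configure
cd libfoo-src
# The usual dance: generate configure only if the project didn't ship one.
[ -x ./configure ] || ./autogen.sh || autoreconf -i
./configure        # prints: configured
# make && sudo make install   # the remaining steps on a real project
```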

6) If you feel too sane, take a look into the autotools (autoconf, automake, libtool) and how people regularly use them to set up portable build systems for every little variation in include file, library, etc. placement and naming.

7) About IDEs:
The “real” way to program is to eschew them altogether and use vim instead. ;P

The -dev packages were the first weird thing I encountered when trying to develop on linux, but they actually made a lot of sense once I’d figured out what was going on.

I’ve spent more time trying to get other people’s code to compile (custom patches for wine, etc.) than I have writing my own code (not that I actually write that much), and finding my way around gcc and autotools was incredibly educational. The Linux shared object/library model was rather bizarre at first, but each new piece made more sense as I got a better idea of the whole picture.

Shamus, I recommend the comment above. If you want to compile something which uses a certain library, you not only have to install the library itself but the development package, too. You can recognise it by the suffix “-dev”.

And as general advice: Don’t expect Linux to work exactly like Windows. Installing new software is probably the textbook example. You should install software via your distribution’s Software Manager or APT. Only download software from the project’s website if you really know what you are doing – and as a Linux beginner you don’t.

The upside is that the Software Manager or APT or similar tools resolve all the dependencies for you. You don’t have to search for individual libraries yourself – just let the system do it for you.

Eclipse was primarily a Java IDE, and you have to install the header files for C/C++ yourself (unless you install the correct bundled Eclipse package, I suppose).
Generally, you need to install the -dev or -devel packages to get access to the header files, which go in /usr/include. It’s worth having a nose inside that directory just to get to know it.

IDE of choice for me in Linux is netbeans, because eclipse tends to get very clogged up when you have a large workspace, and runs very slowly.

This is, however, a very personal choice. And ignore all the people saying you should use vi + gcc . I’m sure it works for them, but it’s definitely a massive step for someone used to an IDE, and all the tools that go with an IDE.

I hope you have the patience to stick with this experiment. I understand if you don’t – you’ve got a family and a lot of other projects, so it’s fair to go back to the well known and familiar environment. I’d shake my head at you and revoke your status as an internet hero though.

Reading your post, it looks like you’ve almost fallen into the trap of “Why isn’t this like Windows?”, and gotten frustrated.

It’s also worth reading about the Filesystem Hierarchy Standard, to get a feel for how your new linux box is laid out, and where shared libraries, user files, etcetera live.

There’s a wiki article on the subject, which is a reasonable place to start:

There should be decent graphical package management tools, but I can’t for the life of me recall what they’re called. They usually have a somewhat better “I am just starting out knowing what I am doing” feel.

“I have no idea how to do this amazingly simple, straightforward thing”

Ah, yes, happens to me all the time. It’s amazing how annoying problems with configuring your IDE, making sure you’ve properly installed libraries, and such are, compared to, well, actual programming. It seems it’s easier to gain headway on even the most elusive bugs than to make a program you didn’t write yourself work.

Your frustrations have been in line with my general experience with linux.

-Repository searches are magically useless. You need to google first to find out exactly what you’re looking for. Don’t even open the repository until you’ve done that.

-I find that searching for help is usually nigh-useless. Linux is so fractured that it’s not good enough to find someone who was having the same problem. They need to be having the same problem using the same version of the same distro using the same version of the program/library (and since they rev every six months, there are a ton of versions out there).

The solution is probably out there, somewhere, on the net, but you practically need to know how to solve the problem already in order to know how to construct the right search string that will get you there.

-Linux has this huge problem of being developed for two classes of people: People who can rewrite parts of the kernel if they want to, and people who think that an OS is something that loads a web browser for them. It’s not like Windows where there’s a slow gradation of difficulty where you can slowly learn while still being productive; as soon as you run into even the smallest problem you hit a massive brick wall that requires a ton of magical sudo statements to get past.

-Linux lacks the clear separation between developer-land stuff and user-land stuff that Windows or Apple has. Linux leaves a lot of should-be-behind-the-scenes stuff – like which libraries your OS has installed – in user-land. The design mentality is overwhelmingly “tool for developers”, where having this kind of stuff visible makes sense, over “tool for end-user”, where the system should hide it behind behavior that seems intuitive to somebody who doesn’t understand everything the OS is doing under the hood.

“Linux lacks the clear separation between developer-land stuff and user-land stuff that Windows or Apple has. Linux leaves a lot of should-be-behind-the-scenes stuff – like which libraries your OS has installed – in user-land. The design mentality is overwhelmingly “tool for developers”, where having this kind of stuff visible makes sense, over “tool for end-user”, where the system should hide it behind behavior that seems intuitive to somebody who doesn’t understand everything the OS is doing under the hood.”

Can you give some examples of that please? I’m not too sure what you mean. Are you talking about the /usr/ directories?

Generally, what you said about it being for 2 classes of people, is only right because people don’t want to learn what their computer is doing. And it’s this kind of person who absolutely should learn, because they’re the easiest marks for spam or calls that say “Hello this is David from Windows, we detected your Windows has a virus”.

The searching on the net thing….I half agree. I’m finding that to be less and less the case though, more and more useful information can be found trivially. But try to do something “non-standard” (like installing graphics card drivers yourself. And then uninstalling. And then reinstalling the correct version, and then reinstalling X) can turn into a nightmare.

Like I mentioned, linux treats libraries the same way it treats end-user programs. In most repositories I’ve seen, they’re listed in the same space.

“Generally, what you said about it being for 2 classes of people, is only right because people don’t want to learn what their computer is doing.”

First off, all you’re doing is suggesting that everybody become part of class 1. The fundamental flaw of the system having no gradient of user power (which would actually HELP people to become part of class 1) is still there.

Second off- no, people really shouldn’t need to learn how a computer works on the back end in order to do simple things like opening a .rar file. It’s an absurd conceit to think that everyone should have to share your field of interests and study in order to get use out of a computer. It’s like expecting somebody to know how to change out the engine in their car before they can drive it.

Computers are tools to most people, and tools should be designed to make things easier for people. Windows, Mac OS, and Android all manage to successfully do that. Desktop linux has no excuse.

“Second off- no, people really shouldn’t need to learn how a computer works on the back end in order to do simple things like opening a .rar file. It’s an absurd conceit to think that everyone should have to share your field of interests and study in order to get use out of a computer. It’s like expecting somebody to know how to change out the engine in their car before they can drive it.”

But I do expect them to know how to change a tire, check the oil and be able to identify things like “I can smell burning because I drove down the dual carriage way with the handbrake on”.

And I’m not saying that everyone should learn to use a computer to the same extent a geek does – that would be pointless. But basic knowledge is something that should be required. Like knowing the difference between “File” and “Program”, so when asked a question like “Where is your spreadsheet stored?” they don’t reply with “Excel”.

And I’m still not clear on what you mean about libraries and user programs. Are you talking about when you search for a library, (as per Shamus’s example), you get a load of Programs back in the search?

I’m talking about both, plus the overall problem of having to install programs piecemeal. The fact that users wind up having to worry about these libraries at all is the problem.

“And I’m not saying that everyone should learn to use a computer to the same extent a geek does – that would be pointless.”

That’s exactly what linux requires. The amount of expertise required is entirely out of proportion with the functionality you’re getting out of it. Command line arguments are deep into geek territory, and they’re mandatory for linux.

“But I do expect them to know how to change a tire, check the oil and be able to identify things like “I can smell burning because I drove down the dual carriage way with the handbrake on”.”

The only thing I expect out of somebody driving a car is to be able to do so safely and considerately. Modern cars are designed to minimize the need for maintenance, and with a clear separation between maintaining/fixing the vehicle and operating it. Their operation is based off of what they do, not how they do it. You know what the difference is between using the accelerator pedal and brake in an electric vehicle and a diesel pickup? Nothing, because cars aren’t designed by the people who developed desktop linux.

Continuing the car analogy (until it breaks, damnit), you make it sound like you’re in favour of deskilling the population entirely. And when you take your driving test you’re meant to be able to open the bonnet and identify things inside it.

Back on topic, it’s not hard for most casual users to pick up Linux as their main OS any more. I know several non-geek people who are happily using ubuntu, and without ever touching the terminal.

And I’d argue that Windows isn’t exactly any better, and is a lot less transparent when it comes to deep magic.

“Continuing the car analogy (until it breaks, damnit), you make it sound like you’re in favour of deskilling the population entirely. And when you take your driving test you’re meant to be able to open the bonnet and identify things inside it”

I’ve never heard of that being required in a driver’s test before.

I’m telling you how the rest of the non-linux world works- products and services exist to be useful, not to force people to learn about their inner workings. When you go to a restaurant, they don’t expect you to be able to tell them which spices to use and how long to cook the meat. When you go to the theater they don’t make you know how many frames per second the film needs to be run at. When you buy a ticket for an airline flight they ask you what airport you want to go to, not which flightpath you want to take, how fast you want the plane to go, and what altitude it needs to cruise at.

There is far too much technology out there for products to be demanding about how much we know before we can use them.

“Back on topic, it’s not hard for most casual users to pick up Linux as their main OS any more. I know several non-geek people who are happily using ubuntu, and without ever touching the terminal.”

I hear people say this all the time, about how their grandmother and mom and dad and their neighbor have all started using linux and they love it, and yet the linux user base is mysteriously not growing exponentially.

It just plain isn’t true. Again, I had to spend hours on the command line to get a .rar file open. The GUI flat-out did not support the fucking around with the repository settings that was needed. Also, I have never searched for help in doing something on linux and gotten a GUI-based response. It’s always right to the command line, no matter what it is. There are extreme limitations on what you can do with Linux without it.

I don’t remember whether it was Ubuntu or Fedora, but it was last year. The program I installed to try to open it didn’t work because it was missing a library. Figuring out why that library wasn’t there took a lot of googling, and eventually screwing around with the repository settings to get it to pull from the right one without problems.

I know it’s difficult to do simple things when you are new at Linux, but it’s just a matter of learning a few fundamentals and then the rest falls into place. You’ll either know how to go about doing something, or know how to troubleshoot it when it goes horribly wrong :-)

For example, I’ve never had any trouble opening a .rar file, but that’s because .rar is a proprietary file type, you can’t just open them in Windows either; you have to install a program that knows what to do with it. 7zip is popular in both operating systems, and once installed involves double clicking on the folder to open – no command line involved :-)

Not going to contradict your main point as I am, admittedly, a geek (though definitely not the kernel-rewriting kind), but I’d like to say a few words in defence of the command line. When you talk about “people who still like the command line”, it implies that they are some sort of relic from the dark ages when GUIs didn’t exist. Well, I started using computers in 2001 and used Windows (98, then XP) until around the time 7 came out, when I switched to Linux (Ubuntu, then Arch) and haven’t looked back. Lack of a usable command line in Windows is now a major deal-breaker for me.

The thing about command line skills is that they are easily generalizable – the tools you use to fix one problem come in handy when fixing another. Sure, at the beginning it all looks like eldritch magick you are not supposed to question, but after a while you learn useful bits – how to find what you need in the repositories, where the config files are stored, when and how to use “ls”, “grep”, “|”, and so on. Before long, all you need to solve an unfamiliar problem is “–help”, “man” or “apropos”.
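A trivial example of that composition, with a plain-text stand-in for the package list so it runs anywhere (on a real Debian-ish system you'd feed it `dpkg -l` instead):

```shell
# Count the -dev packages in a (fabricated) package list:
printf 'libgtk-3-0\nlibgtk-3-dev\nlibc6-dev\nfirefox\n' \
  | grep -- '-dev' \
  | wc -l
# Real-world equivalent: dpkg -l | grep -- '-dev' | wc -l
```

Each piece (printf, grep, wc) is dumb on its own; the pipe is what makes them add up.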

Also, I consider terminal-based fixes you find around the net a good thing. “Paste these lines in the terminal and you’re done” is neat, especially considering the above points about long-term benefits. Reading descriptions of how to navigate a swarm of dialogs in Control Panel, or screenshot-heavy articles or videos, is a waste of time and doesn’t feel like learning, just jumping through arbitrary hoops.

The way it is set up, Linux command line is an extremely powerful and useful tool, and not in any way outdated. Though if somebody was sticking to DOS, you could rightfully call them crazy.

“When you talk about “people who still like the command line”, it implies that they are some sort of relic from the dark ages when GUIs didn’t exist.”

It is a relic from the dark ages before GUIs existed. They’re part of linux culture, but every other consumer interface from phones to desktops is designed around the GUI and nobody other than the linux community is campaigning to get them to switch to the command line.

It’s a lost argument, and as long as linux insists on trying to keep making it it’ll never gain any ground.

“Also, I consider terminal-based fixes you find around the net a good thing. “Paste these lines in the terminal and you’re done” is neat, especially considering the above points about long term benefits.”

That’s only if they work right the first time, with no tweaking, and if you can identify them as the right ones right off of the bat. And then you haven’t actually learned anything because you don’t understand what you just did.

The thing about dialog boxes is that they contain a lot of information. It’s a hell of a lot easier to re-figure out what you did earlier when dealing with them, and it’s a lot easier to adapt instructions from a slightly different version to your version.

And that’s if you’re even searching for help. Most things I need to search for help for are only because I need to go to the command line in the first place- if the solution was available via GUI, I can usually work it out on my own. The command line requires you to know far too much before you can even start diagnosing your problem.

bloodsquirrel: “It is a relic from the dark ages before GUIs existed. They’re part of linux culture, but every other consumer interface from phones to desktops is designed around the GUI and nobody other than the linux community is campaigning to get them to switch to the command line.”
Are you serious? What about operations like bulk renaming files, or reorganising files based on some pattern, or deduplicating data, or quickly sorting a data heap, or viewing debug output from a program, or searching the filesystem based on a regex, or invoking a program with the output from another program, or…
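For instance, the bulk-rename case is a shell one-liner. A self-contained sketch (the file names are made up):

```shell
# Set up some files to rename:
mkdir -p pics && touch pics/A.JPG pics/B.JPG pics/notes.txt
# Rename every .JPG to .jpg using parameter expansion to strip the old suffix:
for f in pics/*.JPG; do mv "$f" "${f%.JPG}.jpg"; done
ls pics    # A.jpg  B.jpg  notes.txt
```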

Those are the kinds of tasks that I only ever see brought up when somebody is trying to think of a use for the command line, probably because when people really need to do stuff like that regularly someone generally goes and writes a program to do it for them, usually with a GUI.

One of the reasons why the Linux command line is so powerful is that its commands can be called from pretty much anywhere in the OS, not only from the terminal window. It means that if you want to, say, have all the photos from your digital camera resized, renamed to include some relevant metadata like a timestamp, and copied over to your photo folder with some automatic sorting applied, you can write the relevant console commands into a bash script (five lines or so) and make it callable from the right-click menu of your file manager – or better yet, add a hook to have it run every time you connect the camera. I’m sure there are programs for Windows that do this precise thing, but the custom script will work without using virtually any resources (including HDD space), cluttering up the tray, or slowing down system startup, as is usually the case with various third-party daemons.
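A rough sketch of that photo-import script. All the paths and file names below are made up, and the resize step is left as a comment because it would need ImageMagick's convert installed:

```shell
#!/bin/sh
SRC=camera; DEST=photos
mkdir -p "$SRC" "$DEST"
touch "$SRC/img001.jpg"                   # stand-in for a file the camera created
for f in "$SRC"/*.jpg; do
  [ -e "$f" ] || continue                 # skip if the glob matched nothing
  stamp=$(date -r "$f" +%Y%m%d-%H%M%S)    # timestamp taken from the file's mtime
  cp "$f" "$DEST/$stamp-$(basename "$f")" # e.g. photos/20120601-093000-img001.jpg
  # convert "$f" -resize 1600x1600 "$DEST/$stamp-$(basename "$f")"  # resize variant
done
ls "$DEST"
```

Hooked up to a file manager menu or a udev rule, this runs with essentially zero standing overhead.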

Now I’m all for making as many useful tools available with a GUI as possible, but removing command line altogether like you seem to be suggesting would be silly and counter-productive.

bloodsquirrel: I couldn’t do my job anywhere near as fast without the command line (on Windows I use cygwin on my own machines, PowerShell on others). Probably not my day-to-day computing either.

It’s a very, very small minority in the Linux world that think the command line is the only way (I don’t actually know anyone who solely uses the command line), but as a supplement to a GUI I use it almost constantly during the day (Guake is a brilliant command line set up, providing a Quake style dropdown command line).

Each command is a small program – if you think writing big programs to do things is a good idea, what’s the objection to being able to quickly write very small programs to automate complex tasks, something that the terminal is ideally suited to?

I think at this point you are criticizing the community, not Linux itself. I’ve never opened RARs in another way than clicking on them, and I never had to install anything for it, because it came with the package.

I do agree that people in fora telling people to use the commandline for such things are not doing Linux a favour. At the same time, many Windows how-tos also include the command line, and in Linux people may not know what your GUI looks like, so this is the safer way. Also, there _are_ people who will claim with vigour that no one needs more than a commandline and vi. This may be true, and it’s actually a good thing (like, if your graphics are broken), but it’s also very retro-oriented thinking.

Linus Torvalds has complained a few times that it’s not good that Linux desktops are so many and so different. On the other hand: try and make the Gnome and KDE people agree to one environment, when many Gnome users don’t even like Unity… the ability to choose and modify is part of what makes Linux good.

“First off, all you’re doing is suggesting that everybody become part of class 1. The fundamental flaw of the system having no gradient of user power (which would actually HELP people to become part of class 1) is still there.”

I consider myself the counter-example.
I was thrown into the Linux world not of my own accord, but have managed to orient myself fairly well. People who respond to user questions with shell commands are usually either showing off or going the safe route because the user asking for help may have a different UI. I almost never use the commandline, and it is almost never necessary, and only seldom quicker or more effective than using the GUI, which in turn is just as hard to understand as the Windows GUI and gives you a lot more opportunities to understand what is actually happening than, say, any Apple-designed interface.

“Second off- no, people really shouldn’t need to learn how a computer works on the back end in order to do simple things like opening a .rar file. It’s an absurd conceit to think that everyone should have to share your field of interests and study in order to get use out of a computer. It’s like expecting somebody to know how to change out the engine in their car before they can drive it. ”

…. So, I have a colleague who keeps asking me how to open .rar files in Linux. Ahem… double-click? Except you’ve got your file manager configured for single-click open. In that case: single left mouse button click on the file in the file manager. I do not think this is more complicated than in Windows, except there you may have to install some sort of software for it, before that works.

If this does not make you happy: right-click on the file and select an extraction option

“Computers are tools to most people, and tools should be designed to make things easier for people.”
No lie: I can do most tasks _faster_ in Linux (KDE) than I could in Windows. That’s not because I know better where everything is, but because the interface works that way, _and_ it allows me to mold it to my liking. _and_ it allows me to understand what it’s actually doing but does not force me. Any option I don’t understand, I simply leave as it is, and it has never been a problem.

I completely agree on the games problem, and on the problem of slightly slower graphics drivers; also, the power saving support has taken a while to be fixed (but these days, Sandy Bridge laptops are supported just fine). All of your other points are either misinformed or were only valid a long time ago.

Another example could be me. I tried installing Mythbuntu and Ubuntu a few years back. I found I was running into nested problems just trying to learn enough to get it to run. I like to think it would have been easy enough if someone had held my hand to the point where I knew the fundamentals… but I don’t know. I’m no slouch with Windows and alter services.msc and my registry with ease. But I built up to that over time. Linux never let me get my feet under me.

The problem was similar to a common problem with hardware. I kept being asked to choose between options that I didn’t know enough about to make anything more than a guess. I found Linux like trying to buy a graphics card. At least with a graphics card there are particular up to date expert knowledge bases you can reference. But with Linux, everyone thinks they’re an expert and everyone argues that their way is best. And only experts know if the info is up to date or not.

Steve: One of the other issues with Linux’s learning curve is that most people start out with Windows preinstalled on their computer, with driver support for whatever hardware is installed already built in by the manufacturer. To get Linux working, you have to 1) Wipe out windows, or configure a dual-boot setup 2) get drivers for all of your hardware installed and working 3) If you have only one computer, do all of this *without any help from the internet* 4) get Linux installed and working, including troubleshooting any of your hardware issues, before you know anything about Linux.

Microsoft has long used the preinstallation barrier against potential competitors on PC-style hardware. It is their ace in the hole. Until it becomes possible to break that barrier — that is, a preinstalled Linux distribution of reasonable choice, with drivers for all the hardware, at the same price point as Windows in the same configuration, without buying a copy of Windows behind the scenes to keep Microsoft happy — end-user adoption of Linux is going to face the install barrier.

It doesn’t help that the people who already want linux usually look at overpriced and underpowered mass-market computers with scorn, and build their own from parts. People wanting to sell preinstalled Linux computers are stuck selling to people who want to try Linux, can afford to buy a new computer that only has Linux, and … aren’t good enough with Linux to install it themselves.

Dell at one point offered laptops with Ubuntu preinstalled, but these laptops actually cost MORE than the same-spec laptop with Windows on it.
Drivers on Linux are not so much of an issue now. The vast majority of hardware has its drivers already baked into the kernel; it’s often only the really esoteric stuff that doesn’t work out of the box. Oh – and Broadcom wireless drivers. Those things remain a pig to set up.
Windows has its share of terrible drivers too.

Anorak, I was thinking of Dell with some of my conditions there. I seem to recall, too, that Dell was slipping a Windows license check to MS for each of those laptops, but I could be misremembering that part. I do remember the price difference for sure. And it’s things like the Broadcom wireless drivers, and the Optimus laptop drivers, and the binary blob video drivers that take an expert to compile and install for the end user.

Broad hardware support is there and if you are lucky, you put the disk in and run the install process and it works. All we need is a vendor to say “I picked the hardware in this laptop so that it is all supported out of the box in the included Linux install CD. I swear MS gets none of this money. I took the MS price, subtracted the cost of windows, and added the cost of your chosen linux distribution (or a small fee for free beer distributions).”

Essentially, *nix doesn’t separate dev and users because the history of the OS is that the users *are* developers, and that’s whom the OS is written and structured for. The adoption as a general purpose OS is a later development. (Bordering on tardy, considering unix’s conventions were pretty established 40 years ago. People think free unixes are a new thing, but the first one was 1978. Bill Gates was still telling people “I only do BASIC; go buy CP/M.”)

Mac OS is running right on top of Unix. Android runs right on top of Linux. Neither has the confusion that desktop Linux has.

It has less to do with the way the OS is structured and more to do with the community developing the desktop environment. It’s designed by people who still like the command line, and who tend to assume that the user has the same level of expertise that they do.

To reassert: You really do not _need_ to use the command line with any current distribution I know.
What I like about Linux is that the user gets to decide whether to use it or not. Just never open a terminal. There you go.

Of course, giving the user more options for how to achieve things can make it more complicated to do things. That’s the price of freedom, you could say…

“There is no bridge between baby steps and ninjutsu.”
Unfortunately, IME, this is true of trying to learn virtually ANYTHING via the internet–everything is written either for the rank amateur or meant as a guide to a specific problem for people who already know what they’re doing in general.

download GTK from the website and begin squinting at the ./configure scripts, trying to figure out how they should work, why they aren’t working now, and what I need to do to get them from here to there.

Trying to use configure, make, make install scripts is usually a lot more difficult than getting whatever version your package manager provides – all the dependencies are done for you. Of course, you certainly LEARN more by compiling it yourself, but you also risk losing your sanity and patience.

Where I learn that glibconfig.h doesn’t live in a SENSIBLE place, like where you keep headers. No, it’s in with the lib files. For some damn reason. Except, not on this machine. I look in /usr/lib and I don’t have glib* or libglib* anything.

So…

sudo apt-get install libglib2.0

But it spews out a bunch of “libglib2.0 is already the newest version.”

Okay then. How about:

sudo apt-get install libglib2.0-dev

“libglib2.0-dev is already the newest version.”

Awesome. Except, I still don’t see any glib here in /usr/lib. Going by what zacaj posted above, let’s take a look and see where this crap is going:

dpkg -L libglib2.0

“dpkg-query: package ‘libglib2.0’ is not installed”

I hate you so much.

I just want to find a single stupid header file, which may or may not be installed, which is part of glib, which is part of GTK. We are very far down the rabbit-hole now, and yet… this is one of five sub-packages. Once I fix this the other four could fail in other ridiculous ways.

And we’re still stuck just #including a single file. We haven’t even compiled anything yet.
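(For anyone else hitting this wall: the dpkg-query error above is just a naming mismatch. The actual installed packages are named libglib2.0-0 and libglib2.0-dev; plain “libglib2.0” isn’t a package name, even though apt appeared to accept it. dpkg can also answer the question in reverse. Here’s a toy sketch of that reverse lookup which runs anywhere; the package and path names mimic the real Debian layout, and real dpkg keeps these per-package file lists under /var/lib/dpkg/info/*.list:)

```shell
# dpkg -S does a reverse lookup: "which package ships this file?"
# Toy version over fake manifests, one .list file per package.
tmp=$(mktemp -d)
cat > "$tmp/libglib2.0-0.list" <<'EOF'
/usr/lib/x86_64-linux-gnu/libglib-2.0.so.0
EOF
cat > "$tmp/libglib2.0-dev.list" <<'EOF'
/usr/include/glib-2.0/glib.h
/usr/lib/x86_64-linux-gnu/glib-2.0/include/glibconfig.h
EOF
# Which manifest mentions glibconfig.h? Strip the .list suffix to get the package.
owner=$(basename "$(grep -l 'glibconfig\.h$' "$tmp"/*.list)" .list)
echo "$owner"
rm -rf "$tmp"
```

On a real system, `dpkg -S glibconfig.h` answers the same question directly, and `dpkg -L libglib2.0-dev` lists everything the dev package actually put on disk (assuming it’s installed).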

That actually makes sense. Previously I have only ever used locate after I’d just installed a library and was looking for some header file or whatever. It would never find it, which, from what you said, makes sense. So I’d just use find. Thanks!

Pretty sure you can skip a step there – as far as I’m aware, find / -name "glibconfig.h" is equivalent. The sudo is optional, depending on whether or not your current user has privileges to peer into the folders where the thing you’re looking for is hiding. (Also depending on how annoyed you are by messages like “can’t open directory /whatever/whatever/whatever” all the time.)

As per below, locate is also great but it isn’t guaranteed to find everything so if locate doesn’t find it you can’t be sure it doesn’t exist…

For even easier future reference: You can also search the whole file system using the search option in the file manager, while the current directory displayed is /
The search accepts wildcards (*) and probably all regular expressions if you know those.
This is probably not quite as fast but won’t require you to memorize or look up command line parameters for find (which, frankly, I cannot remember for the life of me)
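To make the find approach concrete, here’s a self-contained sketch; a throwaway directory tree stands in for /usr, mimicking how Debian-family systems tuck glibconfig.h inside a lib directory rather than an include directory:

```shell
# Build a throwaway tree that mimics where glibconfig.h actually lives
# (inside a lib dir, not an include dir), then locate it with find.
tmp=$(mktemp -d)
mkdir -p "$tmp/lib/x86_64-linux-gnu/glib-2.0/include"
touch "$tmp/lib/x86_64-linux-gnu/glib-2.0/include/glibconfig.h"
found=$(find "$tmp" -name "glibconfig.h")
echo "$found"
rm -rf "$tmp"
```

The same invocation against `/` instead of the throwaway tree is exactly the command discussed above; it’s just slower and may need sudo to avoid permission noise.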

It’s asking for “cairo.h”. Easily found. It’s in /usr/include/cairo – simple. I just add that to the include path and…

It STILL can’t find “cairo.h”. WTF? The exact line is:

In gtktypes.h:
#include <cairo.h>

So, no funny path monkey business in front of the filename. Adding /usr/include/cairo to the include paths should make it impossible for it NOT to find this file. It’s right there! What is this? It’s in lowercase. Permissions aren’t set to anything odd. I used copy & paste, so this isn’t a problem with typos or anything. I am baffled.

This may be a “Buttons are useful” comment, but in Code::Blocks, if you go to Menu->Settings->Compiler and Debugger, you’ll get a window where you can manage any compilers you may want to configure. You’ll note it has a lot of preconfigured settings already, but as far as I know, names aren’t terribly important, just the individual settings, and you can change all of those. (I’m still in Windows, and since GCC, MSVC, and both D compiler variants are preconfigured and just need to be told where the executables are, I haven’t had to muck around in the deeper portions.)

The 3rd tab is titled “Search Directories,” and you can use that to tell the compiler in question where you want it to look to find all your libraries. Try adding /usr/include/cairo to that list.

I apt-get installed libgtk2.0-dev and pkg-config, then opened a new GTK+ project in Code::Blocks, and it failed. Closing it and making a 2nd new project then compiled fine. On Ubuntu, but it should be the same as Mint as far as this goes.

Bingo, pkg-config is the main thing Shamus doesn’t know about at this point. He’s trying to do all this stuff manually, like you would need to do it in Windows, when there are tools right there that do it the right way with little effort.

GTK 2 is totally incompatible with GTK 3, so they’re both separate and have different pkg-config names. You *probably* want 3 (it’s several years newer), but it depends on which version the code was written for.

Thanks for mentioning pkg-config, I hadn’t heard of it before now! Granted, it’s been a looooong time since I did any C/C++ programming on Linux (or anywhere).

For reference, Shamus, I was able to get the GTK test project built once I followed the instructions in this thread: http://ubuntuforums.org/showthread.php?t=498306 ; pkg-config just kind of automagically knows all the flags you need to send to gcc for all the includes and libraries you link.

It’s not actually that it magically knows. The pkg-config binary has a couple paths built into it in terms of defaults (running “strings” on the version I have on this ubuntu laptop, and grepping for “pkgconfig” in the output, shows that the default is “/usr/local/lib/x86_64-linux-gnu/pkgconfig:/usr/local/lib/pkgconfig:/usr/local/share/pkgconfig:/usr/lib/x86_64-linux-gnu/pkgconfig:/usr/lib/pkgconfig:/usr/share/pkgconfig” — which is rather longer than I expected actually). You can add to these paths with an environment variable (PKG_CONFIG_PATH), but that should only ever be necessary if you’ve tried to install stuff from source instead of from the distro package manager.

Anyway, pkg-config searches that path for a file named whatever you give it as the package-name specification (gtk+-3.0 is standard for recent gtk), followed by .pc (so on this machine, it eventually finds /usr/lib/x86_64-linux-gnu/pkgconfig/gtk+-3.0.pc — though I note that the x86_64-linux-gnu stuff appears to be an Ubuntu-ism and nonstandard, although meh, whatever, it still works).

This file is a small text file in a standard format, but it’d be worth seeing what’s inside. “less /usr/lib/x86_64-linux-gnu/pkgconfig/gtk+-3.0.pc” shows this:
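(The quoted file contents didn’t survive into this comment. A representative .pc file looks roughly like this; the field names are the real pkg-config ones, while the specific paths and version number here are only illustrative:)

```
prefix=/usr
libdir=${prefix}/lib/x86_64-linux-gnu
includedir=${prefix}/include

Name: GTK+
Description: GTK+ Graphical UI Library
Version: 3.4.2
Requires: gdk-3.0 atk cairo cairo-gobject gdk-pixbuf-2.0 gio-2.0
Requires.private: pangoft2 gio-unix-2.0
Libs: -L${libdir} -lgtk-3
Cflags: -I${includedir}/gtk-3.0
```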

The first bits (with = signs) are variables set up for later; the last bits (with : characters) are the fields that pkg-config actually uses to find out information. When you ask it for --cflags or --libs, it collects the Cflags: and/or Libs: lines from all required .pc files (the Requires: and Requires.private: lines cause it to recurse) and prints the result.

So here, for --cflags, it’ll print out -I/usr/include/gtk-3.0, then go find the Cflags: for gdk-3.0, atk, cairo, cairo-gobject, gdk-pixbuf-2.0, gio-2.0, pangoft2, and gio-unix-2.0 (and any of their dependencies) and add them to the list.

This last bit — automatically recursing dependencies — is exactly the issue Shamus was seeing. He found the right flags for gtk+-3.0, then ran into the fact that it has a bunch of deps as well.
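That collection step is simple enough to sketch in miniature. The toy below reads a made-up .pc file, expands the ${var} references, and prints the Cflags field; unlike the real tool, it does no Requires: recursion:

```shell
# Miniature pkg-config: read a toy .pc file, expand ${prefix}-style
# variables, and print the Cflags field. (No dependency recursion.)
tmp=$(mktemp -d)
cat > "$tmp/demo.pc" <<'EOF'
prefix=/usr
includedir=${prefix}/include
Name: demo
Description: toy package
Version: 1.0
Cflags: -I${includedir}/demo-3.0
EOF
prefix=$(sed -n 's/^prefix=//p' "$tmp/demo.pc")
includedir=$(sed -n 's/^includedir=//p' "$tmp/demo.pc" | sed "s|\${prefix}|$prefix|")
cflags=$(sed -n 's/^Cflags: //p' "$tmp/demo.pc" | sed "s|\${includedir}|$includedir|")
echo "$cflags"
rm -rf "$tmp"
```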

(At one point I used to know what the difference between Requires and Requires.private was. But it really only matters when you’re writing a .pc file that depends on this one, I believe, so not here. Maybe it causes something to happen differently for --libs, too? Shrug, just running the right pkg-config invocation should get you everything required.)

Yes, pkg-config for the win. All you need to know is the pkg-config package name (which, annoyingly, isn’t always the same as the distro package name. That’s because distros set the name of the thing you install, while the developers set the name of the file that pkg-config uses — so the pkg-config argument is always the same, everywhere, while the argument to apt-get or aptitude or rpm or whatever is always different; this is part of the confusion above, actually).

Anyway. To find the flags to compile stuff from .cc or .cpp into .o files:

pkg-config --cflags gtk+-3.0

will print *everything* required. To then link those .o files into an executable:

pkg-config --libs gtk+-3.0

will print everything required. You can also get both together, with:

pkg-config --cflags --libs gtk+-3.0

Not sure how well eclipse or code::blocks handle this though. If you only care about it working on your machine, then you can just run these once and feed the output into the IDE I suppose.
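On the command line, the usual trick is to splice that output straight into the compile command with $( ) substitution, i.e. gcc main.c -o main $(pkg-config --cflags --libs gtk+-3.0). Here’s a runnable sketch of that plumbing, with a stub function standing in for pkg-config (illustrative flags only) so it works even on a machine without GTK installed:

```shell
# Stub standing in for `pkg-config --cflags --libs gtk+-3.0`; the flags
# are illustrative. The point is the $( ) command substitution.
pkg_config_stub() { echo "-I/usr/include/gtk-3.0 -lgtk-3"; }
cmd="gcc main.c -o main $(pkg_config_stub)"
echo "$cmd"
```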

I don’t believe anyone was born with the secrets of the Windows registry and such at hand; it took time to master them. So it’s rather moot to say “hey, it did not work like I wanted/knew it in the first 2 hours so it must s**k”… well… yeah, so? LEARN IT THE LINUX WAY. This link needs to be placed on more pages: http://linux.oneandoneis2.org/LNW.htm

This is a really obvious comment and slightly flippant, but worth saying: Linux is not Windows. It works differently. It might be worth reading up on some of the fundamentals, as opposed to just diving in and assuming you know what you are doing. The command line is your friend, if you know how to use it.

Reading up on Linux isn’t as easy as it sounds. It’s actually quite hard to self teach Linux fundamentals with only the internet as a tutor. DOS was also really hard to learn without hand holding. It’s the nature of the beast.

See, it’s things like this that have been the reason I’ve never really seen the appeal of Linux. Maybe it’s just a terrible inaccurate stereotype, but nearly every story I hear about Linux seems riddled with “I spent an hour to do something that should be incredibly simple” moments. And I don’t see how having multiple disparate distributions of the same operating system can possibly be a good thing. But maybe I just listen to the wrong people.

Either way, it’s always been easy for me to dismiss Linux entirely because any computer I use must be able to do three things: browse the web, allow me access to my work (via the web) and play games. And Linux just can’t do one of those things – at least, not enough of it anyway. I’m sure it has some redeeming features, and it’s free, but if I can’t play my steam library I may as well not be using it at all.

So I guess in the end, I’m really just hoping you’ll reinstall Windows soon so we can play more Borderlands 2.

Incidentally Shamus, ever since this whole Linux incident you’ve been impossible to get a hold of. You haven’t responded to my emails (well, you did to one of them a few days ago I suppose) and you haven’t been on any IM. How am I supposed to make fun of you if you can’t hear me?

So get on ventrilo, or at least, check your email (which, incidentally, will tell you to get on ventrilo)!

Nope – not sarcasm! I misunderstood – I thought you were saying that Linux couldn’t do one of those things, as in Linux couldn’t do any one of them. Not that it could do two but not the third. If you follow :)

I still occasionally have problems where it’s definitely Linux’s fault, though, but you’re right, when you get started it really feels like “god this is so stupid and backwards why doesn’t it just WORK” when really it’s just a different way of thinking about things.

This. I’ve recently spent more than a few moments trying to figure out where to find the Windows package manager before remembering that you’re supposed to google and download installers from the web there.

Same here.
It’s sooo tedious trying to find software on the web, download and install it, when you can have a package manager where you type in keywords, tick a few boxes (or untick a few others), and have the stuff installed (or uninstalled) in the background, and be prompted to update as soon as one is available.
For no money.

Yes, I agree, however! The sense of satisfaction after figuring out how to do said simple thing, for me, is wonderful.

I used Linux (Mint, no bonus geek cred there) for the better part of a year as my primary OS. Hell of a lot of fun, and by the end of it my productivity was almost where it was in Windows!

“Getting games to run” was always a fun challenge. Installing Wine is a start, but even then it seemed like every game required slightly different tricks and tweaks, so getting to that “New Game” button was itself the game. My friends all thought I was a ridiculous masochist…

As for having multiple distributions: an OS is essentially just a bunch of programs packaged together, and as long as all that software exists you can’t stop people from putting together a package with a different combination of software.

Having multiple but in many ways compatible OSes is definitely preferable to all of them being completely different and incompatible with each other.

The benefits? People want different things out of their computers. 95% of the time I get annoyed with something in Windows it’s because I want to do something, but Microsoft has decided that their default way is the only correct one. If things aren’t going to work, I prefer it to be for some mysterious technical reason rather than someone arbitrarily telling me what to do.

In my experience, it’s a terrible inaccurate stereotype, possibly fueled by beginners being told by experienced users to use distributions like Arch, which shares America’s dedication to allowing people to shoot themselves in the foot, and Gentoo, which is so convoluted merely installing it is viewed as a trial of manhood in certain geek circles.

I’ve been using Linux for a bit over a decade and don’t see how anyone can get anything done on Windows. I did get a wintendo recently because I really wanted to play GW2, but trying to get real work done with that thing would be pretty horrible.

Well, that was part of your problem. Try opening/installing Synaptic Package Manager. This will give you a lot more options when you search (like just searching for words in the title rather than in the description).

The Software Manager is a slimmed down version of Synaptic, and can be frustrating when you want to install more than a couple desktop applications (ie libraries and dev packages). I’ve had cases where files didn’t even show up in the Software Manager when they were found easily by Synaptic.

I remember the first time I tried Linux … I was still in the Windows mindset, and nothing worked! I was so frustrated. I spent hours trying to install GIMP and all of its dependencies, only to eventually wind up at a dead end. Once I gave up, I noticed that GIMP was already installed. Good god. The next time I tried Linux, it was a distro that came with a package manager, which solved a lot of the problems I had with using it.

I went through a similar story when I first tried to use WxWidgets. Tutorials on how to set it up differed wildly, there were absolutely none for the latest version and I was getting the weirdest error messages on the planet. That was on Windows.

No such trouble with a Debian-based Linux distribution (which Mint is). As long as what you’re trying to install is in the repositories (or at least a .deb package) it’s ridiculously easy to install it. Not to mention you’re set on updates. Hell, I probably have more updaters on my Windows machine than actual programs.

I also see most of Shamus’ problems as being specific to C and C++. In Java, most of the libs you need come with the compiler (which comes with its own caveats), and in Python, adding additional libraries is really easy (and there are far fewer ways for the developers of the libraries to ruin your day).

Additionally, Shamus tried to change everything at once: IDE, OS, and libraries he’s not very comfortable with. Moving his development over to a cross-platform IDE would be a start; then get the project running as it stands. Next, he can practice in this new IDE, implementing and learning his way through a new library. Then switch OS. It would have taken longer, but the less vertical learning curve would have reduced frustration. One step at a time.

When he started the project, there was the option to build an application for windows, and only windows, or to build an application one could run — and conceivably, develop — on any modern OS. By starting with one choice, then changing his mind (which happens a lot in big development projects), he’s trapped himself on a tougher path. :P

I would highly recommend using Qt Creator for your first attempt at Linux programming. It is way more integrated than any of the other IDEs, in the sense that it is kind of a closed environment – Qt libraries and headers will be installed with it and you should be able to start it up, make a new GUI application using a short wizard, compile, run and go from there. Since you have used Qt Creator on Windows, everything you have learned will still be applicable. In addition to that, Qt libraries are a world of their own, so the need for linking in external libraries (which is, of course, possible) is usually a bit delayed.
Eclipse is immensely powerful and extensible, but you have to keep in mind that it supports many different languages, tools and platforms. This, in turn, makes one jump through some hoops until everything is set up, but it gets better with time.
You should remember that you’ve come from a rather monolithic platform with well-defined properties (Windows) to one that is much more heterogeneous, flexible and, even, divergent (Linux). I understand all too well how frustrating it can be, having to deal with all the downsides of said flexibility, while feeling no benefits. The managed-make approach, in which the IDE controls the build process more or less completely, makes all the sense in the world if you’re on Windows. Things are much foggier on Linux, because the spectrum of possible situations is wider: consider just the common requirement that an application should compile and work properly on different CPU architectures. I realize that you are simply using Linux on a desktop PC and don’t care for all of this, but all the complexity is here for a reason.
I feel that Qt should isolate you from the worst of it, because it is a relatively self-sufficient set of libraries and development tools, much more like what you’re used to, coming from VS and Windows. Having said that, I hope you’ll stick with Linux and decide to explore a bit more. It is an ever-changing landscape, but, coming from Windows myself, I have learned to love the ease (yes!) with which one can get, examine and use the huge amount of existing code. Remember, whenever you think you have to write a function that performs some mundane task, chances are, it is already sitting in a library in a repository of your Linux distribution :)

I love how folks praise Linux, or speak of a “learning curve”. What Shamus is going through is not a learning curve, it’s going through bootcamp with an open stomach wound. (Yep! That happened to me for real; oh, the joy of seeing straight down into my stomach muscles.)

My advice, Shamus, is that if you don’t mind looking at alternative programming languages, then PureBasic is nice (good API support; it’s not really a BASIC language any more; it runs on Linux/Mac/Windows; demos are available; the license is not too bad, you get all 3 platforms for the one license and it’s a lifetime license; and the developer behind it is accessible and brilliant.)

Then again you may still have GTK issues on Linux even with PureBasic, as its platform-independent GUI commands rely on GTK on Linux.
Linux suffers from the “Too many cooks in the kitchen!” problem, and nobody has a clue how many sinks there are. Hehe!

Linux will never compete fully with Windows until the day you no longer need to do apt-get and compile sources to get programs installed and running.

Windows is no saint either though. Trying to compile a program that will work on all Windows 5.x and 6.x versions using C++ Express 2012 is a real pain in the ass, caused by dependencies that are not really dependencies.

On Windows, PureBasic is a dream. You just download and install, run it, type a few lines of code and compile (the F5 key is compile + run). With C++ Express 2012 it was not that easy, and it took ages to install and “optimize” and stuff; then you have to use the wizard and create a project and so on.

A PureBasic source (one that only uses the native PureBasic command set) can have its source files copied to Linux or a Mac and then compiled there, and they should work. There is no need to use any OS API stuff for simple 2D and 3D (it uses OGRE) programs.

And the forum is damn helpful and friendly, and loaded with code examples and info.

I know I sound like a walking billboard, but that just shows how impressed I am with it. If it wasn’t for PureBasic I might not have bothered to continue with programming when I moved from the Amiga to the PC as a hardware platform.

Heck, Fred (the guy behind PureBasic) even opensourced the Amiga version of PureBasic since it would no longer be developed any further.
Interoperability with other code is not difficult either. On Windows you can use/call .lib and .dll and on Linux it would be .lib and .so if I remember correctly.

Myself I’m coding a few libs in C (using C++ Express 2012, and sometimes GCC in the past), and then using those in PureBasic. Prototyping and playing around with ideas is blazingly fast with PureBasic.

The IDE for PureBasic is written in PureBasic, so that alone is a nice indicator of the potential. I always have the IDE running on my system as I find it practical to use it as a “complex calculator/idea tester” when I surf the web or read something new, and then just tab to the IDE and try out my thoughts, hit F5 and see if it’s worth pursuing or not.

Asking Shamus to change programming languages just to make things easier is the wrong path entirely. It’s silly. He knows the language he picked and it works perfectly well on Linux, once it is set up. The idea should be to minimize the new things you have to learn, not add to them.

Most folks are being nice about it, but Roger, you are bordering on spam, and that’s coming from an Amiga guy too.

As for this: “Linux will never compete fully with Windows until the day you no longer need to do apt-get and compile sources to get programs installed and running.”

What part of Shamus’s desire to compile and run hello world did you miss? He’s a programmer. He wants to compile and run a program he wrote. That’s why he’s trying to do it. And I’m sure he has a substantial codebase in his chosen language he would rather not throw away.

If you are not a developer and do not want to compile and run your own programs, you can get almost everything in prepackaged format that is available from a GUI (synaptic or software manager). apt-get or aptitude are nice command line options for the same thing, they are not massive sources of additional complexity.

As for this: “I love how folks praise Linux, or speak of “learning curve”, what Shamus is going through is not a learning curve, it’s going through bootcamp with a open stomach wound”

That’s a little excessive, don’t you think? This is the classic definition of a learning curve. When you move from one environment to another, you have to learn how the new environment works. Shamus is making a quintuple leap; he’s using linux for the first (well, sort of) time, learning to administer linux for the first time (package management, etc), trying to learn the new development environment (Eclipse), with a new GUI toolkit (GTK), and how to develop software on linux at the same time.

All those tasks, all at the same time, *take* time. Personally I wouldn’t suggest all at once, but then it’s trivial for me to keep multiple computers around for different purposes. What I would normally recommend is to wait until you are going to buy a new computer anyway, do that, and put linux on the old one. Use the old one with linux to learn linux gradually.

People have this idea that Windows (and Macs, for that matter) are easy to use and Linux isn’t. That’s wrong, because it’s looking at the difficulty of *switching* from a preferred operating system TO linux. If you cloned someone and put each clone in front of windows, macos, or linux for the first time and with similar tasks, the results would be a lot closer than you think. There ARE differences, and Linux is probably not the user-friendly leader, but the differences are not as big as people think.

“If you are not a developer and do not want to compile and run your own programs, you can get almost everything in prepackaged format that is available from a GUI”

To expand on this point–I have been using Linux day-to-day for a long time, and in the last five years or so I have not once had to build anything from source to get anything user-facing to work. I will occasionally (VERY occasionally) have to build some programming-related library from source, but I’m a developer, that kind of thing is to be expected. Your average user can do everything fine on Linux without touching the command line or compiling a single thing from source. This has been true for quite some time now.

Please explain yourself about this statement “Most folks are being nice about it, but Roger, you are bordering on spam, and that’s coming from an Amiga guy too.” as this borders on being a personal attack.

Roger, all I meant by that was that you were pushing PureBasic very, very hard in your comment. There was more about that language that you love so much than, well, anything else. You admitted it yourself: “I know I sound like a walking billboard…” Well, you DID sound like a walking billboard, frankly, and I was pointing that out. That’s all there is to it, nothing personal. And it’s not my blog to moderate, either.

Oh, and the thing about the Amiga — I was an Amiga user back in the day, too, so I was referring to myself there.

“What part of Shamus’s desire to compile and run hello world did you miss?” there is no need to be condescending, I’m not blind nor stupid.

Also, my statement “Linux will never compete fully with Windows until the day you no longer need to do apt-get and compile sources to get programs installed and running”
still holds true. Go to any site that has a Linux program/project. What do you find? A tar archive with source code.
Go to any site with a Windows program/project and you will usually find an exe or even a full installer.

And to see what Linux “could” be, look at MacOS X; it is a “Unix” underneath as well.
Linux distros out there need to surpass MacOS X to compete with Windows 7, for example.

The distros are based on the same kernel but use wildly different GUIs and tools, with differences that are sometimes as big as or bigger than the difference between Windows 7 and Windows 8.

Just look at the ire that change in look caused (“where’s the start menu?”; apparently moving the mouse to the right side of the screen to reveal it was too complicated), and you expect “those” users to have the patience to use any of the current Linux distros?

You got me completely wrong, man. I’d like nothing more than to see Linux having a 90% market share, with MacOS and Windows being niche OSes instead.

You are saying that Linux is fine as it is; the issue is that Linux aficionados have been saying that for years now.

Heck, Linux is not even an actual “OS”; it is a kernel, and each distro is the actual Linux “OS”.

To compete, all distros need to look and behave exactly the same “out of the box”, and the distro-specific stuff should be selectable from within the OS instead, allowing you to change the look/feel and the apps/tools/gadgets of the OS: a “Linux Interface Pack”, if you will.

And it should be as easy as changing a theme on Windows or Mac.

And for users/devs, installing an IDE like Shamus did should “just work™” instead of requiring you to download files and have them spread all over the place. Microsoft’s Express-this and Studio-that and SDK-what is just as guilty.

Using PureBasic as a fond (to me) example of how you can just install and compile straight away was meant to contrast that.
And sure, if I get a few folks to test out the demo and maybe like it and buy it, it means that the guy that makes it can keep living on doing what he does, namely developing PureBasic, which means new cool features, a better IDE, and language expansion. (I got my license almost a decade ago, and I haven’t paid since (lifetime license), and if I want to do Mac or Linux coding I can just download the IDEs for that at no extra cost.)

This is as close to ideal for an independent developer as you can get. I’d be happy to be proven wrong if someone could show me any other MacOS, Linux and Windows developer system as easy as that.
As long as you do not add any platform-specific API calls (obviously), you can copy and paste the source to any of the 3 platforms, compile and run, and the GUI/program you made will work without a change.

How can Linux progress unless people say it can still be improved? If you say it’s fine as it is, then those who work on it will focus less on those areas.

Heck, even PureBasic isn’t perfect, and I’m in the process of coding my own C standard lib for the Windows APIs, and will do the same for Linux and Mac later as well, since one of my upcoming projects will be cross-platform; this will be both a challenge and something I enjoy doing. I will be making my own GUI API/lib etc.

If it wasn’t for the fact that I need to make a living, I’d probably have started working on that OS I always wanted to make by now, with ideas and concepts that would blow any current popular OS out of the water. (All I can say is that Microsoft’s original “Blue” project from a few years back was on the right track; we’ll see if they’ve messed it up since then.)

There is only one way to code correctly in my opinion. Simplify, simplify, simplify, and stop short of actually losing functionality.

People hate change, and unless what they change to is easier/better/faster, they will dislike it.
If Linux is “just as good” or “better”, then why aren’t people emigrating to Linux by the millions?

Shamus’s adventure is an example of not just a “Windows” developer but a guy who also has a very logical way of thinking about problems (and enjoys doing so), and if he is having these issues, then imagine what someone with less skill will go through? Worst case, they give up and go back to Windows (and then tell everyone to stay away from “Linux”, rather than from the distro or even the dev environment they had issues with).

“Go to any site that has a Linux program/project. What do you find? A tar archive with source code. Go to any site with a Windows program/project and you will usually find an exe or even a full installer.”

On Linux, you are supposed to go first to your distribution’s package manager. There, you will find an RPM or DEB. If you can’t find one from your distribution of choice, or you need a newer version, you can go to the software’s website and perhaps find a more recent package for your distribution there. If you are a non-developer user and you can’t find a particular piece of software in your distribution’s archive or package flavor, this should be considered a hint that maybe you don’t want to try that particular piece of software yet, as a learning experience may lurk in the weeds.
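As a sketch of that repository-first workflow on a Debian-derived system (the package name is just an example; Synaptic and the Software Center are front ends to the same archive, so the graphical route finds the identical packages):

```shell
# Search the distribution archive before going anywhere near the project's
# website. Query commands need no root; only the install step does.
#
#   apt-cache search "image editor"   # keyword search of package descriptions
#   apt-cache show gimp               # read the description before installing
#   sudo apt-get install gimp         # fetch it, plus everything it depends on
#
workflow="search show install"
echo "$workflow"
```

The same package names work in any front end, because they all talk to the one package database.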

This is a problem for commercial software, but most commercial software with a Linux version tries to provide packages these days (some with more success than others). I agree this is a flaw, but it’s not one that the people actually developing Linux itself can solve; commercial software people need to get better at this.

“There is only one way to code correctly in my opinion”… No. Just no. I’ll code the way I think is right, and you do the same, and if there is some overlap, maybe we can get along in the same project. ;)

“If Linux is “just as good” or “better”, then why aren’t people emigrating to Linux by the millions?”

People are emigrating to Linux. I’m surprised to hear an ex-Amiga user arguing from popularity, though. You should know from the history of that platform that being just as good and better than the alternatives does not mean you will be more popular. All that said, I am not arguing Linux is right for everyone or that it is more user friendly than the alternatives. It has pluses and minuses. Linux is right *for me* and in *my opinion* is a superior platform. I feel that way because I’ve invested the time and energy to learn the quirks. I’m just trying to point out that if you start learning Linux *from scratch* without having to compare it to a platform you have already learned to use, the user interface doesn’t seem nearly as bad. Why? Because the Linux user interface isn’t awful; it’s different.

“and you expect “those” users [who have trouble finding the start menu on Win8] to have the patience to use any of the current Linux distros?”

Actually, they could use any of the current linux distributions without much trouble if they had someone to install the distribution for them and get hardware drivers loaded. For that type of user, a web browser and the Software Center is going to be more than adequate, and they will never need to touch a command line.

“This is as close to ideal for an independent developer as you can get. I’d be happy to be proven wrong if someone could show me any other MacOS, Linux and Windows developer system as easy as that.”

I’m not interested in proving you wrong. I’m glad you’re happy with your choice of language and IDE, but they are not my choice.

“And to see what Linux “could” be, look at MacOS X, it is “Linux” as well.”

MacOS is not Linux. MacOS is based on a commercial UNIX with a user interface facelift. That’s great for Mac users! I am happy they have a computer with a reliable infrastructure beneath that user interface now.

I’ve tried to use the Mac user interface, and I can’t stand it. I hate it to pieces. Nothing works properly. It’s a horrible experience FOR ME, and I can’t do any of the things I want to do easily or at all. MacOS is like UNIX with training wheels that cannot be removed, or at least not easily.

I use Linux because I have a liberty interest in open software that I can control myself, and because I swore after the Amiga died that I wasn’t going to tie myself to a platform that could be killed by a monopolist’s stranglehold.

“Shamus’s adventure is an example of not just a “Windows” developer but a guy who also has a very logical way of thinking about problems (and enjoys doing so), and if he is having these issues, then imagine what someone with less skill will go through?”

He’s having these issues because 1) There is a learning curve 2) he’s trying to learn everything at once 3) he’s trying to learn developer tasks, which are never “user friendly” because developers are expected to have Clue in copious quantities 4) Windows Clue does not transfer smoothly to Linux Clue.

I’m not denying that there is a skill investment required to use Linux, particularly for development. That’s true of any operating system.

“To compete, all distros need to look and behave exactly the same “out of the box”, and the distro-specific stuff should be selectable from within the OS instead, letting you change the look/feel and the apps/tools/gadgets of the OS – a Linux OS Bundle Pack if you will, or “Linux Interface Pack”. And it should be as easy as changing a theme on Windows or Mac.”

No. Oh my dear mother of god no no no. You have NO IDEA how deep the various distribution level differences go, even when a lot of the software is common. The competition between distributions about package management, update philosophy, software policy, etc, is valuable. It would be incredibly difficult and bug-ridden and unstable to try to switch between distribution philosophies on the fly. Each Linux distribution needs to be considered a separate but related operating system, where expertise is easily transferred from one to the other with a little time.

Now, if you meant “desktops” or “window managers”, most Linux distributions already allow a choice between, say, “gnome” and “kde” and “twm” and “icewm” and “CDE” and ….. you get the idea.

One of the big reasons I use Linux is that I *can* customize things. I’m not forced into the one true MacOS way or the one true Microsoft way. I can use my choice of desktops/windowmanagers/ides/GUItoolkits/etc without being forced into a choice that “everyone uses”.

Allowing for choice does make it more difficult for end users who don’t need or want choice, they just want “foo”. I won’t deny that. If having a choice between various degrees of foo is not desirable and worth the effort for them, they are welcome to not use Linux. It may be a better choice for them to use Windows or MacOS. That doesn’t bug me.

Excuse me, but you don’t GO to sites with Linux projects unless you want to contribute or fiddle with them or maybe do a bug report or ask a question or something. If you just want to use the program, you go to your friendly package-managing GUI and click on the program to install it. That’s all you do.
If you’re getting into slightly more complicated things and you’re not sure which ones you need because they look kind of similar, you click on all of them. The ones you turned out not to need, odd little libraries for specialized purposes you don’t understand, you will never see again but they probably only take up like a meg between all of them so it doesn’t matter. Linux and all the software that you’re ever likely to install on it other than big games takes up less space than a vanilla install of Windows, so whatever, install anything you have the vague feeling you might need.
So yeah. The sites of Windows programs have it all packaged up for you to download because that’s what they’re for, because Windows makes you go and find the software. The sites for Linux programs aren’t there for that–if you want to use the program there is no point going to the site, your distribution has it packaged up to work with your distribution.
To some extent this probably has less to do with Linux and more to do with open source. If you’re going to buy something to run on Linux it’s more likely you’ll want to go to the site, and that site will have stuff to download that doesn’t work well with your distribution’s package manager but makes up for it by rolling in more of the libraries it needs.

I use Linux. I am not a programmer. I use the command line very, very rarely. I’ve been using Linux for more than ten years. I’ve never compiled anything. It’s been years since I installed any individual programs by hand. As an ordinary user, it’s always better to just go to the repository, which will keep track of all the crud for you and uninstall for you cleanly if you want to get rid of it again. There was a time when there was sometimes a point to installing stuff from the command line all by yourself with rpm this and that, tracking down packages and dependencies. That time is long past. Apt works well enough that if you’re a command line kind of person there’s no need to switch to GUIs, but I’m not mostly a command line kind of guy so I don’t (although I’m glad it’s there).
So basically, I think you’re peddling mythology.

“Linux will never compete fully with Windows until the day you no longer need to do apt-get and compile sources to get programs installed and running.”

I take it you haven’t used Linux in a while. Like, 5-10 years?

For non-programming stuff I never bother with apt-get, and I certainly am not compiling sources. I just open my distribution’s GUI package manager (Ubuntu Software Center in my case), and I get piles of applications I can install. I can search for something I want, or just browse. When I find what I’m looking for, I just click “Install.” A few minutes later, it’s all done. In many ways it’s easier than on Windows, where software is scattered across a large number of sites, I need to deal with an installer that asks a bunch of questions, and in the end it’ll likely spam icons everywhere I don’t want them and put an unwanted icon in the tray.

For programming stuff, even then I usually use the package manager. If I need the very popular zlib compression library, I just ask the package manager to get the developer package for me. But sometimes I need something unusual, or very new, and then I need to download and compile it. Just like I do on Windows.
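As a concrete sketch of that zlib case on Debian/Ubuntu, where the developer package really is named `zlib1g-dev` (the runtime package `zlib1g` is usually already installed):

```shell
# Runtime library vs. developer package, Debian-style:
#   zlib1g      -- the shared library most users already have
#   zlib1g-dev  -- adds the headers (zlib.h) and the link-time symlink
#
#   sudo apt-get install zlib1g-dev   # one command, headers and all
#   gcc myprog.c -lz -o myprog        # now #include <zlib.h> just works
#
# The naming convention generalizes: to compile against libfoo, look for
# a libfoo-dev package.
runtime="zlib1g"
dev="${runtime}-dev"                  # derive the dev package name
echo "$dev"
```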

So, why do you see people talking about apt-get so much? Because it’s easy to explain. “To accomplish your task, open up a terminal window and type this in. Heck, you can even copy and paste it. If you’re curious, you can look it up in the help to see what it’s doing.” This is far more convenient than the Windows or Mac equivalent, where you end up describing screen after screen of dialog boxes and menus.

And they’ll be different dialog boxes in different versions of Windows, too. Windows XP dialogs won’t be the same as Windows 7 which won’t be the same as Windows 8 ones. Often such instructions will fail right from the “Go to place X and click” stage–place X will be different or the thing you click will have been moved.
Generally, while the Linux command line does evolve in small ways, it’s more by accretion; old commands never stop working. So a suggestion that worked three years ago should still work.

I feel like ‘how does this work’ is a pretty common phenomenon in programming, Shamus. There are languages and operating systems that expect you to be an expert before you ever start using their software. I remember trying for a week to get what were supposedly the easiest Python web environments running. Then I thought to myself, ‘you know what takes 5 minutes to get working on Windows? PHP.’ I’ve heard plenty about how PHP is the insecure, buggy devil, but some languages use interfaces so obtuse I’m surprised they ever get adopted.

Saying PHP is good because it’s easy to get running is like saying garbage is good eating because it’s easy to come by. While it’s true that the wide availability of garbage is a point in its favor, it does not justify the costs associated with eating garbage.

Hey, i just wanted to say… really, you’re doing great here. I don’t know if that’s encouraging or discouraging, but anyway…

What you’re running into here is an unfortunate side-effect of the attempt to slim the various Linux distros down so they’re CD-sized. What happens is they pull out the files and such needed for developers… which is fine, except for the first time you run into this you don’t know what is going on. Bottom line is, by using synaptic or aptitude or whatever you can easily get these things and they usually should get set up relatively smoothly.

However, there is that period when you’re just figuring out what to do where everything is a mess :(

When I read the paragraph where you go to the internet to download libraries, I fell to my knees and wept. “Noooooooooooo!” I exclaimed, “Oh, Shamus, no, please don’t go there! It will only take you to a dark place of unpleasantness!”

Of course, you’ve already discovered the result, and now know the reason for my panic – the Windows Way of doing things just isn’t applicable here. We get stuff from the package manager. The package manager is civilisation!

Only, you’ve had trouble with that, too. The split between libfoo and libfoo-dev, for instance, is perhaps not intuitive at first – but most users don’t need the headers, so it kinda makes sense to package the shared libraries separately.
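You can see the libfoo / libfoo-dev split for yourself with dpkg’s file listings (queries need no root; the package names here are the Ubuntu ones and may differ slightly on other distros):

```shell
# Inspect what each half of the split actually ships:
#
#   dpkg -L libpng16-16 | grep '\.so'   # runtime package: shared objects only
#   dpkg -L libpng-dev  | grep '\.h$'   # dev package: the headers gcc needs
#
# Rule of thumb when the compiler says it can't find <foo.h>:
lib="libpng"
echo "install ${lib}-dev"
```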

I’d love to point out that I’d have similar troubles if I attempted to bring my Linux know-how over to Windows and try to use it that way; I could rage against Windows Updates for not actually updating anything but the core OS, I could lament the need to set PATH to get libraries working, and so on. The Windows Way isn’t inherently “easier”, it’s just what people are used to. I don’t want to rock the boat any more than this comment section is already doing, so I’ll just say – please have patience, and remember that some of your “PC know-how” is actually “Windows know-how”.

But. “Windows updates” – of course it only updates Windows stuff. The whole name of the entire thing is WINDOWS Updates, not EVERYTHING Updates. Windows update is a tool for updating your windows software.

Eliah, that’s both right and wrong. It’s right because, well, windows updates is a tool for updating windows, a tool that microsoft wrote for their own software and it’s reasonable that it should only update that one application…

Except that one application is the operating system, and keeping installed software up to date is a problem that EVERY software application has. Installation management on Windows is a nightmare, and you end up trusting every application to install and remove itself in a way that can easily lead to chaos, even before every application installs its own daemon updater process that does things its own way, in the background, out of the user’s control. I’ve seen major websites fail because Windows Update forced an update onto the server that broke the website, with neither notification nor the option to decline or delay. And nowadays every application wants to do this itself and there are 50 gazillion daemons checking for a new version of foo every 5 minutes and it drives me insane.

Under Linux, the OS provides a package manager. The package knows how to install itself and remove itself in the way that the distribution prefers (which may or may not be how the developer prefers!). There are sensible ways to start and stop daemons automatically. You aren’t dependent on each application’s more-or-less-buggy installer; the OS has an installer and all the user does is say “Here’s a package, install it and anything it depends on.” And the package manager will then automatically check for updates — ONE daemon checking all of them, under the user’s control. It’s more efficient and it empowers the user.

So, yes, windows updates is a tool for updating windows, and it’s understandable why MS chose to write it that way. It was also incredibly stupid of them to NOT provide a decent packaging system and updater for third party applications such as Linux has had for over a decade. There’s no need for every developer to solve the same problem over and over again (with new and different bugs). Solve it once, do it right, let everyone use the result.

This will sound blasphemous, but what you are describing sounds very much like the app store in Windows 8.

Edit: On a rather more serious note, well, I do see your point. And yeah, keeping stuff updated on Win is a right pain. Otoh, it might help that programs have free range and their own decisions – there is no arbitrary ruleset to conform to, no delays of releases for validation/checking, and you can just not update the stuff you don’t care about. Moreover, you can tell the AV to update itself on its own schedule, you can tell, say, Java, to notify of available updates, and you can tell, e.g. LibreOffice, to sod off and not update at all. I don’t know if you can do that with a package manager… I guess you probably can, otherwise it would be stupidly oppressive, but still…

The thing is, the package manager, as a single source of all software, still sounds to me very much like an app store from Microsoft/Apple. I don’t know what the criteria for being in the package manager’s list are, or how available the stuff in there is…

I’ll preface this by saying that the only Windows machine I have is still on Windows 7 and is going to stay that way as long as I can still play games without updating. So I don’t know what exactly is in the Windows 8 app store. They MIGHT have something that shares useful features with a package manager, but they might not. I don’t know for sure, and I do have a few concerns that come to mind…

“programs have free range and decisions” — The package systems on linux are very flexible. You can do pretty much anything you need to with install scripts. Distributions often have policies about what files go where, but you don’t have to follow those policies if you are OK with distributing your application yourself rather than through debian or redhat or whoever. So, you can choose to follow the rules (which are not arbitrary) or choose to go your own way.

“no delays of releases for validation checking” — there aren’t delays of releases under linux either. Upstream authors release as often as they want to, and people can install it by hand. Distribution developers may delay a little in packaging the release, but it’s usually pretty quick. On Debian, that package goes into the unstable version. If you like living on the edge, it gets downloaded and installed right away. Eventually, the package is moved into “testing”, which is for people who like to see the edge without getting too close. And eventually the new version migrates into “stable”, probably after a formal release cycle with integration testing of everything. So users can have updates pretty much as fast as they are comfortable with.
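That stable/testing/unstable choice is literally one line of apt configuration. A sketch of `/etc/apt/sources.list` (the mirror URL is illustrative; your distribution’s docs have the real lines):

```shell
# One line decides which Debian release you track ("main" is the
# free-software section of the archive):
#
#   deb http://deb.debian.org/debian stable   main   # conservative
#   deb http://deb.debian.org/debian testing  main   # near the edge
#   deb http://deb.debian.org/debian unstable main   # the edge itself
```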

“You can just not update the stuff you don’t care about” — that’s a good feature, except for when there are security issues with the “stuff you don’t care about” and a hacker takes over your system. With a package manager, you *want* it to update what you have installed for security fixes. If you don’t care about it, you uninstall it. That said, advanced users can tell the package manager what to update, where, and when. It’s quite powerful and useful.

When you install something you get the latest stable version with security fixes, or the latest development version with little delay. Either way it is kept up to date for you, in the background, usually without even being noticed, but also under your control. If you don’t want the package, uninstall it. Advanced users get more control. It just makes sense.

As for criteria, it depends on the distribution. In Debian, to get into the official distribution, your software has to be freely redistributable, legal, and preferably open-source. The full Debian archive is *HUGE*. Everything in it is digitally signed as well, so you know you are getting official, unchanged stuff. If the software you want doesn’t fit those criteria (commercial, perhaps), it can still be packaged up in a .deb so that the package manager can handle it. Users can download and install the .deb by hand, or you can set up your own archive and ask people to add that archive to their apt configuration, and your stuff will be mixed in with the official distribution stuff.
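Both of those escape hatches are a couple of commands. A sketch (file names and the archive URL are made up for illustration; real vendors publish their own repository line and signing key):

```shell
# Installing a package from outside the official archive, the Debian way:
#
#   sudo dpkg -i somepackage.deb      # install a .deb downloaded by hand
#   sudo apt-get install -f           # then pull in its declared dependencies
#
# Or teach apt about a third-party archive so updates flow automatically:
#
#   echo "deb http://example.com/debian stable main" | \
#       sudo tee /etc/apt/sources.list.d/example.list
#   sudo apt-get update
#
pkg_file="somepackage.deb"
echo "extension: ${pkg_file##*.}"     # a .deb is just an archive apt understands
```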

It is incredibly configurable and useful.

The biggest concerns I would have about a microsoft app store would be:
1) Is it really offering package management, or just a uniform way to spend money and download an installer?
2) Can I trust an app I download and install to do what I expect and not spy on me? Debian apps are open-source and a malicious app is likely to be noticed. Windows software has already proven untrustworthy.
3) If an app depends on another app, can the app store make sure that both apps are installed together? Uninstalled together? Installed together, but separated if required by other dependencies?
4) Does Microsoft have to approve or digitally sign everything before I can install it? The “trusted-path” bios-to-cpu-to-OS stuff is SCARY. I want cryptographic protection for my OS packages, but I want the ability to install something that’s not signed by a third party if I so choose — I HAVE to be able to do that to develop software, and I might not want to ask permission from the OS vendor to sell or give it to others.

The way debian is set up, the user is in control and is safe, but can choose to take unsafe actions if desired. I don’t really trust microsoft to keep their own software safe, and I certainly don’t trust them enough to ask them for permission to install software on my own computer.

I suspect the W8 app store was inspired by the Apple/Android application markets rather than linux package management, and frankly, the apple/android markets are to package management what a 6 year old black belt wearing boxing gloves is to a fully armed ninja warrior.

(Well, OK, “I give this app permission to do this and this, but not that” is a useful feature that package management lacks. But it’s hard to add in a trustworthy way.)

The main difference between Windows 8’s app store and a package manager is that Microsoft is the king of their app store. The various Linux distributions all have their own repositories where they choose what software is available, but you can always add a link to a different repository in your package manager yourself.

“This will sound blasphemous, but what you are describing sounds very much like the app store in Windows 8.”

…except that Microsoft will decide what goes in the app store and what doesn’t, and take a decent cut of the stuff that does. For Linux, the distribution keeps a repository with loads of software, but third parties are welcome to run their own too, and users can use them.
openSUSE even offers a build service, where anyone can create and host packages that they feel are missing, on a SUSE server. If you can’t find a piece of software on the SUSE servers, 95% of the time it doesn’t exist.

=> Package management in Linux gives you access to lots of software, while I fear very much that MS’s app store will limit your horizons for the sake of MS’s profit, much like Apple’s app store – a golden cage.

For Windows developers, Microsoft has written a package manager, NuGet. (Or, rather, they’re the main backer of the open-source project.) It’s even built into Visual Studio 2012.

So, there’s Windows/Microsoft Update to update the OS (and Office), in Windows 8 there’s the Windows Store that will update the apps, and for developers there’s NuGet (and it’s flexible in that you can host your own, internal package repositories).

And while I have yet to use this Microsoft Store, my experience with the Mac Store leads me to conclude that neither of them can hold a candle to the Debian and Ubuntu software repositories. The Mac store is basically a glorified downloader tool for add-on software, whereas APT will manage the whole system from top to bottom, tracking dependencies and letting me upgrade or swap out OS components like what window managers are available, which java runtime to use, lets me grab everything from special Samsung printer crap I might need to a selection of nVidia drivers. And when there’s a potential buffer overflow in something like libpng, I can get that fix and know that everything on my system that was using it is now using the fixed version. It’s just so sweet.
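The libpng scenario sketched in commands (the package name is the Ubuntu one and may differ on other distros; the queries need no root):

```shell
# One upgrade fixes every program linked against the shared library:
#
#   sudo apt-get update && sudo apt-get upgrade   # fetch and apply the fix
#   apt-cache rdepends libpng16-16                # list everything that uses it
#
# Contrast with Windows, where each application bundling its own copy of
# the library must ship its own patch -- if it ever does.
fixed_once="libpng"
echo "$fixed_once fixed for all dependents"
```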

The way to deal with a flame-out on final approach to a carrier is to turn slightly to the left and eject. The important part is the slight turn to the left: It makes sure that your flaming wreckage hits the ocean instead of the valuable aircraft carrier.

I don’t dare uninstall the loser, lest it take those shared files with it to oblivion.

As long as you install the packages through apt (or the Software Center or Synaptic) this is not something you need to worry about. Dependencies and shared files are managed for you.
Also, afaik the Software Center is a bit of a filtered view of the package collection; if you use Synaptic you get access to the entire library.

‘IDE means “Integrated development environment”, and is to coding what a word processor is to writing’

A significant hindrance to the actual job at hand? /facetious

“There is no bridge between baby steps and ninjutsu.”

Like Wedge says, that is way too common with, well, everything really. Try checking out stuff on physics, for example. Outside of a couple of good sources, and that includes books, it’s all either oversimplified to uselessness or basically sneering at you.

Sometimes the middle ground is buried in the baby-steps and ninjutsu material. Occasionally the baby steps grow into more advanced help, but most often the middle ground is sprinkled across both in a way that makes it really hard to find.

I’m just glad that search-functions are so common today, it mitigates the problem a bit.

I had Ubuntu on my laptop (before it died), since the laptop was really too slow for Windows XP. I find it really odd that programs don’t check for dependencies when they install. I installed Dwarf Fortress, which needed some sort of library. I never did figure out how to install the thing, but I realized that Angband came with the same library, so I installed that.

I had some really weird bugs. Ubuntu comes with a setting that turns the touchpad off when you aren’t using it. Often, it forgets to turn it back on. The menu bar was fixed on the left-hand side of the screen. Firefox would occasionally open full screen all the way underneath the menu bar. My zero key would stop working if I had Skype running. My escape key would stop working if I used Flash Player. I had to run a particular program that used Document Viewer first in order to get Document Viewer to start when I opened it from any other program. LibreOffice would corrupt a file if you did not shut down LibreOffice before turning off the computer. I made this mistake once. Thereafter, every time LibreOffice came on, it would prompt me to attempt to recover that file (the same one every time). Ugh. Surprisingly, this was an improvement over Windows.

I hate to be a stickler, but Flash and Skype are both poorly made, and only Adobe and Microsoft are responsible for the poor quality of their programs. For the other problems, my suggestion would be to search for and file bug reports, and follow up on all the troubleshooting in those reports. You or Ubuntu may have (through no fault of your own) triggered poor behaviour.

Sometimes Ubuntu just breaks on you, but a fresh install usually won’t have the same issues.

I don’t program so I can’t help with those problems, but I would suggest joining a general Linux support forum like LinuxQuestions. I find the default distro forums are often unhelpful and unwelcoming to people with advanced or complicated problems, although I can’t speak for the Linux Mint forums as I haven’t used them. Also, as you do program, whichever Linux flavour you use, you will have to install development packages. Unless you use Slackware or a source-based distro, in which case you will have to put up with hours of compiling stuff you want to install.

“Just add this dev location into your list of repositories…” I don’t see that advice a lot. It’s usually wrong. Sticking with the basic distribution’s repository is good advice until you know what you are doing. There are some exceptions. (Mainly, “How do I get multimedia stuff to work on Debian?” “Add the debian-multimedia repository; it can’t be in the main repository for licensing issues.” And yes, that’s a fairly big update.)