
Most of the proprietary code is interface. There is plenty of proprietary code that isn't interface, but not even much of that is OS material. OS X is layered: interface layer, application layers... and system layers. Contrary to what most believe, an interface is not an operating system... it's window dressing. As far as the guts of OS X are concerned, i.e. the operating system itself... it is sooooo BSD that it's what makes OS X UNIX. (I know, "BSD's not UNIX!"... and then, one day, it was.)

By "people" he means consumers. If OS X has no consumers then it doesn't exist so I'd say they are a tad bit more important (to Apple) than the *nix heads who dig around in the CL. Sure Carbon and Cocoa need Darwin to run but the people buying OS X by and large wouldn't if Carbon and Cocoa weren't there.

Are binaries built for any given BSD going to work on any other BSD? e.g. binaries from OpenBSD working in FreeBSD? How about differing processor architectures?

You're generally looking at compiling for target platforms. Where there is binary compatibility (such as OpenBSD providing FreeBSD binary compatibility) you're probably looking to include the expected FreeBSD libraries. Where possible, it's cleaner to compile for the intended target.

can you go to sourceforge, find some random application's binaries for bsd and install/run them without issue on mac osx?

Actually, yes. This has been the case for just about every version of OS X ever released, since before 2001. That's because it uses a BSD tool chain and APIs. Apple goes to great lengths to maintain such compatibility, and this should be pretty common knowledge if you've been paying attention on Slashdot.

You're missing out, and there is a lot of confusion in this thread. You don't need someone else's binaries. It doesn't matter if some ancient binary from some dead architecture no longer runs on a modern system if the source code is still available. Compile from source.

OSX = BSD, so yeah, it's been the year of BSD on the desktop for about a decade.

It includes part of the FreeBSD userland. I don't know if I agree that it makes it BSD. The FSF would probably agree though since they insist that Linux + the GNU userland should be called GNU/Linux, so they would probably argue that it should be called BSD/OS X if they were interested.

Not just userland. Much of the OS X kernel is derived from FreeBSD and NetBSD, too.

The problem, though, is that Apple has slowly stopped developing the Unix parts. They've literally deprecated fork, because they can't be bothered to make it work reliably with Core Framework. Neither are they tracking POSIX or BSD developments anymore, having stopped several years ago. OS X's POSIX support is a full release behind. They're compliant to the 2001 specification, but the latest is 2008, plus fixes. In a few years, their POSIX support will be about as useful as Windows', in terms of interoperability with modern FOSS.

Not just userland. Much of the OS X kernel is derived from FreeBSD and NetBSD, too.

Almost all of the BSD in the kernel is based on BSD 4.4-Lite2 and NetBSD; there are a couple of small sections, which ironically I wrote, that were pulled in from FreeBSD, like the BSD parts of the init code, and parts that generally everyone wrote, like chunks of the networking stack. I really wanted to change some of the VM APIs to be more like FreeBSD's, i.e. in-band errors in value returns should have been converted to values returned through variables passed by address, with out-of-band error returns, but this would have required work on the part of the Intel guys prior to the Intel code integration.
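A sketch of the two styles being contrasted (the function names here are made up for illustration, not actual VM API names):

```c
#include <errno.h>

/* In-band: the error shares the return channel with real values,
 * so some value (here -1) has to be sacrificed as a sentinel. */
long translate_inband(int key)
{
    if (key < 0)
        return -1;          /* ambiguous if -1 is ever a valid result */
    return (long)key * 2;   /* toy "translation" */
}

/* Out-of-band: the result comes back through a pointer parameter and
 * the return value carries only a status code, so the whole value
 * range stays usable. */
int translate_outofband(int key, long *out)
{
    if (key < 0)
        return EINVAL;
    *out = (long)key * 2;
    return 0;               /* success */
}
```

The out-of-band style is what makes it possible to drop address aliases later: a caller can never confuse a legitimate value with an error code.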

The problem, though, is that Apple has slowly stopped developing the Unix parts.

This is BS.

They've literally deprecated fork, because they can't be bothered to make it work reliably with Core Framework.

No, that's a combination of several factors, one of them being Apple's poor representation on the UNIX steering committee. Specifically regarding the committee: there's no such thing as a pthread_atexec(), and several other APIs would be necessary in order to make fork() deterministically useful in already multithreaded programs.
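For what it's worth, the fork() half of that lifecycle does have a hook: POSIX pthread_atfork() lets a library wrap fork() with handlers; the gap being described is that nothing analogous exists for exec. A minimal sketch, with handler names of my own invention:

```c
#include <pthread.h>

static pthread_mutex_t lib_lock = PTHREAD_MUTEX_INITIALIZER;

/* Take the library's lock before fork() so the child can never
 * inherit a mutex held by a thread that doesn't exist on its side... */
static void before_fork(void)     { pthread_mutex_lock(&lib_lock); }
/* ...and release it again on both sides afterwards. */
static void after_in_parent(void) { pthread_mutex_unlock(&lib_lock); }
static void after_in_child(void)  { pthread_mutex_unlock(&lib_lock); }

int install_fork_handlers(void)
{
    /* pthread_atfork() returns 0 on success, an errno value on failure */
    return pthread_atfork(before_fork, after_in_parent, after_in_child);
}
```

A library like GCD that starts threads behind the program's back has no equivalent way to register handlers around exec, which is the committee-level gap being complained about.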

The CoreFoundation factor is a combination of GCD, which starts and stops threads behind the program's back (and can't register exec handlers), and directory services, which for non-root processes starts another thread as a means of security partitioning to support everything DNS and network address related. It doesn't actually need to do this, and neither does GCD, but between that and the missing process lifecycle management functions in POSIX for threads, it's not supportable.

Basically, CoreFoundation is a piece of shit. It's now showing its initial lack of threads support in the design, and binary backward compatibility prevents it being redesigned. Catch-22.

The positive side of this is that people effectively have to use posix_spawn[p]() instead, which means they don't have to copy a massive fricking address space from one process to the other, which is expensive as hell in Mach, since they haven't adopted the red/black tree acceleration for ptov[] translations, mostly because there's too much code that relies on address aliases. In CS terms, the p:v mapping has a cardinality of 1:N instead of 1:1, which breaks code relying on ptov(). There wasn't a lot of it, but there was absolutely no hope of getting rid of the aliases without the VM API changes I mentioned previously.

So boo fricking hoo: use LaunchServices like you were supposed to be doing when using CoreFoundation, and quit using fork() directly, and your problems will go away.
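A minimal sketch of the posix_spawn[p]() route (the helper name is mine):

```c
#include <spawn.h>
#include <stddef.h>
#include <sys/wait.h>

extern char **environ;

/* Launch a child with posix_spawnp() instead of fork()+exec(): the
 * kernel never has to set up a copy-on-write duplicate of the whole
 * parent address space, and none of the fork-in-a-threaded-process
 * hazards apply. Returns the child's exit status, or -1 on failure. */
int run_command(char *const argv[])
{
    pid_t pid;
    int status;

    /* posix_spawnp() searches PATH for argv[0]; the NULLs are the
     * default file actions and spawn attributes */
    if (posix_spawnp(&pid, argv[0], NULL, NULL, argv, environ) != 0)
        return -1;
    if (waitpid(pid, &status, 0) < 0)
        return -1;
    return WIFEXITED(status) ? WEXITSTATUS(status) : -1;
}
```

The file-actions and attributes arguments (NULL here) cover most of what people actually did between fork() and exec(): closing or duplicating descriptors, resetting signal dispositions, and so on.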

Neither are they tracking POSIX or BSD developments anymore, having stopped several years ago.

The only "tracking" of BSD kernel code that happened since 2003 that I'm aware of (but I left Apple in 2011) was in the networking code, and there was precious little of that, since Apple and BSD selected different concurrency models. BSD's is arguably more scalable, if you have unlimited memory to burn; otherwise you want XNU's. You probably want XNU's anyway, particularly if you want to take cores on- and offline out from under the CPU for power management or thermal budget reasons, and the scalability issues can be addressed.

OS X's POSIX support is a full release behind. They're compliant to the 2001 specification, but the latest is 2008, plus fixes. In a few years, their POSIX support will be about as useful as Windows', in terms of interoperability with modern FOSS.

That's just asinine.

First off, the next jump to standards conformance, if any, will be unlikely to be 2008, since it's not going to be widely adopted by industry until IBM and Oracle can get their shit together, which takes more than 5 years, since it includes a migration strategy for mai

The UNIX side of OS X has been just fine in the recent releases. The problems with OS X are:

1. It doesn't have a real package management system.
2. Long turnaround time for security patches. They should stop this insane "we have to wait until 10.x.y until we ship this patch even though it's ready." A proper package management system would certainly help there.

The UNIX side of OS X has been just fine in the recent releases. The problems with OS X are:

1. It doesn't have a real package management system.

It's called "drag and drop"; properly written applications are self-contained in directories represented by the application icon. If you follow the Mac model, and don't try to install your files all over from hell to breakfast, there's no issue. This is why a lot of demo machines in stores now have epoxy in their USB ports (e.g. the ones at Fry's), since people were stealing already activated copies of Microsoft Office by plugging in their iPod shuffle or other thumb-drive and just dragging it over.

If you want to install all over from hell to breakfast, there's always http://www.macports.org/ [macports.org], or you can make a 5 line change to the FreeBSD ports management system to use "${MAKE}" instead of "make", and deal with two "echo" compatibility issues, which are fixed by using "printf" instead, and almost all of the FreeBSD ports system "just works". I gave those patches back to FreeBSD (via Jordan Hubbard); not sure if they made it in.

Note that another benefit of the Mac model is that you can have different applications requiring different versions of libraries, and nobody cares except people already short on disk space. Duplicate block coalescing can fix that, but only works for ZFS, which is an add-on.

2. Long turnaround time for security patches. They should stop this insane "we have to wait until 10.x.y until we ship this patch even though it's ready." A proper package management system would certainly help there.

This is only an issue for security problems in the kernel; otherwise, Apple ships regular security patches for all user space components. Leave Software Update turned on and it's automatic; it will pop up and bug you to install updates, since they usually mean an application or system restart (depending on what layer the installs happen at).

For the kernel, this is really a management/resources/security-guys-do-not-push-hard-enough problem; the current development model for the Mac OS X kernel is "Scrum", which is good if you want to keep an organ bank of coders around to throw at the next iPhone/iPod Touch/iPad problem, and less good if you actually want to make substantive changes or progress in kernel technology, so it's mostly on management's back. I agree this is a problem.

It's called "drag and drop"; properly written applications are self-contained in directories represented by the application icon.

That's all fine-and-dandy until you need to keep track of the different versions of library packages and make sure they're all up-to-date and not conflicting. Do you want your system handling patches and updates, or do you want to manually go through an infinite number of directories and waste your time?

I think the idea is that you don't do that. Each application is supposed to use the system software as far as possible, and if an application vendor ships a third party library as part of their application bundle then that vendor is supposed to maintain it when needed. Won't be perfect from a storage efficiency point-of-view but each application will be more or less independent.

It's called "drag and drop"; properly written applications are self-contained in directories represented by the application icon.

That's all fine-and-dandy until you need to keep track of the different version of library packages and make sure they're all up-to-date and not conflicting.

You don't need to worry about different versions because there is only one version of the library associated with the app: the one in the app bundle.

The way to keep your app up to date is to drag a new version over, or to have the app insert itself into the Software Update process, or to have it maintain its own update checks and cycle. The method to do this is documented.

By definition, since all libraries are private to the app, they are non-conflicting. That's the reason

Self-contained applications is a nice idea, but it makes sense primarily with non-free binary only applications which the user or the OS distribution can't build from source code. If you use a system like Debian stable then you will rarely have any problems with the package manager, and as long as you're using software which is distributed as part of Debian you can be assured that a maintainer has looked at it, that it is licensed under a DFSG compatible license and will most likely not harm you.

Their initial select() implementation, which decremented the remaining time in the timeval structure to account for elapsed time, shows that having an API is not the same thing as having a conformant API.

The current SUS [opengroup.org] allows that ("Upon successful completion, the select() function may modify the object pointed to by the timeout argument."), and that dates back at least as far as SUSv2 [opengroup.org].

It's still a rude surprise to people used to the BSD-style behavior in most other UN*Xes, and writing code that only sets the timeout before entering a select loop, though (that one bit me ages ago).
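A sketch of the defensive pattern (reset the timeout inside the loop rather than before it; the function name is mine):

```c
#include <errno.h>
#include <stddef.h>
#include <sys/select.h>

/* Wait for fd to become readable, with a per-call timeout. The
 * timeval is reinitialized on every pass because SUS allows select()
 * to modify it (Linux does; traditional BSDs leave it alone), so
 * setting it once before the loop is the classic portability bug.
 * Returns 1 when readable, 0 on timeout, -1 on error. */
int wait_readable(int fd, long timeout_sec)
{
    for (;;) {
        struct timeval tv;
        fd_set rfds;

        tv.tv_sec = timeout_sec;   /* reset each iteration, not once */
        tv.tv_usec = 0;
        FD_ZERO(&rfds);
        FD_SET(fd, &rfds);

        int n = select(fd + 1, &rfds, NULL, NULL, &tv);
        if (n > 0)
            return 1;
        if (n == 0)
            return 0;              /* timed out */
        if (errno != EINTR)
            return -1;
        /* interrupted by a signal: go around and wait again */
    }
}
```

Note this variant restarts the full timeout after a signal; code that needs an absolute deadline has to compute the remaining time itself instead of trusting the (non-portable) decremented timeval.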

They've literally deprecated fork, because they can't be bothered to make it work reliably with Core Framework

fork() deserves to be deprecated. The API originates with old machines that could have only a single process in core at a time. When you wanted to switch processes, you wrote the current process out and read the new one in. In this context, fork was the cheapest possible way of creating a new process, because you just wrote out the current process, tweaked the process control block, and continued executing. On a modern machine, it requires lots of TLB churn as you mark the entire process as copy-on-write (in
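The copy-on-write behavior can be observed from userland; a small sketch, assuming a POSIX system (function and variable names are mine):

```c
#include <sys/wait.h>
#include <unistd.h>

static int value = 1;

/* After fork() the child sees a logically private copy of the whole
 * address space; physically the pages start out shared copy-on-write,
 * and the child's first write to a page faults and gets a private
 * copy. The parent's data is untouched -- the per-page bookkeeping
 * needed to arrange all this is the cost described above.
 * Returns 0 if the demo behaves as expected. */
int cow_demo(void)
{
    pid_t pid = fork();
    if (pid < 0)
        return -1;
    if (pid == 0) {
        value = 99;                /* write faults into a private page */
        _exit(value == 99 ? 0 : 1);
    }
    int status;
    if (waitpid(pid, &status, 0) < 0)
        return -1;
    if (!WIFEXITED(status) || WEXITSTATUS(status) != 0)
        return -1;
    return value == 1 ? 0 : -1;    /* parent's copy is unchanged */
}
```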

They are still running 5+ year old linux_base-f10. You would have thought by now they would have updated it. Check out their base system installations. Virtually all of them are old versions. You would have thought for a new release, they would have updated their application.

Entrenched market share leaders get comfortable and a bit arrogant, particularly in technology. Things are done a certain way because that's the way they've always been done, and anyone who thinks differently is a clueless moron.

I don't think the Linux kernel and GCC are exceptions to this rule, which has been proved over and over and over again.

Docker is based on LXC [sourceforge.net] (Linux containers), so it's not available on FreeBSD. But a port or a similar project could be built on FreeBSD Jails [freebsd.org]. It also uses aufs and cgroups, but I think FreeBSD has similar tools too.
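For comparison, a jail on FreeBSD is declared with a few lines of configuration; a hypothetical /etc/jail.conf sketch (the jail name, paths, and address below are made up):

```
# /etc/jail.conf -- minimal single-jail sketch
www {
    path = "/usr/local/jails/www";     # jail root filesystem
    host.hostname = "www.example.org";
    ip4.addr = "192.0.2.10";           # address the jail may bind
    exec.start = "/bin/sh /etc/rc";    # boot the jail's rc system
    exec.stop  = "/bin/sh /etc/rc.shutdown";
    mount.devfs;                       # give it its own devfs instance
}
```

What jails don't give you out of the box is the image/layering workflow (the aufs part) or fine-grained resource accounting (the cgroups part), which is where a real Docker port would need the extra work.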

As much as I love FreeBSD, I have stopped using it after their servers got 'served' with the use of 'legitimate' ssh keys. http://www.paritynews.com/2012/11/19/487/two-freebsd-project-servers-hacked/ [paritynews.com]
Given that FreeBSD never released a good audit report after that hack, I can only be more worried.
Add to that, now that we know the NSA had access to the certs from DigiNotar and might have done or paid for the DigiNotar hack, I think one might as well use Windows. I hate to say it, but the complete codebase from FreeBSD needs to be checked. Again and again. Preferably with help from OpenBSD.

Given that FreeBSD never released a good audit report after that hack, I can only be more worried.

Add to that, now that we know the NSA had access to the certs from DigiNotar and might have done or paid for the DigiNotar hack, I think one might as well use Windows. I hate to say it, but the complete codebase from FreeBSD needs to be checked. Again and again. Preferably with help from OpenBSD.

Maybe you should read over the report from freebsd.org: http://www.freebsd.org/news/2012-compromise.html

1) It was a single ssh key that was leaked.
2) The accompanying user rights allowed access to two build server nodes, which they took offline, and they compared the data to a known good offline copy.
3) They pulled the 9.1-RELEASE packages they couldn't verify.
4) The compromised user only had access to the build system for binary packages. The BUILD system (and third party at that). NO access to the source repositories (except checking out, like you and me).
5) If you didn't use the 3rd party binary packages you weren't affected at all. (And who uses binary packages with FreeBSD anyway?)

I don't know how the infrastructure is organized in your company, but usually there is user management on a server if you hand out ssh keys, and only a few users (if any) are allowed to sudo su. IF there is sudo at all. That isn't a desktop box where every user added gets an entry in sudoers to su.

Someone else has already pointed you at the report on the compromise. One of our developers had a VM that turned out not to be as secure as he thought, and which had his ssh keys (with no passphrase) that gave access to the FreeBSD cluster machines. As soon as the attack was noticed (very quickly, owing to one particularly paranoid developer), the affected machines were taken offline. Bringing things back online took a long time, for several reasons:

You are cherry picking. Theo released that mail to the open on day 1. The guy who made the accusations got a bit scared after a reporter did a fact check, and asked for it to be taken down until he could answer. The reporter did take it down, but we are still waiting for that answer: he promised to respond to the reporter's question and never did. The backdoor would have been installed in 2001, and in SSH only. While it could be true... now, 12 years later, that backdoor still isn't found. And it had been checked by everyone that mat

I was wondering this too, but upon further research, VPS adds things jails can't do, like migrating from one physical machine to another without restarting programs, and possibly even keeping sockets open. It has a mechanism to transfer an image of the disk state too.

Apparently, VPS also allows sharing of several different types of resources to lower memory usage, and it supports distinct pids (init is pid 1 in each one, for instance).

It looks like the new plan is if you want to virtualize freebsd instanc

1. No mention of VPS (virtualization containers) is made in the features list, furthermore vpsctl doesn't appear to be present on my test install. Are you sure it's part of FreeBSD 10? I really hope it is, the documentation implies that you can have nested containers with no performance penalty. How is networking handled inside these containers?

2. I'm assuming jails still exist in FreeBSD, how do they relate, or fit in, with VPS and Bhyve?

I tell you what grinds me...the installers. The best one for "just get it done" is PC-BSD - but even it can be flaky.

The FreeBSD installer is fine if you aren't trying to do anything fancy; otherwise you're dropping to a terminal and doing a little manual work. The plus side is FreeBSD has what I consider to be the best documentation of any UNIX out there, with the possible exception of Arch.

PCBSD is an unmitigated disaster though. I'd stay far away from that pile of shit. The installer is flakey, the distro is bloated as fuck and the community is fucking worthless. Being based on FreeBSD you would think the documentation wo

Still, with PC-BSD you can install a FreeBSD with ZFS in 5 minutes and 10 clicks. In 1 minute you can make a pure FreeBSD jail or even a Linux one, and still watch YouTube and play games on that box.

The problem is not that you're not able to do it. The problem is that it's fucked up. You need to do it the one and only way, while the GNU tools are much more flexible. Of course, you can install the GNU stuff. I'm only talking about the default installed stuff, which are really old Unix tools.

I can (unfortunately) second this. When I tried to install on my netbook and asked for help, I got many variations of RTFM... which if I could find one that was written in some semblance of English I would. Most of the BSD documentation I've seen is... somewhat less than user friendly.

Exactly. The handbook is awesome. (I didn't even need to use it to get up and running, because bsdinstall (the installer) is pretty self-explanatory to anyone who has been around any *nix systems for a while.) You will want a copy of the manual somewhere handy.

I haven't touched FreeBSD in years, but recently wanted to play with it again. It was awesomely well documented, both with a manual and several guides, not to mention a zillion Google hits. I didn't need to bug anyone about anything, because all th

Everything you say is true. But are the Linux developers really all that different? There have been some epic flamewars on LKML and plenty of RTFM...

The fact is OS developers are generally extremely smart, "self-confident" (I'll try not to say "egotistical" or "arrogant"), and possibly somewhat socially awkward/blunt. The only reason you don't get that from Windows and OSX is that MS and Apple hide their kernel developers away from public debate:)

These people should not be answering questions from rank newbies. They have day jobs, and spend a hell of a lot of time maintaining the software. They just don't have enough hours in the day to handle questions from every passing neophyte.

Yes, and there are ways of saying that to someone that are not condescending, rude, or just plain assholish.

Though you know, some people in fact DO like helping others, even newbies (sometimes we call those "teachers", and sometimes they are just good people). But even if someone doesn't want to help, "please use XXX list for this question" is really not any harder to type than "stupid question, stop posting here and RTFM".

Here's a response: "delete message". Or for those really advanced users: "delete and filter". The only reason to reply with an insulting comment that doesn't even answer a possibly ignorant question is a misplaced need to abuse or bully someone. It doesn't really solve anything, and frankly I don't even know why you'd want to make the effort if replying is such a burden.

Despite what you think, reputation counts. A lot of good open source projects have been derailed, unnecessarily forked, or abandon

I think one of the problems might also be that they are seeing the same damn questions asked over and over but slightly different and the user isn't able to connect the slightly different question to the published answer already given somewhere.

I used to do some support on IRC with a Linux group catering to a specific distro and I saw this all the time. I eventually created macros to ask the questions just to get to the point of the problem because of the 10,000 different ways someone states it. Often the s

Computers are complex tools. The more operating systems try to hide that, the dumber the users get... it's a race to the bottom.

This antipathy towards learning curves is a big part of today's society (the idiocracy). Not only do people abhor learning, their superiors refuse to give them the time necessary to do it... Thus we end up with desktop operating systems that work like tablets. Everyone now thinks all computers should work like smartphones, no matter what they need the machine for. Complex procedures do not work like they do in Star Trek. Deal with it.

This antipathy towards learning curves is a big part of today's society (the idiocracy).

I've always loved to learn, but one thing I hate is having to relearn. If a new tool has obvious advantages over an old tool I'm happy to learn the new tool: I'm lazy. I don't live to work, I work to live. I didn't mind learning Windows because it had obvious advantages over DOS. I didn't mind learning Linux because Windows was a PITA.

One reason it was such a pain was change for the sake of change, which Windows is even

These people should not be answering questions from rank newbies. They have day jobs, and spend a hell of a lot of
time maintaining the software. They just don't have enough hours in the day to handle questions from every passing neophyte.

There are other mailing lists for this.

I understand that, but still... there are some cases in which there is no one with a deep technological understanding of some component when I post in the forums of some distro. And on the other hand, as you say, the professional developers don't have time to answer all the peasant questions. Ah well.

Yes, I ran into that problem in the past as well but then I realized I was emailing the FreeBDSM mailing list. Needless to say, I've since switched to Linux and I'm being fulfilled in ways you can't imagine.

Personally I don't care; nearly all developers, no matter what operating system they favor, are not-invented-here types, and assume anyone who has a problem with their software suffers from PEBKAC. Interact with developers as little as possible. If you can't replicate the problem twice by yourself, then the problem isn't reproducible.

As far as why I'll pick FreeBSD over Linux. Out of the box, I can't cripple freeb

Perhaps you deserved it, because I for one have no idea what you are talking about. I am new to FreeBSD as of the last few weeks and have personally found the community to be quite helpful, especially the forums on FreeBSD.org. I recently asked a very stupid question about something obvious I had overlooked and no one flamed me at all - in fact they were helpful and didn't call me out on it. Granted, when it comes to their official forums there are rather extensive rules

I hate to respond to trolls, but in this case I think the record needs to be set straight. I have used FreeBSD for about 6 or 7 years now, having come off about 12 years of Slackware. In that time, I have had to ask for help or clarification multiple times, both on the mailing list and the IRC channel. And I can honestly say I did not get one flippant or rude remark, and some people actually did try to help, unlike with some other open source software (non-OS) where questions were met with crickets.

* The OS and the applications are separate. This means that you can have up to date versions of your desktop and all applications on a stable core OS. On Debian you would either have to build things yourself or upgrade your entire system to testing or sid.
* A mature ZFS implementation. You can use ZFS-on-Linux or Btrfs for similar functionality on Debian, but it's often not considered to be as production ready as ZFS on FreeBSD. Also, for license compatibility reasons, ZFS-on-Linux will never ship as part of a GNU/Linux distribution and will have to be installed separately.

Disadvantages:

* Not as good hardware support. Usually works well on desktops and servers, but it can take some tweaking to get it to work well on modern laptops.
* Some software does not run on FreeBSD. Very uncommon for open source, but can be a problem if you're running non-free software. You can mitigate this by installing the Linux compatibility layer on FreeBSD.

There's no single killer advantage, but the general trend I've seen with FreeBSD, after having used it since 2001 (Linux since 1996), is that whilst some things seem to take longer to be supported in FreeBSD, once implemented, the implementation is stable. As in, it isn't junked and replaced with something completely different 18 months later. Linux tends to be in much more of a state of flux - this can be good if you want to run the very latest, however if you want something a bit mo

The general stance is that it's not optimal but also not a critical problem, since it's not part of the OS and is only used to enable certain hardware. There's no plan to remove them as far as I know, although there is work going on in tweaking the build system so that you can build a FreeBSD distribution where they are omitted.

If you're new to unix, FreeBSD 10 alpha is definitely not what you want. By default, FreeBSD installs no GUI, and v10 is not production ready yet - it is a development snapshot. If you want something equivalent to Ubuntu, check out PC-BSD, which is the user-friendly desktop oriented variant of FreeBSD.