LAS VEGAS—The Facebook-led Open Compute Project has spent the past year building an “open” switch that can boot nearly any type of networking software, giving customers more alternatives to proprietary switch vendors like Cisco.

Intel, Broadcom, Mellanox, and Cumulus Networks jumped on board last November, contributing specifications and software that will bring the project closer to a finished design. They weren’t alone, though: Software-defined networking vendor Big Switch Networks, in January, donated what it calls Open Network Linux (ONL) to the project.

In an interview with Ars at this week’s Interop conference in Las Vegas, newly appointed Big Switch CEO Douglas Murray explained the company’s reasons for getting involved. As Big Switch noted in its announcement, ONL is “the Linux distribution for bare metal switches that runs underneath our commercial Switch Light OS. ONL’s goal is to give people deploying OCP [Open Compute Project] switches a simplified experience with a standard Linux distribution that comes prepackaged with all of the relevant drivers, loaders, and platform-independent goodness. If ONL is successful and becomes a popular distribution for open network hardware, it will also mean less integration work for hardware and software vendors and thus fewer bugs and other surprises once ONL-based products get to end customers.”

Big Switch CEO Douglas Murray. (Photo: Big Switch Networks)

A lot of "rudimentary work" goes into setting up switch software, Murray noted. Each time Big Switch gets its own Switch Light OS ready for new hardware (typically based on Broadcom chips), “there’s a bunch of stuff we have to do. It’s like rinse and repeat, rinse and repeat, but it takes time. So we actually took that element, packaged it into something called Open Network Linux, and donated it to OCP. We think it will get people to be able to move to bare metal faster because it streamlines the time it takes to do bring-up of a bare metal switch, and it also allows people to expand their hardware compatibility list more rapidly.”

Companies could create their own switch software to run on top of ONL, Murray said. “If you’re another vendor, even an incumbent, you could use that to accelerate how quickly you can get a product to market.”

ONL received patches and contributions from other vendors, including Cumulus Networks, one of Big Switch’s rivals.

“We’ve seen great support not only from OCP but from Broadcom, from… ODM [original design manufacturing] vendors like Accton and Quanta, they’re now on board with this. They’re helping and participating in the donation now on top of what Big Switch put in,” he said. “We think it will help get more broad adoption of bare metal overall.”

The Open Compute Project's blog said last week that its planned top-of-rack switch is closer to reality in part because of the contributions from Cumulus and Big Switch. There's still work to do, though: "The contributions from Cumulus and Big Switch provide a software foundation, but in order for the OCP switch to actually forward packets, we still need forwarding software on top of the hardware switch itself," the project noted.

Interface Masters already offers a switch based on Broadcom's proposed Open Compute specification, and ONL will be available for it. Any bare metal switch that supports the Open Network Install Environment contributed by Cumulus will run ONL.

The Open Compute Project was dealt a blow recently when its visionary, Facebook’s Frank Frankovsky, left to build an optical storage startup. Open Compute will live on under people such as Najam Ahmad, who runs Facebook's network engineering team and leads the Open Compute Project's network program. The project still has support from a variety of vendors, too. At Interop this week, Emulex announced converged network adapters for Open Compute hardware.

Switching gears

Still, Big Switch hasn’t reached the heights of success it expected when it was founded in 2010. The company shifted strategy in response, Network World’s Jim Duffy wrote last September. Big Switch “killed the first release of its Big Virtual Switch application and is now focusing on SDNs [software-defined networks] that merge the physical and virtual worlds,” Duffy wrote. “Big Switch is also now offering its products in bundles that run on commodity bare metal switches rather than piecemeal controller and monitoring and network virtualization applications that run on switches from ecosystem partners.”

Big Switch “also left the OpenDaylight open source SDN consortium and saw six of its partners—Juniper, Arista, and Brocade among them—jump ship as the company undertook this transformation over the past year.” Big Switch co-founder Kyle Forster said his company's “new focus on bare metal hardware was ‘at odds’ with its now former switch partners.”

Big Switch, which has raised $45 million from investors, sells the aforementioned Switch Light OS and software that controls the switches from a central management point. Its products implement the OpenFlow networking protocol and have plugins to connect to the OpenStack infrastructure-as-a-service software.

Big Switch’s customers number in the “double digits,” Murray said, with paid deployments ranging from $25,000 up to one customer that runs Big Switch software in 13 data centers at a cost of about $100,000 per data center.

Customers have plenty of options. Cisco just revealed the OpFlex protocol, an alternative to OpenFlow, and Dell teamed with Cumulus Networks to sell switches with Cumulus’ Linux network operating system.

Despite Dell’s partnership with Big Switch rival Cumulus, Murray is impressed with Dell’s work. “The only incumbent vendor moving away from proprietary hardware is Dell,” he said.

Bare-metal hardware is important, as it lets users change vendors without abandoning hardware, simply by removing software and replacing it with an alternative, Murray said.

“Companies like Google, Facebook, and Amazon, what they’re doing is taking bare metal switches, writing their own software onto those switches, and as they write that software, they are customizing it to their applications,” he said. “In large part what we’re trying to do is take what Facebook is doing and what Google is doing and bring that to every other data center.”

30 Reader Comments

It's interesting to note that "open" switching is already allowing innovative technologies to develop at faster rates. In broadcast video, Evertz and Arista Networks are pushing hard for IP-based uncompressed video routing in broadcast plants. This will be huge for the future of broadcast video infrastructure. Right now everyone has big "house" routers that stream single 3 Gbit/s video signals using coaxial cable. Video routers are hugely expensive devices. The ability to distribute switching and use IP for in-house and out-of-house routing promises to commoditize gear and simplify wiring, while allowing future scale-out without potentially having to do forklift upgrades if needs increase. 10 Gbit/s to video devices will support 6 standard 1080i60 streams, or 3 1080p60 streams. Inter-switch trunks can carry a number of video streams limited only by bandwidth. Operators that deal with lots of different video formats will be on this like white on rice when it's a matured technology. Fox is apparently driving pretty hard to develop it.
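
For illustration, a quick sketch of the arithmetic behind those stream counts, assuming the nominal SDI payload rates of roughly 1.485 Gbit/s for 1080i60 (HD-SDI) and 2.97 Gbit/s for 1080p60 (3G-SDI), and ignoring the overhead IP encapsulation would add:

# Back-of-the-envelope check of the stream counts above, assuming nominal
# SDI payload rates; real IP transport adds some encapsulation overhead.
link_gbps = 10.0
rates_gbps = {
    "1080i60 (HD-SDI)": 1.485,
    "1080p60 (3G-SDI)": 2.970,
}
for fmt, rate in rates_gbps.items():
    streams = int(link_gbps // rate)  # whole streams that fit on the link
    print(f"{fmt}: {streams} streams over a {link_gbps:g} Gbit/s link")
# -> 6 streams of 1080i60, 3 streams of 1080p60, matching the comment.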

I wonder where HP fits into all this. IMHO, they are way more relevant in the networking world than Dell. HP also utilizes OpenStack and OpenFlow in their SDN application (in fact, I believe they were pretty instrumental in demonstrating you can put OpenFlow on commercial equipment). The top-of-rack VM switch goes perfectly with their IRF architecture.

Anyway, the point is, it would serve HP well to put some resources into this effort.

Old school datacenter architectures have what's known as a "high oversubscription ratio" where the bandwidth through the core of the network is a tiny fraction of the aggregate bandwidth from the individual hosts/racks - ie, it's relatively expensive to move data between different regions of the datacenter.

These days, the big tech companies typically have some form of "network fabric" loosely based on a Clos network (http://en.wikipedia.org/wiki/Clos_network). This allows their datacenters to provide full bandwidth between any pair of hosts in the network (or between lots of arbitrary host pairs). Which is pretty cool because it becomes cheap to replicate data between hosts in different failure domains and because you can place roles anywhere without considering communication locality ... all of which helps to increase overall utilization and save a bunch of money. And these commodity network fabrics use less power and cost less money than the old centralized network gear from Cisco / Juniper. The only downside is that you have to write a custom network stack yourself and there's very little sharing / cooperation between companies.
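
As a rough sketch of that oversubscription arithmetic, here is a toy calculation for a hypothetical leaf switch in a leaf-spine (Clos-style) fabric; the port counts and speeds are made up for illustration:

# Toy oversubscription calculation for a hypothetical leaf switch:
# 48 x 10 Gbit/s host-facing ports and a variable number of 40 Gbit/s uplinks.
def oversubscription(host_ports=48, host_gbps=10, uplinks=4, uplink_gbps=40):
    downlink = host_ports * host_gbps  # aggregate bandwidth from the hosts
    uplink = uplinks * uplink_gbps     # bandwidth toward the spine/core
    return downlink / uplink

print(oversubscription(uplinks=4))   # 3.0 -> 3:1 oversubscribed ("expensive" east-west traffic)
print(oversubscription(uplinks=12))  # 1.0 -> 1:1, i.e. the full-bandwidth fabric described above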

Bringing the benefits of cross-company collaboration on open stacks to the network world should up the rate of innovation and make more robust tech available to all (sorta like how Linux is way better than Unix ever was, and at a fraction of the cost). Exciting times.

Are all the businesses finally learning that there is ROI to be had in public collaborative development? ROI that comes in the form of engineering instead of hard currency.

Company A donates development, community starts working, Company B notices and joins in. Bugs are fixed, features are added, security holes plugged... Company A and Company B now import the changes and benefit from the value added by both companies and the community. The community, in turn gets more software.

Company A and B update their products and polish and promote new versions to their customers.

No... It doesn't always happen that way and obviously not every project gets worked on 100% of the time. But it is a value-add move because companies shine in the areas of polish, consistency, and support. THAT is what people pay for; off-loading your boilerplate is a net positive all around.

Can the community polish? Sure, just look at Elementary OS... But, I think most of the community is happiest when it is doing a sort of ad-lib massive research project in every direction. If you give them something, they'll take every corner of the project in directions no one in the originating company might ever have thought of. Give them just a little time and they'll find out if any particular feature idea will: A. work, or B. be a waste of effort. Effectively turbocharging software evolution at the same time. Bad ideas are abandoned and good ones end up with a code base.

Maybe because the license is too restrictive?! It is a shame, since BSD variants have a better IP stack and way better overall security. Maybe Sony can make an adaptation of a BSD-based software switch, since they are doing so much work on PS4. Imagine a PS4-based switch at commodity prices with an interface-based backplane that allows variants of interface types to be slotted in via a PCIe horizontal backplane.

BSD has a porting stack for most software that is available for other *nix flavors. And, just about any significantly useful kernel feature appears to get trans-engineered to BSD. So, there is little to worry about there. To summarize: if it appears in open source and is wanted or needed by the BSD community, it will end up there. If not, then it's most likely to be wildly incompatible with either the BSD platform or the BSD ideology.

As for BSD vs Linux, I like to think of the two this way:

Linux is the Wild West: it plays fast and loose and usually ends up with new ideas sooner rather than later.

BSD is The Cavalry: it is polished and organized, and everything is tightly integrated, so new ideas are adopted more slowly than in Linux, but not necessarily more slowly than in any other large software project.

Neither of these things is wrong; they appeal to different people for different reasons, and even to the same people in different situations. Different tools for different jobs... License evangelists aside, the two camps typically refer to each other affectionately as "cousins."

So if the software is open, does it mean you can run it on any hardware you want?

If yes, why exactly would someone buy a custom proprietary more expensive hardware like Juniper or Cisco?

Because if we take a look at what they have in terms of CPU power, RAM, etc., they are overpriced packet-switching machines; even a cheap cell phone has more power for that money.

If this is true, what exactly is going to sell Cisco to their customers?

One of the things I would expect from a device like Cisco is security, like protection from DoS attacks, fast routing, switching, and of course stability and performance under heavy use. For this to work the hardware has to talk very closely to the software, but then again, one of the biggest criticisms of Cisco lately is its buggy and unstable software; in particular in their small business units, the reviews are not exactly great for overpriced routers and switches.

Don't get me wrong. I love Cisco, but if they have nothing better to compete with, this open switch software will be their doom, as people are going to buy their own hardware for the same money (10 times more powerful than any Cisco switch) and put the software on it.

It's interesting to note that "open" switching is already allowing innovative technologies to develop at faster rates. In broadcast video, Evertz and Arista Networks are pushing hard for IP-based uncompressed video routing in broadcast plants. This will be huge for the future of broadcast video infrastructure. Right now everyone has big "house" routers that stream single 3 Gbit/s video signals using coaxial cable. Video routers are hugely expensive devices. The ability to distribute switching and use IP for in-house and out-of-house routing promises to commoditize gear and simplify wiring, while allowing future scale-out without potentially having to do forklift upgrades if needs increase. 10 Gbit/s to video devices will support 6 standard 1080i60 streams, or 3 1080p60 streams. Inter-switch trunks can carry a number of video streams limited only by bandwidth. Operators that deal with lots of different video formats will be on this like white on rice when it's a matured technology. Fox is apparently driving pretty hard to develop it.

Interesting read, although isn't the reason they're expensive the hardware required to get the needed performance?

Are all the businesses finally learning that there is ROI to be had in public collaborative development? ROI that comes in the form of engineering instead of hard currency.

To me, open development is about increasing the whole market size instead of one's market share (by driving down the costs of the whole market). As it can still end up in a bigger market size for each participant, with very limited associated cost, it makes sense.

What is necessary is a disincentive for free-riding practices. These mostly take the form of: 1) putting the burden of maintenance on upstream 2) having a say (and thus possibly a head start) on upstream direction.

Maybe because the license is too restrictive?! It is a shame, since BSD variants have a better IP stack and way better overall security. Maybe Sony can make an adaptation of a BSD-based software switch, since they are doing so much work on PS4. Imagine a PS4-based switch at commodity prices with an interface-based backplane that allows variants of interface types to be slotted in via a PCIe horizontal backplane.

Except it's literally the opposite: the GPL is restrictive, and the BSD license literally doesn't fucking care what you do with the code at all.

Maybe because the license is too restrictive?! It is a shame, since BSD variants have a better IP stack and way better overall security. Maybe Sony can make an adaptation of a BSD-based software switch, since they are doing so much work on PS4. Imagine a PS4-based switch at commodity prices with an interface-based backplane that allows variants of interface types to be slotted in via a PCIe horizontal backplane.

GPL makes more sense in this instance because it keeps everyone honest -- no one vendor can take all the work and make a proprietary fork.

I know what an Ethernet switch does in my house, but what do these switches do, and why do they need an operating system? Why are they so special?

I've been wondering about this and I am not enough of a network guru to really know. But I do know that they are (a) very powerful, in that they have lots of ports and can shovel lots of data between them, and (b) very configurable, so you can create and delete networks and assign ports to networks at will; they can also work together and stitch different networks together.

What I find weird about all that is that things are being done in Layer 2 that TCP/IP was actually invented for. Why has it (sometimes) turned out better to have complicated Ethernet networks than to stitch simple networks together at Layer 3?

What I find weird about all that is that things are being done in Layer 2 that TCP/IP was actually invented for. Why has it (sometimes) turned out better to have complicated Ethernet networks than to stitch simple networks together at Layer 3?

You can't have layer 3 without layer 2. Ethernet forwards data on the basis of globally unique hardware identifiers, which TCP/IP does not have.

If you have a "layer 2" network it really just means that everything on that network is on the same subnet, without the infrastructure needing to make TCP/IP routing decisions. That also makes things a little faster (provided the network is small) performance-wise than having to have your network devices look up the routing table and THEN make the layer 2 forwarding decisions when they've arrived at the correct subnet. Such a flat network scales very poorly however.

One of the things I would expect from a device like Cisco is security, like protection from DoS attacks, fast routing, switching, and of course stability and performance under heavy use. For this to work the hardware has to talk very closely to the software, but then again, one of the biggest criticisms of Cisco lately is its buggy and unstable software; in particular in their small business units, the reviews are not exactly great for overpriced routers and switches.

One of Cisco's main selling points is that their kit makes the bulk of forwarding decisions (as well as access control and encryption in higher end boxes) in dedicated hardware. So since most traffic doesn't touch the CPU the performance is very good, in *most* cases. Things can start to get very ugly when your Cisco box starts getting flooded with packet types that need to be handled by the processor, eg ARP floods.

You can't have layer 3 without layer 2. Ethernet forwards data on the basis of globally unique hardware identifiers, which TCP/IP does not have.

What does addressing have to do with it? (That's a genuine question, I don't mean it in an aggressive way). Anyway MAC addresses are often configurable, and IPv6 addresses are supposed to be globally unique (except for the ones that are not).

Quote:

If you have a "layer 2" network it really just means that everything on that network is on the same subnet, without the infrastructure needing to make TCP/IP routing decisions. That also makes things a little faster

But my whole question was about these "subnets" becoming increasingly sophisticated -- they are nothing like the single BNC bus I used to know. If I understand the modern world correctly, an Ethernet packet can, in principle, be sent from a wireless device, copied onto copper wires and passed up to higher-level switches, then down through more switches until it reaches some Linux server which then bridges it onto a virtual device that is seen by a guest VM -- all in "layer 2", staying on the same "sub"-net.

That same information could make a similar journey if the wifi net, the different layers of wired networking and the internal network on the VM host were all *different* subnets but connected together at Layer 3. Then instead of "switches" we would have "routers". And I can well imagine an alien civilisation which did not invent clever switches, but instead invented efficient, easy-to-configure routers. For those aliens their "Layer 2" might be pure point-to-point, just enough to get to your nearest hub/switch/gateway.

To me, open development is about increasing the whole market size instead of one's market share (by driving down the costs of the whole market). As it can still end up in a bigger market size for each participant, with very limited associated cost, it makes sense.

What is necessary is a disincentive for free-riding practices. These mostly take the form of: 1) putting the burden of maintenance on upstream 2) having a say (and thus possibly a head start) on upstream direction.

I can only see the former in a situation where some entity is dropping tons of half-baked, unfinished, shoddy coding on upstream, in which case someone there just needs to say, "This is trash and we're not accepting it."

As to the latter, if everything is discussed at gmane, etc. that's pretty transparent. I don't know if you're referring to the divide over freedesktop or what. But, I'm betting that situation will sort itself out.

You can't have layer 3 without layer 2. Ethernet forwards data on the basis of globally unique hardware identifiers, which TCP/IP does not have.

What does addressing have to do with it? (That's a genuine question, I don't mean it in an aggressive way). Anyway MAC addresses are often configurable, and IPv6 addresses are supposed to be globally unique (except for the ones that are not)....

Ethernet MAC addresses are required by the specification to be globally unique. The fact that some aren't is a (necessary) bending of the rules to accommodate reality. The only legitimate reason to duplicate a MAC address is to minimize customer pain when adding a router to an Ethernet connection that is licensed to a particular MAC address -- usually by an ISP.
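
The exceptions are visible in the address itself. A small sketch, using the IEEE 802 convention that bit 0x02 of the first octet marks a locally administered address and bit 0x01 marks multicast:

def describe_mac(mac):
    # Inspect the first octet: 0x01 = multicast bit, 0x02 = locally administered bit.
    first_octet = int(mac.split(":")[0], 16)
    cast = "multicast" if first_octet & 0x01 else "unicast"
    scope = "locally administered" if first_octet & 0x02 else "globally unique (OUI-assigned)"
    return f"{mac}: {cast}, {scope}"

print(describe_mac("00:1b:21:3a:4f:5e"))  # unicast, globally unique
print(describe_mac("02:42:ac:11:00:02"))  # unicast, locally administered (the rule-bending case)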

You can't have layer 3 without layer 2. Ethernet forwards data on the basis of globally unique hardware identifiers, which TCP/IP does not have.

What does addressing have to do with it? (That's a genuine question, I don't mean it in an aggressive way). Anyway MAC addresses are often configurable, and IPv6 addresses are supposed to be globally unique (except for the ones that are not).

Quote:

If you have a "layer 2" network it really just means that everything on that network is on the same subnet, without the infrastructure needing to make TCP/IP routing decisions. That also makes things a little faster

But my whole question was about these "subnets" becoming increasingly sophisticated -- they are nothing like the single BNC bus I used to know. If I understand the modern world correctly, an Ethernet packet can, in principle, be sent from a wireless device, copied onto copper wires and passed up to higher-level switches, then down through more switches until it reaches some Linux server which then bridges it onto a virtual device that is seen by a guest VM -- all in "layer 2", staying on the same "sub"-net.

That same information could make a similar journey if the wifi net, the different layers of wired networking and the internal network on the VM host were all *different* subnets but connected together at Layer 3. Then instead of "switches" we would have "routers". And I can well imagine an alien civilisation which did not invent clever switches, but instead invented efficient, easy-to-configure routers. For those aliens their "Layer 2" might be pure point-to-point, just enough to get to your nearest hub/switch/gateway.

I think the biggest reason for the separation is you can have an Ethernet link and pass any layer 3 protocol over it. If a new one (say, e.g. IPv6) comes around, the hardware can start sending that without requiring changes. It's what lets you start up an IPX network if you'd like over your home network without requiring the router to support it as long as it supports moving around Ethernet packets (and doesn't filter unknown protocols, at least).

While these days such a benefit is diminishing as processing moves from hardware to software, it does make dumb switches much simpler. They only have to track one type of traffic and only keep a look-up table for Ethernet addresses and not IPv4 addresses, IPv6 addresses, and so on. This provides for fast packet forwarding by doing it all in hardware.
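
A minimal sketch of that separation: the Ethernet header names its payload's layer 3 protocol in the EtherType field (0x0800 for IPv4, 0x86DD for IPv6, 0x8137 for IPX), so the link layer needs no change when a new network protocol appears. The frame bytes below are fabricated for illustration:

import struct

ETHERTYPES = {0x0800: "IPv4", 0x0806: "ARP", 0x8137: "IPX", 0x86DD: "IPv6"}

def parse_frame(frame: bytes):
    # An Ethernet II header is destination MAC (6), source MAC (6), EtherType (2).
    dst, src, ethertype = struct.unpack("!6s6sH", frame[:14])
    proto = ETHERTYPES.get(ethertype, f"unknown (0x{ethertype:04x})")
    return dst.hex(":"), src.hex(":"), proto

header = (bytes.fromhex("ffffffffffff")      # broadcast destination
          + bytes.fromhex("001b213a4f5e")    # made-up source MAC
          + struct.pack("!H", 0x86DD))       # EtherType says: payload is IPv6
print(parse_frame(header))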

You can't have layer 3 without layer 2. Ethernet forwards data on the basis of globally unique hardware identifiers, which TCP/IP does not have.

What does addressing have to do with it? (That's a genuine question, I don't mean it in an aggressive way). Anyway MAC addresses are often configurable, and IPv6 addresses are supposed to be globally unique (except for the ones that are not).

Quote:

If you have a "layer 2" network it really just means that everything on that network is on the same subnet, without the infrastructure needing to make TCP/IP routing decisions. That also makes things a little faster

But my whole question was about these "subnets" becoming increasingly sophisticated -- they are nothing like the single BNC bus I used to know. If I understand the modern world correctly, an Ethernet packet can, in principle, be sent from a wireless device, copied onto copper wires and passed up to higher-level switches, then down through more switches until it reaches some Linux server which then bridges it onto a virtual device that is seen by a guest VM -- all in "layer 2", staying on the same "sub"-net.

That same information could make a similar journey if the wifi net, the different layers of wired networking and the internal network on the VM host were all *different* subnets but connected together at Layer 3. Then instead of "switches" we would have "routers". And I can well imagine an alien civilisation which did not invent clever switches, but instead invented efficient, easy-to-configure routers. For those aliens their "Layer 2" might be pure point-to-point, just enough to get to your nearest hub/switch/gateway.

I think the biggest reason for the separation is you can have an Ethernet link and pass any layer 3 protocol over it. If a new one (say, e.g. IPv6) comes around, the hardware can start sending that without requiring changes. It's what lets you start up an IPX network if you'd like over your home network without requiring the router to support it as long as it supports moving around Ethernet packets (and doesn't filter unknown protocols, at least).

While these days such a benefit is diminishing as processing moves from hardware to software, it does make dumb switches much simpler. They only have to track one type of traffic and only keep a look-up table for Ethernet addresses and not IPv4 addresses, IPv6 addresses, and so on. This provides for fast packet forwarding by doing it all in hardware.

As you point out, one of the big reasons why there is a layer 2 at all is because Ethernet isn't the only protocol out there. It's really only common in home/office/datacenter environments and has become more popular at ISPs in the last decade for transport. And back in the day Ethernet was just one of the contenders. Another reason could be that it's easier to make ASIC logic to forward frames instead of IP addresses. Though I don't think that this reason applies anymore, and you should be able to make a "switch" that only forwards based on IP, though it requires a lot of new logic on both operating systems and network hardware. Maybe it will come in the next few years; it would be nice to get rid of the pile of shit that virtualization has made us do since it came about and didn't do it properly from the start.

Forwarding is hardly moving to software. Any high-performance switch utilizes ASICs/NPUs to forward frames/packets. But it also uses a control plane that typically runs a *nix operating system (Junos is FreeBSD, NX-OS is Linux, IOS-XR is QNX, IOS-XE is Linux; Arista and others also use Linux). This is what basically programs the switch so it knows what to do. It gains information when the switch sends certain packets up to this control plane (like ARP, for example); it does some software logic and programs the forwarding engine using that information.
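
A toy model of that control-plane/data-plane split (illustrative only; real switches program the ASIC through vendor SDKs): exception packets get punted to software, which then installs entries the fast path can use.

hardware_fib = {}  # what the forwarding ASIC would hold: destination IP -> (port, MAC)

def control_plane_learn(ip, mac, port):
    # Slow path: runs on the switch CPU under its Linux/BSD/QNX control plane,
    # then pushes the learned entry down into the forwarding table.
    hardware_fib[ip] = (port, mac)

def fast_path(dst_ip):
    entry = hardware_fib.get(dst_ip)
    if entry is None:
        return "punt to CPU"  # unknown destination: exception traffic hits the control plane
    port, mac = entry
    return f"forward out {port} to {mac}"

print(fast_path("10.0.0.5"))                                 # punt to CPU
control_plane_learn("10.0.0.5", "aa:bb:cc:00:00:05", "eth3")
print(fast_path("10.0.0.5"))                                 # forward out eth3 to aa:bb:cc:00:00:05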

I think the biggest reason for the separation is you can have an Ethernet link and pass any layer 3 protocol over it. If a new one (say, e.g. IPv6) comes around, the hardware can start sending that without requiring changes. It's what lets you start up an IPX network if you'd like over your home network without requiring the router to support it as long as it supports moving around Ethernet packets (and doesn't filter unknown protocols, at least).

That makes sense, separate the concerns of local networking from world-wide networking. So it works the other way too. During the era of, say, IPv4 an individual organisation can mix and switch between different solutions in-house.

As you point out, one of the big reasons why there is a layer 2 at all is because Ethernet isn't the only protocol out there. It's really only common in home/office/datacenter environments and has become more

Certainly there would be more than one layer-2 solution, but they could all be "dumb" (i.e. they only support pretty simple topologies). My real question is why is at least one of them (Ethernet) becoming so "smart" when TCP/IP already is smart. As weblionx points out, one reason is that an office might want to run many layer-3 networks on a single complex topology and so they need a layer-2 that can handle that complexity.

Forwarding is hardly moving to software. Any high-performance switch utilizes ASICs/NPUs to forward frames/packets. But it also uses a control plane that typically runs a *nix operating system (Junos is FreeBSD,

I suppose for some purposes software-configurable hardware counts as "software". Like in video cards: those devices are doing more than they ever did before, but they are leaving their actual functions up to the programmers to define.

What I find weird about all that is that things are being done in Layer 2 that TCP/IP was actually invented for. Why has it (sometimes) turned out better to have complicated Ethernet networks than to stitch simple networks together at Layer 3?

What does addressing have to do with it? (That's a genuine question, I don't mean it in an aggressive way). Anyway MAC addresses are often configurable, and IPv6 addresses are supposed to be globally unique (except for the ones that are not).

The difference is that a MAC address is unique out of the box (provided that locally you have been following the rules for locally assigned MAC addresses). No configuration is required. Even with the various ipv4 and ipv6 auto-configuration mechanisms you still need a unique MAC address.

The reason more smarts are being used in Ethernet vs. IP is that some things are easier to do (configure), or easier to process quickly, in Ethernet. Because Ethernet is so simple compared to IP, it is possible to make small, simple processors to quickly move packets around.

To me, open development is about increasing the whole market size instead of one's market share (by driving down the costs of the whole market). As it can still end up in a bigger market size for each participant, with very limited associated cost, it makes sense.

What is necessary is a disincentive for free-riding practices. These mostly take the form of: 1) putting the burden of maintenance on upstream 2) having a say (and thus possibly a head start) on upstream direction.

I can only see the former in a situation where some entity is dropping tons of half-baked, unfinished, shoddy coding on upstream, in which case someone there just needs to say, "This is trash and we're not accepting it."

As to the latter, if everything is discussed at gmane, etc. that's pretty transparent. I don't know if you're referring to the divide over freedesktop or what. But, I'm betting that situation will sort itself out.

Ah no, I did not mean burdening upstream (which would be pretty poor practice indeed). I was speaking of the cost of maintaining downstream patches: if someone changes something upstream that impacts your patches, you need to make some further changes. If you had upstreamed these patches, then it would be the "someone" who would have to make these further changes. That's a strong business reason to contribute back to a project, even if you are not required to (e.g., companies working on permissively licensed projects).

For the second point, I was just stressing that companies like to have a say in the direction the projects they rely on will take, and having several paid devs on the technical committee, or just being one of the main contributors, certainly helps in that case. It can be used for good and bad; I was just listing the main reasons (in my opinion) why some companies go open source.