
They now use Linux, BSD, Solaris, MS, Apple (KDE desktops) and many more. Also, Ubuntu no longer supports PowerPC, as Canonical lost that battle, and a lot of these server farms are PowerPCs. "Cloud" is just a buzzword - really just a bad joke - and due to the NSA a lot of cloud servers are getting dropped. Also, the link thecloudmarket.com only shows open cloud servers ("EC2 Statistics", Amazon servers).

What you linked said this, but did not have any links to prove it:
"He notes that his farm (he calls it a "render wall") is in fact an Ubuntu Server farm, and not RHEL as he has seen reported in the media"

You really are a moronic troll

The whole thing was in reference to what Dustin Kirkland heard at a 45-minute Paul Gunn talk given in 2010 at the linux.conf.au Systems Administration Miniconf in Wellington:
Weta Digital - Challenges in Data Centre Growth
or "You need how many processors to finish the movie???"
Paul Gunn
Paul's been at Weta nine and a half years, focusing on the machine rooms.
Weta Digital is a visual effects house specialising in movie effects. Founded in 1993; Heavenly Creatures, Contact, etc. were done on proprietary systems, with no render walls.
Paul covers 2000 onwards, moving towards generic hardware, Linux on the desktop, Linux render wall: LoTR, Eragon, District 9, Avatar, etc, etc.
If it took 24 hours to make a movie, visual effects would start at lunchtime. About 2 pm the first production footage would arrive, to be scanned or copied; then come hiring artists, sending work to the client, taking feedback, and applying changes. By 11 pm the effects team should be finishing; after that, colour grading and sound happen.
(Some directors run a bit later, e.g. LotR).
There's a pre-vis process, where the director gets low-res renders by way of preview before shooting starts, to get a feel for shots before committing to the expensive filming.
Final delivery is per frame: 12 MB, 2K x 1.5K resolution. 170,000 frames for a decent-length movie.
The average artist has 1-2 high-end desktops: 8 cores, 16 GB of RAM currently, with whatever is the graphics card of the day. Most artists are on Linux - 90%. Some packages aren't on Linux, e.g. Photoshop, but most artists use Maya (70%), Shake and Nuke for 2D work, and Alfred (by Pixar) to manage jobs hitting the render wall.
Render wall: Originally used pizza boxes, moved to blades for efficiency. HP Blades, 8 cores, 24 GB of RAM currently. Linux right the way through - Ubuntu - upgraded every couple of years.
Storage is about a petabyte of NFS disk, with a few more petabytes of tier 2 SATA disks.
Not a large single image - isolated nodes running farm-style. Nodes are sacrificial: they can lose a node and restart tasks as needed; nodes pluck jobs off a queue the artists submit into. Jobs can run from 10 s to 48 hours.
At peak there were 10,000 jobs and a million tasks a day last year.
The rendering pipeline hasn't changed much, but the hardware has. The first render wall was purchased in 2000: 13 machines, 100 Mb ethernet, 1 GB RAM, a 2 Gb uplink in the rack. The current render wall is 3700 machines, 1 Gb per machine, with 500 10 Gb ports active servicing the wall/storage.
The machine room used to be a single entity with everything crammed in; now there are 6 machine rooms and 7 wiring closets. One room has a "hot" room, a storage room, and other servers.
2000 - 2003 went to 3,000 cores for LotR; 2009 was over 35,000 cores.
Cold racks are 7 kW, the hot racks are 22-27 kW; heat is the biggest challenge. Thermal load for the render wall in total runs at 700 kW for almost the entire week, with a couple of dips on Sunday evening and on Mondays.
Started with standard machine room design: raised floors, alternating hot/cold aisles, cooled by standard computer-room aircons; this was good for 2-3 kW per square metre. The current hot room is 6.5 kW per square metre.
The first room was pre-existing, with 20 cm of floor space shared between network, power, and cold air, replete with hotspots. The smoke detection system was triggered by the compressor units losing gas, and then the fire service was called.
The second-edition machine room was ready for Kong: 60 cm floor, high ceiling, 4 x 60 kW aircon units. All services above-floor; nothing under it except air and flood detectors. It was a fine room, but it couldn't scale, since it was limited by how far air could blow under the floor.
Third-gen machine room: building started in 2008 and took 9 months, built into a pre-existing building. Concrete plinths, 1.2 metres, for earthquake retention. 30 cm pipes for services; 6 rows in the room, for 60 cabinets. Core service pipes branch off. Rittal racks, sitting 1.2 m up on the plinth. All services are above the cabinets. The fire sensors can shut down individual racks; incidentally, it's wired so that two sensors must go off to declare a fire, not one. Data is pre-run into the racks for ease of build. Power comes top-down, too.
Racks are fully enclosed once the door closes. Hot air comes out at about 40 degrees; the doors are water-cooled, and air exits the rack at 20 degrees. 1800 litres of water per minute pump through the racks at peak load.
Seismic plates: for the low-density room this was more of a challenge, since they weren't sure what they'd install; the solution there is a steel table. A plenum above the room extracts hot air above the racks.
Main switchboard: 3 transformers feeding in, big UPS.
Ladder racking: over a kilometre at the end of the first stage, up to two kilometres now; managing the navigation of the racks is quite a challenge.
Plant room: rather than using individual compressors in each room/rack, the compressors have been centralised into the plant room, which pumps water. Chillers: 2 x 1 MW and 1 x 500 kW, providing a lot of redundancy. The chillers are incredibly quiet compared to most machine rooms - magnetic bearings, so reduced wear and tear on the compressors.
Efficiency: cooling at the rack means they aren't worrying about ambient cooling; the heat stays trapped in the back of the rack. Water is more efficient than aircon units. Free cooling: rather than pumping the heat out and chilling it elsewhere, the water is pumped to the roof and run across it, getting natural cooling.
The render wall is deliberately run hot: 20 degrees is the norm, the wall is at 25 degrees, with no noticeable performance or lifespan impact. HP agreed to warrant this. It saves big money - tens of thousands per month per degree.
Traditional machine rooms: $1 on servers = $2-$3 on plant; in the right weather, the ratio here is 1:0.23.
Lessons

Make room to expand. You can never have too much space. Don't put anything under the floor except air, flood sensors, and maybe lights. There's more volume under the floor than water in the plant, making the environment flood-proof. Plenums to manage/direct hot air. Water cooling is magical. Free cooling is good - anywhere with a bit of bad weather is a win. Run servers a bit hot; not so much your discs.
Challenges (or mistakes)

Ratios of server to storage to render wall. Structured cabling - they really regret not doing it; JFDI regardless. Supersize racks. Sharing the humidifier between the hot room and cold room was a mistake, since the hot room can't run as hot as it might.
Future

Still have space to expand. Water cooling for storage? Vendors are increasing density. Run the render wall @ 28 degrees. DC power? Maybe, but there are better options right now. Let someone else be first.

The whole thing was in reference to what Dustin Kirkland heard at a 45-minute Paul Gunn talk given in 2010 at the linux.conf.au Systems Administration Miniconf in Wellington
Render wall: Originally used pizza boxes, moved to blades for efficiency. HP Blades, 8 cores, 24 GB of RAM currently. Linux right the way through - Ubuntu - upgraded every couple of years.

If they're the IBM-powered server blades, Ubuntu no longer supports PowerPC at all, and your post is from 2010, so this is no longer relevant in the "World of Server Farms"™.

"Rander wall: Originally used pizza boxes, moved to blades for efficiency. HP Blades, 8 cores, 24 GB of RAM currently. Linux right the way through, upgrade every couple of years, Ubuntu." was this a Q&A? it's missing more Text them my Comments

Here's a reference to what I heard: BO$$ is really Mark Shuttleworth's long-lost son.

1.) True.
2.) This has more shades of gray than black or white.
3.) The kernel has many layers and only one of them talks to the hardware.
4.) Partially correct.
5.) It is exactly in the middle, no more, no less.

To give a short answer on 2 and 4: it is true that a display server doesn't query the graphics card / input directly in C/ASM/registers the way a kernel module would, but it is also true that for many operations that module just pipes the raw hardware data to an upper layer, and you are still close enough to the hardware to need hardware/kernel-layer-compliant data structures. Sure, the kernel provides small layers in between for some normalization of known operations, but even so it is closer to the kernel/hardware than to high-level userspace.

To explain 5: Wayland and most display servers have two faces, which is why they are in the middle. The first face is a hell of a lot closer to the kernel/hardware than any toolkit will ever be, and the other face is the definition of the user-accessible API, which abstracts the whole lot of kernel/hardware operations for use by toolkits - and that, by definition of abstraction, is very far from the kernel/hardware.

Really, don't even try: these are web forums. A real post on C and C++, the kernel, systemd, Wayland, etc., and the level of what is what, would be a whole day of wasted time, if not 2 or 3; there are many blogs on it. And who's really on these forums anyway? Ubuntu zealots? I see about 5 of them jump out any time you say anything about Ubuntu, Mir, etc. - little babies who can't take facts.


Well, last time I checked, C++ compilers kept complaining about using "deprecated functionality" when given well-written C code

"Deprecated" means old, with a better version available - not invalid.
Most C code is valid C++; however, it's not the recommended approach, as C is
very old and newer, better facilities have come into use.

Originally Posted by jrch2k8

1.) True.
2.) This has more shades of gray than black or white.
3.) The kernel has many layers and only one of them talks to the hardware.
4.) Partially correct.
5.) It is exactly in the middle, no more, no less.

To give a short answer on 2 and 4: it is true that a display server doesn't query the graphics card / input directly in C/ASM/registers the way a kernel module would, but it is also true that for many operations that module just pipes the raw hardware data to an upper layer, and you are still close enough to the hardware to need hardware/kernel-layer-compliant data structures. Sure, the kernel provides small layers in between for some normalization of known operations, but even so it is closer to the kernel/hardware than to high-level userspace.

To explain 5: Wayland and most display servers have two faces, which is why they are in the middle. The first face is a hell of a lot closer to the kernel/hardware than any toolkit will ever be, and the other face is the definition of the user-accessible API, which abstracts the whole lot of kernel/hardware operations for use by toolkits - and that, by definition of abstraction, is very far from the kernel/hardware.

Yes. However, none of this makes C++ an invalid choice, which was my point. Of course
a display server is closer to the hardware than the stuff above it, but it isn't close enough
to require poking hardware registers or anything like that.

C++ gives you much better ways to structure your code and makes it easier to abstract
it for multiple back-ends, so in many cases it is a wiser choice.

Originally Posted by LinuxGamer

Really, don't even try: these are web forums. A real post on C and C++, the kernel, systemd, Wayland, etc., and the level of what is what, would be a whole day of wasted time, if not 2 or 3; there are many blogs on it. And who's really on these forums anyway? Ubuntu zealots? I see about 5 of them jump out any time you say anything about Ubuntu, Mir, etc. - little babies who can't take facts.

1.) True.
2.) This has more shades of gray than black or white.
3.) The kernel has many layers and only one of them talks to the hardware.
4.) Partially correct.

To give a short answer on 2 and 4: it is true that a display server doesn't query the graphics card / input directly in C/ASM/registers the way a kernel module would, but it is also true that for many operations that module just pipes the raw hardware data to an upper layer, and you are still close enough to the hardware to need hardware/kernel-layer-compliant data structures. Sure, the kernel provides small layers in between for some normalization of known operations, but even so it is closer to the kernel/hardware than to high-level userspace.

I think this is irrelevant when comparing C and C++: both are completely capable of accessing raw pointers and managing memory directly.
The important difference is that it is much more difficult to make C++ conscious of its own memory use.

Example: if you use the inline keyword in C++, the compiler might still choose not to inline the code. The function might then end up in a different part of memory than the one it is called from - a part that might not be available at the time the function is called, ending in a page fault.
This can happen with many C++ constructs (inheritance, exceptions), but it is not relevant in user space: the kernel makes sure pages are available to applications when they need them.

C and C++ are both at the same (low) level in terms of memory access (for data management), compared to scripting languages and others. C and assembly are low-level compared to C++ in terms of memory-footprint control (important for kernels, real-time code, embedded systems with memory constraints, and the like).
So if the display server is not in kernel space, C and C++ are (roughly) equally suitable for it.