The Sun SPARC Enterprise T5120/T5220 Servers are the first of Sun's servers to implement the Sun UltraSPARC T2 processor. This processor uses four, six, or eight 64-bit SPARC cores, each of which supports eight threads running concurrently. The servers support one processor, and therefore support a maximum of 64 threads.

The Sun SPARC Enterprise T5120/T5220 Servers use the on-chip multithreaded processor to minimize CPU idle time, ensuring that multiple threads have access to the processor and do not have to wait for the processor to be completely freed to gain CPU access time.

The processor is designed for highly threaded transactional processing. This means that the time usually spent waiting for memory access has been minimized, which maximizes core utilization.

The Sun SPARC Enterprise T5120/T5220 Servers implement PCI-Express, or PCIe, as the primary I/O bus on the server.

PCIe offers an advantage in speed and data transmission rate; it operates at 2.5 GHz and transmits data at approximately 250 Mbytes/sec in each direction. PCIe provides a true bidirectional link, so with one link available, the server offers a data transmission rate of approximately 500 Mbytes/sec.

The Sun SPARC Enterprise T5120 Server has two PCIe x4 slots and one PCIe x8 slot, while the Sun SPARC Enterprise T5220 Server has four PCIe x4 and two PCIe x8 slots.

Note: The x4 and x8 nomenclature refers to the number of lanes. Each lane supports 250 Mbytes/second.
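The per-slot numbers above follow directly from the per-lane rate. As an illustration (the function name and constants below are invented for this sketch; the figures themselves come from the narration):

```python
# Per-lane rate quoted in the narration for this PCIe generation.
PER_LANE_MB_S = 250

def slot_bandwidth(lanes, bidirectional=False):
    """Approximate PCIe slot bandwidth in Mbytes/sec."""
    direction_factor = 2 if bidirectional else 1
    return lanes * PER_LANE_MB_S * direction_factor

# A single lane: ~250 Mbytes/sec each way, ~500 Mbytes/sec aggregate.
print(slot_bandwidth(1))                      # 250
print(slot_bandwidth(1, bidirectional=True))  # 500

# The x8 slot versus an x4 slot, one direction:
print(slot_bandwidth(8))                      # 2000
print(slot_bandwidth(4))                      # 1000
```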

The Sun SPARC Enterprise T5120/T5220 Servers implement the sun4v architecture, a hyper-privileged architecture. Enhancements in this line include:
- Use of the Sun UltraSPARC T2 processor, which supports multiple threads on multiple cores and conforms to the new architecture by segregating hardware-specific drivers from the OS.
- Implementation of virtualization, which introduces a layer between the operating system and the platform that removes the need for the operating system to have direct register access to the processor, memory, and critical I/O devices. This benefits customers because hardware can be upgraded without changing the software infrastructure.

The Sun SPARC Enterprise T5120/T5220 Servers provide support for a new Ethernet driver, referred to as e1000g. This new driver provides the following features:

- Support for Data Link Provider Interface (DLPI) version 2, which enables a data link service user to access and use a variety of conforming data link service providers without special knowledge of the provider's protocol
- Physical layer static configuration using FORTH code, or FCode, properties
- Portability to Solaris x86 and SPARC platforms through device driver interface, or DDI, frameworks
- Fault management infrastructure support, which provides error handling and management capabilities
- Message signaled interrupts, or MSI, support, which allows devices to communicate in a peer-to-peer manner without the involvement of the host CPU

These implementations bring the gigabit Ethernet interface in line with Sun's other network interface drivers.

The Sun SPARC Enterprise T5120/T5220 Servers implement RAS features, which enable the system to maintain a higher uptime rate. RAS features are incorporated across both hardware and software components, including:
- System busses
- PCI bridges
- Processors
- Memory management
- System power
- System cooling

The inclusion of RAS features in these entry-level servers continues Sun's commitment to providing enterprise-level services in lower-cost solutions. The RAS features included in the Sun SPARC Enterprise T5120/T5220 Servers include:
- N+1 redundant hot-pluggable power supplies and hot-swappable fan modules
- Extended ECC protection on the L2 cache data path and memory interface
- DRAM extended ECC, which allows the detection of up to 4 bits in error, as long as they are on the same DRAM
- Standardized error message generation
- Environmental monitoring with the inter-integrated circuit (I2C) control serial bus

- Remote access using ILOM-based hardware and software components

- Remote monitoring using Sun Net Connect

The Sun SPARC Enterprise T5120/T5220 Servers implement several key technologies that give them a great advantage in the entry-level server market for companies that require database servers, web servers, and application servers.

In this module, we have discussed:
- An overview of the Sun SPARC Enterprise T5120/T5220 Servers, including their features, target markets, and target applications
- The key technologies driving the Sun SPARC Enterprise T5120/T5220 Servers, including the new processor, architecture, and use of a faster PCI-based bus

The Sun SPARC Enterprise T5120 server is a 1U rack-mountable server. The Sun SPARC Enterprise T5220 server is a 2U rack-mountable server. Both servers implement the newly available chip multithreading (CMT) UltraSPARC T2 processor. These entry-level servers are enterprise servers, designed to meet the needs of Sun's customers who use Java servers and databases, such as Oracle, on the back end.

These Sun servers implement the sun4v architecture and on-chip multithreading technology that allows them to process more information per processor, in highly threaded workloads, than competitors. The Sun SPARC Enterprise T5120/T5220 Servers implement the PCI-Express (PCIe) I/O bus, which provides a greater data transfer rate than previous PCI standards.

The Sun SPARC Enterprise T5120/T5220 Servers implement several new features and enhancements. Overall, the servers feature improvements in processor throughput and I/O technology.

The next few slides describe the system specifications for the Sun SPARC Enterprise T5120/T5220 Servers, starting with the processor.

The Sun SPARC Enterprise T5120/T5220 Servers are equipped with a single on-board UltraSPARC T2 processor that operates at 1.2 or 1.4 gigahertz (GHz). The processor contains a 4 megabyte Level 2, or L2, cache, and four, six, or eight cores, each of which has one floating point unit, or FPU, and supports eight threads.
Each core has a 16 kilobyte instruction cache and an 8 kilobyte data cache. The CPU also supports up to 16 FBDIMM slots, each of which supports 1, 2, and 4 gigabyte DIMMs, with a maximum of 64 gigabytes supported in the Sun SPARC Enterprise T5120/T5220 Servers. The DIMMs are controlled by four memory controllers.
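The capacity figures in this paragraph can be checked with a few lines. In the sketch below, the constant names are invented for illustration, and the per-branch module counts are inferred from the 4-, 8-, and 16-FBDIMM configurations described later in the course:

```python
# Figures from the narration: 16 FBDIMM slots, modules of 1, 2, or 4 GB,
# and four memory branches that must all be populated equally.
SLOTS = 16
BRANCHES = 4
DIMM_SIZES_GB = (1, 2, 4)

# Populating every branch with 1, 2, or 4 modules yields the supported
# totals of 4, 8, or 16 FBDIMMs.
supported_counts = sorted(BRANCHES * n for n in (1, 2, 4))
print(supported_counts)  # [4, 8, 16]

# Maximum capacity: every slot filled with the largest module.
max_memory_gb = SLOTS * max(DIMM_SIZES_GB)
print(max_memory_gb)  # 64
```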

The Sun SPARC Enterprise T5120/T5220 Servers are designed to fulfill the needs of customers who require real-world application performance while optimizing the computing power that can be achieved within the expensive and shrinking data center space. As servers become increasingly power hungry, maintaining an environment that balances power input and thermal output for each rack becomes increasingly important.

The Sun SPARC Enterprise T5120/T5220 Servers utilize Cool Computing to meet the needs of these customers. With the Sun SPARC Enterprise T5120/T5220 Servers, changes in the architecture have improved the power input to thermal output ratio.

CMT-based processors are designed for the thread-rich network computing environment. These processors maximize application throughput by processing multiple threads simultaneously on a single chip. Essentially, any application that is vertically and/or horizontally scalable today can benefit from CMT. These include real-world applications such as:
- Databases
- Application servers
- Transaction processing
- Web-based services

Primary market segments that require these services include:
- Financial services
- Telecommunications
- Government
- Retail organizations
- Professional services

With the introduction of the new processor and architecture, the Sun SPARC Enterprise T5120/T5220 Servers are the right servers for customers whose applications are highly threaded, use parallel threads, or require large instruction or data working sets. The CMT chips can handle certain types of software and tasks better than other high-end 64-bit chips. Any application that creates network traffic, such as Web services software, Java, Web servers, and application servers, can benefit from spreading many software threads across low-powered processor cores.

Several servers compete in the same market as the Sun SPARC Enterprise T5120/T5220 Servers. Competition arises primarily from IBM, HP, Fujitsu, and Dell. The trend toward multi-core processors is nothing new: IBM currently ships servers with the dual-core Power4+ and Power5 chips; HP uses dual-core PA-RISC processors; Fujitsu implements the SPARC64-V chip; Dell ships servers with the Intel Xeon MP and EM64T processors; and Advanced Micro Devices is supplying the Opteron processor to the industry.

Sun is well positioned to handle the boost in performance with the Solaris Operating System. Unlike most desktop operating systems, the Solaris OS can leverage CMT to run 64 simultaneous threads. For threaded applications, future UltraSPARC processors will deliver up to 50 times the performance of today's fastest UltraSPARC processor, without a significantly higher cost-per-chip.

And just as important as speed, CMT enables administrators to consolidate the infrastructure of dozens of servers onto a single server. Imagine the savings in administration, maintenance, power, cooling, and floor space. The management and maintenance savings are as phenomenal as the technology itself.

Several key technologies are being used in the Sun SPARC Enterprise T5120/T5220 Servers.
These technologies help to increase bandwidth, connectivity, system throughput, and computing throughput through enhancements to the architecture, processors, and system busses.

The Sun SPARC Enterprise T5120/T5220 Servers offer the following major features:
- Dense packaging in a reduced footprint
- Implementation of a four-, six-, or eight-core Sun UltraSPARC T2 processor
- On-chip multithreading technology to maximize the use of the system processor
- High performance busses and memory interconnect systems
- Improved input and output (I/O) throughput through the incorporation of PCIe
- Architectural enhancements to system platforms to improve system upgrades, memory management, and the I/O infrastructure
- Increased on-board network connectivity

The references shown here provide additional information about the Sun SPARC Enterprise T5120/T5220 Servers. Most of the documentation shown is available on the www.sun.com/documentation web site. A link is provided on your screen. We'll pause for just a bit so that you can review the information on this slide.

The Sun SPARC Enterprise T5120/T5220 servers have the following circuit boards installed in the chassis:
- Motherboard
- Power distribution board (2 in the T5220)
- Paddle board
- USB board
- Disk backplane
- Fan boards (2)
- PCIe riser cards (3)

The motherboard is actually an assembly made up of the motherboard itself and a tray or carrier. The motherboard assembly comes in several different versions, with the only differences being processor speed and the number of cores.

The motherboard includes a direct-attach CPU module, slots for 16 DIMMs, memory control subsystems, and all system controller (ILOM) logic.

In addition, a removable NVRAM contains all MAC addresses, the host ID, and OpenBoot PROM configuration data. When replacing the motherboard, the NVRAM can be transferred to the new board to retain system configuration data.

The service processor (ILOM) subsystem contains a PowerPC Extended Core and a communications processor that controls the host power and monitors host system events (power and environmental). The ILOM controller draws power from the host's 3.3V standby supply rail, which is available whenever the system is receiving AC input power, even when the system is turned off.

The power distribution board distributes main 12V power from the power supplies to the rest of the system. It is directly connected to the paddle card, and to the motherboard via a bus bar and ribbon cable.

The paddle board is an assembly made up of the board, a metal mounting bracket, and a top cover interlock, or kill, switch. The paddle board serves as the interconnect between the fan connector boards and the SAS backplane.

The USB board connects directly to the SAS backplane. It is packaged with the DVD drive as a single customer-replaceable unit (CRU).

The fan boards carry power to the system fan modules. In addition, they contain fan module status LEDs and transfer I2C data for the fan modules.

Note: The fan boards in the Sun SPARC Enterprise T5120 and T5220 are different boards, but they function the same.

There are three PCI riser cards per system, each attached in slots at the rear of the motherboard. In 1U systems, each riser card supports one card. In 2U systems, each riser supports two cards.

PCI riser cards come in two different versions: one that can support either an x8 PCIe card or an XAUI card, and one that supports x16 PCIe. Slots on the motherboard are keyed so that you can only plug the correct type of riser card into the motherboard.

Note: The slots that you see on the motherboard are not industry-standard PCIe slots. They are Sun proprietary slots that only accommodate the Sun riser cards.

Most of the electrical connectivity in the Sun SPARC Enterprise T5120/T5220 servers is accomplished through connectors on the system's infrastructure boards. The only system cables in the chassis are:
- PDB to MB ribbon cable
- Horizontal to vertical PDB ribbon cable (2U only)

- Disk backplane to MB cable (1 in the 1U, 2 in the 2U)

- Top cover interlock switch cable

The diagram shown illustrates the overall system architecture for the Sun SPARC Enterprise T5120/T5220 servers.

The top, center portion of the diagram depicts the UltraSPARC T2 CPU and the memory architecture. The processor uses four memory channels, each of which manages one of four banks, each with four FBDIMM slots. Three DC-DC converters are required to deliver power to the processor and FBDIMM connectors.

To the right of the UltraSPARC T2 processor, you will find the service processor architecture. The service processor connects to the CPU through the serial system interface (SSI) communications bus. To the left of the UltraSPARC T2 processor is the LSI 1068E SAS/SATA disk controller.

Finally, the lower half of the architectural diagram shows the I/O architecture. The UltraSPARC T2 CPU interfaces with the I/O through three PCIe switches. We will provide an in-depth explanation of all of these sections.

The illustration depicts all of the on-die components and the data paths among the different components of the UltraSPARC T2 processor. Here, you have eight cores communicating through the cache crossbar (CCX) to the 4 MB total of 16-way associative L2 cache, and then through the MCU controller channels, labeled MCU 0 through 3.

You'll notice in the diagram that each CPU core has its own floating point unit (FPU). Each core also has its own crypto unit. Also, two 10 Gb Ethernet (XAUI) interfaces and the PCI-Express interface are integrated onto the chip.

The UltraSPARC T2 processor is designed to operate with first-generation, industry-standard Fully Buffered Dual In-line Memory Modules (FBDIMMs). This feature dictates the memory architecture of the Sun SPARC Enterprise T5120/T5220 servers.

The UltraSPARC T2 processor has four memory controllers, referred to as memory branches. Each branch has two channels. Each channel supports 10 Southbound (from MCU to memory) and 14 Northbound (from memory to MCU) high-speed serial lanes utilizing differential pair signaling.

The UltraSPARC T2 processor allows up to a maximum of 32 FBDIMMs to be accessed, but is limited to 16 FBDIMMs in the Sun SPARC Enterprise T5120/T5220 servers.

An FBDIMM module is comprised of standard DDR2 memory chips with an additional high-speed serial link device, called an Advanced Memory Buffer (AMB) chip, which buffers all memory chip address, data, and control signals from the outside world. The AMB serializes this information into high-speed, differentially driven data links that form point-to-point connections from the processor to the first FBDIMM, or from one FBDIMM to another FBDIMM in a daisy-chain fashion. Because the Sun SPARC Enterprise T5120/T5220 Servers utilize FBDIMM memory modules that contain the AMB chip within the module, only the processor-to-DIMM and DIMM-to-DIMM link signals must be routed on the motherboard.

The Sun SPARC Enterprise T5120/T5220 servers support three DIMM sizes at revenue release: 1 GB, 2 GB, and 4 GB. Because all four branches need to be populated, the following three memory configurations are supported: 4 FBDIMMs, 8 FBDIMMs, and 16 FBDIMMs. In total, the system supports a maximum total system memory of 64 GB. The DIMMs supported are FBDIMMs, which must all be the same density within a memory branch. Individual faulty FBDIMMs can be replaced, meaning FBDIMMs do not have to be replaced in pairs.

We'll start our discussion of the I/O architecture with the two XAUI ports on the UltraSPARC T2 processor. These control the two XAUI slots on the motherboard. To the left of the XAUI ports is a PCIe port on the UltraSPARC T2 processor. The first component on the PCIe port is the ST probe. This is a Soft Touch probe, which is used for diagnostics. Below the ST probe is the first PCIe switch. Coming off the bottom of the first switch is a PCIe x8 slot. Topologically, this is the closest to the processor, so it will have the least latency.
All the other slots, which include two x4 slots in the Sun SPARC Enterprise T5120, and an additional x8 slot and two additional x4 slots in the case of the Sun SPARC Enterprise T5220, are connected to another PCIe switch, which is cascaded off the first PCIe switch.

The third PCIe switch, shown on the left side of the diagram, controls the remaining on-board I/O, which includes:
- Four Gigabit Ethernet interfaces
- Four USB ports (two in the front, two in the rear)
- DVD-ROM drive

The SAS/SATA controller provides support for embedded mirroring for the internal disks of the Sun SPARC Enterprise T5120/T5220 servers. It supports RAID levels 0 and 1 (striping and mirroring, respectively), which allows you to mirror the boot drive. Support for mirroring does not require software intervention; instead, it relies on the controller.

The SAS/SATA controller also provides external 32-bit support for Flash ROM and non-volatile static random access memory (NVSRAM).

The controller implements the Fusion-MPT, or Message Passing Technology, architecture, which features a performance-based message passing protocol. By managing all I/O and coalescing interrupts to minimize system bus overhead, the controller requires small device drivers that are independent of the I/O bus. This represents a savings, as a single device driver is used for SAS/SATA, SCSI, and Fibre Channel (FC).

The service processor is the hardware portion of the lights-out management system implemented on the Sun SPARC Enterprise T5120/T5220 servers. Unlike previous versions of Sun service processors, the service processor hardware is not on a separate card, but is integrated on the motherboard. The hardware components of the service processor include:
- A field-programmable gate array (FPGA) device that controls aspects of system power and acts as the primary ILOM-to-host server communications gateway
- I2C devices responsible for monitoring the server's environment and FRUID data
- A Motorola MPC885 microprocessor, which contains its own instruction and data caches, and a built-in memory controller

- Management ports

Click the links provided to view additional information on the topics presented.

The I2C bus is used on the service processor for the TOD functionality, several SEEPROMs, and to monitor the host server's environmental monitoring devices.

The OSP card supports several devices monitored on the I2C bus for itself, including:

The Motorola MPC885 microprocessor is a highly integrated, high-computing-power, low-power-dissipation device that acts as the mini-computer for the service processor. The microprocessor contains an embedded PowerPC core that includes its own cache, a system interface unit, and an interface to a communications processor. The microprocessor includes the following features:

- Support for 66 MHz, 80 MHz, and 133 MHz core frequencies

- Built-in memory controller

- Two serial communications controllers, one of which provides an external RS-232 UART channel
- Two serial management channels used to communicate with the host server
- Two 10/100BaseT Ethernet controllers, one of which provides an external network management interface while the second communicates with the host server
- An I2C port for providing information on configuration and status
- Low-power mode
- 357-pin plastic ball grid array (PBGA), which is a method of packaging high I/O devices

Management Ports

The service processor supports two management ports: a serial port and a network management port. Both ports use RJ-45 connectors.

The serial port implements the full complement of RS-232-style modem controls. It can be connected to a terminal server or to a modem to gain access to the service processor, and therefore console access to the Sun SPARC Enterprise T5120/T5220 servers.

The service processor draws power from the 3.3V standby rail that is routed through the motherboard from the PSUs, regardless of whether the server is on. When power is applied to the system and the 3.3VDC rail receives power, the service processor boots.

==============================

As predicted by Gordon Moore in his article in Electronics magazine in 1965, the number of transistors on a chip has doubled approximately every two years. This has come to be known as Moore's Law. In addition, the law shows an equivalent doubling of the clock frequency within the same time frame. However, memory speeds have not kept pace. DRAM has doubled in speed only every six years, leaving a growing speed gap between processors and memory.
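The widening gap can be illustrated with the doubling periods just quoted. A small, illustrative calculation (the 12-year horizon is an arbitrary example, not a figure from the course):

```python
# Growth after a number of years, given a doubling period in years.
def growth(years, doubling_period):
    return 2 ** (years / doubling_period)

years = 12
cpu_gain = growth(years, 2)   # processors: doubling every 2 years -> 64x
dram_gain = growth(years, 6)  # DRAM speed: doubling every 6 years -> 4x

# The ratio is the speed gap itself, which has grown 16-fold.
print(cpu_gain / dram_gain)   # 16.0
```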

High-speed processors can execute instructions at a fast rate. An execution instruction stream, called a thread, is made up of compute time and, for memory access instructions, memory latency time. The compute time can be fast, but overall processor performance stalls whenever a memory access instruction is performed. Due to the speed gap that exists between processors and memory, the processor can spend up to 75 percent of its processing cycles waiting on memory.

One way to take advantage of this memory latency is to start another thread that the processor can execute while the first thread is stalled. This concept, known as multithreading, or MT, allows multiple instruction streams to be executed within the same period of time. An MT processor has multiple sets of registers and other thread state, which allow threads to execute either simultaneously, if the processor can physically support this, or to be switched out when one thread is delayed waiting for data or instructions, such as during memory access.

Adding more transistors within the chip to support more threads in parallel improves overall processor performance and significantly lowers the effect of memory latency. With multiple threads, cycles that would otherwise be wasted are available as compute cycles for another thread.

Another method of improving processor throughput is chip multiprocessing, or CMP. CMP duplicates the processing unit of the processor, so that multiple cores are included on a single chip. Functionally, this allows one thread to be active at a time on each core, which improves the utilization of chip resources. Because physical resources are now duplicated across multiple cores, each core processes its thread independently of the other cores, and the number of parallel threads that can be executed increases.

Chip multithreading combines the concepts of multithreading and chip multiprocessing. This combination provides an increase in throughput and a gain in thread-level parallelism, or TLP, so you can now process multiple threads on a core in a processor with multiple cores. For example, a processor with 4 cores and 4 threads per core can execute 16 parallel threads.

Sun's UltraSPARC T2 processor implements the CMT design. The UltraSPARC T2 processor is available with 4, 6, or 8 cores. Each of these cores supports up to eight threads. This provides support for up to 64 threads on the server.
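The thread arithmetic above is simply cores times hardware threads per core; a two-line sketch (the function name is invented for illustration) makes the 16- and 64-thread figures explicit:

```python
# Parallel thread capacity under CMT: cores per chip multiplied by
# hardware threads per core.
def parallel_threads(cores, threads_per_core):
    return cores * threads_per_core

print(parallel_threads(4, 4))  # 16, the example in the text
print(parallel_threads(8, 8))  # 64, a fully populated UltraSPARC T2
```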

Each thread within a core contains a full set of system registers but shares the Level 1 instruction cache, which is 16 Kbytes, and the data cache, which is 8 Kbytes. By providing each core with its own L1 cache, memory latency is dramatically decreased as compared to having the L1 cache shared by all cores. The L1 data and instruction caches communicate with the 4 megabyte L2 cache through a cache crossbar interface that enables L2 cache sharing among the available cores.

To realize the large opportunity for throughput that the processor provides, the operating system must be able to effectively manage the individual cores and threads of the processor. Applications and the operating system should be able to fully utilize all cores so that the CPU utilization rate is high. Threads are switched in and out quickly, depending on their needs. If a thread requires memory access as a result of a memory cache miss, another thread can continue to execute its instructions until it is moved off to make room for another thread. Keep in mind that each thread has its own set of registers, so its state is maintained.

The operating system, specifically the scheduler, is vital in scheduling threads for the CMT environment. While to all appearances the operating system treats these cores as individual processors, support is added to various OS subsystems to ensure that they are CMT-aware. The Solaris OS treats these logical processors like traditional SMP, or symmetric multiprocessing, processors, scheduling runnable threads across them. However, there is an awareness of grouping, in which threads are organized by the core they are associated with. This helps the kernel with scheduling, as it alerts the system to which threads are sharing resources so that it can better manage scheduling policies to improve performance, primarily through cache utilization and maximizing aggregate data path bandwidth.

Unless a thread is bound to a processor by the user, the OS scheduler must decide which threads should run on the same processor and which on separate processors. The scheduler uses several methods to balance loads and improve performance, including:
- Load balancing, in which the scheduler uses a scheduling policy to distribute the workload across the logical processors to help maximize per-chip resource availability. It looks at the current load on a core to determine the best fit for the thread requiring execution time. If the core is underloaded, it can handle another thread without straining access to resources.
- Thread-to-cache affinity, which looks at a thread's timestamp to see the last time it ran on its prior CPU. If that was less than a predetermined time ago, the thread is reassigned to that CPU. This method also takes into account the size of the cache to ensure the thread has the resources required to complete its tasks efficiently.
- Shared run queues, which provide an advantage with shared cache. Once a thread runs on a strand within a specific core, there may be no disadvantage to it running on any other strand in the same core. Hence, the logical processors on a core share a dispatch queue.
- Updated CPU performance counters, or CPCs, which are more flexible and available to the kernel, and which help it make better scheduling decisions based on workload characterization.

Field Programmable Gate Array (FPGA) Device

The FPGA device controls system power and acts as the primary gateway for ILOM-to-host server communications. It provides the interface through which the UltraSPARC T2 CPU gains access to the boot Flash ROM and the SRAM, which acts as a mailbox and POST scratch pad. The FPGA also provides the following functions for the server:

- Reset control
- Clock control
- Power control
- Interrupts for the ILOM and the UltraSPARC T2 processor

A mailbox communications system is used by both the UltraSPARC T2 processor and the ILOM to gain indirect access to these functions. The mailbox pointers are held in the FPGA registers, while the data is maintained in the SRAM. Both the processor and the ILOM use the mailbox, and therefore the SRAM, to pass data between each other.

The Virtual Blade System Controller (vBSC) controls most FPGA functionality and provides interfaces for ILOM to call these functions when required, such as when booting the system and configuring specific options.

====================================================

The following Sun SPARC Enterprise T5120/T5220 components can be serviced by the customer:
- Fan modules

- Power supply units, of which there are two, accessible from the rear of the server
- PCI cards and riser cards
- DDR2 FBDIMMs, the system memory, available as 1 GB, 2 GB, and 4 GB FBDIMMs
- System battery
- SAS disk drives
- Rail mount kit, which is used to rack mount the Sun SPARC Enterprise T5120/T5220 server
- Cable management arm, which is used to streamline the cables at the rear of the server

The following Sun SPARC Enterprise T5120/T5220 server components should be serviced by an authorized, trained engineer:
- DVD-ROM, which is replaceable from the front of the unit
- Disk backplane board
- Fan power board, which distributes power to the system fans
- Paddle board
- Power distribution board, which takes the power from the power supplies and distributes it to the rest of the boards in the server
- Bus bars
- System configuration PROM
- Cable kit, which contains two SAS cables
- Motherboard (MB) assembly

It should be noted that in the event of a CPU failure, the entire motherboard must be replaced.

When servicing the internal components of the Sun SPARC Enterprise T5120/T5220 server, be sure to:
- Turn off all peripheral devices connected to the server.
- Turn off the server itself, except in the case of hot-swap components.
- Label and disconnect all of the cables coming into the server.
- And finally, ensure that you follow ESD precautions.

Before working on the Sun SPARC Enterprise T5120/T5220 server, be sure to have:
- A flat-blade No. 1 screwdriver and a No. 2 Phillips head screwdriver
- An electrostatic discharge (ESD) mat
- A grounding wrist or foot strap

Hot-swappable components are those that you can install or remove while the system is running, without affecting the system's performance. However, you might have to prepare the operating system before the hot-swap operation is performed. The following components are hot-swappable in a Sun SPARC Enterprise T5120/T5220 server:
- The two power supply units
- The hard disk drives

Note: The system fans are hot-pluggable. No preparation of the operating system is required before removing and replacing system fans.

The top cover for both the Sun SPARC Enterprise T5120 and T5220 server includes an integrated, latched door for access to the hot-plug fans. Depending on the component that you are servicing, you might need to remove the top cover.

Click the top link on your screen for a printable, text-based procedure on the removal and replacement of the top cover. Click the bottom link on your screen for an animated demonstration of the removal of the top cover.

For each of the CRUs listed, click the component name for a printable, text-based procedure on the removal and replacement of that component. Click the link provided to view an animated demonstration on servicing CRUs.

Procedure for Removing and Installing a Fan Module

Fans are replaced in pairs (2 fans per module).

To remove a fan module:

1. Slide the system out of the rack.
2. Push the two fan door latches toward the rear of the server and open the fan door.
3. Locate the faulty fan module. It is identified with a corresponding Service Required LED.
On the Sun SPARC Enterprise T5120, the Fan Fault indicators are located on the fan board. On the Sun SPARC Enterprise T5220, the Fan Fault indicators are located on the fan modules.
4. Pull up on the fan module handle until the fan module is removed from the chassis.

To install a fan module:

1. With the top cover door open, install the replacement fan module into the server. The fan modules are keyed to ensure they are installed in the correct orientation.
2. Apply firm pressure to fully seat the fan module.
3. Verify that the Fan Fault indicator on the replaced fan module is not lit.
4. Close the fan door.
5. Verify that the Top Fan indicator, Service Required indicators, and the Locator indicator/Locator button are not lit.

Procedure for Removing and Installing an FBDIMM


Complete the following steps to remove an FBDIMM:

1. Power off the system and slide it out of the rack.

Note: FBDIMMs should only be removed with the power cord disconnected from the chassis.

2. Remove the server top cover.

3. Lift the hinged air baffle.

4. Locate the memory module socket from which you will remove an FBDIMM, and press the FBDIMM fault button. The FBDIMM fault button is located on the motherboard near the FBDIMMs. Faulty FBDIMMs are identified with a corresponding amber LED on the motherboard.

5. Push down on the ejector tabs on each side of the FBDIMM until the FBDIMM is released.

6. Carefully lift the FBDIMM straight up to remove it from the socket.

Complete the following steps to install an FBDIMM:
1. Power off the system and slide it out of the rack.
2. Remove the top cover.
3. Lift the hinged air baffle.
4. Locate the memory module socket in which you will install an FBDIMM.
5. Ensure that the ejectors at each end of the memory socket are fully open (rotated downward) to accept the new FBDIMM.

6. Align the FBDIMM with the key in the socket.

7. Press the FBDIMM straight down until it snaps into place and the ejectors engage the cutouts in the FBDIMM's left and right edges.

Procedure for Removing and Installing a Power Supply Unit

Complete the following steps to remove a power supply:
1. If the server is in a rack with a cable management arm attached, swivel open the cable management arm to view the power supplies.
2. Identify which power supply you will replace. Each power supply has an amber LED that you can view from the rear of the server. If the amber LED is on, the power supply is faulty and should be replaced.
3. Disconnect the AC power cord from the power supply that you are replacing. The power supplies are hot-swappable, so you do not have to shut down the server or disconnect the other power supply.

Note: The Service Action Required LEDs on the front panel and back panel blink when apower supply is unplugged.

4. Remove the power supply:

a. Grasp the power supply handle and push the thumb latch toward the center of the power supply.
b. While continuing to push on the latch, use the handle to pull the power supply from the chassis.

To install a power supply:
1. Align the power supply with the empty bay in the chassis.
2. Press the power supply into the bay until it firmly engages the connector on the power distribution board. It is fully seated when the thumb latch clicks into place.
3. Connect the AC power cord to the new power supply.
4. Swivel any cable management arm back into the closed position.

Procedure for Removing and Installing the System Battery

Complete the following steps to remove the system battery:
1. Power off the system and slide the system out of the rack.
2. Remove the top cover.
3. Remove PCIe/XAUI riser 0.
4. Using a small (No. 1 flat-blade) screwdriver, press the latch and remove the battery from the motherboard.

To install the battery, reverse this procedure.

Note: Install the new battery with the plus sign (+) facing up.

Procedure for Removing and Installing a PCI Card and Riser Card

PCIe/XAUI cards are installed on vertical risers. You must remove the relevant riser to access a PCIe/XAUI card.

Complete the following steps to install a PCI card:
1. Unpack the replacement PCIe or XAUI card and place it on an antistatic mat.
2. Locate the proper PCIe/XAUI slot for the card you are replacing.
3. If necessary, review the PCIe and XAUI Card Guidelines to plan your installation.
4. Disconnect any data cables connected to the cards on the PCIe/XAUI riser being removed. Label the cables to ensure proper connection later.
5. Remove the riser board.
a. Remove the #2 Phillips screw securing the riser to the motherboard.
b. Slide the riser forward and out of the system.
6. Insert the PCIe/XAUI card into the correct slot on the riser board.
7. Replace the riser board.
a. Slide the riser back until it seats in its slot in the back panel.
b. Replace the #2 Phillips screw securing the riser to the motherboard.
8. Reinstall any data cables connected to the cards on the PCIe/XAUI riser being installed.
9. Install the top cover.

To remove a PCI card, reverse this procedure.

Note: The procedures are the same for both the Sun SPARC Enterprise T5120 and the SunSPARC Enterprise T5220.

Procedure for Removing and Installing a Hard Disk Drive

Complete the following steps to remove and replace a hard disk drive:
1. On the drive you plan to remove, push the hard drive release button. The latch opens.

Caution: The latch is not an ejector. Do not bend it too far to the left. Doing so can damagethe latch.

2. Grasp the latch and pull the drive out of the drive slot.

To install a hard disk drive:
1. If necessary, remove the blank panel from the chassis.

2. Align the replacement drive to the drive slot.

Hard drives are physically addressed according to the slot in which they are installed. If you removed an existing hard drive from a slot in the server, you must install the replacement drive in the same slot as the drive that was removed.
3. Slide the drive into the drive slot until it is fully seated.
4. Close the latch to lock the drive in place.

Procedure for Removing and Installing a DVD Assembly

To remove a DVD assembly:
1. Remove the following hard drive:

Sun SPARC Enterprise T5120: HDD3

Sun SPARC Enterprise T5220: HDD7

2. Release the DVD/USB module from the disk drive backplane.

Use the finger detent in the disk drive bay below the DVD/USB module to detach the module fromthe backplane.

3. Slide the DVD/USB module out of the disk drive cage.

To install the DVD assembly:
1. Slide the DVD/USB module into the front of the chassis until it seats.
2. Install the hard drive you removed during the DVD/USB module removal procedure.

Procedure for Removing and Installing the Power Distribution Board to Motherboard Ribbon Cable

To remove the PDB to MB ribbon cable:
1. Power off the system and slide the system out of the rack.
2. Remove the server top cover.
3. Lift the hinged air baffle.
4. Disconnect one end of the ribbon cable from the power distribution board and the other end from the motherboard.

To install the PDB to MB ribbon cable, reverse this procedure.

Procedure for Removing and Installing a Fan Board

To remove a fan board:
1. Power off the system and slide the system out of the rack.
2. Remove the top cover.
3. Remove the fan modules.

Note: If you are replacing a defective fan module connector board, remove only the fanmodules that are necessary to remove the defective fan module connector board.

4. Remove the Phillips screw that secures the fan module connector board to the chassis.
5. Slide the fan board toward the left side of the chassis approximately 0.5 inch (12 mm) to disengage the fan board from the paddle board and the bottom of the chassis.
6. Lift the fan board up and out of the chassis.

To install a fan board, reverse this procedure.

Procedure for Removing and Installing the Paddle Board

To remove the paddle board:
1. Power off the system and slide the system out of the rack.
2. Remove the top cover.
3. Remove the motherboard assembly.
4. Remove both power supplies from the chassis.
5. Remove all the disk drives from the server.
6. Remove the DVD, DVD carrier, and USB board from the server.
7. Remove the 4 screws securing the disk cage assembly to the chassis. There are 2 screws on the side of the chassis near the right front and 2 screws on the side of the chassis near the left front.
8. Disconnect the disk cable from the motherboard, so that it does not obstruct access to the power distribution board.
9. Remove the PDB to MB ribbon cable.
10. Disconnect the top cover intrusion switch cable connector from the power distribution board.
11. Remove the 4 screws connecting the power distribution board to the bus bar.
12. Remove the single screw securing the power distribution board to the bottom of the chassis.
13. Slide the power distribution board toward the left approximately 0.5 inch (12 mm) to release the power distribution board from the paddle board and the captive standoffs on the bottom of the chassis.
14. Lift the power distribution board out of the chassis.
15. Remove the 2 screws that secure the paddle board bracket to the chassis.

Note: Do not remove the 2 screws that secure the paddle board to the paddle board bracket.

16. Slide the paddle board bracket toward the rear of the chassis approximately 0.5 inch (12 mm) to release the paddle board bracket from the chassis.
17. Lift the paddle board and bracket out of the chassis.

To install the paddle board, reverse this procedure.

Procedure for Removing and Installing a Disk Backplane

To remove a disk backplane:
1. Power off the system and slide the system out of the rack.
2. Remove the top cover.
3. Remove all the disk drives from the server.
4. Remove the DVD, DVD carrier, and USB board from the server.
5. Remove the 4 screws securing the disk cage assembly to the chassis. There are 2 screws on the side of the chassis near the right front and 2 screws on the side of the chassis near the left front.
6. Slide the disk cage assembly toward the front of the chassis approximately 0.5 inch (12 mm). This releases the disk cage assembly from the chassis bottom.
7. Disconnect the disk cable from the disk backplane. The Sun SPARC Enterprise T5120 server has 1 disk cable. The Sun SPARC Enterprise T5220 server has 2 disk cables.
8. Remove the 2 screws that secure the disk backplane to the disk cage assembly. The Sun SPARC Enterprise T5220 server has 4 screws securing the disk backplane to the disk cage assembly.
9. Slide the disk backplane down approximately 0.25 inch (6 mm) to release the disk backplane from the "fingers" on the disk cage assembly that protrude through keyhole slots in the disk backplane.
10. Pull the disk backplane away from the disk cage assembly.

To install a disk backplane, reverse this procedure.

Procedure for Removing and Installing the System Configuration PROM

To remove the system configuration PROM:
1. Power off the system and slide the system out of the rack.
2. Remove the top cover.

3. Remove the rightmost PCI card and riser card.

4. The system configuration PROM is the chip located at j7901 on the motherboard. Lift it straight up off the motherboard.

To install the system configuration PROM, reverse the procedure.

Note: The PROM is keyed, so it can only be installed one way.

Procedure for Removing and Installing the Motherboard Assembly

The Sun SPARC Enterprise T5120 and Sun SPARC Enterprise T5220 use the same motherboard. The motherboard assembly consists of the motherboard and the tray that the motherboard sits in. They should be removed and installed as a single unit.

To remove the motherboard:
1. Power off the system and slide the system out of the rack.
2. Remove the top cover.
3. Remove the air baffle.
a. Open the air baffle.
b. Disengage the rear of the air baffle from the motherboard and rotate the air baffle forward.
c. Press in the edges of the air baffle to disengage its pins from the chassis.

4. Disconnect the motherboard to power distribution board ribbon cable.

5. Disconnect the disk cable from the motherboard. The Sun SPARC Enterprise T5220 server has 2 disk cables.
6. Remove all PCI boards and riser cards.
7. Remove all FBDIMMs.
8. Remove the 4 screws connecting the motherboard to the bus bar.
9. Loosen the captive screw securing the motherboard to the chassis. The captive screw is colored green, and is located to the left of the bus bar screws.

10. Using the green handles, slide the motherboard toward the back of the system and tilt the motherboard assembly to lift it out of the chassis.

To install the motherboard assembly, reverse this procedure.

Procedure for Removing and Installing a Power Distribution Board

Note: It is easier to service the power distribution board (PDB) with the bus bar assemblyattached.

To remove a power distribution board:

1. Power off the system and slide the system out of the rack.
2. Remove the top cover.

3. Remove the motherboard assembly.

4. Remove both power supplies from the chassis.
5. Remove all the disk drives from the server.
6. Remove the DVD, DVD carrier, and USB board from the server.
7. Remove the 4 screws securing the disk cage assembly to the chassis. There are 2 screws on the side of the chassis near the right front and 2 screws on the side of the chassis near the left front.
8. Disconnect the disk cable from the motherboard, so that it does not obstruct access to the power distribution board. Note: There are 2 disk cables connected to the motherboard in the Sun SPARC Enterprise T5220 server.
9. Remove the PDB to MB ribbon cable.
10. Disconnect the top cover intrusion switch cable connector from the power distribution board.
11. Remove the 4 screws securing the power distribution board to the bus bar.

Note: The Sun SPARC Enterprise T5220 has 4 additional screws to remove. They areconnected to 2 additional bus bars that connect to a vertical power distribution board.

12. The Sun SPARC Enterprise T5220 server also has a ribbon cable that connects the horizontal and vertical power distribution boards. This must also be disconnected.
13. Remove the single screw securing the power distribution board to the bottom of the chassis.
14. Slide the power distribution board toward the left approximately 0.5 inch (12 mm) to release the power distribution board from the paddle board and the captive standoffs on the bottom of the chassis.
15. Lift the power distribution board out of the chassis.

To install a power distribution board, reverse this procedure.

To set up the service processor with initial network configuration information, you must establish a connection through ILOM to the service processor. Until the service processor has an IP address assigned to it, you must use a serial connection to communicate with the service processor. After establishing a serial connection to the service processor, you can choose to configure the service processor with a static or DHCP IP address.

The default action for the service processor is to try to use DHCP for its network configuration information. When you apply power to the system for the first time, ILOM broadcasts a DHCPDISCOVER packet. If you have an established DHCP server on the network, the DHCP server returns a DHCPOFFER packet containing an IP address and other network configuration information to the service processor.

If you prefer to configure the service processor with a static IP address, and you have a DHCP server established on your network, you can configure the static IP address prior to attaching a LAN cable to the NET MGT port of the server.

Note: Sun recommends a static IP address for the service processor.

Whether static or DHCP IP addresses are assigned, you must initially establish a serial console connection to communicate with ILOM. You can access the ILOM CLI at any time by connecting a terminal or a PC running terminal emulation software to the serial management port on the chassis.

To connect to the ILOM using a serial connection:

Step 1. Verify that your terminal, laptop, or terminal server is operational. You need a display device to interface to the serial port. This can be one of the following devices:
Laptop or desktop computer
Terminal device
Personal digital assistant (PDA)

Step 2. Before trying to connect to the port, make sure that your display device is properly configured. For a laptop, desktop computer, or PDA, make sure that a terminal emulation program is loaded and started.

For all of these devices, make sure that they are configured with the following default parameters:
8N1: eight data bits, no parity, one stop bit
9600 baud
Hardware flow control disabled

Step 3. Connect a serial cable from the serial management port on the rear of the chassis to a terminal device. The serial management port is the left-most RJ-45 port, as viewed from the rear of the chassis. The pinout requirements for the serial cable connected to the serial port are displayed in the table on your screen.

Step 4. Press Enter on the terminal device.

This establishes the connection between the terminal device and the ILOM.

Note: If you connect a terminal or emulator to the serial port before the system has been powered up or during its power-up sequence, you will see bootup messages.

When the system has booted, the ILOM displays its login prompt:
SUNSP 00:12:2F:4A:7A:3B login:
The first string in the prompt is the default host name. It consists of the prefix SUNSP and the ILOM's MAC address. The MAC address for each ILOM is unique.

Step 5. Log in to the CLI:

Type the default user name, root.

Type the default password, changeme.

Once you have successfully logged in, the ILOM displays the ILOM default command prompt:
->
You can now run CLI commands.

Note: The CLI is the only available user interface on the serial port.

The CLI architecture is based on a hierarchical namespace, which is a predefined tree

that contains every managed object in the system. This namespace defines the targets for each command verb.

The ILOM includes three namespaces: the /SP namespace, the /SYS namespace, and the /HOST namespace.
The /SP namespace manages the ILOM. For example, you use this space to manage users, clock settings, and other ILOM issues.
The /SYS namespace manages the host system. For example, you can change the host state, read sensor information, and access other information for managed system hardware.
The /HOST namespace is used for monitoring and managing the host operating system.

A syntax diagram for the ILOM CLI is shown on your screen. An ILOM CLI command is made up of the following components:
verb - The CLI supports a predefined list of verbs.

options - Not all options are supported for all commands. See a specific command's section for the options that are valid with that command.
target - Every object in your namespace is a target. Not all targets are supported for all commands.
properties - Properties are the configurable attributes specific to each object. An object can have one or more properties.

Note: For information on using the CLI, see the Integrated Lights Out Manager (ILOM) Administration Guide.

Use the set command to change properties and values for network settings. Network settings have two sets of properties:
pending - the updated settings, not currently in use
active - read-only settings, currently in use by the ILOM

To change settings, first enter the updated settings as the pending settings. Then, set the commitpending property to true.

To display network settings, type the command:
show /SP/network
If you are already in the /SP/network directory, type:
show
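Putting the pending/active model together: as a sketch, you can also supply the full target path to set and show rather than changing into /SP/network first (IP values here are the sample values used later in this module, not real addresses):

```
-> show /SP/network
-> set /SP/network pendingipaddress=129.144.82.26
-> set /SP/network commitpending=true
```

Until commitpending is set to true, the pending properties have no effect; the active (read-only) properties continue to reflect the settings currently in use.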

Follow these steps to assign a static IP address to the network management port:

Step 1. At the ILOM prompt, type the following command to set the working directory.

-> cd /SP/network

Step 2. Type the following commands to specify a static Ethernet configuration.
-> set pendingipaddress=129.144.82.26
-> set pendingipnetmask=255.255.255.0
-> set pendingipgateway=129.144.82.254
-> set pendingipdiscovery=static
-> set commitpending=true

Note: The network values shown are samples only. You must specify the IP address, netmask, and gateway appropriate for your network configuration.

Ensure that the same IP address is always assigned to an ILOM by either assigning a static IP address to your ILOM after initial setup, or configuring your DHCP server to always assign the same IP address to an ILOM. This enables the ILOM to be easily located on the network.

When using the network management port, the wire speed is set to 10/100 megabit, full duplex. Connection through this Ethernet port is allowed only after you have configured the service processor, using the serial port, with a valid IP address on your network. The service processor accepts ssh only through its Ethernet port.

The following interfaces can be used as ILOM management interfaces:
CLI using ssh, which meets the DMTF SMASH industry standard
SNMP v1, v2c, v3
Web browser

IPMI 2.0

In general, feature compatibility exists between all user interfaces. In addition, the CLI is backward compatible with ALOM, the Advanced Lights Out Management firmware application used on previous versions of Sun's service processors.

The first time the service processor is initialized, the following default conditions exist:
The network is enabled
DHCP is enabled
The ssh service is enabled
The shell for the root user is the DMTF CLI

As previously mentioned, there is an ALOM backward compatibility shell to

accommodate users who are more comfortable with an ALOM interface. Some differences in the ALOM compatibility shell on the Sun SPARC Enterprise T5120/T5220 service processor include:
ssh access only - no telnet
tftp only - no ftp. A tftp server must be configured to support commands such as flashupdate, FlashupdateFPGA, and frucapture, as these commands use tftp as an underlying protocol.
User names and passwords

Passwords have to be 8 to 16 characters (versus 8 characters)

Maximum of 10 users allowed (compared to 16)
Two user roles (compared to 4 in ALOM):
CUAR (Administrator)
No bits set (Operator)
Two CLI modes: default (ILOM) and alom
A new parameter, netsc_commit. Using this, network settings take immediate effect, and there is no need to reboot the SP.
And, finally, setdefaults -a. The -a stands for all. This resets all configuration parameters to defaults, including all usernames and passwords, which means that the only user that is left is the root user.

To create a new user with the ALOM shell as their interface, follow the procedure on your screen. This procedure creates the user admin to emulate the ALOM administrator.

Step 1. Log in as the root user.
Step 2. Create the admin user.
-> create /SP/users/admin
Step 3. Set the role and CLI mode.
-> set /SP/users/admin role=Administrator

-> set /SP/users/admin cli_mode=alom

Step 4. Log out and log back in as the admin user. In subsequent logins as admin, you'll get the ALOM shell.

Over the course of a product's life cycle, new versions of firmware are released. To verify which version of firmware your system is running, issue the version command. In the command output, you are looking for the ILOM version, the SC firmware version, and the OBP version.

If you are considering updating the ILOM firmware, be aware of the following:
It is likely that the firmware images available to download from the SunSolve database are more current than the image installed on your service processor at the factory.
The BIOS and the SP firmware are updated simultaneously. A single firmware image contains both the BIOS and the SP firmware.
A firmware upgrade will cause the server and ILOM to be reset. It is recommended that you perform a clean shutdown of the server prior to the upgrade procedure.
An upgrade takes about five minutes to complete.
ILOM will enter a special mode to load new firmware. No other tasks can be performed in ILOM until the firmware upgrade is complete and ILOM is reset.

Note: Ensure that you have reliable power before upgrading your firmware. If power to the system fails (for example, if the wall socket power fails or the system is unplugged) during the firmware update procedure, the ILOM could be left in an unbootable state.

To update the ILOM firmware using the CLI:
Step 1. Log in to the service processor as root.
Step 2. Update the ILOM firmware (and consequently the BIOS) as follows:
-> load -source tftp://servername:port/path/to/image

Note: This functionality also exists in the ILOM Web GUI.

From the service processor, you can power on the server by typing:
start /SYS
You can use the stop command to perform an orderly shutdown of the server followed by a power off of the server, as shown on your screen. You can also skip the orderly shutdown and force an immediate power off with the -force option, as shown on your screen.

Output to POST, OBP, and the Solaris OS is displayed to the system console, which is accessible through the service processor. To initiate a connection to the server console, execute the start /SP/console command from the service processor prompt. To terminate a connection to the server console, execute the stop /SP/console command from the service processor prompt.
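The stop commands referenced on the slide are not reproduced in this text. As a sketch, assembled from the commands named in this module, a typical power-control session from the ILOM prompt looks like this:

```
-> start /SYS            power on the server
-> stop /SYS             orderly shutdown, then power off
-> stop -force /SYS      skip the orderly shutdown; power off immediately
-> start /SP/console     connect to the system console
-> stop /SP/console      terminate the console connection
```

The stop /SYS form gives the host operating system a chance to shut down cleanly; reserve -force for a hung host, since it powers off regardless of host state.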

Working in a data center with thousands of servers in racks can sometimes pose a problem when you are trying to locate one. The Locate LED is a white LED that you can light to help you find your server in a crowded equipment room. The Locate LED has two states, fast blink and off.

To turn on the Locate LED, type:
-> set /SYS/LOCATE value=Fast_Blink
To turn off the Locate LED, type:
-> set /SYS/LOCATE value=Off

The system event log accumulates various events, including administration changes to the ILOM, software events, warnings, alerts, and events from the IPMI log.

Note that the ILOM tags all events or actions with LocalTime=GMT (or UTC). Browser clients show these events in local time. This can cause apparent discrepancies in the event log: when an event occurs on the ILOM, the event log shows it in UTC, but a client shows it in local time.

To view and clear the system event logs, perform the following steps:

Step 1. Navigate to /SP/logs/event.
-> cd /SP/logs/event
Step 2. From the CLI, enter the show list command:
-> show list
The event log scrolls onto your screen.
Step 3. To scroll down, press any key except q.
Step 4. To stop displaying the log, press q.
Step 5. To clear the system event log, use the command:
-> set clear=true
Step 6. The CLI asks you to confirm. Type y. The CLI clears the system event log.

Note: The system event log accumulates many types of events, including copies of entries that IPMI posts to the IPMI log. Clearing the system event log clears all entries, including the copies of the IPMI log entries. However, clearing the system event log does NOT clear the actual IPMI log.

You must use IPMI commands to view and clear the IPMI log.

The service processor also allows you to create additional users. The tasks that can be performed by a user are determined by the privileges that you assign to that user's account. You can have up to a maximum of ten user accounts, including root. Each user account consists of a user name, a password, and a role.

As the root user, you can add, delete, and list users on the service processor. To add a user, execute the create command, providing the following information:
username
password
role - either Administrator or Operator
You can remove users from the service processor using the delete command, as shown on your screen.
To display information about all local user accounts, type show /SP/users.
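The delete command referenced on the slide is not reproduced in this text. As a sketch based on the create/delete/show verbs used in this module (the user name user1 is a hypothetical example, and create may prompt you for the new user's password):

```
-> create /SP/users/user1 role=Operator
-> show /SP/users
-> delete /SP/users/user1
```

Running show /SP/users before and after is a quick way to confirm that the account was created or removed.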

As the root user, you can use the set command to change passwords and roles for existing user accounts. For example, to change the role for user1 from Administrator to Operator, type:
-> set /SP/users/user1 role=Operator
To change user1's password, type:
-> set /SP/users/user1 password
You are then prompted for the new password.

To log in to the Sun ILOM Web GUI, follow these steps:
Step 1. Using secure HTTP, type the IP address of the ILOM service processor into your web browser, as the example on your screen shows.

The Java(TM) Web Console login screen is displayed.


The Sun SPARC Enterprise T5120/T5220 server has a virtual keyswitch with four different modes.

The first mode is normal, in which the service processor uses the diagnostic settings that you specified with the set command to determine how POST is executed.
The second mode is diag, in which the vBSC sets the diagnostic level to the interactive menus.
The third mode is stby. In this mode, the service processor prevents you from powering up the server.
And in the fourth mode, locked, the service processor does not allow you to send a reset to the server.

To set the virtual keyswitch, execute the set command with /SYS/keyswitch_state as the target and provide a mode as a value for the target. To view the current keyswitch setting, execute the show /SYS/keyswitch_state command.

You can power on the system by executing the start /SYS command at the service processor prompt, or by pressing the power button located in the bottom left corner on the front of the machine.
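As a sketch of the keyswitch commands described above (the exact capitalization of the mode values is an assumption; check show output on your system for the accepted values):

```
-> show /SYS keyswitch_state
-> set /SYS keyswitch_state=Stby      prevent the server from being powered up
-> set /SYS keyswitch_state=Normal    return to normal diagnostic behavior
```

Remember that while the keyswitch is in the locked mode, ILOM rejects reset and break requests to the server.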

Output from POST, OBP, and the Solaris OS is displayed to the system console, which is accessible through the service processor. To acquire the console, execute the start /SP/console command from the service processor.

When you issue the command, the system prompts you: Are you sure you want to start /SP/console (y/n)? You can suppress this prompt by using the -script option.

Although multiple users can connect to the system console from ILOM, only one user at a time has write access to the console. This is referred to as a write-locked session. Any characters that other users type are ignored. These are referred to as read-only mode sessions, where users can only view the console. If no other users have access to the system console, the user entering the console session first obtains the write lock automatically.

To see if there are other users connected to the service processor, and whether they are connected to the console, execute the show /SP/sessions command.

Note: Terminate a console session by typing #. (pound, period).

If the Solaris OS is running, you can use the stop /SYS command from the ILOM shell to issue a graceful shutdown to Solaris. It is similar to one of the Solaris OS commands, such as shutdown, init, or uadmin.

It can take up to 65 seconds for the poweroff command to completely shut down the system. This is because ILOM attempts to wait for a graceful shutdown to complete before the server is powered off. You can also force the server to power down by executing the -> stop -force /SYS command or by pressing and holding the power button on the front of the server. This does an immediate shutdown regardless of the state of the host.

From the service processor console, you can issue the command -> set /HOST send_break_action=break to bring the server down to the Open Boot PROM prompt, or ok prompt. It is the equivalent of executing an L1-A or Stop-A on a system with a keyboard attached. For the system to accept a break, the virtual keyswitch must not be in the locked position. If it is in the locked position, ILOM returns an error message.

To reset the service processor or the server, execute the reset command from the service processor. To reset the service processor, type reset /SP. If you reset /SYS, the server reboots using the Open Boot PROM settings that you have configured. The reset command does not perform a graceful shutdown of the operating system. You will be prompted to confirm a reset operation.

The -script option instructs ILOM to proceed without prompting the confirmationquestion.
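Sketching the two forms side by side (the wording of the confirmation prompt is approximate):

```
-> reset /SP
Are you sure you want to reset /SP (y/n)? y

-> reset -script /SP     proceeds without asking for confirmation
```

The -script form is useful when driving ILOM from automation, where an interactive y/n prompt would hang the session.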

Whenever a device is physically added or removed from a Sun SPARC Enterprise

T5120/T5220 server, structures that describe this device to the hardware and the operating system must be created or removed. The process of creating and removing these structures is the foundation of device configuration.

In the Sun SPARC Enterprise T5120/T5220 servers, device configuration is initiated through the Hypervisor layer. The Hypervisor then passes this structure to the Open Boot PROM and then ultimately up to the Solaris OS. The Solaris OS maintains its knowledge of available devices using a set of hierarchically organized device files. These files are located in the /devices directory of the root file system. In addition, the Solaris OS uses the path_to_inst file in the /etc directory to manage device instance names. It uses links in the /dev directory to enable logical device addressing.
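As a sketch of how the three locations described above fit together on a running Solaris system (the listings you see depend on your hardware):

```
# ls /devices            physical device nodes, one per hardware device path
# cat /etc/path_to_inst  maps physical device names to driver instance numbers
# ls -l /dev/dsk         logical names; symbolic links back into /devices
```

Applications use the stable logical names under /dev, while the kernel tracks the physical hierarchy under /devices and keeps instance numbers persistent through /etc/path_to_inst.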

The Sun SPARC Enterprise T5120/T5220 servers run OBP 4.x. The capabilities of the Open Boot PROM in these servers have decreased because much of its functionality has been moved to the Hypervisor layer. Its key functions are to let you boot the operating system, modify system startup parameters, load and execute programs, and get help in troubleshooting.

The Solaris OS comes pre-installed on the Sun SPARC Enterprise T5120/T5220 servers on the disk in slot 0. The operating system is not configured; that is, the sys-unconfig command was run after the OS was installed. When you boot the system for the first time from the disk, you are prompted to configure it for your environment. At the ok prompt, boot from the disk that contains the Solaris operating system. You might want to configure an alias for this disk as your boot disk.
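One way to create a boot-disk alias at the ok prompt. The device path below is a placeholder; substitute the path reported by show-disks on your system:

```
ok show-disks                  \ list the disk device paths the Open Boot PROM sees
ok nvalias bootdisk /pci@0/pci@0/pci@2/scsi@0/disk@0,0
ok setenv boot-device bootdisk
ok boot bootdisk
```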

There are some instances when you might want to reinstall or upgrade the Solaris OS on your server. The Sun SPARC Enterprise T5120/T5220 servers support a minimum OS release of the Solaris 10 Update 4 OS. It has an architecture type of sun4v, to coincide with the ILOM firmware running on the service processor. The v represents the virtualization of the hardware to the Open Boot PROM and the Solaris OS, which is performed by the Hypervisor layer. You can install or upgrade the operating system through traditional JumpStart procedures or by using the DVD-ROM.

Next, we're going to take a look at some standard Solaris OS commands to see what is reflected differently in their output.

The first is the date command. The system TOD (time of day) is managed by the service processor and is provided to the Solaris OS by the Hypervisor layer. You must set the date on the service processor using the set /SP/clock/datetime command. If you set the date using the date command within the Solaris OS, it holds only until the next POST, when the service processor passes its date over to the Solaris OS once again.

The second command is prtdiag. The prtdiag command displays system configuration and diagnostic information. You'll notice in the prtdiag output that the Solaris OS sees the UltraSPARC T2 processor as 64 CPUs: it is an 8-core processor with 8 threads running per core. You'll also see that memory specifics must be obtained from the service processor; the prtdiag output shows only the total memory size.

The next command is psrinfo. The psrinfo command reflects the same changes from a CPU perspective as the prtdiag command. You will see that the Solaris OS is presented with 64 processors.

And finally, we'll take a look at the output of ifconfig -a. This output represents the four gigabit Ethernet interfaces as ipge interfaces, ipge0 through ipge3.

Multipathing software lets you define and control redundant physical paths to I/O devices, such as storage arrays and network interfaces. If the active path to a device becomes unavailable, the software can automatically switch to an alternative path and maintain its availability. This is known as automatic failover.
To take advantage of multipathing capabilities, you must first configure the server with redundant hardware (for example, multiple network interfaces going to the same subnet, or two controllers attached to the same storage array) and then configure the software to make use of it.

For the Sun SPARC Enterprise T5120/T5220 servers, three different types of multipathing software are available.

The first is the Solaris OS IPMP (IP Multipathing), which provides multipathing and load-balancing capabilities for IP network interfaces.

Next, we have the Sun StorEdge Traffic Manager Software (STMS), which enables I/O devices to be accessed through multiple host controller interfaces from a single instance of the I/O device. This software is fully integrated into the Solaris operating system.

And finally, we have Veritas Volume Manager (VxVM), which includes Dynamic Multipathing (DMP). This software provides disk multipathing as well as disk load balancing to optimize I/O throughput.

Within the Sun SPARC Enterprise T5120/T5220 servers, the SAS controller supports hardware mirroring and striping using the Solaris OS raidctl utility.

A hardware RAID volume created using the raidctl utility behaves differently than one created using software RAID. When volumes are created using hardware RAID, only one device appears in the device tree. Member disk devices are invisible to the operating system and are accessed only by the SAS controller.

Executing the raidctl command with no arguments tells you whether there are any RAID volumes found. To create a RAID volume, execute the raidctl -c primary_drive secondary_drive command. The secondary drive then disappears from the device tree. To set up a striped volume, execute the raidctl -c -r 0 primary_drive secondary_drive tertiary_drive (and so on) command. In this case, all but the primary drive disappear from the Solaris OS device tree. To delete the hardware RAID volume, execute the raidctl -d mirrored_volume command.
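An illustrative raidctl session. The c0t0d0, c0t1d0, and c0t2d0 device names are placeholders, and confirmation text varies by Solaris release:

```
# raidctl                                # with no arguments, reports existing volumes
No RAID volumes found.
# raidctl -c c0t0d0 c0t1d0               # create a mirror; c0t1d0 disappears from the device tree
# raidctl -c -r 0 c0t0d0 c0t1d0 c0t2d0   # create a three-disk stripe instead
# raidctl -d c0t0d0                      # delete the hardware RAID volume
```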

To perform a disk replacement operation through dynamic reconfiguration, first verify which hard drive corresponds to the physical device you want to remove, and ensure that no applications or processes are accessing the hard drive. To view the state of the SCSI devices, execute the cfgadm -al command. To remove a hard drive from the device tree, execute the cfgadm -c unconfigure Ap_Id command, where Ap_Id is the attachment point identifier represented in the left-most column of the cfgadm -al output. When this operation completes, the blue OK-to-Remove LED lights on the drive. It is now safe to remove and replace the disk. After installing the new drive, execute the cfgadm -c configure Ap_Id command to configure the drive back into the Solaris OS. The green activity LED flashes as the new disk is added to the device tree.

LEDs are placed throughout the Sun SPARC Enterprise T5120/T5220 servers to help pinpoint problem components in the server, as well as to give a visual indicator of the overall server status. LEDs are found on the following chassis locations and server components:
The front and rear panels
The system fan modules
The power supplies
The disk drives
The DVD drive

Service indicators are categorized as either system status LEDs or component status LEDs. On the Sun SPARC Enterprise T5120/T5220 servers' front panel, system status LEDs are on the left, and component status LEDs are on the right. Looking at the system status LEDs on the front panel of the server, starting at the top and moving down, the system status LEDs are the:

White-colored Locator indicator,

the Amber-colored Service Required indicator, and the
Green-colored running or Power OK indicator.

Click the indicator name on your screen for additional information about the function of that indicator.

The system status LEDs on the front panel of the server are replicated on the rear of the chassis, as shown on your screen. Starting on the left and moving to the right, they are:
a White-colored Locator indicator,
an Amber-colored Service Required indicator, and
a Green-colored running or Power OK indicator.

Each power supply has three LEDs. From top to bottom, they are:
a Green-colored PSU OK indicator,
an Amber-colored PSU fault indicator, and
a Green-colored AC PSU power indicator.

The gigabit Ethernet ports each have two LEDs to show their current status. They are the:
Green-colored Link/Activity indicator, and the
Amber-colored Speed indicator.

Click the indicator name on your screen for additional information about the function of that indicator.

The Sun SPARC Enterprise T5120/T5220 status indicators conform to the American National Standards Institute (ANSI) Status Indicator Standard (SIS). The table on your screen describes the SIS standard LED behaviors and their meanings.

You can also obtain LED status information through the service processor. To view LED status information in the ALOM CLI, use the showenvironment command. In the ILOM CLI, use the show /SYS/<component>/<property> command. Click as indicated on your screen to view a demonstration of these commands.

Each FBDIMM slot on the motherboard has an associated fault LED that identifies a faulty FBDIMM diagnosed by POST or FMA. When power is removed, the FBDIMM fault LEDs are lit by pressing a fault reminder button located on the motherboard. To identify faulty FBDIMMs, follow this procedure:
Step 1. Unplug all power cords.
Step 2. Press the FBDIMM fault button. The FBDIMM fault button is located on the motherboard near the FBDIMMs, as shown.
Step 3. Note the location of faulty FBDIMMs. Faulty FBDIMMs are identified by a corresponding amber LED on the motherboard.
Step 4. Ensure that all FBDIMMs are seated correctly in their slots. If re-seating the FBDIMM does not fix the problem, remove and replace the faulty FBDIMM.
Note: The FBDIMM fault LEDs can be lit for only a minute or so with the fault reminder button.
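A sketch of the two LED status queries. The /SYS/SERVICE target is just one example of a component path, and the output shown is illustrative:

```
sc> showenvironment            # ALOM CLI: environmental status, including LED states

-> show /SYS/SERVICE           # ILOM CLI: state of the Service Required indicator
 /SYS/SERVICE
    Properties:
        value = Off
```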

Firmware diagnostic tests are executed on both the Sun SPARC Enterprise T5120/T5220 server host and on its service processor. The purpose of these tests is to verify the core functionality of the service processor and the host. The output displayed during the resetting of the service processor is an excellent source of diagnostic information. Click as indicated on your screen to view a demonstration of the power reset sequence of the service processor.

When the service processor has finished booting and the ILOM firmware has loaded, you can log in to the service processor and power on the host server. You can do this by using the poweron command in the ALOM CLI, by executing start /SYS in the ILOM CLI, or by pressing the Power button on the front panel of the chassis.

When power is applied to the host server, the vBSC, which is responsible for calling POST and collecting POST status on completion, is initialized. Hosted on the service processor, vBSC can be thought of as an extension of the Hypervisor. The functionality that OBP once provided has been moved to the service processor. This eliminates the need to tie OBP-specific settings to the entire server.

When the server powers on, the following actions take place:
vBSC is initialized from the service processor.
POST is called from vBSC to perform a sanity check of the server, as well as testing of the components based on the settings passed to the host server by vBSC.
The ASR database is updated based on the diagnostic tests performed.
And the Hypervisor is started for the host server.

While POST is called automatically, its output is not displayed by default. Likewise, higher levels of POST testing are not called by default. The server must be configured for these actions to happen.

The ILOM variables that affect POST are diag_trigger, diag_verbosity, diag_level, and diag_mode. You can modify them with the set command in ILOM or the setsc command in ALOM, and view them with their current settings by executing the show command in ILOM or the showsc command in ALOM.

The service processor's setkeyswitch command also affects the behavior of POST execution. If the keyswitch is set to the DIAG position, then vBSC sets the diagnostic levels to the interactive menus. If the keyswitch is set to the NORMAL position, then the service processor variables are used to determine if and how POST is executed. Click as indicated on your screen to view a table of these ILOM variables and their possible settings.

OBP tests are executed once POST has completed and the Hypervisor has been loaded. OBP tests are affected by the diag-switch? OBP variable. The value of diag-switch? affects the verbosity of OBP. If diag-switch? is set to true, the output from the OBP initialization and probing is sent to the console. If diag-switch? is set to false, no initialization or probing output is displayed.

In addition to normal POST and OBP output, there are several commands and interfaces that you can use to analyze and test the system hardware to assist in troubleshooting. In the remainder of this module, we will look at analyzing POST output, running POST menus, analyzing OBP diagnostic output, analyzing POST error messages, managing the ASR database, and viewing logs, console messages, and faults.

There are two levels of POST tests that are executed on the host server. These are the integrity POST and, if configured, EPOST.

Integrity POST executes the following tests on the server each time the system is powered on:
Register tests on the window registers and the Niagara CPU scratchpad
Floating-point unit access, to check the path from all threads
L2 access, to check the path from all threads
A quick FPGA check by the master thread (the first thread to jump to POST) to check the integrity of the SRAM
All threads returned from testing

EPOST executes tests for the remainder of the server based on the diag_level setting. When executed with diag_level set to min, EPOST:
Initializes registers and global variables.
Runs the basic memory tests, in which all memory cells are touched with unique patterns and checked with hardware ECC.
Tests I2C operation and clock frequency, for the master thread only.
Performs a basic test on the JBUS-to-PCIe bridge, for the master thread only.

When executed with diag_level set to max, EPOST:
Performs all the steps of the minimum diagnostic mode.
Runs memory tests on any arrays not covered by BIST, such as the L1 tags and the internal register arrays.
Tests the functional operation of the L1 and L2 caches, the instruction and data memory management units, and interrupts for all of the hardware strands.
Performs an extended memory test in addition to hardware ECC checks.

Tests the functionality of the JBUS-to-PCIe bridge for the master hardware thread, specifically direct memory access and interrupt testing.

The POST interactive menu mode is called when two conditions are met. The first is that the system's virtual keyswitch is set to DIAG mode, and the second is that the service processor diag_mode variable is set to menu. You can verify that these conditions are met with the output of the showkeyswitch and showsc commands. The POST menu mode provides access to all tests available within POST, including:
Built-in self tests
Tests performed when diag_level is set to min
Tests performed when diag_level is set to max
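For example, the keyswitch position and the POST variables can be checked and changed from the ALOM CLI like this. The values shown are examples taken from the settings discussed above, and output is abbreviated:

```
sc> showkeyswitch                 # verify the virtual keyswitch position
sc> setsc diag_level max          # request maximum POST testing
sc> setsc diag_verbosity normal   # display POST progress on the console
sc> showsc diag_level             # confirm the current setting
```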

When POST has completed, OBP initializes and works with the Hypervisor to build the device tree structure. Even though the Open Boot PROM does not have as much functionality as it previously did, you can still gather some information from it to help you troubleshoot. From the output of the banner command, you can see how much memory has passed POST. From the output of show-devs, show-disks, and show-nets, you can see the device tree that has been built, along with the disks and network interfaces that the Open Boot PROM sees.

POST displays a great deal of information on errors and warnings to assist you in troubleshooting the cause of a problem. The level of information given is affected by the value of the service processor diag_verbosity variable. Errors follow the standard format that identifies:
The test that was executing
The hardware that was being tested at the time of the failure

The suggested repair instructions for which component to replace

And the error message generated by the fault.

Automatic System Recovery, or ASR, lets you manually manage blacklisted items from the service processor, and also lets the service processor manage the blacklisted items through the vBSC. An ASR database is maintained with a list of any blacklisted items. When POST completes, it returns the status of the components tested to the vBSC. If any of the components are reported back as failed, the service processor attempts to unconfigure them and map them out of the server. An event is then logged to the service processor and to the console regarding this issue.

ILOM provides commands to manage the ASR database. The command that you use is CLI dependent. In the ILOM CLI, you can use the show /SYS command to query the component_state of a specific component. You use the set command to disable and enable a specific component. You also use the set /SYS command to clear the ASR database for a specific component.
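In the ILOM CLI, an ASR blacklist query and update might look like the following. The /SYS/MB/CMP0/P0 component path is a hypothetical example:

```
-> show /SYS/MB/CMP0/P0 component_state           # query the component state
-> set /SYS/MB/CMP0/P0 component_state=Disabled   # blacklist the component
-> set /SYS/MB/CMP0/P0 component_state=Enabled    # clear it from the ASR database
```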

Both the ILOM and ALOM CLIs provide commands for viewing console messages, system messages, and system faults. The types of messages include errors, faults, notices, and general system information.

In the ILOM CLI, execute the show /SP/logs/event/list command to display the messages, notices, and events that have been sent to the service processor's event log. The equivalent ALOM CLI command is the showlogs command. The ALOM showfaults command displays current valid system faults. The ALOM CLI also provides a command called consolehistory that shows the contents of the boot log and the run log. The boot log contains the messages from POST, OBP, and the booting of Solaris. The run log contains everything in the boot log plus the Solaris runtime messages. The ILOM CLI does not provide the console history functionality. Click the links on your screen to view a demonstration of these commands.

Several tools and features are available on the Sun SPARC Enterprise T5120/T5220 servers to help administrators monitor, troubleshoot, and diagnose issues. In this section we will discuss the following tools:
Software-based commands
Applications
Log files

You can monitor and analyze the server status using a combination of these tools. Click as indicated on your screen to view a block diagram showing the diagnostic components on both the host server and the service processor.

The Sun SPARC Enterprise T5120/T5220 servers implement the fault management architecture introduced in the Solaris 10 OS. Incorporated into both the hardware and software of the Sun SPARC Enterprise T5120/T5220 servers, the FMA helps the server maintain a greater uptime rate by:
Automatically and silently diagnosing underlying problems
Using predictive self-healing
Disabling faulty components, if necessary and possible
Issuing alerts on problems and logging events
Providing data to higher-level management services and, in the future, remote services

Be sure to have your server fully patched for the latest FMA agents.

The fault management architecture incorporates the following components to achieve its goals:

Error report creation, where the Hypervisor, vBSC, and hardened device drivers each create reports that are ultimately handed off to the Solaris OS fault management daemon.

The Solaris OS fault management daemon, fmd, which is responsible for forwarding the reports generated by the vBSC, the Hypervisor, and the hardened system device drivers to a diagnosis engine.

The diagnosis engine, which contains profiles with fault trees and rules for the devices on the server. These rules determine how a fault on a device should be handled and who should handle it.

Fault response agents, which help the server and the system administrator manage the hardware or software fault.

Click as indicated on your screen to view the fault management architecture block diagram for the Sun SPARC Enterprise T5120/T5220 servers.

There are several Solaris OS commands associated with FMA. These include:

The fmadm command, which lets you view, load, and unload modules. It also lets you view and update the resource cache, which is a list of faulty resources as seen by the fault management daemon, fmd.

The fmdump command, which enables system administrators to view any log files associated with fmd and retrieve the specific details of any diagnosis issued. By default, the fmdump command lists the fault log, displaying the time, the ID associated with that fault, and the message ID, SUNW-MSG-ID, which can be looked up on Sun's message lookup website.

The fmstat command, which reports the statistics of the fault management system. By default, the fmstat command lists the active modules and the statistics associated with those modules.
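A minimal sketch of these FMA commands. The -u option takes the fault UUID reported by fmdump:

```
# fmadm faulty         # list resources the fault manager believes are faulty
# fmdump               # fault log: time, fault UUID, and SUNW-MSG-ID
# fmdump -v -u <uuid>  # detailed view of a single diagnosis
# fmstat               # statistics for the active fault management modules
```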

Intermittent problems can often be difficult to diagnose. Diagnostic tools exercise components to the point where they display an emerging failing condition; these tools are designed to stress the components to the point of failure. On the Sun SPARC Enterprise T5120/T5220 servers, the Sun Validation Test Suite (SunVTS) is used to exercise the server, as well as for hardware validation and repair verification. The minimum version of SunVTS that supports the Sun SPARC Enterprise T5120/T5220 servers is SunVTS 6.4, which ships with the Solaris 10 Update 4 OS.

The following Sun SPARC Enterprise T5120/T5220 components can be diagnosed through SunVTS:
CPU
FBDIMMs
I/O
Gigabit network ports
SAS disks, controller, and cables
DVD device
Host-to-service processor interface

SunVTS was modified for the Sun SPARC Enterprise T5120/T5220 as follows:
The CPU/memory tests were updated to test the UltraSPARC T2-specific features.
cryptotest was enhanced to test the cryptographic unit on the UltraSPARC T2.
A new SunVTS test developed for the UltraSPARC T2 is Xnetlbtest, which provides testing coverage for the two 10 Gigabit ports on the network interface unit of the UltraSPARC T2 processor.

Additional complex testing in nettest and netlbtest includes the following features:
Spawns continuous Tx/Rx asynchronously to force the driver to exercise different DMA channels
Provides classification, IP fragmentation, and variable-length packets to cover jumbo frame testing
Supports back-to-back (port-to-port) loopback tests
The transmit rate can be varied by using a delay between sends
A soft error threshold allows a limited amount of packet drop in the pass/fail criteria
Provides options for different payload data patterns

You can collect Sun SPARC Enterprise T5120/T5220 server status and configuration information from several sources within the Solaris 10 OS. You can find system status, such as error and informational messages, in the log files. You can also obtain system status by executing specific Solaris OS commands. The following sections describe the utilities that you can use to collect status and configuration information on the Sun SPARC Enterprise T5120/T5220 servers.

The main system log for the Solaris 10 OS is the messages file located in the /var/adm directory. Here, you can locate system status, error, and informational messages by filtering this file. This file can grow to be large, so it is important to select key values to filter on, for example, cpu, mem, error, and so on.

You can also obtain system status and configuration data through the use of the Solaris OS utilities, such as:
prtdiag, which lists the available CPUs, the I/O configuration, and the PROM and ASIC versions
iostat, which displays information on each I/O device, including I/O errors

prtconf, which displays the system device drivers

prtpicl, which displays platform-specific information stored in the platform information and control library (PICL)
psrinfo, which displays which CPUs are available and their status
raidctl, which tells us whether any RAID sets are configured and, if so, what their members are
and the Sun Explorer Data Collector

The Sun Explorer Data Collector is a utility, made up of shell scripts and some binaries, that automates the collection of system configuration data from Sun Solaris servers. It collects a summary of the installed software, firmware, and storage subsystem components and saves it in a compressed tar format. This tool is accessible from SunSolve, located at http://sunsolve.sun.com. This web site provides more information on the utility, any patches that are needed for the Sun SPARC Enterprise T5120/T5220 servers, and the link to the software download website.
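The status-gathering utilities listed above might be combined as follows. The grep keys are just examples of values to filter on:

```
# grep -i error /var/adm/messages   # filter the main system log
# prtdiag -v                        # CPUs, I/O configuration, PROM and ASIC versions
# iostat -En                        # per-device details, including error counts
# prtconf                           # system configuration and device drivers
# prtpicl                           # platform information from the PICL
# psrinfo -v                        # available CPUs and their status
# raidctl                           # configured RAID sets and their members
```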

Another source of system status and configuration information is a system core dump. The Solaris 10 server is enabled by default to save a core dump when one occurs. To verify that core dumps are enabled, run the dumpadm command. If core dumps are enabled, you also need to verify that you have enough swap space and file system space for the core dump to be stored. You can do this with the swap -l and df -k commands, respectively. To test the save dump utility, perform a graceful shutdown of the Solaris OS. From the OBP prompt, perform a sync followed by a reset-all. Watch for the savecore messages during boot, and then verify that the savecore files are in the savecore directory.
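A sketch of the dump-configuration checks just described. The dump device and savecore directory reported will differ per system:

```
# dumpadm             # verify that dumps are enabled and where savecore writes them
# swap -l             # confirm there is enough swap space to hold the dump
# df -k /var/crash    # confirm file system space in the savecore directory
```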

The Sun SPARC Enterprise T5140 server, code name Maramba 1U, is a 1U rack-mountable server. The Sun SPARC Enterprise T5240 server, code name Maramba 2U, is a 2U rack-mountable server. It should be noted that both servers share the same motherboard.

The Sun SPARC Enterprise T5140/T5240 servers are the first servers to expand chip multithreading (CMT) to include multi-processor systems. They are also the first servers to implement the UltraSPARC T2 Plus processor.

Each CPU has 4, 6, or 8 cores, for up to 128 threads.

These servers have twice the threads of the Sun SPARC Enterprise T5120/T5220 servers in the same footprint.

Note that not every Sun SPARC Enterprise T5240 server will have a mezzanineassembly. It will only be shipped with the server when the additional memory isordered by the customer.

The Sun SPARC Enterprise T5140/T5240 servers support DIMM sizes of 1 Gbyte, 2 Gbytes, or 4 Gbytes. A maximum of 128 Gbytes of main memory is supported at revenue release. The availability of 8-Gbyte DIMMs will be determined after revenue release.

Supported memory configurations in the Sun SPARC Enterprise T5140 server include:
8 FBDIMMs
12 FBDIMMs, and
16 FBDIMMs.

Supported memory configurations in the Sun SPARC Enterprise T5240 server include:
8 FBDIMMs
12 FBDIMMs
16 FBDIMMs
24 FBDIMMs, and
32 FBDIMMs.

Configuration rules for memory include:
The minimum number of FBDIMMs is eight.
Memory is added in sets containing two FBDIMMs.
Individual FBDIMMs can be replaced.
All FBDIMMs must be the same size.
Mixing of FBDIMMs from different vendors is allowed.
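As a quick arithmetic check of the capacities above (the T5140 figure is derived here from its 16 FBDIMM slots):

```shell
# T5240 maximum: 32 FBDIMM slots populated with 4-Gbyte FBDIMMs
max_t5240=$((32 * 4))
echo "T5240 maximum memory: ${max_t5240} Gbytes"   # 128 Gbytes

# T5140 maximum: 16 FBDIMM slots x 4 Gbytes
max_t5140=$((16 * 4))
echo "T5140 maximum memory: ${max_t5140} Gbytes"   # 64 Gbytes
```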

Each unpopulated FBDIMM slot in the Sun SPARC Enterprise T5140/T5240 servers must contain a DIMM filler. The use of DIMM fillers ensures uniform airflow throughout the server. They also reduce airflow bypass and consequently lower fan speeds, which in turn improves noise emissions and increases the fan modules' mean time between failure.

System I/O is built on the PCIe bus. In the Sun SPARC Enterprise T5140 server, the PCIe bus supports three standard half-length/half-height PCIe expansion slots on three riser boards. In the Sun SPARC Enterprise T5240 server, six standard half-length/half-height PCIe expansion slots on three riser boards are supported on the PCIe bus. In addition to PCI Express expansion cards, two of the expansion slots can alternately accept Sun proprietary XAUI-based cards to support 10-Gbps Ethernet networking. Note that XAUI cards are only supported in slots 0 and 1.

Internal storage in the Sun SPARC Enterprise T5140 server is handled by up to eight small form factor (SFF), 2.5-inch, internal SAS hard disk drives. Both the Sun SPARC Enterprise T5140 and T5240 servers currently support 73- or 146-Gbyte disks at 10,000 revolutions per minute (RPM). The disks are attached to a SAS and SATA disk backplane and managed through an LSI 1068 SAS/SATA disk controller. The Sun SPARC Enterprise T5240 server supports up to sixteen small form factor SAS disk drives.

The system interfaces offered by the Sun SPARC Enterprise T5140 and T5240 servers are as follows:
Four 10/100/1000Base-T Ethernet ports.
One RS-232 serial port.
Four external Universal Serial Bus 2.0 ports: two located in the front and two located in the rear.

Independent host operation, which means that service processor malfunctions will not cause the server to stop functioning once the server is powered on.
Indicator light-emitting diodes, or LEDs, on the front and back of the chassis, which allow problems to be detected and easily isolated.
And available software-based RAID levels 0, 1, 1E, and 10, which can enhance disk availability and performance.

The Sun SPARC Enterprise T5140/T5240 servers have the following circuit boards installed in the chassis:
A motherboard.
One power distribution board.
One power supply backplane.
One paddle board.
One disk backplane.
One USB board.
Two fan boards.
Three PCIe riser cards.
And a memory board assembly.

The motherboard is actually an assembly, made up of the motherboard itself and a tray, or carrier. The motherboard assembly comes in several different versions, with the only differences being processor speed and the number of cores. The motherboard includes two UltraSPARC T2 Plus processors, slots for 16 DIMMs, the memory control subsystems, and all of the system controller, or ILOM, logic.

In addition, a removable NVRAM contains all of the MAC addresses, the host ID, and the OpenBoot PROM configuration data. When replacing the motherboard, the NVRAM can be transferred to the new board to retain the system configuration data. The motherboard has various port connectors for USB, serial, and network.

The service processor (ILOM) subsystem contains a PowerPC extended core and a communications processor that controls the host power and monitors host system events (power and environmental). The ILOM controller draws power from the host's 3.3V standby supply rail, which is available whenever the system is receiving AC input power, even when the system is turned off.

The power distribution board distributes the main 12V power from the power supply backplane to the rest of the system. It is directly connected to the paddle board and the power supply backplane through bus bars, and to the motherboard through a ribbon cable.

The power supply backplane distributes the main 12V power from the power supplies to the power distribution board. The power supplies plug into connectors on the power supply backplane.

The paddle board is an assembly made up of the board, a metal mounting bracket, and a top cover interlock, or kill switch. The paddle board serves as the interconnect between the fan connector boards and the disk drive backplane.

The disk backplane includes the connectors for the SATA or SAS drives, as well as the interconnect for the USB board, the Power and Locator buttons, and the system component status LEDs.

The USB board connects directly to the disk backplane. It is packaged with the DVD drive as a single customer-replaceable unit (CRU).

The fan boards carry power to the system fan modules.

In addition, they contain the fan module status LEDs and transfer I2C data for the fan modules.

Both the T5140 and T5240 systems have three PCIe riser cards, which are each inserted in a slot at the rear of the motherboard. Note that the slots you see on the motherboard are not industry-standard PCIe slots. They are Sun proprietary slots that only accommodate the Sun riser cards.

The Sun SPARC Enterprise T5240 server has a mezzanine board assembly, which has sixteen additional FBDIMM slots.

Most of the electrical connectivity in the Sun SPARC Enterprise T5140/T5240 servers is accomplished through connectors on the system's infrastructure boards. The only system cables in the chassis are:
the power distribution board to motherboard ribbon cable,
the power supply backplane to power distribution board ribbon cable,
the hard drive data cables,
and the top cover interlock switch cable.

The diagram shown illustrates the overall system architecture for the Sun SPARC Enterprise T5140/T5240 servers. As previously mentioned, the Sun SPARC Enterprise T5140 server and the Sun SPARC Enterprise T5240 server use the same motherboard. The motherboard has two UltraSPARC T2 Plus CPUs. CPU0 connects to PCIe switch 0, a PLX 8548. CPU1 connects to PCIe switch 1, a PLX 8548. The Sun UltraSPARC T2 Plus processors feature integrated, on-die memory controller units, optimizing memory performance and bandwidth per CPU.

Each processor supports up to:
8 DIMM slots in the Sun SPARC Enterprise T5140
16 DIMM slots in the Sun SPARC Enterprise T5240

Below the CPUs are the two PCIe switches, PCIe switch 0 and PCIe switch 1. In terms of PCIe slots, PCIe switch 0 supports:
PCIe slot 3, which is x8 electrically (T5240 only)
PCIe slot 1, which is x8 electrically
and PCIe slot 2, which is x16 physically and x8 electrically

PCIe switch 1 supports:
PCIe slot 0, which is x8 electrically
PCIe slot 4, which is x8 electrically (T5240 only)
and PCIe slot 5, which is x8 electrically (T5240 only)

Note: The Sun SPARC Enterprise T5140 server has PCIe slots 0, 1, and 2. The Sun SPARC Enterprise T5240 server has PCIe slots 0, 1, 2, 3, 4, and 5.

Continuing with the connections to the PCIe switches, PCIe switch 0 has a connection for:
The LSI 1068E SAS/SATA disk controller.
A PCIe/PCI bridge, which in turn connects to a PCI/USB bridge, supporting the USB ports on the rear of the chassis, the USB ports on the front of the server, and the DVD drive.

Another port, off PCIe switch 1, connects to the Neptune Ethernet controller, which in turn supports:

Four gigabit Ethernet ports on the rear of the server, as well as XAUI slot 0 and slot 1.

The service processor connects to the CPUs through the serial system interface (SSI) communications buses. The service processor uses the Motorola MPC885 PowerPC chip.
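The slot topology just described can be summarized in a small lookup table. This is an illustrative sketch only, not a tool that ships with the product; the per-lane rate of 250 Mbytes/sec per direction is the PCIe 1.x figure cited earlier for the T5120/T5220 servers.

```python
# Illustrative sketch: PCIe slot topology of the T5140/T5240 as a lookup table.
# Slot data is taken from the narration above; the function names are invented.
SLOTS = {
    # slot: (switch, electrical lanes, models that have the slot)
    0: (1, 8, ("T5140", "T5240")),
    1: (0, 8, ("T5140", "T5240")),
    2: (0, 8, ("T5140", "T5240")),   # x16 physically, x8 electrically
    3: (0, 8, ("T5240",)),
    4: (1, 8, ("T5240",)),
    5: (1, 8, ("T5240",)),
}

MBYTES_PER_LANE_PER_DIR = 250  # PCIe 1.x rate, as cited for the T5120/T5220

def slots_for(model):
    """Return the slot numbers present on a given server model."""
    return sorted(s for s, (_, _, models) in SLOTS.items() if model in models)

def slot_bandwidth(slot):
    """Approximate per-direction bandwidth of a slot in Mbytes/sec."""
    _, lanes, _ = SLOTS[slot]
    return lanes * MBYTES_PER_LANE_PER_DIR

print(slots_for("T5140"))   # [0, 1, 2]
print(slot_bandwidth(2))    # 2000
```

Note how the table captures the key asymmetry: the T5140 populates only the three slots shared with the T5240.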

The service processor is the hardware portion of the lights-out-management system

implemented on the Sun SPARC Enterprise T5140/T5240 servers. Unlike previous versions of Sun service processors, the service processor hardware is not on a separate card, but is integrated on the motherboard.

The hardware components of the service processor include:

A field-programmable gate array (FPGA) device that controls aspects of system power and acts as the primary ILOM-to-host-server communications gateway
I2C devices responsible for monitoring the server's environment and FRU ID data
A Motorola MPC885 microprocessor, which contains its own instruction and data caches and a built-in memory controller
Management ports

Click the links provided to view additional information on the topics presented.

The illustration on your screen depicts the on-die components and data paths among the different components of the UltraSPARC T2 Plus processor. Within each CPU, you have eight cores communicating through the cache crossbar to the 4 MB total of 16-way set-associative L2 cache, and then through the MCU, or memory controller unit, channels.

You'll notice in the diagram that each CPU core has its own floating-point unit (FPU). Each core also has its own crypto unit.

You'll also notice that each CPU has two MCUs, compared to four in the Sun UltraSPARC T2 processor. Each MCU has two Coherency and Ordering Units, labeled CU on the diagram. The CUs are the blocks that receive all the memory and I/O requests, serialize them for global visibility, and route them to the right node's MCU based on the interleave factor.

Connecting the two CPUs are four coherence planes, labeled Coherence Plane 0 through Coherence Plane 3. These are the only communication paths between the CPUs. All the memory, cache, and I/O traffic flowing between the two CPUs uses a coherence plane.

Click as indicated on your screen to view a summary of the features and differences between the Sun SPARC Enterprise T5140 and T5240 servers.

The Sun SPARC Enterprise T5140/T5240 is available in five configurations. Each configuration is differentiated by:

CPU,
Memory,
And hard disk drive availability.

The following base components are common to all available configurations:

A chassis that is
A 1-rack-unit in the T5140,
And a 2-rack-unit in the T5240.
Two sockets for UltraSPARC T2 Plus processors.
Sixteen memory slots in the T5140,
And 32 memory slots in the T5240.
Two 146-Gbyte disk drives.

One DVD drive assembly.

For PCI-Express (PCIe) slots,
There are three available in the T5140,
And six available in the T5240 server.
Four gigabit Ethernet ports.
Four USB 2.0 ports.
An MPC885 integrated service processor.
Two power supplies.
And a standard accessory kit with documentation.

Click as indicated on your screen to view the details of each available configuration.

The Sun SPARC Enterprise T5140/T5240 servers are the latest offering in Sun's family of CoolThreads servers. CoolThreads servers are the fastest and most space- and energy-efficient systems that Sun offers. Among the CoolThreads servers are:

The Sun SPARC Enterprise T1000 (code name: Erie) and T2000 (code name: Ontario) servers. These servers were the first to introduce CoolThreads technology. They have a single UltraSPARC T1 (code name: Niagara) processor, can process up to 32 threads simultaneously, and support up to 64 Gbytes of memory.

The second generation of CoolThreads servers is represented by the Sun SPARC Enterprise T5120/T5220 (code name: Huron) servers. These servers have a single UltraSPARC T2 (code name: Niagara II) processor. The processor has integrated on-chip cryptographic acceleration and 10 Gigabit Ethernet, enabling secure computing at wire speed. It can process up to 64 threads simultaneously and supports up to 64 Gbytes of memory.

We now have the next generation of CoolThreads servers: the Sun SPARC Enterprise T5140/T5240 servers, which, as previously stated, have two UltraSPARC T2 Plus processors, can process up to 128 threads simultaneously, and support up to 128 Gbytes of memory.

The Sun SPARC Enterprise T5140/T5240 servers are targeted at IT managers looking for more cost-effective computing solutions to meet their organization's growing computing needs. They also meet the needs of customers requiring a basic but high-performance computing platform in a small footprint that provides investment protection with its market-leading price and performance. The T5140/T5240 servers will also appeal to system administrators because of their ease of system growth and manageability, and their support of multiple operating systems.
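The thread counts quoted for the three CoolThreads generations follow directly from sockets, cores per socket, and threads per core. A quick arithmetic check (illustrative only):

```python
# Illustrative check of the thread counts quoted for each CoolThreads
# generation: sockets x cores x threads-per-core.
def max_threads(sockets, cores, threads_per_core):
    return sockets * cores * threads_per_core

t2000 = max_threads(1, 8, 4)   # UltraSPARC T1: 8 cores, 4 threads each
t5220 = max_threads(1, 8, 8)   # UltraSPARC T2: 8 cores, 8 threads each
t5240 = max_threads(2, 8, 8)   # two UltraSPARC T2 Plus processors

print(t2000, t5220, t5240)     # 32 64 128
```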

LDoms are currently supported on the following UltraSPARC T1-based systems: the Sun Fire T1000, the Sun Fire T2000, the Netra T2000, the Netra CP3060 Blade, and the Sun Blade T6300 Server Module.

There are many methods for virtualizing or partitioning a system into multiple discrete operating environments. Each method has different underlying functionality and therefore different applications. Three methods of virtualization offered by Sun are Solaris Containers, Sun Fire Dynamic System Domains, and Logical Domains.

Solaris Containers can create multiple virtualized environments within one

Solaris kernel structure. This keeps the memory footprint low and provides flexibility in fine-grained resource controls, which are good for consolidating large numbers of dynamically resourced environments within a single kernel or version of Solaris.

Sun Fire Dynamic System Domains allow you to create electrically isolated domains on midrange and high-end Sun Fire systems. These domains offer the maximum security isolation and availability in a single chassis, and combine many redundant hardware features for high availability. They are great for consolidating a small number of mission-critical services with security and availability.

The features of Logical Domains sit somewhere in the middle of Solaris Containers and Sun Fire Dynamic System Domains, offering isolation between the various domains, but achieved via a firmware layer. They drastically lower the hardware infrastructure requirements. They also are great for cost-effective security and consolidation, with sun4v support for multiple operating environments.

A Logical Domain, or LDom, is a full virtual machine, with a set of resources such as a boot environment, CPU, memory, and I/O devices, and ultimately its own operating environment. It is isolated by virtue of the Hypervisor's ability to act as an intermediary between the operating environment and the hardware it virtualizes.

First released on the Sun Fire T1000 and T2000 systems, the Hypervisor, a firmware layer on the flash PROM of the motherboard, is a thin software layer with a stable interface, sun4v, between the operating system and the hardware. The Hypervisor provides a set of support functions to the operating system, so that the OS does not need to know intricate details of how to perform functions with the hardware. This allows the operating system to simply call the Hypervisor with calls to the sun4v platform. Because this stable interface does not change, the operating environment does not require updating even if a new generation of machines with, for example, faster CPUs is introduced, thereby creating a consistent programming model.

The Hypervisor layer is very thin and exists only to support the operating environment for hardware-specific details. More importantly, as the Hypervisor is the engine that abstracts the hardware, it can choose to expose or hide various aspects of the hardware to the operating environment. For example, it can expose some CPUs but not others, and some amount of memory but not all, to a specific operating environment. The Hypervisor can then create a so-called virtual machine, which also provides the OpenBoot stack.
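The idea of the Hypervisor exposing only a subset of the hardware to each operating environment can be sketched with a toy model. The class and method names below are invented for illustration; this is not the sun4v interface itself.

```python
# Toy model: a hypervisor partitions physical resources and exposes only a
# granted subset to each virtual machine. All names here are invented.
class ToyHypervisor:
    def __init__(self, total_vcpus, total_mem_mb):
        self.free_vcpus = set(range(total_vcpus))
        self.free_mem_mb = total_mem_mb
        self.machines = {}

    def create_vm(self, name, vcpus, mem_mb):
        if vcpus > len(self.free_vcpus) or mem_mb > self.free_mem_mb:
            raise ValueError("insufficient free resources")
        # Carve the grant out of the free pool; the guest sees only this.
        granted = {self.free_vcpus.pop() for _ in range(vcpus)}
        self.free_mem_mb -= mem_mb
        self.machines[name] = {"vcpus": granted, "mem_mb": mem_mb}
        return self.machines[name]

hv = ToyHypervisor(total_vcpus=32, total_mem_mb=32768)
vm = hv.create_vm("ldg1", vcpus=8, mem_mb=4096)
print(len(vm["vcpus"]), hv.free_mem_mb)  # 8 28672
```

The point of the sketch is the hiding: resources not in a machine's grant simply do not appear to that machine's operating environment.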

The slide shows the relationship between the hardware layer and the created LDoms, shown here as virtual machines. The Hypervisor sits between the hardware and the virtual machines.

There are several different types of LDoms, whose names derive from their usage. A Control Domain is an LDom that creates and manages other LDoms.

An LDom that serves up devices, virtual network switches, and SANs to other LDoms is called a Service Domain. And if you simply want to run an operating environment like Solaris, without sharing devices or controlling domains, you create a Guest Domain. You can also create combinations of domain types, for instance, combining the functions of a Control Domain and a Service Domain.

The Control Domain forms the basis for communications among the Hypervisor, the sun4v platform, and the other domains, allowing for the creation and control of LDoms.

The Control Domain contains the SUNWldm package, which consists of the LDoms Manager utility and the associated daemon processes required for LDoms. It is the first domain created during the LDoms installation procedure.

The interface to the Hypervisor is made via the command line with the LDoms Manager. This application understands the mapping between the physical and virtual devices, and interacts with the various components to sequence changes, such as the addition or removal of resources, and even the creation of an LDom. The LDoms Manager communicates these changes to proxy agents located in the supported operating environments of the Guest Domains that are undergoing changes.

Because the Control Domain can interact with and control other domains, even remove them entirely, this domain is viewed from a security perspective similarly to the system controller, which should be hardened and secured. One method for doing so is to apply the Solaris Security Toolkit.

The role of the Proxy Agent in the operating environment is to allow the communication of events from the Hypervisor, notifying the operating environment of actions such as the addition and removal of devices. An OS that supports such features can then signal back to the Hypervisor that it is ready for the action to occur.

A Service Domain is one that shares out, or virtualizes, physical devices to other logical domains on the system. The Service Domain takes ownership of a component, for example, a PCI controller, and then shares one of the interface cards found under that controller with another domain via a Logical Domain Channel. You will learn more about these devices further in this module. Since the Service Domain must support specific LDoms functions such as virtualization services, it must run the same version of the Solaris operating system as the Control Domain. Unlike the Control Domain, of which you can only have one running at any one time, you may have multiple Service Domains, currently up to a maximum of two.

A Guest Domain is a domain that does not run the control structures for LDoms, such as the daemons and processes that interact with the Hypervisor. A Guest Domain also does not share out devices to other domains. It is simply a consumer of the resources allocated to it. It must run an operating system that recognizes both the sun4v platform and the virtual devices presented by the Hypervisor. The minimum supported operating system is Solaris 10 Update 3 (11/06).
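The domain-role rules just described, exactly one Control Domain at a time and currently at most two Service Domains, can be expressed as a small validity check. This is a sketch of the rules as stated in this module, not the LDoms Manager's own logic.

```python
# Sketch of the domain-role rules stated above: one Control Domain running at
# a time, and currently at most two Service Domains. Invented helper, not the
# LDoms Manager's own code.
MAX_SERVICE_DOMAINS = 2

def validate_roles(domains):
    """domains: dict of name -> role ('control', 'service', or 'guest')."""
    roles = list(domains.values())
    if roles.count("control") != 1:
        return False
    if roles.count("service") > MAX_SERVICE_DOMAINS:
        return False
    return True

print(validate_roles({"primary": "control", "svc0": "service", "ldg1": "guest"}))  # True
```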

Logical Domains communicate via Logical Domain Channels, or LDCs. These are channels of communication by which data can be moved from one domain to another. LDCs are the mechanism by which virtual networks can be established between LDoms, and they are the conduit for services such as I/O provided to a Guest Domain.

LDCs are explicitly created. They are defined by the Control Domain and bound to the designated LDoms with specific services at each end of the channel. An LDC is a strict point-to-point link rather than the traditional networking paradigm of a port opening

upon request. This helps to make LDCs more secure, and as they are created logically within the Hypervisor, they are flexible and fast to set up.

The goal of the Service Processor, or the Advanced Lights Out Manager, known as ALOM, is to manage the hardware, so it does not know about LDoms. However, there are some small interactions between the Hypervisor and the service processor, for example, storing the LDoms configurations so that they are persistent. The Service Processor requires firmware 6.4.x.

The UltraSPARC T1-based systems support three different Logical Domain types and a total of 32 LDoms. The LDom types are the Control Domain, the Service Domain, and the Guest Domain.

The virtualized devices are any hardware resources on the system that are abstracted by the Hypervisor and presented to the LDoms on the system. They can take the form of directly virtualized devices, such as CPU and memory, and devices like I/O and network devices that are proxied from a Service Domain for use by other domains. These are physical devices that are translated to virtual devices by the Hypervisor and presented by the Service Domain to other domains. They also include console and cryptographic devices.

The OpenBoot environment forms the basis for initial program loading and execution, typically of an operating environment. It also provides other features such as diagnostics and boot-time parameters to control operation. In Logical Domains, the OpenBoot environment is virtualized and made available to multiple partitions as discrete boot environments. OpenBoot is the basis for running the operating environment in an LDom.

The OpenBoot ok prompt is the first thing you will see when connecting to the console of a newly created LDom, and a familiar sight for those familiar with Sun's

SPARC hardware. All logical domains in a system will have the same version of OpenBoot.

All CPUs presented by the Hypervisor are referred to as virtual CPUs, or VCPUs. On a Sun Fire T1000 or T2000 system, each of the cores has 4 executing threads, represented as virtual CPUs by the Hypervisor. Thus, an 8-core Sun Fire T2000 would have 32 virtual CPUs able to be partitioned among the various LDoms on the system.

Similar to CPUs, the memory contained in the Niagara-based servers is virtualized to be presented in various amounts to guest LDoms by the Hypervisor. The memory can be allocated in increments as small as 8-Kbyte chunks. Most importantly, memory is represented to the Guest Domains as starting from the same offset.

The process of translating memory from the platform to domains is referred to as mapping. This happens in most operating environments. In Solaris, applications already see memory that is remapped by the kernel from a real address to a virtual one. The Hypervisor, working with the memory management units in the hardware, simply takes an additional step of mapping from the hardware, or physical environment, to what is presented to the operating environment.

The I/O devices on a sun4v system, such as internal disks, PCI controllers, and attached adapters and devices, can be presented to the various LDoms in several ways through the Hypervisor, depending upon the application requirements and the administrative model that is needed.

The virtual disk server is a way to present an LDom with a device from which to boot an operating system. The image can take many forms, including:

An entire physical disk; this could also be a logical storage unit from a SAN device, which is sometimes referred to as a LUN,
A single slice of a disk,
A file via the loop-back file system called lofi, or

A ZFS (Zettabyte File System) volume.
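The extra translation step the Hypervisor performs, mapping each guest's real addresses (which every guest sees starting from the same offset) onto distinct physical regions carved out at 8-Kbyte granularity, can be illustrated with a toy mapper. The names below are invented; this is not the sun4v MMU interface.

```python
# Toy illustration of the Hypervisor's real-to-physical mapping step: every
# guest sees its memory starting at the same real offset, while the hypervisor
# backs each guest with a distinct physical region, allocated in 8-Kbyte units.
# Invented names, not the sun4v MMU interface.
CHUNK = 8 * 1024            # smallest allocation unit per the narration
GUEST_BASE = 0x0            # each guest sees memory from the same offset

class ToyMapper:
    def __init__(self):
        self.next_phys = 0
        self.bases = {}     # guest name -> physical base address

    def add_guest(self, name, size_bytes):
        assert size_bytes % CHUNK == 0, "allocations are multiples of 8 KB"
        self.bases[name] = self.next_phys
        self.next_phys += size_bytes

    def translate(self, name, real_addr):
        """Map a guest real address to a host physical address."""
        return self.bases[name] + (real_addr - GUEST_BASE)

m = ToyMapper()
m.add_guest("ldg1", 4 * CHUNK)
m.add_guest("ldg2", 4 * CHUNK)
# Both guests use real address 0x2000, but land in different physical chunks.
print(hex(m.translate("ldg1", 0x2000)), hex(m.translate("ldg2", 0x2000)))
```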

The traditional model of direct device control by an operating environment is maintained by the LDoms model. It uses a mode in which the Hypervisor creates a mapping from the device to a virtual interface. It then allows the logical domain to maintain ownership of the device.

The maximum number of logical domains having direct I/O devices is 2. This is based upon the systems having two PCIe controllers, PCI-E A and PCI-E B, each able to be owned independently. So in the case of two Service Domain deployments, each could own a PCI root and the devices in its tree. Because this limitation can be restricting, it is one of the reasons for having a virtualized approach to I/O, providing the flexibility for more logical domains to have access to devices such as PCI cards and their devices by sharing them without direct ownership.

In the direct I/O method, two domains could each control, or own, a single PCIe controller. The diagram shows a Logical Domain owning a PCI-E controller.

In contrast to the direct devices model, the virtual devices model provides the capability for devices to be shared out to multiple domains. This allows for the creation of virtual SANs, thereby providing additional consolidation benefits in the form of rationalization of storage and interfaces, and the reduction in administrative burden.

The concept of virtual devices is based upon at least one domain having ownership of a device through the direct devices model. This domain is designated as a Service Domain, and it establishes a path to the other domains via a Logical Domain Channel. The operating system in a Guest Domain sees a virtual device driver as if it were a local device.
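The two-domain limit on direct I/O described above follows from there being exactly two PCIe roots. A toy ownership table makes the constraint concrete (illustrative only; not how the LDoms Manager assigns devices):

```python
# Illustrative sketch: with only two PCIe roots (PCI-E A and PCI-E B), at most
# two domains can own devices directly; the rest must use virtual I/O.
PCI_ROOTS = ["PCI-E A", "PCI-E B"]

def assign_roots(domains):
    """Give each domain a root until the roots run out; the rest get none."""
    owners = {}
    for root, dom in zip(PCI_ROOTS, domains):
        owners[root] = dom
    direct = set(owners.values())
    virtual_only = [d for d in domains if d not in direct]
    return owners, virtual_only

owners, rest = assign_roots(["svc0", "svc1", "ldg1", "ldg2"])
print(owners)   # {'PCI-E A': 'svc0', 'PCI-E B': 'svc1'}
print(rest)     # ['ldg1', 'ldg2']
```

However many domains exist, only the first two can hold a PCI root; the remainder reach their devices through a Service Domain.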

In this diagram, the Service Domain owns a PCIe controller and allows Logical Domain A to access a virtual form of I/O via the Logical Domain Channel (LDC).

With LDoms, the network is virtualized along with everything else for the virtual machine. This means that network adapters are virtualized (they are called vnet), and a service is created to share them from the Control Domain to the Guest Domains. Additionally, it is possible to create a virtual switch, called a vsw, which is able to provide a channel of communication that multiple domains can use, in this case via a networking protocol such as TCP/IP. Networking devices other than those on board the system, such as PCIe networking cards, can be owned directly by a domain or virtualized as a service. The on-board network ports are shared as a service by default.

The console has traditionally been the conduit for accessing system-level messages for administrative purposes. These include reviewing boot-up messages during an intervention when other methods cannot be used, such as when networking services are down. The console device, as a connection to the OpenBoot environment, is also virtualized via the Hypervisor. Connection is made through a network service in the Control Domain at a specific port. For example, connecting to the local host within the Control Domain on port 5000 will by default connect to the first logical domain other than the Control Domain. It is also possible to specify a virtual console concentrator, or vcc, to group virtual consoles to assist in administration.

The cryptographic devices on the Sun Fire T1000 and T2000, referred to as Modular Arithmetic Units (MAUs), provide high-performance, dedicated cryptographic engines to perform tasks such as encrypting and decrypting network traffic that could occur between a secure sockets layer, or SSL, web server and an application server.

In LDoms, the MAUs are virtualized, so they are referred to as virtual MAUs, or VMAUs.

There are 8 VMAU units on the 8-core models of the Sun Fire T1000 and T2000 servers, 1 per core of 4 VCPUs. As they are part of a core, they may only be bound to a domain that contains at least one strand from the parent core.
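The binding rule above, that a VMAU may only go to a domain holding at least one strand of its parent core, can be checked with a few lines. This is an illustrative sketch: on these 8-core systems each core contributes 4 VCPUs, so VCPU n belongs to core n // 4.

```python
# Illustrative check of the VMAU binding rule: a core's MAU may be bound to a
# domain only if the domain holds at least one strand (VCPU) from that core.
# On the 8-core T1000/T2000, VCPU n belongs to core n // 4.
VCPUS_PER_CORE = 4

def core_of(vcpu):
    return vcpu // VCPUS_PER_CORE

def can_bind_mau(core, domain_vcpus):
    """True if the domain owns at least one strand of the MAU's parent core."""
    return any(core_of(v) == core for v in domain_vcpus)

print(can_bind_mau(0, [0, 1, 2, 3]))   # True: domain holds strands of core 0
print(can_bind_mau(1, [0, 1, 2, 3]))   # False: no strand from core 1
```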