Last December (2016), I was deploying a two-node VMware ESXi 6.5 cluster on two brand-new HPE ProLiant DL380 Gen9 rack-mount servers for a customer as part of a P2V and HA project.
A custom HPE image was used for deploying ESXi 6.5.
The cluster worked fine for a few days but then started throwing a Purple Screen Of Death (PSOD), or Purple Diagnostic Screen, on one of the member servers, stating it was unable to acquire a lock, along with a few backtrace details.
I searched the VMware Knowledge Base for solutions but found none.
In late January (2017), HPE released a Customer Advisory related to this issue.
The fix was to update the iLO driver to the latest version, or at least to a version higher than 6126.96.36.199.
Recently (August 2017), VMware also published a KB article for the same issue.
You can find your iLO driver version by logging on to the HPE iLO 4 web console and navigating to Information > System Information > Software > hpe-ilo.
You can download the iLO driver vib from the HPE vibsdepot (the latest available as of September 24th, 2017, the day of writing this post, is ilo_6188.8.131.52-12.4240417.vib), place it on a datastore accessible to the affected ESXi host, and then install it.
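If you prefer the command line, the install can be done over SSH with esxcli. A minimal sketch, assuming the host is already in maintenance mode; the datastore name below is a placeholder for wherever you copied the vib:

```shell
# Check the currently installed iLO driver/vib version first
esxcli software vib list | grep -i ilo

# Install the new iLO vib from its datastore path (absolute path required;
# "datastore1" is a placeholder), then reboot the host to apply it
esxcli software vib install -v /vmfs/volumes/datastore1/ilo_6188.8.131.52-12.4240417.vib
reboot
```

After the reboot, rerun the `vib list` command to confirm the new version is active.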
These partitions will then be assigned to the jobs.
Change the Target Device (partitions) to the new Tape Library.
Partitioning is needed so that each labeled tape (cartridge) stays in the slot designated for it, i.e., each backup job is assigned a partition (slot), and the labeled tape for that job sits in that partition's slot. The tape is carried to and from the tape drive by the library's robotic arm (media changer).
You should see Media Changer and Tape Drive in the Device Manager if the connection between the Server and Library is correct.
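As a quick command-line cross-check on the server (a sketch for an elevated Windows prompt; the caption filter for the changer is an assumption and may need adjusting for your model):

```shell
:: List any tape drives Windows has enumerated
wmic path Win32_TapeDrive get Caption,Status

:: Look for the media changer among PnP devices (caption filter is a guess)
wmic path Win32_PnPEntity where "Caption like '%Changer%'" get Caption
```

If either query comes back empty, recheck the SAS cabling before troubleshooting further in software.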
One more quick-step write-up, this time on the physical installation of the HP StorageWorks MSL2024 Tape Library and the HP MSL LTO-6 Ultrium 6250 SAS Tape Drive.
These steps are for SAS connectivity, not FC connectivity, between the tape library and the server.
Install the SAS Tape Drive (Internal) to the Tape Library.
Install from the bottom bay, proceeding to the top.
If you have just one tape drive, install it in the bottom bay.
The MSL2024 supports a maximum of 2 internal tape drives and provides 24 slots in total to hold tape cartridges. The first slot can optionally be configured as a mail slot for exporting tape cartridges. Note: the mail slot comes in very handy for taking a cartridge out of the library without disturbing an ongoing tape-writing (backup in progress) operation, which is not possible otherwise.
Connect the Tape Library (via the now installed Tape Drive) to the Server using the SAS Fan-out cable.
Connect the single end of the fan-out cable to the server's SAS HBA card, and one of the four ends to Port A on the tape drive inside the tape library.
Note: Remove the shipping lock, located at the top center, before using the tape library.
It is always good practice to assign and configure an IP address for managing the tape library. Do note that you need to use the OCP (Operator Control Panel) on the front of the tape library/autoloader to set the password for the management page (HP/E Command View for Tape Libraries) before logging on via a web browser.
Additionally, install HP/E L&TT (Library and Tape Tools) to perform management functions beyond what HP/E Command View for Tape Libraries offers in the web browser via the IP address.
As the title says, here are a few of the methods to perform a firmware update or upgrade on HP/E’s ProLiant line of servers.
IP (Intelligent Provisioning, a feature built into HPE servers from Gen8 onward), which requires Internet access to download firmware. Press F10 during the POST screen to enter IP; or
HP(E) SPP (Service Pack for ProLiant) DVD*; or
A new SPP version is released every six months, in April (04) and October (10). Note: You'll need an HPE Passport account for the downloads.
HP PSP (ProLiant Support Pack) DVD*; or
HP Smart Update Firmware DVD*
*All these DVDs (ISOs) are available online from HP/E; download the ISO file and burn it to a DVD. You can use the HPE USB Key Utility for Windows if you prefer a USB flash drive over a DVD.
To understand the difference between an update and an upgrade, refer to: https://www.codetwo.com/kb/what-is-the-difference-between-upgrade-update-and-migration/
In simpler terms, an update is a minor change within the same product, while an upgrade is a major change that results in a new release and a new version of the product.
Example: Windows Server 2008 R1 SP2 and Windows Server 2008 R2 SP1.
Here, SP (Service Pack) is an update and R (Release) is an upgrade.
The same goes for most other products.
By the way, a Service Pack is a bundle of updates.
Here’s a quick-step post on creating a RAID array on the HDDs of a server.
The steps posted here are for an HP(E) server but can be applied to other vendors' makes/models as well.
Boot up the server and press *F8 [HP servers] to configure the array (entering either the Array Configuration Utility [ACU] or the Smart Storage Administrator [SSA], depending on the generation of the server).
*Press F8 only after the iLO configuration prompt clears; otherwise, you will enter the iLO configuration screen instead of the array configuration.
Alternatively, you can create a RAID array by pressing F10 (for Gen8 and above) and entering the HPE IP (Intelligent Provisioning). Once you are on the IP page, select “Perform Maintenance” > “HPE SSA”.
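If you would rather script this than use the F8/F10 utilities, HPE Smart Array controllers can also be managed from within the OS using the ssacli (formerly hpssacli) tool. A hedged sketch, assuming the controller is in slot 0; the drive bay addresses shown are placeholders to replace with ones from your own listing:

```shell
# List the controller, its physical drives and any existing logical drives
ssacli ctrl slot=0 show config

# Create a RAID 1 logical drive from two physical drives
# (1I:1:1 and 1I:1:2 are example addresses; use your own from the listing above)
ssacli ctrl slot=0 create type=ld drives=1I:1:1,1I:1:2 raid=1
```

Rerun `show config` afterwards to confirm the new logical drive appears.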
Remember how many disk failures each RAID level can sustain, as well as how much capacity is reserved (for parity) and how much remains usable.
a. RAID 1 can sustain just 1 disk failure
b. RAID 5, like RAID 1, can sustain 1 disk failure (N+1)
c. RAID 6 can sustain 2 disk failures (N+2)
d. RAID 1+0 can sustain multiple disk failures, provided the failed disks are not in the same mirror pair.
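The reserved vs. usable split behind these levels can be sketched as simple arithmetic (illustrative only; real controllers also reserve a little space for metadata):

```shell
# Usable capacity, in whole-disk units, for N equal disks at a given RAID level
raid_usable() {  # usage: raid_usable LEVEL N_DISKS
  level=$1; n=$2
  case $level in
    1)  echo $(( n / 2 )) ;;   # mirror: half the disks hold copies
    5)  echo $(( n - 1 )) ;;   # one disk's worth of parity (survives 1 failure)
    6)  echo $(( n - 2 )) ;;   # two disks' worth of parity (survives 2 failures)
    10) echo $(( n / 2 )) ;;   # striped mirrors: half the raw capacity
  esac
}

raid_usable 6 8   # 8 disks in RAID 6 leave 6 disks of usable space
```

So, for example, six 1 TB disks give roughly 5 TB usable in RAID 5 but only 3 TB in RAID 1+0.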
RAID level or type depends on the RAID/Array Controller installed on the server.
Not all Array Controllers are capable of providing all the different levels/types of RAID.
For more on the different RAID levels, refer to the links below:
Today, we'll discuss the physical installation of Dual In-line Memory Modules (DIMMs) into servers. A DIMM is most commonly known as a memory/RAM stick.
Here are a few quick guidelines, which are not tied to any specific server hardware vendor:
Install RAM DIMM modules in accordance with the processors installed.
Every processor has a few channels and every channel has a few slots.
White slots always indicate the start of a channel.
Do not install DIMMs in the channel slots of a processor (say, Processor 2, 3 or 4) if that processor itself is not present (not installed).
Always install DIMMs in the white slots first, i.e., at the start of each channel for a processor, then proceed to the next (black) slot of each channel in the same order, i.e.
Example for a single processor (Processor 1):
a. Channel 1-white slot, Channel 2-white slot, Channel 3-white slot…Channel N-white slot.
b. Channel 1-black slot, Channel 2-black slot, Channel 3-black slot…Channel N-black slot.
c. Channel 1-2nd black slot, Channel 2-2nd black slot, Channel 3-2nd black slot…Channel N-2nd black slot.
d. So on and so forth.
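The population order described above can be sketched as a tiny loop (purely illustrative; channel and slot counts vary by server model):

```shell
# Print the DIMM population order for one processor:
# fill slot 1 (white) of every channel, then slot 2 (black) of every channel, etc.
dimm_order() {  # usage: dimm_order CHANNELS SLOTS_PER_CHANNEL
  for slot in $(seq 1 "$2"); do
    for ch in $(seq 1 "$1"); do
      printf 'Channel %d, slot %d\n' "$ch" "$slot"
    done
  done
}

dimm_order 4 3   # 4 channels, 3 slots each: all whites first, then the blacks
```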
A few other rules concern the types of DIMMs: read up on UDIMMs (Unbuffered), FBDIMMs (Fully Buffered), RDIMMs (Registered), LRDIMMs (Load Reduced) and NVDIMMs (Non-Volatile). The former two are rarely used in newer-generation server architectures; the latter three are commonly used.
DIMMs should be installed in rank order; for instance, quad-rank DIMMs should be installed before dual-rank DIMMs, and dual-rank before single-rank. Note: memory capacity doesn't matter, but rank does.
Mixing different DIMM types is not recommended, and is not supported in some scenarios. Do refer to the server hardware vendor's documentation before mixing two types.
You will be able to check the total installed RAM during server boot (POST).
If you find the amount of RAM to be incorrect, check whether you have installed RAM correctly as mentioned above.
On the inside of the server's access panel, on any form of server (rack, tower or blade), you will find short instructions on memory population along with the server's architecture diagram, which come in handy at times.
The very first post of my blog starts here with the installation and configuration of HPE System Management Homepage (SMH) on a Windows Server OS.
HPE SMH provides the health statuses of different server hardware components like Processors, Memory, Power Supply Units, Array Controller, Hard Disk Drives among others.
It pulls hardware information from either of two sources: the SNMP service or the WBEM providers. Either one is sufficient, but I prefer to have both configured, so that if one stops working, the data can still be pulled from the other.
Another important use of HPE SMH is the Integrated Management Log (IML), which details the various events that have occurred and is very useful when troubleshooting hardware issues.
Physical Server: HPE Server Model
Operating System: Windows Server Edition
Simple Network Management Protocol (SNMP) Service Feature (Optional)
For Windows Server 2008 and above: Server Manager > Features > SNMP Service.
Make sure to include the management tools for the SNMP Service, without which you will not be able to see the Security tab under SNMP Service in the Services MMC.
For Windows Server 2003 and below: Control Panel (control) > Add or Remove Programs (appwiz.cpl) > Add/Remove Windows Components > Management And Monitoring Tools > Simple Network Management Protocol.
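The same feature can also be added from the command line. A hedged sketch using the Server Manager cmdlets (feature names vary slightly between Windows versions, so verify with Get-WindowsFeature *snmp* first):

```shell
:: Windows Server 2008 R2 (elevated prompt); RSAT-SNMP adds the management tools
powershell -Command "Import-Module ServerManager; Add-WindowsFeature SNMP-Service, RSAT-SNMP"

:: Windows Server 2012 and later
powershell -Command "Install-WindowsFeature SNMP-Service -IncludeManagementTools"
```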
You can see the Data Source in the top right corner of the web page, listed as either WBEM or SNMP. To toggle between them, go to Settings > Data Source.
Update, Jan 5th, 2016: You might face errors during installation. If you do, go to the path of the installation log and find out the issue. I have faced issues due to missing drivers for devices present in Device Manager. Always check Device Manager for Unknown/Other devices and try to install the appropriate driver for them. In my case, it was a driver related to iLO.