IBM PDOA Hardware

Machines

Name       Function                      Data                          CPU  RAM (GB)  OS
adminNode  Main Node                     Db Catalogs                   4    30        AIX
sysnode1   Main Data Storage Node        Management Portals - DbOptim  12   90        AIX
Aurora1    Backup Admin Node             -                             4    30        AIX
Aurora2    Backup Data Storage Node      -                             12   90        AIX
HMC1       Production Server Controller  -                             -    -         RHEL
HMC2       Backup Server Controller      -                             -    -         RHEL

There are 2 servers towards the bottom of the rack and 2 servers at the top of the rack.

All functionality is clearly labeled on the right-hand side (RHS) of the rack.

HMC

The HMCs act as the link between the servers and the hardware allocated to them.

Storage

Storage is shared between appliances.

Basic Startup

  • Power
  • HMC Console
  • From the HMC Console:
    • Power on the servers
    • Start up the LPARs (OS) in order:
      • Management Server
      • Admin Node
      • Standby:
        • Management Server
        • Admin Server

The startup is done using a GUI on the HMC: select the node, right-click, and then choose LPAR (Start the OS).
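The same sequence can also be scripted from the HMC's command line. A dry-run sketch is below; `chsysstate` is the standard HMC CLI command for powering on systems and activating partitions, but the managed-system and LPAR names here are placeholders, not the real names from this rack:

```shell
#!/bin/sh
# Dry-run sketch of the startup order from the notes above.
# MANAGED_SYS and the LPAR names are placeholders -- substitute the
# real names reported by `lssyscfg -r lpar -m <system>` on the HMC.
MANAGED_SYS="pdoa_frame1"
run() { echo "would run: $*"; }   # swap `echo` for real execution

# 1. Power on the managed server itself
run chsysstate -r sys -m "$MANAGED_SYS" -o on

# 2. Activate the LPARs in the documented order
for lpar in mgmt_server admin_node standby_mgmt_server standby_admin_server; do
    run chsysstate -r lpar -m "$MANAGED_SYS" -o on -n "$lpar"
done
```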

Network Configuration

  • Each network switch has a backup
  • Each connection is mirrored on the backup switch
  • The network has 3 speeds:
    • 10 Mbit
    • 1G
    • 10G
  • Each network link is labeled

Note: when the 10G network is plugged into the corporate network, the network settings on both sides must match. For example, if 4 × 10G ports are bundled together, then the corporate core switch needs to be enabled for the 40G aggregate (i.e. 4 × 10).
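The arithmetic behind that note is simply per-port speed times the number of bundled ports; a tiny helper (the function name is my own) makes the check explicit:

```shell
# aggregate_gbit PORTS SPEED_GBIT -> total capacity the core switch
# must be enabled for on the bundled uplink (e.g. 4 x 10G = 40G).
aggregate_gbit() {
    echo $(( $1 * $2 ))
}

aggregate_gbit 4 10    # prints 40: core switch must support 40G
```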

Admin Access

  • 10/100 Mbit

    For admin access, connect to ports 19-42. These are all on the same VLAN, with addresses in 172.23.1.100-255. Choose an address carefully, as some are preallocated - check /etc/hosts.

  • Corporate network, 1 Gbit

    Ports 43-48 on the switch 1gb_switch.
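The preallocation check on the admin VLAN is a quick grep against /etc/hosts. A sketch against a sample file follows; the entries are invented for illustration, so run the same grep against the appliance's real /etc/hosts:

```shell
# Invented sample of preallocated entries in /etc/hosts.
cat > /tmp/hosts.sample <<'EOF'
172.23.1.100  adminNode
172.23.1.101  sysnode1
172.23.1.150  hmc1
EOF

# List already-taken addresses on the 172.23.1.x admin VLAN
# before picking one for the admin workstation.
grep -E '^172\.23\.1\.' /tmp/hosts.sample
```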

10 Gbit

Access to high-speed corporate networks is available via 10G:

  • Switch: 10_gb_switch
  • Ports: 58-63
  • VLAN configuration is then needed

Disk Configuration

The storage is hosted on a v7000 series chassis.

Storage

There are 6 pools, named Pool_1 through Pool_6. Total capacity: 48.80 TB; usable capacity: 42.91 TB.

Each pool is approximately 7 TB, but sizes vary depending on the pool definition.
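The "approximately 7 TB" figure can be cross-checked against the totals above by dividing the usable capacity evenly across the six pools:

```shell
# 42.91 TB usable spread across 6 pools ~= 7.15 TB per pool,
# which matches the "approx 7 TB" figure in the notes.
awk 'BEGIN { printf "%.2f\n", 42.91 / 6 }'
```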

SSD drives - The SSD drives sit in the second-to-last rack space. They are used as temporary storage for the DB2 server, and appear in the system as /tmp filespace. SSDs are only fitted on the Admin and Admin-standby nodes, as they are not needed on the data nodes.

In total there are 6 × 400 GB flash drives.

GPFS

There are 2 GPFS nodes: the admin and system nodes.

Machine Health Checks

There is a command for this (the exact syntax was not captured - the fragment below is garbled):

ort Operational State: |^IEEE 802.3 ... | grep -v | "FC Adapter" | dskbak -c

This produced a report which indicated that 1 network connector was not good.
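The garbled fragment above looks like a grep pipeline over an adapter status report. A reconstruction against invented sample output is below; the real report comes from the appliance's health-check tool, whose exact name was not captured:

```shell
# Invented sample of an adapter status report.
cat > /tmp/adapters.sample <<'EOF'
ent0  IEEE 802.3  Operational State: up
ent3  IEEE 802.3  Operational State: down
fcs0  FC Adapter  Operational State: up
EOF

# Keep network (IEEE 802.3) adapters, drop Fibre Channel ones,
# and flag any that are not up -- here, ent3.
grep 'IEEE 802.3' /tmp/adapters.sample | grep -v 'FC Adapter' | grep -v 'up$'
```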

Gurav then logged into the machine and issued the following AIX command:

lscfg -vpl ent3

This returned a lot of info, but it included a physical address of

U788AA,9,90-P1-C1-C3-T2

The P1-C1-C3-T2 part refers to locations on the server's backplane:

  • P1 - planar (backplane) 1
  • C1 - card location on the server
  • C3 - port on the server
  • T2 - connector (port) number
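The location code splits cleanly on dashes with standard shell tools, which makes it easy to pull out the part you need when cross-referencing the rack:

```shell
# Split the AIX physical location code into its dash-separated parts:
# enclosure/serial prefix, then planar, card, sub-card, and connector.
loc="U788AA,9,90-P1-C1-C3-T2"
echo "$loc" | tr '-' '\n'
```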

Getting the Main Console running

The command for this is

mistart

After that, we should be able to log into the interfaces via the web consoles.