Linux


Debian locale settings incorrect

Run

dpkg-reconfigure tzdata

and

dpkg-reconfigure locales
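
To confirm the result, the current settings can be checked afterwards (a quick sanity check; not part of the reconfiguration itself):

locale              # shows the LANG/LC_* values now in effect
cat /etc/timezone   # Debian stores the selected timezone here
date                # should print the time in the expected zone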

Setting the timezone manually

  • Change to the directory /usr/share/zoneinfo, where you will find a list of time zone regions. Choose the most appropriate region; for Canada or the US this is the "America" directory, UK time zones are under "Europe", and so on.
  • Back up the previous timezone: mv /etc/localtime /etc/localtime-old
  • Create a symbolic link to the appropriate timezone from /etc/localtime. Example:
ln -sf /usr/share/zoneinfo/Europe/Amsterdam /etc/localtime 
  • If you have the utility rdate, update the current system time by executing
/usr/bin/rdate -s time-a.nist.gov
  • Set the ZONE entry in the file /etc/sysconfig/clock (e.g. ZONE="Europe/London"); the whole sequence is pulled together in the example below.
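
Putting the commands together for one zone (Europe/Amsterdam, as in the example above; substitute your own region/zone):

cd /usr/share/zoneinfo/Europe                   # pick your region directory
mv /etc/localtime /etc/localtime-old            # back up the old timezone
ln -sf /usr/share/zoneinfo/Europe/Amsterdam /etc/localtime
/usr/bin/rdate -s time-a.nist.gov               # only if rdate is installed
# then set ZONE="Europe/Amsterdam" in /etc/sysconfig/clock (Red Hat style systems)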

Grub Notes

To re-install grub manually (mostly for RAID systems, if you want more than one drive to be bootable):

grub
grub> device (hd0) /dev/sda
grub> root (hd0,0)
grub> setup (hd0)

grub> setup (hd0)
 Checking if "/boot/grub/stage1" exists... yes
 Checking if "/boot/grub/stage2" exists... yes
 Checking if "/boot/grub/e2fs_stage1_5" exists... yes
 Running "embed /boot/grub/e2fs_stage1_5 (hd0)"...  15 sectors are embedded.
succeeded
 Running "install /boot/grub/stage1 (hd0) (hd0)1+15 p (hd0,0)/boot/grub/stage2 /boot/grub/menu.lst"... succeeded
Done.

grub> quit

Repeat the above for each drive. Note that (hd0,0) refers to the boot partition.
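
For example, if the second disk of a RAID1 pair is /dev/sdb (substitute your own device), one common approach is to temporarily map it as (hd0) so that its MBR still boots if it ever becomes the first disk:

grub
grub> device (hd0) /dev/sdb
grub> root (hd0,0)
grub> setup (hd0)
grub> quit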

mdadm

Example: To replace a drive

Fail the drive you want to replace in the array, e.g.:

mdadm --manage /dev/mdx --fail /dev/hdxy

Shut down the system and replace the drive.
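
For instance, with a hypothetical failed member /dev/hdc2 in /dev/md1 (device names are placeholders), the drive can be failed and also removed from the array before powering off:

mdadm --manage /dev/md1 --fail /dev/hdc2
mdadm --manage /dev/md1 --remove /dev/hdc2   # drop it from the array as well
shutdown -h now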

If the replacement drive is not already blank, you may need to remove any previous RAID usage from it:

Boot from CD into rescue mode.

For most mdadm commands to work you need an /etc/mdadm.conf. This can be generated from the drives already in the system using:

mdadm -E --scan > /etc/mdadm.conf

Then clear the old RAID metadata with

mdadm --zero-superblock /dev/hdxy

for each partition.

Remove the partitions, then recreate them by copying the layout from an existing drive:

sfdisk -d /dev/hdx | sfdisk /dev/hdy
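
The copy can be sanity-checked afterwards by comparing the two partition tables:

sfdisk -l /dev/hdx
sfdisk -l /dev/hdy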


Start the array used by the root filesystem:

mdadm -A /dev/md1 --force --run

Add the new drive/partition to the root md device:

mdadm --manage /dev/md1 --add /dev/hdxy

Wait for the rebuild to finish.
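
The rebuild progress can be watched from another console, for example:

cat /proc/mdstat
watch -n 5 cat /proc/mdstat   # refresh every 5 seconds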

Do the same for the /boot partition.

Finally, do the same for any data partitions you have; these can be handled either from the rescue setup or after a reboot.

Growing encrypted disks

This example shows a domU client; since the disk is allocated in the dom0 using LVM, you need to grow that first!
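
A minimal sketch of the dom0 side, assuming the domU disk is backed by a hypothetical logical volume /dev/vg0/skype-data (the VG/LV names and the size are placeholders):

lvextend -L +25G /dev/vg0/skype-data   # grow the backing LV in the dom0
lvdisplay /dev/vg0/skype-data          # confirm the new size

The domU may need to be restarted (or the disk re-attached) before it sees the larger device.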

[root@skype ~]# mkdir /data
[root@skype ~]# mount /dev/mapper/data /data
[root@skype ~]# df
Filesystem           1K-blocks      Used Available Use% Mounted on
/dev/xvda              2064208   1499312    460040  77% /
tmpfs                    98396         0     98396   0% /dev/shm
/dev/mapper/data      77403712    184220  73287612   1% /data

fdisk -l (to find the correct partition layout/names etc.)

Disk /dev/xvdb: 268 MB, 268435456 bytes
255 heads, 63 sectors/track, 32 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

    Device Boot      Start         End      Blocks   Id  System
/dev/xvdb1               1          32      257008+  82  Linux swap / Solaris

Disk /dev/xvdc: 107.3 GB, 107374182400 bytes
255 heads, 63 sectors/track, 13054 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

    Device Boot      Start         End      Blocks   Id  System
/dev/xvdc1               1        9790    78638143+  83  Linux

Use fdisk to delete and re-add the partition so that it grows into the new space.

[root@skype ~]# fdisk /dev/xvdc

The number of cylinders for this disk is set to 13054.
There is nothing wrong with that, but this is larger than 1024,
and could in certain setups cause problems with:
1) software that runs at boot time (e.g., old versions of LILO)
2) booting and partitioning software from other OSs
   (e.g., DOS FDISK, OS/2 FDISK)

Command (m for help): p

Disk /dev/xvdc: 107.3 GB, 107374182400 bytes
255 heads, 63 sectors/track, 13054 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

    Device Boot      Start         End      Blocks   Id  System
/dev/xvdc1               1        9790    78638143+  83  Linux

Delete and re-add the partition:

Command (m for help): d
Selected partition 1

Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-13054, default 1): 
Using default value 1
Last cylinder or +size or +sizeM or +sizeK (1-13054, default 13054): 
Using default value 13054

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.

Re-open the LUKS device

[root@skype ~]# cryptsetup luksOpen /dev/xvdc1 data
Enter LUKS passphrase for /dev/xvdc1: 
padlock: VIA PadLock not detected.
key slot 0 unlocked.
Command successful.

Make sure the crypto layer is aware of the new size

[root@skype ~]# cryptsetup resize data

Then check and resize the filesystem

[root@skype ~]# fsck.ext3 -f /dev/mapper/data 
e2fsck 1.39 (29-May-2006)
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Pass 5: Checking group summary information
/data: 30743/9830400 files (0.1% non-contiguous), 546498/19659406 blocks

[root@skype ~]# resize2fs /dev/mapper/data 
resize2fs 1.39 (29-May-2006)
Resizing the filesystem on /dev/mapper/data to 26213926 (4k) blocks.
The filesystem on /dev/mapper/data is now 26213926 blocks long.

And re-mount it ready for use

[root@skype ~]# mount /dev/mapper/data /data

[root@skype ~]# df
Filesystem           1K-blocks      Used Available Use% Mounted on
/dev/xvda              2064208   1499392    459960  77% /
tmpfs                    98396         0     98396   0% /dev/shm
/dev/mapper/data     103210424    960112  97007532   1% /data

iSCSI (Red Hat): connect to a target

First you need to discover the targets under the portal IP address:

iscsiadm --mode discovery --type sendtargets --portal 192.168.1.10 # Replace 192.168.1.10 with Your portal IP address

Log in; you must use a node record id found by the discovery:

iscsiadm --mode node --targetname iqn.2001-05.com.doe:test --portal 192.168.1.1:3260 --login 

Logout:

iscsiadm --mode node --targetname iqn.2001-05.com.doe:test --portal 192.168.1.1:3260 --logout 

Display info about target:

iscsiadm -m node -T targetname -p ipaddress

List node records:

iscsiadm --mode node 

Display a list of all sessions currently logged in:

iscsiadm -m session

View the iSCSI discovery database:

iscsiadm -m discovery -o show

Display all data for a given node record:

iscsiadm --mode node --targetname iqn.2001-05.com.doe:test --portal 192.168.1.1:3260
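
To have the session re-established automatically at boot (a common follow-up; same placeholder target and portal as above), the node record's startup setting can be switched to automatic:

iscsiadm --mode node --targetname iqn.2001-05.com.doe:test --portal 192.168.1.1:3260 --op update -n node.startup -v automatic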

Pacemaker / Corosync notes

First, install all needed packages for pacemaker:

 yum install heartbeat corosync pacemaker 

Start the service and enable it at system startup:

/etc/init.d/corosync start
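
Only the manual start is shown above; on a RHEL/CentOS init-script system, enabling it at boot is typically done with chkconfig:

chkconfig corosync on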

After installation, generate the corosync key for the nodes:

corosync-keygen

Copy the corosync key to the second node:

scp /etc/corosync/authkey 192.168.0.2:/etc/corosync/authkey

Create /etc/corosync/service.d/pcmk on both nodes with the following content:

service {
        name: pacemaker
        ver:  0
}

Edit configuration files on both nodes:

mv /etc/corosync/corosync.conf.example /etc/corosync/corosync.conf

Set bindnetaddr to the network address; other directives can be left unchanged:

bindnetaddr: 192.168.0.0
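
For reference, bindnetaddr lives in the totem/interface block of corosync.conf; a minimal sketch of that part of the file, using typical example-file defaults for everything except bindnetaddr (adjust to your own network):

totem {
        version: 2
        secauth: on           # uses the /etc/corosync/authkey generated above
        interface {
                ringnumber: 0
                bindnetaddr: 192.168.0.0
                mcastaddr: 226.94.1.1
                mcastport: 5405
        }
}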

Now configure the virtual IP resource for these nodes; the resource is monitored every 20 seconds:

crm configure primitive P_IP ocf:heartbeat:IPaddr2 \
       params ip="192.168.0.3" cidr_netmask="255.255.255.0" \
       op monitor interval="20s"

Next, the httpd server resource:

crm configure primitive P_APACHE ocf:heartbeat:apache \
       params configfile="/etc/httpd/conf/httpd.conf" statusurl="http://localhost/server-status" \
       op monitor interval="40s"

where

P_APACHE – the resource name

configfile – path to the apache configuration file

statusurl – URL of the status page (see below for how to configure one)

interval – time between checks

To prevent a situation where the apache resource migrates to node002 while the IP resource stays on node001 (this can happen when apache on node001 hangs but the network stack still works), we need to add a colocation constraint:

crm configure colocation WEB_SITE inf: P_APACHE P_IP

To make pacemaker start apache only after the IP is set up (in other words, to describe the start-up order), run:

crm configure order START_ORDER inf: P_IP P_APACHE

Describe the location priorities:

crm configure location L_IP_NODE001 P_IP 100: node001.example.com

crm configure location L_IP_NODE002 P_IP 100: node002.example.com

Now set the stickiness threshold. A value of 110 is enough to prevent resources from migrating back, which would otherwise happen in the following scenario:

1. node001 fails and resources are moved to node002.

2. then node001 comes back online.

3. resources are migrated back to node001.


To keep resources on node002 and prevent further migration, add resource-stickiness:

crm configure rsc_defaults resource-stickiness="110"

The full configuration can then be reviewed:

# crm configure show
node node001.example.com
node node002.example.com
primitive P_APACHE ocf:heartbeat:apache \
        params configfile="/etc/httpd/conf/httpd.conf" statusurl="http://localhost/server-status" \
        op monitor interval="40s"
primitive P_IP ocf:heartbeat:IPaddr2 \
        params ip="10.22.48.138" cidr_netmask="255.255.255.240" \
        op monitor interval="20s"
colocation WEB_SITE inf: P_APACHE P_IP
order START_ORDER inf: P_IP P_APACHE
property $id="cib-bootstrap-options" \
        dc-version="1.0.10-da7075976b5ff0bee71074385f8fd02f296ec8a3" \
        cluster-infrastructure="openais" \
        expected-quorum-votes="2" \
        stonith-enabled="false" \
        no-quorum-policy="ignore"

Keep in mind that the server-status page should be configured in the apache configuration files on both nodes, as follows:

<VirtualHost 127.0.0.1:80>
    ServerAdmin webmaster@dummy-host.example.com
    ServerName localhost
    ErrorLog logs/dummy-host.example.com-error_log
    CustomLog logs/dummy-host.example.com-access_log common
    <Location /server-status>
        SetHandler server-status
        Order deny,allow
        Deny from all
        Allow from 127.0.0.1
    </Location>
</VirtualHost>
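
After reloading apache, the status URL used by the resource agent can be checked locally on each node (just a sanity check; mod_status must be loaded for the SetHandler to work):

apachectl configtest                  # verify the syntax
apachectl graceful                    # reload apache
curl http://localhost/server-status   # should return the status page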

Finally, check your pacemaker status:

crm_mon
============
Last updated: Wed Jul  6 15:17:46 2011
Stack: openais
Current DC: node001.example.com - partition with quorum
Version: 1.0.10-da7075976b5ff0bee71074385f8fd02f296ec8a3
2 Nodes configured, 2 expected votes
2 Resources configured.
============
 
Online: [ node001.example.com node002.example.com ]
 
P_APACHE        (ocf::heartbeat:apache):        Started node001.example.com
P_IP        (ocf::heartbeat:IPaddr2):       Started node001.example.com

This means that both resources are started on node001, while node002 is standing by.

There are a couple of ways to move resources to the other node:

1. Set active node to standby:

crm node standby node001.example.com

2. Or directly move the resource:

crm resource migrate [resource_name] [node_name]
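
For example, to push the apache resource (and, through the colocation, the IP) to node002 and later clear the temporary constraint that the move creates:

crm resource migrate P_APACHE node002.example.com
crm resource unmigrate P_APACHE   # remove the move constraint when finished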

DRBD recover split brain

On the split brain victim (the disconnect may return an error if it is already disconnected; ignore it):

drbdadm disconnect <resource>
drbdadm secondary <resource>
drbdadm connect --discard-my-data <resource>

On the other node (the split brain survivor), if its connection state is also StandAlone, you would enter:

drbdadm connect <resource>
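
Afterwards the connection state can be watched until both nodes are connected and resynchronised, using the standard DRBD status commands:

cat /proc/drbd
drbdadm cstate <resource>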