
After I upgraded one of my oVirt hosts from 4.4.0 to 4.4.2 using yum, it became a faulty host for the cluster. I tried to uninstall the upgrade, but that didn't help. After checking the status of the host, I found that the "ovirt-ha-agent" service is not starting due to changes in the LVM setup made by the upgrade. Is there any way to move the LVM setup back to what it was before the upgrade? Below are the checks I ran, the LVM setup before and after the upgrade, and what I think is the relevant difference.
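The checks were roughly these (service output omitted; the lsblk listings are reproduced in full below):

    # check why the HA agent is failing on this host
    systemctl status ovirt-ha-agent
    journalctl -u ovirt-ha-agent -b

    # dump the block device / LVM layout (source of the two listings below)
    lsblk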

1) LVM before the upgrade:

    NAME                                                     MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
    sda                                                        8:0    0 68.3G  0 disk 
    |-sda1                                                     8:1    0    1G  0 part /boot
    `-sda2                                                     8:2    0 67.3G  0 part 
      |-onn_host2-swap                                       253:0    0    4G  0 lvm  [SWAP]
      |-onn_host2-pool00_tmeta                               253:1    0    1G  0 lvm  
      | `-onn_host2-pool00-tpool                             253:3    0 49.8G  0 lvm  
      |   |-onn_host2-ovirt--node--ng--4.4.0--0.20200521.0+1 253:4    0 12.8G  0 lvm  /
      |   |-onn_host2-pool00                                 253:5    0 49.8G  0 lvm  
      |   |-onn_host2-var_log_audit                          253:6    0    2G  0 lvm  /var/log/audit
      |   |-onn_host2-var_log                                253:7    0    8G  0 lvm  /var/log
      |   |-onn_host2-var_crash                              253:8    0   10G  0 lvm  /var/crash
      |   |-onn_host2-var                                    253:9    0   15G  0 lvm  /var
      |   |-onn_host2-tmp                                    253:10   0    1G  0 lvm  /tmp
      |   `-onn_host2-home                                   253:11   0    1G  0 lvm  /home
      `-onn_host2-pool00_tdata                               253:2    0 49.8G  0 lvm  
        `-onn_host2-pool00-tpool                             253:3    0 49.8G  0 lvm  
          |-onn_host2-ovirt--node--ng--4.4.0--0.20200521.0+1 253:4    0 12.8G  0 lvm  /
          |-onn_host2-pool00                                 253:5    0 49.8G  0 lvm  
          |-onn_host2-var_log_audit                          253:6    0    2G  0 lvm  /var/log/audit
          |-onn_host2-var_log                                253:7    0    8G  0 lvm  /var/log
          |-onn_host2-var_crash                              253:8    0   10G  0 lvm  /var/crash
          |-onn_host2-var                                    253:9    0   15G  0 lvm  /var
          |-onn_host2-tmp                                    253:10   0    1G  0 lvm  /tmp
          `-onn_host2-home                                   253:11   0    1G  0 lvm  /home
2) LVM setup after the upgrade:

    NAME                                                       MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
    sda                                                          8:0    0  68.3G  0 disk
    `-3600508b1001cd12f4daf0c8bd56261f4                        253:0    0  68.3G  0 mpath
      |-3600508b1001cd12f4daf0c8bd56261f4p1                    253:2    0     1G  0 part  /boot
      `-3600508b1001cd12f4daf0c8bd56261f4p2                    253:3    0  67.3G  0 part
        |-onn-pool00_tmeta                                     253:4    0     1G  0 lvm
        | `-onn-pool00-tpool                                   253:6    0  49.8G  0 lvm
        |   |-onn-ovirt--node--ng--4.4.0--0.20200521.0+1       253:7    0  12.8G  0 lvm   /
        |   |-onn-pool00                                       253:12   0  49.8G  0 lvm
        |   |-onn-var_log_audit                                253:13   0     2G  0 lvm   /var/log/audit
        |   |-onn-var_log                                      253:14   0     8G  0 lvm   /var/log
        |   |-onn-var_crash                                    253:15   0    10G  0 lvm   /var/crash
        |   |-onn-var                                          253:16   0    15G  0 lvm   /var
        |   |-onn-tmp                                          253:17   0     1G  0 lvm   /tmp
        |   |-onn-home                                         253:18   0     1G  0 lvm   /home
        |   `-onn-ovirt--node--ng--4.4.2--0.20200918.0+1       253:19   0  12.8G  0 lvm
        |-onn-pool00_tdata                                     253:5    0  49.8G  0 lvm
        | `-onn-pool00-tpool                                   253:6    0  49.8G  0 lvm
        |   |-onn-ovirt--node--ng--4.4.0--0.20200521.0+1       253:7    0  12.8G  0 lvm   /
        |   |-onn-pool00                                       253:12   0  49.8G  0 lvm
        |   |-onn-var_log_audit                                253:13   0     2G  0 lvm   /var/log/audit
        |   |-onn-var_log                                      253:14   0     8G  0 lvm   /var/log
        |   |-onn-var_crash                                    253:15   0    10G  0 lvm   /var/crash
        |   |-onn-var                                          253:16   0    15G  0 lvm   /var
        |   |-onn-tmp                                          253:17   0     1G  0 lvm   /tmp
        |   |-onn-home                                         253:18   0     1G  0 lvm   /home
        |   `-onn-ovirt--node--ng--4.4.2--0.20200918.0+1       253:19   0  12.8G  0 lvm
        `-onn-swap                                             253:8    0     4G  0 lvm   [SWAP]
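Comparing the two listings, the differences I can see are that after the upgrade the disk is claimed by device-mapper multipath (the 3600508b1001cd12f4daf0c8bd56261f4 mpath device on top of sda) and the volume group now shows up as onn instead of onn_host2. If the multipath claim is what breaks the agent, I assume the local disk could be blacklisted, something like the sketch below (the WWID is the one from the listing above; I have not verified that this alone restores the previous layout):

    # /etc/multipath.conf -- sketch only, not verified on this host
    # Exclude the local disk (WWID taken from the lsblk output above)
    # so it is no longer wrapped in an mpath device.
    # On oVirt Node, vdsm may manage this file, so a drop-in under
    # /etc/multipath/conf.d/ may be the safer place for it.
    blacklist {
        wwid "3600508b1001cd12f4daf0c8bd56261f4"
    }

Since / and /boot sit on top of that mpath device, I expect this would also need the initramfs rebuilt (dracut -f) and a reboot before the change takes effect.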
  • What is the actual problem you encountered? You described a proposed solution but nothing indicates that it has any relevance to the problem you vaguely hinted at. – Michael Hampton Nov 04 '20 at 20:42

0 Answers