ERROR: error retrieving mountpoint source for dataset ERROR: failed to mount file system on
June 11, 2015
11:13 am
Lo0oM

Hi

 

Description:

A T1000 with Solaris 10 update 10 installed. I am trying to upgrade it to Solaris 10 update 11 (the latest). lucreate fails with the following errors:

Analyzing system configuration.
Updating boot environment description database on all BEs.
Updating system configuration files.
The device </dev/dsk/c0t0d0s0> is not a root device for any boot environment; cannot get BE ID.
Creating configuration for boot environment <s10u11>.
Source boot environment is <zfsBE>.
Creating file systems on boot environment <s10u11>.
Populating file systems on boot environment <s10u11>.
Temporarily mounting zones in PBE <zfsBE>.
Analyzing zones.
WARNING: Directory </zones/001> zone <global> lies on a filesystem shared between BEs, remapping path to </zones/001-s10u11>.
WARNING: Device <rpool/zones/001> is shared between BEs, remapping to <rpool/zones/001-s10u11>.
WARNING: Directory </zones/002> zone <global> lies on a filesystem shared between BEs, remapping path to </zones/002-s10u11>.
WARNING: Device <rpool/zones/002> is shared between BEs, remapping to <rpool/zones/002-s10u11>.
Duplicating ZFS datasets from PBE to ABE.
Creating snapshot for <rpool/ROOT/zfsBE> on <rpool/ROOT/zfsBE@s10u11>.
Creating clone for <rpool/ROOT/zfsBE@s10u11> on <rpool/ROOT/s10u11>.
Creating snapshot for <rpool/zones/001> on <rpool/zones/001@s10u11>.
Creating clone for <rpool/zones/001@s10u11> on <rpool/zones/001-s10u11>.
Creating snapshot for <rpool/zones/002> on <rpool/zones/002@s10u11>.
Creating clone for <rpool/zones/002@s10u11> on <rpool/zones/002-s10u11>.
Mounting ABE <s10u11>.
ERROR: error retrieving mountpoint source for dataset <              >
ERROR: failed to mount file system <              > on </.alt.tmp.b-E.f.mnt/opt>
ERROR: unmounting partially mounted boot environment file systems
ERROR: cannot mount boot environment by icf file </etc/lu/ICF.1>
ERROR: Failed to mount ABE.
Reverting state of zones in PBE <zfsBE>.
ERROR: Unable to copy file systems from boot environment <zfsBE> to BE <s10u11>.
ERROR: Unable to populate file systems on boot environment <s10u11>.
Removing incomplete BE <s10u11>.
ERROR: Cannot make file systems for boot environment <s10u11>.
You have new mail in /var/mail//root

The OS has 2 zones:

# zoneadm list -vc
  ID NAME             STATUS     PATH                           BRAND    IP
   0 global           running    /                              native   shared
   – 001              installed  /zones/001                     native   shared
   – 002              installed  /zones/002                     native   shared

Solution:

1. First of all, you need to check that the installed zones do not share the global zone's /opt directory (run everything as the root user).

# zonecfg -z 001 info
zonename: 001
zonepath: /zones/001
brand: native
autoboot: false
bootargs:
pool:
limitpriv: default
scheduling-class:
ip-type: shared
hostid:
inherit-pkg-dir:
        dir: /lib
inherit-pkg-dir:
        dir: /platform
inherit-pkg-dir:
        dir: /sbin
inherit-pkg-dir:
        dir: /usr
net:
        address: 10.1.1.10
        physical: bge0
        defrouter: 10.1.1.254

 

As you can see, there is no /opt directory in this configuration. If it is present, change the mount path inside the zone (to /opt1, for example, to avoid duplication), as sketched below.
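
A minimal zonecfg sketch for that change, assuming /opt shows up as an fs resource in the zone's configuration (your resource type and directory may differ):

# zonecfg -z 001
zonecfg:001> select fs dir=/opt                 (pick the fs resource that maps /opt)
zonecfg:001:fs> set dir=/opt1                   (remount it inside the zone as /opt1)
zonecfg:001:fs> end
zonecfg:001> verify
zonecfg:001> commit
zonecfg:001> exit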

Do the same for all other zones.

 

The next error

ERROR: error retrieving mountpoint source for dataset <              >

 

is supposed to be fixed in LU patch 121430-93 (the latest for SPARC).

Download it and install it with the command:

# patchadd -d /opt/inst/121430-93
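
You can check whether this patch (or a newer revision of it) is already on the system before installing; showrev just greps the installed patch list, so the revision shown will differ per system:

# showrev -p | grep 121430            (list installed revisions of the LU patch, if any)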

 

In my case the patch did not help.

 

 

2. Now, to avoid warnings and issues with the zones, I decided to back them up, destroy them, and recreate them from the backup after the update:

# zoneadm -z 001 detach                                           (detach the zone for backup)

# zfs snapshot -r rpool/zones/001@v2v                             (make a ZFS snapshot of the zone)

# zfs send -rc rpool/zones/001@v2v | gzip > /opt/inst/001.zfs.gz  (make a zone backup from the snapshot)

# zoneadm -z 001 attach -u                                        (attach the zone again so it can be uninstalled cleanly)

# zonecfg -z 001 export > /opt/inst/001.conf                      (export the zone configuration to a file)

# zoneadm -z 001 uninstall -F                                     (uninstall the zone)

# zonecfg -z 001 delete -F                                        (delete the zone)
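
If there are several zones, the same per-zone steps can be wrapped in a small shell loop; a minimal sketch, assuming the zone names are 001 and 002 and that /opt/inst has enough free space for the archives:

# for Z in 001 002
> do
>   zoneadm -z $Z detach
>   zfs snapshot -r rpool/zones/$Z@v2v
>   zfs send -rc rpool/zones/$Z@v2v | gzip > /opt/inst/$Z.zfs.gz
>   zoneadm -z $Z attach -u
>   zonecfg -z $Z export > /opt/inst/$Z.conf
>   zoneadm -z $Z uninstall -F
>   zonecfg -z $Z delete -F
> done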

 

Do the same for all zones. After that you will have:

# zoneadm list -vc
  ID NAME             STATUS     PATH                           BRAND    IP
   0 global           running    /                              native   shared

 

Now I try to create the BE again with lucreate. That does not mean I simply typed the lucreate command again; I followed the whole procedure that comes before this command. If you don't know how, please look here: 1

Anyway, lucreate failed with the following errors:

# lucreate -n s10u11
Analyzing system configuration.
Updating boot environment description database on all BEs.
Updating system configuration files.
The device </dev/dsk/c0t0d0s0> is not a root device for any boot environment; cannot get BE ID.
Creating configuration for boot environment <s10u11>.
Source boot environment is <zfsBE>.
Creating file systems on boot environment <s10u11>.
Populating file systems on boot environment <s10u11>.
Analyzing zones.
Duplicating ZFS datasets from PBE to ABE.
Creating snapshot for <rpool/ROOT/zfsBE> on <rpool/ROOT/zfsBE@s10u11>.
Creating clone for <rpool/ROOT/zfsBE@s10u11> on <rpool/ROOT/s10u11>.
Mounting ABE <s10u11>.
ERROR: error retrieving mountpoint source for dataset <              >
ERROR: failed to mount file system <              > on </.alt.tmp.b-Pnf.mnt/opt>
ERROR: unmounting partially mounted boot environment file systems
ERROR: cannot mount boot environment by icf file </etc/lu/ICF.1>
ERROR: Failed to mount ABE.
Reverting state of zones in PBE <zfsBE>.
ERROR: Unable to copy file systems from boot environment <zfsBE> to BE <s10u11>.
ERROR: Unable to populate file systems on boot environment <s10u11>.
Removing incomplete BE <s10u11>.
ERROR: Cannot make file systems for boot environment <s10u11>.

 

As you can see, there are no warnings, but the errors are still here.

 

3. At this point I decided to change the mount point for /opt in the global zone. Some packages installed in /opt will fail to update, but that is not critical for the OS update. This is what I did:

# zfs unmount -f rpool/opt

# zfs set mountpoint=/opt1 rpool/opt

# zfs mount -a
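
A quick check that the dataset now really sits at /opt1 (the output below is roughly what zfs get prints; the SOURCE column should say local):

# zfs get mountpoint rpool/opt
NAME       PROPERTY    VALUE     SOURCE
rpool/opt  mountpoint  /opt1     local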

 

4. Now lucreate went fine, without any errors or warnings, and I continued with luupgrade:
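
The -s /mnt argument below points at the mounted installation media. A minimal sketch of how the update DVD ISO can be made available at /mnt with lofiadm (the image path /opt/inst/sol-10-u11-ga-sparc-dvd.iso is just an example name):

# lofiadm -a /opt/inst/sol-10-u11-ga-sparc-dvd.iso     (returns a lofi device, e.g. /dev/lofi/1)
# mount -F hsfs -o ro /dev/lofi/1 /mnt                 (mount the ISO read-only on /mnt)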

# echo "autoreg=disable" > /var/tmp/no-autoreg

# luupgrade -u -s /mnt -k /var/tmp/no-autoreg -n s10u11

WARNING: <34> packages failed to install properly on boot environment <s10u11>. (this warning is related to the packages installed in /opt, whose mount point I changed to /opt1)

INFORMATION: The file </var/sadm/system/data/upgrade_failed_pkgadds> on
boot environment <s10u11> contains a list of packages that failed to
upgrade or install properly.
INFORMATION: Review the files listed above. Remember that all of the files
are located on boot environment <s10u11>. Before you activate boot
environment <s10u11>, determine if any additional system maintenance is
required or if additional media of the software distribution must be
installed.
The Solaris upgrade of the boot environment <s10u11> is partially complete.
Installing failsafe
cp: /test/boot/sparc.miniroot: I/O error   
ERROR: Failsafe install failed.                               

 

5. I have no idea why cp failed to copy the miniroot from the ISO file, so I did it manually.

# mkdir /test

# zfs set mountpoint=/test rpool/ROOT/s10u11

# zfs mount rpool/ROOT/s10u11

# cp /mnt/boot/sparc.miniroot /test/boot/sparc.miniroot-safe    (my Solaris update DVD ISO is mounted at /mnt)

# umount /test
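
Note: the mountpoint of rpool/ROOT/s10u11 was changed to /test above; since Live Upgrade normally keeps BE root datasets at mountpoint=/ (with canmount=noauto), it is probably worth setting it back before activating the BE. A hedged sketch:

# zfs set mountpoint=/ rpool/ROOT/s10u11        (restore the mountpoint expected for a ZFS BE root)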

 

6. I continued the update with luactivate:

# lustatus
Boot Environment           Is       Active Active    Can    Copy
Name                       Complete Now    On Reboot Delete Status
-------------------------- -------- ------ --------- ------ ----------
zfsBE                      yes      yes    yes       no     -
s10u11                     yes      no     no        yes    -

 

# luactivate s10u11 

mount: rpool/ROOT/s10u11 or /tmp/.liveupgrade.5288.7272/.alt.luactivate, no such file or directory
ERROR: Unable to mount target boot environment <s10u11> root device <rpool/ROOT/s10u11> to mount point </tmp/.liveupgrade.5288.7272/.alt.luactivate>.
ERROR: ABE <s10u11> root slice <rpool/ROOT/s10u11> is not available.
ERROR: Unable to determine the configuration of the target boot environment <s10u11>.

 

This is a very strange error, because I had successfully mounted rpool/ROOT/s10u11 on the /test directory just before luactivate, so I decided to debug the luactivate command to understand the problem.

 

7. To enable debugging for all LU commands, you need to edit /etc/default/lu and set the debug level (1-20):

# vi /etc/default/lu

LU_DEBUG=10                      (I used 10 as the debug level)
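
With debugging enabled, the LU scripts print every shell command they execute; it helps to capture that output to a file so it can be searched afterwards (plain shell redirection, nothing LU-specific):

# luactivate s10u11 2>&1 | tee /var/tmp/luactivate.debug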

 

After running the luactivate command again, I found the following error:

+ /usr/sbin/mount -F ufs rpool/ROOT/111 /tmp/.liveupgrade.7926.15968/.alt.luactivate
mount: rpool/ROOT/111 or /tmp/.liveupgrade.7926.15968/.alt.luactivate, no such file or directory
ret=32

 

As you can see, luactivate tries to mount a UFS file system, but my root file system is ZFS, and that is the main source of the problem!

 

8. To check the FS type:

# fstyp /dev/rdsk/c0t0d0s0
ufs
zfs
Unknown_fstyp (multiple matches)
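
The slice apparently carries both a stale UFS superblock and the live ZFS signature. As an optional check before touching anything, the ZFS labels on the slice can also be dumped to confirm the pool itself is intact (zdb only reads the device):

# zdb -l /dev/dsk/c0t0d0s0          (prints the four ZFS labels, including the pool name and GUID)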

 

9. To fix the FS type, you need to boot the server in failsafe mode and run fsck:

# halt                             (go to the ok prompt; do this on the system console)

ok boot -F failsafe

# fsck -y /dev/rdsk/c0t0d0s0
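
After booting back into the normal environment, re-running fstyp is a simple way to confirm that the stale UFS signature is gone (ideally only one file system type should be reported now):

# fstyp /dev/rdsk/c0t0d0s0
zfs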

 

After fixing the errors and rebooting, luactivate worked as expected without any errors, and I restored the zones from the backup.

 

10. To restore the zones, I first reverted the /opt mountpoint:

# zfs unmount -f rpool/opt

# zfs set mountpoint=/opt rpool/opt

# zfs mount -a

# zonecfg -z 001 -f /opt/inst/001.conf          (create the zone configuration from the exported file)

zonecfg:001> create -a /zones/001

zonecfg:001> exit

# zoneadm -z 001 attach -U -a /opt/inst/001.zfs.gz   (restore zone files from backup)
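
Once the attach finishes, the zone should show up as installed again and can be booted (assuming the attach reported no errors):

# zoneadm list -vc                     (the restored zone should now be listed as installed)
# zoneadm -z 001 boot                  (boot the restored zone)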

 

Do it for all zones.

 

Thank you.

 

 

 

 
