"A start job is running for dev-disk-by\x2duuid..." and similar errors

VastOne

Quote from: lwfitz on October 26, 2016, 02:29:10 AM
This has been an ongoing issue for me for at least six months. I posted about it and was never able to get any movement with it.

When and where?  I missed this
VSIDO      VSIDO Change Blog    

    I dev VSIDO

jedi

OK, just rebooted my HP laptop.  It had been 'up' for about 2 weeks.  This is the one I recently posted a scrot of at 101 days of uptime.  On reboot, no joy.  This is just a regular plain jane laptop with a liteonit SSD drive. (SATA2?)
I did a nano /etc/fstab from the emergency mode and added 'nofail' to the fstab.  Booted after that, though it seemed to take a little longer than normal...
Forum Netiquette

"No matter how smart you are you can never convince someone stupid that they are stupid."  Anonymous

hakerdefo

"nofail" should be avoided as it'll keep trying to mount device even in its absense. Instead make your 'fstab' option to look like this,
noauto,x-systemd.automount,x-systemd.device-timeout=2 0 2

Try the above and the wait should be over, hopefully :)
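For reference, a complete fstab entry using those options might look like this (the UUID and mount point are placeholders; substitute your own values, e.g. from the output of blkid):

```
# <file system>                            <mount point>  <type>  <options>                                              <dump> <pass>
UUID=0a1b2c3d-0000-0000-0000-000000000000  /media/data    ext4    noauto,x-systemd.automount,x-systemd.device-timeout=2  0      2
```

With this in place systemd mounts the device on first access instead of at boot, and gives up after 2 seconds if the device isn't there.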

Cheers!!!
You Can't Always Git What You Want

Snap

Great tip, hakerdefo.

Do you know if there's a timeout option for systems running with sysvinit?

hakerdefo

Quote from: Snap on October 28, 2016, 06:38:31 AM
Great tip, hakerdefo.

Do you know if there's a timeout option for systems running with sysvinit?
Hi there Snap,
You can add,
nobootwait
to the fstab entry to avoid the wait.
Cheers!!!

PackRat

Just posting this comparison out of curiosity (since mine is piqued by this issue); can't really comment on it.

This is from jedi's original post -

Quote### first '6' lines are the original fstab for the VSIDO install
### we'll call this section 1.
#proc      /proc   proc   defaults   0   0
#UUID=bda2cbcd-9696-419b-baf4-2e875fc1279a   /   ext4   defaults,noatime   0   1
#UUID=730da2a3-5aef-4654-908e-865d3ed8f8aa   /home   ext4   defaults,noatime   0   2
#UUID=efe98bd0-1254-40b4-8378-61811c50da34   swap   linux-swap   defaults   0   0
#UUID=9E77-1B41   /boot/efi   vfat   defaults   0   1

and this is the [near] default fstab from a clean install of Debian Testing I did yesterday:

# /etc/fstab: static file system information.
#
# Use 'blkid' to print the universally unique identifier for a
# device; this may be used with UUID= as a more robust way to name devices
# that works even if disks are added and removed. See fstab(5).
#
# <file system> <mount point>   <type>  <options>       <dump>  <pass>
# / was on /dev/sda1 during installation
UUID=dcf6fce4-beb2-4bdf-aa37-5961cec9638b /               ext4    errors=remount-ro 0       1
# swap was on /dev/sda2 during installation
UUID=cab27a4f-73f1-45e2-95ae-0e18985aa857 none            swap    sw              0       0
/dev/sr0        /media/cdrom0   udf,iso9660 user,noauto     0       0
#
UUID=4b6da59d-e2d8-4328-8212-dd10d480b301    /media/vgroup    ext4    defaults    0    1
#


The last line, for /media/vgroup, I added manually after creating a logical volume group out of the remaining drive space (3 HDDs on this old computer).

Interesting that there are some differences from what I'm used to seeing (fstab was always similar to jedi's in the past), particularly on the swap line; kind of wish I had made a /home with this install. Not sure if any of this is relevant to jedi's or lwfitz's issues.
I am tired of talk that comes to nothing.
-- Chief Joseph

...the sun, the darkness, the winds are all listening to what we have to say.
-- Geronimo

jedi

Quote from: hakerdefo on October 28, 2016, 05:54:46 AM
"nofail" should be avoided as it'll keep trying to mount device even in its absense. Instead make your 'fstab' option to look like this,
noauto,x-systemd.automount,x-systemd.device-timeout=2 0 2
Try the above and the wait should be over, hopefully :)
Cheers!!!

hakerdefo, thanks for all your great input!!!  You say ""nofail" should be avoided as it'll keep trying to mount the device even in its absence," and I'm wondering what exactly that means. If the only things in fstab happen to be the internal drives on the machine, and the drives all load correctly, is this not OK? I guess I don't understand what you mean when you say they'll continue to 'try to mount'.

Also, do those 'options' you listed take into account the presence of SSD drives?  Pretty easy to destroy one if you make a mistake...

With all the reading some of us are doing on this, it looks like this is the most informative place on the net for this issue!!!  Your insight has proven invaluable!  Keep us informed of any new ideas you may have regarding this...


jedi

PackRat, I suppose I should have mentioned, it isn't "completely/exactly" the original VSIDO installed fstab file.  I removed the commented out lines like "# / was on /dev/sda1 during installation"...

jedi

I've also just changed to PARTUUID, as it appears that is the preferred naming method when it comes to GUID and GPT disks (i.e. disks partitioned as GPT rather than msdos).

Using the PARTUUID= resulted in an endless bootloop on login.  In other words, it would allow me to type in the password, wait for an agonizingly long time, then when it looked like it was going to boot in, it just sent me back to the lightdm login screen...

Mine is working fine, with no delays, no errors, and no changes other than the two fixes described above.  I also implemented hakerdefo's fstab options as well and all seems beautiful to me

EDIT: FUCK GUID and EFI. The term GUID is generally used by developers working with Microsoft technologies, while UUID is used everywhere else.
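For anyone trying the same thing, the two identifiers come from different places and are easy to mix up. A quick way to compare them (using /dev/sda2 purely as an example device):

```shell
# Filesystem UUID (what UUID= in fstab matches)
blkid -s UUID -o value /dev/sda2
# GPT partition UUID (what PARTUUID= matches) - a different value entirely
blkid -s PARTUUID -o value /dev/sda2
```

UUID= identifies the filesystem and PARTUUID= identifies the GPT partition entry, so copying a filesystem UUID into a PARTUUID= field (or vice versa) leaves the entry pointing at nothing.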

PackRat

Quotehakerdefo, thanks for all your great input!!!  You say ""nofail" should be avoided as it'll keep trying to mount the device even in its absence," and I'm wondering what exactly that means.

I think that was pretty much aimed at this post by snap -

QuoteI just want to point out that this sounds more like a (welcome) workaround than a true solution. IIRC, the nofail option silently (thankfully) ignores errors and keeps the thing going. I use that option for external drives included in fstab that may or may not be present. With the nofail option there are no complaints if the drive is missing, and the rest of the drives mount normally. Otherwise it stalls (with whatever init).

since he has used the nofail option on external drives (NFS or samba shares also come to mind); if the drive is detached or powered down, the system will keep trying to mount it. Since yours is an internal drive with the / partition, that won't be the case. But, as snap pointed out, the nofail option is more of an [inelegant?] workaround, not an actual solution.

hakerdefo

"nofail" option will keep trying to mount a device even if it's not available till the time-out is reached (default is 90 seconds). And the boot process will continue after the time-out period. In the absence of "nofail" option the boot process won't continue even after the time-out is reached. But thanks to "systemd" we can avoid the "nofail" option and use a better formula like this,
noauto,x-systemd.automount,x-systemd.device-timeout=2 0 2
This entry will prevent "mount" from trying to auto-mount the device and leave that to the "systemd" daemon which is a bit more smart and a bit more flexible at this particular task. And a '2' second time-out period will ensure that the boot process won't stall in the absence of the device.
Regarding the SSD drives you can safely use the above formula. And I would suggest you to add following options to your SSD entry in the fstab for some performance benefits,
noatime,discard
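Put together, an fstab entry for an SSD root partition with those options might look like this (the UUID is a placeholder; note that noauto/x-systemd.automount don't apply to the root filesystem itself, which must mount at boot):

```
# SSD root: noatime skips access-time writes, discard enables inline TRIM
UUID=0a1b2c3d-0000-0000-0000-000000000000  /  ext4  defaults,noatime,discard  0  1
```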

Cheers!!!

jedi

Thanks PackRat and hakerdefo.  I missed Snap's post.  Sorry 'bout that.

@hakerdefo, The 'discard' option enables TRIM on drives that often have no need of it.  A better option, instead of 'discard', would be to just add a monthly cron job to make sure TRIM is run at least once a month.  On newer SSD drives, the 'discard' option can be quite hazardous to your drive.  Another quick note: if you leave adequate empty, unpartitioned space on an SSD drive, there is no need for TRIM at all.
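A minimal sketch of that monthly cron approach, assuming util-linux's fstrim is available (the script filename and log path are just examples):

```shell
#!/bin/sh
# /etc/cron.monthly/fstrim-all (example filename; anything executable in
# /etc/cron.monthly runs once a month on most Debian-style systems)
# Trim every mounted filesystem that supports discard and log the result.
/sbin/fstrim --all --verbose >> /var/log/fstrim.log 2>&1
```

Running TRIM on a schedule like this gives the same benefit as the discard mount option without issuing a discard on every delete.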

I did add your advised entries to my fstab with great success! (not the discard)  Utilizing PARTUUID in /etc/fstab DID NOT work. This is beyond the scope of this thread, so I'm not pursuing it. (Associating GUID with UUID has no bearing here and was my mistake.)

hakerdefo

Hi jedi,
I stand corrected! The "discard" option's benefits vary from model to model. I don't have an SSD [way way way over my budget ;)] so I have no personal experience, but the ext4 developer advises using the following script to do the job instead of the "discard" option. You can run the script via a cron job to automate the task. Here is the script,

wiper.sh

Cheers!!!

Snap

Wow, this thread has become really informative on the subject. Thanks, guys, for all the contributions.

QuoteUsing the PARTUUID= resulted in an endless bootloop on login.  In other words, it would allow me to type in the password, wait for an agonizingly long time, then when it looked like it was going to boot in, it just sent me back to the lightdm login screen...

Really? I would never have thought of fstab or UUIDs in a boot loop like this. I would go crazy for weeks trying to fix something related to the X server! ...which is not the case, as you pointed out. Obviously fstab ain't what it used to be.


Snap

@hakerdefo: tried the nobootwait option and it didn't work quite well. If the drive is present it doesn't automount any more, and if the drive is off the boot sequence gets interrupted with errors. Back to nofail until I find anything better.