Dell 7730 precision laptop deploy GPT error message
-
@Sebastian-Roth Everything I've found on this issue refers to using the disk's UUID to identify which drive to apply the image to. That doesn't help us much, as every drive in a system has its own UUID, so how do we identify which is which? Nothing from SATA to PATA to NVMe is guaranteed to be a persistent naming scheme in Linux. Luckily, SATA and PATA seem to follow the channel pattern of how they're connected and named. With NVMe sitting on a PCIe channel, enumeration depends on how fast a disk feels like revealing itself to the system.
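Just to illustrate, you can see which physical drive landed on which kernel name on any given boot with something like this (the serial and model stay with the physical disk, while the name is whatever enumeration handed out):

lsblk -d -o NAME,SIZE,SERIAL,MODEL
# NAME is assigned at enumeration and can swap between boots;
# SERIAL and MODEL identify the physical drive.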
-
@Tom-Elliott You are spot on! The only thing I came up with so far is saving each disk's sector count (in multiple-disk mode only) and trying to match those again on deployment. Kind of ugly and possibly error-prone, but I could give it a try.
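Roughly what I have in mind for the capture side, just a sketch (the .size file names are made up for the example, not something FOG writes today):

# Record each disk's size in 512-byte sectors next to the image.
blockdev --getsz /dev/nvme0n1 > /images/host1/d1.size
blockdev --getsz /dev/nvme1n1 > /images/host1/d2.size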
-
@Sebastian-Roth said in Dell 7730 precision laptop deploy GPT error message:
What is your deadline to get those devices imaged?
I have until mid-March before my first full implementation with these new training laptops. I can always image them individually via USB until a working solution is found (aka someone learns how to control the NVMe and its feelings about revealing itself).
-
@Tom-Elliott said in Dell 7730 precision laptop deploy GPT error message:
@Sebastian-Roth Everything I've found on this issue refers to using the disk's UUID to identify which drive to apply the image to. That doesn't help us much, as every drive in a system has its own UUID.
When registering a host into FOG, you'd have to store the UUIDs of the drives and then specify which one would be your disk0/sda, which disk1/sdb, etc., … just thinking out loud is all.
Then on deploy, if the UUID fields and their mappings are set, you use those; otherwise you operate as usual.
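As a rough sketch of what I mean for the registration side (nothing like this exists in FOG yet), it could grab each disk's GPT GUID and store it with the host record:

# Print the partition table UUID (GPT disk GUID) for each drive.
blkid -o value -s PTUUID /dev/nvme0n1
blkid -o value -s PTUUID /dev/nvme1n1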
-
@jmason The problem isn’t finding the UUID, it’s that the UUID for the disk will be different for each disk.
What do I mean?
One 7730 with 2 NVME drives will have different UUID’s.
Another 7730 with 2 NVME drives (identically sized of course) will also have different UUID’s.
Does this make sense?
-
@Tom-Elliott said in Dell 7730 precision laptop deploy GPT error message:
@jmason The problem isn’t finding the UUID, it’s that the UUID for the disk will be different for each disk.
What do I mean?
One 7730 with 2 NVME drives will have different UUID’s.
Another 7730 with 2 NVME drives (identically sized of course) will also have different UUID’s.
Does this make sense?
Yes, it makes sense, but I failed to convey my thought.
My thought was that there might be some way, when you do a full registration on each host machine, to have an option (requiring user input) to map each NVMe drive and its UUID to a FOG-specific parameter/field (disk0/sda, disk1/sdb, etc.) stored in the database.
Then during deploy, if the parameters for the drives are present for the host machine, you would have the info needed to match the images up based on the actual UUIDs, and it wouldn't matter what the init order of the NVMe drives is.
It would require user input to perform the mapping, it would be optional, and it would only be checked/used for multi-disk non-resizable.
On registration: "Do you wish to register your drives for use in multi-disk capture/deploy operations?" There could maybe even be an option for the UUIDs to be entered manually from the web GUI, but it would be best to capture the UUIDs during host registration.
So the needed info would not be saved with the image, but with the host machine's information in the database.
Not sure if that’s feasible, but just a thought.
-
The problem is the NVMe drives are loading randomly. Essentially, one time a drive comes up as nvme0n1 and the next it's nvme1n1.
Using the UUID would work, but only for the machine on which you capture the image. Basically, if you go down this route, you would essentially require an image for each machine.
Unless you manage to gather all machines’ UUID information, this just isn’t feasible.
Basically What I’m saying,
First: 7730 500GB SSD NVME and 1TB SSD NVME. 500GB UUID 0000-xxxx-0000-xxxx, 1TB UUID 0001-xxxx-0000-xxxx
Second: 7730 500GB SSD NVME and 1 TB SSD NVME. 500GB UUID 0001-xxxa-0001-xxxa, 1TB UUID 0002-xxxz-0000-xxxzYou see what I mean?
Each machine’s drives will have their own UUID’s. So simply put, you would need to know all machine’s UUID information, and inserted into the DB to clarify which one.
Of course, our code doesn't yet support this either. I imagine it wouldn't be too difficult to enable, but it basically removes the autonomous element, at least for these machines.
With NVMe, it's the drive labeling determined at enumeration that keeps changing. With SATA and PATA, enumeration happened too, but the channels (SATA0 through SATA3, or however many your machine had) would enumerate to Linux in order of their channel number. This made /dev/sda always the disk on SATA0, /dev/sdb the one on SATA1, and so on.
In the case of PATA, the naming was likewise assigned by enumeration, but the Master slot on channel 0 would always be /dev/hda, while the Slave slot on channel 1 would be /dev/hdd.
Hopefully this helps clarify more what I was trying to get at.
-
@jmason I think we’re saying the same thing now, but it would entail a ton more work. It would also leave a lot to the person registering in ensuring information is accurate too.
-
@Tom-Elliott Yes, I think so, but it's the only thing I've been able to come up with. I'm sure all the other "imaging" devs are dealing with the same issue, or soon will be. I'm thinking that if the system had come with one NVMe and one SATA drive it probably wouldn't have been an issue, but I'm not sure. These just have 2 NVMe drives, and I guess that might become more prevalent as time progresses…
Ah well… maybe you or @Sebastian-Roth will have a revelation of some kind. I haven't thought of anything else here.
-
I wonder if by-path would be a better option?
https://wiki.archlinux.org/index.php/persistent_block_device_naming#by-id_and_by-path
As far as I can tell in the pictures, the by-path portion relies on the PCI ID.
In the pictures I see 0000:02:00.0 and 0000:03:00.0 consistently. Or is this where the problem resides?
I suppose by-id could also work, though we'd need to look at two or three machines and see how different the IDs are between them.
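Something along these lines is what I mean; the output below is illustrative (made up from your PCI IDs), not from an actual 7730:

ls -l /dev/disk/by-path/
# pci-0000:02:00.0-nvme-1 -> ../../nvme0n1
# pci-0000:03:00.0-nvme-1 -> ../../nvme1n1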
-
@Tom-Elliott said in Dell 7730 precision laptop deploy GPT error message:
I wonder if by-path would be a better option?
https://wiki.archlinux.org/index.php/persistent_block_device_naming#by-id_and_by-path
As far as I can tell in the pictures, the by-path portion relies on the PCI ID.
In the pictures I see 0000:02:00.0 and 0000:03:00.0 consistently. Or is this where the problem resides?
I suppose by-id could also work, though we'd need to look at two or three machines and see how different the IDs are between them.
The PCI IDs were consistent.
-
@jmason So the same disk at 0000:02:00.0 was always the same-sized NVMe drive?
-
@Tom-Elliott Just reviewed: no, the associated sizes were not the same. It appears only the PCI IDs assigned to nvme0 and nvme1 were consistent.
-
@Tom-Elliott @Sebastian-Roth
So this is mostly way over my head, but there is some brief mention of dealing with NVMe on Dell systems: https://www.dell.com/support/article/us/en/04/sln312382/nvme-on-rhel7?lang=en
About midway down the page it talks about how to pull information on each device, which might be helpful.
After looking further, this works with the PCI ID, or slot ID as they refer to it. It seems odd: if it is tied to a specific piece of hardware, how could its size change on a given reboot? Or does the system just get it completely mixed up? I guess it is only tied to the memory controller; the output of lspci -s SLOTID -v is the same except for "Memory at b5400000" for ID 02:00.0 and "Memory at b5300000" for ID 03:00.0.
And I'm way over my head, so I think I'll stick to general things from here on out, lol.
From what I'm seeing on other forums regarding this issue, most are apparently using some method to deal with it like the one you and I arrived at earlier.
Looking at nvme commands in Linux now… for fun, I guess, heh.
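In case it helps anyone following along, this is roughly how the info can be pulled (nvme list needs the nvme-cli package; the sysfs symlink shows which PCI slot each controller sits on):

# Namespaces the kernel sees this boot (name, serial, model, size).
nvme list
# Which PCI slot each controller was enumerated from.
readlink -f /sys/class/nvme/nvme0/device
readlink -f /sys/class/nvme/nvme1/device
# Then inspect the controller itself, as the Dell article does:
lspci -s 02:00.0 -v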
-
@jmason @Tom-Elliott Although I kind of liked the idea you both came up with at first, the more I think about it the less I can see it being a user-friendly and reliable solution. On top of that, it would mean a huge change in FOG. Not that I wanna block those kinds of changes, not at all, but I would only want to go that way if it's an appropriate solution.
Adding a simple sector count check is not much to implement, and it would work in most situations (at least those I can think of so far). Even if the two disks are the same size it wouldn't hurt, because deploying to the "wrong" one is not a problem in that case.
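In rough shell terms, the deploy-side check would be something like this (file and variable names are made up for the sketch, matching the sector counts saved at capture time):

# Pick the disk whose sector count matches the one saved at capture.
WANTED=$(cat /images/host1/d1.size)
for disk in /dev/nvme0n1 /dev/nvme1n1; do
  if [ "$(blockdev --getsz "$disk")" -eq "$WANTED" ]; then
    target=$disk
    break
  fi
done
echo "Deploying disk 1 image to $target"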
-
@Sebastian-Roth said in Dell 7730 precision laptop deploy GPT error message:
@jmason @Tom-Elliott
Adding a simple sector count check is not much to implement, and it would work in most situations (at least those I can think of so far). Even if the two disks are the same size it wouldn't hurt, because deploying to the "wrong" one is not a problem in that case.
This would definitely be true for me if my systems had 2 identically sized hard drives, as we would be imaging them both. I wouldn't really care which one it picked as long as both were available at boot.
Could you make the functionality optional via some kind of checkbox when multi-disk non-resizable is selected? Then it wouldn't affect everyone using that selection unless they chose to use it.
-
@jmason said in Dell 7730 precision laptop deploy GPT error message:
Could you make the functionality optional via some kind of checkbox when multi-disk non-resizable is selected?
Probably can, but I don't see why this would affect other users at all. The "All Disks" option is non-resizable, and therefore trying to allocate the image to the right disk using a sector count shouldn't really hurt anyone.
-
@Sebastian-Roth Well, if you move forward with this, just let me know when you want some testing done.
-
@Sebastian-Roth @Tom-Elliott One thing I realized today is that when the deploy fails, it reboots, and that gives the system a chance to initialize the drives the way the master image expects.
Initially I assumed the key to making this work for my setup was ensuring the smaller drive was the first drive in the captured master image, so that it didn't attempt to deploy the smaller drive's image onto the larger drive and then fail when attempting to put the larger drive's image onto the smaller drive. I'm not sure that is actually necessary.
So I hooked up 10 of my laptops to the switch today and deployed to the group. About half failed on the first startup, but on the next reboot all of them initialized the drives as the master image expected.
This might not work well for a system with more than 2 NVMe drives being imaged, so I'll still help test anything you guys come up with that needs testing. But I'm fairly satisfied even with the failure-and-reboot behavior, and I'm hoping the drives will init correctly on the next boot each time.
-
@jmason Well, that is definitely not too bad an idea: just let it retry often enough until it doesn't fail anymore. While this will keep you from getting under time pressure, it's not an ideal solution. I will let you know when I have something ready to test.