BTRFS: open_ctree failed after ubuntu image deploy
by uploading a 16.04 image with this partition table:
# <file system> <mount point> <type> <options> <dump> <pass>
# / was on /dev/sda3 during installation
UUID=dfa3bf1a-9ca1-4fc4-863f-72815db61539 /     btrfs defaults,discard,relatime,subvol=@     0 1
# /boot was on /dev/sda1 during installation
UUID=0a46a7fe-23c2-4e4c-a76a-7d2410311a25 /boot ext4  defaults                               0 2
# /home was on /dev/sda5 during installation
UUID=dfa3bf1a-9ca1-4fc4-863f-72815db61539 /home btrfs defaults,discard,relatime,subvol=@home 0 2
# /opt was on /dev/sda6 during installation
UUID=719313d4-5d42-44e7-8cda-87c492b92ae6 /opt  btrfs defaults,discard                       0 2
# swap was on /dev/sda2 during installation
UUID=c5c9a2e6-885a-4ed1-aeb6-909043dae122 none  swap  sw                                     0 0
I get the above after uploading the first partition.
FOG is finishing the job, though. But if I put the image on another computer, Ubuntu starts with busybox.
I was not able to find any solution, so I hope someone can help. Thanks
Yes. As soon as there is a new partclone version out (please let us know if you hear about it first), Tom can include it in the inits and hopefully we should be fine then. But I don’t know when that will be.
ok. That means we need to wait now until it’s solved in partclone.
Thanks for the information!
Yes, Thomas Tsai is working on that.
The BTRFS code base is under heavy development, so it changes very fast. Of course we have to catch up with that.
Once Thomas has improved that, we will make another release.
On 6/17/2016 PM 03:48, sebastian.roth wrote:
I am wondering if you have heard about any known issues with partclone (0.2.88) and BTRFS?
After cloning a freshly installed Ubuntu system with btrfs, the root partition seems to be corrupted.
Thanks in advance!
@Oleg what if you use those added arguments to the host kernel args?
Have you tried a fsck? There might be some useful info coming out of that. Although this seems like a partclone issue, more info is always nice.
There are about a dozen or so btrfs tools (like btrfs-debug-tree and some more) to examine and fix those kinds of filesystems. Unfortunately I haven’t played with those tools before and don’t really know enough about btrfs to find out what is causing this BTRFS: open_ctree failed.
No problem - I’ve changed the topic to something more related to the discussed issue.
Today I tried to clone that image with other fstab mount options. I removed discard because I read that this option should not be used with BTRFS, and I also tried nospace_cache, but with no success.
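For reference, the root line I tested looked roughly like this (UUID taken from the fstab posted above; the mount options are the only part being varied):

```
UUID=dfa3bf1a-9ca1-4fc4-863f-72815db61539 / btrfs defaults,relatime,nospace_cache,subvol=@ 0 1
```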
While you’re right that my system seemed to boot up properly after cloning, I am still very concerned about those messages I posted. I just started up the system again. It booted ok, but I now get a couple of these bad tree block start messages every minute. I am trying to get in contact with the Clonezilla developers about this, as I think this is not a very special case and will hit us from time to time. I don’t think we should do raw imaging with btrfs filesystems just to circumvent this issue.
PS: Tom is right about the title. @Oleg would you mind changing the title to something appropriate?
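Just to make clear what “raw imaging” would mean as a workaround: a bit-for-bit copy of the whole partition that ignores the filesystem entirely. A sketch below uses a small stand-in file so it is runnable anywhere; on real hardware the source would be the block device (e.g. /dev/sda3), and the copy takes as long as the partition is large, used space or not - which is exactly why I’d rather see the partclone bug fixed.

```shell
# Stand-in "partition" so the sketch can run without a real device.
SRC=/tmp/demo-partition.bin
IMG=/tmp/demo-partition.img
dd if=/dev/urandom of="$SRC" bs=1M count=4 status=none

# The actual raw copy: sector-for-sector, filesystem-agnostic.
dd if="$SRC" of="$IMG" bs=4M status=none

# A raw copy is exact, so source and image must be identical.
cmp "$SRC" "$IMG" && echo "raw copy verified"
```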
I feel I should at least kind of chime in a little bit.
The issue here is not in any way, shape, or form related to the message described in the title. The “random: nonblocking pool is initialized” line is simply a kernel debug statement telling you that the pool used to randomize elements in a non-blocking fashion has been initialized. This is NOT what is causing the failure to boot after upload, nor is it interfering with BTRFS in any way.
I think @Quazz is right, at least in that we can perform a btrfs filesystem check. I doubt it will fix anything, though. See, @Sebastian-Roth has successfully imaged a system using a similar layout to yours, and while there are a few concerning error messages, the system is still operational. Maybe something else is causing issues?
Thanks for your suggestion! For normal use yours is better - in our case we have only a couple of applications which store their data in /opt. For sda2 and sda3 I think I will follow your suggestion.
Yes, you’re right - in my setting I don’t have the “lzo compressed” options in the fstab.
In your case the system comes up; in mine it does not. I will look further today to narrow down the issue.
I think if it’s a partclone “code issue”, the solution could take a while?! I’m asking because then I would have to switch to another filesystem.
@Quazz Good find, man! Although I am wondering if this is the exact same issue, as they are talking about a problem with “lzo compressed btrfs volumes”, which @Oleg does not seem to have according to his fstab.
I had a bit of time while I was waiting for some other installations today, so I set up Ubuntu 16.04 server (should be close enough to the scenario with Oleg’s Ubuntu desktop), booted it a couple of times without an issue, uploaded an image and deployed it again. My Ubuntu server is coming up and seems normal, but taking a look at /var/log/kern.log I see a lot of these messages:
BTRFS error (device sda3): bad tree block start 0 40345712
BTRFS error (device sda3): bad tree block start 0 40484864
BTRFS error (device sda3): bad tree block start 0 40091648
BTRFS error (device sda3): bad tree block start 0 40108032
BTRFS error (device sda3): bad tree block start 0 40042496
...
Notice the different numbers at the end of the lines. I am not sure what that means. Guess we need to do some more research on this, as it does not seem to be a showstopper in my case. I don’t see the BTRFS: open_ctree failed message, but I have some other btrfs related messages:
BTRFS info (device sda3): read error corrected: ino 1 off 125304832 (dev /dev/sda3 sector 261120)
BTRFS info (device sda3): read error corrected: ino 1 off 125308928 (dev /dev/sda3 sector 261128)
BTRFS info (device sda3): read error corrected: ino 1 off 125313024 (dev /dev/sda3 sector 261136)
BTRFS info (device sda3): read error corrected: ino 1 off 125317120 (dev /dev/sda3 sector 261144)
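A quick way to gauge how widespread these messages are is to count each kind in the kernel log. The sketch below uses a tiny sample log so it is self-contained; the path and sample lines are illustrative, not from my machine.

```shell
# Build a small sample log; on a real system you would instead run
#   grep 'BTRFS' /var/log/kern.log
LOG=/tmp/kern.log.sample
cat > "$LOG" <<'EOF'
BTRFS error (device sda3): bad tree block start 0 40345712
BTRFS error (device sda3): bad tree block start 0 40484864
BTRFS info (device sda3): read error corrected: ino 1 off 125304832 (dev /dev/sda3 sector 261120)
EOF

# Count each message type to see which dominates.
grep -c 'bad tree block start' "$LOG"   # -> 2
grep -c 'read error corrected' "$LOG"   # -> 1
```

If the “read error corrected” count keeps growing between boots, that would suggest btrfs is repeatedly repairing metadata from its duplicate copy, which fits the theory that partclone wrote one copy incorrectly.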
Anyone keen to dig into this and take a look at the partclone code as well? I’d love to, but I guess I won’t find the time in the near future.
@Oleg Starting to get my VM set up to test your issue, I am a bit confused about the partition layout. While I am not saying that this is causing the error, I am wondering why:
- sda1 (/boot) is about 3 GB - not bad but usually you don’t need that much for it
- sda3 (/) is around 9.5 GB - might be enough but I’d use a little more
- sda5 (/home) is around 9.5 GB - this is where users store all their data… usually need a lot more
- sda6 (/opt) is around 90 GB - usually /opt is for optional software. Do you install that many custom tools?
I am not saying that this layout is wrong. Depending on your requirements it might be very useful this way. Just saying that this is not the way I’d partition my disk.
But is it a btrfs problem? … I’ve just tried to create an image with the latest Clonezilla, but it’s the same.
As you see from your tests, it seems to be a partclone/clonezilla issue; your result confirms this as well. Although I really wonder why I can’t find anything about this on the web… there should be other people running into this issue!
Possibly I will be able to do some tests over the weekend. Keep us posted if you find anything new on this.
Sorry, I should have mentioned that at first - the last try was with trunk 8046.
Then the last question I have, I suppose.
Have you updated to the latest FOG Version and retried uploading?
In FOG - yes, all partitions have been uploaded.
The system is running fine before uploading.
I mean if I set up a clean Ubuntu 16.04 Server with BTRFS and the partition table I mentioned, then I get the same error. Tested with another Fujitsu computer, which is a couple of years old.
Also, just for clarification.
The first upload broke this system, from what I understand. Are you uploading the broken system or are you ensuring the system is operational between uploads?
Does it upload all the partitions or just /dev/sda1?