@JaredBusch said in LVM Partition resize:
@fuznutz04 yes, I did exactly that.
Then whenever things got full again, I was able to simply drop/create the one table without stopping anything.
Well, looks like I know what I'll be doing tonight.

@obsolesce said in KVM on Fedora:
@dustinb3403 said in KVM on Fedora:
I've just added
/home/user.name/vm-storage/ defaults 1 1
to fstab. Will that do?
And actually, looking in the existing directories I was going to use, nothing was there.
Why didn't you set this up before you installed Fedora using custom partitioning?
Now, why aren't you fixing it via Cockpit?
Because I'm tired and running around ragged.
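For reference, a complete fstab entry has six fields (device, mount point, filesystem type, options, dump flag, fsck pass); the line quoted above is missing the device and filesystem type. A minimal sketch, where the device path and filesystem are assumptions, not from the thread:

```shell
# Hypothetical complete fstab line for the vm-storage mount.
# /dev/vg0/vmstore and xfs are placeholders, not from the thread.
entry="/dev/vg0/vmstore /home/user.name/vm-storage xfs defaults 0 2"
set -- $entry          # split into the individual fstab fields
echo "fields: $#"      # a valid entry has exactly 6
```

Without the first two fields, mount has no idea what device to attach or how to read it.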

@travisdh1 said in Storage Setup for KVM:
@emad-r said in Storage Setup for KVM:
@eddiejennings
keep your images and ISOs in the default location of /var/lib/libvirt/images/?
Yes, I do, but I create two new folders there: iso and vm.
Fedora will be presented a 4 TB block device?
Why don't you separate that a little and have more fun? By "block device" I assume DAS. If not, why not make the storage reliable and robust by making it its own server, like another Fedora or CentOS install with RAID 10? The simplest option for sharing it is NFS. That way you can have many KVM hosts, the migration feature will actually work, and you can do RAID on just /var. You can also scale easily by adding KVM nodes; the nodes themselves can be defined in a state file (think Salt Stack), so you can treat them as pure compute nodes.
Because @EddieJennings is talking about his home lab, which will consist of a single 1U server. That hadn't been mentioned in this thread.
Bah! Folks should be able to read my mind ;). There were some good ideas in this thread though.
What I decided on was giving / enough space to live comfortably, and giving everything else to /var.
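The layout described (a modest /, everything else to /var) can be expressed at install time with custom partitioning. A hedged sketch in kickstart terms, where the 15 GiB root size and xfs are assumptions, not from the thread:

```
part /     --fstype=xfs --size=15360
part /var  --fstype=xfs --grow
```

The `--grow` directive hands all remaining space to /var, which is where libvirt keeps its images by default.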

@scottalanmiller said in Vultr, Block Storage CentOS:
@fuznutz04 said in Vultr, Block Storage CentOS:
@travisdh1 So on second thought, I'm thinking it might be a better approach to redirect the call recordings to the block device directly, without extending the LVM volume to the block device. So it would be like this:
Attach block device and create partition and file system.
Mount the new device to a new directory (/callrecordings)
In FreePBX, point the call recordings to this new directory.
This way, the VPS disk is still completely separate from the block device disk. In my head this just seems cleaner, with less potential for errors if the block device is ever unavailable.
Thoughts?
Yes, that makes way more sense.
The only thing that made me think of that was that about two weeks ago Vultr NJ had some issues with block storage. If they have an issue again, at least I could still boot the VM (although I would have to remove the block device from the fstab first; but then it should boot fine, I suppose... crosses fingers).
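On that fstab worry: rather than removing the entry by hand whenever block storage is down, the `nofail` mount option tells systemd to boot on even if the device is absent. A sketch of such an entry, where the device path and filesystem are assumptions, not from the thread:

```
/dev/vdb1  /callrecordings  xfs  defaults,nofail  0  2
```

With `nofail`, a missing block device just means the mount point stays empty instead of the boot hanging in emergency mode.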

Of course, on more modern systems the use of LVM instead of older-style partitions makes this a little more flexible, allowing more control over the process. But all of the core problems still exist.
Some vendors try to market this mechanism as "RAID virtualization", which isn't a completely crazy name given the layers of abstraction, but it makes the feature sound valuable when, in reality, it is not. Used for the purpose of enabling hot or live RAID array growth, RAID virtualization is generally a good idea. Used as a kludge to enable bad ideas, it remains bad.

@scottalanmiller said in Fedora Block Device Full How - Extend Partition:
@travisdh1 said in Fedora Block Device Full How - Extend Partition:
@scottalanmiller said in Fedora Block Device Full How - Extend Partition:
I'm late to the party. But the root is HUGE for Linux. We normally use 12GB. 20GB is the largest I would use. Why are you extending that? What's the goal in having the root be so large?
Whoever first set it up (it wasn't @DustinB3403) just dropped everything onto the root partition, against every best practice ever written for Unix/Linux.
Still, that doesn't create a need for expanding the root in most cases.
If only I had known this before helping to fix a broken system... hindsight, sadly.

@thwr said in iscsi target Path configuration:
@scottalanmiller said in iscsi target Path configuration:
On the last system I checked, the LVM and mapper paths were equivalent symlinks pointing to the same thing. I checked because I was writing documentation on CentOS.
Still interesting. Device mapper is just that: a mapper, pointing to LVM (and LUKS) devices, for example. So your mapper and LVM paths are both pointing to something "physical" like md devices or sd*?
Both point to the dm device, which is itself another layer of mapping.
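The symlink relationship being described can be illustrated outside of /dev. A sketch that recreates the layout udev builds for an LVM logical volume, using a temp directory instead of real devices; the vg0/root names are placeholders, not from the thread:

```shell
# Both /dev/mapper/<vg>-<lv> and /dev/<vg>/<lv> are symlinks to the
# same dm-N node. Recreate that layout in a scratch directory.
tmp=$(mktemp -d)
mkdir -p "$tmp/mapper" "$tmp/vg0"
touch "$tmp/dm-0"                     # stand-in for the real device node
ln -s ../dm-0 "$tmp/mapper/vg0-root"  # like /dev/mapper/vg0-root
ln -s ../dm-0 "$tmp/vg0/root"         # like /dev/vg0/root
readlink "$tmp/mapper/vg0-root"       # both resolve to ../dm-0
readlink "$tmp/vg0/root"
```

On a real system, `ls -l /dev/mapper/ /dev/<vg>/` shows the same pattern: two names, one dm node underneath.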

Like I already told you (PM/chat): Provide more info. What kind of storage? Some consumer grade NAS? Huawei? EMC? NetApp? vSAN? We need to know the brand and model, everything else is just wild guessing on our side.

@StrongBad said:
LVM is awesome, it is a great tool.
I use it all the time. Snapshots are awesome. I've just never needed to use multiple disks and didn't think of the above scenarios when you would have multiple PVs.