Yes, you should. If one of the drives dies and takes the swap with it, your system simply crashes. Under normal operation Linux should hardly touch swap anyway, so performance will not suffer much. Finally, Linux software RAID performance is very good, so the additional overhead for swap is negligible.
- Install Debian Squeeze 64 bit (6.0 Squeeze)
- Use LVM setup, and ext3 (for best performance)
- Proxmox setup per se
- echo "deb http://download.proxmox.com/debian squeeze pve" >> /etc/apt/sources.list
- wget -O- "http://download.proxmox.com/debian/key.asc" | apt-key add -
- aptitude update
- aptitude full-upgrade
- aptitude install pve-firmware
- Without this firmware package, some systems cannot be reached over the network after the reboot
- aptitude install pve-kernel-2.6.32-14-pve
- This kernel is quite recent, as of 22.08.2012 (updated 1 week ago)
- uname -a
- you should now have the proxmox kernel: Linux 2.6.32-14-pve …
- aptitude install proxmox-ve-2.6.32
- optional: a2ensite pve-redirect.conf
- This enables access to Proxmox on the ports 80 and 443
- If you want to have the ports 80 and 443 available for other uses, you should consider editing the Apache setup and changing these ports.
- We also recommend changing port 8006, which Proxmox binds to by default, to another, non-obvious port for better security
- optional: /etc/init.d/apache2 restart
- only necessary if you enabled the additional ports
- aptitude install ntp ssh lvm2 postfix ksm-control-daemon vzprocps
- accept the suggestion to remove Exim and configure postfix according to your network (may not be necessary with Debian minimal)
Installing Proxmox VE on Debian Squeeze :: References:
- Install Proxmox VE on Debian Squeeze
- Hetzner guide to installing Proxmox VE on Debian Squeeze (German)
- Wiki @ SCNU (French) – Use Google Chrome and translate, very good and extensive article
Proxmox and LVM
Short answer: not much. It is better to use LVM.
“Proxmox VE can use local directories or locally mounted shares for storage (Virtual disk images, ISO images, or backup files). This is the least flexible, least efficient storage solution, but is very similar to the NFS method, where images are stored on an existing filesystem as large files.”
This is a quote from Proxmox Wiki, Storage Model
Essentially you will still be able to set up virtual machines and containers, but the performance of virtual machines may suffer.
There is another downside to going without LVM:
Proxmox offers a “snapshot” mode to backup containers. In this mode, your container keeps running and there is absolutely no downtime. (Filesystem response may be a bit slower during the backup, though.)
This mode is ONLY available if the filesystem the container resides on is located on a Logical Volume. Proxmox will create a snapshot of this Logical Volume and roll it into a backup.
If you try this on a filesystem residing on a normal partition, Proxmox will fall back to suspend mode:
INFO: mode failure - unable to detect lvm volume group
INFO: trying 'suspend' mode instead
Essentially this means that your container will have some downtime while syncing. (The backup will also take somewhat longer, as it syncs twice to keep the downtime short.)
If you have LVM, you can still opt to make a backup without using the snapshot mode (select mode = suspend in the dialog) if you are worried about its performance (see below).
We recommend running Proxmox on LVM, and leaving some empty space in the Volume Group so that the snapshot backup mode can be used.
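A snapshot-mode backup as described above could be triggered like this; the container ID 101 is a placeholder, and the command is guarded so it only runs on a host that actually has vzdump:

```shell
# Sketch: back up container 101 in the no-downtime snapshot mode.
# vzdump falls back to suspend mode if the container is not on LVM.
VEID=101
MODE=snapshot
if command -v vzdump >/dev/null 2>&1; then
    vzdump "$VEID" --mode "$MODE"
else
    echo "vzdump not available here; command shown for reference only"
fi
```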
- pvcreate /dev/sdb
Sets up /dev/sdb as a Physical Volume to be used in Volume Groups
- vgcreate new_volume_group /dev/sdb
Creates the Volume Group new_volume_group on the Physical Volume /dev/sdb
- pvdisplay
Shows you the available Physical Volumes
- lvdisplay
Shows you the Logical Volumes with details.
- pvs
Short list of Physical Volumes
- lvs
Short list of Logical Volumes
- Open Proxmox Web Interface, select ‘Datacenter’
- On the Storage tab, click 'Add' and select 'Add LVM group'
- Select the new Volume Group created with vgcreate (see above) and give the storage a name
The name cannot be changed later!
- This new storage will only store Images (=> only Virtual Machines, not containers).
- use lvdisplay to view the newly created logical volumes after installing new virtual machines
- Reference: Proxmox Storage Model
No. An LVM group added to Proxmox can only store Images – that is, KVM virtualised machines.
There is no option in the 'Add LVM group' dialog to choose the storage contents; after creation, the content type is automatically "Images".
To set up containers in the space of this Volume Group, create a new Logical Volume, format it with a filesystem of your liking (ext3 is recommended for best performance!), mount it, and add it as storage. You will be able to store containers in this new storage if you select the 'Containers' option in the dialog.
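The steps just described could be sketched like this; the Volume Group name "pve", the LV name, size, and mount point are all assumptions, and the commands are guarded so they only run where the VG actually exists:

```shell
# Sketch: new ext3 Logical Volume for container storage.
VG=pve
LV=containers
MNT=/var/lib/vz2
if command -v lvcreate >/dev/null 2>&1 && vgs "$VG" >/dev/null 2>&1; then
    lvcreate -L 100G -n "$LV" "$VG"   # carve the LV out of the VG
    mkfs.ext3 "/dev/$VG/$LV"          # ext3, as recommended above
    mkdir -p "$MNT"
    mount "/dev/$VG/$LV" "$MNT"
    # persist across reboots:
    echo "/dev/$VG/$LV $MNT ext3 rw,relatime,data=ordered 0 0" >> /etc/fstab
else
    echo "LVM tools or VG '$VG' missing; commands shown for reference only"
fi
```

Afterwards, add the mount point as storage in the web interface and enable containers for it.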
No. When creating a new Virtual Machine, and selecting a LVM group storage you can only use the RAW format.
If you need one of the others go with file-based storage.
The backups will be written to a filesystem-based container as compressed images of the RAW volume, including the Virtual Machine configuration information.
INFO: adding '/dev/test-group/vm-101-disk-1' to archive ('vm-disk-virtio0.raw')
LVM snapshot backups create a “frozen” mirror image of the logical volume you want to make a snapshot of. They use the so-called copy-on-write technique: if some data is changed in the logical volume, the original data to be overwritten is copied out to the snapshot first.
The snapshot can be mounted and read from like the original Logical Volume; you can even write to it (please read the manpages for more information about extended usage).
Proxmox will automatically use LVM snapshots for live backups if the filesystem the containers reside on is located on a Logical Volume and you select the "snapshot" mode. You also need enough free space in the Volume Group for Proxmox to create the snapshot volume; a fraction of the actual volume is enough, as only the delta is copied.
You can also create manual snapshots of the Logical Volumes. Please be aware that there will be a performance impact.
lvcreate -s => create a snapshot volume
The snapshot volume will have a fixed size initially, but can be grown using lvextend.
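A manual snapshot cycle could look like this; the VG/LV names and the sizes are assumptions, and the commands are guarded so they only run if the origin volume exists:

```shell
# Sketch: manual LVM snapshot of LV "vz" in VG "pve".
VG=pve
ORIGIN=vz
SNAP=vzsnap
if command -v lvcreate >/dev/null 2>&1 && lvs "/dev/$VG/$ORIGIN" >/dev/null 2>&1; then
    lvcreate -s -L 2G -n "$SNAP" "/dev/$VG/$ORIGIN"  # 2G of copy-on-write space
    lvs                                # the Data% column shows how full it is
    lvextend -L +1G "/dev/$VG/$SNAP"   # grow it before it fills up completely
    lvremove -f "/dev/$VG/$SNAP"       # a snapshot that fills up becomes invalid
else
    echo "origin LV not found; commands shown for reference only"
fi
```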
The Volume Groups may contain non-contiguous volumes, i.e. some space on the hard disk / Physical Volume may be empty in between Logical Volumes, and Logical Volumes can be non-contiguous themselves (the logical extents making up one Logical Volume may be located in different spots on the hard drive).
This can happen if you create several virtual machines and delete one of them, or run out of disk space to create one contiguous Logical Volume. Naturally, it can also happen when you extend Logical Volumes.
It is possible to move logical volumes with pvmove to fix this problem.
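Emptying a Physical Volume with pvmove could be sketched like this; /dev/sdb as the PV is an assumption, and the command is guarded so it only runs where that PV exists:

```shell
# Sketch: move all allocated extents off one Physical Volume.
PV=/dev/sdb
if command -v pvmove >/dev/null 2>&1 && pvs "$PV" >/dev/null 2>&1; then
    pvmove "$PV"   # relocates the extents onto the other PVs in the same VG
else
    echo "PV $PV not available; command shown for reference only"
fi
```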
LVM as such apparently does not impact performance much. See this website for some opinions and further links.
This site has some numbers comparing the performance of LVM and ext2 using bonnie++.
The Proxmox team recommends using ext3 instead of ext4 as the default file system on Logical Volumes under the Proxmox kernel.
There will be a huge performance impact when writing to a "snapshotted" volume – initial writes slow down by up to 90 %, while reads are only affected by about 15 %. These figures are for snapshot and source volume residing on the same physical disk. Work is underway to improve the performance, but as yet there is no clear statement that this problem has been completely resolved.
Please keep that in mind when manually creating snapshots.
- Proxmox Forum – the Proxmox team decides to go with ext3 as default for their installer
- Software and Hardware RAID performance with Proxmox
- LVM Snapshot performance
- Should I expect snapshot origin LVs to be 10x slower?
In general, shrinking is more complicated than extending. Modern filesystems support extension on-the-fly; for shrinking, you have to take the filesystem (and thus the containers on it) offline.
- Some information about shrinking ext4 inside a logical volume
User management & privileges in Proxmox
The resource pool enables you to combine several resources under a pool “handle” and to set user / group rights for the entire pool of resources. These resources are currently:
- virtual machines
- storage (you can also add LVM Volume Groups to the resource pool)
You can have several pools, and several users / groups with different permissions for each pool. All in all, a very powerful way to set up rights and privileges on your server(s).
For example, you have three virtual machines you want a junior administrator to manage. Additionally, you want the junior admin to be able to create her own machines, but only in the special storage you specify. She should be able to use your shared templates and ISOs for new machine creation, but should not be able to download new templates on her own. This is how you do it:
- You create a new resource pool
- You add the three virtual machines
- You add the storage
- Note that you have to add storage containing container templates / ISOs for the junior admin to be able to set up new virtual machines.
- If you have separate datastores for these, they can also be added to a different pool, with the junior admin having the PVEDatastoreUser role for that pool.
- If you desire read-only access to this datastore, it should only have the ability to store ISOs and templates, not containers / images (PVEDatastoreUser allows creating containers / images in the datastore!)
- The junior admin will not be able to upload templates / ISOs – this requires the Datastore.AllocateTemplate privilege.
- You add the junior administrator with PVEAdmin rights.
- The junior admin might need to log out and log in again for the "Create CT / Create VM" buttons to work
That's it! Now your junior admin can administer the machines you gave her and create new ones, all in the way you intend her to.
- Proxmox Wiki, User management
Containers and KVM virtualisation
- Set up SSH key authentication from the old machine to the new machine (see this document for more details on the following three steps)
- create RSA key on source machine (“source-m”)
- copy the RSA key pub to the target machine (“target-m” @ target-m.com)
- append the RSA key to /root/.ssh/authorized_keys2
- ssh -2 -v firstname.lastname@example.org
You should now be able to enter the target machine without providing a password using this command
- vzmigrate -r no <target-m-IP> <VE-ID>
Migrates the VE with the ID <VE-ID> to the target machine. You NEED to specify the IP here, a domain is not accepted by the script.
- The -r no option instructs the script to keep your machine on the old server, too
- The <VE-ID>.conf will be moved to <VE-ID>.conf.migrated on your old server
- Yes, this also works while the container is running.
- The container will keep all its current settings, including IP and nameservers, and it will be set up in exactly the same path as before
- Optional: shut down the container and move its files to a directory of your choice on the new machine (private directory; the root directory will be created automatically)
- You HAVE to shut down the container to move its location; editing of the .conf file should also be done AFTER shutting down and moving the container.
- Edit the corresponding config file on the new server, add / modify ORIGIN_SAMPLE as follows:
- Also modify the location of the root and the private files of the container, if you changed it earlier.
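The migration steps above could be sketched like this; the target IP and VEID are placeholders, and since vzmigrate only exists on an OpenVZ/Proxmox host, the commands are guarded:

```shell
# Sketch: migrate container 101 to another host, keeping a local copy.
TARGET=203.0.113.10   # must be an IP address -- vzmigrate rejects hostnames
VEID=101
if command -v vzmigrate >/dev/null 2>&1; then
    ssh-keygen -t rsa                               # on the source machine
    # append our public key to the target's authorized_keys2:
    cat ~/.ssh/id_rsa.pub | \
        ssh root@"$TARGET" 'cat >> /root/.ssh/authorized_keys2'
    ssh -2 -v root@"$TARGET" true                   # verify password-less login
    vzmigrate -r no "$TARGET" "$VEID"               # -r no keeps the source copy
else
    echo "vzmigrate not available here; commands shown for reference only"
fi
```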
Moving OpenVZ Containers to Proxmox VE :: References
- Use VirtIO for disk and network for best performance
- Use cache=none
- disable USB tablet device in Windows VMs
- use ext3 as filesystem (unless you have a SSD – then you have to use ext4 because of TRIM)
- modify ext3 mount options => rw,relatime,data=ordered 0 0
- relatime = Update inode access times relative to modify or change time. Access time is only updated if the previous access time was earlier than the current modify or change time. (Similar to noatime, but doesn’t break mutt or other applications that need to know if a file has been read since the last time it was modified.) Can help performance (see atime options).
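Applied to /etc/fstab, the mount-option change above might look like this (the device path and mount point are assumptions for illustration):

```text
/dev/pve/data  /var/lib/vz  ext3  rw,relatime,data=ordered  0  0
```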
Performance Tweaks :: Reference
- It is also possible to add additional virtual harddrives to extend diskspace
- Read this Wiki entry from the Proxmox Wiki on resizing disks
ksm-control-daemon: not correctly installed
You may get the message above if you run pveversion -v.
KSM is Kernel SamePage Merging – it allows similar guest machines to share memory pages and thus save memory. If it is absent, the only downside is higher memory usage.
To install it, use
apt-get install ksm-control-daemon
- Backup and restore of LVM data
- Migration of Servers to Proxmox VE
- Proxmox VE 2.0 Documentation Category (Proxmox Wiki)
- Turnkey Linux appliances – these are installable over the Proxmox web interface. May set up rather old versions of the software, e.g. Zimbra. Upside: it's really easy to use. Have a look at their page for more info.
- CSNU Wiki on Proxmox 2 – in French, a very good resource for all kinds of information, e.g. LVM usage, hard disk monitoring, and more. Google Chrome translates it into pretty good English.
- Logical Volume Management article on Wikipedia
- Proxmox Roadmap and Changelog
- Proxmox Forum for VE 2.x: Installation and configuration
- Bugzilla for Proxmox
- ProxmoxVE Channel on YouTube
- Proxmox Command Line Tools