These are my notes for migrating my VM storage from an NFS mount to Ceph hosted on Proxmox. I ran into a lot of bumps, but after getting proper server-grade SSDs, things have been humming along smoothly long enough that it's time to publish.

I had a significant amount of trouble getting Ceph to work with consumer-grade SSDs. This is because Ceph issues a cache-flush (writeback) call for each transaction, much like NFS. On my ZFS array I could disable this behavior, but not so for Ceph. It wasn't until I got some Intel DC S3700 drives that Ceph became reliable and fast.

I used the Proxmox GUI to install Ceph on each node by going to the node's Ceph panel. Then I used the GUI to create a monitor, manager, and OSD on each host. Lastly, I used the GUI to create a Ceph storage target in the Datacenter config.

My Proxmox cluster is small (3 nodes), and I discovered I didn't have enough space for 3 replicas (the default Ceph configuration), so I had to drop my pool size/min down to 2/1 despite the usual warnings not to do so; a 3-node cluster is a special case. I have not had any problems with this configuration, and it provides the space I need.

In my early testing, I discovered that if I removed a disk from the pool, the size of the pool increased! After some reading in the Red Hat documentation, I learned the basics of why this happened: in Ceph, a pool's "size" is the number of copies of the data in the pool, not its capacity.
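Going back to the SSD problem above: the workload that hurts consumer drives is small synchronous writes at queue depth 1, since every write has to reach stable storage before the next one is acknowledged. A quick fio run approximates this pattern. This is a sketch, not from my original setup; `/dev/sdX` is a placeholder for the drive under test, and the test writes raw to the device, destroying anything on it.

```bash
# Approximate Ceph's sync-write pattern: 4k synchronous writes, queue depth 1.
# WARNING: this writes directly to the device and destroys its contents.
fio --filename=/dev/sdX --direct=1 --sync=1 --rw=write --bs=4k \
    --numjobs=1 --iodepth=1 --runtime=60 --time_based \
    --group_reporting --name=ceph-sync-write-test
```

Drives with power-loss protection (like the DC S3700) can acknowledge these flushes from capacitor-backed cache, which is a big part of why they hold up so much better here than consumer drives.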
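For reference, the GUI install and daemon-creation steps have shell equivalents via the pveceph tool. This is a sketch, assuming `/dev/sdb` is the blank disk destined for the OSD; subcommand names vary slightly between Proxmox versions (older releases used `createmon`/`createmgr`/`createosd`).

```bash
# Run on each node. Installs the Ceph packages (same as the GUI wizard).
pveceph install

# Create a monitor and a manager on this node.
pveceph mon create
pveceph mgr create

# Turn a blank disk into an OSD.
pveceph osd create /dev/sdb
```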
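The pool and the Datacenter storage target can be handled from the shell as well. Another sketch, with `vm-pool` and `ceph-vm` as hypothetical names for the pool and the storage ID:

```bash
# Create the pool with 2 copies of the data, allowing I/O with only 1 copy present.
pveceph pool create vm-pool --size 2 --min_size 1

# Or adjust an existing pool's replica counts.
ceph osd pool set vm-pool size 2
ceph osd pool set vm-pool min_size 1

# Register the pool as a VM storage target (Datacenter -> Storage in the GUI).
pvesm add rbd ceph-vm --pool vm-pool --content images

# Check capacity; remember "size" is the replica count, not bytes.
ceph df
```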