How to clear Docker's thinpool device

I run Docker on a Red Hat system with the devicemapper storage driver and a thinpool device, as recommended for production systems. Now, whenever I want to reinstall Docker, I need two steps (sketched below):

1) Delete the Docker directory (in my case /area51/docker)
2) Clear the thinpool device
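
In shell terms, the procedure looks roughly like this (a sketch of my own steps; the data-root /area51/docker is from my setup, and step 2 is exactly the part this question is about):

  systemctl stop docker
  # step 1: remove the Docker data directory
  rm -rf /area51/docker
  # step 2: clear the thinpool device -- this is the step I don't know how to do correctly
  # (what I tried so far is described below)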

The Docker documentation states that when devicemapper is used with the dm.metadatadev and dm.datadev options, the easiest way to clean up devicemapper is:

If you are setting up a new metadata pool, it needs to be valid. This can be achieved by zeroing the first 4k to indicate empty metadata, like this:

$ dd if=/dev/zero of=$metadata_dev bs=4096 count=1 

Unfortunately, according to the documentation, dm.metadatadev is deprecated, and it says to use dm.thinpooldev instead.

My thinpool was created along the lines of these Docker instructions, so my setup now looks like this:

  cat /etc/docker/daemon.json
  {
    "storage-driver": "devicemapper",
    "storage-opts": [
      "dm.thinpooldev=/dev/mapper/thinpool_VG_38401-thinpool",
      "dm.basesize=18G"
    ]
  }
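
For context, the pool itself was created more or less as in the Docker devicemapper production guide, along these lines (a sketch from memory; the underlying block device /dev/sdX and the exact sizes are placeholders, only the VG name thinpool_VG_38401 and the LV name thinpool match my setup):

  # dedicate a block device to an LVM volume group
  pvcreate /dev/sdX
  vgcreate thinpool_VG_38401 /dev/sdX
  # create the data and metadata volumes
  lvcreate --wipesignatures y -n thinpool thinpool_VG_38401 -l 95%VG
  lvcreate --wipesignatures y -n thinpoolmeta thinpool_VG_38401 -l 1%VG
  # convert them into a thin pool
  lvconvert -y --zero n -c 512K --thinpool thinpool_VG_38401/thinpool --poolmetadata thinpool_VG_38401/thinpoolmeta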

Under /dev/mapper I can see the following thinpool devices:

  ls -l /dev/mapper/thinpool_VG_38401-thinpool*
  lrwxrwxrwx 1 root root 7 Dec 6 08:31 /dev/mapper/thinpool_VG_38401-thinpool -> ../dm-8
  lrwxrwxrwx 1 root root 7 Dec 6 08:31 /dev/mapper/thinpool_VG_38401-thinpool_tdata -> ../dm-7
  lrwxrwxrwx 1 root root 7 Dec 6 08:31 /dev/mapper/thinpool_VG_38401-thinpool_tmeta -> ../dm-6
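
The pool's state can also be queried directly from device mapper and LVM (shown here only for illustration; the exact dmsetup status output format depends on the kernel):

  # thin-pool target line: used/total metadata blocks and data blocks
  dmsetup status thinpool_VG_38401-thinpool
  # list all LVs, including the hidden _tdata/_tmeta volumes
  lvs -a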

So, after Docker had been running successfully, I tried to reinstall as described above and clear the thinpool by writing 4K of zeroes into the tmeta device, then start Docker again:

  dd if=/dev/zero of=/dev/mapper/thinpool_VG_38401-thinpool_tmeta bs=4096 count=1
  systemctl start docker

Starting Docker failed with:

  docker.service - Docker Application Container Engine
     Loaded: loaded (/usr/lib/systemd/system/docker.service; disabled; vendor preset: disabled)
     Active: failed (Result: exit-code) since Wed 2017-12-06 10:28:46 UTC; 10s ago
       Docs: https://docs.docker.com
    Process: 1566 ExecStart=/usr/bin/dockerd -G uwsgi --data-root=/area51/docker -H unix:///var/run/docker.sock (code=exited, status=1/FAILURE)
   Main PID: 1566 (code=exited, status=1/FAILURE)
     Memory: 236.0K
     CGroup: /system.slice/docker.service

  Dec 06 10:28:45 yoda3 systemd[1]: Starting Docker Application Container Engine...
  Dec 06 10:28:45 yoda3 dockerd[1566]: time="2017-12-06T10:28:45.816049000Z" level=info msg="libcontainerd: new containerd process, pid: 1577"
  Dec 06 10:28:46 yoda3 dockerd[1566]: time="2017-12-06T10:28:46.816966000Z" level=warning msg="failed to rename /area51/docker/tmp for background deletion: renam...chronously"
  Dec 06 10:28:46 yoda3 dockerd[1566]: Error starting daemon: error initializing graphdriver: devmapper: Unable to take ownership of thin-pool (thinpool_VG_38401-...data blocks
  Dec 06 10:28:46 yoda3 systemd[1]: docker.service: main process exited, code=exited, status=1/FAILURE
  Dec 06 10:28:46 yoda3 systemd[1]: Failed to start Docker Application Container Engine.
  Dec 06 10:28:46 yoda3 systemd[1]: Unit docker.service entered failed state.
  Dec 06 10:28:46 yoda3 systemd[1]: docker.service failed.

I assumed I could fix the "Unable to take ownership of thin-pool" error by rebooting. But after the reboot, trying to start Docker again gave me the following error:

  systemctl status docker
  ● docker.service - Docker Application Container Engine
     Loaded: loaded (/usr/lib/systemd/system/docker.service; disabled; vendor preset: disabled)
     Active: failed (Result: exit-code) since Wed 2017-12-06 10:30:37 UTC; 2min 29s ago
       Docs: https://docs.docker.com
    Process: 3180 ExecStart=/usr/bin/dockerd -G uwsgi --data-root=/area51/docker -H unix:///var/run/docker.sock (code=exited, status=1/FAILURE)
   Main PID: 3180 (code=exited, status=1/FAILURE)
     Memory: 37.9M
     CGroup: /system.slice/docker.service

  Dec 06 10:30:36 yoda3 systemd[1]: Starting Docker Application Container Engine...
  Dec 06 10:30:36 yoda3 dockerd[3180]: time="2017-12-06T10:30:36.893777000Z" level=warning msg="libcontainerd: makeUpgradeProof could not open /var/run/docker/lib...containerd"
  Dec 06 10:30:36 yoda3 dockerd[3180]: time="2017-12-06T10:30:36.901958000Z" level=info msg="libcontainerd: new containerd process, pid: 3224"
  Dec 06 10:30:37 yoda3 dockerd[3180]: Error starting daemon: error initializing graphdriver: devicemapper: Non existing device thinpool_VG_38401-thinpool
  Dec 06 10:30:37 yoda3 systemd[1]: docker.service: main process exited, code=exited, status=1/FAILURE
  Dec 06 10:30:37 yoda3 systemd[1]: Failed to start Docker Application Container Engine.
  Dec 06 10:30:37 yoda3 systemd[1]: Unit docker.service entered failed state.
  Dec 06 10:30:37 yoda3 systemd[1]: docker.service failed.

So apparently writing zeroes into the tmeta device was not the right approach; it seems to have destroyed my thinpool device instead.
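
For reference, this is roughly how I would inspect what is left of the pool after the reboot (a sketch, assuming the VG is named thinpool_VG_38401 as the device names above suggest):

  # is the thin pool LV still known to LVM, and is it active?
  lvs -a thinpool_VG_38401
  # try to (re)activate it; with zeroed metadata this will presumably fail
  lvchange -ay thinpool_VG_38401/thinpool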

Can anyone here tell me the correct procedure for clearing the thinpool device? Preferably the solution should not require a reboot.