Why does my Docker container see more GPUs than were mapped to it?

My host machine has 4 GPUs:

    [root@c3-sa-i2-20151229-buf023 ~]# nvidia-smi
    Wed Jul 12 14:27:40 2017
    +-----------------------------------------------------------------------------+
    | NVIDIA-SMI 375.26                 Driver Version: 375.26                    |
    |-------------------------------+----------------------+----------------------+
    | GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
    | Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
    |===============================+======================+======================|
    |   0  Tesla K40m          Off  | 0000:02:00.0     Off |                    0 |
    | N/A   23C    P8    21W / 235W |      0MiB / 11439MiB |      0%      Default |
    +-------------------------------+----------------------+----------------------+
    |   1  Tesla K40m          Off  | 0000:03:00.0     Off |                    0 |
    | N/A   23C    P8    22W / 235W |      0MiB / 11439MiB |      0%      Default |
    +-------------------------------+----------------------+----------------------+
    |   2  Tesla K40m          Off  | 0000:83:00.0     Off |                    0 |
    | N/A   42C    P0   105W / 235W |   8336MiB / 11439MiB |     94%      Default |
    +-------------------------------+----------------------+----------------------+
    |   3  Tesla K40m          Off  | 0000:84:00.0     Off |                    0 |
    | N/A   23C    P8    22W / 235W |      0MiB / 11439MiB |      0%      Default |
    +-------------------------------+----------------------+----------------------+

    +-----------------------------------------------------------------------------+
    | Processes:                                                       GPU Memory |
    |  GPU       PID  Type  Process name                               Usage      |
    |=============================================================================|
    |    2      4148    C   python                                       8330MiB |
    +-----------------------------------------------------------------------------+

In the docker inspect output, only 2 GPU devices are mapped into the container:

 "Devices": [ { "PathOnHost": "/dev/nvidiactl", "PathInContainer": "/dev/nvidiactl", "CgroupPermissions": "mrw" }, { "PathOnHost": "/dev/nvidia-uvm", "PathInContainer": "/dev/nvidia-uvm", "CgroupPermissions": "mrw" }, { "PathOnHost": "/dev/nvidia0", "PathInContainer": "/dev/nvidia0", "CgroupPermissions": "mrw" }, { "PathOnHost": "/dev/nvidia1", "PathInContainer": "/dev/nvidia1", "CgroupPermissions": "mrw" }, { "PathOnHost": "/dev/fuse", "PathInContainer": "/dev/fuse", "CgroupPermissions": "mrw" } ], 

But inside the container I can see all 4 GPUs:

    root@de-3879-ng-1-021909-1176603283-2jpbx:/notebooks# ls /dev | grep nv
    nvidia-uvm
    nvidia-uvm-tools
    nvidia0
    nvidia1
    nvidia2
    nvidia3
    nvidiactl
    root@de-3879-ng-1-021909-1176603283-2jpbx:/tmp# ./nvidia-smi
    Wed Jul 12 06:31:57 2017
    +-----------------------------------------------------------------------------+
    | NVIDIA-SMI 375.26                 Driver Version: 375.26                    |
    |-------------------------------+----------------------+----------------------+
    | GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
    | Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
    |===============================+======================+======================|
    |   0  Tesla K40m          Off  | 0000:02:00.0     Off |                    0 |
    | N/A   23C    P8    21W / 235W |      0MiB / 11439MiB |      0%      Default |
    +-------------------------------+----------------------+----------------------+
    |   1  Tesla K40m          Off  | 0000:03:00.0     Off |                    0 |
    | N/A   23C    P8    22W / 235W |      0MiB / 11439MiB |      0%      Default |
    +-------------------------------+----------------------+----------------------+
    |   2  Tesla K40m          Off  | 0000:83:00.0     Off |                    0 |
    | N/A   41C    P0    98W / 235W |   8336MiB / 11439MiB |     66%      Default |
    +-------------------------------+----------------------+----------------------+
    |   3  Tesla K40m          Off  | 0000:84:00.0     Off |                    0 |
    | N/A   23C    P8    22W / 235W |      0MiB / 11439MiB |      0%      Default |
    +-------------------------------+----------------------+----------------------+

    +-----------------------------------------------------------------------------+
    | Processes:                                                       GPU Memory |
    |  GPU       PID  Type  Process name                               Usage      |
    |=============================================================================|
    +-----------------------------------------------------------------------------+
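One way to see which of those device nodes the container is actually allowed to open is to read the device-cgroup whitelist (a sketch, assuming cgroup v1 with the devices controller mounted at the usual path):

    # Inside the container: NVIDIA GPUs are character devices with major
    # number 195, and the minor number matches the GPU index
    # (nvidia0 -> 195:0, nvidia1 -> 195:1, ...; nvidiactl is 195:255).
    grep '^c 195:' /sys/fs/cgroup/devices/devices.list

If a node exists in /dev but its major:minor pair is missing from this whitelist, opening it should fail with a permission error even though the path is visible.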

Is there a way to get the device mapping information from inside the Docker container?

For example:

host /dev/nvidia0 -> container /dev/nvidia0
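The only check I can think of is comparing device numbers (a sketch; device nodes are identified by their major:minor numbers rather than their paths, so matching numbers on both sides indicate the same underlying device):

    # On the host:
    ls -l /dev/nvidia0
    # crw-rw-rw- 1 root root 195, 0 Jul 12 14:27 /dev/nvidia0   (example output)

    # Inside the container:
    ls -l /dev/nvidia0
    # If the major:minor pair (195, 0) matches the host's, both paths
    # refer to the same physical GPU.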

And can I trust what docker inspect reports?
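For what it's worth, the device list can be pulled out of inspect directly (the container name here is taken from the shell prompt above; substitute your own container name or ID):

    # Print only the device mappings Docker recorded for this container.
    docker inspect --format '{{json .HostConfig.Devices}}' \
      de-3879-ng-1-021909-1176603283-2jpbx | python -m json.tool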