How nvidia-docker configures GPUs

The doPrestart function of the Docker prestart hook
https://github.com/opencontainers/runtime-spec

Prestart
The prestart hooks MUST be called after the start operation is called but before the user-specified program command is executed. On Linux, for example, they are called after the container namespaces are created, so they provide an opportunity to customize the container (e.g. the network namespace could be specified in this hook).
Note: prestart hooks were deprecated in favor of createRuntime, createContainer and startContainer hooks, which allow more granular hook control during the create and start phase.
The prestart hooks' path MUST resolve in the runtime namespace. The prestart hooks MUST be executed in the runtime namespace.

Hook:
The lifecycle of a container can be extended by hooking into external applications, thereby extending the functionality of an OCI-compliant runtime. Example use cases include sophisticated network configuration, volume garbage collection, and so on.
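Hooks of this kind live in the hooks section of the container's OCI config.json. A minimal illustrative fragment (the path and args mirror what the nvidia hook injection produces; the exact layout depends on the installation):

```json
{
  "hooks": {
    "prestart": [
      {
        "path": "/usr/bin/nvidia-container-runtime-hook",
        "args": ["/usr/bin/nvidia-container-runtime-hook", "prestart"]
      }
    ]
  }
}
```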

nvidia-container-toolkit 项目:

Newer nvidia-docker releases added this project. It implements the hook function and passes the GPU-related arguments on to nvidia-container-cli, which performs the actual GPU configuration of the container. The concrete invocation looks like the following, and it is executed whenever a container is created or restarted:
/usr/bin/nvidia-container-cli --load-kmods --debug=/var/log/nvidia-container-toolkit.log configure --ldconfig=@/sbin/ldconfig --device=all --compute --utility --pid=78717 /var/lib/docker/overlay2/6ac97e95475e9df0f32f7e2f7251ca053651c62292d1a5127c71d33e55904d2b/merged


func doPrestart() {
	var err error

	defer exit()
	log.SetFlags(0)

	hook := getHookConfig()
	cli := hook.NvidiaContainerCLI
	// Read the container's configuration.
	container := getContainerConfig(hook)
	// Extract the GPU-related settings.
	nvidia := container.Nvidia
	if nvidia == nil {
		// Not a GPU container, nothing to do.
		return
	}

	rootfs := getRootfsPath(container)
	// Locate the installed nvidia-container-cli; everything below builds the
	// argument list for that command, which performs the actual GPU configuration.
	args := []string{getCLIPath(cli)}
	if cli.Root != nil {
		args = append(args, fmt.Sprintf("--root=%s", *cli.Root))
	}
	if cli.LoadKmods {
		args = append(args, "--load-kmods")
	}
	if cli.NoPivot {
		args = append(args, "--no-pivot")
	}
	if *debugflag {
		args = append(args, "--debug=/dev/stderr")
	} else if cli.Debug != nil {
		args = append(args, fmt.Sprintf("--debug=%s", *cli.Debug))
	}
	if cli.Ldcache != nil {
		args = append(args, fmt.Sprintf("--ldcache=%s", *cli.Ldcache))
	}
	if cli.User != nil {
		args = append(args, fmt.Sprintf("--user=%s", *cli.User))
	}
	args = append(args, "configure")

	if cli.Ldconfig != nil {
		args = append(args, fmt.Sprintf("--ldconfig=%s", *cli.Ldconfig))
	}
	if cli.NoCgroups {
		args = append(args, "--no-cgroups")
	}
	// Devices resolved from the GPU environment variables or volume mounts.
	if len(nvidia.Devices) > 0 {
		args = append(args, fmt.Sprintf("--device=%s", nvidia.Devices))
	}
	// MIG configuration.
	if len(nvidia.MigConfigDevices) > 0 {
		args = append(args, fmt.Sprintf("--mig-config=%s", nvidia.MigConfigDevices))
	}
	if len(nvidia.MigMonitorDevices) > 0 {
		args = append(args, fmt.Sprintf("--mig-monitor=%s", nvidia.MigMonitorDevices))
	}

	for _, cap := range strings.Split(nvidia.DriverCapabilities, ",") {
		if len(cap) == 0 {
			break
		}
		args = append(args, capabilityToCLI(cap))
	}

	if !hook.DisableRequire && !nvidia.DisableRequire {
		for _, req := range nvidia.Requirements {
			args = append(args, fmt.Sprintf("--require=%s", req))
		}
	}

	args = append(args, fmt.Sprintf("--pid=%s", strconv.FormatUint(uint64(container.Pid), 10)))
	args = append(args, rootfs)

	env := append(os.Environ(), cli.Environment...)
	// The resulting command line is typically:
	// /usr/bin/nvidia-container-cli --load-kmods --debug=/var/log/nvidia-container-toolkit.log configure --ldconfig=@/sbin/ldconfig --device=all --compute --utility --pid=78717 /var/lib/docker/overlay2/6ac97e95475e9df0f32f7e2f7251ca053651c62292d1a5127c71d33e55904d2b/merged
	err = syscall.Exec(args[0], args, env)
	log.Panicln("exec failed:", err)
}

The GPU devices are taken from environment variables: envSwarmGPU is consulted first, then the variables in envVars; if none of them is set, the behaviour falls back to what the image implies (legacy CUDA images default to all devices).

func getDevicesFromEnvvar(env map[string]string, legacyImage bool) *string {
	// Build a list of envvars to consider.
	envVars := []string{envNVVisibleDevices}
	if envSwarmGPU != nil {
		// The Swarm envvar has higher precedence.
		envVars = append([]string{*envSwarmGPU}, envVars...)
	}

	// Grab a reference to devices from the first envvar
	// in the list that actually exists in the environment.
	var devices *string
	for _, envVar := range envVars {
		if devs, ok := env[envVar]; ok {
			devices = &devs
			break
		}
	}

	// Environment variable unset with legacy image: default to "all".
	if devices == nil && legacyImage {
		all := "all"
		return &all
	}

	// Environment variable unset or empty or "void": return nil.
	if devices == nil || len(*devices) == 0 || *devices == "void" {
		return nil
	}

	// Environment variable set to "none": reset to "".
	if *devices == "none" {
		empty := ""
		return &empty
	}

	// Any other value.
	return devices
}
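The precedence rules can be exercised with a small standalone sketch. This is a re-implementation for illustration only, not the toolkit's actual package; the variable names mirror the real code, and the Swarm variable name is passed in explicitly:

```go
package main

import "fmt"

// resolveDevices mirrors the precedence logic of getDevicesFromEnvvar:
// the Swarm variable wins over NVIDIA_VISIBLE_DEVICES, legacy images
// default to "all", and an empty value or "void" means "not a GPU container".
func resolveDevices(env map[string]string, swarmVar string, legacyImage bool) *string {
	order := []string{"NVIDIA_VISIBLE_DEVICES"}
	if swarmVar != "" {
		order = append([]string{swarmVar}, order...)
	}
	var devices *string
	for _, k := range order {
		if v, ok := env[k]; ok {
			devices = &v
			break
		}
	}
	if devices == nil && legacyImage {
		all := "all"
		return &all
	}
	if devices == nil || *devices == "" || *devices == "void" {
		return nil
	}
	if *devices == "none" {
		empty := ""
		return &empty
	}
	return devices
}

func main() {
	env := map[string]string{
		"DOCKER_RESOURCE_GPU":    "GPU-2fb041ff",
		"NVIDIA_VISIBLE_DEVICES": "all",
	}
	fmt.Println(*resolveDevices(env, "DOCKER_RESOURCE_GPU", false)) // the Swarm variable wins
	fmt.Println(resolveDevices(map[string]string{}, "", false))    // nil: not a GPU container
	fmt.Println(*resolveDevices(map[string]string{}, "", true))    // legacy image defaults to "all"
}
```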

The nvidia-container-runtime project

Integrates with the OCI runtime and injects the hook function.

This project adds the prestart function implemented by nvidia-container-toolkit as a hook in the runtime spec.

func addNVIDIAHook(spec *specs.Spec) error {
	path, err := exec.LookPath("nvidia-container-runtime-hook")
	if err != nil {
		path = hookDefaultFilePath
		_, err = os.Stat(path)
		if err != nil {
			return err
		}
	}

	if fileLogger != nil {
		fileLogger.Printf("prestart hook path: %s\n", path)
	}

	args := []string{path}
	if spec.Hooks == nil {
		spec.Hooks = &specs.Hooks{}
	} else if len(spec.Hooks.Prestart) != 0 {
		for _, hook := range spec.Hooks.Prestart {
			if !strings.Contains(hook.Path, "nvidia-container-runtime-hook") {
				continue
			}
			if fileLogger != nil {
				fileLogger.Println("existing nvidia prestart hook in OCI spec file")
			}
			return nil
		}
	}
	// Append the hook.
	spec.Hooks.Prestart = append(spec.Hooks.Prestart, specs.Hook{
		Path: path,
		Args: append(args, "prestart"),
	})

	return nil
}
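The injection itself can be sketched with local stand-in types. The real code uses the opencontainers/runtime-spec specs-go package; Hook and Hooks below are simplified look-alikes defined just for this example:

```go
package main

import (
	"encoding/json"
	"fmt"
	"strings"
)

// Simplified stand-ins for specs.Hook / specs.Hooks.
type Hook struct {
	Path string   `json:"path"`
	Args []string `json:"args,omitempty"`
}
type Hooks struct {
	Prestart []Hook `json:"prestart,omitempty"`
}

// addHook appends the nvidia prestart hook unless one is already present,
// mirroring the duplicate check in addNVIDIAHook.
func addHook(hooks *Hooks, path string) {
	for _, h := range hooks.Prestart {
		if strings.Contains(h.Path, "nvidia-container-runtime-hook") {
			return // already injected
		}
	}
	hooks.Prestart = append(hooks.Prestart, Hook{
		Path: path,
		Args: []string{path, "prestart"},
	})
}

func main() {
	hooks := &Hooks{}
	addHook(hooks, "/usr/bin/nvidia-container-runtime-hook")
	addHook(hooks, "/usr/bin/nvidia-container-runtime-hook") // second call is a no-op
	out, _ := json.MarshalIndent(hooks, "", "  ")
	fmt.Println(string(out))
}
```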

The libnvidia-container project

This is the core project of nvidia-docker: it mounts the GPU driver, the related shared libraries, and the GPUs visible to the container into the container. The cli code in this project wraps these operations behind the nvidia-container-cli client, which the nvidia-container-toolkit project invokes to perform the configuration.

Mounting the GPU driver

configure_command() {

	....

	/* Mount the driver */
	if (nvc_driver_mount(nvc, cnt, drv) < 0) {
		warnx("mount error: %s", nvc_error(nvc));
		goto fail;
	}
	/* Mount the devices */
	for (size_t i = 0; i < devices.ngpus; ++i) {
		if (nvc_device_mount(nvc, cnt, devices.gpus[i]) < 0) {
			warnx("mount error: %s", nvc_error(nvc));
			goto fail;
		}
	}

	....

}
int
nvc_driver_mount(struct nvc_context *ctx, const struct nvc_container *cnt, const struct nvc_driver_info *info)
{
	const char **mnt, **ptr, **tmp;
	size_t nmnt;
	int rv = -1;

	if (validate_context(ctx) < 0)
		return (-1);
	if (validate_args(ctx, cnt != NULL && info != NULL) < 0)
		return (-1);

	if (ns_enter(&ctx->err, cnt->mnt_ns, CLONE_NEWNS) < 0)
		return (-1);

	nmnt = 2 + info->nbins + info->nlibs + cnt->nlibs + info->nlibs32 + info->nipcs + info->ndevs;
	mnt = ptr = (const char **)array_new(&ctx->err, nmnt);
	if (mnt == NULL)
		goto fail;

	/* Procfs mount */
	/* Bind the driver's procfs view into the container. */
	if (ctx->dxcore.initialized)
		log_warn("skipping procfs mount on WSL");
	else if ((*ptr++ = mount_procfs(&ctx->err, ctx->cfg.root, cnt)) == NULL)
		goto fail;

	/* Application profile mount */
	if (cnt->flags & OPT_GRAPHICS_LIBS) {
		if (ctx->dxcore.initialized)
			log_warn("skipping app profile mount on WSL");
		else if ((*ptr++ = mount_app_profile(&ctx->err, cnt)) == NULL)
			goto fail;
	}

	/* Host binary and library mounts */
	if (info->bins != NULL && info->nbins > 0) {
		if ((tmp = (const char **)mount_files(&ctx->err, ctx->cfg.root, cnt, cnt->cfg.bins_dir, info->bins, info->nbins)) == NULL)
			goto fail;
		ptr = array_append(ptr, tmp, array_size(tmp));
		free(tmp);
	}
	if (info->libs != NULL && info->nlibs > 0) {
		if ((tmp = (const char **)mount_files(&ctx->err, ctx->cfg.root, cnt, cnt->cfg.libs_dir, info->libs, info->nlibs)) == NULL)
			goto fail;

	.............

	/* Device mounts */
	for (size_t i = 0; i < info->ndevs; ++i) {
		/* XXX Only compute libraries require specific devices (e.g. UVM). */
		if (!(cnt->flags & OPT_COMPUTE_LIBS) && major(info->devs[i].id) != NV_DEVICE_MAJOR)
			continue;
		/* XXX Only display capability requires the modeset device. */
		if (!(cnt->flags & OPT_DISPLAY) && minor(info->devs[i].id) == NV_MODESET_DEVICE_MINOR)
			continue;
		if (!(cnt->flags & OPT_NO_DEVBIND)) {
			if ((*ptr++ = mount_device(&ctx->err, ctx->cfg.root, cnt, &info->devs[i])) == NULL)
				goto fail;
		}
		if (!(cnt->flags & OPT_NO_CGROUPS)) {
			if (setup_cgroup(&ctx->err, cnt->dev_cg, info->devs[i].id) < 0)
				goto fail;
		}
	}
	rv = 0;
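The two XXX-guarded skips in the device loop can be restated as a small predicate. A Go re-statement for illustration only; the numeric values follow the common NVIDIA convention (major 195 for the nvidia character devices, minor 254 for /dev/nvidia-modeset) and should be treated as assumptions of this sketch:

```go
package main

import "fmt"

const (
	nvDeviceMajor        = 195 // major of the nvidia character devices (assumed)
	nvModesetDeviceMinor = 254 // minor of /dev/nvidia-modeset (assumed)
)

// shouldMount mirrors the two skips in nvc_driver_mount's device loop:
// without compute libraries, only devices with the nvidia major are mounted
// (nvidia-uvm is only needed by compute workloads); without the display
// capability, the modeset device is skipped.
func shouldMount(major, minor uint, computeLibs, display bool) bool {
	if !computeLibs && major != nvDeviceMajor {
		return false
	}
	if !display && minor == nvModesetDeviceMinor {
		return false
	}
	return true
}

func main() {
	fmt.Println(shouldMount(195, 255, true, false)) // nvidiactl with compute libs
	fmt.Println(shouldMount(234, 0, false, false))  // nvidia-uvm without compute libs
	fmt.Println(shouldMount(195, 254, true, false)) // modeset without display capability
}
```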

Debugging

After docker and the nvidia-docker packages are installed on a node, the debug log of nvidia-container-toolkit can be enabled by editing /etc/nvidia-container-runtime/config.toml, as shown below:

disable-require = false
#swarm-resource = "DOCKER_RESOURCE_GPU"
#accept-nvidia-visible-devices-envvar-when-unprivileged = true
#accept-nvidia-visible-devices-as-volume-mounts = false

[nvidia-container-cli]
#root = "/run/nvidia/driver"
path = "/usr/bin/nvidia-container-cli"
environment = []
debug = "/var/log/nvidia-container-toolkit.log"
#ldcache = "/etc/ld.so.cache"
load-kmods = true
#no-cgroups = false
#user = "root:video"
ldconfig = "@/sbin/ldconfig"

[nvidia-container-runtime]
#debug = "/var/log/nvidia-container-runtime.log"

Running a container that uses a GPU from the command line now produces the following entries in /var/log/nvidia-container-toolkit.log. The log is long, but it shows the mount process in detail:

docker run -it --env NVIDIA_VISIBLE_DEVICES=GPU-2fb041ff-6df3-4d00-772d-efb3139a17a1  tensorflow:1.14-cuda10-py36 bash

The full log output:


-- WARNING, the following logs are for debugging purposes only --

I1222 07:23:32.708870 78755 nvc.c:282] initializing library context (version=1.3.0, build=16315ebdf4b9728e899f615e208b50c41d7a5d15)
I1222 07:23:32.709176 78755 nvc.c:256] using root /
I1222 07:23:32.709208 78755 nvc.c:257] using ldcache /etc/ld.so.cache
I1222 07:23:32.709230 78755 nvc.c:258] using unprivileged user 65534:65534
I1222 07:23:32.709284 78755 nvc.c:299] attempting to load dxcore to see if we are running under Windows Subsystem for Linux (WSL)
I1222 07:23:32.709595 78755 nvc.c:301] dxcore initialization failed, continuing assuming a non-WSL environment
I1222 07:23:32.719348 78759 nvc.c:192] loading kernel module nvidia
I1222 07:23:32.720360 78759 nvc.c:204] loading kernel module nvidia_uvm
I1222 07:23:32.720862 78759 nvc.c:212] loading kernel module nvidia_modeset
I1222 07:23:32.721604 78760 driver.c:101] starting driver service
I1222 07:23:33.987396 78755 nvc_container.c:364] configuring container with 'compute utility supervised'
I1222 07:23:33.988276 78755 nvc_container.c:212] selecting /var/lib/docker/overlay2/6ac97e95475e9df0f32f7e2f7251ca053651c62292d1a5127c71d33e55904d2b/merged/usr/local/cuda-10.1/compat/libcuda.so.418.87.00
I1222 07:23:33.988497 78755 nvc_container.c:212] selecting /var/lib/docker/overlay2/6ac97e95475e9df0f32f7e2f7251ca053651c62292d1a5127c71d33e55904d2b/merged/usr/local/cuda-10.1/compat/libnvidia-fatbinaryloader.so.418.87.00
I1222 07:23:33.988619 78755 nvc_container.c:212] selecting /var/lib/docker/overlay2/6ac97e95475e9df0f32f7e2f7251ca053651c62292d1a5127c71d33e55904d2b/merged/usr/local/cuda-10.1/compat/libnvidia-ptxjitcompiler.so.418.87.00
I1222 07:23:33.989099 78755 nvc_container.c:384] setting pid to 78717
I1222 07:23:33.989130 78755 nvc_container.c:385] setting rootfs to /var/lib/docker/overlay2/6ac97e95475e9df0f32f7e2f7251ca053651c62292d1a5127c71d33e55904d2b/merged
I1222 07:23:33.989153 78755 nvc_container.c:386] setting owner to 0:0
I1222 07:23:33.989175 78755 nvc_container.c:387] setting bins directory to /usr/bin
I1222 07:23:33.989197 78755 nvc_container.c:388] setting libs directory to /usr/lib/x86_64-linux-gnu
I1222 07:23:33.989218 78755 nvc_container.c:389] setting libs32 directory to /usr/lib/i386-linux-gnu
I1222 07:23:33.989240 78755 nvc_container.c:390] setting cudart directory to /usr/local/cuda
I1222 07:23:33.989261 78755 nvc_container.c:391] setting ldconfig to @/sbin/ldconfig (host relative)
I1222 07:23:33.989283 78755 nvc_container.c:392] setting mount namespace to /proc/78717/ns/mnt
I1222 07:23:33.989305 78755 nvc_container.c:394] setting devices cgroup to /sys/fs/cgroup/devices/system.slice/docker-dd3c4f0a0409255fba62518899d9e59fda64172f730513b788001e06d594ba8f.scope
I1222 07:23:33.989356 78755 nvc_info.c:680] requesting driver information with ''
I1222 07:23:33.993981 78755 nvc_info.c:169] selecting /usr/lib64/vdpau/libvdpau_nvidia.so.455.23.05
I1222 07:23:33.994516 78755 nvc_info.c:169] selecting /usr/lib64/libnvoptix.so.455.23.05
I1222 07:23:33.994732 78755 nvc_info.c:169] selecting /usr/lib64/libnvidia-tls.so.455.23.05
I1222 07:23:33.994866 78755 nvc_info.c:169] selecting /usr/lib64/libnvidia-rtcore.so.455.23.05
I1222 07:23:33.995003 78755 nvc_info.c:169] selecting /usr/lib64/libnvidia-ptxjitcompiler.so.455.23.05
I1222 07:23:33.995213 78755 nvc_info.c:169] selecting /usr/lib64/libnvidia-opticalflow.so.455.23.05
I1222 07:23:33.995409 78755 nvc_info.c:169] selecting /usr/lib64/libnvidia-opencl.so.455.23.05
I1222 07:23:33.995538 78755 nvc_info.c:169] selecting /usr/lib64/libnvidia-ngx.so.455.23.05
I1222 07:23:33.995666 78755 nvc_info.c:169] selecting /usr/lib64/libnvidia-ml.so.455.23.05
I1222 07:23:33.995855 78755 nvc_info.c:169] selecting /usr/lib64/libnvidia-ifr.so.455.23.05
I1222 07:23:33.996048 78755 nvc_info.c:169] selecting /usr/lib64/libnvidia-glvkspirv.so.455.23.05
I1222 07:23:33.996180 78755 nvc_info.c:169] selecting /usr/lib64/libnvidia-glsi.so.455.23.05
I1222 07:23:33.996302 78755 nvc_info.c:169] selecting /usr/lib64/libnvidia-glcore.so.455.23.05
I1222 07:23:33.996429 78755 nvc_info.c:169] selecting /usr/lib64/libnvidia-fbc.so.455.23.05
I1222 07:23:33.996630 78755 nvc_info.c:169] selecting /usr/lib64/libnvidia-encode.so.455.23.05
I1222 07:23:33.996811 78755 nvc_info.c:169] selecting /usr/lib64/libnvidia-eglcore.so.455.23.05
I1222 07:23:33.996942 78755 nvc_info.c:169] selecting /usr/lib64/libnvidia-compiler.so.455.23.05
I1222 07:23:33.997068 78755 nvc_info.c:169] selecting /usr/lib64/libnvidia-cfg.so.455.23.05
I1222 07:23:33.997266 78755 nvc_info.c:169] selecting /usr/lib64/libnvidia-cbl.so.455.23.05
I1222 07:23:33.997385 78755 nvc_info.c:169] selecting /usr/lib64/libnvidia-allocator.so.455.23.05
I1222 07:23:33.997581 78755 nvc_info.c:169] selecting /usr/lib64/libnvcuvid.so.455.23.05
I1222 07:23:33.998475 78755 nvc_info.c:169] selecting /usr/lib64/libcuda.so.455.23.05
I1222 07:23:33.998854 78755 nvc_info.c:169] selecting /usr/lib64/libGLX_nvidia.so.455.23.05
I1222 07:23:33.998982 78755 nvc_info.c:169] selecting /usr/lib64/libGLESv2_nvidia.so.455.23.05
I1222 07:23:33.999116 78755 nvc_info.c:169] selecting /usr/lib64/libGLESv1_CM_nvidia.so.455.23.05
I1222 07:23:33.999244 78755 nvc_info.c:169] selecting /usr/lib64/libEGL_nvidia.so.455.23.05
I1222 07:23:33.999392 78755 nvc_info.c:169] selecting /usr/lib/vdpau/libvdpau_nvidia.so.455.23.05
I1222 07:23:33.999540 78755 nvc_info.c:169] selecting /usr/lib/libnvidia-tls.so.455.23.05
I1222 07:23:33.999672 78755 nvc_info.c:169] selecting /usr/lib/libnvidia-ptxjitcompiler.so.455.23.05
I1222 07:23:33.999868 78755 nvc_info.c:169] selecting /usr/lib/libnvidia-opticalflow.so.455.23.05
I1222 07:23:34.000055 78755 nvc_info.c:169] selecting /usr/lib/libnvidia-opencl.so.455.23.05
I1222 07:23:34.000199 78755 nvc_info.c:169] selecting /usr/lib/libnvidia-ml.so.455.23.05
I1222 07:23:34.000387 78755 nvc_info.c:169] selecting /usr/lib/libnvidia-ifr.so.455.23.05
I1222 07:23:34.000577 78755 nvc_info.c:169] selecting /usr/lib/libnvidia-glvkspirv.so.455.23.05
I1222 07:23:34.000697 78755 nvc_info.c:169] selecting /usr/lib/libnvidia-glsi.so.455.23.05
I1222 07:23:34.000813 78755 nvc_info.c:169] selecting /usr/lib/libnvidia-glcore.so.455.23.05
I1222 07:23:34.000941 78755 nvc_info.c:169] selecting /usr/lib/libnvidia-fbc.so.455.23.05
I1222 07:23:34.001149 78755 nvc_info.c:169] selecting /usr/lib/libnvidia-encode.so.455.23.05
I1222 07:23:34.001337 78755 nvc_info.c:169] selecting /usr/lib/libnvidia-eglcore.so.455.23.05
I1222 07:23:34.001458 78755 nvc_info.c:169] selecting /usr/lib/libnvidia-compiler.so.455.23.05
I1222 07:23:34.001597 78755 nvc_info.c:169] selecting /usr/lib/libnvidia-allocator.so.455.23.05
I1222 07:23:34.001814 78755 nvc_info.c:169] selecting /usr/lib/libnvcuvid.so.455.23.05
I1222 07:23:34.002011 78755 nvc_info.c:169] selecting /usr/lib/libcuda.so.455.23.05
I1222 07:23:34.002221 78755 nvc_info.c:169] selecting /usr/lib/libGLX_nvidia.so.455.23.05
I1222 07:23:34.002350 78755 nvc_info.c:169] selecting /usr/lib/libGLESv2_nvidia.so.455.23.05
I1222 07:23:34.002475 78755 nvc_info.c:169] selecting /usr/lib/libGLESv1_CM_nvidia.so.455.23.05
I1222 07:23:34.002599 78755 nvc_info.c:169] selecting /usr/lib/libEGL_nvidia.so.455.23.05
W1222 07:23:34.002655 78755 nvc_info.c:350] missing library libnvidia-fatbinaryloader.so
W1222 07:23:34.002679 78755 nvc_info.c:354] missing compat32 library libnvidia-cfg.so
W1222 07:23:34.002701 78755 nvc_info.c:354] missing compat32 library libnvidia-fatbinaryloader.so
W1222 07:23:34.002722 78755 nvc_info.c:354] missing compat32 library libnvidia-ngx.so
W1222 07:23:34.002744 78755 nvc_info.c:354] missing compat32 library libnvidia-rtcore.so
W1222 07:23:34.002765 78755 nvc_info.c:354] missing compat32 library libnvoptix.so
W1222 07:23:34.002786 78755 nvc_info.c:354] missing compat32 library libnvidia-cbl.so
I1222 07:23:34.004155 78755 nvc_info.c:276] selecting /usr/bin/nvidia-smi
I1222 07:23:34.004247 78755 nvc_info.c:276] selecting /usr/bin/nvidia-debugdump
I1222 07:23:34.004332 78755 nvc_info.c:276] selecting /usr/bin/nvidia-persistenced
I1222 07:23:34.004418 78755 nvc_info.c:276] selecting /usr/bin/nvidia-cuda-mps-control
I1222 07:23:34.004504 78755 nvc_info.c:276] selecting /usr/bin/nvidia-cuda-mps-server
I1222 07:23:34.004625 78755 nvc_info.c:438] listing device /dev/nvidiactl
I1222 07:23:34.004648 78755 nvc_info.c:438] listing device /dev/nvidia-uvm
I1222 07:23:34.004669 78755 nvc_info.c:438] listing device /dev/nvidia-uvm-tools
I1222 07:23:34.004690 78755 nvc_info.c:438] listing device /dev/nvidia-modeset
W1222 07:23:34.004790 78755 nvc_info.c:321] missing ipc /var/run/nvidia-persistenced/socket
W1222 07:23:34.004856 78755 nvc_info.c:321] missing ipc /tmp/nvidia-mps
I1222 07:23:34.004879 78755 nvc_info.c:745] requesting device information with ''
I1222 07:23:34.012276 78755 nvc_info.c:628] listing device /dev/nvidia0 (GPU-2fb041ff-6df3-4d00-772d-efb3139a17a1 at 00000000:3b:00.0)
I1222 07:23:34.020861 78755 nvc_info.c:628] listing device /dev/nvidia1 (GPU-a15f4c20-0f41-68ab-3782-bd66d8fda9e4 at 00000000:af:00.0)
I1222 07:23:34.021102 78755 nvc_mount.c:344] mounting tmpfs at /var/lib/docker/overlay2/6ac97e95475e9df0f32f7e2f7251ca053651c62292d1a5127c71d33e55904d2b/merged/proc/driver/nvidia
I1222 07:23:34.022322 78755 nvc_mount.c:112] mounting /usr/bin/nvidia-smi at /var/lib/docker/overlay2/6ac97e95475e9df0f32f7e2f7251ca053651c62292d1a5127c71d33e55904d2b/merged/usr/bin/nvidia-smi
I1222 07:23:34.022504 78755 nvc_mount.c:112] mounting /usr/bin/nvidia-debugdump at /var/lib/docker/overlay2/6ac97e95475e9df0f32f7e2f7251ca053651c62292d1a5127c71d33e55904d2b/merged/usr/bin/nvidia-debugdump
I1222 07:23:34.022664 78755 nvc_mount.c:112] mounting /usr/bin/nvidia-persistenced at /var/lib/docker/overlay2/6ac97e95475e9df0f32f7e2f7251ca053651c62292d1a5127c71d33e55904d2b/merged/usr/bin/nvidia-persistenced
I1222 07:23:34.022825 78755 nvc_mount.c:112] mounting /usr/bin/nvidia-cuda-mps-control at /var/lib/docker/overlay2/6ac97e95475e9df0f32f7e2f7251ca053651c62292d1a5127c71d33e55904d2b/merged/usr/bin/nvidia-cuda-mps-control
I1222 07:23:34.022983 78755 nvc_mount.c:112] mounting /usr/bin/nvidia-cuda-mps-server at /var/lib/docker/overlay2/6ac97e95475e9df0f32f7e2f7251ca053651c62292d1a5127c71d33e55904d2b/merged/usr/bin/nvidia-cuda-mps-server
I1222 07:23:34.023745 78755 nvc_mount.c:112] mounting /usr/lib64/libnvidia-ml.so.455.23.05 at /var/lib/docker/overlay2/6ac97e95475e9df0f32f7e2f7251ca053651c62292d1a5127c71d33e55904d2b/merged/usr/lib/x86_64-linux-gnu/libnvidia-ml.so.455.23.05
I1222 07:23:34.023960 78755 nvc_mount.c:112] mounting /usr/lib64/libnvidia-cfg.so.455.23.05 at /var/lib/docker/overlay2/6ac97e95475e9df0f32f7e2f7251ca053651c62292d1a5127c71d33e55904d2b/merged/usr/lib/x86_64-linux-gnu/libnvidia-cfg.so.455.23.05
I1222 07:23:34.024203 78755 nvc_mount.c:112] mounting /usr/lib64/libcuda.so.455.23.05 at /var/lib/docker/overlay2/6ac97e95475e9df0f32f7e2f7251ca053651c62292d1a5127c71d33e55904d2b/merged/usr/lib/x86_64-linux-gnu/libcuda.so.455.23.05
I1222 07:23:34.024411 78755 nvc_mount.c:112] mounting /usr/lib64/libnvidia-opencl.so.455.23.05 at /var/lib/docker/overlay2/6ac97e95475e9df0f32f7e2f7251ca053651c62292d1a5127c71d33e55904d2b/merged/usr/lib/x86_64-linux-gnu/libnvidia-opencl.so.455.23.05
I1222 07:23:34.024615 78755 nvc_mount.c:112] mounting /usr/lib64/libnvidia-ptxjitcompiler.so.455.23.05 at /var/lib/docker/overlay2/6ac97e95475e9df0f32f7e2f7251ca053651c62292d1a5127c71d33e55904d2b/merged/usr/lib/x86_64-linux-gnu/libnvidia-ptxjitcompiler.so.455.23.05
I1222 07:23:34.024815 78755 nvc_mount.c:112] mounting /usr/lib64/libnvidia-allocator.so.455.23.05 at /var/lib/docker/overlay2/6ac97e95475e9df0f32f7e2f7251ca053651c62292d1a5127c71d33e55904d2b/merged/usr/lib/x86_64-linux-gnu/libnvidia-allocator.so.455.23.05
I1222 07:23:34.040552 78755 nvc_mount.c:112] mounting /usr/lib64/libnvidia-compiler.so.455.23.05 at /var/lib/docker/overlay2/6ac97e95475e9df0f32f7e2f7251ca053651c62292d1a5127c71d33e55904d2b/merged/usr/lib/x86_64-linux-gnu/libnvidia-compiler.so.455.23.05
I1222 07:23:34.040717 78755 nvc_mount.c:524] creating symlink /var/lib/docker/overlay2/6ac97e95475e9df0f32f7e2f7251ca053651c62292d1a5127c71d33e55904d2b/merged/usr/lib/x86_64-linux-gnu/libcuda.so -> libcuda.so.1
I1222 07:23:34.041181 78755 nvc_mount.c:112] mounting /var/lib/docker/overlay2/6ac97e95475e9df0f32f7e2f7251ca053651c62292d1a5127c71d33e55904d2b/merged/usr/local/cuda-10.1/compat/libcuda.so.418.87.00 at /var/lib/docker/overlay2/6ac97e95475e9df0f32f7e2f7251ca053651c62292d1a5127c71d33e55904d2b/merged/usr/lib/x86_64-linux-gnu/libcuda.so.418.87.00
I1222 07:23:34.041439 78755 nvc_mount.c:112] mounting /var/lib/docker/overlay2/6ac97e95475e9df0f32f7e2f7251ca053651c62292d1a5127c71d33e55904d2b/merged/usr/local/cuda-10.1/compat/libnvidia-fatbinaryloader.so.418.87.00 at /var/lib/docker/overlay2/6ac97e95475e9df0f32f7e2f7251ca053651c62292d1a5127c71d33e55904d2b/merged/usr/lib/x86_64-linux-gnu/libnvidia-fatbinaryloader.so.418.87.00
I1222 07:23:34.041662 78755 nvc_mount.c:112] mounting /var/lib/docker/overlay2/6ac97e95475e9df0f32f7e2f7251ca053651c62292d1a5127c71d33e55904d2b/merged/usr/local/cuda-10.1/compat/libnvidia-ptxjitcompiler.so.418.87.00 at /var/lib/docker/overlay2/6ac97e95475e9df0f32f7e2f7251ca053651c62292d1a5127c71d33e55904d2b/merged/usr/lib/x86_64-linux-gnu/libnvidia-ptxjitcompiler.so.418.87.00
I1222 07:23:34.041871 78755 nvc_mount.c:208] mounting /dev/nvidiactl at /var/lib/docker/overlay2/6ac97e95475e9df0f32f7e2f7251ca053651c62292d1a5127c71d33e55904d2b/merged/dev/nvidiactl
I1222 07:23:34.041959 78755 nvc_mount.c:499] whitelisting device node 195:255
I1222 07:23:34.042196 78755 nvc_mount.c:208] mounting /dev/nvidia-uvm at /var/lib/docker/overlay2/6ac97e95475e9df0f32f7e2f7251ca053651c62292d1a5127c71d33e55904d2b/merged/dev/nvidia-uvm
I1222 07:23:34.042283 78755 nvc_mount.c:499] whitelisting device node 234:0
I1222 07:23:34.042474 78755 nvc_mount.c:208] mounting /dev/nvidia-uvm-tools at /var/lib/docker/overlay2/6ac97e95475e9df0f32f7e2f7251ca053651c62292d1a5127c71d33e55904d2b/merged/dev/nvidia-uvm-tools
I1222 07:23:34.042552 78755 nvc_mount.c:499] whitelisting device node 234:1
I1222 07:23:34.042781 78755 nvc_mount.c:208] mounting /dev/nvidia0 at /var/lib/docker/overlay2/6ac97e95475e9df0f32f7e2f7251ca053651c62292d1a5127c71d33e55904d2b/merged/dev/nvidia0
I1222 07:23:34.043137 78755 nvc_mount.c:412] mounting /proc/driver/nvidia/gpus/0000:3b:00.0 at /var/lib/docker/overlay2/6ac97e95475e9df0f32f7e2f7251ca053651c62292d1a5127c71d33e55904d2b/merged/proc/driver/nvidia/gpus/0000:3b:00.0
I1222 07:23:34.043225 78755 nvc_mount.c:499] whitelisting device node 195:0
I1222 07:23:34.043317 78755 nvc_ldcache.c:359] executing /sbin/ldconfig from host at /var/lib/docker/overlay2/6ac97e95475e9df0f32f7e2f7251ca053651c62292d1a5127c71d33e55904d2b/merged
W1222 07:23:34.070874 78755 utils.c:121] /sbin/ldconfig: File /usr/lib/x86_64-linux-gnu/libcuda.so is empty, not checked.
W1222 07:23:34.070939 78755 utils.c:121] /sbin/ldconfig: File /usr/lib/x86_64-linux-gnu/libcuda.so.1 is empty, not checked.
W1222 07:23:34.070962 78755 utils.c:121] /sbin/ldconfig: File /usr/lib/x86_64-linux-gnu/libcuda.so.418.67 is empty, not checked.
W1222 07:23:34.072833 78755 utils.c:121] /sbin/ldconfig: File /usr/lib/x86_64-linux-gnu/libnvidia-cfg.so.1 is empty, not checked.
W1222 07:23:34.072864 78755 utils.c:121] /sbin/ldconfig: File /usr/lib/x86_64-linux-gnu/libnvidia-cfg.so.418.67 is empty, not checked.
W1222 07:23:34.072909 78755 utils.c:121] /sbin/ldconfig: File /usr/lib/x86_64-linux-gnu/libnvidia-compiler.so.418.67 is empty, not checked.
W1222 07:23:34.072952 78755 utils.c:121] /sbin/ldconfig: File /usr/lib/x86_64-linux-gnu/libnvidia-fatbinaryloader.so.418.67 is empty, not checked.
W1222 07:23:34.073004 78755 utils.c:121] /sbin/ldconfig: File /usr/lib/x86_64-linux-gnu/libnvidia-ml.so.1 is empty, not checked.
W1222 07:23:34.073038 78755 utils.c:121] /sbin/ldconfig: File /usr/lib/x86_64-linux-gnu/libnvidia-ml.so.418.67 is empty, not checked.
W1222 07:23:34.073127 78755 utils.c:121] /sbin/ldconfig: File /usr/lib/x86_64-linux-gnu/libnvidia-opencl.so.1 is empty, not checked.
W1222 07:23:34.073158 78755 utils.c:121] /sbin/ldconfig: File /usr/lib/x86_64-linux-gnu/libnvidia-opencl.so.418.67 is empty, not checked.
W1222 07:23:34.073219 78755 utils.c:121] /sbin/ldconfig: File /usr/lib/x86_64-linux-gnu/libnvidia-ptxjitcompiler.so.1 is empty, not checked.
W1222 07:23:34.073259 78755 utils.c:121] /sbin/ldconfig: File /usr/lib/x86_64-linux-gnu/libnvidia-ptxjitcompiler.so.418.67 is empty, not checked.
I1222 07:23:34.114387 78755 nvc.c:337] shutting down library context
I1222 07:23:34.618210 78760 driver.c:156] terminating driver service
I1222 07:23:34.618980 78755 driver.c:196] driver service terminated successfully

The key step is this one, which mounts a specific GPU:

I1222 07:23:34.043137 78755 nvc_mount.c:412] mounting /proc/driver/nvidia/gpus/0000:3b:00.0 at /var/lib/docker/overlay2/6ac97e95475e9df0f32f7e2f7251ca053651c62292d1a5127c71d33e55904d2b/merged/proc/driver/nvidia/gpus/0000:3b:00.0

I1222 07:23:34.043225 78755 nvc_mount.c:499] whitelisting device node 195:0

It is performed by the following function:

static char *
mount_procfs_gpu(struct error *err, const char *root, const struct nvc_container *cnt, const char *busid)
{
	char src[PATH_MAX];
	char dst[PATH_MAX] = {0};
	char *gpu = NULL;
	char *mnt = NULL;
	mode_t mode;

	for (int off = 0;; off += 4) {
		/* XXX Check if the driver procfs uses 32-bit or 16-bit PCI domain */
		if (xasprintf(err, &gpu, "%s/gpus/%s", NV_PROC_DRIVER, busid + off) < 0)
			return (NULL);
		if (path_join(err, src, root, gpu) < 0)
			goto fail;
		if (path_resolve_full(err, dst, cnt->cfg.rootfs, gpu) < 0)
			goto fail;
		if (file_mode(err, src, &mode) == 0)
			break;
		if (err->code != ENOENT || off != 0)
			goto fail;
		*dst = '\0';
		free(gpu);
		gpu = NULL;
	}
	if (file_create(err, dst, NULL, cnt->uid, cnt->gid, mode) < 0)
		goto fail;

	log_infof("mounting %s at %s", src, dst);
	if (xmount(err, src, dst, NULL, MS_BIND, NULL) < 0)
		goto fail;
	if (xmount(err, NULL, dst, NULL, MS_BIND|MS_REMOUNT | MS_RDONLY|MS_NODEV|MS_NOSUID|MS_NOEXEC, NULL) < 0)
		goto fail;
	if ((mnt = xstrdup(err, dst)) == NULL)
		goto fail;
	free(gpu);
	return (mnt);

fail:
	free(gpu);
	unmount(dst);
	return (NULL);
}
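The off += 4 loop probes whether the driver's procfs names GPUs with a 32-bit PCI domain (00000000:3b:00.0) or a 16-bit one (0000:3b:00.0): on the second pass it simply skips the first four characters of the bus ID. A standalone sketch of that probing logic; the exists callback and path prefix are illustrative:

```go
package main

import "fmt"

// procfsGPUPath returns the first candidate procfs path that exists,
// trying the full bus ID first and then the ID with the leading 16 bits
// of the PCI domain stripped, like the off += 4 loop in mount_procfs_gpu.
func procfsGPUPath(busid string, exists func(string) bool) (string, bool) {
	for _, off := range []int{0, 4} {
		if off >= len(busid) {
			break
		}
		p := "/proc/driver/nvidia/gpus/" + busid[off:]
		if exists(p) {
			return p, true
		}
	}
	return "", false
}

func main() {
	// Pretend the driver uses 16-bit domains, i.e. only the stripped form exists.
	exists := func(p string) bool {
		return p == "/proc/driver/nvidia/gpus/0000:3b:00.0"
	}
	p, ok := procfsGPUPath("00000000:3b:00.0", exists)
	fmt.Println(p, ok)
}
```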

The proc filesystem

The /proc directory on a Linux system is a filesystem of its own: the proc filesystem. Unlike ordinary filesystems, /proc is a pseudo (virtual) filesystem. It holds a set of special files reflecting the current state of the running kernel; through them, users can inspect the system hardware and the currently running processes, and changing some of these files even alters kernel behaviour at runtime.

Because of this, the files under /proc are often called virtual files, and they have some peculiar properties. Some of them return a large amount of information when read, yet report a size of 0 bytes. Most of them carry the current system time and date as their timestamps, because they are refreshed on the fly (they live in RAM).

For convenience, these files are grouped by topic into directories and subdirectories. For example, /proc/scsi holds information about all SCSI devices on the system, and /proc/N holds information about the running process with PID N (as one would expect, that directory disappears once the process exits).

Most virtual files can be read with commands such as cat, more, or less. Some are self-explanatory; others are barely human-readable, but tools such as apm, free, lspci, or top present the same information nicely.

How the proc filesystem works

proc is a pseudo filesystem: it exists only in memory and is never stored on disk. It is the kernel's interface for exposing system information, which users and applications can read, and through which some kernel parameters can be changed. Since process and system state changes constantly, proc reads the required information from the kernel on demand, at the moment a user or application reads it.

Can this filesystem be modified dynamically?

How the three projects relate

Three projects are involved:
the libnvidia-container project (the core project, which mounts the GPU driver), the nvidia-container-runtime project, and the nvidia-container-toolkit project.
nvidia-container-toolkit and libnvidia-container must be installed; nvidia-container-runtime exists only for convenience, so that docker run works directly, by adding the hook function.
nvidia-container-runtime constructs the OCI hook; the hook itself is implemented by the nvidia-container-toolkit project, which in turn calls the libnvidia-container cli to do the actual configuration.
Call order of the three projects:
Docker runc -> nvidia-container-runtime -> nvidia-container-toolkit -> libnvidia-container

Creating a container with a mounted GPU from the command line

# Setup a new set of namespaces
# Create a scratch directory and a rootfs directory inside it.
cd $(mktemp -d) && mkdir rootfs
# Isolate the mount and pid namespaces from the parent.
# --fork: Fork the specified program as a child process of unshare rather than
# running it directly. This is useful when creating a new pid namespace.
# (see man unshare)
sudo unshare --mount --pid --fork

# Setup a rootfs based on Ubuntu 16.04 inside the new namespaces
# Download a rootfs into the new namespace.
curl http://cdimage.ubuntu.com/ubuntu-base/releases/16.04/release/ubuntu-base-16.04.6-base-amd64.tar.gz | tar -C rootfs -xz

# Add a user inside the rootfs, using the config files from that directory;
# set the user/group IDs and the shell.
useradd -R $(realpath rootfs) -U -u 1000 -s /bin/bash nvidia
# Bind-mount the directory onto itself, so every access to the mount point
# actually goes to the underlying directory.
mount --bind rootfs rootfs
# Make the mount private.
mount --make-private rootfs
cd rootfs

# Mount standard filesystems
# Mount the virtual proc filesystem at proc; at this point the node's full
# GPU information is visible inside the container.
mount -t proc none proc

# Mount sysfs at sys.
mount -t sysfs none sys
mount -t tmpfs none tmp
mount -t tmpfs none run

# Isolate the first GPU device along with basic utilities
nvidia-container-cli --load-kmods configure --no-cgroups --utility --device 0 $(pwd)

# Change into the new rootfs
# pivot_root new_root put_old
# pivot_root moves the current root filesystem to put_old and makes new_root
# the new root filesystem.
pivot_root . mnt
exec chroot --userspec 1000:1000 . env -i bash

# Run nvidia-smi from within the container
nvidia-smi -L

I tried mounting different GPUs from different shell sessions, but each time only one GPU was visible.
In the same rootfs directory, mounting the host's GPU 0 shows only a single card, as in the screenshot below:
[screenshot]

Then, still in the same rootfs directory, mounting the host's GPU 1 again shows only a single card:
[screenshot]

The proc filesystem is the key here: once nvidia-container-cli has run, only the specified GPU is visible through proc, and only the proc of the running container can be manipulated (proc is a file mapping provided by the kernel). A newly created mount cannot see the running container's proc information.