OpenShift Basic Functionality Testing and Validation


OpenShift Distributed Deployment

OpenShift Slave Node Configuration

0. Set up SSH mutual trust between the nodes

Run on the Master node:

ssh-keygen
# for host in master.example.com \
node1.example.com \
node2.example.com; \
do ssh-copy-id -i ~/.ssh/id_rsa.pub $host; \
done

1. Install Docker on every node (manual installation).
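
A minimal sketch of the manual installation, assuming CentOS 7 nodes with the stock yum repositories:

yum install -y docker
systemctl enable docker
systemctl start docker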

2. Configure Docker storage on every node, using the data disk as the Docker storage backend.

[root@origin-master member]# cat /etc/sysconfig/docker-storage-setup
# Edit this file to override any configuration options specified in
# /usr/lib/docker-storage-setup/docker-storage-setup.
#
# For more details refer to "man docker-storage-setup"
DEVS=/dev/vdb
VG=docker-vg

docker-storage-setup

systemctl stop docker
rm -rf /var/lib/docker/*
systemctl restart docker

3. Configure the trusted (insecure) Docker registry addresses in /etc/sysconfig/docker on every node. (This step can also be done after the OpenShift deployment completes. 172.30.0.0/16 is the service network configured via Ansible, and 10.110.17.138:5000 is the external Docker registry.)

OPTIONS='--insecure-registry=10.110.17.138:5000 --insecure-registry=172.30.0.0/16    '
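
After editing /etc/sysconfig/docker, restart Docker on the node so the new registry options take effect:

systemctl restart docker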

4. Configure DNS records for each node.

Reference A-record configuration (a resolution check follows the zone file):

root@origin-dns:/etc/bind# cat /etc/bind/named.conf.local
//
// Do any local configuration here
//

// Consider adding the 1918 zones here, if they are not used in your
// organization
//include "/etc/bind/zones.rfc1918";
zone "novalocal" {
type master;
file "/etc/bind/db.novalocal";
};

root@origin-dns:/etc/bind# cat db.novalocal
$TTL 604800
@ IN SOA novalocal. root.novalocal. (
1 ; Serial
604800 ; Refresh
86400 ; Retry
2419200 ; Expire
86400 ) ; Negative Cache TTL
;
@ IN NS novalocal.
@ IN A 192.168.10.143
origin-master IN A 192.168.10.143
origin-slave1 IN A 192.168.10.149
origin-slave2 IN A 192.168.10.145
origin-slave3 IN A 192.168.10.147
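
A quick way to verify the records from any node, assuming dig is installed and 192.168.10.143 is the DNS server as above:

dig @192.168.10.143 origin-master.novalocal +short
dig @192.168.10.143 origin-slave1.novalocal +short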

OpenShift Master Node Configuration

1. Add the following configuration to the openshift-ansible inventory file /etc/ansible/hosts to define the OpenShift nodes; a connectivity check follows the inventory.

[OSEv3:children]
masters
nodes
[OSEv3:vars]
ansible_ssh_user=root
deployment_type=origin
openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider', 'filename': '/etc/origin/htpasswd'}]
os_sdn_network_plugin_name=redhat/openshift-ovs-multitenant
openshift_master_portal_net=172.30.0.0/16
openshift_master_session_name=ssn
openshift_master_session_max_seconds=3600
openshift_master_session_auth_secrets=['DONT+USE+THIS+SECRET+b4NV+pmZNSO']
openshift_master_session_encryption_secrets=['DONT+USE+THIS+SECRET+b4NV+pmZNSO']

[masters]
origin-master.novalocal openshift_public_ip=10.110.17.139 openshift_public_hostname=origin-master.novalocal
[nodes]
origin-slave1.novalocal openshift_node_labels="{'region': 'primary', 'zone': 'east'}"
origin-slave2.novalocal openshift_node_labels="{'region': 'primary', 'zone': 'east'}"
origin-slave3.novalocal openshift_node_labels="{'region': 'primary', 'zone': 'east'}"
origin-master.novalocal openshift_node_labels="{'region':'infra','zone':'default'}" openshift_schedulable=false
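
Before running the playbook, a quick connectivity check against the inventory above (assuming it is in the default /etc/ansible/hosts):

ansible OSEv3 -m ping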

2. Set up the NFS server:

[root@oc-master ~]# yum install nfs-utils
[root@oc-master ~]# cat /etc/exports
/opt/docker-registry 192.168.10.0/24(rw,sync,no_root_squash,no_all_squash)

Configure the firewall policy on the NFS server:

cat /etc/sysconfig/iptables
-A OS_FIREWALL_ALLOW -m state --state NEW -p udp -s 192.168.10.0/24 --dport 111 -j ACCEPT
-A OS_FIREWALL_ALLOW -m state --state NEW -p tcp -s 192.168.10.0/24 --dport 111 -j ACCEPT
-A OS_FIREWALL_ALLOW -m state --state NEW -p tcp -s 192.168.10.0/24 --dport 20048 -j ACCEPT
-A OS_FIREWALL_ALLOW -m state --state NEW -p udp -s 192.168.6.0/24 --dport 20048 -j ACCEPT
-A OS_FIREWALL_ALLOW -m state --state NEW -p udp -s 192.168.6.0/24 --dport 2049 -j ACCEPT
-A OS_FIREWALL_ALLOW -m state --state NEW -p tcp -s 192.168.6.0/24 --dport 2049 -j ACCEPT
-A OS_FIREWALL_ALLOW -m state --state NEW -p tcp -s 192.168.6.0/24 --dport 20048 -j ACCEPT
-A OS_FIREWALL_ALLOW -m state --state NEW -p udp -s 192.168.6.0/24 --dport 20048 -j ACCEPT
-A OS_FIREWALL_ALLOW -m state --state NEW -p udp -s 192.168.6.0/24 --dport 49493 -j ACCEPT
-A OS_FIREWALL_ALLOW -m state --state NEW -p tcp -s 192.168.6.0/24 --dport 49493 -j ACCEPT
-A OS_FIREWALL_ALLOW -m state --state NEW -p udp -s 192.168.6.0/24 --dport 54932 -j ACCEPT
-A OS_FIREWALL_ALLOW -m state --state NEW -p tcp -s 192.168.6.0/24 --dport 46120 -j ACCEPT
-A OS_FIREWALL_ALLOW -m state --state NEW -p udp -s 192.168.6.0/24 --dport 37588 -j ACCEPT

Restart the NFS service: systemctl restart nfs-server.service

Mount on the client:

mount -t nfs 192.168.6.7:/opt/docker-registry/ /opt/docker-registry/
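
On the server, re-export and list the shares; on the client, confirm they are visible (assuming the server IP 192.168.6.7 used above):

exportfs -r
showmount -e 192.168.6.7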

Running the Installation

Run on the master node.

1. Install: ansible-playbook ~/openshift-ansible/playbooks/byo/config.yml

2. Uninstall: ansible-playbook openshift-ansible/playbooks/adhoc/uninstall.yml

Deploying the Docker Registry

Configure NFS on the master node.

Note: once the docker-registry has been deployed, the service must not be deleted, because its service address has already been written to etcd and, as far as we can tell, cannot be updated in real time.

Run on the master node:

oadm registry --config=/etc/origin/master/admin.kubeconfig --credentials=/etc/origin/master/openshift-registry.kubeconfig --service-account=registry --mount-host=/opt/docker-registry
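
To confirm the registry came up, something like the following can be run on the master (standard oc commands; the exact pod name will differ):

oc get dc docker-registry -n default
oc get pods -n default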

Deploying Metrics Monitoring

1. Use the openshift-infra project:

oc project openshift-infra

2. Create the metrics-deployer secret:

oc secrets new metrics-deployer nothing=/dev/null

3. Process the deployer template and create the resulting objects:

oc process -f metrics-deployer.yaml -v HAWKULAR_METRICS_HOSTNAME=hawkular-metrics.devops.inspur,USE_PERSISTENT_STORAGE=false | oc create -f -

Reference template:

#!/bin/bash
#
# Copyright 2014-2015 Red Hat, Inc. and/or its affiliates
# and other contributors as indicated by the @author tags.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#

apiVersion: "v1"
kind: "Template"
metadata:
  name: metrics-deployer-template
  annotations:
    description: "Template for deploying the required Metrics integration. Requires cluster-admin 'metrics-deployer' service account and 'metrics-deployer' secret."
    tags: "infrastructure"
labels:
  metrics-infra: deployer
  provider: openshift
  component: deployer
objects:
-
  apiVersion: v1
  kind: Pod
  metadata:
    generateName: metrics-deployer-
  spec:
    containers:
    - image: ${IMAGE_PREFIX}metrics-deployer:${IMAGE_VERSION}
      name: deployer
      volumeMounts:
      - name: secret
        mountPath: /secret
        readOnly: true
      - name: empty
        mountPath: /etc/deploy
      env:
        - name: PROJECT
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: IMAGE_PREFIX
          value: ${IMAGE_PREFIX}
        - name: IMAGE_VERSION
          value: ${IMAGE_VERSION}
        - name: MASTER_URL
          value: ${MASTER_URL}
        - name: REDEPLOY
          value: ${REDEPLOY}
        - name: USE_PERSISTENT_STORAGE
          value: ${USE_PERSISTENT_STORAGE}
        - name: HAWKULAR_METRICS_HOSTNAME
          value: ${HAWKULAR_METRICS_HOSTNAME}
        - name: CASSANDRA_NODES
          value: ${CASSANDRA_NODES}
        - name: CASSANDRA_PV_SIZE
          value: ${CASSANDRA_PV_SIZE}
        - name: METRIC_DURATION
          value: ${METRIC_DURATION}
    dnsPolicy: ClusterFirst
    restartPolicy: Never
    serviceAccount: metrics-deployer
    volumes:
    - name: empty
      emptyDir: {}
    - name: secret
      secret:
        secretName: metrics-deployer
parameters:
-
  description: 'Specify prefix for metrics components; e.g. for "openshift/origin-metrics-deployer:latest", set prefix "openshift/origin-"'
  name: IMAGE_PREFIX
  value: "10.110.17.138:5000/library/origin-"
-
  description: 'Specify version for metrics components; e.g. for "openshift/origin-metrics-deployer:latest", set version "latest"'
  name: IMAGE_VERSION
  value: "latest"
-
  description: "Internal URL for the master, for authentication retrieval"
  name: MASTER_URL
  value: "https://oc-master.novalocal:8443"
-
  description: "External hostname where clients will reach Hawkular Metrics"
  name: HAWKULAR_METRICS_HOSTNAME
  required: true
  value: "hawkular-metrics.devops.inspur"
-
  description: "If set to true the deployer will try and delete all the existing components before trying to redeploy."
  name: REDEPLOY
  value: "false"
-
  description: "Set to true for persistent storage, set to false to use non persistent storage"
  name: USE_PERSISTENT_STORAGE
  value: "false"
-
  description: "The number of Cassandra Nodes to deploy for the initial cluster"
  name: CASSANDRA_NODES
  value: "1"
-
  description: "The persistent volume size for each of the Cassandra nodes"
  name: CASSANDRA_PV_SIZE
  value: "10Gi"
-
  description: "How many days metrics should be stored for."
  name: METRIC_DURATION
  value: "7"

Once the pods are running, you still need to edit /etc/origin/master/master-config.yaml to add the metrics configuration.
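
A sketch of the fragment to add, assuming the Hawkular hostname used above; the exact placement depends on the existing assetConfig section of master-config.yaml, and the master service needs a restart afterwards:

assetConfig:
  metricsPublicURL: "https://hawkular-metrics.devops.inspur/hawkular/metrics"

systemctl restart origin-master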

When accessing the console through a browser, the metrics may still fail to load. Verify that:

(1) the browser can resolve the hawkular-metrics.devops.inspur address;

(2) the browser has opened https://hawkular-metrics.devops.inspur/hawkular/metrics directly and accepted its HTTPS certificate.

Deploying the Router

Run on the master node:

oadm router haproxy-router --replicas=1 --credentials=/etc/origin/master/openshift-router.kubeconfig --service-account=router

Add a user: htpasswd /etc/origin/htpasswd admin

Log in as a regular user: oc login

Log in as the cluster administrator:

export KUBECONFIG=/etc/origin/master/admin.kubeconfig

oc login -u system:admin -n default

At this point, the entire environment is up and running.

Setting Up an External Docker Registry

Set up a v2 Docker registry.

Use a new virtual machine, install Docker, and run the registry:

docker run -d -p 5000:5000 --name registry registry:2

Note: when pushing images to this registry, a sub-path must be specified, for example 10.110.17.138:5000/library/postgresql.
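
For example, to push an image under the library sub-path of this registry (any repository prefix works, as long as there is one):

docker pull centos
docker tag centos 10.110.17.138:5000/library/centos
docker push 10.110.17.138:5000/library/centos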

Managing OpenShift

Importing Images

Imported images are all recorded in OpenShift's internal image registry.

1. Importing images from a third-party registry

For images imported from a third-party registry, OpenShift's internal registry only stores the index information; when the image is actually used, it is still pulled from the third-party registry. External images can be imported into a specified namespace.

Run the import: oc create -f image-streams-centos7-python-postgresql.json -n demo

The file used for the import, image-streams-centos7-python-postgresql.json, is shown below for reference:

{
"kind": "ImageStreamList",
"apiVersion": "v1",
"items": \[
{
"kind": "ImageStream",
"apiVersion": "v1",
"metadata": {
"name": "python",
"annotations": {
"openshift.io/image.insecureRepository": "true"
}
},
"spec": {
"dockerImageRepository": "10.110.17.138:5000/library/python"
}
},
{
"kind": "ImageStream",
"apiVersion": "v1",
"metadata": {
"name": "postgresql",
"annotations": {
"openshift.io/image.insecureRepository": "true"
}
},
"spec": {
"dockerImageRepository": "10.110.17.138:5000/library/postgresql"
}
}]
}

Note: images in the external private docker-registry must be stored under a sub-path, e.g. 10.110.17.138:5000/library/postgresql.

2. Logging in to OpenShift's internal docker-registry and pushing images

1. On the master node, get the service IP of the docker-registry service:

[root@origin-master image-streams]# oc get svc
NAME              CLUSTER_IP      EXTERNAL_IP   PORT(S)                 SELECTOR                   AGE
docker-registry 172.30.90.102 <none> 5000/TCP docker-registry=default 20h
haproxy-router 172.30.95.6 <none> 80/TCP router=haproxy-router 20h
kubernetes 172.30.0.1 <none> 443/TCP,53/UDP,53/TCP <none> 22h

2. Find the node where OpenShift's internal docker-registry runs, and log in to that node:

[root@origin-slave3 ~]# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
40a2a7f9c5f0 openshift/origin-docker-registry:v1.1.0.1 "/bin/sh -c 'REGISTRY" 5 hours ago Up 5 hours k8s_registry.afeb0b36_docker-registry-1-2wsvm_default_fa5e59e4-af5a-11e5-8632-fa163efe8294_4fbf4ffa
b29287d1a00c 10.110.17.138:5000/library/postgresql@sha256:11dbc16d7d84da0dfb42782ff1f64b0e263d9f8686fe4347f2d02a665582e945 "container-entrypoint" 5 hours ago Up 5 hours k8s_postgresql.5134f420_postgresql-1-eun94_demo_7e990eb5-af62-11e5-8632-fa163efe8294_155dadb8
c217ccc5c319 openshift/origin-pod:v1.1.0.1 "/pod" 5 hours ago Up 5 hours k8s_POD.18d9fe1e_postgresql-1-eun94_demo_7e990eb5-af62-11e5-8632-fa163efe8294_257ba619
bbc34e05ca92 openshift/origin-pod:v1.1.0.1 "/pod" 5 hours ago Up 5 hours k8s_POD.7c1fe15_docker-registry-1-2wsvm_default_fa5e59e4-af5a-11e5-8632-fa163efe8294_7a1bd148

3. Log in to OpenShift as a regular user and get the token:

[root@origin-slave3 ~]# oc whoami -t

YAiqVCwCIhMT6FoVXfhpnga4UBiud0Mwzz84-sW3Om8

4. Log in to the internal registry with Docker using that token:

[root@origin-slave3 ~]# docker login -u admin -e admin@inspur.com -p YAiqVCwCIhMT6FoVXfhpnga4UBiud0Mwzz84-sW3Om8 172.30.90.102:5000

WARNING: login credentials saved in /root/.docker/config.json

Login Succeeded

5. Push the image:

[root@origin-slave3 ~]# docker tag 10.110.17.138:5000/library/hello-openshift 172.30.90.102:5000/demo/hello-openshift
[root@origin-slave3 ~]# docker push 172.30.90.102:5000/demo/hello-openshift
The push refers to a repository [172.30.90.102:5000/demo/hello-openshift] (len: 1)
bcfa6006862b: Pushed
bbc202431232: Pushed
eb19dbec24c6: Pushed
0055f04883dd: Pushed
latest: digest: sha256:724411548d3c1ea88402288b2045275cd3fd0488f6d6500f4f6974b278f27f39 size: 6982

Deploying Templates

A Template defines a complete workflow for deploying an application, including build and release steps, for developers to use.
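
Once the template has been created in the openshift namespace (as described below), an application can be instantiated from it, roughly as follows; the -p/--param flags override template parameters:

oc new-app --template=django-psql-example
oc new-app --template=django-psql-example -p DATABASE_USER=django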

Create the template:

oc create -f django-postgres.json -n openshift

See the reference file django-postgres.json below; pay particular attention to how the images are referenced.

{
"kind": "Template",
"apiVersion": "v1",
"metadata": {
"name": "django-psql-example",
"annotations": {
"description": "An example Django application with a PostgreSQL database",
"tags": "instant-app,python,django,postgresql",
"iconClass": "icon-python"
}
},
"labels": {
"template": "django-psql-example"
},
"objects": [
{
"kind": "Service",
"apiVersion": "v1",
"metadata": {
"name": "django-psql-example",
"annotations": {
"description": "Exposes and load balances the application pods"
}
},
"spec": {
"ports": [
{
"name": "web",
"port": 8080,
"targetPort": 8080
}
],
"selector": {
"name": "django-psql-example"
}
}
},
{
"kind": "Route",
"apiVersion": "v1",
"metadata": {
"name": "django-psql-example"
},
"spec": {
"host": "${APPLICATION_DOMAIN}",
"to": {
"kind": "Service",
"name": "django-psql-example"
}
}
},
{
"kind": "ImageStream",
"apiVersion": "v1",
"metadata": {
"name": "django-psql-example",
"annotations": {
"description": "Keeps track of changes in the application image"
}
}
},
{
"kind": "BuildConfig",
"apiVersion": "v1",
"metadata": {
"name": "django-psql-example",
"annotations": {
"description": "Defines how to build the application"
}
},
"spec": {
"source": {
"type": "Git",
"git": {
"uri": "${SOURCE_REPOSITORY_URL}",
"ref": "${SOURCE_REPOSITORY_REF}"
},
"contextDir": "${CONTEXT_DIR}"
},
"strategy": {
"type": "Source",
"sourceStrategy": {
"from": {
"kind": "ImageStreamTag",
"namespace": "openshift",
"name": "python"
}
}
},
"output": {
"to": {
"kind": "ImageStreamTag",
"name": "django-psql-example:latest"
}
},
"triggers": [
{
"type": "ImageChange"
},
{
"type": "ConfigChange"
},
{
"type": "GitHub",
"github": {
"secret": "${GITHUB_WEBHOOK_SECRET}"
}
}
]
}
},
{
"kind": "DeploymentConfig",
"apiVersion": "v1",
"metadata": {
"name": "django-psql-example",
"annotations": {
"description": "Defines how to deploy the application server"
}
},
"spec": {
"strategy": {
"type": "Rolling"
},
"triggers": [
{
"type": "ImageChange",
"imageChangeParams": {
"automatic": true,
"containerNames": [
"django-psql-example"
],
"from": {
"kind": "ImageStreamTag",
"name": "django-psql-example:latest"
}
}
},
{
"type": "ConfigChange"
}
],
"replicas": 1,
"selector": {
"name": "django-psql-example"
},
"template": {
"metadata": {
"name": "django-psql-example",
"labels": {
"name": "django-psql-example"
}
},
"spec": {
"containers": [
{
"name": "django-psql-example",
"image": "django-psql-example",
"ports": [
{
"containerPort": 8080
}
],
"env": [
{
"name": "DATABASE_SERVICE_NAME",
"value": "${DATABASE_SERVICE_NAME}"
},
{
"name": "DATABASE_ENGINE",
"value": "${DATABASE_ENGINE}"
},
{
"name": "DATABASE_NAME",
"value": "${DATABASE_NAME}"
},
{
"name": "DATABASE_USER",
"value": "${DATABASE_USER}"
},
{
"name": "DATABASE_PASSWORD",
"value": "${DATABASE_PASSWORD}"
},
{
"name": "APP_CONFIG",
"value": "${APP_CONFIG}"
},
{
"name": "DJANGO_SECRET_KEY",
"value": "${DJANGO_SECRET_KEY}"
}
]
}
]
}
}
}
},
{
"kind": "Service",
"apiVersion": "v1",
"metadata": {
"name": "${DATABASE_SERVICE_NAME}",
"annotations": {
"description": "Exposes the database server"
}
},
"spec": {
"ports": [
{
"name": "postgresql",
"port": 5432,
"targetPort": 5432
}
],
"selector": {
"name": "${DATABASE_SERVICE_NAME}"
}
}
},
{
"kind": "DeploymentConfig",
"apiVersion": "v1",
"metadata": {
"name": "${DATABASE_SERVICE_NAME}",
"annotations": {
"description": "Defines how to deploy the database"
}
},
"spec": {
"strategy": {
"type": "Recreate"
},
"triggers": [
{
"type": "ImageChange",
"imageChangeParams": {
"automatic": false,
"containerNames": [
"postgresql"
],
"from": {
"kind": "ImageStreamTag",
"namespace": "openshift",
"name": "postgresql"
}
}
},
{
"type": "ConfigChange"
}
],
"replicas": 1,
"selector": {
"name": "${DATABASE_SERVICE_NAME}"
},
"template": {
"metadata": {
"name": "${DATABASE_SERVICE_NAME}",
"labels": {
"name": "${DATABASE_SERVICE_NAME}"
}
},
"spec": {
"containers": [
{
"name": "postgresql",
"image": "postgresql",
"ports": [
{
"containerPort": 5432
}
],
"env": [
{
"name": "POSTGRESQL_USER",
"value": "${DATABASE_USER}"
},
{
"name": "POSTGRESQL_PASSWORD",
"value": "${DATABASE_PASSWORD}"
},
{
"name": "POSTGRESQL_DATABASE",
"value": "${DATABASE_NAME}"
}
]
}
]
}
}
}
}
],
"parameters": [
{
"name": "SOURCE_REPOSITORY_URL",
"description": "The URL of the repository with your application source code",
"value": "https://github.com/openshift/django-ex.git"
},
{
"name": "SOURCE_REPOSITORY_REF",
"description": "Set this to a branch name, tag or other ref of your repository if you are not using the default branch"
},
{
"name": "CONTEXT_DIR",
"description": "Set this to the relative path to your project if it is not in the root of your repository"
},
{
"name": "APPLICATION_DOMAIN",
"description": "The exposed hostname that will route to the Django service, if left blank a value will be defaulted.",
"value": ""
},
{
"name": "GITHUB_WEBHOOK_SECRET",
"description": "A secret string used to configure the GitHub webhook",
"generate": "expression",
"from": "[a-zA-Z0-9]{40}"
},
{
"name": "DATABASE_SERVICE_NAME",
"description": "Database service name",
"value": "postgresql"
},
{
"name": "DATABASE_ENGINE",
"description": "Database engine: postgresql, mysql or sqlite (default)",
"value": "postgresql"
},
{
"name": "DATABASE_NAME",
"description": "Database name",
"value": "default"
},
{
"name": "DATABASE_USER",
"description": "Database user name",
"value": "django"
},
{
"name": "DATABASE_PASSWORD",
"description": "Database user password",
"generate": "expression",
"from": "[a-zA-Z0-9]{16}"
},
{
"name": "APP_CONFIG",
"description": "Relative path to Gunicorn configuration file (optional)"
},
{
"name": "DJANGO_SECRET_KEY",
"description": "Set this to a long random string",
"generate": "expression",
"from": "[\\w]{50}"
}
]
}

Application Management

Applications can be created from source code (the source is first built into an image), from an image, or from a template.

Creating from source code:

oc new-app myproject/my-ruby~https://github.com/openshift/ruby-hello-world.git (failed)
oc new-app /home/user/code/myapp --strategy=docker

Creating from an image:

Using an image from a local/private registry:

oc new-app 10.110.17.138:5000/centos  --insecure-registry=true

ImageStreamTag

Deployment

The following file defines a deployment; it can use either internal images or external images.

apiVersion: v1
kind: ReplicationController
metadata:
  name: frontend-1
spec:
  replicas: 1
  selector:
    name: frontend
  template:
    metadata:
      labels:
        name: frontend
    spec:
      containers:
      - image: 172.30.90.102:5000/demo/hello-openshift
        name: helloworld
        ports:
        - containerPort: 8080
          protocol: TCP
      restartPolicy: Always

Persistent Storage: PersistentVolume

Set up the NFS server (only NFSv4 is supported):


[root@oc-master example]# cat /etc/exports
/opt/docker-registry 192.168.6.0/24(rw,sync,no_root_squash,no_all_squash)
/openshift-pv *(rw,all_squash)
[root@oc-master example]# mkdir /openshift-pv
[root@oc-master example]# chown -R nfsnobody:nfsnobody /openshift-pv
[root@oc-master example]# chmod 777 /openshift-pv/

Create the PV (PersistentVolume); the creation commands follow the JSON below:

{
  "apiVersion": "v1",
  "kind": "PersistentVolume",
  "metadata": {
    "name": "jenkins"
  },
  "spec": {
    "capacity": {
      "storage": "5Gi"
    },
    "accessModes": [ "ReadWriteOnce" ],
    "nfs": {
      "path": "/openshift-pv",
      "server": "192.168.6.7"
    },
    "persistentVolumeReclaimPolicy": "Recycle"
  }
}
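
Assuming the JSON above is saved as jenkins-pv.json (the filename is arbitrary), the PV is created and checked with:

oc create -f jenkins-pv.json
oc get pv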

Example: Jenkins using the PV created above

{
  "kind": "Template",
  "apiVersion": "v1",
  "metadata": {
    "name": "jenkins-persistent",
    "creationTimestamp": null,
    "annotations": {
      "description": "Jenkins service, with persistent storage.",
      "iconClass": "icon-jenkins",
      "tags": "instant-app,jenkins"
    }
  },
  "objects": [
    {
      "kind": "Service",
      "apiVersion": "v1",
      "metadata": {
        "name": "${JENKINS_SERVICE_NAME}",
        "creationTimestamp": null
      },
      "spec": {
        "ports": [
          {
            "name": "web",
            "protocol": "TCP",
            "port": 8080,
            "targetPort": 8080,
            "nodePort": 0
          }
        ],
        "selector": {
          "name": "${JENKINS_SERVICE_NAME}"
        },
        "portalIP": "",
        "type": "ClusterIP",
        "sessionAffinity": "None"
      }
    },
    {
      "kind": "Route",
      "apiVersion": "v1",
      "metadata": {
        "name": "jenkins",
        "creationTimestamp": null
      },
      "spec": {
        "to": {
          "kind": "Service",
          "name": "${JENKINS_SERVICE_NAME}"
        },
        "tls": {
          "termination": "edge",

          "certificate"**:** "-----BEGIN CERTIFICATE-----**\\n**MIIDIjCCAgqgAwIBAgIBATANBgkqhkiG9w0BAQUFADCBoTELMAkGA1UEBhMCVVMx**\\n**CzAJBgNVBAgMAlNDMRUwEwYDVQQHDAxEZWZhdWx0IENpdHkxHDAaBgNVBAoME0Rl**\\n**ZmF1bHQgQ29tcGFueSBMdGQxEDAOBgNVBAsMB1Rlc3QgQ0ExGjAYBgNVBAMMEXd3**\\n**dy5leGFtcGxlY2EuY29tMSIwIAYJKoZIhvcNAQkBFhNleGFtcGxlQGV4YW1wbGUu**\\n**Y29tMB4XDTE1MDExMjE0MTk0MVoXDTE2MDExMjE0MTk0MVowfDEYMBYGA1UEAwwP**\\n**d3d3LmV4YW1wbGUuY29tMQswCQYDVQQIDAJTQzELMAkGA1UEBhMCVVMxIjAgBgkq**\\n**hkiG9w0BCQEWE2V4YW1wbGVAZXhhbXBsZS5jb20xEDAOBgNVBAoMB0V4YW1wbGUx**\\n**EDAOBgNVBAsMB0V4YW1wbGUwgZ8wDQYJKoZIhvcNAQEBBQADgY0AMIGJAoGBAMrv**\\n**gu6ZTTefNN7jjiZbS/xvQjyXjYMN7oVXv76jbX8gjMOmg9m0xoVZZFAE4XyQDuCm**\\n**47VRx5Qrf/YLXmB2VtCFvB0AhXr5zSeWzPwaAPrjA4ebG+LUo24ziS8KqNxrFs1M**\\n**mNrQUgZyQC6XIe1JHXc9t+JlL5UZyZQC1IfaJulDAgMBAAGjDTALMAkGA1UdEwQC**\\n**MAAwDQYJKoZIhvcNAQEFBQADggEBAFCi7ZlkMnESvzlZCvv82Pq6S46AAOTPXdFd**\\n**TMvrh12E1sdVALF1P1oYFJzG1EiZ5ezOx88fEDTW+Lxb9anw5/KJzwtWcfsupf1m**\\n**V7J0D3qKzw5C1wjzYHh9/Pz7B1D0KthQRATQCfNf8s6bbFLaw/dmiIUhHLtIH5Qc**\\n**yfrejTZbOSP77z8NOWir+BWWgIDDB2//3AkDIQvT20vmkZRhkqSdT7et4NmXOX/j**\\n**jhPti4b2Fie0LeuvgaOdKjCpQQNrYthZHXeVlOLRhMTSk3qUczenkKTOhvP7IS9q**\\n**+Dzv5hqgSfvMG392KWh5f8xXfJNs4W5KLbZyl901MeReiLrPH3w=**\\n**\-----END CERTIFICATE-----"**,**

          "key"**:** "-----BEGIN PRIVATE KEY-----**\\n**MIICeAIBADANBgkqhkiG9w0BAQEFAASCAmIwggJeAgEAAoGBAMrvgu6ZTTefNN7j**\\n**jiZbS/xvQjyXjYMN7oVXv76jbX8gjMOmg9m0xoVZZFAE4XyQDuCm47VRx5Qrf/YL**\\n**XmB2VtCFvB0AhXr5zSeWzPwaAPrjA4ebG+LUo24ziS8KqNxrFs1MmNrQUgZyQC6X**\\n**Ie1JHXc9t+JlL5UZyZQC1IfaJulDAgMBAAECgYEAnxOjEj/vrLNLMZE1Q9H7PZVF**\\n**WdP/JQVNvQ7tCpZ3ZdjxHwkvf//aQnuxS5yX2Rnf37BS/TZu+TIkK4373CfHomSx**\\n**UTAn2FsLmOJljupgGcoeLx5K5nu7B7rY5L1NHvdpxZ4YjeISrRtEPvRakllENU5y**\\n**gJE8c2eQOx08ZSRE4TkCQQD7dws2/FldqwdjJucYijsJVuUdoTqxP8gWL6bB251q**\\n**elP2/a6W2elqOcWId28560jG9ZS3cuKvnmu/4LG88vZFAkEAzphrH3673oTsHN+d**\\n**uBd5uyrlnGjWjuiMKv2TPITZcWBjB8nJDSvLneHF59MYwejNNEof2tRjgFSdImFH**\\n**mi995wJBAMtPjW6wiqRz0i41VuT9ZgwACJBzOdvzQJfHgSD9qgFb1CU/J/hpSRIM**\\n**kYvrXK9MbvQFvG6x4VuyT1W8mpe1LK0CQAo8VPpffhFdRpF7psXLK/XQ/0VLkG3O**\\n**KburipLyBg/u9ZkaL0Ley5zL5dFBjTV2Qkx367Ic2b0u9AYTCcgi2DsCQQD3zZ7B**\\n**v7BOm7MkylKokY2MduFFXU0Bxg6pfZ7q3rvg8gqhUFbaMStPRYg6myiDiW/JfLhF**\\n**TcFT4touIo7oriFJ**\\n**\-----END PRIVATE KEY-----"**,**

          "caCertificate"**:** "-----BEGIN CERTIFICATE-----**\\n**MIIEFzCCAv+gAwIBAgIJALK1iUpF2VQLMA0GCSqGSIb3DQEBBQUAMIGhMQswCQYD**\\n**VQQGEwJVUzELMAkGA1UECAwCU0MxFTATBgNVBAcMDERlZmF1bHQgQ2l0eTEcMBoG**\\n**A1UECgwTRGVmYXVsdCBDb21wYW55IEx0ZDEQMA4GA1UECwwHVGVzdCBDQTEaMBgG**\\n**A1UEAwwRd3d3LmV4YW1wbGVjYS5jb20xIjAgBgkqhkiG9w0BCQEWE2V4YW1wbGVA**\\n**ZXhhbXBsZS5jb20wHhcNMTUwMTEyMTQxNTAxWhcNMjUwMTA5MTQxNTAxWjCBoTEL**\\n**MAkGA1UEBhMCVVMxCzAJBgNVBAgMAlNDMRUwEwYDVQQHDAxEZWZhdWx0IENpdHkx**\\n**HDAaBgNVBAoME0RlZmF1bHQgQ29tcGFueSBMdGQxEDAOBgNVBAsMB1Rlc3QgQ0Ex**\\n**GjAYBgNVBAMMEXd3dy5leGFtcGxlY2EuY29tMSIwIAYJKoZIhvcNAQkBFhNleGFt**\\n**cGxlQGV4YW1wbGUuY29tMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA**\\n**w2rK1J2NMtQj0KDug7g7HRKl5jbf0QMkMKyTU1fBtZ0cCzvsF4CqV11LK4BSVWaK**\\n**rzkaXe99IVJnH8KdOlDl5Dh/+cJ3xdkClSyeUT4zgb6CCBqg78ePp+nN11JKuJlV**\\n**IG1qdJpB1J5O/kCLsGcTf7RS74MtqMFo96446Zvt7YaBhWPz6gDaO/TUzfrNcGLA**\\n**EfHVXkvVWqb3gqXUztZyVex/gtP9FXQ7gxTvJml7UkmT0VAFjtZnCqmFxpLZFZ15**\\n**+qP9O7Q2MpsGUO/4vDAuYrKBeg1ZdPSi8gwqUP2qWsGd9MIWRv3thI2903BczDc7**\\n**r8WaIbm37vYZAS9G56E4+wIDAQABo1AwTjAdBgNVHQ4EFgQUugLrSJshOBk5TSsU**\\n**ANs4+SmJUGwwHwYDVR0jBBgwFoAUugLrSJshOBk5TSsUANs4+SmJUGwwDAYDVR0T**\\n**BAUwAwEB/zANBgkqhkiG9w0BAQUFAAOCAQEAaMJ33zAMV4korHo5aPfayV3uHoYZ**\\n**1ChzP3eSsF+FjoscpoNSKs91ZXZF6LquzoNezbfiihK4PYqgwVD2+O0/Ty7UjN4S**\\n**qzFKVR4OS/6lCJ8YncxoFpTntbvjgojf1DEataKFUN196PAANc3yz8cWHF4uvjPv**\\n**WkgFqbIjb+7D1YgglNyovXkRDlRZl0LD1OQ0ZWhd4Ge1qx8mmmanoBeYZ9+DgpFC**\\n**j9tQAbS867yeOryNe7sEOIpXAAqK/DTu0hB6+ySsDfMo4piXCc2aA/eI2DCuw08e**\\n**w17Dz9WnupZjVdwTKzDhFgJZMLDqn37HQnT6EemLFqbcR0VPEnfyhDtZIQ==**\\n**\-----END CERTIFICATE-----"

        }
      }
    },
    {
      "kind": "PersistentVolumeClaim",
      "apiVersion": "v1",
      "metadata": {
        "name": "${JENKINS_SERVICE_NAME}"
      },
      "spec": {
        "accessModes": [
          "ReadWriteOnce"
        ],
        "resources": {
          "requests": {
            "storage": "${VOLUME_CAPACITY}"
          }
        }
      }
    },
    {
      "kind": "DeploymentConfig",
      "apiVersion": "v1",
      "metadata": {
        "name": "${JENKINS_SERVICE_NAME}",
        "creationTimestamp": null
      },
      "spec": {
        "strategy": {
          "type": "Recreate",
          "resources": {}
        },
        "triggers": [
          {
            "type": "ImageChange",
            "imageChangeParams": {
              "automatic": true,
              "containerNames": [
                "jenkins"
              ],
              "from": {
                "kind": "ImageStreamTag",
                "name": "jenkins-1-centos7:latest",
                "namespace": "openshift"
              },
              "lastTriggeredImage": ""
            }
          },
          {
            "type": "ConfigChange"
          }
        ],
        "replicas": 1,
        "selector": {
          "name": "${JENKINS_SERVICE_NAME}"
        },
        "template": {
          "metadata": {
            "creationTimestamp": null,
            "labels": {
              "name": "${JENKINS_SERVICE_NAME}"
            }
          },
          "spec": {
            "containers": [
              {
                "name": "jenkins",
                "image": "${JENKINS_IMAGE}",
                "env": [
                  {
                    "name": "JENKINS_PASSWORD",
                    "value": "${JENKINS_PASSWORD}"
                  }
                ],
                "resources": {},
                "volumeMounts": [
                  {
                    "name": "${JENKINS_SERVICE_NAME}-data",
                    "mountPath": "/var/lib/jenkins"
                  }
                ],
                "terminationMessagePath": "/dev/termination-log",
                "imagePullPolicy": "IfNotPresent",
                "capabilities": {},
                "securityContext": {
                  "capabilities": {},
                  "privileged": false
                }
              }
            ],
            "volumes": [
              {
                "name": "${JENKINS_SERVICE_NAME}-data",
                "persistentVolumeClaim": {
                  "claimName": "${JENKINS_SERVICE_NAME}"
                }
              }
            ],
            "restartPolicy": "Always",
            "dnsPolicy": "ClusterFirst"
          }
        }
      }
    }
  ],
  "parameters": [
    {
      "name": "JENKINS_SERVICE_NAME",
      "description": "Jenkins service name",
      "value": "jenkins"
    },
    {
      "name": "JENKINS_PASSWORD",
      "description": "Password for the Jenkins user",
      "generate": "expression",
      "value": "password"
    },
    {
      "name": "VOLUME_CAPACITY",
      "description": "Volume space available for data, e.g. 512Mi, 2Gi",
      "value": "512Mi",
      "required": true
    }
  ],
  "labels": {
    "template": "jenkins-persistent-template"
  }
}

Using Routes

Routes expose services to external clients. This requires external DNS support: the DNS records must resolve to the IP of the node where the haproxy-router runs.

Log in to the container running the haproxy-router and check its configuration; you can confirm that the IP address of every application pod has been written into the HAProxy configuration file for load balancing.
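
For reference, a route for a service can also be created directly from the CLI; the hostname below is an example and must be resolvable by the external DNS:

oc expose service django-psql-example --hostname=www.django-psql-example.devops.inspur -n demo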

backend be_http_demo_django-psql-example
  mode http
  option redispatch
  option forwardfor
  balance leastconn
  timeout check 5000ms
  http-request set-header X-Forwarded-Host %[req.hdr(host)]
  http-request set-header X-Forwarded-Port %[dst_port]
  http-request set-header X-Forwarded-Proto https if { ssl_fc }
    cookie OPENSHIFT_demo_django-psql-example_SERVERID insert indirect nocache httponly
    http-request set-header X-Forwarded-Proto http
  http-request set-header Forwarded for=%[src],host=%[req.hdr(host)],proto=%[req.hdr(X-Forwarded-Proto)]
  server 10.1.0.3:8080 10.1.0.3:8080 check inter 5000ms cookie 10.1.0.3:8080

backend be_http_demo_jenkins
  mode http
  option redispatch
  option forwardfor
  balance leastconn
  timeout check 5000ms
  http-request set-header X-Forwarded-Host %[req.hdr(host)]
  http-request set-header X-Forwarded-Port %[dst_port]
  http-request set-header X-Forwarded-Proto https if { ssl_fc }
    cookie OPENSHIFT_demo_jenkins_SERVERID insert indirect nocache httponly
    http-request set-header X-Forwarded-Proto http
  http-request set-header Forwarded for=%[src],host=%[req.hdr(host)],proto=%[req.hdr(X-Forwarded-Proto)]
  server 10.1.1.20:8080 10.1.1.20:8080 check inter 5000ms cookie 10.1.1.20:8080

DNS Configuration

The DNS server is BIND9 deployed on Ubuntu 14.04. Reference configuration (192.168.10.149 is the IP of the node running the haproxy-router container):

root@origin-dns:/etc/bind# cat /etc/bind/named.conf.local
//
// Do any local configuration here
//
// Consider adding the 1918 zones here, if they are not used in your
// organization
//include "/etc/bind/zones.rfc1918";
zone "novalocal" {
        type master;
        file "/etc/bind/db.novalocal";
};

zone "devops.inspur" {
        type master;
        file "/etc/bind/db.devops.inspur";
};

zone "10.168.192.in-addr.arpa" {
        type master;
        file "/etc/bind/db.17.110.10";
};

cat db.devops.inspur
;
; BIND data file for dev sites
;
$TTL    604800
@       IN      SOA     devops.inspur. root.devops.inspur. (
                              3         ; Serial
                         604800         ; Refresh
                          86400         ; Retry
                        2419200         ; Expire
                         604800 )       ; Negative Cache TTL
;
@       IN      NS      devops.inspur.
@       IN      A       10.110.17.131
*.devops.inspur.  14400   IN      A       10.110.17.131

root@origin-dns:/etc/bind# cat db.17.110.10
;
; BIND reverse data file for dev domains
;
$TTL    604800
@       IN      SOA     dev. root.dev. (
                              3         ; Serial
                         604800         ; Refresh
                          86400         ; Retry
                        2419200         ; Expire
                         604800 )       ; Negative Cache TTL
;
@        IN      NS      devops.inspur.
149      IN      PTR     devops.inspur.

Get the Route corresponding to the deployed application:

[root@origin-master image-streams]# oc get route -n demo
NAME               HOST/PORT                            PATH      SERVICE            LABELS                      INSECURE POLICY   TLS TERMINATION
django-psql-test   www.django-psql-test.devops.inspur             django-psql-test   template=django-psql-test

Once a Windows or Linux client is configured to use this DNS server, it can access the application by its hostname.
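
A quick check from such a client, assuming the route host shown above resolves through the configured DNS:

curl http://www.django-psql-test.devops.inspur/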

S2I

Reference: https://github.com/openshift/source-to-image

A brief look at S2I shows the idea is similar to CloudFoundry's buildpacks: S2I combines a builder image with source code, using scripts to inject the source into the builder image and produce the application's Docker image. The builder image, like a CloudFoundry buildpack, ships the application's runtime environment, e.g. Tomcat, a JRE, or nginx.
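
A minimal standalone example of the idea, assuming the s2i binary is installed and a Python builder image such as openshift/python-27-centos7 is available (the image name and output tag here are illustrative):

s2i build https://github.com/openshift/django-ex.git openshift/python-27-centos7 django-ex-app
docker run -p 8080:8080 django-ex-app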

For example, the builder image for running Python applications:

https://github.com/openshift/sti-python/tree/master/2.7

The Dockerfile used to build the builder image:

FROM openshift/base-centos7

# This image provides a Python 2.7 environment you can use to run your Python
# applications.

MAINTAINER SoftwareCollections.org <sclorg@redhat.com>

EXPOSE 8080

ENV PYTHON_VERSION=2.7 \
    PATH=$HOME/.local/bin/:$PATH

LABEL io.k8s.description="Platform for building and running Python 2.7 applications" \
      io.k8s.display-name="Python 2.7" \
      io.openshift.expose-services="8080:http" \
      io.openshift.tags="builder,python,python27,rh-python27"

RUN yum install -y centos-release-scl && \
    yum install -y --setopt=tsflags=nodocs --enablerepo=centosplus python27 python27-python-devel python27-python-setuptools python27-python-pip epel-release && \
    yum install -y --setopt=tsflags=nodocs install nss_wrapper && \
    yum clean all -y

# Copy the S2I scripts from the specific language image to $STI_SCRIPTS_PATH
COPY ./s2i/bin/ $STI_SCRIPTS_PATH

# Each language image can have 'contrib' a directory with extra files needed to
# run and build the applications.
COPY ./contrib/ /opt/app-root

RUN chown -R 1001:0 /opt/app-root && chmod -R og+rwx /opt/app-root

USER 1001

# Set the default CMD to print the usage of the language image
CMD $STI_SCRIPTS_PATH/usage

Different development stacks need different builder images, and even within the same stack, applications with different runtime requirements may need their own builder images.

Although S2I simplifies the application deployment workflow, it also adds more customization work.

Recommended usage scenario: the platform provides a fixed set of builder images.