Learning k8s on mac series: Knative, Part III

Published: 2025-12-09 11:50:25

Because gcr.io cannot be reached directly from mainland China, images fail to pull and Knative is hard to deploy. We use the mirrored images from

https://github.com/anjia0532/gcr.io_mirror

instead, but the image names have to be rewritten:

# original image
gcr.io/knative-releases/knative.dev/eventing/cmd/controller:latest
# converted image
anjia0532/knative-releases.knative.dev.eventing.cmd.controller:latest
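The renaming rule can be sketched as a one-liner (a sketch that only handles plain `gcr.io` names without a digest: drop the registry prefix, turn the remaining slashes into dots, prepend the `anjia0532/` Docker Hub namespace):

```shell
# Map a gcr.io image name to its anjia0532 mirror name.
echo "gcr.io/knative-releases/knative.dev/eventing/cmd/controller:latest" |
  sed 's|^gcr\.io/||; s|/|.|g; s|^|anjia0532/|'
# → anjia0532/knative-releases.knative.dev.eventing.cmd.controller:latest
```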

Because the image references in serving-core.yaml are pinned by digest (commit id), replace

@sha256: xxxx

with

:latest
imagePullPolicy: IfNotPresent
#xxxx
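The digest-to-tag part of that substitution can be sketched with sed (a sketch; GNU sed assumed, and the `imagePullPolicy` line still has to be added per container):

```shell
# A digest-pinned reference, as found in serving-core.yaml (digest shortened here).
ref='gcr.io/knative-releases/knative.dev/serving/cmd/queue@sha256:84af1fba93bc'
# Replace the @sha256 digest with the :latest tag.
echo "$ref" | sed -E 's|@sha256:[0-9a-f]+|:latest|'
# → gcr.io/knative-releases/knative.dev/serving/cmd/queue:latest
```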

Filter out the images we need, then pull them:

images=$(grep 'image:' serverless/knative/setup/serving-core.yaml | grep 'gcr.io')
eval $(echo ${images} |
        sed 's/k8s\.gcr\.io/anjia0532\/google-containers/g;s/gcr\.io/anjia0532/g;s/\//\./g;s/ /\n/g;s/anjia0532\./anjia0532\//g' |
        uniq |
        awk '{print "docker pull "$1";"}'
)

After the pull completes, retag the images back to their original names:

for img in $(docker images --format "{{.Repository}}:{{.Tag}}" | grep "anjia0532"); do
  n=$(echo ${img} | awk -F'[/.:]' '{printf "gcr.io/%s",$2}')
  image=$(echo ${img} | awk -F'[/.:]' '{printf "/knative.%s/%s/%s/%s",$4,$5,$6,$7}')
  tag=$(echo ${img} | awk -F'[:]' '{printf ":%s",$2}')
  echo "${n}${image}${tag}"
  docker tag $img "${n}${image}${tag}"
  [[ ${n} == "gcr.io/google-containers" ]] && docker tag $img "k8s.gcr.io${image}${tag}"
  docker rmi $img
done

Deploy:

kubectl apply -f serving-core.yaml

Because it may conflict with the existing Istio installation, we install Kourier instead:

kubectl apply -f https://github.com/knative/net-kourier/releases/download/knative-v1.3.0/kourier.yaml

It depends on two images:

image: gcr.io/knative-releases/knative.dev/net-kourier/cmd/kourier@sha256:84af1fba93bcc1d504ee6fc110a49be80440f08d461ccb0702621b7b62d0f7b6
image: docker.io/envoyproxy/envoy:v1.18-latest

The gcr.io one likewise needs to come from the GitHub mirror:

images=$(echo "gcr.io/knative-releases/knative.dev/net-kourier/cmd/kourier:latest")
eval $(echo ${images} |
        sed 's/k8s\.gcr\.io/anjia0532\/google-containers/g;s/gcr\.io/anjia0532/g;s/\//\./g;s/ /\n/g;s/anjia0532\./anjia0532\//g' |
        uniq |
        awk '{print "docker pull "$1";"}'
)
docker tag docker.io/anjia0532/knative-releases.knative.dev.net-kourier.cmd.kourier:latest gcr.io/knative-releases/knative.dev/net-kourier/cmd/kourier:latest
docker rmi docker.io/anjia0532/knative-releases.knative.dev.net-kourier.cmd.kourier:latest

Then patch the ingress class:

% kubectl patch configmap/config-network \
    --namespace knative-serving \
    --type merge \
    --patch '{"data":{"ingress.class":"kourier.ingress.networking.knative.dev"}}'
configmap/config-network patched

Check it:

% kubectl --namespace kourier-system get service kourier
NAME      TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE
kourier   LoadBalancer   10.108.41.244   localhost     80:31859/TCP,443:30753/TCP   34s

% kubectl -n kourier-system get pods
NAME                                      READY   STATUS    RESTARTS   AGE
3scale-kourier-gateway-5f96966d45-n5tgg   1/1     Running   0          115s

At this point Knative Serving is up:

% kubectl get pods -n knative-serving
NAME                                      READY   STATUS    RESTARTS   AGE
activator-7cf4bd8548-gg5cm                1/1     Running   0          29m
autoscaler-577d766bdd-5xmkp               1/1     Running   0          29m
controller-5b74bfcc9f-z6kpf               1/1     Running   0          29m
domain-mapping-5b4f5f66b5-g8cmt           1/1     Running   0          29m
domainmapping-webhook-5d7fb6566d-59blp    1/1     Running   0          29m
net-kourier-controller-766c565d78-5gqpz   1/1     Running   0          50s
webhook-699fc555bf-4t9nk                  1/1     Running   0          29m

If you also want to deploy default-domain, handle it the same way:

images=$(echo "gcr.io/knative-releases/knative.dev/serving/cmd/default-domain:latest")
eval $(echo ${images} |
        sed 's/k8s\.gcr\.io/anjia0532\/google-containers/g;s/gcr\.io/anjia0532/g;s/\//\./g;s/ /\n/g;s/anjia0532\./anjia0532\//g' |
        uniq |
        awk '{print "docker pull "$1";"}'
)
# docker tag docker.io/anjia0532/knative-releases.knative.dev.serving.cmd.default-domain:latest gcr.io/knative-releases/knative.dev/serving/cmd/default-domain:latest
# docker rmi docker.io/anjia0532/knative-releases.knative.dev.serving.cmd.default-domain:latest

Then deploy our hello world example:

package main

import (
	"fmt"
	"log"
	"net/http"
	"os"
)

func handler(w http.ResponseWriter, r *http.Request) {
	log.Print("helloworld: received a request")
	target := os.Getenv("TARGET")
	if target == "" {
		target = "World"
	}
	fmt.Fprintf(w, "Hello %s!\n", target)
}

func main() {
	log.Print("helloworld: starting server...")
	http.HandleFunc("/", handler)
	port := os.Getenv("PORT")
	if port == "" {
		port = "8080"
	}
	log.Printf("helloworld: listening on port %s", port)
	log.Fatal(http.ListenAndServe(fmt.Sprintf(":%s", port), nil))
}

Deployment manifest:

---
apiVersion: v1
kind: Namespace
metadata:
  name: helloworld
  labels:
    serving.knative.dev/service: hello
    serving.knative.dev/visibility: cluster-local
---
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: hello
  namespace: helloworld
  labels:
    serving.knative.dev/service: hello
    serving.knative.dev/visibility: cluster-local
spec:
  template:
    metadata:
      labels:
        app: hello
      annotations:
        autoscaling.knative.dev/target: "10"
    spec:
      containers:
        - image: docker.io/xia*****/helloworld-go
          env:
            - name: TARGET
              value: "World!"

Build the image:

go mod init helloworld
go mod tidy
docker build -t xia*****/helloworld-go .
 => => naming to docker.io/xia*****/helloworld-go

Then deploy the application:

% kubectl apply -f helloworld.yaml
namespace/helloworld unchanged
service.serving.knative.dev/hello created

Check it:

% kubectl get route hello -n helloworld
NAME    URL                                   READY   REASON
hello   http://hello.helloworld.example.com   False   RevisionMissing

An image pull is failing:

% kubectl -n helloworld describe pod hello-00005-deployment-77c7d84b98-gjwlr
Name:         hello-00005-deployment-77c7d84b98-gjwlr
Namespace:    helloworld
Priority:     0
Node:         docker-desktop/192.168.65.4
Start Time:   Sat, 26 Mar 2022 18:46:22 +0800
Labels:       app=hello
              pod-template-hash=77c7d84b98
              serving.knative.dev/configuration=hello
              serving.knative.dev/configurationGeneration=5
              serving.knative.dev/configurationUID=ee31bffa-a1ac-4001-83b6-4ead5457d4e5
              serving.knative.dev/revision=hello-00005
              serving.knative.dev/revisionUID=fd0d447e-94cd-4da6-b02a-50b4d1fd8032
              serving.knative.dev/service=hello
              serving.knative.dev/serviceUID=b96afd85-667a-42f3-95d6-f279ba418235
Annotations:  serving.knative.dev/creator: docker-for-desktop
Status:       Pending
IP:           10.1.4.234
IPs:
  IP:  10.1.4.234
Controlled By:  ReplicaSet/hello-00005-deployment-77c7d84b98
Containers:
  user-container:
    Container ID:   docker://bb422eb2d6bf0d6bdec082597814fc86391855f678e684d2b473b05bd5f68888
    Image:          index.docker.io/xia*****/helloworld-go@sha256:23b742c725fa51786bb71603a28c34e2723f9b65384a4886349145df532c1405
    Image ID:       docker-pullable://xia*****/helloworld-go@sha256:23b742c725fa51786bb71603a28c34e2723f9b65384a4886349145df532c1405
    Port:           8080/TCP
    Host Port:      0/TCP
    State:          Running
      Started:      Sat, 26 Mar 2022 18:46:24 +0800
    Ready:          True
    Restart Count:  0
    Environment:
      TARGET:           World!
      PORT:             8080
      K_REVISION:       hello-00005
      K_CONFIGURATION:  hello
      K_SERVICE:        hello
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-h77nr (ro)
  queue-proxy:
    Container ID:
    Image:          gcr.io/knative-releases/knative.dev/serving/cmd/queue:latest
    Image ID:
    Ports:          8022/TCP, 9090/TCP, 9091/TCP, 8012/TCP
    Host Ports:     0/TCP, 0/TCP, 0/TCP, 0/TCP
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Requests:
      cpu:  25m
    Readiness:  http-get http://:8012/ delay=0s timeout=1s period=10s #success=1 #failure=3
    Environment:
      SERVING_NAMESPACE:                 helloworld
      SERVING_SERVICE:                   hello
      SERVING_CONFIGURATION:             hello
      SERVING_REVISION:                  hello-00005
      QUEUE_SERVING_PORT:                8012
      CONTAINER_CONCURRENCY:             0
      REVISION_TIMEOUT_SECONDS:          300
      SERVING_POD:                       hello-00005-deployment-77c7d84b98-gjwlr (v1:metadata.name)
      SERVING_POD_IP:                     (v1:status.podIP)
      SERVING_LOGGING_CONFIG:
      SERVING_LOGGING_LEVEL:
      SERVING_REQUEST_LOG_TEMPLATE:      {"httpRequest": {"requestMethod": "{{.Request.Method}}", "requestUrl": "{{js .Request.RequestURI}}", "requestSize": "{{.Request.ContentLength}}", "status": {{.Response.Code}}, "responseSize": "{{.Response.Size}}", "userAgent": "{{js .Request.UserAgent}}", "remoteIp": "{{js .Request.RemoteAddr}}", "serverIp": "{{.Revision.PodIP}}", "referer": "{{js .Request.Referer}}", "latency": "{{.Response.Latency}}s", "protocol": "{{.Request.Proto}}"}, "traceId": "{{index .Request.Header "X-B3-Traceid"}}"}
      SERVING_ENABLE_REQUEST_LOG:        false
      SERVING_REQUEST_METRICS_BACKEND:   prometheus
      TRACING_CONFIG_BACKEND:            none
      TRACING_CONFIG_ZIPKIN_ENDPOINT:
      TRACING_CONFIG_DEBUG:              false
      TRACING_CONFIG_SAMPLE_RATE:        0.1
      USER_PORT:                         8080
      SYSTEM_NAMESPACE:                  knative-serving
      METRICS_DOMAIN:                    knative.dev/internal/serving
      SERVING_READINESS_PROBE:           {"tcpSocket":{"port":8080,"host":"127.0.0.1"},"successThreshold":1}
      ENABLE_PROFILING:                  false
      SERVING_ENABLE_PROBE_REQUEST_LOG:  false
      METRICS_COLLECTOR_ADDRESS:
      CONCURRENCY_STATE_ENDPOINT:
      CONCURRENCY_STATE_TOKEN_PATH:      /var/run/secrets/tokens/state-token
      HOST_IP:                            (v1:status.hostIP)
      ENABLE_HTTP2_AUTO_DETECTION:       false
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-h77nr (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  kube-api-access-h77nr:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   Burstable
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                  From               Message
  ----     ------     ----                 ----               -------
  Normal   Scheduled  11m                  default-scheduler  Successfully assigned helloworld/hello-00005-deployment-77c7d84b98-gjwlr to docker-desktop
  Normal   Pulled     11m                  kubelet            Container image "index.docker.io/xia*****/helloworld-go@sha256:23b742c725fa51786bb71603a28c34e2723f9b65384a4886349145df532c1405" already present on machine
  Normal   Created    11m                  kubelet            Created container user-container
  Normal   Started    11m                  kubelet            Started container user-container
  Warning  Failed     9m59s (x2 over 11m)  kubelet            Failed to pull image "gcr.io/knative-releases/knative.dev/serving/cmd/queue:latest": rpc error: code = Unknown desc = Error response from daemon: Get "https://gcr.io/v2/": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
  Warning  Failed     9m22s (x5 over 11m)  kubelet            Error: ImagePullBackOff
  Normal   Pulling    9m7s (x4 over 11m)   kubelet            Pulling image "gcr.io/knative-releases/knative.dev/serving/cmd/queue:latest"
  Warning  Failed     8m52s (x4 over 11m)  kubelet            Error: ErrImagePull
  Warning  Failed     8m52s (x2 over 10m)  kubelet            Failed to pull image "gcr.io/knative-releases/knative.dev/serving/cmd/queue:latest": rpc error: code = Unknown desc = Error response from daemon: Get "https://gcr.io/v2/": context deadline exceeded
  Normal   BackOff    85s (x35 over 11m)   kubelet            Back-off pulling image "gcr.io/knative-releases/knative.dev/serving/cmd/queue:latest"

But this image already exists locally. It turns out the Deployment's image pull policy is Always:

image: gcr.io/knative-releases/knative.dev/serving/cmd/queue:latest
imagePullPolicy: Always

Change it to IfNotPresent:

% kubectl -n helloworld edit deploy hello-00001-deployment
deployment.apps/hello-00001-deployment edited
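After the edit, the queue-proxy container spec should look roughly like this (a sketch; surrounding fields omitted):

```yaml
spec:
  template:
    spec:
      containers:
        - name: queue-proxy
          image: gcr.io/knative-releases/knative.dev/serving/cmd/queue:latest
          imagePullPolicy: IfNotPresent   # was Always
```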

Problem solved:

% kubectl -n helloworld get deploy
NAME                     READY   UP-TO-DATE   AVAILABLE   AGE
hello-00001-deployment   1/1     1            1           3m19s

But then another problem shows up:

% kubectl get ksvc -n helloworld
NAME    URL                                   LATESTCREATED   LATESTREADY   READY     REASON
hello   http://hello.helloworld.example.com   hello-00001     hello-00001   Unknown   IngressNotConfigured

The kourier service type is LoadBalancer; change it to NodePort:

% kubectl get svc -n kourier-system
NAME               TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE
kourier            LoadBalancer   10.108.41.244   localhost     80:31859/TCP,443:30753/TCP   143m
kourier-internal   ClusterIP      10.96.62.112    <none>        80/TCP                       143m

% kubectl --namespace kourier-system edit service kourier
service/kourier edited

Check it:

% kubectl --namespace kourier-system get service
NAME               TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE
kourier            NodePort    10.108.41.244   <none>        80:31859/TCP,443:30753/TCP   3h13m
kourier-internal   ClusterIP   10.96.62.112    <none>        80/TCP                       3h13m

The problem persists. How it gets solved, stay tuned for the next installment.

Knative series