Description
Is there an existing issue for this?
- [x] I have searched the existing issues
Current Behavior
cat coredns.yaml:

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: coredns
  namespace: kube-system
  labels:
    k8s-app: kube-dns
spec:
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 0
  selector:
    matchLabels:
      k8s-app: kube-dns
  template:
    metadata:
      labels:
        k8s-app: kube-dns
    spec:
      priorityClassName: system-cluster-critical
      serviceAccountName: coredns
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 100
            podAffinityTerm:
              labelSelector:
                matchExpressions:
                - key: k8s-app
                  operator: In
                  values: ["kube-dns"]
              topologyKey: kubernetes.io/hostname
      tolerations:
      - key: CriticalAddonsOnly
        operator: Exists
      - key: node-role.kubernetes.io/control-plane
        effect: NoSchedule
      nodeSelector:
        kubernetes.io/os: linux
      containers:
      - name: coredns
        image: coredns/coredns:1.14.1@sha256:82b57287b29beb757c740dbbe68f2d4723da94715b563fffad5c13438b71b14a
        imagePullPolicy: IfNotPresent
        resources:
          limits:
            memory: 170Mi
          requests:
            cpu: 100m
            memory: 70Mi
        args: ["-conf", "/etc/coredns/Corefile"]
        volumeMounts:
        - name: config-volume
          mountPath: /etc/coredns
          readOnly: true
        ports:
        - containerPort: 53
          name: dns
          protocol: UDP
        - containerPort: 53
          name: dns-tcp
          protocol: TCP
        - containerPort: 9153
          name: metrics
          protocol: TCP
        livenessProbe:
          httpGet:
            path: /health
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        readinessProbe:
          httpGet:
            path: /ready
            port: 8181
            scheme: HTTP
        securityContext:
          allowPrivilegeEscalation: false
          capabilities:
            add:
            - NET_BIND_SERVICE
            drop:
            - ALL
          readOnlyRootFilesystem: true
      dnsPolicy: Default
      volumes:
      - name: config-volume
        configMap:
          name: coredns
          items:
          - key: Corefile
            path: Corefile
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
data:
  Corefile: |
    .:53 {
        errors
        health {
            lameduck 5s
        }
        ready
        kubernetes cluster.local in-addr.arpa ip6.arpa {
            pods insecure
            fallthrough in-addr.arpa ip6.arpa
            ttl 30
        }
        prometheus :9153
        forward . /etc/resolv.conf {
            max_concurrent 1000
        }
        cache 30 {
            disable success cluster.local
            disable denial cluster.local
        }
        loop
        reload
        loadbalance
    }
---
apiVersion: v1
kind: Service
metadata:
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    kubernetes.io/name: "CoreDNS"
  name: kube-dns
  namespace: kube-system
  annotations:
    prometheus.io/port: "9153"
    prometheus.io/scrape: "true"
spec:
  clusterIP: 10.10.0.10
  # let cluster DNS be served by topologically close endpoints
  trafficDistribution: PreferClose
  ports:
  - name: dns
    port: 53
    protocol: UDP
    targetPort: 53
  - name: dns-tcp
    port: 53
    protocol: TCP
    targetPort: 53
  - name: metrics
    port: 9153
    protocol: TCP
    targetPort: 9153
  selector:
    k8s-app: kube-dns
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: system:coredns
rules:
- apiGroups:
  - ""
  resources:
  - endpoints
  - services
  - pods
  - namespaces
  verbs:
  - list
  - watch
- apiGroups:
  - discovery.k8s.io
  resources:
  - endpointslices
  verbs:
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: system:coredns
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:coredns
subjects:
- kind: ServiceAccount
  name: coredns
  namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: coredns
```

Pipeline output:

```
Pipeline ID :
source: coredns
Searching for version matching pattern ">=1.14.1"
✔ Docker Image Tag "1.14.1" found matching pattern ">=1.14.1"
source: coredns-digest
✔ Docker Image Tag coredns/coredns:1.14.1 resolved to digest index.docker.io/coredns/coredns:1.14.1@sha256:82b57287b29beb757c740dbbe68f2d4723da94715b563fffad5c13438b71b14a
target: coredns
[transformers]
✔ Result correctly transformed from "1.14.1@sha256:82b57287b29beb757c740dbbe68f2d4723da94715b563fffad5c13438b71b14a" to "coredns/coredns:1.14.1@sha256:82b57287b29beb757c740dbbe68f2d4723da94715b563fffad5c13438b71b14a"
ERROR: couldn't find key "$.spec.template.spec.containers[0].image" from file "coredns.yaml"
ERROR: couldn't find key "$.spec.template.spec.containers[0].image" from file "coredns.yaml"
ERROR: couldn't find key "$.spec.template.spec.containers[0].image" from file "coredns.yaml"
ERROR: couldn't find key "$.spec.template.spec.containers[0].image" from file "coredns.yaml"
ERROR: couldn't find key "$.spec.template.spec.containers[0].image" from file "coredns.yaml"
✗ Something went wrong
ERROR: something went wrong in "target#coredns" : updating yaml file: key not found from file "coredns.yaml"
Pipeline "deps: bump container image "coredns"" failed
Skipping due to:
something went wrong during target execution

PUSHING GIT CHANGES
No SCM repositories have changes to push

ACTIONS
=============================
SUMMARY:
✔ Local AutoDiscovery:
✗ deps: bump container image "coredns":
    Source:
        ✔ [coredns] get latest container image tag for "coredns/coredns"
        ✔ [coredns-digest] get latest container image digest for "coredns/coredns:1.14.1"
    Target:
        ✗ [coredns] deps: bump container image digest for "coredns/coredns:1.14.1"
```
Expected Behavior
Works correctly without errors.
Steps To Reproduce
- Use a multi-document manifest like the coredns.yaml above
- Run `updatecli apply`
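For context, the failing target was presumably declared along these lines; this is a sketch reconstructed from the log output, so the exact names and the version pattern are assumptions, not the reporter's actual manifest:

```yaml
# updatecli.yaml — hypothetical reconstruction of the failing pipeline
name: 'deps: bump container image "coredns"'

sources:
  coredns:
    name: get latest container image tag for "coredns/coredns"
    kind: dockerimage
    spec:
      image: coredns/coredns
      versionfilter:
        kind: semver
        pattern: ">=1.14.1"
  coredns-digest:
    name: get latest container image digest for "coredns/coredns:1.14.1"
    kind: dockerdigest
    spec:
      image: coredns/coredns
      tag: '{{ source "coredns" }}'

targets:
  coredns:
    name: deps: bump container image digest for "coredns/coredns"
    kind: yaml
    sourceid: coredns-digest
    transformers:
      - addprefix: "coredns/coredns:"
    spec:
      file: coredns.yaml
      # this key exists only in the first document of the multi-document file
      key: $.spec.template.spec.containers[0].image
```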
Environment
- OS: Windows 11
- updatecli: v0.113.0

Pipeline Graph

```mermaid
graph TD
    source#coredns-digest(["get latest container image digest for #quot;coredns/coredns:1.14.1#quot; (dockerdigest)"])
    source#coredns-digest --> target#coredns
    target#coredns("deps: bump container image digest for #quot;coredns/coredns:{{ source #quot;coredns#quot; }}#quot; (yaml)")
    source#coredns(["get latest container image tag for #quot;coredns/coredns#quot; (dockerimage)"])
    source#coredns --> target#coredns
    source#coredns --> source#coredns-digest
```
Anything else?
Version 0.112.0 works correctly.
Version 0.113.0 works correctly only with a single-document YAML file.
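The single-document vs. multi-document distinction can be reproduced with a hypothetical minimal file: the key `$.spec.template.spec.containers[0].image` resolves when the first document stands alone, but the lookup fails in v0.113.0 as soon as a second document follows the `---` separator:

```yaml
# minimal.yaml — hypothetical two-document reproducer;
# the target key exists only in the first document
apiVersion: apps/v1
kind: DaemonSet
spec:
  template:
    spec:
      containers:
      - image: coredns/coredns:1.14.1
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
```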