CVE-2019-9946 Cloud Native Computing Foundation (CNCF) CNI (Container Networking Interface) 0.7.4 has a network firewall misconfiguration that affects Kubernetes. The CNI 'portmap' plugin, used to set up HostPorts for CNI, inserts rules at the front of the iptables nat chains, which take precedence over the KUBE-SERVICES chain. Because of this, the HostPort/portmap rule could match incoming traffic even if there were better-fitting, more specific service definition rules, such as NodePorts, later in the chain. The issue is fixed in CNI 0.7.5 and Kubernetes 1.11.9, 1.12.7, 1.13.5, and 1.14.0.
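To check whether a node is affected by this ordering problem, one can inspect the nat table on the node directly. A minimal sketch (the chain names are assumptions based on common portmap-plugin and kube-proxy defaults, not taken from the advisory):
<pre><code># Show PREROUTING rule order; on a patched cluster the portmap plugin's
# CNI-HOSTPORT-DNAT jump should no longer be inserted ahead of KUBE-SERVICES.
sudo iptables -t nat -L PREROUTING --line-numbers -n
# Inspect the HostPort and Service chains themselves (chain names are assumptions).
sudo iptables -t nat -L CNI-HOSTPORT-DNAT -n 2>/dev/null
sudo iptables -t nat -L KUBE-SERVICES -n | head
</code></pre>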
15554296360000000
CVE-2019-1002100 In all Kubernetes versions prior to v1.11.8, v1.12.6, and v1.13.4, users that are authorized to make patch requests to the Kubernetes API Server can send a specially crafted patch of type "json-patch" (e.g. `kubectl patch --type json` or `"Content-Type: application/json-patch+json"`) that consumes excessive resources while processing, causing a Denial of Service on the API Server.
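Since the precondition for this DoS is patch permission against the API server, a quick way to audit exposure is to check who can patch and what version the server reports. A hedged sketch (the service account subject is a placeholder):
<pre><code># Can a given principal send patch requests? (placeholder subject)
kubectl auth can-i patch deployments --as=system:serviceaccount:ci:deployer
# Confirm the server is at or above one of the fixed releases listed above.
kubectl version
</code></pre>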
15554296360000000
CVE-2018-1999040 An exposure of sensitive information vulnerability exists in Jenkins Kubernetes Plugin 1.10.1 and earlier in KubernetesCloud.java that allows attackers to capture credentials with a known credentials ID stored in Jenkins.
15554296360000000
CVE-2018-1002103 In Minikube versions 0.3.0-0.29.0, minikube exposes the Kubernetes Dashboard listening on the VM IP at port 30000. In VM environments where the IP is easy to predict, an attacker can use DNS rebinding to indirectly make requests to the Kubernetes Dashboard and create a new Kubernetes Deployment running arbitrary code. If minikube mount is in use, the attacker could also directly access the host filesystem.
15554296360000000
CVE-2018-1000187 An exposure of sensitive information vulnerability exists in Jenkins Kubernetes Plugin 1.7.0 and older in ContainerExecDecorator.java that results in sensitive variables such as passwords being written to logs.
15554296360000000
CVE-2017-1002100 Default access permissions for Persistent Volumes (PVs) created by the Kubernetes Azure cloud provider in versions 1.6.0 to 1.6.5 are set to "container" which exposes a URI that can be accessed without authentication on the public internet. Access to the URI string requires privileged access to the Kubernetes cluster or authenticated access to the Azure portal.
15554296360000000
CVE-2016-1906 Openshift allows remote attackers to gain privileges by updating a build configuration that was created with an allowed type to a type that is not allowed.
15554296360000000
CVE-2015-5305 Directory traversal vulnerability in Kubernetes, as used in Red Hat OpenShift Enterprise 3.0, allows attackers to write to arbitrary files via a crafted object type name, which is not properly handled before passing it to etcd.
15554296360000000
[] [domhalps] [] [sbucloud] [] [apalia]
@domhalps @SBUCloud: Pierre Vacherand #CTO @Apalia Switzerland talks about the customer's benefits from a full stack #Automation for #containers I…
15559393430000000
[] [markdeneve]
@markdeneve Another week, another blog post... Using oc client or kubectl client to generate ad-hoc reports with the go-templat… https://t.co/8NarXnWblJ
15559393430000000
[] [sbucloud] [] [social_4u] [] [datamattsson]
@SBUCloud @Social_4U: Would you like to take advantage of policy-based provisioning for persistent volume in ? Join @datamattsson at #…
15559393430000000
[] [social_4u]
@Social_4U Would you like to take advantage of policy-based provisioning for persistent volume in ? Join… https://t.co/7lxzk6NFwM
15559393430000000
[] [crypto___touch] [] [rhdevelopers]
@Crypto___Touch @rhdevelopers: Whether you're still learning or an experienced or #Kubernetes application #developer, add this #YAML extensio…
15559393430000000
[] [fleischmantweet] [] [sbucloud]
@Fleischmantweet @SBUCloud: Learn How to #Backup & Restore your #RedHat Environment and How #HPE can help to ease the process during #RHSummit…
15559393430000000
[] [polarbear_pc] [] [nadhaneg]
@polarbear_pc @NadhanEG: #RHSummit Track Guide for Emerging Technology :: #CTO Chris Wright :: #AI #ML on :: #Edge -- where #5G meets #IoT…
15559393430000000
[] [edgeiotai] [] [nadhaneg]
@EdgeIotAi @NadhanEG: #RHSummit Track Guide for Emerging Technology :: #CTO Chris Wright :: #AI #ML on :: #Edge -- where #5G meets #IoT…
15559393430000000
[] [tech_halcyon] [] [chennai]
@tech_halcyon #Docker & #Kubernetes Weekend Classroom Training scheduled on 27th & 28th April @Chennai hurry up to enroll… https://t.co/TUaIjREHa3
15559393430000000
[] [akshayg196] [] [rhdevelopers]
@AkshayG196 @rhdevelopers: Whether you're still learning or an experienced or #Kubernetes application #developer, add this #YAML extensio…
15559393430000000
[] [smartecocity] [] [openshift]
@smartecocity @openshift: How does the partnership between #RedHat and Atos-managed push innovation and solve smart cities problems? Find o…
15559393430000000
[] [garryjgray]
@GarryJGray Are you attending #RedHatSummit in Boston? Red Hat will be talking . Please find below website to find ou… https://t.co/BQl21F1sxy
15559393430000000
[] [spbreed] [] [openshift]
@spbreed Deploying Applications to Multiple #Datacenters https://t.co/wkRIUclo0r via @openshift
15559393430000000
[] [dimarzo_chad] [] [openshift]
@dimarzo_chad @openshift: Planning your Red Hat Summit experience around ? We’ve made it simple for you to find all the best sessions & acti…
15559393430000000
[] [suravarjjala] [] [couchbase] [] [redhat] [] [couchbase]
@suravarjjala @couchbase: Virtual event: Join @RedHat this Friday for a @Couchbase on Openshift demo; learn how to run Couchbase deployments natively…
15559393430000000
[] [vtunka] [] [openshift]
@vtunka @openshift: Planning your Red Hat Summit experience around ? We’ve made it simple for you to find all the best sessions & acti…
15559393430000000
[] [javapsyche] [] [couchbase] [] [redhat] [] [couchbase]
@javapsyche @couchbase: Virtual event: Join @RedHat this Friday for a @Couchbase on Openshift demo; learn how to run Couchbase deployments natively…
15559393430000000
[] [bentonam] [] [couchbase] [] [redhat] [] [couchbase]
@bentonam @couchbase: Virtual event: Join @RedHat this Friday for a @Couchbase on Openshift demo; learn how to run Couchbase deployments natively…
15559393430000000
[] [cbleoschuman] [] [couchbase] [] [redhat] [] [couchbase]
@cbleoschuman @couchbase: Virtual event: Join @RedHat this Friday for a @Couchbase on Openshift demo; learn how to run Couchbase deployments natively…
15559393430000000
[] [anilkumar1129] [] [couchbase] [] [redhat] [] [couchbase]
@anilkumar1129 @couchbase: Virtual event: Join @RedHat this Friday for a @Couchbase on Openshift demo; learn how to run Couchbase deployments natively…
15559393430000000
[] [perrykrug] [] [couchbase] [] [redhat] [] [couchbase]
@perrykrug @couchbase: Virtual event: Join @RedHat this Friday for a @Couchbase on Openshift demo; learn how to run Couchbase deployments natively…
15559393430000000
[] [agonyou] [] [couchbase] [] [redhat] [] [couchbase]
@agonyou @couchbase: Virtual event: Join @RedHat this Friday for a @Couchbase on Openshift demo; learn how to run Couchbase deployments natively…
15559393430000000
[] [gummybaren] [] [couchbase] [] [redhat] [] [couchbase]
@gummybaren @couchbase: Virtual event: Join @RedHat this Friday for a @Couchbase on Openshift demo; learn how to run Couchbase deployments natively…
15559393430000000
<p>I am trying to schedule a pod on my local <a href="https://github.com/ubuntu/microk8s" rel="nofollow noreferrer">microk8s</a> cluster. In the events section I see the warning <code>0/1 nodes are available 1 node(s) had diskpressure</code>. How do I check how much space the node has, and how do I set a bigger value?</p>0/1 nodes are available 1 node(s) had diskpressure<p>I was testing my Kubernetes services recently and found them very unreliable. Here is the situation:<br>
15581188260000000
<pre><code>apiVersion: extensions/v1beta1
15581188260000000
annotations:
15581188260000000
servicePort: 80
15581188260000000
<pre><code>NAME READY STATUS RESTARTS AGE
15581188260000000
</code></pre>
15581188260000000
neo4j ClusterIP None <none> 7474/TCP,6362/TCP 20h
15581188260000000
<p>What should I do to access the dashboard?</p>
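<p>A common way to reach the dashboard without exposing it externally is through the API server proxy or a port-forward. A sketch assuming the dashboard runs as the usual service in <code>kube-system</code> (adjust namespace and service name to your install):</p>
<pre><code># Option 1: API server proxy
kubectl proxy &
# then browse http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/
# Option 2: forward the service port locally (service name assumed)
kubectl -n kube-system port-forward svc/kubernetes-dashboard 8443:443
</code></pre>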
15581188260000000
Fri May 04 11:08:06 UTC 2018
15581188260000000
<pre><code>server {
15581188260000000
set $proxy_upstream_name "-";
15581188260000000
ssl_certificate /ingress-controller/ssl/default-payday.pem;
15581188260000000
ssl_stapling_verify on;
15581188260000000
port_in_redirect off;
15581188260000000
set $service_name "webapp-svc";
15581188260000000
client_max_body_size "1m";
15581188260000000
proxy_set_header ssl-client-verify "";
15581188260000000
proxy_set_header Connection $connection_upgrade;
15581188260000000
proxy_set_header X-Forwarded-Port $pass_port;
15581188260000000
# Pass the original X-Forwarded-For
15581188260000000
proxy_set_header Proxy "";
15581188260000000
proxy_read_timeout 60s;
15581188260000000
proxy_request_buffering "on";
15581188260000000
# In case of errors try the next upstream server before returning an error
15581188260000000
</code></pre>
15581188260000000
metadata:
15581188260000000
kubernetes.io/ingress.class: "nginx"
15581188260000000
- my.domain.com
15581188260000000
serviceName: springboot-service
15581188260000000
<pre><code>kind: ConfigMap
15581188260000000
namespace: ingress-nginx
15581188260000000
force-ssl-redirect: "true"
15581188260000000
<p><strong>service.yaml</strong></p>
15581188260000000
name: ingress-nginx
15581188260000000
annotations:
15581188260000000
type: LoadBalancer
15581188260000000
- name: https
15581188260000000
</code></pre>
15581188260000000
</blockquote>401 Error for google authentication with Spring boot + spring security behind nginx ingress on kubernetes cluster<p>I've been following the Kubernetes The Hard Way tutorial, but using on-prem hardware for it instead. I also updated to the v1.13.2 release instead of v1.12.0, which the tutorial is based on.</p>
15581188260000000
<p>The tutorial does the healthz check by having an nginx instance fronting the API server, which connects to the API server using TLS. The only reason to do this is because the GCP load balancer needs a non-TLS endpoint for the health check. I don't see why using curl directly with TLS shouldn't work. Has something changed in terms of default permissions between the v1.12.0 and v1.13.2 releases?</p>
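<p>For reference, hitting <code>/healthz</code> directly over TLS with the admin client certificate sidesteps the anonymous-access question entirely. A sketch using Kubernetes The Hard Way's file names (adjust paths and the API server address as needed):</p>
<pre><code>curl --cacert ca.pem --cert admin.pem --key admin-key.pem \
  https://127.0.0.1:6443/healthz
</code></pre>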
15581188260000000
<p>curl will just spit out the usual 401 message.</p><p>I'm new to ingress controller.
15581188260000000
kind: Service
15581188260000000
selector:
15581188260000000
<p><code>kubectl describe ingress</code> returns:</p>
15581188260000000
Default backend: nginx:80 (10.1.0.123:80,10.1.0.124:80,10.1.0.125:80 + 1 more...)
15581188260000000
/v nginx:80 (10.1.0.123:80,10.1.0.124:80,10.1.0.125:80 + 1 more...)
15581188260000000
<p>When running <code>curl http://localhost:30874/v/version.html -H "host: foo.bar.com"</code> I get 403 error and the ingress-control pod says:</p>
15581188260000000
<p>I created ServiceEntry type object to whitelist <code>metadata.google.internal</code>, as follows (have tried different combos of this):</p>
15581188260000000
name: google-metadata-server
15581188260000000
location: MESH_EXTERNAL
15581188260000000
protocol: HTTP
15581188260000000
<pre><code>[2019-02-07T15:29:22.834Z] "GET /computeMetadata/v1/project/project-idHTTP/1.1" 200 - 0 14 2 1 "-" "Google-HTTP-Java-Client/1.27.0 (gzip)" "513f6e25-57ce-4cf0-a273-d391b3da604b" "metadata.google.internal" "169.254.169.254:80" outbound|80||metadata.google.internal - 169.254.169.254:80 10.16.0.29:58790
15581188260000000
[2019-02-07T15:29:47.781Z] "GET /computeMetadata/v1/project/project-idHTTP/1.1" 200 - 0 14 4 3 "-" "Google-HTTP-Java-Client/1.27.0 (gzip)" "7115bf46-e7e9-4b2f-ba37-10cd6b8c9dea" "metadata.google.internal" "169.254.169.254:80" outbound|80||metadata.google.internal - 169.254.169.254:80 10.16.0.29:58876
15581188260000000
<p>What I did is to create a simple deployment with Istio, in the same cluster, same namespace, and telnet the metadata server manually:</p>
15581188260000000
Escape character is '^]'.
15581188260000000
metadata-flavor: Google
15581188260000000
content-length: 22
15581188260000000
<p>I've been experimenting and building my deployment using minikube and I have created a yaml file that will successfully deploy everything locally on minikube without error. You can see the full deployment yaml file here: <a href="https://github.com/mwinteringham/restful-booker-platform/blob/kubes/kubes/deploy.yml" rel="nofollow noreferrer">https://github.com/mwinteringham/restful-booker-platform/blob/kubes/kubes/deploy.yml</a></p>
15581188260000000
kind: Ingress
15581188260000000
serviceName: rbp-booking
15581188260000000
serviceName: rbp-room
15581188260000000
serviceName: rbp-search
15581188260000000
serviceName: rbp-ui
15581188260000000
serviceName: rbp-auth
15581188260000000
serviceName: rbp-report
15581188260000000
serviceName: rbp-ui
15581188260000000
[] [google_cloud]
<p>My image is hello-world project with node + express + google cloud client libray <code>@google-cloud/language</code></p>
15581188260000000
<blockquote>
15581188260000000
a5386aa0f20d: Pushing
15581188260000000
9dfa40a0da3b: Layer already exists error parsing HTTP 408 response
15581188260000000
auto 0;max-width:390px;min-height:180px;padding:30px 0 15px}</em> >
15581188260000000
[] [media]
img{border:0}@media screen and
15581188260000000
no-repeat 0% 0%/100%
15581188260000000
no-repeat;-webkit-background-size:100%
15581188260000000
request. That’s all we know.\n"</p>
15581188260000000
<p>Do you have some idea ? </p>
15581188260000000
<pre><code>kind: ConfigMap
15581188260000000
proxy-read-timeout: "600"
15581188260000000
body-size: "64m"
15581188260000000
name: nginx-configuration
15581188260000000
</code></pre>
15581188260000000
client_max_body_size "1m";
15581188260000000
client_max_body_size "1m";
15581188260000000
client_max_body_size "1m";
15581188260000000
</code></pre>
15581188260000000
<pre><code>resource "google_container_cluster" "k8s" {
15581188260000000
master_auth {
15581188260000000
<p>My provider.tf looks like this:</p>
15581188260000000
data "vault_generic_secret" "google" {
15581188260000000
region = "us-east1"
15581188260000000
<p>Now, my issue is that when I do a <code>terraform apply</code>, it continues to run over and over until it eventually fails with a 500 error. Here's what the debug logs look like:</p>
15581188260000000
2019-01-09T14:35:07.384-0500 [DEBUG] plugin.terraform-provider-google_v1.20.0_x4.exe: Host: container.googleapis.com
15581188260000000
2019-01-09T14:35:07.384-0500 [DEBUG] plugin.terraform-provider-google_v1.20.0_x4.exe: Accept-Encoding: gzip
15581188260000000
2019-01-09T14:35:07.384-0500 [DEBUG] plugin.terraform-provider-google_v1.20.0_x4.exe: "binaryAuthorization": {
15581188260000000
2019-01-09T14:35:07.384-0500 [DEBUG] plugin.terraform-provider-google_v1.20.0_x4.exe: "legacyAbac": {
15581188260000000
2019-01-09T14:35:07.384-0500 [DEBUG] plugin.terraform-provider-google_v1.20.0_x4.exe: "password": "****",
15581188260000000
2019-01-09T14:35:07.384-0500 [DEBUG] plugin.terraform-provider-google_v1.20.0_x4.exe: "network": "projects/ProjectName/global/networks/default",
15581188260000000
2019-01-09T14:35:07.384-0500 [DEBUG] plugin.terraform-provider-google_v1.20.0_x4.exe: "https://www.googleapis.com/auth/logging.write",
15581188260000000
2019-01-09T14:35:07.384-0500 [DEBUG] plugin.terraform-provider-google_v1.20.0_x4.exe: "https://www.googleapis.com/auth/trace.append"
15581188260000000
2019-01-09T14:35:07.384-0500 [DEBUG] plugin.terraform-provider-google_v1.20.0_x4.exe: }
15581188260000000
2019-01-09T14:35:07.521-0500 [DEBUG] plugin.terraform-provider-google_v1.20.0_x4.exe: ---[ RESPONSE ]--------------------------------------
15581188260000000
2019-01-09T14:35:07.521-0500 [DEBUG] plugin.terraform-provider-google_v1.20.0_x4.exe: Content-Type: application/json; charset=UTF-8
15581188260000000
2019-01-09T14:35:07.521-0500 [DEBUG] plugin.terraform-provider-google_v1.20.0_x4.exe: Vary: X-Origin
15581188260000000
2019-01-09T14:35:07.522-0500 [DEBUG] plugin.terraform-provider-google_v1.20.0_x4.exe: X-Xss-Protection: 1; mode=block
15581188260000000
2019-01-09T14:35:07.522-0500 [DEBUG] plugin.terraform-provider-google_v1.20.0_x4.exe: "code": 500,
15581188260000000
2019-01-09T14:35:07.522-0500 [DEBUG] plugin.terraform-provider-google_v1.20.0_x4.exe: "message": "Internal error encountered.",
15581188260000000
2019-01-09T14:35:07.522-0500 [DEBUG] plugin.terraform-provider-google_v1.20.0_x4.exe: ],
15581188260000000
2019-01-09T14:35:07.522-0500 [DEBUG] plugin.terraform-provider-google_v1.20.0_x4.exe:
15581188260000000
<p>The real error looks like this</p>
15581188260000000
HTTP/2.0 500 Internal Server Error
15581188260000000
Date: Wed, 09 Jan 2019 19:35:07 GMT
15581188260000000
Vary: Referer
15581188260000000
"error": {
15581188260000000
"message": "Internal error encountered.",
15581188260000000
</code></pre>
15581188260000000
<pre><code>#---
15581188260000000
name: myservice
15581188260000000
# Port to forward to inside the pod
15581188260000000
metadata:
15581188260000000
template:
15581188260000000
imagePullPolicy: Always
15581188260000000
- name: regcred
15581188260000000
kind: Ingress
15581188260000000
servicePort: 80
15581188260000000
<p>Does anyone have an idea why it happened, or did I do something wrong with the configuration? Thank you in advance!</p><p>I'm on Google Kubernetes Cloud and the whole uploads folder is mounted to Google Cloud Storage using GCSFuse. I am using an nginx server (on alpine).</p>
15581188260000000
<p>No updates/changes have been made to the cluster or pods. All pods are green and working and all hosts are healthy. Everything had been working for more than 2 weeks until this morning.</p>
15581188260000000
<p>I am not able to access the Jenkins x dashboard. </p>
15581188260000000
<p><em>note:</em> I have tried with restarting minikube cluster also.</p><p>I'm following <a href="http://kubernetes.io/docs/getting-started-guides/aws/" rel="nofollow noreferrer">this guide</a> to set up Kubernetes on an Ubuntu 14.04 image on AWS.</p>
15581188260000000
aws configure # enter credentials, etc.
15581188260000000
export KUBE_AWS_ZONE=us-east-1b
15581188260000000
export AWS_S3_BUCKET=my.s3.bucket.kube
15581188260000000
curl -sS https://get.k8s.io | bash
15581188260000000
Downloading kubernetes release v1.2.4 to /home/ubuntu/kubernetes.tar.gz
15581188260000000
HTTP request sent, awaiting response... 200 OK
15581188260000000
2016-05-21 17:01:29 (58.1 MB/s) - ‘kubernetes.tar.gz’ saved [496696744/496696744]
15581188260000000
... calling verify-prereqs
15581188260000000
+++ Staging server tars to S3 Storage: my.s3.bucket.kube/devel
15581188260000000
<p>I tried editing <code>cluster/aws/util.sh</code> to print out <code>s3_bucket_location</code> (following advice from <a href="https://stackoverflow.com/questions/35664787/nosuchbucket-error-when-running-kubernetes-on-aws">this question</a>), and I get an empty string. I'm guessing that's why it fails?</p>
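<p>One way to confirm that guess is to query the bucket's location directly with the AWS CLI; note that for us-east-1 the API legitimately returns an empty <code>LocationConstraint</code>, which a script may mishandle (bucket name taken from the variables above):</p>
<pre><code>aws s3api get-bucket-location --bucket my.s3.bucket.kube
aws s3 ls s3://my.s3.bucket.kube
</code></pre>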
15581188260000000
<p>Following the examples from the <code>dask-kubernetes</code> docs I got a <code>kube</code> cluster running on AWS and (on a separate AWS machine) started a <code>notebook</code> with the local <code>dask.distributed</code> scheduler. The scheduler launches a number of workers on the <code>kube</code> cluster, but it can not connect to said workers because the workers are on a different network: the internal <code>kube</code> network.</p>
15581188260000000
<li><code>kube</code> cluster EC2 instances also on 192.168.0.0/24</li>
15581188260000000
<p>The workers are able to connect to the scheduler, but in the scheduler I get errors of the form </p>
15581188260000000
<p>I'm not looking for a list of possible things I <em>could</em> do, I'm looking for the <em>recommended</em> way of setting this up, specifically in relation to <code>dask.distributed</code>.</p>
15581188260000000
<p>After the test completes, I use drone again to build & push several docker images of about ~40mb each to us.gcr.io</p>
15581188260000000
time="2018-03-19T03:31:17.208009069Z" level=error msg="Upload failed, retrying: net/http: HTTP/1.x transport connection broken: write tcp w.x.y.z:39662->z.y.x.w:443: write: broken pipe"
15581188260000000
time="2018-03-19T03:31:23.432621075Z" level=error msg="Upload failed, retrying: unexpected EOF"
15581188260000000
<p>Here are the docker commands being ran. Obviously the sensitive data has been omitted.</p>
15581188260000000
</code></pre>
15581188260000000
<p>Is there something I could do with docker daemon or kubernetes network settings or something to mitigate this? At the very least I want to understand why this is happening.</p>
15581188260000000
<p>This doesn't even require Kubernetes to happen!</p>
15581188260000000
<p>Is there no way to set the name of a chart you are targeting with <code>upgrade</code>? Is this only possible for <code>install</code>?</p>`helm upgrade --name` results in "Error: unknown flag: --name"<p>I set up a 3-node Kubernetes (<code>v1.9.3</code>) cluster on Ubuntu 16.04. </p>
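<p>Regarding the <code>helm upgrade --name</code> error above: <code>upgrade</code> takes the release name as a positional argument; <code>--name</code> is only an <code>install</code> flag. A sketch with placeholder release and chart names:</p>
<pre><code># Upgrade an existing release
helm upgrade my-release ./mychart
# Install it if it does not exist yet, otherwise upgrade
helm upgrade --install my-release ./mychart
</code></pre>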
15581188260000000
</code></pre>
15581188260000000
etcd-master 1/1 Running 0 3m
15581188260000000
kube-flannel-ds-wbx97 1/1 Running 0 1m
15581188260000000
<p>But the problem is that <code>kube-dns</code> seems to have been assigned the wrong service endpoint address; this can be seen with the following commands:</p>
15581188260000000
root@master:~# kubectl describe service kube-dns -n kube-system
15581188260000000
Type: ClusterIP
15581188260000000
Endpoints: 172.17.0.2:53
15581188260000000
Session Affinity: None
15581188260000000
<p>The effect of the current setup is that none of the pods have functioning DNS, while IP communication is OK. </p>
15581188260000000
producer Deployment/producer <unknown>/1% 1 3 1 42m
15581188260000000
kind: Deployment
15581188260000000
kompose.version: 1.1.0 (36652f6)
15581188260000000
name: producer
15581188260000000
template:
15581188260000000
io.kompose.service: producer
15581188260000000
name: producer
15581188260000000
- name: mongoUrl
15581188260000000
- name: mongoPort
15581188260000000
cpu: 10m
15581188260000000
</code></pre>
15581188260000000
Warning FailedGetResourceMetric 4m (x91 over 49m) horizontal-pod-autoscaler missing request for cpu on container producer in pod default/producer-c7dd566f6-69gbq
15581188260000000
<pre><code>{"log":"I0912 10:36:40.806224 1 event.go:218] Event(v1.ObjectReference{Kind:\"HorizontalPodAutoscaler\", Namespace:\"default\", Name:\"producer\", UID:\"135d0ebc-b671-11e8-a19f-080027646864\", APIVersion:\"autoscaling/v2beta1\", ResourceVersion:\"71101\", FieldPath:\"\"}): type: 'Warning' reason: 'FailedGetResourceMetric' missing request for cpu on container producer in pod default/producer-c7dd566f6-w8zcd\n","stream":"stderr","time":"2018-09-12T10:36:40.80645916Z"}
15581188260000000
<pre><code>NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
15581188260000000
<p>myconfig.yaml:</p>
15581188260000000
name: counter
15581188260000000
image: busybox
15581188260000000
<p>then</p>
15581188260000000
<p>The pod appears to be running fine:</p>
15581188260000000
Node: ip-10-0-0-43.ec2.internal/10.0.0.43
15581188260000000
Status: Running
15581188260000000
Container ID: docker://d2dfdb8644b5a6488d9d324c8c8c2d4637a460693012f35a14cfa135ab628303
15581188260000000
Host Port: <none>
15581188260000000
i=0; while true; do echo "$i: $(date)"; i=$((i+1)); sleep 1; done
15581188260000000
Restart Count: 0
15581188260000000
Conditions:
15581188260000000
PodScheduled True
15581188260000000
SecretName: default-token-r6tr6
15581188260000000
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
15581188260000000
Normal Scheduled 16m default-scheduler Successfully assigned counter to ip-10-0-0-43.ec2.internal
15581188260000000
Normal Created 16m kubelet, ip-10-0-0-43.ec2.internal Created container
15581188260000000
<pre><code>kubectl logs counter --follow=true
15581188260000000
<p>And get an error:</p>
15581188260000000
<pre><code>$ kubectl top nodes
15581188260000000
ip-10-43-0-12 362m 18% 2030Mi 55%
15581188260000000
<p>OK, what should I do? Give permissions to the <code>system:node</code> group, I suppose.</p>
15581188260000000
<p>Ok, inspecting cluster role:</p>
15581188260000000
endpoints [] [] [get]
15581188260000000
nodes/status [] [] [patch update]
15581188260000000
pods/eviction [] [] [create]
15581188260000000
<pre><code>kubectl patch clusterrole system:node --type='json' -p='[{"op": "add", "path": "/rules/0", "value":{"apiGroups": [""], "resources": ["services/proxy"], "verbs": ["get", "list", "watch"]}}]'
15581188260000000
Name: system:node
15581188260000000
Resources Non-Resource URLs Resource Names Verbs
15581188260000000
<p>Only way that it works is:</p>
15581188260000000
<pre><code>kind: ClusterRole
15581188260000000
name: top-nodes-watcher
15581188260000000
verbs: ["get", "watch", "list"]
15581188260000000
metadata:
15581188260000000
name: system:node:ip-10-43-0-13
15581188260000000
name: top-nodes-watcher
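<p>Assembled from the fragments above, the full workaround might look like the following sketch (only the names <code>top-nodes-watcher</code> and <code>system:node:ip-10-43-0-13</code> and the verbs are taken from the question; everything else is assumed):</p>
<pre><code>kubectl apply -f - <<'EOF'
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: top-nodes-watcher
rules:
- apiGroups: [""]
  resources: ["nodes"]
  verbs: ["get", "watch", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: top-nodes-watcher
subjects:
- kind: User
  name: system:node:ip-10-43-0-13
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: top-nodes-watcher
  apiGroup: rbac.authorization.k8s.io
EOF
</code></pre>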
15581188260000000
<p>More details:</p>
15581188260000000
</code></pre>
15581188260000000
<p><code>the node was low on resource imagefs</code></p>
15581188260000000
Node: ip-192-168-66-176.eu-west-1.compute.internal/
15581188260000000
Annotations: <none>
15581188260000000
Port: <none>
15581188260000000
memory: 512Mi
15581188260000000
DOCKER_CONFIG: /home/jenkins/.docker/
15581188260000000
_JAVA_OPTIONS: -XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap -Dsun.zip.disableMemoryMapping=true -XX:+UseParallelGC -XX:MinHeapFreeRatio=5 -XX:MaxHeapFreeRatio=10 -XX:GCTimeRatio=4 -XX:AdaptiveSizePolicyWeight=90 -Xms10m -Xmx192m
15581188260000000
JENKINS_URL: http://jenkins:8080
15581188260000000
/home/jenkins/.docker from volume-2 (rw)
15581188260000000
/var/run/secrets/kubernetes.io/serviceaccount from jenkins-token-smvvp (ro)
15581188260000000
Host Port: <none>
15581188260000000
Requests:
15581188260000000
JENKINS_SECRET: 131c407141521c0842f62a69004df926be6cb531f9318edf0885aeb96b0662b4
15581188260000000
GIT_COMMITTER_EMAIL: jenkins-x@googlegroups.com
15581188260000000
JENKINS_NAME: maven-96wmn
15581188260000000
Mounts:
15581188260000000
/root/.m2 from volume-1 (rw)
15581188260000000
volume-0:
15581188260000000
volume-2:
15581188260000000
volume-1:
15581188260000000
workspace-volume:
15581188260000000
Type: Secret (a volume populated by a Secret)
15581188260000000
Type: Secret (a volume populated by a Secret)
15581188260000000
Node-Selectors: <none>
15581188260000000
Type Reason Age From Message
15581188260000000
Normal SuccessfulMountVolume 7m kubelet, ip-192-168-66-176.eu-west-1.compute.internal MountVolume.SetUp succeeded for volume "volume-1"
15581188260000000
Normal Pulled 7m kubelet, ip-192-168-66-176.eu-west-1.compute.internal Container image "jenkinsxio/builder-maven:0.0.516" already present on machine
15581188260000000
Normal Created 7m kubelet, ip-192-168-66-176.eu-west-1.compute.internal Created container
15581188260000000
Normal Killing 5m kubelet, ip-192-168-66-176.eu-west-1.compute.internal Killing container with id docker://maven:Need to kill Pod
15581188260000000
<pre><code>kubectl delete gateway istio-autogenerated-k8s-ingress -n istio-system
15581188260000000
<p>Is it related and if so, how can I set them up again?
15581188260000000
this is the federation kube config file :</p>
15581188260000000
certificate-authority-data: REDACTED
15581188260000000
certificate-authority-data: REDACTED
15581188260000000
insecure-skip-tls-verify: true
15581188260000000
- context:
15581188260000000
name: default-context
15581188260000000
name: federation
15581188260000000
user: kubectl
15581188260000000
namespace: default
15581188260000000
kind: Config
15581188260000000
- name: federation-basic-auth
15581188260000000
- name: kubectl
15581188260000000
- name: kubernetes-admins1
15581188260000000
<p>I run this command: kubefed join site-1 --host-cluster-context=default-context --cluster-context=kubernetes-admin-s1 --insecure-skip-tls-verify=true. The cluster is created but with offline status; it is not reachable.
15581188260000000
Name: site-1
15581188260000000
Creation Timestamp: 2018-04-22T17:37:40Z
15581188260000000
Client CIDR: 0.0.0.0/0
15581188260000000
Last Probe Time: 2018-04-22T18:09:43Z
15581188260000000
Status: True
15581188260000000
<code>$ which ssh-agent || ( apt-get update -y && apt-get install openssh-client -y )
15581188260000000
$ ssh-add <(echo "$SSH_PRIVATE_KEY")
15581188260000000
<p>This will enable my service to (for example) use the Kubernetes client API.</p>
15581188260000000
<p>Logging section in appsettings.json & appsettings.Development.json</p>
15581188260000000
"LogLevel": {
15581188260000000
"Console": {
15581188260000000
"Microsoft": "Information"
15581188260000000
return new WebHostBuilder()
15581188260000000
config.AddJsonFile("appsettings.json", optional: true, reloadOnChange: true)
15581188260000000
if (appAssembly != null)
15581188260000000
config.AddCommandLine(args);
15581188260000000
logging.AddDebug();
15581188260000000
.Build();
15581188260000000
</code></pre><p>I want to make a call to the Kubernetes API from a .NET Core app outside the cluster. </p>
15581188260000000
</code></pre>
15581188260000000
- cluster:
15581188260000000
<p>How can I validate server certificate using that <strong>certificate-authority-data</strong> in my application?</p><p><strong>PLEASE READ UPDATE 2</strong></p>
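<p>For the <strong>certificate-authority-data</strong> question: that field is just a base64-encoded CA bundle, so it can be extracted from the kubeconfig and used to verify the API server's certificate from outside the cluster. A shell sketch (the server address is a placeholder; the resulting PEM file can then be loaded by the application):</p>
<pre><code>kubectl config view --raw \
  -o jsonpath='{.clusters[0].cluster.certificate-authority-data}' | base64 -d > ca.crt
curl --cacert ca.crt https://<api-server-host>:6443/version
</code></pre>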
15581188260000000
<p><strong>C# Code:</strong></p>
15581188260000000
cfg.AddProfile<AiElementProfile>();
15581188260000000
ConsumerGroupName,
15581188260000000
// Registers the Event Processor Host and starts receiving messages
15581188260000000
</code></pre>
15581188260000000
metadata:
15581188260000000
matchLabels:
15581188260000000
metadata:
15581188260000000
containers:
15581188260000000
- containerPort: 80
15581188260000000
<p><strong>kubectl get pods:</strong></p>
15581188260000000
</code></pre>
15581188260000000
Node: aks-nodepool1-81522366-0/10.240.0.4
15581188260000000
Annotations: <none>
15581188260000000
Containers:
15581188260000000
Image ID: docker-pullable://vncont.azurecr.io/historysvc@sha256:636d81435bd421ec92a0b079c3841cbeb3ad410509a6e37b1ec673dc4ab8a444
15581188260000000
Exit Code: 0
15581188260000000
Reason: Completed
15581188260000000
Ready: False
15581188260000000
/var/run/secrets/kubernetes.io/serviceaccount from default-token-mt8mm (ro)
15581188260000000
Ready False
15581188260000000
Type: Secret (a volume populated by a Secret)
15581188260000000
Node-Selectors: <none>
15581188260000000
Type Reason Age From Message
15581188260000000
Normal Created 7s (x5 over 1m) kubelet, aks-nodepool1-81522366-0 Created container
15581188260000000
<p>What am I missing?</p>
15581188260000000
<pre><code>Name: historysvc-deployment-558fc5649f-jgjvq
15581188260000000
Labels: app=historysvc
15581188260000000
IP: 10.244.0.12
15581188260000000
Container ID: docker://ccf83bce216276450ed79d67fb4f8a66daa54cd424461762478ec62f7e592e30
15581188260000000
State: Waiting
15581188260000000
Exit Code: 0
15581188260000000
Restart Count: 277
15581188260000000
Conditions:
15581188260000000
PodScheduled True
15581188260000000
SecretName: default-token-mt8mm
15581188260000000
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
15581188260000000
Warning BackOff 2m (x6238 over 23h) kubelet, aks-nodepool1-81522366-0 Back-off restarting failed container
15581188260000000
<pre><code>docker run <image>
15581188260000000
<pre><code>docker run -it <image>
15581188260000000
<p>$ kubectl get hpa</p>
15581188260000000
</code></pre>
15581188260000000
Server Version: version.Info{Major:"1", Minor:"11+", GitVersion:"v1.11.5-gke.4", GitCommit:"0c81dc1e8c26fa2c47e50072dc7f98923cb2109c", GitTreeState:"clean", BuildDate:"2018-12-07T00:22:06Z", GoVersion:"go1.10.3b4", Compiler:"gc", Platform:"linux/amd64"}
15581188260000000
</code></pre>
15581188260000000
<p>While attempting to register the Microsoft.Compute provider in order
15581188260000000
</blockquote>
15581188260000000
<p><code>az provider register -n Microsoft.Compute</code></p>
15581188260000000
</code></pre>
15581188260000000
</code></pre>
15581188260000000
<li>Microsoft.Storage</li>
15581188260000000
<p><a href="https://i.stack.imgur.com/MN156.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/MN156.png" alt="enter image description here"></a></p>
15581188260000000
<p>Two days ago I performed the identical operations on a client's account successfully and everything finished within 5 minutes. I have tried the following options to solve the issue (thus far with no impact):</p>
15581188260000000
This is where my situation diverges from the above question. When unregistering the component with:
15581188260000000
<p>Will post back with my findings.</p><p>I deploy apps to <em>Kubernetes</em> running on <em>Google Cloud</em> from CI. CI makes use of a <em>kubectl</em> config which contains auth information (either stored directly in CVS or templated from env vars during the build)</p>
15581188260000000
<p><code>gcloud container clusters get-credentials <cluster-name></code></p>
15581188260000000
</blockquote>
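<p>A common alternative to keeping a kubeconfig in CI is to authenticate with a dedicated service account key and let gcloud generate the kubeconfig on the fly; a sketch with placeholder key, cluster, and zone values:</p>
<pre><code>gcloud auth activate-service-account --key-file=ci-sa-key.json
gcloud container clusters get-credentials my-cluster --zone us-central1-a
kubectl get pods
</code></pre>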
15581188260000000
<li><a href="https://stackoverflow.com/questions/48761952/cant-contact-our-azure-aks-kube-tls-handshake-timeout">Can't contact our Azure-AKS kube - TLS handshake timeout</a></li>
15581188260000000
<li><a href="https://github.com/Azure/AKS/issues/177" rel="noreferrer">https://github.com/Azure/AKS/issues/177</a></li>
15581188260000000
<blockquote>
15581188260000000
<p>You can also try scaling your Cluster (assuming that doesn't break your app).</p>
15581188260000000
<li><a href="https://github.com/Azure/AKS/tree/master/annoucements" rel="noreferrer">https://github.com/Azure/AKS/tree/master/annoucements</a></li>
15581188260000000
<p>The first piece I haven't seen mentioned elsewhere is Resource usage on the nodes / vms / instances that are being impacted by the above Kubectl 'Unable to connect to the server: net/http: TLS handshake timeout' issue. </p>
15581188260000000
<p>The drop in utilization and network io correlates strongly with both the increase in disk utilization AND the time period we began experiencing the issue. </p>
15581188260000000
<p><a href="https://i.stack.imgur.com/LVma8.png" rel="noreferrer"><img src="https://i.stack.imgur.com/LVma8.png" alt="enter image description here"></a></p>
15581188260000000
<p>Zimmergren over on GitHub indicates that he has fewer issues with larger instances than he did running bare-bones smaller nodes. This makes sense to me and could indicate that the way the AKS servers divvy up the workload (see next section) could be based on the size of the instances.</p>
15581188260000000
<p><em>An AKS server responsible for more smaller Clusters may possibly get hit more often?</em></p>
15581188260000000
<p>The fact that users (Zimmergren etc above) seem to feel that the Node size impacts the likelihood that this issue will impact you also seems to indicate that node size may relate to the way the sub-region responsibilities are assigned to the sub-regional AKS management servers. </p>
15581188260000000
<h3>Staging Cluster Utilization</h3>
15581188260000000
<p>Both of our Clusters are running identical ingresses, services, pods, containers so it is also unlikely that anything a user is doing causes this problem to crop up.</p>
15581188260000000
<p>In an emergency (ie your production site... like ours... needs to be managed) you can <strong><em>PROBABLY</em></strong> just re-create until you get a working cluster that happens to land on a different AKS management server instance (one that is not impacted) but be aware that this may not happen on your first attempt — AKS cluster re-creation is not exactly instant.</p>
15581188260000000
<blockquote>
15581188260000000
<h2>Why no GKE?</h2>
15581188260000000
<li><a href="https://stackoverflow.com/questions/47481022/tls-handshake-timeout-with-kubernetes-in-gke?rq=1">TLS handshake timeout with kubernetes in GKE</a></li>
15581188260000000
// initialized to an unstable value to ensure meaning isn't attributed to the suffix.
15581188260000000
<p>Is there an advantage of doing this over another method of getting a random number under 12345?</p>unstable value' in reference to time % 12345<p>kube version:1.22</p>
15581188260000000
<li>In minion B,<code>telnet $A_IP 30003</code> success( or <code>nc $A_IP 30003</code>)</li>
15581188260000000
<p>So I think the iptables rules should be cleaned up when kube-proxy exits abnormally?</p>'systemctl stop kube-proxy' will not clean the iptables<p>I am getting some issues while creating a Kubernetes cluster on a Google Cloud instance.</p>
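<p>A hedged note on the kube-proxy question above: kube-proxy ships a cleanup mode that flushes the rules it created, which can be run on the node after an unclean exit (flag name per upstream kube-proxy; verify it exists in your version):</p>
<pre><code>kube-proxy --cleanup
</code></pre>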
15581188260000000
<p>Please see error below from the console:</p>
15581188260000000
</code></pre>
15581188260000000
<p>At first I was getting CORS errors but fixed that by adding lines 48-52 and creating a new service that serves HTTP1.</p>
15581188260000000
kind: Ingress
15581188260000000
servicePort: 5601
15581188260000000
- kibana.test.com
15581188260000000
<pre><code>server {
15581188260000000
set $proxy_upstream_name "-";
15581188260000000
ssl_certificate /etc/ingress-controller/ssl/monitoring-kb-kibana-tls.pem;
15581188260000000
ssl_stapling_verify on;
15581188260000000
# therefore we have to explicitly set this variable again so that when the parent request
15581188260000000
proxy_set_header Content-Length "";
15581188260000000
proxy_set_header X-Sent-From "nginx-ingress-controller";
15581188260000000
proxy_buffering off;
15581188260000000
proxy_http_version 1.1;
15581188260000000
# Pass the extracted client certificate to the auth provider
15581188260000000
set $namespace "monitoring";
15581188260000000
set $location_path "/";
15581188260000000
balancer.log()
15581188260000000
access_log off;
15581188260000000
{"proxy_protocol_addr": "","remote_addr": "xxx.xxx.xxx.xx", "proxy_add_x_forwarded_for": "xxx.xxx.xxx.xx, xxx.xxx.xxx.xx", "remote_user": "", "time_local": "21/Nov/2018:09:53:40 +0000", "request" : "GET /app/kibana HTTP/1.1", "status": "202", "body_bytes_sent": "0", "http_referer": "https://kibana.test.com/", "http_user_agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_13_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/70.0.3538.102 Safari/537.36", "request_length" : "0", "request_time": "0.001", "proxy_upstream_name": "monitoring-kb-kibana-5601", "upstream_addr": "xxx.xxx.xxx.xx:4180", "upstream_response_length": "0", "upstream_response_time": "0.001", "upstream_status": "202", "request_body": "", "http_authorization": ""}
15581188260000000
{"proxy_protocol_addr": "","remote_addr": "xxx.xxx.xxx.xx", "proxy_add_x_forwarded_for": "xxx.xxx.xxx.xx, xxx.xxx.xxx.xx", "remote_user": "", "time_local": "21/Nov/2018:09:53:43 +0000", "request" : "GET /plugins/kibana/assets/settings.svg HTTP/1.1", "status": "202", "body_bytes_sent": "0", "http_referer": "https://kibana.test.com/app/kibana", "http_user_agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_13_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/70.0.3538.102 Safari/537.36", "request_length" : "0", "request_time": "0.029", "proxy_upstream_name": "monitoring-kb-kibana-5601", "upstream_addr": "xxx.xxx.xxx.xx:4180", "upstream_response_length": "0", "upstream_response_time": "0.030", "upstream_status": "202", "request_body": "", "http_authorization": ""}
15581188260000000
{"proxy_protocol_addr": "","remote_addr": "xxx.xxx.xxx.xx", "proxy_add_x_forwarded_for": "xxx.xxx.xxx.xx, xxx.xxx.xxx.xx", "remote_user": "", "time_local": "21/Nov/2018:09:53:45 +0000", "request" : "GET /ui/fonts/open_sans/open_sans_v15_latin_600.woff2 HTTP/1.1", "status": "202", "body_bytes_sent": "0", "http_referer": "https://kibana.test.com/app/kibana", "http_user_agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_13_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/70.0.3538.102 Safari/537.36", "request_length" : "0", "request_time": "0.002", "proxy_upstream_name": "monitoring-kb-kibana-5601", "upstream_addr": "xxx.xxx.xxx.xx:4180", "upstream_response_length": "0", "upstream_response_time": "0.002", "upstream_status": "202", "request_body": "", "http_authorization": ""}
15581188260000000
<p>Is there some config that needs to be applied(via Halyard or otherwise) in order to make the <code>Bake</code> Stage Type available? I'm running Spinnaker version 1.5.3.</p>"Bake" Stage Type not showing in Stage Types Dropdown in Spinnaker Pipeline<p>I am new to Prometheus and relatively new to kubernetes so bear with me, please. I am trying to test Prometheus out and have tried two different approaches. </p>
15581188260000000
ADD prometheus.yml /etc/prometheus/
15581188260000000
scrape_interval: 15s
15581188260000000
- job_name: 'kubernetes-apiservers'
15581188260000000
bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
15581188260000000
</code></pre>
15581188260000000
Failed to list *v1.Endpoints: Get http://localhost:443/api/v1/pods?limit=500&amp;resourceVersion=0: dial tcp 127.0.0.1:443: connect: connection refused"
15581188260000000
<pre><code>kind: Deployment
15581188260000000
template:
15581188260000000
# args:
15581188260000000
- name: webui
15581188260000000
<pre><code>*curl http://localhost:5002/analyst_rating -v
15581188260000000
> Host: localhost:5002
15581188260000000
* HTTP 1.0, assume close after body
15581188260000000
< Server: Werkzeug/0.12.2 Python/2.7.12
15581188260000000
* Closing connection 0*
15581188260000000
* Trying 184.173.44.62...
15581188260000000
> Host: 184.173.44.62:30484
15581188260000000
* Closing connection 0
15581188260000000
I am able to make connections but not able to receive any response.
15581188260000000
Name: sunlife-analystrating-deployment
15581188260000000
MinReadySeconds: 0
15581188260000000
Containers:
15581188260000000
Environment: <none>
15581188260000000
Type Status Reason
15581188260000000
Events: <none>
15581188260000000
Name: kubernetes
15581188260000000
Annotations: <none>
15581188260000000
Port: https 443/TCP
15581188260000000
Events: <none>
15581188260000000
Annotations: <none>
15581188260000000
Port: <unset> 5002/TCP
15581188260000000
Session Affinity: None
15581188260000000
<p>Following is the code snippet, that I have used to expose the rest client inside container</p>
15581188260000000
response="Hello World"
15581188260000000
app.run(port='5002')
15581188260000000
metadata:
15581188260000000
targetPort: 80
15581188260000000
</code></pre>
15581188260000000
Warning CreatingLoadBalancerFailed 52s (x2 over 52s) service-controller Error cr
15581188260000000
401 Code="InvalidAuthenticationTokenTenant" Message="The access token is from the wr
15581188260000000
iption is transferred to another tenant there is no impact to the services, but info
15581188260000000
<p>I tried deleting the service, and now, for every service on the cluster on which I run <code>kubectl describe svc <svc-name></code>, I'm getting the following message in the <code>Events</code> section: </p>
15581188260000000
</blockquote>
15581188260000000
<li>deploy mysql using helm</li>
15581188260000000
</blockquote>
15581188260000000
mysqlDatabase=xxx,persistence.size=50Gi \
15581188260000000
<p>I am deploying my wordpress app and nginx containers in one pod, for mutual persistent volume use. The deployment yaml looks like this:</p>
15581188260000000
metadata:
15581188260000000
app: wordpress
15581188260000000
app: wordpress
15581188260000000
name: nginx
15581188260000000
- name: DB_HOST
15581188260000000
- name: DB_PASSWORD
15581188260000000
key: password
15581188260000000
volumeMounts:
15581188260000000
mountPath: "/etc/nginx/conf.d"
15581188260000000
- name: MY_DB_HOST
15581188260000000
- name: MY_DB_PASSWORD
15581188260000000
key: password
15581188260000000
value: "https://example.com"
15581188260000000
value: "true"
15581188260000000
- name: wordpress-persistent-storage
15581188260000000
persistentVolumeClaim:
15581188260000000
name: wp-config
15581188260000000
imagePullSecrets:
15581188260000000
<p>For reference, my config file is as follows:
15581188260000000
server_name $SITE_URL;
15581188260000000
error_log /var/log/nginx/error.log;
15581188260000000
rewrite .* /index.php;
15581188260000000
fastcgi_pass wordpress:9000;
15581188260000000
fastcgi_param PATH_INFO $fastcgi_path_info;
15581188260000000
<p>Some extra info in case it helps:</p>
15581188260000000
<pre><code> - port: 3306
15581188260000000
<pre><code> - name: wordpress
15581188260000000
targetPort: 80
15581188260000000
targetPort: 443
15581188260000000
<p>I have added the mysql password as a secret with the proper yaml file and base64 value. I have also tried using the command line instead for creating the secret, and both don't change anything in the results. </p>
15581188260000000
<p>MySQL init process in progress... <br>
15581188260000000
Warning: Unable to load '/usr/share/zoneinfo/posix/Factory' as time zone. Skipping it.<br>
15581188260000000
MySQL init process done. Ready for start up.</p>
15581188260000000
<p>[11:15:03 +0000] "GET /robots.txt HTTP/1.1" 500 262 "-" "Mozilla/5.0 (compatible; Googlebot/2.1; +<a href="http://www.google.com/bot.html" rel="nofollow noreferrer">http://www.google.com/bot.html</a>)"<br>
15581188260000000
</blockquote>
15581188260000000
127.0.0.1 - 16:04:42 +0000 "GET /index.php" 200</p>
15581188260000000
</code></pre>
15581188260000000
* Kubernetes 1.9.3
15581188260000000
<p><strong>Incident instance timeline</strong></p>
15581188260000000
<li>Another pod created by a daemon set (fluentd pod) scheduled on the same node as the above one had a slightly different error: network is not ready:[runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready:cni config uninitialized]
15581188260000000
Mar 16 08:29:54 ip-172-20-85-48 kubelet[1346]: I0316 08:29:54.512797 1346 reconciler.go:217] operationExecutor.VerifyControllerAttachedVolume started for volume "authservice-ca" (UniqueName: "kubernetes.io/secret/8ead64a3-28f3-11e8-b520-025c267c6ea8-authservice-ca") pod "broker-0" (UID: "8ead64a3-28f3-11e8-b520-025c267c6ea8")
15581188260000000
Mar 16 08:29:54 ip-172-20-85-48 kubelet[1346]: I0316 08:29:54.512980 1346 reconciler.go:217] operationExecutor.VerifyControllerAttachedVolume started for volume "broker-prometheus-config" (UniqueName: "kubernetes.io/configmap/8ead64a3-28f3-11e8-b520-025c267c6ea8-broker-prometheus-config") pod "broker-0" (UID: "8ead64a3-28f3-11e8-b520-025c267c6ea8")
15581188260000000
Mar 16 08:29:54 ip-172-20-85-48 kubelet[1346]: I0316 08:29:54.613544 1346 reconciler.go:262] operationExecutor.MountVolume started for volume "default-token-vrhqr" (UniqueName: "kubernetes.io/secret/8ead64a3-28f3-11e8-b520-025c267c6ea8-default-token-vrhqr") pod "broker-0" (UID: "8ead64a3-28f3-11e8-b520-025c267c6ea8")
15581188260000000
Mar 16 08:29:54 ip-172-20-85-48 kubelet[1346]: I0316 08:29:54.616720 1346 operation_generator.go:522] MountVolume.SetUp succeeded for volume "broker-prometheus-config" (UniqueName: "kubernetes.io/configmap/8ead64a3-28f3-11e8-b520-025c267c6ea8-broker-prometheus-config") pod "broker-0" (UID: "8ead64a3-28f3-11e8-b520-025c267c6ea8")
15581188260000000
Mar 16 08:29:54 ip-172-20-85-48 kubelet[1346]: I0316 08:29:54.626604 1346 operation_generator.go:522] MountVolume.SetUp succeeded for volume "default-token-vrhqr" (UniqueName: "kubernetes.io/secret/8ead64a3-28f3-11e8-b520-025c267c6ea8-default-token-vrhqr") pod "broker-0" (UID: "8ead64a3-28f3-11e8-b520-025c267c6ea8")
15581188260000000
Mar 16 08:29:56 ip-172-20-85-48 kubelet[1346]: E0316 08:29:56.018024 1346 nestedpendingoperations.go:263] Operation for "\"kubernetes.io/aws-ebs/aws://eu-central-1b/vol-04145a1c9d1a26280\"" failed. No retries permitted until 2018-03-16 08:29:58.017982038 +0000 UTC m=+36.870107444 (durationBeforeRetry 2s). Error: "Volume has not been added to the list of VolumesInUse in the node's volume status for volume \"pvc-b673d6da-26e3-11e8-aa99-02cd3728faaa\" (UniqueName: \"kubernetes.io/aws-ebs/aws://eu-central-1b/vol-04145a1c9d1a26280\") pod \"broker-0\" (UID: \"8ead64a3-28f3-11e8-b520-025c267c6ea8\") "
15581188260000000
Mar 16 08:30:02 ip-172-20-85-48 kubelet[1346]: E0316 08:30:02.034045 1346 nestedpendingoperations.go:263] Operation for "\"kubernetes.io/aws-ebs/aws://eu-central-1b/vol-04145a1c9d1a26280\"" failed. No retries permitted until 2018-03-16 08:30:10.034017896 +0000 UTC m=+48.886143256 (durationBeforeRetry 8s). Error: "Volume has not been added to the list of VolumesInUse in the node's volume status for volume \"pvc-b673d6da-26e3-11e8-aa99-02cd3728faaa\" (UniqueName: \"kubernetes.io/aws-ebs/aws://eu-central-1b/vol-04145a1c9d1a26280\") pod \"broker-0\" (UID: \"8ead64a3-28f3-11e8-b520-025c267c6ea8\") "
15581188260000000
Mar 16 08:30:10 ip-172-20-85-48 kubelet[1346]: I0316 08:30:10.156188 1346 operation_generator.go:446] MountVolume.WaitForAttach entering for volume "pvc-b673d6da-26e3-11e8-aa99-02cd3728faaa" (UniqueName: "kubernetes.io/aws-ebs/aws://eu-central-1b/vol-04145a1c9d1a26280") pod "broker-0" (UID: "8ead64a3-28f3-11e8-b520-025c267c6ea8") DevicePath "/dev/xvdcr"
15581188260000000
Mar 16 08:30:12 ip-172-20-85-48 kubelet[1346]: I0316 08:30:12.672408 1346 kuberuntime_manager.go:385] No sandbox for pod "broker-0(8ead64a3-28f3-11e8-b520-025c267c6ea8)" can be found. Need to start a new one
15581188260000000
Mar 16 08:34:12 ip-172-20-85-48 kubelet[1346]: E0316 08:34:12.673020 1346 pod_workers.go:186] Error syncing pod 8ead64a3-28f3-11e8-b520-025c267c6ea8 ("broker-0(8ead64a3-28f3-11e8-b520-025c267c6ea8)"), skipping: failed to "CreatePodSandbox" for "broker-0(8ead64a3-28f3-11e8-b520-025c267c6ea8)" with CreatePodSandboxError: "CreatePodSandbox for pod \"broker-0(8ead64a3-28f3-11e8-b520-025c267c6ea8)\" failed: rpc error: code = DeadlineExceeded desc = context deadline exceeded"
15581188260000000
Mar 16 08:34:14 ip-172-20-85-48 kubelet[1346]: I0316 08:34:14.005589 1346 kubelet.go:1880] SyncLoop (PLEG): "broker-0(8ead64a3-28f3-11e8-b520-025c267c6ea8)", event: &pleg.PodLifecycleEvent{ID:"8ead64a3-28f3-11e8-b520-025c267c6ea8", Type:"ContainerDied", Data:"b08ea5b45ce3ba467856952ad6cc095f4b796673d7dfbf3b9c4029b6b1a75a1b"}
15581188260000000
<p>Does anyone have any idea what the problem here is and what a remedy would be?</p><p>I'm attempting to deploy a Docker container to a minikube instance running locally, and getting this error when it attempts to pull(?) the image. The image exists in a self-hosted Docker registry. The image I'm testing with is built with the following Dockerfile:</p>
15581188260000000
<p>I'm using the fabric8io <code>kubernetes-client</code> library to create a deployment like so:</p>
15581188260000000
.withNewMetadata()
15581188260000000
.withNewSpec()
15581188260000000
.addToLabels("app", name)
15581188260000000
// "regsecret" is the kubectl-created docker secret
15581188260000000
.withName(name)
15581188260000000
.endTemplate()
15581188260000000
<p>This is all running on Arch Linux, kernel <code>Linux 4.10.9-1-ARCH x86_64 GNU/Linux</code>. Using <code>minikube 0.18.0-1</code> and <code>kubectl-bin 1.6.1-1</code> from the AUR, <code>docker 1:17.04.0-1</code> from the community repositories, and the docker <code>registry</code> container at <code>latest</code> (<code>2.6.1</code> as of writing this). fabric8io <code>kubernetes-client</code> is at version <code>2.2.13</code>. </p>
15581188260000000
<li>that the image can even be pulled. <code>docker pull</code> and <code>docker run</code> on both the host and inside the minikube VM work exactly as expected</li>
15581188260000000
<li>read through the kubernetes source code, as I don't know golang</li>
15581188260000000
Error syncing pod, skipping: failed to "StartContainer" for "YYY" with ImageInspectError: "Failed to inspect image \"registry_domain/XXX/YYY:latest\": Id or size of image \"registry_domain/XXX/YYY:latest\" is not set"
15581188260000000
</code></pre>
15581188260000000
<li>The kubernetes source, which isn't helpful to me</li>
15581188260000000
<p>I self-host the cluster on digitalocean.</p>
15581188260000000
apiVersion: storage.k8s.io/v1
15581188260000000
volumeBindingMode: WaitForFirstConsumer
15581188260000000
name: prometheus-pv-volume
15581188260000000
accessModes:
15581188260000000
hostPath:
15581188260000000
nodeSelectorTerms:
15581188260000000
kind: PersistentVolume
15581188260000000
labels:
15581188260000000
storageClassName: local-storage
15581188260000000
volumeMode: Filesystem
15581188260000000
path: "/grafana-volume"
15581188260000000
- matchExpressions:
15581188260000000
<p>And 2 pvc's using them on a same node. Here is one:</p>
15581188260000000
storageClassName: local-storage
15581188260000000
resources:
15581188260000000
<p>Everything works fine.</p>
15581188260000000
prometheus-pv-volume 100Gi RWO Retain Bound monitoring/prometheus-k8s-db-prometheus-k8s-0 local-storage 16m
15581188260000000
monitoring grafana-storage Bound grafana-pv-volume 1Gi RWO local-storage 10m
15581188260000000
<pre><code>W0302 17:16:07.877212 1 plugins.go:845] FindExpandablePluginBySpec(prometheus-pv-volume) -> err:no volume plugin matched
15581188260000000
ERROR: (gcloud.container.clusters.create) ResponseError: code=403, message=Required "container.clusters.create" permission for "projects/test-project".
15581188260000000
<pre><code>$ gcloud container clusters create myproject --machine-type=n1-standard1# --zone=asia-northeast1-a
15581188260000000
</code></pre>
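<p>For the <code>container.clusters.create</code> permission error above, the account running gcloud needs a role that includes that permission on the project; a sketch with a placeholder member (roles/container.admin covers cluster creation):</p>
<pre><code>gcloud projects add-iam-policy-binding test-project \
  --member=user:you@example.com --role=roles/container.admin
</code></pre>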
15581188260000000
funny-turtle-myservice-xxx-yyy 1/1 Terminating 1 11d
15581188260000000
<h3>try to delete the pod.</h3>
15581188260000000
- also tried with <code>--force --grace-period=0</code>, same outcome with extra warning</p>
15581188260000000
<h3>try to read the logs (kubectl logs ...).</h3>
15581188260000000
<p>So I assume this pod somehow got "disconnected" from the aws API, reasoning from the error message that <code>kubectl logs</code> printed.</p>
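<p>Two last-resort workarounds commonly used for a pod stuck in Terminating are clearing its finalizers or force-deleting it; both bypass normal cleanup, so use with care (pod name taken from the listing above):</p>
<pre><code>kubectl patch pod funny-turtle-myservice-xxx-yyy -p '{"metadata":{"finalizers":null}}'
kubectl delete pod funny-turtle-myservice-xxx-yyy --force --grace-period=0
</code></pre>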
15581188260000000
<p><strong>nginx-basic.conf-</strong></p>
15581188260000000
proxy_pass 35.239.243.201:9200;
15581188260000000
<p>I'm getting an error before I even get to the Couchbase part. I successfully created a resource group (which I called "cb_ask_spike", and yes it does appear on the Portal) from the command line, but then I try to create an AKS cluster:</p>
15581188260000000
<p>In both cases, I get an error:</p>
15581188260000000
<p><a href="https://i.stack.imgur.com/h9gEj.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/h9gEj.png" alt="enter image description here"></a></p>
15581188260000000
const googleCloudErrorReporting = new googleCloud.ErrorReporting();
15581188260000000
</blockquote>
15581188260000000
<blockquote>
15581188260000000
<p>This container is fairly simple:</p>
15581188260000000
image: tmaier/postgresql-client
15581188260000000
[] [db_host]
psql "postgresql://$DB_USER:$DB_PASSWORD@db-host:5432" -c "CREATE DATABASE fusionauth ENCODING 'UTF-8' LC_CTYPE 'en_US.UTF-8' LC_COLLATE 'en_US.UTF-8' TEMPLATE template0"
15581188260000000
<p>As far as I can see, this Kubernetes initContainer runs before the "istio-init" container. Is that the reason why it cannot resolve db-host:5432 to the IP of the pod running the postgres service?</p>
15581188260000000
connections on Unix domain socket "/tmp/.s.PGSQL.5432"?
15581188260000000
<p>I have a feeling that it is similar to <a href="https://stackoverflow.com/questions/44312745/kubernetes-rbac-unable-to-upgrade-connection-forbidden-user-systemanonymous">this issue</a>. But the error message is a tad different.</p>
15581188260000000
I0614 16:50:11.003705 64104 round_trippers.go:398] curl -k -v -XPOST -H "X-Stream-Protocol-Version: v4.channel.k8s.io" -H "X-Stream-Protocol-Version: v3.channel.k8s.io" -H "X-Stream-Protocol-Version: v2.channel.k8s.io" -H "X-Stream-Protocol-Version: channel.k8s.io" -H "User-Agent: kubectl/v1.6.4 (darwin/amd64) kubernetes/d6f4332" https://localhost:6443/api/v1/namespaces/monitoring/pods/alertmanager-main-0/exec?command=%2Fbin%2Fls&amp;container=alertmanager&amp;container=alertmanager&amp;stderr=true&amp;stdout=true
15581188260000000
I0614 16:50:11.169500 64104 round_trippers.go:426] Content-Length: 12
15581188260000000
I0614 16:50:11.169512 64104 round_trippers.go:426] Date: Wed, 14 Jun 2017 08:50:11 GMT
15581188260000000
</code></pre><p>So when I run <code>kubectl get all --all-namespaces</code> on different machines, I get different output and I can't understand why.</p>
15581188260000000
kube-system po/tiller-deploy-78d74d4979-rh7nv 1/1 Running 0 23h
15581188260000000
kube-system service-mesh-traefik-5bb8d58bf6-gfdqd 1/1 Running 0 2d
15581188260000000
<p>What is different? The cluster is the same, so it should be returning the same data. The first machine has kubectl version 1.9.2, the second machine 1.10.0. The cluster is running 1.8.7.</p>"kubectl get all --all-namespaces" has different output against the same cluster<p>Could anybody explain how I can change the version number shown by "kubectl get nodes"? The binaries are compiled from source. "kubectl version" shows the correct version, but "kubectl get nodes" does not.</p>
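<p>A hedged note on the version question: the VERSION column of <code>kubectl get nodes</code> reports each node's kubelet version, not the client or API server binary, so the kubelet on every node has to be replaced and restarted for that column to change:</p>
<pre><code>kubectl get nodes -o wide   # VERSION here is the kubelet version reported by each node
kubectl version             # client and server versions
</code></pre>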
15581188260000000
<p>And here is what I get from <code>kubectl get nodes</code>:</p>
15581188260000000
<p>This script will eventually use ...release-1.2/cluster/ubuntu/download-release.sh to download the binaries. I commented out the call to download-release.sh and put my own binaries, compiled from the up-to-date sources, into the ubuntu/binaries folder. </p>
15581188260000000
KUBELET_HOSTNAME="--hostname-override=centos-minion"
15581188260000000
<p>When the kubelet service is started, the following logs can be seen </p>
15581188260000000
</blockquote>
15581188260000000
KUBELET_PORT="--kubelet-port=10250"
15581188260000000
<pre><code>KUBE_ETCD_SERVERS="--etcd-servers=http://centos-master:2379"
15581188260000000
KUBE_MASTER="--master=http://centos-master:8080"
15581188260000000
<p>kube 5657 1 0 Mar15 ? 00:12:05 /usr/bin/kube-apiserver --logtostderr=true --v=0 --etcd-servers=<a href="http://centos-master:2379" rel="nofollow noreferrer">http://centos-master:2379</a> --address=0.0.0.0 --port=8080 --kubelet-port=10250 --allow-privileged=false --service-cluster-ip-range=10.254.0.0/16</p>
15581188260000000
<p>So I still do not know what is missing. </p>"kubectl get nodes" shows NotReady always even after giving the appropriate IP<p>I am following the document <a href="https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/" rel="nofollow noreferrer">https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/</a> to try to create a Kubernetes cluster with three Vagrant Ubuntu VMs on my local Mac. But I can only see the master when running "kubectl get nodes" on the master node, even though "kubeadm join" completed successfully. After trying several possible fixes found online, the issue is still the same (a common fix, pinning the kubelet node IP, is sketched after the interface list below).</p>
15581188260000000
- (master) eth0: 10.0.2.15, eth1: 192.168.101.101
15581188260000000
- (worker2) eth0: 10.0.2.15, eth1: 192.168.101.103
15581188260000000
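<p>With Vagrant, eth0 carries the same NAT address (10.0.2.15) on every VM, so the kubelet can register each node with an IP it is not reachable on. A minimal sketch of the usual fix, assuming the eth1 host-only addresses above are the ones that should be used (the file path is the kubeadm default on deb-based systems and may differ):</p>
<pre><code># On worker2, for example; use each node's own eth1 address
echo 'KUBELET_EXTRA_ARGS=--node-ip=192.168.101.103' | sudo tee /etc/default/kubelet
sudo systemctl daemon-reload
sudo systemctl restart kubelet
</code></pre>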
<p>Regards,
15581188260000000
<p><a href="https://i.stack.imgur.com/YHx7k.jpg" rel="nofollow noreferrer">log-new-part1</a>
15581188260000000
<pre><code>Failed to pull image "localhost:5000/dev/customer:v1": rpc error: code = Unknown desc
15581188260000000
<p><strong>Pod Events</strong></p>
15581188260000000
Normal BackOff 16m (x2 over 16m) kubelet, minikube Back-off pulling image "localhost:5000/dev/customer:v1"
15581188260000000
= Error response from daemon: Get http://localhost:5000/v2/: dial tcp 127.0.0.1:5000: getsockopt: connection refused
15581188260000000
v1: Pulling from dev/customer
15581188260000000
<p>Is it because <code>kubectl logs</code> uses SSH under the hood? Is there any workaround to see the pod logs?</p>"kubectl logs" not working after adding NAT gateways in GCE<p>Very often when I want to deploy a new image with "kubectl set image", it fails with ErrImagePull status and then fixes itself after some time (up to a few hours). These are the events from "kubectl describe pod":</p>
15581188260000000
36m 12m 6 {kubelet gke-xxxxxxxxxx-staging-default-pool-ac6a32f4-09h5} spec.containers{zzz-staging} Normal Pulling pulling image "us.gcr.io/yyyy-staging/zzz:latest"
15581188260000000
16m 7m 7m 3 {kubelet gke-xxxxxxxxxx-staging-default-pool-ac6a32f4-09h5} Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "zzz-staging" with ImagePullBackOff: "Back-off pulling image \"us.gcr.io/yyyy-staging/zzz:latest\""
15581188260000000
<p>Is there a way to avoid that?</p><p>I am deploying a container in Google Kubernetes Engine with this YAML fragment:</p>
15581188260000000
image: registry/service-go:latest
15581188260000000
cpu: "20m"
15581188260000000
</code></pre>
15581188260000000
<p>I only have the default namespace and when executing</p>
15581188260000000
<pre><code>Name: default
15581188260000000
No resource quota.
15581188260000000
</code></pre>"Limits" property ignored when deploying a container in a Kubernetes cluster<p>I successfully deployed with kubernetes a custom container based on the official docker-vault image, but when using the <code>vault init</code> command I get the following error:</p>
15581188260000000
<pre><code>FROM vault:0.8.3
15581188260000000
CMD ["server", "-config=vault.conf"]
15581188260000000
export VAULT_ADDR="http://127.0.0.1:8200"
15581188260000000
<p>To execute it, I configured my kubernetes yaml deployment file as follows:</p>
15581188260000000
- image: // my image
15581188260000000
- containerPort: 8200
15581188260000000
# memory being swapped to disk so that secrets
15581188260000000
mountPath: /vault/file
15581188260000000
command: ["/bin/sh", "./configure_vault.sh"]
15581188260000000
claimName: vault
15581188260000000
<p>a) I created a private Docker registry on the slave node using basic SSL authentication. Let's call it <code>abc.def.com:1234</code></p>
15581188260000000
<p>e) Now, I stopped the container. I deleted the image from local cache as well. </p>
15581188260000000
root 23683 1 2 05:42 ? 00:01:12 /usr/bin/kubelet --logtostderr=true --v=0 --api-servers=http://wer.txy.com:8080 --address=0.0.0.0 --hostname-override=abc.def.com --allow-privileged=false --pod-infra-container-image=abc.def.com:1234/s5678:test --cluster-dns=x.y.z.b --cgroup-driver=systemd
15581188260000000
<pre><code>apiVersion: v1
15581188260000000
- name: regsecret
15581188260000000
NAME READY STATUS RESTARTS AGE
15581188260000000
<pre><code>Error from server (BadRequest): container "utest1" in pod "test1" is waiting to start: ContainerCreating
15581188260000000
</code></pre>
15581188260000000
<p>As you can see, the above command is received by the kubelet, but it is not passed on to the dockerd-current daemon: </p>
15581188260000000
<p>Before pod creation: </p>
15581188260000000
</code></pre>
15581188260000000
abc.def.com:1234/s5678 test 54ae12a89367 8 days ago 108 MB
15581188260000000
<p>Can any of you please help me triage this issue further and bring it to closure?</p>"No command specified" Error while creating pod in Kubernetes<p>I have a Spring application built into a Docker image with the following command in the <code>Dockerfile</code></p>
15581188260000000
<p>When creating app on OpenShift with </p>
15581188260000000
<pre><code>java.lang.IllegalStateException: Logback configuration error detected:
15581188260000000
<p>However, when I run the image directly with <code>docker run -it <image_ID> /bin/bash</code>, and then execute the <code>java -jar</code> command above, it runs fine.</p>
15581188260000000
<encoder>
15581188260000000
<rollingPolicy class="ch.qos.logback.core.rolling.SizeAndTimeBasedRollingPolicy">
15581188260000000
</rollingPolicy>
15581188260000000
<p>Versions I use:</p>
15581188260000000
features: Basic-Auth GSSAPI Kerberos SPNEGO
15581188260000000
API version: 1.24
15581188260000000
Built: Wed Dec 13 12:18:58 2017
15581188260000000
API version: 1.24
15581188260000000
Built: Wed Dec 13 12:18:58 2017
15581188260000000
<pre><code>- job_name: 'kubernetes_pods'
15581188260000000
- api_server: http://172.29.219.102:8080
15581188260000000
target_label: __address__
15581188260000000
<p>Where <code>172.29.219.110:8080</code> is the IP and port of my standalone HAProxy.</p>
15581188260000000
{"status":"UP"}
15581188260000000
</code></pre>
15581188260000000
<p>Error starting host: Error creating host: Error executing step: Running pre
15581188260000000
<p>When I am doing this step:</p>
15581188260000000
<p>Looking at docker ps -l, I have </p>
15581188260000000
About an hour ago Up About an hour
15581188260000000
<p>I run this command:</p>
15581188260000000
<p>Anything wrong here?
15581188260000000
<pre><code>root@kubemaster:~/istio-0.8.0# docker-compose -f samples/bookinfo/consul/bookinfo.yaml up -d
15581188260000000
<pre><code>root@kubemaster:~/istio-0.8.0# docker network create consul_istiomesh
15581188260000000
<pre><code>root@kubemaster:~/istio-0.8.0# docker-compose -f samples/bookinfo/consul/bookinfo.yaml up -d
15581188260000000
Creating consul_reviews-v1_1
15581188260000000
Traceback (most recent call last):
15581188260000000
line 63, in main
15581188260000000
<p>What should I do?</p>"Running on Docker with Consul or Eureka" ?<p>I'd like a multi-container pod with a couple of components:</p>
15581188260000000
<pre><code>apiVersion: v1
15581188260000000
volumeMounts:
15581188260000000
- name: test-volume
15581188260000000
fsType: ext4
15581188260000000
Output: mount: special device /var/lib/kubelet/plugins/kubernetes.io/aws-ebs/mounts/aws/us-west-2a/vol-xxxxxxxx does not exist
15581188260000000
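<p>That "special device ... does not exist" message usually means the EBS volume was never attached to the node, most often because the volume lives in a different availability zone than the instance. A quick check (hedged; the volume ID placeholder is kept from the output above):</p>
<pre><code># Confirm the volume's AZ and attachment state
aws ec2 describe-volumes --volume-ids vol-xxxxxxxx \
  --query 'Volumes[0].{AZ:AvailabilityZone,State:State,Attachments:Attachments}'

# Compare with the zone label of the node the pod landed on
kubectl get nodes -L failure-domain.beta.kubernetes.io/zone
</code></pre>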
<p>I wanted to test the <code>/metrics</code> API.<br>
15581188260000000
--key '/var/lib/kubelet/pki/kubelet.key' \
15581188260000000
Maybe I am not using the correct certificates ? </p>
15581188260000000
* Connected to 172.31.29.121 (172.31.29.121) port 10250 (#0)
15581188260000000
* CAfile: /etc/kubernetes/pki/ca.crt
15581188260000000
* TLSv1.2 (IN), TLS handshake, Certificate (11):
15581188260000000
* Closing connection 0
15581188260000000
establish a secure connection to it. To learn more about this situation and
15581188260000000
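<p>Returning to the kubelet <code>/metrics</code> attempt above: the kubelet's serving certificate is frequently self-signed, and the client certificate must be one the kubelet's authorizer accepts. A sketch that usually works on a kubeadm master (paths are kubeadm defaults, which is an assumption here):</p>
<pre><code># Use the API server's kubelet client certificate; -k skips verification of the
# kubelet's self-signed serving certificate
sudo curl -k https://172.31.29.121:10250/metrics \
  --cert /etc/kubernetes/pki/apiserver-kubelet-client.crt \
  --key /etc/kubernetes/pki/apiserver-kubelet-client.key
</code></pre>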
<pre><code>ballerina.home = "/Library/Ballerina/ballerina-0.975.1"
15581188260000000
import ballerina/log;
15581188260000000
name: "ballerina-abdennour-demo"
15581188260000000
service<http:Service> hello bind { port: 9090 } {
15581188260000000
caller ->respond(res) but { error e => log:printError("Error sending response", err = e)};
15581188260000000
undefined annotation "Deployment"
15581188260000000
<h2>UPDATE</h2>
15581188260000000
<p>When I run a client sanity test, the only exception returned is this:</p>
15581188260000000
<p>While it has been suggested that this problem is kube-dns related as indicated <a href="https://github.com/kubernetes/contrib/issues/2737." rel="nofollow noreferrer">here</a>.<br>
15581188260000000
Server: 10.63.240.10
15581188260000000
<pre><code>2017-11-29 15:14:39,923 [myid:] - INFO [main:QuorumPeer$QuorumServer@167] - Resolved hostname: zk-0.zk-svc.default.svc.cluster.local to address: zk-0.zk-svc.default.svc.cluster.local/10.60.4.4
15581188260000000
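<p>To confirm or rule out the kube-dns suspicion above, a throwaway pod with DNS tools is usually enough; a minimal sketch (the busybox image tag is a generic choice, not from the question):</p>
<pre><code># Resolve the ZooKeeper headless-service record and a well-known service from inside the cluster
kubectl run -it --rm dns-test --image=busybox:1.28 --restart=Never -- nslookup zk-0.zk-svc.default.svc.cluster.local
kubectl run -it --rm dns-test --image=busybox:1.28 --restart=Never -- nslookup kubernetes.default
</code></pre>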
<pre><code>root@kubernetes01:~/kubernetes/cluster# KUBERNETES_PROVIDER=ubuntu ./kube-down.sh
15581188260000000
No resources found
15581188260000000
waiting for tearing down pods
15581188260000000
waiting for tearing down pods
15581188260000000
waiting for tearing down pods
15581188260000000
waiting for tearing down pods
15581188260000000
waiting for tearing down pods
15581188260000000
<p>So, in general, I created the certificates on Gentoo Linux using the following bash script:
15581188260000000
export WORKER_IP=10.79.218.3
15581188260000000
openssl genrsa -out apiserver-key.pem 2048
15581188260000000
openssl req -new -key ${WORKER_FQDN}-worker-key.pem -out ${WORKER_FQDN}-worker.csr -subj "/CN=${WORKER_FQDN}" -config worker-openssl.cnf
15581188260000000
openssl x509 -req -in admin.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial -out admin.pem -days 365
15581188260000000
<pre><code>[req]
15581188260000000
[ v3_req ]
15581188260000000
[alt_names]
15581188260000000
IP.2 = 10.79.218.2
15581188260000000
<pre><code>[req]
15581188260000000
[ v3_req ]
15581188260000000
[alt_names]
15581188260000000
<p>My controller machine is <code>coreos-2.tux-in.com</code>, which resolves to the LAN IP <code>10.79.218.2</code></p>
15581188260000000
Nov 08 21:24:06 coreos-2.tux-in.com kubelet-wrapper[2018]: E1108 21:24:06.950827 2018 reflector.go:203] pkg/kubelet/kubelet.go:384: Failed to list *api.Service: Get https://coreos-2.tux-in.com:443/api/v1/services?resourceVersion=0: x509: certificate signed by unknown authority
15581188260000000
Nov 08 21:24:08 coreos-2.tux-in.com kubelet-wrapper[2018]: E1108 21:24:08.171170 2018 eviction_manager.go:162] eviction manager: unexpected err: failed GetNode: node '10.79.218.2' not found
15581188260000000
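<p>For the "x509: certificate signed by unknown authority" error above, it is worth verifying that the generated certificates actually chain back to ca.pem and that the SANs cover the names being dialled. A quick check (hedged; file names follow the script's naming convention):</p>
<pre><code># Does the worker certificate chain back to the CA the kubelet trusts?
openssl verify -CAfile ca.pem ${WORKER_FQDN}-worker.pem

# Which hostnames/IPs does the API server certificate actually cover?
openssl x509 -in apiserver.pem -noout -text | grep -A1 'Subject Alternative Name'
</code></pre>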
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE
15581188260000000
kube-system coredns-78fcdf6894-s4l8n 1/1 Running 1 18h 10.244.0.14 master2
15581188260000000
kube-system kube-controller-manager-master2 1/1 Running 1 18h 10.0.2.15 master2
15581188260000000
kube-system kube-proxy-xldph 1/1 Running 1 18h 10.0.2.15 master2
15581188260000000
</code></pre>
15581188260000000
<pre><code>kubectl -v=10 exec -it hello-kubernetes-55857678b4-4xbgd sh
15581188260000000
I0703 08:44:01.255808 10307 round_trippers.go:386] curl -k -v -XGET -H "Accept: application/json, */*" -H "User-Agent: kubectl/v1.11.0 (linux/amd64) kubernetes/91e7b4f" 'https://192.168.0.33:6443/api/v1/namespaces/default/pods/hello-kubernetes-55857678b4-4xbgd'
15581188260000000
I0703 08:44:01.273692 10307 round_trippers.go:414] Content-Type: application/json
15581188260000000
inationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File","imagePullPolicy":"IfNotPresent"}],"restartPolicy":"Always","terminationGracePeriodSeconds":30,"dnsPolicy":"ClusterFirst","serviceAccountName":"default","serviceAccount":"default","nodeName":"node","securityContext":{},"schedulerName":"default-scheduler","tolerations":[{"key":"node.kubernetes.io/not-ready","operator":"Exists","effect":"NoExecute","tolerationSeconds":300},{"key":"node.kubernetes.io/unreachable","operator":"Exists","effect":"NoExecute","tolerationSeconds":300}]},"status":{"phase":"Running","conditions":[{"type":"Initialized","status":"True","lastProbeTime":null,"lastTransitionTime":"2018-07-02T18:09:02Z"},{"type":"Ready","status":"True","lastProbeTime":null,"lastTransitionTime":"2018-07-03T12:32:26Z"},{"type":"ContainersReady","status":"True","lastProbeTime":null,"lastTransitionTime":null},{"type":"PodScheduled","status":"True","lastProbeTime":null,"lastTransitionTime":"2018-07-02T18:09:02Z"}],"
15581188260000000
I0703 08:44:01.317938 10307 round_trippers.go:411] Response Headers:
15581188260000000
F0703 08:44:01.318118 10307 helpers.go:119] error: unable to upgrade connection: pod does not exist
15581188260000000
(1045, "Access denied for user 'root'@'cloudsqlproxy~[cloudsql instance ip]' (using password: NO)")</p>
15581188260000000
<pre><code>from django.db import connection
15581188260000000
<p>the env vars are set correctly in the container, this works:</p>
15581188260000000
</code></pre>
15581188260000000
'ENGINE': 'django.db.backends.mysql',
15581188260000000
'HOST': os.getenv('DB_HOST'),
15581188260000000
'charset': 'utf8mb4',
15581188260000000
</code></pre>
15581188260000000
metadata:
15581188260000000
labels:
15581188260000000
- name: aesh-web
15581188260000000
value: 127.0.0.1
15581188260000000
value: aesh_db
15581188260000000
name: cloudsql-db-credentials
15581188260000000
secretKeyRef:
15581188260000000
volumeMounts:
15581188260000000
volumes:
15581188260000000
</code></pre><p>We have a fairly large kubernetes deployment on GKE, and we wanted to make our life a little easier by enabling auto-upgrades. The <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/node-auto-upgrades" rel="nofollow noreferrer">documentation on the topic</a> tells you how to enable it, but not how it actually <strong>works</strong>.</p>
15581188260000000
<p>Has someone used this feature in production and can shed some light on what it'll actually do?</p>
15581188260000000
<li>I set up a maintenance window</li>
15581188260000000
<li>The update would happen in the next maintenance window</li>
15581188260000000
<h1>My question</h1>
15581188260000000
<li>If so, to what version?</li>
15581188260000000
<pre><code>apiVersion: v1
15581188260000000
labels:
15581188260000000
domainName: "my.personal-site.de"
15581188260000000
app: django-app
15581188260000000
targetPort: 8000
15581188260000000
name: django-app-deployment
15581188260000000
strategy:
15581188260000000
maxUnavailable: 1
15581188260000000
app: django-app
15581188260000000
name: django-app
15581188260000000
</code></pre><p>I have set up Docker on my machine and also Minikube, which has Docker inside it, so I probably have two Docker instances running on different VMs
15581188260000000
3- docker tag a3703d02a199 127.0.0.1:5000/eliza/console:0.0.1
15581188260000000
<p>All of the above steps work fine with no problems at all.</p>
15581188260000000
2- eval $(minikube docker-env)
15581188260000000
<p>In the last step (point 4) it gave me the following message</p>
15581188260000000
<p>So I can access the image registry from my machine but not from Minikube, which of course causes problems when I deploy this image to Kubernetes on Minikube: the deployment fails because it can't connect to <a href="http://127.0.0.1:5000" rel="noreferrer">http://127.0.0.1:5000</a></p>
15581188260000000
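<p>A commonly used workaround for exactly this situation is to skip the host-side registry and build the image straight into Minikube's Docker daemon, then reference it with a non-pulling image policy. A minimal sketch (the tag is reused from step 3 above; the imagePullPolicy change is an assumption about the deployment spec):</p>
<pre><code># Point the local docker client at Minikube's daemon and build there
eval $(minikube docker-env)
docker build -t 127.0.0.1:5000/eliza/console:0.0.1 .

# In the deployment spec, keep the same image name and set:
#   imagePullPolicy: IfNotPresent
# so the kubelet uses the locally built image instead of pulling
</code></pre>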
<pre><code>apiVersion: v1
15581188260000000
labels:
15581188260000000
- port: 9080
15581188260000000
app: tripbru-console
15581188260000000
kind: Deployment
15581188260000000
app: tripbru-console
15581188260000000
template:
15581188260000000
tier: frontend
15581188260000000
name: tripbru-console
15581188260000000
</code></pre>
15581188260000000
</blockquote>
15581188260000000
</code></pre>
15581188260000000
</blockquote>
15581188260000000
<a href="https://docker.local:5000/v1/_ping" rel="noreferrer">https://docker.local:5000/v1/_ping</a>: dial tcp: lookup docker.local on
15581188260000000
<p>So how can I solve this issue too? </p>
15581188260000000
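<p>For the <code>docker.local</code> lookup failure above, one pragmatic workaround is to make the name resolvable inside the Minikube VM, for example via its hosts file (the registry host's address is a placeholder, not taken from the question):</p>
<pre><code># Map docker.local to the machine that actually runs the registry
minikube ssh "echo '<registry-host-ip> docker.local' | sudo tee -a /etc/hosts"
</code></pre>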
<p>(Inside the pod)</p>
15581188260000000
</code></pre>
15581188260000000
drwx------ 6 999 docker 4096 Oct 30 11:21 base
15581188260000000
drwx------ 2 999 docker 4096 Oct 30 11:21 pg_dynshmem
15581188260000000
drwx------ 4 999 docker 4096 Oct 30 11:21 pg_multixact
15581188260000000
drwx------ 2 999 docker 4096 Oct 30 11:21 pg_snapshots
15581188260000000
drwx------ 2 999 docker 4096 Oct 30 11:21 pg_tblspc
15581188260000000
-rw------- 1 999 docker 88 Oct 30 11:21 postgresql.auto.conf
15581188260000000
</code></pre>
15581188260000000
drwx------ 6 postgres postgres 4096 Oct 30 11:21 base
15581188260000000
drwx------ 2 postgres postgres 4096 Oct 30 11:21 pg_dynshmem
15581188260000000
drwx------ 4 postgres postgres 4096 Oct 30 11:21 pg_multixact
15581188260000000
drwx------ 2 postgres postgres 4096 Oct 30 11:21 pg_snapshots
15581188260000000
drwx------ 2 postgres postgres 4096 Oct 30 11:21 pg_tblspc
15581188260000000
-rw------- 1 postgres postgres 88 Oct 30 11:21 postgresql.auto.conf
15581188260000000
</code></pre>
15581188260000000
metadata:
15581188260000000
matchLabels:
15581188260000000
template:
15581188260000000
valueFrom:
15581188260000000
- name: POSTGRES_PASSWORD
15581188260000000
key: password
15581188260000000
volumeMounts:
15581188260000000
name: information-system
15581188260000000
command: ["bash", "-c", "python main.py"]
15581188260000000
claimName: information-system-db-claim
15581188260000000
apiVersion: v1
15581188260000000
type: local
15581188260000000
storage: 10Gi
15581188260000000
path: "/tmp/data/postgres"
15581188260000000
apiVersion: v1
15581188260000000
storageClassName: manual
15581188260000000
requests:
15581188260000000
<p>These are the pods running in the kube-system namespace</p>
15581188260000000
etcd-minikube 1/1 Running 0 6h
15581188260000000
kube-dns-86f4d74b45-xxznk 3/3 Running 15 1d
15581188260000000
nginx-ingress-controller-tjljg 1/1 Running 3 6h
15581188260000000
</code></pre>
15581188260000000
Kubelet version: "1.12.0-rc.1" Control plane version: "1.11.3"
15581188260000000
<p>Many Thanks
15581188260000000
Jan 3 21:28:46 master kubelet: I0103 21:28:46.829714 8726 kubelet_node_status.go:204] Setting node annotation to enable volume controller attach/detach
15581188260000000
Jan 3 21:29:02 master kubelet: E0103 21:29:02.762461 8726 cni.go:163] error updating cni config: No networks found in /etc/cni/net.d
15581188260000000
CentOS Linux release 7.3.1611 (Core)
15581188260000000
Docker version 1.12.5, build 7392c3b
15581188260000000
kubeadm version: version.Info{Major:"1", Minor:"6+", GitVersion:"v1.6.0-alpha.0.2074+a092d8e0f95f52", GitCommit:"a092d8e0f95f5200f7ae2cba45c75ab42da36537", GitTreeState:"clean", BuildDate:"2016-12-13T17:03:18Z", GoVersion:"go1.7.4", Compiler:"gc", Platform:"linux/amd64"}
15581188260000000
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
15581188260000000
6b56cda441d6 gcr.io/google_containers/etcd-amd64:3.0.14-kubeadm "etcd --listen-client" 8 minutes ago Up 8 minutes k8s_etcd.c323986f_etcd-master_kube-system_3a26566bb004c61cd05382212e3f978f_80669ce9
15581188260000000
66de3a3ad7e9 gcr.io/google_containers/pause-amd64:3.0 "/pause" 8 minutes ago Up 8 minutes k8s_POD.d8dbe16c_etcd-master_kube-system_3a26566bb004c61cd05382212e3f978f_d58fa3b8
15581188260000000
[] [kubernetes]
kubernetes-cni.x86_64 0.3.0.1-0.07a8a2 @kubernetes
15581188260000000
<pre><code>kubectl logs --follow -n kube-system deployment/nginx-ingress
15581188260000000
<p>and on the dashboard logs I see this</p>
15581188260000000
2018/08/27 21:14:11 Metric client health check failed: the server is currently unable to handle the request (get services heapster). Retrying in 30 seconds.
15581188260000000
<p>I tried deleting all the pods in the kube-system namespace and deleted the dashboard/heapster as well, but nothing helped. Any ideas what is going on, or what to check? Note: I had upgraded the cluster and everything came up fine afterwards; I rebooted the master node after the upgrade and this is what happened</p>
15581188260000000
osmsku---kubenode01..local Ready <none> 140d v1.11.2
15581188260000000
77.5 is the docker interface ip</p>
15581188260000000
heapster ClusterIP 10.98.52.12 <none> 80/TCP 1h k8s-app=heapster
15581188260000000
monitoring-influxdb ClusterIP 10.101.205.79 <none> 8086/TCP 1h k8s-app=influxdb
15581188260000000
osmsku--prod-kubemaster02..local Ready <none> 140d v1.11.2 <none> CentOS Linux 7 (Core) 3.10.0-514.26.2.el7.x86_64 docker://18.3.0
15581188260000000
NAME READY STATUS RESTARTS AGE
15581188260000000
etcd-osmsku--prod-kubemaster01..local 1/1 Running 15 2h
15581188260000000
kibana-logging-66fcf97dc8-57nd5 1/1 Running 1 2h
15581188260000000
kube-flannel-ds-5g26z 1/1 Running 2 2h
15581188260000000
kube-proxy-gv2lf 1/1 Running 2 2h
15581188260000000
kubernetes-dashboard-6bc9c6f7cb-f8g7s 1/1 Running 0 2h
15581188260000000
</code></pre>*4 connect() failed (113: No route to host) while connecting to kubernetes dashboard upstream<p>When I deploy the following I get this error:</p>
15581188260000000
apiVersion: extensions/v1beta1
15581188260000000
labels:
15581188260000000
app.kubernetes.io/managed-by: {{ .Release.Service }}
15581188260000000
{{- end }}
15581188260000000
{{- range .Values.front.ingress.tls }}
15581188260000000
{{- end }}
15581188260000000
serviceName: {{ include "marketplace.name" . }}-{{ $.Values.front.name }}
15581188260000000
{{- end }}
15581188260000000
</code></pre>
15581188260000000
{{- end -}}
15581188260000000
<p>Values used:</p>
15581188260000000
annotations:
15581188260000000
</code></pre><p>Currently, under Kubernetes 1.5.3, kube-apiserver.log and kube-controller-manager.log are generated by adding '1>>/var/log/kube-apiserver.log 2>&1' to the /etc/kubernetes/kube-apiserver.yaml file.
15581188260000000
<pre><code>vagrant@master:~$ helm init
15581188260000000
Happy Helming!
15581188260000000
vagrant@master:~$ helm install nginx
15581188260000000
<p><a href="https://kubernetes.io/docs/getting-started-guides/ubuntu/troubleshooting/" rel="nofollow noreferrer">https://kubernetes.io/docs/getting-started-guides/ubuntu/troubleshooting/</a></p>
15581188260000000
<pre><code> juju expose kubernetes-master
15581188260000000
<p>Can anyone help me?</p>$ helm version gives "Cannot connect to tiller"<p>I have a Daemonset running in privileged mode in a kubernetes cluster. This is the YAML spec of the daemon set.</p>
15581188260000000
name: my-daemon
15581188260000000
labels:
15581188260000000
serviceAccountName: my-sa-account
15581188260000000
imagePullPolicy: Always
15581188260000000
<p>Instead of using <code>privileged:true</code>, I am moving to Linux capabilities to grant permissions to the DaemonSet. Therefore, I added all the Linux capabilities to the container and removed <code>privileged:true</code>. This is the new YAML spec</p>
15581188260000000
name: my-daemon
15581188260000000
labels:
15581188260000000
serviceAccountName: my-sa-account
15581188260000000
imagePullPolicy: Always
15581188260000000
</code></pre>
15581188260000000
ShdPnd: 0000000000000000
15581188260000000
CapInh: 0000003fffffffff
15581188260000000
CapAmb: 0000000000000000
15581188260000000
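<p>To see what the container actually ends up with in the two variants, the capability bitmasks from <code>/proc/<pid>/status</code> shown above can be decoded; a small sketch (capsh ships with libcap and may need to be installed; the pod name is a placeholder):</p>
<pre><code># Decode a capability mask such as the CapInh value above
capsh --decode=0000003fffffffff

# Read the capability lines of PID 1 inside the running DaemonSet pod
kubectl exec <daemonset-pod> -- grep Cap /proc/1/status
</code></pre>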
<p>The <code>Namespace</code> is <code>default</code> and this is the result of <code>kubectl cluster-info:</code>
15581188260000000
</code>
15581188260000000
metadata:
15581188260000000
name: busybox
15581188260000000
port: 80
15581188260000000
metadata:
15581188260000000
- image: time-provider
15581188260000000
metadata:
15581188260000000
- image: gateway
15581188260000000
LB: 10.240.0.16 ( haproxy) </p>
15581188260000000
mv /home/${USER}/ca.crt /etc/kubernetes/pki/
15581188260000000
mv /home/${USER}/front-proxy-ca.crt /etc/kubernetes/pki/
15581188260000000
mv /home/${USER}/admin.conf /etc/kubernetes/admin.conf
15581188260000000
[certificates] Generated etcd/ca certificate and key.
15581188260000000
[certificates] Generated etcd/peer certificate and key.
15581188260000000
[certificates] Generated apiserver-kubelet-client certificate and key.
15581188260000000
[certificates] Generated front-proxy-ca certificate and key.
15581188260000000
>`sudo kubeadm alpha phase kubelet config write-to-disk --config kubeadm-config.yaml`
15581188260000000
>`sudo kubeadm alpha phase kubelet write-env-file --config kubeadm-config.yaml`
15581188260000000
>`sudo kubeadm alpha phase kubeconfig kubelet --config kubeadm-config.yaml`
15581188260000000
>`export KUBECONFIG=/etc/kubernetes/admin.conf`
15581188260000000
The connection to the server localhost:8080 was refused - did you specify the right host or port?</p>
15581188260000000
<p>No output</p>
15581188260000000
<p><strong><em>a kubeconfig file "/etc/kubernetes/admin.conf" exists already but has got the wrong API Server URL</em></strong></p>
15581188260000000
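<p>For the two symptoms above (kubectl falling back to localhost:8080, and a stale admin.conf with the wrong API server URL), a sequence along these lines is commonly used with the same kubeadm phase commands; this is a sketch under the assumption that kubeadm-config.yaml is the file shown below, not a verified fix:</p>
<pre><code># Remove the stale kubeconfig and regenerate it from the current kubeadm config
sudo rm /etc/kubernetes/admin.conf
sudo kubeadm alpha phase kubeconfig admin --config kubeadm-config.yaml

# Make kubectl use it instead of the localhost:8080 default
export KUBECONFIG=/etc/kubernetes/admin.conf
kubectl get nodes
</code></pre>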
kind: InitConfiguration
15581188260000000
controlPlaneEndpoint: "10.240.0.16:6443"
15581188260000000
listen-client-urls: "https://127.0.0.1:2379,https://10.240.0.33:2379"
15581188260000000
initial-cluster: "kb8-master1=https://10.240.0.4:2380,kb8-master2=https://10.240.0.33:2380"
15581188260000000
- 10.240.0.33
15581188260000000
networking:
15581188260000000
CVE-2018-1002105 is one of the most severe #Kubernetes #security vulnerabilities of all time. How does this flaw wo… https://t.co/FLv4b9BVFE
15581194720000000
@olivierboukili: I just published A GCP / Kubernetes production migration retrospective (part 1) https://t.co/RngsmtMwor
15581194720000000
@IanColdwater: I got accepted to speak at #BHUSA. ☺️ @mauilion and I are going to be demonstrating some little-known attacks on default…
15581194720000000
@CloudExpo: OpsRamp’s to Present AI & AIOps Education Track at CloudEXPO @OpsRamp #HybridCloud #AI #IoT #AIOps #DevOps #Blockchain #Cl…
15581194720000000
@brendandburns: Windows containers are now in public preview in @Azure Kubernetes Service!!! https://t.co/ANfWHkbZrV Many, many thanks…
15581194720000000
@Rancher_Labs: Introducing the Rancher 2 #Terraform Provider: This week, @HashiCorp published the Rancher2 provider to help you provisio…
15581194720000000
@azureflashnews: Announcing the preview of Windows Server containers support in Azure Kubernetes Service https://t.co/lqmAJZgCbR #Azur…
15581194720000000
@YvosPivo: Have you ever thought about how you can import volumes into a #Kubernetes cluster, for instance to migrate & transform legacy…
15581194720000000
The First Way – Systems Thinking • Understand the entire flow of work • Seek to increase the flow of work • Stop… https://t.co/rAtcN6b3st
15581194720000000
@KubeSUMMIT: Join 180 Sponsors & Partners and 60 Exhibitors at CloudEXPO Silicon Valley #HybridCloud #AI #AIOps #IoT #DevOps #DevSecOp…
15581194720000000
Announcing the preview of Windows Server containers support in Azure Kubernetes Service https://t.co/E90N6qj05w
15581194720000000
@CloudExpo: It's 4:00 AM in Silicon Valley! Do You Know Where Your Data Is? #HybridCloud #AI #AIOps #CIO #IoT #DevOps #SDN #CloudNative…
15581194720000000
@vshn_ch: Check out Adrian's excellent blog post about Kubernetes Serverless Frameworks: https://t.co/wi9Pl6Dtex Thanks @akosma! #server…
15581194720000000
@DBArgenis: Congrats @Taylorb_msft et al on launching! This is a big, big deal! Windows Server containers in Azure Kubernetes Service h…
15581194720000000
HNews: Lokomotive: An engine to drive cutting-edge Linux technologies into Kubernetes https://t.co/0LIZvhtKfU #linux
15581194720000000
Heading to #KubeCon Barcelona Spain. #k8s #Kubernetes #k8 @KubeCon_ https://t.co/tEgBQ7mzQn
15581194720000000
I'm super interested in this. We're starting to use @Rancher_Labs to manage our #Kubernetes at work and are already… https://t.co/ZMzdIIu4qM
15581194720000000
@kubeflow: Kubernetes, The Open and Scalable Approach to ML Pipelines by @yaronhaviv https://t.co/sMLX2ib4fu Courtesy of our friends at…
15581194720000000
@OracleDevs: Get Hands-on Microservices on Kubernetes and Autonomous Database! Register for the Live Virtual Lab that we will run on May…
15581194720000000
@CloudExpo: Join CloudEXPO Silicon Valley June 24-26 at Biggest Expo Floor in 5 Years ! #BigData #HybridCloud #Cloud #CloudNative #Serv…
15581194720000000
Announcing the preview of Windows Server containers support in Azure Kubernetes Service https://t.co/OUbl64E9pN
15581194720000000
Congrats @Taylorb_msft et al on launching! This is a big, big deal! Windows Server containers in Azure Kubernetes… https://t.co/kugG23JAC3
15581194720000000
Announcing the preview of Windows Server containers support in Azure Kubernetes Service https://t.co/HGAC6WOnd8 #Microsoft #Azure #Cloud
15581194720000000
3 Things Every CTO Should Know About Kubernetes
15585496040000000
5 things I wish I'd known about Kubernetes before I started
15585496040000000
7.5 tips to help you ace the Certified Kubernetes Administrator (CKA) exam
15585496040000000
10 most important differences between OpenShift and Kubernetes
15585496040000000
50 Useful Kubernetes Tools
15585496040000000
068: Screaming in the Cloud, GDPR, Kubernetes PoCs, InfoSec, and More
15585496040000000
076: Hiring in DevOps, Security, Kubernetes, and More
15585496040000000
080: Improving the Workforce, Programming Myths, Kubernetes, New Books, and More
15585496040000000
087: Psychological Safety, Kubernetes, Ansible, Serverless, AWS, OpenFaaS,& More
15585496040000000
093: Hard Week, Ansible, Kubernetes, Nathen Harvey, InfoSec, and More
15585496040000000
100: At Least It Wasn't Oracle, AWS, HQ2, Kubernetes, Neomonolith, and More
15585496040000000
106: KubeKhan, KubeCon, Etcd, Licenses, Securing Kubernetes, JFrog, More
15585496040000000
115: CVE-2019-5736 Runc Vuln, Kubernetes, Liz Fong-Jones, MongoDB's End and More
15585496040000000
122: Chefnanigans, Derek the DevOps Dinosaur, BPF, Envoy, Kubernetes, OPA, More
15585496040000000
[Audio] Making Innovative Containers Using Kubernetes and DevOps
15585496040000000
[Talk] Kubernetes 1.6+ Feat. David Aronchick and Kubernetes on AWS Zalando
15585496040000000
[Webinar] Kubernetes Monitoring Best Practices from KubeCon
15585496040000000
A CLI tool for deploying cloud native apps to Kubernetes
15585496040000000
A comparison of Kubernetes network plugins
15585496040000000
A conversion tool to go from Docker Compose to Kubernetes
15585496040000000
A developers stand point on Docker Swarm and Kubernetes
15585496040000000
A few things I've learned about Kubernetes
15585496040000000
A Guide to Deploy Elasticsearch Cluster on Google Kubernetes Engine
15585496040000000
A Kubernetes Admission controller to gate pod execution based on image analysis
15585496040000000
A Kubernetes-based polyglot microservices application with Istio service mesh
15585496040000000
A new Kubernetes sandbox
15585496040000000
A reason for unexplained connection timeouts on Kubernetes/Docker
15585496040000000
A Service Mesh for Kubernetes: Distributed Tracing
15585496040000000
A story about a Kubernetes migration
15585496040000000
A Tale of Cloud, Containers and Kubernetes
15585496040000000
A Top10 list of Kubernetes applications
15585496040000000
A VPN for Minikube: transparent networking access to your Kubernetes cluster
15585496040000000
Abusing the Kubernetes API Server Proxying
15585496040000000
Adding Persistent Volumes to Jenkins with Kubernetes
15585496040000000
Advanced Kubernetes Objects You Need to Know
15585496040000000
After 2 years, Kubernetes is at the center of the cloud; now comes the hard part
15585496040000000
All the Fun in Kubernetes 1.9 – The New Stack
15585496040000000
Amazon Considering New Cloud Service Based on Kubernetes
15585496040000000
Amazon Elastic File System on Kubernetes
15585496040000000
Amazon Web Services chooses its Kubernetes path, joins CNCF
15585496040000000
AMD ROCm 2.0 released (TensorFlow v1.12, FP16 support, OpenCL 2.0, Kubernetes, )
15585496040000000
An open source operator for Kafka on Kubernetes
15585496040000000
Analysis of a Kubernetes hack - Backdooring through kubelet
15585496040000000
Announcement: Cloud 66 Kubernetes Support Is Here
15585496040000000
Announcing HashiCorp Consul and Kubernetes
15585496040000000
Announcing Submariner, Multi-Cluster Network Connectivity for Kubernetes
15585496040000000
Announcing Terraform Support for Kubernetes Service on AWS
15585496040000000
Ansible playbook to deploy Rancher k3s kubernetes cluster
15585496040000000
Apache Mesos vs. Google’s Kubernetes
15585496040000000
Apollo – The Logz.io Continuous Deployment Solution Over Kubernetes
15585496040000000
Application Tracing on Kubernetes with AWS X-Ray – AWS Compute Blog
15585496040000000
Are you learning Kubernetes/Docker? What resources are you using?
15585496040000000
Argo: Open source Kubernetes native workflows, events, CI and CD
15585496040000000
Ask HN: Anyone with inside knowledge about Amazon working on Kubernetes product?
15585496040000000
Ask HN: Best container runtime for Kubernetes CRI-O
15585496040000000
Ask HN: Did AWS give you access to EKS(ECS for Kubernetes)?
15585496040000000
Ask HN: Docker, Kubernetes, Openshift, etc – how do you deploy your products?
15585496040000000
Ask HN: How has Kubernetes changed your workflow?
15585496040000000
Ask HN: Kubernetes application level rollback?
15585496040000000
Ask HN: Nomad (Hashicorp) vs. Kubernetes?
15585496040000000
Ask HN: What are downsides of Kubernetes/containers?
15585496040000000
Ask HN: Who is using Kubernetes or Docker in production and how has it been?
15585496040000000
Assess Kubernetes performance and scalability using Automation Pipeline
15585496040000000
Auto generate Kubernetes pod security policies
15585496040000000
Automate deep learning training with Kubernetes GPU-cluster
15585496040000000
Automated TLS with cert-manager and letsencrypt for Kubernetes
15585496040000000
Automating TLS and DNS with Kubernetes Ingress
15585496040000000
Autoscaling Deep Learning Training with Kubernetes
15585496040000000
AWS ALB Ingress Controller for Kubernetes
15585496040000000
AWS managed kubernetes (EKS)
15585496040000000
AWS Service Operator for Kubernetes Now Available
15585496040000000
Azure brings new Serverless and DevOps capabilities to the Kubernetes community
15585496040000000
Azure Kubernetes Service (AKS) GA
15585496040000000
Becoming a Kubernetes Maintainer
15585496040000000
Best Practices for Kubernetes' Pods
15585496040000000
Beyond Kubernetes: Istio network service mesh
15585496040000000
Bitnami Kubernetes Production Runtime
15585496040000000
BlockChain App Deployment Using Microservices with Kubernetes
15585496040000000
Bootstrap Kubernetes the Hard Way on GCP
15585496040000000
Borg, Omega, and Kubernetes [pdf]
15585496040000000
Brigade: Event-Driven Scripting for Kubernetes (JS)
15585496040000000
Bringing Kubernetes to Containership
15585496040000000
Build and deploy docker images to Kubernetes using Git push
15585496040000000
Build Your Kubernetes Cluster with Raspberry Pi, .NET Core and OpenFaas
15585496040000000
Build, deploy, manage modern serverless workloads using Knative on Kubernetes
15585496040000000
Building a Hybrid x86–64 and ARM Kubernetes Cluster
15585496040000000
Building a Kubernetes Operator for Prometheus and Thanos
15585496040000000
Building an ARM Kubernetes Cluster
15585496040000000
Building containers with Kubernetes
15585496040000000
Building Machine Learning Services in Kubernetes
15585496040000000
Cabin (mobile app for kubernetes) is now open source
15585496040000000
Canary deployments on kubernetes using Traefik
15585496040000000
Canonical Distribution of Kubernetes
15585496040000000
Canonical makes Kubernetes moves
15585496040000000
Cbi: Container Builder Interface for Kubernetes
15585496040000000
Certified Kubernetes and Google Kubernetes Engine
15585496040000000
Checking Out the Kubernetes Service Catalog
15585496040000000
CI/CD new features: Kubernetes, JaCoCo, and more
15585496040000000
CI/CD with Amazon Elastic Container Service for Kubernetes (Amazon EKS)
15585496040000000
Cisco joins the Kubernetes cloud rush
15585496040000000
CLI tool to generate repeatable, cloud-based Kubernetes infrastructure
15585496040000000
Cloud Foundry adds native Kubernetes support for running containers
15585496040000000
Cloud Migration Best Practices: How to Move Your Project to Kubernetes
15585496040000000
Cloudbees Kubernetes Continuous Delivery
15585496040000000
Cluster-in-a-box: How to deploy one or more Kubernetes clusters to a single box
15585496040000000
CNCF just got 36 companies to agree to a Kubernetes certification standard
15585496040000000
Collecting application logs in Kubernetes
15585496040000000
Comparing Kubernetes Authentication Methods
15585496040000000
Comparing Kubernetes Operators and in-house scripts to build Platform automation
15585496040000000
Compose on Kubernetes Now Open Source
15585496040000000
Concerns about Kubernetes Community newcomers
15585496040000000
Configuring permissions in Kubernetes with RBAC
15585496040000000
Conjuring up Kubernetes on Ubuntu
15585496040000000
Container code cluster-fact: There's a hole in Kubernetes
15585496040000000
Container Security Part 3 – Kubernetes Cheat Sheet
15585496040000000
Containerized App Deployment on Kubernetes (K8s) with Nutanix Calm
15585496040000000
Containership Launches Its Fully Managed Kubernetes Service
15585496040000000
Continuous Delivery with Kubernetes the Hard Way
15585496040000000
Continuous Deployment with Docker, Kubernetes and Jenkins
15585496040000000
Contribute to Kubernetes without writing code
15585496040000000
Convergence to Kubernetes – Standardization to Scale
15585496040000000
CoreOS (YC S13) Is Hiring – Help Accelerate Kubernetes (BER/SFO/NYC/remote)
15585496040000000
CoreOS Automates Kubernetes Node OS Upgrades
15585496040000000
CoreOS Tectonic extends Kubernetes to new platforms with automated installer
15585496040000000
CoreOS' Tectonic delivers "self-driving” Kubernetes to the enterprise
15585496040000000
Create a TLS-Protected Kubernetes Ingress from Scratch
15585496040000000
Create, manage, snapshot and scale Kubernetes infrastructure in the public cloud
15585496040000000
Creating a Kubernetes Cluster on AWS with a single command
15585496040000000
Creating dashboards of Kubernetes security events with Falco and a EFK stack
15585496040000000
Critical Privilege Escalation Flaw Patched in Kubernetes
15585496040000000
Cruise open-sources RBACSync – Gsuite group membership controller for Kubernetes
15585496040000000
Customizing Kubernetes DNS Using Consul
15585496040000000
CVE-2018-1002100 Kubernetes: the kubectl cp command insecurely handles tar data
15585496040000000
Cycle Pedals Bare Metal Container Orchestrator as Kubernetes Alternative
15585496040000000
Databases on Kubernetes – How to Recover from Failures, Scale Up and Down
15585496040000000
Debug Your Live Apps Running in Azure Virtual Machines and Azure Kubernetes
15585496040000000
Debugging a TCP socket leak in a Kubernetes cluster
15585496040000000
Debugging microservices on Kubernetes with the Conduit service mesh 0.4 release
15585496040000000
Dedicated Game Server Hosting and Scaling for Multiplayer Games on Kubernetes
15585496040000000
Deep Dive into Kubernetes Networking in Azure
15585496040000000
Demo Kubernetes-based polyglot microservices application with Istio service mesh
15585496040000000
Deploy a HIG Stack in Kubernetes for Monitoring
15585496040000000
Deploy and use a multi-framework deep learning platform on Kubernetes
15585496040000000
Deploy InfluxDB and Grafana on Kubernetes to Collect Twitter Stats
15585496040000000
Deploy OpenFaaS and Kubernetes on DigitalOcean with Ansible
15585496040000000
Deploy-to-kube: Deploy your Node.js app to Kubernetes with a single command
15585496040000000
Deploying a Node App to Google Cloud with Kubernetes
15585496040000000
Deploying Java Applications with Docker and Kubernetes
15585496040000000
Deploying Kubernetes applications with Helm
15585496040000000
Deploying Kubernetes with CoreDNS using kubeadm
15585496040000000
Deploying OSv on Kubernetes using virtlet
15585496040000000
Deploying Spark on Kubernetes
15585496040000000
Deploying to Google Kubernetes Engine
15585496040000000