Cloud Native Computing Foundation (CNCF) CNI (Container Networking Interface) 0.7.4 has a network firewall misconfiguration which affects Kubernetes. The CNI 'portmap' plugin, used to set up HostPorts for CNI, inserts rules at the front of the iptables nat chains, which take precedence over the KUBE-SERVICES chain. Because of this, the HostPort/portmap rule could match incoming traffic even if there were better-fitting, more specific service definition rules such as NodePorts later in the chain. The issue is fixed in CNI 0.7.5 and Kubernetes 1.11.9, 1.12.7, 1.13.5, and 1.14.0.

CVE-2019-1002100: In all Kubernetes versions prior to v1.11.8, v1.12.6, and v1.13.4, users who are authorized to make patch requests to the Kubernetes API Server can send a specially crafted patch of type "json-patch" (e.g. `kubectl patch --type json` or `"Content-Type: application/json-patch+json"`) that consumes excessive resources while processing, causing a denial of service on the API Server.

CVE-2018-1999040: An exposure of sensitive information vulnerability exists in Jenkins Kubernetes Plugin 1.10.1 and earlier in KubernetesCloud.java that allows attackers to capture credentials with a known credentials ID stored in Jenkins.

CVE-2018-1002103: In Minikube versions 0.3.0-0.29.0, minikube exposes the Kubernetes Dashboard listening on the VM IP at port 30000. In VM environments where the IP is easy to predict, an attacker can use DNS rebinding to indirectly make requests to the Kubernetes Dashboard and create a new Kubernetes Deployment running arbitrary code. If minikube mount is in use, the attacker could also directly access the host filesystem.

CVE-2018-1000187: An exposure of sensitive information vulnerability exists in Jenkins Kubernetes Plugin 1.7.0 and older in ContainerExecDecorator.java that results in sensitive variables such as passwords being written to logs.

CVE-2017-1002100: Default access permissions for Persistent Volumes (PVs) created by the Kubernetes Azure cloud provider in versions 1.6.0 to 1.6.5 are set to "container", which exposes a URI that can be accessed without authentication on the public internet. Access to the URI string requires privileged access to the Kubernetes cluster or authenticated access to the Azure portal.

CVE-2016-1906: OpenShift allows remote attackers to gain privileges by updating a build configuration that was created with an allowed type to a type that is not allowed.

CVE-2015-5305: Directory traversal vulnerability in Kubernetes, as used in Red Hat OpenShift Enterprise 3.0, allows attackers to write to arbitrary files via a crafted object type name, which is not properly handled before passing it to etcd.
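As a benign illustration of the "json-patch" format that CVE-2019-1002100 concerns (this sketches the request body shape only, not an exploit; the example paths and values are assumptions):

```python
import json

# A JSON Patch (RFC 6902) document is a list of operation objects.
# This is the format accepted by `kubectl patch --type json` and by the
# API server's "application/json-patch+json" content type.
patch = [
    {"op": "add", "path": "/metadata/labels/env", "value": "staging"},
    {"op": "replace", "path": "/spec/replicas", "value": 3},
]

# The API server parses the body and applies each operation in order;
# CVE-2019-1002100 involved patches crafted so that this processing
# consumed excessive resources.
body = json.dumps(patch)
decoded = json.loads(body)
assert decoded[0]["op"] == "add"
print(len(decoded))  # 2 operations
```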
@domhalps @SBUCloud: Pierre Vacherand #CTO @Apalia Switzerland talks about the customer's benefits from a full stack #Automation for #containers I…
@markdeneve Another week, another blog post... Using oc client or kubectl client to generate ad-hoc reports with the go-templat… https://t.co/8NarXnWblJ
@SBUCloud @Social_4U: Would you like to take advantage of policy-based provisioning for persistent volume in ? Join @datamattsson at #…
@Social_4U Would you like to take advantage of policy-based provisioning for persistent volume in ? Join… https://t.co/7lxzk6NFwM
@Crypto___Touch @rhdevelopers: Whether you're still learning or an experienced #Kubernetes application #developer, add this #YAML extensio…
@Fleischmantweet @SBUCloud: Learn How to #Backup & Restore your #RedHat Environment and How #HPE can help to ease the process during #RHSummit…
@polarbear_pc @NadhanEG: #RHSummit Track Guide for Emerging Technology :: #CTO Chris Wright :: #AI #ML on :: #Edge -- where #5G meets #IoT…
@EdgeIotAi @NadhanEG: #RHSummit Track Guide for Emerging Technology :: #CTO Chris Wright :: #AI #ML on :: #Edge -- where #5G meets #IoT…
@tech_halcyon #Docker & #Kubernetes Weekend Classroom Training scheduled on 27th & 28th April @Chennai hurry up to enroll… https://t.co/TUaIjREHa3
@AkshayG196 @rhdevelopers: Whether you're still learning or an experienced #Kubernetes application #developer, add this #YAML extensio…
@smartecocity @openshift: How does the partnership between #RedHat and Atos-managed push innovation and solve smart cities problems? Find o…
@GarryJGray Are you attending #RedHatSummit in Boston? Red Hat will be talking . Please find below website to find ou… https://t.co/BQl21F1sxy
@spbreed Deploying Applications to Multiple #Datacenters https://t.co/wkRIUclo0r via @openshift
@dimarzo_chad @openshift: Planning your Red Hat Summit experience around ? We've made it simple for you to find all the best sessions & acti…
@suravarjjala @couchbase: Virtual event: Join @RedHat this Friday for a @Couchbase on Openshift demo; learn how to run Couchbase deployments natively…
@vtunka @openshift: Planning your Red Hat Summit experience around ? We've made it simple for you to find all the best sessions & acti…
@javapsyche @couchbase: Virtual event: Join @RedHat this Friday for a @Couchbase on Openshift demo; learn how to run Couchbase deployments natively…
@bentonam @couchbase: Virtual event: Join @RedHat this Friday for a @Couchbase on Openshift demo; learn how to run Couchbase deployments natively…
@cbleoschuman @couchbase: Virtual event: Join @RedHat this Friday for a @Couchbase on Openshift demo; learn how to run Couchbase deployments natively…
@anilkumar1129 @couchbase: Virtual event: Join @RedHat this Friday for a @Couchbase on Openshift demo; learn how to run Couchbase deployments natively…
@perrykrug @couchbase: Virtual event: Join @RedHat this Friday for a @Couchbase on Openshift demo; learn how to run Couchbase deployments natively…
@agonyou @couchbase: Virtual event: Join @RedHat this Friday for a @Couchbase on Openshift demo; learn how to run Couchbase deployments natively…
@gummybaren @couchbase: Virtual event: Join @RedHat this Friday for a @Couchbase on Openshift demo; learn how to run Couchbase deployments natively…
I am trying to schedule a pod on my local microk8s (https://github.com/ubuntu/microk8s) cluster. In the events section I see the warning `0/1 nodes are available: 1 node(s) had diskpressure`. How do I check how much space the node has, and how do I set a bigger value?

I was testing my Kubernetes services recently, and I found them very unreliable. Here is the situation (manifest and output excerpts):

apiVersion: extensions/v1beta1
annotations:
spec:
paths:
servicePort: 80

NAME READY STATUS RESTARTS AGE
neo4j ClusterIP None <none> 7474/TCP,6362/TCP 20h

kubectl create -f https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml

What should I do to access the dashboard?

Fri May 04 11:08:06 UTC 2018
nginx configuration (excerpt):

server {
set $proxy_upstream_name "-";
ssl_certificate /ingress-controller/ssl/default-payday.pem;
ssl_stapling_verify on;
port_in_redirect off;
set $service_name "webapp-svc";
client_max_body_size "1m";
proxy_set_header ssl-client-verify "";
proxy_set_header Connection $connection_upgrade;
proxy_set_header X-Forwarded-Port $pass_port;
# Pass the original X-Forwarded-For
proxy_set_header Proxy "";
proxy_read_timeout 60s;
proxy_request_buffering "on";
# In case of errors try the next upstream server before returning an error
metadata:
kubernetes.io/ingress.class: "nginx"
- my.domain.com
http:
serviceName: springboot-service

kind: ConfigMap
namespace: ingress-nginx
force-ssl-redirect: "true"

service.yaml:

name: ingress-nginx
annotations:
type: LoadBalancer
- name: https

401 error for Google authentication with Spring Boot + Spring Security behind an nginx ingress on a Kubernetes cluster.

I've been following the Kubernetes The Hard Way tutorial, but using on-prem hardware for it instead. I also updated to the v1.13.2 release instead of v1.12.0, which the tutorial is based on.

The tutorial does the healthz check by having an nginx instance fronting the API server, which connects to the API server using TLS. The only reason to do this is that the GCP load balancer needs a non-TLS endpoint for the health check. I don't see why using curl directly with TLS shouldn't work. Has something changed in terms of default permissions between the v1.12.0 and v1.13.2 releases?

curl just spits out the usual 401 message.

I'm new to ingress controllers.
kind: Service
spec:
selector:

`kubectl describe ingress` returns:

Default backend: nginx:80 (10.1.0.123:80,10.1.0.124:80,10.1.0.125:80 + 1 more...)
/v nginx:80 (10.1.0.123:80,10.1.0.124:80,10.1.0.125:80 + 1 more...)

When running `curl http://localhost:30874/v/version.html -H "host: foo.bar.com"` I get a 403 error and the ingress-controller pod says:
I created a ServiceEntry object to whitelist `metadata.google.internal`, as follows (I have tried different combinations of this):

name: google-metadata-server
location: MESH_EXTERNAL
protocol: HTTP

[2019-02-07T15:29:22.834Z] "GET /computeMetadata/v1/project/project-id HTTP/1.1" 200 - 0 14 2 1 "-" "Google-HTTP-Java-Client/1.27.0 (gzip)" "513f6e25-57ce-4cf0-a273-d391b3da604b" "metadata.google.internal" "169.254.169.254:80" outbound|80||metadata.google.internal - 169.254.169.254:80 10.16.0.29:58790
[2019-02-07T15:29:47.781Z] "GET /computeMetadata/v1/project/project-id HTTP/1.1" 200 - 0 14 4 3 "-" "Google-HTTP-Java-Client/1.27.0 (gzip)" "7115bf46-e7e9-4b2f-ba37-10cd6b8c9dea" "metadata.google.internal" "169.254.169.254:80" outbound|80||metadata.google.internal - 169.254.169.254:80 10.16.0.29:58876

What I did was create a simple deployment with Istio, in the same cluster and the same namespace, and telnet to the metadata server manually:

Escape character is '^]'.
metadata-flavor: Google
content-length: 22
0.1/
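A complete ServiceEntry for this whitelisting might look like the following sketch, assembled from the fragments above; the `hosts`, `addresses`, and `resolution` values are assumptions, not taken from the original question:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: google-metadata-server
spec:
  hosts:
  - metadata.google.internal   # assumed host, per the access logs above
  addresses:
  - 169.254.169.254/32         # GCE metadata server address
  ports:
  - number: 80
    name: http
    protocol: HTTP
  location: MESH_EXTERNAL
  resolution: NONE             # assumption: resolve via the link-local address
```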
I've been experimenting and building my deployment using minikube, and I have created a yaml file that successfully deploys everything locally on minikube without error. You can see the full deployment yaml file here: https://github.com/mwinteringham/restful-booker-platform/blob/kubes/kubes/deploy.yml

kind: Ingress
nginx.ingress.kubernetes.io/rewrite-target: /
http:
serviceName: rbp-booking
serviceName: rbp-room
serviceName: rbp-search
serviceName: rbp-ui
serviceName: rbp-auth
serviceName: rbp-report
serviceName: rbp-ui

My image is a hello-world project with Node + Express + the Google Cloud client library `@google-cloud/language`.
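The repeated `serviceName: rbp-*` fragments above suggest a path-fanout Ingress. A minimal sketch of that shape follows; the paths and ports are illustrative assumptions, not values from the linked repository:

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: rbp-ingress            # hypothetical name
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - http:
      paths:
      - path: /booking         # assumed path
        backend:
          serviceName: rbp-booking
          servicePort: 80      # assumed port
      - path: /room
        backend:
          serviceName: rbp-room
          servicePort: 80
      - path: /
        backend:
          serviceName: rbp-ui
          servicePort: 80
```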
a5386aa0f20d: Pushing
9dfa40a0da3b: Layer already exists
error parsing HTTP 408 response body (the response body was an HTML error page from Google, ending "…request. That's all we know.")

Do you have some idea?
kind: ConfigMap
proxy-read-timeout: "600"
body-size: "64m"
name: nginx-configuration

The rendered nginx configuration, however, still shows:

client_max_body_size "1m";
client_max_body_size "1m";
client_max_body_size "1m";
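Assembled from the fragments above, the ConfigMap presumably looked something like this; it is a sketch reconstructed from the visible keys, not the author's exact file:

```yaml
kind: ConfigMap
apiVersion: v1
metadata:
  name: nginx-configuration
  namespace: ingress-nginx
data:
  proxy-read-timeout: "600"
  body-size: "64m"
```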
resource "google_container_cluster" "k8s" {
master_auth {

My provider.tf looks like this:

data "vault_generic_secret" "google" {
region = "us-east1"

Now, my issue is that when I do a `terraform apply`, it continues to run over and over until it eventually fails with a 500 error. Here's what the debug logs look like:

2019-01-09T14:35:07.384-0500 [DEBUG] plugin.terraform-provider-google_v1.20.0_x4.exe: Host: container.googleapis.com
2019-01-09T14:35:07.384-0500 [DEBUG] plugin.terraform-provider-google_v1.20.0_x4.exe: Accept-Encoding: gzip
2019-01-09T14:35:07.384-0500 [DEBUG] plugin.terraform-provider-google_v1.20.0_x4.exe: "binaryAuthorization": {
2019-01-09T14:35:07.384-0500 [DEBUG] plugin.terraform-provider-google_v1.20.0_x4.exe: "legacyAbac": {
2019-01-09T14:35:07.384-0500 [DEBUG] plugin.terraform-provider-google_v1.20.0_x4.exe: "password": "****",
2019-01-09T14:35:07.384-0500 [DEBUG] plugin.terraform-provider-google_v1.20.0_x4.exe: "network": "projects/ProjectName/global/networks/default",
2019-01-09T14:35:07.384-0500 [DEBUG] plugin.terraform-provider-google_v1.20.0_x4.exe: "https://www.googleapis.com/auth/logging.write",
2019-01-09T14:35:07.384-0500 [DEBUG] plugin.terraform-provider-google_v1.20.0_x4.exe: "https://www.googleapis.com/auth/trace.append"
2019-01-09T14:35:07.384-0500 [DEBUG] plugin.terraform-provider-google_v1.20.0_x4.exe: }
2019-01-09T14:35:07.521-0500 [DEBUG] plugin.terraform-provider-google_v1.20.0_x4.exe: ---[ RESPONSE ]--------------------------------------
2019-01-09T14:35:07.521-0500 [DEBUG] plugin.terraform-provider-google_v1.20.0_x4.exe: Content-Type: application/json; charset=UTF-8
2019-01-09T14:35:07.521-0500 [DEBUG] plugin.terraform-provider-google_v1.20.0_x4.exe: Vary: X-Origin
2019-01-09T14:35:07.522-0500 [DEBUG] plugin.terraform-provider-google_v1.20.0_x4.exe: X-Xss-Protection: 1; mode=block
2019-01-09T14:35:07.522-0500 [DEBUG] plugin.terraform-provider-google_v1.20.0_x4.exe: "code": 500,
2019-01-09T14:35:07.522-0500 [DEBUG] plugin.terraform-provider-google_v1.20.0_x4.exe: "message": "Internal error encountered.",
2019-01-09T14:35:07.522-0500 [DEBUG] plugin.terraform-provider-google_v1.20.0_x4.exe: ],

The real error looks like this:

HTTP/2.0 500 Internal Server Error
Date: Wed, 09 Jan 2019 19:35:07 GMT
Vary: Referer
"error": {
"message": "Internal error encountered.",
#---
name: myservice
ports:
# Port to forward to inside the pod
metadata:
template:
spec:
imagePullPolicy: Always
- name: regcred
kind: Ingress
nginx.ingress.kubernetes.io/ssl-redirect: "false"
paths:
servicePort: 80

Does anyone have an idea why this happened, or did I do something wrong in the configuration? Thank you in advance!

I'm on Google Kubernetes Cloud and the whole uploads folder is mounted to Google Cloud Storage using GCSFuse. I am using an nginx server (on Alpine).
No updates/changes have been made to the cluster or pods. All pods are green and working and all hosts are healthy. Everything had been working for more than two weeks until this morning.

I am not able to access the Jenkins X dashboard.

Note: I have also tried restarting the minikube cluster.

I'm following this guide (http://kubernetes.io/docs/getting-started-guides/aws/) to set up Kubernetes on an Ubuntu 14.04 image on AWS.

aws configure # enter credentials, etc.
export KUBE_AWS_ZONE=us-east-1b
export AWS_S3_BUCKET=my.s3.bucket.kube
curl -sS https://get.k8s.io | bash

Downloading kubernetes release v1.2.4 to /home/ubuntu/kubernetes.tar.gz
HTTP request sent, awaiting response... 200 OK
2016-05-21 17:01:29 (58.1 MB/s) - 'kubernetes.tar.gz' saved [496696744/496696744]
... calling verify-prereqs
+++ Staging server tars to S3 Storage: my.s3.bucket.kube/devel

I tried editing cluster/aws/util.sh to print out s3_bucket_location (following advice from https://stackoverflow.com/questions/35664787/nosuchbucket-error-when-running-kubernetes-on-aws), and I get an empty string. I'm guessing that's why it fails?
Following the examples from the dask-kubernetes docs, I got a kube cluster running on AWS and (on a separate AWS machine) started a notebook with the local dask.distributed scheduler. The scheduler launches a number of workers on the kube cluster, but it cannot connect to those workers because the workers are on a different network: the internal kube network.

- kube cluster EC2 instances are also on 192.168.0.0/24

The workers are able to connect to the scheduler, but in the scheduler I get errors of the form:

I'm not looking for a list of possible things I could do; I'm looking for the recommended way of setting this up, specifically in relation to dask.distributed.

After the test completes, I use drone again to build & push several docker images of about ~40 MB each to us.gcr.io.

time="2018-03-19T03:31:17.208009069Z" level=error msg="Upload failed, retrying: net/http: HTTP/1.x transport connection broken: write tcp w.x.y.z:39662->z.y.x.w:443: write: broken pipe"
time="2018-03-19T03:31:23.432621075Z" level=error msg="Upload failed, retrying: unexpected EOF"

Here are the docker commands being run. Obviously the sensitive data has been omitted.

Is there something I could do with the docker daemon or Kubernetes network settings to mitigate this? At the very least I want to understand why this is happening.
This doesn't even require Kubernetes to happen!

Is there no way to set the name of a chart you are targeting with `upgrade`? Is this only possible for `install`? (`helm upgrade --name` results in "Error: unknown flag: --name".)

I set up a 3-node Kubernetes (v1.9.3) cluster on Ubuntu 16.04.

etcd-master 1/1 Running 0 3m
kube-flannel-ds-wbx97 1/1 Running 0 1m

But the problem is that kube-dns seems to have been assigned the wrong service endpoint address, as can be seen with the following commands:

root@master:~# kubectl describe service kube-dns -n kube-system
kubernetes.io/cluster-service=true
Type: ClusterIP
Endpoints: 172.17.0.2:53
Session Affinity: None

The effect of the current setup is that none of the pods have functioning DNS, while IP communication is OK.
producer Deployment/producer <unknown>/1% 1 3 1 42m

kind: Deployment
kompose.version: 1.1.0 (36652f6)
name: producer
template:
io.kompose.service: producer
name: producer
- name: mongoUrl
- name: mongoPort
cpu: 10m

Warning FailedGetResourceMetric 4m (x91 over 49m) horizontal-pod-autoscaler missing request for cpu on container producer in pod default/producer-c7dd566f6-69gbq

{"log":"I0912 10:36:40.806224 1 event.go:218] Event(v1.ObjectReference{Kind:\"HorizontalPodAutoscaler\", Namespace:\"default\", Name:\"producer\", UID:\"135d0ebc-b671-11e8-a19f-080027646864\", APIVersion:\"autoscaling/v2beta1\", ResourceVersion:\"71101\", FieldPath:\"\"}): type: 'Warning' reason: 'FailedGetResourceMetric' missing request for cpu on container producer in pod default/producer-c7dd566f6-w8zcd\n","stream":"stderr","time":"2018-09-12T10:36:40.80645916Z"}

NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
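The `missing request for cpu` warning above occurs because the HPA computes utilization as a percentage of the container's declared CPU request; without one, there is nothing to divide by. A hedged sketch of the fix, with the container image and labels assumed rather than taken from the original manifest:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: producer
spec:
  selector:
    matchLabels:
      io.kompose.service: producer
  template:
    metadata:
      labels:
        io.kompose.service: producer
    spec:
      containers:
      - name: producer
        image: producer:latest   # assumed image
        resources:
          requests:
            cpu: 10m             # the HPA needs this to compute CPU%
```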
myconfig.yaml:

name: counter
image: busybox

then

The pod appears to be running fine:

Node: ip-10-0-0-43.ec2.internal/10.0.0.43
Status: Running
Container ID: docker://d2dfdb8644b5a6488d9d324c8c8c2d4637a460693012f35a14cfa135ab628303
Host Port: <none>
i=0; while true; do echo "$i: $(date)"; i=$((i+1)); sleep 1; done
Restart Count: 0
Conditions:
PodScheduled True
SecretName: default-token-r6tr6
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
Normal Scheduled 16m default-scheduler Successfully assigned counter to ip-10-0-0-43.ec2.internal
Normal Created 16m kubelet, ip-10-0-0-43.ec2.internal Created container

kubectl logs counter --follow=true
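The myconfig.yaml fragments above (`name: counter`, `image: busybox`, and the counting loop in the describe output) match the shape of the standard busybox logging example; a reconstructed sketch, with the overall manifest structure being an assumption:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: counter
spec:
  containers:
  - name: counter                # assumed container name
    image: busybox
    args:
    - /bin/sh
    - -c
    # the command shown in the pod's describe output above
    - 'i=0; while true; do echo "$i: $(date)"; i=$((i+1)); sleep 1; done'
```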
And I get an error:

$ kubectl top nodes
ip-10-43-0-12 362m 18% 2030Mi 55%

OK, what should I do? Give permissions to the system:node group, I suppose.

OK, inspecting the cluster role:

Annotations: rbac.authorization.kubernetes.io/autoupdate=true
endpoints [] [] [get]
nodes/status [] [] [patch update]
pods/eviction [] [] [create]
subjectaccessreviews.authorization.k8s.io [] [] [create]

kubectl patch clusterrole system:node --type='json' -p='[{"op": "add", "path": "/rules/0", "value":{"apiGroups": [""], "resources": ["services/proxy"], "verbs": ["get", "list", "watch"]}}]'

Name: system:node
Resources Non-Resource URLs Resource Names Verbs

The only way that it works is:

kind: ClusterRole
name: top-nodes-watcher
verbs: ["get", "watch", "list"]
metadata:
name: system:node:ip-10-43-0-13
name: top-nodes-watcher

More details:
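Assembled from the top-nodes-watcher fragments above, the working role and binding plausibly looked like the following; the `resources` list is an assumption based on what `kubectl top nodes` reads, not a value visible in the fragments:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: top-nodes-watcher
rules:
- apiGroups: [""]
  resources: ["nodes"]           # assumed resource for `kubectl top nodes`
  verbs: ["get", "watch", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: top-nodes-watcher
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: top-nodes-watcher
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: system:node:ip-10-43-0-13
```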
`the node was low on resource imagefs`

Node: ip-192-168-66-176.eu-west-1.compute.internal/
Annotations: <none>
IP:
Port: <none>
-c
cpu: 1
memory: 512Mi
DOCKER_CONFIG: /home/jenkins/.docker/
_JAVA_OPTIONS: -XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap -Dsun.zip.disableMemoryMapping=true -XX:+UseParallelGC -XX:MinHeapFreeRatio=5 -XX:MaxHeapFreeRatio=10 -XX:GCTimeRatio=4 -XX:AdaptiveSizePolicyWeight=90 -Xms10m -Xmx192m
JENKINS_URL: http://jenkins:8080
/home/jenkins/.docker from volume-2 (rw)
/var/run/secrets/kubernetes.io/serviceaccount from jenkins-token-smvvp (ro)
Host Port: <none>
Requests:
JENKINS_SECRET: 131c407141521c0842f62a69004df926be6cb531f9318edf0885aeb96b0662b4
GIT_COMMITTER_EMAIL: jenkins-x@googlegroups.com
JENKINS_NAME: maven-96wmn
Mounts:
/root/.m2 from volume-1 (rw)
volume-0:
volume-2:
volume-1:
workspace-volume:
Type: Secret (a volume populated by a Secret)
Type: Secret (a volume populated by a Secret)
Node-Selectors: <none>
Type Reason Age From Message
Normal SuccessfulMountVolume 7m kubelet, ip-192-168-66-176.eu-west-1.compute.internal MountVolume.SetUp succeeded for volume "volume-1"
Normal Pulled 7m kubelet, ip-192-168-66-176.eu-west-1.compute.internal Container image "jenkinsxio/builder-maven:0.0.516" already present on machine
Normal Created 7m kubelet, ip-192-168-66-176.eu-west-1.compute.internal Created container
Normal Killing 5m kubelet, ip-192-168-66-176.eu-west-1.compute.internal Killing container with id docker://maven:Need to kill Pod
<pre><code>kubectl delete gateway istio-autogenerated-k8s-ingress -n istio-system

<p>Is it related and if so, how can I set them up again?

This is the federation kubeconfig file:</p>

certificate-authority-data: REDACTED

certificate-authority-data: REDACTED

insecure-skip-tls-verify: true

- context:

name: default-context

name: federation

user: kubectl

namespace: default

kind: Config

user:

- name: federation-basic-auth

- name: kubectl

- name: kubernetes-admins1

<p>I run this command: kubefed join site-1 --host-cluster-context=default-context --cluster-context=kubernetes-admin-s1 --insecure-skip-tls-verify=true. The cluster is created, but with offline status; it is not reachable;

Name: site-1

federation.kubernetes.io/service-account-name=site-1-default-context

Creation Timestamp: 2018-04-22T17:37:40Z

Spec:

Client CIDR: 0.0.0.0/0

Last Probe Time: 2018-04-22T18:09:43Z

Status: True

<code>$ which ssh-agent || ( apt-get update -y &amp;&amp; apt-get install openssh-client -y )

$ ssh-add &lt;(echo "$SSH_PRIVATE_KEY")

<p>This will enable my service to (for example) use the Kubernetes client API.</p>

<p>Logging section in appsettings.json &amp; appsettings.Development.json</p>

"LogLevel": {

"Console": {

"Microsoft": "Information"

return new WebHostBuilder()

config.AddJsonFile("appsettings.json", optional: true, reloadOnChange: true)

if (appAssembly != null)

config.AddCommandLine(args);

logging.AddDebug();

.Build();

</code></pre><p>I want to make a call to the Kubernetes API from a .NET Core app outside the cluster.</p>

</code></pre>

- cluster:

<p>How can I validate the server certificate using that <strong>certificate-authority-data</strong> in my application?</p><p><strong>PLEASE READ UPDATE 2</strong></p>
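The <strong>certificate-authority-data</strong> field is just the cluster CA bundle, base64-encoded. As a hedged sketch of the first step (shown in Python rather than the asker's C#, and assuming the field holds PEM data):

```python
import base64

def decode_ca_data(ca_data_b64: str) -> bytes:
    """Decode a kubeconfig's base64 certificate-authority-data into PEM bytes."""
    pem = base64.b64decode(ca_data_b64)
    if not pem.lstrip().startswith(b"-----BEGIN CERTIFICATE-----"):
        raise ValueError("decoded data is not a PEM certificate bundle")
    return pem

# The PEM bytes can then be trusted explicitly, e.g. in Python:
#   ctx = ssl.create_default_context(cadata=pem.decode())
# instead of setting insecure-skip-tls-verify.
```

The same idea applies in any language: decode once, then register the PEM as an extra trusted root for the HTTPS client that talks to the API server.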

<p><strong>C# Code:</strong></p>

cfg.AddProfile&lt;AiElementProfile&gt;();

ConsumerGroupName,

// Registers the Event Processor Host and starts receiving messages

</code></pre>

metadata:

matchLabels:

metadata:

containers:

- containerPort: 80

<p><strong>kubectl get pods:</strong></p>

</code></pre>

Node: aks-nodepool1-81522366-0/10.240.0.4

Annotations: &lt;none&gt;

Containers:

Image ID: docker-pullable://vncont.azurecr.io/historysvc@sha256:636d81435bd421ec92a0b079c3841cbeb3ad410509a6e37b1ec673dc4ab8a444

Exit Code: 0

Reason: Completed

Ready: False

/var/run/secrets/kubernetes.io/serviceaccount from default-token-mt8mm (ro)

Ready False

Type: Secret (a volume populated by a Secret)

Node-Selectors: &lt;none&gt;

Type Reason Age From Message

Normal Created 7s (x5 over 1m) kubelet, aks-nodepool1-81522366-0 Created container

<p>What am I missing?</p>

<pre><code>Name: historysvc-deployment-558fc5649f-jgjvq

Labels: app=historysvc

IP: 10.244.0.12

Container ID: docker://ccf83bce216276450ed79d67fb4f8a66daa54cd424461762478ec62f7e592e30

State: Waiting

Exit Code: 0

Restart Count: 277

Conditions:

PodScheduled True

SecretName: default-token-mt8mm

Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s

Warning BackOff 2m (x6238 over 23h) kubelet, aks-nodepool1-81522366-0 Back-off restarting failed container

<pre><code>docker run &lt;image&gt;

<pre><code>docker run -it &lt;image&gt;

<p>$ kubectl get hpa</p>

</code></pre>

Server Version: version.Info{Major:"1", Minor:"11+", GitVersion:"v1.11.5-gke.4", GitCommit:"0c81dc1e8c26fa2c47e50072dc7f98923cb2109c", GitTreeState:"clean", BuildDate:"2018-12-07T00:22:06Z", GoVersion:"go1.10.3b4", Compiler:"gc", Platform:"linux/amd64"}

</code></pre>

<ol>

<p>While attempting to register the Microsoft.Compute provider in order

</blockquote>

<p><code>az provider register -n Microsoft.Compute</code></p>

</code></pre>

</code></pre>

<li>Microsoft.Storage</li>

<p><a href="https://i.stack.imgur.com/MN156.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/MN156.png" alt="enter image description here"></a></p>

<p>Two days ago I performed the identical operations on a client's account successfully and everything finished within 5 minutes. I have tried the following options to solve the issue (thus far with no impact):</p>

This is where my situation diverges from the above question. When unregistering the component with:

</ol>

<p>Will post back with my findings.</p><p>I deploy apps to <em>Kubernetes</em> running on <em>Google Cloud</em> from CI. CI makes use of the <em>kubectl</em> config, which contains auth information (either directly in VCS or templated from the env vars during build)</p>

<p><code>gcloud container clusters get-credentials &lt;cluster-name&gt;</code></p>

</blockquote>

<li><a href="https://stackoverflow.com/questions/48761952/cant-contact-our-azure-aks-kube-tls-handshake-timeout">Can&#39;t contact our Azure-AKS kube - TLS handshake timeout</a></li>

<ol>

<li><a href="https://github.com/Azure/AKS/issues/177" rel="noreferrer">https://github.com/Azure/AKS/issues/177</a></li>

<ol>

<blockquote>

<p>You can also try scaling your Cluster (assuming that doesn't break your app).</p>

<li><a href="https://github.com/Azure/AKS/tree/master/annoucements" rel="noreferrer">https://github.com/Azure/AKS/tree/master/annoucements</a></li>

<p>The first piece I haven't seen mentioned elsewhere is Resource usage on the nodes / vms / instances that are being impacted by the above Kubectl 'Unable to connect to the server: net/http: TLS handshake timeout' issue. </p>

<p>The drop in utilization and network io correlates strongly with both the increase in disk utilization AND the time period we began experiencing the issue. </p>

<p><a href="https://i.stack.imgur.com/LVma8.png" rel="noreferrer"><img src="https://i.stack.imgur.com/LVma8.png" alt="enter image description here"></a></p>

<p>Zimmergren over on GitHub indicates that he has fewer issues with larger instances than he did running bare-bones smaller nodes. This makes sense to me and could indicate that the way the AKS servers divvy up the workload (see next section) could be based on the size of the instances.</p>

<ol>

<p><em>An AKS server responsible for more smaller Clusters may possibly get hit more often?</em></p>

<p>The fact that users (Zimmergren etc above) seem to feel that the Node size impacts the likelihood that this issue will impact you also seems to indicate that node size may relate to the way the sub-region responsibilities are assigned to the sub-regional AKS management servers. </p>

<h3>Staging Cluster Utilization</h3>

<p>Both of our Clusters are running identical ingresses, services, pods, containers so it is also unlikely that anything a user is doing causes this problem to crop up.</p>

<p>In an emergency (ie your production site... like ours... needs to be managed) you can <strong><em>PROBABLY</em></strong> just re-create until you get a working cluster that happens to land on a different AKS management server instance (one that is not impacted) but be aware that this may not happen on your first attempt — AKS cluster re-creation is not exactly instant.</p>

<blockquote>

<ol>

<h2>Why no GKE?</h2>

<li><a href="https://stackoverflow.com/questions/47481022/tls-handshake-timeout-with-kubernetes-in-gke?rq=1">TLS handshake timeout with kubernetes in GKE</a></li>

// initialized to an unstable value to ensure meaning isn't attributed to the suffix.

<p>Is there an advantage of doing this over another method of getting a random number under 12345?</p><p>'unstable value' in reference to time % 12345</p><p>kube version: 1.22</p>
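The two approaches being contrasted can be sketched side by side; a minimal, hypothetical Python comparison (the comment above is not from any particular codebase named here):

```python
import random
import time

def suffix_from_time(mod: int = 12345) -> int:
    # "Unstable" only in the sense that it changes between calls: the value
    # is fully determined by the clock, is not uniformly distributed, and
    # two calls in the same tick collide.
    return time.time_ns() % mod

def suffix_from_rng(mod: int = 12345) -> int:
    # Uniform over [0, mod) and independent of the clock.
    return random.randrange(mod)
```

The time-based form needs no RNG state, which is often its only advantage; if the suffix must be unpredictable or collision-resistant, the RNG form is the safer choice.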

<li>In minion B, <code>telnet $A_IP 30003</code> succeeds (or <code>nc $A_IP 30003</code>)</li>

<p>So I think the iptables rules should be cleaned up when kube-proxy exits abnormally? 'systemctl stop kube-proxy' will not clean the iptables.</p><p>I am getting some issues while creating a Kubernetes cluster on a Google Cloud instance.</p>

<p>Please see error below from the console:</p>

</code></pre>

<p>At first I was getting CORS errors but fixed that by adding lines 48-52 and creating a new service that serves HTTP1.</p>

kind: Ingress

kubernetes.io/ingress.class: nginx

nginx.ingress.kubernetes.io/enable-access-log: "false"

nginx.ingress.kubernetes.io/ssl-redirect: "true"

http:

servicePort: 5601

- kibana.test.com

<pre><code>server {

set $proxy_upstream_name "-";

ssl_certificate /etc/ingress-controller/ssl/monitoring-kb-kibana-tls.pem;

ssl_stapling_verify on;

# therefore we have to explicitly set this variable again so that when the parent request

proxy_set_header Content-Length "";

proxy_set_header X-Sent-From "nginx-ingress-controller";

proxy_buffering off;

proxy_http_version 1.1;

# Pass the extracted client certificate to the auth provider

set $namespace "monitoring";

set $location_path "/";

balancer.log()

access_log off;

{"proxy_protocol_addr": "","remote_addr": "xxx.xxx.xxx.xx", "proxy_add_x_forwarded_for": "xxx.xxx.xxx.xx, xxx.xxx.xxx.xx", "remote_user": "", "time_local": "21/Nov/2018:09:53:40 +0000", "request" : "GET /app/kibana HTTP/1.1", "status": "202", "body_bytes_sent": "0", "http_referer": "https://kibana.test.com/", "http_user_agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_13_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/70.0.3538.102 Safari/537.36", "request_length" : "0", "request_time": "0.001", "proxy_upstream_name": "monitoring-kb-kibana-5601", "upstream_addr": "xxx.xxx.xxx.xx:4180", "upstream_response_length": "0", "upstream_response_time": "0.001", "upstream_status": "202", "request_body": "", "http_authorization": ""}

{"proxy_protocol_addr": "","remote_addr": "xxx.xxx.xxx.xx", "proxy_add_x_forwarded_for": "xxx.xxx.xxx.xx, xxx.xxx.xxx.xx", "remote_user": "", "time_local": "21/Nov/2018:09:53:43 +0000", "request" : "GET /plugins/kibana/assets/settings.svg HTTP/1.1", "status": "202", "body_bytes_sent": "0", "http_referer": "https://kibana.test.com/app/kibana", "http_user_agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_13_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/70.0.3538.102 Safari/537.36", "request_length" : "0", "request_time": "0.029", "proxy_upstream_name": "monitoring-kb-kibana-5601", "upstream_addr": "xxx.xxx.xxx.xx:4180", "upstream_response_length": "0", "upstream_response_time": "0.030", "upstream_status": "202", "request_body": "", "http_authorization": ""}

{"proxy_protocol_addr": "","remote_addr": "xxx.xxx.xxx.xx", "proxy_add_x_forwarded_for": "xxx.xxx.xxx.xx, xxx.xxx.xxx.xx", "remote_user": "", "time_local": "21/Nov/2018:09:53:45 +0000", "request" : "GET /ui/fonts/open_sans/open_sans_v15_latin_600.woff2 HTTP/1.1", "status": "202", "body_bytes_sent": "0", "http_referer": "https://kibana.test.com/app/kibana", "http_user_agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_13_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/70.0.3538.102 Safari/537.36", "request_length" : "0", "request_time": "0.002", "proxy_upstream_name": "monitoring-kb-kibana-5601", "upstream_addr": "xxx.xxx.xxx.xx:4180", "upstream_response_length": "0", "upstream_response_time": "0.002", "upstream_status": "202", "request_body": "", "http_authorization": ""}
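Since the access-log lines above are JSON objects, they can be inspected mechanically rather than by eye; a small sketch (the sample line below is abbreviated and uses placeholder addresses):

```python
import json

def upstream_of(line: str):
    """Extract the upstream address and status from one JSON access-log line."""
    rec = json.loads(line)
    return rec["upstream_addr"], rec["upstream_status"]

# Abbreviated, fictional sample in the same shape as the log lines above.
sample = ('{"remote_addr": "203.0.113.7", "request": "GET /app/kibana HTTP/1.1",'
          ' "status": "202", "upstream_addr": "203.0.113.9:4180",'
          ' "upstream_status": "202"}')
```

The useful signal in the logs above is that every request comes back with upstream_status 202 from port 4180, i.e. something in front of Kibana (an auth proxy, most likely) is answering, not Kibana itself.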

<p>Is there some config that needs to be applied(via Halyard or otherwise) in order to make the <code>Bake</code> Stage Type available? I'm running Spinnaker version 1.5.3.</p>"Bake" Stage Type not showing in Stage Types Dropdown in Spinnaker Pipeline<p>I am new to Prometheus and relatively new to kubernetes so bear with me, please. I am trying to test Prometheus out and have tried two different approaches. </p>

ADD prometheus.yml /etc/prometheus/

scrape_interval: 15s

- job_name: 'kubernetes-apiservers'

bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token

</code></pre>

Failed to list *v1.Endpoints: Get http://localhost:443/api/v1/pods?limit=500&amp;resourceVersion=0: dial tcp 127.0.0.1:443: connect: connection refused"
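One hedged guess at the connection-refused error above: inside a pod, the API server is not on localhost. A minimal Python sketch of how the in-cluster address is normally derived (the env var names are the standard ones the kubelet injects into every pod):

```python
import os

def in_cluster_apiserver() -> str:
    # The kubelet injects these variables into every pod; defaulting to
    # localhost instead is what produces "dial tcp 127.0.0.1:443:
    # connect: connection refused" when no apiserver runs on the node.
    host = os.environ.get("KUBERNETES_SERVICE_HOST", "kubernetes.default.svc")
    port = os.environ.get("KUBERNETES_SERVICE_PORT", "443")
    return f"https://{host}:{port}"
```

Prometheus's `kubernetes_sd_configs` does the equivalent internally when given the in-cluster API server address rather than localhost.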

<pre><code>kind: Deployment

spec:

template:

spec:

# args:

- name: webui

</ol>

<pre><code>*curl http://localhost:5002/analyst_rating -v

&gt; Host: localhost:5002

* HTTP 1.0, assume close after body

&lt; Server: Werkzeug/0.12.2 Python/2.7.12

* Closing connection 0*

* Trying 184.173.44.62...

&gt; Host: 184.173.44.62:30484

* Closing connection 0

I am able to make connections but not able to receive any response.

Name: sunlife-analystrating-deployment

Annotations: deployment.kubernetes.io/revision=1

MinReadySeconds: 0

Containers:

Environment: &lt;none&gt;

Type Status Reason

Events: &lt;none&gt;

Name: kubernetes

Annotations: &lt;none&gt;

Port: https 443/TCP

Events: &lt;none&gt;

Annotations: &lt;none&gt;

Port: &lt;unset&gt; 5002/TCP

Session Affinity: None

<p>Following is the code snippet, that I have used to expose the rest client inside container</p>

response="Hello World"

app.run(port='5002')
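One frequent culprit for "connects but no response" with app.run(port='5002') is the bind address: Flask's development server defaults to 127.0.0.1, which accepts no traffic arriving through the pod's network interface. A stdlib-only sketch of the same service (hypothetical, not the asker's code) with an explicit bind address:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer
import threading

class Hello(BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"Hello World"
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, fmt, *args):
        pass  # keep container logs quiet for this sketch

def serve(host="0.0.0.0", port=5002):
    # Binding 0.0.0.0 is what matters in a container: it listens on all
    # interfaces, so the NodePort's forwarded traffic actually reaches it.
    srv = HTTPServer((host, port), Hello)
    threading.Thread(target=srv.serve_forever, daemon=True).start()
    return srv
```

For Flask the equivalent is `app.run(host='0.0.0.0', port=5002)`.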

metadata:

spec:

targetPort: 80

</code></pre>

Warning CreatingLoadBalancerFailed 52s (x2 over 52s) service-controller Error cr

401 Code="InvalidAuthenticationTokenTenant" Message="The access token is from the wr

iption is transferred to another tenant there is no impact to the services, but info

<p>I tried deleting the service, and now, on every service on the cluster that I run <code>kubectl describe svc &lt;svc-name&gt;</code> I'm getting the following message in the <code>Events</code> section: </p>

</blockquote>

<li>deploy mysql using helm</li>

</ol>

</blockquote>

mysqlDatabase=xxx,persistence.size=50Gi \

<p>I am deploying my wordpress app and nginx containers in one pod, for mutual persistent volume use. The deployment yaml looks like this:</p>

metadata:

spec:

app: wordpress

app: wordpress

name: nginx

- name: DB_HOST

- name: DB_PASSWORD

key: password

volumeMounts:

mountPath: "/etc/nginx/conf.d"

- name: MY_DB_HOST

- name: MY_DB_PASSWORD

key: password

value: "https://example.com"

value: "true"

- name: wordpress-persistent-storage

persistentVolumeClaim:

name: wp-config

imagePullSecrets:

<p>For reference, my config file is as follows:

server_name $SITE_URL;

error_log /var/log/nginx/error.log;

rewrite .* /index.php;

fastcgi_pass wordpress:9000;

fastcgi_param PATH_INFO $fastcgi_path_info;

<p>Some extra info in case it helps:</p>

<pre><code> - port: 3306

<pre><code> - name: wordpress

targetPort: 80

targetPort: 443

<p>I have added the mysql password as a secret with the proper yaml file and base64 value. I have also tried using the command line instead for creating the secret, and both don't change anything in the results. </p>
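For reference, the base64 value in a Secret manifest must encode the raw password bytes only; a one-line sketch plus the classic pitfall:

```python
import base64

def secret_data(value: str) -> str:
    # Equivalent to `echo -n 'value' | base64`; without `-n` the shell
    # appends a newline that ends up inside the decoded password, a
    # common cause of mysterious auth failures.
    return base64.b64encode(value.encode()).decode()
```

Decoding the value already in the manifest (`kubectl get secret ... -o yaml`, then `base64 -d`) is a quick way to confirm no stray newline crept in.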

<p>MySQL init process in progress... <br>

Warning: Unable to load '/usr/share/zoneinfo/posix/Factory' as time zone. Skipping it.<br>

MySQL init process done. Ready for start up.</p>

<p>[11:15:03 +0000] "GET /robots.txt HTTP/1.1" 500 262 "-" "Mozilla/5.0 (compatible; Googlebot/2.1; +<a href="http://www.google.com/bot.html" rel="nofollow noreferrer">http://www.google.com/bot.html</a>)"<br>

</blockquote>

127.0.0.1 - 16:04:42 +0000 "GET /index.php" 200</p>

</code></pre>

* Kubernetes 1.9.3

<p><strong>Incident instance timeline</strong></p>

<li>Another pod created by a daemon set (fluentd pod) scheduled on the same node as the above one had a slightly different error: network is not ready: [runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized]

Mar 16 08:29:54 ip-172-20-85-48 kubelet[1346]: I0316 08:29:54.512797 1346 reconciler.go:217] operationExecutor.VerifyControllerAttachedVolume started for volume "authservice-ca" (UniqueName: "kubernetes.io/secret/8ead64a3-28f3-11e8-b520-025c267c6ea8-authservice-ca") pod "broker-0" (UID: "8ead64a3-28f3-11e8-b520-025c267c6ea8")

Mar 16 08:29:54 ip-172-20-85-48 kubelet[1346]: I0316 08:29:54.512980 1346 reconciler.go:217] operationExecutor.VerifyControllerAttachedVolume started for volume "broker-prometheus-config" (UniqueName: "kubernetes.io/configmap/8ead64a3-28f3-11e8-b520-025c267c6ea8-broker-prometheus-config") pod "broker-0" (UID: "8ead64a3-28f3-11e8-b520-025c267c6ea8")

Mar 16 08:29:54 ip-172-20-85-48 kubelet[1346]: I0316 08:29:54.613544 1346 reconciler.go:262] operationExecutor.MountVolume started for volume "default-token-vrhqr" (UniqueName: "kubernetes.io/secret/8ead64a3-28f3-11e8-b520-025c267c6ea8-default-token-vrhqr") pod "broker-0" (UID: "8ead64a3-28f3-11e8-b520-025c267c6ea8")

Mar 16 08:29:54 ip-172-20-85-48 kubelet[1346]: I0316 08:29:54.616720 1346 operation_generator.go:522] MountVolume.SetUp succeeded for volume "broker-prometheus-config" (UniqueName: "kubernetes.io/configmap/8ead64a3-28f3-11e8-b520-025c267c6ea8-broker-prometheus-config") pod "broker-0" (UID: "8ead64a3-28f3-11e8-b520-025c267c6ea8")

Mar 16 08:29:54 ip-172-20-85-48 kubelet[1346]: I0316 08:29:54.626604 1346 operation_generator.go:522] MountVolume.SetUp succeeded for volume "default-token-vrhqr" (UniqueName: "kubernetes.io/secret/8ead64a3-28f3-11e8-b520-025c267c6ea8-default-token-vrhqr") pod "broker-0" (UID: "8ead64a3-28f3-11e8-b520-025c267c6ea8")

Mar 16 08:29:56 ip-172-20-85-48 kubelet[1346]: E0316 08:29:56.018024 1346 nestedpendingoperations.go:263] Operation for "\"kubernetes.io/aws-ebs/aws://eu-central-1b/vol-04145a1c9d1a26280\"" failed. No retries permitted until 2018-03-16 08:29:58.017982038 +0000 UTC m=+36.870107444 (durationBeforeRetry 2s). Error: "Volume has not been added to the list of VolumesInUse in the node's volume status for volume \"pvc-b673d6da-26e3-11e8-aa99-02cd3728faaa\" (UniqueName: \"kubernetes.io/aws-ebs/aws://eu-central-1b/vol-04145a1c9d1a26280\") pod \"broker-0\" (UID: \"8ead64a3-28f3-11e8-b520-025c267c6ea8\") "

Mar 16 08:30:02 ip-172-20-85-48 kubelet[1346]: E0316 08:30:02.034045 1346 nestedpendingoperations.go:263] Operation for "\"kubernetes.io/aws-ebs/aws://eu-central-1b/vol-04145a1c9d1a26280\"" failed. No retries permitted until 2018-03-16 08:30:10.034017896 +0000 UTC m=+48.886143256 (durationBeforeRetry 8s). Error: "Volume has not been added to the list of VolumesInUse in the node's volume status for volume \"pvc-b673d6da-26e3-11e8-aa99-02cd3728faaa\" (UniqueName: \"kubernetes.io/aws-ebs/aws://eu-central-1b/vol-04145a1c9d1a26280\") pod \"broker-0\" (UID: \"8ead64a3-28f3-11e8-b520-025c267c6ea8\") "

Mar 16 08:30:10 ip-172-20-85-48 kubelet[1346]: I0316 08:30:10.156188 1346 operation_generator.go:446] MountVolume.WaitForAttach entering for volume "pvc-b673d6da-26e3-11e8-aa99-02cd3728faaa" (UniqueName: "kubernetes.io/aws-ebs/aws://eu-central-1b/vol-04145a1c9d1a26280") pod "broker-0" (UID: "8ead64a3-28f3-11e8-b520-025c267c6ea8") DevicePath "/dev/xvdcr"

Mar 16 08:30:12 ip-172-20-85-48 kubelet[1346]: I0316 08:30:12.672408 1346 kuberuntime_manager.go:385] No sandbox for pod "broker-0(8ead64a3-28f3-11e8-b520-025c267c6ea8)" can be found. Need to start a new one

Mar 16 08:34:12 ip-172-20-85-48 kubelet[1346]: E0316 08:34:12.673020 1346 pod_workers.go:186] Error syncing pod 8ead64a3-28f3-11e8-b520-025c267c6ea8 ("broker-0(8ead64a3-28f3-11e8-b520-025c267c6ea8)"), skipping: failed to "CreatePodSandbox" for "broker-0(8ead64a3-28f3-11e8-b520-025c267c6ea8)" with CreatePodSandboxError: "CreatePodSandbox for pod \"broker-0(8ead64a3-28f3-11e8-b520-025c267c6ea8)\" failed: rpc error: code = DeadlineExceeded desc = context deadline exceeded"

Mar 16 08:34:14 ip-172-20-85-48 kubelet[1346]: I0316 08:34:14.005589 1346 kubelet.go:1880] SyncLoop (PLEG): "broker-0(8ead64a3-28f3-11e8-b520-025c267c6ea8)", event: &amp;pleg.PodLifecycleEvent{ID:"8ead64a3-28f3-11e8-b520-025c267c6ea8", Type:"ContainerDied", Data:"b08ea5b45ce3ba467856952ad6cc095f4b796673d7dfbf3b9c4029b6b1a75a1b"}

<p>Does anyone have any idea what the problem here is and what would be a remedy?</p><p>I'm attempting to deploy a Docker container to a minikube instance running locally, and getting this error when it attempts to pull(?) the image. The image exists in a self-hosted Docker registry. The image I'm testing with is built with the following Dockerfile:</p>

<p>I'm using the fabric8io <code>kubernetes-client</code> library to create a deployment like so:</p>

.withNewMetadata()

.withNewSpec()

.addToLabels("app", name)

// "regsecret" is the kubectl-created docker secret

.withName(name)

.endTemplate()

<p>This is all running on Arch Linux, kernel <code>Linux 4.10.9-1-ARCH x86_64 GNU/Linux</code>. Using <code>minikube 0.18.0-1</code> and <code>kubectl-bin 1.6.1-1</code> from the AUR, <code>docker 1:17.04.0-1</code> from the community repositories, and the docker <code>registry</code> container at <code>latest</code> (<code>2.6.1</code> as of writing this). fabric8io <code>kubernetes-client</code> is at version <code>2.2.13</code>. </p>

<li>that the image can even be pulled. <code>docker pull</code> and <code>docker run</code> on both the host and inside the minikube VM work exactly as expected</li>

</ul>

<li>read through the kubernetes source code, as I don't know golang</li>

Error syncing pod, skipping: failed to "StartContainer" for "YYY" with ImageInspectError: "Failed to inspect image \"registry_domain/XXX/YYY:latest\": Id or size of image \"registry_domain/XXX/YYY:latest\" is not set"

</code></pre>

<li>The kubernetes source, which isn't helpful to me</li>

<p>I self-host the cluster on digitalocean.</p>

apiVersion: storage.k8s.io/v1

volumeBindingMode: WaitForFirstConsumer

name: prometheus-pv-volume

spec:

accessModes:

hostPath:

nodeSelectorTerms:

kind: PersistentVolume

labels:

storageClassName: local-storage

volumeMode: Filesystem

path: "/grafana-volume"

- matchExpressions:

<p>And 2 pvc's using them on a same node. Here is one:</p>

storageClassName: local-storage

resources:

<p>Everything works fine.</p>

prometheus-pv-volume 100Gi RWO Retain Bound monitoring/prometheus-k8s-db-prometheus-k8s-0 local-storage 16m

monitoring grafana-storage Bound grafana-pv-volume 1Gi RWO local-storage 10m

<pre><code>W0302 17:16:07.877212 1 plugins.go:845] FindExpandablePluginBySpec(prometheus-pv-volume) -&gt; err:no volume plugin matched

<pre>

ERROR: (gcloud.container.clusters.create) ResponseError: code=403, message=Required "container.clusters.create" permission for "projects/test-project".

<pre><code>$ gcloud container clusters create myproject --machine-type=n1-standard1# --zone=asia-northeast1-a

</code></pre>

funny-turtle-myservice-xxx-yyy 1/1 Terminating 1 11d

<h3>try to delete the pod.</h3>

- also tried with <code>--force --grace-period=0</code>, same outcome with extra warning</p>

<h3>try to read the logs (kubectl logs ...).</h3>

<p>So I assume this pod somehow got "disconnected" from the aws API, reasoning from the error message that <code>kubectl logs</code> printed.</p>

<p><strong>nginx-basic.conf-</strong></p>

proxy_pass 35.239.243.201:9200;

<p>I'm getting an error before I even get to the Couchbase part. I successfully created a resource group (which I called "cb_ask_spike", and yes it does appear on the Portal) from the command line, but then I try to create an AKS cluster:</p>

<p>In both cases, I get an error:</p>

<p><a href="https://i.stack.imgur.com/h9gEj.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/h9gEj.png" alt="enter image description here"></a></p>


const googleCloudErrorReporting = new googleCloud.ErrorReporting();


<ul>


</ul>


</blockquote>


<blockquote>


<p>This container is fairly simple:</p>


image: tmaier/postgresql-client


psql "postgresql://$DB_USER:$DB_PASSWORD@db-host:5432" -c "CREATE DATABASE fusionauth ENCODING 'UTF-8' LC_CTYPE 'en_US.UTF-8' LC_COLLATE 'en_US.UTF-8' TEMPLATE template0"


<p>As far as I can see, this Kubernetes initContainer runs before the "istio-init" container. Is that why it cannot resolve db-host:5432 to the IP of the pod running the postgres service?</p>
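Init containers run strictly in the order they are listed, one at a time, so an initContainer ordered before istio-init runs before the sidecar's traffic interception is set up. A hedged YAML sketch of that ordering (names other than the image are illustrative):

```yaml
# Sketch only: initContainers execute sequentially, top to bottom.
# If create-db is listed (or injected) before istio-init, it runs
# without the Envoy sidecar's iptables redirection in place.
apiVersion: v1
kind: Pod
metadata:
  name: fusionauth-example   # hypothetical name
spec:
  initContainers:
    - name: create-db        # runs first
      image: tmaier/postgresql-client
      command: ["psql", "postgresql://user:pass@db-host:5432", "-c", "SELECT 1"]
    - name: istio-init       # would run second
      image: istio/proxy_init
  containers:
    - name: app
      image: my-app          # hypothetical
```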


connections on Unix domain socket "/tmp/.s.PGSQL.5432"?


<p>I have a feeling that it is similar to <a href="https://stackoverflow.com/questions/44312745/kubernetes-rbac-unable-to-upgrade-connection-forbidden-user-systemanonymous">this issue</a>. But the error message is a tad different.</p>


I0614 16:50:11.003705 64104 round_trippers.go:398] curl -k -v -XPOST -H "X-Stream-Protocol-Version: v4.channel.k8s.io" -H "X-Stream-Protocol-Version: v3.channel.k8s.io" -H "X-Stream-Protocol-Version: v2.channel.k8s.io" -H "X-Stream-Protocol-Version: channel.k8s.io" -H "User-Agent: kubectl/v1.6.4 (darwin/amd64) kubernetes/d6f4332" https://localhost:6443/api/v1/namespaces/monitoring/pods/alertmanager-main-0/exec?command=%2Fbin%2Fls&amp;container=alertmanager&amp;container=alertmanager&amp;stderr=true&amp;stdout=true


I0614 16:50:11.169500 64104 round_trippers.go:426] Content-Length: 12


I0614 16:50:11.169512 64104 round_trippers.go:426] Date: Wed, 14 Jun 2017 08:50:11 GMT


</code></pre><p>So when I run <code>kubectl get all --all-namespaces</code> on different machines, I get different output and I can't understand why.</p>


kube-system po/tiller-deploy-78d74d4979-rh7nv 1/1 Running 0 23h


kube-system service-mesh-traefik-5bb8d58bf6-gfdqd 1/1 Running 0 2d


<p>What is different? The cluster is the same, so it should be returning the same data. The first machine is kubectl version 1.9.2, the second machine is 1.10.0. The cluster is running 1.8.7.</p>"kubectl get all --all-namespaces" has different output against the same cluster
<p>Could anybody help me change the version number shown by "kubectl get nodes"? The binaries are compiled from source. "kubectl version" shows the correct version, but "kubectl get nodes" does not.</p>


<p>And here is what I get from <code>kubectl get nodes</code>:</p>


<p>This script will finally use ...release-1.2/cluster/ubuntu/download-release.sh to download the binaries. I commented out the call to download-release.sh and put my own binaries, compiled from the up-to-date sources, into the ubuntu/binaries folder.</p>


KUBELET_HOSTNAME="--hostname-override=centos-minion"


<p>When the kubelet service is started, the following logs can be seen:</p>


</blockquote>


KUBELET_PORT="--kubelet-port=10250"


<pre><code>KUBE_ETCD_SERVERS="--etcd-servers=http://centos-master:2379"


KUBE_MASTER="--master=http://centos-master:8080"


<p>kube 5657 1 0 Mar15 ? 00:12:05 /usr/bin/kube-apiserver --logtostderr=true --v=0 --etcd-servers=<a href="http://centos-master:2379" rel="nofollow noreferrer">http://centos-master:2379</a> --address=0.0.0.0 --port=8080 --kubelet-port=10250 --allow-privileged=false --service-cluster-ip-range=10.254.0.0/16</p>


<p>So I still do not know what is missing.</p>"kubectl get nodes" shows NotReady always even after giving the appropriate IP
<p>I am following the document <a href="https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/" rel="nofollow noreferrer">https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/</a> to try to create a Kubernetes cluster with 3 Vagrant Ubuntu VMs on my local Mac. But even after a successful "kubeadm join", I can only see the master when running "kubectl get nodes" on the master node. After trying several possible fixes found online, the issue remains the same.</p>


- (master) eth0: 10.0.2.15, eth1: 192.168.101.101


- (worker2) eth0: 10.0.2.15, eth1: 192.168.101.103


<p>Regards,


<p><a href="https://i.stack.imgur.com/YHx7k.jpg" rel="nofollow noreferrer">log-new-part1</a>


<pre><code>Failed to pull image "localhost:5000/dev/customer:v1": rpc error: code = Unknown desc


<p><strong>Pod Events</strong></p>


Normal BackOff 16m (x2 over 16m) kubelet, minikube Back-off pulling image "localhost:5000/dev/customer:v1"


= Error response from daemon: Get http://localhost:5000/v2/: dial tcp 127.0.0.1:5000: getsockopt: connection refused


v1: Pulling from dev/customer


<p>Is it because <code>kubectl logs</code> uses ssh under the hood? Is there any workaround to see the pod log?</p>"kubectl logs" not working after adding NAT gateways in GCE
<p>Very often when I deploy a new image with "kubectl set image", it fails with ErrImagePull status and then fixes itself after some time (up to a few hours). These are events from "kubectl describe pod":</p>


36m 12m 6 {kubelet gke-xxxxxxxxxx-staging-default-pool-ac6a32f4-09h5} spec.containers{zzz-staging} Normal Pulling pulling image "us.gcr.io/yyyy-staging/zzz:latest"


16m 7m 7m 3 {kubelet gke-xxxxxxxxxx-staging-default-pool-ac6a32f4-09h5} Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "zzz-staging" with ImagePullBackOff: "Back-off pulling image \"us.gcr.io/yyyy-staging/zzz:latest\""


<p>Is there a way to avoid that?</p><p>I am deploying a container in Google Kubernetes Engine with this YAML fragment:</p>


image: registry/service-go:latest


cpu: "20m"


</code></pre>
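For limits and requests to take effect, they must sit under the container entry, not at pod level. A hedged sketch of the usual shape, assuming the image and CPU request quoted above (the remaining values are illustrative):

```yaml
# Sketch: requests/limits belong under spec.containers[].resources
apiVersion: v1
kind: Pod
metadata:
  name: service-go           # hypothetical name
spec:
  containers:
    - name: service-go
      image: registry/service-go:latest
      resources:
        requests:
          cpu: "20m"
          memory: "64Mi"     # assumed value
        limits:
          cpu: "100m"        # assumed value
          memory: "128Mi"    # assumed value
```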


<p>I only have the default namespace and when executing</p>


<pre><code>Name: default


No resource quota.


</code></pre>"Limits" property ignored when deploying a container in a Kubernetes cluster
<p>I successfully deployed a custom container based on the official docker-vault image with Kubernetes, but when using the <code>vault init</code> command I get the following error:</p>


<pre><code>FROM vault:0.8.3


CMD ["server", "-config=vault.conf"]


export VAULT_ADDR="http://127.0.0.1:8200"


<p>To execute it, I configured my kubernetes yaml deployment file as follows:</p>


- image: // my image


- containerPort: 8200


# memory being swapped to disk so that secrets


add:


mountPath: /vault/file


command: ["/bin/sh", "./configure_vault.sh"]


claimName: vault


<p>a) I created a Docker private registry on the slave node using basic SSL authentication. Let's call it <code>abc.def.com:1234</code>.</p>


<p>e) Now, I stopped the container. I deleted the image from local cache as well. </p>


root 23683 1 2 05:42 ? 00:01:12 /usr/bin/kubelet --logtostderr=true --v=0 --api-servers=http://wer.txy.com:8080 --address=0.0.0.0 --hostname-override=abc.def.com --allow-privileged=false --pod-infra-container-image=abc.def.com:1234/s5678:test --cluster-dns=x.y.z.b --cgroup-driver=systemd


<pre><code>apiVersion: v1


spec:


ports:


- name: regsecret


NAME READY STATUS RESTARTS AGE


<pre><code>Error from server (BadRequest): container "utest1" in pod "test1" is waiting to start: ContainerCreating


</code></pre>
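A "No command specified" failure at pod creation typically means the image was built without a CMD or ENTRYPOINT and the pod spec supplies neither. One workaround is to set <code>command</code>/<code>args</code> in the pod spec; a hedged sketch reusing the names from this question (the sleep command itself is an assumption):

```yaml
# Sketch: command/args in the pod spec stand in for a missing
# CMD/ENTRYPOINT in the image. Pod, container, image, and secret
# names come from the question; the command is hypothetical.
apiVersion: v1
kind: Pod
metadata:
  name: test1
spec:
  containers:
    - name: utest1
      image: abc.def.com:1234/s5678:test
      command: ["sleep"]
      args: ["3600"]
  imagePullSecrets:
    - name: regsecret
```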


<p>As you can see, the above command is received by the kubelet but is not passed on to the dockerd-current daemon:</p>


<p>Before pod creation: </p>


</code></pre>


abc.def.com:1234/s5678 test 54ae12a89367 8 days ago 108 MB


<p>Can any of you please help me triage this issue further and bring it to closure?</p>"No command specified" Error while creating pod in Kubernetes
<p>I have a Spring application built into a Docker image with the following command in the <code>dockerfile</code>:</p>


<p>When creating app on OpenShift with </p>


<pre><code>java.lang.IllegalStateException: Logback configuration error detected:


<p>However, when I run the image directly with <code>docker run -it &lt;image_ID&gt; /bin/bash</code>, and then execute the <code>java -jar</code> command above, it runs fine.</p>


&lt;encoder&gt;


&lt;rollingPolicy class="ch.qos.logback.core.rolling.SizeAndTimeBasedRollingPolicy"&gt;


&lt;/rollingPolicy&gt;


<p>Versions I use:</p>


features: Basic-Auth GSSAPI Kerberos SPNEGO


API version: 1.24


Built: Wed Dec 13 12:18:58 2017


API version: 1.24


Built: Wed Dec 13 12:18:58 2017


<pre><code>- job_name: 'kubernetes_pods'


- api_server: http://172.29.219.102:8080


target_label: __address__


<p>Where <code>172.29.219.110:8080</code> is the IP &amp; Port of my standalone HA Proxy.</p>
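Pieced together, a scrape job of this shape (out-of-cluster API server for discovery, every target rewritten to the HA proxy address) would look roughly like the following sketch; the two IPs come from the question, the rest is assumed:

```yaml
# Hypothetical Prometheus job: 172.29.219.102:8080 is the Kubernetes
# API server used for service discovery, 172.29.219.110:8080 the
# standalone HA proxy that __address__ is rewritten to.
- job_name: 'kubernetes_pods'
  kubernetes_sd_configs:
    - api_server: http://172.29.219.102:8080
      role: pod
  relabel_configs:
    - source_labels: [__meta_kubernetes_pod_name]
      regex: (.+)
      target_label: __address__
      replacement: 172.29.219.110:8080
```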


{"status":"UP"}


</code></pre>


<p>Error starting host: Error creating host: Error executing step: Running pre


<p>When I am doing this step:</p>


<p>Looking at docker ps -l, I have </p>


About an hour ago Up About an hour


<p>I run this command:</p>


<p>Anything wrong here?


<pre><code>root@kubemaster:~/istio-0.8.0# docker-compose -f samples/bookinfo/consul/bookinfo.yaml up -d


<pre><code>root@kubemaster:~/istio-0.8.0# docker network create consul_istiomesh


<pre><code>root@kubemaster:~/istio-0.8.0# docker-compose -f samples/bookinfo/consul/bookinfo.yaml up -d


Creating consul_reviews-v1_1


Traceback (most recent call last):


line 63, in main


<p>What to do?</p>"Running on Docker with Consul or Eureka"?
<p>I'd like a multi-container pod with a couple of components:</p>


</ul>


<pre><code>apiVersion: v1


spec:


volumeMounts:


- name: test-volume


fsType: ext4


Output: mount: special device /var/lib/kubelet/plugins/kubernetes.io/aws-ebs/mounts/aws/us-west-2a/vol-xxxxxxxx does not exist


<p>I wanted to test the <code>/metrics</code> API.<br>


--key '/var/lib/kubelet/pki/kubelet.key' \


Maybe I am not using the correct certificates ? </p>


* Connected to 172.31.29.121 (172.31.29.121) port 10250 (#0)


* CAfile: /etc/kubernetes/pki/ca.crt


* TLSv1.2 (IN), TLS handshake, Certificate (11):


* Closing connection 0


establish a secure connection to it. To learn more about this situation and


<pre><code>ballerina.home = "/Library/Ballerina/ballerina-0.975.1"


import ballerina/log;


name: "ballerina-abdennour-demo"


service&lt;http:Service&gt; hello bind { port: 9090 } {


caller -&gt;respond(res) but { error e =&gt; log:printError("Error sending response", err = e)};


undefined annotation "Deployment"


<h2>UPDATE</h2>


<p>When I run a client sanity test, the only exception returned is this:</p>


<p>While it has been suggested that this problem is kube-dns related, as indicated <a href="https://github.com/kubernetes/contrib/issues/2737" rel="nofollow noreferrer">here</a>.<br>


Address 1: 10.63.240.10 kube-dns.kube-system.svc.cluster.local


Server: 10.63.240.10


/ # nslookup zk-1.zk-svc.default.svc.cluster.local


Address 1: 10.60.2.5 zk-1.zk-svc.default.svc.cluster.local


<pre><code>2017-11-29 15:14:39,923 [myid:] - INFO [main:QuorumPeer$QuorumServer@167] - Resolved hostname: zk-0.zk-svc.default.svc.cluster.local to address: zk-0.zk-svc.default.svc.cluster.local/10.60.4.4


<pre><code>root@kubernetes01:~/kubernetes/cluster# KUBERNETES_PROVIDER=ubuntu ./kube-down.sh


No resources found


waiting for tearing down pods


waiting for tearing down pods


waiting for tearing down pods


waiting for tearing down pods


waiting for tearing down pods


<p>so in general I created the licenses on Gentoo Linux using the following bash script:</p>


export WORKER_IP=10.79.218.3


openssl genrsa -out apiserver-key.pem 2048


openssl req -new -key ${WORKER_FQDN}-worker-key.pem -out ${WORKER_FQDN}-worker.csr -subj "/CN=${WORKER_FQDN}" -config worker-openssl.cnf


openssl x509 -req -in admin.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial -out admin.pem -days 365


<pre><code>[req]


[ v3_req ]


[alt_names]


IP.2 = 10.79.218.2


<pre><code>[req]


[ v3_req ]


[alt_names]


<p>My controller machine is <code>coreos-2.tux-in.com</code> which resolves to the lan ip <code>10.79.218.2</code></p>
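The CA and worker-certificate steps above can be reproduced end to end with a throwaway CA to check that the signing itself works; this runnable sketch uses generic CN values and 1-day validity (all names are placeholders, not the cluster's real certs):

```shell
set -e
dir=$(mktemp -d) && cd "$dir"

# Throwaway CA (stands in for ca.pem / ca-key.pem above)
openssl genrsa -out ca-key.pem 2048
openssl req -x509 -new -key ca-key.pem -subj "/CN=kube-ca" -days 1 -out ca.pem

# Worker key + CSR, then sign the CSR with the CA (mirrors the worker steps)
openssl genrsa -out worker-key.pem 2048
openssl req -new -key worker-key.pem -subj "/CN=worker" -out worker.csr
openssl x509 -req -in worker.csr -CA ca.pem -CAkey ca-key.pem \
  -CAcreateserial -out worker-cert.pem -days 1

# Prints "worker-cert.pem: OK" when the chain is intact
openssl verify -CAfile ca.pem worker-cert.pem
```

If the kubelet then reports "certificate signed by unknown authority", the CA that signed the serving cert is not the CA the client was given, which this check makes easy to spot in isolation.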


Nov 08 21:24:06 coreos-2.tux-in.com kubelet-wrapper[2018]: E1108 21:24:06.950827 2018 reflector.go:203] pkg/kubelet/kubelet.go:384: Failed to list *api.Service: Get https://coreos-2.tux-in.com:443/api/v1/services?resourceVersion=0: x509: certificate signed by unknown authority


Nov 08 21:24:08 coreos-2.tux-in.com kubelet-wrapper[2018]: E1108 21:24:08.171170 2018 eviction_manager.go:162] eviction manager: unexpected err: failed GetNode: node '10.79.218.2' not found


NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE


kube-system coredns-78fcdf6894-s4l8n 1/1 Running 1 18h 10.244.0.14 master2


kube-system kube-controller-manager-master2 1/1 Running 1 18h 10.0.2.15 master2


kube-system kube-proxy-xldph 1/1 Running 1 18h 10.0.2.15 master2


</code></pre>


<pre><code>kubectl -v=10 exec -it hello-kubernetes-55857678b4-4xbgd sh


I0703 08:44:01.255808 10307 round_trippers.go:386] curl -k -v -XGET -H "Accept: application/json, */*" -H "User-Agent: kubectl/v1.11.0 (linux/amd64) kubernetes/91e7b4f" 'https://192.168.0.33:6443/api/v1/namespaces/default/pods/hello-kubernetes-55857678b4-4xbgd'


I0703 08:44:01.273692 10307 round_trippers.go:414] Content-Type: application/json


inationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File","imagePullPolicy":"IfNotPresent"}],"restartPolicy":"Always","terminationGracePeriodSeconds":30,"dnsPolicy":"ClusterFirst","serviceAccountName":"default","serviceAccount":"default","nodeName":"node","securityContext":{},"schedulerName":"default-scheduler","tolerations":[{"key":"node.kubernetes.io/not-ready","operator":"Exists","effect":"NoExecute","tolerationSeconds":300},{"key":"node.kubernetes.io/unreachable","operator":"Exists","effect":"NoExecute","tolerationSeconds":300}]},"status":{"phase":"Running","conditions":[{"type":"Initialized","status":"True","lastProbeTime":null,"lastTransitionTime":"2018-07-02T18:09:02Z"},{"type":"Ready","status":"True","lastProbeTime":null,"lastTransitionTime":"2018-07-03T12:32:26Z"},{"type":"ContainersReady","status":"True","lastProbeTime":null,"lastTransitionTime":null},{"type":"PodScheduled","status":"True","lastProbeTime":null,"lastTransitionTime":"2018-07-02T18:09:02Z"}],"


I0703 08:44:01.317938 10307 round_trippers.go:411] Response Headers:


F0703 08:44:01.318118 10307 helpers.go:119] error: unable to upgrade connection: pod does not exist


(1045, "Access denied for user 'root'@'cloudsqlproxy~[cloudsql instance ip]' (using password: NO)")</p>


<pre><code>from django.db import connection


<p>the env vars are set correctly in the container, this works:</p>


</code></pre>


'ENGINE': 'django.db.backends.mysql',


'HOST': os.getenv('DB_HOST'),


'charset': 'utf8mb4',


</code></pre>
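The "(using password: NO)" part of the error means Django sent an empty password, i.e. the credential env vars were not picked up in settings. A minimal sketch of a settings fragment reading them (variable names and defaults are assumptions; the proxy sidecar listens on 127.0.0.1, matching the value in the deployment below):

```python
import os

# Hypothetical Django DATABASES sketch for a Cloud SQL proxy sidecar:
# the proxy listens on localhost, so HOST is 127.0.0.1, and USER/PASSWORD
# must come from the same env vars the deployment injects.
DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.mysql",
        "NAME": os.getenv("DB_NAME", "aesh_db"),
        "USER": os.getenv("DB_USER", "root"),
        "PASSWORD": os.getenv("DB_PASSWORD", ""),
        "HOST": "127.0.0.1",
        "PORT": "3306",
        "OPTIONS": {"charset": "utf8mb4"},
    }
}
```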


metadata:


spec:


labels:


- name: aesh-web


env:


value: 127.0.0.1


value: aesh_db


name: cloudsql-db-credentials


secretKeyRef:


image: gcr.io/cloudsql-docker/gce-proxy:1.11


volumeMounts:


volumes:


</code></pre><p>We have a fairly large kubernetes deployment on GKE, and we wanted to make our life a little easier by enabling auto-upgrades. The <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/node-auto-upgrades" rel="nofollow noreferrer">documentation on the topic</a> tells you how to enable it, but not how it actually <strong>works</strong>.</p>


<p>Has someone used this feature in production and can shed some light on what it'll actually do?</p>


<li>I set up a maintenance window</li>


</ul>


<li>The update would happen in the next maintenance window</li>


<ul>


<h1>My question</h1>


<li>If so, to what version?</li>


<pre><code>apiVersion: v1


labels:


domainName: "my.personal-site.de"


app: django-app


targetPort: 8000


name: django-app-deployment


strategy:


maxUnavailable: 1


app: django-app


name: django-app


</code></pre>
<p>I have set up Docker on my machine, and also minikube, which has Docker inside it, so I probably have two Docker instances running on different VMs.</p>


3- docker tag a3703d02a199 127.0.0.1:5000/eliza/console:0.0.1


<p>all above steps are working fine with no problems at all.</p>


2- eval $(minikube docker-env)


<p>in last step (point 4) it gave me next message</p>


<p>So I can access the image registry from my machine but not from minikube, which of course causes problems when I deploy this image with Kubernetes on minikube: the deployment fails because it can't connect to <a href="http://127.0.0.1:5000" rel="noreferrer">http://127.0.0.1:5000</a>.</p>
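One common workaround is to skip the registry entirely and build the image against minikube's own Docker daemon, so the tag already sits in the cluster's local image cache; a hedged sketch (requires a running minikube, tag taken from the steps above):

```shell
# Sketch: build directly inside minikube's Docker daemon so no
# localhost:5000 registry is involved at pull time.
eval $(minikube docker-env)
docker build -t 127.0.0.1:5000/eliza/console:0.0.1 .
# With imagePullPolicy: IfNotPresent (or Never) in the pod spec,
# the kubelet then uses the locally built image instead of pulling.
```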


<pre><code>apiVersion: v1


labels:


- port: 9080


app: tripbru-console


kind: Deployment


app: tripbru-console


template:


tier: frontend


name: tripbru-console


</code></pre>


</blockquote>


</code></pre>


</blockquote>


<a href="https://docker.local:5000/v1/_ping" rel="noreferrer">https://docker.local:5000/v1/_ping</a>: dial tcp: lookup docker.local on


<p>So how can i solve this issue too. </p>


<p>(Inside the pod)</p>


</code></pre>


drwx------ 6 999 docker 4096 Oct 30 11:21 base


drwx------ 2 999 docker 4096 Oct 30 11:21 pg_dynshmem


drwx------ 4 999 docker 4096 Oct 30 11:21 pg_multixact


drwx------ 2 999 docker 4096 Oct 30 11:21 pg_snapshots


drwx------ 2 999 docker 4096 Oct 30 11:21 pg_tblspc


-rw------- 1 999 docker 88 Oct 30 11:21 postgresql.auto.conf


</code></pre>


drwx------ 6 postgres postgres 4096 Oct 30 11:21 base


drwx------ 2 postgres postgres 4096 Oct 30 11:21 pg_dynshmem


drwx------ 4 postgres postgres 4096 Oct 30 11:21 pg_multixact


drwx------ 2 postgres postgres 4096 Oct 30 11:21 pg_snapshots


drwx------ 2 postgres postgres 4096 Oct 30 11:21 pg_tblspc


-rw------- 1 postgres postgres 88 Oct 30 11:21 postgresql.auto.conf


</code></pre>


metadata:


matchLabels:


template:


spec:


env:


valueFrom:


- name: POSTGRES_PASSWORD


key: password


volumeMounts:


name: information-system


command: ["bash", "-c", "python main.py"]


claimName: information-system-db-claim


apiVersion: v1


type: local


storage: 10Gi


path: "/tmp/data/postgres"


apiVersion: v1


storageClassName: manual


requests:


<p>These are pods are running on kube-system namespace</p>


etcd-minikube 1/1 Running 0 6h


kube-dns-86f4d74b45-xxznk 3/3 Running 15 1d


nginx-ingress-controller-tjljg 1/1 Running 3 6h


</code></pre>


Kubelet version: "1.12.0-rc.1" Control plane version: "1.11.3"


<p>Many Thanks


Jan 3 21:28:46 master kubelet: I0103 21:28:46.829714 8726 kubelet_node_status.go:204] Setting node annotation to enable volume controller attach/detach


Jan 3 21:29:02 master kubelet: E0103 21:29:02.762461 8726 cni.go:163] error updating cni config: No networks found in /etc/cni/net.d


CentOS Linux release 7.3.1611 (Core)


Docker version 1.12.5, build 7392c3b


kubeadm version: version.Info{Major:"1", Minor:"6+", GitVersion:"v1.6.0-alpha.0.2074+a092d8e0f95f52", GitCommit:"a092d8e0f95f5200f7ae2cba45c75ab42da36537", GitTreeState:"clean", BuildDate:"2016-12-13T17:03:18Z", GoVersion:"go1.7.4", Compiler:"gc", Platform:"linux/amd64"}


CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES


6b56cda441d6 gcr.io/google_containers/etcd-amd64:3.0.14-kubeadm "etcd --listen-client" 8 minutes ago Up 8 minutes k8s_etcd.c323986f_etcd-master_kube-system_3a26566bb004c61cd05382212e3f978f_80669ce9


66de3a3ad7e9 gcr.io/google_containers/pause-amd64:3.0 "/pause" 8 minutes ago Up 8 minutes k8s_POD.d8dbe16c_etcd-master_kube-system_3a26566bb004c61cd05382212e3f978f_d58fa3b8


kubernetes-cni.x86_64 0.3.0.1-0.07a8a2 @kubernetes


<pre><code>kubectl logs --follow -n kube-system deployment/nginx-ingress


<p>and on the dashboard logs I see this</p>


2018/08/27 21:14:11 Metric client health check failed: the server is currently unable to handle the request (get services heapster). Retrying in 30 seconds.


<p>I tried deleting all the pods in the kube-system namespace, and deleted the dashboard and heapster, but nothing helped. Any ideas what is going on, or what to check? Note: I had upgraded the cluster and everything came up fine after that; I rebooted the master node after the upgrade and this is what happened.</p>


osmsku---kubenode01..local Ready &lt;none&gt; 140d v1.11.2


77.5 is the docker interface ip</p>


heapster ClusterIP 10.98.52.12 &lt;none&gt; 80/TCP 1h k8s-app=heapster


monitoring-influxdb ClusterIP 10.101.205.79 &lt;none&gt; 8086/TCP 1h k8s-app=influxdb


osmsku--prod-kubemaster02..local Ready &lt;none&gt; 140d v1.11.2 &lt;none&gt; CentOS Linux 7 (Core) 3.10.0-514.26.2.el7.x86_64 docker://18.3.0


NAME READY STATUS RESTARTS AGE


etcd-osmsku--prod-kubemaster01..local 1/1 Running 15 2h


kibana-logging-66fcf97dc8-57nd5 1/1 Running 1 2h


kube-flannel-ds-5g26z 1/1 Running 2 2h


kube-proxy-gv2lf 1/1 Running 2 2h


kubernetes-dashboard-6bc9c6f7cb-f8g7s 1/1 Running 0 2h


</code></pre>*4 connect() failed (113: No route to host) while connecting to kubernetes dashboard upstream
<p>When I deploy the following I get this error:</p>


apiVersion: extensions/v1beta1


labels:


app.kubernetes.io/managed-by: {{ .Release.Service }}


{{- end }}


{{- range .Values.front.ingress.tls }}


{{- end }}


rules:


paths:


serviceName: {{ include "marketplace.name" . }}-{{ $.Values.front.name }}


{{- end }}


</code></pre>


{{- end -}}


<p>Values used:</p>


annotations:


paths:


</code></pre>
<p>Currently, under Kubernetes 1.5.3, kube-apiserver.log and kube-controller-manager.log are generated by adding '1>>/var/log/kube-apiserver.log 2>&amp;1' in the /etc/kubernetes/kube-apiserver.yaml file.


<pre><code>vagrant@master:~$ helm init


Happy Helming!


vagrant@master:~$ helm install nginx


<p><a href="https://kubernetes.io/docs/getting-started-guides/ubuntu/troubleshooting/" rel="nofollow noreferrer">https://kubernetes.io/docs/getting-started-guides/ubuntu/troubleshooting/</a></p>


<pre><code> juju expose kubernetes-master


<p>Can anyone help me?</p>$ helm version gives "Cannot connect to tiller"
<p>I have a DaemonSet running in privileged mode in a Kubernetes cluster. This is the YAML spec of the DaemonSet:</p>


name: my-daemon


labels:


serviceAccountName: my-sa-account


imagePullPolicy: Always


<p>Instead of using <code>privileged: true</code>, I am moving to Linux capabilities to grant permissions to the DaemonSet. Therefore, I added all the Linux capabilities to the container and removed <code>privileged: true</code>. This is the new YAML spec:</p>


name: my-daemon


labels:


serviceAccountName: my-sa-account


imagePullPolicy: Always


</code></pre>
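The privileged-to-capabilities move usually reduces to a securityContext of the following shape (the capability list here is illustrative, not the full set; note that some device and /proc accesses need more than capabilities alone):

```yaml
# Sketch: replacing privileged: true with an explicit capability list
# under spec.template.spec.containers[].securityContext.
securityContext:
  capabilities:
    add:
      - NET_ADMIN
      - SYS_ADMIN
      - SYS_PTRACE
```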


ShdPnd: 0000000000000000


CapInh: 0000003fffffffff


CapAmb: 0000000000000000


<p>The <code>Namespace</code> is <code>default</code> and this is the result of <code>kubectl cluster-info</code>:


</code>


metadata:

   edit   deselect   +add

 

name: busybox

   edit   deselect   +add

 

port: 80

   edit   deselect   +add

 

metadata:

   edit   deselect   +add

 

spec:

   edit   deselect   +add

 

- image: time-provider

   edit   deselect   +add

 

metadata:

   edit   deselect   +add

 

spec:

   edit   deselect   +add

 

- image: gateway

   edit   deselect   +add

 

LB: 10.240.0.16 (haproxy)</p>

mv /home/${USER}/ca.crt /etc/kubernetes/pki/
mv /home/${USER}/front-proxy-ca.crt /etc/kubernetes/pki/
mv /home/${USER}/admin.conf /etc/kubernetes/admin.conf

[certificates] Generated etcd/ca certificate and key.
[certificates] Generated etcd/peer certificate and key.
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated front-proxy-ca certificate and key.

> `sudo kubeadm alpha phase kubelet config write-to-disk --config kubeadm-config.yaml`

> `sudo kubeadm alpha phase kubelet write-env-file --config kubeadm-config.yaml`

> `sudo kubeadm alpha phase kubeconfig kubelet --config kubeadm-config.yaml`

> `export KUBECONFIG=/etc/kubernetes/admin.conf`

The connection to the server localhost:8080 was refused - did you specify the right host or port?</p>
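The localhost:8080 error above is kubectl's fallback when it finds no kubeconfig at all; pointing KUBECONFIG at the admin config that kubeadm generated usually resolves it (a sketch; the path assumes a default kubeadm layout):

```shell
# kubectl only tries localhost:8080 when it has no kubeconfig to read.
# After kubeadm init, use the generated admin credentials instead:
export KUBECONFIG=/etc/kubernetes/admin.conf
echo "KUBECONFIG=$KUBECONFIG"
# With this set, "kubectl cluster-info" should reach the real API server
# endpoint (e.g. the load balancer) rather than localhost:8080.
```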

<p>No output</p>

<p><strong><em>a kubeconfig file "/etc/kubernetes/admin.conf" exists already but has got the wrong API Server URL</em></strong></p>

kind: InitConfiguration
controlPlaneEndpoint: "10.240.0.16:6443"
listen-client-urls: "https://127.0.0.1:2379,https://10.240.0.33:2379"
initial-cluster: "kb8-master1=https://10.240.0.4:2380,kb8-master2=https://10.240.0.33:2380"
- 10.240.0.33
networking:
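For context, the scattered fields above typically live together in one kubeadm config file along these lines (a sketch assuming the kubeadm v1beta1 API; field placement differs slightly across kubeadm versions, and the addresses are the ones from the question):

```yaml
# kubeadm-config.yaml -- sketch only
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
controlPlaneEndpoint: "10.240.0.16:6443"   # the haproxy load balancer
etcd:
  local:
    extraArgs:
      listen-client-urls: "https://127.0.0.1:2379,https://10.240.0.33:2379"
      initial-cluster: "kb8-master1=https://10.240.0.4:2380,kb8-master2=https://10.240.0.33:2380"
    serverCertSANs:
    - 10.240.0.33
networking:
  podSubnet: 192.168.0.0/16   # hypothetical pod CIDR; use your own
```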

CVE-2018-1002105 is one of the most severe #Kubernetes #security vulnerabilities of all time. How does this flaw wo… https://t.co/FLv4b9BVFE

@olivierboukili: I just published A GCP / Kubernetes production migration retrospective (part 1) https://t.co/RngsmtMwor

@IanColdwater: I got accepted to speak at #BHUSA. ☺️ @mauilion and I are going to be demonstrating some little-known attacks on default…

@CloudExpo: OpsRamp to Present AI &amp; AIOps Education Track at CloudEXPO @OpsRamp #HybridCloud #AI #IoT #AIOps #DevOps #Blockchain #Cl…

@brendandburns: Windows containers are now in public preview in @Azure Kubernetes Service!!! https://t.co/ANfWHkbZrV Many, many thanks…

@Rancher_Labs: Introducing the Rancher 2 #Terraform Provider: This week, @HashiCorp published the Rancher2 provider to help you provisio…

@azureflashnews: Announcing the preview of Windows Server containers support in Azure Kubernetes Service https://t.co/lqmAJZgCbR #Azur…

@YvosPivo: Have you ever thought about how you can import volumes into a #Kubernetes cluster, for instance to migrate &amp; transform legacy…

The First Way – Systems Thinking • Understand the entire flow of work • Seek to increase the flow of work • Stop… https://t.co/rAtcN6b3st

@KubeSUMMIT: Join 180 Sponsors &amp; Partners and 60 Exhibitors at CloudEXPO Silicon Valley #HybridCloud #AI #AIOps #IoT #DevOps #DevSecOp…

Announcing the preview of Windows Server containers support in Azure Kubernetes Service https://t.co/E90N6qj05w

@CloudExpo: It's 4:00 AM in Silicon Valley! Do You Know Where Your Data Is? #HybridCloud #AI #AIOps #CIO #IoT #DevOps #SDN #CloudNative…

@vshn_ch: Check out Adrian's excellent blog post about Kubernetes Serverless Frameworks: https://t.co/wi9Pl6Dtex Thanks @akosma! #server…

@DBArgenis: Congrats @Taylorb_msft et al on launching! This is a big, big deal! Windows Server containers in Azure Kubernetes Service h…

HNews: Lokomotive: An engine to drive cutting-edge Linux technologies into Kubernetes https://t.co/0LIZvhtKfU #linux

Heading to #KubeCon Barcelona Spain. #k8s #Kubernetes #k8 @KubeCon_ https://t.co/tEgBQ7mzQn

I'm super interested in this. We're starting to use @Rancher_Labs to manage our #Kubernetes at work and are already… https://t.co/ZMzdIIu4qM

@kubeflow: Kubernetes, The Open and Scalable Approach to ML Pipelines by @yaronhaviv https://t.co/sMLX2ib4fu Courtesy of our friends at…

@OracleDevs: Get Hands-on Microservices on Kubernetes and Autonomous Database! Register for the Live Virtual Lab that we will run on May…

@CloudExpo: Join CloudEXPO Silicon Valley June 24-26 at Biggest Expo Floor in 5 Years! #BigData #HybridCloud #Cloud #CloudNative #Serv…

Announcing the preview of Windows Server containers support in Azure Kubernetes Service https://t.co/OUbl64E9pN

Congrats @Taylorb_msft et al on launching! This is a big, big deal! Windows Server containers in Azure Kubernetes… https://t.co/kugG23JAC3

Announcing the preview of Windows Server containers support in Azure Kubernetes Service https://t.co/HGAC6WOnd8 #Microsoft #Azure #Cloud

3 Things Every CTO Should Know About Kubernetes
5 things I wish I'd known about Kubernetes before I started
7.5 tips to help you ace the Certified Kubernetes Administrator (CKA) exam
10 most important differences between OpenShift and Kubernetes
50 Useful Kubernetes Tools
068: Screaming in the Cloud, GDPR, Kubernetes PoCs, InfoSec, and More
076: Hiring in DevOps, Security, Kubernetes, and More
080: Improving the Workforce, Programming Myths, Kubernetes, New Books, and More
087: Psychological Safety, Kubernetes, Ansible, Serverless, AWS, OpenFaaS, & More
093: Hard Week, Ansible, Kubernetes, Nathen Harvey, InfoSec, and More
100: At Least It Wasn't Oracle, AWS, HQ2, Kubernetes, Neomonolith, and More
106: KubeKhan, KubeCon, Etcd, Licenses, Securing Kubernetes, JFrog, More
115: CVE-2019-5736 Runc Vuln, Kubernetes, Liz Fong-Jones, MongoDB's End and More
122: Chefnanigans, Derek the DevOps Dinosaur, BPF, Envoy, Kubernetes, OPA, More
[Audio] Making Innovative Containers Using Kubernetes and DevOps
[Talk] Kubernetes 1.6+ Feat. David Aronchick and Kubernetes on AWS Zalando
[Webinar] Kubernetes Monitoring Best Practices from KubeCon
A CLI tool for deploying cloud native apps to Kubernetes
A comparison of Kubernetes network plugins
A conversion tool to go from Docker Compose to Kubernetes
A developer's standpoint on Docker Swarm and Kubernetes
A few things I've learned about Kubernetes
A Guide to Deploy Elasticsearch Cluster on Google Kubernetes Engine
A Kubernetes Admission controller to gate pod execution based on image analysis
A Kubernetes-based polyglot microservices application with Istio service mesh
A new Kubernetes sandbox
A reason for unexplained connection timeouts on Kubernetes/Docker
A Service Mesh for Kubernetes: Distributed Tracing
A story about a Kubernetes migration
A Tale of Cloud, Containers and Kubernetes
A Top10 list of Kubernetes applications
A VPN for Minikube: transparent networking access to your Kubernetes cluster
Abusing the Kubernetes API Server Proxying
Adding Persistent Volumes to Jenkins with Kubernetes
Advanced Kubernetes Objects You Need to Know
After 2 years, Kubernetes is at the center of the cloud; now comes the hard part
All the Fun in Kubernetes 1.9 – The New Stack
Amazon Considering New Cloud Service Based on Kubernetes
Amazon Elastic File System on Kubernetes
Amazon Web Services chooses its Kubernetes path, joins CNCF
AMD ROCm 2.0 released (TensorFlow v1.12, FP16 support, OpenCL 2.0, Kubernetes, )
An open source operator for Kafka on Kubernetes
Analysis of a Kubernetes hack - Backdooring through kubelet
Announcement: Cloud 66 Kubernetes Support Is Here
Announcing HashiCorp Consul and Kubernetes
Announcing Submariner, Multi-Cluster Network Connectivity for Kubernetes
Announcing Terraform Support for Kubernetes Service on AWS
Ansible playbook to deploy Rancher k3s kubernetes cluster
Apache Mesos vs. Google’s Kubernetes
Apollo – The Logz.io Continuous Deployment Solution Over Kubernetes
Application Tracing on Kubernetes with AWS X-Ray – AWS Compute Blog
Are you learning Kubernetes/Docker? What resources are you using?
Argo: Open source Kubernetes native workflows, events, CI and CD
Ask HN: Anyone with inside knowledge about Amazon working on Kubernetes product?
Ask HN: Best container runtime for Kubernetes CRI-O
Ask HN: Did AWS give you access to EKS (ECS for Kubernetes)?
Ask HN: Docker, Kubernetes, Openshift, etc – how do you deploy your products?
Ask HN: How has Kubernetes changed your workflow?
Ask HN: Kubernetes application level rollback?
Ask HN: Nomad (Hashicorp) vs. Kubernetes?
Ask HN: What are downsides of Kubernetes/containers?
Ask HN: Who is using Kubernetes or Docker in production and how has it been?
Assess Kubernetes performance and scalability using Automation Pipeline
Auto generate Kubernetes pod security policies
Automate deep learning training with Kubernetes GPU-cluster
Automated TLS with cert-manager and letsencrypt for Kubernetes
Automating TLS and DNS with Kubernetes Ingress
Autoscaling Deep Learning Training with Kubernetes
AWS ALB Ingress Controller for Kubernetes
AWS managed kubernetes (EKS)
AWS Service Operator for Kubernetes Now Available
Azure brings new Serverless and DevOps capabilities to the Kubernetes community
Azure Kubernetes Service (AKS) GA
Becoming a Kubernetes Maintainer
Best Practices for Kubernetes' Pods
Beyond Kubernetes: Istio network service mesh
Bitnami Kubernetes Production Runtime
BlockChain App Deployment Using Microservices with Kubernetes
Bootstrap Kubernetes the Hard Way on GCP
Borg, Omega, and Kubernetes [pdf]
Brigade: Event-Driven Scripting for Kubernetes (JS)
Bringing Kubernetes to Containership
Build and deploy docker images to Kubernetes using Git push
Build Your Kubernetes Cluster with Raspberry Pi, .NET Core and OpenFaas
Build, deploy, manage modern serverless workloads using Knative on Kubernetes
Building a Hybrid x86–64 and ARM Kubernetes Cluster
Building a Kubernetes Operator for Prometheus and Thanos
Building an ARM Kubernetes Cluster
Building containers with Kubernetes
Building Machine Learning Services in Kubernetes
Cabin (mobile app for kubernetes) is now open source
Canary deployments on kubernetes using Traefik
Canonical Distribution of Kubernetes
Canonical makes Kubernetes moves
Cbi: Container Builder Interface for Kubernetes
Certified Kubernetes and Google Kubernetes Engine
Checking Out the Kubernetes Service Catalog
CI/CD new features: Kubernetes, JaCoCo, and more
CI/CD with Amazon Elastic Container Service for Kubernetes (Amazon EKS)
Cisco joins the Kubernetes cloud rush
CLI tool to generate repeatable, cloud-based Kubernetes infrastructure
Cloud Foundry adds native Kubernetes support for running containers
Cloud Migration Best Practices: How to Move Your Project to Kubernetes
Cloudbees Kubernetes Continuous Delivery
Cluster-in-a-box: How to deploy one or more Kubernetes clusters to a single box
CNCF just got 36 companies to agree to a Kubernetes certification standard
Collecting application logs in Kubernetes
Comparing Kubernetes Authentication Methods
Comparing Kubernetes Operators and in-house scripts to build Platform automation
Compose on Kubernetes Now Open Source
Concerns about Kubernetes Community newcomers
Configuring permissions in Kubernetes with RBAC
Conjuring up Kubernetes on Ubuntu
Container code cluster-fact: There's a hole in Kubernetes
Container Security Part 3 – Kubernetes Cheat Sheet
Containerized App Deployment on Kubernetes (K8s) with Nutanix Calm
Containership Launches Its Fully Managed Kubernetes Service
Continuous Delivery with Kubernetes the Hard Way
Continuous Deployment with Docker, Kubernetes and Jenkins
Contribute to Kubernetes without writing code
Convergence to Kubernetes – Standardization to Scale
CoreOS (YC S13) Is Hiring – Help Accelerate Kubernetes (BER/SFO/NYC/remote)
CoreOS Automates Kubernetes Node OS Upgrades
CoreOS Tectonic extends Kubernetes to new platforms with automated installer
CoreOS' Tectonic delivers “self-driving” Kubernetes to the enterprise
Create a TLS-Protected Kubernetes Ingress from Scratch
Create, manage, snapshot and scale Kubernetes infrastructure in the public cloud
Creating a Kubernetes Cluster on AWS with a single command
Creating dashboards of Kubernetes security events with Falco and an EFK stack
Critical Privilege Escalation Flaw Patched in Kubernetes
Cruise open-sources RBACSync – Gsuite group membership controller for Kubernetes
Customizing Kubernetes DNS Using Consul
CVE-2018-1002100 Kubernetes: the kubectl cp command insecurely handles tar data
Cycle Pedals Bare Metal Container Orchestrator as Kubernetes Alternative
Databases on Kubernetes – How to Recover from Failures, Scale Up and Down
Debug Your Live Apps Running in Azure Virtual Machines and Azure Kubernetes
Debugging a TCP socket leak in a Kubernetes cluster
Debugging microservices on Kubernetes with the Conduit service mesh 0.4 release
Dedicated Game Server Hosting and Scaling for Multiplayer Games on Kubernetes
Deep Dive into Kubernetes Networking in Azure
Demo Kubernetes-based polyglot microservices application with Istio service mesh
Deploy a HIG Stack in Kubernetes for Monitoring
Deploy and use a multi-framework deep learning platform on Kubernetes
Deploy InfluxDB and Grafana on Kubernetes to Collect Twitter Stats
Deploy OpenFaaS and Kubernetes on DigitalOcean with Ansible
Deploy-to-kube: Deploy your Node.js app to Kubernetes with a single command
Deploying a Node App to Google Cloud with Kubernetes
Deploying Java Applications with Docker and Kubernetes
Deploying Kubernetes applications with Helm
Deploying Kubernetes with CoreDNS using kubeadm
Deploying OSv on Kubernetes using virtlet
Deploying Spark on Kubernetes
Deploying to Google Kubernetes Engine