CVE-2019-9946 Cloud Native Computing Foundation (CNCF) CNI (Container Networking Interface) 0.7.4 has a network firewall misconfiguration which affects Kubernetes. The CNI 'portmap' plugin, used to set up HostPorts for CNI, inserts rules at the front of the iptables nat chains, which take precedence over the KUBE-SERVICES chain. Because of this, the HostPort/portmap rule could match incoming traffic even if there were better-fitting, more specific service definition rules such as NodePorts later in the chain. The issue is fixed in CNI 0.7.5 and Kubernetes 1.11.9, 1.12.7, 1.13.5, and 1.14.0.
15554296360000000
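A quick way to check whether a node exhibits the vulnerable ordering is to list the nat-table PREROUTING rules and confirm which jump comes first; a minimal sketch (run as root on a node; chain names assume the default portmap/kube-proxy setup):
<pre><code># List nat PREROUTING rules in order; the first matching jump wins.
# On CNI <= 0.7.4 the CNI-HOSTPORT-DNAT jump is inserted ahead of KUBE-SERVICES.
iptables -t nat -S PREROUTING
# Re-check the ordering after upgrading to CNI 0.7.5 or a fixed Kubernetes release.
iptables -t nat -L PREROUTING --line-numbers
</code></pre>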
CVE-2018-5543 The F5 BIG-IP Controller for Kubernetes 1.0.0-1.5.0 (k8s-bigip-ctlr) passes BIG-IP username and password as command line parameters, which may lead to disclosure of the credentials used by the container.
15554296360000000
CVE-2018-18264 Kubernetes Dashboard before 1.10.1 allows attackers to bypass authentication and use Dashboard's Service Account for reading secrets within the cluster.
15554296360000000
CVE-2018-1000400 Kubernetes CRI-O version prior to 1.9 contains a Privilege Context Switching Error (CWE-270) vulnerability in the handling of ambient capabilities that can result in containers running with elevated privileges, allowing users abilities they should not have. This attack appears to be exploitable via container execution. This vulnerability appears to have been fixed in 1.9.
15554296360000000
CVE-2017-1002100 Default access permissions for Persistent Volumes (PVs) created by the Kubernetes Azure cloud provider in versions 1.6.0 to 1.6.5 are set to "container" which exposes a URI that can be accessed without authentication on the public internet. Access to the URI string requires privileged access to the Kubernetes cluster or authenticated access to the Azure portal.
15554296360000000
CVE-2016-1905 The API server in Kubernetes does not properly check admission control, which allows remote authenticated users to access additional resources via a crafted patched object.
15554296360000000
[] [domhalps] [] [sbucloud]
@domhalps @SBUCloud: With Ka Wai Leung get a preview of #HPE participation at #RHSummit Boston, MA! Engage the #HPERedHat team at booth #637 #So…
15559393430000000
[] [sbucloud] [] [apalia]
@SBUCloud Pierre Vacherand #CTO @Apalia Switzerland talks about the customer's benefits from a full stack #Automation for… https://t.co/RlDUZHsry0
15559393430000000
[] [sbucloud] [] [social_4u] [] [datamattsson]
@SBUCloud @Social_4U: Would you like to take advantage of policy-based provisioning for persistent volume in ? Join @datamattsson at #…
15559393430000000
[] [mulenga29] [] [lightbend]
@mulenga29 @lightbend: BIG RELEASE! 🙌 Lightbend Pipelines 1.0.0 now out - build and deploy streaming pipelines with #AkkaStreams, #SparkStreaming,…
15559393430000000
[] [crypto__graph] [] [rhdevelopers]
@Crypto__graph @rhdevelopers: Whether you're still learning or an experienced or #Kubernetes application #developer, add this #YAML extensio…
15559393430000000
[] [serenamarie125] [] [kialiproject]
@serenamarie125 @KialiProject: Stop scrolling. Don't miss this. Kiali Operator is now available. Start using it to install Kiali. See how easy it is:…
15559393430000000
[] [edgeiotai] [] [nadhaneg]
@EdgeIotAi @NadhanEG: #RHSummit Track Guide for Emerging Technology :: #CTO Chris Wright :: #AI #ML on :: #Edge -- where #5G meets #IoT…
15559393430000000
[] [slawomirkumka]
@SlawomirKumka #Kubernetes & #Microservices enabling the Cloud's "Next Frontier" #IBMCloudPrivate https://t.co/bIrSu0pDd7
15559393430000000
[] [nicholas_redhat]
@nicholas_redhat openshift: #SavetheDate: Commons Gathering co-located with KubeCon in Barcelona, Spain, May 20, 2019.… https://t.co/B0D4yPWdSu
15559393430000000
[] [jamieeduncan] [] [openshift]
@jamieeduncan @openshift: Planning your Red Hat Summit experience around ? We’ve made it simple for you to find all the best sessions & acti…
15559393430000000
[] [spbreed] [] [openshift]
@spbreed Deploying Applications to Multiple #Datacenters https://t.co/wkRIUclo0r via @openshift
15559393430000000
[] [kamesh_sampath] [] [couchbase] [] [redhat] [] [couchbase]
@kamesh_sampath @couchbase: Virtual event: Join @RedHat this Friday for a @Couchbase on Openshift demo; learn how to run Couchbase deployments natively…
15559393430000000
[] [stefanvoirschot] [] [openshift]
@stefanvoirschot @openshift: Planning your Red Hat Summit experience around ? We’ve made it simple for you to find all the best sessions & acti…
15559393430000000
[] [social_4u] [] [sbucloud]
@Social_4U @SBUCloud: Learn how to build an end-to-end #DataAnalytics pipeline to get value from Your #Data with at #RHSummit May 7-9 Bo…
15559393430000000
[] [bentonam] [] [couchbase] [] [redhat] [] [couchbase]
@bentonam @couchbase: Virtual event: Join @RedHat this Friday for a @Couchbase on Openshift demo; learn how to run Couchbase deployments natively…
15559393430000000
[] [jimmylarroche65] [] [couchbase] [] [redhat] [] [couchbase]
@JimmyLarroche65 @couchbase: Virtual event: Join @RedHat this Friday for a @Couchbase on Openshift demo; learn how to run Couchbase deployments natively…
15559393430000000
[] [fujioturner] [] [couchbase] [] [redhat] [] [couchbase]
@FujioTurner @couchbase: Virtual event: Join @RedHat this Friday for a @Couchbase on Openshift demo; learn how to run Couchbase deployments natively…
15559393430000000
[] [wyly_ellen] [] [1vizuri] [] [redhatpartners] [] [sysdig]
@wyly_ellen @1Vizuri: Just one week until our April 25 workshop in Columbia, MD with @RedHatPartners and @sysdig. Learn about transforming applicati…
15559393430000000
[] [gummybaren] [] [couchbase] [] [redhat] [] [couchbase]
@gummybaren @couchbase: Virtual event: Join @RedHat this Friday for a @Couchbase on Openshift demo; learn how to run Couchbase deployments natively…
15559393430000000
1. The test service 'A' which receives HTTP requests at port 80 has five pods deployed on three nodes.<br>
15581188260000000
metadata:
15581188260000000
</code></pre>
15581188260000000
<strong>In the next 3 minutes, about 20% of requests timed out, which is unacceptable in a production environment.</strong> </p>
15581188260000000
neo4j-core-2 1/1 Running 0 20h
15581188260000000
neo4j ClusterIP None <none> 7474/TCP,6362/TCP 20h
15581188260000000
</code></pre>
15581188260000000
<p><strong>Whitelabel Error Page
15581188260000000
<p>Here is my generated nginx.conf entry for my server</p>
15581188260000000
set $proxy_upstream_name "-";
15581188260000000
ssl_certificate_key /ingress-controller/ssl/default-payday.pem;
15581188260000000
if ($scheme = https) {
15581188260000000
set $ingress_name "frontend-routing";
15581188260000000
client_max_body_size "1m";
15581188260000000
proxy_set_header ssl-client-dn "";
15581188260000000
proxy_set_header X-Forwarded-For $the_real_ip;
15581188260000000
proxy_set_header X-Scheme $pass_access_scheme;
15581188260000000
proxy_set_header Proxy "";
15581188260000000
proxy_buffering "off";
15581188260000000
proxy_cookie_domain off;
15581188260000000
proxy_redirect off;
15581188260000000
metadata:
15581188260000000
backend:
15581188260000000
<pre><code>kind: ConfigMap
15581188260000000
labels:
15581188260000000
use-proxy-protocol: "true"
15581188260000000
metadata:
15581188260000000
annotations:
15581188260000000
selector:
15581188260000000
protocol: TCP
15581188260000000
<p>org.springframework.security.oauth2.common.exceptions. InvalidRequestException: Possible CSRF detected - state parameter was required but no state could be found</p>
15581188260000000
<p>The tutorial does the healthz check by having an nginx instance fronting the API server, which connects to the API server using TLS. The only reason to do this is because the GCP load balancer needs a non-TLS endpoint for the health check. I don't see why using curl directly with TLS shouldn't work. Has something changed in terms of default permissions between the v1.12.0 and v1.13.2 releases?</p>
15581188260000000
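<p>For reference, querying the secure healthz endpoint directly is just a TLS request against the API server; a minimal sketch (the port and CA path assume a kubernetes-the-hard-way style layout, adjust for your cluster):</p>
<pre><code># Query the API server health endpoint over TLS, validating against the cluster CA.
curl --cacert /var/lib/kubernetes/ca.pem https://127.0.0.1:6443/healthz
# Expected output: ok
</code></pre>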
I installed the necessary namespace for the basic nginx-ingress-controller; I'm running on docker-for-mac.</p>
15581188260000000
name: ingress-nginx
15581188260000000
targetPort: 80
15581188260000000
<p><code>kubectl describe ingress</code> returns:</p>
15581188260000000
/v23 nginx23:80 (10.1.0.124:80,10.1.0.128:80)
15581188260000000
<p>Can you tell what's wrong with this config?</p><p>We have an app that we are trying to move into Istio mesh. One of the services makes requests to <code>metadata.google.internal</code> in order to finish configuring the environment.</p>
15581188260000000
name: google-metadata-server
15581188260000000
</code></pre>
15581188260000000
[2019-02-07T15:29:22.886Z] "GET /HTTP/1.1" 404 NR 0 0 0 - "-" "Google-HTTP-Java-Client/1.27.0 (gzip)" "3411a0be-6d29-42f3-b01a-567edf2cc3e2" "169.254.169.254" "-" - - 169.254.169.254:80 10.16.0.29:58794
15581188260000000
<p>What I did is to create a simple deployment with Istio, in the same cluster, same namespace, and telnet the metadata server manually:</p>
15581188260000000
GET / HTTP/1.1
15581188260000000
date: Thu, 07 Feb 2019 16:35:52 GMT
15581188260000000
x-envoy-upstream-service-time: 1
15581188260000000
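<p>For context on the metadata calls above: the GCE metadata endpoint resolves to 169.254.169.254 and rejects requests without the Metadata-Flavor header, so a manual check from inside a pod (assuming curl is available in the container) looks roughly like this:</p>
<pre><code># metadata.google.internal resolves to 169.254.169.254 inside GCE/GKE.
# The header is mandatory; without it the server returns 403.
curl -H "Metadata-Flavor: Google" \
  http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/token
</code></pre>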
<p>I've been experimenting and building my deployment using minikube and I have created a yaml file that will successfully deploy everything locally on minikube without error. You can see the full deployment yaml file here: <a href="https://github.com/mwinteringham/restful-booker-platform/blob/kubes/kubes/deploy.yml" rel="nofollow noreferrer">https://github.com/mwinteringham/restful-booker-platform/blob/kubes/kubes/deploy.yml</a></p>
15581188260000000
metadata:
15581188260000000
backend:
15581188260000000
serviceName: rbp-room
15581188260000000
servicePort: 3002
15581188260000000
- path: /auth
15581188260000000
backend:
15581188260000000
serviceName: rbp-ui
15581188260000000
<li>I have let the deployment run for approx. 10 minutes to make sure the initial startup 404s are done</li>
15581188260000000
<p><a href="https://github.com/innostarterkit/language" rel="nofollow noreferrer">https://github.com/innostarterkit/language</a></p>
15581188260000000
[==================================================>] 61.12MB/61.12MB
15581188260000000
9dfa40a0da3b: Layer already exists error parsing HTTP 408 response
15581188260000000
[Google HTML error-page markup (robot.png / googlelogo CSS) omitted]
15581188260000000
request. That’s all we know.\n"</p>
15581188260000000
<p>Daniel</p><p>I'm trying to change the <code>client_max_body_size</code> value so my nginx ingress will not return a 413 error. </p>
15581188260000000
hsts-include-subdomains: "false"
15581188260000000
name: nginx-configuration
15581188260000000
<p>These changes have no effect at all; after loading it, in the nginx controller log I can see the information about reloading the config map, but the values in nginx.conf are the same: </p>
15581188260000000
client_max_body_size "1m";
15581188260000000
client_max_body_size "1m";
15581188260000000
</code></pre>
15581188260000000
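<p>Regarding the 413 / <code>client_max_body_size</code> question above: besides the global ConfigMap, ingress-nginx also honours a per-Ingress annotation for this value; a minimal sketch (the Ingress name below is hypothetical):</p>
<pre><code># Set client_max_body_size to 10m for a single Ingress via the ingress-nginx annotation.
kubectl annotate ingress example-ingress \
  nginx.ingress.kubernetes.io/proxy-body-size=10m --overwrite
</code></pre>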
name = "******"
15581188260000000
password = "******"
15581188260000000
token = "t0k3n"
15581188260000000
region = "us-east1"
15581188260000000
<pre><code>2019-01-09T14:35:07.384-0500 [DEBUG] plugin.terraform-provider-google_v1.20.0_x4.exe: 2019/01/09 14:35:07 [DEBUG] Google API Request Details:
15581188260000000
2019-01-09T14:35:07.384-0500 [DEBUG] plugin.terraform-provider-google_v1.20.0_x4.exe: Content-Length: 584
15581188260000000
2019-01-09T14:35:07.384-0500 [DEBUG] plugin.terraform-provider-google_v1.20.0_x4.exe: "cluster": {
15581188260000000
2019-01-09T14:35:07.384-0500 [DEBUG] plugin.terraform-provider-google_v1.20.0_x4.exe: "legacyAbac": {
15581188260000000
2019-01-09T14:35:07.384-0500 [DEBUG] plugin.terraform-provider-google_v1.20.0_x4.exe: "username": "****"
15581188260000000
2019-01-09T14:35:07.384-0500 [DEBUG] plugin.terraform-provider-google_v1.20.0_x4.exe: "oauthScopes": [
15581188260000000
2019-01-09T14:35:07.384-0500 [DEBUG] plugin.terraform-provider-google_v1.20.0_x4.exe: "https://www.googleapis.com/auth/servicecontrol",
15581188260000000
2019-01-09T14:35:07.384-0500 [DEBUG] plugin.terraform-provider-google_v1.20.0_x4.exe: }
15581188260000000
2019-01-09T14:35:07.521-0500 [DEBUG] plugin.terraform-provider-google_v1.20.0_x4.exe: HTTP/2.0 500 Internal Server Error
15581188260000000
2019-01-09T14:35:07.521-0500 [DEBUG] plugin.terraform-provider-google_v1.20.0_x4.exe: Server: ESF
15581188260000000
2019-01-09T14:35:07.522-0500 [DEBUG] plugin.terraform-provider-google_v1.20.0_x4.exe: X-Frame-Options: SAMEORIGIN
15581188260000000
2019-01-09T14:35:07.522-0500 [DEBUG] plugin.terraform-provider-google_v1.20.0_x4.exe: "code": 500,
15581188260000000
2019-01-09T14:35:07.522-0500 [DEBUG] plugin.terraform-provider-google_v1.20.0_x4.exe: "domain": "global",
15581188260000000
2019-01-09T14:35:07.522-0500 [DEBUG] plugin.terraform-provider-google_v1.20.0_x4.exe: }
15581188260000000
</code></pre>
15581188260000000
HTTP/2.0 500 Internal Server Error
15581188260000000
Server: ESF
15581188260000000
X-Frame-Options: SAMEORIGIN
15581188260000000
"errors": [
15581188260000000
</code></pre>
15581188260000000
kind: Service
15581188260000000
selector:
15581188260000000
port: 8080
15581188260000000
metadata:
15581188260000000
metadata:
15581188260000000
- name: my-service
15581188260000000
imagePullSecrets:
15581188260000000
kind: Ingress
15581188260000000
backend:
15581188260000000
<p>My minikube is running under ip 192.168.99.100 and when I tried to access my application with address: curl 192.168.99.100:80/myservice, I got 502 Bad Gateway.</p>
15581188260000000
<p>No updates/changes have been made to the cluster or pods. All pods are green and working and all hosts are healthy. Everything had been working for more than 2 weeks until this morning.</p>
15581188260000000
<blockquote>
15581188260000000
sudo apt-get install curl
15581188260000000
export LANG=en_US.UTF-8
15581188260000000
export AWS_S3_BUCKET=my.s3.bucket.kube
15581188260000000
</code></pre>
15581188260000000
Resolving storage.googleapis.com (storage.googleapis.com)... 74.125.29.128, 2607:f8b0:400d:c03::80
15581188260000000
100%[======================================>] 496,696,744 57.4MB/s in 8.2s
15581188260000000
... calling verify-prereqs
15581188260000000
usage: aws [options] <command> <subcommand> [parameters]
15581188260000000
<p>The s3 bucket does get created.</p>
15581188260000000
<li>notebook server running on 192.168.0.0/24</li>
15581188260000000
<p>The workers are able to connect to the scheduler, but in the scheduler I get errors of the form </p>
15581188260000000
<p>I set up the <code>kube</code> cluster using <code>kops</code>.</p>
15581188260000000
<p>However, the combination of Kubernetes pod networking and Docker-in-Docker results in the following when trying to push to gcr:</p>
15581188260000000
time="2018-03-19T03:31:17.410403394Z" level=error msg="Upload failed, retrying: net/http: HTTP/1.x transport connection broken: write tcp w.x.y.z:39662->z.y.x.w:443: write: broken pipe"
15581188260000000
<p>Here are the docker commands being run. Obviously the sensitive data has been omitted.</p>
15581188260000000
<p>The <code>jules -stage deploy_docker</code> command runs a <code>go build</code>, <code>docker build</code>, and then <code>gcloud docker -- push...</code> on 8 different directories simultaneously.</p>
15581188260000000
<p>I get <code>Error: unknown flag: --name</code>.</p>
15581188260000000
</code></pre>
15581188260000000
kube-apiserver-master 1/1 Running 0 2m
15581188260000000
kube-scheduler-master 1/1 Running 0 2m
15581188260000000
kube-dns 172.17.0.2:53,172.17.0.2:53 3m
15581188260000000
IP: 10.96.0.10
15581188260000000
TargetPort: 53/TCP
15581188260000000
<p>The <code>172.17.0.2</code> is the IP address assigned by the docker bridge (<code>docker0</code>) to the <code>kube-dns</code> container. On a working k8s network setup, <code>kube-dns</code> should have endpoints with addresses from the <code>podSubnet</code> (<code>10.244.0.0/16</code>). </p>
15581188260000000
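<p>A quick way to compare the two is to print the kube-dns endpoints and pod IPs side by side; a minimal sketch:</p>
<pre><code># Endpoints should show pod-network addresses (10.244.0.0/16 here), not docker0 addresses.
kubectl -n kube-system get endpoints kube-dns -o wide
kubectl -n kube-system get pods -l k8s-app=kube-dns -o wide
</code></pre>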
producer Deployment/producer <unknown>/1% 1 3 1 42m
15581188260000000
metadata:
15581188260000000
labels:
15581188260000000
strategy: {}
15581188260000000
io.kompose.service: producer
15581188260000000
- name: mongoHost
15581188260000000
requests:
15581188260000000
</code></pre>
15581188260000000
</code></pre>
15581188260000000
</code></pre>
15581188260000000
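<p>One common cause of the unknown targets shown above is that the resource-metrics API has no data to serve; a quick check (a sketch, assuming metrics-server or Heapster is the intended metrics source):</p>
<pre><code># If these calls fail or return nothing, the HPA has no metrics and reports unknown targets.
kubectl top pods
kubectl get --raw /apis/metrics.k8s.io/v1beta1/pods | head -c 300
# The Events section usually states the exact reason:
kubectl describe hpa producer
</code></pre>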
<p>I have tried setting the <code>horizontal-pod-autoscaler-use-rest-clients=false</code> for <code>kube-controller-manager</code>, still facing the same issue. </p>`kubectl get hpa` showing targets as unknown and not autoscaling the pod when the load is increased?<p>I am not able to see any log output when deploying a very simple Pod:</p>
15581188260000000
name: counter
15581188260000000
args: [/bin/sh, -c,
15581188260000000
</code></pre>
15581188260000000
Namespace: default
15581188260000000
Status: Running
15581188260000000
Image: busybox
15581188260000000
/bin/sh
15581188260000000
Ready: True
15581188260000000
Conditions:
15581188260000000
Volumes:
15581188260000000
QoS Class: BestEffort
15581188260000000
Type Reason Age From Message
15581188260000000
Normal Created 16m kubelet, ip-10-0-0-43.ec2.internal Created container
15581188260000000
</code></pre>`kubectl logs counter` not showing any output following official Kubernetes example<p>I try to run on any Kube slave node:</p>
15581188260000000
</code></pre>
15581188260000000
ip-10-43-0-11 656m 32% 1736Mi 47%
15581188260000000
<p>OK, what should I do? Give permissions to the <code>system:node</code> group, I suppose.</p>
15581188260000000
<pre><code>$ kubectl describe clusterrole system:node
15581188260000000
Resources Non-Resource URLs Resource Names Verbs
15581188260000000
nodes [] [] [create get list watch delete patch update]
15581188260000000
pods/eviction [] [] [create]
15581188260000000
<p>Now:</p>
15581188260000000
PolicyRule:
15581188260000000
<p>Only way that it works is:</p>
15581188260000000
- apiGroups: [""]
15581188260000000
name: system:node:ip-10-43-0-13
15581188260000000
apiGroup: rbac.authorization.k8s.io
15581188260000000
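<p>For reference, binding the built-in role to the whole node group (rather than to a single node identity as above) can be done in one command; a sketch, assuming that is actually the intent:</p>
<pre><code># Grant the system:node ClusterRole to every member of the system:nodes group.
kubectl create clusterrolebinding node-group-binding \
  --clusterrole=system:node --group=system:nodes
</code></pre>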
Client Version: version.Info{Major:"1", Minor:"7", GitVersion:"v1.7.4", GitCommit:"793658f2d7ca7f064d2bdf606519f9fe1229c381", GitTreeState:"clean", BuildDate:"2017-08-17T08:48:23Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
15581188260000000
<p>During some of these relatively large builds, I am intermittently (~every other build) seeing a build pod become evicted; using <code>kubectl describe pod <podname></code> I can investigate, and I've noticed that the pod is evicted due to the following: </p>
15581188260000000
Node: ip-192-168-66-176.eu-west-1.compute.internal/
15581188260000000
Status: Failed
15581188260000000
/bin/sh
15581188260000000
Environment:
15581188260000000
GIT_COMMITTER_EMAIL: jenkins-x@googlegroups.com
15581188260000000
XDG_CONFIG_HOME: /home/jenkins
15581188260000000
/home/jenkins/.docker from volume-2 (rw)
15581188260000000
131c407141521c0842f62a69004df926be6cb531f9318edf0885aeb96b0662b4
15581188260000000
Environment:
15581188260000000
GIT_COMMITTER_EMAIL: jenkins-x@googlegroups.com
15581188260000000
XDG_CONFIG_HOME: /home/jenkins
15581188260000000
/home/jenkins/.docker from volume-2 (rw)
15581188260000000
Volumes:
15581188260000000
volume-2:
15581188260000000
Type: Secret (a volume populated by a Secret)
15581188260000000
Medium:
15581188260000000
jenkins-token-smvvp:
15581188260000000
Node-Selectors: <none>
15581188260000000
Normal Created 7m kubelet, ip-192-168-66-176.eu-west-1.compute.internal Created container
15581188260000000
Normal SuccessfulMountVolume 7m kubelet, ip-192-168-66-176.eu-west-1.compute.internal MountVolume.SetUp succeeded for volume "volume-3"
15581188260000000
Normal Pulled 7m kubelet, ip-192-168-66-176.eu-west-1.compute.internal Container image "jenkinsci/jnlp-slave:3.14-1" already present on machine
15581188260000000
Normal Killing 5m kubelet, ip-192-168-66-176.eu-west-1.compute.internal Killing container with id docker://maven:Need to kill Pod
15581188260000000
kubectl delete gateway istio-system-gateway -n istio-system
15581188260000000
<p>I have tried to reinstall istio following this <a href="https://cloud.google.com/kubernetes-engine/docs/tutorials/installing-istio" rel="nofollow noreferrer">https://cloud.google.com/kubernetes-engine/docs/tutorials/installing-istio</a>.
15581188260000000
- cluster:
15581188260000000
certificate-authority-data: REDACTED
15581188260000000
namespace: default
15581188260000000
user: federation
15581188260000000
user: kubectl
15581188260000000
user: kubernetes-admins1
15581188260000000
token: e7506989-42eb-11e8-bf70-fa163eb593a3
15581188260000000
- name: kubectl
15581188260000000
Where am I going wrong? </p>
15581188260000000
Creation Timestamp: 2018-04-22T17:37:40Z
15581188260000000
Secret Ref:
15581188260000000
Status:
15581188260000000
Reason: ClusterNotReachable
15581188260000000
<code>$ which ssh-agent || ( apt-get update -y && apt-get install openssh-client -y )
15581188260000000
Enter passphrase for /dev/fd/63: ERROR: Job failed: exit code 1</code>
15581188260000000
<p>This works fine on a larger level, but I lose the ability to use role and claims based authorization since I do not have the authentication set up in my app specifically.</p>
15581188260000000
"IncludeScopes": true,
15581188260000000
"Console": {
15581188260000000
</code></pre>
15581188260000000
.ConfigureAppConfiguration((hostingContext, config) =>
15581188260000000
var appAssembly = Assembly.Load(new AssemblyName(env.ApplicationName));
15581188260000000
config.AddCommandLine(args);
15581188260000000
.UseDefaultServiceProvider((context, options) =>
15581188260000000
<p>Example of logging in other classes:</p>
15581188260000000
(message, certificate, chain, errors) => true;
15581188260000000
- cluster:
15581188260000000
<p>I have a very simple EventHubClient app. It will just listen for EventHub messages.</p>
15581188260000000
// Init Mapper
15581188260000000
EventHubPath,
15581188260000000
// Registers the Event Processor Host and starts receiving messages
15581188260000000
<p><strong>Kubernetes Manifest file (.yaml):</strong></p>
15581188260000000
template:
15581188260000000
containers:
15581188260000000
imagePullSecrets:
15581188260000000
historysvc-deployment-558fc5649f-bln8f 0/1 CrashLoopBackOff 17 1h
15581188260000000
Namespace: default
15581188260000000
Annotations: <none>
15581188260000000
historysvc:
15581188260000000
State: Terminated
15581188260000000
Last State: Terminated
15581188260000000
Ready: False
15581188260000000
Conditions:
15581188260000000
Volumes:
15581188260000000
QoS Class: BestEffort
15581188260000000
Type Reason Age From Message
15581188260000000
Normal Started 6s (x5 over 1m) kubelet, aks-nodepool1-81522366-0 Started container
15581188260000000
<p><strong>UPDATE 1</strong></p>
15581188260000000
Start Time: Tue, 24 Jul 2018 10:15:37 +0200
15581188260000000
IP: 10.244.0.12
15581188260000000
Last State: Terminated
15581188260000000
Ready: False
15581188260000000
Conditions:
15581188260000000
Volumes:
15581188260000000
QoS Class: BestEffort
15581188260000000
Type Reason Age From Message
15581188260000000
<pre><code>docker run <image>
15581188260000000
</code></pre>
15581188260000000
<pre><code>NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE
15581188260000000
Client Version: version.Info{Major:"1", Minor:"8", GitVersion:"v1.8.6", GitCommit:"6260bb08c46c31eea6cb538b34a9ceb3e406689c", GitTreeState:"clean", BuildDate:"2017-12-21T06:34:11Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"darwin/amd64"}
15581188260000000
</code></pre>
15581188260000000
<li><a href="https://stackoverflow.com/questions/50437947/azure-microsoft-compute-resource-provider-stuck-registering-for-about-a-day">Azure: Microsoft.Compute resource provider stuck 'Registering' for about a day</a></li>
15581188260000000
from within Azure), the provider takes an exorbitantly long time to
15581188260000000
<p>The following command should register Microsoft.Compute in a timely manner:</p>
15581188260000000
</code></pre>
15581188260000000
<p>Other Providers that Registered Successfully:</p>
15581188260000000
<h2>Other Background:</h2>
15581188260000000
This is where my situation diverges from the above question. When unregistering the component with:
15581188260000000
<p>This is different from the user over here who then gets Stuck / Hangs at Un-Registering, as opposed to failing with the above message when attempting to unregister (<a href="https://stackoverflow.com/questions/50437947/azure-microsoft-compute-resource-provider-stuck-registering-for-about-a-day">Azure: Microsoft.Compute resource provider stuck 'Registering' for about a day</a>)</p>
15581188260000000
<p><code>gcloud auth activate-service-account --key-file=key-file.json</code></p>
15581188260000000
<p>My question (to MS and anyone else) is: Why is this issue occurring and what work around can be implemented by the users / customers themselves as opposed to by Microsoft Support?</p>
15581188260000000
<li><a href="https://stackoverflow.com/questions/48761952/cant-contact-our-azure-aks-kube-tls-handshake-timeout">Can't contact our Azure-AKS kube - TLS handshake timeout</a></li>
15581188260000000
<li><a href="https://github.com/Azure/AKS/issues/112" rel="noreferrer">https://github.com/Azure/AKS/issues/112</a></li>
15581188260000000
<h2>TL;DR</h2>
15581188260000000
<p>You can also try scaling your Cluster (assuming that doesn't break your app).</p>
15581188260000000
<p>The node(s) on my impacted cluster look like this:</p>
15581188260000000
<p>To the above point, here are the metrics the same Node after Scaling up and then back down (which happened to alleviate our issue, but does not always work — see answers at bottom):</p>
15581188260000000
<p>Zimmergren over on GitHub indicates that he has fewer issues with larger instances than he did running bare-bones smaller nodes. This makes sense to me and could indicate that the way the AKS servers divvy up the workload (see next section) could be based on the size of the instances.</p>
15581188260000000
<li>giorgited (<a href="https://github.com/Azure/AKS/issues/268#issuecomment-376390692" rel="noreferrer">https://github.com/Azure/AKS/issues/268#issuecomment-376390692</a>)</li>
15581188260000000
<h2>Existence of Multiple AKS Management 'Servers' in one Az Region</h2>
15581188260000000
</blockquote>
15581188260000000
<p>Both of our Clusters are running identical ingresses, services, pods, containers so it is also unlikely that anything a user is doing causes this problem to crop up.</p>
15581188260000000
<p>That said...</p>
15581188260000000
</blockquote>
15581188260000000
<li><a href="https://stackoverflow.com/questions/47481022/tls-handshake-timeout-with-kubernetes-in-gke?rq=1">TLS handshake timeout with kubernetes in GKE</a></li>
15581188260000000
var reflectorDisambiguator = int64(time.Now().UnixNano() % 12345)
15581188260000000
<li><p>There is a svc running in k8s cluster which use nodeport 30003</p></li>
15581188260000000
<p>Please see error below from the console:</p>
15581188260000000
<p>How can I solve this error?</p><p>so I've deployed a GRPC service to GKE and <strong>confirmed it works by connecting and making calls in python</strong>... but my goal is to create a front end web app rather than just use python.</p>
15581188260000000
<p>Using <em>ingress-nginx 0.20.0</em> which is built on top of <em>NGINX 1.15.5</em> I have the following ingress object.</p>
15581188260000000
annotations:
15581188260000000
- backend:
15581188260000000
- hosts:
15581188260000000
<pre><code>server {
15581188260000000
listen 443 ssl http2;
15581188260000000
ssl_trusted_certificate /etc/ingress-controller/ssl/monitoring-kb-kibana-tls-full-chain.pem;
15581188260000000
# ngx_auth_request module overrides variables in the parent request,
15581188260000000
proxy_set_header Content-Length "";
15581188260000000
proxy_set_header X-Real-IP $the_real_ip;
15581188260000000
proxy_buffers 4 4k;
15581188260000000
client_max_body_size 10m;
15581188260000000
set $namespace "monitoring";
15581188260000000
rewrite_by_lua_block {
15581188260000000
if ($scheme = https) {
15581188260000000
<pre><code>{"proxy_protocol_addr": "","remote_addr": "xxx.xxx.xxx.xx", "proxy_add_x_forwarded_for": "xxx.xxx.xxx.xx, xxx.xxx.xxx.xx", "remote_user": "", "time_local": "21/Nov/2018:09:53:39 +0000", "request" : "GET / HTTP/1.1", "status": "202", "body_bytes_sent": "0", "http_referer": "", "http_user_agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_13_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/70.0.3538.102 Safari/537.36", "request_length" : "0", "request_time": "0.004", "proxy_upstream_name": "monitoring-kb-kibana-5601", "upstream_addr": "xxx.xxx.xxx.xx:4180", "upstream_response_length": "0", "upstream_response_time": "0.003", "upstream_status": "202", "request_body": "", "http_authorization": ""}
15581188260000000
{"proxy_protocol_addr": "","remote_addr": "xxx.xxx.xxx.xx", "proxy_add_x_forwarded_for": "xxx.xxx.xxx.xx, xxx.xxx.xxx.xx", "remote_user": "", "time_local": "21/Nov/2018:09:53:43 +0000", "request" : "GET /plugins/kibana/assets/settings.svg HTTP/1.1", "status": "202", "body_bytes_sent": "0", "http_referer": "https://kibana.test.com/app/kibana", "http_user_agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_13_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/70.0.3538.102 Safari/537.36", "request_length" : "0", "request_time": "0.029", "proxy_upstream_name": "monitoring-kb-kibana-5601", "upstream_addr": "xxx.xxx.xxx.xx:4180", "upstream_response_length": "0", "upstream_response_time": "0.030", "upstream_status": "202", "request_body": "", "http_authorization": ""}
15581188260000000
</code></pre>
15581188260000000
<li><p>Run Prometheus as a docker container outside of kubernetes. To accomplish this I have created this Dockerfile:</p>
15581188260000000
<pre><code>global:
15581188260000000
- job_name: 'kubernetes-apiservers'
15581188260000000
kubernetes_sd_configs:
15581188260000000
<pre><code>Failed to list *v1.Pod: Get http://localhost:443/api/v1/pods?limit=500&amp;resourceVersion=0: dial tcp 127.0.0.1:443: connect: connection refused"
15581188260000000
<li><p>I thought deploying Prometheus as a Kubernetes deployment may help, so I made this yaml and deployed it.</p>
15581188260000000
metadata:
15581188260000000
- name: prometheus-monitor
15581188260000000
* Trying 127.0.0.1...
15581188260000000
> Accept: */*
15581188260000000
< Content-Length: 37
15581188260000000
* Closing connection 0*
15581188260000000
* TCP_NODELAY set
15581188260000000
> Accept: */*
15581188260000000
<p>My Cluster IP is 184.173.44.62 and my service node port is 30484.
15581188260000000
Name: sunlife-analystrating-deployment
15581188260000000
Selector: app=sunlife-analystrating-deployment
15581188260000000
Pod Template:
15581188260000000
Port: 5002/TCP
15581188260000000
Type Status Reason
15581188260000000
</code></pre>
15581188260000000
Labels: component=apiserver
15581188260000000
IP: 172.21.0.1
15581188260000000
Events: <none>
15581188260000000
Selector: app=sunlife-analystrating-deployment
15581188260000000
NodePort: <unset> 30484/TCP
15581188260000000
</code></pre>
15581188260000000
response="Hello World"
15581188260000000
</code></pre><p>I tried to create a <code>LoadBalancer</code> service on an AKS cluster with </p>
15581188260000000
app: myapp
15581188260000000
protocol: TCP
15581188260000000
</code></pre>
15581188260000000
eating load balancer (will retry): failed to ensure load balancer for service defaul
15581188260000000
ts.windows.net/&lt;token 2>/' associated with this subscription. Please use the authori
15581188260000000
</code></pre>
15581188260000000
</blockquote>
15581188260000000
<li>expose the pod as a clusterIP service and dump my tables onto the database</li>
15581188260000000
<blockquote>
15581188260000000
mysqlRootPassword=xxx,mysqlUser=xxx,mysqlPassword=xxx, \
15581188260000000
<p>I am deploying my wordpress app and nginx containers in one pod, for mutual persistent volume use. The deployment yaml looks like this:</p>
15581188260000000
name: wordpress
15581188260000000
selector:
15581188260000000
labels:
15581188260000000
name: nginx
15581188260000000
value: mysql:3306
15581188260000000
secretKeyRef:
15581188260000000
- containerPort: 80
15581188260000000
mountPath: "/etc/nginx/conf.d"
15581188260000000
value: mysql:3306
15581188260000000
secretKeyRef:
15581188260000000
- name: MY_WP_SITEURL
15581188260000000
value: "true"
15581188260000000
mountPath: /var/www/html
15581188260000000
- name: wp-config
15581188260000000
path: wp.conf
15581188260000000
<p>For reference, my config file is as follows:
15581188260000000
root /var/www/html;
15581188260000000
types {
15581188260000000
fastcgi_split_path_info ^(.+\.php)(/.+)$;
15581188260000000
fastcgi_param PATH_INFO $fastcgi_path_info;
15581188260000000
</code></pre>
15581188260000000
- port: 80
15581188260000000
targetPort: 443
15581188260000000
<p>Here are some logs in case it can tell anything about the problem (I couldn't find much with that regards):</p>
15581188260000000
Warning: Unable to load '/usr/share/zoneinfo/iso3166.tab' as time zone. Skipping it.<br>
15581188260000000
mysql: [Warning] Using a password on the command line interface can be insecure.<br>
15581188260000000
<p>[11:15:03 +0000] "GET /robots.txt HTTP/1.1" 500 262 "-" "Mozilla/5.0 (compatible; Googlebot/2.1; +<a href="http://www.google.com/bot.html" rel="nofollow noreferrer">http://www.google.com/bot.html</a>)"<br>
15581188260000000
<p><strong>Wordpress container logs</strong></p>
15581188260000000
<p>I personally think there is something simple missing here, but I haven't been able to pin it down in the past few days. Does anyone know what I'm missing?</p><p>When I use the command "kubectl get nodes", I get the errors below:</p>
15581188260000000
<p><strong>Environment</strong>
15581188260000000
<p><strong>Incident instance timeline</strong></p>
15581188260000000
<li>The affected node is healthy according to Kubernetes. Looking with kubectl describe nodes, the affected node has more than enough resources to run pods</li>
15581188260000000
Mar 16 08:29:54 ip-172-20-85-48 kubelet[1346]: I0316 08:29:54.512934 1346 reconciler.go:217] operationExecutor.VerifyControllerAttachedVolume started for volume "ssl-certs" (UniqueName: "kubernetes.io/secret/8ead64a3-28f3-11e8-b520-025c267c6ea8-ssl-certs") pod "broker-0" (UID: "8ead64a3-28f3-11e8-b520-025c267c6ea8")
15581188260000000
Mar 16 08:29:54 ip-172-20-85-48 kubelet[1346]: I0316 08:29:54.613329 1346 reconciler.go:262] operationExecutor.MountVolume started for volume "broker-prometheus-config" (UniqueName: "kubernetes.io/configmap/8ead64a3-28f3-11e8-b520-025c267c6ea8-broker-prometheus-config") pod "broker-0" (UID: "8ead64a3-28f3-11e8-b520-025c267c6ea8")
15581188260000000
Mar 16 08:29:54 ip-172-20-85-48 kubelet[1346]: I0316 08:29:54.616720 1346 operation_generator.go:522] MountVolume.SetUp succeeded for volume "broker-prometheus-config" (UniqueName: "kubernetes.io/configmap/8ead64a3-28f3-11e8-b520-025c267c6ea8-broker-prometheus-config") pod "broker-0" (UID: "8ead64a3-28f3-11e8-b520-025c267c6ea8")
15581188260000000
Mar 16 08:29:55 ip-172-20-85-48 kubelet[1346]: I0316 08:29:55.014972 1346 reconciler.go:217] operationExecutor.VerifyControllerAttachedVolume started for volume "pvc-b673d6da-26e3-11e8-aa99-02cd3728faaa" (UniqueName: "kubernetes.io/aws-ebs/aws://eu-central-1b/vol-04145a1c9d1a26280") pod "broker-0" (UID: "8ead64a3-28f3-11e8-b520-025c267c6ea8")
15581188260000000
Mar 16 08:29:58 ip-172-20-85-48 kubelet[1346]: E0316 08:29:58.023871 1346 nestedpendingoperations.go:263] Operation for "\"kubernetes.io/aws-ebs/aws://eu-central-1b/vol-04145a1c9d1a26280\"" failed. No retries permitted until 2018-03-16 08:30:02.023814124 +0000 UTC m=+40.875939520 (durationBeforeRetry 4s). Error: "Volume has not been added to the list of VolumesInUse in the node's volume status for volume \"pvc-b673d6da-26e3-11e8-aa99-02cd3728faaa\" (UniqueName: \"kubernetes.io/aws-ebs/aws://eu-central-1b/vol-04145a1c9d1a26280\") pod \"broker-0\" (UID: \"8ead64a3-28f3-11e8-b520-025c267c6ea8\") "
15581188260000000
Mar 16 08:30:10 ip-172-20-85-48 kubelet[1346]: I0316 08:30:10.156111 1346 reconciler.go:262] operationExecutor.MountVolume started for volume "pvc-b673d6da-26e3-11e8-aa99-02cd3728faaa" (UniqueName: "kubernetes.io/aws-ebs/aws://eu-central-1b/vol-04145a1c9d1a26280") pod "broker-0" (UID: "8ead64a3-28f3-11e8-b520-025c267c6ea8")
15581188260000000
Mar 16 08:30:12 ip-172-20-85-48 kubelet[1346]: I0316 08:30:12.672408 1346 kuberuntime_manager.go:385] No sandbox for pod "broker-0(8ead64a3-28f3-11e8-b520-025c267c6ea8)" can be found. Need to start a new one
15581188260000000
Mar 16 08:34:12 ip-172-20-85-48 kubelet[1346]: E0316 08:34:12.673020 1346 pod_workers.go:186] Error syncing pod 8ead64a3-28f3-11e8-b520-025c267c6ea8 ("broker-0(8ead64a3-28f3-11e8-b520-025c267c6ea8)"), skipping: failed to "CreatePodSandbox" for "broker-0(8ead64a3-28f3-11e8-b520-025c267c6ea8)" with CreatePodSandboxError: "CreatePodSandbox for pod \"broker-0(8ead64a3-28f3-11e8-b520-025c267c6ea8)\" failed: rpc error: code = DeadlineExceeded desc = context deadline exceeded"
15581188260000000
<li><p>Reproduced the problem with another pod (adapter-mqtt-vertx) by forcing it to be rescheduled on the "affected node" AFTER docker daemon and kubelet restart, produces similar result</p></li>
15581188260000000
</code></pre>
15581188260000000
.withNewMetadata()
15581188260000000
.withReplicas(1)
15581188260000000
.withNewSpec()
15581188260000000
.addNewContainer()
15581188260000000
.endTemplate()
15581188260000000
<p>I have checked:</p>
15581188260000000
<li>that there aren't any name conflicts / etc. in minikube. I delete the deployments, replica sets, and pods between attempts, and I recreate the namespace, just to be safe. However, I've found that it doesn't make a difference which I do, as my code cleans up existing pods/replica sets/deployments as needed</li>
15581188260000000
<li>run kubernetes locally (as opposed to minikube), as the AUR package for kubernetes takes an unbelievably long time to build on my machine</li>
15581188260000000
Error syncing pod, skipping: failed to "StartContainer" for "YYY" with ImageInspectError: "Failed to inspect image \"registry_domain/XXX/YYY:latest\": Id or size of image \"registry_domain/XXX/YYY:latest\" is not set"
15581188260000000
<p>Looking up the error message provided leads me to <a href="https://github.com/kubernetes/minikube/issues/947" rel="nofollow noreferrer">https://github.com/kubernetes/minikube/issues/947</a>, but this is not the same issue, as <code>kube-dns</code> is working as expected. This is the only relevant search result, as the other results that come up are </p>
15581188260000000
<pre><code>kind: StorageClass
15581188260000000
volumeBindingMode: WaitForFirstConsumer
15581188260000000
labels:
15581188260000000
capacity:
15581188260000000
volumeMode: Filesystem
15581188260000000
nodeSelectorTerms:
15581188260000000
apiVersion: v1
15581188260000000
name: grafana-pv-volume
15581188260000000
persistentVolumeReclaimPolicy: Retain
15581188260000000
path: "/grafana-volume"
15581188260000000
volumeClaimTemplate:
15581188260000000
name: prometheus-pv-volume
15581188260000000
<p>Everything works fine.</p>
15581188260000000
</code></pre>
15581188260000000
</code></pre>
15581188260000000
<p>Why do they appear? How can I fix this?</p>"FindExpandablePluginBySpec err:no volume plugin matched" messages in logs while volumes are working<p>When I create a kubernetes cluster with the gcloud container clusters create command, a permission error occurs as follows:</p>
15581188260000000
ERROR: (gcloud.container.clusters.create) ResponseError: code=403, message=Required "container.clusters.create" permission for "projects/test-project".
15581188260000000
ERROR: (gcloud.container.clusters.create) ResponseError: code=403, message=Google
15581188260000000
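<p>That 403 is an IAM problem on the project rather than a gcloud problem; the calling account needs a role that includes <code>container.clusters.create</code>, such as <code>roles/container.admin</code>. A sketch with a hypothetical account name:</p>
<pre><code># Grant the account permission to create GKE clusters in the project.
gcloud projects add-iam-policy-binding test-project \
  --member=user:you@example.com --role=roles/container.admin
</code></pre>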
<p>I have a kubernetes pod stuck in "Terminating" state that resists pod deletions</p>
15581188260000000
<h1>What I have tried</h1>
15581188260000000
- also tried with <code>--force --grace-period=0</code>, same outcome with extra warning</p>
15581188260000000
<p>Outcome: <code>Error from server (NotFound): nodes "ip-xxx.yyy.compute.internal" not found</code></p>
15581188260000000
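<p>When the owning node object is gone, one commonly cited last-resort workaround is to clear the pod's finalizers so the API server can finish the deletion; a sketch (the pod name below is hypothetical, use with care):</p>
<pre><code># Remove finalizers that keep a Terminating pod pinned after its node disappeared.
kubectl patch pod stuck-pod -p '{"metadata":{"finalizers":null}}'
</code></pre>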
<h3>EDIT 1</h3>
15581188260000000
location / {
15581188260000000
<p>I'm getting an error before I even get to the Couchbase part. I successfully created a resource group (which I called "cb_ask_spike", and yes it does appear on the Portal) from the command line, but then I try to create an AKS cluster:</p>
15581188260000000
<blockquote>
15581188260000000
<p>I'm using azure-cli v2.0.31.</p>"Incorrect padding" when trying to create managed Kubernetes cluster on Azure with AKS<p>I'm trying to report Node.js errors to Google Error Reporting, from one of our kubernetes deployments running on a GCP/GKE cluster with RBAC. (i.e. permissions defined in a service account associated to the cluster)</p>
15581188260000000
<p>This works only in certain environments:</p>
15581188260000000
<p>It feels like the job did pick up the permission changes of the cluster's service account, whereas my deployment did not.</p>
15581188260000000
</blockquote>"insufficient authentication scopes" from Google API when calling from K8S cluster<p>using a standard istio deployment in a kubernetes cluster I am trying to add an initContainer to my pod deployment, which does additional database setup.</p>
15581188260000000
- name: create-database
15581188260000000
[] [db_host]
psql "postgresql://$DB_USER:$DB_PASSWORD@db-host:5432" -c "CREATE DATABASE fusionauth ENCODING 'UTF-8' LC_CTYPE 'en_US.UTF-8' LC_COLLATE 'en_US.UTF-8' TEMPLATE template0"
15581188260000000
<p>The error message in the init-container is:</p>
15581188260000000
<p>The same command from fully initialized pod works just fine.</p><p>I tried <code>kubectl exec</code> on a k8s 1.6.4 RBAC-enabled cluster and the error returned was: <code>error: unable to upgrade connection: Unauthorized</code>. <code>docker exec</code> on the same container succeeds. Otherwise, <code>kubectl</code> is working. <code>kubectl</code> tunnels through an SSH connection but I don't think this is the issue.</p>
15581188260000000
I0614 16:50:11.169500 64104 round_trippers.go:426] Content-Length: 12
15581188260000000
I0614 16:50:11.169545 64104 round_trippers.go:426] Content-Length: 12
15581188260000000
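<p>Since exec, logs and attach go over the API-server-to-kubelet connection, an Unauthorized here usually means the API server is not presenting client credentials to the kubelet; a quick check (the certificate paths below are kubeadm-style assumptions):</p>
<pre><code># The API server needs kubelet client credentials for exec/logs/attach to work.
ps aux | grep kube-apiserver | tr ' ' '\n' | grep kubelet-client
# Expect flags like:
#   --kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt
#   --kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key
</code></pre>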
<pre><code>kube-system po/kubernetes-dashboard-5569448c6d-w2bdb 1/1 Running 0 16h
15581188260000000
<pre><code>kube-system kubernetes-dashboard-5569448c6d-w2bdb 1/1 Running 0 16h
15581188260000000
<p>What is different? The cluster is the same, so it should be returning the same data. The first machine has kubectl version 1.9.2, the second machine 1.10.0. The cluster is running 1.8.7.</p>"kubectl get all --all-namespaces" has different output against the same cluster<p>Could anybody help me change the version number shown by "kubectl get nodes"? The binaries are compiled from source. "kubectl version" shows the correct version, but "kubectl get nodes" does not.</p>
15581188260000000
<p><a href="https://i.stack.imgur.com/ijlGO.png" rel="nofollow noreferrer">kubectl get nodes</a></p>
15581188260000000
<pre><code>KUBELET_ADDRESS="--address=0.0.0.0"
15581188260000000
</code></pre>
15581188260000000
</blockquote>
15581188260000000
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"
15581188260000000
KUBE_LOG_LEVEL="--v=0"
15581188260000000
<blockquote>
15581188260000000
<p>So I still do not know what is missing. </p>"kubectl get nodes" shows NotReady always even after giving the appropriate IP<p>I am following the document <a href="https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/" rel="nofollow noreferrer">https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/</a> to try to create a kubernetes cluster with 3 Vagrant Ubuntu VMs on my local Mac. But I can only see the master when running "kubectl get nodes" on the master node, even after "kubeadm join" completes successfully. After trying several possible fixes found online, the issue remains the same.</p>
15581188260000000
--> kubeadm init --ignore-preflight-errors Swap --apiserver-advertise-address=192.168.101.101
15581188260000000
</blockquote>
15581188260000000
<p><a href="https://i.stack.imgur.com/L23UY.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/L23UY.jpg" alt="screenshot for kubelet log"></a></p>
15581188260000000
<pre><code>Failed to pull image "localhost:5000/dev/customer:v1": rpc error: code = Unknown desc
15581188260000000
<pre><code>Type Reason Age From Message
15581188260000000
Normal Pulling 15m (x3 over 16m) kubelet, minikube pulling image "localhost:5000/dev/customer:v1"
15581188260000000
PS C:\Sunny\Projects\NodeApps\Nodejs-Apps\Customer> docker pull localhost:5000/dev/customer:v1
15581188260000000
<p>Is it because <code>kubectl logs</code> uses ssh under the hood? Is there any workaround to see the pod log?</p>"kubectl logs" not working after adding NAT gateways in GCE<p>Very often when I want to deploy a new image with "kubectl set image", it fails with ErrImagePull status and then fixes itself after some time (up to a few hours). These are events from "kubectl describe pod":</p>
15581188260000000
31m 11m 6 {kubelet gke-xxxxxxxxxx-staging-default-pool-ac6a32f4-09h5} spec.containers{zzz-staging} Warning Failed Failed to pull image "us.gcr.io/yyyy-staging/zzz:latest": net/http: request canceled
15581188260000000
24m 7m 5m 6 {kubelet gke-xxxxxxxxxx-staging-default-pool-ac6a32f4-09h5} Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "zzz-staging" with ImageInspectError: "Failed to inspect image \"us.gcr.io/yyyy-staging/zzz:latest\": operation timeout: context deadline exceeded"
15581188260000000
- name: service
15581188260000000
cpu: "20m"
15581188260000000
<p>But it keeps taking 120m. <strong>Why is the "limits" property being ignored?</strong> Everything else is working correctly. If I request 200m, 200m are being reserved, but the limit keeps being ignored.</p>
15581188260000000
<p>kubectl describe namespace default</p>
15581188260000000
Status: Active
15581188260000000
</code></pre>"Limits" property ignored when deploying a container in a Kubernetes cluster<p>I successfully deployed with kubernetes a custom container based on the official docker-vault image, but when using the <code>vault init</code> command I get the following error:</p>
15581188260000000
WORKDIR /app
15581188260000000
<p>What I'm trying to achieve is to execute a shell script after the container is started in order to configure the vault. I have a configuration script that starts like this:</p>
15581188260000000
</code></pre>
15581188260000000
- image: // my image
15581188260000000
name: vaultport
15581188260000000
securityContext:
15581188260000000
- name: vault-volume
15581188260000000
command: ["/bin/sh", "./configure_vault.sh"]
15581188260000000
</code></pre>
15581188260000000
<p>c) I downloaded the <code>opensuse:latest</code> image from docker hub, tagged it, and uploaded it to the registry. Docker push succeeded. I deleted it from the local docker cache and tried docker pull to verify that the push was successful, and yes, it was. </p>
15581188260000000
<pre><code>$ ps -ef | grep kubelet
15581188260000000
<pre><code>apiVersion: v1
15581188260000000
containers:
15581188260000000
command: ["/bin/bash"]
15581188260000000
<pre><code>$ kubectl get pods
15581188260000000
<pre><code>Error from server (BadRequest): container "utest1" in pod "test1" is waiting to start: ContainerCreating
15581188260000000
<p>l) I further went ahead and checked the <code>/var/log/messages</code>. It specified that kubelet was able to receive the required arguments: </p>
15581188260000000
</code></pre>
15581188260000000
docker.io/registry 2 28525f9a6e46 10 days ago 33.2 MB
15581188260000000
abc.def.com:1234/s5678 test 54ae12a89367 8 days ago 108 MB
15581188260000000
<pre><code>CMD cd /opt/app/jar \
15581188260000000
</code></pre>
15581188260000000
</code></pre>
15581188260000000
<encoder>
15581188260000000
<fileNamePattern>${user.dir}/../log/archived/debug.%d{yyyy-MM-dd}.%i.log</fileNamePattern>
15581188260000000
</code></pre>
15581188260000000
kubernetes v1.7.6+a08f5eeb62
15581188260000000
API version: 1.24
15581188260000000
OS/Arch: linux/amd64
15581188260000000
Go version: go1.8.3
15581188260000000
<p>My base Promethues config is :</p>
15581188260000000
- api_server: http://172.29.219.102:8080
15581188260000000
regex: (.*)
15581188260000000
<p>When I do a simple curl command from anywhere, I see :</p>
15581188260000000
<pre><code>level=warn ts=2017-12-15T16:40:48.301741927Z caller=scrape.go:673 component="target manager" scrape_pool=kubernetes_pods target=http://172.29.219.110:8080/auth/health msg="append failed" err="no token found"
15581188260000000
<p>Error starting host: Error creating host: Error executing step: Running pre
15581188260000000
<pre><code>docker-machine sshdocker-machine active-N -L 8080:localhost:8080
15581188260000000
<p>CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES b1e95e26f46d
15581188260000000
<p>So it means kubernetes is running</p>
15581188260000000
<p>Anything wrong here?
15581188260000000
ERROR: Network consul_istiomesh declared as external, but could not be found. Please create the network manually using `docker network create consul_istiomesh` and try again.
15581188260000000
</code></pre>
15581188260000000
Creating consul_details-v1_1
15581188260000000
Traceback (most recent call last):
15581188260000000
log.error(e.msg)
15581188260000000
<li>A "main" container which contains a build job</li>
15581188260000000
Here is the pod's yaml file:</p>
15581188260000000
- mountPath: /test-ebs
15581188260000000
awsElasticBlockStore:
15581188260000000
<pre><code>SetUp failed for volume "kubernetes.io/aws-ebs/8e830149-9c95-11e6-b969-0691ac4fce05-test-volume" (spec.Name: "test-volume") pod "8e830149-9c95-11e6-b969-0691ac4fce05" (UID: "8e830149-9c95-11e6-b969-0691ac4fce05") with: mount failed: exit status 32 Mounting arguments: /var/lib/kubelet/plugins/kubernetes.io/aws-ebs/mounts/aws/us-west-2a/vol-xxxxxxxx /var/lib/kubelet/pods/8e830149-9c95-11e6-b969-0691ac4fce05/volumes/kubernetes.io~aws-ebs/test-volume [bind]
15581188260000000
<p>I wanted to test the <code>/metrics</code> API.<br>
15581188260000000
-vv -H "Content-Type: application/json" https://172.31.29.121:10250/metrics
15581188260000000
<pre><code>* Trying 172.31.29.121...
15581188260000000
* successfully set certificate verify locations:
15581188260000000
* TLSv1.2 (IN), TLS handshake, Certificate (11):
15581188260000000
curl: (60) SSL certificate problem: self signed certificate in certificate chain
15581188260000000
</code></pre><p><a href="https://marketplace.visualstudio.com/items?itemName=ballerina.ballerina" rel="nofollow noreferrer">Ballerina extension</a> was installed successfully in Visual Studio Code.
15581188260000000
<pre><code>import ballerina/http;
15581188260000000
name: "ballerina-abdennour-demo"
15581188260000000
sayHello (endpoint caller, http:Request request) {
15581188260000000
<p>Visual Studio Code reports an error:</p>
15581188260000000
<p>Or is it a missing package that should be installed inside the Ballerina SDK/Platform?</p>
15581188260000000
<p>When I run a client sanity test, the only exception returned is this:</p>
15581188260000000
kube-dns (dns.go:48] version: 1.14.4-2-g5584e04) seems to be working as expected:</p>
15581188260000000
</code></pre>
15581188260000000
Identity added: /root/.ssh/id_rsa (/root/.ssh/id_rsa)
15581188260000000
waiting for tearing down pods
15581188260000000
waiting for tearing down pods
15581188260000000
waiting for tearing down pods
15581188260000000
waiting for tearing down pods
15581188260000000
<p>I'm using a slightly modified version of the controller and worker install scripts from <a href="https://github.com/coreos/coreos-kubernetes/tree/master/multi-node/generic" rel="nofollow noreferrer">https://github.com/coreos/coreos-kubernetes/tree/master/multi-node/generic</a></p>
15581188260000000
export WORKER_IP=10.79.218.3
15581188260000000
openssl req -new -key apiserver-key.pem -out apiserver.csr -subj "/CN=kube-apiserver" -config openssl.cnf
15581188260000000
openssl genrsa -out admin-key.pem 2048
15581188260000000
<p>and this is <code>openssl.cnf</code></p>
15581188260000000
[ v3_req ]
15581188260000000
DNS.1 = coreos-2.tux-in.com
15581188260000000
</code></pre>
15581188260000000
[req_distinguished_name]
15581188260000000
[alt_names]
15581188260000000
<p>my worker machine is <code>coreos-3.tux-in.com</code> which resolves to lan ip <code>10.79.218.3</code></p>
15581188260000000
Nov 08 21:24:07 coreos-2.tux-in.com kubelet-wrapper[2018]: E1108 21:24:07.461340 2018 reflector.go:203] pkg/kubelet/kubelet.go:403: Failed to list *api.Node: Get https://coreos-2.tux-in.com:443/api/v1/nodes?fieldSelector=metadata.name%3D10.79.218.2&amp;resourceVersion=0: x509: certificate signed by unknown authority
15581188260000000
</code></pre><pre><code>root@master2:/home/osboxes# kubectl get pods --all-namespaces -o wide
15581188260000000
kube-system coredns-78fcdf6894-s4l8n 1/1 Running 1 18h 10.244.0.14 master2
15581188260000000
kube-system kube-flannel-ds-4br99 1/1 Running 1 18h 10.0.2.15 node
15581188260000000
root@master2:/home/osboxes# kubectl exec -it hello-kubernetes-55857678b4-4xbgd sh
15581188260000000
<p>Verbose:</p>
15581188260000000
I0703 08:44:01.255808 10307 round_trippers.go:386] curl -k -v -XGET -H "Accept: application/json, */*" -H "User-Agent: kubectl/v1.11.0 (linux/amd64) kubernetes/91e7b4f" 'https://192.168.0.33:6443/api/v1/namespaces/default/pods/hello-kubernetes-55857678b4-4xbgd'
15581188260000000
I0703 08:44:01.273967 10307 round_trippers.go:414] Content-Length: 2725
15581188260000000
I0703 08:44:01.290627 10307 round_trippers.go:386] curl -k -v -XPOST -H "X-Stream-Protocol-Version: v4.channel.k8s.io" -H "X-Stream-Protocol-Version: v3.channel.k8s.io" -H "X-Stream-Protocol-Version: v2.channel.k8s.io" -H "X-Stream-Protocol-Version: channel.k8s.io" -H "User-Agent: kubectl/v1.11.0 (linux/amd64) kubernetes/91e7b4f" 'https://192.168.0.33:6443/api/v1/namespaces/default/pods/hello-kubernetes-55857678b4-4xbgd/exec?command=sh&container=hello-kubernetes&container=hello-kubernetes&stdin=true&stdout=true&tty=true'
15581188260000000
I0703 08:44:01.317951 10307 round_trippers.go:414] Content-Type: text/plain; charset=utf-8
15581188260000000
(1045, "Access denied for user 'root'@'cloudsqlproxy~[cloudsql instance ip]' (using password: NO)")</p>
15581188260000000
connection.settings_dict
15581188260000000
>> import os
15581188260000000
'default': {
15581188260000000
'HOST': os.getenv('DB_HOST'),
15581188260000000
'TEST': {
15581188260000000
<pre><code>apiVersion: extensions/v1beta1
15581188260000000
app: aesh-web
15581188260000000
labels:
15581188260000000
value: [service ip]
15581188260000000
- name: DB_NAME
15581188260000000
name: cloudsql-db-credentials
15581188260000000
name: cloudsql-db-credentials
15581188260000000
"-instances=[instance-connection-name]:aesh-web-db=tcp:3306",
15581188260000000
readOnly: true
15581188260000000
</code></pre><p>We have a fairly large kubernetes deployment on GKE, and we wanted to make our life a little easier by enabling auto-upgrades. The <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/node-auto-upgrades" rel="nofollow noreferrer">documentation on the topic</a> tells you how to enable it, but not how it actually <strong>works</strong>.</p>
15581188260000000
<h1>What I did:</h1>
15581188260000000
<li>Nodepools had version <code>1.11.5-gke.X</code></li>
15581188260000000
<li>The nodepool would be updated to either <code>1.11.7-gke.3</code> (the default cluster version) or <code>1.11.7-gke.6</code> (the most recent version)</li>
15581188260000000
<p>So is Teleport a parallel technology to Container / Kubernetes / Immutable Infrastructure, or is it orthogonal, as in, can it be used in addition?</p>(How) Does Gravitational Teleport fit into container/Kubernetes environment?<p>I am new to Kops and a bit new to kubernetes as well. I managed to create a cluster with Kops and run a deployment and a service on it. Everything went well, and an ELB was created for me and I could access the application via this ELB endpoint.</p>
15581188260000000
name: django-app-service
15581188260000000
domainName: "my.personal-site.de"
15581188260000000
kind: Deployment
15581188260000000
minReadySeconds: 15
15581188260000000
maxUnavailable: 1
15581188260000000
2- docker run -d -p 5000:5000 --name registry registry:2
15581188260000000
<p>All of the above steps are working fine, with no problems at all.</p>
15581188260000000
3- minikube ssh
15581188260000000
<p>curl: (7) Failed to connect to 127.0.0.1 port 5000: Connection refused</p>
15581188260000000
<p>I am using this YAML file (I named it <em>ConsolePre.yaml</em>) to deploy my image using Kubernetes.</p>
15581188260000000
labels:
15581188260000000
targetPort: 9080
15581188260000000
type: NodePort
15581188260000000
labels:
15581188260000000
template:
15581188260000000
- containerPort: 9080
15581188260000000
<p>sudo kubectl apply -f /PATH_TO_YAML_FILE/ConsolePre.yaml</p>
15581188260000000
</code></pre>
15581188260000000
<p>I found the following message in the describe output:</p>
15581188260000000
</blockquote>
15581188260000000
<p>No relations/tables whatsoever appear in my DB:</p>
15581188260000000
</code></pre>
15581188260000000
drwx------ 2 999 docker 4096 Oct 30 11:22 global
15581188260000000
-rw------- 1 999 docker 1636 Oct 30 11:21 pg_ident.conf
15581188260000000
drwx------ 2 999 docker 4096 Oct 30 11:21 pg_serial
15581188260000000
drwx------ 2 999 docker 4096 Oct 30 11:21 pg_tblspc
15581188260000000
-rw------- 1 999 docker 22205 Oct 30 11:21 postgresql.conf
15581188260000000
<pre><code>root@information-system-deployment-5dccfcb7c9-54trz:/var/lib/postgresql/data# ls -l
15581188260000000
drwx------ 2 postgres postgres 4096 Oct 30 11:21 pg_commit_ts
15581188260000000
drwx------ 4 postgres postgres 4096 Oct 30 11:21 pg_multixact
15581188260000000
drwx------ 2 postgres postgres 4096 Oct 30 11:21 pg_stat
15581188260000000
-rw------- 1 postgres postgres 4 Oct 30 11:21 PG_VERSION
15581188260000000
-rw------- 1 postgres postgres 85 Oct 30 11:21 postmaster.pid
15581188260000000
metadata:
15581188260000000
app: information-system-deployment
15581188260000000
labels:
15581188260000000
name: information-system-db
15581188260000000
valueFrom:
15581188260000000
valueFrom:
15581188260000000
- containerPort: 5432
15581188260000000
- image: my-registry:5000/information-system-test:latest
15581188260000000
command: ["bash", "-c", "python main.py"]
15581188260000000
</code></pre>
15581188260000000
name: information-system-db-pv-volume
15581188260000000
capacity:
15581188260000000
path: "/tmp/data/postgres"
15581188260000000
metadata:
15581188260000000
- ReadWriteOnce
15581188260000000
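<p>For reference, the volume fragments above fit the usual hostPath PersistentVolume / PersistentVolumeClaim pattern. Here is a minimal sketch of that pattern; only the volume name, access mode and path are taken from the fragments, while the storage size, <code>storageClassName</code> and the claim name are assumptions.</p>
<pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
  name: information-system-db-pv-volume
spec:
  storageClassName: manual          # assumption
  capacity:
    storage: 1Gi                    # assumption; the fragment only shows "capacity:"
  accessModes:
  - ReadWriteOnce
  hostPath:
    path: "/tmp/data/postgres"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: information-system-db-pv-claim   # hypothetical name
spec:
  storageClassName: manual
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
</code></pre>
<p>Note that a hostPath volume lives on the node's own filesystem, so on a single-node setup such as Minikube the data ends up inside that node/VM.</p>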
etcd-minikube 1/1 Running 0 6h
15581188260000000
kube-proxy-j28zs 1/1 Running 0 6h
15581188260000000
storage-provisioner 1/1 Running 8 1d
15581188260000000
This is not a supported version skew and may lead to a malfunctional cluster.
15581188260000000
<p>Many Thanks
15581188260000000
Jan 3 21:28:47 master kubelet: I0103 21:28:47.015478 8726 kubelet_node_status.go:74] Attempting to register node master
15581188260000000
<p>My Linux version is:</p>
15581188260000000
<pre><code>[root@master ~]# docker -v
15581188260000000
kubeadm version: version.Info{Major:"1", Minor:"6+", GitVersion:"v1.6.0-alpha.0.2074+a092d8e0f95f52", GitCommit:"a092d8e0f95f5200f7ae2cba45c75ab42da36537", GitTreeState:"clean", BuildDate:"2016-12-13T17:03:18Z", GoVersion:"go1.7.4", Compiler:"gc", Platform:"linux/amd64"}
15581188260000000
f9d197b32eeb gcr.io/google_containers/kube-controller-manager-amd64:v1.5.1 "kube-controller-mana" 8 minutes ago Up 8 minutes k8s_kube-controller-manager.c989015b_kube-controller-manager-master_kube-system_403e1523940e3f352d70e32c97d29be5_812fd5f5
15581188260000000
434d49024d1f gcr.io/google_containers/pause-amd64:3.0 "/pause" 8 minutes ago Up 8 minutes k8s_POD.d8dbe16c_kube-scheduler-master_kube-system_3bfbd36dfb8c8f71984a0d812e4dad33_f8d4ad55
15581188260000000
<pre><code>[root@master ~]# yum list |grep kubernetes-cni.x86_64
15581188260000000
<pre><code>kubectl logs --follow -n kube-system deployment/nginx-ingress
15581188260000000
<pre><code>2018/08/27 21:13:35 Creating in-cluster Heapster client
15581188260000000
2018/08/27 21:15:17 Metric client health check failed: the server is currently unable to handle the request (get services heapster). Retrying in 30 seconds.
15581188260000000
osmsku---kubemaster02..local Ready <none> 140d v1.11.2
15581188260000000
77.5 is the docker interface ip</p>
15581188260000000
kibana-logging NodePort 10.99.101.8 <none> 5601:30275/TCP 4h k8s-app=kibana-logging
15581188260000000
NAME STATUS ROLES AGE VERSION EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
15581188260000000
k get pods -n=kube-system
15581188260000000
etcd-osmsku--prod-kubemaster01..local 1/1 Running 15 2h
15581188260000000
kube-apiserver-osmsku--prod-kubemaster01..local 1/1 Running 2 2h
15581188260000000
kube-flannel-ds-jbsfm 1/1 Running 3 2h
15581188260000000
kube-scheduler-osmsku--prod-kubemaster01..local 1/1 Running 2 2h
15581188260000000
</code></pre>*4 connect() failed (113: No route to host) while connecting to kubernetes dashboard upstream<p>When I deploy the following I get this error:</p>
15581188260000000
kind: Ingress
15581188260000000
helm.sh/chart: {{ include "marketplace.chart" . }}
15581188260000000
{{- toYaml . | nindent 4 }}
15581188260000000
{{- range .Values.front.ingress.tls }}
15581188260000000
secretName: {{ .secretName }}
15581188260000000
- host: {{ . | quote }}
15581188260000000
backend:
15581188260000000
{{- end }}
15581188260000000
<p><code>marketplace.name</code> is defined in _helpers.tpl: </p>
15581188260000000
<p><code>.Chart.Name</code> is an internal variable and the order of preference is explained <a href="https://github.com/helm/helm/issues/2913#issuecomment-328609510" rel="nofollow noreferrer">here</a> but even setting <code>nameOverride</code> the error is the same.</p>
15581188260000000
enabled: true
15581188260000000
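<p>As a reference point, the conventional <code>_helpers.tpl</code> definition generated by <code>helm create</code> looks roughly like the sketch below; the exact body in the chart in question may differ, but it shows how <code>nameOverride</code> and <code>.Chart.Name</code> interact.</p>
<pre><code>{{/* Sketch of a typical helper; the real chart's definition may differ. */}}
{{- define "marketplace.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}}
{{- end -}}
</code></pre>
<p>With a definition like this, <code>{{ include "marketplace.name" . }}</code> falls back to <code>.Chart.Name</code> only when <code>nameOverride</code> is empty.</p>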
When I upgrade the Kubernetes version to 1.6.3, it does not work. There is no log file created under /var/log. How do I get the Kubernetes log file?
15581188260000000
Warning: Tiller is already installed in the cluster.
15581188260000000
Error: cannot connect to Tiller
15581188260000000
<p><a href="https://kubernetes.io/docs/getting-started-guides/ubuntu/troubleshooting/" rel="nofollow noreferrer">https://kubernetes.io/docs/getting-started-guides/ubuntu/troubleshooting/</a></p>
15581188260000000
</code></pre>
15581188260000000
kind: DaemonSet
15581188260000000
metadata:
15581188260000000
serviceAccountName: my-sa-account
15581188260000000
securityContext:
15581188260000000
kind: DaemonSet
15581188260000000
metadata:
15581188260000000
serviceAccountName: my-sa-account
15581188260000000
securityContext:
15581188260000000
<pre><code>...
15581188260000000
SigCgt: 0000000000014002
15581188260000000
CapAmb: 0000000000000000
15581188260000000
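<p>For reference, capabilities are normally granted to a container through its <code>securityContext</code>. The sketch below is only illustrative: the container name, image, and <code>NET_ADMIN</code> are example values, not necessarily those in the original DaemonSet.</p>
<pre><code>      containers:
      - name: app                       # hypothetical container name
        image: busybox:1.28             # hypothetical image
        securityContext:
          capabilities:
            add: ["NET_ADMIN"]          # example capability; added capabilities show up in
                                        # CapPrm/CapEff of the process, while CapAmb typically stays 0
</code></pre>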
<pre><code>apiVersion: v1
15581188260000000
selector:
15581188260000000
port: 80
15581188260000000
name: busybox1
15581188260000000
subdomain: default-subdomain
15581188260000000
kind: Pod
15581188260000000
name: busybox
15581188260000000
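<p>The fragments above match the standard Kubernetes "DNS for Services and Pods" pattern: a headless Service named after the subdomain, plus Pods that set <code>hostname</code> and <code>subdomain</code>. A minimal sketch follows; the <code>busybox-1</code> hostname and the sleep command are assumptions.</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: default-subdomain      # must match the pods' "subdomain" field
spec:
  clusterIP: None              # headless: DNS records are created for the backing pods
  selector:
    name: busybox
  ports:
  - name: foo
    port: 80
---
apiVersion: v1
kind: Pod
metadata:
  name: busybox1
  labels:
    name: busybox
spec:
  hostname: busybox-1          # assumed hostname
  subdomain: default-subdomain
  containers:
  - name: busybox
    image: busybox:1.28
    command: ["sleep", "3600"]
</code></pre>
<p>With this in place the pod is resolvable as <code>busybox-1.default-subdomain.default.svc.cluster.local</code> (assuming the <code>default</code> namespace).</p>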
<p>In the kb8-master2 </p>
15581188260000000
mv /home/${USER}/sa.key /etc/kubernetes/pki/
15581188260000000
mv /home/${USER}/admin.conf /etc/kubernetes/admin.conf
15581188260000000
[certificates] Generated etcd/server certificate and key.
15581188260000000
[certificates] Generated etcd/healthcheck-client certificate and key.
15581188260000000
>`sudo kubeadm alpha phase kubelet config write-to-disk --config kubeadm-config.yaml`
15581188260000000
Output:-
15581188260000000
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
15581188260000000
<p>Output:-
15581188260000000
<p>No output</p>
15581188260000000
<p>I am really stuck here. Further </p>
15581188260000000
apiServerCertSANs:
15581188260000000
extraArgs:
15581188260000000
initial-cluster: "kb8-master1=https://10.240.0.4:2380,kb8-master2=https://10.240.0.33:2380"
15581188260000000
peerCertSANs:
15581188260000000
</code></pre>
15581188260000000
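<p>Putting the fragments above together, the <code>kubeadm-config.yaml</code> for the second master in a kubeadm 1.11-era HA setup would look roughly like the sketch below (v1alpha2 schema, as implied by the <code>kubeadm alpha phase</code> commands). The load-balancer endpoint and any field not shown in the fragments are placeholders/assumptions.</p>
<pre><code>apiVersion: kubeadm.k8s.io/v1alpha2
kind: MasterConfiguration
apiServerCertSANs:
- "LOAD_BALANCER_DNS"                                            # placeholder
api:
  controlPlaneEndpoint: "LOAD_BALANCER_DNS:LOAD_BALANCER_PORT"   # placeholder
etcd:
  local:
    extraArgs:
      initial-cluster: "kb8-master1=https://10.240.0.4:2380,kb8-master2=https://10.240.0.33:2380"
      initial-cluster-state: existing
    serverCertSANs:
    - kb8-master2
    - 10.240.0.33
    peerCertSANs:
    - kb8-master2
    - 10.240.0.33
</code></pre>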
@olivierboukili: I just published A GCP / Kubernetes production migration retrospective (part 1) https://t.co/RngsmtMwor
15581194720000000
@OracleDevs: Get Hands-on Microservices on Kubernetes and Autonomous Database! Failure of any microservice limits downtime to a portion…
15581194720000000
@CloudExpo: OpsRamp’s to Present AI & AIOps Education Track at CloudEXPO @OpsRamp #HybridCloud #AI #IoT #AIOps #DevOps #Blockchain #Cl…
15581194720000000
@Azure: Public Preview now supported for Windows Server containers on #Azure #Kubernetes Service. Modernize Windows workloads to get the…
15581194720000000
@azureflashnews: Announcing the preview of Windows Server containers support in Azure Kubernetes Service https://t.co/lqmAJZgCbR #Azur…
15581194720000000
@_precompiled: Interested in meeting some nice ppl and talk about Prometheus and/or Kubernetes? Then this @Meetup might be for you! (Als…
15581194720000000
@Taylorb_msft: I am excited and honored to be announcing the public preview of Windows Server containers support in Azure Kubernetes Ser…
15581194720000000
@OpenAtMicrosoft: Interested in real-time sentiment analysis of tweets? Check out this sample KEDA (Kubernetes-based event-driven auto…
15581194720000000
Announcing the preview of Windows Server containers support in Azure Kubernetes Service https://t.co/E90N6qj05w
15581194720000000
@CloudExpo: Exhibitor Directory Published ! https://t.co/hmkGxHT58m #Cloud #CIO #HybridCloud #AI #IoT #IIoT #AIOps #DevOps #CloudNativ…
15581194720000000
@CloudExpo: It's 4:00 AM in Silicon Valley! Do You Know Where Your Data Is? #HybridCloud #AI #AIOps #CIO #IoT #DevOps #SDN #CloudNative…
15581194720000000
@kinvolkio: Announcing Lokomotive, our @kubernetesio distribution and engine to drive cutting-edge #Linux technologies into Kubernetes.…
15581194720000000
HNews: Lokomotive: An engine to drive cutting-edge Linux technologies into Kubernetes https://t.co/0LIZvhtKfU #linux
15581194720000000
@Azure: Public Preview now supported for Windows Server containers on #Azure #Kubernetes Service. Modernize Windows workloads to get the…
15581194720000000
Announcing the preview of Windows Server containers support in Azure Kubernetes Service https://t.co/6qsgGeoGAO
15581194720000000
Missed our hybrid #cloud strategies webinar this week? Watch the recording now. #MariaDB #Kubernetes https://t.co/WzPQsCcmva
15581194720000000
@CloudExpo: Join CloudEXPO Silicon Valley June 24-26 at Biggest Expo Floor in 5 Years ! #BigData #HybridCloud #Cloud #CloudNative #Serv…
15581194720000000
@Kingwulf: #ApacheSpark on K8S by Palantir engineering https://t.co/sTlkBvweTA
15581194720000000
@danveloper: This shouldn’t have to be said, but do not put your Kubernetes API Server on the public internet.
15581194720000000
#Windows on @kubernetesio went stable on 1.14 in March and in May as promised AKS enables Windows and 1.14 as publi… https://t.co/e94E4YhjAo
15581194720000000
5 things I wish I'd known about Kubernetes before I started
15585496040000000
9 Lessons Learned Migrating from Heroku to Kubernetes with Zero Downtime
15585496040000000
10 open-source Kubernetes tools for highly effective SRE and Ops Teams
15585496040000000
065: Change, Dropbox, GitHub DDoS, Kubernetes, Docker, and More
15585496040000000
076: Hiring in DevOps, Security, Kubernetes, and More
15585496040000000
085: Ansible, Kubernetes, Jr Engineers, Prime Disaster, Awesome TUIs, and More
15585496040000000
090: DevOps, GitOps, Commons Clause, NotPetya, Kubernetes, Prometheus, More
15585496040000000
098: Open Source, Kubernetes, Vote, Ansible, Serverless, Amazon HQ2 and More
15585496040000000
106: KubeKhan, KubeCon, Etcd, Licenses, Securing Kubernetes, JFrog, More
15585496040000000
116: OSS Licenses, Kubernetes, .dev Grossness, Hashicorp, Ansible, Knative, More
15585496040000000
"cyber-” comes from "cybernetics”, which comes from "Kubernetes”
15585496040000000
[podcast] the Impact and Future of Kubernetes
15585496040000000
[Webinar] Kubernetes Monitoring Best Practices from KubeCon
15585496040000000
A comparison of all Kubernetes ingresses
15585496040000000
A complete guide to the new Kubernetes Operator SDK released today
15585496040000000
A developer onramp to Kubernetes with GKE
15585496040000000
A few things I've learned about Kubernetes
15585496040000000
A Guide to the Kubernetes Networking Model
15585496040000000
A Kubernetes quick start for people who know just enough about Docker to get by
15585496040000000
A Multitude of Kubernetes Deployment Tools: Kubespray, Kops, and Kubeadm
15585496040000000
A reason for unexplained connection timeouts on Kubernetes/Docker
15585496040000000
A short, concise and high-level introduction to Kubernetes
15585496040000000
A Stronger Foundation for Creating and Managing Kubernetes Clusters
15585496040000000
A Text to Speech Server with gRPC and Kubernetes [video]
15585496040000000
A VPN for Minikube: transparent networking access to your Kubernetes cluster
15585496040000000
Accelerate Kubernetes at CoreOS (YC S13) as a Field Eng, Eng, or Product Manager
15585496040000000
Adopting Kubernetes step by step
15585496040000000
Adventures in High Availability Logging – ELK Stack on Kubernetes
15585496040000000
All the Fun in Kubernetes 1.9 – The New Stack
15585496040000000
Amazon EKS – Highly available and scalable Kubernetes service
15585496040000000
Amazon joins the rush to Kubernetes
15585496040000000
AMD GPU device plugin for Kubernetes
15585496040000000
An open source operator for Kafka on Kubernetes
15585496040000000
Analysis of Open Source Kubernetes Operators
15585496040000000
Announcing Dotmesh 0.5 the Dotmesh Kubernetes Operator
15585496040000000
Announcing Limited Availability of DigitalOcean Kubernetes
15585496040000000
Announcing Terraform Support for Kubernetes Service on AWS
15585496040000000
Anthos: Kubernetes Infrastructure to Make Developers More Productive
15585496040000000
Apache Spark on Kubernetes
15585496040000000
Application Tracing on Kubernetes with AWS X-Ray
15585496040000000
Are you learning Kubernetes/Docker? What resources are you using?
15585496040000000
As Kubernetes grows, a startup ecosystem develops in its wake
15585496040000000
Ask HN: Are you learning Kubernetes/Docker? What resources are you using?
15585496040000000
Ask HN: Could Kubernetes be written to use alternatives?
15585496040000000
Ask HN: Docker, Kubernetes, Openshift, etc – how do you deploy your products?
15585496040000000
Ask HN: How many pods for Kubernetes should be set at a large scale website?
15585496040000000
Ask HN: Kubernetes or ECS
15585496040000000
Ask HN: Single node Kubernetes on a VPS?
15585496040000000
Ask HN: Who is using Kubernetes or Docker in production and how has it been?
15585496040000000
Assholes, SRE, Container and Kubernetes Security, Ansible for Windows, and More
15585496040000000
Auto-Discovered Maps for Kubernetes Apps
15585496040000000
Automated Testing for Kubernetes and Helm Charts Using Terratest
15585496040000000
Automating TLS and DNS with Kubernetes Ingress
15585496040000000
Autoscaling for Kubernetes workloads
15585496040000000
AWS EKS and OpenFaaS Operator for Serverless on Kubernetes
15585496040000000
AWS S3 compatible storage on Kubernetes
15585496040000000
Azure brings new Serverless and DevOps capabilities to the Kubernetes community
15585496040000000
Azure Kubernetes Service (AKS) Is Now GA
15585496040000000
Benchmark results of Kubernetes network plugins over 10Gb/s network
15585496040000000
Beyond Kubernetes: building a complete orchestration platform
15585496040000000
Bitnami Kubernetes Production Runtime
15585496040000000
Boosting Your Kubernetes Productivity
15585496040000000
Bootstrapping Kubernetes Google Cloud Platform without scripts
15585496040000000
Brigade: Event-driven scripting for Kubernetes
15585496040000000
Bringing Kubernetes to Containership
15585496040000000
Build AWS S3 Compatible Cloud Storage with Kubernetes and Minio
15585496040000000
Build your own multi-node Kubernetes cluster
15585496040000000
Building a Control Plane for an Envoy-Powered API Gateway on Kubernetes
15585496040000000
Building a Kubernetes Operator for Prometheus and Thanos
15585496040000000
Building an Enterprise Kubernetes Strategy Online Training Today 1PM ET
15585496040000000
Building Distributed Systems with Kubernetes
15585496040000000
Bundle Kubernetes Fundamentals and CKA Certification (Course and Certification)
15585496040000000
Canary deployments on kubernetes using Traefik
15585496040000000
Canonical Distribution of Kubernetes: Dev Summary 2017 (Week 32)
15585496040000000
Canonical's Web and Design team use Kubernetes to deploy their sites
15585496040000000
Centralized logging on Kubernetes using fluentd and fluent-bit
15585496040000000
Checking Out the Kubernetes Service Catalog
15585496040000000
CI/CD Pipeline with Auto Deploy to Kubernetes Using GitLab and Helm
15585496040000000
CircleCI's outrageous decision to orchestrate Kubernetes with Nomad
15585496040000000
CLI to get overview of resource requests, utilization in Kubernetes cluster
15585496040000000
Cloud Foundry adds native Kubernetes support for running containers
15585496040000000
Cloud Native Computing Foundation Adopts Kubernetes-Friendly Container Runtime
15585496040000000
Cluster Schedulers: Kubernetes and Nomad
15585496040000000
CNCF Announces Kubernetes 1.9 – SD Times
15585496040000000
Collecting application logs in Kubernetes
15585496040000000
Comparing Kubernetes CNI Network Providers
15585496040000000
Comparing Mesos and Kubernetes
15585496040000000
Comprehensive Container-Based Service Monitoring with Kubernetes and Istio
15585496040000000
Configuring permissions in Kubernetes with RBAC
15585496040000000
Connecting Elixir Nodes with Libcluster, Locally and on Kubernetes
15585496040000000
Container orchestration: Moving from fleet to Kubernetes
15585496040000000
Containerd namespaces for Docker, Kubernetes, and beyond
15585496040000000
Containership Launches Its Fully Managed Kubernetes Service
15585496040000000
Continuous Delivery with Spinnaker and Kubernetes
15585496040000000
Continuous Deployment with Google Container Engine and Kubernetes
15585496040000000
Convergence to Kubernetes
15585496040000000
CoreOS (YC S13) Is Hiring – Help Accelerate Kubernetes (BER/SFO/NYC/remote)
15585496040000000
CoreOS Extends Kubernetes to Microsoft Azure
15585496040000000
CoreOS Tectonic Now Installs Kubernetes on OpenStack
15585496040000000
Create a Kubernetes cron job in OKD
15585496040000000
Create, manage, snapshot and scale Kubernetes infrastructure in the public cloud
15585496040000000
Creating a Kubernetes Cluster on DigitalOcean with Python and Fabric
15585496040000000
Creating Multi-Tenant Moodle Service on Kubernetes Using Operator Pattern
15585496040000000
Crossword Puzzles, Kubernetes and CI/CD
15585496040000000
Customizing Kubernetes DNS Using Consul
15585496040000000
CVE-2018-1002105 – Kubernetes privilege escalation flaw
15585496040000000
Data Analytics, Meet Containers: Kubernetes Operator for Apache Spark in Beta
15585496040000000
Debug Kubernetes-Based Services with Datawire's Telepresence – The New Stack
15585496040000000
Debugging a TCP socket leak in a Kubernetes cluster
15585496040000000
Declarative application management in Kubernetes
15585496040000000
Deep Dive into Apache Spark Resilience on Kubernetes
15585496040000000
Deis Workflow – Open Source Kubernetes PaaS
15585496040000000
Deploy a HIG Stack in Kubernetes for Monitoring
15585496040000000
Deploy Apps to Kubernetes via REST, Using Helm
15585496040000000
Deploy microservices on Kubernetes
15585496040000000
Deploy SciKit Learn Model into Kubernetes Production in Under 2 Minutes
15585496040000000
Deploying a Node App to Google Cloud with Kubernetes
15585496040000000
Deploying Jenkins to a Kubernetes Cluster Using Helm
15585496040000000
Deploying Kubernetes On-Premise with RKE and Deploying OpenFaaS on It – Part 2
15585496040000000
Deploying Nginx ingress with let’s encrypt on Kubernetes using Helm
15585496040000000
Deploying Spark on Kubernetes
15585496040000000
Deploying to Kubernetes at ZipRecruiter
15585496040000000
Deploying WordPress on GKE Kubernetes
15585496040000000
Dev to Production Workflow Coverage by Popular Tools – Kubernetes, Heroku
15585496040000000
Developments around Microservices, API Gateways, Kubernetes and Service Mesh
15585496040000000
DevOps'ish 083: Survey, Career Advice, No Bastions, Kubernetes, Ansible, Etcd
15585496040000000
DevOps’ish 049: Basics, Kubernetes, Intel Is Blowing It, IDEs, Go, and More
15585496040000000
Diamanti Offers a Plug-And-Play Kubernetes Deployment
15585496040000000
Distributed App Deployment with Kubernetes and MongoDB Atlas
15585496040000000
Django on Kubernetes
15585496040000000
Docker Clustering Tools Compared: Kubernetes vs. Docker Swarm
15585496040000000
Docker embraces Kubernetes, will let customers run it alongside Swarm
15585496040000000
Docker Fueled Nostalgia: Building a Retro-Gaming Rig on Kubernetes
15585496040000000
Docker orchestration with Kubernetes and Rancher
15585496040000000
Docker Swarm vs. Mesos vs. Kubernetes – DZone Cloud
15585496040000000
Does invoking AWS CLI through serverless (OpenFaaS on Kubernetes) makes sense?
15585496040000000
Draft: A tool for developers to create cloud-native applications on Kubernetes
15585496040000000
Drone CI announces native Kubernetes support
15585496040000000
Dynamic Kubernetes Client for Ansible
15585496040000000
Easily Install Kubernetes on Ubuntu Step by Step
15585496040000000
Effective Docker HealthChecks for Node.js to Make Running in Kubernetes Reliable
15585496040000000
Embracing failures and cutting costs: Spot instances in Kubernetes
15585496040000000
Enforcing Kubernetes network policies with iptables
15585496040000000
Episode #126 Kubernetes for Pythonistas – [Talk Python to Me Podcast]
15585496040000000
Everything Kubernetes
15585496040000000
Experiences porting Heron to Kubernetes
15585496040000000
Expose Kubernetes Services Over HTTPS with Ngrok
15585496040000000
Extending Docker Enterprise Edition to Support Kubernetes – Docker Blog
15585496040000000
FaaS-netes (Functions as a Service) for Kubernetes and Docker Swarm
15585496040000000
Fast Serverless Functions for Kubernetes
15585496040000000
First Impressions: Docker for Mac with Kubernetes
15585496040000000
Fishing for Miners – Cryptojacking Honeypots in Kubernetes
15585496040000000
Fission: Serverless Functions for Kubernetes
15585496040000000
Forced Evolution: Shopify's Journey to Kubernetes
15585496040000000
Frakti – hypervisor-based container runtime for Kubernetes
15585496040000000
From bare-metal to Kubernetes
15585496040000000
Functions for Kubernetes
15585496040000000
Gardener: Manage Kubernetes clusters across multiple cloud providers
15585496040000000
Get Kubernetes Cluster Metrics with Prometheus in 5 Minutes
15585496040000000
Get Started with OpenFaaS and KinD (Kubernetes in Docker)
15585496040000000
Getting Started with DevOps, Containers and Kubernetes
15585496040000000
Getting started with Kubernetes: 5 misunderstandings, explained
15585496040000000
GitHub actions to deliver on kubernetes
15585496040000000
GitHub now uses Kubernetes
15585496040000000
GitOps: High Velocity CICD for Kubernetes
15585496040000000
GoCD introduces native integrations for Kubernetes
15585496040000000
Google Cloud Platform: Cloud Audit Logging for Kubernetes Engine
15585496040000000
Google donates money to help run the Kubernetes infrastructure
15585496040000000
Google Kubernetes Engine launches multi-master, regional clusters to beta
15585496040000000
Google Skaffold – Easy and Repeatable Kubernetes Development
15585496040000000
Google's real Kubernetes magic is all about community, not code
15585496040000000
GPUs in Kubernetes Engine now available in beta
15585496040000000
gRPC Load Balancing Inside Kubernetes
15585496040000000
Hacking and Hardening Kubernetes Clusters by Example
15585496040000000
HAProxy Ingress Controller for Kubernetes
15585496040000000
HashiCorp Consul: Connect Sidecar on Kubernetes
15585496040000000
Health checking gRPC servers on Kubernetes
15585496040000000
Heptio Comes Out of Stealth Mode with a Kubernetes Configuration Tool, Ksonnet
15585496040000000
Heptio launches its Kubernetes undistribution
15585496040000000
Highly Effective Kubernetes Deployments with GitOps
15585496040000000
Host Serverless Event Gateway on Kubernetes
15585496040000000
How Atlassian does Kubernetes
15585496040000000
How CloudBoost uses Docker, Kubernetes and Azure to scale 60,000+ apps
15585496040000000
How does the Kubernetes scheduler work?
15585496040000000
How kubernetes can break: networking
15585496040000000
How Kubernetes is making contributing easy
15585496040000000
How Nav Moved to Kubernetes with Houston, by Turbine Labs
15585496040000000
How Serverless Technologies Impact Kubernetes
15585496040000000
How to Build a Kubernetes Cluster with ARM Raspberry Pi Then Run .NET Core
15585496040000000