Grafana pod keeps restarting after helm install
I have a clean AKS cluster to which I deployed the prometheus-operator chart. The Grafana pod is showing a ton of restarts. My cluster version is 1.11.3. Grafana logs are below. Has anyone else encountered this issue?



File in configmap grafana-dashboard-k8s-node-rsrc-use.json ADDED
Traceback (most recent call last):
File "/usr/local/lib/python3.6/site-packages/urllib3/response.py", line 543, in _update_chunk_length
self.chunk_left = int(line, 16)
ValueError: invalid literal for int() with base 16: b''
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/local/lib/python3.6/site-packages/urllib3/response.py", line 302, in _error_catcher
yield
File "/usr/local/lib/python3.6/site-packages/urllib3/response.py", line 598, in read_chunked
self._update_chunk_length()
File "/usr/local/lib/python3.6/site-packages/urllib3/response.py", line 547, in _update_chunk_length
raise httplib.IncompleteRead(line)
http.client.IncompleteRead: IncompleteRead(0 bytes read)
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/app/sidecar.py", line 58, in <module>
main()
File "/app/sidecar.py", line 54, in main
watchForChanges(label, targetFolder)
File "/app/sidecar.py", line 23, in watchForChanges
for event in w.stream(v1.list_config_map_for_all_namespaces):
File "/usr/local/lib/python3.6/site-packages/kubernetes/watch/watch.py", line 124, in stream
for line in iter_resp_lines(resp):
File "/usr/local/lib/python3.6/site-packages/kubernetes/watch/watch.py", line 45, in iter_resp_lines
for seg in resp.read_chunked(decode_content=False):
File "/usr/local/lib/python3.6/site-packages/urllib3/response.py", line 626, in read_chunked
self._original_response.close()
File "/usr/local/lib/python3.6/contextlib.py", line 99, in __exit__
self.gen.throw(type, value, traceback)
File "/usr/local/lib/python3.6/site-packages/urllib3/response.py", line 320, in _error_catcher
raise ProtocolError('Connection broken: %r' % e, e)
urllib3.exceptions.ProtocolError: ('Connection broken: IncompleteRead(0 bytes read)', IncompleteRead(0 bytes read))
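A minimal sketch of how the restart count and these sidecar logs can be pulled, assuming the chart was installed into a "monitoring" namespace; the label selector and the sidecar container name are assumptions, so check kubectl describe pod for the exact names:

# show the Grafana pod and its restart count (namespace/label are assumptions)
kubectl get pods -n monitoring -l app=grafana

# tail the dashboard sidecar's logs (container name is an assumption)
kubectl logs <grafana-pod-name> -n monitoring -c grafana-sc-dashboard --tail=50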









  • Looks like you have a python sidecar. Do you have the deployment/pod definition for grafana?
    – Rico
    Nov 10 at 2:02
  • Yes, there are three containers in the pod. kiwigrid/k8s-sidecar:0.0.3 kiwigrid/k8s-sidecar:0.0.3 grafana/grafana:5.3.1
    – Jerry Joyce
    Nov 12 at 17:34
  • What did you use to install this? The guide I followed doesn't have sidecars.
    – Rico
    Nov 12 at 18:13
  • helm install stable/prometheus-operator
    – Jerry Joyce
    Nov 13 at 19:18
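For context, a rough sketch of the install above and of confirming the three containers mentioned in the comments; the release name, namespace, and label selector are placeholders, and the flags are Helm 2 syntax matching the era of this chart:

# install the chart (Helm 2 style --name flag; release name is a placeholder)
helm install stable/prometheus-operator --name prom --namespace monitoring

# list the container images running in the Grafana pod
kubectl get pods -n monitoring -l app=grafana \
  -o jsonpath='{.items[0].spec.containers[*].image}'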
kubernetes grafana kubernetes-helm azure-kubernetes prometheus-operator






asked Nov 9 at 18:51 by Jerry Joyce
edited Nov 9 at 23:48 by Emruz Hossain
1 Answer
Based on the Prometheus operator repository... The sidecar container in the Grafana pod is failing to contact Grafana and reload/refresh the dashboards defined in the ConfigMaps being watched.



So this is a symptom of the Grafana container failing... can you check the logs of the Grafana container inside your Grafana pod?
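A rough sketch of how to check that; the pod name is a placeholder and the container is assumed to be named grafana (verify with kubectl describe pod):

# restart counts and the reason for the last termination of each container
kubectl describe pod <grafana-pod-name> -n monitoring

# logs of the Grafana container only; --previous shows the crashed instance
kubectl logs <grafana-pod-name> -n monitoring -c grafana --previous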






answered Nov 11 at 15:25 by Carlos
  • The logs for the Grafana container appear normal and I am able to view the dashboards in a browser. The pod restarts have also leveled out. There were 280 in the first 12 hours or so and none since. The dashboard appears to be working, but it is a bit troubling that I am still seeing failures in the logs for the sidecar containers as in the original question.
    – Jerry Joyce
    Nov 12 at 17:38










