6

I created an application image on Google Cloud, and while trying to push it through Google Cloud Shell, I get the following error:

08db9ff34fc6: Pushing [==================================================>] 73.38 MB
5313937c4c49: Pushing [==========================================>        ] 62.36 MB/73.37 MB
162f935b1198: Pushing [==========================>                        ] 84.09 MB/155.9 MB
dcf909146faa: Pushing [==================================================>] 6.787 MB
23b9c7b43573: Pushing [==================================================>]  4.23 MB
**denied: Unable to determine the upload's size.**

I searched hard for solutions but didn't find a single one. Please help.

Rao
  • 20,781
  • 11
  • 57
  • 77
  • See if this helps - https://github.com/docker/docker/issues/2292 – Rao Mar 14 '17 at 06:12
  • No, that thread is basically discussing "Connection reset by peer"; there was no size-limit issue or anything else like that with my Docker image. My image is only 313 MB. – user1662655 Mar 14 '17 at 06:28
  • Still pending on this issue; I am in contact with Google Cloud support. This might be because I am using the Free tier of Google Cloud. I will update if I find a resolution. – user1662655 Mar 17 '17 at 03:13
  • Did you ever hear back from GCP support? – joshwa Jul 03 '18 at 07:28

2 Answers

17

I had the same problem. Try to pull another image that you have configured there:

docker pull gcr.io/...

For me it initially failed with AccessDenied.

Solution:

To fix it, I went to the Storage browser in the Google Cloud Console:

https://console.cloud.google.com/storage/browser

Go to the artifacts.<project-name>.appspot.com bucket and give yourself Storage access. Then it worked.
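If you prefer the CLI over the console, the same grant can be sketched with gsutil. The project ID and email below are placeholders, not values from the question; substitute your own, and note that `objectAdmin` is one reasonable role choice for pushing (a narrower role may suffice for pulls only):

```shell
# Hypothetical values -- replace with your own project ID and account email.
PROJECT_ID="my-project"
USER_EMAIL="you@example.com"

# Grant yourself object admin on the Container Registry backing bucket
# (artifacts.<project>.appspot.com), mirroring the console steps above.
gsutil iam ch "user:${USER_EMAIL}:objectAdmin" \
  "gs://artifacts.${PROJECT_ID}.appspot.com"

# Inspect the bucket's IAM policy to confirm the binding was added.
gsutil iam get "gs://artifacts.${PROJECT_ID}.appspot.com"
```

After the grant propagates, retry the `docker push`; the "Unable to determine the upload's size" denial should disappear if missing bucket access was the cause.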

Daniel Hasegan
  • 785
  • 1
  • 8
  • 15
  • thanks @Daniel . this helped me. Details : added myemailId to have Storage Admin access in Bucket permissions – BrB Jan 29 '20 at 18:08
5

I just ran into this, for a container registry that had been working.

We had set the registry as private and then went to the storage level, and added an identity from a customer organization as a Storage Viewer. We changed the permission granularity from object-level to bucket-level policy, to simplify permission management.

Setting bucket-level policy was the mistake.

Reverting permission granularity to object level cured the problem.

Update: Daniel Hasegan's answer above is correct. It is possible to enable bucket-level permissions, as long as every account accessing the bucket has the correct rights to push or pull as needed. If you are running on Google Kubernetes Engine, you must ensure that the service account running your cluster nodes has at least Storage Object Viewer permissions, or your pods will fail with ImagePullBackOff errors.
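The GKE check described above can be sketched as two CLI steps. The cluster name, zone, and project ID are placeholders, not values from this thread; substitute your own:

```shell
# Hypothetical values -- replace with your own cluster, zone, and project.
PROJECT_ID="my-project"

# 1. Find the service account your GKE node pool runs as.
gcloud container clusters describe my-cluster \
  --zone us-central1-a \
  --format="value(nodeConfig.serviceAccount)"

# 2. Grant that service account read access to registry objects
#    (Storage Object Viewer), so nodes can pull images from gcr.io
#    and pods stop failing with ImagePullBackOff.
gcloud projects add-iam-policy-binding "${PROJECT_ID}" \
  --member="serviceAccount:NODE_SA@${PROJECT_ID}.iam.gserviceaccount.com" \
  --role="roles/storage.objectViewer"
```

Replace `NODE_SA@...` with the account printed by step 1 (for the default compute service account this is typically `<project-number>-compute@developer.gserviceaccount.com`).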

Eric Schoen
  • 668
  • 9
  • 16