Transfer closed with outstanding read data remaining error

when retrieving data from a URL using curl, I sometimes (in 80% of the cases) get

error 18: transfer closed with outstanding read data remaining

Part of the returned data is then missing. The weird thing is that this never occurs when CURLOPT_RETURNTRANSFER is set to false, that is, when curl_exec doesn’t return the data but displays the content directly.

What could be the problem? Can I set some of the options to avoid such behaviour?

asked Nov 18, 2009 at 23:52 by David

The error string is quite simply exactly what libcurl sees: since it is receiving a chunked encoding stream it knows when there is data left in a chunk to receive. When the connection is closed, libcurl knows that the last received chunk was incomplete. Then you get this error code.

There’s nothing you can do to avoid this error with the request unmodified, but you can try to work around it by issuing an HTTP 1.0 request instead (since chunked encoding won’t happen then). The fact is, though, that this is most likely a flaw in the server or in your network/setup somehow.
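For readers using the curl command line, the workaround above can be sketched like this (the URL is a placeholder; --http1.0 forces a plain HTTP/1.0 request, so the server cannot use chunked transfer encoding):

```shell
# Sketch of the HTTP/1.0 workaround (hypothetical URL).
fetch_http10() {
    # $1 = URL, $2 = output file
    # --http1.0 disables HTTP/1.1 features such as chunked transfer encoding.
    curl --http1.0 --silent --show-error --fail --output "$2" "$1"
}
# Usage: fetch_http10 "http://example.com/endpoint" response.bin
```

Note that HTTP/1.0 also disables persistent connections, so this trades throughput for reliability.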

answered Dec 4, 2009 at 18:52 by Daniel Stenberg

I bet this is related to a wrong Content-Length header sent by the peer.
My advice is to let curl set the length by itself.

answered Nov 19, 2009 at 8:17 by Christophe Eblé

I was seeing this error when using Guzzle as well. The following header fixed it for me:

'headers' => [
    'accept-encoding' => 'gzip, deflate',
],

I issued the request with Postman which gave me a complete response and no error.
Then I started adding the headers that Postman sends to the Guzzle request and this was the one that fixed it.
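On the curl command line, the closest equivalent of this Guzzle fix is --compressed, which sends an Accept-Encoding header and transparently decodes the response (a sketch; the URL and file name are placeholders):

```shell
# Sketch: request a compressed response, as the Guzzle header above does.
fetch_compressed() {
    # $1 = URL, $2 = output file
    # --compressed asks for gzip/deflate and decompresses automatically.
    curl --compressed --silent --show-error --output "$2" "$1"
}
# Usage: fetch_compressed "https://example.com/api" response.json
```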

answered Apr 30, 2019 at 10:16 by rambii

I had the same problem, but managed to fix it by suppressing the ‘Expect: 100-continue’ header that cURL usually sends (the following is PHP code, but should work similarly with other cURL APIs):

curl_setopt($curl, CURLOPT_HTTPHEADER, array('Expect:'));

By the way, I am sending calls to the HTTP server that is included in the JDK 6 REST stuff, which has all kinds of problems. In this case, it first sends a 100 response, and then with some requests doesn’t send the subsequent 200 response correctly.
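The same header suppression on the curl command line looks like this (a sketch; the URL and body file are placeholders; a header name with an empty value removes the header curl would otherwise add on large POST bodies):

```shell
# Sketch: POST without the automatic "Expect: 100-continue" header.
post_without_expect() {
    # $1 = URL, $2 = file containing the request body
    curl -H 'Expect:' --data-binary @"$2" --silent --show-error "$1"
}
# Usage: post_without_expect "http://example.com/api" body.json
```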

answered Dec 4, 2009 at 15:14 by jcsahnwaldt Reinstate Monica

I got this error when my server process got an exception midway during generating the response and simply closed the connection without saying goodbye. curl still expected data from the connection and complained (rightfully).

answered Jul 27, 2014 at 7:07 by koljaTM
I encountered a similar issue; my server is behind nginx.
There was no error in the web server’s (Python Flask) log, but there were error messages in the nginx log:

[crit] 31054#31054: *269464 open() "/var/cache/nginx/proxy_temp/3/45/0000000453" failed (13: Permission denied) while reading upstream

I fixed this issue by correcting the permissions of the directory:

/var/cache/nginx
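A shell sketch of that fix (assumptions: the cache path and the nginx user name are the common defaults and vary by install; chown needs root):

```shell
# Sketch: make the nginx proxy cache readable/writable by its worker user.
fix_cache_perms() {
    # $1 = cache directory, $2 = user the nginx workers run as
    chown -R "$2:$2" "$1" 2>/dev/null || true   # needs root; ignore failure otherwise
    chmod -R u+rwX "$1"                          # owner can read, write, and traverse
}
# Usage (as root): fix_cache_perms /var/cache/nginx nginx
```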

answered May 18, 2020 at 10:33 by FeiXia

I got this error when my server ran out of disk space and simply closed the connection midway through generating the response.

answered Feb 10, 2021 at 22:12 by Bill Richard

I’ve worked around this error in the following way:

$ch = curl_init();
curl_setopt($ch, CURLOPT_URL, 'http://www.someurl/');
curl_setopt($ch, CURLOPT_TIMEOUT, 30);
ob_start();
$response = curl_exec($ch); // outputs directly, so capture it via output buffering
$data = ob_get_clean();
if (curl_getinfo($ch, CURLINFO_HTTP_CODE) == 200) {
    // success: $data holds the (possibly partial) response body
}

The error still occurs, but I can handle the response data in a variable.
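The same idea on the curl command line: treat exit code 18 as non-fatal and keep whatever partial data was downloaded (a sketch; the URL is a placeholder):

```shell
# Sketch: keep the partial download even when curl exits with error 18.
fetch_tolerant() {
    # $1 = URL, $2 = output file; returns 0 even on a partial transfer
    curl --silent --output "$2" "$1"
    local rc=$?
    if [ "$rc" -eq 18 ]; then
        echo "partial transfer: kept $(wc -c < "$2") bytes" >&2
        return 0
    fi
    return "$rc"
}
```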

answered Oct 24, 2012 at 8:45 by Parshin Dmitry

I had this problem working with pycurl and I solved it using

c.setopt(pycurl.HTTP_VERSION, pycurl.CURL_HTTP_VERSION_1_0) 

like Eric Caron says.

answered Feb 5, 2015 at 8:51 by Jorge Blanco

I got this error when I was accidentally downloading a file onto itself.
(I had created a symlink in an sshfs mount of the remote directory to make it available for download, forgot to switch the working directory, and used -OJ).

I guess it won’t really "help" you when you read this, since it means your file got trashed.

answered Jan 26, 2019 at 8:54 by Darklighter

I had this same problem. I tried all of these solutions but none worked. In my case, the request was working fine in Postman, but when I made it with cURL in PHP I got the error mentioned above.

What I did was check the PHP code generated by Postman and replicate the same thing.

First, the request is set to use HTTP version 1.1.
Second, and most important in my case, is the encoding.

Here is the code that helped me

curl_setopt($ch, CURLOPT_ENCODING, '');
curl_setopt($ch, CURLOPT_HTTP_VERSION, CURL_HTTP_VERSION_1_1);

If I remove the CURLOPT_ENCODING option, the error comes back.

answered Apr 5, 2021 at 10:01 by Ndi Cedric

I got this error when running through an nginx proxy while nginx was running under the user ID daemon instead of the user ID nginx.

This means some of nginx’s scratch directories weren’t accessible / writable.

Switching from user daemon; to user nginx; fixed it for me.
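That change can be scripted; a sketch (the conf path below is only the usual default, and you should verify with nginx -t and reload afterwards):

```shell
# Sketch: rewrite "user daemon;" to "user nginx;" in an nginx config file.
set_nginx_user() {
    # $1 = path to nginx.conf
    sed -i 's/^user[[:space:]]\{1,\}daemon;/user nginx;/' "$1"
}
# Usage: set_nginx_user /etc/nginx/nginx.conf && nginx -t && nginx -s reload
```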

answered Mar 25, 2021 at 16:43 by James Stevens

It can be related to many issues. In my case, I was using cURL to build an image (via the Docker API), and the build was stuck; that’s why I got this error.
When I fixed the build, the error disappeared.

answered Sep 8, 2021 at 8:24 by hakik ayoub

We can fix this by suppressing the Expect: 100-continue header that cURL normally sends.

answered Aug 21, 2022 at 15:17 by Ashot

Usually, the cURL error ‘cURL 18 transfer closed with outstanding read data remaining’ occurs while retrieving data from a URL using cURL.

Here at Bobcares, we have seen several such cURL related errors as part of our Server Management Services for web hosts and online service providers.

Today we’ll take a look at the causes for this error and see the fix.

More about ‘cURL 18 transfer closed with outstanding read data remaining’ error

Sometimes, the file we transfer will be smaller or larger than expected. Such cases arise when the server initially reports an expected transfer size, and then delivers data that doesn’t match the previously sent size.

In short, this error is related to content-length.

cURL error 18 can be described as below:

CURLE_PARTIAL_FILE

It means a partial file, i.e. only a part of the file was transferred.

Different causes and fixes for ‘cURL 18 transfer closed with outstanding read data remaining’ error

Now let’s see the different causes for this error. Also, we shall see how our Support Engineers fix it.

1. An incorrect Content-Length header was sent by the peer.

If an incorrect Content-Length header has been sent, the best option is to allow cURL to set the length by itself. This avoids the issues that might arise from setting the wrong size.

Moreover, we can fix this by suppressing the ‘Expect: 100-continue’ header that cURL usually sends:

curl_setopt($curl, CURLOPT_HTTPHEADER, array('Expect:'));

2. The connection times out because keep-alives were not sent to keep the connection going.

To fix this issue, add the --keepalive-time option.

For instance,

--keepalive-time 2

This option sets both the time a connection must remain idle before curl sends keepalive probes and the interval between individual keepalive probes. However, if we use --no-keepalive, this option has no effect.

If this option is used several times, the last one is used. If you don’t specify a value, it defaults to 60 seconds.

In PHP cURL, the equivalent of --keepalive-time is available from PHP 5.5. You can use it as follows:

curl_setopt($connection, CURLOPT_TCP_KEEPALIVE, 1);
curl_setopt($connection, CURLOPT_TCP_KEEPIDLE, 2);
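On the curl command line, the same keepalive settings look like this (a sketch; the URL is a placeholder):

```shell
# Sketch: keep an idle connection alive during a long transfer.
fetch_with_keepalive() {
    # $1 = URL, $2 = output file
    # Send TCP keepalive probes after 2 idle seconds, then every 2 seconds.
    curl --keepalive-time 2 --silent --show-error --output "$2" "$1"
}
# Usage: fetch_with_keepalive "https://example.com/big-file" out.bin
```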


Conclusion

In short, this error occurs while retrieving data from a URL using cURL. Today, we saw how our Support Engineers fix this error.



Hello guys,

I really need help with this error:

Fatal error: Uncaught exception 'GuzzleHttp\Exception\RequestException' with message 'cURL error 18: transfer closed with outstanding read data remaining (see http://curl.haxx.se/libcurl/c/libcurl-errors.html)' in C:\xampp\htdocs\TRELLO\trello\vendor\guzzlehttp\guzzle\src\Handler\CurlFactory.php:188 Stack trace: #0 C:\xampp\htdocs\TRELLO\trello\vendor\guzzlehttp\guzzle\src\Handler\CurlFactory.php(151): GuzzleHttp\Handler\CurlFactory::createRejection(Object(GuzzleHttp\Handler\EasyHandle), Array) #1 C:\xampp\htdocs\TRELLO\trello\vendor\guzzlehttp\guzzle\src\Handler\CurlFactory.php(104): GuzzleHttp\Handler\CurlFactory::finishError(Object(GuzzleHttp\Handler\CurlHandler), Object(GuzzleHttp\Handler\EasyHandle), Object(GuzzleHttp\Handler\CurlFactory)) #2 C:\xampp\htdocs\TRELLO\trello\vendor\guzzlehttp\guzzle\src\Handler\CurlHandler.php(45): GuzzleHttp\Handler\CurlFactory::finish(Object(GuzzleHttp\Handler\CurlHandler), Object(GuzzleHttp\Handler\EasyHandle), Object(GuzzleHttp\Handler\CurlFactory)) #3 ... in C:\xampp\htdocs\TRELLO\trello\vendor\stevenmaguire\trello-php\src\Http.php on line 273

I already tried changing $conf[CURLOPT_HTTPHEADER][] = 'Expect:'; in CurlFactory.php to $conf[CURLOPT_HTTPHEADER][] = 'Expect: 100-continue';. That fixed this error, but another error followed:

Fatal error: Uncaught exception 'GuzzleHttp\Exception\RequestException' with message 'cURL error 56: Received HTTP code 417 from proxy after CONNECT (see http://curl.haxx.se/libcurl/c/libcurl-errors.html)' in C:\xampp\htdocs\TRELLO\trello\vendor\guzzlehttp\guzzle\src\Handler\CurlFactory.php:188 (stack trace identical to the first error) in C:\xampp\htdocs\TRELLO\trello\vendor\stevenmaguire\trello-php\src\Http.php on line 273

How can I fix this error, and what is its cause?
Thank you so much for your help; any help from others is appreciated too.

Best Regards,
-Me


Issue created Aug 11, 2016 by Carlo Cabanilla@clofresh

error: RPC failed; curl 18 transfer closed with outstanding read data remaining

Summary

My ci runner jobs are periodically failing with

error: RPC failed; curl 18 transfer closed with outstanding read data remaining

Steps to reproduce

It’s difficult to reproduce; it happens on any job, and retrying usually makes it pass.

Expected behavior

The repo gets cloned into the job’s workspace

Relevant logs and/or screenshots

(Some url details anonymized)

Running with gitlab-ci-multi-runner 1.4.1 (fae8f18)
Using Docker executor with image domain.com/group/project:branch ...
Pulling docker image domain.com/group/project:branch ...
Running on runner-166323eb-project-5-concurrent-1 via hostname...
Cloning repository...
Cloning into '/builds/group/project'...
error: RPC failed; curl 18 transfer closed with outstanding read data remaining
fatal: The remote end hung up unexpectedly
fatal: early EOF
fatal: index-pack failed
ERROR: Build failed: exit code 1

Output of checks

Results of GitLab Application Check

Checking GitLab Shell ...

GitLab Shell version >= 3.2.1 ? ... OK (3.2.1)
Repo base directory exists?
default... yes
Repo storage directories are symlinks?
default... no
Repo paths owned by git:git?
default... yes
Repo paths access is drwxrws---?
default... yes
hooks directories in repos are links: ... 
2/1 ... ok
2/2 ... ok
2/3 ... ok
2/4 ... ok
2/5 ... ok
2/6 ... ok
2/8 ... ok
2/9 ... ok
2/10 ... ok
1/11 ... repository is empty
1/12 ... repository is empty
1/13 ... repository is empty
1/14 ... repository is empty
1/15 ... repository is empty
1/16 ... repository is empty
1/17 ... repository is empty
1/18 ... repository is empty
Running /opt/gitlab/embedded/service/gitlab-shell/bin/check
Check GitLab API access: OK
Check directories and files: 
        /var/opt/gitlab/.ssh/authorized_keys: OK
Send ping to redis server: gitlab-shell self-check successful

Checking GitLab Shell ... Finished

Checking Sidekiq ...

Running? ... yes
Number of Sidekiq processes ... 1

Checking Sidekiq ... Finished

Checking Reply by email ...

Reply by email is disabled in config/gitlab.yml

Checking Reply by email ... Finished

Checking LDAP ...

LDAP is disabled in config/gitlab.yml

Checking LDAP ... Finished

Checking GitLab ...

Git configured with autocrlf=input? ... yes
Database config exists? ... yes
All migrations up? ... yes
Database contains orphaned GroupMembers? ... no
GitLab config exists? ... yes
GitLab config outdated? ... no
Log directory writable? ... yes
Tmp directory writable? ... yes
Uploads directory setup correctly? ... yes
Init script exists? ... skipped (omnibus-gitlab has no init script)
Init script up-to-date? ... skipped (omnibus-gitlab has no init script)
projects have namespace: ... 
2/1 ... yes
2/2 ... yes
2/3 ... yes
2/4 ... yes
2/5 ... yes
2/6 ... yes
2/8 ... yes
2/9 ... yes
2/10 ... yes
1/11 ... yes
1/12 ... yes
1/13 ... yes
1/14 ... yes
1/15 ... yes
1/16 ... yes
1/17 ... yes
1/18 ... yes
Redis version >= 2.8.0? ... yes
Ruby version >= 2.1.0 ? ... yes (2.1.8)
Your git bin path is "/opt/gitlab/embedded/bin/git"
Git version >= 2.7.3 ? ... yes (2.7.4)
Active users: 31

Checking GitLab ... Finished

Results of GitLab Environment Info

System information
System:         Ubuntu 14.04
Current User:   git
Using RVM:      no
Ruby Version:   2.1.8p440
Gem Version:    2.5.1
Bundler Version:1.10.6
Rake Version:   10.5.0
Sidekiq Version:4.1.4

GitLab information
Version:        8.10.3
Revision:       131ea30
Directory:      /opt/gitlab/embedded/service/gitlab-rails
DB Adapter:     postgresql
URL:            https://domain.com
HTTP Clone URL: https://domain.com/some-group/some-project.git
SSH Clone URL:  git@domain.com:some-group/some-project.git
Using LDAP:     no
Using Omniauth: yes
Omniauth Providers: google_oauth2

GitLab Shell
Version:        3.2.1
Repository storage paths:
- default:      /var/opt/gitlab/git-data/repositories
Hooks:          /opt/gitlab/embedded/service/gitlab-shell/hooks/
Git:            /opt/gitlab/embedded/bin/git


I run into this error when I try to clone a repository from GitLab (GitLab 6.6.2 4ef8369):

remote: Counting objects: 66352, done.
remote: Compressing objects: 100% (10417/10417), done.
error: RPC failed; curl 18 transfer closed with outstanding read data remaining
fatal: The remote end hung up unexpectedly
fatal: early EOF
fatal: index-pack failed

The clone then aborts. How can I avoid this?

Answers:


This happens to me most often: I have a slow internet connection and have to clone a decently large git repository. The most common problem is that the connection closes and the whole clone is cancelled.

Cloning into 'large-repository'...
remote: Counting objects: 20248, done.
remote: Compressing objects: 100% (10204/10204), done.
error: RPC failed; curl 18 transfer closed with outstanding read data remaining 
fatal: The remote end hung up unexpectedly
fatal: early EOF
fatal: index-pack failed

After a lot of trial and error, and many "the remote end hung up unexpectedly" failures, I have a way that works for me. The idea is to do a shallow clone first and then update the repository with its history.

$ git clone http://github.com/large-repository --depth 1
$ cd large-repository
$ git fetch --unshallow







After several days, today I solved this problem. Generate an SSH key; follow this article:

https://help.github.com/articles/generating-a-new-ssh-key-and-adding-it-to-the-ssh-agent/

Declare it to:

  1. Your Git provider (GitLab, which I use, or GitHub).
  2. Add it to your local identity.

Then clone with the command:

git clone username@mydomain.com:my_group/my_repository.git

And no errors occur.

The problem above,

error: RPC failed; curl 18 transfer closed with outstanding read data remaining

is due to an error when cloning over the HTTP protocol (the curl command).

And you should increase the buffer size:

git config --global http.postBuffer 524288000






When I tried to clone from the remote, the same problem occurred repeatedly:

remote: Counting objects: 182, done.
remote: Compressing objects: 100% (149/149), done.
error: RPC failed; curl 18 transfer closed with outstanding read data remaining
fatal: The remote end hung up unexpectedly
fatal: early EOF
fatal: index-pack failed

Finally, this worked for me:

git clone https://username@bitbucket.org/repositoryName.git --depth 1




You need to disable compression:

git config --global core.compression 0

Then you need to use a shallow clone:

git clone --depth=1 <url>

Then, the most important step, cd into your cloned project:

cd <shallow cloned project dir>

Now deepen the clone, step by step:

git fetch --depth=N, with increasing N

e.g.

git fetch --depth=4

then,

git fetch --depth=100

then,

git fetch --depth=500

You can choose how many steps you want by adjusting N.

And finally, download all the remaining revisions using:

git fetch --unshallow 

Upvote if this helps you :)


Simple solution: instead of cloning over HTTPS, clone over SSH.

For example:

git clone https://github.com/vaibhavjain2/xxx.git - Avoid
git clone git@github.com:vaibhavjain2/xxx.git - Correct


Network connection problems.
Possibly due to a persistent connection timeout.
The best approach is to switch to a different network.


These steps worked for me: using git:// instead of https://




As mentioned above, first of all run the git command from bash, prefixing it with extended logging directives: GIT_TRACE=1 GIT_CURL_VERBOSE=1 git ...

For example, GIT_CURL_VERBOSE=1 GIT_TRACE=1 git -c diff.mnemonicprefix=false -c core.quotepath=false fetch origin
will show you detailed information about the error.


For me, this problem was caused by the proxy configuration. I added the git server's IP to the proxy exceptions. The git server was local, but the no_proxy environment variable was set incorrectly.

I used these commands to identify the problem:

#Linux:
export GIT_TRACE_PACKET=1
export GIT_TRACE=1
export GIT_CURL_VERBOSE=1

#Windows
set GIT_TRACE_PACKET=1
set GIT_TRACE=1
set GIT_CURL_VERBOSE=1

The response contained "Proxy Authorization", since the git server should not have gone through the proxy. But the real problem was the file size limit imposed by the proxy rules.


For me, the problem was that the connection was being closed before the whole clone completed. I switched to Ethernet instead of Wi-Fi, and that solved it for me.



This error occurs more often with a slow or problematic internet connection. I connected with a good internet speed, and then it worked perfectly.


This problem occurs when you have a proxy issue or a slow network. You can use the shallow-clone solution above, or

git fetch --all  or git clone 

If this gives a curl 56 Recv failure error, download the file as a zip, or specify the branch name instead of --all:

git fetch origin BranchName 


Try changing the git clone protocol.

For example, this error occurred with "git clone https://xxxxxxxxxxxxxxx";

you can try "git clone git://xxxxxxxxxxxxxx" instead, which may work.


These steps work for me:

cd [dir]
git init
git clone [your Repository Url]

I hope it works for you too.


Issue

I have been trying to git push some files into a repo I just created, but it keeps failing.

I’ve already tried changing http.version from HTTP/2 to HTTP/1.1 (I’ve tried both) and I also increased the http.postBuffer and http.maxRequestBuffer size. Most fixes I found online recommend changing one or both of these.

The largest file in my local working directory is 24.6 MB (excluding a .pack file) so I don’t have to use Git LFS.

Here is some of the output of git config --list:

diff.astextplain.textconv=astextplain
filter.lfs.clean=git-lfs clean -- %f
filter.lfs.smudge=git-lfs smudge -- %f
filter.lfs.process=git-lfs filter-process
filter.lfs.required=true
http.sslbackend=openssl
http.sslcainfo=C:/Program Files/Git/mingw64/ssl/certs/ca-bundle.crt
core.autocrlf=true
core.fscache=true
core.symlinks=false
pull.rebase=false
credential.helper=manager-core
credential.https://dev.azure.com.usehttppath=true
init.defaultbranch=master
core.editor="C:\Users\username\AppData\Local\Programs\Microsoft VS Code\Code.exe" --wait
core.longpaths=true
core.compression=0
gui.recentrepo=C:/Users/username/path/to/myRepo
filter.lfs.clean=git-lfs clean -- %f
filter.lfs.smudge=git-lfs smudge -- %f
filter.lfs.process=git-lfs filter-process
filter.lfs.required=true
...
http.postbuffer=30000000
http.version=HTTP/1.1
http.maxrequestbuffer=300000000
credential.helper=wincred
core.bare=false
core.repositoryformatversion=0
core.filemode=false
core.symlinks=false
core.ignorecase=true
core.logallrefupdates=true
remote.origin.url=https://github.com/username/myRepo.git
remote.origin.fetch=+refs/heads/*:refs/remotes/origin/*
remote.origin.pushurl=https://github.com/username/myRepo.git
branch.main.remote=origin
branch.main.merge=refs/heads/main

And here is the output after git push:

Enumerating objects: 177, done.
Counting objects: 100% (177/177), done.
Delta compression using up to 8 threads
Compressing objects: 100% (168/168), done.
error: RPC failed; curl 18 transfer closed with outstanding read data remaining
send-pack: unexpected disconnect while reading sideband packet
Writing objects: 100% (177/177), 444.10 MiB | 612.00 KiB/s, done.
Total 177 (delta 5), reused 177 (delta 5), pack-reused 0
fatal: the remote end hung up unexpectedly
Everything up-to-date

I am using no antivirus/firewall other than Windows Defender.

Please help.

Solution

In my situation, I was trying to push too large a payload. According to this article, "Because github does not allow a single push larger than 2GB you can push it in batches. Split the push into part1, part2…"

My solution was to break up the payload into several commits and push a couple at a time. Depending on your situation, you could try writing a shell script to push incrementally up to HEAD if you have commit history on your repo. I didn’t end up doing that but instead, I made ample use of matching patterns with multiline git add commands to select the files I wanted for each push.
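A sketch of the incremental-push script described above (assumptions not from the original answer: a remote named origin, a branch named main, and a batch size you pick; every Nth commit is pushed oldest-first so each individual push stays small):

```shell
# Sketch: push commit history to the remote in batches of n commits.
push_in_batches() {
    # $1 = batch size (number of commits per push), default 50
    local n=${1:-50} rev
    # List commits oldest-first and push every n-th one as an intermediate ref update.
    for rev in $(git rev-list --reverse HEAD | awk -v n="$n" 'NR % n == 0'); do
        git push origin "$rev:refs/heads/main"
    done
    git push origin HEAD:refs/heads/main   # push whatever remains
}
# Usage, from inside the repository: push_in_batches 50
```

Each intermediate push is a fast-forward of the previous one, so the remote branch ends up at HEAD with full history.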

Answered By – michen00

This Answer collected from stackoverflow, is licensed under cc by-sa 2.5 , cc by-sa 3.0 and cc by-sa 4.0

The "RPC failed; curl transfer closed with outstanding read data remaining" error in Git usually occurs when pushing large changes to a remote repository over HTTP. This error message indicates that the connection between the client and server has been closed before all the data has been transferred.

Method 1: Increase the HTTP Post Buffer Size

If you are facing the Git error "RPC failed; curl transfer closed with outstanding read data remaining", it could be due to the default buffer size in Git being too small for the data being transferred. To fix this error, you can increase the HTTP post buffer size by following the steps below.

Step 1: Open Git Bash

Open Git Bash by right-clicking in the folder where you want to clone the repository and selecting "Git Bash Here".

Step 2: Set the buffer size

In the Git Bash terminal, enter the following command to set the buffer size to 500 MB:

git config http.postBuffer 524288000

Step 3: Clone the repository

Now, clone the repository using the usual Git command:

git clone <repository URL>

With the increased buffer size, the transfer should complete without any errors.
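As a quick sanity check on that magic number, 524288000 bytes is exactly 500 MiB:

```shell
# http.postBuffer is specified in bytes; 500 * 1024 * 1024 = 524288000.
echo $((500 * 1024 * 1024))
```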

Method 2: Clone the Repository using SSH instead of HTTP

If you are encountering the error "RPC failed; curl transfer closed with outstanding read data remaining" while cloning a Git repository, you can try to fix it by cloning the repository using SSH instead of HTTP. Here are the steps to do it:

  1. Generate SSH keys if you haven’t already done it. You can do it by running the following command in your terminal:
ssh-keygen -t rsa -b 4096 -C "your_email@example.com"
  2. Add your SSH key to your GitHub account. You can do it by copying the content of your public key (usually located at ~/.ssh/id_rsa.pub) and adding it to your GitHub account settings.

  3. Clone the repository using SSH instead of HTTP. You can do it by replacing the HTTP URL with the SSH URL. For example, instead of running:

git clone https://github.com/username/repository.git

You can run:

git clone git@github.com:username/repository.git
  4. If you have already cloned the repository using HTTP, you can change the remote URL to use SSH instead. You can do it by running the following command in your local repository:
git remote set-url origin git@github.com:username/repository.git

These steps should help you fix the "RPC failed; curl transfer closed with outstanding read data remaining" error while cloning a Git repository by using SSH instead of HTTP.
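The HTTPS-to-SSH URL rewrite in the last step can also be done mechanically. This is only an illustrative sketch, assuming the standard https://github.com/<user>/<repo>.git layout (the username and repository below are placeholders):

```shell
# Convert a GitHub HTTPS clone URL to its SSH equivalent.
url="https://github.com/username/repository.git"
ssh_url=$(printf '%s\n' "$url" | sed -E 's#^https://github\.com/#git@github.com:#')
echo "$ssh_url"   # prints git@github.com:username/repository.git
# Then point the existing remote at it:
# git remote set-url origin "$ssh_url"
```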

Method 3: Set the Global Git Configuration to Use a Lower Transfer Buffer Size

To fix the error "RPC failed; curl transfer closed with outstanding read data remaining" in Git, you can try setting the global Git configuration to use a lower transfer buffer size. Here are the steps to do this:

  1. Open Git Bash or your preferred terminal application.
  2. Run the following command to set the transfer buffer size to 128KB:
git config --global http.postBuffer 131072
  3. Try running your Git command again and see if the error persists.

Explanation:

Git uses the HTTP protocol for communication with remote repositories. By default, Git sets the transfer buffer size to 1MB, which can cause issues with some servers that have lower limits. By setting the transfer buffer size to a lower value, such as 128KB, you can avoid these issues.

The git config command is used to set Git configuration options. The --global flag tells Git to apply the configuration globally, rather than just for the current repository. The http.postBuffer option specifies the transfer buffer size in bytes. In this example, we set it to 131072 bytes, which is equivalent to 128KB.

By following these steps, you should be able to resolve the "RPC failed; curl transfer closed with outstanding read data remaining" error in Git.

Method 4: Use Git Protocol Version 2

To fix the error "RPC failed; curl transfer closed with outstanding read data remaining" in Git, you can try using Git Protocol Version 2. Here are the steps to do it:

  1. Open your Git Bash or terminal and navigate to your repository.
  2. Run the following command to enable Git Protocol Version 2:
git config --global protocol.version 2
  3. If you’re still experiencing the error, try running the following command to disable the "keepalive" feature:
git config --global http.keepalive false
  4. If the issue persists, you can try increasing the buffer size by running the following command:
git config --global http.postBuffer 524288000
  5. Finally, try resetting your repository by running the following command (note that this discards any uncommitted changes):
git reset --hard

These steps should help resolve the "RPC failed; curl transfer closed with outstanding read data remaining" error in Git. Here are the code examples for each step:

git config --global protocol.version 2

git config --global http.keepalive false

git config --global http.postBuffer 524288000

git reset --hard

Explanation:

  • Step 2: This command sets the Git Protocol Version to 2, which can help improve performance and stability.
  • Step 3: Disabling the "keepalive" feature can help prevent connection issues when transferring large files.
  • Step 4: Increasing the buffer size can help prevent the error by allowing Git to handle larger data transfers.
  • Step 5: Resetting the repository can help fix any corrupted files or settings that may be causing the error.

Note: These steps may not work for all cases of the "RPC failed; curl transfer closed with outstanding read data remaining" error, but they are a good place to start. If the issue persists, you may need to seek additional help or try other solutions.

Method 5: Reduce the Size of the Files Being Pushed

If you encounter the error "RPC failed; curl transfer closed with outstanding read data remaining" while pushing your Git repository, one possible solution is to reduce the size of the files being pushed. Here are the steps to do it:

  1. Identify the large files in your repository by running the following command in your terminal:
git rev-list --objects --all | git cat-file --batch-check='%(objecttype) %(objectname) %(objectsize) %(rest)' | awk '/^blob/ {print substr($0, 6)}' | sort --numeric-sort --key=2 --reverse

This command will output every blob in your repository as "<hash> <size-in-bytes> <path>", sorted by size in descending order.
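To see what the listing looks like, here is the large-blob pipeline run in a throwaway repository with one large and one small file (the file names are invented for the demo):

```shell
#!/bin/sh
# Demo: in a scratch repo, commit a large and a small file, then list
# blobs largest-first; big.bin should appear on the first line.
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init -q .
git config user.email demo@example.com
git config user.name demo
head -c 100000 /dev/zero > big.bin      # ~100 kB blob
echo small > small.txt                  # tiny blob
git add . && git commit -qm demo
git rev-list --objects --all \
  | git cat-file --batch-check='%(objecttype) %(objectname) %(objectsize) %(rest)' \
  | awk '/^blob/ {print substr($0, 6)}' \
  | sort --numeric-sort --key=2 --reverse
```

Note that %(objectsize) is the uncompressed size of each blob, which is what matters for spotting files worth removing.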

  2. Remove the large files from your repository by running the following command:
git filter-branch --force --index-filter 'git rm --cached --ignore-unmatch path/to/large/file' --prune-empty --tag-name-filter cat -- --all

Replace "path/to/large/file" with the path to the large file you want to remove. Repeat this command for each large file you want to remove.

  3. After removing the large files, run the following command to compress the repository:
git gc --aggressive --prune=now

This command will compress the repository and remove any unused objects.
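To check whether the cleanup actually reclaimed space, you can compare the repository's on-disk size before and after the gc; this is just a measurement aid, not part of the fix:

```shell
# `size-pack` in the output is the compressed size of the pack files,
# reported in human-readable units; run this before and after `git gc`.
git count-objects -vH
```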

  4. Finally, push the reduced repository to the remote server. Because filter-branch rewrote history, this must be a force push:
git push origin --force --all

If you still encounter the error after reducing the size of the files being pushed, you may need to try other solutions such as increasing the buffer size or upgrading your Git version.

I have been getting «error: RPC failed; curl 18 transfer closed with outstanding read data remaining» for a few days now when trying to update pods (1.0.1). Here is the github command:

 > Git download
     $ /usr/bin/git clone https://github.com/SVGKit/SVGKit.git /var/folders/bm/3q6p4v0119xfywv1yb182w0c0000gn/T/d20160811-12646-1k9vb1h
     --template= --single-branch --depth 1 --branch 2.x
     Cloning into '/var/folders/bm/3q6p4v0119xfywv1yb182w0c0000gn/T/d20160811-12646-1k9vb1h'...
     error: RPC failed; curl 18 transfer closed with outstanding read data remaining
     fatal: The remote end hung up unexpectedly
     fatal: early EOF
     fatal: index-pack failed

[!] Error installing SVGKit
[!] /usr/bin/git clone https://github.com/SVGKit/SVGKit.git /var/folders/bm/3q6p4v0119xfywv1yb182w0c0000gn/T/d20160811-12646-1k9vb1h --template= --single-branch --depth 1 --branch 2.x

Cloning into '/var/folders/bm/3q6p4v0119xfywv1yb182w0c0000gn/T/d20160811-12646-1k9vb1h'...
error: RPC failed; curl 18 transfer closed with outstanding read data remaining
fatal: The remote end hung up unexpectedly
fatal: early EOF
fatal: index-pack failed

I have even gone as far as spinning up a new OS X instance (10.11.5), full git and pods install, new SSH key (I have access to this branch). Same thing.

Looking for any suggestion for a next step.
