- From:
- To:
- Cc: Bradford L Knowles
- Subject: [chef] Re: Re: remote_file checksum
- Date: Tue, 11 Mar 2014 15:14:07 +0000
With the previous example it does download the file, even though the checksum I supplied doesn't match.
When using AWS S3, files uploaded in a single part seem to use the MD5 of the object as their ETag.
Multipart uploads are different again, depending on how they were uploaded (the web UI seems to use 50 MB parts).
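
For what it's worth, here is a rough Ruby sketch of how those ETags appear to be derived. The file path and the 50 MB part size are just placeholders, and the multipart scheme is an observed convention rather than anything AWS guarantees:

    require 'digest'

    # Single-part upload: the ETag is reportedly just the MD5 of the object.
    single_part_etag = Digest::MD5.file('/tmp/myfile.tmp').hexdigest

    # Multipart upload: the ETag appears to be the MD5 of the concatenated
    # binary MD5s of the parts, with "-<part count>" appended, so it no
    # longer matches the MD5 (or SHA-256) of the whole file.
    part_size = 50 * 1024 * 1024   # 50 MB, as the web UI seems to use
    part_md5s = []
    File.open('/tmp/myfile.tmp', 'rb') do |f|
      while (chunk = f.read(part_size))
        part_md5s << Digest::MD5.digest(chunk)
      end
    end
    multipart_etag = "#{Digest::MD5.hexdigest(part_md5s.join)}-#{part_md5s.size}"
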
On 11 Mar 2014, at 15:05, Brad Knowles wrote:
> On Mar 11, 2014, at 9:59 AM, <> wrote:
>
> > With the following code, should it retry the download until the retries limit?
> >
> > remote_file "/tmp/myfile.tmp" do
> >   source "http://example.com/test.tmp"
> >   checksum "incorrect_should_cause_redownload"
> >   retries 3
> >   retry_delay 20
> > end
>
> If the checksum you provide doesn't match the checksum of the remote file,
> then I don't think it will download anything. If the checksums do match
> and you don't have a local copy of the file, then there should be multiple
> download attempts.
>
> > My understanding was that it should only fetch the source if the checksums
> > don't match.
> > I've tried this with forcing use_conditional_get true and use_etag true.
>
> The checksum is compared against the remote file, not the local one.
>
> > It seems to ignore whatever I set checksum to: a plain string, an MD5, and
> > a SHA-256, which is what it should be.
>
> IIUC, checksum is always sha256. If you use anything else, you are likely
> to be unpleasantly surprised.
>
> --
> Brad Knowles
> LinkedIn Profile: <http://tinyurl.com/y8kpxu>
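
Regarding the point that checksum is always SHA-256, here is a minimal sketch of the pattern I believe is being described. The reference path is a placeholder; normally you would pin a known 64-character lowercase SHA-256 hex string (for example from shasum -a 256 run against the source file) rather than compute it inside the recipe:

    require 'digest'

    # Placeholder: derive the expected digest from a local reference copy,
    # purely to show which algorithm remote_file compares against.
    pinned_sha256 = Digest::SHA256.file('/srv/reference/test.tmp').hexdigest

    remote_file "/tmp/myfile.tmp" do
      source "http://example.com/test.tmp"
      checksum pinned_sha256   # must be a SHA-256 hex digest, nothing else
      retries 3
      retry_delay 20
    end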