The author of http://s3tools.org/s3cmd-sync asked for better wording; so here is my attempt:
With directories there is one thing to watch out for – you can either upload
the directory and its contents, or just the contents. It all depends on a
trailing slash.
The name before the slash is the directory's name; the trailing slash marks
it as a directory:
PATH   DESCRIPTION
dir    a directory named dir
dir/   a directory named dir, explicitly marked as a directory
Whether the directory itself or only its contents get copied is implied by
the trailing slash on the source path:
PATH   WHAT IS COPIED
dir    the directory named dir (the directory itself, including its contents)
dir/   the contents of the directory named dir
To upload the directory itself, specify the source without the trailing slash:
$ s3cmd put -r dir1 s3://s3tools-demo/some/path/
               ^ the directory is copied
To upload only the contents of the directory, specify the source with the
trailing slash:
$ s3cmd put -r dir1/ s3://s3tools-demo/some/path/
               ^ the directory's contents are copied
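To make the difference concrete, here is roughly what the resulting keys
would look like, assuming dir1 contains just two files, a.txt and b.txt
(file names made up for illustration, each form run separately against an
empty destination; the "..." stands for the date and size columns that
s3cmd ls normally prints):
$ s3cmd put -r dir1 s3://s3tools-demo/some/path/
$ s3cmd ls -r s3://s3tools-demo/some/path/
...  s3://s3tools-demo/some/path/dir1/a.txt
...  s3://s3tools-demo/some/path/dir1/b.txt
$ s3cmd put -r dir1/ s3://s3tools-demo/some/path/
$ s3cmd ls -r s3://s3tools-demo/some/path/
...  s3://s3tools-demo/some/path/a.txt
...  s3://s3tools-demo/some/path/b.txt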
That's all there is to it.
I am just starting to learn s3cmd, so this may not be any better or correct.
Edit and use it as you please.
wolfv

I've found where this message that I reported previously comes from. It
results when I issue the command
s3cmd mv -r BUCKETFOLDER1/ BUCKETFOLDER2/
The command executes OK, so it's not a problem for me, but since I reported it
previously I thought you might like to know. Perhaps I could slightly alter
the syntax to avoid the message?
Regards
Russell

Thanks - I fixed the region setting in my config and it now behaves OK in
this simple example. I do still get some cases in one of my scripts where a
file appears to have been downloaded twice, but I only need to do this
rarely so I haven't tried to isolate the circumstances. I don't know if it
has really been downloaded twice or if it's just a spurious message. If you
want me to, I'll do some more investigation.
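In case anyone else hits the same fallback, the fix boils down to one line
in ~/.s3cfg - the region value below is just an example of the corrected
form Matt describes:
bucket_location = eu-west-1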
Russell
On 24 May 2015 at 13:51, Matt Domsch <matt@...> wrote:
> Your bucket_location config option is incorrect - "eu-west" rather than
> "eu-west-1" or whatever. So it's failing the initial request with a v4
> signature, and falling back to a v2 signature.
>
> On Thu, May 21, 2015 at 1:22 AM, Russell Gadd <rustleg@...> wrote:
>
>> There are a couple of oddities when running s3cmd version 1.5.2 on Linux
>> Mint 17. (Neither of these is a problem for me; they're offered just to make
>> you aware.)
>>
>> I already reported (5 May) that I get spurious messages "WARNING: Empty
>> object name on S3 found, ignoring."
>>
>> I also found that when performing a simple get, the command is echoed
>> twice, which I wouldn't expect to be normal behaviour. Sanitised debug
>> output is attached.
>>
>> $ s3cmd get s3://mybucket/files/file1
>> s3://mybucket/files/file1 -> ./file1 [1 of 1]
>> s3://mybucket/files/file1 -> ./file1 [1 of 1]
>> 32802 of 32802 100% in 0s 149.34 kB/s done
>> $
>>
>>
>> Regards
>> Russell

OK, it's just because the version without a slash serves as a wildcard for
other directories with this string as a prefix.
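To illustrate the prefix matching with made-up names: if the bucket also
contained a sibling directory such as prefix/further-prefix-old/, then
s3cmd du -H s3://bucketabc/prefix/further-prefix
would count everything under both further-prefix/ and further-prefix-old/
(both sets of keys start with the same string), whereas
s3cmd du -H s3://bucketabc/prefix/further-prefix/
would count only what is under further-prefix/.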
On Tue, May 19, 2015 at 9:29 AM, Joshua Fox <joshua@...> wrote:
> s3cmd du -H s3://bucketabc/prefix/further-prefix
>
> gives 21G
>
> s3cmd du -H s3://bucketabc/prefix/further-prefix/
>
> gives 10G.
>
> There are no files directly in there, just four "subdirectories."
>
> I have five buckets which are near-copies and this only happens in two of
> them. The others show 10G consistently. I am moderately sure that 10G is
> correct.
>
> The only apparent difference between buckets -- and a seemingly irrelevant
> one -- is that the two which give 10G consistently have one *more* subdirectory
> than the ones that give 21G vs 10G; there is a single 138M file in that
> extra dir.
>
> Why 21G vs 10G?
>
> Thanks,
>
> Joshua
>
> (s3cmd version 1.5.0~rc1-2)
>
>

With thanks to Gianfranco Costamagna, s3cmd 1.5.2 is now available in
Debian experimental, unstable, and Ubuntu Wily (the latter through the
magic of their automatic sync process). And of course, it's been in the
Fedora and EPEL repos for quite some time, which covers RHEL, CentOS,
Amazon, Scientific, and related distros too. Maintaining s3cmd in the
upstream distro repos makes it far more likely that an end user will have
the latest and most bug-free version that we know how to make, which also
cuts down on the (suggested) bug reports against versions 0.9.9.1 and 1.0.0
that plague us.
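In practice that means installing or upgrading is just a package-manager
call on those distros; exact availability depends on the release, but
roughly:
$ sudo apt-get install s3cmd    # Debian / Ubuntu
$ sudo yum install s3cmd        # Fedora, or RHEL/CentOS/Amazon/Scientific with EPEL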
Thanks,
Matt

Hi Folks,
I've got a weird use-case where my source is an SFTP folder on a
third-party server. I've got the remote filesystem mounted, and I've got
the sync set up, but it's pretty slow (and will get slower as the
third-party adds files). (This slowness isn't surprising.)
I'm afraid you're going to tell me this is just as hard as I think it is,
but maybe you'll surprise me: Is there a way I can calculate md5sums on the
remote server (since I've got SSH access, and md5sum is available) and then
use that information? I'm thinking that's going to be orders of magnitude
faster than calculating them over the wire. I know there's the --cache-file
option, but I don't have much clue what goes into generating that file, nor
exactly what it tells s3cmd.
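Purely as a sketch of the remote-hashing half (host name and path are
placeholders, and whether the result can be fed to s3cmd via --cache-file
is exactly what I don't know):
$ ssh user@thirdparty.example.com 'cd /path/to/data && find . -type f -print0 | xargs -0 md5sum' > remote-md5s.txt
That gives the standard "checksum  ./relative/path" lines, computed on
their end instead of over the mounted filesystem.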
Let me know what you think.
Thanks,
Jamie

I'm having difficulty understanding this:
continue-put
"Continue uploading partially uploaded files or
multipart upload parts. Restarts/parts files that
don't have matching size and md5. Skips files/parts
that do. Note: md5sum checks are not always
sufficient to check (part) file equality. Enable this
at your own risk."
It reads OK up to the note, but then I'm stumped as to why "md5sum checks
are not always sufficient to check (part) file equality". I understood md5
collisions are virtually impossible in practice. Furthermore, I experimented
with the feature in sync that copies files within the S3 bucket, rather than
uploading them, when it detects that a file is already in S3 - even if the
filename and timestamp are different. So I assume you can only detect
identical files using the MD5, which implies your program relies on it.
Can you explain why md5sum checks aren't necessarily sufficient and what
the risk is? How would I avoid the risk?
Regards
Russell

I was pleasantly surprised that when doing a sync I found at the end a
series of remote copies. These copied files from one location in the bucket
to another, giving them a different name according to the local file
system's path. So it appears you are duplicating the storage in order to
present the file as a mirror of the local file, while avoiding the file
transfer. I assume that this process has identified a duplicate file by
means of its MD5. Nice.
I suggest this point be added to the docs somewhere (unless I've missed it),
as it's a plus point for your application.
Russell

I was testing s3cmd and it appears to repeat the first upload of a put
command. There is a delay between the confirmations on the terminal, so I
expect it is actually uploading twice rather than it just being a reporting
error.
This was the terminal output (Linux Mint 17, s3cmd version 1.5.2):
/data/WorkInProgress/testfiles $ s3cmd put * s3://mybucket-test2/
testfile1.pdf -> s3://mybucket-test2/testfile1.pdf [1 of 2]
153171 of 153171 100% in 1s 134.03 kB/s done
testfile1.pdf -> s3://mybucket-test2/testfile1.pdf [1 of 2]
153171 of 153171 100% in 1s 109.50 kB/s done
testfile2.xls -> s3://mybucket-test2/testfile2.xls [2 of 2]
62464 of 62464 100% in 0s 120.24 kB/s done
/data/WorkInProgress/testfiles $
Is there an explanation for this behaviour or is this a bug?
Russell

I could make great use of an ls -l complement to the regular ls and la
commands that would just dump everything about each object to standard out.
lsl and lal?
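A rough stopgap might be to loop s3cmd info over a listing - it's slow (one
request per object) and I'm not sure it reports everything (storage class
in particular), so treat this only as a sketch with a placeholder bucket
name:
$ s3cmd ls -r s3://mybucket/ | awk '{print $4}' | while read uri; do s3cmd info "$uri"; done
(awk grabs the fourth column of the listing, so this assumes no spaces in
key names)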
On Thu, Apr 2, 2015 at 2:56 PM, Matt Domsch <matt@...> wrote:
> Not presently. The storage class is returned from the S3 bucket list API
> call, in the XML, but we don't parse it out and display it. Might be
> interesting to do so though.
>
> On Thu, Apr 2, 2015 at 2:02 PM, Billy Crook <bcrook@...>
> wrote:
>
>> Is there any way with s3cmd, to search for any objects with the reduced
>> redundancy storage class? (Or barring that, a way to enumerate the storage
>> class of all objects in such a way I could grep for it?)
--
Billy Crook • Network and Security Administrator • RiskAnalytics, LLC
