… X) due to split() incompatibility
thanks mauke from #perl on freenode!
from http://perldoc.perl.org/perl5140delta.html
"split() no longer modifies @_ when called in scalar or void context. In void context it now produces a "Useless use of split" warning. This was also a perl 5.12.0 change that missed the perldelta."

Summary:
Because of the way time comparisons were being done, a maxWriteInterval set to
1 second actually results in a 2-second delay.
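Below is a minimal standalone sketch of the kind of off-by-one this describes;
the strict comparison shown is an assumed mechanism for illustration, not the
actual scribe code:

    // Standalone illustration (assumed mechanism, not the actual scribe code):
    // with whole-second timestamps, a strict '>' comparison only fires once the
    // clock has advanced by interval + 1 seconds, so a maxWriteInterval of
    // 1 second effectively becomes a 2-second period.
    #include <ctime>
    #include <iostream>

    int main() {
      const time_t interval = 1;   // maxWriteInterval in seconds
      time_t last_write = 0;       // when the previous write happened

      for (time_t now = 0; now <= 6; ++now) {  // simulated 1 Hz check loop
        if (now - last_write > interval) {     // strict '>' adds an extra second
          std::cout << "t=" << now << "s: write (gap " << now - last_write << "s)\n";
          last_write = now;
        }
      }
      // Prints writes at t=2, 4 and 6 -- every 2 seconds instead of every
      // second. Using '>=' restores the intended 1-second cadence.
      return 0;
    }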
Test Plan:
Instrumented the code to check at what times and with what values
last_periodic_check and last_handled_messages get updated. Checked
how long the thread stays blocked waiting for the work signal or a timeout.
Verified that there is no thrashing, i.e. the thread does not keep
continuously waking up.
DiffCamp Revision: 120744
Reviewed By: groys
CC: agiardullo, pkhemani, groys, scribe-dev@lists
Tasks:
#219753: honor maxWriteInterval
Revert Plan:
OK
git-svn-id: svn+ssh://tubbs/svnapps/fbomb/branches/scribe-os/fbcode/scribe@29223 2248de34-8caa-4a3c-bc55-5e52d9d7b73a

Summary:
On detecting corruption in a backed-up file, stop reading any further.
Calculate how many bytes are being lost, output this information in LOG
messages, and update the bytes-lost counter.
StdFile::readNext only works for framed files. The dead code for reading
from non-framed files was wrong anyway - it allocated only 4K for a message
line while many messages are longer than that. I removed this dead code.
I haven't yet added checksums, which is what the task asks for, so I will
not close the task after this diff.
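A minimal, self-contained sketch of the read-until-corruption idea. The
4-byte length prefix and all names are assumptions for illustration, not the
actual StdFile code:

    // Sketch only: assumed frame layout is a 4-byte length prefix followed by
    // the message bytes. On a frame that cannot be valid, stop reading and
    // report everything from that point on as lost.
    #include <cstdint>
    #include <fstream>
    #include <iostream>
    #include <string>
    #include <vector>

    int main(int argc, char** argv) {
      if (argc < 2) { std::cerr << "usage: readframes <file>\n"; return 1; }

      std::ifstream in(argv[1], std::ios::binary);
      in.seekg(0, std::ios::end);
      const std::streamoff fileSize = in.tellg();
      in.seekg(0, std::ios::beg);

      std::vector<std::string> messages;
      while (in.tellg() < fileSize) {
        const std::streamoff frameStart = in.tellg();

        uint32_t len = 0;
        in.read(reinterpret_cast<char*>(&len), sizeof(len));

        // A short read of the length field, or a frame that claims to extend
        // past the end of the file, is treated as corruption: stop here and
        // count the remaining bytes as lost instead of guessing.
        if (!in || frameStart + static_cast<std::streamoff>(sizeof(len) + len) > fileSize) {
          std::cerr << "WARNING: Corruption Data Loss " << (fileSize - frameStart)
                    << " bytes in " << argv[1] << "\n";
          break;
        }

        std::string msg(len, '\0');
        in.read(&msg[0], len);
        messages.push_back(msg);
      }

      std::cout << "read " << messages.size() << " messages\n";
      return 0;
    }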
Test Plan:
1/ Make sure that the regular path works when the backup file is not corrupted.
2/ Force a backup, corrupt the backup file by changing a frame-size field,
and move the scribe server to the sending_buffer state. The backup file is
removed, as much data as could be sent is sent, and the log and counters
carry the loss information.
[Tue May 25 23:22:35 2010] "WARNING: Corruption Data Loss -14 bytes in
/tmp/corr/foo/foo_00000"
scribe_overall:bytes lost: 28
scribe_overall:received good: 7
foo:received good: 7
scribe_overall:retries: 154
foo:bytes lost: 28 <===
foo:retries: 154
scribe_overall:sent: 3
3/ Same as 2/, but an error occurs while uploading the backup file. The
backup file is left as it is. In this situation the LOG will contain info
that x bytes were lost while the backup file was being read, but the
bytes-lost counter won't go up. The bytes-lost counter only goes up when
the corrupted file is being deleted.
DiffCamp Revision: 118317
Reviewed By: groys
CC: agiardullo, pkhemani, groys, scribe-dev@lists
Tasks:
Revert Plan:
OK
git-svn-id: svn+ssh://tubbs/svnapps/fbomb/branches/scribe-os/fbcode/scribe@28770 2248de34-8caa-4a3c-bc55-5e52d9d7b73a

This reverts commit 997408f58371170c9b5d3b72d05667f540c36380.
While this was aimed at reducing empty files created by closed stores,
some systems depend on the file being rotated even if a store is closed.
git-svn-id: svn+ssh://tubbs/svnapps/fbomb/branches/scribe-os/fbcode/scribe@28274 2248de34-8caa-4a3c-bc55-5e52d9d7b73a

Summary:
Two changes to help eliminate empty files when running with replay_buffer=no,
i.e. when we care about what data shows up in both the primary and the
secondary:
1. Eliminate unnecessary periodicCheck calls - a lot of empty files are
created by periodicChecks on closed stores.
2. We don't take any action on failure to open the secondary, so trying to
do that in BufferStore::open is useless (see the sketch below).
This should also fix the bucketupdater path problem when running the
testsuite.
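A minimal sketch of change 2 under a hypothetical store interface (not the
actual scribe classes): open() only touches the primary, since a secondary
open failure is never acted on anyway:

    #include <iostream>
    #include <memory>

    struct Store {
      virtual ~Store() {}
      virtual bool open() = 0;
    };

    struct NetworkStore : Store {
      bool open() { std::cout << "open primary (network)\n"; return true; }
    };

    struct FileStore : Store {
      bool open() { std::cout << "open secondary (file)\n"; return true; }
    };

    struct BufferStore {
      std::unique_ptr<Store> primary;
      std::unique_ptr<Store> secondary;

      BufferStore() : primary(new NetworkStore), secondary(new FileStore) {}

      bool open() {
        // Change 2: only the primary is opened here. Failing to open the
        // secondary was never acted on, so attempting it in open() only
        // produced empty buffer files; the secondary can be opened later,
        // when the store actually needs to buffer.
        return primary->open();
      }
    };

    int main() {
      BufferStore store;
      return store.open() ? 0 : 1;
    }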
Test Plan:
testsuite
DiffCamp Revision: 115118
Reviewed By: jsong
Commenters: agiardullo
CC: agiardullo, jsong, groys, scribe-dev@lists
Revert Plan:
OK
git-svn-id: svn+ssh://tubbs/svnapps/fbomb/branches/scribe-os/fbcode/scribe@28040 2248de34-8caa-4a3c-bc55-5e52d9d7b73a

Summary:
We found that when there are problems with HDFS and an ls fails, scribe
treats this as the ls returning no files and creates the default file with
index (00000). Now this method throws an exception, and the exception is
suitably handled.
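A minimal sketch of the new behavior using hypothetical names (listFiles,
hdfsUnavailable), not the actual scribe HDFS code: a failed listing throws
instead of looking like an empty directory, so the caller never falls back
to creating _00000:

    #include <iostream>
    #include <stdexcept>
    #include <string>
    #include <vector>

    static bool hdfsUnavailable = true;  // flip to simulate an HDFS outage

    std::vector<std::string> listFiles(const std::string& path) {
      if (hdfsUnavailable) {
        // Before the fix this case silently returned an empty list, which was
        // indistinguishable from "no files yet" and led to a fresh _00000 file.
        throw std::runtime_error("ls failed for " + path);
      }
      return std::vector<std::string>();  // a genuinely empty directory
    }

    int main() {
      try {
        std::vector<std::string> files = listFiles("/scribe/foo");
        // Only a listing that genuinely returns nothing makes it safe to
        // start a new sequence at index 00000.
        std::cout << (files.empty() ? "start at _00000\n" : "continue existing sequence\n");
      } catch (const std::exception& e) {
        // Suitably handled: log the error and retry later instead of
        // clobbering the existing file sequence.
        std::cerr << "listing error, not creating a new file: " << e.what() << "\n";
      }
      return 0;
    }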
Test Plan:
Testsuite. Simulate a backup scenario and see that it works
DiffCamp Revision: 113049
Reviewed By: zshao
CC: agiardullo, zshao, groys, scribe-dev@lists
Tasks:
#206325: scribe : handle list errors from hdfs
Revert Plan:
OK
git-svn-id: svn+ssh://tubbs/svnapps/fbomb/branches/scribe-os/fbcode/scribe@27540 2248de34-8caa-4a3c-bc55-5e52d9d7b73a

Summary:
Check if the store is open before calling periodicCheck or
handleMessages.
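A minimal sketch of the guard with a hypothetical store interface (not the
actual scribe classes): stores that are not open are skipped before
periodicCheck or handleMessages is invoked:

    #include <iostream>
    #include <memory>
    #include <vector>

    struct Store {
      virtual ~Store() {}
      virtual bool isOpen() const = 0;
      virtual void periodicCheck() = 0;
    };

    struct FileStore : Store {
      bool opened;
      FileStore() : opened(false) {}
      bool isOpen() const { return opened; }
      void periodicCheck() { std::cout << "rotate/flush check\n"; }
    };

    int main() {
      std::vector<std::unique_ptr<Store> > stores;
      stores.push_back(std::unique_ptr<Store>(new FileStore));  // left closed

      for (size_t i = 0; i < stores.size(); ++i) {
        // The fix: a store that is not open gets neither periodicCheck nor
        // new messages, so it cannot keep rotating out empty files.
        if (!stores[i]->isOpen()) continue;
        stores[i]->periodicCheck();
      }
      return 0;
    }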
Test Plan:
Checked with only a network store that this problem does not occur.
Tested a bufferstore with network as primary and file as secondary.
Used the test I had for adaptive backoff, with
test/simulatebackoff/hammersource.conf on one dev machine and
test/simulatebackoff/hammersink.conf on another.
1. Started sending data to hammersource using superstress, and did not start hammersink.
2. Started hammersink; data began to get transferred.
3. Stopped hammersink for a while.
4. Started hammersink again.
Counters on the hammersource side show all the data was sent. Can't verify
on the hammersink side because it was started and stopped and started … but
it should be OK.
DiffCamp Revision: 99586
Reviewed By: jsong
Commenters: agiardullo
CC: scribe-ops@lists, agiardullo, jsong, groys, scribe-dev@lists
Tasks:
Revert Plan:
OK
git-svn-id: svn+ssh://tubbs/svnapps/fbomb/trunk/fbcode/scribe@24787 2248de34-8caa-4a3c-bc55-5e52d9d7b73a

Summary:
This fix ensures that the FileStore doesn't do small writes when it can avoid them.
Test Plan:
Printed the size of the blocks that were being written out. Noticed the
behavior mentioned in the bug: after the file reaches
DEFAULT_FILESTORE_MAX_WRITE_SIZE we start writing out small blocks. After
this fix we always write in bigger chunks.
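A standalone sketch of the batching idea (not the actual FileStore code;
kMaxWriteSize stands in for DEFAULT_FILESTORE_MAX_WRITE_SIZE): buffered
messages are packed into chunks of up to the maximum write size, so writes
stay large instead of degrading into small blocks:

    #include <cstddef>
    #include <iostream>
    #include <string>
    #include <vector>

    static const size_t kMaxWriteSize = 1 << 20;  // stand-in for DEFAULT_FILESTORE_MAX_WRITE_SIZE

    void writeBatched(const std::vector<std::string>& messages) {
      std::string chunk;
      chunk.reserve(kMaxWriteSize);

      for (size_t i = 0; i < messages.size(); ++i) {
        // Flush once the next message would push the chunk past the limit;
        // every flush is one large write instead of many tiny ones.
        if (!chunk.empty() && chunk.size() + messages[i].size() > kMaxWriteSize) {
          std::cout << "write(" << chunk.size() << " bytes)\n";
          chunk.clear();
        }
        chunk += messages[i];
      }
      if (!chunk.empty()) {
        std::cout << "write(" << chunk.size() << " bytes)\n";
      }
    }

    int main() {
      // ~5 MB of 1 KB messages: prints a handful of ~1 MB writes, no small ones.
      std::vector<std::string> messages(5000, std::string(1000, 'x'));
      writeBatched(messages);
      return 0;
    }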
DiffCamp Revision: 112337
Reviewed By: groys
CC: agiardullo, pkhemani, groys, scribe-dev@lists
Tasks:
#198720
Revert Plan:
OK
git-svn-id: svn+ssh://tubbs/svnapps/fbomb/trunk/fbcode/scribe@27287 2248de34-8caa-4a3c-bc55-5e52d9d7b73a